WorldWideScience

Sample records for weighted head-banging algorithm

  1. Head banging persisting during adolescence: A case with polysomnographic findings

    Directory of Open Access Journals (Sweden)

    Ravi Gupta

    2014-01-01

    Head banging is a sleep-related rhythmic movement disorder of unknown etiology. It is common during infancy; however, the available literature suggests that its prevalence decreases dramatically after childhood. We report the case of a 16-year-old male who presented with head banging. The symptoms were interfering with his functioning, and he had previously been injured by the behavior. We present the video-polysomnographic data of the case and discuss possible differential diagnoses, etiology, and treatment modalities. The boy was prescribed clonazepam and followed up for 3 months; his parents did not report any further episodes.

  2. An efficient algorithm for weighted PCA

    NARCIS (Netherlands)

    Krijnen, W.P.; Kiers, H.A.L.

    1995-01-01

    The method for analyzing three-way data where one of the three component matrices in TUCKALS3 is chosen to have one column is called Replicated PCA. The corresponding algorithm is relatively inefficient; this is shown by offering an alternative algorithm called Weighted PCA. Specifically, it is …

  3. Information filtering via weighted heat conduction algorithm

    Science.gov (United States)

    Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng

    2011-06-01

    In this paper, by taking into account the effects of user and object correlations on the heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weights of the user-object bipartite network should be embedded into the HC algorithm to measure object similarity. The numerical results indicate that both accuracy and diversity can be greatly improved compared with the standard HC algorithm, with the optimal values reached simultaneously. On the MovieLens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, is improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity reaches 0.9587 and 0.9317 when the length of the recommendation list equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weights change to the Poisson form, which may be the reason why the HC algorithm's performance can be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting personalized recommendation performance.
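
    To make the two-step propagation concrete, the following is a minimal Python sketch of heat conduction on a user-object bipartite graph with an edge-weight hook. It illustrates the general WHC idea only; the paper's exact edge-weight definition is not reproduced, and all names are illustrative.

```python
import numpy as np

def weighted_heat_conduction(A, W, user_idx):
    """Score unseen objects for one user via two-step heat conduction.

    A : (n_objects, n_users) 0/1 adjacency of the user-object bipartite graph.
    W : same shape, edge weights (the WHC idea: weights modulate propagation).
    Illustrative only -- the paper's exact weighting is not reproduced here.
    """
    f = A[:, user_idx].astype(float)          # initial "heat": objects the user collected
    k_usr = A.sum(axis=0)                     # user degrees
    k_obj = A.sum(axis=1)                     # object degrees
    # step 1: objects -> users (average heat over each user's collected objects)
    h_usr = (W * f[:, None]).sum(axis=0) / np.maximum(k_usr, 1)
    # step 2: users -> objects (average heat over each object's users)
    h_obj = (W * h_usr[None, :]).sum(axis=1) / np.maximum(k_obj, 1)
    h_obj[f > 0] = -np.inf                    # do not re-recommend known objects
    return np.argsort(-h_obj)                 # ranked recommendation list

# toy example: 4 objects, 3 users
A = np.array([[1, 1, 0], [1, 0, 1], [0, 1, 1], [0, 0, 1]])
W = A * 1.0                                   # unit weights reduce to standard HC
print(weighted_heat_conduction(A, W, user_idx=0))
```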

  4. Greedy algorithm with weights for decision tree construction

    KAUST Repository

    Moshkov, Mikhail

    2010-12-01

    An approximate algorithm for minimizing the weighted depth of decision trees is considered. A bound on the accuracy of this algorithm is obtained, which is unimprovable in the general case. Under some natural assumptions on the class NP, the considered algorithm is close, in terms of accuracy, to the best polynomial-time approximation algorithms for minimizing the weighted depth of decision trees.

  5. Greedy algorithm with weights for decision tree construction

    KAUST Repository

    Moshkov, Mikhail

    2010-01-01

    An approximate algorithm for minimizing the weighted depth of decision trees is considered. A bound on the accuracy of this algorithm is obtained, which is unimprovable in the general case. Under some natural assumptions on the class NP, the considered algorithm is close, in terms of accuracy, to the best polynomial-time approximation algorithms for minimizing the weighted depth of decision trees.

  6. A Coulomb collision algorithm for weighted particle simulations

    Science.gov (United States)

    Miller, Ronald H.; Combi, Michael R.

    1994-01-01

    A binary Coulomb collision algorithm is developed for weighted particle simulations employing Monte Carlo techniques. Charged particles within a given spatial grid cell are pairwise scattered, explicitly conserving momentum and implicitly conserving energy. A similar algorithm developed by Takizuka and Abe (1977) conserves momentum and energy provided the particles are unweighted (each particle representing an equal fraction of the total particle density). If it is applied as is to simulations incorporating weighted particles, the plasma temperatures equilibrate to an incorrect value compared with theory. Using the appropriate pairing statistics, a Coulomb collision algorithm is developed for weighted particles. The algorithm conserves energy and momentum and reproduces the relaxation time scales predicted by theory. Such an algorithm is necessary for future work studying self-consistent multi-species kinetic transport.

  7. Weighted Flow Algorithms (WFA) for stochastic particle coagulation

    International Nuclear Information System (INIS)

    DeVille, R.E.L.; Riemer, N.; West, M.

    2011-01-01

    Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number- and mass-based quantities of relevance to atmospheric science applications.

  8. Weighted Flow Algorithms (WFA) for stochastic particle coagulation

    Science.gov (United States)

    DeVille, R. E. L.; Riemer, N.; West, M.

    2011-09-01

    Stochastic particle-resolved methods are a useful way to compute the time evolution of the multi-dimensional size distribution of atmospheric aerosol particles. An effective approach to improve the efficiency of such models is the use of weighted computational particles. Here we introduce particle weighting functions that are power laws in particle size to the recently developed particle-resolved model PartMC-MOSAIC and present the mathematical formalism of these Weighted Flow Algorithms (WFA) for particle coagulation and growth. We apply this to an urban plume scenario that simulates a particle population undergoing emission of different particle types, dilution, coagulation and aerosol chemistry along a Lagrangian trajectory. We quantify the performance of the Weighted Flow Algorithm for number- and mass-based quantities of relevance to atmospheric science applications.

  9. A dynamic inertia weight particle swarm optimization algorithm

    International Nuclear Information System (INIS)

    Jiao Bin; Lian Zhigang; Gu Xingsheng

    2008-01-01

    The particle swarm optimization (PSO) algorithm has developed rapidly and been applied widely since its introduction, as it is easily understood and implemented. This paper presents an improved particle swarm optimization algorithm (IPSO) that improves on the performance of standard PSO by using a dynamic inertia weight that decreases as the iteration count increases. It is tested on a set of 6 benchmark functions with 30, 50 and 150 dimensions and compared with standard PSO. Experimental results indicate that IPSO significantly improves the search performance on the benchmark functions.
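
    As an illustration of the general idea, the following sketch runs PSO with an inertia weight that decays linearly over the iterations; the paper's exact schedule and parameter values are not claimed here, and the sphere benchmark is just a stand-in.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, w_max=0.9, w_min=0.4,
        c1=2.0, c2=2.0, bounds=(-5.12, 5.12), seed=0):
    """PSO with an inertia weight that decreases over the iterations
    (one simple instance of the 'dynamic inertia weight' idea; the
    paper's exact schedule may differ)."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()      # global best position
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / iters   # linearly decreasing inertia
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        val = np.apply_along_axis(f, 1, x)
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()

sphere = lambda z: float(np.sum(z * z))       # a standard benchmark function
print(pso(sphere, dim=30))
```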

  10. A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm

    Science.gov (United States)

    Shamsi, Mousa; Sedaaghi, Mohammad Hossein

    2016-01-01

    Particle swarm optimization (PSO) is an evolutionary computing method based on the intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of the PSO algorithm depends greatly on appropriate parameter-selection strategies for fine-tuning its parameters. The inertia weight (IW) is one of PSO's parameters, used to balance the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting the inertia weight, named the Flexible Exponential Inertia Weight (FEIW) strategy, because for each problem an increasing or decreasing inertia-weight schedule can be constructed through suitable parameter selection. The efficacy and efficiency of the PSO algorithm with the FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. FEIW is also compared with the best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis show that FEIW improves the search performance in terms of both solution quality and convergence rate.

  11. A Novel Flexible Inertia Weight Particle Swarm Optimization Algorithm.

    Science.gov (United States)

    Amoshahy, Mohammad Javad; Shamsi, Mousa; Sedaaghi, Mohammad Hossein

    2016-01-01

    Particle swarm optimization (PSO) is an evolutionary computing method based on the intelligent collective behavior of some animals. It is easy to implement and there are few parameters to adjust. The performance of the PSO algorithm depends greatly on appropriate parameter-selection strategies for fine-tuning its parameters. The inertia weight (IW) is one of PSO's parameters, used to balance the exploration and exploitation characteristics of PSO. This paper proposes a new nonlinear strategy for selecting the inertia weight, named the Flexible Exponential Inertia Weight (FEIW) strategy, because for each problem an increasing or decreasing inertia-weight schedule can be constructed through suitable parameter selection. The efficacy and efficiency of the PSO algorithm with the FEIW strategy (FEPSO) is validated on a suite of benchmark problems with different dimensions. FEIW is also compared with the best time-varying, adaptive, constant and random inertia weights. Experimental results and statistical analysis show that FEIW improves the search performance in terms of both solution quality and convergence rate.
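
    For a concrete picture of an exponential inertia-weight schedule of this flavor, here is a small illustrative function; it is not the published FEIW formula, and the `alpha` curvature parameter is an assumed name. It could replace the linear `w` schedule in the PSO sketch shown earlier.

```python
import math

def exponential_iw(t, t_max, w_start=0.9, w_end=0.4, alpha=3.0):
    """Exponential inertia-weight schedule: moves from w_start to w_end
    along an exponential curve; with w_start < w_end it increases instead.
    Illustrative of the FEIW idea only; the published formula differs in
    detail. alpha controls the curvature (an assumed parameter)."""
    frac = (1.0 - math.exp(-alpha * t / t_max)) / (1.0 - math.exp(-alpha))
    return w_start + (w_end - w_start) * frac

for t in (0, 50, 100):
    print(t, round(exponential_iw(t, 100), 3))   # 0.9 -> ... -> 0.4
```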

  12. A Weight-Aware Recommendation Algorithm for Mobile Multimedia Systems

    Directory of Open Access Journals (Sweden)

    Pedro M. P. Rosa

    2013-01-01

    In recent years, information overload has become a common reality; the average user, confronted with thousands of potentially interesting items, has great difficulty identifying the ones that can guide his or her daily choices, such as concerts, restaurants, sports gatherings, or cultural events. The current growth of mobile smartphones and tablets with embedded GPS receivers, Internet access, cameras, and accelerometers offers new opportunities for mobile ubiquitous multimedia applications that help gather the best information out of an ever-growing list of possibly good ones. This paper presents a mobile recommendation system for events, based on a few weighted context-aware data-fusion algorithms that combine several multimedia sources. A demonstrative deployment used relevance signals such as location data, user habits, and user sharing statistics, and data-fusion algorithms such as the classical CombSUM and CombMNZ, in both simple and weighted forms. Still, the developed methodology is generic and can be extended to other relevance signals, both direct (background noise volume) and indirect (local temperature extrapolated from GPS coordinates via a Web service), and to other data-fusion techniques. To experiment with, demonstrate, and evaluate the performance of the different algorithms, the proposed system was deployed as a working mobile application providing real-time awareness-based information on local events and news.

  13. Unwinding the hairball graph: Pruning algorithms for weighted complex networks

    Science.gov (United States)

    Dianati, Navid

    2016-01-01

    Empirical networks of weighted dyadic relations often contain "noisy" edges that alter the global characteristics of the network and obfuscate the most important structures therein. Graph pruning is the process of identifying the most significant edges according to a generative null model and extracting the subgraph consisting of those edges. Here, we focus on integer-weighted graphs commonly arising when weights count the occurrences of an "event" relating the nodes. We introduce a simple and intuitive null model related to the configuration model of network generation and derive two significance filters from it: the marginal likelihood filter (MLF) and the global likelihood filter (GLF). The former is a fast algorithm assigning a significance score to each edge based on the marginal distribution of edge weights, whereas the latter is an ensemble approach which takes into account the correlations among edges. We apply these filters to the network of air traffic volume between US airports and recover a geographically faithful representation of the graph. Furthermore, compared with thresholding based on edge weight, we show that our filters extract a larger and significantly sparser giant component.
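
    A minimal sketch of the marginal-likelihood-filter idea: score each edge by the binomial tail probability of its weight under a configuration-style null. The null probability used below (k_i k_j / 2T^2 per unit of weight) is one plausible reading of such a model, not a verbatim transcription of the paper.

```python
from math import comb

def binom_sf(w, T, p):
    """P[X >= w] for X ~ Binomial(T, p) (exact tail sum; fine for small T)."""
    return sum(comb(T, k) * p**k * (1 - p)**(T - k) for k in range(w, T + 1))

def marginal_likelihood_filter(edges, alpha=0.05):
    """Keep edges whose integer weight is improbably large under a
    configuration-style null. `edges` maps (i, j) -> weight."""
    T = sum(edges.values())                      # total event count
    k = {}                                       # weighted node degrees
    for (i, j), w in edges.items():
        k[i] = k.get(i, 0) + w
        k[j] = k.get(j, 0) + w
    kept = {}
    for (i, j), w in edges.items():
        p = k[i] * k[j] / (2 * T * T)            # assumed null probability
        if binom_sf(w, T, p) < alpha:            # significance = tail p-value
            kept[(i, j)] = w
    return kept

edges = {("a", "b"): 10, ("a", "c"): 1, ("b", "c"): 1, ("c", "d"): 1}
print(marginal_likelihood_filter(edges))         # only ('a', 'b') survives
```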

  14. Weight optimization of plane truss using genetic algorithm

    Science.gov (United States)

    Neeraja, D.; Kamireddy, Thejesh; Santosh Kumar, Potnuru; Simha Reddy, Vijay

    2017-11-01

    Optimizing a structure on the basis of weight has practical benefits in every engineering field: efficiency is closely related to weight, so weight optimization gains prime importance. In civil engineering, weight-optimized structural elements are economical and easier to transport to the site. In this study, a genetic optimization algorithm for the weight optimization of steel trusses, considering shape, size and topology, was developed in MATLAB. Material strength and buckling stability requirements were adopted from the IS 800-2007 code for construction steel. The constraints considered in the present study are fabrication, basic nodes, displacements, and compatibility. The genetic algorithm is a natural-selection search technique intended to combine good solutions to a problem over many generations to improve the results; all solutions are generated randomly and each is represented by a binary string, in analogy with natural chromosomes. The outcome of the study is a MATLAB program which can optimize a steel truss and display the optimized topology along with element shapes, deflections, and stress results.

  15. A Performance Weighted Collaborative Filtering algorithm for personalized radiology education.

    Science.gov (United States)

    Lin, Hongli; Yang, Xuedong; Wang, Weisheng; Luo, Jiawei

    2014-10-01

    Devising an accurate prediction algorithm that can predict the difficulty level of cases for individuals, and then select suitable cases for them, is essential to the development of a personalized training system. In this paper, we propose a novel approach, called Performance Weighted Collaborative Filtering (PWCF), to predict the difficulty level of each case for individuals. The main idea of PWCF is to assign an optimal weight to each rating used for predicting the difficulty level of a target case for a trainee, rather than using an equal weight for all ratings as in traditional collaborative filtering methods. The assigned weight is a function of the performance level of the trainee at which the rating was made. The PWCF method and the traditional method are compared using two datasets, and the experimental results are evaluated by means of the MAE metric. Our results show that PWCF outperforms the traditional methods by 8.12% and 17.05%, respectively, over the two datasets, in terms of prediction precision. This suggests that PWCF is a viable method for the development of personalized training systems in radiology education.
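
    The core of the idea can be sketched as a weighted average in which each rating's weight depends on the rater's performance level; the inverse-distance kernel below is an assumed stand-in for the paper's learned, optimal weight function.

```python
def pwcf_predict(ratings, performance, target_perf):
    """Predict a case's difficulty for a trainee as a weighted average of
    other trainees' difficulty ratings, weighting each rating by how close
    the rater's performance level is to the target trainee's.
    ratings: {rater: rating}; performance: {rater: performance level}."""
    num = den = 0.0
    for rater, r in ratings.items():
        w = 1.0 / (1.0 + abs(performance[rater] - target_perf))  # assumed kernel
        num += w * r
        den += w
    return num / den if den else None

ratings = {"t1": 4, "t2": 2, "t3": 5}
performance = {"t1": 0.9, "t2": 0.3, "t3": 0.8}
print(round(pwcf_predict(ratings, performance, target_perf=0.85), 2))
```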

  16. Dynamic Inertia Weight Binary Bat Algorithm with Neighborhood Search

    Directory of Open Access Journals (Sweden)

    Xingwang Huang

    2017-01-01

    The binary bat algorithm (BBA) is a binary version of the bat algorithm (BA). It has been proven that BBA is competitive compared with other binary heuristic algorithms. Since the velocity update process of the algorithm is the same as in BA, in some cases this algorithm also faces the premature convergence problem. This paper proposes an improved binary bat algorithm (IBBA) to solve this problem. To evaluate the performance of IBBA, standard benchmark functions and zero-one knapsack problems were employed. The numerical results of the benchmark-function experiments show that the proposed approach greatly outperforms the original BBA and binary particle swarm optimization (BPSO). Comparison with several other heuristic algorithms on zero-one knapsack problems also verifies that the proposed algorithm is better able to avoid local minima.

  17. The improved Apriori algorithm based on matrix pruning and weight analysis

    Science.gov (United States)

    Lang, Zhenhong

    2018-04-01

    This paper draws on matrix compression and weight analysis and proposes an improved Apriori algorithm based on matrix pruning and weight analysis. After the transactional database is scanned only once, the algorithm constructs a boolean transaction matrix. By counting the ones in the rows and columns of the matrix, infrequent itemsets are pruned and a new candidate itemset is formed. Then the item weights, the transaction weights, and the weighted support of items are calculated, yielding the frequent itemsets. The experimental results show that the improved Apriori algorithm not only reduces the number of repeated scans of the database, but also improves the efficiency of data correlation mining.
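
    A sketch of the matrix-pruning step under the stated setup (one database scan, a boolean transaction matrix, pruning by counts of ones); the item/transaction weighting and weighted support of the full algorithm are omitted.

```python
import numpy as np

def matrix_prune_frequent_items(transactions, min_support):
    """Single scan builds a boolean transaction matrix; column sums give
    item supports, so infrequent 1-itemsets are pruned without rescanning
    the database."""
    items = sorted({i for t in transactions for i in t})
    M = np.array([[int(i in t) for i in items] for t in transactions], dtype=bool)
    support = M.sum(axis=0)                       # count of 1s per column
    keep = support >= min_support
    frequent = {items[j]: int(support[j]) for j in np.flatnonzero(keep)}
    M = M[:, keep]                                # prune infrequent columns
    M = M[M.sum(axis=1) >= 2]                     # rows too short for 2-itemsets
    return frequent, M

transactions = [{"a", "b", "c"}, {"a", "b"}, {"b", "c"}, {"a"}]
print(matrix_prune_frequent_items(transactions, min_support=2))
```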

  18. Algorithm for the generation of nuclear spin species and nuclear spin statistical weights

    International Nuclear Information System (INIS)

    Balasubramanian, K.

    1982-01-01

    A set of algorithms for the computer generation of nuclear spin species and nuclear spin statistical weights, potentially useful in molecular spectroscopy, is developed. These algorithms generate the nuclear spin species from group structures known as generalized character cycle indices (GCCIs). Thus the required input for these algorithms is just the set of all GCCIs for the symmetry group of the molecule, which can be computed easily from the character table. The algorithms are executed and illustrated with examples.

  19. A new hybrid-FBP inversion algorithm with inverse distance backprojection weight for CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Narasimhadhan, A.V.; Rajgopal, Kasi

    2011-07-01

    This paper presents a new hybrid filtered backprojection (FBP) algorithm for fan-beam and cone-beam scans. The hybrid reconstruction kernel is the sum of the ramp and Hilbert filters. We modify the redundancy weighting function to reduce the inverse-square-distance weighting in the backprojection to an inverse-distance weight. The modified weight also eliminates the derivative associated with the Hilbert filter kernel. Thus, the proposed reconstruction algorithm has the advantages of the inverse-distance weight in the backprojection. We evaluate the performance of the new algorithm in terms of the magnitude level and uniformity of the noise for the fan-beam geometry. The computer simulations show that the spatial resolution is nearly identical to that of the standard fan-beam ramp-filtered algorithm, while the noise is spatially uniform and the noise variance is reduced. (orig.)

  20. A recurrence-weighted prediction algorithm for musical analysis

    Science.gov (United States)

    Colucci, Renato; Leguizamon Cucunuba, Juan Sebastián; Lloyd, Simon

    2018-03-01

    Forecasting the future behaviour of a system using past data is an important topic. In this article we apply nonlinear time series analysis in the context of music and present new algorithms for extending a sample of music while maintaining characteristics similar to the original piece. Using ideas from ergodic theory, we adapt the classical prediction method of Lorenz analogues to take recurrence times into account, and demonstrate with examples how the new algorithm can produce predictions with a high degree of similarity to the original sample.

  1. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé

    2013-02-01

    Given a set of n points in the plane, each having a positive weight, and an integer k > 0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(n log n)-time algorithm for computing a k-center of a set of n weighted points on the real line.
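
    The optimal algorithm rests on parametric search, but the underlying decision procedure is simple: greedily check how many steps are needed for a given tolerance eps. The sketch below pairs that procedure with a plain binary search, a deliberately simpler substitute for Cole's technique.

```python
def steps_needed(points, eps):
    """Greedy decision procedure: minimum number of steps so that every point
    (x, y, w), scanned in x-order, satisfies |step - y| * w <= eps."""
    count, lo, hi = 0, float("-inf"), float("inf")
    for _, y, w in sorted(points):
        l, h = y - eps / w, y + eps / w
        lo, hi = max(lo, l), min(hi, h)
        if lo > hi:                    # no single value fits; open a new step
            count += 1
            lo, hi = l, h
    return count + 1

def fit_step_function(points, k, tol=1e-9):
    """Binary search on eps over the decision procedure above: a simple
    O(n log(1/tol)) substitute for the O(n log n) parametric search,
    kept here for clarity rather than optimality."""
    ys = [y for _, y, _ in points]
    wmax = max(w for _, _, w in points)
    lo, hi = 0.0, wmax * (max(ys) - min(ys)) + 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if steps_needed(points, mid) <= k:
            hi = mid
        else:
            lo = mid
    return hi

pts = [(0, 1.0, 1.0), (1, 2.0, 2.0), (2, 0.5, 1.0), (3, 3.0, 1.0)]
print(round(fit_step_function(pts, k=2), 3))   # optimal max weighted distance
```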

  2. Algorithm for Optimizing Bipolar Interconnection Weights with Applications in Associative Memories and Multitarget Classification

    Science.gov (United States)

    Chang, Shengjiang; Wong, Kwok-Wo; Zhang, Wenwei; Zhang, Yanxin

    1999-08-01

    An algorithm for optimizing a bipolar interconnection weight matrix with the Hopfield network is proposed. The effectiveness of this algorithm is demonstrated by computer simulation and optical implementation. In the optical implementation of the neural network, the interconnection weights are biased to yield a nonnegative weight matrix, and a threshold subchannel is added so that the system can realize, in real time, the bipolar weighted summation in a single channel. Preliminary experimental results obtained from applications in associative memories and multitarget classification with rotation invariance are shown.

  3. A curvature-based weighted fuzzy c-means algorithm for point clouds de-noising

    Science.gov (United States)

    Cui, Xin; Li, Shipeng; Yan, Xiutian; He, Xinhua

    2018-04-01

    In order to remove noise from three-dimensional scattered point clouds and smooth the data without damaging sharp geometric features, a novel algorithm is proposed in this paper. A feature-preserving weight is added to the fuzzy c-means algorithm, yielding a curvature-weighted fuzzy c-means clustering algorithm. First, large-scale outliers are removed using statistics of the neighboring points within radius r. Then, the algorithm estimates the curvature of the point cloud data by fitting a conicoid (paraboloid) surface and calculates a curvature feature value. Finally, the proposed clustering algorithm is applied to calculate the weighted cluster centers, which are taken as the new points. The experimental results show that this approach handles noise of different scales and intensities in point clouds with high precision while preserving features, and that it is robust to different noise models.
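
    A minimal sketch of fuzzy c-means with per-point weights entering the centre update; here the weights stand in for the curvature-derived feature values, and the paper's specific weighting is not reproduced.

```python
import numpy as np

def weighted_fcm(X, w, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means where each point carries a weight that biases the
    cluster-centre update toward high-weight points.
    X: (n, d) points; w: (n,) positive weights."""
    rng = np.random.default_rng(seed)
    U = rng.random((c, len(X)))
    U /= U.sum(axis=0)                            # memberships, columns sum to 1
    for _ in range(iters):
        Um = U ** m
        centers = (Um * w) @ X / (Um * w).sum(axis=1, keepdims=True)
        d = np.linalg.norm(X[None, :, :] - centers[:, None, :], axis=2) + 1e-12
        U = 1.0 / d ** (2 / (m - 1))
        U /= U.sum(axis=0)                        # standard FCM membership update
    return centers, U

X = np.array([[0, 0], [0.1, 0], [5, 5], [5.1, 5]])
w = np.ones(len(X))                               # unit weights = plain FCM
print(weighted_fcm(X, w)[0])                      # two centres near the clusters
```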

  4. A simple algorithm for computing positively weighted straight skeletons of monotone polygons

    Science.gov (United States)

    Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter

    2015-01-01

    We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon.

  5. A simple algorithm for computing positively weighted straight skeletons of monotone polygons.

    Science.gov (United States)

    Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter

    2015-02-01

    We study the characteristics of straight skeletons of monotone polygonal chains and use them to devise an algorithm for computing positively weighted straight skeletons of monotone polygons. Our algorithm runs in O(n log n) time and O(n) space, where n denotes the number of vertices of the polygon.

  6. Fast weighted centroid algorithm for single particle localization near the information limit.

    Science.gov (United States)

    Fish, Jeremie; Scrimgeour, Jan

    2015-07-10

    A simple weighting scheme that enhances the localization precision of center of mass calculations for radially symmetric intensity distributions is presented. The algorithm effectively removes the biasing that is common in such center of mass calculations. Localization precision compares favorably with other localization algorithms used in super-resolution microscopy and particle tracking, while significantly reducing the processing time and memory usage. We expect that the algorithm presented will be of significant utility when fast computationally lightweight particle localization or tracking is desired.
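
    The essence of the approach, a weighted centre of mass in which a nonlinear intensity weighting suppresses background bias, can be sketched in a few lines; the power-law weighting below is illustrative, not the paper's exact scheme.

```python
import numpy as np

def weighted_centroid(img, power=2.0):
    """Centre-of-mass localization with intensity weighting: raising pixel
    values to a power > 1 suppresses the bias that a uniform background
    introduces into a plain centroid."""
    wimg = np.clip(img.astype(float), 0, None) ** power
    total = wimg.sum()
    ys, xs = np.indices(img.shape)
    return (wimg * ys).sum() / total, (wimg * xs).sum() / total

spot = np.zeros((9, 9))
spot[4, 5] = 10.0                    # bright pixel off-centre
spot += 0.1                          # background pulls a plain centroid to (4, 4)
print(weighted_centroid(spot))       # close to (4, 5) despite the background
```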

  7. A deterministic algorithm for fitting a step function to a weighted point-set

    KAUST Repository

    Fournier, Hervé; Vigneron, Antoine E.

    2013-01-01

    Given a set of n points in the plane, each having a positive weight, and an integer k > 0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance …

  8. The impact of different algorithms for ideal body weight on screening for hydroxychloroquine retinopathy in women

    Directory of Open Access Journals (Sweden)

    Browning DJ

    2014-07-01

    Purpose: To determine how algorithms for ideal body weight (IBW) affect hydroxychloroquine dosing in women. Methods: This was a retrospective study of 520 patients screened for hydroxychloroquine retinopathy. Charts were reviewed for sex, height, weight, and daily dose. The outcome measures were the ranges of IBW across algorithms; rates of potentially toxic dosing; height thresholds below which 400 mg/d dosing is potentially toxic; and the rates at which actual body weight (ABW) was less than IBW. Results: Women made up 474 (91%) of the patients. The IBWs for a given height varied by 30–34 pounds (13.6–15.5 kg) across algorithms. The threshold heights below which toxic dosing occurred varied from 62–70 inches (157.5–177.8 cm). Different algorithms placed 16%–98% of women in the toxic dosing range. The proportion for whom dosing should have been based on ABW rather than IBW ranged from 5%–31% across algorithms. Conclusion: Although hydroxychloroquine dosing should be based on the lesser of ABW and IBW, there is no consensus on the definition of IBW. The Michaelides algorithm is associated with the most frequent need to adjust dosing; the Metropolitan Life Insurance large-frame mean-value table with the least frequent. No evidence indicates that one algorithm is superior to the others. Keywords: hydroxychloroquine, ideal body weight, actual body weight, toxicity, retinopathy, algorithms
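
    To make the dosing rule concrete, here is a sketch using one well-known IBW algorithm (the Devine formula for women) and the traditional 6.5 mg/kg/day hydroxychloroquine ceiling; both the choice of formula and the ceiling value are assumptions for illustration, since the study's point is precisely that IBW algorithms disagree.

```python
def ideal_body_weight_devine(height_in):
    """Devine formula for women, one of several IBW algorithms (chosen here
    only as a concrete example): 45.5 kg + 2.3 kg per inch over 5 feet."""
    return 45.5 + 2.3 * max(height_in - 60, 0)

def max_safe_hcq_dose(height_in, actual_weight_kg, mg_per_kg=6.5):
    """Daily hydroxychloroquine ceiling at mg_per_kg of the lesser of actual
    and ideal body weight (the dosing rule stated in the abstract; the
    6.5 mg/kg value is the traditional guideline figure, assumed here)."""
    dosing_weight = min(actual_weight_kg, ideal_body_weight_devine(height_in))
    return mg_per_kg * dosing_weight

# a 62-inch woman weighing 70 kg: IBW (50.1 kg) governs, and the ceiling is
# about 326 mg/day, so a standard 400 mg/day dose would be potentially toxic
print(round(max_safe_hcq_dose(62, 70.0), 1))
```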

  9. An improved version of Inverse Distance Weighting metamodel assisted Harmony Search algorithm for truss design optimization

    Directory of Open Access Journals (Sweden)

    Y. Gholipour

    This paper focuses on a metamodel-based design optimization algorithm, with the intention of improving its computational cost and convergence rate. The metamodel-based optimization method introduced here provides the means to reduce the computational cost of the optimization, and to improve its convergence rate, through a surrogate. The algorithm is a combination of a high-quality approximation technique called Inverse Distance Weighting and a meta-heuristic algorithm called Harmony Search, and its outcome is then polished by a semi-tabu search algorithm. The algorithm adopts a filtering system and determines the solution vectors for which exact simulation should be applied. The performance of the algorithm is evaluated on standard truss design problems, showing a significant decrease in computational effort and an improved convergence rate.

  10. Weighted expectation maximization reconstruction algorithms with application to gated megavoltage tomography

    International Nuclear Information System (INIS)

    Zhang Jin; Shi Daxin; Anastasio, Mark A; Sillanpaa, Jussi; Chang Jenghwa

    2005-01-01

    We propose and investigate weighted expectation maximization (EM) algorithms for image reconstruction in x-ray tomography. The development of the algorithms is motivated by the respiratory-gated megavoltage tomography problem, in which the acquired asymmetric cone-beam projections are limited in number and unevenly sampled over view angle. In these cases, images reconstructed by use of the conventional EM algorithm can contain ring- and streak-like artefacts that are attributable to a combination of data inconsistencies and truncation of the projection data. By use of computer-simulated and clinical gated fan-beam megavoltage projection data, we demonstrate that the proposed weighted EM algorithms effectively mitigate such image artefacts. (note)

  11. A digital combining-weight estimation algorithm for broadband sources with the array feed compensation system

    Science.gov (United States)

    Vilnrotter, V. A.; Rodemich, E. R.

    1994-01-01

    An algorithm for estimating the optimum combining weights for the Ka-band (33.7-GHz) array feed compensation system was developed and analyzed. The input signal is assumed to be broadband radiation of thermal origin, generated by a distant radio source. Currently, seven video converters operating in conjunction with the real-time correlator are used to obtain these weight estimates. The algorithm described here requires only simple operations that can be implemented on a PC-based combining system, greatly reducing the amount of hardware. Therefore, system reliability and portability will be improved.

  12. Weighted-Bit-Flipping-Based Sequential Scheduling Decoding Algorithms for LDPC Codes

    Directory of Open Access Journals (Sweden)

    Qing Zhu

    2013-01-01

    Low-density parity-check (LDPC) codes can be applied in many different scenarios, such as video broadcasting and satellite communications. LDPC codes are commonly decoded by an iterative algorithm called belief propagation (BP) over the corresponding Tanner graph. The original BP updates all the variable nodes simultaneously, followed by all the check nodes simultaneously. We propose a sequential scheduling algorithm based on the weighted bit-flipping (WBF) algorithm in order to improve the convergence speed. WBF is a notably simple, low-complexity algorithm; we combine it with BP to obtain the advantages of both. The flipping function used in WBF is borrowed to determine the scheduling priority. Simulation results show that the approach provides a good tradeoff between FER performance and computational complexity for short-length LDPC codes.
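
    A sketch of one common WBF flipping function, where each failed or satisfied check votes on a bit with a weight given by the least reliable bit participating in that check; the paper reuses such scores to order a sequential BP schedule, and its exact variant may differ.

```python
import numpy as np

def wbf_flip_scores(H, y_hard, reliability):
    """Weighted bit-flipping scores: for each bit, sum over its checks of
    (+1 if the check fails, -1 if it is satisfied) scaled by the smallest
    channel reliability among the bits in that check. The bit with the
    largest score is the first candidate to flip (or to schedule)."""
    syndrome = (H @ y_hard) % 2                       # failed checks have 1s
    w = np.array([reliability[H[m] == 1].min() for m in range(H.shape[0])])
    signs = 2 * syndrome - 1                          # +1 failed, -1 satisfied
    return (H * (signs * w)[:, None]).sum(axis=0)

H = np.array([[1, 1, 0, 1], [0, 1, 1, 0], [1, 0, 1, 1]])
y_hard = np.array([1, 0, 1, 0])
reliability = np.array([0.9, 0.2, 0.8, 0.7])
# bit 1 (low reliability, in both failed checks) gets the largest score
print(wbf_flip_scores(H, y_hard, reliability))
```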

  13. A collaborative filtering recommendation algorithm based on weighted SimRank and social trust

    Science.gov (United States)

    Su, Chang; Zhang, Butao

    2017-05-01

    Collaborative filtering is one of the most widely used recommendation technologies, but the data sparsity and cold-start problems of collaborative filtering algorithms are difficult to solve effectively. In order to alleviate the data sparsity problem, we first propose a weighted, improved SimRank algorithm to compute the rating similarity between users in the rating data set; the improved SimRank can find more nearest neighbors for target users thanks to the transitivity of rating similarity. We then build a trust network and introduce the calculation of trust degree from the trust relationship data set. Finally, we combine rating similarity and trust into a comprehensive similarity in order to find more appropriate nearest neighbors for the target user. Experimental results show that the proposed algorithm effectively improves the recommendation precision of collaborative filtering.

  14. A novel orthoimage mosaic method using the weighted A* algorithm for UAV imagery

    Science.gov (United States)

    Zheng, Maoteng; Zhou, Shunping; Xiong, Xiaodong; Zhu, Junfeng

    2017-12-01

    A weighted A* algorithm is proposed to select optimal seam-lines in orthoimage mosaicking for UAV (unmanned aerial vehicle) imagery. The workflow has four steps: the initial seam-line network is first generated by a standard Voronoi diagram algorithm; an edge diagram is then detected based on DSM (digital surface model) data; the vertices (conjunction nodes) of the initial network are relocated if they fall on high objects (buildings, trees and other artificial structures); and the initial seam-lines are finally refined using the weighted A* algorithm, based on the edge diagram and the relocated vertices. The method was tested on two real UAV datasets. Preliminary results show that the proposed method produces acceptable mosaic images in both urban and mountainous areas, and is better than the state-of-the-art methods on these datasets.
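
    The refinement step can be pictured as weighted A* over a cost grid in which high costs mark buildings and strong edges; the sketch below shows the generic weighted A* search (f = g + w*h) on such a grid, with the seam-line-specific cost terms reduced to a single matrix of assumed values.

```python
import heapq

def weighted_astar(grid_cost, start, goal, w=1.5):
    """Weighted A* on a 4-connected grid: f = g + w * h with w > 1 trades
    optimality for speed. grid_cost[r][c] is the cost of stepping onto a cell."""
    rows, cols = len(grid_cost), len(grid_cost[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_heap, g, came = [(w * h(start), start)], {start: 0.0}, {}
    while open_heap:
        _, cur = heapq.heappop(open_heap)
        if cur == goal:                       # rebuild the path start -> goal
            path = [cur]
            while cur in came:
                cur = came[cur]
                path.append(cur)
            return path[::-1]
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                ng = g[cur] + grid_cost[nxt[0]][nxt[1]]
                if ng < g.get(nxt, float("inf")):
                    g[nxt], came[nxt] = ng, cur
                    heapq.heappush(open_heap, (ng + w * h(nxt), nxt))
    return None

cost = [[1, 1, 1], [1, 9, 1], [1, 1, 1]]      # high cost = building/edge penalty
print(weighted_astar(cost, (0, 0), (2, 2)))   # path routes around the centre
```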

  15. A novel orthoimage mosaic method using a weighted A∗ algorithm - Implementation and evaluation

    Science.gov (United States)

    Zheng, Maoteng; Xiong, Xiaodong; Zhu, Junfeng

    2018-04-01

    The implementation and evaluation of a weighted A∗ algorithm for orthoimage mosaicking with UAV (unmanned aerial vehicle) imagery is presented. The initial seam-line network is first generated by a standard Voronoi diagram algorithm; an edge diagram is generated based on DSM (digital surface model) data; the vertices (conjunction nodes of seam-lines) of the initial network are relocated if they are on high objects (buildings, trees and other artificial structures); and the initial seam-lines are refined using the weighted A∗ algorithm, based on the edge diagram and the relocated vertices. Our method was tested on three real UAV datasets, and two quantitative measures are introduced to evaluate the results. Preliminary results show that the method is suitable for regularly and irregularly aligned UAV images over most terrain types (flat or mountainous areas), and is better than the state-of-the-art method in both quality and efficiency on the test datasets.

  16. Can individualized weight monitoring using the HeartPhone algorithm improve sensitivity for clinical deterioration of heart failure?

    LENUS (Irish Health Repository)

    Ledwidge, Mark T

    2013-04-01

    Previous studies have demonstrated the poor sensitivity of guideline weight monitoring in predicting clinical deterioration of heart failure (HF). This study aimed to evaluate patterns of remotely transmitted daily weights in a high-risk HF population and to compare guideline weight monitoring with an individualized weight monitoring algorithm.

  17. Identification of Protein Complexes Using Weighted PageRank-Nibble Algorithm and Core-Attachment Structure.

    Science.gov (United States)

    Peng, Wei; Wang, Jianxin; Zhao, Bihai; Wang, Lusheng

    2015-01-01

    Protein complexes play a significant role in understanding the underlying mechanisms of most cellular functions. Recently, many researchers have explored computational methods to identify protein complexes from protein-protein interaction (PPI) networks. One group of researchers focuses on detecting locally dense subgraphs, which correspond to protein complexes, by considering local neighbors; the drawback of this kind of approach is that the global information of the network is ignored. Some methods, such as the Markov Clustering algorithm (MCL) and PageRank-Nibble, find protein complexes using random-walk techniques that can exploit the global structure of networks. However, these methods ignore the inherent core-attachment structure of protein complexes and treat all adjacent nodes equally. In this paper, we design a weighted PageRank-Nibble algorithm which assigns each adjacent node a different probability, and propose a novel method named WPNCA to detect protein complexes from PPI networks using the weighted PageRank-Nibble algorithm and the core-attachment structure. First, WPNCA partitions the PPI network into multiple dense clusters using the weighted PageRank-Nibble algorithm. Then the cores of these clusters are detected, and the remaining proteins in the clusters are selected as attachments to form the final predicted protein complexes. Experiments on yeast data show that WPNCA outperforms the existing methods in terms of both accuracy and p-value. The software for WPNCA is available at http://netlab.csu.edu.cn/bioinfomatics/weipeng/WPNCA/download.html.

  18. Fuzzy Weight Cluster-Based Routing Algorithm for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Teng Gao

    2015-01-01

    Cluster-based protocols are an important kind of routing in wireless sensor networks. However, due to the uneven distribution of cluster heads in classical clustering algorithms, some nodes may run out of energy too early, which is not suitable for large-scale wireless sensor networks. In this paper, a distributed clustering algorithm based on fuzzy weighted attributes is put forward to ensure both energy efficiency and extensibility. On the premise of a comprehensive consideration of all attributes, the corresponding weight of each parameter is assigned using the direct method of fuzzy engineering theory. Each node then works out its property value; these property values are mapped to the time axis and trigger a timer to broadcast cluster-head announcements. At the same time, a radio coverage method is adopted to avoid collisions and to ensure a symmetrical distribution of cluster heads. The aggregated data are forwarded to the sink node in multihop fashion. The simulation results demonstrate that the clustering algorithm based on fuzzy weighted attributes has a longer life expectancy and better extensibility than LEACH-like algorithms.

  19. [Application of an Adaptive Inertia Weight Particle Swarm Algorithm in the Magnetic Resonance Bias Field Correction].

    Science.gov (United States)

    Wang, Chang; Qin, Xin; Liu, Yan; Zhang, Wenchao

    2016-06-01

    An adaptive inertia weight particle swarm algorithm is proposed in this study to solve the local-optimum problem of traditional particle swarm optimization in estimating the magnetic resonance (MR) image bias field. An indicator measuring the degree of premature convergence was designed to address this defect of the traditional particle swarm optimization algorithm. The inertia weight was adjusted adaptively based on this indicator to ensure that the particle swarm is optimized globally and to avoid it falling into a local optimum. A Legendre polynomial was used to fit the bias field, the polynomial parameters were optimized globally, and finally the bias field was estimated and corrected. Compared with the improved entropy minimization algorithm, the entropy of the corrected image was smaller and the estimated bias field was more accurate. The corrected image was then segmented, and the segmentation accuracy obtained in this research was 10% higher than that of the improved entropy minimization algorithm. This algorithm can be applied to the correction of MR image bias fields.

  20. A Line-Based Adaptive-Weight Matching Algorithm Using Loopy Belief Propagation

    Directory of Open Access Journals (Sweden)

    Hui Li

    2015-01-01

    In traditional adaptive-weight stereo matching, the rectangular support region requires excessive memory and computation time. We propose a novel line-based stereo matching algorithm for obtaining a more accurate disparity map with low computational complexity. The algorithm can be divided into two steps: disparity map initialization and disparity map refinement. In the initialization step, a new adaptive-weight model based on a linear support region is put forward for cost aggregation. In this model, a neural network is used to evaluate spatial proximity, and the mean-shift segmentation method is used to improve the accuracy of color similarity; the Birchfield pixel dissimilarity function and the census transform are adopted to establish the dissimilarity measure. The initial disparity map is then obtained by loopy belief propagation. In the refinement step, the disparity map is optimized by an iterative left-right consistency check and a segmentation voting method. The parameter values involved in this algorithm were determined through extensive simulation experiments to further improve the matching quality. Simulation results indicate that the new matching method performs well on standard stereo benchmarks, and that its running time is remarkably lower than that of algorithms with rectangular support regions.

  1. An Algorithm for the Weighted Earliness-Tardiness Unconstrained Project Scheduling Problem

    Science.gov (United States)

    Afshar Nadjafi, Behrouz; Shadrokh, Shahram

    This research considers a project scheduling problem with the objective of minimizing weighted earliness-tardiness penalty costs, taking into account a deadline for the project and precedence relations among the activities. An exact recursive method has been proposed for solving the basic form of this problem. We present a new depth-first branch and bound algorithm for the extended form of the problem, in which the time value of money is taken into account by discounting the cash flows. The algorithm is extended with two bounding rules in order to reduce the size of the branch and bound tree. Finally, some test problems are solved and computational results are reported.

  2. Algorithms for the optimization of RBE-weighted dose in particle therapy.

    Science.gov (United States)

    Horcicka, M; Meyer, C; Buschbacher, A; Durante, M; Krämer, M

    2013-01-21

    We report on various algorithms used for the nonlinear optimization of RBE-weighted dose in particle therapy. For the dose calculation, carbon ions are considered and biological effects are calculated with the Local Effect Model. Taking biological effects fully into account requires iterative methods to solve the optimization problem. We implemented several additional algorithms in GSI's treatment planning system TRiP98, such as the BFGS algorithm and the method of conjugate gradients, in order to investigate their computational performance, and we modified textbook iteration procedures to improve the convergence speed. The performance of the algorithms is presented in terms of convergence iterations and computation time. We found that the Fletcher-Reeves variant of the method of conjugate gradients has the best computational performance; with this algorithm we could speed up computation times by a factor of 4 compared with the method of steepest descent, which was used before. With our new methods it is possible to optimize complex treatment plans in a few minutes, leading to good dose distributions. Finally, we discuss future goals concerning dose optimization in particle therapy that might benefit from fast optimization solvers.

  3. A Generalized Dynamic Composition Algorithm of Weighted Finite State Transducers for Large Vocabulary Speech Recognition

    OpenAIRE

    Cheng, Octavian; Dines, John; Magimai.-Doss, Mathew

    2006-01-01

    We propose a generalized dynamic composition algorithm for weighted finite state transducers (WFST), which avoids the creation of non-coaccessible paths, performs weight look-ahead and does not impose any constraints on the topology of the WFSTs. Experimental results on the Wall Street Journal (WSJ1) 20k-word trigram task show that at 17% WER (moderately wide beam width), the decoding time of the proposed approach is about 48% and 65% of that of the other two dynamic composition approaches. In comparis…

  4. Three-dimensional weight-accumulation algorithm for generating multiple excitation spots in fast optical stimulation

    Science.gov (United States)

    Takiguchi, Yu; Toyoda, Haruyoshi

    2017-11-01

    We report here an algorithm for calculating a hologram to be employed in a high-access-speed microscope for observing sensory-driven synaptic activity across all inputs to single living neurons in an intact cerebral cortex. The system is based on holographic multi-beam generation using a two-dimensional phase-only spatial light modulator to excite multiple locations in three dimensions with a single hologram. The hologram was calculated with a three-dimensional weighted iterative Fourier transform method using the Ewald sphere restriction to increase the calculation speed. Our algorithm achieved good uniformity of the three-dimensionally generated excitation spots; the standard deviation of the spot intensities was reduced by a factor of two compared with a conventional algorithm.

  5. A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression.

    Science.gov (United States)

    Yu, Xu; Lin, Jun-Yu; Jiang, Feng; Du, Jun-Wei; Han, Ji-Zhong

    2018-01-01

    Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain; however, previous works cannot effectively evaluate the significance of the different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in the different domains and use these features to represent the auxiliary domains, so that the weight computation across domains can be converted into a weight computation across features. We then combine the features in the target domain and in the auxiliary domains, converting the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments showing that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, compared with many state-of-the-art single-domain or cross-domain CF methods.
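
    For reference, here is a minimal LWLR predictor, standard Gaussian-kernel locally weighted least squares; the cross-domain feature construction that feeds it in the paper is a separate step and is not shown.

```python
import numpy as np

def lwlr_predict(X, y, x_query, tau=1.0):
    """Locally Weighted Linear Regression: fit a weighted least-squares model
    around the query point, with Gaussian weights that decay with distance.
    X: (n, d); y: (n,); returns the prediction at x_query."""
    Xb = np.hstack([np.ones((len(X), 1)), X])          # add intercept column
    qb = np.concatenate([[1.0], x_query])
    d2 = ((X - x_query) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2 * tau ** 2))                   # locality weights
    WX = Xb * w[:, None]
    # normal equations (X^T W X) theta = X^T W y, with a tiny ridge for stability
    theta = np.linalg.solve(Xb.T @ WX + 1e-8 * np.eye(Xb.shape[1]), WX.T @ y)
    return float(qb @ theta)

X = np.array([[0.0], [1.0], [2.0], [3.0]])
y = np.array([0.0, 1.2, 1.9, 3.1])
print(round(lwlr_predict(X, y, np.array([1.5]), tau=1.0), 2))
```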

  6. A Cross-Domain Collaborative Filtering Algorithm Based on Feature Construction and Locally Weighted Linear Regression

    Directory of Open Access Journals (Sweden)

    Xu Yu

    2018-01-01

    Cross-domain collaborative filtering (CDCF) solves the sparsity problem by transferring rating knowledge from auxiliary domains. Obviously, different auxiliary domains have different importance to the target domain; however, previous works cannot effectively evaluate the significance of the different auxiliary domains. To overcome this drawback, we propose a cross-domain collaborative filtering algorithm based on Feature Construction and Locally Weighted Linear Regression (FCLWLR). We first construct features in the different domains and use these features to represent the auxiliary domains, so that the weight computation across domains can be converted into a weight computation across features. We then combine the features in the target domain and in the auxiliary domains, converting the cross-domain recommendation problem into a regression problem. Finally, we employ a Locally Weighted Linear Regression (LWLR) model to solve the regression problem. As LWLR is a nonparametric regression method, it can effectively avoid the underfitting or overfitting problems that occur in parametric regression methods. We conduct extensive experiments showing that the proposed FCLWLR algorithm is effective in addressing the data sparsity problem by transferring useful knowledge from the auxiliary domains, compared with many state-of-the-art single-domain or cross-domain CF methods.

  7. Newton-Gauss Algorithm of Robust Weighted Total Least Squares Model

    Directory of Open Access Journals (Sweden)

    WANG Bin

    2015-06-01

    Based on the Newton-Gauss iterative algorithm for weighted total least squares (WTLS), a robust WTLS (RWTLS) model is presented. The model uses the standardized residuals to construct the weight factor function, and a robust square root of the variance component estimator is obtained by introducing the median method; robustness in both the observation and structure spaces is thereby achieved simultaneously. To obtain standardized residuals, the linearly approximate cofactor propagation law is employed to derive the expression for the cofactor matrix of the WTLS residuals. The iterative calculation steps for RWTLS are also described. The experiments indicate that the model proposed in this paper exhibits satisfactory robustness in handling gross errors in WTLS; the obtained parameters show no significant difference from the results of WTLS without gross errors. It is therefore superior to a robust weighted total least squares model constructed directly from the residuals.
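
    The weight-factor idea can be sketched with a standard robust weight function applied to standardized residuals; Huber's function below is a stand-in, since the paper constructs its own factor function and a median-based variance component estimate.

```python
def huber_weight(std_residual, k=1.345):
    """Weight-factor function built from a standardized residual, as in
    robust (iteratively reweighted) estimation: full weight for small
    residuals, downweighting beyond the tuning constant k."""
    r = abs(std_residual)
    return 1.0 if r <= k else k / r

# downweighting a suspected gross error during IRLS-style iterations
for r in (0.5, 1.0, 3.0, 10.0):
    print(r, round(huber_weight(r), 3))
```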

  8. Particle swarm optimizer for weighting factor selection in intensity-modulated radiation therapy optimization algorithms.

    Science.gov (United States)

    Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo

    2017-01-01

    In inverse treatment planning for intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights in a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to remove the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population's global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population's global location. Steps (ii) and (iii) are performed alternately until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose-volume histograms. Furthermore, a perturbation strategy (a hybrid crossover and mutation operator) is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6 MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases, without human …

  9. Comparison of predictive performance of data mining algorithms in predicting body weight in Mengali rams of Pakistan

    Directory of Open Access Journals (Sweden)

    Senol Celik

    The present study aimed at comparing the predictive performance of several data mining algorithms (CART, CHAID, Exhaustive CHAID, MARS, MLP, and RBF) on biometrical data of Mengali rams. To compare the predictive capability of the algorithms, body measurements (body length, withers height, and heart girth) and testicular measurements (testicular length, scrotal length, and scrotal circumference) of Mengali rams were evaluated for predicting live body weight against several goodness-of-fit criteria, with age included as a continuous independent variable. In this context, the MARS data mining algorithm was used for the first time to predict body weight, in two forms: without (MARS_1) and with (MARS_2) interaction terms. The order of predictive accuracy of the algorithms was found to be CART > CHAID ≈ Exhaustive CHAID > MARS_2 > MARS_1 > RBF > MLP, and all tested algorithms provided strong predictive accuracy for estimating body weight. However, MARS is the only algorithm that generated a prediction equation for body weight. It is therefore hoped that these results might make a valuable contribution to predicting body weight, describing the relationship between body weight and body and testicular measurements, revealing breed standards, and conserving indigenous genetic resources for Mengali sheep breeding, making more profitable and productive sheep production possible. The use of data mining algorithms is helpful for revealing the relationship between body weight and testicular traits in describing the breed standards of Mengali sheep.

  10. Selection and determination of beam weights based on genetic algorithms for conformal radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Xingen Wu; Zunliang Wang

    2000-01-01

    A genetic algorithm has been used to optimize the selection of beam weights for external-beam three-dimensional conformal radiotherapy treatment planning. A fitness function is defined which includes a difference term, to achieve a least-squares fit to doses at preselected points in the planning target volume, and a penalty term, to constrain the maximum allowable doses delivered to critical organs. The balance between dose uniformity within the target volume and the dose constraints on the critical structures can be adjusted by varying the beam-weight variables in the fitness function. A floating-point encoding scheme and several operators, such as uniform crossover, arithmetical crossover, geometrical crossover, Gaussian mutation and uniform mutation, were used to evolve the population. Three different cases were used to verify the correctness of the algorithm, and quality assessment based on dose-volume histograms and three-dimensional dose distributions is given. The results indicate that the genetic algorithm presented here has considerable potential. (author)
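
    A sketch of a fitness function with this structure, a least-squares target-dose term plus a penalty for exceeding critical-organ limits; the matrices, prescribed doses and penalty constant below are illustrative, not taken from the paper.

```python
import numpy as np

def fitness(weights, D_target, d_target, D_oar, d_oar_max, penalty=100.0):
    """GA fitness for beam-weight selection (lower is better): least-squares
    fit to prescribed doses at target points plus a penalty on doses that
    exceed the organ-at-risk limit.
    D_target: (n_pts, n_beams) dose per unit beam weight at target points;
    d_target: prescribed doses; D_oar: same matrix for organ-at-risk points."""
    dose_t = D_target @ weights
    dose_o = D_oar @ weights
    lsq = np.sum((dose_t - d_target) ** 2)
    viol = np.sum(np.maximum(dose_o - d_oar_max, 0.0) ** 2)
    return lsq + penalty * viol

# toy: 3 beams, 2 target points, 1 organ-at-risk point
D_target = np.array([[0.5, 0.3, 0.2], [0.2, 0.4, 0.4]])
D_oar = np.array([[0.4, 0.1, 0.1]])
w = np.array([1.0, 1.0, 1.0])
print(fitness(w, D_target, np.array([1.0, 1.0]), D_oar, d_oar_max=0.5))
```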

  11. Configuration space analysis of common cost functions in radiotherapy beam-weight optimization algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Rowbottom, Carl Graham [Joint Department of Physics, Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey (United Kingdom); Webb, Steve [Joint Department of Physics, Institute of Cancer Research and the Royal Marsden NHS Trust, Sutton, Surrey (United Kingdom)

    2002-01-07

    The successful implementation of downhill search engines in radiotherapy optimization algorithms depends on the absence of local minima in the search space. Such techniques are much faster than stochastic optimization methods but may become trapped in local minima if they exist. A technique known as 'configuration space analysis' was applied to examine the search space of cost functions used in radiotherapy beam-weight optimization algorithms. A downhill-simplex beam-weight optimization algorithm was run repeatedly to produce a frequency distribution of final cost values. By plotting the frequency distribution as a function of final cost, the existence of local minima can be determined. Common cost functions such as the quadratic deviation of dose to the planning target volume (PTV), integral dose to organs-at-risk (OARs), dose-threshold and dose-volume constraints for OARs were studied. Combinations of the cost functions were also considered. The simple cost function terms such as the quadratic PTV dose and integral dose to OAR cost function terms are not susceptible to local minima. In contrast, dose-threshold and dose-volume OAR constraint cost function terms are able to produce local minima in the example case studied. (author)

  12. Classification of EEG Signals using adaptive weighted distance nearest neighbor algorithm

    Directory of Open Access Journals (Sweden)

    E. Parvinnia

    2014-01-01

    Full Text Available Electroencephalogram (EEG) signals are often used to diagnose diseases such as seizure disorders, Alzheimer's disease, and schizophrenia. One main problem with the recorded EEG samples is that they are not equally reliable, due to artifacts at the time of recording. EEG signal classification algorithms should have a mechanism to handle this issue. It seems that using adaptive classifiers can be useful for biological signals such as EEG. In this paper, a general adaptive method named weighted distance nearest neighbor (WDNN) is applied to EEG signal classification to tackle this problem. This classification algorithm assigns a weight to each training sample to control its influence in classifying test samples. The weights of training samples are used to find the nearest neighbor of an input query pattern. To assess the performance of this scheme, EEG signals of thirteen schizophrenic patients and eighteen normal subjects are analyzed for the classification of these two groups. Several features, including fractal dimension, band power and autoregressive (AR) model parameters, are extracted from the EEG signals. The classification results are evaluated using leave-one-subject-out cross-validation for reliable estimation. The results indicate that the combination of WDNN and the selected features can significantly outperform the basic nearest-neighbor method and the other methods proposed in the past for the classification of these two groups. Therefore, this method can be a complementary tool for specialists to distinguish schizophrenia disorder.
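
One plausible reading of the WDNN classification rule, sketched in Python (the division-by-weight convention, where a larger weight means more influence, is an assumption, not the paper's exact formula):

```python
import numpy as np

def wdnn_predict(x, X_train, y_train, w_train):
    """Classify query x by the training sample with the smallest
    weight-scaled distance; dividing by the sample weight lets reliable
    (heavily weighted) samples exert more influence."""
    d = np.linalg.norm(X_train - x, axis=1) / w_train
    return y_train[np.argmin(d)]
```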

  13. A combination-weighted Feldkamp-based reconstruction algorithm for cone-beam CT

    International Nuclear Information System (INIS)

    Mori, Shinichiro; Endo, Masahiro; Komatsu, Shuhei; Kandatsu, Susumu; Yashiro, Tomoyasu; Baba, Masayuki

    2006-01-01

    The combination-weighted Feldkamp algorithm (CW-FDK) was developed and tested in a phantom in order to reduce cone-beam artefacts and enhance cranio-caudal reconstruction coverage in an attempt to improve image quality when utilizing cone-beam computed tomography (CBCT). Using a 256-slice cone-beam CT (256CBCT), image quality (CT-number uniformity and geometrical accuracy) was quantitatively evaluated in phantom and clinical studies, and the results were compared to those obtained with the original Feldkamp algorithm. A clinical study was done in lung cancer patients under breath holding and free breathing. Image quality for the original Feldkamp algorithm is degraded at the edge of the scan region due to the missing volume, commensurate with the cranio-caudal distance between the reconstruction and central planes. The CW-FDK extended the reconstruction coverage to equal the scan coverage and improved reconstruction accuracy, unaffected by the cranio-caudal distance. The extended reconstruction coverage with good image quality provided by the CW-FDK will be clinically investigated for improving diagnostic and radiotherapy applications. In addition, this algorithm can also be adapted for use in relatively wide cone-angle CBCT such as with a flat-panel detector CBCT.

  14. Advertisement Click-Through Rate Prediction Based on the Weighted-ELM and Adaboost Algorithm

    Directory of Open Access Journals (Sweden)

    Sen Zhang

    2017-01-01

    Full Text Available Accurate click-through rate (CTR) prediction can not only improve the advertisement company's reputation and revenue, but also help the advertisers to optimize the advertising performance. There are two main unsolved problems of CTR prediction: low prediction accuracy due to the imbalanced distribution of the advertising data, and the lack of a real-time advertisement bidding implementation. In this paper, we develop a novel online CTR prediction approach by incorporating real-time bidding (RTB) advertising through the following strategies: a user profile system is constructed from the historical data of the RTB advertising to describe the user features, the historical CTR features, the ID features, and the other numerical features; and a novel CTR prediction approach is presented to address the imbalanced learning sample distribution by integrating the Weighted-ELM (WELM) and the Adaboost algorithm. Compared to the commonly used algorithms, the proposed approach can improve the CTR prediction significantly.
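
A minimal sketch of the weighted-ELM idea for imbalanced data (inverse-class-frequency sample weights in a ridge solve over a random hidden layer); the hyper-parameters and the weighting rule are assumptions:

```python
import numpy as np

def weighted_elm_train(X, y, n_hidden=50, C=1.0, seed=0):
    """Weighted ELM sketch: random hidden layer, then a ridge solve in
    which each sample is weighted by the inverse frequency of its class
    (y must hold integer labels 0/1)."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                  # hidden-layer outputs
    s = 1.0 / np.bincount(y)[y]             # per-sample class weights
    Hw = H * s[:, None]                     # row-weighted design matrix
    beta = np.linalg.solve(H.T @ Hw + np.eye(n_hidden) / C, H.T @ (s * y))
    return W, b, beta                       # predict: tanh(X@W + b) @ beta
```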

  15. A hybrid algorithm for instant optimization of beam weights in anatomy-based intensity modulated radiotherapy: a performance evaluation study

    International Nuclear Information System (INIS)

    Vaitheeswaran, Ranganathan; Sathiya Narayanan, V.K.; Bhangle, Janhavi R.; Nirhali, Amit; Kumar, Namita; Basu, Sumit; Maiya, Vikram

    2011-01-01

    The study aims to introduce a hybrid optimization algorithm for anatomy-based intensity modulated radiotherapy (AB-IMRT). Our proposal is that by integrating an exact optimization algorithm with a heuristic optimization algorithm, the advantages of both algorithms can be combined, leading to an efficient global optimizer that solves the problem at a very fast rate. Our hybrid approach combines the Gaussian elimination algorithm (an exact optimizer) with the fast simulated annealing algorithm (a heuristic global optimizer) for the optimization of beam weights in AB-IMRT. The algorithm has been implemented using MATLAB software. The optimization efficiency of the hybrid algorithm is clarified by (i) analysis of the numerical characteristics of the algorithm and (ii) analysis of the clinical capabilities of the algorithm. The numerical and clinical characteristics of the hybrid algorithm are compared with the Gaussian elimination method (GEM) and fast simulated annealing (FSA). The numerical characteristics include convergence, consistency, number of iterations and overall optimization speed, which were analyzed for the respective cases of 8 patients. The clinical capabilities of the hybrid algorithm are demonstrated in cases of (a) prostate and (b) brain. The analyses reveal that (i) the convergence speed of the hybrid algorithm is approximately three times higher than that of the FSA algorithm; (ii) the convergence (percentage reduction in the cost function) in the hybrid algorithm is about 20% improved as compared to that in the GEM algorithm; (iii) the hybrid algorithm is capable of producing relatively better treatment plans in terms of Conformity Index (CI) (∼2%–5% improvement) and Homogeneity Index (HI) (∼4%–10% improvement) as compared to the GEM and FSA algorithms; (iv) the sparing of organs at risk in hybrid algorithm-based plans is better than that in GEM-based plans and comparable to that in FSA-based plans; and (v) the beam weights resulting from the hybrid algorithm are ...

  16. Half-unit weighted bilinear algorithm for image contrast enhancement in capsule endoscopy

    Science.gov (United States)

    Rukundo, Olivier

    2018-04-01

    This paper proposes a novel enhancement method based exclusively on the bilinear interpolation algorithm for capsule endoscopy images. The proposed method does not convert the original RGB image components to HSV or any other color space or model; instead, it processes the RGB components directly. In each component, a group of four adjacent pixels and a half-unit weight in the bilinear weighting function are used to calculate the average pixel value, which is identical for each pixel in that particular group. After these calculations, groups of identical pixels are overlapped successively in the horizontal and vertical directions to achieve a preliminary-enhanced image. The final-enhanced image is achieved by halving the sum of the original and preliminary-enhanced image pixels. Quantitative and qualitative experiments were conducted, focusing on pairwise comparisons between original and enhanced images. The final-enhanced images generally have the best diagnostic quality and give more detail about the visibility of vessels and structures in capsule endoscopy images.
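
An illustrative sketch of the scheme on a single component, assuming SciPy's 2x2 uniform filter as a stand-in for the overlapped four-pixel averaging (a simplification of the paper's exact grouping):

```python
import numpy as np
from scipy.ndimage import uniform_filter

def half_unit_bilinear_enhance(channel):
    """Enhance one RGB channel: build a preliminary image from 2x2
    neighbourhood averages (equal quarter weights, i.e. the half-unit
    bilinear case), then halve the sum of original and preliminary."""
    f = channel.astype(np.float64)
    prelim = uniform_filter(f, size=2, mode="nearest")
    return ((f + prelim) / 2.0).clip(0, 255).astype(np.uint8)
```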

  17. Weight optimization of large span steel truss structures with genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Mojolic, Cristian; Hulea, Radu; Pârv, Bianca Roxana [Technical University of Cluj-Napoca, Faculty of Civil Engineering, Department of Structural Mechanics, Str. Constantin Daicoviciu nr. 15, Cluj-Napoca (Romania)

    2015-03-10

    The paper presents the weight optimization process of the main steel truss that supports the Slatina Sport Hall roof. The structure was loaded with self-weight, dead loads, live loads, snow, wind and temperature, grouped into eleven load cases. The optimization of the structure was performed using genetic algorithms implemented in Matlab code. A total of four different cases were taken into consideration when trying to determine the lowest weight of the structure, depending on the types of connections with the concrete structure (types of supports, bearing modes) and the possibility of the lower truss chord nodes changing their vertical position. Restrictions on tension, maximum displacement and buckling were enforced on the elements, and the cross sections were chosen by the program from a user database. The results in each of the four cases were analyzed in terms of weight, element tension, element section and displacement. The paper presents the optimization process and the conclusions drawn.

  18. Energy Efficient and Safe Weighted Clustering Algorithm for Mobile Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Amine Dahane

    2015-01-01

    Full Text Available The main concern of clustering approaches for mobile wireless sensor networks (WSNs) is to prolong the battery life of the individual sensors and the network lifetime. For a successful clustering approach, a powerful mechanism to safely elect a cluster head remains a challenging task in many research works that take the mobility of the network into account. The approach based on computing the weight of each node in the network is one of the proposed techniques to deal with this problem. In this paper, we propose an energy-efficient and safe weighted clustering algorithm (ES-WCA) for mobile WSNs using a combination of five metrics. Among these metrics lies the behavioral-level metric, which promotes a safe choice of cluster head in the sense that the elected node will never be a malicious node. Moreover, the highlight of our work is summarized in a comprehensive strategy for monitoring the network, in order to detect and remove malicious nodes. We use a simulation study to demonstrate the performance of the proposed algorithm.

  19. Accelerating adaptive inverse distance weighting interpolation algorithm on a graphics processing unit.

    Science.gov (United States)

    Mei, Gang; Xu, Liangliang; Xu, Nengxiong

    2017-09-01

    This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points' spatial distribution pattern and achieve more accurate predictions than those predicted by IDW. In this paper, we first present two versions of the GPU-accelerated AIDW, i.e. the naive version without profiting from the shared memory and the tiled version taking advantage of the shared memory. We also implement the naive version and the tiled version using two data layouts, structure of arrays and array of aligned structures, on both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in the computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) on single precision the achieved speed-up can be up to 763 (on the GPU M5000), while on double precision the obtained highest speed-up is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available.
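
A toy AIDW sketch; the density-based rule for adapting the power parameter below is a placeholder, not the paper's exact criterion:

```python
import numpy as np

def aidw_interpolate(xy, z, queries, a_min=1.0, a_max=5.0):
    """Adaptive IDW sketch: raise the distance-decay power when a query's
    neighbourhood is sparse relative to the average point spacing."""
    spacing = np.mean([np.sort(np.linalg.norm(xy - p, axis=1))[1] for p in xy])
    out = np.empty(len(queries))
    for i, q in enumerate(queries):
        d = np.linalg.norm(xy - q, axis=1)
        sparsity = np.clip(np.sort(d)[:4].mean() / (2 * spacing), 0.0, 1.0)
        alpha = a_min + sparsity * (a_max - a_min)   # 0 dense .. 1 sparse
        w = 1.0 / np.maximum(d, 1e-12) ** alpha
        out[i] = np.sum(w * z) / np.sum(w)
    return out
```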

  20. Assessment of various failure theories for weight and cost optimized laminated composites using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Goyal, T. [Indian Institute of Technology Kanpur. Dept. of Aerospace Engineering, UP (India); Gupta, R. [Infotech Enterprises Ltd., Hyderabad (India)

    2012-07-01

    In this work, minimum weight-cost design for laminated composites is presented. A genetic algorithm has been developed for the optimization process. The Maximum-Stress, Tsai-Wu and Tsai-Hill failure criteria have been used, along with a buckling analysis parameter, for the margin-of-safety calculations. The design variables include three materials, namely Carbon-Epoxy, Glass-Epoxy and Kevlar-Epoxy; the number of plies; ply orientation angles, varying from -75 deg. to 90 deg. in intervals of 15 deg.; and ply thicknesses, which depend on the material in use. The total cost is the sum of the material cost and the layup cost. The layup cost is a function of the ply angle. Validation studies for solution convergence and weight-cost inverse proportionality are carried out. One set of results for shear loading is also validated against the literature for a particular case. A Pareto-optimal solution set is demonstrated for biaxial loading conditions. It is then extended to applied moments. It is found that the global optimum for a given loading condition is a function of the failure criterion for shear loading, with the Maximum-Stress criterion giving the lightest-cheapest and the Tsai-Wu criterion giving the heaviest-costliest optimized laminates. Optimized weight results are plotted for the three criteria for a comparative study. This work yields a globally optimized laminated composite and also a set of other locally optimal laminates for a given set of loading conditions. The current algorithm also provides adequate data to supplement the use of different failure criteria for varying loadings. This work can find use in industry and/or academia, considering the increased use of laminated composites in modern wind blades. (Author)

  1. An algorithmic decomposition of claw-free graphs leading to an O(n^3) algorithm for the weighted stable set problem

    OpenAIRE

    Faenza, Y.; Oriolo, G.; Stauffer, G.

    2011-01-01

    We propose an algorithm for solving the maximum weighted stable set problem on claw-free graphs that runs in O(n^3)-time, drastically improving the previous best known complexity bound. This algorithm is based on a novel decomposition theorem for claw-free graphs, which is also introduced in the present paper. Despite being weaker than the well-known structure result for claw-free graphs given by Chudnovsky and Seymour, our decomposition theorem is, on the other hand, algorithmic, i.e. it is ...

  2. Performance evaluation of an algorithm for fast optimization of beam weights in anatomy-based intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Ranganathan, Vaitheeswaran; Sathiya Narayanan, V.K.; Bhangle, Janhavi R.; Gupta, Kamlesh K.; Basu, Sumit; Maiya, Vikram; Joseph, Jolly; Nirhali, Amit

    2010-01-01

    This study aims to evaluate the performance of a new algorithm for optimization of beam weights in anatomy-based intensity modulated radiotherapy (IMRT). The algorithm uses a numerical technique called Gaussian elimination that derives the optimum beam weights in an exact, non-iterative way. The distinct feature of the algorithm is that it takes only a fraction of a second to optimize the beam weights, irrespective of the complexity of the given case. The algorithm has been implemented using MATLAB with a Graphical User Interface (GUI) option for convenient specification of dose constraints and penalties for different structures. We have tested the numerical and clinical capabilities of the proposed algorithm in several patient cases in comparison with the KonRad inverse planning system. The comparative analysis shows that the algorithm can generate anatomy-based IMRT plans with about 50% reduction in the number of MUs and 60% reduction in the number of apertures, while producing a dose distribution comparable to that of beamlet-based IMRT plans. Hence, it is clearly evident from the study that the proposed algorithm can be effectively used for clinical applications. (author)
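
A minimal sketch of the exact, non-iterative flavour of the method, assuming a precomputed dose matrix; solving the penalty-weighted linear system replaces any iterative search (the clamping of negative weights is an added assumption):

```python
import numpy as np

def solve_beam_weights(D, p, penalties):
    """Exact, non-iterative beam-weight solve: row-weight each dose
    constraint equation by its penalty and solve the least-squares
    system D @ w = p in one step."""
    A = D * penalties[:, None]
    w, *_ = np.linalg.lstsq(A, p * penalties, rcond=None)
    return np.maximum(w, 0.0)   # clamp non-physical negative weights
```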

  3. Improved event positioning in a gamma ray detector using an iterative position-weighted centre-of-gravity algorithm.

    Science.gov (United States)

    Liu, Chen-Yi; Goertzen, Andrew L

    2013-07-21

    An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification.
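
A compact sketch of the iterative position-weighted centre-of-gravity rule described above (array geometry and the Gaussian width are illustrative):

```python
import numpy as np

def iterative_weighted_cog(signals, xs, ys, sigma=1.0, n_iter=20):
    """Start from the plain centre of gravity of the channel signals,
    then repeatedly recompute it with a Gaussian weight centred at the
    current position estimate."""
    X, Y = np.meshgrid(xs, ys, indexing="ij")
    x = np.sum(signals * X) / signals.sum()
    y = np.sum(signals * Y) / signals.sum()
    for _ in range(n_iter):
        w = signals * np.exp(-((X - x) ** 2 + (Y - y) ** 2) / (2 * sigma ** 2))
        x, y = np.sum(w * X) / w.sum(), np.sum(w * Y) / w.sum()
    return x, y
```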

  4. Algorithms

    Indian Academy of Sciences (India)

    ... polynomial) division have been found in Vedic Mathematics, which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming ...

  5. Momentum-weighted conjugate gradient descent algorithm for gradient coil optimization.

    Science.gov (United States)

    Lu, Hanbing; Jesmanowicz, Andrzej; Li, Shi-Jiang; Hyde, James S

    2004-01-01

    MRI gradient coil design is a type of nonlinear constrained optimization. A practical problem in transverse gradient coil design using the conjugate gradient descent (CGD) method is that wire elements move at different rates along orthogonal directions (r, phi, z), and tend to cross, breaking the constraints. A momentum-weighted conjugate gradient descent (MW-CGD) method is presented to overcome this problem. This method takes advantage of the efficiency of the CGD method combined with momentum weighting, which is also an intrinsic property of the Levenberg-Marquardt algorithm, to adjust step sizes along the three orthogonal directions. A water-cooled, 12.8 cm inner diameter, three-axis torque-balanced gradient coil for rat imaging was developed based on this method, with an efficiency of 2.13, 2.08, and 4.12 mT·m⁻¹·A⁻¹ along X, Y, and Z, respectively. Experimental data demonstrate that this method can improve efficiency by 40% and field uniformity by 27%. This method has also been applied to the design of a gradient coil for the human brain, employing remote current return paths. The benefits of this design include improved gradient field uniformity and efficiency, with a shorter length than gradient coil designs using coaxial return paths.
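
The core update can be sketched as gradient descent with momentum weighting applied independently per direction (a simplification; the paper's conjugate-direction bookkeeping is omitted):

```python
import numpy as np

def mw_descent_step(pos, grad, vel, lr=(0.01, 0.01, 0.01), beta=0.9):
    """One momentum-weighted step: the running velocity smooths the raw
    gradient, and a separate learning rate per (r, phi, z) direction
    keeps wire elements advancing at comparable rates."""
    vel = beta * vel + (1.0 - beta) * grad
    return pos - np.asarray(lr) * vel, vel
```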

  6. Structure and weights optimisation of a modified Elman network emotion classifier using hybrid computational intelligence algorithms: a comparative study

    Science.gov (United States)

    Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood

    2015-10-01

    Artificial neural networks are efficient models in pattern recognition applications, but their performance is dependent on employing suitable structure and connection weights. This study used a hybrid method for obtaining the optimal weight set and architecture of a recurrent neural emotion classifier based on gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering the features of speech signal that were related to prosody, voice quality, and spectrum, a rich feature set was constructed. To select more efficient features, a fast feature selection method was employed. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on particle swarm optimisation (PSO) algorithm and its binary version, PSO and discrete firefly algorithm, and hybrid of error back-propagation and genetic algorithm that were used for optimisation. Experimental tests on Berlin emotional database demonstrated the superior performance of the proposed method using a lighter network structure.

  7. Meta-heuristic algorithms for parallel identical machines scheduling problem with weighted late work criterion and common due date.

    Science.gov (United States)

    Xu, Zhenzhen; Zou, Yongxing; Kong, Xiangjie

    2015-01-01

    To our knowledge, this paper investigates the first application of meta-heuristic algorithms to the parallel machines scheduling problem with weighted late work criterion and common due date ([Formula: see text]). The late work criterion is one of the performance measures of scheduling problems which considers the length of the late parts of particular jobs when evaluating the quality of scheduling. Since this problem is known to be NP-hard, three meta-heuristic algorithms, namely ant colony system, genetic algorithm, and simulated annealing, are designed and implemented, respectively. We also propose a novel algorithm named LDF (largest density first), which is improved from LPT (longest processing time first). The computational experiments compared these meta-heuristic algorithms with LDF, LPT and LS (list scheduling), and the experimental results show that SA performs the best in most cases. However, LDF is better than SA under some conditions; moreover, the running time of LDF is much shorter than that of SA.
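
A sketch of the greedy list-scheduling baselines on identical parallel machines; the exact density rule used by the paper's LDF is an assumption here:

```python
import heapq

def list_schedule(jobs, m, key):
    """Greedy list scheduling on m identical machines: hand the next job
    (in `key` order) to the currently least-loaded machine.
    jobs: list of (processing_time, weight) pairs."""
    heap = [(0.0, i) for i in range(m)]       # (load, machine id)
    assignment = [[] for _ in range(m)]
    for p, w in sorted(jobs, key=key):
        load, i = heapq.heappop(heap)
        assignment[i].append((p, w))
        heapq.heappush(heap, (load + p, i))
    return assignment

def lpt(jobs, m):   # longest processing time first
    return list_schedule(jobs, m, key=lambda jw: -jw[0])

def ldf(jobs, m):   # "largest density first" -- density rule assumed
    return list_schedule(jobs, m, key=lambda jw: -jw[1] / jw[0])
```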

  8. A Weighted Spatial-Spectral Kernel RX Algorithm and Efficient Implementation on GPUs

    Directory of Open Access Journals (Sweden)

    Chunhui Zhao

    2017-02-01

    Full Text Available The kernel RX (KRX) detector proposed by Kwon and Nasrabadi exploits a kernel function to obtain a better detection performance. However, it still has two limitations that can be addressed. On the one hand, reasonable integration of spatial-spectral information can be used to further improve its detection accuracy. On the other hand, parallel computing can be used to reduce the processing time of available KRX detectors. Accordingly, this paper presents a novel weighted spatial-spectral kernel RX (WSSKRX) detector and its parallel implementation on graphics processing units (GPUs). The WSSKRX utilizes the spatial neighborhood resources to reconstruct the testing pixels by introducing a spectral factor and a spatial window, thereby effectively reducing the interference of background noise. Then, the kernel function is redesigned as a mapping trick in the KRX detector to implement the anomaly detection. In addition, a powerful architecture based on the GPU technique is designed to accelerate WSSKRX. To substantiate the performance of the proposed algorithm, experiments are conducted on both synthetic and real data.

  9. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation.

    Science.gov (United States)

    Li, Yunyi; Zhang, Jie; Fan, Shangang; Yang, Jie; Xiong, Jian; Cheng, Xiefeng; Sari, Hikmet; Adachi, Fumiyuki; Gui, Guan

    2017-12-15

    Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than the L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, which combines the iterative reweighted algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, some applications to sparse image recovery are conducted, with good results in comparison with related work.
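
In the same spirit (but not the paper's exact half/two-thirds thresholding operators), an iteratively-reweighted thresholding sketch in Python:

```python
import numpy as np

def irwt_lp(A, y, lam=0.1, p=0.5, n_iter=100, eps=1e-8):
    """Iteratively-reweighted soft thresholding for an Lp-flavoured
    penalty: gradient step on ||Ax - y||^2, then a shrink whose
    per-coefficient threshold is reweighted as lam * |x|^(p-1)."""
    x = np.linalg.lstsq(A, y, rcond=None)[0]   # warm start
    L = np.linalg.norm(A, 2) ** 2              # gradient Lipschitz constant
    for _ in range(n_iter):
        z = x - (A.T @ (A @ x - y)) / L
        t = (lam / L) * (np.abs(x) + eps) ** (p - 1.0)
        x = np.sign(z) * np.maximum(np.abs(z) - t, 0.0)
    return x
```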

  10. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  11. Cell-centered particle weighting algorithm for PIC simulations in a non-uniform 2D axisymmetric mesh

    Science.gov (United States)

    Araki, Samuel J.; Wirz, Richard E.

    2014-09-01

    Standard area weighting methods for particle-in-cell simulations result in systematic errors in particle densities for a non-uniform mesh in cylindrical coordinates. These errors can be significantly reduced by using weighted cell volumes for density calculations. A detailed description of the corrected volume calculations and the cell-centered weighting algorithm in a non-uniform mesh is provided. The simple formulas for the corrected volume can be used for any type of quadrilateral and/or triangular mesh in cylindrical coordinates. Density errors arising from the cell-centered weighting algorithm are computed for uniform, linearly decreasing, and Bessel-function radial density profiles in an adaptive Cartesian mesh and an unstructured mesh. For all the density profiles, it is shown that the weighting algorithm provides a significant improvement for density calculations. However, relatively large density errors may persist at the outermost cells for monotonically decreasing density profiles. A further analysis has been performed to investigate the effect of the density errors on potential calculations, and it is shown that the error at the outermost cell does not propagate into the potential solution for the density profiles investigated.

  12. Access Selection Algorithm of Heterogeneous Wireless Networks for Smart Distribution Grid Based on Entropy-Weight and Rough Set

    Science.gov (United States)

    Xiang, Min; Qu, Qinqin; Chen, Cheng; Tian, Li; Zeng, Lingkang

    2017-11-01

    To improve the reliability of communication service in the smart distribution grid (SDG), an access selection algorithm for heterogeneous wireless networks based on dynamic network status and different service types was proposed. The network performance index values were obtained in real time by a multimode terminal, and the variation trend of the index values was analyzed using the growth matrix. The index weights were calculated by the entropy-weight method and then modified by rough set theory to get the final weights. Grey relational analysis is then combined to sort the candidate networks, and the optimum communication network is selected. Simulation results show that the proposed algorithm can dynamically and effectively implement access selection in the heterogeneous wireless networks of SDG and reduce the network blocking probability.
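
The entropy-weight step is the classic one; a minimal sketch (the rough-set modification and grey relational ranking are omitted):

```python
import numpy as np

def entropy_weights(X):
    """Classic entropy-weight method: columns (attributes) whose values
    vary more across candidate networks carry more information and thus
    receive larger weights. X must hold positive performance values."""
    P = X / X.sum(axis=0)
    E = -np.sum(P * np.log(P + 1e-12), axis=0) / np.log(X.shape[0])
    d = 1.0 - E                     # degree of divergence per attribute
    return d / d.sum()
```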

  13. Task-based evaluation of segmentation algorithms for diffusion-weighted MRI without using a gold standard

    International Nuclear Information System (INIS)

    Jha, Abhinav K; Kupinski, Matthew A; Rodríguez, Jeffrey J; Stephen, Renu M; Stopeck, Alison T

    2012-01-01

    In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both the ensemble mean square error and precision. We also propose consistency checks for this evaluation technique. (paper)

  14. Management of Excessive Weight Loss Following Laparoscopic Roux-en-Y Gastric Bypass: Clinical Algorithm and Surgical Techniques.

    Science.gov (United States)

    Akusoba, Ikemefuna; Birriel, T Javier; El Chaar, Maher

    2016-01-01

    There are no clinical guidelines or published studies addressing excessive weight loss and protein calorie malnutrition following a standard Roux-en-Y gastric bypass (RYGB) to guide nutritional management and treatment strategies. This study demonstrates the presentation, clinical algorithm, surgical technique, and outcomes of patients afflicted with and successfully treated for excessive weight loss following a standard RYGB. Three patients were successfully reversed to normal anatomy after evaluation, management, and treatment by a multidisciplinary team. Lowest BMI (kg/m²) was 18.9, 17.9, and 14.2, respectively. Twelve-month post-operative BMI (kg/m²) was 28.9, 22.8, and 26.1, respectively. Lowest weight (lbs) was 117, 128, and 79, respectively. Twelve-month post-operative weight (lbs) was 179, 161, and 145, respectively. A pre-reversal gastrostomy tube was inserted into the remnant stomach to demonstrate weight gain and improve nutritional status prior to reversal to the original anatomy. We propose a practical clinical algorithm for the work-up and management of patients with excessive weight loss and protein calorie malnutrition after standard RYGB, including reversal to normal anatomy.

  15. New Facets and a Branch-and-Cut Algorithm for the Weighted Clique Problem

    DEFF Research Database (Denmark)

    Sørensen, Michael Malmros

    2001-01-01

    ... four new classes of facet defining inequalities for the associated b-clique polytope. One of these inequality classes constitutes a generalization of the well known tree inequalities; the other classes are associated with multistars. We utilize these inequality classes together with other classes of facet defining inequalities in a branch-and-cut algorithm for the problem. We give a description of this algorithm, including some separation procedures, and present the computational results for different sets of test problems. The computation times that are obtained indicate that this algorithm is more efficient than previously described algorithms for the problem.

  16. A Local Weighted Nearest Neighbor Algorithm and a Weighted and Constrained Least-Squared Method for Mixed Odor Analysis by Electronic Nose Systems

    Directory of Open Access Journals (Sweden)

    Jyuo-Min Shyu

    2010-11-01

    Full Text Available A great deal of work has been done to develop techniques for odor analysis by electronic nose systems. These analyses mostly focus on identifying a particular odor by comparison with a known odor dataset. However, in many situations, it would be more practical if each individual odorant could be determined directly. This paper proposes two methods for such odor-component analysis for electronic nose systems. First, a K-nearest neighbor (KNN)-based local weighted nearest neighbor (LWNN) algorithm is proposed to determine the components of an odor. According to the component analysis, the odor training data is first categorized into several groups, each of which is represented by its centroid. The examined odor is then classified as the class of the nearest centroid. The distance between the examined odor and the centroid is calculated based on a weighting scheme, which captures the local structure of each predefined group. To further determine the concentration of each component, odor models are built by regression. Then, a weighted and constrained least-squares (WCLS) method is proposed to estimate the component concentrations. Experiments were carried out to assess the effectiveness of the proposed methods. The LWNN algorithm is able to classify mixed odors with different mixing ratios, while the WCLS method can provide good estimates of component concentrations.
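
The WCLS step can be sketched with SciPy's non-negative least squares, folding per-sensor weights into the rows (the linear mixing model and weighting are assumptions):

```python
import numpy as np
from scipy.optimize import nnls

def wcls_concentrations(R, s, weights):
    """Estimate component concentrations c >= 0 from sensor responses s,
    assuming a linear mixing model s ~ R @ c; per-sensor reliability
    weights are folded into the rows before the non-negative solve."""
    sw = np.sqrt(weights)
    c, _ = nnls(R * sw[:, None], s * sw)
    return c
```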

  17. BiCluE - Exact and heuristic algorithms for weighted bi-cluster editing of biomedical data

    DEFF Research Database (Denmark)

    Sun, Peng; Guo, Jiong; Baumbach, Jan

    2013-01-01

    ... to solve the weighted bi-cluster editing problem. It implements (1) an exact algorithm based on fixed-parameter tractability and (2) a polynomial-time greedy heuristic based on solving the hardest part, edge deletions, first. We evaluated its performance on artificial graphs. Afterwards we exemplarily ... problem. BiCluE as well as the supplementary results are available online at http://biclue.mpi-inf.mpg.de.

  18. Optimization the initial weights of artificial neural networks via genetic algorithm applied to hip bone fracture prediction

    OpenAIRE

    Chang, Y-T; Lin, J; Shieh, J-S; Abbod, MF

    2012-01-01

    This paper aims to find the optimal set of initial weights to enhance the accuracy of artificial neural networks (ANNs) by using genetic algorithms (GA). The sample in this study included 228 patients with first low-trauma hip fracture and 215 patients without hip fracture, both of whom were interviewed with 78 questions. We used logistic regression to select 5 important factors (i.e., bone mineral density, experience of fracture, average hand grip strength, intake of coffee, and peak expirat...

  19. An IPv6 routing lookup algorithm using weight-balanced tree based on prefix value for virtual router

    Science.gov (United States)

    Chen, Lingjiang; Zhou, Shuguang; Zhang, Qiaoduo; Li, Fenghua

    2016-10-01

    Virtual router enables the coexistence of different networks on the same physical facility and has lately attracted a great deal of attention from researchers. As the number of IPv6 addresses is rapidly increasing in virtual routers, designing an efficient IPv6 routing lookup algorithm is of great importance. In this paper, we present an IPv6 lookup algorithm called weight-balanced tree (WBT). WBT merges the Forwarding Information Bases (FIBs) of virtual routers into one spanning tree and compresses the space cost. The average and worst-case time complexities of WBT's lookup and update processes are both O(log N), and its space complexity is O(cN), where N is the size of the routing table and c is a constant. Experiments show that WBT helps reduce more than 80% of Static Random Access Memory (SRAM) cost in comparison with separation schemes. WBT also achieves the least average search depth compared with other homogeneous algorithms.

  20. Sparse Adaptive Iteratively-Weighted Thresholding Algorithm (SAITA) for Lp-Regularization Using the Multiple Sub-Dictionary Representation

    Directory of Open Access Journals (Sweden)

    Yunyi Li

    2017-12-01

    Full Text Available Both L1/2 and L2/3 are two typical non-convex regularizations of Lp (0 < p < 1), which can be employed to obtain a sparser solution than the L1 regularization. Recently, the multiple-state sparse transformation strategy has been developed to exploit the sparsity in L1 regularization for sparse signal recovery, which combines the iterative reweighted algorithms. To further exploit the sparse structure of signal and image, this paper adopts multiple-dictionary sparse transform strategies for the two typical cases p ∈ {1/2, 2/3} based on an iterative Lp thresholding algorithm and then proposes a sparse adaptive iteratively-weighted Lp thresholding algorithm (SAITA). Moreover, a simple yet effective regularization parameter is proposed to weight each sub-dictionary-based Lp regularizer. Simulation results have shown that the proposed SAITA not only performs better than the corresponding L1 algorithms but can also obtain a better recovery performance and achieve faster convergence than the conventional single-dictionary sparse transform-based Lp case. Moreover, some applications to sparse image recovery are conducted, with good results in comparison with related work.

  1. Mean curvature and texture constrained composite weighted random walk algorithm for optic disc segmentation towards glaucoma screening.

    Science.gov (United States)

    Panda, Rashmi; Puhan, N B; Panda, Ganapati

    2018-02-01

    Accurate optic disc (OD) segmentation is an important step in obtaining cup-to-disc ratio-based glaucoma screening using fundus imaging. It is a challenging task because of the subtle OD boundary, blood vessel occlusion and intensity inhomogeneity. In this Letter, the authors propose an improved version of the random walk algorithm for OD segmentation to tackle such challenges. The algorithm incorporates the mean curvature and Gabor texture energy features to define the new composite weight function to compute the edge weights. Unlike the deformable model-based OD segmentation techniques, the proposed algorithm remains unaffected by curve initialisation and local energy minima problem. The effectiveness of the proposed method is verified with DRIVE, DIARETDB1, DRISHTI-GS and MESSIDOR database images using the performance measures such as mean absolute distance, overlapping ratio, dice coefficient, sensitivity, specificity and precision. The obtained OD segmentation results and quantitative performance measures show robustness and superiority of the proposed algorithm in handling the complex challenges in OD segmentation.

  2. Optimization the Initial Weights of Artificial Neural Networks via Genetic Algorithm Applied to Hip Bone Fracture Prediction

    Directory of Open Access Journals (Sweden)

    Yu-Tzu Chang

    2012-01-01

    Full Text Available This paper aims to find the optimal set of initial weights to enhance the accuracy of artificial neural networks (ANNs) by using genetic algorithms (GA). The sample in this study included 228 patients with first low-trauma hip fracture and 215 patients without hip fracture, both of whom were interviewed with 78 questions. We used logistic regression to select 5 important factors (i.e., bone mineral density, experience of fracture, average hand grip strength, intake of coffee, and peak expiratory flow rate) for building artificial neural networks to predict the probabilities of hip fractures. Three-layer (one hidden layer) ANN models with back-propagation training algorithms were adopted. The purpose of this paper is to find the optimal initial weights of neural networks via genetic algorithm to improve the predictability. The area under the ROC curve (AUC) was used to assess the performance of the neural networks. The study results showed that the genetic algorithm obtained an AUC of 0.858±0.00493 on modeling data and 0.802±0.03318 on testing data. These were slightly better than the results of our previous study (0.868±0.00387 and 0.796±0.02559, resp.). Thus, the preliminary study of using a simple GA has been proved to be effective for improving the accuracy of artificial neural networks.
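
A toy version of the idea, a GA over candidate initial weights with a brief-training loss as fitness, using a single-layer logistic model as a stand-in for the three-layer ANN (all operators and settings are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)

def _sigmoid(z):
    return 1.0 / (1.0 + np.exp(-np.clip(z, -30, 30)))

def brief_train_loss(w0, X, y, lr=0.1, steps=20):
    """Fitness proxy: logistic loss after a short training run started
    from the candidate initial weights w0."""
    w = w0.copy()
    for _ in range(steps):
        w -= lr * X.T @ (_sigmoid(X @ w) - y) / len(y)
    p = _sigmoid(X @ w)
    return -np.mean(y * np.log(p + 1e-12) + (1 - y) * np.log(1 - p + 1e-12))

def ga_initial_weights(X, y, pop=30, gens=40, sigma=0.1):
    """Toy GA over initial-weight vectors: truncation selection, blend
    crossover, Gaussian mutation."""
    dim = X.shape[1]
    P = rng.normal(size=(pop, dim))
    for _ in range(gens):
        fit = np.array([brief_train_loss(w, X, y) for w in P])
        elite = P[np.argsort(fit)[: pop // 2]]
        kids = []
        while len(kids) < pop - len(elite):
            a, b = elite[rng.integers(len(elite), size=2)]
            t = rng.uniform()
            kids.append(t * a + (1 - t) * b + rng.normal(0.0, sigma, dim))
        P = np.vstack([elite, np.array(kids)])
    return min(P, key=lambda w: brief_train_loss(w, X, y))
```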

  3. New facets and a branch-and-cut algorithm for the weighted clique problem

    DEFF Research Database (Denmark)

    Sørensen, Michael Malmros

    2004-01-01

    ... four new classes of facet defining inequalities for the associated b-clique polytope. One of these inequality classes constitutes a generalization of the well known tree inequalities; the other classes are associated with multistars. We use these inequalities together with other classes of facet defining inequalities in a branch-and-cut algorithm for the problem. We give a description of this algorithm, including some separation procedures, and present the computational results for different sets of test problems. The computation times that are obtained indicate that this algorithm is more efficient than previously described algorithms for the problem.

  4. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  5. New backpropagation algorithm with type-2 fuzzy weights for neural networks

    CERN Document Server

    Gaxiola, Fernando; Valdez, Fevrier

    2016-01-01

    In this book a neural network learning method with type-2 fuzzy weight adjustment is proposed. The mathematical analysis of the proposed learning method architecture and the adaptation of the type-2 fuzzy weights are presented. The proposed method is based on research of recent methods that handle weight adaptation, especially fuzzy weights. The internal operation of the neuron is changed to work with two internal calculations for the activation function to obtain two results as outputs of the proposed method. Simulation results and a comparative study among monolithic neural networks, a neural network with type-1 fuzzy weights and a neural network with type-2 fuzzy weights are presented to illustrate the advantages of the proposed method. The proposed approach is based on recent methods that handle adaptation of weights using fuzzy logic of type-1 and type-2. The proposed approach is applied to cases of prediction for the Mackey-Glass (for τ = 17) and Dow-Jones time series, and recognition of persons with iris bi...

  6. Algorithms for the prediction of retinopathy of prematurity based on postnatal weight gain.

    Science.gov (United States)

    Binenbaum, Gil

    2013-06-01

    Current ROP screening guidelines represent a simple risk model with two dichotomized factors, birth weight and gestational age at birth. Pioneering work has shown that tracking postnatal weight gain, a surrogate for low insulin-like growth factor 1, may capture the influence of many other ROP risk factors and improve risk prediction. Models including weight gain, such as WINROP, ROPScore, and CHOP ROP, have demonstrated accurate ROP risk assessment and a potentially large reduction in ROP examinations, compared to current guidelines. However, there is a need for larger studies, and generalizability is limited in countries with developing neonatal care systems.

  7. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. ...

  8. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = (3/2)n − 2 is the solution to the above ...

  9. A phi-competitive algorithm for collecting items with increasing weights from a dynamic queue

    Czech Academy of Sciences Publication Activity Database

    Bienkowski, M.; Chrobak, M.; Dürr, Ch.; Hurand, M.; Jeż, A.; Jeż, Łukasz; Stachowiak, G.

    2013-01-01

    Roč. 475, 4 March (2013), s. 92-102 ISSN 0304-3975 Institutional support: RVO:67985840 Keywords : online algorithms * competitive analysis * buffer management Subject RIV: BA - General Mathematics Impact factor: 0.516, year: 2013 http://www.sciencedirect.com/science/article/pii/S0304397513000121

  10. Estimating the kinetic parameters of activated sludge storage using weighted non-linear least-squares and accelerating genetic algorithm.

    Science.gov (United States)

    Fang, Fang; Ni, Bing-Jie; Yu, Han-Qing

    2009-06-01

    In this study, weighted non-linear least-squares analysis and an accelerating genetic algorithm are integrated to estimate the kinetic parameters of substrate consumption and storage product formation of activated sludge. A storage product formation equation is developed and used to construct the objective function for the determination of its production kinetics. The weighted least-squares analysis is employed to calculate the differences in the storage product concentration between the model predictions and the experimental data as the sum of squared weighted errors. The kinetic parameters for substrate consumption and storage product formation are estimated to be a maximum heterotrophic growth rate of 0.121/h, a yield coefficient of 0.44 mg COD_X/mg COD_S (COD, chemical oxygen demand), and a substrate half-saturation constant of 16.9 mg/L, respectively, by minimizing the objective function using a real-coding-based accelerating genetic algorithm. Also, the fraction of substrate electrons diverted to storage product formation is estimated to be 0.43 mg COD_STO/mg COD_S. The validity of our approach is confirmed by the results of independent tests and the kinetic parameter values reported in the literature, suggesting that this approach could be useful to evaluate the product formation kinetics of mixed cultures like activated sludge. More importantly, as this integrated approach can estimate the kinetic parameters rapidly and accurately, it could be applied to other biological processes.

  11. High-efficiency space-based software radio architectures & algorithms (a minimum size, weight, and power TeraOps processor)

    Energy Technology Data Exchange (ETDEWEB)

    Dunham, Mark Edward [Los Alamos National Laboratory; Baker, Zachary K [Los Alamos National Laboratory; Stettler, Matthew W [Los Alamos National Laboratory; Pigue, Michael J [Los Alamos National Laboratory; Schmierer, Eric N [Los Alamos National Laboratory; Power, John F [Los Alamos National Laboratory; Graham, Paul S [Los Alamos National Laboratory

    2009-01-01

    Los Alamos has recently completed the latest in a series of Reconfigurable Software Radios, which incorporates several key innovations in both hardware design and algorithms. Due to our focus on satellite applications, each design must extract the best size, weight, and power performance possible from the ensemble of Commodity Off-the-Shelf (COTS) parts available at the time of design. In this case we have achieved 1 TeraOps/second signal processing on a 1920 Megabit/second datastream, while using only 53 Watts mains power, 5.5 kg, and 3 liters. This processing capability enables very advanced algorithms such as our wideband RF compression scheme to operate remotely, allowing network bandwidth constrained applications to deliver previously unattainable performance.

  12. Icing Forecasting of High Voltage Transmission Line Using Weighted Least Square Support Vector Machine with Fireworks Algorithm for Feature Selection

    Directory of Open Access Journals (Sweden)

    Tiannan Ma

    2016-12-01

    Full Text Available Accurate forecasting of icing thickness has great significance for ensuring the security and stability of the power grid. In order to improve the forecasting accuracy, this paper proposes an icing forecasting system based on the fireworks algorithm and weighted least square support vector machine (W-LSSVM. The method of the fireworks algorithm is employed to select the proper input features with the purpose of eliminating redundant influence. In addition, the aim of the W-LSSVM model is to train and test the historical data-set with the selected features. The capability of this proposed icing forecasting model and framework is tested through simulation experiments using real-world icing data from the monitoring center of the key laboratory of anti-ice disaster, Hunan, South China. The results show that the proposed W-LSSVM-FA method has a higher prediction accuracy and it may be a promising alternative for icing thickness forecasting.

  13. A Scalable Weight-Free Learning Algorithm for Regulatory Control of Cell Activity in Spiking Neuronal Networks.

    Science.gov (United States)

    Zhang, Xu; Foderaro, Greg; Henriquez, Craig; Ferrari, Silvia

    2018-03-01

    Recent developments in neural stimulation and recording technologies are providing scientists with the ability to record and control the activity of individual neurons in vitro or in vivo, with very high spatial and temporal resolution. Tools such as optogenetics, for example, are having a significant impact in the neuroscience field by delivering optical firing control with the precision and spatiotemporal resolution required for investigating information processing and plasticity in biological brains. While a number of training algorithms have been developed to date for spiking neural network (SNN) models of biological neuronal circuits, existing methods rely on learning rules that adjust the synaptic strengths (or weights) directly, in order to obtain the desired network-level (or functional-level) performance. As such, they are not applicable to modifying plasticity in biological neuronal circuits, in which synaptic strengths only change as a result of pre- and post-synaptic neuron firings or biological mechanisms beyond our control. This paper presents a weight-free training algorithm that relies solely on adjusting the spatiotemporal delivery of neuron firings in order to optimize the network performance. The proposed weight-free algorithm does not require any knowledge of the SNN model or its plasticity mechanisms. As a result, this training approach is potentially realizable in vitro or in vivo via neural stimulation and recording technologies, such as optogenetics and multielectrode arrays, and could be utilized to control plasticity at multiple scales of biological neuronal circuits. The approach is demonstrated by training SNNs with hundreds of units to control a virtual insect navigating in an unknown environment.

  14. A three-dimensional-weighted cone beam filtered backprojection (CB-FBP) algorithm for image reconstruction in volumetric CT-helical scanning

    International Nuclear Information System (INIS)

    Tang Xiangyang; Hsieh Jiang; Nilsen, Roy A; Dutta, Sandeep; Samsonov, Dmitry; Hagiwara, Akira

    2006-01-01

    Based on the structure of the original helical FDK algorithm, a three-dimensional (3D)-weighted cone beam filtered backprojection (CB-FBP) algorithm is proposed for image reconstruction in volumetric CT under helical source trajectory. In addition to its dependence on view and fan angles, the 3D weighting utilizes the cone angle dependency of a ray to improve reconstruction accuracy. The 3D weighting is ray-dependent and the underlying mechanism is to give a favourable weight to the ray with the smaller cone angle out of a pair of conjugate rays but an unfavourable weight to the ray with the larger cone angle out of the conjugate ray pair. The proposed 3D-weighted helical CB-FBP reconstruction algorithm is implemented in the cone-parallel geometry that can improve noise uniformity and image generation speed significantly. Under the cone-parallel geometry, the filtering is naturally carried out along the tangential direction of the helical source trajectory. By exploring the 3D weighting's dependence on cone angle, the proposed helical 3D-weighted CB-FBP reconstruction algorithm can provide significantly improved reconstruction accuracy at moderate cone angle and high helical pitches. The 3D-weighted CB-FBP algorithm is experimentally evaluated by computer-simulated phantoms and phantoms scanned by a diagnostic volumetric CT system with a detector dimension of 64 x 0.625 mm over various helical pitches. The computer simulation study shows that the 3D weighting enables the proposed algorithm to reach reconstruction accuracy comparable to that of exact CB reconstruction algorithms, such as the Katsevich algorithm, under a moderate cone angle (4 deg.) and various helical pitches. Meanwhile, the experimental evaluation using the phantoms scanned by a volumetric CT system shows that the spatial resolution along the z-direction and noise characteristics of the proposed 3D-weighted helical CB-FBP reconstruction algorithm are maintained very well in comparison to the FDK ...

  15. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as auxiliary rod. • move_disk(A, C); the (N0 + 1)th disk is moved from A to C directly ...

  16. Clustering for Different Scales of Measurement - the Gap-Ratio Weighted K-means Algorithm

    OpenAIRE

    Guérin, Joris; Gibaru, Olivier; Thiery, Stéphane; Nyiri, Eric

    2017-01-01

    This paper describes a method for clustering data that are spread out over large regions and whose dimensions are on different scales of measurement. Such an algorithm was developed to implement a robotics application consisting of sorting and storing objects in an unsupervised way. The toy dataset used to validate the application consists of Lego bricks of different shapes and colors. The uncontrolled lighting conditions together with the use of RGB color features, respectively involve data...
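
The core trick, rescaling each feature before clustering so that weighted Euclidean distance becomes plain Euclidean distance, can be sketched as follows (the gap-ratio weight computation itself is omitted):

```python
import numpy as np

def weighted_kmeans(X, k, feature_weights, n_iter=50, seed=0):
    """k-means where each feature is rescaled by sqrt(weight), so plain
    Euclidean distance in the rescaled space equals the weighted metric."""
    Z = X * np.sqrt(feature_weights)
    rng = np.random.default_rng(seed)
    C = Z[rng.choice(len(Z), size=k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((Z[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
        C = np.array([Z[labels == j].mean(axis=0) if np.any(labels == j)
                      else C[j] for j in range(k)])
    return labels
```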

  17. Using exponentially weighted moving average algorithm to defend against DDoS attacks

    CSIR Research Space (South Africa)

    Machaka, P

    2016-11-01

    Full Text Available This paper seeks to investigate the performance of the Exponentially Weighted Moving Average (EWMA) algorithm for mining big data and detecting DDoS attacks in Internet of Things (IoT) infrastructure. The paper will investigate the tradeoff between...
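
A minimal EWMA control-chart sketch of the detection idea (baseline window, smoothing factor and control-limit multiplier are assumptions):

```python
import numpy as np

def ewma_alarms(x, lam=0.2, L=3.0, baseline=100):
    """EWMA control chart: flag samples whose smoothed value drifts more
    than L sigma-units (asymptotic limit) from the mean of a presumed
    attack-free baseline prefix."""
    mu, sigma = np.mean(x[:baseline]), np.std(x[:baseline])
    limit = L * sigma * np.sqrt(lam / (2.0 - lam))
    z, alarms = mu, []
    for v in x:
        z = lam * v + (1.0 - lam) * z
        alarms.append(abs(z - mu) > limit)
    return np.array(alarms)
```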

  18. A novel iris patterns matching algorithm of weighted polar frequency correlation

    Science.gov (United States)

    Zhao, Weijie; Jiang, Linhua

    2014-11-01

    Iris recognition is recognized as one of the most accurate techniques for biometric authentication. In this paper, we present a novel correlation method - Weighted Polar Frequency Correlation (WPFC) - to match and evaluate two iris images; in fact, it can also be used to evaluate the similarity of any two images. The WPFC method is completely different from conventional iris matching methods. For instance, John Daugman's classical method of iris recognition uses 2D Gabor wavelets to extract features of the iris image into a compact bit stream, and then matches two bit streams with the Hamming distance. Our new method is based on correlation in the polar coordinate system in the frequency domain with regulated weights. It is motivated by the observation that the iris pattern carrying far more information for recognition is the fine structure at high frequency, rather than the gross shape of the iris image. Therefore, we transform iris images into the frequency domain and assign different weights to frequencies, then calculate the correlation of the two iris images in the frequency domain. We evaluate the iris images by summing the discrete correlation values with regulated weights and comparing the result with a preset threshold to tell whether the two iris images are captured from the same person or not. Experiments are carried out on both the CASIA database and self-obtained images. The results show that our method is functional and reliable, and it provides a new prospect for iris recognition systems.
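
A simplified frequency-weighted correlation score in Python; it keeps the high-frequency emphasis but omits the polar resampling, so it is only a sketch of the WPFC idea:

```python
import numpy as np

def weighted_frequency_correlation(a, b):
    """Correlate two equal-sized grayscale images in the frequency
    domain, weighting each frequency bin by its radial frequency so
    that fine (high-frequency) structure dominates the score."""
    Fa, Fb = np.fft.fft2(a), np.fft.fft2(b)
    fy = np.fft.fftfreq(a.shape[0])[:, None]
    fx = np.fft.fftfreq(a.shape[1])[None, :]
    w = np.hypot(fy, fx)
    num = np.sum(w * np.real(Fa * np.conj(Fb)))
    den = np.sqrt(np.sum(w * np.abs(Fa) ** 2) * np.sum(w * np.abs(Fb) ** 2))
    return num / den
```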

  19. Weighted Clustering

    DEFF Research Database (Denmark)

    Ackerman, Margareta; Ben-David, Shai; Branzei, Simina

    2012-01-01

    We investigate a natural generalization of the classical clustering problem, considering clustering tasks in which different instances may have different weights. We conduct the first extensive theoretical analysis of the influence of weighted data on standard clustering algorithms in both the partitional and hierarchical settings, characterizing the conditions under which algorithms react to weights. Extending a recent framework for clustering algorithm selection, we propose intuitive properties that would allow users to choose between clustering algorithms in the weighted setting and classify...

  20. An Improved Semisupervised Outlier Detection Algorithm Based on Adaptive Feature Weighted Clustering

    Directory of Open Access Journals (Sweden)

    Tingquan Deng

    2016-01-01

    Various approaches to outlier detection already exist, among which semisupervised methods achieve encouraging superiority due to the introduction of prior knowledge. In this paper, an adaptive feature-weighted clustering-based semisupervised outlier detection strategy is proposed. This method maximizes the membership degree of a labeled normal object to the cluster it belongs to and minimizes the membership degrees of a labeled outlier to all clusters. In consideration of the distinct significance of features or components of a dataset in determining whether an object is an inlier or outlier, each feature is adaptively assigned a different weight according to the deviation degree between this feature of all objects and that of a certain cluster prototype. A series of experiments on a synthetic dataset and several real-world datasets are implemented to verify the effectiveness and efficiency of the proposal.

  1. Development of an inverse distance weighted active infrared stealth scheme using the repulsive particle swarm optimization algorithm.

    Science.gov (United States)

    Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk

    2018-04-20

    The threat of detection by infrared (IR) signals is higher than for other signals such as radar or sonar, because an object detected by an IR sensor cannot easily recognize that it has been detected. Recently, research on actively reducing IR signals has been conducted, controlling the IR signal by adjusting the surface temperature of the object. In this paper, we propose an active IR stealth algorithm to synchronize the IR signals from the object and the background around the object. The proposed method includes the repulsive particle swarm optimization statistical optimization algorithm to estimate the IR stealth surface temperature, which results in a synchronization between the IR signals from the object and the surrounding background by setting the inverse distance weighted contrast radiant intensity (CRI) equal to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene to verify the proposed method. Our results show that the inverse distance weighted active IR stealth technique proposed in this study is an effective method for reducing the contrast radiant intensity between the object and the background by up to 32% as compared to the previous method using the CRI determined as the simple signal difference between the object and the background.
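
    To illustrate the core idea, the sketch below computes an inverse-distance-weighted background radiance and solves for the surface temperature that zeroes the contrast, using a single-band gray-body model and a root finder in place of the paper's repulsive PSO; all numbers are made up:

```python
import numpy as np
from scipy.optimize import brentq

SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def idw_background_radiance(bg_temps_K, distances_m, p=2):
    """Inverse-distance-weighted background radiance from sampled patches."""
    w = 1.0 / np.asarray(distances_m) ** p
    radiance = SIGMA * np.asarray(bg_temps_K) ** 4
    return np.sum(w * radiance) / np.sum(w)

def stealth_surface_temp(bg_temps_K, distances_m):
    """Surface temperature that drives the contrast (object radiance minus
    IDW background radiance) to zero, in a gray-body single-band toy model."""
    target = idw_background_radiance(bg_temps_K, distances_m)
    cri = lambda T: SIGMA * T ** 4 - target
    return brentq(cri, 150.0, 500.0)  # bracket comfortably covers outdoor scenes

print(stealth_surface_temp([288.0, 293.0, 285.0], [5.0, 12.0, 30.0]))
```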

  2. A Real-Time Smooth Weighted Data Fusion Algorithm for Greenhouse Sensing Based on Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Tengyue Zou

    2017-11-01

    Wireless sensor networks are widely used to acquire environmental parameters to support agricultural production. However, data variation and noise caused by actuators often produce complex measurement conditions. These factors can lead to nonconformity in the samples reported by different nodes and cause errors in the final decision. Data fusion is well suited to reduce the influence of actuator-based noise and improve automation accuracy. A key step is to identify the sensor nodes disturbed by actuator noise and reduce their degree of participation in the data fusion results. A smoothing value is introduced and a searching method based on Prim's algorithm is designed to help obtain stable sensing data. A voting mechanism with dynamic weights is then proposed to obtain the data fusion result. The dynamic weighting process can sharply reduce the influence of actuator noise in data fusion and gradually condition the data to normal levels over time. To shorten the data fusion time in large networks, an acceleration method with prediction is also presented to reduce the data collection time. A real-time system is implemented on STMicroelectronics STM32F103 and NORDIC nRF24L01 platforms and the experimental results verify the improvement provided by these new algorithms.
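
    A minimal sketch of the dynamic-weight voting idea, assuming a simple exponential smoothing per node and an illustrative inverse-deviation weight law (the Prim's-algorithm search from the paper is omitted):

```python
import numpy as np

def fuse(readings, history, alpha=0.3):
    """Fuse one round of sensor readings with dynamic weights.

    Each node keeps an exponentially smoothed value (`history`); nodes whose
    current reading deviates strongly from their own smoothed value are
    assumed to sit near an actuator and get a small vote. The smoothing
    constant and weight law are illustrative, not the paper's exact design.
    """
    readings, history = np.asarray(readings), np.asarray(history)
    deviation = np.abs(readings - history)
    weights = 1.0 / (1.0 + deviation)          # disturbed nodes get small weight
    weights /= weights.sum()
    fused = float(np.dot(weights, readings))
    new_history = alpha * readings + (1 - alpha) * history  # update smoothing
    return fused, new_history

fused, hist = fuse([24.1, 24.3, 31.8, 24.0], [24.0, 24.2, 24.1, 24.1])
print(fused)  # the heater-disturbed third node barely moves the result
```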

  3. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks.

    Science.gov (United States)

    Devi, D Chitra; Uthariaraj, V Rhymend

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is irreversible, so such tasks have to be assigned to the most appropriate VMs at the initial placement itself. In practice, arriving jobs consist of multiple interdependent tasks, which may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparing with the existing methods.
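
    For reference, a plain weighted-round-robin assignment, the baseline the improved algorithm builds on, can be sketched as follows; the improved variant additionally weighs task length and inter-task dependencies, which are omitted here:

```python
from itertools import cycle

def weighted_round_robin(tasks, vm_capacities):
    """Assign tasks to VMs in proportion to VM capacity (weight).

    A plain weighted-round-robin sketch; the paper's improved variant also
    accounts for task length and inter-task dependencies.
    """
    schedule = {vm: [] for vm in vm_capacities}
    # Each VM appears in the rotation once per unit of capacity.
    rotation = cycle([vm for vm, cap in vm_capacities.items() for _ in range(cap)])
    for task in tasks:
        schedule[next(rotation)].append(task)
    return schedule

print(weighted_round_robin(list(range(10)), {"vm-small": 1, "vm-big": 4}))
```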

  4. Load Balancing in Cloud Computing Environment Using Improved Weighted Round Robin Algorithm for Nonpreemptive Dependent Tasks

    Directory of Open Access Journals (Sweden)

    D. Chitra Devi

    2016-01-01

    Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs for effectively sharing the resources. The scheduling of nonpreemptive tasks in the cloud computing environment is irreversible, so such tasks have to be assigned to the most appropriate VMs at the initial placement itself. In practice, arriving jobs consist of multiple interdependent tasks, which may execute the independent tasks in multiple VMs or in the same VM's multiple cores. Also, jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating the tasks to appropriate resources by static or dynamic scheduling to make cloud computing more efficient and thus improve user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm by considering the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. Performance of the proposed algorithm is studied by comparing with the existing methods.

  5. Multi-objective ACO algorithms to minimise the makespan and the total rejection cost on BPMs with arbitrary job weights

    Science.gov (United States)

    Jia, Zhao-hong; Pei, Ming-li; Leung, Joseph Y.-T.

    2017-12-01

    In this paper, we investigate the batch-scheduling problem with rejection on parallel machines with non-identical job sizes and arbitrary job rejection weights. If a job is rejected, the corresponding penalty has to be paid. Our objective is to minimise the makespan of the processed jobs and the total rejection cost of the rejected jobs. Based on the selected multi-objective optimisation approaches, two problems, P1 and P2, are considered. In P1, the two objectives are linearly combined into a single objective. In P2, the two objectives are simultaneously minimised and the Pareto non-dominated solution set is to be found. Based on ant colony optimisation (ACO), two algorithms, called LACO and PACO, are proposed to address the two problems, respectively. Two different objective-oriented pheromone matrices and heuristic information are designed. Additionally, a local optimisation algorithm is adopted to improve the solution quality. Finally, simulated experiments are conducted, and the comparative results verify the effectiveness and efficiency of the proposed algorithms, especially on large-scale instances.

  6. A Novel Ant Colony Algorithm for the Single-Machine Total Weighted Tardiness Problem with Sequence Dependent Setup Times

    Directory of Open Access Journals (Sweden)

    Fardin Ahmadizar

    2011-08-01

    This paper deals with the NP-hard single-machine total weighted tardiness problem with sequence-dependent setup times. Incorporating fuzzy sets and genetic operators, a novel ant colony optimization algorithm is developed for the problem. In the proposed algorithm, artificial ants construct solutions as orders of jobs based on heuristic information as well as pheromone trails. To calculate the heuristic information, three well-known priority rules are adopted as fuzzy sets and then aggregated. When all artificial ants have completed their constructions, genetic operators such as crossover and mutation are applied to explore new regions of the solution space. A local search is then performed to improve the quality of some of the solutions found. Moreover, at run time the pheromone trails are updated both locally and globally, and bounded between lower and upper limits. The proposed algorithm is tested on a set of benchmark problems from the literature and compared with other metaheuristics.
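
    The objective being minimized can be evaluated in a few lines; the sketch below assumes generic per-job data structures, not the paper's benchmark format:

```python
def total_weighted_tardiness(sequence, proc, due, weight, setup):
    """Objective value for a single-machine job sequence.

    proc, due, weight are per-job dicts; setup[(i, j)] is the sequence-
    dependent setup time incurred when job j follows job i, with
    setup[(None, j)] for the first job in the sequence.
    """
    t, cost, prev = 0, 0, None
    for j in sequence:
        t += setup[(prev, j)] + proc[j]          # completion time of job j
        cost += weight[j] * max(0, t - due[j])   # weighted tardiness of job j
        prev = j
    return cost

proc = {1: 3, 2: 5}; due = {1: 4, 2: 6}; weight = {1: 2, 2: 1}
setup = {(None, 1): 1, (None, 2): 1, (1, 2): 2, (2, 1): 3}
print(total_weighted_tardiness([1, 2], proc, due, weight, setup))  # 1 * (11 - 6) = 5
```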

  7. Performance comparison of weighted sum-minimum mean square error and virtual signal-to-interference plus noise ratio algorithms in simulated and measured channels

    DEFF Research Database (Denmark)

    Rahimi, Maryam; Nielsen, Jesper Ødum; Pedersen, Troels

    2014-01-01

    A comparison in data achievement between two well-known algorithms with simulated and real measured data is presented. The algorithms maximise the data rate in a cooperative base station (BS) multiple-input-single-output scenario. The weighted sum-minimum mean square error algorithm could be used in multiple-input-multiple-output scenarios, but it has lower performance than the virtual signal-to-interference plus noise ratio algorithm in theory and practice. A real measurement environment consisting of two BSs and two users has been studied to evaluate the simulation results.

  8. The image evaluation of iterative motion correction reconstruction algorithm PROPELLER T2-weighted imaging compared with MultiVane T2-weighted imaging

    Science.gov (United States)

    Lee, Suk-Jun; Yu, Seung-Man

    2017-08-01

    The purpose of this study was to evaluate the usefulness and clinical applicability of MultiVaneXD, a T2-weighted imaging technique that applies an iterative motion-correction reconstruction algorithm, compared with MultiVane images taken with a 3T MRI. A total of 20 patients with suspected pathologies of the liver and pancreatic-biliary system based on clinical and laboratory findings underwent upper abdominal MRI, acquired using the MultiVane and MultiVaneXD techniques. Two reviewers analyzed the MultiVane and MultiVaneXD T2-weighted images qualitatively and quantitatively. Each reviewer evaluated vessel conspicuity by observing motion artifacts and the sharpness of the portal vein, hepatic vein, and upper organs. The signal-to-noise ratio (SNR) and contrast-to-noise ratio (CNR) were calculated by one reviewer for quantitative analysis. The interclass correlation coefficient was evaluated to measure inter-observer reliability. There were significant differences between MultiVane and MultiVaneXD in the motion artifact evaluation. Furthermore, MultiVane was given a better score than MultiVaneXD in abdominal organ sharpness and vessel conspicuity, but the difference was insignificant. The reliability coefficient values were over 0.8 in every evaluation. MultiVaneXD (2.12) showed a higher value than did MultiVane (1.98), but the difference was insignificant (p = 0.135). MultiVaneXD is a motion-correction method that is more advanced than MultiVane, and it produced an increased SNR, resulting in a greater ability to detect focal abdominal lesions.

  9. Microdosimetry calculations for monoenergetic electrons using Geant4-DNA combined with a weighted track sampling algorithm.

    Science.gov (United States)

    Famulari, Gabriel; Pater, Piotr; Enger, Shirin A

    2017-07-07

    The aim of this study was to calculate microdosimetric distributions for low-energy electrons simulated using the Monte Carlo track structure code Geant4-DNA. Tracks for monoenergetic electrons with kinetic energies ranging from 100 eV to 1 MeV were simulated in an infinite spherical water phantom using the Geant4-DNA extension included in the Geant4 toolkit version 10.2 (patch 02). The microdosimetric distributions were obtained through random sampling of transfer points and overlaying scoring volumes within the associated volume of the tracks. Relative frequency distributions of energy deposition f(>E)/f(>0) and dose-mean lineal energy ($\bar{y}_D$) values were calculated in nanometer-sized spherical and cylindrical targets. The effects of scoring volume and scoring techniques were examined. The results were compared with published data generated using MOCA8B and KURBUC. Geant4-DNA produces a lower frequency of higher energy deposits than MOCA8B. The $\bar{y}_D$ values calculated with Geant4-DNA are smaller than those calculated using MOCA8B and KURBUC. The differences are mainly due to the lower ionization and excitation cross sections of Geant4-DNA for low-energy electrons. To a lesser extent, discrepancies can also be attributed to the implementation in this study of a new and fast scoring technique that differs from that used in previous studies. For the same mean chord length ($\bar{l}$), the $\bar{y}_D$ values calculated in cylindrical volumes are larger than those calculated in spherical volumes. The discrepancies due to cross sections and scoring geometries increase with decreasing scoring site dimensions. A new set of $\bar{y}_D$ values has been presented for monoenergetic electrons using a fast track sampling algorithm and the most recent physics models implemented in Geant4-DNA. This dataset can be combined with primary electron spectra to predict the radiation quality of photon and electron beams.
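
    The microdosimetric quantities themselves are straightforward to compute once energy deposits have been sampled; a sketch with toy deposits, assuming the standard definitions $y_i = \epsilon_i / \bar{l}$, $\bar{y}_F = \langle y \rangle$, and $\bar{y}_D = \langle y^2 \rangle / \langle y \rangle$:

```python
import numpy as np

def lineal_energy_stats(energy_deposits_keV, mean_chord_um):
    """Frequency- and dose-mean lineal energy from sampled energy deposits.

    y_i = eps_i / l_bar per scoring-volume hit; y_F is the frequency mean
    and y_D = <y^2>/<y> the dose mean. Inputs here are toy values; the
    paper samples eps_i by overlaying scoring volumes on simulated tracks.
    """
    y = np.asarray(energy_deposits_keV) / mean_chord_um  # keV/um per event
    y_F = y.mean()
    y_D = (y ** 2).mean() / y.mean()
    return y_F, y_D

eps = np.random.default_rng(1).exponential(scale=0.5, size=10000)  # toy deposits
print(lineal_energy_stats(eps, mean_chord_um=0.667))  # sphere of d=1 um: l_bar = 2d/3
```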

  10. A deterministic iterative least-squares algorithm for beam weight optimization in conformal radiotherapy

    International Nuclear Information System (INIS)

    Chen Yan; Michalski, Darek; Houser, Christopher; Galvin, James M.

    2002-01-01

    Currently, inverse treatment planning in conformal radiotherapy is, in part, a trial-and-error process due to the interplay of many competing criteria for obtaining a clinically acceptable dose distribution. A new method is developed for beam weight optimization that incorporates clinically relevant nonlinear and linear constraints. The process is driven by a nonlinear, quasi-quadratic objective function, and the solution space is defined by a set of linear constraints. At each step of the iteration, the optimization problem is linearized by a self-consistent approximation that is local to the existing dose distribution. The dose distribution is then improved by solving a series of constrained least-squares problems using an established method until all prescribed constraints are satisfied. This differs from current approaches in that it does not rely on the search for the global minimum of a specific objective function. Essentially, the proposed objective function can be construed as a functional that comprises a class of dose-based quadratic objective functions. Empirical adjustment of model parameters in the construction of the objective function is minimized, since these parameters are in effect adaptively adjusted during optimization. The method is robust in solving difficult clinical cases using either aperture- or pencil-beam-based planning techniques for intensity-modulated radiation therapy. (author)
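
    One linearized step of such a scheme can be sketched as a nonnegative least-squares solve; the dose matrix below is random toy data, and scipy's nnls stands in for the "established method" mentioned in the abstract:

```python
import numpy as np
from scipy.optimize import nnls

# Minimal sketch of one linearized step: solve for nonnegative beam weights w
# minimizing ||D w - d_prescribed||, where D[i, j] is the dose to voxel i per
# unit weight of beam j. The paper iterates such constrained solves with a
# locally linearized objective; here is a single solve on toy data.
rng = np.random.default_rng(0)
D = rng.uniform(0.0, 1.0, size=(200, 8))   # toy dose-deposition matrix
d_prescribed = np.full(200, 2.0)            # uniform target dose

w, residual = nnls(D, d_prescribed)         # nonnegativity is the hard constraint
print(np.round(w, 3), residual)
```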

  11. Order Batching in Warehouses by Minimizing Total Tardiness: A Hybrid Approach of Weighted Association Rule Mining and Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Amir Hossein Azadnia

    2013-01-01

    One of the cost-intensive issues in managing warehouses is the order picking problem, which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed to minimize traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness, which is neglected in most of the related scientific papers. Consequently, we propose a novel solution approach to minimize tardiness that consists of four phases. First, weighted association rule mining is used to calculate associations between orders with respect to their due dates. Next, a batching model based on binary integer programming is formulated to maximize the associations between orders within each batch. Subsequently, in the order-picking phase, a Genetic Algorithm applied to the Traveling Salesman Problem identifies the most suitable travel path. Finally, the Genetic Algorithm is applied to sequence the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach.

  12. A weighted least-squares lump correction algorithm for transmission-corrected gamma-ray nondestructive assay

    International Nuclear Information System (INIS)

    Prettyman, T.H.; Sprinkle, J.K. Jr.; Sheppard, G.A.

    1993-01-01

    With transmission-corrected gamma-ray nondestructive assay instruments such as the Segmented Gamma Scanner (SGS) and the Tomographic Gamma Scanner (TGS) that is currently under development at Los Alamos National Laboratory, the amount of gamma-ray emitting material can be underestimated for samples in which the emitting material consists of particles or lumps of highly attenuating material. This problem is encountered in the assay of uranium- and plutonium-bearing samples. To correct for this source of bias, we have developed a least-squares algorithm that uses transmission-corrected assay results for several emitted energies and a weighting function to account for statistical uncertainties in the assay results. The variation of effective lump size in the fitted model is parameterized; this allows the correction to be performed for a wide range of lump-size distributions. It may be possible to use the reduced chi-squared value obtained in the fit to identify samples in which assay assumptions have been violated. We found that the algorithm significantly reduced bias in simulated assays and improved SGS assay results for plutonium-bearing samples. Further testing will be conducted with the TGS, which is expected to be less susceptible than the SGS to this systematic source of bias.

  13. Increasing Accuracy: A New Design and Algorithm for Automatically Measuring Weights, Travel Direction and Radio Frequency Identification (RFID) of Penguins.

    Science.gov (United States)

    Afanasyev, Vsevolod; Buldyrev, Sergey V; Dunn, Michael J; Robst, Jeremy; Preston, Mark; Bremner, Steve F; Briggs, Dirk R; Brown, Ruth; Adlard, Stacey; Peat, Helen J

    2015-01-01

    A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems.

  14. Improved hybridization of Fuzzy Analytic Hierarchy Process (FAHP) algorithm with Fuzzy Multiple Attribute Decision Making - Simple Additive Weighting (FMADM-SAW)

    Science.gov (United States)

    Zaiwani, B. E.; Zarlis, M.; Efendi, S.

    2018-03-01

    This research improves on the hybridization of the Fuzzy Analytic Hierarchy Process (FAHP) algorithm with the Fuzzy Technique for Order Preference by Similarity to Ideal Solution (FTOPSIS), previously used to select the best bank chief inspector based on several qualitative and quantitative criteria with various priorities. To improve the performance of that earlier work, a hybridization of the FAHP algorithm with Fuzzy Multiple Attribute Decision Making - Simple Additive Weighting (FMADM-SAW) was adopted, which applies FAHP to the weighting process and SAW to the ranking process to determine the promotion of employees at a government institution. The improved average Efficiency Rate (ER) is 85.24%, which means that this research succeeded in improving on the previous research, whose ER was 77.82%. Keywords: Ranking and Selection, Fuzzy AHP, Fuzzy TOPSIS, FMADM-SAW.
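
    The SAW ranking stage is compact enough to sketch directly; here the criterion weights are supplied by hand, whereas in the paper they come from the FAHP weighting process:

```python
import numpy as np

def saw_rank(decision_matrix, weights, benefit):
    """Simple Additive Weighting: normalize each criterion column, weight,
    sum, and rank. benefit[j] is True for criteria where larger is better.
    The data below are illustrative only."""
    X = np.asarray(decision_matrix, dtype=float)
    norm = np.empty_like(X)
    for j in range(X.shape[1]):
        col = X[:, j]
        norm[:, j] = col / col.max() if benefit[j] else col.min() / col
    scores = norm @ np.asarray(weights)
    return scores, np.argsort(-scores)  # best alternative first

scores, order = saw_rank([[70, 3, 9], [90, 5, 7], [80, 4, 8]],
                         weights=[0.5, 0.2, 0.3],
                         benefit=[True, False, True])
print(scores, order)
```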

  15. Selecting the correct weighting factors for linear and quadratic calibration curves with least-squares regression algorithm in bioanalytical LC-MS/MS assays and impacts of using incorrect weighting factors on curve stability, data quality, and assay performance.

    Science.gov (United States)

    Gu, Huidong; Liu, Guowen; Wang, Jian; Aubry, Anne-Françoise; Arnold, Mark E

    2014-09-16

    A simple procedure for selecting the correct weighting factors for linear and quadratic calibration curves with the least-squares regression algorithm in bioanalytical LC-MS/MS assays is reported. The correct weighting factor is determined by the relationship between the standard deviation of instrument responses (σ) and the concentrations (x). The weighting factor of 1, 1/x, or 1/x^2 should be selected if, over the entire concentration range, σ is a constant, σ^2 is proportional to x, or σ is proportional to x, respectively. For the first time, we demonstrated with detailed scientific reasoning, solid historical data, and convincing justification that 1/x^2 should always be used as the weighting factor for all bioanalytical LC-MS/MS assays. The impacts of using incorrect weighting factors on curve stability, data quality, and assay performance were thoroughly investigated. It was found that the most stable curve could be obtained when the correct weighting factor was used, whereas other curves using incorrect weighting factors were unstable. It was also found that there was a very insignificant impact on the concentrations reported with calibration curves using incorrect weighting factors, as the concentrations were always reported with the passing curves, which actually overlapped with or were very close to the curves using the correct weighting factor. However, the use of incorrect weighting factors did impact the assay performance significantly. Finally, the difference between the weighting factors of 1/x^2 and 1/y^2 is discussed. All of the findings can be generalized and applied to other quantitative analysis techniques using calibration curves with the weighted least-squares regression algorithm.
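
    As an illustration, a weighted linear calibration with the recommended 1/x^2 weighting can be fit with numpy; note that np.polyfit applies its w to the residuals directly, so w = 1/x realizes 1/x^2 chi-square weighting (all data below are made up):

```python
import numpy as np

# Weighted linear calibration with the 1/x^2 weighting factor.
x = np.array([1.0, 2.0, 5.0, 10.0, 50.0, 100.0, 500.0, 1000.0])  # ng/mL
y = 1.98 * x + np.random.default_rng(2).normal(0.0, 0.02 * x)    # sigma ~ x

# np.polyfit multiplies residuals by w before squaring, so w = 1/x
# corresponds to chi-square weights of 1/x^2.
slope, intercept = np.polyfit(x, y, deg=1, w=1.0 / x)
print(slope, intercept)

# Back-calculate an unknown sample from its instrument response:
print((120.0 - intercept) / slope)
```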

  16. A fast random walk algorithm for computing diffusion-weighted NMR signals in multi-scale porous media: A feasibility study for a Menger sponge

    International Nuclear Information System (INIS)

    Grebenkov, Denis S.; Nguyen, Hang T.; Li, Jing-Rebecca

    2013-01-01

    A fast random walk (FRW) algorithm is adapted to compute diffusion-weighted NMR signals in a Menger sponge, which is formed by multiple channels of broadly distributed sizes and is often considered a model for soils and porous materials. The self-similar structure of a Menger sponge allows for rapid simulations that were not feasible with other numerical techniques. The role of multiple length scales on diffusion-weighted NMR signals is investigated. (authors)

  17. Assessment of algorithms for inferring positional weight matrix motifs of transcription factor binding sites using protein binding microarray data.

    Directory of Open Access Journals (Sweden)

    Yaron Orenstein

    The new technology of protein binding microarrays (PBMs) allows simultaneous measurement of the binding intensities of a transcription factor to tens of thousands of synthetic double-stranded DNA probes, covering all possible 10-mers. A key computational challenge is inferring the binding motif from these data. We present a systematic comparison of four methods developed specifically for reconstructing a binding site motif, represented as a positional weight matrix, from PBM data. The reconstructed motifs were evaluated in terms of three criteria: concordance with reference motifs from the literature, and ability to predict in vivo and in vitro binding. The evaluation encompassed over 200 transcription factors and some 300 assays. The results show a tradeoff between how the methods perform according to the different criteria, and a dichotomy of method types. Algorithms that construct motifs with low information content predict PBM probe ranking more faithfully, while methods that produce highly informative motifs match reference motifs better. Interestingly, in predicting high-affinity binding, all methods give far poorer results for in vivo assays compared to in vitro assays.
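
    Scoring a candidate site against a reconstructed positional weight matrix is the basic downstream operation; a sketch with a toy 3-column PWM and a uniform background follows:

```python
import numpy as np

BASES = {"A": 0, "C": 1, "G": 2, "T": 3}

def pwm_score(seq, pwm, background=0.25):
    """Log-odds score of `seq` under a positional weight matrix.

    pwm has shape (4, L): per-position base probabilities (columns sum to 1).
    The matrix below is a toy; the paper's PWMs are inferred from PBM data.
    """
    idx = [BASES[b] for b in seq]
    probs = pwm[idx, range(len(seq))]
    return float(np.sum(np.log2(probs / background)))

pwm = np.array([[0.80, 0.10, 0.70],   # A
                [0.10, 0.10, 0.10],   # C
                [0.05, 0.70, 0.10],   # G
                [0.05, 0.10, 0.10]])  # T
print(pwm_score("AGA", pwm))  # a high-affinity site scores well above 0
```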

  18. A weighted sampling algorithm for the design of RNA sequences with targeted secondary structure and nucleotide distribution.

    Science.gov (United States)

    Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme

    2013-07-01

    The design of RNA sequences folding into predefined secondary structures is a milestone for many synthetic biology and gene therapy studies. Most of the current software uses similar local search strategies (i.e. a random seed is progressively adapted to acquire the desired folding properties) and, more importantly, does not allow the user to explicitly control the nucleotide distribution, such as the GC-content, of the sequences. The latter, however, is an important criterion for large-scale applications, as it could presumably be used to design sequences with better transcription rates and/or structural plasticity. In this article, we introduce IncaRNAtion, a novel algorithm to design RNA sequences folding into target secondary structures with a predefined nucleotide distribution. IncaRNAtion uses a global sampling approach and weighted sampling techniques. We show that our approach is fast (i.e. running time comparable to or better than local search methods), seedless (we remove the bias of the seed in local search heuristics), and successfully generates high-quality sequences (i.e. thermodynamically stable) for any GC-content. To complete this study, we develop a hybrid method combining our global sampling approach with local search strategies. Remarkably, our 'glocal' methodology outperforms both local and global approaches for sampling sequences with a specific GC-content and target structure. IncaRNAtion is available at csb.cs.mcgill.ca/incarnation/. Supplementary data are available at Bioinformatics online.
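
    A drastically simplified stand-in for the nucleotide-distribution control: draw G or C with probability equal to the target GC-content. IncaRNAtion additionally constrains the sample to the target secondary structure, which is omitted here:

```python
import random

def sample_sequence(length, gc_content, rng=random.Random(0)):
    """Draw an RNA sequence whose expected GC-content equals gc_content.

    A much simplified stand-in for IncaRNAtion's weighted sampling: no
    target secondary structure, only the nucleotide distribution.
    """
    bases = []
    for _ in range(length):
        if rng.random() < gc_content:
            bases.append(rng.choice("GC"))
        else:
            bases.append(rng.choice("AU"))
    return "".join(bases)

seq = sample_sequence(60, gc_content=0.7)
print(seq, (seq.count("G") + seq.count("C")) / len(seq))
```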

  19. Design and Implementation of a Smart LED Lighting System Using a Self Adaptive Weighted Data Fusion Algorithm

    Science.gov (United States)

    Sung, Wen-Tsai; Lin, Jia-Syun

    2013-01-01

    This work aims to develop a smart LED lighting system that is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of the system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS 232/485 and a human-computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is performed through a self-adaptive weighted data fusion algorithm. Low variation in the data fusion together with high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by commands given on the human-computer interface, and the reading on the multimeter can be displayed thereon via the server. The proposed smart LED lighting system can be remotely controlled, and a self-learning mode can be enabled by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.

  20. Design and Implementation of a Smart LED Lighting System Using a Self Adaptive Weighted Data Fusion Algorithm

    Directory of Open Access Journals (Sweden)

    Wen-Tsai Sung

    2013-12-01

    This work aims to develop a smart LED lighting system that is remotely controlled by Android apps via handheld devices, e.g., smartphones, tablets, and so forth. The status of energy use is reflected by readings displayed on a handheld device, and it is treated as a criterion in the lighting mode design of the system. A multimeter, a wireless light dimmer, an IR learning remote module, etc. are connected to a server by means of RS 232/485 and a human-computer interface on a touch screen. The wireless data communication is designed to operate in compliance with the ZigBee standard, and signal processing on sensed data is performed through a self-adaptive weighted data fusion algorithm. Low variation in the data fusion together with high stability is experimentally demonstrated in this work. The wireless light dimmer as well as the IR learning remote module can be instructed directly by commands given on the human-computer interface, and the reading on the multimeter can be displayed thereon via the server. The proposed smart LED lighting system can be remotely controlled, and a self-learning mode can be enabled by a single handheld device via WiFi transmission. Hence, this proposal is validated as an approach to power monitoring for home appliances, and is demonstrated as a digital home network in consideration of energy efficiency.

  1. A novel grooming algorithm with the adaptive weight and load balancing for dynamic holding-time-aware traffic in optical networks

    Science.gov (United States)

    Xu, Zhanqi; Huang, Jiangjiang; Zhou, Zhiqiang; Ding, Zhe; Ma, Tao; Wang, Junping

    2013-10-01

    To maximize the resource utilization of optical networks, dynamic traffic grooming, which can efficiently multiplex many low-speed services arriving dynamically onto high-capacity optical channels, has been studied extensively and used widely. However, the link weights in existing research can be improved, since they do not adapt well to the network status and load. By exploiting the information on the holding times of the preexisting and new lightpaths, and the requested bandwidth of a user service, this paper proposes a grooming algorithm using Adaptively Weighted Links for Holding-Time-Aware (HTA) traffic (abbreviated as AWL-HTA), especially in the setup process of new lightpath(s). Therefore, the proposed algorithm can not only establish a lightpath that uses network resources efficiently, but also achieve load balancing. In this paper, the key issues of link weight assignment and the procedure within the AWL-HTA are addressed in detail. Comprehensive simulation and experimental results show that the proposed algorithm has a much lower blocking ratio and latency than other existing algorithms.

  2. An algorithmic approach for the dynamic reliability analysis of non-repairable multi-state weighted k-out-of-n:G system

    International Nuclear Information System (INIS)

    Eryilmaz, Serkan; Rıza Bozbulut, Ali

    2014-01-01

    In this paper, we study a multi-state weighted k-out-of-n:G system model in a dynamic setup. In particular, we study the random time spent by the system with a minimum performance level of k. Our method is based on ordering the lifetimes of the system's components in different state subsets. Using this ordering along with the Monte-Carlo simulation algorithm, we obtain estimates of the mean and survival function of the time spent by the system in state k or above. We present illustrative computational results when the degradation in the components follows a Markov process. - Highlights: • A multi-state weighted k-out-of-n:G system is studied. • A Monte-Carlo simulation algorithm is provided for the dynamic analysis. • Numerics are presented when the components' degradation follow the Markov process

  3. Determination of Optimal Initial Weights of an Artificial Neural Network by Using the Harmony Search Algorithm: Application to Breakwater Armor Stones

    Directory of Open Access Journals (Sweden)

    Anzy Lee

    2016-05-01

    In this study, an artificial neural network (ANN) model is developed to predict the stability number of breakwater armor stones based on the experimental data reported by Van der Meer in 1988. The harmony search (HS) algorithm is used to determine the near-global optimal initial weights in the training of the model, and stratified sampling is used to sample the training data. A total of 25 HS-ANN hybrid models are tested with different combinations of HS algorithm parameters. The HS-ANN models are compared with the conventional ANN model, which uses a Monte Carlo simulation to determine the initial weights. Each model is run 50 times and statistical analyses are conducted on the model results. The present models using stratified sampling are shown to be more accurate than those of previous studies. The statistical analyses show that the HS-ANN model with proper values of the HS algorithm parameters can give much better and more stable predictions than the conventional ANN model.

  4. New algorithm to reduce the number of computing steps in reliability formula of Weighted-k-out-of-n system

    Directory of Open Access Journals (Sweden)

    Tatsunari Ohkura

    2007-02-01

    In the disjoint products version of reliability analysis of weighted k-out-of-n systems, it is necessary to determine the order in which the weights of components are to be considered. The k-out-of-n:G (F) system consists of n components; each component has its own probability and positive integer weight such that the system is operational (failed) if and only if the total weight of the operational (failed) components is at least k. This paper designs a method to compute the reliability in O(nk) computing time and O(nk) memory space. The proposed method expresses the system reliability in fewer product terms than those already published.
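
    The O(nk) computation mentioned above can be realized with a standard dynamic program over the capped total weight; a sketch follows (capping at k, since any working weight of at least k already makes the :G system operational):

```python
def weighted_k_out_of_n_G(p, w, k):
    """Reliability of a weighted k-out-of-n:G system by dynamic programming.

    R[j] = probability that the working components seen so far have total
    weight j (capped at k). Runs in O(nk) time, matching the paper's bound,
    and O(k) space.
    """
    R = [1.0] + [0.0] * k          # before any component: weight 0 w.p. 1
    for pi, wi in zip(p, w):
        nxt = [0.0] * (k + 1)
        for j in range(k + 1):
            nxt[j] += (1 - pi) * R[j]              # component failed
            nxt[min(k, j + wi)] += pi * R[j]       # component works
        R = nxt
    return R[k]                     # total working weight reached k

print(weighted_k_out_of_n_G(p=[0.9, 0.8, 0.7], w=[2, 1, 3], k=3))  # 0.916
```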

  5. Using a combination of weighting factor method and imperialist competitive algorithm to improve speed and enhance process of reloading pattern optimization of VVER-1000 reactors in transient cycles

    Energy Technology Data Exchange (ETDEWEB)

    Rahmani, Yashar, E-mail: yashar.rahmani@gmail.com [Department of Physics, Faculty of Engineering, Islamic Azad University, Sari Branch, Sari (Iran, Islamic Republic of); Shahvari, Yaser [Department of Computer Engineering, Payame Noor University (PNU), P.O. Box 19395-3697, Tehran (Iran, Islamic Republic of); Kia, Faezeh [Golestan Institute of Higher Education, Gorgan 49139-83635 (Iran, Islamic Republic of)

    2017-03-15

    Highlights: • This article was an attempt to optimize the reloading pattern of the Bushehr VVER-1000 reactor. • A combination of the weighting factor method and the imperialist competitive algorithm was used. • The speed of optimization and the desirability of the proposed pattern increased considerably. • To evaluate arrangements, a coupling of the WIMSD5-B, CITATION-LDI2 and WERL codes was used. • Results reflected the considerable superiority of the proposed method over direct optimization. - Abstract: In this research, an innovative solution is described in which a combination of the new imperialist competitive algorithm and the weighting factor method is used to improve speed and increase the globality of the search in reloading pattern optimization of VVER-1000 reactors in transient cycles, and even to obtain more desirable results than the conventional direct method. In this regard, to reduce the scope of the searchable arrangements, first, using the weighting factor method and based on the values of these coefficients in each of the 16 types of loadable fuel assemblies in the second cycle, the fuel assemblies were classified into more limited groups. In consequence, the types of fuel assemblies were reduced from 16 to 6, and the number of possible arrangements was reduced considerably. Afterwards, in the first phase of optimization, the imperialist competitive algorithm was used to propose an optimum reloading pattern with 6 groups. In the second phase, the algorithm was reused to find a desirable placement of the subset assemblies of each group in the optimum arrangement obtained from the previous phase, thus retransforming the optimum arrangement from the virtual 6-group mode to the real mode with 16 fuel types. In this research, the optimization process was conducted in two states. In the first state, it was tried to obtain an arrangement with the maximum effective multiplication factor and the smallest maximum power peaking factor.

  6. A robot and control algorithm that can synchronously assist in naturalistic motion during body-weight-supported gait training following neurologic injury.

    Science.gov (United States)

    Aoyagi, Daisuke; Ichinose, Wade E; Harkema, Susan J; Reinkensmeyer, David J; Bobrow, James E

    2007-09-01

    Locomotor training using body weight support on a treadmill and manual assistance is a promising rehabilitation technique following neurological injuries, such as spinal cord injury (SCI) and stroke. Previous robots that automate this technique impose constraints on naturalistic walking due to their kinematic structure, and are typically operated in a stiff mode, limiting the ability of the patient or human trainer to influence the stepping pattern. We developed a pneumatic gait training robot that allows for a full range of natural motion of the legs and pelvis during treadmill walking, and provides compliant assistance. However, we observed an unexpected consequence of the device's compliance: unimpaired and SCI individuals invariably began walking out-of-phase with the device. Thus, the robot perturbed rather than assisted stepping. To address this problem, we developed a novel algorithm that synchronizes the device in real-time to the actual motion of the individual by sensing the state error and adjusting the replay timing to reduce this error. This paper describes data from experiments with individuals with SCI that demonstrate the effectiveness of the synchronization algorithm, and the potential of the device for relieving the trainers of strenuous work while maintaining naturalistic stepping.

  7. Optimization of low-dose protocol in thoracic aorta CTA: weighting of adaptive statistical iterative reconstruction (ASIR) algorithm and scanning parameters

    International Nuclear Information System (INIS)

    Zhao Yongxia; Chang Jin; Zuo Ziwei; Zhang Changda; Zhang Tianle

    2014-01-01

    Objective: To investigate the best weighting of the adaptive statistical iterative reconstruction (ASIR) algorithm and optimized low-dose scanning parameters in thoracic aorta CT angiography (CTA). Methods: A total of 120 patients with a body mass index (BMI) of 19-24 were randomly divided into 6 groups. All patients underwent thoracic aorta CTA with a GE Discovery CT 750 HD scanner (scan range 290-330 mm). The default parameters (100 kV, 240 mAs) were applied in Group 1. Reconstructions were performed with different weightings of ASIR (10%-100% in 10% steps), and the signal-to-noise ratio (S/N) and contrast-to-noise ratio (C/N) of the images were calculated. The image series were evaluated by 2 independent radiologists on a 5-point scale, and the best weighting was determined. Then the mAs in Groups 2-6 were set to 210, 180, 150, 120 and 90 with the kilovoltage at 100. The CTDIvol and DLP of every scan series were recorded and the effective dose (E) was calculated. The S/N and C/N were calculated and the image quality was assessed by two radiologists. Results: The best weighting of ASIR was 60% at 100 kV, 240 mAs. Under 60% ASIR and 100 kV, the image quality scores from 240 mAs to 90 mAs were (4.78±0.30)-(3.15±0.23). The CTDIvol and DLP were 12.64-4.41 mGy and 331.81-128.27 mGy·cm, and E was 4.98-1.92 mSv. The image qualities among Groups 1-5 were not significantly different (F = 5.365, P > 0.05), but the CTDIvol and DLP of Group 5 were reduced by 37.0% and 36.9%, respectively, compared with Group 1. Conclusions: In thoracic aorta CT angiography, the best weighting of ASIR is 60%, and 120 mAs is the best mAs with 100 kV in patients with BMI 19-24. (authors)

  8. Ultra-Short-Term Wind-Power Forecasting Based on the Weighted Random Forest Optimized by the Niche Immune Lion Algorithm

    Directory of Open Access Journals (Sweden)

    Dongxiao Niu

    2018-04-01

    The continuous increase in energy consumption has made the potential of wind-power generation tremendous. However, the obvious intermittency and randomness of wind speed result in fluctuation of the output power of a wind farm, seriously affecting the power quality. Therefore, accurate prediction of wind power in advance can improve the ability of wind-power integration and enhance the reliability of the power system. In this paper, a model of wavelet decomposition (WD) and weighted random forest (WRF) optimized by the niche immune lion algorithm (NILA-WRF) is presented for ultra-short-term wind power prediction. First, the original series of wind speed and power are decomposed into several sub-series by WD, because the original series have no obvious daily characteristics. Then, the model parameters are set and the model is trained with the decomposed sub-series of wind speed and wind power. Finally, the WD-NILA-WRF model is used to predict the wind power of the corresponding sub-series, and the result is reconstructed to obtain the final prediction. The WD-NILA-WRF model combines the advantages of each single model, using WD for signal de-noising and the niche immune lion algorithm (NILA) to improve the model's optimization efficiency. In this paper, two empirical analyses are carried out to prove the accuracy of the model, and the experimental results verify the proposed model's validity and superiority compared with the back propagation (BP) neural network, support vector machine (SVM), RF, and NILA-RF, indicating that the proposed method is superior in cases influenced by noise and unstable factors, and possesses excellent generalization ability and robustness.
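
    The decompose-predict-reconstruct pipeline can be sketched with pywt and scikit-learn; plain random forests stand in for the WRF, and the NILA weight optimization is omitted:

```python
import numpy as np
import pywt
from sklearn.ensemble import RandomForestRegressor

def wavelet_subseries(x, wavelet="db4", level=3):
    """Split x into level+1 full-length sub-series (approximation plus
    details) that sum back to x, by zeroing all other coefficient bands."""
    coeffs = pywt.wavedec(x, wavelet, level=level)
    subs = []
    for i in range(len(coeffs)):
        kept = [c if j == i else np.zeros_like(c) for j, c in enumerate(coeffs)]
        subs.append(pywt.waverec(kept, wavelet)[: len(x)])
    return subs

rng = np.random.default_rng(0)
power = np.sin(np.linspace(0, 40, 800)) + 0.3 * rng.normal(size=800)  # toy series

# One random forest per sub-series, trained on 8 lagged values; the
# one-step-ahead sub-series forecasts are summed to reconstruct the forecast.
prediction = 0.0
for s in wavelet_subseries(power):
    X = np.column_stack([s[i:len(s) - 8 + i] for i in range(8)])
    y = s[8:]
    rf = RandomForestRegressor(n_estimators=50, random_state=0).fit(X, y)
    prediction += rf.predict(s[-8:].reshape(1, -1))[0]
print(prediction)
```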

  9. Multiplex protein pattern unmixing using a non-linear variable-weighted support vector machine as optimized by a particle swarm optimization algorithm.

    Science.gov (United States)

    Yang, Qin; Zou, Hong-Yan; Zhang, Yan; Tang, Li-Juan; Shen, Guo-Li; Jiang, Jian-Hui; Yu, Ru-Qin

    2016-01-15

    Most proteins localize to more than one organelle in a cell. Unmixing the localization patterns of proteins is critical for understanding protein functions and other vital cellular processes. Herein, a non-linear machine learning technique is proposed for the first time for protein pattern unmixing. The variable-weighted support vector machine (VW-SVM) is a demonstrably robust modeling technique with flexible and rational variable selection. Optimized by a global stochastic optimization technique, the particle swarm optimization (PSO) algorithm, VW-SVM becomes an adaptive, parameter-free method for automated unmixing of protein subcellular patterns. Results obtained by pattern unmixing of a set of fluorescence microscope images of cells indicate that VW-SVM as optimized by PSO is able to extract useful pattern features by optimally rescaling each variable for non-linear SVM modeling, consequently leading to improved performance in multiplex protein pattern unmixing compared with conventional SVM and other existing pattern unmixing methods.

  10. A cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    2000-01-01

    A cluster algorithm for graphs, called the Markov Cluster algorithm (MCL algorithm), is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight) and directed.

  11. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
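
    A sketch of the update rule described above, with an illustrative step size and clipping threshold; the three-level quantizer is my reading of the scheme described, not the paper's exact formulation:

```python
import numpy as np

def mclms_update(w, x, d, mu=0.01, threshold=0.5):
    """One weight update of a modified clipped LMS filter.

    The input vector is quantized to three levels {-1, 0, +1} by threshold
    clipping, and the quantized vector replaces x in the LMS update.
    mu and threshold are illustrative values."""
    e = d - np.dot(w, x)                                   # a-priori error
    q = np.where(np.abs(x) > threshold, np.sign(x), 0.0)   # 3-level quantizer
    return w + mu * e * q, e

# Identify a 4-tap FIR system from noisy observations.
rng = np.random.default_rng(3)
h = np.array([0.5, -0.3, 0.2, 0.1])
w = np.zeros(4)
x_hist = np.zeros(4)
for n in range(5000):
    x_hist = np.r_[rng.normal(), x_hist[:-1]]  # shift in a new input sample
    d = h @ x_hist + 0.01 * rng.normal()
    w, _ = mclms_update(w, x_hist, d)
print(np.round(w, 2))  # approaches h
```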

  12. TU-AB-BRA-03: Atlas-Based Algorithms with Local Registration-Goodness Weighting for MRI-Driven Electron Density Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Farjam, R; Tyagi, N [Memorial Sloan-Kettering Cancer Center, New York, NY (United States); Veeraraghavan, H; Apte, A; Zakian, K; Deasy, J [Memorial Sloan Kettering Cancer Center, New York, NY (United States); Hunt, M [Mem Sloan-Kettering Cancer Center, New York, NY (United States)

    2016-06-15

    Purpose: To develop image-analysis algorithms to synthesize CT with accurate electron densities for MR-only radiotherapy of head & neck (H&N) and pelvis anatomies. Methods: CT and 3T-MRI (Philips, mDixon sequence) scans were randomly selected from a pool of H&N (n=11) and pelvis (n=12) anatomies to form an atlas. All MRIs were pre-processed to eliminate scanner- and patient-induced intensity inhomogeneities and to standardize their intensity histograms. CT and MRI for each patient were then co-registered to construct CT-MRI atlases. For more accurate CT-MR fusion, bone intensities in CT were suppressed to improve the similarity between CT and MRI. For a new patient, all CT-MRI atlases are first deformed onto the new patient's MRI. A newly developed generalized registration error (GRE) metric is then calculated as a measure of local registration accuracy. The synthetic CT value at each point is a 1/GRE-weighted average of the CTs from all CT-MR atlases. For evaluation, the mean absolute error (MAE) between the original and synthetic CT (generated in a leave-one-out scheme) was computed. The planning dose from the original and synthetic CT was also compared. Results: For H&N patients, MAE was 67±9, 114±22, and 116±9 HU over the entire-CT, air, and bone regions, respectively. For the pelvis anatomy, MAE was 47±5 and 146±14 for the entire and bone regions. In comparison with MIRADA Medical, an FDA-approved registration tool, we found that our proposed registration strategy reduces MAE by ∼30% and ∼50% over the entire and bone regions, respectively. The GRE-weighted strategy further lowers MAE by ∼15% to ∼40%. Our primary dose calculation also showed highly consistent results between the original and synthetic CT. Conclusion: We have developed a novel image-analysis technique to synthesize CT for H&N and pelvis anatomies. Our proposed image fusion strategy and GRE metric help generate more accurate synthetic CT using locally more similar atlases. (Support: Philips)

  13. TU-AB-BRA-03: Atlas-Based Algorithms with Local Registration-Goodness Weighting for MRI-Driven Electron Density Mapping

    International Nuclear Information System (INIS)

    Farjam, R; Tyagi, N; Veeraraghavan, H; Apte, A; Zakian, K; Deasy, J; Hunt, M

    2016-01-01

    Purpose: To develop image-analysis algorithms to synthesize CT with accurate electron densities for MR-only radiotherapy of head & neck (H&N) and pelvis anatomies. Methods: CT and 3T-MRI (Philips, mDixon sequence) scans were randomly selected from a pool of H&N (n=11) and pelvis (n=12) anatomies to form an atlas. All MRIs were pre-processed to eliminate scanner- and patient-induced intensity inhomogeneities and to standardize their intensity histograms. CT and MRI for each patient were then co-registered to construct CT-MRI atlases. For more accurate CT-MR fusion, bone intensities in CT were suppressed to improve the similarity between CT and MRI. For a new patient, all CT-MRI atlases are first deformed onto the new patient's MRI. A newly developed generalized registration error (GRE) metric is then calculated as a measure of local registration accuracy. The synthetic CT value at each point is a 1/GRE-weighted average of the CTs from all CT-MR atlases. For evaluation, the mean absolute error (MAE) between the original and synthetic CT (generated in a leave-one-out scheme) was computed. The planning dose from the original and synthetic CT was also compared. Results: For H&N patients, MAE was 67±9, 114±22, and 116±9 HU over the entire-CT, air, and bone regions, respectively. For the pelvis anatomy, MAE was 47±5 and 146±14 for the entire and bone regions. In comparison with MIRADA Medical, an FDA-approved registration tool, we found that our proposed registration strategy reduces MAE by ∼30% and ∼50% over the entire and bone regions, respectively. The GRE-weighted strategy further lowers MAE by ∼15% to ∼40%. Our primary dose calculation also showed highly consistent results between the original and synthetic CT. Conclusion: We have developed a novel image-analysis technique to synthesize CT for H&N and pelvis anatomies. Our proposed image fusion strategy and GRE metric help generate more accurate synthetic CT using locally more similar atlases. (Support: Philips)

  14. Weighted network modules

    International Nuclear Information System (INIS)

    Farkas, Illes; Abel, Daniel; Palla, Gergely; Vicsek, Tamas

    2007-01-01

    The inclusion of link weights in the analysis of network properties allows deeper insight into the (often overlapping) modular structure of real-world webs. We introduce a clustering algorithm, the clique percolation method with weights (CPMw), for weighted networks, based on the concept of percolating k-cliques with sufficiently high intensity. The algorithm allows overlaps between the modules. First, we give detailed analytical and numerical results on the critical point of weighted k-clique percolation on (weighted) Erdős-Rényi graphs. Then, for a scientist collaboration web and a stock correlation graph, we compute three-link weight correlations and, with the CPMw, the weighted modules. After reshuffling the link weights in both networks and computing the same quantities for the randomized control graphs as well, we show that groups of three or more strong links prefer to cluster together in both original graphs.

  15. A new cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    1998-01-01

    A new cluster algorithm for graphs, called the Markov Cluster algorithm (MCL algorithm), is introduced. The graphs may be both weighted (with nonnegative weight) and directed. Let G be such a graph. The MCL algorithm simulates flow in G by first identifying G in a ...

  16. Knowledge discovery from patients' behavior via clustering-classification algorithms based on weighted eRFM and CLV model: An empirical study in public health care services.

    Science.gov (United States)

    Zare Hosseini, Zeinab; Mohammadzadeh, Mahdi

    2016-01-01

    The rapid growth of information technology (IT) motivates and creates competitive advantages in the health care industry. Nowadays, many hospitals try to build successful customer relationship management (CRM) to recognize target and potential patients, increase patient loyalty and satisfaction, and finally maximize their profitability. Many hospitals have large data warehouses containing customer demographic and transaction information. Data mining techniques can be used to analyze these data and discover hidden knowledge about customers. This research develops an extended RFM model, namely RFML (added parameter: Length), based on health care services for a public-sector hospital in Iran, with the idea that there is a contrast between patient and customer loyalty, to estimate the customer lifetime value (CLV) of each patient. We used the Two-step and K-means algorithms as clustering methods and a decision tree (CHAID) as the classification technique to segment the patients and find the target, potential, and loyal customers in order to implement and strengthen CRM. Two approaches are used for classification: first, the result of clustering is considered as the decision attribute in the classification process; second, the result of segmentation based on the CLV value of patients (estimated by RFML) is considered as the decision attribute. Finally, the results of the CHAID algorithm show the significant hidden rules and identify existing patterns of hospital consumers.

  17. Knowledge discovery from patients’ behavior via clustering-classification algorithms based on weighted eRFM and CLV model: An empirical study in public health care services

    Science.gov (United States)

    Zare Hosseini, Zeinab; Mohammadzadeh, Mahdi

    2016-01-01

    The rapid growth of information technology (IT) motivates and creates competitive advantages in the health care industry. Nowadays, many hospitals try to build successful customer relationship management (CRM) to recognize target and potential patients, increase patient loyalty and satisfaction, and finally maximize their profitability. Many hospitals have large data warehouses containing customer demographic and transaction information. Data mining techniques can be used to analyze these data and discover hidden knowledge about customers. This research develops an extended RFM model, namely RFML (added parameter: Length), based on health care services for a public-sector hospital in Iran, with the idea that there is a contrast between patient and customer loyalty, to estimate the customer lifetime value (CLV) of each patient. We used the Two-step and K-means algorithms as clustering methods and a decision tree (CHAID) as the classification technique to segment the patients and find the target, potential, and loyal customers in order to implement and strengthen CRM. Two approaches are used for classification: first, the result of clustering is considered as the decision attribute in the classification process; second, the result of segmentation based on the CLV value of patients (estimated by RFML) is considered as the decision attribute. Finally, the results of the CHAID algorithm show the significant hidden rules and identify existing patterns of hospital consumers. PMID:27610177

  18. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  19. Measurement of the top quark mass using dilepton events and a neutrino weighting algorithm with the DOe experiment at the Tevatron (Run II)

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, J.

    2007-07-01

    Several measurements of the top quark mass in dilepton final states with the DØ experiment are presented. The theoretical and experimental properties of the top quark are described, together with a brief introduction to the Standard Model of particle physics and the physics of hadron collisions. An overview of the experimental setup is given. The Tevatron at Fermilab is presently the highest-energy hadron collider in the world, with a center-of-mass energy of 1.96 TeV. There are two main experiments, called CDF and DØ. A description of the components of the multipurpose DØ detector is given. The reconstruction of simulated events and data events is explained, and the criteria for the identification of electrons, muons, jets, and missing transverse energy are given. The kinematics of the dilepton final state is underconstrained. Therefore, the top quark mass is extracted by the so-called Neutrino Weighting method. This method is introduced and several different approaches are described, compared, and enhanced. Results for the international summer conferences 2006 and winter 2007 are presented. The top quark mass measurement for the combination of all three dilepton channels with a dataset of 1.05 fb^-1 yields: m_top = 172.5 ± 5.5 (stat.) ± 5.8 (syst.) GeV. This result is presently the most precise top quark mass measurement of the DØ experiment in the dilepton channel. It entered the top quark mass world average of March 2007. (orig.)
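
    In broad strokes, neutrino weighting hypothesizes a top mass, scans over assumed neutrino kinematics (e.g., pseudorapidities), solves the event kinematics, and weights each solution by how well the summed neutrino transverse momenta match the measured missing transverse energy. A heavily simplified sketch of that weighting, with the kinematic solver abstracted behind a hypothetical solve_neutrinos function and illustrative resolutions:

    ```python
    import math

    def met_weight(nu_px, nu_py, met_x, met_y, sigma_x=15.0, sigma_y=15.0):
        """Gaussian weight comparing the predicted neutrino momentum sum to the
        measured missing transverse energy (GeV; resolutions illustrative)."""
        return (math.exp(-(nu_px - met_x) ** 2 / (2 * sigma_x ** 2)) *
                math.exp(-(nu_py - met_y) ** 2 / (2 * sigma_y ** 2)))

    def mass_weight(event, top_mass, eta_grid, solve_neutrinos):
        """Sum solution weights over a grid of assumed neutrino pseudorapidities.
        `solve_neutrinos` stands in for the kinematic solver (not shown here)."""
        total = 0.0
        for eta1 in eta_grid:
            for eta2 in eta_grid:
                for nu1, nu2 in solve_neutrinos(event, top_mass, eta1, eta2):
                    total += met_weight(nu1[0] + nu2[0], nu1[1] + nu2[1],
                                        event["met_x"], event["met_y"])
        return total  # the weight curve versus top_mass peaks near the true mass
    ```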

  20. Greedy algorithms with weights for construction of partial association rules

    KAUST Repository

    Moshkov, Mikhail; Piliszczu, Marcin; Zielosko, Beata Marta

    2009-01-01

    This paper is devoted to the study of approximate algorithms for minimization of the total weight of attributes occurring in partial association rules. We consider mainly greedy algorithms with weights for construction of rules. The paper contains bounds on the precision of these algorithms and bounds on the minimal weight of partial association rules, based on information obtained during the greedy algorithm run.

  1. Greedy algorithms with weights for construction of partial association rules

    KAUST Repository

    Moshkov, Mikhail

    2009-09-10

    This paper is devoted to the study of approximate algorithms for minimization of the total weight of attributes occurring in partial association rules. We consider mainly greedy algorithms with weights for construction of rules. The paper contains bounds on the precision of these algorithms and bounds on the minimal weight of partial association rules, based on information obtained during the greedy algorithm run.
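
    The selection rule behind such greedy-with-weights constructions is analogous to weighted set cover: repeatedly pick the attribute (set) with the smallest weight per newly covered element. A minimal sketch of that rule, not the paper's exact construction:

    ```python
    def greedy_weighted_cover(universe, sets, weights):
        """Pick sets minimizing weight per newly covered element
        until the universe is covered (classic greedy heuristic)."""
        uncovered, chosen = set(universe), []
        while uncovered:
            best = min((name for name in sets if sets[name] & uncovered),
                       key=lambda name: weights[name] / len(sets[name] & uncovered))
            chosen.append(best)
            uncovered -= sets[best]
        return chosen

    sets = {"A": {1, 2, 3}, "B": {3, 4}, "C": {4, 5}}
    weights = {"A": 3.0, "B": 1.0, "C": 1.5}
    print(greedy_weighted_cover({1, 2, 3, 4, 5}, sets, weights))  # ['B', 'A', 'C']
    ```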

  2. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. They present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  3. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
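
    For readers new to the idea, a compact sketch of the selection, crossover, and mutation loop on a toy bit-string problem (maximize the number of ones); all parameters are illustrative:

    ```python
    import random

    def evolve(pop_size=30, genome_len=20, generations=60, mutation_rate=0.02):
        """Toy genetic algorithm: fitness = number of ones in the bit string."""
        pop = [[random.randint(0, 1) for _ in range(genome_len)] for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=sum, reverse=True)           # rank by fitness
            parents = pop[:pop_size // 2]             # truncation selection
            children = []
            while len(children) < pop_size:
                a, b = random.sample(parents, 2)
                cut = random.randrange(1, genome_len)  # one-point crossover
                child = a[:cut] + b[cut:]
                for i in range(genome_len):            # bit-flip mutation
                    if random.random() < mutation_rate:
                        child[i] ^= 1
                children.append(child)
            pop = children
        return max(pop, key=sum)

    print(sum(evolve()))  # best fitness found, at most 20
    ```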

  4. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm, called the QuickFBP algorithm, is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and FBP algorithms are defined and proved for: (1) single-output neural networks in the case of training patterns with different targets; and (2) multiple-output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weight training process (quasi-unsupervised learning) by establishing the target value(s) from the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large neural network illustrates the significantly greater training speed of the QuickFBP compared with the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, it is broadly applicable in areas such as adaptive and adaptable interactive systems and data mining.

  5. Flow enforcement algorithms for ATM networks

    DEFF Research Database (Denmark)

    Dittmann, Lars; Jacobsen, Søren B.; Moth, Klaus

    1991-01-01

    Four measurement algorithms for flow enforcement in asynchronous transfer mode (ATM) networks are presented. The algorithms are the leaky bucket, the rectangular sliding window, the triangular sliding window, and the exponentially weighted moving average. A comparison, based partly on teletraffic...
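
    Of the four policers listed, the leaky bucket is the simplest to illustrate: each arriving cell fills a bucket that drains at a fixed rate, and cells that would overflow the bucket are flagged as non-conforming. A minimal sketch with illustrative parameters:

    ```python
    def leaky_bucket(arrival_times, rate=1.0, depth=5.0):
        """Classic leaky-bucket policer: returns True per cell if conforming."""
        level, last_t, verdicts = 0.0, 0.0, []
        for t in arrival_times:
            level = max(0.0, level - rate * (t - last_t))  # bucket drains over time
            last_t = t
            if level + 1.0 <= depth:    # each cell adds one unit of fluid
                level += 1.0
                verdicts.append(True)
            else:
                verdicts.append(False)  # non-conforming: drop or tag the cell
        return verdicts

    print(leaky_bucket([0.0, 0.1, 0.2, 0.3, 5.0], rate=0.5, depth=2.0))
    # [True, True, False, False, True]
    ```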

  6. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  7. Weight Management

    Science.gov (United States)

    Obesity is a chronic condition that affects more ...

  8. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  9. "Accelerated Perceptron": A Self-Learning Linear Decision Algorithm

    OpenAIRE

    Zuev, Yu. A.

    2003-01-01

    The class of linear decision rules is studied. A new algorithm for weight correction, called an "accelerated perceptron", is proposed. In contrast to the classical Rosenblatt perceptron, this algorithm modifies the weight vector at each step. The algorithm may be employed both in learning and in self-learning modes. The theoretical aspects of the behaviour of the algorithm are studied when the algorithm is used for the purpose of increasing decision reliability by means of weighted voting. I...
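
    For contrast with the accelerated variant described above, the classical perceptron rule updates the weight vector only on misclassified samples; a sketch of that baseline:

    ```python
    import numpy as np

    def perceptron(X, y, epochs=20, lr=1.0):
        """Classical Rosenblatt perceptron: update weights only on mistakes.
        X: (n_samples, n_features); y: labels in {-1, +1}."""
        w = np.zeros(X.shape[1] + 1)                   # last entry is the bias
        Xb = np.hstack([X, np.ones((X.shape[0], 1))])
        for _ in range(epochs):
            for xi, yi in zip(Xb, y):
                if yi * (w @ xi) <= 0:                 # misclassified (or on boundary)
                    w += lr * yi * xi                  # move the boundary toward xi
        return w

    X = np.array([[2.0, 1.0], [1.0, 3.0], [-1.0, -2.0], [-2.0, -1.0]])
    print(perceptron(X, np.array([1, 1, -1, -1])))
    ```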

  10. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  11. Handling Dynamic Weights in Weighted Frequent Pattern Mining

    Science.gov (United States)

    Ahmed, Chowdhury Farhan; Tanbeer, Syed Khairuzzaman; Jeong, Byeong-Soo; Lee, Young-Koo

    Even though weighted frequent pattern (WFP) mining is more effective than traditional frequent pattern mining because it can consider different semantic significances (weights) of items, existing WFP algorithms assume that each item has a fixed weight. But in real-world scenarios, the weight (price or significance) of an item can vary with time. Reflecting these changes in item weight is necessary in several mining applications, such as retail market data analysis and web click stream analysis. In this paper, we introduce the concept of a dynamic weight for each item, and propose an algorithm, DWFPM (dynamic weighted frequent pattern mining), that makes use of this concept. Our algorithm can address situations where the weight (price or significance) of an item varies dynamically. It exploits a pattern growth mining technique to avoid the level-wise candidate set generation-and-test methodology. Furthermore, it requires only one database scan, so it is eligible for use in stream data mining. An extensive performance analysis shows that our algorithm is efficient and scalable for WFP mining using dynamic weights.
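
    To make the notion concrete: with dynamic weights, an item's weight depends on when the transaction occurred, so a pattern's weighted support aggregates time-dependent weights rather than one fixed weight per item. A small sketch of that bookkeeping (the weighting scheme here is illustrative, not DWFPM's exact definition):

    ```python
    def weighted_support(pattern, transactions, weight_at):
        """Sum, over transactions containing the pattern, the mean weight of
        its items at the transaction's timestamp (illustrative definition)."""
        total = 0.0
        for timestamp, items in transactions:
            if pattern <= items:
                total += sum(weight_at(item, timestamp) for item in pattern) / len(pattern)
        return total

    transactions = [(1, {"milk", "bread"}), (2, {"milk", "eggs"}), (3, {"milk", "bread"})]
    prices = {"milk": {1: 1.0, 2: 1.2, 3: 1.5}, "bread": {1: 2.0, 2: 2.0, 3: 1.8},
              "eggs": {1: 3.0, 2: 2.5, 3: 2.5}}
    weight_at = lambda item, t: prices[item][t]
    print(weighted_support({"milk", "bread"}, transactions, weight_at))  # 1.5 + 1.65
    ```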

  12. Efficient Completion of Weighted Automata

    Directory of Open Access Journals (Sweden)

    Johannes Waldmann

    2016-09-01

    Full Text Available We consider directed graphs with edge labels from a semiring. We present an algorithm that allows efficient execution of queries for existence and weights of paths, and allows updates of the graph: adding nodes and edges, and changing weights of existing edges. We apply this method in the construction of matchbound certificates for automatically proving termination of string rewriting. We re-implement the decomposition/completion algorithm of Endrullis et al. (2006) in our framework, and achieve comparable performance.
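
    As a flavor of semiring path queries: over the tropical (min, +) semiring, weights combine with + along a path and with min across paths, which yields shortest-path weights. A generic Floyd-Warshall sketch parameterized by the semiring operations; this is only an illustration of the query, not the paper's incremental update scheme:

    ```python
    def all_pairs_weights(n, edges, plus=min, times=lambda a, b: a + b,
                          zero=float("inf"), one=0.0):
        """Generic Floyd-Warshall over a semiring; the defaults give the tropical
        (min, +) semiring, i.e. shortest-path weights. edges: {(u, v): weight}."""
        W = [[zero] * n for _ in range(n)]
        for i in range(n):
            W[i][i] = one
        for (u, v), w in edges.items():
            W[u][v] = plus(W[u][v], w)
        for k in range(n):
            for i in range(n):
                for j in range(n):
                    W[i][j] = plus(W[i][j], times(W[i][k], W[k][j]))
        return W

    print(all_pairs_weights(3, {(0, 1): 2, (1, 2): 3, (0, 2): 10})[0][2])  # 5
    ```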

  13. Some software algorithms for microprocessor ratemeters

    International Nuclear Information System (INIS)

    Savic, Z.

    1991-01-01

    After a review of the basic theoretical ratemeter problem and a general discussion of microprocessor ratemeters, a short insight into their hardware organization is given. Three software algorithms are described: two older ones, the quasi-exponential and floating-mean algorithms, and a new weighted moving average algorithm. The equations for the statistical characterization of the new algorithm are given and an intercomparison is made. It is concluded that the new algorithm has statistical advantages over the older ones. (orig.)

  14. Some software algorithms for microprocessor ratemeters

    Energy Technology Data Exchange (ETDEWEB)

    Savic, Z. (Military Technical Inst., Belgrade (Yugoslavia))

    1991-03-15

    After a review of the basic theoretical ratemeter problem and a general discussion of microprocessor ratemeters, a short insight into their hardware organization is given. Three software algorithms are described: two older ones, the quasi-exponential and floating-mean algorithms, and a new weighted moving average algorithm. The equations for the statistical characterization of the new algorithm are given and an intercomparison is made. It is concluded that the new algorithm has statistical advantages over the older ones. (orig.).
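
    A weighted moving average ratemeter estimates the count rate from the last N counting intervals, giving recent intervals larger weights. A minimal sketch with linearly increasing weights, an illustrative choice rather than the paper's exact weighting:

    ```python
    def wma_rate(counts, dt):
        """Weighted moving average rate estimate from the last N interval counts.
        counts: most recent last; weights rise linearly toward the newest interval."""
        n = len(counts)
        weights = [i + 1 for i in range(n)]           # 1, 2, ..., n (newest heaviest)
        return sum(w * c for w, c in zip(weights, counts)) / (sum(weights) * dt)

    # Five 1-second intervals, newest last: the estimate leans toward recent counts.
    print(wma_rate([10, 12, 11, 14, 20], dt=1.0))     # about 14.87 counts per second
    ```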

  15. WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations

    Science.gov (United States)

    Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi

    We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm, and a variation of it, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be used in common and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate, since some light paths between sources and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm, which combines the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in delay-constrained environments such as IPTV applications. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm over the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of light-tree request blocking.

  16. Optimal design of condenser weight

    International Nuclear Information System (INIS)

    Zheng Jing; Yan Changqi; Wang Jianjun

    2011-01-01

    The condenser is an important component in nuclear power plants, whose dimensions and weight affect the economic performance and the arrangement of the plant. In this paper, a calculation model is established according to design experience. The corresponding codes are developed, and the sensitivity of the design parameters that influence the condenser weight is analyzed. A design optimization of the condenser, taking weight minimization as the objective, is carried out with the self-developed complex-genetic algorithm. The results show that the reference condenser design is far from the best scheme, and they also verify the feasibility of the complex-genetic algorithm. (authors)

  17. Integrated Association Rules Complete Hiding Algorithms

    Directory of Open Access Journals (Sweden)

    Mohamed Refaat Abdellah

    2017-01-01

    Full Text Available This paper presents a database security approach for the complete hiding of sensitive association rules using six novel algorithms. These algorithms utilize three new weights to reduce the needed database modifications and support complete hiding, and they also reduce knowledge distortion and data distortion. The complete weighted hiding algorithms improve the hiding failure by 100%; these algorithms have the advantage of performing only a single scan of the database to gather the information required for the hiding process. The proposed algorithms are built within the database structure, which enables the sanitized database to be generated at run time as needed.

  18. An Efficient Compiler for Weighted Rewrite Rules

    OpenAIRE

    Mohri, Mehryar; Sproat, Richard

    1996-01-01

    Context-dependent rewrite rules are used in many areas of natural language and speech processing. Work in computational phonology has demonstrated that, given certain conditions, such rewrite rules can be represented as finite-state transducers (FSTs). We describe a new algorithm for compiling rewrite rules into FSTs. We show the algorithm to be simpler and more efficient than existing algorithms. Further, many of our applications demand the ability to compile weighted rules into weighted FST...

  19. Some observations on weighted GMRES

    KAUST Repository

    Güttel, Stefan

    2014-01-10

    We investigate the convergence of the weighted GMRES method for solving linear systems. Two different weighting variants are compared with unweighted GMRES for three model problems, giving a phenomenological explanation of cases where weighting improves convergence, and a case where weighting has no effect on the convergence. We also present a new alternative implementation of the weighted Arnoldi algorithm which under known circumstances will be favourable in terms of computational complexity. These implementations of weighted GMRES are compared for a large number of examples. We find that weighted GMRES may outperform unweighted GMRES for some problems, but more often this method is not competitive with other Krylov subspace methods like GMRES with deflated restarting or BICGSTAB, in particular when a preconditioner is used. © 2014 Springer Science+Business Media New York.

  20. Some observations on weighted GMRES

    KAUST Repository

    Güttel, Stefan; Pestana, Jennifer

    2014-01-01

    We investigate the convergence of the weighted GMRES method for solving linear systems. Two different weighting variants are compared with unweighted GMRES for three model problems, giving a phenomenological explanation of cases where weighting improves convergence, and a case where weighting has no effect on the convergence. We also present a new alternative implementation of the weighted Arnoldi algorithm which under known circumstances will be favourable in terms of computational complexity. These implementations of weighted GMRES are compared for a large number of examples. We find that weighted GMRES may outperform unweighted GMRES for some problems, but more often this method is not competitive with other Krylov subspace methods like GMRES with deflated restarting or BICGSTAB, in particular when a preconditioner is used. © 2014 Springer Science+Business Media New York.

  1. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updating promise to reduce this growth to V^(4/3).

  2. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  3. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  4. Weight Loss

    Science.gov (United States)


  5. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  6. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly...... layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...... contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows....

  7. Healthy Weight

    Science.gov (United States)

    ... such diets limit your nutritional intake, can be unhealthy, and tend to fail in the long run. The key to achieving and maintaining a healthy weight isn't about short-term dietary changes. It's about a lifestyle that includes healthy eating, regular physical activity, and ...

  8. Weighted approximation with varying weight

    CERN Document Server

    Totik, Vilmos

    1994-01-01

    A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof for the strong asymptotics on some L_p extremal problems on the real line with exponential weights, which, for the case p=2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L_p extremal problems with varying weights. Applications are given, relating to fast decreasing polynomials, asymptotic behavior of orthogonal polynomials and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.

  9. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  10. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real... variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n^2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel

  11. Do country-specific preference weights matter in the choice of mapping algorithms? The case of mapping the Diabetes-39 onto eight country-specific EQ-5D-5L value sets.

    Science.gov (United States)

    Lamu, Admassu N; Chen, Gang; Gamst-Klaussen, Thor; Olsen, Jan Abel

    2018-03-22

    To develop mapping algorithms that transform Diabetes-39 (D-39) scores onto EQ-5D-5L utility values for each of eight recently published country-specific EQ-5D-5L value sets, and to compare mapping functions across the EQ-5D-5L value sets. Data include 924 individuals with self-reported diabetes from six countries. The D-39 dimensions, age and gender were used as potential predictors of EQ-5D-5L utilities, which were scored using value sets from eight countries (England, the Netherlands, Spain, Canada, Uruguay, China, Japan and Korea). Ordinary least squares, generalised linear models, beta binomial regression, fractional regression, MM estimation and censored least absolute deviation were used to estimate the mapping algorithms. The optimal algorithm for each country-specific value set was primarily selected based on normalised root mean square error (NRMSE), normalised mean absolute error (NMAE) and adjusted R2. Cross-validation with a fivefold approach was conducted to test the generalizability of each model. The fractional regression model with loglog as the link function consistently performed best for all country-specific value sets. For instance, the NRMSE (0.1282) and NMAE (0.0914) were the lowest, while adjusted R2 was the highest (52.5%), when the English value set was considered. Among the D-39 dimensions, energy and mobility was the only one that was consistently significant in all models. The D-39 can be mapped onto EQ-5D-5L utilities with good predictive accuracy. The fractional regression model, which is appropriate for handling bounded outcomes, outperformed other candidate methods for all country-specific value sets. However, the regression coefficients differed, reflecting preference heterogeneity across countries.
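
    Fractional regression fits an outcome bounded in [0, 1] by maximizing a Bernoulli quasi-likelihood under a chosen mean function; with a loglog link the mean is mu = exp(-exp(-X beta)). A self-contained sketch on synthetic data, not the study's data or covariates:

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(0)
    X = np.column_stack([np.ones(200), rng.normal(size=200)])  # intercept + 1 covariate
    true_beta = np.array([0.3, 0.8])
    mu_true = np.exp(-np.exp(-(X @ true_beta)))                # loglog mean function
    y = np.clip(mu_true + rng.normal(0, 0.05, 200), 1e-4, 1 - 1e-4)  # utilities in (0, 1)

    def neg_quasi_loglik(beta):
        mu = np.clip(np.exp(-np.exp(-(X @ beta))), 1e-10, 1 - 1e-10)
        return -np.sum(y * np.log(mu) + (1 - y) * np.log(1 - mu))  # Bernoulli QMLE

    beta_hat = minimize(neg_quasi_loglik, np.zeros(2), method="BFGS").x
    print(beta_hat)  # should land near [0.3, 0.8]
    ```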

  12. Normalization based K means Clustering Algorithm

    OpenAIRE

    Virmani, Deepali; Taneja, Shweta; Malhotra, Geetika

    2015-01-01

    K-means is an effective clustering technique used to separate similar data into groups based on initial centroids of clusters. In this paper, a Normalization-based K-means clustering algorithm (N-K means) is proposed. The proposed N-K means clustering algorithm applies normalization prior to clustering on the available data, and it calculates initial centroids based on weights. Experimental results demonstrate the improvement of the proposed N-K means clustering algorithm over existing...

  13. Random walk term weighting for information retrieval

    DEFF Research Database (Denmark)

    Blanco, R.; Lioma, Christina

    2007-01-01

    We present a way of estimating term weights for Information Retrieval (IR), using term co-occurrence as a measure of dependency between terms.We use the random walk graph-based ranking algorithm on a graph that encodes terms and co-occurrence dependencies in text, from which we derive term weights...
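
    The idea is to build a graph whose nodes are terms and whose edges record co-occurrence within a sliding window, then run a random-walk ranking such as PageRank and take the stationary scores as term weights. A small sketch with networkx; the window size and toy corpus are illustrative:

    ```python
    import networkx as nx

    def random_walk_weights(tokens, window=2):
        """Term weights from PageRank over a term co-occurrence graph."""
        G = nx.Graph()
        for i, term in enumerate(tokens):
            for other in tokens[i + 1:i + 1 + window]:   # co-occurrence window
                if other != term:
                    G.add_edge(term, other)
        return nx.pagerank(G)                            # stationary random-walk scores

    tokens = "information retrieval ranks documents by term weights for retrieval".split()
    for term, weight in sorted(random_walk_weights(tokens).items(), key=lambda kv: -kv[1]):
        print(f"{term}: {weight:.3f}")
    ```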

  14. Iterative methods for weighted least-squares

    Energy Technology Data Exchange (ETDEWEB)

    Bobrovnikova, E.Y.; Vavasis, S.A. [Cornell Univ., Ithaca, NY (United States)

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.
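
    For reference, the weighted least-squares problem is min_x ||W^(1/2)(Ax - b)||_2, with normal equations A^T W A x = A^T W b; an ill-conditioned W is exactly what makes iterative solvers struggle here. A direct small-scale sketch (not the paper's reorthogonalized method):

    ```python
    import numpy as np

    def weighted_least_squares(A, b, w):
        """Solve min_x || W^(1/2) (A x - b) ||_2 via a scaled least-squares solve.
        w: vector of positive weights (the diagonal of W)."""
        sw = np.sqrt(w)
        # lstsq on the scaled system avoids explicitly forming A^T W A,
        # which would square the condition number.
        x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
        return x

    A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
    b = np.array([1.0, 2.0, 2.0])
    w = np.array([1.0, 1e6, 1.0])   # ill-conditioned weighting: middle point dominates
    print(weighted_least_squares(A, b, w))
    ```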

  15. Algorithmic chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  16. An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2013-01-01

    Full Text Available A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a back-propagation neural network (BPN). It has better global optimization characteristics than traditional optimization algorithms. In this paper, we used GA-BPN for image noise filtering. First, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is used to recognize noise pixels in the target image. Finally, an adaptive weighted average algorithm is used to recover the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.

  17. Iterative Mixture Component Pruning Algorithm for Gaussian Mixture PHD Filter

    Directory of Open Access Journals (Sweden)

    Xiaoxi Yan

    2014-01-01

    Full Text Available To address the increasing number of mixture components in the Gaussian mixture PHD filter, an iterative mixture component pruning algorithm is proposed. The pruning algorithm is based on maximizing the posterior probability density of the mixture weights. The entropy distribution of the mixture weights is adopted as the prior distribution of the mixture component parameters. The iterative update formulations of the mixture weights are derived via a Lagrange multiplier and the Lambert W function. Mixture components whose weights become negative during the iterative procedure are pruned by setting the corresponding mixture weights to zero. In addition, multiple mixture components with similar parameters describing the same PHD peak can be merged into one mixture component in the algorithm. Simulation results show that the proposed iterative mixture component pruning algorithm is superior to the typical pruning algorithm based on thresholds.
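
    For context, the threshold-based pruning that the paper compares against discards components whose weights fall below a cutoff, merges components with nearby means, and rescales the surviving weights so the total mass (the expected target count) is preserved. A minimal one-dimensional sketch of that baseline, with illustrative thresholds:

    ```python
    def prune_and_merge(weights, means, w_min=1e-3, merge_dist=1.0):
        """Baseline Gaussian-mixture pruning: drop light components, merge near ones.
        Means are scalars here for brevity; real PHD filters use vectors."""
        keep = [i for i, w in enumerate(weights) if w > w_min]
        merged_w, merged_m = [], []
        while keep:
            j = max(keep, key=lambda i: weights[i])               # heaviest remaining
            close = [i for i in keep if abs(means[i] - means[j]) <= merge_dist]
            w = sum(weights[i] for i in close)
            m = sum(weights[i] * means[i] for i in close) / w     # weight-averaged mean
            merged_w.append(w)
            merged_m.append(m)
            keep = [i for i in keep if i not in close]
        total = sum(merged_w)
        # rescale so the total weight (expected target count) is preserved
        return [w * sum(weights) / total for w in merged_w], merged_m

    print(prune_and_merge([0.5, 0.45, 0.0005], [0.0, 0.4, 5.0]))
    ```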

  18. Evolutionary Algorithms For Neural Networks Binary And Real Data Classification

    Directory of Open Access Journals (Sweden)

    Dr. Hanan A.R. Akkar

    2015-08-01

    Full Text Available Artificial neural networks are complex networks emulating the way neurons in the human brain process data. They have been widely used in prediction, clustering, classification and association. The training algorithms used to determine the network weights are almost the most important factor influencing neural network performance. Recently, many meta-heuristic and evolutionary algorithms have been employed to optimize neural network weights to achieve better performance. This paper aims to use recently proposed algorithms for optimizing neural network weights, comparing their performance with other classical meta-heuristic algorithms used for the same purpose. To evaluate the performance of such algorithms for training neural networks, we examine them on the classification of four opposite binary XOR clusters and on the classification of continuous real data sets such as Iris and Ecoli.

  19. Foam: A general purpose Monte Carlo cellular algorithm

    International Nuclear Information System (INIS)

    Jadach, S.

    2003-01-01

    A general-purpose, self-adapting Monte Carlo (MC) algorithm implemented in the program Foam is described. The high efficiency of the MC, that is, a small maximum weight or variance of the MC weight, is achieved by dividing the integration domain into small cells. The cells can be n-dimensional simplices or hyperrectangles. The next cell to be divided and the position/direction of the division hyperplane are chosen by the algorithm, which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm is able to deal, in principle, with an arbitrary pattern of singularities in the distribution

  20. A proposed solution for bus driver and fare collector scheduling problems using the maximum weight matching algorithm

    Directory of Open Access Journals (Sweden)

    Paulo Henrique Siqueira

    2004-08-01

    Full Text Available The purpose of this paper is to show the application of the maximum weight Matching Algorithm to the construction of workdays for bus drivers and fare collectors. The problem must be solved making the best possible use of the timetables, so as to minimize the number of employees, overtime and idle hours; in this way the costs of public transportation companies are minimized. In the first phase, assuming the timetables have already been divided into short- and long-duration duties, the short-duration duties are combined to form an employee's daily workday. This combination is done with the maximum weight Matching Algorithm, in which duties are represented by vertices of a graph and maximum weight is assigned to combinations of duties that create neither overtime nor idle hours. In the second phase, a weekend duty is assigned to each weekday weekly duty. Through these two phases, weekly workdays for bus drivers and fare collectors can be constructed at minimum cost. The third and final phase consists of assigning the weekly workdays to each bus driver and fare collector, taking their preferences into account; the maximum weight Matching Algorithm is used for this phase as well. This work was applied in three public transportation companies in the city of Curitiba - PR, Brazil, where the methods previously in use were heuristics based only on the experience of the person in charge of the task.
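
    Maximum weight matching pairs up the vertices of a graph so that the total weight of the chosen edges is maximal, which is exactly how the duty-combination phase is modeled. A small sketch using networkx, with invented duty names and weights:

    ```python
    import networkx as nx

    # Vertices are short-duration duties; edge weights reward combinations
    # with little overtime or idle time (higher weight = better pairing).
    G = nx.Graph()
    G.add_weighted_edges_from([
        ("morning_A", "evening_B", 8),   # fills a full day with no idle gap
        ("morning_A", "evening_C", 3),   # long idle gap: low weight
        ("morning_D", "evening_C", 7),
        ("morning_D", "evening_B", 2),
    ])

    pairs = nx.max_weight_matching(G, maxcardinality=True)
    print(pairs)  # e.g. {('morning_A', 'evening_B'), ('morning_D', 'evening_C')}
    ```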

  1. Proposals for Updating TAI Algorithm

    Science.gov (United States)

    1997-12-01

    1997 meeting, the Comité International des Poids et Mesures (CIPM) decided to change the name of the Comité Consultatif pour la Définition de la ...Report of the BIPM Time Section, 1988, 1, D1-D22. [2] P. Tavella, C. Thomas, Comparative study of time scale algorithms, Metrologia, 1991, 28, 57... alternative choice for implementing an upper limit of clock weights, Metrologia, 1996, 33, 227-240. [5] C. Thomas, Impact of New Clock Technologies

  2. Computed laminography and reconstruction algorithm

    International Nuclear Information System (INIS)

    Que Jiemin; Cao Daquan; Zhao Wei; Tang Xiao

    2012-01-01

    Computed laminography (CL) is an alternative to computed tomography if large objects are to be inspected with high resolution. This is especially true for planar objects. In this paper, we set up a new scanning geometry for CL and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with different weighting functions by computer simulation with a digital phantom. This proves that the ART algorithm is a good choice for the CL system. (authors)
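
    ART updates the image one ray at a time, correcting along each ray in proportion to the residual; a relaxation weight controls the step size, which is where the compared weighting functions enter. A minimal Kaczmarz-style sketch:

    ```python
    import numpy as np

    def art(A, b, iterations=50, lam=0.5):
        """Kaczmarz-style ART. A: (rays x pixels) system matrix, b: projections,
        lam: relaxation weight controlling the per-ray correction step."""
        x = np.zeros(A.shape[1])
        for _ in range(iterations):
            for i in range(A.shape[0]):
                a = A[i]
                residual = b[i] - a @ x
                x += lam * residual * a / (a @ a)   # project toward the i-th hyperplane
        return x

    # Tiny 2-pixel "phantom" observed through three rays.
    A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
    x_true = np.array([2.0, 3.0])
    print(art(A, A @ x_true))   # converges toward [2, 3]
    ```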

  3. Overweight, Obesity, and Weight Loss

    Science.gov (United States)


  4. Generalized-ensemble molecular dynamics and Monte Carlo algorithms beyond the limit of the multicanonical algorithm

    International Nuclear Information System (INIS)

    Okumura, Hisashi

    2010-01-01

    I review two new generalized-ensemble algorithms for molecular dynamics and Monte Carlo simulations of biomolecules, that is, the multibaric–multithermal algorithm and the partial multicanonical algorithm. In the multibaric–multithermal algorithm, two-dimensional random walks not only in the potential-energy space but also in the volume space are realized. One can discuss the temperature dependence and pressure dependence of biomolecules with this algorithm. The partial multicanonical simulation samples a wide range of only an important part of potential energy, so that one can concentrate the effort to determine a multicanonical weight factor only on the important energy terms. This algorithm has higher sampling efficiency than the multicanonical and canonical algorithms. (review)

  5. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    International audience; In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  6. Weight Loss Surgery

    Science.gov (United States)

    Weight loss surgery helps people with extreme obesity to lose weight. It may be an option if you cannot lose weight ... obesity. There are different types of weight loss surgery. They often limit the amount of food you ...

  7. Level-0 trigger algorithms for the ALICE PHOS detector

    CERN Document Server

    Wang, D; Wang, Y P; Huang, G M; Kral, J; Yin, Z B; Zhou, D C; Zhang, F; Ullaland, K; Muller, H; Liu, L J

    2011-01-01

    The PHOS level-0 trigger provides a minimum bias trigger for p-p collisions and information for a level-1 trigger in both p-p and Pb-Pb collisions. There are two level-0 trigger generating algorithms under consideration: the Direct Comparison algorithm and the Weighted Sum algorithm. In order to study the trigger algorithms via simulation, a simplified equivalent model is extracted from the trigger electronics to derive the waveform function of the Analog-or signal as input to the trigger algorithms. Simulations show that the Weighted Sum algorithm can achieve higher trigger efficiency and provide more precise single-channel energy information than the Direct Comparison algorithm. An energy resolution of 9.75 MeV can be achieved with the Weighted Sum algorithm at a sampling rate of 40 Msps (mega samples per second) at 1 GeV. The timing performance at a sampling rate of 40 Msps with the Weighted Sum algorithm is better than that at a sampling rate of 20 Msps with both algorithms. The level-0 trigger can be delivered...

  8. Interpolation algorithm for asynchronous ADC-data

    Directory of Open Access Journals (Sweden)

    S. Bramburger

    2017-09-01

    Full Text Available This paper presents a modified interpolation algorithm for signals with a variable data rate from asynchronous ADCs. The Adaptive weights Conjugate gradient Toeplitz matrix (ACT) algorithm is extended to operate on a continuous data stream. Additional preprocessing of data with constant and linear sections, and a weighted overlap of signals transformed step by step into the spectral domain, improve the reconstruction of the asynchronous ADC signal. The interpolation method can be used if asynchronous ADC data is fed into synchronous digital signal processing.

  9. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  10. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  11. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  12. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...

  13. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  14. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  15. Weighted straight skeletons in the plane.

    Science.gov (United States)

    Biedl, Therese; Held, Martin; Huber, Stefan; Kaaser, Dominik; Palfrader, Peter

    2015-02-01

    We investigate weighted straight skeletons from a geometric, graph-theoretical, and combinatorial point of view. We start with a thorough definition and shed light on some ambiguity issues in the procedural definition. We investigate the geometry, combinatorics, and topology of faces and the roof model, and we discuss in which cases a weighted straight skeleton is connected. Finally, we show that the weighted straight skeleton of even a simple polygon may be non-planar and may contain cycles, and we discuss under which restrictions on the weights and/or the input polygon the weighted straight skeleton still behaves similar to its unweighted counterpart. In particular, we obtain a non-procedural description and a linear-time construction algorithm for the straight skeleton of strictly convex polygons with arbitrary weights.

  16. Mindfulness Approaches and Weight Loss, Weight Maintenance, and Weight Regain.

    Science.gov (United States)

    Dunn, Carolyn; Haubenreiser, Megan; Johnson, Madison; Nordby, Kelly; Aggarwal, Surabhi; Myer, Sarah; Thomas, Cathy

    2018-03-01

    There is an urgent need for effective weight management techniques, as more than one third of US adults are overweight or obese. Recommendations for weight loss include a combination of reducing caloric intake, increasing physical activity, and behavior modification. Behavior modification includes mindful eating or eating with awareness. The purpose of this review was to summarize the literature and examine the impact of mindful eating on weight management. The practice of mindful eating has been applied to the reduction of food cravings, portion control, body mass index, and body weight. Past reviews evaluating the relationship between mindfulness and weight management did not focus on change in mindful eating as the primary outcome or mindful eating as a measured variable. This review demonstrates strong support for inclusion of mindful eating as a component of weight management programs and may provide substantial benefit to the treatment of overweight and obesity.

  17. A Weighted Block Dictionary Learning Algorithm for Classification

    OpenAIRE

    Shi, Zhongrong

    2016-01-01

    Discriminative dictionary learning, playing a critical role in sparse representation based classification, has led to state-of-the-art classification results. Among the existing discriminative dictionary learning methods, two different approaches, shared dictionary and class-specific dictionary, which associate each dictionary atom to all classes or a single class, have been studied. The shared dictionary is a compact method but with lack of discriminative information; the class-specific dict...

  18. light-weight digital signature algorithm for wireless sensor networks

    Indian Academy of Sciences (India)

    M LAVANYA

    2017-09-14

    Sep 14, 2017 ... WSN applications do not even consider the security aspects because of the heavy ...... security scheme in wireless sensor networks with mobile sinks. IEEE Trans. ... security protocols. PhD Thesis, Eindhoven University of.

  19. Efficient learning algorithm for quantum perceptron unitary weights

    OpenAIRE

    Seow, Kok-Leong; Behrman, Elizabeth; Steck, James

    2015-01-01

    For the past two decades, researchers have attempted to create a Quantum Neural Network (QNN) by combining the merits of quantum computing and neural computing. In order to exploit the advantages of the two prolific fields, the QNN must meet the non-trivial task of integrating the unitary dynamics of quantum computing and the dissipative dynamics of neural computing. At the core of quantum computing and neural computing lies the qubit and perceptron, respectively. We see that past implementat...

  20. Stochastic weighted particle methods for population balance equations

    International Nuclear Information System (INIS)

    Patterson, Robert I.A.; Wagner, Wolfgang; Kraft, Markus

    2011-01-01

    Highlights: → Weight transfer functions for Monte Carlo simulation of coagulation. → Efficient support for single-particle growth processes. → Comparisons to analytic solutions and soot formation problems. → Better numerical accuracy for less common particles. - Abstract: A class of coagulation weight transfer functions is constructed, each member of which leads to a stochastic particle algorithm for the numerical treatment of population balance equations. These algorithms are based on systems of weighted computational particles and the weight transfer functions are constructed such that the number of computational particles does not change during coagulation events. The algorithms also facilitate the simulation of physical processes that change single particles, such as growth, or other surface reactions. Four members of the algorithm family have been numerically validated by comparison to analytic solutions to simple problems. Numerical experiments have been performed for complex laminar premixed flame systems in which members of the class of stochastic weighted particle methods were compared to each other and to a direct simulation algorithm. Two of the weighted algorithms have been shown to offer performance advantages over the direct simulation algorithm in situations where interest is focused on the larger particles in a system. The extent of this advantage depends on the particular system and on the quantities of interest.

  1. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution

  2. A Novel Modified Algorithm with Reduced Complexity LDPC Code Decoder

    Directory of Open Access Journals (Sweden)

    Song Yang

    2014-10-01

    Full Text Available A novel efficient decoding algorithm that reduces the complexity of the sum-product algorithm (SPA) for LDPC codes is proposed. Based on the hyperbolic tangent rule, the check node update is modified with two horizontal processes, which involve similar calculations. Motivated by the finding that the min-sum (MS) algorithm reduces complexity by reducing the approximation error in the horizontal process, we simplify the part carrying small information weight. Compared with existing approximations, the proposed method has lower computational complexity than the SPA. Simulation results show that the proposed algorithm achieves performance very close to the SPA.
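
    For orientation, the min-sum approximation replaces the SPA's hyperbolic-tangent check node update by a sign product and a minimum of magnitudes over the other incoming messages; a sketch of that update for a single check node:

    ```python
    import math

    def min_sum_check_update(llrs):
        """Min-sum check node update: for each edge, output the product of the
        other edges' signs times the minimum of their magnitudes."""
        outputs = []
        for i in range(len(llrs)):
            others = llrs[:i] + llrs[i + 1:]
            sign = math.prod(1 if v >= 0 else -1 for v in others)
            outputs.append(sign * min(abs(v) for v in others))
        return outputs

    print(min_sum_check_update([2.0, -0.5, 1.5]))  # [-0.5, 1.5, -0.5]
    ```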

  3. Adaptive discrete-ordinates algorithms and strategies

    International Nuclear Information System (INIS)

    Stone, J.C.; Adams, M.L.

    2005-01-01

    We present our latest algorithms and strategies for adaptively refined discrete-ordinates quadrature sets. In our basic strategy, which we apply here in two-dimensional Cartesian geometry, the spatial domain is divided into regions. Each region has its own quadrature set, which is adapted to the region's angular flux. Our algorithms add a 'test' direction to the quadrature set if the angular flux calculated at that direction differs by more than a user-specified tolerance from the angular flux interpolated from other directions. Different algorithms have different prescriptions for the method of interpolation and/or choice of test directions and/or prescriptions for quadrature weights. We discuss three different algorithms of different interpolation orders. We demonstrate through numerical results that each algorithm is capable of generating solutions with negligible angular discretization error. This includes elimination of ray effects. We demonstrate that all of our algorithms achieve a given level of error with far fewer unknowns than does a standard quadrature set applied to an entire problem. To address a potential issue with other algorithms, we present one algorithm that retains exact integration of high-order spherical-harmonics functions, no matter how much local refinement takes place. To address another potential issue, we demonstrate that all of our methods conserve partial currents across interfaces where quadrature sets change. We conclude that our approach is extremely promising for solving the long-standing problem of angular discretization error in multidimensional transport problems. (authors)

  4. Weight Gain during Pregnancy

    Science.gov (United States)

    Consumer health information on nutrition, fitness, and recommended weight gain before, during, and between pregnancies.

  5. Should I Gain Weight?

    Science.gov (United States)

    KidsHealth article for teens addressing the question "Should I gain weight?" and the reasons people give for wanting to gain weight.

  6. Safe reduction rules for weighted treewidth

    NARCIS (Netherlands)

    Eijkhof, F. van den; Bodlaender, H.L.; Koster, A.M.C.A.

    2002-01-01

    Several sets of reduction rules are known for preprocessing a graph when computing its treewidth. In this paper, we give reduction rules for a weighted variant of treewidth, motivated by the analysis of algorithms for probabilistic networks. We present two general reduction rules that are safe for

  7. Approximate Shortest Homotopic Paths in Weighted Regions

    KAUST Repository

    Cheng, Siu-Wing; Jin, Jiongxin; Vigneron, Antoine; Wang, Yajun

    2010-01-01

    Let P be a path between two points s and t in a polygonal subdivision T with obstacles and weighted regions. Given a relative error tolerance ε ∈(0,1), we present the first algorithm to compute a path between s and t that can be deformed to P

  8. Approximate shortest homotopic paths in weighted regions

    KAUST Repository

    Cheng, Siuwing; Jin, Jiongxin; Vigneron, Antoine E.; Wang, Yajun

    2012-01-01

    A path P between two points s and t in a polygonal subdivision T with obstacles and weighted regions defines a class of paths that can be deformed to P without passing over any obstacle. We present the first algorithm that, given P and a relative

  9. Quantifying Pathology in Diffusion Weighted MRI

    NARCIS (Netherlands)

    Caan, M.W.A.

    2010-01-01

    In this thesis algorithms are proposed for quantification of pathology in Diffusion Weighted Magnetic Resonance Imaging (DW-MRI) data. Functional evidence for brain diseases can be explained by specific structural loss in the white matter of the brain. That is, certain biomarkers may exist where the

  10. Financial Time Series Forecasting Using Directed-Weighted Chunking SVMs

    Directory of Open Access Journals (Sweden)

    Yongming Cai

    2014-01-01

    Full Text Available Support vector machines (SVMs) are a promising alternative to traditional regression estimation approaches. However, when dealing with massive-scale data sets, problems such as long training times and excessive memory demands arise, so the standard SVM algorithm is not well suited to financial time series data. To solve these problems, a directed-weighted chunking SVMs algorithm is proposed. In this algorithm, the whole training data set is split into several chunks, and the support vectors are obtained on each subset. The weighted support vector regressions are then calculated to obtain the forecast model on the new working data set. Our directed-weighted chunking algorithm provides a new method of decomposing and combining support vectors according to the importance of chunks, which can improve the operation speed without reducing prediction accuracy. Finally, IBM stock daily close prices are used to verify the validity of the proposed algorithm.
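
    A simplified sketch of the chunking idea, assuming scikit-learn's SVR as the base learner: the training set is split into time-ordered chunks, one regressor is trained per chunk, and the chunk predictors are combined with importance weights. Combining predictions (rather than re-training on pooled support vectors, as the paper does) and the recency-based default weights are both assumptions made for illustration.

    ```python
    import numpy as np
    from sklearn.svm import SVR

    def chunked_svr(X, y, n_chunks=5, weights=None):
        """Train one SVR per time-ordered chunk and combine the chunk
        predictors with importance weights (recency-based by default).
        """
        chunks = np.array_split(np.arange(len(y)), n_chunks)
        models = [SVR(kernel="rbf").fit(X[idx], y[idx]) for idx in chunks]
        if weights is None:
            weights = np.arange(1.0, n_chunks + 1)   # later chunks weigh more
        weights = np.asarray(weights) / np.sum(weights)

        def predict(X_new):
            preds = np.stack([m.predict(X_new) for m in models])
            return weights @ preds                   # weighted combination
        return predict
    ```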

  11. Robust reactor power control system design by genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Yoon Joon; Cho, Kyung Ho; Kim, Sin [Cheju National University, Cheju (Korea, Republic of)

    1997-12-31

    The H∞ robust controller for the reactor power control system is designed by use of the mixed weight sensitivity. The system is configured into the typical two-port model with which the weight functions are augmented. Since the solution depends on the weighting functions and the problem is nonconvex, the genetic algorithm is used to determine the weighting functions. The cost function applied in the genetic algorithm permits the direct control of the power tracking performances. In addition, the actual operating constraints such as rod velocity and acceleration can be treated as design parameters. Compared with the conventional approach, the controller designed by the genetic algorithm results in better performance under realistic constraints. Also, it is found that the genetic algorithm could be used as an effective tool in robust design. 4 refs., 6 figs. (Author)

  13. Polynomial weights and code constructions

    DEFF Research Database (Denmark)

    Massey, J; Costello, D; Justesen, Jørn

    1973-01-01

    For any nonzero element c of a general finite field GF(q), it is shown that the polynomials (x - c)^i, i = 0, 1, 2, ..., have the "weight-retaining" property that any linear combination of these polynomials with coefficients in GF(q) has Hamming weight at least as great as that of the minimum-degree polynomial included. This fundamental property is then used as the key to a variety of code constructions including 1) a simplified derivation of the binary Reed-Muller codes and, for any prime p greater than 2, a new extensive class of p-ary "Reed-Muller codes," 2) a new class of "repeated-root" cyclic codes ... of long constraint length binary convolutional codes derived from 2^r-ary Reed-Solomon codes, and 6) a new class of q-ary "repeated-root" constacyclic codes with an algebraic decoding algorithm.

  14. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  15. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.

  16. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,

  17. A new inertia weight control strategy for particle swarm optimization

    Science.gov (United States)

    Zhu, Xianming; Wang, Hongbo

    2018-04-01

    Particle Swarm Optimization is a member of the swarm intelligence family of algorithms, inspired by the behavior of bird flocks. The inertia weight, one of the most important parameters of PSO, balances the algorithm's exploration and exploitation. This paper proposes a new inertia weight control strategy, and PSO with this new strategy is tested on four benchmark functions. The results show that the new strategy provides the PSO with better performance.
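
    The abstract does not spell out the new control strategy, so the sketch below shows the role the inertia weight plays using the classic linearly decreasing schedule as a stand-in; any alternative strategy would replace the single line that computes w.

    ```python
    import numpy as np

    def pso(f, dim, n=30, iters=200, w_start=0.9, w_end=0.4,
            c1=2.0, c2=2.0, lo=-5.0, hi=5.0, seed=0):
        """Minimal PSO; the inertia weight w decreases linearly, shifting the
        swarm from exploration (large w) to exploitation (small w).
        """
        rng = np.random.default_rng(seed)
        x = rng.uniform(lo, hi, (n, dim))
        v = np.zeros((n, dim))
        pbest = x.copy()
        pval = np.apply_along_axis(f, 1, x)
        g = pbest[np.argmin(pval)]
        for t in range(iters):
            w = w_start - (w_start - w_end) * t / iters    # inertia schedule
            r1, r2 = rng.random((n, dim)), rng.random((n, dim))
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
            x = np.clip(x + v, lo, hi)
            val = np.apply_along_axis(f, 1, x)
            better = val < pval
            pbest[better], pval[better] = x[better], val[better]
            g = pbest[np.argmin(pval)]
        return g, pval.min()

    best, fbest = pso(lambda z: float(np.sum(z * z)), dim=10)  # sphere function
    ```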

  18. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  19. Mechanisms of Weight Regain following Weight Loss.

    Science.gov (United States)

    Blomain, Erik Scott; Dirhan, Dara Anne; Valentino, Michael Anthony; Kim, Gilbert Won; Waldman, Scott Arthur

    2013-01-01

    Obesity is a world-wide pandemic and its incidence is on the rise along with associated comorbidities. Currently, there are few effective therapies to combat obesity. The use of lifestyle modification therapy, namely, improvements in diet and exercise, is preferable over bariatric surgery or pharmacotherapy due to surgical risks and issues with drug efficacy and safety. Although they are initially successful in producing weight loss, such lifestyle intervention strategies are generally unsuccessful in achieving long-term weight maintenance, with the vast majority of obese patients regaining their lost weight during followup. Recently, various compensatory mechanisms have been elucidated by which the body may oppose new weight loss, and this compensation may result in weight regain back to the obese baseline. The present review summarizes the available evidence on these compensatory mechanisms, with a focus on weight loss-induced changes in energy expenditure, neuroendocrine pathways, nutrient metabolism, and gut physiology. These findings have added a major focus to the field of antiobesity research. In addition to investigating pathways that induce weight loss, the present work also focuses on pathways that may instead prevent weight regain. Such strategies will be necessary for improving long-term weight loss maintenance and outcomes for patients who struggle with obesity.

  20. A New Algorithm for System of Integral Equations

    Directory of Open Access Journals (Sweden)

    Abdujabar Rasulov

    2014-01-01

    Full Text Available We develop a new algorithm to solve systems of integral equations. In this new method there is no need to use matrix weights; because of this, the computational complexity is reduced considerably. Using the new algorithm it is also possible to solve an initial boundary value problem for a system of parabolic equations. To verify its efficiency, the results of computational experiments are given.

  1. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate the PageRank values of web pages. The input of the algorithm is a list of web pages and the links between them. The user enters the list through the web interface. From the data the algorithm calculates a PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
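
    A minimal sketch of the iteration the thesis describes: start from a uniform distribution and repeat the update until the values change by less than a threshold. The damping factor of 0.85 and the tolerance are assumptions, since the abstract does not specify them.

    ```python
    def pagerank(links, d=0.85, tol=1e-8, max_iter=100):
        """Iterative PageRank. `links` maps each page to the list of pages
        it links to; iteration stops when no value changes by more than tol.
        """
        pages = list(links)
        n = len(pages)
        pr = {p: 1.0 / n for p in pages}
        for _ in range(max_iter):
            new = {p: (1.0 - d) / n
                      + d * sum(pr[q] / len(links[q])
                                for q in pages if p in links[q])
                   for p in pages}
            if max(abs(new[p] - pr[p]) for p in pages) < tol:
                return new
            pr = new
        return pr

    # Tiny example with three pages linking to each other:
    print(pagerank({"a": ["b"], "b": ["a", "c"], "c": ["a"]}))
    ```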

  2. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on parallel sorting problems. The text also presents twenty different algorithms, for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream multiple data stream) computer, in which the whole sequence to be sorted can fit in the

  3. Approximate Shortest Homotopic Paths in Weighted Regions

    KAUST Repository

    Cheng, Siu-Wing

    2010-01-01

    Let P be a path between two points s and t in a polygonal subdivision T with obstacles and weighted regions. Given a relative error tolerance ε ∈ (0,1), we present the first algorithm to compute a path between s and t that can be deformed to P without passing over any obstacle and the path cost is within a factor 1 + ε of the optimum. The running time is O(h^3/ε^2 kn polylog(k, n, 1/ε)), where k is the number of segments in P and h and n are the numbers of obstacles and vertices in T, respectively. The constant in the running time of our algorithm depends on some geometric parameters and the ratio of the maximum region weight to the minimum region weight. © 2010 Springer-Verlag.

  4. Approximate shortest homotopic paths in weighted regions

    KAUST Repository

    Cheng, Siuwing

    2012-02-01

    A path P between two points s and t in a polygonal subdivision T with obstacles and weighted regions defines a class of paths that can be deformed to P without passing over any obstacle. We present the first algorithm that, given P and a relative error tolerance ε ∈ (0, 1), computes a path from this class with cost at most 1 + ε times the optimum. The running time is O(h^3/ε^2 kn polylog(k, n, 1/ε)), where k is the number of segments in P and h and n are the numbers of obstacles and vertices in T, respectively. The constant in the running time of our algorithm depends on some geometric parameters and the ratio of the maximum region weight to the minimum region weight. © 2012 World Scientific Publishing Company.

  5. Color-to-grayscale conversion through weighted multiresolution channel fusion

    NARCIS (Netherlands)

    Wu, T.; Toet, A.

    2014-01-01

    We present a color-to-gray conversion algorithm that retains both the overall appearance and the discriminability of details of the input color image. The algorithm employs a weighted pyramid image fusion scheme to blend the R, G, and B color channels of the input image into a single grayscale

  6. Proven Weight Loss Methods

    Science.gov (United States)

    Fact sheet on proven weight loss methods: losing weight can improve your health in a number of ways. Available in Spanish at www.hormone.org/Spanish.

  7. The Dropout Learning Algorithm

    Science.gov (United States)

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural network by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions with the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
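
    The ensemble-averaging property for a single logistic unit can be checked numerically: the Monte Carlo average over dropout sub-networks is well approximated by the logistic of the expected pre-activation, which is the normalized-geometric-mean result the abstract describes. The dropout rate, sizes, and random data below are illustrative assumptions.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

    # One logistic unit; dropout keeps each input with probability p.
    w = 0.3 * rng.normal(size=20)   # small weights keep the approximation tight
    x = rng.normal(size=20)
    p = 0.5

    # Monte Carlo estimate of the dropout ensemble average E[sigma(w . (m * x))].
    masks = rng.random((100_000, 20)) < p
    mc = sigmoid((masks * x) @ w).mean()

    # Key property: the ensemble average is well approximated by the logistic
    # of the expected pre-activation, sigma(p * w . x), i.e. the normalized
    # geometric mean of the sub-network outputs.
    approx = sigmoid(p * (w @ x))
    print(mc, approx)   # the two agree closely; the paper derives error bounds
    ```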

  8. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  9. Sparse Signal Reconstruction with Multiple Side Information using Adaptive Weights for Multiview Sources

    DEFF Research Database (Denmark)

    Luong, Huynh Van; Seiler, Jürgen; Kaup, André

    2016-01-01

    weights by solving a proposed weighted n-ℓ1 minimization. The proposed algorithm computes the adaptive weights on two levels: first each individual intra-SI weight and then the inter-SI weights are iteratively updated at every reconstruction iteration. This two-level optimization leads...

  10. Partially linearized algorithms in gyrokinetic particle simulation

    Energy Technology Data Exchange (ETDEWEB)

    Dimits, A.M.; Lee, W.W.

    1990-10-01

    In this paper, particle simulation algorithms with time-varying weights for the gyrokinetic Vlasov-Poisson system have been developed. The primary purpose is to use them for the removal of the selected nonlinearities in the simulation of gradient-driven microturbulence so that the relative importance of the various nonlinear effects can be assessed. It is hoped that the use of these procedures will result in a better understanding of the transport mechanisms and scaling in tokamaks. Another application of these algorithms is for the improvement of the numerical properties of the simulation plasma. For instance, implementations of such algorithms (1) enable us to suppress the intrinsic numerical noise in the simulation, and (2) also make it possible to regulate the weights of the fast-moving particles and, in turn, to eliminate the associated high frequency oscillations. Examples of their application to drift-type instabilities in slab geometry are given. We note that the work reported here represents the first successful use of the weighted algorithms in particle codes for the nonlinear simulation of plasmas.

  12. Improving EEG signal peak detection using feature weight learning ...

    Indian Academy of Sciences (India)

    Therefore, we aimed to develop a general procedure for eye event-related applications based on feature weight learning (FWL), through the use of a neural network with random weights (NNRW) as the classifier. The FWL is performed using a particle swarm optimization algorithm, applied to the well-studied Dumpala, Acir, ...

  13. Ensemble of data-driven prognostic algorithms for robust prediction of remaining useful life

    International Nuclear Information System (INIS)

    Hu Chao; Youn, Byeng D.; Wang Pingfeng; Taek Yoon, Joung

    2012-01-01

    Prognostics aims at determining whether a failure of an engineered system (e.g., a nuclear power plant) is impending and estimating the remaining useful life (RUL) before the failure occurs. The traditional data-driven prognostic approach is to construct multiple candidate algorithms using a training data set, evaluate their respective performance using a testing data set, and select the one with the best performance while discarding all the others. This approach has three shortcomings: (i) the selected standalone algorithm may not be robust; (ii) it wastes the resources for constructing the algorithms that are discarded; (iii) it requires the testing data in addition to the training data. To overcome these drawbacks, this paper proposes an ensemble data-driven prognostic approach which combines multiple member algorithms with a weighted-sum formulation. Three weighting schemes, namely the accuracy-based weighting, diversity-based weighting and optimization-based weighting, are proposed to determine the weights of member algorithms. The k-fold cross validation (CV) is employed to estimate the prediction error required by the weighting schemes. The results obtained from three case studies suggest that the ensemble approach with any weighting scheme gives more accurate RUL predictions compared to any sole algorithm when member algorithms producing diverse RUL predictions have comparable prediction accuracy and that the optimization-based weighting scheme gives the best overall performance among the three weighting schemes.
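
    A sketch of the accuracy-based weighting scheme, assuming scikit-learn regressors as stand-ins for the member algorithms: each member's weight is inversely proportional to its k-fold cross-validation error, and predictions are combined with a weighted sum. The diversity-based and optimization-based schemes are not reproduced here.

    ```python
    import numpy as np
    from sklearn.linear_model import LinearRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.neighbors import KNeighborsRegressor
    from sklearn.tree import DecisionTreeRegressor

    def accuracy_weighted_ensemble(X, y):
        """Weight each member algorithm inversely to its 5-fold CV error and
        predict RUL with the weighted sum of the members' predictions.
        """
        members = [LinearRegression(), KNeighborsRegressor(),
                   DecisionTreeRegressor(random_state=0)]
        errors = np.array([-cross_val_score(m, X, y, cv=5,
                                            scoring="neg_mean_squared_error").mean()
                           for m in members])
        weights = (1.0 / errors) / (1.0 / errors).sum()
        for m in members:
            m.fit(X, y)

        def predict_rul(X_new):
            preds = np.stack([m.predict(X_new) for m in members])
            return weights @ preds             # weighted-sum RUL prediction
        return predict_rul, weights
    ```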

  14. Introduction to Evolutionary Algorithms

    CERN Document Server

    Yu, Xinjie

    2010-01-01

    Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm opti

  15. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.
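
    For reference, the sketch below shows recursive least squares with a uniform exponential forgetting factor, the textbook special case that a general forgetting scheme contains; the selective, non-uniform forgetting of the paper is not reproduced here.

    ```python
    import numpy as np

    def rls_forgetting(phis, ys, lam=0.98):
        """Recursive least squares with a uniform exponential forgetting
        factor lam < 1: each step discounts past data geometrically.
        """
        phis = np.asarray(phis, dtype=float)
        theta = np.zeros(phis.shape[1])
        P = 1e4 * np.eye(phis.shape[1])            # large initial covariance
        for phi, y in zip(phis, ys):
            k = P @ phi / (lam + phi @ P @ phi)    # gain vector
            theta = theta + k * (y - phi @ theta)  # parameter update
            P = (P - np.outer(k, phi @ P)) / lam   # covariance update
        return theta

    rng = np.random.default_rng(0)
    phis = rng.standard_normal((200, 2))
    ys = phis @ np.array([2.0, -1.0]) + 0.05 * rng.standard_normal(200)
    print(rls_forgetting(phis, ys))                # close to [2, -1]
    ```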

  16. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  17. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  18. Shadow algorithms data miner

    CERN Document Server

    Woo, Andrew

    2012-01-01

    Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.

  19. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  20. Portfolios of quantum algorithms.

    Science.gov (United States)

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature, they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.

  1. Cloud Service Scheduling Algorithm Research and Optimization

    Directory of Open Access Journals (Sweden)

    Hongyan Cui

    2017-01-01

    Full Text Available We propose a cloud service scheduling model referred to as the Task Scheduling System (TSS). In the user module, the processing time of each task follows a general distribution. In the task scheduling module, we take a weighted sum of makespan and flowtime as the objective function and use an Ant Colony Optimization (ACO) and a Genetic Algorithm (GA) to solve the problem of cloud task scheduling. Simulation results show that the convergence speed and output performance of our Genetic Algorithm-Chaos Ant Colony Optimization (GA-CACO) are optimal.

  2. Dynamic airspace configuration method based on a weighted graph model

    Directory of Open Access Journals (Sweden)

    Chen Yangzhou

    2014-08-01

    Full Text Available This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, where the vertices represent key points such as airports and waypoints, and the edges represent air routes. The vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Then, aircraft counts for each cell and each air route are computed. By assigning both the vertices and the edges these aircraft counts, a weighted graph model comes into being, and the airspace configuration problem is described as a weighted graph partitioning problem. The problem is then solved by a graph partitioning algorithm, which is a mixture of a general weighted graph-cuts algorithm, an optimal dynamic load-balancing algorithm, and a heuristic algorithm. After the cuts algorithm partitions the model into sub-graphs, the load-balancing algorithm together with the heuristic algorithm transfers aircraft counts to balance the workload among sub-graphs. Lastly, airspace configuration is completed by determining the sector boundaries. The simulation results show that the designed sectors satisfy not only the workload-balancing condition but also constraints such as convexity, connectivity, and minimum distance.

  3. Resizing Technique-Based Hybrid Genetic Algorithm for Optimal Drift Design of Multistory Steel Frame Buildings

    Directory of Open Access Journals (Sweden)

    Hyo Seon Park

    2014-01-01

    Full Text Available Since genetic algorithm-based optimization methods are computationally expensive for practical use in the field of structural optimization, a resizing technique-based hybrid genetic algorithm for the drift design of multistory steel frame buildings is proposed to increase the convergence speed of genetic algorithms. To reduce the number of structural analyses required for the convergence, a genetic algorithm is combined with a resizing technique that is an efficient optimal technique to control the drift of buildings without the repetitive structural analysis. The resizing technique-based hybrid genetic algorithm proposed in this paper is applied to the minimum weight design of three steel frame buildings. To evaluate the performance of the algorithm, optimum weights, computational times, and generation numbers from the proposed algorithm are compared with those from a genetic algorithm. Based on the comparisons, it is concluded that the hybrid genetic algorithm shows clear improvements in convergence properties.

  4. Geometry correction Algorithm for UAV Remote Sensing Image Based on Improved Neural Network

    Science.gov (United States)

    Liu, Ruian; Liu, Nan; Zeng, Beibei; Chen, Tingting; Yin, Ninghao

    2018-03-01

    Aiming at the disadvantages of current geometry correction algorithms for UAV remote sensing images, a new algorithm is proposed. An adaptive genetic algorithm (AGA) and an RBF neural network are introduced into this algorithm. Combined with the geometry correction principle for UAV remote sensing images, the algorithm and solution steps of AGA-RBF are presented in order to realize geometry correction for UAV remote sensing. The correction accuracy and operational efficiency are improved by optimizing the structure and connection weights of the RBF neural network separately with the AGA and the LMS algorithm. Finally, experiments show that the AGA-RBF algorithm has the advantages of high correction accuracy, a high running rate, and strong generalization ability.

  5. Algorithm 426 : Merge sort algorithm [M1

    NARCIS (Netherlands)

    Bron, C.

    1972-01-01

    Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives

  6. Robust stability analysis of adaptation algorithms for single perceptron.

    Science.gov (United States)

    Hui, S; Zak, S H

    1991-01-01

    The problem of robust stability and convergence of the learning parameters of adaptation algorithms in a noisy environment for the single perceptron is addressed. The case in which the same input pattern is presented in the adaptation cycle is analyzed. The algorithm proposed is of the Widrow-Hoff type. It is concluded that this algorithm is robust. However, the weight vectors do not necessarily converge in the presence of measurement noise. A modified version of this algorithm, in which the reduction factors are allowed to vary with time, is proposed, and it is shown that this algorithm is robust and that the weight vectors converge in the presence of bounded noise. Only deterministic-type arguments are used in the analysis. An ultimate bound on the error in terms of a convex combination of the initial error and the bound on the noise is obtained.
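
    A sketch of the setting analyzed above: a Widrow-Hoff-type update on a single repeated input pattern, with an optional time-varying reduction factor. The 1/t schedule and the noise model are illustrative assumptions, not the paper's exact conditions.

    ```python
    import numpy as np

    def widrow_hoff(x, targets, eta0=0.5, decay=True):
        """Widrow-Hoff-type adaptation when the same input pattern x is
        presented every cycle; decay=True makes the reduction factor shrink
        over time, the modification that restores convergence under bounded
        noise in the analysis above.
        """
        x = np.asarray(x, dtype=float)
        w = np.zeros_like(x)
        for t, d in enumerate(targets, start=1):   # d: noisy desired output
            eta = eta0 / t if decay else eta0      # time-varying reduction factor
            w = w + eta * (d - w @ x) * x / (x @ x)
        return w

    rng = np.random.default_rng(0)
    x = np.array([1.0, -2.0, 0.5])
    noisy_targets = 3.0 + 0.1 * rng.standard_normal(500)   # bounded-ish noise
    print(widrow_hoff(x, noisy_targets) @ x)               # converges near 3.0
    ```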

  7. Preventing Weight Gain

    Science.gov (United States)

    Consumer health information on preventing weight gain, including how to choose a healthful eating plan.

  8. An objective approach to determining criteria weights

    Directory of Open Access Journals (Sweden)

    Milić R. Milićević

    2012-01-01

    Full Text Available This paper presents an objective approach to determining criteria weights that can be successfully used in multiple-criteria models. The entropy, CRITIC, and FANMA methods are presented, as well as a possible combination of the objective and subjective approaches. Although based on different theoretical settings, and therefore realized by different algorithms, all of the methods take a decision matrix as their starting point. An objective approach to determining criteria weights eliminates the negative influence of the decision maker on the criteria weights and hence on the final solution of a multicriteria problem. The main aim of this paper is to systematize the described procedures as an aid when determining criteria weights for multicriteria tasks. A numerical example shows a possible application of the methods.
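
    Of the three methods, the entropy method is the most compact to state. A minimal sketch, assuming a benefit-type decision matrix with positive entries:

    ```python
    import numpy as np

    def entropy_weights(D):
        """Entropy method: criteria whose values vary more across the
        alternatives carry more information and receive larger weights.
        D is the (alternatives x criteria) decision matrix, benefit-type,
        with positive entries.
        """
        D = np.asarray(D, dtype=float)
        m = D.shape[0]
        P = D / D.sum(axis=0)                          # column-wise proportions
        E = -(P * np.log(P)).sum(axis=0) / np.log(m)   # entropy per criterion
        d = 1.0 - E                                    # degree of diversification
        return d / d.sum()

    D = np.array([[250, 16, 12], [200, 16, 8], [300, 32, 16], [275, 32, 8]])
    print(entropy_weights(D))                          # objective criteria weights
    ```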

  10. Stochastic weighted particle methods for population balance equations with coagulation, fragmentation and spatial inhomogeneity

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kok Foong [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom); Patterson, Robert I.A.; Wagner, Wolfgang [Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstraße 39, 10117 Berlin (Germany); Kraft, Markus, E-mail: mk306@cam.ac.uk [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom); School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, Singapore, 637459 (Singapore)

    2015-12-15

    Highlights: •Problems concerning multi-compartment population balance equations are studied. •A class of fragmentation weight transfer functions is presented. •Three stochastic weighted algorithms are compared against the direct simulation algorithm. •The numerical errors of the stochastic solutions are assessed as a function of fragmentation rate. •The algorithms are applied to a multi-dimensional granulation model. -- Abstract: This paper introduces stochastic weighted particle algorithms for the solution of multi-compartment population balance equations. In particular, it presents a class of fragmentation weight transfer functions which are constructed such that the number of computational particles stays constant during fragmentation events. The weight transfer functions are constructed based on systems of weighted computational particles, and each of them leads to a stochastic particle algorithm for the numerical treatment of population balance equations. Besides fragmentation, the algorithms also consider physical processes such as coagulation and the exchange of mass with the surroundings. The numerical properties of the algorithms are compared to the direct simulation algorithm and an existing method for the fragmentation of weighted particles. It is found that the new algorithms show better numerical performance over the two existing methods, especially for systems with a significant amount of large particles and high fragmentation rates.

  11. Weight management in pregnancy

    OpenAIRE

    Olander, E. K.

    2015-01-01

    Key learning points:
    - Women who start pregnancy in an overweight or obese weight category have increased health risks
    - Irrespective of pre-pregnancy weight category, there are health risks associated with gaining too much weight in pregnancy for both mother and baby
    - There are currently no official weight gain guidelines for pregnancy in the UK, thus focus needs to be on supporting pregnant women to eat healthily and keep active

  12. Predicting Students’ Performance using Modified ID3 Algorithm

    OpenAIRE

    Ramanathan L; Saksham Dhanda; Suresh Kumar D

    2013-01-01

    The ability to predict the performance of students is very crucial in our present education system. We can use data mining concepts for this purpose. The ID3 algorithm is one of the most famous algorithms used today to generate decision trees. But this algorithm has a shortcoming: it is biased toward attributes with many values. So, this research aims to overcome this shortcoming of the algorithm by using gain ratio (instead of information gain) as well as by giving weights to each attribute at every...
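
    The shortcoming and its fix are easy to see in code: information gain alone rewards an attribute with a unique value per record, while dividing by the split information, which is what the gain ratio below does, penalizes exactly those many-valued attributes. The toy data are illustrative.

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        n = len(labels)
        return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

    def gain_ratio(values, labels):
        """Gain ratio = information gain / split information; the divisor
        counters ID3's bias toward attributes with many distinct values."""
        n = len(labels)
        groups = {}
        for v, y in zip(values, labels):
            groups.setdefault(v, []).append(y)
        gain = entropy(labels) - sum(len(g) / n * entropy(g)
                                     for g in groups.values())
        split_info = -sum(len(g) / n * math.log2(len(g) / n)
                          for g in groups.values())
        return gain / split_info if split_info > 0 else 0.0

    # An "id"-like attribute with unique values vs. a genuinely informative one.
    labels = ["yes", "yes", "no", "no", "yes", "no"]
    print(gain_ratio([1, 2, 3, 4, 5, 6], labels))             # penalized: ~0.39
    print(gain_ratio(["a", "a", "b", "b", "a", "b"], labels))  # rewarded: 1.0
    ```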

  13. Foam: A general purpose Monte Carlo cellular algorithm

    International Nuclear Information System (INIS)

    Jadach, S.

    2002-01-01

    A general-purpose, self-adapting Monte Carlo (MC) algorithm implemented in the program Foam is described. The high efficiency of the MC, that is small maximum weight or variance of the MC weight is achieved by means of dividing the integration domain into small cells. The cells can be n-dimensional simplices, hyperrectangles or a Cartesian product of them. The grid of cells, called 'foam', is produced in the process of the binary split of the cells. The choice of the next cell to be divided and the position/direction of the division hyperplane is driven by the algorithm which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm is able to deal, in principle, with an arbitrary pattern of the singularities in the distribution. (author)

  14. Foam A General purpose Monte Carlo Cellular Algorithm

    CERN Document Server

    Jadach, Stanislaw

    2002-01-01

    A general-purpose, self-adapting Monte Carlo (MC) algorithm implemented in the program Foam is described. The high efficiency of the MC, that is small maximum weight or variance of the MC weight, is achieved by means of dividing the integration domain into small cells. The cells can be n-dimensional simplices, hyperrectangles or a Cartesian product of them. The grid of cells, "foam", is produced in the process of the binary split of the cells. The next cell to be divided and the position/direction of the division hyperplane is chosen by the algorithm which optimizes the ratio of the maximum weight to the average weight or (optionally) the total variance. The algorithm is able to deal, in principle, with an arbitrary pattern of the singularities in the distribution.

  15. Gradient Weight in Phonology

    Science.gov (United States)

    Ryan, Kevin Michael

    2011-01-01

    Research on syllable weight in generative phonology has focused almost exclusively on systems in which weight is treated as an ordinal hierarchy of clearly delineated categories (e.g. light and heavy). As I discuss, canonical weight-sensitive phenomena in phonology, including quantitative meter and quantity-sensitive stress, can also treat weight…

  16. Gestational weight gain.

    Science.gov (United States)

    Kominiarek, Michelle A; Peaceman, Alan M

    2017-12-01

    Prenatal care providers are advised to evaluate maternal weight at each regularly scheduled prenatal visit, monitor progress toward meeting weight gain goals, and provide individualized counseling if significant deviations from a woman's goals occur. Today, nearly 50% of women exceed their weight gain goals with overweight and obese women having the highest prevalence of excessive weight gain. Risks of inadequate weight gain include low birthweight and failure to initiate breast-feeding whereas the risks of excessive weight gain include cesarean deliveries and postpartum weight retention for the mother and large-for-gestational-age infants, macrosomia, and childhood overweight or obesity for the offspring. Prenatal care providers have many resources and tools to incorporate weight and other health behavior counseling into routine prenatal practices. Because many women are motivated to improve health behaviors, pregnancy is often considered the optimal time to intervene for issues related to eating habits and physical activity to prevent excessive weight gain. Gestational weight gain is a potentially modifiable risk factor for a number of adverse maternal and neonatal outcomes and meta-analyses of randomized controlled trials report that diet or exercise interventions during pregnancy can help reduce excessive weight gain. However, health behavior interventions for gestational weight gain have not significantly improved other maternal and neonatal outcomes and have limited effectiveness in overweight and obese women. Copyright © 2017 Elsevier Inc. All rights reserved.

  17. Composite Differential Search Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithm performs better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three of the newly proposed search schemes, “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1,” with three control parameters, using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability on the 23 benchmark functions.

  18. Algorithms and Their Explanations

    NARCIS (Netherlands)

    Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.

    2014-01-01

    By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of

  19. Finite lattice extrapolation algorithms

    International Nuclear Information System (INIS)

    Henkel, M.; Schuetz, G.

    1987-08-01

    Two algorithms for sequence extrapolation, due to von den Broeck and Schwartz, and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite-lattice data are available. (orig.)

  20. Recursive automatic classification algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bauman, E V; Dorofeyuk, A A

    1982-03-01

    A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.

  1. Graph Colouring Algorithms

    DEFF Research Database (Denmark)

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...

  2. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Resonance – Journal of Science Education, Volume 2, Issue 8: Algorithms – Algorithm Design Techniques, a series article by R K Shyamasundar (Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India).

  3. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  4. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for the Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.

  5. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L_1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L_1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotype polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry

  6. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, referred to here as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  7. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  8. Where genetic algorithms excel.

    Science.gov (United States)

    Baum, E B; Boneh, D; Garrett, C

    2001-01-01

    We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.

  9. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network......-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality...... of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed....

  10. Research on AHP decision algorithms based on BP algorithm

    Science.gov (United States)

    Ma, Ning; Guan, Jianhe

    2017-10-01

    Decision making is the thinking activity by which people choose or judge, and scientific decision making has long been a hot issue in research. The Analytic Hierarchy Process (AHP) is a simple and practical multi-criteria, multi-objective decision-making method that combines quantitative and qualitative analysis and can express subjective judgments in numerical form. In decision analysis using the AHP method, the rationality of the pairwise-comparison judgment matrix has a great influence on the decision result. However, in dealing with real problems, the judgment matrix produced by pairwise comparison is often inconsistent, that is, it does not meet the consistency requirements. The BP neural network algorithm is an adaptive nonlinear dynamic system with powerful collective computing and learning ability; it can refine the data by constantly modifying the weights and thresholds of the network to minimize the mean square error. In this paper, the BP algorithm is used to restore the consistency of the pairwise-comparison judgment matrix of the AHP.
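
    For context, the consistency test that such a repair method targets is the standard consistency ratio computed from the principal eigenvalue of the judgment matrix. A minimal NumPy sketch (the judgment matrix is an invented example; the BP-based repair itself is not reproduced here):

    ```python
    import numpy as np

    # Saaty's Random Index table for matrix sizes 1..10.
    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
          6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45, 10: 1.49}

    def ahp_consistency(J):
        """Priority weights and consistency ratio of a pairwise judgment
        matrix J; CR < 0.1 is conventionally considered consistent."""
        n = J.shape[0]
        eigvals, eigvecs = np.linalg.eig(J)
        k = np.argmax(eigvals.real)
        w = np.abs(eigvecs[:, k].real)
        w /= w.sum()                          # principal eigenvector -> weights
        ci = (eigvals[k].real - n) / (n - 1)  # consistency index
        return w, ci / RI[n]

    J = np.array([[1, 3, 5],
                  [1/3, 1, 2],
                  [1/5, 1/2, 1]], float)
    weights, cr = ahp_consistency(J)
    print(weights, cr)
    ```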

  11. Minkowski metrics in creating universal ranking algorithms

    Directory of Open Access Journals (Sweden)

    Andrzej Ameljańczyk

    2014-06-01

    Full Text Available The paper presents a general procedure for creating rankings of a set of objects, where the preference relation is based on an arbitrary ranking function. The analysis of admissible ranking functions begins by showing the fundamental drawbacks of the commonly used functions in the form of a weighted sum. As a special case of the ranking procedure in the space of the relation, a procedure based on the notion of an ideal element and the generalized Minkowski distance from that element is proposed. This procedure, presented as a universal ranking algorithm, eliminates most of the disadvantages of ranking functions in the form of a weighted sum. Keywords: ranking functions, preference relation, ranking clusters, categories, ideal point, universal ranking algorithm
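
    A minimal sketch of the ideal-point ranking idea, assuming each criterion is to be maximized and using invented scores; the weighted sum is replaced by a generalized (weighted) Minkowski distance to the ideal element:

    ```python
    import numpy as np

    def rank_by_ideal(X, ideal, p=2, weights=None):
        """Rank objects (rows of X) by weighted Minkowski distance to an
        ideal point; a smaller distance means a better rank."""
        w = np.ones(X.shape[1]) if weights is None else np.asarray(weights)
        d = (w * np.abs(X - ideal) ** p).sum(axis=1) ** (1.0 / p)
        return np.argsort(d), d

    # Three objects scored on two criteria (higher is better on both),
    # so the ideal point is the per-criterion maximum.
    X = np.array([[0.9, 0.4], [0.6, 0.7], [0.8, 0.8]])
    order, dist = rank_by_ideal(X, ideal=X.max(axis=0))
    print(order, dist)
    ```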

  12. A Novel Parallel Algorithm for Edit Distance Computation

    Directory of Open Access Journals (Sweden)

    Muhammad Murtaza Yousaf

    2018-01-01

    Full Text Available The edit distance between two sequences is the minimum number of weighted transformation operations required to transform one string into the other; the weighted operations are insert, remove, and substitute. A dynamic programming solution for edit distance exists, but it becomes computationally intensive when the strings are very long. This work presents a novel parallel algorithm for the edit distance problem of string matching. The algorithm is based on resolving dependencies in the dynamic programming solution of the problem and is able to compute each row of the edit distance table in parallel. In this way, it becomes possible to compute the complete table in min(m,n) iterations for strings of size m and n, whereas the state-of-the-art parallel algorithm solves the problem in max(m,n) iterations. The proposed algorithm also increases the amount of parallelism in each of its iterations and is capable of exploiting spatial locality during its implementation. Additionally, the algorithm works in a load-balanced way that further improves its performance. The algorithm is implemented for multicore systems with shared memory. An OpenMP implementation shows linear speedup and better execution time than the state-of-the-art parallel approach, and the algorithm's efficiency is also shown to be better than that of its competitor.
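
    For reference, the serial dynamic-programming baseline that the paper parallelizes is sketched below with unit operation weights; the dependency-resolution step that makes whole rows computable in parallel is the paper's contribution and is not reproduced here:

    ```python
    import numpy as np

    def edit_distance(s, t, w_ins=1, w_del=1, w_sub=1):
        """Classic DP edit distance with weighted insert/remove/substitute."""
        m, n = len(s), len(t)
        D = np.zeros((m + 1, n + 1), dtype=int)
        D[:, 0] = np.arange(m + 1) * w_del
        D[0, :] = np.arange(n + 1) * w_ins
        for i in range(1, m + 1):
            # Row i depends on row i-1; the paper resolves this dependency
            # so that all cells of a row can be filled concurrently.
            for j in range(1, n + 1):
                sub = 0 if s[i - 1] == t[j - 1] else w_sub
                D[i, j] = min(D[i - 1, j] + w_del,
                              D[i, j - 1] + w_ins,
                              D[i - 1, j - 1] + sub)
        return D[m, n]

    print(edit_distance("kitten", "sitting"))  # 3 with unit weights
    ```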

  13. Thyroid weight with age

    International Nuclear Information System (INIS)

    Raulier-Fabry, C.; Hammer, R.

    1965-01-01

    A large number of data on the thyroid weight of euthyroid subjects have been collected from the literature in this study. The most probable average weight of the gland appears to be 20 g in the adult and 2 g in the newborn. A decrease in weight has been observed during the first year of life (1 g at 6 months), and the initial birth weight is reached again only in the second year of life. The weight curve may be considered as consisting of three straight lines: from 2 to 7, from 7 to 18 and from 18 to 25 years, their slopes being respectively 0.6, 0.9 and 0.5 g/year. The variations in thyroid weight during adulthood are sufficiently small to consider it constant between 25 and 55 years. The available information points to a negligible weight difference between sexes. (authors) [fr

  14. Navigating Weighted Regions with Scattered Skinny Tetrahedra

    KAUST Repository

    Cheng, Siu-Wing; Chiu, Man-Kwun; Jin, Jiongxin; Vigneron, Antoine E.

    2015-01-01

    We propose an algorithm for finding a (1 + ε)-approximate shortest path through a weighted 3D simplicial complex T. The weights are integers from the range [1, W] and the vertices have integral coordinates. Let N be the largest vertex coordinate magnitude, and let n be the number of tetrahedra in T. Let ρ be some arbitrary constant. Let κ be the size of the largest connected component of tetrahedra whose aspect ratios exceed ρ. There exists a constant C, dependent on ρ but independent of T, such that if κ ≤ (1/C) log log n + O(1), the running time of our algorithm is polynomial in n, 1/ε and log(NW). If κ = O(1), the running time reduces to O(n ε⁻¹ log(NW)).

  16. Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm

    Directory of Open Access Journals (Sweden)

    Jianyong Liu

    2015-01-01

    Full Text Available A method in which a real-coded quantum-inspired genetic algorithm (RQGA) is used to optimize the weights and thresholds of a BP neural network is proposed, to overcome the defect that the gradient descent method easily falls into local optima during learning. The quantum genetic algorithm (QGA) has good directed global optimization ability, but the conventional QGA is based on binary coding, and the coding and decoding processes slow down the calculation. RQGA is therefore introduced to explore the search space, and an improved variable learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm converges rapidly to solutions that satisfy the constraint conditions.

  17. Yogurt and weight management.

    Science.gov (United States)

    Jacques, Paul F; Wang, Huifen

    2014-05-01

    A large body of observational studies and randomized controlled trials (RCTs) has examined the role of dairy products in weight loss and maintenance of healthy weight. Yogurt is a dairy product that is generally very similar to milk, but it also has some unique properties that may enhance its possible role in weight maintenance. This review summarizes the human RCT and prospective observational evidence on the relation of yogurt consumption to the management and maintenance of body weight and composition. The RCT evidence is limited to 2 small, short-term, energy-restricted trials. They both showed greater weight losses with yogurt interventions, but the difference between the yogurt intervention and the control diet was only significant in one of these trials. There are 5 prospective observational studies that have examined the association between yogurt and weight gain. The results of these studies are equivocal. Two of these studies reported that individuals with higher yogurt consumption gained less weight over time. One of these same studies also considered changes in waist circumference (WC) and showed that higher yogurt consumption was associated with smaller increases in WC. A third study was inconclusive because of low statistical power. A fourth study observed no association between changes in yogurt intake and weight gain, but the results suggested that those with the largest increases in yogurt intake during the study also had the highest increase in WC. The final study examined weight and WC change separately by sex and baseline weight status and showed benefits for both weight and WC changes for higher yogurt consumption in overweight men, but it also found that higher yogurt consumption in normal-weight women was associated with a greater increase in weight over follow-up. Potential underlying mechanisms for the action of yogurt on weight are briefly discussed.

  18. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Grobner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Grobner bases) and its application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Grobner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]); this includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases. It is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Grobner basis algorithm. For a complete description of SINGULAR see [Si].
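
    SINGULAR itself is a standalone computer algebra system, but the core computation is easy to illustrate; the sketch below uses sympy's groebner routine as a stand-in for a standard-basis computation, on an invented two-generator ideal with lexicographic ordering:

    ```python
    from sympy import groebner, symbols

    # Groebner basis of the ideal (x^2 + y^2 - 1, x*y - 1) over Q,
    # with respect to the lexicographic ordering x > y.
    x, y = symbols('x y')
    G = groebner([x**2 + y**2 - 1, x*y - 1], x, y, order='lex')
    print(G)
    ```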

  19. Online EM with weight-based forgetting

    OpenAIRE

    Celaya, Enric; Agostini, Alejandro

    2015-01-01

    In the on-line version of the EM algorithm introduced by Sato and Ishii (2000), a time-dependent discount factor is used to forget the effect of the old posterior values obtained with an earlier, inaccurate estimator. In their approach, forgetting is applied uniformly to the estimators of each mixture component, depending exclusively on time, irrespective of the weight attributed to each unit for the observed sample. This causes excessive forgetting in the less frequently sampled...

  20. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

    Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm and various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.

  1. Malliavin Weight Sampling: A Practical Guide

    Directory of Open Access Journals (Sweden)

    Patrick B. Warren

    2013-12-01

    Full Text Available Malliavin weight sampling (MWS) is a stochastic calculus technique for computing the derivatives of averaged system properties with respect to parameters in stochastic simulations, without perturbing the system’s dynamics. It applies to systems in or out of equilibrium, in steady state or time-dependent situations, and has applications in the calculation of response coefficients, parameter sensitivities and Jacobian matrices for gradient-based parameter optimisation algorithms. The implementation of MWS has been described in the specific contexts of kinetic Monte Carlo and Brownian dynamics simulation algorithms. Here, we present a general theoretical framework for deriving the appropriate MWS update rule for any stochastic simulation algorithm. We also provide pedagogical information on its practical implementation.
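
    A minimal sketch for overdamped Brownian dynamics, assuming an Ornstein-Uhlenbeck drift a(x; λ) = -λx: differentiating the log of the Gaussian transition density of the Euler step gives the weight increment (∂a/∂λ)(Δx - a·dt)/(2D), and the parameter derivative of any average is then ⟨O·q⟩:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    D, dt, lam, steps, walkers = 1.0, 1e-3, 1.0, 5000, 2000

    a      = lambda x: -lam * x   # drift (Ornstein-Uhlenbeck, an assumption)
    dadlam = lambda x: -x         # derivative of the drift w.r.t. lambda

    x = np.zeros(walkers)         # trajectories
    q = np.zeros(walkers)         # Malliavin weights, accumulated alongside
    for _ in range(steps):
        noise = np.sqrt(2 * D * dt) * rng.standard_normal(walkers)
        q += dadlam(x) * noise / (2 * D)   # does not perturb the dynamics
        x += a(x) * dt + noise

    O = x**2                                # observable of interest
    print(O.mean(), (O * q).mean())         # <x^2> and d<x^2>/d(lambda)
    ```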

  2. Magnet sorting algorithms

    International Nuclear Information System (INIS)

    Dinev, D.

    1996-01-01

    Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)

  3. Optimization of Pressurizer Based on Genetic-Simplex Algorithm

    International Nuclear Information System (INIS)

    Wang, Cheng; Yan, Chang Qi; Wang, Jian Jun

    2014-01-01

    The pressurizer is one of the key components in a nuclear power system, and it is important to control its dimensions in the design through optimization techniques. In this work, a mathematical model of a vertical electrically heated pressurizer was established. A new Genetic-Simplex Algorithm (GSA), which combines the genetic algorithm and the simplex algorithm, was developed to enhance the searching ability, and the modified and original algorithms were compared on benchmark functions. Furthermore, the optimization design of the pressurizer, taking minimization of volume and net weight as objectives, was carried out through GSA, considering thermal-hydraulic and geometric constraints. The results indicate that the mathematical model is suitable for the pressurizer and the new algorithm is more effective than the traditional genetic algorithm. The optimized design shows obvious validity and can provide guidance for real engineering design

  4. Minimum nonuniform graph partitioning with unrelated weights

    Science.gov (United States)

    Makarychev, K. S.; Makarychev, Yu S.

    2017-12-01

    We give a bi-criteria approximation algorithm for the Minimum Nonuniform Graph Partitioning problem, recently introduced by Krauthgamer, Naor, Schwartz and Talwar. In this problem, we are given a graph G = (V, E) and k numbers ρ_1, …, ρ_k. The goal is to partition V into k disjoint sets (bins) P_1, …, P_k satisfying |P_i| ≤ ρ_i |V| for all i, so as to minimize the number of edges cut by the partition. Our bi-criteria algorithm gives an O(√(log|V| log k)) approximation for the objective function in general graphs and an O(1) approximation in graphs excluding a fixed minor. The approximate solution satisfies the relaxed capacity constraints |P_i| ≤ (5 + ε)ρ_i |V|. This algorithm is an improvement upon the O(log|V|)-approximation algorithm by Krauthgamer, Naor, Schwartz and Talwar. We extend our results to the case of 'unrelated weights' and to the case of 'unrelated d-dimensional weights'. A preliminary version of this work was presented at the 41st International Colloquium on Automata, Languages and Programming (ICALP 2014). Bibliography: 7 titles.

  5. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed but it also raises some fundamental questions, including: (i) which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode. (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria. (iii) How can we design new algorithms specifically for parallel systems. (iv) For multi-processor systems how can we handle the software aspects of the interprocessor communications. Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  6. Fluid structure coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid structure and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D

  7. Algorithmic phase diagrams

    Science.gov (United States)

    Hockney, Roger

    1987-01-01

    Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.

  8. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  9. Inclusive Flavour Tagging Algorithm

    International Nuclear Information System (INIS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-01-01

    Identifying the production flavour of neutral B mesons is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment. (paper)

  10. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation have resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  11. Predictors of weight maintenance

    NARCIS (Netherlands)

    Pasman, W.J.; Saris, W.H.M.; Westerterp-Plantenga, M.S.

    1999-01-01

    Objective: To obtain predictors of weight maintenance after a weight-loss intervention. Research Methods and Procedures: An overall analysis of data from two long-term intervention studies [n = 67 women; age: 37.9±1.0 years; body weight (BW): 87.0±1.2 kg; body mass index: 32.1±0.5 kg·m-2; % body fat:

  12. Robustness of weighted networks

    Science.gov (United States)

    Bellingeri, Michele; Cassi, Davide

    2018-01-01

    Complex network response to node loss is a central question in different fields of network science, because node failure can cause the fragmentation of the network, thus compromising the functioning of the system. Previous studies considered binary networks, where the intensity (weight) of the links is not accounted for, i.e. a link is either present or absent. However, in real-world networks the weights of connections, and thus their importance for network functioning, can differ widely. Here, we analyzed the response of real-world and model networks to node loss, accounting for link intensity and the weighted structure of the network. We used both classic binary node properties and network functioning measures, introduced a weighted rank for node importance (node strength), and used a measure for network functioning that accounts for the weight of the links (weighted efficiency). We find that: (i) the efficiency of the attack strategies changed when binary or weighted network functioning measures were used, both for real-world and model networks; (ii) in some cases, removing nodes according to weighted rank produced the highest damage when functioning was measured by the weighted efficiency; (iii) adopting a weighted measure for the network damage changed the efficacy of the attack strategy with respect to the binary analyses. Our results show that if the weighted structure of complex networks is not taken into account, misleading models of the system response to node failure may result, i.e. considering links as binary may not unveil the real damage induced in the system. Last, once weighted measures are introduced, in order to discover the best attack strategy it is important to analyze the network response to node loss using node rankings that account for the intensity of the links attached to each node.
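
    A minimal networkx sketch of the two weighted ingredients used above: node strength (sum of incident weights) as a removal rank, and a weighted efficiency in which the 'distance' of a link is taken as the inverse of its weight (the toy graph and this inverse-weight convention are assumptions):

    ```python
    import networkx as nx

    def weighted_efficiency(G):
        """Average of 1/d(i,j) over ordered node pairs, where path lengths
        use link distance = 1/weight (strong links are short)."""
        n = G.number_of_nodes()
        if n < 2:
            return 0.0
        for u, v, d in G.edges(data=True):
            d['dist'] = 1.0 / d['weight']
        total = 0.0
        for src, lengths in nx.all_pairs_dijkstra_path_length(G, weight='dist'):
            total += sum(1.0 / l for node, l in lengths.items() if node != src)
        return total / (n * (n - 1))

    G = nx.Graph()
    G.add_weighted_edges_from([(0, 1, 5.0), (1, 2, 1.0), (2, 3, 0.5), (0, 2, 2.0)])

    # Attack by node strength, strongest first, tracking the damage.
    rank = sorted(G.nodes, key=lambda v: G.degree(v, weight='weight'), reverse=True)
    for v in rank[:2]:
        G.remove_node(v)
        print(v, weighted_efficiency(G))
    ```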

  13. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

    We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role to coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  14. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the correct solution method for your optimization problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc

  15. From Genetics to Genetic Algorithms

    Indian Academy of Sciences (India)

    Genetic algorithms (GAs) are computational optimisation schemes with an ... The algorithms solve optimisation problems ..... Genetic Algorithms in Search, Optimisation and Machine Learning, Addison-Wesley Publishing Company, Inc. 1989.

  16. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  17. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

    Full Text Available With the development of social services and people's rising living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms and security. Using RFID technology for localization is a new direction in the eyes of various research institutions and scholars. RFID positioning systems offer stability, small error and low cost, and their location algorithms are the focus of this study. This article analyzes, layer by layer, the targeting methods and algorithms of RFID technology. First, several common basic RFID methods are introduced; secondly, a higher-accuracy reference-network location method is described; finally, the LANDMARC algorithm is presented. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, deficiencies in the algorithms are pointed out, and requirements for follow-up study are put forward, with a vision of better future RFID positioning technology.
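
    A minimal sketch of the LANDMARC idea mentioned above: the tracked tag's received-signal-strength vector is compared with reference tags at known positions, and the k nearest references in signal space are averaged with weights proportional to 1/E²; all readings and positions are invented for illustration:

    ```python
    import numpy as np

    def landmarc(tag_rssi, ref_rssi, ref_pos, k=3):
        """Estimate a tag position from the k nearest reference tags,
        where 'nearest' is Euclidean distance in RSSI space."""
        E = np.linalg.norm(ref_rssi - tag_rssi, axis=1)
        nearest = np.argsort(E)[:k]
        w = 1.0 / (E[nearest] ** 2 + 1e-12)    # closer references weigh more
        w /= w.sum()
        return w @ ref_pos[nearest]

    # 4 readers, 5 reference tags on a known grid (illustrative numbers).
    ref_rssi = np.array([[-40, -55, -60, -70],
                         [-50, -45, -65, -68],
                         [-60, -60, -50, -55],
                         [-70, -65, -45, -50],
                         [-55, -52, -58, -62]], float)
    ref_pos = np.array([[0, 0], [1, 0], [1, 1], [2, 1], [0.5, 0.5]], float)
    print(landmarc(np.array([-52, -50, -60, -64], float), ref_rssi, ref_pos))
    ```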

  18. Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Surafel Luleseged Tilahun

    2012-01-01

    Full Text Available The firefly algorithm is one of the new metaheuristic algorithms for optimization problems, inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly is attracted to a brighter firefly and, if there is no brighter firefly, it moves randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the direction in which the brightness increases most; if no such direction is generated, the firefly remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. Simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution with smaller CPU time.
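
    For reference, one iteration of the standard firefly movement rule that the paper modifies (minimization; the parameter values and test function are assumptions):

    ```python
    import numpy as np

    def firefly_step(X, f, beta0=1.0, gamma=1.0, alpha=0.2, rng=None):
        """One sweep of the standard firefly algorithm: each firefly moves
        toward every brighter one with attractiveness beta0*exp(-gamma*r^2),
        plus a small random perturbation."""
        rng = rng or np.random.default_rng()
        brightness = -np.apply_along_axis(f, 1, X)   # brighter = lower cost
        X_new = X.copy()
        for i in range(len(X)):
            for j in range(len(X)):
                if brightness[j] > brightness[i]:
                    r2 = np.sum((X[j] - X[i]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    X_new[i] += (beta * (X[j] - X[i])
                                 + alpha * (rng.random(X.shape[1]) - 0.5))
        # Note: the brightest firefly stays put in this sketch; in the
        # standard algorithm it moves randomly, which is the step the paper
        # replaces with a sampled-direction move.
        return X_new

    rng = np.random.default_rng(0)
    X = rng.uniform(-5, 5, size=(20, 2))
    sphere = lambda x: np.sum(x ** 2)
    for _ in range(50):
        X = firefly_step(X, sphere, rng=rng)
    print(min(np.apply_along_axis(sphere, 1, X)))
    ```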

  19. Optimal Weighting for Exam Composition

    Directory of Open Access Journals (Sweden)

    Sam Ganzfried

    2018-03-01

    Full Text Available A problem faced by many instructors is that of designing exams that accurately assess the abilities of the students. Typically, these exams are prepared several days in advance, and generic question scores are used based on a rough approximation of the question difficulty and length. For example, for a recent class taught by the author, there were 30 multiple choice questions worth 3 points, 15 true/false with explanation questions worth 4 points, and 5 analytical exercises worth 10 points. We describe a novel framework in which algorithms from machine learning are used to modify the exam question weights in order to optimize the exam scores, using the overall final score as a proxy for a student’s true ability. We show that significant error reduction can be obtained by our approach over standard weighting schemes: for the final and midterm exams, the mean absolute prediction error decreases by 90.58% and 97.70%, respectively, with the linear regression approach, resulting in better estimation. We make several new observations regarding the properties of the “good” and “bad” exam questions that can have an impact on the design of improved future evaluation methods.
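
    A minimal sketch of the regression idea on synthetic data (all numbers invented): per-question weights are fit by least squares so that the weighted question scores best predict the overall score used as the ability proxy:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    students, questions = 120, 10
    ability = rng.uniform(0, 1, students)

    # 0/1 question scores: stronger students answer more questions correctly.
    scores = (rng.uniform(0, 1, (students, questions))
              < ability[:, None]).astype(float)
    final = 100 * ability + rng.normal(0, 3, students)   # ability proxy

    w, *_ = np.linalg.lstsq(scores, final, rcond=None)   # fitted weights
    pred = scores @ w
    print("mean absolute error:", np.mean(np.abs(pred - final)))
    ```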

  20. Research on loss of coolant accident of pressurized-water reactor based on PSO algorithm

    International Nuclear Information System (INIS)

    Ma Jie; Guo Lifeng; Peng Qiao

    2012-01-01

    In order to improve the diagnosis of Loss of Coolant Accidents (LOCA), and building on a study of the Back Propagation (BP) algorithm, a fault diagnosis network based on the Particle Swarm Optimization (PSO) algorithm is established in this paper. The PSO algorithm is used to train the weights and thresholds of the neural network, which overcomes, in part, the local convergence problem of the BP algorithm. The test results show that the diagnosis network has higher accuracy in diagnosing LOCA. (authors)
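
    A minimal sketch of the train-the-weights-by-PSO idea on a toy network (the network shape, data and PSO parameters are assumptions, not the paper's diagnosis network):

    ```python
    import numpy as np

    def pso_train(loss, dim, particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
        """PSO over a flat weight vector: each particle is one candidate set
        of network weights/thresholds, pulled toward its personal best and
        the global best instead of following gradients."""
        rng = np.random.default_rng(0)
        X = rng.uniform(-1, 1, (particles, dim))
        V = np.zeros_like(X)
        P = X.copy()
        P_loss = np.array([loss(x) for x in X])
        g = P[P_loss.argmin()].copy()
        for _ in range(iters):
            r1, r2 = rng.random((2, particles, dim))
            V = w * V + c1 * r1 * (P - X) + c2 * r2 * (g - X)
            X = X + V
            L = np.array([loss(x) for x in X])
            better = L < P_loss
            P[better], P_loss[better] = X[better], L[better]
            g = P[P_loss.argmin()].copy()
        return g

    # Toy two-hidden-unit network fitted to XOR-like data.
    data = np.array([[0, 0, 0], [0, 1, 1], [1, 0, 1], [1, 1, 0]], float)

    def loss(p):
        W1, b1 = p[:4].reshape(2, 2), p[4:6]
        w2, b2 = p[6:8], p[8]
        h = np.tanh(data[:, :2] @ W1 + b1)
        out = np.tanh(h @ w2 + b2)
        return np.mean((out - data[:, 2]) ** 2)

    print(loss(pso_train(loss, dim=9)))
    ```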

  1. Improved Artificial Fish Algorithm for Parameters Optimization of PID Neural Network

    OpenAIRE

    Jing Wang; Yourui Huang

    2013-01-01

    In order to solve problems such as initial weights being difficult to determine and training results easily becoming trapped in local minima when PID neural network parameters are optimized by the traditional BP algorithm, this paper proposes a new method based on an improved artificial fish algorithm for parameter optimization of PID neural networks. This improved artificial fish algorithm uses a composite adaptive artificial fish algorithm based on the optimal artificial fish and the nearest artificial fi...

  2. Hardware Design Considerations for Edge-Accelerated Stereo Correspondence Algorithms

    Directory of Open Access Journals (Sweden)

    Christos Ttofis

    2012-01-01

    Full Text Available Stereo correspondence is a popular algorithm for the extraction of depth information from a pair of rectified 2D images. Hence, it has been used in many computer vision applications that require knowledge about depth. However, stereo correspondence is a computationally intensive algorithm and requires high-end hardware resources in order to achieve real-time processing speed in embedded computer vision systems. This paper presents an overview of the use of edge information as a means to accelerate hardware implementations of stereo correspondence algorithms. The presented approach restricts the stereo correspondence algorithm only to the edges of the input images rather than to all image points, thus resulting in a considerable reduction of the search space. The paper highlights the benefits of the edge-directed approach by applying it to two stereo correspondence algorithms: an SAD-based fixed-support algorithm and a more complex adaptive support weight algorithm. Furthermore, we present design considerations about the implementation of these algorithms on reconfigurable hardware and also discuss issues related to the memory structures needed, the amount of parallelism that can be exploited, the organization of the processing blocks, and so forth. The two architectures (fixed-support based versus adaptive-support weight based are compared in terms of processing speed, disparity map accuracy, and hardware overheads, when both are implemented on a Virtex-5 FPGA platform.
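
    A minimal sketch of the edge-restricted matching described above, assuming rectified grayscale arrays and using a simple horizontal-gradient threshold as a stand-in edge detector; SAD costs are evaluated only where the test fires, shrinking the search space:

    ```python
    import numpy as np

    def sad_disparity_at_edges(left, right, max_disp=16, win=3, edge_thresh=20):
        """Fixed-window SAD stereo matching evaluated only at edge pixels
        of the left image."""
        h, w = left.shape
        gx = np.abs(np.diff(left.astype(float), axis=1, prepend=left[:, :1]))
        disp = np.zeros((h, w), dtype=int)
        r = win // 2
        for y in range(r, h - r):
            for x in range(max_disp + r, w - r):
                if gx[y, x] < edge_thresh:
                    continue                     # skip non-edge pixels
                patch = left[y-r:y+r+1, x-r:x+r+1].astype(float)
                costs = [np.abs(patch - right[y-r:y+r+1, x-d-r:x-d+r+1]).sum()
                         for d in range(max_disp)]
                disp[y, x] = int(np.argmin(costs))
        return disp

    rng = np.random.default_rng(0)
    left = rng.integers(0, 255, (48, 64)).astype(float)
    right = np.roll(left, -4, axis=1)            # synthetic 4-pixel shift
    print(np.max(sad_disparity_at_edges(left, right)))
    ```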

  3. Improved multivariate polynomial factoring algorithm

    International Nuclear Information System (INIS)

    Wang, P.S.

    1978-01-01

    A new algorithm for factoring multivariate polynomials over the integers, based on an algorithm by Wang and Rothschild, is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described; basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included

  4. Weighted Bergman Kernels for Logarithmic Weights

    Czech Academy of Sciences Publication Activity Database

    Engliš, Miroslav

    2010-01-01

    Roč. 6, č. 3 (2010), s. 781-813 ISSN 1558-8599 R&D Projects: GA AV ČR IAA100190802 Keywords : Bergman kernel * Toeplitz operator * logarithmic weight * pseudodifferential operator Subject RIV: BA - General Mathematics Impact factor: 0.462, year: 2010 http://www.intlpress.com/site/pub/pages/journals/items/pamq/content/vols/0006/0003/a008/

  5. A Newton Algorithm for Multivariate Total Least Squares Problems

    Directory of Open Access Journals (Sweden)

    WANG Leyang

    2016-04-01

    Full Text Available In order to improve the calculation efficiency of parameter estimation, an algorithm for multivariate weighted total least squares adjustment based on the Newton method is derived. The relationship between the solution of this algorithm and that of multivariate weighted total least squares adjustment based on the Lagrange multipliers method is analyzed. According to the propagation of cofactors, 16 computational formulae for the cofactor matrices of multivariate total least squares adjustment are also listed. The new algorithm can solve adjustment problems containing correlation between the observation matrix and the coefficient matrix, and it can also deal with their stochastic and deterministic elements using only one cofactor matrix. The results illustrate that the Newton algorithm for multivariate total least squares problems is practical and has a higher convergence rate.

  6. Visual Perception Based Rate Control Algorithm for HEVC

    Science.gov (United States)

    Feng, Zeqi; Liu, PengYu; Jia, Kebin

    2018-01-01

    For HEVC, rate control is an indispensable video coding technology for alleviating the contradiction between video quality and the limited encoding resources during video communication. However, the rate control benchmark algorithm of HEVC ignores subjective visual perception: for key focus regions, bit allocation at the LCU level is not ideal and subjective quality is unsatisfactory. In this paper, a visual perception based rate control algorithm for HEVC is proposed. First, the bit allocation weight at the LCU level is optimized based on the visual perception of luminance and motion to improve subjective video quality. Then λ and QP are adjusted in combination with the bit allocation weight to improve rate-distortion performance. Experimental results show that the proposed algorithm reduces BD-BR by 0.5% on average and by up to 1.09%, at no cost in bitrate accuracy, compared with HEVC (HM15.0). The proposed algorithm is devoted to improving subjective video quality under various video applications.

  7. Body Weight - Multiple Languages

    Science.gov (United States)

  8. Concurrent weighted logic

    DEFF Research Database (Denmark)

    Xue, Bingtian; Larsen, Kim Guldstrand; Mardare, Radu Iulian

    2015-01-01

    We introduce Concurrent Weighted Logic (CWL), a multimodal logic for concurrent labeled weighted transition systems (LWSs). The synchronization of LWSs is described using dedicated functions that, in various concurrency paradigms, allow us to encode the compositionality of LWSs. To reflect these......-completeness results for this logic. To complete these proofs we involve advanced topological techniques from Model Theory....

  9. Unexplained Weight Loss

    Science.gov (United States)

    ... weight is affected by your calorie intake, activity level, overall health, age, nutrient absorption, and economic and social factors. If you're losing weight without trying and you're concerned about it, consult your doctor — as a rule of thumb, losing more than 5 ...

  10. Thyroid and Weight

    Science.gov (United States)

  11. Adolescent Weight Status

    DEFF Research Database (Denmark)

    Hjort Kjelldgaard, Heidi; Holstein, Bjørn Evald; Due, Pernille

    2017-01-01

    day) communication with friends through cellphones, SMS messages, or Internet (1.66, 1.03-2.67). In the full population, overweight/obese weight status was associated with not perceiving best friend as a confidant (1.59, 1.11-2.28). No associations were found between weight status and number of close...

  12. Isotopes and atomic weights

    International Nuclear Information System (INIS)

    Zhang Qinglian

    1990-01-01

    A review of the chemical and mass spectrometric methods of determining the atomic weights of elements is presented. A special discussion is devoted to the calibration of the mass spectrometer with highly enriched isotopes, illustrated by the recent work on europium. How to choose the candidate element for a new atomic weight determination forms the last section of the article

  13. Reciprocity of weighted networks.

    Science.gov (United States)

    Squartini, Tiziano; Picciolo, Francesco; Ruzzenenti, Franco; Garlaschelli, Diego

    2013-01-01

    In directed networks, reciprocal links have dramatic effects on dynamical processes, network growth, and higher-order structures such as motifs and communities. While the reciprocity of binary networks has been extensively studied, that of weighted networks is still poorly understood, implying an ever-increasing gap between the availability of weighted network data and our understanding of their dyadic properties. Here we introduce a general approach to the reciprocity of weighted networks, and define quantities and null models that consistently capture empirical reciprocity patterns at different structural levels. We show that, counter-intuitively, previous reciprocity measures based on the similarity of mutual weights are uninformative. By contrast, our measures allow us to consistently classify different weighted networks according to their reciprocity, track the evolution of a network's reciprocity over time, identify patterns at the level of dyads and vertices, and distinguish the effects of flux (im)balances or other (a)symmetries from a true tendency towards (anti-)reciprocation.
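
    A minimal sketch of one measure in this spirit: the fraction of total link weight that is reciprocated, computed from a directed weight matrix by pairing each link with the opposite-direction weight through an entry-wise minimum (the matrix is an invented example):

    ```python
    import numpy as np

    def weighted_reciprocity(W):
        """Fraction of total weight that is reciprocated:
        r = sum_ij min(w_ij, w_ji) / sum_ij w_ij, off-diagonal only."""
        W = W.copy()
        np.fill_diagonal(W, 0)
        return np.minimum(W, W.T).sum() / W.sum()

    W = np.array([[0, 3, 0],
                  [1, 0, 2],
                  [4, 0, 0]], float)
    print(weighted_reciprocity(W))   # reciprocated weight / total weight
    ```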

  14. Weight discrimination and bullying.

    Science.gov (United States)

    Puhl, Rebecca M; King, Kelly M

    2013-04-01

    Despite significant attention to the medical impacts of obesity, often ignored are the negative outcomes that obese children and adults experience as a result of stigma, bias, and discrimination. Obese individuals are frequently stigmatized because of their weight in many domains of daily life. Research spanning several decades has documented consistent weight bias and stigmatization in employment, health care, schools, the media, and interpersonal relationships. For overweight and obese youth, weight stigmatization translates into pervasive victimization, teasing, and bullying. Multiple adverse outcomes are associated with exposure to weight stigmatization, including depression, anxiety, low self-esteem, body dissatisfaction, suicidal ideation, poor academic performance, lower physical activity, maladaptive eating behaviors, and avoidance of health care. This review summarizes the nature and extent of weight stigmatization against overweight and obese individuals, as well as the resulting consequences that these experiences create for social, psychological, and physical health for children and adults who are targeted. Copyright © 2013 Elsevier Ltd. All rights reserved.

  15. Verification test for on-line diagnosis algorithm based on noise analysis

    International Nuclear Information System (INIS)

    Tamaoki, T.; Naito, N.; Tsunoda, T.; Sato, M.; Kameda, A.

    1980-01-01

    An on-line diagnosis algorithm was developed and its verification test was performed using a minicomputer. The algorithm identifies the plant state by analyzing various system noise patterns, such as power spectral densities and coherence functions, in a three-step procedure. Each obtained noise pattern is examined using the distances from its reference patterns prepared for various plant states. The plant state is then identified by synthesizing each result with an evaluation weight, which is determined automatically from the reference noise patterns prior to on-line diagnosis. The test was performed with 50 MW(th) Steam Generator noise data recorded under various controller parameter values. The algorithm performance was evaluated based on a newly devised index. The results obtained with one kind of weight showed the algorithm's efficiency under proper selection of noise patterns; results for another kind of weight showed the robustness of the algorithm to this selection. (orig.)

  16. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions: using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  18. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Full Text Available Although the concept of algorithms was established a long time ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation can no longer be clearly divided out. An observation of defining aspects of such a medium is attempted by drawing a trajectory across a number of sound pieces; the operation of exchange between form and medium, which I call reconfiguration, is indicated by this trajectory.

  19. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  1. Detection of algorithmic trading

    Science.gov (United States)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. Quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time; whereas price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.

  2. Handbook of Memetic Algorithms

    CERN Document Server

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems.  The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes.   “Handbook of Memetic Algorithms” organizes, in a structured way, all the the most important results in the field of MAs since their earliest definition until now.  A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, uncertainties, are analysed separately and, for each problem,  memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  3. Algorithms in invariant theory

    CERN Document Server

    Sturmfels, Bernd

    2008-01-01

    J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.

  4. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charged particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature ('RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.

  5. ORDERED WEIGHTED DISTANCE MEASURE

    Institute of Scientific and Technical Information of China (English)

    Zeshui XU; Jian CHEN

    2008-01-01

    The aim of this paper is to develop an ordered weighted distance (OWD) measure, which is the generalization of some widely used distance measures, including the normalized Hamming distance, the normalized Euclidean distance, the normalized geometric distance, the max distance, the median distance and the min distance, etc. Moreover, the ordered weighted averaging operator, the generalized ordered weighted aggregation operator, the ordered weighted geometric operator, the averaging operator, the geometric mean operator, the ordered weighted square root operator, the square root operator, the max operator, the median operator and the min operator are also special cases of the OWD measure. Some methods depending on the input arguments are given to determine the weights associated with the OWD measure. The prominent characteristic of the OWD measure is that it can relieve (or intensify) the influence of unduly large or unduly small deviations on the aggregation results by assigning them low (or high) weights. This desirable characteristic makes the OWD measure very suitable for use in many fields, including group decision making, medical diagnosis, data mining, and pattern recognition. Finally, based on the OWD measure, we develop a group decision making approach, and illustrate it with a numerical example.
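
    A minimal sketch, assuming argument vectors of equal length and position weights that sum to one; the special cases mentioned above fall out of the same formula:

    ```python
    import numpy as np

    def owd(a, b, weights, lam=1.0):
        """Ordered weighted distance: sort |a_i - b_i| in descending order,
        then aggregate with position weights w: (sum_j w_j d_(j)^lam)^(1/lam)."""
        d = np.sort(np.abs(np.asarray(a, float) - np.asarray(b, float)))[::-1]
        w = np.asarray(weights, float)
        return float((w @ d ** lam) ** (1.0 / lam))

    a, b = [0.2, 0.9, 0.5], [0.4, 0.6, 0.5]
    n = len(a)
    print(owd(a, b, np.ones(n) / n))         # normalized Hamming distance
    print(owd(a, b, np.ones(n) / n, lam=2))  # normalized Euclidean distance
    print(owd(a, b, [1, 0, 0]))              # max distance as a special case
    print(owd(a, b, [0, 0, 1]))              # min distance as a special case
    ```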

  6. Multidimensional generalized-ensemble algorithms for complex systems.

    Science.gov (United States)

    Mitsutake, Ayori; Okamoto, Yuko

    2009-06-07

    We give general formulations of the multidimensional multicanonical algorithm, simulated tempering, and replica-exchange method. We generalize the original potential energy function E_0 by adding any physical quantity V of interest as a new energy term. These multidimensional generalized-ensemble algorithms then perform a random walk not only in E_0 space but also in V space. Among the three algorithms, the replica-exchange method is the easiest to perform because the weight factor is just a product of regular Boltzmann-like factors, while the weight factors for the multicanonical algorithm and simulated tempering are not a priori known. We give a simple procedure for obtaining the weight factors for these two latter algorithms, which uses a short replica-exchange simulation and the multiple-histogram reweighting techniques. As an example of applications of these algorithms, we have performed a two-dimensional replica-exchange simulation and a two-dimensional simulated-tempering simulation using an alpha-helical peptide system. From these simulations, we study the helix-coil transitions of the peptide in gas phase and in aqueous solution.

  7. Named Entity Linking Algorithm

    Directory of Open Access Journals (Sweden)

    M. F. Panteleev

    2017-01-01

    Full Text Available In natural language processing, Named Entity Linking (NEL) is the task of identifying an entity found in a text and linking it with an entity in a knowledge base (for example, DBpedia). Currently, there is a diversity of approaches to solve this problem, but two main classes can be identified: graph-based approaches and machine learning-based ones. An algorithm based on both graph and machine learning approaches is proposed, in accordance with the stated assumptions about the interrelations of named entities in a sentence and in general. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on a knowledge base. Due to limitations in processing power, solving this task directly is impossible, so a modification is proposed. An independent solution cannot be built on machine learning algorithms alone, because of the small volume of training datasets relevant to the NEL task; however, their use can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally: a test dataset was independently generated, and on its basis a mockup based on the proposed algorithm was compared with the open-source product DBpedia Spotlight, which solves the NEL problem. The mockup showed low speed compared to DBpedia Spotlight but higher accuracy, which indicates the prospects for work in this direction. The main directions of development are proposed in order to increase the accuracy and the performance of the system.

  8. Law and Order in Algorithmics

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1992-01-01

    An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as

  9. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'
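
    The two features singled out here, partial feedback and long-term effects, are both visible in tabular Q-learning, one of the standard algorithms such a text covers. A minimal sketch (state/action encoding and constants are illustrative):

        from collections import defaultdict

        def q_update(Q, s, a, r, s_next, actions, alpha=0.1, gamma=0.99):
            # Partial feedback: only the value of the action actually taken is
            # corrected; the bootstrapped target carries long-term effects via s_next.
            target = r + gamma * max(Q[(s_next, b)] for b in actions)
            Q[(s, a)] += alpha * (target - Q[(s, a)])

        Q = defaultdict(float)
        q_update(Q, s=0, a=1, r=1.0, s_next=2, actions=[0, 1])
        print(Q[(0, 1)])  # 0.1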

  10. Animation of planning algorithms

    OpenAIRE

    Sun, Fan

    2014-01-01

    Planning is the process of creating a sequence of steps/actions that will satisfy a goal of a problem. The partial order planning (POP) algorithm is one Artificial Intelligence approach to problem planning. While studying the G52PAS module, I found that it is difficult for students to understand this planning algorithm just by reading its pseudocode and doing written exercises. Students cannot clearly see how each step works and might miss some steps because of their confusion. ...

  11. Secondary Vertex Finder Algorithm

    CERN Document Server

    Heer, Sebastian; The ATLAS collaboration

    2017-01-01

    If a jet originates from a b-quark, a b-hadron is formed during the fragmentation process. In its dominant decay modes, the b-hadron decays into a c-hadron via the electroweak interaction. Both b- and c-hadrons have lifetimes long enough to travel a few millimetres before decaying. Thus displaced vertices from b- and subsequent c-hadron decays provide a strong signature for a b-jet. Reconstructing these secondary vertices (SV) and their properties is the aim of this algorithm. The performance of this algorithm is studied with tt̄ events, requiring at least one lepton, simulated at 13 TeV.

  12. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
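
    To make the pattern concrete, a minimal sketch of a reduction: a tree-shaped combine in which every level consists of independent pair operations that a parallel runtime could execute concurrently (the sketch itself runs sequentially; names are illustrative):

        def tree_reduce(values, op):
            # Each level combines disjoint pairs; all pairs within a level are
            # independent, which is what exposes the concurrency.
            while len(values) > 1:
                reduced = [op(a, b) for a, b in zip(values[0::2], values[1::2])]
                if len(values) % 2:            # carry the odd element upward
                    reduced.append(values[-1])
                values = reduced
            return values[0]

        print(tree_reduce(list(range(10)), lambda a, b: a + b))  # 45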

  13. Randomized Filtering Algorithms

    DEFF Research Database (Denmark)

    Katriel, Irit; Van Hentenryck, Pascal

    2008-01-01

    of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...

  14. Light-weight plastination.

    Science.gov (United States)

    Steinke, Hanno; Rabi, Suganthy; Saito, Toshiyuki; Sawutti, Alimjan; Miyaki, Takayoshi; Itoh, Masahiro; Spanel-Borowski, Katharina

    2008-11-20

    Plastination is an excellent technique which helps to keep anatomical specimens in a dry, odourless state. Since the invention of the plastination technique by von Hagens, research has been done to improve the quality of plastinated specimens. In this paper, we describe a method of producing light-weight plastinated specimens using xylene along with silicone and, in the final step, substituting xylene with air. The finished plastinated specimens were light-weight, dry, odourless and robust. This method requires less resin, making the plastination technique more cost-effective. The light-weight specimens are easy to carry and can easily be used for teaching.

  15. An Ordering Linear Unification Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡运发

    1989-01-01

    In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order; the OLU algorithm can then also be applied to the infinite tree data structure, and higher efficiency can be expected. The paper focuses on the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. This algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.

  16. Motion tolerant iterative reconstruction algorithm for cone-beam helical CT imaging

    Energy Technology Data Exchange (ETDEWEB)

    Takahashi, Hisashi; Goto, Taiga; Hirokawa, Koichi; Miyazaki, Osamu [Hitachi Medical Corporation, Chiba-ken (Japan). CT System Div.

    2011-07-01

    We have developed a new advanced iterative reconstruction algorithm for cone-beam helical CT. The features of this algorithm are: (a) it uses the separable paraboloidal surrogate (SPS) technique as a foundation for reconstruction to reduce noise and cone-beam artifact; (b) it uses a view weight in the back-projection process to reduce motion artifact. To confirm the improvement of our proposed algorithm over existing algorithms, such as the Feldkamp-Davis-Kress (FDK) or SPS algorithm, we compared the motion artifact reduction, image noise reduction (standard deviation of CT number), and cone-beam artifact reduction on simulated and clinical data sets. Our results demonstrate that the proposed algorithm dramatically reduces motion artifacts compared with the SPS algorithm, and decreases image noise compared with the FDK algorithm. In addition, the proposed algorithm potentially improves the time resolution of iterative reconstruction. (orig.)

  17. Textual and chemical information processing: different domains but similar algorithms

    Directory of Open Access Journals (Sweden)

    Peter Willett

    2000-01-01

    Full Text Available This paper discusses the extent to which algorithms developed for the processing of textual databases are also applicable to the processing of chemical structure databases, and vice versa. Applications discussed include: an algorithm for distribution sorting that has been applied to the design of screening systems for rapid chemical substructure searching; the use of measures of inter-molecular structural similarity for the analysis of hypertext graphs; a genetic algorithm for calculating term weights for relevance feedback searching and for determining whether a molecule is likely to exhibit biological activity; and the use of data fusion to combine the results of different chemical similarity searches.

  18. Development and Evaluation of Algorithms for Breath Alcohol Screening.

    Science.gov (United States)

    Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael

    2016-04-01

    Breath alcohol screening is important for traffic safety, access control and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper focuses on algorithms for the determination of breath alcohol concentration in diluted breath samples, using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting and personalization to reduce estimation errors. Evaluation has been performed using data from a previously conducted human study. It is concluded that these features in combination will significantly reduce the random error compared to the signal averaging algorithm taken alone.
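
    The CO2-based dilution compensation can be sketched as a simple ratio correction; the paper's algorithms layer signal averaging, weighting and personalization on top of this basic step, and the nominal alveolar CO2 constant and function name below are assumptions:

        def estimate_brac(alcohol_signal, co2_signal, alveolar_co2=4.2):
            # Scale the measured alcohol signal by nominal alveolar CO2 (assumed
            # ~4.2 vol-%) over the CO2 actually measured in the diluted sample.
            return alcohol_signal * (alveolar_co2 / co2_signal)

        # A sample diluted tenfold reads low on both channels; the ratio restores it.
        print(estimate_brac(alcohol_signal=0.021, co2_signal=0.42))  # 0.21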

  19. Hybrid fuzzy charged system search algorithm based state estimation in distribution networks

    Directory of Open Access Journals (Sweden)

    Sachidananda Prasad

    2017-06-01

    Full Text Available This paper proposes a new hybrid charged system search (CSS) algorithm-based state estimation for radial distribution networks in a fuzzy framework. The objective of the optimization problem is to minimize the weighted square of the difference between the measured and the estimated quantity. The proposed method of state estimation considers bus voltage magnitude and phase angle as state variables, along with some equality and inequality constraints for state estimation in distribution networks. A rule-based fuzzy inference system has been designed to control the parameters of the CSS algorithm to achieve a better balance between the exploration and exploitation capability of the algorithm. The efficiency of the proposed fuzzy adaptive charged system search (FACSS) algorithm has been tested on the standard IEEE 33-bus system and an Indian 85-bus practical radial distribution system. The obtained results have been compared with the conventional CSS algorithm, the weighted least squares (WLS) algorithm and particle swarm optimization (PSO) to establish the feasibility of the algorithm.
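
    The stated objective is the classic weighted least squares criterion; a minimal sketch of the quantity the search minimizes, assuming the estimated measurements h(x) are computed beforehand (names and the example weight matrix are illustrative):

        import numpy as np

        def wls_objective(z, h_x, W):
            # Weighted square of the difference between measured and estimated values.
            r = np.asarray(z, float) - np.asarray(h_x, float)
            return float(r @ np.asarray(W, float) @ r)

        print(wls_objective([1.02, 0.98], [1.00, 1.00], np.diag([100.0, 50.0])))  # 0.06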

  20. On a New Family of Kalman Filter Algorithms for Integrated Navigation

    Science.gov (United States)

    Mahboub, V.; Saadatseresht, M.; Ardalan, A. A.

    2017-09-01

    Here we review a new family of Kalman filter algorithms recently developed for integrated navigation. It is particularly useful for vision-based navigation due to the type of data involved. We mainly focus on three algorithms, namely the weighted Total Kalman filter (WTKF), the integrated Kalman filter (IKF) and the constrained integrated Kalman filter (CIKF). The common characteristic of these algorithms is that they can account for neglected random observed quantities which may appear in the dynamic model. Moreover, our approach makes use of condition equations and straightforward variance propagation rules. The WTKF algorithm can deal with problems with arbitrary weight matrices. In the IKF algorithm, both the observation equations and the system equations can be dynamic errors-in-variables (DEIV) models. In some problems a quadratic constraint may exist; these can be solved by the CIKF algorithm. Finally, we compare the WTKF, IKF, CIKF and EKF algorithms in numerical examples.
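
    For orientation, the baseline this family extends is the standard Kalman predict/update cycle; the variants above additionally admit random errors in the model matrices (DEIV) and, for CIKF, a quadratic constraint. A minimal sketch of the baseline only, not of the WTKF/IKF/CIKF equations:

        import numpy as np

        def kalman_step(x, P, z, F, H, Q, R):
            # Predict with the system model.
            x_pred = F @ x
            P_pred = F @ P @ F.T + Q
            # Update with the measurement z.
            S = H @ P_pred @ H.T + R
            K = P_pred @ H.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (z - H @ x_pred)
            P_new = (np.eye(len(x_new)) - K @ H) @ P_pred
            return x_new, P_new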

  1. Generalized optical code construction for enhanced and Modified Double Weight like codes without mapping for SAC-OCDMA systems

    Science.gov (United States)

    Kumawat, Soma; Ravi Kumar, M.

    2016-07-01

    The Double Weight (DW) code family is one of the coding schemes proposed for Spectral Amplitude Coding-Optical Code Division Multiple Access (SAC-OCDMA) systems. Modified Double Weight (MDW) codes for even weights and Enhanced Double Weight (EDW) codes for odd weights are two algorithms extending the use of DW codes in SAC-OCDMA systems. The above-mentioned codes use a mapping technique to provide codes for higher numbers of users. A new generalized algorithm to construct EDW- and MDW-like codes without mapping, for any weight greater than 2, is proposed. A single code construction algorithm gives the same length increment, Bit Error Rate (BER) calculation and other properties for all weights greater than 2. The algorithm first constructs a generalized basic matrix which is repeated in a different way to produce the codes for all users (different from mapping). The generalized code is analysed for BER using balanced detection and direct detection techniques.

  2. Geometric Algorithms for Private-Cache Chip Multiprocessors

    DEFF Research Database (Denmark)

    Ajwani, Deepak; Sitchinava, Nodari; Zeh, Norbert

    2010-01-01

    We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2-D convex hulls. These results are obtained by analyzing adaptations of either the PEM merge sort algorithm or PRAM algorithms. For the second group of problems—orthogonal line segment intersection reporting, batched range reporting, and related problems—more effort is required. What distinguishes these problems from the ones in the previous group is the variable output size, which requires I/O-efficient load balancing strategies based on the contribution of the individual input elements to the output size. To obtain nearly optimal algorithms for these problems, we introduce a parallel distribution...

  3. Watermarking Algorithms for 3D NURBS Graphic Data

    Directory of Open Access Journals (Sweden)

    Jae Jun Lee

    2004-10-01

    Full Text Available Two watermarking algorithms for 3D nonuniform rational B-spline (NURBS) graphic data are proposed: one appropriate for steganography, and the other for watermarking. Instead of directly embedding data into the parameters of NURBS, the proposed algorithms embed data into the 2D virtual images extracted by parameter sampling of the 3D model. As a result, the proposed steganography algorithm can embed information into more places of the surface than the conventional algorithm, while preserving the data size of the model. Also, any existing 2D watermarking technique can be used for the watermarking of 3D NURBS surfaces. From the experiment, it is found that the watermarking algorithm is robust to attacks on weights, control points, and knots. It is also found to be robust to the remodeling of NURBS models.

  4. Normalized Minimum Error Entropy Algorithm with Recursive Power Estimation

    Directory of Open Access Journals (Sweden)

    Namyong Kim

    2016-06-01

    Full Text Available The minimum error entropy (MEE) algorithm is known to be superior in signal processing applications under impulsive noise. In this paper, based on an analysis of the behavior of the optimum weight and the properties of robustness against impulsive noise, a normalized version of the MEE algorithm is proposed. The step size of the MEE algorithm is normalized with the power of the input entropy, estimated recursively to reduce its computational complexity. In equalization simulations, the proposed algorithm yields a lower minimum MSE (mean squared error) and faster convergence speed simultaneously than the original MEE algorithm does. At the same convergence speed, its steady-state MSE improvement is above 3 dB.
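
    The recursive power estimation can be sketched in a few lines; this mirrors how NLMS normalizes the LMS step size, with the caveat that here the normalization uses the power of the input to the entropy criterion rather than the raw input. Constants and names are illustrative:

        def normalized_step(mu0, power_prev, x_new, beta=0.9, eps=1e-8):
            # Recursive power estimate: P(n) = beta * P(n-1) + (1 - beta) * x(n)^2.
            power = beta * power_prev + (1.0 - beta) * x_new ** 2
            # Normalized step size; eps guards against a near-zero power estimate.
            return mu0 / (power + eps), power

        step, p = normalized_step(mu0=0.01, power_prev=0.5, x_new=2.0)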

  5. Enhanced backpropagation training algorithm for transient event identification

    International Nuclear Information System (INIS)

    Vitela, J.; Reifman, J.

    1993-01-01

    We present an enhanced backpropagation (BP) algorithm for training feedforward neural networks that avoids the undesirable premature saturation of the network output nodes and accelerates the training process even in cases where premature saturation is not present. When the standard BP algorithm is applied to train patterns of nuclear power plant (NPP) transients, the network output nodes often become prematurely saturated causing the already slow rate of convergence of the algorithm to become even slower. When premature saturation occurs, the gradient of the prediction error becomes very small, although the prediction error itself is still large, yielding negligible weight updates and hence no significant decrease in the prediction error until the eventual recovery of the output nodes from saturation. By defining the onset of premature saturation and systematically modifying the gradient of the prediction error at saturation, we developed an enhanced BP algorithm that is compared with the standard BP algorithm in training a network to identify NPP transients
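
    One generic way to realize this idea is to floor the sigmoid derivative at an output node that is saturated while its error is still large, so weight updates stay significant; the sketch below illustrates that device only and is not the paper's specific modification (all thresholds are assumptions):

        def output_delta(o, t, sat=0.9, floor=0.1):
            # Standard sigmoid-output error term is (t - o) * o * (1 - o); the
            # factor o * (1 - o) vanishes as o saturates toward 0 or 1.
            deriv = o * (1.0 - o)
            # Onset of premature saturation: output pinned near a rail while
            # the prediction error remains large.
            if (o > sat or o < 1.0 - sat) and abs(t - o) > 0.5:
                deriv = max(deriv, floor)
            return (t - o) * deriv

        print(output_delta(o=0.99, t=0.0))  # update stays alive despite saturation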

  6. Antidepressants and Weight Gain

    Science.gov (United States)

    ... 2015;37:46. Blumenthal SR, et al. An electronic health records study of long-term weight gain following antidepressant ...

  7. Weight gain - unintentional

    Science.gov (United States)

    ... diabetes Hormone changes or medical problems can also cause unintentional weight gain. This may be due to: Cushing syndrome Underactive thyroid, or low thyroid (hypothyroidism) Polycystic ovary syndrome Menopause Pregnancy Bloating, or swelling ...

  8. Weight and psychiatry

    African Journals Online (AJOL)

    Adele

    Beyond the physical aspects of weight, the psychological meaning sees the virtues of ... sion in the development and persistence of adolescent obesity. Pediatrics 2002 ... or secondary as in mood, anxiety or psychotic disorders. ...

  9. birth-weight infants

    African Journals Online (AJOL)

    including the CRIB (Clinical Risk Index for Babies) score, in a local ... these babies for expensive tertiary care. Subjects. ... patient numbers, the tendency is simply to increase the ... included birth weight, gestational age, 5-minute Apgar score ...

  10. Weight loss and alcohol

    Science.gov (United States)

    ... Maclean JC. Alcohol consumption and body weight. Health Econ. 2010;19(7):814-832. PMID: 19548203 ...

  11. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  12. Geomagnetic matching navigation algorithm based on robust estimation

    Science.gov (United States)

    Xie, Weinan; Huang, Liping; Qu, Zhenshen; Wang, Zhenhuan

    2017-08-01

    Outliers in geomagnetic survey data seriously affect the precision of geomagnetic matching navigation and degrade its reliability. A novel algorithm which can eliminate the influence of outliers is investigated in this paper. First, the weight function is designed and the principle of robust estimation behind it is introduced. By combining the relation equation between the matching trajectory and the reference trajectory with the Taylor series expansion of the geomagnetic information, a mathematical expression for the longitude, latitude and heading errors is acquired. The robust target function is obtained from the weight function and this mathematical expression. The geomagnetic matching problem is thereby converted into the solution of nonlinear equations, and Newton iteration is applied to implement the novel algorithm. Simulation results show that the matching error of the novel algorithm is decreased to 7.75% of that of the conventional mean square difference (MSD) algorithm, and to 18.39% of that of the conventional iterative contour matching algorithm, when the outlier is 40 nT. Meanwhile, the position error of the novel algorithm is 0.017° when the outlier is 400 nT, while the other two algorithms fail to match.
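
    The central device is a weight function that down-weights outlying residuals. The paper designs its own function; Huber's classic choice serves as a stand-in illustration here:

        def huber_weight(residual, k=1.345):
            # Full weight for small residuals; outlier influence decays as k/|r|.
            r = abs(residual)
            return 1.0 if r <= k else k / r

        print(huber_weight(0.5), huber_weight(40.0))  # 1.0 0.0336...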

  13. Adaptation of Rejection Algorithms for a Radar Clutter

    Directory of Open Access Journals (Sweden)

    D. Popov

    2017-09-01

    Full Text Available In this paper, algorithms for adaptive rejection of radar clutter are synthesized for the case of a priori unknown spectral-correlation characteristics under wobbulation of the repetition period of the radar signal. The synthesis of algorithms for a non-recursive adaptive rejection filter (ARF) of a given order is reduced to determining the vector of weighting coefficients which realizes the best effectiveness index for extracting the radar signal of moving targets against the background of the received clutter. As the effectiveness criterion, we consider the improvement coefficient for the signal-to-clutter ratio (SCR), averaged over the Doppler signal phase shift. Based on the extremal properties of the characteristic numbers (eigenvalues) of matrices, the optimal vector (the maximum of this criterion) is defined as the eigenvector of the clutter correlation matrix corresponding to its minimal eigenvalue. The general form of the vector of optimal ARF weighting coefficients is determined, and specific adaptive algorithms depending upon the ARF order are obtained, which in special cases reduce to known algorithms, confirming their validity. A comparative analysis of the synthesized and known algorithms is performed, establishing significant benefits in clutter rejection effectiveness for the offered processing algorithms compared to the known ones.
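
    The closed-form answer quoted above translates directly into code: take the eigenvector of the clutter correlation matrix belonging to its smallest eigenvalue. A minimal sketch (the toy matrix is illustrative):

        import numpy as np

        def arf_weights(clutter_corr):
            # eigh returns eigenvalues in ascending order, so column 0 of the
            # eigenvector matrix corresponds to the minimal eigenvalue.
            eigvals, eigvecs = np.linalg.eigh(np.asarray(clutter_corr, float))
            return eigvecs[:, 0]

        R = [[1.0, 0.9, 0.7], [0.9, 1.0, 0.9], [0.7, 0.9, 1.0]]
        print(arf_weights(R))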

  14. Improving GPU-accelerated adaptive IDW interpolation algorithm using fast kNN search.

    Science.gov (United States)

    Mei, Gang; Xu, Nengxiong; Xu, Liangliang

    2016-01-01

    This paper presents an efficient parallel Adaptive Inverse Distance Weighting (AIDW) interpolation algorithm on modern Graphics Processing Units (GPUs). The presented algorithm improves on our previous GPU-accelerated AIDW algorithm by adopting fast k-nearest neighbors (kNN) search. AIDW needs to find several nearest neighboring data points for each interpolated point in order to adaptively determine the power parameter; the desired prediction value of the interpolated point is then obtained by weighted interpolation using that power parameter. In this work, we develop a fast kNN search approach based on a space-partitioning data structure, the even grid, to improve the previous GPU-accelerated AIDW algorithm. The improved algorithm is composed of two stages: kNN search and weighted interpolation. To evaluate the performance of the improved algorithm, we performed five groups of experimental tests. The experimental results indicate that: (1) the improved algorithm can achieve a speedup of up to 1017 over the corresponding serial algorithm; (2) the improved algorithm is at least two times faster than our previous GPU-accelerated AIDW algorithm; and (3) the utilization of fast kNN search can significantly improve the computational efficiency of the entire GPU-accelerated AIDW algorithm.
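
    The two stages, kNN search followed by weighted interpolation, can be sketched for a single query point; brute-force search stands in for the paper's GPU grid structure, and a fixed power parameter stands in for AIDW's adaptive choice:

        import numpy as np

        def idw_predict(query, points, values, k=8, power=2.0, eps=1e-12):
            # Stage 1: k nearest neighbours (brute force here, grid-based on GPU).
            d = np.linalg.norm(points - query, axis=1)
            nearest = np.argsort(d)[:k]
            # Stage 2: inverse-distance weighting; AIDW adapts `power` per point.
            w = 1.0 / (d[nearest] ** power + eps)
            return float(np.sum(w * values[nearest]) / np.sum(w))

        pts = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
        vals = np.array([1.0, 2.0, 3.0, 4.0])
        print(idw_predict(np.array([0.25, 0.25]), pts, vals, k=3))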

  15. A propositional CONEstrip algorithm

    NARCIS (Netherlands)

    E. Quaeghebeur (Erik); A. Laurent; O. Strauss; B. Bouchon-Meunier; R.R. Yager (Ronald)

    2014-01-01

    textabstractWe present a variant of the CONEstrip algorithm for checking whether the origin lies in a finitely generated convex cone that can be open, closed, or neither. This variant is designed to deal efficiently with problems where the rays defining the cone are specified as linear combinations

  16. Modular Regularization Algorithms

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2004-01-01

    The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into indepen...

  17. Efficient graph algorithms

    Indian Academy of Sciences (India)

    Shortest path problems: a road network on cities, where we want to navigate between cities. ... Computing connectivities between all pairs of vertices: a good algorithm with respect to both space and time to compute the exact solution. ...

  18. The Copenhagen Triage Algorithm

    DEFF Research Database (Denmark)

    Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia

    2016-01-01

    is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...

  19. de Casteljau's Algorithm Revisited

    DEFF Research Database (Denmark)

    Gravesen, Jens

    1998-01-01

    It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
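
    For reference, the algorithm itself is short: each pass replaces n points with n-1 convex combinations, and the last survivor is the curve point; the operator view mentioned above corresponds to a single pass. A minimal sketch:

        def de_casteljau(control_points, t):
            # Repeated linear interpolation between consecutive control points.
            pts = list(control_points)
            while len(pts) > 1:
                pts = [tuple((1 - t) * a + t * b for a, b in zip(p, q))
                       for p, q in zip(pts, pts[1:])]
            return pts[0]

        print(de_casteljau([(0, 0), (1, 2), (3, 3), (4, 0)], 0.5))  # (2.0, 1.875)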

  20. Algorithms in ambient intelligence

    NARCIS (Netherlands)

    Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Weber, W.; Rabaey, J.M.; Aarts, E.

    2005-01-01

    We briefly review the concept of ambient intelligence and discuss its relation with the domain of intelligent algorithms. By means of four examples of ambient intelligent systems, we argue that new computing methods and quantification measures are needed to bridge the gap between the class of

  1. General Algorithm (High level)

    Indian Academy of Sciences (India)

    Iteratively: use the Tightness Property to remove points of P1,...,Pi; use random sampling to get a random sample (of enough points) from the next largest cluster, Pi+1; use the Random Sampling Procedure to approximate ci+1 using the ...

  2. Comprehensive eye evaluation algorithm

    Science.gov (United States)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  3. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.

  4. Algorithm Theory - SWAT 2006

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...

  5. Optimal Quadratic Programming Algorithms

    CERN Document Server

    Dostal, Zdenek

    2009-01-01

    Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers

  6. Graphs and matroids weighted in a bounded incline algebra.

    Science.gov (United States)

    Lu, Ling-Xia; Zhang, Bei

    2014-01-01

    Firstly, for a graph weighted in a bounded incline algebra (also called a dioid), a longest path problem (LPP, for short) is presented, which can be considered a uniform approach to the famous shortest path problem, the widest path problem, and the most reliable path problem. The solutions for LPP and related algorithms are given. Secondly, for a matroid weighted in a linear matroid, the maximum independent set problem is studied.

  7. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data

  8. Iris Matching Based on Personalized Weight Map.

    Science.gov (United States)

    Dong, Wenbo; Sun, Zhenan; Tan, Tieniu

    2011-09-01

    Iris recognition typically involves three steps, namely, iris image preprocessing, feature extraction, and feature matching. The first two steps of iris recognition have been well studied, but the last step is less addressed. Each human iris has its unique visual pattern, and local image features also vary from region to region, which leads to significant differences in robustness and distinctiveness among the feature codes derived from different iris regions. However, most state-of-the-art iris recognition methods use a uniform matching strategy, where features extracted from different regions of the same person or the same region for different individuals are considered to be equally important. This paper proposes a personalized iris matching strategy using a class-specific weight map learned from the training images of the same iris class. The weight map can be updated online during the iris recognition procedure when successfully recognized iris images are regarded as new training data. The weight map reflects the robustness of an encoding algorithm on different iris regions by assigning an appropriate weight to each feature code for iris matching. Such a weight map, trained with sufficient iris templates, is convergent and robust against various kinds of noise. Extensive and comprehensive experiments demonstrate that the proposed personalized iris matching strategy achieves much better iris recognition performance than uniform strategies, especially for poor quality iris images.
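
    The matching step itself reduces to a weighted Hamming distance over the feature codes, with per-position weights drawn from the learned class-specific map. A minimal sketch of that scoring step only (array layout and example weights are illustrative, not the paper's pipeline):

        import numpy as np

        def weighted_hamming(code_a, code_b, weight_map, mask=None):
            # Disagreeing code bits contribute their learned per-position weight
            # instead of a uniform count of one.
            diff = np.bitwise_xor(code_a, code_b).astype(float)
            w = np.asarray(weight_map, dtype=float)
            if mask is not None:
                w = w * mask               # drop occluded or unreliable positions
            return float(np.sum(w * diff) / np.sum(w))

        a = np.array([1, 0, 1, 1], dtype=np.uint8)
        b = np.array([1, 1, 1, 0], dtype=np.uint8)
        print(weighted_hamming(a, b, weight_map=[0.9, 0.2, 0.9, 0.3]))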

  9. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  10. Weight Loss Nutritional Supplements

    Science.gov (United States)

    Eckerson, Joan M.

    Obesity has reached what may be considered epidemic proportions in the United States, not only for adults but for children. Because of the medical implications and health care costs associated with obesity, as well as the negative social and psychological impacts, many individuals turn to nonprescription nutritional weight loss supplements hoping for a quick fix, and the weight loss industry has responded by offering a variety of products that generates billions of dollars each year in sales. Most nutritional weight loss supplements are purported to work by increasing energy expenditure, modulating carbohydrate or fat metabolism, increasing satiety, inducing diuresis, or blocking fat absorption. To review the literally hundreds of nutritional weight loss supplements available on the market today is well beyond the scope of this chapter. Therefore, several of the most commonly used supplements were selected for critical review, and practical recommendations are provided based on the findings of well controlled, randomized clinical trials that examined their efficacy. In most cases, the nutritional supplements reviewed either elicited no meaningful effect or resulted in changes in body weight and composition that are similar to what occurs through a restricted diet and exercise program. Although there is some evidence to suggest that herbal forms of ephedrine, such as ma huang, combined with caffeine or caffeine and aspirin (i.e., ECA stack) is effective for inducing moderate weight loss in overweight adults, because of the recent ban on ephedra manufacturers must now use ephedra-free ingredients, such as bitter orange, which do not appear to be as effective. The dietary fiber, glucomannan, also appears to hold some promise as a possible treatment for weight loss, but other related forms of dietary fiber, including guar gum and psyllium, are ineffective.

  11. DE and NLP Based QPLS Algorithm

    Science.gov (United States)

    Yu, Xiaodong; Huang, Dexian; Wang, Xiong; Liu, Bo

    As a novel evolutionary computing technique, Differential Evolution (DE) has been considered an effective optimization method for complex optimization problems, and has achieved many successful applications in engineering. In this paper, a new algorithm for Quadratic Partial Least Squares (QPLS) based on Nonlinear Programming (NLP) is presented, and DE is used to solve the NLP so as to calculate the optimal input weights and the parameters of the inner relationship. Simulation results based on the soft measurement of the diesel oil solidifying point on a real crude distillation unit demonstrate the superiority of the proposed algorithm over linear PLS and over QPLS based on Sequential Quadratic Programming (SQP), in terms of fitting accuracy and computational cost.
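
    DE, the optimizer doing the work here, is compact enough to sketch; below is the classic DE/rand/1/bin trial-vector construction, a generic textbook variant rather than the paper's exact settings:

        import numpy as np

        def de_rand_1_bin(pop, i, F=0.5, CR=0.9, rng=None):
            if rng is None:
                rng = np.random.default_rng()
            n, dim = pop.shape
            # Mutation: base member plus a scaled difference of two other members.
            r1, r2, r3 = rng.choice([j for j in range(n) if j != i], 3, replace=False)
            mutant = pop[r1] + F * (pop[r2] - pop[r3])
            # Binomial crossover with the target vector; force at least one gene.
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True
            return np.where(cross, mutant, pop[i])

        pop = np.random.default_rng(0).random((10, 4))
        print(de_rand_1_bin(pop, i=0))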

  12. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is: are there any algorithms that can design evolutionary algorithms automatically? A more complete formulation of the question is: can a computer construct an algorithm which will generate algorithms according to the requirements of a problem? In this paper, a novel evolutionary algorithm based on automatic design of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems were conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that an algorithm designed automatically by computers can compete with algorithms designed by human beings.

  13. Reactive Collision Avoidance Algorithm

    Science.gov (United States)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  14. Detecting community structure using label propagation with consensus weight in complex network

    International Nuclear Information System (INIS)

    Liang Zong-Wen; Li Jian-Ping; Yang Fan; Petropulu Athina

    2014-01-01

    Community detection is fundamental for analysing the structural and functional properties of complex networks. The label propagation algorithm (LPA) is a near linear time algorithm for finding a good community structure. Despite various subsequent advances, an important issue of this algorithm has not yet been properly addressed: random update orders within the algorithm severely hamper the stability of the identified community structure. In this paper, we executed the basic label propagation algorithm on networks multiple times to obtain a set of consensus partitions. Based on these consensus partitions, we created a consensus weighted graph, in which the weight of an edge is the number of partitions that allocated the node pair to the same cluster, divided by the total number of partitions. We then introduced the consensus weight to guide the direction of label propagation. In the label update steps, by computing a mixing value of consensus weight and label frequency, a node adopts the label with the maximum mixing value instead of the most frequent one. To extend the method to different networks, we introduced a proportion parameter to adjust the proportion of consensus weight and label frequency in the mixing value. Finally, we propose an approach named the label propagation algorithm with consensus weight (LPAcw); the experimental results showed that the LPAcw considerably enhances both the stability and the accuracy of community partitions. (interdisciplinary physics and related areas of science and technology)
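
    A minimal sketch of the consensus-weighted label update: each neighbour's label is scored by a blend of the consensus weight on the connecting edge and its plain frequency, and the node adopts the top-scoring label; alpha stands in for the proportion parameter, and all names are illustrative:

        from collections import defaultdict

        def pick_label(node, labels, neighbors, consensus_w, alpha=0.5):
            # Mixing value per candidate label: consensus weight blended with a
            # frequency contribution of one per neighbour carrying the label.
            score = defaultdict(float)
            for nb in neighbors[node]:
                score[labels[nb]] += alpha * consensus_w[(node, nb)] + (1.0 - alpha)
            return max(score, key=score.get)

        labels = {0: "a", 1: "a", 2: "b", 3: "b"}
        neighbors = {0: [1, 2, 3]}
        cw = {(0, 1): 0.9, (0, 2): 0.2, (0, 3): 0.3}
        print(pick_label(0, labels, neighbors, cw))  # "b": frequency outweighs edge weight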

  15. Partitional clustering algorithms

    CERN Document Server

    2015-01-01

    This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...

  16. Treatment Algorithm for Ameloblastoma

    Directory of Open Access Journals (Sweden)

    Madhumati Singh

    2014-01-01

    Full Text Available Ameloblastoma is the second most common benign odontogenic tumour (Shafer et al. 2006), constituting 1–3% of all cysts and tumours of the jaw, with locally aggressive behaviour, a high recurrence rate, and malignant potential (Chaine et al. 2009). Various treatment algorithms for ameloblastoma have been reported; however, a universally accepted approach remains unsettled and controversial (Chaine et al. 2009). The treatment algorithm to be chosen depends on size (Escande et al. 2009 and Sampson and Pogrel 1999), anatomical location (Feinberg and Steinberg 1996), histologic variant (Philipsen and Reichart 1998), and anatomical involvement (Jackson et al. 1996). In this paper, various such treatment modalities, which include enucleation and peripheral osteotomy, partial maxillectomy, segmental resection and reconstruction with a fibula graft, and radical resection and reconstruction with a rib graft, and their recurrence rates are reviewed through a study of five cases.

  17. An Algorithmic Diversity Diet?

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik

    2016-01-01

    With the growing influence of personalized algorithmic recommender systems on the exposure of media content to users, the relevance of discussing the diversity of recommendations increases, particularly as far as public service media (PSM) is concerned. An imagined implementation of a diversity diet system however triggers not only the classic discussion of the reach–distinctiveness balance for PSM, but also shows that 'diversity' is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content.

  18. DAL Algorithms and Python

    CERN Document Server

    Aydemir, Bahar

    2017-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control the ATLAS data taking. In particular, the configuration service is used to provide the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS. The DAL (Data Access Library) allows C++, Java and Python clients to access its information in a distributed environment. Some information has quite a complicated structure, so its extraction requires writing special algorithms. These algorithms are available in C++ and have been partially reimplemented in Java. The goal of the projec...

  19. Genetic algorithm essentials

    CERN Document Server

    Kramer, Oliver

    2017-01-01

    This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.

  20. Weight Management in Phenylketonuria

    DEFF Research Database (Denmark)

    Rocha, Julio César; van Rijn, Margreet; van Dam, Esther

    2016-01-01

    It is becoming evident that in addition to acceptable blood phenylalanine control, metabolic dieticians should regard weight management as part of routine clinical practice. SUMMARY: It is important for practitioners to differentiate the 3 levels for overweight interpretation: anthropometry, body composition, and frequency and severity of associated metabolic comorbidities. The main objectives of this review are to suggest proposals for the minimal standard and gold standard for the assessment of weight management in PKU. While the former aims to underline the importance of nutritional status evaluation in every specialized clinic, the second objective is important in establishing an understanding of the breadth of overweight and obesity in PKU in Europe. KEY MESSAGES: In PKU, the importance of adopting a European nutritional management strategy on weight management is highlighted in order to optimize long...

  1. Family Weight School treatment

    DEFF Research Database (Denmark)

    Nowicka, Paulina; Höglund, Peter; Pietrobelli, Angelo

    2008-01-01

    OBJECTIVE: The aim was to evaluate the efficacy of a Family Weight School treatment based on family therapy in group meetings with adolescents with a high degree of obesity. METHODS: Seventy-two obese adolescents aged 12-19 years were referred to a childhood obesity center by pediatricians and school nurses and offered a Family Weight School therapy program in group meetings given by a multidisciplinary team. The intervention was compared with an untreated waiting list control group. Body mass index (BMI) and BMI z-scores were calculated before and after intervention. RESULTS: Ninety percent ... group with initial BMI z-score 3.5. CONCLUSIONS: The Family Weight School treatment model might be suitable for adolescents with BMI z...

  2. Weight for Stephen Finlay.

    Science.gov (United States)

    Evers, Daan

    2013-04-01

    According to Stephen Finlay, 'A ought to X' means that X-ing is more conducive to contextually salient ends than relevant alternatives. This in turn is analysed in terms of probability. I show why this theory of 'ought' is hard to square with a theory of a reason's weight which could explain why 'A ought to X' logically entails that the balance of reasons favours that A X-es. I develop two theories of weight to illustrate my point. I first look at the prospects of a theory of weight based on expected utility theory. I then suggest a simpler theory. Although neither allows that 'A ought to X' logically entails that the balance of reasons favours that A X-es, this price may be accepted. For there remains a strong pragmatic relation between these claims.

  3. Boosting foundations and algorithms

    CERN Document Server

    Schapire, Robert E

    2012-01-01

    Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.

  4. Stochastic split determinant algorithms

    International Nuclear Information System (INIS)

    Horvatha, Ivan

    2000-01-01

    I propose a large class of stochastic Markov processes associated with probability distributions analogous to that of lattice gauge theory with dynamical fermions. The construction incorporates the idea of approximate spectral split of the determinant through local loop action, and the idea of treating the infrared part of the split through explicit diagonalizations. I suggest that exact algorithms of practical relevance might be based on Markov processes so constructed

  5. Quantum gate decomposition algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Slepoy, Alexander

    2006-07-01

    Quantum computing algorithms can be conveniently expressed in the format of quantum logical circuits. Such circuits consist of sequentially coupled operations, termed 'quantum gates', acting on quantum analogs of bits called qubits. We review a recently proposed method [1] for constructing general quantum gates operating on n qubits, as composed of a sequence of generic elementary gates.

  6. KAM Tori Construction Algorithms

    Science.gov (United States)

    Wiesel, W.

    In this paper we evaluate and compare two algorithms for the calculation of KAM tori in Hamiltonian systems. The direct fitting of a torus Fourier series to a numerically integrated trajectory is the first method, while an accelerated finite Fourier transform is the second method. The finite Fourier transform, with Hanning window functions, is by far superior in both computational loading and numerical accuracy. Some thoughts on applications of KAM tori are offered.

  7. Irregular Applications: Architectures & Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Feo, John T.; Villa, Oreste; Tumeo, Antonino; Secchi, Simone

    2012-02-06

    Irregular applications are characterized by irregular data structures, control and communication patterns. Novel irregular high performance applications which deal with large data sets have recently appeared. Unfortunately, current high performance systems and software infrastructures execute irregular algorithms poorly. Only coordinated efforts by end users, area specialists and computer scientists that consider both the architecture and the software stack may be able to provide solutions to the challenges of modern irregular applications.

  8. Large scale tracking algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Hansen, Ross L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Love, Joshua Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Melgaard, David Kennett [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Karelitz, David B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Pitts, Todd Alan [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Zollweg, Joshua David [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Anderson, Dylan Z. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Nandy, Prabal [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Whitlow, Gary L. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Bender, Daniel A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Byrne, Raymond Harry [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.

  9. NEUTRON ALGORITHM VERIFICATION TESTING

    International Nuclear Information System (INIS)

    COWGILL, M.; MOSBY, W.; ARGONNE NATIONAL LABORATORY-WEST

    2000-01-01

    Active well coincidence counter assays have been performed on uranium metal highly enriched in 235U. The data obtained in the present program, together with highly enriched uranium (HEU) metal data obtained in other programs, have been analyzed using two approaches, the standard approach and an alternative approach developed at BNL. Analysis of the data with the standard approach revealed that the form of the relationship between the measured reals and the 235U mass varied, being sometimes linear and sometimes a second-order polynomial. In contrast, application of the BNL algorithm, which takes into consideration the totals, consistently yielded linear relationships between the totals-corrected reals and the 235U mass. The constants in these linear relationships varied with geometric configuration and level of enrichment. This indicates that, when the BNL algorithm is used, calibration curves can be established with fewer data points and with more certainty than if a standard algorithm is used. However, this potential advantage has only been established for assays of HEU metal. In addition, the method is sensitive to the stability of the natural background in the measurement facility.

  10. Evolutionary Pseudo-Relaxation Learning Algorithm for Bidirectional Associative Memory

    Institute of Scientific and Technical Information of China (English)

    Sheng-Zhi Du; Zeng-Qiang Chen; Zhu-Zhi Yuan

    2005-01-01

    This paper analyzes the sensitivity to noise in BAM (Bidirectional Associative Memory), and then proves that the noise immunity of BAM relates not only to the minimum absolute value of net inputs (MAV) but also to the variance of the weights associated with synapse connections. In fact, it is a positive monotonically increasing function of the quotient of MAV divided by the variance of the weights. Besides, the performance of the pseudo-relaxation method depends on the learning parameters (λ and ζ), but their relation is not linear, so it is hard to find the best combination of λ and ζ that leads to the best BAM performance. Moreover, pseudo-relaxation is a kind of local optimization method, so it cannot guarantee finding the global optimal solution. In this paper, a novel learning algorithm, EPRBAM (evolutionary pseudo-relaxation learning algorithm for bidirectional associative memory), employing a genetic algorithm and the pseudo-relaxation method, is proposed to obtain a feasible solution for the BAM weight matrix. This algorithm uses the quotient as the fitness of each individual and employs the pseudo-relaxation method to adjust an individual solution when it no longer satisfies the constraint condition after a genetic operation. Experimental results show this algorithm improves the noise immunity of BAM greatly. At the same time, EPRBAM does not depend on the learning parameters and can obtain the global optimal solution.

  11. Dairy cow disability weights.

    Science.gov (United States)

    McConnel, Craig S; McNeil, Ashleigh A; Hadrich, Joleen C; Lombard, Jason E; Garry, Franklyn B; Heller, Jane

    2017-08-01

    Over the past 175 years, data related to human disease and death have progressed to a summary measure of population health, the Disability-Adjusted Life Year (DALY). As dairies have intensified there has been no equivalent measure of the impact of disease on the productive life and well-being of animals. The development of a disease-adjusted metric requires a consistent set of disability weights that reflect the relative severity of important diseases. The objective of this study was to use an international survey of dairy authorities to derive disability weights for primary disease categories recorded on dairies. National and international dairy health and management authorities were contacted through professional organizations, dairy industry publications and conferences, and industry contacts. Estimates of minimum, most likely, and maximum disability weights were derived for 12 common dairy cow diseases. Survey participants were asked to estimate the impact of each disease on overall health and milk production. Diseases were classified from 1 (minimal adverse effects) to 10 (death). The data was modelled using BetaPERT distributions to demonstrate the variation in these dynamic disease processes, and to identify the most likely aggregated disability weights for each disease classification. A single disability weight was assigned to each disease using the average of the combined medians for the minimum, most likely, and maximum severity scores. A total of 96 respondents provided estimates of disability weights. The final disability weight values resulted in the following order from least to most severe: retained placenta, diarrhea, ketosis, metritis, mastitis, milk fever, lame (hoof only), calving trauma, left displaced abomasum, pneumonia, musculoskeletal injury (leg, hip, back), and right displaced abomasum. The peaks of the probability density functions indicated that for certain disease states such as retained placenta there was a relatively narrow range of

  12. Convex hull ranking algorithm for multi-objective evolutionary algorithms

    NARCIS (Netherlands)

    Davoodi Monfrared, M.; Mohades, A.; Rezaei, J.

    2012-01-01

    Due to many applications of multi-objective evolutionary algorithms in real world optimization problems, several studies have been done to improve these algorithms in recent years. Since most multi-objective evolutionary algorithms are based on the non-dominated principle, and their complexity

  13. Fuzzy Information Retrieval Using Genetic Algorithms and Relevance Feedback.

    Science.gov (United States)

    Petry, Frederick E.; And Others

    1993-01-01

    Describes an approach that combines concepts from information retrieval, fuzzy set theory, and genetic programming to improve weighted Boolean query formulation via relevance feedback. Highlights include background on information retrieval systems; genetic algorithms; subproblem formulation; and preliminary results based on a testbed. (Contains 12…

  14. The linear ordering problem: an algorithm for the optimal solution ...

    African Journals Online (AJOL)

    In this paper we describe and implement an algorithm for the exact solution of the Linear Ordering problem. Linear Ordering is the problem of finding a linear order of the nodes of a graph such that the sum of the weights which are consistent with this order is as large as possible. It is an NP-hard combinatorial optimisation ...
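    To make the objective concrete, the following is a brute-force reference solver for the Linear Ordering objective (an O(n!) enumeration for illustration only; the paper's exact algorithm is not specified in this record and is presumably far more efficient). The example weights are hypothetical.

```python
from itertools import permutations

# Weighted directed graph on nodes 0..n-1: w[i][j] is the weight of arc i->j.
w = [[0, 3, 1],
     [1, 0, 4],
     [2, 2, 0]]

def lop_value(order):
    # Sum the weights of arcs that point "forward" in the given linear order.
    pos = {v: k for k, v in enumerate(order)}
    return sum(w[i][j] for i in range(len(w)) for j in range(len(w))
               if pos[i] < pos[j])

# Exhaustive search over all n! orders, a reference implementation only.
best = max(permutations(range(len(w))), key=lop_value)
print(best, lop_value(best))   # (0, 1, 2) 8
```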

  15. Nuclear power control system design using genetic algorithm

    International Nuclear Information System (INIS)

    Lee, Yoon Joon; Cho, Kyung Ho

    1996-01-01

    The genetic algorithm (GA) is applied to the design of a nuclear power control system. The reactor control system model is described in the LQR configuration. The LQR system order is increased to make it a tracking system. The key parameters of the design are the weighting matrices, and these are usually determined through numerous simulations in a conventional design. To determine more objective and optimal weightings, an improved GA is applied. The results show that the weightings determined by the GA yield better system responses than those obtained by the conventional design method.

  16. The optimal algorithm for Multi-source RS image fusion.

    Science.gov (United States)

    Fu, Wei; Huang, Shui-Guang; Li, Zeng-Shun; Shen, Hao; Li, Jun-Shuai; Wang, Peng-Yuan

    2016-01-01

    In order to solve the issue that fusion rules cannot be self-adaptively adjusted by available fusion methods according to the subsequent processing requirements of Remote Sensing (RS) images, this paper puts forward GSDA (genetic-iterative self-organizing data analysis algorithm), integrating the merit of genetic arithmetic with the advantage of the iterative self-organizing data analysis algorithm for multi-source RS image fusion. The proposed algorithm takes the translation-invariant wavelet transform as the model operator and the contrast pyramid conversion as the observation operator. The algorithm then designs the objective function as a weighted sum of evaluation indices, and optimizes the objective function by employing GSDA so as to obtain a higher-resolution RS image. The main points of the text are summarized as follows:
    • The contribution proposes the iterative self-organizing data analysis algorithm for multi-source RS image fusion.
    • This article presents the GSDA algorithm for the self-adaptive adjustment of the fusion rules.
    • This text puts forward the model operator and the observation operator as the fusion scheme for RS images based on GSDA.
    The proposed algorithm opens up a novel algorithmic pathway for multi-source RS image fusion by means of GSDA.

  17. Minimum Covers of Fixed Cardinality in Weighted Graphs.

    Science.gov (United States)

    White, Lee J.

    Reported is the result of research on combinatorial and algorithmic techniques for information processing. A method is discussed for obtaining minimum covers of specified cardinality from a given weighted graph. By the indicated method, it is shown that the family of minimum covers of varying cardinality is related to the minimum spanning tree of…

  18. The experience of weight management in normal weight adults.

    Science.gov (United States)

    Hernandez, Cheri Ann; Hernandez, David A; Wellington, Christine M; Kidd, Art

    2016-11-01

    No prior research has been done with normal weight persons specific to their experience of weight management. The purpose of this research was to discover the experience of weight management in normal weight individuals. Glaserian grounded theory was used. Qualitative data (focus group) and quantitative data (food diary, study questionnaire, and anthropometric measures) were collected. Weight management was an ongoing process of trying to focus on living (family, work, and social), while maintaining their normal weight targets through five consciously and unconsciously used strategies. Despite maintaining normal weights, the nutritional composition of foods eaten was grossly inadequate. These five strategies can be used to develop new weight management strategies that could be integrated into existing weight management programs, or could be developed into novel weight management interventions. Surprisingly, normal weight individuals require dietary assessment and nutrition education to prevent future negative health consequences. Copyright © 2016 Elsevier Inc. All rights reserved.

  19. The effect of holiday weight gain on body weight.

    Science.gov (United States)

    Schoeller, Dale A

    2014-07-01

    The topic of holiday weight gain has been a frequent subject of the lay media; however, scientific interest is only recent. Multiple studies in Western societies have reported average weight gains among adults during the period between mid-November and mid-January of about 0.5 kg. The range of individual weight changes is large, however, and those who are already overweight or obese gain more weight than those at a healthy weight. When the average gain across the year was also measured, holiday weight gain was the major contributor to annual excess weight gain. Efforts designed to increase awareness of energy balance and body weight have been shown to be successful at reducing such gain. An exception to holiday weight gain being a major contributor to annual excess gain has been children, in whom summer weight gains have been observed to be the major contributor to average excess weight gain. Copyright © 2014 Elsevier Inc. All rights reserved.

  20. General Influence Coefficient Algorithm in Balancing of Rotating Machinery

    Directory of Open Access Journals (Sweden)

    Xiaoping Yu

    2004-01-01

    The General Influence Coefficient Algorithm (GICA), developed in this article, is a new calculation method for Influence Coefficients (ICs) with a general formula. Compared to the traditional calculation method, GICA can compute the ICs when a group of trial weights is installed on the rotor in each run, when the trial weights are retained on the rotor system, or when there is redundant trial balancing data, even when part of the ICs is already known. GICA is also a powerful tool for refining the ICs from redundant balancing data or historical balancing data, and serves as a general algorithm. With its general matrix formula, GICA is ready to be applied in a computer-aided balancing system as the key part of the calculation software. Industrial examples are presented to demonstrate the application of this new algorithm.
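    A minimal numpy sketch of the influence-coefficient relation that such methods build on (not the GICA formula itself): vibration changes are modelled as linear in the trial weights, so redundant trial-run data can be reduced to an IC matrix by a least-squares fit, and correction weights follow by solving the same model for zero residual vibration. All numeric values are illustrative.

```python
import numpy as np

# Complex vibration readings (amplitude/phase as complex numbers) at 2 sensors,
# before balancing and after 3 trial runs; values are illustrative only.
v0 = np.array([1.0+0.5j, 0.8-0.2j])           # baseline vibration
V  = np.array([[0.6+0.4j, 0.9-0.1j],          # vibration after each trial run
               [1.2+0.1j, 0.5-0.3j],
               [0.9+0.6j, 0.7+0.1j]])
W  = np.array([[10+0j, 0],                    # trial weights per balance plane
               [0, 12+0j],                    # (complex = mass & angular position)
               [5+5j, 6-2j]])                 # combined/retained trial weights

# Linear model: V[k] - v0 = W[k] @ C, with C the influence-coefficient matrix.
# Redundant trial data is absorbed by a least-squares fit.
C, *_ = np.linalg.lstsq(W, V - v0, rcond=None)

# Correction weights that drive predicted vibration to zero: v0 + w @ C = 0.
w_corr, *_ = np.linalg.lstsq(C.T, -v0, rcond=None)
print(np.round(w_corr, 2))
```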

  1. Chomsky-Schützenberger parsing for weighted multiple context-free languages

    Directory of Open Access Journals (Sweden)

    Tobias Denkinger

    2017-07-01

    We prove a Chomsky-Schützenberger representation theorem for multiple context-free languages weighted over complete commutative strong bimonoids. Using this representation we devise a parsing algorithm for a restricted form of those devices.

  2. Image-reconstruction algorithms for positron-emission tomography systems

    International Nuclear Information System (INIS)

    Cheng, S.N.C.

    1982-01-01

    The positional uncertainty in the time-of-flight measurement of a positron-emission tomography system is modelled as a Gaussian distributed random variable and the image is assumed to be piecewise constant on a rectilinear lattice. A reconstruction algorithm using maximum-likelihood estimation is derived for the situation in which time-of-flight data are sorted as the most-likely-position array. The algorithm is formulated as a linear system described by a nonseparable, block-banded, Toeplitz matrix, and a sine-transform technique is used to implement this algorithm efficiently. The reconstruction algorithms for both the most-likely-position array and the confidence-weighted array are described by similar equations, hence similar linear systems can be used to describe the reconstruction algorithm for a discrete, confidence-weighted array, when the matrix and the entries in the data array are properly identified. It is found that the mean square-error depends on the ratio of the full width at half maximum of the time-of-flight measurement to the size of a pixel. When other parameters are fixed, the larger the pixel size, the smaller is the mean square-error. In the study of resolution, parameters that affect the impulse response of time-of-flight reconstruction algorithms are identified. It is found that the larger the pixel size, the larger is the standard deviation of the impulse response. This shows that small mean square-error and fine resolution are two contradictory requirements.

  3. Foundations of genetic algorithms 1991

    CERN Document Server

    1991-01-01

    Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems. This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition

  4. THE APPROACHING TRAIN DETECTION ALGORITHM

    OpenAIRE

    S. V. Bibikov

    2015-01-01

    The paper deals with a detection algorithm for rail vibroacoustic waves caused by an approaching train against a background of increased noise. The urgency of developing a train detection algorithm that copes with increased rail noise, when railway lines are close to roads or road intersections, is justified. The algorithm is based on a method for detecting weak signals in a noisy environment. The final expression for the information statistic is adjusted. We present the results of algorithm research and t...

  5. Combinatorial optimization algorithms and complexity

    CERN Document Server

    Papadimitriou, Christos H

    1998-01-01

    This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.

  6. Compositorial 'Weight' & 'Luminance'

    NARCIS (Netherlands)

    Koenderink, Jan; van Doorn, Andrea J.; Gegenfurtner, Karl

    2017-01-01

    Compositorial weight might be understood as an operational definition of salience. It is not a psychophysical entity, but holds a key position between psychophysics and aesthetics. Several factors ranging over raw photometric/colorimetric parameters, various kinds of psychophysical contrast, image

  7. Cigarette weight control systems

    International Nuclear Information System (INIS)

    Powell, G.F.W.; Bolt, R.C.; Simmons, A.

    1980-01-01

    A system is described for monitoring the weight of a continuous wrapped rod of tobacco formed by a cigarette-making machine. A scanner unit can be used which passes beta-rays from a primary radiation source through the rod. The absorption is measured by comparison of the intensity at a detector on the opposite side of the rod with that at a detector facing another, smaller source, the balance unit. This is pre-set so that when the rod weight is correct the detected intensities from the two sources will be equal. It is essential that the scanning station is kept clean; otherwise the dust is included in the weight reading and the cigarettes manufactured would be underweight. This can be checked using an artificial cigarette of known weight as a calibration check. In this device a test circuit can be connected to the scanner head, and this opens the shutter over the radioactive source when the test is initiated. A warning device is initiated if the reading is beyond predetermined limits and can be made to prevent operation of the cigarette machine if a satisfactory test is not obtained. (U.K.)

  8. Exponential smoothing weighted correlations

    Science.gov (United States)

    Pozzi, F.; Di Matteo, T.; Aste, T.

    2012-06-01

    In many practical applications, correlation matrices might be affected by the "curse of dimensionality" and by excessive sensitivity to outliers and remote observations. These shortcomings can cause problems of statistical robustness which are especially accentuated when a system of dynamic correlations over a running window is concerned. These drawbacks can be partially mitigated by assigning a structure of weights to observational events. In this paper, we discuss Pearson's ρ and Kendall's τ correlation matrices, weighted with an exponential smoothing, computed on moving windows using a data-set of daily returns for 300 NYSE highly capitalized companies in the period between 2001 and 2003. Criteria for jointly determining optimal weights together with the optimal length of the running window are proposed. We find that the exponential smoothing can provide more robust and reliable dynamic measures and we discuss that a careful choice of the parameters can reduce the autocorrelation of dynamic correlations whilst keeping significance and robustness of the measure. Weighted correlations are found to be smoother and to recover faster from market turbulence than their unweighted counterparts, helping also to discriminate more effectively genuine from spurious correlations.
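    A minimal sketch of an exponentially smoothed Pearson correlation (the weight parameterization is an assumption in the spirit of the paper, not its exact scheme):

```python
import numpy as np

def exp_weighted_corr(x, y, theta):
    """Pearson correlation with exponential-smoothing weights.

    The most recent observation gets the largest weight; theta is the
    characteristic decay length (in observations).
    """
    n = len(x)
    w = np.exp((np.arange(n) - (n - 1)) / theta)   # w_t = exp((t - n + 1)/theta)
    w /= w.sum()
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    sx = np.sqrt(np.sum(w * (x - mx) ** 2))
    sy = np.sqrt(np.sum(w * (y - my) ** 2))
    return cov / (sx * sy)

# Two correlated daily-return series (synthetic stand-ins).
rng = np.random.default_rng(0)
a = rng.normal(size=250)
b = 0.6 * a + 0.8 * rng.normal(size=250)
print(exp_weighted_corr(a, b, theta=50.0))
```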

  9. Bessel Weighted Asymmetries

    Energy Technology Data Exchange (ETDEWEB)

    Avakian, Harut [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Gamberg, Leonard [Pennsylvania State Univ., University Park, PA (United States); Rossi, Patrizia [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Prokudin, Alexei [Pennsylvania State Univ., University Park, PA (United States)

    2016-05-01

    We review the concept of Bessel weighted asymmetries for semi-inclusive deep inelastic scattering and focus on the cross section in Fourier space, conjugate to the outgoing hadron’s transverse momentum, where convolutions of transverse momentum dependent parton distribution functions and fragmentation functions become simple products. Individual asymmetric terms in the cross section can be projected out by means of a generalized set of weights involving Bessel functions. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator which includes quark intrinsic transverse momentum within the generalized parton model. We observe a few percent systematic offset of the Bessel-weighted asymmetry obtained from Monte Carlo extraction compared to input model calculations, which is due to the limitations imposed by the energy and momentum conservation at the given energy and hard scale Q2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs with controlled systematics due to experimental acceptances and resolutions with different TMD model inputs.

  10. Non-clairvoyant weighted flow time scheduling with rejection penalty

    DEFF Research Database (Denmark)

    Chan, Ho-Leung; Chan, Sze-Hang; Lam, Tak-Wah

    2012-01-01

    This paper initiates the study of online scheduling with rejection penalty in the non-clairvoyant setting, i.e., the size (processing time) of a job is not assumed to be known at its release time. In the rejection penalty model, jobs can be rejected with a penalty, and the user cost of a job is defined as the weighted flow time of the job plus the penalty if it is rejected before completion. Previous work on minimizing the total user cost focused on the clairvoyant single-processor setting [BBC+03, CLL11] and has produced O(1)-competitive online algorithms for jobs with arbitrary weights. The algorithm has to decide job rejection and determine the order and speed of job execution. It is interesting to study the tradeoff between the above-mentioned user cost and energy. This paper gives two O(1)-competitive non-clairvoyant algorithms for minimizing the user cost plus energy on a single processor.

  11. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  12. Efficient GPS Position Determination Algorithms

    National Research Council Canada - National Science Library

    Nguyen, Thao Q

    2007-01-01

    ... differential GPS algorithm for a network of users. The stand-alone user GPS algorithm is a direct, closed-form, and efficient new position determination algorithm that exploits the closed-form solution of the GPS trilateration equations and works...

  13. Algorithmic approach to diagram techniques

    International Nuclear Information System (INIS)

    Ponticopoulos, L.

    1980-10-01

    An algorithmic approach to diagram techniques of elementary particles is proposed. The definition and axiomatics of the theory of algorithms are presented, followed by the list of instructions of an algorithm formalizing the construction of graphs and the assignment of mathematical objects to them. (T.A.)

  14. Wavelength converter placement for different RWA algorithms in wavelength-routed all-optical networks

    Science.gov (United States)

    Chu, Xiaowen; Li, Bo; Chlamtac, Imrich

    2002-07-01

    Sparse wavelength conversion and appropriate routing and wavelength assignment (RWA) algorithms are the two key factors in improving the blocking performance in wavelength-routed all-optical networks. It has been shown that the optimal placement of a limited number of wavelength converters in an arbitrary mesh network is an NP-complete problem. Various heuristic algorithms have been proposed in the literature, most of which assume that a static routing and random wavelength assignment RWA algorithm is employed. However, existing work shows that fixed-alternate routing and dynamic routing RWA algorithms can achieve much better blocking performance. Our study in this paper further demonstrates that wavelength converter placement and RWA algorithms are closely related, in the sense that a well-designed wavelength converter placement mechanism for a particular RWA algorithm might not work well with a different RWA algorithm. Therefore, wavelength converter placement and RWA have to be considered jointly. The objective of this paper is to investigate the wavelength converter placement problem under the fixed-alternate routing algorithm and the least-loaded routing algorithm. Under the fixed-alternate routing algorithm, we propose a heuristic algorithm called the Minimum Blocking Probability First (MBPF) algorithm for wavelength converter placement. Under the least-loaded routing algorithm, we propose a heuristic converter placement algorithm called the Weighted Maximum Segment Length (WMSL) algorithm. The objective of the converter placement algorithm is to minimize the overall blocking probability. Extensive simulation studies have been carried out over three typical mesh networks, including the 14-node NSFNET, the 19-node EON and the 38-node CTNET. We observe that the proposed algorithms not only outperform existing wavelength converter placement algorithms by a large margin, but also achieve almost the same performance compared with full wavelength

  15. Selfish Gene Algorithm Vs Genetic Algorithm: A Review

    Science.gov (United States)

    Ariff, Norharyati Md; Khalid, Noor Elaiza Abdul; Hashim, Rathiah; Noor, Noorhayati Mohamed

    2016-11-01

    Evolutionary algorithms are among the algorithms inspired by nature. Within little more than a decade, hundreds of papers have reported successful applications of EAs. This paper considers the Selfish Gene Algorithm (SFGA), one of the latest evolutionary algorithms (EAs), inspired by the Selfish Gene Theory, an interpretation of Darwinian ideas by the biologist Richard Dawkins in 1989. Following a brief introduction to the Selfish Gene Algorithm (SFGA), the chronology of its evolution is presented. The purpose of this paper is to present an overview of the concepts of the Selfish Gene Algorithm (SFGA) as well as its opportunities and challenges. Accordingly, the history and the steps involved in the algorithm are discussed, and its different applications, together with an analysis of these applications, are evaluated.

  16. Weight and weight gain during early infancy predict childhood obesity

    DEFF Research Database (Denmark)

    Andersen, Lise Geisler; Holst, Claus; Michaelsen, Kim F.

    2012-01-01

    Infant weight and weight gain are positively associated with later obesity, but whether there is a particular critical time during infancy remains uncertain.

  17. Optimization of Gad Pattern with Geometrical Weight

    International Nuclear Information System (INIS)

    Chang, Do Ik; Woo, Hae Seuk; Choi, Seong Min

    2009-01-01

    The prevailing burnable absorber for domestic nuclear power plants is a gad fuel rod, which is used for the partial control of excess reactivity and power peaking. The radial peaking factor, which is one of the critical constraints for plant safety, depends largely on the number of gad-bearing rods and the location of gad rods within the fuel assembly. The concentration of gad, the UO2 enrichment in the gad fuel rod, and the fuel lattice type also play important roles in the resultant radial power peaking. Since fuel is upgraded periodically and longer fuel cycle management requires more burnable absorbers or a higher gad weight percent, it is frequently necessary to search for optimized gad patterns, i.e., the distribution of gad fuel rods within the assembly, for various fuel environments and fuel management changes. In this study, a gad pattern optimization algorithm with respect to the radial power peaking factor using geometrical weight is proposed for a single gad weight percent, in which the candidates for the optimized gad pattern are determined based on the weighting of the gad rod locations and the guide tubes. The pattern evaluation is performed systematically to determine the optimal gad pattern for various situations.

  18. Generating Realistic Labelled, Weighted Random Graphs

    Directory of Open Access Journals (Sweden)

    Michael Charles Davis

    2015-12-01

    Generative algorithms for random graphs have yielded insights into the structure and evolution of real-world networks. Most networks exhibit a well-known set of properties, such as heavy-tailed degree distributions, clustering and community formation. Usually, random graph models consider only structural information, but many real-world networks also have labelled vertices and weighted edges. In this paper, we present a generative model for random graphs with discrete vertex labels and numeric edge weights. The weights are represented as a set of Beta Mixture Models (BMMs) with an arbitrary number of mixtures, which are learned from real-world networks. We propose a Bayesian Variational Inference (VI) approach, which yields an accurate estimation while keeping computation times tractable. We compare our approach to state-of-the-art random labelled graph generators and an earlier approach based on Gaussian Mixture Models (GMMs). Our results allow us to draw conclusions about the contribution of vertex labels and edge weights to graph structure.
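    A minimal sketch of the edge-weight half of the model, with hypothetical mixture parameters (in the paper these are learned from real networks via variational inference):

```python
import numpy as np

rng = np.random.default_rng(1)

def sample_bmm_weights(n, mix, a, b):
    """Draw n edge weights in (0, 1) from a Beta Mixture Model.

    mix: mixture proportions; a, b: per-component Beta shape parameters.
    (The parameters here are illustrative; the paper learns them from
    real-world networks via Bayesian variational inference.)
    """
    comp = rng.choice(len(mix), size=n, p=mix)        # pick a component per edge
    return rng.beta(np.asarray(a)[comp], np.asarray(b)[comp])

# A two-component BMM: many light edges plus a heavier-weight mode.
weights = sample_bmm_weights(1000, mix=[0.7, 0.3], a=[2, 8], b=[8, 2])
print(weights.mean())
```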

  19. Green IGP Link Weights for Energy-efficiency and Load-balancing in IP Backbone Networks

    OpenAIRE

    Francois, Frederic; Wang, Ning; Moessner, Klaus; Georgoulas, Stylianos; Xu, Ke

    2013-01-01

    The energy consumption of backbone networks has become a primary concern for network operators and regulators due to the pervasive deployment of wired backbone networks to meet the requirements of bandwidth-hungry applications. While traditional optimization of IGP link weights has been used in IP based load-balancing operations, in this paper we introduce a novel link weight setting algorithm, the Green Load-balancing Algorithm (GLA), which is able to jointly optimize both energy efficiency ...

  20. An Efficient Supervised Training Algorithm for Multilayer Spiking Neural Networks.

    Science.gov (United States)

    Xie, Xiurui; Qu, Hong; Liu, Guisong; Zhang, Malu; Kurths, Jürgen

    2016-01-01

    The spiking neural networks (SNNs) are the third generation of neural networks and perform remarkably well in cognitive tasks such as pattern recognition. The spike emitting and information processing mechanisms found in biological cognitive systems motivate the application of the hierarchical structure and temporal encoding mechanism in spiking neural networks, which have exhibited strong computational capability. However, the hierarchical structure and temporal encoding approach require neurons to process information serially in space and time respectively, which reduces training efficiency significantly. For training hierarchical SNNs, most existing methods are based on the traditional back-propagation algorithm, inheriting its drawbacks of gradient diffusion and sensitivity to parameters. To keep the powerful computation capability of the hierarchical structure and temporal encoding mechanism, but to overcome the low efficiency of the existing algorithms, a new training algorithm, the Normalized Spiking Error Back Propagation (NSEBP), is proposed in this paper. In the feedforward calculation, the output spike times are calculated by solving the quadratic function in the spike response model instead of detecting postsynaptic voltage states at all time points as in traditional algorithms. Besides, in the feedback weight modification, the computational error is propagated to previous layers by the presynaptic spike jitter instead of the gradient descent rule, which realizes layer-wise training. Furthermore, our algorithm investigates the mathematical relation between the weight variation and the voltage error change, which makes the normalization in the weight modification applicable. Adopting these strategies, our algorithm outperforms the traditional SNN multi-layer algorithms in terms of learning efficiency and parameter sensitivity, as is also demonstrated by the comprehensive experimental results in this paper.

  1. Honing process optimization algorithms

    Science.gov (United States)

    Kadyrov, Ramil R.; Charikov, Pavel N.; Pryanichnikova, Valeria V.

    2018-03-01

    This article considers the relevance of honing processes for creating high-quality mechanical engineering products. The features of the honing process are revealed and such important concepts as the task for optimization of honing operations, the optimal structure of the honing working cycles, stepped and stepless honing cycles, simulation of processing and its purpose are emphasized. It is noted that the reliability of the mathematical model determines the quality parameters of the honing process control. An algorithm for continuous control of the honing process is proposed. The process model reliably describes the machining of a workpiece in a sufficiently wide area and can be used to operate the CNC machine CC743.

  2. Learning using privileged information: SVM+ and weighted SVM.

    Science.gov (United States)

    Lapin, Maksim; Hein, Matthias; Schiele, Bernt

    2014-05-01

    Prior knowledge can be used to improve the predictive performance of learning algorithms or reduce the amount of data required for training. The same goal is pursued within the learning using privileged information paradigm, which was recently introduced by Vapnik et al. and is aimed at utilizing additional information available only at training time, a framework implemented by SVM+. We relate the privileged information to importance weighting and show that the prior knowledge expressible with privileged features can also be encoded by weights associated with every training example. We show that a weighted SVM can always replicate an SVM+ solution, while the converse is not true, and we construct a counterexample highlighting the limitations of SVM+. Finally, we touch on the problem of choosing weights for weighted SVMs when privileged features are not available. Copyright © 2014 Elsevier Ltd. All rights reserved.
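    The weighted-SVM side of this equivalence can be sketched with scikit-learn's per-example sample_weight; the weights below are hypothetical stand-ins for what privileged features would encode, not the paper's construction.

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)

# Hypothetical per-example weights standing in for what privileged
# information would encode (e.g., larger weight for "reliable" examples).
w = 1.0 + (np.abs(X[:, 0] + X[:, 1]) > 1.0)

clf = SVC(kernel="rbf", C=1.0)
clf.fit(X, y, sample_weight=w)   # weighted SVM: per-example penalty C * w_i
print(clf.score(X, y))
```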

  3. Intelligent Quality Prediction Using Weighted Least Square Support Vector Regression

    Science.gov (United States)

    Yu, Yaojun

    A novel quality prediction method with a moving time window is proposed for small-batch production processes based on weighted least squares support vector regression (LS-SVR). The design steps and learning algorithm are also addressed. In the method, weighted LS-SVR is taken as the intelligent kernel, with which small-batch learning is handled well: nearer samples are given larger weights, while farther samples in the history data are given smaller weights. A typical machining process of cutting bearing outer races is carried out and the real measured data are used for a comparison experiment. The experimental results demonstrate that the prediction error of the weighted LS-SVR based model is only 20%-30% of that of the standard LS-SVR based one under the same conditions. It provides a better candidate for quality prediction of small-batch production processes.
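    A minimal numpy sketch of a weighted LS-SVR fit via the standard weighted LS-SVM linear (KKT) system; the RBF kernel, the exponential weighting profile, and all parameter values are assumptions for illustration, not the paper's settings.

```python
import numpy as np

def weighted_lssvr_fit(X, y, v, gamma=10.0, sigma=1.0):
    """Weighted LS-SVR: solve the (n+1)x(n+1) linear KKT system.

    v: per-sample weights (nearer samples get larger v); gamma: regularization;
    sigma: RBF kernel width. A minimal sketch, not the authors' implementation.
    """
    n = len(y)
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-d2 / (2 * sigma ** 2))                 # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.diag(1.0 / (gamma * v))         # weighted regularization
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return b, alpha, K

# Toy series: weight recent samples more heavily (moving-window flavor).
X = np.linspace(0, 3, 30)[:, None]
y = np.sin(2 * X[:, 0]) + 0.05 * np.random.default_rng(0).normal(size=30)
v = np.exp(np.linspace(-1, 0, 30))   # farther in history -> smaller weight
b, alpha, K = weighted_lssvr_fit(X, y, v)
y_hat = K @ alpha + b                # in-sample predictions
print(np.max(np.abs(y_hat - y)))
```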

  4. Control algorithms for dynamic attenuators

    Energy Technology Data Exchange (ETDEWEB)

    Hsieh, Scott S., E-mail: sshsieh@stanford.edu [Department of Radiology, Stanford University, Stanford, California 94305 and Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States); Pelc, Norbert J. [Department of Radiology, Stanford University, Stanford California 94305 and Department of Bioengineering, Stanford University, Stanford, California 94305 (United States)

    2014-06-15

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current

  5. Control algorithms for dynamic attenuators

    International Nuclear Information System (INIS)

    Hsieh, Scott S.; Pelc, Norbert J.

    2014-01-01

    Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current

  6. Control algorithms for dynamic attenuators.

    Science.gov (United States)

    Hsieh, Scott S; Pelc, Norbert J

    2014-06-01

    The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen. The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without

  7. Opposite Degree Algorithm and Its Applications

    Directory of Open Access Journals (Sweden)

    Xiao-Guang Yue

    2015-12-01

    The Opposite Degree (OD) algorithm is an intelligent algorithm proposed by Yue Xiaoguang et al. The opposite degree algorithm is mainly based on the concept of opposite degree, combined with ideas from the design of neural networks, genetic algorithms, and clustering analysis algorithms. The OD algorithm is divided into two sub-algorithms, namely: the opposite degree - numerical computation (OD-NC) algorithm and the opposite degree - classification computation (OD-CC) algorithm.

  8. Performance analysis of a decoding algorithm for algebraic-geometry codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund

    1999-01-01

    The fast decoding algorithm for one point algebraic-geometry codes of Sakata, Elbrond Jensen, and Hoholdt corrects all error patterns of weight less than half the Feng-Rao minimum distance. In this correspondence we analyze the performance of the algorithm for heavier error patterns. It turns out...

  9. Image-Data Compression Using Edge-Optimizing Algorithm for WFA Inference.

    Science.gov (United States)

    Culik, Karel II; Kari, Jarkko

    1994-01-01

    Presents an inference algorithm that produces a weighted finite automaton (WFA) representing, in particular, the grayness functions of graytone images. Image-data compression results based on the new inference algorithm produce a WFA with a relatively small number of edges. Image-data compression results alone and in combination with wavelets are discussed.…

  10. An Enhanced K-Means Algorithm for Water Quality Analysis of The Haihe River in China.

    Science.gov (United States)

    Zou, Hui; Zou, Zhihong; Wang, Xiaojing

    2015-11-12

    The increasing volume and complexity of data caused by uncertain environments are today's reality. In order to identify water quality effectively and reliably, this paper presents a modified fast clustering algorithm for water quality analysis. The algorithm adopts a varying-weights K-means clustering algorithm to analyze water monitoring data. The varying-weights scheme uses the best weighting indicator selected by a modified indicator-weight self-adjustment algorithm based on K-means, named MIWAS-K-means. The new clustering algorithm avoids the margin of the iteration not being calculated in some cases. With the fast clustering analysis, we can identify the quality of water samples. The algorithm is applied to water quality analysis of the Haihe River (China) using data obtained by the monitoring network over a period of eight years (2006-2013) with four indicators at seven different sites (2078 samples). Both the theoretical and simulated results demonstrate that the algorithm is efficient and reliable for water quality analysis of the Haihe River. In addition, the algorithm can be applied to more complex data matrices with high dimensionality.
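    A minimal sketch of K-means with weighted indicators (the fixed weights below are hypothetical; the paper's MIWAS scheme selects and adjusts them from the data):

```python
import numpy as np

def weighted_kmeans(X, k, w, iters=50, seed=0):
    """K-means with per-feature (indicator) weights in the distance metric.

    w: nonnegative feature weights, fixed inputs here for illustration.
    """
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        # Weighted squared Euclidean distance from each sample to each center.
        d = np.sum(w * (X[:, None, :] - centers[None, :, :]) ** 2, axis=-1)
        labels = d.argmin(axis=1)
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

# Four water-quality indicators (columns), synthetic samples; the
# indicators are weighted unequally, emphasizing the first two.
X = np.random.default_rng(2).normal(size=(100, 4))
labels, centers = weighted_kmeans(X, k=3, w=np.array([0.4, 0.3, 0.2, 0.1]))
print(np.bincount(labels))
```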

  11. Body weight perception and body weight control behaviors in adolescents

    OpenAIRE

    Frank, Robson; Claumann, Gaia S.; Felden, Érico P.G.; Silva, Diego A.S.; Pelegrini, Andreia

    2018-01-01

    Abstract Objective: To investigate the association between the perception of body weight (as above or below the desired) and behaviors for body weight control in adolescents. Methods: This was a cross-sectional study that included 1051 adolescents (aged 15-19 years) who were high school students attending public schools. The authors collected information on the perception of body weight (dependent variable), weight control behaviors (initiative to change the weight, physical exercise, eatin...

  12. An Improved Hierarchical Genetic Algorithm for Sheet Cutting Scheduling with Process Constraints

    OpenAIRE

    Yunqing Rao; Dezhong Qi; Jinling Li

    2013-01-01

    For the first time, an improved hierarchical genetic algorithm for the sheet cutting problem, which involves n cutting patterns for m non-identical parallel machines with process constraints, has been proposed in the integrated cutting stock model. The objective of the cutting scheduling problem is minimizing the weighted completion time. A mathematical model for this problem is presented, and an improved hierarchical genetic algorithm (ant colony hierarchical genetic algorithm) is developed for better ...

  13. Error tolerance in an NMR implementation of Grover's fixed-point quantum search algorithm

    International Nuclear Information System (INIS)

    Xiao Li; Jones, Jonathan A.

    2005-01-01

    We describe an implementation of Grover's fixed-point quantum search algorithm on a nuclear magnetic resonance quantum computer, searching for either one or two matching items in an unsorted database of four items. In this algorithm the target state (an equally weighted superposition of the matching states) is a fixed point of the recursive search operator, so that the algorithm always moves towards the desired state. The effects of systematic errors in the implementation are briefly explored

  14. A Cultural Algorithm for Optimal Design of Truss Structures

    Directory of Open Access Journals (Sweden)

    Shahin Jalili

    A cultural algorithm was utilized in this study to solve the optimal design of truss structures problem, achieving the minimum weight objective under stress and deflection constraints. The algorithm is inspired by principles of human social evolution. It simulates the social interaction between individuals and their beliefs in a belief space. The Cultural Algorithm (CA) utilizes a belief space and a population space which affect each other through acceptance and influence functions. The belief space of CA consists of different knowledge components. In this paper, only situational and normative knowledge components are used within the belief space. The performance of the method is demonstrated through four benchmark design examples. Comparison of the obtained results with those of some previous studies demonstrates the efficiency of this algorithm.

  15. Error Estimation for the Linearized Auto-Localization Algorithm

    Directory of Open Access Journals (Sweden)

    Fernando Seco

    2012-02-01

    The Linearized Auto-Localization (LAL) algorithm estimates the position of beacon nodes in Local Positioning Systems (LPSs), using only the distance measurements to a mobile node whose position is also unknown. The LAL algorithm calculates the inter-beacon distances, used for the estimation of the beacons' positions, from the linearized trilateration equations. In this paper we propose a method to estimate the propagation of the errors of the inter-beacon distances obtained with the LAL algorithm, based on a first-order Taylor approximation of the equations. Since the method depends on this approximation, a confidence parameter τ is defined to measure the reliability of the estimated error. Field evaluations showed that by applying this information to an improved weighted-based auto-localization algorithm (WLAL), the standard deviation of the inter-beacon distances can be improved by more than 30% on average with respect to the original LAL method.
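    The underlying technique, first-order (Taylor) error propagation, can be sketched generically (this is not the paper's specific treatment of the trilateration equations, and the confidence parameter τ is omitted):

```python
import numpy as np

def propagate_error(f, x, cov_x, eps=1e-6):
    """First-order (Taylor) propagation of input covariance through f.

    Returns J @ cov_x @ J.T, with J the numerically estimated Jacobian of f
    at x. A generic sketch of the linearized technique only.
    """
    x = np.asarray(x, dtype=float)
    f0 = np.atleast_1d(f(x))
    J = np.zeros((f0.size, x.size))
    for i in range(x.size):
        dx = np.zeros_like(x)
        dx[i] = eps
        J[:, i] = (np.atleast_1d(f(x + dx)) - f0) / eps
    return J @ cov_x @ J.T

# Example: variance of a 2-D distance given noisy endpoint coordinates.
dist = lambda p: np.linalg.norm(p[:2] - p[2:])
cov = np.diag([0.01] * 4)                        # 10 cm std dev per coordinate
print(propagate_error(dist, [0, 0, 3, 4], cov))  # [[0.02]]
```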

  16. Approximated affine projection algorithm for feedback cancellation in hearing aids.

    Science.gov (United States)

    Lee, Sangmin; Kim, In-Young; Park, Young-Cheol

    2007-09-01

    We propose an approximated affine projection (AP) algorithm for feedback cancellation in hearing aids. It is based on the conventional approach using the Gauss-Seidel (GS) iteration, but provides more stable convergence behaviour even with small step sizes. In the proposed algorithm, a residue of the weighted error vector, instead of the current error sample, is used to provide stable convergence. A new learning rate control scheme is also applied to the proposed algorithm to prevent signal cancellation and system instability. The new scheme determines step size in proportion to the prediction factor of the input, so that adaptation is inhibited whenever tone-like signals are present in the input. Simulation results verified the efficiency of the proposed algorithm.
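    For reference, one update of the standard affine projection algorithm is sketched below; the paper's contribution replaces the exact matrix solve with Gauss-Seidel iterations and adds prediction-based step-size control, both of which are omitted here. The echo path and signals are synthetic stand-ins.

```python
import numpy as np

def affine_projection_step(w, X, d, mu=0.5, delta=1e-3):
    """One update of the standard affine projection (AP) algorithm.

    X: (P, L) matrix of the last P input vectors; d: last P desired samples;
    w: (L,) filter weights. The exact regularized solve below is what the
    paper approximates with Gauss-Seidel iterations.
    """
    e = d - X @ w                                           # a-priori errors
    g = np.linalg.solve(X @ X.T + delta * np.eye(len(d)), e)
    return w + mu * X.T @ g

# Identify a short echo path from white noise (a toy stand-in for the
# hearing-aid feedback path).
rng = np.random.default_rng(3)
h = np.array([0.5, -0.3, 0.2, 0.1])
x = rng.normal(size=2000)
w = np.zeros(4)
P = 4
for n in range(8, len(x)):
    X = np.array([x[n - i - np.arange(4)] for i in range(P)])
    w = affine_projection_step(w, X, X @ h)
print(np.round(w, 3))   # converges toward h
```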

  17. Research on personalized recommendation algorithm based on spark

    Science.gov (United States)

    Li, Zeng; Liu, Yu

    2018-04-01

    With the increasing amount of data in recent years, traditional recommendation algorithms have been unable to meet people's needs. Therefore, how to better recommend products of interest to users has become both an opportunity and a challenge in the era of big data. At present, each platform enterprise has its own recommendation algorithm, but how to push information efficiently and accurately is still an urgent problem for personalized recommendation systems. In this paper, a hybrid algorithm based on user collaborative filtering and content-based recommendation is proposed on Spark to improve the efficiency and accuracy of recommendation by weighted processing. The experiment shows that recommendation under this scheme is more efficient and accurate.
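    A minimal single-machine sketch of such a weighted hybrid (collaborative filtering blended with content-based scores); the blending weight, similarity choices, and toy data are assumptions, and the Spark distribution of the computation is omitted.

```python
import numpy as np

def hybrid_scores(R, item_features, user, alpha=0.6):
    """Weighted hybrid of user-based CF and content-based scores.

    R: user x item rating matrix (0 = unrated); alpha blends the two
    normalized score vectors (an assumed weighting scheme).
    """
    # User-based CF: cosine similarity between users, similarity-weighted ratings.
    norm = np.linalg.norm(R, axis=1, keepdims=True) + 1e-9
    sim = (R / norm) @ (R / norm).T
    cf = sim[user] @ R / (np.abs(sim[user]).sum() + 1e-9)

    # Content-based: similarity of items to the user's liked-item profile.
    profile = R[user] @ item_features
    cb = item_features @ profile

    scale = lambda s: (s - s.min()) / (np.ptp(s) + 1e-9)   # min-max normalize
    return alpha * scale(cf) + (1 - alpha) * scale(cb)

R = np.array([[5, 4, 0, 0], [4, 0, 0, 1], [0, 0, 5, 4.0]])
F = np.array([[1, 0], [1, 0], [0, 1], [0, 1.0]])   # item genre features
print(hybrid_scores(R, F, user=0))
```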

  18. Gamma-Weighted Discrete Ordinate Two-Stream Approximation for Computation of Domain Averaged Solar Irradiance

    Science.gov (United States)

    Kato, S.; Smith, G. L.; Barker, H. W.

    2001-01-01

    An algorithm is developed for the gamma-weighted discrete ordinate two-stream approximation that computes profiles of domain-averaged shortwave irradiances for horizontally inhomogeneous cloudy atmospheres. The algorithm assumes that frequency distributions of cloud optical depth at unresolved scales can be represented by a gamma distribution though it neglects net horizontal transport of radiation. This algorithm is an alternative to the one used in earlier studies that adopted the adding method. At present, only overcast cloudy layers are permitted.
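    The gamma-weighting idea can be illustrated on the simplest radiative ingredient, direct-beam (Beer's law) transmittance: averaging over a gamma distribution of optical depth has a closed form, which a numerical quadrature reproduces. This is a sketch of the weighting step only, not the full two-stream irradiance-profile solver, and all parameter values are illustrative.

```python
import numpy as np
from scipy.stats import gamma as gamma_dist

# Domain-averaged direct-beam transmittance when subgrid cloud optical depth
# follows a gamma distribution (shape nu, mean tau_bar), for solar zenith
# cosine mu0.
nu, tau_bar, mu0 = 2.0, 10.0, 0.5

# Numerical gamma-weighted average of exp(-tau/mu0)...
taus = np.linspace(1e-4, 200, 200000)
pdf = gamma_dist.pdf(taus, a=nu, scale=tau_bar / nu)
T_avg = np.trapz(pdf * np.exp(-taus / mu0), taus)

# ...matches the closed form (1 + tau_bar/(nu*mu0))**(-nu), here 1/121.
print(T_avg, (1 + tau_bar / (nu * mu0)) ** (-nu))
```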

  19. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and are not generally adopted in practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation, with time complexity O(n log n). In comparison to the naive algorithms, it generates the opening and closing envelopes without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well for both profile and areal morphological filters. Examples are presented to demonstrate the validity of this algorithm and its superiority in efficiency over the naive algorithm.

  20. Recognition algorithms in knot theory

    International Nuclear Information System (INIS)

    Dynnikov, I A

    2003-01-01

    In this paper the problem of constructing algorithms for comparing knots and links is discussed. A survey of existing approaches and basic results in this area is given. In particular, diverse combinatorial methods for representing links are discussed, the Haken algorithm for recognizing a trivial knot (the unknot) and a scheme for constructing a general algorithm (using Haken's ideas) for comparing links are presented, an approach based on representing links by closed braids is described, the known algorithms for solving the word problem and the conjugacy problem for braid groups are described, and the complexity of the algorithms under consideration is discussed. A new method of combinatorial description of knots is given together with a new algorithm (based on this description) for recognizing the unknot by using a procedure for monotone simplification. In the conclusion of the paper several problems are formulated whose solution could help to advance towards the 'algorithmization' of knot theory

  1. Reducing rotor weight

    Energy Technology Data Exchange (ETDEWEB)

    Cheney, M.C. [PS Enterprises, Inc., Glastonbury, CT (United States)

    1997-12-31

    The cost of energy for renewables has gained greater significance in recent years due to the drop in price in some competing energy sources, particularly natural gas. In pursuit of lower manufacturing costs for wind turbine systems, work was conducted to explore an innovative rotor designed to reduce weight and cost over conventional rotor systems. Trade-off studies were conducted to measure the influence of number of blades, stiffness, and manufacturing method on COE. The study showed that increasing number of blades at constant solidity significantly reduced rotor weight and that manufacturing the blades using pultrusion technology produced the lowest cost per pound. Under contracts with the National Renewable Energy Laboratory and the California Energy Commission, a 400 kW (33m diameter) turbine was designed employing this technology. The project included tests of an 80 kW (15.5m diameter) dynamically scaled rotor which demonstrated the viability of the design.

  2. Calculating Quenching Weights

    CERN Document Server

    Salgado, C A; Salgado, Carlos A.; Wiedemann, Urs Achim

    2003-01-01

    We calculate the probability ("quenching weight") that a hard parton radiates an additional energy fraction due to scattering in spatially extended QCD matter. This study is based on an exact treatment of finite in-medium path length, it includes the case of a dynamically expanding medium, and it extends to the angular dependence of the medium-induced gluon radiation pattern. All calculations are done in the multiple soft scattering approximation (Baier-Dokshitzer-Mueller-Peigné-Schiff-Zakharov "BDMPS-Z" formalism) and in the single hard scattering approximation (N=1 opacity approximation). By comparison, we establish a simple relation between transport coefficient, Debye screening mass and opacity, for which both approximations lead to comparable results. Together with this paper, a CPU-inexpensive numerical subroutine for calculating quenching weights is provided electronically. To illustrate its applications, we discuss the suppression of hadronic transverse momentum spectra in nucleus-nucleus collisions...

  3. Normal Weight Dyslipidemia

    DEFF Research Database (Denmark)

    Ipsen, David Hojland; Tveden-Nyborg, Pernille; Lykkesfeldt, Jens

    2016-01-01

    Objective: The liver coordinates lipid metabolism and may play a vital role in the development of dyslipidemia, even in the absence of obesity. Normal weight dyslipidemia (NWD) and patients with nonalcoholic fatty liver disease (NAFLD) who do not have obesity constitute a unique subset of individuals characterized by dyslipidemia and metabolic deterioration. This review examined the available literature on the role of the liver in dyslipidemia and the metabolic characteristics of patients with NAFLD who do not have obesity. Methods: PubMed was searched using the following keywords: nonobese, dyslipidemia, NAFLD, NWD, liver, and metabolically obese/unhealthy normal weight. Additionally, article bibliographies were screened, and relevant citations were retrieved. Studies were excluded if they had not measured relevant biomarkers of dyslipidemia. Results: NWD and NAFLD without obesity share a similar...

  4. Hybrid Cryptosystem Using Tiny Encryption Algorithm and LUC Algorithm

    Science.gov (United States)

    Rachmawati, Dian; Sharif, Amer; Jaysilen; Andri Budiman, Mohammad

    2018-01-01

    Security is a very important issue in data transmission, and there are many methods for making files more secure. One of these methods is cryptography, a way to secure a file by writing it as a hidden code that covers the original content; anyone not in possession of the key cannot decrypt the hidden code to read the original file. Among the many methods used in cryptography is the hybrid cryptosystem, which uses a symmetric algorithm to secure the file and an asymmetric algorithm to secure the symmetric algorithm's key. In this research, the TEA algorithm is used as the symmetric algorithm and the LUC algorithm as the asymmetric algorithm. The system is tested by encrypting and decrypting the file with the TEA algorithm and using the LUC algorithm to encrypt and decrypt the TEA key. The results show that when the TEA algorithm encrypts the file, the ciphertext consists of characters from the ASCII (American Standard Code for Information Interchange) table rendered as hexadecimal numbers, and the ciphertext size increases by sixteen bytes for every eight characters added to the plaintext.
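
    A sketch of the symmetric half of such a hybrid scheme is given below: the Tiny Encryption Algorithm enciphering one 64-bit block (two 32-bit words) under a 128-bit key. The LUC asymmetric layer that would protect this key is omitted, and the key and plaintext values shown are hypothetical.

        def tea_encrypt(v, key, cycles=32):
            """Encrypt a 64-bit block v = (v0, v1) with a 128-bit key (k0..k3)."""
            v0, v1 = v
            k0, k1, k2, k3 = key
            delta, total, mask = 0x9E3779B9, 0, 0xFFFFFFFF
            for _ in range(cycles):
                total = (total + delta) & mask
                v0 = (v0 + (((v1 << 4) + k0) ^ (v1 + total) ^ ((v1 >> 5) + k1))) & mask
                v1 = (v1 + (((v0 << 4) + k2) ^ (v0 + total) ^ ((v0 >> 5) + k3))) & mask
            return v0, v1

        def tea_decrypt(v, key, cycles=32):
            """Invert tea_encrypt by running the Feistel cycles backwards."""
            v0, v1 = v
            k0, k1, k2, k3 = key
            delta, mask = 0x9E3779B9, 0xFFFFFFFF
            total = (delta * cycles) & mask
            for _ in range(cycles):
                v1 = (v1 - (((v0 << 4) + k2) ^ (v0 + total) ^ ((v0 >> 5) + k3))) & mask
                v0 = (v0 - (((v1 << 4) + k0) ^ (v1 + total) ^ ((v1 >> 5) + k1))) & mask
                total = (total - delta) & mask
            return v0, v1

        key = (0x01234567, 0x89ABCDEF, 0xFEDCBA98, 0x76543210)  # hypothetical key
        cipher = tea_encrypt((0xDEADBEEF, 0xCAFEBABE), key)
        assert tea_decrypt(cipher, key) == (0xDEADBEEF, 0xCAFEBABE)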

  5. Online Planning Algorithm

    Science.gov (United States)

    Rabideau, Gregg R.; Chien, Steve A.

    2010-01-01

    AVA v2 software selects goals for execution from a set of goals that oversubscribe shared resources. The term goal refers to a science or engineering request to execute a possibly complex command sequence, such as imaging targets or ground-station downlinks. Developed as an extension to the Virtual Machine Language (VML) execution system, the software enables onboard and remote goal triggering through the use of an embedded, dynamic goal set that can oversubscribe resources. From the set of conflicting goals, a subset must be chosen that maximizes a given quality metric, which in this case is strict priority selection. A goal can never be pre-empted by a lower priority goal, and high-level goals can be added, removed, or updated at any time; the "best" goals will be selected for execution. The software addresses the issue of re-planning that must be performed in a short time frame by the embedded system, where computational resources are constrained. In particular, the algorithm addresses problems with well-defined goal requests without temporal flexibility that oversubscribe available resources. By using a fast, incremental algorithm, goal selection can be postponed in a "just-in-time" fashion, allowing requests to be changed or added at the last minute, thereby enabling shorter response times and greater autonomy for the system under control.
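
    The following is a minimal sketch of strict-priority goal selection under one oversubscribed shared resource, in the spirit of the description above; the actual VML-based system is considerably more elaborate, and all goal names, priorities, and capacities here are invented.

        def select_goals(goals, capacity):
            """goals: list of (priority, name, usage); higher priority wins.
            Returns goal names admitted without exceeding the capacity."""
            admitted, used = [], 0.0
            for priority, name, usage in sorted(goals, reverse=True):
                if used + usage <= capacity:   # a goal never pre-empts a higher one
                    admitted.append(name)
                    used += usage
            return admitted

        goals = [(3, "image_target_A", 40.0), (5, "downlink", 70.0),
                 (1, "image_target_B", 20.0), (4, "calibration", 35.0)]
        print(select_goals(goals, capacity=100.0))   # ['downlink', 'image_target_B']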

  6. Algorithmic Relative Complexity

    Directory of Open Access Journals (Sweden)

    Daniele Cerra

    2011-04-01

    Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
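
    A crude, hedged approximation of these incomputable quantities can be built with an off-the-shelf compressor, in the spirit of the paper's compression-based derivation: the description length of x given y is estimated as C(y+x) - C(y), and the relative complexity as its excess over a self-description baseline. The exact estimator below is our own simplification, not the authors'.

        import zlib

        def C(s: bytes) -> int:
            """Approximate description length by compressed size."""
            return len(zlib.compress(s, 9))

        def cross_complexity(x: bytes, y: bytes) -> int:
            """Rough cost of specifying x in terms of y."""
            return C(y + x) - C(y)

        def relative_complexity(x: bytes, y: bytes) -> int:
            """Compression power lost by describing x via y rather than via
            x's own (here: self-conditioned) description."""
            return cross_complexity(x, y) - (C(x + x) - C(x))

        a = b"the quick brown fox jumps over the lazy dog " * 20
        b = b"the quick brown fox jumps over the lazy cat " * 20
        c = b"lorem ipsum dolor sit amet consectetur adipiscing " * 20
        print(relative_complexity(a, b) < relative_complexity(a, c))  # expect True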

  7. Fatigue evaluation algorithms: Review

    Energy Technology Data Exchange (ETDEWEB)

    Passipoularidis, V.A.; Broendsted, P.

    2009-11-15

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics, based on the failure criterion of Puck, to model the degradation caused by failure events at ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of wind turbine rotor blade construction. Two versions of the algorithm, one using single-step and the other using incremental application of each load cycle (in case of ply failure), are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)

  8. Link prediction in weighted networks

    DEFF Research Database (Denmark)

    Wind, David Kofoed; Mørup, Morten

    2012-01-01

    Many complex networks feature relations with weight information. Some models utilize this information while others ignore it when inferring the structure. In this paper we investigate whether edge weights, when modeling real networks, carry important information about the network [...] is to infer the presence of edges, but that simpler models are better at inferring the actual weights...

  9. Marijuana and Body Weight

    OpenAIRE

    Sansone, Randy A.; Sansone, Lori A.

    2014-01-01

    Acute marijuana use is classically associated with snacking behavior (colloquially referred to as “the munchies”). In support of these acute appetite-enhancing effects, several authorities report that marijuana may increase body mass index in patients suffering from human immunodeficiency virus and cancer. However, for these medical conditions, while appetite may be stimulated, some studies indicate that weight gain is not always clinically meaningful. In addition, in a study of cancer patients...

  10. Light Weight Deflectometer (LWD)

    OpenAIRE

    Siddiki, Nayyar Zia

    2012-01-01

    The light weight deflectometer (LWD) has been widely used for quality assurance in road construction, in particular for checking compaction of both chemically treated subgrade soil and aggregate subbase. However, it has been recognized that LWD measurements vary with many factors. Based on LWD tests in actual road construction, this presentation provides updated information on LWD deflection measurements for both chemically treated subgrade soil and aggregate subbase.

  11. Weighted halfspace depth

    Czech Academy of Sciences Publication Activity Database

    Kotík, Lukáš; Hlubinka, D.; Vencálek, O.

    Vol. 46, No. 1 (2010), pp. 125-148 ISSN 0023-5954 Institutional research plan: CEZ:AV0Z10750506 Keywords: data depth * nonparametric multivariate analysis * strong consistency of depth * mixture of distributions Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.461, year: 2010 http://library.utia.cas.cz/separaty/2010/SI/kotik-weighted halfspace depth.pdf

  12. The weight of color

    OpenAIRE

    Brunberg, Mikael

    2013-01-01

    This paper explores the weight of color, with the focus lying on the symbolic significance of color: whether color in itself conveys symbolic significance, and whether that significance is permanent or an after-the-fact construction. It looks at different areas, such as what makes us humans able to perceive colors in the first place, beginning with an insight into some of the foundations of color theory and mentioning experiments on decomposed white light, that cont...

  13. Online weight training.

    Science.gov (United States)

    McNamara, John M; Swalm, Ricky L; Stearne, David J; Covassin, Tracey M

    2008-07-01

    The purpose of this study was to determine how a traditional weight training class compared to nontraditional classes that were heavily laden with technology. Could students learn resistance exercises by watching video demonstrations over the Internet? Three university weight training classes, each lasting 16 weeks, were compared. Each class had the same curriculum and workout requirements but different attendance requirements. The online group made extensive use of the Internet and was allowed to complete the workouts on their own at any gym that was convenient for them. Seventy-nine college-aged students were randomized into 3 groups: traditional (n = 27), hybrid (n = 25), and online (n = 27). They completed pretest and posttest measures on upper-body strength (i.e., bench press), lower-body strength (i.e., back squat), and knowledge (i.e., written exam). The results indicated that all 3 groups showed significant improvement in knowledge (p < ...) [...] students to attend class and may have resulted in significantly lower scores on the bench press (p < ...) [...] technology can be used in a weight training class. If this limit is exceeded, some type of monitoring system appears necessary to ensure that students are actually completing their workouts.

  14. TAO-robust backpropagation learning algorithm.

    Science.gov (United States)

    Pernía-Espinoza, Alpha V; Ordieres-Meré, Joaquín B; Martínez-de-Pisón, Francisco J; González-Marcos, Ana

    2005-03-01

    In several fields, such as industrial modelling, multilayer feedforward neural networks are often used as universal function approximators. These supervised neural networks are commonly trained by traditional backpropagation, which minimises the mean squared error (MSE) of the training data. However, in the presence of corrupted data (outliers) this training scheme may produce wrong models. We combine the benefits of the non-linear regression model tau-estimates [introduced by Tabatabai, M. A., Argyros, I. K., Robust estimation and testing for general nonlinear regression models, Applied Mathematics and Computation 58 (1993) 85-101] with the backpropagation algorithm to produce the TAO-robust learning algorithm, in order to deal with the problems of modelling with outliers. The cost function of this approach has a bounded influence function given by the weighted average of two psi functions, one corresponding to a very robust estimate and the other to a highly efficient estimate. The advantages of the proposed algorithm are studied with an example.
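
    To make the bounded-influence idea concrete, the sketch below swaps the raw MSE residual for a bounded psi function in a gradient-descent loop. Tukey's biweight stands in for the paper's tau-estimate psi pair, so this illustrates the principle rather than the TAO algorithm itself; all data are synthetic.

        import numpy as np

        def psi_biweight(r, c=4.685):
            """Bounded influence function: roughly linear for small residuals,
            exactly zero for |r| > c, so gross outliers exert no pull."""
            return np.where(np.abs(r) < c, r * (1.0 - (r / c) ** 2) ** 2, 0.0)

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 3))
        w_true = np.array([1.0, -2.0, 0.5])
        y = X @ w_true + 0.1 * rng.normal(size=200)
        y[:10] += 15.0                              # inject gross outliers

        w = np.zeros(3)
        for _ in range(500):                        # robust gradient descent
            r = y - X @ w                           # residuals
            w += 0.1 * X.T @ psi_biweight(r) / len(y)
        print(w)                                    # near w_true despite outliers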

  15. Group prioritisation with unknown expert weights in incomplete linguistic context

    Science.gov (United States)

    Cheng, Dong; Cheng, Faxin; Zhou, Zhili; Wang, Juan

    2017-09-01

    In this paper, we study a group prioritisation problem in situations when the expert weights are completely unknown and their judgement preferences are linguistic and incomplete. Starting from the theory of relative entropy (RE) and multiplicative consistency, an optimisation model is provided for deriving an individual priority vector without estimating the missing value(s) of an incomplete linguistic preference relation. In order to address the unknown expert weights in the group aggregating process, we define two new kinds of expert weight indicators based on RE: proximity entropy weight and similarity entropy weight. Furthermore, a dynamic-adjusting algorithm (DAA) is proposed to obtain an objective expert weight vector and capture the dynamic properties involved in it. Unlike the extant literature of group prioritisation, the proposed RE approach does not require pre-allocation of expert weights and can solve incomplete preference relations. An interesting finding is that once all the experts express their preference relations, the final expert weight vector derived from the DAA is fixed irrespective of the initial settings of expert weights. Finally, an application example is conducted to validate the effectiveness and robustness of the RE approach.

  16. Vehicle Maximum Weight Limitation Based on Intelligent Weight Sensor

    Science.gov (United States)

    Raihan, W.; Tessar, R. M.; Ernest, C. O. S.; E Byan, W. R.; Winda, A.

    2017-03-01

    Vehicle weight is an important factor to be maintained for transportation safety. A weight limitation system is proposed to make sure the vehicle weight is always below its design limit before the vehicle is used by the driver. The proposed system is divided into two subsystems, namely a vehicle weight confirmation system and a weight warning system. In the vehicle weight confirmation system, the weight sensor operates for the first time after the ignition switch is turned on. When the weight is under the weight limit, the starter can be switched on to start the engine; otherwise it is locked. The second subsystem operates after all doors are confirmed closed: once the doors of the car are closed, the weight warning system checks the weight once again while the engine is running. In testing, both the vehicle weight confirmation system and the weight warning system achieved 100% accuracy. These results show that the proposed vehicle weight limitation system operates well.

  17. AdaBoost-based algorithm for network intrusion detection.

    Science.gov (United States)

    Hu, Weiming; Hu, Wei; Maybank, Steve

    2008-04-01

    Network intrusion detection aims at distinguishing the attacks on the Internet from normal use of the Internet. It is an indispensable part of the information security system. Due to the variety of network behaviors and the rapid development of attack fashions, it is necessary to develop fast machine-learning-based intrusion detection algorithms with high detection rates and low false-alarm rates. In this correspondence, we propose an intrusion detection algorithm based on the AdaBoost algorithm. In the algorithm, decision stumps are used as weak classifiers. The decision rules are provided for both categorical and continuous features. By combining the weak classifiers for continuous features and the weak classifiers for categorical features into a strong classifier, the relations between these two different types of features are handled naturally, without any forced conversions between continuous and categorical features. Adaptable initial weights and a simple strategy for avoiding overfitting are adopted to improve the performance of the algorithm. Experimental results show that our algorithm has low computational complexity and error rates, as compared with algorithms of higher computational complexity, as tested on the benchmark sample data.
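
    Below is a minimal AdaBoost with threshold stumps on continuous features, illustrating the weak-learner and re-weighting machinery the paper builds on; its categorical decision rules, adaptable initial weights, and overfitting safeguards are omitted, and the toy labels are invented.

        import numpy as np

        def fit_stump(X, y, w):
            """Weighted-error-minimising threshold stump (feature, thr, polarity)."""
            best = (0, 0.0, 1, np.inf)
            for j in range(X.shape[1]):
                for t in np.unique(X[:, j]):
                    for pol in (1, -1):
                        pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
                        err = w[pred != y].sum()
                        if err < best[3]:
                            best = (j, t, pol, err)
            return best

        def adaboost(X, y, rounds=10):
            w = np.full(len(y), 1.0 / len(y))       # uniform initial weights
            ensemble = []
            for _ in range(rounds):
                j, t, pol, err = fit_stump(X, y, w)
                alpha = 0.5 * np.log((1.0 - err) / max(err, 1e-10))
                pred = np.where(pol * (X[:, j] - t) > 0, 1, -1)
                w *= np.exp(-alpha * y * pred)      # up-weight the mistakes
                w /= w.sum()
                ensemble.append((alpha, j, t, pol))
            return ensemble

        def predict(ensemble, X):
            score = sum(a * np.where(p * (X[:, j] - t) > 0, 1, -1)
                        for a, j, t, p in ensemble)
            return np.sign(score)

        rng = np.random.default_rng(1)
        X = rng.normal(size=(120, 2))
        y = np.where(X[:, 0] + X[:, 1] > 0, 1, -1)  # toy "attack vs normal" labels
        print(np.mean(predict(adaboost(X, y), X) == y))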

  18. Optimal Fungal Space Searching Algorithms.

    Science.gov (United States)

    Asenova, Elitsa; Lin, Hsin-Yu; Fu, Eileen; Nicolau, Dan V; Nicolau, Dan V

    2016-10-01

    Previous experiments have shown that fungi use an efficient natural algorithm for searching the space available for their growth in micro-confined networks, e.g., mazes. This natural "master" algorithm, which comprises two "slave" sub-algorithms, i.e., collision-induced branching and directional memory, has been shown to be more efficient than alternatives with one, or the other, or both sub-algorithms turned off. In contrast, the present contribution compares the performance of the fungal natural algorithm against several standard artificial homologues. It was found that the space-searching fungal algorithm consistently outperforms uninformed algorithms, such as Depth-First Search (DFS). Furthermore, while the natural algorithm is inferior to informed ones, such as A*, this under-performance does not increase appreciably with the size of the maze. These findings suggest that a systematic effort of harvesting the natural space-searching algorithms used by microorganisms is warranted and possibly overdue. These natural algorithms, if efficient, can be reverse-engineered for graph and tree search strategies.

  19. STAR Algorithm Integration Team - Facilitating operational algorithm development

    Science.gov (United States)

    Mikles, V. J.

    2015-12-01

    The NOAA/NESDIS Center for Satellite Research and Applications (STAR) provides technical support of the Joint Polar Satellite System (JPSS) algorithm development and integration tasks. Utilizing data from the S-NPP satellite, JPSS generates over thirty Environmental Data Records (EDRs) and Intermediate Products (IPs) spanning atmospheric, ocean, cryosphere, and land weather disciplines. The Algorithm Integration Team (AIT) brings technical expertise and support to product algorithms, specifically in testing and validating science algorithms in a pre-operational environment. The AIT verifies that new and updated algorithms function in the development environment, enforces established software development standards, and ensures that delivered packages are functional and complete. AIT facilitates the development of new JPSS-1 algorithms by implementing a review approach based on the Enterprise Product Lifecycle (EPL) process. Building on relationships established during the S-NPP algorithm development process and coordinating directly with science algorithm developers, the AIT has implemented structured reviews with self-contained document suites. The process has supported algorithm improvements for products such as ozone, active fire, vegetation index, and temperature and moisture profiles.

  20. Algorithm aversion: people erroneously avoid algorithms after seeing them err.

    Science.gov (United States)

    Dietvorst, Berkeley J; Simmons, Joseph P; Massey, Cade

    2015-02-01

    Research shows that evidence-based algorithms more accurately predict the future than do human forecasters. Yet when forecasters are deciding whether to use a human forecaster or a statistical algorithm, they often choose the human forecaster. This phenomenon, which we call algorithm aversion, is costly, and it is important to understand its causes. We show that people are especially averse to algorithmic forecasters after seeing them perform, even when they see them outperform a human forecaster. This is because people more quickly lose confidence in algorithmic than human forecasters after seeing them make the same mistake. In 5 studies, participants either saw an algorithm make forecasts, a human make forecasts, both, or neither. They then decided whether to tie their incentives to the future predictions of the algorithm or the human. Participants who saw the algorithm perform were less confident in it, and less likely to choose it over an inferior human forecaster. This was true even among those who saw the algorithm outperform the human.

  1. The Texas Medication Algorithm Project (TMAP) schizophrenia algorithms.

    Science.gov (United States)

    Miller, A L; Chiles, J A; Chiles, J K; Crismon, M L; Rush, A J; Shon, S P

    1999-10-01

    In the Texas Medication Algorithm Project (TMAP), detailed guidelines for medication management of schizophrenia and related disorders, bipolar disorders, and major depressive disorders have been developed and implemented. This article describes the algorithms developed for medication treatment of schizophrenia and related disorders. The guidelines recommend a sequence of medications and discuss dosing, duration, and switch-over tactics. They also specify response criteria at each stage of the algorithm for both positive and negative symptoms. The rationale and evidence for each aspect of the algorithms are presented.

  2. An improved algorithm for personalized recommendation on MOOCs

    Directory of Open Access Journals (Sweden)

    Yuqin Wang

    2017-09-01

    Full Text Available Purpose – In the past few years, millions of people started to acquire knowledge from Massive Open Online Courses (MOOCs). MOOCs contain massive video courses produced by instructors, and learners all over the world can get access to these courses via the internet. However, faced with massive courses, learners often waste much time finding courses they like. This paper aims to explore the problem of how to make accurate personalized recommendations for MOOC users. Design/methodology/approach – This paper proposes a multi-attribute weight algorithm based on collaborative filtering (CF) to select a recommendation set of courses for target MOOC users. Findings – The recall of the proposed algorithm is higher than both the traditional CF and a CF-based algorithm, the uncertain neighbors’ collaborative filtering recommendation algorithm. The higher the recall, the more accurate the recommendation result. Originality/value – This paper reflects the target users’ preferences for the first time by separately calculating the weight of the attributes and the weight of attribute values of the courses.

  3. An Improved Nested Sampling Algorithm for Model Selection and Assessment

    Science.gov (United States)

    Zeng, X.; Ye, M.; Wu, J.; WANG, D.

    2017-12-01

    The multimodel strategy is a general approach for treating model structure uncertainty in recent research. The unknown groundwater system is represented by several plausible conceptual models, and each alternative conceptual model is assigned a weight which represents its plausibility. In the Bayesian framework, the posterior model weight is computed as the product of the model's prior weight and its marginal likelihood (also termed model evidence). As a result, estimating marginal likelihoods is crucial for reliable model selection and assessment in multimodel analysis. The nested sampling estimator (NSE) is a newly proposed algorithm for marginal likelihood estimation. The implementation of NSE comprises searching the parameter space from the low-likelihood area to the high-likelihood area gradually, and this evolution is performed iteratively via a local sampling procedure. Thus, the efficiency of NSE is dominated by the strength of the local sampling procedure. Currently, the Metropolis-Hastings (M-H) algorithm and its variants are often used for local sampling in NSE. However, M-H is not an efficient sampling algorithm for high-dimensional or complex likelihood functions. To improve the performance of NSE, it could be feasible to integrate a more efficient and elaborate sampling algorithm, DREAMzs, into the local sampling. In addition, in order to overcome the computational burden of the large number of repeated model executions in marginal likelihood estimation, an adaptive sparse grid stochastic collocation method is used to build surrogates for the original groundwater model.

  4. Robustness Beamforming Algorithms

    Directory of Open Access Journals (Sweden)

    Sajad Dehghani

    2014-04-01

    Full Text Available Adaptive beamforming methods are known to degrade in the presence of steering vector and covariance matrix uncertainty. In this paper, a new approach to robust adaptive minimum variance distortionless response (MVDR) beamforming is presented that is robust against uncertainties in both the steering vector and the covariance matrix. The method minimizes an optimization problem that contains a quadratic objective function and a quadratic constraint. The optimization problem is nonconvex but is converted to a convex optimization problem in this paper. It is solved by the interior-point method, and the optimum weight vector for robust beamforming is obtained.
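
    As context, here is a sketch of the baseline this work hardens: sample-covariance MVDR weights, with diagonal loading as the simplest common robustness fix. The paper instead solves a quadratically constrained problem by an interior-point method; the array geometry and parameters below are hypothetical.

        import numpy as np

        def mvdr_weights(R, a, loading=1e-2):
            """w = R^-1 a / (a^H R^-1 a), with diagonal loading for robustness."""
            M = len(a)
            Rl = R + loading * (np.trace(R).real / M) * np.eye(M)
            Ri_a = np.linalg.solve(Rl, a)
            return Ri_a / (a.conj() @ Ri_a)

        M = 8                                          # hypothetical 8-element ULA
        theta = np.deg2rad(20.0)                       # assumed look direction
        a = np.exp(1j * np.pi * np.arange(M) * np.sin(theta))
        rng = np.random.default_rng(0)
        snaps = rng.normal(size=(M, 200)) + 1j * rng.normal(size=(M, 200))
        R = snaps @ snaps.conj().T / 200               # sample covariance matrix
        w = mvdr_weights(R, a)
        print(abs(w.conj() @ a))                       # distortionless response: ~1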

  5. Algorithmic Reflections on Choreography

    Directory of Open Access Journals (Sweden)

    Pablo Ventura

    2016-11-01

    Full Text Available In 1996, Pablo Ventura turned his attention to the choreography software Life Forms to find out whether the then-revolutionary new tool could lead to new possibilities of expression in contemporary dance. During the next 2 decades, he devised choreographic techniques and custom software to create dance works that highlight the operational logic of computers, accompanied by computer-generated dance and media elements. This article provides a firsthand account of how Ventura’s engagement with algorithmic concepts guided and transformed his choreographic practice. The text describes the methods that were developed to create computer-aided dance choreographies. Furthermore, the text illustrates how choreography techniques can be applied to correlate formal and aesthetic aspects of movement, music, and video. Finally, the text emphasizes how Ventura’s interest in the wider conceptual context has led him to explore with choreographic means fundamental issues concerning the characteristics of humans and machines and their increasingly profound interdependencies.

  6. Multiple-algorithm parallel fusion of infrared polarization and intensity images based on algorithmic complementarity and synergy

    Science.gov (United States)

    Zhang, Lei; Yang, Fengbao; Ji, Linna; Lv, Sheng

    2018-01-01

    Diverse image fusion methods perform differently. Each method has advantages and disadvantages compared with others. One notion is that the advantages of different image fusion methods can be effectively combined. A multiple-algorithm parallel fusion method based on algorithmic complementarity and synergy is proposed. First, in view of the characteristics of the different algorithms and difference-features among images, an index vector-based feature-similarity is proposed to define the degree of complementarity and synergy. This proposed index vector is a reliable evidence indicator for algorithm selection. Second, the algorithms with a high degree of complementarity and synergy are selected. Then, the different degrees of various features and infrared intensity images are used as the initial weights for the nonnegative matrix factorization (NMF). This avoids randomness in the NMF initialization parameters. Finally, the fused images of different algorithms are integrated using the NMF because of its excellent data-fusing performance on independent features. Experimental results demonstrate that the visual effect and objective evaluation index of the fused images obtained using the proposed method are better than those obtained using traditional methods. The proposed method retains all the advantages that individual fusion algorithms have.

  7. On Weighted Support Vector Regression

    DEFF Research Database (Denmark)

    Han, Xixuan; Clemmensen, Line Katrine Harder

    2014-01-01

    We propose a new type of weighted support vector regression (SVR), motivated by modeling local dependencies in time and space in prediction of house prices. The classic weights of the weighted SVR are added to the slack variables in the objective function (OF-weights). This procedure directly shrinks the coefficient of each observation in the estimated functions; thus, it is widely used for minimizing the influence of outliers. We propose to additionally add weights to the slack variables in the constraints (CF-weights) and call the combination of weights the doubly weighted SVR. We illustrate the differences and similarities of the two types of weights by demonstrating the connection between the Least Absolute Shrinkage and Selection Operator (LASSO) and the SVR. We show that an SVR problem can be transformed to a LASSO problem plus a linear constraint and a box constraint. We demonstrate...
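
    The OF-weight idea can be tried directly with scikit-learn, whose SVR accepts per-sample weights that rescale the slack penalties in the objective; the CF-weights of the doubly weighted SVR would need a custom solver and are not shown. Data and weight values are illustrative only.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        X = np.sort(rng.uniform(0.0, 10.0, size=(80, 1)), axis=0)
        y = np.sin(X).ravel() + 0.1 * rng.normal(size=80)
        y[::10] += 2.0                              # a few gross outliers

        weights = np.ones(80)
        weights[::10] = 0.05                        # shrink the outliers' slack penalty
        plain = SVR(kernel="rbf", C=10.0).fit(X, y)
        weighted = SVR(kernel="rbf", C=10.0).fit(X, y, sample_weight=weights)
        # the weighted fit hugs sin(x) more closely near the down-weighted points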

  8. A new LMS algorithm for analysis of atrial fibrillation signals.

    Science.gov (United States)

    Ciaccio, Edward J; Biviano, Angelo B; Whang, William; Garan, Hasan

    2012-03-26

    A biomedical signal can be defined by its extrinsic features (x-axis and y-axis shift and scale) and intrinsic features (shape after normalization of extrinsic features). In this study, an LMS algorithm utilizing the method of differential steepest descent is developed, and is tested by normalization of extrinsic features in complex fractionated atrial electrograms (CFAE). Equations for normalization of x-axis and y-axis shift and scale are first derived. The algorithm is implemented for real-time analysis of CFAE acquired during atrial fibrillation (AF). Data was acquired at a 977 Hz sampling rate from 10 paroxysmal and 10 persistent AF patients undergoing clinical electrophysiologic study and catheter ablation therapy. Over 24 trials, normalization characteristics using the new algorithm with four weights were compared to the Widrow-Hoff LMS algorithm with four tapped delays. The time for convergence, and the mean squared error (MSE) after convergence, were compared. The new LMS algorithm was also applied to lead aVF of the electrocardiogram in one patient with longstanding persistent AF, to enhance the F wave and to monitor extrinsic changes in signal shape. The average waveform over a 25 s interval was used as a prototypical reference signal for matching with the aVF lead. Based on the derivation equations, the y-shift and y-scale adjustments of the new LMS algorithm were shown to be equivalent to the scalar form of the Widrow-Hoff LMS algorithm. For x-shift and x-scale adjustments, rather than implementing a long tapped delay as in Widrow-Hoff LMS, the new method uses only two weights. After convergence, the MSE for matching paroxysmal CFAE averaged 0.46 ± 0.49 μV²/sample for the new LMS algorithm versus 0.72 ± 0.35 μV²/sample for Widrow-Hoff LMS. The MSE for matching persistent CFAE averaged 0.55 ± 0.95 μV²/sample for the new LMS algorithm versus 0.62 ± 0.55 μV²/sample for Widrow-Hoff LMS. There were no significant differences in estimation...
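
    For reference, here is a sketch of the classic Widrow-Hoff LMS filter that serves as the benchmark above: weights on a tapped delay line are nudged along the negative gradient of the instantaneous squared error. The signal, target, and step size are invented for illustration.

        import numpy as np

        def lms(x, d, n_taps=4, mu=0.01):
            """Adapt n_taps weights so the filtered x tracks the desired d."""
            w = np.zeros(n_taps)
            e = np.zeros(len(x))
            for k in range(n_taps, len(x)):
                u = x[k - n_taps:k][::-1]           # tapped-delay-line contents
                e[k] = d[k] - w @ u                 # instantaneous error
                w += 2 * mu * e[k] * u              # Widrow-Hoff update
            return w, e

        rng = np.random.default_rng(2)
        t = np.arange(2000) / 977.0                 # mimic the 977 Hz sampling
        x = np.sin(2 * np.pi * 7.0 * t) + 0.3 * rng.normal(size=2000)
        d = np.roll(x, 1)                           # toy target: delayed input
        w, e = lms(x, d)
        print(np.mean(e[-500:] ** 2))               # residual MSE after adapting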

  9. A new LMS algorithm for analysis of atrial fibrillation signals

    Directory of Open Access Journals (Sweden)

    Ciaccio Edward J

    2012-03-01

    Full Text Available Abstract Background A biomedical signal can be defined by its extrinsic features (x-axis and y-axis shift and scale) and intrinsic features (shape after normalization of extrinsic features). In this study, an LMS algorithm utilizing the method of differential steepest descent is developed, and is tested by normalization of extrinsic features in complex fractionated atrial electrograms (CFAE). Method Equations for normalization of x-axis and y-axis shift and scale are first derived. The algorithm is implemented for real-time analysis of CFAE acquired during atrial fibrillation (AF). Data was acquired at a 977 Hz sampling rate from 10 paroxysmal and 10 persistent AF patients undergoing clinical electrophysiologic study and catheter ablation therapy. Over 24 trials, normalization characteristics using the new algorithm with four weights were compared to the Widrow-Hoff LMS algorithm with four tapped delays. The time for convergence, and the mean squared error (MSE) after convergence, were compared. The new LMS algorithm was also applied to lead aVF of the electrocardiogram in one patient with longstanding persistent AF, to enhance the F wave and to monitor extrinsic changes in signal shape. The average waveform over a 25 s interval was used as a prototypical reference signal for matching with the aVF lead. Results Based on the derivation equations, the y-shift and y-scale adjustments of the new LMS algorithm were shown to be equivalent to the scalar form of the Widrow-Hoff LMS algorithm. For x-shift and x-scale adjustments, rather than implementing a long tapped delay as in Widrow-Hoff LMS, the new method uses only two weights. After convergence, the MSE for matching paroxysmal CFAE averaged 0.46 ± 0.49 μV²/sample for the new LMS algorithm versus 0.72 ± 0.35 μV²/sample for Widrow-Hoff LMS. The MSE for matching persistent CFAE averaged 0.55 ± 0.95 μV²/sample for the new LMS algorithm versus 0.62 ± 0.55 μV²/sample for Widrow-Hoff LMS.

  10. Segment-based dose optimization using a genetic algorithm

    International Nuclear Information System (INIS)

    Cotrutz, Cristian; Xing Lei

    2003-01-01

    Intensity modulated radiation therapy (IMRT) inverse planning is conventionally done in two steps. Firstly, the intensity maps of the treatment beams are optimized using a dose optimization algorithm. Each of them is then decomposed into a number of segments using a leaf-sequencing algorithm for delivery. An alternative approach is to pre-assign a fixed number of field apertures and optimize directly the shapes and weights of the apertures. While the latter approach has the advantage of eliminating the leaf-sequencing step, the optimization of aperture shapes is less straightforward than that of beamlet-based optimization because of the complex dependence of the dose on the field shapes and their weights. In this work we report a genetic algorithm for segment-based optimization. Different from a gradient iterative approach or simulated annealing, the algorithm finds the optimum solution from a population of candidate plans. In this technique, each solution is encoded using three chromosomes: one for the positions of the left-bank leaves of each segment, the second for the positions of the right-bank leaves and the third for the weights of the segments defined by the first two chromosomes. The convergence towards the optimum is realized by crossover and mutation operators that ensure proper exchange of information between the three chromosomes of all the solutions in the population. The algorithm is applied to a phantom and a prostate case and the results are compared with those obtained using beamlet-based optimization. The main conclusion drawn from this study is that the genetic optimization of segment shapes and weights can produce highly conformal dose distributions. In addition, our study also confirms previous findings that fewer segments are generally needed to generate plans that are comparable with the plans obtained using beamlet-based optimization. Thus the technique may have useful applications in facilitating IMRT treatment planning.

  11. An Interactive Personalized Recommendation System Using the Hybrid Algorithm Model

    Directory of Open Access Journals (Sweden)

    Yan Guo

    2017-10-01

    Full Text Available With the rapid development of e-commerce, the contradiction between the disorder of business information and customer demand is increasingly prominent. This study aims to make e-commerce shopping more convenient, and avoid information overload, by an interactive personalized recommendation system using the hybrid algorithm model. The proposed model first uses various recommendation algorithms to get a list of original recommendation results. Combined with the customer’s feedback in an interactive manner, it then establishes the weights of corresponding recommendation algorithms. Finally, the synthetic formula of evidence theory is used to fuse the original results to obtain the final recommendation products. The recommendation performance of the proposed method is compared with that of traditional methods. The results of the experimental study through a Taobao online dress shop clearly show that the proposed method increases the efficiency of data mining in the consumer coverage, the consumer discovery accuracy and the recommendation recall. The hybrid recommendation algorithm complements the advantages of the existing recommendation algorithms in data mining. The interactive assigned-weight method meets consumer demand better and solves the problem of information overload. Meanwhile, our study offers important implications for e-commerce platform providers regarding the design of product recommendation systems.

  12. Evolving Stochastic Learning Algorithm based on Tsallis entropic index

    Science.gov (United States)

    Anastasiadis, A. D.; Magoulas, G. D.

    2006-03-01

    In this paper, inspired by our previous algorithm, which was based on the theory of Tsallis statistical mechanics, we develop a new evolving stochastic learning algorithm for neural networks. The new algorithm combines deterministic and stochastic search steps by employing a different adaptive stepsize for each network weight, and applies a form of noise that is characterized by the nonextensive entropic index q, regulated by a weight decay term. The behavior of the learning algorithm can be made more stochastic or deterministic depending on the trade-off between the temperature T and the q values. This is achieved by introducing a formula that defines a time-dependent relationship between these two important learning parameters. Our experimental study verifies that there are indeed improvements in the convergence speed of this new evolving stochastic learning algorithm, which makes learning faster than using the original Hybrid Learning Scheme (HLS). In addition, experiments are conducted to explore the influence of the entropic index q and temperature T on the convergence speed and stability of the proposed method.

  13. Multisensor data fusion algorithm development

    Energy Technology Data Exchange (ETDEWEB)

    Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.

    1995-12-01

    This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
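
    A minimal sketch of wavelet-based fusion of the kind compared in the study, assuming the PyWavelets package: approximation coefficients are averaged and the larger-magnitude detail coefficients are kept. The fusion rule and data are illustrative, not the report's exact method.

        import numpy as np
        import pywt

        def wavelet_fuse(img1, img2, wavelet="db2", level=2):
            """Fuse two registered images: average the coarse approximation,
            keep the larger-magnitude detail coefficient at each location."""
            c1 = pywt.wavedec2(img1, wavelet, level=level)
            c2 = pywt.wavedec2(img2, wavelet, level=level)
            fused = [(c1[0] + c2[0]) / 2.0]
            for d1, d2 in zip(c1[1:], c2[1:]):      # (cH, cV, cD) per level
                fused.append(tuple(np.where(np.abs(a) >= np.abs(b), a, b)
                                   for a, b in zip(d1, d2)))
            return pywt.waverec2(fused, wavelet)

        rng = np.random.default_rng(0)
        pan = rng.random((64, 64))                  # stand-in panchromatic image
        ms = rng.random((64, 64))                   # stand-in multispectral band
        fused = wavelet_fuse(pan, ms)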

  14. PROBABILISTIC PROPERTIES OF THE INITIAL VALUES OF WEIGHTING FACTORS IN SYNCHRONIZED ARTIFICIAL NEURAL NETWORKS

    Directory of Open Access Journals (Sweden)

    V. F. Golikov

    2013-01-01

    Full Text Available One of the most efficient ways to generate identical binary sequences is to use methods of neural cryptography. The influence of the initial weight vector values on the speed of synchronization is analyzed. Equal probability of the initial weight vectors' motion directions is a great advantage. On this basis, the authors propose a new line of research concerned with improving the network architecture and the correction algorithm.

  15. Marital status and body weight, weight perception, and weight management among U.S. adults.

    Science.gov (United States)

    Klos, Lori A; Sobal, Jeffery

    2013-12-01

    Married individuals often have higher body weights than unmarried individuals, but it is unclear how marital roles affect body weight-related perceptions, desires, and behaviors. This study analyzed cross-sectional data for 4,089 adult men and 3,989 adult women using multinomial logistic regression to examine associations between marital status, perceived body weight, desired body weight, and weight management approach. Controlling for demographics and current weight, married or cohabiting women and divorced or separated women more often perceived themselves as overweight and desired to weigh less than women who had never married. Marital status was unrelated to men's weight perception and desired weight change. Marital status was also generally unrelated to weight management approach, except that divorced or separated women were more likely to have intentionally lost weight within the past year compared to never married women. Additionally, never married men were more likely to be attempting to prevent weight gain than married or cohabiting men and widowed men. Overall, married and formerly married women more often perceived themselves as overweight and desired a lower weight. Men's marital status was generally unassociated with weight-related perceptions, desires, and behaviors. Women's but not men's marital roles appear to influence their perceived and desired weight, suggesting that weight management interventions should be sensitive to both marital status and gender differences. © 2013 Elsevier Ltd. All rights reserved.

  16. Weighted community detection and data clustering using message passing

    Science.gov (United States)

    Shi, Cheng; Liu, Yanchen; Zhang, Pan

    2018-03-01

    Grouping objects into clusters based on the similarities or weights between them is one of the most important problems in science and engineering. In this work, by extending message-passing algorithms and spectral algorithms proposed for an unweighted community detection problem, we develop a non-parametric method based on statistical physics, by mapping the problem to the Potts model at the critical temperature of spin-glass transition and applying belief propagation to solve the marginals corresponding to the Boltzmann distribution. Our algorithm is robust to over-fitting and gives a principled way to determine whether there are significant clusters in the data and how many clusters there are. We apply our method to different clustering tasks. In the community detection problem in weighted and directed networks, we show that our algorithm significantly outperforms existing algorithms. In the clustering problem, where the data were generated by mixture models in the sparse regime, we show that our method works all the way down to the theoretical limit of detectability and gives accuracy very close to that of the optimal Bayesian inference. In the semi-supervised clustering problem, our method only needs several labels to work perfectly in classic datasets. Finally, we further develop Thouless-Anderson-Palmer equations which heavily reduce the computation complexity in dense networks but give almost the same performance as belief propagation.

  17. Variation in Body Weight, Organ Weight and ...

    African Journals Online (AJOL)

    ADOWIE PERE

    Variation in Body Weight, Organ Weight and Haematological Parameters of Rats Fed ... ABSTRACT: Food insecurity is a major problem of the developing nations. ... Except for the values of haemoglobin and packed cell volume that were ...

  18. Birth weight recovery among very low birth weight infants surviving ...

    African Journals Online (AJOL)

    Very low birth weight (VLBW) infants are those born weighing less ... an association between retinopathy of prematurity and poor weight gain. .... LGA = large for gestational age; SGA = small for gestational age; NEC = necrotising enterocolitis;.

  19. Mao-Gilles Stabilization Algorithm

    OpenAIRE

    Jérôme Gilles

    2013-01-01

    Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  20. Mao-Gilles Stabilization Algorithm

    Directory of Open Access Journals (Sweden)

    Jérôme Gilles

    2013-07-01

    Full Text Available Originally, the Mao-Gilles stabilization algorithm was designed to compensate the non-rigid deformations due to atmospheric turbulence. Given a sequence of frames affected by atmospheric turbulence, the algorithm uses a variational model combining optical flow and regularization to characterize the static observed scene. The optimization problem is solved by Bregman Iteration and the operator splitting method. The algorithm is simple, efficient, and can be easily generalized for different scenarios involving non-rigid deformations.

  1. One improved LSB steganography algorithm

    Science.gov (United States)

    Song, Bing; Zhang, Zhi-hong

    2013-03-01

    Information hidden in digital images with the plain LSB algorithm is easily detected, with high accuracy, by chi-square (χ²) and RS steganalysis. We started by changing the selection of the embedding locations and modifying the embedding method: combined with a sub-affine transformation and matrix coding, the LSB algorithm is improved and a new LSB algorithm is proposed. Experimental results show that the improved algorithm can resist χ² and RS steganalysis effectively.
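
    For orientation, here is a sketch of the plain LSB embedding that the improved algorithm hardens; the sub-affine location scrambling and matrix coding of the paper are not reproduced, and the cover and message are synthetic.

        import numpy as np

        def embed_lsb(pixels, bits):
            """Overwrite the least significant bit of the first len(bits) pixels."""
            flat = pixels.copy().ravel()
            flat[:len(bits)] = (flat[:len(bits)] & 0xFE) | bits
            return flat.reshape(pixels.shape)

        def extract_lsb(pixels, n):
            return pixels.ravel()[:n] & 1

        rng = np.random.default_rng(0)
        cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)
        message = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)
        stego = embed_lsb(cover, message)
        assert (extract_lsb(stego, len(message)) == message).all()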

  2. Unsupervised Classification Using Immune Algorithm

    OpenAIRE

    Al-Muallim, M. T.; El-Kouatly, R.

    2012-01-01

    An unsupervised classification algorithm based on the clonal selection principle, named Unsupervised Clonal Selection Classification (UCSC), is proposed in this paper. The new proposed algorithm is data driven and self-adaptive; it adjusts its parameters to the data to make the classification operation as fast as possible. The performance of UCSC is evaluated by comparing it with the well known K-means algorithm using several artificial and real-life data sets. The experiments show that the proposed UCSC...

  3. Graph Algorithm Animation with Grrr

    OpenAIRE

    Rodgers, Peter; Vidal, Natalia

    2000-01-01

    We discuss geometric positioning, highlighting of visited nodes and user defined highlighting that form the algorithm animation facilities in the Grrr graph rewriting programming language. The main purpose of animation was initially the debugging and profiling of Grrr code, but recently it has been extended for the purpose of teaching algorithms to undergraduate students. The animation is restricted to graph based algorithms such as graph drawing, list manipulation or more traditional graph algorithms...

  4. Algorithms over partially ordered sets

    DEFF Research Database (Denmark)

    Baer, Robert M.; Østerby, Ole

    1969-01-01

    ...in partially ordered sets, answer the combinatorial question of how many maximal chains might exist in a partially ordered set with n elements, and we give an algorithm for enumerating all maximal chains. We give (in § 3) algorithms which decide whether a partially ordered set is a (lower or upper) semi-lattice, and whether a lattice has distributive, modular, and Boolean properties. Finally (in § 4) we give Algol realizations of the various algorithms.

  5. An overview of smart grid routing algorithms

    Science.gov (United States)

    Wang, Junsheng; OU, Qinghai; Shen, Haijuan

    2017-08-01

    This paper summarizes typical routing algorithms for the smart grid by analyzing the communication services and communication requirements of the intelligent grid. Two kinds of routing algorithm are analyzed, namely clustering routing algorithms and conventional (non-clustering) routing algorithms, and the advantages, disadvantages, and applicability of typical algorithms of each kind are discussed.

  6. Algorithmic complexity of quantum capacity

    Science.gov (United States)

    Oskouei, Samad Khabbazi; Mancini, Stefano

    2018-04-01

    We analyze the notion of quantum capacity from the perspective of algorithmic (descriptive) complexity. To this end, we resort to the concept of semi-computability in order to describe quantum states and quantum channel maps. We introduce algorithmic entropies (like algorithmic quantum coherent information) and derive relevant properties for them. Then we show that quantum capacity based on semi-computable concept equals the entropy rate of algorithmic coherent information, which in turn equals the standard quantum capacity. Thanks to this, we finally prove that the quantum capacity, for a given semi-computable channel, is limit computable.

  7. Machine Learning an algorithmic perspective

    CERN Document Server

    Marsland, Stephen

    2009-01-01

    Traditional books on machine learning can be divided into two groups: those aimed at advanced undergraduates or early postgraduates with reasonable mathematical knowledge, and those that are primers on how to code algorithms. The field is ready for a text that not only demonstrates how to use the algorithms that make up machine learning methods, but also provides the background needed to understand how and why these algorithms work. Machine Learning: An Algorithmic Perspective is that text. Theory backed up by practical examples: the book covers neural networks, graphical models, reinforcement learning...

  8. DNABIT Compress - Genome compression algorithm.

    Science.gov (United States)

    Rajarajeswari, Pothuraju; Apparao, Allam

    2011-01-22

    Data compression is concerned with how information is organized in data. Efficient storage means removal of redundancy from the data being stored in the DNA molecule. Data compression algorithms remove redundancy and are used to understand biologically important molecules. We present a compression algorithm, "DNABIT Compress", for DNA sequences based on a novel algorithm of assigning binary bits to smaller segments of DNA bases to compress both repetitive and non-repetitive DNA sequences. Our proposed algorithm achieves the best compression ratio for DNA sequences for larger genomes. Significantly better compression results show that "DNABIT Compress" is the best among the compared compression algorithms. While achieving the best compression ratios for DNA sequences (genomes), our new DNABIT Compress algorithm significantly improves on the running time of all previous DNA compression programs. Assigning binary bits (a unique bit code) to fragments of a DNA sequence (exact repeats, reverse repeats) is also a unique concept introduced in this algorithm for the first time in DNA compression. This proposed new algorithm achieves a compression ratio as low as 1.58 bits/base, where the existing best methods could not achieve a ratio less than 1.72 bits/base.
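
    The core idea of assigning short binary codes to bases can be illustrated with plain 2-bit packing, which reaches exactly 2 bits/base; DNABIT Compress's repeat-aware bit codes are what push below that. The codes and framing below are illustrative only.

        CODE = {"A": 0b00, "C": 0b01, "G": 0b10, "T": 0b11}
        BASE = {v: k for k, v in CODE.items()}

        def pack(seq: str) -> bytes:
            """2 bits per base, prefixed with a 4-byte length header."""
            bits = 0
            for ch in seq:
                bits = (bits << 2) | CODE[ch]
            body = bits.to_bytes((2 * len(seq) + 7) // 8, "big")
            return len(seq).to_bytes(4, "big") + body

        def unpack(blob: bytes) -> str:
            n = int.from_bytes(blob[:4], "big")
            bits = int.from_bytes(blob[4:], "big")
            return "".join(BASE[(bits >> (2 * (n - 1 - i))) & 0b11] for i in range(n))

        s = "ACGTACGGTTAC"
        assert unpack(pack(s)) == s
        print(len(pack(s)), "bytes for", len(s), "bases")   # 4-byte header + 3 bytes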

  9. Diversity-Guided Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Ursem, Rasmus Kjær

    2002-01-01

    Population diversity is undoubtedly a key issue in the performance of evolutionary algorithms. A common hypothesis is that high diversity is important to avoid premature convergence and to escape local optima. Various diversity measures have been used to analyze algorithms, but so far few algorithms have used a measure to guide the search. The diversity-guided evolutionary algorithm (DGEA) uses the well-known distance-to-average-point measure to alternate between phases of exploration (mutation) and phases of exploitation (recombination and selection). The DGEA showed remarkable results...
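
    Here is a sketch of the distance-to-average-point measure that DGEA uses to switch phases; the population data and switching thresholds shown are hypothetical.

        import numpy as np

        def diversity(pop, lower, upper):
            """Distance-to-average-point: mean distance to the centroid,
            normalised by the diagonal length of the search space."""
            centroid = pop.mean(axis=0)
            diagonal = np.linalg.norm(upper - lower)
            return np.mean(np.linalg.norm(pop - centroid, axis=1)) / diagonal

        rng = np.random.default_rng(0)
        pop = rng.uniform(-5.0, 5.0, size=(50, 10))     # 50 individuals in 10-D
        lower, upper = np.full(10, -5.0), np.full(10, 5.0)
        d = diversity(pop, lower, upper)
        d_low, d_high = 0.02, 0.25                      # hypothetical thresholds
        phase = "explore" if d < d_low else ("exploit" if d > d_high else "keep phase")
        print(round(d, 3), phase)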

  10. FRAMEWORK FOR COMPARING SEGMENTATION ALGORITHMS

    Directory of Open Access Journals (Sweden)

    G. Sithole

    2015-05-01

    Full Text Available The notion of a 'best' segmentation does not exist. A segmentation algorithm is chosen based on the features it yields, the properties of the segments (point sets) it generates, and the complexity of its algorithm. The segmentation is then assessed based on a variety of metrics such as homogeneity, heterogeneity, fragmentation, etc. Even after an algorithm is chosen its performance is still uncertain, because the landscape/scenarios represented in a point cloud have a strong influence on the eventual segmentation. Thus selecting an appropriate segmentation algorithm is a process of trial and error. Automating the selection of segmentation algorithms and their parameters first requires methods to evaluate segmentations. Three common approaches for evaluating segmentation algorithms are 'goodness methods', 'discrepancy methods' and 'benchmarks'. Benchmarks are considered the most comprehensive method of evaluation. In this paper shortcomings in current benchmark methods are identified, and a framework is proposed that permits both a visual and numerical evaluation of segmentations for different algorithms, algorithm parameters and evaluation metrics. The concept of the framework is demonstrated on a real point cloud. Current results are promising and suggest that it can be used to predict the performance of segmentation algorithms.

  11. When do evolutionary algorithms optimize separable functions in parallel?

    DEFF Research Database (Denmark)

    Doerr, Benjamin; Sudholt, Dirk; Witt, Carsten

    2013-01-01

    ...is that evolutionary algorithms make progress on all subfunctions in parallel, so that optimizing a separable function does not take much longer than optimizing the hardest subfunction: subfunctions are optimized "in parallel." We show that this is only partially true, already for the simple (1+1) evolutionary algorithm ((1+1) EA). For separable functions composed of k Boolean functions the optimization time is indeed the maximum optimization time of these functions times a small O(log k) overhead. More generally, for sums of weighted subfunctions that each attain non-negative integer values less than r = o(log1...

  12. Optimising a shaft's geometry by applying genetic algorithms

    Directory of Open Access Journals (Sweden)

    María Alejandra Guzmán

    2005-05-01

    Full Text Available Many engineering design tasks involve optimising several conflicting goals; these types of problem are known as Multiobjective Optimisation Problems (MOPs). Evolutionary techniques have proved to be an effective tool for finding solutions to these MOPs during the last decade, and variations on the basic genetic algorithm have been proposed by different researchers for rapidly finding optimal solutions to MOPs. The NSGA (Non-dominated Sorting Genetic Algorithm) has been implemented in this paper for finding an optimal design for a shaft subjected to cyclic loads, the conflicting goals being minimum weight and minimum lateral deflection.
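
    At the heart of NSGA is non-dominated sorting, sketched below for the two shaft objectives named above (weight and lateral deflection, both minimised); the design values are invented.

        def dominates(a, b):
            """a dominates b: no worse in every objective, better in at least one."""
            return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

        def nondominated_fronts(objs):
            remaining = list(range(len(objs)))
            fronts = []
            while remaining:
                front = [i for i in remaining
                         if not any(dominates(objs[j], objs[i]) for j in remaining)]
                fronts.append(front)
                remaining = [i for i in remaining if i not in front]
            return fronts

        # (weight [kg], lateral deflection [mm]) for five candidate shafts
        designs = [(12.0, 0.8), (10.5, 1.1), (14.0, 0.6), (11.0, 1.0), (13.0, 0.9)]
        print(nondominated_fronts(designs))   # first sublist is the Pareto front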

  13. [Motivation for weight loss among weight loss treatment participants].

    Science.gov (United States)

    Czeglédi, Edit

    2017-12-01

    Unrealistic expectations about weight goals and about weight loss-related benefits can hinder the effort for successful long-term weight control. To explore weight loss-related goals and their background among overweight/obese patients. The study sample consisted of patients who participated in the inpatient weight loss treatment in the Lipidological Department of Szent Imre Hospital (n = 339, 19% men). Mean age: 50.2 years (SD = 13.47 years), mean BMI: 38.6 (SD = 7.58). Measures included self-reported anthropometric data, the type and number of treated illnesses, the Goals and Relative Weights Questionnaire, the Motivations for Weight Loss Scale, and the Body Shape Questionnaire. Participants would feel disappointed with a possible 10% weight loss over a half-year time span. The acceptable weight loss percentage was higher among women, younger participants, and those who had more excess weight. Motivation regarding the increase in social desirability by weight loss is associated with body dissatisfaction; health-related motivation is associated with the number of treated illnesses. Our results contribute to the understanding of motivational factors behind weight reduction efforts; considering these can improve treatment success rates. Orv Hetil. 2017; 158(49): 1960-1967.

  14. Prevalence of metabolic syndrome using weight and weight indices ...

    African Journals Online (AJOL)

    Background: Notions about the metabolic syndrome (MS) emphasized the importance of obesity. This may prevent the early diagnosis of the condition in normal weight individuals. Aim: To determine variations in prevalence of MS according to different weight and weight indices. Materials and Methods: 342 apparently ...

  15. Phylogenetic inference with weighted codon evolutionary distances.

    Science.gov (United States)

    Criscuolo, Alexis; Michel, Christian J

    2009-04-01

    We develop a new approach for estimating a matrix of pairwise evolutionary distances from a codon-based alignment under a codon evolutionary model. The method first computes a standard distance matrix for each of the three codon positions. These three distance matrices are then weighted according to an estimate of the global evolutionary rate of each codon position and averaged into a single distance matrix. Using a large set of both real and simulated codon-based alignments of nucleotide sequences, we show that this approach leads to distance matrices with significantly better treelikeness than those obtained from standard nucleotide evolutionary distances. We also propose an alternative weighting to eliminate the part of the noise often associated with some codon positions, particularly the third position, which is known to have a fast evolutionary rate. Simulation results show that fast distance-based tree reconstruction algorithms applied to distance matrices based on this codon position weighting can lead to phylogenetic trees that are at least as accurate as, if not better than, those inferred by maximum likelihood. Finally, a well-known multigene dataset composed of eight yeast species and 106 codon-based alignments is reanalyzed, showing that our codon evolutionary distances allow building a phylogenetic tree similar to those obtained by non-distance-based methods (e.g., maximum parsimony and maximum likelihood) and significantly improved over standard nucleotide evolutionary distance estimates.
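
    A sketch of the weighting step, assuming the three per-position distance matrices are already computed; the weights shown stand in for the rate estimates described above and are invented for illustration.

      import numpy as np

      def weighted_codon_distance(d1, d2, d3, w=(0.4, 0.4, 0.2)):
          """Weighted average of the three codon-position distance matrices;
          down-weighting the fast third position damps its noise."""
          w = np.asarray(w, dtype=float)
          w = w / w.sum()  # normalise so the weights sum to one
          return w[0] * d1 + w[1] * d2 + w[2] * d3

      # d1, d2, d3 would each be an (n_taxa x n_taxa) matrix produced by a
      # standard per-position distance estimator.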

  16. Simultaneous optimization of beam orientations and beam weights in conformal radiotherapy

    International Nuclear Information System (INIS)

    Rowbottom, Carl Graham; Khoo, Vincent S.; Webb, Steve

    2001-01-01

    A methodology for the concurrent optimization of beam orientations and beam weights in conformal radiotherapy treatment planning has been developed and tested on a cohort of five patients. The algorithm is based on a beam-weight optimization scheme with a downhill simplex optimization engine. The use of random voxels in the dose calculation provides much of the required speed-up in the optimization process and allows the simultaneous optimization of beam orientations and beam weights in a reasonable time. In the implementation of the beam-weight optimization algorithm, just 10% of the original patient voxels are used for the dose calculation and cost-function evaluation. A fast simulated annealing algorithm controls the optimization of the beam arrangement. The optimization algorithm was able to produce clinically acceptable plans for the five patients in the cohort study. The algorithm equalized the dose to the optic nerves compared to the standard plans and reduced the mean dose to the brain stem by an average of 4.4% (±1.9, 1 SD), p value = 0.007. The dose distribution to the PTV was not compromised by developing beam arrangements via the optimization algorithm. In conclusion, the simultaneous optimization of beam orientations and beam weights has been developed so that it can be used routinely in a realistic time. The results of this small cohort study show that the optimization can reliably produce clinically acceptable dose distributions and may improve dose distributions compared to those from a human planner.
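
    A hedged sketch of two of the ingredients described above: a downhill simplex (Nelder-Mead) search over beam weights, with the cost evaluated on a random 10% voxel subsample; the dose model and penalty below are toy assumptions, not the clinical system.

      import numpy as np
      from scipy.optimize import minimize

      rng = np.random.default_rng(0)
      n_voxels, n_beams = 5000, 5
      dose_per_beam = rng.random((n_voxels, n_beams))  # stand-in dose kernels
      target = np.ones(n_voxels)                       # prescribed voxel dose
      sample = rng.choice(n_voxels, n_voxels // 10, replace=False)  # 10% voxels

      def cost(weights):
          """Quadratic dose penalty evaluated on the sampled voxels only."""
          dose = dose_per_beam[sample] @ np.abs(weights)
          return np.mean((dose - target[sample]) ** 2)

      result = minimize(cost, x0=np.full(n_beams, 0.5), method="Nelder-Mead")
      print(np.abs(result.x), result.fun)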

  17. Effects of Random Values for Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Hou-Ping Dai

    2018-02-01

    Full Text Available Particle swarm optimization (PSO) is generally improved by adaptively adjusting the inertia weight or combining with other evolution algorithms. However, in most modified PSO algorithms, the random values are always generated by a uniform distribution in the range of [0, 1]. In this study, random values generated by uniform distributions in the ranges [0, 1] and [−1, 1], and by a Gaussian distribution with mean 0 and variance 1 (U[0, 1], U[−1, 1], and G(0, 1)), are respectively used in the standard PSO and linear decreasing inertia weight (LDIW) PSO algorithms. For comparison, the deterministic PSO algorithm, in which the random values are set as 0.5, is also investigated in this study. Some benchmark functions and the pressure vessel design problem are selected to test these algorithms with different types of random values in three search-space dimensions (10, 30, and 100). The experimental results show that the standard PSO and LDIW-PSO algorithms with random values generated by U[−1, 1] or G(0, 1) are more likely to avoid falling into local optima and to quickly obtain the global optima. This is because the large-scale random values can expand the range of particle velocity, making the particle more likely to escape from local optima and obtain the global optima. Although the random values generated by U[−1, 1] or G(0, 1) are beneficial to the global searching ability, the local searching ability for a low-dimensional practical optimization problem may be decreased due to the finite particles.
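
    A sketch of one LDIW-PSO update with the random source made pluggable, so the three distributions studied above can be swapped in; the swarm constants below are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(1)
      draw = {
          "U[0,1]":  lambda size: rng.uniform(0.0, 1.0, size),
          "U[-1,1]": lambda size: rng.uniform(-1.0, 1.0, size),
          "G(0,1)":  lambda size: rng.normal(0.0, 1.0, size),
      }

      def pso_step(x, v, pbest, gbest, w, c1=2.0, c2=2.0, kind="U[0,1]"):
          """One PSO velocity/position update with the chosen random source."""
          r1, r2 = draw[kind](x.shape), draw[kind](x.shape)
          v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
          return x + v, v

      # Linearly decreasing inertia weight, e.g. from 0.9 down to 0.4.
      inertia = lambda t, t_max: 0.9 - (0.9 - 0.4) * t / t_max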

  18. Weight suppression predicts total weight gain and rate of weight gain in outpatients with anorexia nervosa.

    Science.gov (United States)

    Carter, Frances A; Boden, Joseph M; Jordan, Jennifer; McIntosh, Virginia V W; Bulik, Cynthia M; Joyce, Peter R

    2015-11-01

    The present study sought to replicate the finding of Wildes and Marcus, Behav Res Ther, 50, 266-274, 2012 that higher levels of weight suppression at pretreatment predict greater total weight gain, faster rate of weight gain, and bulimic symptoms amongst patients admitted with anorexia nervosa. Participants were 56 women with anorexia nervosa diagnosed by using strict or lenient weight criteria, who were participating in a randomized controlled psychotherapy trial (McIntosh et al., Am J Psychiatry, 162, 741-747, 2005). Thirty-five women completed outpatient treatment and post-treatment assessment. Weight suppression was the discrepancy between highest lifetime weight at adult height and weight at pretreatment assessment. Outcome variables were total weight gain, rate of weight gain, and bulimic symptoms in the month prior to post-treatment assessment [assessed using the Eating Disorders Examination (Fairburn et al., Binge-Eating: Nature, Assessment and Treatment. New York: Guilford, 1993)]. Weight suppression was positively associated with total weight gain and rate of weight gain over treatment. Regression models showed that this association could not be explained by covariates (age at onset of anorexia nervosa and treatment modality). Weight suppression was not significantly associated with bulimic symptoms in the month prior to post-treatment assessment, regardless of whether bulimic symptoms were examined as continuous or dichotomous variables. The present study reinforces the previous finding that weight suppression predicts total weight gain and rate of weight gain amongst patients being treated for anorexia nervosa. Methodological issues may explain the failure of the present study to find that weight suppression predicts bulimic symptoms. Weight suppression at pretreatment for anorexia nervosa should be assessed routinely and may inform treatment planning. © 2015 Wiley Periodicals, Inc.

  19. A FIRST APPROXIMATION CALCULATION OF AIR CUSHION CHASSIS WEIGHT OF TRANSPORT AIRPLANE

    Directory of Open Access Journals (Sweden)

    2016-01-01

    Full Text Available This article describes a first-approximation weight estimate of an air cushion chassis. The algorithm for calculating the weight of an air cushion chassis makes it possible not only to estimate the mass of the chassis to a first approximation, but also to conduct a preliminary analysis of the influence of various parameters of the aircraft and the chassis on the weight of the aircraft at the preliminary design stage. The algorithm can be expanded to include additional design decisions, such as transformation of the fuselage, enlargement of the air cushion chassis canopy by means of extensions, shifts in the center of gravity, etc.
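
    A purely hypothetical sketch of how such a first-approximation model supports preliminary parameter studies; the mass formula and every coefficient below are invented for illustration and do not come from the article.

      def chassis_mass(takeoff_mass, cushion_area, skirt_height,
                       k1=0.02, k2=14.0):
          """Toy first-approximation mass of an air cushion chassis, in kg.
          k1, k2 are invented coefficients, not values from the article."""
          return k1 * takeoff_mass + k2 * cushion_area * skirt_height

      # Preliminary analysis: vary the cushion area, watch the chassis mass.
      for area in (20.0, 25.0, 30.0):  # cushion area, m^2
          print(area, chassis_mass(takeoff_mass=30_000.0,  # kg
                                   cushion_area=area, skirt_height=1.2))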

  20. RESOLVE: A new algorithm for aperture synthesis imaging of extended emission in radio astronomy

    Science.gov (United States)

    Junklewitz, H.; Bell, M. R.; Selig, M.; Enßlin, T. A.

    2016-02-01

    We present resolve, a new algorithm for radio aperture synthesis imaging of extended and diffuse emission in total intensity. The algorithm is derived using Bayesian statistical inference techniques, estimating the surface brightness in the sky assuming a priori log-normal statistics. resolve estimates the measured sky brightness in total intensity, and the spatial correlation structure in the sky, which is used to guide the algorithm to an optimal reconstruction of extended and diffuse sources. During this process, the algorithm succeeds in deconvolving the effects of the radio interferometric point spread function. Additionally, resolve provides a map with an uncertainty estimate of the reconstructed surface brightness. Furthermore, with resolve we introduce a new, optimal visibility weighting scheme that can be viewed as an extension to robust weighting. In tests using simulated observations, the algorithm shows improved performance against two standard imaging approaches for extended sources, Multiscale-CLEAN and the Maximum Entropy Method.
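
    A toy sketch of the central modelling idea only: writing the brightness as exp(s) keeps it positive (a log-normal-style prior), and s is fitted by a MAP-style gradient descent. The 1-D blurring response, the unit Gaussian prior on s, and all settings below are toy assumptions, not the actual resolve algorithm.

      import numpy as np

      rng = np.random.default_rng(2)
      n = 64
      true_s = np.zeros(n)
      true_s[24:40] = 1.5                      # extended source in log-space
      kernel = np.exp(-0.5 * (np.arange(-5, 6) / 2.0) ** 2)
      kernel /= kernel.sum()
      blur = lambda f: np.convolve(f, kernel, mode="same")  # stand-in response
      data = blur(np.exp(true_s)) + 0.05 * rng.normal(size=n)

      s = np.zeros(n)                          # log-brightness to be estimated
      for _ in range(2000):
          resid = blur(np.exp(s)) - data
          # gradient of the Gaussian likelihood plus a unit Gaussian prior on s
          # (the symmetric blur kernel is its own adjoint, up to the edges)
          grad = np.exp(s) * blur(resid) / 0.05 ** 2 + s
          s -= 1e-4 * grad
      print(np.exp(s).max())                   # reconstructed peak brightness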