A child with severe head banging.
Granana, N; Tuchman, R F
1999-09-01
We present a 7-year-old boy with a developmental disorder presenting with severe head banging. The clinical evolution was consistent with diagnoses of autistic spectrum disorder, obsessive-compulsive disorder, stuttering, and Tourette's syndrome. This report emphasizes the overlap between developmental disorder phenotypes. There is a need to understand the natural history and the relationship of specific symptoms that occur in developmental disorders in order to devise effective and appropriate intervention strategies.
Head banging persisting during adolescence: A case with polysomnographic findings
Directory of Open Access Journals (Sweden)
Ravi Gupta
2014-01-01
Full Text Available Head banging is a sleep-related rhythmic movement disorder of unknown etiology. It is common during infancy; however, the available literature suggests that its prevalence decreases dramatically after childhood. We report the case of a 16-year-old male who presented with head banging. The symptoms were interfering with his functioning, and he had injured himself because of them in the past. We present the video-polysomnographic data of the case and discuss the possible differential diagnoses, etiology, and treatment modalities. The boy was prescribed clonazepam and followed up for 3 months; his parents did not report any further episodes.
An Improved Weighted Clustering Algorithm in MANET
Institute of Scientific and Technical Information of China (English)
WANG Jin; XU Li; ZHENG Bao-yu
2004-01-01
The original clustering algorithms in Mobile Ad hoc Networks (MANETs) are first analyzed in this paper. Based on this analysis, an Improved Weighted Clustering Algorithm (IWCA) is proposed. The principle and steps of the algorithm are then explained in detail, and a comparison is made between the original algorithms and the improved method in terms of average cluster number, topology stability, cluster-head load balance, and network lifetime. The experimental results show that the improved algorithm has the best performance on average.
An efficient algorithm for weighted PCA
Krijnen, W.P.; Kiers, H.A.L.
1995-01-01
The method for analyzing three-way data in which one of the three component matrices in TUCKALS3 is chosen to have one column is called Replicated PCA. The corresponding algorithm is relatively inefficient, which we show by offering an alternative algorithm called Weighted PCA. Specifically, it is prov…
Kalman plus weights: a time scale algorithm
Greenhall, C. A.
2001-01-01
KPW is a time scale algorithm that combines Kalman filtering with the basic time scale equation (BTSE). A single Kalman filter that estimates all clocks simultaneously is used to generate the BTSE frequency estimates, while the BTSE weights are inversely proportional to the white FM variances of the clocks. Results from simulated clock ensembles are compared to previous simulation results from other algorithms.
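The BTSE weighting step described above can be sketched in a few lines: weights inversely proportional to the white-FM variances, normalized to sum to one, and a weighted average of per-clock estimates. The clock variances and estimates below are hypothetical illustration values, not data from the paper.

```python
# Sketch of the basic time scale equation (BTSE) weighting: ensemble time
# is a weighted average of per-clock estimates, with weights inversely
# proportional to each clock's white-FM variance (illustrative values).

def btse_weights(white_fm_variances):
    """Weights inversely proportional to white FM variance, normalized to 1."""
    inv = [1.0 / v for v in white_fm_variances]
    total = sum(inv)
    return [x / total for x in inv]

def ensemble_estimate(clock_estimates, weights):
    """Weighted average of the per-clock time estimates."""
    return sum(w * x for w, x in zip(weights, clock_estimates))

variances = [1e-2, 4e-2, 2e-2]           # hypothetical white FM variances
w = btse_weights(variances)
print(w)                                  # the quieter clocks get larger weight
print(ensemble_estimate([1.0, 1.2, 0.9], w))
```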
Information filtering via weighted heat conduction algorithm
Liu, Jian-Guo; Guo, Qiang; Zhang, Yi-Cheng
2011-06-01
In this paper, by taking into account the effects of user and object correlations on the heat conduction (HC) algorithm, a weighted heat conduction (WHC) algorithm is presented. We argue that the edge weight of the user-object bipartite network should be embedded into the HC algorithm to measure object similarity. The numerical results indicate that both accuracy and diversity can be greatly improved compared with the standard HC algorithm, with the optimal values reached simultaneously. On the MovieLens and Netflix datasets, the algorithmic accuracy, measured by the average ranking score, can be improved by 39.7% and 56.1% in the optimal case, respectively, and the diversity can reach 0.9587 and 0.9317 when the recommendation list length equals 5. Further statistical analysis indicates that, in the optimal case, the distributions of the edge weight change to the Poisson form, which may be the reason why the HC algorithm's performance can be improved. This work highlights the effect of edge weight on personalized recommendation, which may be an important factor affecting personalized recommendation performance.
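A minimal sketch of one heat-conduction pass with edge weights folded into the averaging steps may clarify the WHC idea. The toy bipartite network and the exact weighting form are illustrative assumptions, not the paper's code.

```python
# One weighted heat-conduction (WHC) pass on a user-object bipartite
# network: objects the target user collected start at "temperature" 1,
# heat flows objects -> users -> objects via weighted averages.

def whc_scores(edges, target_user):
    """edges: {(user, obj): weight}. Returns object scores for target_user."""
    users = {u for u, _ in edges}
    objs = {o for _, o in edges}
    # initial temperature: 1 on objects the target user has collected
    f = {o: 1.0 if (target_user, o) in edges else 0.0 for o in objs}
    # objects -> users: weighted average over each user's collected objects
    g = {}
    for u in users:
        nbrs = [(o, w) for (uu, o), w in edges.items() if uu == u]
        g[u] = sum(w * f[o] for o, w in nbrs) / sum(w for _, w in nbrs)
    # users -> objects: weighted average over each object's users
    h = {}
    for o in objs:
        nbrs = [(u, w) for (u, oo), w in edges.items() if oo == o]
        h[o] = sum(w * g[u] for u, w in nbrs) / sum(w for _, w in nbrs)
    return h

edges = {("u1", "a"): 2.0, ("u1", "b"): 1.0,
         ("u2", "b"): 1.0, ("u2", "c"): 3.0}
print(whc_scores(edges, "u1"))   # "c" receives heat via u2's overlap on "b"
```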
Greedy algorithm with weights for decision tree construction
Moshkov, Mikhail
2010-12-01
An approximate algorithm for minimization of the weighted depth of decision trees is considered. A bound on the accuracy of this algorithm is obtained which is unimprovable in the general case. Under some natural assumptions on the class NP, the considered algorithm is close, from the point of view of accuracy, to the best polynomial approximate algorithms for minimization of the weighted depth of decision trees.
Experimental Realization of Braunstein's Weight-Decision Algorithm
Institute of Scientific and Technical Information of China (English)
HOU Shi-Yao; CUI Jing-Xin; LI Jun-Lin
2011-01-01
Braunstein proposed an algorithm to distinguish Boolean functions of two different weights. Here we implement the algorithm in a two-qubit nuclear magnetic resonance quantum information processor. The experiment shows that the algorithm can distinguish Boolean functions of two different weights efficiently.
ADAPTIVE FUSION ALGORITHMS BASED ON WEIGHTED LEAST SQUARE METHOD
Institute of Scientific and Technical Information of China (English)
SONG Kaichen; NIE Xili
2006-01-01
Weighted fusion algorithms, applicable to multi-sensor data fusion, are developed based on the weighted least squares method. A weighted fusion algorithm that establishes the relationship between weight coefficients and measurement noise is proposed, giving attention to the correlation of the measurement noise. A simplified weighted fusion algorithm is then deduced on the assumption that the measurement noise is uncorrelated. In addition, an algorithm that adjusts the weight coefficients in the simplified algorithm by estimating the measurement noise from the measurements is presented. Simulation and experiment show that the precision of a multi-sensor system based on these algorithms is better than that of a multi-sensor system based on other algorithms.
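Under the uncorrelated-noise assumption mentioned above, weighted least squares fusion of scalar sensor readings reduces to inverse-variance weighting. The sketch below illustrates that special case; the sensor values and noise variances are made-up numbers, not the paper's data.

```python
# Inverse-variance fusion: the uncorrelated-noise special case of
# weighted-least-squares multi-sensor fusion (illustrative values).

def fuse(measurements, noise_variances):
    """Fused estimate and fused variance under uncorrelated noise."""
    inv = [1.0 / v for v in noise_variances]
    total = sum(inv)
    weights = [x / total for x in inv]
    estimate = sum(w * m for w, m in zip(weights, measurements))
    fused_var = 1.0 / total   # never worse than the best single sensor
    return estimate, fused_var

est, var = fuse([10.2, 9.8, 10.1], [0.04, 0.04, 0.02])
print(est, var)               # the low-noise sensor dominates the estimate
```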
A Variable Weighted Least-Connection Algorithm for Multimedia Transmission
Institute of Scientific and Technical Information of China (English)
杨立辉; 余胜生
2003-01-01
Under high loads, a multimedia cluster server can serve many hundreds of connections concurrently, with a load balancer distributing each incoming connection request to a node according to a preset algorithm. Among existing scheduling algorithms, round-robin and least-connection do not take into account the difference in service capability between nodes, and improved algorithms such as weighted round-robin and weighted least-connection do not consider the fact that the ratio of the number of TCP connections to a fixed weight does not reflect the real load of a node. In this paper we describe our attempts at improving these scheduling algorithms and propose a variable weighted least-connection algorithm, which assigns a variable weight, instead of a fixed weight, to each node according to its real-time resources. A validating trial has been performed, and the results show that the proposed algorithm achieves effective load balancing in a single central control node scenario.
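The scheduling rule described above can be sketched as picking the node that minimizes active connections divided by a weight refreshed from real-time resources. The node names, resource figures, and the particular weight formula are hypothetical illustrations, not the paper's implementation.

```python
# Variable weighted least-connection sketch: each node's weight is
# recomputed from its current free resources, and the balancer picks the
# node minimizing connections / weight (all numbers illustrative).

def refresh_weight(free_cpu, free_mem):
    """Variable weight from real-time resources (an assumed simple form)."""
    return 0.5 * free_cpu + 0.5 * free_mem

def pick_node(nodes):
    """nodes: {name: (active_connections, free_cpu, free_mem)}."""
    def cost(item):
        name, (conns, cpu, mem) = item
        return conns / refresh_weight(cpu, mem)
    return min(nodes.items(), key=cost)[0]

nodes = {"n1": (30, 0.2, 0.4), "n2": (30, 0.8, 0.6), "n3": (10, 0.1, 0.1)}
print(pick_node(nodes))   # the node with spare capacity wins the request
```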
Grover quantum searching algorithm based on weighted targets
Institute of Scientific and Technical Information of China (English)
Li Panchi; Li Shiyong
2008-01-01
The current Grover quantum searching algorithm cannot identify differences in the importance of the search targets when applied to an unsorted quantum database, and the probability of finding each search target is equal. To solve this problem, a Grover searching algorithm based on weighted targets is proposed. First, each target is assigned a weight coefficient according to its importance. Using these weight coefficients, the targets are represented as a quantum superposition state. Second, a novel Grover searching algorithm based on the quantum superposition of the weighted targets is constructed. Using this algorithm, the probability of obtaining each target approximates its weight coefficient, which shows the flexibility of the algorithm. Finally, the validity of the algorithm is proved by a simple searching example.
A research on fast FCM algorithm based on weighted sample
Institute of Scientific and Technical Information of China (English)
KUANG Ping; ZHU Qing-xin; WANG Ming-wen; CHEN Xu-dong; QING Li
2006-01-01
To improve the computational performance of the fuzzy C-means (FCM) algorithm when clustering large datasets, the concepts of equivalent samples and weighted samples, based on the eigenvalue distribution of the samples in the feature space, are introduced, and a novel fast clustering algorithm named weighted fuzzy C-means (WFCM), derived from the traditional FCM algorithm, is put forward. It is proved that the cluster results of the two algorithms, WFCM and FCM, are equivalent on the same dataset. Furthermore, the WFCM algorithm has better computational performance than the ordinary FCM algorithm. A gray-image segmentation experiment shows that WFCM is a fast and effective clustering algorithm.
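A minimal sketch of the weighted center update may help: each sample carries a weight (e.g. how many equivalent samples it stands for), and cluster centers become weight-and-membership-weighted means. The 1-D data, weights, and fuzzifier value are illustrative, not the paper's settings.

```python
# Weighted fuzzy C-means (WFCM) sketch for 1-D data. Memberships are the
# standard FCM formula; the center update folds in per-sample weights:
#   v_i = sum_j w_j * u_ij^m * x_j / sum_j w_j * u_ij^m

m = 2.0  # fuzzifier (illustrative)

def memberships(x, centers):
    """Standard FCM membership of sample x in each cluster."""
    d = [abs(x - c) or 1e-12 for c in centers]
    return [1.0 / sum((d[i] / d[k]) ** (2 / (m - 1)) for k in range(len(centers)))
            for i in range(len(centers))]

def update_centers(xs, ws, centers):
    """Weighted center update."""
    u = [memberships(x, centers) for x in xs]
    new = []
    for i in range(len(centers)):
        num = sum(ws[j] * u[j][i] ** m * xs[j] for j in range(len(xs)))
        den = sum(ws[j] * u[j][i] ** m for j in range(len(xs)))
        new.append(num / den)
    return new

xs = [0.0, 0.1, 0.9, 1.0]
ws = [5.0, 1.0, 1.0, 5.0]        # heavy samples stand for many originals
centers = [0.2, 0.8]
for _ in range(20):
    centers = update_centers(xs, ws, centers)
print(centers)                    # centers pulled toward the heavy samples
```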
Exact quantum algorithm to distinguish Boolean functions of different weights
Energy Technology Data Exchange (ETDEWEB)
Braunstein, Samuel L [Computer Science, University of York, York YO10 5DD (United Kingdom); Choi, Byung-Soo [Computer Science, University of York, York YO10 5DD (United Kingdom); Ghosh, Subhroshekhar [Indian Statistical Institute, Kolkata 700 108 (India); Maitra, Subhamoy [Applied Statistics Unit, Indian Statistical Institute, Kolkata 700 108 (India)
2007-07-20
In this work, we exploit the Grover operator for the weight analysis of a Boolean function, specifically to solve the weight-decision problem. The weight w is the fraction of all possible inputs for which the output is 1. The goal of the weight-decision problem is to find the exact weight w from the given two weights w1 and w2 satisfying a general weight condition w1 + w2 = 1 and 0 < w1 < w2 < 1. First, we propose a limited weight-decision algorithm where the function has another constraint: the weight is in {w1 = sin^2((k/(2k+1)) π/2), w2 = cos^2((k/(2k+1)) π/2)} for integer k. Second, by changing the phases in the last two Grover iterations, we propose a general weight-decision algorithm which is free of the above constraint. Finally, we show that when our algorithm requires O(k) queries to find w with unit success probability, any classical algorithm requires at least Ω(k^2) queries for unit success probability. In addition, we show that our algorithm requires fewer queries to solve this problem than the quantum counting algorithm.
Multifractal analysis of weighted networks by a modified sandbox algorithm
Song, Yu-Qin; Liu, Jin-Long; Yu, Zu-Guo; Li, Bao-Gen
2015-12-01
Complex networks have attracted growing attention in many fields. As a generalization of fractal analysis, multifractal analysis (MFA) is a useful way to systematically describe the spatial heterogeneity of both theoretical and experimental fractal patterns. Some algorithms for MFA of unweighted complex networks have been proposed in the past a few years, including the sandbox (SB) algorithm recently employed by our group. In this paper, a modified SB algorithm (we call it SBw algorithm) is proposed for MFA of weighted networks. First, we use the SBw algorithm to study the multifractal property of two families of weighted fractal networks (WFNs): “Sierpinski” WFNs and “Cantor dust” WFNs. We also discuss how the fractal dimension and generalized fractal dimensions change with the edge-weights of the WFN. From the comparison between the theoretical and numerical fractal dimensions of these networks, we can find that the proposed SBw algorithm is efficient and feasible for MFA of weighted networks. Then, we apply the SBw algorithm to study multifractal properties of some real weighted networks — collaboration networks. It is found that the multifractality exists in these weighted networks, and is affected by their edge-weights.
A New Algorithm for the Weighted Reliability of Networks
Institute of Scientific and Technical Information of China (English)
[No author listed]
2000-01-01
The weighted reliability of a network is defined as the sum over network states of the state probability multiplied by a normalized weighting factor. For a given state, when the capacity from source s to sink t is larger than the given required capacity Cr, the normalized weighting factor is 1; otherwise, it is the ratio of the capacity to the required capacity Cr. This paper proposes a new algorithm for the weighted reliability of networks, puts forward the concept of the saturated state of capacity, and suggests a recursive formula for expanding the minimal paths into a sum of qualifying subsets. In the new algorithm, the expansion of the minimal paths does not create irrelevant qualifying subsets, which avoids unnecessary expansion computations. Compared with current algorithms, this algorithm requires less computation in a computer implementation.
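The definition quoted above can be illustrated by brute-force state enumeration on a tiny network. Restricting to parallel s-t links keeps the capacity of a state a simple sum, avoiding a general max-flow routine; the link probabilities and capacities are hypothetical.

```python
# Weighted reliability by state enumeration: sum over all link states of
# state probability times min(1, capacity / Cr), per the definition above.
import itertools

def weighted_reliability(links, Cr):
    """links: list of (up_probability, capacity) for parallel s-t links."""
    total = 0.0
    for state in itertools.product([0, 1], repeat=len(links)):
        p = 1.0
        cap = 0.0
        for up, (prob, c) in zip(state, links):
            p *= prob if up else (1.0 - prob)
            cap += c if up else 0.0
        total += p * min(1.0, cap / Cr)
    return total

links = [(0.9, 5.0), (0.8, 5.0)]   # hypothetical two-parallel-link network
print(weighted_reliability(links, Cr=8.0))
```

The exact algorithm in the paper expands minimal paths into qualifying subsets precisely to avoid this exponential enumeration; the sketch only demonstrates the quantity being computed.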
Weighted deductive parsing and Knuth's algorithm - Squibs and discussions
Nederhof, MJ
2003-01-01
We discuss weighted deductive parsing and consider the problem of finding the derivation with the lowest weight. We show that Knuth's generalization of Dijkstra's algorithm for the shortest-path problem offers a general method to solve this problem. Our approach is modular in the sense that Knuth's…
An Improved Three-Weight Message-Passing Algorithm
Derbinsky, Nate; Elser, Veit; Yedidia, Jonathan S
2013-01-01
We describe how the powerful "Divide and Concur" algorithm for constraint satisfaction can be derived as a special case of a message-passing version of the Alternating Direction Method of Multipliers (ADMM) algorithm for convex optimization, and introduce an improved message-passing algorithm based on ADMM/DC by introducing three distinct weights for messages, with "certain" and "no opinion" weights, as well as the standard weight used in ADMM/DC. The "certain" messages allow our improved algorithm to implement constraint propagation as a special case, while the "no opinion" messages speed convergence for some problems by making the algorithm focus only on active constraints. We describe how our three-weight version of ADMM/DC can give greatly improved performance for non-convex problems such as circle packing and solving large Sudoku puzzles, while retaining the exact performance of ADMM for convex problems. We also describe the advantages of our algorithm compared to other message-passing algorithms based u...
Minimum Weight Cycles and Triangles: Equivalences and Algorithms
Roditty, Liam
2011-01-01
We consider the fundamental algorithmic problem of finding a cycle of minimum weight in a weighted graph. In particular, we show that the minimum weight cycle problem in an undirected n-node graph with edge weights in {1,...,M} or in a directed n-node graph with edge weights in {-M,...,M} and no negative cycles can be efficiently reduced to finding a minimum weight triangle in a Θ(n)-node undirected graph with weights in {1,...,O(M)}. Roughly speaking, our reductions imply the following surprising phenomenon: a minimum cycle with an arbitrary number of weighted edges can be "encoded" using only three edges within roughly the same weight interval! This resolves a longstanding open problem posed by Itai and Rodeh [SIAM J. Computing 1978 and STOC'77]. A direct consequence of our efficient reductions are O(Mn^ω)-time algorithms using fast matrix multiplication (FMM) for finding a minimum weight cycle in both undirected graphs with integral weights from the interval [1,M] and directed graphs with inte...
Weighted K-Nearest Neighbor Classification Algorithm Based on Genetic Algorithm
Directory of Open Access Journals (Sweden)
Xuesong Yan
2013-10-01
Full Text Available K-Nearest Neighbor (KNN) is one of the most popular algorithms for data classification. Many researchers have found that the KNN algorithm achieves very good performance in experiments on different datasets. The traditional KNN text classification algorithm has limitations: computational complexity, performance that depends solely on the training set, and so on. To overcome these limitations, an improved version of KNN is proposed in this paper: we combine a genetic algorithm with weighted KNN to improve its classification performance. The experimental results show that our proposed algorithm outperforms plain KNN with greater accuracy.
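The weighted-KNN half of the method can be sketched as below: a genetic algorithm would search for good feature weights, while here the weights are fixed by hand just to show how they enter the distance. The training data, query, and weight values are toy illustrations.

```python
# Weighted k-NN sketch: feature weights scale each dimension's
# contribution to the distance; a GA would tune these weights.

def weighted_knn(train, query, feat_weights, k=3):
    """train: list of (features, label). Weighted-Euclidean k-NN vote."""
    def dist(a, b):
        return sum(w * (x - y) ** 2 for w, x, y in zip(feat_weights, a, b)) ** 0.5
    nearest = sorted(train, key=lambda t: dist(t[0], query))[:k]
    labels = [lab for _, lab in nearest]
    return max(set(labels), key=labels.count)

train = [((0.0, 9.0), "a"), ((0.1, 0.0), "a"),
         ((1.0, 0.1), "b"), ((0.9, 8.0), "b")]
# Down-weighting the noisy second feature lets the first feature decide.
print(weighted_knn(train, (0.05, 5.0), (1.0, 0.0), k=3))
```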
Analysis of linear weighted order statistics CFAR algorithm
Institute of Scientific and Technical Information of China (English)
孟祥伟; 关键; 何友
2004-01-01
The CFAR technique is widely used in radar target detection. The traditional algorithm is cell averaging (CA), which gives good detection performance in a relatively ideal environment. Recently, censoring techniques have been adopted to make detectors perform robustly, and ordered statistic (OS) and trimmed mean (TM) methods have been proposed. TM methods treat the reference samples that participate in the clutter power estimate equally, but this processing does not yield effective estimates of the clutter power. Therefore, in this paper a quasi-best weighted (QBW) order statistics algorithm is presented. In special cases, QBW reduces to CA and to the censored mean level detector (CMLD).
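A sketch of the weighted-order-statistics idea may help: sort the reference cells, combine order statistics with weights to estimate clutter power, then threshold the cell under test. The weight vector and scale factor below are illustrative, not the paper's derived QBW values; with uniform weights 1/N the estimator reduces to cell averaging.

```python
# Weighted order-statistics CFAR sketch: the clutter estimate is a
# weighted sum of sorted reference cells (illustrative weights/scale).

def os_cfar_detect(reference_cells, cut, weights, scale):
    """weights[i] multiplies the i-th smallest reference cell; declare a
    detection if the cell under test exceeds scale * clutter estimate."""
    ranked = sorted(reference_cells)
    z = sum(w * x for w, x in zip(weights, ranked))
    return cut > scale * z

ref = [1.1, 0.9, 1.3, 5.0, 1.0, 1.2]     # one interfering sample (5.0)
w = [0, 0, 1.0, 0, 0, 0]                  # pure OS-CFAR: take 3rd smallest
print(os_cfar_detect(ref, cut=4.0, weights=w, scale=3.0))
```

Because the interfering 5.0 sample lands at the top of the sorted list, the order-statistic estimate ignores it, unlike plain cell averaging.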
A Multiple Classifier Fusion Algorithm Using Weighted Decision Templates
Directory of Open Access Journals (Sweden)
Aizhong Mi
2016-01-01
Full Text Available Fusing classifiers' decisions can improve the performance of a pattern recognition system, and many application areas have adopted multiple classifier fusion to increase classification accuracy. By fully considering the performance differences between classifiers and the training sample information, a multiple classifier fusion algorithm using weighted decision templates is proposed in this paper. The algorithm uses a statistical vector to measure each classifier's performance and applies a weighted transform to each classifier according to the reliability of its output. To make a decision, the information in the training samples around an input sample is used by the k-nearest-neighbor rule if the algorithm evaluates the sample as highly likely to be misclassified. An experimental comparison was performed on 15 data sets from the KDD'99, UCI, and ELENA databases. The experimental results indicate that the algorithm achieves better classification performance. The algorithm was then applied to cataract grading in the cataract ultrasonic phacoemulsification operation. The application result indicates that the proposed algorithm is effective and can meet the practical requirements of the operation.
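A minimal sketch of decision-template fusion with per-classifier weights: each class keeps a template (the mean decision profile of its training samples), and the class whose template is nearest in weighted distance wins. The toy profiles and weights are illustrative; the paper's statistical reliability vector and k-NN fallback are not reproduced here.

```python
# Weighted decision-template fusion sketch. A decision profile is a
# matrix (classifiers x classes) of soft outputs; templates are per-class
# mean profiles; classifier rows are scaled by reliability weights.

def make_templates(profiles_by_class):
    """Mean decision profile per class label."""
    templates = {}
    for label, profiles in profiles_by_class.items():
        n = len(profiles)
        rows, cols = len(profiles[0]), len(profiles[0][0])
        templates[label] = [[sum(p[i][j] for p in profiles) / n
                             for j in range(cols)] for i in range(rows)]
    return templates

def classify(profile, templates, clf_weights):
    """Pick the class whose template is nearest in weighted squared error."""
    def wdist(t):
        return sum(clf_weights[i] * (profile[i][j] - t[i][j]) ** 2
                   for i in range(len(profile)) for j in range(len(profile[0])))
    return min(templates, key=lambda lab: wdist(templates[lab]))

# two classifiers, two classes; each training item is one decision profile
train = {"A": [[[0.9, 0.1], [0.8, 0.2]], [[0.8, 0.2], [0.6, 0.4]]],
         "B": [[[0.2, 0.8], [0.3, 0.7]], [[0.1, 0.9], [0.3, 0.7]]]}
templates = make_templates(train)
print(classify([[0.7, 0.3], [0.5, 0.5]], templates, clf_weights=[1.0, 0.5]))
```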
A novel dynamical community detection algorithm based on weighting scheme
Li, Ju; Yu, Kai; Hu, Ke
2015-12-01
Network dynamics plays an important role in analyzing the correlation between the function properties and the topological structure. In this paper, we propose a novel dynamical iteration (DI) algorithm, which incorporates the iterative process of membership vector with weighting scheme, i.e. weighting W and tightness T. These new elements can be used to adjust the link strength and the node compactness for improving the speed and accuracy of community structure detection. To estimate the optimal stop time of iteration, we utilize a new stability measure which is defined as the Markov random walk auto-covariance. We do not need to specify the number of communities in advance. It naturally supports the overlapping communities by associating each node with a membership vector describing the node's involvement in each community. Theoretical analysis and experiments show that the algorithm can uncover communities effectively and efficiently.
A Weight-Aware Recommendation Algorithm for Mobile Multimedia Systems
Directory of Open Access Journals (Sweden)
Pedro M. P. Rosa
2013-01-01
Full Text Available In recent years, information flooding has become a common reality, and the general user, hit by thousands of possibly interesting items, has great difficulty identifying the best ones to guide his or her daily choices, such as concerts, restaurants, sports gatherings, or cultural events. The current growth of mobile smartphones and tablets with embedded GPS receivers, Internet access, cameras, and accelerometers offers new opportunities for mobile ubiquitous multimedia applications that help gather the best information out of an ever-growing list of possibly good ones. This paper presents a mobile recommendation system for events, based on several weighted context-awareness data-fusion algorithms that combine multiple multimedia sources. A demonstrative deployment utilized relevance sources such as location data, user habits, and user sharing statistics, and data-fusion algorithms such as the classical CombSUM and CombMNZ, in both simple and weighted forms. The developed methodology is generic and can be extended to other relevance sources, both direct (background noise volume) and indirect (local temperature extrapolated from GPS coordinates via a Web service), and to other data-fusion techniques. To experiment with, demonstrate, and evaluate the performance of the different algorithms, the proposed system was deployed in a working mobile application providing real-time awareness-based information on local events and news.
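The two classical fusion rules named above are simple to sketch. Scores per source are assumed already normalized to [0, 1]; the item names and scores are toy values, and a weighted variant would multiply each source by a relevance weight before summing.

```python
# CombSUM and CombMNZ data-fusion sketch over per-source score dicts.

def comb_sum(score_lists):
    """CombSUM: sum of an item's scores across all sources."""
    out = {}
    for scores in score_lists:
        for item, s in scores.items():
            out[item] = out.get(item, 0.0) + s
    return out

def comb_mnz(score_lists):
    """CombMNZ: CombSUM times the number of sources returning the item."""
    sums = comb_sum(score_lists)
    hits = {item: sum(1 for s in score_lists if item in s) for item in sums}
    return {item: sums[item] * hits[item] for item in sums}

location = {"concert": 0.9, "match": 0.4}
habits = {"concert": 0.6, "museum": 0.8}
print(comb_sum([location, habits]))   # concert: 0.9 + 0.6 = 1.5
print(comb_mnz([location, habits]))   # concert boosted: 1.5 * 2 = 3.0
```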
Speech recognition algorithms based on weighted finite-state transducers
Hori, Takaaki
2013-01-01
This book introduces the theory, algorithms, and implementation techniques for efficient decoding in speech recognition mainly focusing on the Weighted Finite-State Transducer (WFST) approach. The decoding process for speech recognition is viewed as a search problem whose goal is to find a sequence of words that best matches an input speech signal. Since this process becomes computationally more expensive as the system vocabulary size increases, research has long been devoted to reducing the computational cost. Recently, the WFST approach has become an important state-of-the-art speech recogni
A weight based genetic algorithm for selecting views
Talebian, Seyed H.; Kareem, Sameem A.
2013-03-01
A data warehouse is a technology designed to support decision making. It is built by extracting large amounts of data from different operational systems, transforming the data into a consistent form, and loading it into a central repository. The type of queries in a data warehouse environment differs from those in operational systems: analytical queries involve summarization of large volumes of data and therefore normally take a long time to answer, yet their results must be returned quickly to enable managers to make decisions as soon as possible. An essential need in this environment is therefore improving query performance. One of the most popular methods is utilizing pre-computed query results: whenever a new query is submitted, instead of computing it on the fly over a large underlying database, the pre-computed results, or views, are used to answer it. Although the ideal option would be pre-computing and saving all possible views, in practice disk-space constraints and the overhead of view updates make this infeasible. Therefore, a subset of the possible views must be selected for materialization, and selecting the right subset is considered an important challenge in data warehousing. In this paper we suggest a Weight-Based Genetic Algorithm (WBGA) for solving the view selection problem with two objectives.
Serial Min-max Decoding Algorithm Based on Variable Weighting for Nonbinary LDPC Codes
Directory of Open Access Journals (Sweden)
Zhongxun Wang
2013-09-01
Full Text Available In this paper, we analyze the min-max decoding algorithm for nonbinary LDPC (low-density parity-check) codes and propose a serial min-max decoding algorithm. Combining this with weighted processing of the variable node messages, we then propose a serial min-max decoding algorithm based on variable weighting for nonbinary LDPC codes. Simulation indicates that at a bit error rate of 10^-3, compared with the serial min-max decoding algorithm, the traditional min-max decoding algorithm, and the traditional min-sum algorithm, the serial min-max decoding algorithm based on variable weighting offers additional coding gains of 0.2 dB, 0.8 dB, and 1.4 dB, respectively, in an additive white Gaussian noise channel under binary phase shift keying modulation.
A deterministic algorithm for fitting a step function to a weighted point-set
Fournier, Hervé
2013-02-01
Given a set of n points in the plane, each point having a positive weight, and an integer k > 0, we present an optimal O(n log n)-time deterministic algorithm to compute a step function with k steps that minimizes the maximum weighted vertical distance to the input points. It matches the expected time bound of the best known randomized algorithm for this problem. Our approach relies on Cole's improved parametric searching technique. As a direct application, our result yields the first O(n log n)-time algorithm for computing a k-center of a set of n weighted points on the real line. © 2012 Elsevier B.V.
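The decision version underlying this problem has a simple greedy form: can a step function with at most k steps stay within weighted vertical error eps of every point? Intersect feasible height intervals left to right, opening a new step when the intersection empties. The point data are toy values; the paper's parametric search over eps, which yields the optimal algorithm, is not reproduced.

```python
# Greedy feasibility check for the weighted step-function fitting problem:
# point (x, y, w) is covered by step height s iff w * |y - s| <= eps,
# i.e. s lies in [y - eps/w, y + eps/w].

def feasible(points, k, eps):
    """points: (x, y, w) sorted by x. True if k steps achieve error <= eps."""
    steps = 0
    lo, hi = float("-inf"), float("inf")
    for x, y, w in points:
        a, b = y - eps / w, y + eps / w
        if a > hi or b < lo:          # current step cannot cover this point
            steps += 1
            lo, hi = a, b
        else:                          # shrink the feasible height interval
            lo, hi = max(lo, a), min(hi, b)
    return steps + 1 <= k

pts = [(1, 0.0, 1.0), (2, 0.2, 2.0), (3, 5.0, 1.0), (4, 5.1, 1.0)]
print(feasible(pts, k=2, eps=0.5))   # two plateaus suffice at this eps
print(feasible(pts, k=1, eps=0.5))
```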
Adaptive Weighted Clustering Algorithm for Mobile Ad-hoc Networks
Directory of Open Access Journals (Sweden)
Adwan Yasin
2016-04-01
Full Text Available In this paper we present a new algorithm for clustering MANETs that considers several parameters: a new adaptive load-balancing technique for clustering Mobile Ad-hoc Networks (MANETs). A MANET is a special kind of wireless network with no central management, in which the nodes cooperatively manage themselves and maintain connectivity. The algorithm takes into account the local capabilities of each node, the remaining battery power, the degree of connectivity, and the power consumption based on the average distance between nodes and the candidate cluster head. The proposed algorithm efficiently decreases the overhead in the network, which enhances overall MANET performance, and reducing the maintenance time of broken routes makes the network more stable and reliable. Saving node power also guarantees a consistent and reliable network.
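One way to read the abstract is as a weighted cluster-head election score over battery, connectivity degree, and average neighbor distance. The combination below, the weight values, and the node data are all illustrative assumptions, not the paper's formula.

```python
# Hypothetical cluster-head election score: weighted combination of
# remaining battery, closeness of degree to an ideal value, and the
# power cost implied by average neighbor distance (all values toy).

def head_score(battery, degree, avg_dist, ideal_degree=4,
               w_batt=0.5, w_deg=0.3, w_dist=0.2):
    """Higher is better: favor charged, well-connected, central nodes."""
    degree_fit = 1.0 / (1.0 + abs(degree - ideal_degree))
    closeness = 1.0 / (1.0 + avg_dist)
    return w_batt * battery + w_deg * degree_fit + w_dist * closeness

# nodes: {name: (battery in [0,1], degree, avg neighbor distance)}
nodes = {"n1": (0.9, 4, 10.0), "n2": (0.4, 4, 2.0), "n3": (0.9, 9, 10.0)}
best = max(nodes, key=lambda n: head_score(*nodes[n]))
print(best)
```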
Image haze removal algorithm for transmission lines based on weighted Gaussian PDF
Wang, Wanguo; Zhang, Jingjing; Li, Li; Wang, Zhenli; Li, Jianxiang; Zhao, Jinlong
2015-03-01
Histogram specification is a useful algorithm in the image enhancement field. This paper proposes an image haze removal algorithm based on histogram specification with a weighted Gaussian probability density function (Gaussian PDF). First, we consider the characteristics of image histograms captured in sunny, foggy, and hazy weather. Then, we address the weak intensity of image specification by changing the variance and weighting the Gaussian PDF. The algorithm can effectively remove fog, and experimental results show its superiority over plain histogram specification. It also has the advantages of low computational complexity, high efficiency, and no manual intervention.
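Histogram specification toward a weighted Gaussian target can be sketched as: build a target histogram from a weighted mixture of Gaussians, then map gray levels by matching cumulative distributions. The mixture parameters and the tiny "image" are illustrative; the paper's haze-specific variance/weight choices are not reproduced.

```python
# Histogram specification sketch: the target histogram is a weighted
# Gaussian mixture; levels are remapped by nearest-CDF matching.
import math

def gaussian_target(components, levels=256):
    """components: list of (weight, mu, sigma) mixture terms."""
    pdf = [sum(w * math.exp(-((g - mu) ** 2) / (2 * sig ** 2))
               for w, mu, sig in components) for g in range(levels)]
    s = sum(pdf)
    return [p / s for p in pdf]

def specify(image, target_pdf):
    levels = len(target_pdf)
    hist = [0] * levels
    for g in image:
        hist[g] += 1
    n = len(image)
    cdf, tcdf = [0.0] * levels, [0.0] * levels
    acc = tacc = 0.0
    for g in range(levels):
        acc += hist[g] / n
        tacc += target_pdf[g]
        cdf[g], tcdf[g] = acc, tacc
    # map each level to the target level with the nearest CDF value
    mapping = [min(range(levels), key=lambda t: abs(tcdf[t] - cdf[g]))
               for g in range(levels)]
    return [mapping[g] for g in image]

target = gaussian_target([(0.7, 110.0, 40.0), (0.3, 200.0, 20.0)])
hazy = [200, 210, 220, 230, 200, 210]     # low-contrast "image"
out = specify(hazy, target)
print(out)                                 # contrast is stretched
```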
Chang, S; Wong, K W; Zhang, W; Zhang, Y
1999-08-10
An algorithm for optimizing a bipolar interconnection weight matrix with the Hopfield network is proposed. The effectiveness of this algorithm is demonstrated by computer simulation and optical implementation. In the optical implementation of the neural network the interconnection weights are biased to yield a nonnegative weight matrix. Moreover, a threshold subchannel is added so that the system can realize, in real time, the bipolar weighted summation in a single channel. Preliminary experimental results obtained from the applications in associative memories and multitarget classification with rotation invariance are shown.
Panin, S. V.; Titkov, V. V.; Lyubutin, P. S.; Chemezov, V. O.; Eremin, A. V.
2016-11-01
The application of bilateral-filter weight coefficients to the weighted similarity metric of image regions in an optical flow computation algorithm that employs three-dimensional recursive search (3DRS) was investigated. By testing the algorithm on images from the public Middlebury benchmark database, the effectiveness of this weighted similarity metric for solving the image processing problem was demonstrated. It was shown that the parameter values used when calculating the weight coefficients must be matched to the image texture features in order to reach higher noise resistance in the vector field construction. An adaptation technique that eliminates manual determination of these parameter values was proposed and its efficiency demonstrated.
Weighting iterative Fourier transform algorithm of the kinoform synthesis.
Kuzmenko, Alexander V
2008-05-15
Two object-dependent filters (an amplitude filter and a phase filter) are used in the object plane in the iterative calculation of a kinoform, instead of the usual single (phase) filter. The amplitude filter is a system of weight coefficients that vary over the iterations and control the amplitude of the input object. The advantages of the proposed method over others are confirmed by computer experiments. The method is found to be most efficient for binary objects.
Institute of Scientific and Technical Information of China (English)
Musheng Wei; Qiaohua Liu
2007-01-01
Recently, Wei [18] proved that perturbed stiff weighted pseudoinverses and stiff weighted least squares problems are stable if and only if the original and perturbed coefficient matrices satisfy several row rank preservation conditions. According to these conditions, in this paper we show that, in general, ordinary modified Gram-Schmidt with column pivoting is not numerically stable for solving the stiff weighted least squares problem. We then propose a row-block modified Gram-Schmidt algorithm with column pivoting and show that, with an appropriately chosen tolerance, this algorithm can correctly determine the numerical ranks of the row-partitioned sub-matrices, and the computed QR factor R contains a small roundoff error that is row stable. Several numerical experiments are provided to compare the results of the ordinary modified Gram-Schmidt algorithm with column pivoting and the row-block modified Gram-Schmidt algorithm with column pivoting.
An Improved Weighting Algorithm to Achieve Software Compensation in a Fine Grained LAr Calorimeter
Issever, C.; Borras, K.; Wegener, D.
2004-01-01
An improved weighting algorithm applied to hadron showers has been developed for a fine grained LAr calorimeter. The new method uses tabulated weights which depend on the density of energy deposited in individual cells and in a surrounding cone whose symmetry axis connects the interaction vertex with the highest energy cluster in the shower induced by a hadron. The weighting of the visible energy and the correction for losses due to noise cuts are applied in separate steps. In contrast to sta...
New HB-weighted time delay estimation algorithm under impulsive noise environment*
Institute of Scientific and Technical Information of China (English)
无
2008-01-01
The traditional HB-weighted time-delay estimation (TDE) method degrades in impulsive noise environments. Two new time-delay estimation methods based on fractional lower order statistics (FLOS) are proposed according to the impulsive characteristics of fractional lower order α-stable noise. Theoretical analysis and computer simulations indicate that the proposed covariation-based HB-weighted (COV-HB) algorithm can suppress impulsive noise in one received signal for 1 ≤ α ≤ 2, whereas the other proposed fractional lower order covariance-based HB-weighted (FLOC-HB) algorithm has robust performance under arbitrary impulsive noise conditions over the whole range 0 < α ≤ 2.
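A minimal sketch of FLOC-based delay estimation, assuming the common fractional lower-order moment z^&lt;p&gt; = sign(z)|z|^p with p chosen below α/2 (the value p = 0.6 and the plain lag scan are illustrative simplifications; the paper's FLOC-HB method also applies HB weighting in the frequency domain):

```python
import numpy as np

def floc(x, y, lag, p=0.6):
    """Fractional lower-order covariance of x and y at a given lag."""
    frac = lambda z: np.sign(z) * np.abs(z) ** p   # z^<p>, tames heavy tails
    if lag >= 0:
        a, b = x[:len(x) - lag], y[lag:]
    else:
        a, b = x[-lag:], y[:len(y) + lag]
    return np.mean(frac(a) * frac(b))

def estimate_delay(x, y, max_lag, p=0.6):
    """Pick the lag where the FLOC between the two channels peaks."""
    lags = np.arange(-max_lag, max_lag + 1)
    vals = [floc(x, y, int(l), p) for l in lags]
    return int(lags[int(np.argmax(vals))])
```

Because |z|^p grows sublinearly for p &lt; 1, occasional huge impulsive samples cannot dominate the statistic the way they would in an ordinary cross-correlation.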
BEST WEIGHT PATTERN EVALUATION BASED SECURITY CONSTRAINED POWER DISPATCH ALGORITHM
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
This paper presents a methodology that determines the allocation of power demand among the committed generating units while minimizing a number of objectives and meeting physical and technological system constraints. The procedure considers two decoupled problems based upon the dependency of their goals on either active or reactive power generation. Both problems are solved sequentially to achieve an optimal allocation of active and reactive power generation that minimizes operating cost, gaseous pollutant emissions, and active power transmission loss, subject to system operating constraints along with generator prohibited operating zones and transmission line flow limits. The active and reactive power line flows are obtained with the help of generalized generation shift distribution factors (GGDF) and generalized Z-bus distribution factors (GZBDF), respectively. The first problem is solved in a multi-objective framework in which the best weights assigned to the objectives are determined by the weighting method; in the second problem, the active power loss of the system is minimized subject to system constraints. The validity of the proposed method is demonstrated on the 30-bus IEEE power system.
An Adaptive Weighting Algorithm for Interpolating the Soil Potassium Content
Liu, Wei; Du, Peijun; Zhao, Zhuowen; Zhang, Lianpeng
2016-04-01
The concept of spatial interpolation is important in the soil sciences. However, the use of a single global interpolation model is often limited by certain conditions (e.g., terrain complexity), which leads to distorted interpolation results. Here we present a method of adaptive weighting combined environmental variables for soil properties interpolation (AW-SP) to improve accuracy. Using various environmental variables, AW-SP was used to interpolate soil potassium content in Qinghai Lake Basin. To evaluate AW-SP performance, we compared it with that of inverse distance weighting (IDW), ordinary kriging (OK), and OK combined with different environmental variables. The experimental results showed that the methods combined with environmental variables did not always improve prediction accuracy even if there was a strong correlation between the soil properties and environmental variables. However, compared with IDW, OK, and OK combined with different environmental variables, AW-SP is more stable and has lower mean absolute and root mean square errors. Furthermore, the AW-SP maps provided improved details of soil potassium content and provided clearer boundaries to its spatial distribution. In conclusion, AW-SP can not only reduce prediction errors, it also accounts for the distribution and contributions of environmental variables, making the spatial interpolation of soil potassium content more reasonable.
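For reference, the baseline inverse distance weighting step that AW-SP is compared against can be sketched in a few lines (a textbook IDW, not the AW-SP method itself):

```python
import numpy as np

def idw(points, values, query, power=2.0, eps=1e-12):
    """Inverse distance weighting: a weighted mean of sample values,
    with weights falling off as distance^(-power) from the query point."""
    d = np.linalg.norm(points - query, axis=1)
    if np.any(d < eps):              # query coincides with a sample point
        return float(values[np.argmin(d)])
    w = 1.0 / d ** power
    return float(np.sum(w * values) / np.sum(w))
```

AW-SP's contribution, per the abstract, is to adapt such weights locally using environmental covariates rather than distance alone.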
Dafu, Shen; Leihong, Zhang; Dong, Liang; Bei, Li; Yi, Kang
2017-07-01
The purpose of this study is to improve the reconstruction precision and better copy the color of spectral image surfaces. A new spectral reflectance reconstruction algorithm based on an iterative threshold combined with weighted principal component space is presented in this paper, and the principal component with weighted visual features is the sparse basis. Different numbers of color cards are selected as the training samples, a multispectral image is the testing sample, and the color differences in the reconstructions are compared. The channel response value is obtained by a Mega Vision high-accuracy, multi-channel imaging system. The results show that spectral reconstruction based on weighted principal component space is superior in performance to that based on traditional principal component space. Therefore, the color difference obtained using the compressive-sensing algorithm with weighted principal component analysis is less than that obtained using the algorithm with traditional principal component analysis, and better reconstructed color consistency with human eye vision is achieved.
Institute of Scientific and Technical Information of China (English)
Igor Boglaev; Matthew Hardy
2008-01-01
This paper presents and analyzes a monotone domain decomposition algorithm for solving nonlinear singularly perturbed reaction-diffusion problems of parabolic type. To solve the nonlinear weighted average finite difference scheme for the partial differential equation, we construct a monotone domain decomposition algorithm based on a Schwarz alternating method and a box-domain decomposition. This algorithm needs only to solve linear discrete systems at each iterative step and converges monotonically to the exact solution of the nonlinear discrete problem. The rate of convergence of the monotone domain decomposition algorithm is estimated. Numerical experiments are presented.
Directory of Open Access Journals (Sweden)
Browning DJ
2014-07-01
David J Browning, Chong Lee, David Rotberg; Charlotte Eye, Ear, Nose and Throat Associates, Charlotte, NC, USA. Purpose: To determine how algorithms for ideal body weight (IBW) affect hydroxychloroquine dosing in women. Methods: This was a retrospective study of 520 patients screened for hydroxychloroquine retinopathy. Charts were reviewed for sex, height, weight, and daily dose. The outcome measures were ranges of IBW across algorithms; rates of potentially toxic dosing; height thresholds below which 400 mg/d dosing is potentially toxic; and rates for which actual body weight (ABW) was less than IBW. Results: Women made up 474 (91%) of the patients. The IBWs for a height varied from 30–34 pounds (13.6–15.5 kg) across algorithms. The threshold heights below which toxic dosing occurred varied from 62–70 inches (157.5–177.8 cm). Different algorithms placed 16%–98% of women in the toxic dosing range. The proportion for whom dosing should have been based on ABW rather than IBW ranged from 5%–31% across algorithms. Conclusion: Although hydroxychloroquine dosing should be based on the lesser of ABW and IBW, there is no consensus about the definition of IBW. The Michaelides algorithm is associated with the most frequent need to adjust dosing; the Metropolitan Life Insurance, large frame, mean value table with the least frequent need. No evidence indicates that one algorithm is superior to others. Keywords: hydroxychloroquine, ideal body weight, actual body weight, toxicity, retinopathy, algorithms
Browning, David J; Lee, Chong; Rotberg, David
2014-01-01
To determine how algorithms for ideal body weight (IBW) affect hydroxychloroquine dosing in women. This was a retrospective study of 520 patients screened for hydroxychloroquine retinopathy. Charts were reviewed for sex, height, weight, and daily dose. The outcome measures were ranges of IBW across algorithms; rates of potentially toxic dosing; height thresholds below which 400 mg/d dosing is potentially toxic; and rates for which actual body weight (ABW) was less than IBW. Women made up 474 (91%) of the patients. The IBWs for a height varied from 30-34 pounds (13.6-15.5 kg) across algorithms. The threshold heights below which toxic dosing occurred varied from 62-70 inches (157.5-177.8 cm). Different algorithms placed 16%-98% of women in the toxic dosing range. The proportion for whom dosing should have been based on ABW rather than IBW ranged from 5%-31% across algorithms. Although hydroxychloroquine dosing should be based on the lesser of ABW and IBW, there is no consensus about the definition of IBW. The Michaelides algorithm is associated with the most frequent need to adjust dosing; the Metropolitan Life Insurance, large frame, mean value table with the least frequent need. No evidence indicates that one algorithm is superior to others.
Modified Weighted PageRank Algorithm using Time Spent on Links
Directory of Open Access Journals (Sweden)
Priyanka Bauddha
2014-09-01
With dynamic growth and increasing data on the web, it is very difficult for a user to find relevant information. Large numbers of pages are returned by a search engine in response to a user's query, so ranking algorithms have been developed to prioritize the search results and display the more relevant pages at the top. Various ranking algorithms based on web structure mining and web usage mining, such as PageRank, Weighted PageRank, PageRank with VOL, and Weighted PageRank with VOL, have been developed, but none of them accounts for the time a user spends on a particular web page. If a user spends more time on a web page, that page is likely more relevant to the user. The proposed algorithm incorporates time spent on links into Weighted PageRank using Visit of Links (VOL).
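The core idea, redistributing a page's rank to its out-links in proportion to the time users spend on each target page, can be sketched as follows (a simplified model; the paper's algorithm also folds in visit-of-links and in/out-link weights, and the dangling-node handling here is a plain renormalization):

```python
import numpy as np

def time_weighted_pagerank(links, time_spent, n, d=0.85, n_iter=100):
    """PageRank variant where the rank a page passes along each out-link
    is proportional to the time users spend on the link's target."""
    pr = np.full(n, 1.0 / n)
    out = {}
    for (u, v) in links:
        out.setdefault(u, []).append(v)
    for _ in range(n_iter):
        new = np.full(n, (1 - d) / n)
        for u, targets in out.items():
            total = sum(time_spent[(u, v)] for v in targets)
            for v in targets:
                new[v] += d * pr[u] * time_spent[(u, v)] / total
        pr = new / new.sum()   # renormalize (also absorbs dangling-node mass)
    return pr
```

With equal link structure, the page on which users linger longer accumulates the higher rank, which is the behavior the abstract argues for.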
Adaptive Weighted Morphology Detection Algorithm of Plane Object in Docking Guidance System
Directory of Open Access Journals (Sweden)
Guo yan-ying
2010-09-01
In this paper, we present an image segmentation algorithm based on adaptive weighted mathematical morphology edge detectors. The performance of the proposed algorithm has been demonstrated on the Lena image. The input of the proposed algorithm is a grey level image. The image is first processed by the mathematical morphological closing and dilation residue edge detector to enhance the edge features and sketch out the contour of the image. Then the adaptive-weight structuring element operation is applied to the edge-extracted image to fuse edge gaps and fill in holes. Experimental results show that the method not only extracts detailed edges well, but also preserves the integrity of the contours better than classical edge detection algorithms.
Weighted SVD algorithm for close-orbit correction and 10 Hz feedback in RHIC
Energy Technology Data Exchange (ETDEWEB)
Liu C.; Hulsart, R.; Marusic, A.; Michnoff, R.; Minty, M.; Ptitsyn, V.
2012-05-20
Measurements of the beam position along an accelerator are typically treated equally by standard SVD-based orbit correction algorithms, thus distributing the residual errors, modulo the local beta function, equally at the measurement locations. However, a more stable orbit at select locations is sometimes desirable. In this paper, we introduce an algorithm for weighting the beam position measurements to achieve a more stable local orbit. The results of its application to close-orbit correction and 10 Hz orbit feedback are presented.
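The weighting scheme can be sketched as a weighted least-squares solve: scale each beam position monitor (BPM) row of the response matrix by its weight before taking the SVD, so that heavily weighted BPMs end up with smaller residual orbit. This is a generic sketch of weighted SVD correction under those assumptions, not the RHIC implementation:

```python
import numpy as np

def weighted_orbit_correction(R, orbit, weights, n_sv=None):
    """Corrector kicks minimizing the weighted residual ||W (R k + orbit)||,
    where R maps corrector kicks to BPM readings and W = diag(weights)."""
    W = np.diag(weights)
    U, s, Vt = np.linalg.svd(W @ R, full_matrices=False)
    if n_sv is not None:                      # optional singular-value cutoff
        U, s, Vt = U[:, :n_sv], s[:n_sv], Vt[:n_sv]
    return -Vt.T @ ((U.T @ (W @ orbit)) / s)
```

Raising the weight of a chosen BPM pulls the residual orbit down at that location at the expense of the others, which is exactly the local-stability trade-off described in the abstract.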
Gao, Yingbin; Kong, Xiangyu; Zhang, Huihui; Hou, Li'an
2017-05-01
Minor components (MCs) play an important role in signal processing and data analysis, so developing MC extraction algorithms is valuable work. Based on the concepts of weighted subspace and optimization theory, a weighted information criterion is proposed for searching for the optimum solution of a linear neural network. This information criterion exhibits a unique global minimum, attained if and only if the state matrix is composed of the desired MCs of the autocorrelation matrix of the input signal. Using the gradient ascent method and the recursive least squares (RLS) method, two algorithms are developed for multiple MC extraction. The global convergence of the proposed algorithms is also analyzed by the Lyapunov method. The proposed algorithms can extract multiple MCs in parallel and have an advantage in dealing with high-dimensional matrices. Since the weighted matrix does not require an accurate value, it facilitates the system design of the proposed algorithms for practical applications. The speed and computational advantages of the proposed algorithms are verified through simulations.
An image-tracking algorithm based on object center distance-weighting and image feature recognition
Institute of Scientific and Technical Information of China (English)
JIANG Shuhong; WANG Qin; ZHANG Jianqiu; HU Bo
2007-01-01
A real-time image-tracking algorithm is proposed which gives small weights to pixels farther from the object center and uses the quantized image gray scales as a template. It identifies the target's location by mean-shift iteration and determines the target's scale by image feature recognition, improving the kernel-based algorithm for tracking scale-changing targets. A decimation method is proposed to track large-sized targets, and real-time experimental results verify the effectiveness of the proposed algorithm.
On applying weighted seed techniques to GMRES algorithm for solving multiple linear systems
Directory of Open Access Journals (Sweden)
Lakhdar Elbouyahyaoui
2018-07-01
In the present paper, we are concerned with weighted Arnoldi-like methods for solving large and sparse linear systems that have different right-hand sides but the same coefficient matrix. We first give detailed descriptions of the weighted Gram-Schmidt process and of Ruhe's variant of the weighted block Arnoldi algorithm. We also establish some theoretical results that link the iterates of the weighted block Arnoldi process to those of the unweighted one. Then, to accelerate the convergence of the classical restarted block and seed GMRES methods, we introduce weighted restarted block and seed GMRES methods. Numerical experiments performed with matrices from the Matrix Market repository or from the University of Florida sparse matrix collection are reported at the end of this work to compare the performance and show the effectiveness of the proposed methods.
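The weighted Gram-Schmidt step amounts to replacing the Euclidean inner product with &lt;x, y&gt;_D = yᵀDx for a positive diagonal D. A sketch of a (single-vector) weighted Arnoldi process built on this inner product follows; it is a generic illustration of the weighted orthogonalization, not the paper's block Ruhe variant:

```python
import numpy as np

def weighted_arnoldi(A, v0, m, d):
    """m steps of Arnoldi using the weighted inner product <x, y>_D = y.T @ D @ x
    with D = diag(d), d > 0. Returns V (D-orthonormal basis) and H (Hessenberg)."""
    n = len(v0)
    wdot = lambda x, y: np.dot(y, d * x)      # weighted inner product
    V = np.zeros((n, m + 1))
    H = np.zeros((m + 1, m))
    V[:, 0] = v0 / np.sqrt(wdot(v0, v0))
    for j in range(m):
        w = A @ V[:, j]
        for i in range(j + 1):                # modified Gram-Schmidt in <.,.>_D
            H[i, j] = wdot(w, V[:, i])
            w = w - H[i, j] * V[:, i]
        H[j + 1, j] = np.sqrt(wdot(w, w))
        V[:, j + 1] = w / H[j + 1, j]
    return V, H
```

The usual Arnoldi relation A V_m = V_{m+1} H still holds; only the orthogonality of the basis is measured in the D-weighted norm.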
HO Weight Factor in Particle-Flow Algorithm in CMS Experiment
Chatterjee, Suman
2017-01-01
The weight factors of the Outer Hadron Calorimeter (HO) in the particle-flow algorithm used in CMS have been optimized using dijet and $\gamma+$jet samples from the data collected in 2015 and 2016. The response of the hadron calorimeter depends on the shower depth as well as on the total energy of the jet; hence energy-dependent weight factors are also considered in this study, along with their dependence on pseudorapidity.
An Adaptive Particle Swarm Optimization Algorithm Based on Directed Weighted Complex Network
Directory of Open Access Journals (Sweden)
Ming Li
2014-01-01
The disadvantages of the particle swarm optimization (PSO) algorithm are that it easily falls into local optima in high-dimensional spaces and has a low convergence rate in the iterative process. To deal with these problems, an adaptive particle swarm optimization algorithm based on a directed weighted complex network (DWCNPSO) is proposed. Particles are scattered uniformly over the search space by using the topology of a small-world network to initialize their positions. At the same time, an evolutionary mechanism of the directed dynamic network is employed to make the particles evolve into a scale-free network whose in-degree obeys a power-law distribution. The proposed method not only improves the diversity of the algorithm but also helps particles avoid falling into local optima. Simulation results indicate that the proposed algorithm can effectively avoid the premature convergence problem. Compared with other algorithms, its convergence rate is faster.
Fast weighted K-view-voting algorithm for image texture classification
Liu, Hong; Lan, Yihua; Wang, Qian; Jin, Renchao; Song, Enmin; Hung, Chih-Cheng
2012-02-01
We propose an innovative and efficient approach to improve K-view-template (K-view-T) and K-view-datagram (K-view-D) algorithms for image texture classification. The proposed approach, called the weighted K-view-voting algorithm (K-view-V), uses a novel voting method for texture classification and an accelerating method based on the efficient summed square image (SSI) scheme as well as fast Fourier transform (FFT) to enable overall faster processing. Decision making, which assigns a pixel to a texture class, occurs by using our weighted voting method among the "promising" members in the neighborhood of a classified pixel. In other words, this neighborhood consists of all the views, and each view has a classified pixel in its territory. Experimental results on benchmark images, which are randomly taken from Brodatz Gallery and natural and medical images, show that this new classification algorithm gives higher classification accuracy than existing K-view algorithms. In particular, it improves the accurate classification of pixels near the texture boundary. In addition, the proposed acceleration method improves the processing speed of K-view-V as it requires much less computation time than other K-view algorithms. Compared with the results of earlier developed K-view algorithms and the gray level co-occurrence matrix (GLCM), the proposed algorithm is more robust, faster, and more accurate.
An Improved WiFi Indoor Positioning Algorithm by Weighted Fusion.
Ma, Rui; Guo, Qiang; Hu, Changzhen; Xue, Jingfeng
2015-08-31
The rapid development of the mobile Internet has brought WiFi indoor positioning under the spotlight due to its low cost. However, the accuracy of WiFi indoor positioning currently cannot meet the demands of practical applications. To solve this problem, this paper proposes an improved WiFi indoor positioning algorithm by weighted fusion. The proposed algorithm is based on traditional location fingerprinting algorithms and consists of two stages: offline acquisition and online positioning. The offline acquisition process selects optimal parameters to complete the signal acquisition, and it forms a database of fingerprints by error classification and handling. To further improve the accuracy of positioning, the online positioning process first uses a pre-match method to select the candidate fingerprints to shorten the positioning time. After that, it uses the improved Euclidean distance and the improved joint probability to calculate two intermediate results, and further calculates the final result from these two intermediate results by weighted fusion. The improved Euclidean distance introduces the standard deviation of WiFi signal strength to smooth the WiFi signal fluctuation, and the improved joint probability introduces a logarithmic calculation to reduce the difference between probability values. Comparing the proposed algorithm with the Euclidean distance-based WKNN algorithm and the joint probability algorithm, the experimental results indicate that the proposed algorithm has higher positioning accuracy.
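The final fusion step can be sketched generically: combine the two intermediate position estimates with weights inversely proportional to each method's assumed error. The error model here is an illustrative placeholder, not the paper's calibration:

```python
def weighted_fusion(est_a, est_b, err_a, err_b):
    """Fuse two (x, y) position estimates; the lower-error estimate
    receives the larger weight (weights proportional to 1/error)."""
    wa, wb = 1.0 / err_a, 1.0 / err_b
    s = wa + wb
    return tuple((wa * a + wb * b) / s for a, b in zip(est_a, est_b))
```

With equal assumed errors this reduces to a plain average; as one method's error grows, the fused result slides toward the other estimate.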
Institute of Scientific and Technical Information of China (English)
Liu Shengmei; Pan Su; Mi Zhengkun; Meng Qingmin; Xu Minghai
2012-01-01
An improved MEW (multiplicative exponent weighting) algorithm, SLE-MEW, is proposed for vertical handoff decision in heterogeneous wireless networks. It introduces SINR (signal to interference plus noise ratio) effects, the least squares (LS) method, and the information entropy method into the algorithm. An attribute matrix is constructed considering the SINR in the source network and the equivalent SINR in the target network, the required bandwidth, the traffic cost, and the available bandwidth of participating access networks. A handoff decision meeting multi-attribute QoS (quality of service) requirements is made according to the traffic features. The subjective weights of the decision elements are determined with the LS method; the information entropy method is employed to derive the objective weights of the evaluation criteria, leading to comprehensive weights. Finally, the decision is made using the MEW algorithm based on the attribute matrix and weight vector. Four 3GPP (3rd Generation Partnership Project) defined traffic classes are considered in the performance evaluation. Simulation results show that the proposed algorithm provides satisfactory performance fitted to the characteristics of the traffic.
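The MEW scoring rule itself is simple: benefit attributes enter the product with positive weighted exponents and cost attributes with negative ones. A minimal sketch (attribute values and weights below are illustrative, and the SINR/LS/entropy weight derivation of SLE-MEW is not reproduced):

```python
def mew_score(attrs, weights, benefit):
    """Multiplicative exponent weighting score: prod(a_i ** (+w_i)) for
    benefit attributes, prod(a_i ** (-w_i)) for cost attributes."""
    score = 1.0
    for a, w, is_benefit in zip(attrs, weights, benefit):
        score *= a ** (w if is_benefit else -w)
    return score

def select_network(candidates, weights, benefit):
    """Pick the candidate network with the highest MEW score."""
    scores = [mew_score(c, weights, benefit) for c in candidates]
    return scores.index(max(scores)), scores
```

For example, with attributes [SINR, available bandwidth, cost] and benefit flags [True, True, False], a network with better SINR and bandwidth at lower cost wins.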
Reconstruction-plane-dependent weighted FDK algorithm for cone beam volumetric CT
Tang, Xiangyang; Hsieh, Jiang
2005-04-01
The original FDK algorithm has been extensively employed in medical and industrial imaging applications. With an increased cone angle, cone beam (CB) artifacts in images reconstructed by the original FDK algorithm worsen, since the circular trajectory does not satisfy the so-called data sufficiency condition (DSC). A few "circular plus" trajectories have been proposed in the past to reduce CB artifacts by meeting the DSC. However, the circular trajectory has distinct advantages over other scanning trajectories in practical CT imaging, such as cardiac, vascular and perfusion applications. In addition to the DSC, another insight into the CB artifacts of the original FDK algorithm is the inconsistency between conjugate rays that are 180° apart in view angle. The inconsistency between conjugate rays is pixel dependent, i.e., it varies dramatically over pixels within the image plane to be reconstructed. However, the original FDK algorithm treats all conjugate rays equally, resulting in CB artifacts that can be avoided if an appropriate view weighting strategy is exercised. In this paper, a modified FDK algorithm is proposed, along with an experimental evaluation and verification, in which the helical body phantom and a humanoid head phantom scanned by a volumetric CT (64 x 0.625 mm) are utilized. Without extra trajectories supplemental to the circular trajectory, the modified FDK algorithm applies reconstruction-plane-dependent view weighting on projection data before 3D backprojection, which reduces the inconsistency between conjugate rays by suppressing the contribution of the conjugate ray with the larger cone angle. Both computer-simulated and real phantom studies show that, up to a moderate cone angle, the CB artifacts can be substantially suppressed by the modified FDK algorithm, while advantages of the original FDK algorithm, such as the filtered backprojection algorithm structure, 1D ramp filtering, and data manipulation efficiency, can be maintained.
Directory of Open Access Journals (Sweden)
Esther A.K James
2012-01-01
Problem statement: This paper addresses the face recognition problem by proposing a Weighted Fuzzy Fisherface (WFF) technique using the biorthogonal transformation. The weighted fuzzy fisherface technique extends the fisherface technique by introducing fuzzy class membership for each training sample when calculating the scatter matrices. Approach: In the weighted fuzzy fisherface method, the weight emphasizes classes that are close together and de-emphasizes classes that are far from each other. Results: The proposed method is more advantageous for the classification task and its accuracy is improved. The performance measures False Acceptance Rate (FAR), False Rejection Rate (FRR), and Equal Error Rate (EER) are also calculated. Conclusion: The weighted fuzzy fisherface algorithm using the wavelet transform can be used effectively and efficiently for face recognition, and its accuracy is improved.
Selected Operations, Algorithms, and Applications of n-Tape Weighted Finite-State Machines
Kempe, André
2011-01-01
A weighted finite-state machine with n tapes (n-WFSM) defines a rational relation on n strings. It is a generalization of weighted acceptors (one tape) and transducers (two tapes). After recalling some basic definitions about n-ary weighted rational relations and n-WFSMs, we summarize some central operations on these relations and machines, such as join and auto-intersection. Unfortunately, due to Post's Correspondence Problem, a fully general join or auto-intersection algorithm cannot exist. We recall a restricted algorithm for a class of n-WFSMs. Through a series of practical applications, we finally investigate the augmented descriptive power of n-WFSMs and their join, compared to classical transducers and their composition. Some applications are not feasible with the latter. The series includes: the morphological analysis of Semitic languages, the preservation of intermediate results in transducer cascades, the induction of morphological rules from corpora, the alignment of lexicon entries, the automatic ...
Closed circle DNA algorithm of change positive-weighted Hamilton circuit problem
Institute of Scientific and Technical Information of China (English)
Zhou Kang; Tong Xiaojun; Xu Jin
2009-01-01
Closed circle DNA strands have equal chain length. The same position on a closed circle DNA corresponds to different recognition sequences, and the same recognition sequence corresponds to different foreign DNA segments, so the closed circle DNA computing model is generalized. A closed circle DNA algorithm is put forward for the change positive-weighted Hamilton circuit problem. First, three groups of DNA encodings are designed for all arcs, and deck groups are designed for all vertices; all possible solutions are composed. Then, the feasible solutions are filtered out using a group detect experiment, and the optimal solutions are obtained using a group insert experiment and an electrophoresis experiment. Finally, all optimal solutions are found using a detect experiment. The complexity of the algorithm is derived, and the validity of the DNA algorithm is illustrated by an example. Three advantages of the closed circle DNA algorithm are analyzed, and the characteristics and advantages of the group delete experiment are discussed.
LENUS (Irish Health Repository)
Ledwidge, Mark T
2013-04-01
Previous studies have demonstrated poor sensitivity of guideline weight monitoring in predicting clinical deterioration of heart failure (HF). This study aimed to evaluate patterns of remotely transmitted daily weights in a high-risk HF population and also to compare guideline weight monitoring and an individualized weight monitoring algorithm.
Zhang, X.; Kusari, A.; Glennie, C. L.; Oskin, M. E.; Hinojosa-Corona, A.; Borsa, A. A.; Arrowsmith, R.
2013-12-01
Differential LiDAR (Light Detection and Ranging) from repeated surveys has recently emerged as an effective tool to measure three-dimensional (3D) change for applications such as quantifying slip and spatially distributed warping associated with earthquake ruptures, and examining the spatial distribution of beach erosion after hurricane impact. Currently, the primary method for determining 3D change is the iterative closest point (ICP) algorithm and its variants. However, all current studies using ICP have assumed that all LiDAR points in the compared point clouds have uniform accuracy. This assumption is simplistic given that the error for each LiDAR point is variable and depends on highly variable factors such as target range, angle of incidence, and aircraft trajectory accuracy. Therefore, to rigorously determine spatial change, it would be ideal to model the random error for every LiDAR observation in the differential point cloud and use these error estimates as a priori weights in the ICP algorithm. To test this approach, we implemented a rigorous LiDAR observation error propagation method to generate estimated random error for each point in a LiDAR point cloud, and then determined 3D displacements between two point clouds using an anisotropically weighted ICP algorithm. The algorithm was evaluated by qualitatively and quantitatively comparing post-earthquake slip estimates from the 2010 El Mayor-Cucapah Earthquake between a uniform-weight and an anisotropically weighted ICP algorithm, using pre-event LiDAR collected in 2006 by Instituto Nacional de Estadística y Geografía (INEGI), and post-event LiDAR collected by The National Center for Airborne Laser Mapping (NCALM).
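A stripped-down illustration of per-point weighting in an alignment update: estimate the translation between matched point sets while down-weighting points with larger propagated error. The full anisotropic weighted ICP also estimates rotation, re-matches correspondences each iteration, and uses directional (covariance) rather than scalar error, so this is only the weighting idea in miniature:

```python
import numpy as np

def weighted_translation(src, dst, sigma):
    """Weighted translation estimate between matched 3D point sets,
    with per-point scalar error sigma: weights proportional to 1/sigma^2."""
    w = 1.0 / sigma ** 2
    w = w / w.sum()
    return (w[:, None] * (dst - src)).sum(axis=0)
```

Points with large propagated LiDAR error (long range, grazing incidence) then barely influence the displacement estimate, which is the motivation stated in the abstract.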
A Line-Based Adaptive-Weight Matching Algorithm Using Loopy Belief Propagation
Directory of Open Access Journals (Sweden)
Hui Li
2015-01-01
In traditional adaptive-weight stereo matching, the rectangular support region requires excess memory consumption and time. We propose a novel line-based stereo matching algorithm for obtaining a more accurate disparity map with low computational complexity. This algorithm can be divided into two steps: disparity map initialization and disparity map refinement. In the initialization step, a new adaptive-weight model based on a linear support region is put forward for cost aggregation. In this model, a neural network is used to evaluate spatial proximity, and the mean-shift segmentation method is used to improve the accuracy of color similarity; the Birchfield pixel dissimilarity function and the census transform are adopted to establish the dissimilarity measurement function. The initial disparity map is then obtained by loopy belief propagation. In the refinement step, the disparity map is optimized by an iterative left-right consistency checking method and a segmentation voting method. The parameter values involved in this algorithm are determined through extensive simulation experiments to further improve the matching effect. Simulation results indicate that this new matching method performs well on standard stereo benchmarks and that the running time of our algorithm is remarkably lower than that of algorithms with rectangular support regions.
Roh, Min K; Daigle, Bernie J; Gillespie, Dan T; Petzold, Linda R
2011-12-21
In recent years there has been substantial growth in the development of algorithms for characterizing rare events in stochastic biochemical systems. Two such algorithms, the state-dependent weighted stochastic simulation algorithm (swSSA) and the doubly weighted SSA (dwSSA) are extensions of the weighted SSA (wSSA) by H. Kuwahara and I. Mura [J. Chem. Phys. 129, 165101 (2008)]. The swSSA substantially reduces estimator variance by implementing system state-dependent importance sampling (IS) parameters, but lacks an automatic parameter identification strategy. In contrast, the dwSSA provides for the automatic determination of state-independent IS parameters, thus it is inefficient for systems whose states vary widely in time. We present a novel modification of the dwSSA--the state-dependent doubly weighted SSA (sdwSSA)--that combines the strengths of the swSSA and the dwSSA without inheriting their weaknesses. The sdwSSA automatically computes state-dependent IS parameters via the multilevel cross-entropy method. We apply the method to three examples: a reversible isomerization process, a yeast polarization model, and a lac operon model. Our results demonstrate that the sdwSSA offers substantial improvements over previous methods in terms of both accuracy and efficiency.
Teo, P. T.; Crow, R.; Van Nest, S.; Sasaki, D.; Pistorius, S.
2013-07-01
This paper investigates the feasibility and accuracy of using a computer vision algorithm and electronic portal images to track the motion of a tumour-like target from a breathing phantom. A multi-resolution optical flow algorithm that incorporates weighting based on the differences between frames was used to obtain a set of vectors corresponding to the motion between two frames. A global value representing the average motion was obtained by computing the average weighted mean from the set of vectors. The tracking accuracy of the optical flow algorithm as a function of the breathing rate and target visibility was investigated. Synthetic images with different contrast-to-noise ratios (CNR) were created, and motions were tracked. The accuracy of the proposed algorithm was compared against potentiometer measurements giving average position errors of 0.6 ± 0.2 mm, 0.2 ± 0.2 mm and 0.1 ± 0.1 mm with average velocity errors of 0.2 ± 0.2 mm s-1, 0.4 ± 0.3 mm s-1 and 0.6 ± 0.5 mm s-1 for 6, 12 and 16 breaths min-1 motions, respectively. The cumulative average position error reduces more rapidly with the greater number of breathing cycles present in higher breathing rates. As the CNR increases from 4.27 to 5.6, the average relative error approaches zero and the errors are less dependent on the velocity. When tracking a tumour on a patient's digitally reconstructed radiograph images, a high correlation was obtained between the dynamically weighted optical flow algorithm, a manual delineation process and a centroid tracking algorithm. While the accuracy of our approach is similar to that of other methods, the benefits are that it does not require manual delineation of the target and can therefore provide accurate real-time motion estimation during treatment.
Random weights, robust lattice rules and the geometry of the cbc$r$c algorithm
Dick, Josef
2011-01-01
In this paper we study lattice rules which are cubature formulae to approximate integrands over the unit cube $[0,1]^s$ from a weighted reproducing kernel Hilbert space. We assume that the weights are independent random variables with a given mean and variance for two reasons stemming from practical applications: (i) It is usually not known in practice how to choose the weights. Thus by assuming that the weights are random variables, we obtain robust constructions (with respect to the weights) of lattice rules. This, to some extent, removes the necessity to carefully choose the weights. (ii) In practice it is convenient to use the same lattice rule for many different integrands. The best choice of weights for each integrand may vary to some degree, hence considering the weights as random variables does justice to how lattice rules are used in applications. We also study a generalized version which uses $r$ constraints which we call the cbc$r$c (component-by-component with $r$ constraints) algorithm. We show that...
Study of weighted space deconvolution algorithm in computer controlled optical surfacing formation
Institute of Scientific and Technical Information of China (English)
Hongyu Li; Wei Zhang; Guoyu Yu
2009-01-01
Theoretical and experimental research on the deconvolution algorithm for dwell time in computer controlled optical surfacing (CCOS) formation is carried out to obtain an ultra-smooth surface for a space optical element. Based on the Preston equation, the convolution model of CCOS is deduced. Considering the ill-posedness of the deconvolution problem and the actual conditions of CCOS technology, a weighted spatial deconvolution algorithm based on a non-periodic matrix model is presented, which avoids the ill-posedness induced by measurement noise. The discrete convolution equation is solved using a conjugate gradient iterative method, and the workload of iterative calculation in the spatial domain is reduced effectively. To address the edge effect of the convolution algorithm, the method adopts a marginal factor to control the edge precision, with good results. A simulated processing test shows that the convergence ratio of the processed surface shape error reaches 80%. The algorithm is further verified through an experiment on a numerically controlled bonnet polishing machine, and an ultra-smooth glass surface with a root-mean-square (RMS) error of 0.0088 μm is achieved. The simulation and experimental results indicate that this algorithm is stable, convergent, and precise, and that it satisfies the solution requirements of actual dwell time.
Sato, Seichi; Kurihara, Toru; Ando, Shigeru
This paper proposes an exact direct method to determine all parameters, including the envelope peak, of a white-light interferogram. A novel mathematical technique, the weighted integral method (WIM), is applied: it starts from the characteristic differential equation of the target signal (the interferogram in this paper) to obtain the algebraic relation between the finite-interval weighted integrals (observations) of the signal and the waveform parameters (unknowns). We implemented this method using the FFT and examined it through various numerical simulations. The results show the method is able to localize the envelope peak very accurately even if it is not included in the observed interval. Performance comparisons reveal the superiority of the proposed algorithm over conventional algorithms in terms of accuracy, efficiency, and estimation range.
Directory of Open Access Journals (Sweden)
Puneet Rai
2014-02-01
Ant Colony Optimization (ACO) is a nature-inspired algorithm based on the foraging behavior of ants, in particular on how ants deposit pheromone while searching for food. ACO generates a pheromone matrix that gives the edge information present at each pixel position of an image, formed by ants dispatched on the image. The movement of the ants depends on the local variance of the image's intensity values. This paper proposes an improved heuristic method that assigns weights to the neighborhood; by assigning weights, or priorities, to the neighboring pixels, an ant decides in which direction to move. The method is applied to medical images, and experimental results are provided to support the superior performance of the proposed approach over the existing method.
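The weighted-neighborhood idea can be sketched as follows. This is a deterministic greedy toy, not the paper's pheromone sampling rule; the direction labels and numbers are invented for illustration:

```python
def pick_move(neighbors, weights, pheromone):
    """Greedy sketch of a weighted-neighborhood ant move: each neighbor's
    attractiveness is its pheromone level times the heuristic priority
    weight of that direction; the ant moves to the highest scorer."""
    scores = {n: pheromone[n] * w for n, w in zip(neighbors, weights)}
    return max(scores, key=scores.get)

# With equal pheromone on both candidates, the higher-priority
# direction wins the tie.
move = pick_move(["left", "right"], [0.2, 0.8], {"left": 1.0, "right": 1.0})
```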
Classification of EEG Signals using adaptive weighted distance nearest neighbor algorithm
E. Parvinnia; M. Sabeti; M. Zolghadri Jahromi; R. Boostani
2014-01-01
Electroencephalogram (EEG) signals are often used to diagnose diseases such as seizure, Alzheimer's disease, and schizophrenia. One main problem with the recorded EEG samples is that they are not equally reliable, due to artifacts at the time of recording. EEG signal classification algorithms should have a mechanism to handle this issue. It seems that adaptive classifiers can be useful for biological signals such as EEG. In this paper, a general adaptive method named weighted distance near...
Poland, Simon P; Krstajić, Nikola; Knight, Robert D; Henderson, Robert K; Ameer-Beg, Simon M
2014-04-15
We report on the development of a doubly weighted Gerchberg-Saxton algorithm (DWGS) to enable generation of uniform beamlet arrays with a spatial light modulator (SLM) for use in multiphoton multifocal imaging applications. The algorithm incorporates the WGS algorithm as well as feedback of fluorescence signals from the sample, measured with a single-photon avalanche diode (SPAD) detector array. This technique compensates for nonuniform illumination onto the SLM, the effects of aberrations, and the variability in gain between detectors within the SPAD array, to generate a uniformly illuminated multiphoton fluorescence image. We demonstrate the use of the DWGS with a number of beamlet array patterns to image muscle fibers of a 5-day-old fixed zebrafish larva.
An O(E) Time Shortest Path Algorithm for Non-Negative Weighted Undirected Graphs
Qureshi, Muhammad Aasim; Safdar, Sohail; Akbar, Rehan
2009-01-01
In most shortest path problems, such as vehicle routing and network routing problems, we only need an efficient path between two points, the source and the destination; it is not necessary to calculate the shortest path from the source to all other nodes. This paper concentrates on this idea and presents an algorithm for calculating the shortest path for (i) non-negative weighted undirected graphs and (ii) unweighted undirected graphs. The algorithm completes its execution in O(E) for all graphs except a few in which a longer path (in terms of number of edges) from the source to some node makes it the best selection for that node. The main advantage of the algorithm is its simplicity; it does not need complex data structures for its implementation.
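For the unweighted undirected case, the single-pair O(E) behaviour can be illustrated with a plain breadth-first search that stops as soon as the destination is dequeued. This is a standard baseline for comparison, not the authors' algorithm:

```python
from collections import deque

def bfs_shortest_path(adj, source, target):
    """Single-pair shortest path in an unweighted undirected graph.
    Every edge is examined at most a constant number of times,
    giving O(E) work overall (plus O(V) for the visited map)."""
    prev = {source: None}          # also serves as the visited set
    q = deque([source])
    while q:
        u = q.popleft()
        if u == target:            # stop early: single-pair query
            break
        for v in adj[u]:
            if v not in prev:
                prev[v] = u
                q.append(v)
    if target not in prev:
        return None                # unreachable
    path, node = [], target
    while node is not None:        # walk parents back to the source
        path.append(node)
        node = prev[node]
    return path[::-1]

adj = {0: [1, 2], 1: [0, 3], 2: [0, 3], 3: [1, 2]}
path = bfs_shortest_path(adj, 0, 3)
```

On this 4-cycle the search finds a two-edge route from node 0 to node 3.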
Robertson, Alexander M.; Willett, Peter
1996-01-01
Describes a genetic algorithm (GA) that assigns weights to query terms in a ranked-output document retrieval system. Experiments showed the GA often found weights slightly superior to those produced by deterministic weighting (F4). Many times, however, the two methods gave the same results and sometimes the F4 results were superior, indicating…
Yang, Jie; Zhang, Pengcheng; Zhang, Liyuan; Shu, Huazhong; Li, Baosheng; Gui, Zhiguo
2017-01-01
In inverse treatment planning of intensity-modulated radiation therapy (IMRT), the objective function is typically the sum of the weighted sub-scores, where the weights indicate the importance of the sub-scores. To obtain a high-quality treatment plan, the planner manually adjusts the objective weights using a trial-and-error procedure until an acceptable plan is reached. In this work, a new particle swarm optimization (PSO) method which can adjust the weighting factors automatically was investigated to overcome the requirement of manual adjustment, thereby reducing the workload of the human planner and contributing to the development of a fully automated planning process. The proposed optimization method consists of three steps. (i) First, a swarm of weighting factors (i.e., particles) is initialized randomly in the search space, where each particle corresponds to a global objective function. (ii) Then, a plan optimization solver is employed to obtain the optimal solution for each particle, and the values of the evaluation functions used to determine the particle's location and the population global location for the PSO are calculated based on these results. (iii) Next, the weighting factors are updated based on the particle's location and the population global location. Step (ii) is performed alternately with step (iii) until the termination condition is reached. In this method, the evaluation function is a combination of several key points on the dose volume histograms. Furthermore, a perturbation strategy - the crossover and mutation operator hybrid approach - is employed to enhance the population diversity, and two arguments are applied to the evaluation function to improve the flexibility of the algorithm. In this study, the proposed method was used to develop IMRT treatment plans involving five unequally spaced 6MV photon beams for 10 prostate cancer cases. The proposed optimization algorithm yielded high-quality plans for all of the cases, without human
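Step (iii) above is the standard PSO position/velocity update applied to the objective-weight vectors. The sketch below shows the canonical update rule; the coefficients and the single-particle setup are illustrative assumptions, not the paper's tuned values, and the expensive plan-optimization solver of step (ii) is omitted:

```python
import random

def pso_step(particles, velocities, pbest, gbest, w=0.7, c1=1.5, c2=1.5):
    """One canonical PSO update for each weighting-factor vector:
    v <- w*v + c1*r1*(pbest - x) + c2*r2*(gbest - x);  x <- x + v."""
    for i, (x, v) in enumerate(zip(particles, velocities)):
        for d in range(len(x)):
            r1, r2 = random.random(), random.random()
            v[d] = (w * v[d]
                    + c1 * r1 * (pbest[i][d] - x[d])
                    + c2 * r2 * (gbest[d] - x[d]))
            x[d] += v[d]
    return particles, velocities

# One particle (a 2-component weight vector) pulled toward the best
# locations found so far.
particles = [[0.5, 0.5]]
velocities = [[0.0, 0.0]]
pbest = [[1.0, 1.0]]
gbest = [1.0, 1.0]
pso_step(particles, velocities, pbest, gbest)
```

With zero initial velocity, the particle can only move toward (and possibly overshoot) the attractors at 1.0, never away from them.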
Onose, Alexandru; Dabbech, Arwa; Wiaux, Yves
2017-07-01
Next-generation radio interferometers, like the Square Kilometre Array, will acquire large amounts of data with the goal of improving the size and sensitivity of the reconstructed images by orders of magnitude. The efficient processing of large-scale data sets is of great importance. We propose an acceleration strategy for a recently proposed primal-dual distributed algorithm. A preconditioning approach can incorporate into the algorithmic structure both the sampling density of the measured visibilities and the noise statistics. Using the sampling density information greatly accelerates the convergence speed, especially for highly non-uniform sampling patterns, while relying on the correct noise statistics optimizes the sensitivity of the reconstruction. In connection to CLEAN, our approach can be seen as including in the same algorithmic structure both natural and uniform weighting, thereby simultaneously optimizing both the resolution and the sensitivity. The method relies on a new non-Euclidean proximity operator for the data fidelity term, which generalizes the projection onto the ℓ2 ball, where the noise lives for naturally weighted data, to the projection onto a generalized ellipsoid incorporating sampling density information through uniform weighting. Importantly, this non-Euclidean modification is only an acceleration strategy to solve the convex imaging problem with data fidelity dictated only by noise statistics. We show through simulations with realistic sampling patterns the acceleration obtained using the preconditioning. We also investigate the algorithm performance for the reconstruction of the 3C129 radio galaxy from real visibilities and compare with multiscale CLEAN, showing better sensitivity and resolution. Our MATLAB code is available online on GitHub.
Vector Cascade Algorithms with Infinitely Supported Masks in Weighted L2-Spaces
Institute of Scientific and Technical Information of China (English)
Jian Bin YANG
2013-01-01
In this paper, we shall study the solutions of functional equations of the form φ = Σ_{α∈Z^s} a(α) φ(M· − α), where φ = (φ1, ..., φr)^T is an r × 1 column vector of functions on the s-dimensional Euclidean space, a := (a(α))_{α∈Z^s} is an exponentially decaying sequence of r × r complex matrices called the refinement mask, and M is an s × s integer matrix such that lim_{n→∞} M^{−n} = 0. We are interested in the question: for a mask a with exponential decay, does there exist a solution φ to the functional equation with each function φj, j = 1, ..., r, belonging to L2(R^s) and having exponential decay in some sense? Our approach is to consider the convergence of vector cascade algorithms in weighted L2 spaces. The vector cascade operator Q_{a,M} associated with mask a and matrix M is defined by Q_{a,M} f := Σ_{α∈Z^s} a(α) f(M· − α), f = (f1, ..., fr)^T ∈ (L_{2,μ}(R^s))^r. The iterative scheme (Q^n_{a,M} f)_{n=1,2,...} is called a vector cascade algorithm or a vector subdivision scheme. The purpose of this paper is to provide conditions for the vector cascade algorithm to converge in (L_{2,μ}(R^s))^r, the weighted L2 space. Inspired by some ideas in [Jia, R. Q., Li, S.: Refinable functions with exponential decay: An approach via cascade algorithms. J. Fourier Anal. Appl., 17, 1008–1034 (2011)], we prove that if the vector cascade algorithm associated with a and M converges in (L2(R^s))^r, then its limit function belongs to (L_{2,μ}(R^s))^r for some μ > 0.
Application of stochastic weighted algorithms to a multidimensional silica particle model
Energy Technology Data Exchange (ETDEWEB)
Menz, William J. [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom); Patterson, Robert I.A.; Wagner, Wolfgang [Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstrasse 39, Berlin 10117 (Germany); Kraft, Markus, E-mail: mk306@cam.ac.uk [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom)
2013-09-01
Highlights: • Stochastic weighted algorithms (SWAs) are developed for a detailed silica model. • An implementation of SWAs with the transition kernel is presented. • The SWAs' solutions converge to the direct simulation algorithm's (DSA) solution. • The efficiency of SWAs is evaluated for this multidimensional particle model. • It is shown that SWAs can be used for coagulation problems in industrial systems. Abstract: This paper presents a detailed study of the numerical behaviour of stochastic weighted algorithms (SWAs) using the transition regime coagulation kernel and a multidimensional silica particle model. The implementation in the SWAs of the transition regime coagulation kernel and associated majorant rates is described. The silica particle model of Shekar et al. [S. Shekar, A.J. Smith, W.J. Menz, M. Sander, M. Kraft, A multidimensional population balance model to describe the aerosol synthesis of silica nanoparticles, Journal of Aerosol Science 44 (2012) 83–98] was used in conjunction with this coagulation kernel to study the convergence properties of SWAs with a multidimensional particle model. High precision solutions were calculated with two SWAs and also with the established direct simulation algorithm. These solutions, which were generated using a large number of computational particles, showed close agreement. It was thus demonstrated that SWAs can be successfully used with complex coagulation kernels and high dimensional particle models to simulate real-world systems.
Classification of EEG Signals using adaptive weighted distance nearest neighbor algorithm
Directory of Open Access Journals (Sweden)
E. Parvinnia
2014-01-01
Electroencephalogram (EEG) signals are often used to diagnose diseases such as seizure, Alzheimer's disease, and schizophrenia. One main problem with the recorded EEG samples is that they are not equally reliable, due to artifacts at the time of recording. EEG signal classification algorithms should have a mechanism to handle this issue. It seems that adaptive classifiers can be useful for biological signals such as EEG. In this paper, a general adaptive method named weighted distance nearest neighbor (WDNN) is applied to EEG signal classification to tackle this problem. This classification algorithm assigns a weight to each training sample to control its influence in classifying test samples. The weights of the training samples are used to find the nearest neighbor of an input query pattern. To assess the performance of this scheme, EEG signals of thirteen schizophrenic patients and eighteen normal subjects are analyzed for the classification of these two groups. Several features, including fractal dimension, band power and the autoregressive (AR) model, are extracted from the EEG signals. The classification results are evaluated using leave-one-subject-out cross-validation for reliable estimation. The results indicate that the combination of WDNN and the selected features can significantly outperform the basic nearest-neighbor method and the other methods proposed in the past for the classification of these two groups. Therefore, this method can be a complementary tool for specialists to distinguish schizophrenia disorder.
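The core WDNN idea, scaling each training sample's distance by a per-sample reliability weight, can be sketched as follows. The weight convention here (larger weight = less reliable, hence "farther away") and all the numbers are illustrative assumptions, not the paper's learned values:

```python
import numpy as np

def wdnn_predict(X_train, y_train, sample_weights, x):
    """Weighted-distance nearest neighbor (sketch): each training
    sample's Euclidean distance to the query is scaled by its weight,
    so an unreliable (artifact-laden) sample looks farther away and
    influences the decision less."""
    d = np.linalg.norm(X_train - x, axis=1) * sample_weights
    return y_train[int(np.argmin(d))]

X = np.array([[0.0, 0.0], [1.0, 1.0]])
y = np.array([0, 1])
# The second sample is geometrically closer to the query, but its large
# weight (low reliability) pushes it away, so the first sample wins.
label = wdnn_predict(X, y, np.array([1.0, 10.0]), np.array([0.8, 0.8]))
```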
Text Feature Weighting For Summarization Of Document Bahasa Indonesia Using Genetic Algorithm
Directory of Open Access Journals (Sweden)
Aristoteles.
2012-05-01
This paper aims to perform text feature weighting for summarization of documents in bahasa Indonesia using a genetic algorithm. There are eleven text features, i.e., sentence position (f1), positive keywords in sentence (f2), negative keywords in sentence (f3), sentence centrality (f4), sentence resemblance to the title (f5), sentence inclusion of name entity (f6), sentence inclusion of numerical data (f7), sentence relative length (f8), bushy path of the node (f9), summation of similarities for each node (f10), and latent semantic feature (f11). We investigate the effect of the first ten sentence features on the summarization task. Then, we use the latent semantic feature to increase the accuracy. All feature score functions are used to train a genetic algorithm model to obtain a suitable combination of feature weights. Evaluation of the text summarization uses the F-measure, which is directly related to the compression rate. The results showed that adding f11 increases the F-measure by 3.26% and 1.55% for compression ratios of 10% and 30%, respectively. On the other hand, it decreases the F-measure by 0.58% for a compression ratio of 20%. Analysis of the text feature weights showed that using only f2, f4, f5, and f11 can deliver a performance similar to using all eleven features.
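The scoring scheme the GA tunes is a weighted sum of the per-sentence feature values f1..f11; top-scoring sentences form the summary at a given compression ratio. A minimal sketch with two invented features and an invented weight vector (the GA search itself is omitted):

```python
def sentence_score(feature_scores, weights):
    """Weighted sum of a sentence's feature values; the GA searches for
    the weight vector that maximizes the F-measure."""
    return sum(w * f for w, f in zip(weights, feature_scores))

def summarize(sentences, weights, compression=0.3):
    """Keep the top fraction of sentences, ranked by weighted score."""
    k = max(1, int(len(sentences) * compression))
    ranked = sorted(sentences,
                    key=lambda s: sentence_score(s["features"], weights),
                    reverse=True)
    return ranked[:k]

sents = [{"id": 1, "features": [0.9, 0.1]},
         {"id": 2, "features": [0.2, 0.8]},
         {"id": 3, "features": [0.5, 0.5]}]
# A weight vector that only values the first feature picks sentence 1.
top = summarize(sents, weights=[1.0, 0.0])
```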
Protein side-chain packing problem: a maximum edge-weight clique algorithmic approach.
Dukka Bahadur, K C; Tomita, Etsuji; Suzuki, Jun'ichi; Akutsu, Tatsuya
2005-02-01
"Protein Side-chain Packing" has ever-increasing applications in the field of bioinformatics, ranging from the early methods of homology modeling to protein design and protein docking. However, this problem is known to be computationally NP-hard. In this regard, we have developed a novel approach to solve this problem using the notion of a maximum edge-weight clique. Our approach is based on an efficient reduction of the protein side-chain packing problem to a graph, and then solving the reduced graph to find the maximum clique by applying an efficient clique-finding algorithm developed by our co-authors. Since our approach is based on deterministic algorithms, in contrast to the various existing algorithms based on heuristic approaches, our algorithm guarantees finding an optimal solution. We have tested this approach to predict the side-chain conformations of a set of proteins and have compared the results with other existing methods. We have found that our results are favorably comparable to or better than the results produced by the existing methods. As our test set contains a protein of 494 residues, we have obtained considerable improvement in terms of the size of the proteins and in terms of the efficiency and the accuracy of prediction.
Automatic Weight Selection Algorithm for Designing H Infinity controller for Active Magnetic Bearing
Directory of Open Access Journals (Sweden)
Sarath S Nair
2011-01-01
In recent times, the active magnetic bearing has gained wide acceptance in industries and other special systems. Current research focuses on improving the disturbance rejection properties of magnetic bearings so that they work well in industrial environments. So far, many controllers have been developed to control the system, of which the H∞ controller is found to guarantee robustness and performance. In this paper, an automatic weight selection algorithm is proposed to automatically design a robust H∞ controller for an active magnetic bearing system, and a detailed disturbance analysis is carried out. The paper focuses on the controller implementation point of view and analyses the variation in control current, peak responses and steady-state error of the developed controller. Comparison with a well-tuned PID controller shows the efficacy of the H∞ controller designed using the proposed algorithm.
A Study on BPT Algorithm Weighting for the SIRT Algorithm
Institute of Scientific and Technical Information of China (English)
牛法富; 许令周
2011-01-01
A weighted algorithm is put forward for the SIRT algorithm in acoustic tomography. A slowness distribution, taken as the initial value, is first obtained with the BPT algorithm; the difference between the slowness at each point in space and the average slowness is then used as the weighting coefficient of the ray matrix in the SIRT algorithm. An example shows that the improved algorithm maintains the convergence behaviour of the SIRT algorithm while speeding up its convergence rate, improving computational efficiency and, in particular, accuracy, and is thus a feasible weighting method.
CHARACTER BASED WEIGHTED SUPPORT THRESHOLD ALGORITHM USING MULTI CRITERIA DECISION MAKING TECHNIQUE
Directory of Open Access Journals (Sweden)
Dr.T.Christopher
2010-07-01
An association rule technique is generally used to generate frequent itemsets from databases and to generate association rules by considering each item in the datasets. However, the values of items differ in many aspects in a number of real applications, such as retail marketing and network logs, and the differences between items have a strong impact on decision making in these applications. Therefore, traditional Association Rule Mining (ARM) cannot meet the demands arising from these applications. In this paper, a new approach is introduced for computing the profit weight of an item and generating frequent itemsets with a minimum support threshold. The profit, or importance, of the items in the itemsets is computed based on subjective measures of item characteristics through the proposed Global Profit Weight (GPW) algorithm, using a multi-criteria decision-making technique to improve the quality of the output.
Weight optimization of large span steel truss structures with genetic algorithm
Energy Technology Data Exchange (ETDEWEB)
Mojolic, Cristian; Hulea, Radu; Pârv, Bianca Roxana [Technical University of Cluj-Napoca, Faculty of Civil Engineering, Department of Structural Mechanics, Str. Constantin Daicoviciu nr. 15, Cluj-Napoca (Romania)
2015-03-10
The paper presents the weight optimization process for the main steel truss that supports the Slatina Sport Hall roof. The structure was loaded with self-weight, dead loads, live loads, snow, wind and temperature, grouped into eleven load cases. The optimization of the structure was performed using genetic algorithms implemented in Matlab code. A total of four different cases were taken into consideration when trying to determine the lowest weight of the structure, depending on the types of connections with the concrete structure (types of supports, bearing modes) and on whether the lower truss chord nodes were allowed to change their vertical position. Restrictions on tension, maximum displacement and buckling were enforced on the elements, and the cross sections are chosen by the program from a user database. The results in each of the four cases were analyzed in terms of weight, element tension, element section and displacement. The paper presents the optimization process and the conclusions drawn.
An Adaptive Defect Weighted Sampling Algorithm to Design Pseudoknotted RNA Secondary Structures.
Zandi, Kasra; Butler, Gregory; Kharma, Nawwaf
2016-01-01
Computational design of RNA sequences that fold into targeted secondary structures has many applications in biomedicine, nanotechnology and synthetic biology. An RNA molecule is made of different types of secondary structure elements and an important RNA element named pseudoknot plays a key role in stabilizing the functional form of the molecule. However, due to the computational complexities associated with characterizing pseudoknotted RNA structures, most of the existing RNA sequence designer algorithms generally ignore this important structural element and therefore limit their applications. In this paper we present a new algorithm to design RNA sequences for pseudoknotted secondary structures. We use NUPACK as the folding algorithm to compute the equilibrium characteristics of the pseudoknotted RNAs, and describe a new adaptive defect weighted sampling algorithm named Enzymer to design low ensemble defect RNA sequences for targeted secondary structures including pseudoknots. We used a biological data set of 201 pseudoknotted structures from the Pseudobase library to benchmark the performance of our algorithm. We compared the quality characteristics of the RNA sequences we designed by Enzymer with the results obtained from the state of the art MODENA and antaRNA. Our results show our method succeeds more frequently than MODENA and antaRNA do, and generates sequences that have lower ensemble defect, lower probability defect and higher thermostability. Finally by using Enzymer and by constraining the design to a naturally occurring and highly conserved Hammerhead motif, we designed 8 sequences for a pseudoknotted cis-acting Hammerhead ribozyme. Enzymer is available for download at https://bitbucket.org/casraz/enzymer.
Heavy traffic queue length behavior in a switch under the MaxWeight algorithm
Directory of Open Access Journals (Sweden)
Siva Theja Maguluri
2016-11-01
We consider a switch operating under the MaxWeight scheduling algorithm, under any traffic pattern such that all the ports are loaded. This system is interesting to study since the queue lengths exhibit a multi-dimensional state-space collapse in the heavy-traffic regime. We use a Lyapunov-type drift technique to characterize the heavy-traffic behavior of the expectation of the sum of queue lengths in steady state, under the assumption that all ports are saturated and all queues receive non-zero traffic. Under these conditions, we show that the heavy-traffic scaled queue length is given by (1 − 1/(2n))‖σ‖², where σ is the vector of the standard deviations of arrivals to each port in the heavy-traffic limit. In the special case of uniform Bernoulli arrivals, the corresponding formula is given by n − 3/2 + 1/(2n). The result shows that the heavy-traffic scaled queue length has optimal scaling with respect to n, thus settling one version of an open conjecture; in fact, it is shown that the heavy-traffic queue length is at most within a factor of two from the optimal. We then consider certain asymptotic regimes where the load of the system scales simultaneously with the number of ports. We show that the MaxWeight algorithm has optimal queue length scaling behavior provided that the arrival rate approaches capacity sufficiently fast.
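The uniform-Bernoulli formula is consistent with the general one under the reading that ‖σ‖² → n − 1 as the load approaches capacity (each of the n² queues has Bernoulli variance tending to (1/n)(1 − 1/n), summed over n² queues). That substitution is our interpretation of the abstract; the arithmetic itself is a quick check:

```python
# Check: (1 - 1/(2n)) * (n - 1)  ==  n - 3/2 + 1/(2n)
n = 8
norm_sigma_sq = n - 1                       # ‖σ‖² for uniform Bernoulli arrivals as load -> 1
general = (1 - 1 / (2 * n)) * norm_sigma_sq # general heavy-traffic formula
special = n - 3 / 2 + 1 / (2 * n)           # closed form quoted for the Bernoulli case
```

Expanding (1 − 1/(2n))(n − 1) = n − 1 − 1/2 + 1/(2n) = n − 3/2 + 1/(2n), so the two expressions agree for every n.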
Institute of Scientific and Technical Information of China (English)
LI Qiang; WU Jianxin; SUN Yan
2009-01-01
Dynamic optimization of an electromechanical coupling system is a significant engineering problem in the field of mechatronics. The performance improvement of electromechanical equipment depends on the system design parameters. For the spindle unit of a refitted machine tool for solid rockets, the vibration acceleration of the tool is taken as the objective function, and the electromechanical system design parameters are taken as the design variables. A dynamic optimization model is set up by adopting the Lagrange-Maxwell equations, the Park transform and electromechanical system energy equations. In the search for a highly efficient optimization method, an exponential function is adopted as the inertia weight function of the particle swarm optimization algorithm, forming the exponential inertia weight particle swarm algorithm (EPSA), which is applied to solve the dynamic optimization problem of the electromechanical system. The probability density function of the EPSA is presented and used to perform a convergence analysis. After calculation, the optimized design parameters of the spindle unit are obtained in a limited time period, and the vibration acceleration of the tool is greatly decreased by the optimized design parameters. The work in this paper shows that the dynamic optimization problem of an electromechanical system can be solved by combining system dynamic analysis with a modified particle swarm optimization, and that such methods can be applied in the design of robots, NC machines, and other electromechanical equipment.
Directory of Open Access Journals (Sweden)
R.Sugumar
2011-12-01
The security of large databases containing crucial information becomes a serious issue when data are shared over a network, where they must be protected against unauthorized access. Privacy-preserving data mining is a new research trend for protecting private data in data mining and statistical databases. Association analysis is a powerful tool for discovering relationships hidden in large databases, and association rule hiding algorithms achieve strong, efficient performance in protecting confidential and crucial data; data modification and rule hiding are among the most important approaches for securing data. The objective of the proposed Weight Based Sorting Distortion (WBSD) algorithm is to distort certain data that satisfy a particular sensitive rule. It takes the transactions that support a sensitive rule, assigns each a priority, and sorts them in ascending order according to the priority value of each rule; it then uses these weights to compute the priority value of each transaction according to how weakly the transaction supports the rule. Data distortion of this form is one of the important methods to avoid this kind of scalability issue.
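The select-and-sort step of the approach can be sketched as follows. The priority function (summing weights of the sensitive items a transaction contains) and all the data are illustrative assumptions, not the WBSD paper's exact formula:

```python
def wbsd_order(transactions, sensitive_items, item_weights):
    """Sketch of the Weight Based Sorting Distortion idea: keep only the
    transactions that support a sensitive rule, give each a priority
    from the weights of the sensitive items it contains, and return
    them in ascending priority (weakest supporters distorted first)."""
    def priority(t):
        return sum(item_weights[i] for i in t if i in sensitive_items)
    supporters = [t for t in transactions if sensitive_items & set(t)]
    return sorted(supporters, key=priority)

txns = [["a", "b"], ["b", "c"], ["a", "c"]]
order = wbsd_order(txns,
                   sensitive_items={"a", "c"},
                   item_weights={"a": 2.0, "b": 1.0, "c": 1.0})
```

All three transactions touch a sensitive item here, so they are simply returned weakest-first.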
AN OPTIMIZED WEIGHT BASED CLUSTERING ALGORITHM IN HETEROGENEOUS WIRELESS SENSOR NETWORKS
Directory of Open Access Journals (Sweden)
Babu.N.V
2012-12-01
The last few years have seen an increased interest in the potential use of wireless sensor networks (WSNs) in various fields, such as disaster management, battlefield surveillance, and border security surveillance. In such applications, a large number of sensor nodes are deployed, which are often unattended and work autonomously. The process of dividing the network into interconnected substructures is called clustering, and the interconnected substructures are called clusters. The cluster head (CH) of each cluster acts as a coordinator within the substructure. Each CH acts as a temporary base station within its zone or cluster and also communicates with other CHs. Clustering is a key technique used to extend the lifetime of a sensor network by reducing energy consumption, and it can also increase network scalability. Researchers in all fields of wireless sensor networks often assume that nodes are homogeneous, but some nodes may have different characteristics that can prolong the lifetime of a WSN and improve its reliability. We have proposed an algorithm for better cluster head selection based on weights for the different parameters that influence energy consumption, including distance from the base station as a new parameter, to reduce the number of transmissions and the energy consumed by sensor nodes. Finally, the proposed algorithm is compared with the WCA and IWCA algorithms in terms of the number of clusters and energy consumption.
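Weight-based cluster-head selection of this kind scores each node on a weighted combination of parameters and elects the best scorer. The parameter names, the sign conventions, and the coefficients below are illustrative assumptions, not the paper's values:

```python
def ch_score(node, w_dist=0.4, w_energy=0.3, w_degree=0.2, w_mobility=0.1):
    """Illustrative composite cluster-head weight: higher residual
    energy and node degree help a candidate; greater distance to the
    base station and mobility hurt it. Coefficients are made up."""
    return (w_energy * node["energy"] + w_degree * node["degree"]
            - w_dist * node["dist_to_bs"] - w_mobility * node["mobility"])

nodes = [
    {"id": 1, "energy": 0.9, "degree": 0.5, "dist_to_bs": 0.2, "mobility": 0.1},
    {"id": 2, "energy": 0.4, "degree": 0.5, "dist_to_bs": 0.8, "mobility": 0.1},
]
# The high-energy node close to the base station is elected cluster head.
head = max(nodes, key=ch_score)
```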
Energy Technology Data Exchange (ETDEWEB)
Goyal, T. [Indian Institute of Technology Kanpur. Dept. of Aerospace Engineering, UP (India); Gupta, R. [Infotech Enterprises Ltd., Hyderabad (India)
2012-07-01
In this work, minimum weight-cost design for laminated composites is presented. A genetic algorithm has been developed for the optimization process. Maximum-Stress, Tsai-Wu and Tsai-Hill failure criteria have been used, along with a buckling analysis parameter, for the margin-of-safety calculations. The design variables include three materials, namely Carbon-Epoxy, Glass-Epoxy and Kevlar-Epoxy; the number of plies; ply orientation angles, varying from -75 deg. to 90 deg. in intervals of 15 deg.; and ply thicknesses, which depend on the material in use. The total cost is the sum of material cost and layup cost, where layup cost is a function of the ply angle. Validation studies for solution convergence and weight-cost inverse proportionality are carried out. One set of results for shear loading is also validated against the literature for a particular case. A Pareto-optimal solution set is demonstrated for biaxial loading conditions and then extended to applied moments. It is found that the global optimum for a given loading condition is a function of the failure criterion for shear loading, with the Maximum-Stress criterion giving the lightest-cheapest and the Tsai-Wu criterion the heaviest-costliest optimized laminates. Optimized weight results from the three criteria are plotted for a comparative study. This work yields a globally optimized laminated composite as well as a set of other locally optimal laminates for a given set of loading conditions. The current algorithm also provides adequate data to supplement the use of different failure criteria for varying loadings. This work can find use in industry and/or academia considering the increased use of laminated composites in modern wind blades. (Author)
Wu, Tin-Yu; Chang, Tse; Chu, Teng-Hao
2017-02-01
Many data mining systems adopt Artificial Neural Networks (ANNs), but training an ANN raises many issues: the number of labelled samples, training time and performance, and the choice of hidden layers and transfer functions. When the compared results are not as expected, it is not clear which dimension causes the deviation, mainly because an ANN trains by modifying weights to match the expected output rather than by improving the underlying image feature-extraction algorithm. To address these problems, this paper proposes a method to assist ANN-based image data analysis. Normally a parameter is set as the value used to extract feature vectors from an image; we treat this value as a weight. The experiment uses values extracted from Speeded Up Robust Features (SURF) feature points as the training basis; SURF itself extracts different feature points depending on the extracted values. We first perform initial semi-supervised clustering on these values and use Modified Fuzzy K-Nearest Neighbors (MFKNN) for training and classification. Unknown images are not matched by exhaustive one-to-one comparison but only against group centroids, mainly to save effort and speed the process up; the retrieved results are then observed and analyzed. The method clusters and classifies using the nature of image feature points, assigns new feature points to groups with high error rates, feeds them into the input layer of the ANN for training, and finally makes a comparative analysis with the Back-Propagation Neural Network (BPN) of a Genetic Algorithm-Artificial Neural Network.
Jafarizadeh, Saber
2010-01-01
Providing an analytical solution for the problem of finding the Fastest Distributed Consensus (FDC) is one of the challenging problems in the field of sensor networks. Most of the methods proposed so far deal with the FDC averaging algorithm problem by numerical convex optimization, and in general no closed-form solution for finding FDC has been offered up to now, except in [3], where the conjectured answer for the path network was proved. In this work we present an analytical solution to the Fastest Distributed Consensus problem for the path network using semidefinite programming, in particular by solving the slackness conditions: the optimal weights are obtained by inductively comparing the characteristic polynomials initiated by the slackness conditions.
Enayatifar, Rasul; Abdullah, Abdul Hanan; Lee, Malrey
2013-09-01
In recent years, there has been increasing interest in the security of digital images. This study focuses on binary image encryption using the weighted discrete imperialist competitive algorithm (WDICA). In the proposed method, a chaotic map is first used to create a specified number of cipher images. Then, to improve the results, WDICA is applied to the cipher images. In this study, entropy and correlation coefficient are used as WDICA's fitness functions. The goal is to maximize the entropy and minimize correlation coefficients. The advantage of this method is its ability to optimize the outcome of all iterations using WDICA. Simulation results show that WDICA not only demonstrates excellent encryption but also resists various typical attacks. The obtained correlation coefficient and entropy of the proposed WDICA are approximately 0.004 and 7.9994, respectively.
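The two WDICA fitness terms named above (entropy, to be maximized, and correlation, to be minimized) can be computed as follows for a grey-level image given as a flat list of 0-255 values; using the horizontally adjacent pixel for the correlation is an assumption:

```python
import math

# Sketch of the two fitness terms: Shannon entropy and adjacent-pixel correlation.
def entropy(pixels):
    n = len(pixels)
    counts = {}
    for p in pixels:
        counts[p] = counts.get(p, 0) + 1
    # Shannon entropy in bits; 8.0 is the maximum for 256 grey levels.
    return -sum(c / n * math.log2(c / n) for c in counts.values())

def adjacent_correlation(pixels):
    # Pearson correlation between each pixel and its neighbour in the flat list.
    x, y = pixels[:-1], pixels[1:]
    mx, my = sum(x) / len(x), sum(y) / len(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = math.sqrt(sum((a - mx) ** 2 for a in x))
    vy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (vx * vy) if vx and vy else 0.0
```

A well-encrypted image should score near the 8-bit entropy ceiling (the abstract reports 7.9994) with near-zero adjacent-pixel correlation.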
Liu, Chen-Yi; Goertzen, Andrew L
2013-07-21
An iterative position-weighted centre-of-gravity algorithm was developed and tested for positioning events in a silicon photomultiplier (SiPM)-based scintillation detector for positron emission tomography. The algorithm used a Gaussian-based weighting function centred at the current estimate of the event location. The algorithm was applied to the signals from a 4 × 4 array of SiPM detectors that used individual channel readout and a LYSO:Ce scintillator array. Three scintillator array configurations were tested: single layer with 3.17 mm crystal pitch, matched to the SiPM size; single layer with 1.5 mm crystal pitch; and dual layer with 1.67 mm crystal pitch and a ½ crystal offset in the X and Y directions between the two layers. The flood histograms generated by this algorithm were shown to be superior to those generated by the standard centre of gravity. The width of the Gaussian weighting function of the algorithm was optimized for different scintillator array setups. The optimal width of the Gaussian curve was found to depend on the amount of light spread. The algorithm required less than 20 iterations to calculate the position of an event. The rapid convergence of this algorithm will readily allow for implementation on a front-end detector processing field programmable gate array for use in improved real-time event positioning and identification.
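The iterative Gaussian-weighted centre-of-gravity idea can be sketched in one dimension; the signal values, sigma, and iteration count below are illustrative, and the real detector operates on a 4 × 4 array:

```python
import math

# Sketch of an iterative Gaussian-weighted centre-of-gravity positioner (1-D).
def iterative_cog(positions, signals, sigma=3.0, iters=20):
    # Start from the plain centre of gravity.
    est = sum(p * s for p, s in zip(positions, signals)) / sum(signals)
    for _ in range(iters):
        # Re-weight each channel by a Gaussian centred at the current estimate,
        # suppressing the tails that bias the plain centre of gravity.
        w = [s * math.exp(-((p - est) ** 2) / (2 * sigma ** 2))
             for p, s in zip(positions, signals)]
        est = sum(p * wi for p, wi in zip(positions, w)) / sum(w)
    return est
```

Because distant channels are down-weighted, the estimate is pulled toward the true interaction position instead of being dragged outward by light spread, which is why the flood histograms improve.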
Hubs and authorities in the world trade network using a weighted HITS algorithm.
Deguchi, Tsuyoshi; Takahashi, Katsuhide; Takayasu, Hideki; Takayasu, Misako
2014-01-01
We investigate the economic hubs and authorities of the world trade network (WTN) from 1992 to 2012, an era of rapid economic globalization. Using a well-defined weighted hyperlink-induced topic search (HITS) algorithm, we can calculate the values of the weighted HITS hub and authority for each country in a conjugate way. In the context of the WTN, authority values are large for countries with significant imports from large hub countries, and hub values are large for countries with significant exports to high-authority countries. The United States was the largest economic authority in the WTN from 1992 to 2012. The authority value of the United States has declined since 2001, and China has now become the largest hub in the WTN. At the same time, China's authority value has grown as China is transforming itself from the "factory of the world" to the "market of the world." European countries show a tendency to trade mostly within the European Union, which has decreased Europe's hub and authority values. Japan's authority value has increased slowly, while its hub value has declined. These changes are consistent with Japan's transition from being an export-driven economy in its high economic growth era in the latter half of the twentieth century to being a more mature, economically balanced nation.
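A weighted HITS iteration of the kind described above can be sketched as follows; the toy trade matrix and the sum-to-one normalization are assumptions:

```python
# Minimal weighted-HITS sketch: W[i][j] is the export value from country i
# to country j (toy numbers).
def weighted_hits(W, iters=100):
    n = len(W)
    hub, auth = [1.0] * n, [1.0] * n
    for _ in range(iters):
        # Authority: large for countries importing heavily from strong hubs.
        auth = [sum(W[i][j] * hub[i] for i in range(n)) for j in range(n)]
        # Hub: large for countries exporting heavily to strong authorities.
        hub = [sum(W[i][j] * auth[j] for j in range(n)) for i in range(n)]
        for v in (auth, hub):
            s = sum(v)
            for k in range(n):
                v[k] /= s
    return hub, auth
```

The two updates are conjugate, exactly as the abstract describes: each score is defined in terms of the other, and the iteration converges to the principal singular directions of the trade matrix.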
Institute of Scientific and Technical Information of China (English)
Anonymous
2006-01-01
A new method for power quality (PQ) disturbance identification is brought forward, based on combining a neural network with a least squares (LS) weighted fusion algorithm. The characteristic components of PQ disturbances are first extracted by an improved phase-locked loop (PLL) system, and then five child BP ANNs with different structures are trained and adopted to identify the PQ disturbances respectively. The combining neural network fuses the identification results of these child ANNs with the LS weighted fusion algorithm and identifies PQ disturbances from the fused result. Compared with a single neural network, the combining one with LS weighted fusion can still identify the PQ disturbances correctly when noise is strong, whereas a single neural network may fail in this case; the combining neural network is therefore more reliable. The simulation results prove these conclusions.
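The least-squares weighted fusion step can be sketched as follows: given the outputs of several child estimators on validation data, solve the normal equations for the fusion weights. This is a generic LS fusion in pure Python, not necessarily the paper's exact formulation:

```python
# Toy least-squares weighted fusion of several estimators' outputs.
def ls_fusion_weights(preds, target):
    """preds: list of m prediction lists; returns m fusion weights
    minimizing sum_k (sum_i w_i * preds[i][k] - target[k])^2."""
    m = len(preds)
    # Normal equations A w = b with A = P P^T, b = P t.
    A = [[sum(pi * pj for pi, pj in zip(preds[i], preds[j])) for j in range(m)]
         for i in range(m)]
    b = [sum(p * t for p, t in zip(preds[i], target)) for i in range(m)]
    # Gaussian elimination on the (tiny) system.
    for col in range(m):
        piv = A[col][col]
        for r in range(col + 1, m):
            f = A[r][col] / piv
            A[r] = [a - f * c for a, c in zip(A[r], A[col])]
            b[r] -= f * b[col]
    w = [0.0] * m
    for r in range(m - 1, -1, -1):
        w[r] = (b[r] - sum(A[r][c] * w[c] for c in range(r + 1, m))) / A[r][r]
    return w
```

The fused output is then the weighted sum of the child networks' outputs, so a noisy child that correlates poorly with the target automatically receives a small weight.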
Sheikhan, Mansour; Abbasnezhad Arabi, Mahdi; Gharavian, Davood
2015-10-01
Artificial neural networks are efficient models in pattern recognition applications, but their performance depends on employing a suitable structure and connection weights. This study used a hybrid method to obtain the optimal weight set and architecture of a recurrent neural emotion classifier, based on the gravitational search algorithm (GSA) and its binary version (BGSA), respectively. By considering features of the speech signal related to prosody, voice quality, and spectrum, a rich feature set was constructed; a fast feature selection method was then employed to select the more efficient features. The performance of the proposed hybrid GSA-BGSA method was compared with similar hybrid methods based on the particle swarm optimisation (PSO) algorithm and its binary version, on PSO and the discrete firefly algorithm, and on a hybrid of error back-propagation and genetic algorithms. Experimental tests on the Berlin emotional database demonstrated the superior performance of the proposed method, which uses a lighter network structure.
A Weighted Spatial-Spectral Kernel RX Algorithm and Efficient Implementation on GPUs
Directory of Open Access Journals (Sweden)
Chunhui Zhao
2017-02-01
Full Text Available The kernel RX (KRX) detector proposed by Kwon and Nasrabadi exploits a kernel function to obtain a better detection performance. However, it still has two limits that can be improved. On the one hand, reasonable integration of spatial-spectral information can be used to further improve its detection accuracy. On the other hand, parallel computing can be used to reduce the processing time in available KRX detectors. Accordingly, this paper presents a novel weighted spatial-spectral kernel RX (WSSKRX) detector and its parallel implementation on graphics processing units (GPUs). The WSSKRX utilizes the spatial neighborhood resources to reconstruct the testing pixels by introducing a spectral factor and a spatial window, thereby effectively reducing the interference of background noise. Then, the kernel function is redesigned as a mapping trick in a KRX detector to implement the anomaly detection. In addition, a powerful architecture based on the GPU technique is designed to accelerate WSSKRX. To substantiate the performance of the proposed algorithm, experiments are conducted on both synthetic and real data.
Features of Discontinuous Galerkin Algorithms in Gkeyll, and Exponentially-Weighted Basis Functions
Hammett, G. W.; Hakim, A.; Shi, E. L.
2016-10-01
There are various versions of Discontinuous Galerkin (DG) algorithms that have interesting features that could help with challenging problems of higher-dimensional kinetic problems (such as edge turbulence in tokamaks and stellarators). We are developing the gyrokinetic code Gkeyll based on DG methods. Higher-order methods do more FLOPS to extract more information per byte, thus reducing memory and communication costs (which are a bottleneck for exascale computing). The inner product norm can be chosen to preserve energy conservation with non-polynomial basis functions (such as Maxwellian-weighted bases), which alternatively can be viewed as a Petrov-Galerkin method. This allows a full-F code to benefit from similar Gaussian quadrature employed in popular δf continuum gyrokinetic codes. We show some tests for a 1D Spitzer-Härm heat flux problem, which requires good resolution for the tail. For two velocity dimensions, this approach could lead to a factor of 10 or more speedup. Supported by the Max-Planck/Princeton Center for Plasma Physics, the SciDAC Center for the Study of Plasma Microturbulence, and DOE Contract DE-AC02-09CH11466.
Institute of Scientific and Technical Information of China (English)
Ibrahim M. AL-HARKAN
2005-01-01
In this paper, a constrained genetic algorithm (CGA) is proposed to solve the single machine total weighted tardiness problem. The proposed CGA incorporates dominance rules for the problem under consideration into the GA operators. This incorporation should enable the proposed CGA to obtain close-to-optimal solutions with much less deviation and much less computational effort than the conventional GA (UGA). Several experiments were performed to compare the quality of solutions obtained by the three versions of both the CGA and the UGA with the results obtained by a dynamic programming approach. The computational results showed that the CGA was better than the UGA in both the quality of solutions obtained and the CPU time needed to obtain close-to-optimal solutions. The three versions of the CGA reduced the percentage deviation by 15.6%, 61.95%, and 25%, respectively, and obtained close-to-optimal solutions with 59% lower CPU time than the three versions of the UGA demanded. The CGA performed better than the UGA in terms of solution quality and computational effort when the population size and the number of generations were smaller.
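The objective minimized by the CGA, total weighted tardiness on a single machine, can be computed as follows for a given job sequence (toy data):

```python
# Total weighted tardiness of a job sequence on a single machine.
def total_weighted_tardiness(sequence, proc, due, weight):
    t, twt = 0, 0
    for j in sequence:
        t += proc[j]                        # completion time of job j
        twt += weight[j] * max(0, t - due[j])  # weighted tardiness of job j
    return twt
```

A GA for this problem evolves permutations of the job set and uses this value (or its negative) as fitness; the dominance rules mentioned in the abstract prune permutations that provably cannot improve on a neighbour.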
An improved algorithm for weighting keywords in web documents
Institute of Scientific and Technical Information of China (English)
孙双; 贺樑; 杨静; 顾君忠
2008-01-01
In this paper, an improved algorithm, the web-based keyword weight algorithm (WKWA), is presented to weight keywords in web documents. WKWA takes into account the representation features of web documents and the advantages of the TF*IDF, TFC and ITC algorithms in order to make it more appropriate for web documents. Meanwhile, the presented algorithm is applied to an improved vector space model (IVSM). A real system has been implemented for calculating semantic similarities of web documents. Four experiments have been carried out: keyword weight calculation, feature item selection, semantic similarity calculation, and WKWA time performance. The results demonstrate that the accuracy of keyword weighting and of semantic similarity calculation is improved.
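A keyword-weighting scheme of the kind described, classic TF*IDF scaled by a factor for the HTML field a term appears in, can be sketched as follows; the tag factors and the document representation are assumptions, not WKWA's exact definitions:

```python
import math

# Illustrative web keyword weighting: TF*IDF boosted by the HTML field.
# The tag factors below are assumptions for the sketch.
TAG_FACTOR = {"title": 3.0, "h1": 2.0, "body": 1.0}

def wkwa_weight(term, doc, corpus):
    """doc: dict mapping tag name -> list of terms in that field."""
    tf = sum(terms.count(term) for terms in doc.values())
    df = sum(1 for d in corpus if any(term in terms for terms in d.values()))
    if tf == 0 or df == 0:
        return 0.0
    idf = math.log(len(corpus) / df)
    # Boost by the most prominent field the term occurs in.
    boost = max(TAG_FACTOR.get(tag, 1.0)
                for tag, terms in doc.items() if term in terms)
    return tf * idf * boost
```

The point of the field boost is that a term in a page's title says more about the page than the same term buried in the body, which plain TF*IDF cannot distinguish.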
BiCluE - Exact and heuristic algorithms for weighted bi-cluster editing of biomedical data
DEFF Research Database (Denmark)
Sun, Peng; Guo, Jiong; Baumbach, Jan
2013-01-01
… different types. Bi-cluster editing, as a special case of clustering, which partitions two different types of data simultaneously, might be used for several biomedical scenarios. However, the underlying algorithmic problem is NP-hard. RESULTS: Here we contribute BiCluE, a software package designed to solve the weighted bi-cluster editing problem. It implements (1) an exact algorithm based on fixed-parameter tractability and (2) a polynomial-time greedy heuristic based on solving the hardest part, edge deletions, first. We evaluated its performance on artificial graphs. Afterwards we exemplarily applied our implementation to real-world biomedical data, GWAS data in this case. BiCluE generally works on any kind of data that can be modeled as (weighted or unweighted) bipartite graphs. CONCLUSIONS: To our knowledge, this is the first software package solving the weighted bi-cluster editing …
Zheng, Jun-Xi; Zhang, Ping; Li, Fang; Du, Guang-Long
2016-09-01
Although the sequence-dependent setup times flowshop problem with the total weighted tardiness minimization objective exists widely in industry, work on the problem has been scant in the existing literature. To the authors' best knowledge, the NEH-EWDD heuristic and the Iterated Greedy (IG) algorithm with descent local search have been regarded as the high-performing heuristic and the state-of-the-art algorithm for the problem, both of which are based on insertion search. In this article, first, an efficient backtracking algorithm and a novel heuristic (HPIS) are presented for insertion search. Accordingly, two heuristics are introduced: one is NEH-EWDD with HPIS for insertion search, and the other combines NEH-EWDD with both methods. Furthermore, the authors improve the IG algorithm with the proposed methods. Finally, experimental results show that both the proposed heuristics and the improved IG (IG*) significantly outperform the original ones.
Akusoba, Ikemefuna; Birriel, T Javier; El Chaar, Maher
2016-01-01
There are no clinical guidelines or published studies addressing excessive weight loss and protein-calorie malnutrition following a standard Roux-en-Y gastric bypass (RYGB) to guide nutritional management and treatment strategies. This study demonstrates the presentation, clinical algorithm, surgical technique, and outcomes of patients afflicted with and successfully treated for excessive weight loss following a standard RYGB. Three patients were successfully reversed to normal anatomy after evaluation, management, and treatment by a multidisciplinary team. Lowest BMI (kg/m2) was 18.9, 17.9, and 14.2, respectively. Twelve-month post-operative BMI (kg/m2) was 28.9, 22.8, and 26.1, respectively. Lowest weight (lbs) was 117, 128, and 79, respectively. Twelve-month post-operative weight (lbs) was 179, 161, and 145, respectively. A pre-reversal gastrostomy tube was inserted into the remnant stomach to demonstrate weight gain and improve nutritional status prior to reversal to the original anatomy. We propose a practical clinical algorithm for the work-up and management of patients with excessive weight loss and protein-calorie malnutrition after standard RYGB, including reversal to normal anatomy.
Zhang, Yanjun; Zhao, Yu; Fu, Xinghu; Xu, Jinrui
2016-10-01
A novel particle swarm optimization algorithm based on adaptive inertia weight and chaos optimization is proposed for extracting the features of Brillouin scattering spectra. First, an adaptive inertia weight parameter for the velocity is introduced into the basic particle swarm algorithm. Based on the particles' current iteration number and adaptation value, the algorithm can change the weight coefficient and adjust the speed at which the particles search the space, so the local optimization ability is enhanced. Second, a logistic self-mapping chaotic search is carried out within the particle swarm optimization algorithm, which makes it jump out of local optima. The novel algorithm is compared with the finite element analysis-Levenberg Marquardt algorithm, the particle swarm optimization-Levenberg Marquardt algorithm and the plain particle swarm optimization algorithm by varying the linewidth, the signal-to-noise ratio and the linear weight ratio of the Brillouin scattering spectra. The algorithm is then applied to feature extraction of Brillouin scattering spectra at different temperatures. The simulation analysis and experimental results show that this algorithm achieves a high fitting degree and a small Brillouin frequency shift error for different linewidths, SNRs and linear weight ratios. Therefore, this algorithm can be applied to distributed optical fiber sensing systems based on Brillouin optical time domain reflection, effectively improving the accuracy of Brillouin frequency shift extraction.
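The two ingredients named above, an iteration-adaptive inertia weight and a logistic chaotic search, can be combined in a minimal PSO sketch; all parameter choices (inertia schedule, acceleration constants, restart rule) are illustrative, and the real algorithm fits Brillouin spectra rather than a toy function:

```python
import random

# Minimal PSO sketch with adaptive inertia weight and a logistic-map
# chaotic restart of the worst particle.
def pso_chaos(f, lo, hi, n=20, iters=200, seed=1):
    rnd = random.Random(seed)
    xs = [rnd.uniform(lo, hi) for _ in range(n)]
    vs = [0.0] * n
    pbest = xs[:]
    gbest = min(xs, key=f)
    z = 0.7  # state of the logistic chaotic map
    for it in range(iters):
        w = 0.9 - 0.5 * it / iters  # adaptive inertia: explore early, exploit late
        for i in range(n):
            r1, r2 = rnd.random(), rnd.random()
            vs[i] = w * vs[i] + 2.0 * r1 * (pbest[i] - xs[i]) + 2.0 * r2 * (gbest - xs[i])
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
        z = 4.0 * z * (1.0 - z)        # logistic self-mapping chaotic search
        worst = max(range(n), key=lambda i: f(xs[i]))
        xs[worst] = lo + z * (hi - lo)  # chaotic kick out of local traps
    return gbest
```

On a unimodal objective the chaotic kick is harmless; on multimodal spectra-fitting objectives it is what lets the swarm escape local optima, as the abstract claims.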
Jha, Abhinav K; Kupinski, Matthew A; Rodríguez, Jeffrey J; Stephen, Renu M; Stopeck, Alison T
2012-07-07
In many studies, the estimation of the apparent diffusion coefficient (ADC) of lesions in visceral organs in diffusion-weighted (DW) magnetic resonance images requires an accurate lesion-segmentation algorithm. To evaluate these lesion-segmentation algorithms, region-overlap measures are used currently. However, the end task from the DW images is accurate ADC estimation, and the region-overlap measures do not evaluate the segmentation algorithms on this task. Moreover, these measures rely on the existence of gold-standard segmentation of the lesion, which is typically unavailable. In this paper, we study the problem of task-based evaluation of segmentation algorithms in DW imaging in the absence of a gold standard. We first show that using manual segmentations instead of gold-standard segmentations for this task-based evaluation is unreliable. We then propose a method to compare the segmentation algorithms that does not require gold-standard or manual segmentation results. The no-gold-standard method estimates the bias and the variance of the error between the true ADC values and the ADC values estimated using the automated segmentation algorithm. The method can be used to rank the segmentation algorithms on the basis of both the ensemble mean square error and precision. We also propose consistency checks for this evaluation technique.
Institute of Scientific and Technical Information of China (English)
Belgacem BETTAYEB; Imed KACEM; Kondo H.ADJALLAH
2008-01-01
This article investigates identical parallel machine scheduling with family setup times. The objective function being the weighted sum of completion times, the problem is known to be strongly NP-hard. We propose a constructive heuristic algorithm and three complementary lower bounds. Two of these bounds proceed by eliminating setup times or by distributing each of them over the jobs of the corresponding family, while the third is based on a Lagrangian relaxation. The bounds and the heuristic are incorporated into a branch-and-bound algorithm. The experimental results obtained outperform those of the methods presented in previous works in terms of the size of solved problems.
DEFF Research Database (Denmark)
Yuan, Yan; Ding, Yi
2012-01-01
A multi-state k-out-of-n system model provides a flexible tool for evaluating the vulnerability and reliability of critical infrastructures such as electric power systems. The multi-state weighted k-out-of-n system model is the generalization of the multi-state k-out-of-n system model, where … of the multi-state weighted k-out-of-n systems. The well-known universal generating function (UGF) approach was also used as a counterpart to compare with the developed recursive algorithms, but it is not very efficient. In this paper a transformation of the conventional UGF formula is proposed to develop a UGF …
Degani, Shimon; Peleg, Dori; Bahous, Karina; Leibovitz, Zvi; Shapiro, Israel; Ohel, Gonen
2008-01-01
Objective: The aim of this study was to test whether pattern-recognition classifiers with multiple clinical and sonographic variables could improve the ultrasound prediction of fetal macrosomia over prediction that relies on the commonly used formulas for the sonographic estimation of fetal weight. Methods: The SVM algorithm was used for binary classification between two categories of weight estimation: >4000 g and ≤4000 g. Results: The SVM algorithm predicted macrosomia with a sensitivity of 81%, specificity of 73%, positive predictive value of 81% and negative predictive value of 73%. The comparative figures for the combined criteria based on two commonly used formulas generated from regression analysis were 88.1%, 34%, 65.8% and 66.7%. Conclusions: The SVM algorithm provides a prediction of LGA fetuses comparable to other commonly used formulas generated from regression analysis. The better specificity and better positive predictive value suggest potential value for this method, and further accumulation of data may improve the reliability of this approach. PMID:22439018
Institute of Scientific and Technical Information of China (English)
Li Kai; Yang Shanlin
2008-01-01
A class of nonidentical parallel machine scheduling problems is considered in which the goal is to minimize the total weighted completion time. Models and relaxations are collected. Most of these problems are NP-hard in the strong sense or open problems, therefore approximation algorithms are studied. The review reveals that there exist some potential areas worthy of further research.
Deng, Guanlong; Gu, Xingsheng
2014-03-01
This article presents an enhanced iterated greedy (EIG) algorithm that searches both insert and swap neighbourhoods for the single-machine total weighted tardiness problem with sequence-dependent setup times. Novel elimination rules and speed-ups are proposed for the swap move so that its reduced computational expense makes employing the swap neighbourhood worthwhile. Moreover, a perturbation operator is newly designed as a substitute for the existing destruction and construction procedures, to prevent the search from being attracted to local optima. To validate the proposed algorithm, computational experiments are conducted on a benchmark set from the literature. The results show that the EIG outperforms the existing state-of-the-art algorithms for the considered problem.
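The insert and swap neighbourhoods searched by the EIG algorithm can be enumerated as follows for a permutation of job indices (without the paper's elimination rules and speed-ups):

```python
# The two neighbourhoods of a job sequence: reinsertion and pairwise swap.
def insert_neighbours(seq):
    for i in range(len(seq)):
        for j in range(len(seq)):
            if i != j:
                s = seq[:i] + seq[i + 1:]        # remove the job at position i ...
                yield s[:j] + [seq[i]] + s[j:]   # ... and reinsert it at position j

def swap_neighbours(seq):
    for i in range(len(seq)):
        for j in range(i + 1, len(seq)):
            s = seq[:]
            s[i], s[j] = s[j], s[i]              # exchange the jobs at i and j
            yield s
```

The swap neighbourhood has n(n-1)/2 members versus roughly n² insert moves (some duplicated), but naively evaluating a swap costs more with sequence-dependent setups, which is why the abstract's elimination rules matter.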
Directory of Open Access Journals (Sweden)
Yu-Tzu Chang
2012-01-01
Full Text Available This paper aims to find the optimal set of initial weights to enhance the accuracy of artificial neural networks (ANNs) by using genetic algorithms (GA). The sample in this study included 228 patients with a first low-trauma hip fracture and 215 patients without hip fracture, all of whom were interviewed with 78 questions. We used logistic regression to select 5 important factors (i.e., bone mineral density, experience of fracture, average hand grip strength, intake of coffee, and peak expiratory flow rate) for building artificial neural networks to predict the probabilities of hip fractures. Three-layer (one hidden layer) ANN models with back-propagation training algorithms were adopted. The purpose of this paper is to find the optimal initial weights of neural networks via genetic algorithms to improve the predictability. Area under the ROC curve (AUC) was used to assess the performance of the neural networks. The study results showed that the genetic algorithm obtained an AUC of 0.858±0.00493 on modeling data and 0.802±0.03318 on testing data. These were slightly better than the results of our previous study (0.868±0.00387 and 0.796±0.02559, resp.). Thus, this preliminary study using only a simple GA has been proved effective for improving the accuracy of artificial neural networks.
DEFF Research Database (Denmark)
Ackerman, Margareta; Ben-David, Shai; Branzei, Simina
2012-01-01
We investigate a natural generalization of the classical clustering problem, considering clustering tasks in which different instances may have different weights. We conduct the first extensive theoretical analysis of the influence of weighted data on standard clustering algorithms in both the partitional and hierarchical settings, characterizing the conditions under which algorithms react to weights. Extending a recent framework for clustering algorithm selection, we propose intuitive properties that would allow users to choose between clustering algorithms in the weighted setting and classify …
Beirle, Steffen; Hörmann, Christoph; Jöckel, Patrick; Liu, Song; Penning de Vries, Marloes; Pozzer, Andrea; Sihler, Holger; Valks, Pieter; Wagner, Thomas
2016-07-01
The STRatospheric Estimation Algorithm from Mainz (STREAM) determines stratospheric columns of NO2 which are needed for the retrieval of tropospheric columns from satellite observations. It is based on the total column measurements over clean, remote regions as well as over clouded scenes where the tropospheric column is effectively shielded. The contribution of individual satellite measurements to the stratospheric estimate is controlled by various weighting factors. STREAM is a flexible and robust algorithm and does not require input from chemical transport models. It was developed as a verification algorithm for the upcoming satellite instrument TROPOMI, as a complement to the operational stratospheric correction based on data assimilation. STREAM was successfully applied to the UV/vis satellite instruments GOME 1/2, SCIAMACHY, and OMI. It overcomes some of the artifacts of previous algorithms, as it is capable of reproducing gradients of stratospheric NO2, e.g., related to the polar vortex, and reduces interpolation errors over continents. Based on synthetic input data, the uncertainty of STREAM was quantified as about 0.1-0.2 × 10^15 molecules cm^-2, in accordance with the typical deviations between stratospheric estimates from different algorithms compared in this study.
Harrison, R. J.; Feinberg, J. M.
2007-12-01
First-order reversal curves (FORCs) are a powerful method for characterizing the magnetic hysteresis properties of natural and synthetic materials, and are rapidly becoming a standard tool in rock magnetic and paleomagnetic investigations. Here we describe a modification to existing algorithms for the calculation of FORC diagrams using locally-weighted regression smoothing (often referred to as loess smoothing). Like conventional algorithms, the FORC distribution is calculated by fitting a second-degree polynomial to a region of FORC space defined by a smoothing factor, N. Our method differs from conventional algorithms in two ways. Firstly, rather than a square of side (2N+1) centered on the point of interest, the region of FORC space used for fitting is defined as a span of arbitrary shape encompassing the (2N+1)^2 data points closest to the point of interest. Secondly, data inside the span are given a weight that depends on their distance from the point being evaluated: data closer to the point being evaluated have higher weights and have a greater effect on the fit. Loess smoothing offers two advantages over current methods. Firstly, it allows the FORC distribution to be calculated using a constant smoothing factor all the way to the Hc = 0 axis. This eliminates possible distortions to the FORC distribution associated with reducing the smoothing factor close to the Hc = 0 axis, and does not require use of the extended FORC formalism and the reversible ridge, which swamps the low-coercivity signal. Secondly, it allows finer control over the degree of smoothing applied to the data, enabling automated selection of the optimum smoothing factor for a given FORC measurement, based on an analysis of the standard deviation of the fit residuals. The new algorithm forms the basis for FORCinel, a new suite of FORC analysis tools for Igor Pro (www.wavemetrics.com), freely available on request from the authors.
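The locally weighted regression at the heart of this approach can be illustrated in one dimension. The tricube weight and the k-nearest-point span below are standard loess conventions; this is a sketch of the general idea, not the FORCinel implementation, which fits a second-degree polynomial over two-dimensional FORC space.

```python
def loess_point(xs, ys, x0, k):
    """Estimate y at x0 by a locally weighted linear fit to the k nearest points.
    Tricube weighting: points nearer x0 get larger weights (loess convention)."""
    # indices of the k nearest neighbours of x0
    idx = sorted(range(len(xs)), key=lambda i: abs(xs[i] - x0))[:k]
    dmax = max(abs(xs[i] - x0) for i in idx) or 1.0
    w = [(1 - (abs(xs[i] - x0) / dmax) ** 3) ** 3 for i in idx]
    # weighted least squares for y = a + b*x, in closed form
    sw = sum(w)
    mx = sum(wi * xs[i] for wi, i in zip(w, idx)) / sw
    my = sum(wi * ys[i] for wi, i in zip(w, idx)) / sw
    sxx = sum(wi * (xs[i] - mx) ** 2 for wi, i in zip(w, idx))
    sxy = sum(wi * (xs[i] - mx) * (ys[i] - my) for wi, i in zip(w, idx))
    b = sxy / sxx if sxx else 0.0
    a = my - b * mx
    return a + b * x0
```

Because the span is defined by a point count rather than a fixed square, the same k can be used all the way to the edge of the data, mirroring the paper's argument about the Hc = 0 axis.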
New backpropagation algorithm with type-2 fuzzy weights for neural networks
Gaxiola, Fernando; Valdez, Fevrier
2016-01-01
In this book a neural network learning method with type-2 fuzzy weight adjustment is proposed. The mathematical analysis of the proposed learning method architecture and the adaptation of the type-2 fuzzy weights are presented. The proposed method is based on research of recent methods that handle weight adaptation, especially fuzzy weights. The internal operation of the neuron is changed to work with two internal calculations for the activation function, obtaining two results as outputs of the proposed method. Simulation results and a comparative study among monolithic neural networks, neural networks with type-1 fuzzy weights and neural networks with type-2 fuzzy weights are presented to illustrate the advantages of the proposed method. The proposed approach is based on recent methods that handle adaptation of weights using fuzzy logic of type 1 and type 2. It is applied to cases of prediction for the Mackey-Glass (for τ=17) and Dow-Jones time series, and recognition of persons with iris bi...
New Facets and a Branch-and-Cut Algorithm for the Weighted Clique Problem
DEFF Research Database (Denmark)
Sørensen, Michael Malmros
2001-01-01
We consider a polyhedral approach to the weighted maximal b-clique problem. Given a node- and edge-weighted complete graph, the problem is to find a complete subgraph (clique) with no more than b nodes such that the sum of the weights of all nodes and edges in the clique is maximal. We introduce four new classes of facet-defining inequalities for the associated b-clique polytope. One of these inequality classes constitutes a generalization of the well-known tree inequalities; the other classes are associated with multistars. We utilize these inequality classes together with other classes…
New facets and a branch-and-cut algorithm for the weighted clique problem
DEFF Research Database (Denmark)
Sørensen, Michael Malmros
2004-01-01
We consider a polyhedral approach to the weighted maximal b-clique problem. Given a node- and edge-weighted complete graph, the problem is to find a complete subgraph (clique) with no more than b nodes such that the sum of the weights of all nodes and edges in the clique is maximal. We introduce four new classes of facet-defining inequalities for the associated b-clique polytope. One of these inequality classes constitutes a generalization of the well-known tree inequalities; the other classes are associated with multistars. We use these inequalities together with other classes of facet…
Directory of Open Access Journals (Sweden)
Deepa Dhanaskodi
2011-01-01
Full Text Available Problem statement: Speech enhancement plays an important role in speech processing systems such as speech recognition, mobile communication and hearing aids. Approach: In this work, the human perceptual auditory masking effect is incorporated into a single-channel speech enhancement algorithm. The algorithm is based on a criterion by which audible noise may be masked rather than attenuated, thereby reducing the chance of speech distortion. The basic decision-directed approach, used for efficient reduction of musical noise, estimates the a priori SNR, a crucial parameter of the spectral gain, which follows the a posteriori SNR with a delay of one frame during speech frames. In this work a simple adaptive speech enhancement technique is employed on a sub-band basis, using an adaptive sigmoid-type function to determine the weighting factor of the TSDD algorithm. The spectral estimate is in turn used to obtain a perceptual gain factor. Results: Objective and subjective measures such as SNR, MSE and IS distance were obtained, which show the ability of the proposed method to efficiently enhance noisy speech. Conclusion/Recommendations: Performance assessment shows that our proposal achieves more significant noise reduction and better spectral estimation of weak speech spectral components from a noisy signal than the conventional speech enhancement algorithm.
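The decision-directed a priori SNR estimate mentioned above can be sketched per frame as follows. The smoothing constant `alpha` and the Wiener-type gain are common textbook choices, not values from this paper, and the paper's adaptive sigmoid weighting and sub-band processing are omitted.

```python
def decision_directed_gains(noisy_power, noise_power, alpha=0.98):
    """Per-frame spectral gain via the decision-directed a priori SNR estimate:
    xi_t = alpha * G_{t-1}^2 * gamma_{t-1} + (1 - alpha) * max(gamma_t - 1, 0),
    followed by a Wiener-type gain G = xi / (1 + xi)."""
    gains = []
    prev_gain, prev_gamma = 1.0, 1.0
    for p in noisy_power:
        gamma = p / noise_power                      # a posteriori SNR
        xi = alpha * (prev_gain ** 2) * prev_gamma + (1 - alpha) * max(gamma - 1.0, 0.0)
        g = xi / (1.0 + xi)                          # Wiener-type spectral gain
        gains.append(g)
        prev_gain, prev_gamma = g, gamma
    return gains
```

The recursion makes the a priori SNR track the a posteriori SNR with roughly a one-frame delay, which is exactly the behaviour the abstract refers to.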
Directory of Open Access Journals (Sweden)
Tiannan Ma
2016-12-01
Full Text Available Accurate forecasting of icing thickness has great significance for ensuring the security and stability of the power grid. In order to improve the forecasting accuracy, this paper proposes an icing forecasting system based on the fireworks algorithm and weighted least square support vector machine (W-LSSVM. The method of the fireworks algorithm is employed to select the proper input features with the purpose of eliminating redundant influence. In addition, the aim of the W-LSSVM model is to train and test the historical data-set with the selected features. The capability of this proposed icing forecasting model and framework is tested through simulation experiments using real-world icing data from the monitoring center of the key laboratory of anti-ice disaster, Hunan, South China. The results show that the proposed W-LSSVM-FA method has a higher prediction accuracy and it may be a promising alternative for icing thickness forecasting.
Energy Technology Data Exchange (ETDEWEB)
Dunham, Mark Edward [Los Alamos National Laboratory; Baker, Zachary K [Los Alamos National Laboratory; Stettler, Matthew W [Los Alamos National Laboratory; Pigue, Michael J [Los Alamos National Laboratory; Schmierer, Eric N [Los Alamos National Laboratory; Power, John F [Los Alamos National Laboratory; Graham, Paul S [Los Alamos National Laboratory
2009-01-01
Los Alamos has recently completed the latest in a series of Reconfigurable Software Radios, which incorporates several key innovations in both hardware design and algorithms. Due to our focus on satellite applications, each design must extract the best size, weight, and power performance possible from the ensemble of Commodity Off-the-Shelf (COTS) parts available at the time of design. In this case we have achieved 1 TeraOps/second signal processing on a 1920 Megabit/second datastream, while using only 53 Watts mains power, 5.5 kg, and 3 liters. This processing capability enables very advanced algorithms such as our wideband RF compression scheme to operate remotely, allowing network bandwidth constrained applications to deliver previously unattainable performance.
3D weighting in cone beam image reconstruction algorithms: ray-driven vs. pixel-driven.
Tang, Xiangyang; Nilsen, Roy A; Smolin, Alex; Lifland, Ilya; Samsonov, Dmitry; Taha, Basel
2008-01-01
A 3D weighting scheme has been proposed previously to reconstruct images from both helical and axial scans in state-of-the-art volumetric CT scanners for diagnostic imaging. Such 3D weighting can be implemented in either a ray-driven or a pixel-driven manner, depending on the available computation resources. An experimental study is conducted in this paper to evaluate the difference between the ray-driven and pixel-driven implementations of the 3D weighting from the perspective of image quality, while their computational complexity is analyzed theoretically. Computer-simulated data and several phantoms, such as the helical body phantom and humanoid chest phantom, are employed in the experimental study, showing that both the ray-driven and pixel-driven 3D weighting provide superior image quality for diagnostic imaging in clinical applications. With the availability of image reconstruction engines of increasing computational power, it is believed that the pixel-driven 3D weighting will be predominantly employed in state-of-the-art volumetric CT scanners in clinical applications.
A Light-weight Symmetric Encryption Algorithm Based on Feistel Cryptosystem Structure
Directory of Open Access Journals (Sweden)
Jingli Zheng
2014-12-01
Full Text Available WSNs are usually deployed in open wireless environments, so their data are easy for attackers to intercept. It is necessary to adopt encryption measures to protect WSN data, but the battery capacity, CPU performance and RAM capacity of WSN sensors are all limited, and complex encryption algorithms are not suited to them. This paper proposes a light-weight symmetric encryption algorithm, LWSEA, which adopts fewer encryption rounds, a shorter data packet and a simplified scrambling function, so its computational cost is very low. We also adopt a longer key and a circular interpolation method to produce the child keys, which raises the security of LWSEA. Experiments demonstrate that LWSEA possesses a better "avalanche effect" and degree of data confusion, and that its calculation speed is far faster than that of DES while its resource cost remains very low. These properties make LWSEA well suited for resource-constrained WSNs.
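The abstract does not give LWSEA's round details. The sketch below shows only the generic Feistel structure it builds on, with a reduced round count and a toy round function (XOR with the subkey, then a rotate) standing in for the real scrambling function; none of the constants are from the paper.

```python
def feistel_rounds(block, subkeys, decrypt=False):
    """Generic balanced Feistel network on a 32-bit block (two 16-bit halves).
    Decryption runs the same rounds with the subkeys reversed, which is the
    property that makes Feistel structures cheap on constrained hardware."""
    left, right = block >> 16, block & 0xFFFF
    keys = list(reversed(subkeys)) if decrypt else subkeys
    for k in keys:
        t = right ^ k
        f = ((t << 3) | (t >> 13)) & 0xFFFF  # toy round function: XOR + rotate-left-3
        left, right = right, left ^ f
    # swap halves on output so the inverse is the same network with reversed keys
    return (right << 16) | left
```

Only the key schedule and round function would need replacing to turn this skeleton into a concrete cipher; the encrypt/decrypt symmetry comes for free.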
Improved algorithm for mining weighted frequent itemsets.
Institute of Scientific and Technical Information of China (English)
李彦伟; 戴月明; 王金鑫
2011-01-01
The shortcomings of the New-Apriori and Mining Weighted Frequent Itemsets (MWFI) algorithms are analyzed, and a New-MWFI algorithm for mining weighted frequent itemsets is proposed. In this algorithm the transactions are classified according to the items' weights, and the weighted frequent itemsets are mined within each category in turn. Since the frequent itemsets of each category satisfy the Apriori property, the Apriori algorithm or its improved variants can be used, and the deficiencies of the original algorithms are thus overcome. Experiments show that the new algorithm is more effective in mining weighted frequent itemsets from a dataset.
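A minimal sketch of the weighted-support idea (not the New-MWFI class partitioning): here an itemset's weighted support is taken as the mean weight of its items times its relative transaction frequency, one common definition. Note that weighted support is not downward closed in general, which is exactly the difficulty that class-based mining schemes work around; this sketch simply prunes Apriori-style anyway.

```python
from itertools import combinations

def weighted_frequent_itemsets(transactions, weights, min_wsup):
    """Level-wise mining with weighted support = mean(item weights) * frequency.
    transactions: list of sets of items; weights: item -> weight."""
    items = sorted({i for t in transactions for i in t})
    n = len(transactions)
    result = {}
    frontier = [frozenset([i]) for i in items]
    k = 1
    while frontier:
        for s in frontier:
            cover = sum(1 for t in transactions if s <= t)
            w = sum(weights[i] for i in s) / len(s)
            wsup = w * cover / n
            if wsup >= min_wsup:
                result[s] = wsup
        # Apriori-style candidate generation: join frequent k-itemsets
        freq = [s for s in frontier if s in result]
        seen = set()
        frontier = []
        for a, b in combinations(freq, 2):
            u = a | b
            if len(u) == k + 1 and u not in seen:
                seen.add(u)
                frontier.append(u)
        k += 1
    return result
```

With weights favouring item "a", itemsets containing "a" survive at thresholds that would prune them under plain support.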
Unwinding the "hairball" graph: a pruning algorithm for weighted complex networks
Dianati, Navid
2016-01-01
Empirical networks of weighted dyadic relations often contain "noisy" edges that alter the global characteristics of the network and obfuscate the most important structures therein. Graph pruning is the process of identifying the most significant edges according to a generative null model, and extracting the subgraph consisting of those edges. Here we introduce a simple and intuitive null model based on the configuration model of network generation, and derive a significance filter from it. We apply the filter to the network of air traffic volume between US airports and recover a geographically faithful representation of the graph. Furthermore, compared with thresholding based on edge weight, we show that our filter extracts a larger giant component that is nevertheless significantly sparser.
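A sketch of the significance-filter idea under a configuration-model-style null: the expected weight on edge (i, j) is proportional to the product of the node strengths, and an edge is kept if its observed weight is improbably large under a Poisson null with that mean. The Poisson form and the 0.05 threshold are illustrative simplifications, not the paper's exact derivation.

```python
import math

def prune_edges(edges, p_threshold=0.05):
    """Keep edges whose weight is significant under a strength-based null.
    edges: list of (u, v, w) with integer-ish weights."""
    strength, total = {}, 0.0
    for u, v, w in edges:
        strength[u] = strength.get(u, 0.0) + w
        strength[v] = strength.get(v, 0.0) + w
        total += w
    kept = []
    for u, v, w in edges:
        # expected weight between u and v under the null: s_u * s_v / (2T)
        lam = strength[u] * strength[v] / (2.0 * total)
        # one-sided p-value: P[Poisson(lam) >= w]
        p = 1.0 - sum(math.exp(-lam) * lam ** k / math.factorial(k)
                      for k in range(int(w)))
        if p < p_threshold:
            kept.append((u, v, w))
    return kept
```

Unlike a global weight threshold, this keeps an edge only when it is heavy *relative to its endpoints*, which is what lets the filter retain a sparse but connected backbone.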
Devi, D Chitra; Uthariaraj, V Rhymend
2016-01-01
Cloud computing uses the concepts of scheduling and load balancing to migrate tasks to underutilized VMs in order to share resources effectively. The scheduling of nonpreemptive tasks in the cloud computing environment is irreversible, and hence tasks have to be assigned to the most appropriate VMs at initial placement itself. In practice, arriving jobs consist of multiple interdependent tasks, which may execute as independent tasks on multiple VMs or on multiple cores of the same VM. Moreover, jobs arrive during the run time of the server at varying random intervals under various load conditions. The participating heterogeneous resources are managed by allocating tasks to appropriate resources through static or dynamic scheduling, making cloud computing more efficient and thus improving user satisfaction. The objective of this work is to introduce and evaluate the proposed scheduling and load balancing algorithm, which considers the capabilities of each virtual machine (VM), the task length of each requested job, and the interdependency of multiple tasks. The performance of the proposed algorithm is studied by comparison with existing methods.
Using exponentially weighted moving average algorithm to defend against DDoS attacks
CSIR Research Space (South Africa)
Machaka, P
2016-11-01
Full Text Available …(1) the effect on detection rate of the alarm threshold tuning parameter α; (2) the effect on detection rate of the EWMA weighting-factor tuning parameter β; (3) the trade-off between the detection rate and the false positive rate; (4) the trade-off between… improves. It can be seen that there is a trade-off between the detection rate and the false positive rate. B. The effect of the EWMA factor (β): In this section we investigate the effect of the value of the EWMA factor (β) on the detection rate…
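The EWMA detection scheme being tuned can be sketched as follows. The exact alarm rule of the paper is not reproduced here, but `alpha` and `beta` play the same threshold and weighting-factor roles as the α and β parameters it studies.

```python
def ewma_alarms(traffic, beta=0.3, alpha=3.0):
    """EWMA anomaly detector sketch: keep a smoothed baseline
    S_t = beta * x_t + (1 - beta) * S_{t-1}, and raise an alarm when the
    current sample exceeds alpha times the baseline built from *past* samples.
    Returns the indices at which alarms fire."""
    alarms = []
    s = traffic[0]                    # seed the baseline with the first sample
    for t, x in enumerate(traffic[1:], start=1):
        if x > alpha * s:
            alarms.append(t)
        s = beta * x + (1 - beta) * s  # update baseline after the test
    return alarms
```

Raising `alpha` lowers the false positive rate at the cost of detection rate, and a larger `beta` makes the baseline adapt faster, which is the trade-off structure the excerpt describes.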
Mizoguchi, Shun'ya
2016-01-01
It is now well known that the moduli space of a vector bundle for heterotic string compactifications to four dimensions is parameterized by a set of sections of a weighted projective space bundle of a particular kind, known as Looijenga's weighted projective space bundle. We show that the requisite weighted projective spaces and the Weierstrass equations describing the spectral covers for gauge groups E_N (N=4,...,8) and SU(n+1) (n=1,2,3) can be obtained systematically by a series of blowing-up procedures according to Tate's algorithm, whereby the sections of the correct line bundles claimed to arise by Looijenga's theorem are automatically obtained. They are nothing but the four-dimensional analogue of the set of independent polynomials parameterizing the complex structure in six-dimensional F-theory, which is further confirmed in the constructions of D_4, A_5, D_6 and E_3 bundles. We also explain why we can obtain them in this way by using the structure theorem of the Mordell-Weil lattice, which is also ...
Mass weighted urn design--A new randomization algorithm for unequal allocations.
Zhao, Wenle
2015-07-01
Unequal allocations have been used in clinical trials motivated by ethical, efficiency, or feasibility concerns. The commonly used permuted block randomization faces a tradeoff between effective imbalance control with a small block size and an accurate allocation target with a large block size. Few other unequal-allocation randomization designs have been proposed in the literature, and applications in real trials have hardly ever been reported, partly due to their complexity of implementation compared to permuted block randomization. Proposed in this paper is the mass weighted urn design, in which the number of balls in the urn equals the number of treatments and remains unchanged during the study. The chance of a ball being randomly selected is proportional to the mass of the ball. After each treatment assignment, part of the mass of the selected ball is redistributed to all balls based on the target allocation ratio. This design allows any desired optimal unequal allocation to be accurately targeted without approximation, and provides consistent imbalance control throughout the allocation sequence. The statistical properties of this new design are evaluated using the Euclidean distance between the observed treatment distribution and the desired treatment distribution as the treatment imbalance measure, and the Euclidean distance between the conditional allocation probability and the target allocation probability as the allocation predictability measure. Computer simulation results are presented comparing the mass weighted urn design with other randomization designs currently available for unequal allocations.
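The mechanism described above can be sketched directly: one ball per arm, selection probability proportional to (non-negative) ball mass, and after each assignment one unit of mass is moved from the selected ball back to all balls in the target ratio, keeping the total mass constant. The `total_mass` value is a tuning choice for this sketch, not a number from the paper.

```python
import random

def mass_weighted_urn(target, n_subjects, total_mass=4.0, seed=1):
    """Sketch of a mass-weighted urn for unequal allocation.
    target: allocation probabilities summing to 1; returns per-arm counts."""
    masses = [total_mass * r for r in target]
    rng = random.Random(seed)
    counts = [0] * len(target)
    for _ in range(n_subjects):
        pos = [max(m, 0.0) for m in masses]      # negative mass cannot be drawn
        pick = rng.random() * sum(pos)
        arm = 0
        while pick > pos[arm]:
            pick -= pos[arm]
            arm += 1
        counts[arm] += 1
        masses[arm] -= 1.0                        # remove one unit from the drawn ball
        for j, r in enumerate(target):            # redistribute it in the target ratio
            masses[j] += r
    return counts
```

Because the total mass is conserved and each ball's mass stays within a fixed band, the running count for each arm never drifts more than a few subjects from its target share, illustrating the "consistent imbalance control" claim.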
Famulari, Gabriel; Pater, Piotr; Enger, Shirin A.
2017-07-01
The aim of this study was to calculate microdosimetric distributions for low energy electrons simulated using the Monte Carlo track structure code Geant4-DNA. Tracks for monoenergetic electrons with kinetic energies ranging from 100 eV to 1 MeV were simulated in an infinite spherical water phantom using the Geant4-DNA extension included in Geant4 toolkit version 10.2 (patch 02). The microdosimetric distributions were obtained through random sampling of transfer points and overlaying scoring volumes within the associated volume of the tracks. Relative frequency distributions of energy deposition f(>E)/f(>0) and dose mean lineal energy (\bar{y}_D) values were calculated in nanometer-sized spherical and cylindrical targets. The effects of scoring volume and scoring techniques were examined. The results were compared with published data generated using MOCA8B and KURBUC. Geant4-DNA produces a lower frequency of higher energy deposits than MOCA8B. The \bar{y}_D values calculated with Geant4-DNA are smaller than those calculated using MOCA8B and KURBUC. The differences are mainly due to the lower ionization and excitation cross sections of Geant4-DNA for low energy electrons. To a lesser extent, discrepancies can also be attributed to the implementation in this study of a new and fast scoring technique that differs from that used in previous studies. For the same mean chord length (\bar{l}), the \bar{y}_D calculated in cylindrical volumes are larger than those calculated in spherical volumes. The discrepancies due to cross sections and scoring geometries increase with decreasing scoring site dimensions. A new set of \bar{y}_D values has been presented for monoenergetic electrons using a fast track sampling algorithm and the most recent physics models implemented in Geant4-DNA. This dataset can be combined with primary electron spectra to predict the radiation quality of photon and electron beams.
Azadnia, Amir Hossein; Taheri, Shahrooz; Ghadimi, Pezhman; Saman, Muhamad Zameri Mat; Wong, Kuan Yew
2013-01-01
One of the cost-intensive issues in managing warehouses is the order picking problem which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed in order to minimize traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness which is neglected in most of the related scientific papers. Consequently, we proposed a novel solution approach in order to minimize tardiness which consists of four phases. First of all, weighted association rule mining has been used to calculate associations between orders with respect to their due date. Next, a batching model based on binary integer programming has been formulated to maximize the associations between orders within each batch. Subsequently, the order picking phase will come up which used a Genetic Algorithm integrated with the Traveling Salesman Problem in order to identify the most suitable travel path. Finally, the Genetic Algorithm has been applied for sequencing the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach.
Directory of Open Access Journals (Sweden)
Amir Hossein Azadnia
2013-01-01
Full Text Available One of the cost-intensive issues in managing warehouses is the order picking problem which deals with the retrieval of items from their storage locations in order to meet customer requests. Many solution approaches have been proposed in order to minimize traveling distance in the process of order picking. However, in practice, customer orders have to be completed by certain due dates in order to avoid tardiness which is neglected in most of the related scientific papers. Consequently, we proposed a novel solution approach in order to minimize tardiness which consists of four phases. First of all, weighted association rule mining has been used to calculate associations between orders with respect to their due date. Next, a batching model based on binary integer programming has been formulated to maximize the associations between orders within each batch. Subsequently, the order picking phase will come up which used a Genetic Algorithm integrated with the Traveling Salesman Problem in order to identify the most suitable travel path. Finally, the Genetic Algorithm has been applied for sequencing the constructed batches in order to minimize tardiness. Illustrative examples and comparisons are presented to demonstrate the proficiency and solution quality of the proposed approach.
Jafarizadeh, Saber
2010-01-01
Providing an analytical solution for the problem of finding the Fastest Distributed Consensus (FDC) is one of the challenging problems in the field of sensor networks. Most of the methods proposed so far deal with the FDC averaging algorithm problem numerically, with convex-optimization techniques, and in general no closed-form solution for finding FDC has been offered up to now, except in [1], where the conjectured answer for the path network was proved. Here in this work we present the analytical solution for the problem of finding FDC by means of semidefinite programming (SDP) for two networks, the Star and the Complete Cored Star, which contain the path as a particular case. Our method in this paper is based on the convexity of the fastest distributed consensus averaging problem, and we allow the networks to have their own symmetric pattern in order to find the optimal weights. The main idea of the proposed methodology is to solve the slackness conditions to obtain a closed-form expression for the optimal weights, ...
Afanasyev, Vsevolod; Buldyrev, Sergey V; Dunn, Michael J; Robst, Jeremy; Preston, Mark; Bremner, Steve F; Briggs, Dirk R; Brown, Ruth; Adlard, Stacey; Peat, Helen J
2015-01-01
A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems.
Directory of Open Access Journals (Sweden)
Vsevolod Afanasyev
Full Text Available A fully automated weighbridge using a new algorithm and mechanics integrated with a Radio Frequency Identification System is described. It is currently in use collecting data on Macaroni penguins (Eudyptes chrysolophus) at Bird Island, South Georgia. The technology allows researchers to collect very large, highly accurate datasets of both penguin weight and direction of their travel into or out of a breeding colony, providing important contributory information to help understand penguin breeding success, reproductive output and availability of prey. Reliable discrimination between single and multiple penguin crossings is demonstrated. Passive radio frequency tags implanted into penguins allow researchers to match weight and trip direction to individual birds. Low unit and operation costs, low maintenance needs, simple operator requirements and accurate time stamping of every record are all important features of this type of weighbridge, as is its proven ability to operate 24 hours a day throughout a breeding season, regardless of temperature or weather conditions. Users are able to define required levels of accuracy by adjusting filters and raw data are automatically recorded and stored allowing for a range of processing options. This paper presents the underlying principles, design specification and system description, provides evidence of the weighbridge's accurate performance and demonstrates how its design is a significant improvement on existing systems.
Hop-Weighted DV-Hop Localization Algorithm
Institute of Scientific and Technical Information of China (English)
刘凯; 余君君; 谭立雄
2012-01-01
A beacon node selection scheme and a DV-Hop localization algorithm based on hop weighting are proposed in this paper to improve positioning performance. First, by setting a threshold on the signal propagation hop count, beacon nodes with fewer propagation hops are retained. Then beacon nodes lying approximately on a line are eliminated, completing the beacon selection and avoiding positioning failure. In addition, based on the relationship between distance estimation error and signal propagation hop count derived from the Friis model, a hop-weighted localization algorithm is obtained. It uses the signal propagation hop count as a weighting factor to modify the positioning result obtained through maximum likelihood estimation, which reduces the effect of distance estimation error on positioning accuracy. Simulation results show that the proposed method can improve positioning accuracy by 3%-5%.
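As a toy illustration of the hop-count weighting (not the paper's maximum-likelihood correction), a hop-weighted centroid discards beacons beyond a hop threshold, mirroring the beacon-selection step, and down-weights the rest by 1/hops:

```python
def hop_weighted_estimate(beacons, hops, max_hops=5):
    """Position estimate from beacon coordinates and hop counts.
    Beacons reached in more hops carry larger distance-estimation error,
    so they get smaller weights; beacons past max_hops are dropped."""
    sel = [(b, h) for b, h in zip(beacons, hops) if h <= max_hops]
    wsum = sum(1.0 / h for _, h in sel)
    x = sum(b[0] / h for b, h in sel) / wsum
    y = sum(b[1] / h for b, h in sel) / wsum
    return x, y
```

The 1/hops weight is a stand-in for the error-vs-hops mapping the paper derives from the Friis model; any monotonically decreasing weight would slot into the same place.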
Ackerman, Margareta; Branzei, Simina; Loker, David
2011-01-01
In this paper we investigate clustering in the weighted setting, in which every data point is assigned a real valued weight. We conduct a theoretical analysis on the influence of weighted data on standard clustering algorithms in each of the partitional and hierarchical settings, characterising the precise conditions under which such algorithms react to weights, and classifying clustering methods into three broad categories: weight-responsive, weight-considering, and weight-robust. Our analysis raises several interesting questions and can be directly mapped to the classical unweighted setting.
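A weight-responsive algorithm in the authors' sense can be illustrated with a one-dimensional weighted k-means sketch: because centroids are weighted means, changing a point's weight changes the output, whereas a weight-robust method would ignore the weights entirely.

```python
def weighted_kmeans(points, weights, centroids, iters=20):
    """1-D weighted k-means: assignment by nearest centroid, update by
    weighted mean of each cluster's points."""
    centroids = list(centroids)
    for _ in range(iters):
        clusters = [[] for _ in centroids]
        for p, w in zip(points, weights):
            j = min(range(len(centroids)), key=lambda c: (p - centroids[c]) ** 2)
            clusters[j].append((p, w))
        for j, cl in enumerate(clusters):
            if cl:  # empty clusters keep their old centroid
                centroids[j] = sum(w * p for p, w in cl) / sum(w for _, w in cl)
    return centroids
```

Tripling the weight of the point at 0 pulls its cluster's centroid toward it, demonstrating weight-responsiveness on the smallest possible example.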
Tang, Xiangyang; Hsieh, Jiang; Hagiwara, Akira; Nilsen, Roy A.; Thibault, Jean-Baptiste; Drapkin, Evgeny
2005-08-01
The original FDK algorithm proposed for cone beam (CB) image reconstruction under a circular source trajectory has been extensively employed in medical and industrial imaging applications. With increasing cone angle, CB artefacts in images reconstructed by the original FDK algorithm deteriorate, since the circular trajectory does not satisfy the so-called data sufficiency condition (DSC). A few 'circular plus' trajectories have been proposed in the past to help the original FDK algorithm to reduce CB artefacts by meeting the DSC. However, the circular trajectory has distinct advantages over other scanning trajectories in practical CT imaging, such as head imaging, breast imaging, cardiac, vascular and perfusion applications. In addition to looking into the DSC, another insight into the CB artefacts existing in the original FDK algorithm is the inconsistency between conjugate rays that are 180° apart in view angle (namely conjugate ray inconsistency). The conjugate ray inconsistency is pixel dependent, varying dramatically over pixels within the image plane to be reconstructed. However, the original FDK algorithm treats all conjugate rays equally, resulting in CB artefacts that can be avoided if appropriate weighting strategies are exercised. Along with an experimental evaluation and verification, a three-dimensional (3D) weighted axial cone beam filtered backprojection (CB-FBP) algorithm is proposed in this paper for image reconstruction in volumetric CT under a circular source trajectory. Without extra trajectories supplemental to the circular trajectory, the proposed algorithm applies 3D weighting on projection data before 3D backprojection to reduce conjugate ray inconsistency by suppressing the contribution from one of the conjugate rays with a larger cone angle. Furthermore, the 3D weighting is dependent on the distance between the reconstruction plane and the central plane determined by the circular trajectory. The proposed 3D weighted axial CB-FBP algorithm
A Variable Weighted Least-Connection Algorithm for Multimedia Transmission
Institute of Scientific and Technical Information of China (English)
杨立辉; 余胜生
2003-01-01
Under high load, a multimedia cluster server can serve many hundreds of connections concurrently, with a load balancer distributing incoming connection requests to the nodes according to a preset algorithm. Among existing scheduling algorithms, round-robin and least-connection do not take into account differences in the service capability of each node, while improved algorithms such as weighted round-robin and weighted least-connection do not consider the fact that the ratio of the number of TCP connections to a fixed weight does not reflect the real load of a node. In this paper we describe our attempts at improving these scheduling algorithms and propose a variable weighted least-connection algorithm, which assigns a variable weight, instead of a fixed weight, to each node according to its real-time resources. A validating trial has been performed and the results show that the proposed algorithm achieves effective load balancing in a single central control node scenario.
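A minimal sketch of the ratio the paper argues about: each request goes to the node with the smallest connections-to-weight ratio. In the proposed variable-weight scheme, each node's `weight` field would be refreshed from live resource measurements between assignments; it is static in this toy version.

```python
def dispatch(nodes, n_requests):
    """Least-connection dispatch over weighted nodes.
    nodes: list of dicts with 'name', 'weight' (> 0) and 'connections'."""
    for _ in range(n_requests):
        # pick the node with the lowest connections/weight ratio
        best = min(nodes, key=lambda n: n["connections"] / n["weight"])
        best["connections"] += 1
    return {n["name"]: n["connections"] for n in nodes}
```

With static weights this reduces to classic weighted least-connection: a node with twice the weight ends up with twice the connections, which is exactly the behaviour the authors argue mis-states real load when the weight is fixed.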
Directory of Open Access Journals (Sweden)
Yaron Orenstein
Full Text Available The new technology of protein binding microarrays (PBMs allows simultaneous measurement of the binding intensities of a transcription factor to tens of thousands of synthetic double-stranded DNA probes, covering all possible 10-mers. A key computational challenge is inferring the binding motif from these data. We present a systematic comparison of four methods developed specifically for reconstructing a binding site motif represented as a positional weight matrix from PBM data. The reconstructed motifs were evaluated in terms of three criteria: concordance with reference motifs from the literature and ability to predict in vivo and in vitro bindings. The evaluation encompassed over 200 transcription factors and some 300 assays. The results show a tradeoff between how the methods perform according to the different criteria, and a dichotomy of method types. Algorithms that construct motifs with low information content predict PBM probe ranking more faithfully, while methods that produce highly informative motifs match reference motifs better. Interestingly, in predicting high-affinity binding, all methods give far poorer results for in vivo assays compared to in vitro assays.
Reinharz, Vladimir; Ponty, Yann; Waldispühl, Jérôme
2013-07-01
The design of RNA sequences folding into predefined secondary structures is a milestone for many synthetic biology and gene therapy studies. Most current software tools use similar local search strategies (i.e. a random seed is progressively adapted to acquire the desired folding properties) and, more importantly, do not allow the user to control the nucleotide distribution, such as the GC-content, of their sequences explicitly. The latter is, however, an important criterion for large-scale applications, as it could presumably be used to design sequences with better transcription rates and/or structural plasticity. In this article, we introduce IncaRNAtion, a novel algorithm to design RNA sequences folding into target secondary structures with a predefined nucleotide distribution. IncaRNAtion uses a global sampling approach and weighted sampling techniques. We show that our approach is fast (i.e. running time comparable to or better than local search methods), seedless (we remove the bias of the seed in local search heuristics), and successfully generates high-quality (i.e. thermodynamically stable) sequences for any GC-content. To complete this study, we develop a hybrid method combining our global sampling approach with local search strategies. Remarkably, our "glocal" methodology outperforms both local and global approaches for sampling sequences with a specific GC-content and target structure. IncaRNAtion is available at csb.cs.mcgill.ca/incarnation/. Supplementary data are available at Bioinformatics online.
Planeta, David S
2007-01-01
In this paper I present a general outlook on questions relevant to two basic graph algorithms: finding the shortest path with positive weights, and the minimum spanning tree. I survey the known solutions to these basic graph problems and present my own, which are characterized by linear worst-case time complexity. Notably, my algorithms for the shortest path and minimum spanning tree problems not only analyze the weight of arcs (the main, and often the only, criterion for previously known algorithms) but, in the case of identical path weights, also select the path that passes through as few vertices as possible. The algorithms use a priority queue based on a multilevel prefix tree, PTrie. PTrie is a clever combination of the idea of the prefix tree (Trie), a structure with logarithmic time complexity for insert and remove operations, a doubly linked list, and queues. In C++ I will implement a linear worst-case time algorithm computin...
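The tie-breaking idea (among equal-weight shortest paths, prefer the one through fewer vertices) can be sketched with an ordinary binary-heap Dijkstra ordered on (cost, hops) pairs; the paper's linear-time PTrie queue is not reproduced here:

```python
import heapq

def dijkstra_fewest_hops(graph, src):
    """Positive-weight shortest paths; ties on total weight are broken in
    favour of the path through fewer vertices, by ordering on (cost, hops)."""
    dist = {src: (0, 0)}
    pq = [(0, 0, src)]
    while pq:
        d, h, u = heapq.heappop(pq)
        if (d, h) > dist.get(u, (float("inf"), float("inf"))):
            continue                     # stale queue entry
        for v, w in graph.get(u, []):
            cand = (d + w, h + 1)
            if cand < dist.get(v, (float("inf"), float("inf"))):
                dist[v] = cand
                heapq.heappush(pq, (cand[0], cand[1], v))
    return dist

# Two weight-2 routes to "c": direct (1 hop) and via "b" (2 hops).
g = {"a": [("b", 1), ("c", 2)], "b": [("c", 1)]}
dist = dijkstra_fewest_hops(g, "a")
```

Both routes to "c" cost 2, and the lexicographic comparison keeps the one-hop route.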
Weighted cluster fusion algorithm based on graph theory
Institute of Scientific and Technical Information of China (English)
谢岳山; 樊晓平; 廖志芳; 尹红练; 罗浩
2013-01-01
Existing cluster fusion algorithms often perform poorly on datasets with mixed attributes, mainly because their fused results remain dispersed. To address this problem, this paper presents a weighted cluster fusion algorithm based on graph theory. It first clusters the dataset to obtain cluster members, then assigns a weight to each data object with a proposed fusion function and determines the relationship between each pair of data objects by weighting the edge between them, yielding a weighted nearest-neighbor graph. Finally, it performs a last clustering step on this graph using graph-theoretic methods. Experiments show that the accuracy and stability of this fusion algorithm are better than those of other clustering fusion algorithms.
Rare Category Detection Algorithm Based on Weighted Boundary Degree
Institute of Scientific and Technical Information of China (English)
黄浩; 何钦铭; 陈奇; 钱烽; 何江峰; 马连航
2013-01-01
This paper proposes an efficient algorithm named CATION (rare category detection algorithm based on weighted boundary degree) for rare category detection. By employing a rare-category criterion called the weighted boundary degree (WBD), the algorithm uses reverse k-nearest neighbors to find the boundary points of rare categories and selects the boundary points with maximum WBD to query for labels. Extensive experimental results demonstrate that the algorithm avoids the limitations of existing approaches, discovers the categories in a dataset significantly more efficiently, and effectively reduces runtime compared with existing approaches.
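A minimal sketch of the reverse k-nearest-neighbor counts that WBD builds on; the full CATION criterion and labeling loop are not reproduced, and the sample points are invented:

```python
import numpy as np

def reverse_knn_counts(X, k):
    """For each point, count how many other points rank it among their own
    k nearest neighbours (the size of its reverse k-NN set)."""
    n = len(X)
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)          # a point is not its own neighbour
    knn = np.argsort(d, axis=1)[:, :k]   # indices of each point's k-NN
    counts = np.zeros(n, dtype=int)
    for row in knn:
        counts[row] += 1                 # one "vote" per appearance
    return counts

# Three tight points and one far outlier: nobody votes for the outlier.
X = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [10.0, 10.0]])
counts = reverse_knn_counts(X, 2)
```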
Ghaderi, F.; Pahlavani, P.
2015-12-01
A multimodal multi-criteria route planning (MMRP) system provides an optimal multimodal route from an origin to a destination considering two or more criteria, where the route may combine public and private transportation modes. In this paper, simulated annealing (SA) and the fuzzy analytical hierarchy process (fuzzy AHP) were combined to find this route. Firstly, the criteria that matter to users on their trip were determined. Then the weight of each criterion was calculated using the fuzzy AHP weighting method; the most important characteristic of this method is its use of fuzzy numbers, which lets users express their uncertainty in the pairwise comparison of criteria. After determining the criteria weights, the proposed SA algorithm was used to determine an optimal route from an origin to a destination; one of the most important problems with meta-heuristic algorithms is becoming trapped in local minima, which SA mitigates by occasionally accepting worse solutions. Five transportation modes were considered for moving between nodes: subway, bus rapid transit (BRT), taxi, walking, and bus. The fare, travel time, user inconvenience, and path length were taken as the criteria. The proposed model was implemented as a MATLAB GUI for an area in the centre of Tehran. The results showed the high efficiency and speed of the proposed algorithm, supporting our analyses.
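The SA component can be sketched generically; the cost function, neighbor move, and cooling schedule below are toy assumptions, not the paper's multimodal route model:

```python
import math
import random

def simulated_annealing(cost, neighbor, x0, t0=1.0, cooling=0.95, steps=200):
    """Generic SA skeleton: accept a worse candidate with probability
    exp(-delta/T) so the search can climb out of local minima."""
    random.seed(0)                       # reproducible toy run
    x, t, best = x0, t0, x0
    for _ in range(steps):
        y = neighbor(x)
        delta = cost(y) - cost(x)
        if delta < 0 or random.random() < math.exp(-delta / t):
            x = y
        if cost(x) < cost(best):
            best = x
        t *= cooling                     # geometric cooling schedule
    return best

cost = lambda x: (x * x - 1.0) ** 2      # toy landscape with minima at x = +/-1
step = lambda x: x + random.choice([-0.1, 0.1])
best = simulated_annealing(cost, step, x0=3.0)
```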
Research on a Weighted Centroid Location Algorithm for Wireless Sensor Networks
Institute of Scientific and Technical Information of China (English)
李文辰; 张雷
2013-01-01
The positioning precision of the distance-based improved weighted centroid location algorithm depends on the selected inversion model, and the correct distance-inversion model is not easy to determine. To address this, an improved weighted centroid location algorithm based on received signal strength (RSS) is introduced: the RSS itself is used as the weight in the centroid location algorithm, and the weights are substituted directly into the centroid computation to estimate the coordinates of the unknown node. Since the distance-inversion step is eliminated, inversion errors are avoided, improving the positioning precision, robustness, and practicality of the algorithm while also reducing its computational complexity. MATLAB simulations show that the improved algorithm outperforms the distance-weighted improved centroid location algorithm, meets the localization requirements of wireless sensor networks, and has good application value.
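The RSS-weighted centroid idea reduces to a few lines; the anchor coordinates and weights below are made-up values:

```python
def weighted_centroid(anchors):
    """Estimate the unknown node as the RSS-weighted centroid of the
    anchors, given as (x, y, rss) triples; no distance-inversion model
    is involved."""
    wsum = sum(w for _, _, w in anchors)
    x = sum(ax * w for ax, _, w in anchors) / wsum
    y = sum(ay * w for _, ay, w in anchors) / wsum
    return x, y

# Made-up anchors; the strongest signal comes from the anchor at the origin,
# so the estimate is pulled towards it.
est = weighted_centroid([(0.0, 0.0, 3.0), (10.0, 0.0, 1.0), (0.0, 10.0, 1.0)])
```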
Weighted Centroid Localization Algorithm Based on Maximum Likelihood Estimation
Institute of Scientific and Technical Information of China (English)
卢先领; 夏文瑞
2016-01-01
To solve the problem of node self-localization in wireless sensor networks, and specifically the large ranging error of received signal strength indication (RSSI) and the low accuracy of the centroid localization algorithm, we propose a weighted centroid localization algorithm based on maximum likelihood estimation. Firstly, the maximum likelihood estimate relating the estimated distance to the actual distance is calculated and used as a weight. Then a parameter k is introduced into the weight model to optimize the weights between the anchor nodes and the unknown nodes. Finally, the locations of the unknown nodes are calculated and corrected with the proposed algorithm. Simulation results show that the weighted centroid algorithm based on maximum likelihood estimation offers high localization accuracy at low cost, outperforms the inverse-distance and inverse-RSSI weighted centroid algorithms, and is well suited to indoor localization over large areas.
An Instruction Scheduling Algorithm Based on Weighted Paths
Institute of Scientific and Technical Information of China (English)
路璐; 安虹; 王莉; 王耀彬; 曾斌
2009-01-01
Growing on-chip wire delay makes instruction scheduling an increasingly important compiler technique for reducing on-chip communication. This paper describes a compiler scheduling algorithm for a tiled processor architecture, called weighted path scheduling, which uses path length as the weight when scheduling instructions. To calculate the weight of a path precisely, it exploits previously placed instructions (anchor instructions). Experimental results show that the scheduler implementing this algorithm improves IPC by 21% and 3% on average over two prior TRIPS scheduling algorithms.
Directory of Open Access Journals (Sweden)
Mohameed Sarhan Al_Duais
2015-01-01
Full Text Available The drawback of the Back Propagation (BP) algorithm is slow training, easy convergence to local minima, and saturation during training. To overcome these problems, we created a new dynamic function for both the training rate and the momentum term. In this study, we present the BPDRM algorithm, which trains with a dynamic training rate and momentum term. We also propose a new strategy, consisting of multiple steps, to avoid inflation of the gross weight when each training rate and momentum term is added as a dynamic function. In this strategy, fitting is done by establishing a relationship between the dynamic training rate and the dynamic momentum, placing an implicit dynamic momentum term in the dynamic training rate: α_dynamic = f(1/η_dynamic). This procedure keeps the weights as moderate as possible (neither too small nor too large). The 2-dimensional XOR problem and the buba data were used as benchmarks for testing the effects of the new strategy. All experiments were performed in Matlab (2012a). The results show that the dynamic BPDRM algorithm performs better and trains faster than the BP algorithm at the same error limit.
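A hedged sketch of coupling the momentum term to the training rate as α = f(1/η); the clamp used for f below is an assumption, since the abstract does not give the exact function:

```python
def sgd_step(w, grad, velocity, eta):
    """One weight update with a momentum term tied to the training rate
    via alpha = f(1/eta); the clamp below is an assumed choice of f."""
    alpha = min(0.9, 1.0 / (1.0 + 1.0 / eta))
    v = alpha * velocity - eta * grad
    return w + v, v

# Minimise the toy loss f(w) = w**2 (gradient 2w), starting from w = 1.
w, v = 1.0, 0.0
for _ in range(50):
    w, v = sgd_step(w, 2.0 * w, v, eta=0.1)
```

With η = 0.1 the implied momentum is small (α ≈ 0.09), so the update stays moderate and converges smoothly on this toy loss.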
Gur, Ilan; Markel, Gal; Nave, Yaron; Vainshtein, Igor; Eisenkraft, Arik; Riskin, Arieh
2014-08-01
To study the ability of RALIS (a computerized mathematical algorithm and continuous monitoring device) to detect late-onset sepsis among very low birth weight preterm neonates, 24 randomly chosen very low birth weight infants with proven sepsis were compared with 22 infants without sepsis. The clinical parameters were retrospectively collected from the medical records, and the ability of RALIS to detect late-onset sepsis was calculated. RALIS positively identified 23 of the 24 infants with sepsis (sensitivity 95.8%), raising the sepsis alert a median of 2.0 days earlier than clinical suspicion. A false positive alert was raised in 23% (5/22) of the infants without sepsis. The specificity, positive predictive value, and negative predictive value of RALIS were 77.3%, 82.1%, and 94.4%, respectively. RALIS may aid in the early diagnosis of late-onset sepsis in very low birth weight preterm infants.
Institute of Scientific and Technical Information of China (English)
陈旻; 王开云; 贾学明; 薛洁
2014-01-01
A bacterial genome compression algorithm based on adaptive weighted context models is presented. Context models of different orders describe the degree of correlation among base symbols. The models are combined by weighting to form the conditional probability distribution that drives an arithmetic coder, with each model's weight determined by its corresponding adaptive code length. During coding, the weights are adaptively updated according to the statistical counts gathered by each context model. Experimental results indicate that the algorithm achieves better compression than other weighted-context-model genome compression algorithms.
Weighted Improved Euclidean Localization Algorithm Based on ZigBee
Institute of Scientific and Technical Information of China (English)
金纯; 何山; 胡建农; 周亮; 徐洪刚
2011-01-01
The RSSI-based triangle-and-centroid localization algorithm sometimes fails to work; to solve this problem, a weighted improved Euclidean localization algorithm based on ZigBee is proposed. The algorithm uses the least squares method to derive the environmental parameters, and then applies the weighted improved Euclidean localization algorithm to locate the unknown nodes. Simulation and experiment show that the algorithm has practical value for real applications.
BiCluE - Exact and heuristic algorithms for weighted bi-cluster editing of biomedical data
DEFF Research Database (Denmark)
Sun, Peng; Guo, Jiong; Baumbach, Jan
2013-01-01
different types. Bi-cluster editing, as a special case of clustering, which partitions two different types of data simultaneously, might be used for several biomedical scenarios. However, the underlying algorithmic problem is NP-hard. RESULTS: Here we contribute with BiCluE, a software package designed...
Khalil, A; D'Antonio, F; Dias, T; Cooper, D; Thilaganathan, B
2014-08-01
The aims of this study were, first, to ascertain the accuracy of formulae for ultrasonographic birth-weight estimation in twin compared with singleton pregnancies and, second, to assess the accuracy of sonographic examination in predicting birth-weight discordance in twin pregnancies. This was a retrospective cohort study including both singleton and twin pregnancies. Routine biometry was recorded and estimated fetal weight (EFW) calculated using 33 different formulae. Only pregnancies that delivered within 48 h of the ultrasound scan were included (4280 singleton and 586 twin fetuses). Differences between the EFW and actual birth weight (ABW) were assessed by percentage error, accuracy of predictions within ±10% and ±15% error, and the Bland-Altman method. The accuracy of predicting different cut-offs of birth-weight discordance in twin pregnancies was also assessed using the area under the receiver operating characteristic curve (AUC). The overall mean absolute percentage error was ≤10% for 25 formulae in singleton pregnancies compared with three formulae in twin pregnancies. The overall predictions within ±10% and ±15% of the ABW were 62.2% and 81.5% in singleton and 49.7% and 68.5% in twin pregnancies, respectively. When the formulae were categorized according to the biometric parameters included, those based on a combination of head, abdomen and femur measurements showed the lowest mean absolute percentage error in both singleton and twin pregnancies. The predictive accuracy for 25% birth-weight discordance using the Hadlock 2 formula, as assessed by the AUC, was 0.87. Ultrasound estimation of birth weight is less accurate in twin than in singleton pregnancies. Formulae that include a combination of head, abdomen and femur measurements perform best in both singleton and twin pregnancies. Copyright © 2013 ISUOG. Published by John Wiley & Sons Ltd.
Albuquerque, Fabio; Beier, Paul
2015-01-01
Here we report that prioritizing sites in order of rarity-weighted richness (RWR) is a simple, reliable way to identify sites that represent all species in the fewest number of sites (minimum set problem) or to identify sites that represent the largest number of species within a given number of sites (maximum coverage problem). We compared the number of species represented in sites prioritized by RWR to numbers of species represented in sites prioritized by the Zonation software package for 11 datasets in which the size of individual planning units (sites) ranged from algorithms remain superior for conservation prioritizations that consider compactness and multiple near-optimal solutions in addition to species representation. But because RWR can be implemented easily and quickly in R or a spreadsheet, it is an attractive alternative to integer programming or heuristic algorithms in some conservation prioritization contexts.
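RWR itself is only a few lines, which is the abstract's point about spreadsheet or R implementability; the species and site names below are invented:

```python
def rarity_weighted_richness(sites):
    """RWR: each species is weighted by 1 / (number of sites it occupies);
    a site's score is the sum of the weights of the species it holds.
    Ranking sites by this score approximates minimum-set and
    maximum-coverage prioritizations."""
    occupancy = {}
    for species in sites.values():
        for s in species:
            occupancy[s] = occupancy.get(s, 0) + 1
    return {site: sum(1.0 / occupancy[s] for s in species)
            for site, species in sites.items()}

# Invented example: the site holding the single-site species ranks first.
scores = rarity_weighted_richness({
    "site1": {"sparrow", "rare_orchid"},
    "site2": {"sparrow"},
})
```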
Improved term weighting algorithm for text sentiment analysis
Institute of Scientific and Technical Information of China (English)
郑安怡
2015-01-01
Term weighting for sentiment analysis generally involves two factors: the importance of a term in a document (ITD) and the importance of a term for expressing sentiment (ITS). An improved ITS algorithm is proposed by combining two state-of-the-art supervised term weighting schemes with high classification accuracy. The improved algorithm considers both the document frequency of a feature within one class (the number of documents of that class in which the feature occurs) and that class's share of the feature's total document frequency, so that features occurring predominantly, and in large numbers, in documents of one class receive higher ITS weights. Experiments show that the proposed algorithm improves the performance of sentiment classification.
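One way the two described signals could combine, as a sketch; this toy formula is an assumption, not the paper's exact ITS scheme:

```python
import math

def its_weight(df_pos, df_neg):
    """Toy combination of the two described signals: how strongly one class
    dominates a term's document frequency, and how widespread the term is
    within that dominant class."""
    dominant = max(df_pos, df_neg)
    dominance = dominant / (df_pos + df_neg)   # share held by the majority class
    prevalence = math.log(1.0 + dominant)      # spread within that class
    return dominance * prevalence
```

A term in 90 positive vs 10 negative documents outweighs a perfectly balanced term, and also outweighs a 9-vs-1 term that is class-pure but rare.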
Weighted MRF Algorithm for Automatic Unsupervised Image Segmentation
Institute of Scientific and Technical Information of China (English)
刘雪娜; 侯宝明
2012-01-01
To achieve automatic unsupervised image segmentation, an algorithm based on adaptive class-number estimation and a weighted Markov random field (MRF) is proposed. First, combined with the minimum description length (MDL) criterion, the number of image classes is computed adaptively within the MRF framework. Then a weighted MRF algorithm is used to broaden the choice of potential functions and thus eliminate their complex computation. Finally, the model is optimized with the iterated conditional modes (ICM) algorithm to obtain the segmentation under the maximum a posteriori (MAP) criterion. Tests in Matlab show that the algorithm is effective: it determines the number of classes correctly and substantially reduces segmentation errors.
Cao, Jun-Zhe; Liu, Wen-Qi; Gu, Hong
2012-11-01
Machine learning is a reliable technology for automated subcellular localization of viral proteins within a host cell or virus-infected cell. One challenge is that viral protein samples not only have multiple location sites but are also class-imbalanced, and an imbalanced dataset often degrades prediction performance. To meet this challenge, this paper proposes a novel approach named imbalance-weighted multi-label K-nearest neighbor to predict viral protein subcellular locations with multiple sites. Experimental results by jackknife test indicate that the presented algorithm outperforms existing methods and has great potential in protein science.
Mining Algorithm for Normalized Weighted Association Rules in Databases
Institute of Scientific and Technical Information of China (English)
杜鹢; 藏海霞
2001-01-01
Previous algorithms for mining association rules assume that every item in the database is equally important. This paper presents a method for mining weighted association rules in databases, which solves the problems caused by items of unequal importance.
Directory of Open Access Journals (Sweden)
Lutsenko Y. V.
2014-12-01
Full Text Available The method of ordinary least squares (OLS) is widely known and deservedly popular, yet attempts have been made to improve it. One result of such attempts is weighted least squares (WLS), the essence of which is to give each observation a weight inversely proportional to its approximation error; in effect, the harder an observation is to approximate, the more it is ignored. Formally, this decreases the approximation error, but it does so by partially refusing to consider the "problem" observations with the largest errors. Taken to its extreme (and absurd) limit, this approach would retain only the observations lying almost exactly on the least-squares trend and simply ignore the rest. In the author's view this is not a solution to the problem but a refusal to solve it, however much it may look like a solution. This work proposes a solution based on information theory: to take as the weight of an observation the amount of information that the value of the argument carries about the value of the function. This approach was validated within a new method of artificial intelligence, automated system-cognitive analysis (ASA-analysis), and was implemented 30 years ago in its software toolkit, the "Eidos" intelligent system, in the form of so-called "cognitive functions". This article presents an algorithm and software implementation of this approach, illustrated with a detailed numerical example. In the future it is planned to give a detailed mathematical basis for this information-theoretically weighted least squares method and to investigate its properties.
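For contrast, here is a plain WLS fit exhibiting exactly the behavior the author criticizes: down-weighting a hard-to-fit observation "solves" the error by largely ignoring that observation. The data and weights are invented:

```python
import numpy as np

def wls_fit(x, y, w):
    """Weighted least squares for y ~ a + b*x via the normal equations
    with a diagonal weight matrix W: solve (X'WX) beta = X'Wy."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)  # (intercept, slope)

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.0, 1.0, 2.0, 10.0])                    # last point is "hard"
b_eq = wls_fit(x, y, np.ones(4))                       # OLS: slope pulled up
b_dn = wls_fit(x, y, np.array([1.0, 1.0, 1.0, 0.01]))  # WLS: point ignored
```

With the hard point down-weighted the slope collapses to roughly 1, the trend of the three easy points, illustrating the "partial refusal to consider" the problem observation.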
Bergamino, Maurizio; Saitta, Laura; Barletta, Laura; Bonzano, Laura; Mancardi, Giovanni Luigi; Castellan, Lucio; Ravetti, Jean Louis; Roccatagliata, Luca
2013-01-01
The purpose of this study was to assess the feasibility of measuring different permeability parameters with T1-weighted dynamic contrast-enhanced (DCE) magnetic resonance imaging (MRI) in order to investigate the blood-brain barrier permeability associated with different brain tumors. The Patlak algorithm and the extended Tofts-Kety model were used to this aim. Twenty-five adult patients with tumors of different histological grades were enrolled in this study. MRI examinations were performed at 1.5 T. Multi-flip-angle, fast low-angle shot, axial 3D T1-weighted images were acquired to calculate T1 maps, followed by a DCE acquisition. A region of interest was placed within the tumor of each patient to calculate the mean value of each permeability parameter. Permeability measurements differed between tumor grades, with higher histological grades characterized by higher permeability values. A significant difference in transfer constant (K(trans)) values was found between the two methods on high-grade tumors; however, both techniques revealed a significant correlation between the histological grade of tumors and their K(trans) values. Our results suggest that DCE acquisition is feasible in patients with brain tumors and that K(trans) maps can be easily obtained by these two algorithms, even if the theoretical model adopted could affect the final results.
A New Queuing Scheduling Algorithm Based on Weighted Round Robin
Institute of Scientific and Technical Information of China (English)
饶宝乾; 侯嘉
2011-01-01
To improve the performance of differentiated network services, selecting an appropriate queuing scheduling algorithm is particularly important. A new scheduling algorithm based on WRR is proposed, named Priority Variable Deficit Weighted Round Robin (P-VDWRR). P-VDWRR can provide a degree of QoS (Quality of Service) assurance, distribute network resources dynamically according to the load within a certain range, and reduce the packet loss rate at network nodes.
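Classic deficit weighted round robin, the base that P-VDWRR extends, can be sketched as follows; the packet sizes and quanta are arbitrary:

```python
from collections import deque

def dwrr(queues, quanta, rounds):
    """Deficit weighted round robin: each queue earns its quantum per round
    and transmits head packets while the accumulated deficit covers them."""
    deficits = [0] * len(queues)
    sent = []
    for _ in range(rounds):
        for i, q in enumerate(queues):
            deficits[i] += quanta[i]
            while q and q[0] <= deficits[i]:
                deficits[i] -= q[0]
                sent.append((i, q.popleft()))
            if not q:
                deficits[i] = 0          # an empty queue banks no credit
    return sent

# Two flows with weights 1:2 (quanta 300 vs 600 bytes).
order = dwrr([deque([300, 300]), deque([600])], quanta=[300, 600], rounds=2)
```

P-VDWRR would additionally vary the quantum with queue priority and load, which is what lets it adapt resource allocation dynamically.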
Hoogerheide, L.F.; Opschoor, A.; Dijk, van, Nico M.
2012-01-01
This discussion paper was published in the Journal of Econometrics (2012), Vol. 171(2), 101-120. A class of adaptive sampling methods is introduced for efficient posterior and predictive simulation. The proposed methods are robust in the sense that they can handle target distributions that exhibit non-elliptical shapes such as multimodality and skewness. The basic method makes use of sequences of importance weighted Expectation Maximization steps in order to efficiently construct a mixture of...
Institute of Scientific and Technical Information of China (English)
付锴; 雷勇; 颜嘉俊
2011-01-01
The traditional multidimensional scaling (MDS) localization algorithm uses multi-hop distances in place of direct distances between nodes, resulting in low accuracy of the generated local network map and large localization errors in irregular networks. In contrast to existing algorithms, this paper introduces the Euclidean method to generate accurate multi-hop distances between nodes and uses a weighting mechanism to improve the stress coefficient and suppress accumulated error. Simulation results show that the method achieves better performance when localizing C-shaped networks and low-connectivity rectangular networks.
A DSA Cone-Beam Reconstruction Algorithm Based on Backprojection-Weighted FDK
Institute of Scientific and Technical Information of China (English)
杨宏成; 高欣; 张涛
2013-01-01
To suppress the cone-beam artifacts caused by large cone angles in cone-beam digital subtraction angiography (DSA), a backprojection-weighted reconstruction algorithm based on the FDK framework (BPW-FDK) is proposed. The cause of the cone-beam artifacts far from the rotation trajectory is analyzed. To compensate for the data missing from Radon space in short-scan shadow regions, a distance-based backprojection weight function is designed and incorporated into the original FDK algorithm as a constraint, compensating data in the region far from the rotation trajectory and enlarging the reconstructable region. Reconstruction experiments were conducted on simulated projections with and without noise and on real scan data from a self-developed cone-beam DSA system. The results show that the proposed algorithm clearly outperforms the Parker-weighted FDK (Parker-FDK) algorithm in suppressing cone-beam artifacts for large-cone-angle projections; its normalized mean square distance and normalized mean absolute distance criteria are both 5% lower than those of Parker-FDK.
Improved inverse distance weighting interpolation algorithm for peak detection
Institute of Scientific and Technical Information of China (English)
李超; 陈钱; 顾国华; 钱惟贤
2013-01-01
Low sampling frequency and echo pulse broadening are the main causes of low peak-detection accuracy in digital pulsed laser ranging. The traditional inverse distance weighted (IDW) interpolation algorithm for peak detection can address only the low sampling frequency. To address both, an improved inverse distance weighted interpolation algorithm (IIDW) is proposed based on a temporal model of the echo energy distribution: starting from the sampled peak position, the algorithm searches toward the rising and falling edges, assigns each sampling point within the search radius a weight according to its distance from the peak, and extracts the corrected peak time by weighted averaging. The algorithm effectively mitigates echo broadening, reduces the measurement errors caused by low sampling frequency and echo pulse broadening, and improves the detection precision of digital pulsed laser ranging; its effectiveness is demonstrated experimentally.
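A minimal sketch of the interpolation step described above: search both edges around the raw sampled peak and weight samples by inverse distance. The amplitude-times-inverse-distance weight and the sample values are assumptions for illustration:

```python
def idw_peak(samples, peak_idx, radius):
    """Search both edges within `radius` of the raw sampled peak and take
    the amplitude-and-inverse-distance weighted mean of sample times as
    the corrected peak time."""
    lo = max(0, peak_idx - radius)
    hi = min(len(samples) - 1, peak_idx + radius)
    num = den = 0.0
    for i in range(lo, hi + 1):
        w = samples[i] / (abs(i - peak_idx) + 1)  # inverse-distance weight
        num += w * i
        den += w
    return num / den

# Broadened echo sampled at a low rate; the true peak lies between
# samples 3 and 4, which a raw argmax cannot resolve.
t = idw_peak([0, 1, 3, 7, 6, 2, 0], peak_idx=3, radius=2)
```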
Institute of Scientific and Technical Information of China (English)
龙敏; 谭丽
2011-01-01
An image/video encryption algorithm using chaos-based weight variation of the Huffman tree is proposed. During entropy coding, DC coefficients are encrypted by varying the weights of the Huffman tree with a doubly coupled Logistic chaotic sequence (the tree structure is left unchanged while the path values are varied), and AC coefficients are encrypted through their codeword indices. The security, computational complexity, and compression ratio of the algorithm are analyzed. Simulation results show that the algorithm has essentially no impact on compression efficiency, low computational complexity, high security, and good real-time performance, making it suitable for real-time image services on networks.
A Cloud Service Searching Algorithm Based on Weight
Institute of Scientific and Technical Information of China (English)
孙寒玉; 顾春华; 万锋; 杨巍巍
2016-01-01
Cloud computing technology is developing rapidly, and the functions and service quality of the many available cloud services differ from one another. In view of this, a service weight computation algorithm is proposed. The algorithm computes and ranks the weights of service instances by analyzing the relationship between atomic and composite services and combining the reliability, freshness, and load of each service instance, so that the best instance is selected to provide the service. Data analysis and simulation experiments show that the algorithm effectively selects the current optimal service instance and improves the availability, efficiency, and robustness of the cloud platform.
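The ranking step the abstract describes can be sketched as a weighted score over the three per-instance metrics. The weight values, the linear combination, and the use of inverse load are illustrative assumptions, not the paper's formula:

```python
def rank_instances(instances, weights=(0.5, 0.3, 0.2)):
    """Rank service instances by a weighted score of reliability, freshness,
    and (inverse) load. Each instance is (name, reliability, freshness, load),
    all metrics assumed normalized to [0, 1]."""
    wr, wf, wl = weights
    scored = [(wr * r + wf * f + wl * (1.0 - load), name)
              for name, r, f, load in instances]
    return [name for score, name in sorted(scored, reverse=True)]
```

The first element of the returned list is the instance the dispatcher would select to serve the request.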
Energy Technology Data Exchange (ETDEWEB)
Rahmani, Yashar, E-mail: yashar.rahmani@gmail.com [Department of Physics, Faculty of Engineering, Islamic Azad University, Sari Branch, Sari (Iran, Islamic Republic of); Shahvari, Yaser [Department of Computer Engineering, Payame Noor University (PNU), P.O. Box 19395-3697, Tehran (Iran, Islamic Republic of); Kia, Faezeh [Golestan Institute of Higher Education, Gorgan 49139-83635 (Iran, Islamic Republic of)
2017-03-15
Highlights: • This article was an attempt to optimize reloading pattern of Bushehr VVER-1000 reactor. • A combination of weighting factor method and the imperialist competitive algorithm was used. • The speed of optimization and desirability of the proposed pattern increased considerably. • To evaluate arrangements, a coupling of WIMSD5-B, CITATION-LDI2 and WERL codes was used. • Results reflected the considerable superiority of the proposed method over direct optimization. - Abstract: In this research, an innovative solution is described which can be used with a combination of the new imperialist competitive algorithm and the weighting factor method to improve speed and increase globality of search in reloading pattern optimization of VVER-1000 reactors in transient cycles and even obtain more desirable results than conventional direct method. In this regard, to reduce the scope of the assumed searchable arrangements, first using the weighting factor method and based on values of these coefficients in each of the 16 types of loadable fuel assemblies in the second cycle, the fuel assemblies were classified in more limited groups. In consequence, the types of fuel assemblies were reduced from 16 to 6 and consequently the number of possible arrangements was reduced considerably. Afterwards, in the first phase of optimization the imperialist competitive algorithm was used to propose an optimum reloading pattern with 6 groups. In the second phase, the algorithm was reused for finding desirable placement of the subset assemblies of each group in the optimum arrangement obtained from the previous phase, and thus the retransformation of the optimum arrangement takes place from the virtual 6-group mode to the real mode with 16 fuel types. In this research, the optimization process was conducted in two states. In the first state, it was tried to obtain an arrangement with the maximum effective multiplication factor and the smallest maximum power peaking factor. In
An R-NIC Algorithm Based on ReliefF Feature Weighting
Institute of Scientific and Technical Information of China (English)
陈晓琳; 姬波; 叶阳东
2015-01-01
Nonparametric Information-theoretic Clustering (NIC) uses a nonparametric estimate of the average within-cluster entropy to maximize the estimated mutual information between data points and clusters, which effectively reduces the cost of human involvement. However, the algorithm assumes that all features of the samples contribute equally to cluster analysis, an assumption inconsistent with many practical situations. This paper therefore proposes a nonparametric feature-weighted clustering algorithm based on ReliefF, named R-NIC, which accounts for the differing influence of each feature. It uses ReliefF to weight and transform the features, suppressing redundant features and strengthening informative ones, and then clusters in the transformed feature space to improve the clustering results. Experimental results on UCI datasets show that the proposed R-NIC algorithm outperforms the NIC algorithm.
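The feature-weighting idea behind ReliefF can be illustrated with a simplified binary-class Relief: features that separate a sample from its nearest miss (other class) gain weight, while features that separate it from its nearest hit (same class) lose weight. This is a sketch of the principle only, not full ReliefF (no k nearest neighbors, no multi-class handling):

```python
import numpy as np

def relief_weights(X, y, n_iter=100, rng=None):
    """Simplified binary-class Relief feature weighting using L1 distances."""
    rng = np.random.default_rng(0 if rng is None else rng)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        i = rng.integers(n)
        dists = np.abs(X - X[i]).sum(axis=1)
        dists[i] = np.inf                       # never pick the sample itself
        same = (y == y[i])
        hit = np.argmin(np.where(same, dists, np.inf))
        miss = np.argmin(np.where(~same, dists, np.inf))
        w += np.abs(X[i] - X[miss]) - np.abs(X[i] - X[hit])
    return w / n_iter
```

In an R-NIC-style pipeline, each feature would then be rescaled by its learned weight before clustering in the transformed space.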
Yang, Qin; Zou, Hong-Yan; Zhang, Yan; Tang, Li-Juan; Shen, Guo-Li; Jiang, Jian-Hui; Yu, Ru-Qin
2016-01-15
Most proteins localize to more than one organelle in a cell. Unmixing the localization patterns of proteins is critical for understanding protein functions and other vital cellular processes. Here, a non-linear machine learning technique is proposed for the first time for protein pattern unmixing. The variable-weighted support vector machine (VW-SVM) is a demonstrably robust modeling technique with flexible and rational variable selection. Optimized by particle swarm optimization (PSO), a global stochastic optimization technique, VW-SVM becomes an adaptive, parameter-free method for automated unmixing of protein subcellular patterns. Results obtained on a set of fluorescence microscope images of cells indicate that PSO-optimized VW-SVM is able to extract useful pattern features by optimally rescaling each variable for non-linear SVM modeling, leading to improved performance in multiplex protein pattern unmixing compared with conventional SVM and other existing pattern unmixing methods.
Adaptive security weighted clustering algorithm for Ad Hoc networks
Institute of Scientific and Technical Information of China (English)
马豫青; 李晓宇
2014-01-01
To eliminate the security risk of malicious nodes being elected as cluster heads during clustering, and to guarantee correct clustering and stable operation of an Ad Hoc network, a new adaptive security weighted clustering algorithm is proposed based on node degree, relative mobility, residual energy, and a security evaluation metric. To ensure the accuracy of the security factor during clustering, the security evaluation metric is computed from both an external intrusion detection system and the internal trust degree of the nodes. A corresponding cluster management process is given based on this algorithm. Simulation results show that the algorithm not only improves clustering performance but also significantly improves the security of Ad Hoc networks.
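Weighted clustering algorithms of this family typically elect as cluster head the node with the best combined score over the listed factors. The linear form, the signs, and the weight values below are illustrative assumptions; only the four factors themselves come from the abstract:

```python
def clusterhead_score(degree, mobility, energy, trust,
                      w=(0.3, 0.2, 0.25, 0.25)):
    """Combined weighted score for cluster-head election: higher degree,
    residual energy, and trust are better; higher relative mobility is worse.
    All inputs assumed normalized to [0, 1]."""
    w1, w2, w3, w4 = w
    return w1 * degree - w2 * mobility + w3 * energy + w4 * trust
```

A stable, trusted node then outscores a fast-moving or low-trust one, which is exactly the property the security-weighted election needs.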
A weighted fusion WiFi positioning algorithm based on distance rendezvous
Institute of Scientific and Technical Information of China (English)
彭雪生; 花向红; 邱卫宁; 魏康; 刘少伟
2017-01-01
WiFi indoor positioning based on distance rendezvous is affected by the environment: the low precision of distance inversion leads to large positioning errors, and different distance-inversion models affect the error characteristics differently. Considering that the Gaussian process regression (GPR) inversion model over-smooths and clusters the positioning results, while the conventional path-loss model has low inversion precision and large errors, this paper proposes a weighted fusion positioning algorithm based on the GPR prediction model and the path-loss model. Experimental results show that the fusion algorithm effectively improves positioning accuracy and clearly alleviates the clustering of results produced by the GPR-based algorithm.
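A generic way to fuse two position estimates with complementary error characteristics is inverse-variance weighting. This is a sketch of weighted fusion in general; the paper's actual weights for the GPR and path-loss estimates are not specified here and the variances are assumed known:

```python
import numpy as np

def fuse_positions(p1, var1, p2, var2):
    """Inverse-variance weighted fusion of two position estimates:
    the less noisy estimate receives the larger weight."""
    w1, w2 = 1.0 / var1, 1.0 / var2
    return (w1 * np.asarray(p1) + w2 * np.asarray(p2)) / (w1 + w2)
```

With equal variances the result is the midpoint; as one model's variance grows, the fused position moves toward the other model's estimate.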
Institute of Scientific and Technical Information of China (English)
张艳秋; 徐六通; 王柏
2003-01-01
Reducing inconsistency is the key to improving data quality during data integration. This paper first presents a weighted similarity-coefficient algorithm that is superior to traditional algorithms when the source data have multiple characteristic items, all of which must be taken into account, especially in complex information integration. The algorithm is then applied to an experiment integrating telecommunication customer data; the data-clustering results show that it has high feasibility and precision.
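A weighted similarity coefficient over multiple characteristic items can be sketched as a weight-normalized sum of per-field similarities. The binary per-field measure and the example weights are assumptions for illustration; the paper's measure may be graded rather than exact-match:

```python
def weighted_similarity(a, b, weights):
    """Weighted similarity coefficient between two records: per-field
    similarity (1 for an exact match, else 0) combined by field weights
    and normalized so the result lies in [0, 1]."""
    assert len(a) == len(b) == len(weights)
    total = sum(weights)
    return sum(w * (1.0 if x == y else 0.0)
               for x, y, w in zip(a, b, weights)) / total
```

Records whose high-weight fields agree score close to 1 even when low-weight fields disagree, which is what lets important characteristics dominate the match decision.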
Online Structure Learning Algorithm for Weighted Networks
Institute of Scientific and Technical Information of China (English)
蒋晓娟; 张文生
2016-01-01
With the continuous development of Internet technology, the scale of network datasets has grown massively, and analyzing the structure of network data is a research hotspot in machine learning and network applications. In this paper, a scalable online learning algorithm is proposed to speed up inference of the latent structure of weighted networks. First, an exponential-family distribution is used to represent the generative process of weighted networks. Then, using stochastic variational inference, the online weighted stochastic block model (ON-WSBM) is developed to efficiently approximate the posterior distribution of the underlying block structure. ON-WSBM adopts an incremental, subsampling-based approach to reduce the time complexity of optimization, and then employs stochastic optimization with natural gradients to simplify the calculation and further accelerate learning. Extensive experiments on four popular datasets demonstrate that ON-WSBM can efficiently capture the community structure of complex weighted networks and achieve comparatively high prediction accuracy in a short time.
Gray Weighted CT Reconstruction Algorithm Based on Variable Voltage
Institute of Scientific and Technical Information of China (English)
李权; 陈平; 潘晋孝
2014-01-01
In conventional CT reconstruction at a fixed voltage, the projection data are often overexposed or underexposed, so the reconstruction results are poor. To solve this problem, variable-voltage CT reconstruction is proposed. Effective projection sequences of a structural component, matched to its effective thickness, are obtained at the varying voltages. On the basis of iterative images from the ART algorithm, the total variation is adjusted and minimized to optimize the reconstruction. During reconstruction, following the gray weighted algorithm, the image reconstructed at a lower voltage is used as the initial value for the effective-projection reconstruction at the adjacent higher voltage, and so on up to the highest voltage, until the complete structural information is reconstructed. Experiments show that the proposed algorithm fully reflects the information of a complicated structural component and yields more stable pixel values.
Improved Algorithm of Text Feature Weighting Based on Information Gain
Institute of Scientific and Technical Information of China (English)
李凯齐; 刁兴春; 曹建军
2011-01-01
The idf factor of the traditional tf.idf algorithm can only evaluate, at a macroscopic level, a feature's ability to discriminate between documents; it cannot reflect how differences in a feature's distribution across the documents and classes of the training set affect the computed weight, which reduces the accuracy of text representation. To solve this problem, this paper proposes an improved feature-weighting method called tf.ig.igc. Starting from an analysis of feature distributions, it introduces the concept of information gain from information theory to jointly account for both dimensions of feature distribution, overcoming the shortcomings of the traditional formula. Experimental results on two open-source corpora show that, compared with the tf.idf.ig and tf.idf.igc feature-weighting methods, tf.ig.igc is more effective at computing feature weights.
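The information-gain component such weighting schemes build on can be computed from a 2x2 contingency table of feature presence versus class. This sketches only the standard IG computation; the exact combination with term frequency that defines tf.ig.igc is the paper's contribution and is not reproduced here:

```python
import math

def information_gain(n11, n10, n01, n00):
    """Information gain of a feature for a binary class split, from counts
    n11 (present, positive), n10 (present, negative),
    n01 (absent, positive),  n00 (absent, negative)."""
    def h(ps):
        return -sum(p * math.log2(p) for p in ps if p > 0)
    n = n11 + n10 + n01 + n00
    h_prior = h([(n11 + n01) / n, (n10 + n00) / n])      # class entropy
    p_f = (n11 + n10) / n                                 # P(feature present)
    h_present = h([n11 / (n11 + n10), n10 / (n11 + n10)]) if n11 + n10 else 0.0
    h_absent = h([n01 / (n01 + n00), n00 / (n01 + n00)]) if n01 + n00 else 0.0
    return h_prior - p_f * h_present - (1 - p_f) * h_absent
```

A feature that perfectly separates the classes scores 1 bit; a feature distributed identically across classes scores 0, which is the distributional distinction the abstract says plain idf misses.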
Weighted Fair Queuing and Feedback Round Robin Scheduling Algorithm on MPLmS%MPLmS中基于WFQ的轮转调度算法
Institute of Scientific and Technical Information of China (English)
周乃富
2014-01-01
This paper analyzes the basic principles of MPLmS networks. To better reflect QoS in label allocation, multi-protocol lambda switching (MPLmS), a new optical interconnection technology that combines multi-protocol label switching (MPLS) traffic-engineering control with optical cross-connects, is introduced. The label allocation strategy is analyzed and designed in detail, and a new weighted fair queuing (WFQ) based feedback round-robin scheduling algorithm is proposed.
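The weighted round-robin service discipline underlying such schedulers can be sketched as follows: in each round, queue i may send up to its weight in packets. This illustrates the generic discipline only, not the paper's feedback mechanism:

```python
def weighted_round_robin(queues, weights, rounds=1):
    """Weighted round-robin sketch: per round, queue i sends up to weights[i]
    packets. Consumes the queues in place; returns the service order."""
    order = []
    for _ in range(rounds):
        for q, w in zip(queues, weights):
            for _ in range(w):
                if q:
                    order.append(q.pop(0))
    return order
```

With weights 2:1, the first queue receives twice the service share of the second whenever both are backlogged, which is the bandwidth-proportionality property WFQ-family schedulers aim to approximate.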
Gur, Ilan; Riskin, Arieh; Markel, Gal; Bader, David; Nave, Yaron; Barzilay, Bernard; Eyal, Fabien G; Eisenkraft, Arik
2015-03-01
Diagnosis of late-onset sepsis (LOS) in very low birth weight (VLBW) preterm infants relies mainly on clinical suspicion, whereas prognosis depends on early initiation of antibiotic treatment. RALIS is a mathematical algorithm for early detection of LOS incorporating six vital signs measured every 2 hours. The aim of this study is to assess the ability of RALIS to detect LOS before clinical suspicion. A total of 118 VLBW preterm infants (gestational age detection of LOS between RALIS and clinical/culture evidence of LOS. Of the 2,174 monitoring days, RALIS indicated sepsis in 590 days, and LOS was positively diagnosed in 229 days. Sensitivity, specificity, and positive and negative predictive values were 74.6, 80.7, 38.8, and 95.1%, respectively. RALIS provided an indication of sepsis on average 3 days before clinical suspicion. RALIS has promising potential as an easy-to-implement, noninvasive early indicator of LOS, especially for ruling out LOS in VLBW high-risk infants.
Adaptive Context Tree Weighting
O'Neill, Alexander; Shao, Wen; Sunehag, Peter
2012-01-01
We describe an adaptive context tree weighting (ACTW) algorithm, as an extension to the standard context tree weighting (CTW) algorithm. Unlike the standard CTW algorithm, which weights all observations equally regardless of the depth, ACTW gives increasing weight to more recent observations, aiming to improve performance in cases where the input sequence is from a non-stationary distribution. Data compression results show ACTW variants improving over CTW on merged files from standard compression benchmark tests while never being significantly worse on any individual file.
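The recency-weighting idea in ACTW can be seen in miniature with a Krichevsky-Trofimov estimator whose symbol counts are exponentially discounted, so recent observations carry more weight. This is a sketch of the principle only, not ACTW's actual weighting rule, and the discount factor is an assumption:

```python
import math

def discounted_kt(bits, gamma=1.0):
    """Sequential log-probability of a binary sequence under a KT estimator
    with exponentially discounted counts (gamma < 1 fades old evidence;
    gamma = 1 recovers the plain KT estimator)."""
    a = b = 0.0                               # discounted counts of 0s and 1s
    logp = 0.0
    for bit in bits:
        p1 = (b + 0.5) / (a + b + 1.0)        # KT probability of a 1
        logp += math.log(p1 if bit else 1.0 - p1)
        a, b = gamma * a, gamma * b           # discount before updating
        if bit:
            b += 1.0
        else:
            a += 1.0
    return logp
```

On a sequence whose statistics switch partway through, the discounted estimator adapts faster after the switch and assigns the sequence higher probability, mirroring ACTW's gain on non-stationary sources.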
Gray Weighted Algorithm for Variable Voltage CT Reconstruction
Institute of Scientific and Technical Information of China (English)
李权; 陈平; 潘晋孝
2014-01-01
In conventional computed tomography (CT) reconstruction at a fixed voltage, the dynamic range of the imaging system is limited, so the projection data often appear overexposed or underexposed and the reconstruction results are poor. To solve this problem, variable-voltage CT reconstruction has been proposed. Effective projection sequences of a structural component, matched to its effective thickness, are obtained under the varying voltage. The total variation is adjusted and minimized to optimize the reconstruction on the basis of iterative images from the algebraic reconstruction technique (ART). During reconstruction, following the gray weighted algorithm, the image reconstructed at a lower voltage is used as the initial value for the effective-projection reconstruction at the adjacent higher voltage, and so on up to the highest voltage, so that the complete structural information is reconstructed. Simulation results show that the proposed algorithm fully reflects the information of a complicated structural component, with pixel values more stable than those of the conventional method.
Song, Ting; Li, Nan; Zarepisheh, Masoud; Li, Yongbao; Gautier, Quentin; Zhou, Linghong; Mell, Loren; Jiang, Steve; Cerviño, Laura
2016-01-01
Intensity-modulated radiation therapy (IMRT) currently plays an important role in radiotherapy, but its treatment plan quality can vary significantly among institutions and planners. Treatment plan quality control (QC) is a necessary component for individual clinics to ensure that patients receive treatments with high therapeutic gain ratios. The voxel-weighting factor-based plan re-optimization mechanism has been proved able to explore a larger Pareto surface (solution domain) and therefore increase the possibility of finding an optimal treatment plan. In this study, we incorporated additional modules into an in-house developed voxel weighting factor-based re-optimization algorithm, which was enhanced as a highly automated and accurate IMRT plan QC tool (TPS-QC tool). After importing an under-assessment plan, the TPS-QC tool was able to generate a QC report within 2 minutes. This QC report contains the plan quality determination as well as information supporting the determination. Finally, the IMRT plan quality can be controlled by approving quality-passed plans and replacing quality-failed plans using the TPS-QC tool. The feasibility and accuracy of the proposed TPS-QC tool were evaluated using 25 clinically approved cervical cancer patient IMRT plans and 5 manually created poor-quality IMRT plans. The results showed high consistency between the QC report quality determinations and the actual plan quality. In the 25 clinically approved cases that the TPS-QC tool identified as passed, a greater difference could be observed for dosimetric endpoints for organs at risk (OAR) than for planning target volume (PTV), implying that better dose sparing could be achieved in OAR than in PTV. In addition, the dose-volume histogram (DVH) curves of the TPS-QC tool re-optimized plans satisfied the dosimetric criteria more frequently than did the under-assessment plans. In addition, the criteria for unsatisfied dosimetric endpoints in the 5 poor-quality plans could typically be
Institute of Scientific and Technical Information of China (English)
黄超; 张剑云; 朱家兵; 张正言
2016-01-01
Because classical DOA (direction of arrival) estimation algorithms often fail on coherent signals, a new DOA estimation algorithm based on a weighted, modified MUSIC (multiple signal classification) algorithm is proposed in this paper. The algorithm first reconstructs the data covariance matrix and then constructs a new noise subspace; finally, the eigenvectors of the noise subspace are weighted according to the magnitudes of their eigenvalues, completing DOA estimation for coherent signals. Computer simulation shows that the algorithm successfully estimates the DOA of coherent signals.
Institute of Scientific and Technical Information of China (English)
高敏; 郭业才
2012-01-01
When the multi-modulus algorithm (MMA) is used to equalize high-order QAM signals, it suffers from slow convergence and large mean-square error. To overcome these problems, an orthogonal wavelet transform weighted multi-modulus blind equalization algorithm based on simulated-annealing glowworm swarm optimization (SA-GSO-WT-WMMA) is proposed. The algorithm adds a weighting term to the traditional MMA and introduces the simulated-annealing glowworm swarm optimization (SA-GSO) algorithm and the orthogonal wavelet transform (WT). The weighting term adaptively adjusts the modulus value in the cost function; the strong global search ability of SA-GSO optimizes the initial weight vector of the equalizer; and the decorrelation ability of WT reduces the autocorrelation of the signal, effectively improving equalization. Simulations on an underwater acoustic channel show that the algorithm excels both at reducing the steady-state mean-square error and at accelerating convergence.
Institute of Scientific and Technical Information of China (English)
金天; 原青; 郑光辉; 张立杨; 张军
2014-01-01
In GPS attitude determination, the traditional elevation-angle weighting cannot effectively reflect signal blockage. Starting from the carrier-to-noise ratio (C/N0) reported by the receiver, this paper analyzes the accuracy of the carrier phase and proposes a new weight matrix W for the observation model. With the weighted model, integer least squares (ILS) is used to estimate the float solutions of the ambiguities and the baseline vector. The proposed weighting reduces the influence of low-precision carrier-phase observations from weak-signal satellites on the attitude-determination success rate. Comparative experiments verify the rationality and effectiveness of the proposed algorithm: compared with the unweighted algorithm, C/N0 weighting raises the single-epoch attitude-determination success rate by about 5 percentage points, and by 1 to 2 percentage points compared with elevation-angle weighting.
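The effect of a weight matrix W on the float solution can be sketched with ordinary weighted least squares. The mapping from C/N0 to weight used below (weight proportional to 10^(C/N0 / 10)) is a common heuristic assumed for illustration, not the paper's derived model:

```python
import numpy as np

def weighted_lsq(A, y, cn0_dbhz):
    """Weighted least-squares solution x of A x ≈ y, with the diagonal weight
    matrix W built from per-observation carrier-to-noise density C/N0 (dB-Hz)."""
    w = 10.0 ** (np.asarray(cn0_dbhz) / 10.0)
    W = np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ y)
```

An observation from a strong-signal satellite then dominates the solution, while a weak-signal observation is nearly ignored, which is the behavior the weighting is designed to produce.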
Institute of Scientific and Technical Information of China (English)
陈阳; 王大志
2016-01-01
Type-2 fuzzy logic systems are a hot topic in current research, and type reduction is one of their most important blocks. The Karnik-Mendel (KM) algorithms are the standard algorithms used to compute type reduction in interval type-2 fuzzy logic systems. By comparing the sum operation in the discretized KM algorithms with the integral operation in the continuous KM (CKM) algorithms, this paper extends the standard KM algorithms into three different forms of weighted KM (WKM) algorithms according to the Newton-Cotes quadrature formulas of numerical integration, with the KM algorithms becoming a special case of the WKM algorithms. Three computer simulation examples illustrate and analyze the performance of the WKM algorithms: compared with the traditional KM algorithms, the WKM algorithms have smaller absolute error and faster convergence, offering potential application value to designers and users of type-2 fuzzy logic systems.
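The Newton-Cotes weightings referred to above replace the plain sums of the discretized KM iteration with quadrature-weighted sums. The weight vectors themselves are standard; only their application inside KM's centroid computation is the paper's contribution and is not reproduced here:

```python
def newton_cotes_weights(n, kind="trapezoid"):
    """Quadrature weights over n equally spaced samples: uniform (rectangle,
    the plain-sum / standard-KM case), trapezoid, or Simpson (n must be odd).
    Multiply by the sample spacing h to approximate an integral."""
    if kind == "rectangle":
        return [1.0] * n
    if kind == "trapezoid":
        return [0.5] + [1.0] * (n - 2) + [0.5]
    if kind == "simpson":
        assert n % 2 == 1, "Simpson's rule needs an odd number of samples"
        w = [1.0] + [4.0 if i % 2 else 2.0 for i in range(1, n - 1)] + [1.0]
        return [x / 3.0 for x in w]
    raise ValueError(kind)
```

For example, Simpson weights integrate a quadratic exactly, which is the source of the smaller absolute error the abstract reports relative to plain summation.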
Institute of Scientific and Technical Information of China (English)
孙世然; 卡米力·木依丁; 刘文华; 艾斯卡尔·艾木都拉
2012-01-01
A performance index based on information entropy is used to describe, from a probabilistic viewpoint, how the distribution of image information entropy changes as the weights change. Key weighted regions can be determined from the range the user is interested in, the weight selection is determined by an improved Monkey-King genetic algorithm, and regions of interest can be identified automatically. Regional weighted information entropy can extract both the color and the spatial characteristics of an image while retaining the mathematical properties of Shannon entropy. Experiments show that regional weighted information entropy achieves higher accuracy than a pure color-feature extraction algorithm.
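The core quantity can be sketched as a Shannon entropy over a gray-level histogram in which each pixel's contribution is scaled by its region weight. The per-pixel weighting scheme below is an illustrative assumption; the paper selects the weights via its improved Monkey-King genetic algorithm:

```python
import numpy as np

def weighted_entropy(image, region_weights):
    """Regionally weighted Shannon entropy (bits) of an 8-bit grayscale image:
    each pixel adds its region weight, rather than 1, to the histogram."""
    hist = np.zeros(256)
    for g, w in zip(image.ravel(), region_weights.ravel()):
        hist[int(g)] += w
    p = hist / hist.sum()
    p = p[p > 0]
    return float(-np.sum(p * np.log2(p)))
```

Upweighting a region skews the effective gray-level distribution toward that region's values, so the entropy reflects the user's area of interest while keeping Shannon entropy's mathematical properties.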
Vertebral artery aneurysm--a unique hazard of head banging by heavy metal rockers. Case report.
Egnor, M R; Page, L K; David, C
A 15-year-old drummer in a neighborhood rock music band suffered a traumatic true aneurysm of the cervical vertebral artery from violent head and neck motion. He underwent excision of the aneurysm after distal and proximal ligation of the artery. He is neurologically normal 1 year after surgery. The mechanisms of injury caused by extremes of cervical motion, as well as 5 previously reported cases of extracranial vertebral artery aneurysm from closed trauma, are discussed. Excision of vertebral artery aneurysms in patients with emboli from a mural thrombus is recommended. The consequences of vertebral artery ligation and the indications for distal reconstruction are discussed.
Modified Clipped LMS Algorithm
Directory of Open Access Journals (Sweden)
Lotfizad Mojtaba
2005-01-01
A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. It can also be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
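A clipped-LMS-style update with three-level input quantization can be sketched as follows. The filter length, step size, and threshold are illustrative assumptions, and this shows the generic clipped update rather than the paper's specific MCLMS modification:

```python
import numpy as np

def clipped_lms(x, d, n_taps=4, mu=0.01, threshold=0.5):
    """Clipped LMS system identification: w += mu * e * q(u), where q maps
    each tap input to -1, 0, or +1 by threshold clipping. Replacing the
    input with its quantized version removes multiplications in the update."""
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]           # tap-delay line
        e = d[n] - w @ u                            # a-priori error
        q = np.where(np.abs(u) < threshold, 0.0, np.sign(u))
        w += mu * e * q
    return w
```

Identifying a known two-tap channel from white Gaussian input, the weights converge near the true impulse response despite the coarsely quantized update direction.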
Image Compression Algorithm Based on Weighted Superposition of Interferograms
Institute of Scientific and Technical Information of China (English)
苑颖
2015-01-01
Information loss easily occurs during image compression, and because traditional algorithms let even weakly correlated images participate in equally weighted computation, the results are biased and distorted and compression cannot be performed effectively. An image compression algorithm based on weighted superposition of interferograms is therefore proposed. The average deformation-phase change rate of highly correlated points is derived and, according to the law of error propagation, the atmospheric-delay disturbance affecting highly correlated points after all interferograms are stacked is obtained. The correlation coefficient of each image is computed and highly correlated target points are selected according to the model; after the interferograms are stacked, the disturbance of atmospheric delay on the linear deformation rate at highly correlated points is obtained. The Exp-Golomb order of each sample is obtained by shift operations, the non-negative mapping of the encoded data is completed, and the Exp-Golomb order of the previous sample is used to estimate that of the current sample. The compression effect is measured by the compression ratio and the peak signal-to-noise ratio between the original and compressed interferogram data, and the original and reconstructed spectra are compared by the spectral relative quadratic error (RQE). Simulation results show that the proposed method achieves high accuracy.
Institute of Scientific and Technical Information of China (English)
陶雄飞; 王跃东; 柳盼
2016-01-01
An improved weighted bit-flipping decoding algorithm for LDPC codes is presented. The proposed algorithm introduces an updating rule for variable nodes that makes the computation of the flipping function more precise, efficiently improving the reliability of the flipped bits and reducing the errors caused by loop oscillation. Simulation results show that, over the additive white Gaussian noise channel, the proposed algorithm achieves better BER performance than the sum-of-magnitude based weighted bit-flipping (SMWBF) decoding algorithm with only a small increase in computational complexity.
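The weighted bit-flipping family the abstract builds on can be sketched as follows: each parity check is weighted by the reliability of its least reliable participating bit, and the bit with the largest weighted count of failed checks is flipped. This is one common WBF form; the paper's variable-node updating rule is not reproduced here:

```python
import numpy as np

def wbf_decode(H, y_hard, reliability, max_iter=50):
    """Weighted bit-flipping sketch for a binary LDPC parity-check matrix H:
    flip the bit maximizing the weighted (failed minus passed) check sum."""
    H = np.asarray(H)
    x = np.array(y_hard, dtype=int)
    # weight of each check: the smallest input reliability among its bits
    check_w = np.array([reliability[H[m] == 1].min() for m in range(H.shape[0])])
    for _ in range(max_iter):
        syndrome = (H @ x) % 2
        if not syndrome.any():
            return x                        # all parity checks satisfied
        metric = ((2 * syndrome - 1) * check_w) @ H
        x[np.argmax(metric)] ^= 1
    return x
```

On a toy code with a single bit error, the decoder locates and flips the erroneous position and terminates once the syndrome is zero.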
A weight-based initial centers selection algorithm for K-modes clustering
Institute of Scientific and Technical Information of China (English)
江峰; 杜军威; 刘国柱; 眭跃飞
2016-01-01
Existing initialization methods for K-modes clustering do not consider that different attributes have different importance. To solve this problem, an initial-center selection algorithm based on weighted density and weighted overlap distance, called Ini-Weight, is proposed. Ini-Weight selects initial centers by calculating the density of each object and the distance between any two objects; in both calculations, different attributes are assigned different weights according to their significance. Finally, Ini-Weight is compared with existing methods on UCI data sets, and the results show that it can effectively distinguish different attributes and improves the accuracy of initial-center selection.
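The two ingredients the abstract names, weighted overlap distance and weighted density, can be sketched as follows. The density definition (inverse of total weighted distance) and the zero-distance exclusion rule for center separation are illustrative assumptions, not Ini-Weight's exact formulas:

```python
def weighted_overlap(a, b, weights):
    """Weighted overlap distance between two categorical objects:
    the sum of attribute weights over attributes whose values differ."""
    return sum(w for x, y, w in zip(a, b, weights) if x != y)

def pick_initial_centers(data, weights, k):
    """Ini-Weight-style sketch: rank objects by weighted density and greedily
    take k centers that are not identical to an already chosen center."""
    dens = [1.0 / (1e-9 + sum(weighted_overlap(o, p, weights) for p in data))
            for o in data]
    order = sorted(range(len(data)), key=lambda i: -dens[i])
    centers = []
    for i in order:
        if all(weighted_overlap(data[i], data[j], weights) > 0 for j in centers):
            centers.append(i)
        if len(centers) == k:
            break
    return [data[i] for i in centers]
```

On categorical data with two clear groups, the selected centers land one per group, which is the behavior a K-modes run needs from its initialization.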
Tang, Shaojie; Tang, Xiangyang
2016-03-01
Axial cone beam (CB) computed tomography (CT) reconstruction is still the most desirable in clinical applications. As the potential candidates with analytic form for the task, the back projection-filtration (BPF) and the derivative backprojection filtered (DBPF) algorithms, in which Hilbert filtering is the common algorithmic feature, are originally derived for exact helical and axial reconstruction from CB and fan beam projection data, respectively. These two algorithms have been heuristically extended for axial CB reconstruction via adoption of virtual PI-line segments. Unfortunately, however, streak artifacts are induced along the Hilbert filtering direction, since these algorithms are no longer accurate on the virtual PI-line segments. We have proposed to cascade the extended BPF/DBPF algorithm with orthogonal butterfly filtering for image reconstruction (namely axial CB-BPP/DBPF cascaded with orthogonal butterfly filtering), in which the orientation-specific artifacts caused by post-BP Hilbert transform can be eliminated, at a possible expense of losing the BPF/DBPF's capability of dealing with projection data truncation. Our preliminary results have shown that this is not the case in practice. Hence, in this work, we carry out an algorithmic analysis and experimental study to investigate the performance of the axial CB-BPP/DBPF cascaded with adequately oriented orthogonal butterfly filtering for three-dimensional (3D) reconstruction in region of interest (ROI).
Institute of Scientific and Technical Information of China (English)
王士龙; 徐玉如; 庞永杰
2011-01-01
Underwater images have a low S/N ratio and fuzzy edges; processing them directly with traditional methods gives unsatisfying results. Although the traditional fuzzy C-means (FCM) algorithm can sometimes separate an image into object and background, its time-consuming computation is often an obstacle. The mission of the vision system of an autonomous underwater vehicle (AUV) is to rapidly and exactly extract information about objects in a complex environment so that the AUV can use the result to execute its next task. Therefore, using the statistical characteristics of the gray-image histogram, a fast and effective fuzzy C-means underwater image segmentation algorithm is presented. By modifying the fuzzy membership with a weighted histogram, the algorithm not only cuts down a large amount of data processing and storage compared with the traditional algorithm, speeding up segmentation, but also improves the quality of underwater image segmentation. Finally, particle swarm optimization (PSO) described by the sine function is introduced into the algorithm, making up for FCM's inability to reach the global optimal solution. Thus it accounts for the global effect while achieving a locally optimal solution, and further greatly increases computing speed. Experimental results indicate that the novel algorithm reaches better segmentation quality and reduces the processing time of each image, enhancing efficiency and satisfying the requirements of a highly effective, real-time AUV.
Improvement of the BP Algorithm Based on a Weighted Mode
Institute of Scientific and Technical Information of China (English)
李森林; 邓小武
2012-01-01
Back propagation (BP) is a supervised neural network learning algorithm, but the original algorithm converges slowly, has low accuracy, and its training process easily falls into local minima. In this paper we improve the BP neural network algorithm with weighting and introduced parameters, which partly overcomes these drawbacks. The improved BP algorithm is implemented in the C programming language and examined on real data about student employment. The experiments show that the modified algorithm is effective, and it also provides decision support for assessing the employability of college students.
Institute of Scientific and Technical Information of China (English)
刘太洪; 赵永雷
2016-01-01
To improve the accuracy of transformer fault diagnosis, a dynamic weighted fuzzy C-means clustering algorithm based on a genetic algorithm is proposed. The algorithm encodes cluster centers as floating-point chromosomes of variable length, with different lengths corresponding to different numbers of fault clusters, and uses weights to express the differing influence of sample points on the fault partitioning. Applied to dissolved gas analysis (DGA) data of power transformer oil, the algorithm accomplishes transformer fault diagnosis. Analysis of many examples and comparison of the results with other algorithms show that the algorithm achieves markedly higher diagnostic precision.
A Self-Adaptive Weight-Based Routing Algorithm in LEO Satellite Networks
Institute of Scientific and Technical Information of China (English)
江玉洁; 姚晔; 梁旭文
2013-01-01
To address the dynamically time-varying topology of LEO satellite networks, a self-adaptive weight-based routing algorithm is proposed. The algorithm comprehensively considers routing delay and handover frequency, guaranteeing selection priority for low-cost routes while balancing network traffic. Off-line computation on the ground simplifies on-board routing computation. In addition, packet paths are selected by combining real-time node status with a weighted routing table, giving the algorithm a degree of self-adaptivity to the real-time state of the network. Simulation analysis shows that the algorithm performs well in delay and delay jitter under congestion.
Slope One Algorithm Weighted by User Similarity
Institute of Scientific and Technical Information of China (English)
田松瑞
2016-01-01
The Slope One algorithm is based on a simple linear regression model; by reducing response time and maintenance difficulty, it significantly improves recommendation performance. However, Slope One does not consider the internal relevance among users: using all users' data without distinction is likely to cause deviation and degrade recommendation quality. In this paper we propose an improved Slope One algorithm that takes user similarity into account and modifies the rating deviation calculation formula. Combining the item-based Slope One algorithm with user-based collaborative filtering, a new hybrid recommendation algorithm, US-Slope One, is proposed. Experimental results on the MovieLens data set show that the proposed algorithm has better prediction accuracy and recommendation quality than the original Slope One algorithm.
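The similarity-weighted deviation idea can be sketched as follows. The cosine similarity over co-rated items and the exact way the weights enter the deviation and the final blend are illustrative assumptions; US-Slope One's actual formulas may differ:

```python
from math import sqrt

def similarity(ra, rb):
    """Cosine similarity over co-rated items (0 if no overlap).
    Note: with a single common item and positive ratings this is 1.0."""
    common = set(ra) & set(rb)
    if not common:
        return 0.0
    num = sum(ra[i] * rb[i] for i in common)
    da = sqrt(sum(ra[i] ** 2 for i in common))
    db = sqrt(sum(rb[i] ** 2 for i in common))
    return num / (da * db) if da and db else 0.0

def predict(ratings, target, item):
    """Predict target's rating for item: each co-rater's contribution to the
    item-item deviation is weighted by that user's similarity to the target,
    then per-item predictions are blended weighted by total support."""
    ru = ratings[target]
    num = den = 0.0
    for j, ruj in ru.items():
        if j == item:
            continue
        dev_num = dev_den = 0.0
        for user, r in ratings.items():
            if user == target or item not in r or j not in r:
                continue
            w = similarity(ru, r)
            dev_num += w * (r[item] - r[j])
            dev_den += w
        if dev_den > 0:
            num += (ruj + dev_num / dev_den) * dev_den
            den += dev_den
    return num / den if den else None
```

With two neighbours who both rate item "a" one point above item "b", a target who gave "b" a 4 is predicted to rate "a" as 5.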
DEFF Research Database (Denmark)
Rahimi, Maryam; Nielsen, Jesper Ødum; Pedersen, Troels
2014-01-01
…in multiple-input-multiple-output scenarios, but it has lower performance than the virtual signal-to-interference-plus-noise ratio algorithm in theory and practice. A real measurement environment consisting of two BSs and two users has been studied to evaluate the simulation results.
Institute of Scientific and Technical Information of China (English)
侯华; 李亘煊
2012-01-01
For multi-user OFDM systems, two weighted proportional-fairness cross-layer resource allocation schemes suitable for mixed traffic are proposed. Each user in the system is assumed to hold multiple queues, each carrying a different type of traffic. At the MAC layer, both schemes perform weighted proportional-fair scheduling: different packets in a user's queues are first granted different weights, user weights are computed from these values, each user's packets are sorted, and proportional rate constraints among users are set according to the ratio of each user's pending data. At the physical layer, both schemes take the maximization of the weighted sum capacity under the proportional rate constraints as the optimization objective and introduce swarm-intelligence algorithms into resource allocation under this objective. They differ in that scheme 1 introduces the artificial fish swarm algorithm into subcarrier allocation and allocates power with a newly derived power allocation formula, while scheme 2 introduces the cloud adaptive particle swarm optimization algorithm into subcarrier allocation and allocates power with the population migration algorithm. On this basis, both schemes transmit packets according to the per-user packet ordering provided by the weighted proportional-fair scheduling. Numerical simulation and performance analysis show that the two schemes effectively improve the total system rate while satisfying the delay requirements of users' traffic flows and guaranteeing fairness among users.
A Two-Level Arbitration Algorithm Based on Weight and Round-Robin
Institute of Scientific and Technical Information of China (English)
吴睿振; 杨银堂; 张丽; 陆锋雷
2013-01-01
A two-level arbitration algorithm based on weight and round-robin (RR) is presented. It sets weights by tickets and employs improved fixed priority (FP) and RR arbitration to work in turn under no contention and heavy contention, respectively. In the NonIdling and NonPreemptive (NINP) model, the proposed arbitration algorithm is much better in output bandwidth ratio, bandwidth utilization, power, and fanout, and also has advantages in speed and area, compared with the commonly used FP, RR, and Lottery arbitration algorithms. The proposed algorithm suits various request environments, is simple in logic and easy to implement, and can therefore be applied to SoC bus systems.
Institute of Scientific and Technical Information of China (English)
李尧尧; 廖红云; 曾孝平; 吴小林
2011-01-01
The concentric anchor beacons (CAB) localization algorithm is a range-free localization algorithm for wireless sensor networks. Compared with traditional range-based localization algorithms it reduces node energy consumption, but its localization accuracy is inferior. To further improve node localization accuracy, an improved weighted centroid algorithm based on CAB is proposed after analyzing radio propagation path loss. First, the process of the improved algorithm is introduced. Then, the influence of communication radius on the accuracy of the new localization algorithm is simulated, and its accuracy is compared with several existing range-free localization methods. Simulation results show that the new algorithm achieves higher positioning accuracy, is not sensitive to communication radius, and has value in productive practice.
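The weighted-centroid step can be sketched in a few lines. The 1/d weighting and the per-beacon distance estimates (which CAB would derive from path-loss analysis) are illustrative assumptions:

```python
def weighted_centroid(beacons):
    """Weighted centroid sketch: each beacon is (x, y, d_est), where d_est
    is an estimated distance to the unknown node. Each beacon contributes
    with weight 1/d_est, so nearer beacons dominate the estimate."""
    num_x = sum(x / d for x, y, d in beacons)
    num_y = sum(y / d for x, y, d in beacons)
    den = sum(1.0 / d for x, y, d in beacons)
    return num_x / den, num_y / den
```

Two equidistant beacons place the node at their midpoint; making one beacon's estimated distance larger pulls the estimate toward the nearer beacon, which is the behaviour the abstract relies on.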
Offset-Adjustment-Based Weighted Bit-Flipping Decoding Algorithm for LDPC Codes
Institute of Scientific and Technical Information of China (English)
张高远; 周亮; 文红
2014-01-01
Extensive simulations show that for low-density parity-check (LDPC) codes of low row/column weight, the modified sum-of-the-magnitude-based weighted bit-flipping (MSMWBF) decoding algorithm performs extraordinarily well, but this does not hold for finite-geometry (FG) LDPC codes with large row/column weight. This phenomenon is first analyzed theoretically. Then, an additive offset term is introduced to adjust the parity-check-equation reliability in the MSMWBF algorithm, improving decoding performance for LDPC codes with large row/column weight. Simulation results show that over the additive white Gaussian noise channel, at a bit error rate of 10^-5, the proposed algorithm gains about 0.63 dB over the MSMWBF algorithm with only a modest increase in implementation complexity.
Institute of Scientific and Technical Information of China (English)
王一; 邹继伟; 杨俊安; 刘辉; 白京路
2011-01-01
Manifold learning methods are sensitive to noise, especially in acoustic target recognition. To deal with this problem, we present a novel manifold learning algorithm for noisy manifolds, termed weighted neighborhood reconstruction (WNR). The algorithm builds, by weighted iteration, a curve that best reflects the trend of the noisy manifold sub-surface, extends this curve to reconstruct the sub-surface, and computes the low-dimensional embedding on the new surface. The proposed algorithm minimizes noise effects on the manifold while keeping the original surface trend. It is tested on a public database and on acoustic signals of low-altitude flying targets. Experimental results show that the proposed algorithm is robust against noise and outperforms the three other methods cited in this paper in both recognition accuracy and running time.
Institute of Scientific and Technical Information of China (English)
汪文明; 操小伟; 刘桂江; 施赵媛
2016-01-01
By analyzing the deficiencies of the traditional DV-Hop localization algorithm with respect to the random distribution of nodes in wireless sensor networks, an improved algorithm with two changes is proposed. First, instead of variance or deviation, the average one-hop distance is estimated by the minimum mean-squared error (MMSE) criterion. Second, to reflect the different influence of each beacon node on unknown nodes, inverse distance weighting is adopted to process the average one-hop distance. The improved algorithm is simulated on the MATLAB platform, and the results show that positioning accuracy is improved without increasing the complexity or cost of the original algorithm; the improvement is simple and practical.
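The two changes can be sketched directly. The MMSE hop size below is the standard least-squares fit c = Σ(d·h)/Σ(h²); the inverse-distance weighting uses hop counts as the distance proxy, which is an illustrative assumption about how the paper applies IDW:

```python
from math import dist  # Euclidean distance, Python 3.8+

def mmse_hop_size(beacons):
    """Per-beacon average hop distance by the MMSE criterion:
    argmin_c sum_j (d_ij - c*h_ij)^2  ->  c = sum(d*h) / sum(h*h).
    beacons: {id: (position, {other_id: hop_count})}."""
    sizes = {}
    for i, (pos_i, hops_i) in beacons.items():
        num = den = 0.0
        for j, (pos_j, _) in beacons.items():
            if i == j:
                continue
            d, h = dist(pos_i, pos_j), hops_i[j]
            num += d * h
            den += h * h
        sizes[i] = num / den
    return sizes

def node_hop_size(sizes, hops_to_node, p=1.0):
    """Inverse-distance weighting: beacons fewer hops away get larger
    weight 1/h^p when averaging the per-beacon hop sizes."""
    num = sum(sizes[i] / hops_to_node[i] ** p for i in sizes)
    den = sum(1.0 / hops_to_node[i] ** p for i in sizes)
    return num / den
```

On a toy collinear deployment where every hop really spans 10 units, both steps recover a hop size of 10, as expected.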
Institute of Scientific and Technical Information of China (English)
杨生叶; 王其兵
2013-01-01
Before a market-based price mechanism has formed, this paper discusses how the results of cross-region and cross-province electricity trading and the transaction limits are determined, and presents a weighting-factor algorithm for the transaction limit. The selection of the factors, the determination of their weights, and the calculation method for the transaction limit are described in detail, and a simulated calculation on an actual electricity trade verifies the energy-saving and emission-reduction effect of the algorithm.
Institute of Scientific and Technical Information of China (English)
唐卫东; 关志华; 吴中元
2002-01-01
Most existing multiobjective evolutionary algorithms (MOEAs), such as NPGA (Niched Pareto Genetic Algorithm) and NSGA (Non-dominated Sorting Genetic Algorithm), are based on the Pareto mechanism. In every cycle, these algorithms must rank or compare some or all individuals in the population, which is computationally expensive. This paper introduces WSTPEA (Weighted Sum approach and Tracing Pareto method), a Pareto-tracing method based on linear weighting with varying weights. Instead of obtaining all possible non-dominated solutions at once, the algorithm obtains one non-dominated solution per cycle; the number of weight changes controls the number of cycles, so that the population traverses the Pareto curve (surface). A detailed description and flow chart of the algorithm are given, two experimental test problems are computed, and the results are analyzed.
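The weight-sweeping idea behind Pareto tracing can be shown with a deliberately simplified stand-in: sweep the weight and keep one minimizer per setting. WSTPEA evolves a population rather than scanning a fixed candidate set, so this is only the scalarization skeleton, not the algorithm itself:

```python
def trace_pareto(f1, f2, candidates, steps=4):
    """Weighted-sum Pareto tracing over a fixed candidate set: sweep the
    weight w from 0 to 1 and, at each step, keep the candidate minimising
    w*f1 + (1-w)*f2 -- one (weakly) non-dominated point per weight setting."""
    front = []
    for k in range(steps + 1):
        w = k / steps
        best = min(candidates, key=lambda x: w * f1(x) + (1 - w) * f2(x))
        if best not in front:
            front.append(best)
    return front
```

For the classic convex bi-objective pair f1(x) = x², f2(x) = (x-2)², the sweep walks the front from the f2-optimum (x = 2) to the f1-optimum (x = 0), one point per weight change, mirroring the "one non-dominated solution per cycle" behaviour described above. Note that a pure weighted sum can miss non-convex parts of a Pareto front, one reason Pareto-based MOEAs exist.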
Zare Hosseini, Zeinab; Mohammadzadeh, Mahdi
2016-01-01
The rapid growth of information technology (IT) creates competitive advantages in the health care industry. Nowadays, many hospitals try to build successful customer relationship management (CRM) to recognize target and potential patients, increase patient loyalty and satisfaction, and ultimately maximize profitability. Many hospitals have large data warehouses containing customer demographic and transaction information, and data mining techniques can be used to analyze this data and discover hidden knowledge about customers. This research develops an extended RFM model, namely RFML (with the added parameter Length), based on the health care services of a public-sector hospital in Iran, with the idea that patient loyalty contrasts with customer loyalty, to estimate the customer lifetime value (CLV) of each patient. We used the two-step and K-means algorithms as clustering methods and a decision tree (CHAID) as the classification technique to segment patients into target, potential, and loyal customers in order to strengthen CRM. Two approaches are used for classification: first, the clustering result is taken as the decision attribute in the classification process; second, the segmentation based on the patients' CLV values (estimated by RFML) is taken as the decision attribute. Finally, the results of the CHAID algorithm reveal significant hidden rules and identify existing patterns among hospital consumers.
Institute of Scientific and Technical Information of China (English)
王玮; 王玉惠; 王文敬; 张洪波
2016-01-01
The path planning problem for warship-aircraft joint operation is studied. First, the weapon system of the destroyer is analyzed to obtain the safe distance when the shipboard helicopter and the destroyer perform a task cooperatively. Since the traditional A* algorithm cannot be applied directly to path planning for warship-aircraft joint operation, security costs and a path safety factor are introduced, and an improved weighted A* algorithm is given based on the traditional algorithm to solve the path planning problem for warship-aircraft joint operation. Finally, a case simulation verifies the effectiveness of the improved algorithm.
Improved Reversible Logic Synthesis Algorithm Based on a Weighted Directed Graph
Institute of Scientific and Technical Information of China (English)
程学云; 管致锦
2012-01-01
To reduce the number of reversible gates used in reversible logic synthesis, the reversible logic synthesis algorithm based on the weighted directed graph (WDG) is analyzed; it produces many transitional gates during function transformation, and its circuit optimization step is simple. The concept of efficient equal-complexity primitive output transformations (POT) is therefore proposed, moving and simplification rules for Toffoli gate sequences are extended and proven, and an improved WDG-based synthesis algorithm is given. Experimental results show that the improved algorithm not only reduces the number of reversible gates used during circuit generation but also effectively simplifies the generated circuit, greatly reducing the number of gates and control bits and decreasing circuit cost.
Institute of Scientific and Technical Information of China (English)
薛建彬; 陈一鸣; 何凤婕
2015-01-01
Concerning positioning accuracy in range-based and range-free localization algorithms, we analyze the effect of error on localization performance and the factors that influence the error. On the basis of the least-squares algorithm, we propose a constrained weighted least-squares TDOA localization algorithm based on a hierarchical structure. The algorithm uses an AUV (Autonomous Underwater Vehicle) to stratify the underwater nodes, so that beacon nodes with layer and depth information are lifted to the plane of the unknown nodes, converting three-dimensional positioning into two-dimensional positioning. This reduces the complexity of the algorithm, avoids the influence of beacon nodes far from the unknown nodes on positioning, and improves ranging precision, further reducing the positioning error. Simulation results show that, when position errors are small, the proposed algorithm can significantly improve positioning accuracy.
A new cluster algorithm for graphs
Dongen, S. van
1998-01-01
A new cluster algorithm for graphs, called the Markov Cluster algorithm (MCL algorithm), is introduced. The graphs may be both weighted (with nonnegative weights) and directed. Let G be such a graph. The MCL algorithm simulates flow in G by first identifying G in a canonical way with…
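The MCL iteration alternates expansion (matrix squaring of the column-stochastic flow matrix) and inflation (entrywise powers plus renormalisation). A minimal pure-Python sketch, with self-loops added and a simple attractor-row readout (parameter choices and the cluster-extraction threshold are illustrative):

```python
def mcl(adj, expansion=2, inflation=2.0, iters=20):
    """Markov Cluster sketch: column-normalise, then alternate expansion
    (matrix power) and inflation (entrywise power + renormalise)."""
    n = len(adj)
    # add self-loops and column-normalise into a stochastic matrix
    m = [[adj[i][j] + (1 if i == j else 0) for j in range(n)]
         for i in range(n)]

    def normalise(a):
        for j in range(n):
            s = sum(a[i][j] for i in range(n))
            for i in range(n):
                a[i][j] /= s
        return a

    m = normalise(m)
    for _ in range(iters):
        # expansion: raise m to the given matrix power
        p = m
        for _ in range(expansion - 1):
            p = [[sum(p[i][k] * m[k][j] for k in range(n)) for j in range(n)]
                 for i in range(n)]
        # inflation: entrywise power, then renormalise columns
        m = normalise([[x ** inflation for x in row] for row in p])
    # read clusters from attractor rows carrying most of a column's mass
    clusters = {}
    for i in range(n):
        for j in range(n):
            if m[i][j] > 0.5:
                clusters.setdefault(i, set()).add(j)
    return sorted(map(frozenset, clusters.values()), key=min)
```

On two triangles joined by a single bridge edge, the inflation step starves the bridge flow and the two triangles emerge as separate clusters.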
Institute of Scientific and Technical Information of China (English)
赵锋; 杨伟; 杨朝旭; 孙绍山
2014-01-01
Based on research into the application of the A* algorithm to path planning, a new method for three-dimensional dynamic route planning is proposed. A heuristic weight coefficient is introduced into the search strategy, and the algorithm's evaluation function is designed with an adaptive weighting method, which mitigates the traditional A* algorithm's slow search in large spaces and improves the efficiency of the waypoint search; the UAV's constraints are also effectively mapped into the solution space, facilitating application in engineering practice. Based on the optimal route produced by the optimized planning algorithm, a guidance control law is designed so that the UAV closely follows the planned path; the generated control commands fully consider the UAV's maneuvering performance and real-time requirements, bridging route planning and path tracking. Simulation results show that the method is feasible and effective, has high optimization efficiency, is easy to implement, and has strong engineering practicality.
Institute of Scientific and Technical Information of China (English)
梅灿华; 张玉红; 胡学钢; 李培培
2011-01-01
Traditional machine learning and data mining algorithms mainly assume that the training and test data lie in the same feature space and follow the same distribution. In real applications, however, data distributions change frequently, so these two hypotheses are difficult to maintain. In such cases most traditional algorithms are no longer applicable, because they usually require re-collecting and re-labeling large amounts of data, which is expensive and time-consuming. As a new learning framework, transfer learning can effectively solve this problem by transferring knowledge learned from one or more source domains to a target domain. This paper focuses on an important branch of the field, inductive transfer learning, and proposes a weighted inductive transfer learning algorithm, WTLME, based on the maximum entropy model. The algorithm transfers the model parameters learned from the source domain to the target domain and adjusts the instance weights in the target domain to obtain a model with higher accuracy, thereby speeding up the learning process and achieving domain adaptation. Experimental results show the effectiveness of the algorithm.
Institute of Scientific and Technical Information of China (English)
周玲; 康志伟; 何怡刚
2013-01-01
Aiming at the problem that the ranging error of the DV-HOP algorithm grows as a beacon's distance from the unknown node increases, a weighted hyperbolic positioning DV-HOP algorithm based on the triangle inequality is proposed. After calculating the distances between beacons and the distances between unknown nodes and single-hop beacons, the algorithm uses the triangle inequality to constrain the distances between unknown nodes and multi-hop beacons, reducing the ranging error. To lessen the influence of ranging error on positioning accuracy, a weight inversely proportional to distance is introduced into the hyperbolic positioning. Simulation experiments show that the proposed algorithm is stable, reliable, and realizable; under the same conditions, positioning accuracy increases by more than 13% and 7% compared with the original DV-HOP and a related improved DV-HOP, respectively.
Institute of Scientific and Technical Information of China (English)
Shen Yi
2011-01-01
This paper proposes a new definition of community structure for weighted networks: groups of nodes in which the edge weights are distributed uniformly but at random between them. It can effectively describe steady connections between nodes or similarity between node functions. To detect the community structure efficiently, a threshold coefficient K for evaluating the equivalence of edge weights and a new weighted modularity based on weight similarity are proposed. Then, constructing the weighted matrix and using an agglomerative mechanism, a weight-agglomerative method based on optimizing the modularity is presented to detect communities. For a network with n nodes, the algorithm detects the community structure in time O(n^2 log n^2). Simulations on networks show that the algorithm has higher accuracy and precision than existing techniques. Furthermore, as K changes, the algorithm discovers a special hierarchical organization that describes the various steady connections between nodes in groups.
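For context, the quantity such methods optimise can be sketched with the standard Newman weighted modularity, Q = (1/2m) Σᵢⱼ (Aᵢⱼ − kᵢkⱼ/2m) δ(cᵢ, cⱼ). This is the baseline definition, not the similarity-based modularity the abstract introduces:

```python
def weighted_modularity(adj, comm):
    """Standard Newman weighted modularity of a partition.
    adj: symmetric weight matrix; comm[i]: community label of node i."""
    n = len(adj)
    two_m = sum(sum(row) for row in adj)  # total weight counted twice
    k = [sum(row) for row in adj]         # weighted degrees
    q = 0.0
    for i in range(n):
        for j in range(n):
            if comm[i] == comm[j]:
                q += adj[i][j] - k[i] * k[j] / two_m
    return q / two_m
```

An agglomerative detector greedily merges the pair of communities whose merge raises Q the most; on two disconnected edges, the natural partition scores Q = 0.5 while the crossed partition scores -0.5.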
Institute of Scientific and Technical Information of China (English)
张高远; 周亮; 文红
2014-01-01
Two simple and efficient weighted bit-flipping (WBF) decoding algorithms for low-density parity-check (LDPC) codes are proposed, in which the sum of the variable nodes' magnitudes is introduced to compute the reliability of the parity checks. Simulation results show that over an additive white Gaussian noise channel, at a bit error rate of 10^-5, one of the improved schemes outperforms the traditional WBF and modified WBF (MWBF) algorithms by about 1.65 dB and 1.31 dB, respectively, while the average number of decoding iterations is significantly reduced.
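A generic WBF decoder using the check reliability named in the abstract (sum of |y| over the variables in each check) can be sketched as follows; the flip metric is the common WBF form, so the exact variants proposed in the paper may differ:

```python
def wbf_decode(H, y, max_iters=50):
    """Weighted bit-flipping sketch. H: parity-check matrix as 0/1 rows;
    y: received soft values (BPSK, sign gives the hard decision). Check
    reliability w_m = sum of |y_j| over variables in check m (per the
    abstract); each iteration flips the bit most blamed by unsatisfied
    checks, weighted by those reliabilities."""
    n = len(y)
    z = [0 if v >= 0 else 1 for v in y]  # hard decisions
    w = [sum(abs(y[j]) for j in range(n) if row[j]) for row in H]
    for _ in range(max_iters):
        syn = [sum(row[j] * z[j] for j in range(n)) % 2 for row in H]
        if not any(syn):
            return z  # all checks satisfied: valid codeword

        def metric(j):
            # unsatisfied checks (syn=1) add +w, satisfied ones subtract
            return sum((2 * syn[m] - 1) * w[m]
                       for m, row in enumerate(H) if row[j])

        flip = max(range(n), key=metric)
        z[flip] ^= 1
    return z  # best effort after max_iters
```

For the length-3 repetition code with H = [[1,1,0],[0,1,1]], a single noisy bit is identified (it participates in both unsatisfied checks) and flipped back in one iteration.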
Load Balancing Scheduling Algorithm Based on Load Weight for Cloud Data Centers
Institute of Scientific and Technical Information of China (English)
杜吉成
2013-01-01
To address the unbalanced utilization of multi-dimensional resources in cloud data centers, a dynamic multi-resource load balancing scheduling algorithm based on resource load weights is proposed. Using the dynamic load on each resource dimension of the servers, the algorithm constructs an Analytic Hierarchy Process (AHP) judgment matrix to weigh the influence of each resource dimension on load balance; on this basis it considers each task's resource requirements and places the task on a suitable server to improve resource utilization and balance load across resources. Platform simulation shows that the new algorithm effectively raises the use efficiency of under-utilized resources and has advantages in improving overall resource utilization and reducing load imbalance among resources.
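The AHP step reduces to extracting the principal eigenvector of the pairwise-comparison (judgment) matrix; its normalised entries are the resource weights. A minimal power-iteration sketch (the example matrix is illustrative, not from the paper):

```python
def ahp_weights(pairwise, iters=100):
    """Approximate the principal eigenvector of an AHP pairwise-comparison
    matrix by power iteration; the normalised eigenvector gives the
    per-criterion (here, per-resource-dimension) weights."""
    n = len(pairwise)
    w = [1.0 / n] * n
    for _ in range(iters):
        w = [sum(pairwise[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(w)
        w = [x / s for x in w]
    return w
```

If CPU is judged 3 times as important as memory, the judgment matrix [[1, 3], [1/3, 1]] yields weights (0.75, 0.25), which would then scale the per-dimension loads in the server-selection score.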
Energy Technology Data Exchange (ETDEWEB)
Meyer, J.
2007-07-01
Several measurements of the top quark mass in dilepton final states with the DØ experiment are presented. The theoretical and experimental properties of the top quark are described, together with a brief introduction to the Standard Model of particle physics and the physics of hadron collisions. An overview of the experimental setup is given. The Tevatron at Fermilab is presently the highest-energy hadron collider in the world, with a center-of-mass energy of 1.96 TeV; its two main experiments are called CDF and DØ. A description of the components of the multipurpose DØ detector is given. The reconstruction of simulated and data events is explained, and the criteria for the identification of electrons, muons, jets, and missing transverse energy are given. The kinematics of the dilepton final state are underconstrained; therefore, the top quark mass is extracted by the so-called neutrino weighting method. This method is introduced, and several different approaches are described, compared, and enhanced. Results for the international summer conferences 2006 and winter 2007 are presented. The top quark mass measurement for the combination of all three dilepton channels with a dataset of 1.05 fb^-1 yields: m_top = 172.5 ± 5.5 (stat.) ± 5.8 (syst.) GeV. This result is presently the most precise top quark mass measurement of the DØ experiment in the dilepton channel; it entered the top quark mass world average of March 2007. (orig.)
Institute of Scientific and Technical Information of China (English)
吴杰; 李梁; 王华奎
2011-01-01
Aiming at the problem that in the Euclidean localization algorithm the anchor node density strongly affects localization accuracy and coverage, an improved node localization algorithm is proposed. A new weighting method is presented based on the initial positioning accuracy and the ranging accuracy of nodes. After being localized, a node is upgraded to an assistant beacon, and unknown nodes iteratively refine their positions using the updated anchor position information. Simulation results demonstrate that the localization system both improves localization coverage and reduces accumulated localization error, thereby improving the localization accuracy of the whole network.
Institute of Scientific and Technical Information of China (English)
刘中金; 卓子寒; 何跃鹰; 李勇; 苏厉; 金德鹏; 曾烈光
2016-01-01
Network virtualization is widely deployed in network experiment platforms and data center networks. As a key networking device in a virtualized environment, a virtual router can host many virtual router instances to run different virtual networks. The key problem for a virtual router lies in how to schedule packets into the different virtual instances according to each virtual network's bandwidth requirement. In this article, the scheduling problem is modeled and a dynamic weighted scheduling algorithm is proposed. The experimental results show that the proposed algorithm outperforms the miDRR algorithm in terms of both efficiency and fairness.
Improved Dynamic Weighted Multi-sensor Data Fusion Algorithm
Institute of Scientific and Technical Information of China (English)
杨佳; 宫峰勋
2011-01-01
For the case where multiple sensors measure the same characteristic index repeatedly, an improved dynamic weighted multi-sensor data fusion algorithm is proposed. A membership function from fuzzy set theory is used to construct a support-degree matrix for the observations, and the matrix dimension is augmented so that the mutual support of the data over the whole observation interval can be measured. Fusion weights are then allocated according to the stability theory of matrix eigenvectors, yielding the final expression of the fused estimate. Simulation results comparing the method with two similar fusion methods show that it achieves higher fusion accuracy and better robustness.
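The general idea of weighting each sensor by how trustworthy it is can be sketched in a few lines. The snippet below is a minimal illustration assuming the common inverse-variance scheme (weight proportional to 1/σ²); the paper's augmented support-degree matrix and eigenvector-based weights are not reproduced, and the sensor readings and variances are invented.

```python
# Minimal sketch of weighted multi-sensor fusion: each scalar reading
# is weighted inversely to its sensor's noise variance, and the
# weights are normalised to sum to one.

def fuse(measurements, variances):
    """Fuse scalar sensor readings with inverse-variance weights."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    weights = [w / total for w in inv]              # weights sum to 1
    estimate = sum(w * m for w, m in zip(weights, measurements))
    return estimate, weights

# three sensors reading the same quantity; the middle one is the
# least noisy, so it dominates the fused estimate
est, w = fuse([10.2, 9.8, 10.5], [0.04, 0.01, 0.09])
```

The fused value lands closest to the reading of the lowest-variance sensor, which is the behaviour any such weighting scheme should reproduce.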
Institute of Scientific and Technical Information of China (English)
郑明言
2014-01-01
To filter noise in infrared images effectively, a multi-direction adaptive weighted pseudo-median filtering algorithm in the wavelet domain is proposed. First, the noisy infrared image is decomposed by the wavelet transform, and noise points in each high-frequency sub-image are detected and labeled. Then, according to the distribution of pixels in each sub-image, four classes of directional filtering templates are designed to perform adaptive weighted filtering. Finally, the low-frequency sub-image is reconstructed with the filtered high-frequency sub-images. Median filtering (MF), pseudo-median filtering (PMF), extreme median filtering (EMF), weighted median filtering (WMF), and the proposed algorithm are applied to denoise standard test images and infrared images, with peak signal-to-noise ratio (PSNR) and mean absolute error (MAE) used to evaluate the results. Simulations on both image types show that the proposed algorithm clearly outperforms PMF and also holds an advantage over the other comparable algorithms.
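The weighted-median core of such filters is easy to state: each neighbourhood value is counted a whole number of times given by its (integer) weight, and the median of the expanded list is taken. A minimal sketch with an emphasized centre pixel; the wavelet decomposition and the four directional templates from the abstract are omitted, and the window values are invented.

```python
def weighted_median(values, weights):
    """Median of `values` where each value is counted weights[i] times
    (weights must be positive integers)."""
    expanded = []
    for v, w in zip(values, weights):
        expanded.extend([v] * w)
    expanded.sort()
    return expanded[len(expanded) // 2]

# 3x3 neighbourhood flattened row by row; 255 is an impulse,
# and the centre pixel (14) carries the highest weight
window  = [12, 13, 255, 12, 14, 13, 11, 12, 13]
weights = [1, 1, 1, 1, 3, 1, 1, 1, 1]
filtered = weighted_median(window, weights)   # impulse is rejected
```

Because the impulse contributes only one copy to the expanded list, the median stays near the true local intensity.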
Institute of Scientific and Technical Information of China (English)
刘伟; 朴胜春; 祝捍皓
2014-01-01
This paper analyzes a frequency-domain polarization-weighted multiple signal classification (MUSIC) algorithm using a single vector sensor, addressing the limitation imposed by randomly distributed noise in space. After the frequency-domain polarization parameters are extracted, weights are estimated automatically according to the differences in polarization characteristics between the desired signal and the noise. The weighted frequency-domain signals replace the received signals in constructing the covariance matrix, which is then used to realize DOA estimation through the MUSIC algorithm. The simulation results show that the estimation error of the proposed method is less than 5° when the signal-to-noise ratio is -15 dB and there is no prior information. The validity of this improved method has been verified by sea-trial results.
Institute of Scientific and Technical Information of China (English)
姚薇; 钱玲玲
2016-01-01
The quality of mine remote-sensing images is strongly affected by the imaging environment, and inherent defects of the imaging devices introduce a degree of distortion that hampers interpretation and analysis. An adaptive weighted improved median filtering (AWIMF) algorithm suitable for remote-sensing image processing is therefore proposed on the basis of median filtering (MF). First, the image is divided into blocks of size 7×7; within each block, pixels taking the extreme (maximum or minimum) gray value are marked as first-class suspected noise points. Second, the gray-level median of each block is computed, the difference between each pixel and that median is evaluated, and pixels with large differences are marked as second-class suspected noise points; pixels flagged by both tests are confirmed as noise. Then a 5×5 filtering window is centered on each noise point, weights are computed from the difference between each pixel in the window and the window median, and the noise point is replaced by the weighted combination of the window pixels. Finally, a histogram specification (HS) algorithm adjusts the contrast of the filtered image. Experiments on remote-sensing images of the Bayan Obo mining area in Inner Mongolia show that the proposed algorithm has a clear advantage over median filtering and several of its existing improved variants.
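The two-stage suspicion test described above (a pixel must be a block extreme AND far from the block median to count as noise) can be sketched on a flattened block; the threshold T and the sample values are assumptions.

```python
# Sketch of the AWIMF-style two-stage noise detection on a flattened
# block of pixel gray values. A pixel is confirmed as noise only if it
# passes BOTH tests. T is an assumed, tunable threshold.

def detect_noise(block, T=40):
    """Flag pixels that are block extremes and lie far from the median."""
    lo, hi = min(block), max(block)
    s = sorted(block)
    med = s[len(s) // 2]
    stage1 = [p == lo or p == hi for p in block]   # extreme-value test
    stage2 = [abs(p - med) > T for p in block]     # distance-to-median test
    return [a and b for a, b in zip(stage1, stage2)]

# 0 and 255 are impulses among ordinary gray values around 50
flags = detect_noise([50, 52, 255, 49, 0, 51, 53])
```

Requiring both tests keeps genuine dark or bright image features (extremes that sit close to the local median) from being flagged.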
Weighted Grid-Scan Secure Localization Algorithm with Multiple Available Nodes
Institute of Scientific and Technical Information of China (English)
詹弢; 郭庆; 李含青
2011-01-01
Localization is a crucial supporting technology for wireless sensor networks (WSNs), and most existing schemes assume that all nodes in the network are reliable. This assumption rarely holds; in battlefield conditions in particular, WSNs are often attacked by hostile parties in order to destroy or mislead the localization system. Like other networks, WSNs also suffer from multipath effects. To address these problems, a weighted grid-scan secure localization algorithm with multiple available nodes (MANWS) is proposed for distributed sensor networks deployed in shadowed battlefield environments. The algorithm first judges the trustworthiness of the anchor nodes and discards malicious anchors introduced by attacks or multipath effects, and then localizes the unknown nodes with a weighted grid-scan procedure, thereby handling both the security and the multipath problems. Simulation results show that, whether the network is secure or under attack, the localization error of MANWS is far smaller than that of the classical distributed location estimation (DLE) algorithm.
Greedy algorithms with weights for construction of partial association rules
Moshkov, Mikhail
2009-09-10
This paper is devoted to the study of approximate algorithms for the minimization of the total weight of attributes occurring in partial association rules. We consider mainly greedy algorithms with weights for the construction of rules. The paper contains bounds on the precision of these algorithms and bounds on the minimal weight of partial association rules, based on information obtained during the greedy algorithm run.
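A weight-aware greedy step of the kind studied here resembles greedy weighted set cover: repeatedly pick the attribute (viewed as the set of rows it separates) with the smallest weight per newly covered element. A toy sketch under that assumption, with invented attribute names, weights, and covered-row sets:

```python
# Hedged sketch of a weight-per-coverage greedy selection in the
# spirit of weighted set cover; the paper's exact rule-construction
# procedure is richer than this.

def greedy_weighted_cover(universe, sets, weights):
    """Pick sets with the best weight / newly-covered-elements ratio
    until the universe is covered; return the chosen set names."""
    uncovered = set(universe)
    chosen = []
    while uncovered:
        best = min(
            (name for name in sets if sets[name] & uncovered),
            key=lambda name: weights[name] / len(sets[name] & uncovered),
        )
        chosen.append(best)
        uncovered -= sets[best]
    return chosen

sets = {"a": {1, 2, 3}, "b": {3, 4}, "c": {4, 5}, "d": {1, 2, 3, 4, 5}}
weights = {"a": 1.0, "b": 1.0, "c": 1.0, "d": 4.0}
picked = greedy_weighted_cover({1, 2, 3, 4, 5}, sets, weights)
```

Note that the heavy set "d" covers everything yet is never chosen: its weight-per-element ratio is always worse than combining the cheap sets.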
Handling Dynamic Weights in Weighted Frequent Pattern Mining
Ahmed, Chowdhury Farhan; Tanbeer, Syed Khairuzzaman; Jeong, Byeong-Soo; Lee, Young-Koo
Even though weighted frequent pattern (WFP) mining is more effective than traditional frequent pattern mining because it can consider different semantic significances (weights) of items, existing WFP algorithms assume that each item has a fixed weight. But in real world scenarios, the weight (price or significance) of an item can vary with time. Reflecting these changes in item weight is necessary in several mining applications, such as retail market data analysis and web click stream analysis. In this paper, we introduce the concept of a dynamic weight for each item, and propose an algorithm, DWFPM (dynamic weighted frequent pattern mining), that makes use of this concept. Our algorithm can address situations where the weight (price or significance) of an item varies dynamically. It exploits a pattern growth mining technique to avoid the level-wise candidate set generation-and-test methodology. Furthermore, it requires only one database scan, so it is eligible for use in stream data mining. An extensive performance analysis shows that our algorithm is efficient and scalable for WFP mining using dynamic weights.
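The notion of a dynamic weight can be illustrated with a tiny weighted-support computation in which an item's weight depends on each transaction's timestamp. This is only a sketch of the concept; DWFPM's pattern-growth tree and single-scan machinery are not shown, and the items, timestamps, and weight schedule are invented.

```python
# Sketch of weighted support with time-varying item weights: each
# transaction containing the pattern contributes the mean weight of
# the pattern's items AT THAT TRANSACTION'S TIME.

def weighted_support(pattern, transactions, weight_at):
    """transactions: (timestamp, item-set) pairs; weight_at(item, t)
    returns the item's weight (e.g. price) at time t."""
    total = 0.0
    for t, items in transactions:
        if pattern <= items:                       # pattern occurs here
            total += sum(weight_at(i, t) for i in pattern) / len(pattern)
    return total

# the price of item "a" doubles at time 2; "b" stays constant
def weight_at(item, t):
    return {"a": 1.0 if t < 2 else 2.0, "b": 1.0}[item]

transactions = [(1, {"a", "b"}), (2, {"a"}), (3, {"a", "b", "c"})]
ws = weighted_support({"a"}, transactions, weight_at)
```

With a fixed weight the three occurrences of "a" would all count equally; here the later, pricier occurrences count double.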
Institute of Scientific and Technical Information of China (English)
沈笑慧; 张健; 何熊熊
2012-01-01
For the localization problem in wireless sensor networks based on the received signal strength indicator (RSSI), an improved Kalman filtering method is proposed to eliminate the non-line-of-sight error in the ranging process and obtain the estimated distance between a tag and each node. Three factors affecting localization accuracy are then analyzed: the tag-to-node distance, the quality of the localization unit, and the position of the tag. On this basis an improved trilateration algorithm is proposed, in which the multiple position estimates computed from the filtered distances are fused with weights. Finally, Matlab simulations demonstrate the effectiveness of the proposed algorithm.
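The final weighted-fusion step, combining several candidate position fixes with weights tied to their estimated quality, can be sketched as an inverse-error weighted centroid. This is an assumption for illustration; the paper's weighting of the three accuracy factors is richer, and the fixes and error estimates below are invented.

```python
# Sketch of weighted fusion of candidate (x, y) fixes, each produced
# by a different anchor triple; weight ~ 1 / estimated error.

def fuse_positions(fixes, errors):
    """Weighted centroid of candidate positions."""
    inv = [1.0 / e for e in errors]
    s = sum(inv)
    x = sum(w * fx for w, (fx, _) in zip(inv, fixes)) / s
    y = sum(w * fy for w, (_, fy) in zip(inv, fixes)) / s
    return x, y

# two good fixes near (1, 1) and one poor outlier at (3, 3)
x, y = fuse_positions([(1.0, 1.0), (1.2, 0.8), (3.0, 3.0)],
                      [0.5, 0.5, 5.0])
```

The outlier's large error estimate shrinks its influence, so the fused fix stays near the two consistent candidates.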
Weighted Kalman Filter Phase Unwrapping Algorithm Based on a Mask
Institute of Scientific and Technical Information of China (English)
闫满
2013-01-01
The Kalman filter transforms phase unwrapping into a state-estimation problem, performing phase unwrapping and noise elimination at the same time. However, the original radar signal and the post-processing chain introduce many errors that make the phase data discontinuous and propagate local errors, so the unwrapped result is inaccurate. A weighted Kalman filter phase unwrapping algorithm based on a mask is therefore proposed. The low-quality regions of the wrapped phase are masked out, and Kalman filter unwrapping is first applied to the remaining high-quality region; once that region is correctly unwrapped, weighted Kalman filtering is applied to the masked low-quality region, yielding a reliable unwrapping result. Experiments with simulated data and ALOS InSAR data of the Yanzhou mining area in Shandong verify the effectiveness and reliability of the algorithm.
Institute of Scientific and Technical Information of China (English)
黄宇鹏; 汪可友; 李国杰
2015-01-01
With the wide application of power electronics in modern power systems, the frequent topology changes caused by the switching of power electronic valves pose new challenges for electromagnetic transient simulation. A new interpolation algorithm for simulating power electronic switches is therefore proposed. Under the globally implicit trapezoidal integration rule, the algorithm computes the system variables at the switching instant by linear interpolation and re-initializes the system with the backward Euler method; then, according to the position of the interpolation point within the current simulation step, it uses weighted numerical integration to adjust the step length flexibly and integrates quickly to the next full-step time point, resynchronizing the simulation. During a switching action the algorithm re-initializes the simulation with only a single interpolation, keeps the nodal admittance matrix unchanged during resynchronization, and effectively suppresses numerical oscillation. It reduces the computational burden and increases simulation speed while preserving accuracy, and it also handles multiple simultaneous switching events. Case studies verify the adaptability and effectiveness of the algorithm.
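The interpolation step at a switching instant is standard: locate the zero crossing of the monitored variable (for example, a valve current) inside the step by linear interpolation, then re-initialize the solver there. A minimal sketch of only that step, with invented variable names and values:

```python
# Sketch of the switch-instant location step: given the monitored
# variable's values at the two ends of a time step, linearly
# interpolate the instant where it crosses zero.

def crossing_time(t0, x0, t1, x1):
    """Linear-interpolation estimate of the zero crossing in [t0, t1]."""
    assert x0 * x1 < 0, "sign change required inside the step"
    return t0 + (t1 - t0) * x0 / (x0 - x1)

# current falls from 2.0 A to -1.0 A over a 100-microsecond step,
# so the crossing sits two-thirds of the way into the step
ts = crossing_time(0.0, 2.0, 1e-4, -1.0)
```

The solver would then restart from `ts` (backward Euler in the paper's scheme) and use a weighted partial step to land back on the regular time grid.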
Institute of Scientific and Technical Information of China (English)
谭宝成; 李博
2015-01-01
While a driverless vehicle is moving, a multi-sensor system must be used to observe the surrounding road environment, but the data these sensors provide can be overloaded, missing, or inaccurate, so data fusion technology is needed to process the received measurements. Based on the multi-sensor system of a driverless vehicle, this paper studies a weighted-average data fusion algorithm that meets the fusion-level requirements of the vehicle's operating environment and proves highly practicable in actual data fusion processing.
Institute of Scientific and Technical Information of China (English)
彭会萍; 曹晓军; 杨永旭
2012-01-01
This paper introduces several key concepts, including the distance between bodies of evidence, the support degree and credibility of evidence, and a decision-distance measure. The belief-function data of the evidence are rationalized, multi-sensor information fusion is carried out with the D-S evidence combination rule, and a weighted-average multi-sensor information fusion algorithm based on decision distance is thereby proposed. A numerical example shows that the method can effectively resolve D-S conflicts and ensure the accuracy of the fused result.
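The D-S combination rule itself is compact enough to sketch. The snippet implements the classical Dempster rule for two basic probability assignments over a toy two-hypothesis frame; the paper's decision-distance weighting applied to the evidence beforehand is not reproduced, and the mass values are invented.

```python
# Classical Dempster's rule of combination for two basic probability
# assignments (BPAs); focal elements are frozensets over the frame.

def combine(m1, m2):
    """Return the combined BPA and the conflict mass k."""
    out, conflict = {}, 0.0
    for a, p in m1.items():
        for b, q in m2.items():
            inter = a & b
            if inter:
                out[inter] = out.get(inter, 0.0) + p * q
            else:
                conflict += p * q                  # mass on empty set
    norm = 1.0 - conflict                          # renormalisation
    return {k: v / norm for k, v in out.items()}, conflict

A, B = frozenset("A"), frozenset("B")
m1 = {A: 0.6, B: 0.4}
m2 = {A: 0.7, B: 0.3}
m, k = combine(m1, m2)       # agreeing sources reinforce hypothesis A
```

When the conflict mass k approaches 1, the renormalisation blows up, which is exactly the D-S conflict problem the weighted-average refinement is meant to tame.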
Weighted cubic and biharmonic splines
Kvasov, Boris; Kim, Tae-Wan
2017-01-01
In this paper we discuss the design of algorithms for interpolating discrete data by using weighted cubic and biharmonic splines in such a way that the monotonicity and convexity of the data are preserved. We formulate the problem as a differential multipoint boundary value problem and consider its finite-difference approximation. Two algorithms for automatic selection of shape control parameters (weights) are presented. For weighted biharmonic splines the resulting system of linear equations can be efficiently solved by combining Gaussian elimination with successive over-relaxation method or finite-difference schemes in fractional steps. We consider basic computational aspects and illustrate main features of this original approach.
Kleinberg, Jon
2006-01-01
Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. The book teaches students a range of design and analysis techniques for problems that arise in computing applications. The text encourages an understanding of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Application of a Weight-Based WKFCM Clustering Algorithm in Debris Flow Evaluation
Institute of Scientific and Technical Information of China (English)
李炫; 范建容; 张建强
2015-01-01
The formation of debris flows is jointly influenced by terrain, geological structure, meteorology and hydrology, and other factors, and debris-flow hazards in different regions show both differences and similarities in their spatial distribution, which cluster analysis can identify. Because the standard KFCM clustering algorithm does not account for the different contributions of the influencing factors, the concept of a weight is introduced to improve it, yielding the WKFCM algorithm. Nine factors, including channel-bed gradient, catchment area, structural coefficient, glacier slope, lithology coefficient, average slope, maximum deposition, movable material volume, and the ratio of glacier area to catchment area, are selected as evaluation indexes for the hazard of glacial debris flows in the region, and the analytic hierarchy process (AHP) is used to determine the weight of each index. Taking thirty well-surveyed glacial debris-flow gullies along the Sichuan-Tibet highway in Tibet as the study objects, the application of the improved WKFCM clustering algorithm to debris-flow hazard evaluation is explored. The results agree with the reference assessments and show that the improved algorithm avoids the subjective uncertainty of threshold selection in traditional evaluation methods and is effective and feasible for debris-flow hazard evaluation.
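The AHP weighting step mentioned above, extracting factor weights as the principal eigenvector of a pairwise-comparison matrix, can be sketched by power iteration. The 3×3 comparison matrix below is an invented, perfectly consistent example (the paper uses nine factors and real expert judgments).

```python
# Sketch of the AHP weight-derivation step: the factor weights are the
# principal eigenvector of the pairwise-comparison matrix, normalised
# to sum to one; computed here by plain power iteration.

def ahp_weights(M, iters=100):
    """Normalised principal eigenvector of M via power iteration."""
    n = len(M)
    w = [1.0 / n] * n
    for _ in range(iters):
        v = [sum(M[i][j] * w[j] for j in range(n)) for i in range(n)]
        s = sum(v)
        w = [x / s for x in v]
    return w

# consistent toy judgments: factor 1 is twice factor 2, four times factor 3
M = [[1,   2,   4],
     [1/2, 1,   2],
     [1/4, 1/2, 1]]
w = ahp_weights(M)
```

For a perfectly consistent matrix like this one the result is exactly proportional to (4, 2, 1); real judgment matrices are only approximately consistent, and AHP additionally checks a consistency ratio before accepting the weights.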
Some observations on weighted GMRES
Güttel, Stefan
2014-01-10
We investigate the convergence of the weighted GMRES method for solving linear systems. Two different weighting variants are compared with unweighted GMRES for three model problems, giving a phenomenological explanation of cases where weighting improves convergence, and a case where weighting has no effect on the convergence. We also present a new alternative implementation of the weighted Arnoldi algorithm which under known circumstances will be favourable in terms of computational complexity. These implementations of weighted GMRES are compared for a large number of examples. We find that weighted GMRES may outperform unweighted GMRES for some problems, but more often this method is not competitive with other Krylov subspace methods like GMRES with deflated restarting or BICGSTAB, in particular when a preconditioner is used. © 2014 Springer Science+Business Media New York.
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications.
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
Institute of Scientific and Technical Information of China (English)
神显豪; 张祁
2014-01-01
Because a wind turbine is a complex system, the relationship between faults and their symptoms is neither simple nor linear, and a single measurement cannot meet diagnostic needs. To address this, information fusion theory from wireless sensor networks is applied to wind turbine condition monitoring and fault diagnosis: the large volume of collected data is fused at both the signal level and the feature level. An adaptive weighted fusion algorithm reduces data redundancy and transmission energy consumption in the network, and a Gaussian membership function provides the basic probability assignments, which improves the reliability of the D-S evidence-theory data; an improved evidence combination method further strengthens fault identification. Finally, a simulated fault-diagnosis experiment on a wind turbine gearbox verifies that the method achieves high diagnostic accuracy and clearly improves diagnostic credibility.
Energy Aware Scheduling for Weighted Completion Time and Weighted Tardiness
Carrasco, Rodrigo A; Stein, Cliff
2011-01-01
The ever increasing adoption of mobile devices with limited energy storage capacity on the one hand, and growing awareness of the environmental impact of massive data centres and server pools on the other, have led to increased interest in energy management algorithms. The main contribution of this paper is several new constant-factor approximation algorithms for energy-aware scheduling problems whose objective is to minimize weighted completion time plus the cost of the energy consumed, in the single-machine non-preemptive setting, while allowing release dates and deadlines. Unlike previously known algorithms, these new algorithms can handle general job-dependent energy cost functions, extending their application to settings beyond the typical CPU-energy one. These new settings include problems where, in addition to or instead of energy costs, there are also maintenance costs, wear and tear, replacement costs, etc., which in general depend on the speed at which the machine runs.
Abrams, D.; Williams, C.
1999-01-01
This thesis describes several new quantum algorithms. These include a polynomial-time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
Tel, G.
1993-01-01
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of distributed algorithms.
Impulse denoising using Hybrid Algorithm
Directory of Open Access Journals (Sweden)
Ms.Arumugham Rajamani
2015-03-01
Full Text Available Many real-world images are contaminated by salt-and-pepper noise due to poor illumination and environmental factors. Many filters and algorithms are used to remove salt-and-pepper noise from an image, but they also remove image detail. This paper proposes a new, effective algorithm for detecting and removing salt-and-pepper noise. Existing standard algorithms such as the Median Filter (MF), Weighted Median Filter (WMF), and Standard Median Filter (SMF) yield poor performance, particularly at high noise densities. The suggested algorithm is compared with these standard algorithms using the Mean Square Error (MSE) and Peak Signal-to-Noise Ratio (PSNR) metrics, and exhibits more competitive performance at all noise densities. The joint sorting and diagonal averaging algorithm has lower computational time, better quantitative results, and improved qualitative results, with a better visual appearance at all noise densities.
Institute of Scientific and Technical Information of China (English)
周延年; 朱怡安
2011-01-01
Aim: The introduction reviews related papers in the open literature, all in Chinese, and then proposes our improved algorithm, which is explained in sections 1, 2, and 3. Section 1 establishes the embedded-computer performance appraisal indicator system, whose hierarchical structure for six main indexes is given in Fig. 1. Section 2 explains the evaluation algorithm, which combines grey correlation with combination weighting; its core consists of: (1) combining the analytic hierarchy process (AHP) with entropy weighting to calculate the weight of each index, replacing the mean-value weights of grey correlation analysis and correcting the differences among the evaluation indexes; (2) carrying out a comprehensive evaluation of embedded-computer performance with the proposed algorithm. Section 3 improves the grey correlation evaluation algorithm of section 2. Section 4 gives an illustrative example to verify the effectiveness of the evaluation algorithm and applies it to the comprehensive evaluation of an embedded computer; the evaluation results, given in Table 3, and their analysis show preliminarily that the algorithm effectively handles partially incomplete and partially fuzzy index information, improves the reliability of computer performance evaluation, and provides a valuable reference for future comprehensive evaluations of embedded computers.
Integrated Association Rules Complete Hiding Algorithms
Directory of Open Access Journals (Sweden)
Mohamed Refaat Abdellah
2017-01-01
Full Text Available This paper presents a database security approach for the complete hiding of sensitive association rules using six novel algorithms. These algorithms utilize three new weights to reduce the required database modifications while supporting complete hiding, and they reduce both knowledge distortion and data distortion. The complete weighted hiding algorithms improve the hiding failure rate by 100%, and they have the advantage of performing only a single scan of the database to gather the information required for the hiding process. The proposed algorithms are built within the database structure, which enables the sanitized database to be generated at run time as needed.
Weighted approximation with varying weight
Totik, Vilmos
1994-01-01
A new construction is given for approximating a logarithmic potential by a discrete one. This yields a new approach to approximation with weighted polynomials of the form w^n P_n. The new technique settles several open problems, and it leads to a simple proof of the strong asymptotics in some L^p extremal problems on the real line with exponential weights, which, for the case p = 2, are equivalent to power-type asymptotics for the leading coefficients of the corresponding orthogonal polynomials. The method is also modified to yield (in a sense) uniformly good approximation on the whole support. This allows one to deduce strong asymptotics in some L^p extremal problems with varying weights. Applications are given relating to fast decreasing polynomials, the asymptotic behavior of orthogonal polynomials, and multipoint Padé approximation. The approach is potential-theoretic, but the text is self-contained.
Weighted guided image filtering.
Li, Zhengguo; Zheng, Jinghong; Zhu, Zijian; Yao, Wei; Wu, Shiqian
2015-01-01
It is known that local filtering-based edge-preserving smoothing techniques suffer from halo artifacts. In this paper, a weighted guided image filter (WGIF) is introduced by incorporating an edge-aware weighting into the existing guided image filter (GIF) to address the problem. The WGIF inherits advantages of both global and local smoothing filters in the sense that: 1) the complexity of the WGIF is O(N) for an image with N pixels, the same as the GIF; and 2) the WGIF can avoid halo artifacts like the existing global smoothing filters. The WGIF is applied to single-image detail enhancement, single-image haze removal, and fusion of differently exposed images. Experimental results show that the resulting algorithms produce images with better visual quality, while halo artifacts are reduced or avoided in the final images with a negligible increase in running time.
Gaussian integration with rescaling of abscissas and weights
Odrzywolek, A
2010-01-01
An algorithm for the integration of polynomial functions with a variable weight is considered. It extends Gaussian integration with appropriate scaling of the abscissas and weights. The method is a good alternative to the usually adopted interval splitting.
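The basic rescaling that Gaussian integration relies on is an affine map of nodes and weights. The sketch below shows the standard [-1, 1] → [a, b] transformation applied to the 2-point Gauss-Legendre rule (standard nodes ±1/√3, weights 1); the paper's variable-weight extension is not reproduced.

```python
import math

# Affine rescaling of a quadrature rule from [-1, 1] onto [a, b]:
# nodes map as x -> m + h*x and weights scale by h = (b - a)/2.

def rescale(nodes, weights, a, b):
    h = 0.5 * (b - a)
    m = 0.5 * (b + a)
    return [m + h * x for x in nodes], [h * w for w in weights]

# 2-point Gauss-Legendre rule on [-1, 1] (exact for cubics)
nodes = [-1 / math.sqrt(3), 1 / math.sqrt(3)]
wts = [1.0, 1.0]

xs, ws = rescale(nodes, wts, 0.0, 2.0)
integral = sum(w * x**2 for w, x in zip(ws, xs))   # ∫₀² x² dx = 8/3
```

Because the rule is exact for polynomials up to degree 3, the rescaled rule reproduces the integral of x² on [0, 2] exactly (up to rounding).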
Drawing a Weighted Directed Graph from Its Adjacency Matrix
Institute of Scientific and Technical Information of China (English)
毛国勇; 张武
2005-01-01
This paper proposes an algorithm for building weighted directed graphs: it defines the weighted directed relationship matrix of a graph and describes an algorithm implementation using this matrix. Based on this algorithm, an effective way of building and drawing weighted directed graphs is presented, forming a foundation for visual implementations of graph-theoretic algorithms.
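Reading a weighted digraph out of its adjacency matrix, the first step of any such drawing algorithm, can be sketched directly; the zero-means-no-edge convention and the vertex labels are assumptions for illustration.

```python
# Sketch: extract the weighted arc list of a directed graph from its
# adjacency matrix. Entry A[i][j] != 0 is the weight of arc i -> j.

def edges_from_matrix(A, labels):
    """Return (u, v, weight) triples for every non-zero entry."""
    return [
        (labels[i], labels[j], A[i][j])
        for i in range(len(A))
        for j in range(len(A[i]))
        if A[i][j] != 0
    ]

A = [[0, 3, 0],
     [0, 0, 1],
     [2, 0, 0]]
edges = edges_from_matrix(A, ["a", "b", "c"])   # a 3-cycle a->b->c->a
```

A drawing routine would then place the labelled vertices and render each triple as an arrow annotated with its weight.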
Institute of Scientific and Technical Information of China (English)
李菲; 王书锋; 冯冬青
2011-01-01
A dynamic adaptive weighted polymorphic ant colony algorithm is applied to minimize the makespan on a single batch-processing machine with non-identical job sizes. The algorithm introduces different types of ant colonies, each with its own pheromone-update mechanism, and redesigns the transition probabilities and pheromone updates of the colonies for the batch-scheduling problem. It conforms more closely to the ants' real information-processing mechanism and combines local search with global search to improve convergence and search ability. Simulations on instances of different sizes demonstrate the effectiveness and feasibility of the algorithm.
Matrix Multiplication Algorithm Selection with Support Vector Machines
2015-05-01
...approach for linear algebra algorithms, as we achieve up to a 26% performance improvement over selecting a single algorithm in advance. ...different algorithms (such as industry-standard linear algebra algorithms and their communication-avoiding counterparts) [5]. ...we conclude that using weighted SVM for algorithm selection is superior to using unweighted SVM.
Energy Technology Data Exchange (ETDEWEB)
Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
Iterative methods for weighted least-squares
Energy Technology Data Exchange (ETDEWEB)
Bobrovnikova, E.Y.; Vavasis, S.A. [Cornell Univ., Ithaca, NY (United States)
1996-12-31
A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.
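The weighted least-squares problem this abstract refers to is min over x of the norm of W^(1/2)(Ax - b). A minimal sketch via plain row scaling and a standard solver, not the paper's reorthogonalization-based iterative method; the data here are illustrative:

```python
import numpy as np

def weighted_lstsq(A, b, w):
    """Solve min_x || W^(1/2) (A x - b) ||_2 for diagonal weights w."""
    # Scale rows by sqrt(w) and use a standard least-squares solver;
    # equivalent to the weighted normal equations A^T W A x = A^T W b.
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return x

# Heavily weighting the first two rows forces the fit through them:
# the line through (0, 0) and (1, 1) has intercept ~0 and slope ~1.
A = np.array([[1.0, 0.0], [1.0, 1.0], [1.0, 2.0]])
b = np.array([0.0, 1.0, 5.0])
w = np.array([1e8, 1e8, 1.0])
x = weighted_lstsq(A, b, w)
```

An ill-conditioned W (as in the abstract) makes this direct scaling numerically fragile, which is precisely what motivates the paper's reorthogonalized iteration.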
Hromkovic, Juraj
2009-01-01
This book explores the science of computing. It starts with the development of computer science, algorithms, and programming, and then explains and shows how to exploit the concepts of infinity, computability, computational complexity, nondeterminism, and randomness.
Overlapping community detection using weighted consensus clustering
Indian Academy of Sciences (India)
LINTAO YANG; ZETAI YU; JING QIAN; SHOUYIN LIU
2016-10-01
Many overlapping community detection algorithms have been proposed. Most of them are unstable and behave non-deterministically. In this paper, we use weighted consensus clustering for combining multiple base covers obtained by classic non-deterministic algorithms to improve the quality of the results. We first evaluate a reliability measure for each community in all base covers and assign a proportional weight to each one. Then we redefine the consensus matrix that takes into account not only the common membership of nodes, but also the reliability of the communities. Experimental results on both artificial and real-world networks show that our algorithm can find overlapping communities accurately.
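The weighted consensus-matrix step described above can be sketched as follows. The abstract does not specify the reliability measure, so the per-community weights are simply taken as inputs here, and all names are illustrative:

```python
import itertools
import numpy as np

def weighted_consensus(base_covers, weights, n_nodes):
    """Build a weighted consensus matrix from several base covers.

    base_covers: list of covers; each cover is a list of communities (node sets).
    weights: matching list of per-community reliability weights in [0, 1].
    """
    M = np.zeros((n_nodes, n_nodes))
    for cover, ws in zip(base_covers, weights):
        for comm, w in zip(cover, ws):
            for i, j in itertools.combinations(sorted(comm), 2):
                # Co-membership counts in proportion to community reliability.
                M[i, j] += w
                M[j, i] += w
    return M / len(base_covers)

covers = [[{0, 1, 2}, {2, 3}], [{0, 1}, {2, 3}]]
weights = [[1.0, 0.5], [1.0, 1.0]]
M = weighted_consensus(covers, weights, 4)
```

A final (deterministic) clustering of this matrix then yields the combined cover.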
Interpolatory Weighted-H2 Model Reduction
Anic, Branimir; Gugercin, Serkan; Antoulas, Athanasios C
2012-01-01
This paper introduces an interpolation framework for the weighted-H2 model reduction problem. We obtain a new representation of the weighted-H2 norm of SISO systems that provides new interpolatory first order necessary conditions for an optimal reduced-order model. The H2 norm representation also provides an error expression that motivates a new weighted-H2 model reduction algorithm. Several numerical examples illustrate the effectiveness of the proposed approach.
Institute of Scientific and Technical Information of China (English)
张院; 寇文杰
2015-01-01
Based on measured data from a groundwater monitoring well at a landfill site east of the Yongding River, the clustering-weight and exceedance (over-standard) methods of fuzzy comprehensive evaluation were used to determine the weight matrix of the water-quality evaluation indices. Fuzzy operations on the weights and membership degrees were carried out with four composition operators (multiply-then-add, multiply-then-max, min-then-add, and min-then-max), and the results were compared with the Nemerow index method and the single-factor evaluation method. The comparison shows that both the exceedance and clustering-weight methods determine the index weights reasonably. Among the composition operators, multiply-then-add and multiply-then-max double-count the pollutant concentrations, so their results are less reasonable, whereas min-then-add and min-then-max remove the duplicated factors from the membership degrees and weights, respectively weakening and highlighting the influence of extreme values, and give comparatively reasonable results.
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming, and backtracking. Also discussed are binary trees, heuristics and near-optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illustrations, 23 tables. New to this edition: Chapter 9
438 Adaptive Kernel in Meshsize Boosting Algorithm in KDE (Pp ...
African Journals Online (AJOL)
2011-01-18
Jan 18, 2011 ... classifier is boosted by suitably re-weighting the data. This weight ... Methods. Algorithm on Boosting Kernel Density Estimates and Bias Reduction ... Gaussian (since all distributions tend to be normal as n, the sample size,.
Five modified boundary scan adaptive test generation algorithms
Institute of Scientific and Technical Information of China (English)
Niu Chunping; Ren Zheping; Yao Zongzhong
2006-01-01
To address the problem of diagnosing Wire-OR (W-O) interconnect faults on printed circuit boards (PCBs), five modified boundary-scan adaptive interconnect test algorithms are put forward. These algorithms replace the equal-weight algorithm of the primary test with a global-diagnosis sequence algorithm, shortening the test time without changing the fault diagnostic capability. The five modified adaptive test algorithms are described, and a capability comparison between the modified and original algorithms demonstrates their validity.
Clustering with Weighted Hyperlink and Sub Similarity Matrix
Institute of Scientific and Technical Information of China (English)
WU Ping; SONG Han-tao; ZHANG Li-ping; WU Zheng-yu
2006-01-01
A web page clustering algorithm called PageCluster, and an improved version, ImPageCluster, that handles overlapping clusters, are proposed. These methods take into account not only the web structure and page hyperlinks but also the importance of each page, described by its in-weight and out-weight. Experiments show that, compared with traditional clustering methods, the proposed algorithms run faster and achieve better accuracy.
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
The autodriver algorithm is an intelligent method that eliminates the need for driver steering on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the vehicle's actual center of rotation coincide with the road's center of curvature by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the vehicle's dynamic equations of motion, using steering-angle and velocity measurements as inputs. The kinematic steering condition is used to set the steering angles so that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. As forward speed increases, the road and tire characteristics, along with the motion dynamics of the vehicle, cause it to turn about time-varying points. By adjusting the steering angles, the algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. Application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
Intuitionistic fuzzy hierarchical clustering algorithms
Institute of Scientific and Technical Information of China (English)
Xu Zeshui
2009-01-01
Intuitionistic fuzzy set (IFS) is a set of 2-tuple arguments, each of which is characterized by a membership degree and a nonmembership degree. The generalized form of IFS is the interval-valued intuitionistic fuzzy set (IVIFS), whose components are intervals rather than exact numbers. IFSs and IVIFSs have been found to be very useful for describing vagueness and uncertainty. However, little attention seems to have been paid to the clustering analysis of IFSs and IVIFSs. An intuitionistic fuzzy hierarchical algorithm is introduced for clustering IFSs, based on the traditional hierarchical clustering procedure, the intuitionistic fuzzy aggregation operator, and the basic distance measures between IFSs: the Hamming distance, the normalized Hamming distance, the weighted Hamming distance, the Euclidean distance, the normalized Euclidean distance, and the weighted Euclidean distance. Subsequently, the algorithm is extended for clustering IVIFSs. Finally, the algorithm and its extended form are applied to the classification of building materials and enterprises, respectively.
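As one concrete instance of the distance measures listed, a weighted Hamming distance between two IFSs might look like the sketch below. Including the hesitancy degree pi = 1 - mu - nu follows one common convention and is an assumption here, not something the abstract specifies:

```python
def ifs_weighted_hamming(A, B, w):
    """Weighted Hamming distance between two intuitionistic fuzzy sets.

    A, B: lists of (membership, nonmembership) pairs over the same universe.
    w: element weights summing to 1.
    The hesitancy degree pi = 1 - mu - nu is included (one common convention).
    """
    d = 0.0
    for (ma, na), (mb, nb), wj in zip(A, B, w):
        pa, pb = 1 - ma - na, 1 - mb - nb
        d += wj * (abs(ma - mb) + abs(na - nb) + abs(pa - pb)) / 2
    return d

A = [(0.6, 0.3), (0.2, 0.7)]
B = [(0.5, 0.4), (0.2, 0.7)]
d = ifs_weighted_hamming(A, B, [0.5, 0.5])
```

With uniform weights w_j = 1/n this reduces to the normalized Hamming distance.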
Performance of a Distributed Stochastic Approximation Algorithm
Bianchi, Pascal; Hachem, Walid
2012-01-01
In this paper, a distributed stochastic approximation algorithm is studied. Applications of such algorithms include decentralized estimation, optimization, control or computing. The algorithm consists in two steps: a local step, where each node in a network updates a local estimate using a stochastic approximation algorithm with decreasing step size, and a gossip step, where a node computes a local weighted average between its estimates and those of its neighbors. Convergence of the estimates toward a consensus is established under weak assumptions. The approach relies on two main ingredients: the existence of a Lyapunov function for the mean field in the agreement subspace, and a contraction property of the random matrices of weights in the subspace orthogonal to the agreement subspace. A second order analysis of the algorithm is also performed under the form of a Central Limit Theorem. The Polyak-averaged version of the algorithm is also considered.
Random walk term weighting for information retrieval
DEFF Research Database (Denmark)
Blanco, R.; Lioma, Christina
2007-01-01
We present a way of estimating term weights for Information Retrieval (IR), using term co-occurrence as a measure of dependency between terms. We use the random walk graph-based ranking algorithm on a graph that encodes terms and co-occurrence dependencies in text, from which we derive term weights that represent a quantification of how a term contributes to its context. Evaluation on two TREC collections and 350 topics shows that the random walk-based term weights perform at least comparably to the traditional tf-idf term weighting, while they outperform it when the distance between co-occurring terms…
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
A New Page Ranking Algorithm Based On WPRVOL Algorithm
Directory of Open Access Journals (Sweden)
Roja Javadian Kootenae
2013-03-01
The amount of information on the web is always growing, so powerful search tools are needed to explore such a large collection. Search engines help users find their desired information among this massive volume more easily, but what distinguishes search engines from one another is the page-ranking algorithm they use. This paper proposes a new page-ranking algorithm for search engines based on the Weighted Page Ranking based on Visits of Links (WPRVOL) algorithm, called WPR'VOL for short. The proposed algorithm considers the numbers of visits of first- and second-level in-links. The original WPRVOL algorithm takes into account the number of visits of a page's first-level in-links and distributes rank scores based on the popularity of the pages, whereas the proposed algorithm considers both the in-links of the page itself (first-level in-links) and the in-links of the pages that point to it (second-level in-links) when calculating the page's rank, so more relevant pages are displayed at the top of the search-result list. In summary, the proposed algorithm assigns higher rank to pages for which both the page itself and the pages pointing to it are important.
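The underlying idea, rank shared among out-links in proportion to how often each link is visited, can be sketched as a toy model. The abstract does not give the exact WPR'VOL formulas (including the second-level in-link terms), so this is illustrative only and all names are ours:

```python
def visit_weighted_pagerank(links, visits, d=0.85, iters=50):
    """Toy visit-weighted PageRank: each node shares its rank among its
    out-links in proportion to each link's visit count.

    links: dict node -> list of target nodes (no dangling nodes assumed)
    visits: dict (src, dst) -> visit count
    """
    nodes = list(links)
    rank = {u: 1.0 / len(nodes) for u in nodes}
    for _ in range(iters):
        new = {u: (1 - d) / len(nodes) for u in nodes}
        for u in nodes:
            total = sum(visits[(u, v)] for v in links[u]) or 1
            for v in links[u]:
                # Share of u's rank given to v, weighted by link visits.
                new[v] += d * rank[u] * visits[(u, v)] / total
        rank = new
    return rank

links = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
visits = {("a", "b"): 9, ("a", "c"): 1, ("b", "c"): 5, ("c", "a"): 5}
r = visit_weighted_pagerank(links, visits)
```

Because every node here has out-links, the ranks remain a probability distribution.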
Light-weight cryptography for resource-constrained environments
Baier, Patrick; Szu, Harold
2006-04-01
We give a survey of "light-weight" encryption algorithms designed to maximise security within tight resource constraints (limited memory, power consumption, processor speed, chip area, etc.). The target applications of such algorithms are RFIDs, smart cards, mobile phones, etc., which may store, process, and transmit sensitive data but do not always support conventional strong algorithms. A survey of existing algorithms is given and a new proposal is introduced.
Compact Weighted Class Association Rule Mining using Information Gain
Ibrahim, S P Syed
2011-01-01
Weighted association rule mining reflects the semantic significance of an item by considering its weight. Classification constructs a classifier and uses it to predict the class of new data instances. This paper proposes a compact weighted class association rule mining method, which applies weighted association rule mining to classification and constructs an efficient weighted associative classifier. The proposed associative classification algorithm chooses one informative non-class attribute from the dataset, and all weighted class association rules are generated based on that attribute. The weight of each item is one of the parameters in generating the weighted class association rules; the proposed algorithm calculates these weights using the HITS model. Experimental results show that the proposed system generates a smaller number of high-quality rules, which improves classification accuracy.
Hermance, J. F.; Jacob, R. W.; Bradley, B. A.; Mustard, J. F.
2005-12-01
In studying vegetation patterns remotely, the objective is to draw inferences on the development of specific or general land surface phenology (LSP) as a function of space and time by determining the behavior of a parameter (in our case NDVI), when the parameter estimate may be biased by noise, data dropouts and obfuscations from atmospheric and other effects. We describe the underpinning concepts of a procedure for a robust interpolation of NDVI data that does not have the limitations of other mathematical approaches which require orthonormal basis functions (e.g. Fourier analysis). In this approach, data need not be uniformly sampled in time, nor do we expect noise to be Gaussian-distributed. Our approach is intuitive and straightforward, and is applied here to the refined modeling of LSP using 7 years of weekly and biweekly AVHRR NDVI data for a 150 x 150 km study area in central Nevada. This site is a microcosm of a broad range of vegetation classes, from irrigated agriculture with annual NDVI values of up to 0.7 to playas and alkali salt flats with annual NDVI values of only 0.07. Our procedure involves a form of parameter estimation employing Bayesian statistics. In utilitarian terms, the latter procedure is a method of statistical analysis (in our case, robustified, weighted least-squares recursive curve-fitting) that incorporates a variety of prior knowledge when forming current estimates of a particular process or parameter. In addition to the standard Bayesian approach, we account for outliers due to data dropouts or obfuscations because of clouds and snow cover. An initial "starting model" for the average annual cycle and long term (7 year) trend is determined by jointly fitting a common set of complex annual harmonics and a low order polynomial to an entire multi-year time series in one step. This is not a formal Fourier series in the conventional sense, but rather a set of 4 cosine and 4 sine coefficients with fundamental periods of 12, 6, 3 and 1
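The "starting model" step, a joint least-squares fit of annual harmonics plus a low-order polynomial trend, can be sketched as below. The robust reweighting, Bayesian priors, and outlier handling of the full procedure are omitted, and the period list passed in is an assumption for illustration:

```python
import numpy as np

def harmonic_fit(t, y, periods, poly_order=1):
    """Jointly fit cosine/sine terms at the given periods plus a low-order
    polynomial trend by ordinary least squares, in one step.
    """
    # Design matrix: polynomial trend columns, then harmonic pairs.
    cols = [t ** k for k in range(poly_order + 1)]
    for P in periods:
        cols.append(np.cos(2 * np.pi * t / P))
        cols.append(np.sin(2 * np.pi * t / P))
    G = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(G, y, rcond=None)
    return G @ coef  # smoothed series evaluated at t

t = np.arange(0, 84, 0.5)   # time in months, ~7 years of samples
y = 0.3 + 0.002 * t + 0.2 * np.sin(2 * np.pi * t / 12)  # synthetic NDVI-like signal
fit = harmonic_fit(t, y, periods=[12, 6, 3])
```

Because the synthetic signal lies in the span of the basis, the fit reproduces it essentially exactly; real NDVI series would additionally need the robust/outlier machinery the abstract describes.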
Directory of Open Access Journals (Sweden)
Paulo Henrique Siqueira
2004-08-01
The purpose of this paper is to show the application of the maximum-weight matching algorithm to the construction of workdays for bus drivers and fare collectors. This problem must be solved by making the best possible use of the timetables, with the objective of minimizing the number of employees, overtime hours, and idle hours; in this way, the costs of public transport companies are minimized. In the first phase, assuming the timetables have already been divided into short- and long-duration duties, the short-duration duties are combined to form an employee's daily workday. This combination is made with the maximum-weight matching algorithm, in which duties are represented by vertices of a graph and maximum weight is assigned to combinations of duties that produce neither overtime nor idle hours. In the second phase, a weekend workday is assigned to each weekday weekly schedule. Through these two phases, weekly workdays for bus drivers and fare collectors can be constructed at minimum cost. The third and final phase consists of assigning the weekly workdays to each driver and fare collector, taking their preferences into account; the maximum-weight matching algorithm is used for this phase as well. The approach was applied in three public transport companies in the city of Curitiba, PR (Brazil), where the algorithms previously in use were heuristics based solely on the experience of the person in charge of this task.
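The duty-pairing phase above is a maximum-weight matching problem. A brute-force sketch for tiny instances follows (real instances need a polynomial-time matching algorithm such as the blossom algorithm, e.g. NetworkX's `max_weight_matching`; the duty weights here are illustrative):

```python
def max_weight_matching(n, weight):
    """Brute-force maximum-weight matching on n vertices (tiny n only).

    weight: dict keyed by (i, j) with i < j for allowed pairings; e.g. two
    duties that combine into one workday with no overtime and no idle
    hours get a large weight.
    """
    best, best_w = [], 0.0

    def search(remaining, chosen, total):
        nonlocal best, best_w
        if total > best_w:
            best, best_w = list(chosen), total
        if not remaining:
            return
        i = remaining[0]
        # Either leave duty i unmatched...
        search(remaining[1:], chosen, total)
        # ...or pair it with any later compatible duty.
        for j in remaining[1:]:
            if (i, j) in weight:
                rest = [v for v in remaining[1:] if v != j]
                search(rest, chosen + [(i, j)], total + weight[(i, j)])

    search(list(range(n)), [], 0.0)
    return best, best_w

# Four duties; pairings without overtime/idle hours get weight 3.0.
w = {(0, 1): 1.0, (0, 2): 3.0, (1, 3): 3.0, (2, 3): 1.0}
m, total = max_weight_matching(4, w)   # best pairing: (0, 2) and (1, 3)
```

Each chosen pair corresponds to one employee's daily workday.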
Institute of Scientific and Technical Information of China (English)
王颖辉; 韩先国
2011-01-01
Digital flexible assembly is an important research direction in the manufacture of large parts. For a large-part positioning and joining system supported by locators, many location points on the part are measured with a laser tracker. To solve the problems of mismatched points and differing precision requirements under different coordinate systems, an evaluation model for the posture of large parts based on the least-squares method is proposed, and the error allocation is optimized by assigning a weight to each point's data, providing an effective solution for evaluating the position and attitude of large parts.
A New Page Ranking Algorithm Based On WPRVOL Algorithm
Roja Javadian Kootenae; Seyyed Mohsen Hashemi; mehdi afzali
2013-01-01
The amount of information on the web is always growing, thus powerful search tools are needed to search for such a large collection. Search engines in this direction help users so they can find their desirable information among the massive volume of information in an easier way. But what is important in the search engines and causes a distinction between them is page ranking algorithm used in them. In this paper a new page ranking algorithm based on "Weighted Page Ranking based on Visits of ...
A Short Survey of Document Structure Similarity Algorithms
Energy Technology Data Exchange (ETDEWEB)
Buttler, D
2004-02-27
This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
Casanova, Henri; Robert, Yves
2008-01-01
"…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel…
Increasing the weight of minimum spanning trees
Energy Technology Data Exchange (ETDEWEB)
Frederickson, G.N.; Solis-Oba, R. [Purdue Univ., West Lafayette, IN (United States)
1996-12-31
Given an undirected connected graph G and a cost function for increasing edge weights, the problem of determining the maximum increase in the weight of the minimum spanning trees of G subject to a budget constraint is investigated. Two versions of the problem are considered. In the first, each edge has a cost function that is linear in the weight increase. An algorithm is presented that solves this problem in strongly polynomial time. In the second version, the edge weights are fixed but an edge can be removed from G at a unit cost. This version is shown to be NP-hard. An Ω(1/log k)-approximation algorithm is presented for it, where k is the number of edges to be removed.
Weighted Reed-Muller codes revisited
Geil, Olav
2011-01-01
We consider weighted Reed-Muller codes over point ensembles $S_1 \times \cdots \times S_m$ where $S_i$ need not be of the same size as $S_j$. For $m = 2$ we determine optimal weights and analyze in detail the impact of the ratio $|S_1|/|S_2|$ on the minimum distance. In conclusion, the weighted Reed-Muller code construction is much better than its reputation. For a class of affine variety codes that contains the weighted Reed-Muller codes, we then present two list decoding algorithms. With a small modification, one of these algorithms is able to correct up to 31 errors of the [49, 11, 28] Joyner code.
Institute of Scientific and Technical Information of China (English)
邬平; 马继涛; 李鑫; 李俊; 黄红伟
2013-01-01
Expert scores are key data in evaluating a project. Owing to differing familiarity with the project or other human factors, individual experts may score too high or too low, making the project's score too high or too low and seriously affecting the fairness of the evaluation results. Taking each expert's familiarity with the project, degree of confidence, and the difference between the individual score and the panel's average score as the parameters for computing score weights, the expert score weights are calculated iteratively so that each individual score approaches the panel average as closely as possible, improving the rationality of the project evaluation results and safeguarding the fairness of project evaluation.
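An iterative re-weighting scheme of this kind might be sketched as follows. The abstract does not give the exact weighting formula, so damping each expert's weight by deviation from the current weighted mean is an assumption, and all names are illustrative:

```python
def consensus_score(scores, familiarity, n_iter=20):
    """Iteratively re-weight expert scores (illustrative sketch).

    Each expert's weight combines a stated familiarity in (0, 1] with a
    penalty for deviating from the current weighted mean, so outlying
    scores are pulled toward the panel consensus.
    """
    mean = sum(scores) / len(scores)   # start from the plain average
    for _ in range(n_iter):
        # Weight = familiarity, damped by deviation from the current mean.
        weights = [f / (1.0 + abs(s - mean))
                   for f, s in zip(familiarity, scores)]
        mean = sum(w * s for w, s in zip(weights, scores)) / sum(weights)
    return mean

scores = [80, 82, 81, 55]            # one clearly low outlier
familiarity = [0.9, 0.9, 0.9, 0.9]
m = consensus_score(scores, familiarity)   # pulled toward the 80-82 cluster
```

The outlier's influence shrinks each iteration, so the consensus lands near the majority cluster rather than the plain average (74.5).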
Inverse Distance Weighted Interpolation Involving Position Shading
Li, Zhengquan; WU Yaoxiang
2015-01-01
Considering the shortcomings of inverse distance weighted (IDW) interpolation in practical applications, this study improves the IDW algorithm and puts forward a new spatial interpolation method named adjusted inverse distance weighted (AIDW). In interpolating, AIDW is capable of taking into account the combined influence of the distance and the position of each sample point relative to the interpolation point, by adding a coefficient (K) to the normal IDW formula. The coefficient (K) is used...
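For reference, the baseline that AIDW adjusts is plain IDW, z = (sum of w_i * z_i) / (sum of w_i) with w_i = 1/d_i^p. A minimal sketch of that baseline (the position-shading coefficient K from the abstract is not modelled here):

```python
def idw(sample_pts, values, x, y, p=2.0):
    """Plain inverse distance weighted interpolation at point (x, y)."""
    num = den = 0.0
    for (sx, sy), v in zip(sample_pts, values):
        d2 = (sx - x) ** 2 + (sy - y) ** 2
        if d2 == 0.0:
            return v            # exactly on a sample point
        w = d2 ** (-p / 2.0)    # w_i = 1 / d_i^p
        num += w * v
        den += w
    return num / den

pts = [(0, 0), (2, 0), (0, 2)]
vals = [10.0, 20.0, 30.0]
z = idw(pts, vals, 1.0, 1.0)    # equidistant from all three samples
```

At (1, 1) all three samples are equidistant, so the result is their plain average, 20.0; AIDW would additionally shade these weights by sample position.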
Institute of Scientific and Technical Information of China (English)
郝海涛; 马元元
2016-01-01
To enable fast and accurate matching of products between online shoppers and merchants, an e-commerce product recommendation system based on a weighted association rule mining algorithm is studied. Addressing the shortcomings of the classic Apriori algorithm, a new weighted fuzzy association rule mining algorithm is put forward that preserves the downward-closure property of frequent itemsets. The workflow of the recommendation system was tested through the structural design of the e-commerce recommendation system and the design of its data preprocessing and recommendation modules. Hit rate was selected as the evaluation standard for the different recommendation models, and a comparative analysis of actually collected data was conducted using five-fold cross-validation. The experimental results show that the hit rate of the Top-N products from the association rule set is significantly higher than that of the interest-based and best-selling recommendation methods.
Derandomization of Online Assignment Algorithms for Dynamic Graphs
Sahai, Ankur
2011-01-01
This paper analyzes different online algorithms for the problem of assigning weights to edges in a fully-connected bipartite graph that minimizes the overall cost while satisfying constraints. Edges in this graph may disappear and reappear over time. Performance of these algorithms is measured using simulations. This paper also attempts to derandomize the randomized online algorithm for this problem.
A modified multitarget adaptive array algorithm for wireless CDMA system.
Liu, Yun-hui; Yang, Yu-hang
2004-11-01
The paper presents a modified least squares despread-respread multitarget constant modulus algorithm (LS-DRMTCMA). The cost function of the original algorithm was modified according to the minimum bit error rate (MBER) criterion. The novel algorithm optimizes the weight vectors by directly minimizing the bit error rate (BER) of a code division multiple access (CDMA) mobile communication system. To update the weight vectors adaptively, a stochastic gradient adaptive algorithm was developed using a kernel density estimator of the probability density function based on samples. Simulation results showed that the modified algorithm remarkably improves the BER performance, capacity, and near-far resistance of a given CDMA communication system.
Adaptive Central Force Optimization Algorithm Based on the Stability Analysis
Directory of Open Access Journals (Sweden)
Weiyi Qian
2015-01-01
In order to enhance the convergence capability of the central force optimization (CFO) algorithm, an adaptive central force optimization (ACFO) algorithm is presented, introducing an adaptive weight and defining an adaptive gravitational constant. The adaptive weight and gravitational constant are selected based on the stability theory of discrete time-varying dynamic systems. The convergence capability of ACFO is compared with another improved CFO algorithm and an evolutionary algorithm on 23 unimodal and multimodal benchmark functions. Experimental results show that ACFO substantially enhances the performance of CFO in terms of global optimality and solution accuracy.
An Adaptive Filtering Algorithm Based on Genetic Algorithm-Backpropagation Network
Directory of Open Access Journals (Sweden)
Kai Hu
2013-01-01
A new image filtering algorithm is proposed. The GA-BPN algorithm uses a genetic algorithm (GA) to decide the weights in a back-propagation neural network (BPN), giving better global optimization characteristics than traditional optimization algorithms. In this paper, GA-BPN is applied to image noise filtering. First, training samples are used to train the GA-BPN as a noise detector. Then, the well-trained GA-BPN is used to recognize noise pixels in a target image. Finally, an adaptive weighted-average algorithm recovers the noise pixels recognized by the GA-BPN. Experimental data show that this algorithm performs better than other filters.
Weighted principal component analysis: a weighted covariance eigendecomposition approach
Delchambre, Ludovic
2014-01-01
We present a new straightforward principal component analysis (PCA) method based on the diagonalization of the weighted variance-covariance matrix through two spectral decomposition methods: power iteration and Rayleigh quotient iteration. This method allows one to retrieve a given number of orthogonal principal components amongst the most meaningful ones for the case of problems with weighted and/or missing data. Principal coefficients are then retrieved by fitting principal components to the data while providing the final decomposition. Tests performed on real and simulated cases show that our method is optimal in the identification of the most significant patterns within data sets. We illustrate the usefulness of this method by assessing its quality on the extrapolation of Sloan Digital Sky Survey quasar spectra from measured wavelengths to shorter and longer wavelengths. Our new algorithm also benefits from a fast and flexible implementation.
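The weighted-covariance-plus-power-iteration idea can be sketched for the first principal component. The normalization of the weighted covariance below is one common choice and may differ in detail from the paper's formulation:

```python
import numpy as np

def weighted_pca_first_pc(X, W, n_iter=500):
    """Leading principal component of data with per-entry weights,
    via diagonalization of a weighted covariance by power iteration.

    X: (n_samples, n_features) data; W: weights, same shape as X
    (e.g. inverse variances, or 0 for missing entries).
    """
    mu = (W * X).sum(axis=0) / W.sum(axis=0)        # weighted mean
    Xw = (X - mu) * W                               # weighted, centred data
    # Weighted covariance: entry (j, k) normalized by sum_i W_ij * W_ik.
    C = (Xw.T @ Xw) / (W.T @ W)
    v = np.ones(X.shape[1]) / np.sqrt(X.shape[1])   # power iteration
    for _ in range(n_iter):
        v = C @ v
        v /= np.linalg.norm(v)
    return v

rng = np.random.default_rng(0)
t = rng.normal(size=(200, 1))
X = np.hstack([t, 2 * t]) + 0.01 * rng.normal(size=(200, 2))
W = np.ones_like(X)                  # uniform weights reduce to ordinary PCA
v = weighted_pca_first_pc(X, W)      # direction close to (1, 2) normalized
```

With uniform weights this reduces to ordinary PCA; zero weights let the same code handle missing entries, which is the case the paper targets.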
Weight-Constrained Minimum Spanning Tree Problem
Henn, Sebastian Tobias
2007-01-01
In an undirected graph G we associate costs and weights with each edge. The weight-constrained minimum spanning tree problem is to find a spanning tree of total edge weight at most a given value W and of minimum total cost under this restriction. In this thesis a literature overview of this NP-hard problem and theoretical properties concerning the convex hull and the Lagrangian relaxation are given. We also present some inclusion and exclusion tests for this problem. We apply a ranking algorithm and the me...
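Because the problem is NP-hard, a tiny brute-force reference implementation is a concise way to state it precisely. The sketch below is an illustrative helper, not code from the thesis: it enumerates every (n-1)-edge subset, keeps those that form a spanning tree, and returns the cheapest one whose total weight respects the budget W.

```python
import itertools

def wcmst_brute(n, edges, W):
    """Brute-force weight-constrained MST on nodes 0..n-1.
    edges: list of (u, v, cost, weight). Returns (min_cost, tree_edges)
    over all spanning trees with total weight <= W, or None if infeasible.
    Exponential time: a reference for tiny graphs only."""
    best = None
    for cand in itertools.combinations(edges, n - 1):
        # Union-find check that the n-1 chosen edges are acyclic (a tree)
        parent = list(range(n))
        def find(x):
            while parent[x] != x:
                parent[x] = parent[parent[x]]
                x = parent[x]
            return x
        acyclic = True
        for u, v, _, _ in cand:
            ru, rv = find(u), find(v)
            if ru == rv:
                acyclic = False
                break
            parent[ru] = rv
        if not acyclic:
            continue
        cost = sum(c for _, _, c, _ in cand)
        weight = sum(w for _, _, _, w in cand)
        if weight <= W and (best is None or cost < best[0]):
            best = (cost, cand)
    return best
```

Tightening W forces the solution away from the unconstrained MST, which is exactly the tension the Lagrangian relaxation in the thesis exploits.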
Weight diagram construction of Lax operators
Energy Technology Data Exchange (ETDEWEB)
Carbon, S.L.; Piard, E.J.
1991-10-01
We review and expand methods introduced in our previous paper. It is proved that cyclic weight diagrams corresponding to representations of affine Lie algebras allow one to construct the associated Lax operator. The resultant Lax operator is in the Miura-like form and generates the modified KdV equations. The algorithm is extended to the super-symmetric case.
Quantifying Pathology in Diffusion Weighted MRI
Caan, M.W.A.
2010-01-01
In this thesis algorithms are proposed for quantification of pathology in Diffusion Weighted Magnetic Resonance Imaging (DW-MRI) data. Functional evidence for brain diseases can be explained by specific structural loss in the white matter of the brain. That is, certain biomarkers may exist where the
Two regularizers for recursive least squared algorithms in feedforward multilayered neural networks.
Leung, C S; Tsoi, A C; Chan, L W
2001-01-01
Recursive least squares (RLS)-based algorithms are a class of fast online training algorithms for feedforward multilayered neural networks (FMNNs). Though the standard RLS algorithm has an implicit weight decay term in its energy function, the weight decay effect decreases linearly as the number of learning epochs increases, thus rendering a diminishing weight decay effect as training progresses. In this paper, we derive two modified RLS algorithms to tackle this problem. In the first algorithm, namely, the true weight decay RLS (TWDRLS) algorithm, we consider a modified energy function whereby the weight decay effect remains constant, irrespective of the number of learning epochs. The second version, the input perturbation RLS (IPRLS) algorithm, is derived by requiring robustness in its prediction performance to input perturbations. Simulation results show that both algorithms improve the generalization capability of the trained network.
An Algorithmic Framework for Multiobjective Optimization
Ganesan, T.; Elamvazuthi, I.; Shaari, Ku Zilati Ku; Vasant, P.
2013-01-01
Multiobjective (MO) optimization is an emerging field which is increasingly being encountered in many fields globally. Various metaheuristic techniques such as differential evolution (DE), genetic algorithm (GA), gravitational search algorithm (GSA), and particle swarm optimization (PSO) have been used in conjunction with scalarization techniques such as weighted sum approach and the normal-boundary intersection (NBI) method to solve MO problems. Nevertheless, many challenges still arise especially when dealing with problems with multiple objectives (especially in cases more than two). In addition, problems with extensive computational overhead emerge when dealing with hybrid algorithms. This paper discusses these issues by proposing an alternative framework that utilizes algorithmic concepts related to the problem structure for generating efficient and effective algorithms. This paper proposes a framework to generate new high-performance algorithms with minimal computational overhead for MO optimization. PMID:24470795
Weight loss, weight regain and bone health.
Pines, Amos
2012-08-01
The ideal body image for women these days is being slim, but in the real world obesity has become a major health problem even in developing countries. Both overweight and underweight may be associated with adverse outcomes in many bodily systems, including bone. Only a few studies have investigated the consequences of intentional weight loss followed by weight regain on bone metabolism and bone density. It seems that the negative impact of bone loss is not reversed when weight partially rebounds after the end of active intervention programs. Thus the benefits and risks of any weight loss program should be addressed individually, and monitoring of bone parameters is recommended.
Evolutionary Graph Drawing Algorithms
Institute of Scientific and Technical Information of China (English)
Huang Jing-wei; Wei Wen-fang
2003-01-01
In this paper, graph drawing algorithms based on genetic algorithms are designed for general undirected graphs and directed graphs. As shown, graph drawing algorithms designed with genetic algorithms have the following advantages: the frameworks of the algorithms are unified, the method is simple, and different algorithms may be obtained by designing different objective functions, which enhances the reuse of the algorithms. Aesthetics or constraints may also be added to satisfy different requirements.
Optimization of Evolutionary Neural Networks Using Hybrid Learning Algorithms
Abraham, Ajith
2004-01-01
Evolutionary artificial neural networks (EANNs) refer to a special class of artificial neural networks (ANNs) in which evolution is another fundamental form of adaptation in addition to learning. Evolutionary algorithms are used to adapt the connection weights, network architecture and learning algorithms according to the problem environment. Even though evolutionary algorithms are well known as efficient global search algorithms, very often they miss the best local solutions in the complex s...
Weighted Page Content Rank for Ordering Web Search Result
Directory of Open Access Journals (Sweden)
POOJA SHARMA,
2010-12-01
Full Text Available With the explosive growth of information sources available on the World Wide Web, it has become increasingly necessary for users to utilize automated tools to find, extract, filter, and evaluate the desired information and resources. Web structure mining and content mining play an effective role in this approach. There are two ranking algorithms, PageRank and Weighted PageRank. PageRank is a commonly used algorithm in web structure mining. Weighted PageRank also takes the importance of the inlinks and outlinks of the pages into account, but the rank score is not distributed equally among all links, i.e., an unequal distribution is performed. In this paper we propose a new algorithm, Weighted Page Content Rank (WPCR), based on web content mining and structure mining, which determines the relevancy of pages to a given query better than the existing PageRank and Weighted PageRank algorithms.
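The unequal rank distribution that distinguishes Weighted PageRank from classic PageRank can be illustrated with a short sketch: each page hands its rank to its outlinks in proportion to per-link weights rather than equally. This sketch shows the general idea only; the WPCR algorithm proposed in the paper additionally folds in page content relevance, which is not modeled here, and the graph and weights are invented for illustration.

```python
def weighted_pagerank(links, d=0.85, iters=100):
    """Weighted PageRank sketch. links maps page -> {target: link weight}.
    Rank flows to outlinks proportionally to weight (unequal distribution),
    with the usual damping factor d."""
    pages = set(links) | {t for out in links.values() for t in out}
    rank = {p: 1.0 / len(pages) for p in pages}
    for _ in range(iters):
        new = {p: (1 - d) / len(pages) for p in pages}
        for p, out in links.items():
            total = sum(out.values())
            for t, w in out.items():
                # share of p's rank proportional to this link's weight
                new[t] += d * rank[p] * w / total
        rank = new
    return rank
```

With equal weights on every link, this reduces to ordinary PageRank.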
Energy Technology Data Exchange (ETDEWEB)
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
MOEA/D with adaptive weight adjustment.
Qi, Yutao; Ma, Xiaoliang; Liu, Fang; Jiao, Licheng; Sun, Jianyong; Wu, Jianshe
2014-01-01
Recently, MOEA/D (multi-objective evolutionary algorithm based on decomposition) has achieved great success in the field of evolutionary multi-objective optimization and has attracted a lot of attention. It decomposes a multi-objective optimization problem (MOP) into a set of scalar subproblems using uniformly distributed aggregation weight vectors and provides an excellent general algorithmic framework of evolutionary multi-objective optimization. Generally, the uniformity of weight vectors in MOEA/D can ensure the diversity of the Pareto optimal solutions, however, it cannot work as well when the target MOP has a complex Pareto front (PF; i.e., discontinuous PF or PF with sharp peak or low tail). To remedy this, we propose an improved MOEA/D with adaptive weight vector adjustment (MOEA/D-AWA). According to the analysis of the geometric relationship between the weight vectors and the optimal solutions under the Chebyshev decomposition scheme, a new weight vector initialization method and an adaptive weight vector adjustment strategy are introduced in MOEA/D-AWA. The weights are adjusted periodically so that the weights of subproblems can be redistributed adaptively to obtain better uniformity of solutions. Meanwhile, computing efforts devoted to subproblems with duplicate optimal solution can be saved. Moreover, an external elite population is introduced to help adding new subproblems into real sparse regions rather than pseudo sparse regions of the complex PF, that is, discontinuous regions of the PF. MOEA/D-AWA has been compared with four state of the art MOEAs, namely the original MOEA/D, Adaptive-MOEA/D, [Formula: see text]-MOEA/D, and NSGA-II on 10 widely used test problems, two newly constructed complex problems, and two many-objective problems. Experimental results indicate that MOEA/D-AWA outperforms the benchmark algorithms in terms of the IGD metric, particularly when the PF of the MOP is complex.
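MOEA/D's decomposition step, which the weight-adjustment strategy of MOEA/D-AWA builds on, reduces a multi-objective problem to scalar subproblems, one per weight vector. The minimal sketch below shows the Chebyshev scalarization and a uniform two-objective weight initialization; it is an illustration of the decomposition scheme, not the paper's implementation, and the function names are illustrative.

```python
def tchebycheff(fx, weight, z_star):
    """Chebyshev scalarization used by MOEA/D: an objective vector fx is
    reduced to max_i w_i * |f_i - z*_i|, where z* is the ideal point.
    Each weight vector defines one scalar subproblem to minimize."""
    return max(w * abs(f - z) for w, f, z in zip(weight, fx, z_star))

def uniform_weights(n):
    """Uniformly distributed two-objective weight vectors (w, 1 - w), the
    kind of initialization that MOEA/D-AWA then adjusts adaptively when
    the Pareto front is complex."""
    return [(i / (n - 1), 1 - i / (n - 1)) for i in range(n)]
```

A discontinuous Pareto front makes several uniformly spaced subproblems share one optimum, which is why the paper redistributes the weights periodically.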
Improved symbiotic organisms search algorithm for solving unconstrained function optimization
Directory of Open Access Journals (Sweden)
Sukanta Nama
2016-09-01
Full Text Available Recently, the Symbiotic Organisms Search (SOS) algorithm has been used for solving complex optimization problems. This paper proposes an Improved Symbiotic Organisms Search (I-SOS) algorithm for solving different complex unconstrained global optimization problems. In the improved algorithm, a random weighted reflective parameter and a predation phase are suggested to enhance the performance of the algorithm. The performance of this algorithm is compared with other state-of-the-art algorithms. A parametric study of the common control parameter has also been performed.
Approximate shortest homotopic paths in weighted regions
Cheng, Siuwing
2012-02-01
A path P between two points s and t in a polygonal subdivision T with obstacles and weighted regions defines a class of paths that can be deformed to P without passing over any obstacle. We present the first algorithm that, given P and a relative error tolerance ε ∈ (0, 1), computes a path from this class with cost at most 1 + ε times the optimum. The running time is O(h^3/ε^2 kn polylog(k, n, 1/ε)), where k is the number of segments in P and h and n are the numbers of obstacles and vertices in T, respectively. The constant in the running time of our algorithm depends on some geometric parameters and on the ratio of the maximum region weight to the minimum region weight. © 2012 World Scientific Publishing Company.
Approximate Shortest Homotopic Paths in Weighted Regions
Cheng, Siu-Wing
2010-01-01
Let P be a path between two points s and t in a polygonal subdivision T with obstacles and weighted regions. Given a relative error tolerance ε ∈ (0, 1), we present the first algorithm to compute a path between s and t that can be deformed to P without passing over any obstacle and whose cost is within a factor 1 + ε of the optimum. The running time is O(h^3/ε^2 kn polylog(k, n, 1/ε)), where k is the number of segments in P and h and n are the numbers of obstacles and vertices in T, respectively. The constant in the running time of our algorithm depends on some geometric parameters and on the ratio of the maximum region weight to the minimum region weight. © 2010 Springer-Verlag.
DOBD Algorithm for Training Neural Network:Part II. Application
Institute of Scientific and Technical Information of China (English)
吴建昱; 何小荣
2002-01-01
In the first part of the article, a new algorithm for network pruning, Dynamic Optimal Brain Damage (DOBD), is introduced. In this part, two cases and an industrial application are worked out to test the new algorithm. It is verified that the algorithm can obtain good generalization by dynamically deleting weight parameters with low sensitivities, and that it gets better results than the Marquardt algorithm or the cross-validation method. Although the initial construction of the network may differ, the final number of free weights pruned by the DOBD algorithm is similar and close to the optimal number of free weights. The algorithm is also helpful in designing the optimal structure of the network.
Institute of Scientific and Technical Information of China (English)
张雨浓; 李钧; 张智军; 阮恭勤; 姜孝华
2011-01-01
According to Fourier series approximation theory, a single-input multiple-output (SIMO) trigonometrically-activated Fourier neural network model is constructed by setting the hidden-layer neuron activation functions to an orthogonal trigonometric function series and selecting the periodic parameters of these activation functions properly. In light of the characteristics of the presented network, a pseudo-inverse based weights-direct-determination method is derived to determine the optimal weights of the network in one step, and a structure-automatic-determination algorithm is designed. Simulation results substantiate that, compared with the traditional BP (backpropagation) neural network and the SIMO Fourier neural network model based on the least squares method, this model has higher accuracy and faster computing speed.
Hesitant fuzzy agglomerative hierarchical clustering algorithms
Zhang, Xiaolu; Xu, Zeshui
2015-02-01
Recently, hesitant fuzzy sets (HFSs) have been studied by many researchers as a powerful tool to describe and deal with uncertain data, but relatively few studies focus on the clustering analysis of HFSs. In this paper, we propose a novel hesitant fuzzy agglomerative hierarchical clustering algorithm for HFSs. The algorithm considers each of the given HFSs as a unique cluster in the first stage, and then compares each pair of HFSs by utilising the weighted Hamming distance or the weighted Euclidean distance. The two clusters with the smallest distance are joined. The procedure is repeated again and again until the desired number of clusters is achieved. Moreover, we extend the algorithm to cluster interval-valued hesitant fuzzy sets, and finally illustrate the effectiveness of our clustering algorithms with experimental results.
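The algorithm's two ingredients, a weighted distance between HFSs and repeated merging of the closest pair of clusters, can be sketched as follows. The sketch assumes hesitant elements are already sorted and padded to equal length, as is conventional, and uses single linkage for the inter-cluster distance; the paper's merge step may differ, so treat this as an illustration rather than the authors' algorithm.

```python
def weighted_hamming(h1, h2, weights):
    """Weighted Hamming distance between two hesitant fuzzy sets, each a
    list of equal-length tuples of membership values (one per criterion)."""
    d = 0.0
    for w, a, b in zip(weights, h1, h2):
        d += w * sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return d

def agglomerate(items, weights, k):
    """Agglomerative hierarchical clustering: start with singleton
    clusters and repeatedly merge the closest pair (single linkage)
    until k clusters remain. Returns lists of item indices."""
    clusters = [[i] for i in range(len(items))]
    while len(clusters) > k:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                d = min(weighted_hamming(items[a], items[b], weights)
                        for a in clusters[i] for b in clusters[j])
                if best is None or d < best[0]:
                    best = (d, i, j)
        _, i, j = best
        clusters[i] += clusters.pop(j)
    return clusters
```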
Directory of Open Access Journals (Sweden)
Xuemei Sun
2015-01-01
Full Text Available The degree-constrained minimum spanning tree (DCMST) problem is to construct a spanning tree of minimum weight in a complete graph with weights on edges such that the degree of each node in the spanning tree is no more than d (d ≥ 2). The paper proposes an improved multi-colony ant algorithm for degree-constrained minimum spanning tree search, which enables independent search for optimal solutions among the various colonies and achieves information exchange between different colonies via information entropy. A local optimization algorithm is introduced to improve the constructed spanning tree. Meanwhile, algorithm strategies from dynamic ant, random-perturbation ant colony, and max-min ant systems are adapted in this paper to optimize the proposed algorithm. Finally, multiple groups of experimental data show the superiority of the improved algorithm in solving degree-constrained minimum spanning tree problems.
Color-to-grayscale conversion through weighted multiresolution channel fusion
Wu, T.; Toet, A.
2014-01-01
We present a color-to-gray conversion algorithm that retains both the overall appearance and the discriminability of details of the input color image. The algorithm employs a weighted pyramid image fusion scheme to blend the R, G, and B color channels of the input image into a single grayscale image.
Explicit inverse distance weighting mesh motion for coupled problems
Witteveen, J.A.S.; Bijl, H.
2009-01-01
An explicit mesh motion algorithm based on inverse distance weighting interpolation is presented. The explicit formulation leads to a fast mesh motion algorithm and an easy implementation. In addition, the proposed point-by-point method is robust and flexible in case of large deformations, hanging nodes...
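The point-by-point character of the method is what makes it explicit: each interior mesh point's displacement is just a weighted average of the known boundary displacements, with weights 1/d^p, so no linear system has to be solved. A minimal sketch under standard IDW assumptions (the exponent and exact weight form vary between formulations, and this is not the paper's code):

```python
import numpy as np

def idw_displacement(query, boundary_pts, boundary_disp, p=2.0):
    """Displacement of one interior point by inverse distance weighting.
    boundary_pts: (m, dim) known boundary coordinates.
    boundary_disp: (m, dim) their prescribed displacements."""
    d = np.linalg.norm(boundary_pts - query, axis=1)
    if np.any(d == 0):
        # Query coincides with a boundary point: return its displacement
        return boundary_disp[np.argmin(d)]
    w = 1.0 / d ** p                      # closer points dominate
    return (w[:, None] * boundary_disp).sum(axis=0) / w.sum()
```

Because each point is handled independently, the method parallelizes trivially and tolerates large deformations.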
Algorithms, complexity, and the sciences.
Papadimitriou, Christos
2014-11-11
Algorithms, perhaps together with Moore's law, compose the engine of the information technology revolution, whereas complexity--the antithesis of algorithms--is one of the deepest realms of mathematical investigation. After introducing the basic concepts of algorithms and complexity, and the fundamental complexity classes P (polynomial time) and NP (nondeterministic polynomial time, or search problems), we discuss briefly the P vs. NP problem. We then focus on certain classes between P and NP which capture important phenomena in the social and life sciences, namely the Nash equilibrium and other equilibria in economics and game theory, and certain processes in population genetics and evolution. Finally, an algorithm known as multiplicative weights update (MWU) provides an algorithmic interpretation of the evolution of allele frequencies in a population under sex and weak selection. All three of these equivalences are rife with domain-specific implications: The concept of Nash equilibrium may be less universal--and therefore less compelling--than has been presumed; selection on gene interactions may entail the maintenance of genetic variation for longer periods than selection on single alleles predicts; whereas MWU can be shown to maximize, for each gene, a convex combination of the gene's cumulative fitness in the population and the entropy of the allele distribution, an insight that may be pertinent to the maintenance of variation in evolution.
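The multiplicative weights update rule mentioned above is simple enough to state in full: each expert's weight is multiplied by (1 - eta * loss) every round, and the normalized weights form the algorithm's distribution over experts. A minimal sketch in the standard textbook form, not tied to the population-genetics interpretation in the abstract:

```python
def mwu(losses, eta=0.5):
    """Multiplicative weights update. losses[t][i] is expert i's loss
    (in [0, 1]) at round t; eta in (0, 1/2] is the learning rate.
    Returns the final normalized weight of each expert."""
    n = len(losses[0])
    w = [1.0] * n                      # start with uniform weights
    for round_losses in losses:
        # penalize each expert multiplicatively by its loss this round
        w = [wi * (1 - eta * li) for wi, li in zip(w, round_losses)]
    total = sum(w)
    return [wi / total for wi in w]
```

Weight concentrates exponentially fast on low-loss experts, which is the mechanism behind the allele-frequency interpretation discussed in the paper.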
Fourier Lucas-Kanade algorithm.
Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha
2013-06-01
In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).
Level-0 trigger algorithms for the ALICE PHOS detector
Wang, D; Wang, Y P; Huang, G M; Kral, J; Yin, Z B; Zhou, D C; Zhang, F; Ullaland, K; Muller, H; Liu, L J
2011-01-01
The PHOS level-0 trigger provides a minimum bias trigger for p-p collisions and information for a level-1 trigger in both p-p and Pb-Pb collisions. There are two level-0 trigger generating algorithms under consideration: the Direct Comparison algorithm and the Weighted Sum algorithm. In order to study the trigger algorithms via simulation, a simplified equivalent model is extracted from the trigger electronics to derive the waveform function of the Analog-or signal as input to the trigger algorithms. Simulations show that the Weighted Sum algorithm can achieve higher trigger efficiency and provide more precise single-channel energy information than the Direct Comparison algorithm. An energy resolution of 9.75 MeV can be achieved with the Weighted Sum algorithm at a sampling rate of 40 Msps (mega samples per second) at 1 GeV. The timing performance at a sampling rate of 40 Msps with the Weighted Sum algorithm is better than that at a sampling rate of 20 Msps with either algorithm. The level-0 trigger can be delivered...
Distributed algorithms for resource allocation and routing
Hu, Zengjian
2007-01-01
In this thesis, we study distributed algorithms in the context of two fundamental problems in distributed systems, resource allocation and routing. Resource allocation studies how to distribute workload evenly across resources. We consider two different resource allocation models, diffusive load balancing and weighted balls-into-bins games. Routing studies how to deliver messages from source to destination efficiently. We design routing algorithms for broadcasting and gossiping in ad hoc n...
A Weighted Block Dictionary Learning Algorithm for Classification
Zhongrong Shi
2016-01-01
Discriminative dictionary learning, playing a critical role in sparse representation based classification, has led to state-of-the-art classification results. Among the existing discriminative dictionary learning methods, two different approaches, the shared dictionary and the class-specific dictionary, which associate each dictionary atom with all classes or with a single class, have been studied. The shared dictionary is a compact method but lacks discriminative information; the class-specific dict...
Algorithm Animation with Galant.
Stallmann, Matthias F
2017-01-01
Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.
A novel fuzzy sensor fusion algorithm
Institute of Scientific and Technical Information of China (English)
FU Hua; YANG Yi-kui; MA Ke; LIU Yu-jia
2011-01-01
A novel fusion algorithm was given based on fuzzy similarity and fuzzy integral theory. First, it calculated the fuzzy similarity between a certain sensor's measurement values and the multiple sensors' objective prediction values to determine the importance weight of each sensor and realize multi-sensor data fusion. Then, according to the determined importance weights, an intelligent fusion system based on fuzzy integral theory was given, which can solve FEI-DEO and DEI-DEO fusion problems and realize decision fusion. Simulation results proved that the fuzzy integral algorithm enhances the capability of handling uncertain information and improves the degree of intelligence.
Composite multiobjective optimization beamforming based on genetic algorithms
Institute of Scientific and Technical Information of China (English)
Shi Jing; Meng Weixiao; Zhang Naitong; Wang Zheng
2006-01-01
All the parameters of beamforming are usually optimized simultaneously when implementing the optimization of an antenna array pattern with multiple objectives and parameters by genetic algorithms (GAs). Firstly, this paper analyzes the performance of the fitness functions of previous algorithms. It shows that the original algorithms make the fitness functions too complex, leading to a large amount of calculation, and also make the selection of the parameter weights very sensitive because many parameters are optimized simultaneously. This paper proposes a composite beamforming algorithm, which splits the antenna array into two parts corresponding to the optimization of different objective parameters respectively. The new algorithm substitutes two simpler functions for the previous complex fitness function. Both theoretical analysis and simulation results show that this method simplifies the selection of weighting parameters and reduces the complexity of calculation. Furthermore, the algorithm achieves lower side lobes and less interference than conventional beamforming algorithms, at the cost of slightly widening the main lobe.
Algorithm of communication network reliability combining links, nodes and capacity
Institute of Scientific and Technical Information of China (English)
Anonymous
2005-01-01
The concept of a normalized reliability index weighted by capacity is introduced, combining the communication capacity, the reliability probability of the exchange nodes, and the reliability probability of the transmission links, in order to estimate the reliability performance of a communication network comprehensively and objectively. To realize the full algebraic calculation, a key problem must be solved: finding an algorithm to calculate all the routes between nodes of a network. A logic-algebraic algorithm for network routes is studied, and based on this algorithm, a full algebraic algorithm for the normalized reliability index weighted by capacity is developed. This algorithm is easy to program and completes the calculation of the reliability index, which is the foundation of comprehensive and objective estimation of communication networks. The calculation procedure of the algorithm is introduced through typical examples, and the results verify the algorithm.
Frequency domain simultaneous algebraic reconstruction techniques: algorithm and convergence
Wang, Jiong; Zheng, Yibin
2005-03-01
We propose a simultaneous algebraic reconstruction technique (SART) in the frequency domain for linear imaging problems. This algorithm has the advantage of efficiently incorporating pixel correlations in an a priori image model. First it is shown that the generalized SART algorithm converges to the weighted minimum norm solution of a weighted least square problem. Then an implementation in the frequency domain is described. The performance of the new algorithm is demonstrated with fan beam computed tomography (CT) examples. Compared to the traditional SART and its major alternative ART, the new algorithm offers superior image quality and potential application to other modalities.
Weight loss, weight maintenance, and adaptive thermogenesis.
Camps, Stefan G J A; Verhoef, Sanne P M; Westerterp, Klaas R
2013-05-01
Diet-induced weight loss is accompanied by adaptive thermogenesis, ie, a disproportional or greater than expected reduction of resting metabolic rate (RMR). The aim of this study was to investigate whether adaptive thermogenesis is sustained during weight maintenance after weight loss. Subjects were 22 men and 69 women [mean ± SD age: 40 ± 9 y; body mass index (BMI; in kg/m(2)): 31.9 ± 3.0]. They followed a very-low-energy diet for 8 wk, followed by a 44-wk period of weight maintenance. Body composition was assessed with a 3-compartment model based on body weight, total body water (deuterium dilution), and body volume. RMR was measured (RMRm) with a ventilated hood. In addition, RMR was predicted (RMRp) on the basis of the measured body composition: RMRp (MJ/d) = 0.024 × fat mass (kg) + 0.102 × fat-free mass (kg) + 0.85. Measurements took place before the diet and 8, 20, and 52 wk after the start of the diet. The ratio of RMRm to RMRp decreased from 1.004 ± 0.077 before the diet to 0.963 ± 0.073 after the diet and remained lowered at 20 wk (0.983 ± 0.063). Weight loss results in adaptive thermogenesis, and there is no indication of a change in adaptive thermogenesis up to 1 y when weight loss is maintained. This trial was registered at clinicaltrials.gov as NCT01015508.
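The prediction equation quoted in the abstract translates directly into code. The helper names below are illustrative; only the equation itself comes from the record.

```python
def predicted_rmr(fat_mass_kg, fat_free_mass_kg):
    """Predicted resting metabolic rate (MJ/d) from body composition,
    per the equation in the abstract:
    RMRp = 0.024 * fat mass + 0.102 * fat-free mass + 0.85."""
    return 0.024 * fat_mass_kg + 0.102 * fat_free_mass_kg + 0.85

def adaptive_thermogenesis_ratio(rmr_measured, fat_mass_kg, fat_free_mass_kg):
    """Ratio of measured to predicted RMR; values below 1 indicate
    adaptive thermogenesis (a larger-than-expected drop in RMR)."""
    return rmr_measured / predicted_rmr(fat_mass_kg, fat_free_mass_kg)
```

For example, a subject with 30 kg fat mass and 55 kg fat-free mass has a predicted RMR of 7.18 MJ/d, so a measured RMR below that would give a ratio under 1.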
Dietary protein, weight loss, and weight maintenance.
Westerterp-Plantenga, M S; Nieuwenhuizen, A; Tomé, D; Soenen, S; Westerterp, K R
2009-01-01
The role of dietary protein in weight loss and weight maintenance encompasses influences on crucial targets for body weight regulation, namely satiety, thermogenesis, energy efficiency, and body composition. Protein-induced satiety may be mainly due to oxidation of amino acids fed in excess, especially in diets with "incomplete" proteins. Protein-induced energy expenditure may be due to protein and urea synthesis and to gluconeogenesis; "complete" proteins having all essential amino acids show larger increases in energy expenditure than do lower-quality proteins. With respect to adverse effects, no protein-induced effects are observed on net bone balance or on calcium balance in young adults and elderly persons. Dietary protein even increases bone mineral mass and reduces incidence of osteoporotic fracture. During weight loss, nitrogen intake positively affects calcium balance and consequent preservation of bone mineral content. Sulphur-containing amino acids cause a blood pressure-raising effect by loss of nephron mass. Subjects with obesity, metabolic syndrome, and type 2 diabetes are particularly susceptible groups. This review provides an overview of how sustaining absolute protein intake affects metabolic targets for weight loss and weight maintenance during negative energy balance, i.e., sustaining satiety and energy expenditure and sparing fat-free mass, resulting in energy inefficiency. However, the long-term relationship between net protein synthesis and sparing fat-free mass remains to be elucidated.
Optimizing neural network forecast by immune algorithm
Institute of Scientific and Technical Information of China (English)
YANG Shu-xia; LI Xiang; LI Ning; YANG Shang-dong
2006-01-01
Considering multi-factor influence, a forecasting model was built. The structure of a BP neural network was designed, and an immune algorithm was applied to optimize its network structure and weights. After training on power demand data for China from 1980 to 2005, a nonlinear network model of the relationship between power demand and its influencing factors was obtained, verifying the proposed method. Meanwhile, the results were compared to those of a neural network optimized by a genetic algorithm. The results show that this method is superior to the neural network optimized by the genetic algorithm and is one of the effective approaches to time series forecasting.
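The paper's specific network and immune operators are not given in the abstract; the sketch below shows the general idea under stated assumptions: a clonal-selection-style loop (clone the fittest weight vectors, hypermutate the clones, reselect) tuning the weights of a tiny one-hidden-layer network on a toy fitting task.

```python
import math
import random

def forward(w, x):
    # 1 input -> 2 tanh hidden units -> 1 linear output; w holds 7 parameters
    h1 = math.tanh(w[0] * x + w[1])
    h2 = math.tanh(w[2] * x + w[3])
    return w[4] * h1 + w[5] * h2 + w[6]

def error(w, data):
    return sum((forward(w, x) - y) ** 2 for x, y in data) / len(data)

def immune_optimize(data, pop=20, clones=5, gens=60, seed=1):
    rng = random.Random(seed)
    population = [[rng.uniform(-1, 1) for _ in range(7)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda w: error(w, data))
        elite = population[: pop // 2]
        offspring = []
        for w in elite:
            for _ in range(clones):  # clone and hypermutate
                offspring.append([wi + rng.gauss(0, 0.1) for wi in w])
        population = sorted(elite + offspring, key=lambda w: error(w, data))[:pop]
    return population[0]

data = [(x / 10, (x / 10) ** 2) for x in range(-10, 11)]  # fit y = x^2 on [-1, 1]
best = immune_optimize(data)
print(error(best, data))  # residual error after optimization
```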
... be due to menstruation, heart or kidney failure, preeclampsia, or medicines you take. A rapid weight gain ... al. Position of the American Dietetic Association: weight management. J Am Diet Assoc . 2009;109:330-46. ...
Weight-loss medicines (//medlineplus.gov/ency/patientinstructions/000346.htm): Several weight-loss medicines are available. Ask your health care provider ...
Weighted thinned linear array design with the iterative FFT technique
CSIR Research Space (South Africa)
Du Plessis, WP
2011-09-01
A version of the iterative Fourier technique (IFT) for the design of thinned antenna arrays with weighted elements is presented. The structure of the algorithm means that it is well suited to the design of weighted thinned arrays with low current...
Weight management in pregnancy
Olander, E. K.
2015-01-01
Key learning points: - Women who start pregnancy in an overweight or obese weight category have increased health risks - Irrespective of pre-pregnancy weight category, there are health risks associated with gaining too much weight in pregnancy for both mother and baby - There are currently no official weight gain guidelines for pregnancy in the UK, thus focus needs to be on supporting pregnant women to eat healthily and keep active
Modeling Evolution of Weighted Clique Networks
Yang, Xu-Hua; Jiang, Feng-Ling; Chen, Sheng-Yong; Wang, Wan-Liang
2011-11-01
We propose a weighted clique network evolution model, which expands continuously by the addition of a new clique (maximal complete sub-graph) at each time step; the cliques in the network overlap with each other. The structural expansion of the weighted clique network is combined with the dynamical evolution of the edges' weights and the vertices' strengths. The model is based on weight-driven dynamics and a weight-enhancement mechanism combined with network growth. We study the network properties, including the distribution of vertices' strength and the distribution of edges' weight, and find that both distributions are scale-free. We also find that the strength and degree of a vertex are linearly correlated during the growth of the network. On the basis of mean-field theory, we study the weighted network model and prove that both the vertices' strength and the edges' weight follow scale-free distributions. We also present an algorithm to forecast the network dynamics, which can be used to reckon the distributions and the corresponding scaling exponents. Furthermore, we observe that the mean-field theoretic results are consistent with the statistical data of the model, which indicates that the theoretical results are effective.
Applications of Weighted Automata in Natural Language Processing
Knight, Kevin; May, Jonathan
We explain why weighted automata are an attractive knowledge representation for natural language problems. We first trace the close historical ties between the two fields, then present two complex real-world problems, transliteration and translation. These problems are usefully decomposed into a pipeline of weighted transducers, and weights can be set to maximize the likelihood of a training corpus using standard algorithms. We additionally describe the representation of language models, critical data sources in natural language processing, as weighted automata. We outline the wide range of work in natural language processing that makes use of weighted string and tree automata and describe current work and challenges.
An objective approach to determining criteria weights
Directory of Open Access Journals (Sweden)
Milić R. Milićević
2012-01-01
This paper presents an objective approach to determining criteria weights that can be successfully used in multiple-criteria models. The entropy, CRITIC, and FANMA methods are presented, as well as a possible combination of the objective and subjective approaches. Although based on different theoretical settings, and therefore realized by different algorithms, all of these methods take a decision matrix as their starting point. An objective approach to determining criteria weights eliminates the negative impact of the decision maker on the criteria weights and thus on the final solution of multicriteria problems. The main aim of this paper is to systematize these procedures as an aid when determining criteria weights for multicriteria tasks. The application of the methods is shown in a numerical example.
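Of the three methods named above, the entropy method is the simplest to sketch: normalize each criterion column of the decision matrix, compute its Shannon entropy, and give more weight to criteria whose values vary more across alternatives (the example matrix is mine, not the paper's):

```python
import math

def entropy_weights(matrix):
    """Objective criteria weights by the entropy method.

    matrix[i][j]: performance of alternative i on (benefit) criterion j.
    """
    m = len(matrix)
    k = 1.0 / math.log(m)          # scales entropy into [0, 1]
    weights = []
    for j in range(len(matrix[0])):
        col = [row[j] for row in matrix]
        total = sum(col)
        p = [v / total for v in col]
        e = -k * sum(pi * math.log(pi) for pi in p if pi > 0)
        weights.append(1.0 - e)    # degree of diversification
    s = sum(weights)
    return [w / s for w in weights]

w = entropy_weights([[7, 1], [5, 9], [6, 5]])
print(w)  # criterion 2 varies more across alternatives, so it gets more weight
```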
Dynamic airspace configuration method based on a weighted graph model
Directory of Open Access Journals (Sweden)
Chen Yangzhou
2014-08-01
This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, where the vertices represent key points such as airports and waypoints, and the edges represent air routes. The vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Then, aircraft counts for each cell and for each air route are computed. By assigning these aircraft counts to the vertices and the edges, a weighted graph model is obtained, and the airspace configuration problem is described as a weighted graph partitioning problem. The problem is then solved by a graph partitioning algorithm, which is a mixture of a general weighted graph cuts algorithm, an optimal dynamic load-balancing algorithm, and a heuristic algorithm. After the cuts algorithm partitions the model into sub-graphs, the load-balancing algorithm together with the heuristic algorithm transfers aircraft counts to balance workload among sub-graphs. Lastly, airspace configuration is completed by determining the sector boundaries. The simulation result shows that the designed sectors satisfy not only the workload-balancing condition but also constraints such as convexity, connectivity, and minimum distance.
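The workload-balancing step can be illustrated in isolation. The greedy heuristic below (mine, not the paper's mixture of cuts and load-balancing algorithms, and ignoring the convexity/connectivity constraints) splits cells, weighted by their aircraft counts, into two sectors with roughly equal workload:

```python
def balanced_bipartition(cell_counts):
    """cell_counts: {cell: aircraft count}. Greedily assign the heaviest
    remaining cell to the lighter sector. Returns two (cells, load) pairs."""
    a, b = [], []
    load_a = load_b = 0
    for cell, count in sorted(cell_counts.items(), key=lambda kv: -kv[1]):
        if load_a <= load_b:
            a.append(cell); load_a += count
        else:
            b.append(cell); load_b += count
    return (a, load_a), (b, load_b)

cells = {"C1": 9, "C2": 7, "C3": 5, "C4": 4, "C5": 3}
(sector_a, load_a), (sector_b, load_b) = balanced_bipartition(cells)
print(load_a, load_b)  # 13 15
```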
Dynamic airspace configuration method based on a weighted graph model
Institute of Scientific and Technical Information of China (English)
Chen Yangzhou; Zhang Defu
2014-01-01
This paper proposes a new method for dynamic airspace configuration based on a weighted graph model. The method begins with the construction of an undirected graph for the given airspace, where the vertices represent key points such as airports and waypoints, and the edges represent air routes. The vertices are used as the sites of a Voronoi diagram, which divides the airspace into units called cells. Then, aircraft counts for each cell and for each air route are computed. By assigning these aircraft counts to the vertices and the edges, a weighted graph model is obtained, and the airspace configuration problem is described as a weighted graph partitioning problem. The problem is then solved by a graph partitioning algorithm, which is a mixture of a general weighted graph cuts algorithm, an optimal dynamic load-balancing algorithm, and a heuristic algorithm. After the cuts algorithm partitions the model into sub-graphs, the load-balancing algorithm together with the heuristic algorithm transfers aircraft counts to balance workload among sub-graphs. Lastly, airspace configuration is completed by determining the sector boundaries. The simulation result shows that the designed sectors satisfy not only the workload-balancing condition but also constraints such as convexity, connectivity, and minimum distance.
Skiena, Steven S
2008-01-01
Explaining how to design algorithms and analyze their efficacy and efficiency, this book covers combinatorial algorithm technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms, and it contains a catalog of algorithmic resources, implementations, and a bibliography.
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA; because of this transition effect, totally new waveforms are produced.
A new Hedging algorithm and its application to inferring latent random variables
Freund, Yoav
2008-01-01
We present a new online learning algorithm for cumulative discounted gain. This learning algorithm does not use exponential weights on the experts. Instead, it uses a weighting scheme that depends on the regret of the master algorithm relative to the experts. In particular, experts whose discounted cumulative gain is smaller (worse) than that of the master algorithm receive zero weight. We also sketch how a regret-based algorithm can be used as an alternative to Bayesian averaging in the context of inferring latent random variables.
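The key idea in the abstract — experts whose cumulative gain falls below the master's receive zero weight — can be sketched as follows. This is an illustrative simplification in the spirit of the description, not the paper's exact update rule:

```python
def regret_weights(expert_gains, master_gain):
    """Weight each expert by the positive part of its cumulative gain minus
    the master's cumulative gain; experts doing worse than the master get
    zero weight. Weights are normalized to sum to 1."""
    raw = [max(0.0, g - master_gain) for g in expert_gains]
    total = sum(raw)
    if total == 0:  # no expert beats the master: fall back to uniform weights
        return [1.0 / len(raw)] * len(raw)
    return [r / total for r in raw]

w = regret_weights([3.0, 5.0, 1.0], master_gain=2.0)
print(w)  # [0.25, 0.75, 0.0] -- the third expert trails the master
```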
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...
Energy Technology Data Exchange (ETDEWEB)
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
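The BR algorithm itself is not available in standard libraries. As a point of reference for the problem it solves, the sketch below builds an upper Hessenberg matrix — the companion matrix of (x−1)(x−2)(x−3) = x³ − 6x² + 11x − 6, which is nearly tridiagonal — and computes its eigenvalues with NumPy's QR-based solver, the baseline the BR algorithm is compared against:

```python
import numpy as np

# Top-row companion form: char. polynomial is x^3 - 6x^2 + 11x - 6,
# so the eigenvalues are exactly 1, 2, and 3.
H = np.zeros((3, 3))
H[0, :] = [6.0, -11.0, 6.0]  # top row carries the polynomial coefficients
H[1, 0] = H[2, 1] = 1.0      # subdiagonal ones: upper Hessenberg structure

eigs = np.sort(np.linalg.eigvals(H).real)
print(eigs)  # approximately [1. 2. 3.]
```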
Jacques, Paul F; Wang, Huifen
2014-05-01
A large body of observational studies and randomized controlled trials (RCTs) has examined the role of dairy products in weight loss and maintenance of healthy weight. Yogurt is a dairy product that is generally very similar to milk, but it also has some unique properties that may enhance its possible role in weight maintenance. This review summarizes the human RCT and prospective observational evidence on the relation of yogurt consumption to the management and maintenance of body weight and composition. The RCT evidence is limited to 2 small, short-term, energy-restricted trials. They both showed greater weight losses with yogurt interventions, but the difference between the yogurt intervention and the control diet was only significant in one of these trials. There are 5 prospective observational studies that have examined the association between yogurt and weight gain. The results of these studies are equivocal. Two of these studies reported that individuals with higher yogurt consumption gained less weight over time. One of these same studies also considered changes in waist circumference (WC) and showed that higher yogurt consumption was associated with smaller increases in WC. A third study was inconclusive because of low statistical power. A fourth study observed no association between changes in yogurt intake and weight gain, but the results suggested that those with the largest increases in yogurt intake during the study also had the highest increase in WC. The final study examined weight and WC change separately by sex and baseline weight status and showed benefits for both weight and WC changes for higher yogurt consumption in overweight men, but it also found that higher yogurt consumption in normal-weight women was associated with a greater increase in weight over follow-up. Potential underlying mechanisms for the action of yogurt on weight are briefly discussed.
Refinement of the community detection performance by weighted relationship coupling
Indian Academy of Sciences (India)
DONG MIN; KAI YU; HUI-JIA LI
2017-03-01
The complexity of many community detection algorithms is usually an exponential function of the network scale, which makes it hard to uncover community structure at high speed. Inspired by the ideas of the famous modularity optimization, in this paper we propose a proper weighting scheme utilizing a novel k-strength relationship which naturally represents the coupling distance between two nodes. Community structure detection using a generalized weighted modularity measure is refined based on the weighted k-strength matrix. We apply our algorithm to both famous benchmark networks and real networks. Theoretical analysis and experiments show that the weighted algorithm can uncover communities fast and accurately and can be easily extended to large-scale real networks.
Refinement of the community detection performance by weighted relationship coupling
MIN, DONG; YU, KAI; LI, HUI-JIA
2017-03-01
The complexity of many community detection algorithms is usually an exponential function of the network scale, which makes it hard to uncover community structure at high speed. Inspired by the ideas of the famous modularity optimization, in this paper we propose a proper weighting scheme utilizing a novel k-strength relationship which naturally represents the coupling distance between two nodes. Community structure detection using a generalized weighted modularity measure is refined based on the weighted k-strength matrix. We apply our algorithm to both famous benchmark networks and real networks. Theoretical analysis and experiments show that the weighted algorithm can uncover communities fast and accurately and can be easily extended to large-scale real networks.
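For reference, here is the standard weighted modularity the paper generalizes (Newman's form, not the paper's k-strength variant): Q = (1/2W) Σᵢⱼ [wᵢⱼ − sᵢsⱼ/2W] δ(cᵢ, cⱼ), where sᵢ is the strength of vertex i and 2W is the total strength. The toy graph below is my own example:

```python
def weighted_modularity(edges, communities):
    """edges: list of (u, v, weight) for an undirected graph;
    communities: {node: community id}. Returns Newman's weighted modularity."""
    strength = {}
    two_w = 0.0
    for u, v, w in edges:
        strength[u] = strength.get(u, 0.0) + w
        strength[v] = strength.get(v, 0.0) + w
        two_w += 2 * w
    q = 0.0
    for u, v, w in edges:
        if communities[u] == communities[v]:
            q += 2 * w  # each undirected edge counts as (i,j) and (j,i)
    q /= two_w
    q -= sum((strength[u] * strength[v]) / (two_w * two_w)
             for u in strength for v in strength
             if communities[u] == communities[v])
    return q

# Two weighted triangles joined by one weak edge (c-d)
edges = [("a", "b", 3), ("b", "c", 3), ("c", "a", 3), ("c", "d", 1),
         ("d", "e", 3), ("e", "f", 3), ("f", "d", 3)]
good = {"a": 0, "b": 0, "c": 0, "d": 1, "e": 1, "f": 1}
one_block = {n: 0 for n in good}
print(weighted_modularity(edges, good))       # clearly positive
print(weighted_modularity(edges, one_block))  # 0 by construction
```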
Computing the nucleolus of weighted voting games
Elkind, Edith
2008-01-01
Weighted voting games (WVG) are coalitional games in which an agent's contribution to a coalition is given by its weight, and a coalition wins if its total weight meets or exceeds a given quota. These games model decision-making in political bodies as well as collaboration and surplus division in multiagent domains. The computational complexity of various solution concepts for weighted voting games has received a lot of attention in recent years. In particular, Elkind et al. (2007) studied the complexity of stability-related solution concepts in WVGs, namely the core, the least core, and the nucleolus. While they completely characterized the algorithmic complexity of the core and the least core, for the nucleolus they only provided an NP-hardness result. In this paper, we solve an open problem posed by Elkind et al. by showing that the nucleolus of WVGs, and, more generally, k-vector weighted voting games with fixed k, can be computed in pseudopolynomial time, i.e., there exists an algorithm that ...
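The nucleolus computation itself is involved, but the pseudopolynomial flavour the paper exploits is easy to show: with integer weights, a dynamic program indexed by total weight (rather than enumerating all 2ⁿ coalitions explicitly) counts the winning coalitions. The example game is mine:

```python
def count_winning_coalitions(weights, quota):
    """DP over total weight: counts[w] = number of coalitions whose weights
    sum to exactly w. Runs in O(n * sum(weights)) time -- pseudopolynomial."""
    total = sum(weights)
    counts = [0] * (total + 1)
    counts[0] = 1  # the empty coalition
    for wt in weights:
        for w in range(total, wt - 1, -1):  # descending: each agent used once
            counts[w] += counts[w - wt]
    return sum(counts[quota:])

# Game [4, 3, 2, 1] with quota 5: a coalition wins iff its weight is >= 5
print(count_winning_coalitions([4, 3, 2, 1], 5))  # 9
```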
Energy Technology Data Exchange (ETDEWEB)
Lee, Kok Foong [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom); Patterson, Robert I.A.; Wagner, Wolfgang [Weierstrass Institute for Applied Analysis and Stochastics, Mohrenstraße 39, 10117 Berlin (Germany); Kraft, Markus, E-mail: mk306@cam.ac.uk [Department of Chemical Engineering and Biotechnology, University of Cambridge, New Museums Site, Pembroke Street, Cambridge CB2 3RA (United Kingdom); School of Chemical and Biomedical Engineering, Nanyang Technological University, 62 Nanyang Drive, Singapore, 637459 (Singapore)
2015-12-15
Graphical abstract: -- Highlights: •Problems concerning multi-compartment population balance equations are studied. •A class of fragmentation weight transfer functions is presented. •Three stochastic weighted algorithms are compared against the direct simulation algorithm. •The numerical errors of the stochastic solutions are assessed as a function of fragmentation rate. •The algorithms are applied to a multi-dimensional granulation model. -- Abstract: This paper introduces stochastic weighted particle algorithms for the solution of multi-compartment population balance equations. In particular, it presents a class of fragmentation weight transfer functions which are constructed such that the number of computational particles stays constant during fragmentation events. The weight transfer functions are constructed based on systems of weighted computational particles, and each of them leads to a stochastic particle algorithm for the numerical treatment of population balance equations. Besides fragmentation, the algorithms also consider physical processes such as coagulation and the exchange of mass with the surroundings. The numerical properties of the algorithms are compared to the direct simulation algorithm and an existing method for the fragmentation of weighted particles. It is found that the new algorithms show better numerical performance than the two existing methods, especially for systems with a significant amount of large particles and high fragmentation rates.
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
Solving the MDBCS Problem Using the Metaheuric–Genetic Algorithm
Directory of Open Access Journals (Sweden)
Milena Bogdanovic
2011-12-01
The problem concerns degree-constrained subgraphs of a graph with vertex or edge weights, with the aim of finding the optimal weighted subgraph under certain restrictions on the degrees of the vertices in the subgraph. This class of combinatorial problems has been extensively studied because of its applications in network design, network interconnection, and routing algorithms, and the solution of the MDBCS problem is likely to find application in these areas. The paper gives an ILP model for the MDBCS problem, as well as a genetic algorithm that calculates a good enough solution for input graphs with a larger number of nodes. An important feature of heuristic algorithms is that they can approximate, yet still adequately solve, problems of exponential complexity. However, heuristic algorithms may not lead to a satisfactory solution, and for some problems they give relatively poor results; this is particularly true of problems for which no exact polynomial-complexity algorithm is known. Also, heuristic algorithms are not all alike, because some of their parts differ depending on the situation and the problem at hand. These parts are usually the objective function (transformation), and their definition significantly affects the efficiency of the algorithm. By mode of action, genetic algorithms belong to the directed random search methods that look for a global optimum in the space of solutions.
Robust reactor power control system design by genetic algorithm
Energy Technology Data Exchange (ETDEWEB)
Lee, Yoon Joon; Cho, Kyung Ho; Kim, Sin [Cheju National University, Cheju (Korea, Republic of)
1997-12-31
The H∞ robust controller for the reactor power control system is designed by use of the mixed weight sensitivity. The system is configured into the typical two-port model with which the weight functions are augmented. Since the solution depends on the weighting functions and the problem is nonconvex, a genetic algorithm is used to determine the weighting functions. The cost function applied in the genetic algorithm permits direct control of the power-tracking performance. In addition, actual operating constraints such as rod velocity and acceleration can be treated as design parameters. Compared with the conventional approach, the controller designed by the genetic algorithm yields better performance under realistic constraints. It is also found that the genetic algorithm can be used as an effective tool in robust design. 4 refs., 6 figs. (Author)
RESEARCH ON WEIGHTED PRIORITY OF EXEMPLAR-BASED IMAGE INPAINTING
Institute of Scientific and Technical Information of China (English)
Zhou Yatong; Li Lin; Xia Kewen
2012-01-01
The priority of the filled patch plays a key role in exemplar-based image inpainting, and it should be determined first to optimize the inpainting process. A modified image inpainting algorithm with weighted priority is proposed based on the Criminisi algorithm. The improved algorithm demonstrates a better relationship between the data term and the confidence term for optimizing the priority than the classical Criminisi algorithm. By comparing the inpainted results for images with different structures, it is concluded that the optimal priority should be chosen properly for each image structure.
Directory of Open Access Journals (Sweden)
Lianhui Li
2015-12-01
Medium- and long-term load forecasting plays an important role in energy policy implementation and electric utility investment decisions. Aiming to improve the robustness and accuracy of annual electric load forecasting, a robust weighted combination load forecasting method based on forecast model filtering and adaptive variable weight determination is proposed. Selection of similar years is carried out based on the similarity between each historical year and the forecast year. The forecast models are filtered to select the better ones according to their comprehensive validity degrees. To determine the adaptive variable weights of the selected forecast models, a disturbance variable is introduced into Immune Algorithm-Particle Swarm Optimization (IA-PSO) and an adaptive adjustment strategy for the particle search speed is established. Based on the forecast model weights determined by the improved IA-PSO, the weighted combination forecast of annual electric load is obtained. The given case study illustrates the correctness and feasibility of the proposed method.
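The combination step itself is simple once the weights are known. The paper tunes the weights with an improved IA-PSO; purely for illustration, the sketch below sets them from the models' inverse in-sample errors and then combines the individual forecasts:

```python
def combination_weights(errors):
    """Weight each model by its normalized inverse error (illustrative rule;
    the paper determines these weights with an improved IA-PSO instead)."""
    inv = [1.0 / e for e in errors]
    s = sum(inv)
    return [v / s for v in inv]

def combined_forecast(model_forecasts, weights):
    """Weighted combination of the individual model forecasts."""
    return sum(w * f for w, f in zip(weights, model_forecasts))

w = combination_weights([2.0, 1.0, 4.0])  # lower error -> larger weight
print(w)                                  # model 2 dominates the combination
print(combined_forecast([100.0, 104.0, 96.0], w))
```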
Renormalization algorithm with graph enhancement
Hübener, R; Hartmann, L; Dür, W; Plenio, M B; Eisert, J
2011-01-01
We present applications of the renormalization algorithm with graph enhancement (RAGE). This analysis extends the algorithms and applications given for approaches based on matrix product states introduced in [Phys. Rev. A 79, 022317 (2009)] to other tensor-network states such as the tensor tree states (TTS) and projected entangled pair states (PEPS). We investigate the suitability of the bare TTS to describe ground states, showing that the description of certain graph states and condensed matter models improves. We investigate graph-enhanced tensor-network states, demonstrating that in some cases (disturbed graph states and for certain quantum circuits) the combination of weighted graph states with tensor tree states can greatly improve the accuracy of the description of ground states and time evolved states. We comment on delineating the boundary of the classically efficiently simulatable states of quantum many-body systems.
Combinatorial Multiobjective Optimization Using Genetic Algorithms
Crossley, William A.; Martin. Eric T.
2002-01-01
The research proposed in this document investigated multiobjective optimization approaches based upon the Genetic Algorithm (GA). Several versions of the GA have been adopted for multiobjective design, but, prior to this research, there had not been significant comparisons of the most popular strategies. The research effort first generalized the two-branch tournament genetic algorithm in to an N-branch genetic algorithm, then the N-branch GA was compared with a version of the popular Multi-Objective Genetic Algorithm (MOGA). Because the genetic algorithm is well suited to combinatorial (mixed discrete / continuous) optimization problems, the GA can be used in the conceptual phase of design to combine selection (discrete variable) and sizing (continuous variable) tasks. Using a multiobjective formulation for the design of a 50-passenger aircraft to meet the competing objectives of minimizing takeoff gross weight and minimizing trip time, the GA generated a range of tradeoff designs that illustrate which aircraft features change from a low-weight, slow trip-time aircraft design to a heavy-weight, short trip-time aircraft design. Given the objective formulation and analysis methods used, the results of this study identify where turboprop-powered aircraft and turbofan-powered aircraft become more desirable for the 50 seat passenger application. This aircraft design application also begins to suggest how a combinatorial multiobjective optimization technique could be used to assist in the design of morphing aircraft.
Sequential algorithm for fast clique percolation.
Kumpula, Jussi M; Kivelä, Mikko; Kaski, Kimmo; Saramäki, Jari
2008-08-01
In complex network research clique percolation, introduced by Palla, Derényi, and Vicsek [Nature (London) 435, 814 (2005)], is a deterministic community detection method which allows for overlapping communities and is purely based on local topological properties of a network. Here we present a sequential clique percolation algorithm (SCP) to do fast community detection in weighted and unweighted networks, for cliques of a chosen size. This method is based on sequentially inserting the constituent links to the network and simultaneously keeping track of the emerging community structure. Unlike existing algorithms, the SCP method allows for detecting k -clique communities at multiple weight thresholds in a single run, and can simultaneously produce a dendrogram representation of hierarchical community structure. In sparse weighted networks, the SCP algorithm can also be used for implementing the weighted clique percolation method recently introduced by Farkas [New J. Phys. 9, 180 (2007)]. The computational time of the SCP algorithm scales linearly with the number of k -cliques in the network. As an example, the method is applied to a product association network, revealing its nested community structure.
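Plain k-clique percolation (without the SCP paper's sequential insertion and weight thresholds) is easy to sketch for k = 3: find all triangles, then union triangles that share an edge; each union of node sets is a community. The toy graph is my own example:

```python
from itertools import combinations

def k3_communities(edges):
    """k=3 clique percolation: communities are unions of triangles that
    share an edge (i.e., a (k-1)-clique). Returns node sets, largest first."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    tris = [frozenset(t) for t in combinations(adj, 3)
            if all(b in adj[a] for a, b in combinations(t, 2))]
    parent = {t: t for t in tris}

    def find(t):  # union-find with path halving
        while parent[t] != t:
            parent[t] = parent[parent[t]]
            t = parent[t]
        return t

    for a, b in combinations(tris, 2):
        if len(a & b) == 2:           # the triangles share an edge: percolate
            parent[find(a)] = find(b)
    groups = {}
    for t in tris:
        groups.setdefault(find(t), set()).update(t)
    return sorted(groups.values(), key=len, reverse=True)

# Two triangle clusters joined only through node 4 (not through a shared edge)
graph = [(1, 2), (2, 3), (1, 3), (3, 4), (2, 4), (4, 5), (5, 6), (6, 4)]
print(k3_communities(graph))  # [{1, 2, 3, 4}, {4, 5, 6}] -- note they overlap
```

Note the two communities overlap in node 4, which is exactly the behaviour that distinguishes clique percolation from partition-based methods. This brute-force version enumerates all node triples, so unlike SCP it does not scale linearly with the number of k-cliques.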
Navigating Weighted Regions with Scattered Skinny Tetrahedra
Cheng, Siu-Wing
2015-11-26
We propose an algorithm for finding a (1 + ε)-approximate shortest path through a weighted 3D simplicial complex T. The weights are integers from the range [1,W] and the vertices have integral coordinates. Let N be the largest vertex coordinate magnitude, and let n be the number of tetrahedra in T. Let ρ be some arbitrary constant. Let κ be the size of the largest connected component of tetrahedra whose aspect ratios exceed ρ. There exists a constant C dependent on ρ but independent of T such that if κ ≤ (1/C) log log n + O(1), the running time of our algorithm is polynomial in n, 1/ε and log(NW). If κ = O(1), the running time reduces to O(nε(log(NW))).
Modeling Evolution of Weighted Clique Networks
Institute of Scientific and Technical Information of China (English)
杨旭华; 蒋峰岭; 陈胜勇; 王万良
2011-01-01
We propose a weighted clique network evolution model, which expands continuously by the addition of a new clique (maximal complete sub-graph) at each time step; the cliques in the network overlap with each other. The structural expansion of the weighted clique network is combined with the dynamical evolution of the edges' weights and the vertices' strengths. The model is based on weight-driven dynamics and a weight-enhancement mechanism combined with network growth. We study the network properties, including the distribution of vertices' strength and the distribution of edges' weight, and find that both distributions are scale-free. At the same time, we also find that the strength and degree of a vertex are linearly correlated during the growth of the network. On the basis of mean-field theory, we study the weighted network model and prove that both the vertices' strength and the edges' weight follow scale-free distributions. We also present an algorithm to forecast the network dynamics, which can be used to reckon the distributions and the corresponding scaling exponents. Furthermore, we observe that the mean-field theoretic results are consistent with the statistical data of the model, which indicates that the theoretical results are effective.
Healthy weight game!: Lose weight together
Lentelink, S.J.; Spil, Antonius A.M.; Broens, T.; Broens, T.H.F.; Hermens, Hermanus J.; Jones, Valerie M.
2013-01-01
Overweight and obesity pose a serious and increasing problem worldwide. Current treatment methods can result in weight loss in the short term but often fail in the longer term. Increasing motivation and thereby improving adherence can be a key factor in achieving the needed behavioral change. One
Computed laminography and reconstruction algorithm
Institute of Scientific and Technical Information of China (English)
QUE Jie-Min; YU Zhong-Qiang; YAN Yong-Lian; CAO Da-Quan; ZHAO Wei; TANG Xiao; SUN Cui-Li; WANG Yan-Fang; WEI Cun-Feng; SHI Rong-Jian; WEI Long
2012-01-01
Computed laminography (CL) is an alternative to computed tomography if large objects are to be inspected with high resolution; this is especially true for planar objects. In this paper, we set up a new scanning geometry for CL and study the algebraic reconstruction technique (ART) for CL imaging. We compare the results of ART with variant weighted functions by computer simulation with a digital phantom. The results prove that the ART algorithm is a good choice for the CL system.
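The core of ART is the Kaczmarz sweep: project the current estimate onto the hyperplane of one ray equation at a time. The toy system below stands in for the projection geometry; real CL reconstruction applies the same sweep to the (much larger) ray-sum equations, optionally with the weighting variants the paper compares:

```python
import numpy as np

def art(A, b, iters=200, relax=1.0):
    """Basic ART/Kaczmarz iteration for A x = b; relax is the relaxation
    factor (a uniform stand-in for the weighted variants in the paper)."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for i in range(A.shape[0]):  # one sweep over all rows ("rays")
            ai = A[i]
            x += relax * (b[i] - ai @ x) / (ai @ ai) * ai
    return x

# Consistent overdetermined toy system with known solution x = (2, 3)
A = np.array([[1.0, 1.0], [1.0, -1.0], [2.0, 1.0]])
x_true = np.array([2.0, 3.0])
x = art(A, A @ x_true)
print(np.round(x, 3))  # [2. 3.]
```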
Chen, Shyi-Ming; Hsin, Wen-Chyuan
2015-07-01
In this paper, we propose a new weighted fuzzy interpolative reasoning method for sparse fuzzy rule-based systems based on the slopes of fuzzy sets. We also propose a particle swarm optimization (PSO)-based weights-learning algorithm to automatically learn the optimal weights of the antecedent variables of fuzzy rules for weighted fuzzy interpolative reasoning. We apply the proposed weighted fuzzy interpolative reasoning method using the proposed PSO-based weights-learning algorithm to deal with the computer activity prediction problem, the multivariate regression problems, and the time series prediction problems. The experimental results show that the proposed weighted fuzzy interpolative reasoning method using the proposed PSO-based weights-learning algorithm outperforms the existing methods for dealing with the computer activity prediction problem, the multivariate regression problems, and the time series prediction problems.
Apriori Association Rule Algorithms using VMware Environment
Directory of Open Access Journals (Sweden)
R. Sumithra
2014-07-01
Full Text Available The aim of this study is to carry out research in distributed data mining using a cloud platform. Distributed data mining has become a vital component of big data analytics due to the development of network and distributed technology. The MapReduce Hadoop framework is a very familiar concept in big data analytics. The association rule algorithm is one of the popular data mining techniques, which finds the relationships between different transactions. A work has been executed using the weighted Apriori and hash-T Apriori algorithms for association rule mining on a MapReduce Hadoop framework using a retail data set of transactions. This study describes the above concepts, explains the experiment carried out with the retail data set on a VMware environment, and compares the performances of the weighted Apriori and hash-T Apriori algorithms in terms of memory and time.
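A simplified take on weighted Apriori can be sketched as level-wise frequent itemset mining in which the support of an itemset is scaled by the mean weight of its items. The weighting scheme below is illustrative, not necessarily the one used in the study above.

```python
from itertools import combinations

# Weighted support: fraction of transactions containing the itemset,
# scaled by the mean weight of the itemset's items (illustrative scheme).
def weighted_support(itemset, transactions, weights):
    count = sum(1 for t in transactions if itemset <= t)
    w = sum(weights[i] for i in itemset) / len(itemset)
    return w * count / len(transactions)

def frequent_itemsets(transactions, weights, min_sup):
    """Level-wise Apriori search over itemsets, pruning by weighted support."""
    items = sorted({i for t in transactions for i in t})
    frequent, k, current = {}, 1, [frozenset([i]) for i in items]
    while current:
        for c in current:
            s = weighted_support(c, transactions, weights)
            if s >= min_sup:
                frequent[c] = s
        # Candidate generation: join surviving k-itemsets into (k+1)-itemsets.
        level = [c for c in current if c in frequent]
        seen, next_level = set(), []
        for a, b in combinations(level, 2):
            u = a | b
            if len(u) == k + 1 and u not in seen:
                seen.add(u)
                next_level.append(u)
        current, k = next_level, k + 1
    return frequent

transactions = [frozenset(t) for t in [{"a", "b"}, {"a", "c"}, {"a", "b", "c"}]]
weights = {"a": 1.0, "b": 0.8, "c": 0.6}
fi = frequent_itemsets(transactions, weights, min_sup=0.5)
```

On this toy data, {a}, {b}, and {a, b} pass the threshold, while the low weight on "c" prunes it despite its raw frequency. The MapReduce version distributes the support counting step across mappers.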
Margolis, C Z
1983-02-04
The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared as to their clinical usefulness with decision analysis. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.
Negative weights make adversaries stronger
Hoyer, P; Spalek, R; Hoyer, Peter; Lee, Troy; Spalek, Robert
2006-01-01
The quantum adversary method is one of the most successful techniques for proving lower bounds on quantum query complexity. It gives optimal lower bounds for many problems, has application to classical complexity in formula size lower bounds, and is versatile with equivalent formulations in terms of weight schemes, eigenvalues, and Kolmogorov complexity. All these formulations are information-theoretic and rely on the principle that if an algorithm successfully computes a function then, in particular, it is able to distinguish between inputs which map to different values. We present a stronger version of the adversary method which goes beyond this principle to make explicit use of the existence of a measurement in a successful algorithm which gives the correct answer, with high probability. We show that this new method, which we call ADV+-, has all the advantages of the old: it is a lower bound on bounded-error quantum query complexity, its square is a lower bound on formula size, and it behaves well with res...
Predictors of weight maintenance
Pasman, W.J.; Saris, W.H.M.; Westerterp-Plantenga, M.S.
1999-01-01
Objective: To obtain predictors of weight maintenance after a weight-loss intervention. Research Methods and Procedures: An overall analysis of data from two long-term intervention studies [n = 67 women; age: 37.9±1.0 years; body weight (BW): 87.0±1.2 kg; body mass index: 32.1±0.5 kg·m-2; % body fat: 42.
Malliavin Weight Sampling: A Practical Guide
Directory of Open Access Journals (Sweden)
Patrick B. Warren
2013-12-01
Full Text Available Malliavin weight sampling (MWS is a stochastic calculus technique for computing the derivatives of averaged system properties with respect to parameters in stochastic simulations, without perturbing the system’s dynamics. It applies to systems in or out of equilibrium, in steady state or time-dependent situations, and has applications in the calculation of response coefficients, parameter sensitivities and Jacobian matrices for gradient-based parameter optimisation algorithms. The implementation of MWS has been described in the specific contexts of kinetic Monte Carlo and Brownian dynamics simulation algorithms. Here, we present a general theoretical framework for deriving the appropriate MWS update rule for any stochastic simulation algorithm. We also provide pedagogical information on its practical implementation.
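The update rule described in the guide can be sketched for overdamped Brownian dynamics: each Euler step adds a noise term xi with variance 2*D*dt, and the Malliavin weight q accumulates (df/dlam)*xi/(2*D), so that the parameter derivative of an average is estimated as the average of the observable times q. The Ornstein-Uhlenbeck drift below is a toy example chosen because the answer is known analytically; it is not from the paper.

```python
import random, math

# Malliavin weight sampling sketch for Brownian dynamics
#   x -> x + f(x, lam)*dt + xi,  xi ~ N(0, 2*D*dt),
# with weight update dq = (df/dlam) * xi / (2*D). Then d<A(x_t)>/dlam is
# estimated by <A(x_t) * q_t>, without perturbing the dynamics.
def mws_derivative(n_traj=5000, n_steps=100, dt=0.01, lam=0.0, D=1.0, seed=7):
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * D * dt)
    acc = 0.0
    for _ in range(n_traj):
        x, q = 0.0, 0.0
        for _ in range(n_steps):
            xi = rng.gauss(0.0, sigma)
            x += (-x + lam) * dt + xi   # toy OU drift f(x, lam) = -x + lam
            q += xi / (2.0 * D)         # df/dlam = 1
        acc += x * q                    # observable A(x) = x
    return acc / n_traj

# Analytically d<x_t>/dlam = 1 - exp(-t); at t = 1 this is about 0.63.
est = mws_derivative()
```

The same weight propagation applies to any parameter entering the drift; only the df/dlam factor changes.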
Institute of Scientific and Technical Information of China (English)
Tian-qi WU; Min YAO; Jian-hua YANG
2016-01-01
By adopting the distributed problem-solving strategy, swarm intelligence algorithms have been successfully applied to many optimization problems that are difficult to deal with using traditional methods. At present, there are many well-implemented algorithms, such as particle swarm optimization, the genetic algorithm, the artificial bee colony algorithm, and ant colony optimization. These algorithms have already shown favorable performance. However, as the target problems become increasingly complex, it is becoming gradually more difficult for these algorithms to meet human demands in terms of accuracy and time. Designing a new algorithm to seek better solutions for optimization problems is becoming increasingly essential. Dolphins have many noteworthy biological characteristics and living habits, such as echolocation, information exchange, cooperation, and division of labor. Combining these biological characteristics and living habits with swarm intelligence and bringing them into optimization problems, we propose a brand new algorithm named the 'dolphin swarm algorithm' in this paper. We also provide the definitions of the algorithm and specific descriptions of the four pivotal phases in the algorithm, which are the search phase, call phase, reception phase, and predation phase. Ten benchmark functions with different properties are tested using the dolphin swarm algorithm, particle swarm optimization, the genetic algorithm, and the artificial bee colony algorithm. The convergence rates and benchmark function results of these four algorithms are compared to verify the effectiveness of the dolphin swarm algorithm. The results show that in most cases the dolphin swarm algorithm performs better. The dolphin swarm algorithm possesses some notable features, such as first-slow-then-fast convergence, periodic convergence, local-optimum-freedom, and no specific demands on benchmark functions. Moreover, the dolphin swarm algorithm is particularly appropriate to optimization problems, with more
Institute of Scientific and Technical Information of China (English)
WU An-Cai; XU Xin-Jian; WU Zhi-Xi; WANG Ying-Hai
2007-01-01
We investigate the dynamics of random walks on weighted networks, assuming that the edge weight and the node strength are used as local information by a random walker. Two kinds of walks, the weight-dependent walk and the strength-dependent walk, are studied. Exact expressions for the stationary distribution and the average return time are derived and confirmed by computer simulations. The distribution of average return time and the mean-square displacement show that a weight-dependent walker can arrive at a new territory more easily than a strength-dependent one.
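The weight-dependent walk described above can be sketched directly: from node u, the walker steps to neighbour v with probability w(u, v) divided by the strength of u (the sum of u's edge weights). The tiny triangle graph is an illustrative example.

```python
import random

# Weight-dependent walk sketch: from node u, step to neighbour v with
# probability w(u, v) / strength(u), using only local information.
def step(u, adj, rng=random):
    neighbours, weights = zip(*adj[u].items())
    strength = sum(weights)
    r, acc = rng.random() * strength, 0.0
    for v, w in zip(neighbours, weights):
        acc += w
        if r <= acc:
            return v
    return neighbours[-1]

# Tiny weighted triangle graph: strengths are 3, 3 and 2, so in the
# stationary distribution nodes 0 and 1 are visited more often than node 2.
adj = {
    0: {1: 2.0, 2: 1.0},
    1: {0: 2.0, 2: 1.0},
    2: {0: 1.0, 1: 1.0},
}
random.seed(1)
visits = {0: 0, 1: 0, 2: 0}
u = 0
for _ in range(20000):
    u = step(u, adj)
    visits[u] += 1
```

For undirected weighted graphs, the stationary distribution of this walk is proportional to node strength, which matches the simulated visit counts.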
Estimation Normal Vector of Triangular Mesh Vertex by Angle and Centroid Weights and its Application
Directory of Open Access Journals (Sweden)
Yueping Chen
2013-04-01
Full Text Available To compute the vertex normals of triangular meshes more accurately, this paper presents an improved algorithm based on angle and centroid weights. Firstly, four representative algorithms are analyzed by comparing their weighting characteristics, such as angles, areas, and centroids, and the drawbacks of each algorithm are discussed. Following that, an improved algorithm is put forward based on angle and centroid weights. Finally, taking the deviation angle between the nominal normal vector and the estimated one as the error evaluation standard, the triangular mesh models of spheres, ellipsoids, paraboloids, and cylinders are used to analyze the performance of all these estimation algorithms. The machining and inspection operations of one mould part are conducted to verify the improved algorithm. Experimental results demonstrate that the algorithm is effective.
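One of the weighting schemes compared above, the angle weight, can be sketched as follows: each incident triangle's face normal contributes in proportion to the angle it subtends at the vertex. The centroid term of the improved algorithm is omitted here for brevity.

```python
import math

# Angle-weighted vertex normal sketch: each incident face normal is weighted
# by the angle the triangle subtends at the vertex.
def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
def cross(a, b): return (a[1]*b[2] - a[2]*b[1], a[2]*b[0] - a[0]*b[2], a[0]*b[1] - a[1]*b[0])
def dot(a, b): return sum(x * y for x, y in zip(a, b))
def norm(a):
    n = math.sqrt(dot(a, a))
    return (a[0] / n, a[1] / n, a[2] / n)

def vertex_normal(v, incident):
    """incident: list of (p, q) pairs, the other two corners of each triangle at v."""
    total = (0.0, 0.0, 0.0)
    for p, q in incident:
        e1, e2 = sub(p, v), sub(q, v)
        fn = norm(cross(e1, e2))                      # face normal
        angle = math.acos(max(-1.0, min(1.0, dot(norm(e1), norm(e2)))))
        total = tuple(t + angle * f for t, f in zip(total, fn))
    return norm(total)

# Two triangles in the z = 0 plane around the origin: the normal is +z.
n = vertex_normal((0, 0, 0), [((1, 0, 0), (0, 1, 0)), ((0, 1, 0), (-1, 0, 0))])
```

Angle weighting makes the estimate independent of how a smooth surface happens to be triangulated around the vertex, which pure area weighting does not guarantee.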
A New Algorithm for System of Integral Equations
Directory of Open Access Journals (Sweden)
Abdujabar Rasulov
2014-01-01
Full Text Available We develop a new algorithm to solve systems of integral equations. The new method does not require matrix weights, which considerably reduces the computational complexity. Using the new algorithm it is also possible to solve an initial boundary value problem for a system of parabolic equations. To verify its efficiency, the results of computational experiments are given.
Application of a Genetic Algorithm to Nearest Neighbour Classification
Simkin, S.; Verwaart, D.; Vrolijk, H.C.J.
2005-01-01
This paper describes the application of a genetic algorithm to nearest-neighbour based imputation of sample data into a census dataset. The genetic algorithm optimises the selection and weights of variables used for measuring distance. The results show that the measure of fit can be improved by
Geometric Algorithms for Private-Cache Chip Multiprocessors
DEFF Research Database (Denmark)
Ajwani, Deepak; Sitchinava, Nodari; Zeh, Norbert
2010-01-01
We study techniques for obtaining efficient algorithms for geometric problems on private-cache chip multiprocessors. We show how to obtain optimal algorithms for interval stabbing counting, 1-D range counting, weighted 2-D dominance counting, and for computing 3-D maxima, 2-D lower envelopes, and 2...
New focused crawling algorithm
Institute of Scientific and Technical Information of China (English)
Su Guiyang; Li Jianhua; Ma Yinghua; Li Shenghong; Song Juping
2005-01-01
Focused crawling is a new research approach for search engines. It restricts information retrieval and provides search services in a specific topic area. The focused crawling search algorithm is a key technique of a focused crawler and directly affects the search quality. This paper first introduces several traditional topic-specific crawling algorithms, then puts forward an inverse-link-based topic-specific crawling algorithm. Comparison experiments show that this algorithm performs well in recall, clearly better than the traditional Breadth-First and Shark-Search algorithms. The experiments also show that this algorithm achieves good precision.
Symplectic algebraic dynamics algorithm
Institute of Scientific and Technical Information of China (English)
2007-01-01
Based on the algebraic dynamics solution of ordinary differential equations and its integration, the symplectic algebraic dynamics algorithm sn is designed, which preserves the local symplectic geometric structure of a Hamiltonian system and possesses the same precision as the naive algebraic dynamics algorithm n. Computer experiments for the 4th order algorithms are made for five test models and the numerical results are compared with the conventional symplectic geometric algorithm, indicating that sn has higher precision, the algorithm-induced phase shift of the conventional symplectic geometric algorithm can be reduced, and the dynamical fidelity can be improved by one order of magnitude.
Adaptive cockroach swarm algorithm
Obagbuwa, Ibidun C.; Abidoye, Ademola P.
2017-07-01
An adaptive cockroach swarm optimization (ACSO) algorithm is proposed in this paper to strengthen the existing cockroach swarm optimization (CSO) algorithm. The ruthless component of the CSO algorithm is modified by employing a blend-crossover predator-prey evolution method, which helps the algorithm prevent any possible population collapse, maintain population diversity, and perform an adaptive search in each iteration. The performance of the proposed algorithm on 16 global optimization benchmark function problems was evaluated and compared with the existing CSO, cuckoo search, differential evolution, particle swarm optimization and artificial bee colony algorithms.
Decoherence in Search Algorithms
Abal, G; Marquezino, F L; Oliveira, A C; Portugal, R
2009-01-01
Recently several quantum search algorithms based on quantum walks were proposed. Those algorithms differ from Grover's algorithm in many aspects. The goal is to find a marked vertex in a graph faster than classical algorithms. Since the implementation of those new algorithms in quantum computers or in other quantum devices is error-prone, it is important to analyze their robustness under decoherence. In this work we analyze the impact of decoherence on quantum search algorithms implemented on two-dimensional grids and on hypercubes.
An energy efficient clustering routing algorithm for wireless sensor networks
Institute of Scientific and Technical Information of China (English)
LI Li; DONG Shu-song; WEN Xiang-ming
2006-01-01
This article proposes an energy efficient clustering routing (EECR) algorithm for wireless sensor networks. The algorithm divides a sensor network into a few clusters and selects a cluster head based on a weight value, which leads to more uniform energy dissipation among all sensor nodes. Simulations and results show that the algorithm can save overall energy consumption and extend the lifetime of the wireless sensor network.
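Weight-based cluster-head selection of the kind described above can be sketched as scoring each node by a weighted combination of normalized residual energy and node degree. The score formula and coefficients below are illustrative assumptions, not the paper's exact weight value.

```python
# Cluster-head selection sketch: score nodes by a weighted combination of
# residual energy and degree (coefficients are illustrative).
def select_cluster_head(nodes, w_energy=0.7, w_degree=0.3):
    """nodes: dict id -> (residual_energy, degree); returns the best-scored id."""
    max_e = max(e for e, _ in nodes.values()) or 1.0
    max_d = max(d for _, d in nodes.values()) or 1

    def score(nid):
        e, d = nodes[nid]
        return w_energy * e / max_e + w_degree * d / max_d

    return max(nodes, key=score)

# Node A has the most residual energy and wins despite a lower degree.
nodes = {"A": (0.9, 3), "B": (0.5, 5), "C": (0.8, 4)}
head = select_cluster_head(nodes)
```

Rotating the role by re-scoring each round spreads the energy cost of aggregation and long-range transmission across the cluster.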
Brain MR image segmentation improved algorithm based on probability
Liao, Hengxu; Liu, Gang; Guo, Xiantang
2017-08-01
The local weight voting algorithm is a mainstream segmentation algorithm. It takes full account of the influences of the image likelihood and the prior probabilities of labels on the segmentation results. But this method can still be improved, since its essence is to select the label with the maximum probability. If the probability of a label is 70%, that may be acceptable in mathematics, but in the actual segmentation it may be wrong. So we use the matrix completion algorithm as a supplement. When the probability of the former is larger, the result of the former algorithm is adopted; when the probability of the latter is larger, the result of the latter algorithm is adopted. This is equivalent to adding an automatic algorithm-selection switch that can theoretically ensure that the accuracy of the proposed algorithm is superior to the local weight voting algorithm. At the same time, we propose an improved matrix completion algorithm based on an enumeration method. In addition, this paper also uses a multi-parameter registration model to reduce the influence of registration on the segmentation. The experimental results show that the accuracy of the algorithm is better than that of common segmentation algorithms.
A Novel Register Allocation Algorithm for Testability
Institute of Scientific and Technical Information of China (English)
SUN Qiang; ZHOU Tao; LI Haijun
2007-01-01
In the course of high-level synthesis of integrated circuits, the hard-to-test structures caused by irrational scheduling and allocation reduce the testability of a circuit. In order to improve circuit testability, this paper proposes a weighted compatibility graph (WCG), which provides a weighted formula for the compatibility graph based on register allocation for testability, and uses an improved weighted compatibility clique partition algorithm to deal with this WCG. As a result, four rules for testability are considered simultaneously in the course of register allocation, so that the objective of improving design for testability is achieved. Tested on many benchmarks and compared with many other models, the register allocation algorithm proposed in this paper greatly improves circuit testability with little overhead on the final circuit area.
Sansone, Randy A; Sansone, Lori A
2014-07-01
Acute marijuana use is classically associated with snacking behavior (colloquially referred to as "the munchies"). In support of these acute appetite-enhancing effects, several authorities report that marijuana may increase body mass index in patients suffering from human immunodeficiency virus and cancer. However, for these medical conditions, while appetite may be stimulated, some studies indicate that weight gain is not always clinically meaningful. In addition, in a study of cancer patients in which weight gain did occur, it was less than the comparator drug (megestrol). However, data generally suggest that acute marijuana use stimulates appetite, and that marijuana use may stimulate appetite in low-weight individuals. As for large epidemiological studies in the general population, findings consistently indicate that users of marijuana tend to have lower body mass indices than nonusers. While paradoxical and somewhat perplexing, these findings may be explained by various study confounds, such as potential differences between acute versus chronic marijuana use; the tendency for marijuana use to be associated with other types of drug use; and/or the possible competition between food and drugs for the same reward sites in the brain. Likewise, perhaps the effects of marijuana are a function of initial weight status; i.e., maybe marijuana is a metabolic regulatory substance that increases body weight in low-weight individuals but not in normal-weight or overweight individuals. Only further research will clarify the complex relationships between marijuana and body weight.
DEFF Research Database (Denmark)
Xue, Bingtian; Larsen, Kim Guldstrand; Mardare, Radu Iulian
2015-01-01
We introduce Concurrent Weighted Logic (CWL), a multimodal logic for concurrent labeled weighted transition systems (LWSs). The synchronization of LWSs is described using dedicated functions that, in various concurrency paradigms, allow us to encode the compositionality of LWSs. To reflect these......-completeness results for this logic. To complete these proofs we involve advanced topological techniques from Model Theory....
DEFF Research Database (Denmark)
Holstein, Bjørn Evald; Due, Pernille; Brixval, Carina Sjöberg;
2017-01-01
day) communication with friends through cellphones, SMS messages, or Internet (1.66, 1.03-2.67). In the full population, overweight/obese weight status was associated with not perceiving best friend as a confidant (1.59, 1.11-2.28). No associations were found between weight status and number of close...
Flow enforcement algorithms for ATM networks
DEFF Research Database (Denmark)
Dittmann, Lars; Jacobsen, Søren B.; Moth, Klaus
1991-01-01
Four measurement algorithms for flow enforcement in asynchronous transfer mode (ATM) networks are presented: the leaky bucket, the rectangular sliding window, the triangular sliding window, and the exponentially weighted moving average. A comparison, based partly on teletraffic theory and partly on signal processing theory, is carried out. It is seen that the time constant involved increases with the increasing burstiness of the connection. It is suggested that the RMS measurement bandwidth be used to dimension linear algorithms for equal flow enforcement characteristics. Implementations are proposed on the block diagram level, and dimensioning examples are carried out when flow enforcing a renewal-type connection using the four algorithms. The corresponding hardware demands are estimated and compared.
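Two of the four measurement algorithms can be sketched compactly: a leaky bucket that admits a cell only if the bucket does not overflow, and an exponentially weighted moving average of a measured rate. The rate, burst, and smoothing parameter values are illustrative.

```python
# Leaky-bucket flow enforcement sketch: a cell conforms if adding it does
# not overflow the bucket, which drains at a constant rate.
class LeakyBucket:
    def __init__(self, rate, burst):
        self.rate, self.burst, self.level, self.t = rate, burst, 0.0, 0.0

    def conforms(self, t):
        """Return True if a cell arriving at time t conforms."""
        self.level = max(0.0, self.level - self.rate * (t - self.t))
        self.t = t
        if self.level + 1.0 <= self.burst:
            self.level += 1.0
            return True
        return False

def ewma(samples, alpha=0.1):
    """Exponentially weighted moving average of a sequence of measurements."""
    avg = samples[0]
    for x in samples[1:]:
        avg = alpha * x + (1 - alpha) * avg
    return avg

# A burst of four closely spaced cells: the fourth overflows the bucket,
# but after a long idle gap the bucket drains and cells conform again.
bucket = LeakyBucket(rate=1.0, burst=3.0)
decisions = [bucket.conforms(t) for t in [0.0, 0.1, 0.2, 0.3, 5.0]]
# decisions == [True, True, True, False, True]
```

The EWMA's smoothing parameter plays the role of the time constant discussed above: a smaller alpha means a longer effective measurement window, which the abstract notes must grow with the burstiness of the connection.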
Information criterion based fast PCA adaptive algorithm
Institute of Scientific and Technical Information of China (English)
Li Jiawen; Li Congxin
2007-01-01
The novel information criterion (NIC) algorithm can find the principal subspace quickly, but it is not an actual principal component analysis (PCA) algorithm and hence cannot find the orthonormal eigen-space corresponding to the principal components of the input vector. This defect limits its application in practice. By weighting the neural network's output of NIC, a modified novel information criterion (MNIC) algorithm is presented. MNIC extracts the principal components and the corresponding eigenvectors in a parallel online learning program, and overcomes NIC's defect. It is proved to have a single global optimum and a nonquadratic convergence rate, which is superior to conventional PCA online algorithms such as Oja and LMSER. The relationship among Oja, LMSER and MNIC is exhibited. Simulations show that MNIC converges to the optimum quickly. The validity of MNIC is proved.
Doubly Constrained Robust Blind Beamforming Algorithm
Directory of Open Access Journals (Sweden)
Xin Song
2013-01-01
Full Text Available We propose a doubly constrained robust least-squares constant modulus algorithm (LSCMA) to solve the problem of signal steering vector mismatches via the Bayesian method and worst-case performance optimization, based on the mismatches between the actual and presumed steering vectors. The weight vector is iteratively updated with a penalty for the worst-case signal steering vector by the partial Taylor-series expansion and the Lagrange multiplier method, in which the Lagrange multipliers can be optimally derived and incorporated at each step. A theoretical analysis of our proposed algorithm in terms of complexity cost, convergence performance, and SINR performance is presented in this paper. In contrast to the linearly constrained LSCMA, the proposed algorithm provides better robustness against signal steering vector mismatches, yields better signal capture performance, achieves greater array output SINR, and has a lower computational cost. The simulation results confirm the superiority of the proposed algorithm in beampattern control and output SINR enhancement.
Weight discrimination and bullying.
Puhl, Rebecca M; King, Kelly M
2013-04-01
Despite significant attention to the medical impacts of obesity, often ignored are the negative outcomes that obese children and adults experience as a result of stigma, bias, and discrimination. Obese individuals are frequently stigmatized because of their weight in many domains of daily life. Research spanning several decades has documented consistent weight bias and stigmatization in employment, health care, schools, the media, and interpersonal relationships. For overweight and obese youth, weight stigmatization translates into pervasive victimization, teasing, and bullying. Multiple adverse outcomes are associated with exposure to weight stigmatization, including depression, anxiety, low self-esteem, body dissatisfaction, suicidal ideation, poor academic performance, lower physical activity, maladaptive eating behaviors, and avoidance of health care. This review summarizes the nature and extent of weight stigmatization against overweight and obese individuals, as well as the resulting consequences that these experiences create for social, psychological, and physical health for children and adults who are targeted. Copyright © 2013 Elsevier Ltd. All rights reserved.
Software For Genetic Algorithms
Wang, Lui; Bayer, Steve E.
1992-01-01
SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.
Progressive geometric algorithms
Directory of Open Access Journals (Sweden)
Sander P.A. Alewijnse
2015-01-01
Full Text Available Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms for two geometric problems: computing the convex hull of a planar point set, and finding popular places in a set of trajectories.
Approximate iterative algorithms
Almudevar, Anthony Louis
2014-01-01
Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a
Borbely, Eva
2007-01-01
A quantum algorithm is a set of instructions for a quantum computer; however, unlike algorithms in classical computer science, their results cannot be guaranteed. A quantum system can undergo two types of operation, measurement and quantum state transformation, and transformations must be unitary (reversible). Most quantum algorithms involve a series of quantum state transformations followed by a measurement. Currently very few quantum algorithms are known and no general design methodology e...
Competing Sudakov Veto Algorithms
Kleiss, Ronald
2016-01-01
We present a way to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance and show that there are significantly faster alternatives to the commonly used algorithms.
Autonomous Star Tracker Algorithms
DEFF Research Database (Denmark)
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performances.
Generating Realistic Labelled, Weighted Random Graphs
Davis, Michael Charles; Liu, Weiru; Miller, Paul; Hunter, Ruth; Kee, Frank
2015-01-01
Generative algorithms for random graphs have yielded insights into the structure and evolution of real-world networks. Most networks exhibit a well-known set of properties, such as heavy-tailed degree distributions, clustering and community formation. Usually, random graph models consider only structural information, but many real-world networks also have labelled vertices and weighted edges. In this paper, we present a generative model for random graphs with discrete vertex labels and numeric edge weights. The weights are represented as a set of Beta Mixture Models (BMMs) with an arbitrary number of mixtures, which are learned from real-world networks. We propose a Bayesian Variational Inference (VI) approach, which yields an accurate estimation while keeping computation times tractable. We compare our approach to state-of-the-art random labelled graph generators and an earlier approach based on Gaussian Mixture Models (GMMs). Our results allow us to draw conclusions about the contribution of vertex labels a...
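Sampling a numeric edge weight from a Beta mixture, as described above, can be sketched with the standard library alone. The mixture weights and Beta parameters below are illustrative placeholders; in the paper they are learned from real-world networks via variational inference.

```python
import random

# Sketch of sampling an edge weight from a two-component Beta mixture.
# Each component is (mixture weight, alpha, beta); values are illustrative.
def sample_edge_weight(rng=random):
    mix = [(0.6, 2.0, 5.0), (0.4, 5.0, 2.0)]
    r, acc = rng.random(), 0.0
    for pi, a, b in mix:
        acc += pi
        if r <= acc:
            return rng.betavariate(a, b)
    return rng.betavariate(*mix[-1][1:])  # guard against rounding

random.seed(0)
weights = [sample_edge_weight() for _ in range(1000)]
```

Because each Beta component lives on (0, 1), the mixture naturally models normalized edge weights; multi-modal weight distributions observed in real networks are captured by adding components.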
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available techniques and is organized by algorithmic paradigm.
Optimal Mixing Evolutionary Algorithms
Thierens, D.; Bosman, P.A.N.; Krasnogor, N.
2011-01-01
A key search mechanism in Evolutionary Algorithms is the mixing or juxtaposing of partial solutions present in the parent solutions. In this paper we look at the efficiency of mixing in genetic algorithms (GAs) and estimation-of-distribution algorithms (EDAs). We compute the mixing probabilities of
Implementation of Parallel Algorithms
1991-09-30
Lecture Notes in Computer Science, Warwick, England, July 16-20... Lecture Notes in Computer Science, Springer-Verlag, Bangalore, India, December 1990. J. Reif, J. Canny, and A. Page, "An Exact Algorithm for Kinodynamic... Parallel Algorithms and its Impact on Computational Geometry, in Optimal Algorithms, H. Djidjev, editor, Springer-Verlag Lecture Notes in Computer Science
Semiclassical Shor's Algorithm
Giorda, P; Sen, S; Sen, S; Giorda, Paolo; Iorio, Alfredo; Sen, Samik; Sen, Siddhartha
2003-01-01
We propose a semiclassical version of Shor's quantum algorithm to factorize integer numbers, based on spin-1/2 SU(2) generalized coherent states. Surprisingly, we find numerical evidence that the algorithm's success probability is not too severely modified by our semiclassical approximation. This suggests that it is worth pursuing practical implementations of the algorithm on semiclassical devices.
Calibration with empirically weighted mean subset
DEFF Research Database (Denmark)
Øjelund, Henrik; Madsen, Henrik; Thyregod, Poul
2002-01-01
In this article a new calibration method called empirically weighted mean subset (EMS) is presented. The method is illustrated using spectral data. Using several near-infrared (NIR) benchmark data sets, EMS is compared to partial least-squares regression (PLS) and interval partial least-squares regression. The EMS solution is obtained by calculating the weighted mean of all coefficient vectors for subsets of the same size. The weighting is proportional to SS_gamma^(-omega), where SS_gamma is the residual sum of squares from a linear regression with subset gamma and omega is a weighting parameter estimated using cross-validation. This construction of the weighting implies that even if some coefficients become numerically small, none will become exactly zero. An efficient algorithm has been implemented in MATLAB to calculate the EMS solution and the source code has been made available on the Internet.
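The EMS weighting step can be sketched directly from its definition: given the coefficient vectors and residual sums of squares of the same-size subset regressions, the calibration vector is their weighted mean with weights proportional to SS_gamma^(-omega). The toy subset fits below are illustrative.

```python
# EMS weighting sketch: average same-size subset coefficient vectors with
# weights proportional to SS_gamma**(-omega), so better fits dominate.
def ems_coefficients(subset_coefs, subset_ss, omega):
    """subset_coefs: list of coefficient vectors; subset_ss: matching RSS values."""
    weights = [ss ** (-omega) for ss in subset_ss]
    total = sum(weights)
    n = len(subset_coefs[0])
    return [sum(w * c[j] for w, c in zip(weights, subset_coefs)) / total
            for j in range(n)]

# Two hypothetical subset fits: the one with the smaller RSS dominates,
# but the other's coefficient stays small rather than exactly zero.
coefs = ems_coefficients([[1.0, 0.0], [0.0, 1.0]], subset_ss=[0.1, 10.0], omega=1.0)
```

This illustrates the shrinkage property stated in the abstract: coefficients from poorly fitting subsets become numerically small but never exactly zero, unlike hard subset selection.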
A Table Based Algorithm for Minimum Directed Spanning Trees
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
As far as weighted digraphs are considered, an optimal directed spanning tree algorithm called the table based algorithm (TBA) is proposed in this paper, based on a table instead of the weighted digraph. The optimality is proved, and a numerical example is demonstrated.
Nature-inspired optimization algorithms
Yang, Xin-She
2014-01-01
Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, and parameter tuning.
A VLSI optimal constructive algorithm for classification problems
Energy Technology Data Exchange (ETDEWEB)
Beiu, V. [Los Alamos National Lab., NM (United States); Draghici, S.; Sethi, I.K. [Wayne State Univ., Detroit, MI (United States)
1997-10-01
If neural networks are to be used on a large scale, they have to be implemented in hardware. However, the cost of the hardware implementation is critically sensitive to factors like the precision used for the weights, the total number of bits of information and the maximum fan-in used in the network. This paper presents a version of the Constraint Based Decomposition training algorithm which is able to produce networks using limited precision integer weights and units with limited fan-in. The algorithm is tested on the 2-spiral problem and the results are compared with other existing algorithms.
ORDERED WEIGHTED DISTANCE MEASURE
Institute of Scientific and Technical Information of China (English)
Zeshui XU; Jian CHEN
2008-01-01
The aim of this paper is to develop an ordered weighted distance (OWD) measure, which is a generalization of some widely used distance measures, including the normalized Hamming distance, the normalized Euclidean distance, the normalized geometric distance, the max distance, the median distance and the min distance. Moreover, the ordered weighted averaging operator, the generalized ordered weighted aggregation operator, the ordered weighted geometric operator, the averaging operator, the geometric mean operator, the ordered weighted square root operator, the square root operator, the max operator, the median operator and the min operator are also special cases of the OWD measure. Some methods depending on the input arguments are given to determine the weights associated with the OWD measure. The prominent characteristic of the OWD measure is that it can relieve (or intensify) the influence of unduly large or unduly small deviations on the aggregation results by assigning them low (or high) weights. This desirable characteristic makes the OWD measure well suited to many practical fields, including group decision making, medical diagnosis, data mining, and pattern recognition. Finally, based on the OWD measure, we develop a group decision making approach and illustrate it with a numerical example.
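A minimal sketch of the OWD measure as described above: the absolute deviations are sorted in decreasing order and aggregated with a user-chosen weight vector (the paper's weight-determination methods are not reproduced here; the weight vectors below are illustrative):

```python
def owd(a, b, weights, lam=1.0):
    """Ordered weighted distance (OWD) sketch.

    Sort the absolute deviations |a_i - b_i| in decreasing order, then
    aggregate them with the given weight vector; lam = 1 covers the
    Hamming-type cases and lam = 2 the Euclidean-type ones.
    """
    devs = sorted((abs(x - y) for x, y in zip(a, b)), reverse=True)
    return sum(w * d ** lam for w, d in zip(weights, devs)) ** (1.0 / lam)
```

With equal weights 1/n it reduces to the normalized Hamming distance; putting all weight on the first (largest) or last (smallest) deviation recovers the max and min distances.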
Fast image interpolation using directional inverse distance weighting for real-time applications
Jing, Minggang; Wu, Jitao
2013-01-01
A novel simple image interpolation method with adaptive weights, which is motivated by the inverse-distance weighting (IDW) method, is proposed. A new distance is defined to implement the IDW-based algorithm. The weights corresponding to four diagonal pixels are computed in their own diagonal directions. And in order to make the algorithm robust to fine structures and noises, the weights are calculated in local windows. The new approach can be implemented efficiently and gives weights adaptively according to the local image structures. Experimental results demonstrate that the proposed method can generate visually pleasant interpolated images with high peak signal-to-noise ratios (PSNR) values in real time.
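The directional weighting idea can be illustrated with a simplified sketch. This is an illustration of the general principle only, not the authors' exact distance definition or their windowed computation: each diagonal pair is weighted by the inverse of the intensity variation along its own diagonal, so the smoother direction dominates the estimate:

```python
import numpy as np

def diagonal_idw_pixel(img, r, c, eps=1e-6):
    """Estimate pixel (r, c) from its four diagonal neighbors using
    direction-dependent inverse-distance weights (simplified sketch)."""
    nw, ne = img[r - 1, c - 1], img[r - 1, c + 1]
    sw, se = img[r + 1, c - 1], img[r + 1, c + 1]
    # directional "distances": intensity variation across each diagonal
    d1 = abs(float(nw) - float(se)) + eps    # NW-SE diagonal
    d2 = abs(float(ne) - float(sw)) + eps    # NE-SW diagonal
    w1, w2 = 1.0 / d1, 1.0 / d2              # inverse-distance weights
    est1 = (nw + se) / 2.0
    est2 = (ne + sw) / 2.0
    return (w1 * est1 + w2 * est2) / (w1 + w2)
```

On a diagonal edge the near-constant diagonal receives almost all the weight, which is the adaptivity the abstract describes.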
Predicting Students’ Performance using Modified ID3 Algorithm
Directory of Open Access Journals (Sweden)
Ramanathan L
2013-06-01
Full Text Available The ability to predict the performance of students is very crucial in our present education system. We can use data mining concepts for this purpose. The ID3 algorithm is one of the best-known algorithms used today to generate decision trees. But this algorithm has a shortcoming: it is inclined toward attributes with many values. So, this research aims to overcome this shortcoming of the algorithm by using gain ratio (instead of information gain) as well as by giving weights to each attribute at every decision-making point. Several other algorithms, such as the J48 and Naive Bayes classification algorithms, are also applied on the dataset. The WEKA tool was used for the analysis of the J48 and Naive Bayes algorithms. The results are compared and presented. The dataset used in our study is taken from the School of Computing Sciences and Engineering (SCSE), VIT University.
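The gain-ratio modification can be made concrete with a short sketch. Gain ratio divides the information gain of an attribute by its split information, which penalizes many-valued attributes such as student IDs (the toy data below is hypothetical):

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label list, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def gain_ratio(values, labels):
    """Gain ratio = information gain / split information.

    Split information grows with the number of attribute values, so an
    ID-like attribute that splits every record apart is penalized.
    """
    n = len(labels)
    info_gain = entropy(labels)
    split_info = 0.0
    for v in set(values):
        subset = [l for x, l in zip(values, labels) if x == v]
        frac = len(subset) / n
        info_gain -= frac * entropy(subset)
        split_info -= frac * math.log2(frac)
    return info_gain / split_info if split_info > 0 else 0.0
```

An all-distinct attribute and a genuinely predictive binary attribute both have maximal information gain, but gain ratio ranks the binary attribute higher, which is exactly the bias correction the abstract describes.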
Three penalized EM-type algorithms for PET image reconstruction.
Teng, Yueyang; Zhang, Tie
2012-06-01
Based on Bayes theory, Green introduced the maximum a posteriori (MAP) algorithm to obtain a smoothed reconstruction for positron emission tomography. This algorithm is flexible and convenient for most penalties, but it is hard to guarantee its convergence. Toward the same goal, Fessler penalized a weighted least squares (WLS) estimator with a quadratic penalty and then solved it with the successive over-relaxation (SOR) algorithm; however, that algorithm was time-consuming and difficult to parallelize. Anderson proposed another WLS estimator for faster convergence, for which few regularization methods have been studied. For the three regularized estimators above, we develop three new expectation maximization (EM) type algorithms to solve them. Unlike MAP and SOR, the proposed algorithms yield update rules by minimizing auxiliary functions constructed on the previous iterations, which ensures that the cost functions decrease monotonically. Experimental results demonstrated the robustness and effectiveness of the proposed algorithms.
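As background for the penalized EM-type updates, a sketch of the classic unpenalized MLEM iteration (the baseline these algorithms build on, not the paper's three penalized variants) may help; `A` is a hypothetical system matrix and `y` the measured projections:

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """Classic (unpenalized) MLEM update for emission tomography:

        x_j <- x_j / sum_i A_ij * sum_i A_ij * y_i / (A x)_i

    Each iteration monotonically increases the Poisson likelihood.
    """
    n = A.shape[1]
    x = np.ones(n)                        # uniform positive start image
    sens = A.sum(axis=0)                  # sensitivity image, sum_i A_ij
    for _ in range(n_iter):
        proj = A @ x                      # forward projection
        ratio = y / np.maximum(proj, 1e-12)
        x = x / sens * (A.T @ ratio)      # multiplicative EM update
    return x
```

The penalized versions modify this multiplicative update through an auxiliary function so that the penalized cost still decreases monotonically.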
Weighted learning of bidirectional associative memories by global minimization.
Wang, T; Zhuang, X; Xing, X
1992-01-01
A weighted learning algorithm for bidirectional associative memories (BAMs) by means of global minimization, where each desired pattern is weighted, is described. According to the cost function that measures the goodness of the BAM, the learning algorithm is formulated as a global minimization problem and solved by a gradient descent rule. The learning approach guarantees not only that each desired pattern is stored as a stable state, but also that the basin of attraction is constructed as large as possible around each desired pattern. The existence of the weights, the asymptotic stability of each desired pattern and its basin of attraction, and the convergence of the proposed learning algorithm are investigated in an analytic way. A large number of computer experiments are reported to demonstrate the efficiency of the learning rule.
Institute of Scientific and Technical Information of China (English)
周清; 王奉伟
2015-01-01
Article [1] claimed that the inverse distance and angle weighted interpolation method can reduce or even eliminate the influence of uneven distribution and uneven density of the known sample points over the full circle of directions around an interpolation point, ensuring that the interpolated value stays close to the near points, and that it could be widely applied in building DEMs and in image processing. Through theoretical analysis and worked examples, this paper shows that the interpolation results are good only when the point distribution exactly matches the cases in [1]; otherwise, the method violates the First Law of Geography proposed by the American geographer W. R. Tobler, giving smaller weights to points near the interpolation point and larger weights to distant ones. The method is therefore not universally applicable, and its theory needs further refinement.
Algorithms for Quantum Computers
Smith, Jamie
2010-01-01
This paper surveys the field of quantum computer algorithms. It gives a taste of both the breadth and the depth of the known algorithms for quantum computers, focusing on some of the more recent results. It begins with a brief review of quantum Fourier transform based algorithms, followed by quantum searching and some of its early generalizations. It continues with a more in-depth description of two more recent developments: algorithms developed in the quantum walk paradigm, followed by tensor network evaluation algorithms (which include approximating the Tutte polynomial).
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms, such as linear arrays, mesh-connected computers, cube-connected computers. Another example where algorithm can be applied is on the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
Directory of Open Access Journals (Sweden)
Hyo Seon Park
2014-01-01
Full Text Available Since genetic algorithm-based optimization methods are computationally expensive for practical use in the field of structural optimization, a resizing technique-based hybrid genetic algorithm for the drift design of multistory steel frame buildings is proposed to increase the convergence speed of genetic algorithms. To reduce the number of structural analyses required for the convergence, a genetic algorithm is combined with a resizing technique that is an efficient optimal technique to control the drift of buildings without the repetitive structural analysis. The resizing technique-based hybrid genetic algorithm proposed in this paper is applied to the minimum weight design of three steel frame buildings. To evaluate the performance of the algorithm, optimum weights, computational times, and generation numbers from the proposed algorithm are compared with those from a genetic algorithm. Based on the comparisons, it is concluded that the hybrid genetic algorithm shows clear improvements in convergence properties.
Study on the Hungarian algorithm for the maximum likelihood data association problem
Institute of Scientific and Technical Information of China (English)
Wang Jianguo; He Peikun; Cao Wei
2007-01-01
A specialized Hungarian algorithm was developed here for the maximum likelihood data association problem, with two implementation versions due to the presence of false alarms and missed detections. The maximum likelihood data association problem is formulated as a bipartite weighted matching problem. Its duality and the optimality conditions are given. The Hungarian algorithm with its computational steps, data structure and computational complexity is presented. The two implementation versions, the Hungarian forest (HF) algorithm and the Hungarian tree (HT) algorithm, and their combination with the naïve auction initialization are discussed. The computational results show that the HT algorithm is slightly faster than the HF algorithm and that both are superior to the classic Munkres algorithm.
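To make the underlying bipartite weighted matching problem concrete, here is a brute-force solver, shown for illustration only: the Hungarian algorithm solves the same problem in O(n^3), and the paper's HF/HT variants are specializations not reproduced here. Rows stand for tracks, columns for measurements, and entries for hypothetical association costs (e.g., negative log-likelihoods):

```python
from itertools import permutations

def best_assignment(cost):
    """Exhaustive solution of the n-by-n bipartite weighted matching
    problem: try every permutation and keep the one of minimum total
    cost. Exponential, so usable only for tiny illustrative instances.
    """
    n = len(cost)
    best = min(permutations(range(n)),
               key=lambda p: sum(cost[i][p[i]] for i in range(n)))
    return list(best), sum(cost[i][best[i]] for i in range(n))
```

A polynomial-time algorithm such as the Hungarian method returns the same optimal assignment without enumerating all n! permutations.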
Group Leaders Optimization Algorithm
Daskin, Anmer
2010-01-01
The complexity of global optimization algorithms makes their implementation difficult and leads them to require more computer resources for the optimization process. The ability to explore the whole solution space without increasing the complexity of an algorithm is of great importance, not only to obtain reliable results but also to make the implementation of these algorithms more convenient for higher-dimensional and complex real-world problems in science and engineering. In this paper, we present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique, which is designed into a group architecture similar to that of Cooperative Coevolutionary Algorithms. We present the implementation method and experimental results for single- and multidimensional optimization test problems and a scientific real-world problem, the energies and geometric structures of Lennard-Jones clusters.
Entropy Message Passing Algorithm
Ilic, Velimir M; Branimir, Todorovic T
2009-01-01
Message passing over a factor graph can be considered a generalization of many well-known algorithms for the efficient marginalization of a multivariate function. A specific instance of the algorithm is obtained by choosing an appropriate commutative semiring for the range of the function to be marginalized. Examples are the Viterbi algorithm, obtained on the max-product semiring, and the forward-backward algorithm, obtained on the sum-product semiring. In this paper, the Entropy Message Passing algorithm (EMP) is developed. It operates over the entropy semiring, previously introduced in automata theory. It is shown how EMP extends the use of message passing over factor graphs to probabilistic model algorithms such as the Expectation Maximization algorithm, gradient methods and the computation of model entropy, unifying the work of different authors.
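The semiring parameterization can be illustrated with a generic forward recursion on a small chain factor graph: swapping the `add` operation switches between the sum-product (forward) and max-product (Viterbi) instances of the same code. The two-state model below is a toy example, not from the paper:

```python
def forward(init, trans, emit, add, mul):
    """Semiring-generic forward recursion on a chain.

    `add` and `mul` are the semiring operations: (sum, *) yields the
    total probability of all state paths; (max, *) yields the Viterbi
    score of the best path.
    """
    n_states = len(init)
    alpha = [mul(init[s], emit[0][s]) for s in range(n_states)]
    for t in range(1, len(emit)):
        alpha = [
            mul(add(*[mul(alpha[sp], trans[sp][s]) for sp in range(n_states)]),
                emit[t][s])
            for s in range(n_states)
        ]
    return add(*alpha)
```

The entropy semiring plugs into the same recursion with pairs (probability, probability-times-log-probability) as values, which is how EMP computes model entropy by message passing.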
Algorithms for reconstruction of chromosomal structures.
Lyubetsky, Vassily; Gershgorin, Roman; Seliverstov, Alexander; Gorbunov, Konstantin
2016-01-19
One of the main aims of phylogenomics is the reconstruction of objects defined in the leaves along the whole phylogenetic tree to minimize the specified functional, which may also include the phylogenetic tree generation. Such objects can include nucleotide and amino acid sequences, chromosomal structures, etc. The structures can have any set of linear and circular chromosomes, variable gene composition and include any number of paralogs, as well as any weights of individual evolutionary operations to transform a chromosome structure. Many heuristic algorithms were proposed for this purpose, but there are just a few exact algorithms with low (linear, cubic or similar) polynomial computational complexity among them to our knowledge. The algorithms naturally start from the calculation of both the distance between two structures and the shortest sequence of operations transforming one structure into another. Such calculation per se is an NP-hard problem. A general model of chromosomal structure rearrangements is considered. Exact algorithms with almost linear or cubic polynomial complexities have been developed to solve the problems for the case of any chromosomal structure but with certain limitations on operation weights. The computer programs are tested on biological data for the problem of mitochondrial or plastid chromosomal structure reconstruction. To our knowledge, no computer programs are available for this model. Exactness of the proposed algorithms and such low polynomial complexities were proved. The reconstructed evolutionary trees of mitochondrial and plastid chromosomal structures as well as the ancestral states of the structures appear to be reasonable.
Target-tracking algorithm for omnidirectional vision
Cai, Chengtao; Weng, Xiangyu; Fan, Bing; Zhu, Qidan
2017-05-01
Omnidirectional vision with the advantage of a large field-of-view overcomes the problem that a target is easily lost due to the narrow sight of perspective vision. We improve a target-tracking algorithm based on discriminative tracking features in several aspects and propose a target-tracking algorithm for an omnidirectional vision system. (1) An elliptical target window expression model is presented to represent the target's outline, which can adapt to the deformation of an object and reduce background interference. (2) The background-weighted linear RGB histogram target feature is introduced, which decreases the weight of the background feature. (3) The Bhattacharyya coefficients-based feature identification method is employed, which reduces the computation time of the tracking algorithm. (4) An adaptive target scale and orientation measurement method is applied to adapt to severe deformations of the target's outline. (5) A model update strategy is put forward, which is based on similarity measurements to achieve an effective and accurate model update. The experimental results show the proposed algorithm can achieve better performance than the state-of-the-art algorithms when using omnidirectional vision to perform long-term target-tracking tasks.
An extended EM algorithm for subspace clustering
Institute of Scientific and Technical Information of China (English)
Lifei CHEN; Qingshan JIANG
2008-01-01
Clustering high dimensional data has become a challenge in data mining due to the curse of dimensionality. To solve this problem, subspace clustering has been defined as an extension of traditional clustering that seeks to find clusters in subspaces spanned by different combinations of dimensions within a dataset. This paper presents a new subspace clustering algorithm that calculates the local feature weights automatically in an EM-based clustering process. In the algorithm, the features are locally weighted by using a new unsupervised weighting method, as a means to minimize a proposed clustering criterion that takes into account both the average intra-cluster compactness and the average inter-cluster separation for subspace clustering. For the purposes of capturing accurate subspace information, an additional outlier detection process is presented to identify the possible local outliers of subspace clusters, and is embedded between the E-step and M-step of the algorithm. The method has been evaluated in clustering real-world gene expression data and high dimensional artificial data with outliers, and the experimental results have shown its effectiveness.
Englberger, L.
1999-01-01
A programme of weight loss competitions and associated activities in Tonga, intended to combat obesity and the noncommunicable diseases linked to it, has popular support and the potential to effect significant improvements in health. PMID:10063662
Menichetti, Giulia; Panzarasa, Pietro; Mondragón, Raúl J; Bianconi, Ginestra
2013-01-01
One of the most important challenges in network science is to quantify the information encoded in complex network structures. Disentangling randomness from organizational principles is even more demanding when networks have a multiplex nature. Multiplex networks are multilayer systems of $N$ nodes that can be linked in multiple interacting and co-evolving layers. In these networks, relevant information might not be captured if the single layers were analyzed separately. Here we demonstrate that such partial analysis of layers fails to capture significant correlations between weights and topology of complex multiplex networks. To this end, we study two weighted multiplex co-authorship and citation networks involving the authors included in the American Physical Society. We show that in these networks weights are strongly correlated with multiplex structure, and provide empirical evidence in favor of the advantage of studying weighted measures of multiplex networks, such as multistrength and the inverse multipa...
Explicit inverse distance weighting mesh motion for coupled problems
Witteveen, J.A.S.; Bijl, H.
2009-01-01
An explicit mesh motion algorithm based on inverse distance weighting interpolation is presented. The explicit formulation leads to a fast mesh motion algorithm and an easy implementation. In addition, the proposed point-by-point method is robust and flexible in case of large deformations, hanging nodes, and parallelization. Mesh quality results and CPU time comparisons are presented for triangular and hexahedral unstructured meshes in an airfoil flutter fluid-structure interaction problem.
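A minimal sketch of the point-by-point explicit IDW mesh motion described above: each interior node moves by the inverse-distance-weighted average of the prescribed boundary displacements, so no system of equations has to be solved. A uniform weighting exponent is assumed, and the paper's mesh-quality refinements are omitted:

```python
import numpy as np

def idw_mesh_motion(boundary_pts, boundary_disp, interior_pts, power=2.0):
    """Explicit inverse-distance-weighting mesh motion (sketch).

    Each interior node's displacement is the IDW average of the known
    boundary displacements; every node is treated independently, which
    is what makes the scheme explicit and easy to parallelize.
    """
    moved = []
    for p in interior_pts:
        d = np.linalg.norm(boundary_pts - p, axis=1)
        w = 1.0 / np.maximum(d, 1e-12) ** power   # inverse-distance weights
        w /= w.sum()                              # normalize to sum to 1
        moved.append(p + w @ boundary_disp)       # weighted displacement
    return np.array(moved)
```

Because the weights sum to one, a rigid translation of the boundary translates every interior node identically, preserving mesh quality in that limit.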
Vokřínek, Lukáš
2012-01-01
Let V be a cofibrantly generated closed symmetric monoidal model category and M a model V-category. We say that a weighted colimit W*D of a diagram D weighted by W is a homotopy weighted colimit if the diagram D is pointwise cofibrant and the weight W is cofibrant in the projective model structure on [C^op,V]. We then proceed to describe such homotopy weighted colimits through homotopy tensors and ordinary (conical) homotopy colimits. This is a homotopy version of the well known isomorphism W*D=\\int^C(W\\tensor D). After proving this homotopy decomposition in general we study in some detail a few special cases. For simplicial sets tensors may be replaced up to weak equivalence by conical homotopy colimits and thus the weighted homotopy colimits have no added value. The situation is completely different for model dg-categories where the desuspension cannot be constructed from conical homotopy colimits. In the last section we characterize those V-functors inducing a Quillen equivalence on the enriched presheaf c...
Subband Affine Projection Algorithm for Acoustic Echo Cancellation System
Directory of Open Access Journals (Sweden)
Choi Hun
2007-01-01
Full Text Available We present a new subband affine projection (SAP) algorithm for adaptive acoustic echo cancellation with long echo path delay. Generally, the acoustic echo canceller suffers from the long echo path and large computational complexity. To solve this problem, the proposed algorithm combines the merits of the affine projection (AP) algorithm and subband filtering. The convergence speed of the proposed algorithm is improved by the signal-decorrelating property of the orthogonal subband filtering and by updating the weights with the prewhitened input signal of the AP algorithm. Moreover, by applying the polyphase decomposition, the noble identity, and critical decimation to the subband adaptive filter, the sufficiently decomposed SAP updates the weights of the adaptive subfilters without a matrix inversion. Therefore, the computational complexity of the proposed method is considerably reduced. In the SAP, the derived weight updating formula for the subband adaptive filter has a form as simple as that of the normalized least-mean-square (NLMS) algorithm. The efficiency of the proposed algorithm for colored signals and speech signals was evaluated experimentally.
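For reference, the fullband NLMS baseline whose simple weight-update form the SAP is compared against can be sketched as follows. This is a toy system-identification setup with a hypothetical short echo path, not the subband structure itself:

```python
import numpy as np

def nlms_echo_cancel(x, d, order=16, mu=0.5, eps=1e-8):
    """Fullband NLMS adaptive filter (sketch).

    At each sample: y = w.x_vec, e = d - y, and the normalized update
        w <- w + mu * e * x_vec / (||x_vec||^2 + eps)
    Returns the final weights and the error (echo-cancelled) signal.
    """
    w = np.zeros(order)
    e = np.zeros(len(x))
    buf = np.zeros(order)                 # most recent inputs, newest first
    for n in range(len(x)):
        buf = np.roll(buf, 1)
        buf[0] = x[n]
        e[n] = d[n] - w @ buf
        w += mu * e[n] * buf / (buf @ buf + eps)
    return w, e
```

The SAP keeps this per-subfilter update equally simple while the subband decomposition decorrelates the input, which is where the convergence-speed gain comes from.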
Exploring the Constrained Maximum Edge-weight Connected Graph Problem
Institute of Scientific and Technical Information of China (English)
Zhen-ping Li; Shi-hua Zhang; Xiang-Sun Zhang; Luo-nan Chen
2009-01-01
Given an edge weighted graph, the maximum edge-weight connected graph (MECG) is a connected subgraph with a given number of edges and the maximal weight sum. Here we study a special case, i.e. the Constrained Maximum Edge-Weight Connected Graph problem (CMECG), which is an MECG whose candidate subgraphs must include a given set of k edges, also called the k-CMECG. We formulate the k-CMECG into an integer linear programming model based on the network flow problem. The k-CMECG is proved to be NP-hard. For the special case 1-CMECG, we propose an exact algorithm and a heuristic algorithm respectively. We also propose a heuristic algorithm for the k-CMECG problem. Some simulations have been done to analyze the quality of these algorithms. Moreover, we show that the algorithm for the 1-CMECG problem can lead to the solution of the general MECG problem.
Color-to-grayscale conversion through weighted multiresolution channel fusion
Wu, Tirui; Toet, Alexander
2014-07-01
We present a color-to-gray conversion algorithm that retains both the overall appearance and the discriminability of details of the input color image. The algorithm employs a weighted pyramid image fusion scheme to blend the R, G, and B color channels of the input image into a single grayscale image. The use of simple visual quality metrics as weights in the fusion scheme serves to retain visual contrast from each of the input color channels. We demonstrate the effectiveness of the method by qualitative and quantitative comparison with several state-of-the-art methods.
Scale transform algorithm used in FMCW SAR data processing
Institute of Scientific and Technical Information of China (English)
Jiang Zhihong; Kan Huangfu; Wan Jianwei
2007-01-01
The frequency-modulated continuous-wave (FMCW) synthetic aperture radar (SAR) is a lightweight, cost-effective, high-resolution imaging radar, which is suitable for a small flight platform. The signal model is derived for FMCW SAR used in unmanned aerial vehicle (UAV) reconnaissance and remote sensing. An appropriate algorithm is proposed. The algorithm performs the range cell migration correction (RCMC) for continuous nonchirped raw data using the energy invariance of the scaling of a signal in the scale domain. The azimuth processing is based on the step transform without a geometric resampling operation. The complete derivation of the algorithm is presented. The algorithm performance is shown by simulation results.
A multicast dynamic wavelength assignment algorithm based on matching degree
Institute of Scientific and Technical Information of China (English)
WU Qi-wu; ZHOU Xian-wei; WANG Jian-ping; YIN Zhi-hong; ZHANG Long
2009-01-01
The wavelength assignment with multiple multicast requests in a fixed-routing WDM network is studied. A new multicast dynamic wavelength assignment algorithm is presented based on matching degree. First, the wavelength matching degree between available wavelengths and multicast routing trees is introduced into the algorithm. Then, the wavelength assignment is translated into a maximum weight matching in a bipartite graph, and this matching problem is solved by using an extended Kuhn-Munkres algorithm. The simulation results prove that the overall optimal wavelength assignment scheme is obtained in polynomial time. At the same time, the proposed algorithm can reduce the connection blocking probability and improve the system resource utilization.
Heuristic Reduction Algorithm Based on Pairwise Positive Region
Institute of Scientific and Technical Information of China (English)
QI Li; LIU Yu-shu
2007-01-01
To guarantee the optimal reduct set, a heuristic reduction algorithm is proposed, which considers the distinguishing information between the members of each pair decision classes. Firstly the pairwise positive region is defined, based on which the pairwise significance measure is calculated between the members of each pair classes. Finally the weighted pairwise significance of attribute is used as the attribute reduction criterion, which indicates the necessity of attributes very well. By introducing the noise tolerance factor, the new algorithm can tolerate noise to some extent. Experimental results show the advantages of our novel heuristic reduction algorithm over the traditional attribute dependency based algorithm.
Inhomogeneous phase shifting: an algorithm for nonconstant phase displacements
Energy Technology Data Exchange (ETDEWEB)
Tellez-Quinones, Alejandro; Malacara-Doblado, Daniel
2010-11-10
In this work, we have developed an algorithm different from the classical ones in phase-shifting interferometry. Those algorithms typically use constant or homogeneous phase displacements, and they can be quite accurate and insensitive to detuning when appropriate weight factors are taken in the formula used to recover the wrapped phase. However, such algorithms have not been considered with variable or inhomogeneous displacements. We have generalized these formulas and obtained expressions for an implementation with variable displacements, as well as ways to obtain algorithms partially insensitive to these arbitrary error shifts.
Image segmentation by using the localized subspace iteration algorithm
Institute of Scientific and Technical Information of China (English)
2008-01-01
An image segmentation algorithm called "segmentation based on the localized subspace iterations" (SLSI) is proposed in this paper. The basic idea is to combine the strategies in the Ncut algorithm by Shi and Malik in 2000 and the LSI by E, Li and Lu in 2007. The LSI is applied to solve an eigenvalue problem associated with the affinity matrix of an image, which makes the overall algorithm linearly scaled. The choices of the partition number, the supports and weight functions in SLSI are discussed. Numerical experiments for real images show the applicability of the algorithm.
A New Class of Hybrid Particle Swarm Optimization Algorithm
Institute of Scientific and Technical Information of China (English)
Da-Qing Guo; Yong-Jin Zhao; Hui Xiong; Xiao Li
2007-01-01
A new class of hybrid particle swarm optimization (PSO) algorithm is developed for solving the premature convergence that occurs when some particles in standard PSO fall into stagnation. In this algorithm, the linearly decreasing inertia weight technique (LDIW) and the mutative-scale chaos optimization algorithm (MSCOA) are combined with standard PSO; they are used to balance the global and local exploration abilities and to enhance the local searching abilities, respectively. In order to evaluate the performance of the new method, three benchmark functions are used. The simulation results confirm that the proposed algorithm can greatly enhance the searching ability and effectively mitigate premature convergence.
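The LDIW component can be sketched as follows. This is plain PSO with a linearly decreasing inertia weight only, omitting the paper's chaos-based hybrid step; all parameter values are illustrative defaults:

```python
import numpy as np

def pso_ldiw(f, dim, n_particles=20, iters=100, w_max=0.9, w_min=0.4,
             c1=2.0, c2=2.0, bounds=(-5.0, 5.0), seed=0):
    """PSO with linearly decreasing inertia weight (LDIW) sketch.

    The inertia weight w decreases linearly from w_max to w_min, so
    the swarm shifts from global exploration to local exploitation.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pval = x.copy(), np.array([f(p) for p in x])
    g = pbest[pval.argmin()].copy()                 # global best position
    for t in range(iters):
        w = w_max - (w_max - w_min) * t / (iters - 1)
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pval                           # update personal bests
        pbest[better], pval[better] = x[better], fx[better]
        g = pbest[pval.argmin()].copy()
    return g, pval.min()
```

The hybrid algorithm of the abstract would interleave mutative-scale chaotic search steps around stagnating particles; only the LDIW balance between exploration and exploitation is shown here.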
Lower Bounds for Howard's Algorithm for Finding Minimum Mean-Cost Cycles
DEFF Research Database (Denmark)
Hansen, Thomas Dueholm; Zwick, Uri
2010-01-01
Howard’s policy iteration algorithm is one of the most widely used algorithms for finding optimal policies for controlling Markov Decision Processes (MDPs). When applied to weighted directed graphs, which may be viewed as Deterministic MDPs (DMDPs), Howard’s algorithm can be used to find Minimum ...
DOA Estimation with Local-Peak-Weighted CSP
Directory of Open Access Journals (Sweden)
Ichikawa Osamu
2010-01-01
Full Text Available This paper proposes a novel weighting algorithm for Cross-power Spectrum Phase (CSP analysis to improve the accuracy of direction of arrival (DOA estimation for beamforming in a noisy environment. Our sound source is a human speaker and the noise is broadband noise in an automobile. The harmonic structures in the human speech spectrum can be used for weighting the CSP analysis, because harmonic bins must contain more speech power than the others and thus give us more reliable information. However, most conventional methods leveraging harmonic structures require pitch estimation with voiced-unvoiced classification, which is not sufficiently accurate in noisy environments. In our new approach, the observed power spectrum is directly converted into weights for the CSP analysis by retaining only the local peaks considered to be harmonic structures. Our experiment showed the proposed approach significantly reduced the errors in localization, and it showed further improvements when used with other weighting algorithms.
DEFF Research Database (Denmark)
Ortiz-Arroyo, Daniel; Yazdani, Hossein
2017-01-01
The accuracy of machine learning methods for clustering depends on the optimal selection of similarity functions. Conventional distance functions for the vector space might cause an algorithm to be affected by some dominant features that may skew its final results. This paper introduces a flexib...
Weight Loss Nutritional Supplements
Eckerson, Joan M.
Obesity has reached what may be considered epidemic proportions in the United States, not only for adults but for children. Because of the medical implications and health care costs associated with obesity, as well as the negative social and psychological impacts, many individuals turn to nonprescription nutritional weight loss supplements hoping for a quick fix, and the weight loss industry has responded by offering a variety of products that generates billions of dollars each year in sales. Most nutritional weight loss supplements are purported to work by increasing energy expenditure, modulating carbohydrate or fat metabolism, increasing satiety, inducing diuresis, or blocking fat absorption. To review the literally hundreds of nutritional weight loss supplements available on the market today is well beyond the scope of this chapter. Therefore, several of the most commonly used supplements were selected for critical review, and practical recommendations are provided based on the findings of well controlled, randomized clinical trials that examined their efficacy. In most cases, the nutritional supplements reviewed either elicited no meaningful effect or resulted in changes in body weight and composition that are similar to what occurs through a restricted diet and exercise program. Although there is some evidence to suggest that herbal forms of ephedrine, such as ma huang, combined with caffeine or caffeine and aspirin (i.e., ECA stack) is effective for inducing moderate weight loss in overweight adults, because of the recent ban on ephedra manufacturers must now use ephedra-free ingredients, such as bitter orange, which do not appear to be as effective. The dietary fiber, glucomannan, also appears to hold some promise as a possible treatment for weight loss, but other related forms of dietary fiber, including guar gum and psyllium, are ineffective.
An Optimized Weighted Association Rule Mining On Dynamic Content
Velvadivu, P
2010-01-01
Association rule mining aims to explore large transaction databases for association rules. The classical Association Rule Mining (ARM) model assumes that all items have the same significance, without taking their weight into account. It also ignores differences between transactions and the importance of individual itemsets. Weighted Association Rule Mining (WARM), by contrast, does not work on databases with only binary attributes; it makes use of the importance of each itemset and transaction. WARM requires each item to be assigned a weight reflecting its importance to the user. The weights may correspond to special promotions on some products, or to the profitability of different items. This research work first focused on weight assignment based on a directed graph where nodes denote items and links represent association rules. A generalized version of HITS is applied to the graph to rank the items, where all nodes and links are allowed to have weights. This research then uses an enhanced HITS algorithm by developing...
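The HITS-style item ranking described above can be sketched as a power iteration over the item graph; the function name, the unweighted-edge simplification, and the iteration count are our assumptions (the paper allows weighted nodes and links).

```python
def hits_item_weights(links, n_items, iters=50):
    """Generalized-HITS sketch over a directed item graph: nodes are
    items, edges are association rules (antecedent -> consequent).
    Returns authority scores usable as item weights in WARM."""
    auth = [1.0] * n_items
    hub = [1.0] * n_items
    for _ in range(iters):
        # authority: sum of hub scores of items pointing at this item
        new_auth = [0.0] * n_items
        for src, dst in links:
            new_auth[dst] += hub[src]
        # hub: sum of authority scores of items this item points at
        new_hub = [0.0] * n_items
        for src, dst in links:
            new_hub[src] += new_auth[dst]
        norm_a = sum(new_auth) or 1.0
        norm_h = sum(new_hub) or 1.0
        auth = [a / norm_a for a in new_auth]
        hub = [h / norm_h for h in new_hub]
    return auth
```

Items that many strong rules point to accumulate authority, which then serves as their weight in the mining step.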
Sidelobe Suppression with Null Steering by Independent Weight Control
Directory of Open Access Journals (Sweden)
Zafar-Ullah Khan
2015-01-01
Full Text Available A uniform linear array of n antenna elements can steer up to n-1 nulls. In situations where fewer than n-1 nulls need to be steered, existing algorithms have no criterion for utilizing the remaining weights for sidelobe suppression. This work combines sidelobe suppression capability with null steering by independent weight control. For this purpose, the array factor is expressed as the product of two polynomials. One of the polynomials is used for null steering by independent weight control, while the second is used for sidelobe suppression, its coefficients or weights determined by convex optimization. Finally, a new structure is proposed to implement the product of the two polynomials such that the sidelobe suppression weights are decoupled from the null steering weights. Simulation results validate the effectiveness of the proposed scheme.
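The factorization idea — one polynomial factor placing nulls at chosen directions, the other shaping sidelobes — can be illustrated for the null-steering factor alone: its roots sit on the unit circle at the desired null directions. The convex-optimized sidelobe factor is omitted, and all names here are ours.

```python
import cmath
import math

def poly_mul_linear(coeffs, root):
    """Multiply a polynomial (low-order-first coeffs) by (z - root)."""
    out = [0j] * (len(coeffs) + 1)
    for i, c in enumerate(coeffs):
        out[i] += -root * c
        out[i + 1] += c
    return out

def null_weights(null_angles_deg, d_over_lambda=0.5):
    """Element weights of the null-steering polynomial factor, with a
    root z_k = exp(j*2*pi*(d/lambda)*sin(theta_k)) per desired null."""
    coeffs = [1 + 0j]
    for theta in null_angles_deg:
        z = cmath.exp(1j * 2 * math.pi * d_over_lambda *
                      math.sin(math.radians(theta)))
        coeffs = poly_mul_linear(coeffs, z)
    return coeffs

def array_factor(weights, theta_deg, d_over_lambda=0.5):
    """Evaluate the array factor of a uniform linear array at theta."""
    z = cmath.exp(1j * 2 * math.pi * d_over_lambda *
                  math.sin(math.radians(theta_deg)))
    return sum(w * z ** n for n, w in enumerate(weights))
```

Placing a null at 30 degrees makes the array factor vanish exactly there while leaving other directions unconstrained for the sidelobe factor.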
Incorporating the sampling design in weighting adjustments for panel attrition.
Chen, Qixuan; Gelman, Andrew; Tracy, Melissa; Norris, Fran H; Galea, Sandro
2015-12-10
We review weighting adjustment methods for panel attrition and suggest approaches for incorporating design variables, such as strata, clusters, and baseline sample weights. Design information can typically be included in attrition analysis using multilevel models or decision tree methods such as the chi-square automatic interaction detection algorithm. We use simulation to show that these weighting approaches can effectively reduce the bias in survey estimates that would occur from omitting the effect of design factors on attrition, while keeping the resulting weights stable. We provide a step-by-step illustration of creating weighting adjustments for panel attrition in the Galveston Bay Recovery Study, a survey of residents in a community following a disaster, and offer suggestions to analysts for decision-making about weighting approaches. Copyright © 2015 John Wiley & Sons, Ltd.
Graphs and matroids weighted in a bounded incline algebra.
Lu, Ling-Xia; Zhang, Bei
2014-01-01
Firstly, for a graph weighted in a bounded incline algebra (also called a dioid), a longest path problem (LPP, for short) is presented, which can be considered a unified approach to the famous shortest path problem, the widest path problem, and the most reliable path problem. The solutions for the LPP and related algorithms are given. Secondly, for a matroid weighted in a linear matroid, the maximum independent set problem is studied.
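The "unified approach" above amounts to solving one path problem over a semiring, where swapping the two operations recovers the shortest, widest, or most reliable path. A generic Bellman-Ford-style relaxation sketch (ours, not the paper's algorithm):

```python
def best_path_value(edges, n, src, dst, plus, times, zero, one):
    """Generic path problem over a semiring/dioid: 'plus' picks the
    better of two path values, 'times' extends a path by an edge weight.
    edges: list of (u, v, w) triples; n: node count."""
    val = [zero] * n
    val[src] = one
    for _ in range(n - 1):           # standard relaxation rounds
        for u, v, w in edges:
            val[v] = plus(val[v], times(val[u], w))
    return val[dst]

# Instantiations of the same algorithm:
#   most reliable path: plus=max, times=*,   zero=0.0,  one=1.0
#   widest path:        plus=max, times=min, zero=0.0,  one=inf
#   shortest path:      plus=min, times=+,   zero=inf,  one=0.0
```

With edge reliabilities 0.9 and 0.5 on the two-hop route versus 0.4 on the direct edge, the most-reliable instantiation picks the two-hop route (0.45), while the shortest-path instantiation over the same numbers picks the direct edge.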
Adaptive Alternating Minimization Algorithms
Niesen, Urs; Wornell, Gregory
2007-01-01
The classical alternating minimization (or projection) algorithm has been successful in the context of solving optimization problems over two variables or equivalently of finding a point in the intersection of two sets. The iterative nature and simplicity of the algorithm has led to its application to many areas such as signal processing, information theory, control, and finance. A general set of sufficient conditions for the convergence and correctness of the algorithm is quite well-known when the underlying problem parameters are fixed. In many practical situations, however, the underlying problem parameters are changing over time, and the use of an adaptive algorithm is more appropriate. In this paper, we study such an adaptive version of the alternating minimization algorithm. As a main result of this paper, we provide a general set of sufficient conditions for the convergence and correctness of the adaptive algorithm. Perhaps surprisingly, these conditions seem to be the minimal ones one would expect in ...
Inverse Distance Weighted Interpolation Involving Position Shading
Directory of Open Access Journals (Sweden)
LI Zhengquan
2015-01-01
Full Text Available Considering the shortcomings of inverse distance weighted (IDW) interpolation in practical applications, this study improved the IDW algorithm and put forward a new spatial interpolation method named adjusted inverse distance weighted (AIDW) interpolation. In the interpolation process, AIDW is capable of taking into account the combined influence of the distance and position of each sample point relative to the interpolation point, by adding a coefficient (K) to the normal IDW formula. The coefficient (K) adjusts the interpolation weight of each sample point according to its position among the sample points. Theoretical analysis and practical application indicate that the AIDW algorithm can diminish or eliminate the IDW interpolation defect caused by a non-uniform distribution of sample points. Consequently, AIDW interpolation is more reasonable than IDW interpolation. Moreover, contour plots from AIDW interpolation effectively avoid the implausible isolated and concentric circles that originate from the defect of IDW interpolation, so that contours derived from the AIDW-interpolated surface are closer to professional manual contouring.
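The adjustment described above — multiplying each sample's IDW weight by a position-dependent coefficient K — can be sketched as follows. Since the paper's exact formula for K is not reproduced here, the coefficient is taken as a caller-supplied function (an assumption of this sketch).

```python
def idw(sample_points, x, y, power=2, k_adjust=None):
    """Inverse distance weighted interpolation at (x, y) from
    sample_points = [(sx, sy, value), ...]. If k_adjust is given,
    each sample's weight is multiplied by its position-dependent
    coefficient K, in the spirit of the paper's AIDW."""
    num = den = 0.0
    for i, (sx, sy, sval) in enumerate(sample_points):
        d2 = (sx - x) ** 2 + (sy - y) ** 2
        if d2 == 0:
            return sval  # interpolation point coincides with a sample
        w = d2 ** (-power / 2.0)  # 1 / distance**power
        if k_adjust is not None:
            w *= k_adjust(i, sample_points, x, y)
        num += w * sval
        den += w
    return num / den
```

With `k_adjust=None` this is plain IDW; passing a K function that down-weights samples shadowed by nearer samples in the same direction gives the adjusted behavior.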
Collective Prediction of Individual Mobility Traces with Exponential Weights
Hawelka, Bartosz; Kazakopoulos, Pavlos; Beinat, Euro
2015-01-01
We present and test a sequential learning algorithm for the short-term prediction of human mobility. This novel approach pairs the Exponential Weights forecaster with a very large ensemble of experts. The experts are individual sequence prediction algorithms constructed from the mobility traces of 10 million roaming mobile phone users in a European country. Average prediction accuracy is significantly higher than that of individual sequence prediction algorithms, namely constant order Markov models derived from the user's own data, that have been shown to achieve high accuracy in previous studies of human mobility prediction. The algorithm uses only time stamped location data, and accuracy depends on the completeness of the expert ensemble, which should contain redundant records of typical mobility patterns. The proposed algorithm is applicable to the prediction of any sufficiently large dataset of sequences.
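A textbook Exponential Weights step over an ensemble of experts, in the spirit of the forecaster described above; the 0/1 loss and the learning rate are our choices, not necessarily the paper's.

```python
import math

def exponential_weights_predict(expert_predictions, weights):
    """Aggregate expert votes by current weights (weighted plurality)."""
    scores = {}
    for pred, w in zip(expert_predictions, weights):
        scores[pred] = scores.get(pred, 0.0) + w
    return max(scores, key=scores.get)

def exponential_weights_update(weights, expert_predictions, truth, eta=0.5):
    """Multiplicatively penalize experts that mispredicted the next
    location (0/1 loss; eta is the learning rate)."""
    return [w * math.exp(-eta * (p != truth))
            for w, p in zip(weights, expert_predictions)]
```

Each observed location shrinks the weight of every expert that guessed wrong, so accurate mobility-pattern experts come to dominate the vote.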
Optimization Algorithms for Nuclear Reactor Power Control
Energy Technology Data Exchange (ETDEWEB)
Kim, Yeong Min; Oh, Won Jong; Oh, Seung Jin; Chun, Won Gee; Lee, Yoon Joon [Jeju National University, Jeju (Korea, Republic of)
2010-10-15
One of the control techniques that could replace the conventional PID controllers currently used in nuclear plants is the linear quadratic regulator (LQR) method. The most attractive feature of the LQR method is that it provides a systematic environment for control design. However, the LQR approach depends heavily on the selection of the cost function, and determining suitable weighting matrices for the cost function is not an easy task, particularly when the system order is high. The purpose of this paper is to develop an efficient and reliable algorithm that can optimize the weighting matrices of the LQR system.
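How the cost-function weights shape the resulting controller can be illustrated with a scalar discrete-time LQR, solved by iterating the Riccati recursion; this toy system is our stand-in, not the paper's reactor model.

```python
def scalar_dlqr_gain(a, b, q, r, iters=1000):
    """Discrete-time LQR gain for the scalar system x' = a*x + b*u with
    cost sum(q*x^2 + r*u^2), found by iterating the Riccati recursion.
    Larger r (penalizing control effort) yields a gentler gain."""
    p = q
    for _ in range(iters):
        k = a * b * p / (r + b * b * p)    # optimal gain at this step
        p = q + a * a * p - a * b * p * k  # Riccati update
    return a * b * p / (r + b * b * p)
```

For a = b = q = r = 1 the Riccati fixed point is the golden ratio, giving a gain of about 0.618; raising r reduces the gain, which is exactly the weighting trade-off the paper's algorithm tries to automate.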
Semantic Web Improved with the Weighted IDF Feature
Directory of Open Access Journals (Sweden)
Mrs. Jyoti Gautam
2015-02-01
Full Text Available The development of search engines is proceeding at a very fast rate. A lot of algorithms have been tried and tested, but users are still not getting precise results. Social networking sites are developing at a tremendous rate, and their growth has given rise to new and interesting problems. Social networking sites use semantic data to enhance their results, which provides a new perspective on how to improve the quality of information retrieval. As we are aware, many text classification techniques are based on the TFIDF algorithm, and term weighting plays a significant role in classifying a text document. In this paper, firstly, we extend queries to "keyword+tags" instead of keywords only. Secondly, we develop a new ranking algorithm (the JEKS algorithm) based on semantic tags from user feedback that uses CiteULike data. The algorithm enhances the existing semantic web by using the weighted IDF feature of the TFIDF algorithm. The suggested algorithm provides a better ranking than Google and can be viewed as a semantic web service in the domain of academics.
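For reference, the plain TFIDF weighting the paper builds on can be sketched as follows; the JEKS tag-based ranking itself is not reproduced here, only the textbook baseline whose IDF factor it reuses.

```python
import math

def tf_idf(docs):
    """Per-document TFIDF term weights for a list of tokenized docs.
    TF is the term's relative frequency in the document; IDF is
    log(N / document-frequency)."""
    n = len(docs)
    df = {}
    for doc in docs:
        for term in set(doc):
            df[term] = df.get(term, 0) + 1
    out = []
    for doc in docs:
        tf = {}
        for term in doc:
            tf[term] = tf.get(term, 0) + 1
        out.append({t: (c / len(doc)) * math.log(n / df[t])
                    for t, c in tf.items()})
    return out
```

A term appearing in every document gets IDF zero, so only discriminative terms (or, in JEKS, discriminative tags) carry weight in the ranking.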
Recursive forgetting algorithms
DEFF Research Database (Denmark)
Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan
1992-01-01
In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied...... to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...
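A uniform-forgetting special case of such a scheme is recursive least squares with an exponential forgetting factor; this scalar sketch is ours, not the paper's selective (non-uniform in time and space) algorithm.

```python
def rls_forgetting(xs, ys, lam=0.95, p0=1000.0):
    """Recursive least squares for a scalar model y = theta * x with
    exponential forgetting factor lam in (0, 1]; older samples are
    discounted geometrically. p0 is the initial covariance."""
    theta, p = 0.0, p0
    for x, y in zip(xs, ys):
        k = p * x / (lam + x * p * x)        # gain
        theta = theta + k * (y - theta * x)  # prediction-error update
        p = (p - k * x * p) / lam            # covariance with forgetting
    return theta
```

With lam = 1 this reduces to ordinary recursive least squares; smaller lam tracks time-varying parameters faster at the cost of noisier estimates.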
2016-06-07
SIMAS ADM XBT Algorithm. Naval Underwater Systems Center, New London Laboratory, New London, Connecticut 06320. Technical memorandum (report dated 05-12-1984). An algorithm has been developed for the detection and correction of surface ship launched expendable bathythermograph ...
Static Analysis Numerical Algorithms
2016-04-01
Static Analysis of Numerical Algorithms. Kestrel Technology, LLC. Final technical report, April 2016, covering November 2013 to November 2015; approved for public release, distribution unlimited; contract FA8750-14-C... The work addressed static analysis of numerical algorithms, linear digital filters and integrating accumulators, modifying existing versions of Honeywell's HiLiTE model-based development system and
Fingerprint Feature Extraction Algorithm
Directory of Open Access Journals (Sweden)
Mehala. G
2014-03-01
Full Text Available The goal of this paper is to design an efficient Fingerprint Feature Extraction (FFE) algorithm to extract fingerprint features for Automatic Fingerprint Identification Systems (AFIS). The FFE algorithm consists of two major subdivisions: fingerprint image preprocessing and fingerprint image postprocessing. A few of the challenges presented in earlier work are consequently addressed in this paper. The proposed algorithm is able to enhance the fingerprint image and to extract true minutiae.
Redesigning linear algebra algorithms
Energy Technology Data Exchange (ETDEWEB)
Dongarra, J.J.
1983-01-01
Many of the standard algorithms in linear algebra as implemented in FORTRAN do not achieve maximum performance on today's large-scale vector computers. The author examines the problem and constructs alternative formulations of algorithms that do not lose the clarity of the original algorithm or sacrifice the FORTRAN portable environment, but do gain the performance attainable on these supercomputers. The resulting implementation not only performs well on vector computers but also increases performance on conventional sequential computers. 13 references.
Algorithms in Algebraic Geometry
Dickenstein, Alicia; Sommese, Andrew J
2008-01-01
In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its
Distributed Minimum Hop Algorithms
1982-01-01
When node d receives an acknowledgement, it starts iteration i+1; otherwise the algorithm terminates. A detailed description of the algorithm is given in pidgin algol. The precise behavior of the algorithm under these circumstances is described by the pidgin algol program in the appendix, which is executed by each node.
Explaining algorithms using metaphors
Forišek, Michal
2013-01-01
There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo
License plate detection algorithm
Broitman, Michael; Klopovsky, Yuri; Silinskis, Normunds
2013-12-01
A novel algorithm for vehicle license plate localization is proposed. The algorithm is based on pixel intensity transition gradient analysis. Nearly 2500 natural-scene gray-level vehicle images with different backgrounds and ambient illumination were tested. The best set of algorithm parameters produces a detection rate of up to 0.94. Taking into account the abnormal camera location during our tests, and the resulting geometrical distortion and interference from trees, this result can be considered acceptable. Correlations between source data, such as license plate dimensions and texture or camera location, and the parameters of the algorithm were also defined.
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Light weight phosphate cements
Wagh, Arun S.; Natarajan, Ramkumar,; Kahn, David
2010-03-09
A sealant having a specific gravity in the range of from about 0.7 to about 1.6 for heavy oil and/or coal bed methane fields is disclosed. The sealant has a binder including an oxide or hydroxide of Al or of Fe and a phosphoric acid solution. The binder may have MgO or an oxide of Fe and/or an acid phosphate. The binder is present from about 20 to about 50% by weight of the sealant with a lightweight additive present in the range of from about 1 to about 10% by weight of said sealant, a filler, and water sufficient to provide chemically bound water present in the range of from about 9 to about 36% by weight of the sealant when set. A porous ceramic is also disclosed.
Family Weight School treatment
DEFF Research Database (Denmark)
Nowicka, Paulina; Höglund, Peter; Pietrobelli, Angelo
2008-01-01
OBJECTIVE: The aim was to evaluate the efficacy of a Family Weight School treatment based on family therapy in group meetings with adolescents with a high degree of obesity. METHODS: Seventy-two obese adolescents aged 12-19 years old were referred to a childhood obesity center by pediatricians...... and school nurses and offered a Family Weight School therapy program in group meetings given by a multidisciplinary team. Intervention was compared with an untreated waiting list control group. Body mass index (BMI) and BMI z-scores were calculated before and after intervention. RESULTS: Ninety percent...... group with initial BMI z-score 3.5. CONCLUSIONS: Family Weight School treatment model might be suitable for adolescents with BMI z...
Weight Management in Phenylketonuria
DEFF Research Database (Denmark)
Rocha, Julio César; van Rijn, Margreet; van Dam, Esther
2016-01-01
specialized clinic, the second objective is important in establishing an understanding of the breadth of overweight and obesity in PKU in Europe. KEY MESSAGES: In PKU, the importance of adopting a European nutritional management strategy on weight management is highlighted in order to optimize long-term....... It is becoming evident that in addition to acceptable blood phenylalanine control, metabolic dieticians should regard weight management as part of routine clinical practice. SUMMARY: It is important for practitioners to differentiate the 3 levels for overweight interpretation: anthropometry, body composition...... and frequency and severity of associated metabolic comorbidities. The main objectives of this review are to suggest proposals for the minimal standard and gold standard for the assessment of weight management in PKU. While the former aims to underline the importance of nutritional status evaluation in every...
STATISTICAL SPACE-TIME ADAPTIVE PROCESSING ALGORITHM
Institute of Scientific and Technical Information of China (English)
Yang Jie
2010-01-01
For slowly changing, range-dependent non-homogeneous environments, a new statistical space-time adaptive processing algorithm is proposed, which uses statistical methods such as the Bayes or likelihood criterion to estimate the approximate covariance matrix under non-homogeneous conditions. According to the statistical characteristics of the space-time snapshot data, by defining the aggregate snapshot data and corresponding events, the conditional probability that a space-time snapshot is effective training data is derived, and the weighting coefficients are then obtained for the weighting method. The theoretical analysis indicates that the Bayes and likelihood criteria for covariance matrix estimation are more reasonable than other methods that estimate the covariance matrix from training data with only the detected outliers removed. Finally, simulations confirm that the proposed algorithms estimate the covariance accurately under non-homogeneous conditions and have favorable characteristics.
McConnel, Craig S; McNeil, Ashleigh A; Hadrich, Joleen C; Lombard, Jason E; Garry, Franklyn B; Heller, Jane
2017-08-01
Over the past 175 years, data related to human disease and death have progressed to a summary measure of population health, the Disability-Adjusted Life Year (DALY). As dairies have intensified there has been no equivalent measure of the impact of disease on the productive life and well-being of animals. The development of a disease-adjusted metric requires a consistent set of disability weights that reflect the relative severity of important diseases. The objective of this study was to use an international survey of dairy authorities to derive disability weights for primary disease categories recorded on dairies. National and international dairy health and management authorities were contacted through professional organizations, dairy industry publications and conferences, and industry contacts. Estimates of minimum, most likely, and maximum disability weights were derived for 12 common dairy cow diseases. Survey participants were asked to estimate the impact of each disease on overall health and milk production. Diseases were classified from 1 (minimal adverse effects) to 10 (death). The data was modelled using BetaPERT distributions to demonstrate the variation in these dynamic disease processes, and to identify the most likely aggregated disability weights for each disease classification. A single disability weight was assigned to each disease using the average of the combined medians for the minimum, most likely, and maximum severity scores. A total of 96 respondents provided estimates of disability weights. The final disability weight values resulted in the following order from least to most severe: retained placenta, diarrhea, ketosis, metritis, mastitis, milk fever, lame (hoof only), calving trauma, left displaced abomasum, pneumonia, musculoskeletal injury (leg, hip, back), and right displaced abomasum. The peaks of the probability density functions indicated that for certain disease states such as retained placenta there was a relatively narrow range of
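The three-point (minimum, most likely, maximum) estimates described above can be aggregated with the standard PERT mean; the rescaling of the 1-10 severity scale to a 0-1 weight is our illustration, not necessarily the study's exact procedure.

```python
def pert_mean(minimum, most_likely, maximum):
    """Mean of a standard PERT (BetaPERT) distribution built from a
    three-point estimate: (min + 4*mode + max) / 6."""
    return (minimum + 4.0 * most_likely + maximum) / 6.0

def disability_weight(estimates):
    """Average the PERT means over respondents' (min, most likely, max)
    severity estimates, then rescale the 1..10 severity scale to a
    0..1 disability weight (1 -> 0.0 minimal, 10 -> 1.0 death)."""
    means = [pert_mean(lo, ml, hi) for lo, ml, hi in estimates]
    avg = sum(means) / len(means)
    return (avg - 1.0) / 9.0
```

The 4x emphasis on the mode keeps a single extreme respondent from dominating a disease's aggregated weight.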
Exercise in weight management.
Pinto, B M; Szymanski, L
1997-11-01
Exercise is integral to successful weight loss and maintenance. When talking to patients about exercise, consider their readiness, and address the barriers that prevent exercise. Physicians can help those patients who already exercise by encouraging them to continue and helping them anticipate, and recover from, lapses. Providing resource material to patients on behavioral strategies for exercise adoption and weight management can supplement the physician's efforts. Overall, patients need to hear that any regular exercise, be it step-aerobics, walking, or taking the stairs, will benefit them.
Formation Pattern Based on Modified Cell Decomposition Algorithm
Directory of Open Access Journals (Sweden)
Iswanto Iswanto
2017-06-01
Full Text Available The purpose of this paper is to present a shortest path algorithm that lets Quadrotors make a formation quickly and avoid obstacles in an unknown area. Three algorithms are used in this paper, namely the fuzzy, cell decomposition, and potential field algorithms. The cell decomposition algorithm, derived from graph theory, is used to create maps of robot formations. The fuzzy algorithm is an artificial intelligence control algorithm used for robot navigation. Merging these two algorithms alone cannot produce an optimal formation, because Quadrotors that are already hovering must wait for the other Quadrotors, which are unable to find the shortest distance to reach the formation quickly. The problem is that the longer the multiple Quadrotors take to make a formation, the more energy they use. This can be overcome by adding the potential field algorithm, which is used to assign weights to the paths planned by the Quadrotors. The proposed algorithms show that multiple Quadrotors can quickly make a formation because they are able to avoid various obstacles and find the shortest path, so the time required to reach the goal position is short.
The evaluation of the OSGLR algorithm for restructurable controls
Bonnice, W. F.; Wagner, E.; Hall, S. R.; Motyka, P.
1986-01-01
The detection and isolation of commercial aircraft control surface and actuator failures using the orthogonal series generalized likelihood ratio (OSGLR) test was evaluated. The OSGLR algorithm was chosen as the most promising algorithm based on a preliminary evaluation of three failure detection and isolation (FDI) algorithms (the detection filter, the generalized likelihood ratio test, and the OSGLR test) and a survey of the literature. One difficulty of analytic FDI techniques, and the OSGLR algorithm in particular, is their sensitivity to modeling errors. Therefore, methods of improving the robustness of the algorithm were examined, with the incorporation of age-weighting into the algorithm being the most effective approach, significantly reducing the sensitivity of the algorithm to modeling errors. The steady-state implementation of the algorithm based on a single cruise linear model was evaluated using a nonlinear simulation of a C-130 aircraft. A number of off-nominal no-failure flight conditions including maneuvers, nonzero flap deflections, different turbulence levels and steady winds were tested. Based on the no-failure decision functions produced by off-nominal flight conditions, the failure detection performance at the nominal flight condition was determined. The extension of the algorithm to a wider flight envelope by scheduling the linear models used by the algorithm on dynamic pressure and flap deflection was also considered. Since simply scheduling the linear models over the entire flight envelope is unlikely to be adequate, scheduling of the steady-state implementation of the algorithm was briefly investigated.
Institute of Scientific and Technical Information of China (English)
Shao Wei; Qian Zuping; Yuan Feng
2007-01-01
A robust phase-only Direct Data Domain Least Squares (D3LS) algorithm based on generalized Rayleigh quotient optimization using a hybrid Genetic Algorithm (GA) is presented in this letter. The optimization efficiency and computational speed are improved via the hybrid GA, composed of the standard GA and the Nelder-Mead simplex algorithm. First, the objective function, in the form of a generalized Rayleigh quotient, is derived via the standard D3LS algorithm. It is then taken as a fitness function, and the unknown phases of all adaptive weights are taken as decision variables. Then, the nonlinear optimization is performed via the hybrid GA to obtain the optimized solution of the phase-only adaptive weights. As a phase-only adaptive algorithm, the proposed algorithm is simpler than conventional algorithms when it comes to hardware implementation. Moreover, it processes only a single snapshot of data, as opposed to forming a sample covariance matrix and performing matrix inversion. Simulation results show that the proposed algorithm has good signal recovery and interference nulling performance, superior to that of the phase-only D3LS algorithm based on the standard GA.
Real-Coded Quantum-Inspired Genetic Algorithm-Based BP Neural Network Algorithm
Directory of Open Access Journals (Sweden)
Jianyong Liu
2015-01-01
Full Text Available A method in which a real-coded quantum-inspired genetic algorithm (RQGA) optimizes the weights and thresholds of a BP neural network is proposed, to overcome the defect that gradient descent makes the algorithm easily fall into local optima during learning. The quantum genetic algorithm (QGA) has good directed global optimization ability, but the conventional QGA is based on binary coding, and the speed of calculation is reduced by the coding and decoding processes. So, RQGA is introduced to explore the search space, and an improved variable learning rate is adopted to train the BP neural network. Simulation tests show that the proposed algorithm converges rapidly to a solution that satisfies the constraint conditions.
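A minimal real-coded genetic algorithm of the kind used to optimize network weights can be sketched as follows; in the paper's setting, the candidate vector would hold the BP network's weights and thresholds and the loss its training error. The quantum-inspired encoding is not modelled here, and all parameter choices are ours.

```python
import random

def real_coded_ga(loss, dim, pop_size=30, gens=60, sigma=0.3, seed=1):
    """Minimal real-coded GA: truncation selection keeps the best half
    (elitism), blend crossover mixes two elites per coordinate, and
    Gaussian mutation perturbs each child. Returns the best vector."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=loss)
        elite = pop[:pop_size // 2]           # survivors, kept unmutated
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = [x + rng.random() * (y - x) for x, y in zip(a, b)]
            child = [x + rng.gauss(0, sigma) for x in child]
            children.append(child)
        pop = elite + children
    return min(pop, key=loss)
```

Because elites survive unmutated, the best loss is monotonically non-increasing across generations, avoiding gradient descent's dependence on a good starting point.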
Weight and Diabetes (For Parents)
Diet and exercise help all kids maintain a healthy weight; for kids with diabetes, they are even more important. Maintaining a healthy weight is good for the entire family, and when kids with diabetes reach and maintain a healthy weight, they feel ...
Overweight, Obesity, and Weight Loss
Web Based Genetic Algorithm Using Data Mining
Ashiqur Rahman; Asaduzzaman Noman; Md. Ashraful Islam; Al-Amin Gaji
2016-01-01
This paper presents an approach for classifying students in order to predict their final grade based on features extracted from logged data in an education web-based system. A combination of multiple classifiers leads to a significant improvement in classification performance. By weighting the feature vectors using a Genetic Algorithm we can optimize the prediction accuracy and obtain a marked improvement over raw classification. It further shows that when the number of features is small, fea...
Quantum Adiabatic Evolution Algorithms versus Simulated Annealing
Farhi, E; Gutmann, S; Farhi, Edward; Goldstone, Jeffrey; Gutmann, Sam
2002-01-01
We explain why quantum adiabatic evolution and simulated annealing perform similarly in certain examples of searching for the minimum of a cost function of n bits. In these examples each bit is treated symmetrically so the cost function depends only on the Hamming weight of the n bits. We also give two examples, closely related to these, where the similarity breaks down in that the quantum adiabatic algorithm succeeds in polynomial time whereas simulated annealing requires exponential time.
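A minimal sketch of simulated annealing on exactly this kind of symmetric landscape, where the cost depends only on the Hamming weight of the bit string. This toy cost (weight itself) is the easy case in which both SA and the adiabatic algorithm succeed; the schedule and parameters are illustrative assumptions, not from the paper.

```python
import random, math

random.seed(1)

def cost(bits):
    # symmetric cost: depends only on the Hamming weight of the string
    return sum(bits)

def anneal(n=20, steps=5000, t0=2.0):
    bits = [random.randint(0, 1) for _ in range(n)]
    for step in range(steps):
        t = t0 * (1 - step / steps) + 1e-9        # linear cooling schedule
        i = random.randrange(n)
        delta = (1 - bits[i]) - bits[i]           # cost change if bit i is flipped
        # Metropolis rule: always accept downhill, sometimes accept uphill
        if delta <= 0 or random.random() < math.exp(-delta / t):
            bits[i] ^= 1
    return bits

result = anneal()
print(cost(result))
```

On the "spike"-type cost functions discussed in the paper, this same procedure needs exponential time while the quantum adiabatic algorithm does not, so the code should be read as the baseline being compared against, not as a universally good optimizer.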
Novel quantum inspired binary neural network algorithm
Indian Academy of Sciences (India)
OM PRAKASH PATEL; ARUNA TIWARI
2016-11-01
In this paper, a quantum-based binary neural network algorithm is proposed, named the novel quantum binary neural network algorithm (NQ-BNN). It forms a neural network structure by deciding weights and a separability parameter in a quantum-based manner. The quantum computing concept represents solutions probabilistically and provides a large search space for finding optimal parameter values using a Gaussian random number generator. The network is built constructively with three layers: input, hidden, and output. Constructing the network in this way eliminates unnecessary training. A new parameter, the quantum separability parameter (QSP), is introduced; it finds an optimal separability plane to classify input samples and is taken as the neuron threshold during learning. The algorithm is tested on three benchmark datasets and produces better results than existing quantum-inspired and other classification approaches.
Lane Detection Based on Machine Learning Algorithm
Directory of Open Access Journals (Sweden)
Chao Fan
2013-09-01
Full Text Available In order to improve the accuracy and robustness of lane detection in complex conditions, such as shadows and changing illumination, a novel detection algorithm based on machine learning was proposed. After preprocessing, a set of Haar-like filters was used to compute eigenvalues in the gray image f(x,y) and edge image e(x,y). These features were then trained with an improved boosting algorithm to obtain the final classification function g(x), which judges whether a point x belongs to the lane. To avoid the overfitting of traditional boosting, Fisher discriminant analysis was used to initialize the sample weights. Tests on many roads under all conditions showed that the algorithm recognizes lanes robustly and in real time under challenging conditions.
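The boosting weight update at the heart of such a pipeline can be sketched in a few lines. This is a generic AdaBoost-style round on 1-D toy data with a threshold stump, not the paper's improved variant (which additionally initializes the weights via Fisher discriminant analysis); the data and stump are illustrative.

```python
import math

def boost_round(samples, labels, weights, stump):
    """One boosting round: evaluate a weak classifier, compute its vote alpha,
    and re-weight samples so the next round focuses on the mistakes."""
    preds = [stump(x) for x in samples]
    err = sum(w for w, p, y in zip(weights, preds, labels) if p != y)
    alpha = 0.5 * math.log((1 - err) / max(err, 1e-12))
    new_w = [w * math.exp(-alpha * y * p)
             for w, p, y in zip(weights, preds, labels)]
    total = sum(new_w)
    return alpha, [w / total for w in new_w]

samples = [-2.0, -1.0, 0.5, 1.0, 2.0]
labels = [-1, -1, 1, 1, -1]               # last sample is hard: the stump gets it wrong
stump = lambda x: 1 if x > 0 else -1      # weak threshold classifier
w0 = [1 / len(samples)] * len(samples)
alpha, w1 = boost_round(samples, labels, w0, stump)
print(round(alpha, 3), round(max(w1), 3))
```

After the round, the single misclassified sample carries half of the total weight, which is exactly the over-concentration on outliers that a better weight initialization is meant to temper.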
An exact algorithm for graph partitioning
Hager, William; Zhang, Hongchao
2009-01-01
An exact algorithm is presented for solving edge weighted graph partitioning problems. The algorithm is based on a branch and bound method applied to a continuous quadratic programming formulation of the problem. Lower bounds are obtained by decomposing the objective function into convex and concave parts and replacing the concave part by an affine underestimate. It is shown that the best affine underestimate can be expressed in terms of the center and the radius of the smallest sphere containing the feasible set. The concave term is obtained either by a constant diagonal shift associated with the smallest eigenvalue of the objective function Hessian, or by a diagonal shift obtained by solving a semidefinite programming problem. Numerical results show that the proposed algorithm is competitive with state-of-the-art graph partitioning codes.
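The diagonal-shift step can be illustrated numerically: for a symmetric indefinite matrix A, shifting the diagonal by the smallest eigenvalue lam splits x^T A x into a convex part x^T (A - lam*I) x and a concave remainder lam*||x||^2. A minimal pure-Python sketch under stated assumptions (a toy 3x3 matrix and power iteration in place of a production eigensolver or the paper's semidefinite-programming shift):

```python
import math, random

random.seed(2)

def matvec(M, v):
    return [sum(r[j] * v[j] for j in range(len(v))) for r in M]

def power_iteration(M, iters=500):
    # dominant eigenvalue of a symmetric matrix via repeated multiplication
    v = [random.random() for _ in M]
    for _ in range(iters):
        w = matvec(M, v)
        norm = math.sqrt(sum(x * x for x in w))
        v = [x / norm for x in w]
    w = matvec(M, v)
    return sum(v[i] * w[i] for i in range(len(v)))   # Rayleigh quotient

def smallest_eigenvalue(A):
    # Gershgorin bound c >= lambda_max(A); then lambda_min(A) = c - lambda_max(c*I - A)
    n = len(A)
    c = max(A[i][i] + sum(abs(x) for j, x in enumerate(A[i]) if j != i)
            for i in range(n))
    shifted = [[(c if i == j else 0.0) - A[i][j] for j in range(n)] for i in range(n)]
    return c - power_iteration(shifted)

# Hessian of a toy quadratic objective x^T A x (symmetric, indefinite)
A = [[2.0, -3.0, 0.0],
     [-3.0, 1.0, 1.0],
     [0.0, 1.0, -1.0]]
lam = smallest_eigenvalue(A)
# lam < 0 here, so A - lam*I is positive semidefinite: the convex part of the split.
print(round(lam, 3))
```

Replacing the concave remainder by an affine underestimate over the feasible set then yields the convex lower-bounding problem used inside the branch and bound.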
Load Balancing Algorithm for Cache Cluster
Institute of Scientific and Technical Information of China (English)
刘美华; 古志民; 曹元大
2003-01-01
Based on the cluster's load definition, each request is treated as the granularity for computing load and implementing load balancing in a cache cluster. First, the processing power of a cache node is characterized in four aspects: network bandwidth, memory capacity, disk access rate, and CPU usage. A weighted load for each cache node is then defined. On this basis, a load-balancing algorithm for the cache cluster is proposed. Finally, Polygraph is used as a benchmarking tool to test the cache cluster running the load-balancing algorithm and a cache cluster using the cache array routing protocol, respectively. The results show the load-balancing algorithm improves the performance of the cache cluster.
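The weighted-load idea combines the four per-node metrics into one scalar and dispatches each request to the least-loaded node. A minimal sketch; the weight values below are illustrative assumptions, not the coefficients customized in the paper.

```python
def node_load(bandwidth_util, mem_util, disk_rate_util, cpu_util,
              weights=(0.3, 0.2, 0.2, 0.3)):
    """Weighted load of a cache node from four utilisation ratios in [0, 1].
    The weights here are illustrative, not the values from the paper."""
    metrics = (bandwidth_util, mem_util, disk_rate_util, cpu_util)
    return sum(w * m for w, m in zip(weights, metrics))

def pick_node(loads):
    # dispatch the next request to the least-loaded node
    return min(range(len(loads)), key=lambda i: loads[i])

nodes = [node_load(0.9, 0.4, 0.5, 0.8),
         node_load(0.2, 0.3, 0.1, 0.25),
         node_load(0.6, 0.6, 0.7, 0.5)]
print(pick_node(nodes))  # → 1
```

Using request count as the granularity means the loads are re-evaluated per request, so a node whose CPU or disk saturates stops attracting traffic immediately rather than after a rebalancing interval.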
A Hybrid ACO Algorithm for the Vehicle Routing Problem
Directory of Open Access Journals (Sweden)
Aziz Ezzatneshan
2010-09-01
Full Text Available In this paper, we propose a hybrid ACO algorithm for solving the vehicle routing problem (VRP) heuristically, in combination with an exact method. In the basic VRP, geographically scattered customers of known demand are supplied from a single depot by a fleet of identically capacitated vehicles. The intuition behind the proposed algorithm is that nodes near each other will probably belong to the same branch of the minimum spanning tree of the problem graph, and thus to the same route in the VRP. Given a clustering of client nodes, a route through these clusters is found by using ACO with a modified version of the ants' transition rule. At the end of each iteration, ACO tries to improve the quality of the solutions with a local search algorithm and updates the weights associated with the graph arcs.
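The ant transition rule referred to above can be sketched as follows. This is the standard pheromone-times-visibility rule, not the paper's modified version; the distance matrix, pheromone values, and exponents alpha and beta are illustrative assumptions.

```python
import random

random.seed(3)

def choose_next(current, unvisited, tau, dist, alpha=1.0, beta=2.0):
    """Standard ACO transition rule (a generic sketch, not the paper's
    modified rule): pick the next client with probability proportional to
    pheromone^alpha * (1/distance)^beta (roulette-wheel selection)."""
    scores = [(j, (tau[current][j] ** alpha) * ((1.0 / dist[current][j]) ** beta))
              for j in unvisited]
    total = sum(s for _, s in scores)
    r = random.random() * total
    acc = 0.0
    for j, s in scores:
        acc += s
        if acc >= r:
            return j
    return scores[-1][0]

# toy symmetric distances among 4 clients; uniform initial pheromone
dist = [[0, 1, 4, 9], [1, 0, 2, 6], [4, 2, 0, 3], [9, 6, 3, 0]]
tau = [[1.0] * 4 for _ in range(4)]
nxt = choose_next(0, [1, 2, 3], tau, dist)
print(nxt)
```

With beta > alpha the nearest client dominates the draw early on, which is what makes MST-style clusters of nearby nodes tend to end up on the same route; pheromone updates then gradually override pure proximity.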
Weighted exponential polynomial approximation
Institute of Scientific and Technical Information of China (English)
邓冠铁
2003-01-01
A necessary and sufficient condition for completeness of systems of exponentials with a weight in Lp is established, and a quantitative relation between the weight and the system of exponentials in Lp is obtained by using a generalization of Malliavin's uniqueness theorem on Watson's problem.
2005-05-01
Prototype Hardware Testing and Results: Barrel Weight; Functional Testing; Barrel Deflection; Drop Test; Thermal Test; References ... measurements were compliant. Thermal Test: As discussed in the Transient Analysis Model Verification section of this report, the analytical results from the
African Journals Online (AJOL)
hayati
Efficiency of growth is a function of metabolisable energy retained relative to that which is .... distribution of other sexes in certain housing, initial weight or season categories ..... Fox, D.G., Johnson, R.R., Preston, R.L. & Dockerty, T.R., 1972.
Energy Technology Data Exchange (ETDEWEB)
Avakian, Harut [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Gamberg, Leonard [Pennsylvania State Univ., University Park, PA (United States); Rossi, Patrizia [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Prokudin, Alexei [Pennsylvania State Univ., University Park, PA (United States)
2016-05-01
We review the concept of Bessel-weighted asymmetries for semi-inclusive deep inelastic scattering and focus on the cross section in Fourier space, conjugate to the outgoing hadron's transverse momentum, where convolutions of transverse momentum dependent (TMD) parton distribution functions and fragmentation functions become simple products. Individual asymmetric terms in the cross section can be projected out by means of a generalized set of weights involving Bessel functions. The procedure is applied to studies of the double longitudinal spin asymmetry in semi-inclusive deep inelastic scattering using a new dedicated Monte Carlo generator that includes quark intrinsic transverse momentum within the generalized parton model. We observe a systematic offset of a few percent in the Bessel-weighted asymmetry obtained from the Monte Carlo extraction compared to input model calculations, due to the limitations imposed by energy and momentum conservation at the given energy and hard scale Q^2. We find that the Bessel weighting technique provides a powerful and reliable tool to study the Fourier transform of TMDs, with controlled systematics due to experimental acceptances and resolutions, for different TMD model inputs.
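The core mechanism, averaging a Bessel weight J0(b*pT) over events to project out the Fourier-conjugate amplitude, can be checked with a toy Monte Carlo. For a Gaussian intrinsic transverse-momentum distribution of width s, the event average of J0(b*pT) should equal exp(-b^2 s^2 / 2) exactly. This is a self-contained numerical check, not the paper's generator; the width and b value are illustrative.

```python
import math, random

random.seed(4)

def j0(x, n=100):
    # Bessel J0 via its integral representation (1/pi) * ∫_0^pi cos(x sin t) dt,
    # evaluated with the midpoint rule
    h = math.pi / n
    return sum(math.cos(x * math.sin((k + 0.5) * h)) for k in range(n)) * h / math.pi

s = 0.5    # Gaussian width of intrinsic transverse momentum (toy value)
b = 1.5    # conjugate Fourier variable b_T (toy value)

# sample pT from a 2-D Gaussian and average the Bessel weight over "events"
samples = [math.hypot(random.gauss(0, s), random.gauss(0, s)) for _ in range(5000)]
mc = sum(j0(b * p) for p in samples) / len(samples)
exact = math.exp(-b * b * s * s / 2)
print(round(mc, 3), round(exact, 3))
```

The few-permille agreement between the Monte Carlo average and the closed form is the idealized version of what the paper tests; the few-percent offsets it reports arise once energy-momentum conservation at finite Q^2 distorts the sampled distribution.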
Nieuwenhuijsen, Mark J; Northstone, Kate; Golding, Jean
2002-11-01
Swimmers can be exposed to high levels of trihalomethanes, byproducts of chlorination disinfection. There are no published studies on the relation between swimming and birth weight. We explored this relation in a large birth cohort, the Avon (England) Longitudinal Study of Parents and Children (ALSPAC), in 1991-1992. Information on the amount of swimming per week during the first 18-20 weeks of pregnancy was available for 11,462 pregnant women. Fifty-nine percent never swam, 31% swam up to 1 hour per week, and 10% swam for longer. We used linear regression to explore the relation between birth weight and the amount of swimming, with adjustment for gestational age, maternal age, parity, maternal education level, ethnicity, housing tenure, drug use, smoking and alcohol consumption. We found little effect of the amount of swimming on birth weight. More highly educated women were more likely to swim compared with less educated women, whereas smokers were less likely to swim compared with nonsmokers. There appears to be no relation between the duration of swimming and birth weight.
Weight lifting builds muscle, which increases overall body strength, tone, and balance. Muscles also burn calories more efficiently than fat and other body tissues. So even at rest the more muscle tissue a person has the more calories a person is ...
... Exercise is a key component of a healthy lifestyle before, during and after pregnancy. After pregnancy, most women can start exercising as ... the skinny jeans. Focus on living a healthy lifestyle, and the rest will fall into place. More tips ... or between pregnancies Nutrition, weight & fitness Prenatal ...
Graphlet decomposition of a weighted network
Soufiani, Hossein Azari
2012-01-01
We introduce the graphlet decomposition of a weighted network, which encodes a notion of social information based on social structure. We develop a scalable inference algorithm, which combines EM with Bron-Kerbosch in a novel fashion, for estimating the parameters of the model underlying graphlets using one network sample. We explore some theoretical properties of the graphlet decomposition, including computational complexity, redundancy and expected accuracy. We demonstrate graphlets on synthetic and real data. We analyze messaging patterns on Facebook and criminal associations in the 19th century.
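The Bron-Kerbosch component of that inference pipeline enumerates the maximal cliques on which graphlets are built. A minimal self-contained sketch (basic recursion without pivoting, on a hypothetical toy graph, not the paper's scalable EM-combined variant):

```python
def bron_kerbosch(R, P, X, adj, cliques):
    """Classic Bron-Kerbosch: R = current clique, P = candidates, X = excluded.
    Appends every maximal clique to `cliques`."""
    if not P and not X:
        cliques.append(sorted(R))
        return
    for v in list(P):
        bron_kerbosch(R | {v}, P & adj[v], X & adj[v], adj, cliques)
        P.remove(v)
        X.add(v)

# toy "messaging" graph: two triangles sharing vertex 2
edges = [(0, 1), (0, 2), (1, 2), (2, 3), (2, 4), (3, 4)]
adj = {v: set() for v in range(5)}
for a, b in edges:
    adj[a].add(b)
    adj[b].add(a)

cliques = []
bron_kerbosch(set(), set(adj), set(), adj, cliques)
print(sorted(cliques))  # → [[0, 1, 2], [2, 3, 4]]
```

Enumeration is worst-case exponential, which is why the paper combines it with EM inside a scalable scheme rather than running it on the raw network alone.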
DEFF Research Database (Denmark)
Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino
2016-01-01
A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivi...
2016-01-01
This project concerns the implementation of a decentralized algorithm for shape formation. The first idea was to test this algorithm with a swarm of autonomous drones but, due to the lack of time and the complexity of the project, the work was just developed in 2D and in simulation.
DEFF Research Database (Denmark)
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
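The simplest sequential vertex-colouring algorithm with a worst-case guarantee is the greedy one: visit vertices in some order and give each the smallest colour absent from its coloured neighbours, which uses at most max-degree + 1 colours. A minimal sketch on a toy 5-cycle:

```python
def greedy_colouring(adj, order=None):
    """Greedy vertex colouring: assign each vertex the smallest colour
    not already used by its coloured neighbours."""
    order = order or sorted(adj)
    colour = {}
    for v in order:
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:
            c += 1
        colour[v] = c
    return colour

# 5-cycle: chromatic number 3; greedy in natural order also uses 3 colours
adj = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
col = greedy_colouring(adj)
print(max(col.values()) + 1)  # → 3
```

The vertex order matters: some ordering always yields an optimal colouring, but finding that ordering is as hard as the colouring problem itself, which is the tension such a chapter typically explores.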
Ciocanea Teodorescu I.,
2016-01-01
In this thesis we are interested in describing algorithms that answer questions arising in ring and module theory. Our focus is on deterministic polynomial-time algorithms and rings and modules that are finite. The first main result of this thesis is a solution to the module isomorphism problem in
Implementing Vehicle Routing Algorithms
1975-09-01
Multiple Depot Vehicle Dispatch Problem," presented at the ORSA/TIMS meeting San Juan, Puerto Rico, Oct. 1974. 28. Gillett, B., and Miller, L., "A Heuristic Algorithm for...45. Lam, T., "Comments on a Heuristic Algorithm for the Multiple Terminal Delivery Problem," Transportation Science, Vol. 4, No. 4, Nov. 1970, p. 403
Parallel scheduling algorithms
Energy Technology Data Exchange (ETDEWEB)
Dekel, E.; Sahni, S.
1983-01-01
Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.
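Of the problems listed, minimizing the number of tardy jobs on one machine has a classic sequential solution, the Moore-Hodgson algorithm, which the parallel versions build on. A minimal sketch (sequential form only; the job data is a hypothetical example):

```python
import heapq

def min_tardy(jobs):
    """Moore-Hodgson: schedule jobs given as (processing_time, deadline) pairs
    on one machine to minimise the number of tardy jobs; returns that minimum."""
    jobs = sorted(jobs, key=lambda j: j[1])   # earliest deadline first
    kept = []        # max-heap (negated) of processing times of kept jobs
    t = 0            # completion time of the kept schedule
    tardy = 0
    for p, d in jobs:
        t += p
        heapq.heappush(kept, -p)
        if t > d:                        # deadline missed: drop the longest kept job
            t += heapq.heappop(kept)     # popped value is -p_max, so t decreases
            tardy += 1
    return tardy

jobs = [(2, 3), (3, 5), (2, 6), (4, 7), (1, 8)]
print(min_tardy(jobs))  # → 2
```

The greedy exchange argument (always evict the longest job when a deadline is missed) makes this optimal in O(n log n) time, and its structure is what admits the fast shared-memory parallelizations the paper develops.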
Loo, K
2005-01-01
The goals of this paper are to show the following. First, Grover's algorithm can be viewed as a digital approximation to the analog quantum algorithm proposed in "An Analog Analogue of a Digital Quantum Computation", by E. Farhi and S. Gutmann, Phys. Rev. A 57, 2403-2406 (1998), quant-ph/9612026. We call this analog algorithm the Grover-Farhi-Gutmann (GFG) algorithm. Second, the propagator of the GFG algorithm can be written as a sum-over-paths formula and given a sum-over-paths interpretation, i.e., a Feynman path sum/integral; we use nonstandard analysis to do this. Third, in the semi-classical limit $\hbar \to 0$, both the Grover and the GFG algorithms (viewed in the setting of the approximation in this paper) must run instantaneously. Finally, we end the paper with an open question. In "Semiclassical Shor's Algorithm", by P. Giorda et al., Phys. Rev. A 70, 032303 (2004), quant-ph/0303037, the authors proposed building semi-classical quantum computers to run Shor's algorithm because the ...
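The digital algorithm being approximated is short enough to simulate directly: Grover iterates an oracle phase-flip and an inversion about the mean roughly (pi/4)*sqrt(N) times. A minimal state-vector sketch (a classical simulation for illustration, not the paper's analog GFG construction; the search-space size and target index are arbitrary):

```python
import math

def grover(n, target):
    """Simulate Grover search over N = 2**n items with a plain state vector."""
    N = 2 ** n
    amp = [1 / math.sqrt(N)] * N                 # uniform superposition
    iterations = round(math.pi / 4 * math.sqrt(N))
    for _ in range(iterations):
        amp[target] = -amp[target]               # oracle: phase-flip the marked item
        mean = sum(amp) / N
        amp = [2 * mean - a for a in amp]        # diffusion: inversion about the mean
    return [a * a for a in amp]                  # measurement probabilities

probs = grover(4, target=11)
print(round(probs[11], 3))  # close to 1 after ~pi/4 * sqrt(16) = 3 iterations
```

Each iteration is a rotation by a fixed angle in the two-dimensional subspace spanned by the marked and unmarked states, which is the discrete counterpart of the continuous-time evolution in the GFG algorithm.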