Experience with CANDID: Comparison algorithm for navigating digital image databases
Energy Technology Data Exchange (ETDEWEB)
Kelly, P.; Cannon, M.
1994-10-01
This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.
CANDID: Comparison algorithm for navigating digital image databases
Energy Technology Data Exchange (ETDEWEB)
Kelly, P.M.; Cannon, T.M.
1994-02-21
In this paper, we propose a method for calculating the similarity between two digital images. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized distance between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to an example target image. The algorithm is applied to the problem of search and retrieval for a database containing pulmonary CT imagery, and experimental results are provided.
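The signature-matching idea described above can be illustrated in miniature. The sketch below is not the authors' actual CANDID implementation: it stands in for the probability density function with a simple normalized histogram over scalar feature values, and for the normalized similarity measure with a cosine-style score; the feature values, bin count, and helper names are all hypothetical.

```python
from collections import Counter
from math import sqrt

def global_signature(values, bins=16):
    """Summarize an image's scalar features (assumed to lie in [0, 1))
    as a normalized histogram -- a crude stand-in for the probability
    density function of its feature vectors."""
    counts = Counter(min(int(v * bins), bins - 1) for v in values)
    total = len(values)
    return [counts.get(b, 0) / total for b in range(bins)]

def normalized_similarity(p, q):
    """Cosine of the angle between two signatures: 1.0 for identical
    distributions, 0.0 for distributions with disjoint support."""
    dot = sum(a * b for a, b in zip(p, q))
    return dot / (sqrt(sum(a * a for a in p)) * sqrt(sum(b * b for b in q)))

# Query-by-example: rank database images by similarity to the example.
example = global_signature([i / 100 for i in range(100)])
database = {
    "uniform": global_signature([i / 50 for i in range(50)]),
    "peaked": global_signature([0.5] * 50),
}
ranked = sorted(database, key=lambda k: normalized_similarity(example, database[k]),
                reverse=True)
```

With signatures precomputed for every stored image, a query reduces to one similarity evaluation per database entry followed by a sort, which is what makes the query-by-example methodology practical.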
Lee, K J; Jenet, F A; Martinez, J; Dartez, L P; Mata, A; Lunsford, G; Cohen, S; Biwer, C M; Rohr, M; Flanigan, J; Walker, A; Banaszak, S; Allen, B; Barr, E D; Bhat, N D R; Bogdanov, S; Brazier, A; Camilo, F; Champion, D J; Chatterjee, S; Cordes, J; Crawford, F; Deneva, J; Desvignes, G; Ferdman, R D; Freire, P; Hessels, J W T; Karuppusamy, R; Kaspi, V M; Knispel, B; Kramer, M; Lazarus, P; Lynch, R; Lyne, A; McLaughlin, M; Ransom, S; Scholz, P; Siemens, X; Spitler, L; Stairs, I; Tan, M; van Leeuwen, J; Zhu, W W
2013-01-01
Modern radio pulsar surveys produce a large volume of prospective candidates, the majority of which are polluted by human-created radio frequency interference or other forms of noise. Typically, large numbers of candidates need to be visually inspected in order to determine if they are real pulsars. This process can be labor intensive. In this paper, we introduce an algorithm called PEACE (Pulsar Evaluation Algorithm for Candidate Extraction) which improves the efficiency of identifying pulsar signals. The algorithm ranks the candidates based on a score function. Unlike popular machine-learning based algorithms, no prior training data sets are required. This algorithm has been applied to data from several large-scale radio pulsar surveys. Using the human-based ranking results generated by students in the Arecibo Remote Command Center programme, the statistical performance of PEACE was evaluated. It was found that PEACE ranked 68% of the student-identified pulsars within the top 0.17% of sorted candidates, 95% ...
Comparison of Text Categorization Algorithms
Institute of Scientific and Technical Information of China (English)
SHI Yong-feng; ZHAO Yan-ping
2004-01-01
This paper summarizes several automatic text categorization algorithms in common use, and analyzes and compares their advantages and disadvantages. It provides clues for choosing appropriate automatic classification algorithms in different fields. Finally, evaluations and summaries of these algorithms are discussed, and directions for further research are pointed out.
Comparison of fast discrete wavelet transform algorithms
Institute of Scientific and Technical Information of China (English)
MENG Shu-ping; TIAN Feng-chun; XU Xin
2005-01-01
This paper presents an analysis and experimental comparison of several typical fast algorithms for the discrete wavelet transform (DWT) and their implementation in image compression, particularly the Mallat algorithm, the FFT-based algorithm, the short-length-based algorithm and the lifting algorithm. The principles, structures and computational complexity of these algorithms are explored in detail. The results of the comparison experiments are consistent with those simulated in MATLAB. Some limitations in the implementation of the DWT are identified: some algorithms work only for special wavelet transforms and lack generality. Above all, the speed of the wavelet transform, as the governing element in the speed of image processing, is in fact the retarding factor for real-time image processing.
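Of the fast DWT algorithms this abstract names, the lifting algorithm is the simplest to sketch. The following is an illustrative example only, not code from the paper: it assumes the Haar wavelet (the shortest filter) and an even-length signal, and shows the predict/update structure that gives lifting its in-place, O(n) character.

```python
def haar_lifting_forward(x):
    """One decomposition level of the Haar DWT via lifting:
    predict (detail = odd - even), then update (approx = even + detail/2).
    Assumes len(x) is even; runs in O(n) with no auxiliary filter banks."""
    evens, odds = x[0::2], x[1::2]
    details = [o - e for e, o in zip(evens, odds)]        # predict step
    approx = [e + d / 2 for e, d in zip(evens, details)]  # update step
    return approx, details

def haar_lifting_inverse(approx, details):
    """Undo the two lifting steps in reverse order to reconstruct
    the original signal exactly."""
    evens = [a - d / 2 for a, d in zip(approx, details)]
    odds = [e + d for e, d in zip(evens, details)]
    return [v for pair in zip(evens, odds) for v in pair]
```

Because each lifting step is trivially invertible, perfect reconstruction holds by construction, which is one reason the lifting scheme is attractive compared with convolution-based (Mallat) implementations.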
Evaluation of GPM candidate algorithms on hurricane observations
Le, M.; Chandrasekar, C. V.
2012-12-01
storms and hurricanes. In this paper, the performance of GPM candidate algorithms [2][3] to perform profile classification, melting region detection as well as drop size distribution retrieval for hurricane Earl will be presented. This analysis will be compared with other storm observations that are not tropical storms. The philosophy of the algorithm is based on the vertical characteristic of the measured dual-frequency ratio (DFRm), defined as the difference in measured radar reflectivities at the two frequencies. It helps our understanding of how hurricanes such as Earl form and intensify rapidly. References: [1] T. Iguchi, R. Oki, A. Eric and Y. Furuhama, "Global precipitation measurement program and the development of dual-frequency precipitation radar," J. Commun. Res. Lab. (Japan), 49, 37-45, 2002. [2] M. Le and V. Chandrasekar, "Recent updates on precipitation classification and hydrometeor identification algorithm for GPM-DPR," IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2012, Munich, Germany. [3] M. Le, V. Chandrasekar and S. Lim, "Microphysical retrieval from dual-frequency precipitation radar on board GPM," IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2010, Honolulu, USA.
Institute of Scientific and Technical Information of China (English)
WANG ShunJin; ZHANG Hua
2007-01-01
Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
Comparison Study Of Multiobjective Evolutionary Algorithms
Syomkin, A. M.
2004-01-01
Many real-world problems involve two types of difficulties: 1) multiple, conflicting objectives and 2) a highly complex search space. Efficient evolutionary strategies have been developed to deal with both difficulties. Evolutionary algorithms possess several characteristics, such as parallelism and robustness, that make them preferable to classical optimization methods. In the presented paper, I conducted comparison studies among well-known evolutionary algorithms based on the NP-hard 0-1 multi...
USING HASH BASED APRIORI ALGORITHM TO REDUCE THE CANDIDATE 2- ITEMSETS FOR MINING ASSOCIATION RULE
K. Vanitha
2011-01-01
In this paper we describe an implementation of hash-based Apriori. We analyze, theoretically and experimentally, the principal data structure of our solution. This data structure is the main factor in the efficiency of our implementation. We propose an effective hash-based algorithm for candidate set generation. Explicitly, the number of candidate 2-itemsets generated by the proposed algorithm is orders of magnitude smaller than that of previous methods, thus resolving the performanc...
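The candidate-pruning idea described above can be sketched as follows. This is an illustrative reconstruction in the spirit of hash-based Apriori (not this paper's actual data structure): while scanning transactions, every pair is hashed into a small bucket table, and a pair of frequent items is retained as a candidate 2-itemset only if its bucket's total count reaches the minimum support. The bucket count is hypothetical and, in practice, would be tuned to the data.

```python
from itertools import combinations

def frequent_items(transactions, minsup):
    """First Apriori pass: count single items and keep the frequent ones."""
    counts = {}
    for t in transactions:
        for item in t:
            counts[item] = counts.get(item, 0) + 1
    return {i for i, c in counts.items() if c >= minsup}

def candidate_2_itemsets(transactions, minsup, n_buckets=7):
    """Hash-based pruning of candidate 2-itemsets: a pair's true support
    can never exceed its bucket's count, so any pair whose bucket count
    falls below minsup is safely discarded before support counting."""
    L1 = frequent_items(transactions, minsup)
    buckets = [0] * n_buckets
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % n_buckets] += 1
    return {
        pair
        for pair in combinations(sorted(L1), 2)
        if buckets[hash(pair) % n_buckets] >= minsup
    }
```

Every genuinely frequent pair survives (its bucket count is at least its support), so the pruning is lossless; only infrequent pairs are filtered out early.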
An Algorithm for Selecting QGP Candidate Events from Relativistic Heavy Ion Collision Data Sample
Liu, Lianshou; Chen, Qinghua; Hu, Yuan
1998-01-01
The formation of quark-gluon plasma (QGP) in relativistic heavy ion collisions is expected to be accompanied by a background of ordinary collision events without phase transition. In this short note an algorithm is proposed to select the QGP candidate events from the whole event sample. This algorithm is based on a simple geometrical consideration together with some ordinary QGP signal, e.g. an increase in the $K/\pi$ ratio. The efficiency of this algorithm in raising the 'signal/noise ratio' of QGP events in the selected sub-sample is shown explicitly by Monte-Carlo simulation.
Comparison Study for Clonal Selection Algorithm and Genetic Algorithm
Ezgi Deniz Ulker; Sadık Ulker
2012-01-01
Two metaheuristic algorithms, Artificial Immune Systems (AIS) and Genetic Algorithms (GA), are classified as computational systems inspired by theoretical immunology and genetics mechanisms. In this work we examine the comparative performance of the two algorithms. A special selection algorithm, the Clonal Selection Algorithm (CLONALG), which is a subset of Artificial Immune Systems, and Genetic Algorithms are tested with certain benchmark functions. It is shown that depending on the type of a function ...
International Nuclear Information System (INIS)
Classification of nodule candidates in computer-aided detection (CAD) of lung nodules in CT images was addressed by constructing a nonlinear discriminant function using a kernel-based learning algorithm called the kernel recursive least-squares (KRLS) algorithm. Using the nodule candidates derived by a CAD scheme from 100 CT datasets containing 253 non-calcified nodules of 3 mm or larger, as determined by the consensus of two thoracic radiologists, the following trial was carried out 100 times: by randomly selecting 50 datasets for training, a nonlinear discriminant function was obtained using the nodule candidates in the training datasets and tested with the remaining candidates; for comparison, a rule-based classification was tested in a similar manner. At about 5 false positives per case, the nonlinear classification method showed an improved sensitivity of 80% (mean over the 100 trials) compared with 74% for the rule-based method. (orig.)
The Performance Comparisons between the Unconstrained and Constrained Equalization Algorithms
Institute of Scientific and Technical Information of China (English)
HE Zhong-qiu; LI Dao-ben
2003-01-01
This paper proposes two unconstrained algorithms, the Steepest Descent (SD) algorithm and the Conjugate Gradient (CG) algorithm, based on a superexcellent cost function [1-3]. At the same time, two constrained algorithms, the Constrained Steepest Descent (CSD) algorithm and the Constrained Conjugate Gradient (CCG) algorithm, are derived subject to a new constraint condition. All are implemented in the unitary transform domain. The computational complexities of the constrained algorithms are compared to those of the unconstrained algorithms, and simulations show their performance comparison.
Tradeoffs Between Branch Mispredictions and Comparisons for Sorting Algorithms
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Moruz, Gabriel
) comparisons performs Omega(n log_d n) branch mispredictions. We show that Multiway MergeSort achieves this tradeoff by adopting a multiway merger with a low number of branch mispredictions. For adaptive sorting algorithms we similarly obtain that an algorithm performing O(dn(1 + log(1 + Inv/n))) comparisons must...
Sorting on STAR. [CDC computer algorithm timing comparison
Stone, H. S.
1978-01-01
Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
Triad pattern algorithm for predicting strong promoter candidates in bacterial genomes
Directory of Open Access Journals (Sweden)
Sakanyan Vehary
2008-05-01
Background: Bacterial promoters, which increase the efficiency of gene expression, differ from other promoters by several characteristics. This difference, not yet widely exploited in bioinformatics, looks promising for the development of relevant computational tools to search for strong promoters in bacterial genomes. Results: We describe a new triad pattern algorithm that predicts strong promoter candidates in annotated bacterial genomes by matching specific patterns for the group I σ70 factors of Escherichia coli RNA polymerase. It detects promoter-specific motifs by consecutively matching three patterns, consisting of an UP-element, required for interaction with the α subunit, and then optimally-separated patterns of -35 and -10 boxes, required for interaction with the σ70 subunit of RNA polymerase. Analysis of 43 bacterial genomes revealed that the frequency of candidate sequences depends on the A+T content of the DNA under examination. The accuracy of in silico prediction was experimentally validated for the genome of a hyperthermophilic bacterium, Thermotoga maritima, by applying a cell-free expression assay using the predicted strong promoters. In this organism, the strong promoters govern genes for translation, energy metabolism, transport, cell movement, and other as-yet unidentified functions. Conclusion: The triad pattern algorithm developed for predicting strong bacterial promoters is well suited for analyzing bacterial genomes with an A+T content of less than 62%. This computational tool opens new prospects for investigating global gene expression, and individual strong promoters in bacteria of medical and/or economic significance.
COMPARISON OF LOSSLESS DATA COMPRESSION ALGORITHMS FOR TEXT DATA
Directory of Open Access Journals (Sweden)
U.S. Amarasinghe
2010-12-01
Data compression is a common requirement for most computerized applications. There are a number of data compression algorithms which are dedicated to compressing different data formats, and even for a single data type there are several algorithms that use different approaches. This paper examines lossless data compression algorithms and compares their performance. A set of selected algorithms is implemented and evaluated on text data, and an experimental comparison of a number of different lossless data compression algorithms is presented. The article concludes by stating which algorithm performs well for text data.
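An experimental comparison of this kind can be reproduced in miniature with the lossless compressors shipped in the Python standard library. This is a hedged sketch rather than the paper's benchmark: the sample text, the three chosen algorithms (zlib, bz2, lzma), and the report format are all invented for illustration.

```python
import bz2
import lzma
import zlib

def compare_compressors(text):
    """Compress the same UTF-8 text with three stdlib lossless algorithms
    and return (name, compressed_size, ratio) tuples, best ratio first.
    A lower ratio means better compression; a fuller benchmark would
    also measure compression and decompression time."""
    data = text.encode("utf-8")
    results = []
    for name, compress in (("zlib", zlib.compress),
                           ("bz2", bz2.compress),
                           ("lzma", lzma.compress)):
        out = compress(data)
        results.append((name, len(out), len(out) / len(data)))
    return sorted(results, key=lambda r: r[1])

# Highly repetitive text compresses well under any of the three.
sample = "the quick brown fox jumps over the lazy dog. " * 200
report = compare_compressors(sample)
```

Because each codec makes different space/time trade-offs, the ranking can change with the input's size and redundancy, which is exactly why such comparisons are run on representative corpora.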
Trust Based Algorithm for Candidate Node Selection in Hybrid MANET-DTN
Directory of Open Access Journals (Sweden)
Jan Papaj
2014-01-01
The hybrid MANET-DTN is a mobile network that enables transport of data between groups of disconnected mobile nodes. The network combines the benefits of Mobile Ad-Hoc Networks (MANET) and Delay Tolerant Networks (DTN). The main problem in a MANET occurs when the communication path is broken or disconnected for some short time period. On the other side, DTN allows sending data in a disconnected environment thanks to its higher tolerance to delay. A hybrid MANET-DTN provides an optimal solution for transporting information in emergency situations. Moreover, security is a critical factor because the data are transported by mobile devices. In this paper, we investigate the issue of secure candidate node selection for transporting data in a disconnected environment in a hybrid MANET-DTN. To achieve secure selection of reliable mobile nodes, a trust algorithm is introduced. The algorithm selects reliable nodes based on collected routing information. The algorithm is implemented in the OPNET Modeler simulator.
A systematic comparison of genome-scale clustering algorithms
Jay, Jeremy J.; Eblen, John D; Zhang, Yun; Benson, Mikael; Perkins, Andy D.; Saxton, Arnold M.; Voy, Brynn H.; Elissa J Chesler; Langston, Michael A.
2012-01-01
Background: A wealth of clustering algorithms has been applied to gene co-expression experiments. These algorithms cover a broad range of approaches, from conventional techniques such as k-means and hierarchical clustering, to graphical approaches such as k-clique communities, weighted gene co-expression networks (WGCNA) and paraclique. Comparison of these methods to evaluate their relative effectiveness provides guidance to algorithm selection, development and implementation. Most prior work...
A Comparison of First-order Algorithms for Machine Learning
Wei, Yu; Thomas, Pock
2014-01-01
Using an optimization algorithm to solve a machine learning problem is one of the mainstreams in the field. In this work, we present a comprehensive comparison of some state-of-the-art first-order optimization algorithms for convex optimization problems in machine learning. We concentrate on several smooth and non-smooth machine learning problems with a loss function plus a regularizer. The overall experimental results show the superiority of primal-dual algorithms in solving a mac...
Garg, Poonam
2010-01-01
Genetic algorithms are population-based metaheuristics. They have been successfully applied to many optimization problems. However, premature convergence is an inherent characteristic of classical genetic algorithms that makes them incapable of searching numerous solutions of the problem domain. A memetic algorithm is an extension of the traditional genetic algorithm that uses a local search technique to reduce the likelihood of premature convergence. The cryptanalysis of the simplified Data Encryption Standard (SDES) can be formulated as an NP-hard combinatorial problem. In this paper, a comparison between a memetic algorithm and a genetic algorithm was made in order to investigate their performance for the cryptanalysis of SDES. The methods were tested, and various experimental results show that the memetic algorithm performs better than the genetic algorithm for this type of NP-hard combinatorial problem. This paper represents our first effort toward an efficient memetic algo...
Comparison of greedy algorithms for α-decision tree construction
Alkhalid, Abdulaziz
2011-01-01
A comparison of different heuristics used by greedy algorithms that construct approximate decision trees (α-decision trees) is presented. The comparison is conducted using decision tables based on 24 data sets from the UCI Machine Learning Repository [2]. Complexity of decision trees is estimated relative to several cost functions: depth, average depth, number of nodes, number of nonterminal nodes, and number of terminal nodes. Costs of trees built by greedy algorithms are compared with minimum costs calculated by an algorithm based on dynamic programming. The results of the experiments assign to each cost function a set of potentially good heuristics that minimize it. © 2011 Springer-Verlag.
A comparison of heuristic search algorithms for molecular docking.
Westhead, D R; Clark, D E; Murray, C W
1997-05-01
This paper describes the implementation and comparison of four heuristic search algorithms (genetic algorithm, evolutionary programming, simulated annealing and tabu search) and a random search procedure for flexible molecular docking. To our knowledge, this is the first application of the tabu search algorithm in this area. The algorithms are compared using a recently described fast molecular recognition potential function and a diverse set of five protein-ligand systems. Statistical analysis of the results indicates that overall the genetic algorithm performs best in terms of the median energy of the solutions located. However, tabu search shows a better performance in terms of locating solutions close to the crystallographic ligand conformation. These results suggest that a hybrid search algorithm may give superior results to any of the algorithms alone. PMID:9263849
Directory of Open Access Journals (Sweden)
Ait-Ali Lamia
2011-11-01
Background: To propose a new diagnostic algorithm for candidates for Fontan and identify those who can skip cardiac catheterization (CC). Methods: Forty-four candidates for Fontan (median age 4.8 years, range 2-29 years) were prospectively evaluated by trans-thoracic echocardiography (TTE), cardiovascular magnetic resonance (CMR) and CC. Before CC, according to clinical, echo and CMR findings, patients were divided into two groups: Group I comprised 18 patients deemed suitable for Fontan without requiring CC; Group II comprised 26 patients indicated for CC either in order to detect more details or for interventional procedures. Results: In Group I ("CC not required") no unexpected new information affecting surgical planning was provided by CC. Conversely, in Group II new information was provided by CC in three patients (0 vs 11.5%, p = 0.35), and in six an interventional procedure was performed. During CC, minor complications occurred in one patient from Group I and in three from Group II (6 vs 14%, p = 0.7). Radiation dose-area product was similar in the two groups (median 20 Gy·cm², range 5-40, vs 26.5 Gy·cm², range 9-270; p = 0.37). All 18 Group I patients and 19 Group II patients underwent a total cavo-pulmonary anastomosis; of the remaining seven Group II patients, four were excluded from Fontan, two are awaiting Fontan, and one refused the intervention. Conclusion: In this paper we propose a new diagnostic algorithm in a pre-Fontan setting. An accurate non-invasive evaluation comprising TTE and CMR can select patients who can skip CC.
Comparison of fuzzy connectedness and graph cut segmentation algorithms
Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Falcão, A. X.; Miranda, P. A. V.
2011-03-01
The goal of this paper is a theoretical and experimental comparison of two popular image segmentation algorithms: fuzzy connectedness (FC) and graph cut (GC). On the theoretical side, our emphasis is on describing a common framework in which both of these methods can be expressed. We give a full analysis of the framework and describe precisely the place that each of the two methods occupies in it. Within the same framework, other region-based segmentation methods, like watershed, can also be expressed. We also discuss in detail the relationship between FC segmentations obtained via image foresting transform (IFT) algorithms and FC segmentations obtained by other standard versions of FC algorithms. Finally, we present an experimental comparison of the performance of the FC and GC algorithms, concentrating on the actual (as opposed to provable worst-case) running times, as well as the influence of the choice of seeds on the output.
A Comparison of learning algorithms on the Arcade Learning Environment
Defazio, Aaron; Graepel, Thore
2014-01-01
Reinforcement learning agents have traditionally been evaluated on small toy problems. With advances in computing power and the advent of the Arcade Learning Environment, it is now possible to evaluate algorithms on diverse and difficult problems within a consistent framework. We discuss some challenges posed by the arcade learning environment which do not manifest in simpler environments. We then provide a comparison of model-free, linear learning algorithms on this challenging problem set.
The DCA:SOMe Comparison A comparative study between two biologically-inspired algorithms
Greensmith, Julie; Aickelin, Uwe
2010-01-01
The Dendritic Cell Algorithm (DCA) is an immune-inspired algorithm, developed for the purpose of anomaly detection. The algorithm performs multi-sensor data fusion and correlation which results in a 'context aware' detection system. Previous applications of the DCA have included the detection of potentially malicious port scanning activity, where it has produced high rates of true positives and low rates of false positives. In this work we aim to compare the performance of the DCA and of a Self-Organizing Map (SOM) when applied to the detection of SYN port scans, through experimental analysis. A SOM is an ideal candidate for comparison as it shares similarities with the DCA in terms of the data fusion method employed. It is shown that the results of the two systems are comparable, and both produce false positives for the same processes. This shows that the DCA can produce anomaly detection results to the same standard as an established technique.
Does a Least-Preferred Candidate Win a Seat? A Comparison of Three Electoral Systems
Directory of Open Access Journals (Sweden)
Yoichi Hizen
2015-01-01
In this paper, the differences between two variations of proportional representation (PR), open-list PR and closed-list PR, are analyzed in terms of their ability to accurately reflect voter preference. The single nontransferable vote (SNTV) is also included in the comparison as a benchmark. We construct a model of voting equilibria with a candidate who is least preferred by voters, in the sense that replacing the least-preferred candidate in the set of winners with any loser is Pareto improving, and our focus is on whether the least-preferred candidate wins under each electoral system. We demonstrate that the least-preferred candidate never wins under the SNTV, but can win under open-list PR, although this is less likely than winning under closed-list PR.
Directory of Open Access Journals (Sweden)
Amin Mubark Alamin Ibrahim; Mustafa Elgili Mustafa
2015-04-01
Matching or searching text is an important topic in the field of computer science and is used in many programs, such as the spelling-correction and search-and-replace features of Microsoft Word, among other uses. The aim of this study is to compare text-matching algorithms, illustrated here with the Horspool and Brute Force algorithms, according to the number of character comparisons and the execution time. The study found the Horspool algorithm preferable.
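The two algorithms compared in this study can be sketched with explicit comparison counters, which makes the trade-off the study measures directly observable. This is an illustrative implementation, not the authors' code; the sample text in the usage line is invented.

```python
def brute_force_search(text, pattern):
    """Check every alignment left to right, comparing pattern characters
    until a mismatch. Returns (match_index, comparisons) or (-1, comparisons)."""
    comparisons = 0
    for i in range(len(text) - len(pattern) + 1):
        for j in range(len(pattern)):
            comparisons += 1
            if text[i + j] != pattern[j]:
                break
        else:
            return i, comparisons
    return -1, comparisons

def horspool_search(text, pattern):
    """Boyer-Moore-Horspool: compare right to left; on a mismatch, shift
    the window by a precomputed distance keyed on the last text character
    of the window, often skipping several alignments at once."""
    m, n = len(pattern), len(text)
    shift = {c: m for c in set(text)}          # default: shift a full pattern
    for j, c in enumerate(pattern[:-1]):
        shift[c] = m - 1 - j                   # distance from last occurrence
    comparisons = 0
    i = 0
    while i <= n - m:
        j = m - 1
        while j >= 0:
            comparisons += 1
            if text[i + j] != pattern[j]:
                break
            j -= 1
        if j < 0:
            return i, comparisons
        i += shift[text[i + m - 1]]
    return -1, comparisons

# On text containing few pattern characters, Horspool skips most positions.
text = "abcd" * 50 + "xyz"
print(brute_force_search(text, "xyz"), horspool_search(text, "xyz"))
```

On such inputs Horspool performs far fewer character comparisons than the brute-force scan, matching the preference this study reports, although its worst case is still O(mn).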
Reranking candidate gene models with cross-species comparison for improved gene prediction
Directory of Open Access Journals (Sweden)
Pereira Fernando CN
2008-10-01
Background: Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results: We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely-related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion: Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models.
BRASERO: A Resource for Benchmarking RNA Secondary Structure Comparison Algorithms.
Allali, Julien; Saule, Cédric; Chauve, Cédric; d'Aubenton-Carafa, Yves; Denise, Alain; Drevet, Christine; Ferraro, Pascal; Gautheret, Daniel; Herrbach, Claire; Leclerc, Fabrice; de Monte, Antoine; Ouangraoua, Aida; Sagot, Marie-France; Termier, Michel; Thermes, Claude; Touzet, Hélène
2012-01-01
The pairwise comparison of RNA secondary structures is a fundamental problem, with direct application in mining databases for annotating putative noncoding RNA candidates in newly sequenced genomes. An increasing number of software tools are available for comparing RNA secondary structures, based on different models (such as ordered trees or forests, arc annotated sequences, and multilevel trees) and computational principles (edit distance, alignment). We describe here the website BRASERO that offers tools for evaluating such software tools on real and synthetic datasets. PMID:22675348
Comparison of face Recognition Algorithms on Dummy Faces
Directory of Open Access Journals (Sweden)
Aruni Singh
2012-09-01
In an age of rising crime, face recognition is enormously important in the contexts of computer vision, psychology, surveillance, fraud detection, pattern recognition, neural networks, content-based video processing, etc. The face is a non-intrusive, strong biometric for identification, and criminals therefore try to hide their facial features by artificial means such as plastic surgery, disguises and dummies. The availability of a comprehensive face database is crucial for testing the performance of face recognition algorithms. However, while existing publicly-available face databases contain face images with a wide variety of poses, illuminations, gestures and face occlusions, no dummy face database is available in the public domain. The contributions of this research paper are: (i) preparation of a dummy face database of 110 subjects; (ii) comparison of some texture-based, feature-based and holistic face recognition algorithms on that dummy face database; (iii) critical analysis of these types of algorithms on the dummy face database.
Amin Mubark Alamin Ibrahim; Mustafa Elgili Mustafa
2015-01-01
The subject of text matching, or searching within texts, is an important topic in the field of computer science and is used in many programs, such as Microsoft Word's spell checking and search-and-replace, among other uses. The aim of this study was to compare the trade-offs between text matching algorithms, applied here to the Horspool and Brute Force algorithms, according to the standard measures of number of comparisons and execution time. The study pointed on prefer...
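The Horspool algorithm the study compares against brute force can be sketched in a few lines; this is a standard textbook rendering, not the authors' implementation, and their comparison-counting conventions may differ:

```python
# Boyer-Moore-Horspool substring search: precompute a bad-character shift
# table from the pattern, then slide the pattern over the text, skipping
# positions that the last window character rules out.
def horspool(text: str, pattern: str) -> int:
    """Return the index of the first occurrence of pattern in text, or -1."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    # Shift table: distance from a character's last occurrence in
    # pattern[:-1] to the end of the pattern; unseen characters shift by m.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        i += shift.get(text[i + m - 1], m)
    return -1
```

Brute force, by contrast, always advances by one position, which is where the difference in comparison counts comes from.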
Criteria for comparison of synchronization algorithms for spatially separated time and frequency measures
Koval, Yuriy; Kostyrya, Alexander; Pryimak, Viacheslav; Al-Tvezhri, Basim
2012-01-01
The paper describes the role, and gives a classification, of synchronization algorithms for spatially separated time and frequency measures. Criteria for comparing the algorithms are introduced and considered using the example of one of the algorithms.
Comparison of machine learning algorithms for detecting coral reef
Directory of Open Access Journals (Sweden)
Eduardo Tusa
2014-09-01
Full Text Available (Received: 2014/07/31 - Accepted: 2014/09/23) This work focuses on developing a fast coral reef detector for an autonomous underwater vehicle (AUV). Fast detection secures the stabilization of the AUV with respect to an area of reef as quickly as possible and prevents devastating collisions. We use the algorithm of Purser et al. (2009) because of its precision. This detector has two parts: feature extraction, which uses Gabor wavelet filters, and feature classification, which uses machine learning based on neural networks. Due to the extensive running time of the neural networks, we exchange them for a classification algorithm based on decision trees. We use a database of 621 images of coral reef in Belize (110 images for training and 511 images for testing). We implement the bank of Gabor wavelet filters using C++ and the OpenCV library. We compare the accuracy and running time of 9 machine learning algorithms, which resulted in the selection of the Decision Trees algorithm. Our coral detector runs in 70 ms, compared to the 22 s taken by the algorithm of Purser et al. (2009).
Parallel Branch and Bound Algorithm - A comparison between serial, OpenMP and MPI implementations
International Nuclear Information System (INIS)
This paper presents a comparison of an extended version of the regular Branch and Bound algorithm previously implemented in serial with a new parallel implementation, using both MPI (distributed memory parallel model) and OpenMP (shared memory parallel model). The branch-and-bound algorithm is an enumerative optimization technique, where finding a solution to a mixed integer programming (MIP) problem is based on the construction of a tree where nodes represent candidate problems and branches represent the new restrictions to be considered. Through this tree all integer solutions of the feasible region of the problem are listed explicitly or implicitly ensuring that all the optimal solutions will be found. A common approach to solve such problems is to convert sub-problems of the mixed integer problem to linear programming problems, thereby eliminating some of the integer constraints, and then trying to solve that problem using an existing linear program approach. The paper describes the general branch and bound algorithm used and provides details on the implementation and the results of the comparison.
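The node/bound/branch scheme described above can be sketched on a tiny mixed integer problem; the 0/1 knapsack below stands in for the MIP, with the fractional (LP-relaxed) knapsack as the bound. This is a generic illustration of the technique, not the paper's serial or parallel implementation:

```python
# Branch and bound for a 0/1 knapsack: nodes are partial assignments,
# the bound comes from the LP relaxation (fractional knapsack), and each
# branch fixes the next item as taken or skipped.
def knapsack_bb(values, weights, capacity):
    n = len(values)
    # Consider items in decreasing value density; this tightens the bound.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def lp_bound(k, value, room):
        # LP-relaxed completion of a node: greedy fill, last item fractional.
        for i in order[k:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    best = 0

    def branch(k, value, room):
        nonlocal best
        if k == n:
            best = max(best, value)
            return
        if lp_bound(k, value, room) <= best:
            return                              # prune: bound cannot beat incumbent
        i = order[k]
        if weights[i] <= room:                  # branch 1: take item i
            branch(k + 1, value + values[i], room - weights[i])
        branch(k + 1, value, room)              # branch 2: skip item i

    branch(0, 0, capacity)
    return best
```

The two recursive calls are exactly the independent subtrees that the OpenMP and MPI versions can explore in parallel.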
Selection of candidate plus phenotypes of Jatropha curcas L. using method of paired comparisons
Energy Technology Data Exchange (ETDEWEB)
Mishra, D.K. [Silviculture Division, Arid Forest Research Institute, P.O. Krishi Mandi, New Pali Road, Jodhpur 342005, Rajasthan (India)
2009-03-15
Jatropha curcas L. (Euphorbiaceae) is an oil bearing species with multiple uses and considerable potential as a biodiesel crop. The present communication deals with the method of selecting plus phenotypes of J. curcas for exploiting genetic variability for further improvement. Candidate plus tree selection is the first and most important stage in any tree improvement programme. The selection of candidate plus plants (CPPs) is based upon various important attributes associated with the species and their relative ranking. Relative preference between various traits and scoring for each trait has been worked out by using the method of paired comparisons for the selection of CPP in J. curcas L. The most important ones are seed and oil yields. (author)
Comparison of evolutionary algorithms in gene regulatory network model inference
Directory of Open Access Journals (Sweden)
Crane Martin
2010-01-01
Full Text Available Abstract Background The evolution of high throughput technologies that measure gene expression levels has created a data base for inferring GRNs (a process also known as reverse engineering of GRNs. However, the nature of these data has made this process very difficult. At the moment, several methods of discovering qualitative causal relationships between genes with high accuracy from microarray data exist, but large scale quantitative analysis on real biological datasets cannot be performed, to date, as existing approaches are not suitable for real microarray data which are noisy and insufficient. Results This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and offer a comprehensive comparison of approaches, under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. Conclusions Presented is a comparison framework for assessment of evolutionary algorithms, used to infer gene regulatory networks. Promising methods are identified and a platform for development of appropriate model formalisms is established.
Comparison of evolutionary algorithms in gene regulatory network model inference.
LENUS (Irish Health Repository)
2010-01-01
ABSTRACT: BACKGROUND: The evolution of high throughput technologies that measure gene expression levels has created a data base for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods of discovering qualitative causal relationships between genes with high accuracy from microarray data exist, but large scale quantitative analysis on real biological datasets cannot be performed, to date, as existing approaches are not suitable for real microarray data which are noisy and insufficient. RESULTS: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and offer a comprehensive comparison of approaches, under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. CONCLUSIONS: Presented is a comparison framework for assessment of evolutionary algorithms, used to infer gene regulatory networks. Promising methods are identified and a platform for development of appropriate model formalisms is established.
Detecting protein candidate fragments using a structural alphabet profile comparison approach.
Shen, Yimin; Picord, Géraldine; Guyon, Frédéric; Tuffery, Pierre
2013-01-01
Predicting accurate fragments from sequence has recently become a critical step for protein structure modeling, as protein fragment assembly techniques are presently among the most efficient approaches for de novo prediction. A key step in these approaches is, given the sequence of a protein to model, the identification of relevant fragments - candidate fragments - from a collection of the available 3D structures. These fragments can then be assembled to produce a model of the complete structure of the protein of interest. The search for candidate fragments is classically achieved by considering local sequence similarity using profile comparison, or threading approaches. In the present study, we introduce a new profile comparison approach that, instead of using amino acid profiles, is based on the use of predicted structural alphabet profiles, where structural alphabet profiles contain information related to the 3D local shapes associated with the sequences. We show that structural alphabet profile-profile comparison can be used efficiently to retrieve accurate structural fragments, and we introduce a fully new protocol for the detection of candidate fragments. It identifies fragments specific to each position of the sequence, of size varying between 6 and 27 amino acids. We find it outperforms present state-of-the-art approaches in terms of (i) the accuracy of the fragments identified and (ii) the rate of true positives identified, while having a high coverage score. We illustrate the relevance of the approach on the complete target sets of the two previous Critical Assessment of Techniques for Protein Structure Prediction (CASP) rounds 9 and 10. A web server for the approach is freely available at http://bioserv.rpbs.univ-paris-diderot.fr/SAFrag. PMID:24303019
Web Data Extraction Using Tree Structure Algorithms – A Comparison
Directory of Open Access Journals (Sweden)
Ms. Seema Kolkur
2013-07-01
Full Text Available Nowadays, Web pages provide a large amount of structured data, which is required by many advanced applications. This data can be searched through Web query interfaces and is also called 'deep' or 'hidden' data. The deep data is enwrapped in Web pages in the form of data records. These special Web pages are generated dynamically and presented to users in the form of HTML documents along with other content. These web pages can be a virtual gold mine of information for business if mined effectively. Web Data Extraction systems, or web wrappers, are software applications for extracting information from Web sources such as Web pages. A Web Data Extraction system usually interacts with a Web source and extracts data stored in it. The extracted data is converted into the most convenient structured format and stored for further usage. This paper deals with the development of such a wrapper, which takes search engine result pages as input and converts them into a structured format. Secondly, this paper proposes a new algorithm, called the Improved Tree Matching algorithm, which in turn is based on the efficient Simple Tree Matching (STM) algorithm. Towards the end of this work, a comparison with existing works is given. Experimental results show that this approach can extract web data with lower complexity compared to other existing approaches.
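The Simple Tree Matching routine that the proposed algorithm builds on is a short recursion over ordered trees; the sketch below is the classic textbook form (trees encoded as `(label, children)` tuples, an encoding chosen here for illustration, not the paper's):

```python
# Simple Tree Matching (STM): size of the maximum ordered matching
# between two labelled trees, computed by an edit-distance-style DP
# over the two root-child sequences.
def stm(a, b):
    """a and b are (label, [children]) tuples; returns the matching size."""
    if a[0] != b[0]:
        return 0                              # differing roots never match
    ka, kb = a[1], b[1]
    # m[i][j]: best matching of the first i children of a with first j of b.
    m = [[0] * (len(kb) + 1) for _ in range(len(ka) + 1)]
    for i in range(1, len(ka) + 1):
        for j in range(1, len(kb) + 1):
            m[i][j] = max(m[i - 1][j], m[i][j - 1],
                          m[i - 1][j - 1] + stm(ka[i - 1], kb[j - 1]))
    return 1 + m[len(ka)][len(kb)]
```

Applied to DOM trees of result pages, a high STM score flags structurally repeated data records.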
Directory of Open Access Journals (Sweden)
Saira Beg
2011-11-01
Full Text Available This paper presents a performance evaluation of the Bionomic Algorithm (BA) for the Shortest Path Finding (SPF) problem, compared with the performance of the Genetic Algorithm (GA) for the same problem. SPF is a classical problem with many applications in networks, robotics, electronics, etc. The SPF problem has been solved using different algorithms, such as Dijkstra's algorithm and the Floyd algorithm, as well as GA, Neural Networks (NN), Tabu Search (TS), and Ant Colony Optimization (ACO). We have employed the Bionomic Algorithm for solving the SPF problem and give a performance comparison of BA vs. GA for the same problem. Simulation results, obtained using MATLAB, are presented at the end.
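Dijkstra's algorithm, the exact baseline the metaheuristics above are measured against, fits in a few lines with a priority queue (graph encoding is illustrative):

```python
import heapq

# Dijkstra's shortest-path algorithm over a weighted digraph given as
# {node: [(neighbor, weight), ...]}; returns distances from the source.
def dijkstra(graph, source):
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue                          # stale heap entry, skip
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd                  # relax edge (u, v)
                heapq.heappush(heap, (nd, v))
    return dist
```

Metaheuristics such as BA and GA trade this optimality guarantee for flexibility on constrained or dynamic variants of SPF.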
Comparison of depletion algorithms for large systems of nuclides
International Nuclear Information System (INIS)
In this work five algorithms for solving the system of decay and transmutation equations with constant reaction rates encountered in burnup calculations were compared. These are Chebyshev rational approximation method (CRAM), which is a new matrix exponential method, the matrix exponential power series with instant decay and a secular equilibrium approximations for short-lived nuclides, which is used in ORIGEN, and three different variants of transmutation trajectory analysis (TTA), which is also known as the linear chains method. The common feature of these methods is their ability to deal with thousands of nuclides and reactions. Consequently, there is no need to simplify the system of equations and all nuclides can be accounted for explicitly. The methods were compared in single depletion steps using decay and cross-section data taken from the default ORIGEN libraries. Very accurate reference solutions were obtained from a high precision TTA algorithm. The results from CRAM and TTA were found to be very accurate. While ORIGEN was not as accurate, it should still be sufficient for most purposes. All TTA variants are much slower than the other two, which are so fast that their running time should be negligible in most, if not all, applications. The combination of speed and accuracy makes CRAM the clear winner of the comparison.
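The two viewpoints compared above can be illustrated on a toy two-nuclide chain A -> B -> (stable): the matrix exponential N(t) = exp(Mt) N(0), evaluated here by a plain truncated Taylor series (a stand-in for the far more robust CRAM evaluation), against the analytic Bateman solution that TTA generalises to whole chain sets. The decay constants are illustrative values, not library data:

```python
import math

def expm_series(m, t, terms=60):
    """exp(m*t) for a small matrix m (list of lists), truncated Taylor series."""
    n = len(m)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        # term <- term * m * t / k, accumulating the k-th Taylor term
        term = [[sum(term[i][l] * m[l][j] * t / k for l in range(n))
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

lam_a, lam_b, t, n0 = 2.0, 0.5, 1.0, 1.0
m = [[-lam_a, 0.0],
     [lam_a, -lam_b]]                  # burnup matrix of the chain A -> B
e = expm_series(m, t)
na_series, nb_series = e[0][0] * n0, e[1][0] * n0

# Bateman solution of the same single linear chain (the TTA building block):
na_bateman = n0 * math.exp(-lam_a * t)
nb_bateman = n0 * lam_a / (lam_b - lam_a) * (math.exp(-lam_a * t) - math.exp(-lam_b * t))
```

On a full burnup matrix with thousands of nuclides and decay constants spanning many orders of magnitude, the plain Taylor series fails numerically, which is precisely the problem CRAM's rational approximation addresses.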
A benchmark for comparison of dental radiography analysis algorithms.
Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia
2016-07-01
Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical usage. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray images and two automatic methods for detecting caries in bitewing radiographs have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/). PMID:26974042
Comparison of New Multilevel Association Rule Algorithm with MAFIA
Arpna Shrivastava; Jain, R. C.; Ajay Kumar Shrivastava
2014-01-01
Multilevel association rules provide more precise and specific information. The Apriori algorithm is an established algorithm for finding association rules. A fast Apriori implementation is modified to develop a new algorithm for finding frequent item sets and mining multilevel association rules. MAFIA is another established algorithm for finding frequent item sets. In this paper, the performance of this new algorithm is analyzed and compared with the MAFIA algorithm.
Comparison of New Multilevel Association Rule Algorithm with MAFIA
Directory of Open Access Journals (Sweden)
Arpna Shrivastava
2014-10-01
Full Text Available Multilevel association rules provide more precise and specific information. The Apriori algorithm is an established algorithm for finding association rules. A fast Apriori implementation is modified to develop a new algorithm for finding frequent item sets and mining multilevel association rules. MAFIA is another established algorithm for finding frequent item sets. In this paper, the performance of this new algorithm is analyzed and compared with the MAFIA algorithm.
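The frequent-itemset core that both Apriori and MAFIA address can be sketched as a level-wise search; this minimal version omits Apriori's subset-pruning step and is a generic illustration, not the modified implementation compared in the paper:

```python
from itertools import combinations

# Minimal Apriori sketch: generate candidate itemsets level by level,
# keeping only those whose support meets the threshold.
def apriori(transactions, min_support):
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(itemset <= t for t in transactions)

    frequent = {}
    level = [frozenset([i]) for i in sorted(items)]   # 1-itemset candidates
    k = 1
    while level:
        level = [s for s in level if support(s) >= min_support]
        frequent.update({s: support(s) for s in level})
        # Candidate generation: join frequent k-itemsets into (k+1)-itemsets.
        candidates = {a | b for a, b in combinations(level, 2)
                      if len(a | b) == k + 1}
        level, k = list(candidates), k + 1
    return frequent
```

Multilevel mining reruns this search at each level of an item taxonomy with level-specific support thresholds, which is where the new algorithm and MAFIA differ in cost.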
Comparison of Adhesion and Retention Forces for Two Candidate Docking Seal Elastomers
Hartzler, Brad D.; Panickar, Marta B.; Wasowski, Janice L.; Daniels, Christopher C.
2011-01-01
To successfully mate two pressurized vehicles or structures in space, advanced seals are required at the interface to prevent the loss of breathable air to the vacuum of space. A critical part of the development testing of candidate seal designs was a verification of the integrity of the retaining mechanism that holds the silicone seal component to the structure. Failure to retain the elastomer seal during flight could liberate seal material in the event of high adhesive loads during undocking. This work presents an investigation of the force required to separate the elastomer from its metal counter-face surface during simulated undocking as well as a comparison to that force which was necessary to destructively remove the elastomer from its retaining device. Two silicone elastomers, Wacker 007-49524 and Esterline ELASA-401, were evaluated. During the course of the investigation, modifications were made to the retaining devices to determine if the modifications improved the force needed to destructively remove the seal. The tests were completed at the expected operating temperatures of -50, +23, and +75 C. Under the conditions investigated, the comparison indicated that the adhesion between the elastomer and the metal counter-face was significantly less than the force needed to forcibly remove the elastomer seal from its retainer, and no failure would be expected.
Performance Comparison Of Evolutionary Algorithms For Image Clustering
Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.
2014-09-01
Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performance has been scarcely studied using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (i.e., the Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, FCM, SOM networks) have been used to cluster images, and their performances have been compared using four clustering validation indexes. Experimental test results showed that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.
Comparison of Load Balancing and Scheduling Algorithms in Cloud Environment
Karthika M T,; Neethu Kurian,; Mariya Seby,
2013-01-01
The importance of cloud computing is increasing nowadays. Cloud computing is used for the delivery of hosted services, such as reliable, fault-tolerant and scalable infrastructure, over the Internet. A variety of algorithms is used in the cloud environment for scheduling and load balancing, thereby reducing the total cost. The main algorithms usually used include the optimal cloud resource provisioning (OCRP) algorithm and the hybrid cloud optimized cost (HCOC) scheduling algorithm. These algorithms will formul...
Performance Comparison of Adaptive Algorithms for Adaptive line Enhancer
Sanjeev Kumar Dhull; Sandeep K. Arya; O. P. Sahu
2011-01-01
We have designed and simulated an adaptive line enhancer (ALE) system for conferencing. This system is based upon the least-mean-square (LMS) and recursive least-squares (RLS) adaptive algorithms. The performance of the ALE is compared for the LMS and RLS algorithms.
A comparison of performance measures for online algorithms
DEFF Research Database (Denmark)
Boyar, Joan; Irani, Sandy; Larsen, Kim Skak
balance greediness and adaptability. We examine how these measures evaluate the Greedy Algorithm and Lazy Double Coverage, commonly studied algorithms in the context of server problems. We examine Competitive Analysis, the Max/Max Ratio, the Random Order Ratio, Bijective Analysis and Relative Worst Order Analysis, and determine how they compare the two algorithms. We find that by the Max/Max Ratio and Bijective Analysis, Greedy is the better algorithm. Under the other measures, Lazy Double Coverage is better, though Relative Worst Order Analysis indicates that Greedy is sometimes better. Our results also provide the first proof of optimality of an algorithm under Relative Worst Order Analysis.
Poonam Garg
2010-01-01
Genetic algorithms are population-based metaheuristics. They have been successfully applied to many optimization problems. However, premature convergence is an inherent characteristic of such classical genetic algorithms that makes them incapable of searching numerous solutions of the problem domain. A memetic algorithm is an extension of the traditional genetic algorithm. It uses a local search technique to reduce the likelihood of premature convergence. The cryptanalysis of simplifie...
Directory of Open Access Journals (Sweden)
DURUSU, A.
2014-08-01
Full Text Available Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they make the solar panels operate at the maximum power point (MPP) whatever the changes in environmental conditions are. For this reason, they take an important place in increasing PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of MPPT algorithms have been proposed in the literature. Comparisons of MPPT algorithms in the literature have been made with sun-simulator-based test systems under laboratory conditions for short durations. In this study, however, the performances of the four most commonly used MPPT algorithms are compared under real environmental conditions over longer periods. A dual identical experimental setup is designed to compare two of the considered MPPT algorithms synchronously. As a result of this study, the ranking among these algorithms is presented, and the results show that the Incremental Conductance (IC) algorithm gives the best performance.
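One decision step of the Incremental Conductance algorithm singled out above can be sketched from its defining condition: at the MPP, dP/dV = 0, equivalently dI/dV = -I/V. The voltage step size and tolerance below are illustrative parameters, not values from the study:

```python
# One Incremental Conductance (IC) MPPT step: compare the incremental
# conductance dI/dV with the instantaneous conductance -I/V and move the
# operating voltage toward the point where they are equal (dP/dV == 0).
def ic_step(v, i, v_prev, i_prev, v_step=0.1, eps=1e-6):
    """Return the next reference voltage for the PV operating point."""
    dv, di = v - v_prev, i - i_prev
    if abs(dv) < eps:                 # voltage unchanged: watch the current
        if abs(di) < eps:
            return v                  # at the MPP, hold
        return v + v_step if di > 0 else v - v_step
    g = di / dv + i / v               # dI/dV + I/V, proportional to dP/dV
    if abs(g) < eps:
        return v                      # dP/dV == 0: at the MPP, hold
    return v + v_step if g > 0 else v - v_step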
Comparison between various beam steering algorithms for the CEBAF lattice
International Nuclear Information System (INIS)
In this paper we describe a comparative study performed to evaluate various beam steering algorithms for the CEBAF lattice. The first approach that was evaluated used a Singular Value Decomposition (SVD) based algorithm to determine the corrector magnet settings for various regions of the CEBAF lattice. The second studied algorithm is known as PROSAC (Projective RMS Orbit Subtraction And Correction). This algorithm was developed at TJNAF to support the commissioning activity. The third set of algorithms tested is known as COCU (CERN Orbit Correction Utility), which is a production steering package used at CERN. A program simulating a variety of errors such as misalignment, BPM offset, etc. was used to generate test inputs for these three sets of algorithms. Conclusions of this study are presented in this paper. copyright 1997 American Institute of Physics
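The SVD-based steering idea can be sketched directly: given measured BPM readings and a response matrix R (orbit change per unit corrector kick), the corrector settings that minimise the RMS orbit follow from the pseudoinverse of R. The numbers below are illustrative, not CEBAF optics:

```python
import numpy as np

# SVD-based orbit correction sketch: kicks = -pinv(R) @ orbit is the
# least-squares correction minimising the residual RMS orbit.
R = np.array([[2.0, 0.5],
              [1.0, 1.5],
              [0.2, 1.0]])            # 3 BPMs x 2 correctors (illustrative)
orbit = np.array([1.0, -0.5, 0.3])    # measured BPM readings

U, s, Vt = np.linalg.svd(R, full_matrices=False)
kicks = -Vt.T @ np.diag(1.0 / s) @ U.T @ orbit   # least-squares correction
residual = orbit + R @ kicks                      # orbit after correction
```

In practice small singular values are truncated before inverting, which regularises the correction against BPM noise.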
Comparison between various beam steering algorithms for the CEBAF lattice
International Nuclear Information System (INIS)
In this paper we describe a comparative study performed to evaluate various beam steering algorithms for the CEBAF lattice. The first approach that was evaluated used a Singular Value Decomposition (SVD) based algorithm to determine the corrector magnet settings for various regions of the CEBAF lattice. The second studied algorithm is known as PROSAC (Projective RMS Orbit Subtraction And Correction). This algorithm was developed at TJNAF to support the commissioning activity. The third set of algorithms tested is known as COCU (CERN Orbit Correction Utility), which is a production steering package used at CERN. A program simulating a variety of errors such as misalignment, BPM offset, etc. was used to generate test inputs for these three sets of algorithms. Conclusions of this study are presented in this paper
The Comparison and Application of Corner Detection Algorithms
Jie Chen; Li-hui Zou; Juan Zhang; Li-hua Dou
2009-01-01
Corners in images represent a lot of important information. Extracting corners accurately is significant for image processing, as it can reduce much of the calculation. In this paper, two widely used corner detection algorithms, the SUSAN and Harris corner detection algorithms, which are both intensity-based, were compared quantitatively for stability, noise immunity and complexity via a stability factor η, an anti-noise factor ρ and the runtime of each algorithm. It was concluded that the Harris corner ...
Comparison of two global digital algorithms for Minkowski tensor estimation
DEFF Research Database (Denmark)
Christensen, Sabrina Tang; Kiderlen, Markus
2016-01-01
The geometry of real world objects can be described by Minkowski tensors. Algorithms have been suggested to approximate Minkowski tensors if only a binary image of the object is available. This paper presents implementations of two such algorithms. The theoretical convergence properties are confirmed by simulations on test sets, and recommendations for input arguments of the algorithms are given. For increasing resolutions, we obtain more accurate estimators for the Minkowski tensors. Digitisations of more complicated objects are shown to require higher resolutions.
The Comparison and Application of Corner Detection Algorithms
Directory of Open Access Journals (Sweden)
Jie Chen
2009-12-01
Full Text Available Corners in images represent a lot of important information. Extracting corners accurately is significant for image processing, as it can reduce much of the calculation. In this paper, two widely used corner detection algorithms, the SUSAN and Harris corner detection algorithms, which are both intensity-based, were compared quantitatively for stability, noise immunity and complexity via a stability factor η, an anti-noise factor ρ and the runtime of each algorithm. It was concluded that the Harris corner detection algorithm was superior to the SUSAN corner detection algorithm on the whole. Moreover, the SUSAN and Harris detection algorithms were improved by selecting an adaptive gray-difference threshold and by changing directional differentials, respectively, and compared using these three criteria. In addition, the SUSAN and Harris corner detectors were applied to an image matching experiment. It was verified that the quantitative evaluations of the corner detection algorithms were valid by calculating match efficiency, defined as the number of correctly matched corner pairs divided by matching time, which can reflect the performance of a corner detection algorithm comprehensively. Furthermore, the better corner detector was used in an image mosaic experiment, and the result was satisfactory. The work of this paper can provide a direction for the improvement and utilization of these two corner detection algorithms.
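The Harris response at the heart of the comparison can be sketched from the structure tensor; the sensitivity constant k = 0.04 and the 3x3 window are the usual textbook choices, not values from the paper, and the explicit loops trade speed for clarity:

```python
import numpy as np

# Harris corner response: image gradients, a windowed structure tensor
# M = [[Sxx, Sxy], [Sxy, Syy]], and R = det(M) - k * trace(M)^2.
def harris_response(img, k=0.04, win=3):
    iy, ix = np.gradient(img.astype(float))   # gradients along rows, cols
    ixx, iyy, ixy = ix * ix, iy * iy, ix * iy
    pad = win // 2
    resp = np.zeros_like(img, dtype=float)
    for r in range(pad, img.shape[0] - pad):
        for c in range(pad, img.shape[1] - pad):
            sxx = ixx[r - pad:r + pad + 1, c - pad:c + pad + 1].sum()
            syy = iyy[r - pad:r + pad + 1, c - pad:c + pad + 1].sum()
            sxy = ixy[r - pad:r + pad + 1, c - pad:c + pad + 1].sum()
            det = sxx * syy - sxy * sxy
            resp[r, c] = det - k * (sxx + syy) ** 2
    return resp
```

Large positive R marks corners, negative R edges, and near-zero R flat regions, which is what the stability and anti-noise factors above are measuring.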
Institute of Scientific and Technical Information of China (English)
Li Xi; Ji Hong; Zheng Ruiming; Li Ting
2009-01-01
In order to improve the performance of peer-to-peer file sharing systems in mobile distributed environments, a novel always-optimally-coordinated (AOC) criterion and a corresponding candidate selection algorithm are proposed in this paper. Compared with the traditional min-hops criterion, the new approach introduces a fuzzy knowledge combination theory to investigate several important factors that influence file transfer success rate and efficiency. Whereas min-hops-based protocols only ask the nearest candidate peer for desired files, the selection algorithm based on AOC comprehensively considers users' preferences and network requirements with flexible balancing rules. Furthermore, its advantage is also expressed in its independence from specific resource discovery protocols, allowing for scalability. The simulation results show that when using the AOC-based peer selection algorithm, system performance is much better than with the min-hops scheme, with the file transfer success rate improved by more than 50% and transfer time reduced by at least 20%.
Performance Comparison of Constrained Artificial Bee Colony Algorithm
Directory of Open Access Journals (Sweden)
Soudeh Babaeizadeh
2015-06-01
Full Text Available This study aims to evaluate, analyze and compare the performances of the constrained Artificial Bee Colony (ABC) algorithms available in the literature. In recent decades, many different variants of the ABC algorithm have been suggested to solve Constrained Optimization Problems (COPs). However, to the best of the authors' knowledge, there are rarely comparative studies on the numerical performance of those algorithms. This study considers a set of well-known benchmark problems from the test problems of the Congress on Evolutionary Computation 2006 (CEC2006).
A Comparison of Evolutionary Algorithms for Tracking Time-Varying Recursive Systems
Directory of Open Access Journals (Sweden)
White Michael S
2003-01-01
Full Text Available A comparison is made of the behaviour of some evolutionary algorithms in time-varying adaptive recursive filter systems. Simulations show that an algorithm including random immigrants outperforms a more conventional algorithm using the breeder genetic algorithm as the mutation operator when the time variation is discontinuous, but neither algorithm performs well when the time variation is rapid but smooth. To meet this deficit, a new hybrid algorithm which uses a hill climber as an additional genetic operator, applied for several steps at each generation, is introduced. A comparison is made of the effect of applying the hill climbing operator a few times to all members of the population or a larger number of times solely to the best individual; it is found that applying to the whole population yields the better results, substantially improved compared with those obtained using earlier methods.
A First Comparison of Kepler Planet Candidates in Single and Multiple Systems
Latham, David W; Quinn, Samuel N; Batalha, Natalie M; Borucki, William J; Brown, Timothy M; Bryson, Stephen T; Buchhave, Lars A; Caldwell, Douglas A; Carter, Joshua A; Christiansen, Jesse L; Ciardi, David R; Cochran, William D; Dunham, Edward W; Fabrycky, Daniel C; Ford, Eric B; Gautier, Thomas N; Gilliland, Ronald L; Holman, Matthew J; Howell, Steve B; Ibrahim, Khadeejah A; Isaacson, Howard; Basri, Gibor; Furesz, Gabor; Geary, John C; Jenkins, Jon M; Koch, David G; Lissauer, Jack J; Marcy, Geoffrey W; Quintana, Elisa V; Ragozzine, Darin; Sasselov, Dimitar D; Shporer, Avi; Steffen, Jason H; Welsh, William F; Wohler, Bill
2011-01-01
In this letter we present an overview of the rich population of systems with multiple candidate transiting planets found in the first four months of Kepler data. The census of multiples includes 115 targets that show 2 candidate planets, 45 with 3, 8 with 4, and 1 each with 5 and 6, for a total of 170 systems with 408 candidates. When compared to the 827 systems with only one candidate, the multiples account for 17 percent of the total number of systems, and a third of all the planet candidates. We compare the characteristics of candidates found in multiples with those found in singles. False positives due to eclipsing binaries are much less common for the multiples, as expected. Singles and multiples are both dominated by planets smaller than Neptune; 69 +2/-3 percent for singles and 86 +2/-5 percent for multiples. This result, that systems with multiple transiting planets are less likely to include a transiting giant planet, suggests that close-in giant planets tend to disrupt the orbital inclinations of sm...
A FIRST COMPARISON OF KEPLER PLANET CANDIDATES IN SINGLE AND MULTIPLE SYSTEMS
International Nuclear Information System (INIS)
In this Letter, we present an overview of the rich population of systems with multiple candidate transiting planets found in the first four months of Kepler data. The census of multiples includes 115 targets that show two candidate planets, 45 with three, eight with four, and one each with five and six, for a total of 170 systems with 408 candidates. When compared to the 827 systems with only one candidate, the multiples account for 17% of the total number of systems, and one-third of all the planet candidates. We compare the characteristics of candidates found in multiples with those found in singles. False positives due to eclipsing binaries are much less common for the multiples, as expected. Singles and multiples are both dominated by planets smaller than Neptune; 69 +2/-3% for singles and 86 +2/-5% for multiples. This result, that systems with multiple transiting planets are less likely to include a transiting giant planet, suggests that close-in giant planets tend to disrupt the orbital inclinations of small planets in flat systems, or maybe even prevent the formation of such systems in the first place.
An Empirical Comparison of Boosting and Bagging Algorithms
Directory of Open Access Journals (Sweden)
R. Kalaichelvi Chandrahasan
2011-11-01
Full Text Available Classification is a data mining technique that analyses a given data set and induces a model for each class based on the features present in the data. Bagging and boosting are heuristic approaches to developing classification models. These techniques generate a diverse ensemble of classifiers by manipulating the training data given to a base learning algorithm. They are very successful in improving the accuracy of some algorithms on artificial and real-world datasets. We review algorithms such as AdaBoost, Bagging, ADTree, and Random Forest in conjunction with the Meta classifier and the Decision Tree classifier. We also describe a large empirical study comparing several variants. The algorithms are analysed on accuracy, precision, error rate and execution time.
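The bootstrap-aggregation idea described above can be sketched in a few lines. The following is a minimal illustration, not the study's implementation: a hypothetical decision-stump base learner is trained on bootstrap resamples of a toy dataset, and predictions are combined by majority vote.

```python
import random
from collections import Counter

def stump_fit(X, y):
    """Fit a one-feature threshold classifier (decision stump) by exhaustive search."""
    best = None
    for f in range(len(X[0])):
        for t in sorted({row[f] for row in X}):
            pred = [1 if row[f] > t else 0 for row in X]
            acc = sum(p == yi for p, yi in zip(pred, y)) / len(y)
            if best is None or acc > best[0]:
                best = (acc, f, t)
    _, f, t = best
    return lambda row: 1 if row[f] > t else 0

def bagging_fit(X, y, n_estimators=11, seed=0):
    """Train an ensemble of stumps, each on a bootstrap resample of the data."""
    rng = random.Random(seed)
    models = []
    n = len(X)
    for _ in range(n_estimators):
        idx = [rng.randrange(n) for _ in range(n)]
        models.append(stump_fit([X[i] for i in idx], [y[i] for i in idx]))
    # Majority vote over the ensemble
    return lambda row: Counter(m(row) for m in models).most_common(1)[0][0]

# Toy data: feature 0 separates the classes, feature 1 is noise
X = [[0.1, 1.0], [0.2, 0.8], [0.3, 0.9], [0.8, 0.2], [0.9, 0.1], [0.7, 0.3]]
y = [0, 0, 0, 1, 1, 1]
clf = bagging_fit(X, y)
print(clf([0.1, 1.0]), clf([0.9, 0.1]))  # → 0 1
```

Boosting differs only in how the training data are manipulated: instead of uniform bootstrap resampling, AdaBoost reweights examples so later learners focus on earlier mistakes.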
Fast Quantum Search Algorithms in Protein Sequence Comparison - Quantum Biocomputing
Hollenberg, L C L
2000-01-01
Quantum search algorithms are considered in the context of protein sequence comparison in biocomputing. Given a sample protein sequence of length m (i.e. m residues), the problem considered is to find an optimal match in a large database containing N residues. Initially, Grover's quantum search algorithm is applied to a simple illustrative case - namely where the database forms a complete set of states over the 2^m basis states of an m-qubit register, and thus is known to contain the exact sequence of interest. This example demonstrates explicitly the typical O(sqrt{N}) speedup on the classical O(N) requirements. An algorithm is then presented for the (more realistic) case where the database may contain repeat sequences, and may not necessarily contain an exact match to the sample sequence. In terms of minimizing the Hamming distance between the sample sequence and the database subsequences, the algorithm finds an optimal alignment, in O(sqrt{N}) steps, by employing an extension of Grover's algorithm, due to Boyer, Brassard,...
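For reference, the classical O(N) baseline that the quantum O(sqrt{N}) result is compared against can be sketched as a brute-force Hamming-distance scan over all database windows. The sequences below are hypothetical, chosen only for illustration.

```python
def hamming(a, b):
    """Hamming distance between two equal-length sequences."""
    return sum(x != y for x, y in zip(a, b))

def best_alignment(sample, database):
    """Classical O(N) scan: return (offset, distance) of the database window
    with minimal Hamming distance to the sample sequence."""
    m = len(sample)
    best = min(range(len(database) - m + 1),
               key=lambda i: hamming(sample, database[i:i + m]))
    return best, hamming(sample, database[best:best + m])

offset, dist = best_alignment("GAV", "MKGAVLQH")
print(offset, dist)  # → 2 0 (exact match at offset 2)
```

Every window must be examined in the worst case, which is the O(N) cost that Grover-style amplitude amplification reduces to O(sqrt{N}) oracle queries.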
Comparison of parameter estimation algorithms in hydrological modelling
DEFF Research Database (Denmark)
Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan
Local search methods have been applied successfully in the calibration of simple groundwater models, but might fail to locate the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and can become trapped in local regions of attraction. The global SCE procedure is, in general, more effective and...
Advanced reconstruction algorithms for electron tomography: From comparison to combination
Energy Technology Data Exchange (ETDEWEB)
Goris, B. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Roelandts, T. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Batenburg, K.J. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1098XG Amsterdam (Netherlands); Heidari Mezerji, H. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Bals, S., E-mail: sara.bals@ua.ac.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)
2013-04-15
In this work, the simultaneous iterative reconstruction technique (SIRT), the total variation minimization (TVM) reconstruction technique and the discrete algebraic reconstruction technique (DART) for electron tomography are compared and their advantages and disadvantages are discussed. Furthermore, we describe how the result of a three-dimensional (3D) reconstruction based on TVM can provide objective information that is needed as the input for a DART reconstruction. This approach results in a tomographic reconstruction whose segmentation is carried out in an objective manner. - Highlights: ► A comparative study between different reconstruction algorithms for tomography is performed. ► Reconstruction algorithms that use prior knowledge about the specimen yield superior results. ► One reconstruction algorithm can provide the prior knowledge for a second algorithm.
New enumeration algorithm for protein structure comparison and classification
2013-01-01
Background Protein structure comparison and classification is an effective method for exploring protein structure-function relations. This problem is computationally challenging. Many different computational approaches for protein structure comparison apply the secondary structure elements (SSEs) representation of protein structures. Results We study the complexity of the protein structure comparison problem based on a mixed-graph model with respect to different computational frameworks. We d...
DURUSU, A.; NAKIR, I.; AJDER, A.; Ayaz, R.; Akca, H.; TANRIOVEN, M.
2014-01-01
Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they make the solar panels operate at the maximum power point (MPP) whatever the changes in environmental conditions. For this reason, they have an important place in increasing PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of MPPT algorithms have been proposed in the literature. The comparison of the MPPT algorithms in literature are ...
A comparison of surface fitting algorithms for geophysical data
El Abbass, Tihama; Jallouli, C.; Albouy, Yves; Diament, M.
1990-01-01
This paper presents the results of a comparison of different surface-fitting algorithms. For each of these algorithms (polynomial approximation, spline-Laplace combination, kriging, least-squares approximation, finite-element method), its suitability for different data sets and its limits of application are discussed.
Comparison of Voice Activity Detection Algorithms for VoIP
Prasad, Venkatesha R; Sangwan, Abhijeet; Jamadagni, HS; Chiranth, MC; Sah, Rahul
2002-01-01
We discuss techniques for Voice Activity Detection (VAD) for Voice over Internet Protocol (VoIP). VAD helps reduce the bandwidth requirement of a voice session, thereby improving bandwidth efficiency. In this paper, we compare the quality of speech, level of compression and computational complexity for three time-domain and three frequency-domain VAD algorithms. Implementation of time-domain algorithms is computationally simple. However, better speech quality is obtained with the frequency...
An Empirical Comparison of Learning Algorithms for Nonparametric Scoring
Depecker, Marine; Clémençon, Stéphan; Vayatis, Nicolas
2011-01-01
The TreeRank algorithm was recently proposed as a scoring-based method built on recursive partitioning of the input space. This tree induction algorithm builds orderings by recursively optimizing the Receiver Operating Characteristic (ROC) curve through a one-step optimization procedure called LeafRank. One aim of this paper is an in-depth analysis of the empirical performance of the variants of the TreeRank/LeafRank method. Numerical experiments based on both artificial and real data sets...
COMPARISON OF DIFFERENT SEGMENTATION ALGORITHMS FOR DERMOSCOPIC IMAGES
Directory of Open Access Journals (Sweden)
A.A. Haseena Thasneem
2015-05-01
Full Text Available This paper compares different algorithms for the segmentation of skin lesions in dermoscopic images. The basic segmentation algorithms compared are Thresholding techniques (Global and Adaptive), Region-based techniques (K-means, Fuzzy C-means, Expectation Maximization and Statistical Region Merging), Contour models (Active Contour Model and Chan-Vese Model) and Spectral Clustering. Accuracy, sensitivity, specificity, border error, Hammoude distance, Hausdorff distance, MSE, PSNR and elapsed time metrics were used to evaluate the various segmentation techniques.
Comparison of Hierarchical Agglomerative Algorithms for Clustering Medical Documents
Directory of Open Access Journals (Sweden)
Rafa E. Al-Qutaish
2012-06-01
Full Text Available The extensive amount of data stored in medical documents requires developing methods that help users find what they are looking for effectively by organizing large amounts of information into a small number of meaningful clusters. The produced clusters contain groups of objects which are more similar to each other than to the members of any other group. Thus, the aim of high-quality document clustering algorithms is to determine a set of clusters in which the inter-cluster similarity is minimized and the intra-cluster similarity is maximized. The most important feature of many clustering algorithms is that they treat the clustering problem as an optimization process, that is, maximizing or minimizing a particular clustering criterion function defined over the whole clustering solution. The only real difference between agglomerative algorithms is how they choose which clusters to merge. The main purpose of this paper is to compare different agglomerative algorithms based on the evaluation of the quality of the clusters produced by different hierarchical agglomerative clustering algorithms using different criterion functions for the problem of clustering medical documents. Our experimental results showed that the agglomerative algorithm that uses I1 as its criterion function for choosing which clusters to merge produced better cluster quality than the other criterion functions in terms of entropy and purity as external measures.
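The merge loop common to all agglomerative algorithms, where only the merge criterion differs, can be sketched as follows. This is an illustrative single-link variant on toy one-dimensional data, not the paper's I1 criterion.

```python
from itertools import combinations

def single_link(points, k):
    """Hierarchical agglomerative clustering: repeatedly merge the two clusters
    with the smallest minimum pairwise distance (single-link criterion),
    stopping when k clusters remain."""
    clusters = [[p] for p in points]
    while len(clusters) > k:
        # The criterion function is the only part that varies between
        # agglomerative algorithms; swap the key to change the criterion.
        i, j = min(combinations(range(len(clusters)), 2),
                   key=lambda ij: min(abs(a - b)
                                      for a in clusters[ij[0]]
                                      for b in clusters[ij[1]]))
        clusters[i] += clusters.pop(j)
    return clusters

print(single_link([1.0, 1.1, 5.0, 5.2, 9.9], 3))
# → [[1.0, 1.1], [5.0, 5.2], [9.9]]
```

A criterion such as I1 would replace the `key` with a function evaluated over the whole candidate clustering (e.g. intra-cluster similarity), rather than a pairwise distance.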
Performance comparison of several optimization algorithms in matched field inversion
Institute of Scientific and Technical Information of China (English)
ZOU Shixin; YANG Kunde; MA Yuanliang
2004-01-01
The optimization efficiencies and mechanisms of simulated annealing, the genetic algorithm, differential evolution and downhill simplex differential evolution are compared and analyzed. Simulated annealing and the genetic algorithm use a directed random process to search the parameter space for an optimal solution. They include the ability to avoid local minima, but as no gradient information is used, the search may be relatively inefficient. Differential evolution uses distance and azimuth information between individuals of a population to search the parameter space; the initial search is effective, but the search speed decreases quickly because the differential information between individuals of the population vanishes. The local downhill simplex and global differential evolution methods are developed separately and then combined to produce a hybrid downhill simplex differential evolution algorithm. The hybrid algorithm is sensitive to gradients of the objective function, and its search of the parameter space is effective. These algorithms are applied to matched field inversion with synthetic data. The optimal parameter values, the final values of the objective function and the inversion times are presented and compared.
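The differential-information mechanism described above (new candidates built from scaled differences between population members) can be sketched with a minimal DE/rand/1/bin loop. This is a generic textbook sketch on a toy sphere function, not the matched-field inversion code; parameter values are illustrative.

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    """Minimal DE/rand/1/bin: mutate with the scaled difference of two
    population members, apply binomial crossover, keep the trial greedily.
    (Bounds are used only for initialization in this sketch.)"""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(generations):
        for i in range(pop_size):
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            # Differential mutation: base vector plus scaled difference vector
            trial = [pop[a][d] + F * (pop[b][d] - pop[c][d])
                     if rng.random() < CR else pop[i][d]
                     for d in range(dim)]
            if f(trial) < cost[i]:
                pop[i], cost[i] = trial, f(trial)
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]

# Sphere function: global minimum 0 at the origin
x, fx = differential_evolution(lambda v: sum(t * t for t in v),
                               [(-5, 5), (-5, 5)])
print("converged:", fx < 1e-3)
```

As the population contracts around an optimum, the difference vectors `pop[b][d] - pop[c][d]` shrink toward zero, which is exactly the vanishing differential information the abstract identifies as the cause of slowing convergence.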
A COMPARISON OF CONSTRUCTIVE AND PRUNING ALGORITHMS TO DESIGN NEURAL NETWORKS
Directory of Open Access Journals (Sweden)
KAZI MD. ROKIBUL ALAM
2011-06-01
Full Text Available This paper presents a comparison between constructive and pruning algorithms to design Neural Networks (NNs). Both algorithms have advantages as well as drawbacks when designing the architecture of an NN. A constructive algorithm is computationally economical because it simply specifies a straightforward initial NN architecture, whereas the large initial NN size of a pruning algorithm allows reasonably quick learning with reduced complexity. Two popular ideas, one from each category, are chosen here: “cascade-correlation [1]” from constructive algorithms and “skeletonization [2]” from pruning algorithms. They have been tested on several benchmark problems in machine learning and NNs: the cancer, the credit card, the heart disease, the thyroid and the soybean problems. The simulation results show the number of iterations during the training period and the generalization ability of NNs designed using these algorithms for these problems.
A Comparison of Improved Artificial Bee Colony Algorithms Based on Differential Evolution
Directory of Open Access Journals (Sweden)
Jianfeng Qiu
2013-10-01
Full Text Available The Artificial Bee Colony (ABC) algorithm has been an active field of swarm-intelligence-based optimization in recent years. Inspired by the mutation strategies used in the Differential Evolution (DE) algorithm, this paper introduces three types of strategies (“rand”, “best”, and “current-to-best”) and one or two disturbance vectors into the ABC algorithm. Although individual mutation strategies from DE have been used in the ABC algorithm by some researchers on different occasions, there has been no comprehensive application and comparison of the mutation strategies used in the ABC algorithm. In this paper, these improved ABC algorithms are analyzed on a set of test functions, including the rapidity of convergence. The results show that the improvements based on DE achieve better performance on the whole than the basic ABC algorithm.
Optimization of a statistical algorithm for objective comparison of toolmarks.
Spotts, Ryan; Chumbley, L Scott; Ekstrand, Laura; Zhang, Song; Kreiser, James
2015-03-01
Due to historical legal challenges, there is a driving force for the development of objective methods of forensic toolmark identification. This study utilizes an algorithm to separate matching and nonmatching shear-cut toolmarks created using fifty sequentially manufactured pliers. Unlike previously analyzed striated screwdriver marks, shear-cut marks contain discontinuous groups of striations, posing a more difficult test of algorithm applicability. The algorithm compares the correlation between optical 3D toolmark topography data, producing a Wilcoxon rank sum test statistic. The relative magnitude of this metric separates the matching and nonmatching toolmarks. Results show a high degree of statistical separation between matching and nonmatching distributions. Further separation is achieved with optimized input parameters and implementation of a "leash" preventing a previous source of outliers; however, complete statistical separation was not achieved. This paper represents further development of objective methods of toolmark identification and further validation of the assumption that toolmarks are identifiably unique. PMID:25425426
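The Wilcoxon rank sum statistic at the core of the method above is simple to compute: pool the two samples, rank them, and sum the ranks of one sample. The sketch below uses hypothetical correlation values, not the study's data.

```python
def rank_sum(sample_a, sample_b):
    """Wilcoxon rank-sum statistic: pool both samples, rank them
    (average ranks for ties), and sum the ranks of sample_a."""
    pooled = sorted(sample_a + sample_b)
    # Assign the average rank to each distinct value (handles ties)
    ranks = {}
    i = 0
    while i < len(pooled):
        j = i
        while j < len(pooled) and pooled[j] == pooled[i]:
            j += 1
        ranks[pooled[i]] = (i + 1 + j) / 2  # mean of ranks i+1 .. j
        i = j
    return sum(ranks[v] for v in sample_a)

# Hypothetical correlations: matching pairs score high, nonmatching low
match = [0.91, 0.87, 0.95]
nonmatch = [0.12, 0.33, 0.25]
print(rank_sum(match, nonmatch))  # → 15.0
```

When matching-pair correlations dominate, their ranks concentrate at the top of the pooled ordering, so the statistic's magnitude separates the two distributions, which is the separation the paper measures.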
Comparison of Algorithms for an Electronic Nose in Identifying Liquors
Institute of Scientific and Technical Information of China (English)
Zhi-biao Shi; Tao Yu; Qun Zhao; Yang Li; Yu-bin Lan
2008-01-01
When an electronic nose is used to identify different varieties of distilled liquors, the pattern recognition algorithm is typically chosen on the basis of experience, which lacks a guiding principle. In this research, different brands of distilled spirits were identified using pattern recognition algorithms (principal component analysis and an artificial neural network), and the recognition rates of the different algorithms were compared. The recognition rate of the Back Propagation Neural Network (BPNN) was the highest. Owing to its slow convergence speed, the BPNN easily gets trapped in local minima. A chaotic BPNN was tried in order to overcome this disadvantage; its convergence speed is 75.5 times faster than that of the BPNN.
Comparison of Supervised and Unsupervised Learning Algorithms for Pattern Classification
Directory of Open Access Journals (Sweden)
R. Sathya
2013-02-01
Full Text Available This paper presents a comparative account of unsupervised and supervised learning models and their pattern classification evaluations as applied to the higher education scenario. Classification plays a vital role in machine-learning algorithms. In the present study we found that, though the error back-propagation learning algorithm provided by the supervised learning model is very efficient for a number of non-linear real-time problems, the KSOM of the unsupervised learning model offers an efficient solution and classification.
The RedGOLD cluster detection algorithm and its cluster candidate catalogue for the CFHT-LS W1
Licitra, Rossella; Mei, Simona; Raichoor, Anand; Erben, Thomas; Hildebrandt, Hendrik
2016-01-01
We present RedGOLD (Red-sequence Galaxy Overdensity cLuster Detector), a new optical/NIR galaxy cluster detection algorithm, and apply it to the CFHT-LS W1 field. RedGOLD searches for red-sequence galaxy overdensities while minimizing contamination from dusty star-forming galaxies. It imposes an Navarro-Frenk-White profile and calculates cluster detection significance and richness. We optimize these latter two parameters using both simulations and X-ray-detected cluster catalogues, and obtain a catalogue ˜80 per cent pure up to z ˜ 1, and ˜100 per cent (˜70 per cent) complete at z ≤ 0.6 (z ≲ 1) for galaxy clusters with M ≳ 1014 M⊙ at the CFHT-LS Wide depth. In the CFHT-LS W1, we detect 11 cluster candidates per deg2 out to z ˜ 1.1. When we optimize both completeness and purity, RedGOLD obtains a cluster catalogue with higher completeness and purity than other public catalogues, obtained using CFHT-LS W1 observations, for M ≳ 1014 M⊙. We use X-ray-detected cluster samples to extend the study of the X-ray temperature-optical richness relation to a lower mass threshold, and find a mass scatter at fixed richness of σlnM|λ = 0.39 ± 0.07 and σlnM|λ = 0.30 ± 0.13 for the Gozaliasl et al. and Mehrtens et al. samples. When considering similar mass ranges as previous work, we recover a smaller scatter in mass at fixed richness. We recover 93 per cent of the redMaPPer detections, and find that its richness estimates are on average ˜40-50 per cent larger than ours at z > 0.3. RedGOLD recovers X-ray cluster spectroscopic redshifts at better than 5 per cent up to z ˜ 1, and the centres within a few tens of arcseconds.
Smail, Linda
2016-06-01
The basic task of any probabilistic inference system in Bayesian networks is computing the posterior probability distribution for a subset or subsets of random variables, given values or evidence for some other variables from the same Bayesian network. Many methods and algorithms have been developed for exact and approximate inference in Bayesian networks. This work compares two exact inference methods in Bayesian networks - Lauritzen-Spiegelhalter and the successive restrictions algorithm - from the perspective of computational efficiency. The two methods were applied for comparison to a Chest Clinic Bayesian network. Results indicate that the successive restrictions algorithm shows more computational efficiency than the Lauritzen-Spiegelhalter algorithm.
Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches
Çelik, Ufuk; Yurtay, Nilüfer; Koç, Emine Rabia; Tepe, Nermin; Güllüoğlu, Halil; Ertaş, Mustafa
2015-01-01
The present study evaluated the diagnostic accuracy of immune system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. They are divided into four types: migraine, tension, cluster, and other primary headaches. After we took this main objective into consideration, three different neurologists were required to fill in the medical records of 850 patients into our web-based expert system hosted on our project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. The AIS are classification algorithms that are inspired by the biological immune system mechanism that involves significant and distinct capabilities. These algorithms simulate the specialties of the immune system such as discrimination, learning, and the memorizing process in order to be used for classification, optimization, or pattern recognition. According to the results, the accuracy level of the classifier used in this study reached a success continuum ranging from 95% to 99%, except for the inconvenient one that yielded 71% accuracy. PMID:26075014
Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches
Directory of Open Access Journals (Sweden)
Ufuk Çelik
2015-01-01
Full Text Available The present study evaluated the diagnostic accuracy of immune system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. They are divided into four types: migraine, tension, cluster, and other primary headaches. After we took this main objective into consideration, three different neurologists were required to fill in the medical records of 850 patients into our web-based expert system hosted on our project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. The AIS are classification algorithms that are inspired by the biological immune system mechanism that involves significant and distinct capabilities. These algorithms simulate the specialties of the immune system such as discrimination, learning, and the memorizing process in order to be used for classification, optimization, or pattern recognition. According to the results, the accuracy level of the classifier used in this study reached a success continuum ranging from 95% to 99%, except for the inconvenient one that yielded 71% accuracy.
Evaluation and Comparison of Motion Estimation Algorithms for Video Compression
Directory of Open Access Journals (Sweden)
Avinash Nayak
2013-08-01
Full Text Available Video compression has become an essential component of broadcast and entertainment media. Motion estimation and compensation techniques, which can effectively eliminate temporal redundancy between adjacent frames, have been widely applied in popular video compression coding standards such as MPEG-2 and MPEG-4. Traditional fast block matching algorithms are easily trapped in local minima, resulting in some degradation of video quality after decoding. In this paper, various computing techniques for achieving a globally optimal motion estimate are evaluated. Zero-motion prejudgment is implemented to find static macroblocks (MBs) that do not need to perform the remaining search, which reduces the computational cost. The Adaptive Rood Pattern Search (ARPS) motion estimation algorithm is also adopted to reduce the motion vector overhead in frame prediction. The simulation results show that the ARPS algorithm is very effective in reducing the computational overhead and achieves very good Peak Signal to Noise Ratio (PSNR) values. This method significantly reduces the computational complexity involved in frame prediction and also yields the least prediction error in all video sequences. Thus the ARPS technique is more efficient than conventional search algorithms in video compression.
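Block matching with zero-motion prejudgment can be sketched as follows: if the co-located block in the reference frame already matches the current block within a threshold, the macroblock is declared static and the search is skipped. This is a minimal full-search illustration on tiny hypothetical frames, not the ARPS rood pattern itself.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
               for a, b in zip(ra, rb))

def block(frame, y, x, size):
    return [row[x:x + size] for row in frame[y:y + size]]

def motion_vector(ref, cur, y, x, size=2, search=1, zmp_threshold=0):
    """Search a small window for the best-matching block, with zero-motion
    prejudgment: if the co-located block already matches, skip the search."""
    target = block(cur, y, x, size)
    if sad(block(ref, y, x, size), target) <= zmp_threshold:
        return (0, 0)  # static macroblock: no search needed
    candidates = [(dy, dx) for dy in range(-search, search + 1)
                  for dx in range(-search, search + 1)
                  if 0 <= y + dy <= len(ref) - size
                  and 0 <= x + dx <= len(ref[0]) - size]
    return min(candidates,
               key=lambda v: sad(block(ref, y + v[0], x + v[1], size), target))

ref = [[0, 0, 0, 0],
       [0, 9, 8, 0],
       [0, 7, 6, 0],
       [0, 0, 0, 0]]
# The 2x2 pattern moved one pixel right and one pixel down
cur = [[0, 0, 0, 0],
       [0, 0, 0, 0],
       [0, 0, 9, 8],
       [0, 0, 7, 6]]
print(motion_vector(ref, cur, 2, 2))  # → (-1, -1)
```

ARPS replaces the exhaustive candidate list with a rood-shaped pattern sized by the predicted motion vector, which is where its computational savings come from.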
Jelen, Birsen
2015-01-01
In recent years almost every newly opened government funded university in Turkey has established a music department where future music teachers are educated and piano is compulsory for every single music teacher candidate in Turkey. The aim of this research is to compare piano teaching instructors' and their students' perceptions about the current…
Hees, A; Guéna, J; Abgrall, M; Bize, S; Wolf, P
2016-08-01
We use 6 yrs of accurate hyperfine frequency comparison data of the dual rubidium and caesium cold atom fountain FO2 at LNE-SYRTE to search for a massive scalar dark matter candidate. Such a scalar field can induce harmonic variations of the fine structure constant, of the mass of fermions, and of the quantum chromodynamic mass scale, which will directly impact the rubidium/caesium hyperfine transition frequency ratio. We find no signal consistent with a scalar dark matter candidate but provide improved constraints on the coupling of the putative scalar field to standard matter. Our limits are complementary to previous results that were only sensitive to the fine structure constant and improve them by more than an order of magnitude when only a coupling to electromagnetism is assumed. PMID:27541455
Hees, A; Abgrall, M; Bize, S; Wolf, P
2016-01-01
We use six years of accurate hyperfine frequency comparison data of the dual Rubidium and Caesium cold atom fountain FO2 at LNE-SYRTE to search for a massive scalar dark matter candidate. Such a scalar field can induce harmonic variations of the fine structure constant, of the mass of fermions and of the quantum chromodynamic mass scale, which will directly impact the Rubidium/Caesium hyperfine transition frequency ratio. We find no signal consistent with a scalar dark matter candidate but provide improved constraints on the coupling of the putative scalar field to standard matter. Our limits are complementary to previous results that were only sensitive to the fine-structure constant, and improve them by more than an order of magnitude when only a coupling to electromagnetism is assumed.
Hees, A.; Guéna, J.; Abgrall, M.; Bize, S.; Wolf, P.
2016-08-01
We use 6 yrs of accurate hyperfine frequency comparison data of the dual rubidium and caesium cold atom fountain FO2 at LNE-SYRTE to search for a massive scalar dark matter candidate. Such a scalar field can induce harmonic variations of the fine structure constant, of the mass of fermions, and of the quantum chromodynamic mass scale, which will directly impact the rubidium/caesium hyperfine transition frequency ratio. We find no signal consistent with a scalar dark matter candidate but provide improved constraints on the coupling of the putative scalar field to standard matter. Our limits are complementary to previous results that were only sensitive to the fine structure constant and improve them by more than an order of magnitude when only a coupling to electromagnetism is assumed.
Parallel divide and conquer bio-sequence comparison based on Smith-Waterman algorithm
Institute of Scientific and Technical Information of China (English)
ZHANG Fa; QIAO Xiangzhen; LIU Zhiyong
2004-01-01
Tools for pair-wise bio-sequence alignment have long played a central role in computational biology, and several algorithms for bio-sequence alignment have been developed. The Smith-Waterman algorithm, based on dynamic programming, is considered the most fundamental alignment algorithm in bioinformatics. However, the existing parallel Smith-Waterman algorithm needs a large memory space, and this disadvantage limits the size of the sequences that can be handled. As biological sequence data expand rapidly, the memory requirement of the existing parallel Smith-Waterman algorithm has become a critical problem. To solve this problem, we develop a new parallel bio-sequence alignment algorithm based on a divide-and-conquer strategy, named the PSW-DC algorithm. In our algorithm, we first partition the query sequence into several subsequences and distribute them to the processors, then compare each subsequence with the whole subject sequence in parallel using the Smith-Waterman algorithm to obtain an interim result, and finally obtain the optimal alignment between the query sequence and the subject sequence through a special combination and extension method. The memory space required by our algorithm is reduced significantly in comparison with existing ones. We also develop a key technique of combination and extension, named the C&E method, to manipulate the interim results and obtain the final sequence alignment. We implement the new parallel bio-sequence alignment algorithm, PSW-DC, on a cluster parallel system.
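The sequential Smith-Waterman recurrence that each processor applies to its subsequence can be sketched as below. This is the standard textbook scoring pass (local alignment score only, no traceback), with illustrative toy sequences and scoring parameters; it is not the PSW-DC parallel code.

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score by dynamic programming.
    H[i][j] is the best score of a local alignment ending at a[i-1], b[j-1];
    the 0 in the max is what makes the alignment local."""
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best

print(smith_waterman("GATTACA", "ATTAC"))  # → 10 (exact substring, 5 matches x 2)
```

The full H matrix is the memory bottleneck the abstract refers to: it grows with the product of the sequence lengths, which is why partitioning the query across processors reduces per-node memory.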
Ebtehaj, Isa; Bonakdari, Hossein
2014-01-01
The existence of sediments in wastewater greatly affects the performance of sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. The article reviews the performance of the genetic algorithm (GA) and the imperialist competitive algorithm (ICA) in minimizing the target function (mean square error of observed and predicted Froude number). To study the impact of bed load transport parameters, using four non-dimensional groups, six different models have been presented. Moreover, the roulette wheel selection method is used to select the parents. The ICA, with root mean square error (RMSE) = 0.007 and mean absolute percentage error (MAPE) = 3.5%, shows better results than the GA (RMSE = 0.007, MAPE = 5.6%) for the selected model. All six models return better results than the GA. Also, the results of these two algorithms were compared with a multi-layer perceptron and existing equations. PMID:25429460
Empirical Comparison of Algorithms for Network Community Detection
Leskovec, Jure; Mahoney, Michael W
2010-01-01
Detecting clusters or communities in large real-world graphs such as large social or information networks is a problem of considerable interest. In practice, one typically chooses an objective function that captures the intuition of a network cluster as a set of nodes with better internal connectivity than external connectivity, and then one applies approximation algorithms or heuristics to extract sets of nodes that are related to the objective function and that "look like" good communities for the application of interest. In this paper, we explore a range of network community detection methods in order to compare them and to understand their relative performance and the systematic biases in the clusters they identify. We evaluate several common objective functions that are used to formalize the notion of a network community, and we examine several different classes of approximation algorithms that aim to optimize such objective functions. In addition, rather than simply fixing an objective and asking for an a...
A benchmark for comparison of cell tracking algorithms
Maška, Martin; Ulman, Vladimír; Svoboda, David; Matula, Pavel; Matula, Petr; Ederra, Cristina; Urbiola, Ainhoa; España, Tomás; Venkatesan, Subramanian; Balak, Deepak M.W.; Karas, Pavel; Bolcková, Tereza; Štreitová, Markéta; Carthel, Craig; Coraluppi, Stefano
2014-01-01
Motivation: Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2013 Cell Tracking Challenge. In this article, we present the logistics, datasets, methods and results of the challenge and lay down the principles for future uses of this benchma...
A comparison of fitness scaling methods in evolutionary algorithms
Bertone, E.; Alfonso, Hugo; Gallard, Raúl Hector
1999-01-01
Proportional selection (PS), as a selection mechanism for mating (reproduction with emphasis), selects individuals according to their fitness. Consequently, the probability of an individual obtaining a number of offspring is directly proportional to its fitness value. This can lead to a loss of selective pressure in the final stages of the evolutionary process, degrading the search. This presentation discusses performance results on evolutionary algorithms optimizing two highly multimodal ...
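A standard remedy for the loss of selective pressure under proportional selection is linear fitness scaling in the style of Goldberg. The sketch below assumes non-negative fitnesses on a maximization problem; the function name and the scaling constant `c` are illustrative, not drawn from this paper.

```python
import numpy as np

def linear_scale(fitness, c=2.0):
    """Linear fitness scaling: f' = a*f + b.

    Chooses a and b so the mean fitness is preserved and the best
    individual receives c times the mean, restoring selective
    pressure when raw fitnesses have nearly converged.
    """
    f = np.asarray(fitness, dtype=float)
    f_avg, f_max = f.mean(), f.max()
    if np.isclose(f_max, f_avg):          # flat population: nothing to scale
        return np.full_like(f, f_avg)
    a = (c - 1.0) * f_avg / (f_max - f_avg)
    b = f_avg * (1.0 - a)
    scaled = a * f + b
    return np.clip(scaled, 0.0, None)     # guard against negative fitness
```

For a nearly converged population such as [10, 11, 12], raw selection probabilities are almost uniform, while the scaled values [0, 11, 22] give the best individual twice the average share.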
A numeric comparison of variable selection algorithms for supervised learning
International Nuclear Information System (INIS)
Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about the data is therefore an important task in the analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets, such as the imaging gamma-ray Cherenkov telescope (MAGIC) data found at the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community (http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ('Add N Remove R') implemented in SPR.
Comparison of Adaptive Antenna Arrays Controlled by Gradient Algorithms
Directory of Open Access Journals (Sweden)
Z. Raida
1994-09-01
The paper presents the Simple Kalman filter (SKF), which has been designed for the control of digital adaptive antenna arrays. The SKF has been applied to both the pilot-signal system and the steering-vector system. These SKF-based systems are compared with adaptive antenna arrays controlled by the classical LMS and Variable Step Size (VSS) LMS algorithms and by the pure Kalman filter. It is shown that the pure Kalman filter is the most convenient for the control of adaptive arrays because it does not require any a priori information about noise statistics and excels in a high rate of convergence and low misadjustment. Extremely high computational requirements are the drawback of this filter. Hence, if only low computational power of signal processors is available, the SKF is recommended. The computational requirements of the SKF are of the same order as those of the classical LMS algorithm. On the other hand, all the important features of the pure Kalman filter are inherited by the SKF. The paper shows that the presented Kalman filters can be regarded as special gradient algorithms, which is why they can be compared with the LMS family.
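As background for the LMS family of gradient algorithms compared in this abstract, a minimal classical LMS update in a system-identification setting can be sketched as follows. The step size, filter length, and the toy FIR system are illustrative choices, not values from the paper.

```python
import numpy as np

def lms_filter(x, d, n_taps=4, mu=0.05):
    """Classical LMS adaptive filter: returns error signal and final weights."""
    w = np.zeros(n_taps)
    e = np.zeros(len(x))
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # tap-input vector, newest sample first
        y = w @ u                          # filter output
        e[n] = d[n] - y                    # estimation error
        w += mu * e[n] * u                 # stochastic-gradient weight update
    return e, w

# Toy system identification: recover an unknown 4-tap FIR filter h.
rng = np.random.default_rng(0)
x = rng.standard_normal(5000)
h = np.array([0.6, -0.3, 0.2, 0.1])
d = np.convolve(x, h)[:len(x)]
e, w = lms_filter(x, d)
```

In this noiseless setting the weights converge to the unknown taps and the error signal decays toward zero; the VSS variants mentioned above adapt `mu` over time instead of keeping it fixed.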
Comparison of cluster expansion fitting algorithms for interactions at surfaces
Herder, Laura M.; Bray, Jason M.; Schneider, William F.
2015-10-01
Cluster expansions (CEs) are Ising-type interaction models that are increasingly used to model interaction and ordering phenomena at surfaces, such as the adsorbate-adsorbate interactions that control coverage-dependent adsorption or surface-vacancy interactions that control surface reconstructions. CEs are typically fit to a limited set of data derived from density functional theory (DFT) calculations. The CE fitting process involves iterative selection of DFT data points to include in a fit set and selection of interaction clusters to include in the CE. Here we compare the performance of three CE fitting algorithms-the MIT Ab-initio Phase Stability code (MAPS, the default in ATAT software), a genetic algorithm (GA), and a steepest descent (SD) algorithm-against synthetic data. The synthetic data is encoded in model Hamiltonians of varying complexity motivated by the observed behavior of atomic adsorbates on a face-centered-cubic transition metal close-packed (111) surface. We compare the performance of the leave-one-out cross-validation score against the true fitting error available from knowledge of the hidden CEs. For these systems, SD achieves lowest overall fitting and prediction error independent of the underlying system complexity. SD also most accurately predicts cluster interaction energies without ignoring or introducing extra interactions into the CE. MAPS achieves good results in fewer iterations, while the GA performs least well for these particular problems.
A comparison of cohesive features in IELTS writing of Chinese candidates and IELTS examiners
Institute of Scientific and Technical Information of China (English)
刘可
2012-01-01
This study aims at investigating the cohesive ties applied in IELTS written texts produced by Chinese candidates and IELTS examiners, uncovering the differences in the use of cohesive features between the two groups, and analyzing whether the employment of cohesive ties is a possible problem in the Chinese candidates' writing. Six written texts are analyzed in the study: three by Chinese candidates and three by IELTS examiners. The findings show that there exist differences in the use of cohesive devices between the two groups. Compared to the IELTS examiners' writing, the Chinese candidates employed excessive conjunctions, with relatively fewer comparative and demonstrative reference ties used in their texts. Additionally, it appears that overusing repetition ties constitutes a potential problem in the candidates' writing. Implications and suggestions about raising learners' awareness and helping them to use cohesive devices effectively are discussed.
Direct Imaging of Extra-solar Planets - Homogeneous Comparison of Detected Planets and Candidates
Neuhäuser, Ralph; Schmidt, Tobias
2012-01-01
Searching the literature, we found 25 stars with directly imaged planets and candidates. We gathered photometric and spectral information for all these objects to derive their luminosities in a homogeneous way, taking a bolometric correction into account. Using theoretical evolutionary models, one can then estimate the mass from luminosity, temperature, and age. According to our mass estimates, all of them can have a mass below 25 Jupiter masses, so they are considered planets.
An Adaptive Algorithm for Pairwise Comparison-based Preference Measurement
DEFF Research Database (Denmark)
Meissner, Martin; Decker, Reinhold; Scholz, Sören W.
2011-01-01
The Pairwise Comparison‐based Preference Measurement (PCPM) approach has been proposed for products featuring a large number of attributes. In the PCPM framework, a static two‐cyclic design is used to reduce the number of pairwise comparisons. However, adaptive questioning routines that maximize ...
Comparison of four Adaboost algorithm based artificial neural networks in wind speed predictions
International Nuclear Information System (INIS)
Highlights: • Four hybrid algorithms are proposed for the wind speed decomposition. • The Adaboost algorithm is adopted to provide a hybrid training framework. • MLP neural networks are built to do the forecasting computation. • Four important network training algorithms are included in the MLP networks. • All the proposed hybrid algorithms are suitable for wind speed predictions. - Abstract: The technology of wind speed prediction is important to guarantee the safety of wind power utilization. In this paper, four different hybrid methods are proposed for high-precision multi-step wind speed predictions based on the Adaboost (Adaptive Boosting) algorithm and MLP (Multilayer Perceptron) neural networks. In the hybrid Adaboost–MLP forecasting architecture, four important algorithms are adopted for the training and modeling of the MLP neural networks: the GD-ALR-BP, GDM-ALR-BP, CG-BP-FR and BFGS algorithms. The aim of the study is to investigate the forecasting improvements that the Adaboost algorithm's optimization brings to the MLP neural networks under the various training algorithms. The hybrid models in the performance comparison include Adaboost–GD-ALR-BP–MLP, Adaboost–GDM-ALR-BP–MLP, Adaboost–CG-BP-FR–MLP, Adaboost–BFGS–MLP, GD-ALR-BP–MLP, GDM-ALR-BP–MLP, CG-BP-FR–MLP and BFGS–MLP. The experimental results show that: (1) the proposed hybrid Adaboost–MLP forecasting architecture is effective for wind speed predictions; (2) the Adaboost algorithm has improved the forecasting performance of the MLP neural networks considerably; (3) among the proposed Adaboost–MLP forecasting models, the Adaboost–CG-BP-FR–MLP model has the best performance; and (4) the improvement percentages of the MLP neural networks by the Adaboost algorithm decrease step by step in the following sequence of training algorithms: GD-ALR-BP, GDM-ALR-BP, CG-BP-FR and BFGS.
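The Adaboost-MLP coupling described above follows the general boosting-for-regression pattern. A simplified AdaBoost.R2-style loop is sketched below, with a weighted linear fit standing in for the MLP weak learner; all function names and the toy data are illustrative and none of the paper's specific models are reproduced.

```python
import numpy as np

def fit_line(X, y, w):
    """Weighted least-squares line: a stand-in for the MLP weak learner."""
    A = np.column_stack([X, np.ones_like(X)])
    coef = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
    return lambda Xn: np.column_stack([Xn, np.ones_like(Xn)]) @ coef

def adaboost_r2(X, y, n_rounds=10):
    """Simplified AdaBoost.R2: reweight samples by their relative error."""
    w = np.full(len(X), 1.0 / len(X))
    models, alphas = [], []
    for _ in range(n_rounds):
        model = fit_line(X, y, w)
        err = np.abs(model(X) - y)
        if err.max() < 1e-10:            # (near-)perfect fit: stop boosting
            models.append(model); alphas.append(1.0)
            break
        r = err / err.max()              # relative errors in [0, 1]
        L = np.sum(w * r)                # weighted average loss
        if L >= 0.5:                     # learner too weak to help: stop
            break
        beta = L / (1.0 - L)
        models.append(model); alphas.append(np.log(1.0 / beta))
        w *= beta ** (1.0 - r)           # shrink weights of easy samples
        w /= w.sum()
    return models, np.asarray(alphas)

def predict(models, alphas, X):
    """Combine ensemble members by the weighted median of predictions."""
    P = np.stack([m(X) for m in models])           # (n_models, n_samples)
    order = np.argsort(P, axis=0)
    cum = np.cumsum(alphas[order], axis=0)
    idx = np.argmax(cum >= 0.5 * alphas.sum(), axis=0)
    sel = order[idx, np.arange(P.shape[1])]
    return P[sel, np.arange(P.shape[1])]

# Toy usage: a linear target the stand-in learner can fit exactly.
X = np.linspace(0.0, 1.0, 40)
y = 3.0 * X + 2.0
models, alphas = adaboost_r2(X, y)
```

The key design choice is the multiplicative weight update: samples the current learner predicts well have their weights shrunk, so later rounds concentrate on the hard samples, and the weighted median makes the combined prediction robust to a single poor member.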
Comparison with reconstruction algorithms in magnetic induction tomography.
Han, Min; Cheng, Xiaolin; Xue, Yuyan
2016-05-01
Magnetic induction tomography (MIT) is an imaging technology that uses the principle of electromagnetic detection to measure the conductivity distribution. In this research, we aim to improve the quality of image reconstruction mainly via analysis of MIT image reconstruction, including solving the forward problem and the image reconstruction itself. With respect to the forward problem, the variational finite element method is adopted. We transform the solution of a nonlinear partial differential equation into linear equations by using field subdivision and an appropriate interpolation function so that the voltage data of the sensing coils can be calculated. With respect to the image reconstruction, a method of modifying the iterative Newton-Raphson (NR) algorithm is presented in order to improve the quality of the image. In the iterative NR, a weighting matrix and L1-norm regularization are introduced to overcome the drawbacks of large estimation errors and poor stability of the reconstructed image. On the other hand, within the incomplete-data framework of the expectation maximization (EM) algorithm, the image reconstruction can be converted to an EM problem through the likelihood function, improving the under-determined problem. In the EM, missing data are introduced and the measurement data and the sensitivity matrix are compensated to overcome the drawback that the number of measurement voltages is far smaller than the number of unknowns. In addition to the two aspects above, image segmentation is also used to make the lesion more flexible and adaptive to the patients' real conditions, which provides a theoretical reference for the development of the MIT technique in clinical applications. The results show that solving the forward problem with the variational finite element method can provide the measurement voltage data for image reconstruction, and the improved iterative NR method and EM algorithm can enhance the image
Direct sequential simulation with histogram reproduction: A comparison of algorithms
Robertson, Robyn K.; Mueller, Ute A.; Bloom, Lynette M.
2006-04-01
Sequential simulation is a widely used technique applied in geostatistics to generate realisations that reproduce properties such as the mean, variance and semivariogram. Sequential Gaussian simulation requires the original variable to be transformed to a standard normal distribution before implementing variography, kriging and simulation procedures. Direct sequential simulation allows one to perform the simulation using the original variable rather than in normal score space. The shape of the local probability distribution from which simulated values are drawn is generally unknown and this results in direct simulation not being able to guarantee reproduction of the target histogram; only the Gaussian distribution ensures reproduction of the target distribution, and most geostatistical data sets are not normally distributed. This problem can be overcome by defining the shape of the local probability distribution through the use of constrained optimisation algorithms or by using the target normal-score transformation. We investigate two non-parametric approaches based on the minimisation of an objective function subject to a set of linear constraints, and an alternative approach that creates a lookup table using Gaussian transformation. These approaches allow the variography, kriging and simulation to be performed using original data values and result in the reproduction of both the histogram and semivariogram, within statistical fluctuations. The programs for the algorithms are written in Fortran 90 and follow the GSLIB format. Routines for constrained optimisation have been incorporated.
Absorption, refraction and scattering in analyzer-based imaging: comparison of different algorithms.
Diemoz, P. C.; Coan, P.; Glaser, C; Bravin, A.
2010-01-01
Many mathematical methods have been so far proposed in order to separate absorption, refraction and ultra-small angle scattering information in phase-contrast analyzer-based images. These algorithms all combine a given number of images acquired at different positions of the crystal analyzer along its rocking curve. In this paper a comprehensive quantitative comparison between five of the most widely used phase extraction algorithms based on the geometrical optics approximation is presented: t...
A comparison between two algorithms for the retrieval of soil moisture using AMSR-E data
Paloscia, Simonetta; Santi, Emanuele; Pettinato, Simone; Mladenova, Iliana; Jackson, Thomas; Bindlish, Rajat; Cosh, Michael
2015-01-01
A comparison between two algorithms for estimating soil moisture with microwave satellite data was carried out by using the datasets collected on the four Agricultural Research Service (ARS) watershed sites in the US from 2002 to 2009. These sites collectively represent a wide range of ground conditions and precipitation regimes (from natural to agricultural surfaces and from desert to humid regions) and provide long-term in-situ data. One of the algorithms is the artificial neural network-ba...
Comparison of Greedy Algorithms for Decision Tree Optimization
Alkhalid, Abdulaziz
2013-01-01
This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values of average depth, depth, number of nodes, number of terminal nodes, and number of nonterminal nodes of decision trees. We compare average depth, depth, number of nodes, number of terminal nodes and number of nonterminal nodes of constructed trees with minimum values of the considered parameters obtained based on a dynamic programming approach. We report experiments performed on data sets from UCI ML Repository and randomly generated binary decision tables. As a result, for depth, average depth, and number of nodes we propose a number of good heuristics. © Springer-Verlag Berlin Heidelberg 2013.
Multi-pattern string matching algorithms comparison for intrusion detection system
Hasan, Awsan A.; Rashid, Nur'Aini Abdul; Abdulrazzaq, Atheer A.
2014-12-01
Computer networks are developing exponentially and running at high speeds. With the increasing number of Internet users, computers have become the preferred target for complex attacks that require complex analyses to be detected. The intrusion detection system (IDS) has become an important part of any modern network, created to protect the network from attacks. The IDS relies on string matching algorithms to identify network attacks, but these algorithms consume a considerable amount of IDS processing time, thereby slowing down the IDS. A new algorithm that can overcome this weakness needs to be developed. Improving the multi-pattern matching algorithm ensures that an IDS can work properly and its limitations can be overcome. In this paper, we compare our three multi-pattern matching algorithms, MP-KR, MPH-QS and MPH-BMH, with their corresponding original algorithms KR, QS and BMH, respectively. The experiments show that MPH-QS performs best among the proposed algorithms, followed by MPH-BMH, with MP-KR the slowest. MPH-QS detects a large number of signature patterns in a short time compared to the other two algorithms. This finding shows that the multi-pattern matching algorithms are more efficient in high-speed networks.
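As background for the BMH-based variant above, the classical single-pattern Boyer-Moore-Horspool search can be sketched as follows; the function name is illustrative, and the multi-pattern extensions in the paper build further machinery on top of this idea.

```python
def horspool_search(text, pattern):
    """Boyer-Moore-Horspool: return all start indices of pattern in text."""
    m, n = len(pattern), len(text)
    if m == 0 or n < m:
        return []
    # Bad-character shift table: distance from each character's last
    # occurrence (excluding the final position) to the pattern's end.
    shift = {c: m - i - 1 for i, c in enumerate(pattern[:-1])}
    hits, pos = [], 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            hits.append(pos)
        # Shift by the table entry of the text character aligned with the
        # pattern's last position (a full m if that character never occurs).
        pos += shift.get(text[pos + m - 1], m)
    return hits
```

The bad-character table lets the search skip up to `m` characters per alignment, which is why Horspool-style scanning is attractive for high-throughput signature matching.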
International Nuclear Information System (INIS)
We present a new quasi-stellar object (QSO) selection algorithm using a Support Vector Machine, a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars, and microlensing events using 58 known QSOs, 1629 variable stars, and 4288 non-variables in the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ∼80% of known QSOs with a 25% false-positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) data set, which consists of 40 million light curves, and found 1620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false-positive rate, we crossmatched the candidates with astronomical catalogs including the Spitzer Surveying the Agents of a Galaxy's Evolution LMC catalog and a few X-ray catalogs. The results further suggest that the majority of the candidates, more than 70%, are QSOs.
Effective Comparison and Evaluation of DES and Rijndael Algorithm (AES)
Directory of Open Access Journals (Sweden)
Penchalaiah, N.
2010-08-01
This paper discusses the effective coding of the Rijndael algorithm, the Advanced Encryption Standard (AES), in the hardware description language Verilog. In this work we analyze the structure and design of the new AES following three criteria: (a) resistance against all known attacks; (b) speed and code compactness on a wide range of platforms; and (c) design simplicity; as well as its similarities and dissimilarities with other symmetric ciphers. On the other hand, the principal advantages of the new AES with respect to DES, as well as its limitations, are investigated. Thus, for example, the fact that the new cipher and its inverse use different components, which practically eliminates the possibility of weak and semi-weak keys as exist for DES, and the non-linearity of the key expansion, which practically eliminates the possibility of equivalent keys, are two of the principal advantages of the new cipher. Finally, the implementation aspects of the Rijndael cipher and its inverse are treated. Although Rijndael is well suited to efficient implementation on a wide range of processors and in dedicated hardware, we have concentrated our study on 8-bit processors, typical of current smart cards, and on 32-bit processors, typical of PCs.
Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison
Directory of Open Access Journals (Sweden)
Olympia Roeva
2005-12-01
In this paper the problem of parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal, so traditional (gradient-based) local optimization methods fail to arrive at satisfactory solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms have proved very suitable for the optimization of highly non-linear problems with many variables, and their global search behavior and robustness make them advantageous for parameter identification of fermentation models. A comparison between simple, modified and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm. The considered algorithms converged to very similar cost values, but the modified algorithm is several times faster than the other two.
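The GA-based parameter estimation workflow described above can be sketched with a minimal real-coded GA fitted to a toy model. Everything here is illustrative: the operators, constants, and the two-parameter exponential target stand in for the paper's fermentation model and its six parameters.

```python
import numpy as np

def simple_ga(cost, bounds, pop_size=60, n_gen=150, mut_sigma=0.02, rng=None):
    """Minimal real-coded GA: binary tournament selection, blend
    crossover, Gaussian mutation, and one-elite survival."""
    rng = np.random.default_rng(rng)
    lo, hi = np.asarray(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    for _ in range(n_gen):
        costs = np.array([cost(p) for p in pop])
        elite = pop[np.argmin(costs)].copy()
        # Binary tournament: each child slot keeps the fitter of two picks.
        a, b = rng.integers(pop_size, size=(2, pop_size))
        parents = pop[np.where(costs[a] < costs[b], a, b)]
        # Blend crossover against a reversed copy of the parent pool.
        mix = rng.uniform(size=pop.shape)
        pop = mix * parents + (1.0 - mix) * parents[::-1]
        # Gaussian mutation scaled to the search box, then clip to bounds.
        pop += rng.normal(0.0, mut_sigma, pop.shape) * (hi - lo)
        pop = np.clip(pop, lo, hi)
        pop[0] = elite                      # elitism: never lose the best
    costs = np.array([cost(p) for p in pop])
    return pop[np.argmin(costs)]

# Toy parameter estimation: recover (k1, k2) of y = k1 * exp(-k2 * t).
t = np.linspace(0.0, 1.0, 20)
target = 2.0 * np.exp(-1.5 * t)
mse = lambda p: np.mean((p[0] * np.exp(-p[1] * t) - target) ** 2)
best = simple_ga(mse, bounds=[(0.0, 5.0), (0.0, 5.0)], rng=1)
```

Because the cost function is treated as a black box, the same loop applies when `cost` integrates a differential-algebraic model, which is what makes GAs attractive for the ill-conditioned, multimodal estimation problems the abstract describes.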
Ridge extraction algorithms for one-dimensional continuous wavelet transform: a comparison
Energy Technology Data Exchange (ETDEWEB)
Abid, A Z; Gdeisat, M A; Burton, D R; Lalor, M J [General Engineering Research Institute (GERI), Liverpool John Moores University, Liverpool L3 3AF (United Kingdom)
2007-07-15
This paper compares three different algorithms that are used to detect the phase of a fringe pattern from the ridge of its wavelet transform. A Morlet wavelet is adapted for the continuous wavelet transform of the fringe pattern. A numerical simulation is used to perform this comparison.
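One of the simplest ridge-based phase detectors, the "maximum modulus" approach, can be sketched as follows: compute a Morlet CWT, pick the scale of maximum magnitude at each sample, and read the phase there. The function names, wavelet normalization, and test signal are illustrative assumptions, not the paper's specific algorithms.

```python
import numpy as np

def morlet(t, scale, w0=6.0):
    """Complex Morlet wavelet sampled at times t for a given scale."""
    x = t / scale
    return np.exp(1j * w0 * x) * np.exp(-0.5 * x**2) / np.sqrt(scale)

def cwt_ridge_phase(signal, scales, w0=6.0):
    """Maximum-modulus ridge extraction: at each sample take the scale
    where |CWT| peaks and read the wrapped phase of that coefficient."""
    n = len(signal)
    t = np.arange(n) - n // 2
    coeffs = np.stack([
        np.convolve(signal, np.conj(morlet(t, s, w0))[::-1], mode="same")
        for s in scales
    ])
    ridge = np.abs(coeffs).argmax(axis=0)          # best scale per sample
    return np.angle(coeffs[ridge, np.arange(n)])   # wrapped ridge phase

# A constant-frequency fringe: away from the borders, the recovered
# phase should advance at the carrier rate of 0.4 rad/sample.
x = np.cos(0.4 * np.arange(256))
phase = cwt_ridge_phase(x, scales=np.arange(4, 40))
```

Because the analytic Morlet kernel responds to the positive-frequency component of the fringe, unwrapping the ridge phase recovers the local phase of the pattern up to a constant offset.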
Tang, Jie; Nett, Brian E.; Chen, Guang-Hong
2009-10-01
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
Gallenne, A.; Mérand, A.; Kervella, P; Monnier, J. D.; Schaefer, G. H.; Baron, F; Breitfelder, J.; Bouquin, J. B. Le; Roettenbacher, R. M.; Gieren, W.; Pietrzynski, G.; McAlister, H.; Brummelaar, T. ten; Sturmann, J.; Sturmann, L.
2015-01-01
Long-baseline interferometry is an important technique to spatially resolve binary or multiple systems in close orbits. By combining several telescopes together and spectrally dispersing the light, it is possible to detect faint components around bright stars. Aims. We provide a rigorous and detailed method to search for high-contrast companions around stars, determine the detection level, and estimate the dynamic range from interferometric observations. We developed the code CANDID (Companio...
A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller
Energy Technology Data Exchange (ETDEWEB)
Tapp, P.A.
1992-04-01
A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms is discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs.
A benchmark comparison of Monte Carlo particle transport algorithms for binary stochastic mixtures
International Nuclear Information System (INIS)
We numerically investigate the accuracy of two Monte Carlo algorithms originally proposed by Zimmerman and Zimmerman and Adams for particle transport through binary stochastic mixtures. We assess the accuracy of these algorithms using a standard suite of planar geometry incident angular flux benchmark problems and a new suite of interior source benchmark problems. In addition to comparisons of the ensemble-averaged leakage values, we compare the ensemble-averaged material scalar flux distributions. Both Monte Carlo transport algorithms robustly produce physically realistic scalar flux distributions for the benchmark transport problems examined. The base Monte Carlo algorithm reproduces the standard Levermore-Pomraning model results. The improved Monte Carlo algorithm generally produces significantly more accurate leakage values and also significantly more accurate material scalar flux distributions. We also present deterministic atomic mix solutions of the benchmark problems for comparison with the benchmark and the Monte Carlo solutions. Both Monte Carlo algorithms are generally significantly more accurate than the atomic mix approximation for the benchmark suites examined.
A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller
International Nuclear Information System (INIS)
A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms are discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs.
A Damage Resistance Comparison Between Candidate Polymer Matrix Composite Feedline Materials
Nettles, A. T
2000-01-01
As part of NASA's focused technology programs for future reusable launch vehicles, a task is underway to study the feasibility of using polymer matrix composite feedlines instead of metal ones on propulsion systems. This is desirable to reduce weight and manufacturing costs. The task consists of comparing several prototype composite feedlines made by various methods: electron-beam curing, standard hand lay-up and autoclave cure, solvent-assisted resin transfer molding, and thermoplastic tape laying. One of the critical technology drivers for composite components is resistance to foreign object damage. This paper presents results of an experimental study of the damage resistance of the candidate materials from which the prototype feedlines are manufactured. The materials examined all have a 5-harness weave of IM7 as the fiber constituent (except for the thermoplastic, which is unidirectional tape laid up in a bidirectional configuration). The resins tested were 977-6, PR 520, SE-SA-1, RS-E3 (e-beam curable), Cycom 823 and PEEK. The results showed that 977-6 and PEEK were the most damage resistant in all tested cases.
Energy Technology Data Exchange (ETDEWEB)
Carroll, Mark C
2014-09-01
High-purity graphite is the core structural material of choice in the Very High Temperature Reactor (VHTR) design, a graphite-moderated, helium-cooled configuration that is capable of producing thermal energy for power generation as well as process heat for industrial applications that require temperatures higher than the outlet temperatures of present nuclear reactors. The Baseline Graphite Characterization Program is endeavoring to minimize the conservative estimates of as-manufactured mechanical and physical properties in nuclear-grade graphites by providing comprehensive data that captures the level of variation in measured values. In addition to providing a thorough comparison between these values in different graphite grades, the program is also carefully tracking individual specimen source, position, and orientation information in order to provide comparisons both in specific properties and in the associated variability between different lots, different billets, and different positions from within a single billet. This report is a preliminary comparison between each of the grades of graphite that are considered “candidate” grades from four major international graphite producers. These particular grades (NBG-18, NBG-17, PCEA, IG-110, and 2114) are the major focus of the evaluations presently underway on irradiated graphite properties through the series of Advanced Graphite Creep (AGC) experiments. NBG-18, a medium-grain pitch coke graphite from SGL from which billets are formed via vibration molding, was the favored structural material in the pebble-bed configuration. NBG-17 graphite from SGL is essentially NBG-18 with the grain size reduced by a factor of two. PCEA, petroleum coke graphite from GrafTech with a similar grain size to NBG-17, is formed via an extrusion process and was initially considered the favored grade for the prismatic layout. IG-110 and 2114, from Toyo Tanso and Mersen (formerly Carbone Lorraine), respectively, are fine-grain grades
International Nuclear Information System (INIS)
Development of attenuated mutants for use as vaccines is in progress for other viruses, including influenza, rotavirus, varicella-zoster, cytomegalovirus, and hepatitis-A virus (HAV). Attenuated viruses may be derived from naturally occurring mutants that infect human or nonhuman hosts. Alternatively, attenuated mutants may be generated by passage of wild-type virus in cell culture. Production of attenuated viruses in cell culture is a laborious and empiric process. Despite previous empiric successes, understanding the molecular basis for attenuation of vaccine viruses could facilitate future development and use of live-virus vaccines. Comparison of the complete nucleotide sequences of wild-type (virulent) and vaccine (attenuated) viruses has been reported for polioviruses and yellow fever virus. Here, the authors compare the nucleotide sequence of wild-type HAV HM-175 with that of a candidate vaccine derivative
VennPainter: A Tool for the Comparison and Identification of Candidate Genes Based on Venn Diagrams.
Directory of Open Access Journals (Sweden)
Guoliang Lin
Full Text Available VennPainter is a program for depicting unique and shared sets of gene lists and generating Venn diagrams, using the Qt C++ framework. The software produces Classic Venn, Edwards' Venn and Nested Venn diagrams and allows for eight sets in graph mode and 31 sets in data-processing mode only. In comparison, previous programs produce Classic Venn and Edwards' Venn diagrams and allow for a maximum of six sets. The software incorporates user-friendly features and works in Windows, Linux and Mac OS. Its graphical interface does not require a user to have programming skills. Users can modify diagram content for up to eight datasets because of the Scalable Vector Graphics output. VennPainter can provide output results in vertical, horizontal and matrix formats, which facilitates sharing datasets as required for further identification of candidate genes. Users can obtain gene lists from shared sets by clicking the numbers on the diagram. Thus, VennPainter is an easy-to-use, highly efficient, cross-platform and powerful program that provides a more comprehensive tool for identifying candidate genes and visualizing the relationships among genes or gene families in comparative analysis.
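As a sketch of the set algebra behind such Venn-diagram tools, the snippet below computes the exclusive region for every combination of input gene lists; the gene names and list contents are made up for illustration and are not from the paper.

```python
from itertools import combinations

def venn_regions(named_sets):
    """Map each combination of list names to the genes found in exactly
    those lists and no others (the disjoint regions of a Venn diagram)."""
    regions = {}
    names = list(named_sets)
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            inside = set.intersection(*(named_sets[n] for n in combo))
            others = [named_sets[n] for n in names if n not in combo]
            outside = set.union(*others) if others else set()
            region = inside - outside
            if region:
                regions[combo] = region
    return regions

# hypothetical gene lists
lists = {
    "A": {"g1", "g2", "g3"},
    "B": {"g2", "g3", "g4"},
    "C": {"g3", "g5"},
}
print(venn_regions(lists)[("A", "B", "C")])  # → {'g3'}
```

Each region corresponds to one clickable number in a Venn diagram; summing the region sizes recovers the size of the union of all lists.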
VennPainter: A Tool for the Comparison and Identification of Candidate Genes Based on Venn Diagrams.
Lin, Guoliang; Chai, Jing; Yuan, Shuo; Mai, Chao; Cai, Li; Murphy, Robert W; Zhou, Wei; Luo, Jing
2016-01-01
VennPainter is a program for depicting unique and shared sets of gene lists and generating Venn diagrams, using the Qt C++ framework. The software produces Classic Venn, Edwards' Venn and Nested Venn diagrams and allows for eight sets in graph mode and 31 sets in data-processing mode only. In comparison, previous programs produce Classic Venn and Edwards' Venn diagrams and allow for a maximum of six sets. The software incorporates user-friendly features and works in Windows, Linux and Mac OS. Its graphical interface does not require a user to have programming skills. Users can modify diagram content for up to eight datasets because of the Scalable Vector Graphics output. VennPainter can provide output results in vertical, horizontal and matrix formats, which facilitates sharing datasets as required for further identification of candidate genes. Users can obtain gene lists from shared sets by clicking the numbers on the diagram. Thus, VennPainter is an easy-to-use, highly efficient, cross-platform and powerful program that provides a more comprehensive tool for identifying candidate genes and visualizing the relationships among genes or gene families in comparative analysis. PMID:27120465
A study and implementation of algorithm for automatic ECT result comparison
International Nuclear Information System (INIS)
An automatic ECT result comparison algorithm was developed and implemented in a computer language to remove human error from the manual comparison of large amounts of data. The file structures of two ECT programs (Eddy net and ECT IDS), each with its own unique format, were analyzed so that their files could be opened and their data loaded into PC memory. The comparison algorithm was defined graphically for easy conversion into a PC programming language. The automatic result program was written in C, which is suitable for future code management and offers a structured programming style and fast development potential. The program has an MS Excel export function, useful for additional analysis with external software, and an intuitive result visualization function with user-friendly color mapping that supports efficient analysis.
Performance Comparison of Reconstruction Algorithms in Discrete Blind Multi-Coset Sampling
DEFF Research Database (Denmark)
Grigoryan, Ruben; Arildsen, Thomas; Tandur, Deepaknath; Larsen, Torben
This paper investigates the performance of different reconstruction algorithms in discrete blind multi-coset sampling. The multi-coset scheme is a promising compressed sensing architecture that can replace traditional Nyquist-rate sampling in applications with multi-band frequency-sparse signals. The performance of the existing compressed sensing reconstruction algorithms has not yet been investigated for discrete multi-coset sampling. We compare the following algorithms: orthogonal matching pursuit, multiple signal classification, subspace-augmented multiple signal classification, focal under-determined system solver and basis pursuit denoising. The comparison is performed via numerical simulations under different sampling conditions. According to the simulations, the focal under-determined system solver outperforms all other algorithms for signals with low signal-to-noise ratio. In other…
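A minimal sketch of one of the compared methods, orthogonal matching pursuit, in the usual compressed-sensing formulation y = Ax with k-sparse x; the random unit-norm dictionary, sparsity level and coefficients below are assumptions for illustration, not the paper's simulation setup.

```python
import numpy as np

def omp(A, y, k):
    """Greedy sparse recovery: pick the column most correlated with the
    current residual, then re-fit all picked columns by least squares."""
    residual, support = y.astype(float).copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
    x = np.zeros(A.shape[1])
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 60))
A /= np.linalg.norm(A, axis=0)          # unit-norm atoms
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -2.0, 0.5]  # hypothetical 3-sparse signal
x_hat = omp(A, A @ x_true, k=3)
print(np.flatnonzero(x_hat))
```

With noiseless measurements and an incoherent dictionary, the recovered support typically matches the true one; the other compared methods (MUSIC variants, FOCUSS, BPDN) solve the same recovery problem with different strategies.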
Lu, Jing; Chen, Lei; Yin, Jun; Huang, Tao; Bi, Yi; Kong, Xiangyin; Zheng, Mingyue; Cai, Yu-Dong
2016-04-01
Lung cancer, characterized by uncontrolled cell growth in the lung tissue, is the leading cause of global cancer deaths. Until now, effective treatment of this disease is limited. Many synthetic compounds have emerged with the advancement of combinatorial chemistry. Identification of effective lung cancer candidate drug compounds among them is a great challenge. Thus, it is necessary to build effective computational methods that can assist us in selecting for potential lung cancer drug compounds. In this study, a computational method was proposed to tackle this problem. The chemical-chemical interactions and chemical-protein interactions were utilized to select candidate drug compounds that have close associations with approved lung cancer drugs and lung cancer-related genes. A permutation test and K-means clustering algorithm were employed to exclude candidate drugs with low possibilities to treat lung cancer. The final analysis suggests that the remaining drug compounds have potential anti-lung cancer activities and most of them have structural dissimilarity with approved drugs for lung cancer. PMID:26849843
Directory of Open Access Journals (Sweden)
B. Y. Volochiy
2014-12-01
Full Text Available Introduction. Providing the necessary efficiency indexes of a complex radioelectronic system through the design of its behavior algorithm is a topical task. Several methods are used for solving this task, and an intercomparison of them is required. Main part. For the behavior algorithm of a complex radioelectronic system, four mathematical models were built by two known methods (the space-of-states method and the algorithmic-algebras method) and by the new scheme-of-paths method. A scheme of paths is a compact representation of the complex radioelectronic system's behavior and is formed easily and directly from the behavior algorithm's flowchart. Efficiency indexes of the tested behavior algorithm, the probability and mean time of successful performance, were obtained. An intercomparison of the estimated results was carried out. Conclusion. The model of the behavior algorithm constructed using the scheme-of-paths method gives efficiency-index values commensurate with those of the mathematical models of the same behavior algorithm obtained by the space-of-states and algorithmic-algebras methods.
Directory of Open Access Journals (Sweden)
Devesh Batra
2014-11-01
Full Text Available The Internet paved the way for information sharing all over the world decades ago, and its popularity for distribution of data has spread like wildfire ever since. Data in the form of images, sounds, animations and videos is gaining users' preference in comparison to plain text all across the globe. Despite unprecedented progress in the fields of data storage, computing speed and data transmission speed, the demands of available data and its size (due to the increase in both quality and quantity) continue to overpower the supply of resources. One of the reasons for this may be how the uncompressed data is compressed in order to send it across the network. This paper compares the two most widely used training algorithms for multilayer perceptron (MLP) image compression: the Levenberg-Marquardt algorithm and the Scaled Conjugate Gradient algorithm. We test the performance of the two training algorithms by compressing the standard test image (Lena or Lenna) in terms of accuracy and speed. Based on our results, we conclude that both algorithms were comparable in terms of speed and accuracy. However, the Levenberg-Marquardt algorithm has shown slightly better performance in terms of accuracy (as found in the average training accuracy and mean squared error), whereas the Scaled Conjugate Gradient algorithm fared better in terms of speed (as found in the average training iteration) on a simple MLP structure (2 hidden layers).
Comparison of two algorithms in the automatic segmentation of blood vessels in fundus images
LeAnder, Robert; Chowdary, Myneni Sushma; Mokkapati, Swapnasri; Umbaugh, Scott E.
2008-03-01
Effective timing and treatment are critical to saving the sight of patients with diabetes. Lack of screening, as well as a shortage of ophthalmologists, contributes to approximately 8,000 cases per year of people who lose their sight to diabetic retinopathy, the leading cause of new cases of blindness [1] [2]. Timely treatment for diabetic retinopathy prevents severe vision loss in over 50% of eyes tested [1]. Fundus images can provide information for detecting and monitoring eye-related diseases, like diabetic retinopathy, which, if detected early, may help prevent vision loss. Damaged blood vessels can indicate the presence of diabetic retinopathy [9]. So, early detection of damaged vessels in retinal images can provide valuable information about the presence of disease, thereby helping to prevent vision loss. Purpose: The purpose of this study was to compare the effectiveness of two blood vessel segmentation algorithms. Methods: Fifteen fundus images from the STARE database were used to develop two algorithms using the CVIPtools software environment. Another set of fifteen images were derived from the first fifteen and contained ophthalmologists' hand-drawn tracings over the retinal vessels. The ophthalmologists' tracings were used as the "gold standard" for perfect segmentation and compared with the segmented images that were output by the two algorithms. Comparisons between the segmented and the hand-drawn images were made using Pratt's Figure of Merit (FOM), Signal-to-Noise Ratio (SNR) and Root Mean Square (RMS) Error. Results: Algorithm 2 has an FOM that is 10% higher than Algorithm 1. Algorithm 2 has a 6%-higher SNR than Algorithm 1. Algorithm 2 has only 1.3% more RMS error than Algorithm 1. Conclusions: Algorithm 1 extracted most of the blood vessels with some missing intersections and bifurcations. Algorithm 2 extracted all the major blood vessels, but eradicated some vessels as well. Algorithm 2 outperformed Algorithm 1 in terms of visual clarity, FOM…
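The pixel-wise scoring described above, segmented output against a hand-drawn gold standard, can be sketched with textbook definitions of RMS error and SNR; these generic forms are assumptions and not necessarily the exact CVIPtools formulas, and the tiny binary "images" are made up.

```python
import numpy as np

def rms_error(gold, seg):
    """Root mean square pixel difference between gold standard and segmentation."""
    diff = gold.astype(float) - seg.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))

def snr_db(gold, seg):
    """Ratio of gold-standard signal power to difference (noise) power, in dB."""
    gold = gold.astype(float)
    noise = gold - seg.astype(float)
    noise_power = float(np.sum(noise ** 2))
    if noise_power == 0.0:
        return float("inf")             # identical images: no noise
    return float(10.0 * np.log10(np.sum(gold ** 2) / noise_power))

gold = np.array([[0, 1, 1], [0, 1, 0], [0, 0, 0]])  # toy vessel mask
seg  = np.array([[0, 1, 0], [0, 1, 0], [0, 0, 0]])  # one pixel missed
print(rms_error(gold, seg))  # → one wrong pixel out of nine: sqrt(1/9) ≈ 0.333
```

Pratt's Figure of Merit additionally weights the distance of each detected pixel from the nearest true edge pixel, so it rewards spatial closeness rather than exact overlap.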
International Nuclear Information System (INIS)
The objective of this work is to present the capabilities of the NUMERICS web platform for evaluation of the performance of image registration algorithms. The NUMERICS platform is a web accessible tool which provides access to dedicated numerical algorithms for registration and comparison of medical images (http://numerics.phys.uni-sofia.bg). The platform allows comparison of noisy medical images by means of different types of image comparison algorithms, which are based on statistical tests for outliers. The platform also allows 2D image registration with different techniques like Elastic Thin-Plate Spline registration, registration based on rigid transformations, affine transformations, as well as non-rigid image registration based on Mobius transformations. In this work we demonstrate how the platform can be used as a tool for evaluation of the quality of the image registration process. We demonstrate performance evaluation of a deformable image registration technique based on Mobius transformations. The transformations are applied with appropriate cost functions like: Mutual information, Correlation coefficient, Sum of Squared Differences. The accent is on the results provided by the platform to the user and their interpretation in the context of the performance evaluation of 2D image registration. The NUMERICS image registration and image comparison platform provides detailed statistical information about submitted image registration jobs and can be used to perform quantitative evaluation of the performance of different image registration techniques. (authors)
Directory of Open Access Journals (Sweden)
Nur Ariffin Mohd Zin
2012-01-01
Full Text Available This paper presents a comparative study of three proposed techniques for solving the Travelling Salesman Problem: exhaustive search, a heuristic, and a genetic algorithm. Each solution finds a path through the 25 available contiguous cities in England, and each solution is written in Prolog. Comparisons were made with emphasis on time consumed and closeness to the optimal solution. Based on the experiments, we found that the heuristic is very promising in terms of time taken, while the genetic algorithm stands out on large numbers of traversals by producing the shortest path among the three.
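The exhaustive-versus-heuristic trade-off can be sketched on a tiny symmetric TSP instance; the city coordinates are invented for illustration (the paper's 25 English cities would make exhaustive search infeasible, which is exactly its point).

```python
from itertools import permutations
from math import dist

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (8, 0)]  # hypothetical coordinates

def tour_length(order):
    """Total length of the closed tour visiting cities in the given order."""
    return sum(dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def exhaustive(start=0):
    """Try every permutation of the remaining cities: optimal but O(n!)."""
    best = min(permutations(range(1, len(cities))),
               key=lambda p: tour_length((start,) + p))
    return (start,) + best

def nearest_neighbour(start=0):
    """Greedy heuristic: always visit the closest unvisited city, O(n^2)."""
    unvisited = set(range(len(cities))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda c: dist(cities[tour[-1]], cities[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tuple(tour)

print(tour_length(exhaustive()) <= tour_length(nearest_neighbour()))  # → True
```

The heuristic tour is never shorter than the exhaustive optimum; a genetic algorithm sits between the two, trading guaranteed optimality for scalable search.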
Directory of Open Access Journals (Sweden)
Miguel G. Villarreal-Cervantes
2012-10-01
Full Text Available Mobile robots with omnidirectional wheels are expected to perform a wide variety of movements in a narrow space. However, kinematic mobility and dexterity have not been clearly identified as objectives to be considered when designing omnidirectional redundant robots. In light of this fact, this article proposes to maximize the dexterity of the mobile robot by properly locating the omnidirectional wheels. In addition, four hybrid differential evolution (DE) algorithms, based on the synergetic integration of different kinds of mutation and crossover, are presented. A comparison of metaheuristic and gradient-based algorithms for kinematic dexterity maximization is also presented.
A comparison between two algorithms for the retrieval of soil moisture using AMSR-E data
Directory of Open Access Journals (Sweden)
Simonetta Paloscia
2015-04-01
Full Text Available A comparison between two algorithms for estimating soil moisture with microwave satellite data was carried out by using the datasets collected on the four Agricultural Research Service (ARS) watershed sites in the US from 2002 to 2009. These sites collectively represent a wide range of ground conditions and precipitation regimes (from natural to agricultural surfaces and from desert to humid regions) and provide long-term in-situ data. One of the algorithms is the artificial neural network-based algorithm developed by the Institute of Applied Physics of the National Research Council (IFAC-CNR) (HydroAlgo) and the second one is the Single Channel Algorithm (SCA) developed by USDA-ARS (US Department of Agriculture-Agricultural Research Service). Both algorithms are based on the same radiative transfer equations but are implemented very differently. Both made use of datasets provided by the Japanese Aerospace Exploration Agency (JAXA), within the framework of the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) and Global Change Observation Mission-Water (GCOM/AMSR-2) programs. Results demonstrated that both algorithms perform better than the mission-specified accuracy, with Root Mean Square Error (RMSE) ≤0.06 m3/m3 and Bias <0.02 m3/m3. These results expand on previous investigations using different algorithms and sites. The novelty of the paper consists of the fact that it is the first intercomparison of the HydroAlgo algorithm with a more traditional retrieval algorithm, which offers an approach to higher spatial resolution products.
Li Li; Guo Yang; Wu Wenwu; Shi Youyi; Cheng Jian; Tao Shiheng
2012-01-01
Abstract Background Several biclustering algorithms have been proposed to identify biclusters, in which genes share similar expression patterns across a number of conditions. However, different algorithms would yield different biclusters and further lead to distinct conclusions. Therefore, some testing and comparisons between these algorithms are strongly required. Methods In this study, five biclustering algorithms (i.e. BIMAX, FABIA, ISA, QUBIC and SAMBA) were compared with each other in th...
Comparison of genetic algorithm and harmony search for generator maintenance scheduling
International Nuclear Information System (INIS)
GMS (Generator Maintenance Scheduling) ranks very high in the decision making of power generation management. The generator maintenance schedule decides the time period of maintenance tasks, and a reliable reserve margin is also maintained during this period. In this paper, a comparison of the GA (Genetic Algorithm) and HS (Harmony Search) algorithms is presented for solving the generator maintenance scheduling problem for WAPDA (Water And Power Development Authority) Pakistan. GA is a search procedure used in search problems to compute exact and optimized solutions, and is considered a global search heuristic technique. The HS algorithm is quite efficient because its convergence rate is very fast; it is based on the concept of the music improvisation process of searching for a perfect state of harmony. The two algorithms generate feasible and optimal solutions and overcome the limitations of conventional methods, including extensive computational effort that increases exponentially as the size of the problem increases. The proposed methods are tested, validated and compared on the WAPDA electric system. (author)
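A generic harmony search skeleton of the kind compared in the paper is sketched below, minimizing a toy continuous cost function rather than a real maintenance schedule; the memory size, HMCR/PAR rates and bounds are illustrative assumptions.

```python
import random

def harmony_search(cost, n_vars, bounds, hms=10, hmcr=0.9, par=0.3,
                   iters=2000, seed=1):
    """Minimal harmony search: improvise a new harmony from memory
    (rate hmcr), pitch-adjust it (rate par), and replace the worst
    stored harmony whenever the new one is better."""
    rng = random.Random(seed)
    low, high = bounds
    memory = [[rng.uniform(low, high) for _ in range(n_vars)]
              for _ in range(hms)]
    memory.sort(key=cost)
    for _ in range(iters):
        new = []
        for i in range(n_vars):
            if rng.random() < hmcr:                 # draw from harmony memory
                v = memory[rng.randrange(hms)][i]
                if rng.random() < par:              # pitch adjustment
                    v += rng.uniform(-0.1, 0.1) * (high - low)
            else:                                   # random improvisation
                v = rng.uniform(low, high)
            new.append(min(max(v, low), high))      # clamp to bounds
        if cost(new) < cost(memory[-1]):            # replace worst harmony
            memory[-1] = new
            memory.sort(key=cost)
    return memory[0]

sphere = lambda x: sum(v * v for v in x)            # toy cost function
best = harmony_search(sphere, n_vars=3, bounds=(-5, 5))
print(sphere(best))
```

For a real GMS problem the decision variables would be discrete maintenance start weeks and the cost would encode reserve-margin and crew constraints, but the improvise/compare/replace loop is the same.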
Directory of Open Access Journals (Sweden)
Ota Motonori
2008-08-01
Full Text Available Abstract Background Understanding how proteins fold is essential to our quest to discover how life works at the molecular level. Current computation power enables researchers to produce a huge amount of folding simulation data. Hence there is a pressing need to be able to interpret and identify novel folding features from such data. Results In this paper, we model each folding trajectory as a multi-dimensional curve. We then develop an effective multiple curve comparison (MCC) algorithm, called the enhanced partial order (EPO) algorithm, to extract features from a set of diverse folding trajectories, including both successful and unsuccessful simulation runs. The EPO algorithm addresses several new challenges presented by comparing high-dimensional curves coming from folding trajectories. A detailed case study on the miniprotein Trp-cage 1 demonstrates that our algorithm can detect similarities at a rather low level and extract biologically meaningful folding events. Conclusion The EPO algorithm is general and applicable to a wide range of applications. We demonstrate its generality and effectiveness by applying it to the alignment of multiple protein structures with low similarities. For users' convenience, we provide a web server for the algorithm at http://db.cse.ohio-state.edu/EPO.
K-Means Re-Clustering: Algorithmic Options with Quantifiable Performance Comparisons
Energy Technology Data Exchange (ETDEWEB)
Meyer, A W; Paglieroni, D; Asteneh, C
2002-12-17
This paper presents various architectural options for implementing a K-Means Re-Clustering algorithm suitable for unsupervised segmentation of hyperspectral images. Performance metrics are developed based upon quantitative comparisons of convergence rates and segmentation quality. A methodology for making these comparisons is developed and used to establish K values that produce the best segmentations with minimal processing requirements. Convergence rates depend on the initial choice of cluster centers. Consequently, this same methodology may be used to evaluate the effectiveness of different initialization techniques.
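The dependence of convergence on the initial cluster centers, which the paper's methodology quantifies, can be illustrated with a minimal Lloyd's-algorithm k-means on synthetic 2-D data; the data and both initializations below are made up and unrelated to hyperspectral imagery.

```python
import numpy as np

def kmeans(X, centers, max_iter=100):
    """Plain Lloyd's k-means; returns final centers, labels, iterations used."""
    centers = centers.astype(float)
    for it in range(1, max_iter + 1):
        # assign each point to its nearest center
        labels = np.argmin(((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1),
                           axis=1)
        # recompute centers; keep a center in place if its cluster is empty
        new = np.array([X[labels == k].mean(axis=0) if np.any(labels == k)
                        else centers[k] for k in range(len(centers))])
        if np.allclose(new, centers):
            return new, labels, it
        centers = new
    return centers, labels, max_iter

rng = np.random.default_rng(42)
X = np.vstack([rng.normal(0, 0.3, (50, 2)),    # cluster around (0, 0)
               rng.normal(3, 0.3, (50, 2))])   # cluster around (3, 3)
good_init = np.array([[0.0, 0.0], [3.0, 3.0]]) # near the true means
mid_init  = np.array([[1.4, 1.4], [1.6, 1.6]]) # both near the midpoint
_, labels, iters_good = kmeans(X, good_init)
_, _, iters_mid = kmeans(X, mid_init)
print(iters_good, iters_mid)
```

Counting iterations to convergence across initializations, as done here, is one simple version of the convergence-rate metric the paper develops for choosing K and initialization schemes.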
A Comparison of the Machine Learning Algorithm for Evaporation Duct Estimation
Directory of Open Access Journals (Sweden)
C. Yang
2013-06-01
Full Text Available In this research, a comparison of the relevance vector machine (RVM), least squares support vector machine (LSSVM) and radial basis function neural network (RBFNN) for evaporation duct estimation is presented. The parabolic equation model is adopted as the forward propagation model and is used to establish the training database between the radar sea clutter power and the evaporation duct height. The RVM, LSSVM and RBFNN are compared for evaporation duct estimation via experimental and simulation studies, and a statistical analysis method is employed to analyze the performance of the three machine learning algorithms in the simulation study. The analysis demonstrates that the M profile estimated by the RBFNN matches the measured profile relatively well in the experimental study; in the simulation study, the LSSVM is the most precise of the three machine learning algorithms, and the performance of the RVM is essentially identical to that of the RBFNN.
Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region
Directory of Open Access Journals (Sweden)
Matko Šarić
2008-06-01
Full Text Available The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We proposed a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant-colored pixels to the total number of pixels. With this approach the detection of gradual transitions is improved by decreasing the number of false positives caused by some camera operations. We also compared the performance of our algorithm with that of the standard twin-comparison method.
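A hedged sketch of the twin-comparison idea: a high threshold flags cuts, a low threshold opens a candidate gradual transition, and the accumulated difference over the candidate span is then checked against the high threshold. The frame "histograms" and thresholds below are toy values, and the dominant-color refinement from the paper is omitted.

```python
import numpy as np

def twin_comparison(hists, t_low, t_high):
    """Detect abrupt cuts and gradual transitions from frame histograms."""
    cuts, graduals = [], []
    start = None
    for i in range(1, len(hists)):
        d = np.abs(hists[i] - hists[i - 1]).sum()   # frame-to-frame difference
        if d >= t_high:
            cuts.append(i)                          # abrupt cut
            start = None
        elif d >= t_low:
            if start is None:
                start = i                           # open a candidate gradual
        else:
            if start is not None:
                # accumulated change over the candidate span
                acc = np.abs(hists[i - 1] - hists[start - 1]).sum()
                if acc >= t_high:
                    graduals.append((start, i - 1))
                start = None
    return cuts, graduals

# toy 2-bin histograms: stable, a cut at frame 3, a gradual change over 6-8
h = np.array([[10, 0], [10, 0], [10, 0], [0, 10], [0, 10], [0, 10],
              [3, 7], [6, 4], [9, 1], [9, 1]], dtype=float)
print(twin_comparison(h, t_low=4, t_high=12))  # → ([3], [(6, 8)])
```

Each per-frame difference in the gradual span (6) stays below the cut threshold (12), but the accumulated difference (18) exceeds it, which is what lets the method separate dissolves from ordinary motion.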
A comparison between genetic algorithms and neural networks for optimizing fuel recharges in BWR
International Nuclear Information System (INIS)
In this work, the results of a genetic algorithm (GA) and a recurrent multi-state neural network (RNRME) for optimizing the fuel reloads of 5 cycles of the Laguna Verde nuclear power plant (CNLV) are presented. The fuel reloads obtained by both methods are compared, and it was observed that the RNRME creates better fuel distributions than the GA. Moreover, a comparison of the utility of using one technique or the other is made. (Author)
Sedenka, V.; Z. Raida
2010-01-01
The paper deals with an efficiency comparison of two global evolutionary optimization methods implemented in MATLAB. Attention is turned to an elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) and a novel multi-objective Particle Swarm Optimization (PSO). The performance of the optimizers is compared on three different test functions and on a cavity resonator synthesis. The microwave resonator is modeled using the Finite Element Method (FEM). The hit rate and the quality of the Pareto front …
Comparison a Performance of Data Mining Algorithms (CPDMA) in Prediction Of Diabetes Disease
Karthikeyani, V.; Parvin Begum, I.
2013-01-01
Detection of knowledge patterns in clinical data through data mining. Data mining algorithms can be trained from past examples in clinical data and can model the often non-linear relationships between the independent and dependent variables. The resulting model represents formal knowledge, which can often provide a good analytic judgment. Classification is the most commonly used technique in medical data mining. This paper presents a comparison of the results of ten supervised data mining…
A comparison of semiglobal and local dense matching algorithms for surface reconstruction
Dall'Asta, E.; Roncella, R.
2014-06-01
Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper focuses on the comparison of some stereo matching algorithms (local and global) that are very popular in both photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM) method, which performs pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes such as MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparison will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
Marchant, B.; Platnick, S. E.; Arnold, T.; Meyer, K.; Riedi, J.
2014-12-01
Cloud thermodynamic phase (ice or liquid) discrimination is an important first step for cloud retrievals from passive sensors such as MODIS (Moderate-Resolution Imaging Spectroradiometer). Because ice and liquid phase clouds have very different scattering and absorbing properties, an incorrect cloud phase decision can lead to substantial uncertainties in the cloud optical and microphysical property products such as cloud optical thickness or effective particle radius. Furthermore, it is well-established that ice and liquid clouds have different impacts on the Earth's energy budget and hydrological cycle, thus accurately monitoring the spatial and temporal distribution of these clouds is of continued importance. For MODIS Collection 6 (C6), the shortwave-derived cloud thermodynamic phase algorithm used by the optical and microphysical property retrievals has been completely rewritten to improve the phase discrimination skill for a variety of cloudy scenes (e.g., thin/thick clouds, over ocean/land/desert/snow/ice surface, etc). To evaluate the performance of the C6 cloud phase algorithm, extensive granule-level and global comparisons have been conducted against the heritage C5 algorithm, CALIOP, and POLDER. A wholesale improvement is seen for C6 compared to C5. We will present an overview of the MODIS C6 cloud phase algorithm updates and their impacts on cloud retrieval statistics, as well as ongoing efforts to continue algorithm improvement.
Energy Technology Data Exchange (ETDEWEB)
Antoniucci, S.; Giannini, T.; Li Causi, G.; Lorenzetti, D., E-mail: simone.antoniucci@oa-roma.inaf.it, E-mail: teresa.giannini@oa-roma.inaf.it, E-mail: gianluca.licausi@oa-roma.inaf.it, E-mail: dario.lorenzetti@oa-roma.inaf.it [INAF-Osservatorio Astronomico di Roma, via Frascati 33, I-00040 Monte Porzio (Italy)
2014-02-10
Aiming to statistically study the variability in the mid-IR of young stellar objects, we have compared the 3.6, 4.5, and 24 μm Spitzer fluxes of 1478 sources belonging to the C2D (Cores to Disks) legacy program with the WISE fluxes at 3.4, 4.6, and 22 μm. From this comparison, we have selected a robust sample of 34 variable sources. Their variations were classified per spectral Class (according to the widely accepted scheme of Class I/flat/II/III protostars), and per star forming region. On average, the number of variable sources decreases with increasing Class and is definitely higher in Perseus and Ophiuchus than in Chamaeleon and Lupus. According to the paradigm Class ≡ Evolution, the photometric variability can be considered to be a feature more pronounced in less evolved protostars, and, as such, related to accretion processes. Moreover, our statistical findings agree with the current knowledge of star formation activity in different regions. The 34 selected variables were further investigated for similarities with known young eruptive variables, namely the EXors. In particular, we analyzed (1) the shape of the spectral energy distribution, (2) the IR excess over the stellar photosphere, (3) magnitude versus color variations, and (4) output parameters of model fitting. This first systematic search for EXors ends up with 11 bona fide candidates that can be considered as suitable targets for monitoring or future investigations.
Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS
Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.
Extremely Large Telescopes are very challenging concerning their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement the real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N²) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance, as well as the response in the presence of noise and under various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT with a total of 5402 actuators. These comparisons, made on a common simulator, highlight the pros and cons of the various methods and give us a better understanding of the type of reconstruction algorithm that an ELT demands.
International Nuclear Information System (INIS)
We are developing a cross-species comparison strategy to distinguish between cancer driver- and passenger gene alteration candidates, by utilizing the difference in genomic location of orthologous genes between the human and other mammals. As an initial test of this strategy, we conducted a pilot study with human colorectal cancer (CRC) and its mouse model C57BL/6J ApcMin/+, focusing on human 5q22.2 and 18q21.1-q21.2. We first performed bioinformatics analysis on the evolution of 5q22.2 and 18q21.1-q21.2 regions. Then, we performed exon-targeted sequencing, real time quantitative polymerase chain reaction (qPCR), and real time quantitative reverse transcriptase PCR (qRT-PCR) analyses on a number of genes of both regions with both human and mouse colon tumors. These two regions (5q22.2 and 18q21.1-q21.2) are frequently deleted in human CRCs and encode genuine colorectal tumor suppressors APC and SMAD4. They also encode genes such as MCC (mutated in colorectal cancer) with their role in CRC etiology unknown. We have discovered that both regions are evolutionarily unstable, resulting in genes that are clustered in each human region being found scattered at several distinct loci in the genome of many other species. For instance, APC and MCC are within 200 kb apart in human 5q22.2 but are 10 Mb apart in the mouse genome. Importantly, our analyses revealed that, while known CRC driver genes APC and SMAD4 were disrupted in both human colorectal tumors and tumors from ApcMin/+ mice, the questionable MCC gene was disrupted in human tumors but appeared to be intact in mouse tumors. These results indicate that MCC may not actually play any causative role in early colorectal tumorigenesis. We also hypothesize that its disruption in human CRCs is likely a mere result of its close proximity to APC in the human genome. Expanding this pilot study to the entire genome may identify more questionable genes like MCC, facilitating the discovery of new CRC driver gene candidates
Directory of Open Access Journals (Sweden)
Rhythm Suren Wadhwa
2011-11-01
The paper presents a comparison and application of metaheuristic population-based optimization algorithms to a flexible manufacturing automation scenario in a metalcasting foundry. It presents a novel application and comparison of the Bee Colony Algorithm (BCA) with variations of Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) for an object recognition problem in a robot material handling system. To enable robust pick-and-place activity of metal-cast parts by a six-axis industrial robot manipulator, it is important that the correct orientation of the parts is input to the manipulator via the digital image captured by the vision system. This information is then used for orienting the robot gripper to grip the part from a moving conveyor belt. The objective is to find the reference templates on the manufactured parts in the target landscape picture, which may contain noise. The normalized cross-correlation (NCC) function is used as the objective function in the optimization procedure. The ultimate goal is to test improved algorithms that could prove useful in practical manufacturing automation scenarios.
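The NCC objective function named in the abstract can be sketched as follows. This is an illustrative implementation for an equally sized template and image patch, not the paper's own code, and the function name is hypothetical:

```python
import numpy as np

def normalized_cross_correlation(template, patch):
    """Normalized cross-correlation (NCC) between a template and an
    equally sized image patch; returns a value in [-1, 1]."""
    t = template.astype(float) - template.mean()
    p = patch.astype(float) - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    if denom == 0:
        return 0.0  # flat regions carry no correlation information
    return float((t * p).sum() / denom)
```

In a template-matching search, the optimizer (BCA, PSO, or ACO) would propose candidate patch locations and maximize this score over the target image.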
Sensitivity study of voxel-based PET image comparison to image registration algorithms
Energy Technology Data Exchange (ETDEWEB)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Chen, Aileen B.; Berbeco, Ross [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States); Aerts, Hugo J. W. L. [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 and Department of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, Massachusetts 02115 (United States)
2014-11-01
Purpose: Accurate deformable registration is essential for voxel-based comparison of sequential positron emission tomography (PET) images for proper adaptation of treatment plan and treatment response assessment. The comparison may be sensitive to the method of deformable registration as the optimal algorithm is unknown. This study investigated the impact of registration algorithm choice on therapy response evaluation. Methods: Sixteen patients with 20 lung tumors underwent pre- and post-chemoradiotherapy computed tomography (CT) and 4D FDG-PET scans. All CT images were coregistered using a rigid and ten deformable registration algorithms. The resulting transformations were then applied to the respective PET images. Moreover, the tumor region defined by a physician on the registered PET images was classified into progressor, stable-disease, and responder subvolumes. Particularly, voxels with standardized uptake value (SUV) decreases >30% were classified as responder, while voxels with SUV increases >30% were progressor. All other voxels were considered stable-disease. The agreement of the subvolumes resulting from different registration algorithms was assessed by Dice similarity index (DSI). Coefficient of variation (CV) was computed to assess variability of DSI between individual tumors. Root mean square difference (RMS_rigid) of the rigidly registered CT images was used to measure the degree of tumor deformation. RMS_rigid and DSI were correlated by Spearman correlation coefficient (R) to investigate the effect of tumor deformation on DSI. Results: Median DSI_rigid was found to be 72%, 66%, and 80%, for progressor, stable-disease, and responder, respectively. Median DSI_deformable was 63%–84%, 65%–81%, and 82%–89%. Variability of DSI was substantial and similar for both rigid and deformable algorithms with CV > 10% for all subvolumes. Tumor deformation had moderate to significant impact on DSI for progressor
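The voxel classification and agreement measure described in the abstract can be sketched roughly as follows. This is a simplified illustration under assumed array shapes and function names, not the study's actual implementation:

```python
import numpy as np

def classify_voxels(suv_pre, suv_post, threshold=0.30):
    """Label each voxel by its relative SUV change: decreases beyond the
    threshold are 'responder', increases beyond it 'progressor'."""
    change = (suv_post - suv_pre) / suv_pre
    labels = np.full(suv_pre.shape, "stable", dtype=object)
    labels[change <= -threshold] = "responder"
    labels[change >= threshold] = "progressor"
    return labels

def dice_index(mask_a, mask_b):
    """Dice similarity index between two boolean subvolume masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * inter / total if total else 1.0
```

Comparing the "responder" mask produced after two different registrations with `dice_index` mirrors the DSI agreement analysis in the study.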
Quantitative comparison of direct phase retrieval algorithms in in-line phase tomography
International Nuclear Information System (INIS)
A well-known problem in x-ray microcomputed tomography is low sensitivity. Phase contrast imaging offers an increase of sensitivity of up to a factor of 10³ in the hard x-ray region, which makes it possible to image soft tissue and small density variations. If a sufficiently coherent x-ray beam, such as that obtained from a third generation synchrotron, is used, phase contrast can be obtained by simply moving the detector downstream of the imaged object. This setup is known as in-line or propagation based phase contrast imaging. A quantitative relationship exists between the phase shift induced by the object and the recorded intensity and inversion of this relationship is called phase retrieval. Since the phase shift is proportional to projections through the three-dimensional refractive index distribution in the object, once the phase is retrieved, the refractive index can be reconstructed by using the phase as input to a tomographic reconstruction algorithm. A comparison between four phase retrieval algorithms is presented. The algorithms are based on the transport of intensity equation (TIE), transport of intensity equation for weak absorption, the contrast transfer function (CTF), and a mixed approach between the CTF and TIE, respectively. The compared methods all rely on linearization of the relationship between phase shift and recorded intensity to yield fast phase retrieval algorithms. The phase retrieval algorithms are compared using both simulated and experimental data, acquired at the European Synchrotron Radiation Facility third generation synchrotron light source. The algorithms are evaluated in terms of two different reconstruction error metrics. While being slightly less computationally effective, the mixed approach shows the best performance in terms of the chosen criteria.
Limongelli, Carla; Sciarrone, Filippo; Temperini, Marco; Vaste, Giulia
2011-01-01
LS-Lab provides automatic support to comparison/evaluation of the Learning Object Sequences produced by different Curriculum Sequencing Algorithms. Through this framework a teacher can verify the correspondence between the behaviour of different sequencing algorithms and her pedagogical preferences. In fact the teacher can compare algorithms…
Digital Sound Synthesis Algorithms: a Tutorial Introduction and Comparison of Methods
Lee, J. Robert
The objectives of the dissertation are to provide both a compendium of sound-synthesis methods with detailed descriptions and sound examples, as well as a comparison of the relative merits of each method based on ease of use, observed sound quality, execution time, and data storage requirements. The methods are classified under the general headings of wavetable-lookup synthesis, additive synthesis, subtractive synthesis, nonlinear methods, and physical modelling. The nonlinear methods comprise a large group that ranges from the well-known frequency-modulation synthesis to waveshaping. The final category explores computer modelling of real musical instruments and includes numerical and analytical solutions to the classical wave equation of motion, along with some of the more sophisticated time-domain models that are possible through the prudent combination of simpler synthesis techniques. The dissertation is intended to be understandable by a musician who is mathematically literate but who does not necessarily have a background in digital signal processing. With this limitation in mind, a brief and somewhat intuitive description of digital sampling theory is provided in the introduction. Other topics such as filter theory are discussed as the need arises. By employing each of the synthesis methods to produce the same type of sound, interesting comparisons can be made. For example, a struck string sound, such as that typical of a piano, can be produced by algorithms in each of the synthesis classifications. Many sounds, however, are peculiar to a single algorithm and must be examined independently. Psychoacoustic studies were conducted as an aid in the comparison of the sound quality of several implementations of the synthesis algorithms. Other psychoacoustic experiments were conducted to supplement the established notions of which timbral issues are important in the re-synthesis of the sounds of acoustic musical instruments.
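As an illustration of the first synthesis category above, a minimal wavetable-lookup oscillator with linear interpolation might look like this. It is a sketch of the general technique, not code from the dissertation:

```python
import math

def make_wavetable(size=1024):
    """One cycle of a sine wave stored as a lookup table."""
    return [math.sin(2 * math.pi * i / size) for i in range(size)]

def wavetable_osc(table, freq, sample_rate, n_samples):
    """Generate samples by stepping through the table at a rate set by
    the desired frequency, linearly interpolating between entries."""
    out, phase = [], 0.0
    size = len(table)
    incr = freq * size / sample_rate  # table positions per sample
    for _ in range(n_samples):
        i = int(phase) % size
        frac = phase - int(phase)
        nxt = table[(i + 1) % size]
        out.append(table[i] + frac * (nxt - table[i]))
        phase += incr
    return out
```

Swapping in a table holding a sampled instrument cycle instead of a sine turns the same loop into basic sample playback, which is why wavetable lookup is usually presented first among the methods.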
Bircher, Pascal; Liniger, Hanspeter; Prasuhn, Volker
2016-04-01
Soil erosion is a well-known challenge both from a global perspective and in Switzerland, and it is assessed and discussed in many projects (e.g. national or European erosion risk maps). Meaningful assessment of soil erosion requires models that adequately reflect surface water flows. Various studies have attempted to achieve better modelling results by including multiple flow algorithms in the topographic length and slope factor (LS-factor) of the Revised Universal Soil Loss Equation (RUSLE). The choice of multiple flow algorithms is wide, and many of them have been implemented in programs or tools like Saga-Gis, GrassGis, ArcGIS, ArcView, Taudem, and others. This study compares six different multiple flow algorithms with the aim of identifying a suitable approach to calculating the LS factor for a new soil erosion risk map of Switzerland. The comparison of multiple flow algorithms is part of a broader project to model soil erosion for the entire agriculturally used area in Switzerland and to renew and optimize the current erosion risk map of Switzerland (ERM2). The ERM2 was calculated in 2009, using a high resolution digital elevation model (2 m) and a multiple flow algorithm in ArcView. This map has provided the basis for enforcing soil protection regulations since 2010 and has proved its worth in practice, but it has become outdated (new basic data are now available, e.g. data on land use change, a new rainfall erosivity map, a new digital elevation model, etc.) and is no longer user friendly (ArcView). In a first step towards its renewal, a new data set from the Swiss Federal Office of Topography (Swisstopo) was used to generate the agricultural area based on the existing field block map. A field block is an area consisting of farmland, pastures, and meadows which is bounded by hydrological borders such as streets, forests, villages, surface waters, etc. In our study, we compared the six multiple flow algorithms with the LS factor calculation approach used in
Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca
2010-05-01
distributed model developed at the Department of Civil and Environmental Engineering of the University of Florence. Discussion on the comparisons between the effectiveness of the different algorithms on different cases of study on Central Italy basins is provided.
Directory of Open Access Journals (Sweden)
V. Sedenka
2010-09-01
The paper deals with an efficiency comparison of two global evolutionary optimization methods implemented in MATLAB. Attention is turned to an elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) and a novel multi-objective Particle Swarm Optimization (PSO). The performance of the optimizers is compared on three different test functions and on a cavity resonator synthesis. The microwave resonator is modeled using the Finite Element Method (FEM). The hit rate and the quality of the Pareto front distribution are classified.
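Both NSGA-II and multi-objective PSO rank candidate solutions by Pareto dominance. A minimal sketch of the dominance test and front extraction for a minimization problem follows; it is illustrative only, not the compared MATLAB implementations:

```python
def dominates(a, b):
    """For minimization: a dominates b if it is no worse in every
    objective and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(points):
    """Keep only the points that no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```

The quality of the resulting front (its spread and closeness to the true trade-off curve) is exactly the kind of criterion the abstract refers to when classifying the Pareto front distribution.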
Comparison of SAR Wind Speed Retrieval Algorithms for Evaluating Offshore Wind Energy Resources
DEFF Research Database (Denmark)
Kozai, K.; Ohsawa, T.; Takeyama, Y.; Shimada, S.; Niwa, R.; Hasager, Charlotte Bay; Badger, Merete
2010-01-01
Envisat/ASAR-derived offshore wind speeds and energy densities based on 4 different SAR wind speed retrieval algorithms (CMOD4, CMOD-IFR2, CMOD5, CMOD5.N) are compared with observed wind speeds and energy densities for evaluating offshore wind energy resources. CMOD4 ignores effects of atmospheric stability, while CMOD5.N assumes a neutral condition. By utilizing Monin-Obukhov similarity theory in the inverse LKB code, equivalent neutral wind speeds derived from CMOD5.N are converted to stability-dependent wind speeds (CMOD5N_SDW). Results of comparison in terms of energy density indicate the CMOD5N...
Directory of Open Access Journals (Sweden)
A. Rozanov
2007-09-01
This paper is devoted to an intercomparison of ozone vertical profiles retrieved from the measurements of scattered solar radiation performed by the SCIAMACHY instrument in the limb viewing geometry. Three different inversion algorithms including the prototype of the operational Level 1 to 2 processor to be operated by the European Space Agency are considered. Unlike usual validation studies, this comparison removes the uncertainties arising when comparing measurements made by different instruments probing slightly different air masses and focuses on the uncertainties specific to the modeling-retrieval problem only. The intercomparison was performed for 5 selected orbits of SCIAMACHY showing a good overall agreement of the results in the middle stratosphere, whereas considerable discrepancies were identified in the lower stratosphere and upper troposphere altitude region. Additionally, comparisons with ground-based lidar measurements are shown for selected profiles demonstrating an overall correctness of the retrievals.
Directory of Open Access Journals (Sweden)
Prabhat Kumar Giri
2016-01-01
In the present era of globalization and competitive markets, cellular manufacturing has become a vital tool for meeting the challenges of improving productivity, which is the way to sustain growth. Getting the best results from cellular manufacturing depends on the formation of the machine cells and part families. This paper examines the advantages of the ART method of cell formation over array-based clustering algorithms, namely ROC-2 and DCA. The comparison and evaluation of the cell formation methods has been carried out in the study. The most appropriate approach is selected and used to form the cellular manufacturing system. The comparison and evaluation are done on the basis of grouping efficiency as the performance measure, and improvements over the existing cellular manufacturing system are presented.
Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)
2002-01-01
A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code, ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy and design consistency.
Marchant, Benjamin; Platnick, Steven; Meyer, Kerry; Arnold, G. Thomas; Riedi, Jérôme
2016-04-01
Cloud thermodynamic phase (ice, liquid, undetermined) classification is an important first step for cloud retrievals from passive sensors such as MODIS (Moderate Resolution Imaging Spectroradiometer). Because ice and liquid phase clouds have very different scattering and absorbing properties, an incorrect cloud phase decision can lead to substantial errors in the cloud optical and microphysical property products such as cloud optical thickness or effective particle radius. Furthermore, it is well established that ice and liquid clouds have different impacts on the Earth's energy budget and hydrological cycle, thus accurately monitoring the spatial and temporal distribution of these clouds is of continued importance. For MODIS Collection 6 (C6), the shortwave-derived cloud thermodynamic phase algorithm used by the optical and microphysical property retrievals has been completely rewritten to improve the phase discrimination skill for a variety of cloudy scenes (e.g., thin/thick clouds, over ocean/land/desert/snow/ice surface, etc). To evaluate the performance of the C6 cloud phase algorithm, extensive granule-level and global comparisons have been conducted against the heritage C5 algorithm and CALIOP. A wholesale improvement is seen for C6 compared to C5.
Comparison of PID Controller Tuning Methods with Genetic Algorithm for FOPTD System
Directory of Open Access Journals (Sweden)
K. Mohamed Hussain
2014-02-01
Measurement of level, temperature, pressure and flow parameters is very vital in all process industries. A combination of a few transducers with a controller, forming a closed-loop system, leads to a stable and effective process. This article deals with control of the process tank and a comparative analysis of various PID control techniques and the Genetic Algorithm (GA) technique. The model for such a real-time process is identified as a First Order Plus Dead Time (FOPTD) process and validated. The need for improved performance of the process has led to the development of model based controllers. Well-designed conventional Proportional, Integral and Derivative (PID) controllers are the most widely used controllers in the chemical process industries because of their simplicity, robustness and successful practical applications. Many tuning methods have been proposed for obtaining better PID controller parameter settings. The various tuning methods for the FOPTD process are analysed using simulation software. Our purpose in this study is the comparison of these tuning methods for single input single output (SISO) systems using computer simulation. The efficiency of various PID controllers is also investigated for different performance metrics such as Integral Square Error (ISE), Integral Absolute Error (IAE), Integral Time Absolute Error (ITAE), and Mean Square Error (MSE), and simulation is carried out. Work in this paper explores basic concepts, mathematics, and design aspects of the PID controller. A comparison between the PID controller and the Genetic Algorithm (GA) has been carried out to determine the best controller for the temperature system.
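The four performance metrics used to rank the tuning methods are all simple functionals of the error signal. A discrete-time sketch (illustrative, with hypothetical function names; the study's simulation software is not specified beyond this):

```python
def error_metrics(setpoint, response, dt):
    """ISE, IAE, ITAE and MSE for a sampled step response.

    setpoint: target value; response: list of sampled outputs;
    dt: sampling interval in seconds.
    """
    n = len(response)
    e = [setpoint - y for y in response]
    ise = sum(x * x for x in e) * dt                      # Integral Square Error
    iae = sum(abs(x) for x in e) * dt                     # Integral Absolute Error
    itae = sum(i * dt * abs(x) for i, x in enumerate(e)) * dt  # time-weighted
    mse = sum(x * x for x in e) / n                       # Mean Square Error
    return {"ISE": ise, "IAE": iae, "ITAE": itae, "MSE": mse}
```

ITAE penalizes errors that persist late in the response, which is why it tends to favor tunings with little sustained oscillation, while ISE weights large early transients more heavily.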
Analysis and Comparison of Symmetric Key Cryptographic Algorithms Based on Various File Features
Directory of Open Access Journals (Sweden)
Ranjeet Masram
2014-07-01
For achieving faster communication, most confidential data is circulated through networks as electronic data. Cryptographic ciphers have an important role in providing security to these confidential data against unauthorized attacks. Though security is an important factor, there are various factors that can affect the performance and selection of cryptographic algorithms during the practical implementation of these cryptographic ciphers for various applications. This paper provides analysis and comparison of some symmetric key cryptographic ciphers (RC4, AES, Blowfish, RC2, DES, Skipjack, and Triple DES) on the basis of encryption time with the variation of various file features like different data types, data size, data density and key sizes.
Comparison of Bayesian Land Surface Temperature algorithm performance with Terra MODIS observations
Morgan, J A
2009-01-01
An approach to land surface temperature (LST) estimation that relies upon Bayesian inference has been validated against multiband infrared radiometric imagery from the Terra MODIS instrument. Bayesian LST estimators are shown to reproduce standard MODIS product LST values starting from a parsimoniously chosen (hence, uninformative) range of prior band emissivity knowledge. Two estimation methods have been tested. The first is the iterative contraction mapping of joint expectation values for LST and surface emissivity described in a previous paper. In the second method, the Bayesian algorithm is reformulated as a Maximum A Posteriori (MAP) search for the maximum joint a posteriori probability for LST, given observed sensor aperture radiances and a priori probabilities for LST and emissivity. Two MODIS data granules each for daytime and nighttime were used for the comparison. The granules were chosen to be largely cloud-free, with limited vertical relief in those portions of the granules fo...
Ivanova, Natalia; Pedersen, Leif T.; Lavergne, Thomas; Tonboe, Rasmus T.; Saldo, Roberto; Mäkynen, Marko; Heygster, Georg; Rösel, Anja; Kern, Stefan; Dybkjær, Gorm; Sørensen, Atle; Brucker, Ludovic; Shokr, Mohammed; Korosov, Anton; Hansen, Morten W.
2015-04-01
Sea ice concentration (SIC) has been derived globally from satellite passive microwave observations since the 1970s by a multitude of algorithms. However, existing datasets and algorithms, although agreeing in the large-scale picture, differ substantially in the details and have disadvantages in summer and fall due to presence of melt ponds and thin ice. There is thus a need for understanding of the causes for the differences and identifying the most suitable method to retrieve SIC. Therefore, during the ESA Climate Change Initiative effort 30 algorithms have been implemented, inter-compared and validated by a standardized reference dataset. The algorithms were evaluated over low and high sea ice concentrations and thin ice. Based on the findings, an optimal approach to retrieve sea ice concentration globally for climate purposes was suggested and validated. The algorithm was implemented with atmospheric correction and dynamical tie points in order to produce the final sea ice concentration dataset with per-pixel uncertainties. The issue of melt ponds was addressed in particular because they are interpreted as open water by the algorithms and thus SIC can be underestimated by up to 40%. To improve our understanding of this issue, melt-pond signatures in AMSR2 images were investigated based on their physical properties with help of observations of melt pond fraction from optical (MODIS and MERIS) and active microwave (SAR) satellite measurements.
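The tie-point principle underlying many of the compared SIC algorithms can be sketched as a linear interpolation between open-water and ice brightness-temperature tie points. This is a simplified single-channel illustration only; the real algorithms combine several channels and use the dynamical tie points and atmospheric correction mentioned above:

```python
def sic_from_tiepoints(tb, tb_water, tb_ice):
    """Linear tie-point estimate of sea ice concentration (0..1) from a
    brightness temperature tb, given open-water and 100%-ice tie points."""
    sic = (tb - tb_water) / (tb_ice - tb_water)
    return min(max(sic, 0.0), 1.0)  # clip to the physical range
```

Because melt ponds radiometrically resemble open water, a pond-covered ice pixel yields a `tb` closer to `tb_water`, and the linear estimate drops accordingly; this is the mechanism behind the up-to-40% summer underestimation discussed in the abstract.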
Directory of Open Access Journals (Sweden)
Akanksha Mathur
2012-09-01
Encryption is the process of transforming plaintext into ciphertext, where plaintext is the input to the encryption process and ciphertext is its output. Decryption is the process of transforming ciphertext into plaintext, where ciphertext is the input to the decryption process and plaintext is its output. Various encryption algorithms exist, classified as symmetric and asymmetric. Here, I present an algorithm for data encryption and decryption which is based on the ASCII values of characters in the plaintext. This algorithm encrypts data by using the ASCII values of the data to be encrypted. The secret used is modified into another string, and that string is used as the key to encrypt or decrypt the data. So it can be said that it is a kind of symmetric encryption algorithm, because it uses the same key for encryption and decryption, but slightly modified. This algorithm operates when the length of the input and the length of the key are the same.
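The abstract leaves the key-modification step unspecified, so the following is only a hedged sketch of an ASCII-value symmetric scheme with equal-length plaintext and key. The modular character-shift rule and function names are my assumptions, not the author's exact algorithm:

```python
def encrypt_ascii(plaintext, key):
    """Shift each plaintext character by the ASCII value of the
    corresponding key character, modulo 256 (assumed rule)."""
    if len(plaintext) != len(key):
        raise ValueError("plaintext and key must have equal length")
    return "".join(chr((ord(p) + ord(k)) % 256) for p, k in zip(plaintext, key))

def decrypt_ascii(ciphertext, key):
    """Invert the shift with the same key, making the scheme symmetric."""
    if len(ciphertext) != len(key):
        raise ValueError("ciphertext and key must have equal length")
    return "".join(chr((ord(c) - ord(k)) % 256) for c, k in zip(ciphertext, key))
```

The equal-length requirement matches the abstract's closing condition; any per-character reversible combination of plaintext and key values would fit the same description.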
Kunde-Ramamoorthy, Govindarajan; Coarfa, Cristian; Laritsky, Eleonora; Kessler, Noah J; Harris, R Alan; Xu, Mingchu; Chen, Rui; Shen, Lanlan; Milosavljevic, Aleksandar; Waterland, Robert A
2014-04-01
Coupling bisulfite conversion with next-generation sequencing (Bisulfite-seq) enables genome-wide measurement of DNA methylation, but poses unique challenges for mapping. However, despite a proliferation of Bisulfite-seq mapping tools, no systematic comparison of their genomic coverage and quantitative accuracy has been reported. We sequenced bisulfite-converted DNA from two tissues from each of two healthy human adults and systematically compared five widely used Bisulfite-seq mapping algorithms: Bismark, BSMAP, Pash, BatMeth and BS Seeker. We evaluated their computational speed and genomic coverage and verified their percentage methylation estimates. With the exception of BatMeth, all mappers covered >70% of CpG sites genome-wide and yielded highly concordant estimates of percentage methylation (r(2) ≥ 0.95). Fourfold variation in mapping time was found between BSMAP (fastest) and Pash (slowest). In each library, 8-12% of genomic regions covered by Bismark and Pash were not covered by BSMAP. An experiment using simulated reads confirmed that Pash has an exceptional ability to uniquely map reads in genomic regions of structural variation. Independent verification by bisulfite pyrosequencing generally confirmed the percentage methylation estimates by the mappers. Of these algorithms, Bismark provides an attractive combination of processing speed, genomic coverage and quantitative accuracy, whereas Pash offers considerably higher genomic coverage. PMID:24391148
Directory of Open Access Journals (Sweden)
Gaurav Prakash
2016-01-01
Conclusions: Preoperative whole-eye HOA were similar for refractive surgery candidates of Arab and South Asian origin. The values were comparable to historical data for Caucasian eyes and were lower than Asian (Chinese) eyes. These findings may aid in refining refractive nomograms for wavefront ablations.
küçükosmanoğlu, hayrettin onur
2013-01-01
The purpose of this study is to evaluate the self-esteem levels of music teacher candidates by socio-demographic variables. The literature was reviewed, and a "Personal Information Form" and the "Rosenberg Self-Esteem Scale" were used to obtain the research data. For the purposes of this study, statistical analyses of the findings are presented in tables. The study group of this research encompasses 101 undergraduates studying in Necmettin Erbakan University, Department of Fine Arts Education, a...
Szöllösi, Tomáš
2012-01-01
The first part of this work deals with optimization and evolutionary algorithms, which are used as tools to solve complex optimization problems. The discussed algorithms are Differential Evolution, Genetic Algorithm, Simulated Annealing, and the deterministic non-evolutionary algorithm Taboo Search. Subsequently, the testing of the optimization algorithms through the use of a test function gallery is discussed, along with a comparison of the solutions of all algorithms on the Travelling salesman p...
International Nuclear Information System (INIS)
Liquid metals are attractive candidates for both near-term and long-term fusion applications. The subjects of this comparison are the differences between the two candidate liquid metal breeder materials Li and LiPb for use in breeding blankets in the areas of neutronics, magnetohydrodynamics, tritium control, compatibility with structural materials, heat extraction system, safety and required research and development program. Both candidates appear to be promising for use in self-cooled breeding blankets which have inherent simplicity with the liquid metal serving as both breeder and coolant. Each liquid metal breeder has advantages and concerns associated with it, and further development is needed to resolve these concerns. The remaining feasibility question for both breeder materials is the electrical insulation between the liquid metal and the duct walls. Different ceramic coatings are required for the two breeders, and their crucial issues, namely self-healing of insulator cracks and tolerance to radiation-induced electrical degradation, have not yet been demonstrated. (orig.)
Performance Comparison of Binary Search Tree and Framed ALOHA Algorithms for RFID Anti-Collision
Chen, Wen-Tzu
Binary search tree and framed ALOHA algorithms are commonly adopted to solve the anti-collision problem in RFID systems. In this letter, the read efficiency of these two anti-collision algorithms is compared through computer simulations. Simulation results indicate the framed ALOHA algorithm requires less total read time than the binary search tree algorithm. The initial frame length strongly affects the uplink throughput for the framed ALOHA algorithm.
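The framed ALOHA behaviour compared above can be illustrated with a small idealized simulation (an assumption-laden sketch, not the authors' simulator): each unread tag picks a random slot per frame, a slot chosen by exactly one tag is a successful read, and collided tags retry in the next frame. Total read time is the number of slots consumed, which the initial frame length directly shapes:

```python
import random

def framed_aloha_total_slots(n_tags: int, frame_len: int, seed: int = 1) -> int:
    # Idealized framed ALOHA read cycle for RFID anti-collision: count the
    # total slots used until every tag has been read exactly once.
    rng = random.Random(seed)
    unread, slots = n_tags, 0
    while unread > 0:
        # Each still-unread tag independently picks one slot in the frame.
        choices = [rng.randrange(frame_len) for _ in range(unread)]
        # Slots with exactly one tag are successful reads; others collide.
        singletons = sum(1 for s in range(frame_len) if choices.count(s) == 1)
        unread -= singletons
        slots += frame_len
    return slots
```

Sweeping `frame_len` for a fixed tag population reproduces the qualitative finding that frame length strongly affects throughput: frames far smaller than the tag count collide heavily, while oversized frames waste empty slots.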
Comparison of three TCC calculation algorithms for partially coherent imaging simulation
Wu, Xiaofei; Liu, Shiyuan; Liu, Wei; Zhou, Tingting; Wang, Lijuan
2010-08-01
Three kinds of TCC (transmission cross coefficient) calculation algorithms used for partially coherent imaging simulation, including the integration algorithm, the analytical algorithm, and the matrix-based fast algorithm, are reviewed for their rigorous formulations and numerical implementations. The accuracy and speed achievable using these algorithms are compared by simulations conducted on several mainstream illumination sources commonly used in current lithographic tools. Simulation results demonstrate that the integration algorithm is quite accurate but time consuming, while the matrix-based fast algorithm is efficient but its accuracy is heavily dependent on simulation resolution. The analytical algorithm is both efficient and accurate but not suitable for arbitrary optical systems. It is therefore concluded that each TCC calculation algorithm has its pros and cons with a compromise necessary to achieve a balance between accuracy and speed. The observations are useful in fast lithographic simulation for aerial image modeling, optical proximity correction (OPC), source mask optimization (SMO), and critical dimension (CD) prediction.
International Nuclear Information System (INIS)
The European InnoMed-PredTox project was a collaborative effort between 15 pharmaceutical companies, 2 small and mid-sized enterprises, and 3 universities with the goal of delivering deeper insights into the molecular mechanisms of kidney and liver toxicity and to identify mechanism-linked diagnostic or prognostic safety biomarker candidates by combining conventional toxicological parameters with 'omics' data. Mechanistic toxicity studies with 16 different compounds, 2 dose levels, and 3 time points were performed in male Crl: WI(Han) rats. Three of the 16 investigated compounds, BI-3 (FP007SE), Gentamicin (FP009SF), and IMM125 (FP013NO), induced kidney proximal tubule damage (PTD). In addition to histopathology and clinical chemistry, transcriptomics microarray and proteomics 2D-DIGE analysis were performed. Data from the three PTD studies were combined for a cross-study and cross-omics meta-analysis of the target organ. The mechanistic interpretation of kidney PTD-associated deregulated transcripts revealed, in addition to previously described kidney damage transcript biomarkers such as KIM-1, CLU and TIMP-1, a number of additional deregulated pathways congruent with histopathology observations on a single animal basis, including a specific effect on the complement system. The identification of new, more specific biomarker candidates for PTD was most successful when transcriptomics data were used. Combining transcriptomics data with proteomics data added extra value.
Energy Technology Data Exchange (ETDEWEB)
Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.
2000-01-01
We have evaluated the performance of two three dimensional reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data and the second was a statistical maximum-aposteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.
International Nuclear Information System (INIS)
We have evaluated the performance of two three dimensional reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data and the second was a statistical maximum-aposteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior
Query by image example: The CANDID approach
Energy Technology Data Exchange (ETDEWEB)
Kelly, P.M.; Cannon, M. [Los Alamos National Lab., NM (United States). Computer Research and Applications Group; Hush, D.R. [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Electrical and Computer Engineering
1995-02-01
CANDID (Comparison Algorithm for Navigating Digital Image Databases) was developed to enable content-based retrieval of digital imagery from large databases using a query-by-example methodology. A user provides an example image to the system, and images in the database that are similar to that example are retrieved. The development of CANDID was inspired by the N-gram approach to document fingerprinting, where a ``global signature`` is computed for every document in a database and these signatures are compared to one another to determine the similarity between any two documents. CANDID computes a global signature for every image in a database, where the signature is derived from various image features such as localized texture, shape, or color information. A distance between probability density functions of feature vectors is then used to compare signatures. In this paper, the authors present CANDID and highlight two results from their current research: subtracting a ``background`` signature from every signature in a database in an attempt to improve system performance when using inner-product similarity measures, and visualizing the contribution of individual pixels in the matching process. These ideas are applicable to any histogram-based comparison technique.
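The signature-matching idea can be sketched as follows, assuming histogram-style signatures; the normalized inner-product similarity and the background subtraction mirror the two ideas highlighted in the abstract, though CANDID's actual feature extraction (localized texture, shape, color) is far richer:

```python
import math

def normalize(hist):
    # Turn a raw feature histogram into a probability-style global signature.
    total = float(sum(hist))
    return [h / total for h in hist]

def similarity(sig_a, sig_b):
    # Normalized inner product between two signatures (1.0 = identical shape).
    dot = sum(a * b for a, b in zip(sig_a, sig_b))
    norm = math.sqrt(sum(a * a for a in sig_a)) * math.sqrt(sum(b * b for b in sig_b))
    return dot / norm

def subtract_background(sig, background):
    # Remove the component common to all database signatures so that
    # matching emphasizes distinctive image content (clipped at zero).
    return [max(s - b, 0.0) for s, b in zip(sig, background)]
```

Retrieval then ranks every database image by `similarity` against the query's (optionally background-subtracted) signature, which applies to any histogram-based comparison technique as the abstract notes.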
Fedorova , E.; Vasylenko, A.; Hnatyk, B. I.; Zhdanov, V. I.
2016-02-01
We analyze the X-ray properties of the Compton-thick Seyfert 1.9 radio-quiet AGN in NGC 1194 using INTEGRAL (ISGRI), XMM-Newton (EPIC), Swift (BAT and XRT), and Suzaku (XIS) observations. There is a set of Fe-K lines in the NGC 1194 spectrum with complex relativistic profiles that can be considered as a sign of either a warped Bardeen-Petterson accretion disk or a double black hole. We compare our results on NGC 1194 with two other megamaser warped disk candidates, NGC 1068 and NGC 4258, to trace out other properties which can be typical for AGNs with warped accretion disks. To finally confirm or disprove the double black-hole hypothesis, further observations of the iron lines and the evolution of their shape with time are necessary. Based on observations made with INTEGRAL, XMM-Newton, Swift, and Suzaku.
A Comparison of the Effects of K-Anonymity on Machine Learning Algorithms
Hayden Wimmer; Loreen Powell
2014-01-01
While research has been conducted in machine learning algorithms and in privacy preserving in data mining (PPDM), a gap in the literature exists which combines the aforementioned areas to determine how PPDM affects common machine learning algorithms. The aim of this research is to narrow this literature gap by investigating how a common PPDM algorithm, K-Anonymity, affects common machine learning and data mining algorithms, namely neural networks, logistic regression, decision trees, and Baye...
DEFF Research Database (Denmark)
Nica, Florin Valentin Traian; Ritchie, Ewen; Leban, Krisztina Monika
2013-01-01
, genetic algorithm and particle swarm are shortly presented in this paper. These two algorithms are tested to determine their performance on five different benchmark test functions. The algorithms are tested based on three requirements: precision of the result, number of iterations and calculation time....... Both algorithms are also tested on an analytical design process of a Transverse Flux Permanent Magnet Generator to observe their performances in an electrical machine design application....
Lesniak, Joseph; Behrman, Elizabeth; Zandler, Melvin; Kumar, Preethika
2008-03-01
Very few quantum algorithms are usable today. When calculating molecular energies, a quantum algorithm takes advantage of the quantum nature of both the algorithm and the calculation. A few small molecules have been used to show that this method is possible. This method will be applied to larger molecules and compared to classical computer methods.
Comparison and analysis of nonlinear algorithms for compressed sensing in MRI.
Yu, Yeyang; Hong, Mingjian; Liu, Feng; Wang, Hua; Crozier, Stuart
2010-01-01
Compressed sensing (CS) theory has been recently applied in Magnetic Resonance Imaging (MRI) to accelerate the overall imaging process. In the CS implementation, various algorithms have been used to solve the nonlinear equation system for better image quality and reconstruction speed. However, there are no explicit criteria for an optimal CS algorithm selection in the practical MRI application. A systematic and comparative study of those commonly used algorithms is therefore essential for the implementation of CS in MRI. In this work, three typical algorithms, namely, the Gradient Projection For Sparse Reconstruction (GPSR) algorithm, Interior-point algorithm (l(1)_ls), and the Stagewise Orthogonal Matching Pursuit (StOMP) algorithm are compared and investigated in three different imaging scenarios, brain, angiogram and phantom imaging. The algorithms' performances are characterized in terms of image quality and reconstruction speed. The theoretical results show that the performance of the CS algorithms is case sensitive; overall, the StOMP algorithm offers the best solution in imaging quality, while the GPSR algorithm is the most efficient one among the three methods. In the next step, the algorithm performances and characteristics will be experimentally explored. It is hoped that this research will further support the applications of CS in MRI. PMID:21097312
Comparison Of Hybrid Sorting Algorithms Implemented On Different Parallel Hardware Platforms
Directory of Open Access Journals (Sweden)
Dominik Zurek
2013-01-01
Full Text Available Sorting is a common problem in computer science. There are many well-known sorting algorithms created for sequential execution on a single processor. Recently, hardware platforms have made it possible to create highly parallel algorithms: standard processors now consist of multiple cores, and hardware accelerators such as GPUs are available. Graphics cards, with their parallel architecture, give new possibilities to speed up many algorithms. In this paper we describe the results of implementing a few different sorting algorithms on GPU cards and multicore processors. A hybrid algorithm is then presented which consists of parts executed on both platforms, standard CPU and GPU.
Li, Zhaokun; Cao, Jingtai; Liu, Wei; Feng, Jianfeng; Zhao, Xiaohui
2015-03-01
We use a conventional adaptive optical system to compensate atmospheric turbulence in free space optical (FSO) communication systems, but under strong scintillation, wavefront measurements based on a Shack-Hartmann sensor (SH) become unreliable. Since wavefront sensor-less adaptive optics is a feasible option, we propose several swarm intelligence algorithms to compensate the wavefront aberration caused by atmospheric interference in FSO and mainly discuss the algorithm principles, basic flows, and simulation results. The numerical simulation experiments and result analysis show that, compared with the SPGD algorithm, the proposed algorithms can effectively restrain wavefront aberration, and improve the convergence rate of the algorithms and the coupling efficiency of the receiver to a large extent.
Humeau-Heurtier, Anne; Mahé, Guillaume; Abraham, Pierre
2015-12-01
Laser speckle contrast imaging (LSCI) enables a noninvasive monitoring of microvascular perfusion. Some studies have proposed to extract information from LSCI data through their multiscale entropy (MSE). However, for reaching a large range of scales, the original MSE algorithm may require long recordings for reliability. Recently, a novel approach to compute MSE with shorter data sets has been proposed: the short-time MSE (sMSE). Our goal is to apply, for the first time, the sMSE algorithm in LSCI data and to compare results with those given by the original MSE. Moreover, we apply the original MSE algorithm on data of different lengths and compare results with those given by longer recordings. For this purpose, synthetic signals and 192 LSCI regions of interest (ROIs) of different sizes are processed. Our results show that the sMSE algorithm is valid to compute the MSE of LSCI data. Moreover, with time series shorter than those initially proposed, the sMSE and original MSE algorithms give results with no statistical difference from those of the original MSE algorithm with longer data sets. The minimal acceptable length depends on the ROI size. Comparisons of MSE from healthy and pathological subjects can be performed with shorter data sets than those proposed until now. PMID:26220209
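The coarse-graining step that underlies any MSE variant (including the sMSE discussed above) is simple to state. A sketch of just that step, with the entropy estimator itself omitted:

```python
def coarse_grain(series, scale):
    # MSE coarse-graining: average consecutive, non-overlapping windows of
    # length `scale`. Entropy is then estimated on each coarse-grained
    # series; larger scales leave fewer points, which is why short
    # recordings limit the reachable scale range.
    n = len(series) // scale
    return [sum(series[i * scale:(i + 1) * scale]) / scale for i in range(n)]
```

At scale `s` a series of length N shrinks to N // s points, which makes concrete why the original MSE algorithm needs long recordings for reliability at large scales and why sMSE targets shorter data sets.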
Directory of Open Access Journals (Sweden)
Li Li
2012-07-01
Full Text Available Abstract Background Several biclustering algorithms have been proposed to identify biclusters, in which genes share similar expression patterns across a number of conditions. However, different algorithms would yield different biclusters and further lead to distinct conclusions. Therefore, some testing and comparisons between these algorithms are strongly required. Methods In this study, five biclustering algorithms (i.e. BIMAX, FABIA, ISA, QUBIC and SAMBA) were compared with each other in the cases where they were used to handle two expression datasets (GDS1620 and pathway) with different dimensions in Arabidopsis thaliana (A. thaliana). GO (gene ontology) annotation and PPI (protein-protein interaction) network were used to verify the corresponding biological significance of biclusters from the five algorithms. To compare the algorithms’ performance and evaluate quality of identified biclusters, two scoring methods, namely weighted enrichment (WE) scoring and PPI scoring, were proposed in our study. For each dataset, after combining the scores of all biclusters into one unified ranking, we could evaluate the performance and behavior of the five biclustering algorithms in a better way. Results Both WE and PPI scoring methods have been proved effective to validate biological significance of the biclusters, and a significantly positive correlation between the two sets of scores has been tested to demonstrate the consistence of these two methods. A comparative study of the above five algorithms has revealed that: (1) ISA is the most effective one among the five algorithms on the dataset of GDS1620 and BIMAX outperforms the other algorithms on the dataset of pathway. (2) Both ISA and BIMAX are data-dependent. The former one does not work well on the datasets with few genes, while the latter one holds well for the datasets with more conditions. (3) FABIA and QUBIC perform poorly in this study and they may be suitable to large datasets with more genes and
Korean Medication Algorithm for Bipolar Disorder 2014: comparisons with other treatment guidelines
Directory of Open Access Journals (Sweden)
Jeong JH
2015-06-01
with MS or AAP for dysphoric/psychotic mania. Aripiprazole, olanzapine, quetiapine, and risperidone were the first-line AAPs in nearly all of the phases of bipolar disorder across the guidelines. Most guidelines advocated newer AAPs as first-line treatment options in all phases, and lamotrigine in depressive and maintenance phases. Lithium and valproic acid were commonly used as MSs in all phases of bipolar disorder. As research evidence accumulated over time, recommendations of newer AAPs – such as asenapine, paliperidone, lurasidone, and long-acting injectable risperidone – became prominent. This comparison identifies that the treatment recommendations of the KMAP-BP 2014 are similar to those of other treatment guidelines and reflect current changes in prescription patterns for bipolar disorder based on accumulated research data. Further studies are needed to address several issues identified in our review. Keywords: bipolar disorder, pharmacotherapy, treatment algorithm, guideline comparison, KMAP-2014
Li, Borui; Mu, Chundi; WANG, Tao; Peng, Qian
2014-01-01
This is a revised version of our paper published in the Journal of Convergence Information Technology (JCIT): "Comparison of Feature Point Extraction Algorithms for Vision Based Autonomous Aerial Refueling". We corrected some errors, including measurement unit errors, spelling errors and so on. Since the published papers in JCIT are not allowed to be modified, we submit the revised version to arXiv.org to make the paper more rigorous and not confuse other researchers.
Binnicker, Matthew J.; Jespersen, Deborah J.; Rollins, Leonard O.
2012-01-01
We describe the first direct comparison of the reverse and traditional syphilis screening algorithms in a population with a low prevalence of syphilis. Among 1,000 patients tested, the results for 6 patients were falsely reactive by reverse screening, compared to none by traditional testing. However, reverse screening identified 2 patients with possible latent syphilis that were not detected by rapid plasma reagin (RPR).
International Nuclear Information System (INIS)
Work in the respective areas included assessment of conditions related to sinkhole development. Information collected and assessed involved geology, hydrogeology, land use, lineaments and linear trends, identification of karst features and zones, and inventory of historical sinkhole development and type. Karstification of the candidate, Rhea County, and Morristown study areas, in comparison to other karst areas in Tennessee, can be classified informally as youthful, submature, and mature, respectively. Historical sinkhole development in the more karstified areas is attributed to the greater degree of structural deformation by faulting and fracturing, subsequent solutioning of bedrock, thinness of residuum, and degree of development by man. Sinkhole triggering mechanisms identified are progressive solution of bedrock, water-level fluctuations, piping, and loading. 68 refs., 18 figs., 11 tabs
Singh, Niraj Kumar
2012-01-01
Smart Sort algorithm is a "smart" fusion of heap construction procedures (of Heap sort algorithm) into the conventional "Partition" function (of Quick sort algorithm) resulting in a robust version of Quick sort algorithm. We have also performed empirical analysis of average case behavior of our proposed algorithm along with the necessary theoretical analysis for best and worst cases. Its performance was checked against some standard probability distributions, both uniform and non-uniform, like Binomial, Poisson, Discrete & Continuous Uniform, Exponential, and Standard Normal. The analysis exhibited the desired robustness coupled with excellent performance of our algorithm. Although this paper assumes the static partition ratios, its dynamic version is expected to yield still better results.
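Smart Sort's exact fusion of heap construction into the partition routine is not spelled out in the abstract; the related, well-known introsort-style hybrid below conveys the same robustness idea (quicksort partitioning with a heap-based fallback when recursion degenerates), and is offered only as an illustrative stand-in, not the paper's algorithm:

```python
import heapq

def hybrid_sort(a, depth_limit=None):
    # Quicksort-style partitioning; once the depth budget is exhausted,
    # fall back to a heap, guaranteeing O(n log n) worst-case behavior.
    if depth_limit is None:
        depth_limit = 2 * max(1, len(a)).bit_length()
    if len(a) <= 1:
        return list(a)
    if depth_limit == 0:
        heap = list(a)
        heapq.heapify(heap)
        return [heapq.heappop(heap) for _ in range(len(heap))]
    pivot = a[len(a) // 2]
    left = [x for x in a if x < pivot]
    mid = [x for x in a if x == pivot]
    right = [x for x in a if x > pivot]
    return hybrid_sort(left, depth_limit - 1) + mid + hybrid_sort(right, depth_limit - 1)
```

The heap fallback is what supplies the robustness claimed for Smart Sort: adversarial or skewed inputs that would drive plain quicksort quadratic instead get handled in guaranteed O(n log n) time.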
Osborn, John C
2013-01-01
ABSTRACT The Candidate is an attempt to marry elements of journalism and gaming into a format that both entertains and educates the player. The Google-AP Scholarship, a new scholarship award that is given to several journalists a year to work on projects at the threshold of technology and journalism, funded the project. The objective in this prototype version of the game is to put the player in the shoes of a congressional candidate during an off-year election, specificall...
Comparison Of Hybrid Sorting Algorithms Implemented On Different Parallel Hardware Platforms
Dominik Zurek; Marcin Pietron; Maciej Wielgosz; Kazimierz Wiatr
2013-01-01
Sorting is a common problem in computer science. There are many well-known sorting algorithms created for sequential execution on a single processor. Recently, hardware platforms have made it possible to create highly parallel algorithms: standard processors now consist of multiple cores, and hardware accelerators such as GPUs are available. Graphics cards, with their parallel architecture, give new possibilities to speed up many algorithms. In this paper we describe results of implementing a few different sorting alg...
Performance Comparison of Known ICA Algorithms to a Wavelet-ICA Merger
Janett Walters-Williams, Yan Li
2011-01-01
These signals are however contaminated with artifacts which must be removed to have pure EEG signals. These artifacts can be removed by Independent Component Analysis (ICA). In this paper we studied the performance of three ICA algorithms (FastICA, JADE, and Radical) as well as our newly developed ICA technique. Comparing these ICA algorithms, it is observed that our new techniques perform as well as these algorithms at denoising EEG signals.
Jie TANG; Nett, Brian E; Chen, Guang-Hong
2009-01-01
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical rec...
Comparison of algorithms for distributed space exploration in a simulated environment
Cikač, Jaka
2014-01-01
Space exploration algorithms aim to discover as much unknown space as possible, as efficiently as possible, in the shortest possible time. To achieve this goal, we use distributed algorithms implemented on multi-agent systems. In this work, we explore which of the algorithms can efficiently explore space in the simulated environment Gridland. Since Gridland, in its original release, was not meant for simulating space exploration, we had to make some modifications and enable movement history an...
Performance Comparison of Known ICA Algorithms to a Wavelet-ICA Merger
Directory of Open Access Journals (Sweden)
Janett Walters-Williams, Yan Li
2011-08-01
Full Text Available These signals are however contaminated with artifacts which must be removed to have pure EEG signals. These artifacts can be removed by Independent Component Analysis (ICA). In this paper we studied the performance of three ICA algorithms (FastICA, JADE, and Radical) as well as our newly developed ICA technique. Comparing these ICA algorithms, it is observed that our new techniques perform as well as these algorithms at denoising EEG signals.
Comparison of strapdown inertial navigation algorithm based on rotation vector and dual quaternion
Institute of Scientific and Technical Information of China (English)
Wang Zhenhuan; Chen Xijun; Zeng Qingshuang
2013-01-01
For the navigation algorithm of the strapdown inertial navigation system, by comparing the equations of the dual quaternion and the quaternion, the superiority in accuracy of the attitude algorithm based on the dual quaternion over those based on the rotation vector is analyzed for the case of a rotating navigation frame. By comparing the update algorithm of the gravitational velocity in the dual quaternion solution with the compensation algorithm for the harmful acceleration in the traditional velocity solution, the accuracy advantage of the gravitational velocity based on the dual quaternion is addressed. In view of the attitude and velocity algorithms based on the dual quaternion, an improved navigation algorithm is proposed which matches the rotation vector algorithm in computational complexity. With this method, the attitude quaternion does not require compensation as the navigation frame rotates. To verify the correctness of the theoretical analysis, simulations are carried out in software, and the simulation results show that the accuracy of the improved algorithm is approximately equal to that of the dual quaternion algorithm.
Directory of Open Access Journals (Sweden)
Rajeswari Sridhar
2010-07-01
Full Text Available In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs: a modified version of the dual ternary indexing algorithm for music indexing and retrieval, and the multi-key hashing indexing algorithm proposed by us. The modification of the dual ternary algorithm was essential to handle variable-length query phrases and to accommodate features specific to Carnatic music; it is adapted for Carnatic music by segmenting with the segmentation technique for Carnatic music. The dual ternary algorithm is compared with our multi-key hashing algorithm for indexing and retrieval, in which features like MFCC, spectral flux, melody string and spectral centroid are used as keys for indexing data into a hash table. The way in which collision resolution was handled by this hash table differs from the normal hash table approaches. It was observed that multi-key hashing based retrieval had a lower time complexity than dual-ternary based indexing. The algorithms were also compared for their precision and recall, in which multi-key hashing had better recall than modified dual ternary indexing for the sample data considered.
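A hypothetical sketch of multi-key hashing indexing in the spirit described above: each song is inserted under one hash key per feature, and retrieval votes across the per-feature buckets. The quantization scheme, feature values, and collision handling here are illustrative assumptions, not the paper's design:

```python
def quantize(value, bins=8, lo=0.0, hi=1.0):
    # Coarsely bucket a continuous feature so that similar values share a key.
    idx = int((value - lo) / (hi - lo) * bins)
    return min(max(idx, 0), bins - 1)

class MultiKeyIndex:
    # Each song is indexed once per feature (e.g. spectral flux, spectral
    # centroid); a query is matched by voting across per-feature buckets.
    def __init__(self):
        self.table = {}

    def insert(self, song_id, features):
        for name, value in features.items():
            self.table.setdefault((name, quantize(value)), []).append(song_id)

    def query(self, features):
        votes = {}
        for name, value in features.items():
            for song in self.table.get((name, quantize(value)), []):
                votes[song] = votes.get(song, 0) + 1
        # Songs matching on more features rank higher.
        return sorted(votes, key=votes.get, reverse=True)
```

Because a lookup is a handful of O(1) bucket probes plus a vote tally, retrieval time stays low regardless of database size, consistent with the lower time complexity reported for the multi-key approach.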
Directory of Open Access Journals (Sweden)
Chansiri Singhtaun
2010-01-01
Full Text Available Problem statement: The objective of this study is to develop efficient exact algorithms for a single-source capacitated multi-facility location problem with rectilinear distance. This problem is concerned with locating m capacitated facilities in the two-dimensional plane to satisfy the demand of n customers with minimum total transportation cost, which is proportional to the rectilinear distance between the facilities and their customers. Approach: Two exact algorithms are proposed and compared. The first, the decomposition algorithm, uses explicit branching on the allocation variables and then solves for the location variables corresponding to each branch using the original Mixed Integer Programming (MIP) formulation of the problem with a nonlinear objective function. For the other algorithm, a new formulation of the problem is first created by making use of a well-known condition for the optimal facility locations. The problem is treated as a p-median problem, and the original formulation is transformed into a binary integer programming problem. The classical exact algorithm based on this formulation, the branch-and-bound algorithm (implicit branching), is then used. Results: Computational results show that the decomposition algorithm can provide the optimum solution for larger instances of the studied problem with much less processing time than implicit branching on the discrete reformulated problem. Conclusion: The decomposition algorithm deals more efficiently with the studied NP-hard problems but requires efficient MIP software to support it.
An Improved Chaotic Bat Algorithm for Solving Integer Programming Problems
Directory of Open Access Journals (Sweden)
Osama Abdel Raouf
2014-08-01
Full Text Available The Bat Algorithm is a recently-developed method in the field of computational intelligence. In this paper an improved version of the bat meta-heuristic algorithm (IBACH) is presented for solving integer programming problems. The proposed algorithm uses chaotic behaviour to generate candidate solutions, in a manner similar to acoustic monophony. Numerical results show that IBACH is able to obtain the optimal results in comparison to traditional methods (branch and bound), the particle swarm optimization algorithm (PSO), the standard bat algorithm and other harmony search algorithms. However, the benefit of this proposed algorithm lies in its ability to obtain the optimal solution with less computation, which saves time in comparison with the branch and bound algorithm (exact solution method).
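The chaotic candidate-generation idea can be sketched minimally. The logistic map below stands in for whatever chaotic map IBACH actually uses, and the bat-specific frequency, loudness and pulse-rate updates are omitted, so this is only an illustration of chaos-driven search over an integer range:

```python
def logistic_map(x, r=4.0):
    # Chaotic sequence standing in for uniform random draws.
    return r * x * (1.0 - x)

def chaotic_integer_search(f, lo, hi, iters=200, x0=0.7):
    # Chaos-driven candidate generation over the integers [lo, hi]; keeps
    # the best candidate seen (full bat-algorithm dynamics are omitted).
    x, best, best_val = x0, lo, float("inf")
    for _ in range(iters):
        x = logistic_map(x)
        cand = min(lo + int(x * (hi - lo + 1)), hi)
        val = f(cand)
        if val < best_val:
            best, best_val = cand, val
    return best, best_val
```

The appeal of a chaotic generator over a pseudo-random one is its dense, non-repeating coverage of the unit interval, which helps the search visit many distinct integer candidates quickly.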
Battiste, Vernol; Lawton, George; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Johnson, Walter W.
2012-01-01
Managing the interval between arrival aircraft is a major part of the en route and TRACON controller's job. In an effort to reduce controller workload and low-altitude vectoring, algorithms have been developed to allow pilots to take responsibility for achieving and maintaining proper spacing. Additionally, algorithms have been developed to create dynamic weather-free arrival routes in the presence of convective weather. In a recent study we examined an algorithm to handle dynamic re-routing in the presence of convective weather, together with two distinct spacing algorithms. The spacing algorithms originated from different core algorithms; both were enhanced with trajectory intent data for the study. These two algorithms were used simultaneously in a human-in-the-loop (HITL) simulation in which pilots performed weather-impacted arrival operations into Louisville International Airport while also performing interval management (IM) on some trials. The controllers retained responsibility for separation and for managing the en route airspace and, on some trials, for managing IM. The goal was a stress test of dynamic arrival algorithms with ground and airborne spacing concepts. The flight deck spacing algorithms, or controller-managed spacing, not only had to be robust to the dynamic nature of aircraft re-routing around weather but also had to be compatible with two alternative algorithms for achieving the spacing goal. Flight deck interval management spacing in this simulation provided a clear reduction in controller workload relative to trials in which controllers were responsible for spacing the aircraft. At the same time, spacing was much less variable with the automated flight deck spacing. Even though the two spacing algorithms took slightly different approaches, they proved compatible in achieving the interval management goal of 130 sec by the TRACON boundary.
Comparison of neuron selection algorithms of wavelet-based neural network
Mei, Xiaodan; Sun, Sheng-He
2001-09-01
Wavelet networks have received increasing attention in various fields such as signal processing, pattern recognition, robotics and automatic control. Recently, researchers have been interested in employing wavelet functions as activation functions and have obtained satisfying results in approximating and localizing signals. However, function estimation becomes increasingly complex as the input dimension grows. The hidden neurons contribute to minimizing the approximation error, so it is important to study suitable algorithms for neuron selection. An exhaustive search procedure is clearly not practical when the number of neurons is large. The study in this paper focuses on which type of selection algorithm has faster convergence and lower error for signal approximation. To this end, the genetic algorithm and the tabu search algorithm are studied and compared experimentally. This paper first presents the structure of the wavelet-based neural network, then introduces the two selection algorithms, discusses their properties and learning processes, and analyzes the experiments and results. Two wavelet functions were used to test the algorithms. The experiments show that the tabu search selection algorithm performs better than the genetic selection algorithm: TSA has a faster convergence rate than GA under the same stopping criterion.
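The neuron-selection problem above is a combinatorial subset search, so a tabu search over k-subsets gives the flavor of the TSA approach. This is a generic sketch, not the paper's implementation: the error function, tabu rule (recently removed neurons may not re-enter), and toy target set are all assumptions:

```python
def tabu_select(error_fn, n_neurons, k, iters=50, tabu_len=5):
    """Tabu search over k-subsets of hidden neurons.
    error_fn scores a candidate subset (lower is better)."""
    current = list(range(k))                  # naive initial subset
    best, best_err = current[:], error_fn(current)
    tabu = []                                 # recently removed neurons
    for _ in range(iters):
        moves = []
        for out_n in current:
            for in_n in range(n_neurons):
                if in_n in current or in_n in tabu:
                    continue                  # tabu: don't re-add recent removals
                cand = [in_n if n == out_n else n for n in current]
                moves.append((error_fn(cand), out_n, cand))
        if not moves:
            break
        err, removed, cand = min(moves)       # best admissible swap
        current = cand
        tabu = (tabu + [removed])[-tabu_len:]
        if err < best_err:
            best, best_err = cand[:], err
    return sorted(best), best_err

# toy error: the "right" neurons are {2, 5, 7} (hypothetical)
target = {2, 5, 7}
err = lambda s: len(target.symmetric_difference(s))
subset, e = tabu_select(err, 10, 3)
```

In a real wavelet network, `error_fn` would retrain or evaluate the network with the chosen hidden neurons and return the approximation error.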
Comparison of several algorithms of the electric force calculation in particle plasma models
International Nuclear Information System (INIS)
This work is devoted to plasma modelling using the technique of molecular dynamics. The crucial problem of most such models is the efficient calculation of the electric force. This is usually solved by using the particle-in-cell (PIC) algorithm. However, PIC is an approximative algorithm, as it underestimates the short-range interactions of charged particles. We propose a hybrid algorithm which adds these interactions to PIC. We then include this algorithm in a set of algorithms which we test against each other in a two-dimensional collisionless magnetized plasma model. Besides our hybrid algorithm, this set includes two variants of pure PIC and the direct application of Coulomb's law. We compare particle forces, particle trajectories, total energy conservation and the speed of the algorithms. We find that the hybrid algorithm can be a good replacement for the direct application of Coulomb's law (quite accurate and much faster). It is, however, probably unnecessary in practical 2D models.
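The direct-application baseline mentioned above is the O(N²) pairwise force sum. A minimal sketch follows; the 1/r² point-charge form and unit constants are assumptions for illustration (a 2D plasma model may instead use the 1/r force of line charges):

```python
def coulomb_forces(pos, charges, k=1.0):
    """Direct pairwise Coulomb forces, the O(N^2) reference PIC is compared
    against. pos is a list of (x, y) tuples; returns per-particle [fx, fy]."""
    n = len(pos)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            dx = pos[i][0] - pos[j][0]
            dy = pos[i][1] - pos[j][1]
            r2 = dx * dx + dy * dy
            r = r2 ** 0.5
            f = k * charges[i] * charges[j] / r2    # magnitude, 1/r^2 assumed
            fx, fy = f * dx / r, f * dy / r
            forces[i][0] += fx
            forces[i][1] += fy
            forces[j][0] -= fx                      # Newton's third law
            forces[j][1] -= fy
    return forces

# two equal charges a unit distance apart repel along x
forces = coulomb_forces([(0.0, 0.0), (1.0, 0.0)], [1.0, 1.0])
```

The quadratic cost of this loop is exactly what motivates PIC and the hybrid scheme: PIC handles the smooth long-range field on a grid, and only nearby pairs need the direct sum.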
Antoniucci, S; Causi, G Li; Lorenzetti, D
2014-01-01
Aiming at a statistical study of mid-IR variability in young stellar objects (YSOs), we have compared the 3.6, 4.5, and 24 um Spitzer fluxes of 1478 sources belonging to the C2D (Cores to Disks) legacy program with the WISE fluxes at 3.4, 4.6, and 22 um. From this comparison we have selected a robust sample of 34 variable sources. Their variations were classified per spectral Class (according to the widely accepted scheme of Class I/flat/II/III protostars) and per star-forming region. On average, the number of variable sources decreases with increasing Class and is definitely higher in Perseus and Ophiuchus than in Chamaeleon and Lupus. According to the Class evolution paradigm, photometric variability can be considered a feature more pronounced in less evolved protostars and, as such, related to accretion processes. Moreover, our statistical findings agree with the current knowledge of the star formation activity in different regions. The 34 selected variables were further investigate...
Tang, Y.; Reed, P.; Wagner, T.
2005-12-01
This study provides the first comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating integrated hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: the Epsilon Dominance Nondominated Sorted Genetic Algorithm-II (ε-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study assesses the performances of these three evolutionary multiobjective algorithms using a formal metrics-based methodology, in two phases of testing. In the first phase, a suite of standard computer science test problems is used to validate the algorithms' abilities to perform global search effectively, efficiently, and reliably. The second phase compares the algorithms' performances on a computationally intensive multiobjective integrated hydrologic model calibration application for the Shale Hills watershed, located within the Valley and Ridge province of the Susquehanna River Basin in north central Pennsylvania. The Shale Hills test case demonstrates the computational challenges posed by the paradigmatic shift in environmental and water resources simulation tools towards highly nonlinear physical models that seek to holistically simulate the water cycle. Specifically, the Shale Hills test case is an excellent test for the three EMO algorithms due to the large number of continuous decision variables, the increased computational demands posed by simulating fully-coupled hydrologic processes, and the highly multimodal nature of the search space. A challenge and contribution of this work is the development of a systematic methodology for comparing EMO algorithms that have different search operators and randomization techniques.
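The ε-dominance idea behind ε-NSGAII can be shown in a few lines: ordinary Pareto dominance, plus the ε-box index used by ε-dominance archives to coarsen the objective space. This is a generic sketch of those two concepts (minimization assumed), not code from any of the compared tools:

```python
def dominates(a, b):
    """Standard Pareto dominance for minimization: a is no worse everywhere
    and strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def eps_box(a, eps):
    """Epsilon-box index: solutions falling in the same box are treated as
    equivalent at resolution eps, which bounds the archive size."""
    return tuple(int(x // eps) for x in a)

# two objective vectors and their box indices at resolution 0.1
box1 = eps_box((0.12, 0.34), 0.1)
box2 = eps_box((0.18, 0.39), 0.1)   # same box as box1
```

An ε-archive keeps at most one representative per box, which is how ε-NSGAII maintains a well-spread approximation of the Pareto front without unbounded archive growth.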
Comparison of Fully Numerical Predictor-Corrector and Apollo Skip Entry Guidance Algorithms
Brunner, Christopher W.; Lu, Ping
2012-09-01
The dramatic increase in computational power since the Apollo program has enabled the development of numerical predictor-corrector (NPC) entry guidance algorithms that allow on-board accurate determination of a vehicle's trajectory. These algorithms are sufficiently mature to be flown. They are highly adaptive, especially in the face of extreme dispersion and off-nominal situations compared with reference-trajectory following algorithms. The performance and reliability of entry guidance are critical to mission success. This paper compares the performance of a recently developed fully numerical predictor-corrector entry guidance (FNPEG) algorithm with that of the Apollo skip entry guidance. Through extensive dispersion testing, it is clearly demonstrated that the Apollo skip entry guidance algorithm would be inadequate in meeting the landing precision requirement for missions with medium (4000-7000 km) and long (>7000 km) downrange capability requirements under moderate dispersions chiefly due to poor modeling of atmospheric drag. In the presence of large dispersions, a significant number of failures occur even for short-range missions due to the deviation from planned reference trajectories. The FNPEG algorithm, on the other hand, is able to ensure high landing precision in all cases tested. All factors considered, a strong case is made for adopting fully numerical algorithms for future skip entry missions.
Energy Technology Data Exchange (ETDEWEB)
Gotway, C.A. [Nebraska Univ., Lincoln, NE (United States). Dept. of Biometry; Rutherford, B.M. [Sandia National Labs., Albuquerque, NM (United States)
1993-09-01
Stochastic simulation has been suggested as a viable method for characterizing the uncertainty associated with the prediction of a nonlinear function of a spatially-varying parameter. Geostatistical simulation algorithms generate realizations of a random field with specified statistical and geostatistical properties. A nonlinear function is evaluated over each realization to obtain an uncertainty distribution of a system response that reflects the spatial variability and uncertainty in the parameter. Crucial management decisions, such as potential regulatory compliance of proposed nuclear waste facilities and optimal allocation of resources in environmental remediation, are based on the resulting system response uncertainty distribution. Many geostatistical simulation algorithms have been developed to generate the random fields, and each algorithm will produce fields with different statistical properties. These different properties will result in different distributions for system response, and potentially, different managerial decisions. The statistical properties of the resulting system response distributions are not completely understood, nor is the ability of the various algorithms to generate response distributions that adequately reflect the associated uncertainty. This paper reviews several of the algorithms available for generating random fields. Algorithms are compared in a designed experiment using seven exhaustive data sets with different statistical and geostatistical properties. For each exhaustive data set, a number of realizations are generated using each simulation algorithm. The realizations are used with each of several deterministic transfer functions to produce a cumulative uncertainty distribution function of a system response. The uncertainty distributions are then compared to the single value obtained from the corresponding exhaustive data set.
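The workflow described above, realizations of a random field pushed through a nonlinear transfer function to build an uncertainty distribution, can be sketched generically. The field simulator and transfer function below are toy stand-ins (25 independent Gaussian values, response = field maximum), not any of the geostatistical algorithms the paper reviews:

```python
import random

def response_distribution(simulate_field, transfer, n_real=200, seed=42):
    """Draw n_real realizations of a random field, evaluate a nonlinear
    transfer function over each, and return the sorted responses, i.e. an
    empirical uncertainty distribution of the system response."""
    rng = random.Random(seed)
    responses = [transfer(simulate_field(rng)) for _ in range(n_real)]
    return sorted(responses)

# toy stand-ins (assumptions): a "field" of 25 Gaussian values,
# nonlinear response = maximum of the field
field = lambda rng: [rng.gauss(0.0, 1.0) for _ in range(25)]
transfer = max
dist = response_distribution(field, transfer)
median = dist[len(dist) // 2]
```

A real study would replace `field` with a geostatistical simulator honoring a variogram and conditioning data; the point of the paper is precisely that different simulators yield different `dist` even for the same transfer function.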
Directory of Open Access Journals (Sweden)
Rajeswari Sridhar
2010-07-01
Full Text Available In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs. We have compared a modified algorithm of the dual ternary indexing algorithm for music indexing and retrieval with the multi-key hashing indexing algorithm proposed by us. The modification in the dual ternary algorithm was essential to handle variable-length query phrases and to accommodate features specific to Carnatic music. The dual ternary indexing algorithm is adapted for Carnatic music by segmenting using the segmentation technique for Carnatic music. The dual ternary algorithm is compared with the multi-key hashing algorithm designed by us for indexing and retrieval, in which features like MFCC, spectral flux, melody string and spectral centroid are used as features for indexing data into a hash table. The way in which collision resolution is handled by this hash table differs from normal hash table approaches. It was observed that multi-key hashing based retrieval had a lower time complexity than dual-ternary based indexing. The algorithms were also compared for their precision and recall, in which multi-key hashing had a better recall than modified dual ternary indexing for the sample data considered.
Comparison of reconstruction algorithms for sparse-array detection photoacoustic tomography
Chaudhary, G.; Roumeliotis, M.; Carson, J. J. L.; Anastasio, M. A.
2010-02-01
A photoacoustic tomography (PAT) imaging system based on a sparse 2D array of detector elements and an iterative image reconstruction algorithm has been proposed, which opens the possibility for high frame-rate 3D PAT. The efficacy of this PAT implementation is highly influenced by the choice of the reconstruction algorithm. In recent years, a variety of new reconstruction algorithms have been proposed for medical image reconstruction that have been motivated by the emerging theory of compressed sensing. These algorithms have the potential to accurately reconstruct sparse objects from highly incomplete measurement data, and therefore may be highly suited for sparse array PAT. In this context, a sparse object is one that is described by a relatively small number of voxel elements, such as typically arises in blood vessel imaging. In this work, we investigate the use of a gradient projection-based iterative reconstruction algorithm for image reconstruction in sparse-array PAT. The algorithm seeks to minimize an ℓ1-norm penalized least-squares cost function. By use of computer-simulation studies, we demonstrate that the gradient projection algorithm may further improve the efficacy of sparse-array PAT.
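The ℓ1-penalized least-squares objective above, min over x of ½‖Ax − y‖² + λ‖x‖₁, can be minimized by several first-order methods. The sketch below uses ISTA (iterative soft-thresholding), a proximal-gradient relative of the gradient-projection algorithm the abstract refers to, not the authors' method; the tiny identity system is a toy example:

```python
def soft(v, t):
    """Soft-thresholding: the proximal operator of t * |.|."""
    return (v - t) if v > t else (v + t) if v < -t else 0.0

def ista(A, y, lam=0.1, step=0.5, iters=200):
    """ISTA for min_x 0.5*||Ax - y||^2 + lam*||x||_1
    (pure Python, dense A as a list of rows)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # gradient of the least-squares term: A^T (A x - y)
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # gradient step followed by the l1 prox (soft-thresholding)
        x = [soft(xj - step * gj, step * lam) for xj, gj in zip(x, g)]
    return x

# toy problem: identity system, so the solution is soft(y, lam) componentwise
A = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
y = [2.0, 0.0, 0.0, -3.0]
x = ista(A, y, lam=0.1)
```

In sparse-array PAT, A would be the (much larger) photoacoustic forward operator and x the voxelized image; the soft-thresholding step is what drives small coefficients exactly to zero and yields sparse reconstructions.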
COMPARISON OF EXECUTION PROCESS LENGTH BETWEEN THE PQ ALGORITHM AND A NEW FUZZY LOGIC ALGORITHM FOR VOIP
Directory of Open Access Journals (Sweden)
Suardinata
2011-01-01
Full Text Available The transmission of voice over IP networks can generate network congestion due to weak supervision of incoming packet traffic, queuing and scheduling. This congestion negatively affects Quality of Service (QoS) metrics such as delay, packet drop and packet loss. Packet delay in turn affects other QoS aspects, such as unstable voice packet delivery, packet jitter, packet loss and echo. The Priority Queuing (PQ) algorithm is a popular technique used in VoIP networks to reduce delays. In operation, PQ uses sorting, searching and route planning methods to classify packets on the router. This packet classification method can result in repetition of the process, and this recursive loop leads to starvation of the next queue. In this paper, to solve these problems, there are three phases, namely a queuing phase, a classifying phase and a scheduling phase. The PQ algorithm technique is based on priority. It is applied to a fuzzy inference system to classify the queued incoming packets (voice, video and text), which can reduce the recursive loop and starvation. After an incoming packet is classified, the packet is sent to the packet buffer. In addition, to justify the research objective, the improved PQ algorithm is compared against the existing PQ algorithm found in the literature, using metrics such as delay, packet drop and packet loss. This paper describes the difference in execution process length between PQ and our algorithm. Our algorithm simplifies the execution process that causes starvation in the PQ algorithm.
A comparison of three additive tree algorithms that rely on a least-squares loss criterion.
Smith, T J
1998-11-01
The performances of three additive tree algorithms that seek to minimize a least-squares loss criterion were compared. The algorithms included the penalty-function approach of De Soete (1983), the iterative projection strategy of Hubert & Arabie (1995) and the two-stage ADDTREE algorithm (Corter, 1982; Sattath & Tversky, 1977). Model fit, comparability of structure, processing time and metric recovery were assessed. Results indicated that the iterative projection strategy consistently located the best-fitting tree, but also displayed a wider range and larger number of local optima. PMID:9854946
Clustering performance comparison using K-means and expectation maximization algorithms
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-01-01
Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results. PMID:26019610
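The K-means half of the comparison above is Lloyd's algorithm: alternate between assigning each point to its nearest centroid and recomputing centroids as cluster means. A minimal pure-Python sketch on a toy 2D dataset (the naive first-k initialization is an assumption for brevity):

```python
def dist2(a, b):
    """Squared Euclidean distance between two points."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def mean(pts):
    """Componentwise mean of a non-empty list of points."""
    n = len(pts)
    return tuple(sum(p[d] for p in pts) / n for d in range(len(pts[0])))

def kmeans(points, k, iters=20):
    """Lloyd's algorithm: assignment step, then centroid-update step."""
    centroids = points[:k]                    # naive init: first k points
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k), key=lambda c: dist2(p, centroids[c]))
            clusters[i].append(p)
        # keep the old centroid if a cluster ever empties out
        centroids = [mean(c) if c else centroids[i]
                     for i, c in enumerate(clusters)]
    return centroids, clusters

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
cents, clus = kmeans(pts, 2)
```

EM clustering generalizes this hard assignment to soft responsibilities under a Gaussian mixture, which is the essential difference the abstract's comparison rests on.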
Algorithm comparison and benchmarking using a parallel spectral transform shallow water model
Energy Technology Data Exchange (ETDEWEB)
Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)
1995-04-01
In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.
Verbeeck, Cis; Colak, Tufan; Watson, Fraser T; Delouille, Veronique; Mampaey, Benjamin; Qahwaji, Rami
2011-01-01
Since the Solar Dynamics Observatory (SDO) began recording ~ 1 TB of data per day, there has been an increased need to automatically extract features and events for further analysis. Here we compare the overall detection performance, correlations between extracted properties, and usability for feature tracking of four solar feature-detection algorithms: the Solar Monitor Active Region Tracker (SMART) detects active regions in line-of-sight magnetograms; the Automated Solar Activity Prediction code (ASAP) detects sunspots and pores in white-light continuum images; the Sunspot Tracking And Recognition Algorithm (STARA) detects sunspots in white-light continuum images; the Spatial Possibilistic Clustering Algorithm (SPoCA) automatically segments solar EUV images into active regions (AR), coronal holes (CH) and quiet Sun (QS). One month of data from the SOHO/MDI and SOHO/EIT instruments during 12 May - 23 June 2003 is analysed. The overall detection performance of each algorithm is benchmarked against National Oc...
Comparison of Algorithms for Prediction of Protein Structural Features from Evolutionary Data.
Bywater, Robert P
2016-01-01
Proteins have many functions, and predicting these is still one of the major challenges in theoretical biophysics and bioinformatics. Foremost amongst these functions is the need to fold correctly, thereby allowing the other genetically dictated tasks that the protein has to carry out to proceed efficiently. In this work, some earlier algorithms for predicting protein domain folds are revisited and compared with more recently developed methods. In dealing with intractable problems such as fold prediction, when different algorithms show convergence onto the same result there is every reason to take all algorithms into account such that a consensus result can be arrived at. In this work it is shown that the application of different algorithms in protein structure prediction leads to results that do not converge as such but rather collude in a striking and useful way that has never been considered before. PMID:26963911
DEFF Research Database (Denmark)
Ivanova, N.; Pedersen, L. T.; Tonboe, R. T.;
2015-01-01
algorithm inter-comparison and evaluation experiment. The skills of 30 sea ice algorithms were evaluated systematically over low and high sea ice concentrations. Evaluation criteria included standard deviation relative to independent validation data, performance in the presence of thin ice and melt ponds...
Performance Comparison of Incremental K-means and Incremental DBSCAN Algorithms
Chakraborty, Sanjay; Nagwani, N. K.; Dey, Lopamudra
2014-01-01
Incremental K-means and DBSCAN are two very important and popular clustering techniques for today's large dynamic databases (data warehouses, the WWW and so on) where data change in random fashion. The performance of incremental K-means and incremental DBSCAN differs based on their time-analysis characteristics. Both algorithms are efficient compared to their existing counterparts with respect to time, cost and effort. In this paper, the performance evaluation of i...
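The core trick that makes K-means incremental is the running-mean update: a new point can be folded into an existing centroid without revisiting old data. A minimal sketch of that step (the surrounding assignment logic of a full incremental K-means is omitted):

```python
def update_centroid(centroid, count, new_point):
    """Fold one new point into a centroid via the running-mean identity
    mean_{n+1} = mean_n + (x - mean_n) / (n + 1)."""
    count += 1
    centroid = [c + (x - c) / count for c, x in zip(centroid, new_point)]
    return centroid, count

# stream three points into an initially empty centroid
c, n = [0.0, 0.0], 0
for p in [[2.0, 0.0], [4.0, 0.0], [6.0, 0.0]]:
    c, n = update_centroid(c, n, p)
```

Incremental DBSCAN similarly localizes the work of an insertion to the neighborhood of the new point, which is why both avoid full re-clustering on dynamic data.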
A comparison of two extended Kalman filter algorithms for air-to-air passive ranging.
Ewing, Ward Hubert.
1983-01-01
Approved for public release; distribution is unlimited Two Extended Kalman Filter algorithms for air-to-air passive ranging are proposed, and examined by computer simulation. One algorithm uses only bearing observations while the other uses both bearing and elevation angles. Both are tested using a flat-Earth model and also using a spherical-Earth model where the benefit of a simple correction for the curvature-of-the-Earth effect on elevation angle is examined. The effects of varied an...
Meyer, Hanna; Kühnlein, Meike; Appelhans, Tim; Nauss, Thomas
2016-03-01
Machine learning (ML) algorithms have successfully been demonstrated to be valuable tools in satellite-based rainfall retrievals which show the practicability of using ML algorithms when faced with high dimensional and complex data. Moreover, recent developments in parallel computing with ML present new possibilities for training and prediction speed and therefore make their usage in real-time systems feasible. This study compares four ML algorithms - random forests (RF), neural networks (NNET), averaged neural networks (AVNNET) and support vector machines (SVM) - for rainfall area detection and rainfall rate assignment using MSG SEVIRI data over Germany. Satellite-based proxies for cloud top height, cloud top temperature, cloud phase and cloud water path serve as predictor variables. The results indicate an overestimation of rainfall area delineation regardless of the ML algorithm (averaged bias = 1.8) but a high probability of detection ranging from 81% (SVM) to 85% (NNET). On a 24-hour basis, the performance of the rainfall rate assignment yielded R2 values between 0.39 (SVM) and 0.44 (AVNNET). Though the differences in the algorithms' performance were rather small, NNET and AVNNET were identified as the most suitable algorithms. On average, they demonstrated the best performance in rainfall area delineation as well as in rainfall rate assignment. NNET's computational speed is an additional advantage in work with large datasets such as in remote sensing based rainfall retrievals. However, since no single algorithm performed considerably better than the others we conclude that further research in providing suitable predictors for rainfall is of greater necessity than an optimization through the choice of the ML algorithm.
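The verification scores quoted above (probability of detection, bias) come from a standard 2×2 contingency table of predicted versus observed rain occurrence. A minimal sketch of those two scores (the tiny binary fields are made-up examples, not SEVIRI data):

```python
def contingency_scores(pred, obs):
    """POD and frequency bias from binary rain/no-rain fields.
    POD = hits / (hits + misses); bias = forecasts / observations."""
    hits = sum(1 for p, o in zip(pred, obs) if p and o)
    misses = sum(1 for p, o in zip(pred, obs) if not p and o)
    false_alarms = sum(1 for p, o in zip(pred, obs) if p and not o)
    pod = hits / (hits + misses)
    bias = (hits + false_alarms) / (hits + misses)
    return pod, bias

pred = [1, 1, 1, 1, 0, 0]
obs  = [1, 1, 0, 0, 1, 0]
pod, bias = contingency_scores(pred, obs)
```

A bias above 1, as the abstract's averaged bias of 1.8, means the classifier delineates rain over a larger area than is actually observed, even when its POD is high.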
A Comparison of Two Open Source LiDAR Surface Classification Algorithms
Danny G Marks; Nancy F. Glenn; Timothy E. Link; Hudak, Andrew T.; Rupesh Shrestha; Michael J. Falkowski; Alistair M. S. Smith; Hongyu Huang; Wade T. Tinkham
2011-01-01
With the progression of LiDAR (Light Detection and Ranging) towards a mainstream resource management tool, it has become necessary to understand how best to process and analyze the data. While most ground surface identification algorithms remain proprietary and have high purchase costs; a few are openly available, free to use, and are supported by published results. Two of the latter are the multiscale curvature classification and the Boise Center Aerospace Laboratory LiDAR (BCAL) algorithms....
A Comparison and Selection on Basic Type of Searching Algorithm in Data Structure
Kamlesh Kumar Pandey; Narendra Pradhan
2014-01-01
Many problems arise in different practical fields of computer science, such as database management systems, networks, data mining and artificial intelligence. Searching is a common fundamental operation, and solutions to the searching problem take different forms in these fields. This research paper presents the basic types of searching algorithms in data structures, such as linear search, binary search, and hash search. We have tried to cover some technical aspects of these searching algorithms. This research is provi...
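The three basic search types the paper names can be sketched side by side; each returns the index of the key or -1 (the sample data is a made-up sorted list):

```python
def linear_search(items, key):
    """O(n) scan; works on unsorted data."""
    for i, v in enumerate(items):
        if v == key:
            return i
    return -1

def binary_search(sorted_items, key):
    """O(log n); requires sorted input."""
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == key:
            return mid
        if sorted_items[mid] < key:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def hash_search(table, key):
    """Expected O(1); `table` is a hash structure built in advance."""
    return table.get(key, -1)

data = [3, 9, 14, 27, 31]
table = {v: i for i, v in enumerate(data)}
```

The trade-off the paper turns on is visible here: linear search needs no preprocessing, binary search needs sorted data, and hash search trades extra memory and build time for constant-time lookups.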
Huh, Hee Jin; Chung, Jae-Woo; Park, Seong Yeon; Chae, Seok Lae
2015-01-01
Background Automated Mediace Treponema pallidum latex agglutination (TPLA) and Mediace rapid plasma reagin (RPR) assays are used by many laboratories for syphilis diagnosis. This study compared the results of the traditional syphilis screening algorithm and a reverse algorithm using automated Mediace RPR or Mediace TPLA as first-line screening assays in subjects undergoing a health checkup. Methods Samples from 24,681 persons were included in this study. We routinely performed Mediace RPR and...
Comparison of Fractal Dimension Algorithms for the Computation of EEG Biomarkers for Dementia
Goh, Cindy; Hamadicharef, Brahim; Henderson, Geoff; Ifeachor, Emmanuel
2005-01-01
Analysis of the Fractal Dimension of the EEG appears to be a good approach for the computation of biomarkers for dementia. Several Fractal Dimension algorithms have been used in the EEG analysis of cognitive and sleep disorders. The aim of this paper is to find an accurate Fractal Dimension algorithm that can be applied to the EEG for computing reliable biomarkers, specifically, for the assessment of dementia. To achieve this, some of the common methods for estimating the Fractal Dimension of...
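One of the common fractal dimension estimators the abstract surveys is Katz's method, which needs only the curve length and the signal's maximum excursion. A sketch follows; the two toy signals are illustrative, not EEG data, and this is just one member of the family of FD algorithms compared:

```python
import math

def katz_fd(signal):
    """Katz fractal dimension of a 1D signal:
    FD = log10(n) / (log10(n) + log10(d / L)),
    where n is the number of steps, L the total curve length, and d the
    maximum distance from the first sample."""
    n = len(signal) - 1
    L = sum(abs(signal[i + 1] - signal[i]) for i in range(n))
    d = max(abs(s - signal[0]) for s in signal[1:])
    return math.log10(n) / (math.log10(n) + math.log10(d / L))

line = [0.1 * i for i in range(101)]          # a straight line: FD close to 1
fd_line = katz_fd(line)
zig = [0.0, 1.0, 0.5, 1.5, 1.0, 2.0]          # a jagged drifting signal: FD > 1
fd_zig = katz_fd(zig)
```

For dementia biomarkers, such an estimator would be applied to EEG epochs, and the resulting FD values compared across subject groups; the paper's question is which estimator does this most reliably.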
Performance Comparison Research of the FECG Signal Separation Based on the BSS Algorithm
Directory of Open Access Journals (Sweden)
Xinling Wen
2012-08-01
Full Text Available The fetal electrocardiogram (FECG) is a weak signal obtained by placing electrodes on the maternal abdominal surface for indirect monitoring, and it contains many forms of interference. How to separate the FECG from the strong background interference therefore has important clinical value. Independent Component Analysis (ICA) is a Blind Source Separation (BSS) technology developed in recent years. This study adopted the ICA method for the extraction of the FECG and carried out blind signal separation using the FastICA algorithm and the natural gradient algorithm. The experimental results show that both algorithms can obtain good separation results. However, because the natural gradient algorithm can achieve online FECG separation and its separation quality is better than that of FastICA, the natural gradient algorithm is the better choice for FECG separation. It will help to monitor congenital heart disease, neonatal arrhythmia, intrauterine fetal growth retardation and other conditions, which has very important practical value.
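The natural gradient family the abstract refers to updates the unmixing matrix as W ← W + η(I − φ(y)yᵀ)W, where φ is a score nonlinearity. The sketch below shows one such update step with φ = tanh, a common choice; it is a generic illustration of the rule, not the paper's FECG pipeline, and the 2×2 batch is a toy example:

```python
import math

def natgrad_step(W, y_batch, lr=0.01):
    """One natural-gradient ICA update, W <- W + lr * (I - phi(y) y^T) W,
    with phi = tanh, averaged over a batch of separated samples y."""
    n = len(W)
    # average the outer product phi(y) y^T over the batch
    G = [[0.0] * n for _ in range(n)]
    for y in y_batch:
        for i in range(n):
            for j in range(n):
                G[i][j] += math.tanh(y[i]) * y[j] / len(y_batch)
    # M = I - G, then W + lr * M W
    M = [[(1.0 if i == j else 0.0) - G[i][j] for j in range(n)] for i in range(n)]
    return [[W[i][j] + lr * sum(M[i][k] * W[k][j] for k in range(n))
             for j in range(n)] for i in range(n)]

W = [[1.0, 0.0], [0.0, 1.0]]
W2 = natgrad_step(W, [[0.5, -0.5], [-0.5, 0.5]])
```

Because each step needs only the current batch, this rule runs online as samples arrive, which is exactly the property that makes it attractive for real-time FECG monitoring in the abstract's conclusion.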
Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.
2016-03-01
Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity-modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.
International Nuclear Information System (INIS)
The dose accuracy calculated by a treatment planning system is directly related to the chosen algorithm. Nowadays, several dose calculation algorithms are commercially available, and they differ in calculation time and accuracy, especially when individual tissue densities are taken into account. The aim of this study was to compare two different calculation algorithms from iPlan®, BrainLAB, in the treatment of pituitary gland tumours with intensity-modulated radiation therapy (IMRT). These tumours are located in a region of tissues with variable electronic density. The deviations from the plan with no heterogeneity correction were evaluated. For initial validation of the data inserted into the planning system, an IMRT plan was simulated in an anthropomorphic phantom and the dose distribution was measured with a radiochromic film. Gamma analysis was performed on the film, comparing it with dose distributions calculated with the X-ray Voxel Monte Carlo (XVMC) algorithm and pencil beam convolution (PBC). Next, 33 patient plans, initially calculated with the PBC algorithm, were recalculated with the XVMC algorithm. The treatment-volume and organ-at-risk dose-volume histograms were compared. No relevant differences were found in dose-volume histograms between XVMC and PBC. However, differences were obtained when comparing each plan with the plan without heterogeneity correction. (author)
Directory of Open Access Journals (Sweden)
Natarajan Meghanathan
2013-05-01
Full Text Available The high-level contribution of this paper is an exhaustive simulation-based comparison of three categories (density-, node id- and stability-based) of algorithms that determine connected dominating sets (CDS) for mobile ad hoc networks, evaluating their performance under two categories of mobility models (random node mobility and grid-based vehicular ad hoc networks). The CDS algorithms studied are the maximum density-based (MaxD-CDS), node ID-based (ID-CDS) and minimum velocity-based (MinV-CDS) algorithms, representing the density, node id and stability categories respectively. The node mobility models used are the Random Waypoint model (representing random node mobility) and the City Section and Manhattan mobility models (representing grid-based vehicular ad hoc networks). The three CDS algorithms under the three mobility models are evaluated with respect to two critical performance metrics: the effective CDS lifetime (calculated taking into consideration the CDS connectivity and absolute CDS lifetime) and the CDS node size. Simulations are conducted under a diverse set of conditions representing low, moderate and high network density, coupled with low, moderate and high node mobility scenarios. For each CDS, the paper identifies the mobility model that can be employed to simultaneously maximize the lifetime and minimize the node size with minimal tradeoff. For the two VANET mobility models, the impact of the grid block length on the CDS lifetime and node size is also evaluated.
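The MaxD-CDS idea, grow the dominating set greedily from high-degree nodes while keeping it connected, can be sketched on a static graph. This omits all the mobility and lifetime machinery the paper actually studies and is only the degree-greedy construction step (the tiny path graph is a made-up example):

```python
def greedy_cds(adj):
    """Greedy maximum-degree connected dominating set: start from the
    highest-degree node, then repeatedly add the already-covered node that
    dominates the most uncovered neighbours (this keeps the CDS connected)."""
    nodes = set(adj)
    start = max(adj, key=lambda v: len(adj[v]))
    cds = [start]
    covered = {start} | set(adj[start])
    while covered != nodes:
        # candidates touch the covered region, so adding them preserves connectivity
        cand = [v for v in covered - set(cds) if set(adj[v]) - covered]
        v = max(cand, key=lambda v: len(set(adj[v]) - covered))
        cds.append(v)
        covered |= set(adj[v])
    return cds

# small test graph: a path 0-1-2-3-4
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3]}
cds = greedy_cds(adj)
```

In the mobile setting of the paper, such a CDS must be recomputed or repaired as links break, which is why the effective lifetime metric, and the stability-based MinV-CDS alternative, matter.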
Performance comparison of neural network training algorithms in modeling of bimodal drug delivery.
Ghaffari, A; Abdollahi, H; Khoshayand, M R; Bozchalooi, I Soltani; Dadgar, A; Rafiee-Tehrani, M
2006-12-11
The major aim of this study was to model the effect of two causal factors, i.e. coating weight gain and the amount of pectin-chitosan in the coating solution, on the in vitro release profile of theophylline for bimodal drug delivery. An artificial neural network (ANN), a multilayer perceptron feedforward network, was used to develop a predictive model of the formulations. Five training algorithms belonging to three classes, gradient descent, quasi-Newton (Levenberg-Marquardt, LM), and genetic algorithm (GA), were used to train an ANN containing a single hidden layer of four nodes. A further objective of the study was to compare the predictive ability of these algorithms. The ANNs were trained with each algorithm using the available experimental data as the training set. The divergence of the RMSE between the output and target values of the test set was monitored and used as a criterion to stop training. Two versions of the gradient descent backpropagation algorithm, incremental backpropagation (IBP) and batch backpropagation (BBP), outperformed the others. No significant differences were found between the predictive abilities of IBP and BBP, although the convergence speed of BBP was three- to four-fold higher than that of IBP. While both gradient descent backpropagation and LM gave comparable results for the data modeling, training of ANNs with the genetic algorithm was erratic. The precision of predictive ability was measured for each training algorithm, and their performances were in the order IBP, BBP > LM > QP (quick propagation) > GA. According to the BBP-ANN implementation, an increase in coating levels and a decrease in the amount of pectin-chitosan generally retarded the drug release. Moreover, the latter causal factor, the amount of pectin-chitosan, played a slightly more dominant role in determining the dissolution profiles. PMID:16959449
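The stopping criterion described above, halting training once the test-set RMSE stops improving, can be sketched generically; the `train_step` and `evaluate` callables here are placeholders, not the paper's network:

```python
import math

def rmse(pred, target):
    """Root-mean-square error between prediction and target sequences."""
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(pred, target)) / len(pred))

def train_with_early_stopping(train_step, evaluate, max_epochs=500, patience=10):
    """Generic early-stopping loop: stop when the monitored test-set error
    has not improved for `patience` consecutive epochs; report the best
    error and the epoch at which it was reached."""
    best, best_epoch, bad = float("inf"), 0, 0
    for epoch in range(max_epochs):
        train_step()
        err = evaluate()          # e.g. rmse(net(test_x), test_y)
        if err < best:
            best, best_epoch, bad = err, epoch, 0
        else:
            bad += 1
            if bad >= patience:   # test error diverging: stop training
                break
    return best, best_epoch
```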
Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.
2016-06-01
High-resolution satellite imagery comes with a fundamental problem: the large amount of telemetry data that must be stored after the downlink operation. Moreover, the post-processing and image enhancement steps applied after the image is acquired increase file sizes even further, making the data harder to store and more time-consuming to transmit from one source to another; hence, compressing both the raw data and the various levels of processed data is a necessity for archiving stations to save space. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. To this end, well-known open-source programs supporting the relevant compression algorithms were applied to processed GeoTIFF images of Airbus Defence & Space's SPOT 6 & 7 satellites, which have a GSD of 1.5 m and were acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms tested were Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate64, Prediction by Partial Matching (PPMd or PPM2), and the Burrows-Wheeler Transform (BWT), in order to observe how much each algorithm can compress the sample datasets while ensuring lossless compression.
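The lossless round-trip requirement and the ratio comparison can be illustrated with the Python stdlib codecs corresponding to three of the families above (zlib implements Deflate, `lzma` the LZMA family, and `bz2` a BWT-based scheme; LZW, LZO, and PPMd have no stdlib counterpart, so this is a stand-in, not the study's toolchain):

```python
import bz2
import lzma
import zlib

def compression_report(data: bytes):
    """Compare stdlib lossless codecs on the same buffer and verify that
    decompression restores the data bit-exactly (the lossless requirement)."""
    codecs = {
        "deflate": (zlib.compress, zlib.decompress),
        "lzma":    (lzma.compress, lzma.decompress),
        "bwt/bz2": (bz2.compress, bz2.decompress),
    }
    report = {}
    for name, (compress, decompress) in codecs.items():
        packed = compress(data)
        assert decompress(packed) == data          # lossless round trip
        report[name] = len(packed) / len(data)     # ratio: lower is better
    return report
```

On real imagery the ranking depends strongly on the data; highly repetitive buffers compress far better than noisy sensor data.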
Montilla, I; Béchet, C; Le Louarn, M; Reyes, M; Tallon, M
2010-11-01
Extremely Large Telescopes (ELTs) are very challenging with respect to their adaptive optics (AO) requirements. Their diameters and the specifications required by the astronomical science for which they are being designed imply a huge increase in the number of degrees of freedom of the deformable mirrors. Faster algorithms are needed to implement the real-time reconstruction and control in AO at the required speed. We present the results of a study of the AO correction performance of three different algorithms applied to the case of a 42-m ELT: one considered a reference, the matrix-vector multiply (MVM) algorithm; and two considered fast, the fractal iterative method (FrIM) and the Fourier transform reconstructor (FTR). The MVM and the FrIM both provide a maximum a posteriori estimation, while the FTR provides a least-squares one. The algorithms are tested on the European Southern Observatory (ESO) end-to-end simulator, OCTOPUS. The performance is compared using a natural guide star single-conjugate adaptive optics configuration. The results demonstrate that the methods have similar performance in a large variety of simulated conditions. However, the fast algorithms show interesting robustness to system misregistrations. PMID:21045895
Comparison of optimization algorithms in intensity-modulated radiation therapy planning
Kendrick, Rachel
Intensity-modulated radiation therapy is used to better conform the radiation dose to the target while avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. The algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The dose-volume histograms and visual dose color washes were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but its avoidance of the organ at risk was rivaled by other algorithms. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
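As an illustration of one of the listed optimizers, a toy simulated-annealing search over non-negative beamlet weights against a quadratic dose cost is sketched below; the influence matrix, step size, and cooling schedule are invented for the example and are far simpler than CERR's machinery:

```python
import math
import random

def dose(influence, w):
    # dose at each voxel = influence matrix times beamlet weights
    return [sum(a * wi for a, wi in zip(row, w)) for row in influence]

def cost(influence, w, prescription):
    # quadratic deviation of delivered dose from the prescription
    return sum((d - p) ** 2 for d, p in zip(dose(influence, w), prescription))

def anneal(influence, prescription, n_beamlets, steps=8000, t0=1.0, seed=3):
    """Toy simulated annealing: perturb one beamlet weight at a time and
    accept worse plans with a temperature-dependent Metropolis probability."""
    rng = random.Random(seed)
    w = [1.0] * n_beamlets
    c = cost(influence, w, prescription)
    best_w, best_c = list(w), c
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9              # linear cooling schedule
        trial = list(w)
        i = rng.randrange(n_beamlets)
        trial[i] = max(0.0, trial[i] + rng.gauss(0, 0.1))  # keep fluence >= 0
        tc = cost(influence, trial, prescription)
        if tc < c or rng.random() < math.exp((c - tc) / t):  # Metropolis rule
            w, c = trial, tc
        if c < best_c:
            best_w, best_c = list(w), c
    return best_w, best_c
```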
Directory of Open Access Journals (Sweden)
C. Keim
2009-05-01
This paper presents a first statistical validation of tropospheric ozone products derived from measurements of the satellite instrument IASI. Since the end of 2006, IASI (Infrared Atmospheric Sounding Interferometer), aboard the polar orbiter Metop-A, has measured infrared spectra of the Earth's atmosphere in nadir geometry. This validation covers the northern mid-latitudes and the period from July 2007 to August 2008. The comparison of the ozone products with the vertical ozone concentration profiles from balloon sondes leads to estimates of the systematic and random errors in the IASI ozone products. The intercomparison of the retrieval results from four different sources (including the EUMETSAT ozone products) shows systematic differences due to the methods and algorithms used. On average, the tropospheric columns have a small bias of less than 2 Dobson Units (DU) when compared to the sonde-measured columns. The comparison with the still pre-operational EUMETSAT columns shows higher mean differences of about 5 DU.
Amooee, Golriz; Bagheri-Dehnavi, Malihe
2012-01-01
In today's competitive world, industrial companies seek to manufacture products of higher quality, which can be achieved by increasing the reliability, maintainability, and thus the availability of products. On the other hand, improving product lifecycles is necessary for achieving high reliability. Typically, maintenance activities aim to reduce failures of industrial machinery and to minimize the consequences of such failures, so industrial companies try to improve their efficiency by using different fault detection techniques. One strategy is to process and analyze previously generated data to predict future failures. The purpose of this paper is to detect wasted parts using different data mining algorithms and to compare the accuracy of these algorithms. A combination of thermal and physical characteristics has been used, and the algorithms were applied to Ahanpishegan's current data to estimate the availability of its produced parts. Keywords: Data Mining, Fault Detection, Availability, Prediction
Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models
Directory of Open Access Journals (Sweden)
Hiroki Yoshioka
2011-07-01
The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. LMM-based algorithms produce results that can vary to a certain degree, depending on the model assumptions; for example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could be used to assess the robustness of LMM-based algorithms under a two-endmember assumption. The factor was derived from the analytical relationship between the FVC values determined by several previously described algorithms. It depends on the target spectrum, the endmember spectra, and the choice of spectral vegetation index. Numerical simulations were conducted to demonstrate this dependence and the usefulness of the technique in assessing robustness against measurement noise.
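Under the two-endmember assumption, the LMM has a closed-form least-squares solution for the cover fraction, sketched here with reflectance spectra as plain lists; the band weighting and vegetation-index choices the paper analyzes are omitted:

```python
def estimate_fvc(target, veg, soil):
    """Least-squares unmixing of a two-endmember linear mixture model:
    target ~ f*veg + (1-f)*soil, solved in closed form for the fraction f
    by projecting (target - soil) onto the endmember difference (veg - soil)."""
    diff = [v - s for v, s in zip(veg, soil)]
    num = sum((t - s) * d for t, s, d in zip(target, soil, diff))
    den = sum(d * d for d in diff)
    f = num / den
    return max(0.0, min(1.0, f))   # clip to the physically meaningful range
```

A pixel synthesized with f = 0.3 is recovered exactly; with noisy spectra, the recovered f shifts, which is the sensitivity the derived factor quantifies.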
Vijay Alagappan, A.; Narasimha Rao, K. V.; Krishna Kumar, R.
2015-02-01
Tyre models are a prerequisite for any vehicle dynamics simulation. Tyre models range from the simplest mathematical models that consider only the cornering stiffness to complex sets of formulae. Among all the steady-state tyre models in use today, the Magic Formula tyre model is unique and the most popular. Though the Magic Formula tyre model is widely used, obtaining the model coefficients from either experimental or simulation data is not straightforward, owing to its nonlinear nature and the presence of a large number of coefficients. A common procedure used for this extraction is least-squares minimisation, which requires considerable experience for the initial guesses. Various researchers have tried different algorithms, namely gradient and Newton-based methods, differential evolution, artificial neural networks, etc. The issues involved in all these algorithms are setting bounds or constraints, the sensitivity of the parameters, and the features of the input data, such as the number of points, noisy data, and the experimental procedure used (e.g. a slip angle sweep or the tyre measurement (TIME) procedure). The extracted Magic Formula coefficients are affected by these variants. This paper highlights the issues commonly encountered in obtaining these coefficients with different algorithms, namely least-squares minimisation using trust-region algorithms, Nelder-Mead simplex, pattern search, differential evolution, particle swarm optimisation, cuckoo search, etc. A key observation is that not all the algorithms give the same Magic Formula coefficients for a given data set. The nature of the input data and the type of algorithm decide the set of Magic Formula tyre model coefficients.
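A hedged sketch of the extraction problem: the Magic Formula in its basic B, C, D, E form, a sum-of-squares cost, and a naive random local search standing in for the trust-region and evolutionary optimizers compared in the paper. Because the cost surface is nonlinear, different starting guesses can converge to different coefficient sets, which is the paper's central observation.

```python
import math
import random

def magic_formula(x, B, C, D, E):
    # Pacejka's basic form: y = D*sin(C*atan(B*x - E*(B*x - atan(B*x))))
    bx = B * x
    return D * math.sin(C * math.atan(bx - E * (bx - math.atan(bx))))

def sse(params, data):
    # sum-of-squares misfit between model and (slip, force) samples
    return sum((magic_formula(x, *params) - y) ** 2 for x, y in data)

def random_search_fit(data, start, steps=20000, sigma=0.05, seed=1):
    """Naive accept-if-better random local search over (B, C, D, E).
    Real extractions use trust-region least squares, differential
    evolution, etc.; this only illustrates the iterative refinement."""
    rng = random.Random(seed)
    best, best_cost = list(start), sse(start, data)
    for _ in range(steps):
        trial = [p + rng.gauss(0, sigma) for p in best]
        c = sse(trial, data)
        if c < best_cost:
            best, best_cost = trial, c
    return best, best_cost
```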
Recent Research and Comparison of QoS Routing Algorithms for MPLS Networks
Directory of Open Access Journals (Sweden)
Santosh Kulkarni
2012-03-01
MPLS enables service providers to meet the challenges brought about by explosive growth and provides the opportunity for differentiated services without sacrificing the existing infrastructure. MPLS is a highly scalable data-carrying mechanism that forwards packets to an outgoing interface based only on the label value. An MPLS network is capable of routing under specific constraints to support the desired QoS. In this paper we compare recent QoS routing algorithms for MPLS networks, presenting simulation results that focus on the computational complexity of each algorithm and its performance under a wide range of workloads, topologies, and system parameters.
International Nuclear Information System (INIS)
Multichannel pulse-height measurements with a cylindrical 3He proportional counter, obtained at a reactor filter of natural iron, are used to investigate the properties of three algorithms for neutron spectrum unfolding. For a systematic application of uncertainty propagation, the covariance matrix of previously determined 3He response functions is evaluated. The calculated filter transmission function, together with a covariance matrix estimated from cross-section uncertainties of the filter material, is used as fluence pre-information. The results obtained from algorithms with and without pre-information differ in shape and uncertainties for single-group fluence values, but there is sufficient agreement when evaluating integrals over neutron energy intervals.
An Efficient Approach for Candidate Set Generation
Nawar Malhis; Arden Ruttan; Hazem H. Refai
2005-01-01
When Apriori was first introduced as an algorithm for discovering association rules in a database of market-basket data, the generation of the candidate set of large itemsets was a bottleneck in Apriori's performance, in both space and computational requirements. At first, many unsuccessful attempts were made to improve the generation of the candidate set. Later, other algorithms that outperformed Apriori were developed, which generate association rules without using a candidate set. They...
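The candidate-generation step identified above as the bottleneck is the classic join-and-prune procedure, sketched here for itemsets stored as sorted tuples:

```python
from itertools import combinations

def apriori_gen(frequent_k):
    """Apriori candidate generation: join frequent k-itemsets that share a
    (k-1)-item prefix, then prune any candidate with an infrequent subset.
    Itemsets are represented as sorted tuples."""
    frequent = set(frequent_k)
    k = len(next(iter(frequent)))
    items = sorted(frequent)
    candidates = set()
    for a in items:
        for b in items:
            if a[:-1] == b[:-1] and a[-1] < b[-1]:   # join step
                cand = a + (b[-1],)
                # prune step: every k-subset must itself be frequent
                if all(sub in frequent for sub in combinations(cand, k)):
                    candidates.add(cand)
    return candidates
```

On the textbook example, joining five frequent 3-itemsets yields two raw candidates, of which the prune step eliminates one, showing why candidate maintenance dominates memory for dense data.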
A Comparison of the Machine Learning Algorithm for Evaporation Duct Estimation
Yang, C.
2013-01-01
In this research, a comparison of the relevance vector machine (RVM), the least-squares support vector machine (LSSVM) and the radial basis function neural network (RBFNN) for evaporation duct estimation is presented. The parabolic equation model is adopted as the forward propagation model and is used to establish the training database between the radar sea clutter power and the evaporation duct height. The comparison of the RVM, LSSVM and RBFNN for evaporation duct estimation are investig...
DEFF Research Database (Denmark)
Rahimi, Maryam; Nielsen, Jesper Ødum; Pedersen, Troels; Pedersen, Gert Frølund
2014-01-01
A comparison of the data rates achieved by two well-known algorithms, using both simulated and real measured data, is presented. The algorithms maximize the data rate in a cooperative base station (BS) multiple-input-single-output scenario. The weighted sum-minimum mean square error algorithm can be used in multiple-input-multiple-output scenarios, but it has lower performance than the virtual signal-to-interference-plus-noise ratio algorithm in both theory and practice. A real measurement environment consisting of two BSs and two users has been studied to evaluate the simulation results.
Pick-N Multiple Choice-Exams: A Comparison of Scoring Algorithms
Bauer, Daniel; Holzer, Matthias; Kopp, Veronika; Fischer, Martin R.
2011-01-01
To compare different scoring algorithms for Pick-N multiple-correct-answer multiple-choice (MC) exams with regard to test reliability, student performance, total item discrimination and item difficulty. Data from six 3rd-year medical students' end-of-term exams in internal medicine from 2005 to 2008 at Munich University were analysed (1,255 students,…
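Two common Pick-N scoring rules can be sketched as follows; these are illustrative dichotomous and partial-credit rules, not necessarily the exact algorithms compared in the study:

```python
def score_all_or_nothing(marked, correct):
    """Dichotomous rule: full credit only when the marked options match
    the answer key exactly, otherwise zero."""
    return 1.0 if set(marked) == set(correct) else 0.0

def score_partial_credit(marked, correct, n_options):
    """'Number right' partial credit: the fraction of options classified
    correctly, i.e. correctly marked plus correctly left unmarked."""
    marked, correct = set(marked), set(correct)
    hits = len(marked & correct)
    correct_rejections = n_options - len(marked | correct)
    return (hits + correct_rejections) / n_options
```

The two rules can rank the same answer pattern very differently, which is why reliability and item discrimination depend on the scoring algorithm chosen.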
Institute of Scientific and Technical Information of China (English)
Haixing Liu; Jing Lu; Ming Zhao; Yixing Yuan
2016-01-01
In order to compare two advanced multi-objective evolutionary algorithms, a multi-objective water distribution problem is formulated in this paper. Multi-objective optimization has received increasing attention in water distribution system design. On the one hand, the cost of a water distribution system, including capital, operational, and maintenance costs, has always been the utilities' primary concern; on the other hand, improving the performance of water distribution systems is of equal importance, and often conflicts with the cost objective. Many performance metrics for water networks have been developed in recent years, including total or maximum pressure deficit, resilience, inequity, probabilistic robustness, and risk measures. In this paper, a new resilience metric based on an energy analysis of water distribution systems is proposed. The two optimization objectives are capital cost and the new resilience index. A heuristic algorithm, speed-constrained multi-objective particle swarm optimization (SMPSO), an extension of the multi-objective particle swarm algorithm, is compared with another state-of-the-art heuristic algorithm, NSGA-II. The solutions are evaluated by two metrics, spread and hypervolume. To illustrate the capability of SMPSO to efficiently identify good designs, two benchmark problems (the two-loop network and the Hanoi network) are employed. The results demonstrate from several aspects that SMPSO is a competitive and promising tool for tackling the optimization of complex systems.
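The two solution-quality metrics named above can be sketched for a two-objective minimization front; note the spread shown here is a simplified variant that omits the extreme-point terms of Deb's Delta metric:

```python
import math

def hypervolume_2d(front, ref):
    """Hypervolume of a 2-objective minimization front: the area dominated
    by the front and bounded by a reference point worse than all solutions.
    Larger is better."""
    pts = sorted(front)            # ascending f1 => descending f2 on a Pareto front
    hv, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        hv += (ref[0] - f1) * (prev_f2 - f2)   # slab contributed by this point
        prev_f2 = f2
    return hv

def spread_2d(front):
    """Normalized mean absolute deviation of consecutive gaps along the
    sorted front (0 = perfectly even spacing). Smaller is better."""
    pts = sorted(front)
    gaps = [math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1)]
    mean = sum(gaps) / len(gaps)
    return sum(abs(g - mean) for g in gaps) / (len(gaps) * mean) if mean else 0.0
```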
Delimata, Paweł
2010-01-01
We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. Rules of the first kind, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). In contrast, a rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used to improve the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but direct generation of the rules is not required; instead, for any new object, the algorithms efficiently extract from a given decision table some information about the set of rules, which is then used by a decision-making procedure. The reported experimental results show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.
Movia, A.; Beinat, A.; Crosilla, F.
2015-04-01
The recognition of vegetation by the analysis of very high resolution (VHR) aerial images provides meaningful information about environmental features; nevertheless, VHR images frequently contain shadows that create significant problems for the classification of image components and for the extraction of the needed information. The aim of this research is to classify, from VHR aerial images, the vegetation involved in the environmental biochemical cycle, and to discriminate it from urban and agricultural features. Three classification algorithms were tested for recognizing vegetation and compared to the NDVI index; unfortunately, all these methods are affected by the presence of shadows in the images. The literature presents several algorithms for detecting and removing shadows from a scene, most of them based on RGB-to-HSI transformations. In this work, some of these have been implemented and compared with one based on the RGB bands. Subsequently, in order to remove shadows and restore brightness in the images, some innovative algorithms based on Procrustes theory have been implemented and applied. Among these, we evaluate the capability of the so-called "not-centered oblique Procrustes" and "anisotropic Procrustes" methods to efficiently restore brightness, compared with a linear correlation correction based on the Cholesky decomposition. Experimental results obtained by different classification methods after shadow removal with the innovative algorithms are presented and discussed.
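A minimal stand-in for the brightness-restoration step: a per-channel linear (gain/offset) correction that matches shadow-region statistics to a sunlit reference region. This is far simpler than the Procrustes-based methods evaluated in the paper and is shown only to make the idea of radiometric shadow correction concrete.

```python
import statistics

def relight_channel(shadow_vals, lit_vals):
    """Map shadow-region pixel values so their mean and standard deviation
    match a sunlit reference region of similar land cover (a simple linear
    radiometric correction applied per band)."""
    mu_s, sd_s = statistics.mean(shadow_vals), statistics.pstdev(shadow_vals)
    mu_l, sd_l = statistics.mean(lit_vals), statistics.pstdev(lit_vals)
    gain = sd_l / sd_s if sd_s else 1.0   # guard against a flat shadow region
    return [mu_l + gain * (v - mu_s) for v in shadow_vals]
```

When the shadowed pixels really are a gain-and-offset darkening of the reference content, the correction recovers the sunlit values; real shadows also shift hue, which is why the paper works with RGB/HSI transformations and Procrustes fits.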