Experience with CANDID: Comparison algorithm for navigating digital image databases
Energy Technology Data Exchange (ETDEWEB)
Kelly, P.; Cannon, M.
1994-10-01
This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.
CANDID: Comparison algorithm for navigating digital image databases
Energy Technology Data Exchange (ETDEWEB)
Kelly, P.M.; Cannon, T.M.
1994-02-21
In this paper, we propose a method for calculating the similarity between two digital images. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized distance between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to an example target image. The algorithm is applied to the problem of search and retrieval in a database containing pulmonary CT imagery, and experimental results are provided.
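The abstracts above leave the concrete signature and normalized measure unspecified; the sketch below is one plausible reading, using histogram signatures as empirical PDFs and a cosine-style normalized similarity (both are illustrative assumptions, not CANDID's published formulas):

```python
import numpy as np

def signature(image_features, bins=16, range_=(0.0, 1.0)):
    """Global signature: a normalized histogram (empirical PDF) of feature values."""
    hist, _ = np.histogram(image_features, bins=bins, range=range_)
    return hist / hist.sum()

def normalized_similarity(sig_a, sig_b):
    """Cosine similarity between signatures; lies in [0, 1] for non-negative histograms."""
    return float(np.dot(sig_a, sig_b) / (np.linalg.norm(sig_a) * np.linalg.norm(sig_b)))

def retrieve(query_sig, db_sigs):
    """Query-by-example: rank database images by similarity to the example's signature."""
    ranked = sorted(db_sigs.items(),
                    key=lambda kv: normalized_similarity(query_sig, kv[1]),
                    reverse=True)
    return [name for name, _ in ranked]
```

In practice one signature per image is precomputed and stored, so a query touches only the compact signatures, never the pixel data.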
Lee, K J; Jenet, F A; Martinez, J; Dartez, L P; Mata, A; Lunsford, G; Cohen, S; Biwer, C M; Rohr, M; Flanigan, J; Walker, A; Banaszak, S; Allen, B; Barr, E D; Bhat, N D R; Bogdanov, S; Brazier, A; Camilo, F; Champion, D J; Chatterjee, S; Cordes, J; Crawford, F; Deneva, J; Desvignes, G; Ferdman, R D; Freire, P; Hessels, J W T; Karuppusamy, R; Kaspi, V M; Knispel, B; Kramer, M; Lazarus, P; Lynch, R; Lyne, A; McLaughlin, M; Ransom, S; Scholz, P; Siemens, X; Spitler, L; Stairs, I; Tan, M; van Leeuwen, J; Zhu, W W
2013-01-01
Modern radio pulsar surveys produce a large volume of prospective candidates, the majority of which are polluted by human-created radio frequency interference or other forms of noise. Typically, large numbers of candidates need to be visually inspected in order to determine if they are real pulsars. This process can be labor intensive. In this paper, we introduce an algorithm called PEACE (Pulsar Evaluation Algorithm for Candidate Extraction) which improves the efficiency of identifying pulsar signals. The algorithm ranks the candidates based on a score function. Unlike popular machine-learning based algorithms, no prior training data sets are required. This algorithm has been applied to data from several large-scale radio pulsar surveys. Using the human-based ranking results generated by students in the Arecibo Remote Command Center programme, the statistical performance of PEACE was evaluated. It was found that PEACE ranked 68% of the student-identified pulsars within the top 0.17% of sorted candidates, 95% ...
Comparison of Text Categorization Algorithms
Institute of Scientific and Technical Information of China (English)
SHI Yong-feng; ZHAO Yan-ping
2004-01-01
This paper summarizes several automatic text categorization algorithms in common use recently, and analyzes and compares their advantages and disadvantages. It provides clues for choosing appropriate automatic classification algorithms in different fields. Finally, some evaluations and summaries of these algorithms are discussed, and directions for further research are pointed out.
Comparison of fast discrete wavelet transform algorithms
Institute of Scientific and Technical Information of China (English)
MENG Shu-ping; TIAN Feng-chun; XU Xin
2005-01-01
This paper presents an analysis of, and an experimental comparison between, several typical fast algorithms for the discrete wavelet transform (DWT) and their implementation in image compression, particularly the Mallat algorithm, the FFT-based algorithm, the short-length-based algorithm and the lifting algorithm. The principles, structures and computational complexity of these algorithms are explored in detail. The results of the comparison experiments are consistent with those simulated in MATLAB. It is found that there are limitations in the implementation of the DWT. Some algorithms are workable only for special wavelet transforms, lacking generality. Above all, the speed of the wavelet transform, as the governing element of the speed of image processing, is in fact the retarding factor for real-time image processing.
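As a concrete illustration of the lifting approach named above, here is a minimal one-level Haar DWT expressed as lifting steps (split, predict, update), with its exact inverse; this is a sketch of the general idea, not the paper's implementation:

```python
def haar_lifting_forward(x):
    """One level of the Haar DWT via the lifting scheme (unnormalized).
    Split into even/odd samples, predict odds from evens, update evens.
    Returns (approximation, detail) coefficients."""
    assert len(x) % 2 == 0
    even, odd = x[0::2], x[1::2]
    detail = [o - e for e, o in zip(even, odd)]        # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]  # update step (pairwise mean)
    return approx, detail

def haar_lifting_inverse(approx, detail):
    """Undo the lifting steps in reverse order for perfect reconstruction."""
    even = [a - d / 2 for a, d in zip(approx, detail)]
    odd = [e + d for e, d in zip(even, detail)]
    x = []
    for e, o in zip(even, odd):
        x.extend([e, o])
    return x
```

A key selling point of lifting is visible here: each step is trivially invertible by running it backwards, so perfect reconstruction holds even with integer arithmetic.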
Evaluation of GPM candidate algorithms on hurricane observations
Le, M.; Chandrasekar, C. V.
2012-12-01
storms and hurricanes. In this paper, the performance of GPM candidate algorithms [2][3] for profile classification, melting-region detection and drop size distribution retrieval is presented for hurricane Earl. This analysis is compared with observations of storms that are not tropical storms. The philosophy of the algorithm is based on the vertical characteristics of the measured dual-frequency ratio (DFRm), defined as the difference between the measured radar reflectivities at the two frequencies. It helps our understanding of how hurricanes such as Earl form and intensify rapidly. References: [1] T. Iguchi, R. Oki, A. Eric and Y. Furuhama, "Global precipitation measurement program and the development of dual-frequency precipitation radar," J. Commun. Res. Lab. (Japan), 49, 37-45, 2002. [2] M. Le and V. Chandrasekar, "Recent updates on precipitation classification and hydrometeor identification algorithm for GPM-DPR," IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2012), Munich, Germany. [3] M. Le, V. Chandrasekar and S. Lim, "Microphysical retrieval from dual-frequency precipitation radar on board GPM," IEEE International Geoscience and Remote Sensing Symposium (IGARSS 2010), Honolulu, USA.
Institute of Scientific and Technical Information of China (English)
WANG ShunJin; ZHANG Hua
2007-01-01
Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both the geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
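The Nth-order algebraic dynamics algorithm amounts to stepping with an Nth-order Taylor truncation of the exact solution. A minimal sketch for the scalar linear test model dx/dt = lam*x, chosen here because its derivatives are trivial (the paper's 12 test models are more general):

```python
import math

def taylor_step(x, lam, h, N):
    """One step of an Nth-order Taylor-series integrator for dx/dt = lam * x:
    x(t+h) ~ sum_{k=0}^{N} (lam*h)^k / k! * x(t)."""
    return x * sum((lam * h) ** k / math.factorial(k) for k in range(N + 1))

def integrate(x0, lam, h, steps, N):
    """Integrate from x0 over `steps` steps of size h at truncation order N."""
    x = x0
    for _ in range(steps):
        x = taylor_step(x, lam, h, N)
    return x
```

Raising N shrinks the truncation error at a controllable rate, which is the "controllable precision" the abstract refers to; for nonlinear models the higher derivatives must be derived from the right-hand side rather than taken as powers of lam.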
Comparison Study Of Multiobjective Evolutionary Algorithms
Syomkin, A. M.
2004-01-01
Many real-world problems involve two types of difficulty: 1) multiple, conflicting objectives and 2) a highly complex search space. Efficient evolutionary strategies have been developed to deal with both difficulties. Evolutionary algorithms possess several characteristics, such as parallelism and robustness, that make them preferable to classical optimization methods. In this paper, comparison studies were conducted among the well-known evolutionary algorithms based on NP-hard 0-1 multi...
USING HASH BASED APRIORI ALGORITHM TO REDUCE THE CANDIDATE 2-ITEMSETS FOR MINING ASSOCIATION RULE
K. Vanitha
2011-01-01
In this paper we describe an implementation of hash-based Apriori. We analyze, theoretically and experimentally, the principal data structure of our solution. This data structure is the main factor in the efficiency of our implementation. We propose an effective hash-based algorithm for candidate set generation. Explicitly, the number of candidate 2-itemsets generated by the proposed algorithm is smaller by orders of magnitude than that of previous methods, thus resolving the performanc...
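The hash-based pruning can be sketched as follows, in the spirit of the DHP technique that hash-based Apriori builds on: while counting 1-itemsets, every 2-itemset of each transaction is hashed into a bucket, and a pair survives as a candidate only if both of its items are frequent and its bucket count reaches the support threshold. The bucket count and data layout here are illustrative assumptions:

```python
from collections import Counter
from itertools import combinations

def hash_based_candidate_pairs(transactions, minsup, n_buckets=101):
    """Prune candidate 2-itemsets with a hash filter: a pair can only be
    frequent if both items are frequent AND its bucket count >= minsup."""
    item_counts = Counter()
    buckets = [0] * n_buckets
    for t in transactions:
        item_counts.update(t)
        # Hash every 2-itemset of the transaction during the first pass.
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % n_buckets] += 1
    frequent_items = {i for i, c in item_counts.items() if c >= minsup}
    candidates = []
    for pair in combinations(sorted(frequent_items), 2):
        if buckets[hash(pair) % n_buckets] >= minsup:
            candidates.append(pair)
    return candidates
```

Bucket counts can only overestimate a pair's true support (collisions add, never subtract), so the filter discards no truly frequent pair; it only shrinks the candidate set that the expensive second counting pass must handle.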
An Algorithm for Selecting QGP Candidate Events from Relativistic Heavy Ion Collision Data Sample
Liu, Lianshou; Chen, Qinghua; Hu, Yuan
1998-01-01
The formation of quark-gluon plasma (QGP) in relativistic heavy ion collisions is expected to be accompanied by a background of ordinary collision events without phase transition. In this short note an algorithm is proposed to select the QGP candidate events from the whole event sample. This algorithm is based on a simple geometrical consideration together with some ordinary QGP signal, e.g. an increase in the $K/\pi$ ratio. The efficiency of this algorithm in raising the 'signal/noise ratio' of QGP events in the selected sub-sample is shown explicitly by Monte-Carlo simulation.
Comparison Study for Clonal Selection Algorithm and Genetic Algorithm
Ezgi Deniz Ulker; Sadık Ulker
2012-01-01
Two metaheuristic algorithms, Artificial Immune Systems (AIS) and Genetic Algorithms (GA), are classified as computational systems inspired by theoretical immunology and genetics mechanisms. In this work we examine the comparative performance of the two algorithms. A special selection algorithm, the Clonal Selection Algorithm (CLONALG), which is a subset of Artificial Immune Systems, and Genetic Algorithms are tested with certain benchmark functions. It is shown that depending on the type of a function ...
International Nuclear Information System (INIS)
Classification of the nodule candidates in computer-aided detection (CAD) of lung nodules in CT images was addressed by constructing a nonlinear discriminant function using a kernel-based learning algorithm called the kernel recursive least-squares (KRLS) algorithm. Using the nodule candidates derived from the processing by a CAD scheme of 100 CT datasets containing 253 non-calcified nodules of 3 mm or larger, as determined by the consensus of two thoracic radiologists, the following trial was carried out 100 times: by randomly selecting 50 datasets for training, a nonlinear discriminant function was obtained using the nodule candidates in the training datasets and tested with the remaining candidates; for comparison, a rule-based classification was tested in a similar manner. At about 5 false positives per case, the nonlinear classification method showed an improved sensitivity of 80% (mean over the 100 trials) compared with 74% for the rule-based method. (orig.)
The Performance Comparisons between the Unconstrained and Constrained Equalization Algorithms
Institute of Scientific and Technical Information of China (English)
HE Zhong-qiu; LI Dao-ben
2003-01-01
This paper proposes two unconstrained algorithms, the Steepest Descent (SD) algorithm and the Conjugate Gradient (CG) algorithm, based on a superexcellent cost function [1-3]. At the same time, two constrained algorithms, the Constrained Steepest Descent (CSD) algorithm and the Constrained Conjugate Gradient (CCG) algorithm, are derived subject to a new constraint condition. Both are implemented in the unitary transform domain. The computational complexities of the constrained algorithms are compared to those of the unconstrained algorithms, and simulations show their relative performance.
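For the unconstrained case, the two methods can be sketched in their generic textbook form on the quadratic f(x) = 0.5*x'Ax - b'x with exact line search (this is an illustrative formulation, not the paper's equalization-specific cost function or unitary-domain implementation):

```python
import numpy as np

def steepest_descent(A, b, x0, iters):
    """Steepest descent on f(x) = 0.5 x'Ax - b'x (A symmetric positive definite)."""
    x = x0.copy()
    for _ in range(iters):
        r = b - A @ x                      # negative gradient
        alpha = (r @ r) / (r @ A @ r)      # exact line search step
        x = x + alpha * r
    return x

def conjugate_gradient(A, b, x0, iters):
    """Linear CG: reaches the exact minimizer of the quadratic in at most n steps."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        Ap = A @ p
        alpha = (r @ r) / (p @ Ap)
        x = x + alpha * p
        r_new = r - alpha * Ap
        beta = (r_new @ r_new) / (r @ r)   # Fletcher-Reeves update
        p = r_new + beta * p
        r = r_new
    return x
```

The comparison the abstract alludes to is visible even in two dimensions: CG terminates at the exact minimizer after n steps, while steepest descent zigzags and only converges asymptotically.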
Tradeoffs Between Branch Mispredictions and Comparisons for Sorting Algorithms
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Moruz, Gabriel
) comparisons performs Ω(n log_d n) branch mispredictions. We show that Multiway MergeSort achieves this tradeoff by adopting a multiway merger with a low number of branch mispredictions. For adaptive sorting algorithms we similarly obtain that an algorithm performing O(dn(1 + log(1 + Inv/n))) comparisons must...
Sorting on STAR. [CDC computer algorithm timing comparison]
Stone, H. S.
1978-01-01
Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
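Batcher's method is a data-independent comparator network, which is why it maps well onto vector hardware despite its N(log N)-squared comparator count. A recursive odd-even mergesort sketch (power-of-two input length assumed, and serial Python rather than STAR vector code):

```python
def oddeven_merge(x):
    """Batcher's odd-even merge: x is two sorted halves concatenated
    (length a power of two)."""
    n = len(x)
    if n == 2:
        return x if x[0] <= x[1] else [x[1], x[0]]
    # Recursively merge the even-indexed and odd-indexed subsequences...
    even = oddeven_merge(x[0::2])
    odd = oddeven_merge(x[1::2])
    y = [None] * n
    y[0::2] = even
    y[1::2] = odd
    # ...then one rank of adjacent compare-exchanges finishes the merge.
    for i in range(1, n - 1, 2):
        if y[i] > y[i + 1]:
            y[i], y[i + 1] = y[i + 1], y[i]
    return y

def oddeven_merge_sort(x):
    """Batcher's odd-even mergesort; input length must be a power of two."""
    x = list(x)
    n = len(x)
    if n <= 1:
        return x
    assert n & (n - 1) == 0, "length must be a power of two"
    mid = n // 2
    return oddeven_merge(oddeven_merge_sort(x[:mid]) + oddeven_merge_sort(x[mid:]))
```

Every compare-exchange position is fixed in advance, independent of the data, so each rank of comparators can be executed as one vector operation; Quicksort's data-dependent branching is what the vector version on STAR had to work around.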
Triad pattern algorithm for predicting strong promoter candidates in bacterial genomes
Directory of Open Access Journals (Sweden)
Sakanyan Vehary
2008-05-01
Full Text Available. Background: Bacterial promoters, which increase the efficiency of gene expression, differ from other promoters by several characteristics. This difference, not yet widely exploited in bioinformatics, looks promising for the development of relevant computational tools to search for strong promoters in bacterial genomes. Results: We describe a new triad pattern algorithm that predicts strong promoter candidates in annotated bacterial genomes by matching specific patterns for the group I σ70 factors of Escherichia coli RNA polymerase. It detects promoter-specific motifs by consecutively matching three patterns, consisting of an UP-element, required for interaction with the α subunit, and then optimally-separated patterns of -35 and -10 boxes, required for interaction with the σ70 subunit of RNA polymerase. Analysis of 43 bacterial genomes revealed that the frequency of candidate sequences depends on the A+T content of the DNA under examination. The accuracy of the in silico prediction was experimentally validated for the genome of a hyperthermophilic bacterium, Thermotoga maritima, by applying a cell-free expression assay using the predicted strong promoters. In this organism, the strong promoters govern genes for translation, energy metabolism, transport, cell movement, and other as-yet unidentified functions. Conclusion: The triad pattern algorithm developed for predicting strong bacterial promoters is well suited for analyzing bacterial genomes with an A+T content of less than 62%. This computational tool opens new prospects for investigating global gene expression and individual strong promoters in bacteria of medical and/or economic significance.
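The consecutive pattern matching can be illustrated with a toy regular expression over the canonical σ70 consensus boxes. The -35 box TTGACA, the -10 box TATAAT and the 15-21 nt spacer range below are illustrative assumptions drawn from the textbook consensus, not the paper's actual triad patterns (which also require an upstream UP-element and allow degenerate positions):

```python
import re

# Illustrative only: exact consensus -35 box, a plausible spacer range,
# then the exact consensus -10 box.
PROMOTER = re.compile(r"TTGACA[ACGT]{15,21}TATAAT")

def find_strong_promoter_candidates(genome: str):
    """Return the start offsets of all candidate promoter motifs."""
    return [m.start() for m in PROMOTER.finditer(genome)]
```

A realistic tool would score mismatches against position weight matrices instead of demanding exact consensus, which is one reason candidate frequency varies with the A+T content of the genome.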
COMPARISON OF LOSSLESS DATA COMPRESSION ALGORITHMS FOR TEXT DATA
Directory of Open Access Journals (Sweden)
U. S. Amarasinghe
2010-12-01
Full Text Available. Data compression is a common requirement for most computerized applications. There are a number of data compression algorithms which are dedicated to compressing different data formats, and even for a single data type there are a number of different compression algorithms which use different approaches. This paper examines lossless data compression algorithms and compares their performance. A set of selected algorithms is examined and implemented to evaluate their performance in compressing text data, and an experimental comparison of a number of different lossless data compression algorithms is presented. The article concludes by stating which algorithm performs well for text data.
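With Python's standard library, such a comparison on text data can be reproduced in miniature; zlib (DEFLATE), bz2 (Burrows-Wheeler) and lzma (LZMA) stand in here for the paper's selected algorithms:

```python
import bz2
import lzma
import time
import zlib

def compare_compressors(data: bytes):
    """Compress the same text with each stdlib codec; report ratio and time."""
    results = {}
    for name, compress in [("zlib", zlib.compress),
                           ("bz2", bz2.compress),
                           ("lzma", lzma.compress)]:
        t0 = time.perf_counter()
        out = compress(data)
        results[name] = {"ratio": len(out) / len(data),
                         "seconds": time.perf_counter() - t0}
    return results
```

Which codec "wins" depends on the corpus and on whether ratio or speed is weighted more heavily, which is exactly why an experimental comparison like the paper's is needed rather than a single ranking.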
Trust Based Algorithm for Candidate Node Selection in Hybrid MANET-DTN
Directory of Open Access Journals (Sweden)
Jan Papaj
2014-01-01
Full Text Available. The hybrid MANET-DTN is a mobile network that enables transport of data between groups of disconnected mobile nodes. The network provides the benefits of Mobile Ad-Hoc Networks (MANET) and Delay Tolerant Networks (DTN). The main problem of the MANET occurs if the communication path is broken or disconnected for some short time period. On the other side, DTN allows sending data in a disconnected environment thanks to its higher tolerance to delay. The hybrid MANET-DTN provides an optimal solution for transporting information in emergency situations. Moreover, security is a critical factor because the data are transported by mobile devices. In this paper, we investigate the issue of secure candidate node selection for transportation of the data in a disconnected environment for the hybrid MANET-DTN. To achieve secure selection of reliable mobile nodes, a trust algorithm is introduced. The algorithm enables the selection of reliable nodes based on collected routing information. The algorithm is implemented in the OPNET Modeler simulator.
A systematic comparison of genome-scale clustering algorithms
Jay, Jeremy J.; Eblen, John D; Zhang, Yun; Benson, Mikael; Perkins, Andy D.; Saxton, Arnold M.; Voy, Brynn H.; Elissa J Chesler; Langston, Michael A.
2012-01-01
Background: A wealth of clustering algorithms has been applied to gene co-expression experiments. These algorithms cover a broad range of approaches, from conventional techniques such as k-means and hierarchical clustering, to graphical approaches such as k-clique communities, weighted gene co-expression networks (WGCNA) and paraclique. Comparison of these methods to evaluate their relative effectiveness provides guidance to algorithm selection, development and implementation. Most prior work...
A Comparison of First-order Algorithms for Machine Learning
Wei, Yu; Thomas, Pock
2014-01-01
Using an optimization algorithm to solve a machine learning problem is one of the mainstream approaches in the field. In this work, we present a comprehensive comparison of some state-of-the-art first-order optimization algorithms for convex optimization problems in machine learning. We concentrate on several smooth and non-smooth machine learning problems with a loss function plus a regularizer. The overall experimental results show the superiority of primal-dual algorithms in solving a mac...
Garg, Poonam
2010-01-01
Genetic algorithms are population-based metaheuristics. They have been successfully applied to many optimization problems. However, premature convergence is an inherent characteristic of such classical genetic algorithms that makes them incapable of searching numerous solutions of the problem domain. A memetic algorithm is an extension of the traditional genetic algorithm that uses a local search technique to reduce the likelihood of premature convergence. The cryptanalysis of the simplified data encryption standard can be formulated as an NP-hard combinatorial problem. In this paper, a comparison between a memetic algorithm and a genetic algorithm was made in order to investigate their performance for cryptanalysis of simplified data encryption standard problems (SDES). The methods were tested, and various experimental results show that the memetic algorithm performs better than the genetic algorithm for this type of NP-hard combinatorial problem. This paper represents our first effort toward efficient memetic algo...
Comparison of greedy algorithms for α-decision tree construction
Alkhalid, Abdulaziz
2011-01-01
A comparison among different heuristics that are used by greedy algorithms which constructs approximate decision trees (α-decision trees) is presented. The comparison is conducted using decision tables based on 24 data sets from UCI Machine Learning Repository [2]. Complexity of decision trees is estimated relative to several cost functions: depth, average depth, number of nodes, number of nonterminal nodes, and number of terminal nodes. Costs of trees built by greedy algorithms are compared with minimum costs calculated by an algorithm based on dynamic programming. The results of experiments assign to each cost function a set of potentially good heuristics that minimize it. © 2011 Springer-Verlag.
A comparison of heuristic search algorithms for molecular docking.
Westhead, D R; Clark, D E; Murray, C W
1997-05-01
This paper describes the implementation and comparison of four heuristic search algorithms (genetic algorithm, evolutionary programming, simulated annealing and tabu search) and a random search procedure for flexible molecular docking. To our knowledge, this is the first application of the tabu search algorithm in this area. The algorithms are compared using a recently described fast molecular recognition potential function and a diverse set of five protein-ligand systems. Statistical analysis of the results indicates that overall the genetic algorithm performs best in terms of the median energy of the solutions located. However, tabu search shows a better performance in terms of locating solutions close to the crystallographic ligand conformation. These results suggest that a hybrid search algorithm may give superior results to any of the algorithms alone. PMID:9263849
Directory of Open Access Journals (Sweden)
Ait-Ali Lamia
2011-11-01
Full Text Available. Background: To propose a new diagnostic algorithm for candidates for Fontan and identify those who can skip cardiac catheterization (CC). Methods: Forty-four candidates for Fontan (median age 4.8 years, range: 2-29 years) were prospectively evaluated by trans-thoracic echocardiography (TTE), cardiovascular magnetic resonance (CMR) and CC. Before CC, according to clinical, echo and CMR findings, patients were divided into two groups: Group I comprised 18 patients deemed suitable for Fontan without requiring CC; Group II comprised 26 patients indicated for CC either in order to detect more details, or for interventional procedures. Results: In Group I ("CC not required") no unexpected new information affecting surgical planning was provided by CC. Conversely, in Group II new information was provided by CC in three patients (0 vs 11.5%, p = 0.35), and in six an interventional procedure was performed. During CC, minor complications occurred in one patient from Group I and in three from Group II (6 vs 14%, p = 0.7). The radiation dose-area product was similar in the two groups (median 20 Gycm2, range: 5-40 vs 26.5 Gycm2, range: 9-270, p = 0.37). All 18 Group I patients and 19 Group II patients underwent a total cavo-pulmonary anastomosis; of the remaining seven Group II patients, four were excluded from Fontan, two are awaiting Fontan, and one refused the intervention. Conclusion: In this paper we propose a new diagnostic algorithm in a pre-Fontan setting. An accurate non-invasive evaluation comprising TTE and CMR can select patients who can skip CC.
Comparison of fuzzy connectedness and graph cut segmentation algorithms
Ciesielski, Krzysztof C.; Udupa, Jayaram K.; Falcão, A. X.; Miranda, P. A. V.
2011-03-01
The goal of this paper is a theoretical and experimental comparison of two popular image segmentation algorithms: fuzzy connectedness (FC) and graph cut (GC). On the theoretical side, our emphasis will be on describing a common framework in which both of these methods can be expressed. We will give a full analysis of the framework and describe precisely a place which each of the two methods occupies in it. Within the same framework, other region based segmentation methods, like watershed, can also be expressed. We will also discuss in detail the relationship between FC segmentations obtained via image forest transform (IFT) algorithms, as opposed to FC segmentations obtained by other standard versions of FC algorithms. We also present an experimental comparison of the performance of FC and GC algorithms. This concentrates on comparing the actual (as opposed to provable worst scenario) algorithms' running time, as well as influence of the choice of the seeds on the output.
A Comparison of learning algorithms on the Arcade Learning Environment
Defazio, Aaron; Graepel, Thore
2014-01-01
Reinforcement learning agents have traditionally been evaluated on small toy problems. With advances in computing power and the advent of the Arcade Learning Environment, it is now possible to evaluate algorithms on diverse and difficult problems within a consistent framework. We discuss some challenges posed by the arcade learning environment which do not manifest in simpler environments. We then provide a comparison of model-free, linear learning algorithms on this challenging problem set.
The DCA: SOMe Comparison. A comparative study between two biologically-inspired algorithms
Greensmith, Julie; Aickelin, Uwe
2010-01-01
The Dendritic Cell Algorithm (DCA) is an immune-inspired algorithm, developed for the purpose of anomaly detection. The algorithm performs multi-sensor data fusion and correlation which results in a 'context aware' detection system. Previous applications of the DCA have included the detection of potentially malicious port scanning activity, where it has produced high rates of true positives and low rates of false positives. In this work we aim to compare the performance of the DCA and of a Self-Organizing Map (SOM) when applied to the detection of SYN port scans, through experimental analysis. A SOM is an ideal candidate for comparison as it shares similarities with the DCA in terms of the data fusion method employed. It is shown that the results of the two systems are comparable, and both produce false positives for the same processes. This shows that the DCA can produce anomaly detection results to the same standard as an established technique.
Does a Least-Preferred Candidate Win a Seat? A Comparison of Three Electoral Systems
Directory of Open Access Journals (Sweden)
Yoichi Hizen
2015-01-01
Full Text Available. In this paper, the differences between two variations of proportional representation (PR), open-list PR and closed-list PR, are analyzed in terms of their ability to accurately reflect voter preference. The single nontransferable vote (SNTV) is also included in the comparison as a benchmark. We construct a model of voting equilibria with a candidate who is least preferred by voters in the sense that replacing the least-preferred candidate in the set of winners with any loser is Pareto improving, and our focus is on whether the least-preferred candidate wins under each electoral system. We demonstrate that the least-preferred candidate never wins under the SNTV, but can win under open-list PR, although this is less likely than winning under closed-list PR.
Directory of Open Access Journals (Sweden)
Amin Mubark Alamin Ibrahim
2015-04-01
Full Text Available. Matching and searching text is an important topic in the field of computer science and is used in many programs, such as Microsoft Word's spell checking and search & replace, among other uses. The aim of this study was to compare the trade-offs between text matching algorithms, applied here to Horspool's and the Brute Force algorithms, according to the standard criteria of number of comparisons and execution time. The study found Horspool's algorithm preferable.
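Both algorithms, instrumented with the study's comparison-count criterion, can be sketched as follows (the counts are illustrative and depend on implementation details):

```python
def brute_force_search(text, pattern):
    """Naive left-to-right search; returns (match index or -1, char comparisons)."""
    n, m = len(text), len(pattern)
    comparisons = 0
    for i in range(n - m + 1):
        j = 0
        while j < m:
            comparisons += 1
            if text[i + j] != pattern[j]:
                break
            j += 1
        if j == m:
            return i, comparisons
    return -1, comparisons

def horspool_search(text, pattern):
    """Boyer-Moore-Horspool: scan the window right-to-left and shift by the
    bad-character rule keyed on the window's last character."""
    n, m = len(text), len(pattern)
    # Rightmost occurrence of each character in pattern[:-1] decides the shift.
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    comparisons = 0
    i = 0
    while i <= n - m:
        j = m - 1
        while j >= 0:
            comparisons += 1
            if text[i + j] != pattern[j]:
                break
            j -= 1
        if j < 0:
            return i, comparisons
        i += shift.get(text[i + m - 1], m)
    return -1, comparisons
```

On natural-language text Horspool typically skips several characters per window, so its comparison count falls well below the brute-force count, matching the study's conclusion.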
Reranking candidate gene models with cross-species comparison for improved gene prediction
Directory of Open Access Journals (Sweden)
Pereira Fernando CN
2008-10-01
Full Text Available. Background: Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results: We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely-related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion: Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models.
BRASERO: A Resource for Benchmarking RNA Secondary Structure Comparison Algorithms
Chauve, Cedric; Allali, Julien; Saule, Cedric
2012-01-01
The pairwise comparison of RNA secondary structures is a fundamental problem, with direct application in mining databases for annotating putative noncoding RNA candidates in newly sequenced genomes. An increasing number of software tools are available for comparing RNA secondary structures, based on different models (such as ordered trees or forests, arc annotated sequences, and multilevel trees) and computational principles (edit distance, alignment). We describe here the website BRASERO that offers tools for evaluating such software tools on real and synthetic datasets.
BRASERO: A Resource for Benchmarking RNA Secondary Structure Comparison Algorithms.
Allali, Julien; Saule, Cédric; Chauve, Cédric; d'Aubenton-Carafa, Yves; Denise, Alain; Drevet, Christine; Ferraro, Pascal; Gautheret, Daniel; Herrbach, Claire; Leclerc, Fabrice; de Monte, Antoine; Ouangraoua, Aida; Sagot, Marie-France; Termier, Michel; Thermes, Claude; Touzet, Hélène
2012-01-01
The pairwise comparison of RNA secondary structures is a fundamental problem, with direct application in mining databases for annotating putative noncoding RNA candidates in newly sequenced genomes. An increasing number of software tools are available for comparing RNA secondary structures, based on different models (such as ordered trees or forests, arc annotated sequences, and multilevel trees) and computational principles (edit distance, alignment). We describe here the website BRASERO that offers tools for evaluating such software tools on real and synthetic datasets. PMID:22675348
Comparison of face Recognition Algorithms on Dummy Faces
Directory of Open Access Journals (Sweden)
Aruni Singh
2012-09-01
Full Text Available. In the age of rising crime, face recognition is enormously important in the contexts of computer vision, psychology, surveillance, fraud detection, pattern recognition, neural networks, content-based video processing, etc. The face is a non-intrusive, strong biometric for identification, and hence criminals always try to hide their facial organs by different artificial means such as plastic surgery, disguise and dummies. The availability of a comprehensive face database is crucial to testing the performance of these face recognition algorithms. However, while existing publicly-available face databases contain face images with a wide variety of poses, illumination, gestures and face occlusions, no dummy face database is available in the public domain. The contributions of this research paper are: (i) preparation of a dummy face database of 110 subjects; (ii) comparison of some texture based, feature based and holistic face recognition algorithms on that dummy face database; (iii) critical analysis of these types of algorithms on the dummy face database.
Amin Mubark Alamin Ibrahim; Mustafa Elgili Mustafa
2015-01-01
Text matching and text search are important topics in the field of computer science, used in many programs such as Microsoft Word for correcting spelling mistakes and for search &amp; replace, among other uses. The aim of this study was to compare the trade-offs of text-matching algorithms, applied here to the Horspool and Brute Force algorithms, according to the number of comparisons and the execution time. The study pointed on prefer...
Criteria for comparison of synchronization algorithms for spatially separated time and frequency measures
Koval, Yuriy; Kostyrya, Alexander; Pryimak, Viacheslav; Al-Tvezhri, Basim
2012-01-01
The paper describes the role of, and gives a classification of, synchronization algorithms for spatially separated time and frequency measures. Criteria for comparing the algorithms are introduced and illustrated with the example of one of the algorithms.
Comparison of machine learning algorithms for detecting coral reef
Directory of Open Access Journals (Sweden)
Eduardo Tusa
2014-09-01
Full Text Available (Received: 2014/07/31 - Accepted: 2014/09/23) This work focuses on developing a fast coral reef detector for use on an autonomous underwater vehicle (AUV). Fast detection secures the AUV's stabilization with respect to an area of reef as quickly as possible and prevents devastating collisions. We use the algorithm of Purser et al. (2009) because of its precision. This detector has two parts: feature extraction, which uses Gabor wavelet filters, and feature classification, which uses machine learning based on neural networks. Due to the extensive running time of the neural networks, we exchange them for a classification algorithm based on decision trees. We use a database of 621 images of coral reef in Belize (110 images for training and 511 images for testing). We implement the bank of Gabor wavelet filters using C++ and the OpenCV library. We compare the accuracy and running time of 9 machine learning algorithms, resulting in the selection of the Decision Trees algorithm. Our coral detector runs in 70 ms, compared to the 22 s taken by the algorithm of Purser et al. (2009).
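The feature-extraction stage described above relies on a bank of Gabor wavelet filters. As a rough illustration of what such a filter computes (not the authors' C++/OpenCV bank, whose parameters the abstract does not give), a real-valued Gabor kernel is a cosine carrier under a Gaussian envelope; the `size`, `wavelength`, `theta` and `sigma` values below are illustrative choices:

```python
import math

def gabor_kernel(size, wavelength, theta, sigma):
    """Real-valued Gabor kernel: cosine carrier modulated by a Gaussian envelope."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # Rotate coordinates so the carrier runs along orientation theta.
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            envelope = math.exp(-(xr * xr + yr * yr) / (2.0 * sigma * sigma))
            carrier = math.cos(2.0 * math.pi * xr / wavelength)
            row.append(envelope * carrier)
        kernel.append(row)
    return kernel

def convolve_at(image, kernel, cy, cx):
    """Correlate the kernel with the image patch centred at (cy, cx)."""
    half = len(kernel) // 2
    acc = 0.0
    for dy in range(-half, half + 1):
        for dx in range(-half, half + 1):
            acc += image[cy + dy][cx + dx] * kernel[dy + half][dx + half]
    return acc
```

A stripe pattern whose period matches `wavelength` and whose orientation matches `theta` produces a strong response, which is what makes such filters useful as texture features.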
Parallel Branch and Bound Algorithm - A comparison between serial, OpenMP and MPI implementations
International Nuclear Information System (INIS)
This paper presents a comparison of an extended version of the regular Branch and Bound algorithm previously implemented in serial with a new parallel implementation, using both MPI (distributed memory parallel model) and OpenMP (shared memory parallel model). The branch-and-bound algorithm is an enumerative optimization technique, where finding a solution to a mixed integer programming (MIP) problem is based on the construction of a tree where nodes represent candidate problems and branches represent the new restrictions to be considered. Through this tree all integer solutions of the feasible region of the problem are listed explicitly or implicitly ensuring that all the optimal solutions will be found. A common approach to solve such problems is to convert sub-problems of the mixed integer problem to linear programming problems, thereby eliminating some of the integer constraints, and then trying to solve that problem using an existing linear program approach. The paper describes the general branch and bound algorithm used and provides details on the implementation and the results of the comparison.
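The tree search described above can be illustrated on the simplest MIP with a cheap relaxation bound: the 0/1 knapsack problem, whose LP relaxation is the fractional knapsack. This is a generic sketch of the branch-and-bound idea, not the parallel MPI/OpenMP implementation of the paper:

```python
def knapsack_branch_and_bound(values, weights, capacity):
    """0/1 knapsack by branch and bound: each node fixes a prefix of take/skip
    decisions; the fractional (LP) relaxation of the remaining items bounds it."""
    n = len(values)
    # Sort items by value density so the fractional bound is greedy to compute.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]

    def bound(level, cur_value, cur_weight):
        """Upper bound: greedily add remaining items, the last one fractionally."""
        b, room = cur_value, capacity - cur_weight
        for i in range(level, n):
            if w[i] <= room:
                room -= w[i]
                b += v[i]
            else:
                return b + v[i] * room / w[i]
        return b

    best = 0
    stack = [(0, 0, 0)]  # (level, value so far, weight so far)
    while stack:
        level, cur_value, cur_weight = stack.pop()
        if cur_weight > capacity or bound(level, cur_value, cur_weight) <= best:
            continue  # infeasible, or the relaxation cannot beat the incumbent
        best = max(best, cur_value)  # "skip all remaining items" is feasible
        if level < n:
            stack.append((level + 1, cur_value, cur_weight))                        # skip
            stack.append((level + 1, cur_value + v[level], cur_weight + w[level]))  # take
    return best
```

The pruning test `bound(...) <= best` is exactly the step the paper parallelises: independent subtrees can be explored by different workers as long as the incumbent `best` is shared.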
Selection of candidate plus phenotypes of Jatropha curcas L. using method of paired comparisons
Energy Technology Data Exchange (ETDEWEB)
Mishra, D.K. [Silviculture Division, Arid Forest Research Institute, P.O. Krishi Mandi, New Pali Road, Jodhpur 342005, Rajasthan (India)
2009-03-15
Jatropha curcas L. (Euphorbiaceae) is an oil bearing species with multiple uses and considerable potential as a biodiesel crop. The present communication deals with the method of selecting plus phenotypes of J. curcas for exploiting genetic variability for further improvement. Candidate plus tree selection is the first and most important stage in any tree improvement programme. The selection of candidate plus plants (CPPs) is based upon various important attributes associated with the species and their relative ranking. Relative preference between various traits and scoring for each trait has been worked out by using the method of paired comparisons for the selection of CPP in J. curcas L. The most important ones are seed and oil yields. (author)
Comparison of evolutionary algorithms in gene regulatory network model inference
Directory of Open Access Journals (Sweden)
Crane Martin
2010-01-01
Full Text Available Abstract Background The evolution of high throughput technologies that measure gene expression levels has created a data base for inferring GRNs (a process also known as reverse engineering of GRNs. However, the nature of these data has made this process very difficult. At the moment, several methods of discovering qualitative causal relationships between genes with high accuracy from microarray data exist, but large scale quantitative analysis on real biological datasets cannot be performed, to date, as existing approaches are not suitable for real microarray data which are noisy and insufficient. Results This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and offer a comprehensive comparison of approaches, under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. Conclusions Presented is a comparison framework for assessment of evolutionary algorithms, used to infer gene regulatory networks. Promising methods are identified and a platform for development of appropriate model formalisms is established.
Comparison of evolutionary algorithms in gene regulatory network model inference.
LENUS (Irish Health Repository)
2010-01-01
ABSTRACT: BACKGROUND: The evolution of high throughput technologies that measure gene expression levels has created a data base for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods of discovering qualitative causal relationships between genes with high accuracy from microarray data exist, but large scale quantitative analysis on real biological datasets cannot be performed, to date, as existing approaches are not suitable for real microarray data which are noisy and insufficient. RESULTS: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and offer a comprehensive comparison of approaches, under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. CONCLUSIONS: Presented is a comparison framework for assessment of evolutionary algorithms, used to infer gene regulatory networks. Promising methods are identified and a platform for development of appropriate model formalisms is established.
Detecting protein candidate fragments using a structural alphabet profile comparison approach.
Shen, Yimin; Picord, Géraldine; Guyon, Frédéric; Tuffery, Pierre
2013-01-01
Predicting accurate fragments from sequence has recently become a critical step for protein structure modeling, as protein fragment assembly techniques are presently among the most efficient approaches for de novo prediction. A key step in these approaches is, given the sequence of a protein to model, the identification of relevant fragments - candidate fragments - from a collection of the available 3D structures. These fragments can then be assembled to produce a model of the complete structure of the protein of interest. The search for candidate fragments is classically achieved by considering local sequence similarity using profile comparison, or threading approaches. In the present study, we introduce a new profile comparison approach that, instead of using amino acid profiles, is based on the use of predicted structural alphabet profiles, where structural alphabet profiles contain information related to the 3D local shapes associated with the sequences. We show that structural alphabet profile-profile comparison can be used efficiently to retrieve accurate structural fragments, and we introduce a fully new protocol for the detection of candidate fragments. It identifies fragments specific to each position of the sequence, of size varying between 6 and 27 amino acids. We find it outperforms present state-of-the-art approaches in terms of (i) the accuracy of the fragments identified and (ii) the rate of true positives identified, while having a high coverage score. We illustrate the relevance of the approach on complete target sets of the two previous Critical Assessment of Techniques for Protein Structure Prediction (CASP) rounds 9 and 10. A web server for the approach is freely available at http://bioserv.rpbs.univ-paris-diderot.fr/SAFrag. PMID:24303019
Web Data Extraction Using Tree Structure Algorithms – A Comparison
Directory of Open Access Journals (Sweden)
Ms. Seema Kolkur
2013-07-01
Full Text Available Nowadays, Web pages provide a large amount of structured data, which is required by many advanced applications. This data can be searched through their Web query interfaces. The retrieved information is also called 'deep or hidden data'. The deep data is enwrapped in Web pages in the form of data records. These special Web pages are generated dynamically and presented to users in the form of HTML documents along with other content. These webpages can be a virtual gold mine of information for business, if mined effectively. Web Data Extraction systems, or web wrappers, are software applications for extracting information from Web sources such as Web pages. A Web Data Extraction system usually interacts with a Web source and extracts data stored in it. The extracted data is converted into the most convenient structured format and stored for further usage. This paper deals with the development of such a wrapper, which takes search engine result pages as input and converts them into structured format. Secondly, this paper proposes a new algorithm called the Improved Tree Matching algorithm, which in turn is based on the efficient Simple Tree Matching (STM) algorithm. Towards the end of this work, a comparison with existing works is given. Experimental results show that this approach can extract web data with lower complexity compared to other existing approaches.
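The Simple Tree Matching (STM) algorithm that the paper builds on computes, by dynamic programming over ordered child sequences, the size of a maximum matching between two labelled ordered trees. A minimal sketch, using nested `(label, children)` tuples as an assumed tree representation:

```python
def simple_tree_matching(a, b):
    """Simple Tree Matching: size of a maximum matching between two ordered
    labelled trees, each given as a (label, [children]) tuple."""
    if a[0] != b[0]:
        return 0  # roots with different labels cannot be matched
    ka, kb = a[1], b[1]
    # DP over the two ordered child sequences, like an LCS with weighted matches.
    m = [[0] * (len(kb) + 1) for _ in range(len(ka) + 1)]
    for i in range(1, len(ka) + 1):
        for j in range(1, len(kb) + 1):
            m[i][j] = max(m[i][j - 1], m[i - 1][j],
                          m[i - 1][j - 1] + simple_tree_matching(ka[i - 1], kb[j - 1]))
    return 1 + m[len(ka)][len(kb)]
```

On HTML result pages the trees are DOM subtrees of data records; a high STM score between two records signals that they share the same template.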
Directory of Open Access Journals (Sweden)
Saira Beg
2011-11-01
Full Text Available This paper presents a performance evaluation of the Bionomic Algorithm (BA) for the Shortest Path Finding (SPF) problem, compared with the performance of the Genetic Algorithm (GA) for the same problem. SPF is a classical problem with many applications in networks, robotics, electronics, etc. The SPF problem has been solved using different algorithms such as Dijkstra's algorithm and Floyd's algorithm, as well as GA, Neural Networks (NN), Tabu Search (TS), and Ant Colony Optimization (ACO). We have employed the Bionomic Algorithm for solving the SPF problem and give a performance comparison of BA vs. GA for the same problem. Simulation results, obtained using MATLAB, are presented at the end.
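Of the exact methods mentioned, Dijkstra's algorithm is the classical baseline against which such metaheuristics are measured. A standard priority-queue implementation:

```python
import heapq

def dijkstra(graph, source):
    """Single-source shortest paths on a graph with non-negative edge weights.
    graph maps node -> list of (neighbour, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float('inf')):
            continue  # stale queue entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float('inf')):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

Unlike GA or BA, which sample the space of candidate paths, Dijkstra is exact and deterministic; the heuristics are of interest when the problem is dynamic or the cost model does not fit the non-negative-weight assumption.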
Comparison of depletion algorithms for large systems of nuclides
International Nuclear Information System (INIS)
In this work five algorithms for solving the system of decay and transmutation equations with constant reaction rates encountered in burnup calculations were compared. These are the Chebyshev rational approximation method (CRAM), which is a new matrix exponential method; the matrix exponential power series with instant decay and secular equilibrium approximations for short-lived nuclides, which is used in ORIGEN; and three different variants of transmutation trajectory analysis (TTA), which is also known as the linear chains method. The common feature of these methods is their ability to deal with thousands of nuclides and reactions. Consequently, there is no need to simplify the system of equations and all nuclides can be accounted for explicitly. The methods were compared in single depletion steps using decay and cross-section data taken from the default ORIGEN libraries. Very accurate reference solutions were obtained from a high precision TTA algorithm. The results from CRAM and TTA were found to be very accurate. While ORIGEN was not as accurate, it should still be sufficient for most purposes. All TTA variants are much slower than the other two, which are so fast that their running time should be negligible in most, if not all, applications. The combination of speed and accuracy makes CRAM the clear winner of the comparison.
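All the compared methods solve the linear system dN/dt = A N, whose solution is N(t) = e^(At) N(0). The sketch below uses a plain Taylor series for the matrix exponential on a toy three-nuclide decay chain (illustrative decay constants) and checks it against the analytic Bateman solution; CRAM exists precisely because a Taylor series breaks down on the stiff, enormous matrices of real burnup problems, so this only illustrates the formulation:

```python
import math

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)] for i in range(n)]

def mat_exp(A, terms=40):
    """Taylor-series matrix exponential; adequate only for small, well-scaled matrices."""
    n = len(A)
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[x / k for x in row] for row in mat_mul(term, A)]  # A^k / k!
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# Toy decay chain A -> B -> C with decay constants la, lb (C stable).
la, lb, t = 0.7, 0.3, 2.0
M = [[-la, 0.0, 0.0],
     [la, -lb, 0.0],
     [0.0, lb, 0.0]]
Mt = [[x * t for x in row] for row in M]
E = mat_exp(Mt)
n0 = [1.0, 0.0, 0.0]                      # start with pure nuclide A
n_t = [sum(E[i][j] * n0[j] for j in range(3)) for i in range(3)]

# Analytic Bateman solution for the same two-step chain.
nA = math.exp(-la * t)
nB = la / (lb - la) * (math.exp(-la * t) - math.exp(-lb * t))
```

The columns of M sum to zero, so the total number of atoms is conserved, a useful sanity check for any depletion solver.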
A benchmark for comparison of dental radiography analysis algorithms.
Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia
2016-07-01
Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical usages. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray image and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/). PMID:26974042
Comparison of New Multilevel Association Rule Algorithm with MAFIA
Arpna Shrivastava; Jain, R. C.; Ajay Kumar Shrivastava
2014-01-01
Multilevel association rules provide more precise and specific information. The Apriori algorithm is an established algorithm for finding association rules. A fast Apriori implementation is modified to develop a new algorithm for finding frequent item sets and mining multilevel association rules. MAFIA is another established algorithm for finding frequent item sets. In this paper, the performance of this new algorithm is analyzed and compared with the MAFIA algorithm.
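A minimal sketch of the classical Apriori level-wise search referred to above (support-based pruning only, without the paper's multilevel extension or the Fast Apriori data-structure optimizations):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classical Apriori: level-wise search using the anti-monotone rule that
    every subset of a frequent itemset must itself be frequent."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(1 for t in transactions if itemset <= t) / len(transactions)

    frequent = {}
    level = [frozenset([i]) for i in sorted(items)]
    k = 1
    while level:
        survivors = [s for s in level if support(s) >= min_support]
        frequent.update({s: support(s) for s in survivors})
        # Candidate generation: join k-itemsets into (k+1)-itemsets, then prune
        # any candidate with an infrequent k-subset.
        k += 1
        candidates = {a | b for a in survivors for b in survivors if len(a | b) == k}
        level = [c for c in candidates
                 if all(frozenset(sub) in frequent for sub in combinations(c, k - 1))]
    return frequent
```

Multilevel mining runs essentially this procedure at each level of an item taxonomy (e.g. "milk" vs. "2% milk"), typically with level-specific support thresholds.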
Comparison of New Multilevel Association Rule Algorithm with MAFIA
Directory of Open Access Journals (Sweden)
Arpna Shrivastava
2014-10-01
Full Text Available Multilevel association rules provide more precise and specific information. The Apriori algorithm is an established algorithm for finding association rules. A fast Apriori implementation is modified to develop a new algorithm for finding frequent item sets and mining multilevel association rules. MAFIA is another established algorithm for finding frequent item sets. In this paper, the performance of this new algorithm is analyzed and compared with the MAFIA algorithm.
Comparison of Adhesion and Retention Forces for Two Candidate Docking Seal Elastomers
Hartzler, Brad D.; Panickar, Marta B.; Wasowski, Janice L.; Daniels, Christopher C.
2011-01-01
To successfully mate two pressurized vehicles or structures in space, advanced seals are required at the interface to prevent the loss of breathable air to the vacuum of space. A critical part of the development testing of candidate seal designs was a verification of the integrity of the retaining mechanism that holds the silicone seal component to the structure. Failure to retain the elastomer seal during flight could liberate seal material in the event of high adhesive loads during undocking. This work presents an investigation of the force required to separate the elastomer from its metal counter-face surface during simulated undocking as well as a comparison to that force which was necessary to destructively remove the elastomer from its retaining device. Two silicone elastomers, Wacker 007-49524 and Esterline ELASA-401, were evaluated. During the course of the investigation, modifications were made to the retaining devices to determine if the modifications improved the force needed to destructively remove the seal. The tests were completed at the expected operating temperatures of -50, +23, and +75 C. Under the conditions investigated, the comparison indicated that the adhesion between the elastomer and the metal counter-face was significantly less than the force needed to forcibly remove the elastomer seal from its retainer, and no failure would be expected.
Performance Comparison Of Evolutionary Algorithms For Image Clustering
Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.
2014-09-01
Evolutionary computation tools are able to process real-valued numerical sets in order to extract a suboptimal solution of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performances have been scarcely studied using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (i.e., the Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (i.e., k-means, FCM, SOM networks) have been used to cluster images, and their performances have been compared using four clustering validation indexes. Experimental test results showed that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.
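As a reference point for the classical techniques in the comparison, Lloyd's k-means, the simplest of them, can be sketched in a few lines (naive first-k initialisation is used here for determinism; the paper's experimental settings are not given in the abstract):

```python
def kmeans(points, k, iters=100):
    """Lloyd's k-means on 2-D points: alternate nearest-centroid assignment
    and centroid update until the centroids stop moving."""
    centroids = [p for p in points[:k]]  # naive init; k-means++ is more robust
    clusters = []
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: (p[0] - centroids[c][0]) ** 2
                                            + (p[1] - centroids[c][1]) ** 2)
            clusters[j].append(p)
        new = [(sum(q[0] for q in cl) / len(cl), sum(q[1] for q in cl) / len(cl))
               if cl else centroids[j]
               for j, cl in enumerate(clusters)]
        if new == centroids:
            break  # converged: assignments can no longer change
        centroids = new
    return centroids, clusters
```

The evolutionary algorithms in the paper search the same space of cluster centers but score candidate center sets globally, which is why they escape the local optima that this alternating scheme can get stuck in, at the cost of far longer convergence time.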
Comparison of Load Balancing and Scheduling Algorithms in Cloud Environment
Karthika M T,; Neethu Kurian,; Mariya Seby,
2013-01-01
The importance of cloud computing is increasing nowadays. Cloud computing is used for the delivery of hosted services such as reliable, fault-tolerant and scalable infrastructure over the Internet. A variety of algorithms is used in the cloud environment for scheduling and load balancing, thereby reducing the total cost. The main algorithms usually used include the optimal cloud resource provisioning (OCRP) algorithm and the hybrid cloud optimized cost (HCOC) scheduling algorithm. These algorithms will formul...
Performance Comparison of Adaptive Algorithms for Adaptive line Enhancer
Sanjeev Kumar Dhull; Sandeep K. Arya; O. P. Sahu
2011-01-01
We have designed and simulated an adaptive line enhancer (ALE) system for conferencing. This system is based upon the least-mean-square (LMS) and recursive least-squares (RLS) adaptive algorithms. The performance of the ALE is compared for the LMS and RLS algorithms.
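The LMS half of such a comparison is easy to sketch: the filter weights are nudged along the negative gradient of the instantaneous squared error. Below the filter identifies a known 3-tap FIR system (an illustrative setup, not the conferencing ALE of the paper):

```python
import random

def lms_identify(x, d, taps, mu):
    """LMS adaptive filter: for each sample, form the output from the last
    `taps` inputs and nudge the weights along -grad(e^2) with step mu."""
    w = [0.0] * taps
    errors = []
    for n in range(taps, len(x)):
        frame = x[n - taps:n][::-1]              # most recent sample first
        y = sum(wi * xi for wi, xi in zip(w, frame))
        e = d[n] - y
        w = [wi + 2.0 * mu * e * xi for wi, xi in zip(w, frame)]
        errors.append(e * e)
    return w, errors

# Identify a known 3-tap FIR system from its input/output signals.
rng = random.Random(1)
true_w = [0.5, -0.3, 0.2]
x = [rng.uniform(-1, 1) for _ in range(4000)]
d = [0.0] * len(x)
for n in range(3, len(x)):
    d[n] = sum(true_w[i] * x[n - 1 - i] for i in range(3))
w, errors = lms_identify(x, d, taps=3, mu=0.05)
```

RLS replaces the fixed-step gradient update with a recursively maintained inverse correlation matrix: it converges in far fewer samples but costs O(taps^2) per sample instead of O(taps), which is the trade-off such a comparison measures.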
A comparison of performance measures for online algorithms
DEFF Research Database (Denmark)
Boyar, Joan; Irani, Sandy; Larsen, Kim Skak
balance greediness and adaptability. We examine how these measures evaluate the Greedy Algorithm and Lazy Double Coverage, commonly studied algorithms in the context of server problems. We examine Competitive Analysis, the Max/Max Ratio, the Random Order Ratio, Bijective Analysis and Relative Worst Order Analysis and determine how they compare the two algorithms. We find that by the Max/Max Ratio and Bijective Analysis, Greedy is the better algorithm. Under the other measures Lazy Double Coverage is better, though Relative Worst Order Analysis indicates that Greedy is sometimes better. Our results also provide the first proof of optimality of an algorithm under Relative Worst Order Analysis.
Poonam Garg
2010-01-01
Genetic algorithms are population-based metaheuristics. They have been successfully applied to many optimization problems. However, premature convergence is an inherent characteristic of such classical genetic algorithms that makes them incapable of searching numerous solutions of the problem domain. A memetic algorithm is an extension of the traditional genetic algorithm. It uses a local search technique to reduce the likelihood of premature convergence. The cryptanalysis of simplifie...
Directory of Open Access Journals (Sweden)
DURUSU, A.
2014-08-01
Full Text Available Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they make the solar panels operate at the maximum power point (MPP) whatever the changes in environmental conditions are. For this reason, they take an important place in the increase of PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of MPPT algorithms have been proposed in the literature. Comparisons of MPPT algorithms in the literature are made with sun-simulator-based test systems under laboratory conditions for short durations. In this study, however, the performances of the four most commonly used MPPT algorithms are compared under real environmental conditions for longer periods. A dual identical experimental setup is designed to compare two of the considered MPPT algorithms in a synchronized manner. As a result of this study, the ranking among these algorithms is presented, and the results show that the Incremental Conductance (IC) algorithm gives the best performance.
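The winning Incremental Conductance (IC) algorithm uses the MPP condition dP/dV = 0, which for P = V·I is equivalent to dI/dV = -I/V, to decide on which side of the power peak the operating point lies. A sketch on a toy PV curve (the cubic I-V model and all constants are illustrative assumptions, not panel data from the paper):

```python
def pv_current(v, isc=8.0, voc=20.0):
    """Toy PV I-V curve: monotone decreasing, zero at the open-circuit voltage."""
    return isc * (1.0 - (v / voc) ** 3)

def incremental_conductance(v=5.0, dv=0.05, iters=1000):
    """IC MPPT: compare the incremental conductance dI/dV with -I/V and move
    the operating voltage toward the point where they cancel (the MPP)."""
    i = pv_current(v)
    for _ in range(iters):
        v2 = v + dv
        i2 = pv_current(v2)
        # sign(dI/dV + I/V) is the sign of dP/dV, since dP/dV = I + V*dI/dV.
        if (i2 - i) / (v2 - v) + i2 / v2 > 0:
            v, i = v2, i2                    # left of the MPP: climb up in voltage
        else:
            v = v - dv                       # right of the MPP: step back down
            i = pv_current(v)
    return v
```

On this curve the true MPP sits at V = Voc / 4^(1/3), about 12.6 V, and the tracker ends up oscillating within one step of it, which is the characteristic steady-state behaviour of perturbation-based MPPT.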
Comparison between various beam steering algorithms for the CEBAF lattice
International Nuclear Information System (INIS)
In this paper we describe a comparative study performed to evaluate various beam steering algorithms for CEBAF lattice. The first approach that was evaluated used a Singular Value Decomposition (SVD) based algorithm to determine the corrector magnet setting for various regions of the CEBAF lattice. The second studied algorithm is known as PROSAC (Projective RMS Orbit Subtraction And Correction). This algorithm was developed at TJNAF to support the commissioning activity. The third set of algorithms tested are known as COCU (CERN Orbit Correction Utility) which is a production steering package used at CERN. A program simulating a variety of errors such as misalignment, BPM offset, etc. was used to generate test inputs for these three sets of algorithms. Conclusions of this study are presented in this paper. copyright 1997 American Institute of Physics
Comparison between various beam steering algorithms for the CEBAF lattice
International Nuclear Information System (INIS)
In this paper we describe a comparative study performed to evaluate various beam steering algorithms for CEBAF lattice. The first approach that was evaluated used a Singular Value Decomposition (SVD) based algorithm to determine the corrector magnet setting for various regions of the CEBAF lattice. The second studied algorithm is known as PROSAC (Projective RMS Orbit Subtraction And Correction). This algorithm was developed at TJNAF to support the commissioning activity. The third set of algorithms tested are known as COCU (CERN Orbit Correction Utility) which is a production steering package used at CERN. A program simulating a variety of errors such as misalignment, BPM offset, etc. was used to generate test inputs for these three sets of algorithms. Conclusions of this study are presented in this paper
The Comparison and Application of Corner Detection Algorithms
Jie Chen; Li-hui Zou; Juan Zhang; Li-hua Dou
2009-01-01
Corners in images represent a lot of important information. Extracting corners accurately is significant to image processing, as it can reduce much of the calculation. In this paper, two widely used corner detection algorithms, the SUSAN and Harris corner detection algorithms, which are both intensity-based, were compared quantitatively in terms of stability, noise immunity and complexity via the stability factor η, the anti-noise factor ρ and the runtime of each algorithm. It concluded that Harris corner ...
Comparison of two global digital algorithms for Minkowski tensor estimation
DEFF Research Database (Denmark)
Christensen, Sabrina Tang; Kiderlen, Markus
2016-01-01
The geometry of real world objects can be described by Minkowski tensors. Algorithms have been suggested to approximate Minkowski tensors if only a binary image of the object is available. This paper presents implementations of two such algorithms. The theoretical convergence properties are confirmed by simulations on test sets, and recommendations for input arguments of the algorithms are given. For increasing resolutions, we obtain more accurate estimators for the Minkowski tensors. Digitisations of more complicated objects are shown to require higher resolutions.
The Comparison and Application of Corner Detection Algorithms
Directory of Open Access Journals (Sweden)
Jie Chen
2009-12-01
Full Text Available Corners in images represent a lot of important information. Extracting corners accurately is significant to image processing, as it can reduce much of the calculation. In this paper, two widely used corner detection algorithms, the SUSAN and Harris corner detection algorithms, which are both intensity-based, were compared quantitatively in terms of stability, noise immunity and complexity via the stability factor η, the anti-noise factor ρ and the runtime of each algorithm. It was concluded that the Harris corner detection algorithm was superior to the SUSAN corner detection algorithm on the whole. Moreover, the SUSAN and Harris detection algorithms were improved by selecting an adaptive gray-difference threshold and by changing directional differentials, respectively, and compared using these three criteria. In addition, the SUSAN and Harris corner detectors were applied to an image matching experiment. It was verified that the quantitative evaluations of the corner detection algorithms were valid by calculating match efficiency, defined as correct matching corner pairs divided by matching time, which can reflect the performance of a corner detection algorithm comprehensively. Furthermore, the better corner detector was used in an image mosaic experiment, and the result was satisfactory. The work of this paper can provide a direction for the improvement and utilization of these two corner detection algorithms.
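The Harris detector that comes out ahead in this comparison scores each pixel by R = det(M) - k·trace(M)^2, where M is the local structure tensor built from image gradients. A compact sketch with an unweighted 3x3 window (the standard formulation uses Gaussian weighting; k = 0.04 is the usual default):

```python
def harris_response(img, k=0.04):
    """Harris corner response R = det(M) - k*trace(M)^2 from the 2x2 gradient
    structure tensor M, summed over a plain 3x3 window (no Gaussian weights)."""
    h, w = len(img), len(img[0])
    # Central-difference image gradients, clamped at the borders.
    ix = [[(img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
           for x in range(w)] for y in range(h)]
    iy = [[(img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
           for x in range(w)] for y in range(h)]
    resp = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            sxx = sxy = syy = 0.0
            for dy in (-1, 0, 1):
                for dx in (-1, 0, 1):
                    gx, gy = ix[y + dy][x + dx], iy[y + dy][x + dx]
                    sxx += gx * gx
                    sxy += gx * gy
                    syy += gy * gy
            resp[y][x] = sxx * syy - sxy * sxy - k * (sxx + syy) ** 2
    return resp
```

The response is strongly positive only where the gradient varies in two directions (a corner), negative along edges, and near zero in flat regions, which is exactly the property the stability and noise-immunity factors in the paper quantify.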
Institute of Scientific and Technical Information of China (English)
Li Xi; Ji Hong; Zheng Ruiming; Li Ting
2009-01-01
In order to improve the performance of peer-to-peer file-sharing systems in mobile distributed environments, a novel always-optimally-coordinated (AOC) criterion and a corresponding candidate selection algorithm are proposed in this paper. Compared with the traditional min-hops criterion, the new approach introduces a fuzzy knowledge combination theory to investigate several important factors that influence file transfer success rate and efficiency. Whereas the min-hops based protocols only ask the nearest candidate peer for desired files, the selection algorithm based on AOC comprehensively considers users' preferences and network requirements with flexible balancing rules. Furthermore, its advantage is also expressed in its independence from specific resource discovery protocols, allowing for scalability. The simulation results show that when using the AOC-based peer selection algorithm, system performance is much better than with the min-hops scheme, with the file transfer success rate improved by more than 50% and transfer time reduced by at least 20%.
Performance Comparison of Constrained Artificial Bee Colony Algorithm
Directory of Open Access Journals (Sweden)
Soudeh Babaeizadeh
2015-06-01
Full Text Available This study aims to evaluate, analyze and compare the performances of the constrained Artificial Bee Colony (ABC) algorithms available in the literature. In recent decades, many different variants of the ABC algorithm have been suggested to solve Constrained Optimization Problems (COPs). However, to the best of the authors' knowledge, there are rarely comparative studies on the numerical performance of those algorithms. This study considers a set of well-known benchmark problems from the test problems of the Congress on Evolutionary Computation 2006 (CEC2006).
A Comparison of Evolutionary Algorithms for Tracking Time-Varying Recursive Systems
Directory of Open Access Journals (Sweden)
White Michael S
2003-01-01
Full Text Available A comparison is made of the behaviour of some evolutionary algorithms in time-varying adaptive recursive filter systems. Simulations show that an algorithm including random immigrants outperforms a more conventional algorithm using the breeder genetic algorithm as the mutation operator when the time variation is discontinuous, but neither algorithm performs well when the time variation is rapid but smooth. To meet this deficit, a new hybrid algorithm which uses a hill climber as an additional genetic operator, applied for several steps at each generation, is introduced. A comparison is made of the effect of applying the hill climbing operator a few times to all members of the population or a larger number of times solely to the best individual; it is found that applying to the whole population yields the better results, substantially improved compared with those obtained using earlier methods.
A First Comparison of Kepler Planet Candidates in Single and Multiple Systems
Latham, David W; Quinn, Samuel N; Batalha, Natalie M; Borucki, William J; Brown, Timothy M; Bryson, Stephen T; Buchhave, Lars A; Caldwell, Douglas A; Carter, Joshua A; Christiansen, Jesse L; Ciardi, David R; Cochran, William D; Dunham, Edward W; Fabrycky, Daniel C; Ford, Eric B; Gautier, Thomas N; Gilliland, Ronald L; Holman, Matthew J; Howell, Steve B; Ibrahim, Khadeejah A; Isaacson, Howard; Basri, Gibor; Furesz, Gabor; Geary, John C; Jenkins, Jon M; Koch, David G; Lissauer, Jack J; Marcy, Geoffrey W; Quintana, Elisa V; Ragozzine, Darin; Sasselov, Dimitar D; Shporer, Avi; Steffen, Jason H; Welsh, William F; Wohler, Bill
2011-01-01
In this letter we present an overview of the rich population of systems with multiple candidate transiting planets found in the first four months of Kepler data. The census of multiples includes 115 targets that show 2 candidate planets, 45 with 3, 8 with 4, and 1 each with 5 and 6, for a total of 170 systems with 408 candidates. When compared to the 827 systems with only one candidate, the multiples account for 17 percent of the total number of systems, and a third of all the planet candidates. We compare the characteristics of candidates found in multiples with those found in singles. False positives due to eclipsing binaries are much less common for the multiples, as expected. Singles and multiples are both dominated by planets smaller than Neptune; 69 +2/-3 percent for singles and 86 +2/-5 percent for multiples. This result, that systems with multiple transiting planets are less likely to include a transiting giant planet, suggests that close-in giant planets tend to disrupt the orbital inclinations of sm...
A FIRST COMPARISON OF KEPLER PLANET CANDIDATES IN SINGLE AND MULTIPLE SYSTEMS
International Nuclear Information System (INIS)
In this Letter, we present an overview of the rich population of systems with multiple candidate transiting planets found in the first four months of Kepler data. The census of multiples includes 115 targets that show two candidate planets, 45 with three, eight with four, and one each with five and six, for a total of 170 systems with 408 candidates. When compared to the 827 systems with only one candidate, the multiples account for 17% of the total number of systems, and one-third of all the planet candidates. We compare the characteristics of candidates found in multiples with those found in singles. False positives due to eclipsing binaries are much less common for the multiples, as expected. Singles and multiples are both dominated by planets smaller than Neptune; 69 +2/-3% for singles and 86 +2/-5% for multiples. This result, that systems with multiple transiting planets are less likely to include a transiting giant planet, suggests that close-in giant planets tend to disrupt the orbital inclinations of small planets in flat systems, or maybe even prevent the formation of such systems in the first place.
An Empirical Comparison of Boosting and Bagging Algorithms
Directory of Open Access Journals (Sweden)
R. Kalaichelvi Chandrahasan
2011-11-01
Full Text Available Classification is one of the data mining techniques that analyse a given data set and induce a model for each class based on the features present in the data. Bagging and boosting are heuristic approaches to developing classification models. These techniques generate a diverse ensemble of classifiers by manipulating the training data given to a base learning algorithm. They are very successful in improving the accuracy of some algorithms on artificial and real-world datasets. We review algorithms such as AdaBoost, Bagging, ADTree, and Random Forest in conjunction with the Meta classifier and the Decision Tree classifier. We also describe a large empirical study comparing several variants. The algorithms are analyzed on accuracy, precision, error rate and execution time.
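The bootstrap-aggregation idea summarized in this abstract can be illustrated with a toy sketch (not the authors' code): each base learner is a one-dimensional decision stump trained on a bootstrap resample of the data, and predictions are combined by majority vote. All data values and the stump learner are hypothetical.

```python
import random

def train_stump(data):
    """Pick the threshold and sign on a 1-D feature that minimise training error."""
    best = None
    for thr, _ in data:
        for sign in (1, -1):
            err = sum(1 for x, y in data
                      if (1 if sign * (x - thr) > 0 else 0) != y)
            if best is None or err < best[0]:
                best = (err, thr, sign)
    _, thr, sign = best
    return lambda x: 1 if sign * (x - thr) > 0 else 0

def bagged_predict(data, x, n_models=25, seed=0):
    """Bagging: train each stump on a bootstrap resample, then majority-vote."""
    rng = random.Random(seed)
    votes = 0
    for _ in range(n_models):
        sample = [rng.choice(data) for _ in data]   # bootstrap resample
        votes += train_stump(sample)(x)
    return 1 if votes * 2 > n_models else 0

# Toy linearly separable data: class 1 for large x.
data = [(0.1, 0), (0.2, 0), (0.4, 0), (0.6, 1), (0.8, 1), (0.9, 1)]
print(bagged_predict(data, 0.05), bagged_predict(data, 0.95))
```

Boosting differs in that resampling is replaced by reweighting toward previously misclassified examples.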
Fast Quantum Search Algorithms in Protein Sequence Comparison - Quantum Biocomputing
Hollenberg, L C L
2000-01-01
Quantum search algorithms are considered in the context of protein sequence comparison in biocomputing. Given a sample protein sequence of length m (i.e. m residues), the problem considered is to find an optimal match in a large database containing N residues. Initially, Grover's quantum search algorithm is applied to a simple illustrative case - namely, where the database forms a complete set of states over the 2^m basis states of an m-qubit register, and thus is known to contain the exact sequence of interest. This example demonstrates explicitly the typical O(sqrt{N}) speedup over the classical O(N) requirements. An algorithm is then presented for the (more realistic) case where the database may contain repeat sequences, and may not necessarily contain an exact match to the sample sequence. In terms of minimizing the Hamming distance between the sample sequence and the database subsequences, the algorithm finds an optimal alignment, in O(sqrt{N}) steps, by employing an extension of Grover's algorithm, due to Boyer, Brassard,...
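The O(sqrt{N}) speedup quoted in the abstract comes from the optimal Grover iteration count, roughly (pi/4)*sqrt(N/M) for M marked items among N. A small illustrative calculation (not from the paper):

```python
import math

def grover_iterations(n_items: int, n_marked: int = 1) -> int:
    """Optimal number of Grover iterations, ~ (pi/4) * sqrt(N/M)."""
    theta = math.asin(math.sqrt(n_marked / n_items))
    return round(math.pi / (4 * theta) - 0.5)   # floor(pi / (4 theta))

# For a database of N = 2**20 entries with one marked item, only ~800 oracle
# calls are needed, versus ~N/2 expected queries classically.
print(grover_iterations(2**20))
```

For N = 4 with one marked item the formula gives a single iteration, the well-known exact-search case.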
Comparison of parameter estimation algorithms in hydrological modelling
DEFF Research Database (Denmark)
Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan
Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and can be trapped in local regions of attraction. The global SCE procedure is, in general, more effective and...
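The contrast drawn here between a gradient-based local search (PEST) and a global strategy (SCE) can be illustrated on a toy multimodal objective. The sketch below stands in for the real algorithms with plain gradient descent and a crude multistart; the objective function is hypothetical.

```python
import math

def gradient_descent(f, df, x0, lr=0.01, steps=2000):
    """Plain local search: follow the negative gradient from a single start."""
    x = x0
    for _ in range(steps):
        x -= lr * df(x)
    return x

f  = lambda x: x * x + 10 * math.sin(x)      # multimodal response surface
df = lambda x: 2 * x + 10 * math.cos(x)

# A local search from a poor start converges to a local minimum...
local = gradient_descent(f, df, 4.0)
# ...while a crude global strategy (multistart over many initial points)
# finds the basin of the global minimum.
best = min((gradient_descent(f, df, s) for s in range(-10, 11, 2)), key=f)
print(f(best) < f(local))
```

SCE is far more sophisticated than multistart (it evolves complexes of points), but the failure mode it guards against is the same one shown here.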
Advanced reconstruction algorithms for electron tomography: From comparison to combination
Energy Technology Data Exchange (ETDEWEB)
Goris, B. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Roelandts, T. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Batenburg, K.J. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1098XG Amsterdam (Netherlands); Heidari Mezerji, H. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Bals, S., E-mail: sara.bals@ua.ac.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)
2013-04-15
In this work, the simultaneous iterative reconstruction technique (SIRT), the total variation minimization (TVM) reconstruction technique and the discrete algebraic reconstruction technique (DART) for electron tomography are compared and their advantages and disadvantages are discussed. Furthermore, we describe how the result of a three-dimensional (3D) reconstruction based on TVM can provide the objective information that is needed as input for a DART reconstruction. This approach results in a tomographic reconstruction whose segmentation is carried out in an objective manner. - Highlights: ► A comparative study between different reconstruction algorithms for tomography is performed. ► Reconstruction algorithms that use prior knowledge about the specimen produce superior results. ► One reconstruction algorithm can provide the prior knowledge for a second algorithm.
New enumeration algorithm for protein structure comparison and classification
2013-01-01
Background Protein structure comparison and classification is an effective method for exploring protein structure-function relations. This problem is computationally challenging. Many different computational approaches for protein structure comparison apply the secondary structure elements (SSEs) representation of protein structures. Results We study the complexity of the protein structure comparison problem based on a mixed-graph model with respect to different computational frameworks. We d...
DURUSU, A.; NAKIR, I.; AJDER, A.; Ayaz, R.; Akca, H.; TANRIOVEN, M.
2014-01-01
Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they make the solar panels operate at the maximum power point (MPP) regardless of changes in environmental conditions. For this reason, they play an important part in increasing PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of MPPT algorithms have been proposed in the literature. The comparison of the MPPT algorithms in literature are ...
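One widely used MPPT algorithm family (not necessarily among those compared in this paper) is perturb-and-observe: keep perturbing the operating voltage in whichever direction last increased the output power. A minimal sketch on a hypothetical single-peak P-V curve:

```python
def perturb_and_observe(power_at, v0=10.0, step=0.5, iters=60):
    """P&O MPPT: perturb V; if power dropped, reverse the perturbation direction."""
    v, p, direction = v0, power_at(v0), 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = power_at(v_new)
        if p_new < p:              # power fell: we stepped past the peak
            direction = -direction
        v, p = v_new, p_new
    return v

# Toy P-V curve with a single maximum at V = 17 (hypothetical panel).
power = lambda v: 100 - (v - 17.0) ** 2
v_mpp = perturb_and_observe(power)
print(round(v_mpp, 1))   # settles within one step of the MPP voltage
```

The steady-state oscillation around the MPP, visible here, is the classic drawback that more elaborate MPPT algorithms try to reduce.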
A comparison of surface fitting algorithms for geophysical data
El Abbass, Tihama; Jallouli, C.; Albouy, Yves; Diament, M.
1990-01-01
This article presents the results of a comparison of different surface-fitting algorithms. For each of these algorithms (polynomial approximation, spline-Laplace combination, kriging, least-squares approximation, finite-element method), its suitability for different data sets and its limits of application are discussed.
Comparison of Voice Activity Detection Algorithms for VoIP
Prasad, Venkatesha R; Sangwan, Abhijeet; Jamadagni, HS; Chiranth, MC; Sah, Rahul
2002-01-01
We discuss techniques for Voice Activity Detection (VAD) for Voice over Internet Protocol (VoIP). VAD helps reduce the bandwidth requirement of a voice session, thereby using bandwidth more efficiently. In this paper, we compare the quality of speech, level of compression and computational complexity of three time-domain and three frequency-domain VAD algorithms. Implementation of time-domain algorithms is computationally simple. However, better speech quality is obtained with the frequency...
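The simplest member of the time-domain family the abstract alludes to is a frame-energy detector: flag a frame as speech when its mean energy exceeds a threshold. A minimal sketch (threshold and sample values are illustrative, not from the paper):

```python
def energy_vad(frames, threshold=0.01):
    """Time-domain VAD: a frame is speech if its mean energy exceeds a threshold."""
    return [sum(s * s for s in f) / len(f) > threshold for f in frames]

# Three 160-sample frames: near-silence, loud speech-like samples, near-silence.
silence = [0.001] * 160
speech  = [0.3, -0.4, 0.5, -0.2] * 40
print(energy_vad([silence, speech, silence]))   # [False, True, False]
```

Frames flagged as non-speech need not be transmitted, which is exactly the bandwidth saving the abstract describes; frequency-domain methods replace the energy test with spectral features.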
An Empirical Comparison of Learning Algorithms for Nonparametric Scoring
Depecker, Marine; Clémençon, Stéphan; Vayatis, Nicolas
2011-01-01
The TreeRank algorithm was recently proposed as a scoring-based method based on recursive partitioning of the input space. This tree induction algorithm builds orderings by recursively optimizing the Receiver Operating Characteristic (ROC) curve through a one-step optimization procedure called LeafRank. One of the aims of this paper is an in-depth analysis of the empirical performance of the variants of the TreeRank/LeafRank method. Numerical experiments based on both artificial and real data sets...
COMPARISON OF DIFFERENT SEGMENTATION ALGORITHMS FOR DERMOSCOPIC IMAGES
Directory of Open Access Journals (Sweden)
A.A. Haseena Thasneem
2015-05-01
Full Text Available This paper compares different algorithms for the segmentation of skin lesions in dermoscopic images. The basic segmentation algorithms compared are thresholding techniques (global and adaptive), region-based techniques (K-means, Fuzzy C-means, Expectation Maximization and Statistical Region Merging), contour models (Active Contour Model and Chan-Vese Model) and spectral clustering. Accuracy, sensitivity, specificity, border error, Hammoude distance, Hausdorff distance, MSE, PSNR and elapsed time metrics were used to evaluate the various segmentation techniques.
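Several of the listed evaluation metrics reduce to pixel counts over the predicted and ground-truth masks. A sketch for flat binary masks with toy data (not the paper's evaluation code):

```python
def seg_metrics(pred, truth):
    """Pixel-wise sensitivity, specificity and accuracy for binary masks."""
    pairs = list(zip(pred, truth))
    tp = sum(1 for p, t in pairs if p and t)            # lesion found
    tn = sum(1 for p, t in pairs if not p and not t)    # background kept
    fp = sum(1 for p, t in pairs if p and not t)        # false alarm
    fn = sum(1 for p, t in pairs if not p and t)        # lesion missed
    return tp / (tp + fn), tn / (tn + fp), (tp + tn) / len(pairs)

truth = [1, 1, 1, 1, 0, 0, 0, 0]   # ground-truth lesion mask (flattened)
pred  = [1, 1, 1, 0, 0, 0, 0, 1]   # segmentation output
print(seg_metrics(pred, truth))    # (0.75, 0.75, 0.75)
```

Boundary metrics such as the Hausdorff and Hammoude distances instead compare the contours of the two masks rather than their pixel overlap.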
Comparison of Hierarchical Agglomerative Algorithms for Clustering Medical Documents
Directory of Open Access Journals (Sweden)
Rafa E. Al-Qutaish
2012-06-01
Full Text Available The extensive amount of data stored in medical documents requires developing methods that help users find what they are looking for effectively by organizing large amounts of information into a small number of meaningful clusters. The produced clusters contain groups of objects which are more similar to each other than to the members of any other group. Thus, the aim of high-quality document clustering algorithms is to determine a set of clusters in which the inter-cluster similarity is minimized and the intra-cluster similarity is maximized. The most important feature of many clustering algorithms is that they treat the clustering problem as an optimization process, that is, maximizing or minimizing a particular clustering criterion function defined over the whole clustering solution. The only real difference between agglomerative algorithms is how they choose which clusters to merge. The main purpose of this paper is to compare different agglomerative algorithms based on the quality of the clusters produced by different hierarchical agglomerative clustering algorithms using different criterion functions, for the problem of clustering medical documents. Our experimental results showed that the agglomerative algorithm that uses I1 as its criterion function for choosing which clusters to merge produced better cluster quality than the other criterion functions in terms of entropy and purity as external measures.
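The merge loop shared by all agglomerative algorithms, where they differ only in the criterion for choosing which pair to merge, can be sketched as follows. Here the criterion is single-linkage distance on 1-D points; the paper's criterion functions (such as I1) differ, but the loop structure is the same.

```python
def agglomerate(points, n_clusters):
    """Hierarchical agglomerative clustering, single-linkage merge criterion."""
    clusters = [[p] for p in points]            # start with singletons
    while len(clusters) > n_clusters:
        # find the pair of clusters with the smallest single-link distance
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: min(abs(a - b)
                               for a in clusters[ij[0]]
                               for b in clusters[ij[1]]),
        )
        clusters[i] += clusters.pop(j)          # merge j into i
    return [sorted(c) for c in clusters]

print(agglomerate([1.0, 1.2, 5.0, 5.1, 9.0], 3))
```

Swapping the `key` function for a criterion computed over the whole clustering solution turns this into the optimization view the abstract describes.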
Performance comparison of several optimization algorithms in matched field inversion
Institute of Scientific and Technical Information of China (English)
ZOU Shixin; YANG Kunde; MA Yuanliang
2004-01-01
Optimization efficiencies and mechanisms of simulated annealing, the genetic algorithm, differential evolution and downhill simplex differential evolution are compared and analyzed. Simulated annealing and the genetic algorithm use a directed random process to search the parameter space for an optimal solution. They are able to avoid local minima, but as no gradient information is used, the search may be relatively inefficient. Differential evolution uses information on the distance and azimuth between individuals of a population to search the parameter space; the initial search is effective, but the search speed decreases quickly because the differential information between the individuals of the population vanishes. The local downhill simplex and global differential evolution methods are developed separately and then combined to produce a hybrid downhill simplex differential evolution algorithm. The hybrid algorithm is sensitive to gradients of the objective function, and its search of the parameter space is effective. These algorithms are applied to matched field inversion with synthetic data. The optimal values of the parameters, the final values of the objective function and the inversion times are presented and compared.
A COMPARISON OF CONSTRUCTIVE AND PRUNING ALGORITHMS TO DESIGN NEURAL NETWORKS
Directory of Open Access Journals (Sweden)
KAZI MD. ROKIBUL ALAM
2011-06-01
Full Text Available This paper presents a comparison between constructive and pruning algorithms for designing a Neural Network (NN). Both kinds of algorithm have advantages as well as drawbacks when designing the architecture of an NN. A constructive algorithm is computationally economical because it simply specifies a straightforward initial NN architecture, whereas the large initial NN size used by a pruning algorithm allows reasonably quick learning with reduced complexity. Two popular ideas, one from each category, are chosen here: “cascade-correlation [1]” from the constructive algorithms and “skeletonization [2]” from the pruning algorithms. They have been tested on several benchmark problems in machine learning and NNs: the cancer, credit card, heart disease, thyroid and soybean problems. The simulation results show the number of iterations during the training period and the generalization ability of the NNs designed using these algorithms for these problems.
A Comparison of Improved Artificial Bee Colony Algorithms Based on Differential Evolution
Directory of Open Access Journals (Sweden)
Jianfeng Qiu
2013-10-01
Full Text Available The Artificial Bee Colony (ABC) algorithm has been an active field of swarm-intelligence-based optimization in recent years. Inspired by the mutation strategies used in the Differential Evolution (DE) algorithm, this paper introduces three types of strategies (“rand”, “best”, and “current-to-best”) and one or two disturbance vectors into the ABC algorithm. Although individual mutation strategies from DE have been used in the ABC algorithm by some researchers on different occasions, there has been no comprehensive application and comparison of the mutation strategies used in the ABC algorithm. In this paper, these improved ABC algorithms are analyzed on a set of test functions, including the rapidity of their convergence. The results show that the improvements based on DE achieve better performance on the whole than the basic ABC algorithm.
Optimization of a statistical algorithm for objective comparison of toolmarks.
Spotts, Ryan; Chumbley, L Scott; Ekstrand, Laura; Zhang, Song; Kreiser, James
2015-03-01
Due to historical legal challenges, there is a driving force for the development of objective methods of forensic toolmark identification. This study utilizes an algorithm to separate matching and nonmatching shear cut toolmarks created using fifty sequentially manufactured pliers. Unlike previously analyzed striated screwdriver marks, shear cut marks contain discontinuous groups of striations, posing a more difficult test of algorithm applicability. The algorithm compares correlations between optical 3D toolmark topography data, producing a Wilcoxon rank sum test statistic. The relative magnitude of this metric separates the matching and nonmatching toolmarks. Results show a high degree of statistical separation between the matching and nonmatching distributions. Further separation is achieved with optimized input parameters and implementation of a "leash" preventing a previous source of outliers; however, complete statistical separation was not achieved. This paper represents further development of objective methods of toolmark identification and further validation of the assumption that toolmarks are identifiably unique. PMID:25425426
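The Wilcoxon rank sum statistic used for separation is computed directly from the ranks of one sample within the pooled data. A sketch with hypothetical correlation scores (the real study computes these from 3D topography data):

```python
def rank_sum(sample_a, sample_b):
    """Wilcoxon rank-sum statistic: sum of sample_a's ranks in the pooled data."""
    pooled = sorted(sample_a + sample_b)

    def rank(v):
        # 1-based rank; tied values receive the average of their rank range
        lo = pooled.index(v) + 1
        hi = lo + pooled.count(v) - 1
        return (lo + hi) / 2

    return sum(rank(v) for v in sample_a)

matching    = [0.91, 0.88, 0.95]   # hypothetical correlation scores, matching pairs
nonmatching = [0.12, 0.30, 0.25]   # hypothetical scores, nonmatching pairs
print(rank_sum(matching, nonmatching))   # 15.0: the maximum possible, full separation
```

When the matching scores occupy all the top ranks, as here, the statistic hits its maximum; overlap between the two distributions pulls it toward the middle of its range.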
Comparison of Algorithms for an Electronic Nose in Identifying Liquors
Institute of Scientific and Technical Information of China (English)
Zhi-biao Shi; Tao Yu; Qun Zhao; Yang Li; Yu-bin Lan
2008-01-01
When an electronic nose is used to identify different varieties of distilled liquors, the pattern recognition algorithm is chosen on the basis of experience, which lacks a guiding principle. In this research, different brands of distilled spirits were identified using pattern recognition algorithms (principal component analysis and the artificial neural network). The recognition rates of the different algorithms were compared. The recognition rate of the Back Propagation Neural Network (BPNN) is the highest. Owing to its slow convergence speed, the BPNN easily gets trapped in local minima. A chaotic BPNN was tried in order to overcome this disadvantage. The convergence speed of the chaotic BPNN is 75.5 times faster than that of the BPNN.
Comparison of Supervised and Unsupervised Learning Algorithms for Pattern Classification
Directory of Open Access Journals (Sweden)
R. Sathya
2013-02-01
Full Text Available This paper presents a comparative account of unsupervised and supervised learning models and their pattern classification evaluations as applied to the higher education scenario. Classification plays a vital role in machine-learning algorithms, and in the present study we found that, though the error back-propagation algorithm of the supervised learning model is very efficient for a number of non-linear real-time problems, the KSOM of the unsupervised learning model also offers an efficient solution and classification.
The RedGOLD cluster detection algorithm and its cluster candidate catalogue for the CFHT-LS W1
Licitra, Rossella; Mei, Simona; Raichoor, Anand; Erben, Thomas; Hildebrandt, Hendrik
2016-01-01
We present RedGOLD (Red-sequence Galaxy Overdensity cLuster Detector), a new optical/NIR galaxy cluster detection algorithm, and apply it to the CFHT-LS W1 field. RedGOLD searches for red-sequence galaxy overdensities while minimizing contamination from dusty star-forming galaxies. It imposes an Navarro-Frenk-White profile and calculates cluster detection significance and richness. We optimize these latter two parameters using both simulations and X-ray-detected cluster catalogues, and obtain a catalogue ˜80 per cent pure up to z ˜ 1, and ˜100 per cent (˜70 per cent) complete at z ≤ 0.6 (z ≲ 1) for galaxy clusters with M ≳ 1014 M⊙ at the CFHT-LS Wide depth. In the CFHT-LS W1, we detect 11 cluster candidates per deg2 out to z ˜ 1.1. When we optimize both completeness and purity, RedGOLD obtains a cluster catalogue with higher completeness and purity than other public catalogues, obtained using CFHT-LS W1 observations, for M ≳ 1014 M⊙. We use X-ray-detected cluster samples to extend the study of the X-ray temperature-optical richness relation to a lower mass threshold, and find a mass scatter at fixed richness of σlnM|λ = 0.39 ± 0.07 and σlnM|λ = 0.30 ± 0.13 for the Gozaliasl et al. and Mehrtens et al. samples. When considering similar mass ranges as previous work, we recover a smaller scatter in mass at fixed richness. We recover 93 per cent of the redMaPPer detections, and find that its richness estimates are on average ˜40-50 per cent larger than ours at z > 0.3. RedGOLD recovers X-ray cluster spectroscopic redshifts to better than 5 per cent up to z ˜ 1, and the centres to within a few tens of arcseconds.
Smail, Linda
2016-06-01
The basic task of any probabilistic inference system in Bayesian networks is computing the posterior probability distribution for a subset or subsets of random variables, given values or evidence for some other variables from the same Bayesian network. Many methods and algorithms have been developed for exact and approximate inference in Bayesian networks. This work compares two exact inference methods in Bayesian networks - Lauritzen-Spiegelhalter and the successive restrictions algorithm - from the perspective of computational efficiency. The two methods were applied for comparison to a Chest Clinic Bayesian Network. Results indicate that the successive restrictions algorithm shows more computational efficiency than the Lauritzen-Spiegelhalter algorithm.
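For the smallest possible case, a two-node network A -> B, the exact posterior that such inference algorithms compute reduces to Bayes' rule by enumeration. The probabilities below are illustrative, not taken from the Chest Clinic network:

```python
def posterior_a_given_b(p_a, p_b_given_a, p_b_given_not_a):
    """Exact inference in a two-node network A -> B, given evidence B = true."""
    joint_a     = p_a * p_b_given_a              # P(A=true,  B=true)
    joint_not_a = (1 - p_a) * p_b_given_not_a    # P(A=false, B=true)
    return joint_a / (joint_a + joint_not_a)     # normalize over the evidence

# Hypothetical numbers: P(A)=0.01, P(B|A)=0.9, P(B|~A)=0.05
print(round(posterior_a_given_b(0.01, 0.9, 0.05), 4))   # 0.1538
```

Both Lauritzen-Spiegelhalter and successive restrictions perform this same marginalize-and-normalize computation, but organize it over the network structure so that it remains tractable for many variables.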
Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches
Çelik, Ufuk; Yurtay, Nilüfer; Koç, Emine Rabia; Tepe, Nermin; Güllüoğlu, Halil; Ertaş, Mustafa
2015-01-01
The present study evaluated the diagnostic accuracy of immune system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. They are divided into four types: migraine, tension, cluster, and other primary headaches. After we took this main objective into consideration, three different neurologists were required to fill in the medical records of 850 patients into our web-based expert system hosted on our project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. The AIS are classification algorithms that are inspired by the biological immune system mechanism that involves significant and distinct capabilities. These algorithms simulate the specialties of the immune system such as discrimination, learning, and the memorizing process in order to be used for classification, optimization, or pattern recognition. According to the results, the accuracy level of the classifier used in this study reached a success continuum ranging from 95% to 99%, except for the inconvenient one that yielded 71% accuracy. PMID:26075014
Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches
Directory of Open Access Journals (Sweden)
Ufuk Çelik
2015-01-01
Full Text Available The present study evaluated the diagnostic accuracy of immune system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. They are divided into four types: migraine, tension, cluster, and other primary headaches. After we took this main objective into consideration, three different neurologists were required to fill in the medical records of 850 patients into our web-based expert system hosted on our project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. The AIS are classification algorithms that are inspired by the biological immune system mechanism that involves significant and distinct capabilities. These algorithms simulate the specialties of the immune system such as discrimination, learning, and the memorizing process in order to be used for classification, optimization, or pattern recognition. According to the results, the accuracy level of the classifier used in this study reached a success continuum ranging from 95% to 99%, except for the inconvenient one that yielded 71% accuracy.
Evaluation and Comparison of Motion Estimation Algorithms for Video Compression
Directory of Open Access Journals (Sweden)
Avinash Nayak
2013-08-01
Full Text Available Video compression has become an essential component of broadcast and entertainment media. Motion estimation and compensation techniques, which can effectively eliminate temporal redundancy between adjacent frames, have been widely applied in popular video compression coding standards such as MPEG-2 and MPEG-4. Traditional fast block matching algorithms are easily trapped in local minima, resulting in some degradation of video quality after decoding. In this paper various computing techniques are evaluated in video compression for achieving a globally optimal solution for motion estimation. Zero motion prejudgment is implemented for finding static macroblocks (MB) which do not need to perform the remaining search, thus reducing the computational cost. The Adaptive Rood Pattern Search (ARPS) motion estimation algorithm is also adapted to reduce the motion vector overhead in frame prediction. The simulation results show that the ARPS algorithm is very effective in reducing the computational overhead and achieves very good Peak Signal to Noise Ratio (PSNR) values. This method significantly reduces the computational complexity involved in frame prediction and also yields the least prediction error in all video sequences. Thus the ARPS technique is more efficient than the conventional search algorithms in video compression.
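Zero-motion prejudgment and block matching can be sketched as follows: compare the co-located block first and skip the search entirely if it already matches. The sketch uses exhaustive full search rather than ARPS, and the frame data are synthetic.

```python
def sad(block_a, block_b):
    """Sum of absolute differences between two equally sized blocks."""
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
                          for a, b in zip(ra, rb))

def match_block(block, frame, x, y, radius=2, zero_thresh=0):
    """Full-search block matching with zero-motion prejudgment."""
    size = len(block)

    def block_at(dx, dy):
        return [row[x + dx:x + dx + size] for row in frame[y + dy:y + dy + size]]

    if sad(block, block_at(0, 0)) <= zero_thresh:   # static macroblock: no search
        return (0, 0)
    return min(((dx, dy) for dx in range(-radius, radius + 1)
                         for dy in range(-radius, radius + 1)),
               key=lambda d: sad(block, block_at(*d)))

# A bright 2x2 patch that moved one pixel to the right between frames.
frame = [[0] * 8 for _ in range(8)]
for r, c in [(3, 4), (3, 5), (4, 4), (4, 5)]:
    frame[r][c] = 9
block = [[9, 9], [9, 9]]                 # the patch as seen in the previous frame
print(match_block(block, frame, 3, 3))   # (1, 0): best match one pixel right
```

Fast algorithms such as ARPS evaluate only a rood-shaped subset of these candidate displacements, which is where the computational saving, and the risk of local minima, comes from.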
Jelen, Birsen
2015-01-01
In recent years almost every newly opened government funded university in Turkey has established a music department where future music teachers are educated and piano is compulsory for every single music teacher candidate in Turkey. The aim of this research is to compare piano teaching instructors' and their students' perceptions about the current…
Hees, A; Guéna, J; Abgrall, M; Bize, S; Wolf, P
2016-08-01
We use 6 yrs of accurate hyperfine frequency comparison data of the dual rubidium and caesium cold atom fountain FO2 at LNE-SYRTE to search for a massive scalar dark matter candidate. Such a scalar field can induce harmonic variations of the fine structure constant, of the mass of fermions, and of the quantum chromodynamic mass scale, which will directly impact the rubidium/caesium hyperfine transition frequency ratio. We find no signal consistent with a scalar dark matter candidate but provide improved constraints on the coupling of the putative scalar field to standard matter. Our limits are complementary to previous results that were only sensitive to the fine structure constant and improve them by more than an order of magnitude when only a coupling to electromagnetism is assumed. PMID:27541455
Hees, A; Abgrall, M; Bize, S; Wolf, P
2016-01-01
We use six years of accurate hyperfine frequency comparison data of the dual Rubidium and Caesium cold atom fountain FO2 at LNE-SYRTE to search for a massive scalar dark matter candidate. Such a scalar field can induce harmonic variations of the fine structure constant, of the mass of fermions and of the quantum chromodynamic mass scale, which will directly impact the Rubidium/Caesium hyperfine transition frequency ratio. We find no signal consistent with a scalar dark matter candidate but provide improved constraints on the coupling of the putative scalar field to standard matter. Our limits are complementary to previous results that were only sensitive to the fine-structure constant, and improve them by more than an order of magnitude when only a coupling to electromagnetism is assumed.
Hees, A.; Guéna, J.; Abgrall, M.; Bize, S.; Wolf, P.
2016-08-01
We use 6 yrs of accurate hyperfine frequency comparison data of the dual rubidium and caesium cold atom fountain FO2 at LNE-SYRTE to search for a massive scalar dark matter candidate. Such a scalar field can induce harmonic variations of the fine structure constant, of the mass of fermions, and of the quantum chromodynamic mass scale, which will directly impact the rubidium/caesium hyperfine transition frequency ratio. We find no signal consistent with a scalar dark matter candidate but provide improved constraints on the coupling of the putative scalar field to standard matter. Our limits are complementary to previous results that were only sensitive to the fine structure constant and improve them by more than an order of magnitude when only a coupling to electromagnetism is assumed.
Parallel divide and conquer bio-sequence comparison based on Smith-Waterman algorithm
Institute of Scientific and Technical Information of China (English)
ZHANG Fa; QIAO Xiangzhen; LIU Zhiyong
2004-01-01
Tools for pair-wise bio-sequence alignment have long played a central role in computational biology. Several algorithms for bio-sequence alignment have been developed. The Smith-Waterman algorithm, based on dynamic programming, is considered the most fundamental alignment algorithm in bioinformatics. However, the existing parallel Smith-Waterman algorithm needs a large memory space, and this disadvantage limits the size of the sequences that can be handled. As the volume of biological sequence data expands rapidly, the memory requirement of the existing parallel Smith-Waterman algorithm has become a critical problem. To solve this problem, we develop a new parallel bio-sequence alignment algorithm using the strategy of divide and conquer, named the PSW-DC algorithm. In our algorithm, we first partition the query sequence into several subsequences and distribute them to the processors, then compare each subsequence with the whole subject sequence in parallel using the Smith-Waterman algorithm to obtain an interim result, and finally obtain the optimal alignment between the query sequence and the subject sequence through a special combination and extension method. The memory space required by our algorithm is reduced significantly in comparison with existing ones. We also develop a key technique of combination and extension, named the C&E method, to manipulate the interim results and obtain the final sequence alignment. We implement the new parallel bio-sequence alignment algorithm, the PSW-DC, in a cluster parallel system.
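The serial Smith-Waterman recurrence that PSW-DC runs on each subsequence can be sketched as a score-only dynamic program (scoring parameters here are illustrative; the full algorithm also traces back to recover the alignment):

```python
def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    """Smith-Waterman local alignment score via dynamic programming."""
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]   # O(m*n) score matrix
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # local alignment: scores are clamped at zero
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

print(smith_waterman("ACGT", "AGT"))   # 5: align ACGT to A-GT (one gap)
```

The O(m*n) score matrix is exactly the memory burden the abstract describes; partitioning the query sequence shrinks each processor's matrix to a slice of the whole.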
Ebtehaj, Isa; Bonakdari, Hossein
2014-01-01
The existence of sediments in wastewater greatly affects the performance of the sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. The article reviews the performance of the genetic algorithm (GA) and imperialist competitive algorithm (ICA) in minimizing the target function (mean square error of observed and predicted Froude number). To study the impact of bed load transport parameters, using four non-dimensional groups, six different models have been presented. Moreover, the roulette wheel selection method is used to select the parents. The ICA with root mean square error (RMSE) = 0.007, mean absolute percentage error (MAPE) = 3.5% show better results than GA (RMSE = 0.007, MAPE = 5.6%) for the selected model. All six models return better results than the GA. Also, the results of these two algorithms were compared with multi-layer perceptron and existing equations. PMID:25429460
Empirical Comparison of Algorithms for Network Community Detection
Leskovec, Jure; Mahoney, Michael W
2010-01-01
Detecting clusters or communities in large real-world graphs such as large social or information networks is a problem of considerable interest. In practice, one typically chooses an objective function that captures the intuition of a network cluster as set of nodes with better internal connectivity than external connectivity, and then one applies approximation algorithms or heuristics to extract sets of nodes that are related to the objective function and that "look like" good communities for the application of interest. In this paper, we explore a range of network community detection methods in order to compare them and to understand their relative performance and the systematic biases in the clusters they identify. We evaluate several common objective functions that are used to formalize the notion of a network community, and we examine several different classes of approximation algorithms that aim to optimize such objective functions. In addition, rather than simply fixing an objective and asking for an a...
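A representative objective function of the kind the abstract describes is conductance: the number of cut edges divided by the smaller of the two volumes (sums of degrees) on either side of the cut. A sketch on a toy graph:

```python
def conductance(edges, cluster):
    """Conductance of a node set: cut edges / min(volume inside, volume outside)."""
    cluster = set(cluster)
    cut = sum(1 for u, v in edges if (u in cluster) != (v in cluster))
    vol_in  = sum((u in cluster) + (v in cluster) for u, v in edges)
    vol_out = sum((u not in cluster) + (v not in cluster) for u, v in edges)
    return cut / min(vol_in, vol_out)

# Two triangles joined by a single bridge edge: a natural community cut.
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
print(conductance(edges, {0, 1, 2}))   # 1/7: low conductance = good community
```

Community detection methods of the kind surveyed then search for node sets minimizing such an objective, exactly or via heuristics.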
A benchmark for comparison of cell tracking algorithms
Maška, Martin; Ulman, Vladimír; Svoboda, David; Matula, Pavel; Matula, Petr; Ederra, Cristina; Urbiola, Ainhoa; España, Tomás; Venkatesan, Subramanian; Balak, Deepak M.W.; Karas, Pavel; Bolcková, Tereza; Štreitová, Markéta; Carthel, Craig; Coraluppi, Stefano
2014-01-01
Motivation: Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2013 Cell Tracking Challenge. In this article, we present the logistics, datasets, methods and results of the challenge and lay down the principles for future uses of this benchma...
A comparison of fitness scaling methods in evolutionary algorithms
Bertone, E.; Alfonso, Hugo; Gallard, Raúl Hector
1999-01-01
Proportional selection (PS), as a selection mechanism for mating (reproduction with emphasis), selects individuals according to their fitness. Consequently, the probability of an individual obtaining a number of offspring is directly proportional to its fitness value. This can lead to a loss of selective pressure in the final stages of the evolutionary process, degrading the search. This presentation discusses performance results of evolutionary algorithms optimizing two highly multimodal ...
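One standard remedy for this loss of selective pressure is linear fitness scaling in the style of Goldberg, which keeps the mean fitness unchanged while stretching the best fitness to a fixed multiple of the mean. A minimal sketch (our own illustration; the abstract does not include code):

```python
def linear_scale(fitnesses, c=2.0):
    """Linear scaling f' = a*f + b chosen so the mean fitness is preserved
    and the best fitness maps to c times the mean. This restores selective
    pressure late in a run, when raw fitnesses bunch together.
    Scaled values are clamped at 0, which can slightly perturb the mean."""
    n = len(fitnesses)
    avg = sum(fitnesses) / n
    best = max(fitnesses)
    if best == avg:                 # flat population: nothing to scale
        return list(fitnesses)
    a = (c - 1.0) * avg / (best - avg)
    b = avg * (1.0 - a)
    return [max(0.0, a * f + b) for f in fitnesses]

# With raw fitnesses [1, 2, 3]: mean stays 2, best becomes 2 * mean = 4.
scaled = linear_scale([1.0, 2.0, 3.0])
```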
A numeric comparison of variable selection algorithms for supervised learning
International Nuclear Information System (INIS)
Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about the data is therefore an important task in the analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets, such as imaging gamma-ray Cherenkov telescope (MAGIC) data found at the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community (http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ('Add N Remove R') implemented in SPR.
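Plain sequential forward selection, the core that the 'Add N Remove R' wrapper generalizes, can be sketched as a greedy loop over candidate features (the `score` callback, standing in for a classifier's cross-validated accuracy, is our own simplification):

```python
def forward_select(features, score, k):
    """Greedy sequential forward selection: repeatedly add the feature whose
    inclusion most improves score(subset), where higher scores are better.
    In a wrapper method, `score` would train and cross-validate a classifier
    on the candidate subset; here it is an arbitrary callable."""
    selected = []
    remaining = list(features)
    for _ in range(k):
        best = max(remaining, key=lambda f: score(selected + [f]))
        selected.append(best)
        remaining.remove(best)
    return selected

# Toy scorer: feature 1 carries most of the "information".
toy_score = lambda subset: (10 if 1 in subset else 0) + len(subset)
```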
Comparison of Adaptive Antenna Arrays Controlled by Gradient Algorithms
Directory of Open Access Journals (Sweden)
Z. Raida
1994-09-01
The paper presents the Simple Kalman Filter (SKF), which has been designed for the control of digital adaptive antenna arrays. The SKF has been applied to both the pilot-signal and the steering-vector systems. These SKF-based systems are compared with adaptive antenna arrays controlled by the classical LMS and Variable Step Size (VSS) LMS algorithms and by the pure Kalman filter. It is shown that the pure Kalman filter is the most convenient for controlling adaptive arrays because it does not require any a priori information about noise statistics and excels in a high rate of convergence and low misadjustment. Extremely high computational requirements are a drawback of this filter. Hence, if only low computational power is available in the signal processor, the SKF is recommended. The computational requirements of the SKF are of the same order as those of the classical LMS algorithm, while all the important features of the pure Kalman filter are inherited by the SKF. The paper shows that the presented Kalman filters can be regarded as special gradient algorithms, which is why they can be compared with the LMS family.
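For reference, the classical LMS update that the Kalman variants are compared against is a one-line stochastic gradient step (a minimal real-valued sketch; the arrays in the paper would use complex-valued weights):

```python
def lms_step(w, x, d, mu):
    """One LMS iteration: output y = w.x, error e = d - y, then the
    steepest-descent weight update w <- w + mu * e * x.
    `mu` is the step size; VSS-LMS would adapt it over time."""
    y = sum(wi * xi for wi, xi in zip(w, x))
    e = d - y
    return [wi + mu * e * xi for wi, xi in zip(w, x)], e

# System identification of the unknown weights [2, -1] from noiseless data.
w = [0.0, 0.0]
for i in range(200):
    x = [1.0, 0.0] if i % 2 == 0 else [0.0, 1.0]   # persistently exciting
    d = 2.0 * x[0] - 1.0 * x[1]                     # desired response
    w, e = lms_step(w, x, d, 0.1)
```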
Comparison of cluster expansion fitting algorithms for interactions at surfaces
Herder, Laura M.; Bray, Jason M.; Schneider, William F.
2015-10-01
Cluster expansions (CEs) are Ising-type interaction models that are increasingly used to model interaction and ordering phenomena at surfaces, such as the adsorbate-adsorbate interactions that control coverage-dependent adsorption or surface-vacancy interactions that control surface reconstructions. CEs are typically fit to a limited set of data derived from density functional theory (DFT) calculations. The CE fitting process involves iterative selection of DFT data points to include in a fit set and selection of interaction clusters to include in the CE. Here we compare the performance of three CE fitting algorithms-the MIT Ab-initio Phase Stability code (MAPS, the default in ATAT software), a genetic algorithm (GA), and a steepest descent (SD) algorithm-against synthetic data. The synthetic data is encoded in model Hamiltonians of varying complexity motivated by the observed behavior of atomic adsorbates on a face-centered-cubic transition metal close-packed (111) surface. We compare the performance of the leave-one-out cross-validation score against the true fitting error available from knowledge of the hidden CEs. For these systems, SD achieves lowest overall fitting and prediction error independent of the underlying system complexity. SD also most accurately predicts cluster interaction energies without ignoring or introducing extra interactions into the CE. MAPS achieves good results in fewer iterations, while the GA performs least well for these particular problems.
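The leave-one-out cross-validation score used to judge the CE fits can be sketched generically: refit with each data point held out and average the squared error on the held-out point (illustrated here with a simple least-squares line fit rather than a cluster expansion; all names are our own):

```python
def loocv_score(xs, ys, fit, predict):
    """Leave-one-out cross-validation: for each point, refit on the rest and
    accumulate the squared prediction error on the held-out point."""
    n = len(xs)
    err = 0.0
    for i in range(n):
        model = fit(xs[:i] + xs[i + 1:], ys[:i] + ys[i + 1:])
        err += (predict(model, xs[i]) - ys[i]) ** 2
    return err / n

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) \
        / sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def predict_line(model, x):
    a, b = model
    return a * x + b
```

As the abstract notes, this score is only a proxy: it can differ from the true fitting error, which is exactly what the synthetic-Hamiltonian comparison probes.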
A comparison of cohesive features in IELTS writing of Chinese candidates and IELTS examiners
Institute of Scientific and Technical Information of China (English)
刘可
2012-01-01
This study aims at investigating cohesive ties applied in IELTS written texts produced by Chinese candidates and IELTS examiners, uncovering the differences in the use of cohesive features between the two groups, and analyzing whether the employment of cohesive ties is a possible problem in the Chinese candidates’ writing. Six written texts are analyzed in the study: three IELTS essays by Chinese candidates and three by IELTS examiners. The findings show that there exist differences in the use of cohesive devices between the two groups. Compared to the IELTS examiners’ writing, the Chinese candidates employed excessive conjunctions, with relatively fewer comparative and demonstrative reference ties used in their texts. Additionally, it appears that overusing repetition ties constitutes a potential problem in the candidates’ writing. Implications and suggestions about raising learners’ awareness and helping them use cohesive devices effectively are discussed.
Direct Imaging of Extra-solar Planets - Homogeneous Comparison of Detected Planets and Candidates
Neuhäuser, Ralph; Schmidt, Tobias
2012-01-01
Searching the literature, we found 25 stars with directly imaged planets and candidates. We gathered photometric and spectral information for all these objects to derive their luminosities in a homogeneous way, taking a bolometric correction into account. Using theoretical evolutionary models, one can then estimate the mass from luminosity, temperature, and age. According to our mass estimates, all of them can have a mass below 25 Jupiter masses, so they can be considered planets.
An Adaptive Algorithm for Pairwise Comparison-based Preference Measurement
DEFF Research Database (Denmark)
Meissner, Martin; Decker, Reinhold; Scholz, Sören W.
2011-01-01
The Pairwise Comparison‐based Preference Measurement (PCPM) approach has been proposed for products featuring a large number of attributes. In the PCPM framework, a static two‐cyclic design is used to reduce the number of pairwise comparisons. However, adaptive questioning routines that maximize ...
Comparison of four Adaboost algorithm based artificial neural networks in wind speed predictions
International Nuclear Information System (INIS)
Highlights: • Four hybrid algorithms are proposed for the wind speed decomposition. • The Adaboost algorithm is adopted to provide a hybrid training framework. • MLP neural networks are built to do the forecasting computation. • Four important network training algorithms are included in the MLP networks. • All the proposed hybrid algorithms are suitable for wind speed predictions. - Abstract: The technology of wind speed prediction is important to guarantee the safety of wind power utilization. In this paper, four different hybrid methods are proposed for high-precision multi-step wind speed predictions based on the Adaboost (Adaptive Boosting) algorithm and MLP (Multilayer Perceptron) neural networks. In the hybrid Adaboost–MLP forecasting architecture, four important algorithms are adopted for the training and modeling of the MLP neural networks: the GD-ALR-BP, GDM-ALR-BP, CG-BP-FR and BFGS algorithms. The aim of the study is to investigate the improvement in forecasting performance of the MLP neural networks brought by the Adaboost algorithm’s optimization under the various training algorithms. The hybrid models in the performance comparison include Adaboost–GD-ALR-BP–MLP, Adaboost–GDM-ALR-BP–MLP, Adaboost–CG-BP-FR–MLP, Adaboost–BFGS–MLP, GD-ALR-BP–MLP, GDM-ALR-BP–MLP, CG-BP-FR–MLP and BFGS–MLP. The experimental results show that: (1) the proposed hybrid Adaboost–MLP forecasting architecture is effective for wind speed predictions; (2) the Adaboost algorithm has promoted the forecasting performance of the MLP neural networks considerably; (3) among the proposed Adaboost–MLP forecasting models, the Adaboost–CG-BP-FR–MLP model has the best performance; and (4) the improvement achieved by the Adaboost algorithm decreases step by step through the training-algorithm sequence GD-ALR-BP, GDM-ALR-BP, CG-BP-FR and BFGS.
Comparison with reconstruction algorithms in magnetic induction tomography.
Han, Min; Cheng, Xiaolin; Xue, Yuyan
2016-05-01
Magnetic induction tomography (MIT) is an imaging technology that uses the principle of electromagnetic detection to measure the conductivity distribution. In this research, we make an effort to improve the quality of image reconstruction, mainly via analysis of MIT image reconstruction, including solving the forward problem and the image reconstruction itself. With respect to the forward problem, the variational finite element method is adopted. We transform the solution of a nonlinear partial differential equation into linear equations by using field subdivision and appropriate interpolation functions, so that the voltage data of the sensing coils can be calculated. With respect to the image reconstruction, a method of modifying the iterative Newton-Raphson (NR) algorithm is presented in order to improve the quality of the image. In the iterative NR, a weighting matrix and L1-norm regularization are introduced to overcome the drawbacks of large estimation errors and poor stability of the reconstructed image. On the other hand, within the incomplete-data framework of the expectation maximization (EM) algorithm, the image reconstruction can be converted to an EM problem through the likelihood function, improving the under-determined problem. In the EM, missing data are introduced and the measurement data and the sensitivity matrix are compensated to overcome the drawback that the number of voltage measurements is far smaller than the number of unknowns. In addition to the two aspects above, image segmentation is also used to make the lesion representation more flexible and adaptive to patients' real conditions, which provides a theoretical reference for applying the MIT technique clinically. The results show that solving the forward problem with the variational finite element method can provide the measurement voltage data for image reconstruction, and that the improved iterative NR method and the EM algorithm can enhance the quality of the reconstructed image.
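The stabilising effect of adding a regularization term to an iterative Newton-type update can be illustrated on a toy one-parameter fit: a Tikhonov-damped Gauss-Newton step. This is our own illustration with assumed names, not the paper's MIT reconstruction code:

```python
import math

def gauss_newton_1p(ts, ys, k0, lam=1e-6, iters=50):
    """Regularized Gauss-Newton for one parameter k in the model
    y = exp(-k * t): the update is dk = J^T r / (J^T J + lam), where
    J is the model Jacobian and r the residual. The `lam` term damps
    the step when J^T J is small, the same stabilising role that
    regularization plays in the iterative NR reconstruction."""
    k = k0
    for _ in range(iters):
        r = [y - math.exp(-k * t) for t, y in zip(ts, ys)]   # residuals
        J = [-t * math.exp(-k * t) for t in ts]              # d(model)/dk
        jtj = sum(j * j for j in J)
        jtr = sum(j * ri for j, ri in zip(J, r))
        k += jtr / (jtj + lam)
    return k

# Recover k = 0.5 from noiseless samples, starting far away at k = 0.1.
ts = [0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.exp(-0.5 * t) for t in ts]
```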
Direct sequential simulation with histogram reproduction: A comparison of algorithms
Robertson, Robyn K.; Mueller, Ute A.; Bloom, Lynette M.
2006-04-01
Sequential simulation is a widely used technique applied in geostatistics to generate realisations that reproduce properties such as the mean, variance and semivariogram. Sequential Gaussian simulation requires the original variable to be transformed to a standard normal distribution before implementing variography, kriging and simulation procedures. Direct sequential simulation allows one to perform the simulation using the original variable rather than in normal score space. The shape of the local probability distribution from which simulated values are drawn is generally unknown and this results in direct simulation not being able to guarantee reproduction of the target histogram; only the Gaussian distribution ensures reproduction of the target distribution, and most geostatistical data sets are not normally distributed. This problem can be overcome by defining the shape of the local probability distribution through the use of constrained optimisation algorithms or by using the target normal-score transformation. We investigate two non-parametric approaches based on the minimisation of an objective function subject to a set of linear constraints, and an alternative approach that creates a lookup table using Gaussian transformation. These approaches allow the variography, kriging and simulation to be performed using original data values and result in the reproduction of both the histogram and semivariogram, within statistical fluctuations. The programs for the algorithms are written in Fortran 90 and follow the GSLIB format. Routines for constrained optimisation have been incorporated.
Absorption, refraction and scattering in analyzer-based imaging: comparison of different algorithms.
Diemoz, P. C.; Coan, P.; Glaser, C; Bravin, A.
2010-01-01
Many mathematical methods have been so far proposed in order to separate absorption, refraction and ultra-small angle scattering information in phase-contrast analyzer-based images. These algorithms all combine a given number of images acquired at different positions of the crystal analyzer along its rocking curve. In this paper a comprehensive quantitative comparison between five of the most widely used phase extraction algorithms based on the geometrical optics approximation is presented: t...
A COMPARISON BETWEEN TWO ALGORITHMS FOR THE RETRIEVAL OF SOIL MOISTURE USING AMSR-E DATA
Simonetta Paloscia; Emanuele Santi; Simone Pettinato; Iliana Mladenova; Tom Jackson; Michael Cosh
2015-01-01
A comparison between two algorithms for estimating soil moisture with microwave satellite data was carried out by using the datasets collected on the four Agricultural Research Service (ARS) watershed sites in the US from 2002 to 2009. These sites collectively represent a wide range of ground conditions and precipitation regimes (from natural to agricultural surfaces and from desert to humid regions) and provide long-term in-situ data. One of the algorithms is the artificial neural network-ba...
Comparison of Greedy Algorithms for Decision Tree Optimization
Alkhalid, Abdulaziz
2013-01-01
This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values of average depth, depth, number of nodes, number of terminal nodes, and number of nonterminal nodes of decision trees. We compare average depth, depth, number of nodes, number of terminal nodes and number of nonterminal nodes of constructed trees with minimum values of the considered parameters obtained based on a dynamic programming approach. We report experiments performed on data sets from UCI ML Repository and randomly generated binary decision tables. As a result, for depth, average depth, and number of nodes we propose a number of good heuristics. © Springer-Verlag Berlin Heidelberg 2013.
Multi-pattern string matching algorithms comparison for intrusion detection system
Hasan, Awsan A.; Rashid, Nur'Aini Abdul; Abdulrazzaq, Atheer A.
2014-12-01
Computer networks are developing exponentially and running at high speeds. With the increasing number of Internet users, computers have become the preferred target for complex attacks that require complex analyses to be detected. The intrusion detection system (IDS) has become an important part of any modern network, created to protect the network from attacks. The IDS relies on string matching algorithms to identify network attacks, but these algorithms consume a considerable amount of IDS processing time, thereby slowing down the IDS. A new algorithm that can overcome this weakness needs to be developed: improving the multi-pattern matching algorithm ensures that an IDS can work properly and that this limitation can be overcome. In this paper, we compare our three multi-pattern matching algorithms, MP-KR, MPH-QS and MPH-BMH, with their corresponding original algorithms KR, QS and BMH, respectively. The experiments show that MPH-QS performs best among the proposed algorithms, followed by MPH-BMH, with MP-KR the slowest. MPH-QS detects a large number of signature patterns in a short time compared to the other two algorithms. This finding suggests that the multi-pattern matching algorithms are more efficient in high-speed networks.
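The BMH (Boyer-Moore-Horspool) family that MPH-BMH builds on skips ahead using a bad-character shift table; a single-pattern sketch (our own illustration, not the paper's multi-pattern code):

```python
def horspool(pattern, text):
    """Boyer-Moore-Horspool search: precompute, for each character in the
    pattern (except its last), how far the window may slide when that
    character is the last one in the current window; unknown characters
    allow a full shift of len(pattern). Returns the index of the first
    match, or -1 if the pattern does not occur."""
    m, n = len(pattern), len(text)
    if m == 0:
        return 0
    if m > n:
        return -1
    shift = {c: m - 1 - i for i, c in enumerate(pattern[:-1])}
    i = 0
    while i <= n - m:
        if text[i:i + m] == pattern:
            return i
        i += shift.get(text[i + m - 1], m)   # slide by the bad-character rule
    return -1
```

A multi-pattern variant such as MPH-BMH would, roughly, combine such shift tables with hashing over the pattern set.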
International Nuclear Information System (INIS)
We present a new quasi-stellar object (QSO) selection algorithm using a Support Vector Machine, a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars, and microlensing events using 58 known QSOs, 1629 variable stars, and 4288 non-variables in the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ∼80% of known QSOs with a 25% false-positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) data set, which consists of 40 million light curves, and found 1620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false-positive rate, we crossmatched the candidates with astronomical catalogs including the Spitzer Surveying the Agents of a Galaxy's Evolution LMC catalog and a few X-ray catalogs. The results further suggest that the majority of the candidates, more than 70%, are QSOs.
Effective Comparison and Evaluation of DES and Rijndael Algorithm (AES)
Directory of Open Access Journals (Sweden)
N. Penchalaiah
2010-08-01
This paper discusses the effective coding of the Rijndael algorithm, the Advanced Encryption Standard (AES), in the hardware description language Verilog. In this work we analyze the structure and design of the new AES, following three criteria: (a) resistance against all known attacks; (b) speed and code compactness on a wide range of platforms; and (c) design simplicity; as well as its similarities and dissimilarities with other symmetric ciphers. On the other side, the principal advantages of the new AES with respect to DES, as well as its limitations, are investigated. Thus, for example, the fact that the new cipher and its inverse use different components, which practically eliminates the possibility of weak and semi-weak keys, as exist for DES, and the non-linearity of the key expansion, which practically eliminates the possibility of equivalent keys, are two of the principal advantages of the new cipher. Finally, the implementation aspects of the Rijndael cipher and its inverse are treated. Although Rijndael is well suited to be implemented efficiently on a wide range of processors and in dedicated hardware, we have concentrated our study on 8-bit processors, typical of current smart cards, and on 32-bit processors, typical of PCs.
Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison
Directory of Open Access Journals (Sweden)
Olympia Roeva
2005-12-01
In this paper the problem of parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal, so traditional (gradient-based) local optimization methods fail to arrive at satisfactory solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms have proved to be very suitable for the optimization of highly non-linear problems with many variables. Genetic algorithms can guarantee global optimality and robustness. These facts make them advantageous for parameter identification of fermentation models. A comparison between simple, modified and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm. The considered algorithms converged to very similar cost values, but the modified algorithm is several times faster than the other two.
Ridge extraction algorithms for one-dimensional continuous wavelet transform: a comparison
Energy Technology Data Exchange (ETDEWEB)
Abid, A Z; Gdeisat, M A; Burton, D R; Lalor, M J [General Engineering Research Institute (GERI), Liverpool John Moores University, Liverpool L3 3AF (United Kingdom)
2007-07-15
This paper compares three different algorithms that are used to detect the phase of a fringe pattern from the ridge of its wavelet transform. A Morlet wavelet is adapted for the continuous wavelet transform of the fringe pattern. A numerical simulation is used to perform this comparison.
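The ridge idea itself is simple to sketch: at each translation, pick the scale where the magnitude of the continuous wavelet transform peaks. A real-valued Morlet and brute-force sums are our own simplification; the paper's algorithms are refinements of this:

```python
import math

def morlet(x, w0=5.0):
    """Real-valued Morlet mother wavelet with centre frequency w0."""
    return math.exp(-0.5 * x * x) * math.cos(w0 * x)

def ridge_scale(signal, b, scales, w0=5.0):
    """Ridge of the CWT at translation b: the scale a maximising |W(a, b)|,
    with W(a, b) = sum_t f(t) * psi((t - b) / a) / sqrt(a).
    For a pure tone of angular frequency w, the ridge sits near a = w0 / w."""
    best_a, best_mag = None, -1.0
    n = len(signal)
    for a in scales:
        half = int(4 * a)                       # Gaussian support ~ [-4a, 4a]
        w = 0.0
        for t in range(max(0, b - half), min(n, b + half + 1)):
            w += signal[t] * morlet((t - b) / a, w0) / math.sqrt(a)
        if abs(w) > best_mag:
            best_a, best_mag = a, abs(w)
    return best_a

# A tone cos(0.5 t) should put the ridge near a = 5 / 0.5 = 10.
sig = [math.cos(0.5 * t) for t in range(256)]
scales = [2.0 + 0.5 * k for k in range(37)]
```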
Tang, Jie; Nett, Brian E.; Chen, Guang-Hong
2009-10-01
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
Gallenne, A.; Mérand, A.; Kervella, P; Monnier, J. D.; Schaefer, G. H.; Baron, F; Breitfelder, J.; Bouquin, J. B. Le; Roettenbacher, R. M.; Gieren, W.; Pietrzynski, G.; McAlister, H.; Brummelaar, T. ten; Sturmann, J.; Sturmann, L.
2015-01-01
Long-baseline interferometry is an important technique to spatially resolve binary or multiple systems in close orbits. By combining several telescopes together and spectrally dispersing the light, it is possible to detect faint components around bright stars. Aims. We provide a rigorous and detailed method to search for high-contrast companions around stars, determine the detection level, and estimate the dynamic range from interferometric observations. We developed the code CANDID (Companio...
A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller
Energy Technology Data Exchange (ETDEWEB)
Tapp, P.A.
1992-04-01
A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms is discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs.
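The loop that all three STPI schemes wrap is a plain discrete PI controller; a velocity-form sketch with fixed gains (a self-tuning scheme would adjust `kp` and `ki` online; class and parameter names are our own):

```python
class PIController:
    """Discrete velocity-form PI controller:
    du = kp * (e - e_prev) + ki * dt * e.
    The velocity form accumulates the control signal itself, which avoids
    integral windup bookkeeping on controller-mode switches."""
    def __init__(self, kp, ki, dt):
        self.kp, self.ki, self.dt = kp, ki, dt
        self.u, self.e_prev = 0.0, 0.0

    def step(self, setpoint, measurement):
        e = setpoint - measurement
        self.u += self.kp * (e - self.e_prev) + self.ki * self.dt * e
        self.e_prev = e
        return self.u

# Close the loop on a first-order plant dx/dt = (u - x) / tau with tau = 1.
pi = PIController(kp=2.0, ki=1.0, dt=0.05)
x = 0.0
for _ in range(400):                      # 20 s of simulated time
    x += 0.05 * (pi.step(1.0, x) - x)     # integral action removes offset
```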
A benchmark comparison of Monte Carlo particle transport algorithms for binary stochastic mixtures
International Nuclear Information System (INIS)
We numerically investigate the accuracy of two Monte Carlo algorithms originally proposed by Zimmerman and Zimmerman and Adams for particle transport through binary stochastic mixtures. We assess the accuracy of these algorithms using a standard suite of planar geometry incident angular flux benchmark problems and a new suite of interior source benchmark problems. In addition to comparisons of the ensemble-averaged leakage values, we compare the ensemble-averaged material scalar flux distributions. Both Monte Carlo transport algorithms robustly produce physically realistic scalar flux distributions for the benchmark transport problems examined. The base Monte Carlo algorithm reproduces the standard Levermore-Pomraning model results. The improved Monte Carlo algorithm generally produces significantly more accurate leakage values and also significantly more accurate material scalar flux distributions. We also present deterministic atomic mix solutions of the benchmark problems for comparison with the benchmark and the Monte Carlo solutions. Both Monte Carlo algorithms are generally significantly more accurate than the atomic mix approximation for the benchmark suites examined.
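The base algorithm's on-the-fly material sampling can be sketched for a 1-D slab of pure absorbers: material chord lengths are drawn from exponentials with the materials' mean chord lengths, and a history ends at its first collision. This is a minimal illustration with assumed function and parameter names, not the benchmark code:

```python
import random

def transmission_binary_mixture(n_hist, slab_len, lam, sigma, seed=1):
    """Chord-sampling Monte Carlo transmission estimate through a 1-D binary
    Markovian mixture. lam[m] is the mean chord length of material m, and
    sigma[m] its (positive) total cross section; pure absorbers are assumed,
    so any collision terminates the history."""
    rng = random.Random(seed)
    transmitted = 0
    for _ in range(n_hist):
        x = 0.0
        # Start material from the stationary mixing probabilities.
        m = 0 if rng.random() < lam[0] / (lam[0] + lam[1]) else 1
        chord = rng.expovariate(1.0 / lam[m])     # distance to next interface
        while True:
            d_coll = rng.expovariate(sigma[m])    # distance to next collision
            if slab_len - x <= min(chord, d_coll):
                transmitted += 1                  # escapes the slab
                break
            if d_coll < chord:
                break                             # collision: absorbed
            x += chord                            # crosses an interface
            m = 1 - m                             # switch material
            chord = rng.expovariate(1.0 / lam[m])
    return transmitted / n_hist
```

With seeded runs, a thinner slab should transmit a larger fraction of histories.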
A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller
International Nuclear Information System (INIS)
A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms are discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs.
A Damage Resistance Comparison Between Candidate Polymer Matrix Composite Feedline Materials
Nettles, A. T
2000-01-01
As part of NASA's focused technology programs for future reusable launch vehicles, a task is underway to study the feasibility of using polymer matrix composite feedlines instead of metal ones on propulsion systems. This is desirable to reduce weight and manufacturing costs. The task consists of comparing several prototype composite feedlines made by various methods. These methods are electron-beam curing, standard hand lay-up and autoclave cure, solvent-assisted resin transfer molding, and thermoplastic tape laying. One of the critical technology drivers for composite components is resistance to foreign object damage. This paper presents results of an experimental study of the damage resistance of the candidate materials from which the prototype feedlines are manufactured. The materials examined all have a 5-harness weave of IM7 as the fiber constituent (except for the thermoplastic, which is unidirectional tape laid up in a bidirectional configuration). The resins tested were 977-6, PR 520, SE-SA-1, RS-E3 (e-beam curable), Cycom 823 and PEEK. The results showed that the 977-6 and PEEK were the most damage resistant in all tested cases.
Energy Technology Data Exchange (ETDEWEB)
Carroll, Mark C
2014-09-01
High-purity graphite is the core structural material of choice in the Very High Temperature Reactor (VHTR) design, a graphite-moderated, helium-cooled configuration that is capable of producing thermal energy for power generation as well as process heat for industrial applications that require temperatures higher than the outlet temperatures of present nuclear reactors. The Baseline Graphite Characterization Program is endeavoring to minimize the conservative estimates of as-manufactured mechanical and physical properties in nuclear-grade graphites by providing comprehensive data that captures the level of variation in measured values. In addition to providing a thorough comparison between these values in different graphite grades, the program is also carefully tracking individual specimen source, position, and orientation information in order to provide comparisons both in specific properties and in the associated variability between different lots, different billets, and different positions from within a single billet. This report is a preliminary comparison between each of the grades of graphite that are considered “candidate” grades from four major international graphite producers. These particular grades (NBG-18, NBG-17, PCEA, IG-110, and 2114) are the major focus of the evaluations presently underway on irradiated graphite properties through the series of Advanced Graphite Creep (AGC) experiments. NBG-18, a medium-grain pitch coke graphite from SGL from which billets are formed via vibration molding, was the favored structural material in the pebble-bed configuration. NBG-17 graphite from SGL is essentially NBG-18 with the grain size reduced by a factor of two. PCEA, petroleum coke graphite from GrafTech with a similar grain size to NBG-17, is formed via an extrusion process and was initially considered the favored grade for the prismatic layout. IG-110 and 2114, from Toyo Tanso and Mersen (formerly Carbone Lorraine), respectively, are fine-grain grades
International Nuclear Information System (INIS)
Development of attenuated mutants for use as vaccines is in progress for other viruses, including influenza, rotavirus, varicella-zoster, cytomegalovirus, and hepatitis-A virus (HAV). Attenuated viruses may be derived from naturally occurring mutants that infect human or nonhuman hosts. Alternatively, attenuated mutants may be generated by passage of wild-type virus in cell culture. Production of attenuated viruses in cell culture is a laborious and empiric process. Despite previous empiric successes, understanding the molecular basis for attenuation of vaccine viruses could facilitate future development and use of live-virus vaccines. Comparison of the complete nucleotide sequences of wild-type (virulent) and vaccine (attenuated) viruses has been reported for polioviruses and yellow fever virus. Here, the authors compare the nucleotide sequence of wild-type HAV HM-175 with that of a candidate vaccine derivative
VennPainter: A Tool for the Comparison and Identification of Candidate Genes Based on Venn Diagrams.
Directory of Open Access Journals (Sweden)
Guoliang Lin
Full Text Available VennPainter is a program for depicting unique and shared sets of gene lists and generating Venn diagrams, built with the Qt C++ framework. The software produces Classic Venn, Edwards' Venn and Nested Venn diagrams and allows for eight sets in graph mode and 31 sets in data-processing mode only. In comparison, previous programs produce Classic Venn and Edwards' Venn diagrams and allow for a maximum of six sets. The software incorporates user-friendly features and works in Windows, Linux and Mac OS. Its graphical interface does not require a user to have programming skills. Users can modify diagram content for up to eight datasets because of the Scalable Vector Graphics output. VennPainter can provide output results in vertical, horizontal and matrix formats, which facilitates sharing datasets as required for further identification of candidate genes. Users can obtain gene lists from shared sets by clicking the numbers on the diagram. Thus, VennPainter is an easy-to-use, highly efficient, cross-platform and powerful program that provides a more comprehensive tool for identifying candidate genes and visualizing the relationships among genes or gene families in comparative analysis.
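The exclusive regions such a diagram displays reduce to elementary set algebra. A minimal sketch (plain Python, not VennPainter's Qt/C++ implementation; the gene identifiers are invented for illustration) that computes the member list of every exclusive region for a small number of sets:

```python
from itertools import combinations

def venn_regions(named_sets):
    """Return, for every non-empty combination of set names, the members
    exclusive to exactly that combination (one Venn-diagram region each)."""
    names = list(named_sets)
    regions = {}
    for r in range(1, len(names) + 1):
        for combo in combinations(names, r):
            inside = set.intersection(*(named_sets[n] for n in combo))
            rest = [named_sets[n] for n in names if n not in combo]
            outside = set().union(*rest)  # empty union when combo covers all sets
            regions[combo] = inside - outside
    return regions

# invented gene identifiers for illustration
gene_sets = {"A": {"g1", "g2", "g3"}, "B": {"g2", "g3", "g4"}, "C": {"g3", "g5"}}
regions = venn_regions(gene_sets)
```

For eight input sets this enumerates 255 exclusive regions, which matches the counts an eight-set Venn tool must place on the figure.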
VennPainter: A Tool for the Comparison and Identification of Candidate Genes Based on Venn Diagrams.
Lin, Guoliang; Chai, Jing; Yuan, Shuo; Mai, Chao; Cai, Li; Murphy, Robert W; Zhou, Wei; Luo, Jing
2016-01-01
VennPainter is a program for depicting unique and shared sets of gene lists and generating Venn diagrams, built with the Qt C++ framework. The software produces Classic Venn, Edwards' Venn and Nested Venn diagrams and allows for eight sets in graph mode and 31 sets in data-processing mode only. In comparison, previous programs produce Classic Venn and Edwards' Venn diagrams and allow for a maximum of six sets. The software incorporates user-friendly features and works in Windows, Linux and Mac OS. Its graphical interface does not require a user to have programming skills. Users can modify diagram content for up to eight datasets because of the Scalable Vector Graphics output. VennPainter can provide output results in vertical, horizontal and matrix formats, which facilitates sharing datasets as required for further identification of candidate genes. Users can obtain gene lists from shared sets by clicking the numbers on the diagram. Thus, VennPainter is an easy-to-use, highly efficient, cross-platform and powerful program that provides a more comprehensive tool for identifying candidate genes and visualizing the relationships among genes or gene families in comparative analysis. PMID:27120465
A study and implementation of algorithm for automatic ECT result comparison
International Nuclear Information System (INIS)
An automatic ECT result comparison algorithm was developed and implemented in a computer language to remove the human error inherent in manually comparing large amounts of data. The structures of the two ECT programs (Eddy net and ECT IDS), each of which has a unique file format, were analyzed so that their files could be opened and the data loaded into PC memory. The comparison algorithm was defined graphically for easy conversion to a PC programming language. The automatic comparison program was written in the C language, which is suitable for future code maintenance, supports a well-organized program structure, and offers fast development potential. The program can export results to MS Excel files, which is useful for further analysis in external software, and provides intuitive, user-friendly visualization of results through color mapping, which helps analysts work efficiently.
Performance Comparison of Reconstruction Algorithms in Discrete Blind Multi-Coset Sampling
DEFF Research Database (Denmark)
Grigoryan, Ruben; Arildsen, Thomas; Tandur, Deepaknath; Larsen, Torben
This paper investigates the performance of different reconstruction algorithms in discrete blind multi-coset sampling. The multi-coset scheme is a promising compressed sensing architecture that can replace traditional Nyquist-rate sampling in applications with multi-band frequency-sparse signals. The performance of the existing compressed sensing reconstruction algorithms has not yet been investigated for discrete multi-coset sampling. We compare the following algorithms: orthogonal matching pursuit, multiple signal classification, subspace-augmented multiple signal classification, focal under-determined system solver and basis pursuit denoising. The comparison is performed via numerical simulations for different sampling conditions. According to the simulations, the focal under-determined system solver outperforms all other algorithms for signals with low signal-to-noise ratio. In other...
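Of the algorithms compared, orthogonal matching pursuit is the simplest to sketch: greedily pick the dictionary column most correlated with the residual, then refit by least squares on the accumulated support. A generic NumPy sketch for sparse recovery (a random Gaussian matrix stands in for the multi-coset measurement operator of the paper; dimensions and coefficients are invented):

```python
import numpy as np

def omp(A, y, k):
    """Orthogonal matching pursuit: recover a k-sparse x from y = A @ x."""
    residual = y.astype(float)
    support = []
    for _ in range(k):
        # column most correlated with the current residual
        j = int(np.argmax(np.abs(A.T @ residual)))
        if j not in support:
            support.append(j)
        # least-squares refit on the support, then update the residual
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x = np.zeros(A.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((40, 60))
A /= np.linalg.norm(A, axis=0)          # unit-norm dictionary columns
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [3.0, -2.0, 1.5]  # 3-sparse ground truth
x_hat = omp(A, A @ x_true, k=3)
```

With 40 noiseless measurements of a 60-dimensional 3-sparse vector, OMP recovers the support and coefficients essentially exactly.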
Lu, Jing; Chen, Lei; Yin, Jun; Huang, Tao; Bi, Yi; Kong, Xiangyin; Zheng, Mingyue; Cai, Yu-Dong
2016-04-01
Lung cancer, characterized by uncontrolled cell growth in the lung tissue, is the leading cause of global cancer deaths. To date, effective treatments for this disease remain limited. Many synthetic compounds have emerged with the advancement of combinatorial chemistry. Identifying effective lung cancer candidate drug compounds among them is a great challenge. Thus, it is necessary to build effective computational methods that can assist us in selecting potential lung cancer drug compounds. In this study, a computational method was proposed to tackle this problem. Chemical-chemical interactions and chemical-protein interactions were utilized to select candidate drug compounds that have close associations with approved lung cancer drugs and lung cancer-related genes. A permutation test and the K-means clustering algorithm were employed to exclude candidate drugs with low likelihood of treating lung cancer. The final analysis suggests that the remaining drug compounds have potential anti-lung cancer activities and most of them are structurally dissimilar to approved lung cancer drugs. PMID:26849843
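The screening step pairs a permutation test with clustering. A generic two-sample permutation test on the difference of group means conveys the idea (the association scores below are invented for illustration, not the paper's interaction data):

```python
import random

def permutation_test(group_a, group_b, n_perm=5000, seed=1):
    """Two-sample permutation test: p-value for the observed difference of
    means under random relabeling of the pooled observations."""
    rng = random.Random(seed)
    observed = sum(group_a) / len(group_a) - sum(group_b) / len(group_b)
    pooled = list(group_a) + list(group_b)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= abs(observed):
            hits += 1
    return (hits + 1) / (n_perm + 1)  # add-one to avoid a zero p-value

# clearly separated score groups should give a small p-value
p = permutation_test([0.9, 0.8, 0.85, 0.95], [0.1, 0.2, 0.15, 0.05])
```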
Directory of Open Access Journals (Sweden)
B. Y. Volochiy
2014-12-01
Full Text Available Introduction. Providing the necessary efficiency indexes of a radioelectronic complex system through the design of its behavior algorithm is a task of current interest. Several methods are used to solve this task, and an intercomparison of them is required. Main part. For the behavior algorithm of a radioelectronic complex system, four mathematical models were built by two known methods (the space-of-states method and the algorithmic algebras method) and the new scheme-of-paths method. A scheme of paths is a compact representation of the radioelectronic complex system's behavior, and it is formed easily and directly from the flowchart of the behavior algorithm. Efficiency indexes of the tested behavior algorithm, namely the probability and the mean time of successful performance, were obtained, and an intercomparison of the estimated results was carried out. Conclusion. The model of the behavior algorithm constructed with the scheme-of-paths method gives commensurate values of the efficiency indexes in comparison with the mathematical models of the same behavior algorithm obtained by the space-of-states and algorithmic algebras methods.
Directory of Open Access Journals (Sweden)
Devesh Batra
2014-11-01
Full Text Available The Internet paved the way for information sharing all over the world decades ago, and its popularity for distribution of data has spread like wildfire ever since. Data in the form of images, sounds, animations and videos is gaining users' preference over plain text all across the globe. Despite unprecedented progress in the fields of data storage, computing speed and data transmission speed, the demands of available data and its size (due to the increase in both quality and quantity) continue to overpower the supply of resources. One of the factors determining this balance is how uncompressed data is compressed before being sent across the network. This paper compares the two most widely used training algorithms for multilayer perceptron (MLP) image compression: the Levenberg-Marquardt algorithm and the Scaled Conjugate Gradient algorithm. We test the performance of the two training algorithms by compressing the standard test image (Lena or Lenna) in terms of accuracy and speed. Based on our results, we conclude that both algorithms were comparable in terms of speed and accuracy. However, the Levenberg-Marquardt algorithm showed slightly better performance in terms of accuracy (as found in the average training accuracy and mean squared error), whereas the Scaled Conjugate Gradient algorithm fared better in terms of speed (as found in the average training iteration) on a simple MLP structure (2 hidden layers).
Comparison of two algorithms in the automatic segmentation of blood vessels in fundus images
LeAnder, Robert; Chowdary, Myneni Sushma; Mokkapati, Swapnasri; Umbaugh, Scott E.
2008-03-01
Timely screening and treatment are critical to saving the sight of patients with diabetes. Lack of screening, as well as a shortage of ophthalmologists, contributes to approximately 8,000 cases per year of people who lose their sight to diabetic retinopathy, the leading cause of new cases of blindness [1][2]. Timely treatment for diabetic retinopathy prevents severe vision loss in over 50% of eyes tested [1]. Fundus images can provide information for detecting and monitoring eye-related diseases, like diabetic retinopathy, which, if detected early, may help prevent vision loss. Damaged blood vessels can indicate the presence of diabetic retinopathy [9], so early detection of damaged vessels in retinal images can provide valuable information about the presence of disease, thereby helping to prevent vision loss. Purpose: The purpose of this study was to compare the effectiveness of two blood vessel segmentation algorithms. Methods: Fifteen fundus images from the STARE database were used to develop two algorithms using the CVIPtools software environment. Another set of fifteen images was derived from the first fifteen and contained ophthalmologists' hand-drawn tracings over the retinal vessels. The ophthalmologists' tracings were used as the "gold standard" for perfect segmentation and compared with the segmented images output by the two algorithms. Comparisons between the segmented and the hand-drawn images were made using Pratt's Figure of Merit (FOM), Signal-to-Noise Ratio (SNR) and Root Mean Square (RMS) Error. Results: Algorithm 2 has an FOM that is 10% higher than Algorithm 1. Algorithm 2 has a 6%-higher SNR than Algorithm 1. Algorithm 2 has only 1.3% more RMS error than Algorithm 1. Conclusions: Algorithm 1 extracted most of the blood vessels with some missing intersections and bifurcations. Algorithm 2 extracted all the major blood vessels, but eradicated some vessels as well. Algorithm 2 outperformed Algorithm 1 in terms of visual clarity, FOM
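Pratt's Figure of Merit scores a detected pixel set against a gold-standard set, penalizing each detected pixel by its squared distance to the nearest ideal pixel. A minimal sketch (assuming the commonly used scaling constant alpha = 1/9; the toy coordinates below are invented, not STARE data):

```python
def pratt_fom(ideal, detected, alpha=1/9):
    """Pratt's Figure of Merit between two sets of (row, col) pixel coords.
    1.0 means a perfect match; lower values penalize displaced pixels."""
    if not ideal or not detected:
        return 0.0
    total = 0.0
    for px in detected:
        # squared distance to the nearest ideal pixel (brute force for clarity)
        d2 = min((px[0] - q[0]) ** 2 + (px[1] - q[1]) ** 2 for q in ideal)
        total += 1.0 / (1.0 + alpha * d2)
    return total / max(len(ideal), len(detected))

gold = {(0, 0), (0, 1), (0, 2)}
perfect = pratt_fom(gold, gold)                      # identical maps
shifted = pratt_fom(gold, {(1, 0), (1, 1), (1, 2)})  # every pixel off by 1
```

Each pixel displaced by one unit contributes 1/(1 + 1/9) = 0.9, so a uniformly shifted map scores 0.9.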
International Nuclear Information System (INIS)
The objective of this work is to present the capabilities of the NUMERICS web platform for evaluation of the performance of image registration algorithms. The NUMERICS platform is a web-accessible tool which provides access to dedicated numerical algorithms for registration and comparison of medical images (http://numerics.phys.uni-sofia.bg). The platform allows comparison of noisy medical images by means of different types of image comparison algorithms, which are based on statistical tests for outliers. The platform also allows 2D image registration with different techniques, such as elastic thin-plate spline registration, registration based on rigid transformations, affine transformations, as well as non-rigid image registration based on Möbius transformations. In this work we demonstrate how the platform can be used as a tool for evaluation of the quality of the image registration process. We demonstrate performance evaluation of a deformable image registration technique based on Möbius transformations. The transformations are applied with appropriate cost functions such as mutual information, correlation coefficient, and sum of squared differences. The emphasis is on the results provided by the platform to the user and their interpretation in the context of the performance evaluation of 2D image registration. The NUMERICS image registration and image comparison platform provides detailed statistical information about submitted image registration jobs and can be used to perform quantitative evaluation of the performance of different image registration techniques. (authors)
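Two of the three cost functions named above are a few lines each. A NumPy sketch on synthetic intensity arrays (not the platform's code; mutual information is omitted because it additionally requires a joint intensity histogram):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences: 0 for identical images, grows with mismatch."""
    return float(np.sum((np.asarray(a, float) - np.asarray(b, float)) ** 2))

def correlation(a, b):
    """Pearson correlation coefficient of the flattened intensities."""
    a = np.asarray(a, float).ravel() - np.mean(a)
    b = np.asarray(b, float).ravel() - np.mean(b)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

img = np.arange(16.0).reshape(4, 4)  # toy "image"
```

A registration routine minimizes SSD (or maximizes correlation) over the transformation parameters; the fixed image compared against itself gives the ideal values 0 and 1.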
Directory of Open Access Journals (Sweden)
Nur Ariffin Mohd Zin
2012-01-01
Full Text Available This paper presents a comparative study of solutions to the Travelling Salesman Problem based on three proposed techniques, namely exhaustive search, a heuristic, and a genetic algorithm. Each solution aims at finding an optimal path through 25 contiguous cities in England, and each is written in Prolog. Comparisons were made with emphasis on the time consumed and the closeness to the optimal solution. Based on the experiments, we found that the heuristic is very promising in terms of time taken, while, on the other hand, the genetic algorithm manages to be outstanding on large numbers of traversals, yielding the shortest path among the three.
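The trade-off can be reproduced on a toy instance: exhaustive search fixes the start city and tries every permutation of the rest, while the nearest-neighbour heuristic greedily visits the closest unvisited city. A Python sketch (the abstract's solutions are in Prolog; the city names and coordinates below are invented):

```python
from itertools import permutations
import math

def tour_length(tour, coords):
    """Total length of the closed tour through the named cities."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def exhaustive(coords):
    """Optimal tour: fix the first city, try every ordering of the rest."""
    cities = list(coords)
    first, rest = cities[0], cities[1:]
    return min(((first,) + p for p in permutations(rest)),
               key=lambda t: tour_length(t, coords))

def nearest_neighbour(coords):
    """Greedy heuristic: repeatedly move to the closest unvisited city."""
    cities = list(coords)
    tour, left = [cities[0]], set(cities[1:])
    while left:
        nxt = min(left, key=lambda c: math.dist(coords[tour[-1]], coords[c]))
        tour.append(nxt)
        left.remove(nxt)
    return tuple(tour)

coords = {"A": (0, 0), "B": (1, 5), "C": (5, 2), "D": (6, 6), "E": (8, 0)}
best = tour_length(exhaustive(coords), coords)
greedy = tour_length(nearest_neighbour(coords), coords)
```

Exhaustive search is optimal but factorial in the number of cities (24!/2 orderings for the paper's 25-city instance), which is exactly why the heuristic and genetic approaches are compared.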
Directory of Open Access Journals (Sweden)
Miguel G. Villarreal-Cervantes
2012-10-01
Full Text Available Mobile robots with omnidirectional wheels are expected to perform a wide variety of movements in a narrow space. However, kinematic mobility and dexterity have not been clearly identified as objectives to be considered when designing omnidirectional redundant robots. In light of this fact, this article proposes to maximize the dexterity of the mobile robot by properly locating the omnidirectional wheels. In addition, four hybrid differential evolution (DE) algorithms, based on the synergetic integration of different kinds of mutation and crossover, are presented. A comparison of metaheuristic and gradient-based algorithms for kinematic dexterity maximization is also presented.
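A baseline DE/rand/1/bin variant, of the kind the hybrid algorithms build on, fits in a few lines: mutate with a scaled difference of two population members, recombine with binomial crossover, keep the trial if it is no worse. A sketch minimizing a sphere function (a stand-in objective; the paper's objective is the robot's dexterity measure, and all parameter values are illustrative):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.7, CR=0.9,
                           gens=200, seed=3):
    """DE/rand/1/bin: difference-vector mutation plus binomial crossover."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = rng.sample([p for j, p in enumerate(pop) if j != i], 3)
            jrand = rng.randrange(dim)  # at least one mutated coordinate
            trial = [a[k] + F * (b[k] - c[k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            if f(trial) <= f(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=f)

sphere = lambda x: sum(v * v for v in x)
best = differential_evolution(sphere, [(-5, 5)] * 3)
```

The hybrid variants in the paper swap in different mutation and crossover operators within this same loop structure.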
A COMPARISON BETWEEN TWO ALGORITHMS FOR THE RETRIEVAL OF SOIL MOISTURE USING AMSR-E DATA
Directory of Open Access Journals (Sweden)
Simonetta Paloscia
2015-04-01
Full Text Available A comparison between two algorithms for estimating soil moisture with microwave satellite data was carried out by using the datasets collected on the four Agricultural Research Service (ARS) watershed sites in the US from 2002 to 2009. These sites collectively represent a wide range of ground conditions and precipitation regimes (from natural to agricultural surfaces and from desert to humid regions) and provide long-term in-situ data. One of the algorithms is the artificial neural network-based algorithm (HydroAlgo) developed by the Institute of Applied Physics of the National Research Council (IFAC-CNR) and the second one is the Single Channel Algorithm (SCA) developed by USDA-ARS (US Department of Agriculture-Agricultural Research Service). Both algorithms are based on the same radiative transfer equations but are implemented very differently. Both made use of datasets provided by the Japanese Aerospace Exploration Agency (JAXA), within the framework of the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) and Global Change Observation Mission-Water (GCOM/AMSR-2) programs. Results demonstrated that both algorithms perform better than the mission-specified accuracy, with Root Mean Square Error (RMSE) ≤ 0.06 m³/m³ and bias < 0.02 m³/m³. These results expand on previous investigations using different algorithms and sites. The novelty of the paper consists of the fact that it is the first intercomparison of the HydroAlgo algorithm with a more traditional retrieval algorithm, which offers an approach to higher-spatial-resolution products.
Li Li; Guo Yang; Wu Wenwu; Shi Youyi; Cheng Jian; Tao Shiheng
2012-01-01
Abstract Background Several biclustering algorithms have been proposed to identify biclusters, in which genes share similar expression patterns across a number of conditions. However, different algorithms would yield different biclusters and further lead to distinct conclusions. Therefore, some testing and comparisons between these algorithms are strongly required. Methods In this study, five biclustering algorithms (i.e. BIMAX, FABIA, ISA, QUBIC and SAMBA) were compared with each other in th...
Comparison of genetic algorithm and harmony search for generator maintenance scheduling
International Nuclear Information System (INIS)
GMS (Generator Maintenance Scheduling) ranks very high in the decision making of power generation management. A generator maintenance schedule decides the time period of maintenance tasks, while a reliable reserve margin is maintained during this time period. In this paper, a comparison of the GA (Genetic Algorithm) and HS (Harmony Search) algorithms is presented to solve the generator maintenance scheduling problem for WAPDA (Water And Power Development Authority) Pakistan. GA is a search procedure used to compute exact and optimized solutions and is considered a global search heuristic technique. The HS algorithm is quite efficient, because its convergence rate is very fast. The HS algorithm is based on the concept of the music improvisation process of searching for a perfect state of harmony. The two algorithms generate feasible and optimal solutions and overcome the limitations of the conventional methods, including extensive computational effort, which increases exponentially as the size of the problem increases. The proposed methods are tested, validated and compared on the WAPDA electric system. (author)
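Harmony search mirrors improvisation: each variable of a new solution is drawn from the harmony memory with probability hmcr, pitch-adjusted with probability par, and otherwise sampled at random; the new harmony replaces the worst memory member if it is better. A continuous-variable sketch on a toy quadratic cost (illustrative parameters only; the paper's GMS formulation is a discrete scheduling problem):

```python
import random

def harmony_search(f, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.05,
                   iters=2000, seed=7):
    """Basic harmony search over box-bounded continuous variables."""
    rng = random.Random(seed)
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    for _ in range(iters):
        new = []
        for k, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:             # draw from harmony memory
                v = rng.choice(memory)[k]
                if rng.random() < par:          # pitch adjustment
                    v += rng.uniform(-bw, bw) * (hi - lo)
            else:                               # random consideration
                v = rng.uniform(lo, hi)
            new.append(min(hi, max(lo, v)))     # clamp to bounds
        worst = max(range(hms), key=lambda i: f(memory[i]))
        if f(new) < f(memory[worst]):
            memory[worst] = new
    return min(memory, key=f)

cost = lambda x: sum((v - 1) ** 2 for v in x)   # minimum at (1, 1)
best = harmony_search(cost, [(-10, 10)] * 2)
```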
Directory of Open Access Journals (Sweden)
Ota Motonori
2008-08-01
Full Text Available Abstract Background Understanding how proteins fold is essential to our quest to discover how life works at the molecular level. Current computational power enables researchers to produce a huge amount of folding simulation data. Hence there is a pressing need to be able to interpret and identify novel folding features from that data. Results In this paper, we model each folding trajectory as a multi-dimensional curve. We then develop an effective multiple curve comparison (MCC) algorithm, called the enhanced partial order (EPO) algorithm, to extract features from a set of diverse folding trajectories, including both successful and unsuccessful simulation runs. The EPO algorithm addresses several new challenges presented by comparing high-dimensional curves coming from folding trajectories. A detailed case study on the miniprotein Trp-cage 1 demonstrates that our algorithm can detect similarities at a rather low level and extract biologically meaningful folding events. Conclusion The EPO algorithm is general and applicable to a wide range of applications. We demonstrate its generality and effectiveness by applying it to aligning multiple protein structures with low similarities. For users' convenience, we provide a web server for the algorithm at http://db.cse.ohio-state.edu/EPO.
K-Means Re-Clustering-Algorithmic Options with Quantifiable Performance Comparisons
Energy Technology Data Exchange (ETDEWEB)
Meyer, A W; Paglieroni, D; Asteneh, C
2002-12-17
This paper presents various architectural options for implementing a K-Means Re-Clustering algorithm suitable for unsupervised segmentation of hyperspectral images. Performance metrics are developed based upon quantitative comparisons of convergence rates and segmentation quality. A methodology for making these comparisons is developed and used to establish K values that produce the best segmentations with minimal processing requirements. Convergence rates depend on the initial choice of cluster centers. Consequently, this same methodology may be used to evaluate the effectiveness of different initialization techniques.
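The dependence of convergence on initialization is easy to reproduce with standard Lloyd's K-means (a plain-Python sketch of the base algorithm only, not the paper's re-clustering variant or its hyperspectral data): different random seeds pick different initial centers, and the iteration count to convergence, and occasionally the final clustering, varies accordingly.

```python
import random

def kmeans(points, k, seed, max_iter=100):
    """Lloyd's algorithm: assign points to the nearest center, move centers
    to cluster means, repeat until the centers stop changing.
    Returns (final centers, iterations used)."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)  # random initial centers from the data
    for it in range(1, max_iter + 1):
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k), key=lambda c: sum(
                (a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[nearest].append(p)
        new = [tuple(sum(v) / len(cl) for v in zip(*cl)) if cl else centers[i]
               for i, cl in enumerate(clusters)]
        if new == centers:
            return centers, it
        centers = new
    return centers, max_iter

# two well-separated blobs; true centroids are (0.5, 0.5) and (10.5, 10.5)
pts = [(x, y) for x in (0, 1) for y in (0, 1)] + \
      [(x, y) for x in (10, 11) for y in (10, 11)]
iterations = {seed: kmeans(pts, 2, seed)[1] for seed in (0, 1, 2)}
```

Running several seeds and keeping the best result is a common remedy for this initialization sensitivity, which is the behavior the paper's methodology quantifies.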
A Comparison of the Machine Learning Algorithm for Evaporation Duct Estimation
Directory of Open Access Journals (Sweden)
C. Yang
2013-06-01
Full Text Available In this research, a comparison of the relevance vector machine (RVM), least squares support vector machine (LSSVM) and the radial basis function neural network (RBFNN) for evaporation duct estimation is presented. The parabolic equation model is adopted as the forward propagation model, and it is used to establish the training database between the radar sea clutter power and the evaporation duct height. The comparison of the RVM, LSSVM and RBFNN for evaporation duct estimation is investigated via experimental and simulation studies, and the statistical analysis method is employed to analyze the performance of the three machine learning algorithms in the simulation study. The analysis demonstrates that the M profile of the RBFNN estimation matches the measured profile relatively well in the experimental study; in the simulation study, the LSSVM is the most precise of the three machine learning algorithms, while the performance of the RVM is basically identical to that of the RBFNN.
Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region
Directory of Open Access Journals (Sweden)
Matko Šarić
2008-06-01
Full Text Available The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We proposed a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant-colored pixels to the total number of pixels. With this approach the detection of gradual transitions is improved by decreasing the number of false positives caused by some camera operations. We also compared the performance of our algorithm with that of the standard twin-comparison method.
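The twin-comparison method applies two thresholds to consecutive-frame differences: a high one that flags cuts directly, and a low one that opens a candidate gradual transition whose accumulated difference must itself exceed the high threshold. A sketch on a synthetic difference signal (the thresholds and signal are invented; the paper's version additionally uses the dominant-color-ratio feature):

```python
def twin_comparison(diffs, t_high, t_low):
    """Return (cut frame indices, gradual transitions as (start, end) pairs)
    from a sequence of consecutive-frame difference values."""
    cuts, graduals = [], []
    start, acc = None, 0.0
    for i, d in enumerate(diffs):
        if d > t_high:                     # abrupt transition: a cut
            cuts.append(i)
            start, acc = None, 0.0
        elif d > t_low:                    # inside a candidate gradual span
            if start is None:
                start, acc = i, 0.0
            acc += d
        else:                              # span ended: accept if accumulated
            if start is not None and acc > t_high:
                graduals.append((start, i - 1))
            start, acc = None, 0.0
    if start is not None and acc > t_high: # span running at the end
        graduals.append((start, len(diffs) - 1))
    return cuts, graduals

# synthetic signal: a hard cut at frame 5, a dissolve over frames 10-14
diffs = [1, 1, 2, 1, 1, 50, 1, 1, 2, 1, 8, 9, 8, 9, 8, 1, 1]
cuts, graduals = twin_comparison(diffs, t_high=30, t_low=5)
```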
A comparison between genetic algorithms and neural networks for optimizing fuel recharges in BWR
International Nuclear Information System (INIS)
In this work the results of a genetic algorithm (GA) and a recurrent multi-state neural network (RNRME) for optimizing the fuel reload of 5 cycles of the Laguna Verde nuclear power plant (CNLV) are presented. The fuel reloads obtained by both methods are compared, and it was observed that the RNRME creates better fuel distributions than the GA. Moreover, a comparison of the utility of using one technique or the other is made. (Author)
Sedenka, V.; Z. Raida
2010-01-01
The paper deals with efficiency comparison of two global evolutionary optimization methods implemented in MATLAB. Attention is turned to an elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) and a novel multi-objective Particle Swarm Optimization (PSO). The performance of optimizers is compared on three different test functions and on a cavity resonator synthesis. The microwave resonator is modeled using the Finite Element Method (FEM). The hit rate and the quality of the Pareto front ...
Comparison a Performance of Data Mining Algorithms (CPDMA) in Prediction Of Diabetes Disease
Dr.V.Karthikeyani; I.Parvin Begum
2013-01-01
Detection of knowledge patterns in clinical data is achieved through data mining. Data mining algorithms can be trained from past examples in clinical data and can model the frequently non-linear relationships between the independent and dependent variables. The resulting model represents formal knowledge, which can often provide a good analytic judgment. Classification is the most commonly used technique in medical data mining. This paper presents a results comparison of ten supervised data mining...
A comparison of semiglobal and local dense matching algorithms for surface reconstruction
Dall'Asta, E.; Roncella, R.
2014-06-01
Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM), which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
Marchant, B.; Platnick, S. E.; Arnold, T.; Meyer, K.; Riedi, J.
2014-12-01
Cloud thermodynamic phase (ice or liquid) discrimination is an important first step for cloud retrievals from passive sensors such as MODIS (Moderate-Resolution Imaging Spectroradiometer). Because ice and liquid phase clouds have very different scattering and absorbing properties, an incorrect cloud phase decision can lead to substantial uncertainties in the cloud optical and microphysical property products such as cloud optical thickness or effective particle radius. Furthermore, it is well-established that ice and liquid clouds have different impacts on the Earth's energy budget and hydrological cycle, thus accurately monitoring the spatial and temporal distribution of these clouds is of continued importance. For MODIS Collection 6 (C6), the shortwave-derived cloud thermodynamic phase algorithm used by the optical and microphysical property retrievals has been completely rewritten to improve the phase discrimination skill for a variety of cloudy scenes (e.g., thin/thick clouds, over ocean/land/desert/snow/ice surface, etc). To evaluate the performance of the C6 cloud phase algorithm, extensive granule-level and global comparisons have been conducted against the heritage C5 algorithm, CALIOP, and POLDER. A wholesale improvement is seen for C6 compared to C5. We will present an overview of the MODIS C6 cloud phase algorithm updates and their impacts on cloud retrieval statistics, as well as ongoing efforts to continue algorithm improvement.
Energy Technology Data Exchange (ETDEWEB)
Antoniucci, S.; Giannini, T.; Li Causi, G.; Lorenzetti, D., E-mail: simone.antoniucci@oa-roma.inaf.it, E-mail: teresa.giannini@oa-roma.inaf.it, E-mail: gianluca.licausi@oa-roma.inaf.it, E-mail: dario.lorenzetti@oa-roma.inaf.it [INAF-Osservatorio Astronomico di Roma, via Frascati 33, I-00040 Monte Porzio (Italy)
2014-02-10
Aiming to statistically study the variability in the mid-IR of young stellar objects, we have compared the 3.6, 4.5, and 24 μm Spitzer fluxes of 1478 sources belonging to the C2D (Cores to Disks) legacy program with the WISE fluxes at 3.4, 4.6, and 22 μm. From this comparison, we have selected a robust sample of 34 variable sources. Their variations were classified per spectral Class (according to the widely accepted scheme of Class I/flat/II/III protostars), and per star forming region. On average, the number of variable sources decreases with increasing Class and is definitely higher in Perseus and Ophiuchus than in Chamaeleon and Lupus. According to the paradigm Class ≡ Evolution, the photometric variability can be considered to be a feature more pronounced in less evolved protostars, and, as such, related to accretion processes. Moreover, our statistical findings agree with the current knowledge of star formation activity in different regions. The 34 selected variables were further investigated for similarities with known young eruptive variables, namely the EXors. In particular, we analyzed (1) the shape of the spectral energy distribution, (2) the IR excess over the stellar photosphere, (3) magnitude versus color variations, and (4) output parameters of model fitting. This first systematic search for EXors ends up with 11 bona fide candidates that can be considered as suitable targets for monitoring or future investigations.
Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS
Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.
Extremely Large Telescopes are very challenging concerning their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement the real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N^2) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance, as well as their response in the presence of noise and under various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT with a total of 5402 actuators. These comparisons, made on a common simulator, allow us to highlight the pros and cons of the various methods and give us a better understanding of the type of reconstruction algorithm that an ELT demands.
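The O(N^2) per-loop cost of the MVM reconstructor described above comes from a dense matrix-vector product mapping sensor slopes to actuator commands. A toy sketch (dimensions and values are illustrative, not the E-ELT configuration or OCTOPUS code):

```python
def mvm_reconstruct(R, slopes):
    """Dense matrix-vector multiply: actuator commands from
    wavefront-sensor slopes. For an N x N reconstructor matrix R
    this costs O(N^2) operations per control loop iteration."""
    return [sum(rij * s for rij, s in zip(row, slopes)) for row in R]

# Toy example: 2 actuators, 3 slope measurements.
R = [[0.5, 0.0, 0.5],
     [1.0, -1.0, 0.0]]
print(mvm_reconstruct(R, [2.0, 1.0, 4.0]))  # [3.0, 1.0]
```

Fast methods such as FrIM or FTR avoid forming this dense product, which is why they scale better with actuator count.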
International Nuclear Information System (INIS)
We are developing a cross-species comparison strategy to distinguish between cancer driver and passenger gene alteration candidates, by utilizing the difference in genomic location of orthologous genes between the human and other mammals. As an initial test of this strategy, we conducted a pilot study with human colorectal cancer (CRC) and its mouse model C57BL/6J ApcMin/+, focusing on human 5q22.2 and 18q21.1-q21.2. We first performed bioinformatics analysis on the evolution of the 5q22.2 and 18q21.1-q21.2 regions. Then, we performed exon-targeted sequencing, real time quantitative polymerase chain reaction (qPCR), and real time quantitative reverse transcriptase PCR (qRT-PCR) analyses on a number of genes of both regions with both human and mouse colon tumors. These two regions (5q22.2 and 18q21.1-q21.2) are frequently deleted in human CRCs and encode the genuine colorectal tumor suppressors APC and SMAD4. They also encode genes such as MCC (mutated in colorectal cancer) whose role in CRC etiology is unknown. We have discovered that both regions are evolutionarily unstable, resulting in genes that are clustered in each human region being found scattered at several distinct loci in the genome of many other species. For instance, APC and MCC are less than 200 kb apart in human 5q22.2 but are 10 Mb apart in the mouse genome. Importantly, our analyses revealed that, while the known CRC driver genes APC and SMAD4 were disrupted in both human colorectal tumors and tumors from ApcMin/+ mice, the questionable MCC gene was disrupted in human tumors but appeared to be intact in mouse tumors. These results indicate that MCC may not actually play any causative role in early colorectal tumorigenesis. We also hypothesize that its disruption in human CRCs is likely a mere result of its close proximity to APC in the human genome. Expanding this pilot study to the entire genome may identify more questionable genes like MCC, facilitating the discovery of new CRC driver gene candidates
Directory of Open Access Journals (Sweden)
Rhythm Suren Wadhwa
2011-11-01
Full Text Available The paper presents a comparison and application of metaheuristic population-based optimization algorithms to a flexible manufacturing automation scenario in a metalcasting foundry. It presents a novel application and comparison of the Bee Colony Algorithm (BCA) with variations of Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) for an object recognition problem in a robot material handling system. To enable robust pick-and-place activity of metal-cast parts by a six-axis industrial robot manipulator, it is important that the correct orientation of the parts is input to the manipulator, via the digital image captured by the vision system. This information is then used for orienting the robot gripper to grip the part from a moving conveyor belt. The objective is to find the reference templates on the manufactured parts in the target landscape picture, which may contain noise. The normalized cross-correlation (NCC) function is used as the objective function in the optimization procedure. The ultimate goal is to test improved algorithms that could prove useful in practical manufacturing automation scenarios.
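The normalized cross-correlation used as the objective function above can be sketched as follows (a minimal pure-Python illustration over 1-D intensity sequences; real template matching evaluates this over 2-D image windows, and this is not the authors' implementation):

```python
import math

def ncc(template, window):
    """Normalized cross-correlation between two equal-length
    intensity sequences; returns a value in [-1, 1]."""
    n = len(template)
    mt = sum(template) / n
    mw = sum(window) / n
    num = sum((t - mt) * (w - mw) for t, w in zip(template, window))
    den = math.sqrt(sum((t - mt) ** 2 for t in template)
                    * sum((w - mw) ** 2 for w in window))
    return num / den if den else 0.0

patch = [10, 20, 30, 40]
print(ncc(patch, patch))             # 1.0 (perfect match)
print(ncc(patch, [40, 30, 20, 10]))  # -1.0 (inverted pattern)
```

Because the measure is normalized by the local means and variances, it is robust to uniform brightness and contrast changes, which is why it suits noisy images of conveyor-belt parts.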
Sensitivity study of voxel-based PET image comparison to image registration algorithms
Energy Technology Data Exchange (ETDEWEB)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Chen, Aileen B.; Berbeco, Ross [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States); Aerts, Hugo J. W. L. [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 and Department of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, Massachusetts 02115 (United States)
2014-11-01
Purpose: Accurate deformable registration is essential for voxel-based comparison of sequential positron emission tomography (PET) images for proper adaptation of the treatment plan and treatment response assessment. The comparison may be sensitive to the method of deformable registration, as the optimal algorithm is unknown. This study investigated the impact of the choice of registration algorithm on therapy response evaluation. Methods: Sixteen patients with 20 lung tumors underwent computed tomography (CT) and 4D FDG-PET scans before and after chemoradiotherapy. All CT images were coregistered using a rigid and ten deformable registration algorithms. The resulting transformations were then applied to the respective PET images. The tumor region defined by a physician on the registered PET images was classified into progressor, stable-disease, and responder subvolumes. Specifically, voxels with standardized uptake value (SUV) decreases >30% were classified as responder, while voxels with SUV increases >30% were classified as progressor. All other voxels were considered stable-disease. The agreement of the subvolumes resulting from different registration algorithms was assessed by the Dice similarity index (DSI). The coefficient of variation (CV) was computed to assess the variability of DSI between individual tumors. The root mean square difference (RMS_rigid) of the rigidly registered CT images was used to measure the degree of tumor deformation. RMS_rigid and DSI were correlated by the Spearman correlation coefficient (R) to investigate the effect of tumor deformation on DSI. Results: Median DSI_rigid was found to be 72%, 66%, and 80% for progressor, stable-disease, and responder, respectively. Median DSI_deformable was 63%–84%, 65%–81%, and 82%–89%. Variability of DSI was substantial and similar for both rigid and deformable algorithms, with CV > 10% for all subvolumes. Tumor deformation had a moderate to significant impact on DSI for progressor
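The voxel classification and Dice similarity index described above can be sketched as follows (a simplified illustration; the ±30% thresholds follow the abstract, while function and variable names are hypothetical):

```python
def classify(suv_change):
    """Classify voxels by fractional SUV change: responder if SUV
    decreased by more than 30%, progressor if it increased by more
    than 30%, otherwise stable-disease."""
    labels = []
    for c in suv_change:
        if c < -0.30:
            labels.append("responder")
        elif c > 0.30:
            labels.append("progressor")
        else:
            labels.append("stable")
    return labels

def dice(a, b):
    """Dice similarity index between two voxel-index sets:
    2|A∩B| / (|A| + |B|)."""
    a, b = set(a), set(b)
    return 2 * len(a & b) / (len(a) + len(b)) if (a or b) else 1.0

# Two registration algorithms producing subvolumes that share 2 of 3 voxels:
print(dice({1, 2, 3}, {2, 3, 4}))  # 2*2/6 ≈ 0.667
```

A DSI of 1 means the two algorithms delineate identical subvolumes; the 63%–89% medians reported above indicate only partial agreement between registration methods.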
Quantitative comparison of direct phase retrieval algorithms in in-line phase tomography
International Nuclear Information System (INIS)
A well-known problem in x-ray microcomputed tomography is low sensitivity. Phase contrast imaging offers an increase in sensitivity of up to a factor of 10³ in the hard x-ray region, which makes it possible to image soft tissue and small density variations. If a sufficiently coherent x-ray beam, such as that obtained from a third-generation synchrotron, is used, phase contrast can be obtained by simply moving the detector downstream of the imaged object. This setup is known as in-line or propagation-based phase contrast imaging. A quantitative relationship exists between the phase shift induced by the object and the recorded intensity, and inversion of this relationship is called phase retrieval. Since the phase shift is proportional to projections through the three-dimensional refractive index distribution in the object, once the phase is retrieved, the refractive index can be reconstructed by using the phase as input to a tomographic reconstruction algorithm. A comparison between four phase retrieval algorithms is presented. The algorithms are based on the transport of intensity equation (TIE), the transport of intensity equation for weak absorption, the contrast transfer function (CTF), and a mixed approach between the CTF and TIE, respectively. The compared methods all rely on linearization of the relationship between phase shift and recorded intensity to yield fast phase retrieval algorithms. The phase retrieval algorithms are compared using both simulated and experimental data, acquired at the European Synchrotron Radiation Facility third-generation synchrotron light source. The algorithms are evaluated in terms of two different reconstruction error metrics. While being slightly less computationally efficient, the mixed approach shows the best performance in terms of the chosen criteria.
Limongelli, Carla; Sciarrone, Filippo; Temperini, Marco; Vaste, Giulia
2011-01-01
LS-Lab provides automatic support to comparison/evaluation of the Learning Object Sequences produced by different Curriculum Sequencing Algorithms. Through this framework a teacher can verify the correspondence between the behaviour of different sequencing algorithms and her pedagogical preferences. In fact the teacher can compare algorithms…
Digital Sound Synthesis Algorithms: a Tutorial Introduction and Comparison of Methods
Lee, J. Robert
The objectives of the dissertation are to provide both a compendium of sound-synthesis methods with detailed descriptions and sound examples, as well as a comparison of the relative merits of each method based on ease of use, observed sound quality, execution time, and data storage requirements. The methods are classified under the general headings of wavetable-lookup synthesis, additive synthesis, subtractive synthesis, nonlinear methods, and physical modelling. The nonlinear methods comprise a large group that ranges from the well-known frequency-modulation synthesis to waveshaping. The final category explores computer modelling of real musical instruments and includes numerical and analytical solutions to the classical wave equation of motion, along with some of the more sophisticated time-domain models that are possible through the prudent combination of simpler synthesis techniques. The dissertation is intended to be understandable by a musician who is mathematically literate but who does not necessarily have a background in digital signal processing. With this limitation in mind, a brief and somewhat intuitive description of digital sampling theory is provided in the introduction. Other topics such as filter theory are discussed as the need arises. By employing each of the synthesis methods to produce the same type of sound, interesting comparisons can be made. For example, a struck string sound, such as that typical of a piano, can be produced by algorithms in each of the synthesis classifications. Many sounds, however, are peculiar to a single algorithm and must be examined independently. Psychoacoustic studies were conducted as an aid in the comparison of the sound quality of several implementations of the synthesis algorithms. Other psychoacoustic experiments were conducted to supplement the established notions of which timbral issues are important in the re-synthesis of the sounds of acoustic musical instruments.
Bircher, Pascal; Liniger, Hanspeter; Prasuhn, Volker
2016-04-01
Soil erosion is a well-known challenge both from a global perspective and in Switzerland, and it is assessed and discussed in many projects (e.g. national or European erosion risk maps). Meaningful assessment of soil erosion requires models that adequately reflect surface water flows. Various studies have attempted to achieve better modelling results by including multiple flow algorithms in the topographic length and slope factor (LS-factor) of the Revised Universal Soil Loss Equation (RUSLE). The choice of multiple flow algorithms is wide, and many of them have been implemented in programs or tools like Saga-Gis, GrassGis, ArcGIS, ArcView, Taudem, and others. This study compares six different multiple flow algorithms with the aim of identifying a suitable approach to calculating the LS factor for a new soil erosion risk map of Switzerland. The comparison of multiple flow algorithms is part of a broader project to model soil erosion for the entire agriculturally used area in Switzerland and to renew and optimize the current erosion risk map of Switzerland (ERM2). The ERM2 was calculated in 2009, using a high resolution digital elevation model (2 m) and a multiple flow algorithm in ArcView. This map has provided the basis for enforcing soil protection regulations since 2010 and has proved its worth in practice, but it has become outdated (new basic data are now available, e.g. data on land use change, a new rainfall erosivity map, a new digital elevation model, etc.) and is no longer user friendly (ArcView). In a first step towards its renewal, a new data set from the Swiss Federal Office of Topography (Swisstopo) was used to generate the agricultural area based on the existing field block map. A field block is an area consisting of farmland, pastures, and meadows which is bounded by hydrological borders such as streets, forests, villages, surface waters, etc. In our study, we compared the six multiple flow algorithms with the LS factor calculation approach used in
Campo, Lorenzo; Castelli, Fabio; Caparrini, Francesca
2010-05-01
distributed model developed at the Department of Civil and Environmental Engineering of the University of Florence. Discussion on the comparisons between the effectiveness of the different algorithms on different cases of study on Central Italy basins is provided.
Directory of Open Access Journals (Sweden)
V. Sedenka
2010-09-01
Full Text Available The paper deals with efficiency comparison of two global evolutionary optimization methods implemented in MATLAB. Attention is turned to an elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) and a novel multi-objective Particle Swarm Optimization (PSO). The performance of optimizers is compared on three different test functions and on a cavity resonator synthesis. The microwave resonator is modeled using the Finite Element Method (FEM). The hit rate and the quality of the Pareto front distribution are classified.
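Judging "the quality of the Pareto front distribution" presupposes extracting the non-dominated set from the optimizer's output. A generic sketch for a minimization problem (illustrative only, not the authors' MATLAB code):

```python
def pareto_front(points):
    """Return the non-dominated points of a multi-objective
    minimization problem. A point dominates another if it is no
    worse in every objective and strictly better in at least one."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (3, 3) is dominated by (2, 2), so only three points survive:
pts = [(1, 5), (2, 2), (4, 1), (3, 3)]
print(pareto_front(pts))  # [(1, 5), (2, 2), (4, 1)]
```

Metrics like hit rate or front spread are then computed over this non-dominated set rather than over the whole population.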
Comparison of SAR Wind Speed Retrieval Algorithms for Evaluating Offshore Wind Energy Resources
DEFF Research Database (Denmark)
Kozai, K.; Ohsawa, T.; Takeyama, Y.; Shimada, S.; Niwa, R.; Hasager, Charlotte Bay; Badger, Merete
2010-01-01
Envisat/ASAR-derived offshore wind speeds and energy densities based on 4 different SAR wind speed retrieval algorithms (CMOD4, CMOD-IFR2, CMOD5, CMOD5.N) are compared with observed wind speeds and energy densities for evaluating offshore wind energy resources. CMOD4 ignores effects of atmospheric stability, while CMOD5.N assumes a neutral condition. By utilizing Monin-Obukhov similarity theory in the inverse LKB code, equivalent neutral wind speeds derived from CMOD5.N are converted to stability-dependent wind speeds (CMOD5N_SDW). Results of comparison in terms of energy density indicate the CMOD5N...
Directory of Open Access Journals (Sweden)
A. Rozanov
2007-09-01
Full Text Available This paper is devoted to an intercomparison of ozone vertical profiles retrieved from the measurements of scattered solar radiation performed by the SCIAMACHY instrument in the limb viewing geometry. Three different inversion algorithms including the prototype of the operational Level 1 to 2 processor to be operated by the European Space Agency are considered. Unlike usual validation studies, this comparison removes the uncertainties arising when comparing measurements made by different instruments probing slightly different air masses and focuses on the uncertainties specific to the modeling-retrieval problem only. The intercomparison was performed for 5 selected orbits of SCIAMACHY showing a good overall agreement of the results in the middle stratosphere, whereas considerable discrepancies were identified in the lower stratosphere and upper troposphere altitude region. Additionally, comparisons with ground-based lidar measurements are shown for selected profiles demonstrating an overall correctness of the retrievals.
Directory of Open Access Journals (Sweden)
Prabhat Kumar Giri
2016-01-01
Full Text Available In the present era of globalization and competitive markets, cellular manufacturing has become a vital tool for meeting the challenges of improving productivity, which is the way to sustain growth. Getting the best results from cellular manufacturing depends on the formation of the machine cells and part families. This paper examines the advantages of the ART method of cell formation over array-based clustering algorithms, namely ROC-2 and DCA. The comparison and evaluation of the cell formation methods have been carried out in the study. The most appropriate approach is selected and used to form the cellular manufacturing system. The comparison and evaluation are done on the basis of grouping efficiency as the performance measure, and improvements over the existing cellular manufacturing system are presented.
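Grouping efficiency scores how block-diagonal the machine-part incidence matrix becomes after cell formation. One common weighted form (the abstract does not specify which variant was used, so this is an assumed illustration) can be sketched as:

```python
def grouping_efficiency(matrix, machine_cells, part_families, q=0.5):
    """Weighted grouping efficiency eta = q*eta1 + (1-q)*eta2 for a
    0/1 machine-part incidence matrix, where eta1 is the fraction of
    1s inside the diagonal blocks (machine's cell == part's family)
    and eta2 the fraction of 0s outside them."""
    in_ones = in_total = out_zeros = out_total = 0
    for i, row in enumerate(matrix):
        for j, v in enumerate(row):
            if machine_cells[i] == part_families[j]:
                in_total += 1
                in_ones += v
            else:
                out_total += 1
                out_zeros += 1 - v
    eta1 = in_ones / in_total if in_total else 1.0
    eta2 = out_zeros / out_total if out_total else 1.0
    return q * eta1 + (1 - q) * eta2

# A perfectly block-diagonal matrix scores 1.0:
m = [[1, 1, 0, 0],
     [1, 1, 0, 0],
     [0, 0, 1, 1]]
print(grouping_efficiency(m, [0, 0, 1], [0, 0, 1, 1]))  # 1.0
```

Exceptional elements (1s outside blocks) and voids (0s inside blocks) both pull the score below 1, which is what the cell formation methods compete on.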
Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)
2002-01-01
A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) Method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code, ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy and design consistency.
Marchant, Benjamin; Platnick, Steven; Meyer, Kerry; Arnold, G. Thomas; Riedi, Jérôme
2016-04-01
Cloud thermodynamic phase (ice, liquid, undetermined) classification is an important first step for cloud retrievals from passive sensors such as MODIS (Moderate Resolution Imaging Spectroradiometer). Because ice and liquid phase clouds have very different scattering and absorbing properties, an incorrect cloud phase decision can lead to substantial errors in the cloud optical and microphysical property products such as cloud optical thickness or effective particle radius. Furthermore, it is well established that ice and liquid clouds have different impacts on the Earth's energy budget and hydrological cycle, thus accurately monitoring the spatial and temporal distribution of these clouds is of continued importance. For MODIS Collection 6 (C6), the shortwave-derived cloud thermodynamic phase algorithm used by the optical and microphysical property retrievals has been completely rewritten to improve the phase discrimination skill for a variety of cloudy scenes (e.g., thin/thick clouds, over ocean/land/desert/snow/ice surface, etc). To evaluate the performance of the C6 cloud phase algorithm, extensive granule-level and global comparisons have been conducted against the heritage C5 algorithm and CALIOP. A wholesale improvement is seen for C6 compared to C5.
Comparison of PID Controller Tuning Methods with Genetic Algorithm for FOPTD System
Directory of Open Access Journals (Sweden)
K. Mohamed Hussain
2014-02-01
Full Text Available Measurement of level, temperature, pressure and flow parameters is vital in all process industries. A combination of a few transducers with a controller, forming a closed-loop system, leads to a stable and effective process. This article deals with control of the process tank and a comparative analysis of various PID control techniques and the Genetic Algorithm (GA) technique. The model for such a real-time process is identified as a First Order Plus Dead Time (FOPTD) process and validated. The need for improved process performance has led to the development of model-based controllers. Well-designed conventional Proportional, Integral and Derivative (PID) controllers are the most widely used controllers in the chemical process industries because of their simplicity, robustness and successful practical applications. Many tuning methods have been proposed for obtaining better PID controller parameter settings, and the various tuning methods for the FOPTD process are analysed using simulation software. Our purpose in this study is the comparison of these tuning methods for single input single output (SISO) systems using computer simulation. The efficiency of the various PID controllers is also investigated for different performance metrics such as Integral Square Error (ISE), Integral Absolute Error (IAE), Integral Time Absolute Error (ITAE), and Mean Square Error (MSE), and simulation is carried out. Work in this paper explores basic concepts, mathematics, and design aspects of the PID controller. A comparison between the PID controller and the Genetic Algorithm (GA) is carried out to determine the best controller for the temperature system.
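The performance indices named above (ISE, IAE, ITAE, MSE) can be approximated from a sampled error signal roughly as follows (a discrete-time sketch; the sampling interval and error values are illustrative, not from the paper):

```python
def control_indices(errors, dt=1.0):
    """Discrete approximations of common controller performance
    indices from a sampled error signal e(k) with sample time dt."""
    n = len(errors)
    ise = sum(e * e for e in errors) * dt           # Integral Square Error
    iae = sum(abs(e) for e in errors) * dt          # Integral Absolute Error
    itae = sum(k * dt * abs(e)                      # Integral Time-weighted
               for k, e in enumerate(errors)) * dt  # Absolute Error
    mse = sum(e * e for e in errors) / n            # Mean Square Error
    return {"ISE": ise, "IAE": iae, "ITAE": itae, "MSE": mse}

# A decaying error signal after a setpoint step:
print(control_indices([1.0, 0.5, 0.25, 0.0]))
```

ITAE weights late errors more heavily than ISE, so tunings chosen to minimize ITAE tend to suppress slowly decaying oscillations.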
Analysis and Comparison of Symmetric Key Cryptographic Algorithms Based on Various File Features
Directory of Open Access Journals (Sweden)
Ranjeet Masram
2014-07-01
Full Text Available For achieving faster communication, most confidential data is circulated through networks as electronic data. Cryptographic ciphers play an important role in providing security for such confidential data against unauthorized attacks. Though security is an important factor, various factors can affect the performance and selection of cryptographic algorithms during the practical implementation of these ciphers for various applications. This paper provides an analysis and comparison of some symmetric key cryptographic ciphers (RC4, AES, Blowfish, RC2, DES, Skipjack, and Triple DES) on the basis of encryption time with the variation of various file features like different data types, data size, data density and key sizes.
Comparison of Bayesian Land Surface Temperature algorithm performance with Terra MODIS observations
Morgan, J A
2009-01-01
An approach to land surface temperature (LST) estimation that relies upon Bayesian inference has been validated against multiband infrared radiometric imagery from the Terra MODIS instrument. Bayesian LST estimators are shown to reproduce standard MODIS product LST values starting from a parsimoniously chosen (hence, uninformative) range of prior band emissivity knowledge. Two estimation methods have been tested. The first is the iterative contraction mapping of joint expectation values for LST and surface emissivity described in a previous paper. In the second method, the Bayesian algorithm is reformulated as a Maximum A Posteriori (MAP) search for the maximum joint a-posteriori probability for LST, given observed sensor aperture radiances and a-priori probabilities for LST and emissivity. Two MODIS data granules each for daytime and nighttime were used for the comparison. The granules were chosen to be largely cloud-free, with limited vertical relief in those portions of the granules fo...
Ivanova, Natalia; Pedersen, Leif T.; Lavergne, Thomas; Tonboe, Rasmus T.; Saldo, Roberto; Mäkynen, Marko; Heygster, Georg; Rösel, Anja; Kern, Stefan; Dybkjær, Gorm; Sørensen, Atle; Brucker, Ludovic; Shokr, Mohammed; Korosov, Anton; Hansen, Morten W.
2015-04-01
Sea ice concentration (SIC) has been derived globally from satellite passive microwave observations since the 1970s by a multitude of algorithms. However, existing datasets and algorithms, although agreeing in the large-scale picture, differ substantially in the details and have disadvantages in summer and fall due to presence of melt ponds and thin ice. There is thus a need for understanding of the causes for the differences and identifying the most suitable method to retrieve SIC. Therefore, during the ESA Climate Change Initiative effort 30 algorithms have been implemented, inter-compared and validated by a standardized reference dataset. The algorithms were evaluated over low and high sea ice concentrations and thin ice. Based on the findings, an optimal approach to retrieve sea ice concentration globally for climate purposes was suggested and validated. The algorithm was implemented with atmospheric correction and dynamical tie points in order to produce the final sea ice concentration dataset with per-pixel uncertainties. The issue of melt ponds was addressed in particular because they are interpreted as open water by the algorithms and thus SIC can be underestimated by up to 40%. To improve our understanding of this issue, melt-pond signatures in AMSR2 images were investigated based on their physical properties with help of observations of melt pond fraction from optical (MODIS and MERIS) and active microwave (SAR) satellite measurements.
Directory of Open Access Journals (Sweden)
Akanksha Mathur
2012-09-01
Full Text Available Encryption is the process of transforming plaintext into ciphertext, where plaintext is the input to the encryption process and ciphertext is the output. Decryption is the process of transforming ciphertext into plaintext, where ciphertext is the input to the decryption process and plaintext is the output. Various encryption algorithms exist, classified as symmetric and asymmetric. Here, I present an algorithm for data encryption and decryption which is based on the ASCII values of the characters in the plaintext. This algorithm encrypts data by using the ASCII values of the data to be encrypted. The secret key used is modified into another string, and that string is used as the key to encrypt or decrypt the data. So it can be said that this is a kind of symmetric encryption algorithm, because it uses the same key for encryption and decryption, but slightly modified. This algorithm operates when the length of the input and the length of the key are the same.
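A symmetric ASCII-value cipher of the general kind described above, with an equal-length key, can be sketched as follows (a toy illustration of the idea, not the author's exact scheme, and not secure for real use):

```python
def encrypt(plaintext, key):
    """Toy symmetric cipher over ASCII values: each plaintext
    character is shifted by the ASCII code of the corresponding key
    character, modulo 128. Requires len(key) == len(plaintext)."""
    if len(key) != len(plaintext):
        raise ValueError("key and plaintext must have equal length")
    return "".join(chr((ord(p) + ord(k)) % 128)
                   for p, k in zip(plaintext, key))

def decrypt(ciphertext, key):
    """Inverse shift with the same key (hence symmetric)."""
    return "".join(chr((ord(c) - ord(k)) % 128)
                   for c, k in zip(ciphertext, key))

ct = encrypt("secret", "abcdef")
print(decrypt(ct, "abcdef"))  # secret
```

Because decryption simply reverses the per-character shift with the same key, the scheme is symmetric, matching the abstract's requirement that input and key lengths be equal.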
Kunde-Ramamoorthy, Govindarajan; Coarfa, Cristian; Laritsky, Eleonora; Kessler, Noah J; Harris, R Alan; Xu, Mingchu; Chen, Rui; Shen, Lanlan; Milosavljevic, Aleksandar; Waterland, Robert A
2014-04-01
Coupling bisulfite conversion with next-generation sequencing (Bisulfite-seq) enables genome-wide measurement of DNA methylation, but poses unique challenges for mapping. However, despite a proliferation of Bisulfite-seq mapping tools, no systematic comparison of their genomic coverage and quantitative accuracy has been reported. We sequenced bisulfite-converted DNA from two tissues from each of two healthy human adults and systematically compared five widely used Bisulfite-seq mapping algorithms: Bismark, BSMAP, Pash, BatMeth and BS Seeker. We evaluated their computational speed and genomic coverage and verified their percentage methylation estimates. With the exception of BatMeth, all mappers covered >70% of CpG sites genome-wide and yielded highly concordant estimates of percentage methylation (r² ≥ 0.95). Fourfold variation in mapping time was found between BSMAP (fastest) and Pash (slowest). In each library, 8-12% of genomic regions covered by Bismark and Pash were not covered by BSMAP. An experiment using simulated reads confirmed that Pash has an exceptional ability to uniquely map reads in genomic regions of structural variation. Independent verification by bisulfite pyrosequencing generally confirmed the percentage methylation estimates by the mappers. Of these algorithms, Bismark provides an attractive combination of processing speed, genomic coverage and quantitative accuracy, whereas Pash offers considerably higher genomic coverage. PMID:24391148
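The percentage methylation estimates being compared above are conventionally derived per CpG site from mapped read counts; a minimal sketch (the counts are illustrative, not the paper's data):

```python
def percent_methylation(methylated_reads, unmethylated_reads):
    """Estimate percentage methylation at a CpG site from counts of
    methylated reads (C retained after bisulfite conversion) and
    unmethylated reads (C converted to T)."""
    total = methylated_reads + unmethylated_reads
    if total == 0:
        raise ValueError("no coverage at this site")
    return 100.0 * methylated_reads / total

# 18 methylated vs 6 unmethylated reads at one site:
print(percent_methylation(18, 6))  # 75.0
```

Concordance between mappers (the r² ≥ 0.95 above) is then measured by correlating these per-site percentages across the CpG sites covered by both tools.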
Directory of Open Access Journals (Sweden)
Gaurav Prakash
2016-01-01
Conclusions: Preoperative whole eye HOA were similar for refractive surgery candidates of Arab and South Asian origin. The values were comparable to historical data for Caucasian eyes and were lower than Asian (Chinese) eyes. These findings may aid in refining refractive nomograms for wavefront ablations.
Küçükosmanoğlu, Hayrettin Onur
2013-01-01
The purpose of this study is to evaluate the self-esteem levels of music teacher candidates by socio-demographic variables. The literature was reviewed, and a "Personal Information Form" and the "Rosenberg Self-Esteem Scale" were used to obtain the research data. For the purposes of this study, statistical analyses of the findings are presented in tables. The study group of the research encompasses 101 undergraduates studying in Necmettin Erbakan University, Department of Fine Arts Education, a...
Szöllösi, Tomáš
2012-01-01
The first part of this work deals with optimization and evolutionary algorithms, which are used as tools to solve complex optimization problems. The discussed algorithms are Differential Evolution, Genetic Algorithm, Simulated Annealing, and the deterministic non-evolutionary algorithm Tabu Search. Subsequently, the testing of the optimization algorithms through a gallery of test functions is discussed, together with a comparison of the solutions of all algorithms on the Travelling salesman p...
International Nuclear Information System (INIS)
Liquid metals are attractive candidates for both near-term and long-term fusion applications. The subjects of this comparison are the differences between the two candidate liquid metal breeder materials Li and LiPb for use in breeding blankets in the areas of neutronics, magnetohydrodynamics, tritium control, compatibility with structural materials, heat extraction system, safety and required research and development program. Both candidates appear to be promising for use in self-cooled breeding blankets which have inherent simplicity with the liquid metal serving as both breeder and coolant. Each liquid metal breeder has advantages and concerns associated with it, and further development is needed to resolve these concerns. The remaining feasibility question for both breeder materials is the electrical insulation between the liquid metal and the duct walls. Different ceramic coatings are required for the two breeders, and their crucial issues, namely self-healing of insulator cracks and tolerance to radiation-induced electrical degradation, have not yet been demonstrated. (orig.)
Performance Comparison of Binary Search Tree and Framed ALOHA Algorithms for RFID Anti-Collision
Chen, Wen-Tzu
Binary search tree and framed ALOHA algorithms are commonly adopted to solve the anti-collision problem in RFID systems. In this letter, the read efficiency of these two anti-collision algorithms is compared through computer simulations. Simulation results indicate the framed ALOHA algorithm requires less total read time than the binary search tree algorithm. The initial frame length strongly affects the uplink throughput for the framed ALOHA algorithm.
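The kind of simulation comparison the letter describes can be reproduced in miniature. The sketch below models an idealized framed ALOHA reader: in each round, every unread tag picks a random slot, and tags that land alone in a slot are identified. The parameters and the all-or-nothing singleton model are simplifying assumptions, not the letter's exact setup.

```python
import random

def framed_aloha_rounds(n_tags: int, frame_len: int, seed: int = 0) -> int:
    """Count read rounds until all tags are singled out (idealized model)."""
    rng = random.Random(seed)
    remaining, rounds = n_tags, 0
    while remaining > 0:
        rounds += 1
        slots = [0] * frame_len
        for _ in range(remaining):
            slots[rng.randrange(frame_len)] += 1
        # Only singleton slots succeed; collided tags retry next round.
        remaining -= sum(1 for s in slots if s == 1)
    return rounds
```

Running this over a range of initial frame lengths illustrates the letter's observation that the initial frame length strongly affects throughput: frames that are too short collide constantly, frames that are too long waste empty slots.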
Comparison of three TCC calculation algorithms for partially coherent imaging simulation
Wu, Xiaofei; Liu, Shiyuan; Liu, Wei; Zhou, Tingting; Wang, Lijuan
2010-08-01
Three kinds of TCC (transmission cross coefficient) calculation algorithms used for partially coherent imaging simulation, including the integration algorithm, the analytical algorithm, and the matrix-based fast algorithm, are reviewed for their rigorous formulations and numerical implementations. The accuracy and speed achievable using these algorithms are compared by simulations conducted on several mainstream illumination sources commonly used in current lithographic tools. Simulation results demonstrate that the integration algorithm is quite accurate but time consuming, while the matrix-based fast algorithm is efficient but its accuracy is heavily dependent on simulation resolution. The analytical algorithm is both efficient and accurate but not suitable for arbitrary optical systems. It is therefore concluded that each TCC calculation algorithm has its pros and cons with a compromise necessary to achieve a balance between accuracy and speed. The observations are useful in fast lithographic simulation for aerial image modeling, optical proximity correction (OPC), source mask optimization (SMO), and critical dimension (CD) prediction.
International Nuclear Information System (INIS)
The European InnoMed-PredTox project was a collaborative effort between 15 pharmaceutical companies, 2 small and mid-sized enterprises, and 3 universities with the goal of delivering deeper insights into the molecular mechanisms of kidney and liver toxicity and to identify mechanism-linked diagnostic or prognostic safety biomarker candidates by combining conventional toxicological parameters with 'omics' data. Mechanistic toxicity studies with 16 different compounds, 2 dose levels, and 3 time points were performed in male Crl: WI(Han) rats. Three of the 16 investigated compounds, BI-3 (FP007SE), Gentamicin (FP009SF), and IMM125 (FP013NO), induced kidney proximal tubule damage (PTD). In addition to histopathology and clinical chemistry, transcriptomics microarray and proteomics 2D-DIGE analysis were performed. Data from the three PTD studies were combined for a cross-study and cross-omics meta-analysis of the target organ. The mechanistic interpretation of kidney PTD-associated deregulated transcripts revealed, in addition to previously described kidney damage transcript biomarkers such as KIM-1, CLU and TIMP-1, a number of additional deregulated pathways congruent with histopathology observations on a single animal basis, including a specific effect on the complement system. The identification of new, more specific biomarker candidates for PTD was most successful when transcriptomics data were used. Combining transcriptomics data with proteomics data added extra value.
Energy Technology Data Exchange (ETDEWEB)
Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.
2000-01-01
We have evaluated the performance of two three dimensional reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data and the second was a statistical maximum-aposteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.
Query by image example: The CANDID approach
Energy Technology Data Exchange (ETDEWEB)
Kelly, P.M.; Cannon, M. [Los Alamos National Lab., NM (United States). Computer Research and Applications Group; Hush, D.R. [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Electrical and Computer Engineering
1995-02-01
CANDID (Comparison Algorithm for Navigating Digital Image Databases) was developed to enable content-based retrieval of digital imagery from large databases using a query-by-example methodology. A user provides an example image to the system, and images in the database that are similar to that example are retrieved. The development of CANDID was inspired by the N-gram approach to document fingerprinting, where a "global signature" is computed for every document in a database and these signatures are compared to one another to determine the similarity between any two documents. CANDID computes a global signature for every image in a database, where the signature is derived from various image features such as localized texture, shape, or color information. A distance between probability density functions of feature vectors is then used to compare signatures. In this paper, the authors present CANDID and highlight two results from their current research: subtracting a "background" signature from every signature in a database in an attempt to improve system performance when using inner-product similarity measures, and visualizing the contribution of individual pixels in the matching process. These ideas are applicable to any histogram-based comparison technique.
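The signature-and-similarity idea running through the CANDID abstracts can be sketched in a few lines. This is a minimal illustration under stated assumptions, not the authors' implementation: the feature (raw 8-bit intensity) and the bin count are choices made here for brevity; CANDID itself uses richer texture, shape, and color features.

```python
def signature(pixel_values, bins=16):
    """Global signature sketch: a normalized histogram (an empirical PDF)
    of one image feature, here assumed to be 8-bit intensities."""
    hist = [0] * bins
    for v in pixel_values:
        hist[min(v * bins // 256, bins - 1)] += 1
    total = sum(hist)
    return [h / total for h in hist]

def inner_product_similarity(sig_a, sig_b):
    """Normalized inner-product similarity between two signatures, one of
    the measure families mentioned in the paper."""
    dot = sum(a * b for a, b in zip(sig_a, sig_b))
    norm = (sum(a * a for a in sig_a) ** 0.5) * (sum(b * b for b in sig_b) ** 0.5)
    return dot / norm
```

Query-by-example then reduces to computing the example image's signature and ranking the database by similarity; the "background subtraction" idea amounts to subtracting a common signature from every stored signature before this comparison.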
Fedorova , E.; Vasylenko, A.; Hnatyk, B. I.; Zhdanov, V. I.
2016-02-01
We analyze the X-ray properties of the Compton-thick Seyfert 1.9 radio-quiet AGN in NGC 1194 using INTEGRAL (ISGRI), XMM-Newton (EPIC), Swift (BAT and XRT), and Suzaku (XIS) observations. There is a set of Fe-K lines in the NGC 1194 spectrum with complex relativistic profiles that can be considered a sign of either a warped Bardeen-Petterson accretion disk or a double black hole. We compare our results on NGC 1194 with two other megamaser warped-disk candidates, NGC 1068 and NGC 4258, to trace out other properties which can be typical for AGNs with warped accretion disks. To finally confirm or disprove the double black-hole hypothesis, further observations of the iron lines and of the evolution of their shape with time are necessary. Based on observations made with INTEGRAL, XMM-Newton, Swift, and Suzaku.
A Comparison of the Effects of K-Anonymity on Machine Learning Algorithms
Hayden Wimmer; Loreen Powell
2014-01-01
While research has been conducted in machine learning algorithms and in privacy preserving data mining (PPDM), a gap exists in the literature combining the aforementioned areas to determine how PPDM affects common machine learning algorithms. The aim of this research is to narrow this literature gap by investigating how a common PPDM algorithm, K-Anonymity, affects common machine learning and data mining algorithms, namely neural networks, logistic regression, decision trees, and Baye...
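The K-Anonymity property that the study perturbs data to achieve is easy to state and to check. The sketch below is an illustrative checker (the field names are hypothetical), not the anonymization algorithm itself: a table is k-anonymous when every combination of quasi-identifier values appears in at least k records.

```python
from collections import Counter

def is_k_anonymous(records, quasi_ids, k):
    """Check k-anonymity: every quasi-identifier combination must occur
    in at least k records. `records` is a list of dicts."""
    combos = Counter(tuple(r[q] for q in quasi_ids) for r in records)
    return all(count >= k for count in combos.values())
```

Generalizing values (e.g., exact ages into ranges) until this predicate holds is what degrades the features that downstream classifiers such as neural networks or decision trees learn from, which is the trade-off the study measures.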
DEFF Research Database (Denmark)
Nica, Florin Valentin Traian; Ritchie, Ewen; Leban, Krisztina Monika
2013-01-01
Genetic algorithm and particle swarm optimization are briefly presented in this paper. These two algorithms are tested to determine their performance on five different benchmark test functions. The algorithms are tested against three requirements: precision of the result, number of iterations, and calculation time. Both algorithms are also tested on an analytical design process of a Transverse Flux Permanent Magnet Generator to observe their performance in an electrical machine design application.
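The benchmark-function testing described above can be illustrated with a minimal particle swarm optimizer on the sphere function, one of the standard benchmarks. The coefficients below are common textbook values, not the paper's settings.

```python
import random

def pso_sphere(dim=2, n_particles=20, iters=200, seed=1):
    """Minimal PSO minimizing the sphere benchmark f(x) = sum(x_i^2).
    Returns the best objective value found."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                 # personal bests
    gbest = min(pbest, key=f)[:]                # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = pos[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return f(gbest)
```

The three comparison criteria the paper uses map directly onto such a harness: final precision is the returned value, iteration count is `iters` to reach a tolerance, and calculation time is wall-clock around the call.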
Lesniak, Joseph; Behrman, Elizabeth; Zandler, Melvin; Kumar, Preethika
2008-03-01
Very few quantum algorithms are usable today. When calculating molecular energies, a quantum algorithm takes advantage of the quantum nature of both the algorithm and the calculation. A few small molecules have been used to show that this method is possible. The method will be applied to larger molecules and compared to classical computer methods.
Comparison and analysis of nonlinear algorithms for compressed sensing in MRI.
Yu, Yeyang; Hong, Mingjian; Liu, Feng; Wang, Hua; Crozier, Stuart
2010-01-01
Compressed sensing (CS) theory has been recently applied in Magnetic Resonance Imaging (MRI) to accelerate the overall imaging process. In the CS implementation, various algorithms have been used to solve the nonlinear equation system for better image quality and reconstruction speed. However, there are no explicit criteria for an optimal CS algorithm selection in the practical MRI application. A systematic and comparative study of those commonly used algorithms is therefore essential for the implementation of CS in MRI. In this work, three typical algorithms, namely, the Gradient Projection For Sparse Reconstruction (GPSR) algorithm, Interior-point algorithm (l(1)_ls), and the Stagewise Orthogonal Matching Pursuit (StOMP) algorithm are compared and investigated in three different imaging scenarios, brain, angiogram and phantom imaging. The algorithms' performances are characterized in terms of image quality and reconstruction speed. The theoretical results show that the performance of the CS algorithms is case sensitive; overall, the StOMP algorithm offers the best solution in imaging quality, while the GPSR algorithm is the most efficient one among the three methods. In the next step, the algorithm performances and characteristics will be experimentally explored. It is hoped that this research will further support the applications of CS in MRI. PMID:21097312
Comparison Of Hybrid Sorting Algorithms Implemented On Different Parallel Hardware Platforms
Directory of Open Access Journals (Sweden)
Dominik Zurek
2013-01-01
Full Text Available Sorting is a common problem in computer science. There are lot of well-known sorting algorithms created for sequential execution on a single processor. Recently, hardware platforms enable to create wide parallel algorithms. We have standard processors consist of multiple cores and hardware accelerators like GPU. The graphic cards with their parallel architecture give new possibility to speed up many algorithms. In this paper we describe results of implementation of a few different sorting algorithms on GPU cards and multicore processors. Then hybrid algorithm will be presented which consists of parts executed on both platforms, standard CPU and GPU.
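The hybrid CPU/GPU scheme described above can be sketched in pure Python by separating its two phases: sorting fixed-size chunks independently (the data-parallel part a GPU would handle) and merging the sorted runs (the sequential part left to the CPU). The chunk size is an arbitrary assumption for the sketch.

```python
import heapq

def hybrid_sort(data, chunk=1024):
    """Hybrid sort sketch: sort fixed-size chunks independently (the phase a
    GPU would execute in parallel), then k-way merge the runs on the CPU."""
    runs = [sorted(data[i:i + chunk]) for i in range(0, len(data), chunk)]
    return list(heapq.merge(*runs))
```

In the real implementations compared in the paper, the per-chunk sort would be a GPU kernel (e.g., a bitonic or radix sort) and the merge would run on multicore CPUs; the structure of the algorithm is the same.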
Li, Zhaokun; Cao, Jingtai; Liu, Wei; Feng, Jianfeng; Zhao, Xiaohui
2015-03-01
Conventional adaptive optical systems used to compensate atmospheric turbulence in free space optical (FSO) communication systems yield unreliable wavefront measurements from a Shack-Hartmann sensor (SH) under strong scintillation. Since wavefront sensorless adaptive optics is a feasible option, we propose several swarm intelligence algorithms to compensate the wavefront aberration caused by atmospheric interference in FSO, and we mainly discuss the algorithm principles, basic flows, and simulation results. The numerical simulation experiments and result analysis show that, compared with the SPGD algorithm, the proposed algorithms can effectively restrain wavefront aberration and largely improve both the convergence rate of the algorithms and the coupling efficiency of the receiver.
Humeau-Heurtier, Anne; Mahé, Guillaume; Abraham, Pierre
2015-12-01
Laser speckle contrast imaging (LSCI) enables a noninvasive monitoring of microvascular perfusion. Some studies have proposed to extract information from LSCI data through their multiscale entropy (MSE). However, for reaching a large range of scales, the original MSE algorithm may require long recordings for reliability. Recently, a novel approach to compute MSE with shorter data sets has been proposed: the short-time MSE (sMSE). Our goal is to apply, for the first time, the sMSE algorithm in LSCI data and to compare results with those given by the original MSE. Moreover, we apply the original MSE algorithm on data of different lengths and compare results with those given by longer recordings. For this purpose, synthetic signals and 192 LSCI regions of interest (ROIs) of different sizes are processed. Our results show that the sMSE algorithm is valid to compute the MSE of LSCI data. Moreover, with time series shorter than those initially proposed, the sMSE and original MSE algorithms give results with no statistical difference from those of the original MSE algorithm with longer data sets. The minimal acceptable length depends on the ROI size. Comparisons of MSE from healthy and pathological subjects can be performed with shorter data sets than those proposed until now. PMID:26220209
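The coarse-graining step shared by the original MSE and the sMSE procedures is simple to state: at scale τ, consecutive non-overlapping windows of τ samples are averaged, and entropy is then computed on the shortened series. A minimal sketch of that step (the entropy estimator itself is omitted):

```python
def coarse_grain(signal, scale):
    """Coarse-graining step of multiscale entropy: average consecutive
    non-overlapping windows of length `scale`. At scale 1 the signal is
    returned unchanged; larger scales shorten the series by that factor."""
    n = len(signal) // scale
    return [sum(signal[i * scale:(i + 1) * scale]) / scale for i in range(n)]
```

The length problem the abstract addresses is visible here: at scale τ the coarse-grained series has only ⌊N/τ⌋ points, so reaching large scales with the original MSE demands long recordings, which is precisely what the sMSE variant relaxes.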
Directory of Open Access Journals (Sweden)
Li Li
2012-07-01
Full Text Available Abstract Background Several biclustering algorithms have been proposed to identify biclusters, in which genes share similar expression patterns across a number of conditions. However, different algorithms yield different biclusters and can further lead to distinct conclusions. Therefore, testing and comparisons between these algorithms are strongly required. Methods In this study, five biclustering algorithms (i.e., BIMAX, FABIA, ISA, QUBIC and SAMBA) were compared with each other in the cases where they were used to handle two expression datasets (GDS1620 and pathway) with different dimensions in Arabidopsis thaliana (A. thaliana). GO (gene ontology) annotation and PPI (protein-protein interaction) network were used to verify the corresponding biological significance of biclusters from the five algorithms. To compare the algorithms' performance and evaluate the quality of identified biclusters, two scoring methods, namely weighted enrichment (WE) scoring and PPI scoring, were proposed in our study. For each dataset, after combining the scores of all biclusters into one unified ranking, we could evaluate the performance and behavior of the five biclustering algorithms in a better way. Results Both the WE and PPI scoring methods have been proved effective in validating the biological significance of the biclusters, and a significantly positive correlation between the two sets of scores has been tested to demonstrate the consistency of these two methods. A comparative study of the above five algorithms has revealed that: (1) ISA is the most effective one among the five algorithms on the dataset of GDS1620, and BIMAX outperforms the other algorithms on the dataset of pathway. (2) Both ISA and BIMAX are data-dependent. The former does not work well on datasets with few genes, while the latter holds well for datasets with more conditions. (3) FABIA and QUBIC perform poorly in this study and they may be suitable to large datasets with more genes and
Korean Medication Algorithm for Bipolar Disorder 2014: comparisons with other treatment guidelines
Directory of Open Access Journals (Sweden)
Jeong JH
2015-06-01
with MS or AAP for dysphoric/psychotic mania. Aripiprazole, olanzapine, quetiapine, and risperidone were the first-line AAPs in nearly all of the phases of bipolar disorder across the guidelines. Most guidelines advocated newer AAPs as first-line treatment options in all phases, and lamotrigine in depressive and maintenance phases. Lithium and valproic acid were commonly used as MSs in all phases of bipolar disorder. As research evidence accumulated over time, recommendations of newer AAPs – such as asenapine, paliperidone, lurasidone, and long-acting injectable risperidone – became prominent. This comparison identifies that the treatment recommendations of the KMAP-BP 2014 are similar to those of other treatment guidelines and reflect current changes in prescription patterns for bipolar disorder based on accumulated research data. Further studies are needed to address several issues identified in our review. Keywords: bipolar disorder, pharmacotherapy, treatment algorithm, guideline comparison, KMAP-2014
Li, Borui; Mu, Chundi; WANG, Tao; Peng, Qian
2014-01-01
This is a revised version of our paper published in Journal of Convergence Information Technology (JCIT): "Comparison of Feature Point Extraction Algorithms for Vision Based Autonomous Aerial Refueling". We corrected some errors including measurement unit errors, spelling errors and so on. Since the published papers in JCIT are not allowed to be modified, we submit the revised version to arXiv.org to make the paper more rigorous and not to confuse other researchers.
Binnicker, Matthew J.; Jespersen, Deborah J.; Rollins, Leonard O.
2012-01-01
We describe the first direct comparison of the reverse and traditional syphilis screening algorithms in a population with a low prevalence of syphilis. Among 1,000 patients tested, the results for 6 patients were falsely reactive by reverse screening, compared to none by traditional testing. However, reverse screening identified 2 patients with possible latent syphilis that were not detected by rapid plasma reagin (RPR).
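The two screening algorithms being compared are decision procedures, and their contrast can be sketched as such. The simplified logic below illustrates the general flow (traditional: nontreponemal RPR first; reverse: treponemal test first, discordants resolved by a second treponemal test); it is an illustration, not the authors' exact laboratory protocol.

```python
def traditional_screen(rpr_reactive: bool, treponemal_reactive: bool) -> str:
    """Traditional algorithm: nontreponemal (RPR) test first."""
    if not rpr_reactive:
        return "negative"
    return "syphilis" if treponemal_reactive else "false-positive RPR"

def reverse_screen(treponemal_reactive: bool, rpr_reactive: bool,
                   second_treponemal_reactive: bool = False) -> str:
    """Reverse algorithm: treponemal test first; a discordant result
    (treponemal-reactive, RPR-nonreactive) is resolved with a second
    treponemal test."""
    if not treponemal_reactive:
        return "negative"
    if rpr_reactive:
        return "syphilis"
    return ("possible latent syphilis" if second_treponemal_reactive
            else "likely false-positive")
```

The study's two findings map onto the discordant branch: the 6 falsely reactive reverse-screen results and the 2 possible latent infections missed by RPR both arise from treponemal-reactive, RPR-nonreactive patients.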
International Nuclear Information System (INIS)
Work in the respective areas included assessment of conditions related to sinkhole development. Information collected and assessed involved geology, hydrogeology, land use, lineaments and linear trends, identification of karst features and zones, and inventory of historical sinkhole development and type. Karstification of the candidate, Rhea County, and Morristown study areas, in comparison to other karst areas in Tennessee, can be classified informally as youthful, submature, and mature, respectively. Historical sinkhole development in the more karstified areas is attributed to the greater degree of structural deformation by faulting and fracturing, subsequent solutioning of bedrock, thinness of residuum, and degree of development by man. Sinkhole triggering mechanisms identified are progressive solution of bedrock, water-level fluctuations, piping, and loading. 68 refs., 18 figs., 11 tabs
Singh, Niraj Kumar
2012-01-01
Smart Sort algorithm is a "smart" fusion of heap construction procedures (of Heap sort algorithm) into the conventional "Partition" function (of Quick sort algorithm) resulting in a robust version of Quick sort algorithm. We have also performed empirical analysis of average case behavior of our proposed algorithm along with the necessary theoretical analysis for best and worst cases. Its performance was checked against some standard probability distributions, both uniform and non-uniform, like Binomial, Poisson, Discrete & Continuous Uniform, Exponential, and Standard Normal. The analysis exhibited the desired robustness coupled with excellent performance of our algorithm. Although this paper assumes the static partition ratios, its dynamic version is expected to yield still better results.
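Smart Sort's exact fusion of heap construction into the partition routine is not reproduced here. As an illustration of the quicksort/heap hybrid family it belongs to, the sketch below recurses with a Hoare-style partition and falls back to a heap when a depth budget is exhausted (an introsort-style arrangement, assumed for the sketch), which caps the worst case that plain quicksort suffers on adversarial inputs.

```python
import heapq

def quick_heap_hybrid(a):
    """Sort list `a` in place with quicksort, falling back to a heap when
    the recursion depth budget runs out. Returns `a` for convenience."""
    def qs(lo, hi, depth):            # half-open range [lo, hi)
        if hi - lo <= 1:
            return
        if depth == 0:                # heap fallback on deep recursion
            sub = a[lo:hi]
            heapq.heapify(sub)
            a[lo:hi] = [heapq.heappop(sub) for _ in range(len(sub))]
            return
        pivot = a[(lo + hi) // 2]     # Hoare partition around middle element
        i, j = lo, hi - 1
        while i <= j:
            while a[i] < pivot:
                i += 1
            while a[j] > pivot:
                j -= 1
            if i <= j:
                a[i], a[j] = a[j], a[i]
                i += 1
                j -= 1
        qs(lo, j + 1, depth - 1)
        qs(i, hi, depth - 1)
    qs(0, len(a), max(1, 2 * len(a).bit_length()))
    return a
```

The robustness claim in the abstract corresponds to exactly this property: the heap component bounds the cost of unlucky partitions, so performance stays stable across input distributions.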
Osborn, John C
2013-01-01
ABSTRACT The Candidate is an attempt to marry elements of journalism and gaming into a format that both entertains and educates the player. The Google-AP Scholarship, a new scholarship award that is given to several journalists a year to work on projects at the threshold of technology and journalism, funded the project. The objective in this prototype version of the game is to put the player in the shoes of a congressional candidate during an off-year election, specificall...
Performance Comparison of Known ICA Algorithms to a Wavelet-ICA Merger
Janett Walters-Williams, Yan Li
2011-01-01
These signals are, however, contaminated with artifacts which must be removed to obtain pure EEG signals. These artifacts can be removed by Independent Component Analysis (ICA). In this paper we studied the performance of three ICA algorithms (FastICA, JADE, and Radical) as well as our newly developed ICA technique. Comparing these ICA algorithms, it is observed that our new technique performs as well as these algorithms at denoising EEG signals.
Jie TANG; Nett, Brian E; Chen, Guang-Hong
2009-01-01
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical rec...
Comparison of algorithms for distributed space exploration in a simulated environment
Cikač, Jaka
2014-01-01
Space exploration algorithms aim to discover as much unknown space as possible, as efficiently as possible, in the shortest possible time. To achieve this goal, we use distributed algorithms implemented on multi-agent systems. In this work, we explore which of the algorithms can efficiently explore space in the simulated environment Gridland. Since Gridland, in its original release, was not meant for simulating space exploration, we had to make some modifications and enable movement history an...
Comparison of strapdown inertial navigation algorithm based on rotation vector and dual quaternion
Institute of Scientific and Technical Information of China (English)
Wang Zhenhuan; Chen Xijun; Zeng Qingshuang
2013-01-01
For the navigation algorithm of the strapdown inertial navigation system, by comparing the equations of the dual quaternion and the quaternion, the superiority in accuracy of the attitude algorithm based on the dual quaternion over those based on the rotation vector is analyzed for the case of a rotating navigation frame. By comparing the update algorithm of the gravitational velocity in the dual quaternion solution with the compensation algorithm of the harmful acceleration in the traditional velocity solution, the accuracy advantage of the gravitational velocity based on the dual quaternion is addressed. In view of the attitude and velocity algorithms based on the dual quaternion, an improved navigation algorithm is proposed that matches the rotation vector algorithm in computational complexity. With this method, the attitude quaternion does not require compensation as the navigation frame rotates. To verify the correctness of the theoretical analysis, simulations are carried out in software; the simulation results show that the accuracy of the improved algorithm is approximately equal to that of the dual quaternion algorithm.
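The rotation-vector attitude update that the dual quaternion approach is compared against can be sketched concretely: an integrated body-frame rotation vector φ is converted to a delta quaternion and composed with the current attitude. This is the standard textbook update, shown for orientation only; it does not reproduce the paper's improved algorithm.

```python
import math

def quat_mul(q, r):
    """Hamilton product of two quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def update_attitude(q, rot_vec):
    """Attitude update from a body-frame rotation vector phi:
    q_new = q * dq, with dq = [cos(|phi|/2), (phi/|phi|) sin(|phi|/2)]."""
    phi = math.sqrt(sum(v * v for v in rot_vec))
    if phi < 1e-12:
        return q
    s = math.sin(phi / 2) / phi
    dq = (math.cos(phi / 2), rot_vec[0]*s, rot_vec[1]*s, rot_vec[2]*s)
    return quat_mul(q, dq)
```

The paper's point is that when the navigation frame itself rotates, this attitude quaternion needs an extra compensation term, whereas the dual quaternion formulation (and the proposed improved algorithm) avoids it.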
Directory of Open Access Journals (Sweden)
Rajeswari Sridhar
2010-07-01
Full Text Available In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs. We have compared a modified version of the dual ternary indexing algorithm for music indexing and retrieval with the multi-key hashing indexing algorithm proposed by us. The modification of the dual ternary algorithm was essential to handle variable-length query phrases and to accommodate features specific to Carnatic music. The dual ternary indexing algorithm is adapted for Carnatic music by segmenting with the segmentation technique for Carnatic music. The dual ternary algorithm is compared with the multi-key hashing algorithm designed by us for indexing and retrieval, in which features like MFCC, spectral flux, melody string, and spectral centroid are used as features for indexing data into a hash table. The way in which collision resolution was handled by this hash table differs from the normal hash table approaches. It was observed that multi-key hashing based retrieval had a lower time complexity than dual-ternary based indexing. The algorithms were also compared for their precision and recall, in which multi-key hashing had a better recall than modified dual ternary indexing for the sample data considered.
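The multi-key hashing idea can be sketched as follows: each song is inserted under several feature-derived keys, and collisions within a bucket are resolved by chaining. The feature names and the bucket count below are placeholders standing in for the paper's MFCC, spectral flux, melody string, and spectral centroid features; the paper's actual collision-resolution scheme is not reproduced.

```python
from collections import defaultdict

N_BUCKETS = 101

def build_index(songs):
    """Multi-key index sketch: each song is hashed under every one of its
    feature keys; colliding songs are chained in a per-bucket list."""
    index = defaultdict(list)
    for song_id, features in songs.items():
        for key, value in features.items():
            index[(key, hash(value) % N_BUCKETS)].append(song_id)
    return index

def query(index, key, value):
    """Return candidate song ids sharing this feature value's bucket."""
    return index.get((key, hash(value) % N_BUCKETS), [])
```

Because a lookup is a single hash per feature, retrieval cost does not grow with database size the way a tree or ternary-structure traversal does, which matches the time-complexity advantage reported for multi-key hashing.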
Directory of Open Access Journals (Sweden)
Chansiri Singhtaun
2010-01-01
Full Text Available Problem statement: The objective of this study is to develop efficient exact algorithms for a single source capacitated multi-facility location problem with rectilinear distance. This problem is concerned with locating m capacitated facilities in the two-dimensional plane to satisfy the demand of n customers with minimum total transportation cost, which is proportional to the rectilinear distance between the facilities and their customers. Approach: Two exact algorithms are proposed and compared. The first algorithm, the decomposition algorithm, uses explicit branching on the allocation variables and then solves for the location variables corresponding to each branch, as in the original Mixed Integer Programming (MIP) formulation of the problem with a nonlinear objective function. For the other algorithm, a new formulation of the problem is first created by making use of a well-known condition for the optimal facility locations. The problem is considered as a p-median problem and the original formulation is transformed into a binary integer programming problem. The classical exact algorithm based on this formulation, the branch-and-bound algorithm (implicit branching), is then used. Results: Computational results show that the decomposition algorithm can provide the optimal solution for larger sizes of the studied problem with much less processing time than implicit branching on the discrete reformulated problem. Conclusion: The decomposition algorithm deals more efficiently with the studied NP-hard problems but requires efficient MIP software to support it.
An Improved Chaotic Bat Algorithm for Solving Integer Programming Problems
Directory of Open Access Journals (Sweden)
Osama Abdel Raouf
2014-08-01
Full Text Available The Bat Algorithm is a recently developed method in the field of computational intelligence. This paper presents an improved version of a bat meta-heuristic algorithm (IBACH) for solving integer programming problems. The proposed algorithm uses chaotic behaviour to generate candidate solutions, in a manner similar to acoustic monophony. Numerical results show that IBACH is able to obtain the optimal results in comparison to traditional methods (branch and bound), the particle swarm optimization algorithm (PSO), the standard bat algorithm and other harmony search algorithms. The benefit of the proposed algorithm, however, lies in its ability to obtain the optimal solution with less computation, saving time in comparison with the branch and bound algorithm (an exact solution method).
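The "chaotic behaviour" used to generate candidate solutions in such algorithms is typically a deterministic chaotic map substituted for a pseudo-random generator. The logistic map below is a common choice in chaotic metaheuristics; whether IBACH uses this particular map is an assumption of the sketch.

```python
def logistic_chaos(x0=0.7, r=4.0, n=5):
    """Logistic-map chaotic sequence x_{k+1} = r * x_k * (1 - x_k), commonly
    used to perturb or seed candidate solutions in chaotic metaheuristics.
    With r = 4 and x0 in (0, 1), iterates stay in [0, 1]."""
    xs, x = [], x0
    for _ in range(n):
        x = r * x * (1 - x)
        xs.append(x)
    return xs
```

In an integer programming setting, each chaotic value in [0, 1] would be scaled to the variable's bounds and rounded, giving a candidate that explores the space more ergodically than uniform random draws.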
Battiste, Vernol; Lawton, George; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Johnson, Walter W.
2012-01-01
Managing the interval between arrival aircraft is a major part of the en route and TRACON controller's job. In an effort to reduce controller workload and low-altitude vectoring, algorithms have been developed to allow pilots to take responsibility for achieving and maintaining proper spacing. Additionally, algorithms have been developed to create dynamic weather-free arrival routes in the presence of convective weather. In a recent study we examined an algorithm to handle dynamic re-routing in the presence of convective weather and two distinct spacing algorithms. The spacing algorithms originated from different core algorithms; both were enhanced with trajectory intent data for the study. These two algorithms were used simultaneously in a human-in-the-loop (HITL) simulation in which pilots performed weather-impacted arrival operations into Louisville International Airport while also performing interval management (IM) on some trials. The controllers retained responsibility for separation and for managing the en route airspace and, on some trials, managing IM. The goal was a stress test of dynamic arrival algorithms with ground and airborne spacing concepts. The flight deck spacing algorithms and controller-managed spacing not only had to be robust to the dynamic nature of aircraft re-routing around weather but also had to be compatible with two alternative algorithms for achieving the spacing goal. Flight deck interval management spacing in this simulation provided a clear reduction in controller workload relative to when controllers were responsible for spacing the aircraft. At the same time, spacing was much less variable with the flight deck automated spacing. Even though the approaches taken by the two spacing algorithms to achieve the interval management goals were slightly different, they proved compatible in achieving the interval management goal of 130 sec by the TRACON boundary.
Comparison of neuron selection algorithms of wavelet-based neural network
Mei, Xiaodan; Sun, Sheng-He
2001-09-01
Wavelet networks have received increasing attention in various fields such as signal processing, pattern recognition, robotics and automatic control. Recently, researchers have employed wavelet functions as activation functions and have obtained satisfying results in approximating and localizing signals. However, function estimation becomes increasingly complex as the input dimension grows. The hidden neurons contribute to minimizing the approximation error, so it is important to study suitable algorithms for neuron selection. An exhaustive search procedure is clearly impractical when the number of neurons is large. The study in this paper focuses on which type of selection algorithm has the faster convergence speed and smaller error for signal approximation. The genetic algorithm (GA) and the tabu search algorithm (TSA) are therefore studied and compared in a series of experiments. This paper first presents the structure of the wavelet-based neural network, then introduces the two selection algorithms, discusses their properties and learning processes, and analyzes the experiments and results. Two wavelet functions were used to test the algorithms. The experiments show that the tabu search selection algorithm performs better than the genetic selection algorithm: TSA has a faster convergence rate than GA under the same stopping criterion.
Comparison of several algorithms of the electric force calculation in particle plasma models
International Nuclear Information System (INIS)
This work is devoted to plasma modelling using the technique of molecular dynamics. The crucial problem in most such models is the efficient calculation of the electric force. This is usually solved with the particle-in-cell (PIC) algorithm. However, PIC is an approximative algorithm, as it underestimates the short-range interactions of charged particles. We propose a hybrid algorithm which adds these interactions to PIC. We then include this algorithm in a set of algorithms which we test against each other in a two-dimensional collisionless magnetized plasma model. Besides our hybrid algorithm, this set includes two variants of pure PIC and the direct application of Coulomb's law. We compare particle forces, particle trajectories, total energy conservation and the speed of the algorithms. We find that the hybrid algorithm can be a good replacement for the direct application of Coulomb's law (quite accurate and much faster). It is, however, probably unnecessary to use it in practical 2D models.
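The baseline against which the hybrid method is compared, the direct application of Coulomb's law, is an O(N²) pairwise sum. A minimal sketch for point charges confined to a plane, using the familiar 1/r² law (a real 2D model might instead use the logarithmic 2D potential; this is only an illustration):

```python
def coulomb_forces(positions, charges, k=1.0):
    """Direct pairwise Coulomb forces: exact short-range interactions of the
    kind PIC underestimates, at O(N^2) cost. Newton's third law halves the
    inner loop."""
    n = len(positions)
    forces = [[0.0, 0.0] for _ in range(n)]
    for i in range(n):
        xi, yi = positions[i]
        for j in range(i + 1, n):
            dx, dy = xi - positions[j][0], yi - positions[j][1]
            r2 = dx * dx + dy * dy
            f = k * charges[i] * charges[j] / (r2 * r2 ** 0.5)  # k q_i q_j / r^3
            fx, fy = f * dx, f * dy
            forces[i][0] += fx; forces[i][1] += fy
            forces[j][0] -= fx; forces[j][1] -= fy
    return forces
```

The quadratic cost of this sum is exactly why PIC (and hybrids like the one proposed) are attractive for large particle counts.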
Antoniucci, S; Causi, G Li; Lorenzetti, D
2014-01-01
Aiming to statistically study the mid-IR variability of young stellar objects (YSOs), we have compared the 3.6, 4.5, and 24 um Spitzer fluxes of 1478 sources belonging to the C2D (Cores to Disks) legacy program with the WISE fluxes at 3.4, 4.6, and 22 um. From this comparison we have selected a robust sample of 34 variable sources. Their variations were classified per spectral Class (according to the widely accepted scheme of Class I/flat/II/III protostars) and per star forming region. On average, the number of variable sources decreases with increasing Class and is definitely higher in Perseus and Ophiuchus than in Chamaeleon and Lupus. According to the Class-evolution paradigm, photometric variability can be considered a feature more pronounced in less evolved protostars and, as such, related to accretion processes. Moreover, our statistical findings agree with the current knowledge of the star formation activity in different regions. The 34 selected variables were further investigate...
Tang, Y.; Reed, P.; Wagner, T.
2005-12-01
This study provides the first comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating integrated hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: the Epsilon-Dominance Nondominated Sorted Genetic Algorithm-II (ε-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study assesses the performance of these three evolutionary multiobjective algorithms using a formal metrics-based methodology, in two phases of testing. In the first phase, a suite of standard computer science test problems is used to validate the algorithms' ability to perform global search effectively, efficiently, and reliably. The second phase compares the algorithms' performance on a computationally intensive multiobjective calibration of an integrated hydrologic model for the Shale Hills watershed, located within the Valley and Ridge province of the Susquehanna River Basin in north central Pennsylvania. The Shale Hills test case demonstrates the computational challenges posed by the paradigmatic shift in environmental and water resources simulation tools towards highly nonlinear physical models that seek to holistically simulate the water cycle. Specifically, the Shale Hills test case is an excellent test for the three EMO algorithms due to the large number of continuous decision variables, the increased computational demands posed by simulating fully coupled hydrologic processes, and the highly multimodal nature of the search space. A challenge and contribution of this work is the development of a comprehensive methodology for comparing EMO algorithms that have different search operators and randomization techniques.
Comparison of Fully Numerical Predictor-Corrector and Apollo Skip Entry Guidance Algorithms
Brunner, Christopher W.; Lu, Ping
2012-09-01
The dramatic increase in computational power since the Apollo program has enabled the development of numerical predictor-corrector (NPC) entry guidance algorithms that allow on-board accurate determination of a vehicle's trajectory. These algorithms are sufficiently mature to be flown. They are highly adaptive, especially in the face of extreme dispersion and off-nominal situations compared with reference-trajectory following algorithms. The performance and reliability of entry guidance are critical to mission success. This paper compares the performance of a recently developed fully numerical predictor-corrector entry guidance (FNPEG) algorithm with that of the Apollo skip entry guidance. Through extensive dispersion testing, it is clearly demonstrated that the Apollo skip entry guidance algorithm would be inadequate in meeting the landing precision requirement for missions with medium (4000-7000 km) and long (>7000 km) downrange capability requirements under moderate dispersions chiefly due to poor modeling of atmospheric drag. In the presence of large dispersions, a significant number of failures occur even for short-range missions due to the deviation from planned reference trajectories. The FNPEG algorithm, on the other hand, is able to ensure high landing precision in all cases tested. All factors considered, a strong case is made for adopting fully numerical algorithms for future skip entry missions.
Energy Technology Data Exchange (ETDEWEB)
Gotway, C.A. [Nebraska Univ., Lincoln, NE (United States). Dept. of Biometry; Rutherford, B.M. [Sandia National Labs., Albuquerque, NM (United States)
1993-09-01
Stochastic simulation has been suggested as a viable method for characterizing the uncertainty associated with the prediction of a nonlinear function of a spatially-varying parameter. Geostatistical simulation algorithms generate realizations of a random field with specified statistical and geostatistical properties. A nonlinear function is evaluated over each realization to obtain an uncertainty distribution of a system response that reflects the spatial variability and uncertainty in the parameter. Crucial management decisions, such as potential regulatory compliance of proposed nuclear waste facilities and optimal allocation of resources in environmental remediation, are based on the resulting system response uncertainty distribution. Many geostatistical simulation algorithms have been developed to generate the random fields, and each algorithm will produce fields with different statistical properties. These different properties will result in different distributions for system response, and potentially, different managerial decisions. The statistical properties of the resulting system response distributions are not completely understood, nor is the ability of the various algorithms to generate response distributions that adequately reflect the associated uncertainty. This paper reviews several of the algorithms available for generating random fields. Algorithms are compared in a designed experiment using seven exhaustive data sets with different statistical and geostatistical properties. For each exhaustive data set, a number of realizations are generated using each simulation algorithm. The realizations are used with each of several deterministic transfer functions to produce a cumulative uncertainty distribution function of a system response. The uncertainty distributions are then compared to the single value obtained from the corresponding exhaustive data set.
Directory of Open Access Journals (Sweden)
Rajeswari Sridhar
2010-07-01
Full Text Available In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs. We have compared a modified version of the dual ternary indexing algorithm for music indexing and retrieval with the multi-key hashing indexing algorithm proposed by us. The modification to the dual ternary algorithm was essential to handle variable-length query phrases and to accommodate features specific to Carnatic music. The dual ternary indexing algorithm is adapted for Carnatic music by segmenting using the segmentation technique for Carnatic music. The dual ternary algorithm is compared with the multi-key hashing algorithm designed by us for indexing and retrieval, in which features like MFCC, spectral flux, melody string and spectral centroid are used as features for indexing data into a hash table. The way in which collision resolution is handled by this hash table differs from normal hash table approaches. It was observed that multi-key hashing based retrieval had a lower time complexity than dual-ternary based indexing. The algorithms were also compared for their precision and recall, in which multi-key hashing had a better recall than modified dual ternary indexing for the sample data considered.
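The multi-key idea — indexing each song under several feature keys and letting a query vote across the per-key buckets — can be sketched as follows. The class and feature names are illustrative; the paper's actual collision-resolution scheme is not reproduced here:

```python
class MultiKeyIndex:
    """Illustrative multi-key hash index: each song is inserted under several
    quantized feature keys (e.g. MFCC, spectral flux, melody string), and a
    query ranks candidates by how many feature keys matched."""
    def __init__(self):
        self.tables = {}   # feature name -> {feature value -> set of song ids}

    def insert(self, song_id, features):
        for name, value in features.items():
            self.tables.setdefault(name, {}).setdefault(value, set()).add(song_id)

    def query(self, features):
        votes = {}
        for name, value in features.items():
            for song_id in self.tables.get(name, {}).get(value, set()):
                votes[song_id] = votes.get(song_id, 0) + 1
        # Best match first: the song matching the most feature keys.
        return sorted(votes, key=votes.get, reverse=True)
```

Lookup cost is constant per key, which is the source of the lower time complexity observed versus the tree-walk of dual ternary indexing.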
Comparison of reconstruction algorithms for sparse-array detection photoacoustic tomography
Chaudhary, G.; Roumeliotis, M.; Carson, J. J. L.; Anastasio, M. A.
2010-02-01
A photoacoustic tomography (PAT) imaging system based on a sparse 2D array of detector elements and an iterative image reconstruction algorithm has been proposed, which opens the possibility for high frame-rate 3D PAT. The efficacy of this PAT implementation is highly influenced by the choice of the reconstruction algorithm. In recent years, a variety of new reconstruction algorithms have been proposed for medical image reconstruction that have been motivated by the emerging theory of compressed sensing. These algorithms have the potential to accurately reconstruct sparse objects from highly incomplete measurement data, and therefore may be highly suited for sparse array PAT. In this context, a sparse object is one that is described by a relatively small number of voxel elements, such as typically arises in blood vessel imaging. In this work, we investigate the use of a gradient projection-based iterative reconstruction algorithm for image reconstruction in sparse-array PAT. The algorithm seeks to minimize an ℓ1-norm-penalized least-squares cost function. By use of computer-simulation studies, we demonstrate that the gradient projection algorithm may further improve the efficacy of sparse-array PAT.
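The paper's algorithm is gradient projection; a closely related proximal-gradient method (ISTA) for the same ℓ1-penalized least-squares objective min ||Ax − b||²/2 + λ||x||₁ can be sketched in a few lines. This is an illustrative stand-in, not the authors' reconstruction code, and works on dense lists only for clarity:

```python
def soft_threshold(v, t):
    # Proximal operator of t*||.||_1: shrink each entry toward zero by t.
    return [max(abs(x) - t, 0.0) * (1 if x > 0 else -1) for x in v]

def ista(A, b, lam, step, iters=500):
    """Iterative soft-thresholding for min 0.5*||Ax - b||^2 + lam*||x||_1:
    gradient step on the quadratic term, then soft-thresholding."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]   # A^T r
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x
```

The shrinkage step is what drives small coefficients exactly to zero, producing the sparse (few-voxel) solutions that make this family of methods attractive for sparse-array PAT.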
COMPARISON PROCESS LONG EXECUTION BETWEEN PQ ALGORITHM AND NEW FUZZY LOGIC ALGORITHM FOR VOIP
Directory of Open Access Journals (Sweden)
Suardinata
2011-01-01
Full Text Available The transmission of voice over IP networks can generate network congestion due to weak supervision of incoming packet traffic, queuing and scheduling. This congestion negatively affects Quality of Service (QoS) metrics such as delay, packet drop and packet loss. Packet delay in turn affects other QoS aspects: unstable voice packet delivery, packet jitter, packet loss and echo. The Priority Queuing (PQ) algorithm is a popular technique used in VoIP networks to reduce delays. In operation, PQ uses sorting, searching and route-planning methods to classify packets on the router. This packet classification method can result in repetition of the process, and this recursive loop leads to starvation of the next queue. In this paper, the problem is addressed in three phases, namely a queuing phase, a classifying phase and a scheduling phase. The PQ algorithm technique is based on priority; it is applied to a fuzzy inference system to classify queued incoming packets (voice, video and text), which can reduce the recursive loop and starvation. After an incoming packet is classified, it is sent to the packet buffer. In addition, to meet the research objective, the improved PQ algorithm is compared against the existing PQ algorithm found in the literature, using metrics such as delay, packet drop and packet loss. This paper describes the difference in execution time between PQ and our algorithm, which simplifies the execution process that causes starvation in the PQ algorithm.
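Classic strict-priority queuing, the baseline the fuzzy classifier aims to improve on, can be sketched with a heap. Class names and priorities are illustrative; note how a steady voice stream would starve the lower queues, which is the problem described above:

```python
import heapq
import itertools

PRIORITY = {"voice": 0, "video": 1, "text": 2}   # lower number = served first

class PriorityQueueScheduler:
    """Strict priority queuing: voice always pre-empts video and text, so a
    continuous voice stream can starve the lower-priority queues."""
    def __init__(self):
        self._heap = []
        self._seq = itertools.count()   # FIFO tie-break within a class

    def enqueue(self, packet, traffic_class):
        heapq.heappush(self._heap,
                       (PRIORITY[traffic_class], next(self._seq), packet))

    def dequeue(self):
        return heapq.heappop(self._heap)[2]
```

A fuzzy classifier, as proposed in the paper, would replace the fixed `PRIORITY` mapping with a graded membership decision so lower classes still receive service.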
A comparison of three additive tree algorithms that rely on a least-squares loss criterion.
Smith, T J
1998-11-01
The performances of three additive tree algorithms that seek to minimize a least-squares loss criterion were compared. The algorithms included the penalty-function approach of De Soete (1983), the iterative projection strategy of Hubert & Arabie (1995) and the two-stage ADDTREE algorithm (Corter, 1982; Sattath & Tversky, 1977). Model fit, comparability of structure, processing time and metric recovery were assessed. Results indicated that the iterative projection strategy consistently located the best-fitting tree, but also displayed a wider range and larger number of local optima. PMID:9854946
Clustering performance comparison using K-means and expectation maximization algorithms
Jung, Yong Gyu; Kang, Min Soo; Heo, Jun
2014-01-01
Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results. PMID:26019610
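The K-means side of the comparison alternates hard assignment and centroid update; EM would instead compute soft (probabilistic) responsibilities. A minimal sketch, with a simple deterministic initialization for reproducibility (real implementations initialize randomly or with k-means++):

```python
def kmeans(points, k, iters=50):
    """Plain K-means with hard assignments; contrast with EM, which would
    weight each point by its probability of belonging to each cluster."""
    centroids = list(points[:k])            # simple deterministic init
    clusters = [[] for _ in range(k)]
    for _ in range(iters):
        # Assignment step: each point goes to its nearest centroid.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: sum((a - b) ** 2
                                            for a, b in zip(p, centroids[c])))
            clusters[nearest].append(p)
        # Update step: move each centroid to the mean of its cluster.
        centroids = [tuple(sum(xs) / len(xs) for xs in zip(*cl)) if cl
                     else centroids[i] for i, cl in enumerate(clusters)]
    return centroids, clusters
```

The logistic-regression-on-clusters step described in the abstract would then be fitted on the resulting cluster labels.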
Algorithm comparison and benchmarking using a parallel spectral transform shallow water model
Energy Technology Data Exchange (ETDEWEB)
Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)
1995-04-01
In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.
Verbeeck, Cis; Colak, Tufan; Watson, Fraser T; Delouille, Veronique; Mampaey, Benjamin; Qahwaji, Rami
2011-01-01
Since the Solar Dynamics Observatory (SDO) began recording ~ 1 TB of data per day, there has been an increased need to automatically extract features and events for further analysis. Here we compare the overall detection performance, correlations between extracted properties, and usability for feature tracking of four solar feature-detection algorithms: the Solar Monitor Active Region Tracker (SMART) detects active regions in line-of-sight magnetograms; the Automated Solar Activity Prediction code (ASAP) detects sunspots and pores in white-light continuum images; the Sunspot Tracking And Recognition Algorithm (STARA) detects sunspots in white-light continuum images; the Spatial Possibilistic Clustering Algorithm (SPoCA) automatically segments solar EUV images into active regions (AR), coronal holes (CH) and quiet Sun (QS). One month of data from the SOHO/MDI and SOHO/EIT instruments during 12 May - 23 June 2003 is analysed. The overall detection performance of each algorithm is benchmarked against National Oc...
Comparison of Algorithms for Prediction of Protein Structural Features from Evolutionary Data.
Bywater, Robert P
2016-01-01
Proteins have many functions and predicting these is still one of the major challenges in theoretical biophysics and bioinformatics. Foremost amongst these functions is the need to fold correctly thereby allowing the other genetically dictated tasks that the protein has to carry out to proceed efficiently. In this work, some earlier algorithms for predicting protein domain folds are revisited and they are compared with more recently developed methods. In dealing with intractable problems such as fold prediction, when different algorithms show convergence onto the same result there is every reason to take all algorithms into account such that a consensus result can be arrived at. In this work it is shown that the application of different algorithms in protein structure prediction leads to results that do not converge as such but rather they collude in a striking and useful way that has never been considered before. PMID:26963911
DEFF Research Database (Denmark)
Ivanova, N.; Pedersen, L. T.; Tonboe, R. T.;
2015-01-01
algorithm inter-comparison and evaluation experiment. The skills of 30 sea ice algorithms were evaluated systematically over low and high sea ice concentrations. Evaluation criteria included standard deviation relative to independent validation data, performance in the presence of thin ice and melt ponds...
Performance Comparison of Incremental K-means and Incremental DBSCAN Algorithms
Chakraborty, Sanjay; Nagwani, N. K.; Dey, Lopamudra
2014-01-01
Incremental K-means and DBSCAN are two very important and popular clustering techniques for today's large dynamic databases (data warehouses, the WWW and so on), where data change in random fashion. The performance of incremental K-means and incremental DBSCAN differs based on their time-analysis characteristics. Both algorithms are efficient compared to their existing counterparts with respect to time, cost and effort. In this paper, the performance evaluation of i...
A comparison of two extended Kalman filter algorithms for air-to-air passive ranging.
Ewing, Ward Hubert.
1983-01-01
Approved for public release; distribution is unlimited Two Extended Kalman Filter algorithms for air-to-air passive ranging are proposed, and examined by computer simulation. One algorithm uses only bearing observations while the other uses both bearing and elevation angles. Both are tested using a flat-Earth model and also using a spherical-Earth model where the benefit of a simple correction for the curvature-of-the-Earth effect on elevation angle is examined. The effects of varied an...
Meyer, Hanna; Kühnlein, Meike; Appelhans, Tim; Nauss, Thomas
2016-03-01
Machine learning (ML) algorithms have successfully been demonstrated to be valuable tools in satellite-based rainfall retrievals which show the practicability of using ML algorithms when faced with high dimensional and complex data. Moreover, recent developments in parallel computing with ML present new possibilities for training and prediction speed and therefore make their usage in real-time systems feasible. This study compares four ML algorithms - random forests (RF), neural networks (NNET), averaged neural networks (AVNNET) and support vector machines (SVM) - for rainfall area detection and rainfall rate assignment using MSG SEVIRI data over Germany. Satellite-based proxies for cloud top height, cloud top temperature, cloud phase and cloud water path serve as predictor variables. The results indicate an overestimation of rainfall area delineation regardless of the ML algorithm (averaged bias = 1.8) but a high probability of detection ranging from 81% (SVM) to 85% (NNET). On a 24-hour basis, the performance of the rainfall rate assignment yielded R2 values between 0.39 (SVM) and 0.44 (AVNNET). Though the differences in the algorithms' performance were rather small, NNET and AVNNET were identified as the most suitable algorithms. On average, they demonstrated the best performance in rainfall area delineation as well as in rainfall rate assignment. NNET's computational speed is an additional advantage in work with large datasets such as in remote sensing based rainfall retrievals. However, since no single algorithm performed considerably better than the others we conclude that further research in providing suitable predictors for rainfall is of greater necessity than an optimization through the choice of the ML algorithm.
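The rainfall-area scores reported above (bias, probability of detection) are standard categorical verification measures computed from a contingency table of predicted versus observed rain. A minimal sketch of those two scores (our function name; the study's exact verification setup is not reproduced):

```python
def detection_scores(predicted, observed):
    """Categorical verification scores for binary rain/no-rain fields:
    probability of detection (POD) and frequency bias.
    bias > 1 means the algorithm delineates too much rain area."""
    hits = sum(p and o for p, o in zip(predicted, observed))
    misses = sum((not p) and o for p, o in zip(predicted, observed))
    false_alarms = sum(p and (not o) for p, o in zip(predicted, observed))
    pod = hits / (hits + misses)
    bias = (hits + false_alarms) / (hits + misses)
    return pod, bias
```

An averaged bias of 1.8, as found in the study, would mean the delineated rain area is nearly twice the observed one regardless of which ML algorithm produced it.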
A Comparison of Two Open Source LiDAR Surface Classification Algorithms
Danny G Marks; Nancy F. Glenn; Timothy E. Link; Hudak, Andrew T.; Rupesh Shrestha; Michael J. Falkowski; Alistair M. S. Smith; Hongyu Huang; Wade T. Tinkham
2011-01-01
With the progression of LiDAR (Light Detection and Ranging) towards a mainstream resource management tool, it has become necessary to understand how best to process and analyze the data. While most ground surface identification algorithms remain proprietary and have high purchase costs; a few are openly available, free to use, and are supported by published results. Two of the latter are the multiscale curvature classification and the Boise Center Aerospace Laboratory LiDAR (BCAL) algorithms....
A Comparison and Selection on Basic Type of Searching Algorithm in Data Structure
Kamlesh Kumar Pandey; Narendra Pradhan
2014-01-01
Searching problems arise in many practical fields of computer science, such as database management systems, networks, data mining and artificial intelligence. Searching is a common fundamental operation that solves search problems in the different formats of these fields. This research paper presents the basic types of searching algorithms of data structures, namely linear search, binary search and hash search. We have tried to cover some technical aspects of these searching algorithms. This research is provi...
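The three basic searches compared in the paper can be sketched side by side; the trade-off is preprocessing (sorting or building a table) against per-query cost:

```python
def linear_search(items, target):
    # O(n): works on unsorted data, no preprocessing.
    for i, x in enumerate(items):
        if x == target:
            return i
    return -1

def binary_search(sorted_items, target):
    # O(log n): requires the input to be sorted.
    lo, hi = 0, len(sorted_items) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if sorted_items[mid] == target:
            return mid
        if sorted_items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

def hash_search(table, target):
    # O(1) on average: table maps key -> position, built in advance.
    return table.get(target, -1)
```

Linear search pays per query, binary search pays once to sort, and hash search pays once to build the table; which is best depends on how many queries amortize the preprocessing.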
Huh, Hee Jin; Chung, Jae-Woo; Park, Seong Yeon; Chae, Seok Lae
2015-01-01
Background Automated Mediace Treponema pallidum latex agglutination (TPLA) and Mediace rapid plasma reagin (RPR) assays are used by many laboratories for syphilis diagnosis. This study compared the results of the traditional syphilis screening algorithm and a reverse algorithm using automated Mediace RPR or Mediace TPLA as first-line screening assays in subjects undergoing a health checkup. Methods Samples from 24,681 persons were included in this study. We routinely performed Mediace RPR and...
Comparison of Fractal Dimension Algorithms for the Computation of EEG Biomarkers for Dementia
Goh, Cindy; Hamadicharef, Brahim; Henderson, Geoff; Ifeachor, Emmanuel
2005-01-01
Analysis of the Fractal Dimension of the EEG appears to be a good approach for the computation of biomarkers for dementia. Several Fractal Dimension algorithms have been used in the EEG analysis of cognitive and sleep disorders. The aim of this paper is to find an accurate Fractal Dimension algorithm that can be applied to the EEG for computing reliable biomarkers, specifically, for the assessment of dementia. To achieve this, some of the common methods for estimating the Fractal Dimension of...
Performance Comparison Research of the FECG Signal Separation Based on the BSS Algorithm
Directory of Open Access Journals (Sweden)
Xinling Wen
2012-08-01
Full Text Available The fetal electrocardiogram (FECG) is a weak signal, monitored indirectly by placing electrodes on the maternal abdominal surface, that contains all forms of jamming signal. How to separate the FECG from the strong background interference therefore has important clinical value. Independent Component Analysis (ICA) is a blind source separation (BSS) technology newly developed in recent years. This study adopted the ICA method for FECG extraction and carried out blind signal separation using the FastICA algorithm and the natural gradient algorithm. The experimental results show that both algorithms can obtain good separation results. However, because the natural gradient algorithm can achieve online FECG separation and its separation performance is better than that of the FastICA algorithm, the natural gradient algorithm is the better choice for FECG separation. It will help in monitoring congenital heart disease, neonatal arrhythmia, intrauterine fetal retardation and other conditions, which has very important clinical application value.
Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.
2016-03-01
Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous owing to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity-modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.
International Nuclear Information System (INIS)
The dose accuracy calculated by a treatment planning system is directly related to the chosen algorithm. Nowadays, several dose calculation algorithms are commercially available; they differ in calculation time and accuracy, especially when individual tissue densities are taken into account. The aim of this study was to compare two different calculation algorithms from iPlan®, BrainLAB, in the treatment of pituitary gland tumors with intensity-modulated radiation therapy (IMRT). These tumors are located in a region of tissues with variable electronic density. The deviations from the plan with no heterogeneity correction were evaluated. For initial validation of the data entered into the planning system, an IMRT plan was simulated in an anthropomorphic phantom and the dose distribution was measured with a radiochromic film. Gamma analysis was performed on the film, comparing it with dose distributions calculated with the X-ray Voxel Monte Carlo (XVMC) algorithm and the pencil beam convolution (PBC) algorithm. Next, 33 patient plans, initially calculated with the PBC algorithm, were recalculated with the XVMC algorithm. The treatment-volume and organ-at-risk dose-volume histograms were compared. No relevant differences were found in dose-volume histograms between XVMC and PBC. However, differences were obtained when comparing each plan with the plan without heterogeneity correction. (author)
Directory of Open Access Journals (Sweden)
Natarajan Meghanathan
2013-05-01
Full Text Available The high-level contribution of this paper is an exhaustive simulation-based comparison of three categories (density-based, node-id-based and stability-based) of algorithms to determine connected dominating sets (CDS) for mobile ad hoc networks, evaluating their performance under two categories (random node mobility and grid-based vehicular ad hoc network) of mobility models. The CDS algorithms studied are the maximum density-based (MaxD-CDS), node ID-based (ID-CDS) and minimum velocity-based (MinV-CDS) algorithms, representing the density, node-id and stability categories respectively. The node mobility models used are the Random Waypoint model (representing random node mobility) and the City Section and Manhattan mobility models (representing grid-based vehicular ad hoc networks). The three CDS algorithms under the three mobility models are evaluated with respect to two critical performance metrics: the effective CDS lifetime (calculated taking into consideration the CDS connectivity and the absolute CDS lifetime) and the CDS node size. Simulations are conducted under a diverse set of conditions representing low, moderate and high network density, coupled with low, moderate and high node mobility scenarios. For each CDS, the paper identifies the mobility model that can be employed to simultaneously maximize the lifetime and minimize the node size with minimal tradeoff. For the two VANET mobility models, the impact of the grid block length on the CDS lifetime and node size is also evaluated.
Performance comparison of neural network training algorithms in modeling of bimodal drug delivery.
Ghaffari, A; Abdollahi, H; Khoshayand, M R; Bozchalooi, I Soltani; Dadgar, A; Rafiee-Tehrani, M
2006-12-11
The major aim of this study was to model the effect of two causal factors, i.e. coating weight gain and the amount of pectin-chitosan in the coating solution, on the in vitro release profile of theophylline for bimodal drug delivery. An artificial neural network (ANN), as a multilayer perceptron feedforward network, was used to develop a predictive model of the formulations. Five training algorithms belonging to three classes: gradient descent, quasi-Newton (Levenberg-Marquardt, LM) and genetic algorithm (GA), were used to train an ANN containing a single hidden layer of four nodes. The next objective of the study was to compare the predictive ability of these algorithms. The ANNs were trained with each algorithm using the available experimental data as the training set. The divergence of the RMSE between the output and target values of the test set was monitored and used as a criterion to stop training. Two versions of the gradient descent backpropagation algorithm, i.e. incremental backpropagation (IBP) and batch backpropagation (BBP), outperformed the others. No significant differences were found between the predictive abilities of IBP and BBP, although the convergence speed of BBP was three- to four-fold higher than that of IBP. Although both the gradient descent backpropagation and LM methodologies gave comparable results for the data modeling, training of ANNs with the genetic algorithm was erratic. The precision of the predictive ability was measured for each training algorithm, and their performances were in the order: IBP, BBP>LM>QP (quick propagation)>GA. According to the BBP-ANN implementation, an increase in coating levels and a decrease in the amount of pectin-chitosan generally retarded drug release. Moreover, the latter causal factor, namely the amount of pectin-chitosan, played a slightly more dominant role in determining the dissolution profiles. PMID:16959449
Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.
2016-06-01
The high resolution of modern satellite imagery brings with it a fundamental problem: the large amount of telemetry data that must be stored after the downlink operation. Moreover, the post-processing and image-enhancement steps applied after acquisition increase file sizes even further, making the data harder to store and more time-consuming to transmit from one source to another; hence, compressing both the raw data and the various levels of processed data is a necessity for archiving stations seeking to save space. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. To this end, well-known open-source programs supporting the related compression algorithms were applied to processed GeoTIFF images of Airbus Defence & Space's SPOT 6 & 7 satellites with 1.5 m GSD, acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms tested were Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate64, Prediction by Partial Matching (PPMd or PPM2), and the Burrows-Wheeler Transform (BWT), in order to observe the compression performance of these algorithms over sample datasets while ensuring lossless compression.
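A miniature version of this kind of lossless comparison can be run with the compressors in the Python standard library: zlib (Deflate), lzma (LZMA), and bz2 (a BWT-based coder). The sample bytes below are an illustrative, highly redundant stand-in for image bands, not SPOT imagery; real multispectral data compresses far less readily.

```python
import bz2
import lzma
import zlib

# Redundant stand-in data for demonstration purposes only.
data = b"band1:0101 band2:0110 band3:0111 " * 200

sizes = {}
for name, compress, decompress in [
    ("deflate", zlib.compress, zlib.decompress),
    ("lzma", lzma.compress, lzma.decompress),
    ("bwt(bzip2)", bz2.compress, bz2.decompress),
]:
    blob = compress(data)
    assert decompress(blob) == data  # losslessness check: exact round trip
    sizes[name] = len(blob)

# Compression ratio: original size / compressed size (higher is better).
ratios = {k: len(data) / v for k, v in sizes.items()}
```

The round-trip assertion is the defining property of lossless compression that the study relies on: the decompressed bytes must be bit-identical to the input.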
Montilla, I; Béchet, C; Le Louarn, M; Reyes, M; Tallon, M
2010-11-01
Extremely Large Telescopes (ELTs) are very challenging with respect to their adaptive optics (AO) requirements. Their diameters and the specifications required by the astronomical science for which they are being designed imply a huge increment in the number of degrees of freedom in the deformable mirrors. Faster algorithms are needed to implement the real-time reconstruction and control in AO at the required speed. We present the results of a study of the AO correction performance of three different algorithms applied to the case of a 42-m ELT: one considered as a reference, the matrix-vector multiply (MVM) algorithm; and two considered fast, the fractal iterative method (FrIM) and the Fourier transform reconstructor (FTR). The MVM and the FrIM both provide a maximum a posteriori estimation, while the FTR provides a least-squares one. The algorithms are tested on the European Southern Observatory (ESO) end-to-end simulator, OCTOPUS. The performance is compared using a natural guide star single-conjugate adaptive optics configuration. The results demonstrate that the methods have similar performance in a large variety of simulated conditions. However, with respect to system misregistrations, the fast algorithms demonstrate an interesting robustness. PMID:21045895
Comparison of optimization algorithms in intensity-modulated radiation therapy planning
Kendrick, Rachel
Intensity-modulated radiation therapy is used to better conform the radiation dose to the target while avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam and thereby create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. The algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The dose-volume histograms as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but its avoidance of the organ-at-risk was rivaled by other methods. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
Directory of Open Access Journals (Sweden)
C. Keim
2009-05-01
This paper presents a first statistical validation of tropospheric ozone products derived from measurements of the satellite instrument IASI. Since the end of 2006, IASI (Infrared Atmospheric Sounding Interferometer), aboard the polar orbiter Metop-A, has measured infrared spectra of the Earth's atmosphere in nadir geometry. This validation covers the northern mid-latitudes and the period from July 2007 to August 2008. The comparison of the ozone products with vertical ozone concentration profiles from balloon sondes leads to estimates of the systematic and random errors in the IASI ozone products. The intercomparison of the retrieval results from four different sources (including the EUMETSAT ozone products) shows systematic differences due to the methods and algorithms used. On average the tropospheric columns have a small bias of less than 2 Dobson Units (DU) when compared to the sonde-measured columns. The comparison of the still pre-operational EUMETSAT columns shows higher mean differences of about 5 DU.
Amooee, Golriz; Bagheri-Dehnavi, Malihe
2012-01-01
In the current competitive world, industrial companies seek to manufacture products of higher quality, which can be achieved by increasing reliability and maintainability and thus the availability of products. On the other hand, improvement of a product's lifecycle is necessary for achieving high reliability. Typically, maintenance activities aim to reduce failures of industrial machinery and minimize the consequences of such failures. Industrial companies therefore try to improve their efficiency by using different fault detection techniques. One strategy is to process and analyze previously generated data to predict future failures. The purpose of this paper is to detect wasted parts using different data mining algorithms and to compare the accuracy of these algorithms. A combination of thermal and physical characteristics was used, and the algorithms were implemented on Ahanpishegan's current data to estimate the availability of its produced parts. Keywords: Data Mining, Fault Detection, Availability, Prediction...
Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models
Directory of Open Access Journals (Sweden)
Hiroki Yoshioka
2011-07-01
The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. LMM-based algorithms produce results that can vary to a certain degree depending on the model assumptions. For example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could be used to assess the robustness of LMM-based algorithms under a two-endmember assumption. The factor was derived from the analytical relationship between FVC values determined according to several previously described algorithms. The factor depends on the target spectra, endmember spectra, and choice of the spectral vegetation index. Numerical simulations were conducted to demonstrate the dependence and the usefulness of the technique in terms of robustness against measurement noise.
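Under the two-endmember assumption described above, the LMM reduces to r = f·v + (1 − f)·s, and the least-squares FVC has a closed form. The sketch below shows this baseline unmixing step; the endmember and mixed spectra are invented four-band examples, not the study's data, and real algorithms differ in how they incorporate vegetation indices.

```python
def unmix_fvc(r, v, s):
    """Least-squares fraction of vegetation cover for the two-endmember
    linear mixture model r = f*v + (1-f)*s.
    r: measured reflectance; v: vegetation endmember; s: soil endmember."""
    num = sum((ri - si) * (vi - si) for ri, vi, si in zip(r, v, s))
    den = sum((vi - si) ** 2 for vi, si in zip(v, s))
    return num / den

veg = [0.05, 0.08, 0.45, 0.50]   # hypothetical vegetation endmember spectrum
soil = [0.20, 0.25, 0.30, 0.35]  # hypothetical soil endmember spectrum
f_true = 0.4
mixed = [f_true * v + (1 - f_true) * s for v, s in zip(veg, soil)]
f_est = unmix_fvc(mixed, veg, soil)
```

With noise-free input the true fraction is recovered exactly; the study's robustness factor concerns how this estimate degrades when noise is added to `mixed`.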
Vijay Alagappan, A.; Narasimha Rao, K. V.; Krishna Kumar, R.
2015-02-01
Tyre models are a prerequisite for any vehicle dynamics simulation. Tyre models range from the simplest mathematical models that consider only the cornering stiffness to complex sets of formulae. Among all the steady-state tyre models in use today, the Magic Formula tyre model is unique and the most popular. Though the Magic Formula tyre model is widely used, obtaining the model coefficients from either experimental or simulation data is not straightforward, owing to its nonlinear nature and the presence of a large number of coefficients. A common procedure for this extraction is least-squares minimisation, which requires considerable experience in choosing initial guesses. Various researchers have tried different algorithms, namely gradient and Newton-based methods, differential evolution, artificial neural networks, etc. The issues involved in all these algorithms include setting bounds or constraints, the sensitivity of the parameters, features of the input data such as the number of points and noise, the experimental procedure used (such as a slip angle sweep or the tyre measurement (TIME) procedure), etc. The extracted Magic Formula coefficients are affected by these variants. This paper highlights the issues commonly encountered in obtaining these coefficients with different algorithms, namely least-squares minimisation using trust region algorithms, Nelder-Mead simplex, pattern search, differential evolution, particle swarm optimisation, cuckoo search, etc. A key observation is that not all the algorithms give the same Magic Formula coefficients for a given dataset. The nature of the input data and the type of algorithm decide the set of Magic Formula tyre model coefficients.
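The simplified Magic Formula y = D·sin(C·arctan(Bx − E(Bx − arctan Bx))) illustrates why coefficient extraction is awkward: the model is nonlinear in B, C, D, E. The sketch below recovers three coefficients from synthetic data with a coarse grid search standing in for a proper least-squares solver; the coefficient values are invented, E is held fixed for brevity, and real fits use the full coefficient set with algorithms like those compared in the paper.

```python
import math

def magic_formula(x, B, C, D, E):
    """Simplified Magic Formula (force vs. slip), B,C,D,E = stiffness,
    shape, peak and curvature factors."""
    return D * math.sin(C * math.atan(B * x - E * (B * x - math.atan(B * x))))

# Synthetic "measured" data generated from known coefficients.
B0, C0, D0, E0 = 10.0, 1.5, 1.2, 0.97
xs = [i * 0.01 for i in range(-20, 21)]
ys = [magic_formula(x, B0, C0, D0, E0) for x in xs]

def sse(B, C, D):
    """Sum of squared errors against the synthetic data (E fixed at E0)."""
    return sum((magic_formula(x, B, C, D, E0) - y) ** 2 for x, y in zip(xs, ys))

best = min(
    ((B, C, D) for B in (8.0, 9.0, 10.0, 11.0)
               for C in (1.3, 1.4, 1.5, 1.6)
               for D in (1.0, 1.1, 1.2, 1.3)),
    key=lambda p: sse(*p),
)
```

With noiseless data and the true values on the grid, the search recovers them exactly; with noisy data and many more coefficients, different optimizers can settle on different coefficient sets, which is the paper's key observation.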
Recent Research and Comparison of QoS Routing Algorithms for MPLS Networks
Directory of Open Access Journals (Sweden)
Santosh Kulkarni
2012-03-01
MPLS enables service providers to meet the challenges brought about by explosive growth and provides the opportunity for differentiated services without sacrificing the existing infrastructure. MPLS is a highly scalable data-carrying mechanism that forwards packets to an outgoing interface based only on the label value. An MPLS network is capable of routing under specific constraints to support the desired QoS. In this paper we compare recent QoS routing algorithms for MPLS networks. We present simulation results that focus on the computational complexity of each algorithm and its performance under a wide range of workload, topology and system parameters.
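A common building block of the constrained routing described above is the delay-constrained least-cost path. A minimal label-setting sketch over (node, accumulated delay) states is shown below; the topology, costs and delays are invented, and production MPLS path computation typically runs CSPF over a traffic-engineering database rather than this toy search.

```python
import heapq

def dcl_cost(graph, src, dst, max_delay):
    """Least-cost path subject to a total-delay bound.
    graph: {u: [(v, cost, delay), ...]} with non-negative costs and delays.
    Returns the optimal cost, or None if no feasible path exists."""
    best = {}  # (node, accumulated delay) -> best cost seen for that label
    heap = [(0, 0, src)]
    while heap:
        cost, delay, u = heapq.heappop(heap)
        if u == dst:
            return cost  # first settled label at dst is cost-optimal
        if best.get((u, delay), float("inf")) < cost:
            continue  # stale label
        for v, c, d in graph.get(u, []):
            nd = delay + d
            if nd > max_delay:
                continue  # violates the QoS delay constraint
            key = (v, nd)
            if cost + c < best.get(key, float("inf")):
                best[key] = cost + c
                heapq.heappush(heap, (cost + c, nd, v))
    return None

net = {
    "A": [("B", 1, 5), ("C", 5, 2)],  # (neighbour, cost, delay)
    "B": [("C", 1, 5)],
    "C": [],
}
```

Tightening the delay bound forces the algorithm off the cheap two-hop path and onto the expensive low-delay link, which is exactly the cost/QoS trade-off the compared algorithms negotiate.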
International Nuclear Information System (INIS)
Multichannel pulse height measurements with a cylindrical 3He proportional counter, obtained at a reactor filter of natural iron, are used to investigate the properties of three algorithms for neutron spectrum unfolding. For a systematic application of uncertainty propagation, the covariance matrix of previously determined 3He response functions is evaluated. The calculated filter transmission function, together with a covariance matrix estimated from cross-section uncertainties of the filter material, is used as fluence pre-information. The results obtained from algorithms with and without pre-information differ in shape and uncertainties for single-group fluence values, but there is sufficient agreement when evaluating integrals over neutron energy intervals.
An Efficient Approach for Candidate Set Generation
Nawar Malhis; Arden Ruttan; Hazem H. Refai
2005-01-01
When Apriori was first introduced as an algorithm for discovering association rules in a database of market basket data, the generation of the candidate set of large itemsets was a bottleneck in Apriori's performance, in both space and computational requirements. At first, many unsuccessful attempts were made to improve the generation of the candidate set. Later, other algorithms that outperformed Apriori were developed that generate association rules without using a candidate set. They...
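The candidate-generation step this record refers to is Apriori's join-and-prune: frequent k-itemsets sharing a (k−1)-item prefix are joined into (k+1)-candidates, and any candidate with an infrequent k-subset is pruned. A minimal sketch (the example itemsets are invented):

```python
def apriori_gen(frequent_k):
    """Generate candidate (k+1)-itemsets from frequent k-itemsets via
    Apriori's join and prune steps. Input: iterable of k-item sets."""
    freq = {tuple(sorted(s)) for s in frequent_k}
    candidates = set()
    for a in freq:
        for b in freq:
            # Join: identical (k-1)-prefix, differing (ordered) last item.
            if a[:-1] == b[:-1] and a[-1] < b[-1]:
                cand = a + (b[-1],)
                # Prune: every k-subset of the candidate must be frequent.
                if all(cand[:i] + cand[i + 1:] in freq
                       for i in range(len(cand))):
                    candidates.add(frozenset(cand))
    return candidates

frequent_2 = [{"A", "B"}, {"A", "C"}, {"B", "C"}, {"B", "D"}]
c3 = apriori_gen(frequent_2)
```

Here {B,C} joins with {B,D} to propose {B,C,D}, but the subset {C,D} is not frequent, so the prune step discards it; only {A,B,C} survives. This pruning is what the downward-closure property makes safe, and its cost is the bottleneck the abstract describes.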
A Comparison of the Machine Learning Algorithm for Evaporation Duct Estimation
Yang, C.
2013-01-01
In this research, a comparison of the relevance vector machine (RVM), least squares support vector machine (LSSVM) and radial basis function neural network (RBFNN) for evaporation duct estimation is presented. The parabolic equation model is adopted as the forward propagation model and is used to establish the training database between the radar sea clutter power and the evaporation duct height. The comparison of the RVM, LSSVM and RBFNN for evaporation duct estimation is investig...
DEFF Research Database (Denmark)
Rahimi, Maryam; Nielsen, Jesper Ødum; Pedersen, Troels; Pedersen, Gert Frølund
2014-01-01
A comparison of the data rates achieved by two well-known algorithms, using simulated and real measured data, is presented. The algorithms maximise the data rate in a cooperative base station (BS) multiple-input-single-output scenario. The weighted sum-minimum mean square error algorithm can be used in multiple-input-multiple-output scenarios, but it has lower performance than the virtual signal-to-interference-plus-noise ratio algorithm in theory and practice. A real measurement environment consisting of two BSs and two users has been studied to evaluate the simulation results.
Pick-N Multiple Choice-Exams: A Comparison of Scoring Algorithms
Bauer, Daniel; Holzer, Matthias; Kopp, Veronika; Fischer, Martin R.
2011-01-01
To compare different scoring algorithms for Pick-N multiple correct answer multiple-choice (MC) exams regarding test reliability, student performance, total item discrimination and item difficulty. Data from six 3rd year medical students' end of term exams in internal medicine from 2005 to 2008 at Munich University were analysed (1,255 students,…
Institute of Scientific and Technical Information of China (English)
Haixing Liu; Jing Lu; Ming Zhao; Yixing Yuan
2016-01-01
In order to compare two advanced multi-objective evolutionary algorithms, a multi-objective water distribution problem is formulated in this paper. Multi-objective optimization has received increasing attention in water distribution system design. On the one hand, the cost of a water distribution system, including capital, operational and maintenance costs, is always a primary concern of utilities; on the other hand, improving the performance of water distribution systems is of equal importance, and often conflicts with the cost goal. Many performance metrics for water networks have been developed in recent years, including total or maximum pressure deficit, resilience, inequity, probabilistic robustness, and risk measures. In this paper, a new resilience metric based on an energy analysis of water distribution systems is proposed. The two optimization objectives are capital cost and the new resilience index. A heuristic algorithm, speed-constrained multi-objective particle swarm optimization (SMPSO), extended from the multi-objective particle swarm algorithm, is introduced and compared with another state-of-the-art heuristic algorithm, NSGA-II. The solutions are evaluated by two metrics, namely spread and hypervolume. To illustrate the capability of SMPSO to efficiently identify good designs, two benchmark problems (the two-loop network and the Hanoi network) are employed. From several aspects the results demonstrate that SMPSO is a competitive and promising tool for tackling the optimization of complex systems.
Delimata, Paweł
2010-01-01
We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. Rules of the first kind, called inhibitory rules, block only one decision value (i.e., their right-hand sides contain all but one of the possible decision values). In contrast, a rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used to improve the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules; this information is then used by a decision-making procedure. The reported experimental results show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that, by combining rule-based classifiers built from minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.
Movia, A.; Beinat, A.; Crosilla, F.
2015-04-01
The recognition of vegetation by the analysis of very high resolution (VHR) aerial images provides meaningful information about environmental features; nevertheless, VHR images frequently contain shadows that generate significant problems for the classification of the image components and for the extraction of the needed information. The aim of this research is to classify, from VHR aerial images, vegetation involved in the balance of the environmental biochemical cycle, and to discriminate it from urban and agricultural features. Three classification algorithms were tested in order to better recognize vegetation and compared with the NDVI index; unfortunately, all these methods are affected by the presence of shadows on the images. The literature presents several algorithms to detect and remove shadows in a scene, most of them based on RGB-to-HSI transformations. In this work some of them have been implemented and compared with one based on the RGB bands. Subsequently, in order to remove shadows and restore brightness in the images, some innovative algorithms based on Procrustes theory have been implemented and applied. Among these, we evaluate the capability of the so-called "not-centered oblique Procrustes" and "anisotropic Procrustes" methods to efficiently restore brightness with respect to a linear correlation correction based on the Cholesky decomposition. Some experimental results obtained by different classification methods after shadow removal carried out with the innovative algorithms are presented and discussed.
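As a toy illustration of the intensity side of the RGB-to-HSI shadow detectors mentioned above, the sketch below computes the HSI intensity component and thresholds it. The threshold and pixel values are invented assumptions; real detectors also exploit hue and saturation, and the restoration step (Procrustes-based in this paper) is a separate problem.

```python
def hsi_intensity(r, g, b):
    """Intensity component of the RGB-to-HSI transform (channels in [0, 1])."""
    return (r + g + b) / 3.0

def shadow_mask(pixels, i_thresh=0.25):
    """Flag pixels whose HSI intensity falls below an (assumed) threshold."""
    return [hsi_intensity(*p) < i_thresh for p in pixels]

# Invented pixels: sunlit grass, sunlit roof, shadowed ground.
pixels = [(0.20, 0.55, 0.15), (0.70, 0.65, 0.60), (0.05, 0.06, 0.10)]
mask = shadow_mask(pixels)
```

A fixed global threshold is the weakest possible detector; the algorithms compared in the paper exist precisely because intensity alone confuses dark vegetation with shadow.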
A comparison of two open source LiDAR surface classification algorithms
With the progression of LiDAR (Light Detection and Ranging) towards a mainstream resource management tool, it has become necessary to understand how best to process and analyze the data. While most ground surface identification algorithms remain proprietary and have high purchase costs, a few are op...
Pande, Saket; Sharma, Ashish
2014-05-01
This study is motivated by the need to robustly specify, identify, and forecast runoff generation processes for hydroelectricity production. This requires, at a minimum, the identification of significant predictors of runoff generation and of the influence of each such predictor on the runoff response. To this end, we compare two non-parametric algorithms for predictor subset selection. One is based on information theory and assesses predictor significance (and hence selection) using the Partial Information (PI) rationale of Sharma and Mehrotra (2014). The other is based on a frequentist approach that uses the bounds-on-probability-of-error concept of Pande (2005), assesses all possible predictor subsets on the fly, and converges to a predictor subset in a computationally efficient manner. Both algorithms approximate the underlying system by locally constant functions and select predictor subsets corresponding to these functions. The performance of the two algorithms is compared on a set of synthetic case studies as well as a real-world case study of inflow forecasting. References: Sharma, A., and R. Mehrotra (2014), An information theoretic alternative to model a natural system using observational information alone, Water Resources Research, 49, doi:10.1002/2013WR013845. Pande, S. (2005), Generalized local learning in water resource management, PhD dissertation, Utah State University, UT-USA, 148p.
A comparison of reconstruction algorithms for C-arm mammography tomosynthesis
International Nuclear Information System (INIS)
Digital tomosynthesis is an imaging technique that produces a tomographic image from a series of angular digital images, in a manner similar to conventional focal plane tomography. Unlike film focal plane tomography, the acquisition of the data in a C-arm geometry causes the image receptor to be positioned at various angles to the reconstruction tomogram. The digital nature of the data allows input images to be combined into the desired plane, with the flexibility of generating tomograms of many separate planes from a single set of input data. Angular datasets were obtained of a low contrast detectability (LCD) phantom and a cadaver breast utilizing a Lorad stereotactic biopsy unit with a coupled source and digital detector in a C-arm configuration. Datasets of 9 and 41 low-dose projections were collected over a 30 deg. angular range. Tomographic images were reconstructed using a Backprojection (BP) algorithm, an Iterative Subtraction (IS) algorithm that allows the partial subtraction of out-of-focus planes, and an Algebraic Reconstruction (AR) algorithm. These were compared with single-view digital radiographs. The methods' effectiveness at enhancing the visibility of an obscured LCD phantom was quantified in terms of the Signal to Noise Ratio (SNR) and Signal to Background Ratio (SBR), normalized to the metric value for the single projection image. The methods' effectiveness at removing ghosting artifacts in a cadaver breast was quantified in terms of the Artifact Spread Function (ASF). The technology proved effective at partially removing out-of-focus structures and enhancing SNR and SBR. The normalized SNR was highest at 4.85 for the obscured LCD phantom, using nine projections and the IS algorithm. The normalized SBR was highest at 23.2 for the obscured LCD phantom, using 41 projections and the AR algorithm. The highest normalized metric values occurred with the obscured phantom. This supports the assertion that the greatest value of tomosynthesis is in imaging
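The backprojection (BP) step can be illustrated with a 1-D shift-and-add toy model: point objects at different heights shift across the projections at different rates, so summing projections shifted for a chosen plane brings only that plane into focus while spreading the others out. The linear-shift geometry and all numbers below are invented simplifications, not the C-arm geometry of the study.

```python
N = 64  # detector pixels

def project(objects, t):
    """Ideal projection for source shift t: an object at lateral position x
    and height h lands at detector pixel x + t*h."""
    p = [0.0] * N
    for x, h, amp in objects:
        j = int(round(x + t * h))
        if 0 <= j < N:
            p[j] += amp
    return p

def backproject(projections, shifts, h):
    """Shift-and-add reconstruction of the plane at height h: sample each
    projection where an in-plane point at height h would have landed."""
    rec = [0.0] * N
    for i in range(N):
        for p, t in zip(projections, shifts):
            j = int(round(i + t * h))
            if 0 <= j < N:
                rec[i] += p[j]
    return rec

objects = [(20, 3, 1.0), (40, 6, 1.0)]  # (x, height, amplitude)
shifts = [-2, -1, 0, 1, 2]
projs = [project(objects, t) for t in shifts]
plane3 = backproject(projs, shifts, 3)  # focus the h=3 object
```

In the focused plane the h=3 object's five contributions align into one sharp peak, while the h=6 object smears into low-level ghosting, which is exactly the artifact the IS and AR algorithms in the study try to suppress further.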
Comparison between Acuros XB and Brainlab Monte Carlo algorithms for photon dose calculation
Energy Technology Data Exchange (ETDEWEB)
Misslbeck, M.; Kneschaurek, P. [Klinikum rechts der Isar der Technischen Univ. Muenchen (Germany). Klinik und Poliklinik fuer Strahlentherapie und Radiologische Onkologie
2012-07-15
Purpose: The Acuros® XB dose calculation algorithm by Varian and the Monte Carlo algorithm XVMC by Brainlab were compared with each other and with the well-established AAA algorithm, also from Varian. Methods: First, square fields were applied to two different artificial phantoms: (1) a 'slab phantom' with a 3 cm water layer, followed by a 2 cm bone layer, a 7 cm lung layer, and another 18 cm water layer and (2) a 'lung phantom' with water surrounding an eccentric lung block. For the slab phantom, depth-dose curves along the central beam axis were compared. The lung phantom was used to compare profiles at depths of 6 and 14 cm. As clinical cases, the CTs of three different patients were used. The original AAA plans were recalculated with all three algorithms using open fields. Results: There were only minor differences between Acuros and XVMC in all artificial-phantom depth doses and profiles; however, this was different for AAA, which showed deviations of up to 13% in depth dose and a few percent in the lung-phantom profiles. These deviations did not translate into the clinical cases, where the dose-volume histograms of all algorithms were close to each other for open fields. Conclusion: Only within artificial phantoms with clearly separated layers of simulated tissue does AAA show differences at layer boundaries compared to XVMC or Acuros. In real patient CTs, these differences were not observed in the dose-volume histogram of the planning target volume. (orig.)
Directory of Open Access Journals (Sweden)
Yong Tian
2014-12-01
State of charge (SOC) estimation is essential to battery management systems in electric vehicles (EVs) to ensure the safe operation of batteries and to provide drivers with the remaining range. A number of estimation algorithms have been developed to obtain an accurate SOC value, because the SOC cannot be directly measured with sensors and is closely related to various factors such as ambient temperature, current rate and battery aging. In this paper, two model-based adaptive algorithms, the adaptive unscented Kalman filter (AUKF) and the adaptive slide mode observer (ASMO), are applied and compared in terms of convergence behavior, tracking accuracy, computational cost and estimation robustness against parameter uncertainties of the battery model in SOC estimation. Two typical driving cycles, the Dynamic Stress Test (DST) and the New European Driving Cycle (NEDC), are applied to evaluate the performance of the two algorithms. Comparison results show that the AUKF has merits in convergence ability and tracking accuracy with an accurate battery model, while the ASMO has lower computational cost and better estimation robustness against parameter uncertainties of the battery model.
Energy Technology Data Exchange (ETDEWEB)
Wu, Vincent W.C., E-mail: htvinwu@polyu.edu.hk [Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong SAR (Hong Kong); Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y. [Department of Health Technology and Informatics, Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong SAR (Hong Kong)
2013-07-01
Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) in the Eclipse treatment planning system and multigrid superposition (MGS) in the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan, using the same machine and treatment prescription, was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) the soft tissue-bone boundary (Soft/Bone), (5) the soft tissue-air boundary (Soft/Air), and (6) the bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers (p<0.001). With regard to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS ranged from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6%±2.1) were significantly lower than those of AAA (3.7%±2.5) over all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of MGS (p<0.001). Both the AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
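The MAPE used as the comparison metric above can be stated compactly: the mean of |(D_test − D_ref)/D_ref| over the reference points, expressed in percent. A sketch with invented dose values (not the study's measurements):

```python
def mape(test_doses, ref_doses):
    """Mean absolute percentage error of test doses against reference
    (e.g., Monte Carlo) doses at the same points, in percent."""
    errors = [abs((t - r) / r) for t, r in zip(test_doses, ref_doses)]
    return 100.0 * sum(errors) / len(errors)

# Hypothetical point doses: algorithm under test vs. Monte Carlo reference.
deviation = mape([102.0, 98.0, 100.0], [100.0, 100.0, 100.0])
```

Averaging the absolute deviations means over- and under-dosing cannot cancel, which is why the study reports MAPE rather than a signed mean difference.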
Directory of Open Access Journals (Sweden)
Howard Williams
2014-05-01
Stochastic diffusion search (SDS) is a multi-agent global optimisation technique based on the behaviour of ants, rooted in the partial evaluation of an objective function and direct communication between agents. Standard SDS, the fundamental algorithm at work in all SDS processes, is presented here. Parameter estimation is the task of suitably fitting a model to given data; some form of parameter estimation is a key element of many computer vision processes. Here, the task of hyperplane estimation in many dimensions is investigated. Following RANSAC (random sample consensus), a widely used optimisation technique and a standard approach for many parameter estimation problems, increasingly sophisticated data-driven forms of SDS are developed. The performance of these SDS algorithms and of RANSAC is analysed and compared for a hyperplane estimation task. SDS is shown to perform similarly to RANSAC, with potential for tuning to particular search problems for improved results.
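Standard SDS alternates a test phase, in which each agent partially evaluates its hypothesis, with a diffusion phase, in which inactive agents poll a randomly chosen agent and copy its hypothesis if it is active. The sketch below applies this loop to the classic toy task of locating a model string in text; the task, text and parameters are invented illustrations, not the paper's hyperplane-estimation setting.

```python
import random
from collections import Counter

def sds_search(text, model, n_agents=30, iters=300, seed=1):
    """Standard SDS where a hypothesis is a candidate offset of `model`
    in `text`, and partial evaluation checks one random character."""
    rng = random.Random(seed)
    n_hyp = len(text) - len(model) + 1
    hyp = [rng.randrange(n_hyp) for _ in range(n_agents)]
    active = [False] * n_agents
    for _ in range(iters):
        # Test phase: each agent checks a single randomly chosen character.
        for a in range(n_agents):
            j = rng.randrange(len(model))
            active[a] = text[hyp[a] + j] == model[j]
        # Diffusion phase: inactive agents poll a random agent; copy its
        # hypothesis if active, otherwise resample a new random one.
        for a in range(n_agents):
            if not active[a]:
                b = rng.randrange(n_agents)
                hyp[a] = hyp[b] if active[b] else rng.randrange(n_hyp)
    return Counter(hyp).most_common(1)[0][0]

text = "the quick brown fox jumps over the lazy dog"
offset = sds_search(text, "lazy")
```

Agents at the true offset never fail a test, so they form a stable cluster that recruits the rest of the population; partial matches elsewhere survive only briefly. This absorbing-cluster behaviour is the mechanism the paper's data-driven variants refine.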
A comparison of thermal algorithms of fuel rod performance code systems
International Nuclear Information System (INIS)
The goal of fuel rod performance analysis is to assess the robustness of a fuel rod and its cladding material. Computer simulation of fuel rod performance has become an important part of designing and evaluating new nuclear fuels and claddings. To construct a computing code system for fuel rod performance, several algorithms of existing fuel rod performance code systems are compared and summarized as preliminary work. Among these code systems, FRAPCON and FEMAXI for LWRs, ELESTRES for CANDU reactors, and LIFE for fast reactors are reviewed. The thermal algorithms of the above codes, including their methodologies and subroutines, are investigated. This work will be utilized to construct a computing code system for dry-process fuel rod performance.
Energy Technology Data Exchange (ETDEWEB)
Fan, Chengguang [College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha 410073, PR China and Department of Mechanical Engineering, University of Bristol, Queen' s Building, University Walk, Bristol BS8 1TR (United Kingdom); Drinkwater, Bruce W. [Department of Mechanical Engineering, University of Bristol, Queen' s Building, University Walk, Bristol BS8 1TR (United Kingdom)
2014-02-18
In this paper the performance of the total focusing method is compared with the widely used time-reversal MUSIC super-resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time-domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low-noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.
Byrne, Dallan; O'Halloran, Martin; Jones, Edward; Glavin, Martin
2009-01-01
Ultrawideband (UWB) radar is one of the most promising alternatives to X-ray mammography as an imaging modality for the early detection of breast cancer. Several beamforming algorithms have been developed which exploit the dielectric contrast between normal and cancerous tissue at microwave frequencies in order to detect tumors. Dielectric heterogeneity within the breast greatly affects the ability of a beamformer to detect very small tumors; therefore, the design of an effective beamformer for this application represents a significant challenge. This paper analyzes and compares three data-independent beamforming algorithms, testing each system on an anatomically correct, MRI-derived breast model which incorporates recently published data on dielectric properties. PMID:19964043
Salim, Umer
2010-01-01
In multi-user communication from one base station (BS) to multiple users, the problem of minimizing the transmit power to achieve some target guaranteed performance (rates) at users has been well investigated in the literature. Similarly various user selection algorithms have been proposed and analyzed when the BS has to transmit to a subset of the users in the system, mostly for the objective of the sum rate maximization. We study the joint problem of minimizing the transmit power at the BS to achieve specific signal-to-interference-and-noise ratio (SINR) targets at users in conjunction with user scheduling. The general analytical results for the average transmit power required to meet guaranteed performance at the users' side are difficult to obtain even without user selection due to joint optimization required over beamforming vectors and power allocation scalars. We study the transmit power minimization problem with various user selection algorithms, namely semi-orthogonal user selection (SUS), norm-based...
Comparison Study on the Battery SoC Estimation with EKF and UKF Algorithms
Directory of Open Access Journals (Sweden)
Hongwen He
2013-09-01
The battery state of charge (SoC), whose estimation is one of the basic functions of a battery management system (BMS), is a vital input parameter in the energy management and power distribution control of electric vehicles (EVs). In this paper, two methods, based on an extended Kalman filter (EKF) and an unscented Kalman filter (UKF), respectively, are proposed to estimate the SoC of a lithium-ion battery used in EVs. The lithium-ion battery is modeled with the Thevenin model, and the model parameters are identified based on experimental data and validated with the Beijing Driving Cycle. The state-space equations used for SoC estimation are then established. The SoC estimation results with the EKF and UKF are compared in terms of accuracy and convergence. It is concluded that both algorithms perform well, while the UKF algorithm is better, with faster convergence and higher accuracy.
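The first-order Thevenin model that both filters propagate has a simple exact discretization; a minimal sketch of one time step (state layout and parameter values are illustrative assumptions, not the paper's identified values):

```python
import math

def thevenin_step(soc, u_p, current, dt, capacity_as, r_p, c_p):
    # One discrete step of a first-order Thevenin battery model.
    # States: SoC by coulomb counting (capacity in ampere-seconds), and
    # u_p, the polarization voltage across the RC branch (r_p, c_p),
    # relaxed exactly over the interval dt.
    soc_next = soc - current * dt / capacity_as
    decay = math.exp(-dt / (r_p * c_p))
    u_p_next = u_p * decay + r_p * (1.0 - decay) * current
    return soc_next, u_p_next
```

An EKF or UKF would use this pair as the process model, with the measured terminal voltage (open-circuit voltage minus u_p and the ohmic drop) as the observation.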
Betremieux, Yan
2015-01-01
Atmospheric refraction affects exoplanet transit, lunar eclipse, and stellar occultation observations to various degrees. Exoplanet retrieval algorithms often use analytical expressions for the column abundance along a ray traversing the atmosphere as well as for the deflection of that ray, which are first-order approximations valid for low densities in a spherically symmetric, homogeneous, isothermal atmosphere. We derive new analytical formulae for both of these quantities, which are valid for higher densities, and use them to refine and validate a new ray tracing algorithm which can be used for arbitrary atmospheric temperature-pressure profiles. We illustrate with simple isothermal atmospheric profiles the consequences of our model for different planets: temperate Earth-like and Jovian-like planets, as well as HD189733b and GJ1214b. We find that, for both hot exoplanets, our treatment of refraction does not make much of a difference to pressures as high as 10 atmospheres, but that it is important to ...
Performance Comparison of Three Parallel Implementations of a Schwarz Splitting Algorithm
Gamble, Jim; Ribbens, Calvin J.
1989-01-01
We describe three implementations of a Schwarz splitting algorithm for the numerical solution of two-dimensional, second-order, linear elliptic partial differential equations. One implementation makes use of the SCHEDULE package. A second uses the language extensions available in SEQUENT Fortran for creating and controlling parallel processes. The third implementation is a hybrid of the first two -- using explicit (non-portable) calls to create and control parallel processes, but using dat...
A Comparison of Selected Modifications of the Particle Swarm Optimization Algorithm
2014-01-01
We compare 27 modifications of the original particle swarm optimization (PSO) algorithm. The analysis evaluated nine basic PSO types, which differ according to the swarm evolution as controlled by various inertia weights and constriction factor. Each of the basic PSO modifications was analyzed using three different distributed strategies. In the first strategy, the entire swarm population is considered as one unit (OC-PSO), the second strategy periodically partitions the population into equal...
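The update rule shared by all these modifications is compact; a minimal global-best PSO with a constant inertia weight is sketched below (the coefficients, bounds, and test function are illustrative, not the study's settings):

```python
import random

def pso_minimize(f, dim, n=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=1):
    # Minimal global-best PSO sketch with constant inertia weight w and
    # cognitive/social coefficients c1, c2.
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [x[:] for x in xs]             # personal best positions
    pval = [f(x) for x in xs]
    g = min(range(n), key=lambda i: pval[i])
    gbest, gval = pbest[g][:], pval[g]     # global best
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[i][d] = (w * vs[i][d]
                            + c1 * r1 * (pbest[i][d] - xs[i][d])
                            + c2 * r2 * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval

# Illustrative run on the 2-D sphere function.
best, val = pso_minimize(lambda x: sum(t * t for t in x), dim=2)
```

The inertia-weight and constriction-factor variants compared in the paper differ mainly in how w (or an equivalent scaling of the whole velocity update) evolves over the iterations.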
Verbeeck, Cis; Higgins, Paul A.; Colak, Tufan; Watson, Fraser T.; Delouille, Veronique; Mampaey, Benjamin; Qahwaji, Rami
2011-01-01
Since the Solar Dynamics Observatory (SDO) began recording ~ 1 TB of data per day, there has been an increased need to automatically extract features and events for further analysis. Here we compare the overall detection performance, correlations between extracted properties, and usability for feature tracking of four solar feature-detection algorithms: the Solar Monitor Active Region Tracker (SMART) detects active regions in line-of-sight magnetograms; the Automated Solar Activity Prediction...
Sheta, B.; Elhabiby, M.; Sheimy, N.
2012-01-01
A robust scale and rotation invariant image matching algorithm is vital for the Visual Based Navigation (VBN) of aerial vehicles, where matches between an existing geo-referenced database images and the real-time captured images are used to georeference (i.e. six transformation parameters - three rotation and three translation) the real-time captured image from the UAV through the collinearity equations. The georeferencing information is then used in aiding the INS integration Kalman filter a...
A comparison of waveform processing algorithms for single-wavelength LiDAR bathymetry
Wang, Chisheng; Li, Qingquan; Liu, Yanxiong; Wu, Guofeng; Liu, Peng; Ding, Xiaoli
2015-03-01
Due to their low cost and lightweight units, single-wavelength LiDAR bathymetric systems are an ideal option for shallow-water bathymetric retrieval. In this paper, we test six algorithms for single-wavelength bathymetric waveform processing: peak detection (PD), the average square difference function (ASDF), Gaussian decomposition (GD), quadrilateral fitting (QF), Richardson-Lucy deconvolution (RLD), and Wiener filter deconvolution (WD). To date, most of these algorithms have only been applied to topographic LiDAR waveforms captured over land. A simulated dataset and an Optech Aquarius dataset were used to assess the algorithms, with the focus being on their capability of extracting the depth and the bottom response. The influences of a number of water and equipment parameters were also investigated by the use of a Monte Carlo method. The results showed that the RLD method had a superior performance in terms of a high detection rate and low errors in the retrieved depth and magnitude. The attenuation coefficient, noise level, water depth, and bottom reflectance had significant influences on the measurement error of the retrieved depth, while the effects of scan angle and water surface roughness were not so obvious.
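Of the six methods, the Richardson-Lucy scheme is the most compact to illustrate; a minimal 1-D sketch (the flat initial estimate and zero-padded edges are assumptions of this sketch, not details from the paper):

```python
def convolve(signal, kernel):
    # Same-size convolution with zero padding.
    half = len(kernel) // 2
    out = []
    for i in range(len(signal)):
        acc = 0.0
        for j, k in enumerate(kernel):
            idx = i + j - half
            if 0 <= idx < len(signal):
                acc += signal[idx] * k
        out.append(acc)
    return out

def richardson_lucy(observed, psf, iters=50):
    # Iterative multiplicative update: est <- est * (flipped psf convolved
    # with observed/blurred). Sharpens blurred returns in the waveform.
    est = [1.0] * len(observed)
    for _ in range(iters):
        blurred = convolve(est, psf)
        ratio = [o / b if b > 1e-12 else 0.0 for o, b in zip(observed, blurred)]
        correction = convolve(ratio, psf[::-1])
        est = [e * c for e, c in zip(est, correction)]
    return est
```

Applied to a bathymetric waveform, the deconvolved estimate concentrates the smeared surface and bottom returns back into narrow peaks whose positions give the depth.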
Comparison of the period detection algorithms based on Pi of the Sky data
Opiela, Rafał; Mankiewicz, Lech; Żarnecki, Aleksander Filip
2015-09-01
The Pi of the Sky is a system of five autonomous detectors designed for continuous observation of the night sky, mainly looking for optical flashes of astrophysical origin, in particular for Gamma Ray Bursts (GRBs). In the Pi of the Sky project we also study many kinds of variable stars (with periods in the range 0.5d - 1000.0d) and take part in multiwavelength observing campaigns, such as the DG CVn outburst observations. Our wide-field-of-view robotic telescopes are located at the San Pedro de Atacama Observatory, Chile, and the INTA El Arenosillo Observatory, Spain, and were designed for monitoring a large fraction of the sky in the 12m-13m brightness range with a time resolution of the order of 1-10 seconds. In the analysis of variable star observations, accurate determination of the variability parameters is very important, and many algorithms can be used for the variability analysis of the observed stars. In this article, using Monte Carlo analysis, we compare all the period-detection algorithms we use for the analysis of data of astronomical origin. Based on the tests performed, we show which algorithm gives the best period-detection quality and try to derive an approximate formula describing the period-detection error. We also give some examples of this calculation based on variable stars observed by our detectors. At the end of this article we show how removing bad measurements from the analysed light curve affects the accuracy of the period detection.
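A simple member of this family of period-detection algorithms folds the light curve on trial periods and scores the dispersion within phase bins; the sketch below illustrates the idea only and is not one of the article's specific algorithms:

```python
def phase_fold_dispersion(times, mags, period, n_bins=10):
    # Fold the light curve on a trial period and sum the within-bin
    # scatter of the magnitudes; the true period minimizes this score
    # because samples at the same phase then fall in the same bin.
    bins = [[] for _ in range(n_bins)]
    for t, m in zip(times, mags):
        phase = (t / period) % 1.0
        bins[int(phase * n_bins) % n_bins].append(m)
    dispersion = 0.0
    for b in bins:
        if len(b) > 1:
            mean = sum(b) / len(b)
            dispersion += sum((x - mean) ** 2 for x in b)
    return dispersion
```

Scanning `period` over a grid and taking the minimizer yields a basic period estimate.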
DEFF Research Database (Denmark)
Knöös, Tommy; Wieslander, Elinore; Cozzi, Luca;
2006-01-01
A study of the performance of five commercial radiotherapy treatment planning systems (TPSs) for common treatment sites regarding their ability to model heterogeneities and scattered photons has been performed. The comparison was based on CT information for prostate, head and neck, breast and lung...
Directory of Open Access Journals (Sweden)
Murat KUL
2014-07-01
The purpose of this study is to compare the multiple intelligence areas of candidates who participated in the special aptitude test of a School of Physical Education and Sports with those of the candidates who were eligible to register. A survey (scan) model was used in the research. The sample consisted of 536 volunteers (mean age = 21.15 ± 2.66) out of the 785 candidates who applied to the Bartin University School of Physical Education and Sports Special Ability Test for the 2013-2014 academic year. As data collection tools, a personal information form and the "Multiple Intelligences Inventory" developed by Özden (2003) for the identification of multiple intelligences were applied; the reliability coefficient was found to be .96. The SPSS data analysis program was used to evaluate the data, with frequency, mean and standard deviation as descriptive statistical techniques; taking into account the normal distribution of the data, the Independent Sample T-test was also used. The findings show a statistically significant difference in the "Bodily-Kinesthetic Intelligence" area of multiple intelligences: winning candidates scored higher than the candidates who did not win. Statistically significant results were also observed in the "Social-Interpersonal Intelligence" levels of candidates qualifying to register compared with those who did not qualify; winning candidates carried the dominant features in this area. As a result, in the "Verbal-Linguistic Intelligence", "Logical-Mathematical Intelligence", "Musical-Rhythmic Intelligence", "Bodily-Kinesthetic Intelligence" and "Social-Interpersonal Intelligence" areas of multiple intelligences, candidates who participated in Physical Education
A systematic benchmark method for analysis and comparison of IMRT treatment planning algorithms
International Nuclear Information System (INIS)
Tools and procedures for evaluating and comparing different intensity-modulated radiation therapy (IMRT) systems are presented. IMRT is increasingly in demand and there are numerous systems available commercially. These programs introduce software that differs significantly from conventional planning systems, and the options often seem overwhelmingly complex to dosimetrists and physicists at first. By creating geometric target volumes and critical normal tissues, the characteristics of the algorithms may be investigated and the influence of the different parameters explored. Overall optimization strategies of an algorithm may be characterized by treating a square target volume (TV) with 2 perpendicular beams, with and without heterogeneities. A half-donut (hemi-annulus) TV with a 'donut hole' (central cylinder) critical normal tissue (CNT) on a CT of a simulated quality assurance phantom is suggested as a good geometry for exploring the IMRT algorithm parameters. Using this geometry, an order of varying the parameters is suggested. The first step is to determine the effects of the number of stratifications of optimized intensity fluence on the resulting dose distribution, and to select a fixed number of stratifications for further studies. To characterize the dose distributions, a dose-homogeneity index (DHI) is defined as the ratio of the dose received by 90% of the volume to the minimum dose received by the "hottest" 10% of the volume. The next step is to explore the effects of priority and penalty on both the TV and the CNT. Then, with these parameters chosen and fixed, the effects of varying the number of beams can be examined. As well as evaluating the dose distributions (and DHI), the number of subfields and the number of monitor units required for different numbers of stratifications and beams can be evaluated
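The DHI defined in the text reduces to a ratio of two percentiles of the voxel dose distribution; a minimal sketch (the percentile indexing convention is an assumption of this sketch):

```python
def dose_homogeneity_index(doses):
    # DHI = (dose received by 90% of the volume, i.e. the 10th-percentile
    # dose) / (minimum dose received by the "hottest" 10% of the volume,
    # i.e. the 90th-percentile dose). Values near 1 mean a homogeneous dose.
    d = sorted(doses)
    n = len(d)
    d90 = d[int(0.1 * n)]           # 90% of voxels receive at least this
    hottest_min = d[int(0.9 * n)]   # lowest dose within the hottest 10%
    return d90 / hottest_min
```

Computed from a plan's dose-volume data, this single number lets the stratification and beam-number experiments above be ranked directly.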
Wong, Y M; Wong, Yin Mei; Wilkie, Joshua
2006-01-01
Since the introduction of the Black-Scholes model, stochastic processes have played an increasingly important role in mathematical finance. In many cases prices, volatility and other quantities can be modeled using stochastic ordinary differential equations. Available methods for solving such equations have until recently been markedly inferior to analogous methods for deterministic ordinary differential equations. Recently, a number of methods which employ variable stepsizes to control local error have been developed, and these appear to offer greatly improved speed and accuracy. Here we conduct a comparative study of the performance of these algorithms for problems taken from the mathematical finance literature.
DEFF Research Database (Denmark)
Fabricius, Anne; Watt, Dominic; Johnson, Daniel Ezra
2009-01-01
This paper evaluates a speaker-intrinsic vowel formant frequency normalization algorithm initially proposed in Watt & Fabricius (2002). We compare how well this routine, known as the S-centroid procedure, performs as a sociophonetic research tool in three ways: reducing variance in area ratios of ... from RP and Aberdeen English (northeast Scotland). We conclude that, for the data examined here, the S-centroid W&F procedure performs at least as well as the two most recognized speaker-intrinsic, vowel-extrinsic, formant-intrinsic normalization methods, Lobanov's (1971) z-score procedure and Nearey...
Comparison of the BP training algorithm and LVQ neural networks for e, μ, π identification
International Nuclear Information System (INIS)
Two different kinds of neural networks, a feed-forward multi-layer model with the back-propagation training algorithm (BP) and Kohonen's learning vector quantization networks (LVQ), are adopted for the identification of e, μ, and π particles in the Beijing Spectrometer (BES) experiment. The data samples for training and testing consist of μ from cosmic rays, and e and π from experimental data by strict selection. Although their momentum spectra are non-uniform, the identification efficiencies given by BP are quite uniform versus momentum, and LVQ is a little worse. At least in this application, BP is shown to be more powerful in pattern recognition than LVQ. (orig.)
Shabbir, Faisal; Omenzetter, Piotr
2014-04-01
Much effort is devoted nowadays to deriving accurate finite element (FE) models to be used for structural health monitoring, damage detection and assessment. However, the formation of an FE model representative of the original structure is a difficult task. Model updating is a branch of optimization which calibrates the FE model by comparing the modal properties of the actual structure with those of the FE predictions. As the number of experimental measurements is usually much smaller than the number of uncertain parameters, and, consequently, not all uncertain parameters are selected for model updating, different local minima may exist in the solution space. Experimental noise further exacerbates the problem. The attainment of a global solution in a multi-dimensional search space is a challenging problem. Global optimization algorithms (GOAs) received interest in the previous decade for solving this problem, but no GOA can ensure the detection of the global minimum either. To counter this problem, a combination of a GOA with the sequential niche technique (SNT), which systematically searches the whole solution space, is proposed in this research. A dynamically tested full-scale pedestrian bridge is taken as a case study. Two different GOAs, namely particle swarm optimization (PSO) and the genetic algorithm (GA), are investigated in combination with SNT. The results of these GOAs are compared in terms of their efficiency in detecting global minima. The systematic search makes it possible to find different solutions in the search space, thus increasing the confidence of finding the global minimum.
Directory of Open Access Journals (Sweden)
Parul Rastogi
2011-03-01
Search engines are the basic tool for fetching information on the web. The IT revolution has affected not only technocrats but also native-language users, who nowadays also tend to look for information on the web. This leads to the need for effective search engines that fulfill native users' needs and provide them information in their native languages. The major population of India uses Hindi as a first language, yet Hindi-language web information retrieval is not in a satisfactory condition. Besides other technical setbacks, Hindi-language search engines face the problem of sense ambiguity. Our WSD method is based on the Highest Sense Count (HSC) and works well with Google. The objective of the paper is a comparative analysis of the WSD algorithm's results on three Hindi-language search engines: Google, Raftaar and Guruji. We took a test sample of 100 queries to check the performance level of the WSD algorithm on the various search engines. The results show promising improvement in the performance of the Google search engine, whereas the least performance improvement was seen in the Guruji search engine.
Energy Technology Data Exchange (ETDEWEB)
Dong, Feng; Pierpaoli, Elena; Gunn, James E.; Wechsler, Risa H.
2007-10-29
We present a modified adaptive matched filter algorithm designed to identify clusters of galaxies in wide-field imaging surveys such as the Sloan Digital Sky Survey. The cluster-finding technique is fully adaptive to imaging surveys with spectroscopic coverage, multicolor photometric redshifts, no redshift information at all, and any combination of these within one survey. It works with high efficiency in multi-band imaging surveys where photometric redshifts can be estimated with well-understood error distributions. Tests of the algorithm on realistic mock SDSS catalogs suggest that the detected sample is ~85% complete and over 90% pure for clusters with masses above 1.0 x 10^14 h^-1 M_sun and redshifts up to z = 0.45. The errors of estimated cluster redshifts from the maximum likelihood method are shown to be small (typically less than 0.01) over the whole redshift range, with photometric redshift errors typical of those found in the Sloan survey. Inside the spherical radius corresponding to a galaxy overdensity of Delta = 200, we find the derived cluster richness Lambda_200 to be a roughly linear indicator of the virial mass M_200, which well recovers the relation between total luminosity and cluster mass of the input simulation.
Lopes, P A A
2004-01-01
We present an optically selected galaxy cluster catalog from ~2,700 square degrees of the Digitized Second Palomar Observatory Sky Survey (DPOSS), spanning the redshift range 0.1 < z < 0.5 at high galactic latitudes (|b| > 50), where stellar contamination is modest and nearly uniform. We also present a performance comparison of two different detection methods applied to these data, the Adaptive Kernel and Voronoi Tessellation techniques. In the regime where both catalogs are expected to be complete, we find excellent agreement between them, as well as with the most recent surveys in the literature. Extensive simulations are performed and applied to the two different methods, indicating a contamination rate of ~5%. These simulations are also used to optimize the algorithms and evaluate the selection function for the final cluster catalog. Redshift and richness estimates are also provided, making possible the selection of subsamples for future studies.
Comparison of Three Greedy Routing Algorithms for Efficient Packet Forwarding in VANET
Directory of Open Access Journals (Sweden)
K. Lakshmi
2012-01-01
VANETs (Vehicular Ad hoc Networks) are highly mobile wireless ad hoc networks and will play an important role in public safety communications and commercial applications. In a VANET, the nodes, which are vehicles, can move safely at high speed and must communicate quickly and reliably. When an accident occurs on a road or highway, alarm messages must be disseminated, instead of ad hoc routed, to inform all other vehicles. Vehicular ad hoc network architecture and cellular technology can be combined to achieve intelligent communication and improve road traffic safety and efficiency. A VANET can perform effective communication by utilizing routing information. In this paper, we discuss three greedy routing algorithms and compare them to show which one is most efficient in delivering packets in terms of mobility, number of nodes and transmission range
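All greedy geographic routing variants share the same next-hop core: forward the packet to the neighbor that makes the most progress toward the destination. A minimal sketch (2-D coordinates and Euclidean distance; not any specific protocol from the paper):

```python
import math

def greedy_forward(current, destination, neighbors):
    # Pick the neighbor geographically closest to the destination; if no
    # neighbor improves on the current node, greedy forwarding is stuck
    # (a "local maximum") and a recovery strategy would be needed.
    def dist(p, q):
        return math.hypot(p[0] - q[0], p[1] - q[1])
    best = min(neighbors, key=lambda n: dist(n, destination), default=None)
    if best is None or dist(best, destination) >= dist(current, destination):
        return None
    return best
```

The greedy variants compared in such studies mainly differ in the progress metric (distance, direction, or a combination) used in place of the plain distance here.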
VHDL IMPLEMENTATION AND COMPARISON OF COMPLEX MULTIPLIER USING BOOTH'S AND VEDIC ALGORITHM
Directory of Open Access Journals (Sweden)
Rajashri K. Bhongade
2015-11-01
For the design of a complex number multiplier, the basic idea is adopted from the design of an ordinary multiplier. The ancient Indian mathematics of the "Vedas" is used for designing the multiplier unit. There are 16 sutras in the Vedas, from which the Urdhva Tiryakbhyam sutra (method) was selected for implementing complex multiplication; the Urdhva Tiryakbhyam sutra is applicable to all cases of multiplication. Any multi-bit multiplication can be reduced to single-bit multiplications and additions using the Urdhva Tiryakbhyam sutra, which is performed vertically and crosswise. The partial products and sums are generated in a single step, which reduces the carry propagation from LSB to MSB. In this paper, simulation results for 4-bit complex number multiplication using Booth's algorithm and using the Vedic sutra are illustrated. The implementation of the Vedic mathematics and its application to the complex multiplier was checked for parameters like propagation delay.
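Though the paper's implementation is in VHDL, the vertically-and-crosswise idea is easy to sketch in software: every output column's crosswise partial products are summed in one step, and carries are resolved afterwards. A hypothetical Python illustration on little-endian bit lists (not the paper's design):

```python
def urdhva_multiply(a_bits, b_bits):
    # Urdhva Tiryakbhyam (vertically and crosswise) on little-endian bit
    # lists of equal length: column i+j collects all crosswise partial
    # products a_i * b_j in one step; carries then propagate column-wise.
    n = len(a_bits)
    cols = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            cols[i + j] += a_bits[i] * b_bits[j]
    result, carry = [], 0
    for c in cols:
        total = c + carry
        result.append(total & 1)
        carry = total >> 1
    while carry:
        result.append(carry & 1)
        carry >>= 1
    return result  # little-endian product bits
```

In hardware, the per-column sums are exactly the single-step partial-product generation that shortens the LSB-to-MSB carry chain.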
A comparison algorithm to check LTSA Layer 1 and SCORM compliance in e-Learning sites
Sengupta, Souvik; Banerjee, Nilanjan
2012-01-01
The success of e-Learning is largely dependent on the impact of its multimedia-aided learning content on the learner over the hypermedia. E-Learning portals with different proportions of multimedia elements have different impacts on the learner, as there is a lack of standardization. The Learning Technology System Architecture (LTSA) Layer 1 deals with the effect of the environment on the learner. From an information technology perspective it specifies learner interaction from the environment to the learner via multimedia content. The Sharable Content Object Reference Model (SCORM) is a collection of standards and specifications for the content of web-based e-learning and specifies how a JavaScript API can be used to integrate content development. In this paper an examination is made of the design features of the interactive multimedia components of learning packages, by creating an algorithm which gives a comparative study of the multimedia components used by different learning packages. The resultant graph as output helps...
Martin, Jacob A.; Gross, Kevin C.
2016-05-01
As off-nadir viewing platforms become increasingly prevalent in remote sensing, material identification techniques must be robust to changing viewing geometries. Current identification strategies generally rely on estimating reflectivity or emissivity, both of which vary with viewing angle. Presented here is a technique, leveraging polarimetric and hyperspectral imaging (P-HSI), to estimate index of refraction, which is invariant to viewing geometry. Results from a quartz window show that index of refraction can be retrieved to within 0.08 rms error from 875-1250 cm^-1 for an amorphous material. Results from a silicon carbide (SiC) wafer, which has much sharper features than quartz glass, show the index of refraction can be retrieved to within 0.07 rms error. The results from each of these datasets show an improvement when compared with a maximum smoothness TES algorithm.
Performance Comparison of Three-Phase Shunt Active Power Filter Algorithms
Directory of Open Access Journals (Sweden)
Moleykutty George
2008-01-01
The usage of parallel converters is ever increasing. However, the voltage and current harmonics, the zero-sequence and negative-sequence components of voltage and current, and the reactive power present in parallel converters give an alarming signal to power system and power electronic engineers. This research discusses the performance of a three-phase shunt active power filter (APF) system using three different control techniques, namely the synchronous detection algorithm (SDM), instantaneous active and reactive power (p-q) theory, and the instantaneous direct and quadrature (d-q) current method, for the control of zero- and negative-sequence components, reactive power and harmonics. The novelty of this research lies in the successful application of the SDM-based APF and the (d-q) current method APF for the control of reactive power, harmonics, and negative- and zero-sequence currents resulting from the use of parallel three-phase converters. The MATLAB 6.1 toolbox is used to model the systems.
Sheta, B.; Elhabiby, M.; Sheimy, N.
2012-07-01
A robust scale- and rotation-invariant image matching algorithm is vital for the Visual Based Navigation (VBN) of aerial vehicles, where matches between existing geo-referenced database images and the real-time captured images are used to georeference (i.e. determine six transformation parameters - three rotations and three translations for) the real-time captured image from the UAV through the collinearity equations. The georeferencing information is then used to aid the INS integration Kalman filter as a Coordinate UPdaTe (CUPT). It is critical for the collinearity equations to use the proper optimization algorithm to ensure accurate and fast convergence of the georeferencing parameters with the minimum number of conjugate points necessary for convergence. Fast convergence to a global minimum requires a non-linear approach to overcome the high degree of non-linearity that will exist in the case of large oblique images (i.e. large rotation angles). The main objective of this paper is investigating the estimation of the georeferencing parameters necessary for VBN of aerial vehicles in the case of large rotation angles, which leads to non-linearity of the estimation model. In this case, traditional least squares approaches will fail to estimate the georeferencing parameters because of the expected non-linearity of the mathematical model. Five different nonlinear least squares methods are presented for estimating the transformation parameters: four gradient-based nonlinear least squares methods (trust region, the trust region dogleg algorithm, Levenberg-Marquardt, and a Quasi-Newton line search method) and one non-gradient method (Nelder-Mead simplex direct search), employed for the six-transformation-parameter estimation process. The research was done on simulated data and the results showed that the Nelder-Mead method failed because of its dependency on the objective function without any derivative information. Although, the tested gradient methods
Kim, R S J; Postman, M; Strauss, M A; Bahcall, Neta A; Gunn, J E; Lupton, R H; Annis, J; Nichol, R C; Castander, F J; Brinkmann, J; Brunner, R J; Connolly, A; Csabai, I; Hindsley, R B; Ivezic, Z; Vogeley, M S; York, D G; Kim, Rita S. J.; Kepner, Jeremy V.; Postman, Marc; Strauss, Michael A.; Bahcall, Neta A.; Gunn, James E.; Lupton, Robert H.; Annis, James; Nichol, Robert C.; Castander, Francisco J.; Brunner, Robert J.; Connolly, Andrew; Csabai, Istvan; Hindsley, Robert B.; Ivezic, Zeljko; Vogeley, Michael S.; York, Donald G.
2002-01-01
We present a comparison of three cluster finding algorithms from imaging data using Monte Carlo simulations of clusters embedded in a 25 deg^2 region of Sloan Digital Sky Survey (SDSS) imaging data: the Matched Filter (MF; Postman et al. 1996), the Adaptive Matched Filter (AMF; Kepner et al. 1999) and a color-magnitude filtered Voronoi Tessellation Technique (VTT). Among the two matched filters, we find that the MF is more efficient in detecting faint clusters, whereas the AMF evaluates the redshifts and richnesses more accurately, therefore suggesting a hybrid method (HMF) that combines the two. The HMF outperforms the VTT when using a background that is uniform, but it is more sensitive to the presence of a non-uniform galaxy background than is the VTT; this is due to the assumption of a uniform background in the HMF model. We thus find that for the detection thresholds we determine to be appropriate for the SDSS data, the performance of both algorithms is similar; we present the selection function for eac...
Pulse shape analysis of a two fold clover detector with an EMD based new algorithm: A comparison
International Nuclear Information System (INIS)
An investigation of an Empirical Mode Decomposition (EMD) based noise filtering algorithm has been carried out on a mirror signal from a two-fold germanium clover detector. The EMD technique can decompose linear as well as nonlinear and chaotic signals with a precise frequency resolution, and it allows the preamplifier signal (charge pulse) to be decomposed on an event-by-event basis. The filtering algorithm provides information about the Intrinsic Mode Functions (IMFs) mainly dominated by noise: it preserves the signal information and separates the overriding noise oscillations from the signals. The identification of the noise structure is based on the frequency distributions of the different IMFs. The preamplifier noise components which distort the azimuthal co-ordinate information have been extracted on the basis of the correlation between the different IMFs and the mirror signal. The correlation studies have been carried out in both the frequency and time domains. The extracted correlation coefficient provides important information regarding the pulse shape of the γ-ray interaction in the detector. A comparison between the EMD based and state-of-the-art wavelet based denoising techniques has also been made and discussed. It has been observed that the fractional noise strength distribution varies with the position of the collimated gamma-ray source. This trend has been reproduced by both denoising techniques
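The correlation-based selection of noise-dominated IMFs described above can be sketched as follows. The "IMFs" here are simulated components rather than the output of a real decomposition (a real pipeline would obtain them from an EMD implementation such as the PyEMD package), and the 0.5 correlation threshold is an illustrative choice, not a value from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 2000)
pulse = np.sin(2 * np.pi * 5 * t)              # slow "charge pulse" content
noise = 0.5 * rng.standard_normal(t.size)      # broadband preamplifier noise
mirror = pulse + noise                         # recorded mirror signal

# Stand-ins for IMFs: simulated here so the sketch stays self-contained.
imfs = np.vstack([noise, pulse])

# Denoised template: heavy moving-average smoothing of the mirror signal.
kernel = np.ones(101) / 101
smooth = np.convolve(mirror, kernel, mode="same")

# Classify IMFs by their correlation with the smoothed reference:
# low correlation -> noise-dominated IMF, excluded from reconstruction.
corrs = np.array([np.corrcoef(imf, smooth)[0, 1] for imf in imfs])
keep = corrs > 0.5
denoised = imfs[keep].sum(axis=0)
print(corrs.round(2), keep)
```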
Trofimov, Alexey O; Kalentiev, George; Voennov, Oleg; Yuriev, Michail; Agarkova, Darya; Trofimova, Svetlana; Bragin, Denis E
2016-01-01
The aim of this work was comparison of two algorithms of perfusion computed tomography (PCT) data analysis for evaluation of cerebral microcirculation in the perifocal zone of chronic subdural hematoma (CSDH). Twenty patients with CSDH after polytrauma were included in the study. The same PCT data were assessed quantitatively in cortical brain region beneath the CSDH (zone 1), and in the corresponding contralateral brain hemisphere (zone 2) without and with the use of perfusion calculation mode excluding vascular pixel 'Remote Vessels' (RV); 1st and 2nd analysis method, respectively. Comparison with normal values for perfusion indices in the zone 1 in the 1st analysis method showed a significant (p < 0.01) increase in CBV and CBF, and no significant increase in MTT and TTP. Use of the RV mode (2nd analysis method) showed no statistically reliable change of perfusion parameters in the microcirculatory blood flow of the 2nd zone. Maintenance of microcirculatory blood flow perfusion reflects the preservation of cerebral blood flow autoregulation in patients with CSDH. PMID:27526170
Energy Technology Data Exchange (ETDEWEB)
Sun, X. H.; Akahori, Takuya; Anderson, C. S.; Farnes, J. S.; O’Sullivan, S. P. [Sydney Institute for Astronomy, School of Physics, The University of Sydney, NSW 2006 (Australia); Rudnick, L.; O’Brien, T. [Minnesota Institute for Astrophysics, School of Physics and Astronomy, University of Minnesota, 116 Church Street SE, Minneapolis, MN 55455 (United States); Bell, M. R. [Max Planck Institute for Astrophysics, Karl-Schwarzschild-Str. 1, D-85748 Garching (Germany); Bray, J. D.; Scaife, A. M. M. [Department of Physics and Astronomy, University of Southampton, Highfield, Southampton SO17 1BJ (United Kingdom); Ideguchi, S.; Kumazaki, K. [University of Nagoya, Furo-cho, Chikusa-ku, Nagoya 464-8601 (Japan); Stepanov, R. [Institute of Continuous Media Mechanics, Korolyov str. 1, 614061 Perm (Russian Federation); Stil, J.; Wolleben, M. [Department of Physics and Astronomy, University of Calgary, 2500 University Drive NW, Calgary AB T2 N 1N4 (Canada); Takahashi, K. [University of Kumamoto, 2–39-1, Kurokami, Kumamoto 860-8555 (Japan); Weeren, R. J. van, E-mail: x.sun@physics.usyd.edu.au, E-mail: larry@umn.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)
2015-02-01
Faraday rotation measures (RMs) and more general Faraday structures are key parameters for studying cosmic magnetism and are also sensitive probes of faint ionized thermal gas. A definition of which derived quantities are required for various scientific studies is needed, as well as addressing the challenges in determining Faraday structures. A wide variety of algorithms has been proposed to reconstruct these structures. In preparation for the Polarization Sky Survey of the Universe's Magnetism (POSSUM) to be conducted with the Australian Square Kilometre Array Pathfinder and the ongoing Galactic Arecibo L-band Feeds Array Continuum Transit Survey (GALFACTS), we run a Faraday structure determination data challenge to benchmark the currently available algorithms, including Faraday synthesis (previously called RM synthesis in the literature), wavelet, compressive sampling, and QU-fitting. The input models include sources with one Faraday thin component, two Faraday thin components, and one Faraday thick component. The frequency set is similar to POSSUM/GALFACTS with a 300 MHz bandwidth from 1.1 to 1.4 GHz. We define three figures of merit motivated by the underlying science: (1) an average RM weighted by polarized intensity, RM{sub wtd}, (2) the separation Δϕ of two Faraday components, and (3) the reduced chi-squared χ{sub r}{sup 2}. Based on the current test data with a signal-to-noise ratio of about 32, we find the following. (1) When only one Faraday thin component is present, most methods perform as expected, with occasional failures where two components are incorrectly found. (2) For two Faraday thin components, QU-fitting routines perform the best, with errors close to the theoretical ones for RM{sub wtd} but with significantly higher errors for Δϕ. All other methods, including standard Faraday synthesis, frequently identify only one component when Δϕ is below or near the width of the Faraday point-spread function. (3) No methods as currently
Sun, X. H.; Rudnick, L.; Akahori, Takuya; Anderson, C. S.; Bell, M. R.; Bray, J. D.; Farnes, J. S.; Ideguchi, S.; Kumazaki, K.; O'Brien, T.; O'Sullivan, S. P.; Scaife, A. M. M.; Stepanov, R.; Stil, J.; Takahashi, K.; van Weeren, R. J.; Wolleben, M.
2015-02-01
Faraday rotation measures (RMs) and more general Faraday structures are key parameters for studying cosmic magnetism and are also sensitive probes of faint ionized thermal gas. A definition of which derived quantities are required for various scientific studies is needed, as well as addressing the challenges in determining Faraday structures. A wide variety of algorithms has been proposed to reconstruct these structures. In preparation for the Polarization Sky Survey of the Universe's Magnetism (POSSUM) to be conducted with the Australian Square Kilometre Array Pathfinder and the ongoing Galactic Arecibo L-band Feeds Array Continuum Transit Survey (GALFACTS), we run a Faraday structure determination data challenge to benchmark the currently available algorithms, including Faraday synthesis (previously called RM synthesis in the literature), wavelet, compressive sampling, and QU-fitting. The input models include sources with one Faraday thin component, two Faraday thin components, and one Faraday thick component. The frequency set is similar to POSSUM/GALFACTS with a 300 MHz bandwidth from 1.1 to 1.4 GHz. We define three figures of merit motivated by the underlying science: (1) an average RM weighted by polarized intensity, RM_wtd, (2) the separation Δφ of two Faraday components, and (3) the reduced chi-squared χ_r^2. Based on the current test data with a signal-to-noise ratio of about 32, we find the following. (1) When only one Faraday thin component is present, most methods perform as expected, with occasional failures where two components are incorrectly found. (2) For two Faraday thin components, QU-fitting routines perform the best, with errors close to the theoretical ones for RM_wtd but with significantly higher errors for Δφ. All other methods, including standard Faraday synthesis, frequently identify only one component when Δφ is below or near the width of the Faraday point-spread function. (3) No methods as currently implemented work well for
International Nuclear Information System (INIS)
Faraday rotation measures (RMs) and more general Faraday structures are key parameters for studying cosmic magnetism and are also sensitive probes of faint ionized thermal gas. A definition of which derived quantities are required for various scientific studies is needed, as well as addressing the challenges in determining Faraday structures. A wide variety of algorithms has been proposed to reconstruct these structures. In preparation for the Polarization Sky Survey of the Universe's Magnetism (POSSUM) to be conducted with the Australian Square Kilometre Array Pathfinder and the ongoing Galactic Arecibo L-band Feeds Array Continuum Transit Survey (GALFACTS), we run a Faraday structure determination data challenge to benchmark the currently available algorithms, including Faraday synthesis (previously called RM synthesis in the literature), wavelet, compressive sampling, and QU-fitting. The input models include sources with one Faraday thin component, two Faraday thin components, and one Faraday thick component. The frequency set is similar to POSSUM/GALFACTS with a 300 MHz bandwidth from 1.1 to 1.4 GHz. We define three figures of merit motivated by the underlying science: (1) an average RM weighted by polarized intensity, RMwtd, (2) the separation Δϕ of two Faraday components, and (3) the reduced chi-squared χr2. Based on the current test data with a signal-to-noise ratio of about 32, we find the following. (1) When only one Faraday thin component is present, most methods perform as expected, with occasional failures where two components are incorrectly found. (2) For two Faraday thin components, QU-fitting routines perform the best, with errors close to the theoretical ones for RMwtd but with significantly higher errors for Δϕ. All other methods, including standard Faraday synthesis, frequently identify only one component when Δϕ is below or near the width of the Faraday point-spread function. (3) No methods as currently implemented work well for
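The Faraday synthesis step being benchmarked in these records can be sketched in a few lines: transform the complex polarization from λ² space to Faraday depth and read off the peak. The single Faraday-thin source and the φ grid below are illustrative, though the band matches the POSSUM/GALFACTS-like setup described above.

```python
import numpy as np

c = 299792458.0
freqs = np.linspace(1.1e9, 1.4e9, 300)     # 300 MHz band, 1.1-1.4 GHz
lam2 = (c / freqs) ** 2
lam2_0 = lam2.mean()

rm_true = 50.0                             # rad/m^2, one Faraday-thin source
p = np.exp(2j * rm_true * lam2)            # complex polarization P(λ²)

# Faraday synthesis: Fourier-like transform from λ² to Faraday depth φ.
phi = np.linspace(-200, 200, 2001)
f = np.array([np.mean(p * np.exp(-2j * ph * (lam2 - lam2_0))) for ph in phi])

# The peak of the Faraday dispersion function recovers the input RM.
rm_est = phi[np.argmax(np.abs(f))]
print(rm_est)
```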
Kitsionas, S; Federrath, C; Schmidt, W; Price, D; Dursi, J; Gritschneder, M; Walch, S; Piontek, R; Kim, J; Jappsen, A -K; Ciecielag, P; Mac Low, M -M
2008-01-01
Simulations of astrophysical turbulence have reached a level of sophistication such that quantitative results are now starting to emerge. Contradictory results have, however, been reported in the literature with respect to the performance of the numerical techniques employed for its study and their relevance to the physical systems modelled. We aim at characterising the performance of a number of hydrodynamics codes on the modelling of turbulence decay. This is the first such large-scale comparison ever conducted. We have driven compressible, supersonic, isothermal turbulence with GADGET and then let it decay in the absence of gravity, using a number of grid (ENZO, FLASH, TVD, ZEUS) and SPH codes (GADGET, VINE, PHANTOM). We have analysed the results of our numerical experiments using a variety of statistical measures ranging from energy spectrum functions (power spectra), to velocity structure functions, to probability distribution functions. At the low numerical resolution employed here the performance of the var...
Comparison of fringe tracking algorithms for single-mode near-infrared long baseline interferometers
Choquet, Élodie; Perrin, Guy; Cassaing, Frédéric; Lacour, Sylvestre; Eisenhauer, Frank
2014-01-01
To enable optical long baseline interferometry toward faint objects, long integrations are necessary despite atmospheric turbulence. Fringe trackers are needed to stabilize the fringes and thus increase the fringe visibility and phase signal-to-noise ratio (SNR), with efficient controllers robust to instrumental vibrations, and to subsequent path fluctuations and flux drop-outs. We report on simulations, analysis and comparison of the performances of a classical integrator controller and of a Kalman controller, both optimized to track fringes under realistic observing conditions for different source magnitudes, disturbance conditions, and sampling frequencies. The key parameters of our simulations (instrument photometric performance, detection noise, turbulence and vibrations statistics) are based on typical observing conditions at the Very Large Telescope observatory and on the design of the GRAVITY instrument, a 4-telescope single-mode long baseline interferometer in the near-infrared, next in line to be in...
International Nuclear Information System (INIS)
Retrospective analysis of 3D clinical treatment plans to investigate the possible qualitative clinical consequences of the use of PBC versus AAA. The 3D dose distributions of 80 treatment plans at four different tumour sites, produced using the PBC algorithm, were recalculated using AAA and the same number of monitor units provided by PBC and clinically delivered to each patient; the consequences of the difference on the dose-effect relations for normal tissue injury were studied by comparing different NTCP models/parameters extracted from a review of published studies. In this study the AAA dose calculation is considered as benchmark data. The paired Student t-test was used for statistical comparison of all results obtained from the use of the two algorithms. In the prostate plans, the AAA predicted a lower NTCP value (NTCPAAA) for the risk of late rectal bleeding for each of the seven combinations of NTCP parameters; the maximum mean decrease was 2.2%. In the head-and-neck treatments, each combination of parameters used for the risk of xerostomia from irradiation of the parotid glands yielded a lower NTCPAAA, which varied from 12.8% (sd=3.0%) to 57.5% (sd=4.0%), while with the PBC algorithm the NTCPPBC ranged from 15.2% (sd=2.7%) to 63.8% (sd=3.8%), according to the combination of parameters used; the differences were statistically significant. NTCPAAA for the risk of radiation pneumonitis in the lung treatments was also found to be lower than NTCPPBC for each of the eight sets of NTCP parameters; the maximum mean decrease was 4.5%. A mean increase of 4.3% was found when the NTCPAAA was calculated with the parameters evaluated from a dose distribution calculated by a convolution-superposition (CS) algorithm. A markedly different pattern was observed for the risk relating to the development of pneumonitis following breast treatments: the AAA predicted a higher NTCP value. The mean NTCPAAA varied from 0.2% (sd = 0.1%) to 2.1% (sd = 0.3%), while the mean NTCPPBC
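A generic sketch of how an NTCP value is computed from a dose metric, using the Lyman-Kutcher-Burman (LKB) probit form commonly used with published parameter sets of this kind. The gEUD, TD50 and m values below are hypothetical illustrations, not parameters or results from this study.

```python
import math

def lkb_ntcp(geud, td50, m):
    """Lyman-Kutcher-Burman NTCP: normal CDF of (gEUD - TD50) / (m * TD50).

    td50 and m are organ-specific parameters taken from published fits;
    the values used below are purely illustrative.
    """
    t = (geud - td50) / (m * td50)
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

# Hypothetical illustration: a small shift in calculated dose (as between
# PBC and AAA) shifts the predicted complication probability.
print(lkb_ntcp(60.0, 80.0, 0.15), lkb_ntcp(58.0, 80.0, 0.15))
```

By construction the model returns 0.5 when the dose metric equals TD50, and small dose differences move the probability along the probit curve.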
ECG De-noising: A comparison between EEMD-BLMS and DWT-NN algorithms.
Kærgaard, Kevin; Jensen, Søren Hjøllund; Puthusserypady, Sadasivan
2015-08-01
Electrocardiogram (ECG) is a widely used non-invasive method to study the rhythmic activity of the heart and thereby to detect abnormalities. However, these signals are often obscured by artifacts from various sources, and minimization of these artifacts is of paramount importance. This paper proposes two adaptive techniques, namely the EEMD-BLMS (Ensemble Empirical Mode Decomposition in conjunction with the Block Least Mean Square algorithm) and DWT-NN (Discrete Wavelet Transform followed by Neural Network) methods, for minimizing the artifacts in recorded ECG signals, and compares their performance. These methods were first compared on two types of simulated noise-corrupted ECG signals: Type-I (desired ECG + noise frequencies outside the ECG frequency band) and Type-II (ECG + noise frequencies both inside and outside the ECG frequency band). Subsequently, they were tested on real ECG recordings. Results clearly show that both methods work equally well when used on Type-I signals. However, on Type-II signals the DWT-NN performed better. In the case of real ECG data, though both methods performed similarly, the DWT-NN method was slightly better in terms of minimizing the high-frequency artifacts. PMID:26737124
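The adaptive-cancellation half of the first method can be illustrated with a minimal block LMS filter. The ECG is stood in for by a sine wave, the noise path is a short hypothetical FIR filter, and the step size and block length are illustrative; a real EEMD-BLMS chain would first decompose the signal with ensemble EMD.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4000
clean = np.sin(2 * np.pi * np.arange(n) / 100.0)   # stand-in for the ECG
ref = rng.standard_normal(n)                        # noise reference channel
h = np.array([0.6, -0.3, 0.2])                      # hypothetical noise path
d = clean + np.convolve(ref, h)[:n]                 # corrupted recording

def block_lms(d, x, taps=8, block=32, mu=0.05):
    """Block LMS: update the filter once per block using the
    block-averaged gradient (a minimal sketch of the BLMS idea)."""
    w = np.zeros(taps)
    e_out = np.zeros(len(d))
    for start in range(taps - 1, len(d), block):
        grad = np.zeros(taps)
        for t in range(start, min(start + block, len(d))):
            xv = x[t - taps + 1:t + 1][::-1]        # newest sample first
            e = d[t] - w @ xv                       # error = cleaned sample
            e_out[t] = e
            grad += e * xv
        w += mu * grad / block                      # one update per block
    return e_out, w

cleaned, w = block_lms(d, ref)
mse_before = np.mean((d - clean) ** 2)
mse_after = np.mean((cleaned[1000:] - clean[1000:]) ** 2)
print(mse_before, mse_after)
```

The cancellation error is the cleaned signal: once the filter has learned the noise path, the residual is close to the underlying waveform.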
Sun, X H; Akahori, Takuya; Anderson, C S; Bell, M R; Bray, J D; Farnes, J S; Ideguchi, S; Kumazaki, K; O'Brien, T; O'Sullivan, S P; Scaife, A M M; Stepanov, R; Stil, J; Takahashi, K; van Weeren, R J; Wolleben, M
2014-01-01
(abridged) We run a Faraday structure determination data challenge to benchmark the currently available algorithms, including Faraday synthesis (previously called RM synthesis in the literature), wavelet, compressive sampling and $QU$-fitting. The frequency set is similar to POSSUM/GALFACTS with a 300 MHz bandwidth from 1.1 to 1.4 GHz. We define three figures of merit motivated by the underlying science: a) an average RM weighted by polarized intensity, RMwtd, b) the separation $\Delta\phi$ of two Faraday components and c) the reduced chi-squared. Based on the current test data with a signal-to-noise ratio of about 32, we find that: (1) When only one Faraday thin component is present, most methods perform as expected, with occasional failures where two components are incorrectly found; (2) For two Faraday thin components, QU-fitting routines perform the best, with errors close to the theoretical ones for RMwtd, but with significantly higher errors for $\Delta\phi$. All other methods including standard Faraday synt...
Swartz, W. H.; Bucsela, E. J.; Lamsal, L. N.; Celarier, E. A.; Krotkov, N. A.; Bhartia, P. K.; Strahan, S. E.; Gleason, J. F.; Herman, J.; Pickering, K.
2012-01-01
Nitrogen oxides (NOx = NO + NO2) are important atmospheric trace constituents that impact tropospheric air pollution chemistry and air quality. We have developed a new NASA algorithm for the retrieval of stratospheric and tropospheric NO2 vertical column densities using measurements from the nadir-viewing Ozone Monitoring Instrument (OMI) on NASA's Aura satellite. The new products rely on an improved approach to stratospheric NO2 column estimation and stratosphere-troposphere separation and a new monthly NO2 climatology based on the NASA Global Modeling Initiative chemistry-transport model. The retrieval does not rely on daily model profiles, minimizing the influence of a priori information. We evaluate the retrieved tropospheric NO2 columns using surface in situ (e.g., AQS/EPA), ground-based (e.g., DOAS), and airborne measurements (e.g., DISCOVER-AQ). The new, improved OMI tropospheric NO2 product is available at high spatial resolution for the years 2005-present. We believe that this product is valuable for the evaluation of chemistry-transport models, examining the spatial and temporal patterns of NOx emissions, constraining top-down NOx inventories, and for the estimation of NOx lifetimes.
Comparison of most adaptive meta model With newly created Quality Meta-Model using CART Algorithm
Directory of Open Access Journals (Sweden)
Jasbir Malik
2012-09-01
To ensure that the software developed is of high quality, it is now widely accepted that the various artifacts generated during the development process should be rigorously evaluated using a domain-specific quality model. However, a domain-specific quality model should be derived from a generic quality model which is time-proven, well-validated and widely accepted. This thesis lays down a clear definition of a quality meta-model and then identifies the various quality meta-models existing in the research and practice domains. It then compares the existing quality meta-models, using a set of criteria, to identify which model is the most adaptable to various domains. The categories are specified with the CART algorithm, a tree architecture that makes binary (true/false) decisions over the meta-model: if an item is found in a given category it falls under the true branch, otherwise under the false branch.
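The binary true/false splitting CART performs at each node can be sketched as a single Gini-impurity split search. The feature values and category labels below are hypothetical stand-ins for the meta-model comparison criteria.

```python
import numpy as np

def gini(labels):
    """Gini impurity of a label set: 1 - sum of squared class fractions."""
    _, counts = np.unique(labels, return_counts=True)
    p = counts / counts.sum()
    return 1.0 - np.sum(p ** 2)

def best_split(x, y):
    """Exhaustive search for the binary split minimizing weighted Gini
    impurity: the core decision made at each CART tree node."""
    best = (None, np.inf)
    for thr in np.unique(x):
        left, right = y[x <= thr], y[x > thr]
        if len(left) == 0 or len(right) == 0:
            continue
        score = (len(left) * gini(left) + len(right) * gini(right)) / len(y)
        if score < best[1]:
            best = (thr, score)
    return best

# Hypothetical quality metric that cleanly separates two categories.
x = np.array([1.0, 2.0, 3.0, 4.0, 6.0, 7.0, 8.0, 9.0])
y = np.array([0, 0, 0, 0, 1, 1, 1, 1])
thr, score = best_split(x, y)
print(thr, score)
```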
Li Voti, R.; Sibilia, C.; Bertolotti, M.
2003-01-01
Photothermal depth profiling has been the subject of many papers in recent years. Inverse problems on different kinds of materials have been identified, classified, and solved. A first classification has been done according to the type of depth profile: the physical quantity to be reconstructed is the optical absorption in the problems of type I, the thermal effusivity for type II, and both of them for type III. Another classification may be done depending on the time scale of the pump beam heating (frequency scan, time scan), or on its geometrical symmetry (one- or three-dimensional). In this work we want to discuss two different approaches, the genetic algorithms (GA) [R. Li Voti, C. Melchiorri, C. Sibilia, and M. Bertolotti, Anal. Sci. 17, 410 (2001); R. Li Voti, Proceedings, IV Int. Workshop on Advances in Signal Processing for Non-Destructive Evaluation of Materials, Quebec, August 2001] and the thermal wave backscattering (TWBS) [R. Li Voti, G. L. Liakhou, S. Paoloni, C. Sibilia, and M. Bertolotti, Anal. Sci. 17, 414 (2001); J. C. Krapez and R. Li Voti, Anal. Sci. 17, 417 (2001)], showing their performances and limits of validity for several kinds of photothermal depth profiling problems: the two approaches are based on different mechanisms and exhibit obviously different features. GA may be implemented on the exact heat diffusion equation as follows: one chromosome is associated with each profile. The genetic evolution of the chromosome allows one to find better and better profiles, eventually converging towards the solution of the inverse problem. The main advantage is that GA may be applied to any arbitrary profile, but several disadvantages exist; for example, the complexity of the algorithm, the slow convergence, and consequently the computer time consumed. By contrast, TWBS uses a simplified theoretical model of heat diffusion in inhomogeneous materials. According to such a model, the photothermal signal depends linearly on the thermal effusivity
A task-based comparison of two reconstruction algorithms for digital breast tomosynthesis
Mahadevan, Ravi; Ikejimba, Lynda C.; Lin, Yuan; Samei, Ehsan; Lo, Joseph Y.
2014-03-01
Digital breast tomosynthesis (DBT) generates 3-D reconstructions of the breast by taking X-ray projections at various angles around the breast. DBT improves cancer detection as it minimizes the tissue overlap that is present in traditional 2-D mammography. In this work, two methods of reconstruction, filtered backprojection (FBP) and the Newton-Raphson iterative reconstruction, were used to create 3-D reconstructions from phantom images acquired on a breast tomosynthesis system. The task-based image analysis method was used to compare the performance of each reconstruction technique. The task simulated a 10 mm lesion within the breast containing iodine concentrations between 0.0 mg/ml and 8.6 mg/ml. The TTF was calculated using the reconstruction of an edge phantom, and the NPS was measured with a structured breast phantom (CIRS 020) over different exposure levels. The detectability index d' was calculated to assess the image quality of the reconstructed phantom images. Image quality was assessed for both conventional single-energy and dual-energy subtracted reconstructions. Dose allocation between the high- and low-energy scans was also examined. Over the full range of dose allocations, the iterative reconstruction yielded a higher detectability index than the FBP for single-energy reconstructions. For dual-energy subtraction, the detectability index was maximized when most of the dose was allocated to the high-energy image. With that dose allocation, the performance trend for the reconstruction algorithms reversed; FBP performed better than the corresponding iterative reconstruction. However, FBP performance varied erratically with changing dose allocation. Therefore, iterative reconstruction is preferred for both imaging modalities despite underperforming dual-energy FBP, as it provides stable results.
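The figure of merit used in such comparisons can be sketched with a non-prewhitening model-observer form of the detectability index (one common choice; the study's exact observer model may differ). All spectra below are hypothetical analytic shapes, not the measured TTF and NPS.

```python
import numpy as np

def detectability_npw(f, task_w, ttf, nps):
    """Non-prewhitening model observer:
    d'^2 = (sum W^2 TTF^2 df)^2 / (sum W^2 TTF^2 NPS df)."""
    df = f[1] - f[0]
    num = (np.sum(task_w ** 2 * ttf ** 2) * df) ** 2
    den = np.sum(task_w ** 2 * ttf ** 2 * nps) * df
    return np.sqrt(num / den)

f = np.linspace(0.01, 5.0, 500)        # spatial frequency, cycles/mm
task_w = 1.0 / f                       # low-frequency, disk-like task
ttf = np.exp(-f / 2.0)                 # resolution falls off with frequency
nps = 1e-4 * np.exp(-f / 3.0)          # noise power spectrum

d_base = detectability_npw(f, task_w, ttf, nps)
d_half_noise = detectability_npw(f, task_w, ttf, nps / 2)  # e.g. more dose
print(d_base, d_half_noise)
```

Halving the NPS everywhere raises d' by exactly sqrt(2) in this model, which is why d' is a convenient scalar for comparing reconstructions across exposure levels.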
Characterization and Comparison of the 10-2 SITA-Standard and Fast Algorithms
Directory of Open Access Journals (Sweden)
Yaniv Barkana
2012-01-01
Purpose: To compare the 10-2 SITA-standard and SITA-fast visual field programs in patients with glaucoma. Methods: We enrolled 26 patients with open angle glaucoma with involvement of at least one paracentral location on the 24-2 SITA-standard field test. Each subject performed 10-2 SITA-standard and SITA-fast tests. Within 2 months this sequence of tests was repeated. Results: SITA-fast was 30% shorter than SITA-standard (5.5±1.1 vs 7.9±1.1 minutes, p<0.001). Mean MD was statistically significantly higher for SITA-standard compared with SITA-fast at the first visit (Δ=0.3 dB, p=0.017) but not the second visit. The inter-visit difference in MD or in the number of depressed points was not significant for either program. Bland-Altman analysis showed that clinically significant variations can exist in individual instances between the 2 programs and between repeat tests with the same program. Conclusions: The 10-2 SITA-fast algorithm is significantly shorter than SITA-standard. The two programs have similar long-term variability. Average same-visit between-program and same-program between-visit sensitivity results were similar for the study population, but clinically significant variability was observed for some individual test pairs. Group inter- and intra-program test results may be comparable, but in the management of the individual patient field change should be verified by repeat testing.
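The Bland-Altman analysis mentioned above reduces to computing the bias and 95% limits of agreement of the paired differences. The MD values below are simulated, hypothetical data for 26 eyes, not the study's measurements.

```python
import numpy as np

def bland_altman(a, b):
    """Bias and 95% limits of agreement between two paired measurements
    (e.g. MD from SITA-standard vs SITA-fast on the same eyes)."""
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

rng = np.random.default_rng(3)
md_standard = -6.0 + 2.0 * rng.standard_normal(26)        # hypothetical MD, dB
md_fast = md_standard - 0.3 + 0.8 * rng.standard_normal(26)

bias, lo, hi = bland_altman(md_standard, md_fast)
print(round(bias, 2), round(lo, 2), round(hi, 2))
```

Wide limits of agreement relative to a clinically meaningful MD change are exactly the "clinically significant variations in individual instances" the abstract warns about.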
Comparison of multisensor data fusion algorithms for reactivity estimation in nuclear reactors
International Nuclear Information System (INIS)
A reactivity meter is provided in nuclear reactors for real-time indication of the core reactivity status in all states of reactor operation and for calibration of safety and control devices. It acquires signals from neutron flux detectors for reactivity computation based on a point-kinetics approximation of the reactor. However, in a multi-sensor environment, where more than one detector signal is available for reactivity computation, there has to be a method to utilize the information from all available sensors to find an optimal and reliable value of reactivity. In this paper, a comparative study of various data fusion methods for reactivity estimation in a multi-sensor environment has been made, based on the analysis of data collected from a research reactor. These results are compared with conventional ad hoc weighting schemes such as averaging the sensor signals or the estimated values. It is observed that the uncertainty in reactivity estimation can be reduced by fusing multi-sensor data instead of averaging the individual sensor data at the measurement or state-vector level. Moreover, measurement fusion is found to give better results in comparison with the other fusion techniques. Issues related to signals with largely differing flux values are discussed in brief. (author)
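A minimal sketch of why fusing beats plain averaging: inverse-variance weighting of independent estimates yields a fused variance no worse than the best single sensor. The reactivity values and variances below are hypothetical, and this is only the simplest static special case of the filtering-based fusion schemes the paper compares.

```python
import numpy as np

def fuse(estimates, variances):
    """Inverse-variance weighted fusion of independent sensor estimates.
    Returns the fused estimate and its variance, 1 / sum(1/var_i)."""
    w = 1.0 / np.asarray(variances)
    fused = np.sum(w * np.asarray(estimates)) / np.sum(w)
    return fused, 1.0 / np.sum(w)

# Hypothetical reactivity estimates (in mk) from three detector channels.
rho = np.array([1.02, 0.98, 1.05])
var = np.array([0.010, 0.020, 0.040])

fused, fused_var = fuse(rho, var)
print(fused, fused_var)
```

A plain average would weight the noisiest channel as heavily as the best one; the fused variance here (1/175 ≈ 0.0057) is below even the best single-channel variance (0.010).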
Directory of Open Access Journals (Sweden)
Armando Marino
2015-04-01
The surveillance of maritime areas with remote sensing is vital for security reasons, as well as for the protection of the environment. Satellite-borne synthetic aperture radar (SAR) offers large-scale surveillance, which is not reliant on solar illumination and is rather independent of weather conditions. The main feature of vessels in SAR images is a higher backscattering compared to the sea background. This peculiarity has led to the development of several ship detectors focused on identifying anomalies in the intensity of SAR images. More recently, different approaches relying on the information kept in the spectrum of a single-look complex (SLC) SAR image were proposed. This paper is focused on two main issues. Firstly, two recently developed sub-look detectors are applied for the first time to ship detection. Secondly, new and well-known ship detection algorithms are compared in order to understand which has the best performance under certain circumstances and whether the sub-look analysis improves ship detection. The comparison is done on real SAR data exploiting diversity in frequency and polarization. Specifically, the employed data consist of six RADARSAT-2 fine quad-pol acquisitions over the North Sea, five TerraSAR-X HH/VV dual-polarimetric data-takes, also over the North Sea, and one ALOS-PALSAR quad-polarimetric dataset over Tokyo Bay. Simultaneously with the SAR images, validation data were collected, which include the automatic identification system (AIS) positions of ships and wind speeds. The results of the analysis show that the performance of the different sub-look algorithms considered here is strongly dependent on polarization, frequency and resolution. Interestingly, these sub-look detectors are able to outperform the classical SAR intensity detector when the sea state is particularly high, leading to a strong clutter contribution. It was also observed that there are situations where the performance improvement thanks to the sub
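A classical intensity-anomaly detector of the kind the sub-look methods are compared against is cell-averaging CFAR. The sketch below runs a 1-D CA-CFAR over simulated exponential sea clutter with one injected bright target; the window sizes and the threshold multiplier (chosen for a false-alarm rate near 1e-4 on exponential clutter) are illustrative, not values from the paper.

```python
import numpy as np

def ca_cfar(intensity, guard=2, train=8, scale=12.0):
    """Flag cells exceeding `scale` times the local clutter mean, which is
    estimated from `train` cells on each side outside a `guard` band."""
    n = len(intensity)
    hits = np.zeros(n, dtype=bool)
    for i in range(n):
        lo = max(0, i - guard - train)
        hi = min(n, i + guard + train + 1)
        lead = intensity[lo:max(lo, i - guard)]
        trail = intensity[min(hi, i + guard + 1):hi]
        window = np.concatenate([lead, trail])
        if window.size and intensity[i] > scale * window.mean():
            hits[i] = True
    return hits

rng = np.random.default_rng(4)
sea = rng.exponential(1.0, 500)   # speckle-like sea clutter
sea[250] = 40.0                   # bright, ship-like return
hits = ca_cfar(sea)
print(np.flatnonzero(hits))
```

The locally estimated threshold is what keeps the false-alarm rate constant; as the abstract notes, a strong clutter contribution at high sea states is precisely where such intensity-only detectors start to struggle.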
Xie, H.; Hendrickx, J.; Kurc, S.; Small, E.
2002-12-01
Evapotranspiration (ET) is one of the most important components of the water balance, but also one of the most difficult to measure. Field techniques such as soil water balances and Bowen ratio or eddy covariance techniques are local, ranging from point to field scale. SEBAL (Surface Energy Balance Algorithm for Land) is an image-processing model that calculates ET and other energy exchanges at the earth's surface. SEBAL uses satellite image data (TM/ETM+, MODIS, AVHRR, ASTER, and so on) measuring visible, near-infrared, and thermal infrared radiation. The SEBAL algorithms predict a complete radiation and energy balance for the surface along with fluxes of sensible heat and aerodynamic surface roughness (Bastiaanssen et al., 1998; Allen et al., 2001). We are constructing a GIS-based database that includes spatially distributed estimates of ET from remotely sensed data at a resolution of about 30 m. The SEBAL code will be optimized for this region via comparison with surface-based observations of ET, reference ET (from windspeed, solar radiation, humidity, air temperature, and rainfall records), surface temperature, albedo, and so on. The observed data are collected at a series of towers in the middle Rio Grande Basin. The satellite image provides the instantaneous ET (ET_inst) only; therefore, estimating 24-hour ET (ET_24) requires some assumptions. Two of these assumptions will be evaluated for the study area: (1) that the instantaneous evaporative fraction (EF) is equal to the 24-hour averaged value, and (2) that the instantaneous ETrF (analogous to a 'crop coefficient', equal to instantaneous ET divided by instantaneous reference ET) is equal to the 24-hour averaged value. Seasonal ET will be estimated by expanding the 24-hour ET proportionally to a reference ET derived from weather data. References: Bastiaanssen, W.G.M., M. Menenti, R.A. Feddes, and A.A.M. Holtslag, 1998, A remote sensing surface energy balance algorithm for land (SEBAL): 1
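The two temporal-extrapolation assumptions to be evaluated amount to simple scalings, sketched below. All numbers are hypothetical tower values, and the 24-hour available energy is expressed directly in mm/day of equivalent evaporation to keep the arithmetic minimal.

```python
def et24_from_ef(ef_inst, available_energy_24_mm):
    """Assumption 1: the instantaneous evaporative fraction holds all day,
    so ET_24 = EF_inst * 24-hour available energy (in mm/day)."""
    return ef_inst * available_energy_24_mm

def et24_from_etrf(et_inst, etr_inst, etr_24):
    """Assumption 2: the instantaneous ETrF ('crop coefficient') holds all
    day, so ET_24 = (ET_inst / ETr_inst) * ETr_24."""
    return (et_inst / etr_inst) * etr_24

# Hypothetical values: EF = 0.6, 24-h available energy = 6 mm/day,
# instantaneous ET = 0.5 mm/h, reference ET = 0.7 mm/h, 24-h ref ET = 7 mm.
print(et24_from_ef(0.6, 6.0))            # mm/day under assumption 1
print(et24_from_etrf(0.5, 0.7, 7.0))     # mm/day under assumption 2
```

The two assumptions generally give different ET_24 values from the same overpass, which is why the abstract proposes evaluating both against the tower observations.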
Ben Said, Mourad; Galai, Yousr; Mhadhbi, Moez; Jedidi, Mohamed; de la Fuente, José; Darghouth, Mohamed Aziz
2012-11-23
The ixodid ticks from the Hyalomma genus are important pests of livestock, having major medical and veterinary significance in Northern Africa. Besides their direct pathogenic effects, these species are vectors of important diseases of livestock and, in some instances, of zoonoses. Anti-tick vaccines developed in Australia and Cuba based on the concealed antigen Bm86 have variable efficacy against H. anatolicum and H. dromedarii. This variation in vaccine efficacy could be explained by the variability in protein sequence between the recombinant Bm86 vaccine and the Bm86 orthologs expressed in different Hyalomma species. Bm86 orthologs from three Hyalomma tick species were amplified in two overlapping fragments and sequenced. The rate of identity of the amino acid sequences of Hmm86, He86 and Hdr86, the orthologs of Bm86 in H. marginatum marginatum, H. excavatum and H. dromedarii, respectively, with the Bm86 proteins from Rhipicephalus microplus (Australia, Argentina and Mozambique) ranged between 60 and 66%. The obtained amino-acid sequences of Hmm86, He86 and Hdr86 were compared with the Hd86-A1 sequence from H. scupense used as an experimental vaccine. The results showed an identity of 91, 88 and 87% for Hmm86, He86 and Hdr86, respectively. A specific program was used to predict B-cell epitope sites. The comparison of antigenic sites between Hd86-A1 and Hmm86/Hdr86/He86 revealed a diversity affecting 4, 8 and 12 antigenic peptides out of a total of 28 antigenic peptides, respectively. When the Bm86 ortholog amplification protocol adopted in this study was applied to H. excavatum, two alleles named He86p2a1 and He86p2a2 were detected in this species. This is the first time that two different alleles of the Bm86 gene have been recorded in the same tick specimen. He86p2a1 and He86p2a2 showed an amino acid identity of 92%. When He86p2a1 and He86p2a2 were compared to the corresponding sequence of the Hd86-A1 protein, identities of 86.4 and 91.0% were recorded, respectively. When
DEFF Research Database (Denmark)
Tamborrini, Marco; Stoffel, Sabine A; Westerfeld, Nicole; Amacker, Mario; Theisen, Michael; Zurbriggen, Rinaldo; Pluschke, Gerd
2011-01-01
In clinical trials, immunopotentiating reconstituted influenza virosomes (IRIVs) have shown great potential as a versatile antigen delivery platform for synthetic peptides derived from Plasmodium falciparum antigens. This study describes the immunogenicity of a virosomally-formulated recombinant ...... fusion protein comprising domains of the two malaria vaccine candidate antigens MSP3 and GLURP....
Directory of Open Access Journals (Sweden)
P. S. Hiremath
2014-11-01
Full Text Available In mobile ad-hoc networks (MANETs), the movement of the nodes may quickly change the network topology, increasing the overhead of topology-maintenance messages. The nodes communicate with each other by exchanging hello packets and constructing a neighbor list at each node. MANETs are vulnerable to attacks such as the black hole attack, gray hole attack, worm hole attack and sybil attack. A black hole attack has a serious impact on routing, packet delivery ratio, throughput, and end-to-end delay of packets. In this paper, the performance of clustering-based and threshold-based algorithms for the detection and prevention of cooperative black hole attacks in MANETs is compared. In this study, every node is monitored by its own cluster head (CH), while a server (SV) monitors the entire network by a channel-overhearing method. The server computes a trust value based on the sent and received packet counts of the receiver node. The methods are implemented using the AODV routing protocol in NS2 simulations. The results are obtained by comparing the performance of the clustering-based and threshold-based methods while varying the concentration of black hole nodes, and are analyzed in terms of throughput and packet delivery ratio. The results demonstrate that the threshold-based method outperforms the clustering-based method in terms of throughput, packet delivery ratio and end-to-end delay.
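The server-side trust computation described in the abstract can be sketched as follows; the function names, the idle-node default and the 0.2 detection threshold are illustrative assumptions, not values from the paper.

```python
# Sketch of a trust value computed from sent/received packet counts
# (illustrative; names and threshold are assumptions, not from the paper).

def trust_value(packets_received, packets_forwarded):
    """Trust = fraction of received packets a node actually forwards.
    A black hole node drops most traffic, so its trust tends to 0."""
    if packets_received == 0:
        return 1.0  # no evidence against an idle node (assumed default)
    return packets_forwarded / packets_received

def is_black_hole(packets_received, packets_forwarded, threshold=0.2):
    """Flag a node whose trust falls below the detection threshold."""
    return trust_value(packets_received, packets_forwarded) < threshold
```

A well-behaved node that forwards 97 of 100 received packets has trust 0.97, while a black hole that advertises routes but drops the traffic scores near zero and is flagged.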
Directory of Open Access Journals (Sweden)
Małgorzata Stramska
2013-02-01
Full Text Available The quasi-synoptic view available from satellites has been broadly used in recent years to observe in near-real time the large-scale dynamics of marine ecosystems and to estimate primary productivity in the world ocean. However, the standard global NASA ocean colour algorithms generally do not produce good results in the Baltic Sea. In this paper, we compare the ability of seven algorithms to estimate depth-integrated daily primary production (PP, mg C m-2) in the Baltic Sea. All the algorithms use surface chlorophyll concentration, sea surface temperature, photosynthetic available radiation, latitude, longitude and day of the year as input data. Algorithm-derived PP is then compared with PP estimates obtained from 14C uptake measurements. The results indicate that the best agreement between the modelled and measured PP in the Baltic Sea is obtained with the DESAMBEM algorithm. This result supports the notion that a regional approach should be used in the interpretation of ocean colour satellite data in the Baltic Sea.
Directory of Open Access Journals (Sweden)
Fatemeh Masoudnia
2013-11-01
Full Text Available In this paper three optimal approaches to designing a PID controller for a Gryphon robot are presented. The three applied approaches are the Artificial Bee Colony and Shuffled Frog Leaping algorithms and a neuro-fuzzy system. The design goal is to minimize the integral absolute error and improve the transient response by minimizing the overshoot, settling time and rise time of the step response. An objective function of these indexes is defined and minimized by applying the Shuffled Frog Leaping (SFL) algorithm, the Artificial Bee Colony (ABC) algorithm and a Neuro-Fuzzy System (FNN). After optimization of the objective function, the optimal parameters for the PID controller are adjusted. Simulation results show that the FNN has a remarkable effect on decreasing the settling time and rise time and eliminating steady-state error, while the SFL algorithm performs better on steady-state error and the ABC algorithm is better at decreasing overshoot. In the steady state, all of the methods react robustly to disturbance, but the FNN shows more stability in the transient response.
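The kind of objective function being minimized can be illustrated with a toy closed loop; the first-order plant, Euler integration and weights below are assumptions for illustration, not the Gryphon robot model.

```python
# Minimal sketch of a weighted step-response objective combining IAE,
# overshoot and settling time (toy plant and weights are assumptions).

def step_response(kp, ki, kd, n=2000, dt=0.01):
    """Unit-step response of the plant dy/dt = -y + u under a PID controller,
    integrated with a simple Euler scheme."""
    y, integ, prev_err, out = 0.0, 0.0, 1.0, []
    for _ in range(n):
        err = 1.0 - y                       # unit step setpoint
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        prev_err = err
        y += (-y + u) * dt                  # Euler step of the plant
        out.append(y)
    return out

def objective(kp, ki, kd, w=(1.0, 10.0, 1.0), dt=0.01):
    """Weighted sum of IAE, overshoot and settling time (2% band)."""
    y = step_response(kp, ki, kd, dt=dt)
    iae = sum(abs(1.0 - v) * dt for v in y)
    overshoot = max(0.0, max(y) - 1.0)
    settle = next((i * dt for i in range(len(y))
                   if all(abs(v - 1.0) <= 0.02 for v in y[i:])), len(y) * dt)
    return w[0] * iae + w[1] * overshoot + w[2] * settle
```

An optimizer such as SFL, ABC or a neuro-fuzzy tuner would search over (kp, ki, kd) to minimize `objective`; the weights trade the three indexes off against each other.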
Małgorzata Stramska; Agata Zuzewicz
2013-01-01
The quasi-synoptic view available from satellites has been broadly used in recent years to observe in near-real time the large-scale dynamics of marine ecosystems and to estimate primary productivity in the world ocean. However, the standard global NASA ocean colour algorithms generally do not produce good results in the Baltic Sea. In this paper, we compare the ability of seven algorithms to estimate depth-integrated daily primary production (PP, mg C m-2) in the Baltic Sea. All the algorith...
Kenichiro Takeuchi; Maki Ohishi; Keiko Endo; Kenichi Suzumura; Hitoshi Naraoka; Takeji Ohata; Jiro Seki; Yoichi Miyamae; Masashi Honma; Tomoyoshi Soga
2014-01-01
Gastrointestinal symptoms are a common manifestation of adverse drug effects. Non-steroid anti-inflammatory drugs (NSAIDs) are widely prescribed drugs that induce the serious side effect of gastric mucosal ulceration. Biomarkers for these side effects have not been identified and ulcers are now only detectable by endoscopy. We previously identified five metabolites as biomarker candidates for NSAID-induced gastric ulcer using capillary electrophoresis–mass spectrometry (CE–MS)-based metabolom...
Abrams, Daniel S.
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases (commonly found in ab initio physics and chemistry problems) for which all known classical algorithms require exponential time. Fast algorithms for simulating many body Fermi systems are also provided in both first and second quantized descriptions. An efficient quantum algorithm for anti-symmetrization is given as well as a detailed discussion of a simulation of the Hubbard model. In addition, quantum algorithms that calculate numerical integrals and various characteristics of stochastic processes are described. Two techniques are given, both of which obtain an exponential speed increase in comparison to the fastest known classical deterministic algorithms and a quadratic speed increase in comparison to classical Monte Carlo (probabilistic) methods. I derive a simpler and slightly faster version of Grover's mean algorithm, show how to apply quantum counting to the problem, develop some variations of these algorithms, and show how both (apparently distinct) approaches can be understood from the same unified framework. Finally, the relationship between physics and computation is explored in some more depth, and it is shown that computational complexity theory depends very sensitively on physical laws. In particular, it is shown that nonlinear quantum mechanics allows for the polynomial time solution of NP-complete and #P oracle problems. Using the Weinberg model as a simple example, the explicit construction of the necessary gates is derived from the underlying physics. Nonlinear quantum algorithms are also presented using Polchinski type nonlinearities which do not allow for superluminal communication. (Copies available exclusively from MIT Libraries, Rm. 14- 0551, Cambridge, MA 02139-4307. Ph. 617-253-5668; Fax 617-253-1690.)
Aleardi, Mattia
2015-06-01
Predicting missing log data is a useful capability for geophysicists. Geophysical measurements in boreholes are frequently affected by gaps in the recording of one or more logs. In particular, sonic and shear sonic logs are often recorded over limited intervals along the well path, but the information these logs contain is crucial for many geophysical applications. Estimating missing log intervals from a set of recorded logs is therefore of great interest. In this work, I propose to estimate the data in missing parts of velocity logs using a genetic algorithm (GA) optimisation and I demonstrate that this method is capable of extracting linear or exponential relations that link the velocity to other available logs. The technique was tested on different sets of logs (gamma ray, resistivity, density, neutron, sonic and shear sonic) from three wells drilled in different geological settings and through different lithologies (sedimentary and intrusive rocks). The effectiveness of this methodology is demonstrated by a series of blind tests and by evaluating the correlation coefficients between the true versus predicted velocity values. The combination of GA optimisation with a Gibbs sampler (GS) and subsequent Monte Carlo simulations allows the uncertainties in the final predicted velocities to be reliably quantified. The GA method is also compared with the neural networks (NN) approach and classical multilinear regression. The comparisons show that the GA, NN and multilinear methods provide velocity estimates with the same predictive capability when the relation between the input logs and the seismic velocity is approximately linear. The GA and NN approaches are more robust when the relations are non-linear. However, in all cases, the main advantage of the GA optimisation procedure over the NN approach is that it directly provides an interpretable and simple equation that relates the input and predicted logs. Moreover, the GA method is not affected by the disadvantages
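The core idea of evolving an interpretable relation can be shown with a toy GA that fits linear coefficients mapping one available log to velocity; the data, GA settings and operators below are hypothetical, and the paper's implementation (with Gibbs sampling and exponential relations) is far more elaborate.

```python
# Toy sketch: GA search for (a, b) in v = a*x + b (hypothetical data/settings).
import random

def fit_linear_ga(x, v, generations=300, pop_size=40, seed=1):
    """Evolve (a, b) minimizing the sum of squared errors of v ~= a*x + b."""
    rng = random.Random(seed)

    def sse(ind):
        a, b = ind
        return sum((a * xi + b - vi) ** 2 for xi, vi in zip(x, v))

    pop = [(rng.uniform(-10, 10), rng.uniform(-10, 10)) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=sse)
        parents = pop[:pop_size // 2]        # truncation selection (elitist)
        children = []
        while len(children) < pop_size - len(parents):
            p, q = rng.sample(parents, 2)
            a = (p[0] + q[0]) / 2 + rng.gauss(0, 0.1)  # crossover + mutation
            b = (p[1] + q[1]) / 2 + rng.gauss(0, 0.1)
            children.append((a, b))
        pop = parents + children
    return min(pop, key=sse)
```

Run on noise-free samples of v = 2x + 1, the GA recovers coefficients close to (2, 1); unlike a neural network, the result is a directly interpretable equation.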
Library correlation nuclide identification algorithm
International Nuclear Information System (INIS)
A novel nuclide identification algorithm, Library Correlation Nuclide Identification (LibCorNID), is proposed. In addition to the spectrum, LibCorNID requires the standard energy, peak shape and peak efficiency calibrations. Input parameters include tolerances for some expected variations in the calibrations, a minimum relative nuclide peak area threshold, and a correlation threshold. Initially, the measured peak spectrum is obtained as the residual after baseline estimation via peak erosion, removing the continuum. Library nuclides are filtered by examining the possible nuclide peak areas in terms of the measured peak spectrum and applying the specified relative area threshold. Remaining candidates are used to create a set of theoretical peak spectra based on the calibrations and library entries. These candidate spectra are then simultaneously fit to the measured peak spectrum while also optimizing the calibrations within the bounds of the specified tolerances. Each candidate with optimized area still exceeding the area threshold undergoes a correlation test. The normalized Pearson's correlation value is calculated as a comparison of the optimized nuclide peak spectrum to the measured peak spectrum with the other optimized peak spectra subtracted. Those candidates with correlation values that exceed the specified threshold are identified and their optimized activities are output. An evaluation of LibCorNID was conducted to verify identification performance in terms of detection probability and false alarm rate. LibCorNID has been shown to perform well compared to standard peak-based analyses.
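The correlation test at the heart of LibCorNID can be sketched as follows: each candidate's optimized peak spectrum is correlated against the measured peak spectrum with all other candidates' spectra subtracted. The list-based spectra and the 0.8 threshold are illustrative assumptions.

```python
# Sketch of LibCorNID's correlation test (illustrative; the real algorithm
# also optimizes calibrations and areas before this step).
import math

def pearson(x, y):
    """Plain Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x) *
                    sum((b - my) ** 2 for b in y))
    return num / den if den else 0.0

def correlation_test(measured, candidates, threshold=0.8):
    """Return indices of candidates whose optimized peak spectrum correlates
    with the residual of the measured spectrum after removing the others."""
    identified = []
    for i, cand in enumerate(candidates):
        residual = list(measured)
        for j, other in enumerate(candidates):
            if j != i:
                residual = [r - o for r, o in zip(residual, other)]
        if pearson(cand, residual) >= threshold:
            identified.append(i)
    return identified
```

For a measured spectrum that is the sum of two candidate spectra, subtracting either candidate leaves exactly the other, so both correlate perfectly with their residuals and both are identified.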
Institute of Scientific and Technical Information of China (English)
Ikuo Nagashima; Tadahiro Takada; Miki Adachi; Hirokazu Nagawa; Tetsuichiro Muto; Kota Okinaga
2006-01-01
AIM: To accurately select good candidates for hepatic resection of colorectal liver metastases. METHODS: Thirteen clinicopathological features, which were recognizable before or during surgery, were selected retrospectively in 81 consecutive patients in one hospital (Group I). These features were entered into a multivariate analysis to determine independent and significant variables affecting long-term prognosis after hepatectomy. Using the selected variables, we created a scoring formula to classify patients with colorectal liver metastases and select good candidates for hepatic resection. The usefulness of the new scoring system was examined in a series of 92 patients from another hospital (Group II), comparing the number of selected variables. RESULTS: Among the 81 patients of Group I, multivariate analysis, i.e. Cox regression analysis, showed that multiple tumors, a largest tumor greater than 5 cm in diameter, and resectable extrahepatic metastases were significant and independent prognostic factors for poor survival after hepatectomy (P < 0.05). In addition, three further factors: serosa invasion, local lymph node metastases of the primary cancer, and a postoperative disease-free interval of less than 1 year (including synchronous hepatic metastasis), were not significant; however, they were selected by a stepwise method of Cox regression analysis (0.05 < P < 0.20). Using these six variables, we created a new scoring formula to classify patients with colorectal liver metastases. Finally, our new scoring system classified patients well not only in Group I but also in Group II, according to long-term outcomes after hepatic resection. The positive number of these six variables also classified them well. CONCLUSION: Both our new scoring system and the positive number of significant prognostic factors are useful for classifying patients with colorectal liver metastases in the preoperative selection of good candidates for hepatic resection.
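The simpler of the two classifiers reported above, the count of positive prognostic factors, can be sketched directly; the field names and the example cut-offs below are paraphrased from the abstract and are assumptions, not the authors' exact formula.

```python
# Sketch of the "positive number of prognostic factors" classifier
# (factor encodings are illustrative assumptions based on the abstract).

FACTORS = [
    lambda p: p["tumour_count"] > 1,                     # multiple tumors
    lambda p: p["largest_tumour_cm"] > 5,                # largest tumor > 5 cm
    lambda p: p["resectable_extrahepatic_mets"],         # extrahepatic disease
    lambda p: p["serosa_invasion"],                      # primary serosa invasion
    lambda p: p["local_node_mets"],                      # local lymph node mets
    lambda p: p["disease_free_interval_years"] < 1,      # DFI < 1 year
]

def risk_score(patient):
    """Number of positive prognostic factors (0 = most favorable group)."""
    return sum(1 for factor in FACTORS if factor(patient))
```

A patient with none of the six factors scores 0 (a good surgical candidate by this scheme); each additional positive factor worsens the predicted long-term outcome.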
Directory of Open Access Journals (Sweden)
Tamborrini Marco
2011-12-01
Full Text Available Abstract Background In clinical trials, immunopotentiating reconstituted influenza virosomes (IRIVs) have shown great potential as a versatile antigen delivery platform for synthetic peptides derived from Plasmodium falciparum antigens. This study describes the immunogenicity of a virosomally-formulated recombinant fusion protein comprising domains of the two malaria vaccine candidate antigens MSP3 and GLURP. Methods The highly purified recombinant protein GMZ2 was coupled to phosphatidylethanolamine and the conjugates incorporated into the membrane of IRIVs. The immunogenicity of this adjuvant-free virosomal formulation was compared to GMZ2 formulated with the adjuvants Montanide ISA 720 and Alum in three mouse strains with different genetic backgrounds. Results Intramuscular injections of all three candidate vaccine formulations induced GMZ2-specific antibody responses in all mice tested. In general, the humoral immune response in outbred NMRI mice was stronger than that in inbred BALB/c and C57BL/6 mice. ELISA with the recombinant antigens demonstrated immunodominance of the GLURP component over the MSP3 component. However, compared to the Al(OH)3-adjuvanted formulation the two other formulations elicited in NMRI mice a larger proportion of anti-MSP3 antibodies. Analyses of the induced GMZ2-specific IgG subclass profiles showed for all three formulations a predominance of the IgG1 isotype. Immune sera against all three formulations exhibited cross-reactivity with in vitro cultivated blood-stage parasites. Immunofluorescence and immunoblot competition experiments showed that both components of the hybrid protein induced IgG cross-reactive with the corresponding native proteins. Conclusion A virosomal formulation of the chimeric protein GMZ2 induced P. falciparum blood stage parasite cross-reactive IgG responses specific for both MSP3 and GLURP. GMZ2 thus represents a candidate component suitable for inclusion into a multi-valent virosomal
Novel multi-objective optimization algorithm
Institute of Scientific and Technical Information of China (English)
Jie Zeng; Wei Nie
2014-01-01
Many multi-objective evolutionary algorithms (MOEAs) can converge to the Pareto optimal front and work well on two or three objectives, but they deteriorate when faced with many-objective problems. Indicator-based MOEAs, which adopt various indicators to evaluate the fitness values (instead of the Pareto-dominance relation to select candidate solutions), have been regarded as promising schemes that yield more satisfactory results than well-known algorithms, such as the non-dominated sorting genetic algorithm (NSGA-II) and the strength Pareto evolutionary algorithm (SPEA2). However, they can suffer from a slow convergence speed. This paper proposes a new indicator-based multi-objective optimization algorithm, namely, the multi-objective shuffled frog leaping algorithm based on the ε indicator (ε-MOSFLA). This algorithm adopts a memetic meta-heuristic, namely, the SFLA, which is characterized by a powerful capability for global search and quick convergence, as an evolutionary strategy, and a simple and effective ε-indicator as a fitness assignment scheme to conduct the search procedure. Experimental results, in comparison with other representative indicator-based MOEAs and traditional Pareto-based MOEAs on several standard test problems with up to 50 objectives, show that ε-MOSFLA is the best algorithm for solving many-objective optimization problems in terms of solution quality as well as speed of convergence.
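The ε-indicator used for fitness assignment can be sketched compactly. Assuming the additive variant I_ε+ and minimization throughout (the abstract does not specify the variant), it is the smallest shift ε by which front A must be translated so that every point of front B is weakly dominated by some point of A.

```python
# Sketch of the additive epsilon-indicator I_eps+(A, B) for minimization
# (the additive variant is an assumption; the abstract only says "ε indicator").

def eps_indicator(front_a, front_b):
    """Smallest eps such that every b in B is weakly dominated by some
    a in A shifted by eps in every objective."""
    return max(                                    # worst case over points of B
        min(                                       # best covering point of A
            max(a_k - b_k for a_k, b_k in zip(a, b))
            for a in front_a
        )
        for b in front_b
    )
```

A non-positive value means A already weakly dominates B; a strongly dominated B gives a negative value, and a B that A fails to cover gives a positive one. Ranking candidate solutions by this quantity replaces Pareto-dominance sorting in ε-MOSFLA's fitness assignment.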
Directory of Open Access Journals (Sweden)
Yinliang Wang
Full Text Available The leaf beetle Ambrostoma quadriimpressum (Coleoptera: Chrysomelidae) is a predominant forest pest that causes substantial damage to the lumber industry and city management. However, no effective and environmentally friendly chemical method has been discovered to control this pest. Until recently, the molecular basis of the olfactory system in A. quadriimpressum was completely unknown. In this study, antennal and leg transcriptomes were analyzed and compared using deep sequencing data to identify the olfactory genes in A. quadriimpressum. Moreover, the expression profiles of candidate olfactory genes in both males and females were analyzed and validated by bioinformatics, motif analysis, homology analysis, semi-quantitative RT-PCR and RT-qPCR experiments in antennal and non-olfactory organs to explore the candidate olfactory genes that might play key roles in the life cycle of A. quadriimpressum. As a result, approximately 102.9 million and 97.3 million clean reads were obtained from the libraries created from the antennae and legs, respectively. Annotation led to 34344 unigenes, which were matched to known proteins. Annotation data revealed that the number of genes in the antennae with binding functions and receptor activity was greater than that in the legs. Furthermore, many pathway genes were differentially expressed in the two organs. Sixteen candidate odorant binding proteins (OBPs), 10 chemosensory proteins (CSPs), 34 odorant receptors (ORs), 20 ionotropic receptors (IRs) and 2 sensory neuron membrane proteins (SNMPs) and their isoforms were identified. Additionally, 15 OBPs, 9 CSPs, 18 ORs, 6 IRs and 2 SNMPs were predicted to be complete ORFs. Using RT-PCR, RT-qPCR and homology analysis, AquaOBP1/2/4/7/C1/C6, AquaCSP3/9, AquaOR8/9/10/14/15/18/20/26/29/33 and AquaIR8a/13/25a showed olfactory-specific expression, indicating that these genes might play a key role in olfaction-related behaviors in A. quadriimpressum such as foraging and seeking. AquaOBP4/C5, Aqua
Suarez, Max J. (Editor); Chang, Alfred T. C.; Chiu, Long S.
1997-01-01
Seventeen months of rainfall data (August 1987-December 1988) from nine satellite rainfall algorithms (Adler, Chang, Kummerow, Prabhakara, Huffman, Spencer, Susskind, and Wu) were analyzed to examine the uncertainty of satellite-derived rainfall estimates. The variability among algorithms, measured as the standard deviation computed from the ensemble of algorithms, shows regions of high algorithm variability tend to coincide with regions of high rain rates. Histograms of pattern correlation (PC) between algorithms suggest a bimodal distribution, with separation at a PC-value of about 0.85. Applying this threshold as a criteria for similarity, our analyses show that algorithms using the same sensor or satellite input tend to be similar, suggesting the dominance of sampling errors in these satellite estimates.
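The similarity grouping implied by the 0.85 pattern-correlation threshold can be sketched as clustering algorithms by the transitive closure of the "PC above threshold" relation; the union-find implementation and the toy matrix below are illustrative, not the authors' procedure.

```python
# Sketch: group rainfall algorithms whose pairwise pattern correlation (PC)
# exceeds a threshold, via union-find (illustrative, not the paper's code).

def similarity_groups(names, pc, threshold=0.85):
    """pc[i][j] is the pattern correlation between algorithms i and j."""
    parent = list(range(len(names)))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i

    for i in range(len(names)):
        for j in range(i + 1, len(names)):
            if pc[i][j] >= threshold:
                parent[find(i)] = find(j)   # union similar algorithms

    groups = {}
    for i, name in enumerate(names):
        groups.setdefault(find(i), []).append(name)
    return list(groups.values())
```

With this grouping, algorithms sharing the same sensor or satellite input would land in the same cluster, mirroring the paper's observation about the dominance of sampling errors.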
Directory of Open Access Journals (Sweden)
Mahdi Sadeghzadeh
2014-02-01
Full Text Available The genetic algorithm is a population-based algorithm, and many optimization problems have been solved successfully with this method. With the increase in computer attacks, the demand for a secure, efficient and reliable Internet has grown. Cryptology, the science of hidden communication, comprises two categories: cryptography (encryption) and cryptanalysis. In this paper, several cryptanalyses of a permutation cipher based on genetic algorithms, tabu search and simulated annealing are investigated. The study also attempts to compare the performance of these algorithms in terms of the search effort required, and the results are compared.
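All three metaheuristics above need a fitness function scoring how "English-like" a candidate key's decryption looks. A common choice, sketched here under assumptions (a columnar transposition as the permutation cipher, a tiny bigram table, text length divisible by the key length), is a bigram-frequency score:

```python
# Sketch of a cryptanalysis fitness function for a permutation
# (columnar transposition) cipher; the bigram table is a toy assumption.

COMMON_BIGRAMS = {"TH": 3, "HE": 3, "IN": 2, "ER": 2, "AN": 2, "ED": 1, "ND": 1}

def decrypt_columnar(ciphertext, key):
    """Undo a columnar transposition; key[i] gives the plaintext column that
    the i-th ciphertext block came from (length must divide evenly)."""
    cols = len(key)
    rows = len(ciphertext) // cols
    blocks = [ciphertext[i * rows:(i + 1) * rows] for i in range(cols)]
    plain_cols = [None] * cols
    for block, col in zip(blocks, key):
        plain_cols[col] = block
    return "".join("".join(c[r] for c in plain_cols) for r in range(rows))

def fitness(ciphertext, key):
    """Higher when the decryption contains more common English bigrams."""
    text = decrypt_columnar(ciphertext, key)
    return sum(COMMON_BIGRAMS.get(text[i:i + 2], 0) for i in range(len(text) - 1))
```

A GA, tabu search or simulated annealing run then searches the space of permutation keys for the one maximizing this fitness; the correct key scores markedly higher than a wrong one.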
Jones, Andrew Osler
There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict its magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs further downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. Dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their ability to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density and field size on dose to the lung
Herring, Jeannette L.; Maurer, Calvin R., Jr.; Muratore, Diane M.; Galloway, Robert L., Jr.; Dawant, Benoit M.
1999-05-01
This paper presents a comparison of iso-intensity-based surface extraction algorithms applied to computed tomography (CT) images of the spine. The extracted vertebral surfaces are used in surface-based registration of CT images to physical space, where our ultimate goal is the development of a technique that can be used for image-guided spinal surgery. The surface extraction process has a direct effect on image-guided surgery in two ways: the extracted surface must provide an accurate representation of the actual surface so that a good registration can be achieved, and the number of polygons in the mesh representation of the extracted surface must be small enough to allow the registration to be performed quickly. To examine the effect of the surface extraction process on registration error and run time, we have performed a large number of experiments on two plastic spine phantoms. Using a marker-based system to assess accuracy, we have found that submillimetric registration accuracy can be achieved using a point-to-surface registration algorithm with simplified and unsimplified members of the general class of iso-intensity-based surface extraction algorithms. This research has practical implications, since it shows that several versions of the widely available class of intensity-based surface extraction algorithms can be used to provide sufficient accuracy for vertebral registration. Since intensity-based algorithms are completely deterministic and fully automatic, this finding simplifies the pre-processing required for image-guided back surgery.
Indian Academy of Sciences (India)
P Chitra; P Venkatesh; R Rajaram
2011-04-01
The task scheduling problem in heterogeneous distributed computing systems is a multiobjective optimization problem (MOP). In heterogeneous distributed computing systems (HDCS), there is a possibility of processor and network failures and this affects the applications running on the HDCS. To reduce the impact of failures on an application running on HDCS, scheduling algorithms must be devised which minimize not only the schedule length (makespan) but also the failure probability of the application (reliability). These objectives are conflicting and it is not possible to minimize both objectives at the same time. Thus, scheduling algorithms are needed that account for both schedule length and failure probability. Multiobjective Evolutionary Computation algorithms (MOEAs) are well-suited for multiobjective task scheduling in heterogeneous environments. Two multi-objective evolutionary algorithms, the Multiobjective Genetic Algorithm (MOGA) and Multiobjective Evolutionary Programming (MOEP) with non-dominated sorting, are developed and compared for various random task graphs and also for a real-time numerical application graph. The metrics for evaluating the convergence and diversity of the non-dominated solutions obtained by the two algorithms are reported. The simulation results confirm that the proposed algorithms can be used for solving the task scheduling problem at reduced computational times compared to the weighted-sum based biobjective algorithm in the literature.
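Both MOEAs above rest on the Pareto-dominance machinery. A minimal sketch of extracting the non-dominated front from candidate schedules, represented here as (makespan, failure probability) pairs with both objectives minimized (an illustration, not the paper's code):

```python
# Sketch of Pareto dominance and non-dominated front extraction for
# (makespan, failure probability) pairs, both minimized (illustrative).

def dominates(a, b):
    """a dominates b if it is no worse in every objective and strictly
    better in at least one."""
    return (all(x <= y for x, y in zip(a, b)) and
            any(x < y for x, y in zip(a, b)))

def nondominated_front(solutions):
    """Keep only solutions not dominated by any other solution."""
    return [s for s in solutions
            if not any(dominates(o, s) for o in solutions if o != s)]

schedules = [(10.0, 0.05), (12.0, 0.02), (11.0, 0.06), (15.0, 0.02)]
# (11.0, 0.06) is dominated by (10.0, 0.05); (15.0, 0.02) by (12.0, 0.02).
print(nondominated_front(schedules))  # -> [(10.0, 0.05), (12.0, 0.02)]
```

NSGA-style non-dominated sorting applies this front extraction repeatedly, peeling off successive fronts to rank the whole population.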
Brewin, Robert J.W.; Sathyendranath, Shubha; Muller, Dagmar; Brockmann, Carsten; Deschamps, Pierre-Yves; Devred, Emmanuel; Doerffer, Roland; Fomferra, Norman; Franz, Bryan; Grant, Mike; Groom, Steve; Horseman, Andrew; Hu, Chuanmin; Krasemann, Hajo; Lee, ZhongPing; Maritorena, Stephane; Melin, Frederic; Peters, Marco; Platt, Trevor; Regner, Peter; Smyth, Tim; Steinmetz, Francois; Swinton, John; Werdell, Jeremy; White, George N., III
2013-01-01
Satellite-derived remote-sensing reflectance (Rrs) can be used for mapping biogeochemically relevant variables, such as the chlorophyll concentration and the Inherent Optical Properties (IOPs) of the water, at global scale for use in climate-change studies. Prior to generating such products, suitable algorithms have to be selected that are appropriate for the purpose. Algorithm selection needs to account for both qualitative and quantitative requirements. In this paper we develop an objective methodology designed to rank the quantitative performance of a suite of bio-optical models. The objective classification is applied using the NASA bio-Optical Marine Algorithm Dataset (NOMAD). Using in situ Rrs as input to the models, the performance of eleven semianalytical models, as well as five empirical chlorophyll algorithms and an empirical diffuse attenuation coefficient algorithm, is ranked for spectrally-resolved IOPs, chlorophyll concentration and the diffuse attenuation coefficient at 489 nm. The sensitivity of the objective classification and the uncertainty in the ranking are tested using a Monte-Carlo approach (bootstrapping). Results indicate that the performance of the semi-analytical models varies depending on the product and wavelength of interest. For chlorophyll retrieval, empirical algorithms perform better than semi-analytical models, in general. The performance of these empirical models reflects either their immunity to scale errors or instrument noise in Rrs data, or simply that the data used for model parameterisation were not independent of NOMAD. Nonetheless, uncertainty in the classification suggests that the performance of some semi-analytical algorithms at retrieving chlorophyll is comparable with the empirical algorithms. For phytoplankton absorption at 443 nm, some semi-analytical models also perform with similar accuracy to an empirical model. We discuss the potential biases, limitations and uncertainty in the approach, as well as additional
International Nuclear Information System (INIS)
Astrophysical black hole candidates are thought to be the Kerr black hole predicted by General Relativity. However, in order to confirm the Kerr-nature of these objects, we need to probe the geometry of the space-time around them and check that observations are consistent with the predictions of the Kerr metric. That can be achieved, for instance, by studying the properties of the electromagnetic radiation emitted by the gas in the accretion disk. The high-frequency quasi-periodic oscillations observed in the X-ray flux of some stellar-mass black hole candidates might do the job. As the frequencies of these oscillations depend only very weakly on the observed X-ray flux, it is thought they are mainly determined by the metric of the space-time. In this paper, I consider the resonance models proposed by Abramowicz and Kluzniak and I extend previous results to the case of non-Kerr space-times. The emerging picture is more complicated than the one around a Kerr black hole and there is a larger number of possible combinations between different modes. I then compare the bounds inferred from the twin peak high-frequency quasi-periodic oscillations observed in three micro-quasars (GRO J1655-40, XTE J1550-564, and GRS 1915+105) with the measurements from the continuum-fitting method of the same objects. For Kerr black holes, the two approaches do not provide consistent results. In a non-Kerr geometry, this conflict may be solved if the observed quasi-periodic oscillations are produced by the resonance νθ:νr = 3:1, where νθ and νr are the two epicyclic frequencies. It is at least worth mentioning that the deformation from the Kerr solution required by observations would be consistent with the one suggested in another recent work discussing the possibility that steady jets are powered by the spin of these compact objects
Fykse, Egil
2013-01-01
The objective of this thesis is to compare the suitability of FPGAs, GPUs and DSPs for digital image processing applications. Normalized cross-correlation is used as a benchmark, because this algorithm includes convolution, a common operation in image processing and elsewhere. Normalized cross-correlation is a template matching algorithm that is used to locate predefined objects in a scene image. Because the throughput of DSPs is low for efficient calculation of normalized cross-correlation, ...
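The benchmark operation can be sketched in one dimension; the image case is analogous, sliding the template over both axes. This is an illustration of normalized cross-correlation, not the thesis code.

```python
# Sketch of 1-D normalized cross-correlation template matching
# (the 2-D image case slides the template over both axes analogously).
import math

def ncc(signal, template):
    """Normalized cross-correlation of template against every window of
    signal; each score lies in [-1, 1], peaking where the template matches."""
    m = len(template)
    t_mean = sum(template) / m
    t_dev = [t - t_mean for t in template]
    t_norm = math.sqrt(sum(d * d for d in t_dev))
    scores = []
    for i in range(len(signal) - m + 1):
        win = signal[i:i + m]
        w_mean = sum(win) / m
        w_dev = [w - w_mean for w in win]
        w_norm = math.sqrt(sum(d * d for d in w_dev))
        num = sum(a * b for a, b in zip(w_dev, t_dev))
        scores.append(num / (w_norm * t_norm) if w_norm and t_norm else 0.0)
    return scores

scores = ncc([0, 0, 1, 3, 1, 0, 0], [1, 3, 1])
print(scores.index(max(scores)))  # -> 2 (exact match position, score 1.0)
```

The mean subtraction and normalization make the score invariant to brightness offset and contrast scaling, which is why NCC is preferred over plain correlation for locating predefined objects in a scene, and why its inner convolution-like loop makes it a representative throughput benchmark for FPGAs, GPUs and DSPs.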
International Nuclear Information System (INIS)
Objective: To study the respiration-induced deformation of normal liver and tumors in hepatocellular carcinoma (HCC) patients by comparing a modified demons algorithm with a B-spline-based free-form deformation (FFD) algorithm, using four-dimensional computed tomography (4DCT). Methods: The 4DCT images of 8 HCC patients were sorted into 10 series, named CT0, CT10, ..., CT80, CT90 according to the respiratory phase; CT0 and CT50 were defined as end-inhale and end-exhale, respectively. CT50 was chosen as the reference image. We used the modified demons algorithm and the B-spline-based FFD algorithm to deform the images, with linear interpolation used in both mode 1 and mode 2. The normalized mutual information (NMI), Hausdorff distance (dH) and registration speed were used to evaluate registration performance. Results: For the end-inhale and end-exhale images of the 8 HCC patients, the average NMI after demons registration in mode 1 was 4.75% higher than with the B-spline-based FFD algorithm (P = 0.002), and the dH after demons registration was 15.2% lower than with the FFD algorithm (P = 0.02). In addition, the demons algorithm had an absolute advantage in registration speed (P = 0.036). Conclusions: The breathing-induced deformation of the normal liver and tumor targets is significant. Both algorithms can register 4DCT images, and the modified demons registration deforms 4DCT images effectively. (authors)
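The NMI criterion used above to score registration quality can be sketched from a joint intensity histogram; the bin count and the (H(A)+H(B))/H(A,B) variant are assumptions, as the abstract does not specify them.

```python
# Sketch of normalized mutual information between two intensity lists
# (bin count and NMI variant are illustrative assumptions).
import math
from collections import Counter

def entropy(counts, total):
    """Shannon entropy (nats) of a histogram given its bin counts."""
    return -sum((c / total) * math.log(c / total) for c in counts if c)

def nmi(img_a, img_b, bins=8):
    """NMI = (H(A) + H(B)) / H(A, B) for two equally sized intensity lists."""
    def bin_of(v, lo, hi):
        return min(int((v - lo) / (hi - lo + 1e-12) * bins), bins - 1)
    lo_a, hi_a = min(img_a), max(img_a)
    lo_b, hi_b = min(img_b), max(img_b)
    pairs = [(bin_of(a, lo_a, hi_a), bin_of(b, lo_b, hi_b))
             for a, b in zip(img_a, img_b)]
    n = len(pairs)
    h_joint = entropy(Counter(pairs).values(), n)
    h_a = entropy(Counter(p[0] for p in pairs).values(), n)
    h_b = entropy(Counter(p[1] for p in pairs).values(), n)
    return (h_a + h_b) / h_joint if h_joint else 2.0

# For identical (perfectly registered) images H(A,B) = H(A) = H(B), so NMI = 2;
# the value falls toward 1 as the images become statistically independent.
```

A deformable registration such as demons or B-spline FFD is driven to increase this score between the deformed moving image and the reference phase CT50.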
Neuner, Philippe; Peier, Andrea M; Talamo, Fabio; Ingallinella, Paolo; Lahm, Armin; Barbato, Gaetano; Di Marco, Annalise; Desai, Kunal; Zytko, Karolina; Qian, Ying; Du, Xiaobing; Ricci, Davide; Monteagudo, Edith; Laufer, Ralph; Pocai, Alessandro; Bianchi, Elisabetta; Marsh, Donald J; Pessi, Antonello
2014-01-01
Neuromedin U (NMU) is an endogenous peptide implicated in the regulation of feeding, energy homeostasis, and glycemic control, which is being considered for the therapy of obesity and diabetes. A key liability of NMU as a therapeutic is its very short half-life in vivo. We show here that conjugation of NMU to human serum albumin (HSA) yields a compound with long circulatory half-life, which maintains full potency at both the peripheral and central NMU receptors. Initial attempts to conjugate NMU via the prevalent strategy of reacting a maleimide derivative of the peptide with the free thiol of Cys34 of HSA met with limited success, because the resulting conjugate was unstable in vivo. Use of a haloacetyl derivative of the peptide led instead to the formation of a metabolically stable conjugate. HSA-NMU displayed long-lasting, potent anorectic, and glucose-normalizing activity. When compared side by side with a previously described PEG conjugate, HSA-NMU proved superior on a molar basis. Collectively, our results reinforce the notion that NMU-based therapeutics are promising candidates for the treatment of obesity and diabetes. PMID:24222478
Energy Technology Data Exchange (ETDEWEB)
Kobayashi, Naoko [Univ. of Nebraska, Lincoln, NE (United States); Barnes, Austin [Univ. of Nebraska, Lincoln, NE (United States); Jensen, Travis [Univ. of Nebraska, Lincoln, NE (United States); Noel, Eric [Univ. of Nebraska, Lincoln, NE (United States); Andlay, Gunjan [Synaptic Research, Baltimore, MD (United States); Rosenberg, Julian N. [Johns Hopkins Univ., Baltimore, MD (United States); Betenbaugh, Michael J. [Johns Hopkins Univ., Baltimore, MD (United States); Guarnieri, Michael T. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Oyler, George A. [Univ. of Nebraska, Lincoln, NE (United States); Johns Hopkins Univ., Baltimore, MD (United States); Synaptic Research, Baltimore, MD (United States)
2015-09-01
Chlorella species from the UTEX collection, classified by rDNA-based phylogenetic analysis, were screened for biomass and lipid production at different scales and modes of culture. Lead candidate strains C. sorokiniana UTEX 1230 and C. vulgaris UTEX 395 and 259 were compared between vigorous aeration with filtered atmospheric air and 3% CO_{2} shake-flask cultivation. UTEX 1230 produced roughly twice the biomass (652 mg L^{-1} dry weight) under both ambient-CO_{2} vigorous aeration and 3% CO_{2} conditions, while UTEX 395 and 259 under 3% CO_{2} reached roughly 3 times the dry weight (863 mg L^{-1}) obtained under ambient-CO_{2} vigorous aeration. The triacylglycerol contents of UTEX 395 and 259 increased more than 30-fold, to 30% of dry weight, with 3% CO_{2}, indicating that additional CO_{2} is essential for both biomass and lipid accumulation in UTEX 395 and 259.
Directory of Open Access Journals (Sweden)
Khalid A. Almahorg
2013-11-01
Mobile ad hoc networks (MANETs) are gaining increased interest due to their wide range of potential applications in civilian and military sectors. Self-control, self-organization, topology dynamism, and the bandwidth limitation of the wireless communication channel make implementation of MANETs a challenging task. The Connected Dominating Set (CDS) has been proposed to facilitate MANET realization. Minimizing the CDS size has several advantages; however, this minimization is an NP-complete problem, so approximation algorithms are used to tackle it. The fastest CDS creation algorithm is the Wu and Li algorithm; however, it generates relatively high signaling overhead. Utilizing the location information of network members reduces the signaling overhead of the Wu and Li algorithm. In this paper, we compare the performance of the Wu and Li algorithm with its location-information-based version under two types of Medium Access Control (MAC) protocols and several network sizes. The MAC protocols used are a virtual ideal MAC protocol and the IEEE 802.11 MAC protocol. The use of a virtual ideal MAC enables us to investigate how the real-world performance of these algorithms deviates from their ideal-conditions counterpart. The simulator used in this research is the ns-2 network simulator.
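The core of the Wu and Li algorithm is a local marking rule: a node joins the CDS if it has two neighbours that are not themselves adjacent. A centralized Python sketch of just that rule (the paper's algorithm runs distributed and includes pruning rules omitted here; the data layout is my own):

```python
def wu_li_marking(adj):
    """Wu and Li's marking rule, run centrally for illustration.
    `adj` maps each node to the set of its one-hop neighbours.
    A node is marked (joins the connected dominating set) iff some
    pair of its neighbours is not directly connected."""
    cds = set()
    for v, nbrs in adj.items():
        nb = sorted(nbrs)
        if any(nb[j] not in adj[nb[i]]
               for i in range(len(nb))
               for j in range(i + 1, len(nb))):
            cds.add(v)
    return cds
```

On a path 0-1-2-3 the interior nodes are marked; on a triangle no node is, since every neighbour pair is already connected.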
Indian Academy of Sciences (India)
Sachin Vrajlal Rajani; Vivek J Pandya
2015-02-01
Solar energy is a clean, green and renewable source of energy, available in abundance in nature. Solar cells convert solar energy into electric current by photovoltaic action. The output power of a solar cell depends on factors such as solar irradiation (insolation), temperature and other climatic conditions. The present commercial efficiency of solar cells is no greater than 15%, so the available efficiency must be exploited as fully as possible; maximum power point tracking (MPPT), applied to the solar array with the aid of power electronics, makes this possible. Many algorithms have been proposed to realize MPPT, each with its own merits and limitations. In this paper, an attempt is made to understand the basic functionality of the two most popular algorithms, viz. the Perturb and Observe (P&O) algorithm and the Incremental Conductance algorithm. These algorithms are compared by simulating a 100 kW grid-connected solar power generating station. MATLAB M-files are generated to understand MPPT and its dependency on insolation and temperature, and MATLAB Simulink is used to simulate the MPPT systems. Simulation results are presented to verify these assumptions.
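The P&O logic is compact enough to sketch: perturb the operating voltage, observe the change in power, and keep moving in the direction that increased power, reversing otherwise. A minimal Python loop (the power function stands in for the PV array; step size and names are illustrative, not taken from the paper's 100 kW simulation):

```python
def perturb_and_observe(measure_power, v0, dv=0.5, steps=50):
    """Bare-bones P&O maximum power point tracker.  Returns the
    operating voltage after `steps` perturbations; in steady state
    it oscillates within one step `dv` of the true maximum power
    point, a well-known property of P&O."""
    v, p = v0, measure_power(v0)
    direction = 1
    for _ in range(steps):
        v_new = v + direction * dv
        p_new = measure_power(v_new)
        if p_new < p:              # power dropped: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v
```

Incremental Conductance replaces the power comparison with a test of dI/dV against -I/V, which removes some of this steady-state oscillation.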
Ivanova, N.; Pedersen, L. T.; Tonboe, R. T.; Kern, S.; Heygster, G.; Lavergne, T.; Sørensen, A.; Saldo, R.; Dybkjær, G.; Brucker, L.; Shokr, M.
2015-09-01
Sea ice concentration has been retrieved in polar regions with satellite microwave radiometers for over 30 years. However, the question remains as to what is an optimal sea ice concentration retrieval method for climate monitoring. This paper presents some of the key results of an extensive algorithm inter-comparison and evaluation experiment. The skills of 30 sea ice algorithms were evaluated systematically over low and high sea ice concentrations. Evaluation criteria included standard deviation relative to independent validation data, performance in the presence of thin ice and melt ponds, and sensitivity to error sources with seasonal to inter-annual variations and potential climatic trends, such as atmospheric water vapour and water-surface roughening by wind. A selection of 13 algorithms is shown in the article to demonstrate the results. Based on the findings, a hybrid approach is suggested to retrieve sea ice concentration globally for climate monitoring purposes. This approach consists of a combination of two algorithms plus dynamic tie points implementation and atmospheric correction of input brightness temperatures. The method minimizes inter-sensor calibration discrepancies and sensitivity to the mentioned error sources.
Directory of Open Access Journals (Sweden)
M O Qutub
2011-01-01
Purpose: To evaluate the usefulness of applying either the two-step algorithm (Ag-EIA and CCNA) or the three-step algorithm (all three assays) for better confirmation of toxigenic Clostridium difficile. The antigen enzyme immunoassay (Ag-EIA) can accurately identify the glutamate dehydrogenase antigen of toxigenic and nontoxigenic Clostridium difficile. It is therefore used in combination with a toxin-detecting assay [the cell line culture neutralization assay (CCNA) or the enzyme immunoassay for toxins A and B (TOX-A/BII EIA)] to provide specific evidence of Clostridium difficile-associated diarrhoea. Materials and Methods: A total of 151 nonformed stool specimens were tested by Ag-EIA, TOX-A/BII EIA, and CCNA. All tests were performed according to the manufacturer's instructions, and the results of Ag-EIA and TOX-A/BII EIA were read using a spectrophotometer at a wavelength of 450 nm. Results: A total of 61 (40.7%), 38 (25.3%), and 52 (34.7%) specimens tested positive with Ag-EIA, TOX-A/BII EIA, and CCNA, respectively. Overall, the sensitivity, specificity, negative predictive value, and positive predictive value for Ag-EIA were 94%, 87%, 96.6%, and 80.3%, respectively, whereas for TOX-A/BII EIA they were 73.1%, 100%, 87.5%, and 100%. With the two-step algorithm, all 61 Ag-EIA-positive cases required 2 days for confirmation. With the three-step algorithm, 37 (60.7%) cases were reported immediately, and the remaining 24 (39.3%) required further testing by CCNA. By applying the two-step algorithm, the workload and cost could be reduced by 28.2% compared with the three-step algorithm. Conclusions: The two-step algorithm is the most practical for accurately detecting toxigenic Clostridium difficile, but it is time-consuming.
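The four figures the study reports follow mechanically from a 2x2 confusion table against the CCNA reference. A small Python helper makes the definitions explicit (the counts in the usage example are illustrative, not the study's):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    """Standard diagnostic-test metrics from true/false positive and
    negative counts against a gold-standard reference assay."""
    return {
        "sensitivity": tp / (tp + fn),   # detected among truly positive
        "specificity": tn / (tn + fp),   # cleared among truly negative
        "ppv": tp / (tp + fp),           # positive predictive value
        "npv": tn / (tn + fn),           # negative predictive value
    }
```

For instance, 9 true positives, 1 false negative, 8 true negatives and 2 false positives give 90% sensitivity and 80% specificity.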
Palmer, Grant; Venkatapathy, Ethiraj
1993-01-01
Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight-order-of-magnitude drop in the L2 norm of the energy residual in 1/3 to 1/2 the Cray C-90 computer time of the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40 the performance of the LUSGS algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated, and grid dependency questions are explored.
Burt, Adam O.; Tinker, Michael L.
2014-01-01
In this paper, genetic algorithm based and gradient-based topology optimization is presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods that prove to provide major weight savings by addressing the structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely-used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and therefore were proven to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.
DEFF Research Database (Denmark)
Sossan, Fabrizio; Bindner, Henrik W.
2012-01-01
Demand side resources, DSRs, are electric loads whose power consumption can be shifted without a big impact on the primary services they supply, making them suitable for control according to the needs of regulating power in the electric power system. In this paper the performances and the aggregate responses provided by three algorithms for controlling electric space heating through a broadcast price signal are compared. The algorithms have been tested in a software platform with a population of buildings using a hardware-in-the-loop approach that feeds the thermal response of a real office building back into the simulation; the experimental results of using a model predictive controller for heating a real building in a variable-price context are also presented. This study is part of the Flexpower project, whose aim is investigating the possibility of creating an electric market for...
Energy Technology Data Exchange (ETDEWEB)
Kamph, Jerome Henri; Robinson, Darren; Wetter, Michael
2009-09-01
There is an increasing interest in the use of computer algorithms to identify combinations of parameters which optimise the energy performance of buildings. For such problems, the objective function can be multi-modal and needs to be approximated numerically using building energy simulation programs. As these programs contain iterative solution algorithms, they introduce discontinuities in the numerical approximation to the objective function. Metaheuristics often work well for such problems, but their convergence to a global optimum cannot be established formally. Moreover, different algorithms tend to be suited to particular classes of optimization problems. To shed light on this issue we compared the performance of two metaheuristics, the hybrid CMA-ES/HDE and the hybrid PSO/HJ, in minimizing standard benchmark functions and real-world building energy optimization problems of varying complexity. From this we find that the CMA-ES/HDE performs well on more complex objective functions, but that the PSO/HJ more consistently identifies the global minimum for simpler objective functions. Both identified similar values in the objective functions arising from energy simulations, but with different combinations of model parameters. This may suggest that the objective function is multi-modal. The algorithms also correctly identified some non-intuitive parameter combinations that were caused by a simplified control sequence of the building energy system that does not represent actual practice, further reinforcing their utility.
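The PSO half of the hybrid PSO/HJ metaheuristic discussed above can be sketched in a few lines; the Hooke-Jeeves local polish, and the paper's actual parameter settings, are omitted (inertia and acceleration weights below are textbook defaults, not the authors'):

```python
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=200, seed=1):
    """Bare-bones particle swarm optimization of f over a box.
    Each particle's velocity blends its momentum, a pull toward its
    personal best, and a pull toward the swarm's global best."""
    rng = random.Random(seed)
    lo, hi = bounds
    w, c1, c2 = 0.7, 1.5, 1.5          # inertia, cognitive, social weights
    xs = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vs = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in xs]
    pval = [f(x) for x in xs]
    gi = min(range(n_particles), key=lambda i: pval[i])
    gbest, gval = pbest[gi][:], pval[gi]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vs[i][d] = (w * vs[i][d]
                            + c1 * rng.random() * (pbest[i][d] - xs[i][d])
                            + c2 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] = min(hi, max(lo, xs[i][d] + vs[i][d]))
            val = f(xs[i])
            if val < pval[i]:
                pbest[i], pval[i] = xs[i][:], val
                if val < gval:
                    gbest, gval = xs[i][:], val
    return gbest, gval
```

On a smooth unimodal benchmark this converges quickly; the paper's point is precisely that on multi-modal, discontinuous building-energy objectives such convergence cannot be guaranteed.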
Directory of Open Access Journals (Sweden)
Jeng-Fung Chen
2014-10-01
Predicting student academic performance with a high accuracy facilitates admission decisions and enhances educational services at educational institutions. This raises the need to propose a model that predicts student performance, based on the results of standardized exams, including university entrance exams, high school graduation exams, and other influential factors. In this study, an approach to the problem based on the artificial neural network (ANN) with the two meta-heuristic algorithms inspired by cuckoo birds and their lifestyle, namely, Cuckoo Search (CS) and Cuckoo Optimization Algorithm (COA) is proposed. In particular, we used previous exam results and other factors, such as the location of the student’s high school and the student’s gender as input variables, and predicted the student academic performance. The standard CS and standard COA were separately utilized to train the feed-forward network for prediction. The algorithms optimized the weights between layers and biases of the neuron network. The simulation results were then discussed and analyzed to investigate the prediction ability of the neural network trained by these two algorithms. The findings demonstrated that both CS and COA have potential in training ANN and ANN-COA obtained slightly better results for predicting student academic performance in this case. It is expected that this work may be used to support student admission procedures and strengthen the service system in educational institutions.
Yin, Y.; Sykes, J. F.
2006-12-01
Transport parameter estimation and contaminant source identification are critical steps in the development of a physically based groundwater contaminant transport model. For most transient field scale problems, the high computational burden required by parameter identification algorithms combined with sparse data sets often limits calibration. However, when data are available, a high performance computing system and parallel computing may make the calibration process feasible. The selection of the optimization algorithm is also critical. In this paper, the contaminant transport and source parameters were estimated and compared using optimization with two heuristic search algorithms (a dynamically dimensioned search and a parallelized micro genetic algorithm) and a gradient based multi-start PEST algorithm which were implemented on the Shared Hierarchical Academic Research Computing Network (Sharcnet). The case study is located in New Jersey where improper waste disposal resulted in the contamination of down gradient public water supply wells. Using FRAC3DVS, a physically based transient three-dimensional groundwater flow model with spatially and temporally varying recharge was developed and calibrated using both approximately 9 years of head data from continuous well records and data over a period of approximately 30 years from traditional monitoring wells. For the contaminant system, the parameters that were estimated include source leaching rate, source concentration, dispersivities, and retardation coefficient. The groundwater domain was discretized using 214,520 elements. With highly changing pump rates at the 7 municipal wells, time increments over the approximately 30 year simulation period varied dynamically between several days and 3 months. On Sharcnet, one forward simulation on a single processor of both transient flow and contaminant transport takes approximately 3 to 4 hours. The contaminant transport model calibration results indicate that overall
International Nuclear Information System (INIS)
Our goal is to compare the dosimetric accuracy of the Pinnacle-3 9.2 Collapsed Cone Convolution Superposition (CCCS) and the iPlan 4.1 Monte Carlo (MC) and Pencil Beam (PB) algorithms in an anthropomorphic lung phantom, using measurement as the gold standard. Ion chamber measurements were taken for 6, 10, and 18 MV beams in a CIRS E2E SBRT anthropomorphic lung phantom, which mimics lung, spine, ribs, and tissue. The plan implemented six beams with a 5×5 cm2 field size, delivering a total dose of 48 Gy. Data from the planning systems were computed at the treatment isocenter in the left lung and at two off-axis points, the spinal cord and the right lung. The measurements were taken using a pinpoint chamber. The best agreement between the algorithms and our measurements occurs at the treatment isocenter: for the 6, 10, and 18 MV beams, the iPlan 4.1 MC software performs best, with 0.3%, 0.2%, and 4.2% absolute percent difference from measurement, respectively. Differences between our measurements and algorithm data are much greater for the off-axis points. The best agreement seen for the right lung and spinal cord is 11.4% absolute percent difference, with 6 MV iPlan 4.1 PB and 18 MV iPlan 4.1 MC, respectively. As energy increases, the absolute percent difference from measured data increases, up to 54.8% for the 18 MV CCCS algorithm. This study suggests that iPlan 4.1 MC computes peripheral dose and target dose in the lung more accurately than the iPlan 4.1 PB and Pinnacle CCCS algorithms.
Detecting candidate cosmic bubble collisions with optimal filters
McEwen, J D; Johnson, M C; Peiris, H V
2012-01-01
We review an optimal-filter-based algorithm for detecting candidate sources of unknown and differing size embedded in a stochastic background, and its application to detecting candidate cosmic bubble collision signatures in Wilkinson Microwave Anisotropy Probe (WMAP) 7-year observations. The algorithm provides an enhancement in sensitivity over previous methods by a factor of approximately two. Moreover, it is optimal in the sense that no other filter-based approach can provide a superior enhancement of these signatures. Applying this algorithm to WMAP 7-year observations, eight new candidate bubble collision signatures are detected for follow-up analysis.
New focused crawling algorithm
Institute of Scientific and Technical Information of China (English)
Su Guiyang; Li Jianhua; Ma Yinghua; Li Shenghong; Song Juping
2005-01-01
Focused crawling is a new research direction in search engines: it restricts information retrieval to, and provides search services for, a specific topic area. The focused crawling search algorithm is a key technique of a focused crawler and directly affects search quality. This paper first introduces several traditional topic-specific crawling algorithms; then an inverse-link-based topic-specific crawling algorithm is put forward. Comparison experiments show that this algorithm performs well in recall, clearly better than the traditional Breadth-First and Shark-Search algorithms, while also achieving good precision.
Gordon, Steven C.
1991-10-01
A spacecraft near a Lagrange (libration) point orbit between the Earth and the Sun can study the important interaction of the Sun's corona with the terrestrial environment. However, the spacecraft will drift from the nominal (unstable) path. The accumulated error in the spacecraft's position and velocity relative to the nominal path after a predetermined period of range and range-rate tracking can be computed. This error, or uncertainty, in the spacecraft state is measured through simulations, commonly referred to as orbit determination error analysis, and is presented as either variances or standard deviations of the state vector elements. The state uncertainty computed in the error analysis can then be input to a station-keeping algorithm. The algorithm computes control maneuvers that return the spacecraft to the vicinity of the nominal path. Variations in orbital shapes and sizes may, however, have some effect on the station-keeping costs. Several algorithms are derived and are used to test differences in station-keeping costs for orbits of different size and shape. Statistical hypothesis tests for equality of means are used in the comparisons.
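A two-sample test for equality of means, of the kind used above to compare station-keeping costs between orbit geometries, reduces to a short computation. A Python sketch of Welch's unequal-variance t statistic (illustrative; the abstract does not specify which test variant was used, and looking the statistic up against the t distribution for a p-value is omitted):

```python
import math

def welch_t(sample_a, sample_b):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom
    for a two-sample test of equal means with unequal variances."""
    def mean_var(s):
        m = sum(s) / len(s)
        v = sum((x - m) ** 2 for x in s) / (len(s) - 1)  # sample variance
        return m, v
    ma, va = mean_var(sample_a)
    mb, vb = mean_var(sample_b)
    na, nb = len(sample_a), len(sample_b)
    se2 = va / na + vb / nb
    t = (ma - mb) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df
```

Identical cost samples give t = 0 (no evidence of a difference); a large |t| relative to the df would indicate that orbit size or shape genuinely affects station-keeping cost.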
CASE via MS: Ranking Structure Candidates by Mass Spectra
Kerber, Adalbert; Meringer, Markus; Rücker, Christoph
2006-01-01
Two important tasks in computer-aided structure elucidation (CASE) are the generation of candidate structures from a given molecular formula, and the ranking of structure candidates according to compatibility with an experimental spectrum. Candidate ranking with respect to electron impact mass spectra is based on virtual fragmentation of a candidate structure and comparison of the fragments’ isotope distributions against the spectrum of the unknown compound, whence a structure–spectrum compat...
Directory of Open Access Journals (Sweden)
A. A. Kokhanovsky
2009-12-01
Remote sensing of aerosol from space is a challenging and typically underdetermined retrieval task, requiring many assumptions to be made with respect to the aerosol and surface models. Therefore, the quality of a priori information plays a central role in any retrieval process (apart from the cloud screening procedure and the forward radiative transfer model, which to be most accurate should include the treatment of light polarization and molecular-aerosol coupling). In this paper the performance of various algorithms with respect to spectral aerosol optical thickness determination from optical spaceborne measurements is studied. The algorithms are based on various types of measurements (spectral, angular, polarization, or some combination of these). It is confirmed that multiangular spectropolarimetric measurements provide more powerful constraints than spectral intensity measurements alone, particularly those acquired at a single view angle, which rely on a priori assumptions regarding the particle phase function in the retrieval process.
International Nuclear Information System (INIS)
The purpose of this study was to compare dose distributions from three different algorithms with x-ray Voxel Monte Carlo (XVMC) calculations, in actual computed tomography (CT) scans, for use in stereotactic radiotherapy (SRT) of small lung cancers. Slow CT scans of 20 patients were performed and the internal target volume (ITV) was delineated on Pinnacle3. All plans were first calculated with a scatter homogeneous mode (SHM), which is compatible with the Clarkson algorithm, using the Pinnacle3 treatment planning system (TPS). The planned dose was 48 Gy in 4 fractions. In a second step, the CT images, structures and beam data were exported to other TPSs: collapsed cone convolution (CCC) from Pinnacle3, superposition (SP) from XiO, and XVMC from Monaco were used for recalculation, and the dose distributions and dose-volume histograms (DVHs) were compared. The phantom test revealed that all algorithms could reproduce the measured data within 1%, except for the SHM with the inhomogeneous phantom. For the patient study, the SHM greatly overestimated the isocenter (IC) dose and the minimal dose received by 95% of the PTV (PTV95) compared to XVMC. The differences in mean doses were 2.96 Gy (6.17%) for IC and 5.02 Gy (11.18%) for PTV95. The DVHs and dose distributions obtained with CCC and SP were in agreement with those obtained by XVMC. The average differences in IC doses between CCC and XVMC, and between SP and XVMC, were -1.14% (p = 0.17) and -2.67% (p = 0.0036), respectively. Our work clearly confirms that the practice of relying solely on a Clarkson algorithm may be inappropriate for SRT planning, whereas CCC and SP were close to XVMC simulations and to actual dose distributions obtained in lung SRT.
2014-01-01
This paper presents backstepping controller design for tracking purpose of nonlinear system. Since the performance of the designed controller depends on the value of control parameters, gravitational search algorithm (GSA) and particle swarm optimization (PSO) techniques are used to optimise these parameters in order to achieve a predefined system performance. The performance is evaluated based on the tracking error between reference input given to the system and the system output. Then, the ...
Ruvio, Giuseppe; Solimene, Raffaele; Cuccaro, Antonio; Ammann, Max
2013-01-01
A comparative analysis of an imaging method based on a multi-frequency Multiple Signal Classification (MUSIC) approach against two common linear detection algorithms based on non-coherent migration is made. The different techniques are tested using synthetic data generated through CST Microwave Studio and a phantom developed from MRI scans of a mostly fat breast. The multi-frequency MUSIC approach shows an overall superior performance compared to the non-coherent techniques. This paper report...
Khehra, Baljit Singh; Pharwaha, Amar Partap Singh
2016-06-01
Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves classifier performance and decreases the computational burden imposed by using many features. Selecting an optimal subset of features from a large number of available features is a difficult search problem: for n features, there are 2^n possible subsets, so the problem belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets using genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples were selected from mammogram images of the DDSM database, and a total of 50 features extracted from these samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier; a support vector machine is used as the classifier. Experimental results show that the PSO-based and BBO-based algorithms select an optimal subset of features for classifying MCCs as benign or malignant better than the GA-based algorithm.
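The 2^n blow-up that motivates GA/PSO/BBO search is easy to make concrete: exhaustive subset evaluation is only feasible for tiny n. A Python sketch (the `fitness` callable stands in for the paper's SVM correct-classification rate; for n = 50 this loop would visit 2^50 - 1 subsets, which is why heuristic search is needed):

```python
from itertools import combinations

def best_feature_subset(features, fitness):
    """Exhaustively score every non-empty feature subset and return the
    fittest one.  Visits exactly 2**n - 1 subsets for n features."""
    best, best_fit = None, float("-inf")
    count = 0
    for k in range(1, len(features) + 1):
        for subset in combinations(features, k):
            count += 1
            fit = fitness(subset)
            if fit > best_fit:
                best, best_fit = subset, fit
    assert count == 2 ** len(features) - 1   # every non-empty subset visited
    return best, best_fit
```

GA, PSO and BBO replace this exhaustive loop with a population of candidate subsets evolved toward higher fitness, trading the optimality guarantee for tractability.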
Jeng-Fung Chen; Ho-Nien Hsieh; Quang Hung Do
2014-01-01
Predicting student academic performance with a high accuracy facilitates admission decisions and enhances educational services at educational institutions. This raises the need to propose a model that predicts student performance, based on the results of standardized exams, including university entrance exams, high school graduation exams, and other influential factors. In this study, an approach to the problem based on the artificial neural network (ANN) with the two meta-heuristic algorithm...
Energy Technology Data Exchange (ETDEWEB)
Mantini, D [ITAB-Institute of Advanced Biomedical Technologies, University Foundation 'G. D'Annunzio', University of Chieti (Italy); Hild, K E II [Department of Radiology, University of California at San Francisco, CA (United States); Alleva, G [ITAB-Institute of Advanced Biomedical Technologies, University Foundation 'G. D'Annunzio', University of Chieti (Italy); Comani, S [ITAB-Institute of Advanced Biomedical Technologies, University Foundation 'G. D'Annunzio', University of Chieti (Italy); Department of Clinical Sciences and Bio-imaging, University of Chieti (Italy)
2006-02-21
Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performances of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratios (SIR) were measured. The first involves averaging over all estimated components and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times.
Mantini, D.; Hild, K. E., II; Alleva, G.; Comani, S.
2006-02-01
Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performances of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratios (SIR) were measured. The first involves averaging over all estimated components and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times.
International Nuclear Information System (INIS)
Kim, Sung Jin; Kim, Sung Kyu
2015-01-01
Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms, pencil beam (PB), collapsed cone (CC), and Monte Carlo (MC), provided by our planning system were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated using the PB, CC, and MC algorithms. Planning target volume (PTV) and organs at risk (OAR) delineation was performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3 to 0.5 cm computed tomography slices taken under normal respiration conditions. Four intensity-modulated radiation therapy plans were calculated according to each algorithm for each patient. The plans were conducted on the Oncentra MasterPlan and CMS Monaco treatment planning systems, for 6 MV. The plans were compared in terms of the dose distribution in target, OAR volumes, and...
International Nuclear Information System (INIS)
To compare ischaemic lesions predicted by different CT perfusion (CTP) post-processing techniques and validate CTP lesions compared with final lesion size in stroke patients. Fifty patients underwent CT, CTP and CT angiography. Quantitative values and colour maps were calculated using least mean square deconvolution (LMSD), maximum slope (MS) and conventional singular value decomposition deconvolution (SVDD) algorithms. Quantitative results, core/penumbra lesion sizes and Alberta Stroke Programme Early CT Score (ASPECTS) were compared among the algorithms; lesion sizes and ASPECTS were compared with final lesions on follow-up MRI + MRA or CT + CTA as a reference standard, accounting for recanalisation status. Differences in quantitative values and lesion sizes were statistically significant, but therapeutic decisions based on ASPECTS and core/penumbra ratios would have been the same in all cases. CTP lesion sizes were highly predictive of final infarct size: Coefficients of determination (R²) for CTP versus follow-up lesion sizes in the recanalisation group were 0.87, 0.82 and 0.61 (P < 0.001) for LMSD, MS and SVDD, respectively, and 0.88, 0.87 and 0.76 (P < 0.001), respectively, in the non-recanalisation group. Lesions on CT perfusion are highly predictive of final infarct. Different CTP post-processing algorithms usually lead to the same clinical decision, but for assessing lesion size, LMSD and MS appear superior to SVDD. (orig.)
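The SVD-based deconvolution underlying algorithms such as SVDD can be sketched compactly. The fragment below is an illustrative truncated-SVD deconvolution of a tissue curve against an arterial input function (AIF); the gamma-variate AIF, the exponential residue function and the truncation threshold in the example are assumptions, not values from the study:

```python
import numpy as np

def svd_deconvolution(aif, tissue, dt=1.0, lam=0.2):
    """Truncated-SVD deconvolution of a tissue curve against the AIF (sketch)."""
    n = len(aif)
    # lower-triangular Toeplitz matrix implementing discrete convolution with the AIF
    A = dt * np.array([[aif[i - j] if i >= j else 0.0 for j in range(n)]
                       for i in range(n)])
    U, S, Vt = np.linalg.svd(A)
    # discard singular values below lam * S.max() (the sSVD regularisation step)
    S_inv = np.where(S > lam * S.max(), 1.0 / S, 0.0)
    return Vt.T @ np.diag(S_inv) @ U.T @ tissue    # flow-scaled residue function
```

The maximum of the recovered curve estimates the flow; `lam` trades noise suppression against the well-known underestimation of truncated SVD.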
Directory of Open Access Journals (Sweden)
Muhammad Ilyas
2016-05-01
Full Text Available This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level.
Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok
2016-01-01
This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293
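As a toy illustration of fusing a relative sensor with occasional absolute fixes, the sketch below runs a scalar Kalman filter on a single heading angle: a biased gyro propagates the state, and sparse absolute attitude measurements correct the accumulated drift. This is a deliberate simplification of the paper's EKF/UKF framework (one state, linear model), with all noise parameters invented:

```python
import numpy as np

def fuse_attitude(gyro_rates, abs_meas, dt, q=1e-4, r=0.05 ** 2):
    """Scalar Kalman filter: gyro propagation plus occasional absolute fixes.

    abs_meas entries are NaN when no absolute measurement is available."""
    theta, P = 0.0, 1.0
    history = []
    for w, z in zip(gyro_rates, abs_meas):
        theta += w * dt                  # predict with the relative (gyro) sensor
        P += q
        if not np.isnan(z):              # correct with the absolute attitude fix
            K = P / (P + r)
            theta += K * (z - theta)
            P *= (1.0 - K)
        history.append(theta)
    return np.array(history)
```

With a biased gyro, pure dead reckoning drifts without bound, while the fused estimate stays near the truth between fixes, which is the qualitative point made by the abstract.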
Directory of Open Access Journals (Sweden)
Gilson Alexandre Pinto
2005-06-01
Full Text Available Partial hydrolysis of cheese whey proteins, carried out by enzymes immobilized on an inert support, can alter or enhance the functional properties of the resulting polypeptides, thereby broadening their applications. Controlling the pH of the proteolysis reactor is of fundamental importance for modulating the molecular weight distribution of the peptides formed. The pH and temperature signals used by the control and state-inference algorithms may be subject to considerable noise, making their filtering important. This work presents the results of implementing, in the process monitoring system, an off-line smoothing algorithm that uses penalized least squares for post-treatment of the data. The performance of different algorithms for on-line filtering of the signals used by the control system is also compared: artificial neural networks, moving average, and the aforementioned smoother.
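A penalized-least-squares smoother of the kind described above is, in its simplest form, the Whittaker-Eilers smoother: it minimizes a data-fidelity term plus a penalty on the d-th order differences of the smoothed curve. A minimal sketch (the penalty weight `lam` is a free parameter, not a value from the paper):

```python
import numpy as np

def whittaker_smooth(y, lam=100.0, d=2):
    """Penalized least squares: minimise |y - z|^2 + lam * |D z|^2,
    where D is the d-th order difference operator."""
    n = len(y)
    D = np.diff(np.eye(n), n=d, axis=0)          # (n-d) x n difference matrix
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)
```

Larger `lam` gives a stiffer curve; `lam -> 0` returns the raw signal.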
International Nuclear Information System (INIS)
Highlights: ► Multi-objective optimization of STI based on risk-informed decision making. ► Four different genetic algorithm (GA) techniques are used as the optimization tool. ► Advantages/disadvantages among the four different GAs applied are emphasized. - Abstract: The risk-informed decision making (RIDM) process, where insights gained from the probabilistic safety assessment are contemplated together with other engineering insights, is gaining an ever-increasing attention in the process industries. Increasing safety systems availability by applying RIDM is one of the prime goals for the authorities operating with nuclear power plants. Additionally, equipment ageing is gradually becoming a major concern in the process industries and especially in the nuclear industry, since more and more safety-related components are approaching or are already in their wear-out phase. A significant difficulty regarding the consideration of ageing effects on equipment (un)availability is the immense uncertainty associated with the available equipment ageing data. This paper presents an approach for safety system unavailability reduction by optimizing the related test and maintenance schedule suggested by the technical specifications in the nuclear industry. Given the RIDM philosophy, two additional insights, i.e. ageing data uncertainty and test and maintenance costs, are considered along with unavailability insights gained from the probabilistic safety assessment for a selected standard safety system. In that sense, an approach for multi-objective optimization of the equipment surveillance test interval is proposed herein. Three different objective functions related to each one of the three different insights discussed above comprise the multi-objective nature of the optimization process. The genetic algorithm technique is utilized as the optimization tool. Four different types of genetic algorithms are applied and a comparative analysis is conducted given the
Directory of Open Access Journals (Sweden)
G.Kesavaraj
2013-10-01
Full Text Available Data mining, the analysis step of the "Knowledge Discovery in Databases" (KDD) process [1], is a field at the intersection of computer science and statistics that attempts to discover patterns in large data sets. It utilizes methods at the intersection of artificial intelligence, machine learning, statistics, and database systems. The overall goal of the data mining process is to extract information from a data set and transform it into an understandable structure for further use. It is commonly used in marketing, surveillance, fraud detection and scientific discovery, and is now gaining wide use in social networking. Anything and everything on the Internet is fair game for extreme data mining practices. Social media covers all aspects of the social side of the internet that allow us to make contact and share information with others, as well as interact with any number of people in any place in the world. This paper uses the dataset "Local News Survey" from the Pew Research Center. The focus of the research is the exploration of the impact of the internet on local news activities using data mining techniques. The original dataset contains 102 attributes, which is very large, and hence the essential attributes required for the analysis are selected by a feature reduction method. The selected attributes were applied to data mining classification algorithms such as RndTree, ID3, K-NN, C4.5 and CS-MC4. The error rates of the various classification algorithms were compared to bring out the best and most effective algorithm for this dataset.
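Comparing classifier error rates on a held-out split, as done above, reduces to a small amount of code. The sketch below implements two simple classifiers from scratch (k-NN, and a nearest-centroid baseline standing in for the survey's tree-based methods) together with an error-rate function; the synthetic blobs in the test are placeholders for the real reduced attribute set:

```python
import numpy as np

def knn_predict(Xtr, ytr, Xte, k=5):
    """k-nearest-neighbour majority vote for binary labels 0/1."""
    d = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)   # squared distances
    idx = np.argsort(d, axis=1)[:, :k]
    return (ytr[idx].mean(axis=1) > 0.5).astype(int)

def centroid_predict(Xtr, ytr, Xte):
    """Assign each test point to the nearer class centroid."""
    c0 = Xtr[ytr == 0].mean(0)
    c1 = Xtr[ytr == 1].mean(0)
    return (((Xte - c1) ** 2).sum(1) < ((Xte - c0) ** 2).sum(1)).astype(int)

def error_rate(y_true, y_pred):
    return float(np.mean(y_true != y_pred))
```

In a study like the one above, each algorithm would be run on the same train/test split and ranked by `error_rate`.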
Energy Technology Data Exchange (ETDEWEB)
Puchner, Stefan B. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Medical University of Vienna, Department of Biomedical Imaging and Image-Guided Therapy, Vienna (Austria); Ferencik, Maros [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Harvard Medical School, Division of Cardiology, Massachusetts General Hospital, Boston, MA (United States); Maurovich-Horvat, Pal [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Semmelweis University, MTA-SE Lenduelet Cardiovascular Imaging Research Group, Heart and Vascular Center, Budapest (Hungary); Nakano, Masataka; Otsuka, Fumiyuki; Virmani, Renu [CV Path Institute Inc., Gaithersburg, MD (United States); Kauczor, Hans-Ulrich [University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany); Hoffmann, Udo [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Schlett, Christopher L. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany)
2015-01-15
To evaluate whether iterative reconstruction algorithms improve the diagnostic accuracy of coronary CT angiography (CCTA) for detection of lipid-core plaque (LCP) compared to histology. CCTA and histological data were acquired from three ex vivo hearts. CCTA images were reconstructed using filtered back projection (FBP), adaptive-statistical (ASIR) and model-based (MBIR) iterative algorithms. Vessel cross-sections were co-registered between FBP/ASIR/MBIR and histology. Plaque area <60 HU was semiautomatically quantified in CCTA. LCP was defined by histology as fibroatheroma with a large lipid/necrotic core. Area under the curve (AUC) was derived from logistic regression analysis as a measure of diagnostic accuracy. Overall, 173 CCTA triplets (FBP/ASIR/MBIR) were co-registered with histology. LCP was present in 26 cross-sections. Average measured plaque area <60 HU was significantly larger in LCP compared to non-LCP cross-sections (mm²: 5.78 ± 2.29 vs. 3.39 ± 1.68 FBP; 5.92 ± 1.87 vs. 3.43 ± 1.62 ASIR; 6.40 ± 1.55 vs. 3.49 ± 1.50 MBIR; all p < 0.0001). AUC for detecting LCP was 0.803/0.850/0.903 for FBP/ASIR/MBIR and was significantly higher for MBIR compared to FBP (p = 0.01). MBIR increased sensitivity for detection of LCP by CCTA. Plaque area <60 HU in CCTA was associated with LCP in histology regardless of the reconstruction algorithm. However, MBIR demonstrated higher accuracy for detecting LCP, which may improve vulnerable plaque detection by CCTA. (orig.)
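AUC figures like those quoted above can be computed without an explicit ROC sweep via the rank-sum (Mann-Whitney) identity. A minimal sketch, valid for continuous, untied scores:

```python
import numpy as np

def auc(labels, scores):
    """Rank-based AUC: probability that a positive outranks a negative.

    Uses the Mann-Whitney identity; assumes no tied scores."""
    order = np.argsort(scores)
    ranks = np.empty(len(scores))
    ranks[order] = np.arange(1, len(scores) + 1)
    pos = labels == 1
    n1, n0 = pos.sum(), (~pos).sum()
    return (ranks[pos].sum() - n1 * (n1 + 1) / 2) / (n1 * n0)
```

An AUC of 0.5 corresponds to chance-level discrimination, 1.0 to perfect separation.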
Energy Technology Data Exchange (ETDEWEB)
Rothe, Jan Holger, E-mail: jan-holger.rothe@charite.de [Klinik für Radiologie, Campus Virchow-Klinikum, Charité – Universitätsmedizin, Berlin (Germany); Grieser, Christian [Klinik für Radiologie, Campus Virchow-Klinikum, Charité – Universitätsmedizin, Berlin (Germany); Lehmkuhl, Lukas [Abteilung für Diagnostische und Interventionelle Radiologie, Herzzentrum Leipzig (Germany); Schnapauff, Dirk; Fernandez, Carmen Perez; Maurer, Martin H.; Mussler, Axel; Hamm, Bernd; Denecke, Timm; Steffen, Ingo G. [Klinik für Radiologie, Campus Virchow-Klinikum, Charité – Universitätsmedizin, Berlin (Germany)
2013-11-01
Objective: To compare different three-dimensional volumetric algorithms (3D-algorithms) and RECIST for size measurement and response assessment in liver metastases from colorectal and pancreatic cancer. Methods: The volumes of a total of 102 liver metastases in 45 patients (pancreatic cancer, n = 22; colon cancer, n = 23) were estimated using three volumetric methods (seeded region growing method, slice-based segmentation, threshold-based segmentation) and the RECIST 1.1 method with volume calculation based on the largest axial diameter. Each measurement was performed three times by one observer. All four methods were applied to follow-up on 55 liver metastases in 29 patients undergoing systemic treatment (median follow-up, 3.5 months; range, 1–10 months). Analysis of variance (ANOVA) with post hoc tests was performed to analyze intraobserver variability and intermethod differences. Results: ANOVA showed significant higher volumes calculated according to the RECIST guideline compared to the other measurement methods (p < 0.001) with relative differences ranging from 0.4% to 41.1%. Intraobserver variability was significantly higher (p < 0.001) for RECIST and threshold based segmentation (3.6–32.8%) compared with slice segmentation (0.4–13.7%) and seeded region growing method (0.6–10.8%). In the follow-up study, the 3D-algorithms and the assessment following RECIST 1.1 showed a discordant classification of treatment response in 10–21% of the patients. Conclusions: This study supports the use of volumetric measurement methods due to significant higher intraobserver reproducibility compared to RECIST. Substantial discrepancies in tumor response classification between RECIST and volumetric methods depending on applied thresholds confirm the requirement of a consensus concerning volumetric criteria for response assessment.
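Of the three volumetric methods compared above, seeded region growing is the simplest to sketch. The 2-D fragment below grows a 4-connected region from a seed voxel within an intensity tolerance and converts the voxel count to a volume; the fixed tolerance criterion and single-slice setting are simplifications of a clinical 3-D implementation:

```python
from collections import deque
import numpy as np

def region_grow(img, seed, tol):
    """Grow a region from `seed`, accepting 4-connected pixels whose
    intensity lies within `tol` of the seed intensity."""
    mask = np.zeros(img.shape, bool)
    mask[seed] = True
    q = deque([seed])
    ref = img[seed]
    while q:
        r, c = q.popleft()
        for rr, cc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= rr < img.shape[0] and 0 <= cc < img.shape[1]
                    and not mask[rr, cc] and abs(img[rr, cc] - ref) <= tol):
                mask[rr, cc] = True
                q.append((rr, cc))
    return mask

def volume_mm3(mask, voxel_mm3):
    """Segmented volume = voxel count times per-voxel volume."""
    return float(mask.sum() * voxel_mm3)
```

Threshold-based segmentation differs only in the acceptance test, which is one reason the paper finds its results so sensitive to the applied thresholds.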
Exploring Optimal Topology and Routing Algorithm for 3D Network on Chip
Directory of Open Access Journals (Sweden)
N. Viswanathan
2012-01-01
Full Text Available Problem statement: Network on Chip (NoC) is an appropriate candidate to implement interconnections in SoCs. Increase in the number of IP blocks in 2D NoC will lead to increase in chip area, global interconnect, length of the communication channel, number of hops traversed by a packet, latency and difficulty in clock distribution. 3D NoC evolved to overcome the drawbacks of 2D NoC. Topology, switching mechanism and routing algorithm are major areas of 3D NoC research. In this study, three topologies (3D-MT, 3D-ST and 3D-RNT) and a routing algorithm for 3D NoC are presented. Approach: An experiment is conducted to evaluate the performance of the topologies and routing algorithm. Evaluation parameters are latency, probability, network diameter and energy dissipation. Results: A comparison of the experimental results demonstrates that 3D-RNT is a suitable candidate for 3D NoC topology. Conclusion: The performance of the topologies and routing algorithm for 3D NoC is analysed. 3D-MT is not a suitable candidate for 3D NoC, 3D-ST is a suitable candidate provided interlayer communications are frequent and 3D-RNT is a suitable candidate as interlayer communications are limited.
Directory of Open Access Journals (Sweden)
Noureddine Bouhmala
2012-11-01
Full Text Available In this work, a hierarchical population-based memetic algorithm for solving the satisfiability problem is presented. The approach suggests looking at the evolution as a hierarchical process evolving from a coarse population where the basic unit of a gene is composed of cluster of variables that represent the problem to a fine population where each gene represents a single variable. The optimization process is carried out by letting the converged population at a child level serve as the initial population to the parent level. A benchmark composed of industrial instances is used to compare the effectiveness of the hierarchical approach against its single-level counterpart.
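A single-level memetic algorithm of the kind used as the baseline here combines a genetic algorithm with greedy local search applied to each offspring. The sketch below is an illustrative MAX-SAT version (clauses as tuples of signed 1-based literals); the population size, mutation rate and one-flip local search are arbitrary choices for the example, not the paper's hierarchical scheme:

```python
import random

def num_sat(assign, clauses):
    """Count satisfied clauses; literal l>0 means var l-1 True, l<0 means False."""
    return sum(any(assign[abs(l) - 1] == (l > 0) for l in c) for c in clauses)

def memetic_maxsat(clauses, n_vars, pop_size=12, gens=10, seed=1):
    rng = random.Random(seed)

    def local_search(a):
        """Greedy one-flip improvement: the 'memetic' refinement step."""
        best = num_sat(a, clauses)
        improved = True
        while improved:
            improved = False
            for v in range(n_vars):
                a[v] = not a[v]
                s = num_sat(a, clauses)
                if s > best:
                    best, improved = s, True
                else:
                    a[v] = not a[v]          # revert a non-improving flip
        return a

    pop = [local_search([rng.random() < 0.5 for _ in range(n_vars)])
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=lambda a: -num_sat(a, clauses))
        nxt = pop[:2]                         # elitism
        while len(nxt) < pop_size:
            p, q = rng.sample(pop[:6], 2)     # parent selection from the fittest
            cut = rng.randrange(1, n_vars)
            child = p[:cut] + q[cut:]         # one-point crossover
            if rng.random() < 0.3:            # mutation
                i = rng.randrange(n_vars)
                child[i] = not child[i]
            nxt.append(local_search(child))
        pop = nxt
    return max(pop, key=lambda a: num_sat(a, clauses))
```

The hierarchical variant described above would run this loop over progressively finer variable clusterings, seeding each level with the previous level's converged population.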
Directory of Open Access Journals (Sweden)
Noureddine Bouhmala
2012-12-01
Full Text Available In this work, a hierarchical population-based memetic algorithm for solving the satisfiability problem is presented. The approach suggests looking at the evolution as a hierarchical process evolving from a coarse population where the basic unit of a gene is composed of cluster of variables that represent the problem to a fine population where each gene represents a single variable. The optimization process is carried out by letting the converged population at a child level serve as the initial population to the parent level. A benchmark composed of industrial instances is used to compare the effectiveness of the hierarchical approach against its single-level counterpart.
International Nuclear Information System (INIS)
High accuracy dose calculation algorithms, such as Monte Carlo (MC) and Collapsed Cone (CC), determine dose in inhomogeneous tissue more accurately than pencil beam (PB) algorithms. However, prescription protocols based on clinical experience with PB are often used for treatment plans calculated with CC. This may lead to treatment plans with changes in field size (FS) and changes in dose to organs at risk (OAR), especially for small tumor volumes in lung tissue treated with SABR. We re-evaluated 17 3D-conformal treatment plans for small intrapulmonary lesions with a prescription of 60 Gy in fractions of 7.5 Gy to the 80% isodose. All treatment plans were initially calculated in Oncentra MasterPlan® using a PB algorithm and recalculated with CC (CCre-calc). Furthermore, a CC-based plan with coverage similar to the PB plan (CCcov) and a CC plan with relaxed coverage criteria (CCclin) were created. The plans were analyzed in terms of Dmean, Dmin, Dmax and coverage for GTV, PTV and ITV. Changes in mean lung dose (MLD), V10Gy and V20Gy were evaluated for the lungs. The re-planned CC plans were compared to the original PB plans regarding changes in total monitor units (MU) and average FS. When PB plans were recalculated with CC, the average V60Gy of GTV, ITV and PTV decreased by 13.2%, 19.9% and 41.4%, respectively. Average Dmean decreased by 9% (GTV), 11.6% (ITV) and 14.2% (PTV). Dmin decreased by 18.5% (GTV), 21.3% (ITV) and 17.5% (PTV). Dmax declined by 7.5%. PTV coverage correlated with PTV volume (p < 0.001). MLD, V10Gy, and V20Gy were significantly reduced in the CC plans. Both CCcov and CCclin had significantly increased MUs and FS compared to PB. Recalculation of PB plans for small lung lesions with CC showed a strong decline in dose and coverage for GTV, ITV and PTV, and a reduced dose in the lung. Thus, switching from a PB algorithm to CC, while aiming to obtain similar target coverage, can be associated with application of more MU and extension of
Evaluation of the citation matching algorithms of CWTS and iFQ in comparison to Web of Science
Olensky, Marlies; Schmidt, Marion; van Eck, Nees Jan
2015-01-01
The results of bibliometric studies provided by bibliometric research groups, e.g. the Centre for Science and Technology Studies (CWTS) and the Institute for Research Information and Quality Assurance (iFQ), are often used in the process of research assessment. Their databases use Web of Science (WoS) citation data, which they match according to their own matching algorithms - in the case of CWTS for standard usage in their studies and in the case of iFQ on an experimental basis. Since the pr...
Scarbolo, Luca; Molin, Dafne; Perlekar, Prasad; Sbragaglia, Mauro; Soldati, Alfredo; Toschi, Federico
2013-01-01
Lattice Boltzmann Models (LBM) and Phase Field Models (PFM) are two of the most widespread approaches for the numerical study of multicomponent fluid systems. Both methods have been successfully employed by several authors but, despite their popularity, it still remains unclear how to properly compare them and how they perform on the same problem. Here we present a unified framework for the direct (one-to-one) comparison of the multicomponent LBM against the PFM. We provide analytical guidelines...
Vio, R; Wamsteker, W
2004-01-01
It is well-known that the noise associated with the collection of an astronomical image by a CCD camera is, in large part, Poissonian. One would expect, therefore, that computational approaches that incorporate this a priori information will be more effective than those that do not. The Richardson-Lucy (RL) algorithm, for example, can be viewed as a maximum-likelihood (ML) method for image deblurring when the data noise is assumed to be Poissonian. Least-squares (LS) approaches, on the other hand, arise from the assumption that the noise is Gaussian with fixed variance across pixels, which is rarely accurate. Given this, it is surprising that in many cases results obtained using LS techniques are relatively insensitive to whether the noise is Poissonian or Gaussian. Furthermore, in the presence of Poisson noise, results obtained using LS techniques are often comparable with those obtained by the RL algorithm. We seek an explanation of these phenomena via an examination of the regularization properties of par...
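The Richardson-Lucy iteration mentioned above is compact enough to state directly. A 1-D sketch of the multiplicative ML update for Poisson noise (the PSF and test signal below are invented for illustration):

```python
import numpy as np

def richardson_lucy(blurred, psf, n_iter=50):
    """RL deconvolution: x <- x * (K^T (b / (K x))), K = convolution by psf."""
    x = np.full_like(blurred, blurred.mean())       # flat positive initial guess
    psf_flip = psf[::-1]                            # adjoint of the convolution
    for _ in range(n_iter):
        est = np.convolve(x, psf, mode="same")
        ratio = blurred / np.maximum(est, 1e-12)    # guard against division by 0
        x = x * np.convolve(ratio, psf_flip, mode="same")
    return x
```

The update preserves non-negativity and, for Poisson data, increases the likelihood at every step, which is the property the comparison with LS methods turns on.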
Comparison of Chlorophyll-A Algorithms for the Transition Zone Between the North Sea and Baltic Sea
Huber, Silvia; Hansen, Lars B.; Rasmussen, Mads O.; Kaas, Hanne
2015-12-01
Monitoring water quality of the transition zone between the North Sea and Baltic Sea from space is still a challenge because of the optically complex waters. The presence of suspended sediments and dissolved substances often interferes with the phytoplankton signal and thus confounds conventional case-1 algorithms developed for the open ocean. Specific calibration to case-2 waters may compensate for this. In this study we compared chlorophyll-a (chl-a) concentrations derived with three different case-2 algorithms: C2R, FUB/WeW and CoastColour, using MERIS data as the basis. Default C2R and FUB clearly underestimate higher chl-a concentrations. However, with local tuning we could significantly improve the fit with in-situ data. For instance, the root mean square error is reduced by roughly 50%, from 3.06 to 1.6 μg/L, for the calibrated C2R processor as compared to the default C2R. This study is part of the FP7 project AQUA-USERS, which has the overall goal of providing the aquaculture industry with timely information based on satellite data and optical in-situ measurements. One of the products is chlorophyll-a concentration.
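Local tuning of a chl-a processor against in-situ matchups, as reported above, often amounts to fitting a regression in log space (chl-a being roughly log-normally distributed) and applying it as a correction. A hedged sketch, with a synthetic power-law bias standing in for the real processor error:

```python
import numpy as np

def rmse(a, b):
    """Root mean square error between two concentration series."""
    return float(np.sqrt(np.mean((a - b) ** 2)))

def locally_tune(sat_chl, insitu_chl):
    """Fit a log-space linear correction satellite -> in-situ and return it."""
    slope, intercept = np.polyfit(np.log10(sat_chl), np.log10(insitu_chl), 1)
    return lambda c: 10 ** (slope * np.log10(c) + intercept)
```

In practice the fit would be done on matchup pairs from one season or region and validated on held-out stations before being applied operationally.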
Improved Apriori Algorithm for Mining Association Rules
Darshan M. Tank
2014-01-01
Association rules are the main technique for data mining. The Apriori algorithm is a classical algorithm of association rule mining. Lots of algorithms for mining association rules and their mutations have been proposed on the basis of the Apriori algorithm, but traditional algorithms are not efficient. Frequent itemset mining suffers from two bottlenecks: the large multitude of candidate 2-itemsets and the poor efficiency of counting their support. The proposed algorithm reduces redundant pruning operations of C...
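The classical level-wise Apriori search that such improved variants build on can be sketched as follows; `min_support` is a fraction of transactions, and the candidate-generation step is exactly the join-and-prune stage whose 2-itemset explosion the paper targets:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise Apriori: join frequent itemsets, prune, count support."""
    transactions = [frozenset(t) for t in transactions]
    n = len(transactions)

    def support(itemset):
        return sum(itemset <= t for t in transactions) / n

    items = sorted({i for t in transactions for i in t})
    level = [frozenset([i]) for i in items if support(frozenset([i])) >= min_support]
    frequent = {s: support(s) for s in level}
    k = 2
    while level:
        # join step: unions of frequent (k-1)-itemsets that have size k
        candidates = {a | b for a in level for b in level if len(a | b) == k}
        # prune step: every (k-1)-subset of a candidate must itself be frequent
        candidates = {c for c in candidates
                      if all(frozenset(s) in frequent
                             for s in combinations(c, k - 1))}
        level = [c for c in candidates if support(c) >= min_support]
        frequent.update({c: support(c) for c in level})
        k += 1
    return frequent
```

Each pass over the data counts support for one candidate level, which is why reducing candidates (especially at k = 2) dominates Apriori's running time.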
Nagayama, T; Mancini, R C; Florido, R; Tommasini, R; Koch, J A; Delettrez, J A; Regan, S P; Smalyuk, V A; Welser-Sherrill, L A; Golovkin, I E
2008-10-01
Detailed analysis of x-ray narrow-band images from argon-doped deuterium-filled inertial confinement fusion implosion experiments yields information about the temperature spatial structure in the core at the collapse of the implosion. We discuss the analysis of direct-drive implosion experiments at OMEGA, in which multiple narrow-band images were recorded with a multimonochromatic x-ray imaging instrument. The temperature spatial structure is investigated by using the sensitivity of the Ly beta/He beta line emissivity ratio to the temperature. Three analysis methods that consider the argon He beta and Ly beta image data are discussed and the results compared. The methods are based on a ratio of image intensities, ratio of Abel-inverted emissivities, and a search and reconstruction technique driven by a Pareto genetic algorithm. PMID:19044576
Rueda, Antonio J.; Noguera, José M.; Luque, Adrián
2016-02-01
In recent years GPU computing has gained wide acceptance as a simple low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation for a GPU is a non-trivial task that requires a thorough refactorization of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device. With this technology, the CPU code only has to be augmented with a few compiler directives to identify the areas to be accelerated and the way in which data has to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors and less dependency on the underlying architecture and future evolution of the GPU technology. Our aim with this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as a case study. We implemented the flow accumulation step of this algorithm on the CPU, using OpenACC and two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. We note that although OpenACC cannot match the performance of an optimized CUDA implementation (×3.5 slower on average), it provides a significant performance improvement over a CPU implementation (×2-6) with far simpler code and less implementation effort.
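A serial reference for the D8 flow-accumulation step looks roughly like the following: each cell drains to its steepest-descent neighbour, and processing cells from highest to lowest elevation lets every cell's count be pushed downstream in a single pass. This is a sketch of the classic O'Callaghan-Mark scheme in Python, not the authors' CPU, OpenACC or CUDA code:

```python
import numpy as np

# the eight D8 neighbour offsets (row, column)
D8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1), (0, 1), (1, -1), (1, 0), (1, 1)]

def flow_accumulation(dem):
    """D8 flow accumulation: each cell contributes itself plus all upstream cells."""
    rows, cols = dem.shape
    acc = np.ones_like(dem, dtype=np.int64)        # every cell contributes itself
    order = np.argsort(dem, axis=None)[::-1]       # process highest cells first
    for flat in order:
        r, c = divmod(int(flat), cols)
        best_drop, target = 0.0, None
        for dr, dc in D8:                          # steepest-descent neighbour
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                drop = dem[r, c] - dem[rr, cc]
                if drop > best_drop:
                    best_drop, target = drop, (rr, cc)
        if target is not None:                     # pits/flats drain nowhere here
            acc[target] += acc[r, c]
    return acc
```

The per-cell neighbour scan is embarrassingly parallel, but the downstream accumulation carries a dependency chain, which is exactly what makes the GPU versions discussed above non-trivial.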
Energy Technology Data Exchange (ETDEWEB)
Sun, Shangjin; Gill, Michelle; Li, Yifei; Huang, Mitchell; Byrd, R. Andrew, E-mail: byrdra@mail.nih.gov [National Cancer Institute, Structural Biophysics Laboratory (United States)
2015-05-15
The advantages of non-uniform sampling (NUS) in offering time savings and resolution enhancement in NMR experiments have been increasingly recognized. The possibility of sensitivity gain by NUS has also been demonstrated. Application of NUS to multidimensional NMR experiments requires the selection of a sampling scheme and a reconstruction scheme to generate uniformly sampled time domain data. In this report, an efficient reconstruction scheme is presented and used to evaluate a range of regularization algorithms that collectively yield a generalized solution to processing NUS data in multidimensional NMR experiments. We compare l1-norm (L1), iterative re-weighted l1-norm (IRL1), and Gaussian smoothed l0-norm (Gaussian-SL0) regularization for processing multidimensional NUS NMR data. Based on the reconstruction of different multidimensional NUS NMR data sets, L1 is demonstrated to be a fast and accurate reconstruction method for both quantitative, high dynamic range applications (e.g. NOESY) and for all J-coupled correlation experiments. Compared to L1, both IRL1 and Gaussian-SL0 are shown to produce slightly higher quality reconstructions with improved linearity in peak intensities, albeit with a computational cost. Finally, a generalized processing system, NESTA-NMR, is described that utilizes a fast and accurate first-order gradient descent algorithm (NESTA) recently developed in the compressed sensing field. NESTA-NMR incorporates L1, IRL1, and Gaussian-SL0 regularization. NESTA-NMR is demonstrated to provide an efficient, streamlined approach to handling all types of multidimensional NMR data using proteins ranging in size from 8 to 32 kDa.
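l1-norm reconstruction of the kind compared above is typically solved with proximal-gradient iterations. The sketch below uses plain ISTA (iterative soft-thresholding) on a generic underdetermined system; NESTA itself uses a faster Nesterov-type first-order scheme, so this is a conceptual stand-in, with the sensing matrix and sparsity level invented:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * |.|_1 (componentwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam=0.05, n_iter=500):
    """ISTA for min_x 0.5*|Ax - b|^2 + lam*|x|_1."""
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x
```

IRL1 repeats such a solve with weights inversely proportional to the previous iterate's magnitudes, which is where its extra computational cost comes from.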
Directory of Open Access Journals (Sweden)
Robert J Hickey
2007-01-01
Full Text Available Introduction: As an alternative to DNA microarrays, mass spectrometry based analysis of proteomic patterns has shown great potential in cancer diagnosis. The ultimate application of this technique in clinical settings relies on the advancement of the technology itself and the maturity of the computational tools used to analyze the data. A number of computational algorithms constructed on different principles are available for the classification of disease status based on proteomic patterns. Nevertheless, few studies have addressed the difference in the performance of these approaches. In this report, we describe a comparative case study on the classification accuracy of hepatocellular carcinoma based on the serum proteomic pattern generated from a Surface Enhanced Laser Desorption/Ionization (SELDI) mass spectrometer. Methods: Nine supervised classification algorithms are implemented in R software and compared for classification accuracy. Results: We found that the support vector machine with a radial function is preferable as a tool for classification of hepatocellular carcinoma using features in SELDI mass spectra. Among the rest of the methods, random forest and prediction analysis of microarrays have better performance. A permutation-based technique reveals that the support vector machine with a radial function seems intrinsically superior in learning from the training data, since it has a lower prediction error than the others when there is essentially no differential signal. On the other hand, the performance of random forest and prediction analysis of microarrays relies on their capability of capturing signals with substantial differentiation between groups. Conclusions: Our finding is similar to a previous study, where classification methods based on Matrix Assisted Laser Desorption/Ionization (MALDI) mass spectrometry are compared for the prediction accuracy of ovarian cancer. The support vector machine, random forest and prediction
International Nuclear Information System (INIS)
Highlights: • Simultaneous minimization of the thermal resistance and pressure drop is shown. • A genetic algorithm is capable of securing the above objectives. • Experimental data using microchannel heat sinks are limited. • Utilization of scarce experimental data from an ammonia-cooled microchannel. • Outcomes present potential for exploratory research into new coolants. - Abstract: Minimization of the thermal resistance and pressure drop of a microchannel heat sink is desirable for efficient heat removal, which is becoming a serious challenge as such cooling systems are continuously miniaturized while heat generation rates rise. However, a reduction in the thermal resistance generally leads to an increase in the pressure drop and vice versa. This paper reports the outcome of optimizing the hydraulic diameter and the wall-width-to-channel-width ratio of square and circular microchannel heat sinks for the simultaneous minimization of the two objectives: thermal resistance and pressure drop. The procedure was completed with a multi-objective genetic algorithm (MOGA). Environmentally friendly liquid ammonia was used as the coolant, and the thermophysical properties were obtained based on the average experimental saturation temperatures measured along an ammonia-cooled 3.0 mm internal diameter horizontal microchannel rig. The optimized results showed that with the same hydraulic diameter and pumping power, circular microchannels have lower thermal resistance. Based on the same number of microchannels per square cm, the thermal resistance for the circular channels is lower by 21% at the lowest pumping power and by 35% at the highest pumping power than that for the square microchannels. Results obtained at 10 °C and 5 °C showed no significant difference, probably due to the slight difference in properties at these temperatures.
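The multi-objective trade-off described above rests on Pareto dominance: one design dominates another if it is no worse in both objectives and strictly better in at least one. A minimal sketch, where the objective tuples are hypothetical (thermal resistance, pressure drop) pairs, both minimized; this is the generic MOGA ingredient, not the paper's actual optimizer.

```python
def dominates(a, b):
    """True if design a Pareto-dominates design b (both objectives minimized)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(designs):
    """Keep only the non-dominated designs."""
    return [d for d in designs
            if not any(dominates(other, d) for other in designs if other != d)]
```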
Detwiler, Russell L; Mehl, Steffen; Rajaram, Harihar; Cheung, Wendy W
2002-01-01
Numerical solution of large-scale ground water flow and transport problems is often constrained by the convergence behavior of the iterative solvers used to solve the resulting systems of equations. We demonstrate the ability of an algebraic multigrid algorithm (AMG) to efficiently solve the large, sparse systems of equations that result from computational models of ground water flow and transport in large and complex domains. Unlike geometric multigrid methods, this algorithm is applicable to problems in complex flow geometries, such as those encountered in pore-scale modeling of two-phase flow and transport. We integrated AMG into MODFLOW 2000 to compare two- and three-dimensional flow simulations using AMG to simulations using PCG2, a preconditioned conjugate gradient solver that uses the modified incomplete Cholesky preconditioner and is included with MODFLOW 2000. CPU times required for convergence with AMG were up to 140 times faster than those for PCG2. The cost of this increased speed was up to a nine-fold increase in required random access memory (RAM) for the three-dimensional problems and up to a four-fold increase in required RAM for the two-dimensional problems. We also compared two-dimensional numerical simulations of steady-state transport using AMG and the generalized minimum residual method with an incomplete LU-decomposition preconditioner. For these transport simulations, AMG yielded increased speeds of up to 17 times with only a 20% increase in required RAM. The ability of AMG to solve flow and transport problems in large, complex flow systems and its ready availability make it an ideal solver for use in both field-scale and pore-scale modeling. PMID:12019641
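For context, the preconditioned conjugate gradient family that PCG2 belongs to builds on the plain CG iteration sketched below. This is an unpreconditioned, pure-Python version suitable only for tiny dense examples, not for the large sparse systems the paper benchmarks.

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Solve Ax=b for symmetric positive-definite A (lists of lists)."""
    x = [0.0] * len(b)
    r = [bi - yi for bi, yi in zip(b, matvec(A, x))]  # initial residual
    p = list(r)
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = matvec(A, p)
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

Preconditioning (modified incomplete Cholesky in PCG2) and multigrid both attack the same weakness of plain CG: slow convergence on ill-conditioned systems.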
Directory of Open Access Journals (Sweden)
Ji-Hong Jeon
2014-11-01
Full Text Available Global optimization methods linked with simulation models are widely used for automated calibration and serve as useful tools for searching for cost-effective alternatives for environmental management. A genetic algorithm (GA) and the shuffled complex evolution (SCE-UA) algorithm were linked with the Long-Term Hydrologic Impact Assessment (L-THIA) model, which employs the curve number (SCS-CN) method. The performance of the two optimization methods was compared by automatically calibrating L-THIA for monthly runoff from 10 watersheds in Indiana. The selected watershed areas ranged from 32.7 to 5844.1 km2. The SCS-CN values and total five-day rainfall for adjustment were optimized, and the objective function used was the Nash-Sutcliffe (NS) value. The GA method rapidly reached the optimal space by the 10th generated population (generation); after the 10th generation, solutions increased their dispersion around the optimal space in a so-called cross-hair pattern because of the increased mutation rate. The number of loop executions influenced the calibration performance of both the SCE-UA and GA methods. The GA method performed better than the SCE-UA method when fewer loop executions were used. For most watersheds, calibration performance using GA was better than for SCE-UA until the 50th generation, when the number of model loop executions was around 5150 (one generation has 100 individuals). However, after the 50th generation of the GA method, the SCE-UA method performed better for calibrating monthly runoff than the GA method. Optimized SCS-CN values for primary land use types were nearly the same for the two methods, but those for minor land use types and total five-day rainfall for AMC adjustment were somewhat different because those parameters did not significantly influence the calculation of the objective function. The GA method is recommended for cases when model simulation takes a long time and the model user does not have sufficient time
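The objective function used above, the Nash-Sutcliffe efficiency, is easy to state as a small helper: 1.0 means a perfect fit, values near 0 mean the simulation is no better than the observed mean. A minimal sketch, not L-THIA's code.

```python
def nash_sutcliffe(observed, simulated):
    """Nash-Sutcliffe efficiency: 1 - SSE / variance of observations."""
    mean_obs = sum(observed) / len(observed)
    ss_err = sum((o - s) ** 2 for o, s in zip(observed, simulated))
    ss_tot = sum((o - mean_obs) ** 2 for o in observed)
    return 1.0 - ss_err / ss_tot
```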
Directory of Open Access Journals (Sweden)
Raju Datla
2016-02-01
Full Text Available The radiometric calibration equations for the thermal emissive bands (TEB) and the reflective solar bands (RSB) measurements of earth scenes by the polar satellite sensors (Terra and Aqua MODIS, and Suomi NPP VIIRS) and the geostationary sensors (the GOES Imager and the GOES-R Advanced Baseline Imager, ABI) are analyzed towards calibration algorithm harmonization on the basis of SI traceability, which is one of the goals of the NOAA National Calibration Center (NCC). One of the overarching goals of NCC is to provide a knowledge base on the NOAA operational satellite sensors and to recommend best practices for achieving SI traceability for radiance measurements on-orbit. As such, the calibration methodologies of these satellite optical sensors are reviewed in light of the recommended practice for radiometric calibration at the National Institute of Standards and Technology (NIST). The equivalence of some of the spectral bands in these sensors for their end products is presented. The operational and calibration features of the sensors for on-orbit observation of radiance are also compared in tabular form. This review also serves as a quick cross-reference for researchers and analysts on how the observed signals from these sensors in space are converted to radiances.
Serag, Ahmed; Blesa, Manuel; Moore, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Wilkinson, A. G.; MacNaught, Gillian; Semple, Scott I.; Boardman, James P.
2016-03-01
Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases ‘uniformly’ distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.
Odindi, John; Adam, Elhadi; Ngubane, Zinhle; Mutanga, Onisimo; Slotow, Rob
2014-01-01
Plant species invasion is known to be a major threat to socioeconomic and ecological systems. Due to the high cost and limited extents of urban green spaces, high mapping accuracy is necessary to optimize the management of such spaces. We compare the performance of the new-generation WorldView-2 (WV-2) and SPOT-5 images in mapping the bracken fern [Pteridium aquilinum (L.) Kuhn] in a conserved urban landscape. Using the random forest algorithm, grid-search approaches based on the out-of-bag error estimate were used to determine the optimal ntree and mtry combinations. The variable importance and backward feature elimination techniques were further used to determine the influence of the image bands on mapping accuracy. Additionally, the value of the commonly used vegetation indices in enhancing the classification accuracy was tested on the better performing image data. Results show that the performance of the new WV-2 bands was better than that of the traditional bands. Overall classification accuracies of 84.72 and 72.22% were achieved for the WV-2 and SPOT images, respectively. Use of selected indices from the WV-2 bands increased the overall classification accuracy to 91.67%. The findings in this study show the suitability of the new generation in mapping the bracken fern within the often vulnerable urban natural vegetation cover types.
Directory of Open Access Journals (Sweden)
Li Zhen
2008-05-01
analysis of data sets in which in vitro bioassay data is being used to predict in vivo chemical toxicology. From our analysis, we can recommend that several ML methods, most notably SVM and ANN, are good candidates for use in real world applications in this area.
Many-Objective Distinct Candidates Optimization using Differential Evolution
DEFF Research Database (Denmark)
Justesen, Peter; Ursem, Rasmus Kjær
2010-01-01
paper, we present the novel MODCODE algorithm incorporating the ROD measure to measure and control candidate distinctiveness. MODCODE is tested against GDE3 on three real world centrifugal pump design problems supplied by Grundfos. Our algorithm outperforms GDE3 on all problems with respect to all...
DEFF Research Database (Denmark)
Larsen, Thomas Ostenfeld; Petersen, Bent O.; Duus, Jens Øllgaard;
2005-01-01
X-hitting, a newly developed algorithm for automated comparison of UV data, has been used for the tracking of two novel spiro-quinazoline metabolites, lapatins A (1) and B (2), in a screening study targeting quinazolines. The structures of 1 and 2 were elucidated by analysis of spectroscopic data, p...
Promising new cryogenic seal candidate
International Nuclear Information System (INIS)
Of the five seal candidates considered for the main propellant system of the Space Shuttle, only one candidate, the fluoroplastic Halar, satisfied all tests including the critical LO2 impact test and the cryogenic compression sealability test. Radiation-cross-linked Halar is a tough, strong thermoplastic that not only endured one hundred 2200 N compression cycles at 83 K while mounted in a standard military O-ring gland without cracking or deforming, but improved in sealability as a result of this cycling. Although these Halar O-rings require much higher sealing forces (approximately 500 N) at room temperature than rubber O-rings, on cooling to cryogenic temperatures the required sealing force only doubles, whereas the sealing force for rubber O-rings increases eightfold. Although these Halar O-rings were inadequately cross-linked, they still exhibited promise as LO2-compatible cryogenic seals. It is expected that their high-temperature properties can be greatly improved by higher degrees of cross-linking (e.g., by 20 mrad of radiation) without compromising their already excellent low-temperature properties. A direct comparison should then be obtained between the best of the cross-linked Halar compounds and the current commercial cryogenic seal materials, filled Teflon and Kel-F
Improved comparison inspection algorithm between ICT images and CAD model
Institute of Scientific and Technical Information of China (English)
张志波; 曾理; 何洪举
2012-01-01
This paper improves an algorithm for analyzing manufacturing error based on comparison inspection between industrial computed tomography (ICT) images and the CAD model. First, the ICT images are segmented with a 3D Otsu threshold method, and the edge surfaces and corner features are extracted. Second, the oriented bounding boxes (OBBs) of the ICT images' corner features and of the work-piece's CAD model are calculated using the presented rotating projection method, and the two OBBs are used to achieve rough registration. The singular value decomposition and iterative closest point (SVD-ICP) algorithm is then used to complete the precise registration between the CAD model and the corner features of the ICT images, with a k-d tree used to speed up the search for closest points. Finally, the error is displayed on the edge surface. Experimental results indicate that the rough registration is more accurate and more widely applicable, and that the whole comparison inspection process is more efficient and considerably faster.
An Improved K-means Clustering Algorithm
Xiuchang Huang; Wei Su
2014-01-01
An improved k-means clustering algorithm based on the K-MEANS algorithm is proposed. This paper derives the improvement over the traditional algorithm by analyzing statistical data. After a comparison between the actual data and the simulation data, this paper shows that the improved algorithm significantly reduces the classification error on the simulation data set and that its quality is much better than that of the K-MEANS algorithm. Such comparative results confirm that the improved algorithm...
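For reference, the classical baseline the improved variant starts from is Lloyd's k-means iteration: assign each point to its nearest center, then move each center to its cluster mean. A minimal pure-Python sketch, not the paper's improved algorithm.

```python
import random

def kmeans(points, k, iters=20, seed=0):
    """Lloyd's k-means on tuples of coordinates; returns final centers."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)           # classic random initialization
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                       # assignment step
            i = min(range(k),
                    key=lambda j: sum((a - b) ** 2
                                      for a, b in zip(p, centers[j])))
            clusters[i].append(p)
        for j, cl in enumerate(clusters):      # update step
            if cl:
                centers[j] = tuple(sum(c) / len(cl) for c in zip(*cl))
    return centers
```

The improvements discussed in such papers typically target the weak spots visible even here: the random initialization and the sensitivity of the cluster means to outliers.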
International Nuclear Information System (INIS)
By late 2008 one of the most important pieces of the 'Museo delle Antichità Egizie' of Turin, the sculpture of the Pharaoh with the god Amun, was planned to be one of the masterpieces of a travelling exhibition in Japan. The 'Fondazione Museo delle Antichità Egizie di Torino', which manages the museum, was concerned with the integrity of the base of the statue, which presents visible signs of restoration dating back to the early 19th century. It was required to estimate the persistence of the visible fractures, to search for unknown ones, and to provide information about the overall mechanical strength of the base. To tackle the first question, a GPR reflection survey along three sides of the base was performed and the results were assembled in a 3D rendering. For the second question, two parallel, horizontal ultrasonic 2D tomograms across the base were made. We acquired, for each section, 723 ultrasonic signals corresponding to different transmitter and receiver positions. The tomographic data were inverted using four different software packages based upon different algorithms. The obtained velocity images were then compared with each other, with the GPR results, and with the visible fractures in the base. A critical analysis of the comparisons is finally presented
Closing the door on dark matter candidates
International Nuclear Information System (INIS)
Cold dark matter candidates - if they exist - will be present in, and perhaps dominate the mass density of, the halo of our Galaxy. As they pass the vicinity of the Sun, such halo particles will be focussed gravitationally, pass through the Sun and - occasionally - be captured and accumulate in the solar core. When the density of dark matter candidates in the core of the Sun has increased sufficiently, they will annihilate, and among the annihilation products will be energetic (≥ GeV) "ordinary" neutrinos (νe, νμ, ντ) which are detectable in deep underground experiments. The event rates in such detectors from the capture and annihilation of various dark matter candidates (Dirac and Majorana neutrinos, Sneutrinos and Photinos) are presented and it is shown how comparison with data may lead to constraints on (or the exclusion of) the masses of these particles. 6 refs
International Nuclear Information System (INIS)
One of the simplest, yet most profound, questions we can ask about the Universe is: how much stuff is in it, and further, what is that stuff composed of? Needless to say, the answer to this question has very important implications for the evolution of the Universe, determining both its ultimate fate and the course of structure formation. Remarkably, at this late date in the history of the Universe we still do not have a definitive answer to this simplest of questions, although we have some very intriguing clues. It is known with certainty that most of the material in the Universe is dark, and we have the strong suspicion that the dominant component of material in the Cosmos is not baryons, but rather exotic relic elementary particles left over from the earliest, very hot epoch of the Universe. If so, the Dark Matter question is a most fundamental one facing both particle physics and cosmology. The leading particle dark matter candidates are: the axion, the neutralino, and a light neutrino species. All three candidates are accessible to experimental tests, and experiments are now in progress. In addition, there are several dark-horse, long-shot candidates, including the superheavy magnetic monopole and soliton stars. 13 refs
Directory of Open Access Journals (Sweden)
Mukabatsinda Constance
2012-01-01
Full Text Available Abstract Background The algorithmic approach to guidelines has been introduced and promoted on a large scale since the 1970s. This study aims at comparing the performance of three algorithms for the management of chronic cough in patients with HIV infection, and at reassessing the current position of algorithmic guidelines in clinical decision making through an analysis of accuracy, harm and complexity. Methods Data were collected at the University Hospital of Kigali (CHUK) in a total of 201 HIV-positive hospitalised patients with chronic cough. We simulated management of each patient following the three algorithms. The first was locally tailored by clinicians from CHUK; the second and third were drawn from publications by Médecins sans Frontières (MSF) and the World Health Organisation (WHO). Semantic analysis techniques known as Clinical Algorithm Nosology were used to compare them in terms of complexity and similarity. For each of them, we assessed the sensitivity, delay to diagnosis and hypothetical harm of false positives and false negatives. Results The principal diagnoses were tuberculosis (21%) and pneumocystosis (19%). Sensitivity, representing the proportion of correct diagnoses made by each algorithm, was 95.7%, 88% and 70% for CHUK, MSF and WHO, respectively. Mean time to appropriate management was 1.86 days for CHUK and 3.46 days for the MSF algorithm. The CHUK algorithm was the most complex, followed by MSF and WHO. Total harm was by far the highest for the WHO algorithm, followed by MSF and CHUK. Conclusions This study confirms our hypothesis that sensitivity and patient safety (i.e. less expected harm) are proportional to the complexity of algorithms, though increased complexity may make them difficult to use in practice.
Kim, Sung Jin; Kim, Sung Kyu; Kim, Dong Ho
2015-07-01
Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms provided by our planning systems, pencil beam (PB), collapsed cone (CC), and Monte Carlo (MC), were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated using the PB, CC, and MC algorithms. Planning target volume (PTV) and organs at risk (OARs) delineations were performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3-0.5 cm computed tomography (CT) slices taken under normal respiration conditions. Intensity-modulated radiation therapy (IMRT) plans were calculated with the three algorithms for each patient. The plans were conducted on the Oncentra MasterPlan (PB and CC) and CMS Monaco (MC) treatment planning systems for 6 MV. The plans were compared in terms of the dose distribution in the target, the OAR volumes, and the monitor units (MUs). Furthermore, absolute dosimetry was measured using a three-dimensional diode array detector (ArcCHECK) to evaluate the dose differences in a homogeneous phantom. Comparing the dose distributions planned with the PB, CC, and MC algorithms, the PB algorithm provided adequate coverage of the PTV. The MUs calculated using the PB algorithm were less than those calculated using the other two algorithms. The MC algorithm showed the highest accuracy in terms of absolute dosimetry. Differences were found when comparing the calculation algorithms: the PB algorithm estimated higher doses for the target than the CC and MC algorithms, actually overestimating the dose compared with those calculated by the CC and MC algorithms. The MC algorithm showed better accuracy than the other algorithms.
Directory of Open Access Journals (Sweden)
Angelina Espejel
2012-01-01
Full Text Available A description and comparison of three extended visual cryptography algorithms is presented. These methods are currently often used as a basis for developing more complex schemes. Results are obtained for the superposition of the shares, pixel expansion, the quality of the resulting image and of the shares, and the number of images that can be encrypted. In a conventional visual cryptography scheme, a secret image is encrypted into a set of images that look like pseudo-random noise, which are distributed to the participants of the scheme; to recover the secret image, the received images must be superimposed. In extended visual cryptography, the shares are visually recognizable images. It is concluded that each of the three methods has advantages over the others depending on the application.
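The conventional scheme described above can be sketched as a toy (2,2) construction with 1x2 pixel expansion: each share alone is random noise, and stacking the shares (a black subpixel prints over a white one) reveals the secret. This illustrates the classical principle only, not the extended algorithms the paper compares.

```python
import random

def make_shares(secret_bits, rng=random.Random(1)):
    """secret_bits: list of 0 (white) / 1 (black) pixels. Returns two
    shares, each a list of 2-subpixel tuples (1 = black subpixel)."""
    s1, s2 = [], []
    for bit in secret_bits:
        pattern = rng.choice([(1, 0), (0, 1)])  # random on its own
        s1.append(pattern)
        # identical patterns encode white; complementary ones encode black
        s2.append(pattern if bit == 0 else tuple(1 - p for p in pattern))
    return s1, s2

def stack(s1, s2):
    """Superimpose the shares: a pixel reads as black iff both of its
    subpixels end up black after the OR of the two shares."""
    return [int(all(a | b for a, b in zip(p1, p2)))
            for p1, p2 in zip(s1, s2)]
```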
Institute of Scientific and Technical Information of China (English)
马超
2012-01-01
A comparative approach was used to evaluate the performance of the genetic algorithm and the Dijkstra algorithm in solving the shortest-path problem in a dynamic-weight system. The two algorithms were applied to the same game model and tested for stability, intelligence, and time complexity. The game model simulates dynamic-weight systems under various conditions. To make the genetic algorithm more reliable, its mutation process was optimized so that it converges faster and more dependably. The experimental data show that the genetic algorithm's score on each map, as well as its running time, is generally higher than that of the Dijkstra algorithm. The experiments lead to the conclusion that the stability and expected results of the genetic algorithm are clearly better than those of the Dijkstra algorithm for the shortest-path problem in dynamic-weight systems, but that its time complexity is higher.
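For reference, the Dijkstra side of the comparison is the textbook priority-queue algorithm sketched below (the GA side, with its optimized mutation step, is not reproduced). The graph encoding is an assumption for illustration: a dict mapping each node to a list of (neighbour, weight) pairs.

```python
import heapq

def dijkstra(graph, start, goal):
    """Shortest-path cost from start to goal; inf if unreachable."""
    dist = {start: 0}
    pq = [(0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == goal:
            return d
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return float("inf")
```

In a dynamic-weight system this search must be re-run whenever edge weights change, which is one reason a population-based method can be attractive despite its overhead.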
Bassa, Zaakirah; Bob, Urmilla; Szantoi, Zoltan; Ismail, Riyad
2016-01-01
In recent years, the popularity of tree-based ensemble methods for land cover classification has increased significantly. Using WorldView-2 image data, we evaluate the potential of the oblique random forest algorithm (oRF) to classify a highly heterogeneous protected area. In contrast to the random forest (RF) algorithm, the oRF algorithm builds multivariate trees by learning the optimal split using a supervised model. The oRF binary algorithm is adapted to a multiclass land cover and land use application using both the "one-against-one" and "one-against-all" combination approaches. Results show that the oRF algorithms are capable of achieving high classification accuracies (>80%). However, there was no statistical difference in classification accuracies obtained by the oRF algorithms and the more popular RF algorithm. For all the algorithms, user accuracies (UAs) and producer accuracies (PAs) >80% were recorded for most of the classes. Both the RF and oRF algorithms poorly classified the indigenous forest class as indicated by the low UAs and PAs. Finally, the results from this study advocate and support the utility of the oRF algorithm for land cover and land use mapping of protected areas using WorldView-2 image data.
Institute of Scientific and Technical Information of China (English)
马苗; 刘艳丽
2012-01-01
Aiming at the performance comparison of swarm intelligence optimization algorithms, a topic for which qualified research findings are scarce, we constructed a comparison platform in which digital images serve as the habitat of the swarms. We then proposed novel performance-evaluation criteria, the convergence relational degree and the convergence area, based on changes in the best individual, and used them to compare and test several swarm intelligence optimization algorithms: the genetic algorithm (GA), particle swarm optimization (PSO), the artificial fish swarm (AFS) algorithm, the bacterial foraging (BF) algorithm, and the artificial bee colony (ABC) algorithm. Experimental results showed that the proposed platform and evaluation criteria can reasonably and effectively compare the optimization capability of swarms under different search mechanisms.
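A toy run in the spirit of such a comparison: one of the compared optimizers (PSO) minimizing the sphere function while recording the best individual per iteration — the trace from which best-individual-based convergence criteria are computed. The coefficients and the objective are illustrative; the paper's image-habitat environment is not modeled here.

```python
import random

def pso_sphere(dim=2, particles=10, iters=50, seed=42):
    """Standard PSO on f(x) = sum(x_i^2); returns the best position
    found and the per-iteration history of the best objective value."""
    rng = random.Random(seed)
    f = lambda x: sum(v * v for v in x)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(particles)]
    vel = [[0.0] * dim for _ in range(particles)]
    pbest = [list(p) for p in pos]          # personal bests
    gbest = min(pbest, key=f)               # global best
    history = []
    for _ in range(iters):
        for i in range(particles):
            for d in range(dim):
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * rng.random() * (pbest[i][d] - pos[i][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            if f(pos[i]) < f(pbest[i]):
                pbest[i] = list(pos[i])
                if f(pos[i]) < f(gbest):
                    gbest = list(pos[i])
        history.append(f(gbest))            # best-individual trace
    return gbest, history
```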
Energy Technology Data Exchange (ETDEWEB)
Ortiz J, J. [Instituto Nacional de Investigaciones Nucleares, Depto. Sistemas Nucleares, A.P. 18-1027, 11801 Mexico D.F. (Mexico); Requena, I. [Universidad de Granada (Spain)
2002-07-01
In this work the results of a genetic algorithm (GA) and a recurrent multi-state neural network (RNRME) for optimizing the fuel reload of 5 cycles of the Laguna Verde nuclear power plant (CNLV) are presented. The fuel reloads obtained by both methods are compared, and it was observed that the RNRME creates better fuel distributions than the GA. Moreover, a comparison of the utility of using one or the other technique is made. (Author)
Particle Dark Matter Candidates
Scopel, Stefano
2007-01-01
I give a short overview on some of the favorite particle Cold Dark Matter candidates today, focusing on those having detectable interactions: the axion, the KK-photon in Universal Extra Dimensions, the heavy photon in Little Higgs and the neutralino in Supersymmetry. The neutralino is still the most popular, and today is available in different flavours: SUGRA, nuSUGRA, sub-GUT, Mirage mediation, NMSSM, effective MSSM, scenarios with CP violation. Some of these scenarios are already at the level of present sensitivities for direct DM searches.
Constructing a Scheduling Algorithm For Multidirectional Elevators
Edlund, Joakim; Berntsson, Fredrik
2015-01-01
With this thesis we aim to create an efficient scheduling algorithm for elevators that can move in multiple directions, and to establish if and when the algorithm is efficient in comparison to algorithms constructed for traditional elevators. To measure efficiency, a simulator is constructed to simulate an elevator system implementing different algorithms. Because of the challenge of constructing a simulator, and since we did not find either a simulator or any algorithms for use in mult...
An inversion algorithm for general tridiagonal matrix
Institute of Scientific and Technical Information of China (English)
Rui-sheng RAN; Ting-zhu HUANG; Xing-ping LIU; Tong-xiang GU
2009-01-01
An algorithm for the inverse of a general tridiagonal matrix is presented. For a tridiagonal matrix having the Doolittle factorization, an inversion algorithm is established. The algorithm is then generalized to deal with a general tridiagonal matrix without any restriction. Comparison with other methods is provided, indicating the low computational complexity of the proposed algorithm and its applicability to general tridiagonal matrices.
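For context, a baseline way to invert a tridiagonal matrix is to solve A x = e_j for each unit vector with the Thomas algorithm (forward elimination plus back substitution), giving the inverse column by column in O(n^2) total. This is a standard-method sketch, not the paper's algorithm, which derives the inverse from the Doolittle factorization.

```python
def thomas_solve(a, b, c, d):
    """Solve a tridiagonal system: a is the sub-diagonal (a[0] unused),
    b the diagonal, c the super-diagonal (c[-1] unused), d the RHS."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):
        m = b[i] - a[i] * cp[i - 1]           # eliminated pivot
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):            # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def tridiag_inverse(a, b, c):
    """Inverse column by column: solve A x = e_j for each unit vector e_j."""
    n = len(b)
    cols = [thomas_solve(a, b, c, [1.0 if i == j else 0.0 for i in range(n)])
            for j in range(n)]
    return [[cols[j][i] for j in range(n)] for i in range(n)]  # transpose
```

The Thomas algorithm assumes the matrix admits this LU-style factorization without pivoting; the paper's contribution is precisely removing such restrictions.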
Eremeev, Anton V.
2015-01-01
This manuscript contains an outline of the lecture course "Evolutionary Algorithms" read by the author at Omsk State University named after F.M. Dostoevsky. The course covers the Canonical Genetic Algorithm and various other genetic algorithms, as well as evolutionary algorithms in general. Some facts, such as the rotation property of crossover, the Schema Theorem, GA performance as a local search, and "almost sure" convergence of evolutionary algorithms, are given with complete proofs. The text is in Russian.
Comparison of Classification Algorithms in a Coal Exploration Data Analysis System
Institute of Scientific and Technical Information of China (English)
莫洪武; 万荣泽
2013-01-01
During coal mining, the collected exploration data need to be analyzed and studied in order to mine more valuable information from them. Focusing on several data classification algorithms, this paper studies and analyzes their roles in coal exploration data analysis. By studying and comparing the performance of multiple classification algorithms on this analysis task, we identify the classification algorithms that can process exploration data most effectively.
Institute of Scientific and Technical Information of China (English)
李林
2012-01-01
This paper analyzes the design ideas and characteristics of two clustering algorithms: GN clustering, used for network community division, and AP (affinity propagation) clustering from pattern recognition. Taking library borrowing records as an example, a data set for customer clustering was constructed and the two algorithms were compared on it. The results indicate that the two algorithms reveal the structural features of the customer group from different angles: the GN clustering result is close to the macroscopic feature classification of the customers, while the AP result reflects the distribution of customer demands. The influence of the algorithms' design principles on the experimental results is also discussed. This work provides a useful reference for the design and improvement of clustering algorithms and for data mining of customer behavior.
Energy Technology Data Exchange (ETDEWEB)
Llacer Martos, S.; Herraiz Lablanca, M. D.; Puchal Ane, R.
2011-07-01
In this paper, the image quality obtained with each of the algorithms and its running time are evaluated, to optimize the choice of algorithm taking into account both the quality of the reconstructed image and the time spent on the reconstruction.
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
Fitness inheritance in the Bayesian optimization algorithm
Pelikan, Martin; Sastry, Kumara
2004-01-01
This paper describes how fitness inheritance can be used to estimate fitness for a proportion of newly sampled candidate solutions in the Bayesian optimization algorithm (BOA). The goal of estimating fitness for some candidate solutions is to reduce the number of fitness evaluations for problems where fitness evaluation is expensive. Bayesian networks used in BOA to model promising solutions and generate the new ones are extended to allow not only for modeling and sampling candidate solutions...
Building Better Nurse Scheduling Algorithms
Aickelin, Uwe
2008-01-01
The aim of this research is twofold: Firstly, to model and solve a complex nurse scheduling problem with an integer programming formulation and evolutionary algorithms. Secondly, to detail a novel statistical method of comparing and hence building better scheduling algorithms by identifying successful algorithm modifications. The comparison method captures the results of algorithms in a single figure that can then be compared using traditional statistical techniques. Thus, the proposed method of comparing algorithms is an objective procedure designed to assist in the process of improving an algorithm. This is achieved even when some results are non-numeric or missing due to infeasibility. The final algorithm outperforms all previous evolutionary algorithms, which relied on human expertise for modification.
Sorting Algorithms with Restrictions
Aslanyan, Hakob
2011-01-01
Sorting is one of the most used and best investigated algorithmic problems [1]. The traditional postulation assumes that the data to be sorted are archived, and that the elementary operation is a comparison of two numbers. With the appearance of new processors and of applied problems involving data streams, sorting has changed its face. These changes and generalizations are the subject of the investigation below.
General cardinality genetic algorithms
Koehler; Bhattacharyya; Vose
1997-01-01
A complete generalization of the Vose genetic algorithm model from the binary to the higher-cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher-cardinality usage. A comparison of results to the binary case is provided. PMID:10021767
International Nuclear Information System (INIS)
An increasing number of Evolutionary Algorithms (EAs) have been successfully employed in complex-function and combinatorial optimization problems. Among the EAs is Population-Based Incremental Learning (PBIL), a method that combines the mechanisms of the standard genetic algorithm with competitive learning. PBIL has been an efficient tool in combinatorial optimization problems. The purpose of this work is to introduce a parallelization of the PBIL algorithm, applied to a PWR nuclear reload optimization problem. Tests were performed with data from cycle 7 of the Angra 1 PWR. Results are compared with those of the serial PBIL. (author)
DEFF Research Database (Denmark)
Olsen, Emil; Boye, Jenny Katrine; Pfau, Thilo;
2012-01-01
Motion capture is frequently used over ground in equine locomotion science to study kinematics. Determination of gait events (hoof-on/off and stance) without force plates is essential to cut the data into strides. The lack of comparative evidence emphasises the need to compare existing algorithms and use robust and validated algorithms. It is the objective of this study to compare accuracy (bias) and precision (SD) for five published human and equine motion capture foot-on/off and stance phase detection algorithms during walk. Six horses were walked over 8 seamlessly embedded force plates surrounded by a 12-camera infrared motion capture system. The algorithms were based on horizontal or vertical displacement and velocity of the hoof relative to the centre of mass movement, or on fetlock angle and velocity, or on displacement of the hoof. Horizontal hoof velocity relative to the centre...
Tel, G.
1993-01-01
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of distri
Indian Academy of Sciences (India)
SHIDROKH GOUDARZI; WAN HASLINA HASSAN; MOHAMMAD HOSSEIN ANISI; SEYED AHMAD SOLEYMANI
2016-07-01
Genetic algorithms (GAs) and simulated annealing (SA) have emerged as leading methods for search and optimization problems in heterogeneous wireless networks. In this paradigm, various access technologies need to be interconnected; thus, vertical handovers are necessary for seamless mobility. In this paper, a hybrid algorithm for real-time vertical handover using different objective functions is presented to find the optimal network to connect to, with a good quality of service, in accordance with the user's preferences. The characteristics of current mobile devices call for fast and efficient algorithms that provide solutions in near real time. These constraints moved us to develop intelligent algorithms that avoid slow and massive computations, specifically to solve two major problems in GA optimization, premature convergence and a slow convergence rate, by applying simulated annealing in the population-merging phase of the search. The hybrid algorithm was expected to improve on the pure GA in two ways: improved solutions for a given number of evaluations, and more stability over many runs. This paper compares the formulation and results of four recent optimization algorithms: artificial bee colony (ABC), genetic algorithm (GA), differential evolution (DE), and particle swarm optimization (PSO). Moreover, a cost function is used to sustain the desired QoS during the transition between networks, measured in terms of the bandwidth, BER, ABR, SNR, and monetary cost. Simulation results indicated that choosing the SA rules minimizes the cost function and that the GA-SA algorithm can decrease the number of unnecessary handovers, thereby preventing the 'Ping-Pong' effect.
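The two ingredients above, a weighted multi-attribute cost and an SA acceptance rule, can be sketched as follows. The network names, attribute values, weights, and normalizations here are invented for illustration; they are not the paper's cost function.

```python
import math
import random

# Hypothetical candidate networks: (bandwidth in Mbps, bit error rate, price).
NETWORKS = {
    "wlan":  (54.0, 1e-5, 1.0),
    "umts":  (2.0,  1e-6, 3.0),
    "wimax": (20.0, 1e-5, 2.0),
}

def handover_cost(net, w_bw=0.5, w_ber=0.3, w_price=0.2):
    """Weighted handover cost: higher bandwidth lowers the cost, while a
    higher bit error rate or monetary price raises it (toy normalizations)."""
    bw, ber, price = NETWORKS[net]
    return w_bw * (1.0 / bw) + w_ber * (ber / 1e-5) + w_price * (price / 3.0)

def sa_accept(delta, temperature, rng):
    """Metropolis rule used when SA refines GA candidates: always accept an
    improvement; accept a worse candidate with probability exp(-delta/T)."""
    return delta <= 0 or rng.random() < math.exp(-delta / temperature)
```

In a GA-SA hybrid of this kind, `sa_accept` decides whether a mutated candidate replaces its parent during the merge phase, which is what counteracts premature convergence.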
International Nuclear Information System (INIS)
Purpose: The latest publications indicate that the Ray Tracing (RT) algorithm significantly overestimates the dose delivered as compared to the Monte Carlo (MC) algorithm. The purpose of this study is to quantify this overestimation and to identify significant correlations between the RT and MC calculated dose distributions. Methods: Preliminary results are based on 50 preexisting RT algorithm dose optimization and calculation treatment plans prepared on the Multiplan treatment planning system (Accuray Inc., Sunnyvale, CA). The analysis will be expanded to include 100 plans. These plans are recalculated using the MC algorithm, with high resolution and 1% uncertainty. The geometry and number of beams for a given plan, as well as the number of monitor units, are constant for the calculations for both algorithms, and normalized differences are compared. Results: MC calculated doses were significantly smaller than RT doses. The D95 of the PTV was 27% lower for the MC calculation. The GTV and PTV mean coverage were 13% and 39% less for the MC calculation. The first parameter of conformality, defined as the ratio of the Prescription Isodose Volume to the PTV Volume, was on average 1.18 for RT and 0.62 for MC. Maximum doses delivered to OARs were reduced in the MC plans. The doses for 1000 and 1500 cc of total lung minus PTV were reduced by 39% and 53%, respectively, for the MC plans. The correlation of the ratio of air in the PTV to the PTV with the difference in PTV coverage had a coefficient of −0.54. Conclusion: The preliminary results confirm that the RT algorithm significantly overestimates the doses delivered, in agreement with previous analyses. Finally, subdividing the data into different size regimes increased the correlation for the smaller PTVs, indicating that the improvement of the MC algorithm versus the RT algorithm depends on the size of the PTV.
Directory of Open Access Journals (Sweden)
Pongpan Nakkaew
2016-06-01
Full Text Available In manufacturing processes where efficiency is crucial to remaining competitive, the flowshop is a common configuration in which machines are arranged in series and products are produced through the stages one by one. In certain production processes, the machines are frequently configured so that each production stage may contain multiple processing units in parallel (hybrid). Moreover, along with precedence conditions, sequence-dependent setup times may exist. Finally, when there is no buffer, a machine is said to be blocked if the next stage to handle its output is occupied. For such an NP-hard problem, referred to as the Blocking Hybrid Flowshop Scheduling Problem with Sequence Dependent Setup/Changeover Times, it is usually not possible to find the best exact solution to satisfy optimization objectives such as minimization of the overall production time. Thus, it is usually solved by approximate algorithms such as metaheuristics. In this paper, we investigate comparatively the effectiveness of two approaches: a Genetic Algorithm (GA) and an Artificial Bee Colony (ABC) algorithm. GA is inspired by the process of natural selection. ABC, in the same manner, resembles the way types of bees perform specific functions and work collectively to find their food by means of division of labor. Additionally, we apply an algorithm to improve the GA and ABC algorithms so that they can take advantage of the parallel processing resources of modern multi-core processors while eliminating the need to screen for the optimal parameters of both algorithms in advance.
Detection Algorithms: FFT vs. KLT
Maccone, Claudio
Given the vast distances between the stars, we can anticipate that any received SETI signal will be exceedingly weak. How can we hope to extract (or even recognize) such signals buried well beneath the natural background noise with which they must compete? This chapter analyzes, compares, and contrasts the two dominant signal detection algorithms used by SETI scientists to recognize extremely weak candidate signals.
Blind Alley Aware ACO Routing Algorithm
Yoshikawa, Masaya; Otani, Kazuo
2010-10-01
The routing problem arises in various engineering fields and is studied by many researchers. In this paper, we propose a new routing algorithm based on Ant Colony Optimization. The proposed algorithm introduces a tabu search mechanism to escape blind alleys, and can therefore find the shortest route even when the map data contain blind alleys. Experiments using map data prove its effectiveness in comparison with the Dijkstra algorithm, the most popular conventional routing algorithm.
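The blind-alley situation is easy to see in the standard ACO construction step: an ant that has no unvisited neighbour is stuck. The sketch below shows a generic pheromone-weighted transition rule with blind-alley detection; it is a textbook ACO step, not the paper's algorithm, and the tabu-based escape itself is only indicated, not implemented.

```python
import random

def choose_next(current, graph, pheromone, visited, rng, alpha=1.0, beta=2.0):
    """One ant construction step: choose among unvisited neighbours with
    probability proportional to pheromone^alpha * (1/distance)^beta.
    graph maps node -> [(neighbour, distance), ...]."""
    options = [(n, d) for n, d in graph[current] if n not in visited]
    if not options:
        return None  # blind alley: caller must backtrack / consult tabu list
    weights = [pheromone[(current, n)] ** alpha * (1.0 / d) ** beta
               for n, d in options]
    r = rng.random() * sum(weights)  # roulette-wheel draw over the weights
    for (n, _), w in zip(options, weights):
        r -= w
        if r <= 0:
            return n
    return options[-1][0]
```

Returning `None` is the hook where a tabu mechanism like the paper's would mark the dead-end edge and force the ant elsewhere.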
International Nuclear Information System (INIS)
Purpose: To evaluate the adequacy of tumor volume coverage using a three-dimensional (3D) margin-growing algorithm compared to a two-dimensional (2D) margin-growing algorithm in the conformal radiotherapy planning of prostate cancer. Methods and Materials: Two gross tumor volumes (GTV) were segmented in each of 10 patients with localized prostate cancer: prostate gland only (PO) and prostate with seminal vesicles (PSV). A predetermined margin of 10 mm was applied to these two groups (PO and PSV) using both 2D and 3D margin-growing algorithms. The 2D algorithm added a transaxial margin to each GTV slice, whereas the 3D algorithm added a volumetric margin all around the GTV. The true planning target volume (PTV) was defined as the region delineated by the 3D algorithm. The adequacy of geometric coverage of the GTV by the two algorithms was examined in a series of transaxial planes throughout the target volume. Results: The 2D margin-growing algorithm underestimated the PTV by 17% (range 12-20) in the PO group and by 20% (range 13-28) for the PSV group when compared to the 3D-margin algorithm. For the PO group, the mean transaxial difference between the 2D and 3D algorithm was 3.8 mm inferiorly (range 0-20), 1.8 mm centrally (range 0-9), and 4.4 mm superiorly (range 0-22). Considering all of these regions, the mean discrepancy anteriorly was 5.1 mm (range 0-22), posteriorly 2.2 mm (range 0-20), right border 2.8 mm (range 0-14), and left border 3.1 mm (range 0-12). For the PSV group, the mean discrepancy in the inferior region was 3.8 mm (range 0-20), in the central region of the prostate 1.8 mm (range 0-9), in the junction region of the prostate and the seminal vesicles 5.5 mm (range 0-30), and in the superior region of the seminal vesicles 4.2 mm (range 0-55). When the different borders were considered in the PSV group, the mean discrepancies for the anterior, posterior, right, and left borders were 6.4 mm (range 0-55), 2.5 mm (range 0-20), 2.6 mm (range 0-14), and 3
Multithreaded Implementation of Hybrid String Matching Algorithm
Directory of Open Access Journals (Sweden)
Akhtar Rasool
2012-03-01
Full Text Available After reading and taking reference from many books and articles, and analyzing the naive algorithm, the Boyer-Moore algorithm, the Knuth-Morris-Pratt (KMP) algorithm, and a variety of improved algorithms, this paper summarizes the advantages and disadvantages of the various pattern matching algorithms. On this basis, a new algorithm, the Multithreaded Hybrid algorithm, is introduced. The algorithm draws on the Boyer-Moore algorithm, the KMP algorithm, and the ideas behind the improved algorithms. It utilizes the last character of the current window, the next character, and comparison from both ends, and from these advances a new hybrid pattern matching algorithm. It adjusts the comparison direction and the order of comparisons so as to maximize the shift distance at each step and reduce the pattern matching time. The algorithm reduces the number of comparisons, greatly reduces the number of pattern shifts, and improves matching efficiency. The multithreaded implementation of the hybrid pattern matching algorithm performs parallel string searching on different text data by executing a number of threads simultaneously. This approach compares favorably with other string pattern matching algorithms in terms of time complexity and further improves the overall string matching efficiency.
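One of the building blocks the hybrid draws on is the Boyer-Moore bad-character shift. The sketch below is the Horspool simplification of that idea, not the paper's multithreaded hybrid: it compares right to left and shifts by the last occurrence of the text character aligned with the pattern's end.

```python
def horspool_search(text, pattern):
    """Boyer-Moore-Horspool search: returns all match start indices.
    shift[c] is how far the window may jump when c is the text character
    under the pattern's last position."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Last occurrence of each character, excluding the final position.
    shift = {pattern[i]: m - 1 - i for i in range(m - 1)}
    matches, i = [], 0
    while i <= n - m:
        j = m - 1
        while j >= 0 and text[i + j] == pattern[j]:  # compare right to left
            j -= 1
        if j < 0:
            matches.append(i)
            i += 1  # step by one to allow overlapping matches
        else:
            i += shift.get(text[i + m - 1], m)
    return matches
```

A multithreaded variant in the spirit of the abstract would split `text` into overlapping chunks and run one such search per thread.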
A Clustal Alignment Improver Using Evolutionary Algorithms
DEFF Research Database (Denmark)
Thomsen, Rene; Fogel, Gary B.; Krink, Thimo
2002-01-01
Multiple sequence alignment (MSA) is a crucial task in bioinformatics. In this paper we extended previous work with evolutionary algorithms (EA) by using MSA solutions obtained from the well-known Clustal V algorithm as a candidate solution seed of the initial EA population. Our results clearly sh...
Institute of Scientific and Technical Information of China (English)
张斐; 谭军; 谢竞博
2009-01-01
This paper studies the main prediction models for transcription factor binding sites (TFBSs) and their prediction algorithms. Three representative motif discovery algorithms based on regulatory elements, MEME, Gibbs sampling, and Weeder, were compared by predicting motifs in the Arabidopsis thaliana genome. The comparison shows that the Gibbs sampling and Weeder algorithms are more efficient at predicting long and short motifs. The MEME algorithm is analyzed in depth, an optimized motif-finding method that combines different algorithms is proposed, and experiments verify that this method effectively improves prediction efficiency.
Electoral Systems and Candidate Selection
Hazan, Reuven Y.; Voerman, Gerrit
2006-01-01
Electoral systems at the national level and candidate selection methods at the party level are connected: maybe not causally, but they do influence each other. More precisely, the electoral system constrains and conditions the parties' menu of choices concerning candidate selection. Moreover, in ligh
International Nuclear Information System (INIS)
Purpose: To evaluate the adequacy of tumor volume coverage using a three-dimensional (3D) margin-growing algorithm compared to a two-dimensional (2D) margin-growing algorithm in the conformal radiotherapy planning of prostate cancer. Methods and Materials: Two gross tumor volumes (GTV) were segmented in each of ten patients with localized prostate cancer: prostate gland only (PO) and prostate with seminal vesicles (PSV). A margin of 10 mm was applied to these two groups (PO and PSV) using both the 2D and 3D margin-growing algorithms. The true planning target volume (PTV) was defined as the region delineated by the 3D algorithm. Adequacy of geometric coverage of the GTV with the two algorithms was examined throughout the target volume. Discrepancies between the two margin methods were measured in the transaxial plane. Results: The 2D algorithm underestimated the PTV by 17% (range 12-20) in the PO group and by 20% (range 13-28) for the PSV group when compared to the 3D algorithm. For both the PO and PSV groups, the inferior coverage of the PTV was consistently underestimated by the 2D margin algorithm when compared to the 3D margins, with a mean radial distance of 4.8 mm (range 0-10). In the central region of the prostate gland, the anterior, posterior, and lateral PTV borders were underestimated with the 2D margin in both the PO and PSV groups by a mean of 3.6 mm (range 0-9), 2.1 mm (range 0-8), and 1.8 mm (range 0-9), respectively. The PTV coverage of the PO group superiorly was radially underestimated by 4.5 mm (range 0-14) when comparing the 2D margins to the 3D margins. For the PSV group, the junction region between the prostate and the seminal vesicles was underestimated by the 2D margin by a mean transaxial distance of 18.1 mm in the anterior PTV border (range 4-30), 7.2 mm posteriorly (range 0-20), and 3.7 mm laterally (range 0-14). The superior region of the seminal vesicles in the PSV group was also consistently underestimated with a radial discrepancy of 3.3 mm
Stochastic Approximation Algorithms for Number Partitioning
Ruml, Wheeler
1993-01-01
This report summarizes research on algorithms for finding particularly good solutions to instances of the NP-complete number-partitioning problem. Our approach is based on stochastic search algorithms, which iteratively improve randomly chosen initial solutions. Instead of searching the space of all 2^(n-1) possible partitionings, however, we use these algorithms to manipulate indirect encodings of candidate solutions. An encoded solution is evaluated by a decoder, which interprets the encod...
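The encoding/decoder idea above can be illustrated with a common scheme for number partitioning: the candidate is a permutation, and a greedy decoder turns it into an actual partition. This is a generic sketch of the approach, with assumed details (the particular decoder and the single-swap hill climber are illustrative, not necessarily the report's choices).

```python
import random

def decode(perm, nums):
    """Greedy decoder: place numbers in the order given by the permutation,
    always onto the currently lighter side; return the final imbalance."""
    left = right = 0
    for i in perm:
        if left <= right:
            left += nums[i]
        else:
            right += nums[i]
    return abs(left - right)

def stochastic_search(nums, iters=5000, seed=0):
    """Hill climbing over permutation encodings: swap two positions and
    keep the swap when the decoded imbalance does not get worse."""
    rng = random.Random(seed)
    perm = list(range(len(nums)))
    best = decode(perm, nums)
    for _ in range(iters):
        i, j = rng.randrange(len(perm)), rng.randrange(len(perm))
        perm[i], perm[j] = perm[j], perm[i]
        d = decode(perm, nums)
        if d <= best:
            best = d          # accept improving or sideways moves
        else:
            perm[i], perm[j] = perm[j], perm[i]  # undo worsening swap
    return best
```

Searching permutations through a decoder like this avoids the raw 2^(n-1) sign space while keeping every encoded candidate feasible.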
Fast Algorithm for N-2 Contingency Problem
Turitsyn, K. S.; Kaplunovich, P. A.
2013-01-01
We present a novel selection algorithm for the N-2 contingency analysis problem. The algorithm is based on iterative bounding of line outage distribution factors and successive pruning of the set of contingency pair candidates. The selection procedure is non-heuristic and is certified to identify all events that lead to thermal constraint violations in the DC approximation. The complexity of the algorithm is O(N^2), comparable to the complexity of the N-1 contingency problem. We validat...
Genetic algorithms as global random search methods
Peck, Charles C.; Dhawan, Atam P.
1995-01-01
Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
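Proportional selection, which the abstract characterizes as the global search operator, is the classic roulette wheel: each candidate is sampled with probability equal to its share of the total fitness. A minimal stdlib sketch (generic textbook operator, not tied to the paper's formal model):

```python
import random

def proportional_select(population, fitness, rng):
    """Roulette-wheel (proportional) selection: draw one individual with
    probability fitness[i] / sum(fitness)."""
    total = sum(fitness)
    r = rng.random() * total   # a point on the wheel
    acc = 0.0
    for ind, f in zip(population, fitness):
        acc += f
        if r <= acc:
            return ind
    return population[-1]      # guard against floating-point round-off
```

Repeated draws from this operator define the sampling distribution over candidates that the paper analyzes; recombination then reshapes that distribution by mixing the selected candidates' similarities.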
Energy Technology Data Exchange (ETDEWEB)
Abramowicz, H. [Tel Aviv University (Israel). Raymond and Beverly Sackler Faculty of Exact Sciences, School of Physics; Max Planck Inst., Munich (Germany); Abt, I. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Adamczyk, L. [AGH-University of Science and Technology, Cracow (PL). Faculty of Physics and Applied Computer Science] (and others)
2010-03-15
For the first time, differential inclusive-jet cross sections have been measured in neutral current deep inelastic ep scattering using the anti-k{sub T} and SIScone algorithms. The measurements were made for boson virtualities Q{sup 2} > 125 GeV{sup 2} with the ZEUS detector at HERA using an integrated luminosity of 82 pb{sup -1} and the jets were identified in the Breit frame. The performance and suitability of the jet algorithms for their use in hadron-like reactions were investigated by comparing the measurements to those performed with the k{sub T} algorithm. Next-to-leading-order QCD calculations give a good description of the measurements. Measurements of the ratios of cross sections using different jet algorithms are also presented; the measured ratios are well described by calculations including up to O({alpha}{sub s}{sup 3}) terms. Values of {alpha}{sub s}(M{sub Z}) were extracted from the data; the results are compatible with and have similar precision to the value extracted from the k{sub T} analysis. (orig.)
International Nuclear Information System (INIS)
For the first time, differential inclusive-jet cross sections have been measured in neutral current deep inelastic ep scattering using the anti-kT and SIScone algorithms. The measurements were made for boson virtualities Q2>125 GeV2 with the ZEUS detector at HERA using an integrated luminosity of 82 pb-1 and the jets were identified in the Breit frame. The performance and suitability of the jet algorithms for their use in hadron-like reactions were investigated by comparing the measurements to those performed with the kT algorithm. Next-to-leading-order QCD calculations give a good description of the measurements. Measurements of the ratios of cross sections using different jet algorithms are also presented; the measured ratios are well described by calculations including up to O(αs3) terms. Values of αs(MZ) were extracted from the data; the results are compatible with and have similar precision to the value extracted from the kT analysis.
DEFF Research Database (Denmark)
Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk;
2007-01-01
Using the MC data as reference, gamma index analysis was carried out, distinguishing between regions inside the non-water inserts and inside the uniform water. For this study, the distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil...
Othman, Arsalan; Gloaguen, Richard
2015-04-01
Topographic effects and complex vegetation cover hinder lithology classification in mountain regions, based not only on field data but also on reflectance remote sensing data. The area of interest, "Bardi-Zard", is located in NE Iraq. It is part of the Zagros orogenic belt, where seven lithological units outcrop, and is known for its chromite deposit. The aim of this study is to compare three machine learning algorithms (MLAs), Maximum Likelihood (ML), Support Vector Machines (SVM), and Random Forest (RF), in the context of a supervised lithology classification task using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite data, its derived products, spatial information (spatial coordinates), and geomorphic data. We emphasize the enhancement in remote sensing lithological mapping accuracy that arises from the integration of geomorphic features and spatial information in classifications. This study finds that RF performs better than the ML and SVM algorithms on almost all of the sixteen dataset combinations tested. The overall accuracy of the best dataset combination with the RF map for all seven classes reaches ~80%, the producer's and user's accuracies are ~73.91% and ~76.09% respectively, and the kappa coefficient is ~0.76. TPI is more effective with the SVM algorithm than with the RF algorithm. This paper demonstrates that adding geomorphic indices such as TPI and spatial information to the dataset increases the lithological classification accuracy.
International Nuclear Information System (INIS)
For the first time, differential inclusive-jet cross sections have been measured in neutral current deep inelastic ep scattering using the anti-kT and SIScone algorithms. The measurements were made for boson virtualities Q2 > 125 GeV2 with the ZEUS detector at HERA using an integrated luminosity of 82 pb-1 and the jets were identified in the Breit frame. The performance and suitability of the jet algorithms for their use in hadron-like reactions were investigated by comparing the measurements to those performed with the kT algorithm. Next-to-leading-order QCD calculations give a good description of the measurements. Measurements of the ratios of cross sections using different jet algorithms are also presented; the measured ratios are well described by calculations including up to O(αs3) terms. Values of αs(MZ) were extracted from the data; the results are compatible with and have similar precision to the value extracted from the kT analysis. (orig.)
An Improved Ant Colony Routing Algorithm for WSNs
Tan Zhi; Zhang Hui
2015-01-01
The ant colony algorithm is a classical routing algorithm, used in a variety of applications because it is economical and self-organized. However, the routing algorithm expends huge amounts of energy at the beginning. In this paper, based on the idea of the Dijkstra algorithm, an improved ant colony algorithm is proposed to balance the energy consumption of networks. Through simulation and comparison with basic ant colony algorithms, it is obvious that the improved algorithm can effectively...
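For reference, the Dijkstra baseline that the improved ACO borrows from can be written compactly with a binary heap. This is the standard algorithm, shown only as the comparison point the abstract invokes; the graph shape (adjacency lists of `(neighbour, weight)` pairs) is an assumption for the sketch.

```python
import heapq

def dijkstra(graph, src):
    """Shortest-path distances from src; graph maps node -> [(nbr, w), ...].
    Returns a dict of reachable nodes and their minimal path costs."""
    dist = {src: 0.0}
    heap = [(0.0, src)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry, a shorter path was already found
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist
```

An energy-balancing ACO variant could seed its pheromone trails from such shortest-path distances instead of exploring from a uniform start, which is one plausible reading of "based on the idea of the Dijkstra algorithm".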
Lechleiter, Kristen M.; Low, Daniel A.; Chaudhari, Amir; Lu, Wei; Hubenschmidt, James P.; Mayse, Martin L.; Dimmer, Steven C.; Bradley, Jeffrey D.; Parikh, Parag J.
2007-03-01
Three-dimensional volumetric imaging correlated with respiration (4DCT) typically utilizes external breathing surrogates and phase-based models to determine lung tissue motion. However, 4DCT requires time-consuming post-processing, and the relationship between external breathing surrogates and lung tissue motion is not clearly defined. This study compares algorithms using external respiratory motion surrogates as predictors of internal lung motion tracked in real time by electromagnetic transponders (Calypso® Medical Technologies) implanted in a canine model. Simultaneous spirometry, bellows, and transponder position measurements were acquired during free breathing and variable ventilation respiratory patterns. Functions of phase, amplitude, tidal volume, and airflow were examined by least-squares regression analysis to determine which algorithm provided the best estimate of internal motion. The cosine phase model performed the worst of all models analyzed (R2 = 31.6% for free breathing and R2 = 14.9% for variable ventilation). All algorithms performed better during free breathing than during variable ventilation measurements. The 5D model of tidal volume and airflow predicted transponder location better than amplitude or either of the two phase-based models analyzed, with correlation coefficients of 66.1% and 64.4% for free breathing and variable ventilation, respectively. Real-time implanted-transponder measurements provide a direct method for determining lung tissue location. Current phase-based or amplitude-based respiratory motion algorithms cannot predict lung tissue motion in an irregularly breathing subject as accurately as a model including tidal volume and airflow. Further work is necessary to quantify the long-term stability of prediction capabilities using amplitude- and phase-based algorithms for multiple lung tumor positions over time.
150 New transiting planet candidates from Kepler Q1-Q6 data
Huang, Xu; Bakos, Gáspár Á.; Hartman, Joel D.
2012-01-01
We have performed an extensive search for planet candidates in the publicly available Kepler Long Cadence data from quarters Q1 through Q6. The search method consists of initial de-trending of the data, applying the trend filtering algorithm, searching for transit signals with the Box Least Squares fitting method in three frequency domains, visual inspection of the potential transit candidates, and in-depth analysis of the shortlisted candidates. In this paper we present 150 new periodic plan...
Schmitt, Joseph R; Fischer, Debra A; Jek, Kian J; Moriarty, John C; Boyajian, Tabetha S; Schwamb, Megan E; Lintott, Chris; Smith, Arfon M; Parrish, Michael; Schawinski, Kevin; Lynn, Stuart; Simpson, Robert; Omohundro, Mark; Winarski, Troy; Goodman, Samuel J; Jebson, Tony; Lacourse, Daryll
2013-01-01
We report the discovery of 14 new transiting planet candidates in the Kepler field from the Planet Hunters citizen science program. None of these candidates overlap with Kepler Objects of Interest (KOIs), and five of the candidates were missed by the Kepler Transit Planet Search (TPS) algorithm. The new candidates have periods ranging from 124 to 904 days, eight residing in their host star's habitable zone (HZ) and two (now) in multiple planet systems. We report the discovery of one more addition to the six planet candidate system around KOI-351, marking the first seven planet candidate system from Kepler. Additionally, KOI-351 bears some resemblance to our own solar system, with the inner five planets ranging from Earth to mini-Neptune radii and the outer planets being gas giants; however, this system is very compact, with all seven planet candidates orbiting $\lesssim 1$ AU from their host star. We perform a numerical integration of the orbits and show that the system remains stable for over 100 million years....
Lu, Xiaoxu; Sun, Wen; Tang, Yanping; Zhu, Lingqun; Li, Yuan; Ou, Chao; Yang, Chun; Su, Jianjia; LUO, CHENGPIAO; Hu, Yanling; Cao, Ji
2015-01-01
The aim of the present study was to determine key pathways and genes involved in the pathogenesis of hepatocellular carcinoma (HCC) through bioinformatic analyses of HCC microarray data based on cross-species comparison. Microarray data of gene expression in HCC in different species were analyzed using gene set enrichment analysis (GSEA) and meta-analysis. Reverse transcription-quantitative polymerase chain reaction and western blotting were performed to determine the mRNA and protein express...
Institute of Scientific and Technical Information of China (English)
王风华; 孟文杰
2012-01-01
Iris recognition is susceptible to environmental conditions, and multi-algorithm fusion is an effective way to improve the reliability of iris recognition in complex application environments. This paper presents a comparative study of normalization model selection, a key step in multi-algorithm fusion for iris recognition. An iris recognition framework based on multi-algorithm fusion is first built, and three common normalization models are compared on the UBIRIS iris database. Experimental results show that the exponential model using a double sigmoid function achieves the best recognition performance. This work can serve as a theoretical reference for research on multi-algorithm fusion.
Dynamic Programming Algorithms in Speech Recognition
Directory of Open Access Journals (Sweden)
Titus Felix FURTUNA
2008-01-01
In an isolated-word speech recognition system, recognition requires comparing the input signal of a spoken word against the various words of the dictionary. The problem can be solved efficiently by a dynamic comparison algorithm whose goal is to put the temporal scales of the two words into optimal correspondence. One algorithm of this type is Dynamic Time Warping. This paper presents two alternative implementations of the algorithm, designed for recognition of isolated words.
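The temporal alignment the abstract describes can be sketched as a minimal Dynamic Time Warping distance on one-dimensional sequences (real recognizers compare frames of spectral feature vectors; the scalar sequences here are an illustrative assumption):

```python
def dtw_distance(a, b):
    """Dynamic Time Warping: cost of the best monotone alignment of two sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    # D[i][j] = cost of the best alignment of a[:i] with b[:j].
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # stretch a
                                 D[i][j - 1],      # stretch b
                                 D[i - 1][j - 1])  # match step
    return D[n][m]

# A time-stretched copy of a word aligns far better than a different word.
word = [0, 1, 2, 3, 2, 1, 0]
stretched = [0, 0, 1, 1, 2, 2, 3, 3, 2, 2, 1, 1, 0, 0]
other = [3, 2, 1, 0, 1, 2, 3]
```

Recognition then picks the dictionary word with the smallest DTW distance to the input signal.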
On Resampling Algorithms for Particle Filters
Hol, Jeroen; Schön, Thomas; Gustafsson, Fredrik
2007-01-01
In this paper a comparison is made between four frequently encountered resampling algorithms for particle filters. A theoretical framework is introduced to be able to understand and explain the differences between the resampling algorithms. This facilitates a comparison of the algorithms with respect to their resampling quality and computational complexity. Using extensive Monte Carlo simulations the theoretical results are verified. It is found that systematic resampling is favourable, both i...
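Systematic resampling, the variant the comparison favours, can be sketched in a few lines (pure Python, unoptimized; a standard formulation, not the paper's code):

```python
import random

def systematic_resample(weights, rng=None):
    """Systematic resampling for particle filters.

    One uniform offset u0 in [0, 1/n) defines n evenly spaced pointers
    u_i = u0 + i/n; each pointer selects the particle whose cumulative
    (normalized) weight interval contains it. Returns selected indices.
    """
    rng = rng or random.Random(0)
    n = len(weights)
    total = sum(weights)
    u0 = rng.random() / n
    indices, cum, j = [], weights[0] / total, 0
    for i in range(n):
        u = u0 + i / n
        while u > cum:            # advance to the particle covering pointer u
            j += 1
            cum += weights[j] / total
        indices.append(j)
    return indices

# A particle holding 80% of the weight receives most of the copies.
idx = systematic_resample([8, 1, 1])
```

Because the pointers share a single random offset, the copy count of each particle deviates from its expected value by less than one, which is one reason for the low resampling variance.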
Energy Technology Data Exchange (ETDEWEB)
Birchler, W.D.; Schilling, S.A.
2001-02-01
The purpose of this report is to demonstrate that modern computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) systems can be used in the Department of Energy (DOE) Nuclear Weapons Complex (NWC) to design new and remodel old products, fabricate old and new parts, and reproduce legacy data within the inspection uncertainty limits. In this study, two two-dimensional splines are compared with several modern CAD curve-fitting modeling algorithms. The first curve-fitting algorithm is called the Wilson-Fowler Spline (WFS), and the second is called a parametric cubic spline (PCS). Modern CAD systems usually utilize parametric cubic splines and/or B-splines.
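As an illustration of the parametric cubic representation discussed above, here is a single cubic Hermite segment, a standard building block of parametric cubic splines (this is a generic form, not the report's WFS or PCS implementation):

```python
def hermite(p0, p1, m0, m1, t):
    """Evaluate one cubic Hermite segment at parameter t in [0, 1].

    p0, p1 are the endpoint values; m0, m1 the endpoint tangents.
    A parametric cubic spline chains such segments with matched tangents.
    """
    h00 = 2 * t**3 - 3 * t**2 + 1     # Hermite basis functions
    h10 = t**3 - 2 * t**2 + t
    h01 = -2 * t**3 + 3 * t**2
    h11 = t**3 - t**2
    return h00 * p0 + h10 * m0 + h01 * p1 + h11 * m1
```

With endpoints 0 and 1 and unit tangents the segment reduces to the straight line through the data, a quick sanity check on the basis functions.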
Skinner, James E; Meyer, Michael; Nester, Brian A; Geary, Una; Taggart, Pamela; Mangione, Antoinette; Ramalanjaona, George; Terregino, Carol; Dalsey, William C
2009-01-01
Objective: Comparative algorithmic evaluation of heartbeat series in low-to-high risk cardiac patients for the prospective prediction of risk of arrhythmic death (AD). Background: Heartbeat variation reflects cardiac autonomic function and risk of AD. Indices based on linear stochastic models are independent risk factors for AD in post-myocardial infarction (post-MI) cohorts. Indices based on nonlinear deterministic models have superior predictability in retrospective data. Methods: Patients ...
Cuba Gyllensten, Illapha; Alberto G Bonomi; Goode, Kevin M.; Reiter, Harald; Habetha, Joerg; Amft, Oliver; Cleland, John GF
2016-01-01
Background Heart Failure (HF) is a common reason for hospitalization. Admissions might be prevented by early detection of and intervention for decompensation. Conventionally, changes in weight, a possible measure of fluid accumulation, have been used to detect deterioration. Transthoracic impedance may be a more sensitive and accurate measure of fluid accumulation. Objective In this study, we review previously proposed predictive algorithms using body weight and noninvasive transthoracic bio-...
Xie, Jianwen; Douglas, Pamela K.; Wu, Ying Nian; Brody, Arthur L.; Anderson, Ariana E.
2016-01-01
Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet mathematical constraints such as sparse coding and positivity both provide alternate biologically-plausible frameworks for generating brain networks. Non-negative Matrix Factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms ($L1$ Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking,...
Energy Technology Data Exchange (ETDEWEB)
Ahn, Hye Shin; Kim, Sun Mi; Jang, Mi Jung; Yun, Bo La; Kim, Boh Young [Dept. of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); Ko, Eun Sook; Han, Boo Kyung [Dept. of Radiology, Samsung Medical Center, Seoul (Korea, Republic of); Chang, Jung Min; Yi, Ann; Cho, Nariya; Moon, Woo Kyung [Dept. of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul (Korea, Republic of); Choi, Hye Young [Dept. of Radiology, Gyeongsang National University Hospital, Jinju (Korea, Republic of)
2014-06-15
To compare new full-field digital mammography (FFDM) with and without use of an advanced post-processing algorithm to improve image quality, lesion detection, diagnostic performance, and priority rank. During a 22-month period, we prospectively enrolled 100 cases of specimen FFDM mammography (Brestige), which was performed alone or in combination with a post-processing algorithm developed by the manufacturer: group A (SMA), specimen mammography without application of "Mammogram enhancement ver. 2.0"; group B (SMB), specimen mammography with application of "Mammogram enhancement ver. 2.0". Two sets of specimen mammographies were randomly reviewed by five experienced radiologists. Image quality, lesion detection, diagnostic performance, and priority rank with regard to image preference were evaluated. Three aspects of image quality (overall quality, contrast, and noise) of the SMB were significantly superior to those of SMA (p < 0.05). SMB was significantly superior to SMA for visualizing calcifications (p < 0.05). Diagnostic performance, as evaluated by cancer score, was similar between SMA and SMB. SMB was preferred to SMA by four of the five reviewers. The post-processing algorithm may improve image quality with better image preference in FFDM than without use of the software.
International Nuclear Information System (INIS)
To compare new full-field digital mammography (FFDM) with and without use of an advanced post-processing algorithm to improve image quality, lesion detection, diagnostic performance, and priority rank. During a 22-month period, we prospectively enrolled 100 cases of specimen FFDM mammography (Brestige), which was performed alone or in combination with a post-processing algorithm developed by the manufacturer: group A (SMA), specimen mammography without application of Mammogram enhancement ver. 2.0; group B (SMB), specimen mammography with application of Mammogram enhancement ver. 2.0. Two sets of specimen mammographies were randomly reviewed by five experienced radiologists. Image quality, lesion detection, diagnostic performance, and priority rank with regard to image preference were evaluated. Three aspects of image quality (overall quality, contrast, and noise) of the SMB were significantly superior to those of SMA (p < 0.05). SMB was significantly superior to SMA for visualizing calcifications (p < 0.05). Diagnostic performance, as evaluated by cancer score, was similar between SMA and SMB. SMB was preferred to SMA by four of the five reviewers. The post-processing algorithm may improve image quality with better image preference in FFDM than without use of the software.
A novel artificial bee colony algorithm based on modified search equation and orthogonal learning.
Gao, Wei-feng; Liu, San-yang; Huang, Ling-ling
2013-06-01
The artificial bee colony (ABC) algorithm is a relatively new optimization technique which has been shown to be competitive with other population-based algorithms. However, ABC has an insufficiency regarding its solution search equation, which is good at exploration but poor at exploitation. To address this issue, we first propose an improved ABC method called CABC, where a modified search equation is applied to generate a candidate solution to improve the search ability of ABC. Furthermore, we use the orthogonal experimental design (OED) to form an orthogonal learning (OL) strategy for variant ABCs to discover more useful information from the search experiences. Owing to OED's ability to sample a small number of well-representative combinations for testing, the OL strategy can construct a more promising and efficient candidate solution. In this paper, the OL strategy is applied to three versions of ABC, i.e., the standard ABC, global-best-guided ABC (GABC), and CABC, which yields OABC, OGABC, and OCABC, respectively. The experimental results on a set of 22 benchmark functions demonstrate the effectiveness and efficiency of the modified search equation and the OL strategy. The comparisons with some other ABCs and several state-of-the-art algorithms show that the proposed algorithms significantly improve the performance of ABC. Moreover, OCABC offers the highest solution quality, fastest global convergence, and strongest robustness among all the contenders on almost all the test functions. PMID:23086528
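For reference, the baseline ABC solution search equation that CABC modifies can be sketched as follows (this is the standard form v_j = x_j + phi*(x_j - x_k_j), not the paper's modified equation):

```python
import random

def abc_candidate(x, population, rng=None):
    """Standard ABC search equation: perturb one randomly chosen dimension
    of solution x against a randomly chosen neighbour from the population.

    This is the exploration-heavy baseline; greedy selection afterwards
    keeps the candidate only if it improves the objective.
    """
    rng = rng or random.Random()
    j = rng.randrange(len(x))              # dimension to perturb
    k = rng.randrange(len(population))     # random neighbour (not x itself)
    while population[k] is x:
        k = rng.randrange(len(population))
    v = list(x)
    phi = rng.uniform(-1.0, 1.0)
    v[j] = x[j] + phi * (x[j] - population[k][j])
    return v

v = abc_candidate([0.0, 0.0], [[0.0, 0.0], [1.0, 1.0], [2.0, 2.0]],
                  rng=random.Random(1))
```

Modifications such as CABC's change how the perturbation term is built (e.g. which solutions it references), which is what shifts the balance from exploration toward exploitation.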
A Modern Non Candidate Approach for sequential pattern mining with Dynamic Minimum Support
Directory of Open Access Journals (Sweden)
Kumudbala Saxena
2011-12-01
Finding frequent patterns plays a significant role in data mining for discovering relational patterns. Data mining, also called knowledge discovery, applies to several kinds of databases, including mobile databases and heterogeneous environments. In this paper we propose a modern non-candidate approach for sequential pattern mining with dynamic minimum support. The approach is divided into six steps: (1) accept the dataset from the heterogeneous input set; (2) generate tokens based on the characters, producing only posterior tokens; (3) let the user enter the minimum support according to need and place; (4) find the frequent patterns that satisfy the dynamic minimum support; (5) find associated members according to the token values; (6) find useful patterns after applying pruning. Because the approach is not based on candidate generation, it takes less time and memory than previous algorithms. Its other main feature is the dynamic minimum support, which gives the flexibility to find frequent patterns based on location and user requirements.
Performance Analysis of Cone Detection Algorithms
Mariotti, Letizia
2015-01-01
Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of two popular cone detection algorithms and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the three algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimat...
A REVIEW ON ASSOCIATION RULE MINING ALGORITHMS
Jyoti Arora, Nidhi Bhalla, Sanjeev Rao
2013-01-01
In this paper, we review four association rule mining algorithms (Apriori, AprioriTid, Apriori Hybrid, and Tertius) and their drawbacks, which should help in finding new solutions to the problems found in these algorithms; we also present a comparison between the different association mining algorithms. Association rule mining is one of the most important techniques of data mining. Its aim is to extract interesting correlations, frequent patterns and associations among s...
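The core step shared by the Apriori-family algorithms reviewed above is candidate generation: frequent k-itemsets are joined on a common (k-1)-prefix and pruned when any subset is infrequent. A minimal sketch:

```python
from itertools import combinations

def apriori_gen(frequent_k):
    """Apriori candidate generation.

    Join frequent k-itemsets (sorted tuples) sharing a (k-1)-prefix,
    then prune any candidate that has an infrequent k-subset.
    """
    frequent = set(frequent_k)
    k = len(frequent_k[0])
    candidates = set()
    for a in frequent_k:
        for b in frequent_k:
            if a[:-1] == b[:-1] and a[-1] < b[-1]:
                c = a + (b[-1],)
                # Prune: every k-subset of the candidate must be frequent.
                if all(s in frequent for s in combinations(c, k)):
                    candidates.add(c)
    return sorted(candidates)

# {2,3,4} is pruned because its subset {3,4} is not frequent.
cands = apriori_gen([(1, 2), (1, 3), (2, 3), (2, 4)])
```

AprioriTid and Apriori Hybrid change how candidate support is counted against the transactions, but reuse this same generation step.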
Hromkovic, Juraj
2009-01-01
Explores the science of computing. This book starts with the development of computer science, algorithms and programming, and then explains and shows how to exploit the concepts of infinity, computability, computational complexity, nondeterminism and randomness.
International Nuclear Information System (INIS)
The purpose of this study was to investigate the potential dose reduction to the heart, left anterior descending (LAD) coronary artery and the ipsilateral lung for patients treated with tangential and locoregional radiotherapy for left-sided breast cancer with enhanced inspiration gating (EIG) compared to free breathing (FB) using the AAA algorithm. The radiobiological implication of such dose sparing was also investigated. Thirty-two patients, who received tangential or locoregional adjuvant radiotherapy with EIG for left-sided breast cancer, were retrospectively enrolled in this study. Each patient was CT-scanned during FB and EIG. Similar treatment plans, with comparable target coverage, were created in the two CT-sets using the AAA algorithm. Further, the probability of radiation induced cardiac mortality and pneumonitis were calculated using NTCP models. For tangential treatment, the median V25Gy for the heart and LAD was decreased for EIG from 2.2% to 0.2% and 40.2% to 0.1% (p < 0.001), respectively, whereas there was no significant difference in V20Gy for the ipsilateral lung (p = 0.109). For locoregional treatment, the median V25Gy for the heart and LAD was decreased for EIG from 3.3% to 0.2% and 51.4% to 5.1% (p < 0.001), respectively, and the median ipsilateral lung V20Gy decreased from 27.0% for FB to 21.5% (p = 0.020) for EIG. The median excess cardiac mortality probability decreased from 0.49% for FB to 0.02% for EIG (p < 0.001) for tangential treatment and from 0.75% to 0.02% (p < 0.001) for locoregional treatment. There was no significant difference in risk of radiation pneumonitis for tangential treatment (p = 0.179) whereas it decreased for locoregional treatment from 6.82% for FB to 3.17% for EIG (p = 0.004). In this study the AAA algorithm was used for dose calculation to the heart, LAD and left lung when comparing the EIG and FB techniques for tangential and locoregional radiotherapy of breast cancer patients. The results support the dose and
Directory of Open Access Journals (Sweden)
James E Skinner
2009-08-01
James E Skinner (1), Michael Meyer (2), Brian A Nester (3), Una Geary (4), Pamela Taggart (4), Antoinette Mangione (4), George Ramalanjaona (5), Carol Terregino (6), William C Dalsey (4). (1) Vicor Technologies, Inc., Boca Raton, FL, USA; (2) Max Planck Institute for Experimental Physiology, Goettingen, Germany; (3) Lehigh Valley Hospital, Allentown, PA, USA; (4) Albert Einstein Medical Center, Philadelphia, PA, USA; (5) North Shore University Hospital, Plainview, NY, USA; (6) Cooper Medical Center, Camden, NJ, USA. Objective: Comparative algorithmic evaluation of heartbeat series in low-to-high risk cardiac patients for the prospective prediction of risk of arrhythmic death (AD). Background: Heartbeat variation reflects cardiac autonomic function and risk of AD. Indices based on linear stochastic models are independent risk factors for AD in post-myocardial infarction (post-MI) cohorts. Indices based on nonlinear deterministic models have superior predictability in retrospective data. Methods: Patients were enrolled (N = 397) in three emergency departments upon presenting with chest pain and were determined to be at low-to-high risk of acute MI (>7%). Brief ECGs were recorded (15 min) and R-R intervals assessed by three nonlinear algorithms (PD2i, DFA, and ApEn) and four conventional linear-stochastic measures (SDNN, MNN, 1/f-Slope, LF/HF). Out-of-hospital AD was determined by modified Hinkle–Thaler criteria. Results: All-cause mortality at one-year follow-up was 10.3%, with 7.7% adjudicated to be AD. The sensitivity and relative risk for predicting AD were highest at all time-points for the nonlinear PD2i algorithm (p ≤ 0.001). The sensitivity at 30 days was 100%, specificity 58%, and relative risk >100 (p ≤ 0.001); sensitivity at 360 days was 95%, specificity 58%, and relative risk >11.4 (p ≤ 0.001). Conclusions: Heartbeat analysis by the time-dependent nonlinear PD2i algorithm is comparatively the superior test. Keywords: autonomic nervous system, regulatory systems, electrophysiology, heart rate
Comparing Online Algorithms for Bin Packing Problems
DEFF Research Database (Denmark)
Epstein, Leah; Favrholdt, Lene Monrad; Kohrt, Jens Svalgaard
2012-01-01
The relative worst-order ratio is a measure of the quality of online algorithms. In contrast to the competitive ratio, this measure compares two online algorithms directly instead of using an intermediate comparison with an optimal offline algorithm. In this paper, we apply the relative worst-order ratio to online algorithms for several common variants of the bin packing problem. We mainly consider pairs of algorithms that are not distinguished by the competitive ratio and show that the relative worst-order ratio prefers the intuitively better algorithm of each pair.
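For context, two classic online bin packing algorithms of the kind such measures compare can be sketched as follows (illustrative implementations; the paper's analysis is theoretical, not experimental):

```python
def first_fit(items, capacity=1.0):
    """Online First-Fit: place each arriving item in the first bin with room."""
    bins = []
    for x in items:
        for b in range(len(bins)):
            if bins[b] + x <= capacity + 1e-12:
                bins[b] += x
                break
        else:
            bins.append(x)          # no bin fits: open a new one
    return len(bins)

def next_fit(items, capacity=1.0):
    """Online Next-Fit: keep only the most recent bin open."""
    bins, cur = 0, capacity + 1.0   # sentinel forces a bin on the first item
    for x in items:
        if cur + x > capacity + 1e-12:
            bins += 1               # close current bin, open a new one
            cur = x
        else:
            cur += x
    return bins

seq = [0.5, 0.7, 0.5, 0.3]
```

On this sequence First-Fit packs two bins while Next-Fit needs three, the kind of instance-level difference a direct pairwise measure can capture even when worst-case ratios coincide.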
Directory of Open Access Journals (Sweden)
V.B.Kirubanand
2010-03-01
The main theme of this paper is to evaluate the performance of hub, switch, and Bluetooth technology using a queueing Petri net (QPN) model and the Markov algorithm, with steganography for security. The paper mainly focuses on comparison of hub, switch, and Bluetooth technologies in terms of service rate and arrival rate using the Markov algorithm (M/M(1,b)/1). When comparing the service rates of the hub network, the switch network, and Bluetooth, it was found that the service rate of Bluetooth is very efficient for implementation. The values obtained for Bluetooth can be used to calculate the performance of other wireless technologies. QPNs facilitate the integration of both hardware and software aspects of system behavior in the improved model. The purpose of steganography is to send hidden information from one system to another through Bluetooth with security measures. Queueing Petri nets are very powerful as a performance analysis and prediction tool. By demonstrating the power of QPNs as a modeling paradigm for forthcoming technologies, we hope to motivate further research in this area.
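The paper's M/M(1,b)/1 is a bulk-service queue; as a simpler illustration of how arrival and service rates determine performance, here are the closed-form metrics of the classic single-server M/M/1 queue (a deliberate simplification, not the paper's model):

```python
def mm1_metrics(lam, mu):
    """Closed-form M/M/1 queue metrics for arrival rate lam and service
    rate mu. Requires lam < mu for stability.

    L = rho/(1-rho) is the mean number in system, and Little's law
    L = lam * W ties it to the mean time in system W = 1/(mu - lam).
    """
    if lam >= mu:
        raise ValueError("unstable queue: need lam < mu")
    rho = lam / mu                     # server utilization
    return {
        "utilization": rho,
        "mean_in_system": rho / (1 - rho),
        "mean_time_in_system": 1 / (mu - lam),
    }

m = mm1_metrics(2.0, 4.0)
```

A "more efficient service rate" in the abstract's sense shows up here directly: raising mu lowers both the queue length and the time in system.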
Giacometti, Achille; Gögelein, Christoph; Lado, Fred; Sciortino, Francesco; Ferrari, Silvano; Pastore, Giorgio
2014-03-01
Building upon past work on the phase diagram of Janus fluids [F. Sciortino, A. Giacometti, and G. Pastore, Phys. Rev. Lett. 103, 237801 (2009)], we perform a detailed study of integral equation theory of the Kern-Frenkel potential with coverage that is tuned from the isotropic square-well fluid to the Janus limit. An improved algorithm for the reference hypernetted-chain (RHNC) equation for this problem is implemented that significantly extends the range of applicability of RHNC. Results for both structure and thermodynamics are presented and compared with numerical simulations. Unlike previous attempts, this algorithm is shown to be stable down to the Janus limit, thus paving the way for analyzing the frustration mechanism characteristic of the gas-liquid transition in the Janus system. The results are also compared with Barker-Henderson thermodynamic perturbation theory on the same model. We then discuss the pros and cons of both approaches within a unified treatment. On balance, RHNC integral equation theory, even with an isotropic hard-sphere reference system, is found to be a good compromise between accuracy of the results, computational effort, and uniform quality to tackle self-assembly processes in patchy colloids of complex nature. Further improvement in RHNC however clearly requires an anisotropic reference bridge function. PMID:24606350
International Nuclear Information System (INIS)
Building upon past work on the phase diagram of Janus fluids [F. Sciortino, A. Giacometti, and G. Pastore, Phys. Rev. Lett. 103, 237801 (2009)], we perform a detailed study of integral equation theory of the Kern-Frenkel potential with coverage that is tuned from the isotropic square-well fluid to the Janus limit. An improved algorithm for the reference hypernetted-chain (RHNC) equation for this problem is implemented that significantly extends the range of applicability of RHNC. Results for both structure and thermodynamics are presented and compared with numerical simulations. Unlike previous attempts, this algorithm is shown to be stable down to the Janus limit, thus paving the way for analyzing the frustration mechanism characteristic of the gas-liquid transition in the Janus system. The results are also compared with Barker-Henderson thermodynamic perturbation theory on the same model. We then discuss the pros and cons of both approaches within a unified treatment. On balance, RHNC integral equation theory, even with an isotropic hard-sphere reference system, is found to be a good compromise between accuracy of the results, computational effort, and uniform quality to tackle self-assembly processes in patchy colloids of complex nature. Further improvement in RHNC however clearly requires an anisotropic reference bridge function
ISINA: INTEGRAL Source Identification Network Algorithm
Scaringi, S; Clark, D J; Dean, A J; Hill, A B; McBride, V A; Shaw, S E
2008-01-01
We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using Random Forests, is applied to the IBIS/ISGRI dataset in order to ease the production of unbiased future soft gamma-ray source catalogues. First we introduce the dataset and the problems encountered when dealing with images obtained using the coded mask technique. The initial step of source candidate searching is introduced and an initial candidate list is created. A description of the feature extraction on the initial candidate list is then performed together with feature merging for these candidates. Three training and testing sets are created in order to deal with the diverse timescales encountered when dealing with the gamma-ray sky. Three independent Random Forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described...
Chemyakin, E.; Sawamura, P.; Mueller, D.; Burton, S. P.; Ferrare, R. A.; Hostetler, C. A.; Scarino, A. J.; Hair, J. W.; Berkoff, T.; Cook, A. L.; Harper, D. B.; Seaman, S. T.
2015-12-01
Although aerosols are only a fairly minor constituent of Earth's atmosphere they are able to affect its radiative energy balance significantly. Light detection and ranging (lidar) instruments have the potential to play a crucial role in atmospheric research as only these instruments provide information about aerosol properties at a high vertical resolution. We are exploring different algorithmic approaches to retrieve microphysical properties of aerosols using lidar. Almost two decades ago we started with inversion techniques based on Tikhonov's regularization that became a reference point for the improvement of retrieval capabilities of inversion algorithms. Recently we began examining the potential of the "arrange and average" scheme, which relies on a look-up table of optical and microphysical aerosol properties. The future combination of these two different inversion schemes may help us to improve the accuracy of the microphysical data products.The novel arrange and average algorithm was applied to retrieve aerosol optical and microphysical parameters using NASA Langley Research Center (LaRC) High Spectral Resolution Lidar (HSRL-2) data. HSRL-2 is the first airborne HSRL system that is able to provide advanced datasets consisting of backscatter coefficients at 355, 532, and 1064 nm, and extinction coefficients at 355 and 532 nm as input information for aerosol microphysical retrievals. HSRL-2 was deployed on-board NASA LaRC's King Air aircraft during the Deriving Information on Surface Conditions from Column and VERtically Resolved Observations Relevant to Air Quality (DISCOVER-AQ) field campaigns over the California Central Valley and Houston. Vertical profiles of aerosol optical properties and size distributions were obtained from in-situ instruments on-board the NASA's P-3B aircraft. As HSRL-2 flew along the same flight track of the P-3B, synergistic measurements and retrievals were obtained by these two independent platforms. We will present an
International Nuclear Information System (INIS)
Purpose: To quantitatively compare the accuracy of tumor volume segmentation in amplitude-based and phase-based respiratory gating algorithms in respiratory-correlated positron emission tomography (PET). Methods and Materials: List-mode fluorodeoxyglucose-PET data was acquired for 10 patients with a total of 12 fluorodeoxyglucose-avid tumors and 9 lymph nodes. Additionally, a phantom experiment was performed in which 4 plastic butyrate spheres with inner diameters ranging from 1 to 4 cm were imaged as they underwent 1-dimensional motion based on 2 measured patient breathing trajectories. PET list-mode data were gated into 8 bins using 2 amplitude-based (equal amplitude bins [A1] and equal counts per bin [A2]) and 2 temporal phase-based gating algorithms. Gated images were segmented using a commercially available gradient-based technique and a fixed 40% threshold of maximum uptake. Internal target volumes (ITVs) were generated by taking the union of all 8 contours per gated image. Segmented phantom ITVs were compared with their respective ground-truth ITVs, defined as the volume subtended by the tumor model positions covering 99% of breathing amplitude. Superior-inferior distances between sphere centroids in the end-inhale and end-exhale phases were also calculated. Results: Tumor ITVs from amplitude-based methods were significantly larger than those from temporal-based techniques (P=.002). For lymph nodes, A2 resulted in ITVs that were significantly larger than either of the temporal-based techniques (P<.0323). A1 produced the largest and most accurate ITVs for spheres with diameters of ≥2 cm (P=.002). No significant difference was shown between algorithms in the 1-cm sphere data set. For phantom spheres, amplitude-based methods recovered an average of 9.5% more motion displacement than temporal-based methods under regular breathing conditions and an average of 45.7% more in the presence of baseline drift (P<.001). Conclusions: Target volumes in images generated
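The difference between amplitude-based and phase-based gating above amounts to binning the same breathing trace two ways. A minimal sketch on a synthetic trace (8 bins as in the study; the trace and rates are illustrative assumptions):

```python
import math

def amplitude_bins(trace, n_bins=8):
    """Equal-amplitude gating: bin each sample by where its amplitude
    falls between the trace minimum and maximum."""
    lo, hi = min(trace), max(trace)
    width = (hi - lo) / n_bins
    return [min(int((x - lo) / width), n_bins - 1) for x in trace]

def phase_bins(times, period, n_bins=8):
    """Temporal phase gating: bin each sample by its phase in the cycle."""
    return [int((t % period) / period * n_bins) for t in times]

# Regular breathing at 20 Hz over 2.5 cycles of a 4 s period.
period = 4.0
times = [i * 0.05 for i in range(200)]
trace = [1 - math.cos(2 * math.pi * t / period) for t in times]
a_bins = amplitude_bins(trace)
p_bins = phase_bins(times, period)
```

For a perfectly regular trace the two schemes partition the samples similarly; under baseline drift, phase bins keep their timing while amplitude bins shift with the drifting extrema, which is the behaviour the study quantifies.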
A Novel Algorithm for Finding Interspersed Repeat Regions
Institute of Scientific and Technical Information of China (English)
Dongdong Li; Zhengzhi Wang; Qingshan Ni
2004-01-01
The analysis of repeats in DNA sequences is an important subject in bioinformatics. In this paper, we propose a novel projection-assemble algorithm to find unknown interspersed repeats in DNA sequences. The algorithm employs a random projection algorithm to obtain a candidate fragment set, and an exhaustive search algorithm to examine each pair of fragments from the candidate fragment set for potential linkage, and then assembles them together. The complexity of our projection-assemble algorithm is nearly linear in the length of the genome sequence, and its memory usage is limited by the hardware. We tested our algorithm with both simulated data and real biological data, and the results show that our projection-assemble algorithm is efficient. By means of this algorithm, we found an unlabeled repeat region that occurs five times in the Escherichia coli genome, with a length of more than 5,000 bp and a mismatch probability of less than 4%.
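The projection step can be sketched as follows: each k-mer is projected onto a random subset of its positions, and k-mers sharing a projected key become candidate repeat fragments (a simplified illustration of the idea, not the authors' implementation; the pairwise linkage and assembly phases are omitted):

```python
import random
from collections import defaultdict

def candidate_fragments(seq, k=12, proj=8, rng=None):
    """Random-projection step: bucket every k-mer of seq by the characters
    at `proj` randomly chosen positions; buckets with more than one member
    are candidate repeat fragments (mismatches outside the projected
    positions are tolerated, which is the point of projecting)."""
    rng = rng or random.Random(0)
    positions = sorted(rng.sample(range(k), proj))
    buckets = defaultdict(list)
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        key = "".join(kmer[p] for p in positions)
        buckets[key].append(i)
    return {key: pos for key, pos in buckets.items() if len(pos) > 1}

repeat = "ACGTACGGTTCA"
genome = repeat + "GGGGG" + repeat + "TTTTT"   # the repeat sits at 0 and 17
cands = candidate_fragments(genome, k=12, proj=8)
```

Because hashing every k-mer is a single pass over the sequence, this stage is linear in the genome length, consistent with the complexity claim above.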
Halopentacenes: Promising Candidates for Organic Semiconductors
Institute of Scientific and Technical Information of China (English)
DU Gong-He; REN Zhao-Yu; GUO Ping; ZHENG Ji-Ming
2009-01-01
We introduce polar substituents such as F, Cl and Br into pentacene to enhance its solubility in common organic solvents while retaining the high charge-carrier mobilities of pentacene. Geometric structures, dipole moments, frontier molecular orbitals, ionization potentials and electron affinities, as well as reorganization energies of these molecules, and of pentacene for comparison, are calculated by density functional theory. The results indicate that halopentacenes have rather small reorganization energies (< 0.2 eV), and that when the substituents are in position 2, or positions 2 and 9, they are polar molecules. We therefore conjecture that they dissolve easily in common organic solvents and are promising candidates for organic semiconductors.
Directory of Open Access Journals (Sweden)
Hamed Piarehzadeh
2012-08-01
Full Text Available This study attempts optimal distributed generation (DG) allocation for voltage stability improvement in radial distribution systems. Voltage instability implies an uncontrolled decrease in voltage triggered by a disturbance, leading to voltage collapse, and is primarily caused by dynamics connected with the load. Based on the time frame over which the phenomena occur, instability is divided into steady-state and transient voltage instability. The analysis is accomplished using a steady-state voltage stability index which can be evaluated at each node of the distribution system. Several optimal capacities and locations are used to check these results. The location of the DG has the main effect on system voltage stability. The effects of location and capacity on increasing steady-state voltage stability in radial distribution systems are examined through the Harmony Search Algorithm (HSA), and finally the results are compared to Particle Swarm Optimization (PSO) in terms of speed, convergence and accuracy.
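The abstract does not name the specific node-wise stability index used; one common formulation from the radial-feeder literature, shown below as a hedged sketch, evaluates a branch feeding node 2 from node 1 in per-unit quantities. SI approaching zero indicates proximity to voltage collapse; larger SI means a more stable node, so DG placement can be scored by the minimum SI over all nodes.

```python
def stability_index(V1, P2, Q2, R, X):
    """Steady-state voltage stability index for a radial branch:
    sending-end voltage V1 (p.u.), receiving-end load P2 + jQ2 (p.u.),
    branch impedance R + jX (p.u.). This is one widely cited index from
    the literature, not necessarily the exact one used in the paper."""
    return V1**4 - 4.0 * (P2 * X - Q2 * R)**2 - 4.0 * (P2 * R + Q2 * X) * V1**2

# Per-unit example: a lightly loaded node is more stable than a heavy one
si_light = stability_index(V1=1.0, P2=0.1, Q2=0.05, R=0.02, X=0.04)
si_heavy = stability_index(V1=0.95, P2=0.8, Q2=0.4, R=0.02, X=0.04)
```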
El-habashi, A.; Ahmed, S.
2015-10-01
New approaches are described that use the Ocean Color remote sensing reflectance (OC Rrs) readings available from the existing Visible Infrared Imaging Radiometer Suite (VIIRS) bands to detect and retrieve Karenia brevis (KB) Harmful Algal Blooms (HABs) that frequently plague the coasts of the West Florida Shelf (WFS). Unfortunately, VIIRS, unlike MODIS, does not have a 678 nm channel to detect chlorophyll fluorescence, which is used with MODIS in the normalized fluorescence line height (nFLH) algorithm, which has been shown to help in effectively detecting and tracking KB HABs. We present here the use of neural network (NN) algorithms for KB HAB retrievals in the WFS. These NNs, previously reported by us, were trained using a wide range of suitably parametrized synthetic data typical of coastal waters to form a multiband inversion algorithm which models the relationship between Rrs values at the 486, 551 and 671 nm VIIRS bands and the values of phytoplankton absorption (aph), CDOM absorption (ag), non-algal particle (NAP) absorption (aNAP) and the particulate backscattering coefficient (bbp), all at 443 nm, and permits retrievals of these parameters. We use the NN to retrieve aph443 in the WFS. The retrieved aph443 values are then filtered by applying known limiting conditions on the minimum chlorophyll concentration [Chla] and the low backscatter properties associated with KB HABs in the WFS, thereby identifying, delineating and quantifying the aph443 values, and hence [Chla] concentrations, representing KB HABs. Comparisons with in-situ measurements and other techniques, including MODIS nFLH, confirm the viability of both the NN retrievals and the filtering approaches devised.
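The filtering step applied to the NN retrievals amounts to a per-pixel mask combining high phytoplankton absorption and chlorophyll with the low particulate backscatter characteristic of KB blooms. A minimal sketch follows; the threshold values here are illustrative placeholders, not the limiting conditions used in the paper.

```python
import numpy as np

def flag_kb_habs(aph443, bbp443, chla,
                 aph_min=0.01, bbp_max=0.0045, chla_min=1.5):
    """Flag pixels as candidate K. brevis HABs: sufficiently high
    phytoplankton absorption and chlorophyll combined with low
    particulate backscatter. All thresholds are hypothetical."""
    aph443, bbp443, chla = map(np.asarray, (aph443, bbp443, chla))
    return (aph443 >= aph_min) & (chla >= chla_min) & (bbp443 <= bbp_max)

# Three example pixels: clear water, bloom-like, sediment-dominated
mask = flag_kb_habs(aph443=np.array([0.005, 0.03, 0.04]),
                    bbp443=np.array([0.002, 0.003, 0.02]),
                    chla=np.array([0.5, 2.0, 3.0]))
```

Only the middle pixel satisfies all three bloom conditions: the first fails the absorption floor and the third is rejected by its high backscatter.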
Directory of Open Access Journals (Sweden)
Hengameh Khosropanah
2013-01-01
Full Text Available Introduction: Nowadays the necessity of a certain width of keratinized gingiva is emphasized to maintain periodontal health and prevent soft tissue recession around teeth and dental implants. This study was carried out to compare two gingival graft procedures, connective tissue graft and graft with a combination of collagen sponge with platelet-rich plasma (PRP) and platelet-rich fibrin (PRF), to increase the width and thickness of keratinized gingiva. Materials and Methods: In this clinical trial 8 patients with bilateral inadequate width (≤ 2 mm) and thickness (≤ 1 mm) of keratinized gingiva on the buccal aspect of single-rooted teeth were selected. On the control side connective tissue graft, and on the test side graft by a combination of stypro + PRP + PRF, were performed. After surgery, the measurements were repeated at 1, 2 and 3 months. Data were analyzed by Wilcoxon's test to compare the 2 groups, and Friedman's test was used for comparisons within each group, with SPSS 15 (α = 0.05). Results: After 3 months there were no differences between test and control groups in the width of keratinized gingiva (p = 0.317), attached gingiva (p = 0.527), graft thickness (p = 0.05) or keratinized layer thickness (p = 1). Conclusion: It appears that the new approach for gingival augmentation used in this study may be a proper substitute for autogenous gingival grafts. Key words: Collagen, Connective tissue, Gingiva, Plasma
Candidate gene prioritization with Endeavour.
Tranchevent, Léon-Charles; Ardeshirdavani, Amin; ElShal, Sarah; Alcaide, Daniel; Aerts, Jan; Auboeuf, Didier; Moreau, Yves
2016-07-01
Genomic studies and high-throughput experiments often produce large lists of candidate genes, among which only a small fraction are truly relevant to the disease, phenotype or biological process of interest. Gene prioritization tackles this problem by profiling candidates across multiple genomic data sources and integrating this heterogeneous information into a global ranking. We describe an extended version of our gene prioritization method, Endeavour, now available for six species and integrating 75 data sources. The performance (Area Under the Curve) of Endeavour on cross-validation benchmarks using 'gold standard' gene sets varies from 88% (for human phenotypes) to 95% (for worm gene function). In addition, we have validated our approach using a time-stamped benchmark derived from the Human Phenotype Ontology, which provides a setting close to prospective validation. With this benchmark, using 3854 novel gene-phenotype associations, we observe a performance of 82%. Altogether, our results indicate that this extended version of Endeavour efficiently prioritizes candidate genes. The Endeavour web server is freely available at https://endeavour.esat.kuleuven.be/. PMID:27131783
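The fusion of per-source rankings into a global ranking can be sketched with a deliberately simplified scheme: average each candidate's rank ratio across sources and sort. This is a stand-in for intuition only; Endeavour itself fuses rankings with order statistics (a Q-statistic), not a plain mean.

```python
import numpy as np

def fuse_rankings(rank_matrix):
    """Combine per-data-source rankings of candidate genes into one
    global ranking. Rows are candidates, columns are data sources,
    entries are 1-based ranks. Returns candidate indices, best first."""
    r = np.asarray(rank_matrix, dtype=float)
    n = r.shape[0]
    ratios = r / n                  # rank ratio in (0, 1]
    score = ratios.mean(axis=1)     # lower score = better overall
    return np.argsort(score)

# 4 candidate genes ranked independently by 3 data sources
ranks = [[1, 2, 1],
         [4, 4, 3],
         [2, 1, 2],
         [3, 3, 4]]
order = fuse_rankings(ranks)
```

Candidates that rank consistently well across sources (here, candidates 0 and 2) rise to the top even if no single source ranks them first everywhere.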
Empathy Development in Teacher Candidates
Boyer, Wanda
2010-01-01
Using a grounded theory research design, the author examined 180 reflective essays of teacher candidates who participated in a "Learning Process Project," in which they were asked to synthesize and document their discoveries about the learning process over the course of a completely new learning experience as naive learners. This study explored…
Candidate Prediction Models and Methods
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik;
2005-01-01
This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the...
Bovchaliuk, Valentyn; Goloub, Philippe; Podvin, Thierry; Veselovskii, Igor; Tanre, Didier; Chaikovsky, Anatoli; Dubovik, Oleg; Mortier, Augustin; Lopatin, Anton; Korenskiy, Mikhail; Victori, Stephane
2016-07-01
Aerosol particles are important and highly variable components of the terrestrial atmosphere, and they affect both air quality and climate. In order to evaluate their multiple impacts, the most important requirement is to precisely measure their characteristics. Remote sensing technologies such as lidar (light detection and ranging) and sun/sky photometers are powerful tools for determining aerosol optical and microphysical properties. In our work, we applied several methods to joint or separate lidar and sun/sky-photometer data to retrieve aerosol properties. The Raman technique and inversion with regularization use only lidar data. The LIRIC (LIdar-Radiometer Inversion Code) and recently developed GARRLiC (Generalized Aerosol Retrieval from Radiometer and Lidar Combined data) inversion methods use joint lidar and sun/sky-photometer data. This paper presents a comparison and discussion of aerosol optical properties (extinction coefficient profiles and lidar ratios) and microphysical properties (volume concentrations, complex refractive index values, and effective radius values) retrieved using the aforementioned methods. The comparison showed inconsistencies in the retrieved lidar ratios. However, other aerosol properties were found to be generally in close agreement with the AERONET (AErosol RObotic NETwork) products. In future studies, more cases should be analysed in order to clearly define the peculiarities in our results.
A secured Cryptographic Hashing Algorithm
Mohanty, Rakesh; Bishi, Sukant kumar
2010-01-01
Cryptographic hash functions for calculating the message digest of a message have been in practical use as an effective measure to maintain message integrity for a few decades. This message digest is unique and irreversible, and avoids all types of collisions for any given input string. The message digest calculated by this algorithm is propagated in the communication medium along with the original message from the sender side, and on the receiver side the integrity of the message can be verified by recalculating the message digest of the received message and comparing the two digest values. In this paper we have designed and developed a new algorithm for calculating the message digest of any message and implemented it using a high level programming language. An experimental analysis and comparison with the existing MD5 hashing algorithm, which is predominantly being used as a cryptographic hashing tool, shows this algorithm to provide more randomness and greater strength against intrusion attacks. In this algorithm th...
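The send/recompute/compare integrity check described above can be sketched in a few lines. Since the paper's custom algorithm is not published, SHA-256 from the standard library is used here purely as a stand-in digest function.

```python
import hashlib

def digest(message: bytes) -> str:
    """Compute a fixed-length message digest. SHA-256 stands in for the
    paper's algorithm, which is not publicly specified."""
    return hashlib.sha256(message).hexdigest()

def verify(message: bytes, received_digest: str) -> bool:
    """Receiver side: recompute the digest of the received message and
    compare it with the digest that accompanied the message; a mismatch
    signals loss of integrity."""
    return digest(message) == received_digest

msg = b"transfer $100 to account 42"
tag = digest(msg)                 # sent alongside the message
ok = verify(msg, tag)             # unmodified message verifies
tampered = verify(b"transfer $900 to account 42", tag)  # altered message fails
```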
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristics and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
Anna Bourmistrova; Milan Simic; Reza Hoseinnezhad; Jazar, Reza N.
2011-01-01
The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while th...
Integrative analysis to select cancer candidate biomarkers to targeted validation
Heberle, Henry; Domingues, Romênia R.; Granato, Daniela C.; Yokoo, Sami; Canevarolo, Rafael R.; Winck, Flavia V.; Ribeiro, Ana Carolina P.; Brandão, Thaís Bianca; Filgueiras, Paulo R.; Cruz, Karen S. P.; Barbuto, José Alexandre; Poppi, Ronei J.; Minghim, Rosane; Telles, Guilherme P.; Fonseca, Felipe Paiva; Fox, Jay W.; Santos-Silva, Alan R.; Coletta, Ricardo D.; Sherman, Nicholas E.; Paes Leme, Adriana F.
2015-01-01
Targeted proteomics has flourished as the method of choice for prospecting for and validating potential candidate biomarkers in many diseases. However, challenges still remain due to the lack of standardized routines that can prioritize a limited number of proteins to be further validated in human samples. To help researchers identify candidate biomarkers that best characterize their samples under study, a well-designed integrative analysis pipeline, comprising MS-based discovery, feature selection methods, clustering techniques, bioinformatic analyses and targeted approaches was performed using discovery-based proteomic data from the secretomes of three classes of human cell lines (carcinoma, melanoma and non-cancerous). Three feature selection algorithms, namely, Beta-binomial, Nearest Shrunken Centroids (NSC), and Support Vector Machine-Recursive Features Elimination (SVM-RFE), indicated a panel of 137 candidate biomarkers for carcinoma and 271 for melanoma, which were differentially abundant between the tumor classes. We further tested the strength of the pipeline in selecting candidate biomarkers by immunoblotting, human tissue microarrays, label-free targeted MS and functional experiments. In conclusion, the proposed integrative analysis was able to pre-qualify and prioritize candidate biomarkers from discovery-based proteomics to targeted MS. PMID:26540631
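One of the three feature selection algorithms named above, SVM-RFE, is available off the shelf and can be sketched on synthetic stand-in data. The data shape and parameter choices below are illustrative assumptions, not the study's secretome dataset or settings.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.svm import SVC

# Synthetic stand-in for protein abundance data: 60 samples, 40 features,
# only a handful of which are informative (the candidate biomarkers)
X, y = make_classification(n_samples=60, n_features=40, n_informative=5,
                           n_redundant=0, random_state=0)

# SVM-RFE: repeatedly fit a linear SVM and eliminate the lowest-weight
# features until the requested panel size remains
selector = RFE(SVC(kernel="linear"), n_features_to_select=5, step=2)
selector.fit(X, y)
panel = np.flatnonzero(selector.support_)  # indices of retained features
```

In the study's pipeline, a panel selected this way would then be cross-checked against the other selectors (Beta-binomial, NSC) before targeted validation.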
Alstad, K. P.; Venterea, R. T.; Tan, S. M.; Saad, N.
2015-12-01
Understanding chamber-based soil flux model fitting and measurement error is key to scaling soil GHG emissions and resolving the primary uncertainties in climate and management feedbacks at regional scales. One key challenge is the selection of the correct empirical model applied to soil flux rate analysis in chamber-based experiments. Another is the characterization of error in the chamber measurement. Traditionally, most chamber-based N2O and CH4 measurements and model derivations have used discrete sampling for GC analysis, and have been conducted using extended chamber deployment periods (DP), which are expected to result in substantial alteration of the pre-deployment flux. The development of high-precision, high-frequency CRDS analyzers has advanced the science of soil flux analysis by facilitating much shorter DP and, in theory, less chamber-induced suppression of the soil-atmosphere diffusion gradient. As well, a new software tool developed by Picarro (the "Soil Flux Processor" or "SFP") links the power of Cavity Ring-Down Spectroscopy (CRDS) technology with an easy-to-use interface that features flexible sample-ID and run-schemes, and provides real-time monitoring of chamber accumulations and environmental conditions. The SFP also includes a sophisticated flux analysis interface which offers user-defined model selection, including three predominant fit algorithms as defaults, and an open-code interface for user-composed algorithms. The SFP is designed to couple with the Picarro G2508 system, an analyzer which simplifies soil flux studies by simultaneously measuring the primary GHG species -- N2O, CH4, CO2 and H2O. In this study, Picarro partners with the ARS USDA Soil & Water Management Research Unit (R. Venterea, St. Paul) to examine the degree to which the high-precision, high-frequency Picarro analyzer allows for much shorter DP in chamber-based flux analysis and, in theory, less chamber-induced suppression of the soil
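The simplest of the fit algorithms referred to above, a linear model of the chamber accumulation curve, can be sketched as follows. This is a generic illustration, not Picarro's SFP implementation; the chamber dimensions, molar density and synthetic trace are assumptions.

```python
import numpy as np

def linear_flux(t, conc, chamber_vol, chamber_area, air_molar_density=41.6):
    """Estimate soil gas flux (mol m^-2 s^-1) from a chamber accumulation
    curve by a linear fit of mole fraction vs. time. Short deployment
    periods keep the fit on the early, near-linear part of the curve,
    limiting chamber-induced suppression of the diffusion gradient.
    The default molar density (mol m^-3) assumes roughly standard
    conditions."""
    slope = np.polyfit(t, conc, 1)[0]   # mole-fraction change per second
    return slope * air_molar_density * chamber_vol / chamber_area

# Synthetic 2-minute N2O accumulation: ~0.5 ppb/s as mole fraction, with
# analyzer noise, over a 10 L chamber covering 0.05 m^2 of soil
t = np.arange(0, 120, 1.0)
conc = 330e-9 + 0.5e-9 * t + np.random.default_rng(0).normal(0, 2e-9, t.size)
f = linear_flux(t, conc, chamber_vol=0.01, chamber_area=0.05)
```

Nonlinear models (e.g. exponential forms) fit the later, curvature-affected part of longer deployments; comparing their estimates with the short-deployment linear fit is exactly the kind of model-selection question the study addresses.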
International Nuclear Information System (INIS)
To investigate systematic changes in dose arising when treatment plans optimised using the Anisotropic Analytical Algorithm (AAA) are recalculated using Acuros XB (AXB) in patients treated with definitive chemoradiotherapy (dCRT) for locally advanced oesophageal cancers. We have compared treatment plans created using AAA with those recalculated using AXB. Although AAA is currently more widely used in clinical routine, AXB has been shown to calculate the dose distribution more accurately, particularly in heterogeneous regions. Studies to predict clinical outcome should be based on modelling the dose delivered to the patient as accurately as possible. CT datasets from ten patients were selected for this retrospective study. VMAT (volumetric modulated arc therapy) plans with 2 arcs, collimator rotation ± 5-10° and dose prescription 50 Gy / 25 fractions were created using Varian Eclipse (v10.0). The initial dose calculation was performed with AAA, and AXB plans were created by re-calculating the dose distribution using the same number of monitor units (MU) and multileaf collimator (MLC) files as the original plan. The difference in calculated dose to organs at risk (OAR) was compared using dose-volume histogram (DVH) statistics, and p values were calculated using the Wilcoxon signed rank test. The potential clinical effect of dosimetric differences in the gross tumour volume (GTV) was evaluated using three different TCP models from the literature. PTV median dose was 0.9 Gy lower (range: 0.5 Gy - 1.3 Gy; p < 0.05) for VMAT AAA plans re-calculated with AXB, and GTV mean dose was reduced by on average 1.0 Gy (range: 0.3 Gy - 1.5 Gy; p < 0.05). A difference in TCP of between 1.2% and 3.1% was found depending on the choice of TCP model. OAR mean dose was lower in the AXB-recalculated plan than in the AAA plan (average dose reduction: lung 1.7%, heart 2.4%). Similar trends were seen for CRT plans
Improved Tiled Bitmap Forensic Analysis Algorithm
Directory of Open Access Journals (Sweden)
C. D. Badgujar, G. N. Dhanokar
2012-12-01
Full Text Available In the computer networking world, the need for security and proper systems of control is obvious, as is the need to find intruders who modify data. Nowadays, fraud in companies is committed not only by outsiders but also by insiders. An insider may perform illegal activity and try to hide it. Companies would like to be assured that such illegal activity, i.e. tampering, has not occurred, or that if it does, it is quickly discovered. Mechanisms now exist that detect tampering of a database through the use of cryptographically strong hash functions. This paper contains a survey which explores various approaches to database forensics through different methodologies, using forensic algorithms and tools for investigations. Forensic analysis algorithms are used to determine who, when, and what data has been tampered with. The Tiled Bitmap Algorithm introduces the notion of a candidate set (all possible locations of detected tampering(s)) and provides a complete characterization of the candidate set and its cardinality. The improved tiled bitmap algorithm overcomes the drawbacks of the existing tiled bitmap algorithm.
Fast Algorithm for N-2 Contingency Problem
Turitsyn, K S
2012-01-01
We present a novel selection algorithm for the N-2 contingency analysis problem. The algorithm is based on iterative bounding of line outage distribution factors and successive pruning of the set of contingency pair candidates. The selection procedure is non-heuristic and is certified to identify all events that lead to thermal constraint violations in the DC approximation. The complexity of the algorithm is O(N^2), comparable to the complexity of the N-1 contingency problem. We validate and test the algorithm on the Polish grid network with around 3000 lines. For this test case, two iterations of the pruning procedure reduce the total number of candidate pairs by a factor of almost 1000, from 5 million line pairs to only 6128.
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
A heuristic path-estimating algorithm for large-scale real-time traffic information calculating
Institute of Scientific and Technical Information of China (English)
2008-01-01
Because the original Global Positioning System (GPS) data in Floating Car Data suffer from accuracy problems, this paper proposes a heuristic path-estimating algorithm for large-scale real-time traffic information calculation. It uses a heuristic search method, imposes restrictions with geometric operations, and compares the vectors composed of the vehicular GPS points against a special road network model to build the set of candidate vehicular travel routes. Finally, it chooses the optimal route according to weight. Experimental results indicate that the algorithm achieves considerable accuracy (over 92.7%) and computational speed (up to 8000 GPS records per second) when handling GPS tracking data whose sampling interval is longer than 1 min, even under complex road network conditions.
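The weight-based choice among candidate routes can be sketched with a toy scoring function that combines point-to-route distance with heading mismatch, the kind of geometric restriction the abstract alludes to. The weight coefficients and the planar coordinates are illustrative assumptions, not the paper's actual model.

```python
import math

def route_weight(gps_points, route_points, w_dist=1.0, w_heading=0.5):
    """Score one candidate route against a GPS trace (lower is better):
    sum of point-to-route distances plus weighted heading differences
    between successive GPS vectors and route vectors."""
    assert len(gps_points) == len(route_points) >= 2
    total = 0.0
    for k in range(len(gps_points) - 1):
        (gx, gy), (rx, ry) = gps_points[k], route_points[k]
        dist = math.hypot(gx - rx, gy - ry)
        g_head = math.atan2(gps_points[k+1][1] - gy, gps_points[k+1][0] - gx)
        r_head = math.atan2(route_points[k+1][1] - ry, route_points[k+1][0] - rx)
        # wrap heading difference into [-pi, pi)
        dh = abs((g_head - r_head + math.pi) % (2 * math.pi) - math.pi)
        total += w_dist * dist + w_heading * dh
    return total

# Noisy eastbound GPS trace vs. two candidate roads
gps = [(0.0, 0.1), (1.0, 0.12), (2.0, 0.05)]
straight = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]   # road running east
turning = [(0.0, 0.0), (0.7, 0.7), (0.7, 1.7)]    # road turning north
best = min([straight, turning], key=lambda r: route_weight(gps, r))
```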
Institute of Scientific and Technical Information of China (English)
Armand BABOLI; Mohammadali Pirayesh NEGHAB; Rasoul HAJI
2008-01-01
This paper considers a two-level supply chain consisting of one warehouse and one retailer. In this model we determine the optimal ordering policy according to inventory and transportation costs. We assume that the demand rate seen by the retailer is known. Shortages are allowed neither at the retailer nor at the warehouse. We study this model in two cases, decentralized and centralized. In the decentralized case the retailer and the warehouse independently minimize their own costs, while in the centralized case the warehouse and the retailer are treated as a single firm. We propose an algorithm to find economic order quantities for both the retailer and the warehouse which minimize the total system cost in the centralized case. The total system cost contains the holding and ordering costs at the retailer and the warehouse as well as the transportation cost from the warehouse to the retailer. Applying this model to the pharmaceutical downstream supply chain of a public hospital yields significant savings. Through numerical examples, the costs are computed in MATLAB to compare the centralized case with the decentralized one and to propose a saving-sharing mechanism through quantity discounts.
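The building block of such ordering policies is the classic economic order quantity. A minimal sketch follows; the paper's algorithm goes further by coupling retailer and warehouse lot sizes and adding the transport cost, and the numbers below are illustrative only.

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classic economic order quantity: Q* = sqrt(2*D*K/h), where D is
    demand per unit time, K the fixed ordering cost and h the holding
    cost per unit per unit time."""
    return math.sqrt(2.0 * demand_rate * order_cost / holding_cost)

def total_cost(Q, demand_rate, order_cost, holding_cost):
    """Average cost per unit time for lot size Q: ordering + holding."""
    return demand_rate * order_cost / Q + holding_cost * Q / 2.0

# Illustrative retailer: 1200 units/yr, $50 per order, $2/unit/yr holding
q_star = eoq(1200, 50.0, 2.0)
c_star = total_cost(q_star, 1200, 50.0, 2.0)
```

At the optimum the ordering and holding components are equal, which is a quick sanity check on any EOQ computation.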
Differentially Private Frequent Sequence Mining via Sampling-based Candidate Pruning
Xu, Shengzhi; Cheng, Xiang; Li, Zhengyi; Xiong, Li
2016-01-01
In this paper, we study the problem of mining frequent sequences under the rigorous differential privacy model. We explore the possibility of designing a differentially private frequent sequence mining (FSM) algorithm which can achieve both high data utility and a high degree of privacy. We found that, in differentially private FSM, the amount of required noise is proportional to the number of candidate sequences. If we could effectively reduce the number of unpromising candidate sequences, the utility and privacy tradeoff could be significantly improved. To this end, by leveraging a sampling-based candidate pruning technique, we propose a novel differentially private FSM algorithm, which is referred to as PFS2. The core of our algorithm is to utilize sample databases to further prune the candidate sequences generated based on the downward closure property. In particular, we use the noisy local support of candidate sequences in the sample databases to estimate which sequences are potentially frequent. To improve the accuracy of such private estimations, a sequence shrinking method is proposed to enforce the length constraint on the sample databases. Moreover, to decrease the probability of misestimating frequent sequences as infrequent, a threshold relaxation method is proposed to relax the user-specified threshold for the sample databases. Through formal privacy analysis, we show that our PFS2 algorithm is ε-differentially private. Extensive experiments on real datasets illustrate that our PFS2 algorithm can privately find frequent sequences with high accuracy. PMID:26973430
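The noisy-support pruning idea at the heart of the approach can be sketched in a few lines: perturb each candidate's support with Laplace noise (the standard differential-privacy mechanism, scale = sensitivity/ε) and keep only candidates whose noisy support clears the threshold. This is a simplified illustration of the pruning step only; PFS2 additionally shrinks sequences and relaxes the threshold for the sample databases.

```python
import numpy as np

def private_prune(supports, threshold, epsilon, sensitivity=1.0, seed=0):
    """Estimate which candidate sequences are potentially frequent in a
    sample database: add Laplace noise to each support count and retain
    candidates whose noisy support reaches the threshold."""
    rng = np.random.default_rng(seed)
    noisy = np.asarray(supports, dtype=float) + rng.laplace(
        0.0, sensitivity / epsilon, len(supports))
    return noisy >= threshold

# Candidate sequence supports in a sample database; threshold = 100
supports = [500, 480, 30, 5]
keep = private_prune(supports, threshold=100, epsilon=1.0)
```

With supports far from the threshold, the Laplace noise (scale 1 here) almost never flips a decision; candidates near the threshold are exactly the ones whose misclassification the paper's relaxation method targets.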
Directory of Open Access Journals (Sweden)
Ji-Wook Kwon
2015-05-01
Full Text Available This paper proposes a Multiple Leader Candidate (MLC) structure and a Competitive Position Allocation (CPA) algorithm which are applicable to various tasks, including environmental sensing. Unlike previous formation structures such as virtual-leader and actual-leader structures, with position allocation schemes including rigid allocation and optimization-based allocation, a formation employing the proposed MLC structure and CPA algorithm is robust against the fault (or disappearance) of member robots and reduces the overall cost. In the MLC structure, a leader of the entire system is chosen from among leader candidate robots. The CPA algorithm is a decentralized position allocation algorithm that assigns the robots to the vertices of the formation via competition among adjacent robots. Numerical simulations and experimental results are included to show the feasibility and performance of a multiple-robot system employing the proposed MLC structure and CPA algorithm.
IAEA Director General candidates announced
International Nuclear Information System (INIS)
Full text: The IAEA today confirms receipt of the nomination of five candidates for Director General of the IAEA. Nominations of the following individuals have been received by the Chairperson of the IAEA Board of Governors, Ms. Taous Feroukhi: Mr. Jean-Pol Poncelet of Belgium; Mr. Yukiya Amano of Japan; Mr. Ernest Petric of Slovenia; Mr. Abdul Samad Minty of South Africa; and Mr. Luis Echavarri of Spain. The five candidates were nominated in line with a process approved by the Board in October 2008. IAEA Director General Mohamed ElBaradei's term of office expires on 30 November 2009. He has served as Director General since 1997 and has stated that he is not available for a fourth term of office. (IAEA)
Ghosh, Aniruddha; Joshi, P. K.
2014-02-01
Bamboo is used by different communities in India to develop indigenous products, maintain livelihoods and sustain life. The Indian National Bamboo Mission focuses on evaluation, monitoring and development of bamboo as an important plant resource. Knowledge of the spatial distribution of bamboo therefore becomes necessary in this context. The present study attempts to map bamboo patches using very high resolution (VHR) WorldView 2 (WV 2) imagery in parts of South 24 Parganas, West Bengal, India, using both pixel- and object-based approaches. A combined layer of pan-sharpened multi-spectral (MS) bands, the first 3 principal components (PC) of these bands, and seven second-order texture measures based on Gray Level Co-occurrence Matrices (GLCM) of the first three PCs were used as input variables. For pixel-based image analysis (PBIA), recursive feature elimination (RFE) based feature selection was carried out to identify the most important input variables. Results of the feature selection indicate that the 10 most important variables include PC 1, PC 2 and their GLCM means along with the 6 MS bands. Three different sets of predictor variables (the 5 and 10 most important variables, and all 32 variables) were classified with Support Vector Machine (SVM) and Random Forest (RF) algorithms. Producer accuracy for bamboo was found to be highest when the 10 most important variables selected by RFE were classified with SVM (82%). However, object-based image analysis (OBIA) achieved higher classification accuracy than PBIA using the same 32 variables, but with fewer training samples. Using an object-based SVM classifier, the producer accuracy for bamboo reached 94%. The significance of this study is that the present framework is capable of accurately identifying bamboo patches as well as detecting other tree species in a tropical region with heterogeneous land use land cover (LULC), which could further aid the mandate of the National Bamboo Mission and related programs.
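The second-order GLCM texture measures used as predictor variables above can be computed directly. The sketch below builds a co-occurrence matrix for a single displacement in plain numpy (production work would typically use a library implementation and average over several displacements) and derives two of the standard measures, GLCM mean and contrast; the test patches are synthetic stand-ins.

```python
import numpy as np

def glcm_features(img, levels=8, dx=1, dy=0):
    """Grey-level co-occurrence matrix for displacement (dx, dy) on an
    integer image quantized to `levels` grey levels, plus two
    second-order texture measures: GLCM mean and contrast."""
    img = np.asarray(img)
    glcm = np.zeros((levels, levels), dtype=float)
    h, w = img.shape
    for y in range(h - dy):
        for x in range(w - dx):
            glcm[img[y, x], img[y + dy, x + dx]] += 1
    glcm /= glcm.sum()                     # normalize to joint probabilities
    i, j = np.indices(glcm.shape)
    mean = (i * glcm).sum()
    contrast = ((i - j) ** 2 * glcm).sum()
    return mean, contrast

# Smooth vs. noisy 4-level patches: contrast separates the two textures
rng = np.random.default_rng(1)
smooth = np.full((32, 32), 2, dtype=int)
noisy = rng.integers(0, 4, (32, 32))
_, c_smooth = glcm_features(smooth, levels=4)
_, c_noisy = glcm_features(noisy, levels=4)
```

A uniform patch has zero contrast (all co-occurring pairs are identical), while an uncorrelated patch has high contrast, which is why these measures help separate textured canopies such as bamboo from smoother cover types.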
Directory of Open Access Journals (Sweden)
Cristina Anton
2012-01-01
Full Text Available OBJECTIVE: Differentiation between benign and malignant ovarian neoplasms is essential for creating a system for patient referrals. Therefore, the contributions of the tumor markers CA125 and human epididymis protein 4 (HE4), as well as the risk of ovarian malignancy algorithm (ROMA) and risk of malignancy index (RMI) values, were considered individually and in combination to evaluate their utility for establishing this type of patient referral system. METHODS: Patients who had been diagnosed with ovarian masses through imaging analyses (n = 128) were assessed for their expression of the tumor markers CA125 and HE4. The ROMA and RMI values were also determined. The sensitivity and specificity of each parameter were calculated using receiver operating characteristic curves according to the area under the curve (AUC) for each method. RESULTS: The sensitivities associated with the ability of CA125, HE4, ROMA, or RMI to distinguish between malignant and benign ovarian masses were 70.4%, 79.6%, 74.1%, and 63%, respectively. Among carcinomas, the sensitivities of CA125, HE4, ROMA (pre- and post-menopausal), and RMI were 93.5%, 87.1%, 80%, 95.2%, and 87.1%, respectively. The most accurate numerical values were obtained with RMI, although the four parameters were shown to be statistically equivalent. CONCLUSION: There were no differences in accuracy between CA125, HE4, ROMA, and RMI for differentiating between types of ovarian masses. RMI had the lowest sensitivity but was the most numerically accurate method. HE4 demonstrated the best overall sensitivity for the evaluation of malignant ovarian tumors and the differential diagnosis of endometriosis. All of the parameters demonstrated increased sensitivity when tumors with low malignant potential were considered low-risk, which may be used as an acceptable assessment method for referring patients to reference centers.
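Of the four parameters compared, RMI is a simple product that can be written down directly. The sketch below uses the widely cited Jacobs formulation (RMI = U × M × CA125); the abstract does not state which RMI variant or cut-off the study used, and 200 is only a commonly quoted cut-off.

```python
def rmi(ca125, ultrasound_score, postmenopausal):
    """Risk of Malignancy Index, Jacobs formulation:
    RMI = U x M x CA125, with ultrasound score U in {0, 1, 3}
    (number of suspicious features) and menopausal score M = 1 (pre)
    or 3 (post). CA125 is in U/mL."""
    m = 3 if postmenopausal else 1
    return ultrasound_score * m * ca125

# Illustrative cases against a common cut-off of 200
low_risk = rmi(ca125=20.0, ultrasound_score=1, postmenopausal=False)
high_risk = rmi(ca125=150.0, ultrasound_score=3, postmenopausal=True)
```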
Auletta, Gianluca; Ditommaso, Rocco; Iacovino, Chiara; Carlo Ponzo, Felice; Pina Limongelli, Maria
2016-04-01
Continuous monitoring based on vibrational identification methods is increasingly employed with the aim of evaluating the state of health of existing structures and infrastructure and of evaluating the performance of safety interventions over time. In case of earthquakes, data acquired by continuous monitoring systems can be used to localize and quantify possible damage to a monitored structure using appropriate algorithms based on the variation of structural parameters. Most damage identification methods are based on the variation of a few modal and/or non-modal parameters: the former are strictly related to the structural eigenfrequencies, equivalent viscous damping factors and mode shapes; the latter are based on the variation of parameters related to the geometric characteristics of the monitored structure whose variations could be correlated with damage. In this work, results retrieved from the application of a curvature-evolution-based method and an interpolation-error-based method are compared. The first method is based on the evaluation of the curvature variation (related to the fundamental mode of vibration) over time, comparing the variations before, during and after the earthquake. The Interpolation Method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. A damage feature is defined in terms of the error related to the use of a spline function in interpolating the ODSs of the structure: statistically significant variations of the interpolation error between two successive inspections of the structure indicate the onset of damage. Both methods have been applied using both numerical data retrieved from nonlinear FE models and experimental tests on scaled structures carried out on the shaking table of the University of Basilicata. Acknowledgements This study was partially funded by the Italian Civil Protection Department within the project DPC
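The interpolation-error damage feature can be sketched with a leave-one-out scheme along an Operational Deformed Shape: each sensor value is predicted from a smooth fit through the remaining sensors, and a localized jump in the per-location error between two inspections flags candidate damage. A polynomial fit stands in here for the spline used by the Interpolation Method, and the mode shape and damage magnitude are synthetic assumptions.

```python
import numpy as np

def interpolation_error(ods, degree=3):
    """Leave-one-out interpolation error along an ODS: predict each
    sensor value from a smooth (here polynomial) fit through the other
    sensors and return the absolute error at each location."""
    ods = np.asarray(ods, dtype=float)
    x = np.arange(len(ods), dtype=float)
    err = np.empty(len(ods))
    for k in range(len(ods)):
        mask = np.ones(len(ods), dtype=bool)
        mask[k] = False
        coeffs = np.polyfit(x[mask], ods[mask], degree)
        err[k] = abs(np.polyval(coeffs, x[k]) - ods[k])
    return err

# Smooth fundamental-mode shape over 10 sensor locations,
# then a localized stiffness change distorting location 5
x = np.arange(10, dtype=float)
healthy = np.sin(np.pi * x / 9)
damaged = healthy.copy()
damaged[5] -= 0.15
delta = interpolation_error(damaged) - interpolation_error(healthy)
worst = int(np.argmax(delta))
```

The error increase peaks at the damaged location, which is the localization property the method exploits; in practice the threshold on the error change is set statistically across inspections.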
VALUE ORIENTATIONS OF TEACHER CANDIDATES
YAPICI, Asım; KUTLU, M.Oğuz; BİLİCAN, F.Işıl
2012-01-01
Abstract This cross-sectional, descriptive study examined the change in values over time among teacher candidates. The Schwartz Values Inventory was administered to 708 freshman and senior students studying at Cukurova University, Education Faculty. The results showed that students in the department of Science Education valued power, achievement, and stimulation; the department of English Teaching Education valued hedonism; and the department of Education of Religious Culture valued un...
An Efficient Hybrid Face Recognition Algorithm Using PCA and GABOR Wavelets
Directory of Open Access Journals (Sweden)
Hyunjong Cho
2014-04-01
With the rapid development of computers and the increasing mass use of high-tech mobile devices, vision-based face recognition has advanced significantly. However, it is hard to conclude that the performance of computers surpasses that of humans, as humans have generally exhibited better performance in challenging situations involving occlusion or variations. Motivated by the recognition method of humans, who utilize both holistic and local features, we present a computationally efficient hybrid face recognition method that employs dual-stage holistic and local feature-based recognition algorithms. In the first, coarse recognition stage, the proposed algorithm utilizes Principal Component Analysis (PCA) to identify a test image. The recognition ends at this stage if the confidence level of the result turns out to be reliable. Otherwise, the algorithm uses this result to filter out top candidate images with a high degree of similarity and passes them to the next, fine recognition stage, where Gabor filters are employed. As is well known, recognizing a face image with Gabor filters is a computationally heavy task. The contribution of our work is in proposing a flexible dual-stage algorithm that enables fast, hybrid face recognition. Experimental tests were performed with the Extended Yale Face Database B to verify the effectiveness and validity of the research, and we obtained better recognition results under illumination variations, not only in terms of computation time but also in terms of recognition rate, in comparison to PCA- and Gabor wavelet-based recognition algorithms.
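The dual-stage control flow can be sketched roughly as follows (a hypothetical minimal implementation: the margin-ratio confidence test, the Gabor parameters, and the top-k filtering are all assumptions for illustration, not the paper's actual algorithm):

```python
import numpy as np

def gabor_kernel(size=9, theta=0.0, lam=4.0, sigma=2.0):
    """Real part of a Gabor filter (parameter choices are arbitrary)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) * np.cos(2 * np.pi * xr / lam)

def gabor_features(img, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Concatenated FFT-based filter responses at several orientations."""
    F = np.fft.fft2(img)
    feats = []
    for t in thetas:
        K = np.fft.fft2(gabor_kernel(theta=t), s=img.shape)
        feats.append(np.abs(np.fft.ifft2(F * K)).ravel())
    return np.concatenate(feats)

class DualStageRecognizer:
    def __init__(self, gallery, n_components=10, margin=1.2, top_k=3):
        self.gallery = gallery                    # list of 2-D face images
        X = np.stack([g.ravel() for g in gallery])
        self.mean = X.mean(axis=0)
        # PCA basis via SVD of the mean-centred gallery
        _, _, Vt = np.linalg.svd(X - self.mean, full_matrices=False)
        self.basis = Vt[:n_components]
        self.proj = (X - self.mean) @ self.basis.T
        self.margin = margin    # required second-best / best distance ratio
        self.top_k = top_k

    def identify(self, img):
        p = (img.ravel() - self.mean) @ self.basis.T
        d = np.linalg.norm(self.proj - p, axis=1)
        order = np.argsort(d)
        # coarse stage: accept the PCA match if it is clearly separated
        if d[order[1]] >= self.margin * max(d[order[0]], 1e-12):
            return int(order[0])
        # fine stage: re-rank only the top candidates with Gabor features
        q = gabor_features(img)
        best, best_d = -1, np.inf
        for idx in order[:self.top_k]:
            dist = np.linalg.norm(gabor_features(self.gallery[idx]) - q)
            if dist < best_d:
                best, best_d = int(idx), dist
        return best
```

The point of the design is that the expensive Gabor filtering is only invoked for the small candidate set that survives the cheap PCA stage.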
Directory of Open Access Journals (Sweden)
J. M. A. C. Souza
2011-03-01
Three methods for automatic detection of mesoscale coherent structures are applied to Sea Level Anomaly (SLA) fields in the South Atlantic. The first method is based on the wavelet packet decomposition of the SLA data, the second on the estimation of the Okubo-Weiss parameter and the third on a geometric criterion using the winding-angle approach. The results provide a comprehensive picture of the mesoscale eddies over the South Atlantic Ocean, emphasizing their main characteristics: amplitude, diameter, duration and propagation velocity. Five areas of particular eddy dynamics were selected: the Brazil Current, the Agulhas eddies propagation corridor, the Agulhas Current retroflexion, the Brazil-Malvinas confluence zone and the northern branch of the Antarctic Circumpolar Current (ACC). For these areas, mean propagation velocities and amplitudes were calculated. Two regions with long-duration eddies were observed, corresponding to the propagation of Agulhas and ACC eddies. Through the comparison between the identification methods, their main advantages and shortcomings were detailed. The geometric criterion presents the best performance, mainly in terms of number of detections, duration of the eddies and propagation velocities. The results are particularly good for the Agulhas Rings, which presented the longest lifetimes of all South Atlantic eddies.
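The Okubo-Weiss parameter used by the second method is standard and can be computed directly from a gridded velocity field, W = s_n^2 + s_s^2 - omega^2; the `0.2 * std(W)` threshold below is a common heuristic from the eddy-detection literature, not necessarily the value used in this study:

```python
import numpy as np

def okubo_weiss(u, v, dx=1.0, dy=1.0):
    """Okubo-Weiss parameter on a regular grid (arrays in (y, x) order).

    u, v: zonal and meridional velocity components.  Strongly negative W
    marks vorticity-dominated regions, i.e. candidate eddy cores."""
    du_dy, du_dx = np.gradient(u, dy, dx)
    dv_dy, dv_dx = np.gradient(v, dy, dx)
    s_n = du_dx - dv_dy        # normal strain
    s_s = dv_dx + du_dy        # shear strain
    omega = dv_dx - du_dy      # relative vorticity
    return s_n**2 + s_s**2 - omega**2

def eddy_mask(W, k=0.2):
    """Heuristic eddy-core mask: W < -k * std(W)."""
    return W < -k * np.std(W)
```

As a sanity check, solid-body rotation (u = -y, v = x) has zero strain and vorticity 2, giving W = -4 everywhere, a purely vorticity-dominated field.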
Directory of Open Access Journals (Sweden)
Robin Roj
2014-07-01
This paper presents three different search engines for the detection of CAD parts in large databases. The analysis of the contained information is performed by exporting the data stored in the structure trees of the CAD models. A preparation program generates one XML file for every model, which, in addition to the data of the structure tree, also records certain physical properties of each part. The first search engine specializes in the discovery of standard parts, like screws or washers. The second program uses user input as search parameters and can therefore perform personalized queries. The third one compares a given reference part with all parts in the database and locates files that are identical or similar to the reference part. All approaches run automatically and have the analysis of the structure tree in common. Files constructed with CATIA V5 and search engines written in Python were used for the implementation. The paper also includes a short comparison of the advantages and disadvantages of each program, as well as a performance test.
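The three query styles can be illustrated over such per-model XML exports (the element and attribute names, the standard-part catalogue, and the similarity tolerance below are invented for illustration; the paper's actual export schema is not specified here):

```python
import xml.etree.ElementTree as ET

# Hypothetical export format mirroring the structure-tree XML described
# above: one <part> element with a name, a type, and physical properties.
SAMPLE = """
<part name="{name}" type="{type}">
  <properties volume="{vol}" mass="{mass}"/>
</part>
"""

STANDARD_TYPES = {"screw", "washer", "nut"}   # assumed catalogue

def load_part(xml_text):
    root = ET.fromstring(xml_text)
    props = root.find("properties")
    return {
        "name": root.get("name"),
        "type": root.get("type"),
        "volume": float(props.get("volume")),
        "mass": float(props.get("mass")),
    }

def find_standard_parts(parts):
    """First engine: pick out catalogue (standard) parts by type."""
    return [p for p in parts if p["type"] in STANDARD_TYPES]

def query(parts, **criteria):
    """Second engine: personalized query on exact attribute values."""
    return [p for p in parts
            if all(p.get(k) == v for k, v in criteria.items())]

def similar_to(parts, ref, tol=0.05):
    """Third engine: parts whose physical properties lie within a
    relative tolerance of the reference part."""
    def close(a, b):
        return abs(a - b) <= tol * max(abs(b), 1e-12)
    return [p for p in parts if p is not ref
            and close(p["volume"], ref["volume"])
            and close(p["mass"], ref["mass"])]
```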
Béland, Laurent K; Stoller, Roger; Xu, Haixuan
2014-01-01
We present a comparison of the kinetic Activation-Relaxation Technique (k-ART) and the Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC), two off-lattice, on-the-fly kinetic Monte Carlo (KMC) techniques that were recently used to solve several materials science problems. We show that if the initial displacements are localized, the dimer method and the Activation-Relaxation Technique nouveau provide similar performance. We also show that k-ART and SEAKMC, although based on different approximations, are in agreement with each other, as demonstrated by the examples of 50 vacancies in a 1950-atom Fe box and of interstitial loops in 16000-atom boxes. Generally speaking, k-ART's treatment of geometry and flickers is more flexible and rigorous than SEAKMC's (it can, for example, handle amorphous systems), while the latter's concept of active volumes permits a significant speedup of simulations for the systems under consideration and therefore allows investigations of processes requiring large systems that are not acc...
Directory of Open Access Journals (Sweden)
Jyoti Kalyani
2006-01-01
Security of wired and wireless networks is among the most challenging problems in today's computing world. The aim of this study was to give a brief introduction to viruses and worms, their creators, and the characteristics of the algorithms used by viruses. Viruses on wired and wireless networks are elaborated, and viruses are compared with the human immune system. On the basis of this comparison, four guidelines are given for detecting viruses so that more secure systems can be built. The study concludes that security remains a major challenge, and that more secure models are required which automatically detect viruses and protect the system from their effects.
Five modified boundary scan adaptive test generation algorithms
Institute of Scientific and Technical Information of China (English)
Niu Chunping; Ren Zheping; Yao Zongzhong
2006-01-01
To study the diagnosis of Wire-OR (W-O) interconnect faults on PCBs (Printed Circuit Boards), five modified boundary scan adaptive algorithms for interconnect test are put forward. These algorithms apply the global-diagnosis sequence algorithm in place of the equal-weight algorithm of the primary test, so the test time is shortened without changing the fault diagnostic capability. Descriptions of the five modified adaptive test algorithms are presented, and a capability comparison between the modified and original algorithms is made to demonstrate their validity.
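For context, a classic (non-adaptive) modified counting-sequence interconnect test and a simple response-grouping diagnosis can be sketched as follows; this is the generic textbook scheme, not the paper's global-diagnosis sequence algorithm:

```python
import math

def counting_sequence(n_nets):
    """Modified counting-sequence parallel test vectors for n_nets
    interconnect nets: each net receives a distinct binary code, with the
    all-0 and all-1 codes skipped so stuck-at faults cannot alias.  The
    k-th test vector is the k-th bit of every code, applied to all nets
    in parallel via the boundary scan chain."""
    width = max(1, math.ceil(math.log2(n_nets + 2)))
    return [format(i, f"0{width}b") for i in range(1, n_nets + 1)]

def diagnose(codes, responses):
    """Wired-OR shorts are 1-dominant: nets shorted together all read back
    the bitwise OR of their driven codes.  Grouping nets by identical
    responses exposes the shorted sets (aliasing aside, which is what
    adaptive second-stage tests are designed to resolve)."""
    groups = {}
    for net, resp in enumerate(responses):
        groups.setdefault(resp, []).append(net)
    return [g for g in groups.values() if len(g) > 1]
```

The counting sequence needs only ceil(log2(N+2)) vectors, but its diagnostic resolution is limited; adaptive algorithms such as those above add a second, fault-dependent test phase to resolve ambiguous groups.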
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
Evaluating computer-aided detection algorithms
International Nuclear Information System (INIS)
Computer-aided detection (CAD) has been attracting extensive research interest during the last two decades. It is recognized that the full potential of CAD can only be realized by improving the performance and robustness of CAD algorithms, and this requires good evaluation methodology that would permit CAD designers to optimize their algorithms. Free-response receiver operating characteristic (FROC) curves are widely used to assess CAD performance; however, evaluation rarely proceeds beyond determination of the lesion localization fraction (sensitivity) at an arbitrarily selected value of nonlesion localizations (false marks) per image. This work describes a FROC curve fitting procedure that uses a recent model of visual search that serves as a framework for the free-response task. A maximum likelihood procedure for estimating the parameters of the model from free-response data and fitting CAD-generated FROC curves was implemented. Procedures were implemented to estimate two figures of merit and associated statistics such as 95% confidence intervals and goodness of fit. One of the figures of merit does not require the arbitrary specification of an operating point at which to evaluate CAD performance. For comparison, a related method, termed initial detection and candidate analysis, which is applicable when all suspicious regions are reported, was also implemented. The two methods were tested on seven mammography CAD data sets, and both yielded good to excellent fits. The search model approach has the advantage that it can potentially be applied to radiologist-generated free-response data, where not all suspicious regions are reported, only the ones deemed sufficiently suspicious to warrant clinical follow-up. This work represents the first practical application of the search model to an important evaluation problem in diagnostic radiology. Software based on this work is expected to benefit CAD developers working in diverse areas of medical imaging.
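The empirical FROC operating points that such curve fits are estimated from can be computed directly from pooled CAD marks (a minimal sketch; the maximum-likelihood fitting of the search model described above is considerably more involved):

```python
import numpy as np

def froc_points(marks, n_lesions, n_images):
    """Empirical FROC operating points from pooled CAD marks.

    marks: list of (score, is_lesion) pairs, one per CAD mark over the
    whole data set.  As the reporting threshold is lowered past each
    mark, returns NLF (nonlesion localizations per image) and LLF
    (lesion localization fraction, i.e. sensitivity) arrays."""
    marks = sorted(marks, key=lambda m: -m[0])   # highest score first
    tp = fp = 0
    nlf, llf = [], []
    for _score, is_lesion in marks:
        if is_lesion:
            tp += 1
        else:
            fp += 1
        nlf.append(fp / n_images)
        llf.append(tp / n_lesions)
    return np.array(nlf), np.array(llf)
```

Reading LLF at one arbitrary NLF value is exactly the limited practice criticized above; fitting a search-model curve through these points yields threshold-independent figures of merit.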
Vorozheikin, A.; Gonchar, T.; Panfilov, I.; Sopov, E.; Sopov, S.
2009-01-01
A new algorithm for the solution of complex constrained optimization problems, based on a probabilistic genetic algorithm with optimal solution prediction, is proposed. Results of an efficiency investigation, in comparison with a standard genetic algorithm, are presented.
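A penalty-based genetic algorithm for constrained minimization can be sketched as follows (a generic real-coded GA serving as the "standard" baseline, not the proposed probabilistic GA with optimal solution prediction; population size, operators and penalty weight are arbitrary choices):

```python
import random

def genetic_minimize(f, constraint, bounds, pop=40, gens=60,
                     penalty=1e3, seed=1):
    """Minimal real-coded GA: binary tournament selection, blend
    crossover, Gaussian mutation.  Constraint violations (constraint(x)
    <= 0 means feasible) are handled with a static penalty term."""
    rng = random.Random(seed)
    lo, hi = bounds

    def fitness(x):
        return f(x) + penalty * max(0.0, constraint(x))

    P = [rng.uniform(lo, hi) for _ in range(pop)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop):
            a, b = rng.sample(P, 2)                  # tournament of two
            p1 = a if fitness(a) < fitness(b) else b
            c, d = rng.sample(P, 2)
            p2 = c if fitness(c) < fitness(d) else d
            w = rng.random()                          # blend crossover
            child = w * p1 + (1 - w) * p2
            child += rng.gauss(0.0, 0.05 * (hi - lo))  # mutation
            nxt.append(min(max(child, lo), hi))       # clip to bounds
        P = nxt
    return min(P, key=fitness)
```

For example, minimizing (x - 1)^2 subject to x >= 2 should drive the population toward the constrained optimum at x = 2.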