WorldWideScience

Sample records for candid comparison algorithm

  1. Experience with CANDID: Comparison algorithm for navigating digital image databases

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.; Cannon, M.

    1994-10-01

    This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.
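
    As a rough illustration of this query-by-example scheme (not the authors' exact CANDID signature or similarity measure), the sketch below reduces each image to a normalized histogram, treats it as a discrete probability density of feature values, and ranks database images by a normalized similarity between histograms; the feature choice, bin count, and cosine-style similarity are assumptions made only for this example.

```python
# Hedged sketch of the query-by-example idea: global signature = normalized
# histogram (a discrete PDF of pixel values); retrieval = ranking by a
# normalized similarity between signatures.  Not the exact CANDID measures.
import numpy as np

def signature(image, bins=64):
    """Global signature: normalized histogram (discrete PDF) of pixel values in [0, 1)."""
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0.0, 1.0))
    return hist / max(hist.sum(), 1)

def similarity(sig_a, sig_b):
    """Normalized (cosine-style) similarity between two signatures; 1.0 = identical shape."""
    denom = np.linalg.norm(sig_a) * np.linalg.norm(sig_b)
    return float(sig_a @ sig_b / denom) if denom > 0 else 0.0

def retrieve(query_image, database_images, top_k=5):
    """Indices of the database images most similar to the query, best first."""
    q = signature(query_image)
    scores = [similarity(q, signature(img)) for img in database_images]
    return list(np.argsort(scores)[::-1][:top_k])

# Toy usage with random "images" whose values lie in [0, 1):
rng = np.random.default_rng(0)
db = [rng.random((32, 32)) for _ in range(10)]
print(retrieve(db[3], db))   # db[3] itself should rank first
```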

  2. CANDID: Comparison algorithm for navigating digital image databases

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.M.; Cannon, T.M.

    1994-02-21

    In this paper, we propose a method for calculating the similarity between two digital images. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized distance between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to an example target image. The algorithm is applied to the problem of search and retrieval for a database containing pulmonary CT imagery, and experimental results are provided.

  3. Efficient RNA structure comparison algorithms.

    Science.gov (United States)

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    The recently proposed relative-addressing-based RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search over this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of multiple given RNA structures in the relative-addressing format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures with a stricter similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in the compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another for automatically drawing the entire RNA structure from a given structure sequence.
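
    The core data-structure trick mentioned above -- storing the suffixes of a structure string in a sorted suffix array and answering substructure queries by binary search -- can be sketched generically as below. The toy dot-bracket-like string and the naive O(n^2 log n) suffix-array construction are assumptions for illustration only; they are not the relative-addressing format or the optimized tools used by the authors.

```python
# Generic sketch of substructure search by binary search on a suffix array.
# The toy "structure string" and naive construction are illustrative only.
def suffix_array(s):
    """Start positions of all suffixes of s, sorted lexicographically (naive version)."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def find_occurrences(s, sa, pattern):
    """All positions where `pattern` occurs in s, via two binary searches on sa."""
    m = len(pattern)
    lo, hi = 0, len(sa)
    while lo < hi:                                  # leftmost suffix with prefix >= pattern
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start, hi = lo, len(sa)
    while lo < hi:                                  # leftmost suffix with prefix > pattern
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[start:lo])

structure = "((..)).((..))"                          # placeholder structure string
sa = suffix_array(structure)
print(find_occurrences(structure, sa, "((..))"))     # -> [0, 7]
```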

  4. Comparison of turbulence mitigation algorithms

    Science.gov (United States)

    Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric

    2017-07-01

    When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.

  5. PEACE: pulsar evaluation algorithm for candidate extraction - a software package for post-analysis processing of pulsar survey candidates

    Science.gov (United States)

    Lee, K. J.; Stovall, K.; Jenet, F. A.; Martinez, J.; Dartez, L. P.; Mata, A.; Lunsford, G.; Cohen, S.; Biwer, C. M.; Rohr, M.; Flanigan, J.; Walker, A.; Banaszak, S.; Allen, B.; Barr, E. D.; Bhat, N. D. R.; Bogdanov, S.; Brazier, A.; Camilo, F.; Champion, D. J.; Chatterjee, S.; Cordes, J.; Crawford, F.; Deneva, J.; Desvignes, G.; Ferdman, R. D.; Freire, P.; Hessels, J. W. T.; Karuppusamy, R.; Kaspi, V. M.; Knispel, B.; Kramer, M.; Lazarus, P.; Lynch, R.; Lyne, A.; McLaughlin, M.; Ransom, S.; Scholz, P.; Siemens, X.; Spitler, L.; Stairs, I.; Tan, M.; van Leeuwen, J.; Zhu, W. W.

    2013-07-01

    Modern radio pulsar surveys produce a large volume of prospective candidates, the majority of which are polluted by human-created radio frequency interference or other forms of noise. Typically, large numbers of candidates need to be visually inspected in order to determine if they are real pulsars. This process can be labour intensive. In this paper, we introduce an algorithm called Pulsar Evaluation Algorithm for Candidate Extraction (PEACE) which improves the efficiency of identifying pulsar signals. The algorithm ranks the candidates based on a score function. Unlike popular machine-learning-based algorithms, no prior training data sets are required. This algorithm has been applied to data from several large-scale radio pulsar surveys. Using the human-based ranking results generated by students in the Arecibo Remote Command Center programme, the statistical performance of PEACE was evaluated. It was found that PEACE ranked 68 per cent of the student-identified pulsars within the top 0.17 per cent of sorted candidates, 95 per cent within the top 0.34 per cent and 100 per cent within the top 3.7 per cent. This clearly demonstrates that PEACE significantly increases the pulsar identification rate by a factor of about 50 to 1000. To date, PEACE has been directly responsible for the discovery of 47 new pulsars, 5 of which are millisecond pulsars that may be useful for pulsar timing based gravitational-wave detection projects.

  6. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
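
    For a linear test model the idea is easy to sketch: the exact solution of y' = Ay is y(t+h) = exp(hA) y(t), so an Nth-order truncated-Taylor step applies the sum of (hA)^k/k! for k <= N to the state. The harmonic-oscillator example and the comparison against a textbook RK4 step below are assumptions for illustration, not the authors' general nonlinear implementation or their 12 test models.

```python
# Truncated-Taylor step for a linear test model y' = A y (harmonic oscillator),
# compared with a textbook RK4 step.  Illustration only, not the paper's code.
import numpy as np

A = np.array([[0.0, 1.0], [-1.0, 0.0]])        # state y = (x, v), so x'' = -x

def taylor_step(y, h, order=8):
    """y(t+h) ~= sum_{k<=order} (hA)^k / k! y(t), built term by term."""
    result, term = y.copy(), y.copy()
    for k in range(1, order + 1):
        term = (h / k) * (A @ term)
        result = result + term
    return result

def rk4_step(y, h):
    f = lambda z: A @ z
    k1 = f(y); k2 = f(y + 0.5 * h * k1); k3 = f(y + 0.5 * h * k2); k4 = f(y + h * k3)
    return y + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

h, steps = 0.1, 1000                            # integrate to t = 100
y_taylor = y_rk4 = np.array([1.0, 0.0])
for _ in range(steps):
    y_taylor, y_rk4 = taylor_step(y_taylor, h), rk4_step(y_rk4, h)
exact = np.array([np.cos(h * steps), -np.sin(h * steps)])
print("8th-order Taylor error:", np.linalg.norm(y_taylor - exact))
print("RK4 error             :", np.linalg.norm(y_rk4 - exact))
```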

  8. Upper-Lower Bounds Candidate Sets Searching Algorithm for Bayesian Network Structure Learning

    Directory of Open Access Journals (Sweden)

    Guangyi Liu

    2014-01-01

    Bayesian networks are an important theoretical model in the field of artificial intelligence and a powerful tool for processing uncertainty. Considering the slow convergence speed of current Bayesian network structure learning algorithms, a fast hybrid learning method is proposed in this paper. We start with a further analysis of the information provided by low-order conditional independence testing, and then give two methods for constructing a graph model of the network, which are theoretically proved to be upper and lower bounds of the structure space of the target network, so that candidate sets are obtained; a search-and-score algorithm is then run on the candidate sets to find the final structure of the network. Simulation results show that the algorithm proposed in this paper is more efficient than similar algorithms with the same learning precision.

  9. Clustering and Candidate Motif Detection in Exosomal miRNAs by Application of Machine Learning Algorithms.

    Science.gov (United States)

    Gaur, Pallavi; Chaturvedi, Anoop

    2017-07-22

    The clustering pattern and motifs give immense information about any biological data. An application of machine learning algorithms for clustering and candidate motif detection in miRNAs derived from exosomes is depicted in this paper. Recent progress in the field of exosome research, and more particularly regarding exosomal miRNAs, has given rise to much bioinformatics-based research. Information on the clustering pattern and candidate motifs in miRNAs of exosomal origin would help in analyzing existing, as well as newly discovered, miRNAs within exosomes. Along with obtaining the clustering pattern and candidate motifs in exosomal miRNAs, this work also elaborates on the usefulness of machine learning algorithms that can be efficiently used and executed on various programming languages/platforms. Data were clustered and sequence candidate motifs were detected successfully. The results were compared and validated with available web tools such as 'BLASTN' and the 'MEME suite'. The machine learning algorithms for the aforementioned objectives were applied successfully. This work elaborates on the utility of machine learning algorithms and language platforms for the tasks of clustering and candidate motif detection in exosomal miRNAs. With this information, deeper insight would be gained for the analysis of newly discovered miRNAs in exosomes, which are considered to be circulating biomarkers. In addition, the execution of machine learning algorithms on various language platforms gives users more flexibility to try multiple iterations according to their requirements. This approach can be applied to other biological data-mining tasks as well.

  10. Application of a kernel-based online learning algorithm to the classification of nodule candidates in computer-aided detection of CT lung nodules

    International Nuclear Information System (INIS)

    Matsumoto, S.; Ohno, Y.; Takenaka, D.; Sugimura, K.; Yamagata, H.

    2007-01-01

    Classification of nodule candidates in computer-aided detection (CAD) of lung nodules in CT images was addressed by constructing a nonlinear discriminant function using a kernel-based learning algorithm called the kernel recursive least-squares (KRLS) algorithm. Using the nodule candidates derived from the processing by a CAD scheme of 100 CT datasets containing 253 non-calcified nodules of 3 mm or larger, as determined by the consensus of two thoracic radiologists, the following trial was carried out 100 times: by randomly selecting 50 datasets for training, a nonlinear discriminant function was obtained using the nodule candidates in the training datasets and tested with the remaining candidates; for comparison, a rule-based classification was tested in a similar manner. At about 5 false positives per case, the nonlinear classification method showed an improved sensitivity of 80% (mean over the 100 trials) compared with 74% for the rule-based method. (orig.)

  11. Dynamic programming algorithms for biological sequence comparison.

    Science.gov (United States)

    Pearson, W R; Miller, W

    1992-01-01

    Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N2)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N2) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
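
    As a concrete illustration of the O(N^2)-time, O(N)-space scoring described above, the sketch below computes a global alignment score with the linear gap penalty g = rk using only two rows of the dynamic programming matrix. The match/mismatch/gap values are placeholder assumptions, not the scoring parameters of FASTA, BLAST, or RDF2, and no traceback (alignment recovery) is performed.

```python
# Global alignment *score* in O(N*M) time and O(min(N, M)) space, with the
# linear gap penalty g = r*k discussed above.  Placeholder scoring values.
def global_score(a, b, match=1, mismatch=-1, r=-2):
    if len(b) > len(a):
        a, b = b, a                                # keep the shorter sequence as the row
    prev = [r * j for j in range(len(b) + 1)]      # row 0: b[:j] aligned against gaps
    for i in range(1, len(a) + 1):
        curr = [r * i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(prev[j - 1] + sub,       # substitution / match
                          prev[j] + r,             # gap in b
                          curr[j - 1] + r)         # gap in a
        prev = curr
    return prev[-1]

print(global_score("GATTACA", "GCATGCU"))          # toy DNA-like example
```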

  12. Line Balancing Using Largest Candidate Rule Algorithm In A Garment Industry: A Case Study

    Directory of Open Access Journals (Sweden)

    V. P.Jaganathan

    2014-12-01

    The emergence of fast changes in fashion has given rise to the need to shorten production cycle times in the garment industry. As effective usage of resources has a significant effect on the productivity and efficiency of production operations, garment manufacturers are urged to utilize their resources effectively in order to meet dynamic customer demand. This paper focuses specifically on line balancing and layout modification. The aim of assembly line balancing in sewing lines is to assign tasks to the workstations, so that the machines of each workstation can perform the assigned tasks with a balanced loading. The Largest Candidate Rule (LCR) algorithm has been deployed in this paper.
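
    A minimal sketch of the Largest Candidate Rule is given below: tasks are sorted by task time in descending order and assigned, largest first, to the current workstation as long as the cycle time is not exceeded. The task times and cycle time are invented placeholders, and precedence constraints, which a real sewing-line balance must respect, are deliberately omitted.

```python
# Largest Candidate Rule sketch: sort tasks by time (largest first) and fill
# each workstation up to the cycle time.  Precedence constraints are omitted
# and the task times below are made-up placeholders.
def largest_candidate_rule(task_times, cycle_time):
    assert max(task_times.values()) <= cycle_time, "every task must fit in one cycle"
    unassigned = dict(task_times)
    stations = []
    while unassigned:
        load, station = 0.0, []
        # iterate over a sorted snapshot; assign whatever still fits this station
        for name, t in sorted(unassigned.items(), key=lambda kv: kv[1], reverse=True):
            if load + t <= cycle_time:
                station.append(name)
                load += t
                del unassigned[name]
        stations.append((station, load))
    return stations

times = {"cut": 0.7, "sew_front": 1.0, "sew_back": 0.9, "attach_sleeve": 0.6,
         "hem": 0.4, "button": 0.3, "press": 0.5, "inspect": 0.2}
for i, (tasks, load) in enumerate(largest_candidate_rule(times, cycle_time=1.2), start=1):
    print(f"Station {i}: {tasks} (load {load:.1f} min)")
```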

  13. PEACE: pulsar evaluation algorithm for candidate extraction - a software package for post-analysis processing of pulsar survey candidates

    NARCIS (Netherlands)

    Lee, K.J.; Stovall, K.; Jenet, F.A.; Martinez, J.; Dartez, L.P.; Mata, A.; Lunsford, G.; Cohen, S.; Biwer, C.M.; Rohr, M.; Flanigan, J.; Walker, A.; Banaszak, S.; Allen, B.; Barr, E.D.; Bhat, N.D.R.; Bogdanov, S.; Brazier, A.; Camilo, F.; Champion, D.J.; Chatterjee, S.; Cordes, J.; Crawford, F.; Deneva, J.; Desvignes, G.; Ferdman, R.D.; Freire, P.; Hessels, J.W.T.; Karuppusamy, R.; Kaspi, V.M.; Knispel, B.; Kramer, M.; Lazarus, P.; Lynch, R.; Lyne, A.; McLaughlin, M.; Ransom, S.; Scholz, P.; Siemens, X.; Spitler, L.; Stairs, I.; Tan, M.; van Leeuwen, J.; Zhu, W.W.

    2013-01-01

    Modern radio pulsar surveys produce a large volume of prospective candidates, the majority of which are polluted by human-created radio frequency interference or other forms of noise. Typically, large numbers of candidates need to be visually inspected in order to determine if they are real pulsars.

  14. Trust Based Algorithm for Candidate Node Selection in Hybrid MANET-DTN

    Directory of Open Access Journals (Sweden)

    Jan Papaj

    2014-01-01

    The hybrid MANET-DTN is a mobile network that enables transport of data between groups of disconnected mobile nodes. The network provides the benefits of Mobile Ad-Hoc Networks (MANET) and Delay Tolerant Networks (DTN). The main problem of the MANET occurs if the communication path is broken or disconnected for some short time period. On the other hand, DTN allows sending data in a disconnected environment thanks to its higher tolerance to delay. The hybrid MANET-DTN provides an optimal solution for transporting information in emergency situations. Moreover, security is a critical factor because the data are transported by mobile devices. In this paper, we investigate the issue of secure candidate node selection for transportation of data in a disconnected environment for the hybrid MANET-DTN. To achieve secure selection of reliable mobile nodes, a trust algorithm is introduced. The algorithm enables the selection of reliable nodes based on collected routing information. This algorithm is implemented in the OPNET Modeler simulator.

  15. Tradeoffs Between Branch Mispredictions and Comparisons for Sorting Algorithms

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Moruz, Gabriel

    2005-01-01

    Branch mispredictions are an important factor affecting running time in practice. In this paper we consider tradeoffs between the number of branch mispredictions and the number of comparisons for sorting algorithms in the comparison model. We prove that a sorting algorithm using O(dn log n) comparisons performs Omega(n log_d n) branch mispredictions. We show that Multiway MergeSort achieves this tradeoff by adopting a multiway merger with a low number of branch mispredictions. For adaptive sorting algorithms we similarly obtain that an algorithm performing O(dn(1+log(1+Inv/n))) comparisons must perform Omega(n log_d (1+Inv/n)) branch mispredictions, where Inv is the number of inversions in the input. This tradeoff can be achieved by GenericSort by Estivill-Castro and Wood by adopting a multiway division protocol and a multiway merging algorithm with a low number of branch mispredictions.

  16. Sorting on STAR. [CDC computer algorithm timing comparison

    Science.gov (United States)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.
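
    To make the complexity contrast concrete, the sketch below implements a Batcher-style bitonic sorting network, one of Batcher's 1968 constructions, whose compare-exchange count grows as N(log N)^2; the library sort serves as the N log N comparison-sort baseline. The power-of-two input length is a requirement of this simple formulation, and the code is of course a plain Python illustration, not the STAR vector implementation.

```python
# Batcher-style bitonic sorting network (N(log N)^2 compare-exchanges).
# Requires a power-of-two length; plain Python illustration only.
def bitonic_sort(a, ascending=True):
    if len(a) <= 1:
        return list(a)
    half = len(a) // 2
    left = bitonic_sort(a[:half], True)          # sort first half ascending
    right = bitonic_sort(a[half:], False)        # and second half descending -> bitonic
    return _bitonic_merge(left + right, ascending)

def _bitonic_merge(a, ascending):
    if len(a) == 1:
        return a
    half = len(a) // 2
    a = list(a)
    for i in range(half):                        # one compare-exchange stage
        if (a[i] > a[i + half]) == ascending:
            a[i], a[i + half] = a[i + half], a[i]
    return _bitonic_merge(a[:half], ascending) + _bitonic_merge(a[half:], ascending)

data = [7, 3, 6, 0, 5, 1, 4, 2]
print(bitonic_sort(data))                        # [0, 1, 2, 3, 4, 5, 6, 7]
print(sorted(data))                              # library comparison sort as a baseline
```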

  17. The role of cardiovascular magnetic resonance in candidates for Fontan operation: Proposal of a new Algorithm

    Directory of Open Access Journals (Sweden)

    Ait-Ali Lamia

    2011-11-01

    Background: To propose a new diagnostic algorithm for Fontan candidates and identify those who can skip cardiac catheterization (CC). Methods: Forty-four candidates for Fontan (median age 4.8 years, range 2-29 years) were prospectively evaluated by trans-thoracic echocardiography (TTE), cardiovascular magnetic resonance (CMR) and CC. Before CC, according to clinical, echo and CMR findings, patients were divided into two groups: Group I comprised 18 patients deemed suitable for Fontan without requiring CC; Group II comprised 26 patients indicated for CC either in order to obtain more details or for interventional procedures. Results: In Group I ("CC not required") no unexpected new information affecting surgical planning was provided by CC. Conversely, in Group II new information was provided by CC in three patients (0 vs 11.5%, p = 0.35) and in six an interventional procedure was performed. During CC, minor complications occurred in one patient from Group I and in three from Group II (6 vs 14%, p = 0.7). Radiation dose-area product was similar in the two groups (median 20 Gy cm2, range 5-40 vs 26.5 Gy cm2, range 9-270, p = 0.37). All 18 Group I patients and 19 Group II patients underwent a total cavo-pulmonary anastomosis; of the remaining seven Group II patients, four were excluded from Fontan, two are awaiting Fontan, and one refused the intervention. Conclusion: In this paper we propose a new diagnostic algorithm in a pre-Fontan setting. An accurate non-invasive evaluation comprising TTE and CMR could select patients who can skip CC.

  18. Comparison of genetic algorithms with conjugate gradient methods

    Science.gov (United States)

    Bosworth, J. L.; Foo, N. Y.; Zeigler, B. P.

    1972-01-01

    Genetic algorithms for mathematical function optimization are modeled on search strategies employed in natural adaptation. Comparisons of genetic algorithms with conjugate gradient methods, which were made on an IBM 1800 digital computer, show that genetic algorithms display superior performance over gradient methods for functions which are poorly behaved mathematically, for multimodal functions, and for functions obscured by additive random noise. Genetic methods offer performance comparable to gradient methods for many of the standard functions.

  19. Comparison of greedy algorithms for α-decision tree construction

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2011-01-01

    A comparison among different heuristics used by greedy algorithms that construct approximate decision trees (α-decision trees) is presented. The comparison is conducted using decision tables based on 24 data sets from the UCI Machine Learning Repository [2]. Complexity of decision trees is estimated relative to several cost functions: depth, average depth, number of nodes, number of nonterminal nodes, and number of terminal nodes. Costs of trees built by greedy algorithms are compared with minimum costs calculated by an algorithm based on dynamic programming. The results of the experiments assign to each cost function a set of potentially good heuristics that minimize it. © 2011 Springer-Verlag.

  20. A Modified Image Comparison Algorithm Using Histogram Features

    OpenAIRE

    Al-Oraiqat, Anas M.; Kostyukova, Natalya S.

    2018-01-01

    This article discusses the problem of color image content comparison. In particular, methods of image content comparison are analyzed, restrictions of the color histogram are described, and a modified method of image content comparison is proposed. This method uses color histograms and considers color locations. Testing and analysis of the base and modified algorithms are performed. The modified method shows 97% average precision for a collection containing about 700 images without loss of the adv...
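
    The general idea of combining color histograms with color location can be sketched as below: the image is divided into a grid of blocks, a normalized color histogram is computed per block, and per-block histogram-intersection similarities are averaged. The grid size, bin count, and intersection similarity are assumptions for illustration, not the parameters or exact measure of the modified method.

```python
# Sketch of "colour histograms + colour location": per-block histograms
# compared by histogram intersection and averaged.  Grid size, bin count and
# the similarity choice are illustrative assumptions only.
import numpy as np

def block_histograms(img, grid=4, bins=8):
    """img: HxWx3 array with values in [0, 256). Returns grid*grid normalized histograms."""
    h, w, _ = img.shape
    feats = []
    for by in range(grid):
        for bx in range(grid):
            block = img[by * h // grid:(by + 1) * h // grid,
                        bx * w // grid:(bx + 1) * w // grid]
            hist, _ = np.histogramdd(block.reshape(-1, 3),
                                     bins=(bins, bins, bins), range=[(0, 256)] * 3)
            feats.append(hist.ravel() / max(hist.sum(), 1))
    return np.array(feats)

def image_similarity(img_a, img_b, grid=4, bins=8):
    """Mean per-block histogram-intersection similarity (1.0 = identical)."""
    ha = block_histograms(img_a, grid, bins)
    hb = block_histograms(img_b, grid, bins)
    return float(np.minimum(ha, hb).sum(axis=1).mean())

rng = np.random.default_rng(1)
a = rng.integers(0, 256, size=(64, 64, 3))
b = rng.integers(0, 256, size=(64, 64, 3))
print(image_similarity(a, a), image_similarity(a, b))   # 1.0 for identical images
```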

  1. Algorithmic parameterization of mixed treatment comparisons

    NARCIS (Netherlands)

    van Valkenhoef, Gert; Tervonen, Tommi; de Brock, Bert; Hillege, Hans

    Mixed Treatment Comparisons (MTCs) enable the simultaneous meta-analysis (data pooling) of networks of clinical trials comparing ≥2 alternative treatments. Inconsistency models are critical in MTC to assess the overall consistency between evidence sources. Only in the absence

  2. Algorithmic parameterization of mixed treatment comparisons

    NARCIS (Netherlands)

    G. van Valkenhoef (Gert); T. Tervonen (Tommi); B. de Brock (Bert)

    2012-01-01

    Mixed Treatment Comparisons (MTCs) enable the simultaneous meta-analysis (data pooling) of networks of clinical trials comparing ≥2 alternative treatments. Inconsistency models are critical in MTC to assess the overall consistency between evidence sources. Only in the absence of

  3. Does a Least-Preferred Candidate Win a Seat? A Comparison of Three Electoral Systems

    Directory of Open Access Journals (Sweden)

    Yoichi Hizen

    2015-01-01

    In this paper, the differences between two variations of proportional representation (PR), open-list PR and closed-list PR, are analyzed in terms of their ability to accurately reflect voter preference. The single nontransferable vote (SNTV) is also included in the comparison as a benchmark. We construct a model of voting equilibria with a candidate who is least preferred by voters, in the sense that replacing the least-preferred candidate in the set of winners with any loser is Pareto improving, and our focus is on whether the least-preferred candidate wins under each electoral system. We demonstrate that the least-preferred candidate never wins under the SNTV, but can win under open-list PR, although this is less likely than winning under closed-list PR.

  4. Reranking candidate gene models with cross-species comparison for improved gene prediction

    Directory of Open Access Journals (Sweden)

    Pereira Fernando CN

    2008-10-01

    Background: Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results: We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion: Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models.

  5. Comparison of machine learning algorithms for detecting coral reef

    Directory of Open Access Journals (Sweden)

    Eduardo Tusa

    2014-09-01

    (Received: 2014/07/31 - Accepted: 2014/09/23) This work focuses on developing a fast coral reef detector for an autonomous underwater vehicle (AUV). Fast detection secures the stabilization of the AUV with respect to an area of reef as quickly as possible and prevents devastating collisions. We use the algorithm of Purser et al. (2009) because of its precision. This detector has two parts: feature extraction, which uses Gabor wavelet filters, and feature classification, which uses machine learning based on neural networks. Due to the extensive running time of the neural networks, we exchange them for a classification algorithm based on decision trees. We use a database of 621 images of coral reef in Belize (110 images for training and 511 images for testing). We implement the bank of Gabor wavelet filters using C++ and the OpenCV library. We compare the accuracy and running time of 9 machine learning algorithms, which resulted in the selection of the decision tree algorithm. Our coral detector runs in 70 ms, compared with the 22 s taken by the algorithm of Purser et al. (2009).
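
    The two-stage pipeline described above (Gabor wavelet features followed by a decision tree classifier) can be sketched with OpenCV and scikit-learn as below. The kernel parameters, patch size, and the random placeholder patches and labels are assumptions for illustration; this is not the authors' C++ implementation or their Belize dataset.

```python
# Sketch of the two-stage pipeline: Gabor-filter features + Decision Tree,
# using OpenCV and scikit-learn on random placeholder patches (not the
# authors' C++ code or coral dataset).
import cv2
import numpy as np
from sklearn.tree import DecisionTreeClassifier

def gabor_features(patch, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and std of the patch response to a small bank of Gabor kernels."""
    feats = []
    for theta in thetas:
        kernel = cv2.getGaborKernel(ksize=(15, 15), sigma=3.0, theta=theta,
                                    lambd=8.0, gamma=0.5, psi=0, ktype=cv2.CV_32F)
        response = cv2.filter2D(patch.astype(np.float32), cv2.CV_32F, kernel)
        feats += [float(response.mean()), float(response.std())]
    return feats

rng = np.random.default_rng(0)
patches = rng.random((200, 32, 32), dtype=np.float32)   # placeholder image patches
labels = rng.integers(0, 2, size=200)                   # placeholder coral / no-coral labels
X = np.array([gabor_features(p) for p in patches])

clf = DecisionTreeClassifier(max_depth=5, random_state=0).fit(X[:150], labels[:150])
print("held-out accuracy:", clf.score(X[150:], labels[150:]))
```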

  6. Comparison of evolutionary algorithms in gene regulatory network model inference.

    LENUS (Irish Health Repository)

    2010-01-01

    ABSTRACT: BACKGROUND: The evolution of high throughput technologies that measure gene expression levels has created a data base for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods of discovering qualitative causal relationships between genes with high accuracy from microarray data exist, but large scale quantitative analysis on real biological datasets cannot be performed, to date, as existing approaches are not suitable for real microarray data which are noisy and insufficient. RESULTS: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and offer a comprehensive comparison of approaches, under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. CONCLUSIONS: Presented is a comparison framework for assessment of evolutionary algorithms, used to infer gene regulatory networks. Promising methods are identified and a platform for development of appropriate model formalisms is established.

  7. Selection of candidate plus phenotypes of Jatropha curcas L. using method of paired comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, D.K. [Silviculture Division, Arid Forest Research Institute, P.O. Krishi Mandi, New Pali Road, Jodhpur 342005, Rajasthan (India)

    2009-03-15

    Jatropha curcas L. (Euphorbiaceae) is an oil bearing species with multiple uses and considerable potential as a biodiesel crop. The present communication deals with the method of selecting plus phenotypes of J. curcas for exploiting genetic variability for further improvement. Candidate plus tree selection is the first and most important stage in any tree improvement programme. The selection of candidate plus plants (CPPs) is based upon various important attributes associated with the species and their relative ranking. Relative preference between various traits and scoring for each trait has been worked out by using the method of paired comparisons for the selection of CPP in J. curcas L. The most important ones are seed and oil yields. (author)

  8. COMPARISON OF VOLUMETRIC REGISTRATION ALGORITHMS FOR TENSOR-BASED MORPHOMETRY.

    Science.gov (United States)

    Villalon, Julio; Joshi, Anand A; Toga, Arthur W; Thompson, Paul M

    2011-01-01

    Nonlinear registration of brain MRI scans is often used to quantify morphological differences associated with disease or genetic factors. Recently, surface-guided fully 3D volumetric registrations have been developed that combine intensity-guided volume registrations with cortical surface constraints. In this paper, we compare one such algorithm to two popular high-dimensional volumetric registration methods: large-deformation viscous fluid registration, formulated in a Riemannian framework, and the diffeomorphic "Demons" algorithm. We performed an objective morphometric comparison, by using a large MRI dataset from 340 young adult twin subjects to examine 3D patterns of correlations in anatomical volumes. Surface-constrained volume registration gave greater effect sizes for detecting morphometric associations near the cortex, while the other two approaches gave greater effect sizes subcortically. These findings suggest novel ways to combine the advantages of multiple methods in the future.

  9. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = (3/2)n - 2 is the solution to the above ...

  10. Comparison of evolutionary computation algorithms for solving bi ...

    Indian Academy of Sciences (India)

    failure probability. Multiobjective Evolutionary Computation algorithms (MOEAs) are well-suited for Multiobjective task scheduling on heterogeneous environment. The two Multi-Objective Evolutionary Algorithms such as Multiobjective Genetic Algorithm (MOGA) and Multiobjective Evolutionary Programming (MOEP) with.

  11. Comparison analysis for classification algorithm in data mining and the study of model use

    Science.gov (United States)

    Chen, Junde; Zhang, Defu

    2018-04-01

    As a key technique in data mining, classification algorithms have received extensive attention. Through experiments with classification algorithms on UCI data sets, we give a comparative analysis method for the different algorithms, in which statistical tests are used. Beyond that, an adaptive diagnosis model for the prevention of electricity stealing and leakage is given as a specific case in the paper.

  12. Comparison of Pilot Symbol Embedded Channel Estimation Algorithms

    Directory of Open Access Journals (Sweden)

    P. Kadlec

    2009-12-01

    In the paper, algorithms for pilot-symbol-embedded channel estimation are compared. Attention is turned to Least Squares (LS) channel estimation and the Sliding Correlator (SC) algorithm. Both algorithms are implemented in Matlab to estimate the Channel Impulse Response (CIR) of a channel exhibiting multi-path propagation. The algorithms are compared from the viewpoint of computational demands and of the influence of Additive White Gaussian Noise (AWGN), the embedded pilot symbol, and the computed CIR on the estimation error.
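
    The contrast between the two estimators can be sketched as below: the LS estimate solves a small linear system built from the known pilot symbols, while the sliding correlator simply cross-correlates the received signal with the pilot. The toy BPSK pilot, channel taps, and noise level are assumptions for illustration, and the sketch is in Python rather than the Matlab used in the paper.

```python
# LS vs sliding-correlator CIR estimation on a toy multipath channel.
# Pilot sequence, channel taps and noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
cir = np.array([1.0, 0.0, 0.5, 0.0, -0.25])            # "true" channel impulse response
pilot = rng.choice([-1.0, 1.0], size=64)               # known BPSK pilot symbols
rx = np.convolve(pilot, cir)[:len(pilot)] + 0.05 * rng.standard_normal(len(pilot))

L = len(cir)
# Convolution matrix P so that rx ~= P @ h; column k is the pilot delayed by k samples.
P = np.column_stack([np.concatenate([np.zeros(k), pilot[:len(pilot) - k]]) for k in range(L)])

h_ls = np.linalg.lstsq(P, rx, rcond=None)[0]                                   # LS estimate
h_sc = np.correlate(rx, pilot, mode="full")[len(pilot) - 1:len(pilot) - 1 + L] / len(pilot)

print("true CIR   :", cir)
print("LS estimate:", np.round(h_ls, 3))
print("SC estimate:", np.round(h_sc, 3))
```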

  13. A comparison of performance measures for online algorithms

    DEFF Research Database (Denmark)

    Boyar, Joan; Irani, Sandy; Larsen, Kim Skak

    2009-01-01

    is to balance greediness and adaptability. We examine how these measures evaluate the Greedy Algorithm and Lazy Double Coverage, commonly studied algorithms in the context of server problems. We examine Competitive Analysis, the Max/Max Ratio, the Random Order Ratio, Bijective Analysis and Relative Worst Order Analysis and determine how they compare the two algorithms. We find that by the Max/Max Ratio and Bijective Analysis, Greedy is the better algorithm. Under the other measures Lazy Double Coverage is better, though Relative Worst Order Analysis indicates that Greedy is sometimes better. Our results also provide the first proof of optimality of an algorithm under Relative Worst Order Analysis.

  14. Comparison of two global digital algorithms for Minkowski tensor estimation

    DEFF Research Database (Denmark)

    The geometry of real world objects can be described by Minkowski tensors. Algorithms have been suggested to approximate Minkowski tensors if only a binary image of the object is available. This paper presents implementations of two such algorithms. The theoretical convergence properties are confirmed by simulations on test sets, and recommendations for input arguments of the algorithms are given. For increasing resolutions, we obtain more accurate estimators for the Minkowski tensors. Digitisations of more complicated objects are shown to require higher resolutions.

  15. Performance Comparison of Widely-Used Maximum Power Point Tracker Algorithms under Real Environmental Conditions

    Directory of Open Access Journals (Sweden)

    DURUSU, A.

    2014-08-01

    Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they make the solar panels operate at the maximum power point (MPP) whatever the changes in environmental conditions are. For this reason, they take an important place in increasing PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of MPPT algorithms have been proposed in the literature. Comparisons of MPPT algorithms in the literature are made with sun-simulator-based test systems under laboratory conditions for short durations. In this study, however, the performances of the four most commonly used MPPT algorithms are compared under real environmental conditions for longer periods. A dual identical experimental setup is designed to compare two of the considered MPPT algorithms at a time in a synchronized manner. As a result of this study, the ranking among these algorithms is presented, and the results show that the Incremental Conductance (IC) algorithm gives the best performance.
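
    The Incremental Conductance rule singled out above can be sketched as follows: at the MPP dP/dV = 0, which is equivalent to dI/dV = -I/V, so the operating voltage is stepped up when the incremental conductance exceeds the negative instantaneous conductance and stepped down otherwise. The toy PV curve, step size, and starting point below are assumptions for illustration, not the panels or converters used in the experiments.

```python
# Incremental Conductance (IC) MPPT sketch on a toy PV curve.
# At the MPP dP/dV = 0  <=>  dI/dV = -I/V; step the voltage toward that point.
def pv_current(v, i_sc=8.0, v_oc=40.0):
    """Toy I-V curve: near-constant current that collapses towards open circuit."""
    return max(i_sc * (1.0 - (v / v_oc) ** 12), 0.0)

def ic_step(v, i, v_prev, i_prev, step=0.2):
    dv, di = v - v_prev, i - i_prev
    if dv == 0.0:
        return v + (step if di > 0 else -step if di < 0 else 0.0)
    if di / dv > -i / v:        # left of the MPP -> raise the voltage
        return v + step
    if di / dv < -i / v:        # right of the MPP -> lower the voltage
        return v - step
    return v                    # at the MPP -> hold

v_prev, i_prev = 20.0, pv_current(20.0)
v = 20.2
for _ in range(200):
    i = pv_current(v)
    v, v_prev, i_prev = ic_step(v, i, v_prev, i_prev), v, i
print(f"operating point settled near V = {v:.1f} V, P = {v * pv_current(v):.1f} W")
```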

  16. Searching for an oscillating massive scalar field as a dark matter candidate using atomic hyperfine frequency comparisons

    OpenAIRE

    Hees, A.; Guéna, J.; Abgrall, M.; Bize, S.; Wolf, P.

    2016-01-01

    We use six years of accurate hyperfine frequency comparison data of the dual rubidium and caesium cold atom fountain FO2 at LNE-SYRTE to search for a massive scalar dark matter candidate. Such a scalar field can induce harmonic variations of the fine structure constant, of the mass of fermions and of the quantum chromodynamic mass scale, which will directly impact the rubidium/caesium hyperfine transition frequency ratio. We find no signal consistent with a scalar dark matter candidate but pr...

  17. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    Science.gov (United States)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that consists of the minimization of a cost function; the minimization is achieved by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, our algorithm is a parallel Jacobi-type scheme with alternated minimizations. This strategy is known as the chessboard type, where red pixels can be updated in parallel in the same iteration since they are independent. Similarly, black pixels can be updated in parallel in an alternating iteration. We present parallel implementations of our algorithm for different parallel multicore architectures such as CPU-multicore, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, we obtain superior performance of our parallel algorithm when compared with the original serial version. In addition, we present a detailed performance comparison of the developed parallel versions.
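
    The chessboard (red-black) update pattern can be sketched with plain numpy masks as below, here applied to a generic Laplace-style relaxation rather than the accumulation-of-residual-maps cost function: all red pixels are independent and can be updated together, then all black pixels. This is an illustration of the parallelization pattern only; it is not the authors' cost function, nor their CPU-multicore, Xeon Phi, or GPU implementations.

```python
# Chessboard (red-black) relaxation sketch with numpy masks: red pixels are
# mutually independent and updated together, then black pixels.  The Laplace
# smoothing used here is a stand-in for the paper's actual cost function.
import numpy as np

def red_black_relax(u, fixed_mask, iters=2000):
    """Relax u toward the average of its 4 neighbours, keeping fixed_mask pixels unchanged."""
    yy, xx = np.indices(u.shape)
    red = ((yy + xx) % 2 == 0) & ~fixed_mask
    black = ((yy + xx) % 2 == 1) & ~fixed_mask
    for _ in range(iters):
        for mask in (red, black):
            avg = 0.25 * (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                          np.roll(u, 1, 1) + np.roll(u, -1, 1))
            u[mask] = avg[mask]
    return u

u = np.zeros((32, 32))
fixed = np.zeros_like(u, dtype=bool)
fixed[0, :] = True                            # hold the top row at 0
fixed[-1, :] = True                           # and the bottom row at 1
u[-1, :] = 1.0
print(red_black_relax(u, fixed)[16, 16])      # ~0.52: linear profile between held rows
```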

  18. Comparison of two (geometric) algorithms for auto OMA

    DEFF Research Database (Denmark)

    Juul, Martin; Olsen, Peter; Balling, Ole

    2018-01-01

    parameters. The two algorithms are compared and illustrated on simulated data. Different choices of distance measures are discussed and evaluated. It is illustrated how a simple distance measure outperforms traditional distance measures from other Auto OMA algorithms. Traditional measures are unable...

  19. Comparison of candidate materials for a synthetic osteo-odonto keratoprosthesis device.

    Science.gov (United States)

    Tan, Xiao Wei; Perera, A Promoda P; Tan, Anna; Tan, Donald; Khor, K A; Beuerman, Roger W; Mehta, Jodhbir S

    2011-01-05

    Osteo-odonto keratoprosthesis is one of the most successful forms of keratoprosthesis surgery for end-stage corneal and ocular surface disease. There is a lack of detailed comparison studies on the biocompatibilities of different materials used in keratoprosthesis. The aim of this investigation was to compare synthetic bioinert materials used for keratoprosthesis surgery with hydroxyapatite (HA) as a reference. Test materials were sintered titanium oxide (TiO(2)), aluminum oxide (Al(2)O(3)), and yttria-stabilized zirconia (YSZ) with density >95%. Bacterial adhesion on the substrates was evaluated using scanning electron microscopy and the spread plate method. Surface properties of the implant discs were scanned using optical microscopy. Human keratocyte attachment and proliferation rates were assessed by cell counting and MTT assay at different time points. Morphologic analysis and immunoblotting were used to evaluate focal adhesion formation, whereas cell adhesion force was measured with a multimode atomic force microscope. The authors found that bacterial adhesion on the TiO(2), Al(2)O(3), and YSZ surfaces were lower than that on HA substrates. TiO(2) significantly promoted keratocyte proliferation and viability compared with HA, Al(2)O(3,) and YSZ. Immunofluorescent imaging analyses, immunoblotting, and atomic force microscope measurement revealed that TiO(2) surfaces enhanced cell spreading and cell adhesion compared with HA and Al(2)O(3). TiO(2) is the most suitable replacement candidate for use as skirt material because it enhanced cell functions and reduced bacterial adhesion. This would, in turn, enhance tissue integration and reduce device failure rates during keratoprosthesis surgery.

  20. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    International Nuclear Information System (INIS)

    Cheng Sheng-Yi; Liu Wen-Jin; Chen Shan-Qiu; Dong Li-Zhi; Yang Ping; Xu Bing

    2015-01-01

    Among all kinds of wavefront control algorithms in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through pre-measuring the relational matrix between deformable mirror actuators and Hartmann wavefront sensor with perfect real-time characteristic and stability. However, with increasing the number of sub-apertures in wavefront sensor and deformable mirror actuators of adaptive optics systems, the matrix operation in direct gradient algorithm takes too much time, which becomes a major factor influencing control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration arithmetic, which gains great advantage in calculation and storage. For an AO system with thousands of actuators, the computational complexity estimate is about O(n^2) ∼ O(n^3) for the direct gradient wavefront control algorithm, while the computational complexity estimate for the iterative wavefront control algorithm is about O(n) ∼ O(n^(3/2)), in which n is the number of actuators of the AO system. And the more the numbers of sub-apertures and deformable mirror actuators, the more significant advantage the iterative wavefront control algorithm exhibits. (paper)
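
    The difference between the two approaches can be caricatured with a toy least-squares reconstruction problem: the direct method pre-computes a reconstructor matrix (here a pseudo-inverse) and applies it as a dense matrix-vector product, whereas the iterative method solves the same normal equations without ever forming or storing the reconstructor. The random interaction matrix and sizes below are assumptions standing in for a real deformable-mirror/Hartmann-sensor pair.

```python
# Toy contrast of direct vs iterative wavefront reconstruction.  The random
# "interaction matrix" is a stand-in for a real DM / Hartmann sensor pair.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(0)
n_slopes, n_actuators = 800, 400
D = rng.standard_normal((n_slopes, n_actuators))     # interaction matrix
slopes = rng.standard_normal(n_slopes)               # measured wavefront slopes

# Direct: reconstructor R = pinv(D) is computed once, then every frame costs one
# dense matrix-vector product.
R = np.linalg.pinv(D)
v_direct = R @ slopes

# Iterative: solve the normal equations (D^T D) v = D^T s each frame, never
# forming or storing R explicitly.
v_iterative, info = cg(D.T @ D, D.T @ slopes)
print("CG converged:", info == 0,
      "| max difference vs direct solution:", float(np.abs(v_direct - v_iterative).max()))
```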

  1. A Comparison of the Life Satisfaction and Hopelessness Levels of Teacher Candidates in Turkey

    Science.gov (United States)

    Gencay, Selcuk; Gencay, Okkes Alpaslan

    2011-01-01

    This study aims to explore the level of hopelessness and life satisfaction of teacher candidates from the view points of gender and branch variables. With this aim, the "Beck Hopelessness Scale and Life Satisfaction Scale" has been applied to a total of 278 teacher candidates, of which 133 were females and 145 were males. According to…

  2. Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem

    Science.gov (United States)

    Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.

    2018-03-01

    The Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. Both algorithms have different advantages and disadvantages when applied to the optimization of the Model Integer Programming for Bus Timetabling Problem (MIPBTP), in which the optimal number of trips is sought subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iterations and program simplicity in finding the optimal solution.

  3. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    Science.gov (United States)

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods, at competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between the DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.

  4. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming ...

  5. Comparison of lithium and the eutectic lead lithium alloy, two candidate liquid metal breeder materials for self-cooled blankets

    International Nuclear Information System (INIS)

    Malang, S.; Mattas, R.

    1994-06-01

    Liquid metals are attractive candidates for both near-term and long-term fusion applications. The subjects of this comparison are the differences between the two candidate liquid metal breeder materials, Li and LiPb, for use in breeding blankets, in the areas of neutronics, magnetohydrodynamics, tritium control, compatibility with structural materials, heat extraction systems, safety, and the required R&D program. Both candidates appear to be promising for use in self-cooled breeding blankets, which have inherent simplicity with the liquid metal serving as both breeder and coolant. The remaining feasibility question for both breeder materials is the electrical insulation between the liquid metal and the duct walls. Different ceramic coatings are required for the two breeders, and their crucial issues, namely self-healing of insulator cracks and radiation-induced electrical degradation, are not yet demonstrated. Each liquid metal breeder has advantages and concerns associated with it, and further development is needed to resolve these concerns

  6. Comparison of tracking algorithms implemented in OpenCV

    Directory of Open Access Journals (Sweden)

    Janku Peter

    2016-01-01

    Computer vision is a very progressive and modern part of computer science. From a scientific point of view, theoretical aspects of computer vision algorithms prevail in many papers and publications. The underlying theory is really important, but on the other hand, the final implementation of an algorithm significantly affects its performance and robustness. For this reason, this paper tries to compare real implementations of tracking algorithms (one part of the computer vision problem), which can be found in the very popular OpenCV library. Moreover, the possibilities of optimization are discussed.

  7. A Comparison of Evolutionary Algorithms for Tracking Time-Varying Recursive Systems

    Directory of Open Access Journals (Sweden)

    White Michael S

    2003-01-01

    A comparison is made of the behaviour of some evolutionary algorithms in time-varying adaptive recursive filter systems. Simulations show that an algorithm including random immigrants outperforms a more conventional algorithm using the breeder genetic algorithm as the mutation operator when the time variation is discontinuous, but neither algorithm performs well when the time variation is rapid but smooth. To meet this deficit, a new hybrid algorithm which uses a hill climber as an additional genetic operator, applied for several steps at each generation, is introduced. A comparison is made of the effect of applying the hill climbing operator a few times to all members of the population or a larger number of times solely to the best individual; it is found that applying it to the whole population yields the better results, substantially improved compared with those obtained using earlier methods.

  8. Comparison of public peak detection algorithms for MALDI mass spectrometry data analysis.

    Science.gov (United States)

    Yang, Chao; He, Zengyou; Yu, Weichuan

    2009-01-06

    In mass spectrometry (MS) based proteomic data analysis, peak detection is an essential step for subsequent analysis. Recently, there has been significant progress in the development of various peak detection algorithms. However, neither a comprehensive survey nor an experimental comparison of these algorithms is yet available. The main objective of this paper is to provide such a survey and to compare the performance of single-spectrum-based peak detection methods. In general, a peak detection procedure can be decomposed into three consecutive parts: smoothing, baseline correction and peak finding. We first categorize existing peak detection algorithms according to the techniques used in the different phases. Such a categorization reveals the differences and similarities among existing peak detection algorithms. Then, we choose five typical peak detection algorithms to conduct a comprehensive experimental study using both simulated data and real MALDI MS data. The results of the comparison show that the continuous wavelet-based algorithm provides the best average performance.
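
    The three-part decomposition stated above (smoothing, baseline correction, peak finding) can be sketched on a synthetic spectrum as below. The Savitzky-Golay smoother, rolling-minimum baseline, and threshold-based peak finder, along with all window sizes and thresholds, are arbitrary assumptions for illustration; none of the five surveyed algorithms (including the wavelet-based one) is reproduced here.

```python
# Smoothing -> baseline correction -> peak finding on a synthetic spectrum.
# All components and parameters are illustrative; not one of the surveyed methods.
import numpy as np
from scipy.signal import savgol_filter, find_peaks

rng = np.random.default_rng(0)
mz = np.linspace(1000, 2000, 4000)
spectrum = (np.exp(-0.002 * (mz - 1000))                                   # decaying baseline
            + sum(np.exp(-0.5 * ((mz - c) / 1.5) ** 2) for c in (1200, 1450, 1700))
            + 0.02 * rng.standard_normal(mz.size))                          # noise

smoothed = savgol_filter(spectrum, window_length=21, polyorder=3)           # 1. smoothing
baseline = np.array([smoothed[max(0, i - 200):i + 200].min()                # 2. crude rolling-
                     for i in range(smoothed.size)])                        #    minimum baseline
corrected = smoothed - baseline
peaks, _ = find_peaks(corrected, height=0.3, distance=50)                   # 3. peak finding
print("detected peak positions (m/z):", np.round(mz[peaks], 1))
```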

  9. Advanced reconstruction algorithms for electron tomography: From comparison to combination

    Energy Technology Data Exchange (ETDEWEB)

    Goris, B. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Roelandts, T. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Batenburg, K.J. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1098XG Amsterdam (Netherlands); Heidari Mezerji, H. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Bals, S., E-mail: sara.bals@ua.ac.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)

    2013-04-15

    In this work, the simultaneous iterative reconstruction technique (SIRT), the total variation minimization (TVM) reconstruction technique and the discrete algebraic reconstruction technique (DART) for electron tomography are compared and their advantages and disadvantages are discussed. Furthermore, we describe how the result of a three-dimensional (3D) reconstruction based on TVM can provide objective information that is needed as the input for a DART reconstruction. This approach results in a tomographic reconstruction of which the segmentation is carried out in an objective manner. - Highlights: ► A comparative study between different reconstruction algorithms for tomography is performed. ► Reconstruction algorithms that use prior knowledge about the specimen give a superior result. ► One reconstruction algorithm can provide the prior knowledge for a second algorithm.
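
    Of the three techniques compared, the plain SIRT update is the easiest to sketch; the version below runs a standard row- and column-normalized iteration on a tiny random system, with no discreteness prior (as in DART) and no total-variation term (as in TVM). The random matrix is an assumption standing in for a real projection geometry.

```python
# Plain SIRT sketch: x_{k+1} = x_k + C A^T R (b - A x_k), with R and C the
# inverse row and column sums of A.  Tiny random system; no DART or TVM priors.
import numpy as np

def sirt(A, b, iterations=2000):
    row_sums = np.abs(A).sum(axis=1)
    col_sums = np.abs(A).sum(axis=0)
    R = np.where(row_sums > 0, 1.0 / row_sums, 0.0)
    C = np.where(col_sums > 0, 1.0 / col_sums, 0.0)
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        x = x + C * (A.T @ (R * (b - A @ x)))
    return x

rng = np.random.default_rng(0)
A = rng.random((60, 30))             # stand-in "projection matrix": 60 rays, 30 voxels
x_true = rng.random(30)
b = A @ x_true                       # noise-free synthetic projections
x_rec = sirt(A, b)
print("relative reconstruction error:",
      float(np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)))
```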

  10. Comparison of Greedy Algorithms for Decision Tree Optimization

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2013-01-01

    This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values

  11. Comparison of SeaWinds Backscatter Imaging Algorithms

    Science.gov (United States)

    Long, David G.

    2017-01-01

    This paper compares the performance and tradeoffs of various backscatter imaging algorithms for the SeaWinds scatterometer when multiple passes over a target are available. Reconstruction methods are compared with conventional gridding algorithms. In particular, the performance and tradeoffs in conventional ‘drop in the bucket’ (DIB) gridding at the intrinsic sensor resolution are compared to high-spatial-resolution imaging algorithms such as fine-resolution DIB and the scatterometer image reconstruction (SIR) that generate enhanced-resolution backscatter images. Various options for each algorithm are explored, including considering both linear and dB computation. The effects of sampling density and reconstruction quality versus time are explored. Both simulated and actual data results are considered. The results demonstrate the effectiveness of high-resolution reconstruction using SIR as well as its limitations and the limitations of DIB and fDIB. PMID:28828143
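
    Conventional drop-in-the-bucket (DIB) gridding is simple enough to sketch: every measurement is averaged into the grid cell that its centre falls in. The synthetic measurement locations and backscatter values below are assumptions for illustration; the SIR reconstruction itself, which is considerably more involved, is not shown.

```python
# Drop-in-the-bucket (DIB) gridding sketch: average measurements per grid cell.
# Synthetic measurements only; the SIR reconstruction is not shown here.
import numpy as np

def dib_grid(lats, lons, sigma0, lat_edges, lon_edges):
    """Average sigma0 per cell; cells with no measurements are NaN."""
    iy = np.digitize(lats, lat_edges) - 1
    ix = np.digitize(lons, lon_edges) - 1
    shape = (len(lat_edges) - 1, len(lon_edges) - 1)
    total, count = np.zeros(shape), np.zeros(shape)
    ok = (iy >= 0) & (iy < shape[0]) & (ix >= 0) & (ix < shape[1])
    np.add.at(total, (iy[ok], ix[ok]), sigma0[ok])
    np.add.at(count, (iy[ok], ix[ok]), 1.0)
    return np.divide(total, count, out=np.full(shape, np.nan), where=count > 0)

rng = np.random.default_rng(0)
lats = rng.uniform(40.0, 41.0, 5000)
lons = rng.uniform(10.0, 11.0, 5000)
sigma0 = -12.0 + 2.0 * (lats - 40.0) + 0.3 * rng.standard_normal(5000)   # synthetic dB values
grid = dib_grid(lats, lons, sigma0, np.linspace(40, 41, 21), np.linspace(10, 11, 21))
print(grid.shape, float(np.nanmean(grid)))                                # 20x20 cells
```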

  12. COMPARISON OF DIFFERENT SEGMENTATION ALGORITHMS FOR DERMOSCOPIC IMAGES

    Directory of Open Access Journals (Sweden)

    A.A. Haseena Thasneem

    2015-05-01

    This paper compares different algorithms for the segmentation of skin lesions in dermoscopic images. The basic segmentation algorithms compared are thresholding techniques (Global and Adaptive), region-based techniques (K-means, Fuzzy C-means, Expectation Maximization and Statistical Region Merging), contour models (Active Contour Model and Chan-Vese Model) and Spectral Clustering. Accuracy, sensitivity, specificity, border error, Hammoude distance, Hausdorff distance, MSE, PSNR and elapsed time metrics were used to evaluate the various segmentation techniques.

  13. Algorithm comparison for schedule optimization in MR fingerprinting.

    Science.gov (United States)

    Cohen, Ouri; Rosen, Matthew S

    2017-09-01

    In MR Fingerprinting, the flip angles and repetition times are chosen according to a pseudorandom schedule. In previous work, we have shown that maximizing the discrimination between different tissue types by optimizing the acquisition schedule allows reductions in the number of measurements required. The ideal optimization algorithm for this application remains unknown, however. In this work we examine several different optimization algorithms to determine the one best suited for optimizing MR Fingerprinting acquisition schedules. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Comparison of some evolutionary algorithms for optimization of the path synthesis problem

    Science.gov (United States)

    Grabski, Jakub Krzysztof; Walczak, Tomasz; Buśkiewicz, Jacek; Michałowska, Martyna

    2018-01-01

    The paper presents a comparison of the results obtained in mechanism synthesis by means of selected evolutionary algorithms. The optimization problem considered in the paper as an example is the dimensional synthesis of a path-generating four-bar mechanism. In order to solve this problem, three different artificial intelligence algorithms are employed in this study.

  15. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    Science.gov (United States)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among all kinds of wavefront control algorithms in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from the wavefront slopes through a pre-measured relational matrix between the deformable mirror actuators and the Hartmann wavefront sensor, and it offers excellent real-time performance and stability. However, as the numbers of wavefront sensor sub-apertures and deformable mirror actuators increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control performance of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration, giving a great advantage in computation and storage. For an AO system with thousands of actuators, the computational complexity is about O(n^2) to O(n^3) for the direct gradient wavefront control algorithm, while it is about O(n) to O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. The larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage exhibited by the iterative wavefront control algorithm. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).
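    The contrast between the two control schemes can be sketched as follows; here D is a hypothetical interaction (relational) matrix, s a measured slope vector, and the conjugate-gradient solve stands in generically for an iterative reconstructor, so this is an illustration under stated assumptions rather than the authors' algorithm.

```python
import numpy as np
from scipy.sparse.linalg import cg

# D: (n_slopes x n_actuators) interaction matrix between DM and wavefront sensor
# s: measured wavefront slopes for one frame -- both are placeholders.

def direct_reconstruct(D, s, rcond=1e-3):
    """Direct gradient control: v = R s with a precomputed pseudo-inverse R."""
    R = np.linalg.pinv(D, rcond=rcond)   # computed once, offline
    return R @ s                         # dense matrix-vector product each frame

def iterative_reconstruct(D, s, n_iter=30):
    """Iterative control: solve the normal equations D^T D v = D^T s with a few
    conjugate-gradient steps instead of forming a dense reconstructor.
    (In practice D^T D is applied matrix-free or as a sparse operator, which is
    what keeps the per-frame cost and storage low for thousands of actuators.)"""
    A = D.T @ D
    b = D.T @ s
    v, _ = cg(A, b, maxiter=n_iter)
    return v
```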

  16. A FIRST COMPARISON OF KEPLER PLANET CANDIDATES IN SINGLE AND MULTIPLE SYSTEMS

    International Nuclear Information System (INIS)

    Latham, David W.; Quinn, Samuel N.; Carter, Joshua A.; Holman, Matthew J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Howell, Steve B.; Batalha, Natalie M.; Brown, Timothy M.; Buchhave, Lars A.; Caldwell, Douglas A.; Christiansen, Jessie L.; Ciardi, David R.; Cochran, William D.; Dunham, Edward W.; Fabrycky, Daniel C.; Ford, Eric B.; Gautier, Thomas N. III; Gilliland, Ronald L.

    2011-01-01

    In this Letter, we present an overview of the rich population of systems with multiple candidate transiting planets found in the first four months of Kepler data. The census of multiples includes 115 targets that show two candidate planets, 45 with three, eight with four, and one each with five and six, for a total of 170 systems with 408 candidates. When compared to the 827 systems with only one candidate, the multiples account for 17% of the total number of systems, and one-third of all the planet candidates. We compare the characteristics of candidates found in multiples with those found in singles. False positives due to eclipsing binaries are much less common for the multiples, as expected. Singles and multiples are both dominated by planets smaller than Neptune: 69 (+2/-3)% for singles and 86 (+2/-5)% for multiples. This result, that systems with multiple transiting planets are less likely to include a transiting giant planet, suggests that close-in giant planets tend to disrupt the orbital inclinations of small planets in flat systems, or maybe even prevent the formation of such systems in the first place.

  17. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well ... -Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and can become trapped in local regions of attraction. The global SCE procedure is, in general, more effective ... and provides a better coverage of the Pareto optimal solutions at a lower computational cost. ...

  18. Comparison of genetic algorithm and imperialist competitive algorithms in predicting bed load transport in clean pipe.

    Science.gov (United States)

    Ebtehaj, Isa; Bonakdari, Hossein

    2014-01-01

    The existence of sediments in wastewater greatly affects the performance of sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. This article reviews the performance of the genetic algorithm (GA) and the imperialist competitive algorithm (ICA) in minimizing the target function (the mean square error of observed and predicted Froude number). To study the impact of bed load transport parameters, six different models based on four non-dimensional groups are presented. Moreover, the roulette wheel selection method is used to select the parents. For the selected model, the ICA (root mean square error (RMSE) = 0.007, mean absolute percentage error (MAPE) = 3.5%) shows better results than the GA (RMSE = 0.007, MAPE = 5.6%). For all six models the ICA returns better results than the GA. The results of these two algorithms were also compared with a multi-layer perceptron and existing equations.

  19. A benchmark for comparison of cell tracking algorithms

    NARCIS (Netherlands)

    M. Maška (Martin); V. Ulman (Vladimír); K. Svoboda; P. Matula (Pavel); P. Matula (Petr); C. Ederra (Cristina); A. Urbiola (Ainhoa); T. España (Tomás); R. Venkatesan (Rajkumar); D.M.W. Balak (Deepak); P. Karas (Pavel); T. Bolcková (Tereza); M. Štreitová (Markéta); C. Carthel (Craig); S. Coraluppi (Stefano); N. Harder (Nathalie); K. Rohr (Karl); K.E.G. Magnusson (Klas E.); J. Jaldén (Joakim); H.M. Blau (Helen); O.M. Dzyubachyk (Oleh); P. Křížek (Pavel); G.M. Hagen (Guy); D. Pastor-Escuredo (David); D. Jimenez-Carretero (Daniel); M.J. Ledesma-Carbayo (Maria); A. Muñoz-Barrutia (Arrate); E. Meijering (Erik); M. Kozubek (Michal); C. Ortiz-De-Solorzano (Carlos)

    2014-01-01

    Motivation: Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging (ISBI).

  20. Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches

    Directory of Open Access Journals (Sweden)

    Ufuk Çelik

    2015-01-01

    The present study evaluated the diagnostic accuracy of artificial immune system algorithms with the aim of classifying the primary types of headache, which are not related to any organic etiology. These are divided into four types: migraine, tension, cluster, and other primary headaches. With this main objective in mind, three different neurologists entered the medical records of 850 patients into our web-based expert system hosted on the project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. AIS are classification algorithms inspired by the mechanisms of the biological immune system, which possesses significant and distinct capabilities. These algorithms simulate properties of the immune system, such as discrimination, learning, and memory, in order to be used for classification, optimization, or pattern recognition. According to the results, the accuracy of the classifiers used in this study ranged from 95% to 99%, except for one, which yielded 71% accuracy.

  1. Comparison of order reduction algorithms for application to electrical networks

    Directory of Open Access Journals (Sweden)

    Lj. Radić-Weissenfeld

    2009-05-01

    This paper addresses issues related to minimizing the computational burden, in terms of both memory and speed, during the simulation of electrical models. In order to achieve a simple and computationally fast model, order reduction of its reducible part is proposed. In this paper an overview of order reduction algorithms and their application is given.

  2. Computational Comparison of Several Greedy Algorithms for the Minimum Cost Perfect Matching Problem on Large Graphs

    DEFF Research Database (Denmark)

    Wøhlk, Sanne; Laporte, Gilbert

    2017-01-01

    The aim of this paper is to computationally compare several algorithms for the Minimum Cost Perfect Matching Problem on an undirected complete graph. Our work is motivated by the need to solve large instances of the Capacitated Arc Routing Problem (CARP) arising in the optimization of garbage collection in Denmark. Common heuristics for the CARP involve the optimal matching of the odd-degree nodes of a graph. The algorithms used in the comparison include the CPLEX solution of an exact formulation, the LEDA matching algorithm, a recent implementation of the Blossom algorithm, as well as six...

  3. Performance Comparison of Superresolution Array Processing Algorithms. Revised

    National Research Council Canada - National Science Library

    Barabell, A

    1998-01-01

    ... have been documented in the literature, no systematic comparison has heretofore been undertaken. The general approach of the current study is to simulate a sequence of increasingly more general...

  4. QUASI-STELLAR OBJECT SELECTION ALGORITHM USING TIME VARIABILITY AND MACHINE LEARNING: SELECTION OF 1620 QUASI-STELLAR OBJECT CANDIDATES FROM MACHO LARGE MAGELLANIC CLOUD DATABASE

    International Nuclear Information System (INIS)

    Kim, Dae-Won; Protopapas, Pavlos; Alcock, Charles; Trichas, Markos; Byun, Yong-Ik; Khardon, Roni

    2011-01-01

    We present a new quasi-stellar object (QSO) selection algorithm using a Support Vector Machine, a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars, and microlensing events using 58 known QSOs, 1629 variable stars, and 4288 non-variables in the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ∼80% of known QSOs with a 25% false-positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) data set, which consists of 40 million light curves, and found 1620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false-positive rate, we crossmatched the candidates with astronomical catalogs including the Spitzer Surveying the Agents of a Galaxy's Evolution LMC catalog and a few X-ray catalogs. The results further suggest that the majority of the candidates, more than 70%, are QSOs.
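    A rough sketch of this kind of feature-based selection is given below, assuming a per-light-curve feature matrix X (e.g. period, amplitude, colour, autocorrelation) and string class labels y; the kernel and hyperparameters are placeholders, not the values tuned for the MACHO training set.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_predict

def train_qso_classifier(X, y):
    """Train an RBF support vector machine on extracted light-curve features
    and estimate the QSO recovery efficiency by cross-validation, in the spirit
    of the selection described above (hyperparameters are illustrative)."""
    X, y = np.asarray(X, float), np.asarray(y)
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0, gamma="scale"))
    y_pred = cross_val_predict(clf, X, y, cv=10)           # cross-validated labels
    recall_qso = np.mean(y_pred[y == "QSO"] == "QSO")      # efficiency on known QSOs
    clf.fit(X, y)                                          # final model on all data
    return clf, recall_qso
```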

  5. A comparison of three optimization algorithms for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Pflugfelder, D.; Wilkens, J.J.; Nill, S.; Oelfke, U.

    2008-01-01

    In intensity modulated treatment techniques, the modulation of each treatment field is obtained using an optimization algorithm. Multiple optimization algorithms have been proposed in the literature, e.g. steepest descent, conjugate gradient and quasi-Newton methods, to name a few. The standard optimization algorithm in our in-house inverse planning tool KonRad is a quasi-Newton algorithm. Although this algorithm yields good results, it also has some drawbacks. We therefore implemented an improved optimization algorithm based on the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) routine. In this paper the improved optimization algorithm is described. To compare the two algorithms, several treatment plans were optimized using both algorithms, including photon (IMRT) as well as proton (IMPT) intensity modulated therapy treatment plans. On average, the improved optimization algorithm was six times faster at reaching the same objective function value. However, it did not only accelerate the optimization: due to the faster convergence, the improved optimization algorithm usually terminates the optimization process at a lower objective function value. The average observed improvement in the objective function value was 37%. This improvement is clearly visible in the corresponding dose-volume histograms. The benefit of the improved optimization algorithm is particularly pronounced in proton therapy plans. The conjugate gradient algorithm ranked between the other two algorithms, with an average speedup factor of two and an average improvement of the objective function value of 30%. (orig.)
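    As a generic illustration of this kind of quasi-Newton fluence optimization (not KonRad's implementation), a simple quadratic dose objective can be minimized with SciPy's limited-memory BFGS routine; the dose-influence matrix D, prescription d_presc and per-voxel weights w are hypothetical placeholders.

```python
import numpy as np
from scipy.optimize import minimize

def fluence_objective(x, D, d_presc, w):
    """Weighted least-squares dose objective and its gradient for fluence x."""
    diff = D @ x - d_presc
    return np.sum(w * diff**2), 2.0 * D.T @ (w * diff)   # (value, gradient)

def optimize_fluence(D, d_presc, w, x0=None):
    """Minimize the objective with L-BFGS-B under non-negative fluence bounds."""
    n = D.shape[1]
    x0 = np.full(n, 0.1) if x0 is None else x0
    res = minimize(fluence_objective, x0, args=(D, d_presc, w),
                   jac=True, method="L-BFGS-B",
                   bounds=[(0.0, None)] * n,              # beamlet weights >= 0
                   options={"maxiter": 200})
    return res.x
```

    The limited-memory update stores only a few gradient pairs, which is what makes the method attractive when the number of beamlets is large.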

  6. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  7. Comparison of spike-sorting algorithms for future hardware implementation.

    Science.gov (United States)

    Gibson, Sarah; Judy, Jack W; Markovic, Dejan

    2008-01-01

    Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being most robust to noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
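    The nonlinear energy operator itself is simple enough to sketch: psi[n] = x[n]^2 - x[n-1]*x[n+1], followed by a threshold. The threshold scaling and refractory handling below are generic choices, not necessarily those of the cited study.

```python
import numpy as np

def neo_detect(x, fs, thresh_scale=8.0, refractory_ms=1.0):
    """Spike detection with the nonlinear energy operator (NEO):
    psi[n] = x[n]^2 - x[n-1]*x[n+1], thresholded at a multiple of its mean.
    x: 1-D recording (array), fs: sampling rate in Hz."""
    x = np.asarray(x, float)
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    threshold = thresh_scale * psi.mean()
    above = np.flatnonzero(psi > threshold)
    # enforce a simple refractory period so each spike is counted once
    refractory = int(refractory_ms * 1e-3 * fs)
    spikes, last = [], -refractory
    for idx in above:
        if idx - last >= refractory:
            spikes.append(idx)
            last = idx
    return np.array(spikes)
```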

  8. A comparison of updating algorithms for large N reduced models

    Energy Technology Data Exchange (ETDEWEB)

    Pérez, Margarita García [Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); González-Arroyo, Antonio [Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); Departamento de Física Teórica, C-XI Universidad Autónoma de Madrid,E-28049 Madrid (Spain); Keegan, Liam [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland); Okawa, Masanori [Graduate School of Science, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Core of Research for the Energetic Universe, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Ramos, Alberto [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland)

    2015-06-29

    We investigate Monte Carlo updating algorithms for simulating SU(N) Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole SU(N) matrix at once, or iterating through SU(2) subgroups of the SU(N) matrix, we find the same critical exponent in both cases, and only a slight difference between the two.

  9. A comparison of updating algorithms for large $N$ reduced models

    CERN Document Server

    Pérez, Margarita García; Keegan, Liam; Okawa, Masanori; Ramos, Alberto

    2015-01-01

    We investigate Monte Carlo updating algorithms for simulating $SU(N)$ Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole $SU(N)$ matrix at once, or iterating through $SU(2)$ subgroups of the $SU(N)$ matrix, we find the same critical exponent in both cases, and only a slight difference between the two.

  10. A numeric comparison of variable selection algorithms for supervised learning

    International Nuclear Information System (INIS)

    Palombo, G.; Narsky, I.

    2009-01-01

    Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about the data is therefore an important task in the analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets, for instance imaging gamma-ray Cherenkov telescope (MAGIC) data found at the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community (http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ('Add N Remove R') implemented in SPR.
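    A minimal wrapper-style forward selection loop, a simplified stand-in for SPR's 'Add N Remove R' with N = 1 and R = 0, might look like the sketch below, using cross-validated score as the evaluation criterion; the classifier and data are placeholders.

```python
import numpy as np
from sklearn.model_selection import cross_val_score

def forward_select(clf, X, y, max_features=None, cv=5):
    """Greedy wrapper selection: repeatedly add the variable that most improves
    the cross-validated score of clf, stopping when no candidate helps."""
    X = np.asarray(X)
    n = X.shape[1]
    max_features = n if max_features is None else max_features
    selected, best_score = [], -np.inf
    while len(selected) < max_features:
        scores = {}
        for j in range(n):
            if j in selected:
                continue
            cols = selected + [j]
            scores[j] = cross_val_score(clf, X[:, cols], y, cv=cv).mean()
        j_best = max(scores, key=scores.get)
        if scores[j_best] <= best_score:
            break                       # no remaining variable improves the score
        selected.append(j_best)
        best_score = scores[j_best]
    return selected, best_score
```

    The wrapper's cost (one full cross-validation per candidate variable per step) is exactly why it is the slowest method in the comparison above.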

  11. Current Piano Education of Turkish Music Teacher Candidates: Comparisons of Instructors and Students Perceptions

    Science.gov (United States)

    Jelen, Birsen

    2015-01-01

    In recent years, almost every newly opened government-funded university in Turkey has established a music department where future music teachers are educated, and piano is compulsory for every music teacher candidate in Turkey. The aim of this research is to compare piano teaching instructors' and their students' perceptions about the current…

  12. Searching for an Oscillating Massive Scalar Field as a Dark Matter Candidate Using Atomic Hyperfine Frequency Comparisons.

    Science.gov (United States)

    Hees, A; Guéna, J; Abgrall, M; Bize, S; Wolf, P

    2016-08-05

    We use 6 yrs of accurate hyperfine frequency comparison data of the dual rubidium and caesium cold atom fountain FO2 at LNE-SYRTE to search for a massive scalar dark matter candidate. Such a scalar field can induce harmonic variations of the fine structure constant, of the mass of fermions, and of the quantum chromodynamic mass scale, which will directly impact the rubidium/caesium hyperfine transition frequency ratio. We find no signal consistent with a scalar dark matter candidate but provide improved constraints on the coupling of the putative scalar field to standard matter. Our limits are complementary to previous results that were only sensitive to the fine structure constant and improve them by more than an order of magnitude when only a coupling to electromagnetism is assumed.

  13. A comparison of body image concern in candidates for rhinoplasty and therapeutic surgery.

    Science.gov (United States)

    Hashemi, Seyed Amirhosein Ghazizadeh; Edalatnoor, Behnoosh; Edalatnoor, Behnaz; Niksun, Omid

    2017-09-01

    Body dysmorphic disorder among patients referring for cosmetic surgeries is a disorder that, if not diagnosed by a physician, can cause irreparable damage to the doctor and the patient. The aim of this study was to compare body image concern in candidates for rhinoplasty and therapeutic surgery. This was a cross-sectional study conducted on 212 patients referring to Loghman Hospital of Tehran for rhinoplasty or therapeutic surgery during the period from 2014 through 2016. For each person in the cosmetic surgery group, a person of the same sex and age in the therapeutic surgery group was matched, and the study was conducted on 60 subjects in the rhinoplasty group and 62 patients in the therapeutic surgery group. The Body Image Concern Inventory and demographic data were then completed by all patients, and the level of body image concern in the two groups was compared. Statistical analysis was conducted using SPSS 16, the Chi-square test and the paired-samples t-test. A p-value of less than 0.05 was considered statistically significant. In this study, 122 patients (49 males and 73 females) with a mean age of 27.1±7.3 years (range 18 to 55) were investigated. Sixty subjects were candidates for rhinoplasty and 62 for therapeutic surgery. Candidates for rhinoplasty were mostly male (60%) and single (63.3%). Results of the t-test demonstrated that body image concern and body dysmorphic disorder were higher in the rhinoplasty group than in the therapeutic group (p < 0.05). Body image concern was thus higher in rhinoplasty candidates than in candidates for other surgeries. Careful interviewing of people referred for rhinoplasty is very important in order to measure their level of body image concern, diagnose any disorder present and consider the required treatment.

  14. Comparison of four Adaboost algorithm based artificial neural networks in wind speed predictions

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei; Zhang, Lei

    2015-01-01

    Highlights: • Four hybrid algorithms are proposed for the wind speed decomposition. • The Adaboost algorithm is adopted to provide a hybrid training framework. • MLP neural networks are built to do the forecasting computation. • Four important network training algorithms are included in the MLP networks. • All the proposed hybrid algorithms are suitable for wind speed prediction. - Abstract: The technology of wind speed prediction is important to guarantee the safety of wind power utilization. In this paper, four different hybrid methods are proposed for high-precision multi-step wind speed predictions based on the Adaboost (Adaptive Boosting) algorithm and MLP (Multilayer Perceptron) neural networks. In the hybrid Adaboost–MLP forecasting architecture, four important algorithms are adopted for the training and modeling of the MLP neural networks: the GD-ALR-BP algorithm, the GDM-ALR-BP algorithm, the CG-BP-FR algorithm and the BFGS algorithm. The aim of the study is to investigate the improvement in forecasting performance of the MLP neural networks brought by the Adaboost algorithm's optimization under the various training algorithms. The hybrid models in the performance comparison include Adaboost–GD-ALR-BP–MLP, Adaboost–GDM-ALR-BP–MLP, Adaboost–CG-BP-FR–MLP, Adaboost–BFGS–MLP, GD-ALR-BP–MLP, GDM-ALR-BP–MLP, CG-BP-FR–MLP and BFGS–MLP. The results of two experiments show that: (1) the proposed hybrid Adaboost–MLP forecasting architecture is effective for wind speed predictions; (2) the Adaboost algorithm has considerably promoted the forecasting performance of the MLP neural networks; (3) among the proposed Adaboost–MLP forecasting models, the Adaboost–CG-BP-FR–MLP model has the best performance; and (4) the improvement of the MLP neural networks by the Adaboost algorithm decreases step by step in the following sequence of training algorithms: GD-ALR-BP, GDM-ALR-BP, CG-BP-FR and BFGS.
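    A generic Adaboost-plus-MLP regressor can be assembled from off-the-shelf components, as sketched below with scikit-learn's AdaBoost.R2 and a standard back-propagation MLP; this is only an analogue of the paper's GD-ALR-BP/GDM-ALR-BP/CG-BP-FR/BFGS variants, and the lag length, network size and the estimator= keyword (scikit-learn >= 1.2) are assumptions.

```python
import numpy as np
from sklearn.ensemble import AdaBoostRegressor
from sklearn.neural_network import MLPRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_percentage_error

def make_lagged(series, n_lags=6, horizon=1):
    """Turn a 1-D wind speed series into (lagged values -> future value) samples."""
    series = np.asarray(series, float)
    X, y = [], []
    for t in range(n_lags, len(series) - horizon + 1):
        X.append(series[t - n_lags:t])
        y.append(series[t + horizon - 1])
    return np.asarray(X), np.asarray(y)

def fit_adaboost_mlp(series):
    """Boost a small MLP with AdaBoost.R2 and report test MAPE (illustrative only)."""
    X, y = make_lagged(series)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, shuffle=False, test_size=0.2)
    base = MLPRegressor(hidden_layer_sizes=(20,), max_iter=2000)
    model = AdaBoostRegressor(estimator=base, n_estimators=20,
                              learning_rate=0.5, random_state=0)
    model.fit(X_tr, y_tr)
    return model, mean_absolute_percentage_error(y_te, model.predict(X_te))
```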

  15. Comparison of segmentation algorithms for fluorescence microscopy images of cells.

    Science.gov (United States)

    Dima, Alden A; Elliott, John T; Filliben, James J; Halter, Michael; Peskin, Adele; Bernal, Javier; Kociolek, Marcin; Brady, Mary C; Tang, Hai C; Plant, Anne L

    2011-07-01

    The analysis of fluorescence microscopy of cells often requires the determination of cell edges. This is typically done using segmentation techniques that separate the cell objects in an image from the surrounding background. This study compares segmentation results from nine different segmentation techniques applied to two different cell lines and five different sets of imaging conditions. Significant variability in the results of segmentation was observed that was due solely to differences in imaging conditions or applications of different algorithms. We quantified and compared the results with a novel bivariate similarity index metric that evaluates the degree of underestimating or overestimating a cell object. The results show that commonly used threshold-based segmentation techniques are less accurate than k-means clustering with multiple clusters. Segmentation accuracy varies with imaging conditions that determine the sharpness of cell edges and with geometric features of a cell. Based on this observation, we propose a method that quantifies cell edge character to provide an estimate of how accurately an algorithm will perform. The results of this study will assist the development of criteria for evaluating interlaboratory comparability.
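    For orientation, the multi-cluster k-means idea can be reduced to the following intensity-only sketch, which assumes a 2-D grayscale image and treats the brightest cluster as cell foreground; the evaluated pipelines are of course more elaborate.

```python
import numpy as np
from sklearn.cluster import KMeans

def kmeans_segment(image, n_clusters=3):
    """Cluster pixel intensities with k-means and return a boolean foreground
    mask for the brightest cluster; a simple stand-in for multi-cluster k-means
    segmentation of fluorescence images."""
    h, w = image.shape                                   # assumes a 2-D grayscale image
    X = image.reshape(-1, 1).astype(float)               # one feature: pixel intensity
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(X)
    labels = km.labels_.reshape(h, w)
    brightest = int(np.argmax(km.cluster_centers_.ravel()))
    return labels == brightest
```

    Using more than two clusters lets dim cell borders fall into an intermediate cluster instead of being forced into the background, which is one plausible reading of why multi-cluster k-means outperformed simple thresholding here.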

  16. Comparison of Greedy Algorithms for Decision Tree Optimization

    KAUST Repository

    Alkhalid, Abdulaziz

    2013-01-01

    This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values of average depth, depth, number of nodes, number of terminal nodes, and number of nonterminal nodes of decision trees. We compare average depth, depth, number of nodes, number of terminal nodes and number of nonterminal nodes of constructed trees with minimum values of the considered parameters obtained based on a dynamic programming approach. We report experiments performed on data sets from UCI ML Repository and randomly generated binary decision tables. As a result, for depth, average depth, and number of nodes we propose a number of good heuristics.

  17. Identification of new candidate drugs for lung cancer using chemical-chemical interactions, chemical-protein interactions and a K-means clustering algorithm.

    Science.gov (United States)

    Lu, Jing; Chen, Lei; Yin, Jun; Huang, Tao; Bi, Yi; Kong, Xiangyin; Zheng, Mingyue; Cai, Yu-Dong

    2016-01-01

    Lung cancer, characterized by uncontrolled cell growth in the lung tissue, is the leading cause of global cancer deaths. To date, effective treatments for this disease are limited. Many synthetic compounds have emerged with the advancement of combinatorial chemistry, and identification of effective lung cancer candidate drug compounds among them is a great challenge. Thus, it is necessary to build effective computational methods that can assist us in selecting potential lung cancer drug compounds. In this study, a computational method was proposed to tackle this problem. Chemical-chemical interactions and chemical-protein interactions were utilized to select candidate drug compounds that have close associations with approved lung cancer drugs and lung cancer-related genes. A permutation test and the K-means clustering algorithm were employed to exclude candidate drugs with a low possibility of treating lung cancer. The final analysis suggests that the remaining drug compounds have potential anti-lung cancer activities and most of them are structurally dissimilar to drugs already approved for lung cancer.

  18. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    Science.gov (United States)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising the posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. The two candidate optimisation methods assessed were an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison to the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, and at only a quarter of the computational resources used by the smallest specified GA configuration. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. When the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest

  19. Comparison of machine-learning algorithms to build a predictive model for detecting undiagnosed diabetes - ELSA-Brasil: accuracy study.

    Science.gov (United States)

    Olivera, André Rodrigues; Roesler, Valter; Iochpe, Cirano; Schmidt, Maria Inês; Vigo, Álvaro; Barreto, Sandhi Maria; Duncan, Bruce Bartholow

    2017-01-01

    Type 2 diabetes is a chronic disease associated with a wide range of serious health complications that have a major impact on overall health. The aims here were to develop and validate predictive models for detecting undiagnosed diabetes using data from the Longitudinal Study of Adult Health (ELSA-Brasil) and to compare the performance of different machine-learning algorithms in this task. The study design was a comparison of machine-learning algorithms to develop predictive models using data from ELSA-Brasil. After selecting a subset of 27 candidate variables from the literature, models were built and validated in four sequential steps: (i) parameter tuning with tenfold cross-validation, repeated three times; (ii) automatic variable selection using forward selection, a wrapper strategy with four different machine-learning algorithms and tenfold cross-validation (repeated three times), to evaluate each subset of variables; (iii) error estimation of model parameters with tenfold cross-validation, repeated ten times; and (iv) generalization testing on an independent dataset. The models were created with the following machine-learning algorithms: logistic regression, artificial neural network, naïve Bayes, K-nearest neighbor and random forest. The best models were created using artificial neural networks and logistic regression. These achieved mean areas under the curve of, respectively, 75.24% and 74.98% in the error estimation step and 74.17% and 74.41% in the generalization testing step. Most of the predictive models produced similar results, and demonstrated the feasibility of identifying individuals with the highest probability of having undiagnosed diabetes through easily obtained clinical data.

  20. Determining OBS Instrument Orientations: A Comparison of Algorithms

    Science.gov (United States)

    Doran, A. K.; Laske, G.

    2015-12-01

    The alignment of the orientation of the horizontal seismometer components with the geographical coordinate system is critical for a wide variety of seismic analyses, but the traditional deployment method of ocean bottom seismometers (OBS) precludes knowledge of this parameter. Current techniques for determining the orientation predominantly rely on body and surface wave data recorded from teleseismic events with sufficiently large magnitudes. Both wave types experience lateral refraction between the source and receiver as a result of heterogeneity and anisotropy, and therefore the arrival angle of any one phase can significantly deviate from the great circle minor arc. We systematically compare the results and uncertainties obtained through current determination methods, as well as describe a new algorithm that uses body wave, surface wave, and differential pressure gauge data (where available) to invert for horizontal orientation. To start with, our method is based on the easily transportable computer code of Stachnik et al. (2012) that is publicly available through IRIS. A major addition is that we utilize updated global dispersion maps to account for lateral refraction, as was done by Laske (1995). We also make measurements in a wide range of frequencies, and analyze surface wave trains of repeat orbits. Our method has the advantage of requiring fewer total events to achieve high precision estimates, which is beneficial for OBS deployments that can be as short as weeks. Although the program is designed for the purpose of use with OBS instruments, it also works with standard land installations. We intend to provide the community with a program that is easy to use, requires minimal user input, and is optimized to work with data cataloged at the IRIS DMC.

  1. Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison

    Directory of Open Access Journals (Sweden)

    Olympia Roeva

    2005-12-01

    In this paper the problem of parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal, so traditional gradient-based local optimization methods fail to arrive at satisfactory solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms have proved to be very suitable for the optimization of highly non-linear problems with many variables. Genetic algorithms offer global search capability and robustness, which makes them advantageous for parameter identification of fermentation models. A comparison between simple, modified and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm. The considered algorithms converged to very similar cost values, but the modified algorithm was several times faster than the other two.
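    A minimal real-coded genetic algorithm for this kind of parameter estimation is sketched below (tournament selection, arithmetic crossover, Gaussian mutation, elitist survivor selection); it is a generic simple GA rather than the modified or multi-population variants compared in the paper, and cost stands for any user-supplied error function of the parameter vector.

```python
import numpy as np

def simple_ga(cost, bounds, pop_size=40, n_gen=200, mut_sigma=0.1, seed=0):
    """Minimal real-coded GA minimizing cost(params) within box bounds.
    bounds: list of (low, high) pairs, one per model parameter."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, float).T
    pop = rng.uniform(lo, hi, size=(pop_size, len(lo)))
    fit = np.array([cost(p) for p in pop])
    for _ in range(n_gen):
        children = []
        for _ in range(pop_size):
            i, j = rng.integers(pop_size, size=2)
            a = pop[i] if fit[i] < fit[j] else pop[j]       # tournament parent 1
            k, l = rng.integers(pop_size, size=2)
            b = pop[k] if fit[k] < fit[l] else pop[l]       # tournament parent 2
            w = rng.random()
            child = w * a + (1 - w) * b                     # arithmetic crossover
            child += rng.normal(0, mut_sigma * (hi - lo))   # Gaussian mutation
            children.append(np.clip(child, lo, hi))
        children = np.array(children)
        child_fit = np.array([cost(p) for p in children])
        keep = np.argsort(np.concatenate([fit, child_fit]))[:pop_size]  # elitism
        pop = np.concatenate([pop, children])[keep]
        fit = np.concatenate([fit, child_fit])[keep]
    return pop[np.argmin(fit)], fit.min()
```

    For the fermentation case, cost would typically integrate the differential-algebraic model for a trial parameter vector and return a sum-of-squares error against the measured data.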

  2. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison between the three algorithms is presented for a constant undersampling factor at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose over more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because, while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  3. A comparison of graph- and kernel-based -omics data integration algorithms for classifying complex traits.

    Science.gov (United States)

    Yan, Kang K; Zhao, Hongyu; Pang, Herbert

    2017-12-06

    High-throughput sequencing data are widely collected and analyzed in the study of complex diseases in the quest to improve human health. Well-studied algorithms mostly deal with a single data source and cannot fully utilize the potential of multi-omics data sources. In order to provide a holistic understanding of human health and diseases, it is necessary to integrate multiple data sources. Several algorithms have been proposed so far; however, a comprehensive comparison of data integration algorithms for classification of binary traits is currently lacking. In this paper, we focus on two common classes of integration algorithms: graph-based methods, which represent subjects as nodes and their relationships as edges, and kernel-based methods, which build a classifier in feature space. Our paper provides a comprehensive comparison of their performance in terms of various measures of classification accuracy and computation time. Seven different integration algorithms, including graph-based semi-supervised learning, graph sharpening integration, composite association network, Bayesian network, semi-definite programming-support vector machine (SDP-SVM), relevance vector machine (RVM) and Ada-boost relevance vector machine, are compared and evaluated on hypertension and two cancer data sets in our study. In general, kernel-based algorithms create more complex models and require longer computation time, but they tend to perform better than graph-based algorithms. Graph-based algorithms have the advantage of being computationally faster. The empirical results demonstrate that composite association network, relevance vector machine, and Ada-boost RVM are the better performers. We provide recommendations on how to choose an appropriate algorithm for integrating data from multiple sources.

  4. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  5. Portfolio management using value at risk: A comparison between genetic algorithms and particle swarm optimization

    NARCIS (Netherlands)

    V.A.F. Dallagnol (V. A F); J.H. van den Berg (Jan); L. Mous (Lonneke)

    2009-01-01

    In this paper, a comparison is presented of the application of particle swarm optimization and genetic algorithms to portfolio management, in a constrained portfolio optimization problem where no short sales are allowed. The objective function to be minimized is the value at risk

  6. A comparison between two algorithms for the retrieval of soil moisture using AMSR-E data

    Science.gov (United States)

    A comparison between two algorithms for estimating soil moisture with microwave satellite data was carried out by using the datasets collected on the four Agricultural Research Service (ARS) watershed sites in the US from 2002 to 2009. These sites collectively represent a wide range of ground condit...

  7. A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller

    International Nuclear Information System (INIS)

    Tapp, P.A.

    1992-04-01

    A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms are discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs
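    For reference, the object being tuned is a discrete proportional-plus-integral loop, and closed-loop cycling theory maps a measured ultimate gain and period to PI settings; the sketch below shows a generic PI update with anti-windup and the classical Ziegler-Nichols rule, not the Bristol-Babcock module or the three STPI algorithms themselves.

```python
class PIController:
    """Discrete proportional-plus-integral controller of the kind the STPI
    algorithms tune on-line; here the gains kp and ti are simply fixed inputs."""
    def __init__(self, kp, ti, dt, out_min=0.0, out_max=100.0):
        self.kp, self.ti, self.dt = kp, ti, dt
        self.out_min, self.out_max = out_min, out_max
        self.integral = 0.0

    def update(self, setpoint, measurement):
        error = setpoint - measurement
        self.integral += error * self.dt
        out = self.kp * (error + self.integral / self.ti)
        # clamp the output and back-calculate the integral to avoid wind-up
        clamped = min(max(out, self.out_min), self.out_max)
        if out != clamped:
            self.integral -= (out - clamped) * self.ti / self.kp
        return clamped

def ziegler_nichols_pi(ku, pu):
    """Classical closed-loop-cycling (Ziegler-Nichols) PI rule: given the
    ultimate gain ku and ultimate period pu, return (kp, ti)."""
    return 0.45 * ku, pu / 1.2
```

    A self-tuning scheme would re-estimate ku and pu (or a process model) on-line and update kp and ti accordingly as the process characteristics drift.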

  8. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    Science.gov (United States)

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505
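    Of the six heuristics, Min-min is representative and easy to sketch; the version below assumes a hypothetical ETC (expected time to compute) matrix and considers makespan only, ignoring cost and data-transfer terms.

```python
import numpy as np

def min_min(etc):
    """Min-min heuristic.  etc[i, j] = expected time of task i on machine j.
    Repeatedly pick the unscheduled task with the smallest achievable completion
    time and assign it to the machine that achieves it."""
    n_tasks, n_machines = etc.shape
    ready = np.zeros(n_machines)          # current ready time of each machine
    unassigned = set(range(n_tasks))
    schedule = {}
    while unassigned:
        best = None                       # (completion_time, task, machine)
        for t in unassigned:
            completion = ready + etc[t]   # completion time of task t on every machine
            m = int(np.argmin(completion))
            if best is None or completion[m] < best[0]:
                best = (completion[m], t, m)
        ct, t, m = best
        schedule[t] = m
        ready[m] = ct
        unassigned.remove(t)
    return schedule, ready.max()          # assignment and resulting makespan
```

    Max-min differs only in picking the task whose best completion time is largest, which tends to balance long tasks earlier; Sufferage picks the task that would suffer most from losing its best machine.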

  9. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    Science.gov (United States)

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.

  10. A study and implementation of algorithm for automatic ECT result comparison

    International Nuclear Information System (INIS)

    Jang, You Hyun; Nam, Min Woo; Kim, In Chul; Joo, Kyung Mun; Kim, Jong Seog

    2012-01-01

    An automatic ECT result comparison algorithm was developed and implemented in a computer language to remove the human error involved in manually comparing large amounts of data. The file structures of the two ECT programs (Eddy net and ECT IDS), each of which has a unique file format, were analyzed so that their files could be opened and the data uploaded into PC memory. The comparison algorithm was defined graphically for easy conversion into a PC programming language. The automatic result program was written in the C language, which is suitable for future code management, supports an object-oriented programming structure, and allows fast development. The program provides an MS Excel file export function, useful for further analysis with external software, and an intuitive result visualization function with user-friendly color mapping that supports efficient analysis.

  11. A study and implementation of algorithm for automatic ECT result comparison

    Energy Technology Data Exchange (ETDEWEB)

    Jang, You Hyun; Nam, Min Woo; Kim, In Chul; Joo, Kyung Mun; Kim, Jong Seog [Central Research Institute, Daejeon (Korea, Republic of)

    2012-10-15

    An automatic ECT result comparison algorithm was developed and implemented in a computer language to remove the human error involved in manually comparing large amounts of data. The file structures of the two ECT programs (Eddy net and ECT IDS), each of which has a unique file format, were analyzed so that their files could be opened and the data uploaded into PC memory. The comparison algorithm was defined graphically for easy conversion into a PC programming language. The automatic result program was written in the C language, which is suitable for future code management, supports an object-oriented programming structure, and allows fast development. The program provides an MS Excel file export function, useful for further analysis with external software, and an intuitive result visualization function with user-friendly color mapping that supports efficient analysis.

  12. Comparison of three mineral candidates in middle and low-pressure condition. Experimental study

    Energy Technology Data Exchange (ETDEWEB)

    Yan, Heng; Zhang, Jun-ying; Zhao, Yong-chun; Wang, Zhi-lang; Pan, Xia; Xu, Jun; Zheng, Chu-guang [Huazhong Univ. of Science and Technology, Wuhan (China). State Key Lab. of Coal Combustion

    2013-07-01

    The "Greenhouse Effect", which is scientifically proven to be mainly caused by the increasing concentration of CO2, has become a topic of national and international concern. Mineral carbonation, such as the carbonation of alkaline silicate Ca/Mg minerals in analogy to natural weathering processes, is a potentially attractive route to mitigating possible global warming through the industrial imitation of natural weathering. In this paper, three typical natural mineral candidates in China (serpentine, olivine and wollastonite) were selected as raw materials for direct mineral carbonation experiments under middle and low pressure. A series of experiments was carried out to investigate the factors that influence the carbonation conversion, such as reaction temperature, reaction pressure, particle size, solution composition and pretreatment. The solid products from the carbonation experiments were filtered, collected, dried and analyzed by X-ray diffraction (XRD) and field scanning electron microscopy equipped with energy dispersive X-ray analysis (FSEM-EDX) to identify the mineral carbonation reaction, and the method of mass equilibrium after heat decomposition was used to calculate the mineral carbonation conversion. The XRD and FSEM analyses confirm that the carbonation reaction occurred during the experiments and that mineral carbonation is a potential technique for carbon dioxide sequestration. The mass equilibrium data were collected and the conversion formula was used to calculate the carbonation conversion of all three mineral candidates. The mass equilibrium results show that, for all three mineral materials, the carbonation conversion increases with increasing reaction temperature, but once the temperature rises above 150 C, the conversion of serpentine decreases slightly. Reaction pressure is also an important factor to mineral

  13. [Comparison of dose calculation algorithms in stereotactic radiation therapy in lung].

    Science.gov (United States)

    Tomiyama, Yuki; Araki, Fujio; Kanetake, Nagisa; Shimohigashi, Yoshinobu; Tominaga, Hirofumi; Sakata, Jyunichi; Oono, Takeshi; Kouno, Tomohiro; Hioki, Kazunari

    2013-06-01

    Dose calculation algorithms in radiation treatment planning systems (RTPSs) play a crucial role in stereotactic body radiation therapy (SBRT) in the lung with heterogeneous media. This study investigated the performance and accuracy of dose calculation for three algorithms: analytical anisotropic algorithm (AAA), pencil beam convolution (PBC) and Acuros XB (AXB) in Eclipse (Varian Medical Systems), by comparison against the Voxel Monte Carlo algorithm (VMC) in iPlan (BrainLab). The dose calculations were performed with clinical lung treatments under identical planning conditions, and the dose distributions and the dose volume histogram (DVH) were compared among algorithms. AAA underestimated the dose in the planning target volume (PTV) compared to VMC and AXB in most clinical plans. In contrast, PBC overestimated the PTV dose. AXB tended to slightly overestimate the PTV dose compared to VMC but the discrepancy was within 3%. The discrepancy in the PTV dose between VMC and AXB appears to be due to differences in physical material assignments, material voxelization methods, and an energy cut-off for electron interactions. The dose distributions in lung treatments varied significantly according to the calculation accuracy of the algorithms. VMC and AXB are better algorithms than AAA for SBRT.

  14. A Damage Resistance Comparison Between Candidate Polymer Matrix Composite Feedline Materials

    Science.gov (United States)

    Nettles, A. T

    2000-01-01

    As part of NASA's focused technology programs for future reusable launch vehicles, a task is underway to study the feasibility of using polymer matrix composite feedlines instead of metal ones on propulsion systems. This is desirable to reduce weight and manufacturing costs. The task consists of comparing several prototype composite feedlines made by various methods: electron-beam curing, standard hand lay-up and autoclave cure, solvent-assisted resin transfer molding, and thermoplastic tape laying. One of the critical technology drivers for composite components is resistance to foreign object damage. This paper presents results of an experimental study of the damage resistance of the candidate materials from which the prototype feedlines are manufactured. The materials examined all have a 5-harness weave of IM7 as the fiber constituent (except for the thermoplastic, which is unidirectional tape laid up in a bidirectional configuration). The resins tested were 977-6, PR 520, SE-SA-1, RS-E3 (e-beam curable), Cycom 823 and PEEK. The results showed that 977-6 and PEEK were the most damage resistant in all tested cases.

  15. Initial Comparison of Baseline Physical and Mechanical Properties for the VHTR Candidate Graphite Grades

    Energy Technology Data Exchange (ETDEWEB)

    Carroll, Mark C. [Idaho National Lab. (INL), Idaho Falls, ID (United States). VHTR Program

    2014-09-01

    High-purity graphite is the core structural material of choice in the Very High Temperature Reactor (VHTR) design, a graphite-moderated, helium-cooled configuration capable of producing thermal energy for power generation as well as process heat for industrial applications that require temperatures higher than the outlet temperatures of present nuclear reactors. The Baseline Graphite Characterization Program is establishing accurate as-manufactured mechanical and physical property distributions in nuclear-grade graphites by providing comprehensive data that captures the level of variation in measured values. In addition to providing a thorough comparison between these values in different graphite grades, the program is also carefully tracking individual specimen source, position, and orientation information in order to provide comparisons both in specific properties and in the associated variability between different lots, different billets, and different positions from within a single billet. This report is a preliminary comparison between each of the grades of graphite that are considered “candidate” grades from four major international graphite producers. These particular grades (NBG-18, NBG-17, PCEA, IG-110, and 2114) are the major focus of the evaluations presently underway on irradiated graphite properties through the series of Advanced Graphite Creep (AGC) experiments. NBG-18, a medium-grain pitch coke graphite from SGL from which billets are formed via vibration molding, was the favored structural material in the pebble-bed configuration. NBG-17 graphite from SGL is essentially NBG-18 with the grain size reduced by a factor of two. PCEA, petroleum coke graphite from GrafTech with a similar grain size to NBG-17, is formed via an extrusion process and was initially considered the favored grade for the prismatic layout. IG-110 and 2114, from Toyo Tanso and Mersen (formerly Carbone Lorraine), respectively, are fine-grain grades produced via an isomolding

  16. VennPainter: A Tool for the Comparison and Identification of Candidate Genes Based on Venn Diagrams.

    Directory of Open Access Journals (Sweden)

    Guoliang Lin

    VennPainter is a program for depicting unique and shared sets from gene lists and generating Venn diagrams, built using the Qt C++ framework. The software produces Classic Venn, Edwards' Venn and Nested Venn diagrams and allows for eight sets in graph mode and 31 sets in data-processing mode only. In comparison, previous programs produce Classic Venn and Edwards' Venn diagrams and allow for a maximum of six sets. The software incorporates user-friendly features and works in Windows, Linux and Mac OS. Its graphical interface does not require a user to have programming skills. Users can modify diagram content for up to eight datasets because of the Scalable Vector Graphics output. VennPainter can provide output results in vertical, horizontal and matrix formats, which facilitates sharing datasets as required for further identification of candidate genes. Users can obtain gene lists from shared sets by clicking the numbers on the diagram. Thus, VennPainter is an easy-to-use, highly efficient, cross-platform and powerful program that provides a more comprehensive tool for identifying candidate genes and visualizing the relationships among genes or gene families in comparative analysis.

  17. Comparison of the nucleotide sequence of wild-type hepatitis A virus and its attenuated candidate vaccine derivative

    International Nuclear Information System (INIS)

    Cohen, J.I.; Rosenblum, B.; Ticehurst, J.R.; Daemer, R.; Feinstone, S.; Purcell, R.H.

    1987-01-01

    Development of attenuated mutants for use as vaccines is in progress for other viruses, including influenza, rotavirus, varicella-zoster, cytomegalovirus, and hepatitis-A virus (HAV). Attenuated viruses may be derived from naturally occurring mutants that infect human or nonhuman hosts. Alternatively, attenuated mutants may be generated by passage of wild-type virus in cell culture. Production of attenuated viruses in cell culture is a laborious and empiric process. Despite previous empiric successes, understanding the molecular basis for attenuation of vaccine viruses could facilitate future development and use of live-virus vaccines. Comparison of the complete nucleotide sequences of wild-type (virulent) and vaccine (attenuated) viruses has been reported for polioviruses and yellow fever virus. Here, the authors compare the nucleotide sequence of wild-type HAV HM-175 with that of a candidate vaccine derivative

  18. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons

    Science.gov (United States)

    2014-01-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829

  19. Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.

    Science.gov (United States)

    Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C

    2015-02-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.
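
    For study designs with a known true value (e.g. phantoms or digital reference images), the aggregate performance of a QIB algorithm reduces to a few simple estimators. The following sketch uses synthetic measurements, not any dataset from the paper, to illustrate bias, variance, and mean squared error against a known truth.

```python
# Sketch of aggregate bias / variance / MSE estimation for a QIB algorithm
# measured against a known true value (phantom-style design).
# The measurement arrays are synthetic placeholders.
import numpy as np

rng = np.random.default_rng(0)
true_value = 10.0                                      # known phantom value (e.g. volume in mL)
algo_a = true_value + rng.normal(0.2, 0.5, size=50)    # biased, moderate variance
algo_b = true_value + rng.normal(0.0, 0.9, size=50)    # unbiased, larger variance

def summarize(measurements, truth):
    bias = measurements.mean() - truth
    variance = measurements.var(ddof=1)
    mse = np.mean((measurements - truth) ** 2)         # approximately bias**2 + variance
    return bias, variance, mse

for name, m in [("algorithm A", algo_a), ("algorithm B", algo_b)]:
    b, v, mse = summarize(m, true_value)
    print(f"{name}: bias={b:.3f}, variance={v:.3f}, MSE={mse:.3f}")
```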

  20. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September 1996. ... The whole list named 'PO' is a pointer to the first element of the list; ... Program for computing matrices X and Y and placing the result in C.

  1. Quasar Photometric Redshifts and Candidate Selection: A New Algorithm Based on Optical and Mid-infrared Photometric Data

    Science.gov (United States)

    Yang, Qian; Wu, Xue-Bing; Fan, Xiaohui; Jiang, Linhua; McGreer, Ian; Green, Richard; Yang, Jinyi; Schindler, Jan-Torge; Wang, Feige; Zuo, Wenwen; Fu, Yuming

    2017-12-01

    We present a new algorithm to estimate quasar photometric redshifts (photo-zs), by considering the asymmetries in the relative flux distributions of quasars. The relative flux models are built with multivariate Skew-t distributions in the multidimensional space of relative fluxes as a function of redshift and magnitude. For 151,392 quasars in the SDSS, we achieve a photo-z accuracy, defined as the fraction of quasars for which the difference between the photo-z z_p and the spectroscopic redshift z_s satisfies |Δz| = |z_s - z_p|/(1 + z_s) within 0.1, of 74%. Combining the WISE W1 and W2 infrared data with the SDSS data, the photo-z accuracy is enhanced to 87%. Using the Pan-STARRS1 or DECaLS photometry with WISE W1 and W2 data, the photo-z accuracies are 79% and 72%, respectively. The prior probabilities as a function of magnitude for quasars, stars, and galaxies are calculated, respectively, based on (1) the quasar luminosity function, (2) the Milky Way synthetic simulation with the Besançon model, and (3) the Bayesian Galaxy Photometric Redshift estimation. The relative fluxes of stars are obtained with the Padova isochrones, and the relative fluxes of galaxies are modeled through galaxy templates. We test our classification method to select quasars using the DECaLS g, r, z, and WISE W1 and W2 photometry. The quasar selection completeness is higher than 70% over a wide redshift range (z ≳ 0.5), and the results are publicly available.
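
    The accuracy metric quoted in the abstract is straightforward to compute. This sketch evaluates the fraction of objects with |z_s - z_p|/(1 + z_s) ≤ 0.1 on a handful of invented redshift pairs, not SDSS data.

```python
# Sketch of the photo-z accuracy metric quoted in the abstract:
# the fraction of quasars with |z_spec - z_photo| / (1 + z_spec) <= 0.1.
# The redshift arrays are invented examples, not SDSS data.
import numpy as np

z_spec = np.array([0.8, 1.5, 2.2, 3.0, 0.5])
z_photo = np.array([0.85, 1.3, 2.25, 3.4, 0.52])

delta_z = np.abs(z_spec - z_photo) / (1.0 + z_spec)
accuracy = np.mean(delta_z <= 0.1)
print(f"photo-z accuracy (|dz|/(1+z) <= 0.1): {accuracy:.2%}")
```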

  2. A COMPARISON BETWEEN TWO ALGORITHMS FOR THE RETRIEVAL OF SOIL MOISTURE USING AMSR-E DATA

    Directory of Open Access Journals (Sweden)

    Simonetta ePaloscia

    2015-04-01

    Full Text Available A comparison between two algorithms for estimating soil moisture with microwave satellite data was carried out by using the datasets collected on the four Agricultural Research Service (ARS) watershed sites in the US from 2002 to 2009. These sites collectively represent a wide range of ground conditions and precipitation regimes (from natural to agricultural surfaces and from desert to humid regions) and provide long-term in-situ data. One of the algorithms is the artificial neural network-based algorithm developed by the Institute of Applied Physics of the National Research Council (IFAC-CNR), known as HydroAlgo, and the second one is the Single Channel Algorithm (SCA) developed by the USDA-ARS (US Department of Agriculture-Agricultural Research Service). Both algorithms are based on the same radiative transfer equations but are implemented very differently. Both made use of datasets provided by the Japanese Aerospace Exploration Agency (JAXA), within the framework of the Advanced Microwave Scanning Radiometer–Earth Observing System (AMSR-E) and Global Change Observation Mission–Water (GCOM/AMSR-2) programs. Results demonstrated that both algorithms perform better than the mission specified accuracy, with Root Mean Square Error (RMSE) ≤0.06 m3/m3 and Bias <0.02 m3/m3. These results expand on previous investigations using different algorithms and sites. The novelty of the paper consists of the fact that it is the first intercomparison of the HydroAlgo algorithm with a more traditional retrieval algorithm, which offers an approach to higher spatial resolution products.

  3. Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base

    Science.gov (United States)

    Borodinov, A. A.; Myasnikov, V. V.

    2018-04-01

    The present work is devoted to comparing the accuracy of known classification algorithms in the task of recognizing local objects in radar images under various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image, either by the method of image moments or by a method based on the Hough transform. The following classification algorithms are compared: decision tree, support vector machine, AdaBoost, and random forest. Principal component analysis is used to reduce the dimensionality. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.
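
    A comparison protocol of this kind (dimensionality reduction followed by several off-the-shelf classifiers) can be sketched with scikit-learn. The feature matrix and labels below are random placeholders standing in for preprocessed MSTAR images, and the pipeline shown is an assumption about the setup, not the authors' code.

```python
# Sketch of the comparison protocol: PCA for dimensionality reduction followed
# by several off-the-shelf classifiers. Random data stands in for MSTAR features.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.ensemble import AdaBoostClassifier, RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 64))        # placeholder image features
y = rng.integers(0, 3, size=300)      # placeholder class labels

classifiers = {
    "decision tree": DecisionTreeClassifier(),
    "SVM": SVC(),
    "AdaBoost": AdaBoostClassifier(),
    "random forest": RandomForestClassifier(),
}
for name, clf in classifiers.items():
    pipe = make_pipeline(PCA(n_components=16), clf)
    scores = cross_val_score(pipe, X, y, cv=5)
    print(f"{name}: mean CV accuracy = {scores.mean():.3f}")
```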

  4. Performance evaluation of 2D image registration algorithms with the numeric image registration and comparison platform

    International Nuclear Information System (INIS)

    Gerganov, G.; Kuvandjiev, V.; Dimitrova, I.; Mitev, K.; Kawrakow, I.

    2012-01-01

    The objective of this work is to present the capabilities of the NUMERICS web platform for evaluation of the performance of image registration algorithms. The NUMERICS platform is a web-accessible tool which provides access to dedicated numerical algorithms for registration and comparison of medical images (http://numerics.phys.uni-sofia.bg). The platform allows comparison of noisy medical images by means of different types of image comparison algorithms, which are based on statistical tests for outliers. The platform also allows 2D image registration with different techniques like Elastic Thin-Plate Spline registration, registration based on rigid transformations, affine transformations, as well as non-rigid image registration based on Mobius transformations. In this work we demonstrate how the platform can be used as a tool for evaluation of the quality of the image registration process. We demonstrate performance evaluation of a deformable image registration technique based on Mobius transformations. The transformations are applied with appropriate cost functions like: Mutual information, Correlation coefficient, Sum of Squared Differences. The emphasis is on the results provided by the platform to the user and their interpretation in the context of the performance evaluation of 2D image registration. The NUMERICS image registration and image comparison platform provides detailed statistical information about submitted image registration jobs and can be used to perform quantitative evaluation of the performance of different image registration techniques. (authors)
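
    The cost functions named in the abstract can be written compactly for a pair of registered 2D images. The sketch below implements sum of squared differences, the correlation coefficient, and a simple histogram-based mutual information estimate on random placeholder images; it is illustrative only and is not part of the NUMERICS platform.

```python
# Sketch of the cost functions named in the abstract (SSD, correlation
# coefficient, mutual information) for comparing two registered 2D images.
# The images are random placeholders, not data from the platform.
import numpy as np

rng = np.random.default_rng(2)
fixed = rng.random((64, 64))
moving = fixed + 0.05 * rng.standard_normal((64, 64))

def ssd(a, b):
    return float(np.sum((a - b) ** 2))

def correlation_coefficient(a, b):
    return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

def mutual_information(a, b, bins=32):
    hist, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    p_xy = hist / hist.sum()
    p_x, p_y = p_xy.sum(axis=1), p_xy.sum(axis=0)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x[:, None] * p_y[None, :])[nz])))

print("SSD:", ssd(fixed, moving))
print("CC: ", correlation_coefficient(fixed, moving))
print("MI: ", mutual_information(fixed, moving))
```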

  5. Comparison of genetic algorithm and harmony search for generator maintenance scheduling

    International Nuclear Information System (INIS)

    Khan, L.; Mumtaz, S.; Khattak, A.

    2012-01-01

    GMS (Generator Maintenance Scheduling) ranks very high in the decision making of power generation management. The generator maintenance schedule decides the time period of maintenance tasks, during which a reliable reserve margin must also be maintained. In this paper, a comparison of the GA (Genetic Algorithm) and HS (Harmony Search) algorithms is presented to solve the generator maintenance scheduling problem for WAPDA (Water And Power Development Authority) Pakistan. GA is a search procedure used to compute optimized solutions to search problems and is considered a global search heuristic. The HS algorithm is quite efficient because its convergence rate is very fast; it is based on the music improvisation process of searching for a perfect state of harmony. The two algorithms generate feasible and optimal solutions and overcome the limitations of conventional methods, including extensive computational effort that increases exponentially as the size of the problem increases. The proposed methods are tested, validated and compared on the WAPDA electric system. (author)

  6. EXPERIMENTAL COMPARISON OF HOMODYNE DEMODULATION ALGORITHMS FOR PHASE FIBER-OPTIC SENSOR

    Directory of Open Access Journals (Sweden)

    M. N. Belikin

    2015-11-01

    Full Text Available Subject of Research. The paper presents the results of an experimental comparative analysis of homodyne demodulation algorithms based on the differential cross multiplying method and on the arctangent method under the same conditions. The dependencies of the output signal parameters on the optical radiation intensity are studied for the considered demodulation algorithms. Method. The prototype of a single fiber-optic phase interferometric sensor has been used for the experimental comparison of signal demodulation algorithms. Main Results. We have found that homodyne demodulation based on the arctangent method provides a greater (by 7 dB on average) signal-to-noise ratio of the output signals over the acoustic frequency band from 100 Hz to 500 Hz, as compared to the differential cross multiplying algorithm. We have demonstrated that no change in the output signal amplitude occurs for the studied range of optical pulse amplitudes. The obtained results indicate that homodyne demodulation based on the arctangent method is most suitable for application in phase fiber-optic sensors. It provides higher repeatability of their characteristics than the differential cross multiplying algorithm. Practical Significance. Algorithms of interferometric signal demodulation are widely used in phase fiber-optic sensors. Improvement of their characteristics has a positive effect on the performance of such sensors.
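
    The arctangent method itself is compact: given the in-phase and quadrature components of the interferometric signal, the phase is recovered with a four-quadrant arctangent and unwrapped. The sketch below uses a simulated phase signal rather than sensor data, and assumes the quadrature components are already available.

```python
# Sketch of arctangent homodyne demodulation: given in-phase and quadrature
# components of the interferometric signal, the phase is recovered with
# arctan2 and unwrapped. Signals below are simulated, not sensor data.
import numpy as np

fs = 10_000                                      # sampling rate, Hz
t = np.arange(0, 0.1, 1 / fs)
phase_true = 0.8 * np.sin(2 * np.pi * 200 * t)   # simulated acoustic phase signal

i_comp = np.cos(phase_true)                      # in-phase component
q_comp = np.sin(phase_true)                      # quadrature component

phase_est = np.unwrap(np.arctan2(q_comp, i_comp))
print("max reconstruction error:", np.max(np.abs(phase_est - phase_true)))
```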

  7. Inversion of Land Surface Temperature (LST) Using Terra ASTER Data: A Comparison of Three Algorithms

    Directory of Open Access Journals (Sweden)

    Milton Isaya Ndossi

    2016-12-01

    Full Text Available Land Surface Temperature (LST) is an important measurement in studies related to the Earth surface's processes. The Advanced Space-borne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the Terra spacecraft is the currently available Thermal Infrared (TIR) imaging sensor with the highest spatial resolution. This study involves the comparison of LSTs inverted from the sensor using the Split Window Algorithm (SWA), the Single Channel Algorithm (SCA) and the Planck function. This study has used the National Oceanic and Atmospheric Administration's (NOAA) data to model and compare the results from the three algorithms. The data from the sensor have been processed by the Python programming language in a free and open source software package (QGIS) to enable users to make use of the algorithms. The study revealed that the three algorithms are suitable for LST inversion, whereby the Planck function showed the highest level of accuracy, the SWA had a moderate level of accuracy and the SCA had the least accuracy. The algorithms produced results with Root Mean Square Errors (RMSE) of 2.29 K, 3.77 K and 2.88 K for the Planck function, the SCA and the SWA, respectively.
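
    The Planck-function route can be illustrated by inverting the Planck law for a single thermal band to obtain brightness temperature from spectral radiance. The band-effective wavelength and radiance value below are illustrative assumptions, not ASTER calibration constants, and the sketch omits the emissivity and atmospheric corrections that a full LST retrieval requires.

```python
# Sketch of inverting the Planck function to get brightness temperature from
# a thermal-band spectral radiance. The band-effective wavelength and the
# radiance value are illustrative assumptions, not ASTER calibration values.
import math

H = 6.626e-34      # Planck constant, J s
C = 2.998e8        # speed of light, m/s
K = 1.381e-23      # Boltzmann constant, J/K

def brightness_temperature(radiance_w_m2_sr_um, wavelength_um):
    lam = wavelength_um * 1e-6                   # m
    radiance = radiance_w_m2_sr_um * 1e6         # W m^-2 sr^-1 m^-1
    c1 = 2.0 * H * C ** 2
    c2 = H * C / K
    return c2 / (lam * math.log(c1 / (lam ** 5 * radiance) + 1.0))

# Example: a ~10.6 um band with an arbitrary radiance value (~298 K result).
print(brightness_temperature(9.5, 10.6), "K")
```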

  8. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as the auxiliary rod; move_disk(A, C); the (N0 + 1)th disk is moved from A to C directly ...

  9. Comparison Spatial Pattern of Land Surface Temperature with Mono Window Algorithm and Split Window Algorithm: A Case Study in South Tangerang, Indonesia

    Science.gov (United States)

    Bunai, Tasya; Rokhmatuloh; Wibowo, Adi

    2018-05-01

    In this paper, two methods to retrieve the Land Surface Temperature (LST) from thermal infrared data supplied by bands 10 and 11 of the Thermal Infrared Sensor (TIRS) onboard Landsat 8 are compared. The first is the mono window algorithm developed by Qin et al. and the second is the split window algorithm by Rozenstein et al. The purpose of this study is to map the spatial distribution of land surface temperature, as well as to determine the more accurate algorithm for retrieving it by calculating the root mean square error (RMSE). Finally, we present a comparison of the spatial distribution of land surface temperature obtained by both algorithms; the more accurate algorithm with respect to RMSE is the split window algorithm, with an RMSE of 7.69 °C.

  10. Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region

    Directory of Open Access Journals (Sweden)

    Matko Šarić

    2008-06-01

    Full Text Available The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We proposed a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant colored pixels to total number of pixels. With this approach the detection of gradual transitions is improved by decreasing the number of false positives caused by some camera operations. We also compared performances of our algorithm and the standard twin-comparison method.
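
    The twin-comparison idea can be sketched on a one-dimensional sequence of frame-difference values: a high threshold flags abrupt cuts, while a lower threshold marks the possible start of a gradual transition whose accumulated difference must then exceed the high threshold. The thresholds and difference values below are invented, and the dominant-color term used in the paper is not modeled.

```python
# Sketch of the twin-comparison idea: a high threshold flags abrupt cuts,
# a lower threshold marks a possible start of a gradual transition, whose
# accumulated difference must then exceed the high threshold.
# Frame differences here are synthetic.
def twin_comparison(frame_diffs, t_high=0.5, t_low=0.15):
    cuts, graduals = [], []
    start, accumulated = None, 0.0
    for i, d in enumerate(frame_diffs):
        if d >= t_high:
            cuts.append(i)
            start, accumulated = None, 0.0
        elif d >= t_low:
            if start is None:
                start, accumulated = i, 0.0
            accumulated += d
            if accumulated >= t_high:
                graduals.append((start, i))
                start, accumulated = None, 0.0
        else:
            start, accumulated = None, 0.0
    return cuts, graduals

diffs = [0.02, 0.03, 0.6, 0.02, 0.2, 0.25, 0.22, 0.03, 0.01]
print(twin_comparison(diffs))   # one cut at index 2, one gradual span (4, 6)
```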

  11. A comparison between genetic algorithms and neural networks for optimizing fuel recharges in BWR

    International Nuclear Information System (INIS)

    Ortiz J, J.; Requena, I.

    2002-01-01

    In this work, the results of a genetic algorithm (AG) and a recurrent multi-state neural network (RNRME) for optimizing the fuel reloads of 5 cycles of the Laguna Verde nuclear power plant (CNLV) are presented. The fuel reloads obtained by both methods are compared, and it was observed that the RNRME creates better fuel distributions than the AG. Moreover, a comparison of the utility of using one technique or the other is made. (Author)

  12. Computerized Dental Comparison: A Critical Review of Dental Coding and Ranking Algorithms Used in Victim Identification.

    Science.gov (United States)

    Adams, Bradley J; Aschheim, Kenneth W

    2016-01-01

    Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study provides a comparison of the most commonly used conventional coding and sorting algorithms used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3 which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
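
    A percentage-of-matches ranking of the kind described can be sketched in a few lines. The seven-code scheme, the '?' placeholder for unrecoverable teeth, and the records below are hypothetical simplifications, not the paper's coding system or WinID3's.

```python
# Sketch of a percentage-of-matches ranking between a postmortem dental record
# and a set of antemortem records. The code letters (V, M, R, C, ...) and the
# records are hypothetical placeholders.
def percent_match(postmortem, antemortem):
    """Fraction of tooth positions with identical, comparable codes."""
    comparable = [(p, a) for p, a in zip(postmortem, antemortem) if p != "?" and a != "?"]
    if not comparable:
        return 0.0
    return sum(p == a for p, a in comparable) / len(comparable)

pm_record = list("VMRV?CRV")                 # one code per tooth; '?' = not recoverable
am_database = {
    "case-001": list("VMRVVCRV"),
    "case-002": list("VVRRVCMV"),
    "case-003": list("MMRV?CRV"),
}

ranked = sorted(am_database.items(),
                key=lambda kv: percent_match(pm_record, kv[1]), reverse=True)
for case_id, record in ranked:
    print(case_id, f"{percent_match(pm_record, record):.0%}")
```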

  13. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Directory of Open Access Journals (Sweden)

    E. Dall'Asta

    2014-06-01

    Full Text Available Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques is one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM), which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.

  14. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Science.gov (United States)

    Dall'Asta, E.; Roncella, R.

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques is one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM), which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.

  15. Comparison of vessel enhancement algorithms applied to time-of-flight MRA images for cerebrovascular segmentation.

    Science.gov (United States)

    Phellan, Renzo; Forkert, Nils D

    2017-11-01

    Vessel enhancement algorithms are often used as a preprocessing step for vessel segmentation in medical images to improve the overall segmentation accuracy. Each algorithm uses different characteristics to enhance vessels, such that the most suitable algorithm may vary for different applications. This paper presents a comparative analysis of the accuracy gains in vessel segmentation generated by the use of nine vessel enhancement algorithms: multiscale vesselness using the formulas described by Erdt (MSE), Frangi (MSF), and Sato (MSS), optimally oriented flux (OOF), ranking orientation responses of path operators (RORPO), the regularized Perona-Malik approach (RPM), vessel enhancing diffusion (VED), hybrid diffusion with continuous switch (HDCS), and the white top hat algorithm (WTH). The filters were evaluated and compared based on time-of-flight MRA datasets and corresponding manual segmentations from 5 healthy subjects and 10 patients with an arteriovenous malformation. Additionally, five synthetic angiographic datasets with corresponding ground truth segmentation were generated with three different noise levels (low, medium, and high) and also used for comparison. The parameters for each algorithm and subsequent segmentation were optimized using leave-one-out cross evaluation. The Dice coefficient, Matthews correlation coefficient, area under the ROC curve, number of connected components, and true positives were used for comparison. The results of this study suggest that vessel enhancement algorithms do not always lead to more accurate segmentation results compared to segmenting nonenhanced images directly. Multiscale vesselness algorithms, such as MSE, MSF, and MSS, proved to be robust to noise, while diffusion-based filters, such as RPM, VED, and HDCS, ranked at the top of the list in scenarios with medium or no noise. Filters that assume tubular shapes, such as MSE, MSF, MSS, OOF, RORPO, and VED, show a decrease in accuracy when considering patients with an AVM.
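
    Two of the evaluation metrics listed above are easy to reproduce on binary segmentation masks. The sketch below computes the Dice coefficient and the Matthews correlation coefficient on random placeholder volumes, not on the MRA data used in the study.

```python
# Sketch of two segmentation-comparison metrics (Dice, MCC) on binary masks.
# The masks are random placeholders, not vessel segmentations.
import numpy as np

rng = np.random.default_rng(3)
truth = rng.random((32, 32, 32)) > 0.9            # stand-in for a manual vessel mask
pred = truth ^ (rng.random(truth.shape) > 0.995)  # prediction with a few flipped voxels

def dice(a, b):
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def matthews_cc(a, b):
    tp = int(np.logical_and(a, b).sum())
    tn = int(np.logical_and(~a, ~b).sum())
    fp = int(np.logical_and(~a, b).sum())
    fn = int(np.logical_and(a, ~b).sum())
    denom = ((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn)) ** 0.5
    return (tp * tn - fp * fn) / denom if denom else 0.0

print(f"Dice = {dice(truth, pred):.3f}, MCC = {matthews_cc(truth, pred):.3f}")
```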

  16. Comparison of Algorithms for the Optimal Location of Control Valves for Leakage Reduction in WDNs

    Directory of Open Access Journals (Sweden)

    Enrico Creaco

    2018-04-01

    Full Text Available The paper presents the comparison of two different algorithms for the optimal location of control valves for leakage reduction in water distribution networks (WDNs). The former is based on the sequential addition (SA) of control valves. At the generic step Nval of SA, the search for the optimal combination of Nval valves is carried out, while retaining the optimal combination of Nval − 1 valves found at the previous step. Therefore, only one new valve location is searched for at each step of SA, among all the remaining available locations. The latter algorithm consists of a multi-objective genetic algorithm (GA), in which valve locations are encoded inside individual genes. For the sake of consistency, the same embedded algorithm, based on iterated linear programming (LP), was used inside SA and GA, to search for the optimal valve settings at various time slots in the day. The results of applications to two WDNs show that SA and GA yield identical results for small values of Nval. When this number grows, the limitations of SA, related to its reduced exploration of the search space, emerge. In fact, for higher values of Nval, SA tends to produce less beneficial valve locations in terms of leakage abatement. However, the smaller computation time of SA may make this algorithm preferable in the case of large WDNs, for which the application of GA would be overly burdensome.

  17. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET.

    Science.gov (United States)

    Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M

    2014-07-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm3), image reconstruction is a challenge. Therefore, optimization is needed to find the best algorithm in order to correctly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulations including the expected CdTe and electronics specifics.

  18. A comparison of an algorithm for automated sequential beam orientation selection (Cycle) with simulated annealing

    International Nuclear Information System (INIS)

    Woudstra, Evert; Heijmen, Ben J M; Storchi, Pascal R M

    2008-01-01

    Some time ago we developed and published a new deterministic algorithm (called Cycle) for automatic selection of beam orientations in radiotherapy. This algorithm is a plan generation process aiming at the prescribed PTV dose within hard dose and dose-volume constraints. The algorithm allows a large number of input orientations to be used and selects only the most efficient orientations, surviving the selection process. Efficiency is determined by a score function and is more or less equal to the extent of uninhibited access to the PTV for a specific beam during the selection process. In this paper we compare the capabilities of fast-simulated annealing (FSA) and Cycle for cases where local optima are supposed to be present. Five pancreas and five oesophagus cases previously treated in our institute were selected for this comparison. Plans were generated for FSA and Cycle, using the same hard dose and dose-volume constraints, and the largest possible achieved PTV doses as obtained from these algorithms were compared. The largest achieved PTV dose values were generally very similar for the two algorithms. In some cases FSA resulted in a slightly higher PTV dose than Cycle, at the cost of switching on substantially more beam orientations than Cycle. In other cases, when Cycle generated the solution with the highest PTV dose using only a limited number of non-zero weight beams, FSA seemed to have some difficulty in switching off the unfavourable directions. Cycle was faster than FSA, especially for large-dimensional feasible spaces. In conclusion, for the cases studied in this paper, we have found that despite the inherent drawback of sequential search as used by Cycle (where Cycle could probably get trapped in a local optimum), Cycle is nevertheless able to find comparable or sometimes slightly better treatment plans in comparison with FSA (which in theory finds the global optimum) especially in large-dimensional beam weight spaces

  19. On the mid-infrared variability of candidate eruptive variables (exors): A comparison between Spitzer and WISE data

    Energy Technology Data Exchange (ETDEWEB)

    Antoniucci, S.; Giannini, T.; Li Causi, G.; Lorenzetti, D., E-mail: simone.antoniucci@oa-roma.inaf.it, E-mail: teresa.giannini@oa-roma.inaf.it, E-mail: gianluca.licausi@oa-roma.inaf.it, E-mail: dario.lorenzetti@oa-roma.inaf.it [INAF-Osservatorio Astronomico di Roma, via Frascati 33, I-00040 Monte Porzio (Italy)

    2014-02-10

    Aiming to statistically study the variability in the mid-IR of young stellar objects, we have compared the 3.6, 4.5, and 24 μm Spitzer fluxes of 1478 sources belonging to the C2D (Cores to Disks) legacy program with the WISE fluxes at 3.4, 4.6, and 22 μm. From this comparison, we have selected a robust sample of 34 variable sources. Their variations were classified per spectral Class (according to the widely accepted scheme of Class I/flat/II/III protostars), and per star forming region. On average, the number of variable sources decreases with increasing Class and is definitely higher in Perseus and Ophiuchus than in Chamaeleon and Lupus. According to the paradigm Class ≡ Evolution, the photometric variability can be considered to be a feature more pronounced in less evolved protostars, and, as such, related to accretion processes. Moreover, our statistical findings agree with the current knowledge of star formation activity in different regions. The 34 selected variables were further investigated for similarities with known young eruptive variables, namely the EXors. In particular, we analyzed (1) the shape of the spectral energy distribution, (2) the IR excess over the stellar photosphere, (3) magnitude versus color variations, and (4) output parameters of model fitting. This first systematic search for EXors ends up with 11 bona fide candidates that can be considered as suitable targets for monitoring or future investigations.

  20. Identification of alternative splice variants in Aspergillus flavus through comparison of multiple tandem MS search algorithms

    Directory of Open Access Journals (Sweden)

    Chang Kung-Yen

    2011-07-01

    Full Text Available Abstract. Background: Database searching is the most frequently used approach for automated peptide assignment and protein inference from tandem mass spectra. The results, however, depend on the sequences in target databases and on the search algorithms. Recently, by using an alternative splicing database, we identified more proteins than with the annotated proteins in Aspergillus flavus. In this study, we aimed at finding a greater number of eligible splice variants based on newly available transcript sequences and the latest genome annotation. The improved database was then used to compare four search algorithms: Mascot, OMSSA, X! Tandem, and InsPecT. Results: The updated alternative splicing database predicted 15833 putative protein variants, 61% more than the previous results. There was transcript evidence for 50% of the updated genes compared to the previous 35% coverage. Database searches were conducted using the same set of spectral data, search parameters, and protein database but with different algorithms. The false discovery rates of the peptide-spectrum matches were estimated. Conclusions: We were able to detect dozens of new peptides using the improved alternative splicing database with the recently updated annotation of the A. flavus genome. Unlike the identifications of the peptides and the RefSeq proteins, large variations existed between the putative splice variants identified by different algorithms. Twelve candidate putative isoforms were reported based on the consensus peptide-spectrum matches. This suggests that the application of multiple search engines effectively reduced possible false positive results and validated the protein identifications from tandem mass spectra using an alternative splicing database.

  1. Quantitative comparison of direct phase retrieval algorithms in in-line phase tomography

    International Nuclear Information System (INIS)

    Langer, Max; Cloetens, Peter; Guigay, Jean-Pierre; Peyrin, Francoise

    2008-01-01

    A well-known problem in x-ray microcomputed tomography is low sensitivity. Phase contrast imaging offers an increase of sensitivity of up to a factor of 10^3 in the hard x-ray region, which makes it possible to image soft tissue and small density variations. If a sufficiently coherent x-ray beam, such as that obtained from a third generation synchrotron, is used, phase contrast can be obtained by simply moving the detector downstream of the imaged object. This setup is known as in-line or propagation based phase contrast imaging. A quantitative relationship exists between the phase shift induced by the object and the recorded intensity, and inversion of this relationship is called phase retrieval. Since the phase shift is proportional to projections through the three-dimensional refractive index distribution in the object, once the phase is retrieved, the refractive index can be reconstructed by using the phase as input to a tomographic reconstruction algorithm. A comparison between four phase retrieval algorithms is presented. The algorithms are based on the transport of intensity equation (TIE), the transport of intensity equation for weak absorption, the contrast transfer function (CTF), and a mixed approach between the CTF and TIE, respectively. The compared methods all rely on linearization of the relationship between phase shift and recorded intensity to yield fast phase retrieval algorithms. The phase retrieval algorithms are compared using both simulated and experimental data, acquired at the European Synchrotron Radiation Facility third generation synchrotron light source. The algorithms are evaluated in terms of two different reconstruction error metrics. While being slightly less computationally efficient, the mixed approach shows the best performance in terms of the chosen criteria.

  2. Phase-Retrieval Uncertainty Estimation and Algorithm Comparison for the JWST-ISIM Test Campaign

    Science.gov (United States)

    Aronstein, David L.; Smith, J. Scott

    2016-01-01

    Phase retrieval, the process of determining the exit-pupil wavefront of an optical instrument from image-plane intensity measurements, is the baseline methodology for characterizing the wavefront for the suite of science instruments (SIs) in the Integrated Science Instrument Module (ISIM) for the James Webb Space Telescope (JWST). JWST is a large, infrared space telescope with a 6.5-meter diameter primary mirror. JWST is currently NASA's flagship mission and will be the premier space observatory of the next decade. ISIM contains four optical benches with nine unique instruments, including redundancies. ISIM was characterized at the Goddard Space Flight Center (GSFC) in Greenbelt, MD in a series of cryogenic vacuum tests using a telescope simulator. During these tests, phase-retrieval algorithms were used to characterize the instruments. The objective of this paper is to describe the Monte-Carlo simulations that were used to establish uncertainties (i.e., error bars) for the wavefronts of the various instruments in ISIM. Multiple retrieval algorithms were used in the analysis of ISIM phase-retrieval focus-sweep data, including an iterative-transform algorithm and a nonlinear optimization algorithm. These algorithms emphasize the recovery of numerous optical parameters, including low-order wavefront composition described by Zernike polynomial terms and high-order wavefront described by a point-by-point map, location of instrument best focus, focal ratio, exit-pupil amplitude, the morphology of any extended object, and optical jitter. The secondary objective of this paper is to report on the relative accuracies of these algorithms for the ISIM instrument tests, and a comparison of their computational complexity and their performance on central and graphical processing unit clusters. From a phase-retrieval perspective, the ISIM test campaign includes a variety of source illumination bandwidths, various image-plane sampling criteria above and below the Nyquist–Shannon

  3. Sensitivity study of voxel-based PET image comparison to image registration algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Chen, Aileen B.; Berbeco, Ross [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States); Aerts, Hugo J. W. L. [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 and Department of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, Massachusetts 02115 (United States)

    2014-11-01

    Purpose: Accurate deformable registration is essential for voxel-based comparison of sequential positron emission tomography (PET) images for proper adaptation of treatment plan and treatment response assessment. The comparison may be sensitive to the method of deformable registration as the optimal algorithm is unknown. This study investigated the impact of registration algorithm choice on therapy response evaluation. Methods: Sixteen patients with 20 lung tumors underwent a pre- and post-treatment computed tomography (CT) and 4D FDG-PET scans before and after chemoradiotherapy. All CT images were coregistered using a rigid and ten deformable registration algorithms. The resulting transformations were then applied to the respective PET images. Moreover, the tumor region defined by a physician on the registered PET images was classified into progressor, stable-disease, and responder subvolumes. Particularly, voxels with standardized uptake value (SUV) decreases >30% were classified as responder, while voxels with SUV increases >30% were progressor. All other voxels were considered stable-disease. The agreement of the subvolumes resulting from different registration algorithms was assessed by Dice similarity index (DSI). Coefficient of variation (CV) was computed to assess variability of DSI between individual tumors. Root mean square difference (RMS_rigid) of the rigidly registered CT images was used to measure the degree of tumor deformation. RMS_rigid and DSI were correlated by Spearman correlation coefficient (R) to investigate the effect of tumor deformation on DSI. Results: Median DSI_rigid was found to be 72%, 66%, and 80%, for progressor, stable-disease, and responder, respectively. Median DSI_deformable was 63%–84%, 65%–81%, and 82%–89%. Variability of DSI was substantial and similar for both rigid and deformable algorithms with CV > 10% for all subvolumes. Tumor deformation had moderate to significant impact on DSI for progressor

  4. Comparison of soil moisture retrieval algorithms based on the synergy between SMAP and SMOS-IC

    Science.gov (United States)

    Ebrahimi-Khusfi, Mohsen; Alavipanah, Seyed Kazem; Hamzeh, Saeid; Amiraslani, Farshad; Neysani Samany, Najmeh; Wigneron, Jean-Pierre

    2018-05-01

    This study was carried out to evaluate possible improvements of the soil moisture (SM) retrievals from the SMAP observations, based on the synergy between SMAP and SMOS. We assessed the impacts of the vegetation and soil roughness parameters on SM retrievals from SMAP observations. To do so, the effects of three key input parameters including the vegetation optical depth (VOD), effective scattering albedo (ω) and soil roughness (HR) parameters were assessed with the emphasis on the synergy with the VOD product derived from SMOS-IC, a new and simpler version of the SMOS algorithm, over two years of data (April 2015 to April 2017). First, a comprehensive comparison of seven SM retrieval algorithms was made to find the best one for SM retrievals from the SMAP observations. All results were evaluated against in situ measurements over 548 stations from the International Soil Moisture Network (ISMN) in terms of four statistical metrics: correlation coefficient (R), root mean square error (RMSE), bias and unbiased RMSE (UbRMSE). The comparison of seven SM retrieval algorithms showed that the dual channel algorithm based on the additional use of the SMOS-IC VOD product (selected algorithm) led to the best results of SM retrievals over 378, 399, 330 and 271 stations (out of a total of 548 stations) in terms of R, RMSE, UbRMSE and both R & UbRMSE, respectively. Moreover, comparing the measured and retrieved SM values showed that this synergy approach led to an increase in median R value from 0.6 to 0.65 and a decrease in median UbRMSE from 0.09 m3/m3 to 0.06 m3/m3. Second, using the algorithm selected in a first step and defined above, the ω and HR parameters were calibrated over 218 rather homogenous ISMN stations. 72 combinations of various values of ω and HR were used for the calibration over different land cover classes. In this calibration process, the optimal values of ω and HR were found for the different land cover classes. The obtained results indicated that the

  5. Comparison of the mass preconditioned HMC and the DD-HMC algorithm for two-flavour QCD

    CERN Document Server

    Marinkovic, Marina

    2010-01-01

    Mass preconditioned HMC and DD-HMC are among the most popular algorithms to simulate Wilson fermions. We present a comparison of the performance of the two algorithms for realistic quark masses and lattice sizes. In particular, we use the locally deflated solver of the DD-HMC environment also for the mass preconditioned simulations.

  6. Definition and Analysis of a System for the Automated Comparison of Curriculum Sequencing Algorithms in Adaptive Distance Learning

    Science.gov (United States)

    Limongelli, Carla; Sciarrone, Filippo; Temperini, Marco; Vaste, Giulia

    2011-01-01

    LS-Lab provides automatic support to comparison/evaluation of the Learning Object Sequences produced by different Curriculum Sequencing Algorithms. Through this framework a teacher can verify the correspondence between the behaviour of different sequencing algorithms and her pedagogical preferences. In fact the teacher can compare algorithms…

  7. Comparison between dynamic programming and genetic algorithm for hydro unit economic load dispatch

    Directory of Open Access Journals (Sweden)

    Bin Xu

    2014-10-01

    Full Text Available The hydro unit economic load dispatch (ELD) is of great importance in energy conservation and emission reduction. Dynamic programming (DP) and genetic algorithm (GA) are two representative algorithms for solving ELD problems. The goal of this study was to examine the performance of DP and GA while they were applied to ELD. We established numerical experiments to conduct performance comparisons between DP and GA with two given schemes. The schemes included comparing the CPU time of the algorithms when they had the same solution quality, and comparing the solution quality when they had the same CPU time. The numerical experiments were applied to the Three Gorges Reservoir in China, which is equipped with 26 hydro generation units. We found the relation between the performance of algorithms and the number of units through experiments. Results show that GA is adept at searching for optimal solutions in low-dimensional cases. In some cases, such as with a number of units of less than 10, GA's performance is superior to that of a coarse-grid DP. However, GA loses its superiority in high-dimensional cases. DP is powerful in obtaining stable and high-quality solutions. Its performance can be maintained even while searching over a large solution space. Nevertheless, due to its exhaustive enumerating nature, it costs excess time in low-dimensional cases.

  8. Comparison of Genetic Algorithm and Hill Climbing for Shortest Path Optimization Mapping

    Directory of Open Access Journals (Sweden)

    Fronita Mona

    2018-01-01

    Full Text Available The Traveling Salesman Problem (TSP) is an optimization problem of finding the shortest path that reaches several destinations in one trip without passing through the same city twice and returns to the original departure city; the process is applied to delivery systems. This comparison is done using two methods, namely genetic algorithm optimization and hill climbing. Hill climbing works by directly selecting a new path in which neighbouring cities are exchanged to obtain a route shorter than the previous one, without further testing. Genetic algorithms depend on the input parameters: the population size, the crossover probability, the mutation probability and the number of generations. The process of determining the shortest path is supported by software that uses the Google Maps API. Tests were carried out 20 times with 8, 16, 24 and 32 cities to see which method is optimal in terms of distance and computation time. Experiments conducted with 3, 4, 5 and 6 cities produced the same optimal distance for the genetic algorithm and hill climbing; the distances begin to differ with 7 cities. The overall results show that, in these tests, hill climbing is more optimal for small numbers of cities, while problems with more than 30 cities are better optimized using genetic algorithms.
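
    A minimal hill-climbing variant for TSP can be sketched with a simple pairwise city swap accepted only when it shortens the tour. The coordinates below are random and distances are Euclidean; the paper's implementation, which uses real road distances via the Google Maps API, is not reproduced here.

```python
# Minimal hill-climbing sketch for TSP: repeatedly try swapping two cities in
# the tour and keep the swap if the total distance shrinks. Coordinates are
# random placeholders.
import itertools
import math
import random

random.seed(0)
cities = [(random.uniform(0, 100), random.uniform(0, 100)) for _ in range(8)]

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

order = list(range(len(cities)))
random.shuffle(order)
improved = True
while improved:
    improved = False
    for i, j in itertools.combinations(range(len(order)), 2):
        candidate = order[:]
        candidate[i], candidate[j] = candidate[j], candidate[i]
        if tour_length(candidate) < tour_length(order):
            order, improved = candidate, True
print("tour:", order, "length:", round(tour_length(order), 1))
```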

  9. Algorithms imaging tests comparison following the first febrile urinary tract infection in children.

    Science.gov (United States)

    Tombesi, María M; Alconcher, Laura F; Lucarelli, Lucas; Ciccioli, Agustina

    2017-08-01

    To compare the diagnostic sensitivity, costs and radiation doses of imaging test algorithms developed by the Argentine Society of Pediatrics in 2003 and 2015, against British and American guidelines after the first febrile urinary tract infection (UTI). Inclusion criteria: children ≤ 2 years old with their first febrile UTI and normal ultrasound, voiding cystourethrography and dimercaptosuccinic acid scintigraphy, according to the algorithm established by the Argentine Society of Pediatrics in 2003, treated between 2003 and 2010. The comparisons between algorithms were carried out through retrospective simulation. Eighty (80) patients met the inclusion criteria; 51 (63%) had vesicoureteral reflux (VUR); 6% of the cases were severe. Renal scarring was observed in 6 patients (7.5%). Cost: ARS 404,000. Radiation: 160 millisieverts. With the Argentine Society of Pediatrics' algorithm developed in 2015, the diagnosis of 4 VURs and 2 cases of renal scarring would have been missed. The cost of this omission would have been ARS 301,800 and 124 millisieverts of radiation. British and American guidelines would have missed the diagnosis of all VURs and all cases of renal scarring, with a related cost of ARS 23,000 and ARS 40,000, respectively, and no radiation. Intensive protocols are highly sensitive to VUR and renal scarring, but they imply high costs and doses of radiation, and result in questionable benefits. Sociedad Argentina de Pediatría

  10. Computational complexity of algorithms for sequence comparison, short-read assembly and genome alignment.

    Science.gov (United States)

    Baichoo, Shakuntala; Ouzounis, Christos A

    A multitude of algorithms for sequence comparison, short-read assembly and whole-genome alignment have been developed in the general context of molecular biology, to support technology development for high-throughput sequencing, numerous applications in genome biology and fundamental research on comparative genomics. The computational complexity of these algorithms has been previously reported in original research papers, yet this often neglected property has not been reviewed previously in a systematic manner and for a wider audience. We provide a review of space and time complexity of key sequence analysis algorithms and highlight their properties in a comprehensive manner, in order to identify potential opportunities for further research in algorithm or data structure optimization. The complexity aspect is poised to become pivotal as we will be facing challenges related to the continuous increase of genomic data on unprecedented scales and complexity in the foreseeable future, when robust biological simulation at the cell level and above becomes a reality. Copyright © 2017 Elsevier B.V. All rights reserved.

  11. A multithreaded parallel implementation of a dynamic programming algorithm for sequence comparison.

    Science.gov (United States)

    Martins, W S; Del Cuvillo, J B; Useche, F J; Theobald, K B; Gao, G R

    2001-01-01

    This paper discusses the issues involved in implementing a dynamic programming algorithm for biological sequence comparison on a general-purpose parallel computing platform based on a fine-grain event-driven multithreaded program execution model. Fine-grain multithreading permits efficient parallelism exploitation in this application both by taking advantage of asynchronous point-to-point synchronizations and communication with low overheads and by effectively tolerating latency through the overlapping of computation and communication. We have implemented our scheme on EARTH, a fine-grain event-driven multithreaded execution and architecture model which has been ported to a number of parallel machines with off-the-shelf processors. Our experimental results show that the dynamic programming algorithm can be efficiently implemented on EARTH systems with high performance (e.g., speedup of 90 on 120 nodes), good programmability and reasonable cost.
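
    The serial form of the dynamic-programming recurrence is shown below as a Smith-Waterman-style local alignment score. The fine-grain multithreaded wavefront scheduling that the paper implements on EARTH is not reproduced, and the scoring parameters and sequences are illustrative.

```python
# Serial sketch of the dynamic-programming recurrence underlying pairwise
# sequence comparison (Smith-Waterman-style local alignment score).
# Scores and sequences are illustrative placeholders.
def local_alignment_score(a, b, match=2, mismatch=-1, gap=-2):
    rows, cols = len(a) + 1, len(b) + 1
    h = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = h[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # Local alignment: scores never drop below zero.
            h[i][j] = max(0, diag, h[i - 1][j] + gap, h[i][j - 1] + gap)
            best = max(best, h[i][j])
    return best

print(local_alignment_score("GATTACA", "GCATGCA"))
```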

  12. Distinguishing between cancer driver and passenger gene alteration candidates via cross-species comparison: a pilot study

    International Nuclear Information System (INIS)

    Ji, Xinglai; Tang, Jie; Halberg, Richard; Busam, Dana; Ferriera, Steve; Peña, Maria Marjorette O; Venkataramu, Chinnambally; Yeatman, Timothy J; Zhao, Shaying

    2010-01-01

    We are developing a cross-species comparison strategy to distinguish between cancer driver- and passenger gene alteration candidates, by utilizing the difference in genomic location of orthologous genes between the human and other mammals. As an initial test of this strategy, we conducted a pilot study with human colorectal cancer (CRC) and its mouse model C57BL/6J Apc Min/+ , focusing on human 5q22.2 and 18q21.1-q21.2. We first performed bioinformatics analysis on the evolution of 5q22.2 and 18q21.1-q21.2 regions. Then, we performed exon-targeted sequencing, real time quantitative polymerase chain reaction (qPCR), and real time quantitative reverse transcriptase PCR (qRT-PCR) analyses on a number of genes of both regions with both human and mouse colon tumors. These two regions (5q22.2 and 18q21.1-q21.2) are frequently deleted in human CRCs and encode genuine colorectal tumor suppressors APC and SMAD4. They also encode genes such as MCC (mutated in colorectal cancer) with their role in CRC etiology unknown. We have discovered that both regions are evolutionarily unstable, resulting in genes that are clustered in each human region being found scattered at several distinct loci in the genome of many other species. For instance, APC and MCC are within 200 kb apart in human 5q22.2 but are 10 Mb apart in the mouse genome. Importantly, our analyses revealed that, while known CRC driver genes APC and SMAD4 were disrupted in both human colorectal tumors and tumors from Apc Min/+ mice, the questionable MCC gene was disrupted in human tumors but appeared to be intact in mouse tumors. These results indicate that MCC may not actually play any causative role in early colorectal tumorigenesis. We also hypothesize that its disruption in human CRCs is likely a mere result of its close proximity to APC in the human genome. Expanding this pilot study to the entire genome may identify more questionable genes like MCC, facilitating the discovery of new CRC driver gene candidates

  13. Distinguishing between cancer driver and passenger gene alteration candidates via cross-species comparison: a pilot study.

    Science.gov (United States)

    Ji, Xinglai; Tang, Jie; Halberg, Richard; Busam, Dana; Ferriera, Steve; Peña, Maria Marjorette O; Venkataramu, Chinnambally; Yeatman, Timothy J; Zhao, Shaying

    2010-08-13

    We are developing a cross-species comparison strategy to distinguish between cancer driver- and passenger gene alteration candidates, by utilizing the difference in genomic location of orthologous genes between the human and other mammals. As an initial test of this strategy, we conducted a pilot study with human colorectal cancer (CRC) and its mouse model C57BL/6J ApcMin/+, focusing on human 5q22.2 and 18q21.1-q21.2. We first performed bioinformatics analysis on the evolution of 5q22.2 and 18q21.1-q21.2 regions. Then, we performed exon-targeted sequencing, real time quantitative polymerase chain reaction (qPCR), and real time quantitative reverse transcriptase PCR (qRT-PCR) analyses on a number of genes of both regions with both human and mouse colon tumors. These two regions (5q22.2 and 18q21.1-q21.2) are frequently deleted in human CRCs and encode genuine colorectal tumor suppressors APC and SMAD4. They also encode genes such as MCC (mutated in colorectal cancer) with their role in CRC etiology unknown. We have discovered that both regions are evolutionarily unstable, resulting in genes that are clustered in each human region being found scattered at several distinct loci in the genome of many other species. For instance, APC and MCC are within 200 kb apart in human 5q22.2 but are 10 Mb apart in the mouse genome. Importantly, our analyses revealed that, while known CRC driver genes APC and SMAD4 were disrupted in both human colorectal tumors and tumors from ApcMin/+ mice, the questionable MCC gene was disrupted in human tumors but appeared to be intact in mouse tumors. These results indicate that MCC may not actually play any causative role in early colorectal tumorigenesis. We also hypothesize that its disruption in human CRCs is likely a mere result of its close proximity to APC in the human genome. Expanding this pilot study to the entire genome may identify more questionable genes like MCC, facilitating the discovery of new CRC driver gene candidates.

  14. An extensive assessment of network alignment algorithms for comparison of brain connectomes.

    Science.gov (United States)

    Milano, Marianna; Guzzi, Pietro Hiram; Tymofieva, Olga; Xu, Duan; Hess, Christofer; Veltri, Pierangelo; Cannataro, Mario

    2017-06-06

    Recently the study of the complex system of connections in neural systems, i.e. the connectome, has gained a central role in neurosciences. The modeling and analysis of connectomes are therefore a growing area. Here we focus on the representation of connectomes by using graph theory formalisms. Macroscopic human brain connectomes are usually derived from neuroimages; the analyzed brains are co-registered in the image domain and brought to a common anatomical space. An atlas is then applied in order to define anatomically meaningful regions that will serve as the nodes of the network - this process is referred to as parcellation. The atlas-based parcellations present some known limitations in cases of early brain development and abnormal anatomy. Consequently, it has been recently proposed to perform atlas-free random brain parcellation into nodes and align brains in the network space instead of the anatomical image space, as a way to deal with the unknown correspondences of the parcels. Such process requires modeling of the brain using graph theory and the subsequent comparison of the structure of graphs. The latter step may be modeled as a network alignment (NA) problem. In this work, we first define the problem formally, then we test six existing state of the art of network aligners on diffusion MRI-derived brain networks. We compare the performances of algorithms by assessing six topological measures. We also evaluated the robustness of algorithms to alterations of the dataset. The results confirm that NA algorithms may be applied in cases of atlas-free parcellation for a fully network-driven comparison of connectomes. The analysis shows MAGNA++ is the best global alignment algorithm. The paper presented a new analysis methodology that uses network alignment for validating atlas-free parcellation brain connectomes. The methodology has been experimented on several brain datasets.

  15. Comparison of phase unwrapping algorithms for topography reconstruction based on digital speckle pattern interferometry

    Science.gov (United States)

    Li, Yuanbo; Cui, Xiaoqian; Wang, Hongbei; Zhao, Mengge; Ding, Hongbin

    2017-10-01

    Digital speckle pattern interferometry (DSPI) can diagnose topography evolution in a real-time, continuous and non-destructive manner, and has been considered a most promising technique for Plasma-Facing Components (PFCs) topography diagnostics under the complicated environment of a tokamak. It is important for the study of digital speckle pattern interferometry to enhance speckle patterns and obtain the real topography of the ablated crater. In this paper, two numerical models based on the flood-fill algorithm have been developed to obtain the real profile by unwrapping the wrapped phase in the speckle interference pattern, which can be calculated from four intensity images by means of the 4-step phase-shifting technique. During phase unwrapping with the flood-fill algorithm, noise pollution and other inevitable factors lead to poor-quality reconstruction results, which affects the authenticity of the restored topography. The calculation of quality parameters was therefore introduced to obtain a quality map from the wrapped phase map, and this work presents two different methods to calculate the quality parameters. The quality parameters are then used to guide the path of the flood-fill algorithm, giving priority to pixels with good quality values, so that the quality of the speckle interference pattern reconstruction results is improved. A comparison between the plain flood-fill algorithm suitable for speckle pattern interferometry and the quality-guided flood-fill algorithm (with the two different quality calculation approaches) shows that the errors caused by noise pollution and by the discontinuity of the fringes were successfully reduced.
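    The quality-guided flood fill described above can be illustrated with a short sketch. This is a minimal illustration, not the authors' implementation: pixels are unwrapped in descending order of an externally supplied quality value, each relative to an already-unwrapped neighbour, and the quality map, seed choice and toy data are assumptions.

```python
# Minimal quality-guided flood-fill phase unwrapping sketch (illustrative only).
import heapq
import numpy as np

def quality_guided_unwrap(wrapped, quality):
    """wrapped: 2-D phase map in radians; quality: same shape, higher = more reliable."""
    h, w = wrapped.shape
    unwrapped = np.array(wrapped, dtype=float)
    visited = np.zeros((h, w), dtype=bool)
    seed = np.unravel_index(np.argmax(quality), quality.shape)  # start at the best pixel
    visited[seed] = True
    heap = [(-quality[seed], int(seed[0]), int(seed[1]))]       # max-heap via negation
    while heap:
        _, r, c = heapq.heappop(heap)
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and not visited[nr, nc]:
                # Remove the 2*pi jump relative to the already-unwrapped neighbour.
                diff = wrapped[nr, nc] - unwrapped[r, c]
                unwrapped[nr, nc] = unwrapped[r, c] + np.angle(np.exp(1j * diff))
                visited[nr, nc] = True
                heapq.heappush(heap, (-quality[nr, nc], nr, nc))
    return unwrapped

# Toy check: a wrapped linear ramp with a quality map favouring the centre row.
yy, xx = np.mgrid[0:32, 0:32]
wrapped = np.angle(np.exp(1j * 0.5 * xx))
quality = 1.0 / (1.0 + np.abs(yy - 16))
recovered = quality_guided_unwrap(wrapped, quality)
print(np.allclose(np.diff(recovered[16]), 0.5))   # ramp slope restored along row 16
```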

  16. Comparison of the inversion algorithms applied to the ozone vertical profile retrieval from SCIAMACHY limb measurements

    Directory of Open Access Journals (Sweden)

    A. Rozanov

    2007-09-01

    Full Text Available This paper is devoted to an intercomparison of ozone vertical profiles retrieved from the measurements of scattered solar radiation performed by the SCIAMACHY instrument in the limb viewing geometry. Three different inversion algorithms including the prototype of the operational Level 1 to 2 processor to be operated by the European Space Agency are considered. Unlike usual validation studies, this comparison removes the uncertainties arising when comparing measurements made by different instruments probing slightly different air masses and focuses on the uncertainties specific to the modeling-retrieval problem only. The intercomparison was performed for 5 selected orbits of SCIAMACHY showing a good overall agreement of the results in the middle stratosphere, whereas considerable discrepancies were identified in the lower stratosphere and upper troposphere altitude region. Additionally, comparisons with ground-based lidar measurements are shown for selected profiles demonstrating an overall correctness of the retrievals.

  17. Comparison of Firefly algorithm and Artificial Immune System algorithm for lot streaming in m-machine flow shop scheduling

    Directory of Open Access Journals (Sweden)

    G. Vijay Chakaravarthy

    2012-11-01

    Full Text Available Lot streaming is a technique used to split the processing of lots into several sublots (transfer batches) to allow the overlapping of operations in multistage manufacturing systems, thereby shortening the production time (makespan). The objective of this paper is to minimize the makespan and total flow time of the n-job, m-machine lot streaming problem in a flow shop with equal and variable size sublots, and also to determine the optimal sublot size. In recent times, researchers have been applying intelligent heuristics to solve flow shop problems with lot streaming. In this research, the Firefly Algorithm (FA) and Artificial Immune System (AIS) algorithms are used to solve the problem. The results obtained by the proposed algorithms are also compared with the performance of previously reported traditional heuristics. The computational results show that the identified algorithms are more efficient and effective than the algorithms already tested for this problem.
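    As a concrete illustration of the firefly mechanics referred to above, the sketch below applies the standard attractiveness-based move to a continuous toy objective. The makespan evaluation, sublot encoding and the AIS comparison of the paper are not reproduced; the objective function, parameter values and bounds are assumptions.

```python
# Minimal firefly-algorithm sketch on a continuous toy objective (illustrative only).
import numpy as np

def firefly_minimize(objective, dim, n_fireflies=20, n_iter=100,
                     alpha=0.2, beta0=1.0, gamma=1.0, bounds=(-5.0, 5.0)):
    rng = np.random.default_rng(0)
    lo, hi = bounds
    x = rng.uniform(lo, hi, size=(n_fireflies, dim))
    f = np.array([objective(xi) for xi in x])
    for _ in range(n_iter):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if f[j] < f[i]:                          # firefly j is brighter (lower cost)
                    r2 = float(np.sum((x[i] - x[j]) ** 2))
                    beta = beta0 * np.exp(-gamma * r2)   # attractiveness decays with distance
                    x[i] = np.clip(x[i] + beta * (x[j] - x[i])
                                   + alpha * (rng.random(dim) - 0.5), lo, hi)
                    f[i] = objective(x[i])
    best = int(np.argmin(f))
    return x[best], float(f[best])

# Toy run: the sphere function stands in for the makespan/flow-time objective.
best_x, best_f = firefly_minimize(lambda v: float(np.sum(v ** 2)), dim=5)
print(best_f)
```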

  18. A comparison of two photon planning algorithms for 8 MV and 25 MV X-ray beams in lung

    International Nuclear Information System (INIS)

    Kan, M.W.K.; Young, E.C.M.; Yu, P.K.N.

    1995-01-01

    The results of a comparison of two photon planning algorithms, the Clarkson Scatter Integration algorithm and the Equivalent Tissue-air Ratio (ETAR) algorithm, are reported, using a simple lung phantom for 8 MV and 25 MV X-ray beams of field sizes 5 cm x 5 cm and 10 cm x 10 cm. Central axis depth-dose distributions were measured with a thimble chamber or a Markus parallel-plate chamber. Dose profile distributions were measured with TLD rods and films. Measured dose distributions were then compared to predicted dose distributions. Both algorithms overestimate the dose at mid-lung as they do not account for the effect of electronic disequilibrium. The Clarkson algorithm consistently shows less accurate results than the ETAR algorithm. There is additional error in the case of the Clarkson algorithm because of the assumption of a unit-density medium in calculating scatter, which gives an overestimate of the effective scatter-air ratios in lung. For a 5 cm x 5 cm field, the error of dose prediction for the 25 MV X-ray beam at mid-lung is 15.8% and 12.8% for the Clarkson and ETAR algorithms, respectively. At 8 MV the error is 9.3% and 5.1%, respectively. In addition, both algorithms underestimate the penumbral width at mid-lung as they do not account for the penumbral flaring effect in a low-density medium. 25 refs., 2 tabs., 5 figs

  19. A comparison of haze removal algorithms and their impacts on classification accuracy for Landsat imagery

    Directory of Open Access Journals (Sweden)

    Yang Xiao

    Full Text Available The quality of Landsat images in humid areas is considerably degraded by haze in terms of their spectral response pattern, which limits the possibility of their application using visible and near-infrared bands. A variety of haze removal algorithms have been proposed to correct these unsatisfactory illumination effects caused by haze contamination. The purpose of this study was to illustrate the difference between two major algorithms (the improved homomorphic filtering (HF) and the virtual cloud point (VCP)) in their effectiveness in solving spatially varying haze contamination, and to evaluate the impacts of haze removal on land cover classification. A case study exploiting large quantities of Landsat TM images acquired under clear and hazy conditions in the most humid areas of China proved that both haze removal algorithms perform well in processing Landsat images contaminated by haze. The outcome of the application of VCP appears to be more similar to the reference images than that of HF. Moreover, Landsat images with VCP haze removal can improve the classification accuracy effectively in comparison to those without haze removal, especially in the cloud-contaminated area.

  20. A comparison of two adaptive algorithms for the control of active engine mounts

    Science.gov (United States)

    Hillis, A. J.; Harrison, A. J. L.; Stoten, D. P.

    2005-08-01

    This paper describes work conducted in order to control automotive active engine mounts, consisting of a conventional passive mount and an internal electromagnetic actuator. Active engine mounts seek to cancel the oscillatory forces generated by the rotation of out-of-balance masses within the engine. The actuator generates a force dependent on a control signal from an algorithm implemented with a real-time DSP. The filtered-x least-mean-square (FXLMS) adaptive filter is used as a benchmark for comparison with a new implementation of the error-driven minimal controller synthesis (Er-MCSI) adaptive controller. Both algorithms are applied to an active mount fitted to a saloon car equipped with a four-cylinder turbo-diesel engine, and have no a priori knowledge of the system dynamics. The steady-state and transient performance of the two algorithms are compared and the relative merits of the two approaches are discussed. The Er-MCSI strategy offers significant computational advantages as it requires no cancellation path modelling. The Er-MCSI controller is found to perform in a fashion similar to the FXLMS filter—typically reducing chassis vibration by 50-90% under normal driving conditions.
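    The FXLMS benchmark named here has a compact core: the controller taps are adapted using the error signal and a reference signal filtered through a model of the secondary (actuator-to-sensor) path. The single-channel sketch below illustrates that update; the secondary-path FIR model, signals, filter length and step size are assumptions, and the Er-MCSI controller is not reproduced.

```python
# Minimal single-channel FXLMS sketch (illustrative only).
import numpy as np

def fxlms(reference, disturbance, sec_path, n_taps=32, mu=1e-3):
    """reference: tonal reference signal; disturbance: vibration at the error
    sensor; sec_path: FIR estimate of the actuator-to-sensor (secondary) path."""
    w = np.zeros(n_taps)                  # adaptive controller taps
    x_buf = np.zeros(n_taps)              # reference history (controller input)
    fx_buf = np.zeros(n_taps)             # filtered-reference history (for the update)
    u_buf = np.zeros(len(sec_path))       # control-signal history (secondary path input)
    xr_buf = np.zeros(len(sec_path))      # reference history (for filtering through sec_path)
    errors = np.zeros(len(reference))
    for n in range(len(reference)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = reference[n]
        u = w @ x_buf                                  # actuator drive signal
        u_buf = np.roll(u_buf, 1); u_buf[0] = u
        e = disturbance[n] + sec_path @ u_buf          # residual at the error sensor
        xr_buf = np.roll(xr_buf, 1); xr_buf[0] = reference[n]
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = sec_path @ xr_buf
        w -= mu * e * fx_buf                           # FXLMS weight update
        errors[n] = e
    return w, errors

# Toy run: a 40 Hz tone as reference/disturbance, a pure delay as the assumed path.
fs = 1000
t = np.arange(0, 5, 1 / fs)
ref = np.sin(2 * np.pi * 40 * t)
dist = 0.8 * np.sin(2 * np.pi * 40 * t + 0.5)
sec = np.zeros(8); sec[5] = 1.0
w, e = fxlms(ref, dist, sec)
print(float(np.mean(e[:500] ** 2)), float(np.mean(e[-500:] ** 2)))  # error power drops
```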

  1. Comparison of Metabolic Pathways in Escherichia coli by Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Patricia Ortegon

    2015-01-01

    Full Text Available In order to understand how cellular metabolism has taken its modern form, the conservation and variation between metabolic pathways were evaluated by using a genetic algorithm (GA). The GA approach considered information on the complete metabolism of the bacterium Escherichia coli K-12, as deposited in the KEGG database, and the enzymes belonging to a particular pathway were transformed into enzymatic step sequences by using the breadth-first search algorithm. These sequences represent contiguous enzymes linked to each other, based on their catalytic activities as they are encoded in the Enzyme Commission numbers. In a subsequent step, these sequences were compared using a GA in an all-against-all (pairwise comparisons) approach. Individual reactions were chosen based on their measure of fitness to act as parents of offspring, which constitute the new generation. The compared sequences were used to construct a similarity matrix (of fitness values) that was then clustered by using a k-medoids algorithm. A total of 34 clusters of conserved reactions were obtained, and their sequences were finally aligned with a multiple-sequence alignment GA optimized to align all the reaction sequences included in each group or cluster. From these comparisons, maps associated with the metabolism of similar compounds were also found to contain similar enzymatic step sequences, reinforcing the Patchwork Model for the evolution of metabolism in E. coli K-12, an observation that can be expanded to other organisms for which metabolism information is available. Finally, our mapping of these reactions is discussed, with illustrations from a particular case.

  2. A new warfarin dosing algorithm including VKORC1 3730 G > A polymorphism: comparison with results obtained by other published algorithms.

    Science.gov (United States)

    Cini, Michela; Legnani, Cristina; Cosmi, Benilde; Guazzaloca, Giuliana; Valdrè, Lelia; Frascaro, Mirella; Palareti, Gualtiero

    2012-08-01

    Warfarin dosing is affected by clinical and genetic variants, but the contribution of the genotype associated with warfarin resistance in pharmacogenetic algorithms has not been well assessed yet. We developed a new dosing algorithm including polymorphisms associated both with warfarin sensitivity and resistance in the Italian population, and its performance was compared with those of eight previously published algorithms. Clinical and genetic data (CYP2C9*2, CYP2C9*3, VKORC1 -1639 G > A, and VKORC1 3730 G > A) were used to elaborate the new algorithm. Derivation and validation groups comprised 55 (58.2% men, mean age 69 years) and 40 (57.5% men, mean age 70 years) patients, respectively, who were on stable anticoagulation therapy for at least 3 months with different oral anticoagulation therapy (OAT) indications. Performance of the new algorithm, evaluated with the mean absolute error (MAE), defined as the absolute value of the difference between observed and predicted daily maintenance dose, the correlation with the observed dose and the R² value, was comparable with or slightly lower than that obtained using the other algorithms. The new algorithm could correctly assign 53.3%, 50.0%, and 57.1% of patients to the low (≤25 mg/week), intermediate (26-44 mg/week) and high (≥45 mg/week) dosing ranges, respectively. Our data showed a significant increase in predictive accuracy among patients requiring a high warfarin dose compared with the other algorithms (ranging from 0% to 28.6%). The algorithm including VKORC1 3730 G > A, associated with warfarin resistance, allowed a more accurate identification of resistant patients who require a higher warfarin dosage.
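    The two headline measures in this record, the MAE against the observed maintenance dose and the share of patients assigned to the correct weekly dosing range, are straightforward to compute. The sketch below is a minimal illustration with invented dose values; it is not the published algorithm or its cohort.

```python
# Minimal sketch of the MAE and dosing-range agreement measures (toy data only).
import numpy as np

def mae(observed, predicted):
    return float(np.mean(np.abs(np.asarray(observed) - np.asarray(predicted))))

def dose_range(weekly_dose):
    # Ranges quoted in the abstract: low <= 25, intermediate 26-44, high >= 45 mg/week.
    if weekly_dose <= 25:
        return "low"
    if weekly_dose < 45:
        return "intermediate"
    return "high"

observed = [21.0, 35.0, 52.5, 28.0]    # hypothetical observed weekly maintenance doses
predicted = [24.5, 30.0, 47.0, 41.0]   # hypothetical algorithm predictions
print("MAE (mg/week):", mae(observed, predicted))
print("correct range:", sum(dose_range(o) == dose_range(p)
                            for o, p in zip(observed, predicted)) / len(observed))
```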

  3. In Pursuit of LSST Science Requirements: A Comparison of Photometry Algorithms

    Science.gov (United States)

    Becker, Andrew C.; Silvestri, Nicole M.; Owen, Russell E.; Ivezić, Željko; Lupton, Robert H.

    2007-12-01

    We have developed an end-to-end photometric data-processing pipeline to compare current photometric algorithms commonly used on ground-based imaging data. This test bed is exceedingly adaptable and enables us to perform many research and development tasks, including image subtraction and co-addition, object detection and measurements, the production of photometric catalogs, and the creation and stocking of database tables with time-series information. This testing has been undertaken to evaluate existing photometry algorithms for consideration by a next-generation image-processing pipeline for the Large Synoptic Survey Telescope (LSST). We outline the results of our tests for four packages: the Sloan Digital Sky Survey's Photo package, DAOPHOT and ALLFRAME, DOPHOT, and two versions of Source Extractor (SExtractor). The ability of these algorithms to perform point-source photometry, astrometry, shape measurements, and star-galaxy separation and to measure objects at low signal-to-noise ratio is quantified. We also perform a detailed crowded-field comparison of DAOPHOT and ALLFRAME, and profile the speed and memory requirements in detail for SExtractor. We find that both DAOPHOT and Photo are able to perform aperture photometry to high enough precision to meet LSST's science requirements, and less adequately at PSF-fitting photometry. Photo performs the best at simultaneous point- and extended-source shape and brightness measurements. SExtractor is the fastest algorithm, and recent upgrades in the software yield high-quality centroid and shape measurements with little bias toward faint magnitudes. ALLFRAME yields the best photometric results in crowded fields.

  4. Algorithm, applications and evaluation for protein comparison by Ramanujan Fourier transform.

    Science.gov (United States)

    Zhao, Jian; Wang, Jiasong; Hua, Wei; Ouyang, Pingkai

    2015-12-01

    The amino acid sequence of a protein determines its chemical properties, chain conformation and biological functions. Protein sequence comparison is of great importance for identifying similarities of protein structures and inferring their functions. Many properties of a protein correspond to the low-frequency signals within the sequence. Low-frequency modes in protein sequences are linked to the secondary structures, membrane protein types, and sub-cellular localizations of the proteins. In this paper, we present the Ramanujan Fourier transform (RFT) with a fast algorithm to analyze the low-frequency signals of protein sequences. The RFT method is applied to similarity analysis of protein sequences with the Resonant Recognition Model (RRM). The results show that the proposed fast RFT method for protein comparison is more efficient than the commonly used discrete Fourier transform (DFT). RFT can detect common frequencies as significant features for specific protein families, and the RFT spectrum heat map of protein sequences demonstrates the information conservation in the sequence comparison. The proposed method offers a new tool for pattern recognition, feature extraction and structural analysis of protein sequences. Copyright © 2015 Elsevier Ltd. All rights reserved.
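    For orientation, the Ramanujan Fourier transform projects a signal onto Ramanujan sums c_q(n) instead of complex exponentials. The sketch below computes those sums naively; the normalization, the choice of q_max and the numerical encoding of the protein (toy values standing in for an RRM-style mapping) are assumptions, and the paper's fast algorithm is not reproduced.

```python
# Naive Ramanujan-sum projection sketch (not the paper's fast RFT algorithm).
from math import gcd, cos, pi

def ramanujan_sum(q, n):
    # c_q(n) = sum over a coprime to q of cos(2*pi*a*n/q); imaginary parts cancel.
    return sum(cos(2 * pi * a * n / q) for a in range(1, q + 1) if gcd(a, q) == 1)

def rft_coefficients(signal, q_max):
    # Crude projection of the signal onto each Ramanujan sum, 1/N-normalised (assumption).
    N = len(signal)
    return [sum(signal[n] * ramanujan_sum(q, n + 1) for n in range(N)) / N
            for q in range(1, q_max + 1)]

# Toy numerical sequence standing in for a numerically encoded protein.
toy = [0.08, 0.00, 0.09, 0.05, 0.08, 0.01, 0.09, 0.05]
print(rft_coefficients(toy, q_max=4))
```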

  5. SU-E-T-516: Dosimetric Validation of AcurosXB Algorithm in Comparison with AAA & CCC Algorithms for VMAT Technique.

    Science.gov (United States)

    Kathirvel, M; Subramanian, V Sai; Arun, G; Thirumalaiswamy, S; Ramalingam, K; Kumar, S Ashok; Jagadeesh, K

    2012-06-01

    To dosimetrically validate the AcurosXB algorithm for Volumetric Modulated Arc Therapy (VMAT) in comparison with the standard clinical Anisotropic Analytic Algorithm (AAA) and Collapsed Cone Convolution (CCC) dose calculation algorithms. The AcurosXB dose calculation algorithm is available with the Varian Eclipse treatment planning system (V10). It uses a grid-based Boltzmann equation solver to predict dose precisely in less time. This study was made to assess the algorithm's ability to predict dose as accurately as it is delivered, for which five clinical cases each of brain, head & neck, thoracic, pelvic and SBRT treatments were taken. Verification plans were created on a multicube phantom with the iMatrixx-2D detector array, and dose prediction was then done with the AcurosXB, AAA & CCC (COMPASS system) algorithms; the same plans were delivered on the CLINAC-iX treatment machine. The delivered dose was captured in the iMatrixx plane for all 25 plans. The measured dose was taken as the reference to quantify the agreement of the AcurosXB calculation algorithm against the previously validated AAA and CCC algorithms. Gamma evaluation was performed with clinical criteria of distance-to-agreement 3 and 2 mm and dose difference 3% and 2% in the omnipro-I'MRT software. Plans were evaluated in terms of correlation coefficient, quantitative area gamma and average gamma. The study shows good agreement, with mean correlations of 0.9979±0.0012, 0.9984±0.0009 and 0.9979±0.0011 for AAA, CCC & Acuros, respectively. The mean area gamma for the 3 mm/3% criterion was found to be 98.80±1.04, 98.14±2.31 and 98.08±2.01, and for 2 mm/2% it was 93.94±3.83, 87.17±10.54 and 92.36±5.46 for AAA, CCC & Acuros, respectively. The mean average gamma for 3 mm/3% was 0.26±0.07, 0.42±0.08 and 0.28±0.09, and for 2 mm/2% it was 0.39±0.10, 0.64±0.11 and 0.42±0.13 for AAA, CCC & Acuros, respectively. This study demonstrated that the AcurosXB algorithm had good agreement with the AAA & CCC algorithms in terms of dose prediction. In conclusion, the AcurosXB algorithm provides a valid, accurate and speedy alternative to AAA
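    The gamma criteria quoted above (3%/3 mm and 2%/2 mm) combine a dose-difference tolerance with a distance-to-agreement tolerance. The brute-force 1-D sketch below illustrates the calculation on a synthetic profile; clinical gamma analysis is performed in 2-D/3-D with the software named in the abstract, and the profiles, grid and global normalisation used here are assumptions.

```python
# Brute-force 1-D gamma-index sketch (illustrative only).
import numpy as np

def gamma_pass_rate(ref_dose, eval_dose, positions, dose_tol=0.03, dist_tol=3.0):
    ref, ev, pos = (np.asarray(a, float) for a in (ref_dose, eval_dose, positions))
    norm = ref.max()                                   # global dose normalisation
    gammas = []
    for r, p in zip(ref, pos):
        dd = (ev - r) / (dose_tol * norm)              # scaled dose differences
        dx = (pos - p) / dist_tol                      # scaled spatial offsets
        gammas.append(np.sqrt(dd ** 2 + dx ** 2).min())
    gammas = np.array(gammas)
    return float(np.mean(gammas <= 1.0)), gammas

# Hypothetical measured vs. calculated profile on a 1 mm grid.
x = np.arange(0.0, 50.0, 1.0)
measured = np.exp(-((x - 25.0) / 10.0) ** 2)
calculated = np.exp(-((x - 25.5) / 10.2) ** 2)
rate, _ = gamma_pass_rate(measured, calculated, x)
print(rate)
```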

  6. An evaluation of scanpath-comparison and machine-learning classification algorithms used to study the dynamics of analogy making.

    Science.gov (United States)

    French, Robert M; Glady, Yannick; Thibaut, Jean-Pierre

    2017-08-01

    In recent years, eyetracking has begun to be used to study the dynamics of analogy making. Numerous scanpath-comparison algorithms and machine-learning techniques are available that can be applied to the raw eyetracking data. We show how scanpath-comparison algorithms, combined with multidimensional scaling and a classification algorithm, can be used to resolve an outstanding question in analogy making-namely, whether or not children's and adults' strategies in solving analogy problems are different. (They are.) We show which of these scanpath-comparison algorithms is best suited to the kinds of analogy problems that have formed the basis of much analogy-making research over the years. Furthermore, we use machine-learning classification algorithms to examine the item-to-item saccade vectors making up these scanpaths. We show which of these algorithms best predicts, from very early on in a trial, on the basis of the frequency of various item-to-item saccades, whether a child or an adult is doing the problem. This type of analysis can also be used to predict, on the basis of the item-to-item saccade dynamics in the first third of a trial, whether or not a problem will be solved correctly.

  7. A new comparison of hyperspectral anomaly detection algorithms for real-time applications

    Science.gov (United States)

    Díaz, María.; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    Due to the high spectral resolution that remotely sensed hyperspectral images provide, there has been an increasing interest in anomaly detection. The aim of anomaly detection is to single out pixels whose spectral signature differs significantly from the background spectra. Basically, anomaly detectors mark pixels with a certain score, considering as anomalies those whose scores are higher than a threshold. Receiver Operating Characteristic (ROC) curves have been widely used as an assessment measure in order to compare the performance of different algorithms. ROC curves are graphical plots which illustrate the trade-off between false positive and true positive rates. However, they are limited for making deep comparisons because they discard relevant factors required in real-time applications such as run times, costs of misclassification and the ability to mark anomalies with high scores. This last point is fundamental in anomaly detection in order to distinguish anomalies easily from the background without any posterior processing. An extensive set of simulations has been carried out using different anomaly detection algorithms, comparing their performances and efficiencies using several extra metrics in order to complement ROC curve analysis. Results support our proposal and demonstrate that ROC curves do not provide a good visualization of detection performances by themselves. Moreover, a figure of merit has been proposed in this paper which encompasses in a single global metric all the measures yielded by the proposed additional metrics. This figure, named Detection Efficiency (DE), therefore takes into account several crucial types of performance assessment that ROC curves do not consider. Results demonstrate that algorithms with the best detection performances according to ROC curves do not have the highest DE values. Consequently, the recommendation of using extra measures to properly evaluate performances has been supported and justified by

  8. Comparison of different chaotic maps in particle swarm optimization algorithm for long-term cascaded hydroelectric system scheduling

    International Nuclear Information System (INIS)

    He Yaoyao; Zhou Jianzhong; Xiang Xiuqiao; Chen Heng; Qin Hui

    2009-01-01

    The goal of this paper is to present a novel chaotic particle swarm optimization (CPSO) algorithm and to compare the efficiency of three one-dimensional chaotic maps within a symmetrical region for long-term cascaded hydroelectric system scheduling. The introduced chaotic maps improve the global optimization capability of the CPSO algorithm. Moreover, a piecewise linear interpolation function is employed to transform all constraints into restrictions on the upriver water level when maximizing the objective function. Numerical results and comparisons demonstrate the effectiveness and speed of the different algorithms on a practical hydro system.
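    To make the chaotic-map idea concrete, the sketch below replaces the uniform random factors of a standard PSO velocity update with iterates of the logistic map, a typical one-dimensional chaotic map. The hydro-system constraints, the piecewise linear water-level interpolation and the parameter choices of the paper are omitted or assumed; the toy objective is only a stand-in.

```python
# Minimal chaotic-PSO sketch: logistic-map iterates replace rand() in the update.
import numpy as np

def cpso_minimize(objective, dim, n_particles=30, n_iter=200,
                  w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    rng = np.random.default_rng(1)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest = x.copy()
    pbest_f = np.array([objective(p) for p in x])
    gbest = pbest[np.argmin(pbest_f)].copy()
    z1 = rng.uniform(0.1, 0.9, (n_particles, dim))    # chaotic states for the two factors
    z2 = rng.uniform(0.1, 0.9, (n_particles, dim))
    for _ in range(n_iter):
        z1 = 4.0 * z1 * (1.0 - z1)                    # logistic map iteration
        z2 = 4.0 * z2 * (1.0 - z2)
        v = w * v + c1 * z1 * (pbest - x) + c2 * z2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.array([objective(p) for p in x])
        improved = f < pbest_f
        pbest[improved] = x[improved]
        pbest_f[improved] = f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, float(pbest_f.min())

best, val = cpso_minimize(lambda p: float(np.sum(p ** 2)), dim=4)
print(val)
```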

  9. Accuracy comparison of algorithms for determination of image center coordinates in optoelectronic devices

    Directory of Open Access Journals (Sweden)

    N. A. Starasotnikau

    2018-01-01

    Full Text Available Accuracy in determining the coordinates of images having simple shapes is considered one of the important and significant parameters in metrological optoelectronic systems such as autocollimators, stellar sensors, Shack-Hartmann sensors, schemes for geometric calibration of digital cameras for aerial and space imagery, and various tracking systems. The paper describes a mathematical model of a measuring stand based on a collimator which projects a test-object onto the photodetector of an optoelectronic device. The mathematical model takes into account the characteristic noises of photodetectors: the shot noise of the desired signal (photon noise), the shot noise of the dark signal, readout noise and the spatial heterogeneity of the CCD (charge-coupled device) matrix elements. In order to reduce the noise effect, it is proposed to apply the Wiener filter for smoothing the image and its unambiguous identification, and also to apply a threshold according to the brightness level. The paper contains a comparison of two algorithms for determination of coordinates: by the energy gravity center and by the contour. Sobel, Prewitt, Roberts, Laplacian of Gaussian and Canny detectors have been used for determination of the test-object contour. The essence of the contour algorithm lies in searching for an image contour in the form of a circle, with its subsequent approximation and determination of the image center. An error calculation has been made while determining the coordinates of the gravity center for test-objects of various diameters (5, 10, 20, 30, 40 and 50 pixels of the photodetector) and signal-to-noise ratio values of 200, 100, 70, 20 and 10. The signal-to-noise ratio has been calculated as the difference between the maximum image intensity of the test-object and the background, divided by the mean-square deviation of the background. The accuracy of coordinate determination improved by 0.5-1 order of magnitude when the signal-to-noise ratio increased. Accuracy
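    The energy-gravity-centre estimate described above amounts to an intensity-weighted centroid taken after a brightness threshold. Below is a minimal sketch on a synthetic noisy spot; the Wiener pre-filtering and the contour-based alternative with edge detectors are not reproduced, and the spot, noise level and threshold are arbitrary.

```python
# Intensity-weighted centroid ("energy gravity centre") after thresholding.
import numpy as np

def gravity_center(image, threshold):
    img = np.asarray(image, dtype=float)
    img = np.where(img > threshold, img - threshold, 0.0)   # suppress background
    rows, cols = np.indices(img.shape)
    total = img.sum()
    return float((rows * img).sum() / total), float((cols * img).sum() / total)

# Synthetic 64x64 spot centred at (25.3, 30.7) with Gaussian noise.
yy, xx = np.mgrid[0:64, 0:64]
spot = 200.0 * np.exp(-((xx - 30.7) ** 2 + (yy - 25.3) ** 2) / 20.0)
noisy = spot + np.random.default_rng(2).normal(0.0, 5.0, spot.shape)
print(gravity_center(noisy, threshold=15.0))   # should be close to (25.3, 30.7)
```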

  10. Predicting Performance: A Comparison of University Supervisors' Predictions and Teacher Candidates' Scores on a Teaching Performance Assessment

    Science.gov (United States)

    Sandholtz, Judith Haymore; Shea, Lauren M.

    2012-01-01

    The implementation of teaching performance assessments has prompted a range of concerns. Some educators question whether these assessments provide information beyond what university supervisors gain through their formative evaluations and classroom observations of candidates. This research examines the relationship between supervisors' predictions…

  11. Influence on dose calculation by difference of dose calculation algorithms in stereotactic lung irradiation. Comparison of pencil beam convolution (inhomogeneity correction: batho power law) and analytical anisotropic algorithm

    International Nuclear Information System (INIS)

    Tachibana, Masayuki; Noguchi, Yoshitaka; Fukunaga, Jyunichi; Hirano, Naomi; Yoshidome, Satoshi; Hirose, Takaaki

    2009-01-01

    The monitor unit (MU) was calculated by pencil beam convolution (inhomogeneity correction algorithm: batho power law) [PBC (BPL)] which is the dose calculation algorithm based on measurement in the past in the stereotactic lung irradiation study. The recalculation was done by analytical anisotropic algorithm (AAA), which is the dose calculation algorithm based on theory data. The MU calculated by PBC (BPL) and AAA was compared for each field. In the result of the comparison of 1031 fields in 136 cases, the MU calculated by PBC (BPL) was about 2% smaller than that calculated by AAA. This depends on whether one does the calculation concerning the extension of the second electrons. In particular, the difference in the MU is influenced by the X-ray energy. With the same X-ray energy, when the irradiation field size is small, the lung pass length is long, the lung pass length percentage is large, and the CT value of the lung is low, and the difference of MU is increased. (author)

  12. Comparison of algorithms for blood stain detection applied to forensic hyperspectral imagery

    Science.gov (United States)

    Yang, Jie; Messinger, David W.; Mathew, Jobin J.; Dube, Roger R.

    2016-05-01

    Blood stains are among the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Early detection of blood stains is particularly important since the blood reacts physically and chemically with air and materials over time. Accurate identification of blood remnants, including regions that might have been intentionally cleaned, is an important aspect of forensic investigation. Hyperspectral imaging might be a potential method to detect blood stains because it is non-contact and provides substantial spectral information that can be used to identify regions in a scene with trace amounts of blood. The potential complexity of scenes in which such vast violence occurs can be high when the range of scene material types and conditions containing blood stains at a crime scene are considered. Some stains are hard to detect by the unaided eye, especially if a conscious effort to clean the scene has occurred (we refer to these as "latent" blood stains). In this paper we present the initial results of a study of the use of hyperspectral imaging algorithms for blood detection in complex scenes. We describe a hyperspectral imaging system which generates images covering 400 nm - 700 nm visible range with a spectral resolution of 10 nm. Three image sets of 31 wavelength bands were generated using this camera for a simulated indoor crime scene in which blood stains were placed on a T-shirt and walls. To detect blood stains in the scene, Principal Component Analysis (PCA), Subspace Reed Xiaoli Detection (SRXD), and Topological Anomaly Detection (TAD) algorithms were used. Comparison of the three hyperspectral image analysis techniques shows that TAD is most suitable for detecting blood stains and discovering latent blood stains.
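    Of the detectors compared in this record, the simplest relative is the classical global RX detector, which scores each pixel spectrum by its Mahalanobis distance from the scene mean. The sketch below shows that baseline on a synthetic cube; it is not the Subspace RX, PCA or TAD variant used in the study, and the cube dimensions and implanted anomaly are invented.

```python
# Global RX anomaly-detector sketch on a synthetic hyperspectral cube.
import numpy as np

def rx_scores(cube):
    """cube: hyperspectral image of shape (rows, cols, bands)."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov_inv = np.linalg.pinv(np.cov(X, rowvar=False))   # pseudo-inverse for stability
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, cov_inv, d)    # Mahalanobis distance per pixel
    return scores.reshape(rows, cols)

rng = np.random.default_rng(3)
cube = rng.normal(0.2, 0.02, size=(40, 40, 31))          # 31-band background
cube[10, 10] += 0.3                                      # implanted anomalous pixel
print(np.unravel_index(np.argmax(rx_scores(cube)), (40, 40)))   # expect (10, 10)
```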

  13. Spatio-Temporal Evaluation and Comparison of MM5 Model Using Similarity Algorithm

    Directory of Open Access Journals (Sweden)

    N. Siabi

    2016-02-01

    Full Text Available Introduction: The temporal and spatial change of meteorological and environmental variables is very important. These changes can be predicted by numerical prediction models over time and in different locations, and can be provided as spatial zoning maps with interpolation methods such as geostatistics (16, 6). But these maps are only comparable to each other visually, qualitatively and univariately, and for a limited number of maps (15). To resolve this problem the similarity algorithm is used. This algorithm is a method for simultaneous comparison of a large number of data (18). Numerical prediction models such as MM5 have been used in different studies (10, 22, 23). But little research has been done to quantitatively compare the spatio-temporal similarity of the models with real data. The purpose of this paper is to integrate geostatistical techniques with the similarity algorithm to study the spatial and temporal MM5 model predictions against real data. Materials and Methods: The study area is the north east of Iran, between 55 and 61 degrees of longitude and 30 and 38 degrees of latitude. Monthly and annual temperature and precipitation data for the period 1990-2010 were received from the Meteorological Agency and Department of Energy. MM5 model data, with a spatial resolution of 0.5 × 0.5 degree, were downloaded from the NASA website (5). GS+ and ArcGis software were used to produce each variable map. We used the multivariate methods co-kriging and kriging with an external drift, applying topography and height as a secondary variable via a Digital Elevation Model (6, 12, 14). Then the standardization and similarity algorithms (9, 11) were applied to each map grid point by programming in MATLAB software. The spatial and temporal similarities between the data collections and the model results were obtained as F values. These values are between 0 and 0.5, where a value below 0.2 indicates good similarity and a value above 0.5 shows very poor similarity. The results were plotted on maps by MATLAB

  14. A comparison of two dose calculation algorithms-anisotropic analytical algorithm and Acuros XB-for radiation therapy planning of canine intranasal tumors.

    Science.gov (United States)

    Nagata, Koichi; Pethel, Timothy D

    2017-07-01

    Although anisotropic analytical algorithm (AAA) and Acuros XB (AXB) are both radiation dose calculation algorithms that take into account the heterogeneity within the radiation field, Acuros XB is inherently more accurate. The purpose of this retrospective method comparison study was to compare them and evaluate the dose discrepancy within the planning target volume (PTV). Radiation therapy (RT) plans of 11 dogs with intranasal tumors treated by radiation therapy at the University of Georgia were evaluated. All dogs were planned for intensity-modulated radiation therapy using nine coplanar X-ray beams that were equally spaced, then dose calculated with anisotropic analytical algorithm. The same plan with the same monitor units was then recalculated using Acuros XB for comparisons. Each dog's planning target volume was separated into air, bone, and tissue and evaluated. The mean dose to the planning target volume estimated by Acuros XB was 1.3% lower. It was 1.4% higher for air, 3.7% lower for bone, and 0.9% lower for tissue. The volume of planning target volume covered by the prescribed dose decreased by 21% when Acuros XB was used due to increased dose heterogeneity within the planning target volume. Anisotropic analytical algorithm relatively underestimates the dose heterogeneity and relatively overestimates the dose to the bone and tissue within the planning target volume for the radiation therapy planning of canine intranasal tumors. This can be clinically significant especially if the tumor cells are present within the bone, because it may result in relative underdosing of the tumor. © 2017 American College of Veterinary Radiology.

  15. Query by image example: The CANDID approach

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.M.; Cannon, M. [Los Alamos National Lab., NM (United States). Computer Research and Applications Group; Hush, D.R. [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Electrical and Computer Engineering

    1995-02-01

    CANDID (Comparison Algorithm for Navigating Digital Image Databases) was developed to enable content-based retrieval of digital imagery from large databases using a query-by-example methodology. A user provides an example image to the system, and images in the database that are similar to that example are retrieved. The development of CANDID was inspired by the N-gram approach to document fingerprinting, where a ``global signature`` is computed for every document in a database and these signatures are compared to one another to determine the similarity between any two documents. CANDID computes a global signature for every image in a database, where the signature is derived from various image features such as localized texture, shape, or color information. A distance between probability density functions of feature vectors is then used to compare signatures. In this paper, the authors present CANDID and highlight two results from their current research: subtracting a ``background`` signature from every signature in a database in an attempt to improve system performance when using inner-product similarity measures, and visualizing the contribution of individual pixels in the matching process. These ideas are applicable to any histogram-based comparison technique.
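    To illustrate the signature idea in miniature: the sketch below uses a plain grey-level histogram as a stand-in for CANDID's texture, shape or colour signatures, subtracts a mean "background" signature as discussed in the record, and ranks database images by an inner-product (cosine) similarity. The feature choice, bin count and synthetic images are assumptions, not the actual CANDID feature set.

```python
# Histogram-signature sketch with background subtraction (illustrative only).
import numpy as np

def signature(image, bins=64):
    hist, _ = np.histogram(np.asarray(image).ravel(), bins=bins, range=(0, 256))
    hist = hist.astype(float)
    return hist / hist.sum()                      # normalised signature ("PDF")

def similarity(sig_a, sig_b, background=None):
    if background is not None:                    # remove content common to the database
        sig_a, sig_b = sig_a - background, sig_b - background
    denom = np.linalg.norm(sig_a) * np.linalg.norm(sig_b)
    return float(sig_a @ sig_b / denom) if denom else 0.0

rng = np.random.default_rng(4)
db_images = [np.clip(rng.normal(50 + 35 * i, 15, (32, 32)), 0, 255) for i in range(5)]
query = np.clip(db_images[2] + rng.normal(0, 5, (32, 32)), 0, 255)
background = np.mean([signature(im) for im in db_images], axis=0)
scores = [similarity(signature(query), signature(im), background) for im in db_images]
print(scores.index(max(scores)))   # the query's source image (index 2) should rank first
```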

  16. Comparison of new and existing algorithms for the analysis of 2D radioxenon beta gamma spectra

    International Nuclear Information System (INIS)

    Deshmukh, Nikhil; Prinke, Amanda; Miller, Brian; McIntyre, Justin

    2017-01-01

    The aim of this study is to compare radioxenon beta–gamma analysis algorithms using simulated spectra with experimentally measured background, where the ground truth of the signal is known. We believe that this is among the largest efforts to date in terms of the number of synthetic spectra generated and number of algorithms compared using identical spectra. We generate an estimate for the minimum detectable counts for each isotope using each algorithm. The paper also points out a conceptual model to put the various algorithms into a continuum. Finally, our results show that existing algorithms can be improved and some newer algorithms can be better than the ones currently used.

  17. Comparison of new and existing algorithms for the analysis of 2D radioxenon beta gamma spectra

    International Nuclear Information System (INIS)

    Deshmukh, Nikhil; Prinke, Amanda; Miller, Brian; McIntyre, Justin

    2017-01-01

    The aim of this paper is to compare radioxenon beta-gamma analysis algorithms using simulated spectra with experimentally measured background, where the ground truth of the signal is known. We believe that this is among the largest efforts to date in terms of the number of synthetic spectra generated and number of algorithms compared using identical spectra. We generate an estimate for the minimum detectable counts for each isotope using each algorithm. The paper also points out a conceptual model to put the various algorithms into a continuum. Our results show that existing algorithms can be improved and some newer algorithms can be better than the ones currently used. (author)

  18. Inter-algorithm lesion volumetry comparison of real and 3D simulated lung lesions in CT

    Science.gov (United States)

    Robins, Marthony; Solomon, Justin; Hoye, Jocelyn; Smith, Taylor; Ebner, Lukas; Samei, Ehsan

    2017-03-01

    The purpose of this study was to establish volumetric exchangeability between real and computational lung lesions in CT. We compared the overall relative volume estimation performance of segmentation tools when used to measure real lesions in actual patient CT images and computational lesions virtually inserted into the same patient images (i.e., hybrid datasets). Pathologically confirmed malignancies from 30 thoracic patient cases from the Reference Image Database to Evaluate Therapy Response (RIDER) were modeled and used as the basis for the comparison. Lesions included isolated nodules as well as those attached to the pleura or other lung structures. Patient images were acquired using a 16 detector row or 64 detector row CT scanner (Lightspeed 16 or VCT; GE Healthcare). Scans were acquired using standard chest protocols during a single breath-hold. Virtual 3D lesion models based on real lesions were developed in the Duke Lesion Tool (Duke University) and inserted using a validated image-domain insertion program. Nodule volumes were estimated using multiple commercial segmentation tools (iNtuition, TeraRecon, Inc.; Syngo.via, Siemens Healthcare; and IntelliSpace, Philips Healthcare). Consensus-based volume comparison showed consistent trends in volume measurement between real and virtual lesions across all software. The average percent bias (± standard error) was -9.2±3.2% for real lesions versus -6.7±1.2% for virtual lesions with tool A, 3.9±2.5% and 5.0±0.9% for tool B, and 5.3±2.3% and 1.8±0.8% for tool C, respectively. Virtual lesion volumes were statistically similar to those of real lesions (p > .05 in most cases). Results suggest that hybrid datasets had similar inter-algorithm variability compared to real datasets.

  19. Comparison of 3D Maximum A Posteriori and Filtered Backprojection algorithms for high resolution animal imaging in microPET

    International Nuclear Information System (INIS)

    Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.

    2000-01-01

    We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.

  20. Corroboration of mechanoregulatory algorithms for tissue differentiation during fracture healing: comparison with in vivo results

    NARCIS (Netherlands)

    Isaksson, H.E.; Donkelaar, van C.C.; Huiskes, R.; Ito, K.

    2006-01-01

    Several mechanoregulation algorithms proposed to control tissue differentiation during bone healing have been shown to accurately predict temporal and spatial tissue distributions during normal fracture healing. As these algorithms are different in nature and biophysical parameters, it raises the

  1. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł; Marszał-Paszek, Barbara; Moshkov, Mikhail; Paszek, Piotr; Skowron, Andrzej; Suraj, Zbigniew

    2010-01-01

    the considered algorithms extract from a given decision table efficiently some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory

  2. A Comparison of the Effects of K-Anonymity on Machine Learning Algorithms

    OpenAIRE

    Hayden Wimmer; Loreen Powell

    2014-01-01

    While research has been conducted in machine learning algorithms and in privacy preserving in data mining (PPDM), a gap in the literature exists which combines the aforementioned areas to determine how PPDM affects common machine learning algorithms. The aim of this research is to narrow this literature gap by investigating how a common PPDM algorithm, K-Anonymity, affects common machine learning and data mining algorithms, namely neural networks, logistic regression, decision trees, and Baye...

  3. Cross-study and cross-omics comparisons of three nephrotoxic compounds reveal mechanistic insights and new candidate biomarkers

    International Nuclear Information System (INIS)

    Matheis, Katja A.; Com, Emmanuelle; Gautier, Jean-Charles; Guerreiro, Nelson; Brandenburg, Arnd; Gmuender, Hans; Sposny, Alexandra; Hewitt, Philip; Amberg, Alexander; Boernsen, Olaf; Riefke, Bjoern; Hoffmann, Dana; Mally, Angela; Kalkuhl, Arno; Suter, Laura; Dieterle, Frank; Staedtler, Frank

    2011-01-01

    The European InnoMed-PredTox project was a collaborative effort between 15 pharmaceutical companies, 2 small and mid-sized enterprises, and 3 universities with the goal of delivering deeper insights into the molecular mechanisms of kidney and liver toxicity and to identify mechanism-linked diagnostic or prognostic safety biomarker candidates by combining conventional toxicological parameters with 'omics' data. Mechanistic toxicity studies with 16 different compounds, 2 dose levels, and 3 time points were performed in male Crl: WI(Han) rats. Three of the 16 investigated compounds, BI-3 (FP007SE), Gentamicin (FP009SF), and IMM125 (FP013NO), induced kidney proximal tubule damage (PTD). In addition to histopathology and clinical chemistry, transcriptomics microarray and proteomics 2D-DIGE analysis were performed. Data from the three PTD studies were combined for a cross-study and cross-omics meta-analysis of the target organ. The mechanistic interpretation of kidney PTD-associated deregulated transcripts revealed, in addition to previously described kidney damage transcript biomarkers such as KIM-1, CLU and TIMP-1, a number of additional deregulated pathways congruent with histopathology observations on a single animal basis, including a specific effect on the complement system. The identification of new, more specific biomarker candidates for PTD was most successful when transcriptomics data were used. Combining transcriptomics data with proteomics data added extra value.

  4. Comparison Of Hybrid Sorting Algorithms Implemented On Different Parallel Hardware Platforms

    Directory of Open Access Journals (Sweden)

    Dominik Zurek

    2013-01-01

    Full Text Available Sorting is a common problem in computer science. There are many well-known sorting algorithms created for sequential execution on a single processor. Recently, hardware platforms have made it possible to create wide parallel algorithms. Standard processors consist of multiple cores, and hardware accelerators such as GPUs are available. Graphics cards, with their parallel architecture, give new possibilities for speeding up many algorithms. In this paper we describe results of the implementation of a few different sorting algorithms on GPU cards and multicore processors. Then a hybrid algorithm is presented which consists of parts executed on both platforms, the standard CPU and the GPU.

  5. Comparison of analyzer-based imaging computed tomography extraction algorithms and application to bone-cartilage imaging

    International Nuclear Information System (INIS)

    Diemoz, Paul C; Bravin, Alberto; Coan, Paola; Glaser, Christian

    2010-01-01

    In x-ray phase-contrast analyzer-based imaging, the contrast is provided by a combination of absorption, refraction and scattering effects. Several extraction algorithms, which attempt to separate and quantify these different physical contributions, have been proposed and applied. In a previous work, we presented a quantitative comparison of five among the most well-known extraction algorithms based on the geometrical optics approximation applied to planar images: diffraction-enhanced imaging (DEI), extended diffraction-enhanced imaging (E-DEI), generalized diffraction-enhanced imaging (G-DEI), multiple-image radiography (MIR) and Gaussian curve fitting (GCF). In this paper, we compare these algorithms in the case of the computed tomography (CT) modality. The extraction algorithms are applied to analyzer-based CT images of both plastic phantoms and biological samples (cartilage-on-bone cylinders). Absorption, refraction and scattering signals are derived. Results obtained with the different algorithms may vary greatly, especially in the case of large refraction angles. We show that ABI-CT extraction algorithms can provide an excellent tool to enhance the visualization of cartilage internal structures, which may find applications in a clinical context. Besides, by using the refraction images, the refractive index decrements for both the cartilage matrix and the cartilage cells have been estimated.

  6. Comparison and combination of NLPQL and MOGA algorithms for a marine medium-speed diesel engine optimisation

    International Nuclear Information System (INIS)

    Hu, Nao; Zhou, Peilin; Yang, Jianguo

    2017-01-01

    Highlights: • NLPQL algorithm is not effective when used for seven engine parameters optimisation. • MOGA algorithm is time consuming but offers broader and finer solutions. • A better design is offered by NLPQL algorithm when using a start point from MOGA. • SOI has the dominant and clearly opposite effects on NOx and SFOC. • Late injection, low swirl and large spray angle can lower NOx and soot simultaneously. - Abstract: Seven engine design parameters were investigated by use of NLPQL algorithm and MOGA separately and together. Detailed comparisons were made on NOx, soot, SFOC, and also on the design parameters. Results indicate that NLPQL algorithm failed to approach optimal designs while MOGA offered more and better feasible Pareto designs. Then, an optimal design obtained by MOGA which has the trade-off between NOx and soot was set as the starting point of NLPQL algorithm. In this situation, an even better design with lower NOx and soot was approached. Combustion processes of the optimal designs were also disclosed and compared in detail. Late injection and small swirl were reckoned to be the main reasons for reducing NOx. In the end, RSM contour maps were applied in order to gain a better understanding of the sensitivity of import parameters on NOx, soot and SFOC.
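    The combination strategy reported here, with the evolutionary search supplying the start point for the gradient-based NLPQL run, can be mimicked with off-the-shelf optimisers. In the sketch below, SciPy's differential evolution and SLSQP stand in for MOGA and NLPQL respectively, and the seven-parameter engine model is replaced by a hypothetical smooth toy objective.

```python
# Global evolutionary search followed by local gradient-based refinement (sketch).
import numpy as np
from scipy.optimize import differential_evolution, minimize

def toy_objective(p):
    # Hypothetical smooth surrogate for a weighted NOx/soot/SFOC trade-off.
    return float(np.sum((p - 0.3) ** 2) + 0.1 * np.sum(np.sin(5.0 * p)))

bounds = [(0.0, 1.0)] * 7                      # seven normalised design parameters

global_result = differential_evolution(toy_objective, bounds, seed=7, maxiter=50)
local_result = minimize(toy_objective, global_result.x, method="SLSQP", bounds=bounds)
print(global_result.fun, local_result.fun)     # local step refines the global optimum
```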

  7. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
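    The in-plane figure of merit used above, the contrast-to-noise ratio, is simple to compute from signal and background regions of a reconstructed slice. The sketch below shows one common definition on synthetic data; the exact CNR definition, ROI placement and the artifact-spread-function analysis of the paper are not reproduced.

```python
# One common CNR definition: (mean signal - mean background) / background std.
import numpy as np

def cnr(slice_img, signal_mask, background_mask):
    sig, bg = slice_img[signal_mask], slice_img[background_mask]
    return float((sig.mean() - bg.mean()) / bg.std())

rng = np.random.default_rng(8)
slice_img = rng.normal(100.0, 5.0, size=(128, 128))       # synthetic background texture
slice_img[60:70, 60:70] += 20.0                            # synthetic inserted "mass"
signal_mask = np.zeros(slice_img.shape, dtype=bool)
signal_mask[60:70, 60:70] = True
background_mask = np.zeros(slice_img.shape, dtype=bool)
background_mask[20:40, 20:40] = True
print(cnr(slice_img, signal_mask, background_mask))        # roughly 20 / 5 = 4
```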

  8. A comparison and evaluation of five biclustering algorithms by quantifying goodness of biclusters for gene expression data

    Directory of Open Access Journals (Sweden)

    Li Li

    2012-07-01

    Full Text Available Abstract Background: Several biclustering algorithms have been proposed to identify biclusters, in which genes share similar expression patterns across a number of conditions. However, different algorithms yield different biclusters and can lead to distinct conclusions. Therefore, some testing and comparison between these algorithms is strongly required. Methods: In this study, five biclustering algorithms (i.e. BIMAX, FABIA, ISA, QUBIC and SAMBA) were compared with each other when used to handle two expression datasets (GDS1620 and pathway) with different dimensions in Arabidopsis thaliana (A. thaliana). GO (gene ontology) annotation and the PPI (protein-protein interaction) network were used to verify the corresponding biological significance of the biclusters from the five algorithms. To compare the algorithms' performance and evaluate the quality of the identified biclusters, two scoring methods, namely weighted enrichment (WE) scoring and PPI scoring, were proposed in our study. For each dataset, after combining the scores of all biclusters into one unified ranking, we could evaluate the performance and behavior of the five biclustering algorithms in a better way. Results: Both the WE and PPI scoring methods have been proved effective in validating the biological significance of the biclusters, and a significantly positive correlation between the two sets of scores has been tested to demonstrate the consistency of the two methods. A comparative study of the above five algorithms has revealed that: (1) ISA is the most effective one among the five algorithms on the GDS1620 dataset and BIMAX outperforms the other algorithms on the pathway dataset. (2) Both ISA and BIMAX are data-dependent. The former does not work well on datasets with few genes, while the latter holds well for datasets with more conditions. (3) FABIA and QUBIC perform poorly in this study and they may be suitable for large datasets with more genes and

  9. Comparison of Expression Profiles in Ovarian Epithelium In Vivo and Ovarian Cancer Identifies Novel Candidate Genes Involved in Disease Pathogenesis

    Science.gov (United States)

    Emmanuel, Catherine; Gava, Natalie; Kennedy, Catherine; Balleine, Rosemary L.; Sharma, Raghwa; Wain, Gerard; Brand, Alison; Hogg, Russell; Etemadmoghadam, Dariush; George, Joshy; Birrer, Michael J.; Clarke, Christine L.; Chenevix-Trench, Georgia; Bowtell, David D. L.; Harnett, Paul R.; deFazio, Anna

    2011-01-01

    Molecular events leading to epithelial ovarian cancer are poorly understood but ovulatory hormones and a high number of life-time ovulations with concomitant proliferation, apoptosis, and inflammation, increases risk. We identified genes that are regulated during the estrous cycle in murine ovarian surface epithelium and analysed these profiles to identify genes dysregulated in human ovarian cancer, using publically available datasets. We identified 338 genes that are regulated in murine ovarian surface epithelium during the estrous cycle and dysregulated in ovarian cancer. Six of seven candidates selected for immunohistochemical validation were expressed in serous ovarian cancer, inclusion cysts, ovarian surface epithelium and in fallopian tube epithelium. Most were overexpressed in ovarian cancer compared with ovarian surface epithelium and/or inclusion cysts (EpCAM, EZH2, BIRC5) although BIRC5 and EZH2 were expressed as highly in fallopian tube epithelium as in ovarian cancer. We prioritised the 338 genes for those likely to be important for ovarian cancer development by in silico analyses of copy number aberration and mutation using publically available datasets and identified genes with established roles in ovarian cancer as well as novel genes for which we have evidence for involvement in ovarian cancer. Chromosome segregation emerged as an important process in which genes from our list of 338 were over-represented including two (BUB1, NCAPD2) for which there is evidence of amplification and mutation. NUAK2, upregulated in ovarian surface epithelium in proestrus and predicted to have a driver mutation in ovarian cancer, was examined in a larger cohort of serous ovarian cancer where patients with lower NUAK2 expression had shorter overall survival. In conclusion, defining genes that are activated in normal epithelium in the course of ovulation that are also dysregulated in cancer has identified a number of pathways and novel candidate genes that may contribute

  10. Comparison of switching control algorithms effective in restricting the switching in the neighborhood of the origin

    International Nuclear Information System (INIS)

    Joung, JinWook; Chung, Lan; Smyth, Andrew W

    2010-01-01

    The active interaction control (AIC) system, consisting of a primary structure, an auxiliary structure and an interaction element, was proposed to protect the primary structure against earthquakes and winds. The objective of the AIC system in reducing the responses of the primary structure is fulfilled by activating or deactivating the switching between the engagement and the disengagement of the primary and auxiliary structures through the interaction element. The status of the interaction element is controlled by switching control algorithms. The previously developed switching control algorithms require an excessive amount of switching, which is inefficient. In this paper, the excessive amount of switching is restricted by imposing an appropriately designed switching boundary region, where switching is prohibited, on pre-designed engagement–disengagement conditions. Two different approaches are used in designing the newly proposed AID-off and AID-off 2 algorithms. The AID-off 2 algorithm is designed to affect deactivated switching regions explicitly, unlike the AID-off algorithm, which follows the same procedure of designing the engagement–disengagement conditions as the previously developed algorithms, by using the current status of the AIC system. Both algorithms are shown to be effective in reducing the number of switching events triggered by the previously developed AID algorithm under an appropriately selected control sampling period for different earthquakes, but the AID-off 2 algorithm outperforms the AID-off algorithm in reducing the number of switching events.

  11. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    Science.gov (United States)

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.
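
    As a rough illustration of the kind of comparison described, the sketch below fits scikit-learn's Lasso (standing in for an l1-penalized linear model, not the Newton linear programming l1-norm SVR of the brief) and OrthogonalMatchingPursuit (a sparse coding solver) to synthetic redundant features; the data sizes and penalty values are arbitrary.

      # Hypothetical illustration: Lasso stands in for an l1-penalized linear model
      # and OrthogonalMatchingPursuit for a greedy sparse-coding solver.
      import time
      import numpy as np
      from sklearn.datasets import make_regression
      from sklearn.linear_model import Lasso, OrthogonalMatchingPursuit

      X, y, true_coef = make_regression(n_samples=200, n_features=500, n_informative=10,
                                        coef=True, noise=0.1, random_state=0)

      for name, model in [("l1 (Lasso)", Lasso(alpha=0.05)),
                          ("sparse coding (OMP)", OrthogonalMatchingPursuit(n_nonzero_coefs=10))]:
          t0 = time.perf_counter()
          model.fit(X, y)
          elapsed = time.perf_counter() - t0
          n_selected = np.sum(model.coef_ != 0)
          print(f"{name}: {n_selected} features selected, {elapsed * 1e3:.1f} ms")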

  12. An Algorithmic Comparison of the Hyper-Reduction and the Discrete Empirical Interpolation Method for a Nonlinear Thermal Problem

    Directory of Open Access Journals (Sweden)

    Felix Fritzen

    2018-02-01

    A novel algorithmic discussion of the methodological and numerical differences of competing parametric model reduction techniques for nonlinear problems is presented. First, the Galerkin reduced basis (RB) formulation is presented, which fails at providing significant gains with respect to the computational efficiency for nonlinear problems. Renowned methods for reducing the computing time of nonlinear reduced order models are the Hyper-Reduction and the (Discrete) Empirical Interpolation Method (EIM, DEIM). An algorithmic description and a methodological comparison of both methods are provided. The accuracy of the predictions of the hyper-reduced model and the (D)EIM in comparison to the Galerkin RB is investigated. All three approaches are applied to a simple uncertainty quantification of a planar nonlinear thermal conduction problem. The results are compared to computationally intense finite element simulations.
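
    The greedy interpolation-point selection is the step that distinguishes (D)EIM from a plain Galerkin RB projection. The sketch below implements the standard DEIM index-selection loop on an assumed snapshot-derived basis; it is a generic illustration, not code from the cited study.

      # Greedy DEIM point selection on a toy basis derived from random "snapshots".
      import numpy as np

      def deim_indices(U):
          """Greedy selection of interpolation indices for the columns of basis U."""
          n, m = U.shape
          indices = [int(np.argmax(np.abs(U[:, 0])))]
          for l in range(1, m):
              Ul = U[:, :l]
              # interpolate the next basis vector at the already selected points
              c = np.linalg.solve(Ul[indices, :], U[indices, l])
              residual = U[:, l] - Ul @ c
              indices.append(int(np.argmax(np.abs(residual))))
          return np.array(indices)

      rng = np.random.default_rng(0)
      snapshots = rng.normal(size=(200, 40))          # assumed nonlinearity snapshots
      U, _, _ = np.linalg.svd(snapshots, full_matrices=False)
      print("DEIM interpolation indices:", deim_indices(U[:, :10]))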

  13. Comparison Performance of Genetic Algorithm and Ant Colony Optimization in Course Scheduling Optimizing

    Directory of Open Access Journals (Sweden)

    Imam Ahmad Ashari

    2016-11-01

    Scheduling problems at a university are a complex type of scheduling problem, and the scheduling process must be carried out at every turn of the semester. The core difficulty of scheduling courses at a university is the number of components that need to be considered in making the schedule: students, lecturers, time slots and rooms, together with limits and conditions that must be respected so that no collisions occur in the schedule, such as double-booked rooms or double-booked lecturers. The most appropriate technique for resolving such a scheduling problem is optimization, which can deliver the best achievable results. Metaheuristic algorithms offer many ways to push a problem toward an optimal solution. In this paper, we use a genetic algorithm and an ant colony optimization algorithm, both metaheuristics, to solve the course scheduling problem. The two algorithms are tested and compared to determine which performs best. The algorithms were tested using course schedule data from a university in Semarang. From the experimental results we conclude that the genetic algorithm has better performance than the ant colony optimization algorithm in solving this course scheduling case.
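
    A toy genetic-algorithm timetabler in the spirit of the paper is sketched below: a candidate schedule assigns each course a (timeslot, room) pair and fitness counts room and lecturer clashes. The courses, rooms and GA settings are invented for illustration.

      import random

      random.seed(0)
      COURSES = [("Math", "Dr_A"), ("Physics", "Dr_A"), ("Biology", "Dr_B"),
                 ("Chemistry", "Dr_B"), ("History", "Dr_C")]
      TIMESLOTS, ROOMS = range(3), range(2)

      def random_schedule():
          return [(random.choice(TIMESLOTS), random.choice(ROOMS)) for _ in COURSES]

      def conflicts(schedule):
          clashes = 0
          for i in range(len(COURSES)):
              for j in range(i + 1, len(COURSES)):
                  same_slot = schedule[i][0] == schedule[j][0]
                  if same_slot and schedule[i][1] == schedule[j][1]:
                      clashes += 1                       # two courses in one room
                  if same_slot and COURSES[i][1] == COURSES[j][1]:
                      clashes += 1                       # one lecturer in two places
          return clashes

      def crossover(a, b):
          cut = random.randrange(1, len(a))
          return a[:cut] + b[cut:]

      def mutate(schedule, rate=0.2):
          return [(random.choice(TIMESLOTS), random.choice(ROOMS))
                  if random.random() < rate else gene for gene in schedule]

      population = [random_schedule() for _ in range(30)]
      for generation in range(100):
          population.sort(key=conflicts)
          if conflicts(population[0]) == 0:
              break
          parents = population[:10]                      # simple truncation selection
          population = parents + [mutate(crossover(random.choice(parents), random.choice(parents)))
                                  for _ in range(20)]
      population.sort(key=conflicts)
      print("best schedule:", population[0], "clashes:", conflicts(population[0]))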

  14. A comparison between physicians and computer algorithms for form CMS-2728 data reporting.

    Science.gov (United States)

    Malas, Mohammed Said; Wish, Jay; Moorthi, Ranjani; Grannis, Shaun; Dexter, Paul; Duke, Jon; Moe, Sharon

    2017-01-01

    The CMS-2728 form (Medical Evidence Report) assesses 23 comorbidities chosen to reflect poor outcomes and increased mortality risk. Previous studies have questioned the validity of physician reporting on form CMS-2728. We hypothesize that reporting of comorbidities by computer algorithms identifies more comorbidities than physician completion and, therefore, is more reflective of the underlying disease burden. We collected data from CMS-2728 forms for all 296 patients who had an incident ESRD diagnosis and received chronic dialysis from 2005 through 2014 at Indiana University outpatient dialysis centers. We analyzed patients' data from electronic medical records systems that collated information from multiple health care sources. Previously utilized algorithms or natural language processing were used to extract data on 10 comorbidities for a period of up to 10 years prior to ESRD incidence. These algorithms incorporate billing codes, prescriptions, and other relevant elements. We compared the presence or unchecked status of these comorbidities on the forms to the presence or absence according to the algorithms. Computer algorithms had higher reporting of comorbidities compared to form completion by physicians. This remained true when decreasing the data span to one year and using only a single health center source. The algorithms' determinations were well accepted by a physician panel. Importantly, use of the algorithms significantly increased the expected deaths and lowered the standardized mortality ratios. Using computer algorithms showed superior identification of comorbidities for form CMS-2728 and altered standardized mortality ratios. Adopting similar algorithms in available EMR systems may offer a more thorough evaluation of comorbidities and improve quality reporting. © 2016 International Society for Hemodialysis.

  15. Performance comparison of extracellular spike sorting algorithms for single-channel recordings.

    Science.gov (United States)

    Wild, Jiri; Prekopcsak, Zoltan; Sieger, Tomas; Novak, Daniel; Jech, Robert

    2012-01-30

    Proper classification of action potentials from extracellular recordings is essential for making an accurate study of neuronal behavior. Many spike sorting algorithms have been presented in the technical literature. However, no comparative analysis has hitherto been performed. In our study, three widely used, publicly available spike sorting algorithms (WaveClus, KlustaKwik, OSort) were compared with regard to their parameter settings. The algorithms were evaluated using 112 artificial signals (publicly available online) with 2-9 different neurons and varying noise levels between 0.00 and 0.60. An optimization technique based on Adjusted Mutual Information was employed to find near-optimal parameter settings for a given artificial signal and algorithm. All three algorithms performed significantly better with optimized parameters than with default values. WaveClus was the most accurate sorting algorithm, receiving the best evaluation score for 60% of all signals. OSort operated at almost five times the speed of the other algorithms, but in terms of accuracy it performed significantly less well than WaveClus. None of the algorithms was optimal in general; the accuracy of the algorithms depended on proper choice of the algorithm parameters and also on specific properties of the examined signal. Copyright © 2011 Elsevier B.V. All rights reserved.
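
    The parameter-optimization idea can be illustrated with a few lines of scikit-learn: score a sorting result against ground-truth labels with Adjusted Mutual Information and keep the parameter value that maximizes it. The toy one-dimensional "sorter" and its scale parameter below are hypothetical stand-ins, not any of the three published sorters.

      import numpy as np
      from sklearn.metrics import adjusted_mutual_info_score

      rng = np.random.default_rng(0)
      true_labels = rng.integers(0, 3, size=500)              # ground-truth unit per spike
      features = true_labels + rng.normal(0, 0.4, size=500)   # noisy 1-D feature per spike

      def toy_sorter(features, scale):
          """Hypothetical sorter: bin spikes along a 1-D feature at multiples of `scale`."""
          return np.digitize(features, bins=[0.5 * scale, 1.5 * scale])

      best = max(
          ((scale, adjusted_mutual_info_score(true_labels, toy_sorter(features, scale)))
           for scale in np.linspace(0.2, 2.0, 19)),
          key=lambda pair: pair[1],
      )
      print(f"best scale {best[0]:.2f}, AMI {best[1]:.3f}")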

  16. Linking mothers and infants within electronic health records: a comparison of deterministic and probabilistic algorithms.

    Science.gov (United States)

    Baldwin, Eric; Johnson, Karin; Berthoud, Heidi; Dublin, Sascha

    2015-01-01

    To compare probabilistic and deterministic algorithms for linking mothers and infants within electronic health records (EHRs) to support pregnancy outcomes research. The study population was women enrolled in Group Health (Washington State, USA) delivering a liveborn infant from 2001 through 2008 (N = 33,093 deliveries) and infant members born in these years. We linked women to infants by surname, address, and dates of birth and delivery using deterministic and probabilistic algorithms. In a subset previously linked using "gold standard" identifiers (N = 14,449), we assessed each approach's sensitivity and positive predictive value (PPV). For deliveries with no "gold standard" linkage (N = 18,644), we compared the algorithms' linkage proportions. We repeated our analyses in an independent test set of deliveries from 2009 through 2013. We reviewed medical records to validate a sample of pairs apparently linked by one algorithm but not the other (N = 51 or 1.4% of discordant pairs). In the 2001-2008 "gold standard" population, the probabilistic algorithm's sensitivity was 84.1% (95% CI, 83.5-84.7) and PPV 99.3% (99.1-99.4), while the deterministic algorithm had sensitivity 74.5% (73.8-75.2) and PPV 95.7% (95.4-96.0). In the test set, the probabilistic algorithm again had higher sensitivity and PPV. For deliveries in 2001-2008 with no "gold standard" linkage, the probabilistic algorithm found matched infants for 58.3% and the deterministic algorithm, 52.8%. On medical record review, 100% of linked pairs appeared valid. A probabilistic algorithm improved linkage proportion and accuracy compared to a deterministic algorithm. Better linkage methods can increase the value of EHRs for pregnancy outcomes research. Copyright © 2014 John Wiley & Sons, Ltd.
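
    A minimal, hypothetical sketch of the two linkage styles compared in the study is given below: a deterministic rule requires exact agreement on every field, while a probabilistic score sums field-specific agreement weights and accepts pairs above a threshold. The field names, weights and threshold are illustrative, not those of the study.

      def deterministic_match(mother, infant):
          return (mother["surname"] == infant["surname"]
                  and mother["address"] == infant["address"]
                  and mother["delivery_date"] == infant["birth_date"])

      # log-odds style agreement weights (assumed values)
      WEIGHTS = {"surname": 4.0, "address": 3.0, "date": 5.0}

      def probabilistic_score(mother, infant):
          score = 0.0
          score += WEIGHTS["surname"] if mother["surname"] == infant["surname"] else -1.0
          score += WEIGHTS["address"] if mother["address"] == infant["address"] else -1.0
          score += WEIGHTS["date"] if mother["delivery_date"] == infant["birth_date"] else -2.0
          return score

      mother = {"surname": "Smith", "address": "12 Elm St", "delivery_date": "2008-03-14"}
      infant = {"surname": "Smith", "address": "12 Elm Street", "birth_date": "2008-03-14"}

      print(deterministic_match(mother, infant))          # False: address strings differ
      print(probabilistic_score(mother, infant) >= 6.0)   # True: enough agreement elsewhere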

  17. Comparison of Controller and Flight Deck Algorithm Performance During Interval Management with Dynamic Arrival Trees (STARS)

    Science.gov (United States)

    Battiste, Vernol; Lawton, George; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Johnson, Walter W.

    2012-01-01

    Managing the interval between arrival aircraft is a major part of the en route and TRACON controller's job. In an effort to reduce controller workload and low-altitude vectoring, algorithms have been developed to allow pilots to take responsibility for achieving and maintaining proper spacing. Additionally, algorithms have been developed to create dynamic weather-free arrival routes in the presence of convective weather. In a recent study we examined an algorithm to handle dynamic re-routing in the presence of convective weather and two distinct spacing algorithms. The spacing algorithms originated from different core algorithms; both were enhanced with trajectory intent data for the study. These two algorithms were used simultaneously in a human-in-the-loop (HITL) simulation in which pilots performed weather-impacted arrival operations into Louisville International Airport while also performing interval management (IM) on some trials. The controllers retained responsibility for separation and for managing the en route airspace, and on some trials for managing IM. The goal was a stress test of dynamic arrival algorithms with ground and airborne spacing concepts. The flight deck spacing algorithms or controller-managed spacing not only had to be robust to the dynamic nature of aircraft re-routing around weather but also had to be compatible with two alternative algorithms for achieving the spacing goal. Flight deck interval management spacing in this simulation provided a clear reduction in controller workload relative to when controllers were responsible for spacing the aircraft. At the same time, spacing was much less variable with the flight deck automated spacing. Even though the approaches taken by the two spacing algorithms to achieve the interval management goals were slightly different, they proved compatible in achieving the interval management goal of 130 sec by the TRACON boundary.

  18. Comparison of the efficiency of two algorithms which solve the shortest path problem with an emotional agent

    Directory of Open Access Journals (Sweden)

    Petruseva Silvana

    2006-01-01

    This paper discusses the comparison of the efficiency of two algorithms by estimation of their complexity. For solving the problem, the Neural Network Crossbar Adaptive Array (NN-CAA) is used as the agent architecture, implementing a model of an emotion. The problem discussed is how to find the shortest path in an environment with n states. The domains concerned are environments with n states, one of which is the starting state, one is the goal state, and some states are undesirable and should be avoided. It is shown that finding one path (one solution) is efficient, i.e. achievable in polynomial time, by both algorithms. One of the algorithms is faster than the other only in the multiplicative constant, and it represents a step forward toward the optimality of the learning process. However, finding the optimal solution (the shortest path) by either algorithm requires exponential time, which is asserted by two theorems. It might be concluded that the concept of the subgoal is one step forward toward the optimality of the agent learning process. Yet, it should be explored further in order to obtain an efficient, polynomial-time algorithm.

  19. Assessment of subsidence in karst terranes at selected areas in East Tennessee and comparison with a candidate site at Oak Ridge, Tennessee: Phase 2

    International Nuclear Information System (INIS)

    Newton, J.G.; Tanner, J.M.

    1987-09-01

    Work in the respective areas included assessment of conditions related to sinkhole development. Information collected and assessed involved geology, hydrogeology, land use, lineaments and linear trends, identification of karst features and zones, and inventory of historical sinkhole development and type. Karstification of the candidate, Rhea County, and Morristown study areas, in comparison to other karst areas in Tennessee, can be classified informally as youthful, submature, and mature, respectively. Historical sinkhole development in the more karstified areas is attributed to the greater degree of structural deformation by faulting and fracturing, subsequent solutioning of bedrock, thinness of residuum, and degree of development by man. Sinkhole triggering mechanisms identified are progressive solution of bedrock, water-level fluctuations, piping, and loading. 68 refs., 18 figs., 11 tabs

  20. Comparison of several algorithms of the electric force calculation in particle plasma models

    International Nuclear Information System (INIS)

    Lachnitt, J; Hrach, R

    2014-01-01

    This work is devoted to plasma modelling using the technique of molecular dynamics. The crucial problem of most such models is the efficient calculation of electric force. This is usually solved by using the particle-in-cell (PIC) algorithm. However, PIC is an approximative algorithm as it underestimates the short-range interactions of charged particles. We propose a hybrid algorithm which adds these interactions to PIC. Then we include this algorithm in a set of algorithms which we test against each other in a two-dimensional collisionless magnetized plasma model. Besides our hybrid algorithm, this set includes two variants of pure PIC and the direct application of Coulomb's law. We compare particle forces, particle trajectories, total energy conservation and the speed of the algorithms. We find out that the hybrid algorithm can be a good replacement of direct Coulomb's law application (quite accurate and much faster). It is however probably unnecessary to use it in practical 2D models.

  1. NUCLEAR SEGMENTATION IN MICROSCOPE CELL IMAGES: A HAND-SEGMENTED DATASET AND COMPARISON OF ALGORITHMS

    OpenAIRE

    Coelho, Luís Pedro; Shariff, Aabid; Murphy, Robert F.

    2009-01-01

    Image segmentation is an essential step in many image analysis pipelines and many algorithms have been proposed to solve this problem. However, they are often evaluated subjectively or based on a small number of examples. To fill this gap, we hand-segmented a set of 97 fluorescence microscopy images (a total of 4009 cells) and objectively evaluated some previously proposed segmentation algorithms.

  2. Comparison between Genetic Algorithms and Particle Swarm Optimization Methods on Standard Test Functions and Machine Design

    DEFF Research Database (Denmark)

    Nica, Florin Valentin Traian; Ritchie, Ewen; Leban, Krisztina Monika

    2013-01-01

    Nowadays the requirements imposed by industry and the economy ask for better quality and performance while the price must be maintained in the same range. To achieve this goal, optimization must be introduced in the design process. Two of the best known optimization algorithms for machine design, the genetic algorithm and particle swarm optimization, are briefly presented in this paper. These two algorithms are tested to determine their performance on five different benchmark test functions. The algorithms are tested based on three requirements: precision of the result, number of iterations and calculation time. Both algorithms are also tested on an analytical design process of a Transverse Flux Permanent Magnet Generator to observe their performance in an electrical machine design application.

  3. Dynamic contrast-enhanced MRI of the prostate. Comparison of two different post-processing algorithms

    International Nuclear Information System (INIS)

    Beyersdorff, Dirk; Franiel, T.; Luedemann, L.; Dietz, E.; Galler, D.; Marchot, P.

    2011-01-01

    Purpose: To evaluate the usefulness of a commercially available post-processing software tool for detecting prostate cancer on dynamic contrast-enhanced magnetic resonance imaging (MRI) and to compare the results to those obtained with a custom-made post-processing algorithm already tested under clinical conditions. Materials and Methods: Forty-eight patients with proven prostate cancer were examined by standard MRI supplemented by dynamic contrast-enhanced dual susceptibility contrast (DCE-DSC) MRI prior to prostatectomy. A custom-made post-processing algorithm was used to analyze the MRI data sets and the results were compared to those obtained using a post-processing algorithm from Invivo Corporation (Dyna CAD for Prostate) applied to dynamic T1-weighted images. Histology was used as the gold standard. Results: The sensitivity for prostate cancer detection was 78% for the custom-made algorithm and 60% for the commercial algorithm, and the specificity was 79% and 82%, respectively. The accuracy was 79% for our algorithm and 77.5% for the commercial software tool. The chi-square test (McNemar-Bowker test) yielded no significant differences between the two tools (p = 0.06). Conclusion: The two investigated post-processing algorithms did not differ in terms of prostate cancer detection. The commercially available software tool allows reliable and fast analysis of dynamic contrast-enhanced MRI for the detection of prostate cancer. (orig.)

  4. Comparison of SAR calculation algorithms for the finite-difference time-domain method

    International Nuclear Information System (INIS)

    Laakso, Ilkka; Uusitupa, Tero; Ilvonen, Sami

    2010-01-01

    Finite-difference time-domain (FDTD) simulations of the specific absorption rate (SAR) have several uncertainty factors. For example, significantly varying SAR values may result from the use of different algorithms for determining the SAR from the FDTD electric field. The objective of this paper is to rigorously study the divergence of SAR values due to different SAR calculation algorithms and to examine whether some SAR calculation algorithm should be preferred over others. For this purpose, numerical FDTD results are compared to analytical solutions in a one-dimensional layered model and a three-dimensional spherical object. Additionally, the implications of SAR calculation algorithms for dosimetry of anatomically realistic whole-body models are studied. The results show that the trapezium algorithm (based on the trapezium integration rule) is always conservative compared to the analytic solution, making it a good choice for worst-case exposure assessment. In contrast, the mid-ordinate algorithm (named after the mid-ordinate integration rule) usually underestimates the analytic SAR. The linear algorithm, which is approximately a weighted average of the two, seems to be the most accurate choice overall, typically giving the best fit with the shape of the analytic SAR distribution. For anatomically realistic models, the whole-body SAR difference between different algorithms is relatively independent of the body model used, the incident direction and the polarization of the plane wave. The main factors affecting the difference are cell size and frequency. The choice of the SAR calculation algorithm is an important simulation parameter in high-frequency FDTD SAR calculations, and it should be stated explicitly to allow intercomparison of results between different studies. (note)
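
    The behaviour described above can be reproduced in one dimension: for a linearly varying field, the integral of |E|^2 over a cell is overestimated by the trapezium rule, underestimated by the mid-ordinate rule, and matched by a weighted blend of the two. The Simpson-like blend below is used only as a stand-in for the paper's linear algorithm, and all numerical values are arbitrary.

      import numpy as np

      a, b, dx = 1.0, 3.0, 2e-3            # E(x) = a + b*x over a cell of width dx (assumed values)
      E0, E1 = a, a + b * dx               # field sampled at the two cell edges
      Emid = a + b * dx / 2.0              # field at the cell centre

      analytic = a**2 * dx + a * b * dx**2 + b**2 * dx**3 / 3.0    # exact integral of E(x)^2
      trapezium = 0.5 * (E0**2 + E1**2) * dx                       # convex integrand -> overestimate
      mid_ordinate = Emid**2 * dx                                  # -> underestimate
      linear = (trapezium + 2.0 * mid_ordinate) / 3.0              # Simpson-like blend, exact here

      for name, value in [("trapezium", trapezium), ("mid-ordinate", mid_ordinate), ("linear", linear)]:
          print(f"{name:12s}: relative error {(value - analytic) / analytic:+.3e}")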

  5. Comparison of a constraint directed search to a genetic algorithm in a scheduling application

    International Nuclear Information System (INIS)

    Abbott, L.

    1993-01-01

    Scheduling plutonium containers for blending is a time-intensive operation. Several constraints must be taken into account, including the number of containers in a dissolver run, the size of each dissolver run, and the size and target purity of the blended mixture formed from these runs. Two types of algorithms have been used to solve this problem: a constraint directed search and a genetic algorithm. This paper discusses the implementation of these two different approaches to the problem and the strengths and weaknesses of each algorithm.

  6. Performance comparison of some evolutionary algorithms on job shop scheduling problems

    Science.gov (United States)

    Mishra, S. K.; Rao, C. S. P.

    2016-09-01

    Job shop scheduling, as a state-space search problem, belongs to the NP-hard category due to its complexity and the combinatorial explosion of states. Several naturally inspired evolutionary methods have been developed to solve job shop scheduling problems. In this paper the evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music Based Harmony Search algorithms, are applied and fine-tuned to model and solve job shop scheduling problems. For the comparison, about 250 benchmark instances have been used to evaluate the performance of these algorithms. The capabilities of each of these algorithms in solving job shop scheduling problems are outlined.

  7. Cloud detection algorithm comparison and validation for operational Landsat data products

    Science.gov (United States)

    Foga, Steven Curtis; Scaramuzza, Pat; Guo, Song; Zhu, Zhe; Dilley, Ronald; Beckmann, Tim; Schmidt, Gail L.; Dwyer, John L.; Hughes, MJ; Laue, Brady

    2017-01-01

    Clouds are a pervasive and unavoidable issue in satellite-borne optical imagery. Accurate, well-documented, and automated cloud detection algorithms are necessary to effectively leverage large collections of remotely sensed data. The Landsat project is uniquely suited for comparative validation of cloud assessment algorithms because the modular architecture of the Landsat ground system allows for quick evaluation of new code, and because Landsat has the most comprehensive manual truth masks of any current satellite data archive. Currently, the Landsat Level-1 Product Generation System (LPGS) uses separate algorithms for determining clouds, cirrus clouds, and snow and/or ice probability on a per-pixel basis. With more bands onboard the Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) satellite, and a greater number of cloud masking algorithms, the U.S. Geological Survey (USGS) is replacing the current cloud masking workflow with a more robust algorithm that is capable of working across multiple Landsat sensors with minimal modification. Because of the inherent error from stray light and intermittent data availability of TIRS, these algorithms need to operate both with and without thermal data. In this study, we created a workflow to evaluate cloud and cloud shadow masking algorithms using cloud validation masks manually derived from both Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 8 OLI/TIRS data. We created a new validation dataset consisting of 96 Landsat 8 scenes, representing different biomes and proportions of cloud cover. We evaluated algorithm performance by overall accuracy, omission error, and commission error for both cloud and cloud shadow. We found that CFMask, C code based on the Function of Mask (Fmask) algorithm, and its confidence bands have the best overall accuracy among the many algorithms tested using our validation data, while the Artificial Thermal-Automated Cloud Cover Algorithm (AT-ACCA) is the most accurate of the algorithms that operate without thermal data.
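
    For reference, the validation metrics named above (overall accuracy, omission error, commission error) reduce to simple confusion-matrix ratios; the sketch below computes them for tiny made-up masks rather than Landsat data.

      import numpy as np

      truth = np.array([[1, 1, 0, 0],
                        [1, 0, 0, 0],
                        [0, 0, 1, 1]], dtype=bool)     # manually derived cloud mask
      pred  = np.array([[1, 0, 0, 0],
                        [1, 0, 1, 0],
                        [0, 0, 1, 1]], dtype=bool)     # algorithm output

      tp = np.sum(pred & truth)        # cloud called cloud
      fp = np.sum(pred & ~truth)       # clear called cloud  -> commission
      fn = np.sum(~pred & truth)       # cloud called clear  -> omission
      tn = np.sum(~pred & ~truth)

      overall_accuracy = (tp + tn) / truth.size
      omission_error = fn / (tp + fn)          # fraction of true cloud that was missed
      commission_error = fp / (tp + fp)        # fraction of predicted cloud that is wrong

      print(overall_accuracy, omission_error, commission_error)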

  8. Prime Numbers Comparison using Sieve of Eratosthenes and Sieve of Sundaram Algorithm

    Science.gov (United States)

    Abdullah, D.; Rahim, R.; Apdilah, D.; Efendi, S.; Tulus, T.; Suwilo, S.

    2018-03-01

    Prime numbers hold particular appeal for researchers due to their complexity, and many algorithms, ranging from simple to computationally complex, can be used to generate them. The Sieve of Eratosthenes and the Sieve of Sundaram are two algorithms that can be used to generate prime numbers, whether from randomly generated or sequentially numbered inputs. The testing in this study aims to find out which algorithm is better suited to generating large primes in terms of time complexity. The tests were assisted by an application written in Java with code optimization and maximum memory usage, so that the testing process can be run simultaneously and the results obtained can be objective.
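
    A compact sketch of the two sieves being compared is given below; both return the primes below a chosen bound, and timing them side by side mirrors the kind of test described (though in Python rather than the study's Java).

      import time

      def sieve_of_eratosthenes(limit):
          is_prime = [True] * limit
          is_prime[0:2] = [False, False]
          for p in range(2, int(limit ** 0.5) + 1):
              if is_prime[p]:
                  for multiple in range(p * p, limit, p):
                      is_prime[multiple] = False
          return [i for i, flag in enumerate(is_prime) if flag]

      def sieve_of_sundaram(limit):
          # marks numbers of the form i + j + 2*i*j; survivors k map to odd primes 2*k + 1
          n = (limit - 1) // 2
          marked = [False] * (n + 1)
          for i in range(1, n + 1):
              j = i
              while i + j + 2 * i * j <= n:
                  marked[i + j + 2 * i * j] = True
                  j += 1
          primes = [2] if limit > 2 else []
          primes += [2 * k + 1 for k in range(1, n + 1) if not marked[k]]
          return primes

      for sieve in (sieve_of_eratosthenes, sieve_of_sundaram):
          start = time.perf_counter()
          primes = sieve(1_000_000)
          print(sieve.__name__, len(primes), f"{time.perf_counter() - start:.3f}s")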

  9. Disease candidate gene identification and prioritization using protein interaction networks

    Directory of Open Access Journals (Sweden)

    Aronow Bruce J

    2009-02-01

    Background: Although most current disease candidate gene identification and prioritization methods depend on functional annotations, the coverage of gene functional annotations is a limiting factor. In the current study, we describe a candidate gene prioritization method that is based entirely on protein-protein interaction network (PPIN) analyses. Results: For the first time, extended versions of the PageRank and HITS algorithms, and the K-Step Markov method, are applied to prioritize disease candidate genes in a training-test schema. Using a list of known disease-related genes from our earlier study as a training set ("seeds"), and the rest of the known genes as a test list, we perform large-scale cross validation to rank the candidate genes and also to evaluate and compare the performance of our approach. Under appropriate settings, for example a back probability of 0.3 for PageRank with Priors and HITS with Priors, and step size 6 for the K-Step Markov method, the three methods achieved comparable AUC values, suggesting similar performance. Conclusion: Even though network-based methods are generally not as effective as integrated functional annotation-based methods for disease candidate gene prioritization, in a one-to-one comparison, PPIN-based candidate gene prioritization performs better than all other gene features or annotations. Additionally, we demonstrate that methods used for studying both social and Web networks can be successfully applied to disease candidate gene prioritization.
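
    A minimal sketch of PageRank-with-priors ranking on a toy protein-protein interaction graph is shown below, using networkx; the gene names, edges and the restart weight of 0.3 on the seed set are illustrative only.

      import networkx as nx

      ppi = nx.Graph([("BRCA1", "BARD1"), ("BRCA1", "TP53"), ("TP53", "MDM2"),
                      ("MDM2", "GENE_X"), ("BARD1", "GENE_Y"), ("TP53", "GENE_X")])

      seeds = {"BRCA1", "TP53"}                      # known disease genes ("seeds")
      personalization = {n: (1.0 if n in seeds else 0.0) for n in ppi}

      # alpha = 0.7 means a 0.3 probability of jumping back to the seed set each step
      scores = nx.pagerank(ppi, alpha=0.7, personalization=personalization)

      candidates = sorted((n for n in ppi if n not in seeds), key=scores.get, reverse=True)
      for gene in candidates:
          print(gene, round(scores[gene], 4))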

  10. Comparison of Clustering Algorithms for the Identification of Topics on Twitter

    Directory of Open Access Journals (Sweden)

    Marjori N. M. Klinczak

    2016-05-01

    Topic identification in social networks has become an important task when dealing with event detection, particularly when global communities are affected. To attack this problem, text processing techniques and machine learning algorithms have been extensively used. In this paper we compare four clustering algorithms, k-means, k-medoids, DBSCAN and NMF (Non-negative Matrix Factorization), for detecting topics in textual messages obtained from Twitter. The algorithms were applied to a database initially composed of tweets having hashtags related to the recent Nepal earthquake as the initial context. The results obtained suggest that the NMF clustering algorithm presents superior results, providing simpler clusters that are also easier to interpret.
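
    A hypothetical mini-pipeline in the spirit of this comparison is sketched below: TF-IDF features from short texts, then k-means, DBSCAN and NMF from scikit-learn (k-medoids is omitted because it is not part of scikit-learn itself). The example texts and parameters replace the Nepal-earthquake tweets, which are not reproduced here.

      from sklearn.cluster import DBSCAN, KMeans
      from sklearn.decomposition import NMF
      from sklearn.feature_extraction.text import TfidfVectorizer

      tweets = ["earthquake hits kathmandu", "rescue teams reach kathmandu valley",
                "donate to nepal relief fund", "relief fund for earthquake victims",
                "football match postponed tonight", "great football game tonight"]

      X = TfidfVectorizer(stop_words="english").fit_transform(tweets)

      kmeans_labels = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)
      dbscan_labels = DBSCAN(eps=0.8, min_samples=2, metric="cosine").fit_predict(X)
      nmf_topics = NMF(n_components=3, random_state=0).fit_transform(X).argmax(axis=1)

      print("k-means:", kmeans_labels)
      print("DBSCAN :", dbscan_labels)
      print("NMF    :", nmf_topics)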

  11. Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry.

    Science.gov (United States)

    Bedggood, Phillip; Metha, Andrew

    2010-01-01

    Recently many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces comparable range and robustness compared to the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.

  12. Performance comparison of attitude determination, attitude estimation, and nonlinear observers algorithms

    Science.gov (United States)

    MOHAMMED, M. A. SI; BOUSSADIA, H.; BELLAR, A.; ADNANE, A.

    2017-01-01

    This paper presents a brief synthesis and a useful performance analysis of different attitude filtering algorithms (attitude determination algorithms, attitude estimation algorithms, and nonlinear observers) applied to a Low Earth Orbit satellite in terms of accuracy, convergence time, amount of memory, and computation time. The latter is calculated in two ways, using a personal computer and also using the On-Board Computer 750 (OBC 750) that is used in many SSTL Earth observation missions. This comparative study could serve as a design aid, helping the designer choose among attitude determination, attitude estimation, and attitude observer algorithms. The simulation results clearly indicate that the nonlinear observer is the most logical choice.

  13. Comparison of Two Phenotypic Algorithms To Detect Carbapenemase-Producing Enterobacteriaceae

    Science.gov (United States)

    Dortet, Laurent; Bernabeu, Sandrine; Gonzalez, Camille

    2017-01-01

    A novel algorithm designed for the screening of carbapenemase-producing Enterobacteriaceae (CPE), based on faropenem and temocillin disks, was compared to that of the Committee of the Antibiogram of the French Society of Microbiology (CA-SFM), which is based on ticarcillin-clavulanate, imipenem, and temocillin disks. The two algorithms presented comparable negative predictive values (98.6% versus 97.5%) for CPE screening among carbapenem-nonsusceptible Enterobacteriaceae. However, since 46.2% (n = 49) of the CPE were correctly identified as OXA-48-like producers by the faropenem/temocillin-based algorithm, it significantly decreased the number of complementary tests needed (42.2% versus 62.6% with the CA-SFM algorithm). PMID:28607010

  14. A Qualitative Comparison between the Proportional Navigation and Differential Geometry Guidance Algorithms

    Directory of Open Access Journals (Sweden)

    Yunes Sh. ALQUDSI

    2018-06-01

    This paper discusses and presents an overview of the proportional navigation (PN) guidance law as well as the differential geometry (DG) guidance algorithm, both of which are used to develop the intercept course toward a given target. The intent of this study is to illustrate the advantages of the guidance algorithm built on the concepts of differential geometry over the well-known PN guidance law. The basic principles behind both algorithms are described. Moreover, the different versions of the PN approach are briefly reviewed to show the essential improvements from one version to the next. The paper closes with numerous two-dimensional simulation figures that serve as visual aids, illustrating the significant relations and the main features and properties of both algorithms.
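
    For orientation, the classical PN law commands a lateral acceleration a_c = N * Vc * (LOS rate). The planar sketch below applies it with simple Euler integration; the navigation constant, initial states and step size are invented and do not come from the paper.

      import numpy as np

      dt, N = 0.01, 4.0
      missile_pos, missile_vel = np.array([0.0, 0.0]), np.array([300.0, 40.0])
      target_pos, target_vel = np.array([4000.0, 1000.0]), np.array([-150.0, 0.0])

      for step in range(6000):
          r = target_pos - missile_pos                      # line-of-sight (LOS) vector
          v_rel = target_vel - missile_vel
          los_rate = (r[0] * v_rel[1] - r[1] * v_rel[0]) / np.dot(r, r)   # scalar LOS rate in 2-D
          closing_speed = -np.dot(r, v_rel) / np.linalg.norm(r)
          a_c = N * closing_speed * los_rate                # PN commanded acceleration
          # apply the command perpendicular to the missile velocity
          heading = missile_vel / np.linalg.norm(missile_vel)
          normal = np.array([-heading[1], heading[0]])
          missile_vel = missile_vel + a_c * normal * dt
          missile_pos = missile_pos + missile_vel * dt
          target_pos = target_pos + target_vel * dt
          if np.linalg.norm(r) < 5.0:
              print(f"intercept after {step * dt:.2f} s")
              break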

  15. Clustering performance comparison using K-means and expectation maximization algorithms.

    Science.gov (United States)

    Jung, Yong Gyu; Kang, Min Soo; Heo, Jun

    2014-11-14

    Clustering is an important means of data mining based on separating data categories by similar features. Unlike classification algorithms, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis alone cannot guarantee the accuracy of the results. In this paper, logistic regression analysis is applied to EM clusters and to the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.
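
    A brief scikit-learn sketch of the two approaches named above follows: K-means and a Gaussian mixture fitted by expectation maximization, each followed by a logistic regression on the resulting cluster labels. The data are simulated blobs standing in for the red-wine measurements.

      from sklearn.cluster import KMeans
      from sklearn.datasets import make_blobs
      from sklearn.linear_model import LogisticRegression
      from sklearn.mixture import GaussianMixture

      # simulated stand-in for the wine data: 4 features, binary "quality" label
      X, quality = make_blobs(n_samples=300, centers=2, n_features=4, random_state=1)

      kmeans_labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
      em_labels = GaussianMixture(n_components=2, random_state=0).fit_predict(X)

      # logistic regression on each cluster label, as a simple check of how well
      # each unsupervised grouping lines up with the known label
      for name, labels in [("K-means", kmeans_labels), ("EM", em_labels)]:
          clf = LogisticRegression().fit(labels.reshape(-1, 1), quality)
          print(name, "agreement:", round(clf.score(labels.reshape(-1, 1), quality), 3))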

  16. Algorithm comparison and benchmarking using a parallel spectra transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  17. Currency recognition using a smartphone: Comparison between color SIFT and gray scale SIFT algorithms

    OpenAIRE

    Iyad Abu Doush; Sahar AL-Btoush

    2017-01-01

    Banknote recognition means classifying the currency (coins and paper notes) into the correct class. In this paper, we developed a dataset for Jordanian currency. After that we applied an automatic mobile recognition system on the dataset, using a smartphone and the scale-invariant feature transform (SIFT) algorithm. This is the first attempt, to the best of the authors' knowledge, to recognize both coins and paper banknotes on a smartphone using the SIFT algorithm. SIFT has been developed to be the most robust a...

  18. Comparison of four machine learning algorithms for their applicability in satellite-based optical rainfall retrievals

    Science.gov (United States)

    Meyer, Hanna; Kühnlein, Meike; Appelhans, Tim; Nauss, Thomas

    2016-03-01

    Machine learning (ML) algorithms have successfully been demonstrated to be valuable tools in satellite-based rainfall retrievals which show the practicability of using ML algorithms when faced with high dimensional and complex data. Moreover, recent developments in parallel computing with ML present new possibilities for training and prediction speed and therefore make their usage in real-time systems feasible. This study compares four ML algorithms - random forests (RF), neural networks (NNET), averaged neural networks (AVNNET) and support vector machines (SVM) - for rainfall area detection and rainfall rate assignment using MSG SEVIRI data over Germany. Satellite-based proxies for cloud top height, cloud top temperature, cloud phase and cloud water path serve as predictor variables. The results indicate an overestimation of rainfall area delineation regardless of the ML algorithm (averaged bias = 1.8) but a high probability of detection ranging from 81% (SVM) to 85% (NNET). On a 24-hour basis, the performance of the rainfall rate assignment yielded R2 values between 0.39 (SVM) and 0.44 (AVNNET). Though the differences in the algorithms' performance were rather small, NNET and AVNNET were identified as the most suitable algorithms. On average, they demonstrated the best performance in rainfall area delineation as well as in rainfall rate assignment. NNET's computational speed is an additional advantage in work with large datasets such as in remote sensing based rainfall retrievals. However, since no single algorithm performed considerably better than the others we conclude that further research in providing suitable predictors for rainfall is of greater necessity than an optimization through the choice of the ML algorithm.

  19. A comparison of optimization algorithms for localized in vivo B0 shimming.

    Science.gov (United States)

    Nassirpour, Sahar; Chang, Paul; Fillmer, Ariane; Henning, Anke

    2018-02-01

    To compare several different optimization algorithms currently used for localized in vivo B0 shimming, and to introduce a novel, fast, and robust constrained regularized algorithm (ConsTru) for this purpose. Ten different optimization algorithms (including samples from both generic and dedicated least-squares solvers, and a novel constrained regularized inversion method) were implemented and compared for shimming in five different shimming volumes on 66 in vivo data sets from both 7 T and 9.4 T. The best algorithm was chosen to perform single-voxel spectroscopy at 9.4 T in the frontal cortex of the brain on 10 volunteers. The results of the performance tests showed that a shimming algorithm is prone to unstable solutions if it depends on the value of a starting point and is not regularized to handle ill-conditioned problems. The ConsTru algorithm proved to be the most robust, fast, and efficient algorithm among all of the chosen algorithms. It enabled acquisition of spectra of reproducibly high quality in the frontal cortex at 9.4 T. For localized in vivo B0 shimming, the use of a dedicated linear least-squares solver instead of a generic nonlinear one is highly recommended. Among all of the linear solvers, the constrained regularized method (ConsTru) was found to be both fast and most robust. Magn Reson Med 79:1145-1156, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
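
    A generic sketch of a constrained, regularized least-squares shim update is given below: minimize ||Ax - b||^2 + lambda^2 ||x||^2 subject to per-channel current bounds, solved with SciPy's bounded linear least squares. The field maps, target, bounds and regularization weight are made-up stand-ins, and this is not the published ConsTru implementation.

      import numpy as np
      from scipy.optimize import lsq_linear

      rng = np.random.default_rng(0)
      n_vox, n_coils = 2000, 8
      A = rng.normal(size=(n_vox, n_coils))          # per-coil field maps (Hz per unit current)
      b = -rng.normal(size=n_vox)                    # negative of the measured B0 offset

      lam = 0.05                                     # Tikhonov regularization weight
      A_aug = np.vstack([A, lam * np.eye(n_coils)])  # augment the system with lam*I
      b_aug = np.concatenate([b, np.zeros(n_coils)])

      result = lsq_linear(A_aug, b_aug, bounds=(-1.0, 1.0))   # hardware current limits
      print("shim currents:", np.round(result.x, 3))
      print("residual field std:", np.std(A @ result.x - b))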

  20. Iterative metal artefact reduction (MAR) in postsurgical chest CT: comparison of three iMAR-algorithms.

    Science.gov (United States)

    Aissa, Joel; Boos, Johannes; Sawicki, Lino Morris; Heinzler, Niklas; Krzymyk, Karl; Sedlmair, Martin; Kröpil, Patric; Antoch, Gerald; Thomas, Christoph

    2017-11-01

    The purpose of this study was to evaluate the impact of three novel iterative metal artefact reduction (iMAR) algorithms on image quality and artefact degree in chest CT of patients with a variety of thoracic metallic implants. 27 postsurgical patients with thoracic implants who underwent clinical chest CT between March and May 2015 in clinical routine were retrospectively included. Images were retrospectively reconstructed with standard weighted filtered back projection (WFBP) and with three iMAR algorithms (iMAR-Algo1 = Cardiac algorithm, iMAR-Algo2 = Pacemaker algorithm and iMAR-Algo3 = ThoracicCoils algorithm). The subjective and objective image quality was assessed. Averaged over all artefacts, the artefact degree was significantly lower for iMAR-Algo1 (58.9 ± 48.5 HU), iMAR-Algo2 (52.7 ± 46.8 HU) and iMAR-Algo3 (51.9 ± 46.1 HU) than for WFBP (91.6 ± 81.6 HU). The iMAR-Algo2 and iMAR-Algo3 reconstructions reduced mild and moderate artefacts more than WFBP and iMAR-Algo1 did. All three iMAR algorithms led to a significant reduction of metal artefacts and an increase in overall image quality compared with WFBP in chest CT of patients with metallic implants, in both subjective and objective analysis. iMAR-Algo2 and iMAR-Algo3 were best for mild artefacts, while iMAR-Algo1 was superior for severe artefacts. Advances in knowledge: Iterative MAR led to significant artefact reduction and increased image quality compared with WFBP in CT after implantation of thoracic devices. Adjusting the iMAR algorithm to the patient's metallic implants can help to improve image quality in CT.

  1. Comparison of two heterogeneity correction algorithms in pituitary gland treatments with intensity-modulated radiation therapy

    International Nuclear Information System (INIS)

    Albino, Lucas D.; Santos, Gabriela R.; Ribeiro, Victor A.B.; Rodrigues, Laura N.; Weltman, Eduardo; Braga, Henrique F.

    2013-01-01

    The dose accuracy calculated by a treatment planning system is directly related to the chosen algorithm. Nowadays, several dose calculation algorithms are commercially available, and they differ in calculation time and accuracy, especially when individual tissue densities are taken into account. The aim of this study was to compare two different calculation algorithms from iPlan®, BrainLAB, in the treatment of pituitary gland tumors with intensity-modulated radiation therapy (IMRT). These tumors are located in a region with tissues of variable electronic density. The deviations from the plan with no heterogeneity correction were evaluated. For initial validation of the data entered into the planning system, an IMRT plan was simulated in an anthropomorphic phantom and the dose distribution was measured with a radiochromic film. Gamma analysis was performed on the film, comparing it with dose distributions calculated with the X-ray Voxel Monte Carlo (XVMC) algorithm and the pencil beam convolution (PBC) algorithm. Next, 33 patient plans, initially calculated with the PBC algorithm, were recalculated with the XVMC algorithm. The treatment volume and organ-at-risk dose-volume histograms were compared. No relevant differences were found in the dose-volume histograms between XVMC and PBC. However, differences were obtained when comparing each plan with the plan without heterogeneity correction. (author)

  2. Comparison of Co-Temporal Modeling Algorithms on Sparse Experimental Time Series Data Sets.

    Science.gov (United States)

    Allen, Edward E; Norris, James L; John, David J; Thomas, Stan J; Turkett, William H; Fetrow, Jacquelyn S

    2010-01-01

    Multiple approaches for reverse-engineering biological networks from time-series data have been proposed in the computational biology literature. These approaches can be classified by their underlying mathematical algorithms, such as Bayesian or algebraic techniques, as well as by their time paradigm, which includes next-state and co-temporal modeling. The types of biological relationships, such as parent-child or siblings, discovered by these algorithms are quite varied. It is important to understand the strengths and weaknesses of the various algorithms and time paradigms on actual experimental data. We assess how well the co-temporal implementations of three algorithms, continuous Bayesian, discrete Bayesian, and computational algebraic, can 1) identify two types of entity relationships, parent and sibling, between biological entities, 2) deal with experimental sparse time course data, and 3) handle experimental noise seen in replicate data sets. These algorithms are evaluated, using the shuffle index metric, for how well the resulting models match literature models in terms of siblings and parent relationships. Results indicate that all three co-temporal algorithms perform well, at a statistically significant level, at finding sibling relationships, but perform relatively poorly in finding parent relationships.

  3. Comparison of different reconstruction algorithms for three-dimensional ultrasound imaging in a neurosurgical setting.

    Science.gov (United States)

    Miller, D; Lippert, C; Vollmer, F; Bozinov, O; Benes, L; Schulte, D M; Sure, U

    2012-09-01

    Freehand three-dimensional ultrasound imaging (3D-US) is increasingly used in image-guided surgery. During image acquisition, a set of B-scans is acquired that is distributed in a non-parallel manner over the area of interest. Reconstructing these images into a regular array allows 3D visualization. However, the reconstruction process may introduce artefacts and may therefore reduce image quality. The aim of the study is to compare different algorithms with respect to image quality and diagnostic value for image guidance in neurosurgery. 3D-US data sets were acquired during surgery of various intracerebral lesions using an integrated ultrasound-navigation device. They were stored for post-hoc evaluation. Five different reconstruction algorithms, a standard multiplanar reconstruction with interpolation (MPR), a pixel nearest neighbour method (PNN), a voxel nearest neighbour method (VNN) and two voxel based distance-weighted algorithms (VNN2 and DW) were tested with respect to image quality and artefact formation. The capability of the algorithm to fill gaps within the sample volume was investigated and a clinical evaluation with respect to the diagnostic value of the reconstructed images was performed. MPR was significantly worse than the other algorithms in filling gaps. In an image subtraction test, VNN2 and DW reliably reconstructed images even if large amounts of data were missing. However, the quality of the reconstruction improved, if data acquisition was performed in a structured manner. When evaluating the diagnostic value of reconstructed axial, sagittal and coronal views, VNN2 and DW were judged to be significantly better than MPR and VNN. VNN2 and DW could be identified as robust algorithms that generate reconstructed US images with a high diagnostic value. These algorithms improve the utility and reliability of 3D-US imaging during intraoperative navigation. Copyright © 2012 John Wiley & Sons, Ltd.

  4. Target recognition of ladar range images using slice image: comparison of four improved algorithms

    Science.gov (United States)

    Xia, Wenze; Han, Shaokun; Cao, Jingya; Wang, Liang; Zhai, Yu; Cheng, Yang

    2017-07-01

    Compared with traditional 3-D shape data, ladar range images possess properties of strong noise, shape degeneracy, and sparsity, which make feature extraction and representation difficult. The slice image is an effective feature descriptor to resolve this problem. We propose four improved algorithms for target recognition of ladar range images using the slice image. In order to improve the resolution invariance of the slice image, mean value detection instead of maximum value detection is applied in these four improved algorithms. In order to improve the rotation invariance of the slice image, three new improved feature descriptors (feature slice image, slice-Zernike moments, and slice-Fourier moments) are applied in the last three improved algorithms, respectively. Backpropagation neural networks are used as feature classifiers in the last two improved algorithms. The performance of these four improved recognition systems is analyzed comprehensively in terms of the three invariances, recognition rate, and execution time. The final experiment results show that the improvements for these four algorithms reach the desired effect, that the three invariances of the feature descriptors are not directly related to the final recognition performance of the recognition systems, and that the four improved recognition systems have different performances under different conditions.

  5. Comparison of Nonequilibrium Solution Algorithms Applied to Chemically Stiff Hypersonic Flows

    Science.gov (United States)

    Palmer, Grant; Venkatapathy, Ethiraj

    1995-01-01

    Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel, are used to compute nonequilibrium flow around the Apollo 4 return capsule at the 62-km altitude point in its descent trajectory. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15 and 30, the lower-upper symmetric Gauss-Seidel method produces an eight order of magnitude drop in the energy residual in one-third to one-half the Cray C-90 computer time compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 30 and above. At Mach 40 the performance of the lower-upper symmetric Gauss-Seidel algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.

  6. Comparison of optimization algorithms in intensity-modulated radiation therapy planning

    Science.gov (United States)

    Kendrick, Rachel

    Intensity-modulated radiation therapy is used to better conform the radiation dose to the target, which includes avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. The algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms seem to suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.

  7. Comparison of Statistical Algorithms for the Detection of Infectious Disease Outbreaks in Large Multiple Surveillance Systems

    Science.gov (United States)

    Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre

    2016-01-01

    A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749

  8. Dense Matching Comparison Between Census and a Convolutional Neural Network Algorithm for Plant Reconstruction

    Science.gov (United States)

    Xia, Y.; Tian, J.; d'Angelo, P.; Reinartz, P.

    2018-05-01

    3D reconstruction of plants is hard to implement, as the complex leaf distribution highly increases the difficulty level in dense matching. Semi-Global Matching has been successfully applied to recover the depth information of a scene, but may perform variably when different matching cost algorithms are used. In this paper two matching cost computation algorithms, Census transform and an algorithm using a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated based on the two selected matching cost methods are comparable with acceptable quality, which shows the good performance of Census and the potential of neural networks to improve the dense matching.
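
    A compact NumPy sketch of the Census matching cost referred to above follows: each pixel is described by a bit string recording whether its neighbours are darker than the centre, and the cost between two patches is the Hamming distance of those strings. The window size and random test images are arbitrary.

      import numpy as np

      def census_transform(img, radius=2):
          h, w = img.shape
          n = 2 * radius + 1
          descriptors = np.zeros((h - 2 * radius, w - 2 * radius, n * n), dtype=bool)
          centre = img[radius:h - radius, radius:w - radius]
          k = 0
          for dy in range(-radius, radius + 1):
              for dx in range(-radius, radius + 1):
                  if dy == 0 and dx == 0:
                      continue
                  shifted = img[radius + dy:h - radius + dy, radius + dx:w - radius + dx]
                  descriptors[..., k] = shifted < centre
                  k += 1
          return descriptors[..., :k]

      def census_cost(desc_left, desc_right):
          # Hamming distance between corresponding descriptors
          return np.count_nonzero(desc_left != desc_right, axis=-1)

      rng = np.random.default_rng(0)
      left = rng.random((50, 60))
      right = np.roll(left, 3, axis=1)             # a crude 3-pixel horizontal shift

      cost_at_zero_disparity = census_cost(census_transform(left), census_transform(right))
      print(cost_at_zero_disparity.mean())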

  9. DENSE MATCHING COMPARISON BETWEEN CENSUS AND A CONVOLUTIONAL NEURAL NETWORK ALGORITHM FOR PLANT RECONSTRUCTION

    Directory of Open Access Journals (Sweden)

    Y. Xia

    2018-05-01

    3D reconstruction of plants is hard to implement, as the complex leaf distribution highly increases the difficulty level in dense matching. Semi-Global Matching has been successfully applied to recover the depth information of a scene, but may perform variably when different matching cost algorithms are used. In this paper two matching cost computation algorithms, Census transform and an algorithm using a convolutional neural network, are tested for plant reconstruction based on Semi-Global Matching. High resolution close-range photogrammetric images from a handheld camera are used for the experiment. The disparity maps generated based on the two selected matching cost methods are comparable with acceptable quality, which shows the good performance of Census and the potential of neural networks to improve the dense matching.

  10. Performance Comparison of Different System Identification Algorithms for FACET and ATF2

    CERN Document Server

    Pfingstner, J; Schulte, D

    2013-01-01

    Good system knowledge is an essential ingredient for the operation of modern accelerator facilities. For example, beam-based alignment algorithms and orbit feedbacks rely strongly on a precise measurement of the orbit response matrix. The quality of the measurement of this matrix can be improved over time by statistically combining the effects of small system excitations with the help of system identification algorithms. These small excitations can be applied in a parasitic mode without stopping the accelerator operation (on-line). In this work, different system identification algorithms are used in simulation studies for the response matrix measurement at ATF2. The results for ATF2 are finally compared with the results for FACET, the latter originating from earlier work.

  11. A comparison of different algorithms for phasing haplotypes using Holstein cattle genotypes and pedigree data.

    Science.gov (United States)

    Miar, Younes; Sargolzaei, Mehdi; Schenkel, Flavio S

    2017-04-01

    Phasing genotypes to haplotypes is becoming increasingly important due to its applications in the study of diseases, population and evolutionary genetics, imputation, and so on. Several studies have focused on the development of computational methods that infer haplotype phase from population genotype data. The aim of this study was to compare phasing algorithms implemented in Beagle, Findhap, FImpute, Impute2, and ShapeIt2 software using 50k and 777k (HD) genotyping data. Six scenarios were considered: no-parents, sire-progeny pairs, sire-dam-progeny trios, each with and without pedigree information in Holstein cattle. Algorithms were compared with respect to their phasing accuracy and computational efficiency. In the studied population, Beagle and FImpute were more accurate than other phasing algorithms. Across scenarios, phasing accuracies for Beagle and FImpute were 99.49-99.90% and 99.44-99.99% for 50k, respectively, and 99.90-99.99% and 99.87-99.99% for HD, respectively. Generally, FImpute resulted in higher accuracy when genotypic information of at least one parent was available. In the absence of parental genotypes and pedigree information, Beagle and Impute2 (with double the default number of states) were slightly more accurate than FImpute. Findhap gave high phasing accuracy when parents' genotypes and pedigree information were available. In terms of computing time, Findhap was the fastest algorithm followed by FImpute. FImpute was 30 to 131, 87 to 786, and 353 to 1,400 times faster across scenarios than Beagle, ShapeIt2, and Impute2, respectively. In summary, FImpute and Beagle were the most accurate phasing algorithms. Moreover, the low computational requirement of FImpute makes it an attractive algorithm for phasing genotypes of large livestock populations. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  12. COMPARISON OF PARTIAL LEAST SQUARES REGRESSION METHOD ALGORITHMS: NIPALS AND PLS-KERNEL AND AN APPLICATION

    Directory of Open Access Journals (Sweden)

    ELİF BULUT

    2013-06-01

    Full Text Available Partial Least Squares Regression (PLSR) is a multivariate statistical method that combines partial least squares and multiple linear regression analysis. Explanatory variables, X, exhibiting multicollinearity are reduced to components that explain a large amount of the covariance between the explanatory and response variables. These components are few in number and do not suffer from the multicollinearity problem. Multiple linear regression analysis is then applied to those components to model the response variable Y. There are various PLSR algorithms. In this study the NIPALS and PLS-Kernel algorithms are studied and illustrated on a real data set.
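
    For orientation, a minimal NIPALS-style PLS1 sketch (single response) is given below: components are extracted sequentially from the covariance between X and y, with deflation after each component. The toy data and the number of components are assumptions for illustration; the PLS-Kernel variant is not shown.

      import numpy as np

      def nipals_pls1(X, y, n_components=2):
          """Minimal NIPALS-style PLS1: extract score vectors that capture the
          covariance between X and a single response y, deflating after each
          component so the components are free of multicollinearity."""
          X = X - X.mean(axis=0)
          y = y - y.mean()
          T, W, P, q = [], [], [], []
          for _ in range(n_components):
              w = X.T @ y
              w /= np.linalg.norm(w)            # weight vector
              t = X @ w                         # component scores
              p = X.T @ t / (t @ t)             # X loadings
              qk = (y @ t) / (t @ t)            # y loading
              X = X - np.outer(t, p)            # deflate X
              y = y - qk * t                    # deflate y
              T.append(t); W.append(w); P.append(p); q.append(qk)
          return np.column_stack(T), np.column_stack(W), np.column_stack(P), np.array(q)

      rng = np.random.default_rng(1)
      X = rng.normal(size=(30, 6))
      X[:, 3] = X[:, 0] + 0.01 * rng.normal(size=30)   # induce multicollinearity
      y = X[:, 0] - 2.0 * X[:, 1] + rng.normal(scale=0.1, size=30)
      T, W, P, q = nipals_pls1(X, y)
      print(T.shape, np.round(q, 3))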

  13. Teacher Candidates' Experiences with Clinical Teaching in Reading Instruction: A Comparison between the Professional Development School Environment and the Non-Professional Development School Environment

    Science.gov (United States)

    Hopper, Cynthia J.

    2016-01-01

    Teacher candidates experience a variety of school settings when enrolled in teacher education methods courses. Candidates report varied experiences when in public school classrooms. This dissertation investigated clinical experiences of teacher candidates when placed in two different environments for clinical teaching. The two environments were a…

  14. Performance Comparison of Reconstruction Algorithms in Discrete Blind Multi-Coset Sampling

    DEFF Research Database (Denmark)

    Grigoryan, Ruben; Arildsen, Thomas; Tandur, Deepaknath

    2012-01-01

    This paper investigates the performance of different reconstruction algorithms in discrete blind multi-coset sampling. Multi-coset scheme is a promising compressed sensing architecture that can replace traditional Nyquist-rate sampling in the applications with multi-band frequency sparse signals...

  15. An up-to-date comparison of state-of-the-art classification algorithms

    KAUST Repository

    Zhang, Chongsheng

    2017-04-05

    Current benchmark reports of classification algorithms generally concern common classifiers and their variants but do not include many algorithms that have been introduced in recent years. Moreover, important properties such as the dependency on number of classes and features and CPU running time are typically not examined. In this paper, we carry out a comparative empirical study on both established classifiers and more recently proposed ones on 71 data sets originating from different domains, publicly available at UCI and KEEL repositories. The list of 11 algorithms studied includes Extreme Learning Machine (ELM), Sparse Representation based Classification (SRC), and Deep Learning (DL), which have not been thoroughly investigated in existing comparative studies. It is found that Stochastic Gradient Boosting Trees (GBDT) matches or exceeds the prediction performance of Support Vector Machines (SVM) and Random Forests (RF), while being the fastest algorithm in terms of prediction efficiency. ELM also yields good accuracy results, ranking in the top-5, alongside GBDT, RF, SVM, and C4.5 but this performance varies widely across all data sets. Unsurprisingly, top accuracy performers have average or slow training time efficiency. DL is the worst performer in terms of accuracy but second fastest in prediction efficiency. SRC shows good accuracy performance but it is the slowest classifier in both training and testing.
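
    A benchmarking loop in the spirit described here can be sketched with scikit-learn, recording accuracy together with training and prediction time. The classifier list below is an illustrative stand-in (CART for C4.5; the study's ELM, SRC and deep learning implementations are not reproduced), and the digits data set replaces the UCI/KEEL collections.

      import time
      from sklearn.datasets import load_digits
      from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
      from sklearn.metrics import accuracy_score
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVC
      from sklearn.tree import DecisionTreeClassifier

      X, y = load_digits(return_X_y=True)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.3, random_state=0)

      classifiers = {
          "GBDT": GradientBoostingClassifier(random_state=0),
          "RF": RandomForestClassifier(n_estimators=200, random_state=0),
          "SVM": SVC(gamma="scale"),
          "CART": DecisionTreeClassifier(random_state=0),   # stand-in for C4.5
      }

      for name, clf in classifiers.items():
          t0 = time.perf_counter(); clf.fit(Xtr, ytr); t_fit = time.perf_counter() - t0
          t0 = time.perf_counter(); pred = clf.predict(Xte); t_pred = time.perf_counter() - t0
          print(f"{name:5s} accuracy={accuracy_score(yte, pred):.3f} "
                f"train={t_fit:.2f}s predict={t_pred:.3f}s")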

  16. Comparison of SAR Wind Speed Retrieval Algorithms for Evaluating Offshore Wind Energy Resources

    DEFF Research Database (Denmark)

    Kozai, K.; Ohsawa, T.; Takeyama, Y.

    2010-01-01

    Envisat/ASAR-derived offshore wind speeds and energy densities based on 4 different SAR wind speed retrieval algorithms (CMOD4, CMOD-IFR2, CMOD5, CMOD5.N) are compared with observed wind speeds and energy densities for evaluating offshore wind energy resources. CMOD4 ignores effects of atmospheri...

  17. Comparison and application of wind retrieval algorithms for small unmanned aerial systems

    Science.gov (United States)

    Bonin, T. A.; Chilson, P. B.; Zielke, B. S.; Klein, P. M.; Leeman, J. R.

    2013-07-01

    Recently, there has been an increase in use of Unmanned Aerial Systems (UASs) as platforms for conducting fundamental and applied research in the lower atmosphere due to their relatively low cost and ability to collect samples with high spatial and temporal resolution. Concurrent with this development comes the need for accurate instrumentation and measurement methods suitable for small meteorological UASs. Moreover, the instrumentation to be integrated into such platforms must be small and lightweight. Whereas thermodynamic variables can be easily measured using well-aspirated sensors onboard, it is much more challenging to accurately measure the wind with a UAS. Several algorithms have been developed that incorporate GPS observations as a means of estimating the horizontal wind vector, with each algorithm exhibiting its own particular strengths and weaknesses. In the present study, the performance of three such GPS-based wind-retrieval algorithms has been investigated and compared with wind estimates from rawinsonde and sodar observations. Each of the algorithms considered agreed well with the wind measurements from sounding and sodar data. Through the integration of UAS-retrieved profiles of thermodynamic and kinematic parameters, one can investigate the static and dynamic stability of the atmosphere and relate them to the state of the boundary layer across a variety of times and locations, which might be difficult to access using conventional instrumentation.

  18. Development and comparisons of wind retrieval algorithms for small unmanned aerial systems

    Science.gov (United States)

    Bonin, T. A.; Chilson, P. B.; Zielke, B. S.; Klein, P. M.; Leeman, J. R.

    2012-12-01

    Recently, there has been an increase in use of Unmanned Aerial Systems (UASs) as platforms for conducting fundamental and applied research in the lower atmosphere due to their relatively low cost and ability to collect samples with high spatial and temporal resolution. Concurrent with this development comes the need for accurate instrumentation and measurement methods suitable for small meteorological UASs. Moreover, the instrumentation to be integrated into such platforms must be small and lightweight. Whereas thermodynamic variables can be easily measured using well aspirated sensors onboard, it is much more challenging to accurately measure the wind with a UAS. Several algorithms have been developed that incorporate GPS observations as a means of estimating the horizontal wind vector, with each algorithm exhibiting its own particular strengths and weaknesses. In the present study, the performance of three such GPS-based wind-retrieval algorithms has been investigated and compared with wind estimates from rawinsonde and sodar observations. Each of the algorithms considered agreed well with the wind measurements from sounding and sodar data. Through the integration of UAS-retrieved profiles of thermodynamic and kinematic parameters, one can investigate the static and dynamic stability of the atmosphere and relate them to the state of the boundary layer across a variety of times and locations, which might be difficult to access using conventional instrumentation.

  19. A comparison of regression algorithms for wind speed forecasting at Alexander Bay

    CSIR Research Space (South Africa)

    Botha, Nicolene

    2016-12-01

    Full Text Available to forecast 1 to 24 hours ahead, in hourly intervals. Predictions are performed on a wind speed time series with three machine learning regression algorithms, namely support vector regression, ordinary least squares and Bayesian ridge regression. The resulting...
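
    The three regression algorithms named in this record are all available in scikit-learn; the sketch below sets up such a comparison on a synthetic hourly wind speed series using lagged values as features. The lag construction, the data and the hyperparameters are illustrative assumptions, not the study's setup.

      import numpy as np
      from sklearn.linear_model import BayesianRidge, LinearRegression
      from sklearn.metrics import mean_absolute_error
      from sklearn.model_selection import train_test_split
      from sklearn.svm import SVR

      rng = np.random.default_rng(42)
      t = np.arange(3000)
      wind = 8 + 3 * np.sin(2 * np.pi * t / 24) + rng.normal(0, 1.0, t.size)  # synthetic hourly wind speed

      def lag_matrix(series, n_lags=24, horizon=1):
          """Use the previous n_lags hours to predict `horizon` hours ahead."""
          rows = len(series) - n_lags - horizon + 1
          X = np.array([series[i:i + n_lags] for i in range(rows)])
          y = series[n_lags + horizon - 1:n_lags + horizon - 1 + rows]
          return X, y

      X, y = lag_matrix(wind)
      Xtr, Xte, ytr, yte = train_test_split(X, y, test_size=0.25, shuffle=False)

      for name, model in [("support vector regression", SVR(C=10.0)),
                          ("ordinary least squares", LinearRegression()),
                          ("Bayesian ridge regression", BayesianRidge())]:
          model.fit(Xtr, ytr)
          print(f"{name:27s} MAE = {mean_absolute_error(yte, model.predict(Xte)):.3f} m/s")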

  20. Comparison of advanced imputation algorithms for detection of transportation mode and activity episode using GPS data

    NARCIS (Netherlands)

    Feng, T.; Timmermans, H.J.P.

    2016-01-01

    Global Positioning System (GPS) technologies have been increasingly considered as an alternative to traditional travel survey methods to collect activity-travel data. Algorithms applied to extract activity-travel patterns vary from informal ad-hoc decision rules to advanced machine learning methods

  1. Comparison of square law, linear and bessel detectors for CA and OS CFAR algorithms

    CSIR Research Space (South Africa)

    Melebari, A

    2015-10-01

    Full Text Available These detectors have different detection performances and computational costs. In this paper, the detection performances of these three detectors are investigated for CA-CFAR and Order Statistic CFAR (OS-CFAR) algorithms using simulated and measured data of semi...
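
    For context, a minimal one-dimensional CA-CFAR sketch on square-law (power) samples is shown below; the training/guard window sizes and the false-alarm probability are illustrative assumptions, and the OS-CFAR and Bessel/linear detector variants discussed in the paper are not reproduced.

      import numpy as np

      def ca_cfar(power, n_train=16, n_guard=2, pfa=1e-3):
          """Cell-averaging CFAR on square-law (power) samples: the threshold is
          alpha times the mean of the training cells around the cell under test."""
          alpha = n_train * (pfa ** (-1.0 / n_train) - 1.0)   # CA-CFAR scaling factor
          half = n_train // 2 + n_guard
          detections = np.zeros(len(power), dtype=bool)
          for i in range(half, len(power) - half):
              train = np.r_[power[i - half:i - n_guard], power[i + n_guard + 1:i + half + 1]]
              detections[i] = power[i] > alpha * train.mean()
          return detections

      rng = np.random.default_rng(3)
      noise = rng.exponential(1.0, 512)   # square-law detected noise
      noise[200] += 30.0                  # injected target
      print(np.where(ca_cfar(noise))[0])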

  2. Comparison of Unsupervised Vegetation Classification Methods from Vhr Images after Shadows Removal by Innovative Algorithms

    Science.gov (United States)

    Movia, A.; Beinat, A.; Crosilla, F.

    2015-04-01

    The recognition of vegetation by the analysis of very high resolution (VHR) aerial images provides meaningful information about environmental features; nevertheless, VHR images frequently contain shadows that cause significant problems for the classification of image components and for the extraction of the needed information. The aim of this research is to classify, from VHR aerial images, vegetation involved in the balance of the environmental biochemical cycle, and to discriminate it from urban and agricultural features. Three classification algorithms were tested in order to better recognize vegetation and were compared to the NDVI index; unfortunately, all of these methods are affected by the presence of shadows in the images. The literature presents several algorithms to detect and remove shadows from a scene, most of them based on RGB to HSI transformations. In this work some of them have been implemented and compared with one based on the RGB bands. Subsequently, in order to remove shadows and restore brightness in the images, some innovative algorithms based on Procrustes theory have been implemented and applied. Among these, we evaluate the capability of the so-called "not-centered oblique Procrustes" and "anisotropic Procrustes" methods to efficiently restore brightness with respect to a linear correlation correction based on the Cholesky decomposition. Experimental results obtained by different classification methods after shadow removal with the innovative algorithms are presented and discussed.

  3. Numerical Laplace inversion in problems of elastodynamics: Comparison of four algorithms

    Czech Academy of Sciences Publication Activity Database

    Adámek, V.; Valeš, František; Červ, Jan

    2017-01-01

    Vol. 113, November (2017), pp. 120-129, ISSN 0965-9978. R&D Projects: GA ČR (CZ) GAP101/12/2315. Institutional support: RVO:61388998. Keywords: inverse Laplace transform; numerical algorithm; wave propagation; multi-precision computation; Maple code. Subject RIV: BI - Acoustics. OECD field: Acoustics. Impact factor: 3.000, year: 2016

  4. Comparison of optimization of loading patterns on the basis of SA and PMA algorithms

    International Nuclear Information System (INIS)

    Beliczai, Botond

    2007-01-01

    Optimization of loading patterns is a very important task from the economic point of view in a nuclear power plant. The optimization algorithms used for this purpose fall into two basic categories: deterministic and stochastic. In the Paks nuclear power plant a deterministic optimization procedure is used to optimize the loading pattern at BOC, so that the core has maximal reactivity reserve. The stochastic optimization procedures mainly comprise simulated annealing (SA) procedures and genetic algorithms (GA). There are also newer procedures that try to combine the advantages of SA and GA; one of them is the population mutation annealing algorithm (PMA). In the Paks NPP we would like to introduce fuel assemblies containing burnable poison (Gd) in the near future. In order to be able to find the optimal loading pattern (or near-optimal loading patterns) in that case, the core has to be optimized not only for objective functions defined at BOC but for those at EOC as well. For this purpose I used the stochastic algorithms (SA and PMA) to investigate loading pattern optimization results for different objective functions at BOC. (author)
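
    As a generic illustration of the simulated annealing approach mentioned here (not the plant-specific optimization), the sketch below anneals a permutation-coded loading pattern against an arbitrary objective to be maximized; the toy objective and the cooling schedule are placeholders.

      import math
      import random

      def simulated_annealing(pattern, objective, t0=1.0, cooling=0.995, steps=5000):
          """Generic SA over permutation-coded loading patterns; `objective`
          returns a value to be maximized (e.g., a reactivity reserve figure)."""
          current, best = list(pattern), list(pattern)
          f_cur = f_best = objective(current)
          temp = t0
          for _ in range(steps):
              i, j = random.sample(range(len(current)), 2)
              candidate = list(current)
              candidate[i], candidate[j] = candidate[j], candidate[i]   # swap two assemblies
              f_new = objective(candidate)
              if f_new > f_cur or random.random() < math.exp((f_new - f_cur) / temp):
                  current, f_cur = candidate, f_new
                  if f_cur > f_best:
                      best, f_best = list(current), f_cur
              temp *= cooling
          return best, f_best

      # toy objective: place "heavier" assemblies near the centre of a 1-D core
      def score(p):
          """Reward configurations whose larger values sit near the core centre."""
          return -sum(abs(i - len(p) / 2.0) * v for i, v in enumerate(p))

      random.seed(4)
      toy = list(range(20))
      random.shuffle(toy)
      print(simulated_annealing(toy, score)[1])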

  5. A comparison of two open source LiDAR surface classification algorithms

    Science.gov (United States)

    With the progression of LiDAR (Light Detection and Ranging) towards a mainstream resource management tool, it has become necessary to understand how best to process and analyze the data. While most ground surface identification algorithms remain proprietary and have high purchase costs; a few are op...

  6. A comparison of algorithms for inference and learning in probabilistic graphical models.

    Science.gov (United States)

    Frey, Brendan J; Jojic, Nebojsa

    2005-09-01

    Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.

  7. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł

    2010-01-01

    We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. The first kind of rules, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). In contrast, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used to improve the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. This information is then used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.

  8. An up-to-date comparison of state-of-the-art classification algorithms

    KAUST Repository

    Zhang, Chongsheng; Liu, Changchang; Zhang, Xiangliang; Almpanidis, George

    2017-01-01

    Current benchmark reports of classification algorithms generally concern common classifiers and their variants but do not include many algorithms that have been introduced in recent years. Moreover, important properties such as the dependency on number of classes and features and CPU running time are typically not examined. In this paper, we carry out a comparative empirical study on both established classifiers and more recently proposed ones on 71 data sets originating from different domains, publicly available at UCI and KEEL repositories. The list of 11 algorithms studied includes Extreme Learning Machine (ELM), Sparse Representation based Classification (SRC), and Deep Learning (DL), which have not been thoroughly investigated in existing comparative studies. It is found that Stochastic Gradient Boosting Trees (GBDT) matches or exceeds the prediction performance of Support Vector Machines (SVM) and Random Forests (RF), while being the fastest algorithm in terms of prediction efficiency. ELM also yields good accuracy results, ranking in the top-5, alongside GBDT, RF, SVM, and C4.5 but this performance varies widely across all data sets. Unsurprisingly, top accuracy performers have average or slow training time efficiency. DL is the worst performer in terms of accuracy but second fastest in prediction efficiency. SRC shows good accuracy performance but it is the slowest classifier in both training and testing.

  9. A Matlab-Based Testbed for Integration, Evaluation and Comparison of Heterogeneous Stereo Vision Matching Algorithms

    Directory of Open Access Journals (Sweden)

    Raul Correal

    2016-11-01

    Full Text Available Stereo matching is a heavily researched area with a prolific published literature and a broad spectrum of heterogeneous algorithms available in diverse programming languages. This paper presents a Matlab-based testbed that aims to centralize and standardize this variety of both current and prospective stereo matching approaches. The proposed testbed aims to facilitate the application of stereo-based methods to real situations. It allows for configuring and executing algorithms, as well as comparing results, in a fast, easy and friendly setting. Algorithms can be combined so that a series of processes can be chained and executed consecutively, using the output of a process as input for the next; some additional filtering and image processing techniques have been included within the testbed for this purpose. A use case is included to illustrate how these processes are sequenced and its effect on the results for real applications. The testbed has been conceived as a collaborative and incremental open-source project, where its code is accessible and modifiable, with the objective of receiving contributions and releasing future versions to include new algorithms and features. It is currently available online for the research community.

  10. Forecasting spot electricity prices : Deep learning approaches and empirical comparison of traditional algorithms

    NARCIS (Netherlands)

    Lago Garcia, J.; De Ridder, Fjo; De Schutter, B.H.K.

    2018-01-01

    In this paper, a novel modeling framework for forecasting electricity prices is proposed. While many predictive models have been already proposed to perform this task, the area of deep learning algorithms remains yet unexplored. To fill this scientific gap, we propose four different deep learning

  11. A comparison of two open source LiDAR surface classification algorithms

    Science.gov (United States)

    Wade T. Tinkham; Hongyu Huang; Alistair M.S. Smith; Rupesh Shrestha; Michael J. Falkowski; Andrew T. Hudak; Timothy E. Link; Nancy F. Glenn; Danny G. Marks

    2011-01-01

    With the progression of LiDAR (Light Detection and Ranging) towards a mainstream resource management tool, it has become necessary to understand how best to process and analyze the data. While most ground surface identification algorithms remain proprietary and have high purchase costs; a few are openly available, free to use, and are supported by published results....

  12. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities

    International Nuclear Information System (INIS)

    Carrasco, P.; Jornet, N.; Duch, M. A.; Panettieri, V.; Weber, L.; Eudaldo, T.; Ginjaume, M.; Ribas, M.

    2007-01-01

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10x10, 5x5, and 2x2 cm2) were studied in two phantom configurations and a bone equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2x2 cm2 field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity in 3%-5%. The rebuildup in the interface bone-water and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values within

  13. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities.

    Science.gov (United States)

    Carrasco, P; Jornet, N; Duch, M A; Panettieri, V; Weber, L; Eudaldo, T; Ginjaume, M; Ribas, M

    2007-08-01

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10 x 10, 5 x 5, and 2 x 2 cm2) were studied in two phantom configurations and a bone equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2 x 2 cm2 field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity in 3%-5%. The rebuildup in the interface bone-water and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values

  14. Performance comparison of weighted sum-minimum mean square error and virtual signal-to-interference plus noise ratio algorithms in simulated and measured channels

    DEFF Research Database (Denmark)

    Rahimi, Maryam; Nielsen, Jesper Ødum; Pedersen, Troels

    2014-01-01

    A comparison of the data rates achieved by two well-known algorithms, using simulated and real measured data, is presented. The algorithms maximise the data rate in a cooperative base station (BS) multiple-input-single-output scenario. The weighted sum-minimum mean square error algorithm could be used… in multiple-input-multiple-output scenarios, but it has lower performance than the virtual signal-to-interference plus noise ratio algorithm in theory and practice. A real measurement environment consisting of two BSs and two users has been studied to evaluate the simulation results…

  15. Dosimetric comparison of peripheral NSCLC SBRT using Acuros XB and AAA calculation algorithms.

    Science.gov (United States)

    Ong, Chloe C H; Ang, Khong Wei; Soh, Roger C X; Tin, Kah Ming; Yap, Jerome H H; Lee, James C L; Bragg, Christopher M

    2017-01-01

    There is a concern about dose calculation in highly heterogeneous environments such as the thorax region. This study compares the quality of treatment plans for peripheral non-small cell lung cancer (NSCLC) stereotactic body radiation therapy (SBRT) using 2 calculation algorithms, namely, the Eclipse Anisotropic Analytical Algorithm (AAA) and Acuros External Beam (AXB), for 3-dimensional conformal radiation therapy (3DCRT) and volumetric-modulated arc therapy (VMAT). Four-dimensional computed tomography (4DCT) data from 20 anonymized patients were studied using the Varian Eclipse planning system, AXB, and AAA version 10.0.28. A 3DCRT plan and a VMAT plan were generated using AAA and AXB with constant plan parameters for each patient. The prescription and dose constraints were benchmarked against the Radiation Therapy Oncology Group (RTOG) 0915 protocol. Planning parameters were compared statistically using Mann-Whitney U tests. Results showed that 3DCRT and VMAT plans have target coverage up to 8% lower when calculated using AXB as compared with AAA. The conformity index (CI) for AXB plans was 4.7% lower than for AAA plans, but was closer to unity, which indicates better target conformity. AXB produced plans with global maximum doses that were, on average, 2% hotter than AAA plans. Both 3DCRT and VMAT plans were able to achieve D95%. VMAT plans were shown to be more conformal (CI = 1.01) and were at least 3.2% and 1.5% lower in terms of PTV maximum and mean dose, respectively. There was no statistically significant difference in doses received by organs at risk (OARs) regardless of calculation algorithm or treatment technique. In general, the difference in tissue modeling between AXB and AAA is responsible for the differences in dose distribution between the two algorithms. AXB VMAT plans could be used to benefit patients receiving peripheral NSCLC SBRT. Copyright © 2017 American Association of Medical Dosimetrists. Published by Elsevier Inc. All rights

  16. Comparison of Different MPPT Algorithms with a Proposed One Using a Power Estimator for Grid Connected PV Systems

    Directory of Open Access Journals (Sweden)

    Manel Hlaili

    2016-01-01

    Full Text Available Photovoltaic (PV) energy is one of the most important energy sources since it is clean and inexhaustible. It is important to operate PV energy conversion systems at the maximum power point (MPP) to maximize the output energy of PV arrays. An MPPT control is necessary to extract maximum power from the PV arrays. In recent years, a large number of techniques have been proposed for tracking the maximum power point. This paper presents a comparison of different MPPT methods, proposes one that uses a power estimator, and analyses their suitability for systems which experience a wide range of operating conditions. The classic methods analysed, the incremental conductance (IncCond), perturbation and observation (P&O), and ripple correlation (RC) algorithms, are suitable and practical. Simulation results of a single-phase NPC grid-connected PV system operating with the aforementioned methods are presented to confirm the effectiveness of the scheme and algorithms. Simulation results verify the correct operation of the different MPPT methods and the proposed algorithm.
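
    A minimal perturb-and-observe (P&O) sketch is shown below to illustrate the class of MPPT methods being compared; the toy PV power curve and step size are assumptions, and the proposed power-estimator method itself is not reproduced.

      def pv_power(v):
          """Toy PV power curve with a single maximum near 30 V (illustrative only)."""
          return max(0.0, v * (8.0 - 0.003 * v ** 2))

      def perturb_and_observe(v0=20.0, step=0.5, iterations=200):
          """P&O: perturb the operating voltage and keep moving in the direction
          that increased power; reverse when the power drops."""
          v, p_prev, direction = v0, pv_power(v0), +1
          for _ in range(iterations):
              v += direction * step
              p = pv_power(v)
              if p < p_prev:
                  direction = -direction
              p_prev = p
          return v, p_prev

      v_mpp, p_mpp = perturb_and_observe()
      print(f"settled near V = {v_mpp:.1f} V, P = {p_mpp:.1f} W")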

  17. Comparison Study on Two Model-Based Adaptive Algorithms for SOC Estimation of Lithium-Ion Batteries in Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Yong Tian

    2014-12-01

    Full Text Available State of charge (SOC) estimation is essential to battery management systems in electric vehicles (EVs) to ensure the safe operation of batteries and to provide drivers with the remaining range of the EVs. A number of estimation algorithms have been developed to obtain an accurate SOC value because the SOC cannot be directly measured with sensors and is closely related to various factors, such as ambient temperature, current rate and battery aging. In this paper, two model-based adaptive algorithms, the adaptive unscented Kalman filter (AUKF) and the adaptive sliding mode observer (ASMO), are applied and compared in terms of convergence behavior, tracking accuracy, computational cost and estimation robustness against parameter uncertainties of the battery model in SOC estimation. Two typical driving cycles, the Dynamic Stress Test (DST) and the New European Driving Cycle (NEDC), are applied to evaluate the performance of the two algorithms. Comparison results show that the AUKF has merits in convergence ability and tracking accuracy with an accurate battery model, while the ASMO has lower computational cost and better estimation robustness against parameter uncertainties of the battery model.

  18. Comparison of multiobjective harmony search, cuckoo search and bat-inspired algorithms for renewable distributed generation placement

    Directory of Open Access Journals (Sweden)

    John E. Candelo-Becerra

    2015-07-01

    Full Text Available Electric power losses have a significant impact on the total costs of distribution networks. The use of renewable energy sources is a major alternative for reducing power losses and costs, while other important aspects such as voltage magnitudes and network congestion are also improved. However, determining the best location and size of renewable energy generators can be a challenging task due to the large number of possible combinations in the search space. Furthermore, multiobjective functions increase the complexity of the problem, and metaheuristics are preferred to find solutions in a relatively short time. This paper evaluates the performance of the cuckoo search (CS), harmony search (HS), and bat-inspired (BA) algorithms for the location and sizing of renewable distributed generation (RDG) in radial distribution networks, using a multiobjective function defined as minimizing the energy losses and the RDG costs. The metaheuristic algorithms were programmed in Matlab and tested using the 33-node radial distribution network. The three algorithms obtained similar results for the two objectives evaluated, finding points close to the best solutions in the Pareto front. Comparisons showed that the CS obtained the minimum results for most points evaluated, but the BA and the HS were close to the best solution.

  19. Testing a polarimetric cloud imager aboard research vessel Polarstern: comparison of color-based and polarimetric cloud detection algorithms.

    Science.gov (United States)

    Barta, András; Horváth, Gábor; Horváth, Ákos; Egri, Ádám; Blahó, Miklós; Barta, Pál; Bumke, Karl; Macke, Andreas

    2015-02-10

    Cloud cover estimation is an important part of routine meteorological observations. Cloudiness measurements are used in climate model evaluation, nowcasting solar radiation, parameterizing the fluctuations of sea surface insolation, and building energy transfer models of the atmosphere. Currently, the most widespread ground-based method to measure cloudiness is based on analyzing the unpolarized intensity and color distribution of the sky obtained by digital cameras. As a new approach, we propose that cloud detection can be aided by the additional use of skylight polarization measured by 180° field-of-view imaging polarimetry. In the fall of 2010, we tested such a novel polarimetric cloud detector aboard the research vessel Polarstern during expedition ANT-XXVII/1. One of our goals was to test the durability of the measurement hardware under the extreme conditions of a trans-Atlantic cruise. Here, we describe the instrument and compare the results of several different cloud detection algorithms, some conventional and some newly developed. We also discuss the weaknesses of our design and its possible improvements. The comparison with cloud detection algorithms developed for traditional nonpolarimetric full-sky imagers allowed us to evaluate the added value of polarimetric quantities. We found that (1) neural-network-based algorithms perform the best among the investigated schemes and (2) global information (the mean and variance of intensity), nonoptical information (e.g., sun-view geometry), and polarimetric information (e.g., the degree of polarization) improve the accuracy of cloud detection, albeit slightly.

  20. A comparison between anisotropic analytical and multigrid superposition dose calculation algorithms in radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Wu, Vincent W.C.; Tse, Teddy K.H.; Ho, Cola L.M.; Yeung, Eric C.Y.

    2013-01-01

    Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) by the Eclipse treatment planning system and multigrid superposition (MGS) by the XiO treatment planning system are 2 commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of 6 patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using their respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) soft tissue-bone boundary (Soft/Bone), (5) soft tissue-air boundary (Soft/Air), and (6) bone-air boundary (Bone/Air), were measured and compared using the mean absolute percentage error (MAPE), which was a function of the percentage dose deviations from MC. Besides, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than AAA in all types of cancers (p<0.001). With regards to body density combinations, the MAPE of AAA ranged from 1.8% (soft tissue) to 4.9% (Bone/Air), whereas that of MGS from 1.6% (air cavities) to 2.9% (Soft/Bone). The MAPEs of MGS (2.6%±2.1) were significantly lower than that of AAA (3.7%±2.5) in all tissue density combinations (p<0.001). The mean computation time of AAA for all treatment plans was significantly lower than that of the MGS (p<0.001). Both AAA and MGS algorithms demonstrated dose deviations of less than 4.0% in most clinical cases and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time

  1. Currency recognition using a smartphone: Comparison between color SIFT and gray scale SIFT algorithms

    Directory of Open Access Journals (Sweden)

    Iyad Abu Doush

    2017-10-01

    Full Text Available Banknote recognition means classifying currency (coins and paper) to the correct class. In this paper, we developed a dataset for the Jordanian currency. We then applied an automatic mobile recognition system on the dataset, using a smartphone and the scale-invariant feature transform (SIFT) algorithm. This is the first attempt, to the best of the authors' knowledge, to recognize both coins and paper banknotes on a smartphone using the SIFT algorithm. SIFT has been developed to be a highly robust and efficient local invariant feature descriptor. Color provides significant information and important values in the object description and matching tasks, and many objects cannot be classified correctly without their color features. We compared two approaches: a colored local invariant feature descriptor (color SIFT) approach and a gray image local invariant feature descriptor (gray SIFT) approach. The evaluation results show that the color SIFT approach outperforms the gray SIFT approach in terms of processing time and accuracy.
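
    The grayscale versus colour SIFT comparison can be sketched with OpenCV by describing the same keypoints on each colour channel and concatenating the descriptors. This is a generic channel-wise approximation rather than the authors' exact colour-SIFT implementation; it assumes an OpenCV build that provides cv2.SIFT_create, and the image paths in the usage comment are placeholders.

      import cv2
      import numpy as np

      sift = cv2.SIFT_create()

      def gray_sift(img_bgr):
          gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
          return sift.detectAndCompute(gray, None)

      def channelwise_color_sift(img_bgr):
          """Detect keypoints on the grayscale image, then describe each keypoint
          on every colour channel and concatenate the descriptors (assumes the
          channels yield descriptors for the same keypoints)."""
          gray = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2GRAY)
          kps = sift.detect(gray, None)
          descs = [sift.compute(channel, kps)[1] for channel in cv2.split(img_bgr)]
          return kps, np.hstack(descs)

      def match_count(d1, d2, ratio=0.75):
          """Lowe's ratio test on brute-force 2-NN matches."""
          matcher = cv2.BFMatcher(cv2.NORM_L2)
          good = [m for m, n in matcher.knnMatch(d1, d2, k=2) if m.distance < ratio * n.distance]
          return len(good)

      # usage (file names are placeholders):
      # img1, img2 = cv2.imread("note_query.jpg"), cv2.imread("note_reference.jpg")
      # _, g1 = gray_sift(img1); _, g2 = gray_sift(img2)
      # _, c1 = channelwise_color_sift(img1); _, c2 = channelwise_color_sift(img2)
      # print("gray matches:", match_count(g1, g2), "color matches:", match_count(c1, c2))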

  2. A comparison of thermal algorithms of fuel rod performance code systems

    International Nuclear Information System (INIS)

    Park, C. J.; Park, J. H.; Kang, K. H.; Ryu, H. J.; Moon, J. S.; Jeong, I. H.; Lee, C. Y.; Song, K. C.

    2003-11-01

    The goal of fuel rod performance analysis is to assess the robustness of a fuel rod together with its cladding material. Computer simulation of fuel rod performance has become an important part of designing and evaluating new nuclear fuels and claddings. To construct a computing code system for fuel rod performance, several algorithms of existing fuel rod performance code systems were compared and summarized as preliminary work. Among several code systems, FRAPCON and FEMAXI for LWRs, ELESTRES for CANDU reactors, and LIFE for fast reactors were reviewed. The thermal algorithms of these codes were investigated, including their methodologies and subroutines. This work will be utilized to construct a computing code system for dry-process fuel rod performance.

  3. Comparison of Multiobjective Evolutionary Algorithms for Operations Scheduling under Machine Availability Constraints

    Directory of Open Access Journals (Sweden)

    M. Frutos

    2013-01-01

    Full Text Available Many of the problems that arise in production systems can be handled with multiobjective techniques. One such problem is the scheduling of operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world job-shop contexts using three algorithms, NSGA-II, SPEA2, and IBEA. Using two performance indexes, hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be the better choice of tool since it yields more solutions on the approximate Pareto frontier.

  4. Comparisons of feature extraction algorithm based on unmanned aerial vehicle image

    Directory of Open Access Journals (Sweden)

    Xi Wenfei

    2017-07-01

    Full Text Available Feature point extraction technology has become a research hotspot in photogrammetry and computer vision. Commonly used point feature extraction operators include the SIFT, Forstner, Harris, and Moravec operators. Owing to their high spatial resolution, images from an unmanned aerial vehicle (UAV) differ from traditional aerial images. Based on these characteristics of UAV images, this paper uses the operators referred to above to extract feature points from building images, grassland images, shrubbery images, and vegetable greenhouse images. Through practical case analysis, the performance, advantages, disadvantages and adaptability of each algorithm are compared and analyzed by considering their speed and accuracy. Finally, suggestions on how to adapt the different algorithms to diverse environments are proposed.

  5. Comparison between beamforming and super resolution imaging algorithms for non-destructive evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Chengguang [College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha 410073, PR China and Department of Mechanical Engineering, University of Bristol, Queen's Building, University Walk, Bristol BS8 1TR (United Kingdom)]; Drinkwater, Bruce W. [Department of Mechanical Engineering, University of Bristol, Queen's Building, University Walk, Bristol BS8 1TR (United Kingdom)]

    2014-02-18

    In this paper the performance of total focusing method is compared with the widely used time-reversal MUSIC super resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.
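
    The total focusing method is a delay-and-sum beamformer applied to the full matrix of transmit-receive array signals; a compact sketch on synthetic full matrix capture data follows. The array geometry, wave speed, sampling and the single point scatterer are toy assumptions, and time-reversal MUSIC is not reproduced.

      import numpy as np

      def tfm_image(fmc, elem_x, t, c, grid_x, grid_z):
          """Total focusing method: for every image pixel, sum the full matrix
          capture (FMC) traces at the transmit + receive times of flight.
          fmc has shape [n_tx, n_rx, n_samples]; elements lie on z = 0."""
          dt, t0 = t[1] - t[0], t[0]
          image = np.zeros((len(grid_z), len(grid_x)))
          for iz, z in enumerate(grid_z):
              for ix, x in enumerate(grid_x):
                  tof = np.sqrt((elem_x - x) ** 2 + z ** 2) / c     # one-way times of flight
                  for itx in range(len(elem_x)):
                      idx = np.rint((tof[itx] + tof - t0) / dt).astype(int)
                      ok = idx < fmc.shape[2]
                      image[iz, ix] += fmc[itx, ok, idx[ok]].sum()
          return np.abs(image)

      # synthetic FMC for one point scatterer at (x, z) = (0 mm, 20 mm)
      c, fs = 6300.0, 50e6
      elem_x = (np.arange(16) - 7.5) * 0.6e-3
      t = np.arange(2048) / fs
      fmc = np.zeros((16, 16, t.size))
      for i in range(16):
          for j in range(16):
              tij = (np.hypot(elem_x[i], 20e-3) + np.hypot(elem_x[j], 20e-3)) / c
              fmc[i, j] = np.exp(-((t - tij) * fs / 10.0) ** 2)     # Gaussian echo
      img = tfm_image(fmc, elem_x, t, c,
                      grid_x=np.linspace(-5e-3, 5e-3, 21), grid_z=np.linspace(15e-3, 25e-3, 21))
      print(np.unravel_index(img.argmax(), img.shape))              # peaks near the scatterer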

  6. Comparison Study on the Battery SoC Estimation with EKF and UKF Algorithms

    Directory of Open Access Journals (Sweden)

    Hongwen He

    2013-09-01

    Full Text Available The battery state of charge (SoC), whose estimation is one of the basic functions of a battery management system (BMS), is a vital input parameter in the energy management and power distribution control of electric vehicles (EVs). In this paper, two methods, based on an extended Kalman filter (EKF) and an unscented Kalman filter (UKF), respectively, are proposed to estimate the SoC of a lithium-ion battery used in EVs. The lithium-ion battery is modeled with the Thevenin model, and the model parameters are identified based on experimental data and validated with the Beijing Driving Cycle. The state-space equations used for SoC estimation are then established. The SoC estimation results with the EKF and the UKF are compared in terms of accuracy and convergence. It is concluded that both algorithms perform well, while the UKF algorithm is better, with faster convergence and higher accuracy.
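
    A minimal sketch of Thevenin-model-based SoC estimation with an extended Kalman filter is given below (state = SoC and RC polarization voltage); the model parameters, the linear open-circuit-voltage curve and the noise settings are illustrative assumptions, and the UKF variant is not shown.

      import numpy as np

      # Thevenin parameters (illustrative values, not identified from real cells)
      Q, R0, R1, C1, dt = 2.3 * 3600, 0.010, 0.015, 2400.0, 1.0
      a = np.exp(-dt / (R1 * C1))

      def ocv(soc):                       # toy open-circuit-voltage curve
          return 3.0 + 1.2 * soc

      def ekf_soc(current, voltage, x0=(0.5, 0.0)):
          """EKF over state x = [SoC, RC polarization voltage V1], with
          terminal voltage Vt = OCV(SoC) - V1 - R0*I as the measurement."""
          x = np.array(x0, dtype=float)
          P = np.diag([0.1, 0.01])
          F = np.array([[1.0, 0.0], [0.0, a]])
          Qn, Rn = np.diag([1e-7, 1e-5]), 1e-3
          soc_hist = []
          for i_k, v_k in zip(current, voltage):
              # predict: coulomb counting + RC relaxation
              x = np.array([x[0] - i_k * dt / Q, a * x[1] + R1 * (1 - a) * i_k])
              P = F @ P @ F.T + Qn
              # update with the measured terminal voltage
              H = np.array([[1.2, -1.0]])               # d(Vt)/d(SoC), d(Vt)/d(V1)
              y = v_k - (ocv(x[0]) - x[1] - R0 * i_k)
              K = P @ H.T / (H @ P @ H.T + Rn)
              x = x + (K * y).ravel()
              P = (np.eye(2) - K @ H) @ P
              soc_hist.append(x[0])
          return np.array(soc_hist)

      # synthetic 1C discharge generated with the same model
      n = 600
      i_seq = np.full(n, 2.3)
      soc_true = 0.9 - np.cumsum(i_seq) * dt / Q
      v1 = np.zeros(n)
      for k in range(1, n):
          v1[k] = a * v1[k - 1] + R1 * (1 - a) * i_seq[k]
      v_meas = ocv(soc_true) - v1 - R0 * i_seq + np.random.default_rng(9).normal(0, 0.005, n)
      est = ekf_soc(i_seq, v_meas)
      print(f"true final SoC {soc_true[-1]:.3f}, EKF estimate {est[-1]:.3f}")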

  7. Comparison between beamforming and super resolution imaging algorithms for non-destructive evaluation

    International Nuclear Information System (INIS)

    Fan, Chengguang; Drinkwater, Bruce W.

    2014-01-01

    In this paper the performance of total focusing method is compared with the widely used time-reversal MUSIC super resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded

  8. A comparison of thermal algorithms of fuel rod performance code systems

    Energy Technology Data Exchange (ETDEWEB)

    Park, C. J.; Park, J. H.; Kang, K. H.; Ryu, H. J.; Moon, J. S.; Jeong, I. H.; Lee, C. Y.; Song, K. C.

    2003-11-01

    The goal of fuel rod performance analysis is to assess the robustness of a fuel rod together with its cladding material. Computer simulation of fuel rod performance has become an important part of designing and evaluating new nuclear fuels and claddings. To construct a computing code system for fuel rod performance, several algorithms of existing fuel rod performance code systems were compared and summarized as preliminary work. Among several code systems, FRAPCON and FEMAXI for LWRs, ELESTRES for CANDU reactors, and LIFE for fast reactors were reviewed. The thermal algorithms of these codes were investigated, including their methodologies and subroutines. This work will be utilized to construct a computing code system for dry-process fuel rod performance.

  9. Comparison of dose calculation algorithms for treatment planning in external photon beam therapy for clinical situations

    DEFF Research Database (Denmark)

    Knöös, Tommy; Wieslander, Elinore; Cozzi, Luca

    2006-01-01

    ...to the fields. A Monte Carlo calculated algorithm input data set and a benchmark set for a virtual linear accelerator have been produced, which have facilitated the analysis and interpretation of the results. The more sophisticated models in the type b group exhibit changes in both absorbed dose and its… distribution which are congruent with the simulations performed by the Monte Carlo-based virtual accelerator…

  10. Multivariate anomaly detection for Earth observations: a comparison of algorithms and feature extraction techniques

    Directory of Open Access Journals (Sweden)

    M. Flach

    2017-08-01

    Full Text Available Today, many processes at the Earth's surface are constantly monitored by multiple data streams. These observations have become central to advancing our understanding of vegetation dynamics in response to climate or land use change. Another set of important applications is monitoring effects of extreme climatic events, other disturbances such as fires, or abrupt land transitions. One important methodological question is how to reliably detect anomalies in an automated and generic way within multivariate data streams, which typically vary seasonally and are interconnected across variables. Although many algorithms have been proposed for detecting anomalies in multivariate data, only a few have been investigated in the context of Earth system science applications. In this study, we systematically combine and compare feature extraction and anomaly detection algorithms for detecting anomalous events. Our aim is to identify suitable workflows for automatically detecting anomalous patterns in multivariate Earth system data streams. We rely on artificial data that mimic typical properties and anomalies in multivariate spatiotemporal Earth observations like sudden changes in basic characteristics of time series such as the sample mean, the variance, changes in the cycle amplitude, and trends. This artificial experiment is needed as there is no gold standard for the identification of anomalies in real Earth observations. Our results show that a well-chosen feature extraction step (e.g., subtracting seasonal cycles, or dimensionality reduction) is more important than the choice of a particular anomaly detection algorithm. Nevertheless, we identify three detection algorithms (k-nearest neighbors mean distance, kernel density estimation, a recurrence approach) and their combinations (ensembles) that outperform other multivariate approaches as well as univariate extreme-event detection methods. Our results therefore provide an effective workflow to
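
    The workflow favoured by the study, a feature extraction step (such as subtracting the seasonal cycle) followed by a detector (such as the k-nearest-neighbour mean distance), can be sketched as below; the synthetic bivariate stream and the inserted anomaly are illustrative assumptions.

      import numpy as np
      from sklearn.neighbors import NearestNeighbors

      def subtract_seasonal_cycle(X, period=46):
          """Feature extraction: remove the mean seasonal cycle of each variable."""
          anomalies = X.copy()
          for phase in range(period):
              anomalies[phase::period] -= X[phase::period].mean(axis=0)
          return anomalies

      def knn_mean_distance_score(X, k=10):
          """Anomaly score = mean distance to the k nearest neighbours."""
          dist, _ = NearestNeighbors(n_neighbors=k + 1).fit(X).kneighbors(X)
          return dist[:, 1:].mean(axis=1)      # drop the zero self-distance

      # synthetic bivariate "Earth observation" stream with an inserted event
      rng = np.random.default_rng(7)
      t = np.arange(10 * 46)                   # ten 46-step "years"
      X = np.column_stack([np.sin(2 * np.pi * t / 46), np.cos(2 * np.pi * t / 46)])
      X = X + 0.1 * rng.normal(size=X.shape)
      X[200:205] += 1.5                        # abrupt shift in the mean
      scores = knn_mean_distance_score(subtract_seasonal_cycle(X))
      print(np.sort(np.argsort(scores)[-5:]))  # the highest scores flag the event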

  11. Actigraphy-based sleep estimation in adolescents and adults: a comparison with polysomnography using two scoring algorithms

    Directory of Open Access Journals (Sweden)

    Quante M

    2018-01-01

    Full Text Available Mirja Quante,1–3 Emily R Kaplan,2 Michael Cailler,2 Michael Rueschman,2 Rui Wang,2–5 Jia Weng,2 Elsie M Taveras,3,5,6 Susan Redline2,3,7 1Department of Neonatology, University of Tuebingen, Tuebingen, Germany; 2Division of Sleep and Circadian Disorders, Departments of Medicine and Neurology, Brigham and Women’s Hospital, Boston, MA, USA; 3Harvard Medical School, Boston, MA, USA; 4Department of Biostatistics, Harvard T. H. Chan School of Public Health, Boston, MA, USA; 5Department of Population Medicine, Harvard Medical School and The Harvard Pilgrim Health Care Institute, Boston, MA, USA; 6Division of General Academic Pediatrics, Department of Pediatrics, MassGeneral Hospital for Children, Boston, MA, USA; 7Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA Objectives: Actigraphy is widely used to estimate sleep–wake time, despite limited information regarding the comparability of different devices and algorithms. We compared estimates of sleep–wake times determined by two wrist actigraphs (GT3X+ versus Actiwatch Spectrum [AWS]) to in-home polysomnography (PSG), using two algorithms (Sadeh and Cole–Kripke) for the GT3X+ recordings. Subjects and methods: Participants included a sample of 35 healthy volunteers (13 school children and 22 adults, 46% male) from Boston, MA, USA. Twenty-two adults wore the GT3X+ and AWS simultaneously for at least five consecutive days and nights. In addition, actigraphy and PSG were concurrently measured in 12 of these adults and another 13 children over a single night. We used intraclass correlation coefficients (ICCs), epoch-by-epoch comparisons, paired t-tests, and Bland–Altman plots to determine the level of agreement between actigraphy and PSG, and differences between devices and algorithms. Results: Each actigraph showed comparable accuracy (0.81–0.86) for sleep–wake estimation compared to PSG. When analyzing data from the GT3X+, the Cole–Kripke algorithm was more

  12. A Comparison Study of Machine Learning Based Algorithms for Fatigue Crack Growth Calculation.

    Science.gov (United States)

    Wang, Hongxun; Zhang, Weifang; Sun, Fuqiang; Zhang, Wei

    2017-05-18

    The relationships between the fatigue crack growth rate (da/dN) and stress intensity factor range (ΔK) are not always linear even in the Paris region. The stress ratio effects on fatigue crack growth rate are diverse in different materials. However, most existing fatigue crack growth models cannot handle these nonlinearities appropriately. The machine learning method provides a flexible approach to the modeling of fatigue crack growth because of its excellent nonlinear approximation and multivariable learning ability. In this paper, a fatigue crack growth calculation method is proposed based on three different machine learning algorithms (MLAs): extreme learning machine (ELM), radial basis function network (RBFN) and genetic algorithm optimized back propagation network (GABP). The MLA based method is validated using testing data of different materials. The three MLAs are compared with each other as well as the classical two-parameter model (K* approach). The results show that the predictions of MLAs are superior to those of the K* approach in accuracy and effectiveness, and the ELM based algorithms show overall the best agreement with the experimental data out of the three MLAs, owing to its global optimization and extrapolation ability.
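
    For orientation, a Paris-type relation with a stress-ratio term is used below to generate synthetic (ΔK, R, da/dN) data, and an RBF-kernel regressor from scikit-learn stands in for the paper's ELM/RBFN/GABP models, which are not reproduced; all constants are illustrative assumptions.

      import numpy as np
      from sklearn.metrics import mean_squared_error
      from sklearn.svm import SVR   # RBF-kernel regressor as a stand-in for RBFN/ELM/GABP

      # synthetic Paris-type data with a stress-ratio effect (illustrative constants):
      # da/dN = C * (dK / (1 - R)**g)**m
      rng = np.random.default_rng(5)
      dK = rng.uniform(8.0, 40.0, 300)              # MPa*sqrt(m)
      R = rng.choice([0.1, 0.3, 0.5], 300)          # stress ratio
      C, m, g = 1e-11, 3.0, 0.5
      dadn = C * (dK / (1 - R) ** g) ** m * rng.lognormal(0.0, 0.05, 300)

      X = np.column_stack([np.log10(dK), R])        # learn log(da/dN) from (log dK, R)
      y = np.log10(dadn)
      train = rng.random(300) < 0.7
      model = SVR(kernel="rbf", C=10.0, epsilon=0.01).fit(X[train], y[train])
      rmse = mean_squared_error(y[~train], model.predict(X[~train])) ** 0.5
      print(f"test RMSE in log10(da/dN): {rmse:.3f}")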

  13. The association between neurological deficit in acute ischemic stroke and mean transit time. Comparison of four different perfusion MRI algorithms

    International Nuclear Information System (INIS)

    Schellinger, Peter D.; Latour, Lawrence L.; Chalela, Julio A.; Warach, Steven; Wu, Chen-Sen

    2006-01-01

    The purpose of our study was to identify the perfusion MRI (pMRI) algorithm which yields a volume of hypoperfused tissue that best correlates with the acute clinical deficit as quantified by the NIH Stroke Scale (NIHSS) and therefore reflects critically hypoperfused tissue. A group of 20 patients with a first acute stroke and stroke MRI within 24 h of symptom onset were retrospectively analyzed. Perfusion maps were derived using four different algorithms to estimate relative mean transit time (rMTT): (1) cerebral blood flow (CBF) arterial input function (AIF)/singular value decomposition (SVD); (2) area peak; (3) time to peak (TTP); and (4) first moment method. Lesion volumes based on five different MTT thresholds relative to contralateral brain were compared with each other and correlated with NIHSS score. The first moment method had the highest correlation with NIHSS (r=0.79, P<0.001) followed by the AIF/SVD method, both of which did not differ significantly from each other with regard to lesion volumes. TTP and area peak derived both volumes, which correlated poorly or only moderately with NIHSS scores. Data from our pilot study suggest that the first moment and the AIF/SVD method have advantages over the other algorithms in identifying the pMRI lesion volume that best reflects clinical severity. At present there seems to be no need for extensive postprocessing and arbitrarily defined delay thresholds in pMRI as the simple qualitative approach with a first moment algorithm is equally accurate. Larger sample sizes which allow comparison between imaging and clinical outcomes are needed to refine the choice of best perfusion parameter in pMRI. (orig.)
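
    The first-moment estimate referenced here takes, per voxel, the time-weighted mean of the contrast concentration curve, sum(t*C(t))/sum(C(t)); a minimal sketch on synthetic gamma-variate bolus curves (an assumption) is given below.

      import numpy as np

      def first_moment_mtt(conc, t):
          """Relative MTT per voxel as the first moment of the concentration
          time curve: sum(t * C(t)) / sum(C(t))."""
          num = (conc * t).sum(axis=-1)
          den = conc.sum(axis=-1)
          return np.where(den > 0, num / np.maximum(den, 1e-12), 0.0)

      # synthetic gamma-variate bolus curves for a four-voxel "image"
      t = np.arange(0.0, 60.0, 1.5)                 # seconds
      def gamma_variate(t0, alpha=3.0, beta=1.5):
          s = np.clip(t - t0, 0.0, None)
          return s ** alpha * np.exp(-s / beta)

      conc = np.stack([gamma_variate(t0) for t0 in (8.0, 10.0, 12.0, 16.0)])
      print(np.round(first_moment_mtt(conc, t), 1))  # later bolus arrival -> larger value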

  14. A systematic benchmark method for analysis and comparison of IMRT treatment planning algorithms.

    Science.gov (United States)

    Mayo, Charles S; Urie, Marcia M

    2003-01-01

    Tools and procedures for evaluating and comparing different intensity-modulated radiation therapy (IMRT) systems are presented. IMRT is increasingly in demand and there are numerous systems available commercially. These programs present dosimetrists and physicists with software that differs significantly from conventional planning systems, and the options often seem overwhelmingly complex to the new user. By creating geometric target volumes and critical normal tissues, the characteristics of the algorithms may be investigated and the influence of the different parameters explored. Overall optimization strategies of an algorithm may be characterized by treating a square target volume (TV) with two perpendicular beams, with and without heterogeneities. A half-donut (hemi-annulus) TV with a "donut hole" (central cylinder) critical normal tissue (CNT) on a CT of a simulated quality assurance phantom is suggested as a good geometry for exploring the IMRT algorithm parameters. Using this geometry, an order of varying parameters is suggested. The first step is to determine the effects of the number of stratifications of optimized intensity fluence on the resulting dose distribution, and to select a fixed number of stratifications for further studies. To characterize the dose distributions, a dose-homogeneity index (DHI) is defined as the ratio of the dose received by 90% of the volume to the minimum dose received by the "hottest" 10% of the volume. The next step is to explore the effects of priority and penalty on both the TV and the CNT. Then, with these parameters chosen and fixed, the effects of varying the number of beams can be examined. In addition to the dose distributions (and DHI), the number of subfields and the number of monitor units required for different numbers of stratifications and beams can be evaluated.
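
    The DHI defined above is simply D90/D10 of the target dose distribution. A minimal sketch under that reading, using percentiles of a voxel dose array (synthetic doses, equal voxel volumes assumed):

```python
# Dose-homogeneity index (DHI) as defined in the abstract: the dose received
# by 90% of the target volume divided by the minimum dose received by the
# "hottest" 10% of the volume, i.e. D90 / D10.
import numpy as np

def dhi(doses):
    """doses: 1-D array of voxel doses inside the target (equal voxel size assumed)."""
    d90 = np.percentile(doses, 10)   # 90% of voxels receive at least this dose
    d10 = np.percentile(doses, 90)   # the hottest 10% receive at least this dose
    return d90 / d10

doses = np.random.default_rng(1).normal(60.0, 2.0, 10_000)  # synthetic target doses (Gy)
print(f"DHI = {dhi(doses):.3f}")
```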

  15. A comparison of three speaker-intrinsic vowel formant frequency normalization algorithms for sociophonetics

    DEFF Research Database (Denmark)

    Fabricius, Anne; Watt, Dominic; Johnson, Daniel Ezra

    2009-01-01

    This paper evaluates a speaker-intrinsic vowel formant frequency normalization algorithm initially proposed in Watt & Fabricius (2002). We compare how well this routine, known as the S-centroid procedure, performs as a sociophonetic research tool in three ways: reducing variance in area ratios ... using data from RP and Aberdeen English (northeast Scotland). We conclude that, for the data examined here, the S-centroid W&F procedure performs at least as well as the two most recognized speaker-intrinsic, vowel-extrinsic, formant-intrinsic normalization methods, Lobanov's (1971) z-score procedure and Nearey's ...
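
    Of the baseline methods named here, Lobanov's z-score procedure is the simplest to reproduce: each formant is standardized within speaker. A minimal sketch with hypothetical measurements (the S-centroid procedure itself, which normalizes by a speaker-specific vowel-space centroid, is not implemented here):

```python
# Lobanov (1971) z-score vowel formant normalization:
# F_norm = (F - speaker mean) / speaker standard deviation, per formant.
import pandas as pd

# Hypothetical long-format token measurements (Hz).
df = pd.DataFrame({
    "speaker": ["s1", "s1", "s1", "s2", "s2", "s2"],
    "vowel":   ["i",  "a",  "u",  "i",  "a",  "u"],
    "F1":      [300., 750., 350., 340., 820., 390.],
    "F2":      [2300., 1300., 900., 2500., 1400., 950.],
})

for formant in ("F1", "F2"):
    grouped = df.groupby("speaker")[formant]
    df[formant + "_lob"] = (df[formant] - grouped.transform("mean")) / grouped.transform("std")

print(df)
```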

  16. Comparison of Different Classification Algorithms for the Detection of User's Interaction with Windows in Office Buildings

    DEFF Research Database (Denmark)

    Markovic, Romana; Wolf, Sebastian; Cao, Jun

    2017-01-01

    Occupant behavior, in terms of interactions with windows and heating systems, is seen as one of the main sources of discrepancy between predicted and measured heating, ventilation and air conditioning (HVAC) building energy consumption. This work therefore analyzes the performance of several classification algorithms for detecting occupants' interactions with windows, while taking the imbalanced properties of the available data set into account. The tested methods include support vector machines (SVM), random forests, and their combination with dynamic Bayesian networks (DBN). The results will show ...

  17. A Comparison of Selected Modifications of the Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Michala Jakubcová

    2014-01-01

    Full Text Available We compare 27 modifications of the original particle swarm optimization (PSO algorithm. The analysis evaluated nine basic PSO types, which differ according to the swarm evolution as controlled by various inertia weights and constriction factor. Each of the basic PSO modifications was analyzed using three different distributed strategies. In the first strategy, the entire swarm population is considered as one unit (OC-PSO, the second strategy periodically partitions the population into equally large complexes according to the particle’s functional value (SCE-PSO, and the final strategy periodically splits the swarm population into complexes using random permutation (SCERand-PSO. All variants are tested using 11 benchmark functions that were prepared for the special session on real-parameter optimization of CEC 2005. It was found that the best modification of the PSO algorithm is a variant with adaptive inertia weight. The best distribution strategy is SCE-PSO, which gives better results than do OC-PSO and SCERand-PSO for seven functions. The sphere function showed no significant difference between SCE-PSO and SCERand-PSO. It follows that a shuffling mechanism improves the optimization process.
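
    The variant found best above uses an adaptive inertia weight. As an illustration (a sketch only; the exact adaptive scheme and distributed SCE strategy of the paper are not reproduced), the snippet below runs a basic single-complex PSO with a linearly decreasing inertia weight on the sphere benchmark mentioned in the abstract.

```python
# Minimal particle swarm optimization with a linearly decreasing inertia
# weight (one common "adaptive" scheme), minimizing the sphere function.
import numpy as np

def pso_sphere(dim=10, n_particles=30, iters=200, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = -100.0, 100.0
    x = rng.uniform(lo, hi, (n_particles, dim))          # positions
    v = np.zeros_like(x)                                 # velocities
    pbest, pbest_f = x.copy(), np.sum(x ** 2, axis=1)    # personal bests
    gbest = pbest[np.argmin(pbest_f)].copy()             # global best

    c1 = c2 = 2.0
    for it in range(iters):
        w = 0.9 - 0.5 * it / iters                       # inertia: 0.9 -> 0.4
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, lo, hi)
        f = np.sum(x ** 2, axis=1)                       # sphere objective
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        gbest = pbest[np.argmin(pbest_f)].copy()
    return gbest, pbest_f.min()

best, best_f = pso_sphere()
print(f"best sphere value after 200 iterations: {best_f:.3e}")
```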

  18. Comparison of algorithms of testing for use in automated evaluation of sensation.

    Science.gov (United States)

    Dyck, P J; Karnes, J L; Gillen, D A; O'Brien, P C; Zimmerman, I R; Johnson, D M

    1990-10-01

    Estimates of vibratory detection threshold may be used to detect, characterize, and follow the course of sensory abnormality in neurologic disease. The approach is especially useful in epidemiologic studies and controlled clinical trials. We studied which algorithm of testing and finding threshold should be used in automatic systems by comparing algorithms and stimulus conditions for the index finger of healthy subjects and for the great toe of patients with mild neuropathy. Appearance thresholds obtained by linear ramps increasing at a rate of less than 4.15 microns/sec provided accurate and repeatable thresholds compared with thresholds obtained by forced-choice testing. These rates would be acceptable if only sensitive sites were studied, but they were too slow for use in automatic testing of insensitive parts. Appearance thresholds obtained by fast linear rates (4.15 or 16.6 microns/sec) overestimated threshold, especially for sensitive parts. Use of the mean of appearance and disappearance thresholds, with the stimulus increasing exponentially at rates of 0.5 or 1.0 just noticeable difference (JND) units per second and with null stimuli interspersed (Békésy testing with null stimuli), provided accurate, repeatable, and fast estimates of threshold for sensitive parts. Despite the good performance of Békésy testing, we prefer forced choice for evaluating the sensation of patients with neuropathy.

  19. A Research on the Comparison of the Multiple Intelligence Types of the Candidates Who Succeeded and Failed in the Entrance Exams of Physical Education and Sports School

    Directory of Open Access Journals (Sweden)

    Murat KUL

    2014-07-01

    Full Text Available The purpose of this study was to compare the Multiple Intelligence Areas of candidates who took the special aptitude test of a Physical Education and Sports School, contrasting those who qualified for registration with those who did not. A survey (scanning) model was used. Of the 785 candidates who applied to the Bartin University School of Physical Education and Sports Special Ability Test for the 2013-2014 academic year, 536 volunteer candidates with a mean age of 21.15 ± 2.66 constituted the sample. As data collection tools, a personal information form and the "Multiple Intelligences Inventory" developed by Özden (2003) for the identification of multiple intelligences were applied; the reliability coefficient was found to be .96. Data were analyzed in SPSS using descriptive statistics (frequency, mean, standard deviation) and, given the normal distribution of the data, the independent-samples t-test. The findings show a statistically significant difference in the "Bodily-Kinesthetic Intelligence" area of Multiple Intelligences, with successful candidates scoring higher on average than unsuccessful candidates. A statistically significant difference was also observed in "Social-Interpersonal Intelligence" between candidates who qualified for registration and those who did not; candidates who qualified showed more dominant features in this area than the others. As a result, for the "Verbal-Linguistic Intelligence", "Logical-Mathematical Intelligence", "Musical-Rhythmic Intelligence", "Bodily-Kinesthetic Intelligence" and "Social-Interpersonal Intelligence" areas of Multiple Intelligence, candidates who participated in Physical Education ...

  20. Determining gestational age for public health care users in Brazil: comparison of methods and algorithm creation

    Directory of Open Access Journals (Sweden)

    Pereira Ana Paula Esteves

    2013-02-01

    Full Text Available Abstract Background A valid, accurate method for determining gestational age (GA) is crucial in classifying early and late prematurity, and it is a relevant issue in perinatology. This study aimed at assessing the validity of different measures for approximating GA, and it provides an insight into the development of algorithms that can be adopted in places with characteristics similar to Brazil's. A follow-up study was carried out in two cities in southeast Brazil. Participants were interviewed in the first trimester of pregnancy and in the postpartum period, with a final sample of 1483 participants after exclusions. The distributions of GA estimates at birth using ultrasound (US) at 21–28 weeks, US at 29+ weeks, the last menstrual period (LMP), and the Capurro method were compared with GA estimates at birth using the reference US (at 7–20 weeks of gestation). Kappa, sensitivity, and specificity were calculated for preterm (<37 weeks) and post-term (≥42 weeks) birth rates. The differences in days between the GA estimates from the reference US and the LMP, and between the reference US and the Capurro method, were evaluated in terms of maternal and infant characteristics, respectively. Results For prematurity, US at 21–28 weeks had the highest sensitivity (0.84) and the Capurro method the highest specificity (0.97). For postmaturity, US at 21–28 weeks and the Capurro method had a very high sensitivity (0.98). All methods of GA estimation had a very low specificity (≤0.50) for postmaturity. GA estimates at birth with the algorithm and the reference US produced very similar results, with a preterm birth rate of 12.5%. Conclusions In countries such as Brazil, where there is less accurate information about the LMP and lower coverage of early obstetric US examinations, we recommend the development of algorithms that make the best use of the available information, using methodological strategies to reduce the chance of errors in GA. Thus, this study calls attention to the care needed ...

  1. On the comparison of perturbation-iteration algorithm and residual power series method to solve fractional Zakharov-Kuznetsov equation

    Science.gov (United States)

    Şenol, Mehmet; Alquran, Marwan; Kasmaei, Hamed Daei

    2018-06-01

    In this paper, we present an analytic-approximate solution of the time-fractional Zakharov-Kuznetsov equation. This model describes the behavior of weakly nonlinear ion acoustic waves in a plasma containing cold ions and hot isothermal electrons in the presence of a uniform magnetic field. Fractional derivatives are defined in the Caputo sense. The perturbation-iteration algorithm (PIA) and the residual power series method (RPSM) are applied to solve this equation successfully. A convergence analysis is also presented for both methods. Numerical results are given and compared with the exact solutions. The comparison reveals that both methods are competitive, powerful, reliable, simple to use, and ready to be applied to a wide range of fractional partial differential equations.

  2. LC HCAL Absorber And Active Media Comparisons Using a Particle-Flow Algorithm

    International Nuclear Information System (INIS)

    Magill, Steve; Kuhlmann, S.

    2006-01-01

    We compared Stainless Steel (SS) to Tungsten (W) as absorber for the HCAL in simulation using single particles (pions) and a Particle-Flow Algorithm applied to e+e- → Z → qqbar events. We then used the PFA to evaluate the performance characteristics of a LC HCAL using W absorber and comparing scintillator and RPC as active media. The W/Scintillator HCAL performs better than the SS/Scintillator version due to finer λ_I sampling and narrower showers in the dense absorber. The W/Scintillator HCAL performs better than the W/RPC HCAL except in the number of unused hits in the PFA. Since this represents the confusion term in the PFA response, additional tuning and optimization of a W/RPC HCAL might significantly improve this HCAL configuration.

  3. Vehicle Routing with Three-dimensional Container Loading Constraints—Comparison of Nested and Joint Algorithms

    Science.gov (United States)

    Koloch, Grzegorz; Kaminski, Bogumil

    2010-10-01

    In this paper we examine a modification of the classical Vehicle Routing Problem (VRP) in which the shapes of the transported cargo are accounted for. This problem, known as the three-dimensional VRP with loading constraints (3D-VRP), is appropriate when the transported commodities are not perfectly divisible but have fixed and heterogeneous dimensions. Restrictions on allowable cargo positionings are also considered. These restrictions are derived from business practice, and they extend the baseline 3D-VRP formulation considered by Koloch and Kaminski (2010). In particular, we investigate how the additional restrictions influence the relative performance of two proposed optimization algorithms: the nested and the joint one. The performance of both methods is compared on artificial problems and on a large-scale real-life case study.

  4. Comparison between genetic algorithm and self organizing map to detect botnet network traffic

    Science.gov (United States)

    Yugandhara Prabhakar, Shinde; Parganiha, Pratishtha; Madhu Viswanatham, V.; Nirmala, M.

    2017-11-01

    In the cyber security world, botnet attacks are increasing, and detecting botnets is a challenging task. A botnet is a group of computers connected in a coordinated fashion to carry out malicious activities. Many techniques have been developed and used to detect and prevent botnet traffic and attacks. In this paper, a comparative study of the Genetic Algorithm (GA) and the Self Organizing Map (SOM) for detecting botnet network traffic is presented. Both are soft computing techniques and are used in this paper as data analytics systems. GA is based on the natural evolution process, while SOM is a type of artificial neural network that uses unsupervised learning; it classifies the data according to its neurons. A sample of the KDD99 dataset is used as input to both GA and SOM.

  5. Randomized Crossover Comparison of Personalized MPC and PID Control Algorithms for the Artificial Pancreas.

    Science.gov (United States)

    Pinsker, Jordan E; Lee, Joon Bok; Dassau, Eyal; Seborg, Dale E; Bradley, Paige K; Gondhalekar, Ravi; Bevier, Wendy C; Huyett, Lauren; Zisser, Howard C; Doyle, Francis J

    2016-07-01

    To evaluate two widely used control algorithms for an artificial pancreas (AP) under nonideal but comparable clinical conditions. After a pilot safety and feasibility study (n = 10), closed-loop control (CLC) was evaluated in a randomized, crossover trial of 20 additional adults with type 1 diabetes. Personalized model predictive control (MPC) and proportional integral derivative (PID) algorithms were compared in supervised 27.5-h CLC sessions. Challenges included overnight control after a 65-g dinner, response to a 50-g breakfast, and response to an unannounced 65-g lunch. Boluses for the announced dinner and breakfast meals were given at mealtime. The primary outcome was time in the glucose range 70-180 mg/dL. Mean time in range 70-180 mg/dL was greater for MPC than for PID (74.4 vs. 63.7%, P = 0.020). Mean glucose was also lower for MPC than PID during the entire trial duration (138 vs. 160 mg/dL, P = 0.012) and 5 h after the unannounced 65-g meal (181 vs. 220 mg/dL, P = 0.019). There was no significant difference in time with glucose below 70 mg/dL. This randomized crossover comparison of MPC and PID control for the AP indicates that MPC performed particularly well, achieving nearly 75% time in the target range, including the unannounced meal. Although both forms of CLC provided safe and effective glucose management, MPC performed as well as or better than PID in all metrics. © 2016 by the American Diabetes Association. Readers may use this article as long as the work is properly cited, the use is educational and not for profit, and the work is not altered.

  6. Comparison of algorithms for determination of rotation measure and Faraday structure. I. 1100–1400 MHz

    International Nuclear Information System (INIS)

    Sun, X. H.; Akahori, Takuya; Anderson, C. S.; Farnes, J. S.; O’Sullivan, S. P.; Rudnick, L.; O’Brien, T.; Bell, M. R.; Bray, J. D.; Scaife, A. M. M.; Ideguchi, S.; Kumazaki, K.; Stepanov, R.; Stil, J.; Wolleben, M.; Takahashi, K.; Weeren, R. J. van

    2015-01-01

    Faraday rotation measures (RMs) and more general Faraday structures are key parameters for studying cosmic magnetism and are also sensitive probes of faint ionized thermal gas. A definition of which derived quantities are required for various scientific studies is needed, and the challenges in determining Faraday structures must be addressed. A wide variety of algorithms has been proposed to reconstruct these structures. In preparation for the Polarization Sky Survey of the Universe's Magnetism (POSSUM) to be conducted with the Australian Square Kilometre Array Pathfinder and the ongoing Galactic Arecibo L-band Feeds Array Continuum Transit Survey (GALFACTS), we ran a Faraday structure determination data challenge to benchmark the currently available algorithms, including Faraday synthesis (previously called RM synthesis in the literature), wavelet, compressive sampling, and QU-fitting. The input models include sources with one Faraday thin component, two Faraday thin components, and one Faraday thick component. The frequency set is similar to POSSUM/GALFACTS, with a 300 MHz bandwidth from 1.1 to 1.4 GHz. We define three figures of merit motivated by the underlying science: (1) an average RM weighted by polarized intensity, RM_wtd; (2) the separation Δϕ of two Faraday components; and (3) the reduced chi-squared χ_r². Based on the current test data with a signal-to-noise ratio of about 32, we find the following. (1) When only one Faraday thin component is present, most methods perform as expected, with occasional failures where two components are incorrectly found. (2) For two Faraday thin components, QU-fitting routines perform the best, with errors close to the theoretical ones for RM_wtd but with significantly higher errors for Δϕ. All other methods, including standard Faraday synthesis, frequently identify only one component when Δϕ is below or near the width of the Faraday point-spread function. (3) No methods as currently implemented work well ...
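
    To illustrate the basic Faraday (RM) synthesis step that several of the benchmarked methods build on, the sketch below evaluates F(ϕ) ≈ (1/N) Σᵢ P(λᵢ²) exp(−2iϕ(λᵢ² − λ₀²)) for a synthetic Faraday-thin source over the 1.1–1.4 GHz band quoted above; channel count and the injected RM are assumed for demonstration.

```python
# Basic Faraday (RM) synthesis on a synthetic single Faraday-thin source.
import numpy as np

c = 2.998e8
freqs = np.linspace(1.1e9, 1.4e9, 300)          # channel frequencies (Hz)
lam2 = (c / freqs) ** 2
lam2_0 = lam2.mean()

true_rm = 50.0                                  # rad/m^2, synthetic source
P = np.exp(2j * true_rm * lam2)                 # complex polarization Q + iU

phi = np.linspace(-500, 500, 2001)              # Faraday depth grid (rad/m^2)
F = np.array([np.mean(P * np.exp(-2j * p * (lam2 - lam2_0))) for p in phi])

print(f"peak of |F(phi)| at phi = {phi[np.argmax(np.abs(F))]:.1f} rad/m^2")
```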

  7. A New Retrieval Algorithm for OMI NO2: Tropospheric Results and Comparisons with Measurements and Models

    Science.gov (United States)

    Swartz, W. H.; Bucesla, E. J.; Lamsal, L. N.; Celarier, E. A.; Krotkov, N. A.; Bhartia, P. K.; Strahan, S. E.; Gleason, J. F.; Herman, J.; Pickering, K.

    2012-01-01

    Nitrogen oxides (NOx = NO + NO2) are important atmospheric trace constituents that impact tropospheric air pollution chemistry and air quality. We have developed a new NASA algorithm for the retrieval of stratospheric and tropospheric NO2 vertical column densities using measurements from the nadir-viewing Ozone Monitoring Instrument (OMI) on NASA's Aura satellite. The new products rely on an improved approach to stratospheric NO2 column estimation and stratosphere-troposphere separation, and on a new monthly NO2 climatology based on the NASA Global Modeling Initiative chemistry-transport model. The retrieval does not rely on daily model profiles, minimizing the influence of a priori information. We evaluate the retrieved tropospheric NO2 columns using surface in situ (e.g., AQS/EPA), ground-based (e.g., DOAS), and airborne measurements (e.g., DISCOVER-AQ). The new, improved OMI tropospheric NO2 product is available at high spatial resolution for the years 2005-present. We believe that this product is valuable for the evaluation of chemistry-transport models, examining the spatial and temporal patterns of NOx emissions, constraining top-down NOx inventories, and estimating NOx lifetimes.

  8. A Comparison of Advanced Regression Algorithms for Quantifying Urban Land Cover

    Directory of Open Access Journals (Sweden)

    Akpona Okujeni

    2014-07-01

    Full Text Available Quantitative methods for mapping sub-pixel land cover fractions are gaining increasing attention, particularly with regard to upcoming hyperspectral satellite missions. We evaluated five advanced regression algorithms combined with synthetically mixed training data for quantifying urban land cover from HyMap data at 3.6 and 9 m spatial resolution. Methods included support vector regression (SVR, kernel ridge regression (KRR, artificial neural networks (NN, random forest regression (RFR and partial least squares regression (PLSR. Our experiments demonstrate that both kernel methods SVR and KRR yield high accuracies for mapping complex urban surface types, i.e., rooftops, pavements, grass- and tree-covered areas. SVR and KRR models proved to be stable with regard to the spatial and spectral differences between both images and effectively utilized the higher complexity of the synthetic training mixtures for improving estimates for coarser resolution data. Observed deficiencies mainly relate to known problems arising from spectral similarities or shadowing. The remaining regressors either revealed erratic (NN or limited (RFR and PLSR performances when comprehensively mapping urban land cover. Our findings suggest that the combination of kernel-based regression methods, such as SVR and KRR, with synthetically mixed training data is well suited for quantifying urban land cover from imaging spectrometer data at multiple scales.
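
    As a toy illustration of the "regression on synthetically mixed training data" idea described above (not the HyMap experiment), the sketch below trains support vector regression on synthetic linear mixtures of two assumed endmember spectra and predicts a cover fraction for an unseen mixed pixel.

```python
# Illustrative sketch: SVR trained on synthetically mixed spectra to estimate
# the fraction of one cover type per pixel (all spectra are synthetic).
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(0)
n_bands = 50
roof = rng.uniform(0.2, 0.6, n_bands)    # assumed "rooftop" endmember spectrum
veg = rng.uniform(0.1, 0.5, n_bands)     # assumed "vegetation" endmember spectrum

# Synthetic linear mixtures with noise serve as training data.
frac = rng.uniform(0.0, 1.0, 500)
X = frac[:, None] * roof + (1 - frac[:, None]) * veg + rng.normal(0, 0.01, (500, n_bands))

model = SVR(kernel="rbf", C=10.0, epsilon=0.01)
model.fit(X, frac)

# Predict the rooftop fraction of an unseen mixed pixel.
test_pixel = 0.7 * roof + 0.3 * veg
print(f"estimated rooftop fraction: {model.predict(test_pixel[None, :])[0]:.2f}")
```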

  9. Comparison study of noise reduction algorithms in dual energy chest digital tomosynthesis

    Science.gov (United States)

    Lee, D.; Kim, Y.-S.; Choi, S.; Lee, H.; Choi, S.; Kim, H.-J.

    2018-04-01

    Dual energy chest digital tomosynthesis (CDT) is a recently developed medical technique that takes advantage of both tomosynthesis and dual energy X-ray images. However, quantum noise, which occurs in dual energy X-ray images, strongly interferes with diagnosis in various clinical situations. Therefore, noise reduction is necessary in dual energy CDT. In this study, noise-compensating algorithms, including a simple smoothing of high-energy images (SSH) and anti-correlated noise reduction (ACNR), were evaluated in a CDT system. We used a newly developed prototype CDT system and anthropomorphic chest phantom for experimental studies. The resulting images demonstrated that dual energy CDT can selectively image anatomical structures, such as bone and soft tissue. Among the resulting images, those acquired with ACNR showed the best image quality. Both coefficient of variation and contrast to noise ratio (CNR) were the highest in ACNR among the three different dual energy techniques, and the CNR of bone was significantly improved compared to the reconstructed images acquired at a single energy. This study demonstrated the clinical value of dual energy CDT and quantitatively showed that ACNR is the most suitable among the three developed dual energy techniques, including standard log subtraction, SSH, and ACNR.

  10. Clinical implications in the use of the PBC algorithm versus the AAA by comparison of different NTCP models/parameters.

    Science.gov (United States)

    Bufacchi, Antonella; Nardiello, Barbara; Capparella, Roberto; Begnozzi, Luisa

    2013-07-04

    Retrospective analysis of 3D clinical treatment plans was performed to investigate the possible qualitative clinical consequences of the use of PBC versus AAA. The 3D dose distributions of 80 treatment plans at four different tumour sites, produced using the PBC algorithm, were recalculated using AAA with the same number of monitor units provided by PBC and clinically delivered to each patient; the consequences of the difference for the dose-effect relations for normal tissue injury were studied by comparing different NTCP models/parameters extracted from a review of published studies. In this study the AAA dose calculation is considered the benchmark. The paired Student t-test was used for statistical comparison of all results obtained from the use of the two algorithms. In the prostate plans, the AAA predicted a lower NTCP value (NTCP_AAA) for the risk of late rectal bleeding for each of the seven combinations of NTCP parameters; the maximum mean decrease was 2.2%. In the head-and-neck treatments, each combination of parameters used for the risk of xerostomia from irradiation of the parotid glands yielded a lower NTCP_AAA, which varied from 12.8% (sd=3.0%) to 57.5% (sd=4.0%), while with the PBC algorithm NTCP_PBC ranged from 15.2% (sd=2.7%) to 63.8% (sd=3.8%), according to the combination of parameters used; the differences were statistically significant. NTCP_AAA for the risk of radiation pneumonitis in the lung treatments was also found to be lower than NTCP_PBC for each of the eight sets of NTCP parameters; the maximum mean decrease was 4.5%. A mean increase of 4.3% was found when NTCP_AAA was calculated with parameters evaluated from dose distributions calculated by a convolution-superposition (CS) algorithm. A markedly different pattern was observed for the risk of developing pneumonitis following breast treatments: the AAA predicted a higher NTCP value. The mean NTCP_AAA varied from 0.2% (sd = 0.1%) to 2.1% (sd = 0.3%), while the mean NTCP_PBC
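
    For readers unfamiliar with NTCP model/parameter sets of the kind compared here, the Lyman-Kutcher-Burman (LKB) model is one common choice; the abstract does not state which models the authors used, so the sketch below is generic, and the parameter values are purely illustrative (in the range often quoted for late rectal bleeding), not those of the study.

```python
# Generic Lyman-Kutcher-Burman (LKB) NTCP model evaluated on a toy DVH.
import numpy as np
from scipy.stats import norm

def gEUD(doses, volumes, n):
    """Generalized equivalent uniform dose from a differential DVH."""
    v = np.asarray(volumes) / np.sum(volumes)
    return np.sum(v * np.asarray(doses) ** (1.0 / n)) ** n

def lkb_ntcp(doses, volumes, TD50, m, n):
    """LKB NTCP = Phi((gEUD - TD50) / (m * TD50))."""
    t = (gEUD(doses, volumes, n) - TD50) / (m * TD50)
    return norm.cdf(t)

# Toy rectum-like differential DVH: dose bins (Gy) and relative volumes.
doses = np.array([10.0, 30.0, 50.0, 65.0, 75.0])
volumes = np.array([0.35, 0.25, 0.20, 0.15, 0.05])
# Illustrative parameter set only (TD50, m, n are assumptions).
print(f"NTCP = {lkb_ntcp(doses, volumes, TD50=76.9, m=0.13, n=0.09):.3%}")
```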

  11. Clinical implications in the use of the PBC algorithm versus the AAA by comparison of different NTCP models/parameters

    International Nuclear Information System (INIS)

    Bufacchi, Antonella; Nardiello, Barbara; Capparella, Roberto; Begnozzi, Luisa

    2013-01-01

    Retrospective analysis of 3D clinical treatment plans was performed to investigate the possible qualitative clinical consequences of the use of PBC versus AAA. The 3D dose distributions of 80 treatment plans at four different tumour sites, produced using the PBC algorithm, were recalculated using AAA with the same number of monitor units provided by PBC and clinically delivered to each patient; the consequences of the difference for the dose-effect relations for normal tissue injury were studied by comparing different NTCP models/parameters extracted from a review of published studies. In this study the AAA dose calculation is considered the benchmark. The paired Student t-test was used for statistical comparison of all results obtained from the use of the two algorithms. In the prostate plans, the AAA predicted a lower NTCP value (NTCP_AAA) for the risk of late rectal bleeding for each of the seven combinations of NTCP parameters; the maximum mean decrease was 2.2%. In the head-and-neck treatments, each combination of parameters used for the risk of xerostomia from irradiation of the parotid glands yielded a lower NTCP_AAA, which varied from 12.8% (sd=3.0%) to 57.5% (sd=4.0%), while with the PBC algorithm NTCP_PBC ranged from 15.2% (sd=2.7%) to 63.8% (sd=3.8%), according to the combination of parameters used; the differences were statistically significant. NTCP_AAA for the risk of radiation pneumonitis in the lung treatments was also found to be lower than NTCP_PBC for each of the eight sets of NTCP parameters; the maximum mean decrease was 4.5%. A mean increase of 4.3% was found when NTCP_AAA was calculated with parameters evaluated from dose distributions calculated by a convolution-superposition (CS) algorithm. A markedly different pattern was observed for the risk of developing pneumonitis following breast treatments: the AAA predicted a higher NTCP value. The mean NTCP_AAA varied from 0.2% (sd = 0.1%) to 2.1% (sd = 0.3%), while the

  12. A genetic algorithm for the optimization of fiber angles in composite laminates

    International Nuclear Information System (INIS)

    Hwang, Shun Fa; Hsu, Ya Chu; Chen, Yuder

    2014-01-01

    A genetic algorithm for the optimization of composite laminates is proposed in this work. The well-known roulette selection criterion, one-point crossover operator, and uniform mutation operator are used in this genetic algorithm to create the next population. To improve the hill-climbing capability of the algorithm, adaptive mechanisms designed to adjust the probabilities of the crossover and mutation operators are included, and the elite strategy is enforced to ensure the quality of the optimum solution. The proposed algorithm includes a new operator called the elite comparison, which compares and uses the differences in the design variables of the two best solutions to find possible combinations. This genetic algorithm is tested in four optimization problems of composite laminates. Specifically, the effect of the elite comparison operator is evaluated. Results indicate that the elite comparison operator significantly accelerates the convergence of the algorithm, which thus becomes a good candidate for the optimization of composite laminates.
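
    A minimal sketch of a GA with the operators named above (roulette selection, one-point crossover, uniform mutation, elitism) follows; the laminate analysis is replaced by a stand-in toy fitness function, and the allowed ply angles, population size and rates are assumptions, not the paper's settings. The elite comparison operator itself is not reproduced.

```python
# Minimal genetic algorithm sketch for discrete ply-angle selection.
import numpy as np

rng = np.random.default_rng(0)
ANGLES = np.array([-45.0, 0.0, 45.0, 90.0])      # allowed ply angles (assumed)
N_PLIES, POP, GENS, P_MUT = 8, 40, 100, 0.05

def fitness(layup):
    # Toy objective: reward layups close to a [45/-45/0/90]s-like target.
    target = np.array([45, -45, 0, 90, 90, 0, -45, 45], dtype=float)
    return 1.0 / (1.0 + np.mean((layup - target) ** 2))

pop = rng.choice(ANGLES, size=(POP, N_PLIES))
for gen in range(GENS):
    fit = np.array([fitness(ind) for ind in pop])
    elite = pop[np.argmax(fit)].copy()                   # elite strategy
    probs = fit / fit.sum()                              # roulette-wheel selection
    parents = pop[rng.choice(POP, size=(POP, 2), p=probs)]
    cut = rng.integers(1, N_PLIES, POP)                  # one-point crossover
    children = np.where(np.arange(N_PLIES) < cut[:, None],
                        parents[:, 0], parents[:, 1])
    mutate = rng.random(children.shape) < P_MUT          # uniform mutation
    children[mutate] = rng.choice(ANGLES, size=mutate.sum())
    children[0] = elite                                  # keep the best solution
    pop = children

best = pop[np.argmax([fitness(ind) for ind in pop])]
print("best layup found:", best)
```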

  13. Photothermal depth profiling: Comparison between genetic algorithms and thermal wave backscattering (abstract)

    Science.gov (United States)

    Li Voti, R.; Sibilia, C.; Bertolotti, M.

    2003-01-01

    Photothermal depth profiling has been the subject of many papers in the last years. Inverse problems on different kinds of materials have been identified, classified, and solved. A first classification has been done according to the type of depth profile: the physical quantity to be reconstructed is the optical absorption in the problems of type I, the thermal effusivity for type II, and both of them for type III. Another classification may be done depending on the time scale of the pump beam heating (frequency scan, time scan), or on its geometrical symmetry (one- or three-dimensional). In this work we want to discuss two different approaches, the genetic algorithms (GA) [R. Li Voti, C. Melchiorri, C. Sibilia, and M. Bertolotti, Anal. Sci. 17, 410 (2001); R. Li Voti, Proceedings, IV Int. Workshop on Advances in Signal Processing for Non-Destructive Evaluation of Materials, Quebec, August 2001] and the thermal wave backscattering (TWBS) [R. Li Voti, G. L. Liakhou, S. Paoloni, C. Sibilia, and M. Bertolotti, Anal. Sci. 17, 414 (2001); J. C. Krapez and R. Li Voti, Anal. Sci. 17, 417 (2001)], showing their performances and limits of validity for several kinds of photothermal depth profiling problems: The two approaches are based on different mechanisms and exhibit obviously different features. GA may be implemented on the exact heat diffusion equation as follows: one chromosome is associated to each profile. The genetic evolution of the chromosome allows one to find better and better profiles, eventually converging towards the solution of the inverse problem. The main advantage is that GA may be applied to any arbitrary profile, but several disadvantages exist; for example, the complexity of the algorithm, the slow convergence, and consequently the computer time consumed. On the contrary, TWBS uses a simplified theoretical model of heat diffusion in inhomogeneous materials. According to such a model, the photothermal signal depends linearly on the thermal effusivity

  14. Assessment of left ventricular function and volumes by myocardial perfusion scintigraphy - comparison of two algorithms

    International Nuclear Information System (INIS)

    Zajic, T.; Fischer, R.; Brink, I.; Moser, E.; Krause, T.; Saurbier, B.

    2001-01-01

    Aim: Left ventricular volume and function can be computed from gated SPECT myocardial perfusion imaging using the Emory Cardiac Toolbox (ECT) or gated SPECT quantification (GS-Quant). The aim of this study was to compare both programs with respect to their practical application, stability and precision on heart models as well as in clinical use. Methods: The volumes of five cardiac models were calculated by ECT and GS-Quant. 48 patients (13 female, 35 male) underwent a one-day stress-rest protocol and gated SPECT. From these 96 gated SPECT images, left ventricular ejection fraction (LVEF), end-diastolic volume (EDV) and end-systolic volume (ESV) were estimated by ECT and GS-Quant. For 42 patients, LVEF was also determined by echocardiography. Results: For the cardiac models, the computed volumes showed high correlation with the model volumes as well as high correlation between ECT and GS-Quant (r ≥0.99). Both programs underestimated the volume by approximately 20-30%, independent of ventricle size. For LVEF, EDV and ESV, GS-Quant and ECT correlated well with each other and with the LVEF estimated by echocardiography (r ≥0.86). LVEF values determined with ECT were about 10% higher than values determined with GS-Quant or echocardiography. The incorrect surfaces calculated by the automatic algorithm of GS-Quant for three examinations could not be corrected manually; 34 of the ECT studies were optimized by the operator. Conclusion: GS-Quant and ECT are two reliable programs for estimating LVEF. Both appear to underestimate the cardiac volume. In practical application, GS-Quant was faster and easier to use. ECT allows the user to define the contour of the ventricle and is thus less susceptible to artifacts. (orig.)

  15. Characterization and Comparison of the 10-2 SITA-Standard and Fast Algorithms

    Directory of Open Access Journals (Sweden)

    Yaniv Barkana

    2012-01-01

    Full Text Available Purpose: To compare the 10-2 SITA-standard and SITA-fast visual field programs in patients with glaucoma. Methods: We enrolled 26 patients with open angle glaucoma with involvement of at least one paracentral location on the 24-2 SITA-standard field test. Each subject performed 10-2 SITA-standard and SITA-fast tests. Within 2 months this sequence of tests was repeated. Results: SITA-fast was 30% shorter than SITA-standard (5.5±1.1 vs 7.9±1.1 minutes, P<0.001). Mean MD was statistically significantly higher for SITA-standard compared with SITA-fast at the first visit (Δ=0.3 dB, P=0.017) but not the second visit. The inter-visit difference in MD or in the number of depressed points was not significant for either program. Bland-Altman analysis showed that clinically significant variations can exist in individual instances between the 2 programs and between repeat tests with the same program. Conclusions: The 10-2 SITA-fast algorithm is significantly shorter than SITA-standard. The two programs have similar long-term variability. Average same-visit between-program and same-program between-visit sensitivity results were similar for the study population, but clinically significant variability was observed for some individual test pairs. Group inter- and intra-program test results may be comparable, but in the management of the individual patient, field change should be verified by repeat testing.

  16. Application of a Combination of a Knowledge-Based Algorithm and 2-Stage Screening to Hypothesis-Free Genomic Data on Irinotecan-Treated Patients for Identification of a Candidate Single Nucleotide Polymorphism Related to an Adverse Effect

    Science.gov (United States)

    Takahashi, Hiro; Sai, Kimie; Saito, Yoshiro; Kaniwa, Nahoko; Matsumura, Yasuhiro; Hamaguchi, Tetsuya; Shimada, Yasuhiro; Ohtsu, Atsushi; Yoshino, Takayuki; Doi, Toshihiko; Okuda, Haruhiro; Ichinohe, Risa; Takahashi, Anna; Doi, Ayano; Odaka, Yoko; Okuyama, Misuzu; Saijo, Nagahiro; Sawada, Jun-ichi; Sakamoto, Hiromi; Yoshida, Teruhiko

    2014-01-01

    Interindividual variation in a drug response among patients is known to cause serious problems in medicine. Genomic information has been proposed as the basis for “personalized” health care. The genome-wide association study (GWAS) is a powerful technique for examining single nucleotide polymorphisms (SNPs) and their relationship with drug response variation; however, when using only GWAS, it often happens that no useful SNPs are identified due to multiple testing problems. Therefore, in a previous study, we proposed a combined method consisting of a knowledge-based algorithm, 2 stages of screening, and a permutation test for identifying SNPs. In the present study, we applied this method to a pharmacogenomics study where 109,365 SNPs were genotyped using Illumina Human-1 BeadChip in 168 cancer patients treated with irinotecan chemotherapy. We identified the SNP rs9351963 in potassium voltage-gated channel subfamily KQT member 5 (KCNQ5) as a candidate factor related to incidence of irinotecan-induced diarrhea. The p value for rs9351963 was 3.31×10−5 in Fisher's exact test and 0.0289 in the permutation test (when multiple testing problems were corrected). Additionally, rs9351963 was clearly superior to the clinical parameters and the model involving rs9351963 showed sensitivity of 77.8% and specificity of 57.6% in the evaluation by means of logistic regression. Recent studies showed that KCNQ4 and KCNQ5 genes encode members of the M channel expressed in gastrointestinal smooth muscle and suggested that these genes are associated with irritable bowel syndrome and similar peristalsis diseases. These results suggest that rs9351963 in KCNQ5 is a possible predictive factor of incidence of diarrhea in cancer patients treated with irinotecan chemotherapy and for selecting chemotherapy regimens, such as irinotecan alone or a combination of irinotecan with a KCNQ5 opener. Nonetheless, clinical importance of rs9351963 should be further elucidated. PMID:25127363
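
    The core of the statistical workflow described above is an association test followed by a permutation-based correction. A minimal sketch of that pattern (synthetic carrier/diarrhea labels, not the study's data, pipeline, or knowledge-based screening step) is shown below.

```python
# Simple label-permutation test for a SNP/adverse-event association.
import numpy as np
from scipy.stats import fisher_exact

rng = np.random.default_rng(0)
n = 168
carrier = rng.random(n) < 0.4                          # synthetic risk-allele carriers
event = rng.random(n) < np.where(carrier, 0.45, 0.2)   # synthetic diarrhea incidence

def table(carrier, event):
    return [[np.sum(carrier & event), np.sum(carrier & ~event)],
            [np.sum(~carrier & event), np.sum(~carrier & ~event)]]

_, p_obs = fisher_exact(table(carrier, event))

# Permutation p-value: how often does shuffling the outcome labels give an
# association at least as strong as the observed one?
n_perm, count = 2000, 0
for _ in range(n_perm):
    _, p_perm = fisher_exact(table(carrier, rng.permutation(event)))
    count += p_perm <= p_obs
print(f"observed Fisher p = {p_obs:.4f}, permutation p = {count / n_perm:.4f}")
```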

  17. Comparison of supervised machine learning algorithms for waterborne pathogen detection using mobile phone fluorescence microscopy

    Science.gov (United States)

    Ceylan Koydemir, Hatice; Feng, Steve; Liang, Kyle; Nadkarni, Rohan; Benien, Parul; Ozcan, Aydogan

    2017-06-01

    Giardia lamblia is a waterborne parasite that affects millions of people every year worldwide, causing a diarrheal illness known as giardiasis. Timely detection of the presence of the cysts of this parasite in drinking water is important to prevent the spread of the disease, especially in resource-limited settings. Here we provide extended experimental testing and evaluation of the performance and repeatability of a field-portable and cost-effective microscopy platform for automated detection and counting of Giardia cysts in water samples, including tap water, non-potable water, and pond water. This compact platform is based on our previous work, and is composed of a smartphone-based fluorescence microscope, a disposable sample processing cassette, and a custom-developed smartphone application. Our mobile phone microscope has a large field of view of 0.8 cm2 and weighs only 180 g, excluding the phone. A custom-developed smartphone application provides a user-friendly graphical interface, guiding the users to capture a fluorescence image of the sample filter membrane and analyze it automatically at our servers using an image processing algorithm and training data, consisting of >30,000 images of cysts and >100,000 images of other fluorescent particles that are captured, including, e.g. dust. The total time that it takes from sample preparation to automated cyst counting is less than an hour for each 10 ml of water sample that is tested. We compared the sensitivity and the specificity of our platform using multiple supervised classification models, including support vector machines and nearest neighbors, and demonstrated that a bootstrap aggregating (i.e. bagging) approach using raw image file format provides the best performance for automated detection of Giardia cysts. We evaluated the performance of this machine learning enabled pathogen detection device with water samples taken from different sources (e.g. tap water, non-potable water, pond water) and achieved a
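
    The classification step found best above is bootstrap aggregating (bagging). A minimal sketch of that approach on synthetic feature vectors (standing in for the image-derived cyst/non-cyst features; not the authors' training set) is shown below.

```python
# Bagging classifier for a binary cyst / non-cyst decision on synthetic features.
import numpy as np
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(0)
n, n_features = 2000, 16
X = rng.normal(0, 1, (n, n_features))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 0.5, n) > 0.8).astype(int)  # 1 = cyst

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
clf = BaggingClassifier(n_estimators=50, random_state=0)  # default base learner: decision tree
clf.fit(X_tr, y_tr)

y_pred = clf.predict(X_te)
sens = recall_score(y_te, y_pred)                    # sensitivity
spec = recall_score(y_te, y_pred, pos_label=0)       # specificity
print(f"sensitivity = {sens:.2f}, specificity = {spec:.2f}")
```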

  18. Comparison of supervised machine learning algorithms for waterborne pathogen detection using mobile phone fluorescence microscopy

    KAUST Repository

    Ceylan Koydemir, Hatice

    2017-06-14

    Giardia lamblia is a waterborne parasite that affects millions of people every year worldwide, causing a diarrheal illness known as giardiasis. Timely detection of the presence of the cysts of this parasite in drinking water is important to prevent the spread of the disease, especially in resource-limited settings. Here we provide extended experimental testing and evaluation of the performance and repeatability of a field-portable and cost-effective microscopy platform for automated detection and counting of Giardia cysts in water samples, including tap water, non-potable water, and pond water. This compact platform is based on our previous work, and is composed of a smartphone-based fluorescence microscope, a disposable sample processing cassette, and a custom-developed smartphone application. Our mobile phone microscope has a large field of view of ~0.8 cm2 and weighs only ~180 g, excluding the phone. A custom-developed smartphone application provides a user-friendly graphical interface, guiding the users to capture a fluorescence image of the sample filter membrane and analyze it automatically at our servers using an image processing algorithm and training data, consisting of >30,000 images of cysts and >100,000 images of other fluorescent particles that are captured, including, e.g. dust. The total time that it takes from sample preparation to automated cyst counting is less than an hour for each 10 ml of water sample that is tested. We compared the sensitivity and the specificity of our platform using multiple supervised classification models, including support vector machines and nearest neighbors, and demonstrated that a bootstrap aggregating (i.e. bagging) approach using raw image file format provides the best performance for automated detection of Giardia cysts. We evaluated the performance of this machine learning enabled pathogen detection device with water samples taken from different sources (e.g. tap water, non-potable water, pond water) and achieved

  19. Comparison of supervised machine learning algorithms for waterborne pathogen detection using mobile phone fluorescence microscopy

    Directory of Open Access Journals (Sweden)

    Ceylan Koydemir Hatice

    2017-06-01

    Full Text Available Giardia lamblia is a waterborne parasite that affects millions of people every year worldwide, causing a diarrheal illness known as giardiasis. Timely detection of the presence of the cysts of this parasite in drinking water is important to prevent the spread of the disease, especially in resource-limited settings. Here we provide extended experimental testing and evaluation of the performance and repeatability of a field-portable and cost-effective microscopy platform for automated detection and counting of Giardia cysts in water samples, including tap water, non-potable water, and pond water. This compact platform is based on our previous work, and is composed of a smartphone-based fluorescence microscope, a disposable sample processing cassette, and a custom-developed smartphone application. Our mobile phone microscope has a large field of view of ~0.8 cm2 and weighs only ~180 g, excluding the phone. A custom-developed smartphone application provides a user-friendly graphical interface, guiding the users to capture a fluorescence image of the sample filter membrane and analyze it automatically at our servers using an image processing algorithm and training data, consisting of >30,000 images of cysts and >100,000 images of other fluorescent particles that are captured, including, e.g. dust. The total time that it takes from sample preparation to automated cyst counting is less than an hour for each 10 ml of water sample that is tested. We compared the sensitivity and the specificity of our platform using multiple supervised classification models, including support vector machines and nearest neighbors, and demonstrated that a bootstrap aggregating (i.e. bagging approach using raw image file format provides the best performance for automated detection of Giardia cysts. We evaluated the performance of this machine learning enabled pathogen detection device with water samples taken from different sources (e.g. tap water, non-potable water, pond

  20. Comparison of supervised machine learning algorithms for waterborne pathogen detection using mobile phone fluorescence microscopy

    KAUST Repository

    Ceylan Koydemir, Hatice; Feng, Steve; Liang, Kyle; Nadkarni, Rohan; Benien, Parul; Ozcan, Aydogan

    2017-01-01

    Giardia lamblia is a waterborne parasite that affects millions of people every year worldwide, causing a diarrheal illness known as giardiasis. Timely detection of the presence of the cysts of this parasite in drinking water is important to prevent the spread of the disease, especially in resource-limited settings. Here we provide extended experimental testing and evaluation of the performance and repeatability of a field-portable and cost-effective microscopy platform for automated detection and counting of Giardia cysts in water samples, including tap water, non-potable water, and pond water. This compact platform is based on our previous work, and is composed of a smartphone-based fluorescence microscope, a disposable sample processing cassette, and a custom-developed smartphone application. Our mobile phone microscope has a large field of view of ~0.8 cm2 and weighs only ~180 g, excluding the phone. A custom-developed smartphone application provides a user-friendly graphical interface, guiding the users to capture a fluorescence image of the sample filter membrane and analyze it automatically at our servers using an image processing algorithm and training data, consisting of >30,000 images of cysts and >100,000 images of other fluorescent particles that are captured, including, e.g. dust. The total time that it takes from sample preparation to automated cyst counting is less than an hour for each 10 ml of water sample that is tested. We compared the sensitivity and the specificity of our platform using multiple supervised classification models, including support vector machines and nearest neighbors, and demonstrated that a bootstrap aggregating (i.e. bagging) approach using raw image file format provides the best performance for automated detection of Giardia cysts. We evaluated the performance of this machine learning enabled pathogen detection device with water samples taken from different sources (e.g. tap water, non-potable water, pond water) and achieved

  1. Comparison of measurement methods with a mixed effects procedure accounting for replicated evaluations (COM3PARE): method comparison algorithm implementation for head and neck IGRT positional verification.

    Science.gov (United States)

    Roy, Anuradha; Fuller, Clifton D; Rosenthal, David I; Thomas, Charles R

    2015-08-28

    Comparison of imaging measurement devices in the absence of a gold-standard comparator remains a vexing problem, especially in scenarios where multiple, non-paired, replicated measurements occur, as in image-guided radiotherapy (IGRT). As the growing number of commercially available IGRT systems presents a challenge in determining whether different IGRT methods may be used interchangeably, there is an unmet need for a conceptually parsimonious and statistically robust method to evaluate the agreement between two methods with replicated observations. Consequently, we sought to determine, using a previously reported head and neck positional verification dataset, the feasibility and utility of a Comparison of Measurement Methods with the Mixed Effects Procedure Accounting for Replicated Evaluations (COM3PARE), a unified conceptual schema and analytic algorithm based upon Roy's linear mixed effects (LME) model with Kronecker product covariance structure in a doubly multivariate set-up, for IGRT method comparison. An anonymized dataset consisting of 100 paired coordinate (X/Y/Z) measurements from a sequential series of head and neck cancer patients imaged near-simultaneously with cone beam CT (CBCT) and kilovoltage X-ray (KVX) imaging was used for model implementation. Software-suggested CBCT and KVX shifts for the lateral (X), vertical (Y) and longitudinal (Z) dimensions were evaluated for bias, inter-method (between-subject variation), intra-method (within-subject variation), and overall agreement using a script implementing COM3PARE with the MIXED procedure of the statistical software package SAS (SAS Institute, Cary, NC, USA). COM3PARE showed a statistically significant bias and a difference in inter-method agreement between CBCT and KVX in the Z-axis (both p-values <0.01). Intra-method and overall agreement differences were noted as statistically significant for both the X- and Z-axes (all p-values <0.01). Using pre-specified criteria based on intra-method agreement, CBCT was deemed

  2. Head-to-head comparison of adaptive statistical and model-based iterative reconstruction algorithms for submillisievert coronary CT angiography.

    Science.gov (United States)

    Benz, Dominik C; Fuchs, Tobias A; Gräni, Christoph; Studer Bruengger, Annina A; Clerc, Olivier F; Mikulicic, Fran; Messerli, Michael; Stehli, Julia; Possner, Mathias; Pazhenkottil, Aju P; Gaemperli, Oliver; Kaufmann, Philipp A; Buechel, Ronny R

    2018-02-01

    Iterative reconstruction (IR) algorithms allow for a significant reduction in the radiation dose of coronary computed tomography angiography (CCTA). We performed a head-to-head comparison of adaptive statistical IR (ASiR) and model-based IR (MBIR) algorithms to assess their impact on quantitative image parameters and diagnostic accuracy for submillisievert CCTA. CCTA datasets of 91 patients were reconstructed using filtered back projection (FBP), increasing contributions of ASiR (20, 40, 60, 80, and 100%), and MBIR. Signal and noise were measured in the aortic root to calculate the signal-to-noise ratio (SNR). In a subgroup of 36 patients, the diagnostic accuracy of ASiR 40%, ASiR 100%, and MBIR for the diagnosis of coronary artery disease (CAD) was compared with invasive coronary angiography. Median radiation dose was 0.21 mSv for CCTA. While increasing levels of ASiR gradually reduced image noise compared with FBP (up to -48%), MBIR provided a substantially greater noise reduction (-59% compared with ASiR 100%). ASiR 40% and ASiR 100% resulted in substantially lower diagnostic accuracy to detect CAD as diagnosed by invasive coronary angiography compared with MBIR: sensitivity and specificity were 100 and 37%, 100 and 57%, and 100 and 74% for ASiR 40%, ASiR 100%, and MBIR, respectively. MBIR offers substantial noise reduction with increased SNR, paving the way for implementation of submillisievert CCTA protocols in clinical routine. In contrast, the inferior noise reduction of ASiR negatively affects the diagnostic accuracy of submillisievert CCTA for CAD detection. Published on behalf of the European Society of Cardiology. All rights reserved. © The Author 2017. For permissions, please email: journals.permissions@oup.com.

  3. Comparison of primary productivity estimates in the Baltic Sea based on the DESAMBEM algorithm with estimates based on other similar algorithms

    Directory of Open Access Journals (Sweden)

    Małgorzata Stramska

    2013-02-01

    Full Text Available The quasi-synoptic view available from satellites has been broadly used in recent years to observe in near-real time the large-scale dynamics of marine ecosystems and to estimate primary productivity in the world ocean. However, the standard global NASA ocean colour algorithms generally do not produce good results in the Baltic Sea. In this paper, we compare the ability of seven algorithms to estimate depth-integrated daily primary production (PP, mg C m-2) in the Baltic Sea. All the algorithms use surface chlorophyll concentration, sea surface temperature, photosynthetically available radiation, latitude, longitude and day of the year as input data. Algorithm-derived PP is then compared with PP estimates obtained from 14C uptake measurements. The results indicate that the best agreement between the modelled and measured PP in the Baltic Sea is obtained with the DESAMBEM algorithm. This result supports the notion that a regional approach should be used in the interpretation of ocean colour satellite data in the Baltic Sea.

  4. Library correlation nuclide identification algorithm

    International Nuclear Information System (INIS)

    Russ, William R.

    2007-01-01

    A novel nuclide identification algorithm, Library Correlation Nuclide Identification (LibCorNID), is proposed. In addition to the spectrum, LibCorNID requires the standard energy, peak shape and peak efficiency calibrations. Input parameters include tolerances for some expected variations in the calibrations, a minimum relative nuclide peak area threshold, and a correlation threshold. Initially, the measured peak spectrum is obtained as the residual after baseline estimation via peak erosion, removing the continuum. Library nuclides are filtered by examining the possible nuclide peak areas in terms of the measured peak spectrum and applying the specified relative area threshold. Remaining candidates are used to create a set of theoretical peak spectra based on the calibrations and library entries. These candidate spectra are then simultaneously fit to the measured peak spectrum while also optimizing the calibrations within the bounds of the specified tolerances. Each candidate with optimized area still exceeding the area threshold undergoes a correlation test. The normalized Pearson's correlation value is calculated as a comparison of the optimized nuclide peak spectrum to the measured peak spectrum with the other optimized peak spectra subtracted. Those candidates with correlation values that exceed the specified threshold are identified and their optimized activities are output. An evaluation of LibCorNID was conducted to verify identification performance in terms of detection probability and false alarm rate. LibCorNID has been shown to perform well compared to standard peak-based analyses
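
    The correlation test described above can be sketched as follows: each candidate's theoretical peak spectrum is correlated with the measured peak spectrum after subtracting the other candidates' optimized contributions. The snippet below uses synthetic Gaussian peaks and an assumed correlation threshold; it is not the LibCorNID implementation and omits the calibration optimization step.

```python
# Sketch of the per-candidate correlation test on synthetic peak spectra.
import numpy as np

def gaussian_peak(channels, centroid, fwhm, area):
    sigma = fwhm / 2.3548
    return area * np.exp(-0.5 * ((channels - centroid) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

channels = np.arange(0, 1024, dtype=float)
# Two hypothetical candidates with already-optimized areas, plus measurement noise.
candidates = {
    "nuclide_A": gaussian_peak(channels, 300, 6, 5000),
    "nuclide_B": gaussian_peak(channels, 620, 8, 3000),
}
measured = sum(candidates.values()) + np.random.default_rng(2).normal(0, 5, channels.size)

CORR_THRESHOLD = 0.8                      # assumed user-specified threshold
for name, spec in candidates.items():
    residual = measured - sum(s for n, s in candidates.items() if n != name)
    corr = np.corrcoef(spec, residual)[0, 1]
    status = "identified" if corr >= CORR_THRESHOLD else "rejected"
    print(f"{name}: correlation = {corr:.3f} -> {status}")
```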

  5. Immunogenicity of a virosomally-formulated Plasmodium falciparum GLURP-MSP3 chimeric protein-based malaria vaccine candidate in comparison to adjuvanted formulations

    DEFF Research Database (Denmark)

    Tamborrini, Marco; Stoffel, Sabine A; Westerfeld, Nicole

    2011-01-01

    In clinical trials, immunopotentiating reconstituted influenza virosomes (IRIVs) have shown great potential as a versatile antigen delivery platform for synthetic peptides derived from Plasmodium falciparum antigens. This study describes the immunogenicity of a virosomally-formulated recombinant ...... fusion protein comprising domains of the two malaria vaccine candidate antigens MSP3 and GLURP....

  6. Social Studies Teacher Candidates' Opinions about Digital Citizenship and Its Place in Social Studies Teacher Training Program: A Comparison between the USA and Turkey

    Science.gov (United States)

    Karaduman, Hidir

    2017-01-01

    This research aims to determine and compare what social studies teacher candidates living in two different countries think about digital citizenship and its place within social studies and social studies teacher training program and to produce suggestions concerning digital citizenship education. Having a descriptive design, this research has…

  7. Personalization in e-campaigning: A cross-national comparison of personalization strategies used on candidate websites of 17 countries in EP elections 2009

    NARCIS (Netherlands)

    Hermans, E.A.H.M.; Vergeer, M.R.M.

    2013-01-01

    Candidate websites provide politicians with opportunities to present themselves in an individual way. To a greater or lesser extent politicians share personal information in their biographies and provide options to connect with citizens by putting links on their websites to their social networking

  8. Speeding Up the String Comparison of the IDS Snort using Parallel Programming: A Systematic Literature Review on the Parallelized Aho-Corasick Algorithm

    Directory of Open Access Journals (Sweden)

    SILVA JUNIOR,J. B.

    2016-12-01

    The Intrusion Detection System (IDS) needs to compare the contents of all packets arriving at the network interface with a set of signatures indicating possible attacks, a task that consumes much CPU processing time. To alleviate this problem, some researchers have tried to parallelize the IDS's comparison engine, transferring execution from the CPU to the GPU. This paper identifies and maps the parallelization features of the Aho-Corasick algorithm, which is used in Snort for pattern matching, in order to show this algorithm's implementation and execution issues, as well as optimization techniques for the Aho-Corasick machine. We found 147 papers in major computer science publication databases and mapped them, then selected 22 for detailed analysis. Our analysis of the papers showed, among other results, that parallelization of the AC algorithm is a recent research topic and that authors have focused on the state transition table as the most common way to implement the algorithm on the GPU. Furthermore, techniques that speed up the algorithm and reduce its storage requirements are widely used, such as running the automaton from the fastest memories and mechanisms for reducing the number of nodes and bit mapping.
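
    For reference, a compact serial Aho-Corasick automaton is sketched below: it is the CPU baseline that the surveyed papers parallelize on the GPU, not Snort's implementation, and the dictionary-based trie is only one of the state-table layouts discussed above.

        # Compact reference Aho-Corasick automaton (serial CPU baseline; illustrative).
        from collections import deque

        def build_automaton(patterns):
            goto = [{}]          # per-state transition table (trie edges)
            fail = [0]           # failure links
            out = [set()]        # patterns recognized when a state is reached
            # 1) Build the trie.
            for pat in patterns:
                state = 0
                for ch in pat:
                    if ch not in goto[state]:
                        goto.append({}); fail.append(0); out.append(set())
                        goto[state][ch] = len(goto) - 1
                    state = goto[state][ch]
                out[state].add(pat)
            # 2) Build failure links breadth-first (root children fail to the root).
            queue = deque(goto[0].values())
            while queue:
                state = queue.popleft()
                for ch, nxt in goto[state].items():
                    queue.append(nxt)
                    f = fail[state]
                    while f and ch not in goto[f]:
                        f = fail[f]
                    fail[nxt] = goto[f].get(ch, 0)
                    out[nxt] |= out[fail[nxt]]
            return goto, fail, out

        def search(text, automaton):
            goto, fail, out = automaton
            state, hits = 0, []
            for i, ch in enumerate(text):
                while state and ch not in goto[state]:
                    state = fail[state]
                state = goto[state].get(ch, 0)
                for pat in out[state]:
                    hits.append((i - len(pat) + 1, pat))
            return hits

        # Example: [(0, 'she'), (1, 'he'), (4, 'sell'), (10, 'sea')]
        print(search("she sells seashells", build_automaton(["he", "she", "sell", "sea"])))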

  9. Technical Report Series on Global Modeling and Data Assimilation. Volume 12; Comparison of Satellite Global Rainfall Algorithms

    Science.gov (United States)

    Suarez, Max J. (Editor); Chang, Alfred T. C.; Chiu, Long S.

    1997-01-01

    Seventeen months of rainfall data (August 1987-December 1988) from nine satellite rainfall algorithms (Adler, Chang, Kummerow, Prabhakara, Huffman, Spencer, Susskind, and Wu) were analyzed to examine the uncertainty of satellite-derived rainfall estimates. The variability among algorithms, measured as the standard deviation computed from the ensemble of algorithms, shows that regions of high algorithm variability tend to coincide with regions of high rain rates. Histograms of pattern correlation (PC) between algorithms suggest a bimodal distribution, with separation at a PC-value of about 0.85. Applying this threshold as a criterion for similarity, our analyses show that algorithms using the same sensor or satellite input tend to be similar, suggesting the dominance of sampling errors in these satellite estimates.
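
    The ensemble-variability and pattern-correlation grouping described above can be sketched briefly. The input dict of gridded rain-rate fields is a hypothetical stand-in for the nine algorithm products; only the correlation threshold (0.85) comes from the abstract.

        # Sketch of pattern correlation between algorithm rain-rate fields and
        # ensemble variability. `fields` maps algorithm name -> 2-D rain-rate grid.
        import numpy as np
        from itertools import combinations

        def pattern_correlation(a, b):
            """Pearson correlation between two fields over their common valid cells."""
            a, b = np.ravel(a), np.ravel(b)
            ok = np.isfinite(a) & np.isfinite(b)
            return np.corrcoef(a[ok], b[ok])[0, 1]

        def similar_pairs(fields, threshold=0.85):
            """Algorithm pairs whose pattern correlation exceeds the threshold."""
            return [(m, n) for m, n in combinations(fields, 2)
                    if pattern_correlation(fields[m], fields[n]) > threshold]

        def ensemble_variability(fields):
            """Cell-wise standard deviation across the ensemble of algorithms."""
            return np.nanstd(np.stack(list(fields.values())), axis=0)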

  10. Accuracy of Cardiac Output by Nine Different Pulse Contour Algorithms in Cardiac Surgery Patients: A Comparison with Transpulmonary Thermodilution

    Directory of Open Access Journals (Sweden)

    Ole Broch

    2016-01-01

    Objective. Today, there exist several different pulse contour algorithms for calculation of cardiac output (CO). The aim of the present study was to compare the accuracy of nine different pulse contour algorithms with transpulmonary thermodilution before and after cardiopulmonary bypass (CPB). Methods. Thirty patients scheduled for elective coronary surgery were studied before and after CPB. A passive leg raising maneuver was also performed. Measurements included CO obtained by transpulmonary thermodilution (COTPTD) and by nine pulse contour algorithms (COX1–9). Calibration of pulse contour algorithms was performed by esophageal Doppler ultrasound after induction of anesthesia and 15 min after CPB. Correlations, Bland-Altman analysis, and four-quadrant and polar analyses were also calculated. Results. There was only a poor correlation between COTPTD and COX1–9 during passive leg raising and in the period before and after CPB. Percentage error exceeded the required 30% limit. Four-quadrant and polar analysis revealed poor trending ability for most algorithms before and after CPB. The Liljestrand-Zander algorithm revealed the best reliability. Conclusions. Estimation of CO by nine different pulse contour algorithms revealed poor accuracy compared with transpulmonary thermodilution. Furthermore, the less-invasive algorithms showed an insufficient capability for trending hemodynamic changes before and after CPB. The Liljestrand-Zander algorithm demonstrated the highest reliability. This trial is registered with NCT02438228 (ClinicalTrials.gov).
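
    The agreement statistics used above (Bland-Altman bias, limits of agreement, and the ~30% percentage-error criterion) are easy to sketch. The paired cardiac-output values below are made-up placeholders, and the percentage-error definition (limits-of-agreement half-width relative to mean CO) is one commonly used form, not necessarily the exact formula of the study.

        # Sketch of Bland-Altman statistics and a percentage-error check for paired
        # cardiac-output measurements (reference thermodilution vs pulse contour).
        import numpy as np

        def bland_altman(co_ref, co_test):
            co_ref, co_test = np.asarray(co_ref, float), np.asarray(co_test, float)
            diff = co_test - co_ref
            bias = diff.mean()
            half_width = 1.96 * diff.std(ddof=1)          # limits-of-agreement half-width
            pct_error = 100.0 * half_width / ((co_ref + co_test) / 2.0).mean()
            return bias, (bias - half_width, bias + half_width), pct_error

        co_td = [4.2, 5.1, 3.8, 6.0, 4.9]   # hypothetical thermodilution CO (L/min)
        co_pc = [4.6, 4.7, 3.2, 6.8, 5.5]   # hypothetical pulse contour CO (L/min)
        bias, limits, pe = bland_altman(co_td, co_pc)
        print(f"bias={bias:.2f} L/min, LoA={limits}, percentage error={pe:.0f}% (limit ~30%)")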

  11. Comparison of the accuracy of three algorithms in predicting accessory pathways among adult Wolff-Parkinson-White syndrome patients.

    Science.gov (United States)

    Maden, Orhan; Balci, Kevser Gülcihan; Selcuk, Mehmet Timur; Balci, Mustafa Mücahit; Açar, Burak; Unal, Sefa; Kara, Meryem; Selcuk, Hatice

    2015-12-01

    The aim of this study was to investigate the accuracy of three algorithms in predicting accessory pathway locations in adult patients with Wolff-Parkinson-White syndrome in a Turkish population. A total of 207 adult patients with Wolff-Parkinson-White syndrome were retrospectively analyzed. The most preexcited 12-lead electrocardiogram in sinus rhythm was used for analysis. Two investigators blinded to the patient data used three algorithms for prediction of accessory pathway location. Among all locations, 48.5% were left-sided, 44% were right-sided, and 7.5% were located in the midseptum or anteroseptum. When only exact locations were accepted as a match, predictive accuracy was 71.5% for Chiang, 72.4% for d'Avila, and 71.5% for Arruda. The predictive accuracy did not differ between the algorithms (p = 1.000; p = 0.875; p = 0.885, respectively). The best algorithm for prediction of right-sided, left-sided, and anteroseptal and midseptal accessory pathways was Arruda. Overall, the algorithms were similar in predicting accessory pathway location, and the predictive accuracy was lower than previously reported by their authors. However, according to the accessory pathway site, the algorithm designed by Arruda et al. showed better predictions than the other algorithms, and using this algorithm may provide advantages before a planned ablation.

  12. A study of the dosimetry of small field photon beams used in intensity modulated radiation therapy in inhomogeneous media: Monte Carlo simulations, and algorithm comparisons and corrections

    International Nuclear Information System (INIS)

    Jones, Andrew Osler

    2004-01-01

    There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities, and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, whereas Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict the magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs farther downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. The dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their abilities to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density and field size on dose to the

  13. Immunogenicity of a virosomally-formulated Plasmodium falciparum GLURP-MSP3 chimeric protein-based malaria vaccine candidate in comparison to adjuvanted formulations

    Directory of Open Access Journals (Sweden)

    Tamborrini Marco

    2011-12-01

    Background: In clinical trials, immunopotentiating reconstituted influenza virosomes (IRIVs) have shown great potential as a versatile antigen delivery platform for synthetic peptides derived from Plasmodium falciparum antigens. This study describes the immunogenicity of a virosomally-formulated recombinant fusion protein comprising domains of the two malaria vaccine candidate antigens MSP3 and GLURP. Methods: The highly purified recombinant protein GMZ2 was coupled to phosphatidylethanolamine and the conjugates incorporated into the membrane of IRIVs. The immunogenicity of this adjuvant-free virosomal formulation was compared to GMZ2 formulated with the adjuvants Montanide ISA 720 and Alum in three mouse strains with different genetic backgrounds. Results: Intramuscular injections of all three candidate vaccine formulations induced GMZ2-specific antibody responses in all mice tested. In general, the humoral immune response in outbred NMRI mice was stronger than that in inbred BALB/c and C57BL/6 mice. ELISA with the recombinant antigens demonstrated immunodominance of the GLURP component over the MSP3 component. However, compared to the Al(OH)3-adjuvanted formulation, the two other formulations elicited in NMRI mice a larger proportion of anti-MSP3 antibodies. Analyses of the induced GMZ2-specific IgG subclass profiles showed for all three formulations a predominance of the IgG1 isotype. Immune sera against all three formulations exhibited cross-reactivity with in vitro cultivated blood-stage parasites. Immunofluorescence and immunoblot competition experiments showed that both components of the hybrid protein induced IgG cross-reactive with the corresponding native proteins. Conclusion: A virosomal formulation of the chimeric protein GMZ2 induced P. falciparum blood stage parasite cross-reactive IgG responses specific for both MSP3 and GLURP. GMZ2 thus represents a candidate component suitable for inclusion into a multi-valent virosomal malaria vaccine.

  14. A Comparison Between the Hemodynamic Effects of Cisatracurium and Atracurium in Patient with Low Function of Left Ventricle who are Candidate for Open Heart Surgery.

    Science.gov (United States)

    Ghorbanlo, Masoud; Mohaghegh, Mahmoud Reza; Yazdanian, Forozan; Mesbah, Mehrdad; Totonchi, Ziya

    2016-07-27

    The need for muscle relaxants in general anesthesia for different surgeries, including cardiac surgeries, and the choice of relaxant given its different hemodynamic effects on patients with heart disease can be of considerable importance. In this study, the hemodynamic effects of two muscle relaxants, cisatracurium and atracurium, were considered in patients with low function of the left ventricle who were candidates for open heart surgery. This study was designed as a randomized prospective double-blind clinical trial. The target population included all adult patients with heart disease whose ejection fraction reported by echocardiography or cardiac catheterization was 35% or less before the surgery and who were candidates for open heart surgery in Shahid Rajaei Heart Center. Taking into account the inclusion and exclusion criteria, the patients were randomly placed in two groups of 30 people each. In the induction stage, all the patients received midazolam, etomidate, and one of the considered muscle relaxants, either 0.2 mg/kg of cisatracurium or 0.5 mg/kg of atracurium, within one minute. In the maintenance stage of anesthesia, the patients received an infusion of midazolam, sufentanil, and the same muscle relaxant used in the induction stage. The hemodynamic indexes were recorded and evaluated at different stages of anesthesia and surgery as well as prior to transfer to the ICU. With regard to descriptive indexes (age and sex distributions, premedication with cardiac drugs, ejection fraction before surgery, basic disease) there was no statistically significant difference between the groups. The significant differences in hemodynamic indexes between the two groups, together with the need for hemodynamic stability at all stages of surgery in patients with low left ventricular function who are candidates for open heart surgery, indicate that cisatracurium is the more advantageous muscle relaxant.

  15. Comparison between T2-weighted MR and contrast-enhanced MR cholangiography in the evaluation of biliary anatomy in liver transplant donor candidates

    International Nuclear Information System (INIS)

    Wang Hong; Mu Xuetao; Wu Chunnan; Dong Yuru; Dong Yue; Zhang Huiqing; Zang Yunjin

    2008-01-01

    Objective: To compare conventional T2-weighted MR cholangiography (T2WI-MRC) with gadobenate dimeglumine enhanced T1-weighted MR cholangiography (CE-MRC) for evaluation of biliary anatomy in liver transplant donor candidates. Methods: Thirty-two healthy liver transplant donor candidates were examined with the two MR cholangiographic methods. For T2WI-MRC, a three-dimensional turbo spin-echo sequence and an oblique coronal heavily T2-weighted thick-slab turbo spin-echo imaging sequence were performed. For CE-MRC, three-dimensional fat-suppressed spoiled gradient-echo sequences were performed with a time delay of 60 minutes following the administration of gadobenate dimeglumine. The depiction of biliary duct anatomy and the artifacts caused by intestinal liquid and breathing were compared between the two methods. Intraoperative cholangiography was the reference-standard examination. Results: Both methods depicted the biliary anatomy correctly in all 9 cases. Both methods showed the third branches of the intrahepatic biliary duct clearly. T2WI-MRC showed the intrahepatic biliary duct before the third branches in 28 cases (87.5%), while CE-MRC showed the same finding in 14 cases (43.8%). T2WI-MRC showed the common bile duct intermittently in 2 cases, which were normal on CE-MRC and intraoperative cholangiography. Intestinal liquid affected the image quality of the biliary duct in 6 cases (18.8%) examined with T2WI-MRC, but in none with CE-MRC. The artifacts caused by breathing were not obvious in either method. Conclusion: T2WI-MRC and CE-MRC can both be used to evaluate the biliary anatomy of liver transplant donor candidates, but CE-MRC appears to be more accurate than T2WI-MRC. (authors)

  16. Identification and Comparison of Candidate Olfactory Genes in the Olfactory and Non-Olfactory Organs of Elm Pest Ambrostoma quadriimpressum (Coleoptera: Chrysomelidae) Based on Transcriptome Analysis.

    Science.gov (United States)

    Wang, Yinliang; Chen, Qi; Zhao, Hanbo; Ren, Bingzhong

    2016-01-01

    The leaf beetle Ambrostoma quadriimpressum (Coleoptera: Chrysomelidae) is a predominant forest pest that causes substantial damage to the lumber industry and city management. However, no effective and environmentally friendly chemical method has been discovered to control this pest. Until recently, the molecular basis of the olfactory system in A. quadriimpressum was completely unknown. In this study, antennae and leg transcriptomes were analyzed and compared using deep sequencing data to identify the olfactory genes in A. quadriimpressum. Moreover, the expression profiles of both male and female candidate olfactory genes were analyzed and validated by bioinformatics, motif analysis, homology analysis, semi-quantitative RT-PCR and RT-qPCR experiments in antennal and non-olfactory organs to explore the candidate olfactory genes that might play key roles in the life cycle of A. quadriimpressum. As a result, approximately 102.9 million and 97.3 million clean reads were obtained from the libraries created from the antennae and legs, respectively. Annotation led to 34,344 unigenes, which were matched to known proteins. Annotation data revealed that the number of genes with binding functions and receptor activity was greater in the antennae than in the legs. Furthermore, many pathway genes were differentially expressed in the two organs. Sixteen candidate odorant binding proteins (OBPs), 10 chemosensory proteins (CSPs), 34 odorant receptors (ORs), 20 ionotropic receptors (IRs) and 2 sensory neuron membrane proteins (SNMPs) and their isoforms were identified. Additionally, 15 OBPs, 9 CSPs, 18 ORs, 6 IRs and 2 SNMPs were predicted to be complete ORFs. Using RT-PCR, RT-qPCR and homology analysis, AquaOBP1/2/4/7/C1/C6, AquaCSP3/9, AquaOR8/9/10/14/15/18/20/26/29/33, AquaIR8a/13/25a showed olfactory-specific expression, indicating that these genes might play a key role in olfaction-related behaviors in A. quadriimpressum such as foraging and seeking. AquaOBP4/C5, AquaCSP7

  17. Identification and Comparison of Candidate Olfactory Genes in the Olfactory and Non-Olfactory Organs of Elm Pest Ambrostoma quadriimpressum (Coleoptera: Chrysomelidae) Based on Transcriptome Analysis.

    Directory of Open Access Journals (Sweden)

    Yinliang Wang

    The leaf beetle Ambrostoma quadriimpressum (Coleoptera: Chrysomelidae) is a predominant forest pest that causes substantial damage to the lumber industry and city management. However, no effective and environmentally friendly chemical method has been discovered to control this pest. Until recently, the molecular basis of the olfactory system in A. quadriimpressum was completely unknown. In this study, antennae and leg transcriptomes were analyzed and compared using deep sequencing data to identify the olfactory genes in A. quadriimpressum. Moreover, the expression profiles of both male and female candidate olfactory genes were analyzed and validated by bioinformatics, motif analysis, homology analysis, semi-quantitative RT-PCR and RT-qPCR experiments in antennal and non-olfactory organs to explore the candidate olfactory genes that might play key roles in the life cycle of A. quadriimpressum. As a result, approximately 102.9 million and 97.3 million clean reads were obtained from the libraries created from the antennae and legs, respectively. Annotation led to 34,344 unigenes, which were matched to known proteins. Annotation data revealed that the number of genes with binding functions and receptor activity was greater in the antennae than in the legs. Furthermore, many pathway genes were differentially expressed in the two organs. Sixteen candidate odorant binding proteins (OBPs), 10 chemosensory proteins (CSPs), 34 odorant receptors (ORs), 20 ionotropic receptors (IRs) and 2 sensory neuron membrane proteins (SNMPs) and their isoforms were identified. Additionally, 15 OBPs, 9 CSPs, 18 ORs, 6 IRs and 2 SNMPs were predicted to be complete ORFs. Using RT-PCR, RT-qPCR and homology analysis, AquaOBP1/2/4/7/C1/C6, AquaCSP3/9, AquaOR8/9/10/14/15/18/20/26/29/33, AquaIR8a/13/25a showed olfactory-specific expression, indicating that these genes might play a key role in olfaction-related behaviors in A. quadriimpressum such as foraging and seeking. AquaOBP4/C5, Aqua

  18. Comparison of Diagnostic Algorithms for Detecting Toxigenic Clostridium difficile in Routine Practice at a Tertiary Referral Hospital in Korea.

    Science.gov (United States)

    Moon, Hee-Won; Kim, Hyeong Nyeon; Hur, Mina; Shim, Hee Sook; Kim, Heejung; Yun, Yeo-Min

    2016-01-01

    Since every single test has limitations for detecting toxigenic Clostridium difficile, multistep algorithms are recommended. This study aimed to compare current, representative diagnostic algorithms for detecting toxigenic C. difficile, using VIDAS C. difficile toxin A&B (toxin ELFA), VIDAS C. difficile GDH (GDH ELFA, bioMérieux, Marcy-l'Etoile, France), and Xpert C. difficile (Cepheid, Sunnyvale, California, USA). In 271 consecutive stool samples, toxigenic culture, toxin ELFA, GDH ELFA, and Xpert C. difficile were performed. We simulated two algorithms: screening by GDH ELFA with confirmation by Xpert C. difficile (GDH + Xpert), and a combined algorithm of GDH ELFA, toxin ELFA, and Xpert C. difficile (GDH + Toxin + Xpert). The performance of each assay and algorithm was assessed. The agreement of Xpert C. difficile and the two algorithms (GDH + Xpert and GDH + Toxin + Xpert) with toxigenic culture was strong (Kappa, 0.848, 0.857, and 0.868, respectively). The sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV) of the algorithms (GDH + Xpert and GDH + Toxin + Xpert) were 96.7%, 95.8%, 85.0%, 98.1%, and 94.5%, 95.8%, 82.3%, 98.5%, respectively. There were no significant differences between Xpert C. difficile and the two algorithms in sensitivity, specificity, PPV and NPV. The performances of both algorithms for detecting toxigenic C. difficile were comparable to that of Xpert C. difficile. Either algorithm would be useful in clinical laboratories and can be optimized in the diagnostic workflow of C. difficile depending on costs, test volume, and clinical needs.
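
    The two simulated workflows can be sketched as simple decision functions. The arbitration rules below (report concordant GDH/toxin results directly, send discordant results to Xpert) are a common interpretation assumed for illustration, not necessarily the exact rules of this study; the boolean inputs are hypothetical assay results.

        # Sketch of the GDH + Xpert and GDH + Toxin + Xpert decision flows
        # (assumed arbitration logic; not the study's or the vendors' software).
        def gdh_xpert(gdh_positive, xpert_positive=None):
            """GDH ELFA screen; only GDH-positive samples go on to Xpert."""
            if not gdh_positive:
                return "negative"
            if xpert_positive is None:
                return "Xpert required"
            return "toxigenic C. difficile" if xpert_positive else "negative"

        def gdh_toxin_xpert(gdh_positive, toxin_positive, xpert_positive=None):
            """Concordant GDH/toxin results are reported; discordant ones go to Xpert."""
            if gdh_positive and toxin_positive:
                return "toxigenic C. difficile"
            if not gdh_positive and not toxin_positive:
                return "negative"
            if xpert_positive is None:
                return "Xpert required"
            return "toxigenic C. difficile" if xpert_positive else "negative"

        print(gdh_xpert(True))                      # -> "Xpert required"
        print(gdh_toxin_xpert(True, False, True))   # -> "toxigenic C. difficile"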

  19. Citizen Candidates Under Uncertainty

    OpenAIRE

    Eguia, Jon X.

    2005-01-01

    In this paper we make two contributions to the growing literature on "citizen-candidate" models of representative democracy. First, we add uncertainty about the total vote count. We show that in a society with a large electorate, where the outcome of the election is uncertain and where winning candidates receive a large reward from holding office, there will be a two-candidate equilibrium and no equilibria with a single candidate. Second, we introduce a new concept of equilibrium, which we te...

  20. Comparison of predictive performance of data mining algorithms in predicting body weight in Mengali rams of Pakistan

    Directory of Open Access Journals (Sweden)

    Senol Celik

    The present study aimed at comparing the predictive performance of some data mining algorithms (CART, CHAID, Exhaustive CHAID, MARS, MLP, and RBF) on biometrical data of Mengali rams. To compare the predictive capability of the algorithms, biometrical data regarding body measurements (body length, withers height, and heart girth) and testicular measurements (testicular length, scrotal length, and scrotal circumference) of Mengali rams were evaluated for predicting live body weight using goodness-of-fit criteria. In addition, age was considered as a continuous independent variable. In this context, the MARS data mining algorithm was used for the first time to predict body weight in two forms, without (MARS_1) and with (MARS_2) interaction terms. The order of predictive accuracy of the algorithms was found to be CART > CHAID ≈ Exhaustive CHAID > MARS_2 > MARS_1 > RBF > MLP. Moreover, all tested algorithms provided strong predictive accuracy for estimating body weight. However, MARS is the only algorithm that generated a prediction equation for body weight. It is hoped that these results present a valuable contribution in terms of predicting body weight and describing the relationship between body weight and body and testicular measurements, helping to establish breed standards and conserve indigenous gene sources for Mengali sheep breeding, and thereby enabling more profitable and productive sheep production. Use of data mining algorithms is useful for revealing the relationship between body weight and testicular traits in describing breed standards of Mengali sheep. A cross-validated comparison of two of the named algorithm families is sketched below.
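
    The sketch compares only CART (via scikit-learn's DecisionTreeRegressor) and MLP (via MLPRegressor); MARS, CHAID and RBF networks are not in scikit-learn and would need separate packages. The predictor matrix and weights below are random placeholders for the real biometric data.

        # Sketch: cross-validated R^2 comparison of two regression algorithms.
        import numpy as np
        from sklearn.tree import DecisionTreeRegressor
        from sklearn.neural_network import MLPRegressor
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        X = rng.normal(size=(120, 7))                                  # placeholder body/testicular measurements + age
        y = X @ rng.normal(size=7) + rng.normal(scale=0.5, size=120)   # placeholder body weights

        models = {
            "CART": DecisionTreeRegressor(max_depth=4, random_state=0),
            "MLP": MLPRegressor(hidden_layer_sizes=(16,), max_iter=2000, random_state=0),
        }
        for name, model in models.items():
            r2 = cross_val_score(model, X, y, cv=5, scoring="r2")
            print(f"{name}: mean R^2 = {r2.mean():.2f} +/- {r2.std():.2f}")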

  1. The Ocean Colour Climate Change Initiative: III. A Round-Robin Comparison on In-Water Bio-Optical Algorithms

    Science.gov (United States)

    Brewin, Robert J.W.; Sathyendranath, Shubha; Muller, Dagmar; Brockmann, Carsten; Deschamps, Pierre-Yves; Devred, Emmanuel; Doerffer, Roland; Fomferra, Norman; Franz, Bryan; Grant, Mike

    2013-01-01

    Satellite-derived remote-sensing reflectance (Rrs) can be used for mapping biogeochemically relevant variables, such as the chlorophyll concentration and the Inherent Optical Properties (IOPs) of the water, at global scale for use in climate-change studies. Prior to generating such products, suitable algorithms have to be selected that are appropriate for the purpose. Algorithm selection needs to account for both qualitative and quantitative requirements. In this paper we develop an objective methodology designed to rank the quantitative performance of a suite of bio-optical models. The objective classification is applied using the NASA bio-Optical Marine Algorithm Dataset (NOMAD). Using in situ Rrs as input to the models, the performance of eleven semianalytical models, as well as five empirical chlorophyll algorithms and an empirical diffuse attenuation coefficient algorithm, is ranked for spectrally-resolved IOPs, chlorophyll concentration and the diffuse attenuation coefficient at 489 nm. The sensitivity of the objective classification and the uncertainty in the ranking are tested using a Monte-Carlo approach (bootstrapping). Results indicate that the performance of the semi-analytical models varies depending on the product and wavelength of interest. For chlorophyll retrieval, empirical algorithms perform better than semi-analytical models, in general. The performance of these empirical models reflects either their immunity to scale errors or instrument noise in Rrs data, or simply that the data used for model parameterisation were not independent of NOMAD. Nonetheless, uncertainty in the classification suggests that the performance of some semi-analytical algorithms at retrieving chlorophyll is comparable with the empirical algorithms. For phytoplankton absorption at 443 nm, some semi-analytical models also perform with similar accuracy to an empirical model. We discuss the potential biases, limitations and uncertainty in the approach, as well as additional
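
    The bootstrap-based ranking idea can be sketched briefly. The error metric (RMSE of residuals on the same match-ups) and the resampling scheme below are illustrative assumptions, not the paper's exact objective classification, and the residual arrays stand in for real model-versus-NOMAD comparisons.

        # Sketch: rank candidate algorithms by RMSE and estimate ranking uncertainty
        # by bootstrapping the in situ match-ups.
        import numpy as np

        def rank_with_bootstrap(errors_by_model, n_boot=1000, seed=0):
            """errors_by_model: dict name -> array of residuals on the same match-ups."""
            rng = np.random.default_rng(seed)
            names = list(errors_by_model)
            resid = np.stack([errors_by_model[n] for n in names])   # (models, samples)
            n = resid.shape[1]
            rank_counts = np.zeros((len(names), len(names)), dtype=int)
            for _ in range(n_boot):
                idx = rng.integers(0, n, size=n)                    # resample match-ups
                rmse = np.sqrt((resid[:, idx] ** 2).mean(axis=1))
                for rank, model in enumerate(np.argsort(rmse)):
                    rank_counts[model, rank] += 1
            return names, rank_counts / n_boot                      # rank probabilities per model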

  2. Comparison of single distance phase retrieval algorithms by considering different object composition and the effect of statistical and structural noise.

    Science.gov (United States)

    Chen, R C; Rigon, L; Longo, R

    2013-03-25

    Phase retrieval is a technique for extracting quantitative phase information from X-ray propagation-based phase-contrast tomography (PPCT). In this paper, the performance of different single-distance phase retrieval algorithms is investigated. The algorithms are herein called the phase-attenuation duality Born Algorithm (PAD-BA), phase-attenuation duality Rytov Algorithm (PAD-RA), phase-attenuation duality Modified Bronnikov Algorithm (PAD-MBA), phase-attenuation duality Paganin algorithm (PAD-PA) and phase-attenuation duality Wu Algorithm (PAD-WA), respectively. They are all based on the phase-attenuation duality property and on weak absorption of the sample, and they employ only single-distance PPCT data. They are investigated via simulated noise-free PPCT data considering the fulfillment of the PAD property and weakly absorbing conditions, and with experimental PPCT data of a mixture sample containing absorbing and weakly absorbing materials, and of a polymer sample considering different degrees of statistical and structural noise. The simulation shows that all algorithms can quantitatively reconstruct the 3D refractive index of a quasi-homogeneous weakly absorbing object from noise-free PPCT data. When the weakly absorbing condition is violated, PAD-RA and PAD-PA/WA obtain better results than PAD-BA and PAD-MBA, as shown in both the simulation and the mixture sample results. When considering statistical noise, the contrast-to-noise ratio values decrease as the photon number is reduced. The structural noise study shows that the result is progressively corrupted by ring-like artifacts with the increase of structural noise (i.e. phantom thickness). PAD-RA and PAD-PA/WA attain better density resolution than PAD-BA and PAD-MBA in both the statistical and structural noise studies.
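
    A single-distance Paganin-type filter (the PAD-PA family mentioned above) can be sketched in a few lines under the homogeneous, weakly absorbing object assumption. The parameter names and values are illustrative, and this is a generic textbook form rather than the exact implementation compared in the paper.

        # Sketch of a Paganin-type single-distance phase retrieval filter.
        import numpy as np

        def paganin_filter(intensity, pixel_size, dist, delta, mu):
            """intensity: flat-field-corrected projection I/I0 at propagation distance `dist` (m).
            delta: refractive index decrement; mu: linear attenuation coefficient (1/m)."""
            ny, nx = intensity.shape
            ky = 2 * np.pi * np.fft.fftfreq(ny, d=pixel_size)    # angular spatial frequencies
            kx = 2 * np.pi * np.fft.fftfreq(nx, d=pixel_size)
            k2 = kx[None, :] ** 2 + ky[:, None] ** 2
            filt = 1.0 / (1.0 + dist * delta / mu * k2)          # low-pass duality filter
            smoothed = np.real(np.fft.ifft2(np.fft.fft2(intensity) * filt))
            return -np.log(np.clip(smoothed, 1e-8, None)) / mu   # projected thickness (m)

        # Example with illustrative parameters (1 um pixels, 1 m propagation distance).
        proj = 0.9 + 0.05 * np.random.default_rng(0).random((256, 256))
        thickness = paganin_filter(proj, pixel_size=1e-6, dist=1.0, delta=1e-7, mu=50.0)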

  3. Comparison of Performance between Genetic Algorithm and SCE-UA for Calibration of SCS-CN Surface Runoff Simulation

    OpenAIRE

    Jeon, Ji-Hong; Park, Chan-Gi; Engel, Bernard

    2014-01-01

    Global optimization methods linked with simulation models are widely used for automated calibration and serve as useful tools for searching for cost-effective alternatives for environmental management. A genetic algorithm (GA) and shuffled complex evolution (SCE-UA) algorithm were linked with the Long-Term Hydrologic Impact Assessment (L-THIA) model, which employs the curve number (SCS-CN) method. The performance of the two optimization methods was compared by automatically calibrating L-THI...

  4. Performance Comparison of GPU, DSP and FPGA implementations of image processing and computer vision algorithms in embedded systems

    OpenAIRE

    Fykse, Egil

    2013-01-01

    The objective of this thesis is to compare the suitability of FPGAs, GPUs and DSPs for digital image processing applications. Normalized cross-correlation is used as a benchmark, because this algorithm includes convolution, a common operation in image processing and elsewhere. Normalized cross-correlation is a template matching algorithm that is used to locate predefined objects in a scene image. Because the throughput of DSPs is low for efficient calculation of normalized cross-correlation, ...
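
    Normalized cross-correlation, the benchmark named above, is simple to state in code. The sketch below is a direct spatial-domain version kept deliberately slow and explicit; production implementations on GPU, DSP or FPGA typically use FFTs or running sums, but the arithmetic is the same.

        # Minimal spatial-domain normalized cross-correlation for template matching.
        import numpy as np

        def ncc_map(image, template):
            th, tw = template.shape
            t = template - template.mean()
            t_norm = np.sqrt((t ** 2).sum())
            out = np.zeros((image.shape[0] - th + 1, image.shape[1] - tw + 1))
            for i in range(out.shape[0]):
                for j in range(out.shape[1]):
                    w = image[i:i + th, j:j + tw]
                    wz = w - w.mean()
                    denom = np.sqrt((wz ** 2).sum()) * t_norm
                    out[i, j] = (wz * t).sum() / denom if denom > 0 else 0.0
            return out   # the peak location gives the best template match

        # Example: locate a small patch inside a larger image.
        img = np.random.default_rng(0).random((64, 64))
        patch = img[20:28, 30:38]
        res = ncc_map(img, patch)
        print(np.unravel_index(res.argmax(), res.shape))   # -> (20, 30)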

  5. Comparison of remote sensing algorithms for retrieval of suspended particulate matter concentration from reflectance in coastal waters

    Science.gov (United States)

    Freeman, Lauren A.; Ackleson, Steven G.; Rhea, William Joseph

    2017-10-01

    Suspended particulate matter (SPM) is a key environmental indicator for rivers, estuaries, and coastal waters, which can be calculated from remote sensing reflectance obtained by an airborne or satellite imager. Here, algorithms from prior studies are applied to a dataset of in-situ, at-surface hyperspectral remote sensing reflectance collected in three geographic regions representing different water types. These data show the optically inherent exponential nature of the relationship between reflectance and sediment concentration. However, linear models are also shown to provide a reasonable estimate of sediment concentration when utilized with care in conditions similar to those under which the algorithms were developed, particularly at lower SPM values (0 to 20 mg/L). Fifteen published SPM algorithms are tested, returning strong correlations of R2 > 0.7 and, in most cases, R2 > 0.8. Very low SPM values show weaker correlation with algorithm-calculated SPM that is not wavelength dependent. None of the tested algorithms performs well for high SPM values (>30 mg/L), with most algorithms underestimating SPM. A shift toward a smaller number of simple exponential or linear models relating satellite remote sensing reflectance to suspended sediment concentration, with regional consideration, will greatly aid larger spatiotemporal studies of suspended sediment trends.
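
    Fitting the two model forms discussed above (linear and exponential) to paired reflectance/SPM observations is straightforward with scipy. The synthetic data, coefficients, and starting values below are placeholders; they only illustrate the regression step, not any of the fifteen published algorithms.

        # Sketch: fit linear and exponential SPM-vs-reflectance models to sample data.
        import numpy as np
        from scipy.optimize import curve_fit

        rrs = np.linspace(0.002, 0.03, 40)   # remote sensing reflectance (1/sr), synthetic
        spm = 2.0 * np.exp(90.0 * rrs) + np.random.default_rng(1).normal(0, 0.5, rrs.size)

        linear = lambda r, a, b: a * r + b
        exponential = lambda r, a, b: a * np.exp(b * r)

        p_lin, _ = curve_fit(linear, rrs, spm)
        p_exp, _ = curve_fit(exponential, rrs, spm, p0=(1.0, 50.0))

        for name, model, p in [("linear", linear, p_lin), ("exponential", exponential, p_exp)]:
            resid = spm - model(rrs, *p)
            r2 = 1 - resid.var() / spm.var()
            print(f"{name}: params={np.round(p, 3)}, R^2={r2:.3f}")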

  6. A comparison of step-and-shoot leaf sequencing algorithms that eliminate tongue-and-groove effects

    International Nuclear Information System (INIS)

    Kamath, Srijit; Sahni, Sartaj; Ranka, Sanjay; Li, Jonathan; Palta, Jatinder

    2004-01-01

    The performances of three recently published leaf sequencing algorithms for step-and-shoot intensity-modulated radiation therapy delivery that eliminates tongue-and-groove underdosage are evaluated. Proofs are given to show that the algorithm of Que et al (2004 Phys. Med. Biol. 49 399-405) generates leaf sequences free of tongue-and-groove underdosage and interdigitation. However, the total beam-on times could be up to n times those of the sequences generated by the algorithms of Kamath et al (2004 Phys. Med. Biol. 49 N7-N19), which are optimal in beam-on time for unidirectional leaf movement under the same constraints, where n is the total number of involved leaf pairs. Using 19 clinical fluence matrices and 100 000 randomly generated 15 x 15 matrices, the average monitor units and number of segments of the leaf sequences generated using the algorithm of Que et al are about two to four times those generated by the algorithm of Kamath et al

  7. A comparison of three-dimensional nonequilibrium solution algorithms applied to hypersonic flows with stiff chemical source terms

    Science.gov (United States)

    Palmer, Grant; Venkatapathy, Ethiraj

    1993-01-01

    Three solution algorithms, explicit underrelaxation, point implicit, and lower upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight order of magnitude drop in the L2 norm of the energy residual in 1/3 to 1/2 the Cray C-90 computer time as compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40 the performance of the LUSGS algorithm deteriorates to the point it is out-performed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.

  8. A comparison of step-and-shoot leaf sequencing algorithms that eliminate tongue-and-groove effects

    Energy Technology Data Exchange (ETDEWEB)

    Kamath, Srijit [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Sahni, Sartaj [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Ranka, Sanjay [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Li, Jonathan [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Palta, Jatinder [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States)

    2004-07-21

    The performances of three recently published leaf sequencing algorithms for step-and-shoot intensity-modulated radiation therapy delivery that eliminates tongue-and-groove underdosage are evaluated. Proofs are given to show that the algorithm of Que et al (2004 Phys. Med. Biol. 49 399-405) generates leaf sequences free of tongue-and-groove underdosage and interdigitation. However, the total beam-on times could be up to n times those of the sequences generated by the algorithms of Kamath et al (2004 Phys. Med. Biol. 49 N7-N19), which are optimal in beam-on time for unidirectional leaf movement under the same constraints, where n is the total number of involved leaf pairs. Using 19 clinical fluence matrices and 100 000 randomly generated 15 x 15 matrices, the average monitor units and number of segments of the leaf sequences generated using the algorithm of Que et al are about two to four times those generated by the algorithm of Kamath et al.

  9. Comparison of Biomass and Lipid Production under Ambient Carbon Dioxide Vigorous Aeration and 3% Carbon Dioxide Condition Among the Lead Candidate Chlorella Strains Screened by Various Photobioreactor Scales

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Naoko [Univ. of Nebraska, Lincoln, NE (United States); Barnes, Austin [Univ. of Nebraska, Lincoln, NE (United States); Jensen, Travis [Univ. of Nebraska, Lincoln, NE (United States); Noel, Eric [Univ. of Nebraska, Lincoln, NE (United States); Andlay, Gunjan [Synaptic Research, Baltimore, MD (United States); Rosenberg, Julian N. [Johns Hopkins Univ., Baltimore, MD (United States); Betenbaugh, Michael J. [Johns Hopkins Univ., Baltimore, MD (United States); Guarnieri, Michael T. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Oyler, George A. [Univ. of Nebraska, Lincoln, NE (United States); Johns Hopkins Univ., Baltimore, MD (United States); Synaptic Research, Baltimore, MD (United States)

    2015-09-01

    Chlorella species from the UTEX collection, classified by rDNA-based phylogenetic analysis, were screened based on biomass and lipid production in different scales and modes of culture. Lead candidate strains of C. sorokiniana UTEX 1230 and C. vulgaris UTEX 395 and 259 were compared between conditions of vigorous aeration with filtered atmospheric air and 3% CO2 shake-flask cultivation. We found that UTEX 1230 produced 2 times more biomass, reaching 652 mg L-1 dry weight, under both ambient CO2 vigorous aeration and 3% CO2 conditions, while the biomass of UTEX 395 and 259 under 3% CO2 increased to 3 times more, reaching 863 mg L-1 dry weight, compared with ambient CO2 vigorous aeration. The triacylglycerol contents of UTEX 395 and 259 increased more than 30 times to 30% dry weight with 3% CO2, indicating that additional CO2 is essential for both biomass and lipid accumulation in UTEX 395 and 259.

  10. Development of a neuromedin U-human serum albumin conjugate as a long-acting candidate for the treatment of obesity and diabetes. Comparison with the PEGylated peptide.

    Science.gov (United States)

    Neuner, Philippe; Peier, Andrea M; Talamo, Fabio; Ingallinella, Paolo; Lahm, Armin; Barbato, Gaetano; Di Marco, Annalise; Desai, Kunal; Zytko, Karolina; Qian, Ying; Du, Xiaobing; Ricci, Davide; Monteagudo, Edith; Laufer, Ralph; Pocai, Alessandro; Bianchi, Elisabetta; Marsh, Donald J; Pessi, Antonello

    2014-01-01

    Neuromedin U (NMU) is an endogenous peptide implicated in the regulation of feeding, energy homeostasis, and glycemic control, which is being considered for the therapy of obesity and diabetes. A key liability of NMU as a therapeutic is its very short half-life in vivo. We show here that conjugation of NMU to human serum albumin (HSA) yields a compound with long circulatory half-life, which maintains full potency at both the peripheral and central NMU receptors. Initial attempts to conjugate NMU via the prevalent strategy of reacting a maleimide derivative of the peptide with the free thiol of Cys34 of HSA met with limited success, because the resulting conjugate was unstable in vivo. Use of a haloacetyl derivative of the peptide led instead to the formation of a metabolically stable conjugate. HSA-NMU displayed long-lasting, potent anorectic, and glucose-normalizing activity. When compared side by side with a previously described PEG conjugate, HSA-NMU proved superior on a molar basis. Collectively, our results reinforce the notion that NMU-based therapeutics are promising candidates for the treatment of obesity and diabetes. Copyright © 2013 European Peptide Society and John Wiley & Sons, Ltd.

  11. The detectability of hepatic metastases in candidates of radiofrequency ablation: comparison for helical CT scanning and late-phase pulse-inversion harmonic imaging

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kang Won; Yoon, Kwon Ha; Kim, Eun A; Park, Ki Han; Juhng, Seon Kwan; Won, Jong Jin [School of Medicine, Wonkwang Univ., Iksan (Korea, Republic of)

    2002-02-01

    To compare dual-phase helical CT and pulse-inversion harmonic US using microbubble contrast agents in the detection of hepatic metastases prior to radiofrequency (RF) ablation. Twenty-one patients in whom hepatic metastases from colorectal cancer had been diagnosed by dual-phase CT scanning and who were considered to be candidates for RF ablation underwent pulse-inversion harmonic US examination. Images were obtained 5 minutes after the bolus injection of the microbubble contrast agent SH U 508 A (4.0 g, 300 mg/mL). The number of metastatic tumors revealed by CT and US was determined, and the findings were statistically analysed. The influence of the results of US examination on treatment planning was also evaluated. In the 21 patients, 48 metastatic lesions were detected by helical CT, and 56 lesions by US. These eight additional lesions revealed by US occurred in six patients (29%), and their diameter was 3-13 (mean, 7.2) mm. In three of these patients, RF ablation could not be performed, while in the other three, the additional lesions were ablated. Pulse-inversion harmonic US imaging using microbubble contrast agents may depict small hepatic metastatic tumors that are not apparent at CT. US therefore appears to be useful in the planning of treatment prior to the RF ablation of hepatic metastases.

  12. Species-specific audio detection: a comparison of three template-based detection algorithms using random forests

    Directory of Open Access Journals (Sweden)

    Carlos J. Corrada Bravo

    2017-04-01

    We developed a web-based, cloud-hosted system that allows users to archive, listen to, visualize, and annotate recordings. The system also provides tools to convert these annotations into datasets that can be used to train a computer to detect the presence or absence of a species. The algorithm used by the system was selected after comparing the accuracy and efficiency of three variants of a template-based detection algorithm. The algorithm computes a similarity vector by comparing a template of a species call with time increments across the spectrogram. Statistical features are extracted from this vector and used as input for a Random Forest classifier that predicts presence or absence of the species in the recording. The fastest algorithm variant had the highest average accuracy and specificity; therefore, it was implemented in the ARBIMON web-based system.
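
    The pipeline described above (template sliding, similarity vector, statistical features, Random Forest) can be sketched end to end. The spectrograms, template, labels, and chosen feature statistics below are placeholders assumed for illustration; only the overall structure mirrors the description.

        # Sketch of template-based detection feeding a Random Forest classifier.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        def similarity_vector(spectrogram, template):
            """Correlation between the template and each time increment of the spectrogram."""
            tw = template.shape[1]
            flat_t = (template - template.mean()).ravel()
            sims = []
            for start in range(spectrogram.shape[1] - tw + 1):
                win = spectrogram[:, start:start + tw]
                flat_w = (win - win.mean()).ravel()
                denom = np.linalg.norm(flat_w) * np.linalg.norm(flat_t)
                sims.append(flat_w @ flat_t / denom if denom > 0 else 0.0)
            return np.array(sims)

        def features(sim):
            """A few summary statistics of the similarity vector (illustrative choice)."""
            return [sim.max(), sim.mean(), sim.std(), np.percentile(sim, 90), sim.argmax() / sim.size]

        rng = np.random.default_rng(0)
        template = rng.random((32, 10))                       # placeholder species-call template
        recordings = [rng.random((32, 200)) for _ in range(40)]
        labels = rng.integers(0, 2, size=40)                  # 1 = species present (placeholder)
        X = np.array([features(similarity_vector(s, template)) for s in recordings])
        clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
        print(clf.predict(X[:5]))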

  13. Preliminary Structural Design Using Topology Optimization with a Comparison of Results from Gradient and Genetic Algorithm Methods

    Science.gov (United States)

    Burt, Adam O.; Tinker, Michael L.

    2014-01-01

    In this paper, genetic-algorithm-based and gradient-based topology optimization are presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods, which provide major weight savings by addressing structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and therefore were proven to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.

  14. Comparison of Dose Distributions With TG-43 and Collapsed Cone Convolution Algorithms Applied to Accelerated Partial Breast Irradiation Patient Plans

    Energy Technology Data Exchange (ETDEWEB)

    Thrower, Sara L., E-mail: slloupot@mdanderson.org [The University of Texas Graduate School of Biomedical Sciences at Houston, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Shaitelman, Simona F.; Bloom, Elizabeth [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Salehpour, Mohammad; Gifford, Kent [Department of Radiation Physics, The University of Texas Graduate School of Biomedical Sciences at Houston, The University of Texas MD Anderson Cancer Center, Houston, Texas (United States)

    2016-08-01

    Purpose: To compare the treatment plans for accelerated partial breast irradiation calculated by the new commercially available collapsed cone convolution (CCC) and current standard TG-43–based algorithms for 50 patients treated at our institution with either a Strut-Adjusted Volume Implant (SAVI) or Contura device. Methods and Materials: We recalculated target coverage, volume of highly dosed normal tissue, and dose to organs at risk (ribs, skin, and lung) with each algorithm. For 1 case an artificial air pocket was added to simulate 10% nonconformance. We performed a Wilcoxon signed rank test to determine the median differences in the clinical indices V90, V95, V100, V150, V200, and highest-dosed 0.1 cm³ and 1.0 cm³ of rib, skin, and lung between the two algorithms. Results: The CCC algorithm calculated lower values on average for all dose-volume histogram parameters. Across the entire patient cohort, the median difference in the clinical indices calculated by the 2 algorithms was <10% for dose to organs at risk, <5% for target volume coverage (V90, V95, and V100), and <4 cm³ for dose to normal breast tissue (V150 and V200). No discernible difference was seen in the nonconformance case. Conclusions: We found that, on average over our patient population, CCC calculated lower doses (by <10%) than TG-43. These results should inform clinicians as they prepare for the transition to heterogeneous dose calculation algorithms and determine whether clinical tolerance limits warrant modification.

  15. SU-F-J-88: Comparison of Two Deformable Image Registration Algorithms for CT-To-CT Contour Propagation

    Energy Technology Data Exchange (ETDEWEB)

    Gopal, A; Xu, H; Chen, S [University of Maryland School of Medicine, Columbia, MD (United States)

    2016-06-15

    Purpose: To compare the contour propagation accuracy of two deformable image registration (DIR) algorithms in the Raystation treatment planning system – the "Hybrid" algorithm based on image intensities and anatomical information; and the "Biomechanical" algorithm based on linear anatomical elasticity and finite element modeling. Methods: Both DIR algorithms were used for CT-to-CT deformation for 20 lung radiation therapy patients who underwent treatment plan revisions. Deformation accuracy was evaluated using landmark tracking to measure the target registration error (TRE) and inverse consistency error (ICE). The deformed contours were also evaluated against physician-drawn contours using Dice similarity coefficients (DSC). Contour propagation was qualitatively assessed using a visual quality score (VQS) assigned by physicians and a refinement quality score (RQS). Results: Both algorithms showed similar ICE (< 1.5 mm), but the hybrid DIR (TRE = 3.2 mm) performed better than the biomechanical DIR (TRE = 4.3 mm) with landmark tracking. Both algorithms had comparable DSC (DSC > 0.9 for lungs, > 0.85 for heart, > 0.8 for liver) and similar qualitative assessments (VQS < 0.35, RQS > 0.75 for lungs). When anatomical structures were used to control the deformation, the DSC improved more significantly for the biomechanical DIR compared to the hybrid DIR, while the VQS and RQS improved only for the controlling structures. However, while the inclusion of controlling structures improved the TRE for the hybrid DIR, it increased the TRE for the biomechanical DIR. Conclusion: The hybrid DIR was found to perform slightly better than the biomechanical DIR based on lower TRE while the DSC, VQS, and RQS studies yielded comparable results for both. The use of controlling structures showed considerable improvement in the hybrid DIR results and is recommended for clinical use in
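
    The Dice similarity coefficient used above to score propagated contours against physician-drawn contours is a one-line formula; the sketch below uses hypothetical boolean masks on a shared grid.

        # Sketch: Dice similarity coefficient between two contour masks.
        import numpy as np

        def dice(mask_a, mask_b):
            """DSC = 2|A ∩ B| / (|A| + |B|); 1.0 means perfect overlap."""
            a, b = np.asarray(mask_a, bool), np.asarray(mask_b, bool)
            denom = a.sum() + b.sum()
            return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

        # Example with toy 2-D masks: two shifted 5x5 squares overlap with DSC = 0.64.
        a = np.zeros((10, 10), bool); a[2:7, 2:7] = True
        b = np.zeros((10, 10), bool); b[3:8, 3:8] = True
        print(round(dice(a, b), 3))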

  16. Comparison of dose calculation algorithms in phantoms with lung equivalent heterogeneities under conditions of lateral electronic disequilibrium

    International Nuclear Information System (INIS)

    Carrasco, P.; Jornet, N.; Duch, M.A.; Weber, L.; Ginjaume, M.; Eudaldo, T.; Jurado, D.; Ruiz, A.; Ribas, M.

    2004-01-01

    An extensive set of benchmark measurements of PDDs and beam profiles was performed in a heterogeneous layer phantom, including a lung equivalent heterogeneity, by means of several detectors, and compared against the dose values predicted by different calculation algorithms in two treatment planning systems. PDDs were measured with TLDs, plane parallel and cylindrical ionization chambers, and beam profiles with films. Additionally, Monte Carlo simulations by means of the PENELOPE code were performed. Four different field sizes (10x10, 5x5, 2x2, and 1x1 cm²) and two lung equivalent materials (CIRS, ρ_e^w = 0.195, and St. Bartholomew Hospital, London, ρ_e^w = 0.244-0.322) were studied. The performance of four correction-based algorithms and one based on convolution-superposition was analyzed. The correction-based algorithms were the Batho, the Modified Batho, and the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system and the TMS Pencil Beam from the Helax-TMS (Nucletron) treatment planning system. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. The only studied calculation methods that correlated successfully with the measured values, within 2% on average inside all media, were the Collapsed Cone and the Monte Carlo simulation. The biggest difference between the predicted and the delivered dose on the beam axis was found for the EqTAR algorithm inside the CIRS lung equivalent material in a 2x2 cm² 18 MV x-ray beam. In these conditions, the average and maximum differences against the TLD measurements were 32% and 39%, respectively. In the water-equivalent part of the phantom, every algorithm correctly predicted the dose (within 2%) everywhere except very close to the interfaces, where differences up to 24% were found for 2x2 cm² 18 MV photon beams. Consistent values were found between the reference detector (ionization chamber in water and TLD in lung) and Monte Carlo simulations, yielding minimal differences (0

  17. Comparison of different image analysis algorithms on MRI to predict physico-chemical and sensory attributes of loin

    DEFF Research Database (Denmark)

    Caballero, Daniel; Caro, Andrés; Dahl, Anders Bjorholm

    2018-01-01

    …physico-chemical and sensory analysis. CFA showed only a weak relationship with the quality parameters of the loins, while the remaining algorithms achieved correlation coefficients higher than 0.5; OPFTA reached the highest correlation coefficients in all cases except for the L* color coordinate, for which GLCM obtained the highest correlation coefficient. These high correlation coefficients confirm the new algorithm as an alternative to the other computer vision approaches for computing the physico-chemical and sensory parameters of meat products in a non-destructive and efficient way.

  18. Comparison and optimization of in silico algorithms for predicting the pathogenicity of sodium channel variants in epilepsy.

    Science.gov (United States)

    Holland, Katherine D; Bouley, Thomas M; Horn, Paul S

    2017-07-01

    Variants in the neuronal voltage-gated sodium channel α-subunit genes SCN1A, SCN2A, and SCN8A are common in early onset epileptic encephalopathies and other autosomal dominant childhood epilepsy syndromes. However, in clinical practice, missense variants are often classified as variants of uncertain significance when heritability cannot be determined. Genetic testing reports often include results of computational tests to estimate pathogenicity and the frequency of that variant in population-based databases. The objective of this work was to enhance clinicians' understanding of results by (1) determining how effectively computational algorithms predict epileptogenicity of sodium channel (SCN) missense variants; (2) optimizing their predictive capabilities; and (3) determining if epilepsy-associated SCN variants are present in population-based databases. This will help clinicians better understand indeterminate SCN test results in people with epilepsy. Pathogenic, likely pathogenic, and benign variants in SCNs were identified using databases of sodium channel variants. Benign variants were also identified from population-based databases. Eight algorithms commonly used to predict pathogenicity were compared. In addition, logistic regression was used to determine if a combination of algorithms could better predict pathogenicity. Based on American College of Medical Genetics criteria, 440 variants were classified as pathogenic or likely pathogenic and 84 were classified as benign or likely benign. Twenty-eight variants previously associated with epilepsy were present in population-based gene databases. The output provided by most computational algorithms had a high sensitivity but low specificity, with an accuracy of 0.52-0.77. Accuracy could be improved by adjusting the threshold for pathogenicity. Using this adjustment, the Mendelian Clinically Applicable Pathogenicity (M-CAP) algorithm had an accuracy of 0.90 and a
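
    Both ideas above, combining several per-algorithm scores with logistic regression and tuning the decision threshold rather than using the default, can be sketched as follows. The scores and labels are random placeholders, not real variant data, and the Youden-index threshold choice is an assumed illustration of "adjusting the threshold."

        # Sketch: combine in silico predictor scores and adjust the pathogenicity threshold.
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_curve

        rng = np.random.default_rng(0)
        scores = rng.random((300, 8))                 # eight predictor scores per variant (placeholder)
        labels = (scores.mean(axis=1) + rng.normal(0, 0.1, 300) > 0.55).astype(int)

        clf = LogisticRegression(max_iter=1000).fit(scores, labels)
        prob = clf.predict_proba(scores)[:, 1]

        # Pick the threshold maximizing Youden's J (sensitivity + specificity - 1).
        fpr, tpr, thresholds = roc_curve(labels, prob)
        best = thresholds[np.argmax(tpr - fpr)]
        pred = (prob >= best).astype(int)
        print(f"chosen threshold = {best:.2f}, accuracy = {(pred == labels).mean():.2f}")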

  19. Dosimetric evaluation of a commercial proton spot scanning Monte-Carlo dose algorithm: comparisons against measurements and simulations.

    Science.gov (United States)

    Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R; St James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles

    2017-09-12

    RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within  ±3% at all points and a range within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and  >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within  ±3% and distal fall-off to within 2

  20. Dosimetric evaluation of a commercial proton spot scanning Monte-Carlo dose algorithm: comparisons against measurements and simulations

    Science.gov (United States)

    Saini, Jatinder; Maes, Dominic; Egan, Alexander; Bowen, Stephen R.; St. James, Sara; Janson, Martin; Wong, Tony; Bloch, Charles

    2017-10-01

    RaySearch Americas Inc. (NY) has introduced a commercial Monte Carlo dose algorithm (RS-MC) for routine clinical use in proton spot scanning. In this report, we provide a validation of this algorithm against phantom measurements and simulations in the GATE software package. We also compared the performance of the RayStation analytical algorithm (RS-PBA) against the RS-MC algorithm. A beam model (G-MC) for a spot scanning gantry at our proton center was implemented in the GATE software package. The model was validated against measurements in a water phantom and was used for benchmarking the RS-MC. Validation of the RS-MC was performed in a water phantom by measuring depth doses and profiles for three spread-out Bragg peak (SOBP) beams with normal incidence, an SOBP with oblique incidence, and an SOBP with a range shifter and large air gap. The RS-MC was also validated against measurements and simulations in heterogeneous phantoms created by placing lung or bone slabs in a water phantom. Lateral dose profiles near the distal end of the beam were measured with a microDiamond detector and compared to the G-MC simulations, RS-MC and RS-PBA. Finally, the RS-MC and RS-PBA were validated against measured dose distributions in an Alderson-Rando (AR) phantom. Measurements were made using Gafchromic film in the AR phantom and compared to doses using the RS-PBA and RS-MC algorithms. For SOBP depth doses in a water phantom, all three algorithms matched the measurements to within  ±3% at all points and a range within 1 mm. The RS-PBA algorithm showed up to a 10% difference in dose at the entrance for the beam with a range shifter and  >30 cm air gap, while the RS-MC and G-MC were always within 3% of the measurement. For an oblique beam incident at 45°, the RS-PBA algorithm showed up to 6% local dose differences and broadening of distal fall-off by 5 mm. Both the RS-MC and G-MC accurately predicted the depth dose to within  ±3% and distal fall-off to within 2

  1. Identification of Patients with Statin Intolerance in a Managed Care Plan: A Comparison of 2 Claims-Based Algorithms.

    Science.gov (United States)

    Bellows, Brandon K; Sainski-Nguyen, Amy M; Olsen, Cody J; Boklage, Susan H; Charland, Scott; Mitchell, Matthew P; Brixner, Diana I

    2017-09-01

    While statins are safe and efficacious, some patients may experience statin intolerance or treatment-limiting adverse events. Identifying patients with statin intolerance may allow optimal management of cardiovascular event risk through other strategies. Recently, an administrative claims data (ACD) algorithm was developed to identify patients with statin intolerance and validated against electronic medical records. However, how this algorithm compared with perceptions of statin intolerance by integrated delivery networks remains largely unknown. To determine the concurrent validity of an algorithm developed by a regional integrated delivery network multidisciplinary panel (MP) and a published ACD algorithm in identifying patients with statin intolerance. The MP consisted of 3 physicians and 2 pharmacists with expertise in cardiology, internal medicine, and formulary management. The MP algorithm used pharmacy and medical claims to identify patients with statin intolerance, classifying them as having statin intolerance if they met any of the following criteria: (a) medical claim for rhabdomyolysis, (b) medical claim for muscle weakness, (c) an outpatient medical claim for creatine kinase assay, (d) fills for ≥ 2 different statins excluding dose increases, (e) decrease in statin dose, or (f) discontinuation of a statin with a subsequent fill for a nonstatin lipid-lowering therapy. The validated ACD algorithm identified statin intolerance as absolute intolerance with rhabdomyolysis; absolute intolerance without rhabdomyolysis (i.e., other adverse events); or as dose titration intolerance. Adult patients (aged ≥ 18 years) from the integrated delivery network with at least 1 prescription fill for a statin between January 1, 2011, and December 31, 2012 (first fill defined the index date) were identified. Patients with ≥ 1 year pre- and ≥ 2 years post-index continuous enrollment and no statin prescription fills in the pre-index period were included. The MP and
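
    The MP rule set described above is a simple disjunction of claims-based criteria. A minimal sketch of how such a rule could be encoded is shown below; the field names (e.g. rhabdomyolysis_claim, statins_filled) are hypothetical placeholders rather than the study's actual data dictionary.

```python
# Sketch of the multidisciplinary-panel (MP) claims rule described above.
# A patient is flagged as statin intolerant if ANY criterion (a)-(f) is met.
# Field names are illustrative placeholders, not the study's data dictionary.

def mp_statin_intolerance(patient: dict) -> bool:
    criteria = [
        patient.get("rhabdomyolysis_claim", False),           # (a)
        patient.get("muscle_weakness_claim", False),           # (b)
        patient.get("outpatient_ck_assay_claim", False),       # (c)
        len(set(patient.get("statins_filled", []))) >= 2,      # (d) >= 2 different statins
        patient.get("statin_dose_decreased", False),           # (e)
        patient.get("statin_discontinued", False)
        and patient.get("nonstatin_lipid_therapy_fill", False),  # (f)
    ]
    return any(criteria)

# Example: a patient who filled two different statins is flagged.
example = {"statins_filled": ["atorvastatin", "rosuvastatin"]}
print(mp_statin_intolerance(example))  # True
```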

  2. A performance comparison of multi-objective optimization algorithms for solving nearly-zero-energy-building design problems

    NARCIS (Netherlands)

    Hamdy, M.; Nguyen, A.T. (Anh Tuan); Hensen, J.L.M.

    2016-01-01

    Integrated building design is inherently a multi-objective optimization problem where two or more conflicting objectives must be minimized and/or maximized concurrently. Many multi-objective optimization algorithms have been developed; however few of them are tested in solving building design

  3. A comparison of global optimization algorithms with standard benchmark functions and real-world applications using Energy Plus

    Energy Technology Data Exchange (ETDEWEB)

    Kamph, Jerome Henri; Robinson, Darren; Wetter, Michael

    2009-09-01

    There is an increasing interest in the use of computer algorithms to identify combinations of parameters which optimise the energy performance of buildings. For such problems, the objective function can be multi-modal and needs to be approximated numerically using building energy simulation programs. As these programs contain iterative solution algorithms, they introduce discontinuities in the numerical approximation to the objective function. Metaheuristics often work well for such problems, but their convergence to a global optimum cannot be established formally. Moreover, different algorithms tend to be suited to particular classes of optimization problems. To shed light on this issue we compared the performance of two metaheuristics, the hybrid CMA-ES/HDE and the hybrid PSO/HJ, in minimizing standard benchmark functions and real-world building energy optimization problems of varying complexity. From this we find that the CMA-ES/HDE performs well on more complex objective functions, but that the PSO/HJ more consistently identifies the global minimum for simpler objective functions. Both identified similar values in the objective functions arising from energy simulations, but with different combinations of model parameters. This may suggest that the objective function is multi-modal. The algorithms also correctly identified some non-intuitive parameter combinations that were caused by a simplified control sequence of the building energy system that does not represent actual practice, further reinforcing their utility.

  4. Predicting Student Academic Performance: A Comparison of Two Meta-Heuristic Algorithms Inspired by Cuckoo Birds for Training Neural Networks

    Directory of Open Access Journals (Sweden)

    Jeng-Fung Chen

    2014-10-01

    Predicting student academic performance with a high accuracy facilitates admission decisions and enhances educational services at educational institutions. This raises the need to propose a model that predicts student performance, based on the results of standardized exams, including university entrance exams, high school graduation exams, and other influential factors. In this study, an approach to the problem based on the artificial neural network (ANN) with two meta-heuristic algorithms inspired by cuckoo birds and their lifestyle, namely, Cuckoo Search (CS) and the Cuckoo Optimization Algorithm (COA), is proposed. In particular, we used previous exam results and other factors, such as the location of the student’s high school and the student’s gender, as input variables, and predicted the student academic performance. The standard CS and standard COA were separately utilized to train the feed-forward network for prediction. The algorithms optimized the weights between layers and the biases of the neural network. The simulation results were then discussed and analyzed to investigate the prediction ability of the neural network trained by these two algorithms. The findings demonstrated that both CS and COA have potential in training ANN, and ANN-COA obtained slightly better results for predicting student academic performance in this case. It is expected that this work may be used to support student admission procedures and strengthen the service system in educational institutions.

  5. Comparison of feature and classifier algorithms for online automatic sleep staging based on a single EEG signal

    NARCIS (Netherlands)

    Radha, M.; Garcia Molina, G.; Poel, M.; Tononi, G.

    2014-01-01

    Automatic sleep staging on an online basis has recently emerged as a research topic motivated by fundamental sleep research. The aim of this paper is to find optimal signal processing methods and machine learning algorithms to achieve online sleep staging on the basis of a single EEG signal. The

  6. Comparison of algorithms to quantify muscle fatigue in upper limb muscles based on sEMG signals.

    Science.gov (United States)

    Kahl, Lorenz; Hofmann, Ulrich G

    2016-11-01

    This work compared the performance of six different fatigue detection algorithms quantifying muscle fatigue based on electromyographic signals. Surface electromyography (sEMG) was obtained by an experiment from upper arm contractions at three different load levels from twelve volunteers. Fatigue detection algorithms mean frequency (MNF), spectral moments ratio (SMR), the wavelet method WIRM1551, sample entropy (SampEn), fuzzy approximate entropy (fApEn) and recurrence quantification analysis (RQA%DET) were calculated. The resulting fatigue signals were compared considering the disturbances incorporated in fatiguing situations as well as according to the possibility to differentiate the load levels based on the fatigue signals. Furthermore we investigated the influence of the electrode locations on the fatigue detection quality and whether an optimized channel set is reasonable. The results of the MNF, SMR, WIRM1551 and fApEn algorithms fell close together. Due to the small amount of subjects in this study significant differences could not be found. In terms of disturbances the SMR algorithm showed a slight tendency to out-perform the others. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
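
    Two of the spectral indices compared above, mean frequency (MNF) and a spectral moments ratio (SMR), can be computed directly from a power spectral density estimate. The sketch below assumes a single-channel sEMG epoch sampled at 1 kHz, uses Welch's method, and takes moment orders of -1 and 5 for the SMR; these choices are assumptions for illustration, not necessarily the exact variants used in the study.

```python
# Illustrative computation of MNF and a spectral moments ratio (SMR) for one
# sEMG epoch. The choice of moment orders (-1 and 5) is an assumption.
import numpy as np
from scipy.signal import welch

def spectral_fatigue_indices(emg: np.ndarray, fs: float = 1000.0):
    f, pxx = welch(emg, fs=fs, nperseg=512)
    f, pxx = f[1:], pxx[1:]                  # drop the DC bin so f**-1 is defined
    mnf = np.sum(f * pxx) / np.sum(pxx)      # mean frequency
    smr = np.sum(f**(-1) * pxx) / np.sum(f**5 * pxx)  # spectral moments ratio
    return mnf, smr

rng = np.random.default_rng(0)
epoch = rng.standard_normal(4096)            # stand-in for a 4 s sEMG epoch
print(spectral_fatigue_indices(epoch))
```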

  7. Clustering Educational Digital Library Usage Data: A Comparison of Latent Class Analysis and K-Means Algorithms

    Science.gov (United States)

    Xu, Beijie; Recker, Mimi; Qi, Xiaojun; Flann, Nicholas; Ye, Lei

    2013-01-01

    This article examines clustering as an educational data mining method. In particular, two clustering algorithms, the widely used K-means and the model-based Latent Class Analysis, are compared, using usage data from an educational digital library service, the Instructional Architect (IA.usu.edu). Using a multi-faceted approach and multiple data…

  8. Multispectral imaging burn wound tissue classification system: a comparison of test accuracies between several common machine learning algorithms

    Science.gov (United States)

    Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.

    2016-03-01

    The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decisionmaking regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically-relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracy of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care
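
    The evaluation protocol described above, training several standard classifiers on the same labelled feature set and scoring them by repeated 10-fold cross-validation, can be sketched with scikit-learn as follows. The synthetic data stands in for the MSI reflectance features and the six histopathology-derived tissue labels; this illustrates the comparison loop only, not the authors' pipeline.

```python
# Sketch of comparing KNN, decision tree, LDA and QDA with 10-fold
# cross-validation, as in the evaluation protocol described above.
# Synthetic data stands in for MSI features and tissue labels.
from sklearn.datasets import make_classification
from sklearn.model_selection import cross_val_score, StratifiedKFold
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                            QuadraticDiscriminantAnalysis)

X, y = make_classification(n_samples=600, n_features=8, n_informative=6,
                           n_classes=6, n_clusters_per_class=1, random_state=0)

models = {
    "KNN": KNeighborsClassifier(),
    "DT": DecisionTreeClassifier(random_state=0),
    "LDA": LinearDiscriminantAnalysis(),
    "QDA": QuadraticDiscriminantAnalysis(),
}
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=cv)
    print(f"{name}: mean accuracy {scores.mean():.3f}")
```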

  9. Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: the beginning of the end for semi-quantification?

    Science.gov (United States)

    Taylor, Jonathan Christopher; Fenner, John Wesley

    2017-11-29

    Semi-quantification methods are well established in the clinic for assisted reporting of (I123) Ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. Machine learning algorithms were based on support vector machine classifiers with three different sets of features: (1) voxel intensities; (2) principal components of image voxel intensities; and (3) striatal binding ratios from the putamen and caudate. Semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: (1) minimum of age-matched controls; (2) mean minus 1/1.5/2 standard deviations from age-matched controls; (3) linear regression of normal patient data against age (minus 1/1.5/2 standard errors); and (4) selection of the optimum operating point on the receiver operator characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88 and 0.92 and between 0.95 and 0.97 for local and PPMI data respectively. Classification
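
    The semi-quantification arm of this comparison reduces to applying a normal limit to a striatal binding ratio (SBR), for example the "mean minus k standard deviations of age-matched controls" rule listed above. A minimal sketch follows; the SBR values are synthetic and k = 2 is just one of the cut-offs mentioned.

```python
# Sketch of one semi-quantification rule from the list above: classify a scan
# as abnormal if its putamen SBR falls below mean - k*SD of control SBRs.
import numpy as np

def sbr_normal_limit(control_sbr: np.ndarray, k: float = 2.0) -> float:
    return control_sbr.mean() - k * control_sbr.std(ddof=1)

def classify_sbr(sbr: float, limit: float) -> str:
    return "abnormal" if sbr < limit else "normal"

rng = np.random.default_rng(1)
controls = rng.normal(2.5, 0.3, size=50)     # synthetic control putamen SBRs
limit = sbr_normal_limit(controls, k=2.0)
for sbr in (2.6, 1.4):
    print(sbr, classify_sbr(sbr, limit))
```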

  10. A Comparison of Supervised Machine Learning Algorithms and Feature Vectors for MS Lesion Segmentation Using Multimodal Structural MRI

    Science.gov (United States)

    Sweeney, Elizabeth M.; Vogelstein, Joshua T.; Cuzzocreo, Jennifer L.; Calabresi, Peter A.; Reich, Daniel S.; Crainiceanu, Ciprian M.; Shinohara, Russell T.

    2014-01-01

    Machine learning is a popular method for mining and analyzing large collections of medical data. We focus on a particular problem from medical research, supervised multiple sclerosis (MS) lesion segmentation in structural magnetic resonance imaging (MRI). We examine the extent to which the choice of machine learning or classification algorithm and feature extraction function impacts the performance of lesion segmentation methods. As quantitative measures derived from structural MRI are important clinical tools for research into the pathophysiology and natural history of MS, the development of automated lesion segmentation methods is an active research field. Yet, little is known about what drives performance of these methods. We evaluate the performance of automated MS lesion segmentation methods, which consist of a supervised classification algorithm composed with a feature extraction function. These feature extraction functions act on the observed T1-weighted (T1-w), T2-weighted (T2-w) and fluid-attenuated inversion recovery (FLAIR) MRI voxel intensities. Each MRI study has a manual lesion segmentation that we use to train and validate the supervised classification algorithms. Our main finding is that the differences in predictive performance are due more to differences in the feature vectors, rather than the machine learning or classification algorithms. Features that incorporate information from neighboring voxels in the brain were found to increase performance substantially. For lesion segmentation, we conclude that it is better to use simple, interpretable, and fast algorithms, such as logistic regression, linear discriminant analysis, and quadratic discriminant analysis, and to develop the features to improve performance. PMID:24781953
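
    The paper's central finding, that neighbourhood-aware features matter more than the choice of classifier, can be illustrated by augmenting each voxel's intensity with a local mean and feeding the result to a simple logistic regression. The sketch below uses a toy synthetic 3D volume and is not the authors' feature set.

```python
# Toy sketch: voxel-wise features that include neighbourhood information
# (a local mean filter per modality) fed to logistic regression.
import numpy as np
from scipy.ndimage import uniform_filter
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
flair = rng.normal(size=(20, 20, 20))
flair[8:12, 8:12, 8:12] += 3.0                    # synthetic "lesion"
labels = np.zeros_like(flair, dtype=int)
labels[8:12, 8:12, 8:12] = 1

# Feature 1: raw intensity. Feature 2: mean over a 3x3x3 neighbourhood.
features = np.stack([flair.ravel(),
                     uniform_filter(flair, size=3).ravel()], axis=1)

clf = LogisticRegression(max_iter=1000).fit(features, labels.ravel())
print("training accuracy:", clf.score(features, labels.ravel()))
```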

  11. Uncertainty assessment and comparison of different dose algorithms used to evaluate a two element LiF:Mg,Ti TL personal dosemeter

    International Nuclear Information System (INIS)

    Stadtmann, H.; Hranitzky, F.C.

    2008-01-01

    This paper presents the results of an uncertainty assessment and comparison study of different dose algorithms used for evaluating our routine two-element TL whole body dosemeter. Due to the photon energy response of the two differently filtered LiF:Mg,Ti detector elements, the application of dose algorithms is necessary to assess the relevant photon doses over the rated energy range with an acceptable energy response. Three dose algorithms are designed to calculate the dose for the different dose equivalent quantities, i.e. the personal dose equivalents Hp(10) and Hp(0.07) and the photon dose equivalent Hx used for personal monitoring before the introduction of personal dose equivalent. Based on experimental results both for free-in-air calibration as well as calibration on the ISO water slab phantom (type test data), a detailed uncertainty analysis was performed by means of Monte Carlo simulation techniques. The uncertainty contribution of the individual detector element signals was taken into special consideration. (author)

  12. Using Hadoop MapReduce for Parallel Genetic Algorithms: A Comparison of the Global, Grid and Island Models.

    Science.gov (United States)

    Ferrucci, Filomena; Salza, Pasquale; Sarro, Federica

    2017-06-29

    The need to improve the scalability of Genetic Algorithms (GAs) has motivated the research on Parallel Genetic Algorithms (PGAs), and different technologies and approaches have been used. Hadoop MapReduce represents one of the most mature technologies to develop parallel algorithms. Based on the fact that parallel algorithms introduce communication overhead, the aim of the present work is to understand if, and possibly when, the parallel GAs solutions using Hadoop MapReduce show better performance than sequential versions in terms of execution time. Moreover, we are interested in understanding which PGA model can be most effective among the global, grid, and island models. We empirically assessed the performance of these three parallel models with respect to a sequential GA on a software engineering problem, evaluating the execution time and the achieved speedup. We also analysed the behaviour of the parallel models in relation to the overhead produced by the use of Hadoop MapReduce and the GAs' computational effort, which gives a more machine-independent measure of these algorithms. We exploited three problem instances to differentiate the computation load and three cluster configurations based on 2, 4, and 8 parallel nodes. Moreover, we estimated the costs of the execution of the experimentation on a potential cloud infrastructure, based on the pricing of the major commercial cloud providers. The empirical study revealed that the use of PGA based on the island model outperforms the other parallel models and the sequential GA for all the considered instances and clusters. Using 2, 4, and 8 nodes, the island model achieves an average speedup over the three datasets of 1.8, 3.4, and 7.0 times, respectively. Hadoop MapReduce has a set of different constraints that need to be considered during the design and the implementation of parallel algorithms. The overhead of data store (i.e., HDFS) accesses, communication, and latency requires solutions that reduce data store

  13. Stability of ferritic steel to higher doses: Survey of reactor pressure vessel steel data and comparison with candidate materials for future nuclear systems

    International Nuclear Information System (INIS)

    Blagoeva, D.T.; Debarberis, L.; Jong, M.; Pierick, P. ten

    2014-01-01

    This paper illustrates the potential of the well-known low-alloyed clean steels, extensively used in current light water Reactor Pressure Vessels (RPV), as a structural material for the new generation of nuclear systems. This option would provide, especially for large components, an affordable, easily accessible and technically more convenient solution in terms of manufacturing and joining techniques. A comprehensive comparison between several sets of surveillance and research data available for a number of RPV clean steels, for doses up to 1.5 dpa, and up to 12 dpa for 9%Cr steels, is carried out in order to evaluate the radiation stability of the currently used RPV clean steels even at higher doses. Based on the numerous data available, positive preliminary conclusions are drawn regarding the eventual use of clean RPV steels for the massive structural components of the new reactor systems. - Highlights: • Common embrittlement trend between RPV and advanced steels up to intermediate doses. • For doses >1.5 dpa, a damage rate saturation tendency is observed for RPV steels. • RPV steels might be conveniently utilised also outside their foreseen dose range

  14. An automated land-use mapping comparison of the Bayesian maximum likelihood and linear discriminant analysis algorithms

    Science.gov (United States)

    Tom, C. H.; Miller, L. D.

    1984-01-01

    The Bayesian maximum likelihood parametric classifier has been tested against the data-based formulation designated 'linear discriminant analysis', using the 'GLIKE' decision and 'CLASSIFY' classification algorithms in the Landsat Mapping System. Identical supervised training sets, USGS land use/land cover classes, and various combinations of Landsat image and ancillary geodata variables were used to compare the algorithms' thematic mapping accuracy on a single-date summer subscene, with a cellularized USGS land use map of the same time frame furnishing the ground truth reference. CLASSIFY, which accepts a priori class probabilities, is found to be more accurate than GLIKE, which assumes equal class occurrences, for all three mapping variable sets and both levels of detail. These results may be generalized to direct accuracy, time, cost, and flexibility advantages of linear discriminant analysis over Bayesian methods.
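
    The essential difference between the two classifiers, GLIKE assuming equal class occurrences versus CLASSIFY weighting classes by a priori probabilities, amounts to whether a log-prior term is added to each class's Gaussian log-likelihood. A hedged sketch of that decision rule (diagonal covariances for brevity, synthetic data) is given below.

```python
# Sketch of a Gaussian maximum-likelihood decision rule with and without
# a priori class probabilities (diagonal covariances for brevity).
import numpy as np

def fit_gaussians(X, y):
    stats = {}
    for c in np.unique(y):
        Xc = X[y == c]
        stats[c] = (Xc.mean(axis=0), Xc.var(axis=0) + 1e-6, len(Xc) / len(X))
    return stats

def classify(x, stats, use_priors):
    best, best_score = None, -np.inf
    for c, (mu, var, prior) in stats.items():
        loglik = -0.5 * np.sum(np.log(2 * np.pi * var) + (x - mu) ** 2 / var)
        score = loglik + (np.log(prior) if use_priors else 0.0)
        if score > best_score:
            best, best_score = c, score
    return best

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(0, 1, (90, 2)), rng.normal(2, 1, (10, 2))])
y = np.array([0] * 90 + [1] * 10)
stats = fit_gaussians(X, y)
x = np.array([1.0, 1.0])
print("equal priors:", classify(x, stats, False),
      "| a priori probabilities:", classify(x, stats, True))
```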

  15. An Empirical Comparison of Algorithms to Find Communities in Directed Graphs and Their Application in Web Data Analytics

    DEFF Research Database (Denmark)

    Agreste, Santa; De Meo, Pasquale; Fiumara, Giacomo

    2017-01-01

    Detecting communities in graphs is a fundamental tool to understand the structure of Web-based systems and predict their evolution. Many community detection algorithms are designed to process undirected graphs (i.e., graphs with bidirectional edges) but many graphs on the Web-e.g., microblogging ...... the best trade-off between accuracy and computational performance and, therefore, it has to be considered as a promising tool for Web Data Analytics purposes....

  16. Evaluation of heterogeneity dose distributions for Stereotactic Radiotherapy (SRT: comparison of commercially available Monte Carlo dose calculation with other algorithms

    Directory of Open Access Journals (Sweden)

    Takahashi Wataru

    2012-02-01

    Background: The purpose of this study was to compare dose distributions from three different algorithms with the x-ray Voxel Monte Carlo (XVMC) calculations, in actual computed tomography (CT) scans, for use in stereotactic radiotherapy (SRT) of small lung cancers. Methods: Slow CT scans of 20 patients were performed and the internal target volume (ITV) was delineated on Pinnacle3. All plans were first calculated with a scatter homogeneous mode (SHM), which is compatible with the Clarkson algorithm, using the Pinnacle3 treatment planning system (TPS). The planned dose was 48 Gy in 4 fractions. In a second step, the CT images, structures and beam data were exported to other treatment planning systems (TPSs). Collapsed cone convolution (CCC) from Pinnacle3, superposition (SP) from XiO, and XVMC from Monaco were used for recalculation. The dose distributions and the dose volume histograms (DVHs) were compared with each other. Results: The phantom test revealed that all algorithms could reproduce the measured data within 1%, except for the SHM with the inhomogeneous phantom. For the patient study, the SHM greatly overestimated the isocenter (IC) doses and the minimal dose received by 95% of the PTV (PTV95) compared to XVMC. The differences in mean doses were 2.96 Gy (6.17%) for IC and 5.02 Gy (11.18%) for PTV95. The DVHs and dose distributions with CCC and SP were in agreement with those obtained by XVMC. The average differences in IC doses between CCC and XVMC, and between SP and XVMC, were -1.14% (p = 0.17) and -2.67% (p = 0.0036), respectively. Conclusions: Our work clearly confirms that the actual practice of relying solely on a Clarkson algorithm may be inappropriate for SRT planning. Meanwhile, CCC and SP were close to the XVMC simulations and the actual dose distributions obtained in lung SRT.

  17. SU-F-J-88: Comparison of Two Deformable Image Registration Algorithms for CT-To-CT Contour Propagation

    International Nuclear Information System (INIS)

    Gopal, A; Xu, H; Chen, S

    2016-01-01

    Purpose: To compare the contour propagation accuracy of two deformable image registration (DIR) algorithms in the Raystation treatment planning system – the “Hybrid” algorithm based on image intensities and anatomical information; and the “Biomechanical” algorithm based on linear anatomical elasticity and finite element modeling. Methods: Both DIR algorithms were used for CT-to-CT deformation for 20 lung radiation therapy patients that underwent treatment plan revisions. Deformation accuracy was evaluated using landmark tracking to measure the target registration error (TRE) and inverse consistency error (ICE). The deformed contours were also evaluated against physician drawn contours using Dice similarity coefficients (DSC). Contour propagation was qualitatively assessed using a visual quality score assigned by physicians, and a refinement quality score (0 0.9 for lungs, > 0.85 for heart, > 0.8 for liver) and similar qualitative assessments (VQS 0.75 for lungs). When anatomical structures were used to control the deformation, the DSC improved more significantly for the biomechanical DIR compared to the hybrid DIR, while the VQS and RQS improved only for the controlling structures. However, while the inclusion of controlling structures improved the TRE for the hybrid DIR, it increased the TRE for the biomechanical DIR. Conclusion: The hybrid DIR was found to perform slightly better than the biomechanical DIR based on lower TRE while the DSC, VQS, and RQS studies yielded comparable results for both. The use of controlling structures showed considerable improvement in the hybrid DIR results and is recommended for clinical use in contour propagation.
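
    Two of the evaluation metrics used here are simple to state explicitly: the Dice similarity coefficient (DSC) between a propagated and a physician-drawn contour, and the target registration error (TRE) between corresponding landmarks. A minimal sketch with synthetic masks and landmark arrays follows; it illustrates the metrics only, not the Raystation algorithms.

```python
# Minimal sketch of two metrics used to evaluate deformable registration:
# Dice similarity coefficient (DSC) and target registration error (TRE).
import numpy as np

def dice(mask_a: np.ndarray, mask_b: np.ndarray) -> float:
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def tre(landmarks_fixed: np.ndarray, landmarks_deformed: np.ndarray) -> float:
    # Mean Euclidean distance between corresponding landmark pairs (N x 3).
    return float(np.linalg.norm(landmarks_fixed - landmarks_deformed, axis=1).mean())

a = np.zeros((50, 50), dtype=bool); a[10:30, 10:30] = True
b = np.zeros((50, 50), dtype=bool); b[12:32, 12:32] = True
print("DSC:", round(dice(a, b), 3))

fixed = np.array([[10.0, 20.0, 30.0], [15.0, 25.0, 35.0]])
deformed = fixed + np.array([[1.0, 0.0, 0.5], [0.0, 1.0, 0.0]])
print("TRE (mm):", round(tre(fixed, deformed), 3))
```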

  18. Comparison of two optimization algorithms for fuzzy finite element model updating for damage detection in a wind turbine blade

    Science.gov (United States)

    Turnbull, Heather; Omenzetter, Piotr

    2018-03-01

    Difficulties associated with current health monitoring and inspection practices, combined with the harsh, often remote, operational environments of wind turbines, highlight the requirement for a non-destructive evaluation system capable of remotely monitoring the current structural state of turbine blades. This research adopted a physics-based structural health monitoring methodology through calibration of a finite element model using inverse techniques. A 2.36 m blade from a 5 kW turbine was used as an experimental specimen, with operational modal analysis techniques utilised to realize the modal properties of the system. Modelling the experimental responses as fuzzy numbers using the sub-level technique, uncertainty in the response parameters was propagated back through the model and into the updating parameters. Initially, experimental responses of the blade were obtained, with a numerical model of the blade created and updated. Deterministic updating was carried out through formulation and minimisation of a deterministic objective function using both the firefly algorithm and the virus optimisation algorithm. Uncertainty in the experimental responses was modelled using triangular membership functions, allowing membership functions of the updating parameters (Young's modulus and shear modulus) to be obtained. The firefly algorithm and virus optimisation algorithm were again utilised, however, this time in the solution of fuzzy objective functions. This enabled the uncertainty associated with the updating parameters to be quantified. Varying damage location and severity was simulated experimentally through the addition of small masses to the structure intended to cause a structural alteration. A damaged model was created, modelling four variable-magnitude nonstructural masses at predefined points, and updated to provide a deterministic damage prediction and information in relation to the parameters' uncertainty via fuzzy updating.

  19. Performance comparison of independent component analysis algorithms for fetal cardiac signal reconstruction: a study on synthetic fMCG data

    International Nuclear Information System (INIS)

    Mantini, D; II, K E Hild; Alleva, G; Comani, S

    2006-01-01

    Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performances of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratios (SIR) were measured. The first involves averaging over all estimated components and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times
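
    The signal-to-interference ratio (SIR) used to score the separated components can be expressed as the power of the part of an estimated component explained by the known reference source over the power of the residual. The sketch below assumes the reference is available (as it is for synthetic data) and reports SIR in dB after optimal scaling; it illustrates the metric, not the paper's exact scoring code.

```python
# Sketch of a signal-to-interference ratio (SIR) in dB for one estimated
# component against a known reference source (synthetic-data setting).
import numpy as np

def sir_db(estimate: np.ndarray, reference: np.ndarray) -> float:
    # Project the estimate onto the reference (optimal scaling), then compare
    # the retained reference power with the residual interference power.
    ref = reference - reference.mean()
    est = estimate - estimate.mean()
    scale = np.dot(est, ref) / np.dot(ref, ref)
    target = scale * ref
    interference = est - target
    return 10.0 * np.log10(np.sum(target**2) / np.sum(interference**2))

rng = np.random.default_rng(3)
fetal = np.sin(2 * np.pi * 2.3 * np.linspace(0, 10, 5000))   # ~140 bpm proxy
estimate = fetal + 0.1 * rng.standard_normal(5000)
print(f"SIR: {sir_db(estimate, fetal):.1f} dB")
```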

  20. Performance comparison of six independent components analysis algorithms for fetal signal extraction from real fMCG data

    International Nuclear Information System (INIS)

    Hild, Kenneth E; Alleva, Giovanna; Nagarajan, Srikantan; Comani, Silvia

    2007-01-01

    In this study we compare the performance of six independent components analysis (ICA) algorithms on 16 real fetal magnetocardiographic (fMCG) datasets for the application of extracting the fetal cardiac signal. We also compare the extraction results for real data with the results previously obtained for synthetic data. The six ICA algorithms are FastICA, CubICA, JADE, Infomax, MRMI-SIG and TDSEP. The results obtained using real fMCG data indicate that the FastICA method consistently outperforms the others in regard to separation quality and that the performance of an ICA method that uses temporal information suffers in the presence of noise. These two results confirm the previous results obtained using synthetic fMCG data. There were also two notable differences between the studies based on real and synthetic data. The differences are that all six ICA algorithms are independent of gestational age and sensor dimensionality for synthetic data, but depend on gestational age and sensor dimensionality for real data. It is possible to explain these differences by assuming that the number of point sources needed to completely explain the data is larger than the dimensionality used in the ICA extraction

  1. Comparison of dose evaluation index by pencil beam convolution and anisotropic analytical algorithm in stereotactic radiotherapy for lung cancer

    International Nuclear Information System (INIS)

    Tachibana, Masayuki; Noguchi, Yoshitaka; Fukunaga, Jyunichi; Hirano, Naomi; Yoshidome, Satoshi; Hirose, Takaaki

    2009-01-01

    We previously studied dose distributions of stereotactic radiotherapy (SRT) for lung cancer. Our aim is to compare pencil beam convolution combined with the Batho power law inhomogeneity correction algorithm [PBC (BPL)] to the anisotropic analytical algorithm (AAA) using dose evaluation indices. There were significant differences in D95, planning target volume (PTV) mean dose, homogeneity index, conformity index, V10, and V5. The dose distributions inside the PTV calculated by PBC (BPL) were more uniform than those of AAA. There were no significant differences in V20 and mean dose of the total lung. There was no large difference for the whole lung. However, the surrounding high-dose region of the PTV became smaller with AAA. The difference in dose evaluation indices between PBC (BPL) and AAA widened as the CT value of the lung decreased. When the dose calculation algorithm is changed, it is necessary to consider differences in dose distributions compared with those of established practice. (author)
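
    The dose evaluation indices compared here (D95, mean dose, homogeneity index, Vx) are simple functionals of the voxel dose arrays. The sketch below uses one common set of definitions, e.g. D95 as the dose received by at least 95% of the PTV and HI = D5/D95; these exact formulas are assumptions, since several variants exist in the literature.

```python
# Illustrative dose-volume indices from voxel dose arrays (Gy). The specific
# definitions of D95, HI and Vx below are common variants assumed for this sketch.
import numpy as np

def d_x(ptv_dose: np.ndarray, x: float) -> float:
    """Dose received by at least x% of the volume (e.g. D95)."""
    return float(np.percentile(ptv_dose, 100.0 - x))

def v_x(organ_dose: np.ndarray, x_gy: float) -> float:
    """Fraction of the organ receiving at least x Gy (e.g. V20)."""
    return float((organ_dose >= x_gy).mean())

rng = np.random.default_rng(4)
ptv = rng.normal(48.0, 1.5, size=10000)      # synthetic PTV voxel doses
lung = rng.gamma(2.0, 4.0, size=100000)      # synthetic lung voxel doses

d95, d5 = d_x(ptv, 95), d_x(ptv, 5)
print("D95:", round(d95, 2), "Gy, HI (D5/D95):", round(d5 / d95, 3))
print("Lung V20:", round(100 * v_x(lung, 20.0), 1), "%")
```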

  2. Iterative image reconstruction algorithms in coronary CT angiography improve the detection of lipid-core plaque - a comparison with histology

    International Nuclear Information System (INIS)

    Puchner, Stefan B.; Ferencik, Maros; Maurovich-Horvat, Pal; Nakano, Masataka; Otsuka, Fumiyuki; Virmani, Renu; Kauczor, Hans-Ulrich; Hoffmann, Udo; Schlett, Christopher L.

    2015-01-01

    To evaluate whether iterative reconstruction algorithms improve the diagnostic accuracy of coronary CT angiography (CCTA) for detection of lipid-core plaque (LCP) compared to histology. CCTA and histological data were acquired from three ex vivo hearts. CCTA images were reconstructed using filtered back projection (FBP), adaptive-statistical (ASIR) and model-based (MBIR) iterative algorithms. Vessel cross-sections were co-registered between FBP/ASIR/MBIR and histology. Plaque area <60 HU was semiautomatically quantified in CCTA. LCP was defined by histology as fibroatheroma with a large lipid/necrotic core. Area under the curve (AUC) was derived from logistic regression analysis as a measure of diagnostic accuracy. Overall, 173 CCTA triplets (FBP/ASIR/MBIR) were co-registered with histology. LCP was present in 26 cross-sections. Average measured plaque area <60 HU was significantly larger in LCP compared to non-LCP cross-sections (mm²: 5.78 ± 2.29 vs. 3.39 ± 1.68 FBP; 5.92 ± 1.87 vs. 3.43 ± 1.62 ASIR; 6.40 ± 1.55 vs. 3.49 ± 1.50 MBIR; all p < 0.0001). AUC for detecting LCP was 0.803/0.850/0.903 for FBP/ASIR/MBIR and was significantly higher for MBIR compared to FBP (p = 0.01). MBIR increased sensitivity for detection of LCP by CCTA. Plaque area <60 HU in CCTA was associated with LCP in histology regardless of the reconstruction algorithm. However, MBIR demonstrated higher accuracy for detecting LCP, which may improve vulnerable plaque detection by CCTA. (orig.)

  3. Integrated Navigation System Design for Micro Planetary Rovers: Comparison of Absolute Heading Estimation Algorithms and Nonlinear Filtering

    Science.gov (United States)

    Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok

    2016-01-01

    This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293

  4. Risk-informed decision making in the nuclear industry: Application and effectiveness comparison of different genetic algorithm techniques

    International Nuclear Information System (INIS)

    Gjorgiev, Blaže; Kančev, Duško; Čepin, Marko

    2012-01-01

    Highlights: ► Multi-objective optimization of STI based on risk-informed decision making. ► Four different genetic algorithms (GAs) techniques are used as optimization tool. ► Advantages/disadvantages among the four different GAs applied are emphasized. - Abstract: The risk-informed decision making (RIDM) process, where insights gained from the probabilistic safety assessment are contemplated together with other engineering insights, is gaining an ever-increasing attention in the process industries. Increasing safety systems availability by applying RIDM is one of the prime goals for the authorities operating with nuclear power plants. Additionally, equipment ageing is gradually becoming a major concern in the process industries and especially in the nuclear industry, since more and more safety-related components are approaching or are already in their wear-out phase. A significant difficulty regarding the consideration of ageing effects on equipment (un)availability is the immense uncertainty the available equipment ageing data are associated to. This paper presents an approach for safety system unavailability reduction by optimizing the related test and maintenance schedule suggested by the technical specifications in the nuclear industry. Given the RIDM philosophy, two additional insights, i.e. ageing data uncertainty and test and maintenance costs, are considered along with unavailability insights gained from the probabilistic safety assessment for a selected standard safety system. In that sense, an approach for multi-objective optimization of the equipment surveillance test interval is proposed herein. Three different objective functions related to each one of the three different insights discussed above comprise the multi-objective nature of the optimization process. Genetic algorithm technique is utilized as an optimization tool. Four different types of genetic algorithms are utilized and consequently comparative analysis is conducted given the

  5. Comparison Of Semi-Automatic And Automatic Slick Detection Algorithms For Jiyeh Power Station Oil Spill, Lebanon

    Science.gov (United States)

    Osmanoglu, B.; Ozkan, C.; Sunar, F.

    2013-10-01

    After air strikes on July 14 and 15, 2006 the Jiyeh Power Station started leaking oil into the eastern Mediterranean Sea. The power station is located about 30 km south of Beirut and the slick covered about 170 km of coastline threatening the neighboring countries Turkey and Cyprus. Due to the ongoing conflict between Israel and Lebanon, cleaning efforts could not start immediately resulting in 12 000 to 15 000 tons of fuel oil leaking into the sea. In this paper we compare results from automatic and semi-automatic slick detection algorithms. The automatic detection method combines the probabilities calculated for each pixel from each image to obtain a joint probability, minimizing the adverse effects of atmosphere on oil spill detection. The method can readily utilize X-, C- and L-band data where available. Furthermore wind and wave speed observations can be used for a more accurate analysis. For this study, we utilize Envisat ASAR ScanSAR data. A probability map is generated based on the radar backscatter, effect of wind and dampening value. The semi-automatic algorithm is based on supervised classification. As a classifier, Artificial Neural Network Multilayer Perceptron (ANN MLP) classifier is used since it is more flexible and efficient than conventional maximum likelihood classifier for multisource and multi-temporal data. The learning algorithm for ANN MLP is chosen as the Levenberg-Marquardt (LM). Training and test data for supervised classification are composed from the textural information created from SAR images. This approach is semiautomatic because tuning the parameters of classifier and composing training data need a human interaction. We point out the similarities and differences between the two methods and their results as well as underlining their advantages and disadvantages. Due to the lack of ground truth data, we compare obtained results to each other, as well as other published oil slick area assessments.

  6. Iterative image reconstruction algorithms in coronary CT angiography improve the detection of lipid-core plaque - a comparison with histology

    Energy Technology Data Exchange (ETDEWEB)

    Puchner, Stefan B. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Medical University of Vienna, Department of Biomedical Imaging and Image-Guided Therapy, Vienna (Austria); Ferencik, Maros [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Harvard Medical School, Division of Cardiology, Massachusetts General Hospital, Boston, MA (United States); Maurovich-Horvat, Pal [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Semmelweis University, MTA-SE Lenduelet Cardiovascular Imaging Research Group, Heart and Vascular Center, Budapest (Hungary); Nakano, Masataka; Otsuka, Fumiyuki; Virmani, Renu [CV Path Institute Inc., Gaithersburg, MD (United States); Kauczor, Hans-Ulrich [University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany); Hoffmann, Udo [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Schlett, Christopher L. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany)

    2015-01-15

    To evaluate whether iterative reconstruction algorithms improve the diagnostic accuracy of coronary CT angiography (CCTA) for detection of lipid-core plaque (LCP) compared to histology. CCTA and histological data were acquired from three ex vivo hearts. CCTA images were reconstructed using filtered back projection (FBP), adaptive-statistical (ASIR) and model-based (MBIR) iterative algorithms. Vessel cross-sections were co-registered between FBP/ASIR/MBIR and histology. Plaque area <60 HU was semiautomatically quantified in CCTA. LCP was defined by histology as fibroatheroma with a large lipid/necrotic core. Area under the curve (AUC) was derived from logistic regression analysis as a measure of diagnostic accuracy. Overall, 173 CCTA triplets (FBP/ASIR/MBIR) were co-registered with histology. LCP was present in 26 cross-sections. Average measured plaque area <60 HU was significantly larger in LCP compared to non-LCP cross-sections (mm²: 5.78 ± 2.29 vs. 3.39 ± 1.68 FBP; 5.92 ± 1.87 vs. 3.43 ± 1.62 ASIR; 6.40 ± 1.55 vs. 3.49 ± 1.50 MBIR; all p < 0.0001). AUC for detecting LCP was 0.803/0.850/0.903 for FBP/ASIR/MBIR and was significantly higher for MBIR compared to FBP (p = 0.01). MBIR increased sensitivity for detection of LCP by CCTA. Plaque area <60 HU in CCTA was associated with LCP in histology regardless of the reconstruction algorithm. However, MBIR demonstrated higher accuracy for detecting LCP, which may improve vulnerable plaque detection by CCTA. (orig.)

  7. Implementation and Comparison of the Lifting 5/3 and 9/7 Algorithms in MatLab on GPU

    Directory of Open Access Journals (Sweden)

    Randa Khemiri

    2016-06-01

    In order to accelerate the Discrete Wavelet Transform (DWT), we have implemented and compared the lifting "Le Gall 5/3" and "Cohen-Daubechies-Feauveau 9/7" (CDF 9/7) algorithms on a low-cost NVIDIA GPU. The suggested implementation is realized in MatLab using the in-house parallel computation toolbox (PCT). Our experimental results indicate that the speedup is proportional to the image size until it reaches a maximum at 2048² pixels, beyond which the curve decreases. The GPU implementation outperforms the CPU by a factor of about 2-3.
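
    The Le Gall 5/3 lifting scheme consists of a single predict step (odd samples minus the average of their even neighbours) and a single update step (even samples plus a quarter of the neighbouring details), which is what makes it attractive for integer and parallel implementations. A plain-Python sketch of one forward 1D pass is given below; it follows the standard JPEG 2000-style formulation rather than the authors' MatLab/PCT code.

```python
# One forward pass of the integer Le Gall 5/3 lifting transform on a 1D
# signal of even length (boundary samples replicated). Standard JPEG 2000
# style formulation; not the authors' MatLab/GPU implementation.
def legall53_forward(x):
    n = len(x)
    assert n % 2 == 0 and n >= 2
    even, odd = x[0::2], x[1::2]
    # Predict step: detail coefficients.
    d = [odd[i] - ((even[i] + even[min(i + 1, len(even) - 1)]) >> 1)
         for i in range(len(odd))]
    # Update step: approximation coefficients.
    s = [even[i] + ((d[max(i - 1, 0)] + d[i] + 2) >> 2)
         for i in range(len(even))]
    return s, d

signal = [10, 12, 14, 20, 22, 21, 18, 15]
approx, detail = legall53_forward(signal)
print("approx:", approx)
print("detail:", detail)
```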

  8. Comparison between Possibilistic c-Means (PCM and Artificial Neural Network (ANN Classification Algorithms in Land use/ Land cover Classification

    Directory of Open Access Journals (Sweden)

    Ganchimeg Ganbold

    2017-03-01

    There are several statistical classification algorithms available for land use/land cover classification. However, each has a certain bias or compromise. Some methods, like the parallelepiped approach in supervised classification, cannot classify continuous regions within a feature. On the other hand, while the unsupervised classification method takes maximum advantage of spectral variability in an image, the maximally separable clusters in spectral space may not do much for our perception of important classes in a given study area. In this research, the output of an ANN algorithm was compared with the Possibilistic c-Means, an improvement of the fuzzy c-Means, on both a moderate-resolution Landsat 8 image and a high-resolution Formosat 2 image. The Formosat 2 image comes with an 8 m spatial resolution for the multispectral data. This multispectral image data was resampled to 10 m in order to maintain a uniform ratio of 1:3 against the Landsat 8 image. Six classes were chosen for analysis: dense forest, eucalyptus, water, grassland, wheat and riverine sand. Using a standard false color composite (FCC), the six features reflected differently in the infrared region, with wheat producing the brightest pixel values. Signature collection per class was therefore easily obtained for all classifications. The outputs of both ANN and PCM were analyzed separately for accuracy, and an error matrix was generated to assess the quality and accuracy of the classification algorithms. Comparing the results of the two methods on a per-class basis, ANN had a crisper output compared to PCM, which yielded clusters of pixels, especially on the moderate-resolution Landsat 8 imagery.

  9. Evaluation of dose calculation algorithms for the electron beams used in radiotherapy. Comparison with radiochromic film measurements

    International Nuclear Information System (INIS)

    El Barouky, Jad

    2011-01-01

    In radiotherapy, the dose calculation accuracy is crucial for the quality and the outcome of the treatments. The purpose of our study was to evaluate the accuracy of dose calculation algorithms for electron beams in situations close to clinical conditions. A new practical approach of radiochromic film dosimetry was developed and validated especially for difficult situations. An accuracy of 3.1% and 2.6% was achieved for absolute and relative dosimetry respectively. Using this technique a measured database of dose distributions was developed to form the basis of several fast and efficient Quality Assurance tests. Such tests are intended to be used also when the dose calculation algorithm is changed or the Treatment Planning System replaced. Pencil Beam and Monte Carlo dose calculations were compared to the measured data for simple geometrical phantom setups. They both gave similar results for obliquity, surface irregularity and extended SSD tests but the Monte Carlo calculation was more accurate in presence of heterogeneities. The same radiochromic film dosimetry method was applied to film cuts inserted into anthropomorphic phantoms providing a 2D dose distribution for any transverse plan. This allowed us to develop clinical test that can be also used for internal Quality Assurance purposes. As for simpler geometries, the Monte Carlo calculations showed better agreement with the measured data than the Pencil Beam calculation, especially in presence of heterogeneities such as lungs, cavities and bones. (author) [fr

  10. Comparison of low-contrast detectability between two CT reconstruction algorithms using voxel-based 3D printed textured phantoms.

    Science.gov (United States)

    Solomon, Justin; Ba, Alexandre; Bochud, François; Samei, Ehsan

    2016-12-01

    To use novel voxel-based 3D printed textured phantoms in order to compare low-contrast detectability between two reconstruction algorithms, FBP (filtered-backprojection) and SAFIRE (sinogram affirmed iterative reconstruction) and determine what impact background texture (i.e., anatomical noise) has on estimating the dose reduction potential of SAFIRE. Liver volumes were segmented from 23 abdominal CT cases. The volumes were characterized in terms of texture features from gray-level co-occurrence and run-length matrices. Using a 3D clustered lumpy background (CLB) model, a fitting technique based on a genetic optimization algorithm was used to find CLB textures that were reflective of the liver textures, accounting for CT system factors of spatial blurring and noise. With the modeled background texture as a guide, four cylindrical phantoms (Textures A-C and uniform, 165 mm in diameter, and 30 mm height) were designed, each containing 20 low-contrast spherical signals (6 mm diameter at nominal contrast levels of ∼3.2, 5.2, 7.2, 10, and 14 HU with four repeats per signal). The phantoms were voxelized and input into a commercial multimaterial 3D printer (Object Connex 350), with custom software for voxel-based printing (using principles of digital dithering). Images of the textured phantoms and a corresponding uniform phantom were acquired at six radiation dose levels (SOMATOM Flash, Siemens Healthcare) and observer model detection performance (detectability index of a multislice channelized Hotelling observer) was estimated for each condition (5 contrasts × 6 doses × 2 reconstructions × 4 backgrounds = 240 total conditions). A multivariate generalized regression analysis was performed (linear terms, no interactions, random error term, log link function) to assess whether dose, reconstruction algorithm, signal contrast, and background type have statistically significant effects on detectability. Also, fitted curves of detectability (averaged across contrast levels

  11. Comparison between SARS CoV and MERS CoV Using Apriori Algorithm, Decision Tree, SVM

    Directory of Open Access Journals (Sweden)

    Jang Seongpil

    2016-01-01

    MERS (Middle East Respiratory Syndrome) is a worldwide disease these days. The number of infected people is 1038 (08/03/2015) in Saudi Arabia and 186 (08/03/2015) in South Korea. MERS has spread all over the world, including Europe, East Asia and the Middle East, and the fatality rate is 38.8%. MERS is also known as a cousin of SARS (Severe Acute Respiratory Syndrome) because both diseases show similar symptoms, such as high fever and difficulty in breathing. This is why we compared MERS with SARS. We used data on the spike glycoprotein from NCBI. As a way of analyzing the protein, the Apriori algorithm, decision tree, and SVM were used; in particular, the SVM was iterated with normal, polynomial, and sigmoid kernels. The result is that MERS and SARS are alike but also differ in some ways.

  12. Comparison of quantification algorithms for circulating cell-free DNA methylation biomarkers in blood plasma from cancer patients.

    Science.gov (United States)

    de Vos, Luka; Gevensleben, Heidrun; Schröck, Andreas; Franzen, Alina; Kristiansen, Glen; Bootz, Friedrich; Dietrich, Dimo

    2017-01-01

    SHOX2 and SEPT9 methylation in circulating cell-free DNA (ccfDNA) in blood are established powerful and clinically valuable biomarkers for diagnosis, staging, prognosis, and monitoring of cancer patients. The aim of the present study was to evaluate different quantification algorithms (relative quantification, absolute quantification, quasi-digital PCR) with regard to their clinical performance. Methylation analyses were performed in a training cohort (141 patients with head and neck squamous cell carcinoma [HNSCC], 170 control cases) and a testing cohort (137 HNSCC cases, 102 controls). DNA was extracted from plasma samples, bisulfite-converted, and analyzed via quantitative real-time PCR. SHOX2 and SEPT9 methylations were assessed separately and as panel [mean SEPT9 / SHOX2 ] using the ΔCT method for absolute quantification and the ΔΔCT-method for relative quantification. Quasi-digital PCR was defined as the number of amplification-positive PCR replicates. The diagnostic (sensitivity, specificity, area under the curve (AUC) of the receiver operating characteristic (ROC)) and prognostic accuracy (hazard ratio (HR) from Cox regression) were evaluated. Sporadic methylation in control samples necessitated the introduction of cutoffs resulting in 61-63% sensitivity/90-92% specificity ( SEPT9 /training), 53-57% sensitivity/87-90% specificity ( SHOX2 /training), and 64-65% sensitivity/90-91% specificity (mean SEPT9 / SHOX2 /training). Results were confirmed in a testing cohort with 54-56% sensitivity/88-90% specificity ( SEPT9 /testing), 43-48% sensitivity/93-95% specificity ( SHOX2 /testing), and 49-58% sensitivity/88-94% specificity (mean SEPT9 / SHOX2 /testing). All algorithms showed comparable cutoff-independent diagnostic accuracy with largely overlapping 95% confidence intervals ( SEPT9 : AUC training  = 0.79-0.80; AUC testing  = 0.74-0.75; SHOX2 : AUC training  = 0.78-0.81, AUC testing  = 0.77-0.79; mean SEPT9 / SHOX2 : AUC training  = 0
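
    The three quantification schemes compared above differ mainly in how the CT values of the PCR replicates are aggregated: absolute quantification uses ΔCT against a reference, relative quantification uses ΔΔCT against a calibrator, and quasi-digital PCR counts the amplification-positive replicates. A schematic sketch follows; the CT values and calibrator are placeholders, not study data.

```python
# Schematic aggregation of PCR replicates under the three schemes compared
# above. CT values and the calibrator are placeholders, not study data.
import math

ct_target = [34.1, 35.0, float("nan")]   # e.g. methylated SHOX2 replicates
ct_reference = [28.2, 28.4, 28.1]        # reference (total DNA) replicates
calibrator_ddct = 6.5                    # hypothetical calibrator delta-CT

def mean_ct(values):
    valid = [v for v in values if not math.isnan(v)]
    return sum(valid) / len(valid) if valid else float("nan")

delta_ct = mean_ct(ct_target) - mean_ct(ct_reference)        # absolute
delta_delta_ct = delta_ct - calibrator_ddct                   # relative
relative_level = 2.0 ** (-delta_delta_ct)
quasi_digital = sum(not math.isnan(v) for v in ct_target)     # positive reps

print(f"deltaCT={delta_ct:.2f}  2^-ddCT={relative_level:.3f}  "
      f"positive replicates={quasi_digital}")
```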

  13. Comparison of K-means and fuzzy c-means algorithm performance for automated determination of the arterial input function.

    Science.gov (United States)

    Yin, Jiandong; Sun, Hongzan; Yang, Jiawen; Guo, Qiyong

    2014-01-01

    The arterial input function (AIF) plays a crucial role in the quantification of cerebral perfusion parameters. The traditional method for AIF detection is based on manual operation, which is time-consuming and subjective. Two automatic methods have been reported that are based on two frequently used clustering algorithms: fuzzy c-means (FCM) and K-means. However, it is still not clear which is better for AIF detection. Hence, we compared the performance of these two clustering methods using both simulated and clinical data. The results demonstrate that K-means analysis can yield more accurate and robust AIF results, although it takes longer to execute than the FCM method. We consider that this longer execution time is trivial relative to the total time required for image manipulation in a PACS setting, and is acceptable if an ideal AIF is obtained. Therefore, the K-means method is preferable to FCM in AIF detection.
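
    The clustering step behind both methods is conceptually simple: each voxel's concentration-time curve is a feature vector, the curves are grouped into k clusters, and a cluster whose mean curve peaks early and high is taken as the arterial candidate. The sketch below shows that logic with scikit-learn's K-means on synthetic bolus-like curves; the peak-based selection criterion is an assumption, and the fuzzy c-means counterpart is omitted since it would require an additional library.

```python
# Sketch of K-means-based AIF candidate selection: cluster concentration-time
# curves and pick the cluster whose mean curve peaks earliest/highest.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(5)
t = np.linspace(0, 60, 60)

def bolus(t0, amp):
    return amp * np.clip(t - t0, 0, None) * np.exp(-(t - t0) / 4.0)

curves = np.vstack(
    [bolus(8, 1.0) + 0.05 * rng.standard_normal(60) for _ in range(30)] +   # arterial-like
    [bolus(14, 0.4) + 0.05 * rng.standard_normal(60) for _ in range(300)]   # tissue-like
)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(curves)
means = np.array([curves[km.labels_ == k].mean(axis=0) for k in range(3)])
# Favour a high peak and an early time-to-peak.
score = means.max(axis=1) / (1.0 + means.argmax(axis=1))
aif_cluster = int(np.argmax(score))
print("AIF cluster:", aif_cluster, "mean peak:", round(means[aif_cluster].max(), 2))
```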

  14. A comparison of native GPU computing versus OpenACC for implementing flow-routing algorithms in hydrological applications

    Science.gov (United States)

    Rueda, Antonio J.; Noguera, José M.; Luque, Adrián

    2016-02-01

    In recent years GPU computing has gained wide acceptance as a simple low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation for a GPU is a non-trivial task that requires a thorough refactorization of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device. With this technology the CPU code virtually only has to be augmented with a few compiler directives to identify the areas to be accelerated and the way in which data has to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors and less dependency on the underlying architecture and future evolution of the GPU technology. Our aim with this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as a case study. We implemented the flow accumulation step of this algorithm on the CPU, using OpenACC and two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. We find that although OpenACC cannot match the performance of a CUDA-optimized implementation (×3.5 slower on average), it provides a significant performance improvement over a CPU implementation (×2-6) with far simpler code and less implementation effort.

  15. Comparison of Computational Algorithms for the Classification of Liver Cancer using SELDI Mass Spectrometry: A Case Study

    Directory of Open Access Journals (Sweden)

    Robert J Hickey

    2007-01-01

    Full Text Available Introduction: As an alternative to DNA microarrays, mass spectrometry based analysis of proteomic patterns has shown great potential in cancer diagnosis. The ultimate application of this technique in clinical settings relies on the advancement of the technology itself and the maturity of the computational tools used to analyze the data. A number of computational algorithms constructed on different principles are available for the classification of disease status based on proteomic patterns. Nevertheless, few studies have addressed the difference in the performance of these approaches. In this report, we describe a comparative case study on the classification accuracy of hepatocellular carcinoma based on the serum proteomic pattern generated from a Surface Enhanced Laser Desorption/Ionization (SELDI) mass spectrometer. Methods: Nine supervised classification algorithms are implemented in R software and compared for the classification accuracy. Results: We found that the support vector machine with radial function is preferable as a tool for classification of hepatocellular carcinoma using features in SELDI mass spectra. Among the rest of the methods, random forest and prediction analysis of microarrays have better performance. A permutation-based technique reveals that the support vector machine with a radial function seems intrinsically superior in learning from the training data since it has a lower prediction error than others when there is essentially no differential signal. On the other hand, the performance of the random forest and prediction analysis of microarrays rely on their capability of capturing the signals with substantial differentiation between groups. Conclusions: Our finding is similar to a previous study, where classification methods based on the Matrix Assisted Laser Desorption/Ionization (MALDI) mass spectrometry are compared for the prediction accuracy of ovarian cancer. The support vector machine, random forest and prediction
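
    A minimal sketch of such a classifier comparison, using scikit-learn and synthetic data standing in for SELDI peak intensities (the study itself compared nine algorithms implemented in R; the models and parameters below are illustrative assumptions):

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.model_selection import cross_val_score
    from sklearn.svm import SVC
    from sklearn.ensemble import RandomForestClassifier

    # Synthetic stand-in for SELDI peak intensities: 100 spectra x 60 features.
    X, y = make_classification(n_samples=100, n_features=60, n_informative=8, random_state=1)

    models = {
        "SVM (RBF kernel)": SVC(kernel="rbf", C=1.0, gamma="scale"),
        "Random forest": RandomForestClassifier(n_estimators=200, random_state=1),
    }
    for name, model in models.items():
        acc = cross_val_score(model, X, y, cv=5).mean()
        print(f"{name}: mean CV accuracy = {acc:.3f}")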

  16. Multiscale comparison of GPM radar and passive microwave precipitation fields over oceans and land: effective resolution and global/regional/local diagnostics for improving retrieval algorithms

    Science.gov (United States)

    Guilloteau, C.; Foufoula-Georgiou, E.; Kummerow, C.; Kirstetter, P. E.

    2017-12-01

    A multiscale approach is used to compare precipitation fields retrieved from GMI using the latest version of the GPROF algorithm (GPROF-2017) to the DPR fields all over the globe. Using a wavelet-based spectral analysis, which renders the multi-scale decompositions of the original fields independent of each other spatially and across scales, we quantitatively assess the various scales of variability of the retrieved fields, and thus define the spatially variable "effective resolution" (ER) of the retrievals. Globally, a strong agreement is found between passive microwave and radar patterns at scales coarser than 80 km. Over oceans the patterns match down to the 20 km scale. Over land, comparison statistics are spatially heterogeneous. In most areas a strong discrepancy is observed between passive microwave and radar patterns at scales finer than 40-80 km. The comparison is also supported by ground-based observations over the continental US derived from the NOAA/NSSL MRMS suite of products. While larger discrepancies over land than over oceans are classically explained by the complex surface emissivity of land perturbing the passive microwave retrieval, other factors are investigated here, such as intricate differences in storm structure over oceans and land. Differences in terms of statistical properties (PDF of intensities and spatial organization) of precipitation fields over land and oceans are assessed from radar data, as well as differences in the relation between the 89 GHz brightness temperature and precipitation. Moreover, the multiscale approach allows us to quantify the part of the discrepancies caused by mismatches in the location of intense cells and by instrument-related geometric effects. The objective is to diagnose shortcomings of current retrieval algorithms so that targeted improvements can be made to achieve the same retrieval performance over land as over oceans.

  17. Evaluation of a metal artifact reduction algorithm applied to post-interventional flat detector CT in comparison to pre-treatment CT in patients with acute subarachnoid haemorrhage

    Energy Technology Data Exchange (ETDEWEB)

    Mennecke, Angelika; Svergun, Stanislav; Doerfler, Arnd; Struffert, Tobias [University of Erlangen-Nuremberg, Department of Neuroradiology, Erlangen (Germany); Scholz, Bernhard [Siemens Healthcare GmbH, Forchheim (Germany); Royalty, Kevin [Siemens Medical Solutions, USA, Inc., Hoffman Estates, IL (United States)

    2017-01-15

    Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. (orig.)

  18. Evaluation of a metal artifact reduction algorithm applied to post-interventional flat detector CT in comparison to pre-treatment CT in patients with acute subarachnoid haemorrhage

    International Nuclear Information System (INIS)

    Mennecke, Angelika; Svergun, Stanislav; Doerfler, Arnd; Struffert, Tobias; Scholz, Bernhard; Royalty, Kevin

    2017-01-01

    Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. (orig.)

  19. Sentiment analysis: a comparison of deep learning neural network algorithm with SVM and naïve Bayes for Indonesian text

    Science.gov (United States)

    Calvin Frans Mariel, Wahyu; Mariyah, Siti; Pramana, Setia

    2018-03-01

    Deep learning is a new era of machine learning techniques that essentially imitate the structure and function of the human brain. It is a development of deeper Artificial Neural Networks (ANNs) that use more than one hidden layer. A Deep Learning Neural Network has a great ability to recognize patterns in various data types such as images, audio, text, and many more. In this paper, the authors try to measure this algorithm's ability by applying it to text classification. The classification task here considers the sentiment content of a text, which is also called sentiment analysis. Using several combinations of text preprocessing and feature extraction techniques, we aim to compare the modelling results of the Deep Learning Neural Network with two other commonly used algorithms, Naïve Bayes and the Support Vector Machine (SVM). This algorithm comparison uses Indonesian text data with balanced and unbalanced sentiment composition. Based on the experimental simulation, the Deep Learning Neural Network clearly outperforms Naïve Bayes and SVM and offers a better F1 score, and the feature extraction technique that most improves the modelling results is the bigram.
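
    A minimal sketch of the two baseline classifiers with TF-IDF bigram features, using scikit-learn and toy English sentences standing in for the Indonesian corpus (the texts, labels, and parameters are illustrative assumptions; the deep network is not reproduced here):

    from sklearn.pipeline import make_pipeline
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import cross_val_score

    texts = ["great product, very satisfied", "terrible service, never again",
             "quite good overall", "bad quality and slow delivery"] * 25
    labels = [1, 0, 1, 0] * 25

    for name, clf in [("Naive Bayes", MultinomialNB()), ("Linear SVM", LinearSVC())]:
        # unigram + bigram features, as the study's best-performing extraction suggests
        pipe = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), clf)
        f1 = cross_val_score(pipe, texts, labels, cv=5, scoring="f1").mean()
        print(f"{name}: mean F1 = {f1:.3f}")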

  20. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  1. A new algorithm combining geostatistics with the surrogate data approach to increase the accuracy of comparisons of point radiation measurements with cloud measurements

    Science.gov (United States)

    Venema, V. K. C.; Lindau, R.; Varnai, T.; Simmer, C.

    2009-04-01

    algorithm is similar to the standard iterative amplitude adjusted Fourier transform (IAAFT) algorithm, but has an additional iterative step in which the surrogate field is nudged towards the kriged field. The nudging strength is gradually reduced to zero. We work with four types of pseudo-measurements: one zenith-pointing measurement (which together with the wind produces a line measurement), five zenith-pointing measurements, and a slow and a fast azimuth scan (which together with the wind produce spirals). Because we work with LES clouds and the truth is known, we can validate the algorithm by performing 3D radiative transfer calculations on the original LES clouds and on the new surrogate clouds. For comparison, the radiative properties of the kriged fields and of standard surrogate fields are also computed. Preliminary results already show that these new surrogate clouds reproduce the structure of the original clouds very well, and that the minima and maxima are located where the pseudo-measurements see them. The main limitation seems to be the amount of data, which is especially limited in the case of just one zenith-pointing measurement.
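
    For orientation, a compact Python/NumPy sketch of the standard IAAFT iteration on a 1-D series is given below; the kriging-nudging step that distinguishes the new algorithm is not reproduced, and the test series is synthetic.

    import numpy as np

    def iaaft(x, n_iter=100, seed=0):
        """Iterative amplitude adjusted Fourier transform surrogate of a 1-D series:
        preserves the amplitude spectrum and the value distribution of x."""
        rng = np.random.default_rng(seed)
        amplitudes = np.abs(np.fft.rfft(x))
        sorted_x = np.sort(x)
        y = rng.permutation(x)                       # random shuffle as starting point
        for _ in range(n_iter):
            # impose the power spectrum of the original series
            spec = np.fft.rfft(y)
            y = np.fft.irfft(amplitudes * np.exp(1j * np.angle(spec)), n=len(x))
            # impose the amplitude distribution by rank-ordering
            y = sorted_x[np.argsort(np.argsort(y))]
        return y

    x = np.sin(np.linspace(0, 20, 256)) + 0.3 * np.random.default_rng(1).standard_normal(256)
    surrogate = iaaft(x)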

  2. Many-Objective Distinct Candidates Optimization using Differential Evolution

    DEFF Research Database (Denmark)

    Justesen, Peter; Ursem, Rasmus Kjær

    2010-01-01

    for each objective. The Many-Objective Distinct Candidates Optimization using Differential Evolution (MODCODE) algorithm takes a novel approach by focusing search using a user-defined number of subpopulations each returning a distinct optimal solution within the preferred region of interest. In this paper......, we present the novel MODCODE algorithm incorporating the ROD measure to measure and control candidate distinctiveness. MODCODE is tested against GDE3 on three real world centrifugal pump design problems supplied by Grundfos. Our algorithm outperforms GDE3 on all problems with respect to all...

  3. Comparison of Performance between Genetic Algorithm and SCE-UA for Calibration of SCS-CN Surface Runoff Simulation

    Directory of Open Access Journals (Sweden)

    Ji-Hong Jeon

    2014-11-01

    Full Text Available Global optimization methods linked with simulation models are widely used for automated calibration and serve as useful tools for searching for cost-effective alternatives for environmental management. A genetic algorithm (GA) and the shuffled complex evolution (SCE-UA) algorithm were linked with the Long-Term Hydrologic Impact Assessment (L-THIA) model, which employs the curve number (SCS-CN) method. The performance of the two optimization methods was compared by automatically calibrating L-THIA for monthly runoff from 10 watersheds in Indiana. The selected watershed areas ranged from 32.7 to 5844.1 km². The SCS-CN values and the total five-day rainfall for adjustment were optimized, and the objective function used was the Nash-Sutcliffe value (NS value). The GA method rapidly converged towards the optimal space within the first 10 generations; after the 10th generation, solutions became more dispersed around the optimal space, in a so-called cross-hair pattern, because of the increased mutation rate. The number of loop executions influenced the calibration performance of both the SCE-UA and GA methods. The GA method performed better than the SCE-UA method when fewer loop executions were used. For most watersheds, calibration performance using GA was better than for SCE-UA until the 50th generation, when the number of model loop executions was around 5150 (one generation has 100 individuals). However, beyond the 50th generation of the GA method, the SCE-UA method performed better for calibrating monthly runoff. Optimized SCS-CN values for primary land use types were nearly the same for the two methods, but those for minor land use types and for the total five-day rainfall used for AMC adjustment were somewhat different, because those parameters did not significantly influence the calculation of the objective function. The GA method is recommended for cases when model simulation takes a long time and the model user does not have sufficient time
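
    The curve-number runoff relation and the Nash-Sutcliffe objective at the core of such a calibration are compact. The sketch below uses SciPy's differential evolution as a stand-in for the GA and SCE-UA engines, with synthetic rainfall-runoff data; the data, bounds, and choice of optimizer are illustrative assumptions.

    import numpy as np
    from scipy.optimize import differential_evolution

    def scs_runoff(P, CN):
        """SCS curve-number direct runoff (mm) for rainfall P (mm), with Ia = 0.2*S."""
        S = 25400.0 / CN - 254.0
        Ia = 0.2 * S
        return np.where(P > Ia, (P - Ia) ** 2 / (P - Ia + S), 0.0)

    def neg_nse(params, P, Q_obs):
        Q_sim = scs_runoff(P, params[0])
        nse = 1.0 - np.sum((Q_obs - Q_sim) ** 2) / np.sum((Q_obs - Q_obs.mean()) ** 2)
        return -nse                                   # optimizer minimizes, so negate NSE

    rng = np.random.default_rng(2)
    P = rng.uniform(10, 120, 60)                      # synthetic rainfall events (mm)
    Q_obs = scs_runoff(P, 72.0) * rng.normal(1.0, 0.05, 60)   # "observed" runoff

    result = differential_evolution(neg_nse, bounds=[(40.0, 98.0)], args=(P, Q_obs), seed=2)
    print(f"calibrated CN = {result.x[0]:.1f}, NSE = {-result.fun:.3f}")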

  4. Dual-objective optimization of organic Rankine cycle (ORC) systems using genetic algorithm: a comparison between basic and recuperative cycles

    Science.gov (United States)

    Hayat, Nasir; Ameen, Muhammad Tahir; Tariq, Muhammad Kashif; Shah, Syed Nadeem Abbas; Naveed, Ahmad

    2017-08-01

    Low-potential waste thermal energy can be exploited for useful net power output by employing organic Rankine cycle (ORC) systems. In the current article, dual-objective (thermal efficiency η_th and specific investment cost SIC) optimization of ORC systems [basic organic Rankine cycle (BORC) and recuperative organic Rankine cycle (RORC)] has been performed using the non-dominated sorting genetic algorithm II (NSGA-II). Seven organic compounds (R-123, R-1234ze, R-152a, R-21, R-236ea, R-245ca and R-601) have been employed in the basic cycle and four dry compounds (R-123, R-236ea, R-245ca and R-601) in the recuperative cycle to investigate the behaviour of the two systems and compare their performance. Sensitivity analyses show that recuperation boosts the thermodynamic performance of the systems but also raises the specific investment cost significantly. R-21, R-245ca and R-601 show attractive performance in the BORC, whereas R-601 and R-236ea do so in the RORC. The RORC, due to its higher total investment cost and operation and maintenance costs, has longer payback periods than the BORC.

  5. Accurate Learning with Few Atlases (ALFA): an algorithm for MRI neonatal brain extraction and comparison with 11 publicly available methods.

    Science.gov (United States)

    Serag, Ahmed; Blesa, Manuel; Moore, Emma J; Pataky, Rozalia; Sparrow, Sarah A; Wilkinson, A G; Macnaught, Gillian; Semple, Scott I; Boardman, James P

    2016-03-24

    Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases 'uniformly' distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.

  6. Method of immersion of a problem of comparison financial conditions of the enterprises in an expert cover in a class algorithms of artificial intelligence

    Directory of Open Access Journals (Sweden)

    S. V. Bukharin

    2016-01-01

    Full Text Available The financial condition of an enterprise can be estimated by a set of characteristics (solvency and liquidity, capital structure, profitability, etc.). Some financial coefficients carry little information, while others are interrelated. Therefore, to eliminate ambiguity, we pass to generalized indicators (rating numbers), and the theory of expert systems is proposed as the main research tool. A characteristic feature of the modern theory of expert systems is the application of intelligent data-processing techniques, i.e. data mining. A method is proposed for embedding the problem of comparing the financial condition of economic entities into an expert shell belonging to the class of artificial-intelligence systems (the method of analysis of hierarchies, contiguity learning of a neural network, and a training algorithm with the softmax activation function). A generalized indicator of capital structure in the form of a rating number is introduced, and a feature (factor) space is constructed for seven specific enterprises. Quantitative features (financial coefficients of capital structure) are selected and normalized according to the rules of expert-system theory. The method of analysis of hierarchies is then applied to the resulting set of generalized indicators: based on T. Saaty's linguistic scale, ranks reflecting the relative importance of the various financial coefficients are defined and a matrix of pairwise comparisons is constructed. The vector of feature priorities is calculated from the eigenvalues and eigenvectors of this matrix. As a result, the obtained results are visualized, which eliminates the difficulties of interpreting small and negative values of the generalized indicator. The neural network with contiguity learning and
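
    The analysis-of-hierarchies step described above reduces to finding the principal eigenvector of a pairwise-comparison matrix. A minimal NumPy sketch with an illustrative 3x3 Saaty-scale matrix follows; the actual financial coefficients and expert judgements of the study are not reproduced.

    import numpy as np

    # Illustrative pairwise-comparison matrix on Saaty's 1-9 scale for three
    # hypothetical capital-structure coefficients (reciprocal by construction).
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(eigvals.real)                       # principal eigenvalue
    weights = np.abs(eigvecs[:, k].real)
    weights /= weights.sum()                          # priority vector of the criteria

    # Saaty's consistency index/ratio checks the coherence of the judgements.
    n = A.shape[0]
    CI = (eigvals[k].real - n) / (n - 1)
    CR = CI / 0.58                                    # 0.58 = random index for n = 3
    print(weights, CR)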

  7. A comparison of machine learning algorithms for chemical toxicity classification using a simulated multi-scale data model

    Directory of Open Access Journals (Sweden)

    Li Zhen

    2008-05-01

    analysis of data sets in which in vitro bioassay data is being used to predict in vivo chemical toxicology. From our analysis, we can recommend that several ML methods, most notably SVM and ANN, are good candidates for use in real world applications in this area.

  8. A comparison of two measures of HIV diversity in multi-assay algorithms for HIV incidence estimation.

    Directory of Open Access Journals (Sweden)

    Matthew M Cousins

    Full Text Available Multi-assay algorithms (MAAs) can be used to estimate HIV incidence in cross-sectional surveys. We compared the performance of two MAAs that use HIV diversity as one of four biomarkers for analysis of HIV incidence. Both MAAs included two serologic assays (LAg-Avidity assay and BioRad-Avidity assay), HIV viral load, and an HIV diversity assay. HIV diversity was quantified using either a high resolution melting (HRM) diversity assay that does not require HIV sequencing (HRM score for a 239 base pair env region) or sequence ambiguity (the percentage of ambiguous bases in a 1,302 base pair pol region). Samples were classified as MAA positive (likely from individuals with recent HIV infection) if they met the criteria for all of the assays in the MAA. The following performance characteristics were assessed: (1) the proportion of samples classified as MAA positive as a function of duration of infection, (2) the mean window period, (3) the shadow (the time period before sample collection that is being assessed by the MAA), and (4) the accuracy of cross-sectional incidence estimates for three cohort studies. The proportion of samples classified as MAA positive as a function of duration of infection was nearly identical for the two MAAs. The mean window period was 141 days for the HRM-based MAA and 131 days for the sequence ambiguity-based MAA. The shadows for both MAAs were <1 year. Both MAAs provided cross-sectional HIV incidence estimates that were very similar to longitudinal incidence estimates based on HIV seroconversion. MAAs that include the LAg-Avidity assay, the BioRad-Avidity assay, HIV viral load, and HIV diversity can provide accurate HIV incidence estimates. Sequence ambiguity measures obtained using a commercially-available HIV genotyping system can be used as an alternative to HRM scores in MAAs for cross-sectional HIV incidence estimation.

  9. A Comparison of Two Measures of HIV Diversity in Multi-Assay Algorithms for HIV Incidence Estimation

    Science.gov (United States)

    Cousins, Matthew M.; Konikoff, Jacob; Sabin, Devin; Khaki, Leila; Longosz, Andrew F.; Laeyendecker, Oliver; Celum, Connie; Buchbinder, Susan P.; Seage, George R.; Kirk, Gregory D.; Moore, Richard D.; Mehta, Shruti H.; Margolick, Joseph B.; Brown, Joelle; Mayer, Kenneth H.; Kobin, Beryl A.; Wheeler, Darrell; Justman, Jessica E.; Hodder, Sally L.; Quinn, Thomas C.; Brookmeyer, Ron; Eshleman, Susan H.

    2014-01-01

    Background Multi-assay algorithms (MAAs) can be used to estimate HIV incidence in cross-sectional surveys. We compared the performance of two MAAs that use HIV diversity as one of four biomarkers for analysis of HIV incidence. Methods Both MAAs included two serologic assays (LAg-Avidity assay and BioRad-Avidity assay), HIV viral load, and an HIV diversity assay. HIV diversity was quantified using either a high resolution melting (HRM) diversity assay that does not require HIV sequencing (HRM score for a 239 base pair env region) or sequence ambiguity (the percentage of ambiguous bases in a 1,302 base pair pol region). Samples were classified as MAA positive (likely from individuals with recent HIV infection) if they met the criteria for all of the assays in the MAA. The following performance characteristics were assessed: (1) the proportion of samples classified as MAA positive as a function of duration of infection, (2) the mean window period, (3) the shadow (the time period before sample collection that is being assessed by the MAA), and (4) the accuracy of cross-sectional incidence estimates for three cohort studies. Results The proportion of samples classified as MAA positive as a function of duration of infection was nearly identical for the two MAAs. The mean window period was 141 days for the HRM-based MAA and 131 days for the sequence ambiguity-based MAA. The shadows for both MAAs were <1 year. Both MAAs provided cross-sectional HIV incidence estimates that were very similar to longitudinal incidence estimates based on HIV seroconversion. Conclusions MAAs that include the LAg-Avidity assay, the BioRad-Avidity assay, HIV viral load, and HIV diversity can provide accurate HIV incidence estimates. Sequence ambiguity measures obtained using a commercially-available HIV genotyping system can be used as an alternative to HRM scores in MAAs for cross-sectional HIV incidence estimation. PMID:24968135

  10. Discovery of new natural products by application of X-hitting, a novel algorithm for automated comparison of full UV-spectra, combined with structural determination by NMR spectroscophy

    DEFF Research Database (Denmark)

    Larsen, Thomas Ostenfeld; Petersen, Bent O.; Duus, Jens Øllgaard

    2005-01-01

    X-hitting, a newly developed algorithm for automated comparison of UV data, has been used for the tracking of two novel spiro-quinazoline metabolites, lapatins A (1) and B (2), in a screening study targeting quinazolines. The structures of 1 and 2 were elucidated by analysis of spectroscopic data...

  11. Comparison Algorithm Kernels on Support Vector Machine (SVM) To Compare The Trend Curves with Curves Online Forex Trading

    Directory of Open Access Journals (Sweden)

    irfan abbas

    2017-01-01

    Full Text Available At present, Forex traders generally still work with exchange-rate figures collected from different sources. They therefore only know the rate prevailing at a given moment, which makes it difficult to analyze or predict future exchange-rate movements. Forex traders usually rely on indicators to help them analyze and predict future values; an indicator is a decision-making tool. Forex trading is the trading of one country's currency against another country's currency. Trading takes place globally between the world's financial centers, with the world's major banks handling the major transactions. Forex trading offers a profitable type of investment with small capital and high profit: because of the leverage built into Forex trading systems, the invested capital can be multiplied if a buy/sell prediction is accurate. However, Forex trading carries a high level of risk, and losses can be avoided only by knowing the right time to trade (buy or sell). Traders who invest in the foreign exchange market are expected to be able to analyze circumstances and situations when predicting differences in currency exchange rates. Forex price movements form patterns (curves going up and down) that greatly assist traders in making decisions, and the movement of the curve is used as an indicator in the decision to buy or sell. This study compares kernel types for the Support Vector Machine (SVM) in predicting the movement of the curve in live Forex trading, using GBPUSD data on the 1-hour (1H) timeframe. From the results and discussion it can be concluded that the Dot, Multiquadric and Neural kernels are inappropriate for the non-linear Forex data when following the pattern of the trend curves, because the curves they generate are linear (straight), and the kernel type producing the closest curve

  12. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  13. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithms concepts are introduced, genetic algorithm applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
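
    A minimal Python sketch of the basic concepts (tournament selection, one-point crossover, bit-flip mutation) on the toy one-max problem; the population size and rates are illustrative assumptions, not tied to any particular application described above.

    import random

    def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                          crossover_rate=0.9, mutation_rate=0.02):
        """Toy generational GA maximizing `fitness` over bit strings."""
        pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
        best = max(pop, key=fitness)
        for _ in range(generations):
            # tournament selection of parents
            parents = [max(random.sample(pop, 3), key=fitness) for _ in range(pop_size)]
            children = []
            for p1, p2 in zip(parents[::2], parents[1::2]):
                c1, c2 = p1[:], p2[:]
                if random.random() < crossover_rate:          # one-point crossover
                    point = random.randint(1, n_bits - 1)
                    c1, c2 = p1[:point] + p2[point:], p2[:point] + p1[point:]
                for child in (c1, c2):
                    for i in range(n_bits):                   # bit-flip mutation
                        if random.random() < mutation_rate:
                            child[i] ^= 1
                    children.append(child)
            pop = children
            best = max(pop + [best], key=fitness)
        return best

    best = genetic_algorithm(sum)                             # one-max: maximize number of 1s
    print(best, sum(best))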

  14. Dark matter candidates

    International Nuclear Information System (INIS)

    Turner, M.S.

    1989-01-01

    One of the simplest, yet most profound, questions we can ask about the Universe is, how much stuff is in it, and further what is that stuff composed of? Needless to say, the answer to this question has very important implications for the evolution of the Universe, determining both the ultimate fate and the course of structure formation. Remarkably, at this late date in the history of the Universe we still do not have a definitive answer to this simplest of questions---although we have some very intriguing clues. It is known with certainty that most of the material in the Universe is dark, and we have the strong suspicion that the dominant component of material in the Cosmos is not baryons, but rather is exotic relic elementary particles left over from the earliest, very hot epoch of the Universe. If true, the Dark Matter question is a most fundamental one facing both particle physics and cosmology. The leading particle dark matter candidates are: the axion, the neutralino, and a light neutrino species. All three candidates are accessible to experimental tests, and experiments are now in progress. In addition, there are several dark horse, long shot, candidates, including the superheavy magnetic monopole and soliton stars. 13 refs

  15. A 100-Year Review: Methods and impact of genetic selection in dairy cattle-From daughter-dam comparisons to deep learning algorithms.

    Science.gov (United States)

    Weigel, K A; VanRaden, P M; Norman, H D; Grosu, H

    2017-12-01

    In the early 1900s, breed society herdbooks had been established and milk-recording programs were in their infancy. Farmers wanted to improve the productivity of their cattle, but the foundations of population genetics, quantitative genetics, and animal breeding had not been laid. Early animal breeders struggled to identify genetically superior families using performance records that were influenced by local environmental conditions and herd-specific management practices. Daughter-dam comparisons were used for more than 30 yr and, although genetic progress was minimal, the attention given to performance recording, genetic theory, and statistical methods paid off in future years. Contemporary (herdmate) comparison methods allowed more accurate accounting for environmental factors and genetic progress began to accelerate when these methods were coupled with artificial insemination and progeny testing. Advances in computing facilitated the implementation of mixed linear models that used pedigree and performance data optimally and enabled accurate selection decisions. Sequencing of the bovine genome led to a revolution in dairy cattle breeding, and the pace of scientific discovery and genetic progress accelerated rapidly. Pedigree-based models have given way to whole-genome prediction, and Bayesian regression models and machine learning algorithms have joined mixed linear models in the toolbox of modern animal breeders. Future developments will likely include elucidation of the mechanisms of genetic inheritance and epigenetic modification in key biological pathways, and genomic data will be used with data from on-farm sensors to facilitate precision management on modern dairy farms. Copyright © 2017 American Dairy Science Association. Published by Elsevier Inc. All rights reserved.

  16. Simultaneous polarimeter retrievals of microphysical aerosol and ocean color parameters from the "MAPP" algorithm with comparison to high-spectral-resolution lidar aerosol and ocean products.

    Science.gov (United States)

    Stamnes, S; Hostetler, C; Ferrare, R; Burton, S; Liu, X; Hair, J; Hu, Y; Wasilewski, A; Martin, W; van Diedenhoven, B; Chowdhary, J; Cetinić, I; Berg, L K; Stamnes, K; Cairns, B

    2018-04-01

    We present an optimal-estimation-based retrieval framework, the microphysical aerosol properties from polarimetry (MAPP) algorithm, designed for simultaneous retrieval of aerosol microphysical properties and ocean color bio-optical parameters using multi-angular total and polarized radiances. Polarimetric measurements from the airborne NASA Research Scanning Polarimeter (RSP) were inverted by MAPP to produce atmosphere and ocean products. The RSP MAPP results are compared with coincident lidar measurements made by the NASA High-Spectral-Resolution Lidar HSRL-1 and HSRL-2 instruments. Comparisons are made of the aerosol optical depth (AOD) at 355 and 532 nm, lidar column-averaged measurements of the aerosol lidar ratio and Ångström exponent, and lidar ocean measurements of the particulate hemispherical backscatter coefficient and the diffuse attenuation coefficient. The measurements were collected during the 2012 Two-Column Aerosol Project (TCAP) campaign and the 2014 Ship-Aircraft Bio-Optical Research (SABOR) campaign. For the SABOR campaign, 73% of RSP MAPP retrievals fall within ±0.04 AOD at 532 nm as measured by HSRL-1, with an R value of 0.933 and root-mean-square deviation of 0.0372. For the TCAP campaign, 53% of RSP MAPP retrievals are within 0.04 AOD as measured by HSRL-2, with an R value of 0.927 and root-mean-square deviation of 0.0673. Comparisons with HSRL-2 AOD at 355 nm during TCAP result in an R value of 0.959 and a root-mean-square deviation of 0.0694. The RSP retrievals using the MAPP optimal estimation framework represent a key milestone on the path to a combined lidar + polarimeter retrieval using both HSRL and RSP measurements.

  17. A Novel Pairwise Comparison-Based Method to Determine Radiation Dose Reduction Potentials of Iterative Reconstruction Algorithms, Exemplified Through Circle of Willis Computed Tomography Angiography.

    Science.gov (United States)

    Ellmann, Stephan; Kammerer, Ferdinand; Brand, Michael; Allmendinger, Thomas; May, Matthias S; Uder, Michael; Lell, Michael M; Kramer, Manuel

    2016-05-01

    The aim of this study was to determine the dose reduction potential of iterative reconstruction (IR) algorithms in computed tomography angiography (CTA) of the circle of Willis using a novel method of evaluating the quality of radiation dose-reduced images. This study relied on ReconCT, a proprietary reconstruction software that allows simulating CT scans acquired with reduced radiation dose based on the raw data of true scans. To evaluate the performance of ReconCT in this regard, a phantom study was performed to compare the image noise of true and simulated scans within simulated vessels of a head phantom. Thereafter, 10 patients scheduled for CTA of the circle of Willis were scanned according to our institute's standard protocol (100 kV, 145 reference mAs). Subsequently, CTA images of these patients were reconstructed as either a full-dose weighted filtered back projection or with radiation dose reductions down to 10% of the full-dose level and Sinogram-Affirmed Iterative Reconstruction (SAFIRE) with either strength 3 or 5. Images were marked with arrows pointing at vessels of different sizes, and image pairs were presented to observers. Five readers assessed image quality with 2-alternative forced choice comparisons. In the phantom study, no significant differences were observed between the noise levels of simulated and true scans in filtered back projection, SAFIRE 3, and SAFIRE 5 reconstructions. The dose reduction potential for patient scans showed a strong dependence on IR strength as well as on the size of the vessel of interest. Thus, the potential radiation dose reductions ranged from 84.4% for the evaluation of great vessels reconstructed with SAFIRE 5 to 40.9% for the evaluation of small vessels reconstructed with SAFIRE 3. This study provides a novel image quality evaluation method based on 2-alternative forced choice comparisons. In CTA of the circle of Willis, higher IR strengths and greater vessel sizes allowed higher degrees of radiation dose

  18. a Comparison of Simulated Annealing, Genetic Algorithm and Particle Swarm Optimization in Optimal First-Order Design of Indoor Tls Networks

    Science.gov (United States)

    Jia, F.; Lichti, D.

    2017-09-01

    The optimal network design problem has been well addressed in geodesy and photogrammetry but has not received the same attention for terrestrial laser scanner (TLS) networks. The goal of this research is to develop a complete design system that can automatically provide an optimal plan for high-accuracy, large-volume scanning networks. The aim in this paper is to use three heuristic optimization methods, simulated annealing (SA), genetic algorithm (GA) and particle swarm optimization (PSO), to solve the first-order design (FOD) problem for a small-volume indoor network and to compare their performance. The room is simplified as discretized wall segments and possible viewpoints. Each possible viewpoint is evaluated with a score table representing the wall segments visible from that viewpoint based on scanning geometry constraints. The goal is to find a minimum number of viewpoints that can obtain complete coverage of all wall segments with a minimal sum of incidence angles. The different methods have been implemented and compared in terms of the quality of the solutions, runtime and repeatability. The experiment environment was simulated from a room located on the University of Calgary campus where multiple scans are required due to occlusions from interior walls. The results obtained in this research show that PSO and GA provide similar solutions, while SA does not guarantee an optimal solution within a limited number of iterations. Overall, GA is considered the best choice for this problem, based on its ability to provide an optimal solution and its smaller number of parameters to tune.
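
    A minimal Python sketch of the simulated-annealing variant on a toy version of this problem, with a random visibility table standing in for the score table and a penalty-based cost that prefers full coverage with few viewpoints; the cooling schedule and all parameters are illustrative assumptions.

    import math
    import random

    random.seed(3)
    n_viewpoints, n_segments = 20, 40
    # Hypothetical visibility table: which wall segments each candidate viewpoint covers.
    visible = [{s for s in range(n_segments) if random.random() < 0.3}
               for _ in range(n_viewpoints)]

    def cost(selection):
        covered = set()
        for v in selection:
            covered |= visible[v]
        # heavily penalize uncovered segments, then prefer fewer scan stations
        return 100 * (n_segments - len(covered)) + len(selection)

    current = {random.randrange(n_viewpoints)}
    best, T = set(current), 10.0
    for step in range(5000):
        neighbour = set(current)
        neighbour.symmetric_difference_update({random.randrange(n_viewpoints)})  # toggle one viewpoint
        delta = cost(neighbour) - cost(current)
        if delta < 0 or random.random() < math.exp(-delta / T):   # Metropolis acceptance
            current = neighbour
        if cost(current) < cost(best):
            best = set(current)
        T *= 0.999                                                # geometric cooling schedule
    print(len(best), cost(best))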

  19. Comparison between GPR measurements and ultrasonic tomography with different inversion algorithms: an application to the base of an ancient Egyptian sculpture

    International Nuclear Information System (INIS)

    Sambuelli, L; Bohm, G; Capizzi, P; Cosentino, P; Cardarelli, E

    2011-01-01

    By late 2008 one of the most important pieces of the 'Museo delle Antichità Egizie' of Turin, the sculpture of the Pharaoh with god Amun, was planned to be one of the masterpieces of a travelling exhibition in Japan. The 'Fondazione Museo delle Antichità Egizie di Torino', which manages the museum, was concerned with the integrity of the base of the statue, which presents visible signs of restoration dating back to the early 19th century. It was required to estimate the persistence of the visible fractures, to search for unknown ones and to provide information about the overall mechanical strength of the base. To tackle the first question, a GPR reflection survey along three sides of the base was performed and the results were assembled in a 3D rendering. As far as the second question is concerned, two parallel, horizontal ultrasonic 2D tomograms across the base were made. We acquired, for each section, 723 ultrasonic signals corresponding to different transmitter and receiver positions. The tomographic data were inverted using four different software packages based upon different algorithms. The obtained velocity images were then compared with each other, with the GPR results and with the visible fractures in the base. A critical analysis of the comparisons is finally presented

  20. A Comparison of the SeaWiFS Chlorophyll and CZCS Pigment Algorithms Using Optical Data From the 1992 JGOFS Equatorial Pacific Time Series

    National Research Council Canada - National Science Library

    Rhea, W. J; Davis, Curtiss O

    1997-01-01

    .... This data set represents the range of conditions expected in this region, and was used to compare the SeaWiFS chlorophyll-a algorithm with the CZCS pigment algorithm, as well as test the validity...

  1. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD without ID: A Multi-site Study

    OpenAIRE

    Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L; Yerys, Benjamin E; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth

    2015-01-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised algorithm demonstrated increased sensitivity, but lower specificity in the overall sample. Estimates were highest for females, individuals with a verb...

  2. Application of Hybrid Optimization Algorithm in the Synthesis of Linear Antenna Array

    Directory of Open Access Journals (Sweden)

    Ezgi Deniz Ülker

    2014-01-01

    Full Text Available The use of hybrid algorithms for solving real-world optimization problems has become popular since their solution quality can be made better than the algorithms that form them by combining their desirable features. The newly proposed hybrid method which is called Hybrid Differential, Particle, and Harmony (HDPH algorithm is different from the other hybrid forms since it uses all features of merged algorithms in order to perform efficiently for a wide variety of problems. In the proposed algorithm the control parameters are randomized which makes its implementation easy and provides a fast response. This paper describes the application of HDPH algorithm to linear antenna array synthesis. The results obtained with the HDPH algorithm are compared with three merged optimization techniques that are used in HDPH. The comparison shows that the performance of the proposed algorithm is comparatively better in both solution quality and robustness. The proposed hybrid algorithm HDPH can be an efficient candidate for real-time optimization problems since it yields reliable performance at all times when it gets executed.

  3. Replication and Comparison of the Newly Proposed ADOS-2, Module 4 Algorithm in ASD without ID: A Multi-Site Study

    Science.gov (United States)

    Pugliese, Cara E.; Kenworthy, Lauren; Bal, Vanessa Hus; Wallace, Gregory L.; Yerys, Benjamin E.; Maddox, Brenna B.; White, Susan W.; Popal, Haroon; Armour, Anna Chelsea; Miller, Judith; Herrington, John D.; Schultz, Robert T.; Martin, Alex; Anthony, Laura Gutermuth

    2015-01-01

    Recent updates have been proposed to the Autism Diagnostic Observation Schedule-2 Module 4 diagnostic algorithm. This new algorithm, however, has not yet been validated in an independent sample without intellectual disability (ID). This multi-site study compared the original and revised algorithms in individuals with ASD without ID. The revised…

  4. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  5. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations

    DEFF Research Database (Denmark)

    Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk

    2007-01-01

    A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam...... a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (rho = 0.035 g cm(-3)), enhanced for the most energetic beam. For denser...

  6. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    1999-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint ranking algorithm for learning Optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and

  7. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    2001-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and

  8. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  9. Optimized candidal biofilm microtiter assay

    NARCIS (Netherlands)

    Krom, Bastiaan P.; Cohen, Jesse B.; Feser, Gail E. McElhaney; Cihlar, Ronald L.

    Microtiter based candidal biofilm formation is commonly being used. Here we describe the analysis of factors influencing the development of candidal biofilms such as the coating with serum, growth medium and pH. The data reported here show that optimal candidal biofilm formation is obtained when

  10. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  11. Flow enforcement algorithms for ATM networks

    DEFF Research Database (Denmark)

    Dittmann, Lars; Jacobsen, Søren B.; Moth, Klaus

    1991-01-01

    Four measurement algorithms for flow enforcement in asynchronous transfer mode (ATM) networks are presented. The algorithms are the leaky bucket, the rectangular sliding window, the triangular sliding window, and the exponentially weighted moving average. A comparison, based partly on teletraffic...
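
    Of the four, the leaky bucket is the most widely cited. A minimal Python sketch of a continuous-state leaky-bucket policer follows; the increment and limit parameters are illustrative assumptions, not the values studied in the paper.

    def leaky_bucket(arrival_times, increment=10.0, limit=50.0):
        """Continuous-state leaky bucket: each conforming cell adds `increment`
        to the bucket, which drains at unit rate; cells that would push the
        level above `limit` are flagged as non-conforming."""
        level, last_time, decisions = 0.0, 0.0, []
        for t in arrival_times:
            level = max(0.0, level - (t - last_time))   # drain since last arrival
            last_time = t
            if level + increment <= limit:
                level += increment
                decisions.append((t, "conform"))
            else:
                decisions.append((t, "discard"))
        return decisions

    print(leaky_bucket([0, 1, 2, 3, 4, 5, 30, 31, 32]))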

  12. A comparison of performance of automatic cloud coverage assessment algorithm for Formosat-2 image using clustering-based and spatial thresholding methods

    Science.gov (United States)

    Hsu, Kuo-Hsien

    2012-11-01

    Formosat-2 imagery is a kind of high-spatial-resolution (2 m GSD) remote sensing satellite data that includes one panchromatic band and four multispectral bands (blue, green, red, near-infrared). An essential step in the daily processing of received Formosat-2 images is to estimate the cloud statistics of each image using the Automatic Cloud Coverage Assessment (ACCA) algorithm. The cloud statistics are subsequently recorded as important metadata in the image product catalog. In this paper, we propose an ACCA method with two consecutive stages: pre-processing and post-processing analysis. For the pre-processing analysis, unsupervised K-means classification, Sobel's method, a thresholding method, non-cloudy pixel reexamination, and a cross-band filter method are implemented in sequence for cloud statistics determination. For the post-processing analysis, the box-counting fractal method is implemented. In other words, the cloud statistics are first determined via the pre-processing analysis, and their correctness across the different spectral bands is then cross-examined qualitatively and quantitatively via the post-processing analysis. The selection of an appropriate thresholding method is critical to the result of the ACCA method. Therefore, in this work, we first conduct a series of experiments on clustering-based and spatial thresholding methods, including Otsu's, Local Entropy (LE), Joint Entropy (JE), Global Entropy (GE), and Global Relative Entropy (GRE) methods, for performance comparison. The result shows that Otsu's and GE methods both perform better than the others for Formosat-2 images. Additionally, our proposed ACCA method, with Otsu's method selected as the thresholding method, successfully extracts the cloudy pixels of Formosat-2 images for accurate cloud statistics estimation.
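
    Since Otsu's method was found to work well here, a minimal NumPy sketch of it applied to a synthetic single-band reflectance histogram is given below; the toy pixel distribution and the simple cloud-fraction estimate are illustrative assumptions, not the full ACCA chain.

    import numpy as np

    def otsu_threshold(values, bins=256):
        """Otsu's method: choose the threshold that maximizes between-class variance."""
        hist, edges = np.histogram(values, bins=bins)
        p = hist.astype(float) / hist.sum()
        centers = 0.5 * (edges[:-1] + edges[1:])
        best_t, best_var = centers[0], -1.0
        for k in range(1, bins):
            w0, w1 = p[:k].sum(), p[k:].sum()
            if w0 == 0 or w1 == 0:
                continue
            mu0 = (p[:k] * centers[:k]).sum() / w0
            mu1 = (p[k:] * centers[k:]).sum() / w1
            between = w0 * w1 * (mu0 - mu1) ** 2
            if between > best_var:
                best_var, best_t = between, centers[k]
        return best_t

    rng = np.random.default_rng(4)
    band = np.concatenate([rng.normal(0.2, 0.05, 9000),     # dark surface pixels
                           rng.normal(0.8, 0.05, 1000)])    # bright cloudy pixels
    t = otsu_threshold(band)
    print(f"threshold = {t:.2f}, cloud coverage = {(band > t).mean():.1%}")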

  13. A comparison between genetic algorithms and neural networks for optimizing fuel recharges in BWR; Una comparacion entre algoritmos geneticos y redes neuronales para optimizar recargas de combustible en BWR's

    Energy Technology Data Exchange (ETDEWEB)

    Ortiz J, J. [Instituto Nacional de Investigaciones Nucleares, Depto. Sistemas Nucleares, A.P. 18-1027, 11801 Mexico D.F. (Mexico); Requena, I. [Universidad de Granada (Spain)

    2002-07-01

    In this work, the results of a genetic algorithm (GA) and a recurrent multi-state neural network (RNRME) for optimizing the fuel reloads of 5 cycles of the Laguna Verde nuclear power plant (CNLV) are presented. The fuel reloads obtained by the two methods are compared, and it was observed that the RNRME creates better fuel distributions than the GA. Moreover, a comparison of the utility of using one or the other technique is made. (Author)

  14. A comparison of several cluster algorithms on artificial binary data [Part 2]. Scenarios from travel market segmentation. Part 2 (Addition to Working Paper No. 7).

    OpenAIRE

    Dolnicar, Sara; Leisch, Friedrich; Steiner, Gottfried; Weingessel, Andreas

    1998-01-01

    The search for clusters in empirical data is an important and often encountered research problem. Numerous algorithms exist that are able to render groups of objects or individuals. Of course each algorithm has its strengths and weaknesses. In order to identify these crucial points artificial data was generated - based primarily on experience with structures of empirical data - and used as benchmark for evaluating the results of numerous cluster algorithms. This work is an addition to SFB Wor...

  15. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  16. Global alignment algorithms implementations | Fatumo ...

    African Journals Online (AJOL)

    In this paper, we implemented the two routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a Linux platform (1.60 GHz, 512 MB of RAM, SUSE 9.2 and 10.1 versions).
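
    The core Needleman-Wunsch dynamic program fits in a few lines of Python; the sketch below uses unit match/mismatch/gap scores, which are illustrative assumptions since the abstract does not specify the authors' scoring scheme.

    def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
        """Global alignment score by dynamic programming (score matrix only)."""
        n, m = len(a), len(b)
        F = [[0] * (m + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            F[i][0] = i * gap
        for j in range(1, m + 1):
            F[0][j] = j * gap
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                diag = F[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                F[i][j] = max(diag, F[i - 1][j] + gap, F[i][j - 1] + gap)
        return F[n][m]

    print(needleman_wunsch("GATTACA", "GCATGCU"))   # classic toy example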

  17. A comparison of accuracy and precision of 5 gait-event detection algorithms from motion capture in horses during over ground walk

    DEFF Research Database (Denmark)

    Olsen, Emil; Boye, Jenny Katrine; Pfau, Thilo

    2012-01-01

    and use robust and validated algorithms. It is the objective of this study to compare accuracy (bias) and precision (SD) for five published human and equine motion capture foot-on/off and stance phase detection algorithms during walk. Six horses were walked over 8 seamlessly embedded force plates...... of mass generally provides the most accurate and precise results in walk....

  18. Comparison between different tomographic reconstruction algorithms in nuclear medicine imaging; Comparacion entre distintos algoritmos de reconstruccion tomografica en imagenes de medicina nuclear

    Energy Technology Data Exchange (ETDEWEB)

    Llacer Martos, S.; Herraiz Lablanca, M. D.; Puchal Ane, R.

    2011-07-01

    This paper evaluates the image quality obtained with each of the algorithms and their running times, in order to optimize the choice of algorithm taking into account both the quality of the reconstructed image and the time spent on the reconstruction.

  19. A Clustal Alignment Improver Using Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Thomsen, Rene; Fogel, Gary B.; Krink, Thimo

    2002-01-01

    Multiple sequence alignment (MSA) is a crucial task in bioinformatics. In this paper we extended previous work with evolutionary algorithms (EA) by using MSA solutions obtained from the wellknown Clustal V algorithm as a candidate solution seed of the initial EA population. Our results clearly show...

  20. Binar Sort: A Linear Generalized Sorting Algorithm

    OpenAIRE

    Gilreath, William F.

    2008-01-01

    Sorting is a common and ubiquitous activity for computers. It is not surprising that there exist a plethora of sorting algorithms. For all the sorting algorithms, it is an accepted performance limit that sorting algorithms are linearithmic or O(N lg N). The linearithmic lower bound in performance stems from the fact that the sorting algorithms use the ordering property of the data. The sorting algorithm uses comparison by the ordering property to arrange the data elements from an initial perm...

  1. Ensemble candidate classification for the LOTAAS pulsar survey

    Science.gov (United States)

    Tan, C. M.; Lyon, R. J.; Stappers, B. W.; Cooper, S.; Hessels, J. W. T.; Kondratiev, V. I.; Michilli, D.; Sanidas, S.

    2018-03-01

    One of the biggest challenges arising from modern large-scale pulsar surveys is the number of candidates generated. Here, we implemented several improvements to the machine learning (ML) classifier previously used by the LOFAR Tied-Array All-Sky Survey (LOTAAS) to look for new pulsars via filtering the candidates obtained during periodicity searches. To assist the ML algorithm, we have introduced new features which capture the frequency and time evolution of the signal and improved the signal-to-noise calculation accounting for broad profiles. We enhanced the ML classifier by including a third class characterizing RFI instances, allowing candidates arising from RFI to be isolated, reducing the false positive return rate. We also introduced a new training data set used by the ML algorithm that includes a large sample of pulsars misclassified by the previous classifier. Lastly, we developed an ensemble classifier comprised of five different Decision Trees. Taken together these updates improve the pulsar recall rate by 2.5 per cent, while also improving the ability to identify pulsars with wide pulse profiles, often misclassified by the previous classifier. The new ensemble classifier is also able to reduce the percentage of false positive candidates identified from each LOTAAS pointing from 2.5 per cent (˜500 candidates) to 1.1 per cent (˜220 candidates).
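
    A minimal sketch of an ensemble of five decision trees combined by voting, using scikit-learn and synthetic imbalanced data standing in for the per-candidate features; the features, class balance, and tree parameters are illustrative assumptions, not the LOTAAS classifier itself.

    from sklearn.datasets import make_classification
    from sklearn.ensemble import VotingClassifier
    from sklearn.model_selection import cross_val_score
    from sklearn.tree import DecisionTreeClassifier

    # Synthetic stand-in for per-candidate features (the survey's real features
    # describe the folded profile and its time/frequency evolution).
    X, y = make_classification(n_samples=600, n_features=12, n_informative=6,
                               weights=[0.9, 0.1], random_state=5)

    # Five trees made distinct by random feature subsampling, combined by majority vote.
    ensemble = VotingClassifier(
        estimators=[(f"tree{i}", DecisionTreeClassifier(max_depth=6, max_features=0.7,
                                                        random_state=i))
                    for i in range(5)],
        voting="hard")
    print(cross_val_score(ensemble, X, y, cv=5, scoring="recall").mean())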

  2. Comparison of the ESTRO formalism for monitor unit calculation with a Clarkson based algorithm of a treatment planning system and a traditional ''full-scatter'' methodology

    International Nuclear Information System (INIS)

    Pirotta, M.; Aquilina, D.; Bhikha, T.; Georg, D.

    2005-01-01

    The ESTRO formalism for monitor unit (MU) calculations was evaluated and implemented to replace a previous methodology based on dosimetric data measured in a full-scatter phantom. This traditional method relies on data normalised at the depth of dose maximum (z m ), as well as on the utilisation of the BJR 25 table for the conversion of rectangular fields into equivalent square fields. The treatment planning system (TPS) was subsequently updated to reflect the new beam data normalised at a depth z R of 10 cm. Comparisons were then carried out between the ESTRO formalism, the Clarkson-based dose calculation algorithm on the TPS (with beam data normalised at z m and z R ), and the traditional ''full-scatter'' methodology. All methodologies, except for the ''full-scatter'' methodology, separated head-scatter from phantom-scatter effects, and none of the methodologies, except for the ESTRO formalism, utilised wedge depth dose information for calculations. The accuracy of MU calculations was verified against measurements in a homogeneous phantom for square and rectangular open and wedged fields, as well as blocked open and wedged fields, at 5, 10, and 20 cm depths, under fixed SSD and isocentric geometries for 6 and 10 MV. Overall, the ESTRO Formalism showed the most accurate performance, with the root mean square (RMS) error with respect to measurements remaining below 1% even for the most complex beam set-ups investigated. The RMS error for the TPS deteriorated with the introduction of a wedge, with a worse RMS error for the beam data normalised at z m (4% at 6 MV and 1.6% at 10 MV) than at z R (1.9% at 6 MV and 1.1% at 10 MV). The further addition of blocking had only a marginal impact on the accuracy of this methodology. The ''full-scatter'' methodology showed a loss in accuracy for calculations involving either wedges or blocking, and performed worst for blocked wedged fields (RMS errors of 7.1% at 6 MV and 5% at 10 MV). The origins of these discrepancies were

  3. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system) work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and not generally adopted in real practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is equivalent to the morphological opening/closing in theory. The algorithm depends on Delaunay triangulation with time complexity O(nlogn). In comparison to the naive algorithms it generates the opening and closing envelope without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well both for morphological profile and area filters. Examples are presented to demonstrate the validity and superiority on efficiency of this algorithm over the naive algorithm.
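
    As a rough illustration of the closing envelope that the record's alpha-shape method computes efficiently, the sketch below applies a naive grayscale closing (dilation followed by erosion) with a flat structuring element from scipy.ndimage to a synthetic profile. The paper's rolling-ball element, areal data, edge padding, and spike handling are not reproduced; the profile and window size are illustrative.

```python
import numpy as np
from scipy.ndimage import grey_closing

# Synthetic surface profile: a waveform with fine-scale texture.
x = np.linspace(0.0, 10.0, 1001)
profile = np.sin(x) + 0.2 * np.sin(15 * x)

# Naive closing envelope with a flat structuring element of 51 samples:
# dilation followed by erosion, the classical definition that the
# alpha-shape algorithm reproduces much faster for ball elements.
envelope = grey_closing(profile, size=51)

# The closing is extensive, so the envelope never dips below the profile.
print("min(envelope - profile) =", float((envelope - profile).min()))
```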

  4. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    International Nuclear Information System (INIS)

    Choo, Ji Yung; Goo, Jin Mo; Park, Chang Min; Park, Sang Joon; Lee, Chang Hyun; Shim, Mi-Suk

    2014-01-01

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT scans obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area as well as average wall thickness. Accuracy of airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate value for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm on quantitative analysis of the lung. (orig.)
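
    The emphysema index used in this record is the percentage of lung voxels below -950 HU. The sketch below computes that quantity on a toy volume; the array values, mask, and dimensions are illustrative stand-ins for a real CT volume and lung segmentation.

```python
import numpy as np

def emphysema_index(ct_hu, lung_mask, threshold=-950.0):
    """Percentage of lung voxels with attenuation below `threshold` (HU)."""
    lung_voxels = ct_hu[lung_mask]
    return 100.0 * np.count_nonzero(lung_voxels < threshold) / lung_voxels.size

# Toy volume: mostly "normal lung" around -850 HU with a small low-attenuation patch.
rng = np.random.default_rng(0)
ct = rng.normal(-850.0, 40.0, size=(40, 64, 64))
ct[:10, :16, :16] = -980.0                      # simulated emphysematous region
mask = np.ones_like(ct, dtype=bool)             # pretend the whole volume is lung
print(f"EI = {emphysema_index(ct, mask):.2f} %")
```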

  5. Quantitative analysis of emphysema and airway measurements according to iterative reconstruction algorithms: comparison of filtered back projection, adaptive statistical iterative reconstruction and model-based iterative reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Choo, Ji Yung [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Korea University Ansan Hospital, Ansan-si, Department of Radiology, Gyeonggi-do (Korea, Republic of); Goo, Jin Mo; Park, Chang Min; Park, Sang Joon [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University, Cancer Research Institute, Seoul (Korea, Republic of); Lee, Chang Hyun; Shim, Mi-Suk [Seoul National University Medical Research Center, Department of Radiology, Seoul National University College of Medicine, and Institute of Radiation Medicine, Seoul (Korea, Republic of)

    2014-04-15

    To evaluate filtered back projection (FBP) and two iterative reconstruction (IR) algorithms and their effects on the quantitative analysis of lung parenchyma and airway measurements on computed tomography (CT) images. Low-dose chest CT obtained in 281 adult patients were reconstructed using three algorithms: FBP, adaptive statistical IR (ASIR) and model-based IR (MBIR). Measurements of each dataset were compared: total lung volume, emphysema index (EI), airway measurements of the lumen and wall area as well as average wall thickness. Accuracy of airway measurements of each algorithm was also evaluated using an airway phantom. EI using a threshold of -950 HU was significantly different among the three algorithms in decreasing order of FBP (2.30 %), ASIR (1.49 %) and MBIR (1.20 %) (P < 0.01). Wall thickness was also significantly different among the three algorithms with FBP (2.09 mm) demonstrating thicker walls than ASIR (2.00 mm) and MBIR (1.88 mm) (P < 0.01). Airway phantom analysis revealed that MBIR showed the most accurate value for airway measurements. The three algorithms presented different EIs and wall thicknesses, decreasing in the order of FBP, ASIR and MBIR. Thus, care should be taken in selecting the appropriate IR algorithm on quantitative analysis of the lung. (orig.)

  6. Percentage depth dose calculation accuracy of model based algorithms in high energy photon small fields through heterogeneous media and comparison with plastic scintillator dosimetry.

    Science.gov (United States)

    Alagar, Ananda Giri Babu; Mani, Ganesh Kadirampatti; Karunakaran, Kaviarasu

    2016-01-08

    Small fields smaller than 4 × 4 cm2 are used in stereotactic and conformal treatments where heterogeneity is normally present. Since dose calculation accuracy in both small fields and heterogeneity often involves larger discrepancies, algorithms used by treatment planning systems (TPS) should be evaluated to achieve better treatment results. This report aims at evaluating the accuracy of four model-based algorithms: X-ray Voxel Monte Carlo (XVMC) from Monaco, Superposition (SP) from CMS-Xio, and AcurosXB (AXB) and the analytical anisotropic algorithm (AAA) from Eclipse, all tested against measurement. Measurements are done using an Exradin W1 plastic scintillator in a Solid Water phantom with heterogeneities such as air, lung, bone, and aluminum, irradiated with 6 and 15 MV photons of square field sizes ranging from 1 × 1 to 4 × 4 cm2. Each heterogeneity is introduced individually at two different depths from the depth of dose maximum (Dmax), one setup being nearer to and the other farther from Dmax. The central axis percentage depth-dose (CADD) curve for each setup is measured separately and compared with the TPS algorithm calculation for the same setup. The percentage normalized root mean squared deviation (%NRMSD) is calculated, which represents the whole CADD curve's deviation from the measured curve. It is found that for air and lung heterogeneity, for both 6 and 15 MV, all algorithms show maximum deviation for the 1 × 1 cm2 field size, and the deviation gradually reduces as the field size increases, except for AAA. For aluminum and bone, all algorithms' deviations are smaller at 15 MV irrespective of setup. In all heterogeneity setups, the 1 × 1 cm2 field showed maximum deviation, except in the 6 MV bone setup. For all algorithms in the study, irrespective of energy and field size, the dose deviation is higher when any heterogeneity is nearer to Dmax than when the same heterogeneity is far from Dmax. Also, all algorithms show maximum deviation in lower-density materials compared to high-density materials.
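
    The %NRMSD metric named in this record summarizes the deviation of a calculated depth-dose curve from the measured one. The sketch below computes such a figure for two toy curves; the abstract does not state the normalization convention, so normalizing the RMS deviation to the measured maximum is an assumption, and the curves are synthetic.

```python
import numpy as np

def pct_nrmsd(calculated, measured):
    """%NRMSD of a calculated depth-dose curve against measurement,
    normalized to the measured maximum (assumed convention)."""
    calculated = np.asarray(calculated, dtype=float)
    measured = np.asarray(measured, dtype=float)
    rmsd = np.sqrt(np.mean((calculated - measured) ** 2))
    return 100.0 * rmsd / measured.max()

depth = np.linspace(0, 20, 41)                     # cm
measured = 100.0 * np.exp(-0.05 * depth)           # toy percentage depth-dose curve
calculated = measured * (1.0 + 0.02 * np.sin(depth))
print(f"%NRMSD = {pct_nrmsd(calculated, measured):.2f}")
```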

  7. Decoding the encoding of functional brain networks: An fMRI classification comparison of non-negative matrix factorization (NMF), independent component analysis (ICA), and sparse coding algorithms.

    Science.gov (United States)

    Xie, Jianwen; Douglas, Pamela K; Wu, Ying Nian; Brody, Arthur L; Anderson, Ariana E

    2017-04-15

    Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet other mathematical constraints provide alternate biologically-plausible frameworks for generating brain networks. Non-negative matrix factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms (L1 Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking, where the total observed activity in a single voxel originates from a restricted number of possible brain networks. The assumptions of independence, positivity, and sparsity to encode task-related brain networks are compared; the resulting brain networks within scan for different constraints are used as basis functions to encode observed functional activity. These encodings are then decoded using machine learning, by using the time series weights to predict within scan whether a subject is viewing a video, listening to an audio cue, or at rest, in 304 fMRI scans from 51 subjects. The sparse coding algorithm of L1 Regularized Learning outperformed 4 variations of ICA as well as the other coding algorithms. Holding constant the effect of the extraction algorithm, encodings using sparser spatial networks (containing more zero-valued voxels) had higher classification accuracy. The superior performance of sparse coding algorithms suggests that algorithms which enforce sparsity, discourage multitasking, and promote local specialization may capture better the underlying source processes than those which allow inexhaustible local processes such as ICA. Negative BOLD signal may capture task-related activations. Copyright © 2017 Elsevier B.V. All rights reserved.
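
    A minimal sketch of applying the three constraints discussed above (independence, positivity, sparsity) to one data matrix with scikit-learn is shown below. The toy matrix stands in for fMRI voxel time series, the downstream classification step is omitted, and the number of components is arbitrary; note that in DictionaryLearning the L1 penalty makes the transform codes sparse rather than the dictionary atoms.

```python
import numpy as np
from sklearn.decomposition import NMF, FastICA, DictionaryLearning

rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(120, 400)))   # toy "time points x voxels" matrix (non-negative for NMF)
k = 10                                    # number of spatial networks to extract

ica = FastICA(n_components=k, random_state=0).fit(X)           # independence constraint
nmf = NMF(n_components=k, init="nndsvda", max_iter=500,
          random_state=0).fit(X)                                # positivity constraint
dl = DictionaryLearning(n_components=k, alpha=1.0,
                        random_state=0).fit(X)                  # L1 sparsity on the codes

for name, comps in [("ICA", ica.components_),
                    ("NMF", nmf.components_),
                    ("DictionaryLearning", dl.components_)]:
    print(name, "spatial components:", comps.shape)
```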

  8. A comparison of genetic algorithm and artificial bee colony approaches in solving blocking hybrid flowshop scheduling problem with sequence dependent setup/changeover times

    Directory of Open Access Journals (Sweden)

    Pongpan Nakkaew

    2016-06-01

    Full Text Available In a manufacturing process where efficiency is crucial in order to remain competitive, the flowshop is a common configuration in which machines are arranged in series and products are produced through the stages one by one. In certain production processes, the machines are frequently configured in such a way that each production stage may contain multiple processing units in parallel, i.e. a hybrid flowshop. Moreover, along with precedence conditions, sequence dependent setup times may exist. Finally, in case there is no buffer, a machine is said to be blocked if the next stage to handle its output is occupied. For such an NP-hard problem, referred to as the Blocking Hybrid Flowshop Scheduling Problem with Sequence Dependent Setup/Changeover Times, it is usually not possible to find the exact optimal solution for objectives such as minimization of the overall production time. Thus, it is usually solved by approximate algorithms such as metaheuristics. In this paper, we investigate comparatively the effectiveness of two approaches: a Genetic Algorithm (GA) and an Artificial Bee Colony (ABC) algorithm. GA is inspired by the process of natural selection. ABC, in the same manner, resembles the way types of bees perform specific functions and work collectively to find their food by means of division of labor. Additionally, we apply an algorithm to improve the GA and ABC algorithms so that they can take advantage of the parallel processing resources of modern multi-core processors while eliminating the need for screening the optimal parameters of both algorithms in advance.

  9. Comparison of 30-2 Standard and Fast programs of Swedish Interactive Threshold Algorithm of Humphrey Field Analyzer for perimetry in patients with intracranial tumors.

    Science.gov (United States)

    Singh, Manav Deep; Jain, Kanika

    2017-11-01

    To find out whether 30-2 Swedish Interactive Threshold Algorithm (SITA) Fast is comparable to 30-2 SITA Standard as a tool for perimetry among patients with intracranial tumors. This was a prospective cross-sectional study involving 80 patients aged ≥18 years with imaging proven intracranial tumors and visual acuity better than 20/60. The patients underwent multiple visual field examinations using the two algorithms until consistent and repeatable results were obtained. A total of 140 eyes of 80 patients were analyzed. Almost 60% of patients undergoing perimetry with SITA Standard required two or more sessions to obtain consistent results, whereas the same could be obtained in 81.42% with SITA Fast in the first session itself. Of 140 eyes, 70 eyes had recordable field defects and the rest had no defects as detected by either of the two algorithms. Mean deviation (MD) (P = 0.56), pattern standard deviation (PSD) (P = 0.22), visual field index (P = 0.83) and the number of depressed points at P < 0.5% on MD and PSD probability plots showed no statistically significant difference between the two algorithms. The Bland-Altman test showed that considerable variability existed between the two algorithms. Perimetry performed by the SITA Standard and SITA Fast algorithms of the Humphrey Field Analyzer gives comparable results among patients with intracranial tumors. Being more time efficient and with a shorter learning curve, SITA Fast may be recommended as a standard test for the purpose of perimetry among these patients.

  10. Comparison of the effects of model-based iterative reconstruction and filtered back projection algorithms on software measurements in pulmonary subsolid nodules.

    Science.gov (United States)

    Cohen, Julien G; Kim, Hyungjin; Park, Su Bin; van Ginneken, Bram; Ferretti, Gilbert R; Lee, Chang Hyun; Goo, Jin Mo; Park, Chang Min

    2017-08-01

    To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and those of their solid components were significantly higher using MBIR (p < 0.05). The limits of agreement between FBP and MBIR were within the range of intra- and interobserver variability for both algorithms with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs significantly differed between FBP and MBIR; however, the differences were within the range of measurement variability. • Intra- and interobserver reproducibility of measurements did not differ between FBP and MBIR. • Differences in SSNs' semi-automatic measurement induced by reconstruction algorithms were not clinically significant. • Semi-automatic measurement may be conducted regardless of reconstruction algorithm. • SSNs' semi-automated classification agreement (pure vs. part-solid) did not significantly differ between algorithms.

  11. Comparison of 2D and 3D algorithms for adding a margin to the gross tumor volume in the conformal radiotherapy planning of prostate cancer

    International Nuclear Information System (INIS)

    Khoo, Vincent S.; Bedford, James L.; Webb, Steve; Dearnaley, David P.

    1998-01-01

    Purpose: To evaluate the adequacy of tumor volume coverage using a three-dimensional (3D) margin-growing algorithm compared to a two-dimensional (2D) margin-growing algorithm in the conformal radiotherapy planning of prostate cancer. Methods and Materials: Two gross tumor volumes (GTV) were segmented in each of 10 patients with localized prostate cancer; prostate gland only (PO) and prostate with seminal vesicles (PSV). A predetermined margin of 10 mm was applied to these two groups (PO and PSV) using both 2D and 3D margin-growing algorithms. The 2D algorithm added a transaxial margin to each GTV slice, whereas the 3D algorithm added a volumetric margin all around the GTV. The true planning target volume (PTV) was defined as the region delineated by the 3D algorithm. The adequacy of geometric coverage of the GTV by the two algorithms was examined in a series of transaxial planes throughout the target volume. Results: The 2D margin-growing algorithm underestimated the PTV by 17% (range 12-20) in the PO group and by 20% (range 13-28) for the PSV group when compared to the 3D-margin algorithm. For the PO group, the mean transaxial difference between the 2D and 3D algorithm was 3.8 mm inferiorly (range 0-20), 1.8 mm centrally (range 0-9), and 4.4 mm superiorly (range 0-22). Considering all of these regions, the mean discrepancy anteriorly was 5.1 mm (range 0-22), posteriorly 2.2 (range 0-20), right border 2.8 mm (range 0-14), and left border 3.1 mm (range 0-12). For the PSV group, the mean discrepancy in the inferior region was 3.8 mm (range 0-20), central region of the prostate was 1.8 mm ( range 0-9), the junction region of the prostate and the seminal vesicles was 5.5 mm (range 0-30), and the superior region of the seminal vesicles was 4.2 mm (range 0-55). When the different borders were considered in the PSV group, the mean discrepancies for the anterior, posterior, right, and left borders were 6.4 mm (range 0-55), 2.5 mm (range 0-20), 2.6 mm (range 0-14), and 3
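
    The difference between slice-by-slice 2D margin growing and a true volumetric margin described in this record can be illustrated with binary dilation. The sketch below uses scipy.ndimage on a synthetic spherical GTV; the grid spacing, GTV size, and 10 mm margin are illustrative and not the patient data of the study.

```python
import numpy as np
from scipy.ndimage import binary_dilation

# Spherical GTV on a 2 mm isotropic grid.
shape, voxel_mm, margin_mm = (60, 60, 60), 2.0, 10.0
z, y, x = np.indices(shape)
gtv = (z - 30) ** 2 + (y - 30) ** 2 + (x - 30) ** 2 <= (20.0 / voxel_mm) ** 2

r = int(round(margin_mm / voxel_mm))
idx = np.indices((2 * r + 1,) * 3) - r
ball = (idx ** 2).sum(axis=0) <= r ** 2           # 3D spherical structuring element
disk = ball[r]                                     # its central slice: a 2D disk

ptv_3d = binary_dilation(gtv, structure=ball)                          # volumetric margin
ptv_2d = np.stack([binary_dilation(s, structure=disk) for s in gtv])   # per-slice margin

print("3D-margin voxels:", int(ptv_3d.sum()))
print("2D-margin voxels:", int(ptv_2d.sum()))
print("underestimate of 2D vs 3D margin: "
      f"{100.0 * (ptv_3d.sum() - ptv_2d.sum()) / ptv_3d.sum():.1f} %")
```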

  12. On the dosimetric behaviour of photon dose calculation algorithms in the presence of simple geometric heterogeneities: comparison with Monte Carlo calculations

    Science.gov (United States)

    Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk; Brink, Carsten; Clivio, Alessandro; Knöös, Tommy; Nicolini, Giorgia; Cozzi, Luca

    2007-03-01

    A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam with modified Batho correction; Nucletron Helax-TMS: the collapsed cone and the pencil beam with equivalent path length correction; CMS XiO: the multigrid superposition and the fast Fourier transform convolution; Philips Pinnacle: the collapsed cone. Monte Carlo simulations (MC) performed with the EGSnrc codes BEAMnrc and DOSxyznrc from NRCC in Ottawa were used as a benchmark. The study was carried out in simple geometrical water phantoms (ρ = 1.00 g cm-3) with inserts of different densities simulating light lung tissue (ρ = 0.035 g cm-3), normal lung (ρ = 0.20 g cm-3) and cortical bone tissue (ρ = 1.80 g cm-3). Experiments were performed for low- and high-energy photon beams (6 and 15 MV) and for square (13 × 13 cm2) and elongated rectangular (2.8 × 13 cm2) fields. Analysis was carried out on the basis of depth dose curves and transverse profiles at several depths. Assuming the MC data as reference, γ index analysis was carried out distinguishing between regions inside the non-water inserts or inside the uniform water. For this study, a distance to agreement was set to 3 mm while the dose difference varied from 2% to 10%. In general all algorithms based on pencil-beam convolutions showed a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (ρ = 0.035 g cm-3), enhanced for the most energetic beam. For denser, and more clinical, densities a better agreement among the sophisticated algorithms with respect to MC was observed.
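
    The gamma-index analysis used as the comparison metric in this record can be illustrated in one dimension. The sketch below implements a minimal global 1D gamma index with a 3 mm / 3% criterion on toy exponential curves; the study's full profiles, 2-10% dose-difference range, and 3D geometry are not reproduced.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dd_pct=3.0):
    """Global 1D gamma index of an evaluated dose curve against a reference.
    Gamma <= 1 means the point passes the (dta_mm, dd_pct) criterion."""
    dd_abs = dd_pct / 100.0 * d_ref.max()           # global dose-difference criterion
    gammas = np.empty_like(d_ref)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta_mm) ** 2
        dose2 = ((d_eval - dr) / dd_abs) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))  # best match over the evaluated curve
    return gammas

x = np.linspace(0.0, 100.0, 201)                    # mm
reference = 100.0 * np.exp(-0.02 * x)
evaluated = 100.0 * np.exp(-0.021 * x)              # slightly different attenuation
g = gamma_1d(x, reference, x, evaluated)
print(f"gamma pass rate: {100.0 * np.mean(g <= 1.0):.1f} %")
```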

  13. Application of a genetic algorithm in the conformational analysis of methylene-acetal-linked thymine dimers in DNA: Comparison with distance geometry calculations

    International Nuclear Information System (INIS)

    Beckers, Mischa L.M.; Buydens, Lutgarde M.C.; Pikkemaat, Jeroen A.; Altona, Cornelis

    1997-01-01

    The three-dimensional spatial structure of a methylene-acetal-linked thymine dimer present in a 10 base-pair (bp) sense-antisense DNA duplex was studied with a genetic algorithm designed to interpret NOE distance restraints. Trial solutions were represented by torsion angles. This means that bond angles for the dimer trial structures are kept fixed during the genetic algorithm optimization. Bond angle values were extracted from a 10 bp sense-antisense duplex model that was subjected to energy minimization by means of a modified AMBER force field. A set of 63 proton-proton distance restraints defining the methylene-acetal-linked thymine dimer was available. The genetic algorithm minimizes the difference between distances in the trial structures and distance restraints. A large conformational search space could be covered in the genetic algorithm optimization by allowing a wide range of torsion angles. The genetic algorithm optimization in all cases led to one family of structures. This family of the methylene-acetal-linked thymine dimer in the duplex differs from the family that was suggested from distance geometry calculations. It is demonstrated that the bond angle geometry around the methylene-acetal linkage plays an important role in the optimization

  14. Comparison of 2D and 3D algorithms for adding a margin to the gross tumor volume in the conformal radiotherapy planning of prostate cancer

    International Nuclear Information System (INIS)

    Khoo, V.S.; Bedford, J.L.; Webb, S.; Dearnaley, D.P.

    1997-01-01

    Purpose: To evaluate the adequacy of tumor volume coverage using a three dimensional (3D) margin growing algorithm compared to a two dimensional (2D) margin growing algorithm in the conformal radiotherapy planning of prostate cancer. Methods and Materials: Two gross tumor volumes (GTV) were segmented in each of ten patients with localized prostate cancer: prostate gland only (PO) and prostate with seminal vesicles (PSV). A margin of 10 mm was applied to these two groups (PO and PSV) using both the 2D and 3D margin growing algorithms. The true planning target volume (PTV) was defined as the region delineated by the 3D algorithm. Adequacy of geometric coverage of the GTV with the two algorithms was examined throughout the target volume. Discrepancies between the two margin methods were measured in the transaxial plane. Results: The 2D algorithm underestimated the PTV by 17% (range 12-20) in the PO group and by 20% (range 13-28) for the PSV group when compared to the 3D algorithm. For both the PO and PSV groups, the inferior coverage of the PTV was consistently underestimated by the 2D margin algorithm when compared to the 3D margins with a mean radial distance of 4.8 mm (range 0-10). In the central region of the prostate gland, the anterior, posterior, and lateral PTV borders were underestimated with the 2D margin in both the PO and PSV groups by a mean of 3.6 mm (range 0-9), 2.1 mm (range 0-8), and 1.8 (range 0-9) respectively. The PTV coverage of the PO group superiorly was radially underestimated by 4.5mm (range 0-14) when comparing the 2D margins to the 3D margins. For the PSV group, the junction region between the prostate and the seminal vesicles was underestimated by the 2D margin by a mean transaxial distance of 18.1 mm in the anterior PTV border (range 4-30), 7.2 mm posteriorly (range 0-20), and 3.7 mm laterally (range 0-14). The superior region of the seminal vesicles in the PSV group was also consistently underestimated with a radial discrepancy of 3.3 mm

  15. Dynamic Programming Algorithms in Speech Recognition

    Directory of Open Access Journals (Sweden)

    Titus Felix FURTUNA

    2008-01-01

    Full Text Available In a speech recognition system based on word recognition, recognition requires a comparison between the input signal of the word and the various words of the dictionary. The problem can be solved efficiently by a dynamic comparison algorithm whose goal is to put the temporal scales of the two words into optimal correspondence. An algorithm of this type is Dynamic Time Warping. This paper presents two alternatives for the implementation of the algorithm, designed for recognition of isolated words.
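
    A compact dynamic programming sketch of Dynamic Time Warping is given below, aligning two 1D sequences of different lengths. A real recognizer would compare frame-wise feature vectors (e.g. MFCCs) rather than raw samples; the sine-wave "words" here are only placeholders.

```python
import numpy as np

def dtw_distance(a, b):
    """Dynamic Time Warping cost between two 1D sequences,
    computed with the classic O(len(a) * len(b)) DP recurrence."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])             # local distance
            cost[i, j] = d + min(cost[i - 1, j],     # insertion
                                 cost[i, j - 1],     # deletion
                                 cost[i - 1, j - 1]) # match
    return cost[n, m]

word_template = np.sin(np.linspace(0, 3 * np.pi, 60))
spoken_word = np.sin(np.linspace(0, 3 * np.pi, 80))   # same "word", different tempo
print(f"DTW cost: {dtw_distance(word_template, spoken_word):.3f}")
```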

  16. Sorting protein lists with nwCompare: a simple and fast algorithm for n-way comparison of proteomic data files.

    Science.gov (United States)

    Pont, Frédéric; Fournié, Jean Jacques

    2010-03-01

    MS, the reference technology for proteomics, routinely produces large numbers of protein lists whose fast comparison would prove very useful. Unfortunately, most software tools only allow comparisons of two to three lists at once. We introduce here nwCompare, a simple tool for n-way comparison of several protein lists without any query language, and exemplify its use with differential and shared cancer cell proteomes. As the software compares character strings, it can be applied to any type of data mining, such as genomic or metabolomic data lists.
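
    nwCompare itself is not reproduced here; the sketch below only illustrates the underlying idea of an n-way comparison of identifier lists with plain set operations. The run names and accession strings are made-up placeholders.

```python
from functools import reduce

# Hypothetical protein lists from three MS runs (identifiers are illustrative).
runs = {
    "control":   {"P01308", "P68871", "P02768", "P12345"},
    "treated_A": {"P01308", "P02768", "Q9Y6K9", "P12345"},
    "treated_B": {"P01308", "Q9Y6K9", "P69905"},
}

shared = reduce(set.intersection, runs.values())        # proteins seen in every run
unique = {name: ids - set().union(*(other for n, other in runs.items() if n != name))
          for name, ids in runs.items()}                # proteins exclusive to one run

print("shared by all runs:", sorted(shared))
for name, ids in unique.items():
    print(f"only in {name}:", sorted(ids))
```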

  17. Comparison of Different Machine Learning Algorithms for Lithological Mapping Using Remote Sensing Data and Morphological Features: A Case Study in Kurdistan Region, NE Iraq

    Science.gov (United States)

    Othman, Arsalan; Gloaguen, Richard

    2015-04-01

    Topographic effects and complex vegetation cover hinder lithology classification in mountain regions based not only on field data, but also on reflectance remote sensing data. The area of interest "Bardi-Zard" is located in the NE of Iraq. It is part of the Zagros orogenic belt, where seven lithological units outcrop, and is known for its chromite deposit. The aim of this study is to compare three machine learning algorithms (MLAs): Maximum Likelihood (ML), Support Vector Machines (SVM), and Random Forest (RF) in the context of a supervised lithology classification task using Advanced Space-borne Thermal Emission and Reflection radiometer (ASTER) satellite data, its derivatives, spatial information (spatial coordinates) and geomorphic data. We emphasize the enhancement in remote sensing lithological mapping accuracy that arises from the integration of geomorphic features and spatial information (spatial coordinates) in classifications. This study finds that RF performs better than the ML and SVM algorithms in almost all of the sixteen dataset combinations that were tested. The overall accuracy of the best dataset combination with the RF map for all seven classes reaches ~80%, the producer's and user's accuracies are ~73.91% and 76.09% respectively, and the kappa coefficient is ~0.76. TPI is more effective with the SVM algorithm than with the RF algorithm. This paper demonstrates that adding geomorphic indices such as TPI and spatial information to the dataset increases the lithological classification accuracy.
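
    A sketch of the three-way classifier comparison described above is given below using scikit-learn on a synthetic feature table; Quadratic Discriminant Analysis stands in for the Maximum Likelihood classifier (a per-class Gaussian model), and the feature columns are placeholders for spectral bands plus geomorphic features such as TPI.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder per-pixel table: ASTER-like bands plus geomorphic features, 7 lithology classes.
X, y = make_classification(n_samples=3000, n_features=12, n_informative=8,
                           n_classes=7, n_clusters_per_class=1, random_state=0)

classifiers = {
    "ML (Gaussian, QDA stand-in)": QuadraticDiscriminantAnalysis(),
    "SVM (RBF)": make_pipeline(StandardScaler(), SVC(kernel="rbf", C=10.0)),
    "Random Forest": RandomForestClassifier(n_estimators=300, random_state=0),
}
for name, clf in classifiers.items():
    acc = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```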

  18. Comparison of satellite reflectance algorithms for estimating chlorophyll-a in a temperate reservoir using coincident hyperspectral aircraft imagery and dense coincident surface observations

    Science.gov (United States)

    We analyzed 10 established and 4 new satellite reflectance algorithms for estimating chlorophyll-a (Chl-a) in a temperate reservoir in southwest Ohio using coincident hyperspectral aircraft imagery and dense water truth collected within one hour of image acquisition to develop si...

  19. Inclusive-jet cross sections in NC DIS at HERA and a comparison of the k{sub T}, anti-k{sub T} and SIScone jet algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Abramowicz, H. [Tel Aviv University (Israel). Raymond and Beverly Sackler Faculty of Exact Sciences, School of Physics; Max Planck Inst., Munich (Germany); Abt, I. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Adamczyk, L. [AGH-University of Science and Technology, Cracow (PL). Faculty of Physics and Applied Computer Science] (and others)

    2010-03-15

    For the first time, differential inclusive-jet cross sections have been measured in neutral current deep inelastic ep scattering using the anti-k{sub T} and SIScone algorithms. The measurements were made for boson virtualities Q{sup 2} > 125 GeV{sup 2} with the ZEUS detector at HERA using an integrated luminosity of 82 pb{sup -1} and the jets were identified in the Breit frame. The performance and suitability of the jet algorithms for their use in hadron-like reactions were investigated by comparing the measurements to those performed with the k{sub T} algorithm. Next-to-leading-order QCD calculations give a good description of the measurements. Measurements of the ratios of cross sections using different jet algorithms are also presented; the measured ratios are well described by calculations including up to O({alpha}{sub s}{sup 3}) terms. Values of {alpha}{sub s}(M{sub Z}) were extracted from the data; the results are compatible with and have similar precision to the value extracted from the k{sub T} analysis. (orig.)

  20. Inclusive-jet cross sections in NC DIS at HERA and a comparison of the kT, anti-kT and SIScone jet algorithms

    International Nuclear Information System (INIS)

    Abramowicz, H.; Abt, I.; Adamczyk, L.

    2010-03-01

    For the first time, differential inclusive-jet cross sections have been measured in neutral current deep inelastic ep scattering using the anti-k T and SIScone algorithms. The measurements were made for boson virtualities Q 2 > 125 GeV 2 with the ZEUS detector at HERA using an integrated luminosity of 82 pb -1 and the jets were identified in the Breit frame. The performance and suitability of the jet algorithms for their use in hadron-like reactions were investigated by comparing the measurements to those performed with the k T algorithm. Next-to-leading-order QCD calculations give a good description of the measurements. Measurements of the ratios of cross sections using different jet algorithms are also presented; the measured ratios are well described by calculations including up to O(α s 3 ) terms. Values of α s (M Z ) were extracted from the data; the results are compatible with and have similar precision to the value extracted from the k T analysis. (orig.)

  1. Comparison of build-up region doses in oblique tangential 6 MV photon beams calculated by AAA and CCC algorithms in breast Rando phantom

    Science.gov (United States)

    Masunun, P.; Tangboonduangjit, P.; Dumrongkijudom, N.

    2016-03-01

    The purpose of this study is to compare the build-up region doses on the breast Rando phantom surface covered with bolus, the doses in the breast Rando phantom, and also the doses in the lung, which is the heterogeneous region, as calculated by two algorithms. The AAA in the Eclipse TPS and the collapsed cone convolution (CCC) algorithm in the Pinnacle treatment planning system were used to plan a tangential field technique with a 6 MV photon beam at a 200 cGy total dose in the breast Rando phantom covered with bolus (5 mm and 10 mm). TLDs were calibrated with Cobalt-60 and used to measure the doses during irradiation. The results of treatment planning show that the doses in the build-up region and the doses in the breast phantom were closely matched between the two algorithms, with less than 2% difference. However, overestimates of the dose in the lung (L2) were found with AAA, with 13.78% and 6.06% differences at 5 mm and 10 mm bolus thickness, respectively, when compared with the CCC algorithm. The TLD measurements show an underestimate in the build-up region and in the breast phantom, but the doses in the lung (L2) were overestimated when compared with the doses in the two plans at both bolus thicknesses.

  2. Comparison of turnaround time and total cost of HIV testing before and after implementation of the 2014 CDC/APHL Laboratory Testing Algorithm for diagnosis of HIV infection.

    Science.gov (United States)

    Chen, Derrick J; Yao, Joseph D

    2017-06-01

    Updated recommendations for HIV diagnostic laboratory testing published by the Centers for Disease Control and Prevention and the Association of Public Health Laboratories incorporate 4th generation HIV immunoassays, which are capable of identifying HIV infection prior to seroconversion. The purpose of this study was to compare turnaround time and cost between 3rd and 4th generation HIV immunoassay-based testing algorithms for initially reactive results. The clinical microbiology laboratory database at Mayo Clinic, Rochester, MN was queried for 3rd generation (from November 2012 to May 2014) and 4th generation (from May 2014 to November 2015) HIV immunoassay results. All results from downstream supplemental testing were recorded. Turnaround time (defined as the time of initial sample receipt in the laboratory to the time the final supplemental test in the algorithm was resulted) and cost (based on 2016 Medicare reimbursement rates) were assessed. A total of 76,454 and 78,998 initial tests were performed during the study period using the 3rd generation and 4th generation HIV immunoassays, respectively. There were 516 (0.7%) and 581 (0.7%) total initially reactive results, respectively. Of these, 304 (58.9%) and 457 (78.7%) were positive by supplemental testing. There were 10 (0.01%) cases of acute HIV infection identified with the 4th generation algorithm. The most frequent tests performed to confirm an HIV-positive case using the 3rd generation algorithm, which were reactive initial immunoassay and positive HIV-1 Western blot, took a median time of 1.1 days to complete at a cost of $45.00. In contrast, the most frequent tests performed to confirm an HIV-positive case using the 4th generation algorithm, which included a reactive initial immunoassay and positive HIV-1/-2 antibody differentiation immunoassay for HIV-1, took a median time of 0.4 days and cost $63.25. Overall median turnaround time was 2.2 and 1.5 days, and overall median cost was $63.90 and $72.50 for

  3. SU-E-T-304: Dosimetric Comparison of Cavernous Sinus Tumors: Heterogeneity Corrected Pencil Beam (PB-Hete) Vs. X-Ray Voxel Monte Carlo (XVMC) Algorithms for Stereotactic Radiotherapy (SRT)

    Energy Technology Data Exchange (ETDEWEB)

    Pokhrel, D; Sood, S; Badkul, R; Jiang, H; Saleh, H; Wang, F [University of Kansas Hospital, Kansas City, KS (United States)

    2015-06-15

    Purpose: To compare dose distributions calculated using PB-hete vs. XVMC algorithms for SRT treatments of cavernous sinus tumors. Methods: Using PB-hete SRT, five patients with cavernous sinus tumors received the prescription dose of 25 Gy in 5 fractions for planning target volume PTV(V100%)=95%. Gross tumor volume (GTV) and organs at risk (OARs) were delineated on T1/T2 MRI-CT-fused images. PTV (range 2.1–84.3cc, mean=21.7cc) was generated using a 5mm uniform margin around the GTV. PB-hete SRT plans included a combination of non-coplanar conformal arcs/static beams delivered by Novalis-TX consisting of HD-MLCs and a 6MV-SRS(1000 MU/min) beam. Plans were re-optimized using the XVMC algorithm with identical beam geometry and MLC positions. Comparisons of plan-specific PTV(V99%), maximal, mean, and isocenter doses, and total monitor units (MUs) were evaluated. Maximal dose to OARs such as brainstem, optic pathway, spinal cord, and lenses as well as normal tissue volume receiving 12Gy (V12) were compared between the two algorithms. All analysis was performed using two-tailed paired t-tests with an upper-bound p-value of <0.05. Results: Using either algorithm, no dosimetrically significant differences in PTV coverage (PTV V99%, maximal, mean, isocenter doses) and total number of MUs were observed (all p-values >0.05, mean ratios within 2%). However, maximal doses to optic chiasm and nerves were significantly under-predicted using PB-hete (p=0.04). Maximal brainstem, spinal cord, and lens doses and V12 were all comparable between the two algorithms, with the exception of one patient with the largest PTV who exhibited 11% higher V12 with XVMC. Conclusion: Unlike for lung tumors, XVMC and PB-hete treatment plans provided similar PTV coverage for cavernous sinus tumors. The majority of OAR doses were comparable between the two algorithms, except for small structures such as the optic chiasm/nerves, which could potentially receive higher doses when using the XVMC algorithm. Special attention may need to be paid on a case

  4. Comparing Online Algorithms for Bin Packing Problems

    DEFF Research Database (Denmark)

    Epstein, Leah; Favrholdt, Lene Monrad; Kohrt, Jens Svalgaard

    2012-01-01

    The relative worst-order ratio is a measure of the quality of online algorithms. In contrast to the competitive ratio, this measure compares two online algorithms directly instead of using an intermediate comparison with an optimal offline algorithm. In this paper, we apply the relative worst-order ratio to online algorithms for several common variants of the bin packing problem. We mainly consider pairs of algorithms that are not distinguished by the competitive ratio and show that the relative worst-order ratio prefers the intuitively better algorithm of each pair....
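
    Computing the relative worst-order ratio itself requires reasoning over permutations of request sequences, which is not attempted here. The sketch below only runs two classic online bin packing algorithms, Next Fit and First Fit, head-to-head on the same random item sequence to show the kind of direct pairwise comparison the measure formalizes; the item distribution is arbitrary.

```python
import random

def next_fit(items, capacity=1.0):
    """Keep a single open bin; open a new one when the item does not fit."""
    bins, space = 0, 0.0
    for x in items:
        if x > space:
            bins += 1
            space = capacity
        space -= x
    return bins

def first_fit(items, capacity=1.0):
    """Place each item in the first bin with enough remaining space."""
    free = []                       # remaining capacity of each open bin
    for x in items:
        for i, room in enumerate(free):
            if x <= room:
                free[i] -= x
                break
        else:
            free.append(capacity - x)
    return len(free)

random.seed(0)
items = [random.uniform(0.05, 0.7) for _ in range(200)]
print("Next Fit bins: ", next_fit(items))
print("First Fit bins:", first_fit(items))
```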

  5. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3)

  6. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  7. Modified Bat Algorithm Based on Lévy Flight and Opposition Based Learning

    Directory of Open Access Journals (Sweden)

    Xian Shan

    2016-01-01

    Full Text Available Bat Algorithm (BA) is a swarm intelligence algorithm which has been intensively applied to solve academic and real life optimization problems. However, due to the lack of a good balance between exploration and exploitation, BA sometimes fails at finding the global optimum and is easily trapped in local optima. In order to overcome the premature convergence problem and improve the local searching ability of the Bat Algorithm for optimization problems, we propose an improved BA called OBMLBA. In the proposed algorithm, a modified search equation with more useful information from the search experiences is introduced to generate a candidate solution, and a Lévy Flight random walk is incorporated with BA in order to avoid being trapped in local optima. Furthermore, the concept of opposition based learning (OBL) is embedded in BA to enhance the diversity and convergence capability. To evaluate the performance of the proposed approach, 16 benchmark functions have been employed. The results obtained by the experiments demonstrate the effectiveness and efficiency of OBMLBA for global optimization problems. Comparisons with some other BA variants and other state-of-the-art algorithms have shown that the proposed approach significantly improves the performance of BA. The performance of the proposed algorithm on large scale and real world optimization problems is not discussed in the paper and will be studied in future work.
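
    Minimal sketches of the two ingredients named in this record, a Lévy-flight step drawn with Mantegna's method and an opposition-based candidate, are shown below. The full OBMLBA update equations are not reproduced, and the bounds, dimension, and seeds are illustrative.

```python
import numpy as np
from math import gamma, pi, sin

def levy_step(dim, beta=1.5, rng=None):
    """One Lévy-flight step via Mantegna's algorithm (heavy-tailed jumps)."""
    if rng is None:
        rng = np.random.default_rng(0)
    sigma_u = (gamma(1 + beta) * sin(pi * beta / 2) /
               (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.normal(0.0, sigma_u, dim)
    v = rng.normal(0.0, 1.0, dim)
    return u / np.abs(v) ** (1 / beta)

def opposite(x, lower, upper):
    """Opposition-based learning: the candidate mirrored within the bounds."""
    return lower + upper - x

rng = np.random.default_rng(1)
lower, upper = -5.0, 5.0
bat = rng.uniform(lower, upper, size=4)
print("bat position:", np.round(bat, 3))
print("Levy step:   ", np.round(levy_step(4), 3))
print("opposite:    ", np.round(opposite(bat, lower, upper), 3))
```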

  8. Autonomous algorithms for image restoration

    OpenAIRE

    Griniasty , Meir

    1994-01-01

    We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean field approach which is known as ``Deterministic Annealing'', and is reminiscent of the ``Deterministic Boltzmann Machine''. The algorithm is less time consuming in comparison with its simulated annealing alternative. We apply the theory to several architectures and compare their performances.

  9. Pareto design of state feedback tracking control of a biped robot via multiobjective PSO in comparison with sigma method and genetic algorithms: modified NSGAII and MATLAB's toolbox.

    Science.gov (United States)

    Mahmoodabadi, M J; Taherkhorsandi, M; Bagheri, A

    2014-01-01

    An optimal robust state feedback tracking controller is introduced to control a biped robot. In the literature, the parameters of the controller are usually determined by a tedious trial and error process. To eliminate this process and design the parameters of the proposed controller, the multiobjective evolutionary algorithms, that is, the proposed method, modified NSGAII, Sigma method, and MATLAB's Toolbox MOGA, are employed in this study. Among the used evolutionary optimization algorithms to design the controller for biped robots, the proposed method operates better in the aspect of designing the controller since it provides ample opportunities for designers to choose the most appropriate point based upon the design criteria. Three points are chosen from the nondominated solutions of the obtained Pareto front based on two conflicting objective functions, that is, the normalized summation of angle errors and normalized summation of control effort. Obtained results elucidate the efficiency of the proposed controller in order to control a biped robot.

  10. Spot quantification in two dimensional gel electrophoresis image analysis: comparison of different approaches and presentation of a novel compound fitting algorithm

    Science.gov (United States)

    2014-01-01

    Background Various computer-based methods exist for the detection and quantification of protein spots in two dimensional gel electrophoresis images. Area-based methods are commonly used for spot quantification: an area is assigned to each spot and the sum of the pixel intensities in that area, the so-called volume, is used as a measure of spot signal. Other methods use the optical density, i.e. the intensity of the most intense pixel of a spot, or calculate the volume from the parameters of a fitted function. Results In this study we compare the performance of different spot quantification methods using synthetic and real data. We propose a ready-to-use algorithm for spot detection and quantification that uses fitting of two dimensional Gaussian function curves for the extraction of data from two dimensional gel electrophoresis (2-DE) images. The algorithm implements fitting using logical compounds and is computationally efficient. The applicability of the compound fitting algorithm was evaluated for various simulated data and compared with other quantification approaches. We provide evidence that even if an incorrect bell-shaped function is used, the fitting method is superior to other approaches, especially when spots overlap. Finally, we validated the method with experimental data of urea-based 2-DE of Aβ peptides and re-analyzed published data sets. Our methods showed higher precision and accuracy than other approaches when applied to exposure time series and standard gels. Conclusion Compound fitting as a quantification method for 2-DE spots shows several advantages over other approaches and could be combined with various spot detection methods. The algorithm was scripted in MATLAB (Mathworks) and is available as a supplemental file. PMID:24915860
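
    As a minimal illustration of the fitting idea described above, the sketch below fits a single 2D Gaussian to one simulated spot with scipy.optimize.curve_fit and integrates the fitted function analytically as the spot volume. The compound fitting of overlapping spots, spot detection, and the authors' MATLAB implementation are not reproduced; the image, noise level, and initial guesses are illustrative.

```python
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    """Elliptical 2D Gaussian plus constant background, returned flattened."""
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2) +
                           (y - y0) ** 2 / (2 * sy ** 2))) + offset).ravel()

# Simulated spot on a 64x64 patch with additive noise.
y, x = np.mgrid[0:64, 0:64].astype(float)
rng = np.random.default_rng(0)
truth = gauss2d((x, y), 500.0, 30.0, 34.0, 4.0, 6.0, 20.0).reshape(64, 64)
image = truth + rng.normal(0.0, 5.0, truth.shape)

p0 = (image.max() - image.min(), 32, 32, 5, 5, image.min())
popt, _ = curve_fit(gauss2d, (x, y), image.ravel(), p0=p0)
amp, x0, y0, sx, sy, offset = popt
volume = 2 * np.pi * amp * abs(sx) * abs(sy)     # analytic integral of the fitted Gaussian
print(f"fitted centre ({x0:.1f}, {y0:.1f}), spot volume {volume:.0f}")
```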

  11. Comparison of a sentinel lymph node mapping algorithm and comprehensive lymphadenectomy in the detection of stage IIIC endometrial carcinoma at higher risk for nodal disease.

    Science.gov (United States)

    Ducie, Jennifer A; Eriksson, Ane Gerda Zahl; Ali, Narisha; McGree, Michaela E; Weaver, Amy L; Bogani, Giorgio; Cliby, William A; Dowdy, Sean C; Bakkum-Gamez, Jamie N; Soslow, Robert A; Keeney, Gary L; Abu-Rustum, Nadeem R; Mariani, Andrea; Leitao, Mario M

    2017-12-01

    To determine if a sentinel lymph node (SLN) mapping algorithm will detect metastatic nodal disease in patients with intermediate-/high-risk endometrial carcinoma. Patients were identified and surgically staged at two collaborating institutions. The historical cohort (2004-2008) at one institution included patients undergoing complete pelvic and paraaortic lymphadenectomy to the renal veins (LND cohort). At the second institution an SLN mapping algorithm, including pathologic ultra-staging, was performed (2006-2013) (SLN cohort). Intermediate-risk was defined as endometrioid histology (any grade), ≥50% myometrial invasion; high-risk as serous or clear cell histology (any myometrial invasion). Patients with gross peritoneal disease were excluded. Isolated tumor cells, micro-metastases, and macro-metastases were considered node-positive. We identified 210 patients in the LND cohort, 202 in the SLN cohort. Nodal assessment was performed for most patients. In the intermediate-risk group, stage IIIC disease was diagnosed in 30/107 (28.0%) (LND), 29/82 (35.4%) (SLN) (P=0.28). In the high-risk group, stage IIIC disease was diagnosed in 20/103 (19.4%) (LND), 26 (21.7%) (SLN) (P=0.68). Paraaortic lymph node (LN) assessment was performed significantly more often in intermediate-/high-risk groups in the LND cohort (P<0.001). In the intermediate-risk group, paraaortic LN metastases were detected in 20/96 (20.8%) (LND) vs. 3/28 (10.7%) (SLN) (P=0.23). In the high-risk group, paraaortic LN metastases were detected in 13/82 (15.9%) (LND) and 10/56 (17.9%) (SLN) (P=0.76). The SLN mapping algorithm provides similar detection rates of stage IIIC endometrial cancer. The SLN algorithm does not compromise overall detection compared to standard LND. Copyright © 2017 Elsevier Inc. All rights reserved.

  12. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need of steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS vehicle, though it is also applicable to two-wheel-steering (TWS vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed prior information for a given road, while the dynamic center of rotation is the output of dynamic equations of motion of the vehicle using steering angle and velocity measurements as inputs. We use kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed the road and tire characteristics, along with the motion dynamics of the vehicle cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
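
    The kinematic condition described above, that every wheel's axis should pass through the desired centre of rotation, can be sketched geometrically: each wheel is steered perpendicular to the line joining it to that centre. The code below computes such steering angles for a four-wheel-steering layout; the wheelbase, track, and chosen centre of rotation are illustrative, and vehicle dynamics (tire slip, speed effects) are ignored.

```python
import numpy as np

def steering_angles(wheel_positions, center):
    """Steering angle of each wheel (rad, relative to the vehicle x-axis)
    so that the kinematic centre of rotation sits at `center`.
    Each wheel's heading must be perpendicular to its radius to the centre."""
    angles = {}
    for name, (x, y) in wheel_positions.items():
        rx, ry = center[0] - x, center[1] - y      # radius vector: wheel -> centre
        angles[name] = np.arctan2(-rx, ry)         # heading perpendicular to the radius
    return angles

# Wheelbase 2.8 m, track 1.6 m, origin at the vehicle centre (x forward, y to the left).
wheels = {"FL": (1.4, 0.8), "FR": (1.4, -0.8), "RL": (-1.4, 0.8), "RR": (-1.4, -0.8)}
center_of_rotation = (0.0, 6.0)                    # 6 m to the left of the vehicle centre

for name, ang in steering_angles(wheels, center_of_rotation).items():
    print(f"{name}: {np.degrees(ang):6.2f} deg")
```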

  13. Integration of spectral, spatial and morphometric data into lithological mapping: A comparison of different Machine Learning Algorithms in the Kurdistan Region, NE Iraq

    Science.gov (United States)

    Othman, Arsalan A.; Gloaguen, Richard

    2017-09-01

    Lithological mapping in mountainous regions is often impeded by limited accessibility due to relief. This study aims to evaluate (1) the performance of different supervised classification approaches using remote sensing data and (2) the use of additional information such as geomorphology. We exemplify the methodology in the Bardi-Zard area in NE Iraq, a part of the Zagros Fold-Thrust Belt, known for its chromite deposits. We highlighted the improvement of remote sensing geological classification by integrating geomorphic features and spatial information in the classification scheme. We performed a Maximum Likelihood (ML) classification alongside two Machine Learning Algorithms (MLAs), Support Vector Machine (SVM) and Random Forest (RF), to allow the joint use of geomorphic features, Band Ratio (BR), Principal Component Analysis (PCA), spatial information (spatial coordinates) and multispectral data of the Advanced Space-borne Thermal Emission and Reflection radiometer (ASTER) satellite. The RF algorithm showed reliable results and discriminated serpentinite, talus and terrace deposits, red argillites with conglomerates and limestone, limy conglomerates and limestone conglomerates, tuffites interbedded with basic lavas, limestone, and metamorphosed limestone and reddish green shales. The best overall accuracy (∼80%) was achieved by the Random Forest (RF) algorithm in the majority of the sixteen tested dataset combinations.

  14. Comparison of the effects of model-based iterative reconstruction and filtered back projection algorithms on software measurements in pulmonary subsolid nodules

    Energy Technology Data Exchange (ETDEWEB)

    Cohen, Julien G. [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Centre Hospitalier Universitaire de Grenoble, Clinique Universitaire de Radiologie et Imagerie Medicale (CURIM), Universite Grenoble Alpes, Grenoble Cedex 9 (France); Kim, Hyungjin; Park, Su Bin [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Ginneken, Bram van [Radboud University Nijmegen Medical Center, Department of Radiology and Nuclear Medicine, Nijmegen (Netherlands); Ferretti, Gilbert R. [Centre Hospitalier Universitaire de Grenoble, Clinique Universitaire de Radiologie et Imagerie Medicale (CURIM), Universite Grenoble Alpes, Grenoble Cedex 9 (France); Institut A Bonniot, INSERM U 823, La Tronche (France); Lee, Chang Hyun [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Goo, Jin Mo; Park, Chang Min [Seoul National University College of Medicine, Department of Radiology, Seoul (Korea, Republic of); Seoul National University Medical Research Center, Institute of Radiation Medicine, Seoul (Korea, Republic of); Seoul National University College of Medicine, Cancer Research Institute, Seoul (Korea, Republic of)

    2017-08-15

    To evaluate the differences between filtered back projection (FBP) and model-based iterative reconstruction (MBIR) algorithms on semi-automatic measurements in subsolid nodules (SSNs). Unenhanced CT scans of 73 SSNs obtained using the same protocol and reconstructed with both FBP and MBIR algorithms were evaluated by two radiologists. Diameter, mean attenuation, mass and volume of whole nodules and their solid components were measured. Intra- and interobserver variability and differences between FBP and MBIR were then evaluated using Bland-Altman method and Wilcoxon tests. Longest diameter, volume and mass of nodules and those of their solid components were significantly higher using MBIR (p < 0.05) with mean differences of 1.1% (limits of agreement, -6.4 to 8.5%), 3.2% (-20.9 to 27.3%) and 2.9% (-16.9 to 22.7%) and 3.2% (-20.5 to 27%), 6.3% (-51.9 to 64.6%), 6.6% (-50.1 to 63.3%), respectively. The limits of agreement between FBP and MBIR were within the range of intra- and interobserver variability for both algorithms with respect to the diameter, volume and mass of nodules and their solid components. There were no significant differences in intra- or interobserver variability between FBP and MBIR (p > 0.05). Semi-automatic measurements of SSNs significantly differed between FBP and MBIR; however, the differences were within the range of measurement variability. (orig.)

  15. Comparison of pacing algorithms to avoid unnecessary ventricular pacing in patients with sick sinus node syndrome: a single-centre, observational, parallel study.

    Science.gov (United States)

    Poghosyan, Hermine R; Jamalyan, Smbat V

    2012-10-01

    Reduction of unnecessary ventricular pacing (uVP) is an essential component in the treatment strategy in any pacing population in general. The aim of this study was to evaluate the efficacy of different algorithms to reduce uVP in an adult population with sick sinus syndrome (SSS) treated outside of clinical trials. Evaluation of the relationship between different types of pacing algorithms and clinical outcomes is also provided. This was a single-centre, observational, parallel study, based on retrospective analysis of the Arrhythmology Cardiology Center of Armenia electronic clinical database. This study evaluated atrial pacing percentage (AP%), ventricular pacing percentage (VP%), and the incidence of atrial high rate episodes in 56 patients with SSS using three different pacing strategies: managed VP, search atrioventricular (AV), and fixed long AV. We did not find statistically significant differences in the amount of VP between the groups. Although the atrial high rate percentage (AHR%) tended to be higher in the fixed long AV group, this difference was not statistically significant. Mean VP% and AP% were similar in all three groups. In our study, all three programmed strategies produced the same mean AP% and VP%, and were equally efficient in uVP reduction. There was no relationship between chosen algorithms and the incidence of pacemaker syndrome, hospitalizations, or change in New York Heart Association class. The percentage of AHR was not associated with pacing strategy or co-morbidities but showed borderline correlation with left atrial size.

  16. Hybrid Projected Gradient-Evolutionary Search Algorithm for Mixed Integer Nonlinear Optimization Problems

    National Research Council Canada - National Science Library

    Homaifar, Abdollah; Esterline, Albert; Kimiaghalam, Bahram

    2005-01-01

    The Hybrid Projected Gradient-Evolutionary Search Algorithm (HPGES) algorithm uses a specially designed evolutionary-based global search strategy to efficiently create candidate solutions in the solution space...

  17. Novel medical image enhancement algorithms

    Science.gov (United States)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells will also be implemented and utilized as a comparison tool to evaluate the performance of our algorithm.
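
    As a rough illustration of the filter named as the backbone of the first algorithm, the sketch below implements a plain alpha-trimmed mean; the window size and trim fraction are illustrative choices, not the authors' settings.

        # Alpha-trimmed mean filter sketch: sort each neighbourhood, drop the
        # extremes on both ends, and average what remains.
        import numpy as np

        def alpha_trimmed_mean(img, size=3, alpha=0.2):
            img = np.asarray(img, float)
            pad = size // 2
            padded = np.pad(img, pad, mode='reflect')
            trim = int(alpha * size * size)
            out = np.empty_like(img)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    window = np.sort(padded[i:i + size, j:j + size].ravel())
                    kept = window[trim:window.size - trim] if trim else window
                    out[i, j] = kept.mean()
            return out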

  18. Laboratory diagnosis of Clostridium difficile infection: Comparison of Techlab C. diff Quik Chek Complete, Xpert C. difficile, and multistep algorithmic approach.

    Science.gov (United States)

    Seo, Ja Young; Jeong, Ji Hun; Kim, Kyung Hee; Ahn, Jeong-Yeal; Park, Pil-Whan; Seo, Yiel-Hea

    2017-11-01

    Clostridium difficile is a major pathogen responsible for nosocomial infectious diarrhea. We explored optimal laboratory strategies for diagnosis of C. difficile infection (CDI) in our clinical settings, a 1400-bed tertiary care hospital. Using 191 fresh stool samples from adult patients, we evaluated the performance of Xpert C. difficile (Xpert CD), C. diff Quik Chek Complete (which simultaneously detects glutamate dehydrogenase [GDH] and C. difficile toxins [CDT]), toxigenic culture, and a two-step algorithm composed of GDH/CDT as a screening test and Xpert CD as a confirmatory test. Clostridium difficile was detected in 35 samples (18.3%), and all isolates were toxigenic strains. The sensitivity, specificity, positive predictive value (PPV), and negative predictive value of each assay for detecting CDI were as follows: Quik Chek Complete CDT (45.7%, 100%, 100%, 89.1%), Quik Chek Complete GDH (97.1%, 99.4%, 97.1%, 99.4%), Xpert CD (94.3%, 100%, 100%, 98.7%), and toxigenic culture (91.4%, 100%, 100%, 98.1%). A two-step algorithm performed identically with Xpert CD assay. Our data showed that most C. difficile isolates from adult patients were toxigenic. We demonstrated that a two-step algorithm based on GDH/CDT assay followed by Xpert CD assay as a confirmatory test was rapid, reliable, and cost effective for diagnosis of CDI in an adult patient setting with high prevalence of toxigenic C. difficile. © 2017 Wiley Periodicals, Inc.
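
    The two-step strategy evaluated above can be summarised as a small piece of decision logic; the sketch below is only an illustration of that flow (the result labels are hypothetical strings, not vendor output codes).

        # Two-step CDI testing logic: GDH/CDT screening, Xpert CD confirmation
        # of GDH-positive / toxin-negative samples.
        def two_step_cdi(gdh_positive, cdt_positive, xpert_positive=None):
            if not gdh_positive:
                return "CDI negative"                       # screening negative, stop
            if cdt_positive:
                return "CDI positive (toxin detected)"      # screening fully positive, stop
            if xpert_positive is None:
                return "Indeterminate: run Xpert CD"        # discordant, confirm by PCR
            return "CDI positive (toxigenic strain)" if xpert_positive else "CDI negative"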

  19. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  20. WE-AB-209-06: Dynamic Collimator Trajectory Algorithm for Use in VMAT Treatment Deliveries

    Energy Technology Data Exchange (ETDEWEB)

    MacDonald, L [Department of Medical Physics, Dalhousie University, Halifax, Nova Scotia, CA (Canada); Thomas, C; Syme, A [Department of Medical Physics, Dalhousie University, Halifax, Nova Scotia, CA (Canada); Department of Radiation Oncology, Dalhousie University, Halifax, Nova Scotia (Canada); Medical Physics, Nova Scotia Cancer Centre, Halifax, Nova Scotia (Canada)

    2016-06-15

    Purpose: To develop advanced dynamic collimator positioning algorithms for optimal beam’s-eye-view (BEV) fitting of targets in VMAT procedures, including multiple metastases stereotactic radiosurgery procedures. Methods: A trajectory algorithm was developed, which can dynamically modify the angle of the collimator as a function of VMAT control point to provide optimized collimation of target volume(s). Central to this algorithm is a concept denoted “whitespace”, defined as area within the jaw-defined BEV field, outside of the PTV, and not shielded by the MLC when fit to the PTV. Calculating whitespace at all collimator angles and every control point, a two-dimensional topographical map depicting the tightness-of-fit of the MLC was generated. A variety of novel searching algorithms identified a number of candidate trajectories of continuous collimator motion. Ranking these candidate trajectories according to their accrued whitespace value produced an optimal solution for navigation of this map. Results: All trajectories were normalized to minimum possible (i.e. calculated without consideration of collimator motion constraints) accrued whitespace. On an acoustic neuroma case, a random walk algorithm generated a trajectory with 151% whitespace; random walk including a mandatory anchor point improved this to 148%; gradient search produced a trajectory with 137%; and bi-directional gradient search generated a trajectory with 130% whitespace. For comparison, a fixed collimator angle of 30° and 330° accumulated 272% and 228% of whitespace, respectively. The algorithm was tested on a clinical case with two metastases (single isocentre) and identified collimator angles that allow for simultaneous irradiation of the PTVs while minimizing normal tissue irradiation. Conclusion: Dynamic collimator trajectories have the potential to improve VMAT deliveries through increased efficiency and reduced normal tissue dose, especially in treatment of multiple cranial metastases
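
    To make the idea of navigating the whitespace map concrete, here is a much simplified greedy search; it stands in for the random walk and gradient searches described above, assumes a 2D whitespace array and a per-control-point collimator speed limit, and ignores angular wrap-around.

        # Greedy trajectory through a whitespace map [control points x collimator angles].
        import numpy as np

        def greedy_trajectory(whitespace, max_step_deg=10, angle_res_deg=1.0):
            n_cp, n_ang = whitespace.shape
            max_step = int(round(max_step_deg / angle_res_deg))
            path = [int(np.argmin(whitespace[0]))]           # best angle at the first control point
            for cp in range(1, n_cp):
                prev = path[-1]
                lo, hi = max(0, prev - max_step), min(n_ang, prev + max_step + 1)
                path.append(lo + int(np.argmin(whitespace[cp, lo:hi])))
            accrued = sum(whitespace[cp, a] for cp, a in enumerate(path))
            return path, accrued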

  1. WE-AB-209-06: Dynamic Collimator Trajectory Algorithm for Use in VMAT Treatment Deliveries

    International Nuclear Information System (INIS)

    MacDonald, L; Thomas, C; Syme, A

    2016-01-01

    Purpose: To develop advanced dynamic collimator positioning algorithms for optimal beam’s-eye-view (BEV) fitting of targets in VMAT procedures, including multiple metastases stereotactic radiosurgery procedures. Methods: A trajectory algorithm was developed, which can dynamically modify the angle of the collimator as a function of VMAT control point to provide optimized collimation of target volume(s). Central to this algorithm is a concept denoted “whitespace”, defined as area within the jaw-defined BEV field, outside of the PTV, and not shielded by the MLC when fit to the PTV. Calculating whitespace at all collimator angles and every control point, a two-dimensional topographical map depicting the tightness-of-fit of the MLC was generated. A variety of novel searching algorithms identified a number of candidate trajectories of continuous collimator motion. Ranking these candidate trajectories according to their accrued whitespace value produced an optimal solution for navigation of this map. Results: All trajectories were normalized to minimum possible (i.e. calculated without consideration of collimator motion constraints) accrued whitespace. On an acoustic neuroma case, a random walk algorithm generated a trajectory with 151% whitespace; random walk including a mandatory anchor point improved this to 148%; gradient search produced a trajectory with 137%; and bi-directional gradient search generated a trajectory with 130% whitespace. For comparison, a fixed collimator angle of 30° and 330° accumulated 272% and 228% of whitespace, respectively. The algorithm was tested on a clinical case with two metastases (single isocentre) and identified collimator angles that allow for simultaneous irradiation of the PTVs while minimizing normal tissue irradiation. Conclusion: Dynamic collimator trajectories have the potential to improve VMAT deliveries through increased efficiency and reduced normal tissue dose, especially in treatment of multiple cranial metastases

  2. External validation of the DHAKA score and comparison with the current IMCI algorithm for the assessment of dehydration in children with diarrhoea: a prospective cohort study.

    Science.gov (United States)

    Levine, Adam C; Glavis-Bloom, Justin; Modi, Payal; Nasrin, Sabiha; Atika, Bita; Rege, Soham; Robertson, Sarah; Schmid, Christopher H; Alam, Nur H

    2016-10-01

    Dehydration due to diarrhoea is a leading cause of child death worldwide, yet no clinical tools for assessing dehydration have been validated in resource-limited settings. The Dehydration: Assessing Kids Accurately (DHAKA) score was derived for assessing dehydration in children with diarrhoea in a low-income country setting. In this study, we aimed to externally validate the DHAKA score in a new population of children and compare its accuracy and reliability to the current Integrated Management of Childhood Illness (IMCI) algorithm. DHAKA was a prospective cohort study done in children younger than 60 months presenting to the International Centre for Diarrhoeal Disease Research, Bangladesh, with acute diarrhoea (defined by WHO as three or more loose stools per day for less than 14 days). Local nurses assessed children and classified their dehydration status using both the DHAKA score and the IMCI algorithm. Serial weights were obtained and dehydration status was established by percentage weight change with rehydration. We did regression analyses to validate the DHAKA score and compared the accuracy and reliability of the DHAKA score and IMCI algorithm with receiver operating characteristic (ROC) curves and the weighted κ statistic. This study was registered with ClinicalTrials.gov, number NCT02007733. Between March 22, 2015, and May 15, 2015, 496 patients were included in our primary analyses. On the basis of our criterion standard, 242 (49%) of 496 children had no dehydration, 184 (37%) of 496 had some dehydration, and 70 (14%) of 496 had severe dehydration. In multivariable regression analyses, each 1-point increase in the DHAKA score predicted an increase of 0·6% in the percentage dehydration of the child and increased the odds of both some and severe dehydration by a factor of 1·4. Both the accuracy and reliability of the DHAKA score were significantly greater than those of the IMCI algorithm. The DHAKA score is the first clinical tool for assessing

  3. Genetic algorithm essentials

    CERN Document Server

    Kramer, Oliver

    2017-01-01

    This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.
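
    For readers new to the topic, a toy GA of the kind treated in the book's first part might look like the following sketch (OneMax fitness, tournament selection, one-point crossover, bit-flip mutation; all parameter values are illustrative).

        import random

        def ga(n_bits=30, pop_size=40, generations=100, p_mut=0.02):
            fitness = lambda ind: sum(ind)                   # OneMax: count the ones
            pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
            for _ in range(generations):
                tournament = lambda: max(random.sample(pop, 3), key=fitness)
                nxt = []
                while len(nxt) < pop_size:
                    p1, p2 = tournament(), tournament()
                    cut = random.randint(1, n_bits - 1)      # one-point crossover
                    child = p1[:cut] + p2[cut:]
                    child = [b ^ 1 if random.random() < p_mut else b for b in child]
                    nxt.append(child)
                pop = nxt
            return max(pop, key=fitness)

        print(sum(ga()))   # should approach n_bits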

  4. Teacher Candidate Selection and Evaluation.

    Science.gov (United States)

    Collins, Mary Lynn; And Others

    Summaries are presented of three papers presented at a summer workshop on Quality Assurance in Teacher Education conducted by the Association of Teacher Educators. The general topic covered by these presentations was teacher candidate selection and evaluation. Papers focused upon the following questions: (1) What entry level criteria should be…

  5. Candidate Prediction Models and Methods

    DEFF Research Database (Denmark)

    Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik

    2005-01-01

    This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the possibilities w.r.t. different numerical weather predictions actually available to the project.

  6. Candidate cave entrances on Mars

    Science.gov (United States)

    Cushing, Glen E.

    2012-01-01

    This paper presents newly discovered candidate cave entrances into Martian near-surface lava tubes, volcano-tectonic fracture systems, and pit craters and describes their characteristics and exploration possibilities. These candidates are all collapse features that occur either intermittently along laterally continuous trench-like depressions or in the floors of sheer-walled atypical pit craters. As viewed from orbit, locations of most candidates are visibly consistent with known terrestrial features such as tube-fed lava flows, volcano-tectonic fractures, and pit craters, each of which forms by mechanisms that can produce caves. Although we cannot determine subsurface extents of the Martian features discussed here, some may continue unimpeded for many kilometers if terrestrial examples are indeed analogous. The features presented here were identified in images acquired by the Mars Odyssey's Thermal Emission Imaging System visible-wavelength camera, and by the Mars Reconnaissance Orbiter's Context Camera. Select candidates have since been targeted by the High-Resolution Imaging Science Experiment. Martian caves are promising potential sites for future human habitation and astrobiology investigations; understanding their characteristics is critical for long-term mission planning and for developing the necessary exploration technologies.

  7. Halopentacenes: Promising Candidates for Organic Semiconductors

    International Nuclear Information System (INIS)

    Gong-He, Du; Zhao-Yu, Ren; Ji-Ming, Zheng; Ping, Guo

    2009-01-01

    We introduce polar substituents such as F, Cl, Br into pentacene to enhance the solubility in common organic solvents while retaining the high charge-carrier mobilities of pentacene. Geometric structures, dipole moments, frontier molecular orbitals, ionization potentials and electron affinities, as well as reorganization energies of those molecules, and of pentacene for comparison, are successively calculated by density functional theory. The results indicate that halopentacenes have rather small reorganization energies (< 0.2 eV), and when the substituents are in position 2 or positions 2 and 9, they are polar molecules. Thus we conjecture that they can easily be dissolved in common organic solvents, and are promising candidates for organic semiconductors. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  8. Comparison of frequency difference reconstruction algorithms for the detection of acute stroke using EIT in a realistic head-shaped tank

    International Nuclear Information System (INIS)

    Packham, B; Koo, H; Romsauerova, A; Holder, D S; Ahn, S; Jun, S C; McEwan, A

    2012-01-01

    Imaging of acute stroke might be possible using multi-frequency electrical impedance tomography (MFEIT) but requires absolute or frequency difference imaging. Simple linear frequency difference reconstruction has been shown to be ineffective in imaging with a frequency-dependent background conductivity; this has been overcome with a weighted frequency difference approach with correction for the background, but this has only been validated in cylindrical and hemispherical tanks. The feasibility of MFEIT for imaging of acute stroke in a realistic head geometry was examined by imaging a potato perturbation against a saline background and a carrot-saline frequency-dependent background conductivity, in a head-shaped tank with the UCLH Mk2.5 MFEIT system. Reconstruction was performed with time difference (TD), frequency difference (FD), FD adjacent (FDA), weighted FD (WFD) and weighted FDA (WFDA) linear algorithms. The perturbation in reconstructed images corresponded to the true position to <9.5% of image diameter with an image SNR of >5.4 for all algorithms in saline but only for TD, WFDA and WFD in the carrot-saline background. No reliable imaging was possible with FD and FDA. This indicates that the WFD approach is also effective for a realistic head geometry and supports its use for human imaging in the future. (paper)
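
    The weighted frequency-difference idea referred to above can be sketched in a few lines: the lower-frequency boundary voltages are scaled by a best-fit factor before subtraction, which suppresses a frequency-dependent background. The Tikhonov step below is a generic stand-in for the linear reconstruction, not the UCLH implementation.

        import numpy as np

        def weighted_frequency_difference(v_low, v_high):
            v_low, v_high = np.asarray(v_low, float), np.asarray(v_high, float)
            alpha = np.dot(v_high, v_low) / np.dot(v_low, v_low)   # projection coefficient
            return v_high - alpha * v_low                          # weighted difference datum

        def linear_reconstruction(J, dv, lam=1e-3):
            # J: sensitivity (Jacobian) matrix, dv: boundary-voltage difference datum
            JtJ = J.T @ J
            return np.linalg.solve(JtJ + lam * np.eye(JtJ.shape[0]), J.T @ dv)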

  9. Validation of clinical testing for warfarin sensitivity: comparison of CYP2C9-VKORC1 genotyping assays and warfarin-dosing algorithms.

    Science.gov (United States)

    Langley, Michael R; Booker, Jessica K; Evans, James P; McLeod, Howard L; Weck, Karen E

    2009-05-01

    Responses to warfarin (Coumadin) anticoagulation therapy are affected by genetic variability in both the CYP2C9 and VKORC1 genes. Validation of pharmacogenetic testing for warfarin responses includes demonstration of analytical validity of testing platforms and of the clinical validity of testing. We compared four platforms for determining the relevant single nucleotide polymorphisms (SNPs) in both CYP2C9 and VKORC1 that are associated with warfarin sensitivity (Third Wave Invader Plus, ParagonDx/Cepheid Smart Cycler, Idaho Technology LightCycler, and AutoGenomics Infiniti). Each method was examined for accuracy, cost, and turnaround time. All genotyping methods demonstrated greater than 95% accuracy for identifying the relevant SNPs (CYP2C9 *2 and *3; VKORC1 -1639 or 1173). The ParagonDx and Idaho Technology assays had the shortest turnaround and hands-on times. The Third Wave assay was readily scalable to higher test volumes but had the longest hands-on time. The AutoGenomics assay interrogated the largest number of SNPs but had the longest turnaround time. Four published warfarin-dosing algorithms (Washington University, UCSF, Louisville, and Newcastle) were compared for accuracy for predicting warfarin dose in a retrospective analysis of a local patient population on long-term, stable warfarin therapy. The predicted doses from both the Washington University and UCSF algorithms demonstrated the best correlation with actual warfarin doses.

  10. Comparison of computed tomography based parametric and patient-specific finite element models of the healthy and metastatic spine using a mesh-morphing algorithm.

    Science.gov (United States)

    O'Reilly, Meaghan Anne; Whyne, Cari Marisa

    2008-08-01

    A comparative analysis of parametric and patient-specific finite element (FE) modeling of spinal motion segments. To develop patient-specific FE models of spinal motion segments using mesh-morphing methods applied to a parametric FE model. To compare strain and displacement patterns in parametric and morphed models for both healthy and metastatically involved vertebrae. Parametric FE models may be limited in their ability to fully represent patient-specific geometries and material property distributions. Generation of multiple patient-specific FE models has been limited because of computational expense. Morphing methods have been successfully used to generate multiple specimen-specific FE models of caudal rat vertebrae. FE models of a healthy and a metastatic T6-T8 spinal motion segment were analyzed with and without patient-specific material properties. Parametric and morphed models were compared using a landmark-based morphing algorithm. Morphing of the parametric FE model and including patient-specific material properties both had a strong impact on magnitudes and patterns of vertebral strain and displacement. Small but important geometric differences can be represented through morphing of parametric FE models. The mesh-morphing algorithm developed provides a rapid method for generating patient-specific FE models of spinal motion segments.

  11. Integrative analysis to select cancer candidate biomarkers to targeted validation

    Science.gov (United States)

    Heberle, Henry; Domingues, Romênia R.; Granato, Daniela C.; Yokoo, Sami; Canevarolo, Rafael R.; Winck, Flavia V.; Ribeiro, Ana Carolina P.; Brandão, Thaís Bianca; Filgueiras, Paulo R.; Cruz, Karen S. P.; Barbuto, José Alexandre; Poppi, Ronei J.; Minghim, Rosane; Telles, Guilherme P.; Fonseca, Felipe Paiva; Fox, Jay W.; Santos-Silva, Alan R.; Coletta, Ricardo D.; Sherman, Nicholas E.; Paes Leme, Adriana F.

    2015-01-01

    Targeted proteomics has flourished as the method of choice for prospecting for and validating potential candidate biomarkers in many diseases. However, challenges still remain due to the lack of standardized routines that can prioritize a limited number of proteins to be further validated in human samples. To help researchers identify candidate biomarkers that best characterize their samples under study, a well-designed integrative analysis pipeline, comprising MS-based discovery, feature selection methods, clustering techniques, bioinformatic analyses and targeted approaches was performed using discovery-based proteomic data from the secretomes of three classes of human cell lines (carcinoma, melanoma and non-cancerous). Three feature selection algorithms, namely, Beta-binomial, Nearest Shrunken Centroids (NSC), and Support Vector Machine-Recursive Features Elimination (SVM-RFE), indicated a panel of 137 candidate biomarkers for carcinoma and 271 for melanoma, which were differentially abundant between the tumor classes. We further tested the strength of the pipeline in selecting candidate biomarkers by immunoblotting, human tissue microarrays, label-free targeted MS and functional experiments. In conclusion, the proposed integrative analysis was able to pre-qualify and prioritize candidate biomarkers from discovery-based proteomics to targeted MS. PMID:26540631
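
    One of the feature-selection steps named above, SVM-RFE, can be sketched with scikit-learn as below; the data matrix is a random stand-in for the secretome measurements.

        import numpy as np
        from sklearn.svm import SVC
        from sklearn.feature_selection import RFE

        rng = np.random.default_rng(0)
        X = rng.normal(size=(60, 500))       # 60 samples x 500 protein features (stand-in)
        y = rng.integers(0, 2, size=60)      # two tumour classes (stand-in labels)

        selector = RFE(SVC(kernel="linear"), n_features_to_select=50, step=0.1)
        selector.fit(X, y)
        candidate_idx = np.where(selector.support_)[0]   # indices of retained candidate features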

  12. A study of Hough Transform-based fingerprint alignment algorithms

    CSIR Research Space (South Africa)

    Mlambo, CS

    2014-10-01

    Full Text Available the implementation of each algorithm. The comparison is performed by considering the alignment results computed using each group of algorithms when varying the number of minutiae points, rotation angle, and translation. In addition, the memory usage, computing time...
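
    The Hough transform-based alignment examined by these algorithms can be illustrated roughly as follows: every pairing of a template and query minutia votes for a rotation/translation hypothesis, and the most-voted cell gives the alignment. The bin sizes and the minutia representation are assumptions for illustration.

        import math
        from collections import Counter

        def hough_align(template, query, ang_bin=5.0, xy_bin=10.0):
            """Minutiae are (x, y, theta_deg) tuples; returns (dtheta, dx, dy)."""
            votes = Counter()
            for xt, yt, tt in template:
                for xq, yq, tq in query:
                    dtheta = (tt - tq) % 360.0
                    r = math.radians(dtheta)
                    xr = xq * math.cos(r) - yq * math.sin(r)   # rotate the query minutia
                    yr = xq * math.sin(r) + yq * math.cos(r)
                    key = (round(dtheta / ang_bin),
                           round((xt - xr) / xy_bin), round((yt - yr) / xy_bin))
                    votes[key] += 1
            (k_ang, k_x, k_y), _ = votes.most_common(1)[0]
            return k_ang * ang_bin, k_x * xy_bin, k_y * xy_bin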

  13. A Generic Algorithm to Estimate LAI, FAPAR and FCOVER Variables from SPOT4_HRVIR and Landsat Sensors: Evaluation of the Consistency and Comparison with Ground Measurements

    Directory of Open Access Journals (Sweden)

    Wenjuan Li

    2015-11-01

    Full Text Available The leaf area index (LAI) and the fraction of photosynthetically active radiation absorbed by green vegetation (FAPAR) are essential climatic variables in surface process models. FCOVER is also important to separate vegetation and soil for energy balance processes. Currently, several LAI, FAPAR and FCOVER satellite products are derived at moderate to coarse spatial resolution. The launch of Sentinel-2 in 2015 will provide data at decametric resolution with a high revisit frequency to allow quantifying the canopy functioning at the local to regional scales. The aim of this study is thus to evaluate the performances of a neural network based algorithm to derive LAI, FAPAR and FCOVER products at decametric spatial resolution and high temporal sampling. The algorithm is generic, i.e., it is applied without any knowledge of the landcover. A time series of high spatial resolution SPOT4_HRVIR (16 scenes) and Landsat 8 (18 scenes) images acquired in 2013 over a site in southwestern France were used to generate the LAI, FAPAR and FCOVER products. For each sensor and each biophysical variable, a neural network was first trained over PROSPECT+SAIL radiative transfer model simulations of top of canopy reflectance data for the green, red, near-infrared and shortwave infrared bands. Our results show a good spatial and temporal consistency between the variables derived from both sensors: almost half the pixels show an absolute difference between SPOT and LANDSAT estimates of less than 0.5 units for LAI, and 0.05 units for FAPAR and FCOVER. Finally, ground measurements with downward-looking digital hemispherical cameras were completed over the main land cover types to validate the accuracy of the products. Results show that the derived products are strongly correlated with the field measurements (R2 > 0.79), corresponding to a RMSE = 0.49 for LAI, RMSE = 0.10 (0.12) for black-sky (white-sky) FAPAR and RMSE = 0.15 for FCOVER. It is concluded that the proposed generic algorithm provides a good
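
    The sensor-specific regression step can be pictured with the sketch below: a small network trained on simulated band reflectances and then applied to image pixels. The random arrays merely stand in for the PROSPECT+SAIL simulations and real imagery.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(1)
        refl = rng.uniform(0.0, 0.6, size=(5000, 4))   # green, red, NIR, SWIR reflectance (stand-in)
        lai = rng.uniform(0.0, 6.0, size=5000)         # corresponding simulated LAI (stand-in)

        net = MLPRegressor(hidden_layer_sizes=(5,), activation="tanh", max_iter=2000)
        net.fit(refl, lai)                             # train on the simulated database
        lai_estimates = net.predict(refl[:10])         # apply to (here: a few) image pixels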

  14. A Comparison of Amplitude-Based and Phase-Based Positron Emission Tomography Gating Algorithms for Segmentation of Internal Target Volumes of Tumors Subject to Respiratory Motion

    International Nuclear Information System (INIS)

    Jani, Shyam S.; Robinson, Clifford G.; Dahlbom, Magnus; White, Benjamin M.; Thomas, David H.; Gaudio, Sergio; Low, Daniel A.; Lamb, James M.

    2013-01-01

    Purpose: To quantitatively compare the accuracy of tumor volume segmentation in amplitude-based and phase-based respiratory gating algorithms in respiratory-correlated positron emission tomography (PET). Methods and Materials: List-mode fluorodeoxyglucose-PET data was acquired for 10 patients with a total of 12 fluorodeoxyglucose-avid tumors and 9 lymph nodes. Additionally, a phantom experiment was performed in which 4 plastic butyrate spheres with inner diameters ranging from 1 to 4 cm were imaged as they underwent 1-dimensional motion based on 2 measured patient breathing trajectories. PET list-mode data were gated into 8 bins using 2 amplitude-based (equal amplitude bins [A1] and equal counts per bin [A2]) and 2 temporal phase-based gating algorithms. Gated images were segmented using a commercially available gradient-based technique and a fixed 40% threshold of maximum uptake. Internal target volumes (ITVs) were generated by taking the union of all 8 contours per gated image. Segmented phantom ITVs were compared with their respective ground-truth ITVs, defined as the volume subtended by the tumor model positions covering 99% of breathing amplitude. Superior-inferior distances between sphere centroids in the end-inhale and end-exhale phases were also calculated. Results: Tumor ITVs from amplitude-based methods were significantly larger than those from temporal-based techniques (P=.002). For lymph nodes, A2 resulted in ITVs that were significantly larger than either of the temporal-based techniques (P<.0323). A1 produced the largest and most accurate ITVs for spheres with diameters of ≥2 cm (P=.002). No significant difference was shown between algorithms in the 1-cm sphere data set. For phantom spheres, amplitude-based methods recovered an average of 9.5% more motion displacement than temporal-based methods under regular breathing conditions and an average of 45.7% more in the presence of baseline drift (P<.001). Conclusions: Target volumes in images generated
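
    The difference between the gating schemes being compared comes down to how a breathing trace is binned. The sketch below contrasts phase bins, equal-amplitude bins ("A1") and equal-counts bins ("A2") on a synthetic trace with baseline drift; it is an illustration, not the clinical binning software.

        import numpy as np

        def phase_gate(t, period, n_bins=8):                    # equal temporal phase bins
            phase = (t % period) / period
            return np.minimum((phase * n_bins).astype(int), n_bins - 1)

        def amplitude_gate_equal_width(a, n_bins=8):            # "A1": equal amplitude bins
            edges = np.linspace(a.min(), a.max(), n_bins + 1)
            return np.clip(np.digitize(a, edges[1:-1]), 0, n_bins - 1)

        def amplitude_gate_equal_counts(a, n_bins=8):           # "A2": equal counts per bin
            edges = np.quantile(a, np.linspace(0, 1, n_bins + 1))
            return np.clip(np.digitize(a, edges[1:-1]), 0, n_bins - 1)

        t = np.linspace(0, 60, 6000)                            # 60 s of synthetic breathing
        a = np.sin(2 * np.pi * t / 4.0) + 0.05 * t              # 4 s period plus baseline drift
        bins_phase = phase_gate(t, period=4.0)
        bins_amp = amplitude_gate_equal_counts(a)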

  15. A Performance Comparison Of A CFAR Ship Detection Algorithm Using Envisat, RadarSat, COSMO-SkyMed and Terra SAR-X Images

    Science.gov (United States)

    Lorenzzetti, Joao A.; Paes, Rafael L.; Gheradi, Douglas M.

    2010-04-01

    In this paper we discuss the results of a CFAR ship detection algorithm for a series of SAR images of the Brazilian coast. The following configuration for the CFAR target/buffer/background windows gave the best results: 3x3/5x5/13x13 for a PFA of 0.1% for pixel spacing greater than 50m. For pixel spacing less than 50m, best results were achieved for PFA of 1% and windows sizes of 5x5/7x7/15x15. Results indicate that CFAR as implemented gave good results as measured by the Figure of Merit, as defined by Foulkes and Booth (2000), which varied from 0.79 for CosmoSkymed to 0.88 for Envisat. Results obtained should be taken so far only as an indication of the performance of the implemented CFAR due to the limited sample of images.
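
    A bare-bones cell-averaging CFAR of the kind described (target/buffer/background windows) might be sketched as below; for brevity the test statistic is the single centre pixel rather than the 3x3 target window, and a Gaussian background is assumed.

        import numpy as np
        from scipy.stats import norm

        def cfar_detect(img, guard=5, background=13, pfa=0.001):
            img = np.asarray(img, float)
            pad = background // 2
            padded = np.pad(img, pad, mode='reflect')
            k = norm.isf(pfa)                    # threshold factor for the chosen PFA
            g, c = guard // 2, background // 2
            det = np.zeros(img.shape, dtype=bool)
            for i in range(img.shape[0]):
                for j in range(img.shape[1]):
                    block = padded[i:i + background, j:j + background].copy()
                    block[c - g:c + g + 1, c - g:c + g + 1] = np.nan   # exclude guard/target area
                    mu, sigma = np.nanmean(block), np.nanstd(block)
                    det[i, j] = img[i, j] > mu + k * sigma
            return det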

  16. Cash balance management: A comparison between genetic algorithms and particle swarm optimization - doi: 10.4025/actascitechnol.v34i4.12194

    Directory of Open Access Journals (Sweden)

    Marcelo Botelho da Costa Moraes

    2012-10-01

    Full Text Available This work aimed to apply genetic algorithms (GA) and particle swarm optimization (PSO) to cash balance management using the Miller-Orr model, a stochastic model that does not define a single ideal cash balance point, but rather an oscillation range between a lower bound, an ideal balance and an upper bound. Thus, this paper proposes the application of GA and PSO to minimize the total cost of cash maintenance by obtaining the lower-bound parameter of the Miller-Orr model, using the assumptions presented in the literature. Computational experiments were applied in the development and validation of the models. The results indicated that both GA and PSO are applicable to determining the cash level from the lower limit, with the PSO model, which had not previously been applied to this type of problem, giving the best results.
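
    For reference, the Miller-Orr quantities that the optimization acts on can be written down directly; the GA/PSO step then searches over the lower bound to minimise the resulting total cost. The numbers below are purely illustrative.

        # Miller-Orr sketch: spread, return point and upper bound for a given lower bound.
        def miller_orr(lower_bound, transfer_cost, cashflow_variance, daily_rate):
            spread = 3.0 * (0.75 * transfer_cost * cashflow_variance / daily_rate) ** (1.0 / 3.0)
            return_point = lower_bound + spread / 3.0
            upper_bound = lower_bound + spread
            return return_point, upper_bound

        print(miller_orr(lower_bound=10_000, transfer_cost=50,
                         cashflow_variance=1_000_000, daily_rate=0.0003))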

  17. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  18. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel

  19. Comparison of a sentinel lymph node and a selective lymphadenectomy algorithm in patients with endometrioid endometrial carcinoma and limited myometrial invasion.

    Science.gov (United States)

    Zahl Eriksson, Ane Gerda; Ducie, Jen; Ali, Narisha; McGree, Michaela E; Weaver, Amy L; Bogani, Giorgio; Cliby, William A; Dowdy, Sean C; Bakkum-Gamez, Jamie N; Abu-Rustum, Nadeem R; Mariani, Andrea; Leitao, Mario M

    2016-03-01

    To assess clinicopathologic outcomes between two nodal assessment approaches in patients with endometrioid endometrial carcinoma and limited myoinvasion. Patients with endometrial cancer at two institutions were reviewed. At one institution, a complete pelvic and para-aortic lymphadenectomy to the renal veins was performed in select cases deemed at risk for nodal metastasis due to grade 3 cancer and/or primary tumor diameter>2cm (LND cohort). This is a historic approach at this institution. At the other institution, a sentinel lymph node mapping algorithm was used per institutional protocol (SLN cohort). Low risk was defined as endometrioid adenocarcinoma with myometrial invasion <50%. Macrometastasis, micrometastasis, and isolated tumor cells were all considered node-positive. Of 1135 cases identified, 642 (57%) were managed with an SLN approach and 493 (43%) with an LND approach. Pelvic nodes (PLNs) were removed in 93% and 58% of patients, respectively (P<0.001); para-aortic nodes (PANs) were removed in 14.5% and 50% of patients, respectively (P<0.001). Median number of PLNs removed was 6 and 34, respectively; median number of PANs removed was 5 and 16, respectively (both P<0.001). Metastasis to PLNs was detected in 5.1% and 2.6% of patients, respectively (P=0.03), and to PANs in 0.8% and 1.0%, respectively (P=0.75). The 3-year disease-free survival rates were 94.9% (95% CI, 92.4-97.5) and 96.8% (95% CI, 95.2-98.5), respectively. Our findings support the use of either strategy for endometrial cancer staging, with no apparent detriment in adhering to the SLN algorithm. The clinical significance of disease detected on ultrastaging and the role of adjuvant therapy is yet to be determined. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Poster - Thur Eve - 06: Comparison of an open source genetic algorithm to the commercially used IPSA for generation of seed distributions in LDR prostate brachytherapy.

    Science.gov (United States)

    McGeachy, P; Khan, R

    2012-07-01

    In early stage prostate cancer, low dose rate (LDR) prostate brachytherapy is a favorable treatment modality, where small radioactive seeds are permanently implanted throughout the prostate. Treatment centres currently rely on a commercial optimization algorithm, IPSA, to generate seed distributions for treatment plans. However, commercial software does not allow the user access to the source code, thus reducing the flexibility for treatment planning and impeding any implementation of new and, perhaps, improved clinical techniques. An open source genetic algorithm (GA) has been encoded in MATLAB to generate seed distributions for a simplified prostate and urethra model. To assess the quality of the seed distributions created by the GA, both the GA and IPSA were used to generate seed distributions for two clinically relevant scenarios and the quality of the GA distributions relative to IPSA distributions and clinically accepted standards for seed distributions was investigated. The first clinically relevant scenario involved generating seed distributions for three different prostate volumes (19.2 cc, 32.4 cc, and 54.7 cc). The second scenario involved generating distributions for three separate seed activities (0.397 mCi, 0.455 mCi, and 0.5 mCi). Both GA and IPSA met the clinically accepted criteria for the two scenarios, where distributions produced by the GA were comparable to IPSA in terms of full coverage of the prostate by the prescribed dose, and minimized dose to the urethra, which passed straight through the prostate. Further, the GA offered improved reduction of high dose regions (i.e hot spots) within the planned target volume. © 2012 American Association of Physicists in Medicine.

  1. Performance comparison of machine learning algorithms and number of independent components used in fMRI decoding of belief vs. disbelief.

    Science.gov (United States)

    Douglas, P K; Harris, Sam; Yuille, Alan; Cohen, Mark S

    2011-05-15

    Machine learning (ML) has become a popular tool for mining functional neuroimaging data, and there are now hopes of performing such analyses efficiently in real-time. Towards this goal, we compared the accuracy of six different ML algorithms applied to neuroimaging data of persons engaged in a bivariate task, asserting their belief or disbelief of a variety of propositional statements. We performed unsupervised dimension reduction and automated feature extraction using independent component (IC) analysis and extracted IC time courses. Optimization of classification hyperparameters across each classifier occurred prior to assessment. Maximum accuracy was achieved at 92% for Random Forest, followed by 91% for AdaBoost, 89% for Naïve Bayes, 87% for a J48 decision tree, 86% for K*, and 84% for support vector machine. For real-time decoding applications, finding a parsimonious subset of diagnostic ICs might be useful. We used a forward search technique to sequentially add ranked ICs to the feature subspace. For the current data set, we determined that approximately six ICs represented a meaningful basis set for classification. We then projected these six IC spatial maps forward onto a later scanning session within subject. We then applied the optimized ML algorithms to these new data instances, and found that classification accuracy results were reproducible. Additionally, we compared our classification method to our previously published general linear model results on this same data set. The highest ranked IC spatial maps show similarity to brain regions associated with contrasts for belief > disbelief, and disbelief > belief. Copyright © 2010 Elsevier Inc. All rights reserved.
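
    The forward-search step described above amounts to adding ranked components one at a time and tracking cross-validated accuracy, roughly as sketched here with stand-in data and an assumed pre-computed ranking.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(2)
        ic_features = rng.normal(size=(200, 20))   # trials x IC-derived features (stand-in)
        labels = rng.integers(0, 2, size=200)      # belief / disbelief (stand-in labels)
        ranking = np.arange(20)                    # assume ICs already ranked by diagnosticity

        accuracy = []
        for k in range(1, len(ranking) + 1):
            X = ic_features[:, ranking[:k]]
            accuracy.append(cross_val_score(RandomForestClassifier(n_estimators=100),
                                            X, labels, cv=5).mean())
        best_k = int(np.argmax(accuracy)) + 1      # size of the parsimonious subset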

  2. A pencil beam algorithm for helium ion beam therapy

    Energy Technology Data Exchange (ETDEWEB)

    Fuchs, Hermann; Stroebele, Julia; Schreiner, Thomas; Hirtl, Albert; Georg, Dietmar [Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Medical University of Vienna, 1090 Vienna (Austria); Department of Radiation Oncology, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria) and Comprehensive Cancer Center, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria); Department of Radiation Oncology, Medical University of Vienna/AKH Vienna (Austria) and Comprehensive Cancer Center, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria); PEG MedAustron, 2700 Wiener Neustadt (Austria); Department of Nuclear Medicine, Medical University of Vienna, 1090 Vienna (Austria); Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Medical University of Vienna, 1090 Vienna (Austria); Department of Radiation Oncology, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria) and Comprehensive Cancer Center, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria)

    2012-11-15

    Purpose: To develop a flexible pencil beam algorithm for helium ion beam therapy. Dose distributions were calculated using the newly developed pencil beam algorithm and validated using Monte Carlo (MC) methods. Methods: The algorithm was based on the established theory of fluence weighted elemental pencil beam (PB) kernels. Using a new real-time splitting approach, a minimization routine selects the optimal shape for each sub-beam. Dose depositions along the beam path were determined using a look-up table (LUT). Data for LUT generation were derived from MC simulations in water using GATE 6.1. For materials other than water, dose depositions were calculated by the algorithm using water-equivalent depth scaling. Lateral beam spreading caused by multiple scattering has been accounted for by implementing a non-local scattering formula developed by Gottschalk. A new nuclear correction was modelled using a Voigt function and implemented by a LUT approach. Validation simulations have been performed using a phantom filled with homogeneous materials or heterogeneous slabs of up to 3 cm. The beams were incident perpendicular to the phantom surface with initial particle energies ranging from 50 to 250 MeV/A and a total number of 10^7 ions per beam. For comparison, dedicated evaluation software was developed to calculate gamma indices for the dose distributions. Results: In homogeneous phantoms, maximum range deviations between PB and MC of less than 1.1% and differences in the width of the distal energy falloff of the Bragg peak from 80% to 20% of less than 0.1 mm were found. Heterogeneous phantoms using layered slabs satisfied a γ-index criterion of 2%/2 mm of the local value except for some single voxels. For more complex phantoms using laterally arranged bone-air slabs, the γ-index criterion was exceeded in some areas giving a maximum γ-index of 1.75 and 4.9% of the voxels showed γ-index values larger than one. The calculation precision of the
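
    The central look-up-table idea, dose as a function of water-equivalent depth with heterogeneities handled by stopping-power scaling, can be sketched as follows; the depth-dose curve here is a toy placeholder rather than GATE-derived data, and lateral spreading and nuclear corrections are omitted.

        import numpy as np

        depth_lut = np.linspace(0, 200, 401)                          # water-equivalent depth (mm)
        dose_lut = np.exp(-((depth_lut - 150.0) ** 2) / 50.0) + 0.2   # toy Bragg-like curve

        def dose_along_ray(step_mm, rel_stopping_powers):
            """rel_stopping_powers: SPR of each voxel crossed by the ray, in order."""
            wed = np.cumsum(np.asarray(rel_stopping_powers) * step_mm)   # water-equivalent depth
            return np.interp(wed, depth_lut, dose_lut)                   # LUT lookup per voxel

        dose = dose_along_ray(1.0, [1.0] * 100 + [1.5] * 40)   # water followed by denser tissue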

  3. Candidate genes in panic disorder

    DEFF Research Database (Denmark)

    Howe, A. S.; Buttenschön, Henriette N; Bani-Fatemi, A.

    2016-01-01

    The utilization of molecular genetics approaches in examination of panic disorder (PD) has implicated several variants as potential susceptibility factors for panicogenesis. However, the identification of robust PD susceptibility genes has been complicated by phenotypic diversity, underpowered association studies and ancestry-specific effects. In the present study, we performed a succinct review of case-control association studies published prior to April 2015. Meta-analyses were performed for candidate gene variants examined in at least three studies using the Cochrane Mantel-Haenszel fixed-effect model. Secondary analyses were also performed to assess the influences of sex, agoraphobia co-morbidity and ancestry-specific effects on panicogenesis. Meta-analyses were performed on 23 variants in 20 PD candidate genes. Significant associations after correction for multiple testing were observed...

  4. Preoperative evaluation of living renal donors: value of contrast-enhanced 3D magnetic resonance angiography and comparison of three rendering algorithms

    International Nuclear Information System (INIS)

    Fink, C.; Hallscheidt, P.J.; Hosch, W.P.; Kauffmann, G.W.; Duex, M.; Ott, R.C.; Wiesel, M.

    2003-01-01

    The aim of this study was to assess the value of contrast-enhanced three-dimensional MR angiography (CE 3D MRA) in the preoperative assessment of potential living renal donors, and to compare the accuracy for the depiction of the vascular anatomy using three different rendering algorithms. Twenty-three potential living renal donors were examined with CE 3D MRA (TE/TR=1.3 ms/3.7 ms, field of view 260-320 x 350 mm, 384-448 x 512 matrix, slab thickness 9.4 cm, 72 partitions, section thickness 1.3 mm, scan time 24 s, 0.1 mmol/kg body weight gadobenate dimeglumine). Magnetic resonance angiography data sets were processed with maximum intensity projection (MIP), volume rendering (VR), and shaded-surface display (SSD) algorithms. The image analysis was performed independently by three MR-experienced radiologists recording the number of renal arteries, the presence of early branching or vascular pathology. The combination of digital subtraction angiography (DSA) and intraoperative findings served as the gold standard for the image analysis. In total, 52 renal arteries were correspondingly observed in 23 patients at DSA and surgery. Other findings were 3 cases of early branching of the renal arteries, 4 cases of arterial stenosis and 1 case of bilateral fibromuscular dysplasia. With MRA source data all 52 renal arteries were correctly identified by all readers, compared with 51 (98.1%), 51-52 (98.1-100%) and 49-50 renal arteries (94.2-96.2%) with the MIP, VR and SSD projections, respectively. Similarly, the sensitivity, specificity and accuracy was highest with the MRA source data followed by MIP, VR and SSD. Time requirements were lowest for the MIP reconstructions and highest for the VR reconstructions. Contrast-enhanced 3D MRA is a reliable, non-invasive tool for the preoperative evaluation of potential living renal donors. Maximum intensity projection is favourable for the processing of 3D MRA data, as it has minimal time and computational requirements, while having

  5. Preoperative evaluation of living renal donors: value of contrast-enhanced 3D magnetic resonance angiography and comparison of three rendering algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Fink, C. [Abteilung Radiologische Diagnostik, Radiologische Universitaetsklinik Heidelberg, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany); Abteilung Onkologische Diagnostik und Therapie, Forschungsschwerpunkt Radiologische Diagnostik und Therapie, Deutsches Krebsforschungszentrum, Im Neuenheimer Feld 280, 69120 Heidelberg (Germany); Hallscheidt, P.J.; Hosch, W.P.; Kauffmann, G.W.; Duex, M. [Abteilung Radiologische Diagnostik, Radiologische Universitaetsklinik Heidelberg, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany); Ott, R.C.; Wiesel, M. [Abteilung Urologie und Poliklinik, Chirurgische Universitaetsklinik Heidelberg, Im Neuenheimer Feld 110, 69120 Heidelberg (Germany)

    2003-04-01

    The aim of this study was to assess the value of contrast-enhanced three-dimensional MR angiography (CE 3D MRA) in the preoperative assessment of potential living renal donors, and to compare the accuracy for the depiction of the vascular anatomy using three different rendering algorithms. Twenty-three potential living renal donors were examined with CE 3D MRA (TE/TR=1.3 ms/3.7 ms, field of view 260-320 x 350 mm, 384-448 x 512 matrix, slab thickness 9.4 cm, 72 partitions, section thickness 1.3 mm, scan time 24 s, 0.1 mmol/kg body weight gadobenate dimeglumine). Magnetic resonance angiography data sets were processed with maximum intensity projection (MIP), volume rendering (VR), and shaded-surface display (SSD) algorithms. The image analysis was performed independently by three MR-experienced radiologists recording the number of renal arteries, the presence of early branching or vascular pathology. The combination of digital subtraction angiography (DSA) and intraoperative findings served as the gold standard for the image analysis. In total, 52 renal arteries were correspondingly observed in 23 patients at DSA and surgery. Other findings were 3 cases of early branching of the renal arteries, 4 cases of arterial stenosis and 1 case of bilateral fibromuscular dysplasia. With MRA source data all 52 renal arteries were correctly identified by all readers, compared with 51 (98.1%), 51-52 (98.1-100%) and 49-50 renal arteries (94.2-96.2%) with the MIP, VR and SSD projections, respectively. Similarly, the sensitivity, specificity and accuracy was highest with the MRA source data followed by MIP, VR and SSD. Time requirements were lowest for the MIP reconstructions and highest for the VR reconstructions. Contrast-enhanced 3D MRA is a reliable, non-invasive tool for the preoperative evaluation of potential living renal donors. Maximum intensity projection is favourable for the processing of 3D MRA data, as it has minimal time and computational requirements, while having

  6. Comparison of Acuros (AXB) and Anisotropic Analytical Algorithm (AAA) for dose calculation in treatment of oesophageal cancer: effects on modelling tumour control probability

    International Nuclear Information System (INIS)

    Padmanaban, Sriram; Warren, Samantha; Walsh, Anthony; Partridge, Mike; Hawkins, Maria A

    2014-01-01

    To investigate systematic changes in dose arising when treatment plans optimised using the Anisotropic Analytical Algorithm (AAA) are recalculated using Acuros XB (AXB) in patients treated with definitive chemoradiotherapy (dCRT) for locally advanced oesophageal cancers. We have compared treatment plans created using AAA with those recalculated using AXB. Although the Anisotropic Analytical Algorithm (AAA) is currently more widely used in clinical routine, Acuros XB (AXB) has been shown to more accurately calculate the dose distribution, particularly in heterogeneous regions. Studies to predict clinical outcome should be based on modelling the dose delivered to the patient as accurately as possible. CT datasets from ten patients were selected for this retrospective study. VMAT (Volumetric modulated arc therapy) plans with 2 arcs, collimator rotation ± 5-10° and dose prescription 50 Gy / 25 fractions were created using Varian Eclipse (v10.0). The initial dose calculation was performed with AAA, and AXB plans were created by re-calculating the dose distribution using the same number of monitor units (MU) and multileaf collimator (MLC) files as the original plan. The difference in calculated dose to organs at risk (OAR) was compared using dose-volume histogram (DVH) statistics and p values were calculated using the Wilcoxon signed rank test. The potential clinical effect of dosimetric differences in the gross tumour volume (GTV) was evaluated using three different TCP models from the literature. PTV Median dose was apparently 0.9 Gy lower (range: 0.5 Gy - 1.3 Gy; p < 0.05) for VMAT AAA plans re-calculated with AXB and GTV mean dose was reduced by on average 1.0 Gy (0.3 Gy −1.5 Gy; p < 0.05). An apparent difference in TCP of between 1.2% and 3.1% was found depending on the choice of TCP model. OAR mean dose was lower in the AXB recalculated plan than the AAA plan (on average, dose reduction: lung 1.7%, heart 2.4%). Similar trends were seen for CRT plans
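
    As an illustration of how a roughly 1 Gy shift in GTV mean dose maps onto TCP, a commonly used logistic model is sketched below; the D50 and gamma50 values are illustrative and are not the parameters of the three literature models used in the study.

        def tcp_logistic(dose_gy, d50=49.0, gamma50=1.5):
            return 1.0 / (1.0 + (d50 / dose_gy) ** (4.0 * gamma50))

        tcp_aaa = tcp_logistic(50.0)    # GTV mean dose as reported by the AAA plan
        tcp_axb = tcp_logistic(49.0)    # same plan recalculated with AXB (~1 Gy lower)
        print(round(100 * (tcp_aaa - tcp_axb), 1), "% TCP difference")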

  7. Comparison of Acuros (AXB) and Anisotropic Analytical Algorithm (AAA) for dose calculation in treatment of oesophageal cancer: effects on modelling tumour control probability.

    Science.gov (United States)

    Padmanaban, Sriram; Warren, Samantha; Walsh, Anthony; Partridge, Mike; Hawkins, Maria A

    2014-12-23

    To investigate systematic changes in dose arising when treatment plans optimised using the Anisotropic Analytical Algorithm (AAA) are recalculated using Acuros XB (AXB) in patients treated with definitive chemoradiotherapy (dCRT) for locally advanced oesophageal cancers. We have compared treatment plans created using AAA with those recalculated using AXB. Although the Anisotropic Analytical Algorithm (AAA) is currently more widely used in clinical routine, Acuros XB (AXB) has been shown to more accurately calculate the dose distribution, particularly in heterogeneous regions. Studies to predict clinical outcome should be based on modelling the dose delivered to the patient as accurately as possible. CT datasets from ten patients were selected for this retrospective study. VMAT (Volumetric modulated arc therapy) plans with 2 arcs, collimator rotation ± 5-10° and dose prescription 50 Gy / 25 fractions were created using Varian Eclipse (v10.0). The initial dose calculation was performed with AAA, and AXB plans were created by re-calculating the dose distribution using the same number of monitor units (MU) and multileaf collimator (MLC) files as the original plan. The difference in calculated dose to organs at risk (OAR) was compared using dose-volume histogram (DVH) statistics and p values were calculated using the Wilcoxon signed rank test. The potential clinical effect of dosimetric differences in the gross tumour volume (GTV) was evaluated using three different TCP models from the literature. PTV Median dose was apparently 0.9 Gy lower (range: 0.5 Gy - 1.3 Gy; p < 0.05) for VMAT AAA plans re-calculated with AXB and GTV mean dose was reduced by on average 1.0 Gy (0.3 Gy - 1.5 Gy; p < 0.05). An apparent difference in TCP of between 1.2% and 3.1% was found depending on the choice of TCP model. OAR mean dose was lower in the AXB recalculated plan than the AAA plan (on average, dose reduction: lung 1.7%, heart 2.4%). Similar trends were seen for CRT plans. Differences in dose distribution are observed with VMAT and CRT plans recalculated with AXB, particularly within soft tissue at the tumour/lung interface, where AXB has been shown to more

  8. Comparison of proton therapy treatment planning for head tumors with a pencil beam algorithm on dual and single energy CT images

    Energy Technology Data Exchange (ETDEWEB)

    Hudobivnik, Nace; Dedes, George; Parodi, Katia; Landry, Guillaume, E-mail: g.landry@lmu.de [Department of Medical Physics, Ludwig-Maximilians-University, Munich 85748 (Germany); Schwarz, Florian; Johnson, Thorsten; Sommer, Wieland H. [Institute for Clinical Radiology, Ludwig Maximilians University Hospital Munich, 81377 Munich (Germany); Agolli, Linda [Department of Radiation Oncology, Ludwig-Maximilians-University, Munich 81377, Germany and Radiation Oncology, Sant’ Andrea Hospital, Sapienza University, Rome 00189 (Italy); Tessonnier, Thomas [Department of Medical Physics, Ludwig-Maximilians-University, Munich 85748, Germany and Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg (Germany); Verhaegen, Frank [Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht 6229 ET, the Netherlands and Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3A 0G4 (Canada); Thieke, Christian; Belka, Claus [Department of Radiation Oncology, Ludwig-Maximilians-University, Munich 81377 (Germany)

    2016-01-15

    Purpose: Dual energy CT (DECT) has recently been proposed as an improvement over single energy CT (SECT) for stopping power ratio (SPR) estimation for proton therapy treatment planning (TP), thereby potentially reducing range uncertainties. Published literature investigated phantoms. This study aims at performing proton therapy TP on SECT and DECT head images of the same patients and at evaluating whether the reported improved DECT SPR accuracy translates into clinically relevant range shifts in clinical head treatment scenarios. Methods: Two phantoms were scanned at a last generation dual source DECT scanner at 90 and 150 kVp with Sn filtration. The first phantom (Gammex phantom) was used to calibrate the scanner in terms of SPR while the second served as evaluation (CIRS phantom). DECT images of five head trauma patients were used as surrogate cancer patient images for TP of proton therapy. Pencil beam algorithm based TP was performed on SECT and DECT images and the dose distributions corresponding to the optimized proton plans were calculated using a Monte Carlo (MC) simulation platform using the same patient geometry for both plans obtained from conversion of the 150 kVp images. Range shifts between the MC dose distributions from SECT and DECT plans were assessed using 2D range maps. Results: SPR root mean square errors (RMSEs) for the inserts of the Gammex phantom were 1.9%, 1.8%, and 1.2% for SECT phantom calibration (SECT_phantom), SECT stoichiometric calibration (SECT_stoichiometric), and DECT calibration, respectively. For the CIRS phantom, these were 3.6%, 1.6%, and 1.0%. When investigating patient anatomy, group median range differences of up to −1.4% were observed for head cases when comparing SECT_stoichiometric with DECT. For this calibration the 25th and 75th percentiles varied from −2% to 0% across the five patients. The group median was found to be limited to 0.5% when using SECT_phantom and the 25th and 75th percentiles

  9. A modified probabilistic genetic algorithm for the solution of complex constrained optimization problems

    OpenAIRE

    Vorozheikin, A.; Gonchar, T.; Panfilov, I.; Sopov, E.; Sopov, S.

    2009-01-01

    A new algorithm for the solution of complex constrained optimization problems, based on the probabilistic genetic algorithm with optimal solution prediction, is proposed. Results of an efficiency investigation, in comparison with a standard genetic algorithm, are presented.

  10. Particle algorithms for population dynamics in flows

    International Nuclear Information System (INIS)

    Perlekar, Prasad; Toschi, Federico; Benzi, Roberto; Pigolotti, Simone

    2011-01-01

    We present and discuss particle-based algorithms to numerically study the dynamics of a population subjected to an advecting flow. We discuss a few possible variants of the algorithms and compare them in a model compressible flow. A comparison against appropriate versions of the continuum stochastic Fisher equation (sFKPP) is also presented and discussed. The algorithms can be used to study population genetics in fluid environments.

  11. Testing algorithms for critical slowing down

    Directory of Open Access Journals (Sweden)

    Cossu Guido

    2018-01-01

    Full Text Available We present the preliminary tests on two modifications of the Hybrid Monte Carlo (HMC) algorithm. Both algorithms are designed to travel much farther in the Hamiltonian phase space for each trajectory and to reduce the autocorrelations among physical observables, thus tackling the critical slowing down towards the continuum limit. We present a comparison of the costs of the new algorithms with the standard HMC evolution for pure gauge fields, studying the autocorrelation times for various quantities including the topological charge.
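
    The quantity used for the comparison, the integrated autocorrelation time of an observable history, can be estimated with a simple windowed sum, as in this sketch on a toy AR(1) series standing in for a topological-charge history.

        import numpy as np

        def integrated_autocorrelation_time(x):
            x = np.asarray(x, float) - np.mean(x)
            n = len(x)
            acf = np.correlate(x, x, mode='full')[n - 1:] / (np.arange(n, 0, -1) * x.var())
            window = next((t for t in range(1, n) if acf[t] < 0), n // 2)  # first sign change
            return 0.5 + np.sum(acf[1:window])

        rng = np.random.default_rng(3)
        q = np.zeros(10000)
        for t in range(1, 10000):                    # toy autocorrelated "charge" history
            q[t] = 0.95 * q[t - 1] + rng.normal()
        print(integrated_autocorrelation_time(q))    # theory for this AR(1) process: about 19.5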

  12. Algorithms for reconstructing images for industrial applications

    International Nuclear Information System (INIS)

    Lopes, R.T.; Crispim, V.R.

    1986-01-01

    Several algorithms for reconstructing objects from their projections are being studied in our laboratory for industrial applications. Such algorithms are useful for locating the position and shape of regions of different material composition within the object. A comparative study of two algorithms is made. The two investigated algorithms are the MART (Multiplicative Algebraic Reconstruction Technique) and the Convolution Method. The comparison is carried out from the point of view of the quality of the reconstructed image, the number of views, and cost. (Author) [pt
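
    A minimal sketch of a MART-style multiplicative update for a toy system of ray sums (the geometry, relaxation parameter, and stopping rule are assumptions, not the implementation compared in the record).

        import numpy as np

        def mart(A, b, n_iter=50, lam=1.0):
            # Multiplicative ART: start from a uniform positive image and, ray by ray,
            # scale each pixel by the measured-to-computed projection ratio,
            # weighted by that pixel's contribution to the ray.
            x = np.ones(A.shape[1])
            for _ in range(n_iter):
                for i in range(A.shape[0]):
                    proj = A[i] @ x
                    if proj > 0 and b[i] > 0:
                        x *= (b[i] / proj) ** (lam * A[i])
            return x

        # Tiny 2x2 "image" (flattened) observed through row-sum and column-sum projections.
        A = np.array([[1, 1, 0, 0],
                      [0, 0, 1, 1],
                      [1, 0, 1, 0],
                      [0, 1, 0, 1]], dtype=float)
        true_img = np.array([1.0, 2.0, 3.0, 4.0])
        x_rec = mart(A, A @ true_img)   # an image consistent with the measured projections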

  13. Comparison Between Manual Auditing and a Natural Language Process With Machine Learning Algorithm to Evaluate Faculty Use of Standardized Reports in Radiology.

    Science.gov (United States)

    Guimaraes, Carolina V; Grzeszczuk, Robert; Bisset, George S; Donnelly, Lane F

    2018-03-01

    When implementing or monitoring department-sanctioned standardized radiology reports, feedback about individual faculty performance has been shown to be a useful driver of faculty compliance. Most commonly, these data are derived from manual audit, which can be both time-consuming and subject to sampling error. The purpose of this study was to evaluate whether a software program using natural language processing and machine learning could accurately audit radiologist compliance with the use of standardized reports when compared with manual audits. Radiology reports from a 1-month period were loaded into such a software program, and faculty compliance with the use of standardized reports was calculated. For that same period, manual audits were performed (25 reports audited for each of 42 faculty members). The mean compliance rate calculated by automated auditing was then compared with the confidence interval of the mean rate by manual audit. The mean compliance rate for use of standardized reports as determined by manual audit was 91.2%, with a confidence interval between 89.3% and 92.8%. The mean compliance rate calculated by automated auditing was 92.0%, within that confidence interval. This study shows that, by use of natural language processing and machine learning algorithms, an automated analysis can accurately determine whether reports comply with standardized report templates and language, consistent with manual audits. This may avoid significant labor costs related to conducting the manual auditing process. Copyright © 2017 American College of Radiology. Published by Elsevier Inc. All rights reserved.
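
    The record does not disclose the underlying model, so the following is only a generic sketch of such an audit, assuming scikit-learn with TF-IDF features and logistic regression trained on manually labeled reports; the report texts and labels are placeholders.

        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.linear_model import LogisticRegression
        from sklearn.pipeline import make_pipeline

        # Hypothetical training data: report texts labeled 1 if they follow the department's
        # standardized template and language, 0 otherwise (labels would come from a manual audit).
        reports = ["FINDINGS: ... IMPRESSION: ...", "free-text narrative without the template ..."]
        labels = [1, 0]

        model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression(max_iter=1000))
        model.fit(reports, labels)

        # A faculty member's compliance rate is then the fraction of their reports predicted compliant.
        new_reports = ["FINDINGS: ... IMPRESSION: ..."]
        compliance_rate = model.predict(new_reports).mean()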

  14. On the application of artificial bee colony (ABC) algorithm for optimization of well placements in fractured reservoirs; efficiency comparison with the particle swarm optimization (PSO) methodology

    Directory of Open Access Journals (Sweden)

    Behzad Nozohour-leilabady

    2016-03-01

    Full Text Available The application of a recent optimization technique, the artificial bee colony (ABC), was investigated in the context of finding optimal well locations. The ABC performance was compared with the corresponding results from the particle swarm optimization (PSO) algorithm under essentially similar conditions. Treatment of out-of-boundary solution vectors was accomplished via the periodic boundary condition (PBC), which presumably accelerates convergence towards the global optimum. Stochastic searches were initiated from several random starting points to minimize starting-point dependency in the established results. The optimizations were aimed at maximizing the Net Present Value (NPV) objective function over the considered oilfield production durations. To deal with the issue of reservoir heterogeneity, random permeability was applied via normal/uniform distribution functions. In addition, the issue of an increased number of optimization parameters was addressed by considering scenarios with multiple injector and producer wells, and cases with deviated wells in a real reservoir model. The typical results show that ABC outperforms PSO (in the cases studied) after relatively short optimization cycles, indicating the great promise of the ABC methodology for well-optimization purposes.
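
    A compact sketch of the basic ABC loop (employed, onlooker, and scout phases) on an analytic surrogate objective is given below; the reservoir simulator, NPV evaluation, and all parameter values are stand-ins, and maximizing NPV is cast here as minimizing its negative.

        import numpy as np

        def abc_minimize(f, bounds, n_food=20, limit=30, iters=200, rng=np.random.default_rng(4)):
            lo, hi = np.array(bounds, dtype=float).T
            dim = len(bounds)
            foods = rng.uniform(lo, hi, (n_food, dim))          # candidate solutions ("food sources")
            vals = np.array([f(x) for x in foods])
            trials = np.zeros(n_food, dtype=int)

            def try_neighbor(i):
                k, j = rng.integers(n_food), rng.integers(dim)
                cand = foods[i].copy()
                cand[j] += rng.uniform(-1.0, 1.0) * (foods[i, j] - foods[k, j])
                cand = np.clip(cand, lo, hi)
                v = f(cand)
                if v < vals[i]:
                    foods[i], vals[i], trials[i] = cand, v, 0   # greedy replacement
                else:
                    trials[i] += 1

            for _ in range(iters):
                for i in range(n_food):                          # employed bees: one neighbor per source
                    try_neighbor(i)
                probs = vals.max() - vals + 1e-12                # onlooker bees favor better sources
                for i in rng.choice(n_food, n_food, p=probs / probs.sum()):
                    try_neighbor(i)
                for i in np.where(trials > limit)[0]:            # scout bees abandon exhausted sources
                    foods[i] = rng.uniform(lo, hi, dim)
                    vals[i], trials[i] = f(foods[i]), 0
            return foods[np.argmin(vals)], float(vals.min())

        # Surrogate objective standing in for -NPV(well coordinates); the true NPV would come from a simulator.
        best_xy, best_val = abc_minimize(lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2,
                                         [(-10.0, 10.0), (-10.0, 10.0)])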

  15. Improved autonomous star identification algorithm

    International Nuclear Information System (INIS)

    Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong

    2015-01-01

    The log-polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by star identification algorithms that use the LPT. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which shortens the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to construct the feature vector of the navigation star, which enhances the robustness of star identification. In addition, the algorithm is designed to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm effectively accelerates star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. (paper)
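
    A hedged sketch of the distance-based feature idea only: log distances from a navigation star to its neighbors are binned into a fixed-length, rotation-invariant vector and matched against a catalog by nearest neighbor. The bin count, ranges, and matching rule are illustrative and not the published algorithm.

        import numpy as np

        def feature_vector(star_xy, neighbors_xy, n_bins=16, d_max=0.2):
            # Rotation-invariant feature: logarithms of plane distances to neighbor stars,
            # histogrammed so the vector length is fixed regardless of neighbor count.
            d = np.linalg.norm(neighbors_xy - star_xy, axis=1)
            logs = np.log(d[d > 0])
            hist, _ = np.histogram(logs, bins=n_bins, range=(np.log(1e-3), np.log(d_max)))
            return hist / max(hist.sum(), 1)

        def identify(obs_vec, catalog_vecs):
            # Nearest-neighbor match of an observed feature vector against the pre-built catalog.
            dists = np.linalg.norm(catalog_vecs - obs_vec, axis=1)
            return int(np.argmin(dists)), float(dists.min())

    In practice the catalog vectors would be precomputed offline for every navigation star, and a tree or hashing structure would replace the brute-force search to cut the number of comparisons.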

  16. Global Optimization of a Periodic System using a Genetic Algorithm

    Science.gov (United States)

    Stucke, David; Crespi, Vincent

    2001-03-01

    We use a novel application of a genetic algorithm global optimization technique to find the lowest-energy structures of periodic systems. We apply this technique to colloidal crystals for several different stoichiometries of binary and ternary colloidal crystals. This application of a genetic algorithm is described and results for likely candidate structures are presented.

  17. Comparison of imaging modalities and source-localization algorithms in locating the induced activity during deep brain stimulation of the STN.

    Science.gov (United States)

    Mideksa, K G; Singh, A; Hoogenboom, N; Hellriegel, H; Krause, H; Schnitzler, A; Deuschl, G; Raethjen, J; Schmidt, G; Muthuraman, M

    2016-08-01

    One of the most commonly used therapies to treat patients with Parkinson's disease (PD) is deep brain stimulation (DBS) of the subthalamic nucleus (STN). Identifying the optimal target area for the placement of the DBS electrodes has become an area of intensive research. In this study, the first aim is to investigate the capabilities of different source-analysis techniques in detecting deep sources located at the sub-cortical level and to validate them using the a priori information about the location of the source, that is, the STN. Secondly, we aim to investigate whether EEG or MEG is better suited to mapping the DBS-induced brain activity. To do this, simultaneous EEG and MEG measurements were used to record the DBS-induced electromagnetic potentials and fields. The boundary-element method (BEM) was used to solve the forward problem. The position of the DBS electrodes was then estimated using dipole (moving, rotating, and fixed MUSIC) and current-density-reconstruction (CDR) (minimum-norm and sLORETA) approaches. The source-localization results from the dipole approaches demonstrated that the fixed MUSIC algorithm best localizes deep focal sources, whereas the moving dipole detects not only the region of interest but also neighboring regions that are affected by stimulating the STN. The results from the CDR approaches validated the capability of sLORETA in detecting the STN compared to minimum-norm. Moreover, the source-localization results using the EEG modality outperformed those of the MEG by locating the DBS-induced activity in the STN.
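
    As an illustration of the current-density-reconstruction family only, a Tikhonov-regularized minimum-norm estimate for a given lead-field matrix is sketched below. The lead field here is random; in practice it comes from the BEM head model, and sLORETA additionally standardizes the estimate, which is not shown.

        import numpy as np

        def minimum_norm(L, v, lam=0.1):
            # L: lead field (n_sensors x n_sources), v: measured potentials or fields.
            # Regularized minimum-norm estimate J = L^T (L L^T + lam * I)^(-1) v.
            G = L @ L.T
            reg = lam * np.trace(G) / G.shape[0]
            return L.T @ np.linalg.solve(G + reg * np.eye(G.shape[0]), v)

        rng = np.random.default_rng(5)
        L = rng.normal(size=(64, 500))              # 64 sensors, 500 candidate source locations (toy numbers)
        j_true = np.zeros(500); j_true[123] = 1.0   # a single deep source
        v = L @ j_true + 0.01 * rng.normal(size=64)
        j_hat = minimum_norm(L, v)
        print("estimated peak source index:", int(np.argmax(np.abs(j_hat))))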

  18. Evaluation of glioblastomas and lymphomas with whole-brain CT perfusion: Comparison between a delay-invariant singular-value decomposition algorithm and a Patlak plot.

    Science.gov (United States)

    Hiwatashi, Akio; Togao, Osamu; Yamashita, Koji; Kikuchi, Kazufumi; Yoshimoto, Koji; Mizoguchi, Masahiro; Suzuki, Satoshi O; Yoshiura, Takashi; Honda, Hiroshi

    2016-07-01

    Correction of contrast leakage is recommended when enhancing lesions are evaluated with perfusion analysis. The purpose of this study was to assess the diagnostic performance of computed tomography perfusion (CTP) with a delay-invariant singular-value decomposition algorithm (SVD+) and a Patlak plot in differentiating glioblastomas from lymphomas. This prospective study included 17 adult patients (12 men and 5 women) with pathologically proven glioblastomas (n=10) and lymphomas (n=7). CTP data were analyzed using SVD+ and a Patlak plot. The relative tumor blood volume and flow compared to contralateral normal-appearing gray matter (rCBV and rCBF derived from SVD+, and rBV and rFlow derived from the Patlak plot) were used to differentiate between glioblastomas and lymphomas. The Mann-Whitney U test and receiver operating characteristic (ROC) analyses were used for statistical analysis. Glioblastomas showed significantly higher rFlow (3.05±0.49, mean±standard deviation) than lymphomas (1.56±0.53), whereas no significant differences were found for rCBF (1.38±0.41 vs. 1.29±0.47; P>0.05) or rCBV (1.78±0.47 vs. 1.87±0.66; P>0.05). ROC analysis showed the best diagnostic performance with rFlow (Az=0.871), followed by rBV (Az=0.771), rCBF (Az=0.614), and rCBV (Az=0.529). CTP analysis with a Patlak plot was helpful in differentiating between glioblastomas and lymphomas, but CTP analysis with SVD+ was not. Copyright © 2016 Elsevier Masson SAS. All rights reserved.
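
    The Patlak analysis reduces to a linear fit: the tissue-to-arterial concentration ratio is regressed against the time integral of the arterial input normalized by the arterial concentration, the slope giving a flow/leakage-related constant and the intercept a blood-volume-related one. A minimal sketch with synthetic curves (all numbers are illustrative):

        import numpy as np

        def patlak_fit(t, c_art, c_tis):
            # Patlak model: c_tis(t)/c_art(t) = K * integral(c_art dt)/c_art(t) + V0.
            integ = np.concatenate([[0.0], np.cumsum(0.5 * (c_art[1:] + c_art[:-1]) * np.diff(t))])
            mask = c_art > 1e-6                       # avoid dividing by ~zero before contrast arrival
            x = integ[mask] / c_art[mask]
            y = c_tis[mask] / c_art[mask]
            K, V0 = np.polyfit(x, y, 1)               # slope, intercept
            return K, V0

        t = np.linspace(0.0, 60.0, 121)
        c_art = np.exp(-((t - 15.0) / 6.0) ** 2)      # toy arterial input function
        c_tis = 0.05 * c_art + 0.002 * np.concatenate([[0.0], np.cumsum(0.5 * (c_art[1:] + c_art[:-1]) * np.diff(t))])
        K, V0 = patlak_fit(t, c_art, c_tis)           # recovers K ~ 0.002, V0 ~ 0.05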

  19. Comparison of Planning Quality and Efficiency Between Conventional and Knowledge-based Algorithms in Nasopharyngeal Cancer Patients Using Intensity Modulated Radiation Therapy.

    Science.gov (United States)

    Chang, Amy T Y; Hung, Albert W M; Cheung, Fion W K; Lee, Michael C H; Chan, Oscar S H; Philips, Helen; Cheng, Yung-Tang; Ng, Wai-Tong

    2016-07-01

    Intensity modulated radiation therapy (IMRT) is widely used to achieve a highly conformal dose and improve treatment outcome. However, plan quality and planning time are institute and planner dependent, and no standardized tool exists to recognize an optimal plan. RapidPlan, a knowledge-based algorithm, can generate constraints to assist optimization and produce high-quality IMRT plans. This report evaluated the quality and efficiency of using RapidPlan in nasopharyngeal carcinoma (NPC) IMRT planning. RapidPlan was configured using 79 radical IMRT plans for NPC; 20 consecutive NPC patients indicated for radical radiation therapy between October 2014 and May 2015 were then recruited to assess its performance. The ability of RapidPlan to produce acceptable plans was evaluated. For plans that could not achieve clinical acceptance, manual touch-up was performed. The IMRT plans produced without RapidPlan (manual plans) and with RapidPlan (RP-2 plans, including those with manual touch-up) were compared in terms of dosimetric quality and planning efficiency. RapidPlan by itself could produce clinically acceptable plans for 9 of the 20 patients; manual touch-up increased the number of acceptable plans (RP-2 plans) to 19. Target dose coverage and conformity were very similar. No difference was found in the maximum dose to the brainstem and optic chiasm. RP-2 plans delivered a higher maximum dose to the spinal cord (46.4 Gy vs 43.9 Gy, P=.002) but a lower mean dose to the parotid glands (right, 37.3 Gy vs 45.4 Gy; left, 34.4 Gy vs 43.1 Gy). Overall, RapidPlan helped produce high-quality IMRT plans for NPC patients. Copyright © 2016 Elsevier Inc. All rights reserved.

  20. Vascular diameter measurement in CT angiography: comparison of model-based iterative reconstruction and standard filtered back projection algorithms in vitro.

    Science.gov (United States)

    Suzuki, Shigeru; Machida, Haruhiko; Tanaka, Isao; Ueno, Eiko

    2013-03-01

    The purpose of this study was to evaluate the performance of model-based iterative reconstruction (MBIR) in measurement of the inner diameter of models of blood vessels and to compare performance between MBIR and a standard filtered back projection (FBP) algorithm. Vascular models with wall thicknesses of 0.5, 1.0, and 1.5 mm were scanned with a 64-MDCT unit using densities of contrast material yielding 275, 396, and 542 HU. Images were reconstructed with MBIR and FBP, and the mean diameter of each model vessel was measured by software automation. Twenty separate measurements were repeated for each vessel, and variance among the repeated measures was analyzed to determine measurement error. For all nine model vessels, CT attenuation profiles were compared along a line passing through the luminal center on axial images reconstructed with FBP and MBIR, and the 10-90% edge rise distances at the boundary between the vascular wall and the lumen were evaluated. For images reconstructed with FBP, measurement errors were smallest for models with 1.5-mm wall thickness, except those filled with 275-HU contrast material, and errors grew as the density of the contrast material decreased. Measurement errors with MBIR were comparable to or less than those with FBP. In CT attenuation profiles of images reconstructed with MBIR, the 10-90% edge rise distances at the boundary between the lumen and vascular wall were relatively short for each vascular model compared with those of the profile curves of FBP images. MBIR is better than standard FBP for reducing reconstruction blur and improving the accuracy of diameter measurement at CT angiography.
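
    The 10-90% edge rise distance used above can be computed directly from a sampled attenuation profile. A small sketch, assuming the profile is monotonically increasing across the wall-lumen boundary (the toy profile below is synthetic):

        import numpy as np

        def edge_rise_distance(x, profile):
            # Distance between the positions where the profile crosses 10% and 90% of the step
            # between its low and high plateaus, with linear interpolation between samples.
            lo, hi = profile.min(), profile.max()
            p10, p90 = lo + 0.1 * (hi - lo), lo + 0.9 * (hi - lo)
            x10 = np.interp(p10, profile, x)   # valid because the profile is monotonically increasing
            x90 = np.interp(p90, profile, x)
            return abs(x90 - x10)

        x = np.linspace(-2.0, 2.0, 401)                           # position along the profile (mm)
        profile = 100.0 + 300.0 * 0.5 * (1.0 + np.tanh(x / 0.4))  # a blurred wall-to-lumen edge (HU)
        print("10-90% edge rise distance (mm):", round(edge_rise_distance(x, profile), 2))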

  1. A comparison of CA125, HE4, risk ovarian malignancy algorithm (ROMA), and risk malignancy index (RMI) for the classification of ovarian masses

    Directory of Open Access Journals (Sweden)

    Cristina Anton

    2012-01-01

    Full Text Available OBJECTIVE: Differentiation between benign and malignant ovarian neoplasms is essential for creating a system for patient referrals. Therefore, the contributions of the tumor markers CA125 and human epididymis protein 4 (HE4) as well as the risk ovarian malignancy algorithm (ROMA) and risk malignancy index (RMI) values were considered individually and in combination to evaluate their utility for establishing this type of patient referral system. METHODS: Patients who had been diagnosed with ovarian masses through imaging analyses (n = 128) were assessed for their expression of the tumor markers CA125 and HE4. The ROMA and RMI values were also determined. The sensitivity and specificity of each parameter were calculated using receiver operating characteristic curves according to the area under the curve (AUC) for each method. RESULTS: The sensitivities associated with the ability of CA125, HE4, ROMA, or RMI to distinguish between malignant versus benign ovarian masses were 70.4%, 79.6%, 74.1%, and 63%, respectively. Among carcinomas, the sensitivities of CA125, HE4, ROMA (pre- and post-menopausal), and RMI were 93.5%, 87.1%, 80%, 95.2%, and 87.1%, respectively. The most accurate numerical values were obtained with RMI, although the four parameters were shown to be statistically equivalent. CONCLUSION: There were no differences in accuracy between CA125, HE4, ROMA, and RMI for differentiating between types of ovarian masses. RMI had the lowest sensitivity but was the most numerically accurate method. HE4 demonstrated the best overall sensitivity for the evaluation of malignant ovarian tumors and the differential diagnosis of endometriosis. All of the parameters demonstrated increased sensitivity when tumors with low malignancy potential were considered low-risk, which may be used as an acceptable assessment method for referring patients to reference centers.
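
    The ROC comparison behind these results can be reproduced for any scalar marker with a rank-based AUC estimate. A short sketch (the marker values and labels below are made up):

        import numpy as np

        def roc_auc(scores, labels):
            # Area under the ROC curve via the rank-sum identity:
            # AUC = P(score of a random malignant case > score of a random benign case), ties counted as 1/2.
            scores, labels = np.asarray(scores, float), np.asarray(labels, int)
            pos, neg = scores[labels == 1], scores[labels == 0]
            greater = (pos[:, None] > neg[None, :]).sum()
            ties = (pos[:, None] == neg[None, :]).sum()
            return (greater + 0.5 * ties) / (len(pos) * len(neg))

        # Toy example: marker values for benign (0) and malignant (1) masses.
        print(roc_auc([35, 60, 80, 120, 400, 900], [0, 0, 0, 1, 1, 1]))   # -> 1.0 for perfect separation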

  2. Application of Viterbi’s Algorithm for Predicting Rainfall Occurrence and Simulating Wet/Dry Spells – Comparison with Common Methods

    Directory of Open Access Journals (Sweden)

    M. Ghamghami

    2015-06-01

    Full Text Available Today, there are various statistical models for the discrete simulation of rainfall occurrence/non-occurrence, with more emphasis on long-term climatic statistics. Nevertheless, the accuracy of such models or predictions should be improved on short timescales. In the present paper, it is assumed that the rainfall occurrence/non-occurrence sequences follow a two-layer Hidden Markov Model (HMM) consisting of a hidden layer (the discrete time series of rainfall occurrence and non-occurrence) and an observable layer (weather variables), considered as a case study for Khoramabad station during the period 1961-2005. The Viterbi decoding algorithm has been used for simulation of wet/dry sequences. Five weather variables were evaluated as candidate observable variables, including air pressure, vapor pressure, diurnal air temperature, relative humidity, and dew point temperature, using several error measures to choose the best one. Results showed that diurnal air temperature is the best observable variable for the decoding of wet/dry sequences, reflecting the strong physical relationship between these variables. The Viterbi output was also compared with the ClimGen and LARS-WG weather generators in terms of two accuracy measures, the similarity of climatic statistics and forecasting skill. Finally, it is concluded that the HMM has more skill than the other two weather generators in simulating wet and dry spells. Therefore, we recommend the use of the HMM instead of the two other approaches for generation of wet and dry sequences.
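
    Viterbi decoding of the two-state wet/dry chain is a short dynamic program. The sketch below assumes discretized observations (for example, binned daily air temperature) and made-up initial, transition, and emission probabilities.

        import numpy as np

        def viterbi(obs, pi, A, B):
            # Most likely hidden state sequence; pi: initial probabilities (m,),
            # A: transition matrix (m, m), B: emission probabilities (m, n_symbols).
            n, m = len(obs), len(pi)
            logd = np.log(pi) + np.log(B[:, obs[0]])
            back = np.zeros((n, m), dtype=int)
            for t in range(1, n):
                trans = logd[:, None] + np.log(A)          # score of moving from each previous to each current state
                back[t] = np.argmax(trans, axis=0)
                logd = trans[back[t], np.arange(m)] + np.log(B[:, obs[t]])
            path = [int(np.argmax(logd))]
            for t in range(n - 1, 0, -1):                  # backtrack from the best final state
                path.append(int(back[t, path[-1]]))
            return path[::-1]                              # e.g. 0 = dry, 1 = wet (assumed coding)

        pi = np.array([0.7, 0.3])                          # hypothetical parameters
        A = np.array([[0.8, 0.2], [0.4, 0.6]])
        B = np.array([[0.1, 0.3, 0.6], [0.5, 0.3, 0.2]])
        print(viterbi([2, 2, 0, 1, 0], pi, A, B))          # observations = temperature bins 0, 1, 2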

  3. Comparison of two algorithmic data processing strategies for metabolic fingerprinting by comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry.

    Science.gov (United States)

    Almstetter, Martin F; Appel, Inka J; Dettmer, Katja; Gruber, Michael A; Oefner, Peter J

    2011-09-28

    The alignment algorithm Statistical Compare (SC) developed by LECO Corporation for the processing of comprehensive two-dimensional gas chromatography-time-of-flight mass spectrometry (GC×GC-TOFMS) data was validated and compared to the in-house developed retention time correction and data alignment tool INCA (Integrative Normalization and Comparative Analysis) by a spike-in experiment and the comparative metabolic fingerprinting of a wild type versus a double mutant strain of Escherichia coli (E. coli). Starting with the same peak lists generated by LECO's ChromaTOF software, the accuracy of peak alignment and detection of 1.1- to 4-fold changes in metabolite concentration was assessed by spiking 20 standard compounds into an aqueous methanol extract of E. coli. To provide the same quality input signals for both alignment routines, the universal m/z 73 trace of the trimethylsilyl (TMS) group was used as a quantitative measure for all features. The performance of data processing and alignment was evaluated and illustrated by ROC curves. Statistical Compare performed marginally better at the lower fold changes, while INCA did so at the higher fold changes. Using SC, quantitative precision could be improved substantially by exploiting the signal intensities of metabolite-specific unique (U) m/z ion traces rather than the universal m/z 73 trace. A list of 56 features that distinguished the two E. coli strains was obtained by the SC alignment using m/z U with an estimated false discovery rate (FDR) of <0.05. Ultimately, 23 metabolites could be identified, one additional and five less than with INCA due to the failure of SC to extract unitized m/z U's across all fingerprints with suitable spectral intensities for the latter metabolites. Copyright © 2011 Elsevier B.V. All rights reserved.
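
    False discovery rate control of the kind reported here is commonly done with the Benjamini-Hochberg step-up procedure; a short sketch follows (the p-values are placeholders, and the record's exact FDR estimation method may differ).

        import numpy as np

        def benjamini_hochberg(pvals, alpha=0.05):
            # Step-up procedure: find the largest k with p_(k) <= k/m * alpha and
            # declare the k smallest p-values significant; returns a boolean mask.
            p = np.asarray(pvals, dtype=float)
            order = np.argsort(p)
            adjusted = p[order] * len(p) / np.arange(1, len(p) + 1)
            passed = adjusted <= alpha
            mask = np.zeros(len(p), dtype=bool)
            if passed.any():
                mask[order[:np.max(np.where(passed)[0]) + 1]] = True
            return mask

        # Toy per-feature p-values from comparing the two E. coli strains.
        print(benjamini_hochberg([0.001, 0.01, 0.03, 0.2, 0.8]).sum(), "features significant at FDR 0.05")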

  4. Experimental validation of plant peroxisomal targeting prediction algorithms by systematic comparison of in vivo import efficiency and in vitro PTS1 binding affinity.

    Science.gov (United States)

    Skoulding, Nicola S; Chowdhary, Gopal; Deus, Mara J; Baker, Alison; Reumann, Sigrun; Warriner, Stuart L

    2015-03-13

    Most peroxisomal matrix proteins possess a C-terminal targeting signal type 1 (PTS1). Accurate prediction of functional PTS1 sequences and their relative strength by computational methods is essential for determination of peroxisomal proteomes in silico but has proved challenging due to high levels of sequence variability of non-canonical targeting signals, particularly in higher plants, and low levels of availability of experimentally validated non-canonical examples. In this study, in silico predictions were compared with in vivo targeting analyses and in vitro thermodynamic binding of mutated variants within the context of one model targeting sequence. There was broad agreement between the methods for entire PTS1 domains and position-specific single amino acid residues, including residues upstream of the PTS1 tripeptide. The hierarchy Leu>Met>Ile>Val at the C-terminal position was determined for all methods but both experimental approaches suggest that Tyr is underweighted in the prediction algorithm due to the absence of this residue in the positive training dataset. A combination of methods better defines the score range that discriminates a functional PTS1. In vitro binding to the PEX5 receptor could discriminate among strong targeting signals while in vivo targeting assays were more sensitive, allowing detection of weak functional import signals that were below the limit of detection in the binding assay. Together, the data provide a comprehensive assessment of the factors driving PTS1 efficacy and provide a framework for the more quantitative assessment of the protein import pathway in higher plants. Copyright © 2014 Elsevier Ltd. All rights reserved.
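
    Predictors of this type typically score the C-terminal tripeptide (and upstream residues) with position-specific weights. The sketch below is purely illustrative: the weights and threshold are invented, although the C-terminal ordering Leu>Met>Ile>Val reported in the study is mirrored.

        # Hypothetical position-specific weights for the PTS1 tripeptide (positions -3, -2, -1);
        # real predictors are trained on curated data and also score upstream residues.
        weights = {
            -3: {"S": 2.0, "A": 1.5, "C": 1.0},
            -2: {"K": 2.0, "R": 1.8, "H": 1.0},
            -1: {"L": 2.5, "M": 1.8, "I": 1.2, "V": 0.8, "Y": 1.0},
        }

        def pts1_score(protein_seq, threshold=3.0):
            # Sum position-specific weights over the last three residues; unlisted residues score 0.
            tail = protein_seq[-3:]
            score = sum(weights[pos].get(res, 0.0) for pos, res in zip((-3, -2, -1), tail))
            return score, score >= threshold

        print(pts1_score("MASTTSSGSARSSKL"))   # a classic -SKL tripeptide scores as a predicted PTS1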

  5. Alternative dark matter candidates. Axions

    International Nuclear Information System (INIS)

    Ringwald, Andreas

    2017-01-01

    The axion is arguably one of the best motivated candidates for dark matter. For a decay constant ≳ 10^9 GeV, axions are dominantly produced non-thermally in the early universe and hence are "cold", their velocity dispersion being small enough to fit to large-scale structure. Moreover, such a large decay constant ensures the stability at cosmological time scales and its behaviour as a collisionless fluid at cosmological length scales. Here, we review the state of the art of axion dark matter predictions and of experimental efforts to search for axion dark matter in laboratory experiments.

  6. Alternative dark matter candidates. Axions

    Energy Technology Data Exchange (ETDEWEB)

    Ringwald, Andreas

    2017-01-15

    The axion is arguably one of the best motivated candidates for dark matter. For a decay constant ≳ 10{sup 9} GeV, axions are dominantly produced non-thermally in the early universe and hence are "cold", their velocity dispersion being small enough to fit to large-scale structure. Moreover, such a large decay constant ensures the stability at cosmological time scales and its behaviour as a collisionless fluid at cosmological length scales. Here, we review the state of the art of axion dark matter predictions and of experimental efforts to search for axion dark matter in laboratory experiments.

  7. Comparison of Matrix Frequency-Doubling Technology (FDT) Perimetry with the Swedish Interactive Thresholding Algorithm (SITA) Standard Automated Perimetry (SAP) in Mild Glaucoma.

    Science.gov (United States)

    Doozandeh, Azadeh; Irandoost, Farnoosh; Mirzajani, Ali; Yazdani, Shahin; Pakravan, Mohammad; Esfandiari, Hamed

    2017-01-01

    This study aimed to compare second-generation frequency-doubling technology (FDT) perimetry with standard automated perimetry (SAP) in mild glaucoma. Forty-seven eyes of 47 participants who had mild visual field defect by SAP w