WorldWideScience

Sample records for candid comparison algorithm

  1. Experience with CANDID: Comparison algorithm for navigating digital image databases

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.; Cannon, M.

    1994-10-01

    This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate content-based image retrieval using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.

  2. PEACE: pulsar evaluation algorithm for candidate extraction - a software package for post-analysis processing of pulsar survey candidates

    Science.gov (United States)

    Lee, K. J.; Stovall, K.; Jenet, F. A.; Martinez, J.; Dartez, L. P.; Mata, A.; Lunsford, G.; Cohen, S.; Biwer, C. M.; Rohr, M.; Flanigan, J.; Walker, A.; Banaszak, S.; Allen, B.; Barr, E. D.; Bhat, N. D. R.; Bogdanov, S.; Brazier, A.; Camilo, F.; Champion, D. J.; Chatterjee, S.; Cordes, J.; Crawford, F.; Deneva, J.; Desvignes, G.; Ferdman, R. D.; Freire, P.; Hessels, J. W. T.; Karuppusamy, R.; Kaspi, V. M.; Knispel, B.; Kramer, M.; Lazarus, P.; Lynch, R.; Lyne, A.; McLaughlin, M.; Ransom, S.; Scholz, P.; Siemens, X.; Spitler, L.; Stairs, I.; Tan, M.; van Leeuwen, J.; Zhu, W. W.

    2013-07-01

    Modern radio pulsar surveys produce a large volume of prospective candidates, the majority of which are polluted by human-created radio frequency interference or other forms of noise. Typically, large numbers of candidates need to be visually inspected in order to determine if they are real pulsars. This process can be labour intensive. In this paper, we introduce an algorithm called Pulsar Evaluation Algorithm for Candidate Extraction (PEACE) which improves the efficiency of identifying pulsar signals. The algorithm ranks the candidates based on a score function. Unlike popular machine-learning-based algorithms, no prior training data sets are required. This algorithm has been applied to data from several large-scale radio pulsar surveys. Using the human-based ranking results generated by students in the Arecibo Remote Command Center programme, the statistical performance of PEACE was evaluated. It was found that PEACE ranked 68 per cent of the student-identified pulsars within the top 0.17 per cent of sorted candidates, 95 per cent within the top 0.34 per cent and 100 per cent within the top 3.7 per cent. This clearly demonstrates that PEACE significantly increases the pulsar identification rate by a factor of about 50 to 1000. To date, PEACE has been directly responsible for the discovery of 47 new pulsars, 5 of which are millisecond pulsars that may be useful for pulsar timing based gravitational-wave detection projects.
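
    The record above describes PEACE's key idea: rank candidates by a score function rather than by a trained classifier. The exact features and weights of PEACE's score are defined in the paper; the sketch below only illustrates the general pattern with hypothetical feature names and weights.

```python
# Illustrative sketch only: PEACE's actual score function combines specific candidate
# quality features defined in the paper; the feature names and weights below are
# hypothetical placeholders showing the general "rank by score, no training data" idea.
from dataclasses import dataclass, field

@dataclass
class Candidate:
    name: str
    features: dict = field(default_factory=dict)  # e.g. S/N, pulse width, DM behaviour

def score(cand, weights):
    """Weighted sum of quality features; higher means more pulsar-like."""
    return sum(weights[k] * cand.features.get(k, 0.0) for k in weights)

def rank_candidates(candidates, weights):
    """Sort candidates by descending score so humans inspect the best ones first."""
    return sorted(candidates, key=lambda c: score(c, weights), reverse=True)

if __name__ == "__main__":
    weights = {"snr": 1.0, "persistence": 0.5, "dm_peak": 0.8}   # hypothetical weights
    cands = [
        Candidate("cand_A", {"snr": 12.0, "persistence": 0.9, "dm_peak": 0.7}),
        Candidate("cand_B", {"snr": 6.0,  "persistence": 0.2, "dm_peak": 0.1}),
    ]
    for c in rank_candidates(cands, weights):
        print(c.name, round(score(c, weights), 2))
```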

  3. Application of a kernel-based online learning algorithm to the classification of nodule candidates in computer-aided detection of CT lung nodules

    International Nuclear Information System (INIS)

    Matsumoto, S.; Ohno, Y.; Takenaka, D.; Sugimura, K.; Yamagata, H.

    2007-01-01

    Classification of the nodule candidates in computer-aided detection (CAD) of lung nodules in CT images was addressed by constructing a nonlinear discriminant function using a kernel-based learning algorithm called the kernel recursive least-squares (KRLS) algorithm. Using the nodule candidates derived from the processing by a CAD scheme of 100 CT datasets containing 253 non-calcified nodules of 3 mm or larger as determined by the consensus of two thoracic radiologists, the following trial was carried out 100 times: by randomly selecting 50 datasets for training, a nonlinear discriminant function was obtained using the nodule candidates in the training datasets and tested with the remaining candidates; for comparison, a rule-based classification was tested in a similar manner. At about five false positives per case, the nonlinear classification method showed an improved sensitivity of 80% (mean over the 100 trials) compared with 74% for the rule-based method. (orig.)
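
    As a rough illustration of the kernel-based approach described above, the following sketch implements a naive online kernel regularized least-squares classifier (no sparsification step, unlike the KRLS algorithm used in the paper); the Gaussian kernel width and regularization are placeholder values.

```python
import numpy as np

def rbf(x, y, gamma=0.5):
    """Gaussian (RBF) kernel."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

class OnlineKernelRLS:
    """Naive online kernel regularized least squares.

    Keeps every sample and updates the inverse of (K + lam*I) with a block
    matrix inverse when a new sample arrives.  Simplified sketch of the
    kernel recursive least-squares idea (no sparsification).
    """
    def __init__(self, lam=1e-2, gamma=0.5):
        self.lam, self.gamma = lam, gamma
        self.X, self.y = [], []
        self.Ainv = None          # inverse of (K + lam*I)
        self.alpha = None         # dual coefficients

    def update(self, x, target):
        x = np.asarray(x, dtype=float)
        if not self.X:
            self.X, self.y = [x], [target]
            self.Ainv = np.array([[1.0 / (rbf(x, x, self.gamma) + self.lam)]])
        else:
            k = np.array([rbf(xi, x, self.gamma) for xi in self.X])
            b = self.Ainv @ k
            gam = rbf(x, x, self.gamma) + self.lam - k @ b   # Schur complement
            n = len(self.X)
            Ainv = np.empty((n + 1, n + 1))
            Ainv[:n, :n] = self.Ainv + np.outer(b, b) / gam
            Ainv[:n, n] = -b / gam
            Ainv[n, :n] = -b / gam
            Ainv[n, n] = 1.0 / gam
            self.Ainv = Ainv
            self.X.append(x)
            self.y.append(target)
        self.alpha = self.Ainv @ np.array(self.y, dtype=float)

    def decision(self, x):
        """Discriminant value; threshold at 0 to separate nodule from non-nodule."""
        x = np.asarray(x, dtype=float)
        k = np.array([rbf(xi, x, self.gamma) for xi in self.X])
        return float(k @ self.alpha)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    model = OnlineKernelRLS()
    for _ in range(200):                      # toy 2D feature vectors, labels +1/-1
        f = rng.normal(size=2)
        model.update(f, 1.0 if f.sum() > 0 else -1.0)
    print(model.decision(np.array([1.0, 1.0])), model.decision(np.array([-1.0, -1.0])))
```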

  4. Upper-Lower Bounds Candidate Sets Searching Algorithm for Bayesian Network Structure Learning

    Directory of Open Access Journals (Sweden)

    Guangyi Liu

    2014-01-01

    Bayesian networks are an important theoretical model in artificial intelligence and a powerful tool for handling uncertainty. Considering the slow convergence of current Bayesian network structure learning algorithms, a fast hybrid learning method is proposed in this paper. We start with a further analysis of the information provided by low-order conditional independence tests, and then give two methods for constructing graph models of the network that are theoretically proved to be upper and lower bounds of the structure space of the target network, yielding candidate sets as a result; a search-and-score algorithm is then run on the candidate sets to find the final structure of the network. Simulation results show that the proposed algorithm is more efficient than similar algorithms with the same learning precision.

  5. Clustering and Candidate Motif Detection in Exosomal miRNAs by Application of Machine Learning Algorithms.

    Science.gov (United States)

    Gaur, Pallavi; Chaturvedi, Anoop

    2017-07-22

    Clustering patterns and motifs give immense information about any biological data. This paper depicts an application of machine learning algorithms for clustering and candidate motif detection in miRNAs derived from exosomes. Recent progress in exosome research, and more particularly in exosomal miRNAs, has given rise to much bioinformatics-based research. Information on clustering patterns and candidate motifs in miRNAs of exosomal origin helps in analyzing existing as well as newly discovered miRNAs within exosomes. Along with obtaining clustering patterns and candidate motifs in exosomal miRNAs, this work also elaborates on the usefulness of machine learning algorithms that can be efficiently used and executed on various programming languages/platforms. Data were clustered and candidate sequence motifs were detected successfully. The results were compared and validated with available web tools such as BLASTN and the MEME suite. The machine learning algorithms for the aforementioned objectives were applied successfully. This work demonstrates the utility of machine learning algorithms and language platforms for clustering and candidate motif detection in exosomal miRNAs. With this information, deeper insight can be gained in analyses of newly discovered miRNAs in exosomes, which are considered to be circulating biomarkers. In addition, executing machine learning algorithms on various language platforms gives users more flexibility to try multiple iterations according to their requirements. This approach can be applied to other biological data-mining tasks as well.

  6. CANDID: Comparison algorithm for navigating digital image databases

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.M.; Cannon, T.M.

    1994-02-21

    In this paper, we propose a method for calculating the similarity between two digital images. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized distance between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to an example target image. The algorithm is applied to the problem of search and retrieval in a database containing pulmonary CT imagery, and experimental results are provided.
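
    A minimal sketch of the signature idea described above, assuming grey-level histograms as the "global signature" and a simple normalized Euclidean distance; CANDID's actual signatures are built from richer texture, shape, and color features and compared as probability density functions.

```python
import numpy as np

def global_signature(image, bins=64):
    """Toy 'global signature': a normalized grey-level histogram of the image."""
    hist, _ = np.histogram(image.ravel(), bins=bins, range=(0, 255))
    hist = hist.astype(float)
    return hist / hist.sum()

def normalized_distance(sig_a, sig_b):
    """Normalized Euclidean distance between two signatures (0 = identical)."""
    return np.linalg.norm(sig_a - sig_b) / (np.linalg.norm(sig_a) + np.linalg.norm(sig_b))

def retrieve(query_image, database_images, top_k=5):
    """Return indices of the database images most similar to the query image."""
    q = global_signature(query_image)
    dists = [normalized_distance(q, global_signature(img)) for img in database_images]
    return np.argsort(dists)[:top_k]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    db = [rng.integers(0, 256, size=(64, 64)) for _ in range(20)]
    print(retrieve(db[3], db, top_k=3))   # the query's own image should rank first
```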

  7. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  9. Efficient RNA structure comparison algorithms.

    Science.gov (United States)

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    The recently proposed relative-addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm based on binary search over this suffix array has been proposed. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of multiple given RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduce a new problem for comparing multiple RNA structures with a stricter similarity definition and objective, and we propose an algorithm that solves this problem efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in the compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another for automatically drawing the entire RNA structure from a given structure sequence.
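
    The suffix-array-plus-binary-search idea mentioned above can be illustrated generically: the sketch below builds a suffix array over a structure string and binary-searches it for a substructure pattern. The paper's relative-addressing representation and its matching rules are not reproduced here.

```python
def build_suffix_array(text):
    """Indices of all suffixes of `text`, in lexicographic order (toy O(n^2 log n) build)."""
    return sorted(range(len(text)), key=lambda i: text[i:])

def find_occurrences(text, sa, pattern):
    """Binary search the suffix array for all suffixes that start with `pattern`."""
    m = len(pattern)
    lo, hi = 0, len(sa)                 # lower bound: first suffix >= pattern
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] < pattern:
            lo = mid + 1
        else:
            hi = mid
    start = lo
    hi = len(sa)                        # upper bound: first prefix > pattern
    while lo < hi:
        mid = (lo + hi) // 2
        if text[sa[mid]:sa[mid] + m] <= pattern:
            lo = mid + 1
        else:
            hi = mid
    return sorted(sa[i] for i in range(start, lo))

if __name__ == "__main__":
    # A dot-bracket style string standing in for an encoded RNA structure entry.
    structure = "((..))..((...))"
    sa = build_suffix_array(structure)
    print(find_occurrences(structure, sa, "(("))   # positions where "((" occurs
```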

  10. Tradeoffs Between Branch Mispredictions and Comparisons for Sorting Algorithms

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Moruz, Gabriel

    2005-01-01

    Branch mispredictions are an important factor affecting running time in practice. In this paper we consider tradeoffs between the number of branch mispredictions and the number of comparisons for sorting algorithms in the comparison model. We prove that a sorting algorithm using O(dn log n) comparisons performs Omega(n log_d n) branch mispredictions. We show that Multiway MergeSort achieves this tradeoff by adopting a multiway merger with a low number of branch mispredictions. For adaptive sorting algorithms we similarly obtain that an algorithm performing O(dn(1+log(1+Inv/n))) comparisons must perform Omega(n log_d(1+Inv/n)) branch mispredictions, where Inv is the number of inversions in the input. This tradeoff can be achieved by GenericSort by Estivill-Castro and Wood by adopting a multiway division protocol and a multiway merging algorithm with a low number of branch mispredictions.
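
    A minimal sketch of the multiway merging idea referred to above (merging d sorted runs in one pass with a heap); note that the paper's merger is engineered specifically to keep branch mispredictions low, which this illustration does not attempt to model.

```python
import heapq

def multiway_merge(runs):
    """Merge d sorted runs in a single pass using a binary heap (d-way merge)."""
    heap = [(run[0], i, 0) for i, run in enumerate(runs) if run]
    heapq.heapify(heap)
    out = []
    while heap:
        value, run_idx, pos = heapq.heappop(heap)
        out.append(value)
        if pos + 1 < len(runs[run_idx]):
            heapq.heappush(heap, (runs[run_idx][pos + 1], run_idx, pos + 1))
    return out

def multiway_mergesort(data, d=4):
    """Start from single-element runs and repeatedly d-way merge them."""
    runs = [[x] for x in data]
    while len(runs) > 1:
        runs = [multiway_merge(runs[i:i + d]) for i in range(0, len(runs), d)]
    return runs[0] if runs else []

if __name__ == "__main__":
    print(multiway_mergesort([5, 3, 8, 1, 9, 2, 7, 4, 6], d=3))
```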

  11. Trust Based Algorithm for Candidate Node Selection in Hybrid MANET-DTN

    Directory of Open Access Journals (Sweden)

    Jan Papaj

    2014-01-01

    The hybrid MANET-DTN is a mobile network that enables transport of data between groups of disconnected mobile nodes. The network combines the benefits of Mobile Ad-Hoc Networks (MANET) and Delay Tolerant Networks (DTN). The main problem in a MANET occurs when the communication path is broken or disconnected for some short time period. On the other hand, DTN allows sending data in a disconnected environment thanks to its higher tolerance to delay. The hybrid MANET-DTN provides an optimal solution for transporting information in emergency situations. Moreover, security is a critical factor because the data are transported by mobile devices. In this paper, we investigate the issue of secure candidate node selection for transporting data in a disconnected environment in the hybrid MANET-DTN. To achieve secure selection of reliable mobile nodes, a trust algorithm is introduced. The algorithm selects reliable nodes based on collected routing information. This algorithm is implemented in the OPNET Modeler simulator.

  12. Query by image example: The CANDID approach

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, P.M.; Cannon, M. [Los Alamos National Lab., NM (United States). Computer Research and Applications Group; Hush, D.R. [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Electrical and Computer Engineering

    1995-02-01

    CANDID (Comparison Algorithm for Navigating Digital Image Databases) was developed to enable content-based retrieval of digital imagery from large databases using a query-by-example methodology. A user provides an example image to the system, and images in the database that are similar to that example are retrieved. The development of CANDID was inspired by the N-gram approach to document fingerprinting, where a "global signature" is computed for every document in a database and these signatures are compared to one another to determine the similarity between any two documents. CANDID computes a global signature for every image in a database, where the signature is derived from various image features such as localized texture, shape, or color information. A distance between probability density functions of feature vectors is then used to compare signatures. In this paper, the authors present CANDID and highlight two results from their current research: subtracting a "background" signature from every signature in a database in an attempt to improve system performance when using inner-product similarity measures, and visualizing the contribution of individual pixels in the matching process. These ideas are applicable to any histogram-based comparison technique.

  13. Comparison of turbulence mitigation algorithms

    Science.gov (United States)

    Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric

    2017-07-01

    When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.

  14. Reranking candidate gene models with cross-species comparison for improved gene prediction

    Directory of Open Access Journals (Sweden)

    Pereira Fernando CN

    2008-10-01

    Background: Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results: We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion: Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models.

  15. Comparison of greedy algorithms for α-decision tree construction

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2011-01-01

    A comparison among different heuristics used by greedy algorithms that construct approximate decision trees (α-decision trees) is presented. The comparison is conducted using decision tables based on 24 data sets from the UCI Machine Learning Repository [2]. The complexity of decision trees is estimated relative to several cost functions: depth, average depth, number of nodes, number of nonterminal nodes, and number of terminal nodes. Costs of trees built by greedy algorithms are compared with minimum costs calculated by an algorithm based on dynamic programming. The results of the experiments assign to each cost function a set of potentially good heuristics that minimize it. © 2011 Springer-Verlag.

  16. Comparison of genetic algorithms with conjugate gradient methods

    Science.gov (United States)

    Bosworth, J. L.; Foo, N. Y.; Zeigler, B. P.

    1972-01-01

    Genetic algorithms for mathematical function optimization are modeled on search strategies employed in natural adaptation. Comparisons of genetic algorithms with conjugate gradient methods, which were made on an IBM 1800 digital computer, show that genetic algorithms display superior performance over gradient methods for functions which are poorly behaved mathematically, for multimodal functions, and for functions obscured by additive random noise. Genetic methods offer performance comparable to gradient methods for many of the standard functions.
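
    A tiny real-coded genetic algorithm of the kind compared above, maximizing a noisy multimodal test function on which gradient methods tend to stall; the operators and parameters are generic choices, not those of the original study.

```python
import math
import random

def fitness(x):
    """Multimodal, noisy test function (maximized); gradient methods struggle here."""
    return -sum(xi * xi - 10.0 * math.cos(2.0 * math.pi * xi) for xi in x) \
           + random.gauss(0.0, 0.1)

def genetic_algorithm(dim=2, pop_size=40, generations=200,
                      lo=-5.12, hi=5.12, p_mut=0.1):
    pop = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[:pop_size // 2]                  # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, dim) if dim > 1 else 0
            child = a[:cut] + b[cut:]                      # one-point crossover
            child = [min(hi, max(lo, g + random.gauss(0, 0.3)))   # Gaussian mutation
                     if random.random() < p_mut else g for g in child]
            children.append(child)
        pop = children
    return max(pop, key=fitness)

if __name__ == "__main__":
    random.seed(1)
    best = genetic_algorithm()
    print("best solution found:", [round(g, 3) for g in best])
```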

  17. Sorting on STAR. [CDC computer algorithm timing comparison

    Science.gov (United States)

    Stone, H. S.

    1978-01-01

    Timing comparisons are given for three sorting algorithms written for the CDC STAR computer. One algorithm is Hoare's (1962) Quicksort, which is the fastest or nearly the fastest sorting algorithm for most computers. A second algorithm is a vector version of Quicksort that takes advantage of the STAR's vector operations. The third algorithm is an adaptation of Batcher's (1968) sorting algorithm, which makes especially good use of vector operations but has a complexity of N(log N)-squared as compared with a complexity of N log N for the Quicksort algorithms. In spite of its worse complexity, Batcher's sorting algorithm is competitive with the serial version of Quicksort for vectors up to the largest that can be treated by STAR. Vector Quicksort outperforms the other two algorithms and is generally preferred. These results indicate that unusual instruction sets can introduce biases in program execution time that counter results predicted by worst-case asymptotic complexity analysis.

  18. A Modified Image Comparison Algorithm Using Histogram Features

    OpenAIRE

    Al-Oraiqat, Anas M.; Kostyukova, Natalya S.

    2018-01-01

    This article discusses the problem of color image content comparison. In particular, methods of image content comparison are analyzed, the limitations of the color histogram are described, and a modified method of image content comparison is proposed. This method uses color histograms and also considers color locations. Testing and analysis of the base and modified algorithms are performed. The modified method shows 97% average precision for a collection containing about 700 images without loss of the adv...

  19. A Comparison of Evolutionary Algorithms for Tracking Time-Varying Recursive Systems

    Directory of Open Access Journals (Sweden)

    White Michael S

    2003-01-01

    A comparison is made of the behaviour of some evolutionary algorithms in time-varying adaptive recursive filter systems. Simulations show that an algorithm including random immigrants outperforms a more conventional algorithm using the breeder genetic algorithm as the mutation operator when the time variation is discontinuous, but neither algorithm performs well when the time variation is rapid but smooth. To address this deficit, a new hybrid algorithm is introduced which uses a hill climber as an additional genetic operator, applied for several steps at each generation. A comparison is made of the effect of applying the hill-climbing operator a few times to all members of the population or a larger number of times solely to the best individual; applying it to the whole population yields the better results, substantially improved compared with those obtained using earlier methods.

  20. A genetic algorithm for the optimization of fiber angles in composite laminates

    International Nuclear Information System (INIS)

    Hwang, Shun Fa; Hsu, Ya Chu; Chen, Yuder

    2014-01-01

    A genetic algorithm for the optimization of composite laminates is proposed in this work. The well-known roulette selection criterion, one-point crossover operator, and uniform mutation operator are used in this genetic algorithm to create the next population. To improve the hill-climbing capability of the algorithm, adaptive mechanisms designed to adjust the probabilities of the crossover and mutation operators are included, and the elite strategy is enforced to ensure the quality of the optimum solution. The proposed algorithm includes a new operator called the elite comparison, which compares and uses the differences in the design variables of the two best solutions to find possible combinations. This genetic algorithm is tested in four optimization problems of composite laminates. Specifically, the effect of the elite comparison operator is evaluated. Results indicate that the elite comparison operator significantly accelerates the convergence of the algorithm, which thus becomes a good candidate for the optimization of composite laminates.

  1. Dynamic programming algorithms for biological sequence comparison.

    Science.gov (United States)

    Pearson, W R; Miller, W

    1992-01-01

    Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N^2)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N^2) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
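
    A minimal sketch of the O(N^2)-time, O(N)-space dynamic programming alignment discussed above, using the linear gap penalty g = rk; the scoring values are placeholders.

```python
def global_alignment_score(a, b, match=1, mismatch=-1, r=2):
    """Needleman-Wunsch style DP with linear gap penalty g = r*k.
    O(len(a)*len(b)) time; only two rows are kept, so O(len(b)) memory."""
    prev = [-r * j for j in range(len(b) + 1)]        # align prefix of b against gaps
    for i in range(1, len(a) + 1):
        curr = [-r * i] + [0] * len(b)
        for j in range(1, len(b) + 1):
            sub = match if a[i - 1] == b[j - 1] else mismatch
            curr[j] = max(prev[j - 1] + sub,           # substitution / match
                          prev[j] - r,                 # gap in b
                          curr[j - 1] - r)             # gap in a
        prev = curr
    return prev[len(b)]

if __name__ == "__main__":
    print(global_alignment_score("GATTACA", "GCATGCU"))
```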

  2. Does a Least-Preferred Candidate Win a Seat? A Comparison of Three Electoral Systems

    Directory of Open Access Journals (Sweden)

    Yoichi Hizen

    2015-01-01

    In this paper, the differences between two variations of proportional representation (PR), open-list PR and closed-list PR, are analyzed in terms of their ability to accurately reflect voter preference. The single nontransferable vote (SNTV) is also included in the comparison as a benchmark. We construct a model of voting equilibria with a candidate who is least preferred by voters in the sense that replacing the least-preferred candidate in the set of winners with any loser is Pareto improving, and our focus is on whether the least-preferred candidate wins under each electoral system. We demonstrate that the least-preferred candidate never wins under the SNTV, but can win under open-list PR, although this is less likely than winning under closed-list PR.

  3. Comparison of public peak detection algorithms for MALDI mass spectrometry data analysis.

    Science.gov (United States)

    Yang, Chao; He, Zengyou; Yu, Weichuan

    2009-01-06

    In mass spectrometry (MS) based proteomic data analysis, peak detection is an essential step for subsequent analysis. Recently, there has been significant progress in the development of various peak detection algorithms. However, neither a comprehensive survey nor an experimental comparison of these algorithms is yet available. The main objective of this paper is to provide such a survey and to compare the performance of single-spectrum-based peak detection methods. In general, a peak detection procedure can be decomposed into three consecutive steps: smoothing, baseline correction and peak finding. We first categorize existing peak detection algorithms according to the techniques used in the different phases. Such a categorization reveals the differences and similarities among existing peak detection algorithms. Then, we choose five typical peak detection algorithms to conduct a comprehensive experimental study using both simulated data and real MALDI MS data. The results of the comparison show that the continuous wavelet-based algorithm provides the best average performance.
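
    The three-stage decomposition described above (smoothing, baseline correction, peak finding) can be sketched on a synthetic spectrum with SciPy; this is a generic illustration, not one of the five algorithms benchmarked in the paper.

```python
import numpy as np
from scipy.signal import savgol_filter, find_peaks

def detect_peaks(mz, intensity):
    """Three-stage peak detection: smoothing -> baseline correction -> peak finding."""
    # 1) smoothing: Savitzky-Golay filter
    smooth = savgol_filter(intensity, window_length=21, polyorder=3)
    # 2) baseline correction: subtract a crude rolling-minimum baseline
    half = 50
    baseline = np.array([smooth[max(0, i - half): i + half].min()
                         for i in range(len(smooth))])
    corrected = smooth - baseline
    # 3) peak finding: local maxima above a noise-derived prominence threshold
    noise = np.std(corrected[:200])
    peaks, _ = find_peaks(corrected, prominence=5 * noise)
    return mz[peaks], corrected[peaks]

if __name__ == "__main__":
    mz = np.linspace(1000, 2000, 5000)
    signal = sum(a * np.exp(-0.5 * ((mz - c) / 0.8) ** 2)
                 for a, c in [(80, 1200), (50, 1500), (30, 1750)])
    intensity = signal + 0.05 * mz / 100 + np.random.default_rng(0).normal(0, 1, mz.size)
    print(detect_peaks(mz, intensity))
```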

  4. A study and implementation of algorithm for automatic ECT result comparison

    International Nuclear Information System (INIS)

    Jang, You Hyun; Nam, Min Woo; Kim, In Chul; Joo, Kyung Mun; Kim, Jong Seog

    2012-01-01

    An automatic ECT result comparison algorithm was developed and implemented in a computer language to remove human error from the manual comparison of large amounts of data. The structures of the two ECT programs (Eddy net and ECT IDS), each with its own unique file format, were analyzed so that their files could be opened and their data loaded into PC memory. The comparison algorithm was defined graphically for easy conversion into a PC programming language. The automatic result program was written in C, which is suitable for future code management, supports an object-oriented program structure, and allows fast development. The program can export results to MS Excel files, which is useful for additional analysis with external software, and provides intuitive, user-friendly result visualization with color mapping that helps efficient analysis.

  5. A study and implementation of algorithm for automatic ECT result comparison

    Energy Technology Data Exchange (ETDEWEB)

    Jang, You Hyun; Nam, Min Woo; Kim, In Chul; Joo, Kyung Mun; Kim, Jong Seog [Central Research Institute, Daejeon (Korea, Republic of)

    2012-10-15

    An automatic ECT result comparison algorithm was developed and implemented in a computer language to remove human error from the manual comparison of large amounts of data. The structures of the two ECT programs (Eddy net and ECT IDS), each with its own unique file format, were analyzed so that their files could be opened and their data loaded into PC memory. The comparison algorithm was defined graphically for easy conversion into a PC programming language. The automatic result program was written in C, which is suitable for future code management, supports an object-oriented program structure, and allows fast development. The program can export results to MS Excel files, which is useful for additional analysis with external software, and provides intuitive, user-friendly result visualization with color mapping that helps efficient analysis.

  6. Line Balancing Using Largest Candidate Rule Algorithm In A Garment Industry: A Case Study

    Directory of Open Access Journals (Sweden)

    V. P.Jaganathan

    2014-12-01

    The emergence of fast changes in fashion has given rise to the need to shorten production cycle times in the garment industry. As effective usage of resources has a significant effect on the productivity and efficiency of production operations, garment manufacturers are urged to utilize their resources effectively in order to meet dynamic customer demand. This paper focuses specifically on line balancing and layout modification. The aim of assembly line balancing in sewing lines is to assign tasks to workstations so that the machines of each workstation can perform the assigned tasks with a balanced loading. The Largest Candidate Rule (LCR) algorithm has been deployed in this paper.
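
    A simplified sketch of the Largest Candidate Rule: tasks are listed in descending order of task time, and each workstation is filled with the first feasible tasks (precedence satisfied, fits within the cycle time). Real sewing-line balancing adds operator and machine constraints not modelled here, and the task data below are placeholders.

```python
def largest_candidate_rule(task_times, precedence, cycle_time):
    """Assign tasks to workstations with the Largest Candidate Rule.

    task_times: {task: time}, precedence: {task: set of tasks that must come first},
    cycle_time: maximum total task time allowed per workstation.
    """
    ordered = sorted(task_times, key=task_times.get, reverse=True)
    assigned, stations = set(), []
    while len(assigned) < len(task_times):
        remaining, station = cycle_time, []
        progress = True
        while progress:
            progress = False
            for t in ordered:
                if (t not in assigned and task_times[t] <= remaining
                        and precedence.get(t, set()) <= assigned):
                    station.append(t)
                    assigned.add(t)
                    remaining -= task_times[t]
                    progress = True
                    break
        if not station:
            raise ValueError("a task exceeds the cycle time or precedence is cyclic")
        stations.append(station)
    return stations

if __name__ == "__main__":
    times = {"A": 0.5, "B": 0.8, "C": 0.3, "D": 0.6, "E": 0.4}   # minutes (hypothetical)
    prec = {"C": {"A"}, "D": {"B"}, "E": {"C", "D"}}
    print(largest_candidate_rule(times, prec, cycle_time=1.0))
```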

  7. Quantitative Imaging Biomarkers: A Review of Statistical Methods for Computer Algorithm Comparisons

    Science.gov (United States)

    2014-01-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. PMID:24919829

  8. Quantitative imaging biomarkers: a review of statistical methods for computer algorithm comparisons.

    Science.gov (United States)

    Obuchowski, Nancy A; Reeves, Anthony P; Huang, Erich P; Wang, Xiao-Feng; Buckler, Andrew J; Kim, Hyun J Grace; Barnhart, Huiman X; Jackson, Edward F; Giger, Maryellen L; Pennello, Gene; Toledano, Alicia Y; Kalpathy-Cramer, Jayashree; Apanasovich, Tatiyana V; Kinahan, Paul E; Myers, Kyle J; Goldgof, Dmitry B; Barboriak, Daniel P; Gillies, Robert J; Schwartz, Lawrence H; Sullivan, Daniel C

    2015-02-01

    Quantitative biomarkers from medical images are becoming important tools for clinical diagnosis, staging, monitoring, treatment planning, and development of new therapies. While there is a rich history of the development of quantitative imaging biomarker (QIB) techniques, little attention has been paid to the validation and comparison of the computer algorithms that implement the QIB measurements. In this paper we provide a framework for QIB algorithm comparisons. We first review and compare various study designs, including designs with the true value (e.g. phantoms, digital reference images, and zero-change studies), designs with a reference standard (e.g. studies testing equivalence with a reference standard), and designs without a reference standard (e.g. agreement studies and studies of algorithm precision). The statistical methods for comparing QIB algorithms are then presented for various study types using both aggregate and disaggregate approaches. We propose a series of steps for establishing the performance of a QIB algorithm, identify limitations in the current statistical literature, and suggest future directions for research. © The Author(s) 2014 Reprints and permissions: sagepub.co.uk/journalsPermissions.nav.

  9. Performance Comparison of Widely-Used Maximum Power Point Tracker Algorithms under Real Environmental Conditions

    Directory of Open Access Journals (Sweden)

    DURUSU, A.

    2014-08-01

    Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they make the solar panels operate at the maximum power point (MPP) whatever the changes in environmental conditions are. For this reason, they take an important place in increasing PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of MPPT algorithms have been proposed in the literature. Comparisons of MPPT algorithms in the literature are usually made with a sun-simulator-based test system under laboratory conditions for short durations. In this study, however, the performances of the four most commonly used MPPT algorithms are compared under real environmental conditions over longer periods. A dual identical experimental setup is designed to compare two of the considered MPPT algorithms synchronously. As a result of this study, the ranking among these algorithms is presented, and the results show that the Incremental Conductance (IC) algorithm gives the best performance.
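
    A generic sketch of the Incremental Conductance rule mentioned above: at the maximum power point dP/dV = 0, equivalently dI/dV = -I/V, so the operating voltage is stepped toward that condition. The toy PV model and step size are placeholders, not the hardware setup of the study.

```python
def incremental_conductance_step(v, i, v_prev, i_prev, v_step=0.5):
    """One update of the Incremental Conductance MPPT rule.

    Returns the new voltage reference (generic textbook form of the rule).
    """
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return v                    # no change: already at the MPP
        return v + v_step if di > 0 else v - v_step
    if di / dv == -i / v:
        return v                        # dP/dV == 0: at the MPP
    if di / dv > -i / v:                # dP/dV > 0: MPP lies at a higher voltage
        return v + v_step
    return v - v_step                   # dP/dV < 0: MPP lies at a lower voltage

if __name__ == "__main__":
    # Toy PV current-voltage model for demonstration only.
    def pv_current(v):
        return max(0.0, 8.0 * (1.0 - (v / 36.0) ** 7))

    v_prev, v = 20.0, 20.5
    for _ in range(60):
        i_prev, i = pv_current(v_prev), pv_current(v)
        v_new = incremental_conductance_step(v, i, v_prev, i_prev)
        v_prev, v = v, v_new
    print("operating point near the MPP:", round(v, 1), "V")
```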

  10. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Directory of Open Access Journals (Sweden)

    E. Dall'Asta

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques is one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM) algorithm, which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.

  11. A comparison of semiglobal and local dense matching algorithms for surface reconstruction

    Science.gov (United States)

    Dall'Asta, E.; Roncella, R.

    2014-06-01

    Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques is one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM), which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.

  12. Many-Objective Distinct Candidates Optimization using Differential Evolution

    DEFF Research Database (Denmark)

    Justesen, Peter; Ursem, Rasmus Kjær

    2010-01-01

    ... for each objective. The Many-Objective Distinct Candidates Optimization using Differential Evolution (MODCODE) algorithm takes a novel approach by focusing search using a user-defined number of subpopulations, each returning a distinct optimal solution within the preferred region of interest. In this paper, we present the novel MODCODE algorithm incorporating the ROD measure to measure and control candidate distinctiveness. MODCODE is tested against GDE3 on three real-world centrifugal pump design problems supplied by Grundfos. Our algorithm outperforms GDE3 on all problems with respect to all...

  13. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    Science.gov (United States)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. The two candidate optimisation methods assessed were an evolutionary algorithm, the genetic algorithm (GA), and a deterministic algorithm, the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine, in comparison to the more computationally demanding GA routine, to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, and at only a quarter of the computational resources used by the lowest specified GA algorithm. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add an additional five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest

  14. Comparison of evolutionary algorithms in gene regulatory network model inference.

    LENUS (Irish Health Repository)

    2010-01-01

    Background: The evolution of high throughput technologies that measure gene expression levels has created a data base for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods of discovering qualitative causal relationships between genes with high accuracy from microarray data exist, but large scale quantitative analysis on real biological datasets cannot be performed, to date, as existing approaches are not suitable for real microarray data which are noisy and insufficient. Results: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and offer a comprehensive comparison of approaches, under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. Conclusions: Presented is a comparison framework for assessment of evolutionary algorithms, used to infer gene regulatory networks. Promising methods are identified and a platform for development of appropriate model formalisms is established.

  15. Disease candidate gene identification and prioritization using protein interaction networks

    Directory of Open Access Journals (Sweden)

    Aronow Bruce J

    2009-02-01

    Background: Although most of the current disease candidate gene identification and prioritization methods depend on functional annotations, the coverage of gene functional annotations is a limiting factor. In the current study, we describe a candidate gene prioritization method that is entirely based on protein-protein interaction network (PPIN) analyses. Results: For the first time, extended versions of the PageRank and HITS algorithms and the K-Step Markov method are applied to prioritize disease candidate genes in a training-test schema. Using a list of known disease-related genes from our earlier study as a training set ("seeds"), and the rest of the known genes as a test list, we perform large-scale cross validation to rank the candidate genes and also to evaluate and compare the performance of our approach. Under appropriate settings - for example, a back probability of 0.3 for PageRank with Priors and HITS with Priors, and step size 6 for the K-Step Markov method - the three methods achieved comparable AUC values, suggesting similar performance. Conclusion: Even though network-based methods are generally not as effective as integrated functional-annotation-based methods for disease candidate gene prioritization, in a one-to-one comparison, PPIN-based candidate gene prioritization performs better than all other gene features or annotations. Additionally, we demonstrate that methods used for studying both social and Web networks can be successfully used for disease candidate gene prioritization.
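
    The with-priors (personalized) PageRank idea described above can be sketched with networkx on a toy interaction network, using seed genes as the restart distribution; treating the "back probability" as the restart probability (damping factor = 1 - back probability) is an interpretation here, and the gene names are placeholders.

```python
import networkx as nx

def prioritize_candidates(ppi_edges, seed_genes, back_probability=0.3):
    """Rank genes by personalized PageRank on a protein-protein interaction network.

    Seed (known disease) genes form the restart distribution; this is a generic
    sketch, not the authors' implementation.
    """
    g = nx.Graph(ppi_edges)
    personalization = {n: (1.0 if n in seed_genes else 0.0) for n in g.nodes}
    scores = nx.pagerank(g, alpha=1.0 - back_probability,
                         personalization=personalization)
    candidates = [n for n in g.nodes if n not in seed_genes]
    return sorted(candidates, key=scores.get, reverse=True)

if __name__ == "__main__":
    edges = [("GENE_A", "GENE_B"), ("GENE_B", "GENE_C"), ("GENE_C", "GENE_D"),
             ("GENE_A", "GENE_E"), ("GENE_E", "GENE_F")]
    print(prioritize_candidates(edges, seed_genes={"GENE_A"}))
```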

  16. The role of cardiovascular magnetic resonance in candidates for Fontan operation: Proposal of a new Algorithm

    Directory of Open Access Journals (Sweden)

    Ait-Ali Lamia

    2011-11-01

    Background: To propose a new diagnostic algorithm for candidates for Fontan and identify those who can skip cardiac catheterization (CC). Methods: Forty-four candidates for Fontan (median age 4.8 years, range 2-29 years) were prospectively evaluated by trans-thoracic echocardiography (TTE), cardiovascular magnetic resonance (CMR) and CC. Before CC, according to clinical, echo and CMR findings, patients were divided into two groups: Group I comprised 18 patients deemed suitable for Fontan without requiring CC; Group II comprised 26 patients indicated for CC either in order to detect more details, or for interventional procedures. Results: In Group I ("CC not required") no unexpected new information affecting surgical planning was provided by CC. Conversely, in Group II new information was provided by CC in three patients (0 vs 11.5%, p = 0.35) and in six an interventional procedure was performed. During CC, minor complications occurred in one patient from Group I and in three from Group II (6 vs 14%, p = 0.7). Radiation dose-area product was similar in the two groups (median 20 Gy cm2, range 5-40, vs 26.5 Gy cm2, range 9-270, p = 0.37). All 18 Group I patients and 19 Group II patients underwent a total cavo-pulmonary anastomosis; of the remaining seven Group II patients, four were excluded from Fontan, two are awaiting Fontan, and one refused the intervention. Conclusion: In this paper we propose a new diagnostic algorithm in a pre-Fontan setting. An accurate non-invasive evaluation comprising TTE and CMR could select patients who can skip CC.

  17. Comparison of machine learning algorithms for detecting coral reef

    Directory of Open Access Journals (Sweden)

    Eduardo Tusa

    2014-09-01

    This work focuses on developing a fast coral reef detector for an autonomous underwater vehicle (AUV). Fast detection secures AUV stabilization with respect to an area of reef as quickly as possible and prevents devastating collisions. We use the algorithm of Purser et al. (2009) because of its precision. This detector has two parts: feature extraction using Gabor wavelet filters, and feature classification using machine learning based on neural networks. Because of the long running time of the neural networks, we exchange them for a classification algorithm based on decision trees. We use a database of 621 images of coral reef in Belize (110 images for training and 511 images for testing). We implement the bank of Gabor wavelet filters using C++ and the OpenCV library. We compare the accuracy and running time of 9 machine learning algorithms, which led to the selection of the decision tree algorithm. Our coral detector runs in 70 ms, in comparison to the 22 s taken by the algorithm of Purser et al. (2009).
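
    A rough Python sketch of the pipeline described above (the original was implemented in C++ with OpenCV): Gabor wavelet filter responses as features and a decision tree as the classifier. Filter parameters, tree depth and the synthetic data are placeholders.

```python
import numpy as np
import cv2
from sklearn.tree import DecisionTreeClassifier

def gabor_features(image_gray, thetas=(0, np.pi / 4, np.pi / 2, 3 * np.pi / 4)):
    """Mean and std of the responses of a small Gabor wavelet filter bank."""
    feats = []
    for theta in thetas:
        # arguments: ksize, sigma, theta, lambd, gamma, psi
        kernel = cv2.getGaborKernel((21, 21), 4.0, theta, 10.0, 0.5, 0.0)
        response = cv2.filter2D(image_gray.astype(np.float32), cv2.CV_32F, kernel)
        feats.extend([response.mean(), response.std()])
    return feats

def train_coral_classifier(images, labels):
    """Fit a decision tree on Gabor features (labels: 1 = coral, 0 = background)."""
    X = [gabor_features(img) for img in images]
    clf = DecisionTreeClassifier(max_depth=8, random_state=0)
    clf.fit(X, labels)
    return clf

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    imgs = [rng.integers(0, 256, (64, 64), dtype=np.uint8) for _ in range(10)]
    labels = [0, 1] * 5                               # synthetic placeholder labels
    clf = train_coral_classifier(imgs, labels)
    print(clf.predict([gabor_features(imgs[0])]))
```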

  18. Comparison of some evolutionary algorithms for optimization of the path synthesis problem

    Science.gov (United States)

    Grabski, Jakub Krzysztof; Walczak, Tomasz; Buśkiewicz, Jacek; Michałowska, Martyna

    2018-01-01

    The paper presents comparison of the results obtained in a mechanism synthesis by means of some selected evolutionary algorithms. The optimization problem considered in the paper as an example is the dimensional synthesis of the path generating four-bar mechanism. In order to solve this problem, three different artificial intelligence algorithms are employed in this study.

  19. Selection of candidate plus phenotypes of Jatropha curcas L. using method of paired comparisons

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, D.K. [Silviculture Division, Arid Forest Research Institute, P.O. Krishi Mandi, New Pali Road, Jodhpur 342005, Rajasthan (India)

    2009-03-15

    Jatropha curcas L. (Euphorbiaceae) is an oil bearing species with multiple uses and considerable potential as a biodiesel crop. The present communication deals with the method of selecting plus phenotypes of J. curcas for exploiting genetic variability for further improvement. Candidate plus tree selection is the first and most important stage in any tree improvement programme. The selection of candidate plus plants (CPPs) is based upon various important attributes associated with the species and their relative ranking. Relative preference between various traits and scoring for each trait has been worked out by using the method of paired comparisons for the selection of CPP in J. curcas L. The most important ones are seed and oil yields. (author)

  20. Comparison of machine-learning algorithms to build a predictive model for detecting undiagnosed diabetes - ELSA-Brasil: accuracy study.

    Science.gov (United States)

    Olivera, André Rodrigues; Roesler, Valter; Iochpe, Cirano; Schmidt, Maria Inês; Vigo, Álvaro; Barreto, Sandhi Maria; Duncan, Bruce Bartholow

    2017-01-01

    Type 2 diabetes is a chronic disease associated with a wide range of serious health complications that have a major impact on overall health. The aims here were to develop and validate predictive models for detecting undiagnosed diabetes using data from the Longitudinal Study of Adult Health (ELSA-Brasil) and to compare the performance of different machine-learning algorithms in this task. This was a comparison of machine-learning algorithms to develop predictive models using data from ELSA-Brasil. After selecting a subset of 27 candidate variables from the literature, models were built and validated in four sequential steps: (i) parameter tuning with tenfold cross-validation, repeated three times; (ii) automatic variable selection using forward selection, a wrapper strategy with four different machine-learning algorithms and tenfold cross-validation (repeated three times), to evaluate each subset of variables; (iii) error estimation of model parameters with tenfold cross-validation, repeated ten times; and (iv) generalization testing on an independent dataset. The models were created with the following machine-learning algorithms: logistic regression, artificial neural network, naïve Bayes, K-nearest neighbor and random forest. The best models were created using artificial neural networks and logistic regression. These achieved mean areas under the curve of, respectively, 75.24% and 74.98% in the error estimation step and 74.17% and 74.41% in the generalization testing step. Most of the predictive models produced similar results, and demonstrated the feasibility of identifying individuals with the highest probability of having undiagnosed diabetes through easily obtained clinical data.

  1. Searching for an oscillating massive scalar field as a dark matter candidate using atomic hyperfine frequency comparisons

    OpenAIRE

    Hees, A.; Guéna, J.; Abgrall, M.; Bize, S.; Wolf, P.

    2016-01-01

    We use six years of accurate hyperfine frequency comparison data of the dual rubidium and caesium cold atom fountain FO2 at LNE-SYRTE to search for a massive scalar dark matter candidate. Such a scalar field can induce harmonic variations of the fine structure constant, of the mass of fermions and of the quantum chromodynamic mass scale, which will directly impact the rubidium/caesium hyperfine transition frequency ratio. We find no signal consistent with a scalar dark matter candidate but pr...

  2. Comparison analysis for classification algorithm in data mining and the study of model use

    Science.gov (United States)

    Chen, Junde; Zhang, Defu

    2018-04-01

    As a key technique in data mining, classification algorithms have received extensive attention. Through an experiment with classification algorithms on UCI data sets, we give a comparative analysis method for the different algorithms, using statistical tests. In addition, an adaptive diagnosis model for the prevention of electricity stealing and leakage is given as a specific case in the paper.

  3. A comparison of three optimization algorithms for intensity modulated radiation therapy

    International Nuclear Information System (INIS)

    Pflugfelder, D.; Wilkens, J.J.; Nill, S.; Oelfke, U.

    2008-01-01

    In intensity modulated treatment techniques, the modulation of each treatment field is obtained using an optimization algorithm. Multiple optimization algorithms have been proposed in the literature, e.g. steepest descent, conjugate gradient, quasi-Newton methods to name a few. The standard optimization algorithm in our in-house inverse planning tool KonRad is a quasi-Newton algorithm. Although this algorithm yields good results, it also has some drawbacks. Thus we implemented an improved optimization algorithm based on the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) routine. In this paper the improved optimization algorithm is described. To compare the two algorithms, several treatment plans are optimized using both algorithms. This included photon (IMRT) as well as proton (IMPT) intensity modulated therapy treatment plans. To present the results in a larger context the widely used conjugate gradient algorithm was also included into this comparison. On average, the improved optimization algorithm was six times faster to reach the same objective function value. However, it resulted not only in an acceleration of the optimization. Due to the faster convergence, the improved optimization algorithm usually terminates the optimization process at a lower objective function value. The average of the observed improvement in the objective function value was 37%. This improvement is clearly visible in the corresponding dose-volume-histograms. The benefit of the improved optimization algorithm is particularly pronounced in proton therapy plans. The conjugate gradient algorithm ranked in between the other two algorithms with an average speedup factor of two and an average improvement of the objective function value of 30%. (orig.)
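
    As a toy illustration of the quasi-Newton approach discussed above, the sketch below casts inverse planning as a bound-constrained least-squares problem over beamlet weights and solves it with SciPy's limited-memory BFGS (L-BFGS-B); the influence matrix and prescription are made up, and real objective functions include dose-volume and organ-at-risk terms not modelled here.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

# Toy influence matrix: dose in 200 voxels from 50 beamlets (made-up geometry).
D = rng.random((200, 50))
prescribed = np.full(200, 60.0)          # target dose per voxel (arbitrary units)

def objective(weights):
    """Quadratic penalty on deviation from the prescribed dose."""
    diff = D @ weights - prescribed
    return float(diff @ diff)

def gradient(weights):
    return 2.0 * D.T @ (D @ weights - prescribed)

x0 = np.ones(D.shape[1])
result = minimize(objective, x0, jac=gradient, method="L-BFGS-B",
                  bounds=[(0.0, None)] * D.shape[1])   # beamlet weights must be >= 0
print("objective value:", round(result.fun, 2), "| converged:", result.success)
```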

  4. Comparison of genetic algorithm and harmony search for generator maintenance scheduling

    International Nuclear Information System (INIS)

    Khan, L.; Mumtaz, S.; Khattak, A.

    2012-01-01

    GMS (Generator Maintenance Scheduling) ranks very high in the decision making of power generation management. The generator maintenance schedule decides the time period of maintenance tasks, and a reliable reserve margin is also maintained during this time period. In this paper, a comparison of the GA (Genetic Algorithm) and HS (Harmony Search) algorithms is presented to solve the generator maintenance scheduling problem for WAPDA (Water And Power Development Authority) Pakistan. GA is a search procedure used in search problems to compute exact and optimized solutions, and is considered a global search heuristic. The HS algorithm is quite efficient because its convergence rate is very fast. HS is based on the music improvisation process of searching for a perfect state of harmony. The two algorithms generate feasible and optimal solutions and overcome the limitations of conventional methods, including extensive computational effort that increases exponentially as the size of the problem increases. The proposed methods are tested, validated and compared on the WAPDA electric system. (author)
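
    A minimal continuous Harmony Search sketch showing the harmony memory, memory consideration rate (HMCR) and pitch adjustment rate (PAR); the actual generator maintenance scheduling problem is combinatorial and is not reproduced here, and all parameter values are generic defaults.

```python
import random

def harmony_search(objective, dim, lo, hi, hms=20, hmcr=0.9, par=0.3,
                   bandwidth=0.05, iterations=2000):
    """Minimize `objective` over [lo, hi]^dim with a basic Harmony Search."""
    memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [objective(h) for h in memory]
    for _ in range(iterations):
        new = []
        for d in range(dim):
            if random.random() < hmcr:                      # memory consideration
                value = random.choice(memory)[d]
                if random.random() < par:                   # pitch adjustment
                    value += random.uniform(-1, 1) * bandwidth * (hi - lo)
            else:                                           # random selection
                value = random.uniform(lo, hi)
            new.append(min(hi, max(lo, value)))
        score = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if score < scores[worst]:                           # replace the worst harmony
            memory[worst], scores[worst] = new, score
    best = min(range(hms), key=lambda i: scores[i])
    return memory[best], scores[best]

if __name__ == "__main__":
    random.seed(0)
    sphere = lambda x: sum(v * v for v in x)
    print(harmony_search(sphere, dim=5, lo=-10, hi=10))
```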

  5. [Comparison of dose calculation algorithms in stereotactic radiation therapy in lung].

    Science.gov (United States)

    Tomiyama, Yuki; Araki, Fujio; Kanetake, Nagisa; Shimohigashi, Yoshinobu; Tominaga, Hirofumi; Sakata, Jyunichi; Oono, Takeshi; Kouno, Tomohiro; Hioki, Kazunari

    2013-06-01

    Dose calculation algorithms in radiation treatment planning systems (RTPSs) play a crucial role in stereotactic body radiation therapy (SBRT) in the lung with heterogeneous media. This study investigated the performance and accuracy of dose calculation for three algorithms: analytical anisotropic algorithm (AAA), pencil beam convolution (PBC) and Acuros XB (AXB) in Eclipse (Varian Medical Systems), by comparison against the Voxel Monte Carlo algorithm (VMC) in iPlan (BrainLab). The dose calculations were performed with clinical lung treatments under identical planning conditions, and the dose distributions and the dose volume histogram (DVH) were compared among algorithms. AAA underestimated the dose in the planning target volume (PTV) compared to VMC and AXB in most clinical plans. In contrast, PBC overestimated the PTV dose. AXB tended to slightly overestimate the PTV dose compared to VMC but the discrepancy was within 3%. The discrepancy in the PTV dose between VMC and AXB appears to be due to differences in physical material assignments, material voxelization methods, and an energy cut-off for electron interactions. The dose distributions in lung treatments varied significantly according to the calculation accuracy of the algorithms. VMC and AXB are better algorithms than AAA for SBRT.

  6. Identification of new candidate drugs for lung cancer using chemical-chemical interactions, chemical-protein interactions and a K-means clustering algorithm.

    Science.gov (United States)

    Lu, Jing; Chen, Lei; Yin, Jun; Huang, Tao; Bi, Yi; Kong, Xiangyin; Zheng, Mingyue; Cai, Yu-Dong

    2016-01-01

    Lung cancer, characterized by uncontrolled cell growth in the lung tissue, is the leading cause of global cancer deaths. Until now, effective treatment of this disease is limited. Many synthetic compounds have emerged with the advancement of combinatorial chemistry. Identification of effective lung cancer candidate drug compounds among them is a great challenge. Thus, it is necessary to build effective computational methods that can assist us in selecting for potential lung cancer drug compounds. In this study, a computational method was proposed to tackle this problem. The chemical-chemical interactions and chemical-protein interactions were utilized to select candidate drug compounds that have close associations with approved lung cancer drugs and lung cancer-related genes. A permutation test and K-means clustering algorithm were employed to exclude candidate drugs with low possibilities to treat lung cancer. The final analysis suggests that the remaining drug compounds have potential anti-lung cancer activities and most of them have structural dissimilarity with approved drugs for lung cancer.
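
    As an illustration of the clustering step only, the sketch below runs K-means on made-up feature vectors standing in for the chemical-chemical and chemical-protein interaction scores, and discards clusters far from the approved drugs as a crude stand-in for the permutation test described above:

      # Sketch of the clustering step with invented feature vectors; the real study uses
      # interaction-derived scores and a permutation test rather than this distance rule.
      import numpy as np
      from sklearn.cluster import KMeans

      rng = np.random.default_rng(42)
      candidate_features = rng.random((100, 8))   # 100 candidate compounds, 8 features
      approved_features = rng.random((10, 8))     # 10 approved lung-cancer drugs

      kmeans = KMeans(n_clusters=5, n_init=10, random_state=0).fit(candidate_features)

      # Keep only candidates whose cluster centre lies close to the approved drugs.
      centre_to_drug = np.linalg.norm(
          kmeans.cluster_centers_[:, None, :] - approved_features[None, :, :], axis=2
      ).min(axis=1)
      kept_clusters = np.where(centre_to_drug < np.median(centre_to_drug))[0]
      kept_candidates = np.isin(kmeans.labels_, kept_clusters)
      print("candidates retained:", int(kept_candidates.sum()))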

  7. Comparison Spatial Pattern of Land Surface Temperature with Mono Window Algorithm and Split Window Algorithm: A Case Study in South Tangerang, Indonesia

    Science.gov (United States)

    Bunai, Tasya; Rokhmatuloh; Wibowo, Adi

    2018-05-01

    In this paper, two methods to retrieve the Land Surface Temperature (LST) from thermal infrared data supplied by bands 10 and 11 of the Thermal Infrared Sensor (TIRS) onboard Landsat 8 are compared. The first is the mono window algorithm developed by Qin et al. and the second is the split window algorithm by Rozenstein et al. The purpose of this study is to map the spatial distribution of land surface temperature and to determine the more accurate algorithm for retrieving it by calculating the root mean square error (RMSE). Finally, we present a comparison of the spatial distribution of land surface temperature produced by both algorithms; the more accurate algorithm is the split window algorithm, whose RMSE is 7.69 °C.
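
    The retrieval formulas themselves are not reproduced here; the accuracy comparison reduces to a root mean square error against a reference temperature field, as in this sketch with synthetic values:

      # Minimal sketch of the accuracy check: RMSE of each retrieval against a reference
      # temperature field (values are synthetic, not the South Tangerang data).
      import numpy as np

      reference = np.array([30.1, 31.5, 29.8, 32.0, 30.6])    # reference LST, deg C
      lst_mono = np.array([31.0, 33.2, 30.9, 34.1, 31.8])     # mono window retrieval
      lst_split = np.array([30.4, 31.9, 29.5, 32.6, 30.9])    # split window retrieval

      def rmse(estimate, truth):
          return np.sqrt(np.mean((estimate - truth) ** 2))

      print("mono window RMSE :", round(rmse(lst_mono, reference), 2), "deg C")
      print("split window RMSE:", round(rmse(lst_split, reference), 2), "deg C")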

  8. Sensitivity study of voxel-based PET image comparison to image registration algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Yip, Stephen, E-mail: syip@lroc.harvard.edu; Chen, Aileen B.; Berbeco, Ross [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States); Aerts, Hugo J. W. L. [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 and Department of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, Massachusetts 02115 (United States)

    2014-11-01

    Purpose: Accurate deformable registration is essential for voxel-based comparison of sequential positron emission tomography (PET) images for proper adaptation of the treatment plan and treatment response assessment. The comparison may be sensitive to the method of deformable registration as the optimal algorithm is unknown. This study investigated the impact of registration algorithm choice on therapy response evaluation. Methods: Sixteen patients with 20 lung tumors underwent computed tomography (CT) and 4D FDG-PET scans before and after chemoradiotherapy. All CT images were coregistered using one rigid and ten deformable registration algorithms. The resulting transformations were then applied to the respective PET images. Moreover, the tumor region defined by a physician on the registered PET images was classified into progressor, stable-disease, and responder subvolumes. Particularly, voxels with standardized uptake value (SUV) decreases >30% were classified as responder, while voxels with SUV increases >30% were classified as progressor. All other voxels were considered stable-disease. The agreement of the subvolumes resulting from different registration algorithms was assessed by the Dice similarity index (DSI). The coefficient of variation (CV) was computed to assess variability of DSI between individual tumors. The root mean square difference (RMS{sub rigid}) of the rigidly registered CT images was used to measure the degree of tumor deformation. RMS{sub rigid} and DSI were correlated by the Spearman correlation coefficient (R) to investigate the effect of tumor deformation on DSI. Results: Median DSI{sub rigid} was found to be 72%, 66%, and 80%, for progressor, stable-disease, and responder, respectively. Median DSI{sub deformable} was 63%–84%, 65%–81%, and 82%–89%. Variability of DSI was substantial and similar for both rigid and deformable algorithms with CV > 10% for all subvolumes. Tumor deformation had moderate to significant impact on DSI for progressor
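
    The voxel labelling rule and the Dice similarity index can be sketched as follows, with random SUV maps standing in for the registered pre- and post-treatment PET volumes:

      # Sketch of the voxel labelling and Dice comparison described above, using random
      # SUV maps in place of registered pre/post-treatment PET volumes.
      import numpy as np

      rng = np.random.default_rng(1)
      suv_pre = rng.uniform(2.0, 10.0, size=(20, 20, 20))
      suv_post_a = suv_pre * rng.uniform(0.5, 1.5, size=suv_pre.shape)  # registration A
      suv_post_b = suv_pre * rng.uniform(0.5, 1.5, size=suv_pre.shape)  # registration B

      def classify(pre, post):
          change = (post - pre) / pre
          labels = np.full(pre.shape, "stable", dtype=object)
          labels[change > 0.30] = "progressor"       # SUV increase > 30%
          labels[change < -0.30] = "responder"       # SUV decrease > 30%
          return labels

      def dice(mask_a, mask_b):
          return 2 * np.logical_and(mask_a, mask_b).sum() / (mask_a.sum() + mask_b.sum())

      labels_a, labels_b = classify(suv_pre, suv_post_a), classify(suv_pre, suv_post_b)
      for cls in ("progressor", "stable", "responder"):
          print(cls, "DSI:", round(dice(labels_a == cls, labels_b == cls), 2))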

  9. Application of Hybrid Optimization Algorithm in the Synthesis of Linear Antenna Array

    Directory of Open Access Journals (Sweden)

    Ezgi Deniz Ülker

    2014-01-01

    Full Text Available The use of hybrid algorithms for solving real-world optimization problems has become popular since their solution quality can be made better than the algorithms that form them by combining their desirable features. The newly proposed hybrid method which is called Hybrid Differential, Particle, and Harmony (HDPH algorithm is different from the other hybrid forms since it uses all features of merged algorithms in order to perform efficiently for a wide variety of problems. In the proposed algorithm the control parameters are randomized which makes its implementation easy and provides a fast response. This paper describes the application of HDPH algorithm to linear antenna array synthesis. The results obtained with the HDPH algorithm are compared with three merged optimization techniques that are used in HDPH. The comparison shows that the performance of the proposed algorithm is comparatively better in both solution quality and robustness. The proposed hybrid algorithm HDPH can be an efficient candidate for real-time optimization problems since it yields reliable performance at all times when it gets executed.

  10. PEACE: pulsar evaluation algorithm for candidate extraction - a software package for post-analysis processing of pulsar survey candidates

    NARCIS (Netherlands)

    Lee, K.J.; Stovall, K.; Jenet, F.A.; Martinez, J.; Dartez, L.P.; Mata, A.; Lunsford, G.; Cohen, S.; Biwer, C.M.; Rohr, M.; Flanigan, J.; Walker, A.; Banaszak, S.; Allen, B.; Barr, E.D.; Bhat, N.D.R.; Bogdanov, S.; Brazier, A.; Camilo, F.; Champion, D.J.; Chatterjee, S.; Cordes, J.; Crawford, F.; Deneva, J.; Desvignes, G.; Ferdman, R.D.; Freire, P.; Hessels, J.W.T.; Karuppusamy, R.; Kaspi, V.M.; Knispel, B.; Kramer, M.; Lazarus, P.; Lynch, R.; Lyne, A.; McLaughlin, M.; Ransom, S.; Scholz, P.; Siemens, X.; Spitler, L.; Stairs, I.; Tan, M.; van Leeuwen, J.; Zhu, W.W.

    2013-01-01

    Modern radio pulsar surveys produce a large volume of prospective candidates, the majority of which are polluted by human-created radio frequency interference or other forms of noise. Typically, large numbers of candidates need to be visually inspected in order to determine if they are real pulsars.

  11. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.

  12. A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller

    International Nuclear Information System (INIS)

    Tapp, P.A.

    1992-04-01

    A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms are discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs

  13. An evaluation of scanpath-comparison and machine-learning classification algorithms used to study the dynamics of analogy making.

    Science.gov (United States)

    French, Robert M; Glady, Yannick; Thibaut, Jean-Pierre

    2017-08-01

    In recent years, eyetracking has begun to be used to study the dynamics of analogy making. Numerous scanpath-comparison algorithms and machine-learning techniques are available that can be applied to the raw eyetracking data. We show how scanpath-comparison algorithms, combined with multidimensional scaling and a classification algorithm, can be used to resolve an outstanding question in analogy making-namely, whether or not children's and adults' strategies in solving analogy problems are different. (They are.) We show which of these scanpath-comparison algorithms is best suited to the kinds of analogy problems that have formed the basis of much analogy-making research over the years. Furthermore, we use machine-learning classification algorithms to examine the item-to-item saccade vectors making up these scanpaths. We show which of these algorithms best predicts, from very early on in a trial, on the basis of the frequency of various item-to-item saccades, whether a child or an adult is doing the problem. This type of analysis can also be used to predict, on the basis of the item-to-item saccade dynamics in the first third of a trial, whether or not a problem will be solved correctly.

  14. Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison

    Directory of Open Access Journals (Sweden)

    Olympia Roeva

    2005-12-01

    Full Text Available In this paper the problem of parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal, so traditional (gradient-based) local optimization methods fail to arrive at satisfactory solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms have proved to be very suitable for the optimization of highly nonlinear problems with many variables, and their global search capability and robustness make them advantageous for parameter identification of fermentation models. A comparison between simple, modified and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm: the considered algorithms converged to very similar cost values, but the modified algorithm was several times faster than the other two.
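
    The E. coli model and its six parameters are not reproduced here; the sketch below shows a deliberately small genetic algorithm of the same flavour fitting two parameters of a toy exponential model by minimising the sum of squared residuals:

      # A small GA sketch for parameter estimation (not the E. coli model itself): fit
      # a and b of y = a * exp(b * t) to noisy data by minimising squared residuals.
      import numpy as np

      rng = np.random.default_rng(0)
      t = np.linspace(0, 1, 50)
      y_obs = 2.0 * np.exp(1.5 * t) + rng.normal(0, 0.05, t.size)   # "measurements"

      def cost(params):
          a, b = params
          return np.sum((a * np.exp(b * t) - y_obs) ** 2)

      pop = rng.uniform([0.0, 0.0], [5.0, 3.0], size=(40, 2))        # initial population
      for generation in range(100):
          fitness = np.array([cost(ind) for ind in pop])
          parents = pop[np.argsort(fitness)[:20]]                    # truncation selection
          idx = rng.integers(0, 20, size=(40, 2))                    # pick parent pairs
          alpha = rng.random((40, 1))
          pop = alpha * parents[idx[:, 0]] + (1 - alpha) * parents[idx[:, 1]]  # crossover
          pop += rng.normal(0, 0.05, pop.shape)                      # mutation
          pop = np.clip(pop, [0.0, 0.0], [5.0, 3.0])

      best = pop[np.argmin([cost(ind) for ind in pop])]
      print("estimated a, b:", np.round(best, 3))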

  15. EXPERIMENTAL COMPARISON OF HOMODYNE DEMODULATION ALGORITHMS FOR PHASE FIBER-OPTIC SENSOR

    Directory of Open Access Journals (Sweden)

    M. N. Belikin

    2015-11-01

    Full Text Available Subject of Research. The paper presents the results of an experimental comparative analysis of homodyne demodulation algorithms based on the differential cross multiplying method and on the arctangent method under the same conditions. The dependencies of the output signal parameters on the optical radiation intensity are studied for the considered demodulation algorithms. Method. A prototype single fiber-optic phase interferometric sensor has been used for the experimental comparison of the signal demodulation algorithms. Main Results. We have found that homodyne demodulation based on the arctangent method provides a greater (by 7 dB on average) signal-to-noise ratio of the output signals over the acoustic frequency band from 100 Hz to 500 Hz as compared to the differential cross multiplying algorithm. We have demonstrated that no change in the output signal amplitude occurs over the studied range of optical pulse amplitudes. The obtained results indicate that homodyne demodulation based on the arctangent method is the most suitable for application in phase fiber-optic sensors, as it provides higher repeatability of their characteristics than the differential cross multiplying algorithm. Practical Significance. Interferometric signal demodulation algorithms are widely used in phase fiber-optic sensors; improvement of their characteristics has a positive effect on the performance of such sensors.
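
    The carrier generation and low-pass filtering stages are omitted in the sketch below, which only illustrates the arctangent step itself: given quadrature components proportional to cos(phi) and sin(phi), the phase is recovered with atan2 and unwrapping:

      # Sketch of the arctangent step on synthetic quadrature components; the real
      # sensor derives I and Q from the detected interferometric signal.
      import numpy as np

      fs = 10_000                                      # sample rate, Hz
      tt = np.arange(0, 0.1, 1 / fs)
      phi_true = 0.8 * np.sin(2 * np.pi * 300 * tt)    # acoustic phase signal, 300 Hz

      amplitude = 1.0                                  # optical intensity scale
      i_comp = amplitude * np.cos(phi_true)
      q_comp = amplitude * np.sin(phi_true)

      phi_est = np.unwrap(np.arctan2(q_comp, i_comp))
      print("max recovery error [rad]:", float(np.max(np.abs(phi_est - phi_true))))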

  16. COMPARISON OF VOLUMETRIC REGISTRATION ALGORITHMS FOR TENSOR-BASED MORPHOMETRY.

    Science.gov (United States)

    Villalon, Julio; Joshi, Anand A; Toga, Arthur W; Thompson, Paul M

    2011-01-01

    Nonlinear registration of brain MRI scans is often used to quantify morphological differences associated with disease or genetic factors. Recently, surface-guided fully 3D volumetric registrations have been developed that combine intensity-guided volume registrations with cortical surface constraints. In this paper, we compare one such algorithm to two popular high-dimensional volumetric registration methods: large-deformation viscous fluid registration, formulated in a Riemannian framework, and the diffeomorphic "Demons" algorithm. We performed an objective morphometric comparison, by using a large MRI dataset from 340 young adult twin subjects to examine 3D patterns of correlations in anatomical volumes. Surface-constrained volume registration gave greater effect sizes for detecting morphometric associations near the cortex, while the other two approaches gave greater effect sizes subcortically. These findings suggest novel ways to combine the advantages of multiple methods in the future.

  17. Performance comparison of genetic algorithms and particle swarm optimization for model integer programming bus timetabling problem

    Science.gov (United States)

    Wihartiko, F. D.; Wijayanti, H.; Virgantari, F.

    2018-03-01

    The Genetic Algorithm (GA) is a common algorithm used to solve optimization problems with an artificial intelligence approach, as is the Particle Swarm Optimization (PSO) algorithm. Both algorithms have different advantages and disadvantages when applied to the optimization of the Model Integer Programming for the Bus Timetabling Problem (MIPBTP), in which the optimal number of trips must be found subject to various constraints. The comparison results show that the PSO algorithm is superior in terms of complexity, accuracy, iteration count and program simplicity in finding the optimal solution.

  18. Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region

    Directory of Open Access Journals (Sweden)

    Matko Šarić

    2008-06-01

    Full Text Available The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We proposed a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant colored pixels to total number of pixels. With this approach the detection of gradual transitions is improved by decreasing the number of false positives caused by some camera operations. We also compared performances of our algorithm and the standard twin-comparison method.
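
    A minimal sketch of the twin-comparison idea, applied here to an invented one-dimensional sequence of per-frame dominant-colour ratios rather than real colour histograms of decoded video:

      # Twin-comparison sketch: a high threshold flags cuts, a low threshold opens a
      # candidate gradual transition whose accumulated change must reach the high one.
      import numpy as np

      ratios = np.array([0.62, 0.61, 0.63, 0.35, 0.34, 0.33, 0.60, 0.55, 0.50, 0.45, 0.40])
      diffs = np.abs(np.diff(ratios))

      T_HIGH, T_LOW = 0.20, 0.04
      boundaries, start, acc = [], None, 0.0
      for i, d in enumerate(diffs, start=1):
          if d >= T_HIGH:
              boundaries.append(("cut", i))
          elif d >= T_LOW:                       # possible start of a gradual transition
              if start is None:
                  start, acc = i, 0.0
              acc += d
              if acc >= T_HIGH:                  # accumulated change large enough
                  boundaries.append(("gradual", start, i))
                  start, acc = None, 0.0
          else:
              start, acc = None, 0.0             # change died out: discard candidate
      print(boundaries)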

  19. Comparison of the mass preconditioned HMC and the DD-HMC algorithm for two-flavour QCD

    CERN Document Server

    Marinkovic, Marina

    2010-01-01

    Mass preconditioned HMC and DD-HMC are among the most popular algorithms to simulate Wilson fermions. We present a comparison of the performance of the two algorithms for realistic quark masses and lattice sizes. In particular, we use the locally deflated solver of the DD-HMC environment also for the mass preconditioned simulations.

  20. A comparison of graph- and kernel-based -omics data integration algorithms for classifying complex traits.

    Science.gov (United States)

    Yan, Kang K; Zhao, Hongyu; Pang, Herbert

    2017-12-06

    High-throughput sequencing data are widely collected and analyzed in the study of complex diseases in quest of improving human health. Well-studied algorithms mostly deal with single data source, and cannot fully utilize the potential of these multi-omics data sources. In order to provide a holistic understanding of human health and diseases, it is necessary to integrate multiple data sources. Several algorithms have been proposed so far, however, a comprehensive comparison of data integration algorithms for classification of binary traits is currently lacking. In this paper, we focus on two common classes of integration algorithms, graph-based that depict relationships with subjects denoted by nodes and relationships denoted by edges, and kernel-based that can generate a classifier in feature space. Our paper provides a comprehensive comparison of their performance in terms of various measurements of classification accuracy and computation time. Seven different integration algorithms, including graph-based semi-supervised learning, graph sharpening integration, composite association network, Bayesian network, semi-definite programming-support vector machine (SDP-SVM), relevance vector machine (RVM) and Ada-boost relevance vector machine are compared and evaluated with hypertension and two cancer data sets in our study. In general, kernel-based algorithms create more complex models and require longer computation time, but they tend to perform better than graph-based algorithms. The performance of graph-based algorithms has the advantage of being faster computationally. The empirical results demonstrate that composite association network, relevance vector machine, and Ada-boost RVM are the better performers. We provide recommendations on how to choose an appropriate algorithm for integrating data from multiple sources.

  1. Ensemble candidate classification for the LOTAAS pulsar survey

    Science.gov (United States)

    Tan, C. M.; Lyon, R. J.; Stappers, B. W.; Cooper, S.; Hessels, J. W. T.; Kondratiev, V. I.; Michilli, D.; Sanidas, S.

    2018-03-01

    One of the biggest challenges arising from modern large-scale pulsar surveys is the number of candidates generated. Here, we implemented several improvements to the machine learning (ML) classifier previously used by the LOFAR Tied-Array All-Sky Survey (LOTAAS) to look for new pulsars via filtering the candidates obtained during periodicity searches. To assist the ML algorithm, we have introduced new features which capture the frequency and time evolution of the signal and improved the signal-to-noise calculation accounting for broad profiles. We enhanced the ML classifier by including a third class characterizing RFI instances, allowing candidates arising from RFI to be isolated, reducing the false positive return rate. We also introduced a new training data set used by the ML algorithm that includes a large sample of pulsars misclassified by the previous classifier. Lastly, we developed an ensemble classifier comprised of five different Decision Trees. Taken together these updates improve the pulsar recall rate by 2.5 per cent, while also improving the ability to identify pulsars with wide pulse profiles, often misclassified by the previous classifier. The new ensemble classifier is also able to reduce the percentage of false positive candidates identified from each LOTAAS pointing from 2.5 per cent (˜500 candidates) to 1.1 per cent (˜220 candidates).

  2. Library correlation nuclide identification algorithm

    International Nuclear Information System (INIS)

    Russ, William R.

    2007-01-01

    A novel nuclide identification algorithm, Library Correlation Nuclide Identification (LibCorNID), is proposed. In addition to the spectrum, LibCorNID requires the standard energy, peak shape and peak efficiency calibrations. Input parameters include tolerances for some expected variations in the calibrations, a minimum relative nuclide peak area threshold, and a correlation threshold. Initially, the measured peak spectrum is obtained as the residual after baseline estimation via peak erosion, removing the continuum. Library nuclides are filtered by examining the possible nuclide peak areas in terms of the measured peak spectrum and applying the specified relative area threshold. Remaining candidates are used to create a set of theoretical peak spectra based on the calibrations and library entries. These candidate spectra are then simultaneously fit to the measured peak spectrum while also optimizing the calibrations within the bounds of the specified tolerances. Each candidate with optimized area still exceeding the area threshold undergoes a correlation test. The normalized Pearson's correlation value is calculated as a comparison of the optimized nuclide peak spectrum to the measured peak spectrum with the other optimized peak spectra subtracted. Those candidates with correlation values that exceed the specified threshold are identified and their optimized activities are output. An evaluation of LibCorNID was conducted to verify identification performance in terms of detection probability and false alarm rate. LibCorNID has been shown to perform well compared to standard peak-based analyses
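
    A toy sketch of the final correlation test, with synthetic Gaussian peak shapes standing in for the calibrated detector response and only two library candidates:

      # Each optimised candidate peak spectrum is correlated against the measured peak
      # spectrum with the other candidates subtracted (shapes and areas are invented).
      import numpy as np

      channels = np.arange(2048)
      def peak(centre, area, width=3.0):
          return area * np.exp(-0.5 * ((channels - centre) / width) ** 2)

      candidates = {"Cs-137": peak(662, 500), "Co-60": peak(1173, 300) + peak(1332, 250)}
      measured = sum(candidates.values()) + np.random.default_rng(0).normal(0, 5, 2048)

      THRESHOLD = 0.8
      for name, spectrum in candidates.items():
          others = sum(s for n, s in candidates.items() if n != name)
          residual = measured - others
          corr = np.corrcoef(spectrum, residual)[0, 1]   # normalised Pearson correlation
          if corr >= THRESHOLD:
              print(f"identified {name} (r = {corr:.3f})")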

  3. Computerized Dental Comparison: A Critical Review of Dental Coding and Ranking Algorithms Used in Victim Identification.

    Science.gov (United States)

    Adams, Bradley J; Aschheim, Kenneth W

    2016-01-01

    Comparison of antemortem and postmortem dental records is a leading method of victim identification, especially for incidents involving a large number of decedents. This process may be expedited with computer software that provides a ranked list of best possible matches. This study provides a comparison of the most commonly used conventional coding and sorting algorithms used in the United States (WinID3) with a simplified coding format that utilizes an optimized sorting algorithm. The simplified system consists of seven basic codes and utilizes an optimized algorithm based largely on the percentage of matches. To perform this research, a large reference database of approximately 50,000 antemortem and postmortem records was created. For most disaster scenarios, the proposed simplified codes, paired with the optimized algorithm, performed better than WinID3 which uses more complex codes. The detailed coding system does show better performance with extremely large numbers of records and/or significant body fragmentation. © 2015 American Academy of Forensic Sciences.
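
    The actual coding schemes are not reproduced here; the sketch below ranks invented antemortem records against a postmortem record purely by the percentage of matching tooth codes, which is the spirit of the optimized sorting described above:

      # Toy ranking sketch with an invented simplified code set (tooth -> code).
      postmortem = {"1": "V", "2": "F", "3": "M", "4": "V", "5": "X"}

      antemortem_records = {
          "case_A": {"1": "V", "2": "F", "3": "M", "4": "V", "5": "X"},
          "case_B": {"1": "V", "2": "V", "3": "M", "4": "F", "5": "X"},
          "case_C": {"1": "F", "2": "F", "3": "F", "4": "F", "5": "F"},
      }

      def match_percentage(am, pm):
          compared = [tooth for tooth in pm if tooth in am]
          hits = sum(am[tooth] == pm[tooth] for tooth in compared)
          return 100.0 * hits / len(compared) if compared else 0.0

      ranked = sorted(antemortem_records.items(),
                      key=lambda item: match_percentage(item[1], postmortem), reverse=True)
      for name, record in ranked:
          print(name, f"{match_percentage(record, postmortem):.0f}% match")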

  4. Dissecting the organ specificity of insecticide resistance candidate genes in Anopheles gambiae: known and novel candidate genes.

    Science.gov (United States)

    Ingham, Victoria A; Jones, Christopher M; Pignatelli, Patricia; Balabanidou, Vasileia; Vontas, John; Wagstaff, Simon C; Moore, Jonathan D; Ranson, Hilary

    2014-11-25

    The elevated expression of enzymes with insecticide metabolism activity can lead to high levels of insecticide resistance in the malaria vector, Anopheles gambiae. In this study, adult female mosquitoes from an insecticide susceptible and resistant strain were dissected into four different body parts. RNA from each of these samples was used in microarray analysis to determine the enrichment patterns of the key detoxification gene families within the mosquito and to identify additional candidate insecticide resistance genes that may have been overlooked in previous experiments on whole organisms. A general enrichment in the transcription of genes from the four major detoxification gene families (carboxylesterases, glutathione transferases, UDP glucornyltransferases and cytochrome P450s) was observed in the midgut and malpighian tubules. Yet the subset of P450 genes that have previously been implicated in insecticide resistance in An gambiae, show a surprisingly varied profile of tissue enrichment, confirmed by qPCR and, for three candidates, by immunostaining. A stringent selection process was used to define a list of 105 genes that are significantly (p ≤0.001) over expressed in body parts from the resistant versus susceptible strain. Over half of these, including all the cytochrome P450s on this list, were identified in previous whole organism comparisons between the strains, but several new candidates were detected, notably from comparisons of the transcriptomes from dissected abdomen integuments. The use of RNA extracted from the whole organism to identify candidate insecticide resistance genes has a risk of missing candidates if key genes responsible for the phenotype have restricted expression within the body and/or are over expression only in certain tissues. However, as transcription of genes implicated in metabolic resistance to insecticides is not enriched in any one single organ, comparison of the transcriptome of individual dissected body parts cannot

  5. Comparison of Metabolic Pathways in Escherichia coli by Using Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Patricia Ortegon

    2015-01-01

    Full Text Available In order to understand how cellular metabolism has taken its modern form, the conservation and variations between metabolic pathways were evaluated by using a genetic algorithm (GA). The GA approach considered information on the complete metabolism of the bacterium Escherichia coli K-12, as deposited in the KEGG database, and the enzymes belonging to a particular pathway were transformed into enzymatic step sequences by using the breadth-first search algorithm. These sequences represent contiguous enzymes linked to each other, based on their catalytic activities as they are encoded in the Enzyme Commission numbers. In a posterior step, these sequences were compared using a GA in an all-against-all (pairwise comparisons) approach. Individual reactions were chosen based on their measure of fitness to act as parents of offspring, which constitute the new generation. The sequences compared were used to construct a similarity matrix (of fitness values) that was then considered to be clustered by using a k-medoids algorithm. A total of 34 clusters of conserved reactions were obtained, and their sequences were finally aligned with a multiple-sequence alignment GA optimized to align all the reaction sequences included in each group or cluster. From these comparisons, maps associated with the metabolism of similar compounds also contained similar enzymatic step sequences, reinforcing the Patchwork Model for the evolution of metabolism in E. coli K-12, an observation that can be expanded to other organisms, for which there is metabolism information. Finally, our mapping of these reactions is discussed, with illustrations from a particular case.

  6. Performance evaluation of 2D image registration algorithms with the numeric image registration and comparison platform

    International Nuclear Information System (INIS)

    Gerganov, G.; Kuvandjiev, V.; Dimitrova, I.; Mitev, K.; Kawrakow, I.

    2012-01-01

    The objective of this work is to present the capabilities of the NUMERICS web platform for evaluation of the performance of image registration algorithms. The NUMERICS platform is a web accessible tool which provides access to dedicated numerical algorithms for registration and comparison of medical images (http://numerics.phys.uni-sofia.bg). The platform allows comparison of noisy medical images by means of different types of image comparison algorithms, which are based on statistical tests for outliers. The platform also allows 2D image registration with different techniques like Elastic Thin-Plate Spline registration, registration based on rigid transformations, affine transformations, as well as non-rigid image registration based on Mobius transformations. In this work we demonstrate how the platform can be used as a tool for evaluation of the quality of the image registration process. We demonstrate performance evaluation of a deformable image registration technique based on Mobius transformations. The transformations are applied with appropriate cost functions like: Mutual information, Correlation coefficient, Sum of Squared Differences. The accent is on the results provided by the platform to the user and their interpretation in the context of the performance evaluation of 2D image registration. The NUMERICS image registration and image comparison platform provides detailed statistical information about submitted image registration jobs and can be used to perform quantitative evaluation of the performance of different image registration techniques. (authors)

  7. Computational Comparison of Several Greedy Algorithms for the Minimum Cost Perfect Matching Problem on Large Graphs

    DEFF Research Database (Denmark)

    Wøhlk, Sanne; Laporte, Gilbert

    2017-01-01

    The aim of this paper is to computationally compare several algorithms for the Minimum Cost Perfect Matching Problem on an undirected complete graph. Our work is motivated by the need to solve large instances of the Capacitated Arc Routing Problem (CARP) arising in the optimization of garbage collection in Denmark. Common heuristics for the CARP involve the optimal matching of the odd-degree nodes of a graph. The algorithms used in the comparison include the CPLEX solution of an exact formulation, the LEDA matching algorithm, a recent implementation of the Blossom algorithm, as well as six...

  8. A comparison of two dose calculation algorithms-anisotropic analytical algorithm and Acuros XB-for radiation therapy planning of canine intranasal tumors.

    Science.gov (United States)

    Nagata, Koichi; Pethel, Timothy D

    2017-07-01

    Although anisotropic analytical algorithm (AAA) and Acuros XB (AXB) are both radiation dose calculation algorithms that take into account the heterogeneity within the radiation field, Acuros XB is inherently more accurate. The purpose of this retrospective method comparison study was to compare them and evaluate the dose discrepancy within the planning target volume (PTV). Radiation therapy (RT) plans of 11 dogs with intranasal tumors treated by radiation therapy at the University of Georgia were evaluated. All dogs were planned for intensity-modulated radiation therapy using nine coplanar X-ray beams that were equally spaced, then dose calculated with anisotropic analytical algorithm. The same plan with the same monitor units was then recalculated using Acuros XB for comparisons. Each dog's planning target volume was separated into air, bone, and tissue and evaluated. The mean dose to the planning target volume estimated by Acuros XB was 1.3% lower. It was 1.4% higher for air, 3.7% lower for bone, and 0.9% lower for tissue. The volume of planning target volume covered by the prescribed dose decreased by 21% when Acuros XB was used due to increased dose heterogeneity within the planning target volume. Anisotropic analytical algorithm relatively underestimates the dose heterogeneity and relatively overestimates the dose to the bone and tissue within the planning target volume for the radiation therapy planning of canine intranasal tumors. This can be clinically significant especially if the tumor cells are present within the bone, because it may result in relative underdosing of the tumor. © 2017 American College of Veterinary Radiology.

  9. Comparison of four Adaboost algorithm based artificial neural networks in wind speed predictions

    International Nuclear Information System (INIS)

    Liu, Hui; Tian, Hong-qi; Li, Yan-fei; Zhang, Lei

    2015-01-01

    Highlights: • Four hybrid algorithms are proposed for the wind speed decomposition. • Adaboost algorithm is adopted to provide a hybrid training framework. • MLP neural networks are built to do the forecasting computation. • Four important network training algorithms are included in the MLP networks. • All the proposed hybrid algorithms are suitable for the wind speed predictions. - Abstract: The technology of wind speed prediction is important to guarantee the safety of wind power utilization. In this paper, four different hybrid methods are proposed for high-precision multi-step wind speed predictions based on the Adaboost (Adaptive Boosting) algorithm and MLP (Multilayer Perceptron) neural networks. In the hybrid Adaboost–MLP forecasting architecture, four important algorithms are adopted for the training and modeling of the MLP neural networks, including the GD-ALR-BP algorithm, GDM-ALR-BP algorithm, CG-BP-FR algorithm and BFGS algorithm. The aim of the study is to investigate the improvement in forecasting performance of the MLP neural networks achieved by the Adaboost algorithm's optimization under the various training algorithms. The hybrid models in the performance comparison include Adaboost–GD-ALR-BP–MLP, Adaboost–GDM-ALR-BP–MLP, Adaboost–CG-BP-FR–MLP, Adaboost–BFGS–MLP, GD-ALR-BP–MLP, GDM-ALR-BP–MLP, CG-BP-FR–MLP and BFGS–MLP. The results of two experiments show that: (1) the proposed hybrid Adaboost–MLP forecasting architecture is effective for the wind speed predictions; (2) the Adaboost algorithm has promoted the forecasting performance of the MLP neural networks considerably; (3) among the proposed Adaboost–MLP forecasting models, the Adaboost–CG-BP-FR–MLP model has the best performance; and (4) the improvement of the MLP neural networks by the Adaboost algorithm decreases step by step in the following order of training algorithms: GD-ALR-BP, GDM-ALR-BP, CG-BP-FR and BFGS

  10. Computational complexity of algorithms for sequence comparison, short-read assembly and genome alignment.

    Science.gov (United States)

    Baichoo, Shakuntala; Ouzounis, Christos A

    A multitude of algorithms for sequence comparison, short-read assembly and whole-genome alignment have been developed in the general context of molecular biology, to support technology development for high-throughput sequencing, numerous applications in genome biology and fundamental research on comparative genomics. The computational complexity of these algorithms has been previously reported in original research papers, yet this often neglected property has not been reviewed previously in a systematic manner and for a wider audience. We provide a review of space and time complexity of key sequence analysis algorithms and highlight their properties in a comprehensive manner, in order to identify potential opportunities for further research in algorithm or data structure optimization. The complexity aspect is poised to become pivotal as we will be facing challenges related to the continuous increase of genomic data on unprecedented scales and complexity in the foreseeable future, when robust biological simulation at the cell level and above becomes a reality. Copyright © 2017 Elsevier B.V. All rights reserved.
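
    As a worked illustration of the space and time complexity discussed in such reviews, the classic Needleman-Wunsch global alignment fills an (m+1) x (n+1) dynamic-programming table and visits every cell once, hence O(mn) time and space (the scoring values below are arbitrary):

      # Needleman-Wunsch score computation: the nested loops over the table make the
      # quadratic complexity explicit.
      def needleman_wunsch_score(a, b, match=1, mismatch=-1, gap=-2):
          rows, cols = len(a) + 1, len(b) + 1
          score = [[0] * cols for _ in range(rows)]      # (len(a)+1) x (len(b)+1) table
          for i in range(1, rows):
              score[i][0] = i * gap
          for j in range(1, cols):
              score[0][j] = j * gap
          for i in range(1, rows):                       # every cell is visited once
              for j in range(1, cols):
                  diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
                  score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
          return score[-1][-1]

      print(needleman_wunsch_score("GATTACA", "GCATGCT"))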

  11. Searching for an Oscillating Massive Scalar Field as a Dark Matter Candidate Using Atomic Hyperfine Frequency Comparisons.

    Science.gov (United States)

    Hees, A; Guéna, J; Abgrall, M; Bize, S; Wolf, P

    2016-08-05

    We use 6 yrs of accurate hyperfine frequency comparison data of the dual rubidium and caesium cold atom fountain FO2 at LNE-SYRTE to search for a massive scalar dark matter candidate. Such a scalar field can induce harmonic variations of the fine structure constant, of the mass of fermions, and of the quantum chromodynamic mass scale, which will directly impact the rubidium/caesium hyperfine transition frequency ratio. We find no signal consistent with a scalar dark matter candidate but provide improved constraints on the coupling of the putative scalar field to standard matter. Our limits are complementary to previous results that were only sensitive to the fine structure constant and improve them by more than an order of magnitude when only a coupling to electromagnetism is assumed.

  12. An extensive assessment of network alignment algorithms for comparison of brain connectomes.

    Science.gov (United States)

    Milano, Marianna; Guzzi, Pietro Hiram; Tymofieva, Olga; Xu, Duan; Hess, Christofer; Veltri, Pierangelo; Cannataro, Mario

    2017-06-06

    Recently the study of the complex system of connections in neural systems, i.e. the connectome, has gained a central role in neurosciences. The modeling and analysis of connectomes are therefore a growing area. Here we focus on the representation of connectomes by using graph theory formalisms. Macroscopic human brain connectomes are usually derived from neuroimages; the analyzed brains are co-registered in the image domain and brought to a common anatomical space. An atlas is then applied in order to define anatomically meaningful regions that will serve as the nodes of the network - this process is referred to as parcellation. The atlas-based parcellations present some known limitations in cases of early brain development and abnormal anatomy. Consequently, it has been recently proposed to perform atlas-free random brain parcellation into nodes and align brains in the network space instead of the anatomical image space, as a way to deal with the unknown correspondences of the parcels. Such a process requires modeling of the brain using graph theory and the subsequent comparison of the structure of graphs. The latter step may be modeled as a network alignment (NA) problem. In this work, we first define the problem formally, then we test six existing state-of-the-art network aligners on diffusion MRI-derived brain networks. We compare the performance of the algorithms by assessing six topological measures. We also evaluated the robustness of the algorithms to alterations of the dataset. The results confirm that NA algorithms may be applied in cases of atlas-free parcellation for a fully network-driven comparison of connectomes. The analysis shows that MAGNA++ is the best global alignment algorithm. The paper presents a new analysis methodology that uses network alignment for validating atlas-free parcellation brain connectomes. The methodology has been evaluated on several brain datasets.

  13. Genetic algorithm enhanced by machine learning in dynamic aperture optimization

    Science.gov (United States)

    Li, Yongjun; Cheng, Weixing; Yu, Li Hua; Rainer, Robert

    2018-05-01

    With the aid of machine learning techniques, the genetic algorithm has been enhanced and applied to the multi-objective optimization problem presented by the dynamic aperture of the National Synchrotron Light Source II (NSLS-II) Storage Ring. During the evolution processes employed by the genetic algorithm, the population is classified into different clusters in the search space. The clusters with top average fitness are given "elite" status. Intervention on the population is implemented by repopulating some potentially competitive candidates based on the experience learned from the accumulated data. These candidates replace randomly selected candidates among the original data pool. The average fitness of the population is therefore improved while diversity is not lost. Maintaining diversity ensures that the optimization is global rather than local. The quality of the population increases and produces more competitive descendants accelerating the evolution process significantly. When identifying the distribution of optimal candidates, they appear to be located in isolated islands within the search space. Some of these optimal candidates have been experimentally confirmed at the NSLS-II storage ring. The machine learning techniques that exploit the genetic algorithm can also be used in other population-based optimization problems such as particle swarm algorithm.
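
    The dynamic-aperture objective itself is far too expensive to reproduce; the sketch below shows the clustering intervention on a toy two-dimensional objective, with K-means supplying the clusters and perturbed members of the best ("elite") cluster replacing randomly chosen individuals. This is only an interpretation of the strategy described above, not the NSLS-II implementation:

      # Toy GA with a K-means "elite cluster" repopulation step.
      import numpy as np
      from sklearn.cluster import KMeans

      def fitness(x):                      # toy multimodal objective to maximise
          return -np.sum((x - 1.0) ** 2, axis=-1) + np.cos(5 * x).sum(axis=-1)

      rng = np.random.default_rng(0)
      pop = rng.uniform(-3, 3, size=(60, 2))

      for generation in range(20):
          fit = fitness(pop)
          labels = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(pop)
          cluster_fit = [fit[labels == k].mean() if np.any(labels == k) else -np.inf
                         for k in range(5)]
          elite = int(np.argmax(cluster_fit))                 # cluster with top mean fitness
          donors = pop[labels == elite]
          # repopulate: replace some random individuals with perturbed elite members
          replace = rng.choice(len(pop), size=min(10, len(donors)), replace=False)
          pop[replace] = donors[rng.integers(0, len(donors), size=len(replace))] \
              + rng.normal(0, 0.1, size=(len(replace), 2))
          # ordinary GA step: keep the better half and mutate it into a new generation
          survivors = pop[np.argsort(fitness(pop))[-30:]]
          pop = np.vstack([survivors, survivors + rng.normal(0, 0.2, survivors.shape)])

      print("best individual:", pop[np.argmax(fitness(pop))])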

  14. QUASI-STELLAR OBJECT SELECTION ALGORITHM USING TIME VARIABILITY AND MACHINE LEARNING: SELECTION OF 1620 QUASI-STELLAR OBJECT CANDIDATES FROM MACHO LARGE MAGELLANIC CLOUD DATABASE

    International Nuclear Information System (INIS)

    Kim, Dae-Won; Protopapas, Pavlos; Alcock, Charles; Trichas, Markos; Byun, Yong-Ik; Khardon, Roni

    2011-01-01

    We present a new quasi-stellar object (QSO) selection algorithm using a Support Vector Machine, a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars, and microlensing events using 58 known QSOs, 1629 variable stars, and 4288 non-variables in the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ∼80% of known QSOs with a 25% false-positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) data set, which consists of 40 million light curves, and found 1620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false-positive rate, we crossmatched the candidates with astronomical catalogs including the Spitzer Surveying the Agents of a Galaxy's Evolution LMC catalog and a few X-ray catalogs. The results further suggest that the majority of the candidates, more than 70%, are QSOs.
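
    The light-curve feature extraction is not shown; the sketch below illustrates the classification step only, training an RBF support vector machine on a made-up table of (period, amplitude, colour, autocorrelation) features:

      # SVM classification sketch with synthetic features; class 1 stands for QSOs and
      # class 0 for the variable/non-variable star population.
      import numpy as np
      from sklearn.svm import SVC
      from sklearn.preprocessing import StandardScaler
      from sklearn.pipeline import make_pipeline
      from sklearn.model_selection import cross_val_score

      rng = np.random.default_rng(0)
      n_qso, n_star = 60, 600
      X = np.vstack([
          rng.normal([200.0, 0.3, 0.5, 0.8], [80.0, 0.1, 0.2, 0.1], size=(n_qso, 4)),
          rng.normal([50.0, 0.2, 0.9, 0.3], [30.0, 0.1, 0.3, 0.2], size=(n_star, 4)),
      ])
      y = np.array([1] * n_qso + [0] * n_star)

      model = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
      scores = cross_val_score(model, X, y, cv=5, scoring="recall")  # recall ~ QSO efficiency
      print("cross-validated QSO recall:", scores.round(2))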

  15. A comparison of two photon planning algorithms for 8 MV and 25 MV X-ray beams in lung

    International Nuclear Information System (INIS)

    Kan, M.W.K.; Young, E.C.M.; Yu, P.K.N.

    1995-01-01

    The results of a comparison of two photon planning algorithms, the Clarkson Scatter Integration algorithm and the Equivalent Tissue-air Ratio algorithm are reported, using a simple lung phantom for 8 MV and 25 MV X-ray beams of field sizes 5 cm x 5 cm and 10 cm x 10 cm. Central axis depth-dose distributions were measured with a thimble chamber or a Markus parallel-plate chamber. Dose profile distributions were measured with TLD rods and films. Measured dose distributions were then compared to predicted dose distributions. Both algorithms overestimate the dose at mid-lung as they do not account for the effect of electronic disequilibrium. The Clarkson algorithm consistently shows less accurate results in comparison with the ETAR algorithm. There is additional error in the case of the Clarkson algorithm because of the assumption of a unit density medium in calculating scatter, which gives an overestimate in the effective scatter-air ratios in lung. For a 5 cm x 5 cm field, the error of dose prediction for 25 MV x-ray beam at mid-lung is 15.8 % and 12.8 % for Clarkson and ETAR algorithm respectively. At 8 MV the error is 9.3 % and 5.1 % respectively. In addition, both algorithms underestimate the penumbral width at mid-lung as they do not account for the penumbral flaring effect in low density medium. 25 refs., 2 tabs., 5 figs

  16. Definition and Analysis of a System for the Automated Comparison of Curriculum Sequencing Algorithms in Adaptive Distance Learning

    Science.gov (United States)

    Limongelli, Carla; Sciarrone, Filippo; Temperini, Marco; Vaste, Giulia

    2011-01-01

    LS-Lab provides automatic support to comparison/evaluation of the Learning Object Sequences produced by different Curriculum Sequencing Algorithms. Through this framework a teacher can verify the correspondence between the behaviour of different sequencing algorithms and her pedagogical preferences. In fact the teacher can compare algorithms…

  17. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    Science.gov (United States)

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505
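
    As an illustration of one of the listed heuristics, the sketch below implements Minimum Completion Time (MCT) on invented task lengths and VM speeds: each task is assigned, in arrival order, to the machine that would finish it earliest:

      # MCT scheduling sketch on a toy heterogeneous system.
      task_lengths = [40, 10, 25, 5, 60, 30]          # arbitrary task sizes
      vm_speeds = [1.0, 2.0, 0.5]                     # heterogeneous VM processing rates

      vm_ready = [0.0] * len(vm_speeds)               # time each VM becomes free
      assignment = []
      for task_id, length in enumerate(task_lengths):
          completion = [vm_ready[v] + length / vm_speeds[v] for v in range(len(vm_speeds))]
          best_vm = min(range(len(vm_speeds)), key=completion.__getitem__)
          vm_ready[best_vm] = completion[best_vm]
          assignment.append((task_id, best_vm))

      print("assignment:", assignment)
      print("makespan:", max(vm_ready))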

  18. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    Science.gov (United States)

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.

  19. Comparison of lithium and the eutectic lead lithium alloy, two candidate liquid metal breeder materials for self-cooled blankets

    International Nuclear Information System (INIS)

    Malang, S.; Mattas, R.

    1994-06-01

    Liquid metals are attractive candidates for both near-term and long-term fusion applications. The subject of this comparison is the differences between the two candidate liquid metal breeder materials, Li and LiPb, for use in breeding blankets, in the areas of neutronics, magnetohydrodynamics, tritium control, compatibility with structural materials, the heat extraction system, safety, and the required R&D program. Both candidates appear to be promising for use in self-cooled breeding blankets, which have inherent simplicity with the liquid metal serving as both breeder and coolant. The remaining feasibility question for both breeder materials is the electrical insulation between the liquid metal and the duct walls. Different ceramic coatings are required for the two breeders, and solutions to their crucial issues, namely self-healing of insulator cracks and radiation-induced electrical degradation, have not yet been demonstrated. Each liquid metal breeder has advantages and concerns associated with it, and further development is needed to resolve these concerns

  20. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm.

    Science.gov (United States)

    Wang, Yun-Ting; Peng, Chao-Chung; Ravankar, Ankit A; Ravankar, Abhijeet

    2018-04-23

    In past years, there has been significant progress in the field of indoor robot localization. To precisely recover its position, a robot usually relies on multiple on-board sensors, which affects the overall system cost and increases computation. In this research work, we considered a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To attenuate the computation effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) with interpolation is presented. As compared to the traditional ICP, the point cloud is first processed to extract corners and line features before applying point registration. Later, points labeled as corners are only matched with the corner candidates. Similarly, points labeled as lines are only matched with the line candidates. Moreover, their ICP confidence levels are also fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the ICP iterations. Finally, based on given well-constructed indoor layouts, experiment comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method is effective in significantly reducing computation effort and is simultaneously able to preserve localization precision.

  1. A Single LiDAR-Based Feature Fusion Indoor Localization Algorithm

    Directory of Open Access Journals (Sweden)

    Yun-Ting Wang

    2018-04-01

    Full Text Available In past years, there has been significant progress in the field of indoor robot localization. To precisely recover its position, a robot usually relies on multiple on-board sensors, which affects the overall system cost and increases computation. In this research work, we considered a light detection and ranging (LiDAR) device as the only sensor for detecting surroundings and propose an efficient indoor localization algorithm. To attenuate the computation effort and preserve localization robustness, a weighted parallel iterative closest point (WP-ICP) with interpolation is presented. As compared to the traditional ICP, the point cloud is first processed to extract corners and line features before applying point registration. Later, points labeled as corners are only matched with the corner candidates. Similarly, points labeled as lines are only matched with the line candidates. Moreover, their ICP confidence levels are also fused in the algorithm, which makes the pose estimation less sensitive to environment uncertainties. The proposed WP-ICP architecture reduces the probability of mismatch and thereby reduces the ICP iterations. Finally, based on given well-constructed indoor layouts, experiment comparisons are carried out under both clean and perturbed environments. It is shown that the proposed method is effective in significantly reducing computation effort and is simultaneously able to preserve localization precision.

  2. Comparison of vessel enhancement algorithms applied to time-of-flight MRA images for cerebrovascular segmentation.

    Science.gov (United States)

    Phellan, Renzo; Forkert, Nils D

    2017-11-01

    Vessel enhancement algorithms are often used as a preprocessing step for vessel segmentation in medical images to improve the overall segmentation accuracy. Each algorithm uses different characteristics to enhance vessels, such that the most suitable algorithm may vary for different applications. This paper presents a comparative analysis of the accuracy gains in vessel segmentation generated by the use of nine vessel enhancement algorithms: Multiscale vesselness using the formulas described by Erdt (MSE), Frangi (MSF), and Sato (MSS), optimally oriented flux (OOF), ranking orientations responses path operator (RORPO), the regularized Perona-Malik approach (RPM), vessel enhanced diffusion (VED), hybrid diffusion with continuous switch (HDCS), and the white top hat algorithm (WTH). The filters were evaluated and compared based on time-of-flight MRA datasets and corresponding manual segmentations from 5 healthy subjects and 10 patients with an arteriovenous malformation. Additionally, five synthetic angiographic datasets with corresponding ground truth segmentation were generated with three different noise levels (low, medium, and high) and also used for comparison. The parameters for each algorithm and subsequent segmentation were optimized using leave-one-out cross evaluation. The Dice coefficient, Matthews correlation coefficient, area under the ROC curve, number of connected components, and true positives were used for comparison. The results of this study suggest that vessel enhancement algorithms do not always lead to more accurate segmentation results compared to segmenting nonenhanced images directly. Multiscale vesselness algorithms, such as MSE, MSF, and MSS proved to be robust to noise, while diffusion-based filters, such as RPM, VED, and HDCS ranked in the top of the list in scenarios with medium or no noise. Filters that assume tubular-shapes, such as MSE, MSF, MSS, OOF, RORPO, and VED show a decrease in accuracy when considering patients with an AVM
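
    Of the filters listed, the Frangi multiscale vesselness filter is available in scikit-image; the sketch below enhances a synthetic bright tube, thresholds it, and scores the result with a Dice coefficient (the TOF-MRA data and the tuned parameters of the study are not reproduced):

      # Frangi enhancement + thresholding + Dice score on a synthetic "vessel".
      import numpy as np
      from skimage.filters import frangi

      image = np.zeros((64, 64))
      image[30:34, 8:56] = 1.0                                   # synthetic bright vessel
      image += np.random.default_rng(0).normal(0, 0.05, image.shape)
      truth = np.zeros_like(image, dtype=bool)
      truth[30:34, 8:56] = True

      enhanced = frangi(image, sigmas=range(1, 6), black_ridges=False)
      segmentation = enhanced > 0.5 * enhanced.max()             # simple global threshold

      dice = 2 * np.logical_and(segmentation, truth).sum() / (segmentation.sum() + truth.sum())
      print("Dice coefficient:", round(float(dice), 3))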

  3. SU-E-T-516: Dosimetric Validation of AcurosXB Algorithm in Comparison with AAA & CCC Algorithms for VMAT Technique.

    Science.gov (United States)

    Kathirvel, M; Subramanian, V Sai; Arun, G; Thirumalaiswamy, S; Ramalingam, K; Kumar, S Ashok; Jagadeesh, K

    2012-06-01

    To dosimetrically validate the AcurosXB algorithm for Volumetric Modulated Arc Therapy (VMAT) in comparison with the standard clinical Anisotropic Analytic Algorithm (AAA) and Collapsed Cone Convolution (CCC) dose calculation algorithms. The AcurosXB dose calculation algorithm is available with the Varian Eclipse treatment planning system (V10). It uses a grid-based Boltzmann equation solver to predict dose precisely in less time. This study was made to assess the algorithm's ability to predict dose as accurately as it is delivered; for this purpose, five clinical cases each of brain, head & neck, thoracic, pelvic and SBRT treatments were taken. Verification plans were created on a multicube phantom with an iMatrixx-2D detector array, dose prediction was done with the AcurosXB, AAA & CCC (COMPASS system) algorithms, and the plans were delivered on a CLINAC-iX treatment machine. The delivered dose was captured in the iMatrixx plane for all 25 plans. The measured dose was taken as the reference to quantify the agreement of the AcurosXB calculation algorithm against the previously validated AAA and CCC algorithms. Gamma evaluation was performed with clinical criteria of 3 and 2 mm distance-to-agreement and 3% and 2% dose difference in the omnipro-I'MRT software. Plans were evaluated in terms of correlation coefficient, quantitative area gamma and average gamma. The study shows good agreement, with mean correlations of 0.9979±0.0012, 0.9984±0.0009 and 0.9979±0.0011 for AAA, CCC and Acuros, respectively. Mean area gamma for the 3 mm/3% criterion was 98.80±1.04, 98.14±2.31 and 98.08±2.01, and for 2 mm/2% it was 93.94±3.83, 87.17±10.54 and 92.36±5.46 for AAA, CCC and Acuros, respectively. Mean average gamma for 3 mm/3% was 0.26±0.07, 0.42±0.08 and 0.28±0.09, and for 2 mm/2% it was 0.39±0.10, 0.64±0.11 and 0.42±0.13 for AAA, CCC and Acuros, respectively. This study demonstrated that the AcurosXB algorithm had good agreement with the AAA & CCC algorithms in terms of dose prediction. In conclusion, the AcurosXB algorithm provides a valid, accurate and speedy alternative to AAA
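    The gamma evaluation mentioned above combines a dose-difference and a distance-to-agreement criterion. A minimal global-gamma sketch for 1D dose profiles is given below for illustration; the clinical analysis in the record was performed on 2D detector-array data in the OmniPro-I'mRT software, which involves additional normalization and interpolation choices not shown here.

```python
import numpy as np

def gamma_1d(x_ref, d_ref, x_eval, d_eval, dta_mm=3.0, dd_frac=0.03):
    """Global gamma index of an evaluated 1D dose profile against a reference profile.

    dta_mm: distance-to-agreement criterion in mm; dd_frac: dose-difference criterion
    as a fraction of the reference maximum. Returns one gamma value per reference point.
    """
    dd_abs = dd_frac * d_ref.max()
    gammas = np.empty_like(d_ref, dtype=float)
    for i, (xr, dr) in enumerate(zip(x_ref, d_ref)):
        dist2 = ((x_eval - xr) / dta_mm) ** 2
        dose2 = ((d_eval - dr) / dd_abs) ** 2
        gammas[i] = np.sqrt(np.min(dist2 + dose2))
    return gammas  # points with gamma <= 1 pass the criterion
```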

  4. Comparison of different chaotic maps in particle swarm optimization algorithm for long-term cascaded hydroelectric system scheduling

    International Nuclear Information System (INIS)

    He Yaoyao; Zhou Jianzhong; Xiang Xiuqiao; Chen Heng; Qin Hui

    2009-01-01

    The goal of this paper is to present a novel chaotic particle swarm optimization (CPSO) algorithm and to compare the efficiency of three one-dimensional chaotic maps within a symmetrical region for long-term cascaded hydroelectric system scheduling. The introduced chaotic maps improve the global optimization capability of the CPSO algorithm. Moreover, a piecewise linear interpolation function is employed to transform all constraints into restrictions on the upriver water level, so that the objective function can be maximized. Numerical results and comparisons demonstrate the effectiveness and speed of the different algorithms on a practical hydro system.
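    As an illustration of how a one-dimensional chaotic map can drive the PSO update, the sketch below uses the logistic map in place of the uniform random numbers of the standard velocity equation; the record does not state which three maps were compared, so the choice of map and the parameter values are assumptions.

```python
import numpy as np

def logistic_map(z, mu=4.0):
    """One iteration of the logistic map on (0, 1); mu = 4 gives chaotic behaviour."""
    return mu * z * (1.0 - z)

def cpso_velocity_update(v, x, pbest, gbest, z, w=0.7, c1=2.0, c2=2.0):
    """PSO velocity update with a chaotic sequence in place of uniform random draws."""
    z = logistic_map(z)          # advance the chaotic state
    r1 = z
    z = logistic_map(z)
    r2 = z
    v_new = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    return v_new, z              # return the new velocity and the chaotic state
```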

  5. An improved multi-domain convolution tracking algorithm

    Science.gov (United States)

    Sun, Xin; Wang, Haiying; Zeng, Yingsen

    2018-04-01

    Along with the wide application of deep learning in the field of computer vision, deep learning has become a mainstream direction in the field of object tracking. The tracking algorithm in this paper is based on an improved multi-domain convolutional neural network, and the network is pre-trained on the VOT video set with a multi-domain training strategy. In the process of online tracking, the network evaluates candidate targets sampled from the vicinity of the prediction target in the previous frame with a Gaussian distribution, and the candidate target with the highest score is taken as the prediction target of the current frame. A bounding box regression model is introduced to make the prediction target closer to the ground-truth target box of the test set. A grouping-update strategy is used to extract and select useful update samples in each frame, which effectively prevents overfitting and adapts to changes in both the target and the environment. To improve the speed of the algorithm while maintaining its performance, the number of candidate targets is adjusted dynamically with the help of a self-adaptive parameter strategy. Finally, the algorithm is tested on the OTB set and compared with other high-performance tracking algorithms, and plots of success rate and accuracy are drawn, which illustrate the outstanding performance of the tracking algorithm in this paper.

  6. A multithreaded parallel implementation of a dynamic programming algorithm for sequence comparison.

    Science.gov (United States)

    Martins, W S; Del Cuvillo, J B; Useche, F J; Theobald, K B; Gao, G R

    2001-01-01

    This paper discusses the issues involved in implementing a dynamic programming algorithm for biological sequence comparison on a general-purpose parallel computing platform based on a fine-grain event-driven multithreaded program execution model. Fine-grain multithreading permits efficient parallelism exploitation in this application both by taking advantage of asynchronous point-to-point synchronizations and communication with low overheads and by effectively tolerating latency through the overlapping of computation and communication. We have implemented our scheme on EARTH, a fine-grain event-driven multithreaded execution and architecture model which has been ported to a number of parallel machines with off-the-shelf processors. Our experimental results show that the dynamic programming algorithm can be efficiently implemented on EARTH systems with high performance (e.g., speedup of 90 on 120 nodes), good programmability and reasonable cost.
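    For reference, the serial form of a classic sequence-comparison recurrence of the kind being parallelized is sketched below (a global-alignment score with a linear gap penalty); the EARTH implementation distributes this table computation across fine-grain threads, which is not shown, and the scoring parameters are illustrative.

```python
def global_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch style dynamic programming score between sequences a and b."""
    n, m = len(a), len(b)
    # score[i][j] = best score aligning the prefixes a[:i] and b[:j]
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    return score[n][m]

# Example: print(global_alignment_score("GATTACA", "GCATGCU"))
```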

  7. A comparison and evaluation of five biclustering algorithms by quantifying goodness of biclusters for gene expression data

    Directory of Open Access Journals (Sweden)

    Li Li

    2012-07-01

    Full Text Available Abstract Background Several biclustering algorithms have been proposed to identify biclusters, in which genes share similar expression patterns across a number of conditions. However, different algorithms yield different biclusters and can further lead to distinct conclusions. Therefore, testing and comparisons between these algorithms are strongly required. Methods In this study, five biclustering algorithms (i.e. BIMAX, FABIA, ISA, QUBIC and SAMBA) were compared with each other in the cases where they were used to handle two expression datasets (GDS1620 and pathway) with different dimensions in Arabidopsis thaliana (A. thaliana). GO (gene ontology) annotation and PPI (protein-protein interaction) networks were used to verify the corresponding biological significance of biclusters from the five algorithms. To compare the algorithms' performance and evaluate the quality of the identified biclusters, two scoring methods, namely weighted enrichment (WE) scoring and PPI scoring, were proposed in our study. For each dataset, after combining the scores of all biclusters into one unified ranking, we could evaluate the performance and behavior of the five biclustering algorithms in a better way. Results Both the WE and PPI scoring methods proved effective for validating the biological significance of the biclusters, and a significantly positive correlation between the two sets of scores was found, demonstrating the consistency of these two methods. A comparative study of the above five algorithms revealed that: (1) ISA is the most effective one among the five algorithms on the GDS1620 dataset and BIMAX outperforms the other algorithms on the pathway dataset. (2) Both ISA and BIMAX are data-dependent. The former does not work well on datasets with few genes, while the latter holds well for datasets with more conditions. (3) FABIA and QUBIC perform poorly in this study and they may be suitable to large datasets with more genes and

  8. Quantitative comparison of direct phase retrieval algorithms in in-line phase tomography

    International Nuclear Information System (INIS)

    Langer, Max; Cloetens, Peter; Guigay, Jean-Pierre; Peyrin, Francoise

    2008-01-01

    A well-known problem in x-ray microcomputed tomography is low sensitivity. Phase contrast imaging offers an increase of sensitivity of up to a factor of 10³ in the hard x-ray region, which makes it possible to image soft tissue and small density variations. If a sufficiently coherent x-ray beam, such as that obtained from a third generation synchrotron, is used, phase contrast can be obtained by simply moving the detector downstream of the imaged object. This setup is known as in-line or propagation based phase contrast imaging. A quantitative relationship exists between the phase shift induced by the object and the recorded intensity and inversion of this relationship is called phase retrieval. Since the phase shift is proportional to projections through the three-dimensional refractive index distribution in the object, once the phase is retrieved, the refractive index can be reconstructed by using the phase as input to a tomographic reconstruction algorithm. A comparison between four phase retrieval algorithms is presented. The algorithms are based on the transport of intensity equation (TIE), transport of intensity equation for weak absorption, the contrast transfer function (CTF), and a mixed approach between the CTF and TIE, respectively. The compared methods all rely on linearization of the relationship between phase shift and recorded intensity to yield fast phase retrieval algorithms. The phase retrieval algorithms are compared using both simulated and experimental data, acquired at the European Synchrotron Radiation Facility third generation synchrotron light source. The algorithms are evaluated in terms of two different reconstruction error metrics. While being slightly less computationally effective, the mixed approach shows the best performance in terms of the chosen criteria.

  9. Portfolio management using value at risk: A comparison between genetic algorithms and particle swarm optimization

    NARCIS (Netherlands)

    V.A.F. Dallagnol (V. A F); J.H. van den Berg (Jan); L. Mous (Lonneke)

    2009-01-01

    In this paper, a comparison is presented of the application of particle swarm optimization and genetic algorithms to portfolio management, in a constrained portfolio optimization problem where no short sales are allowed. The objective function to be minimized is the value at risk

  10. Inversion of Land Surface Temperature (LST) Using Terra ASTER Data: A Comparison of Three Algorithms

    Directory of Open Access Journals (Sweden)

    Milton Isaya Ndossi

    2016-12-01

    Full Text Available Land Surface Temperature (LST) is an important measurement in studies related to the Earth surface's processes. The Advanced Space-borne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the Terra spacecraft is the currently available Thermal Infrared (TIR) imaging sensor with the highest spatial resolution. This study involves the comparison of LSTs inverted from the sensor using the Split Window Algorithm (SWA), the Single Channel Algorithm (SCA) and the Planck function. This study has used the National Oceanic and Atmospheric Administration's (NOAA) data to model and compare the results from the three algorithms. The data from the sensor have been processed with the Python programming language in a free and open source software package (QGIS) to enable users to make use of the algorithms. The study revealed that the three algorithms are suitable for LST inversion, whereby the Planck function showed the highest level of accuracy, the SWA had a moderate level of accuracy and the SCA had the least accuracy. The algorithms produced results with Root Mean Square Errors (RMSE) of 2.29 K, 3.77 K and 2.88 K for the Planck function, the SCA and the SWA, respectively.
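    The Planck-function inversion referred to in this record amounts to converting band radiance into brightness temperature at a band-effective wavelength. A minimal sketch is given below; the wavelength and radiance values are illustrative, and the full LST workflow also requires atmospheric and emissivity corrections that are not shown.

```python
import math

# Physical constants (SI units)
H = 6.62607015e-34   # Planck constant, J s
C = 2.99792458e8     # speed of light, m/s
KB = 1.380649e-23    # Boltzmann constant, J/K

def planck_inverse_temperature(radiance, wavelength_um):
    """Invert Planck's law: temperature (K) from spectral radiance.

    radiance: W m^-2 sr^-1 um^-1 (converted internally to per-metre);
    wavelength_um: band-effective wavelength in micrometres.
    """
    lam = wavelength_um * 1e-6          # micrometres -> metres
    L = radiance * 1e6                  # per micrometre -> per metre
    c1 = 2.0 * H * C ** 2               # first radiation constant (radiance form)
    c2 = H * C / KB                     # second radiation constant
    return c2 / (lam * math.log(c1 / (lam ** 5 * L) + 1.0))

# Example: an ASTER-like TIR band near 10.66 um and a plausible radiance value
print(planck_inverse_temperature(9.0, 10.66))   # roughly 295 K
```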

  11. A comparison between two algorithms for the retrieval of soil moisture using AMSR-E data

    Science.gov (United States)

    A comparison between two algorithms for estimating soil moisture with microwave satellite data was carried out by using the datasets collected on the four Agricultural Research Service (ARS) watershed sites in the US from 2002 to 2009. These sites collectively represent a wide range of ground condit...

  12. In Pursuit of LSST Science Requirements: A Comparison of Photometry Algorithms

    Science.gov (United States)

    Becker, Andrew C.; Silvestri, Nicole M.; Owen, Russell E.; Ivezić, Željko; Lupton, Robert H.

    2007-12-01

    We have developed an end-to-end photometric data-processing pipeline to compare current photometric algorithms commonly used on ground-based imaging data. This test bed is exceedingly adaptable and enables us to perform many research and development tasks, including image subtraction and co-addition, object detection and measurements, the production of photometric catalogs, and the creation and stocking of database tables with time-series information. This testing has been undertaken to evaluate existing photometry algorithms for consideration by a next-generation image-processing pipeline for the Large Synoptic Survey Telescope (LSST). We outline the results of our tests for four packages: the Sloan Digital Sky Survey's Photo package, DAOPHOT and ALLFRAME, DOPHOT, and two versions of Source Extractor (SExtractor). The ability of these algorithms to perform point-source photometry, astrometry, shape measurements, and star-galaxy separation and to measure objects at low signal-to-noise ratio is quantified. We also perform a detailed crowded-field comparison of DAOPHOT and ALLFRAME, and profile the speed and memory requirements in detail for SExtractor. We find that both DAOPHOT and Photo are able to perform aperture photometry to high enough precision to meet LSST's science requirements, and less adequately at PSF-fitting photometry. Photo performs the best at simultaneous point- and extended-source shape and brightness measurements. SExtractor is the fastest algorithm, and recent upgrades in the software yield high-quality centroid and shape measurements with little bias toward faint magnitudes. ALLFRAME yields the best photometric results in crowded fields.

  13. A Framing Link Based Tabu Search Algorithm for Large-Scale Multidepot Vehicle Routing Problems

    Directory of Open Access Journals (Sweden)

    Xuhao Zhang

    2014-01-01

    Full Text Available A framing link (FL) based tabu search algorithm is proposed in this paper for the large-scale multidepot vehicle routing problem (LSMDVRP). Framing links are generated during continuous improvement of the current solutions and are then taken as skeletons so as to improve the optimum-seeking ability, speed up the optimization process, and obtain better results. Based on the comparison between pre- and post-mutation routes in the current solution, different parts are extracted. In the current optimization period, links involved in the optimal solution are regarded as candidates for the FL base. Multiple optimization periods exist in the whole algorithm, and there are several potential FLs in each period. If the update condition is satisfied, the FL base is updated, new FLs are added into the current route, and the next period starts. Through adjusting the borderline of the multidepot sharing area with dynamic parameters, the authors define candidate selection principles for three kinds of customer connections, respectively. Link splitting and the roulette approach are employed to choose FLs. 18 LSMDVRP instances in three groups are studied and new optimal solution values for nine of them are obtained, with higher computation speed and reliability.

  14. Algorithm, applications and evaluation for protein comparison by Ramanujan Fourier transform.

    Science.gov (United States)

    Zhao, Jian; Wang, Jiasong; Hua, Wei; Ouyang, Pingkai

    2015-12-01

    The amino acid sequence of a protein determines its chemical properties, chain conformation and biological functions. Protein sequence comparison is of great importance to identify similarities of protein structures and infer their functions. Many properties of a protein correspond to the low-frequency signals within the sequence. Low frequency modes in protein sequences are linked to the secondary structures, membrane protein types, and sub-cellular localizations of the proteins. In this paper, we present Ramanujan Fourier transform (RFT) with a fast algorithm to analyze the low-frequency signals of protein sequences. The RFT method is applied to similarity analysis of protein sequences with the Resonant Recognition Model (RRM). The results show that the proposed fast RFT method on protein comparison is more efficient than commonly used discrete Fourier transform (DFT). RFT can detect common frequencies as significant feature for specific protein families, and the RFT spectrum heat-map of protein sequences demonstrates the information conservation in the sequence comparison. The proposed method offers a new tool for pattern recognition, feature extraction and structural analysis on protein sequences. Copyright © 2015 Elsevier Ltd. All rights reserved.

  15. WE-AB-209-06: Dynamic Collimator Trajectory Algorithm for Use in VMAT Treatment Deliveries

    Energy Technology Data Exchange (ETDEWEB)

    MacDonald, L [Department of Medical Physics, Dalhousie University, Halifax, Nova Scotia, CA (Canada); Thomas, C; Syme, A [Department of Medical Physics, Dalhousie University, Halifax, Nova Scotia, CA (Canada); Department of Radiation Oncology, Dalhousie University, Halifax, Nova Scotia (Canada); Medical Physics, Nova Scotia Cancer Centre, Halifax, Nova Scotia (Canada)

    2016-06-15

    Purpose: To develop advanced dynamic collimator positioning algorithms for optimal beam’s-eye-view (BEV) fitting of targets in VMAT procedures, including multiple metastases stereotactic radiosurgery procedures. Methods: A trajectory algorithm was developed, which can dynamically modify the angle of the collimator as a function of VMAT control point to provide optimized collimation of target volume(s). Central to this algorithm is a concept denoted “whitespace”, defined as area within the jaw-defined BEV field, outside of the PTV, and not shielded by the MLC when fit to the PTV. Calculating whitespace at all collimator angles and every control point, a two-dimensional topographical map depicting the tightness-of-fit of the MLC was generated. A variety of novel searching algorithms identified a number of candidate trajectories of continuous collimator motion. Ranking these candidate trajectories according to their accrued whitespace value produced an optimal solution for navigation of this map. Results: All trajectories were normalized to minimum possible (i.e. calculated without consideration of collimator motion constraints) accrued whitespace. On an acoustic neuroma case, a random walk algorithm generated a trajectory with 151% whitespace; random walk including a mandatory anchor point improved this to 148%; gradient search produced a trajectory with 137%; and bi-directional gradient search generated a trajectory with 130% whitespace. For comparison, a fixed collimator angle of 30° and 330° accumulated 272% and 228% of whitespace, respectively. The algorithm was tested on a clinical case with two metastases (single isocentre) and identified collimator angles that allow for simultaneous irradiation of the PTVs while minimizing normal tissue irradiation. Conclusion: Dynamic collimator trajectories have the potential to improve VMAT deliveries through increased efficiency and reduced normal tissue dose, especially in treatment of multiple cranial metastases
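    A minimal sketch of navigating such a whitespace map is shown below: whitespace[cp, a] is assumed to hold the unshielded in-field area at control point cp and collimator-angle index a, and a greedy pass picks, at each control point, the admissible angle within the rotation limit that has the least whitespace. The searches described in the record (random walk, gradient, bi-directional gradient) are more elaborate; this only illustrates the underlying idea.

```python
import numpy as np

def greedy_collimator_trajectory(whitespace, max_step=5):
    """Greedy collimator-angle trajectory over a (control_points, angles) whitespace map.

    max_step: maximum change in angle index between consecutive control points,
    modelling the collimator rotation-speed limit.
    Returns the chosen angle index per control point and the accrued whitespace.
    """
    n_cp, n_ang = whitespace.shape
    angle = int(np.argmin(whitespace[0]))          # best starting angle
    trajectory, accrued = [angle], float(whitespace[0, angle])
    for cp in range(1, n_cp):
        lo, hi = max(0, angle - max_step), min(n_ang, angle + max_step + 1)
        angle = lo + int(np.argmin(whitespace[cp, lo:hi]))   # best admissible angle
        trajectory.append(angle)
        accrued += float(whitespace[cp, angle])
    return trajectory, accrued
```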

  16. WE-AB-209-06: Dynamic Collimator Trajectory Algorithm for Use in VMAT Treatment Deliveries

    International Nuclear Information System (INIS)

    MacDonald, L; Thomas, C; Syme, A

    2016-01-01

    Purpose: To develop advanced dynamic collimator positioning algorithms for optimal beam’s-eye-view (BEV) fitting of targets in VMAT procedures, including multiple metastases stereotactic radiosurgery procedures. Methods: A trajectory algorithm was developed, which can dynamically modify the angle of the collimator as a function of VMAT control point to provide optimized collimation of target volume(s). Central to this algorithm is a concept denoted “whitespace”, defined as area within the jaw-defined BEV field, outside of the PTV, and not shielded by the MLC when fit to the PTV. Calculating whitespace at all collimator angles and every control point, a two-dimensional topographical map depicting the tightness-of-fit of the MLC was generated. A variety of novel searching algorithms identified a number of candidate trajectories of continuous collimator motion. Ranking these candidate trajectories according to their accrued whitespace value produced an optimal solution for navigation of this map. Results: All trajectories were normalized to minimum possible (i.e. calculated without consideration of collimator motion constraints) accrued whitespace. On an acoustic neuroma case, a random walk algorithm generated a trajectory with 151% whitespace; random walk including a mandatory anchor point improved this to 148%; gradient search produced a trajectory with 137%; and bi-directional gradient search generated a trajectory with 130% whitespace. For comparison, a fixed collimator angle of 30° and 330° accumulated 272% and 228% of whitespace, respectively. The algorithm was tested on a clinical case with two metastases (single isocentre) and identified collimator angles that allow for simultaneous irradiation of the PTVs while minimizing normal tissue irradiation. Conclusion: Dynamic collimator trajectories have the potential to improve VMAT deliveries through increased efficiency and reduced normal tissue dose, especially in treatment of multiple cranial metastases

  17. A Clustal Alignment Improver Using Evolutionary Algorithms

    DEFF Research Database (Denmark)

    Thomsen, Rene; Fogel, Gary B.; Krink, Thimo

    2002-01-01

    Multiple sequence alignment (MSA) is a crucial task in bioinformatics. In this paper we extended previous work with evolutionary algorithms (EA) by using MSA solutions obtained from the well-known Clustal V algorithm as a candidate solution seed of the initial EA population. Our results clearly show...

  18. A FIRST COMPARISON OF KEPLER PLANET CANDIDATES IN SINGLE AND MULTIPLE SYSTEMS

    International Nuclear Information System (INIS)

    Latham, David W.; Quinn, Samuel N.; Carter, Joshua A.; Holman, Matthew J.; Rowe, Jason F.; Borucki, William J.; Bryson, Stephen T.; Howell, Steve B.; Batalha, Natalie M.; Brown, Timothy M.; Buchhave, Lars A.; Caldwell, Douglas A.; Christiansen, Jessie L.; Ciardi, David R.; Cochran, William D.; Dunham, Edward W.; Fabrycky, Daniel C.; Ford, Eric B.; Gautier, Thomas N. III; Gilliland, Ronald L.

    2011-01-01

    In this Letter, we present an overview of the rich population of systems with multiple candidate transiting planets found in the first four months of Kepler data. The census of multiples includes 115 targets that show two candidate planets, 45 with three, eight with four, and one each with five and six, for a total of 170 systems with 408 candidates. When compared to the 827 systems with only one candidate, the multiples account for 17% of the total number of systems, and one-third of all the planet candidates. We compare the characteristics of candidates found in multiples with those found in singles. False positives due to eclipsing binaries are much less common for the multiples, as expected. Singles and multiples are both dominated by planets smaller than Neptune; 69 (+2/-3)% for singles and 86 (+2/-5)% for multiples. This result, that systems with multiple transiting planets are less likely to include a transiting giant planet, suggests that close-in giant planets tend to disrupt the orbital inclinations of small planets in flat systems, or maybe even prevent the formation of such systems in the first place.

  19. Fast-Solving Quasi-Optimal LS-S3VM Based on an Extended Candidate Set.

    Science.gov (United States)

    Ma, Yuefeng; Liang, Xun; Kwok, James T; Li, Jianping; Zhou, Xiaoping; Zhang, Haiyan

    2018-04-01

    The semisupervised least squares support vector machine (LS-S3VM) is an important enhancement of least squares support vector machines in semisupervised learning. Given that most data collected from the real world are without labels, semisupervised approaches are more applicable than standard supervised approaches. Although a few training methods for LS-S3VM exist, the problem of deriving the optimal decision hyperplane efficiently and effectually has not been solved. In this paper, a fully weighted model of LS-S3VM is proposed, and a simple integer programming (IP) model is introduced through an equivalent transformation to solve the model. Based on the distances between the unlabeled data and the decision hyperplane, a new indicator is designed to represent the possibility that the label of an unlabeled datum should be reversed in each iteration during training. Using the indicator, we construct an extended candidate set consisting of the indices of unlabeled data with high possibilities, which integrates more information from unlabeled data. Our algorithm is degenerated into a special scenario of the previous algorithm when the extended candidate set is reduced into a set with only one element. Two strategies are utilized to determine the descent directions based on the extended candidate set. Furthermore, we developed a novel method for locating a good starting point based on the properties of the equivalent IP model. Combined with the extended candidate set and the carefully computed starting point, a fast algorithm to solve LS-S3VM quasi-optimally is proposed. The choice of quasi-optimal solutions results in low computational cost and avoidance of overfitting. Experiments show that our algorithm equipped with the two designed strategies is more effective than other algorithms in at least one of the following three aspects: 1) computational complexity; 2) generalization ability; and 3) flexibility. However, our algorithm and other algorithms have

  20. A tunable algorithm for collective decision-making.

    Science.gov (United States)

    Pratt, Stephen C; Sumpter, David J T

    2006-10-24

    Complex biological systems are increasingly understood in terms of the algorithms that guide the behavior of system components and the information pathways that link them. Much attention has been given to robust algorithms, or those that allow a system to maintain its functions in the face of internal or external perturbations. At the same time, environmental variation imposes a complementary need for algorithm versatility, or the ability to alter system function adaptively as external circumstances change. An important goal of systems biology is thus the identification of biological algorithms that can meet multiple challenges rather than being narrowly specified to particular problems. Here we show that emigrating colonies of the ant Temnothorax curvispinosus tune the parameters of a single decision algorithm to respond adaptively to two distinct problems: rapid abandonment of their old nest in a crisis and deliberative selection of the best available new home when their old nest is still intact. The algorithm uses a stepwise commitment scheme and a quorum rule to integrate information gathered by numerous individual ants visiting several candidate homes. By varying the rates at which they search for and accept these candidates, the ants yield a colony-level response that adaptively emphasizes either speed or accuracy. We propose such general but tunable algorithms as a design feature of complex systems, each algorithm providing elegant solutions to a wide range of problems.

  1. Comparison of analyzer-based imaging computed tomography extraction algorithms and application to bone-cartilage imaging

    International Nuclear Information System (INIS)

    Diemoz, Paul C; Bravin, Alberto; Coan, Paola; Glaser, Christian

    2010-01-01

    In x-ray phase-contrast analyzer-based imaging, the contrast is provided by a combination of absorption, refraction and scattering effects. Several extraction algorithms, which attempt to separate and quantify these different physical contributions, have been proposed and applied. In a previous work, we presented a quantitative comparison of five among the most well-known extraction algorithms based on the geometrical optics approximation applied to planar images: diffraction-enhanced imaging (DEI), extended diffraction-enhanced imaging (E-DEI), generalized diffraction-enhanced imaging (G-DEI), multiple-image radiography (MIR) and Gaussian curve fitting (GCF). In this paper, we compare these algorithms in the case of the computed tomography (CT) modality. The extraction algorithms are applied to analyzer-based CT images of both plastic phantoms and biological samples (cartilage-on-bone cylinders). Absorption, refraction and scattering signals are derived. Results obtained with the different algorithms may vary greatly, especially in the case of large refraction angles. We show that ABI-CT extraction algorithms can provide an excellent tool to enhance the visualization of cartilage internal structures, which may find applications in a clinical context. Besides, by using the refraction images, the refractive index decrements for both the cartilage matrix and the cartilage cells have been estimated.

  2. Inferring Gene Regulatory Networks Using Conditional Regulation Pattern to Guide Candidate Genes.

    Directory of Open Access Journals (Sweden)

    Fei Xiao

    Full Text Available Combining path consistency (PC) algorithms with conditional mutual information (CMI) is widely used in the reconstruction of gene regulatory networks. CMI has many advantages over the Pearson correlation coefficient in measuring non-linear dependence to infer gene regulatory networks. It can also discriminate direct regulations from indirect ones. However, it is still a challenge to select the conditional genes in an optimal way, which affects the performance and computational complexity of the PC algorithm. In this study, we develop a novel conditional mutual information-based algorithm, namely RPNI (Regulation Pattern based Network Inference), to infer gene regulatory networks. For conditional gene selection, we define the co-regulation pattern, indirect-regulation pattern and mixture-regulation pattern as three candidate patterns to guide the selection of candidate genes. To demonstrate the potential of our algorithm, we apply it to gene expression data from the DREAM challenge. Experimental results show that RPNI outperforms existing conditional mutual information-based methods in both accuracy and time complexity for different sizes of gene samples. Furthermore, the robustness of our algorithm is demonstrated by noise interference analysis using different types of noise.

  3. A COMPARISON BETWEEN TWO ALGORITHMS FOR THE RETRIEVAL OF SOIL MOISTURE USING AMSR-E DATA

    Directory of Open Access Journals (Sweden)

    Simonetta ePaloscia

    2015-04-01

    Full Text Available A comparison between two algorithms for estimating soil moisture with microwave satellite data was carried out by using the datasets collected on the four Agricultural Research Service (ARS) watershed sites in the US from 2002 to 2009. These sites collectively represent a wide range of ground conditions and precipitation regimes (from natural to agricultural surfaces and from desert to humid regions) and provide long-term in-situ data. One of the algorithms is the artificial neural network-based algorithm developed by the Institute of Applied Physics of the National Research Council (IFAC-CNR) (HydroAlgo) and the second one is the Single Channel Algorithm (SCA) developed by USDA-ARS (US Department of Agriculture-Agricultural Research Service). Both algorithms are based on the same radiative transfer equations but are implemented very differently. Both made use of datasets provided by the Japanese Aerospace Exploration Agency (JAXA), within the framework of the Advanced Microwave Scanning Radiometer–Earth Observing System (AMSR-E) and Global Change Observation Mission–Water (GCOM)/AMSR-2 programs. Results demonstrated that both algorithms perform better than the mission-specified accuracy, with Root Mean Square Error (RMSE) ≤0.06 m3/m3 and Bias <0.02 m3/m3. These results expand on previous investigations using different algorithms and sites. The novelty of the paper consists of the fact that it is the first intercomparison of the HydroAlgo algorithm with a more traditional retrieval algorithm, which offers an approach to higher spatial resolution products.

  4. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET.

    Science.gov (United States)

    Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M

    2014-07-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With unprecedented high channel density (450 channels/cm³), image reconstruction is a challenge. Therefore optimization is needed to find the best algorithm in order to exploit correctly the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics.

  5. Comparison of Genetic Algorithm and Hill Climbing for Shortest Path Optimization Mapping

    Directory of Open Access Journals (Sweden)

    Fronita Mona

    2018-01-01

    Full Text Available The Traveling Salesman Problem (TSP) is an optimization problem of finding the shortest path that reaches several destinations in one trip, without passing through the same city twice, and returning to the departure city; the process is applied to delivery systems. This comparison is done using two methods, namely a genetic algorithm and hill climbing. Hill climbing works by directly selecting a new path obtained by exchanging a city with a neighbouring one whenever the resulting track distance is smaller than that of the previous track, without further testing. Genetic algorithms depend on the input parameters: the population size, the crossover probability, the mutation probability and the number of generations. To simplify the process of determining the shortest path, supporting software was developed that uses the Google Maps API. Tests were carried out 20 times with 8, 16, 24 and 32 cities to see which method is optimal in terms of distance and computation time. Experiments conducted with 3, 4, 5 and 6 cities produced the same optimal distance for the genetic algorithm and hill climbing; the distances begin to differ from 7 cities onwards. The overall results show that, in these tests, hill climbing is more effective for small numbers of cities, while instances with more than 30 cities are better optimized using the genetic algorithm.
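    The neighbour-exchange hill climbing described above can be sketched as repeatedly swapping two cities and keeping the swap whenever the tour gets shorter; the code below is a generic illustration (the distance matrix, iteration count and seed are assumptions), and the genetic-algorithm side is not shown.

```python
import random

def tour_length(tour, dist):
    """Total length of a closed tour given a symmetric distance matrix."""
    return sum(dist[tour[i]][tour[(i + 1) % len(tour)]] for i in range(len(tour)))

def hill_climb_tsp(dist, iterations=10000, seed=0):
    """Hill climbing for the TSP by pairwise city swaps."""
    rng = random.Random(seed)
    tour = list(range(len(dist)))
    rng.shuffle(tour)
    best = tour_length(tour, dist)
    for _ in range(iterations):
        i, j = rng.sample(range(len(tour)), 2)
        tour[i], tour[j] = tour[j], tour[i]        # propose a swap
        length = tour_length(tour, dist)
        if length < best:
            best = length                          # keep the improvement
        else:
            tour[i], tour[j] = tour[j], tour[i]    # undo the swap
    return tour, best
```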

  6. Common barrel and forward CA tracking algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Mykhailo, Pugach [Goethe-Universitaet, Frankfurt (Germany); Frankfurt Institute for Advanced Studies, Frankfurt (Germany); KINR, Kyiv (Ukraine); Gorbunov, Sergey; Kisel, Ivan [Goethe-Universitaet, Frankfurt (Germany); Frankfurt Institute for Advanced Studies, Frankfurt (Germany); Collaboration: PANDA-Collaboration

    2016-07-01

    There are complex detector setups which consist of barrel (cylindrical) and forward parts, and such systems require a special approach in the track-finding procedure for the registered charged particles. Currently, tracking can be performed in both parts of such a detector independently of each other, but the final goal in this direction is the creation of a combined tracking, which will work in both parts of the detector simultaneously. The basic algorithm is based on the Kalman Filter (KF) and a Cellular Automaton (CA). The tracking procedure in such a complex system is rather non-trivial, since it requires two different models to describe the state vector of the segments of the reconstructed track in the mathematical apparatus of the KF algorithm. To overcome this, a mathematical apparatus of transition matrices must be developed and implemented, so that one can transfer from one track model to another. Afterwards, the CA stage is performed, which reduces to sorting segments, joining them into track candidates and selecting the best candidates by a chi-square criterion after fitting each track candidate with the KF. In this report the algorithm, status and perspectives of such combined tracking are described.

  7. On factoring RSA modulus using random-restart hill-climbing algorithm and Pollard’s rho algorithm

    Science.gov (United States)

    Budiman, M. A.; Rachmawati, D.

    2017-12-01

    The security of the widely-used RSA public key cryptography algorithm depends on the difficulty of factoring a big integer into two large prime numbers. For many years, the integer factorization problem has been intensively and extensively studied in the field of number theory. As a result, a lot of deterministic algorithms such as Euler's algorithm, Kraitchik's, and variants of Pollard's algorithms have been researched comprehensively. Our study takes a rather uncommon approach: rather than making use of intensive number theory, we attempt to factorize the RSA modulus n by using a random-restart hill-climbing algorithm, which belongs to the class of metaheuristic algorithms. The factorization time of RSA moduli with different lengths is recorded and compared with the factorization time of Pollard's rho algorithm, which is a deterministic algorithm. Our experimental results indicate that while the random-restart hill-climbing algorithm is an acceptable candidate to factorize smaller RSA moduli, its factorization speed is much slower than that of Pollard's rho algorithm.
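    For reference, the deterministic baseline of the record, Pollard's rho, is compact enough to sketch in full with the usual f(x) = x^2 + c map and Floyd cycle detection; the random-restart hill-climbing formulation is not reproduced because the record does not give its fitness function, and the example modulus below is a toy value.

```python
import math
import random

def pollards_rho(n, seed=0):
    """Return a non-trivial factor of a composite integer n using Pollard's rho."""
    if n % 2 == 0:
        return 2
    rng = random.Random(seed)
    while True:
        c = rng.randrange(1, n)
        f = lambda x: (x * x + c) % n
        x = y = rng.randrange(2, n)
        d = 1
        while d == 1:
            x = f(x)          # tortoise
            y = f(f(y))       # hare
            d = math.gcd(abs(x - y), n)
        if d != n:            # cycle found without a factor -> retry with a new c
            return d

# Example with a small toy modulus: 8051 = 83 * 97
print(pollards_rho(8051))
```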

  8. Modified Bat Algorithm Based on Lévy Flight and Opposition Based Learning

    Directory of Open Access Journals (Sweden)

    Xian Shan

    2016-01-01

    Full Text Available The Bat Algorithm (BA) is a swarm intelligence algorithm which has been intensively applied to solve academic and real-life optimization problems. However, due to the lack of a good balance between exploration and exploitation, BA sometimes fails at finding the global optimum and is easily trapped in local optima. In order to overcome the premature-convergence problem and improve the local searching ability of the Bat Algorithm for optimization problems, we propose an improved BA called OBMLBA. In the proposed algorithm, a modified search equation with more useful information from the search experience is introduced to generate a candidate solution, and a Lévy flight random walk is incorporated into BA in order to avoid being trapped in local optima. Furthermore, the concept of opposition based learning (OBL) is embedded in BA to enhance the diversity and convergence capability. To evaluate the performance of the proposed approach, 16 benchmark functions have been employed. The results obtained by the experiments demonstrate the effectiveness and efficiency of OBMLBA for global optimization problems. Comparisons with some other BA variants and other state-of-the-art algorithms show that the proposed approach significantly improves the performance of BA. The performance of the proposed algorithm on large-scale optimization problems and real-world optimization problems is not discussed in this paper and will be studied in future work.
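    One common way to draw the Lévy-flight steps used in such bat-algorithm variants is Mantegna's algorithm, sketched below with an assumed exponent beta = 1.5 and step scale; the exact OBMLBA update equations are not reproduced here.

```python
import math
import random

def levy_step(beta=1.5, rng=random):
    """One Lévy-flight step length via Mantegna's algorithm (exponent beta in (1, 2])."""
    sigma_u = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
               / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = rng.gauss(0.0, sigma_u)
    v = rng.gauss(0.0, 1.0)
    return u / abs(v) ** (1 / beta)

def levy_perturb(x, alpha=0.01):
    """Perturb a candidate solution component-wise; alpha is an assumed step-size scale."""
    return [xi + alpha * levy_step() for xi in x]
```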

  9. VennPainter: A Tool for the Comparison and Identification of Candidate Genes Based on Venn Diagrams.

    Directory of Open Access Journals (Sweden)

    Guoliang Lin

    Full Text Available VennPainter is a program for depicting unique and shared sets of gene lists and generating Venn diagrams, using the Qt C++ framework. The software produces Classic Venn, Edwards' Venn and Nested Venn diagrams and allows for eight sets in graph mode and 31 sets in data-processing mode only. In comparison, previous programs produce Classic Venn and Edwards' Venn diagrams and allow for a maximum of six sets. The software incorporates user-friendly features and works in Windows, Linux and Mac OS. Its graphical interface does not require a user to have programming skills. Users can modify diagram content for up to eight datasets because of the Scalable Vector Graphics output. VennPainter can provide output results in vertical, horizontal and matrix formats, which facilitates sharing datasets as required for further identification of candidate genes. Users can obtain gene lists from shared sets by clicking the numbers on the diagram. Thus, VennPainter is an easy-to-use, highly efficient, cross-platform and powerful program that provides a more comprehensive tool for identifying candidate genes and visualizing the relationships among genes or gene families in comparative analysis.

  10. Dose Calculation Accuracy of the Monte Carlo Algorithm for CyberKnife Compared with Other Commercially Available Dose Calculation Algorithms

    International Nuclear Information System (INIS)

    Sharma, Subhash; Ott, Joseph; Williams, Jamone; Dickow, Danny

    2011-01-01

    Monte Carlo dose calculation algorithms have the potential for greater accuracy than traditional model-based algorithms. This enhanced accuracy is particularly evident in regions of lateral scatter disequilibrium, which can develop during treatments incorporating small field sizes and low-density tissue. A heterogeneous slab phantom was used to evaluate the accuracy of several commercially available dose calculation algorithms, including Monte Carlo dose calculation for CyberKnife, the Analytical Anisotropic Algorithm and Pencil Beam convolution for the Eclipse planning system, and convolution-superposition for the Xio planning system. The phantom accommodated slabs of varying density; comparisons between planned and measured dose distributions were accomplished with radiochromic film. The Monte Carlo algorithm provided the most accurate comparison between planned and measured dose distributions. In each phantom irradiation, the Monte Carlo predictions resulted in gamma analysis comparisons >97%, using acceptance criteria of 3% dose and 3-mm distance to agreement. In general, the gamma analysis comparisons for the other algorithms were <95%. The Monte Carlo dose calculation algorithm for CyberKnife provides more accurate dose distribution calculations in regions of lateral electron disequilibrium than commercially available model-based algorithms. This is primarily because of the ability of Monte Carlo algorithms to implicitly account for tissue heterogeneities; density scaling functions and/or effective depth correction factors are not required.

  11. Comparison of classification algorithms for various methods of preprocessing radar images of the MSTAR base

    Science.gov (United States)

    Borodinov, A. A.; Myasnikov, V. V.

    2018-04-01

    The present work is devoted to comparing the accuracy of known classification algorithms in the task of recognizing local objects in radar images for various image preprocessing methods. Preprocessing involves speckle noise filtering and normalization of the object orientation in the image by the method of image moments and by a method based on the Hough transform. In the comparison, the following classification algorithms are used: decision tree, support vector machine, AdaBoost and random forest. Principal component analysis is used to reduce the dimensionality. The research is carried out on objects from the MSTAR radar image database. The paper presents the results of the conducted studies.

  12. Theoretic derivation of directed acyclic subgraph algorithm and comparisons with message passing algorithm

    Science.gov (United States)

    Ha, Jeongmok; Jeong, Hong

    2016-07-01

    This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods but at a competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between the DAS and message passing algorithms. Moreover, the performances of DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.

  13. Comparison between dynamic programming and genetic algorithm for hydro unit economic load dispatch

    Directory of Open Access Journals (Sweden)

    Bin Xu

    2014-10-01

    Full Text Available The hydro unit economic load dispatch (ELD) is of great importance in energy conservation and emission reduction. Dynamic programming (DP) and genetic algorithms (GA) are two representative algorithms for solving ELD problems. The goal of this study was to examine the performance of DP and GA when applied to ELD. We established numerical experiments to conduct performance comparisons between DP and GA with two given schemes. The schemes included comparing the CPU time of the algorithms when they had the same solution quality, and comparing the solution quality when they had the same CPU time. The numerical experiments were applied to the Three Gorges Reservoir in China, which is equipped with 26 hydro generation units. We found the relation between the performance of the algorithms and the number of units through experiments. Results show that GA is adept at searching for optimal solutions in low-dimensional cases. In some cases, such as with fewer than 10 units, GA's performance is superior to that of a coarse-grid DP. However, GA loses its superiority in high-dimensional cases. DP is powerful in obtaining stable and high-quality solutions. Its performance can be maintained even while searching over a large solution space. Nevertheless, due to its exhaustive enumerating nature, it costs excess time in low-dimensional cases.

  14. Algorithms imaging tests comparison following the first febrile urinary tract infection in children.

    Science.gov (United States)

    Tombesi, María M; Alconcher, Laura F; Lucarelli, Lucas; Ciccioli, Agustina

    2017-08-01

    To compare the diagnostic sensitivity, costs and radiation doses of the imaging test algorithms developed by the Argentine Society of Pediatrics in 2003 and 2015 against British and American guidelines after the first febrile urinary tract infection (UTI). Inclusion criteria: children ≤ 2 years old with their first febrile UTI and normal ultrasound, voiding cystourethrography and dimercaptosuccinic acid scintigraphy, according to the algorithm established by the Argentine Society of Pediatrics in 2003, treated between 2003 and 2010. The comparisons between algorithms were carried out through retrospective simulation. Eighty (80) patients met the inclusion criteria; 51 (63%) had vesicoureteral reflux (VUR); 6% of the cases were severe. Renal scarring was observed in 6 patients (7.5%). Cost: ARS 404,000. Radiation: 160 millisieverts. With the Argentine Society of Pediatrics' algorithm developed in 2015, the diagnosis of 4 VURs and 2 cases of renal scarring would have been missed. The cost of this omission would have been ARS 301,800 and 124 millisieverts of radiation. British and American guidelines would have missed the diagnosis of all VURs and all cases of renal scarring, with related costs of ARS 23,000 and ARS 40,000, respectively, and no radiation. Intensive protocols are highly sensitive to VUR and renal scarring, but they imply high costs and doses of radiation, and result in questionable benefits. Sociedad Argentina de Pediatría

  15. A comparison of an algorithm for automated sequential beam orientation selection (Cycle) with simulated annealing

    International Nuclear Information System (INIS)

    Woudstra, Evert; Heijmen, Ben J M; Storchi, Pascal R M

    2008-01-01

    Some time ago we developed and published a new deterministic algorithm (called Cycle) for automatic selection of beam orientations in radiotherapy. This algorithm is a plan generation process aiming at the prescribed PTV dose within hard dose and dose-volume constraints. The algorithm allows a large number of input orientations to be used and selects only the most efficient orientations, surviving the selection process. Efficiency is determined by a score function and is more or less equal to the extent of uninhibited access to the PTV for a specific beam during the selection process. In this paper we compare the capabilities of fast-simulated annealing (FSA) and Cycle for cases where local optima are supposed to be present. Five pancreas and five oesophagus cases previously treated in our institute were selected for this comparison. Plans were generated for FSA and Cycle, using the same hard dose and dose-volume constraints, and the largest possible achieved PTV doses as obtained from these algorithms were compared. The largest achieved PTV dose values were generally very similar for the two algorithms. In some cases FSA resulted in a slightly higher PTV dose than Cycle, at the cost of switching on substantially more beam orientations than Cycle. In other cases, when Cycle generated the solution with the highest PTV dose using only a limited number of non-zero weight beams, FSA seemed to have some difficulty in switching off the unfavourable directions. Cycle was faster than FSA, especially for large-dimensional feasible spaces. In conclusion, for the cases studied in this paper, we have found that despite the inherent drawback of sequential search as used by Cycle (where Cycle could probably get trapped in a local optimum), Cycle is nevertheless able to find comparable or sometimes slightly better treatment plans in comparison with FSA (which in theory finds the global optimum) especially in large-dimensional beam weight spaces

  16. Comparison of soil moisture retrieval algorithms based on the synergy between SMAP and SMOS-IC

    Science.gov (United States)

    Ebrahimi-Khusfi, Mohsen; Alavipanah, Seyed Kazem; Hamzeh, Saeid; Amiraslani, Farshad; Neysani Samany, Najmeh; Wigneron, Jean-Pierre

    2018-05-01

    This study was carried out to evaluate possible improvements of the soil moisture (SM) retrievals from the SMAP observations, based on the synergy between SMAP and SMOS. We assessed the impacts of the vegetation and soil roughness parameters on SM retrievals from SMAP observations. To do so, the effects of three key input parameters including the vegetation optical depth (VOD), effective scattering albedo (ω) and soil roughness (HR) parameters were assessed with the emphasis on the synergy with the VOD product derived from SMOS-IC, a new and simpler version of the SMOS algorithm, over two years of data (April 2015 to April 2017). First, a comprehensive comparison of seven SM retrieval algorithms was made to find the best one for SM retrievals from the SMAP observations. All results were evaluated against in situ measurements over 548 stations from the International Soil Moisture Network (ISMN) in terms of four statistical metrics: correlation coefficient (R), root mean square error (RMSE), bias and unbiased RMSE (UbRMSE). The comparison of seven SM retrieval algorithms showed that the dual channel algorithm based on the additional use of the SMOS-IC VOD product (selected algorithm) led to the best results of SM retrievals over 378, 399, 330 and 271 stations (out of a total of 548 stations) in terms of R, RMSE, UbRMSE and both R & UbRMSE, respectively. Moreover, comparing the measured and retrieved SM values showed that this synergy approach led to an increase in median R value from 0.6 to 0.65 and a decrease in median UbRMSE from 0.09 m3/m3 to 0.06 m3/m3. Second, using the algorithm selected in a first step and defined above, the ω and HR parameters were calibrated over 218 rather homogenous ISMN stations. 72 combinations of various values of ω and HR were used for the calibration over different land cover classes. In this calibration process, the optimal values of ω and HR were found for the different land cover classes. The obtained results indicated that the

  17. An Algorithmic Comparison of the Hyper-Reduction and the Discrete Empirical Interpolation Method for a Nonlinear Thermal Problem

    Directory of Open Access Journals (Sweden)

    Felix Fritzen

    2018-02-01

    Full Text Available A novel algorithmic discussion of the methodological and numerical differences of competing parametric model reduction techniques for nonlinear problems is presented. First, the Galerkin reduced basis (RB) formulation is presented, which fails at providing significant gains with respect to the computational efficiency for nonlinear problems. Renowned methods for the reduction of the computing time of nonlinear reduced order models are the Hyper-Reduction and the (Discrete) Empirical Interpolation Method (EIM, DEIM). An algorithmic description and a methodological comparison of both methods are provided. The accuracy of the predictions of the hyper-reduced model and the (D)EIM in comparison to the Galerkin RB is investigated. All three approaches are applied to a simple uncertainty quantification of a planar nonlinear thermal conduction problem. The results are compared to computationally intense finite element simulations.

  18. Phase-Retrieval Uncertainty Estimation and Algorithm Comparison for the JWST-ISIM Test Campaign

    Science.gov (United States)

    Aronstein, David L.; Smith, J. Scott

    2016-01-01

    Phase retrieval, the process of determining the exit-pupil wavefront of an optical instrument from image-plane intensity measurements, is the baseline methodology for characterizing the wavefront for the suite of science instruments (SIs) in the Integrated Science Instrument Module (ISIM) for the James Webb Space Telescope (JWST). JWST is a large, infrared space telescope with a 6.5-meter diameter primary mirror. JWST is currently NASA's flagship mission and will be the premier space observatory of the next decade. ISIM contains four optical benches with nine unique instruments, including redundancies. ISIM was characterized at the Goddard Space Flight Center (GSFC) in Greenbelt, MD in a series of cryogenic vacuum tests using a telescope simulator. During these tests, phase-retrieval algorithms were used to characterize the instruments. The objective of this paper is to describe the Monte-Carlo simulations that were used to establish uncertainties (i.e., error bars) for the wavefronts of the various instruments in ISIM. Multiple retrieval algorithms were used in the analysis of ISIM phase-retrieval focus-sweep data, including an iterative-transform algorithm and a nonlinear optimization algorithm. These algorithms emphasize the recovery of numerous optical parameters, including low-order wavefront composition described by Zernike polynomial terms and high-order wavefront described by a point-by-point map, location of instrument best focus, focal ratio, exit-pupil amplitude, the morphology of any extended object, and optical jitter. The secondary objective of this paper is to report on the relative accuracies of these algorithms for the ISIM instrument tests, and a comparison of their computational complexity and their performance on central and graphical processing unit clusters. From a phase-retrieval perspective, the ISIM test campaign includes a variety of source illumination bandwidths, various image-plane sampling criteria above and below the Nyquist-Shannon

  19. A comparison of two adaptive algorithms for the control of active engine mounts

    Science.gov (United States)

    Hillis, A. J.; Harrison, A. J. L.; Stoten, D. P.

    2005-08-01

    This paper describes work conducted in order to control automotive active engine mounts, consisting of a conventional passive mount and an internal electromagnetic actuator. Active engine mounts seek to cancel the oscillatory forces generated by the rotation of out-of-balance masses within the engine. The actuator generates a force dependent on a control signal from an algorithm implemented with a real-time DSP. The filtered-x least-mean-square (FXLMS) adaptive filter is used as a benchmark for comparison with a new implementation of the error-driven minimal controller synthesis (Er-MCSI) adaptive controller. Both algorithms are applied to an active mount fitted to a saloon car equipped with a four-cylinder turbo-diesel engine, and have no a priori knowledge of the system dynamics. The steady-state and transient performance of the two algorithms are compared and the relative merits of the two approaches are discussed. The Er-MCSI strategy offers significant computational advantages as it requires no cancellation path modelling. The Er-MCSI controller is found to perform in a fashion similar to the FXLMS filter—typically reducing chassis vibration by 50-90% under normal driving conditions.
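
    A compact single-channel FXLMS sketch of the benchmark controller described above, written with NumPy; the secondary-path model s_hat is assumed given and, in this toy simulation only, also acts as the true actuator-to-sensor path:

```python
import numpy as np

def fxlms(x, d, s_hat, n_taps=32, mu=1e-3):
    """Filtered-x LMS sketch for a single-channel active mount.

    x : reference signal correlated with the engine disturbance
    d : disturbance measured at the error sensor (vibration to cancel)
    Returns the residual error signal.
    """
    w = np.zeros(n_taps)              # adaptive controller taps
    x_buf = np.zeros(n_taps)          # recent reference samples
    fx_buf = np.zeros(n_taps)         # reference filtered through s_hat
    y_buf = np.zeros(len(s_hat))      # recent actuator outputs
    s_state = np.zeros(len(s_hat))
    e = np.zeros(len(x))
    for n in range(len(x)):
        x_buf = np.roll(x_buf, 1); x_buf[0] = x[n]
        y = w @ x_buf                                  # control signal sent to the actuator
        y_buf = np.roll(y_buf, 1); y_buf[0] = y
        e[n] = d[n] - s_hat @ y_buf                    # residual vibration at the sensor
        s_state = np.roll(s_state, 1); s_state[0] = x[n]
        fx_buf = np.roll(fx_buf, 1); fx_buf[0] = s_hat @ s_state  # filtered reference sample
        w += mu * e[n] * fx_buf                        # FXLMS weight update
    return e
```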

  20. Comparison and combination of NLPQL and MOGA algorithms for a marine medium-speed diesel engine optimisation

    International Nuclear Information System (INIS)

    Hu, Nao; Zhou, Peilin; Yang, Jianguo

    2017-01-01

    Highlights: • NLPQL algorithm is not effective when used for the optimisation of seven engine parameters. • MOGA algorithm is time consuming but offers broader and finer solutions. • A better design is offered by NLPQL algorithm when using a start point from MOGA. • SOI has the dominant and clearly opposite effects on NOx and SFOC. • Late injection, low swirl and large spray angle can lower NOx and soot simultaneously. - Abstract: Seven engine design parameters were investigated by use of the NLPQL algorithm and MOGA, separately and together. Detailed comparisons were made on NOx, soot, SFOC, and also on the design parameters. Results indicate that the NLPQL algorithm failed to approach optimal designs while MOGA offered more and better feasible Pareto designs. Then, an optimal design obtained by MOGA which has the trade-off between NOx and soot was set as the starting point of the NLPQL algorithm. In this situation, an even better design with lower NOx and soot was approached. Combustion processes of the optimal designs were also disclosed and compared in detail. Late injection and small swirl were considered to be the main reasons for reducing NOx. In the end, RSM contour maps were applied in order to gain a better understanding of the sensitivity of the input parameters on NOx, soot and SFOC.

  1. Image microarrays derived from tissue microarrays (IMA-TMA): New resource for computer-aided diagnostic algorithm development

    Directory of Open Access Journals (Sweden)

    Jennifer A Hipp

    2012-01-01

    Full Text Available Background: Conventional tissue microarrays (TMAs) consist of cores of tissue inserted into a recipient paraffin block such that a tissue section on a single glass slide can contain numerous patient samples in a spatially structured pattern. Scanning TMAs into digital slides for subsequent analysis by computer-aided diagnostic (CAD) algorithms offers the possibility of evaluating candidate algorithms against a near-complete repertoire of variable disease morphologies. This parallel interrogation approach simplifies the evaluation, validation, and comparison of such candidate algorithms. Recently developed digital tools, digital core (dCORE) and image microarray maker (iMAM), enable the capture of uniformly sized and resolution-matched images, with these representing key morphologic features and fields of view, aggregated into a single monolithic digital image file in an array format, which we define as an image microarray (IMA). We further define the TMA-IMA construct as IMA-based images derived from whole slide images of TMAs themselves. Methods: Here we describe the first combined use of the previously described dCORE and iMAM tools, toward the goal of generating a higher-order image construct, with multiple TMA cores from multiple distinct conventional TMAs assembled as a single digital image montage. This image construct served as the basis for carrying out a massively parallel image analysis exercise, based on the use of the previously described spatially invariant vector quantization (SIVQ) algorithm. Results: Multicase, multifield TMA-IMAs of follicular lymphoma and follicular hyperplasia were separately rendered, using the aforementioned tools. Each of these two IMAs contained a distinct spectrum of morphologic heterogeneity with respect to both tingible body macrophage (TBM) appearance and apoptotic body morphology. SIVQ-based pattern matching, with ring vectors selected to screen for either tingible body macrophages or apoptotic

  2. A novel line segment detection algorithm based on graph search

    Science.gov (United States)

    Zhao, Hong-dan; Liu, Guo-ying; Song, Xu

    2018-02-01

    To overcome the problem of extracting line segments from an image, a line segment detection method based on the graph search algorithm is proposed. After obtaining the edge detection result of the image, the candidate straight line segments are obtained in four directions. For the candidate straight line segments, their adjacency relationships are depicted by a graph model, based on which the depth-first search algorithm is employed to determine how many adjacent line segments need to be merged. Finally, the least squares method is used to fit the detected straight lines. The comparative experimental results verify that the proposed algorithm achieves better results than the line segment detector (LSD).
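
    A rough sketch of the merge-and-fit stage described above, assuming candidate segments and an adjacency list are already available (the paper's own adjacency criterion is not reproduced here, and near-vertical groups would need a different fit parameterization):

```python
import numpy as np
from collections import defaultdict

def merge_and_fit(segments, adjacency):
    """Merge adjacent candidate segments with a depth-first search and fit each
    merged group with a least-squares line.

    segments  : list of (N_i x 2) arrays of edge-pixel coordinates (x, y)
    adjacency : iterable of (i, j) index pairs marking segments judged adjacent
    Returns a list of (slope, intercept) fits, one per merged group.
    """
    graph = defaultdict(set)
    for i, j in adjacency:
        graph[i].add(j); graph[j].add(i)

    seen, fits = set(), []
    for start in range(len(segments)):
        if start in seen:
            continue
        stack, group = [start], []
        while stack:                       # iterative depth-first search
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node); group.append(node)
            stack.extend(graph[node] - seen)
        pts = np.vstack([segments[k] for k in group])
        slope, intercept = np.polyfit(pts[:, 0], pts[:, 1], 1)  # least-squares y = slope*x + intercept
        fits.append((slope, intercept))
    return fits
```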

  3. Hybrid Projected Gradient-Evolutionary Search Algorithm for Mixed Integer Nonlinear Optimization Problems

    National Research Council Canada - National Science Library

    Homaifar, Abdollah; Esterline, Albert; Kimiaghalam, Bahram

    2005-01-01

    The Hybrid Projected Gradient-Evolutionary Search Algorithm (HPGES) algorithm uses a specially designed evolutionary-based global search strategy to efficiently create candidate solutions in the solution space...

  4. Comparison of evolutionary computation algorithms for solving bi ...

    Indian Academy of Sciences (India)

    failure probability. Multiobjective Evolutionary Computation algorithms (MOEAs) are well-suited for multiobjective task scheduling in a heterogeneous environment. The two Multi-Objective Evolutionary Algorithms, the Multiobjective Genetic Algorithm (MOGA) and Multiobjective Evolutionary Programming (MOEP), with.

  5. Planet Candidate Validation in K2 Crowded Fields

    Science.gov (United States)

    Rampalli, Rayna; Vanderburg, Andrew; Latham, David; Quinn, Samuel

    2018-01-01

    In just three years, the K2 mission has yielded some remarkable outcomes with the discovery of over 100 confirmed planets and 500 reported planet candidates to be validated. One challenge with this mission is the search for planets located in star-crowded regions. Campaign 13 is one such example, located towards the galactic plane in the constellation of Taurus. We subject the potential planetary candidates to a validation process involving spectroscopy to derive certain stellar parameters. Seeing-limited on/off imaging follow-up is also utilized in order to rule out false positives due to nearby eclipsing binaries. Using Markov chain Monte Carlo analysis, the best-fit parameters for each candidate are generated. These will be suitable for finding a candidate’s false positive probability through methods including feeding such parameters into the Validation of Exoplanet Signals using a Probabilistic Algorithm (VESPA). These techniques and results serve as important tools for conducting candidate validation and follow-up observations for space-based missions such as the upcoming TESS mission since TESS’s large camera pixels resemble K2’s star-crowded fields.

  6. A comparison between genetic algorithms and neural networks for optimizing fuel recharges in BWR

    International Nuclear Information System (INIS)

    Ortiz J, J.; Requena, I.

    2002-01-01

    In this work, the results of a genetic algorithm (AG) and a recurrent multi-state neural network (RNRME) for optimizing the fuel reloads of 5 cycles of the Laguna Verde nuclear power plant (CNLV) are presented. The fuel reloads obtained by both methods are compared, and it was observed that the RNRME creates better fuel distributions than the AG. Moreover, a comparison of the usefulness of each technique is made. (Author)

  7. A comparison of performance measures for online algorithms

    DEFF Research Database (Denmark)

    Boyar, Joan; Irani, Sandy; Larsen, Kim Skak

    2009-01-01

    is to balance greediness and adaptability. We examine how these measures evaluate the Greedy Algorithm and Lazy Double Coverage, commonly studied algorithms in the context of server problems. We examine Competitive Analysis, the Max/Max Ratio, the Random Order Ratio, Bijective Analysis and Relative Worst Order Analysis and determine how they compare the two algorithms. We find that by the Max/Max Ratio and Bijective Analysis, Greedy is the better algorithm. Under the other measures, Lazy Double Coverage is better, though Relative Worst Order Analysis indicates that Greedy is sometimes better. Our results also provide the first proof of optimality of an algorithm under Relative Worst Order Analysis.

  8. Efficient sequential and parallel algorithms for finding edit distance based motifs.

    Science.gov (United States)

    Pal, Soumitra; Xiao, Peng; Rajasekaran, Sanguthevar

    2016-08-18

    Motif search is an important step in extracting meaningful patterns from biological data. The general problem of motif search is intractable and there is a pressing need to develop efficient, exact and approximation algorithms to solve this problem. In this paper, we present several novel, exact, sequential and parallel algorithms for solving the (l,d) Edit-distance-based Motif Search (EMS) problem: given two integers l,d and n biological strings, find all strings of length l that appear in each input string with at most d errors of types substitution, insertion and deletion. One popular technique to solve the problem is to explore for each input string the set of all possible l-mers that belong to the d-neighborhood of any substring of the input string and output those which are common for all input strings. We introduce a novel and provably efficient neighborhood exploration technique. We show that it is enough to consider the candidates in the neighborhood which are at a distance exactly d. We compactly represent these candidate motifs using wildcard characters and efficiently explore them with very few repetitions. Our sequential algorithm uses a trie based data structure to efficiently store and sort the candidate motifs. Our parallel algorithm in a multi-core shared memory setting uses arrays for storing and a novel modification of radix-sort for sorting the candidate motifs. The algorithms for EMS are customarily evaluated on several challenging instances such as (8,1), (12,2), (16,3), (20,4), and so on. The best previously known algorithm, EMS1, is sequential and solves instances up to (16,3) in an estimated 3 days. Our sequential algorithms are more than 20 times faster on (16,3). On other hard instances such as (9,2), (11,3), (13,4), our algorithms are much faster. Our parallel algorithm achieves more than 600% scaling performance when using 16 threads. Our algorithms have pushed up the state-of-the-art of EMS solvers and we believe that the techniques introduced in
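
    For very small (l, d) instances, the EMS problem can be illustrated with a brute-force check of every l-mer against all input strings; this is only a didactic sketch, not the wildcard/trie technique of the paper:

```python
from itertools import product

ALPHABET = "ACGT"

def within_edit_distance(motif, text, d):
    """True if some substring of `text` is within edit distance d of `motif`
    (semi-global DP: the match may start and end anywhere in `text`)."""
    m = len(motif)
    prev = list(range(m + 1))                # motif prefix vs empty substring
    best = prev[m]
    for ch in text:
        cur = [0]                            # free start: skip any text prefix at no cost
        for j in range(1, m + 1):
            cur.append(min(prev[j] + 1,      # skip a text character
                           cur[j - 1] + 1,   # skip a motif character
                           prev[j - 1] + (motif[j - 1] != ch)))  # substitute / match
        prev = cur
        best = min(best, prev[m])
    return best <= d

def ems_bruteforce(strings, l, d):
    """Tiny EMS solver for small (l, d): test every l-mer over ACGT against all strings."""
    return [''.join(c) for c in product(ALPHABET, repeat=l)
            if all(within_edit_distance(''.join(c), s, d) for s in strings)]
```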

  9. Comparison of SeaWinds Backscatter Imaging Algorithms

    Science.gov (United States)

    Long, David G.

    2017-01-01

    This paper compares the performance and tradeoffs of various backscatter imaging algorithms for the SeaWinds scatterometer when multiple passes over a target are available. Reconstruction methods are compared with conventional gridding algorithms. In particular, the performance and tradeoffs in conventional ‘drop in the bucket’ (DIB) gridding at the intrinsic sensor resolution are compared to high-spatial-resolution imaging algorithms such as fine-resolution DIB and the scatterometer image reconstruction (SIR) that generate enhanced-resolution backscatter images. Various options for each algorithm are explored, including considering both linear and dB computation. The effects of sampling density and reconstruction quality versus time are explored. Both simulated and actual data results are considered. The results demonstrate the effectiveness of high-resolution reconstruction using SIR as well as its limitations and the limitations of DIB and fDIB. PMID:28828143
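
    A minimal 'drop in the bucket' gridding sketch, assuming NumPy arrays of measurement coordinates and sigma-0 values and averaging in linear power space (one of the computation options discussed above; the SIR reconstruction itself is not shown):

```python
import numpy as np

def drop_in_the_bucket(lat, lon, sigma0_db, grid_lat, grid_lon):
    """Average all sigma-0 measurements whose centre falls in each grid cell.

    Averaging is done in linear power space and converted back to dB;
    empty cells come out as NaN.
    """
    lin = 10.0 ** (sigma0_db / 10.0)                       # dB -> linear power
    i = np.digitize(lat, grid_lat) - 1                     # row index of each measurement
    j = np.digitize(lon, grid_lon) - 1                     # column index
    acc = np.zeros((len(grid_lat) - 1, len(grid_lon) - 1))
    cnt = np.zeros_like(acc)
    ok = (i >= 0) & (i < acc.shape[0]) & (j >= 0) & (j < acc.shape[1])
    np.add.at(acc, (i[ok], j[ok]), lin[ok])
    np.add.at(cnt, (i[ok], j[ok]), 1.0)
    with np.errstate(divide="ignore", invalid="ignore"):
        return 10.0 * np.log10(acc / cnt)
```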

  10. Wind Turbines Support Techniques during Frequency Drops — Energy Utilization Comparison

    Directory of Open Access Journals (Sweden)

    Ayman B. Attya

    2014-08-01

    Full Text Available The supportive role of wind turbines during frequency drops is still not clear enough, although there are many proposed algorithms. Most of the offered techniques make the wind turbine deviate from optimum power generation operation to special operation modes, to guarantee the availability of reasonable power support when the system suffers frequency deviations. This paper summarizes the most dominant support algorithms and derives wind turbine power curves for each one. It also conducts a comparison from the point of view of wasted energy, with respect to optimum power generation. The authors confirm the advantage of a frequency support algorithm they previously presented, as it achieves lower amounts of wasted energy. This analysis is performed in two locations that are promising candidates for hosting wind farms in Egypt. Additionally, two different types of wind turbines from two different manufacturers are integrated. Matlab and Simulink are the implemented simulation environments.

  11. Quantum algorithm for association rules mining

    Science.gov (United States)

    Yu, Chao-Hua; Gao, Fei; Wang, Qing-Le; Wen, Qiao-Yan

    2016-10-01

    Association rules mining (ARM) is one of the most important problems in knowledge discovery and data mining. Given a transaction database that has a large number of transactions and items, the task of ARM is to acquire consumption habits of customers by discovering the relationships between itemsets (sets of items). In this paper, we address ARM in the quantum settings and propose a quantum algorithm for the key part of ARM, finding frequent itemsets from the candidate itemsets and acquiring their supports. Specifically, for the case in which there are Mf(k) frequent k-itemsets in the Mc(k) candidate k-itemsets (Mf(k) ≤ Mc(k)), our algorithm can efficiently mine these frequent k-itemsets and estimate their supports by using parallel amplitude estimation and amplitude amplification with complexity O(k√(Mc(k)Mf(k))/ε), where ε is the error for estimating the supports. Compared with the classical counterpart, i.e., the classical sampling-based algorithm, whose complexity is O(k·Mc(k)/ε²), our quantum algorithm quadratically improves the dependence on both ε and Mc(k) in the best case when Mf(k) ≪ Mc(k), and on ε alone in the worst case when Mf(k) ≈ Mc(k).

  12. Dynamic Programming Algorithms in Speech Recognition

    Directory of Open Access Journals (Sweden)

    Titus Felix FURTUNA

    2008-01-01

    Full Text Available In an isolated-word speech recognition system, recognition requires comparing the input signal of the word with the various words of the dictionary. The problem can be solved efficiently by a dynamic comparison algorithm whose goal is to bring the temporal scales of the two words into optimal correspondence. An algorithm of this type is Dynamic Time Warping. This paper presents two alternative implementations of the algorithm designed for isolated-word recognition.
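
    A textbook dynamic time warping implementation in Python for reference; the recognized word is the dictionary entry with the smallest DTW distance to the input sequence (this is the generic algorithm, not either of the paper's two specific implementations):

```python
import numpy as np

def dtw_distance(x, y):
    """Dynamic time warping between two feature sequences (e.g. frames of MFCCs).

    x, y : arrays of shape (n, d) and (m, d); returns the cumulative alignment cost.
    """
    n, m = len(x), len(y)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = np.linalg.norm(x[i - 1] - y[j - 1])     # local frame distance
            D[i, j] = cost + min(D[i - 1, j],              # insertion
                                 D[i, j - 1],              # deletion
                                 D[i - 1, j - 1])          # match
    return D[n, m]

# Recognition: pick the template word with the smallest DTW distance
# best_word = min(dictionary, key=lambda w: dtw_distance(input_features, dictionary[w]))
```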

  13. Comparison of Pilot Symbol Embedded Channel Estimation Algorithms

    Directory of Open Access Journals (Sweden)

    P. Kadlec

    2009-12-01

    Full Text Available In the paper, algorithms for pilot symbol embedded channel estimation are compared. Attention is turned to the Least Square (LS) channel estimation and the Sliding Correlator (SC) algorithm. Both algorithms are implemented in Matlab to estimate the Channel Impulse Response (CIR) of a channel exhibiting multi-path propagation. The algorithms are compared in terms of computational demands and the influence of Additive White Gaussian Noise (AWGN), the embedded pilot symbol, and the computed CIR on the estimation error.
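
    A minimal least-squares CIR estimate from an embedded pilot sequence, assuming NumPy and a known pilot block (a generic sketch, not the paper's Matlab code):

```python
import numpy as np

def ls_channel_estimate(pilots_tx, pilots_rx, cir_length):
    """Least-squares channel impulse response estimate from known pilots.

    pilots_tx : known transmitted pilot symbols
    pilots_rx : corresponding received samples
    Builds the convolution matrix X so that pilots_rx ~ X @ h and solves for h.
    """
    pilots_tx = np.asarray(pilots_tx, dtype=complex)
    n = len(pilots_tx)
    X = np.zeros((n, cir_length), dtype=complex)
    for k in range(cir_length):
        X[k:, k] = pilots_tx[:n - k]                   # delayed copies of the pilot sequence
    h, *_ = np.linalg.lstsq(X, pilots_rx, rcond=None)  # LS solution h = (X^H X)^-1 X^H y
    return h
```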

  14. Online co-regularized algorithms

    NARCIS (Netherlands)

    Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.

    2012-01-01

    We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks

  15. Identification of alternative splice variants in Aspergillus flavus through comparison of multiple tandem MS search algorithms

    Directory of Open Access Journals (Sweden)

    Chang Kung-Yen

    2011-07-01

    Full Text Available Abstract. Background: Database searching is the most frequently used approach for automated peptide assignment and protein inference of tandem mass spectra. The results, however, depend on the sequences in target databases and on search algorithms. Recently, by using an alternative splicing database, we identified more proteins than with the annotated proteins in Aspergillus flavus. In this study, we aimed at finding a greater number of eligible splice variants based on newly available transcript sequences and the latest genome annotation. The improved database was then used to compare four search algorithms: Mascot, OMSSA, X! Tandem, and InsPecT. Results: The updated alternative splicing database predicted 15833 putative protein variants, 61% more than the previous results. There was transcript evidence for 50% of the updated genes compared to the previous 35% coverage. Database searches were conducted using the same set of spectral data, search parameters, and protein database but with different algorithms. The false discovery rates of the peptide-spectrum matches were estimated. Conclusions: We were able to detect dozens of new peptides using the improved alternative splicing database with the recently updated annotation of the A. flavus genome. Unlike the identifications of the peptides and the RefSeq proteins, large variations existed between the putative splice variants identified by different algorithms. Twelve candidate putative isoforms were reported based on the consensus peptide-spectrum matches. This suggests that applications of multiple search engines effectively reduced the possible false positive results and validated the protein identifications from tandem mass spectra using an alternative splicing database.

  16. ISINA: INTEGRAL Source Identification Network Algorithm

    Science.gov (United States)

    Scaringi, S.; Bird, A. J.; Clark, D. J.; Dean, A. J.; Hill, A. B.; McBride, V. A.; Shaw, S. E.

    2008-11-01

    We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using random forests, is applied to the IBIS/ISGRI data set in order to ease the production of unbiased future soft gamma-ray source catalogues. First, we introduce the data set and the problems encountered when dealing with images obtained using the coded mask technique. The initial step of source candidate searching is introduced and an initial candidate list is created. A description of the feature extraction on the initial candidate list is then performed together with feature merging for these candidates. Three training and testing sets are created in order to deal with the diverse time-scales encountered when dealing with the gamma-ray sky. Three independent random forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described: the transient matrix. Finally the performance of the network is assessed and discussed using the testing set and some illustrative source examples. Based on observations with INTEGRAL, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Spain), Czech Republic and Poland, and the participation of Russia and the USA. E-mail: simo@astro.soton.ac.uk
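
    As an illustration of the random-forest classification step, a hedged scikit-learn sketch; the file names and features below are hypothetical placeholders, and ISINA actually trains three separate forests (faint persistent, strong persistent and transient sources) with its own feature extraction:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Hypothetical inputs: one row per source candidate (e.g. significance, persistence
# and variability measures extracted from the mosaics), labels 1 = real source,
# 0 = artefact, taken from a hand-screened training set.
X_train = np.load("candidate_features_train.npy")   # assumed file name
y_train = np.load("candidate_labels_train.npy")     # assumed file name
X_new = np.load("candidate_features_new.npy")       # assumed file name

forest = RandomForestClassifier(n_estimators=500, random_state=0)
forest.fit(X_train, y_train)
p_real = forest.predict_proba(X_new)[:, 1]          # probability each candidate is a real source
ranked = np.argsort(p_real)[::-1]                    # inspect the most promising candidates first
```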

  17. Comparison of phase unwrapping algorithms for topography reconstruction based on digital speckle pattern interferometry

    Science.gov (United States)

    Li, Yuanbo; Cui, Xiaoqian; Wang, Hongbei; Zhao, Mengge; Ding, Hongbin

    2017-10-01

    Digital speckle pattern interferometry (DSPI) can diagnose topography evolution in a real-time, continuous and non-destructive manner, and has been considered one of the most promising techniques for topography diagnostics of Plasma-Facing Components (PFCs) under the complicated environment of a tokamak. It is important for the study of digital speckle pattern interferometry to enhance speckle patterns and obtain the real topography of the ablated crater. In this paper, two kinds of numerical models based on the flood-fill algorithm have been developed to obtain the real profile by unwrapping the wrapped phase of the speckle interference pattern, which can be calculated from four intensity images by means of the 4-step phase-shifting technique. During phase unwrapping by means of the flood-fill algorithm, noise pollution and other inevitable factors lead to poor quality of the reconstruction results, which affects the authenticity of the restored topography. The calculation of quality parameters was therefore introduced to obtain a quality map from the wrapped phase map, and this work presents two different methods to calculate the quality parameters. The quality parameters are then used to guide the path of the flood-fill algorithm, and pixels with good quality parameters are given priority in the calculation, so that the quality of the speckle interference pattern reconstruction results is improved. According to the comparison between the flood-fill algorithm suitable for speckle pattern interferometry and the quality-guided flood-fill algorithm (with two different calculation approaches), the errors caused by noise pollution and fringe discontinuities were successfully reduced.
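
    A simplified quality-guided flood-fill unwrapping sketch, assuming a wrapped phase map and a per-pixel quality map are already available (the two quality-parameter definitions of the paper are not reproduced here):

```python
import heapq
import numpy as np

def quality_guided_unwrap(wrapped, quality):
    """Quality-guided flood-fill phase unwrapping (simplified).

    wrapped : wrapped phase map in radians
    quality : per-pixel quality map (higher = more reliable)
    Pixels are unwrapped in order of decreasing quality, each relative to an
    already-unwrapped neighbour.
    """
    h, w = wrapped.shape
    unwrapped = wrapped.copy()
    done = np.zeros((h, w), dtype=bool)
    seed = np.unravel_index(np.argmax(quality), quality.shape)
    done[seed] = True
    heap = []

    def push_neighbours(i, j):
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ni, nj = i + di, j + dj
            if 0 <= ni < h and 0 <= nj < w and not done[ni, nj]:
                # negative quality so the highest-quality pixel pops first
                heapq.heappush(heap, (-quality[ni, nj], ni, nj, i, j))

    push_neighbours(*seed)
    while heap:
        _, i, j, pi, pj = heapq.heappop(heap)
        if done[i, j]:
            continue
        diff = wrapped[i, j] - unwrapped[pi, pj]
        # add the multiple of 2*pi that brings the pixel closest to its neighbour
        unwrapped[i, j] = wrapped[i, j] - 2 * np.pi * np.round(diff / (2 * np.pi))
        done[i, j] = True
        push_neighbours(i, j)
    return unwrapped
```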

  18. Influence on dose calculation by difference of dose calculation algorithms in stereotactic lung irradiation. Comparison of pencil beam convolution (inhomogeneity correction: batho power law) and analytical anisotropic algorithm

    International Nuclear Information System (INIS)

    Tachibana, Masayuki; Noguchi, Yoshitaka; Fukunaga, Jyunichi; Hirano, Naomi; Yoshidome, Satoshi; Hirose, Takaaki

    2009-01-01

    In a stereotactic lung irradiation study, the monitor unit (MU) was calculated by pencil beam convolution (inhomogeneity correction algorithm: Batho power law) [PBC (BPL)], the measurement-based dose calculation algorithm used in the past. The recalculation was done with the analytical anisotropic algorithm (AAA), a dose calculation algorithm based on theoretical data. The MU values calculated by PBC (BPL) and AAA were compared for each field. In the comparison of 1031 fields in 136 cases, the MU calculated by PBC (BPL) was about 2% smaller than that calculated by AAA. This difference depends on whether the calculation accounts for the spread of secondary electrons. In particular, the difference in MU is influenced by the X-ray energy. For the same X-ray energy, the difference in MU increases when the irradiation field size is small, the lung path length is long, the lung path length percentage is large, and the CT value of the lung is low. (author)

  19. An Enhanced Artificial Bee Colony Algorithm with Solution Acceptance Rule and Probabilistic Multisearch.

    Science.gov (United States)

    Yurtkuran, Alkın; Emel, Erdal

    2016-01-01

    The artificial bee colony (ABC) algorithm is a popular swarm based technique, which is inspired from the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of ABC algorithm, namely, enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA) to address global optimization problems. A new solution acceptance rule is proposed where, instead of greedy selection between old solution and new candidate solution, worse candidate solutions have a probability to be accepted. Additionally, the acceptance probability of worse candidates is nonlinearly decreased throughout the search process adaptively. Moreover, in order to improve the performance of the ABC and balance the intensification and diversification, a probabilistic multisearch strategy is presented. Three different search equations with distinctive characters are employed using predetermined search probabilities. By implementing a new solution acceptance rule and a probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions by comparing against novel ABC variants, as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to state-of-the-art algorithms proposed in the literature.

  20. An Enhanced Artificial Bee Colony Algorithm with Solution Acceptance Rule and Probabilistic Multisearch

    Directory of Open Access Journals (Sweden)

    Alkın Yurtkuran

    2016-01-01

    Full Text Available The artificial bee colony (ABC) algorithm is a popular swarm based technique, which is inspired from the intelligent foraging behavior of honeybee swarms. This paper proposes a new variant of ABC algorithm, namely, enhanced ABC with solution acceptance rule and probabilistic multisearch (ABC-SA) to address global optimization problems. A new solution acceptance rule is proposed where, instead of greedy selection between old solution and new candidate solution, worse candidate solutions have a probability to be accepted. Additionally, the acceptance probability of worse candidates is nonlinearly decreased throughout the search process adaptively. Moreover, in order to improve the performance of the ABC and balance the intensification and diversification, a probabilistic multisearch strategy is presented. Three different search equations with distinctive characters are employed using predetermined search probabilities. By implementing a new solution acceptance rule and a probabilistic multisearch approach, the intensification and diversification performance of the ABC algorithm is improved. The proposed algorithm has been tested on well-known benchmark functions of varying dimensions by comparing against novel ABC variants, as well as several recent state-of-the-art algorithms. Computational results show that the proposed ABC-SA outperforms other ABC variants and is superior to state-of-the-art algorithms proposed in the literature.
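
    A hedged sketch of the two ingredients named above, the solution acceptance rule and the probabilistic multisearch selection; the initial probability p0 and the quadratic decay are illustrative choices, not the paper's exact schedule:

```python
import random

def accept(old_cost, new_cost, iteration, max_iter, p0=0.3):
    """ABC-SA style acceptance: better candidates are always accepted, while worse
    candidates are accepted with a probability that decreases nonlinearly over the
    search (illustrative schedule, not the paper's exact one)."""
    if new_cost <= old_cost:
        return True
    p_worse = p0 * (1.0 - iteration / max_iter) ** 2   # nonlinearly decreasing acceptance probability
    return random.random() < p_worse

def pick_search_equation(probs=(0.5, 0.3, 0.2)):
    """Probabilistic multisearch: select one of three search equations with preset probabilities
    (equation names are placeholders for the three update rules used in the paper)."""
    return random.choices(["eq_standard", "eq_gbest", "eq_rand_to_best"], weights=probs, k=1)[0]
```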

  1. A feature extraction algorithm based on corner and spots in self-driving vehicles

    Directory of Open Access Journals (Sweden)

    Yupeng FENG

    2017-06-01

    Full Text Available To solve the poor real-time performance of visual odometry on embedded systems with limited computing resources, an image matching method based on Harris and SIFT, namely the Harris-SIFT algorithm, is proposed. On the basis of a review of the SIFT algorithm, the principle of the Harris-SIFT algorithm is provided. First, the Harris algorithm is used to extract the corners of the image as candidate feature points, and scale invariant feature transform (SIFT) features are extracted from those candidate feature points. Finally, the algorithm is simulated in Matlab through an example, and its complexity and other aspects of its performance are analyzed. The experimental results show that the proposed method reduces the computational complexity and improves the speed of feature extraction. The Harris-SIFT algorithm can be used in real-time visual odometry systems and will promote a wide application of visual odometry in embedded navigation systems.
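
    A short OpenCV sketch of the Harris-SIFT idea, computing SIFT descriptors only at Harris corners; it assumes an OpenCV build that ships SIFT (4.4+ or opencv-contrib), and the thresholds are illustrative rather than the authors' settings:

```python
import cv2
import numpy as np

def harris_sift_features(gray, harris_thresh=0.01):
    """Harris corners as candidate keypoints, SIFT descriptors computed only there.

    gray : single-channel uint8 image. Returns (keypoints, descriptors).
    """
    response = cv2.cornerHarris(np.float32(gray), blockSize=2, ksize=3, k=0.04)
    ys, xs = np.where(response > harris_thresh * response.max())   # candidate corner pixels
    keypoints = [cv2.KeyPoint(float(x), float(y), 7) for x, y in zip(xs, ys)]
    sift = cv2.SIFT_create()
    keypoints, descriptors = sift.compute(gray, keypoints)          # describe, do not re-detect
    return keypoints, descriptors
```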

  2. Performance comparison of weighted sum-minimum mean square error and virtual signal-to-interference plus noise ratio algorithms in simulated and measured channels

    DEFF Research Database (Denmark)

    Rahimi, Maryam; Nielsen, Jesper Ødum; Pedersen, Troels

    2014-01-01

    A comparison of the achievable data rate between two well-known algorithms with simulated and real measured data is presented. The algorithms maximise the data rate in a cooperative base station (BS) multiple-input single-output scenario. The weighted sum-minimum mean square error algorithm could be used in multiple-input multiple-output scenarios, but it has lower performance than the virtual signal-to-interference plus noise ratio algorithm in theory and practice. A real measurement environment consisting of two BSs and two users has been studied to evaluate the simulation results.

  3. Comparison of the efficiency of two algorithms which solve the shortest path problem with an emotional agent

    Directory of Open Access Journals (Sweden)

    Petruseva Silvana

    2006-01-01

    Full Text Available This paper discusses the comparison of the efficiency of two algorithms, by estimation of their complexity. For solving the problem, the Neural Network Crossbar Adaptive Array (NN-CAA) is used as the agent architecture, implementing a model of an emotion. The problem discussed is how to find the shortest path in an environment with n states. The domains concerned are environments with n states, one of which is the starting state, one is the goal state, and some states are undesirable and should be avoided. It is found that finding one path (one solution) is efficient, i.e. achievable in polynomial time by both algorithms. One of the algorithms is faster than the other only in the multiplicative constant, and it shows a step forward toward the optimality of the learning process. However, finding the optimal solution (the shortest path) by both algorithms takes exponential time, which is asserted by two theorems. It might be concluded that the concept of subgoal is one step forward toward the optimality of the agent learning process. Yet, it should be explored further in order to obtain an efficient, polynomial algorithm.

  4. FUNDAMENTAL PROPERTIES OF KEPLER PLANET-CANDIDATE HOST STARS USING ASTEROSEISMOLOGY

    Energy Technology Data Exchange (ETDEWEB)

    Huber, Daniel; Lissauer, Jack J.; Rowe, Jason F. [NASA Ames Research Center, Moffett Field, CA 94035 (United States); Chaplin, William J. [School of Physics and Astronomy, University of Birmingham, Birmingham B15 2TT (United Kingdom); Christensen-Dalsgaard, Jorgen; Kjeldsen, Hans; Handberg, Rasmus; Karoff, Christoffer; Lund, Mikkel N.; Lundkvist, Mia [Stellar Astrophysics Centre, Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C (Denmark); Gilliland, Ronald L. [Center for Exoplanets and Habitable Worlds, The Pennsylvania State University, 525 Davey Lab, University Park, PA 16802 (United States); Buchhave, Lars A. [Niels Bohr Institute, University of Copenhagen, DK-2100 Copenhagen (Denmark); Fischer, Debra A.; Basu, Sarbani [Department of Astronomy, Yale University, New Haven, CT 06511 (United States); Sanchis-Ojeda, Roberto [Department of Physics, Massachusetts Institute of Technology, 77 Massachusetts Ave., Cambridge, MA 02139 (United States); Hekker, Saskia [Astronomical Institute ' ' Anton Pannekoek' ' , University of Amsterdam, Science Park 904, 1098 XH Amsterdam (Netherlands); Howard, Andrew W. [Institute for Astronomy, University of Hawaii, 2680 Woodlawn Drive, Honolulu, HI 96822 (United States); Isaacson, Howard; Marcy, Geoffrey W. [Department of Astronomy, University of California, Berkeley, CA 94720 (United States); Latham, David W., E-mail: daniel.huber@nasa.gov [Harvard-Smithsonian Center for Astrophysics, Cambridge, MA 02138 (United States); and others

    2013-04-20

    We have used asteroseismology to determine fundamental properties for 66 Kepler planet-candidate host stars, with typical uncertainties of 3% and 7% in radius and mass, respectively. The results include new asteroseismic solutions for four host stars with confirmed planets (Kepler-4, Kepler-14, Kepler-23 and Kepler-25) and increase the total number of Kepler host stars with asteroseismic solutions to 77. A comparison with stellar properties in the planet-candidate catalog by Batalha et al. shows that radii for subgiants and giants obtained from spectroscopic follow-up are systematically too low by up to a factor of 1.5, while the properties for unevolved stars are in good agreement. We furthermore apply asteroseismology to confirm that a large majority of cool main-sequence hosts are indeed dwarfs and not misclassified giants. Using the revised stellar properties, we recalculate the radii for 107 planet candidates in our sample, and comment on candidates for which the radii change from a previously giant-planet/brown-dwarf/stellar regime to a sub-Jupiter size or vice versa. A comparison of stellar densities from asteroseismology with densities derived from transit models in Batalha et al. assuming circular orbits shows significant disagreement for more than half of the sample due to systematics in the modeled impact parameters or due to planet candidates that may be in eccentric orbits. Finally, we investigate tentative correlations between host-star masses and planet-candidate radii, orbital periods, and multiplicity, but caution that these results may be influenced by the small sample size and detection biases.

  5. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    1999-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint ranking algorithm for learning Optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and

  6. Empirical tests of the Gradual Learning Algorithm

    NARCIS (Netherlands)

    Boersma, P.; Hayes, B.

    2001-01-01

    The Gradual Learning Algorithm (Boersma 1997) is a constraint-ranking algorithm for learning optimality-theoretic grammars. The purpose of this article is to assess the capabilities of the Gradual Learning Algorithm, particularly in comparison with the Constraint Demotion algorithm of Tesar and

  7. Comparison of Algorithms for the Optimal Location of Control Valves for Leakage Reduction in WDNs

    Directory of Open Access Journals (Sweden)

    Enrico Creaco

    2018-04-01

    Full Text Available The paper presents the comparison of two different algorithms for the optimal location of control valves for leakage reduction in water distribution networks (WDNs). The former is based on the sequential addition (SA) of control valves. At the generic step Nval of SA, the search for the optimal combination of Nval valves is carried out, while retaining the optimal combination of Nval − 1 valves found at the previous step. Therefore, only one new valve location is searched for at each step of SA, among all the remaining available locations. The latter algorithm consists of a multi-objective genetic algorithm (GA), in which valve locations are encoded inside individual genes. For the sake of consistency, the same embedded algorithm, based on iterated linear programming (LP), was used inside SA and GA, to search for the optimal valve settings at various time slots in the day. The results of applications to two WDNs show that SA and GA yield identical results for small values of Nval. When this number grows, the limitations of SA, related to its reduced exploration of the search space, emerge. In fact, for higher values of Nval, SA tends to produce less beneficial valve locations in terms of leakage abatement. However, the smaller computation time of SA may make this algorithm preferable in the case of large WDNs, for which the application of GA would be overly burdensome.

  8. Comparison of 3D Maximum A Posteriori and Filtered Backprojection algorithms for high resolution animal imaging in microPET

    International Nuclear Information System (INIS)

    Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.

    2000-01-01

    We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.

  9. Automatic design of decision-tree induction algorithms tailored to flexible-receptor docking data.

    Science.gov (United States)

    Barros, Rodrigo C; Winck, Ana T; Machado, Karina S; Basgalupp, Márcio P; de Carvalho, André C P L F; Ruiz, Duncan D; de Souza, Osmar Norberto

    2012-11-21

    This paper addresses the prediction of the free energy of binding of a drug candidate with enzyme InhA associated with Mycobacterium tuberculosis. This problem is found within rational drug design, where interactions between drug candidates and target proteins are verified through molecular docking simulations. In this application, it is important not only to correctly predict the free energy of binding, but also to provide a comprehensible model that could be validated by a domain specialist. Decision-tree induction algorithms have been successfully used in drug-design related applications, specially considering that decision trees are simple to understand, interpret, and validate. There are several decision-tree induction algorithms available for general-use, but each one has a bias that makes it more suitable for a particular data distribution. In this article, we propose and investigate the automatic design of decision-tree induction algorithms tailored to particular drug-enzyme binding data sets. We investigate the performance of our new method for evaluating binding conformations of different drug candidates to InhA, and we analyze our findings with respect to decision tree accuracy, comprehensibility, and biological relevance. The empirical analysis indicates that our method is capable of automatically generating decision-tree induction algorithms that significantly outperform the traditional C4.5 algorithm with respect to both accuracy and comprehensibility. In addition, we provide the biological interpretation of the rules generated by our approach, reinforcing the importance of comprehensible predictive models in this particular bioinformatics application. We conclude that automatically designing a decision-tree algorithm tailored to molecular docking data is a promising alternative for the prediction of the free energy from the binding of a drug candidate with a flexible-receptor.

  10. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3n/2 - 2 is the solution to the above ...

  11. An Early Fire Detection Algorithm Using IP Cameras

    Directory of Open Access Journals (Sweden)

    Hector Perez-Meana

    2012-05-01

    Full Text Available The presence of smoke is the first symptom of fire; therefore, to achieve early fire detection, accurate and quick estimation of the presence of smoke is very important. In this paper we propose an algorithm to detect the presence of smoke using video sequences captured by Internet Protocol (IP) cameras, in which important features of smoke, such as color, motion and growth properties, are employed. For efficient smoke detection in the IP camera platform, a detection algorithm must operate directly in the Discrete Cosine Transform (DCT) domain to reduce computational cost, avoiding the complete decoding process required for algorithms that operate in the spatial domain. In the proposed algorithm the DCT inter-transformation technique is used to increase the detection accuracy without an inverse DCT operation. In the proposed scheme, firstly the candidate smoke regions are estimated using motion and color smoke properties; next, the noise is reduced using morphological operations. Finally, the growth properties of the candidate smoke regions are further analyzed through time using the connected component labeling technique. Evaluation results show that a feasible smoke detection method with false negative and false positive error rates approximately equal to 4% and 2%, respectively, is obtained.
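
    A spatial-domain sketch of the candidate-region step (motion plus low-saturation colour, morphological cleaning, connected components); note the paper's own method operates in the DCT domain, which is not reproduced here, and the thresholds below are illustrative:

```python
import cv2
import numpy as np

def smoke_candidates(prev_gray, frame_bgr, motion_thresh=15):
    """Rough candidate-region stats: moving pixels whose colour is close to grey."""
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    motion = cv2.absdiff(gray, prev_gray) > motion_thresh          # motion cue
    bgr = frame_bgr.astype(np.int16)
    greyish = (np.abs(bgr[..., 0] - bgr[..., 1]) < 20) & \
              (np.abs(bgr[..., 1] - bgr[..., 2]) < 20)             # low-saturation (smoke-like) colour cue
    mask = (motion & greyish).astype(np.uint8) * 255
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, kernel)          # remove isolated noise pixels
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)   # label candidate regions
    regions = [stats[i] for i in range(1, n) if stats[i, cv2.CC_STAT_AREA] > 50]
    return gray, regions   # gray is reused as prev_gray for the next frame
```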

  12. SINGLE-LINED SPECTROSCOPIC BINARY STAR CANDIDATES IN THE RAVE SURVEY

    International Nuclear Information System (INIS)

    Matijevic, G.; Zwitter, T.; Bienayme, O.; Siebert, A.; Watson, F. G.; Bland-Hawthorn, J.; Parker, Q. A.; Freeman, K. C.; Gilmore, G.; Grebel, E. K.; Helmi, A.; Munari, U.; Siviero, A.; Navarro, J. F.; Reid, W.; Seabroke, G. M.; Steinmetz, M.; Williams, M.; Wyse, R. F. G.

    2011-01-01

    Repeated spectroscopic observations of stars in the RAdial Velocity Experiment (RAVE) database are used to identify and examine single-lined binary (SB1) candidates. The latest RAVE internal database (VDR3) includes radial velocities, atmospheric parameters, and other parameters for approximately a quarter of a million different stars with slightly less than 300,000 observations. In the sample of ∼20,000 stars observed more than once, 1333 stars with variable radial velocities were identified. Most of them are believed to be SB1 candidates. The fraction of SB1 candidates among stars with several observations is between 10% and 15%, which is the lower limit for binarity among RAVE stars. Because the distribution of time spans between re-observations is biased toward relatively short timescales (days to weeks), the periods of the identified SB1 candidates are most likely in the same range. Because of RAVE's narrow magnitude range, most of the dwarf candidates belong to the thin Galactic disk while the giants are part of the thick disk with distances extending up to a few kpc. The comparison of the list of SB1 candidates to the VSX catalog of variable stars yielded several pulsating variables among the giant population with radial velocity variations of up to a few tens of km/s. There are 26 matches between the catalog of spectroscopic binary orbits (SB9) and the whole RAVE sample for which the given periastron time and the time of RAVE observation were close enough to yield a reliable comparison. RAVE measurements of radial velocities of known spectroscopic binaries are consistent with their published radial velocity curves.

  13. Integrative analysis to select cancer candidate biomarkers to targeted validation

    Science.gov (United States)

    Heberle, Henry; Domingues, Romênia R.; Granato, Daniela C.; Yokoo, Sami; Canevarolo, Rafael R.; Winck, Flavia V.; Ribeiro, Ana Carolina P.; Brandão, Thaís Bianca; Filgueiras, Paulo R.; Cruz, Karen S. P.; Barbuto, José Alexandre; Poppi, Ronei J.; Minghim, Rosane; Telles, Guilherme P.; Fonseca, Felipe Paiva; Fox, Jay W.; Santos-Silva, Alan R.; Coletta, Ricardo D.; Sherman, Nicholas E.; Paes Leme, Adriana F.

    2015-01-01

    Targeted proteomics has flourished as the method of choice for prospecting for and validating potential candidate biomarkers in many diseases. However, challenges still remain due to the lack of standardized routines that can prioritize a limited number of proteins to be further validated in human samples. To help researchers identify candidate biomarkers that best characterize their samples under study, a well-designed integrative analysis pipeline, comprising MS-based discovery, feature selection methods, clustering techniques, bioinformatic analyses and targeted approaches was performed using discovery-based proteomic data from the secretomes of three classes of human cell lines (carcinoma, melanoma and non-cancerous). Three feature selection algorithms, namely, Beta-binomial, Nearest Shrunken Centroids (NSC), and Support Vector Machine-Recursive Features Elimination (SVM-RFE), indicated a panel of 137 candidate biomarkers for carcinoma and 271 for melanoma, which were differentially abundant between the tumor classes. We further tested the strength of the pipeline in selecting candidate biomarkers by immunoblotting, human tissue microarrays, label-free targeted MS and functional experiments. In conclusion, the proposed integrative analysis was able to pre-qualify and prioritize candidate biomarkers from discovery-based proteomics to targeted MS. PMID:26540631

  14. Algorithmic and user study of an autocompletion algorithm on a large medical vocabulary.

    Science.gov (United States)

    Sevenster, Merlijn; van Ommering, Rob; Qian, Yuechen

    2012-02-01

    Autocompletion supports human-computer interaction in software applications that let users enter textual data. We will be inspired by the use case in which medical professionals enter ontology concepts, catering to the ongoing demand for structured and standardized data in medicine. The goal is to give an algorithmic analysis of one particular autocompletion algorithm, called the multi-prefix matching algorithm, which suggests terms whose words' prefixes contain all words in the string typed by the user, e.g., in this sense, opt ner me matches optic nerve meningioma. Second, we aim to investigate how well it supports users entering concepts from a large and comprehensive medical vocabulary (SNOMED CT). We give a concise description of the multi-prefix algorithm, and sketch how it can be optimized to meet the required response time. Performance will be compared to a baseline algorithm, which gives suggestions that extend the string typed by the user to the right, e.g. optic nerve m gives optic nerve meningioma, but opt ner me does not. We conduct a user experiment in which 12 participants are invited to complete 40 SNOMED CT terms with the baseline algorithm and another set of 40 SNOMED CT terms with the multi-prefix algorithm. Our results show that users need significantly fewer keystrokes when supported by the multi-prefix algorithm than when supported by the baseline algorithm. The proposed algorithm is a competitive candidate for searching and retrieving terms from a large medical ontology. Copyright © 2011 Elsevier Inc. All rights reserved.
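
    A small illustration of multi-prefix matching versus plain right-extension matching (a didactic sketch; the optimized data structures discussed in the paper are omitted):

```python
def multi_prefix_match(query, terms):
    """A term matches if every word typed by the user is a prefix of some distinct
    word of the term, e.g. "opt ner me" matches "optic nerve meningioma", which a
    plain right-extension match would miss.
    """
    def matches(term):
        words = term.lower().split()
        for q in query.lower().split():
            hit = next((i for i, w in enumerate(words) if w.startswith(q)), None)
            if hit is None:
                return False
            words.pop(hit)                 # each term word may satisfy only one query word
        return True
    return [t for t in terms if matches(t)]

# Example: suggestions for a partially typed ontology concept
print(multi_prefix_match("opt ner me", ["optic nerve meningioma", "optic neuritis"]))
# -> ['optic nerve meningioma']
```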

  15. Automated Lead Optimization of MMP-12 Inhibitors Using a Genetic Algorithm.

    Science.gov (United States)

    Pickett, Stephen D; Green, Darren V S; Hunt, David L; Pardoe, David A; Hughes, Ian

    2011-01-13

    Traditional lead optimization projects involve long synthesis and testing cycles, favoring extensive structure-activity relationship (SAR) analysis and molecular design steps, in an attempt to limit the number of cycles that a project must run to optimize a development candidate. Microfluidic-based chemistry and biology platforms, with cycle times of minutes rather than weeks, lend themselves to unattended autonomous operation. The bottleneck in the lead optimization process is therefore shifted from synthesis or test to SAR analysis and design. As such, the way is open to an algorithm-directed process, without the need for detailed user data analysis. Here, we present results of two synthesis and screening experiments, undertaken using traditional methodology, to validate a genetic algorithm optimization process for future application to a microfluidic system. The algorithm has several novel features that are important for the intended application. For example, it is robust to missing data and can suggest compounds for retest to ensure reliability of optimization. The algorithm is first validated on a retrospective analysis of an in-house library embedded in a larger virtual array of presumed inactive compounds. In a second, prospective experiment with MMP-12 as the target protein, 140 compounds are submitted for synthesis over 10 cycles of optimization. Comparison is made to the results from the full combinatorial library that was synthesized manually and tested independently. The results show that compounds selected by the algorithm are heavily biased toward the more active regions of the library, while the algorithm is robust to both missing data (compounds where synthesis failed) and inactive compounds. This publication places the full combinatorial library and biological data into the public domain with the intention of advancing research into algorithm-directed lead optimization methods.

  16. Development, evaluation, and selection of candidate high-level waste forms

    International Nuclear Information System (INIS)

    Bernadzikowski, T.A.; Allender, J.S.; Gordon, D.E.; Gould, T.H. Jr.

    1982-01-01

    The seven candidate waste forms, evaluated as potential media for the immobilization and geologic disposal of high-level nuclear wastes, were borosilicate glass, SYNROC, tailored ceramic, high-silica glass, FUETAP concrete, coated sol-gel particles, and glass marbles in a lead matrix. The evaluation, completed on August 1, 1981, combined preliminary waste form evaluations conducted at Department of Energy (DOE) defense waste sites and at independent laboratories, peer review assessments, a product performance evaluation, and a processability analysis. Based on the combined results of these four inputs, two of the seven forms, borosilicate glass and a titanate-based ceramic, SYNROC, were selected as the reference and alternative forms, respectively, for continued development and evaluation in the National HLW Program. The borosilicate glass and ceramic forms were further compared during FY-1982 on the basis of risk assessments, cost comparisons, properties comparisons, and conformance with proposed regulatory and repository criteria. Both the glass and ceramic forms are viable candidates for use at DOE defense HLW sites; they are also candidates for immobilization of commercial reprocessing wastes. This paper describes the waste form screening process, discusses each of the four major inputs considered in the selection of the two forms in 1981, and presents a brief summary of the comparisons of the two forms during 1982 and the selection process to determine the final form for SRP defense HLW.

  17. A COMPARISON OF HAZE REMOVAL ALGORITHMS AND THEIR IMPACTS ON CLASSIFICATION ACCURACY FOR LANDSAT IMAGERY

    Directory of Open Access Journals (Sweden)

    Yang Xiao

    Full Text Available The quality of Landsat images in humid areas is considerably degraded by haze in terms of their spectral response pattern, which limits the possibility of applications using the visible and near-infrared bands. A variety of haze removal algorithms have been proposed to correct these unsatisfactory illumination effects caused by haze contamination. The purpose of this study was to illustrate the difference between two major algorithms (the improved homomorphic filtering (HF) and the virtual cloud point (VCP)) in their effectiveness at solving spatially varying haze contamination, and to evaluate the impacts of haze removal on land cover classification. A case study exploiting large quantities of Landsat TM images and climate conditions (clear and hazy) in the most humid areas in China proved that both haze removal algorithms perform well in processing Landsat images contaminated by haze. The outcome of the application of VCP appears to be more similar to the reference images compared to HF. Moreover, the Landsat image with VCP haze removal can improve the classification accuracy effectively in comparison to that without haze removal, especially in the cloud-contaminated area.

  18. Comparison of tracking algorithms implemented in OpenCV

    Directory of Open Access Journals (Sweden)

    Janku Peter

    2016-01-01

    Full Text Available Computer vision is a very progressive and modern part of computer science. From a scientific point of view, theoretical aspects of computer vision algorithms prevail in many papers and publications. The underlying theory is really important, but on the other hand, the final implementation of an algorithm significantly affects its performance and robustness. For this reason, this paper tries to compare real implementations of tracking algorithms (one part of the computer vision problem) which can be found in the very popular OpenCV library. Moreover, possibilities for optimization are discussed.

  19. Comparison of two (geometric) algorithms for auto OMA

    DEFF Research Database (Denmark)

    Juul, Martin; Olsen, Peter; Balling, Ole

    2018-01-01

    parameters. The two algorithms are compared and illustrated on simulated data. Different choices of distance measures are discussed and evaluated. It is illustrated how a simple distance measure outperforms traditional distance measures from other Auto OMA algorithms. Traditional measures are unable...

  20. Global Optimization of a Periodic System using a Genetic Algorithm

    Science.gov (United States)

    Stucke, David; Crespi, Vincent

    2001-03-01

    We use a novel application of a genetic algorithm global optimization technique to find the lowest energy structures for periodic systems. We apply this technique to binary and ternary colloidal crystals of several different stoichiometries. This application of a genetic algorithm is described and results of likely candidate structures are presented.

  1. Night-Time Vehicle Detection Algorithm Based on Visual Saliency and Deep Learning

    Directory of Open Access Journals (Sweden)

    Yingfeng Cai

    2016-01-01

    Full Text Available Night vision systems are receiving more and more attention in the field of automotive active safety. In this area, a number of researchers have proposed far-infrared sensor based night-time vehicle detection algorithms. However, existing algorithms have low performance in some indicators such as the detection rate and processing time. To solve this problem, we propose a far-infrared image vehicle detection algorithm based on visual saliency and deep learning. Firstly, most of the nonvehicle pixels are removed with visual saliency computation. Then, vehicle candidates are generated by using prior information such as camera parameters and vehicle size. Finally, a classifier trained with deep belief networks is applied to verify the candidates generated in the last step. The proposed algorithm is tested on around 6000 images and achieves a detection rate of 92.3% at a processing rate of 25 Hz, which is better than existing methods.

  2. Optimal Pid Controller Design Using Adaptive Vurpso Algorithm

    Science.gov (United States)

    Zirkohi, Majid Moradi

    2015-04-01

    The purpose of this paper is to improve the Velocity Update Relaxation Particle Swarm Optimization algorithm (VURPSO). The improved algorithm is called the Adaptive VURPSO (AVURPSO) algorithm. Then, an optimal design of a Proportional-Integral-Derivative (PID) controller is obtained using the AVURPSO algorithm. An adaptive momentum factor is used to regulate the trade-off between the global and the local exploration abilities in the proposed algorithm. This operation helps the system reach the optimal solution quickly and saves computation time. Comparisons on the optimal PID controller design confirm the superiority of the AVURPSO algorithm over the optimization algorithms mentioned in this paper, namely the VURPSO algorithm, the Ant Colony algorithm, and the conventional approach. Comparisons on the speed of convergence confirm that the proposed algorithm converges faster, in less computation time, to a global optimum value. The proposed AVURPSO can be used in diverse areas of optimization problems such as industrial planning, resource allocation, scheduling, decision making, pattern recognition and machine learning. The proposed AVURPSO algorithm is efficiently used to design an optimal PID controller.
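
    The abstract does not give the AVURPSO update equations, so the following sketch only illustrates the general idea of tuning PID gains with a particle swarm whose momentum (inertia) factor decays adaptively over the iterations. The plant model, cost function, search bounds and decay schedule are all placeholders, not the paper's formulation.

```python
import numpy as np

def step_response_cost(gains, dt=0.01, t_end=5.0):
    """Integral of squared error for a unit step on a simple first-order plant
    (placeholder model: dy/dt = -y + u)."""
    kp, ki, kd = gains
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(int(t_end / dt)):
        err = 1.0 - y
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv
        y += (-y + u) * dt          # plant update (Euler step)
        cost += err * err * dt
        prev_err = err
    return cost

rng = np.random.default_rng(0)
n_particles, n_iter = 20, 100
pos = rng.uniform(0.0, 5.0, size=(n_particles, 3))   # (Kp, Ki, Kd)
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_cost = np.array([step_response_cost(p) for p in pos])
gbest = pbest[np.argmin(pbest_cost)].copy()

for it in range(n_iter):
    w = 0.9 - 0.5 * it / n_iter      # adaptive momentum factor (assumed linear decay)
    r1, r2 = rng.random((2, n_particles, 1))
    vel = w * vel + 2.0 * r1 * (pbest - pos) + 2.0 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, 0.0, 10.0)
    cost = np.array([step_response_cost(p) for p in pos])
    better = cost < pbest_cost
    pbest[better], pbest_cost[better] = pos[better], cost[better]
    gbest = pbest[np.argmin(pbest_cost)].copy()

print("tuned gains (Kp, Ki, Kd):", gbest)
```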

  3. Improved target detection algorithm using Fukunaga-Koontz transform and distance classifier correlation filter

    Science.gov (United States)

    Bal, A.; Alam, M. S.; Aslan, M. S.

    2006-05-01

    Often sensor ego-motion or fast target movement causes the target to temporarily go out of the field-of-view, leading to the reappearing-target detection problem in target tracking applications. Since the target goes out of the current frame and reenters at a later frame, the reentering location and the variations in rotation, scale, and other 3D orientations of the target are not known, which complicates detection. A detection algorithm has been developed using the Fukunaga-Koontz Transform (FKT) and a distance classifier correlation filter (DCCF). The detection algorithm uses target and background information, extracted from training samples, to detect possible candidate target images. The detected candidate target images are then introduced into the second algorithm, DCCF, called the clutter rejection module, to determine the target; once the target coordinates are detected, the tracking algorithm is initiated. The performance of the proposed FKT-DCCF based target detection algorithm has been tested using real-world forward looking infrared (FLIR) video sequences.

  4. Comparison of the nucleotide sequence of wild-type hepatitis A virus and its attenuated candidate vaccine derivative

    International Nuclear Information System (INIS)

    Cohen, J.I.; Rosenblum, B.; Ticehurst, J.R.; Daemer, R.; Feinstone, S.; Purcell, R.H.

    1987-01-01

    Development of attenuated mutants for use as vaccines is in progress for other viruses, including influenza, rotavirus, varicella-zoster, cytomegalovirus, and hepatitis-A virus (HAV). Attenuated viruses may be derived from naturally occurring mutants that infect human or nonhuman hosts. Alternatively, attenuated mutants may be generated by passage of wild-type virus in cell culture. Production of attenuated viruses in cell culture is a laborious and empiric process. Despite previous empiric successes, understanding the molecular basis for attenuation of vaccine viruses could facilitate future development and use of live-virus vaccines. Comparison of the complete nucleotide sequences of wild-type (virulent) and vaccine (attenuated) viruses has been reported for polioviruses and yellow fever virus. Here, the authors compare the nucleotide sequence of wild-type HAV HM-175 with that of a candidate vaccine derivative

  5. A multi-frame particle tracking algorithm robust against input noise

    International Nuclear Information System (INIS)

    Li, Dongning; Zhang, Yuanhui; Sun, Yigang; Yan, Wei

    2008-01-01

    The performance of a particle tracking algorithm, which detects particle trajectories from discretely recorded particle positions, can be substantially hindered by input noise. In this paper, a particle tracking algorithm is developed which is robust against input noise. This algorithm employs a regression method, instead of the extrapolation method usually employed by existing algorithms, to predict future particle positions. If a trajectory cannot be linked to a particle at a frame, the algorithm can still proceed by trying to find a candidate at the next frame. The connectivity of tracked trajectories is inspected to remove false ones. The algorithm is validated with synthetic data. The result shows that the algorithm is superior to traditional algorithms at tracking long trajectories
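
    The abstract contrasts regression-based prediction with simple extrapolation; the snippet below is a minimal illustration of that idea, fitting a least-squares line through the last few observed positions of a track instead of extrapolating from the last two frames only. The window length and the synthetic data are placeholders, not the paper's parameters.

```python
import numpy as np

def predict_next_position(track, window=4):
    """Predict the particle position at the next frame by linear regression
    over the last `window` frames; falls back to repeating the last point
    when the track is too short."""
    track = np.asarray(track, dtype=float)      # shape (n_frames, 2): x, y
    if len(track) < 2:
        return track[-1]
    recent = track[-window:]
    frames = np.arange(len(recent))
    next_frame = len(recent)
    coeff_x = np.polyfit(frames, recent[:, 0], 1)
    coeff_y = np.polyfit(frames, recent[:, 1], 1)
    return np.array([np.polyval(coeff_x, next_frame),
                     np.polyval(coeff_y, next_frame)])

# Noisy observations of a particle moving roughly diagonally (synthetic data).
observed = [(0.0, 0.1), (1.1, 0.9), (1.9, 2.1), (3.2, 2.9), (3.9, 4.1)]
print("predicted next position:", predict_next_position(observed))
```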

  6. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    International Nuclear Information System (INIS)

    Cheng Sheng-Yi; Liu Wen-Jin; Chen Shan-Qiu; Dong Li-Zhi; Yang Ping; Xu Bing

    2015-01-01

    Among all kinds of wavefront control algorithms in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through pre-measuring the relational matrix between deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time characteristics and stability. However, as the numbers of sub-apertures in the wavefront sensor and of deformable mirror actuators increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration, which gains a great advantage in calculation and storage. For an AO system with thousands of actuators, the computational complexity estimate is about O(n²) to O(n³) for the direct gradient wavefront control algorithm, while it is about O(n) to O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. And the larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. (paper)
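
    As a rough illustration of why the iterative approach scales better, the sketch below reconstructs actuator voltages from measured slopes with (a) an explicit pseudo-inverse, in the spirit of the direct gradient method, and (b) a simple Landweber/gradient iteration that only needs matrix-vector products. The interaction matrix here is random stand-in data, not a real Hartmann-sensor model, and the iteration count is arbitrary.

```python
import numpy as np

rng = np.random.default_rng(1)
n_slopes, n_actuators = 600, 300
D = rng.normal(size=(n_slopes, n_actuators))      # stand-in interaction matrix
v_true = rng.normal(size=n_actuators)
s = D @ v_true                                    # simulated slope measurements

# Direct approach: precompute the pseudo-inverse (costly for large systems).
v_direct = np.linalg.pinv(D) @ s

# Iterative approach: Landweber/gradient iterations using only mat-vec products.
v_iter = np.zeros(n_actuators)
step = 1.0 / np.linalg.norm(D, 2) ** 2            # safe step size
for _ in range(500):
    v_iter += step * (D.T @ (s - D @ v_iter))

print("direct error   :", np.linalg.norm(v_direct - v_true))
print("iterative error:", np.linalg.norm(v_iter - v_true))
```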

  7. Comparison of candidate materials for a synthetic osteo-odonto keratoprosthesis device.

    Science.gov (United States)

    Tan, Xiao Wei; Perera, A Promoda P; Tan, Anna; Tan, Donald; Khor, K A; Beuerman, Roger W; Mehta, Jodhbir S

    2011-01-05

    Osteo-odonto keratoprosthesis is one of the most successful forms of keratoprosthesis surgery for end-stage corneal and ocular surface disease. There is a lack of detailed comparison studies on the biocompatibilities of different materials used in keratoprosthesis. The aim of this investigation was to compare synthetic bioinert materials used for keratoprosthesis surgery with hydroxyapatite (HA) as a reference. Test materials were sintered titanium oxide (TiO(2)), aluminum oxide (Al(2)O(3)), and yttria-stabilized zirconia (YSZ) with density >95%. Bacterial adhesion on the substrates was evaluated using scanning electron microscopy and the spread plate method. Surface properties of the implant discs were scanned using optical microscopy. Human keratocyte attachment and proliferation rates were assessed by cell counting and MTT assay at different time points. Morphologic analysis and immunoblotting were used to evaluate focal adhesion formation, whereas cell adhesion force was measured with a multimode atomic force microscope. The authors found that bacterial adhesion on the TiO(2), Al(2)O(3), and YSZ surfaces was lower than that on HA substrates. TiO(2) significantly promoted keratocyte proliferation and viability compared with HA, Al(2)O(3), and YSZ. Immunofluorescent imaging analyses, immunoblotting, and atomic force microscope measurement revealed that TiO(2) surfaces enhanced cell spreading and cell adhesion compared with HA and Al(2)O(3). TiO(2) is the most suitable replacement candidate for use as skirt material because it enhanced cell functions and reduced bacterial adhesion. This would, in turn, enhance tissue integration and reduce device failure rates during keratoprosthesis surgery.

  8. The Fuzzy MCDM Algorithms for the M&A Due Diligence

    Directory of Open Access Journals (Sweden)

    Chung-Tsen Tsao

    2008-04-01

    Full Text Available An M&A due diligence is the process in which one of the parties to the transaction undertakes to investigate the other in order to judge whether to go forward with the transaction on the terms proposed. It encompasses missions in three phases: searching for and preliminarily screening potential candidates, evaluating the candidates and deciding on the target, and assisting the after-transaction integration. This work suggests using a Fuzzy Multiple Criteria Decision Making approach (Fuzzy MCDM) and develops detailed algorithms to carry out the second-phase task. The MCDM approach facilitates the analysis and integration of information from different aspects and criteria. The theory of Fuzzy Sets can include qualitative information in addition to quantitative information. In the developed algorithms the evaluators' subjective judgments are expressed in linguistic terms, which can better reflect human intuitive thought than quantitative scores. These linguistic judgments are transformed into fuzzy numbers and then synthesized with quantitative financial figures. The order of candidates can be ranked after defuzzification. The acquiring firm can then work out a more specific study, including pricing and costing, on the priority candidates so as to decide on the target.
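
    The paper's detailed algorithms are not reproduced in the abstract; the sketch below only illustrates the general pattern it describes, mapping linguistic judgments to triangular fuzzy numbers, averaging them with criterion weights, and ranking candidates by centroid defuzzification. The linguistic scale, criterion weights, and candidate scores are invented for illustration.

```python
import numpy as np

# Hypothetical linguistic scale mapped to triangular fuzzy numbers (low, mid, high).
SCALE = {
    "poor":      (0.0, 0.1, 0.3),
    "fair":      (0.2, 0.4, 0.6),
    "good":      (0.5, 0.7, 0.9),
    "excellent": (0.8, 0.9, 1.0),
}

def centroid(tfn):
    """Defuzzify a triangular fuzzy number by its centroid."""
    return sum(tfn) / 3.0

def score(judgments, weights):
    """Weighted average of the fuzzy judgments over all criteria, then defuzzified."""
    total = np.zeros(3)
    for term, w in zip(judgments, weights):
        total += w * np.asarray(SCALE[term])
    return centroid(total / sum(weights))

# Invented judgments of three acquisition candidates on three criteria
# (e.g. strategic fit, financial health, integration risk) with criterion weights.
weights = [0.5, 0.3, 0.2]
candidates = {
    "Candidate A": ["good", "excellent", "fair"],
    "Candidate B": ["excellent", "fair", "good"],
    "Candidate C": ["fair", "good", "poor"],
}

ranking = sorted(candidates, key=lambda c: score(candidates[c], weights), reverse=True)
for name in ranking:
    print(f"{name}: {score(candidates[name], weights):.3f}")
```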

  9. An efficient non-dominated sorting method for evolutionary algorithms.

    Science.gov (United States)

    Fang, Hongbing; Wang, Qian; Tu, Yi-Cheng; Horstemeyer, Mark F

    2008-01-01

    We present a new non-dominated sorting algorithm to generate the non-dominated fronts in multi-objective optimization with evolutionary algorithms, particularly the NSGA-II. The non-dominated sorting algorithm used by NSGA-II has a time complexity of O(MN²) in generating non-dominated fronts in one generation (iteration) for a population size N and M objective functions. Since generating non-dominated fronts takes the majority of total computational time (excluding the cost of fitness evaluations) of NSGA-II, making this algorithm faster will significantly improve the overall efficiency of NSGA-II and other genetic algorithms using non-dominated sorting. The new non-dominated sorting algorithm proposed in this study reduces the number of redundant comparisons existing in the algorithm of NSGA-II by recording the dominance information among solutions from their first comparisons. By utilizing a new data structure called the dominance tree and the divide-and-conquer mechanism, the new algorithm is faster than NSGA-II for different numbers of objective functions. Although the number of solution comparisons by the proposed algorithm is close to that of NSGA-II when the number of objectives becomes large, the total computational time shows that the proposed algorithm still has better efficiency because of the adoption of the dominance tree structure and the divide-and-conquer mechanism.
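
    The dominance-tree data structure itself is not described in the abstract, so the snippet below only shows the baseline behaviour being improved upon: a straightforward O(MN²) fast non-dominated sort of a small objective matrix, with each pairwise dominance comparison recorded once. The population is synthetic.

```python
import numpy as np

def dominates(a, b):
    """True if solution a dominates b (all objectives minimized)."""
    return np.all(a <= b) and np.any(a < b)

def non_dominated_sort(objectives):
    """Baseline fast non-dominated sort (as in NSGA-II); returns fronts of indices."""
    n = len(objectives)
    dominated_by = [[] for _ in range(n)]        # solutions that i dominates
    domination_count = np.zeros(n, dtype=int)    # how many solutions dominate i
    for i in range(n):
        for j in range(i + 1, n):
            if dominates(objectives[i], objectives[j]):
                dominated_by[i].append(j)
                domination_count[j] += 1
            elif dominates(objectives[j], objectives[i]):
                dominated_by[j].append(i)
                domination_count[i] += 1
    fronts, current = [], [i for i in range(n) if domination_count[i] == 0]
    while current:
        fronts.append(current)
        nxt = []
        for i in current:
            for j in dominated_by[i]:
                domination_count[j] -= 1
                if domination_count[j] == 0:
                    nxt.append(j)
        current = nxt
    return fronts

# Small synthetic population with two objectives to minimize.
pop = np.array([[1.0, 5.0], [2.0, 3.0], [4.0, 1.0], [3.0, 4.0], [5.0, 5.0]])
print(non_dominated_sort(pop))
```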

  10. Planetary transit candidates in the CoRoT-SRc01 field

    DEFF Research Database (Denmark)

    Erikson, A.; Santerne, A.; Renner, S.

    2012-01-01

    Context. CoRoT is a pioneering space mission whose primary goals are stellar seismology and extrasolar planets search. Its surveys of large stellar fields generate numerous planetary candidates whose lightcurves have transit-like features. An extensive analytical and observational follow-up effort...... is undertaken to classify these candidates. Aims. We present the list of planetary transit candidates from the CoRoT LRa01 star field in the Monoceros constellation toward the Galactic anti-center direction. The CoRoT observations of LRa01 lasted from 24 October 2007 to 3 March 2008. Methods. We acquired...... and analyzed 7470 chromatic and 3938 monochromatic lightcurves. Instrumental noise and stellar variability were treated with several filtering tools by different teams from the CoRoT community. Different transit search algorithms were applied to the lightcurves. Results. Fifty-one stars were classified...

  11. Flow enforcement algorithms for ATM networks

    DEFF Research Database (Denmark)

    Dittmann, Lars; Jacobsen, Søren B.; Moth, Klaus

    1991-01-01

    Four measurement algorithms for flow enforcement in asynchronous transfer mode (ATM) networks are presented. The algorithms are the leaky bucket, the rectangular sliding window, the triangular sliding window, and the exponentially weighted moving average. A comparison, based partly on teletraffic...
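
    The abstract does not spell the algorithms out, but the leaky bucket it names is standard; the sketch below is a minimal cell-level version in which each arriving cell adds to a counter that drains at the contracted rate, and cells are marked non-conforming when the counter would exceed its limit. The rate and bucket size are placeholder values, not parameters from the paper.

```python
class LeakyBucket:
    """Minimal leaky-bucket flow enforcement: the bucket drains at `leak_rate`
    units per second and each conforming cell adds one unit."""

    def __init__(self, leak_rate, bucket_size):
        self.leak_rate = leak_rate
        self.bucket_size = bucket_size
        self.level = 0.0
        self.last_time = 0.0

    def admit(self, arrival_time):
        # Drain the bucket for the time elapsed since the last arrival.
        elapsed = arrival_time - self.last_time
        self.level = max(0.0, self.level - elapsed * self.leak_rate)
        self.last_time = arrival_time
        if self.level + 1.0 <= self.bucket_size:
            self.level += 1.0
            return True          # conforming cell
        return False             # non-conforming cell (policed/dropped)

# Placeholder contract: 100 cells/s sustained rate, burst tolerance of 5 cells.
bucket = LeakyBucket(leak_rate=100.0, bucket_size=5.0)
arrivals = [i * 0.002 for i in range(20)]          # a 500 cells/s burst
print([bucket.admit(t) for t in arrivals])
```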

  12. Global alignment algorithms implementations | Fatumo ...

    African Journals Online (AJOL)

    In this paper, we implemented the two routes for sequence comparison, that is, the dotplot and the Needleman-Wunsch algorithm for global sequence alignment. Our algorithms were implemented in the Python programming language and were tested on a Linux platform (1.60 GHz, 512 MB of RAM, SUSE 9.2 and 10.1).
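
    The abstract names the Needleman-Wunsch algorithm without listing its code; a compact textbook-style version is sketched below. The match/mismatch/gap scores are illustrative defaults, not necessarily those used in the paper.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-1):
    """Global alignment by dynamic programming; returns score and aligned strings."""
    n, m = len(a), len(b)
    # Score matrix with gap-penalized first row and column.
    score = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        score[i][0] = i * gap
    for j in range(1, m + 1):
        score[0][j] = j * gap
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            diag = score[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            score[i][j] = max(diag, score[i - 1][j] + gap, score[i][j - 1] + gap)
    # Traceback from the bottom-right corner.
    out_a, out_b, i, j = [], [], n, m
    while i > 0 or j > 0:
        if i > 0 and j > 0 and score[i][j] == score[i - 1][j - 1] + (
                match if a[i - 1] == b[j - 1] else mismatch):
            out_a.append(a[i - 1]); out_b.append(b[j - 1]); i -= 1; j -= 1
        elif i > 0 and score[i][j] == score[i - 1][j] + gap:
            out_a.append(a[i - 1]); out_b.append("-"); i -= 1
        else:
            out_a.append("-"); out_b.append(b[j - 1]); j -= 1
    return score[n][m], "".join(reversed(out_a)), "".join(reversed(out_b))

print(needleman_wunsch("GATTACA", "GCATGCU"))
```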

  13. An algorithm for finding a similar subgraph of all Hamiltonian cycles

    Science.gov (United States)

    Wafdan, R.; Ihsan, M.; Suhaimi, D.

    2018-01-01

    This paper discusses an algorithm, called the findSimSubG algorithm, for finding a similar subgraph. A similar subgraph is a subgraph with a maximum number of edges, contains no isolated vertex and is contained in every Hamiltonian cycle of a Hamiltonian graph. The algorithm runs only on Hamiltonian graphs with at least two Hamiltonian cycles. The algorithm works by examining whether the initial subgraph of the first Hamiltonian cycle is a subgraph of the comparison graphs. If the initial subgraph is not in the comparison graphs, the algorithm removes the edges and vertices of the initial subgraph that are not in the comparison graphs. There are two main processes in the algorithm: changing a Hamiltonian cycle into a cycle graph, and removing edges and vertices of the initial subgraph that are not in the comparison graphs. The findSimSubG algorithm can find the similar subgraph without using a backtracking method. The similar subgraph cannot be found on certain graphs, such as the n-antiprism graph, complete bipartite graph, complete graph, 2n-crossed prism graph, n-crown graph, n-Möbius ladder, prism graph, and wheel graph. The complexity of this algorithm is O(m|V|), where m is the number of Hamiltonian cycles and |V| is the number of vertices of the Hamiltonian graph.
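
    The findSimSubG procedure itself is only summarized above; as a point of reference, a naive alternative can be written as a direct intersection of the edge sets of all Hamiltonian cycles, as in the sketch below. The cycles are given as vertex sequences and are purely illustrative, not derived from a specific graph in the paper.

```python
def cycle_edges(cycle):
    """Undirected edge set of a Hamiltonian cycle given as a vertex sequence."""
    return {frozenset((cycle[i], cycle[(i + 1) % len(cycle)]))
            for i in range(len(cycle))}

def common_subgraph(hamiltonian_cycles):
    """Edges contained in every Hamiltonian cycle (a naive 'similar subgraph')."""
    edge_sets = [cycle_edges(c) for c in hamiltonian_cycles]
    common = set.intersection(*edge_sets)
    return sorted(tuple(sorted(e)) for e in common)

# Hypothetical Hamiltonian cycles of a small graph on vertices 0..4.
cycles = [[0, 1, 2, 3, 4],
          [0, 1, 2, 4, 3],
          [0, 2, 1, 3, 4]]
print(common_subgraph(cycles))
```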

  14. COOBBO: A Novel Opposition-Based Soft Computing Algorithm for TSP Problems

    Directory of Open Access Journals (Sweden)

    Qingzheng Xu

    2014-12-01

    Full Text Available In this paper, we propose a novel definition of the opposite path. Its core feature is that the sequence of candidate paths and the distances between adjacent nodes in the tour are considered simultaneously. In a sense, the candidate path and its corresponding opposite path have the same (or at least similar) distance to the optimal path in the current population. Based on an accepted framework for employing opposition-based learning, Oppositional Biogeography-Based Optimization using the Current Optimum, called the COOBBO algorithm, is introduced to solve traveling salesman problems. We demonstrate its performance on eight benchmark problems and compare it with other optimization algorithms. Simulation results illustrate that the excellent performance of our proposed algorithm is attributed to the distinct definition of the opposite path. In addition, its great strength lies in exploitation for enhancing solution accuracy, not exploration for improving population diversity. Finally, by comparing different versions of COOBBO, another conclusion is that each successful opposition-based soft computing algorithm needs to adjust and maintain a good balance between the backward adjacent node and the forward adjacent node.

  15. A new comparison of hyperspectral anomaly detection algorithms for real-time applications

    Science.gov (United States)

    Díaz, María.; López, Sebastián.; Sarmiento, Roberto

    2016-10-01

    Due to the high spectral resolution that remotely sensed hyperspectral images provide, there has been an increasing interest in anomaly detection. The aim of anomaly detection is to single out pixels whose spectral signature differs significantly from the background spectra. Basically, anomaly detectors mark pixels with a certain score, considering as anomalies those whose scores are higher than a threshold. Receiver Operating Characteristic (ROC) curves have been widely used as an assessment measure in order to compare the performance of different algorithms. ROC curves are graphical plots which illustrate the trade-off between false positive and true positive rates. However, they are limited for making deep comparisons because they discard relevant factors required in real-time applications such as run times, costs of misclassification and the ability to mark anomalies with high scores. This last point is fundamental in anomaly detection in order to distinguish anomalies easily from the background without any posterior processing. An extensive set of simulations has been made using different anomaly detection algorithms, comparing their performances and efficiencies using several extra metrics in order to complement the ROC curve analysis. Results support our proposal and demonstrate that ROC curves by themselves do not provide a good visualization of detection performance. Moreover, a figure of merit is proposed in this paper which encompasses in a single global metric all the measures yielded by the proposed additional metrics. Therefore, this figure, named Detection Efficiency (DE), takes into account several crucial types of performance assessment that ROC curves do not consider. Results demonstrate that algorithms with the best detection performances according to ROC curves do not have the highest DE values. Consequently, the recommendation to use extra measures to properly evaluate performance has been supported and justified by

  16. Comparison of Different MPPT Algorithms with a Proposed One Using a Power Estimator for Grid Connected PV Systems

    Directory of Open Access Journals (Sweden)

    Manel Hlaili

    2016-01-01

    Full Text Available Photovoltaic (PV) energy is one of the most important energy sources since it is clean and inexhaustible. It is important to operate PV energy conversion systems at the maximum power point (MPP) to maximize the output energy of PV arrays. An MPPT control is necessary to extract maximum power from the PV arrays. In recent years, a large number of techniques have been proposed for tracking the maximum power point. This paper presents a comparison of different MPPT methods, proposes one that uses a power estimator, and analyses their suitability for systems which experience a wide range of operating conditions. The classic methods analysed, the incremental conductance (IncCond), perturbation and observation (P&O), and ripple correlation (RC) algorithms, are suitable and practical. Simulation results of a single-phase NPC grid-connected PV system operating with the aforementioned methods are presented to confirm the effectiveness of the scheme and algorithms. Simulation results verify the correct operation of the different MPPT methods and the proposed algorithm.
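
    For reference, the classic perturb-and-observe step that the paper compares against can be sketched in a few lines: the operating voltage is nudged in the direction that increased power on the previous step. The panel model, step size and starting point below are placeholders, not the paper's simulation setup.

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One P&O update: return the next reference voltage for the converter."""
    dp, dv = p - p_prev, v - v_prev
    if dp == 0:
        return v
    # Move towards higher power: keep direction if power rose, reverse otherwise.
    if (dp > 0 and dv > 0) or (dp < 0 and dv < 0):
        return v + step
    return v - step

def panel_power(v):
    """Toy PV curve with a maximum near 17 V (placeholder for a real panel model)."""
    return max(0.0, -0.5 * (v - 17.0) ** 2 + 120.0)

v_prev, v = 10.0, 10.5
p_prev = panel_power(v_prev)
for _ in range(60):
    p = panel_power(v)
    v_next = perturb_and_observe(v, p, v_prev, p_prev)
    v_prev, p_prev, v = v, p, v_next

print(f"operating point: {v:.1f} V, {panel_power(v):.1f} W")
```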

  17. Comparison between iterative wavefront control algorithm and direct gradient wavefront control algorithm for adaptive optics system

    Science.gov (United States)

    Cheng, Sheng-Yi; Liu, Wen-Jin; Chen, Shan-Qiu; Dong, Li-Zhi; Yang, Ping; Xu, Bing

    2015-08-01

    Among all kinds of wavefront control algorithms in adaptive optics systems, the direct gradient wavefront control algorithm is the most widespread and common method. This control algorithm obtains the actuator voltages directly from wavefront slopes through pre-measuring the relational matrix between deformable mirror actuators and the Hartmann wavefront sensor, with excellent real-time characteristics and stability. However, as the numbers of sub-apertures in the wavefront sensor and of deformable mirror actuators increase, the matrix operation in the direct gradient algorithm takes too much time, which becomes a major factor limiting the control effect of adaptive optics systems. In this paper we apply an iterative wavefront control algorithm to high-resolution adaptive optics systems, in which the voltages of each actuator are obtained through iteration, which gains a great advantage in calculation and storage. For an AO system with thousands of actuators, the computational complexity estimate is about O(n²) to O(n³) for the direct gradient wavefront control algorithm, while it is about O(n) to O(n^(3/2)) for the iterative wavefront control algorithm, where n is the number of actuators of the AO system. And the larger the numbers of sub-apertures and deformable mirror actuators, the more significant the advantage the iterative wavefront control algorithm exhibits. Project supported by the National Key Scientific and Research Equipment Development Project of China (Grant No. ZDYZ2013-2), the National Natural Science Foundation of China (Grant No. 11173008), and the Sichuan Provincial Outstanding Youth Academic Technology Leaders Program, China (Grant No. 2012JQ0012).

  18. A comparison of body image concern in candidates for rhinoplasty and therapeutic surgery.

    Science.gov (United States)

    Hashemi, Seyed Amirhosein Ghazizadeh; Edalatnoor, Behnoosh; Edalatnoor, Behnaz; Niksun, Omid

    2017-09-01

    Body dysmorphic disorder among patients referring for cosmetic surgeries is a disorder that, if not diagnosed by a physician, can cause irreparable damage to the doctor and the patient. The aim of this study was to compare body image concern in candidates for rhinoplasty and therapeutic surgery. This was a cross-sectional study conducted on patients referring to Loghman Hospital of Tehran for rhinoplasty and therapeutic surgery during the period from 2014 through 2016. For each person in the cosmetic surgery group, a person of the same sex and age in the therapeutic surgery group was matched, and the study was conducted on 60 subjects in the rhinoplasty group and 62 patients in the therapeutic surgery group. Then, the Body Image Concern Inventory and demographic data were filled in by all patients and the level of body image concern in the two groups was compared. Statistical analysis was conducted using SPSS 16, the Chi-square test and the paired-samples t-test. A p-value of less than 0.05 was considered statistically significant. In this study, 122 patients (49 males and 73 females) with a mean age of 27.1±7.3, aged between 18 and 55 years, were investigated. Sixty subjects were candidates for rhinoplasty and 62 subjects for therapeutic surgery. Candidates for rhinoplasty were mostly male (60%) and single (63.3%). Results of the t-test demonstrated that body image concern and body dysmorphic disorder were higher in the rhinoplasty group than in the therapeutic group. Body image concern was higher in rhinoplasty candidates than in candidates for other surgeries. Careful visits and interviews with people referred for rhinoplasty are important in order to measure their level of body image concern, diagnose any existing disorder, and consider the required treatment.

  19. Infrared study of new star cluster candidates associated to dusty globules

    Science.gov (United States)

    Soto King, P.; Barbá, R.; Roman-Lopes, A.; Jaque, M.; Firpo, V.; Nilo, J. L.; Soto, M.; Minniti, D.

    2014-10-01

    We present results from a study of a sample of small star clusters associated to dusty globules and bright-rimmed clouds that have been observed under the ESO/Chile public infrared survey Vista Variables in the Vía Láctea (VVV). In this short communication, we analyse the near-infrared properties of a set of four small cluster candidates associated to dark clouds. This sample of clusters associated to dusty globules is selected from the new VVV stellar cluster candidates developed by members of the La Serena VVV Group (Barbá et al. 2014). Firstly, we produce color-color and color-magnitude diagrams, through PSF photometry, for both the cluster candidates and the surrounding areas for comparison. The cluster positions are determined from the morphology on the images and also from the comparison of the observed luminosity function for the cluster candidates and the surrounding star fields. We are now working on the procedures to establish the full sample of clusters to be analyzed and on methods for subtraction of the star field contamination. These clusters associated to dusty globules are simple laboratories to study star formation relatively free of the influence of large star-forming regions and populous clusters, and they will be compared with those clusters associated to bright-rimmed globules, which are influenced by the energetic action of nearby O and B massive stars.

  20. ACCURACY COMPARISON OF ALGORITHMS FOR DETERMINATION OF IMAGE CENTER COORDINATES IN OPTOELECTRONIC DEVICES

    Directory of Open Access Journals (Sweden)

    N. A. Starasotnikau

    2018-01-01

    Full Text Available Accuracy in the determination of coordinates for images having simple shapes is considered one of the important and significant parameters in metrological optoelectronic systems such as autocollimators, stellar sensors, Shack-Hartmann sensors, schemes for geometric calibration of digital cameras for aerial and space imagery, and various tracking systems. The paper describes a mathematical model for a measuring stand based on a collimator which projects a test-object onto the photodetector of an optoelectronic device. The mathematical model takes into account the characteristic noises of photodetectors: the shot noise of the desired signal (photon noise), the shot noise of the dark signal, readout noise and the spatial heterogeneity of CCD (charge-coupled device) matrix elements. In order to reduce the noise effect it is proposed to apply the Wiener filter for smoothing an image and its unambiguous identification and also to apply a threshold according to brightness level. The paper contains a comparison of two algorithms for determination of coordinates, based on the energy gravity center and on the contour. Sobel, Prewitt, Roberts, Laplacian of Gaussian, and Canny detectors have been used for determination of the test-object contour. The essence of the contour algorithm lies in searching for an image contour in the form of a circle with its subsequent approximation and determination of the image center. An error calculation has been made while determining the coordinates of the gravity center for test-objects of various diameters (5, 10, 20, 30, 40, 50 pixels of the photodetector) and for signal-to-noise ratio values of 200, 100, 70, 20, 10. The signal-to-noise ratio has been calculated as the difference between the maximum image intensity of the test-object and the background, divided by the mean-square deviation of the background. The accuracy of coordinate determination improved by 0.5-1 order of magnitude when the signal-to-noise ratio increased. Accuracy
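
    The energy-gravity-centre method mentioned above reduces to an intensity-weighted centroid after background thresholding; a minimal version is sketched below on a synthetic spot. The threshold rule, spot parameters and noise level are placeholders, not the stand model from the paper.

```python
import numpy as np

def gravity_center(image, threshold):
    """Intensity-weighted centroid (energy gravity center) of pixels above threshold."""
    img = np.where(image > threshold, image - threshold, 0.0)
    total = img.sum()
    if total == 0:
        raise ValueError("no pixels above threshold")
    ys, xs = np.indices(img.shape)
    return (xs * img).sum() / total, (ys * img).sum() / total

# Synthetic Gaussian spot centred at (x, y) = (32.3, 45.7) with additive noise.
rng = np.random.default_rng(2)
ys, xs = np.indices((96, 96))
spot = 200.0 * np.exp(-((xs - 32.3) ** 2 + (ys - 45.7) ** 2) / (2 * 4.0 ** 2))
image = spot + rng.normal(0.0, 2.0, size=spot.shape)

print(gravity_center(image, threshold=10.0))
```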

  1. Automatic vetting of planet candidates from ground based surveys: Machine learning with NGTS

    Science.gov (United States)

    Armstrong, David J.; Günther, Maximilian N.; McCormac, James; Smith, Alexis M. S.; Bayliss, Daniel; Bouchy, François; Burleigh, Matthew R.; Casewell, Sarah; Eigmüller, Philipp; Gillen, Edward; Goad, Michael R.; Hodgkin, Simon T.; Jenkins, James S.; Louden, Tom; Metrailler, Lionel; Pollacco, Don; Poppenhaeger, Katja; Queloz, Didier; Raynard, Liam; Rauer, Heike; Udry, Stéphane; Walker, Simon R.; Watson, Christopher A.; West, Richard G.; Wheatley, Peter J.

    2018-05-01

    State-of-the-art exoplanet transit surveys are producing ever increasing quantities of data. Making the best use of this resource, in detecting interesting planetary systems or in determining accurate planetary population statistics, requires new automated methods. Here we describe a machine learning algorithm that forms an integral part of the pipeline for the NGTS transit survey, demonstrating the efficacy of machine learning in selecting planetary candidates from multi-night ground-based survey data. Our method uses a combination of random forests and self-organising maps to rank planetary candidates, achieving an AUC score of 97.6% in ranking 12368 injected planets against 27496 false positives in the NGTS data. We build on past examples by using injected transit signals to form a training set, a necessary development for applying similar methods to upcoming surveys. We also make the autovet code used to implement the algorithm publicly accessible. autovet is designed to perform machine-learned vetting of planetary candidates, and can utilise a variety of methods. The apparent robustness of machine learning techniques, whether on space-based or the qualitatively different ground-based data, highlights their importance to future surveys such as TESS and PLATO and the need to better understand their advantages and pitfalls in an exoplanetary context.
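
    The autovet pipeline combines random forests with self-organising maps; the fragment below is only a cut-down illustration of the random-forest half, ranking synthetic candidates by the classifier's planet probability and scoring the ranking with AUC. The features and labels are simulated, not NGTS data, and this is not the published autovet code.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Simulated candidate features (e.g. transit depth, SNR, odd-even difference, ...).
rng = np.random.default_rng(3)
n = 4000
X = rng.normal(size=(n, 5))
# Synthetic "injected planet" label loosely tied to two of the features.
y = (X[:, 0] + 0.8 * X[:, 1] + rng.normal(scale=1.0, size=n) > 1.0).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0)
clf.fit(X_train, y_train)

# Rank candidates by predicted planet probability; higher means more planet-like.
proba = clf.predict_proba(X_test)[:, 1]
ranking = np.argsort(proba)[::-1]
print("AUC on held-out candidates:", round(roc_auc_score(y_test, proba), 3))
print("top-5 ranked candidate indices:", ranking[:5])
```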

  2. A Comparison of the Life Satisfaction and Hopelessness Levels of Teacher Candidates in Turkey

    Science.gov (United States)

    Gencay, Selcuk; Gencay, Okkes Alpaslan

    2011-01-01

    This study aims to explore the level of hopelessness and life satisfaction of teacher candidates from the view points of gender and branch variables. With this aim, the "Beck Hopelessness Scale and Life Satisfaction Scale" has been applied to a total of 278 teacher candidates, of which 133 were females and 145 were males. According to…

  3. [An improved algorithm for electrohysterogram envelope extraction].

    Science.gov (United States)

    Lu, Yaosheng; Pan, Jie; Chen, Zhaoxia; Chen, Zhaoxia

    2017-02-01

    Extraction of the uterine contraction signal from the abdominal uterine electromyogram (EMG) signal is considered the most promising method to replace the traditional tocodynamometer (TOCO) for detecting uterine contraction activity. The traditional root mean square (RMS) algorithm has only limited value in canceling impulsive noise. In our study, an improved algorithm for uterine EMG envelope extraction was proposed to overcome this problem. Firstly, in our experiment, a zero-crossing detection method was used to separate the bursts of uterine electrical activity from the raw uterine EMG signal. After processing the separated signals with two filtering windows of different widths, we used the traditional RMS algorithm to extract the uterine EMG envelope. To assess the performance of the algorithm, the improved algorithm was compared with two existing intensity of uterine electromyogram (IEMG) extraction algorithms. The results showed that the improved algorithm was better than the traditional ones at eliminating impulsive noise present in the uterine EMG signal. The measurement sensitivity and positive predictive value (PPV) of the improved algorithm were 0.952 and 0.922, respectively, which were not only significantly higher than the corresponding values (0.859 and 0.847) of the first comparison algorithm, but also higher than the values (0.928 and 0.877) of the second comparison algorithm. Thus the new method is reliable and effective.
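
    The traditional RMS envelope the paper improves on can be written as a moving-window root mean square; the sketch below shows that baseline on a synthetic burst signal. The window length, sampling rate and signal are placeholders, and the zero-crossing separation and dual-window refinements described above are not included.

```python
import numpy as np

def moving_rms(signal, window):
    """Moving-window RMS envelope of a 1-D signal (same length as the input)."""
    squared = np.asarray(signal, dtype=float) ** 2
    kernel = np.ones(window) / window
    return np.sqrt(np.convolve(squared, kernel, mode="same"))

# Synthetic EMG-like signal: noise with a burst of higher amplitude in the middle.
rng = np.random.default_rng(4)
fs = 200                                   # assumed sampling rate in Hz
t = np.arange(0, 60, 1 / fs)               # 60 s of data
signal = rng.normal(0.0, 0.1, size=t.size)
burst = (t > 20) & (t < 40)
signal[burst] += rng.normal(0.0, 0.5, size=burst.sum())

envelope = moving_rms(signal, window=2 * fs)   # 2-second window
print("baseline RMS:", envelope[:int(10 * fs)].mean().round(3),
      "burst RMS:", envelope[int(25 * fs):int(35 * fs)].mean().round(3))
```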

  4. Comparing Online Algorithms for Bin Packing Problems

    DEFF Research Database (Denmark)

    Epstein, Leah; Favrholdt, Lene Monrad; Kohrt, Jens Svalgaard

    2012-01-01

    The relative worst-order ratio is a measure of the quality of online algorithms. In contrast to the competitive ratio, this measure compares two online algorithms directly instead of using an intermediate comparison with an optimal offline algorithm. In this paper, we apply the relative worst-order ratio to online algorithms for several common variants of the bin packing problem. We mainly consider pairs of algorithms that are not distinguished by the competitive ratio and show that the relative worst-order ratio prefers the intuitively better algorithm of each pair.

  5. A hardware-algorithm co-design approach to optimize seizure detection algorithms for implantable applications.

    Science.gov (United States)

    Raghunathan, Shriram; Gupta, Sumeet K; Markandeya, Himanshu S; Roy, Kaushik; Irazoqui, Pedro P

    2010-10-30

    Implantable neural prostheses that deliver focal electrical stimulation upon demand are rapidly emerging as an alternate therapy for roughly a third of the epileptic patient population that is medically refractory. Seizure detection algorithms enable feedback mechanisms to provide focally and temporally specific intervention. Real-time feasibility and computational complexity often limit most reported detection algorithms to implementations using computers for bedside monitoring or external devices communicating with the implanted electrodes. A comparison of algorithms based on detection efficacy does not present a complete picture of the feasibility of the algorithm with limited computational power, as is the case with most battery-powered applications. We present a two-dimensional design optimization approach that takes into account both detection efficacy and hardware cost in evaluating algorithms for their feasibility in an implantable application. Detection features are first compared for their ability to detect electrographic seizures from micro-electrode data recorded from kainate-treated rats. Circuit models are then used to estimate the dynamic and leakage power consumption of the compared features. A score is assigned based on detection efficacy and the hardware cost for each of the features, then plotted on a two-dimensional design space. An optimal combination of compared features is used to construct an algorithm that provides maximal detection efficacy per unit hardware cost. The methods presented in this paper would facilitate the development of a common platform to benchmark seizure detection algorithms for comparison and feasibility analysis in the next generation of implantable neuroprosthetic devices to treat epilepsy. Copyright © 2010 Elsevier B.V. All rights reserved.

  6. Binar Sort: A Linear Generalized Sorting Algorithm

    OpenAIRE

    Gilreath, William F.

    2008-01-01

    Sorting is a common and ubiquitous activity for computers. It is not surprising that there exist a plethora of sorting algorithms. For all the sorting algorithms, it is an accepted performance limit that sorting algorithms are linearithmic or O(N lg N). The linearithmic lower bound in performance stems from the fact that the sorting algorithms use the ordering property of the data. The sorting algorithm uses comparison by the ordering property to arrange the data elements from an initial perm...

  7. Insertion algorithms for network model database management systems

    Science.gov (United States)

    Mamadolimov, Abdurashid; Khikmat, Saburov

    2017-12-01

    The network model is a database model conceived as a flexible way of representing objects and their relationships. Its distinguishing feature is that the schema, viewed as a graph in which object types are nodes and relationship types are arcs, forms a partial order. When a database is large and query comparisons are expensive, the efficiency requirement for managing algorithms is to minimize the number of query comparisons. We consider the updating operation for network model database management systems. We develop a new sequential algorithm for the updating operation. We also suggest a distributed version of the algorithm.

  8. Comparison of multihardware parallel implementations for a phase unwrapping algorithm

    Science.gov (United States)

    Hernandez-Lopez, Francisco Javier; Rivera, Mariano; Salazar-Garibay, Adan; Legarda-Sáenz, Ricardo

    2018-04-01

    Phase unwrapping is an important problem in the areas of optical metrology, synthetic aperture radar (SAR) image analysis, and magnetic resonance imaging (MRI) analysis. These images are becoming larger in size and, particularly, the availability and need for processing of SAR and MRI data have increased significantly with the acquisition of remote sensing data and the popularization of magnetic resonators in clinical diagnosis. Therefore, it is important to develop faster and more accurate phase unwrapping algorithms. We propose a parallel multigrid algorithm for a phase unwrapping method named accumulation of residual maps, which builds on a serial algorithm that consists of the minimization of a cost function, achieved by means of a serial Gauss-Seidel-type algorithm. Our algorithm also optimizes the original cost function, but unlike the original work, our algorithm is a parallel Jacobi-class method with alternating minimizations. This strategy is known as the chessboard type, where red pixels can be updated in parallel in the same iteration since they are independent. Similarly, black pixels can be updated in parallel in an alternating iteration. We present parallel implementations of our algorithm for different parallel multicore architectures such as CPU-multicore, the Xeon Phi coprocessor, and Nvidia graphics processing units. In all cases, we obtain superior performance with our parallel algorithm when compared with the original serial version. In addition, we present a detailed comparison of the performance of the developed parallel versions.
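
    The chessboard scheme the authors describe can be illustrated with a small NumPy example that relaxes a Laplace-type update on red and black pixels in alternation, so that all pixels of one colour can be updated independently. This runs on a toy boundary-value grid, not on the wrapped-phase cost function of the paper.

```python
import numpy as np

def red_black_relaxation(grid, fixed_mask, n_sweeps=500):
    """Alternating red/black relaxation: each colour class is updated in one
    vectorized (hence parallelizable) step from its four neighbours."""
    u = grid.copy()
    ys, xs = np.indices(u.shape)
    red = (ys + xs) % 2 == 0
    black = ~red
    for _ in range(n_sweeps):
        for colour in (red, black):
            neighbours = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                          np.roll(u, 1, 1) + np.roll(u, -1, 1)) / 4.0
            update = colour & ~fixed_mask
            u[update] = neighbours[update]
    return u

# Toy boundary-value problem: edges held at fixed values, interior relaxed.
u0 = np.zeros((32, 32))
fixed = np.zeros_like(u0, dtype=bool)
fixed[0, :] = fixed[-1, :] = fixed[:, 0] = fixed[:, -1] = True
u0[0, :] = 1.0                      # one hot edge
print(red_black_relaxation(u0, fixed)[16, 16].round(3))
```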

  9. A Novel Clustering Algorithm Inspired by Membrane Computing

    Directory of Open Access Journals (Sweden)

    Hong Peng

    2015-01-01

    Full Text Available P systems are a class of distributed parallel computing models; this paper presents a novel clustering algorithm, inspired by the mechanism of a tissue-like P system with a loop structure of cells, called the membrane clustering algorithm. The objects of the cells express the candidate centers of clusters and are evolved by the evolution rules. Based on the loop membrane structure, the communication rules realize a local neighborhood topology, which helps the coevolution of the objects and improves the diversity of objects in the system. The tissue-like P system can effectively search for the optimal partitioning with the help of its parallel computing advantage. The proposed clustering algorithm is evaluated on four artificial data sets and six real-life data sets. Experimental results show that the proposed clustering algorithm is superior or competitive to the k-means algorithm and several evolutionary clustering algorithms recently reported in the literature.

  10. COMPARISON OF DIFFERENT SEGMENTATION ALGORITHMS FOR DERMOSCOPIC IMAGES

    Directory of Open Access Journals (Sweden)

    A.A. Haseena Thasneem

    2015-05-01

    Full Text Available This paper compares different algorithms for the segmentation of skin lesions in dermoscopic images. The basic segmentation algorithms compared are thresholding techniques (global and adaptive), region-based techniques (K-means, Fuzzy C-means, Expectation Maximization and Statistical Region Merging), contour models (Active Contour Model and Chan-Vese Model) and Spectral Clustering. Accuracy, sensitivity, specificity, border error, Hammoude distance, Hausdorff distance, MSE, PSNR and elapsed time metrics were used to evaluate the various segmentation techniques.

  11. Comparison of multiobjective harmony search, cuckoo search and bat-inspired algorithms for renewable distributed generation placement

    Directory of Open Access Journals (Sweden)

    John E. Candelo-Becerra

    2015-07-01

    Full Text Available Electric power losses have a significant impact on the total costs of distribution networks. The use of renewable energy sources is a major alternative to improve power losses and costs, although other important issues are also enhanced such as voltage magnitudes and network congestion. However, determining the best location and size of renewable energy generators can be sometimes a challenging task due to a large number of possible combinations in the search space. Furthermore, the multiobjective functions increase the complexity of the problem and metaheuristics are preferred to find solutions in a relatively short time. This paper evaluates the performance of the cuckoo search (CS, harmony search (HS, and bat-inspired (BA algorithms for the location and size of renewable distributed generation (RDG in radial distribution networks using a multiobjective function defined as minimizing the energy losses and the RDG costs. The metaheuristic algorithms were programmed in Matlab and tested using the 33-node radial distribution network. The three algorithms obtained similar results for the two objectives evaluated, finding points close to the best solutions in the Pareto front. Comparisons showed that the CS obtained the minimum results for most points evaluated, but the BA and the HS were close to the best solution.

  12. Testing a Fourier Accelerated Hybrid Monte Carlo Algorithm

    OpenAIRE

    Catterall, S.; Karamov, S.

    2001-01-01

    We describe a Fourier Accelerated Hybrid Monte Carlo algorithm suitable for dynamical fermion simulations of non-gauge models. We test the algorithm in supersymmetric quantum mechanics viewed as a one-dimensional Euclidean lattice field theory. We find dramatic reductions in the autocorrelation time of the algorithm in comparison to standard HMC.

  13. Performance comparison of extracellular spike sorting algorithms for single-channel recordings.

    Science.gov (United States)

    Wild, Jiri; Prekopcsak, Zoltan; Sieger, Tomas; Novak, Daniel; Jech, Robert

    2012-01-30

    Proper classification of action potentials from extracellular recordings is essential for making an accurate study of neuronal behavior. Many spike sorting algorithms have been presented in the technical literature. However, no comparative analysis has hitherto been performed. In our study, three widely-used publicly-available spike sorting algorithms (WaveClus, KlustaKwik, OSort) were compared with regard to their parameter settings. The algorithms were evaluated using 112 artificial signals (publicly available online) with 2-9 different neurons and varying noise levels between 0.00 and 0.60. An optimization technique based on Adjusted Mutual Information was employed to find near-optimal parameter settings for a given artificial signal and algorithm. All three algorithms performed significantly better when their parameters were optimized. WaveClus was the most accurate spike sorting algorithm, receiving the best evaluation score for 60% of all signals. OSort operated at almost five times the speed of the other algorithms. In terms of accuracy, OSort performed significantly less well than the other two algorithms. None of the algorithms was optimal in general. The accuracy of the algorithms depended on the proper choice of algorithm parameters and also on specific properties of the examined signal. Copyright © 2011 Elsevier B.V. All rights reserved.

  14. Algorithm comparison for schedule optimization in MR fingerprinting.

    Science.gov (United States)

    Cohen, Ouri; Rosen, Matthew S

    2017-09-01

    In MR Fingerprinting, the flip angles and repetition times are chosen according to a pseudorandom schedule. In previous work, we have shown that maximizing the discrimination between different tissue types by optimizing the acquisition schedule allows reductions in the number of measurements required. The ideal optimization algorithm for this application remains unknown, however. In this work we examine several different optimization algorithms to determine the one best suited for optimizing MR Fingerprinting acquisition schedules. Copyright © 2017 Elsevier Inc. All rights reserved.

  15. An Enhanced Jaya Algorithm with a Two Group Adaption

    Directory of Open Access Journals (Sweden)

    Chibing Gong

    2017-01-01

    Full Text Available This paper proposes a novel performance-enhanced Jaya algorithm with a two-group adaption (E-Jaya). Two improvements are presented in E-Jaya. First, instead of using the best and the worst values as in the Jaya algorithm, E-Jaya separates all candidates into two groups, the better and the worse groups, based on their fitness values; the mean of the better group and the mean of the worse group are then used. Second, so that no additional algorithm-specific parameters are introduced in E-Jaya, a novel adaptive method of dividing the two groups has been developed. Finally, twelve benchmark functions with different dimensionalities, such as 40, 60, and 100, were evaluated using the proposed E-Jaya algorithm. The results show that E-Jaya significantly outperformed the Jaya algorithm in terms of solution accuracy. Additionally, E-Jaya was also compared with differential evolution (DE), differential evolution with self-adapting control parameters (jDE), the firefly algorithm (FA), and the standard particle swarm optimization 2011 (SPSO2011) algorithm. The E-Jaya algorithm outperforms all these algorithms.
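
    The abstract gives the idea but not the full update rule of E-Jaya; the sketch below shows the standard Jaya move, with an option to replace the single best and worst solutions by the means of a better and a worse group as the abstract describes. The benchmark function, population size, fixed 50/50 split and iteration budget are assumptions; the adaptive group-splitting rule of E-Jaya is not reproduced here.

```python
import numpy as np

def sphere(x):
    """Benchmark objective (placeholder): minimize the sum of squares."""
    return np.sum(x ** 2, axis=-1)

rng = np.random.default_rng(5)
n_pop, dim, n_iter = 30, 10, 300
pop = rng.uniform(-5.0, 5.0, size=(n_pop, dim))
use_group_means = True          # E-Jaya-style guidance instead of single best/worst

for _ in range(n_iter):
    fitness = sphere(pop)
    order = np.argsort(fitness)
    if use_group_means:
        half = n_pop // 2
        good = pop[order[:half]].mean(axis=0)    # mean of the better group
        bad = pop[order[half:]].mean(axis=0)     # mean of the worse group
    else:
        good, bad = pop[order[0]], pop[order[-1]]
    r1, r2 = rng.random((2, n_pop, dim))
    # Standard Jaya move: approach the good guide, move away from the bad one.
    candidate = pop + r1 * (good - np.abs(pop)) - r2 * (bad - np.abs(pop))
    cand_fitness = sphere(candidate)
    improved = cand_fitness < fitness
    pop[improved] = candidate[improved]          # greedy acceptance as in Jaya

print("best objective value found:", sphere(pop).min().round(6))
```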

  16. A voting-based star identification algorithm utilizing local and global distribution

    Science.gov (United States)

    Fan, Qiaoyun; Zhong, Xuyang; Sun, Junhua

    2018-03-01

    A novel star identification algorithm based on voting scheme is presented in this paper. In the proposed algorithm, the global distribution and local distribution of sensor stars are fully utilized, and the stratified voting scheme is adopted to obtain the candidates for sensor stars. The database optimization is employed to reduce its memory requirement and improve the robustness of the proposed algorithm. The simulation shows that the proposed algorithm exhibits 99.81% identification rate with 2-pixel standard deviations of positional noises and 0.322-Mv magnitude noises. Compared with two similar algorithms, the proposed algorithm is more robust towards noise, and the average identification time and required memory is less. Furthermore, the real sky test shows that the proposed algorithm performs well on the real star images.

  17. Comparison of the inversion algorithms applied to the ozone vertical profile retrieval from SCIAMACHY limb measurements

    Directory of Open Access Journals (Sweden)

    A. Rozanov

    2007-09-01

    Full Text Available This paper is devoted to an intercomparison of ozone vertical profiles retrieved from the measurements of scattered solar radiation performed by the SCIAMACHY instrument in the limb viewing geometry. Three different inversion algorithms including the prototype of the operational Level 1 to 2 processor to be operated by the European Space Agency are considered. Unlike usual validation studies, this comparison removes the uncertainties arising when comparing measurements made by different instruments probing slightly different air masses and focuses on the uncertainties specific to the modeling-retrieval problem only. The intercomparison was performed for 5 selected orbits of SCIAMACHY showing a good overall agreement of the results in the middle stratosphere, whereas considerable discrepancies were identified in the lower stratosphere and upper troposphere altitude region. Additionally, comparisons with ground-based lidar measurements are shown for selected profiles demonstrating an overall correctness of the retrievals.

  18. Comparison of Two Phenotypic Algorithms To Detect Carbapenemase-Producing Enterobacteriaceae

    Science.gov (United States)

    Dortet, Laurent; Bernabeu, Sandrine; Gonzalez, Camille

    2017-01-01

    A novel algorithm designed for the screening of carbapenemase-producing Enterobacteriaceae (CPE), based on faropenem and temocillin disks, was compared to that of the Committee of the Antibiogram of the French Society of Microbiology (CA-SFM), which is based on ticarcillin-clavulanate, imipenem, and temocillin disks. The two algorithms presented comparable negative predictive values (98.6% versus 97.5%) for CPE screening among carbapenem-nonsusceptible Enterobacteriaceae. However, since 46.2% (n = 49) of the CPE were correctly identified as OXA-48-like producers by the faropenem/temocillin-based algorithm, it significantly decreased the number of complementary tests needed (42.2% versus 62.6% with the CA-SFM algorithm). PMID:28607010

  19. Particle algorithms for population dynamics in flows

    International Nuclear Information System (INIS)

    Perlekar, Prasad; Toschi, Federico; Benzi, Roberto; Pigolotti, Simone

    2011-01-01

    We present and discuss particle-based algorithms to numerically study the dynamics of populations subjected to an advecting flow. We discuss a few possible variants of the algorithms and compare them in a model compressible flow. A comparison against appropriate versions of the continuum stochastic Fisher equation (sFKPP) is also presented and discussed. The algorithms can be used to study population genetics in fluid environments.

  20. A Low-Complexity Joint Detection-Decoding Algorithm for Nonbinary LDPC-Coded Modulation Systems

    OpenAIRE

    Wang, Xuepeng; Bai, Baoming; Ma, Xiao

    2010-01-01

    In this paper, we present a low-complexity joint detection-decoding algorithm for nonbinary LDPC coded-modulation systems. The algorithm combines hard-decision decoding using the message-passing strategy with the signal detector in an iterative manner. It requires low computational complexity, offers good system performance and has a fast rate of decoding convergence. Compared to the q-ary sum-product algorithm (QSPA), it provides an attractive candidate for practical applications of q-ary LDP...

  1. An improved approach to exchange non-rectangular departments in CRAFT algorithm

    OpenAIRE

    Esmaeili Aliabadi, Danial; Pourghannad, Behrooz

    2012-01-01

    In this paper, an algorithm which improves the CRAFT algorithm's efficacy is developed. CRAFT is an algorithm widely used to solve facility layout problems. Our proposed method, named Plasma, can be used to improve CRAFT results. In this note, the Plasma algorithm is tested on several sample problems. The comparison between Plasma and classic CRAFT, and also Micro-CRAFT, indicates that Plasma is successful in cost reduction in comparison with CRAFT and Micro-CRAFT.

  2. A novel automated spike sorting algorithm with adaptable feature extraction.

    Science.gov (United States)

    Bestel, Robert; Daus, Andreas W; Thielemann, Christiane

    2012-10-15

    To study the electrophysiological properties of neuronal networks, in vitro studies based on microelectrode arrays have become a viable tool for analysis. Although in constant progress, a challenging task still remains in this area: the development of an efficient spike sorting algorithm that allows an accurate signal analysis at the single-cell level. Most sorting algorithms currently available only extract a specific feature type, such as the principal components or Wavelet coefficients of the measured spike signals, in order to separate different spike shapes generated by different neurons. However, due to the great variety in the obtained spike shapes, the derivation of an optimal feature set is still a very complex issue that current algorithms struggle with. To address this problem, we propose a novel algorithm that (i) extracts a variety of geometric, Wavelet and principal component-based features and (ii) automatically derives a feature subset most suitable for sorting an individual set of spike signals. The new approach evaluates the probability distribution of the obtained spike features and consequently determines the candidates most suitable for the actual spike sorting. These candidates can be formed into an individually adjusted set of spike features, allowing a separation of the various shapes present in the obtained neuronal signal by a subsequent expectation maximisation clustering algorithm. Test results with simulated data files and data obtained from chick embryonic neurons cultured on microelectrode arrays showed an excellent classification result, indicating the superior performance of the described algorithm approach. Copyright © 2012 Elsevier B.V. All rights reserved.

  3. A high-performance spatial database based approach for pathology imaging algorithm evaluation.

    Science.gov (United States)

    Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A D; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J; Saltz, Joel H

    2013-01-01

    Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and

  4. redMaPPer. I. Algorithm and SDSS DR8 catalog

    Energy Technology Data Exchange (ETDEWEB)

    Rykoff, E. S.; Rozo, E.; Reddick, R.; Wechsler, R. H. [SLAC National Accelerator Laboratory, Menlo Park, CA 94025 (United States); Busha, M. T. [Institute for Theoretical Physics, University of Zürich, 8057 Zürich (Switzerland); Cunha, C. E. [Kavli Institute for Particle Astrophysics and Cosmology, Stanford University, Palo Alto, CA 94305 (United States); Finoguenov, A. [Department of Physics, University of Helsinki, FI-00014 Helsinki (Finland); Evrard, A.; Koester, B. P. [Physics Department, University of Michigan, Ann Arbor, MI 48109 (United States); Hao, J.; Nord, B. [Center for Particle Astrophysics, Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States); Leauthaud, A. [Kavli Institute for the Physics and Mathematics of the Universe, University of Tokyo, Kashiwa 277-8583 (Japan); Pierre, M.; Sadibekova, T. [Service d' Astrophysique, CEA Saclay, F-91191 Gif sur Yvette Cedex (France); Sheldon, E. S. [Brookhaven National Laboratory, Upton, NY 11973 (United States)

    2014-04-20

    We describe redMaPPer, a new red sequence cluster finder specifically designed to make optimal use of ongoing and near-future large photometric surveys. The algorithm has multiple attractive features: (1) it can iteratively self-train the red sequence model based on a minimal spectroscopic training sample, an important feature for high-redshift surveys. (2) It can handle complex masks with varying depth. (3) It produces cluster-appropriate random points to enable large-scale structure studies. (4) All clusters are assigned a full redshift probability distribution P(z). (5) Similarly, clusters can have multiple candidate central galaxies, each with corresponding centering probabilities. (6) The algorithm is parallel and numerically efficient: it can run a Dark Energy Survey-like catalog in ∼500 CPU hours. (7) The algorithm exhibits excellent photometric redshift performance, the richness estimates are tightly correlated with external mass proxies, and the completeness and purity of the corresponding catalogs are superb. We apply the redMaPPer algorithm to ∼10,000 deg² of SDSS DR8 data and present the resulting catalog of ∼25,000 clusters over the redshift range z ∈ [0.08, 0.55]. The redMaPPer photometric redshifts are nearly Gaussian, with a scatter σ_z ≈ 0.006 at z ≈ 0.1, increasing to σ_z ≈ 0.02 at z ≈ 0.5 due to increased photometric noise near the survey limit. The median value for |Δz|/(1 + z) for the full sample is 0.006. The incidence of projection effects is low (≤5%). Detailed performance comparisons of the redMaPPer DR8 cluster catalog to X-ray and Sunyaev-Zel'dovich catalogs are presented in a companion paper.

  5. Comparison Of Hybrid Sorting Algorithms Implemented On Different Parallel Hardware Platforms

    Directory of Open Access Journals (Sweden)

    Dominik Zurek

    2013-01-01

    Full Text Available Sorting is a common problem in computer science. There are many well-known sorting algorithms created for sequential execution on a single processor. Recently, hardware platforms have enabled the creation of wide parallel algorithms: standard processors consist of multiple cores, and hardware accelerators such as GPUs are available. Graphics cards, with their parallel architecture, give new possibilities for speeding up many algorithms. In this paper we describe the results of implementing a few different sorting algorithms on GPU cards and multicore processors. A hybrid algorithm is then presented which consists of parts executed on both platforms, the standard CPU and the GPU.

  6. Comparison of Firefly algorithm and Artificial Immune System algorithm for lot streaming in m-machine flow shop scheduling

    Directory of Open Access Journals (Sweden)

    G. Vijay Chakaravarthy

    2012-11-01

    Full Text Available Lot streaming is a technique used to split the processing of lots into several sublots (transfer batches) to allow the overlapping of operations in multistage manufacturing systems, thereby shortening the production time (makespan). The objective of this paper is to minimize the makespan and total flow time of the n-job, m-machine lot streaming problem in a flow shop with equal and variable size sublots, and also to determine the optimal sublot size. In recent times, researchers have been applying intelligent heuristics to solve flow shop problems with lot streaming. In this research, the Firefly Algorithm (FA) and Artificial Immune System (AIS) algorithms are used to solve the problem. The results obtained by the proposed algorithms are also compared with the performance of previously reported traditional heuristics. The computational results show that the proposed algorithms are more efficient and effective than the algorithms already tested for this problem.

  7. Fast algorithm for Morphological Filters

    International Nuclear Information System (INIS)

    Lou Shan; Jiang Xiangqian; Scott, Paul J

    2011-01-01

    In surface metrology, morphological filters, which evolved from the envelope filtering system (E-system), work well for functional prediction of surface finish in the analysis of surfaces in contact. The naive algorithms are time consuming, especially for areal data, and are not generally adopted in practice. A fast algorithm is proposed based on the alpha shape. The hull obtained by rolling the alpha ball is theoretically equivalent to the morphological opening/closing. The algorithm depends on Delaunay triangulation with time complexity O(n log n). In comparison to the naive algorithms, it generates the opening and closing envelopes without combining dilation and erosion. Edge distortion is corrected by reflective padding for open profiles/surfaces. Spikes in the sample data are detected and points interpolated to prevent singularities. The proposed algorithm works well for both morphological profile and areal filters. Examples are presented to demonstrate the validity of this algorithm and its superior efficiency over the naive algorithm.

  8. Development of a hybrid energy storage sizing algorithm associated with the evaluation of power management in different driving cycles

    International Nuclear Information System (INIS)

    Masoud, Masih Tehrani; Mohammad Reza, Ha'iri Yazdi; Esfahanian, Vahid; Sagha, Hossein

    2012-01-01

    In this paper, a hybrid energy storage sizing algorithm for electric vehicles is developed to achieve a semi-optimal, cost-effective design. Using the developed algorithm, a driving cycle is divided into its micro-trips, and the power and energy demands in each micro-trip are determined. The battery size is estimated such that the battery fulfills the power demands. Moreover, the ultracapacitor (UC) energy (or the number of UC modules) is assessed such that the UC delivers the maximum energy demand among the different micro-trips of a driving cycle. Finally, a design factor, which indicates the capability of the hybrid energy storage control strategy, is utilized to evaluate the newly designed control strategies. Using the developed algorithm, energy saving loss, driver satisfaction criteria, and battery life criteria are calculated with a feed-forward dynamic modeling software program and are utilized for comparison among different energy storage candidates. This procedure is applied to the hybrid energy storage sizing of a series hybrid electric city bus for the Manhattan and Tehran driving cycles. Results show that a more aggressive driving cycle (Manhattan) requires a more expensive energy storage system and a more sophisticated energy management strategy.

  9. Sampling algorithms for validation of supervised learning models for Ising-like systems

    Science.gov (United States)

    Portman, Nataliya; Tamblyn, Isaac

    2017-12-01

    In this paper, we build and explore supervised learning models of ferromagnetic system behavior, using Monte-Carlo sampling of the spin configuration space generated by the 2D Ising model. Given the enormous size of the space of all possible Ising model realizations, the question arises as to how to choose a reasonable number of samples that will form physically meaningful and non-intersecting training and testing datasets. Here, we propose a sampling technique called "ID-MH" that uses the Metropolis-Hastings algorithm to create a Markov process across energy levels within a predefined configuration subspace. We show that application of this method retains phase transitions in both training and testing datasets and serves the purpose of validation of a machine learning algorithm. For larger lattice dimensions, ID-MH is not feasible as it requires knowledge of the complete configuration space. As such, we develop a new "block-ID" sampling strategy: it decomposes the given structure into square blocks with lattice dimension N ≤ 5 and uses ID-MH sampling of candidate blocks. Further comparison of the performance of commonly used machine learning methods such as random forests, decision trees, k nearest neighbors and artificial neural networks shows that the PCA-based Decision Tree regressor is the most accurate predictor of magnetizations of the Ising model. For energies, however, the accuracy of prediction is not satisfactory, highlighting the need to consider more algorithmically complex methods (e.g., deep learning).

  10. Overlap Algorithms in Flexible Job-shop Scheduling

    Directory of Open Access Journals (Sweden)

    Celia Gutierrez

    2014-06-01

    Full Text Available The flexible Job-shop Scheduling Problem (fJSP) considers the execution of jobs by a set of candidate resources while satisfying time and technological constraints. This work, which follows the hierarchical architecture, is based on an algorithm where each objective (resource allocation, start-time assignment) is solved by a genetic algorithm (GA) that optimizes a particular fitness function, and enhances the results by executing a set of heuristics that evaluate and repair each scheduling constraint on each operation. The aim of this work is to analyze the impact of some algorithmic features of the overlap constraint heuristics, in order to achieve the objectives to the highest degree. To demonstrate the efficiency of this approach, experiments have been performed and compared with similar cases, with the GA parameters properly tuned.

  11. Novel medical image enhancement algorithms

    Science.gov (United States)

    Agaian, Sos; McClendon, Stephen A.

    2010-01-01

    In this paper, we present two novel medical image enhancement algorithms. The first, a global image enhancement algorithm, utilizes an alpha-trimmed mean filter as its backbone to sharpen images. The second algorithm uses a cascaded unsharp masking technique to separate the high frequency components of an image in order for them to be enhanced using a modified adaptive contrast enhancement algorithm. Experimental results from enhancing electron microscopy, radiological, CT scan and MRI scan images, using the MATLAB environment, are then compared to the original images as well as to the results of other enhancement methods, such as histogram equalization and two forms of adaptive contrast enhancement. An image processing scheme for electron microscopy images of Purkinje cells is also implemented and utilized as a comparison tool to evaluate the performance of our algorithms.

  12. List-Based Simulated Annealing Algorithm for Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    Shi-hua Zhan

    2016-01-01

    Full Text Available The simulated annealing (SA) algorithm is a popular intelligent optimization algorithm which has been successfully applied in many fields. Parameter setting is a key factor for its performance, but it is also tedious work. To simplify parameter setting, we present a list-based simulated annealing (LBSA) algorithm to solve the traveling salesman problem (TSP). The LBSA algorithm uses a novel list-based cooling schedule to control the decrease of temperature. Specifically, a list of temperatures is created first, and then the maximum temperature in the list is used by the Metropolis acceptance criterion to decide whether to accept a candidate solution. The temperature list is adapted iteratively according to the topology of the solution space of the problem. The effectiveness and the parameter sensitivity of the list-based cooling schedule are illustrated through benchmark TSP instances. The LBSA algorithm, whose performance is robust over a wide range of parameter values, shows competitive performance compared with some other state-of-the-art algorithms.
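
    As a concrete illustration of the list-based cooling schedule described above, the following minimal Python sketch applies it to a TSP instance with a 2-opt neighborhood. The function name, the 2-opt move, and the parameter defaults are illustrative assumptions rather than details from the paper; only the use of the maximum list temperature in the Metropolis criterion and the feedback of accepted uphill moves into the list follow the description.

```python
import math
import random

def lbsa_tsp(dist, n_iters=20000, list_len=120, p0=0.9):
    """Minimal sketch of list-based SA for the TSP (2-opt neighborhood).

    dist: symmetric distance matrix (sequence of sequences).
    """
    n = len(dist)
    tour = list(range(n))
    random.shuffle(tour)

    def tour_len(t):
        return sum(dist[t[i]][t[(i + 1) % n]] for i in range(n))

    def two_opt(t):
        i, j = sorted(random.sample(range(n), 2))
        return t[:i] + t[i:j + 1][::-1] + t[j + 1:]

    # Seed the temperature list so random moves are accepted with probability ~p0.
    cur = tour_len(tour)
    temps = []
    for _ in range(list_len):
        delta = abs(tour_len(two_opt(tour)) - cur)
        temps.append(-delta / math.log(p0) if delta > 0 else 1e-9)

    for _ in range(n_iters):
        t_max = max(temps)                      # maximum temperature drives acceptance
        cand = two_opt(tour)
        cand_len = tour_len(cand)
        delta = cand_len - cur
        r = max(random.random(), 1e-12)
        if delta <= 0 or r < math.exp(-delta / t_max):
            if delta > 0:
                # Feed a temperature derived from this accepted uphill move
                # back into the list, so the list adapts to the landscape.
                temps.remove(t_max)
                temps.append(-delta / math.log(r))
            tour, cur = cand, cand_len
    return tour, cur
```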

  13. Algorithms For Integrating Nonlinear Differential Equations

    Science.gov (United States)

    Freed, A. D.; Walker, K. P.

    1994-01-01

    Improved algorithms have been developed for use in the numerical integration of systems of nonhomogeneous, nonlinear, first-order, ordinary differential equations. In comparison with earlier integration algorithms, these algorithms offer greater stability and accuracy. Several are asymptotically correct, thereby enabling retention of stability and accuracy when large increments of the independent variable are used. The attainable accuracies are demonstrated by applying the algorithms to systems of nonlinear, first-order differential equations that arise in the study of viscoplastic behavior, the spread of the acquired immune-deficiency syndrome (AIDS) virus, and predator/prey populations.

  14. Testing algorithms for critical slowing down

    Directory of Open Access Journals (Sweden)

    Cossu Guido

    2018-01-01

    Full Text Available We present preliminary tests of two modifications of the Hybrid Monte Carlo (HMC) algorithm. Both algorithms are designed to travel much farther in the Hamiltonian phase space for each trajectory and to reduce the autocorrelations among physical observables, thus tackling the critical slowing down towards the continuum limit. We present a comparison of the costs of the new algorithms with the standard HMC evolution for pure gauge fields, studying the autocorrelation times for various quantities including the topological charge.

  15. Parallel-vector algorithms for particle simulations on shared-memory multiprocessors

    International Nuclear Information System (INIS)

    Nishiura, Daisuke; Sakaguchi, Hide

    2011-01-01

    Over the last few decades, the computational demands of massive particle-based simulations for both scientific and industrial purposes have been continuously increasing. Hence, considerable efforts are being made to develop parallel computing techniques on various platforms. In such simulations, particles freely move within a given space, and so on a distributed-memory system, load balancing, i.e., assigning an equal number of particles to each processor, is not guaranteed. Shared-memory systems, in contrast, achieve better load balancing for particle models, but suffer from the intrinsic drawback of memory access competition, particularly during (1) pairing of contact candidates from among neighboring particles and (2) force summation for each particle. Here, novel algorithms are proposed to overcome these two problems. For the first problem, the key is a pre-conditioning process during which particle labels are sorted by a cell label in the domain to which the particles belong. Then, a list of contact candidates is constructed by pairing the sorted particle labels. For the latter problem, a table comprising the list indexes of the contact candidate pairs is created and used to sum the contact forces acting on each particle for all contacts according to Newton's third law. With just these methods, memory access competition is avoided without additional redundant procedures. The parallel efficiency and compatibility of these two algorithms were evaluated in discrete element method (DEM) simulations on four types of shared-memory parallel computers: a multicore multiprocessor computer, scalar supercomputer, vector supercomputer, and graphics processing unit. The computational efficiency of a DEM code was found to be drastically improved with our algorithms on all but the scalar supercomputer. Thus, the developed parallel algorithms are useful on shared-memory parallel computers with sufficient memory bandwidth.
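
    The cell-sorting step for building the contact-candidate list can be sketched as follows. This is a serial Python/NumPy illustration of the pre-conditioning idea only (the paper's versions run in parallel on shared-memory hardware), and the function and parameter names are hypothetical.

```python
import numpy as np

def contact_candidates(pos, box, cell, r_cut):
    """Build contact-candidate pairs via cell-sorted particle labels.

    pos:   (N, 3) particle positions, box: (3,) domain edge lengths,
    cell:  sort-cell edge length (chosen >= r_cut), r_cut: contact cutoff.
    """
    pos = np.asarray(pos, dtype=float)
    box = np.asarray(box, dtype=float)
    ncell = np.maximum((box / cell).astype(int), 1)
    ijk = np.clip((pos / cell).astype(int), 0, ncell - 1)
    cell_id = (ijk[:, 0] * ncell[1] + ijk[:, 1]) * ncell[2] + ijk[:, 2]

    order = np.argsort(cell_id)                  # particle labels sorted by cell label
    sorted_ids = cell_id[order]
    # index of the first sorted particle of every cell (empty cells collapse)
    start = np.searchsorted(sorted_ids, np.arange(ncell.prod() + 1))

    pairs = []
    for p in range(len(pos)):                    # scan the 27 neighbouring cells of p
        ci, cj, ck = ijk[p]
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                for dk in (-1, 0, 1):
                    ni, nj, nk = ci + di, cj + dj, ck + dk
                    if not (0 <= ni < ncell[0] and 0 <= nj < ncell[1] and 0 <= nk < ncell[2]):
                        continue
                    c = (ni * ncell[1] + nj) * ncell[2] + nk
                    for q in order[start[c]:start[c + 1]]:
                        if q > p and np.linalg.norm(pos[p] - pos[q]) < r_cut:
                            pairs.append((p, q))
    return pairs
```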

  16. Autonomous algorithms for image restoration

    OpenAIRE

    Griniasty , Meir

    1994-01-01

    We describe a general theoretical framework for algorithms that adaptively tune all their parameters during the restoration of a noisy image. The adaptation procedure is based on a mean field approach known as "deterministic annealing", and is reminiscent of the "deterministic Boltzmann machine". The algorithm is less time consuming than its simulated annealing alternative. We apply the theory to several architectures and compare their performances.

  17. Advanced reconstruction algorithms for electron tomography: From comparison to combination

    Energy Technology Data Exchange (ETDEWEB)

    Goris, B. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Roelandts, T. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Batenburg, K.J. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1098XG Amsterdam (Netherlands); Heidari Mezerji, H. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Bals, S., E-mail: sara.bals@ua.ac.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)

    2013-04-15

    In this work, the simultaneous iterative reconstruction technique (SIRT), the total variation minimization (TVM) reconstruction technique and the discrete algebraic reconstruction technique (DART) for electron tomography are compared and the advantages and disadvantages are discussed. Furthermore, we describe how the result of a three dimensional (3D) reconstruction based on TVM can provide objective information that is needed as the input for a DART reconstruction. This approach results in a tomographic reconstruction of which the segmentation is carried out in an objective manner. - Highlights: ► A comparative study between different reconstruction algorithms for tomography is performed. ► Reconstruction algorithms that use prior knowledge about the specimen yield superior results. ► One reconstruction algorithm can provide the prior knowledge for a second algorithm.

  18. Comparison of l₁-Norm SVR and Sparse Coding Algorithms for Linear Regression.

    Science.gov (United States)

    Zhang, Qingtian; Hu, Xiaolin; Zhang, Bo

    2015-08-01

    Support vector regression (SVR) is a popular function estimation technique based on Vapnik's concept of support vector machine. Among many variants, the l1-norm SVR is known to be good at selecting useful features when the features are redundant. Sparse coding (SC) is a technique widely used in many areas and a number of efficient algorithms are available. Both l1-norm SVR and SC can be used for linear regression. In this brief, the close connection between the l1-norm SVR and SC is revealed and some typical algorithms are compared for linear regression. The results show that the SC algorithms outperform the Newton linear programming algorithm, an efficient l1-norm SVR algorithm, in efficiency. The algorithms are then used to design the radial basis function (RBF) neural networks. Experiments on some benchmark data sets demonstrate the high efficiency of the SC algorithms. In particular, one of the SC algorithms, the orthogonal matching pursuit is two orders of magnitude faster than a well-known RBF network designing algorithm, the orthogonal least squares algorithm.

  19. A Unified Differential Evolution Algorithm for Global Optimization

    Energy Technology Data Exchange (ETDEWEB)

    Qiang, Ji; Mitchell, Chad

    2014-06-24

    In this paper, we propose a new unified differential evolution (uDE) algorithm for single objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.

  20. Advanced metaheuristic algorithms for laser optimization

    International Nuclear Information System (INIS)

    Tomizawa, H.

    2010-01-01

    A laser is one of the most important experimental tools. In the synchrotron radiation field, lasers are widely used for experiments with pump-probe techniques. Especially for X-ray FELs, a laser plays important roles as a seed light source or as a photocathode-illuminating light source to generate a high-brightness electron bunch. Control of the laser pulse characteristics is required for many kinds of experiments. However, the laser has to be tuned and customized for each requirement by laser experts. Automatic laser tuning therefore needs to be realized with sophisticated algorithms. Metaheuristic algorithms are useful candidates for finding solutions that are as close to optimal as possible. A metaheuristic laser tuning system is expected to save human resources and time in laser preparation. I have shown successful results with a metaheuristic based on a genetic algorithm to optimize spatial (transverse) laser profiles, and with a hill-climbing method extended with fuzzy set theory to automatically choose one of the best laser alignments for each experimental requirement. (author)

  1. Cloud detection algorithm comparison and validation for operational Landsat data products

    Science.gov (United States)

    Foga, Steven Curtis; Scaramuzza, Pat; Guo, Song; Zhu, Zhe; Dilley, Ronald; Beckmann, Tim; Schmidt, Gail L.; Dwyer, John L.; Hughes, MJ; Laue, Brady

    2017-01-01

    Clouds are a pervasive and unavoidable issue in satellite-borne optical imagery. Accurate, well-documented, and automated cloud detection algorithms are necessary to effectively leverage large collections of remotely sensed data. The Landsat project is uniquely suited for comparative validation of cloud assessment algorithms because the modular architecture of the Landsat ground system allows for quick evaluation of new code, and because Landsat has the most comprehensive manual truth masks of any current satellite data archive. Currently, the Landsat Level-1 Product Generation System (LPGS) uses separate algorithms for determining clouds, cirrus clouds, and snow and/or ice probability on a per-pixel basis. With more bands onboard the Landsat 8 Operational Land Imager (OLI)/Thermal Infrared Sensor (TIRS) satellite, and a greater number of cloud masking algorithms, the U.S. Geological Survey (USGS) is replacing the current cloud masking workflow with a more robust algorithm that is capable of working across multiple Landsat sensors with minimal modification. Because of the inherent error from stray light and intermittent data availability of TIRS, these algorithms need to operate both with and without thermal data. In this study, we created a workflow to evaluate cloud and cloud shadow masking algorithms using cloud validation masks manually derived from both Landsat 7 Enhanced Thematic Mapper Plus (ETM+) and Landsat 8 OLI/TIRS data. We created a new validation dataset consisting of 96 Landsat 8 scenes, representing different biomes and proportions of cloud cover. We evaluated algorithm performance by overall accuracy, omission error, and commission error for both cloud and cloud shadow. We found that CFMask, C code based on the Function of Mask (Fmask) algorithm, and its confidence bands have the best overall accuracy among the many algorithms tested using our validation data. The Artificial Thermal-Automated Cloud Cover Algorithm (AT-ACCA) is the most accurate

  2. The improved Apriori algorithm based on matrix pruning and weight analysis

    Science.gov (United States)

    Lang, Zhenhong

    2018-04-01

    This paper draws on matrix compression and weight analysis algorithms and proposes an improved Apriori algorithm based on matrix pruning and weight analysis. After the transactional database is scanned only once, the algorithm constructs the Boolean transaction matrix. By counting the ones in the rows and columns of the matrix, the infrequent itemsets are pruned and a new candidate itemset is formed. Then the item weights, the transaction weights, and the weighted support of items are calculated, and thus the frequent itemsets are obtained. The experimental results show that the improved Apriori algorithm not only reduces the number of repeated scans of the database, but also improves the efficiency of data correlation mining.
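
    A minimal Python sketch of mining frequent itemsets from a Boolean transaction matrix built in a single database scan is shown below, in the spirit of the approach described above; the item and transaction weighting used in the paper is omitted, and all names are illustrative.

```python
import numpy as np

def frequent_itemsets(transactions, items, min_support):
    """Mine frequent itemsets from a Boolean transaction matrix.

    transactions: iterable of item collections, items: list of item names,
    min_support:  relative support threshold in [0, 1].
    The database is scanned once to build the matrix; single items are
    pruned via column counts, and k-itemset supports come from row-wise
    logical ANDs (unweighted variant).
    """
    M = np.array([[item in t for item in items] for t in transactions], dtype=bool)
    n_tx = M.shape[0]

    # prune infrequent single items from the column counts
    frequent, current = {}, []
    for j in range(len(items)):
        support = M[:, j].sum() / n_tx
        if support >= min_support:
            frequent[(j,)] = support
            current.append((j,))

    k = 2
    while current:
        candidates = {tuple(sorted(set(a) | set(b)))
                      for a in current for b in current
                      if len(set(a) | set(b)) == k}
        current = []
        for cand in candidates:
            support = M[:, list(cand)].all(axis=1).sum() / n_tx
            if support >= min_support:
                frequent[cand] = support
                current.append(cand)
        k += 1
    return {tuple(items[j] for j in key): s for key, s in frequent.items()}

# Example: frequent_itemsets([{"a", "b"}, {"a", "c"}, {"a", "b", "c"}], ["a", "b", "c"], 0.6)
```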

  3. An efficient genetic algorithm for structural RNA pairwise alignment and its application to non-coding RNA discovery in yeast

    Directory of Open Access Journals (Sweden)

    Taneda Akito

    2008-12-01

    Full Text Available Abstract Background Aligning RNA sequences with low sequence identity has been a challenging problem since such a computation essentially needs an algorithm with high complexity for taking structural conservation into account. Although many sophisticated algorithms for the purpose have been proposed to date, further improvement in efficiency is necessary to accelerate its large-scale applications including non-coding RNA (ncRNA) discovery. Results We developed a new genetic algorithm, Cofolga2, for simultaneously computing pairwise RNA sequence alignment and consensus folding, and benchmarked it using BRAliBase 2.1. The benchmark results showed that our new algorithm is accurate and efficient in both time and memory usage. Then, combining with the originally trained SVM, we applied the new algorithm to novel ncRNA discovery where we compared the S. cerevisiae genome with six related genomes in a pairwise manner. By focusing our search on the relatively short regions (50 bp to 2,000 bp) sandwiched by conserved sequences, we successfully predicted 714 intergenic and 1,311 sense or antisense ncRNA candidates, which were found in the pairwise alignments with stable consensus secondary structure and low sequence identity (≤ 50%). By comparing with the previous predictions, we found that > 92% of the candidates are novel. The estimated rate of false positives in the predicted candidates is 51%. Twenty-five percent of the intergenic candidates have support for expression in the cell, i.e., their genomic positions overlap those of experimentally determined transcripts in the literature. By manual inspection of the results, moreover, we obtained four multiple alignments with low sequence identity which reveal consensus structures shared by three species/sequences. Conclusion The present method gives an efficient tool complementary to sequence-alignment-based ncRNA finders.

  4. A high-performance spatial database based approach for pathology imaging algorithm evaluation

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2013-01-01

    Full Text Available Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The

  5. Performance of Estimation of distribution algorithm for initial core loading optimization of AHWR-LEU

    International Nuclear Information System (INIS)

    Thakur, Amit; Singh, Baltej; Gupta, Anurag; Duggal, Vibhuti; Bhatt, Kislay; Krishnani, P.D.

    2016-01-01

    Highlights: • EDA has been applied to optimize the initial core of AHWR-LEU. • Suitable values of the weighting factor ‘α’ and the population size in EDA were estimated. • The effect of varying the initial distribution function on the optimized solution was studied. • For comparison, a genetic algorithm was also applied. - Abstract: Population based evolutionary algorithms now form an integral part of fuel management in nuclear reactors and are frequently being used for fuel loading pattern optimization (LPO) problems. In this paper, we have applied an estimation of distribution algorithm (EDA) to optimize the initial core loading pattern (LP) of the AHWR-LEU. In EDA, new solutions are generated by sampling the probability distribution model estimated from the selected best candidate solutions. The weighting factor ‘α’ decides the fraction of the current best solutions used for updating the probability distribution function after each generation. A wider use of EDA warrants a comprehensive study on parameters like population size, the weighting factor ‘α’ and the initial probability distribution function. In the present study, we have done an extensive analysis of these parameters (population size, weighting factor ‘α’ and initial probability distribution function) in EDA. It is observed that choosing a very small value of ‘α’ may limit the search to the near vicinity of the initial probability distribution function, and better loading patterns that are far from the initial distribution function may not be considered with due weight. It is also observed that increasing the population size improves the optimized loading pattern; however, the algorithm still fails if the initial distribution function is not close to the expected optimized solution. We have tried to find suitable values of ‘α’ and the population size for the AHWR-LEU initial core loading pattern optimization problem. For the sake of comparison and completeness, we have also addressed the
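
    The role of the weighting factor ‘α’ can be illustrated with a generic univariate (PBIL-style) EDA sketch in Python; the binary encoding, fitness interface, and parameter names are assumptions for illustration only, not the actual AHWR-LEU loading-pattern representation.

```python
import numpy as np

def univariate_eda(fitness, n_bits, pop_size=50, n_best=10, alpha=0.3,
                   generations=100, rng=None):
    """PBIL-style EDA illustrating the weighting factor alpha.

    fitness: callable mapping a 0/1 vector (np.ndarray) to a score to maximise.
    After each generation the probability model is re-estimated from the
    selected best candidates and blended with the previous model:
        p <- (1 - alpha) * p + alpha * mean(best candidates).
    """
    rng = np.random.default_rng() if rng is None else rng
    p = np.full(n_bits, 0.5)                       # initial (uniform) distribution
    best_x, best_f = None, -np.inf
    for _ in range(generations):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)
        scores = np.array([fitness(x) for x in pop])
        elite = pop[np.argsort(scores)[-n_best:]]  # selected best candidate solutions
        p = (1.0 - alpha) * p + alpha * elite.mean(axis=0)
        p = np.clip(p, 0.02, 0.98)                 # keep a little exploration alive
        if scores.max() > best_f:
            best_f, best_x = scores.max(), pop[scores.argmax()].copy()
    return best_x, best_f

# Example: univariate_eda(lambda x: x.sum(), n_bits=30) converges toward all ones.
```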

  6. Comparison of two global digital algorithms for Minkowski tensor estimation

    DEFF Research Database (Denmark)

    The geometry of real world objects can be described by Minkowski tensors. Algorithms have been suggested to approximate Minkowski tensors if only a binary image of the object is available. This paper presents implementations of two such algorithms. The theoretical convergence properties...... are confirmed by simulations on test sets, and recommendations for input arguments of the algorithms are given. For increasing resolutions, we obtain more accurate estimators for the Minkowski tensors. Digitisations of more complicated objects are shown to require higher resolutions....

  7. Comparison Study on Two Model-Based Adaptive Algorithms for SOC Estimation of Lithium-Ion Batteries in Electric Vehicles

    Directory of Open Access Journals (Sweden)

    Yong Tian

    2014-12-01

    Full Text Available State of charge (SOC) estimation is essential to battery management systems in electric vehicles (EVs) to ensure the safe operation of batteries and to provide drivers with the remaining range of the EVs. A number of estimation algorithms have been developed to obtain an accurate SOC value, because the SOC cannot be directly measured with sensors and is closely related to various factors, such as ambient temperature, current rate and battery aging. In this paper, two model-based adaptive algorithms, the adaptive unscented Kalman filter (AUKF) and the adaptive sliding mode observer (ASMO), are applied and compared in terms of convergence behavior, tracking accuracy, computational cost and estimation robustness against parameter uncertainties of the battery model in SOC estimation. Two typical driving cycles, the Dynamic Stress Test (DST) and the New European Driving Cycle (NEDC), are applied to evaluate the performance of the two algorithms. Comparison results show that the AUKF has merits in convergence ability and tracking accuracy with an accurate battery model, while the ASMO has lower computational cost and better estimation robustness against parameter uncertainties of the battery model.

  8. Speeding Up the String Comparison of the IDS Snort using Parallel Programming: A Systematic Literature Review on the Parallelized Aho-Corasick Algorithm

    Directory of Open Access Journals (Sweden)

    SILVA JUNIOR,J. B.

    2016-12-01

    Full Text Available The Intrusion Detection System (IDS) needs to compare the contents of all packets arriving at the network interface with a set of signatures that indicate possible attacks, a task that consumes much CPU processing time. In order to alleviate this problem, some researchers have tried to parallelize the IDS's comparison engine, transferring execution from the CPU to the GPU. This paper identifies and maps the parallelization features of the Aho-Corasick algorithm, which is used in Snort to compare patterns, in order to show this algorithm's implementation and execution issues, as well as optimization techniques for the Aho-Corasick machine. We found 147 papers in important computer science publication databases and mapped them. We selected 22 and analyzed them in order to obtain our results. Our analysis of the papers showed, among other results, that parallelization of the AC algorithm is a recent task and that authors have focused on the State Transition Table as the most common way to implement the algorithm on the GPU. Furthermore, we found that techniques which speed up the algorithm and reduce the required storage space are widely used, such as running the algorithm in the fastest memories and mechanisms for reducing the number of nodes and bit mapping.

  9. Comparison of parameter estimation algorithms in hydrological modelling

    DEFF Research Database (Denmark)

    Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan

    2006-01-01

    Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well......-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and in being trapped in local regions of attractions. The global SCE procedure is, in general, more effective...... and provides a better coverage of the Pareto optimal solutions at a lower computational cost....

  10. Clustering performance comparison using K-means and expectation maximization algorithms.

    Science.gov (United States)

    Jung, Yong Gyu; Kang, Min Soo; Heo, Jun

    2014-11-14

    Clustering is an important means of data mining based on separating data categories by similar features. Unlike the classification algorithm, clustering belongs to the unsupervised type of algorithms. Two representatives of the clustering algorithms are the K-means and the expectation maximization (EM) algorithm. Linear regression analysis was extended to the category-type dependent variable, while logistic regression was achieved using a linear combination of independent variables. To predict the possibility of occurrence of an event, a statistical approach is used. However, the classification of all data by means of logistic regression analysis cannot guarantee the accuracy of the results. In this paper, the logistic regression analysis is applied to EM clusters and the K-means clustering method for quality assessment of red wine, and a method is proposed for ensuring the accuracy of the classification results.

  11. Close Sequence Comparisons are Sufficient to Identify Human cis-Regulatory Elements

    Energy Technology Data Exchange (ETDEWEB)

    Prabhakar, Shyam; Poulin, Francis; Shoukry, Malak; Afzal, Veena; Rubin, Edward M.; Couronne, Olivier; Pennacchio, Len A.

    2005-12-01

    Cross-species DNA sequence comparison is the primary method used to identify functional noncoding elements in human and other large genomes. However, little is known about the relative merits of evolutionarily close and distant sequence comparisons, due to the lack of a universal metric for sequence conservation, and also the paucity of empirically defined benchmark sets of cis-regulatory elements. To address this problem, we developed a general-purpose algorithm (Gumby) that detects slowly-evolving regions in primate, mammalian and more distant comparisons without requiring adjustment of parameters, and ranks conserved elements by P-value using Karlin-Altschul statistics. We benchmarked Gumby predictions against previously identified cis-regulatory elements at diverse genomic loci, and also tested numerous extremely conserved human-rodent sequences for transcriptional enhancer activity using reporter-gene assays in transgenic mice. Human regulatory elements were identified with acceptable sensitivity and specificity by comparison with 1-5 other eutherian mammals or 6 other simian primates. More distant comparisons (marsupial, avian, amphibian and fish) failed to identify many of the empirically defined functional noncoding elements. We derived an intuitive relationship between ancient and recent noncoding sequence conservation from whole genome comparative analysis, which explains some of these findings. Lastly, we determined that, in addition to strength of conservation, genomic location and/or density of surrounding conserved elements must also be considered in selecting candidate enhancers for testing at embryonic time points.

  12. Improved autonomous star identification algorithm

    International Nuclear Information System (INIS)

    Luo Li-Yan; Xu Lu-Ping; Zhang Hua; Sun Jing-Rong

    2015-01-01

    The log–polar transform (LPT) is introduced into star identification because of its rotation invariance. An improved autonomous star identification algorithm is proposed in this paper to avoid the circular shift of the feature vector and to reduce the time consumed by the LPT-based star identification algorithm. In the proposed algorithm, the star pattern of the same navigation star remains unchanged when the stellar image is rotated, which reduces the star identification time. The logarithmic values of the plane distances between the navigation star and its neighbor stars are adopted to construct the feature vector of the navigation star, which enhances the robustness of star identification. In addition, efforts are made to find the identification result with fewer comparisons, instead of searching the whole feature database. The simulation results demonstrate that the proposed algorithm can effectively accelerate star identification. Moreover, the recognition rate and robustness of the proposed algorithm are better than those of the LPT algorithm and the modified grid algorithm. (paper)

  13. Flash-Aware Page Replacement Algorithm

    Directory of Open Access Journals (Sweden)

    Guangxia Xu

    2014-01-01

    Full Text Available Due to the limited main memory resource of consumer electronics equipped with NAND flash memory as a storage device, an efficient page replacement algorithm called FAPRA is proposed for NAND flash memory in the light of its inherent characteristics. FAPRA introduces an efficient victim page selection scheme taking into account the benefit-to-cost ratio for evicting each victim page candidate and the combined recency and frequency value, as well as the erase count of the block to which each page belongs. Since the dirty victim page often contains clean data that exist in both the main memory and the NAND flash memory based storage device, FAPRA only writes the dirty data within the victim page back to the NAND flash memory based storage device in order to reduce redundant write operations. We conducted a series of trace-driven simulations, and the experimental results show that our proposed FAPRA algorithm outperforms the state-of-the-art algorithms in terms of page hit ratio, the number of write operations, runtime, and the degree of wear leveling.
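
    A hypothetical sketch of such a victim-selection score is shown below; the field names and the exact weighting are illustrative and do not reproduce FAPRA's actual formula.

```python
def select_victim(pages, w_wear=0.1):
    """Pick a victim page using a benefit-to-cost style score (illustrative).

    pages: list of dicts with hypothetical fields
      'size'        - bytes freed by evicting the page
      'dirty_bytes' - dirty data that must be written back to flash (cost)
      'crf'         - combined recency-and-frequency value (hotness)
      'erase_count' - erase count of the flash block the page belongs to
    The page with the highest eviction desirability is chosen.
    """
    def desirability(p):
        benefit_to_cost = p['size'] / (1.0 + p['dirty_bytes'])  # cheap evictions score high
        coldness = 1.0 / (1.0 + p['crf'])                       # cold pages score high
        wear_penalty = w_wear * p['erase_count']                # spare heavily erased blocks
        return benefit_to_cost * coldness - wear_penalty
    return max(pages, key=desirability)

# victim = select_victim([{'size': 4096, 'dirty_bytes': 0, 'crf': 0.2, 'erase_count': 3}, ...])
```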

  14. Fully Pipelined Parallel Architecture for Candidate Block and Pixel-Subsampling-Based Motion Estimation

    Directory of Open Access Journals (Sweden)

    Reeba Korah

    2008-01-01

    Full Text Available This paper presents a low-power and high-speed architecture for motion estimation with the Candidate Block and Pixel Subsampling (CBPS) algorithm. A coarse-to-fine search approach is employed to find the motion vector, so that the local minima problem is totally eliminated. Pixel subsampling is performed in the selected candidate blocks, which significantly reduces computational cost with low quality degradation. The architecture developed is a fully pipelined parallel design with 9 processing elements. Two different methods are deployed to reduce power consumption: parallel and pipelined implementation, and parallel memory access. For processing 30 CIF frames per second our architecture requires a clock frequency of 4.5 MHz.

  15. Comparison of optimization algorithms in intensity-modulated radiation therapy planning

    Science.gov (United States)

    Kendrick, Rachel

    Intensity-modulated radiation therapy is used to better conform the radiation dose to the target, which includes avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. Algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms seem to suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.

  16. The quest for light sea quarks: algorithms for the future

    International Nuclear Information System (INIS)

    Schroers, W.; Eicker, N.; D'Elia, M.; Forcrand, Ph. de; Gebert, C.; Lippert, Th.; Montvay, I.; Orth, B.; Pepe, M.; Schilling, K.

    2002-01-01

    As part of a systematic algorithm study, we present first results on a performance comparison between a multibosonic algorithm and the hybrid Monte Carlo algorithm as employed by the SESAM collaboration. The standard Wilson fermion action is used on 32 × 16³ lattices at β = 5.5.

  17. A Simple and Efficient Artificial Bee Colony Algorithm

    Directory of Open Access Journals (Sweden)

    Yunfeng Xu

    2013-01-01

    Full Text Available Artificial bee colony (ABC) is a new population-based stochastic algorithm which has shown good search abilities on many optimization problems. However, the original ABC shows slow convergence speed during the search process. In order to enhance the performance of ABC, this paper proposes a new artificial bee colony (NABC) algorithm, which modifies the search pattern of both employed and onlooker bees. A solution pool is constructed by storing some of the best solutions of the current swarm. New candidate solutions are generated by searching the neighborhood of solutions randomly chosen from the solution pool. Experiments are conducted on a set of twelve benchmark functions. Simulation results show that our approach is significantly better than or at least comparable to the original ABC and seven other stochastic algorithms.
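
    The modified search pattern can be illustrated with a short Python sketch that draws a guiding solution from the pool of stored best solutions and perturbs a single dimension, ABC-style; this is one plausible reading of the description above, not the paper's exact equation.

```python
import random

def nabc_candidate(pool, current, phi_range=1.0):
    """Generate one candidate solution from the elite pool, ABC-style.

    pool:    list of elite solutions (lists of floats) kept from the swarm
    current: the food source currently being improved
    A single randomly chosen dimension of 'current' is moved relative to a
    guiding solution drawn from the pool.
    """
    guide = random.choice(pool)                 # neighbour taken from the solution pool
    j = random.randrange(len(current))          # perturb one random dimension only
    phi = random.uniform(-phi_range, phi_range)
    candidate = list(current)
    candidate[j] = guide[j] + phi * (guide[j] - current[j])
    return candidate

# Example: nabc_candidate(pool=[[0.1, 0.2], [0.0, 0.3]], current=[1.0, -0.5])
```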

  18. Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms.

    Science.gov (United States)

    Bianchi, Emanuela; Doppelbauer, Günther; Filion, Laura; Dijkstra, Marjolein; Kahl, Gerhard

    2012-06-07

    We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the isobaric-isothermal ensemble and (ii) an optimization technique based on ideas of evolutionary algorithms. We show that the two methods are equally successful and provide consistent results on crystalline phases of patchy particle systems.

  19. Uncertainty assessment and comparison of different dose algorithms used to evaluate a two element LiF:Mg,Ti TL personal dosemeter

    International Nuclear Information System (INIS)

    Stadtmann, H.; Hranitzky, F.C.

    2008-01-01

    This paper presents the results of an uncertainty assessment and comparison study of different dose algorithms used for evaluating our routine two-element TL whole-body dosemeter. Due to the photon energy response of the two differently filtered LiF:Mg,Ti detector elements, the application of dose algorithms is necessary to assess the relevant photon doses over the rated energy range with an acceptable energy response. Three dose algorithms are designed to calculate the dose for the different dose equivalent quantities, i.e., the personal dose equivalents Hp(10) and Hp(0.07) and the photon dose equivalent Hx used for personal monitoring before the personal dose equivalent was introduced. Based on experimental results both for free-in-air calibration and for calibration on the ISO water slab phantom (type test data), a detailed uncertainty analysis was performed by means of Monte Carlo simulation techniques. The uncertainty contribution of the individual detector element signals was taken into special consideration. (author)

  20. Using qualitative research to inform development of a diagnostic algorithm for UTI in children.

    Science.gov (United States)

    de Salis, Isabel; Whiting, Penny; Sterne, Jonathan A C; Hay, Alastair D

    2013-06-01

    Diagnostic and prognostic algorithms can help reduce clinical uncertainty. The selection of candidate symptoms and signs to be measured in case report forms (CRFs) for potential inclusion in diagnostic algorithms needs to be comprehensive, clearly formulated and relevant for end users. To investigate whether qualitative methods could assist in designing CRFs in research developing diagnostic algorithms. Specifically, the study sought to establish whether qualitative methods could have assisted in designing the CRF for the Health Technology Association funded Diagnosis of Urinary Tract infection in Young children (DUTY) study, which will develop a diagnostic algorithm to improve recognition of urinary tract infection (UTI) in young children presenting to primary care and a Children's Emergency Department. We elicited features that clinicians believed useful in diagnosing UTI and compared these for presence or absence and terminology with the DUTY CRF. Despite much agreement between clinicians' accounts and the DUTY CRFs, we identified a small number of potentially important symptoms and signs not included in the CRF and some included items that could have been reworded to improve understanding and final data analysis. This study uniquely demonstrates the role of qualitative methods in the design and content of CRFs used for developing diagnostic (and prognostic) algorithms. Research groups developing such algorithms should consider using qualitative methods to inform the selection and wording of candidate symptoms and signs.

  1. A modified probabilistic genetic algorithm for the solution of complex constrained optimization problems

    OpenAIRE

    Vorozheikin, A.; Gonchar, T.; Panfilov, I.; Sopov, E.; Sopov, S.

    2009-01-01

    A new algorithm for the solution of complex constrained optimization problems, based on the probabilistic genetic algorithm with optimal solution prediction, is proposed. The results of an efficiency investigation, in comparison with the standard genetic algorithm, are presented.

  2. TPS-HAMMER: improving HAMMER registration algorithm by soft correspondence matching and thin-plate splines based deformation interpolation.

    Science.gov (United States)

    Wu, Guorong; Yap, Pew-Thian; Kim, Minjeong; Shen, Dinggang

    2010-02-01

    We present an improved MR brain image registration algorithm, called TPS-HAMMER, which is based on the concepts of attribute vectors and hierarchical landmark selection scheme proposed in the highly successful HAMMER registration algorithm. We demonstrate that TPS-HAMMER algorithm yields better registration accuracy, robustness, and speed over HAMMER owing to (1) the employment of soft correspondence matching and (2) the utilization of thin-plate splines (TPS) for sparse-to-dense deformation field generation. These two aspects can be integrated into a unified framework to refine the registration iteratively by alternating between soft correspondence matching and dense deformation field estimation. Compared with HAMMER, TPS-HAMMER affords several advantages: (1) unlike the Gaussian propagation mechanism employed in HAMMER, which can be slow and often leaves unreached blotches in the deformation field, the deformation interpolation in the non-landmark points can be obtained immediately with TPS in our algorithm; (2) the smoothness of deformation field is preserved due to the nice properties of TPS; (3) possible misalignments can be alleviated by allowing the matching of the landmarks with a number of possible candidate points and enforcing more exact matches in the final stages of the registration. Extensive experiments have been conducted, using the original HAMMER as a comparison baseline, to validate the merits of TPS-HAMMER. The results show that TPS-HAMMER yields significant improvement in both accuracy and speed, indicating high applicability for the clinical scenario. Copyright (c) 2009 Elsevier Inc. All rights reserved.
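
    The sparse-to-dense interpolation step can be sketched with a thin-plate-spline interpolant; the example below uses SciPy's RBFInterpolator (assumed available in SciPy 1.7 or later) as a stand-in and only illustrates the interpolation idea, not the full TPS-HAMMER pipeline.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator  # assumes SciPy >= 1.7

def dense_deformation_from_landmarks(landmarks, displacements, grid_shape):
    """Propagate sparse landmark displacements to a dense field with TPS.

    landmarks:     (M, 3) voxel coordinates of driving landmarks
    displacements: (M, 3) displacement vectors estimated at the landmarks
    grid_shape:    image dimensions, e.g. (64, 64, 64); kept small here,
                   since this naive sketch interpolates every voxel at once.
    """
    tps = RBFInterpolator(landmarks, displacements, kernel='thin_plate_spline')
    axes = [np.arange(s, dtype=float) for s in grid_shape]
    zz, yy, xx = np.meshgrid(*axes, indexing='ij')
    voxels = np.stack([zz.ravel(), yy.ravel(), xx.ravel()], axis=1)
    # One smooth, global interpolation replaces iterative Gaussian propagation.
    return tps(voxels).reshape(*grid_shape, 3)
```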

  3. Algorithms for reconstructing images for industrial applications

    International Nuclear Information System (INIS)

    Lopes, R.T.; Crispim, V.R.

    1986-01-01

    Several algorithms for reconstructing objects from their projections are being studied in our laboratory for industrial applications. Such algorithms are useful for locating the position and shape of different compositions of materials in the object. A comparative study of two algorithms is made. The two investigated algorithms are the MART (Multiplicative Algebraic Reconstruction Technique) and the Convolution Method. The comparison is carried out from the point of view of the quality of the reconstructed image, the number of views and the cost. (Author) [pt

  4. Optimization in optical systems revisited: Beyond genetic algorithms

    Science.gov (United States)

    Gagnon, Denis; Dumont, Joey; Dubé, Louis

    2013-05-01

    Designing integrated photonic devices such as waveguides, beam-splitters and beam-shapers often requires optimization of a cost function over a large solution space. Metaheuristics - algorithms based on empirical rules for exploring the solution space - are specifically tailored to those problems. One of the most widely used metaheuristics is the standard genetic algorithm (SGA), based on the evolution of a population of candidate solutions. However, the stochastic nature of the SGA sometimes prevents access to the optimal solution. Our goal is to show that a parallel tabu search (PTS) algorithm is more suited to optimization problems in general, and to photonics in particular. PTS is based on several search processes using a pool of diversified initial solutions. To assess the performance of both algorithms (SGA and PTS), we consider an integrated photonics design problem, the generation of arbitrary beam profiles using a two-dimensional waveguide-based dielectric structure. The authors acknowledge financial support from the Natural Sciences and Engineering Research Council of Canada (NSERC).

  5. A study of Hough Transform-based fingerprint alignment algorithms

    CSIR Research Space (South Africa)

    Mlambo, CS

    2014-10-01

    Full Text Available the implementation of each algorithm. The comparison is performed by considering the alignment results computed using each group of algorithms when varying the number of minutiae points, rotation angle, and translation. In addition, the memory usage, computing time...

  6. Planetary transit candidates in CoRoT LRa01 field

    DEFF Research Database (Denmark)

    Carone, L.; Gandolfi, D.; Cabrera, J.

    2012-01-01

    We present the list of planetary transit candidates from the CoRoT LRa01 star field in the Monoceros constellation toward the Galactic anti-center direction. The CoRoT observations of LRa01 lasted from 24 October 2007 to 3 March 2008. We acquired and analyzed 7470 chromatic and 3938 monochromatic...... lightcurves. Instrumental noise and stellar variability were treated with several filtering tools by different teams from the CoRoT community. Different transit search algorithms were applied to the lightcurves. (2 data files)....

  7. Normed kernel function-based fuzzy possibilistic C-means (NKFPCM) algorithm for high-dimensional breast cancer database classification with feature selection is based on Laplacian Score

    Science.gov (United States)

    Lestari, A. W.; Rustam, Z.

    2017-07-01

    In the last decade, breast cancer has become the focus of world attention, as this disease is one of the leading causes of death for women. Therefore, it is necessary to have the correct precautions and treatment. In previous studies, the Fuzzy Kernel K-Medoid algorithm has been used for multi-class data. This paper proposes an algorithm to classify the high-dimensional data of breast cancer using Fuzzy Possibilistic C-Means (FPCM) and a new method based on clustering analysis using Normed Kernel Function-Based Fuzzy Possibilistic C-Means (NKFPCM). The objective of this paper is to obtain the best accuracy in the classification of breast cancer data. In order to improve the accuracy of the two methods, the candidate features are evaluated using feature selection, where the Laplacian Score is used. The results show a comparison of the accuracy and running time of FPCM and NKFPCM with and without feature selection.
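
    For reference, a compact Python sketch of Laplacian Score feature ranking (in the style of He et al.) is given below; the k-NN graph, heat-kernel parameter, and function names are illustrative assumptions rather than the study's exact setup.

```python
import numpy as np

def laplacian_score(X, k=5, t=1.0):
    """Rank features by Laplacian Score (lower = better locality preservation).

    X: (n_samples, n_features) data matrix.  A k-NN graph with heat-kernel
    weights is built on the samples; the returned array holds one score per
    feature, and features would be selected in ascending score order.
    """
    n = X.shape[0]
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)   # pairwise squared distances
    S = np.zeros((n, n))
    for i in range(n):
        nbrs = np.argsort(d2[i])[1:k + 1]                 # k nearest neighbours, skip self
        S[i, nbrs] = np.exp(-d2[i, nbrs] / t)
    S = np.maximum(S, S.T)                                # symmetrise the affinity graph
    D = np.diag(S.sum(axis=1))
    L = D - S                                             # graph Laplacian
    ones = np.ones(n)
    scores = []
    for r in range(X.shape[1]):
        f = X[:, r]
        f_tilde = f - (f @ D @ ones) / (ones @ D @ ones) * ones
        denom = f_tilde @ D @ f_tilde
        scores.append((f_tilde @ L @ f_tilde) / denom if denom > 1e-12 else np.inf)
    return np.array(scores)
```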

  8. Improved Algorithms OF CELF and CELF++ for Influence Maximization

    Directory of Open Access Journals (Sweden)

    Jiaguo Lv

    2014-06-01

    Full Text Available Motivated by its wide application in fields such as viral marketing and sales promotion, influence maximization has become one of the most important and extensively studied problems in social networks. However, the classical KK-Greedy algorithm for influence maximization is inefficient. Two major sources of the algorithm's inefficiency were analyzed in this paper. From the analysis of the CELF and CELF++ algorithms, it follows that once a new seed u is selected, no node in the influenced set of u can bring any further marginal gain. Through this optimization strategy, a lot of redundant nodes can be removed from the candidate nodes. Based on this strategy, two improved algorithms, Lv_CELF and Lv_CELF++, were proposed in this study. To evaluate the two algorithms, they were run together with their benchmark algorithms, CELF and CELF++, on some real-world datasets. Influence degree and running time were employed to measure performance and efficiency, respectively. Experimental results showed that, compared with the benchmark algorithms CELF and CELF++, the new algorithms Lv_CELF and Lv_CELF++ achieved matching influence with higher efficiency. Solutions based on the proposed optimization strategy can be useful for decision-making problems in scenarios related to the influence maximization problem.
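
    The pruning idea can be combined with CELF-style lazy evaluation as in the following Python sketch; the spread and influenced-set estimators are assumed to be supplied by the caller, and the code illustrates the strategy rather than the authors' implementation.

```python
import heapq
import itertools

def lazy_greedy_im(nodes, spread, influenced, k):
    """CELF-style lazy greedy seed selection with influenced-set pruning.

    spread(seeds):  estimated influence of a seed set (e.g. via Monte Carlo).
    influenced(u):  estimated set of nodes activated by u alone.
    Once u is chosen as a seed, nodes in influenced(u) are dropped from the
    candidate heap because they can no longer add marginal gain.
    """
    counter = itertools.count()                    # tie-breaker so the heap never compares nodes
    seeds, current = [], 0.0
    heap = [(-spread([u]), next(counter), u, 0) for u in nodes]
    heapq.heapify(heap)
    removed = set()

    while heap and len(seeds) < k:
        neg_gain, _, u, stamp = heapq.heappop(heap)
        if u in removed:
            continue                               # pruned: u lies in a chosen seed's influenced set
        if stamp == len(seeds):                    # marginal gain is up to date -> pick u
            seeds.append(u)
            current = spread(seeds)
            removed |= set(influenced(u))          # drop nodes that can no longer add gain
        else:                                      # stale gain -> recompute lazily and push back
            gain = spread(seeds + [u]) - current
            heapq.heappush(heap, (-gain, next(counter), u, len(seeds)))
    return seeds
```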

  9. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Science.gov (United States)

    Yue, Han; Zhu, Xinyan; Chen, Di; Liu, Lingjia

    2016-06-01

    Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. Then, it ascertains candidate sets by merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the corresponding relations between settlements at large and small scales are identified. At the end of this article, a demonstration is presented, and the results indicate that the proposed algorithm is capable of handling sophisticated cases.

  10. Predicting patchy particle crystals: variable box shape simulations and evolutionary algorithms

    NARCIS (Netherlands)

    Bianchi, E.; Doppelbauer, G.; Filion, L.C.; Dijkstra, M.; Kahl, G.

    2012-01-01

    We consider several patchy particle models that have been proposed in literature and we investigate their candidate crystal structures in a systematic way. We compare two different algorithms for predicting crystal structures: (i) an approach based on Monte Carlo simulations in the

  11. A pencil beam algorithm for helium ion beam therapy

    Energy Technology Data Exchange (ETDEWEB)

    Fuchs, Hermann; Stroebele, Julia; Schreiner, Thomas; Hirtl, Albert; Georg, Dietmar [Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Medical University of Vienna, 1090 Vienna (Austria); Department of Radiation Oncology, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria) and Comprehensive Cancer Center, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria); Department of Radiation Oncology, Medical University of Vienna/AKH Vienna (Austria) and Comprehensive Cancer Center, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria); PEG MedAustron, 2700 Wiener Neustadt (Austria); Department of Nuclear Medicine, Medical University of Vienna, 1090 Vienna (Austria); Christian Doppler Laboratory for Medical Radiation Research for Radiation Oncology, Medical University of Vienna, 1090 Vienna (Austria); Department of Radiation Oncology, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria) and Comprehensive Cancer Center, Medical University of Vienna/AKH Vienna, 1090 Vienna (Austria)

    2012-11-15

    Purpose: To develop a flexible pencil beam algorithm for helium ion beam therapy. Dose distributions were calculated using the newly developed pencil beam algorithm and validated using Monte Carlo (MC) methods. Methods: The algorithm was based on the established theory of fluence-weighted elemental pencil beam (PB) kernels. Using a new real-time splitting approach, a minimization routine selects the optimal shape for each sub-beam. Dose depositions along the beam path were determined using a look-up table (LUT). Data for LUT generation were derived from MC simulations in water using GATE 6.1. For materials other than water, dose depositions were calculated by the algorithm using water-equivalent depth scaling. Lateral beam spreading caused by multiple scattering was accounted for by implementing a non-local scattering formula developed by Gottschalk. A new nuclear correction was modelled using a Voigt function and implemented by a LUT approach. Validation simulations were performed using a phantom filled with homogeneous materials or heterogeneous slabs of up to 3 cm. The beams were incident perpendicular to the phantom's surface with initial particle energies ranging from 50 to 250 MeV/A and a total number of 10^7 ions per beam. For comparison, a dedicated evaluation software was developed to calculate the gamma indices for dose distributions. Results: In homogeneous phantoms, maximum range deviations between PB and MC of less than 1.1% and differences in the width of the distal energy falloff of the Bragg peak from 80% to 20% of less than 0.1 mm were found. Heterogeneous phantoms using layered slabs satisfied a γ-index criterion of 2%/2 mm of the local value except for some single voxels. For more complex phantoms using laterally arranged bone-air slabs, the γ-index criterion was exceeded in some areas, giving a maximum γ-index of 1.75, and 4.9% of the voxels showed γ-index values larger than one. The calculation precision of the

  12. An intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces.

    Science.gov (United States)

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-09-01

    Poisson disk sampling has excellent spatial and spectral properties, and plays an important role in a variety of visual computing applications. Although many promising algorithms have been proposed for multidimensional sampling in Euclidean space, very few studies have addressed the problem of generating Poisson disks on surfaces, due to the complicated nature of surfaces. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. In sharp contrast to conventional parallel approaches, our method neither partitions the given surface into small patches nor uses any spatial data structure to maintain the voids in the sampling domain. Instead, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. Our algorithm guarantees that the generated Poisson disks are uniformly and randomly distributed without bias. It is worth noting that our method is intrinsic and independent of the embedding space. This intrinsic feature allows us to generate Poisson disk patterns on arbitrary surfaces in R^n. To our knowledge, this is the first intrinsic, parallel, and accurate algorithm for surface Poisson disk sampling. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
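    The priority-based conflict resolution can be illustrated in plain Euclidean 2-D, setting aside the intrinsic surface machinery that is the paper's actual contribution. The toy sketch below is a serial emulation under assumed parameters: each candidate receives a random priority, and a candidate survives only if no higher-priority sample conflicts with it.

```python
import random, math

def poisson_disk_by_priority(candidates, radius):
    """Toy priority-driven conflict resolution in 2-D (serial emulation of the
    parallel idea): a candidate is kept only if no already-accepted,
    higher-priority sample lies within the disk radius."""
    prioritized = sorted(candidates, key=lambda _: random.random())  # random priorities
    accepted = []
    for p in prioritized:
        if all(math.dist(p, q) >= radius for q in accepted):
            accepted.append(p)
    return accepted

# usage: thin a dense random candidate set into a Poisson-disk-like pattern
pts = [(random.random(), random.random()) for _ in range(5000)]
samples = poisson_disk_by_priority(pts, radius=0.05)
```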

  13. A review of ocean chlorophyll algorithms and primary production models

    Science.gov (United States)

    Li, Jingwen; Zhou, Song; Lv, Nan

    2015-12-01

    This paper introduces five ocean chlorophyll concentration inversion algorithms and three main models for computing ocean primary production from ocean chlorophyll concentration. Through a comparison of the five chlorophyll inversion algorithms, it summarizes their advantages and disadvantages, and briefly analyzes trends in ocean primary production modelling.

  14. FHSA-SED: Two-Locus Model Detection for Genome-Wide Association Study with Harmony Search Algorithm.

    Directory of Open Access Journals (Sweden)

    Shouheng Tuo

    Full Text Available The two-locus model is a typical significant disease model to be identified in genome-wide association studies (GWAS). Due to the intensive computational burden and the diversity of disease models, existing methods suffer from low detection power, high computation cost, and a preference for some types of disease models. In this study, two scoring functions (the Bayesian-network-based K2-score and the Gini-score) are used to characterize two SNP loci as a candidate model; the two criteria are adopted simultaneously to improve identification power and to tackle the preference problem among disease models. The harmony search algorithm (HSA) is improved to quickly find the most likely candidate models among all two-locus models, in which a local search algorithm with a two-dimensional tabu table is presented to avoid repeatedly evaluating disease models that have a strong marginal effect. Finally, the G-test statistic is used to further test the candidate models. We investigate our method, named FHSA-SED, on 82 simulated datasets and a real AMD dataset, and compare it with two typical methods (MACOED and CSE) that have recently been developed based on swarm intelligent search algorithms. The results of the simulation experiments indicate that our method outperforms the two compared algorithms in terms of detection power, computation time, evaluation times, sensitivity (TPR), specificity (SPC), positive predictive value (PPV) and accuracy (ACC). Our method has identified two SNPs (rs3775652 and rs10511467) that may also be associated with disease in the AMD dataset.

  15. Performance comparison of attitude determination, attitude estimation, and nonlinear observers algorithms

    Science.gov (United States)

    MOHAMMED, M. A. SI; BOUSSADIA, H.; BELLAR, A.; ADNANE, A.

    2017-01-01

    This paper presents a brief synthesis and a performance analysis of different attitude filtering algorithms (attitude determination algorithms, attitude estimation algorithms, and nonlinear observers) applied to a Low Earth Orbit satellite, in terms of accuracy, convergence time, memory footprint, and computation time. The computation time is evaluated in two ways, using a personal computer and using the On-Board Computer 750 (OBC 750) employed in many SSTL Earth observation missions. This comparative study can serve as a design aid when choosing between attitude determination, attitude estimation, and attitude observer algorithms. The simulation results clearly indicate that the nonlinear observer is the most logical choice.

  16. Comparison of Clustering Algorithms for the Identification of Topics on Twitter

    Directory of Open Access Journals (Sweden)

    Marjori N. M. Klinczak

    2016-05-01

    Full Text Available Topic identification in social networks has become an important task when dealing with event detection, particularly when global communities are affected. To attack this problem, text processing techniques and machine learning algorithms have been extensively used. In this paper we compare four clustering algorithms – k-means, k-medoids, DBSCAN and NMF (Non-negative Matrix Factorization) – to detect topics in textual messages obtained from Twitter. The algorithms were applied to a database of tweets with hashtags related to the recent Nepal earthquake as the initial context. The results suggest that the NMF clustering algorithm performs best, providing simpler clusters that are also easier to interpret.
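    A minimal sketch of the NMF-based topic extraction idea, assuming a scikit-learn style pipeline and generic preprocessing (the paper's actual tweet cleaning and parameter choices are not reproduced):

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.decomposition import NMF

def nmf_topics(tweets, n_topics=5, n_top_words=8):
    """Factorize a TF-IDF tweet-term matrix into topics and return, for each
    topic, its highest-weighted terms."""
    vectorizer = TfidfVectorizer(max_df=0.9, min_df=2, stop_words="english")
    X = vectorizer.fit_transform(tweets)
    model = NMF(n_components=n_topics, init="nndsvd", random_state=0)
    model.fit(X)
    terms = vectorizer.get_feature_names_out()
    return [[terms[i] for i in topic.argsort()[-n_top_words:][::-1]]
            for topic in model.components_]
```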

  17. Hybrid Artificial Bee Colony Algorithm and Particle Swarm Search for Global Optimization

    Directory of Open Access Journals (Sweden)

    Wang Chun-Feng

    2014-01-01

    Full Text Available The artificial bee colony (ABC) algorithm is one of the most recent swarm-intelligence-based algorithms and has been shown to be competitive with other population-based algorithms. However, its solution search equation remains insufficient: it is good at exploration but poor at exploitation. To overcome this problem, we propose a novel artificial bee colony algorithm based on a particle swarm search mechanism. In this algorithm, to improve the convergence speed, the initial population is first generated using good point set theory rather than random selection. Secondly, to enhance the exploitation ability, the employed bees, onlookers, and scouts utilize the PSO mechanism to search for new candidate solutions. Finally, to further improve the searching ability, a chaotic search operator is applied to the best solution of the current iteration. Our algorithm is tested on several well-known benchmark functions and compared with other algorithms. The results show that our algorithm performs well.

  18. A new cut-based algorithm for the multi-state flow network reliability problem

    International Nuclear Information System (INIS)

    Yeh, Wei-Chang; Bae, Changseok; Huang, Chia-Ling

    2015-01-01

    Many real-world systems can be modeled as multi-state network systems in which reliability can be derived in terms of the lower bound points of level d, called d-minimal cuts (d-MCs). This study proposes a new method to find and verify obtained d-MCs with simple and useful found properties for the multi-state flow network reliability problem. The proposed algorithm runs in O(mσp) time, which represents a significant improvement over the previous O(mp²σ) time bound based on max-flow/min-cut, where p, σ and m denote the number of MCs, d-MC candidates and edges, respectively. The proposed algorithm also conquers the weakness of some existing methods, which failed to remove duplicate d-MCs in special cases. A step-by-step example is given to demonstrate how the proposed algorithm locates and verifies all d-MC candidates. As evidence of the utility of the proposed approach, we present extensive computational results on 20 benchmark networks in another example. The computational results compare favorably with a previously developed algorithm in the literature. - Highlights: • A new method is proposed to find all d-MCs for the multi-state flow networks. • The proposed method can prevent the generation of d-MC duplicates. • The proposed method is simpler and more efficient than the best-known algorithms.

  19. An Enhanced Hybrid Social Based Routing Algorithm for MANET-DTN

    Directory of Open Access Journals (Sweden)

    Martin Matis

    2016-01-01

    Full Text Available A new routing algorithm for mobile ad hoc networks is proposed in this paper: an Enhanced Hybrid Social Based Routing (HSBR) algorithm for MANET-DTN, as an optimal solution for well-connected multihop mobile networks (MANET) and/or poorly connected MANETs with a small density of nodes and/or MANETs fragmented by mobility into two or more subnetworks or islands. The proposed HSBR algorithm is fully decentralized, combining the main features of both Dynamic Source Routing (DSR) and Social Based Opportunistic Routing (SBOR). The proposed scheme is simulated and evaluated by replaying real-life traces that exhibit this highly dynamic topology. The new HSBR algorithm was evaluated by comparison with DSR and SBOR, with all methods simulated at different levels of velocity. The results show that HSBR has the highest packet delivery success, with higher delay than DSR but much lower delay than SBOR. The simulation results indicate that the HSBR approach is applicable in networks where MANET or DTN solutions alone are useless or ineffective. This method provides message delivery in every possible situation in areas without infrastructure and can be used as a backup method in disaster situations when the infrastructure is destroyed.

  20. Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches

    Directory of Open Access Journals (Sweden)

    Ufuk Çelik

    2015-01-01

    Full Text Available The present study evaluated the diagnostic accuracy of immune system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. They are divided into four types: migraine, tension, cluster, and other primary headaches. With this main objective in mind, three different neurologists were asked to enter the medical records of 850 patients into our web-based expert system hosted on the project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. AIS are classification algorithms inspired by the biological immune system, which has significant and distinctive capabilities. These algorithms simulate immune system properties such as discrimination, learning, and memorization so that they can be used for classification, optimization, or pattern recognition. According to the results, the accuracy of the classifiers used in this study ranged from 95% to 99%, except for one, which yielded 71% accuracy.

  1. Algorithm of pulmonary emphysema extraction using thoracic 3D CT images

    Science.gov (United States)

    Saita, Shinsuke; Kubo, Mitsuru; Kawata, Yoshiki; Niki, Noboru; Nakano, Yasutaka; Ohmatsu, Hironobu; Tominaga, Keigo; Eguchi, Kenji; Moriyama, Noriyuki

    2007-03-01

    Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and evaluating their distribution patterns using low-dose thoracic 3-D CT images. The algorithm identifies lung anatomies and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.
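    The low attenuation area (LAA) extraction step can be illustrated with a few lines of NumPy. The -950 HU cutoff below is a commonly used threshold and merely an assumption; the paper's lung segmentation and exact threshold are not reproduced.

```python
import numpy as np

def low_attenuation_fraction(ct_hu, lung_mask, threshold_hu=-950):
    """Fraction of lung voxels below a low-attenuation threshold (LAA%).
    ct_hu: 3-D array of Hounsfield units; lung_mask: boolean array of lung voxels."""
    lung_voxels = ct_hu[lung_mask]
    laa = lung_voxels < threshold_hu
    return laa.sum() / lung_voxels.size
```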

  2. TauFinder: A Reconstruction Algorithm for τ Leptons at Linear Colliders

    CERN Document Server

    Muennich, A

    2010-01-01

    An algorithm to find and reconstruct τ leptons was developed, targeting τs that produce highly energetic, low-multiplicity jets, as can be observed in multi-TeV e+e− collisions. However, it makes no assumption about the decay of the τ candidate, thus finding hadronic as well as leptonic decays. The algorithm delivers a reconstructed τ as seen by the detector. This note provides an overview of the algorithm and the cuts used, and gives some evaluation of the performance. A first implementation is available within the ILC software framework as a MARLIN processor. Appendix A is intended as a short user manual.

  3. A Robust Formant Extraction Algorithm Combining Spectral Peak Picking and Root Polishing

    Directory of Open Access Journals (Sweden)

    Seo Kwang-deok

    2006-01-01

    Full Text Available We propose a robust formant extraction algorithm that combines spectral peak picking, examination of formant locations for peak-merger checking, and root extraction. The spectral peak picking method is employed to locate the formant candidates, and root extraction is used to resolve the peak-merger problem. The locations of, and distances between, the extracted formants are also utilized to efficiently detect suspected peak mergers. The proposed algorithm does not require much computation and is shown to be superior to previous formant extraction algorithms through extensive tests using the TIMIT speech database.

  4. Actigraphy-based sleep estimation in adolescents and adults: a comparison with polysomnography using two scoring algorithms

    Directory of Open Access Journals (Sweden)

    Quante M

    2018-01-01

    Full Text Available Mirja Quante,1–3 Emily R Kaplan,2 Michael Cailler,2 Michael Rueschman,2 Rui Wang,2–5 Jia Weng,2 Elsie M Taveras,3,5,6 Susan Redline2,3,7 1Department of Neonatology, University of Tuebingen, Tuebingen, Germany; 2Division of Sleep and Circadian Disorders, Departments of Medicine and Neurology, Brigham and Women’s Hospital, Boston, MA, USA; 3Harvard Medical School, Boston, MA, USA; 4Department of Biostatistics, Harvard T. H. Chan School of Public Health, Boston, MA, USA; 5Department of Population Medicine, Harvard Medical School and The Harvard Pilgrim Health Care Institute, Boston, MA, USA; 6Division of General Academic Pediatrics, Department of Pediatrics, MassGeneral Hospital for Children, Boston, MA, USA; 7Department of Medicine, Beth Israel Deaconess Medical Center, Boston, MA, USA Objectives: Actigraphy is widely used to estimate sleep–wake time, despite limited information regarding the comparability of different devices and algorithms. We compared estimates of sleep–wake times determined by two wrist actigraphs (GT3X+ versus Actiwatch Spectrum [AWS]) to in-home polysomnography (PSG), using two algorithms (Sadeh and Cole–Kripke) for the GT3X+ recordings. Subjects and methods: Participants included a sample of 35 healthy volunteers (13 school children and 22 adults, 46% male) from Boston, MA, USA. Twenty-two adults wore the GT3X+ and AWS simultaneously for at least five consecutive days and nights. In addition, actigraphy and PSG were concurrently measured in 12 of these adults and another 13 children over a single night. We used intraclass correlation coefficients (ICCs), epoch-by-epoch comparisons, paired t-tests, and Bland–Altman plots to determine the level of agreement between actigraphy and PSG, and differences between devices and algorithms. Results: Each actigraph showed comparable accuracy (0.81–0.86) for sleep–wake estimation compared to PSG. When analyzing data from the GT3X+, the Cole–Kripke algorithm was more

  5. Centroid vetting of transiting planet candidates from the Next Generation Transit Survey

    Science.gov (United States)

    Günther, Maximilian N.; Queloz, Didier; Gillen, Edward; McCormac, James; Bayliss, Daniel; Bouchy, Francois; Walker, Simon. R.; West, Richard G.; Eigmüller, Philipp; Smith, Alexis M. S.; Armstrong, David J.; Burleigh, Matthew; Casewell, Sarah L.; Chaushev, Alexander P.; Goad, Michael R.; Grange, Andrew; Jackman, James; Jenkins, James S.; Louden, Tom; Moyano, Maximiliano; Pollacco, Don; Poppenhaeger, Katja; Rauer, Heike; Raynard, Liam; Thompson, Andrew P. G.; Udry, Stéphane; Watson, Christopher A.; Wheatley, Peter J.

    2017-11-01

    The Next Generation Transit Survey (NGTS), operating in Paranal since 2016, is a wide-field survey to detect Neptunes and super-Earths transiting bright stars, which are suitable for precise radial velocity follow-up and characterization. Its sub-mmag photometric precision and ability to identify false positives are therefore crucial. In particular, variable background objects blended in the photometric aperture frequently mimic Neptune-sized transits and are costly in follow-up time. These objects can best be identified with the centroiding technique: if the photometric flux is lost off-centre during an eclipse, the flux centroid shifts towards the centre of the target star. Although this method has successfully been employed by the Kepler mission, it has previously not been implemented from the ground. We present a fully automated centroid vetting algorithm developed for NGTS, enabled by our high-precision autoguiding. Our method allows detecting centroid shifts with an average precision of 0.75 milli-pixel (mpix), and down to 0.25 mpix for specific targets, for a pixel size of 4.97 arcsec. The algorithm is now part of the NGTS candidate vetting pipeline and automatically employed for all detected signals. Further, we develop a joint Bayesian fitting model for all photometric and centroid data, allowing us to disentangle which object (target or background) is causing the signal, and what its astrophysical parameters are. We demonstrate our method on two NGTS objects of interest. These achievements make NGTS the first ground-based wide-field transit survey to successfully apply the centroiding technique for automated candidate vetting, enabling the production of a robust candidate list before follow-up.
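    A minimal sketch of the underlying centroid measurement, assuming background-subtracted postage-stamp images and a known in-transit mask (NGTS's actual pipeline, autoguiding corrections and Bayesian model are far more involved):

```python
import numpy as np

def flux_centroid(image):
    """Flux-weighted centroid (x, y) of a postage-stamp image."""
    total = image.sum()
    ys, xs = np.indices(image.shape)
    return (xs * image).sum() / total, (ys * image).sum() / total

def centroid_shift(images, in_transit):
    """Mean centroid difference between in-transit and out-of-transit frames.
    images: array (n_frames, ny, nx); in_transit: boolean array of length n_frames."""
    cents = np.array([flux_centroid(im) for im in images])
    return cents[in_transit].mean(axis=0) - cents[~in_transit].mean(axis=0)
```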

  6. A Comparison of the Effects of K-Anonymity on Machine Learning Algorithms

    OpenAIRE

    Hayden Wimmer; Loreen Powell

    2014-01-01

    While research has been conducted in machine learning algorithms and in privacy preserving in data mining (PPDM), a gap in the literature exists which combines the aforementioned areas to determine how PPDM affects common machine learning algorithms. The aim of this research is to narrow this literature gap by investigating how a common PPDM algorithm, K-Anonymity, affects common machine learning and data mining algorithms, namely neural networks, logistic regression, decision trees, and Baye...

  7. Algorithmic parameterization of mixed treatment comparisons

    NARCIS (Netherlands)

    G. van Valkenhoef (Gert); T. Tervonen (Tommi); B. de Brock (Bert)

    2012-01-01

    Mixed Treatment Comparisons (MTCs) enable the simultaneous meta-analysis (data pooling) of networks of clinical trials comparing ≥2 alternative treatments. Inconsistency models are critical in MTC to assess the overall consistency between evidence sources. Only in the absence of

  8. Comparison of Greedy Algorithms for Decision Tree Optimization

    KAUST Repository

    Alkhalid, Abdulaziz; Chikalov, Igor; Moshkov, Mikhail

    2013-01-01

    This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values

  9. Comparison of Nonequilibrium Solution Algorithms Applied to Chemically Stiff Hypersonic Flows

    Science.gov (United States)

    Palmer, Grant; Venkatapathy, Ethiraj

    1995-01-01

    Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel, are used to compute nonequilibrium flow around the Apollo 4 return capsule at the 62-km altitude point in its descent trajectory. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15 and 30, the lower-upper symmetric Gauss-Seidel method produces an eight-order-of-magnitude drop in the energy residual in one-third to one-half the Cray C-90 computer time compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 30 and above. At Mach 40 the performance of the lower-upper symmetric Gauss-Seidel algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.

  10. A new simple iterative reconstruction algorithm for SPECT transmission measurement

    International Nuclear Information System (INIS)

    Hwang, D.S.; Zeng, G.L.

    2005-01-01

    This paper proposes a new iterative reconstruction algorithm for transmission tomography and compares it with several other methods. The new algorithm is simple and resembles the emission ML-EM algorithm in form. Due to its simplicity, it is easy to implement and each iterative update is fast to compute. The algorithm also always guarantees non-negative solutions. Evaluations are performed using simulation studies and real phantom data. Comparisons with other algorithms, such as the convex, gradient, and logMLEM methods, show that the proposed algorithm is as good as the others and performs better in some cases.
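    For reference, the classic emission ML-EM update that the proposed transmission algorithm is said to resemble in form can be sketched as follows; this is the textbook emission update, not the paper's new transmission update.

```python
import numpy as np

def mlem(system_matrix, projections, n_iters=50):
    """Classic emission ML-EM reconstruction.
    system_matrix: (n_bins, n_voxels) projector; projections: measured counts."""
    A = system_matrix
    x = np.ones(A.shape[1])                 # non-negative initial image
    sens = A.sum(axis=0)                    # sensitivity image
    for _ in range(n_iters):
        forward = A @ x
        ratio = projections / np.maximum(forward, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)   # multiplicative update keeps x >= 0
    return x
```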

  11. Algorithmic parameterization of mixed treatment comparisons

    NARCIS (Netherlands)

    van Valkenhoef, Gert; Tervonen, Tommi; de Brock, Bert; Hillege, Hans

    Mixed Treatment Comparisons (MTCs) enable the simultaneous meta-analysis (data pooling) of networks of clinical trials comparing ≥2 alternative treatments. Inconsistency models are critical in MTC to assess the overall consistency between evidence sources. Only in the absence

  12. Comparison of algorithms for blood stain detection applied to forensic hyperspectral imagery

    Science.gov (United States)

    Yang, Jie; Messinger, David W.; Mathew, Jobin J.; Dube, Roger R.

    2016-05-01

    Blood stains are among the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Early detection of blood stains is particularly important since the blood reacts physically and chemically with air and materials over time. Accurate identification of blood remnants, including regions that might have been intentionally cleaned, is an important aspect of forensic investigation. Hyperspectral imaging is a potential method for detecting blood stains because it is non-contact and provides substantial spectral information that can be used to identify regions in a scene with trace amounts of blood. Crime scenes can be highly complex, given the range of scene material types and conditions on which blood stains may be found. Some stains are hard to detect by the unaided eye, especially if a conscious effort to clean the scene has occurred (we refer to these as "latent" blood stains). In this paper we present the initial results of a study of the use of hyperspectral imaging algorithms for blood detection in complex scenes. We describe a hyperspectral imaging system which generates images covering the 400 nm - 700 nm visible range with a spectral resolution of 10 nm. Three image sets of 31 wavelength bands were generated using this camera for a simulated indoor crime scene in which blood stains were placed on a T-shirt and walls. To detect blood stains in the scene, Principal Component Analysis (PCA), Subspace Reed Xiaoli Detection (SRXD), and Topological Anomaly Detection (TAD) algorithms were used. Comparison of the three hyperspectral image analysis techniques shows that TAD is most suitable for detecting blood stains and discovering latent blood stains.
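    As a simplified illustration of anomaly-based stain detection, the sketch below implements the basic global RX detector, a simpler relative of the subspace RX (SRXD) used in the paper; the regularization constant and data layout are assumptions.

```python
import numpy as np

def rx_anomaly_scores(cube):
    """Global RX detector: Mahalanobis distance of each pixel spectrum from the
    scene mean. cube: (rows, cols, bands) hyperspectral image."""
    rows, cols, bands = cube.shape
    X = cube.reshape(-1, bands).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(bands)   # regularize for stability
    inv_cov = np.linalg.inv(cov)
    d = X - mu
    scores = np.einsum("ij,jk,ik->i", d, inv_cov, d)        # per-pixel Mahalanobis distance
    return scores.reshape(rows, cols)
```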

  13. PCTFPeval: a web tool for benchmarking newly developed algorithms for predicting cooperative transcription factor pairs in yeast.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Wu, Wei-Sheng

    2015-01-01

    Computational identification of cooperative transcription factor (TF) pairs helps understand the combinatorial regulation of gene expression in eukaryotic cells. Many advanced algorithms have been proposed to predict cooperative TF pairs in yeast. However, it is still difficult to conduct a comprehensive and objective performance comparison of different algorithms because of lacking sufficient performance indices and adequate overall performance scores. To solve this problem, in our previous study (published in BMC Systems Biology 2014), we adopted/proposed eight performance indices and designed two overall performance scores to compare the performance of 14 existing algorithms for predicting cooperative TF pairs in yeast. Most importantly, our performance comparison framework can be applied to comprehensively and objectively evaluate the performance of a newly developed algorithm. However, to use our framework, researchers have to put a lot of effort to construct it first. To save researchers time and effort, here we develop a web tool to implement our performance comparison framework, featuring fast data processing, a comprehensive performance comparison and an easy-to-use web interface. The developed tool is called PCTFPeval (Predicted Cooperative TF Pair evaluator), written in PHP and Python programming languages. The friendly web interface allows users to input a list of predicted cooperative TF pairs from their algorithm and select (i) the compared algorithms among the 15 existing algorithms, (ii) the performance indices among the eight existing indices, and (iii) the overall performance scores from two possible choices. The comprehensive performance comparison results are then generated in tens of seconds and shown as both bar charts and tables. The original comparison results of each compared algorithm and each selected performance index can be downloaded as text files for further analyses. Allowing users to select eight existing performance indices and 15

  14. Comparison of genetic algorithm and imperialist competitive algorithms in predicting bed load transport in clean pipe.

    Science.gov (United States)

    Ebtehaj, Isa; Bonakdari, Hossein

    2014-01-01

    The existence of sediments in wastewater greatly affects the performance of sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. This article reviews the performance of the genetic algorithm (GA) and the imperialist competitive algorithm (ICA) in minimizing the target function (the mean square error of observed and predicted Froude numbers). To study the impact of the bed load transport parameters, six different models have been presented using four non-dimensional groups. Moreover, the roulette wheel selection method is used to select the parents. For the selected model, the ICA (root mean square error (RMSE) = 0.007, mean absolute percentage error (MAPE) = 3.5%) shows better results than the GA (RMSE = 0.007, MAPE = 5.6%). For all six models, the ICA returns better results than the GA. The results of these two algorithms were also compared with a multi-layer perceptron and existing equations.

  15. On the performance of pre-microRNA detection algorithms

    DEFF Research Database (Denmark)

    Saçar Demirci, Müşerref Duygu; Baumbach, Jan; Allmer, Jens

    2017-01-01

    assess 13 ab initio pre-miRNA detection approaches using all relevant, published, and novel data sets while judging algorithm performance based on ten intrinsic performance measures. We present an extensible framework, izMiR, which allows for the unbiased comparison of existing algorithms, adding new...

  16. Simulation System of Car Crash Test in C-NCAP Analysis Based on an Improved Apriori Algorithm*

    Science.gov (United States)

    Xiang, LI

    In order to analyze car crash tests in C-NCAP, an improved algorithm based on the Apriori algorithm is given in this paper. The new algorithm is implemented with a vertical data layout, breadth-first searching, and intersection. It takes advantage of the efficiency of the vertical data layout and intersection, and prunes candidate frequent itemsets as Apriori does. Finally, the new algorithm is applied in a simulation system for car crash test analysis. The result shows that the discovered relations affect the C-NCAP test results and can provide a reference for automotive design.
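    The vertical-layout-with-intersection idea can be sketched as follows: each itemset is represented by the set of transaction ids (TIDs) containing it, candidate supports come from TID-set intersections, and candidates below the support threshold are pruned Apriori-style. This is a generic illustration, not the paper's implementation.

```python
from collections import defaultdict

def vertical_frequent_itemsets(transactions, min_support):
    """Frequent itemset mining on a vertical data layout (Eclat-like sketch).
    transactions: list of item collections; min_support: fraction in [0, 1]."""
    tidsets = defaultdict(set)
    for tid, items in enumerate(transactions):
        for item in items:
            tidsets[frozenset([item])].add(tid)
    min_count = min_support * len(transactions)
    level = {iset: tids for iset, tids in tidsets.items() if len(tids) >= min_count}
    frequent = dict(level)
    while level:
        next_level = {}
        keys = list(level)
        for i in range(len(keys)):
            for j in range(i + 1, len(keys)):
                union = keys[i] | keys[j]
                if len(union) == len(keys[i]) + 1 and union not in next_level:
                    tids = level[keys[i]] & level[keys[j]]   # intersection gives support
                    if len(tids) >= min_count:
                        next_level[union] = tids
        frequent.update(next_level)
        level = next_level
    return {iset: len(tids) for iset, tids in frequent.items()}
```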

  17. A Novel Approach for Bi-Level Segmentation of Tuberculosis Bacilli Based on Meta-Heuristic Algorithms

    Directory of Open Access Journals (Sweden)

    AYAS, S.

    2018-02-01

    Full Text Available Image thresholding is the most crucial step in microscopic image analysis to distinguish the bacilli objects that cause tuberculosis. Therefore, several bi-level thresholding algorithms are widely used to increase bacilli segmentation accuracy. However, the bi-level microscopic image thresholding problem has not previously been solved using optimization algorithms. This paper introduces a novel approach to the segmentation problem using heuristic algorithms and presents visual and quantitative comparisons of heuristic and state-of-the-art thresholding algorithms. In this study, well-known heuristic algorithms such as the Firefly Algorithm, Particle Swarm Optimization, Cuckoo Search, and Flower Pollination are used to solve the bi-level microscopic image thresholding problem, and the results are compared with state-of-the-art thresholding algorithms such as K-Means, Fuzzy C-Means, and Fast Marching. Kapur's entropy is chosen as the entropy measure to be maximized. Experiments are performed to make comparisons in terms of evaluation metrics and execution time. The quantitative results are calculated based on ground truth segmentation. According to the visual results, the heuristic algorithms have better performance, and the quantitative results are in accord with the visual results. Furthermore, experimental time comparisons show the superiority and effectiveness of the heuristic algorithms over traditional thresholding algorithms.
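    For a single threshold, the Kapur's-entropy objective that the heuristic algorithms maximize can even be evaluated exhaustively; the sketch below shows that objective in its bi-level form (the grey-level range and histogram binning are assumptions).

```python
import numpy as np

def kapur_threshold(gray_image):
    """Bi-level threshold maximizing Kapur's entropy, found by exhaustive search
    over all grey levels (the paper maximizes the same objective with heuristics)."""
    hist, _ = np.histogram(gray_image, bins=256, range=(0, 256))
    p = hist / hist.sum()
    best_t, best_entropy = 0, -np.inf
    for t in range(1, 255):
        w0, w1 = p[:t].sum(), p[t:].sum()
        if w0 <= 0 or w1 <= 0:
            continue
        p0, p1 = p[:t] / w0, p[t:] / w1          # class-conditional distributions
        h0 = -np.sum(p0[p0 > 0] * np.log(p0[p0 > 0]))
        h1 = -np.sum(p1[p1 > 0] * np.log(p1[p1 > 0]))
        if h0 + h1 > best_entropy:
            best_entropy, best_t = h0 + h1, t
    return best_t
```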

  18. Comparison of SAR calculation algorithms for the finite-difference time-domain method

    International Nuclear Information System (INIS)

    Laakso, Ilkka; Uusitupa, Tero; Ilvonen, Sami

    2010-01-01

    Finite-difference time-domain (FDTD) simulations of specific-absorption rate (SAR) have several uncertainty factors. For example, significantly varying SAR values may result from the use of different algorithms for determining the SAR from the FDTD electric field. The objective of this paper is to rigorously study the divergence of SAR values due to different SAR calculation algorithms and to examine if some SAR calculation algorithm should be preferred over others. For this purpose, numerical FDTD results are compared to analytical solutions in a one-dimensional layered model and a three-dimensional spherical object. Additionally, the implications of SAR calculation algorithms for dosimetry of anatomically realistic whole-body models are studied. The results show that the trapezium algorithm-based on the trapezium integration rule-is always conservative compared to the analytic solution, making it a good choice for worst-case exposure assessment. In contrast, the mid-ordinate algorithm-named after the mid-ordinate integration rule-usually underestimates the analytic SAR. The linear algorithm-which is approximately a weighted average of the two-seems to be the most accurate choice overall, typically giving the best fit with the shape of the analytic SAR distribution. For anatomically realistic models, the whole-body SAR difference between different algorithms is relatively independent of the used body model, incident direction and polarization of the plane wave. The main factors affecting the difference are cell size and frequency. The choice of the SAR calculation algorithm is an important simulation parameter in high-frequency FDTD SAR calculations, and it should be explained to allow intercomparison of the results between different studies. (note)

  19. A Multi-Scale Settlement Matching Algorithm Based on ARG

    Directory of Open Access Journals (Sweden)

    H. Yue

    2016-06-01

    Full Text Available Homonymous entity matching is an important part of multi-source spatial data integration, automatic updating and change detection. Considering the low accuracy of existing methods in matching multi-scale settlement data, an algorithm based on the Attributed Relational Graph (ARG) is proposed. The algorithm first divides the two settlement scenes at different scales into blocks using the small-scale road network and constructs local ARGs in each block. It then ascertains candidate sets through merging procedures and obtains the optimal matching pairs by iteratively comparing the similarity of the ARGs. Finally, the correspondence relations between settlements at large and small scales are identified. A demonstration at the end of the article indicates that the proposed algorithm is capable of handling sophisticated cases.

  20. A benchmark for comparison of cell tracking algorithms

    NARCIS (Netherlands)

    M. Maška (Martin); V. Ulman (Vladimír); K. Svoboda; P. Matula (Pavel); P. Matula (Petr); C. Ederra (Cristina); A. Urbiola (Ainhoa); T. España (Tomás); R. Venkatesan (Rajkumar); D.M.W. Balak (Deepak); P. Karas (Pavel); T. Bolcková (Tereza); M. Štreitová (Markéta); C. Carthel (Craig); S. Coraluppi (Stefano); N. Harder (Nathalie); K. Rohr (Karl); K.E.G. Magnusson (Klas E.); J. Jaldén (Joakim); H.M. Blau (Helen); O.M. Dzyubachyk (Oleh); P. Křížek (Pavel); G.M. Hagen (Guy); D. Pastor-Escuredo (David); D. Jimenez-Carretero (Daniel); M.J. Ledesma-Carbayo (Maria); A. Muñoz-Barrutia (Arrate); E. Meijering (Erik); M. Kozubek (Michal); C. Ortiz-De-Solorzano (Carlos)

    2014-01-01

    Motivation: Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International

  1. A new warfarin dosing algorithm including VKORC1 3730 G > A polymorphism: comparison with results obtained by other published algorithms.

    Science.gov (United States)

    Cini, Michela; Legnani, Cristina; Cosmi, Benilde; Guazzaloca, Giuliana; Valdrè, Lelia; Frascaro, Mirella; Palareti, Gualtiero

    2012-08-01

    Warfarin dosing is affected by clinical and genetic variants, but the contribution of the genotype associated with warfarin resistance in pharmacogenetic algorithms has not been well assessed yet. We developed a new dosing algorithm including polymorphisms associated both with warfarin sensitivity and resistance in the Italian population, and its performance was compared with those of eight previously published algorithms. Clinical and genetic data (CYP2C9*2, CYP2C9*3, VKORC1 -1639 G > A, and VKORC1 3730 G > A) were used to elaborate the new algorithm. Derivation and validation groups comprised 55 (58.2% men, mean age 69 years) and 40 (57.5% men, mean age 70 years) patients, respectively, who were on stable anticoagulation therapy for at least 3 months with different oral anticoagulation therapy (OAT) indications. Performance of the new algorithm, evaluated with the mean absolute error (MAE), defined as the absolute value of the difference between observed daily maintenance dose and predicted daily dose, correlation with the observed dose, and the R² value, was comparable with or slightly lower than that obtained using the other algorithms. The new algorithm could correctly assign 53.3%, 50.0%, and 57.1% of patients to the low (≤25 mg/week), intermediate (26-44 mg/week) and high (≥45 mg/week) dosing range, respectively. Our data showed a significant increase in predictive accuracy among patients requiring high warfarin doses compared with the other algorithms (ranging from 0% to 28.6%). The algorithm including VKORC1 3730 G > A, associated with warfarin resistance, allowed a more accurate identification of resistant patients who require higher warfarin dosage.

  2. Comparison of several algorithms of the electric force calculation in particle plasma models

    International Nuclear Information System (INIS)

    Lachnitt, J; Hrach, R

    2014-01-01

    This work is devoted to plasma modelling using the technique of molecular dynamics. The crucial problem in most such models is the efficient calculation of the electric force. This is usually solved by using the particle-in-cell (PIC) algorithm. However, PIC is an approximate algorithm, as it underestimates the short-range interactions of charged particles. We propose a hybrid algorithm which adds these interactions to PIC. We then include this algorithm in a set of algorithms which we test against each other in a two-dimensional collisionless magnetized plasma model. Besides our hybrid algorithm, this set includes two variants of pure PIC and the direct application of Coulomb's law. We compare particle forces, particle trajectories, total energy conservation and the speed of the algorithms. We find that the hybrid algorithm can be a good replacement for the direct application of Coulomb's law (quite accurate and much faster). It is, however, probably unnecessary to use it in practical 2D models.

  3. Fundamental resource-allocating model in colleges and universities based on Immune Clone Algorithms

    Science.gov (United States)

    Ye, Mengdie

    2017-05-01

    In this thesis we map the optimal course arrangement onto the combination of antibodies and antigens, in analogy with Immune Clone Algorithms. Following the character of these algorithms, we apply cloning, clonal genes and clonal selection to arrange courses. The clone operator can combine evolutionary search with random search, and global search with local search. By cloning and mutating candidate solutions, we can find the global optimal solution quickly.

  4. An Autonomous Star Identification Algorithm Based on the Directed Circularity Pattern

    Directory of Open Access Journals (Sweden)

    J. Xie

    2012-07-01

    Full Text Available The accuracy of the angular distance may decrease due to many factors, for example if the parameters of the stellar camera are not calibrated on-orbit or the location accuracy of the star image points is low, which can lead to low success rates in star identification. A robust directed circularity pattern algorithm is proposed in this paper, developed on the basis of the matching probability algorithm. The improved algorithm retains the matching probability strategy to identify the master star, and constructs a directed circularity pattern with the adjacent stars for unitary matching. The candidate matching group with the longest chain is selected as the final result. Simulation experiments indicate that, compared with the original algorithm, the improved algorithm has a high identification success rate and high reliability. Experiments with real data are used to verify this.

  5. Analysis of decision fusion algorithms in handling uncertainties for integrated health monitoring systems

    Science.gov (United States)

    Zein-Sabatto, Saleh; Mikhail, Maged; Bodruzzaman, Mohammad; DeSimio, Martin; Derriso, Mark; Behbahani, Alireza

    2012-06-01

    It has been widely accepted that data fusion and information fusion methods can improve the accuracy and robustness of decision-making in structural health monitoring systems. It is arguably true, nonetheless, that decision-level fusion is equally beneficial when applied to integrated health monitoring systems. Several decisions at low levels of abstraction may be produced by different decision-makers; however, decision-level fusion is required at the final stage of the process to provide an accurate assessment of the health of the monitored system as a whole. An example of such integrated systems with complex decision-making scenarios is the integrated health monitoring of aircraft. A thorough understanding of the characteristics of the decision-fusion methodologies is a crucial step for successful implementation of such decision-fusion systems. In this paper, we first present the major information fusion methodologies reported in the literature, i.e., probabilistic, evidential, and artificial-intelligence-based methods. The theoretical basis and characteristics of these methodologies are explained and their performances are analyzed. Second, candidate methods from the above fusion methodologies, i.e., Bayesian, Dempster-Shafer, and fuzzy logic algorithms, are selected and their applications are extended to decision fusion. Finally, fusion algorithms are developed based on the selected fusion methods and their performance is tested on decisions generated from synthetic data and from experimental data. Also in this paper, a modeling methodology, i.e., the cloud model, for generating synthetic decisions is presented and used. Using the cloud model, both types of uncertainty involved in real decision-making, randomness and fuzziness, are modeled. Synthetic decisions are generated with an unbiased process and varying interaction complexities among decisions to provide for a fair performance comparison of the selected decision-fusion algorithms. For verification purposes
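    Of the candidate fusion methods named above, Dempster's rule of combination is perhaps the easiest to show compactly. The sketch below combines two mass functions over a common frame of discernment; the example masses are purely illustrative.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions over the same frame.
    Masses are dicts mapping frozenset hypotheses to belief mass."""
    combined, conflict = {}, 0.0
    for (a, ma), (b, mb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + ma * mb
        else:
            conflict += ma * mb                      # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("Total conflict: sources cannot be combined")
    return {h: m / (1.0 - conflict) for h, m in combined.items()}

# usage: two sources assessing the frame {healthy, faulty}
m_a = {frozenset({"healthy"}): 0.6, frozenset({"healthy", "faulty"}): 0.4}
m_b = {frozenset({"faulty"}): 0.3, frozenset({"healthy", "faulty"}): 0.7}
fused = dempster_combine(m_a, m_b)
```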

  6. Algorithm improvement for phase control of subharmonic buncher

    International Nuclear Information System (INIS)

    Zhang Junqiang; Yu Luyang; Yin Chongxian; Zhao Minghua; Zhong Shaopeng

    2011-01-01

    To realize digital phase control of the subharmonic buncher, a low-level radio frequency control system using down-converter and IQ modulator/demodulator techniques and a commercial PXI system was developed on the LabVIEW platform. A single-neuron adaptive PID (proportional-integral-derivative) control algorithm with self-learning ability was adopted, satisfying the phase stability requirements. Compared with the traditional PID algorithm in field testing, the new algorithm has good stability, fast response and strong anti-interference ability. (authors)
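    A generic, textbook-style sketch of a single-neuron adaptive PID controller with supervised Hebbian weight adaptation is shown below; the gains, learning rates and initial weights are placeholders, and the paper's LabVIEW implementation details are not reproduced.

```python
class SingleNeuronAdaptivePID:
    """Incremental PID whose three gains are the normalized weights of a single
    neuron, adapted online with a supervised Hebbian rule."""

    def __init__(self, K=0.1, eta=(0.2, 0.2, 0.2)):
        self.K = K                      # overall neuron gain (assumed value)
        self.eta = eta                  # learning rates for the P, I, D weights
        self.w = [0.3, 0.3, 0.3]        # initial weights (assumed)
        self.e1 = self.e2 = 0.0         # previous two errors
        self.u = 0.0                    # previous control output

    def step(self, error):
        x = [error - self.e1,                      # proportional-like input
             error,                                # integral-like input
             error - 2 * self.e1 + self.e2]        # derivative-like input
        # supervised Hebb learning driven by error and previous output
        for i in range(3):
            self.w[i] += self.eta[i] * error * self.u * x[i]
        norm = sum(abs(w) for w in self.w) or 1.0
        du = self.K * sum((w / norm) * xi for w, xi in zip(self.w, x))
        self.u += du
        self.e2, self.e1 = self.e1, error
        return self.u
```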

  7. A multi-band semi-analytical algorithm for estimating chlorophyll-a concentration in the Yellow River Estuary, China.

    Science.gov (United States)

    Chen, Jun; Quan, Wenting; Cui, Tingwei

    2015-01-01

    In this study, two sample semi-analytical algorithms and one new unified multi-band semi-analytical algorithm (UMSA) for estimating chlorophyll-a (Chla) concentration were constructed by specifying optimal wavelengths. The three semi-analytical algorithms, namely the three-band semi-analytical algorithm (TSA), the four-band semi-analytical algorithm (FSA), and the UMSA algorithm, were calibrated and validated on a dataset collected in the Yellow River Estuary between September 1 and 10, 2009. A comparison of the assessment accuracy of the TSA, FSA, and UMSA algorithms showed that the UMSA algorithm performed best. Using the UMSA algorithm to retrieve Chla concentration in the Yellow River Estuary decreased the NRMSE (normalized root mean square error) by 25.54% compared with the FSA algorithm and by 29.66% compared with the TSA algorithm. These are very significant improvements upon previous methods. Additionally, the study revealed that the TSA and FSA algorithms are merely specific forms of the UMSA algorithm. Owing to the special form of the UMSA algorithm, if the same bands were used for both the TSA and UMSA algorithms, or the FSA and UMSA algorithms, the UMSA algorithm would theoretically produce superior results to the TSA and FSA algorithms. Thus, good results may also be produced if the UMSA algorithm were applied to predict Chla concentration for the datasets of Gitelson et al. (2008) and Le et al. (2009).
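    The three-band semi-analytical form underlying the TSA is commonly written as Chla ≈ a·[Rrs(λ1)⁻¹ − Rrs(λ2)⁻¹]·Rrs(λ3) + b. A minimal sketch, with the calibration coefficients a and b left as assumptions to be fitted against in situ data:

```python
import numpy as np

def three_band_chla(r1, r2, r3, a=1.0, b=0.0):
    """Generic three-band semi-analytical index: Chla ≈ a * X + b with
    X = (1/Rrs(λ1) - 1/Rrs(λ2)) * Rrs(λ3). Inputs may be scalars or NumPy arrays."""
    x = (1.0 / np.asarray(r1) - 1.0 / np.asarray(r2)) * np.asarray(r3)
    return a * x + b
```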

  8. Service Composition Instantiation Based on Cross-Modified Artificial Bee Colony Algorithm

    Institute of Scientific and Technical Information of China (English)

    Lei Huo; Zhiliang Wang

    2016-01-01

    Internet of things (IoT) imposes new challenges on service composition, as it is difficult to quickly instantiate a complex service from a growing number of dynamic candidate services. A cross-modified Artificial Bee Colony algorithm (CMABC) is proposed to obtain optimal composite services in acceptable time and with high accuracy. First, a web service instantiation model was established. Furthermore, to overcome the problem of a discrete and chaotic solution space, the global optimal solution is used to accelerate the convergence rate by imitating the crossover operation of the Genetic Algorithm (GA). The simulation experiment results show that CMABC exhibits faster convergence speed and better convergence accuracy than some other intelligent optimization algorithms.

  9. Real time tracking by LOPF algorithm with mixture model

    Science.gov (United States)

    Meng, Bo; Zhu, Ming; Han, Guangliang; Wu, Zhiguo

    2007-11-01

    A new particle filter, the Local Optimum Particle Filter (LOPF) algorithm, is presented for tracking objects accurately and stably in visual sequences in real time, which is a challenging task in the computer vision field. In order to use the particles efficiently, we first use the Sobel operator to extract the profile of the object. Then, we employ a new local optimum algorithm to auto-initialize a certain number of particles from these edge points as particle centres. The main advantage of doing this, instead of selecting particles randomly as in the conventional particle filter, is that we can pay more attention to the more important optimum candidates and reduce unnecessary calculation on negligible ones; in addition, we can partly overcome the conventional degeneracy phenomenon and decrease the computational cost. The threshold is a key factor that strongly affects the results, so we adopt an adaptive threshold selection method to obtain the optimal Sobel result. The dissimilarities between the target model and the target candidates are expressed by a metric derived from the Bhattacharyya coefficient. Here, we use both the contour cue to select the particles and the colour cue to describe the targets as a mixture target model. The effectiveness of our scheme is demonstrated by real visual tracking experiments. Results from simulations and experiments with real video data show the improved performance of the proposed algorithm when compared with that of the standard particle filter. The superior performance is evident when the target encounters occlusion in real video, where the standard particle filter usually fails.
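    The similarity measure mentioned above, the Bhattacharyya coefficient between the target model and a candidate colour histogram, is simple to sketch (histogram construction and binning are left out):

```python
import numpy as np

def bhattacharyya_coefficient(hist_p, hist_q):
    """Similarity between two colour histograms; the distance used for particle
    weighting is typically sqrt(1 - coefficient)."""
    p = hist_p / hist_p.sum()
    q = hist_q / hist_q.sum()
    return np.sum(np.sqrt(p * q))
```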

  10. A comparison of optimization algorithms for localized in vivo B0 shimming.

    Science.gov (United States)

    Nassirpour, Sahar; Chang, Paul; Fillmer, Ariane; Henning, Anke

    2018-02-01

    To compare several different optimization algorithms currently used for localized in vivo B0 shimming, and to introduce a novel, fast, and robust constrained regularized algorithm (ConsTru) for this purpose. Ten different optimization algorithms (including samples from both generic and dedicated least-squares solvers, and a novel constrained regularized inversion method) were implemented and compared for shimming in five different shimming volumes on 66 in vivo data sets from both 7 T and 9.4 T. The best algorithm was chosen to perform single-voxel spectroscopy at 9.4 T in the frontal cortex of the brain on 10 volunteers. The performance tests showed that a shimming algorithm is prone to unstable solutions if it depends on the value of a starting point and is not regularized to handle ill-conditioned problems. The ConsTru algorithm proved to be the most robust, fast, and efficient among all of the chosen algorithms. It enabled acquisition of spectra of reproducibly high quality in the frontal cortex at 9.4 T. For localized in vivo B0 shimming, the use of a dedicated linear least-squares solver instead of a generic nonlinear one is highly recommended. Among all of the linear solvers, the constrained regularized method (ConsTru) was found to be both fast and most robust. Magn Reson Med 79:1145-1156, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
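    Although the exact ConsTru formulation is not given here, a constrained, Tikhonov-regularized linear least-squares shim calculation of the general kind described can be sketched as follows; the regularization weight and current limits are placeholder assumptions.

```python
import numpy as np
from scipy.optimize import lsq_linear

def shim_currents(basis, b0_map, lam=0.01, current_limit=1.0):
    """Minimize ||basis @ c + b0_map||^2 + lam*||c||^2 subject to |c| <= current_limit.
    basis: (n_voxels, n_channels) field produced by unit current in each shim channel;
    b0_map: measured off-resonance in the shim volume (n_voxels,)."""
    n = basis.shape[1]
    A = np.vstack([basis, np.sqrt(lam) * np.eye(n)])   # augment rows for regularization
    b = np.concatenate([-b0_map, np.zeros(n)])
    res = lsq_linear(A, b, bounds=(-current_limit, current_limit))
    return res.x
```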

  11. Genetic algorithm essentials

    CERN Document Server

    Kramer, Oliver

    2017-01-01

    This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.

  12. Phase Grouping Line Extraction Algorithm Using Overlapped Partition

    Directory of Open Access Journals (Sweden)

    WANG Jingxue

    2015-07-01

    Full Text Available To solve the problem of fracture at discontinuities and the challenge of line fitting in each partition, an innovative line extraction algorithm is proposed based on phase grouping with overlapped partitions. The proposed algorithm adopts a dual partitioning step, which generates eight overlapped partitions. Between the two steps, the middle axis in the first step coincides with the border lines in the second step. Firstly, connected edge points that share the same phase gradients are merged into line candidates and fitted into line segments. Then, to remedy broken lines at the border areas, the broken segments in the second partitioning step are refitted. The proposed algorithm is robust and does not need any parameter tuning. Experiments with various datasets have confirmed that the method is not only capable of handling linear features, but is also powerful enough to handle curve features.

  13. Comparison Performance of Genetic Algorithm and Ant Colony Optimization in Course Scheduling Optimizing

    Directory of Open Access Journals (Sweden)

    Imam Ahmad Ashari

    2016-11-01

    Full Text Available Scheduling at a university is a complex type of scheduling problem, and the scheduling process has to be carried out every semester. The core difficulty of course scheduling at a university is the number of components that need to be considered in making the schedule: students, lecturers, time and rooms, together with limits and conditions that must be respected so that there are no collisions in the schedule, such as double-booked rooms or double-booked lecturers. The most appropriate technique for resolving such a scheduling problem is optimization, which can give the best achievable results. Metaheuristic algorithms offer many ways to approach the optimal solution of a problem. In this paper, we use a genetic algorithm and an ant colony optimization algorithm, both metaheuristics, to solve the course scheduling problem. The two algorithms are tested and compared to determine which performs best. The algorithms were tested using course schedule data from a university in Semarang. From the experimental results we conclude that the genetic algorithm has better performance than the ant colony optimization algorithm in this course scheduling case.

  14. Application of the distributed genetic algorithm for loading pattern optimization problems

    International Nuclear Information System (INIS)

    Hashimoto, Hiroshi; Yamamoto, Akio

    2000-01-01

    The distributed genetic algorithm (DGA) is applied to loading pattern optimization problems of pressurized water reactors (PWRs). Due to the stiff nature of loading pattern optimization (e.g. multi-modality and non-linearity), stochastic methods such as simulated annealing or the genetic algorithm (GA) are widely applied to these problems. The basic concept of DGA is based on that of GA. However, DGA equally distributes candidate solutions (i.e. loading patterns) to several independent 'islands' and evolves them in each island, with migration of some candidates among islands at a certain period. Since candidate solutions evolve independently in each island while accepting different genes from migrants of other islands, the premature convergence seen in the traditional GA can be prevented. Because many candidate loading patterns must be evaluated in one generation of GA or DGA, parallelization of these calculations works efficiently. Parallel efficiency was measured using our optimization code, and a good load balance was attained even in a heterogeneous cluster environment owing to dynamic distribution of the calculation load. The optimization code is based on a client/server architecture with native TCP/IP sockets, in which a client (the optimization module) and the calculation server modules exchange loading pattern objects with each other. Through a sensitivity study on the optimization parameters of DGA, a suitable set of parameters for a test problem was identified. Finally, the optimization capabilities of DGA and the traditional GA were compared on the test problem, and DGA provided better optimization results than the traditional GA. (author)

  15. A Source Identification Algorithm for INTEGRAL

    Science.gov (United States)

    Scaringi, Simone; Bird, Antony J.; Clark, David J.; Dean, Anthony J.; Hill, Adam B.; McBride, Vanessa A.; Shaw, Simon E.

    2008-12-01

    We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using Random Forests, is applied to the IBIS/ISGRI dataset in order to ease the production of unbiased future soft gamma-ray source catalogues. The key steps of candidate searching, filtering and feature extraction are described. Three training and testing sets are created in order to deal with the diverse timescales and diverse objects encountered in the gamma-ray sky. Three independent Random Forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique, the Transient Matrix, is introduced and described. Finally, the performance of the network is assessed and discussed using the testing set and some illustrative source examples.
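
    As a loose illustration of the Random Forest classification step only, the sketch below trains a forest on synthetic per-candidate features; the features, labels and hyperparameters are placeholders and do not correspond to the actual IBIS/ISGRI quantities used by ISINA.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
# Hypothetical per-candidate features (e.g. detection significance, variability index, ...).
X = rng.normal(size=(n, 4))
# Synthetic label: 1 = "real source", 0 = "spurious candidate".
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
print("hold-out accuracy:", clf.score(X_te, y_te))
```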

  16. Halopentacenes: Promising Candidates for Organic Semiconductors

    International Nuclear Information System (INIS)

    Gong-He, Du; Zhao-Yu, Ren; Ji-Ming, Zheng; Ping, Guo

    2009-01-01

    We introduce polar substituents such as F, Cl and Br into pentacene to enhance its solubility in common organic solvents while retaining the high charge-carrier mobilities of pentacene. Geometric structures, dipole moments, frontier molecular orbitals, ionization potentials and electron affinities, as well as reorganization energies of these molecules, and of pentacene for comparison, are calculated by density functional theory. The results indicate that halopentacenes have rather small reorganization energies (< 0.2 eV), and that when the substituents are in position 2, or positions 2 and 9, the molecules are polar. We therefore conjecture that they can easily be dissolved in common organic solvents and are promising candidates for organic semiconductors. (condensed matter: electronic structure, electrical, magnetic, and optical properties)

  17. Analog Circuit Design Optimization Based on Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Mansour Barari

    2014-01-01

    Full Text Available This paper investigates an evolutionary design system for the automated sizing of analog integrated circuits (ICs). Two evolutionary algorithms, a genetic algorithm and particle swarm optimization (PSO), are proposed to design analog ICs with practical user-defined specifications. On the basis of a combination of HSPICE and MATLAB, the system links circuit performances, evaluated through electrical simulation, to the optimization system in the MATLAB environment for the selected topology. The system has been tested on typical and hard-to-design cases, such as complex analog blocks with stringent design requirements. The results show that the design specifications are closely met. Comparisons with available methods such as genetic algorithms show that the proposed algorithm offers important advantages in terms of optimization quality and robustness. Moreover, the algorithm is shown to be efficient.

  18. Distinguishing between cancer driver and passenger gene alteration candidates via cross-species comparison: a pilot study

    International Nuclear Information System (INIS)

    Ji, Xinglai; Tang, Jie; Halberg, Richard; Busam, Dana; Ferriera, Steve; Peña, Maria Marjorette O; Venkataramu, Chinnambally; Yeatman, Timothy J; Zhao, Shaying

    2010-01-01

    We are developing a cross-species comparison strategy to distinguish between cancer driver and passenger gene alteration candidates, by utilizing the difference in genomic location of orthologous genes between the human and other mammals. As an initial test of this strategy, we conducted a pilot study with human colorectal cancer (CRC) and its mouse model C57BL/6J ApcMin/+, focusing on human 5q22.2 and 18q21.1-q21.2. We first performed bioinformatics analysis on the evolution of the 5q22.2 and 18q21.1-q21.2 regions. Then, we performed exon-targeted sequencing, real time quantitative polymerase chain reaction (qPCR), and real time quantitative reverse transcriptase PCR (qRT-PCR) analyses on a number of genes of both regions with both human and mouse colon tumors. These two regions (5q22.2 and 18q21.1-q21.2) are frequently deleted in human CRCs and encode genuine colorectal tumor suppressors APC and SMAD4. They also encode genes such as MCC (mutated in colorectal cancer) whose role in CRC etiology is unknown. We have discovered that both regions are evolutionarily unstable, resulting in genes that are clustered in each human region being found scattered at several distinct loci in the genome of many other species. For instance, APC and MCC are within 200 kb of each other in human 5q22.2 but are 10 Mb apart in the mouse genome. Importantly, our analyses revealed that, while the known CRC driver genes APC and SMAD4 were disrupted in both human colorectal tumors and tumors from ApcMin/+ mice, the questionable MCC gene was disrupted in human tumors but appeared to be intact in mouse tumors. These results indicate that MCC may not actually play any causative role in early colorectal tumorigenesis. We also hypothesize that its disruption in human CRCs is likely a mere result of its close proximity to APC in the human genome. Expanding this pilot study to the entire genome may identify more questionable genes like MCC, facilitating the discovery of new CRC driver gene candidates.

  19. Distinguishing between cancer driver and passenger gene alteration candidates via cross-species comparison: a pilot study.

    Science.gov (United States)

    Ji, Xinglai; Tang, Jie; Halberg, Richard; Busam, Dana; Ferriera, Steve; Peña, Maria Marjorette O; Venkataramu, Chinnambally; Yeatman, Timothy J; Zhao, Shaying

    2010-08-13

    We are developing a cross-species comparison strategy to distinguish between cancer driver and passenger gene alteration candidates, by utilizing the difference in genomic location of orthologous genes between the human and other mammals. As an initial test of this strategy, we conducted a pilot study with human colorectal cancer (CRC) and its mouse model C57BL/6J ApcMin/+, focusing on human 5q22.2 and 18q21.1-q21.2. We first performed bioinformatics analysis on the evolution of the 5q22.2 and 18q21.1-q21.2 regions. Then, we performed exon-targeted sequencing, real time quantitative polymerase chain reaction (qPCR), and real time quantitative reverse transcriptase PCR (qRT-PCR) analyses on a number of genes of both regions with both human and mouse colon tumors. These two regions (5q22.2 and 18q21.1-q21.2) are frequently deleted in human CRCs and encode genuine colorectal tumor suppressors APC and SMAD4. They also encode genes such as MCC (mutated in colorectal cancer) whose role in CRC etiology is unknown. We have discovered that both regions are evolutionarily unstable, resulting in genes that are clustered in each human region being found scattered at several distinct loci in the genome of many other species. For instance, APC and MCC are within 200 kb of each other in human 5q22.2 but are 10 Mb apart in the mouse genome. Importantly, our analyses revealed that, while the known CRC driver genes APC and SMAD4 were disrupted in both human colorectal tumors and tumors from ApcMin/+ mice, the questionable MCC gene was disrupted in human tumors but appeared to be intact in mouse tumors. These results indicate that MCC may not actually play any causative role in early colorectal tumorigenesis. We also hypothesize that its disruption in human CRCs is likely a mere result of its close proximity to APC in the human genome. Expanding this pilot study to the entire genome may identify more questionable genes like MCC, facilitating the discovery of new CRC driver gene candidates.

  20. Performance comparison of some evolutionary algorithms on job shop scheduling problems

    Science.gov (United States)

    Mishra, S. K.; Rao, C. S. P.

    2016-09-01

    Job shop scheduling is a state-space search problem in the NP-hard category, owing to its complexity and the combinatorial explosion of states. Several nature-inspired evolutionary methods have been developed to solve job shop scheduling problems. In this paper the evolutionary methods Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization and Music Based Harmony Search are applied and fine-tuned to model and solve job shop scheduling problems. About 250 benchmark instances have been used to compare and evaluate the performance of these algorithms, and the capabilities of each of these algorithms in solving job shop scheduling problems are outlined.

  1. Differential Evolution Algorithm with Self-Adaptive Population Resizing Mechanism

    Directory of Open Access Journals (Sweden)

    Xu Wang

    2013-01-01

    Full Text Available A differential evolution (DE) algorithm with a self-adaptive population resizing mechanism, SapsDE, is proposed to enhance the performance of DE by dynamically choosing one of two mutation strategies and tuning the control parameters in a self-adaptive manner. More specifically, the more appropriate mutation strategy, along with its parameter settings, can be determined adaptively according to the previous status at different stages of the evolution process. To verify the performance of SapsDE, 17 benchmark functions with a wide range of dimensions and diverse complexities are used. Nonparametric statistical procedures were performed for multiple comparisons between the proposed algorithm and five well-known DE variants from the literature. Simulation results show that SapsDE is effective and efficient, and that it exhibits superior results compared with the other five algorithms employed in the comparison in most of the cases.

  2. VLSI PARTITIONING ALGORITHM WITH ADAPTIVE CONTROL PARAMETER

    Directory of Open Access Journals (Sweden)

    P. N. Filippenko

    2013-03-01

    Full Text Available The article deals with the problem of very large-scale integration circuit partitioning. A graph is selected as the mathematical model describing the integrated circuit. A modification of the ant colony optimization algorithm is presented and used to solve the graph partitioning problem. Ant colony optimization is an optimization method based on the principles of self-organization and other useful features of ant behavior. The proposed search system is based on the ant colony optimization algorithm with an improved method for the initial distribution and dynamic adjustment of the control search parameters. The experimental results and performance comparison show that the proposed method of very large-scale integration circuit partitioning provides better search performance than other well-known algorithms.

  3. Optimal algorithm for automatic detection of microaneurysms based on receiver operating characteristic curve

    Science.gov (United States)

    Xu, Lili; Luo, Shuqian

    2010-11-01

    Microaneurysms (MAs) are the first manifestations of diabetic retinopathy (DR) as well as an indicator of its progression. Their automatic detection plays a key role in both mass screening and monitoring and is therefore at the core of any system for computer-assisted diagnosis of DR. The algorithm basically comprises the following stages: candidate detection, aiming at extracting the patterns possibly corresponding to MAs based on the mathematical morphological black top-hat; feature extraction, to characterize these candidates; and classification based on a support vector machine (SVM), to validate the MAs. The selection of the feature vector and of the SVM kernel function is very important to the algorithm. We use the receiver operating characteristic (ROC) curve to evaluate the discriminating performance of different feature vectors and different SVM kernel functions. The ROC analysis indicates that the quadratic polynomial SVM with a combination of features as the input shows the best discriminating performance.
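
    The kernel-selection idea can be sketched as follows: train SVMs with different kernels on candidate features and compare their ROC performance (summarized here by the AUC). The synthetic features below are placeholders for the morphological candidate descriptors used in the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in for candidate feature vectors (1 = true MA, 0 = false candidate).
X, y = make_classification(n_samples=600, n_features=8, n_informative=4, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

for name, kernel_kwargs in [("linear", dict(kernel="linear")),
                            ("quadratic poly", dict(kernel="poly", degree=2))]:
    clf = SVC(probability=True, **kernel_kwargs).fit(X_tr, y_tr)
    auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
    print(f"{name} kernel: ROC AUC = {auc:.3f}")
```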

  4. Spatio-Temporal Evaluation and Comparison of MM5 Model using Similarity Algorithm

    Directory of Open Access Journals (Sweden)

    N. Siabi

    2016-02-01

    Full Text Available Introduction: The temporal and spatial variation of meteorological and environmental variables is very important. These changes can be predicted by numerical prediction models over time and at different locations, and can be provided as spatial zoning maps using interpolation methods such as geostatistics (16, 6). However, such maps can only be compared visually, qualitatively and one variable at a time, for a limited number of maps (15). To resolve this problem the similarity algorithm is used, a method for the simultaneous comparison of a large number of data (18). Numerical prediction models such as MM5 have been used in different studies (10, 22, 23), but little research has been done to quantitatively compare the spatio-temporal similarity of model output with observed data. The purpose of this paper is to integrate geostatistical techniques with the similarity algorithm in order to compare the spatial and temporal MM5 model predictions with real data. Materials and Methods: The study area is the northeast of Iran, spanning 55 to 61 degrees of longitude and 30 to 38 degrees of latitude. Monthly and annual observed temperature and precipitation data for the period 1990-2010 were obtained from the Meteorological Agency and the Department of Energy. MM5 model data, with a spatial resolution of 0.5 × 0.5 degree, were downloaded from the NASA website (5). GS+ and ArcGIS software were used to produce the map of each variable. We used the multivariate methods co-kriging and kriging with an external drift, applying topography and height as a secondary variable via a digital elevation model (6, 12, 14). The standardization and similarity algorithms (9, 11) were then applied to each map grid point through programming in MATLAB. The spatial and temporal similarities between the observations and the model results were expressed as F values, where a value below 0.2 indicates good similarity and a value above 0.5 indicates very poor similarity. The results were plotted on maps in MATLAB.

  5. Reactive power and voltage control based on general quantum genetic algorithms

    DEFF Research Database (Denmark)

    Vlachogiannis, Ioannis (John); Østergaard, Jacob

    2009-01-01

    This paper presents an improved evolutionary algorithm based on quantum computing for optimal steady-state performance of power systems. However, the proposed general quantum genetic algorithm (GQ-GA) can be applied in various combinatorial optimization problems. In this study the GQ-GA determines...... techniques such as enhanced GA, multi-objective evolutionary algorithm and particle swarm optimization algorithms, as well as the classical primal-dual interior-point optimal power flow algorithm. The comparison demonstrates the ability of the GQ-GA in reaching more optimal solutions....

  6. The Texas medication algorithm project: clinical results for schizophrenia.

    Science.gov (United States)

    Miller, Alexander L; Crismon, M Lynn; Rush, A John; Chiles, John; Kashner, T Michael; Toprac, Marcia; Carmody, Thomas; Biggs, Melanie; Shores-Wilson, Kathy; Chiles, Judith; Witte, Brad; Bow-Thomas, Christine; Velligan, Dawn I; Trivedi, Madhukar; Suppes, Trisha; Shon, Steven

    2004-01-01

    In the Texas Medication Algorithm Project (TMAP), patients were given algorithm-guided treatment (ALGO) or treatment as usual (TAU). The ALGO intervention included a clinical coordinator to assist the physicians and administer a patient and family education program. The primary comparison in the schizophrenia module of TMAP was between patients seen in clinics in which ALGO was used (n = 165) and patients seen in clinics in which no algorithms were used (n = 144). A third group of patients, seen in clinics using an algorithm for bipolar or major depressive disorder but not for schizophrenia, was also studied (n = 156). The ALGO group had modestly greater improvement in symptoms (Brief Psychiatric Rating Scale) during the first quarter of treatment. The TAU group caught up by the end of 12 months. Cognitive functions were more improved in ALGO than in TAU at 3 months, and this difference was greater at 9 months (the final cognitive assessment). In secondary comparisons of ALGO with the second TAU group, the greater improvement in cognitive functioning was again noted, but the initial symptom difference was not significant.

  7. Citizen Candidates Under Uncertainty

    OpenAIRE

    Eguia, Jon X.

    2005-01-01

    In this paper we make two contributions to the growing literature on "citizen-candidate" models of representative democracy. First, we add uncertainty about the total vote count. We show that in a society with a large electorate, where the outcome of the election is uncertain and where winning candidates receive a large reward from holding office, there will be a two-candidate equilibrium and no equilibria with a single candidate. Second, we introduce a new concept of equilibrium, which we te...

  8. Comparison of spike-sorting algorithms for future hardware implementation.

    Science.gov (United States)

    Gibson, Sarah; Judy, Jack W; Markovic, Dejan

    2008-01-01

    Applications such as brain-machine interfaces require hardware spike sorting in order to (1) obtain single-unit activity and (2) perform data reduction for wireless transmission of data. Such systems must be low-power, low-area, high-accuracy, automatic, and able to operate in real time. Several detection and feature extraction algorithms for spike sorting are described briefly and evaluated in terms of accuracy versus computational complexity. The nonlinear energy operator method is chosen as the optimal spike detection algorithm, being the most robust to noise and relatively simple. The discrete derivatives method [1] is chosen as the optimal feature extraction method, maintaining high accuracy across SNRs with a complexity orders of magnitude less than that of traditional methods such as PCA.
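
    The nonlinear energy operator mentioned above has a particularly simple form, psi[n] = x[n]^2 - x[n-1]*x[n+1], which is part of what makes it attractive for hardware. The sketch below applies it to a synthetic trace; the threshold constant C and the toy spike shapes are assumptions for illustration.

```python
import numpy as np

def neo_detect(x, C=8.0):
    # Nonlinear energy operator: psi[n] = x[n]^2 - x[n-1] * x[n+1].
    psi = np.zeros_like(x)
    psi[1:-1] = x[1:-1] ** 2 - x[:-2] * x[2:]
    # Samples whose "energy" exceeds C times the mean are flagged as spike samples.
    return np.where(psi > C * psi.mean())[0]

# Toy trace: Gaussian noise plus a few injected spike-like transients.
rng = np.random.default_rng(1)
x = rng.normal(scale=0.1, size=2000)
for t in (300, 900, 1500):
    x[t:t + 5] += np.array([0.5, 1.2, 2.0, 1.0, 0.4])

print("samples flagged as spikes:", neo_detect(x))
```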

  9. Application of Hybrid Genetic Algorithm Routine in Optimizing Food and Bioengineering Processes

    Directory of Open Access Journals (Sweden)

    Jaya Shankar Tumuluru

    2016-11-01

    Full Text Available Optimization is a crucial step in the analysis of experimental results. Deterministic methods only converge on local optima and require exponentially more time as dimensionality increases. Stochastic algorithms are capable of efficiently searching the domain space; however, convergence is not guaranteed. This article demonstrates the novelty of the hybrid genetic algorithm (HGA), which combines both stochastic and deterministic routines for improved optimization results. The new hybrid genetic algorithm is applied to the Ackley benchmark function as well as to case studies in food, biofuel, and biotechnology processes. For each case study, the hybrid genetic algorithm found a better optimum candidate than reported by the sources. In the case of food processing, the hybrid genetic algorithm improved the anthocyanin yield by 6.44%. Optimization of bio-oil production using HGA resulted in a 5.06% higher yield. In the enzyme production process, HGA predicted a 0.39% higher xylanase yield. Hybridization of the genetic algorithm with a deterministic algorithm resulted in an improved optimum compared to statistical methods.
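
    The two-stage idea (a stochastic global search handed off to a deterministic local refinement) can be sketched on the Ackley function as below. Note that SciPy's differential evolution stands in for the genetic stage and Nelder-Mead for the deterministic stage, so this is an illustration of the hybrid concept rather than the authors' HGA routine.

```python
import numpy as np
from scipy.optimize import differential_evolution, minimize

def ackley(x):
    # Standard Ackley benchmark; global minimum 0 at the origin.
    x = np.asarray(x)
    n = x.size
    return (-20.0 * np.exp(-0.2 * np.sqrt(np.sum(x ** 2) / n))
            - np.exp(np.sum(np.cos(2 * np.pi * x)) / n) + 20.0 + np.e)

bounds = [(-5.0, 5.0)] * 4

# Stage 1: evolutionary global search (stand-in for the GA stage).
global_stage = differential_evolution(ackley, bounds, maxiter=50, seed=0)

# Stage 2: deterministic local refinement starting from the evolutionary optimum.
local_stage = minimize(ackley, global_stage.x, method="Nelder-Mead")

print("after global stage:", global_stage.fun)
print("after local refinement:", local_stage.fun)
```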

  10. Research on Palmprint Identification Method Based on Quantum Algorithms

    Directory of Open Access Journals (Sweden)

    Hui Li

    2014-01-01

    Full Text Available Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can achieve better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that the quantum filter yields a better result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation thanks to quantum parallelism; the proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in the feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.

  11. Optimization of Pressurizer Based on Genetic-Simplex Algorithm

    International Nuclear Information System (INIS)

    Wang, Cheng; Yan, Chang Qi; Wang, Jian Jun

    2014-01-01

    The pressurizer is one of the key components in a nuclear power system, and it is important to control its dimensions during design through optimization techniques. In this work, a mathematical model of a vertical electrically heated pressurizer was established. A new Genetic-Simplex Algorithm (GSA) that combines the genetic algorithm and the simplex algorithm was developed to enhance the searching ability, and the modified and original algorithms were compared by calculating benchmark functions. Furthermore, the optimization design of the pressurizer, taking minimization of volume and net weight as objectives, was carried out through GSA considering thermal-hydraulic and geometric constraints. The results indicate that the mathematical model is suitable for the pressurizer and that the new algorithm is more effective than the traditional genetic algorithm. The optimized design shows clear validity and can provide guidance for real engineering design.

  12. Algorithm of pulmonary emphysema extraction using low dose thoracic 3D CT images

    Science.gov (United States)

    Saita, S.; Kubo, M.; Kawata, Y.; Niki, N.; Nakano, Y.; Omatsu, H.; Tominaga, K.; Eguchi, K.; Moriyama, N.

    2006-03-01

    Recently, due to aging and smoking, the number of emphysema patients has been increasing. Alveoli destroyed by emphysema cannot be restored, so early detection of emphysema is desired. We describe a quantitative algorithm for extracting emphysematous lesions and for evaluating their distribution patterns using low dose thoracic 3-D CT images. The algorithm identifies lung anatomy and extracts low attenuation areas (LAA) as emphysematous lesion candidates. By applying the algorithm to 100 thoracic 3-D CT images and to follow-up 3-D CT images, we demonstrate its potential effectiveness in assisting radiologists and physicians to quantitatively evaluate the distribution of emphysematous lesions and their evolution over time.

  13. Comparison study of reconstruction algorithms for prototype digital breast tomosynthesis using various breast phantoms.

    Science.gov (United States)

    Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung

    2016-02-01

    Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low

  14. Comparison of sorting algorithms to increase the range of Hartmann-Shack aberrometry.

    Science.gov (United States)

    Bedggood, Phillip; Metha, Andrew

    2010-01-01

    Recently many software-based approaches have been suggested for improving the range and accuracy of Hartmann-Shack aberrometry. We compare the performance of four representative algorithms, with a focus on aberrometry for the human eye. Algorithms vary in complexity from the simplistic traditional approach to iterative spline extrapolation based on prior spot measurements. Range is assessed for a variety of aberration types in isolation using computer modeling, and also for complex wavefront shapes using a real adaptive optics system. The effects of common sources of error for ocular wavefront sensing are explored. The results show that the simplest possible iterative algorithm produces comparable range and robustness compared to the more complicated algorithms, while keeping processing time minimal to afford real-time analysis.

  15. A Qualitative Comparison between the Proportional Navigation and Differential Geometry Guidance Algorithms

    Directory of Open Access Journals (Sweden)

    Yunes Sh. ALQUDSI

    2018-06-01

    Full Text Available This paper presents an overview of the proportional navigation (PN) guidance law as well as the differential geometry (DG) guidance algorithm, both of which are used to develop an intercept course to a given target. The intent of this study is to illustrate the advantages of the guidance algorithm derived from the concepts of differential geometry over the well-known PN guidance law. The basic principles behind both algorithms are described, and the different versions of the PN approach are briefly reviewed to show the essential improvement from one version to the next. The paper concludes with numerous two-dimensional simulation figures that serve as visual aids illustrating the significant relations and the main features and properties of both algorithms.
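
    For reference, the PN law commands a lateral acceleration a_c = N' * Vc * (dλ/dt), where N' is the navigation constant, Vc the closing speed and λ the line-of-sight angle. The toy 2-D pursuit below illustrates only this basic law; all numerical values are arbitrary and not taken from the paper.

```python
import numpy as np

dt, N_prime = 0.01, 4.0
m_pos, m_vel = np.array([0.0, 0.0]), np.array([300.0, 0.0])          # pursuer
t_pos, t_vel = np.array([5000.0, 2000.0]), np.array([-200.0, 0.0])   # target

r = t_pos - m_pos
prev_los = np.arctan2(r[1], r[0])
closest = np.linalg.norm(r)
for _ in range(3000):                                   # 30 s of simulated flight
    r = t_pos - m_pos
    closest = min(closest, np.linalg.norm(r))
    los = np.arctan2(r[1], r[0])
    los_rate = (los - prev_los) / dt
    closing_speed = -np.dot(r, t_vel - m_vel) / np.linalg.norm(r)
    a_cmd = N_prime * closing_speed * los_rate          # PN acceleration command
    heading = np.arctan2(m_vel[1], m_vel[0])
    m_vel += a_cmd * dt * np.array([-np.sin(heading), np.cos(heading)])  # lateral only
    m_pos += m_vel * dt
    t_pos += t_vel * dt
    prev_los = los

print("closest approach: %.1f m" % closest)
```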

  16. An algorithm for preferential selection of spectroscopic targets in LEGUE

    International Nuclear Information System (INIS)

    Carlin, Jeffrey L.; Newberg, Heidi Jo; Lépine, Sébastien; Deng Licai; Chen Yuqin; Fu Xiaoting; Gao Shuang; Li Jing; Liu Chao; Beers, Timothy C.; Christlieb, Norbert; Grillmair, Carl J.; Guhathakurta, Puragra; Han Zhanwen; Hou Jinliang; Lee, Hsu-Tai; Liu Xiaowei; Pan Kaike; Sellwood, J. A.; Wang Hongchi

    2012-01-01

    We describe a general target selection algorithm that is applicable to any survey in which the number of available candidates is much larger than the number of objects to be observed. This routine aims to achieve a balance between a smoothly-varying, well-understood selection function and the desire to preferentially select certain types of targets. Some target-selection examples are shown that illustrate different possibilities of emphasis functions. Although it is generally applicable, the algorithm was developed specifically for the LAMOST Experiment for Galactic Understanding and Exploration (LEGUE) survey that will be carried out using the Chinese Guo Shou Jing Telescope. In particular, this algorithm was designed for the portion of LEGUE targeting the Galactic halo, in which we attempt to balance a variety of science goals that require stars at fainter magnitudes than can be completely sampled by LAMOST. This algorithm has been implemented for the halo portion of the LAMOST pilot survey, which began in October 2011.
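
    One simple way to realize such a preferential yet smoothly varying selection is to draw targets without replacement with probabilities given by an emphasis function of an observable. The sketch below uses an assumed exponential emphasis toward brighter magnitudes; it is only an illustration of the idea, not the published LEGUE selection function.

```python
import numpy as np

rng = np.random.default_rng(0)
mags = rng.uniform(14.0, 19.0, size=10000)       # candidate magnitudes (synthetic)
weights = np.exp(-0.5 * (mags - 14.0))           # hypothetical emphasis toward bright stars
prob = weights / weights.sum()

# Draw the observable sample without replacement, following the emphasis function.
chosen = rng.choice(mags.size, size=2000, replace=False, p=prob)
print("selected %d of %d candidates; mean mag %.2f vs pool mean %.2f"
      % (chosen.size, mags.size, mags[chosen].mean(), mags.mean()))
```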

  17. Testing a polarimetric cloud imager aboard research vessel Polarstern: comparison of color-based and polarimetric cloud detection algorithms.

    Science.gov (United States)

    Barta, András; Horváth, Gábor; Horváth, Ákos; Egri, Ádám; Blahó, Miklós; Barta, Pál; Bumke, Karl; Macke, Andreas

    2015-02-10

    Cloud cover estimation is an important part of routine meteorological observations. Cloudiness measurements are used in climate model evaluation, nowcasting solar radiation, parameterizing the fluctuations of sea surface insolation, and building energy transfer models of the atmosphere. Currently, the most widespread ground-based method to measure cloudiness is based on analyzing the unpolarized intensity and color distribution of the sky obtained by digital cameras. As a new approach, we propose that cloud detection can be aided by the additional use of skylight polarization measured by 180° field-of-view imaging polarimetry. In the fall of 2010, we tested such a novel polarimetric cloud detector aboard the research vessel Polarstern during expedition ANT-XXVII/1. One of our goals was to test the durability of the measurement hardware under the extreme conditions of a trans-Atlantic cruise. Here, we describe the instrument and compare the results of several different cloud detection algorithms, some conventional and some newly developed. We also discuss the weaknesses of our design and its possible improvements. The comparison with cloud detection algorithms developed for traditional nonpolarimetric full-sky imagers allowed us to evaluate the added value of polarimetric quantities. We found that (1) neural-network-based algorithms perform the best among the investigated schemes and (2) global information (the mean and variance of intensity), nonoptical information (e.g., sun-view geometry), and polarimetric information (e.g., the degree of polarization) improve the accuracy of cloud detection, albeit slightly.

  18. Using Genetic Algorithms in Secured Business Intelligence Mobile Applications

    Directory of Open Access Journals (Sweden)

    Silvia TRIF

    2011-01-01

    Full Text Available The paper aims to assess the use of genetic algorithms for training neural networks used in secured Business Intelligence Mobile Applications. A comparison is made between the classic back-propagation method and genetic-algorithm-based training. The design of these algorithms is presented, and a comparative study is carried out to determine the better way of training neural networks from the point of view of time and memory usage. The results show that genetic-algorithm-based training offers better performance and memory usage than back-propagation, and that such networks are fit to be implemented on mobile devices.

  19. A comparison between physicians and computer algorithms for form CMS-2728 data reporting.

    Science.gov (United States)

    Malas, Mohammed Said; Wish, Jay; Moorthi, Ranjani; Grannis, Shaun; Dexter, Paul; Duke, Jon; Moe, Sharon

    2017-01-01

    CMS-2728 form (Medical Evidence Report) assesses 23 comorbidities chosen to reflect poor outcomes and increased mortality risk. Previous studies questioned the validity of physician reporting on form CMS-2728. We hypothesize that reporting of comorbidities by computer algorithms identifies more comorbidities than physician completion and, therefore, is more reflective of the underlying disease burden. We collected data from CMS-2728 forms for all 296 patients who had an incident ESRD diagnosis and received chronic dialysis from 2005 through 2014 at Indiana University outpatient dialysis centers. We analyzed patients' data from electronic medical records systems that collated information from multiple health care sources. Previously utilized algorithms or natural language processing were used to extract data on 10 comorbidities for a period of up to 10 years prior to ESRD incidence. These algorithms incorporate billing codes, prescriptions, and other relevant elements. We compared the presence or unchecked status of these comorbidities on the forms to their presence or absence according to the algorithms. Computer algorithms had higher reporting of comorbidities compared to form completion by physicians. This remained true when decreasing the data span to one year and when using only a single health center source. The algorithms' determinations were well accepted by a physician panel. Importantly, use of the algorithms significantly increased the expected deaths and lowered the standardized mortality ratios. Using computer algorithms showed superior identification of comorbidities for form CMS-2728 and altered standardized mortality ratios. Adapting similar algorithms in available EMR systems may offer a more thorough evaluation of comorbidities and improve quality reporting. © 2016 International Society for Hemodialysis.

  20. Topology Control Algorithms for Spacecraft Formation Flying Networks Under Connectivity and Time-Delay Constraints, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — SSCI is proposing to develop a set of topology control algorithms for a formation flying spacecraft that can be used to design and evaluate candidate formation...

  1. Exact and Heuristic Algorithms for Runway Scheduling

    Science.gov (United States)

    Malik, Waqar A.; Jung, Yoon C.

    2016-01-01

    This paper explores the Single Runway Scheduling (SRS) problem with arrivals, departures, and crossing aircraft on the airport surface. Constraints for wake vortex separations, departure area navigation separations and departure time window restrictions are explicitly considered. The main objective of this research is to develop exact and heuristic algorithms that can be used in real-time decision support tools for Air Traffic Control Tower (ATCT) controllers. The paper provides a multi-objective dynamic programming (DP) based algorithm that finds the exact solution to the SRS problem, but may prove unusable for application in a real-time environment due to large computation times for moderately sized problems. We next propose a second algorithm that uses heuristics to restrict the search space of the DP based algorithm. A third algorithm based on a combination of insertion and local search (ILS) heuristics is then presented. A simulation conducted for the east side of Dallas/Fort Worth International Airport allows comparison of the three proposed algorithms and indicates that the ILS algorithm performs favorably in its ability to find efficient solutions and in its computation times.

  2. Applications of machine-learning algorithms for infrared colour selection of Galactic Wolf-Rayet stars

    Science.gov (United States)

    Morello, Giuseppe; Morris, P. W.; Van Dyk, S. D.; Marston, A. P.; Mauerhan, J. C.

    2018-01-01

    We have investigated and applied machine-learning algorithms for infrared colour selection of Galactic Wolf-Rayet (WR) candidates. Objects taken from the Spitzer Galactic Legacy Infrared Midplane Survey Extraordinaire (GLIMPSE) catalogue of the infrared objects in the Galactic plane can be classified into different stellar populations based on the colours inferred from their broad-band photometric magnitudes [J, H and Ks from 2 Micron All Sky Survey (2MASS), and the four Spitzer/IRAC bands]. The algorithms tested in this pilot study are variants of the k-nearest neighbours approach, which is ideal for exploratory studies of classification problems where interrelations between variables and classes are complicated. The aims of this study are (1) to provide an automated tool to select reliable WR candidates and potentially other classes of objects, (2) to measure the efficiency of infrared colour selection at performing these tasks and (3) to lay the groundwork for statistically inferring the total number of WR stars in our Galaxy. We report the performance results obtained over a set of known objects and selected candidates for which we have carried out follow-up spectroscopic observations, and confirm the discovery of four new WR stars.
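
    A minimal sketch of the k-nearest-neighbours colour selection, using synthetic two-colour data in place of the real 2MASS/GLIMPSE photometry; the colour distributions, class balance and choice of k are illustrative assumptions.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
# Two hypothetical colour indices per object (e.g. something like J-Ks and [3.6]-[8.0]).
field = rng.normal(loc=[0.6, 0.2], scale=0.3, size=(500, 2))    # ordinary field stars
wr_like = rng.normal(loc=[1.4, 0.9], scale=0.3, size=(60, 2))   # WR-like colours
X = np.vstack([field, wr_like])
y = np.array([0] * len(field) + [1] * len(wr_like))

knn = KNeighborsClassifier(n_neighbors=5)
print("cross-validated accuracy:", cross_val_score(knn, X, y, cv=5).mean())
```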

  3. Optimization of Charge/Discharge Coordination to Satisfy Network Requirements Using Heuristic Algorithms in Vehicle-to-Grid Concept

    Directory of Open Access Journals (Sweden)

    DOGAN, A.

    2018-02-01

    Full Text Available Image thresholding is the most crucial step in microscopic image analysis for distinguishing the bacilli objects that cause tuberculosis. Several bi-level thresholding algorithms are therefore widely used to increase the bacilli segmentation accuracy. However, the bi-level microscopic image thresholding problem has not previously been solved using optimization algorithms. This paper introduces a novel approach to the segmentation problem using heuristic algorithms and presents visual and quantitative comparisons of heuristic and state-of-the-art thresholding algorithms. In this study, well-known heuristic algorithms such as the Firefly Algorithm, Particle Swarm Optimization, Cuckoo Search and Flower Pollination are used to solve the bi-level microscopic image thresholding problem, and the results are compared with state-of-the-art thresholding algorithms such as K-Means, Fuzzy C-Means and Fast Marching. Kapur's entropy is chosen as the entropy measure to be maximized. Experiments are performed to make comparisons in terms of evaluation metrics and execution time. The quantitative results are calculated based on ground truth segmentation. According to the visual results, the heuristic algorithms have better performance, and the quantitative results are in accord with the visual results. Furthermore, the experimental time comparisons show the superiority and effectiveness of the heuristic algorithms over traditional thresholding algorithms.
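
    Kapur's method picks the threshold t that maximizes the sum of the entropies of the two grey-level classes it induces. The brute-force sketch below searches all grey levels on synthetic data; in the paper, this maximization is what the heuristic optimizers carry out instead of an exhaustive search.

```python
import numpy as np

def kapur_threshold(image, levels=256):
    # Normalized grey-level histogram.
    hist, _ = np.histogram(image, bins=levels, range=(0, levels))
    p = hist.astype(float) / hist.sum()
    best_t, best_h = 0, -np.inf
    for t in range(1, levels - 1):
        p0, p1 = p[:t].sum(), p[t:].sum()
        if p0 <= 0 or p1 <= 0:
            continue
        q0 = p[:t][p[:t] > 0] / p0          # class-conditional distributions
        q1 = p[t:][p[t:] > 0] / p1
        h = -(q0 * np.log(q0)).sum() - (q1 * np.log(q1)).sum()
        if h > best_h:                       # keep the threshold with maximal total entropy
            best_t, best_h = t, h
    return best_t

# Synthetic bimodal image histogram: dark background plus brighter objects.
rng = np.random.default_rng(3)
img = np.concatenate([rng.normal(60, 10, 5000), rng.normal(180, 15, 2000)]).clip(0, 255)
print("Kapur threshold:", kapur_threshold(img.astype(int)))
```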

  4. IMPLEMENTATION OF OBJECT TRACKING ALGORITHMS ON THE BASIS OF CUDA TECHNOLOGY

    Directory of Open Access Journals (Sweden)

    B. A. Zalesky

    2014-01-01

    Full Text Available A fast version of a correlation algorithm for tracking objects in video sequences captured by a non-stabilized camcorder is presented. The algorithm is based on comparing local correlations between the object image and regions of the video frames. The algorithm is implemented with the CUDA programming technology, whose application allowed real-time execution to be attained. To improve precision and stability, a robust version of the Kalman filter has been incorporated into the processing flow. Tests showed the applicability of the algorithm to practical object tracking.

  5. A modified CoRoT detrend algorithm and the discovery of a new planetary companion

    Science.gov (United States)

    Boufleur, Rodrigo C.; Emilio, Marcelo; Janot-Pacheco, Eduardo; Andrade, Laerte; Ferraz-Mello, Sylvio; do Nascimento, José-Dias, Jr.; de La Reza, Ramiro

    2018-01-01

    We present MCDA, a modification of the COnvection ROtation and planetary Transits (CoRoT) detrend algorithm (CDA), suitable for detrending chromatic light curves. By means of robust statistics and better handling of short-term variability, the implementation decreases the systematic light-curve variations and improves the detection of exoplanets when compared with the original algorithm. All CoRoT chromatic light curves (a total of 65 655) were analysed with our algorithm. Dozens of new transit candidates and all previously known CoRoT exoplanets were rediscovered in those light curves using a box-fitting algorithm. For three of the new cases, spectroscopic measurements of the candidates' host stars were retrieved from the ESO Science Archive Facility and used to calculate stellar parameters and, in the best cases, radial velocities. In addition to our improved detrend technique, we announce the discovery of a planet that orbits a 0.79_{-0.09}^{+0.08} R⊙ star with a period of 6.71837 ± 0.00001 d and has a radius of 0.57_{-0.05}^{+0.06} R_J and a mass of 0.15 ± 0.10 M_J.

  6. A space-efficient algorithm for local similarities.

    Science.gov (United States)

    Huang, X Q; Hardison, R C; Miller, W

    1990-10-01

    Existing dynamic-programming algorithms for identifying similar regions of two sequences require time and space proportional to the product of the sequence lengths. Often this space requirement is more limiting than the time requirement. We describe a dynamic-programming local-similarity algorithm that needs only space proportional to the sum of the sequence lengths. The method can also find repeats within a single long sequence. To illustrate the algorithm's potential, we discuss comparison of a 73,360 nucleotide sequence containing the human beta-like globin gene cluster and a corresponding 44,594 nucleotide sequence for rabbit, a problem well beyond the capabilities of other dynamic-programming software.
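
    The space-saving observation is that the best local-similarity score can be computed with dynamic programming while keeping only two rows of the matrix, i.e. memory proportional to the shorter sequence; recovering the alignment itself requires the divide-and-conquer refinement described in the paper. A minimal score-only sketch (scoring parameters are illustrative):

```python
def local_similarity_score(a, b, match=1, mismatch=-1, gap=-2):
    # Smith-Waterman-style best local score using only two DP rows.
    if len(b) > len(a):
        a, b = b, a                          # keep the shorter sequence on the row axis
    prev = [0] * (len(b) + 1)
    best = 0
    for ca in a:
        curr = [0]
        for j, cb in enumerate(b, start=1):
            score = max(0,
                        prev[j - 1] + (match if ca == cb else mismatch),
                        prev[j] + gap,
                        curr[j - 1] + gap)
            curr.append(score)
            best = max(best, score)
        prev = curr
    return best

print(local_similarity_score("ACACACTA", "AGCACACA"))
```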

  7. Evaluation of Chinese Calligraphy by Using DBSC Vectorization and ICP Algorithm

    Directory of Open Access Journals (Sweden)

    Mengdi Wang

    2016-01-01

    Full Text Available Chinese calligraphy is a charismatic ancient art form with high artistic value in Chinese culture, and virtual calligraphy learning systems have become a research hotspot in recent years. In such a system, a mechanism for judging the user's practice results is quite important. Sometimes the user's handwritten character is far from standard: its size and position are not fixed, and the whole character may even be askew, which makes evaluation difficult. In this paper, we propose an approach using DBSC (disk B-spline curve) vectorization and the ICP (iterative closest point) algorithm, which can not only evaluate a calligraphic character without knowing what it is, but also deal with the above problems commendably. First, we find promising candidate characters in the database according to their angular difference relations as quickly as possible. Then we check these vectorized candidates with the ICP algorithm applied to the skeleton, thereby finding the best matching character. Finally, a comprehensive evaluation involving global (whole-character) and local (stroke) similarities is carried out, and a final composite evaluation score is computed.
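
    The ICP matching stage itself is generic: alternately match each source point to its nearest target point and re-estimate the best rigid transform from the matched pairs. The 2-D sketch below illustrates only this stage, with random points standing in for stroke-skeleton samples; the DBSC vectorization step is not reproduced.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        _, idx = tree.query(src)                     # nearest target point for each source point
        matched = target[idx]
        # Best rigid transform (rotation R, translation t) from the matched pairs (Kabsch/SVD).
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - mu_s).T @ (matched - mu_t))
        if np.linalg.det((U @ Vt).T) < 0:            # avoid reflections
            Vt[-1] *= -1
        R = (U @ Vt).T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
    return src

# Toy example: realign a slightly rotated, shifted copy of a point set onto the original.
rng = np.random.default_rng(4)
pts = rng.uniform(0, 1, size=(100, 2))
theta = np.deg2rad(5.0)
Rot = np.array([[np.cos(theta), -np.sin(theta)], [np.sin(theta), np.cos(theta)]])
moved = pts @ Rot.T + np.array([0.05, -0.03])
aligned = icp(moved, pts)
print("mean residual after ICP:", np.linalg.norm(aligned - pts, axis=1).mean())
```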

  8. Future possibilities for precise studies of the X(125) Higgs candidate

    CERN Multimedia

    CERN. Geneva; Zimmermann, Frank

    2012-01-01

    We present a summary of the state-of-the-art comparison relevant to possible studies of the X(125) Higgs boson candidate. The machines considered are the LHC and its upgrades, linear and circular e+e- colliders such as ILC/CLIC or LEP3/TLEP, gamma-gamma colliders and muon colliders. The conclusions of the recent ICFA beam dynamics HF2012 workshop at Fermilab will also be shown.

  9. Sorting processes with energy-constrained comparisons*

    Science.gov (United States)

    Geissmann, Barbara; Penna, Paolo

    2018-05-01

    We study very simple sorting algorithms based on a probabilistic comparator model. In this model, errors in comparing two elements are due to (1) the energy or effort put in the comparison and (2) the difference between the compared elements. Such algorithms repeatedly compare and swap pairs of randomly chosen elements, and they correspond to natural Markovian processes. The study of these Markov chains reveals an interesting phenomenon. Namely, in several cases, the algorithm that repeatedly compares only adjacent elements is better than the one making arbitrary comparisons: in the long-run, the former algorithm produces sequences that are "better sorted". The analysis of the underlying Markov chain poses interesting questions as the latter algorithm yields a nonreversible chain, and therefore its stationary distribution seems difficult to calculate explicitly. We nevertheless provide bounds on the stationary distributions and on the mixing time of these processes in several restrictions.
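
    A toy simulation of the adjacent-comparison process is easy to write: repeatedly pick a random adjacent pair, compare them with an error probability that shrinks with the energy and with the difference between the elements, and swap if the comparison says they are out of order. The particular error model below is an assumption for illustration, not the one analyzed in the paper.

```python
import random

def noisy_less(a, b, energy=1.0):
    # Wrong answer with a probability that decreases with energy and with |a - b|.
    p_err = 0.5 / (1.0 + energy * abs(a - b))
    correct = a < b
    return (not correct) if random.random() < p_err else correct

n = 50
seq = list(range(n))
random.shuffle(seq)
for _ in range(200_000):
    i = random.randrange(n - 1)                  # pick a random adjacent pair
    if noisy_less(seq[i + 1], seq[i]):           # if the right one looks smaller, swap
        seq[i], seq[i + 1] = seq[i + 1], seq[i]

# "Sortedness" after the process: total displacement from the identity permutation.
print("total dislocation:", sum(abs(i - v) for i, v in enumerate(seq)))
```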

  10. Ranking candidate disease genes from gene expression and protein interaction: a Katz-centrality based approach.

    Directory of Open Access Journals (Sweden)

    Jing Zhao

    Full Text Available Many diseases have complex genetic causes, where a set of alleles can affect the propensity of getting the disease. The identification of such disease genes is important to understand the mechanistic and evolutionary aspects of pathogenesis, improve diagnosis and treatment of the disease, and aid in drug discovery. Current genetic studies typically identify chromosomal regions associated with specific diseases, but picking out an unknown disease gene from hundreds of candidates located on the same genomic interval is still challenging. In this study, we propose an approach to prioritize candidate genes by integrating data on gene expression level, protein-protein interaction strength and known disease genes. Our method is based on only two simple, biologically motivated assumptions: that a gene is a good disease-gene candidate if it is differentially expressed in cases and controls, or if it is close to other disease-gene candidates in its protein interaction network. We tested our method on 40 diseases in 58 gene expression datasets of the NCBI Gene Expression Omnibus database. On these datasets our method is able to predict unknown disease genes as well as to identify pleiotropic genes involved in the physiological cellular processes of many diseases. Our study not only provides an effective algorithm for prioritizing candidate disease genes but also offers a way to discover phenotypic interdependency, co-occurrence and shared pathophysiology between different disorders.
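
    A Katz-centrality-style prioritization can be sketched with networkx: build a weighted protein-interaction graph, seed the known disease genes through the beta term, and rank candidates by the resulting centrality. The tiny graph, weights and parameters below are made up for illustration only and are not the paper's data or exact scoring scheme.

```python
import networkx as nx

# Toy protein-interaction network with illustrative interaction strengths as edge weights.
G = nx.Graph()
G.add_weighted_edges_from([
    ("geneA", "geneB", 0.9), ("geneB", "geneC", 0.7),
    ("geneC", "geneD", 0.4), ("geneA", "geneE", 0.2),
    ("geneD", "geneE", 0.6),
])

known_disease_genes = {"geneA"}
# Personalize the Katz "beta" term so that known disease genes inject more weight.
beta = {node: (1.0 if node in known_disease_genes else 0.1) for node in G}

scores = nx.katz_centrality(G, alpha=0.1, beta=beta, weight="weight")
for gene, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(gene, round(score, 3))
```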

  11. An algorithm to track laboratory zebrafish shoals.

    Science.gov (United States)

    Feijó, Gregory de Oliveira; Sangalli, Vicenzo Abichequer; da Silva, Isaac Newton Lima; Pinho, Márcio Sarroglia

    2018-05-01

    In this paper, a semi-automatic multi-object tracking method to track a group of unmarked zebrafish is proposed. The method can handle partial occlusion cases while maintaining the correct identity of each individual. For every object, we extract a set of geometric features to be used in the two main stages of the algorithm. The first stage selects the best candidate, based both on the blobs identified in the image and on the estimate generated by a Kalman filter instance. In the second stage, if the same candidate blob is selected by two or more instances, a blob-partitioning algorithm takes place in order to split this blob and reestablish the instances' identities. If the algorithm cannot determine the identity of a blob, a manual intervention is required. This procedure was compared against a manually labeled ground truth on four video sequences with different numbers of fish and spatial resolutions. The performance of the proposed method was then compared against two well-known zebrafish tracking methods from the literature: one that treats occlusion scenarios and one that only tracks fish that are not in occlusion. On the data set used, the proposed method outperforms the first method in correctly separating fish in occlusion, increasing its rate of success by at least 8.15% of the cases. As for the second method, the proposed method's overall performance was superior in some of the tested videos, especially those with lower image quality, because the second method requires high spatial resolution images, which is not a requirement for the proposed method. The proposed method was able to separate fish involved in occlusion and correctly assign their identities in up to 87.85% of the cases, without accounting for user intervention. Copyright © 2018 Elsevier Ltd. All rights reserved.

  12. Performance evaluation of firefly algorithm with variation in sorting for non-linear benchmark problems

    Science.gov (United States)

    Umbarkar, A. J.; Balande, U. T.; Seth, P. D.

    2017-06-01

    The field of nature-inspired computing and optimization techniques has evolved to solve difficult optimization problems in diverse areas of engineering, science and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), the fireflies are ranked using a sorting algorithm; the original FA was proposed with bubble sort for this ranking. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used consists of unconstrained benchmark functions from CEC 2005 [22]. FA using bubble sort and FA using quick sort are compared with respect to best, worst and mean values, standard deviation, number of comparisons and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and when the dimension is varied the algorithm performs better at lower dimensions than at higher ones.

  13. A Novel AHRS Inertial Sensor-Based Algorithm for Wheelchair Propulsion Performance Analysis

    Directory of Open Access Journals (Sweden)

    Jonathan Bruce Shepherd

    2016-08-01

    Full Text Available With the rise of professionalism in sport, athletes, teams, and coaches are looking to technology to monitor performance in both games and training in order to find a competitive advantage. The use of inertial sensors has been proposed as a cost-effective and adaptable measurement approach for monitoring wheelchair kinematics; however, the outcomes depend on the reliability of the processing algorithms. Although a variety of algorithms have been proposed to monitor wheelchair propulsion in court sports, they all have limitations. Through experimental testing, we have shown the Attitude and Heading Reference System (AHRS)-based algorithm to be a suitable and reliable candidate algorithm for estimating velocity, distance, and approximate trajectory. The proposed algorithm is computationally inexpensive, agnostic of wheel camber, not sensitive to sensor placement, and can be embedded for real-time implementations. The research was conducted under Griffith University Ethics (GU Ref No: 2016/294).

  14. A triangle voting algorithm based on double feature constraints for star sensors

    Science.gov (United States)

    Fan, Qiaoyun; Zhong, Xuyang

    2018-02-01

    A novel autonomous star identification algorithm is presented in this study. In the proposed algorithm, each sensor star constructs multiple triangles with its bright neighboring stars and obtains its candidates by a triangle voting process, in which the triangle is considered the basic voting element. In order to accelerate the algorithm and reduce the memory required for the star database, feature extraction is carried out to reduce the dimension of the triangles, and each triangle is described by its base and height. During identification, a voting scheme based on double feature constraints is proposed to implement the triangle voting. This scheme guarantees that only catalog stars satisfying both features can vote for the sensor star, which improves robustness towards false stars. Simulation and real star image tests demonstrate that, compared with the other two algorithms, the proposed algorithm is more robust towards position noise, magnitude noise and false stars.

  15. Simple Exact Algorithm for Transistor Sizing of Low-Power High-Speed Arithmetic Circuits

    Directory of Open Access Journals (Sweden)

    Tooraj Nikoubin

    2010-01-01

    Full Text Available A new transistor sizing algorithm, SEA (Simple Exact Algorithm), for optimizing low-power and high-speed arithmetic integrated circuits is proposed. In comparison with other transistor sizing algorithms, its advantages are simplicity, accuracy, independence from the order and initial sizing factors of the transistors, and flexibility in choosing the optimization parameters, such as power consumption, delay, power-delay product (PDP), chip area, or a combination of them. More exhaustive rules for grouping transistors are the main trait of this algorithm; hence, the SEA algorithm leads on major transistor sizing metrics such as optimization rate, simulation speed, and reliability. In an approximate comparison of the SEA algorithm with MDE and ADC for a number of conventional full adder circuits, delay and PDP were improved by 55.01% and 57.92% on average, respectively. Comparing SEA with Chang's algorithm, improvements of 25.64% in PDP and 33.16% in delay were achieved. All simulations were performed with 0.13 μm technology based on the BSIM3v3 model using the HSPICE simulator.

  16. An adaptive map-matching algorithm based on hierarchical fuzzy system from vehicular GPS data.

    Directory of Open Access Journals (Sweden)

    Jinjun Tang

    Full Text Available An improved hierarchical fuzzy inference method based on a C-measure map-matching algorithm is proposed in this paper, in which the C-measure represents the certainty or probability that the vehicle is traveling on the matched road. A strategy is first introduced that uses historical positioning information to perform curve-to-curve matching between vehicle trajectories and the shapes of candidate roads. It improves matching performance by overcoming the disadvantage of the traditional map-matching algorithm, which considers only current information. An average historical distance is used to measure the similarity between vehicle trajectories and road shape. The input of the system includes three variables: the distance between the position point and the candidate roads, the angle between the driving heading and the road direction, and the average distance. As the number of fuzzy rules would increase exponentially when adding the average distance as a variable, a hierarchical fuzzy inference system is applied to reduce the number of fuzzy rules and improve the calculation efficiency. Additionally, a learning process is updated to support the algorithm. Finally, a case study containing four different routes in Beijing is used to validate the effectiveness and superiority of the proposed method.

  17. Assessment of subsidence in karst terranes at selected areas in East Tennessee and comparison with a candidate site at Oak Ridge, Tennessee: Phase 2

    International Nuclear Information System (INIS)

    Newton, J.G.; Tanner, J.M.

    1987-09-01

    Work in the respective areas included assessment of conditions related to sinkhole development. Information collected and assessed involved geology, hydrogeology, land use, lineaments and linear trends, identification of karst features and zones, and inventory of historical sinkhole development and type. Karstification of the candidate, Rhea County, and Morristown study areas, in comparison to other karst areas in Tennessee, can be classified informally as youthful, submature, and mature, respectively. Historical sinkhole development in the more karstified areas is attributed to the greater degree of structural deformation by faulting and fracturing, subsequent solutioning of bedrock, thinness of residuum, and degree of development by man. Sinkhole triggering mechanisms identified are progressive solution of bedrock, water-level fluctuations, piping, and loading. 68 refs., 18 figs., 11 tabs

  18. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł; Marszał-Paszek, Barbara; Moshkov, Mikhail; Paszek, Piotr; Skowron, Andrzej; Suraj, Zbigniew

    2010-01-01

    The considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory

  19. New VVV Survey Globular Cluster Candidates in the Milky Way Bulge

    Energy Technology Data Exchange (ETDEWEB)

    Minniti, Dante; Gómez, Matías [Departamento de Física, Facultad de Ciencias Exactas, Universidad Andrés Bello, Av. Fernandez Concha 700, Las Condes, Santiago (Chile); Geisler, Douglas; Fernández-Trincado, Jose G. [Departamento de Astronomía, Casilla 160-C, Universidad de Concepción, Casilla 160-C, Concepción (Chile); Alonso-García, Javier; Beamín, Juan Carlos; Borissova, Jura; Catelan, Marcio; Ramos, Rodrigo Contreras; Kurtev, Radostin; Pullen, Joyce [Instituto Milenio de Astrofísica, Santiago (Chile); Palma, Tali; Clariá, Juan J. [Observatorio Astronómico, Universidad Nacional de Córdoba, Laprida 854, Córdoba (Argentina); Cohen, Roger E. [Space Telescope Science Institute, 2700 San Martin Drive, Baltimore (United States); Dias, Bruno [European Southern Observatory, Alonso de Cordova 3107, Vitacura, Santiago (Chile); Hempel, Maren [Pontificia Universidad Católica de Chile, Instituto de Astrofísica, Av. Vicuña Mackenna 4860, Santiago (Chile); Ivanov, Valentin D. [European Southern Observatory, Karl-Schwarszchild-Str. 2, D-85748 Garching bei Muenchen (Germany); Lucas, Phillip W. [Dept. of Astronomy, University of Hertfordshire, Hertfordshire (United Kingdom); Moni-Bidin, Christian; Alegría, Sebastian Ramírez [Instituto de Astronomía, Universidad Católica del Norte, Antofagasta (Chile); and others

    2017-11-10

    It is likely that a number of Galactic globular clusters remain to be discovered, especially toward the Galactic bulge. High stellar density combined with high and differential interstellar reddening are the two major problems for finding globular clusters located toward the bulge. We use the deep near-IR photometry of the VISTA Variables in the Vía Láctea (VVV) Survey to search for globular clusters projected toward the Galactic bulge, and hereby report the discovery of 22 new candidate globular clusters. These objects, detected as high-density regions in our maps of bulge red giants, are confirmed as globular cluster candidates by their color–magnitude diagrams. We provide their coordinates as well as their near-IR color–magnitude diagrams, from which some basic parameters are derived, such as reddenings and heliocentric distances. The color–magnitude diagrams reveal well-defined red giant branches in all cases, often including a prominent red clump. The new globular cluster candidates exhibit a variety of extinctions (0.06 < A {sub Ks} < 2.77) and distances (5.3 < D < 9.5 kpc). We also classify the globular cluster candidates into 10 metal-poor and 12 metal-rich clusters, based on the comparison of their color–magnitude diagrams with those of known globular clusters also observed by the VVV Survey. Finally, we argue that the census for Galactic globular clusters still remains incomplete, and that many more candidate globular clusters (particularly the low-luminosity ones) remain to be found and studied in detail in the central regions of the Milky Way.

  20. A Food Chain Algorithm for Capacitated Vehicle Routing Problem with Recycling in Reverse Logistics

    Science.gov (United States)

    Song, Qiang; Gao, Xuexia; Santos, Emmanuel T.

    2015-12-01

    This paper introduces the capacitated vehicle routing problem with recycling in reverse logistics and designs a food chain algorithm for it. Some illustrative examples are selected to conduct simulation and comparison. Numerical results show that the performance of the food chain algorithm is better than that of the genetic algorithm, particle swarm optimization, and the quantum evolutionary algorithm.

  1. Refinement of the CALIOP cloud mask algorithm

    Science.gov (United States)

    Katagiri, Shuichiro; Sato, Kaori; Ohta, Kohei; Okamoto, Hajime

    2018-04-01

    A modified cloud mask algorithm was applied to the CALIOP data to improve its ability to detect clouds in the lower atmosphere. In this algorithm, we also adopt full-attenuation discrimination and estimation of the remaining noise using the data obtained at an altitude of 40 km to avoid contamination by stratospheric aerosols. The new cloud mask shows an increase in the lower cloud fraction. A comparison of the results with PML ground-based observations was also made.

  2. Positive-Negative Asymmetry in the Evaluations of Political Candidates. The Role of Features of Similarity and Affect in Voter Behavior.

    Science.gov (United States)

    Falkowski, Andrzej; Jabłońska, Magdalena

    2018-01-01

    In this study we followed Tversky's research on features of similarity and extended it by applying it to open sets. Unlike the original closed-set model, in which a feature was shifted between a common and a distinctive set, we investigated how the addition of new features and the deletion of existing features affected similarity judgments. The model was tested empirically in a political context, and we analyzed how positive and negative changes in a candidate's profile affect the similarity of the politician to his or her ideal and opposite counterpart. The results showed a positive-negative asymmetry in comparison judgments, where enhancing negative features (distinctive for an ideal political candidate) had a greater effect on judgments than operations on positive (common) features. However, the effect was not observed for comparisons to a bad politician. Further analyses showed that in the case of a negative reference point, the relationship between similarity judgments and voting intention was mediated by the affective evaluation of the candidate.

  3. Discovery of new natural products by application of X-hitting, a novel algorithm for automated comparison of full UV-spectra, combined with structural determination by NMR spectroscopy

    DEFF Research Database (Denmark)

    Larsen, Thomas Ostenfeld; Petersen, Bent O.; Duus, Jens Øllgaard

    2005-01-01

    X-hitting, a newly developed algorithm for automated comparison of UV data, has been used for the tracking of two novel spiro-quinazoline metabolites, lapatins A (1) and B (2), in a screening study targeting quinazolines. The structures of 1 and 2 were elucidated by analysis of spectroscopic data...

  4. Algorithm comparison and benchmarking using a parallel spectral transform shallow water model

    Energy Technology Data Exchange (ETDEWEB)

    Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)

    1995-04-01

    In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer, and how do the most efficient algorithms compare across computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.

  5. Algorithmic solution of arithmetic problems and operands-answer associations in long-term memory.

    Science.gov (United States)

    Thevenot, C; Barrouillet, P; Fayol, M

    2001-05-01

    Many developmental models of arithmetic problem solving assume that any algorithmic solution of a given problem results in an association of the two operands and the answer in memory (Logan & Klapp, 1991; Siegler, 1996). In this experiment, adults had to perform either an operation or a comparison on the same pairs of two-digit numbers and then a recognition task. It is shown that unlike comparisons, the algorithmic solution of operations impairs the recognition of operands in adults. Thus, the postulate of a necessary and automatic storage of operands-answer associations in memory when young children solve additions by algorithmic strategies needs to be qualified.

  6. Level-0 trigger algorithms for the ALICE PHOS detector

    CERN Document Server

    Wang, D; Wang, Y P; Huang, G M; Kral, J; Yin, Z B; Zhou, D C; Zhang, F; Ullaland, K; Muller, H; Liu, L J

    2011-01-01

    The PHOS level-0 trigger provides a minimum bias trigger for p-p collisions and information for a level-1 trigger at both p-p and Pb-Pb collisions. There are two level-0 trigger generating algorithms under consideration: the Direct Comparison algorithm and the Weighted Sum algorithm. In order to study the trigger algorithms via simulation, a simplified equivalent model is extracted from the trigger electronics to derive the waveform function of the Analog-or signal as input to the trigger algorithms. Simulations show that the Weighted Sum algorithm can achieve higher trigger efficiency and provide more precise single-channel energy information than the Direct Comparison algorithm. An energy resolution of 9.75 MeV can be achieved with the Weighted Sum algorithm at a sampling rate of 40 Msps (mega samples per second) at 1 GeV. The timing performance at a sampling rate of 40 Msps with the Weighted Sum algorithm is better than that at a sampling rate of 20 Msps with both algorithms. The level-0 trigger can be delivered...

  7. Tag SNP selection via a genetic algorithm.

    Science.gov (United States)

    Mahdevar, Ghasem; Zahiri, Javad; Sadeghi, Mehdi; Nowzari-Dalini, Abbas; Ahrabian, Hayedeh

    2010-10-01

    Single Nucleotide Polymorphisms (SNPs) provide valuable information on human evolutionary history and may lead us to identify genetic variants responsible for human complex diseases. Unfortunately, molecular haplotyping methods are costly, laborious, and time consuming; therefore, computational algorithms that reconstruct full haplotype patterns from a small amount of available data (the Tag SNP selection problem) are convenient and attractive. This problem is proven to be NP-hard, so heuristic methods may be useful. In this paper we present a heuristic method based on a genetic algorithm to find reasonable solutions within acceptable time. The algorithm was tested on a variety of simulated and experimental data. In comparison with the exact algorithm, based on a brute-force approach, results show that our method can obtain optimal solutions in almost all cases and runs much faster than the exact algorithm when the number of SNP sites is large. Our software is available upon request to the corresponding author.
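
    The record describes a genetic algorithm for tag SNP selection but gives no implementation details. The sketch below is a generic, hypothetical GA for the problem: binary chromosomes mark tag SNPs, and the fitness rewards non-tag SNPs whose allele column is identical or complementary to some tag column (a crude proxy for perfect linkage) while penalizing the number of tags. The toy haplotype matrix, coverage rule, and all parameters are assumptions, not the authors' choices.

        import random
        random.seed(0)

        # Toy haplotype matrix: rows are haplotypes, columns are SNP sites (0/1 alleles).
        HAPS = [
            [0, 0, 1, 1, 0, 1],
            [1, 1, 0, 0, 1, 0],
            [0, 0, 1, 1, 0, 0],
            [1, 1, 0, 0, 1, 1],
        ]
        N_SNPS = len(HAPS[0])

        def column(j):
            return tuple(h[j] for h in HAPS)

        def covered(tags, j):
            """A non-tag SNP j counts as covered if some tag column is identical
            or complementary to it over all haplotypes (a crude proxy for r^2 = 1)."""
            cj = column(j)
            comp = tuple(1 - x for x in cj)
            return any(column(t) in (cj, comp) for t in tags)

        def fitness(mask):
            tags = [j for j in range(N_SNPS) if mask[j]]
            if not tags:
                return -1.0
            n_cov = sum(covered(tags, j) for j in range(N_SNPS) if not mask[j])
            # Reward coverage of non-tag SNPs, penalize the number of tags.
            return n_cov - 0.1 * len(tags)

        def evolve(pop_size=30, gens=50, p_mut=0.1):
            pop = [[random.randint(0, 1) for _ in range(N_SNPS)] for _ in range(pop_size)]
            for _ in range(gens):
                new = []
                for _ in range(pop_size):
                    p1 = max(random.sample(pop, 2), key=fitness)   # tournament of size 2
                    p2 = max(random.sample(pop, 2), key=fitness)
                    child = [p1[j] if random.random() < 0.5 else p2[j] for j in range(N_SNPS)]
                    child = [g ^ (random.random() < p_mut) for g in child]  # bit-flip mutation
                    new.append(child)
                pop = new
            return max(pop, key=fitness)

        best = evolve()
        print("tag SNPs:", [j for j in range(N_SNPS) if best[j]], "fitness:", fitness(best))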

  8. Optimized Data Indexing Algorithms for OLAP Systems

    Directory of Open Access Journals (Sweden)

    Lucian BORNAZ

    2010-12-01

    Full Text Available The need to process and analyze large data volumes, as well as to convey the information contained therein to decision makers, naturally led to the development of OLAP systems. Similarly to database management systems (DBMSs), OLAP systems must ensure optimum access to the storage environment. Although there are several ways to optimize database systems, implementing a correct data indexing solution is the most effective and least costly. Thus, OLAP uses indexing algorithms for relational data and n-dimensional summarized data stored in cubes. Today, database systems implement derived indexing algorithms based on the well-known Tree, Bitmap and Hash indexing algorithms. This is because no single indexing algorithm provides the best performance in every situation (data type, structure, volume, application). This paper presents a new n-dimensional cube indexing algorithm, derived from the well-known B-Tree index, which indexes data stored in data warehouses taking into consideration their multi-dimensional nature and provides better performance in comparison to the already implemented Tree-like index types.

  9. Incident Light Frequency-Based Image Defogging Algorithm

    Directory of Open Access Journals (Sweden)

    Wenbo Zhang

    2017-01-01

    Full Text Available To solve the color distortion problem produced by the dark channel prior algorithm, an improved method for calculating the transmittance of each channel separately is proposed in this paper. Based on the Beer-Lambert Law, the relationship between the frequency of the incident light and the transmittance was analyzed, and the ratios between the channels' transmittances were derived. Then, in order to increase efficiency, the input image was downscaled before acquiring the refined transmittance map, which was then resized back to the size of the original image. Finally, all the transmittances were obtained with the help of the proportions between the color channels, and they were then used to restore the defogged image. Experiments suggest that the improved algorithm produces a much more natural result image in comparison with the original algorithm, which means the problem of high color saturation is eliminated. What is more, the improved algorithm runs four to nine times faster than the original algorithm.
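
    The key relation implied above is that, under the Beer-Lambert law, each channel's transmittance is a power of any other channel's: t_c = exp(-beta_c * d) implies t_c = t_r^(beta_c/beta_r). The sketch below applies that relation to restore a foggy image from a single estimated (red-channel) transmittance map; the exponent ratios, airlight value, and lower bound on transmittance are illustrative assumptions, not the paper's calibrated values.

        import numpy as np

        def restore(image, atmosphere, t_red, beta_ratio_g=1.1, beta_ratio_b=1.25, t_min=0.1):
            """Per-channel restoration under the Beer-Lambert model t_c = exp(-beta_c * d).
            Given a transmittance map estimated for the red channel, the green and blue
            transmittances follow as powers of it: t_c = t_r ** (beta_c / beta_r).
            image: HxWx3 float array in [0, 1]; atmosphere: length-3 airlight estimate."""
            t = np.stack([t_red,
                          t_red ** beta_ratio_g,
                          t_red ** beta_ratio_b], axis=-1)
            t = np.clip(t, t_min, 1.0)
            restored = (image - atmosphere) / t + atmosphere
            return np.clip(restored, 0.0, 1.0)

        # Tiny example with a synthetic 2x2 foggy patch.
        foggy = np.full((2, 2, 3), 0.7)
        airlight = np.array([0.9, 0.9, 0.9])
        t_r = np.full((2, 2), 0.5)
        print(restore(foggy, airlight, t_r))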

  10. A Taylor weak-statement algorithm for hyperbolic conservation laws

    Science.gov (United States)

    Baker, A. J.; Kim, J. W.

    1987-01-01

    Finite element analysis, applied to computational fluid dynamics (CFD) problem classes, presents a formal procedure for establishing the ingredients of a discrete approximation numerical solution algorithm. A classical Galerkin weak-statement formulation, formed on a Taylor series extension of the conservation law system, is developed herein that embeds a set of parameters eligible for constraint according to specification of suitable norms. The derived family of Taylor weak statements is shown to contain, as special cases, over one dozen independently derived CFD algorithms published over the past several decades for the high speed flow problem class. A theoretical analysis is completed that facilitates direct qualitative comparisons. Numerical results for definitive linear and nonlinear test problems permit direct quantitative performance comparisons.

  11. A Sustainable City Planning Algorithm Based on TLBO and Local Search

    Science.gov (United States)

    Zhang, Ke; Lin, Li; Huang, Xuanxuan; Liu, Yiming; Zhang, Yonggang

    2017-09-01

    Nowadays, how to design a city with more sustainable features has become a central problem in the field of social development, and it has provided a broad stage for the application of artificial intelligence theories and methods. Because the design of a sustainable city is essentially a constrained optimization problem, extensively studied swarm intelligence algorithms are natural candidates for solving it. TLBO (Teaching-Learning-Based Optimization) is a new swarm intelligence algorithm. Its inspiration comes from the “teaching” and “learning” behavior of a teaching class in everyday life. The evolution of the population is realized by simulating the “teaching” of the teacher and the students “learning” from each other; the method has few parameters, is efficient, conceptually simple, and easy to implement. It has been successfully applied to scheduling, planning, configuration and other fields, where it achieved good results, and it has received more and more attention from artificial intelligence researchers. Based on the classical TLBO algorithm, we propose a TLBO_LS algorithm combined with local search. We design and implement a random generation algorithm and an evaluation model for the urban planning problem. Experiments on small and medium-sized randomly generated problems show that our proposed algorithm has obvious advantages over the DE algorithm and the classical TLBO algorithm in terms of convergence speed and solution quality.
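
    For reference, a minimal sketch of the classical TLBO loop (not the TLBO_LS variant with local search proposed above) is given below, with the usual teacher phase (move toward the best solution relative to the population mean) and learner phase (pairwise interaction). Population size, iteration count, and the test function are arbitrary illustrative choices.

        import random

        def tlbo(f, dim, lo, hi, pop_size=20, iters=100, seed=1):
            """Minimal Teaching-Learning-Based Optimization sketch (minimization).
            f: objective; lo, hi: box bounds. Parameter choices are illustrative."""
            rng = random.Random(seed)
            clip = lambda x: [min(max(v, lo), hi) for v in x]
            pop = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(iters):
                # Teacher phase: move every learner toward the best solution.
                teacher = min(pop, key=f)
                mean = [sum(x[d] for x in pop) / pop_size for d in range(dim)]
                for i, x in enumerate(pop):
                    tf = rng.choice([1, 2])                      # teaching factor
                    cand = clip([x[d] + rng.random() * (teacher[d] - tf * mean[d])
                                 for d in range(dim)])
                    if f(cand) < f(x):
                        pop[i] = cand
                # Learner phase: pairwise interaction between random learners.
                for i, x in enumerate(pop):
                    j = rng.randrange(pop_size)
                    if j == i:
                        continue
                    other = pop[j]
                    sign = 1.0 if f(other) < f(x) else -1.0      # move toward the better one
                    cand = clip([x[d] + sign * rng.random() * (other[d] - x[d])
                                 for d in range(dim)])
                    if f(cand) < f(x):
                        pop[i] = cand
            return min(pop, key=f)

        # Example: minimize the sphere function in 5 dimensions.
        print(tlbo(lambda x: sum(v * v for v in x), dim=5, lo=-10, hi=10))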

  12. Evaluation of six TPS algorithms in computing entrance and exit doses

    Science.gov (United States)

    Metwaly, Mohamed; Glegg, Martin; Baggarley, Shaun P.; Elliott, Alex

    2014-01-01

    Entrance and exit doses are commonly measured in in vivo dosimetry for comparison with expected values, usually generated by the treatment planning system (TPS), to verify accuracy of treatment delivery. This report aims to evaluate the accuracy of six TPS algorithms in computing entrance and exit doses for a 6 MV beam. The algorithms tested were: pencil beam convolution (Eclipse PBC), analytical anisotropic algorithm (Eclipse AAA), AcurosXB (Eclipse AXB), FFT convolution (XiO Convolution), multigrid superposition (XiO Superposition), and Monte Carlo photon (Monaco MC). Measurements with ionization chamber (IC) and diode detector in water phantoms were used as a reference. Comparisons were done in terms of central axis point dose, 1D relative profiles, and 2D absolute gamma analysis. Entrance doses computed by all TPS algorithms agreed to within 2% of the measured values. Exit doses computed by XiO Convolution, XiO Superposition, Eclipse AXB, and Monaco MC agreed with the IC measured doses to within 2%‐3%. Meanwhile, Eclipse PBC and Eclipse AAA computed exit doses were higher than the IC measured doses by up to 5.3% and 4.8%, respectively. Both algorithms assume that full backscatter exists even at the exit level, leading to an overestimation of exit doses. Despite good agreements at the central axis for Eclipse AXB and Monaco MC, 1D relative comparisons showed profiles mismatched at depths beyond 11.5 cm. Overall, the 2D absolute gamma (3%/3 mm) pass rates were better for Monaco MC, while Eclipse AXB failed mostly at the outer 20% of the field area. The findings of this study serve as a useful baseline for the implementation of entrance and exit in vivo dosimetry in clinical departments utilizing any of these six common TPS algorithms for reference comparison. PACS numbers: 87.55.‐x, 87.55.D‐, 87.55.N‐, 87.53.Bn PMID:24892349

  13. Research on parallel algorithm for sequential pattern mining

    Science.gov (United States)

    Zhou, Lijuan; Qin, Bai; Wang, Yu; Hao, Zhongxiao

    2008-03-01

    Sequential pattern mining is the mining of frequent sequences related to time or other orders from a sequence database. Its initial motivation was to discover the laws of customer purchasing over a time period by finding the frequent sequences. In recent years, sequential pattern mining has become an important direction of data mining, and its application field is no longer confined to business databases; it has extended to new data sources such as the Web and advanced scientific fields such as DNA analysis. The data of sequential pattern mining have the following characteristics: massive data volume and distributed storage. Most existing sequential pattern mining algorithms have not considered these characteristics together. According to the traits mentioned above and combining parallel theory, this paper puts forward a new distributed parallel algorithm, SPP (Sequential Pattern Parallel). The algorithm abides by the principle of pattern reduction and utilizes a divide-and-conquer strategy for parallelization. The first parallel task is to construct frequent item sets by applying the frequency concept and search-space partition theory, and the second task is to build frequent sequences using depth-first search at each processor. The algorithm only needs to access the database twice and does not generate candidate sequences, which reduces access time and improves mining efficiency. Based on the random data generation procedure and the different information structures designed, this paper simulates the SPP algorithm in a concrete parallel environment and implements the AprioriAll algorithm for comparison. The experiments demonstrate that, compared with AprioriAll, the SPP algorithm has an excellent speedup factor and efficiency.

  14. A numeric comparison of variable selection algorithms for supervised learning

    International Nuclear Information System (INIS)

    Palombo, G.; Narsky, I.

    2009-01-01

    Datasets in modern High Energy Physics (HEP) experiments are often described by dozens or even hundreds of input variables. Reducing a full variable set to a subset that most completely represents information about data is therefore an important task in analysis of HEP data. We compare various variable selection algorithms for supervised learning using several datasets such as, for instance, imaging gamma-ray Cherenkov telescope (MAGIC) data found at the UCI repository. We use classifiers and variable selection methods implemented in the statistical package StatPatternRecognition (SPR), a free open-source C++ package developed in the HEP community (http://sourceforge.net/projects/statpatrec/). For each dataset, we select a powerful classifier and estimate its learning accuracy on variable subsets obtained by various selection algorithms. When possible, we also estimate the CPU time needed for the variable subset selection. The results of this analysis are compared with those published previously for these datasets using other statistical packages such as R and Weka. We show that the most accurate, yet slowest, method is a wrapper algorithm known as generalized sequential forward selection ('Add N Remove R') implemented in SPR.

  15. Performance Evaluation of Incremental K-means Clustering Algorithm

    OpenAIRE

    Chakraborty, Sanjay; Nagwani, N. K.

    2014-01-01

    The incremental K-means clustering algorithm has already been proposed and analysed in the paper [Chakraborty and Nagwani, 2011]. It is an innovative approach which is applicable in periodically incremental environments and when dealing with a bulk of updates. In this paper the performance evaluation is done for this incremental K-means clustering algorithm using an air pollution database. This paper also describes the comparison on the performance evaluations between the existing K-means clustering and i...

  16. Comparison of Co-Temporal Modeling Algorithms on Sparse Experimental Time Series Data Sets.

    Science.gov (United States)

    Allen, Edward E; Norris, James L; John, David J; Thomas, Stan J; Turkett, William H; Fetrow, Jacquelyn S

    2010-01-01

    Multiple approaches for reverse-engineering biological networks from time-series data have been proposed in the computational biology literature. These approaches can be classified by their underlying mathematical algorithms, such as Bayesian or algebraic techniques, as well as by their time paradigm, which includes next-state and co-temporal modeling. The types of biological relationships, such as parent-child or siblings, discovered by these algorithms are quite varied. It is important to understand the strengths and weaknesses of the various algorithms and time paradigms on actual experimental data. We assess how well the co-temporal implementations of three algorithms, continuous Bayesian, discrete Bayesian, and computational algebraic, can 1) identify two types of entity relationships, parent and sibling, between biological entities, 2) deal with experimental sparse time course data, and 3) handle experimental noise seen in replicate data sets. These algorithms are evaluated, using the shuffle index metric, for how well the resulting models match literature models in terms of siblings and parent relationships. Results indicate that all three co-temporal algorithms perform well, at a statistically significant level, at finding sibling relationships, but perform relatively poorly in finding parent relationships.

  17. Algorithm Research of Individualized Travelling Route Recommendation Based on Similarity

    Directory of Open Access Journals (Sweden)

    Xue Shan

    2015-01-01

    Full Text Available Although commercial recommendation systems have made certain achievements in travelling route development, recommendation systems face a series of challenges because of people's increasing interest in travelling. The core content of a recommendation system is clearly its recommendation algorithm, and the strengths of the algorithm largely determine the effectiveness of the system. On this basis, this paper analyses the traditional collaborative filtering algorithm. After illustrating the deficiencies of that algorithm, such as rating unicity and rating matrix sparsity, this paper proposes an improved algorithm combining a user-based multi-similarity algorithm with a user-based element similarity algorithm, so as to compensate for the deficiencies of the traditional algorithm within a controllable range. Experimental results show that the improved algorithm has obvious advantages in comparison with the traditional one, in particular in remedying rating matrix sparsity and rating unicity.
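
    A minimal sketch of the general idea, combining two user-based similarity measures into one weighted score, is shown below. The concrete measures (cosine over co-rated items and Jaccard overlap), the combination weight, and the toy rating data are assumptions for illustration; they are not the paper's multi-similarity and element-similarity definitions.

        import math

        # Toy user-item rating matrix.
        RATINGS = {
            "u1": {"route_a": 5, "route_b": 3, "route_c": 4},
            "u2": {"route_a": 4, "route_b": 2, "route_d": 5},
            "u3": {"route_b": 4, "route_c": 5, "route_d": 3},
        }

        def cosine_sim(u, v):
            """Rating-based similarity over co-rated items."""
            common = set(RATINGS[u]) & set(RATINGS[v])
            if not common:
                return 0.0
            num = sum(RATINGS[u][i] * RATINGS[v][i] for i in common)
            du = math.sqrt(sum(RATINGS[u][i] ** 2 for i in common))
            dv = math.sqrt(sum(RATINGS[v][i] ** 2 for i in common))
            return num / (du * dv)

        def jaccard_sim(u, v):
            """Item-overlap similarity, independent of rating values."""
            a, b = set(RATINGS[u]), set(RATINGS[v])
            return len(a & b) / len(a | b)

        def combined_sim(u, v, alpha=0.6):
            return alpha * cosine_sim(u, v) + (1 - alpha) * jaccard_sim(u, v)

        def predict(user, item):
            """Similarity-weighted average of neighbours' ratings for the item."""
            pairs = [(combined_sim(user, v), r[item])
                     for v, r in RATINGS.items() if v != user and item in r]
            if not pairs:
                return None
            return sum(s * r for s, r in pairs) / (sum(s for s, _ in pairs) or 1.0)

        print(predict("u1", "route_d"))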

  18. A modified K3M thinning algorithm

    Directory of Open Access Journals (Sweden)

    Tabedzki Marek

    2016-06-01

    Full Text Available The K3M thinning algorithm is a general method for image data reduction by skeletonization. It had proved its feasibility in most cases as a reliable and robust solution in typical applications of thinning, particularly in preprocessing for optical character recognition. However, the algorithm still had some weak points. Since then K3M has been revised, addressing the best known drawbacks. This paper presents a modified version of the algorithm. A comparison is made with the original one and two other thinning approaches. The proposed modification, among other things, solves the main drawback of K3M, namely the results of thinning an image after rotation at various angles.

  19. Synthesis algorithm of VLSI multipliers for ASIC

    Science.gov (United States)

    Chua, O. H.; Eldin, A. G.

    1993-01-01

    Multipliers are critical sub-blocks in ASIC design, especially for digital signal processing and communications applications. A flexible multiplier synthesis tool is developed which is capable of generating multiplier blocks for word size in the range of 4 to 256 bits. A comparison of existing multiplier algorithms is made in terms of speed, silicon area, and suitability for automated synthesis and verification of its VLSI implementation. The algorithm divides the range of supported word sizes into sub-ranges and provides each sub-range with a specific multiplier architecture for optimal speed and area. The algorithm of the synthesis tool and the multiplier architectures are presented. Circuit implementation and the automated synthesis methodology are discussed.

  20. Comparison of two ordinal prediction models

    DEFF Research Database (Denmark)

    Kattan, Michael W; Gerds, Thomas A

    2015-01-01

    system (i.e. old or new), such as the level of evidence for one or more factors included in the system or the general opinions of expert clinicians. However, given the major objective of estimating prognosis on an ordinal scale, we argue that the rival staging system candidates should be compared...... on their ability to predict outcome. We sought to outline an algorithm that would compare two rival ordinal systems on their predictive ability. RESULTS: We devised an algorithm based largely on the concordance index, which is appropriate for comparing two models in their ability to rank observations. We...... demonstrate our algorithm with a prostate cancer staging system example. CONCLUSION: We have provided an algorithm for selecting the preferred staging system based on prognostic accuracy. It appears to be useful for the purpose of selecting between two ordinal prediction models....
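
    Since the record's comparison rests largely on the concordance index, a small sketch of that computation for two rival ordinal staging systems may help. The pair-counting rule below (ties in the predicted stage count as one half) and the toy data are illustrative; real survival outcomes would also require handling of censoring, which is omitted here.

        from itertools import combinations

        def concordance_index(stages, outcomes):
            """Fraction of usable pairs (different outcomes) that the ordinal stage
            ranks concordantly; ties in stage count as 0.5."""
            concordant, usable = 0.0, 0
            for (s1, o1), (s2, o2) in combinations(zip(stages, outcomes), 2):
                if o1 == o2:
                    continue                       # pair not usable
                usable += 1
                if (s1 - s2) * (o1 - o2) > 0:
                    concordant += 1.0              # ranked in the same order
                elif s1 == s2:
                    concordant += 0.5              # tied prediction
            return concordant / usable if usable else float("nan")

        # Hypothetical data: outcomes plus stage assignments from two rival systems.
        outcome  = [1, 2, 2, 3, 4, 4, 5]
        system_a = [1, 1, 2, 2, 3, 4, 4]
        system_b = [1, 2, 1, 3, 2, 4, 3]

        ca, cb = concordance_index(system_a, outcome), concordance_index(system_b, outcome)
        print(f"system A c-index = {ca:.3f}, system B c-index = {cb:.3f}")
        print("prefer:", "A" if ca >= cb else "B")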

  1. Current Piano Education of Turkish Music Teacher Candidates: Comparisons of Instructors and Students Perceptions

    Science.gov (United States)

    Jelen, Birsen

    2015-01-01

    In recent years almost every newly opened government funded university in Turkey has established a music department where future music teachers are educated and piano is compulsory for every single music teacher candidate in Turkey. The aim of this research is to compare piano teaching instructors' and their students' perceptions about the current…

  2. Space mapping optimization algorithms for engineering design

    DEFF Research Database (Denmark)

    Koziel, Slawomir; Bandler, John W.; Madsen, Kaj

    2006-01-01

    A simple, efficient optimization algorithm based on space mapping (SM) is presented. It utilizes input SM to reduce the misalignment between the coarse and fine models of the optimized object over a region of interest, and output space mapping (OSM) to ensure matching of response and first...... to a benchmark problem. In comparison with SMIS, the models presented are simple and have a small number of parameters that need to be extracted. The new algorithm is applied to the optimization of a coupled-line band-pass filter....

  3. Comparison of primary productivity estimates in the Baltic Sea based on the DESAMBEM algorithm with estimates based on other similar algorithms

    Directory of Open Access Journals (Sweden)

    Małgorzata Stramska

    2013-02-01

    Full Text Available The quasi-synoptic view available from satellites has been broadly used in recent years to observe in near-real time the large-scale dynamics of marine ecosystems and to estimate primary productivity in the world ocean. However, the standard global NASA ocean colour algorithms generally do not produce good results in the Baltic Sea. In this paper, we compare the ability of seven algorithms to estimate depth-integrated daily primary production (PP, mg C m-2 in the Baltic Sea. All the algorithms use surface chlorophyll concentration, sea surface temperature, photosynthetic available radiation, latitude, longitude and day of the year as input data. Algorithm-derived PP is then compared with PP estimates obtained from 14C uptake measurements. The results indicate that the best agreement between the modelled and measured PP in the Baltic Sea is obtained with the DESAMBEM algorithm. This result supports the notion that a regional approach should be used in the interpretation of ocean colour satellite data in the Baltic Sea.

  4. Analysis of a parallel multigrid algorithm

    Science.gov (United States)

    Chan, Tony F.; Tuminaro, Ray S.

    1989-01-01

    The parallel multigrid algorithm of Frederickson and McBryan (1987) is considered. This algorithm uses multiple coarse-grid problems (instead of one problem) in the hope of accelerating convergence and is found to have a close relationship to traditional multigrid methods. Specifically, the parallel coarse-grid correction operator is identical to a traditional multigrid coarse-grid correction operator, except that the mixing of high and low frequencies caused by aliasing error is removed. Appropriate relaxation operators can be chosen to take advantage of this property. Comparisons between the standard multigrid and the new method are made.

  5. The role of the core in degeneracy of chiral candidate band doubling

    International Nuclear Information System (INIS)

    Timar, J.; Sohler, D.; Vaman, C.; SUNY, Stony Brook, NY; Starosta, K.; Fossan, D.B.; Koike, T.; Tohoku Univ., Sendai; Lee, I.Y.; Macchiavelli, A.O.

    2005-01-01

    Complete text of publication follows. Nearly degenerate ΔI=1 rotational bands have been observed recently in several odd-odd nuclei in the A ∼ 130 and A ∼ 100 mass regions. The properties of these doublet bands have been found to agree with the scenario of spontaneous formation of chirality and to disagree with other possible scenarios. However, the most recent results obtained from lifetime experiments for some chiral candidate nuclei in the A ∼ 130 mass region seem to contradict the chiral interpretation of the doublet bands in these nuclei, based on the observed differences in the absolute electromagnetic transition rates; the transition rates expected for chiral doublets are predicted to be very similar. Therefore it is interesting to search for new types of experimental data that may provide further possibilities to distinguish between alternative interpretations, and may uncover new properties of the mechanism that is responsible for the band doubling in these nuclei. Such a new type of experimental data was found by studying the chiral candidate bands in neighboring Rh nuclei. High-spin states of 103Rh were studied using the 96Zr(11B,4n) reaction at 40 MeV beam energy, and chiral partner candidate bands have been found. As a result of this observation, a special quartet of neighboring chiral candidate nuclei can be investigated for the first time. With this quartet identified, a comparison becomes possible between the behavior of the nearly degenerate doublet bands belonging to the same core but to different valence quasiparticle configurations, as well as those belonging to different cores but to the same valence quasiparticle configuration. The comparison shows that the energy separation of these doublet band structures depends mainly on the core properties and only to a lesser extent on the valence quasiparticle coupling. This observation sets up new criteria for the explanations of the band doublings, restricting the possible scenarios and providing

  6. Application of the distributed genetic algorithm for in-core fuel optimization problems under parallel computational environment

    International Nuclear Information System (INIS)

    Yamamoto, Akio; Hashimoto, Hiroshi

    2002-01-01

    The distributed genetic algorithm (DGA) is applied to loading pattern optimization problems of pressurized water reactors. The basic concept of DGA follows that of the conventional genetic algorithm (GA). However, DGA equally distributes candidate solutions (i.e. loading patterns) to several independent ''islands'' and evolves them in each island. Communication between islands, i.e. migration of some candidates between islands, is performed with a certain period. Since candidate solutions evolve independently in each island while accepting different genes from migrants, the premature convergence seen in the conventional GA can be prevented. Because many candidate loading patterns must be evaluated in GA or DGA, parallelization is effective in reducing turnaround time. The parallel efficiency of DGA was measured using our optimization code, and good efficiency was attained even in a heterogeneous cluster environment owing to dynamic distribution of the calculation load. The optimization code is based on a client/server architecture with native TCP/IP sockets, in which a client (optimization) module and calculation server modules exchange loading-pattern objects with each other. Through a sensitivity study on the optimization parameters of DGA, a suitable set of parameters for a test problem was identified. Finally, the optimization capabilities of DGA and the conventional GA were compared on the test problem, and DGA provided better optimization results than the conventional GA. (author)
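
    A minimal island-model GA sketch is given below to illustrate the distribution-and-migration idea: several sub-populations evolve independently and periodically exchange their best individuals along a ring. The binary toy objective, ring topology, and all parameters are assumptions for illustration; the actual loading-pattern code, its encoding, and its client/server parallelization are not reproduced.

        import random

        def island_ga(f, n_islands=4, pop_size=12, dim=8, gens=60,
                      migrate_every=10, seed=3):
            """Minimal island-model (distributed) GA sketch minimizing f over binary strings."""
            rng = random.Random(seed)
            islands = [[[rng.randint(0, 1) for _ in range(dim)] for _ in range(pop_size)]
                       for _ in range(n_islands)]

            def step(pop):
                new = []
                for _ in range(pop_size):
                    p1 = min(rng.sample(pop, 2), key=f)       # tournament selection
                    p2 = min(rng.sample(pop, 2), key=f)
                    cut = rng.randrange(1, dim)               # one-point crossover
                    child = p1[:cut] + p2[cut:]
                    child = [g ^ (rng.random() < 0.05) for g in child]  # mutation
                    new.append(child)
                return new

            for g in range(1, gens + 1):
                islands = [step(pop) for pop in islands]
                if g % migrate_every == 0:                    # ring migration of the best
                    best = [min(pop, key=f) for pop in islands]
                    for i, pop in enumerate(islands):
                        pop[rng.randrange(pop_size)] = best[(i - 1) % n_islands][:]
            return min((min(pop, key=f) for pop in islands), key=f)

        # Toy objective: number of bits differing from an arbitrary target pattern.
        target = [1, 0, 1, 1, 0, 0, 1, 0]
        best = island_ga(lambda x: sum(a != b for a, b in zip(x, target)))
        print(best, "mismatches:", sum(a != b for a, b in zip(best, target)))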

  7. Who seeks bariatric surgery? Psychosocial functioning among adolescent candidates, other treatment-seeking adolescents with obesity and healthy controls.

    Science.gov (United States)

    Call, C C; Devlin, M J; Fennoy, I; Zitsman, J L; Walsh, B T; Sysko, R

    2017-12-01

    Limited data are available on the characteristics of adolescents with obesity who seek bariatric surgery. Existing data suggest that adolescent surgery candidates have a higher body mass index (BMI) than comparison adolescents with obesity, but the limited findings regarding psychosocial functioning are mixed. This study aimed to compare BMI and psychosocial functioning among adolescent bariatric surgery candidates, outpatient medical-treatment-seeking adolescents with obesity (receiving lifestyle modification), and adolescents in the normal-weight range. All adolescents completed self-report measures of impulsivity, delay discounting, depression, anxiety, stress, eating pathology, family functioning and quality of life, and had their height and weight measured. Adolescent surgical candidates had higher BMIs than both comparison groups. Surgical candidates did not differ from medical-treatment-seeking adolescents with obesity on any measure of psychosocial functioning, but both groups of adolescents with obesity reported greater anxiety and eating pathology and poorer quality of life than normal-weight adolescents. Quality of life no longer differed across groups after controlling for BMI, suggesting that it is highly related to weight status. Adolescents with obesity may experience greater anxiety, eating pathology, and quality of life impairments than their peers in the normal-weight range regardless of whether they are seeking surgery or outpatient medical treatment. Clinical implications and directions for future research are discussed. © 2017 World Obesity Federation.

  8. Prime Numbers Comparison using Sieve of Eratosthenes and Sieve of Sundaram Algorithm

    Science.gov (United States)

    Abdullah, D.; Rahim, R.; Apdilah, D.; Efendi, S.; Tulus, T.; Suwilo, S.

    2018-03-01

    Prime numbers hold continuing appeal for researchers because of their complexity, and many algorithms, ranging from simple to computationally complex, can be used to generate them. The Sieve of Eratosthenes and the Sieve of Sundaram are two algorithms that can be used to generate prime numbers from randomly generated or sequential numbers. The testing in this study aims to determine which algorithm is better suited for large primes in terms of time complexity. The tests are assisted by an application written in Java with code optimization and maximum memory usage, so that the testing processes can run simultaneously and the results obtained can be objective.
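
    Both sieves are standard, so a compact sketch (in Python rather than the Java application described above) can make the comparison concrete; the cross-check assertion verifies that the two algorithms produce the same primes.

        def eratosthenes(n):
            """All primes <= n by crossing out multiples of each prime."""
            is_prime = [True] * (n + 1)
            is_prime[0:2] = [False, False]
            for p in range(2, int(n ** 0.5) + 1):
                if is_prime[p]:
                    for m in range(p * p, n + 1, p):
                        is_prime[m] = False
            return [i for i, flag in enumerate(is_prime) if flag]

        def sundaram(n):
            """All primes <= n. Sundaram's sieve removes numbers of the form
            i + j + 2*i*j; the survivors k map to the odd primes 2*k + 1."""
            k = (n - 1) // 2
            marked = [False] * (k + 1)
            for i in range(1, k + 1):
                j = i
                while i + j + 2 * i * j <= k:
                    marked[i + j + 2 * i * j] = True
                    j += 1
            primes = [2] if n >= 2 else []
            primes += [2 * m + 1 for m in range(1, k + 1) if not marked[m]]
            return primes

        assert eratosthenes(100) == sundaram(100)
        print(eratosthenes(30))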

  9. An Enhanced Genetic Algorithm for the Generalized Traveling Salesman Problem

    Directory of Open Access Journals (Sweden)

    H. Jafarzadeh

    2017-12-01

    Full Text Available The generalized traveling salesman problem (GTSP) deals with finding the minimum-cost tour in a clustered set of cities. In this problem, the traveler is interested in finding the best path that goes through all clusters. As this problem is NP-hard, implementing a metaheuristic algorithm to solve large-scale problems is inevitable. The performance of these algorithms can be considerably enhanced by other heuristic algorithms. In this study, a search method is developed that improves solution quality and computation time considerably in comparison with the genetic algorithm. In the proposed algorithm, the genetic algorithm is combined with the Nearest Neighbor Search (NNS) and a heuristic mutation operator is applied. According to the experimental results on a set of standard test problems with symmetric distances, the proposed algorithm finds the best solutions in most cases with the least computational time. The proposed algorithm is highly competitive with previously published algorithms in both solution quality and running time.

  10. A comparison of updating algorithms for large N reduced models

    Energy Technology Data Exchange (ETDEWEB)

    Pérez, Margarita García [Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); González-Arroyo, Antonio [Instituto de Física Teórica UAM-CSIC, Universidad Autónoma de Madrid,Nicolás Cabrera 13-15, E-28049-Madrid (Spain); Departamento de Física Teórica, C-XI Universidad Autónoma de Madrid,E-28049 Madrid (Spain); Keegan, Liam [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland); Okawa, Masanori [Graduate School of Science, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Core of Research for the Energetic Universe, Hiroshima University,Higashi-Hiroshima, Hiroshima 739-8526 (Japan); Ramos, Alberto [PH-TH, CERN,CH-1211 Geneva 23 (Switzerland)

    2015-06-29

    We investigate Monte Carlo updating algorithms for simulating SU(N) Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole SU(N) matrix at once, or iterating through SU(2) subgroups of the SU(N) matrix. We find the same critical exponent in both cases, and only a slight difference between the two.

  11. A comparison of updating algorithms for large $N$ reduced models

    CERN Document Server

    Pérez, Margarita García; Keegan, Liam; Okawa, Masanori; Ramos, Alberto

    2015-01-01

    We investigate Monte Carlo updating algorithms for simulating $SU(N)$ Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole $SU(N)$ matrix at once, or iterating through $SU(2)$ subgroups of the $SU(N)$ matrix. We find the same critical exponent in both cases, and only a slight difference between the two.

  12. An Extended HITS Algorithm on Bipartite Network for Features Extraction of Online Customer Reviews

    Directory of Open Access Journals (Sweden)

    Chen Liu

    2018-05-01

    Full Text Available How to acquire useful information intelligently in the age of information explosion has become an important issue. In this context, sentiment analysis emerges with the growth of the need for information extraction. One of the most important tasks of sentiment analysis is feature extraction of entities in consumer reviews. This paper first constructs a directed bipartite feature-sentiment relation network from a set of candidate feature-sentiment pairs extracted by dependency syntax analysis from consumer reviews. Then, a novel method called MHITS, which combines PMI with a weighted HITS algorithm, is proposed to rank these candidate product features and identify the real product features. Empirical experiments indicate the effectiveness of our approach across different product categories and various data sizes. In addition, the effect of the proposed algorithm varies with the proportion of word pairs in the corpus that include general collocation words such as “bad”, “good”, “poor”, “pretty good”, and “not bad”.
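
    To make the ranking step concrete, the sketch below runs a weighted HITS iteration on a toy bipartite graph in which sentiment words act as hubs and candidate features as authorities; features are then ranked by authority score. The edge weights (which in the paper's setting could come from PMI) and the toy review graph are assumptions for illustration.

        import math

        # Toy weighted bipartite graph: sentiment word -> {candidate feature: weight}.
        EDGES = {
            "great":  {"battery": 2.0, "screen": 1.0},
            "poor":   {"battery": 1.5, "camera": 2.5},
            "bright": {"screen": 3.0},
        }

        def hits_bipartite(edges, iters=50):
            """Weighted HITS on a bipartite graph: sentiment words act as hubs,
            candidate features as authorities. Returns features ranked by authority."""
            hubs = {u: 1.0 for u in edges}
            feats = {f for nbrs in edges.values() for f in nbrs}
            auth = {f: 1.0 for f in feats}
            for _ in range(iters):
                # Authority update: weighted sum of hub scores pointing at the feature.
                auth = {f: sum(edges[u].get(f, 0.0) * hubs[u] for u in edges) for f in feats}
                # Hub update: weighted sum of authority scores the word points to.
                hubs = {u: sum(w * auth[f] for f, w in edges[u].items()) for u in edges}
                # Normalize to keep the scores bounded.
                na = math.sqrt(sum(v * v for v in auth.values())) or 1.0
                nh = math.sqrt(sum(v * v for v in hubs.values())) or 1.0
                auth = {f: v / na for f, v in auth.items()}
                hubs = {u: v / nh for u, v in hubs.items()}
            return sorted(auth.items(), key=lambda kv: -kv[1])

        print(hits_bipartite(EDGES))   # features ranked by authority score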

  13. Planetary transit candidates in Corot-IRa01 field

    Science.gov (United States)

    Carpano, S.; Cabrera, J.; Alonso, R.; Barge, P.; Aigrain, S.; Almenara, J.-M.; Bordé, P.; Bouchy, F.; Carone, L.; Deeg, H. J.; de La Reza, R.; Deleuil, M.; Dvorak, R.; Erikson, A.; Fressin, F.; Fridlund, M.; Gondoin, P.; Guillot, T.; Hatzes, A.; Jorda, L.; Lammer, H.; Léger, A.; Llebaria, A.; Magain, P.; Moutou, C.; Ofir, A.; Ollivier, M.; Janot-Pacheco, E.; Pätzold, M.; Pont, F.; Queloz, D.; Rauer, H.; Régulo, C.; Renner, S.; Rouan, D.; Samuel, B.; Schneider, J.; Wuchterl, G.

    2009-10-01

    Context: CoRoT is a pioneering space mission devoted to the analysis of stellar variability and the photometric detection of extrasolar planets. Aims: We present the list of planetary transit candidates detected in the first field observed by CoRoT, IRa01, the initial run toward the Galactic anticenter, which lasted for 60 days. Methods: We analysed 3898 sources in the coloured bands and 5974 in the monochromatic band. Instrumental noise and stellar variability were taken into account using detrending tools before applying various transit search algorithms. Results: Fifty sources were classified as planetary transit candidates and the most reliable 40 detections were declared targets for follow-up ground-based observations. Two of these targets have so far been confirmed as planets, CoRoT-1b and CoRoT-4b, for which a complete characterization and specific studies were performed. The CoRoT space mission, launched on December 27th 2006, has been developed and is operated by CNES, with contributions from Austria, Belgium, Brazil, ESA, Germany, and Spain. Four French laboratories associated with the CNRS (LESIA, LAM, IAS, OMP) collaborate with CNES on the satellite development. First CoRoT data are available to the public from the CoRoT archive: http://idoc-corot.ias.u-psud.fr.

  14. Variational optimization algorithms for uniform matrix product states

    Science.gov (United States)

    Zauner-Stauber, V.; Vanderstraeten, L.; Fishman, M. T.; Verstraete, F.; Haegeman, J.

    2018-01-01

    We combine the density matrix renormalization group (DMRG) with matrix product state tangent space concepts to construct a variational algorithm for finding ground states of one-dimensional quantum lattices in the thermodynamic limit. A careful comparison of this variational uniform matrix product state algorithm (VUMPS) with infinite density matrix renormalization group (IDMRG) and with infinite time evolving block decimation (ITEBD) reveals substantial gains in convergence speed and precision. We also demonstrate that VUMPS works very efficiently for Hamiltonians with long-range interactions and also for the simulation of two-dimensional models on infinite cylinders. The new algorithm can be conveniently implemented as an extension of an already existing DMRG implementation.

  15. The association between neurological deficit in acute ischemic stroke and mean transit time. Comparison of four different perfusion MRI algorithms

    International Nuclear Information System (INIS)

    Schellinger, Peter D.; Latour, Lawrence L.; Chalela, Julio A.; Warach, Steven; Wu, Chen-Sen

    2006-01-01

    The purpose of our study was to identify the perfusion MRI (pMRI) algorithm which yields a volume of hypoperfused tissue that best correlates with the acute clinical deficit as quantified by the NIH Stroke Scale (NIHSS), and therefore reflects critically hypoperfused tissue. A group of 20 patients with a first acute stroke and stroke MRI within 24 h of symptom onset was retrospectively analyzed. Perfusion maps were derived using four different algorithms to estimate relative mean transit time (rMTT): (1) cerebral blood flow (CBF) arterial input function (AIF)/singular value decomposition (SVD); (2) area peak; (3) time to peak (TTP); and (4) first moment method. Lesion volumes based on five different MTT thresholds relative to contralateral brain were compared with each other and correlated with NIHSS score. The first moment method had the highest correlation with NIHSS (r=0.79, P<0.001), followed by the AIF/SVD method; the two did not differ significantly from each other with regard to lesion volumes. TTP and area peak both derived volumes that correlated only poorly or moderately with NIHSS scores. Data from our pilot study suggest that the first moment and the AIF/SVD methods have advantages over the other algorithms in identifying the pMRI lesion volume that best reflects clinical severity. At present there seems to be no need for extensive postprocessing and arbitrarily defined delay thresholds in pMRI, as the simple qualitative approach with a first moment algorithm is equally accurate. Larger sample sizes, which allow comparison between imaging and clinical outcomes, are needed to refine the choice of the best perfusion parameter in pMRI. (orig.)

  16. Low Access Delay Anti-Collision Algorithm for Reader in RFID systems

    DEFF Research Database (Denmark)

    Galiotto, Carlo; Marchetti, Nicola; Prasad, Neeli R.

    2010-01-01

    Radio Frequency Identification (RFID) is a technology which is spreading more and more as a means to identify, locate and track assets through the production chain. Like all wireless communication devices sharing the same transmission channel, RFID readers and tags experience collisions whenever deployed over the same area. In this work, the RFID reader collision problem is studied and a centralized scheduling-based algorithm is proposed as a possible candidate solution, especially for those scenarios involving static or low-mobility readers. Taking into account the circuitry limitations of the tags, which do not allow the use of frequency- or code-division multiple access schemes in RFID systems, this paper proposes an algorithm aiming to prevent reader collisions while keeping the readers' channel access delay as low as possible. The simulation results show that this algorithm performs...

  17. An Incremental High-Utility Mining Algorithm with Transaction Insertion

    Science.gov (United States)

    Gan, Wensheng; Zhang, Binbin

    2015-01-01

    Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It only considers the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications since the purchased items from a customer may have various factors, such as profit or quantity. High-utility mining was designed to solve the limitations of association-rule mining by considering both the quantity and profit measures. Most algorithms of high-utility mining are designed to handle a static database. Few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers from the combinatorial explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation based on the utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns. PMID:25811038

  18. An Incremental High-Utility Mining Algorithm with Transaction Insertion

    Directory of Open Access Journals (Sweden)

    Jerry Chun-Wei Lin

    2015-01-01

    Full Text Available Association-rule mining is commonly used to discover useful and meaningful patterns from a very large database. It only considers the occurrence frequencies of items to reveal the relationships among itemsets. Traditional association-rule mining is, however, not suitable in real-world applications since the purchased items from a customer may have various factors, such as profit or quantity. High-utility mining was designed to solve the limitations of association-rule mining by considering both the quantity and profit measures. Most algorithms of high-utility mining are designed to handle a static database. Few studies handle dynamic high-utility mining with transaction insertion, which otherwise requires database rescans and suffers from the combinatorial explosion of the pattern-growth mechanism. In this paper, an efficient incremental algorithm with transaction insertion is designed to reduce computations without candidate generation based on the utility-list structures. The enumeration tree and the relationships between 2-itemsets are also adopted in the proposed algorithm to speed up the computations. Several experiments are conducted to show the performance of the proposed algorithm in terms of runtime, memory consumption, and number of generated patterns.

  19. Performance Comparison of Different System Identification Algorithms for FACET and ATF2

    CERN Document Server

    Pfingstner, J; Schulte, D

    2013-01-01

    Good system knowledge is an essential ingredient for the operation of modern accelerator facilities. For example, beam-based alignment algorithms and orbit feedbacks rely strongly on a precise measurement of the orbit response matrix. The quality of the measurement of this matrix can be improved over time by statistically combining the effects of small system excitations with the help of system identification algorithms. These small excitations can be applied in a parasitic mode without stopping the accelerator operation (on-line). In this work, different system identification algorithms are used in simulation studies for the response matrix measurement at ATF2. The results for ATF2 are finally compared with the results for FACET, the latter originating from earlier work.

  20. Immune Algorithm Complex Method for Transducer Calibration

    Directory of Open Access Journals (Sweden)

    YU Jiangming

    2014-08-01

    Full Text Available As a key link in engineering test tasks, transducer calibration has a significant influence on the accuracy and reliability of test results. Because of unknown and complex nonlinear characteristics, conventional methods cannot achieve satisfactory accuracy. An immune-algorithm complex modeling approach is proposed, and simulation studies on the calibration of multiple-output transducers are carried out using the developed complex modeling. The simulated and experimental results show that the immune-algorithm complex modeling approach can significantly improve calibration precision in comparison with traditional calibration methods.

  1. PocketMatch: A new algorithm to compare binding sites in protein structures

    Directory of Open Access Journals (Sweden)

    Chandra Nagasuma

    2008-12-01

    Full Text Available Abstract Background Recognizing similarities and deriving relationships among protein molecules is a fundamental requirement in present-day biology. Similarities can be present at various levels, which can be detected through comparison of protein sequences or their structural folds. In some cases, similarities obscured at these levels may be present merely in the substructures at their binding sites. Inferring functional similarities between protein molecules by comparing their binding sites is still largely exploratory and not as yet a routine protocol. One of the main reasons for this is the limitation in the choice of appropriate analytical tools that can compare binding sites with high sensitivity. To benefit from the enormous amount of structural data that is being rapidly accumulated, it is essential to have high-throughput tools that enable large-scale binding site comparison. Results Here we present a new algorithm, PocketMatch, for comparison of binding sites in a frame-invariant manner. Each binding site is represented by 90 lists of sorted distances capturing the shape and chemical nature of the site. The sorted arrays are then aligned using an incremental alignment method and scored to obtain PMScores for pairs of sites. A comprehensive sensitivity analysis and an extensive validation of the algorithm have been carried out. A comparison with other site-matching algorithms is also presented. Perturbation studies, where the geometry of a given site was retained but the residue types were changed randomly, indicated that chance similarities were virtually non-existent. Our analysis also demonstrates that shape information alone is insufficient to discriminate between diverse binding sites, unless combined with the chemical nature of amino acids. Conclusion A new algorithm has been developed to compare binding sites in an accurate, efficient and high-throughput manner. Though the representation used is conceptually simplistic, we demonstrate that
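
    A much-simplified, hypothetical sketch of the same style of comparison is given below: each site is reduced to a few sorted distance lists keyed by pairs of chemical groups, two sites are compared by incrementally aligning the corresponding sorted lists, and the matched fraction plays the role of a PMScore-like similarity. The grouping into three chemical classes, the distance tolerance, and the scoring are assumptions; PocketMatch itself uses 90 lists and its own alignment and scoring rules.

        from itertools import combinations

        def site_signature(atoms, groups=("donor", "acceptor", "hydrophobic")):
            """atoms: list of (x, y, z, chemical_group). Returns, for each unordered
            pair of chemical groups, a sorted list of inter-atom distances."""
            sig = {}
            for g1, g2 in combinations(sorted(groups), 2):
                dists = []
                for (x1, y1, z1, a), (x2, y2, z2, b) in combinations(atoms, 2):
                    if {a, b} == {g1, g2}:
                        dists.append(((x1-x2)**2 + (y1-y2)**2 + (z1-z2)**2) ** 0.5)
                sig[(g1, g2)] = sorted(dists)
            return sig

        def aligned_count(xs, ys, tol=0.5):
            """Incrementally align two sorted distance lists, counting pairs within tol."""
            i = j = matched = 0
            while i < len(xs) and j < len(ys):
                if abs(xs[i] - ys[j]) <= tol:
                    matched += 1; i += 1; j += 1
                elif xs[i] < ys[j]:
                    i += 1
                else:
                    j += 1
            return matched

        def pm_like_score(sig_a, sig_b):
            """Fraction of distances matched over all lists (a PMScore-like number)."""
            matched = total = 0
            for key in sig_a:
                matched += aligned_count(sig_a[key], sig_b.get(key, []))
                total += max(len(sig_a[key]), len(sig_b.get(key, [])))
            return matched / total if total else 0.0

        site1 = [(0, 0, 0, "donor"), (3, 0, 0, "acceptor"), (0, 4, 0, "hydrophobic")]
        site2 = [(0, 0, 0, "donor"), (3.2, 0, 0, "acceptor"), (0, 4.1, 0, "hydrophobic")]
        print(pm_like_score(site_signature(site1), site_signature(site2)))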

  2. An Intrinsic Algorithm for Parallel Poisson Disk Sampling on Arbitrary Surfaces.

    Science.gov (United States)

    Ying, Xiang; Xin, Shi-Qing; Sun, Qian; He, Ying

    2013-03-08

    Poisson disk sampling plays an important role in a variety of visual computing tasks, due to its useful statistical distribution properties and the absence of aliasing artifacts. While many effective techniques have been proposed to generate Poisson disk distributions in Euclidean space, relatively little work has been reported on the surface counterpart. This paper presents an intrinsic algorithm for parallel Poisson disk sampling on arbitrary surfaces. We propose a new technique for parallelizing the dart throwing. Rather than the conventional approaches that explicitly partition the spatial domain to generate the samples in parallel, our approach assigns each sample candidate a random and unique priority that is unbiased with regard to the distribution. Hence, multiple threads can process the candidates simultaneously and resolve conflicts by checking the given priority values. It is worth noting that our algorithm is accurate, as the generated Poisson disks are uniformly and randomly distributed without bias. Our method is intrinsic in that all the computations are based on the intrinsic metric and are independent of the embedding space. This intrinsic feature allows us to generate Poisson disk distributions on arbitrary surfaces. Furthermore, by manipulating the spatially varying density function, we can obtain adaptive sampling easily.
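
    The conflict-resolution idea can be emulated serially, which the sketch below does in the Euclidean plane (the paper works with the intrinsic metric on a surface): each candidate receives a unique random priority, and a candidate survives only if no higher-priority candidate within the disk radius has already survived. Candidate count, radius, and the planar domain are illustrative assumptions.

        import random

        def poisson_disk_by_priority(candidates, radius, seed=7):
            """Serial emulation of priority-based parallel dart throwing: every
            candidate gets a unique random priority, and it is accepted iff none of
            the already-accepted, higher-priority candidates lies closer than radius.
            Processing in priority order yields the result a parallel implementation
            converges to after resolving conflicts."""
            rng = random.Random(seed)
            order = list(range(len(candidates)))
            rng.shuffle(order)                      # shuffled position = priority
            accepted = []
            r2 = radius * radius
            for idx in order:
                x, y = candidates[idx]
                if all((x - ax) ** 2 + (y - ay) ** 2 >= r2 for ax, ay in accepted):
                    accepted.append((x, y))
            return accepted

        # Dense random candidates in the unit square, thinned to a Poisson disk set.
        rng = random.Random(0)
        cands = [(rng.random(), rng.random()) for _ in range(2000)]
        print(len(poisson_disk_by_priority(cands, radius=0.08)), "samples kept")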

  3. Adaptive algorithm of magnetic heading detection

    Science.gov (United States)

    Liu, Gong-Xu; Shi, Ling-Feng

    2017-11-01

    Magnetic data obtained from a magnetic sensor usually fluctuate in a certain range, which makes it difficult to estimate the magnetic heading accurately. In fact, magnetic heading information is usually submerged in noise because of all kinds of electromagnetic interference and the diversity of the pedestrian’s motion states. In order to solve this problem, a new adaptive algorithm based on the (typically) right-angled corridors of a building or residential buildings is put forward to process heading information. First, a 3D indoor localization platform is set up based on MPU9250. Then, several groups of data are measured by changing the experimental environment and pedestrian’s motion pace. The raw data from the attached inertial measurement unit are calibrated and arranged into a time-stamped array and written to a data file. Later, the data file is imported into MATLAB for processing and analysis using the proposed adaptive algorithm. Finally, the algorithm is verified by comparison with the existing algorithm. The experimental results show that the algorithm has strong robustness and good fault tolerance, which can detect the heading information accurately and in real-time.

  4. Learning-based meta-algorithm for MRI brain extraction.

    Science.gov (United States)

    Shi, Feng; Wang, Li; Gilmore, John H; Lin, Weili; Shen, Dinggang

    2011-01-01

    Multiple-segmentation-and-fusion method has been widely used for brain extraction, tissue segmentation, and region of interest (ROI) localization. However, such studies are hindered in practice by their computational complexity, mainly coming from the steps of template selection and template-to-subject nonlinear registration. In this study, we address these two issues and propose a novel learning-based meta-algorithm for MRI brain extraction. Specifically, we first use exemplars to represent the entire template library, and assign the most similar exemplar to the test subject. Second, a meta-algorithm combining two existing brain extraction algorithms (BET and BSE) is proposed to conduct multiple extractions directly on test subject. Effective parameter settings for the meta-algorithm are learned from the training data and propagated to subject through exemplars. We further develop a level-set based fusion method to combine multiple candidate extractions together with a closed smooth surface, for obtaining the final result. Experimental results show that, with only a small portion of subjects for training, the proposed method is able to produce more accurate and robust brain extraction results, at Jaccard Index of 0.956 +/- 0.010 on total 340 subjects under 6-fold cross validation, compared to those by the BET and BSE even using their best parameter combinations.

  5. Algorithms for optimization of branching gravity-driven water networks

    Directory of Open Access Journals (Sweden)

    I. Dardani

    2018-05-01

    Full Text Available The design of a water network involves the selection of pipe diameters that satisfy pressure and flow requirements while considering cost. A variety of design approaches can be used to optimize for hydraulic performance or reduce costs. To help designers select an appropriate approach in the context of gravity-driven water networks (GDWNs), this work assesses three cost-minimization algorithms on six moderate-scale GDWN test cases. Two algorithms, a backtracking algorithm and a genetic algorithm, use a set of discrete pipe diameters, while a new calculus-based algorithm produces a continuous-diameter solution which is mapped onto a discrete-diameter set. The backtracking algorithm finds the global optimum for all but the largest of cases tested, for which its long runtime makes it an infeasible option. The calculus-based algorithm's discrete-diameter solution produced slightly higher-cost results but was more scalable to larger network cases. Furthermore, the new calculus-based algorithm's continuous-diameter and mapped solutions provided lower and upper bounds, respectively, on the discrete-diameter global optimum cost, where the mapped solutions were typically within one diameter size of the global optimum. The genetic algorithm produced solutions even closer to the global optimum with consistently short run times, although slightly higher solution costs were seen for the larger network cases tested. The results of this study highlight the advantages and weaknesses of each GDWN design method including closeness to the global optimum, the ability to prune the solution space of infeasible and suboptimal candidates without missing the global optimum, and algorithm run time. We also extend an existing closed-form model of Jones (2011) to include minor losses and a more comprehensive two-part cost model, which realistically applies to pipe sizes that span a broad range typical of GDWNs of interest in this work, and for smooth and commercial steel roughness values.

  6. Algorithms for optimization of branching gravity-driven water networks

    Science.gov (United States)

    Dardani, Ian; Jones, Gerard F.

    2018-05-01

    The design of a water network involves the selection of pipe diameters that satisfy pressure and flow requirements while considering cost. A variety of design approaches can be used to optimize for hydraulic performance or reduce costs. To help designers select an appropriate approach in the context of gravity-driven water networks (GDWNs), this work assesses three cost-minimization algorithms on six moderate-scale GDWN test cases. Two algorithms, a backtracking algorithm and a genetic algorithm, use a set of discrete pipe diameters, while a new calculus-based algorithm produces a continuous-diameter solution which is mapped onto a discrete-diameter set. The backtracking algorithm finds the global optimum for all but the largest of cases tested, for which its long runtime makes it an infeasible option. The calculus-based algorithm's discrete-diameter solution produced slightly higher-cost results but was more scalable to larger network cases. Furthermore, the new calculus-based algorithm's continuous-diameter and mapped solutions provided lower and upper bounds, respectively, on the discrete-diameter global optimum cost, where the mapped solutions were typically within one diameter size of the global optimum. The genetic algorithm produced solutions even closer to the global optimum with consistently short run times, although slightly higher solution costs were seen for the larger network cases tested. The results of this study highlight the advantages and weaknesses of each GDWN design method including closeness to the global optimum, the ability to prune the solution space of infeasible and suboptimal candidates without missing the global optimum, and algorithm run time. We also extend an existing closed-form model of Jones (2011) to include minor losses and a more comprehensive two-part cost model, which realistically applies to pipe sizes that span a broad range typical of GDWNs of interest in this work, and for smooth and commercial steel roughness values.

  7. Optimized candidal biofilm microtiter assay

    NARCIS (Netherlands)

    Krom, Bastiaan P.; Cohen, Jesse B.; Feser, Gail E. McElhaney; Cihlar, Ronald L.

    Microtiter-based candidal biofilm formation assays are commonly used. Here we describe the analysis of factors influencing the development of candidal biofilms, such as coating with serum, growth medium and pH. The data reported here show that optimal candidal biofilm formation is obtained when

  8. Global synchronization algorithms for the Intel iPSC/860

    Science.gov (United States)

    Seidel, Steven R.; Davis, Mark A.

    1992-01-01

    In a distributed memory multicomputer that has no global clock, global processor synchronization can only be achieved through software. Global synchronization algorithms are used in tridiagonal systems solvers, CFD codes, sequence comparison algorithms, and sorting algorithms. They are also useful for event simulation, debugging, and for solving mutual exclusion problems. For the Intel iPSC/860 in particular, global synchronization can be used to ensure the most effective use of the communication network for operations such as the shift, where each processor in a one-dimensional array or ring concurrently sends a message to its right (or left) neighbor. Three global synchronization algorithms are considered for the iPSC/860: the gsync() primitive provided by Intel, the PICL primitive sync0(), and a new recursive doubling synchronization (RDS) algorithm. The performance of these algorithms is compared to the performance predicted by communication models of both the long and forced message protocols. Measurements of the cost of shift operations preceded by global synchronization show that the RDS algorithm always synchronizes the nodes more precisely and costs only slightly more than the other two algorithms.
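
    The recursive doubling exchange pattern can be sketched as follows, assuming a power-of-two number of nodes and using Python threads and queues in place of the iPSC/860 send/receive primitives; this is an illustration of the pattern, not the paper's implementation.

    ```python
    # Thread-based sketch of a recursive doubling synchronization (RDS) barrier
    # for a power-of-two number of "nodes"; queues stand in for the iPSC/860
    # message-passing primitives, so this only illustrates the exchange pattern.
    import threading
    import queue

    P = 8  # number of simulated nodes (assumed to be a power of two)
    mailboxes = [[queue.Queue() for _ in range(P)] for _ in range(P)]  # [dst][src]

    def rds_barrier(rank):
        stages = P.bit_length() - 1          # log2(P) exchange stages
        for s in range(stages):
            partner = rank ^ (1 << s)        # partner at distance 2^s
            mailboxes[partner][rank].put(s)  # "send" a token to the partner
            mailboxes[rank][partner].get()   # "receive" the partner's token
        # After all stages, every rank has (transitively) heard from every
        # other rank, so all of them have reached the barrier.

    def worker(rank):
        rds_barrier(rank)
        print(f"rank {rank} passed the barrier")

    threads = [threading.Thread(target=worker, args=(r,)) for r in range(P)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    ```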

  9. Optimum Performance-Based Seismic Design Using a Hybrid Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    S. Talatahari

    2014-01-01

    Full Text Available A hybrid optimization method is presented for the optimum seismic design of steel frames considering four performance levels. These performance levels are considered to determine the optimum design of structures to reduce the structural cost. A pushover analysis of steel building frameworks subject to equivalent-static earthquake loading is utilized. The algorithm is based on the concepts of the charged system search, in which each agent is affected by local and global best positions stored in the charged memory, considering the governing laws of electrical physics. Comparison of the results of the hybrid algorithm with those of other metaheuristic algorithms shows the efficiency of the hybrid algorithm.

  10. Identifying Novel Candidate Genes Related to Apoptosis from a Protein-Protein Interaction Network

    Directory of Open Access Journals (Sweden)

    Baoman Wang

    2015-01-01

    Full Text Available Apoptosis is the process of programmed cell death (PCD that occurs in multicellular organisms. This process of normal cell death is required to maintain the balance of homeostasis. In addition, some diseases, such as obesity, cancer, and neurodegenerative diseases, can be cured through apoptosis, which produces few side effects. An effective comprehension of the mechanisms underlying apoptosis will be helpful to prevent and treat some diseases. The identification of genes related to apoptosis is essential to uncover its underlying mechanisms. In this study, a computational method was proposed to identify novel candidate genes related to apoptosis. First, protein-protein interaction information was used to construct a weighted graph. Second, a shortest path algorithm was applied to the graph to search for new candidate genes. Finally, the obtained genes were filtered by a permutation test. As a result, 26 genes were obtained, and we discuss their likelihood of being novel apoptosis-related genes by collecting evidence from published literature.
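
    A rough sketch of the shortest-path step on a toy weighted graph is given below; the actual PPI weighting scheme and the permutation-test filter are not reproduced, and the gene names and edge weights are placeholders.

    ```python
    # Sketch of the shortest-path idea: genes lying on shortest paths between
    # known apoptosis genes in a weighted PPI graph become candidates.
    import heapq
    from itertools import combinations

    def dijkstra(graph, source):
        """graph: {node: [(neighbour, weight), ...]} -> (dist, predecessor) maps."""
        dist, prev = {source: 0.0}, {}
        pq = [(0.0, source)]
        while pq:
            d, u = heapq.heappop(pq)
            if d > dist.get(u, float("inf")):
                continue
            for v, w in graph.get(u, []):
                nd = d + w
                if nd < dist.get(v, float("inf")):
                    dist[v], prev[v] = nd, u
                    heapq.heappush(pq, (nd, v))
        return dist, prev

    def path_nodes(prev, source, target):
        node, path = target, []
        while node != source:
            path.append(node)
            node = prev.get(node)
            if node is None:
                return []          # target unreachable
        return path[::-1]

    def candidate_genes(graph, known_genes):
        candidates = set()
        for a, b in combinations(known_genes, 2):
            _, prev = dijkstra(graph, a)
            inner = path_nodes(prev, a, b)[:-1]   # drop the endpoint itself
            candidates.update(g for g in inner if g not in known_genes)
        return candidates

    # Toy weighted PPI graph (edge weight = 1 - interaction confidence, say).
    ppi = {"CASP3": [("BCL2", 0.2), ("GENE_X", 0.4)],
           "BCL2": [("CASP3", 0.2), ("GENE_X", 0.3), ("TP53", 0.5)],
           "GENE_X": [("CASP3", 0.4), ("BCL2", 0.3), ("TP53", 0.2)],
           "TP53": [("BCL2", 0.5), ("GENE_X", 0.2)]}
    print(candidate_genes(ppi, {"CASP3", "TP53"}))
    ```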

  11. On the comparison of perturbation-iteration algorithm and residual power series method to solve fractional Zakharov-Kuznetsov equation

    Science.gov (United States)

    Şenol, Mehmet; Alquran, Marwan; Kasmaei, Hamed Daei

    2018-06-01

    In this paper, we present an analytic-approximate solution of the time-fractional Zakharov-Kuznetsov equation. This model describes the behavior of weakly nonlinear ion acoustic waves in a plasma bearing cold ions and hot isothermal electrons in the presence of a uniform magnetic field. Basic definitions of fractional derivatives are given in the Caputo sense. The perturbation-iteration algorithm (PIA) and the residual power series method (RPSM) are applied to solve this equation with success. The convergence analysis is also presented for both methods. Numerical results are given and then compared with the exact solutions. Comparison of the results reveals that both methods are competitive, powerful, reliable, simple to use and ready to be applied to a wide range of fractional partial differential equations.

  12. Research on AHP decision algorithms based on BP algorithm

    Science.gov (United States)

    Ma, Ning; Guan, Jianhe

    2017-10-01

    Decision making is the thinking activity by which people choose or judge, and scientific decision-making has always been an active research topic. The Analytic Hierarchy Process (AHP) is a simple and practical multi-criteria, multi-objective decision-making method that combines quantitative and qualitative analysis and can express and calculate subjective judgments in numerical form. In decision analysis using the AHP method, the rationality of the pairwise comparison judgment matrix has a great influence on the decision result. However, when dealing with real problems, the judgment matrix produced by pairwise comparison is often inconsistent, that is, it does not meet the consistency requirements. The BP neural network algorithm is an adaptive nonlinear dynamic system with powerful collective computing and learning ability. It can refine the data by constantly modifying the weights and thresholds of the network to minimize the mean square error. In this paper, the BP algorithm is used to deal with the consistency of the pairwise comparison judgment matrix of the AHP.
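
    For context, the consistency check that motivates the correction can be sketched as below: the consistency ratio is obtained from the principal eigenvalue of the pairwise comparison judgment matrix using Saaty's random consistency indices. The BP-network adjustment itself is not shown, and the example matrix is illustrative.

    ```python
    # Sketch of the standard AHP consistency check (Saaty); the BP-network
    # correction step from the paper is not reproduced here.
    import numpy as np

    RI = {1: 0.0, 2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12,
          6: 1.24, 7: 1.32, 8: 1.41, 9: 1.45}   # random consistency indices

    def consistency_ratio(A):
        A = np.asarray(A, dtype=float)
        n = A.shape[0]
        lam_max = np.linalg.eigvals(A).real.max()   # principal eigenvalue
        ci = (lam_max - n) / (n - 1)                # consistency index
        return ci / RI[n]

    # A slightly inconsistent 3x3 judgment matrix (reciprocal by construction).
    A = [[1,   3,   5],
         [1/3, 1,   2],
         [1/5, 1/2, 1]]
    cr = consistency_ratio(A)
    print(f"CR = {cr:.4f}", "(acceptable)" if cr < 0.1 else "(needs adjustment)")
    ```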

  13. A new scheduling algorithm for parallel sparse LU factorization with static pivoting

    Energy Technology Data Exchange (ETDEWEB)

    Grigori, Laura; Li, Xiaoye S.

    2002-08-20

    In this paper we present a static scheduling algorithm for parallel sparse LU factorization with static pivoting. The algorithm is divided into mapping and scheduling phases, using the symmetric pruned graphs of L' and U to represent dependencies. The scheduling algorithm is designed for driving the parallel execution of the factorization on a distributed-memory architecture. Experimental results and comparisons with SuperLU{_}DIST are reported after applying this algorithm on real world application matrices on an IBM SP RS/6000 distributed memory machine.

  14. Loading pattern optimization using ant colony algorithm

    International Nuclear Information System (INIS)

    Hoareau, Fabrice

    2008-01-01

    Electricite de France (EDF) operates 58 nuclear power plants (NPP), of the Pressurized Water Reactor type. The loading pattern optimization of these NPP is currently done by EDF expert engineers. Within this framework, EDF R and D has developed automatic optimization tools that assist the experts. LOOP is an industrial tool, developed by EDF R and D and based on a simulated annealing algorithm. In order to improve the results of such automatic tools, new optimization methods have to be tested. Ant Colony Optimization (ACO) algorithms are recent methods that have given very good results on combinatorial optimization problems. In order to evaluate the performance of such methods on loading pattern optimization, direct comparisons between LOOP and a mock-up based on the Max-Min Ant System algorithm (a particular variant of ACO algorithms) were made on realistic test-cases. It is shown that the results obtained by the ACO mock-up are very similar to those of LOOP. Future research will consist in improving these encouraging results by using parallelization and by hybridizing the ACO algorithm with local search procedures. (author)

  15. Performance Comparison of Superresolution Array Processing Algorithms. Revised

    National Research Council Canada - National Science Library

    Barabell, A

    1998-01-01

    ... have been documented in the literature, no systematic comparison has heretofore been undertaken. The general approach of the current study is to simulate a sequence of increasingly more general...

  16. Advanced signal separation and recovery algorithms for digital x-ray spectroscopy

    International Nuclear Information System (INIS)

    Mahmoud, Imbaby I.; El-Tokhy, Mohamed S.

    2015-01-01

    X-ray spectroscopy is widely used for in-situ sample analysis. Therefore, accurate spectrum drawing and assessment for x-ray spectroscopy is the main scope of this paper. A lithium-drifted silicon Si(Li) detector, cooled with nitrogen, is used for signal extraction. The resolution of the ADC is 12 bits and its sampling rate is 5 MHz. Hence, different algorithms are implemented. These algorithms were run on a personal computer with an Intel Core i5-3470 CPU at 3.20 GHz. The algorithms comprise signal preprocessing, signal separation and recovery algorithms, and a spectrum drawing algorithm. Moreover, statistical measurements are used to evaluate these algorithms. Signal preprocessing based on DC-offset correction and signal de-noising is performed. DC-offset correction was done using the minimum value of the radiation signal, while signal de-noising was implemented using a fourth-order finite impulse response (FIR) filter, a linear-phase least-squares FIR filter, complex wavelet transforms (CWT) and Kalman filter methods. We noticed that the Kalman filter achieves a larger peak signal-to-noise ratio (PSNR) and lower error than the other methods, whereas the CWT takes a much longer execution time. Moreover, three different algorithms that allow correction of x-ray signal overlapping are presented: a 1D non-derivative peak search algorithm, a second derivative peak search algorithm and an extrema algorithm. Additionally, the effect of the signal separation and recovery algorithms on spectrum drawing is measured and a comparison between these algorithms is presented. The obtained results confirm that the second derivative peak search algorithm as well as the extrema algorithm have very small errors in comparison with the 1D non-derivative peak search algorithm; however, the second derivative peak search algorithm takes a much longer execution time. Therefore, the extrema algorithm gives better results than the other algorithms. It has the advantage of recovering and
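
    A minimal, ad hoc sketch of a second-derivative peak search on a synthetic spectrum is shown below; it is not the authors' implementation, and the smoothing width, threshold and minimum peak separation are assumptions chosen for the example.

    ```python
    # Sketch of a second-derivative peak search: overlapping peaks appear as
    # separate negative minima of the smoothed second derivative.
    import numpy as np

    def second_derivative_peaks(signal, smooth_width=7, threshold=-0.5, min_sep=10):
        kernel = np.ones(smooth_width) / smooth_width
        smoothed = np.convolve(signal, kernel, mode="same")
        d2 = np.gradient(np.gradient(smoothed))
        peaks = []
        for i in range(1, len(d2) - 1):
            # a strong local minimum of the second derivative marks a peak centre
            if d2[i] < threshold and d2[i] <= d2[i - 1] and d2[i] <= d2[i + 1]:
                if not peaks or i - peaks[-1] >= min_sep:
                    peaks.append(i)
        return peaks

    # Synthetic spectrum: two overlapping Gaussian peaks plus noise.
    channels = np.arange(512)
    spectrum = (100 * np.exp(-((channels - 200) ** 2) / (2 * 8.0 ** 2))
                + 70 * np.exp(-((channels - 220) ** 2) / (2 * 8.0 ** 2))
                + np.random.default_rng(0).normal(0.0, 0.5, channels.size))
    print("peak channels:", second_derivative_peaks(spectrum))
    ```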

  17. Assessment of various supervised learning algorithms using different performance metrics

    Science.gov (United States)

    Susheel Kumar, S. M.; Laxkar, Deepak; Adhikari, Sourav; Vijayarajan, V.

    2017-11-01

    Our work presents a comparison of the performance of supervised machine learning algorithms on a binary classification task. The supervised machine learning algorithms taken into consideration are Support Vector Machine (SVM), Decision Tree (DT), K Nearest Neighbour (KNN), Naïve Bayes (NB) and Random Forest (RF). This paper focuses on comparing the performance of the above-mentioned algorithms on one binary classification task by analysing metrics such as accuracy, F-measure, G-measure, precision, misclassification rate, false positive rate, true positive rate, specificity and prevalence.
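
    The listed metrics can all be derived from a binary confusion matrix, as sketched below; the G-measure is taken here as the geometric mean of precision and recall, which is one common definition and may differ from the one used in the paper, and the counts are invented.

    ```python
    # Sketch of the evaluation metrics, computed from binary confusion counts.
    import math

    def binary_metrics(tp, fp, tn, fn):
        total = tp + fp + tn + fn
        precision = tp / (tp + fp)
        tpr = tp / (tp + fn)                      # recall / sensitivity
        fpr = fp / (fp + tn)
        accuracy = (tp + tn) / total
        return {
            "Accuracy": accuracy,
            "Misclassification Rate": 1 - accuracy,
            "Precision": precision,
            "True Positive Rate": tpr,
            "False Positive Rate": fpr,
            "Specificity": tn / (tn + fp),
            "Prevalence": (tp + fn) / total,
            "F-Measure": 2 * precision * tpr / (precision + tpr),
            "G-Measure": math.sqrt(precision * tpr),
        }

    for name, value in binary_metrics(tp=40, fp=10, tn=35, fn=15).items():
        print(f"{name:>22s}: {value:.3f}")
    ```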

  18. A novel clustering algorithm based on quantum games

    International Nuclear Information System (INIS)

    Li Qiang; He Yan; Jiang Jingping

    2009-01-01

    Enormous successes have been made by quantum algorithms during the last decade. In this paper, we combine the quantum game with the problem of data clustering, and then develop a quantum-game-based clustering algorithm, in which data points in a dataset are considered as players who can make decisions and implement quantum strategies in quantum games. After each round of a quantum game, each player's expected payoff is calculated. Later, he uses a link-removing-and-rewiring (LRR) function to change his neighbors and adjust the strength of links connecting to them in order to maximize his payoff. Further, algorithms are discussed and analyzed in two cases of strategies, two payoff matrixes and two LRR functions. Consequently, the simulation results have demonstrated that data points in datasets are clustered reasonably and efficiently, and the clustering algorithms have fast rates of convergence. Moreover, the comparison with other algorithms also provides an indication of the effectiveness of the proposed approach.

  19. Motion Estimation Using the Firefly Algorithm in Ultrasonic Image Sequence of Soft Tissue

    Directory of Open Access Journals (Sweden)

    Chih-Feng Chao

    2015-01-01

    Full Text Available Ultrasonic image sequences of soft tissue are widely used in disease diagnosis; however, speckle noise usually degrades the image quality. These images typically have a low signal-to-noise ratio, which makes traditional motion estimation algorithms unsuitable for measuring the motion vectors. In this paper, a new motion estimation algorithm is developed for assessing the velocity field of soft tissue in a sequence of ultrasonic B-mode images. The proposed iterative firefly algorithm (IFA) searches a small set of candidate points to obtain the optimal motion vector, and is then compared to the traditional iterative full search algorithm (IFSA) via a series of experiments on in vivo ultrasonic image sequences. The experimental results show that the IFA can estimate the motion vector with better efficiency and almost equal estimation quality compared to the traditional IFSA method.

  20. Asymmetric intimacy and algorithm for detecting communities in bipartite networks

    Science.gov (United States)

    Wang, Xingyuan; Qin, Xiaomeng

    2016-11-01

    In this paper, an algorithm to choose a good partition in bipartite networks is proposed. Bipartite networks are of considerable theoretical significance and have broad application prospects. In view of the distinctive structure of bipartite networks, our method defines two parameters to capture the relationships between nodes of the same type and between heterogeneous nodes, respectively. Moreover, our algorithm employs a new method of finding and expanding the core communities in bipartite networks. The two kinds of nodes are handled separately and then merged to obtain the sub-communities. After that, the final communities are found according to the merging rule. The proposed algorithm has been simulated on real-world and artificial networks, and the results verify the accuracy and reliability of the intimacy parameters used by our algorithm. Finally, comparisons with similar algorithms show that the proposed algorithm has better performance.

  1. DESIGN AND EXAMINATION OF ALGORITHMS FOR SOLVING THE KNAPSACK PROBLEM

    Directory of Open Access Journals (Sweden)

    S. Kantsedal

    2015-07-01

    Full Text Available The use of branch-and-bound methods as well as dynamic programming methods for solving the knapsack problem is substantiated. The main concepts are set out, and the development of methods and algorithms for solving the above-specified problem is described. Recommendations on the practical application of the constructed algorithms, based on their experimental investigation and a comparison of their characteristics, are presented.
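
    As an illustration of one of the two approaches discussed, a minimal dynamic programming solution of the 0/1 knapsack problem is sketched below (the branch-and-bound variant is not shown); the instance is the usual textbook example.

    ```python
    # Sketch of the classic O(n * capacity) dynamic programming table for the
    # 0/1 knapsack problem, with backtracking to recover the chosen items.
    def knapsack_dp(values, weights, capacity):
        n = len(values)
        best = [[0] * (capacity + 1) for _ in range(n + 1)]
        for i in range(1, n + 1):
            v, w = values[i - 1], weights[i - 1]
            for c in range(capacity + 1):
                best[i][c] = best[i - 1][c]                    # skip item i-1
                if w <= c:
                    best[i][c] = max(best[i][c], best[i - 1][c - w] + v)  # take it
        # Backtrack to recover the chosen items.
        chosen, c = [], capacity
        for i in range(n, 0, -1):
            if best[i][c] != best[i - 1][c]:
                chosen.append(i - 1)
                c -= weights[i - 1]
        return best[n][capacity], sorted(chosen)

    value, items = knapsack_dp(values=[60, 100, 120], weights=[10, 20, 30], capacity=50)
    print(value, items)   # 220, items 1 and 2
    ```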

  2. Issue-Advocacy versus Candidate Advertising: Effects on Candidate Preferences and Democratic Process.

    Science.gov (United States)

    Pfau, Michael; Holbert, R. Lance; Szabo, Erin Alison; Kaminski, Kelly

    2002-01-01

    Examines the influence of soft-money-sponsored issue-advocacy advertising in U.S. House and Senate campaigns, comparing its effects against candidate-sponsored positive advertising and contrast advertising on viewers' candidate preferences and on their attitude that reflect democratic values. Reveals no main effects for advertising approach on…

  3. Application of Heuristic and Metaheuristic Algorithms in Solving Constrained Weber Problem with Feasible Region Bounded by Arcs

    Directory of Open Access Journals (Sweden)

    Igor Stojanović

    2017-01-01

    Full Text Available The continuous planar facility location problem with the connected region of feasible solutions bounded by arcs is a particular case of the constrained Weber problem. This problem is a continuous optimization problem which has a nonconvex feasible set of constraints. This paper suggests appropriate modifications of four metaheuristic algorithms which are defined with the aim of solving this type of nonconvex optimization problems. Also, a comparison of these algorithms to each other as well as to the heuristic algorithm is presented. The artificial bee colony algorithm, firefly algorithm, and their recently proposed improved versions for constrained optimization are appropriately modified and applied to the case study. The heuristic algorithm based on modified Weiszfeld procedure is also implemented for the purpose of comparison with the metaheuristic approaches. Obtained numerical results show that metaheuristic algorithms can be successfully applied to solve the instances of this problem of up to 500 constraints. Among these four algorithms, the improved version of artificial bee algorithm is the most efficient with respect to the quality of the solution, robustness, and the computational efficiency.
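
    For reference, the classical (unconstrained) Weiszfeld iteration on which the heuristic is based is sketched below; the handling of the arc-bounded feasible region and the authors' modifications are not reproduced, and the demand points are illustrative.

    ```python
    # Sketch of the Weiszfeld procedure for the unconstrained Weber point:
    # an iteratively re-weighted centroid converging to the geometric median.
    import math

    def weiszfeld(points, weights=None, iterations=200, eps=1e-9):
        if weights is None:
            weights = [1.0] * len(points)
        x, y = (sum(p[0] for p in points) / len(points),
                sum(p[1] for p in points) / len(points))      # start at centroid
        for _ in range(iterations):
            num_x = num_y = denom = 0.0
            for (px, py), w in zip(points, weights):
                d = math.hypot(x - px, y - py)
                if d < eps:               # iterate coincides with a demand point
                    return px, py
                num_x += w * px / d
                num_y += w * py / d
                denom += w / d
            x, y = num_x / denom, num_y / denom
        return x, y

    demand = [(0, 0), (4, 0), (0, 3), (5, 5)]
    print(tuple(round(c, 3) for c in weiszfeld(demand)))
    ```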

  4. Efficient sampling algorithms for Monte Carlo based treatment planning

    International Nuclear Information System (INIS)

    DeMarco, J.J.; Solberg, T.D.; Chetty, I.; Smathers, J.B.

    1998-01-01

    Efficient sampling algorithms are necessary for producing a fast Monte Carlo based treatment planning code. This study evaluates several aspects of a photon-based tracking scheme and the effect of optimal sampling algorithms on the efficiency of the code. Four areas were tested: pseudo-random number generation, generalized sampling of a discrete distribution, sampling from the exponential distribution, and delta scattering as applied to photon transport through a heterogeneous simulation geometry. Generalized sampling of a discrete distribution using the cutpoint method can produce speedup gains of one order of magnitude versus conventional sequential sampling. Photon transport modifications based upon the delta scattering method were implemented and compared with a conventional boundary and collision checking algorithm. The delta scattering algorithm is faster by a factor of six versus the conventional algorithm for a boundary size of 5 mm within a heterogeneous geometry. A comparison of portable pseudo-random number algorithms and exponential sampling techniques is also discussed
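
    The cutpoint method mentioned above can be sketched as follows; the distribution and table size are illustrative and this is not the code evaluated in the study.

    ```python
    # Sketch of the cutpoint method: a small index table lets inverse-CDF
    # sampling of a discrete distribution start its search close to the answer
    # instead of scanning sequentially from the beginning every time.
    import random
    import bisect

    def build_cutpoints(probs, m=None):
        """Return (cdf, cutpoints): cut[j] is the first bin with cdf >= j/m."""
        if m is None:
            m = len(probs)
        cdf, running = [], 0.0
        for p in probs:
            running += p
            cdf.append(running)
        cut = [bisect.bisect_left(cdf, j / m) for j in range(m)]
        return cdf, cut

    def sample(cdf, cut, rng):
        u = rng.random()
        i = cut[int(u * len(cut))]                      # jump near the right bin
        while i < len(cdf) - 1 and cdf[i] < u:          # short linear finish
            i += 1
        return i

    probs = [0.05, 0.25, 0.10, 0.40, 0.20]
    cdf, cut = build_cutpoints(probs, m=8)
    rng = random.Random(1)
    counts = [0] * len(probs)
    for _ in range(100_000):
        counts[sample(cdf, cut, rng)] += 1
    print([c / 100_000 for c in counts])   # should be close to probs
    ```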

  5. A High-Order CFS Algorithm for Clustering Big Data

    Directory of Open Access Journals (Sweden)

    Fanyu Bu

    2016-01-01

    Full Text Available With the development of the Internet of Everything, such as the Internet of Things, the Internet of People, and the Industrial Internet, big data is being generated. Clustering is a widely used technique for big data analytics and mining. However, most current algorithms are not effective at clustering the heterogeneous data which is prevalent in big data. In this paper, we propose a high-order CFS algorithm (HOCFS) to cluster heterogeneous data by combining the CFS clustering algorithm and the dropout deep learning model, whose functionality rests on three pillars: (i) an adaptive dropout deep learning model to learn features from each type of data, (ii) a feature tensor model to capture the correlations of heterogeneous data, and (iii) a tensor distance-based high-order CFS algorithm to cluster heterogeneous data. Furthermore, we verify our proposed algorithm on different datasets, by comparison with two other clustering schemes, HOPCM and CFS. The results confirm the effectiveness of the proposed algorithm in clustering heterogeneous data.

  6. Betweenness-based algorithm for a partition scale-free graph

    International Nuclear Information System (INIS)

    Zhang Bai-Da; Wu Jun-Jie; Zhou Jing; Tang Yu-Hua

    2011-01-01

    Many real-world networks are found to be scale-free. However, graph partition technology, as a technology capable of parallel computing, performs poorly when scale-free graphs are provided. The reason for this is that traditional partitioning algorithms are designed for random networks and regular networks, rather than for scale-free networks. Multilevel graph-partitioning algorithms are currently considered to be the state of the art and are used extensively. In this paper, we analyse the reasons why traditional multilevel graph-partitioning algorithms perform poorly and present a new multilevel graph-partitioning paradigm, top-down partitioning, which derives its name from the comparison with the traditional bottom-up partitioning. A new multilevel partitioning algorithm, named the betweenness-based partitioning algorithm, is also presented as an implementation of the top-down partitioning paradigm. An experimental evaluation of seven different real-world scale-free networks shows that the betweenness-based partitioning algorithm significantly outperforms the existing state-of-the-art approaches. (interdisciplinary physics and related areas of science and technology)

  7. Detection algorithm of infrared small target based on improved SUSAN operator

    Science.gov (United States)

    Liu, Xingmiao; Wang, Shicheng; Zhao, Jing

    2010-10-01

    Methods of detecting small moving targets in infrared image sequences that contain moving nuisance objects and background noise are analyzed in this paper. A novel infrared small target detection algorithm based on an improved SUSAN operator is put forward. The algorithm selects two templates for infrared small target detection: one larger than the small target point size and one equal to the small target point size. First, the algorithm uses the big template to calculate the USAN of each pixel in the image and detect small targets, image edges and isolated noise pixels. Then the algorithm uses the other template to calculate the USAN of the pixels detected in the first step and adapts the principles of the SUSAN algorithm to the characteristics of small targets, so that the algorithm detects only small targets and is not sensitive to image edge pixels and isolated noise pixels. In this way the interference from image edges and isolated noise points is removed and the candidate target points can be identified. Finally, the target is detected by exploiting the continuity and consistency of target movement. The experimental results indicate that the improved SUSAN detection algorithm can quickly and effectively detect infrared small targets.

  8. Image Encryption Using a Lightweight Stream Encryption Algorithm

    Directory of Open Access Journals (Sweden)

    Saeed Bahrami

    2012-01-01

    Full Text Available Security of the multimedia data including image and video is one of the basic requirements for the telecommunications and computer networks. In this paper, we consider a simple and lightweight stream encryption algorithm for image encryption, and a series of tests are performed to confirm suitability of the described encryption algorithm. These tests include visual test, histogram analysis, information entropy, encryption quality, correlation analysis, differential analysis, and performance analysis. Based on this analysis, it can be concluded that the present algorithm in comparison to A5/1 and W7 stream ciphers has the same security level, is better in terms of the speed of performance, and is used for real-time applications.

  9. DATA SECURITY IN LOCAL AREA NETWORK BASED ON FAST ENCRYPTION ALGORITHM

    Directory of Open Access Journals (Sweden)

    G. Ramesh

    2010-06-01

    Full Text Available Hacking is one of the greatest problems in wireless local area networks. Many algorithms have been used to prevent outside attacks from eavesdropping or from preventing data being transferred to the end-user safely and correctly. In this paper, a new symmetric encryption algorithm is proposed that prevents outside attacks. The new algorithm avoids key exchange between users and reduces the time taken for encryption and decryption. It operates at a high data rate in comparison with the Data Encryption Standard (DES), Triple DES (TDES), Advanced Encryption Standard (AES-256), and RC6 algorithms. The new algorithm has been applied successfully to both text files and voice messages.

  10. Dynamic contrast-enhanced MRI of the prostate. Comparison of two different post-processing algorithms

    International Nuclear Information System (INIS)

    Beyersdorff, Dirk; Franiel, T.; Luedemann, L.; Dietz, E.; Galler, D.; Marchot, P.

    2011-01-01

    Purpose: To evaluate the usefulness of a commercially available post-processing software tool for detecting prostate cancer on dynamic contrast-enhanced magnetic resonance imaging (MRI) and to compare the results to those obtained with a custom-made post-processing algorithm already tested under clinical conditions. Materials and Methods: Forty-eight patients with proven prostate cancer were examined by standard MRI supplemented by dynamic contrast-enhanced dual susceptibility contrast (DCE-DSC) MRI prior to prostatectomy. A custom-made post-processing algorithm was used to analyze the MRI data sets and the results were compared to those obtained using a post-processing algorithm from Invivo Corporation (Dyna CAD for Prostate) applied to dynamic T1-weighted images. Histology was used as the gold standard. Results: The sensitivity for prostate cancer detection was 78 % for the custom-made algorithm and 60 % for the commercial algorithm and the specificity was 79 % and 82 %, respectively. The accuracy was 79 % for our algorithm and 77.5 % for the commercial software tool. The chi-square test (McNemar-Bowker test) yielded no significant differences between the two tools (p = 0.06). Conclusion: The two investigated post-processing algorithms did not differ in terms of prostate cancer detection. The commercially available software tool allows reliable and fast analysis of dynamic contrast-enhanced MRI for the detection of prostate cancer. (orig.)

  11. Modern algorithms for large sparse eigenvalue problems

    International Nuclear Information System (INIS)

    Meyer, A.

    1987-01-01

    The volume is written for mathematicians interested in (numerical) linear algebra and in the solution of large sparse eigenvalue problems, as well as for specialists in engineering, who use the considered algorithms in the investigation of eigenoscillations of structures, in reactor physics, etc. Some variants of the algorithms based on the idea of a gradient-type direction of movement are presented and their convergence properties are discussed. From this, a general strategy for the direct use of preconditionings for the eigenvalue problem is derived. In this new approach the necessity of the solution of large linear systems is entirely avoided. Hence, these methods represent a new alternative to some other modern eigenvalue algorithms, as they show a slightly slower convergence on the one hand but essentially lower numerical and data processing problems on the other hand. A brief description and comparison of some well-known methods (i.e. simultaneous iteration, Lanczos algorithm) completes this volume. (author)

  12. Cryptanalysis on a modified Baptista-type cryptosystem with chaotic masking algorithm

    International Nuclear Information System (INIS)

    Chen Yong; Liao Xiaofeng

    2005-01-01

    Based on a chaotic masking algorithm, an enhanced Baptista-type cryptosystem was proposed by Li et al. to resist all known attacks [S. Li, X. Mou, Z. Ji, J. Zhang, Y. Cai, Phys. Lett. A 307 (2003) 22; S. Li, G. Chen, K.-W. Wong, X. Mou, Y. Cai, Phys. Lett. A 332 (2004) 368]. In this Letter, we show that the second class of bit extracting function in [S. Li, X. Mou, Z. Ji, J. Zhang, Y. Cai, Phys. Lett. A 307 (2003) 22] still leaks partial information about the current chaotic state and reduces the security of the cryptosystem. Therefore, this type of bit extracting function is not a good candidate for the masking algorithm

  13. Prioritization of candidate disease genes by topological similarity between disease and protein diffusion profiles.

    Science.gov (United States)

    Zhu, Jie; Qin, Yufang; Liu, Taigang; Wang, Jun; Zheng, Xiaoqi

    2013-01-01

    Identification of gene-phenotype relationships is a fundamental challenge in human health and clinical research. Based on the observation that genes causing the same or similar phenotypes tend to correlate with each other in the protein-protein interaction network, many network-based approaches have been proposed based on different underlying models. A recent comparative study showed that diffusion-based methods achieve state-of-the-art predictive performance. In this paper, a new diffusion-based method is proposed to prioritize candidate disease genes. The diffusion profile of a disease is defined as the stationary distribution over candidate genes given a random walk with restart in which similarities between phenotypes are incorporated. Candidate disease genes are then prioritized by comparing their diffusion profiles with that of the disease. Finally, the effectiveness of our method is demonstrated through leave-one-out cross-validation against control genes from artificial linkage intervals and randomly chosen genes. A comparative study showed that our method achieves improved performance compared to some classical diffusion-based methods. To further illustrate our method, we used our algorithm to predict new causative genes for 16 multifactorial diseases, including prostate cancer and Alzheimer's disease, and the top predictions were in good agreement with literature reports. Our study indicates that integration of multiple information sources, especially phenotype similarity profile data, and introduction of a global similarity measure between disease and gene diffusion profiles are helpful for prioritizing candidate disease genes. Programs and data are available upon request.
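
    A rough sketch of a diffusion profile as the stationary distribution of a random walk with restart is given below, with Pearson correlation as a simple stand-in for the profile comparison; the incorporation of phenotype similarities and the paper's actual similarity measure are not reproduced, and the network is a toy example.

    ```python
    # Sketch of a diffusion profile: the stationary distribution of a random
    # walk with restart (RWR) on a protein network, compared by correlation.
    import numpy as np

    def rwr_profile(adj, restart_idx, r=0.5, tol=1e-10, max_iter=10_000):
        """Stationary distribution of an RWR restarting uniformly over restart_idx."""
        adj = np.asarray(adj, dtype=float)
        W = adj / adj.sum(axis=0, keepdims=True)    # column-normalised transitions
        e = np.zeros(adj.shape[0])
        e[list(restart_idx)] = 1.0 / len(restart_idx)
        p = e.copy()
        for _ in range(max_iter):
            p_new = (1 - r) * W @ p + r * e
            if np.abs(p_new - p).sum() < tol:
                break
            p = p_new
        return p

    # Toy 5-protein network (symmetric adjacency matrix).
    A = np.array([[0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0],
                  [1, 1, 0, 0, 0],
                  [0, 1, 0, 0, 1],
                  [0, 0, 0, 1, 0]], dtype=float)

    disease_profile = rwr_profile(A, restart_idx=[0, 2])   # two known disease genes
    for gene in range(5):
        gene_profile = rwr_profile(A, restart_idx=[gene])
        score = np.corrcoef(disease_profile, gene_profile)[0, 1]
        print(f"gene {gene}: similarity {score:+.3f}")
    ```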

  14. CSA: An efficient algorithm to improve circular DNA multiple alignment

    Directory of Open Access Journals (Sweden)

    Pereira Luísa

    2009-07-01

    Full Text Available Abstract Background The comparison of homologous sequences from different species is an essential approach to reconstruct the evolutionary history of species and of the genes they harbour in their genomes. Several complete mitochondrial and nuclear genomes are now available, increasing the importance of using multiple sequence alignment algorithms in comparative genomics. MtDNA has long been used in phylogenetic analysis and errors in the alignments can lead to errors in the interpretation of evolutionary information. Although a large number of multiple sequence alignment algorithms have been proposed to date, they all deal with linear DNA and cannot directly handle circular DNA. Researchers interested in aligning circular DNA sequences must first rotate them to the "right" place using an essentially manual process, before they can use multiple sequence alignment tools. Results In this paper we propose an efficient algorithm that identifies the most interesting region to cut circular genomes in order to improve phylogenetic analysis when using standard multiple sequence alignment algorithms. This algorithm identifies the largest chain of non-repeated longest subsequences common to a set of circular mitochondrial DNA sequences. All the sequences are then rotated and made linear for multiple alignment purposes. To evaluate the effectiveness of this new tool, three different sets of mitochondrial DNA sequences were considered. Other tests considering randomly rotated sequences were also performed. The software package Arlequin was used to evaluate the standard genetic measures of the alignments obtained with and without the use of the CSA algorithm with two well-known multiple alignment algorithms, the CLUSTALW and the MAVID tools, and also the visualization tool SinicView. Conclusion The results show that a circularization and rotation pre-processing step significantly improves the efficiency of publicly available multiple sequence alignment

  15. Comparing Evolutionary Strategies on a Biobjective Cultural Algorithm

    Directory of Open Access Journals (Sweden)

    Carolina Lagos

    2014-01-01

    Full Text Available Evolutionary algorithms have been widely used to solve large and complex optimisation problems. Cultural algorithms (CAs) are evolutionary algorithms that have been used to solve both single and, to a lesser extent, multiobjective optimisation problems. In order to solve these optimisation problems, CAs make use of different strategies such as normative knowledge, historical knowledge, and circumstantial knowledge, among others. In this paper we present a comparison among CAs that make use of different evolutionary strategies; the first one implements historical knowledge, the second one considers circumstantial knowledge, and the third one implements normative knowledge. These CAs are applied to a biobjective uncapacitated facility location problem (BOUFLP), the biobjective version of the well-known uncapacitated facility location problem. To the best of our knowledge, only a few articles have applied evolutionary multiobjective algorithms to the BOUFLP and none of those has focused on the impact of the evolutionary strategy on the algorithm performance. Our biobjective cultural algorithm, called BOCA, obtains important improvements when compared to other well-known evolutionary biobjective optimisation algorithms such as PAES and NSGA-II. The conflicting objective functions considered in this study are cost minimisation and coverage maximisation. Solutions obtained by each algorithm are compared using a hypervolume S metric.

  16. Linking mothers and infants within electronic health records: a comparison of deterministic and probabilistic algorithms.

    Science.gov (United States)

    Baldwin, Eric; Johnson, Karin; Berthoud, Heidi; Dublin, Sascha

    2015-01-01

    To compare probabilistic and deterministic algorithms for linking mothers and infants within electronic health records (EHRs) to support pregnancy outcomes research. The study population was women enrolled in Group Health (Washington State, USA) delivering a liveborn infant from 2001 through 2008 (N = 33,093 deliveries) and infant members born in these years. We linked women to infants by surname, address, and dates of birth and delivery using deterministic and probabilistic algorithms. In a subset previously linked using "gold standard" identifiers (N = 14,449), we assessed each approach's sensitivity and positive predictive value (PPV). For deliveries with no "gold standard" linkage (N = 18,644), we compared the algorithms' linkage proportions. We repeated our analyses in an independent test set of deliveries from 2009 through 2013. We reviewed medical records to validate a sample of pairs apparently linked by one algorithm but not the other (N = 51 or 1.4% of discordant pairs). In the 2001-2008 "gold standard" population, the probabilistic algorithm's sensitivity was 84.1% (95% CI, 83.5-84.7) and PPV 99.3% (99.1-99.4), while the deterministic algorithm had sensitivity 74.5% (73.8-75.2) and PPV 95.7% (95.4-96.0). In the test set, the probabilistic algorithm again had higher sensitivity and PPV. For deliveries in 2001-2008 with no "gold standard" linkage, the probabilistic algorithm found matched infants for 58.3% and the deterministic algorithm, 52.8%. On medical record review, 100% of linked pairs appeared valid. A probabilistic algorithm improved linkage proportion and accuracy compared to a deterministic algorithm. Better linkage methods can increase the value of EHRs for pregnancy outcomes research. Copyright © 2014 John Wiley & Sons, Ltd.

  17. On the mid-infrared variability of candidate eruptive variables (exors): A comparison between Spitzer and WISE data

    Energy Technology Data Exchange (ETDEWEB)

    Antoniucci, S.; Giannini, T.; Li Causi, G.; Lorenzetti, D., E-mail: simone.antoniucci@oa-roma.inaf.it, E-mail: teresa.giannini@oa-roma.inaf.it, E-mail: gianluca.licausi@oa-roma.inaf.it, E-mail: dario.lorenzetti@oa-roma.inaf.it [INAF-Osservatorio Astronomico di Roma, via Frascati 33, I-00040 Monte Porzio (Italy)

    2014-02-10

    Aiming to statistically study the variability in the mid-IR of young stellar objects, we have compared the 3.6, 4.5, and 24 μm Spitzer fluxes of 1478 sources belonging to the C2D (Cores to Disks) legacy program with the WISE fluxes at 3.4, 4.6, and 22 μm. From this comparison, we have selected a robust sample of 34 variable sources. Their variations were classified per spectral Class (according to the widely accepted scheme of Class I/flat/II/III protostars), and per star forming region. On average, the number of variable sources decreases with increasing Class and is definitely higher in Perseus and Ophiuchus than in Chamaeleon and Lupus. According to the paradigm Class ≡ Evolution, the photometric variability can be considered to be a feature more pronounced in less evolved protostars, and, as such, related to accretion processes. Moreover, our statistical findings agree with the current knowledge of star formation activity in different regions. The 34 selected variables were further investigated for similarities with known young eruptive variables, namely the EXors. In particular, we analyzed (1) the shape of the spectral energy distribution, (2) the IR excess over the stellar photosphere, (3) magnitude versus color variations, and (4) output parameters of model fitting. This first systematic search for EXors ends up with 11 bona fide candidates that can be considered as suitable targets for monitoring or future investigations.

  18. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    Science.gov (United States)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.
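
    The stepping-and-bracketing idea can be sketched on a scalar event function as below, using plain bisection for root refinement; the event function, step size and tolerance are illustrative, not those of the described implementation.

    ```python
    # Sketch of stepping + bracketing: march along the trajectory with a fixed
    # step, watch the continuous event function g(t) for a sign change, then
    # bisect the bracketing interval to locate the event time.
    import math

    def locate_events(g, t0, t1, step=60.0, tol=1e-6):
        events = []
        t_prev, g_prev = t0, g(t0)
        t = t0 + step
        while t_prev < t1:
            t = min(t, t1)
            g_curr = g(t)
            if g_prev == 0.0:
                events.append(t_prev)
            elif g_prev * g_curr < 0.0:                 # sign change: root bracketed
                lo, hi, g_lo = t_prev, t, g_prev
                while hi - lo > tol:                    # plain bisection
                    mid = 0.5 * (lo + hi)
                    g_mid = g(mid)
                    if g_lo * g_mid <= 0.0:
                        hi = mid
                    else:
                        lo, g_lo = mid, g_mid
                events.append(0.5 * (lo + hi))
            if t >= t1:
                break
            t_prev, g_prev = t, g_curr
            t += step
        return events

    # Toy event function: positive when "in sunlight", negative when "in shadow".
    g = lambda t: math.cos(2 * math.pi * t / 5400.0) + 0.3    # 90-minute "orbit"
    print([round(t, 1) for t in locate_events(g, 0.0, 10800.0)])
    ```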

  19. A General Event Location Algorithm with Applications to Eclipse and Station Line-of-Sight

    Science.gov (United States)

    Parker, Joel J. K.; Hughes, Steven P.

    2011-01-01

    A general-purpose algorithm for the detection and location of orbital events is developed. The proposed algorithm reduces the problem to a global root-finding problem by mapping events of interest (such as eclipses, station access events, etc.) to continuous, differentiable event functions. A stepping algorithm and a bracketing algorithm are used to detect and locate the roots. Examples of event functions and the stepping/bracketing algorithms are discussed, along with results indicating performance and accuracy in comparison to commercial tools across a variety of trajectories.

  20. Performance of a cavity-method-based algorithm for the prize-collecting Steiner tree problem on graphs

    Science.gov (United States)

    Biazzo, Indaco; Braunstein, Alfredo; Zecchina, Riccardo

    2012-08-01

    We study the behavior of an algorithm derived from the cavity method for the prize-collecting Steiner tree (PCST) problem on graphs. The algorithm is based on the zero temperature limit of the cavity equations and as such is formally simple (a fixed point equation resolved by iteration) and distributed (parallelizable). We provide a detailed comparison with state-of-the-art algorithms on a wide range of existing benchmarks, networks, and random graphs. Specifically, we consider an enhanced derivative of the Goemans-Williamson heuristics and the dhea solver, a branch-and-cut integer linear programming based approach. The comparison shows that the cavity algorithm outperforms the two algorithms in most large instances both in running time and quality of the solution. Finally we prove a few optimality properties of the solutions provided by our algorithm, including optimality under the two postprocessing procedures defined in the Goemans-Williamson derivative and global optimality in some limit cases.

  1. A new hybrid metaheuristic algorithm for wind farm micrositing

    International Nuclear Information System (INIS)

    Massan, S.U.R.; Wagan, A.I.; Shaikh, M.M.

    2017-01-01

    This work focuses on proposing a new algorithm, referred to as HMA (Hybrid Metaheuristic Algorithm), for the solution of the WTO (Wind Turbine Optimization) problem. It is well documented that turbines located behind one another suffer a power loss due to the obstruction of the wind caused by wake effects. It is required to reduce this wake loss by the effective placement of turbines using the new HMA. The HMA is derived from two basic algorithms, i.e. the DEA (Differential Evolution Algorithm) and the FA (Firefly Algorithm). The optimization is carried out on the N.O. Jensen model. The blending of DEA and FA into HMA is discussed, and the new algorithm is implemented to maximize power and minimize cost in a WTO problem. The results of HMA have been compared with the GA (Genetic Algorithm) used in some previous studies. The total power produced and the cost per unit turbine calculated by HMA for a wind farm, and their comparison with past approaches using single algorithms, show that there is a significant advantage in using the HMA compared to single algorithms. The first-time implementation of a new algorithm obtained by blending two single algorithms is a significant step towards learning the behavior of algorithms and the added advantage of using them together. (author)
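
    For context, a small sketch of the N.O. Jensen (Park) wake model that such optimizations evaluate is given below, under simplifying assumptions: a single row of perfectly aligned turbines, full wake overlap, root-sum-square combination of deficits, and illustrative constants rather than the settings used in the paper.

    ```python
    # Sketch of the N.O. Jensen (Park) wake model for an aligned row of
    # turbines; constants and the deficit-combination rule are assumptions
    # chosen for the example, not the paper's configuration.
    import math

    def waked_speeds(x_positions, u0=12.0, ct=0.88, rotor_radius=20.0, k=0.075):
        """Wind speed at each turbine of a single aligned row (positions in metres)."""
        xs = sorted(x_positions)
        speeds = []
        for i, xi in enumerate(xs):
            deficit_sq = 0.0
            for j in range(i):                       # every upstream turbine
                dx = xi - xs[j]
                d = (1 - math.sqrt(1 - ct)) * (rotor_radius / (rotor_radius + k * dx)) ** 2
                deficit_sq += d * d                  # root-sum-square combination
            speeds.append(u0 * (1 - math.sqrt(deficit_sq)))
        return speeds

    def total_power(speeds, rho=1.225, rotor_radius=20.0, cp=0.4):
        area = math.pi * rotor_radius ** 2
        return sum(0.5 * rho * area * cp * u ** 3 for u in speeds)

    row = [0, 500, 1000, 1500]                       # 4 turbines, 500 m apart
    speeds = waked_speeds(row)
    print([round(u, 2) for u in speeds], f"{total_power(speeds) / 1e6:.2f} MW")
    ```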

  2. A New Hybrid Metaheuristic Algorithm for Wind Farm Micrositing

    Directory of Open Access Journals (Sweden)

    SHAFIQ-UR-REHMAN MASSAN

    2017-07-01

    Full Text Available This work focuses on proposing a new algorithm, referred to as HMA (Hybrid Metaheuristic Algorithm), for the solution of the WTO (Wind Turbine Optimization) problem. It is well documented that turbines located behind one another suffer a power loss due to the obstruction of the wind caused by wake effects. It is required to reduce this wake loss by the effective placement of turbines using the new HMA. The HMA is derived from two basic algorithms, i.e. the DEA (Differential Evolution Algorithm) and the FA (Firefly Algorithm). The optimization is carried out on the N.O. Jensen model. The blending of DEA and FA into HMA is discussed, and the new algorithm is implemented to maximize power and minimize cost in a WTO problem. The results of HMA have been compared with the GA (Genetic Algorithm) used in some previous studies. The total power produced and the cost per unit turbine calculated by HMA for a wind farm, and their comparison with past approaches using single algorithms, show that there is a significant advantage in using the HMA compared to single algorithms. The first-time implementation of a new algorithm obtained by blending two single algorithms is a significant step towards learning the behavior of algorithms and the added advantage of using them together.

  3. A Memetic Differential Evolution Algorithm Based on Dynamic Preference for Constrained Optimization Problems

    Directory of Open Access Journals (Sweden)

    Ning Dong

    2014-01-01

    functions are executed, and comparisons with five state-of-the-art algorithms are made. The results illustrate that the proposed algorithm is competitive with and in some cases superior to the compared ones in terms of the quality, efficiency, and the robustness of the obtained results.

  4. An Adaptive Bacterial Foraging Optimization Algorithm with Lifecycle and Social Learning

    Directory of Open Access Journals (Sweden)

    Xiaohui Yan

    2012-01-01

    Full Text Available The Bacterial Foraging Algorithm (BFO) is a recently proposed swarm intelligence algorithm inspired by the foraging and chemotactic behaviour of bacteria. However, its optimization ability is not as good as that of other classic algorithms, as it has several shortcomings. This paper presents an improved BFO algorithm. In the new algorithm, a lifecycle model of bacteria is established. The bacteria can split, die, or migrate dynamically during the foraging process, and the population size varies as the algorithm runs. Social learning is also introduced so that the bacteria tumble towards better directions in the chemotactic steps. In addition, adaptive step lengths are employed in chemotaxis. The new algorithm is named BFOLS and is tested on a set of benchmark functions with dimensions of 2 and 20. Canonical BFO, PSO, and GA algorithms are employed for comparison. Experimental results and statistical analysis show that the BFOLS algorithm offers significant improvements over the original BFO algorithm. Particularly with dimension 20, it has the best performance among the four algorithms.

  5. Improved FHT Algorithms for Fast Computation of the Discrete Hartley Transform

    Directory of Open Access Journals (Sweden)

    M. T. Hamood

    2013-05-01

    Full Text Available In this paper, by using the symmetry properties of the discrete Hartley transform (DHT), an improved radix-2 fast Hartley transform (FHT) algorithm with arithmetic complexity comparable to that of the real-valued fast Fourier transform (RFFT) is developed. It has a simple and regular butterfly structure and possesses the in-place computation property. Furthermore, using the same principles, the development can be extended to more efficient radix-based FHT algorithms. An example of the improved radix-4 FHT algorithm is given to show the validity of the presented method. The arithmetic complexity of the new algorithms is computed and then compared with existing FHT algorithms. The results of these comparisons show that the developed algorithms reduce the number of multiplications and additions considerably.
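
    For reference, the transform being accelerated can be sketched directly: the O(N^2) discrete Hartley transform with the cas kernel, checked against its FFT identity H = Re(X) - Im(X). The radix-2 butterfly structure developed in the paper is not shown.

    ```python
    # Sketch of the direct discrete Hartley transform (DHT) definition, verified
    # against the FFT identity; the fast radix-2 butterfly is not implemented.
    import numpy as np

    def dht_direct(x):
        """H[k] = sum_n x[n] * cas(2*pi*n*k/N), with cas(t) = cos(t) + sin(t)."""
        x = np.asarray(x, dtype=float)
        n = np.arange(x.size)
        k = n.reshape(-1, 1)
        angles = 2 * np.pi * k * n / x.size
        return (np.cos(angles) + np.sin(angles)) @ x

    rng = np.random.default_rng(0)
    x = rng.normal(size=16)
    X = np.fft.fft(x)
    print(np.allclose(dht_direct(x), X.real - X.imag))   # True
    ```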

  6. Fast Algorithm for Computing the Discrete Hartley Transform of Type-II

    Directory of Open Access Journals (Sweden)

    Mounir Taha Hamood

    2016-06-01

    Full Text Available The generalized discrete Hartley transforms (GDHTs) have proved to be an efficient alternative to the generalized discrete Fourier transforms (GDFTs) for real-valued data applications. In this paper, the development of a direct-computation radix-2 decimation-in-time (DIT) algorithm for the fast calculation of the GDHT of type-II (DHT-II) is presented. The mathematical analysis and the implementation of the developed algorithm are derived, showing that this algorithm possesses a regular structure and can be implemented in-place for efficient memory utilization. The performance of the proposed algorithm is analyzed and the computational complexity is calculated for different transform lengths. A comparison between this algorithm and existing DHT-II algorithms shows that it can be considered a good compromise between structural and computational complexity.

  7. An up-to-date comparison of state-of-the-art classification algorithms

    KAUST Repository

    Zhang, Chongsheng

    2017-04-05

    Current benchmark reports of classification algorithms generally concern common classifiers and their variants but do not include many algorithms that have been introduced in recent years. Moreover, important properties such as the dependency on number of classes and features and CPU running time are typically not examined. In this paper, we carry out a comparative empirical study on both established classifiers and more recently proposed ones on 71 data sets originating from different domains, publicly available at UCI and KEEL repositories. The list of 11 algorithms studied includes Extreme Learning Machine (ELM), Sparse Representation based Classification (SRC), and Deep Learning (DL), which have not been thoroughly investigated in existing comparative studies. It is found that Stochastic Gradient Boosting Trees (GBDT) matches or exceeds the prediction performance of Support Vector Machines (SVM) and Random Forests (RF), while being the fastest algorithm in terms of prediction efficiency. ELM also yields good accuracy results, ranking in the top-5, alongside GBDT, RF, SVM, and C4.5 but this performance varies widely across all data sets. Unsurprisingly, top accuracy performers have average or slow training time efficiency. DL is the worst performer in terms of accuracy but second fastest in prediction efficiency. SRC shows good accuracy performance but it is the slowest classifier in both training and testing.

  8. An up-to-date comparison of state-of-the-art classification algorithms

    KAUST Repository

    Zhang, Chongsheng; Liu, Changchang; Zhang, Xiangliang; Almpanidis, George

    2017-01-01

    Current benchmark reports of classification algorithms generally concern common classifiers and their variants but do not include many algorithms that have been introduced in recent years. Moreover, important properties such as the dependency on number of classes and features and CPU running time are typically not examined. In this paper, we carry out a comparative empirical study on both established classifiers and more recently proposed ones on 71 data sets originating from different domains, publicly available at UCI and KEEL repositories. The list of 11 algorithms studied includes Extreme Learning Machine (ELM), Sparse Representation based Classification (SRC), and Deep Learning (DL), which have not been thoroughly investigated in existing comparative studies. It is found that Stochastic Gradient Boosting Trees (GBDT) matches or exceeds the prediction performance of Support Vector Machines (SVM) and Random Forests (RF), while being the fastest algorithm in terms of prediction efficiency. ELM also yields good accuracy results, ranking in the top-5, alongside GBDT, RF, SVM, and C4.5 but this performance varies widely across all data sets. Unsurprisingly, top accuracy performers have average or slow training time efficiency. DL is the worst performer in terms of accuracy but second fastest in prediction efficiency. SRC shows good accuracy performance but it is the slowest classifier in both training and testing.

  9. Prioritizing disease candidate proteins in cardiomyopathy-specific protein-protein interaction networks based on "guilt by association" analysis.

    Directory of Open Access Journals (Sweden)

    Wan Li

    Full Text Available The cardiomyopathies are a group of heart muscle diseases which can be inherited (familial). Identifying potential disease-related proteins is important to understand the mechanisms of cardiomyopathies. Experimental identification of cardiomyopathies is costly and labour-intensive. In contrast, a bioinformatics approach has a competitive advantage over experimental methods. Based on "guilt by association" analysis, we prioritized candidate proteins involved in human cardiomyopathies. We first built weighted human cardiomyopathy-specific protein-protein interaction networks for three subtypes of cardiomyopathies, using the known disease proteins from Online Mendelian Inheritance in Man as seeds. We then developed a method to rank the candidate proteins in the network based on "guilt by association" analysis. It was found that most candidate proteins with high scores shared disease-related pathways with the disease seed proteins. These top-ranked candidate proteins were related to the corresponding disease subtypes and are potential disease-related proteins. Cross-validation and comparison with other methods indicated that our approach could be used for the identification of potentially novel disease proteins, which may provide insights into cardiomyopathy-related mechanisms in a more comprehensive and integrated way.
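
    The record does not give the scoring formula; one common "guilt by association" heuristic, scoring each candidate by the summed weights of its direct interactions with the known disease (seed) proteins, can be sketched as follows. The network and seed set below are hypothetical examples, not data from the study.

    # Score candidates by the total weight of their interactions with seed proteins.
    weighted_ppi = {
        ("MYH7", "TNNT2"): 0.9,
        ("MYH7", "ACTC1"): 0.7,
        ("TTN", "TNNT2"): 0.6,
        ("TTN", "OBSCN"): 0.4,
    }
    seeds = {"MYH7", "TNNT2"}   # hypothetical known cardiomyopathy proteins (OMIM seeds)

    def guilt_by_association_scores(ppi, seed_proteins):
        scores = {}
        for (a, b), w in ppi.items():
            if a in seed_proteins and b not in seed_proteins:
                scores[b] = scores.get(b, 0.0) + w
            if b in seed_proteins and a not in seed_proteins:
                scores[a] = scores.get(a, 0.0) + w
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

    print(guilt_by_association_scores(weighted_ppi, seeds))
    # -> [('ACTC1', 0.7), ('TTN', 0.6)]  candidates ranked by association with the seeds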

  10. Comparison of machine learning and semi-quantification algorithms for (I123)FP-CIT classification: the beginning of the end for semi-quantification?

    Science.gov (United States)

    Taylor, Jonathan Christopher; Fenner, John Wesley

    2017-11-29

    Semi-quantification methods are well established in the clinic for assisted reporting of (I123) Ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. Machine learning algorithms were based on support vector machine classifiers with three different sets of features: (1) voxel intensities, (2) principal components of image voxel intensities, and (3) striatal binding ratios from the putamen and caudate. Semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: (1) minimum of age-matched controls, (2) mean minus 1/1.5/2 standard deviations from age-matched controls, (3) linear regression of normal patient data against age (minus 1/1.5/2 standard errors), and (4) selection of the optimum operating point on the receiver operating characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88 and 0.92 and between 0.95 and 0.97 for local and PPMI data respectively. Classification
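
    To illustrate one of the semi-quantification rules listed above (mean of age-matched controls minus k standard deviations), a minimal sketch could look like the following; the control values are hypothetical and this rule is only one of the four normal-limit definitions compared in the study.

    import numpy as np

    def sbr_normal_limit(control_sbrs, k=2.0):
        # Lower normal limit for the striatal binding ratio (SBR):
        # mean of age-matched controls minus k standard deviations.
        control_sbrs = np.asarray(control_sbrs, dtype=float)
        return control_sbrs.mean() - k * control_sbrs.std(ddof=1)

    def classify_scan(putamen_sbr, limit):
        # Below the limit -> abnormal (Parkinsonian pattern); otherwise normal.
        return "abnormal" if putamen_sbr < limit else "normal"

    controls = [2.9, 3.1, 2.7, 3.3, 2.8]          # hypothetical control SBR values
    limit = sbr_normal_limit(controls, k=2.0)
    print(classify_scan(1.4, limit))               # -> abnormal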

  11. Improved candidate generation and coverage analysis methods for design optimization of symmetric multi-satellite constellations

    Science.gov (United States)

    Matossian, Mark G.

    1997-01-01

    Much attention in recent years has focused on commercial telecommunications ventures involving constellations of spacecraft in low and medium Earth orbit. These projects often require investments on the order of billions of dollars (US$) for development and operations, but surprisingly little work has been published on constellation design optimization for coverage analysis, traffic simulation and launch sequencing for constellation build-up strategies. This paper addresses the two most critical aspects of constellation orbital design — efficient constellation candidate generation and coverage analysis. Inefficiencies and flaws in the current standard algorithm for constellation modeling are identified, and a corrected and improved algorithm is presented. In the 1970's, John Walker and G. V. Mozhaev developed innovative strategies for continuous global coverage using symmetric non-geosynchronous constellations. (These are sometimes referred to as rosette, or Walker constellations. An example is pictured above.) In 1980, the late Arthur Ballard extended and generalized the work of Walker into a detailed algorithm for the NAVSTAR/GPS program, which deployed a 24 satellite symmetric constellation. Ballard's important contribution was published in his "Rosette Constellations of Earth Satellites."

  12. A Novel Parallel Algorithm for Edit Distance Computation

    Directory of Open Access Journals (Sweden)

    Muhammad Murtaza Yousaf

    2018-01-01

    Full Text Available The edit distance between two sequences is the minimum number of weighted transformation operations that are required to transform one string into the other. The weighted transformation operations are insert, remove, and substitute. A dynamic programming solution to find the edit distance exists, but it becomes computationally intensive when the lengths of the strings become very large. This work presents a novel parallel algorithm to solve the edit distance problem of string matching. The algorithm is based on resolving dependencies in the dynamic programming solution of the problem, and it is able to compute each row of the edit distance table in parallel. In this way, it becomes possible to compute the complete table in min(m, n) iterations for strings of size m and n, whereas the state-of-the-art parallel algorithm solves the problem in max(m, n) iterations. The proposed algorithm also increases the amount of parallelism in each of its iterations. The algorithm is also capable of exploiting spatial locality in its implementation. Additionally, the algorithm works in a load-balanced way that further improves its performance. The algorithm is implemented for multicore systems having shared memory. An OpenMP implementation of the algorithm shows linear speedup and better execution time as compared to the state-of-the-art parallel approach. The efficiency of the algorithm is also shown to be better than that of its competitor.
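
    For reference, the dynamic-programming recurrence that the parallel algorithm reorganises (each row of the table depending only on the previous row) is sketched below; the dependency-resolution and OpenMP parallelisation themselves are not reproduced here.

    def edit_distance(a, b):
        # Classic O(m*n) dynamic program with unit-weight insert/remove/substitute.
        # The parallel algorithm in the record computes whole rows of this table
        # concurrently after resolving the row-to-row dependencies.
        m, n = len(a), len(b)
        prev = list(range(n + 1))                  # row 0
        for i in range(1, m + 1):
            curr = [i] + [0] * n
            for j in range(1, n + 1):
                cost = 0 if a[i - 1] == b[j - 1] else 1
                curr[j] = min(prev[j] + 1,         # remove
                              curr[j - 1] + 1,     # insert
                              prev[j - 1] + cost)  # substitute
            prev = curr
        return prev[n]

    print(edit_distance("kitten", "sitting"))      # -> 3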

  13. Exact parallel maximum clique algorithm for general and protein graphs.

    Science.gov (United States)

    Depolli, Matjaž; Konc, Janez; Rozman, Kati; Trobec, Roman; Janežič, Dušanka

    2013-09-23

    A new exact parallel maximum clique algorithm MaxCliquePara, which finds the maximum clique (the fully connected subgraph) in undirected general and protein graphs, is presented. First, a new branch-and-bound algorithm for finding a maximum clique on a single computer core, which builds on ideas presented in two published state-of-the-art sequential algorithms, is implemented. The new sequential MaxCliqueSeq algorithm is faster than the reference algorithms on both DIMACS benchmark graphs as well as on protein-derived product graphs used for protein structural comparisons. Next, the MaxCliqueSeq algorithm is parallelized by splitting the branch-and-bound search tree across multiple cores, resulting in the MaxCliquePara algorithm. The ability to exploit all cores efficiently makes the new parallel MaxCliquePara algorithm markedly superior to other tested algorithms. On a 12-core computer, the parallelization provides up to 2 orders of magnitude faster execution on the large DIMACS benchmark graphs and up to an order of magnitude faster execution on protein product graphs. The algorithms are freely accessible at http://commsys.ijs.si/~matjaz/maxclique.
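
    A minimal, unoptimised branch-and-bound maximum-clique search is sketched below to illustrate the kind of recursion that MaxCliquePara parallelises across subtrees; the actual algorithm uses much stronger bounding and is available at the URL given in the record.

    def max_clique(adj):
        # adj: dict mapping each vertex to the set of its neighbours (undirected graph).
        best = []

        def expand(clique, candidates):
            nonlocal best
            if len(clique) + len(candidates) <= len(best):
                return                                      # bound: cannot beat current best
            if not candidates:
                best = list(clique)                         # found a larger clique
                return
            for v in list(candidates):
                expand(clique + [v], candidates & adj[v])   # branch: include v
                candidates = candidates - {v}               # then exclude v

        expand([], set(adj))
        return best

    graph = {1: {2, 3}, 2: {1, 3, 4}, 3: {1, 2, 4}, 4: {2, 3}}
    print(max_clique(graph))   # -> a maximum clique of size 3, e.g. [1, 2, 3]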

  14. Improved Parallel Three-List Algorithm for the Knapsack Problem without Memory Conflicts

    Institute of Scientific and Technical Information of China (English)

    Pan Jun; Li Kenli; Li Qinghua

    2006-01-01

    Based on the two-list algorithm and the parallel three-list algorithm, an improved parallel three-list algorithm for the knapsack problem is proposed, in which the method of divide and conquer, and parallel merging without memory conflicts, are adopted. To find a solution for the n-element knapsack problem, the proposed algorithm needs O(2^(3n/8)) time when O(2^(3n/8)) shared memory units and O(2^(n/4)) processors are available. The comparisons between the proposed algorithm and 10 existing algorithms show that the improved parallel three-list algorithm is the first exclusive-read exclusive-write (EREW) parallel algorithm that can solve the knapsack instances in less than O(2^(n/2)) time when the available hardware resource is smaller than O(2^(n/2)), and hence is an improvement over past research.

  15. Planning the FUSE Mission Using the SOVA Algorithm

    Science.gov (United States)

    Lanzi, James; Heatwole, Scott; Ward, Philip R.; Civeit, Thomas; Calvani, Humberto; Kruk, Jeffrey W.; Suchkov, Anatoly

    2011-01-01

    Three documents discuss the Sustainable Objective Valuation and Attainability (SOVA) algorithm and software as used to plan tasks (principally, scientific observations and associated maneuvers) for the Far Ultraviolet Spectroscopic Explorer (FUSE) satellite. SOVA is a means of managing risk in a complex system, based on a concept of computing the expected return value of a candidate ordered set of tasks as a product of pre-assigned task values and assessments of attainability made against qualitatively defined strategic objectives. For the FUSE mission, SOVA autonomously assembles a week-long schedule of target observations and associated maneuvers so as to maximize the expected scientific return value while keeping the satellite stable, managing the angular momentum of the spacecraft attitude-control reaction wheels, and striving for other strategic objectives. A six-degree-of-freedom model of the spacecraft is used in simulating the tasks, and the attainability of a task is calculated at each step by use of strategic objectives defined by fuzzy inference systems. SOVA utilizes a variant of a graph-search algorithm known as the A* search algorithm to assemble the tasks into a week-long target schedule, using the expected scientific return value to guide the search.

  16. A meta-analysis based method for prioritizing candidate genes involved in a pre-specific function

    Directory of Open Access Journals (Sweden)

    Jingjing Zhai

    2016-12-01

    Full Text Available The identification of genes associated with a given biological function in plants remains a challenge, although network-based gene prioritization algorithms have been developed for Arabidopsis thaliana and many non-model plant species. Nevertheless, these network-based gene prioritization algorithms have encountered several problems; one in particular is unsatisfactory prediction accuracy due to limited network coverage, varying link quality, and/or uncertain network connectivity. Thus a model that integrates complementary biological data may be expected to increase the prediction accuracy of gene prioritization. Towards this goal, we developed a novel gene prioritization method named RafSee, which ranks candidate genes using a random forest algorithm that integrates sequence, evolutionary, and epigenetic features of plants. Subsequently, we proposed an integrative approach named RAP (Rank Aggregation-based data fusion for gene Prioritization), in which an order-statistics-based meta-analysis is used to aggregate the ranks of the network-based gene prioritization method and RafSee, in order to accurately prioritize candidate genes involved in a pre-specified biological function. Finally, we showcased the utility of RAP by prioritizing 380 flowering-time genes in Arabidopsis. The ‘leave-one-out’ cross-validation experiment showed that RafSee could work as a complement to a current state-of-the-art network-based gene prioritization system (AraNet v2). Moreover, RAP ranked 53.68% (204/380) of the flowering-time genes higher than AraNet v2, resulting in a 39.46% improvement in terms of the first-quartile rank. Further evaluations also showed that RAP was effective in prioritizing genes related to different abiotic stresses. To enhance the usability of RAP for Arabidopsis and non-model plant species, an R package implementing the method is freely available at http://bioinfo.nwafu.edu.cn/software.

  17. A New Segment Building Algorithm for the Cathode Strip Chambers in the CMS Experiment

    Directory of Open Access Journals (Sweden)

    Golutvin I.

    2016-01-01

    Full Text Available A new segment building algorithm for the Cathode Strip Chambers in the CMS experiment is presented. A detailed description of the new algorithm is given along with a comparison with the algorithm used in the CMS software. The new segment builder was tested with different Monte-Carlo data samples. The new algorithm is meant to be robust and effective for hard muons and the higher luminosity that is expected in the future at the LHC.

  18. A Negative Selection Algorithm Based on Hierarchical Clustering of Self Set and its Application in Anomaly Detection

    Directory of Open Access Journals (Sweden)

    Wen Chen

    2011-08-01

    Full Text Available A negative selection algorithm based on hierarchical clustering of the self set, HC-RNSA, is introduced in this paper. Several strategies are applied to improve the algorithm's performance. First, the self data set is replaced by the self cluster centers to compare with the detector candidates at each cluster level. Since the number of self clusters is much smaller than the self set size, the detector generation efficiency is improved. Second, during the detector generation process, the detector candidates are restricted to the lower-coverage space in order to reduce detector redundancy. The article analyzes the problem that distances between antigens converge to a constant value in high-dimensional space; accordingly, Principal Component Analysis (PCA) is used to reduce the data dimension, and a fractional distance function is employed to enhance the distinctiveness between self and non-self antigens. The detector generation procedure is terminated when the expected non-self coverage is reached. The theoretical analysis and experimental results demonstrate that the detection rate of HC-RNSA is higher than that of traditional negative selection algorithms, while the false alarm rate and time cost are reduced.
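
    The fractional distance function mentioned above (a Minkowski-type distance with exponent below 1, used to counteract distance concentration in high-dimensional spaces) can be sketched as follows; the exponent value is illustrative.

    import numpy as np

    def fractional_distance(x, y, f=0.5):
        # d(x, y) = (sum_i |x_i - y_i|**f) ** (1/f) with 0 < f < 1.
        # For f < 1 this is not a true metric, but it keeps pairwise distances
        # better separated in high dimensions, which is why the record uses it to
        # discriminate self from non-self antigens.
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        return np.sum(np.abs(x - y) ** f) ** (1.0 / f)

    print(fractional_distance([0.1, 0.9, 0.4], [0.2, 0.7, 0.5], f=0.5))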

  19. A Cultural Algorithm for Optimal Design of Truss Structures

    Directory of Open Access Journals (Sweden)

    Shahin Jalili

    Full Text Available Abstract A cultural algorithm was utilized in this study to solve the optimal design of truss structures problem, achieving the minimum weight objective under stress and deflection constraints. The algorithm is inspired by principles of human social evolution. It simulates the social interaction between people and their beliefs in a belief space. The Cultural Algorithm (CA) utilizes a belief space and a population space which affect each other based on acceptance and influence functions. The belief space of CA consists of different knowledge components. In this paper, only situational and normative knowledge components are used within the belief space. The performance of the method is demonstrated through four benchmark design examples. Comparison of the obtained results with those of some previous studies demonstrates the efficiency of this algorithm.

  20. Mapping of five candidate sex-determining loci in rainbow trout (Oncorhynchus mykiss)

    Directory of Open Access Journals (Sweden)

    Drew Robert E

    2009-01-01

    Full Text Available Abstract Background Rainbow trout have an XX/XY genetic mechanism of sex determination where males are the heterogametic sex. The homology of the sex-determining gene (SDG) in medaka to Dmrt1 suggested that SDGs evolve from downstream genes by gene duplication. Orthologous sequences of the major genes of the mammalian sex determination pathway have been reported in the rainbow trout, but the map position for the majority of these genes has not been assigned. Results Five loci of four candidate genes (Amh, Dax1, Dmrt1 and Sox6) were tested for linkage to the Y chromosome of rainbow trout. We exclude the role of all these loci as candidates for the primary SDG in this species. Sox6i and Sox6ii, duplicated copies of Sox6, mapped to homeologous linkage groups 10 and 18, respectively. Genotyping fishes of the OSU × Arlee mapping family for Sox6i and Sox6ii alleles indicated that the Sox6i locus might be deleted in the Arlee lineage. Conclusion Additional candidate genes should be tested for their linkage to the Y chromosome. Mapping data of the duplicated Sox6 loci support the previously suggested homeology between linkage groups 10 and 18. Enrichment of the rainbow trout genomic map with known gene markers allows map comparisons with other salmonids. Mapping of candidate sex-determining loci is important for analyses of potential autosomal modifiers of sex determination in rainbow trout.

  1. Sentiment analysis: a comparison of deep learning neural network algorithm with SVM and naïve Bayes for Indonesian text

    Science.gov (United States)

    Calvin Frans Mariel, Wahyu; Mariyah, Siti; Pramana, Setia

    2018-03-01

    Deep learning is a new era of machine learning techniques that essentially imitate the structure and function of the human brain. It is a development of deeper Artificial Neural Networks (ANNs) that use more than one hidden layer. A Deep Learning Neural Network has a great ability to recognize patterns in various data types such as pictures, audio, text, and many more. In this paper, the authors try to measure that algorithm's ability by applying it to text classification. The classification task herein is done by considering the content of sentiment in a text, which is also called sentiment analysis. By using several combinations of text preprocessing and feature extraction techniques, we aim to compare the modelling results of the Deep Learning Neural Network with two other commonly used algorithms, Naïve Bayes and the Support Vector Machine (SVM). This algorithm comparison uses Indonesian text data with balanced and unbalanced sentiment composition. Based on the experimental simulation, the Deep Learning Neural Network clearly outperforms Naïve Bayes and the SVM and offers a better F1 score, while the feature extraction technique that most improves the modelling result is the bigram.

  2. Speeding disease gene discovery by sequence based candidate prioritization

    Directory of Open Access Journals (Sweden)

    Porteous David J

    2005-03-01

    Full Text Available Abstract Background Regions of interest identified through genetic linkage studies regularly exceed 30 centimorgans in size and can contain hundreds of genes. Traditionally this number is reduced by matching functional annotation to knowledge of the disease or phenotype in question. However, here we show that disease genes share patterns of sequence-based features that can provide a good basis for automatic prioritization of candidates by machine learning. Results We examined a variety of sequence-based features and found that for many of them there are significant differences between the sets of genes known to be involved in human hereditary disease and those not known to be involved in disease. We have created an automatic classifier called PROSPECTR based on those features using the alternating decision tree algorithm which ranks genes in the order of likelihood of involvement in disease. On average, PROSPECTR enriches lists for disease genes two-fold 77% of the time, five-fold 37% of the time and twenty-fold 11% of the time. Conclusion PROSPECTR is a simple and effective way to identify genes involved in Mendelian and oligogenic disorders. It performs markedly better than the single existing sequence-based classifier on novel data. PROSPECTR could save investigators looking at large regions of interest time and effort by prioritizing positional candidate genes for mutation detection and case-control association studies.

  3. Comparing the Robustness of Evolutionary Algorithms on the Basis of Benchmark Functions

    Directory of Open Access Journals (Sweden)

    DENIZ ULKER, E.

    2013-05-01

    Full Text Available In real-world optimization problems, even though the solution quality is of great importance, the robustness of the solution is also an important aspect. This paper investigates how sensitive optimization algorithms are to variations of control parameters and to the random initialization of the solution set for fixed control parameters. The comparison is performed on three well-known evolutionary algorithms: the Particle Swarm Optimization (PSO) algorithm, the Differential Evolution (DE) algorithm, and the Harmony Search (HS) algorithm. Various benchmark functions with different characteristics are used for the evaluation of these algorithms. The experimental results show that the solution quality of the algorithms is not directly related to their robustness. In particular, an algorithm that is highly robust can have low solution quality, or an algorithm that has high solution quality can be quite sensitive to parameter variations.

  4. A stochastic de novo assembly algorithm for viral-sized genomes obtains correct genomes and builds consensus

    NARCIS (Netherlands)

    Bucur, Doina

    2017-01-01

    A genetic algorithm with stochastic macro mutation operators which merge, split, move, reverse and align DNA contigs on a scaffold is shown to accurately and consistently assemble raw DNA reads from an accurately sequenced single-read library into a contiguous genome. A candidate solution is a

  5. A new collage steganographic algorithm using cartoon design

    Science.gov (United States)

    Yi, Shuang; Zhou, Yicong; Pun, Chi-Man; Chen, C. L. Philip

    2014-02-01

    Existing collage steganographic methods suffer from low payload of embedding messages. To improve the payload while providing a high level of security protection to messages, this paper introduces a new collage steganographic algorithm using cartoon design. It embeds messages into the least significant bits (LSBs) of color cartoon objects, applies different permutations to each object, and adds objects to a cartoon cover image to obtain the stego image. Computer simulations and comparisons demonstrate that the proposed algorithm shows significantly higher capacity of embedding messages compared with existing collage steganographic methods.
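
    The per-object least-significant-bit embedding that the collage scheme relies on can be illustrated with the minimal sketch below; the pixel values and message bits are hypothetical, and the permutation and collage steps of the actual algorithm are omitted.

    def embed_lsb(pixels, bits):
        # Replace the least significant bit of each 8-bit pixel value with a message bit.
        stego = list(pixels)
        for i, bit in enumerate(bits):
            stego[i] = (stego[i] & ~1) | bit    # clear the LSB, then set it to the bit
        return stego

    def extract_lsb(pixels, n_bits):
        return [p & 1 for p in pixels[:n_bits]]

    pixels = [120, 133, 87, 200, 41, 66]        # hypothetical cartoon-object pixel values
    message = [1, 0, 1, 1]
    stego = embed_lsb(pixels, message)
    assert extract_lsb(stego, len(message)) == message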

  6. ANALYSIS OF THE CHARACTERISTICS OF INTERNATIONAL STANDARD ALGORITHMS «LIGHTWEIGHT CRYPTOGRAPHY» – ISO/IEC 29192-3:2012

    Directory of Open Access Journals (Sweden)

    A. S. Poljakov

    2014-01-01

    Full Text Available Data on the characteristics of the international standard «lightweight cryptography» algorithms, as implemented in hardware based on FPGA microchips, are provided. A comparison of the characteristics of these algorithms with the characteristics of several widely used standard encryption algorithms is made, and the possibilities of lightweight cryptography algorithms are evaluated.

  7. The parallel processing impact in the optimization of the reactors neutronic by genetic algorithms

    International Nuclear Information System (INIS)

    Pereira, Claudio M.N.A.; Universidade Federal, Rio de Janeiro, RJ; Lapa, Celso M.F.; Mol, Antonio C.A.

    2002-01-01

    Nowadays, many optimization problems found in nuclear engineering have been solved through genetic algorithms (GA). The robustness of such methods is strongly related to the nature of the search process, which is based on populations of solution candidates, and this fact implies a high computational cost in the optimization process. The use of GA becomes more critical when the evaluation of a solution candidate is highly time consuming. Problems of this nature are common in nuclear engineering; an example is reactor design optimization, where neutronic codes, which consume considerable CPU time, must be run. Aiming to investigate the impact of the use of parallel computation in the solution, through GA, of a reactor design optimization problem, a parallel genetic algorithm (PGA) using the Island Model was developed. Exhaustive experiments, requiring more than 1500 processing hours on 550 MHz personal computers, have been carried out in order to compare the conventional GA with the PGA. These experiments have demonstrated the superiority of the PGA not only in terms of execution time, but also in the optimization results. (author)

  8. A low-resource quantum factoring algorithm

    NARCIS (Netherlands)

    Bernstein, D.J.; Biasse, J. F.; Mosca, M.; Lange, T.; Takagi, T.

    2017-01-01

    In this paper, we present a factoring algorithm that, assuming standard heuristics, uses just (log N)^(2/3+o(1)) qubits to factor an integer N in time L^(q+o(1)), where L = exp((log N)^(1/3) (log log N)^(2/3)) and q = (8/3)^(1/3) ≈ 1.387. For comparison, the lowest asymptotic time complexity for known pre-quantum

  9. CANDIDATE GENE ANALYSIS IN ISRAELI SOLDIERS WITH STRESS FRACTURES

    Directory of Open Access Journals (Sweden)

    Ran Yanovich

    2012-03-01

    Full Text Available To investigate the association of polymorphisms within candidate genes which we hypothesized may contribute to stress fracture predisposition, a case-control, cross-sectional study design was employed, genotyping 268 single nucleotide polymorphisms (SNPs) within 17 genes in 385 Israeli young male and female recruits (182 with and 203 without stress fractures). Twenty-five polymorphisms within 9 genes (NR3C1, ANKH, VDR, ROR2, CALCR, IL6, COL1A2, CBG, and LRP4) showed statistically significant differences (p < 0.05) in their distribution between stress fracture cases and non-stress-fracture controls. Seventeen genetic variants were associated with an increased stress fracture risk, and eight variants with a decreased stress fracture risk. None of the SNP associations remained significant after correcting for multiple comparisons (false discovery rate, FDR). Our findings suggest that genes may be involved in stress fracture pathogenesis. Specifically, the CALCR and the VDR genes are intriguing candidates. The putative involvement of these genes in stress fracture predisposition requires analysis of more cases and controls and sequencing of the relevant genomic regions, in order to define the specific gene mutations

  10. Using network properties to evaluate targeted immunization algorithms

    Directory of Open Access Journals (Sweden)

    Bita Shams

    2014-09-01

    Full Text Available Immunization of a complex network with minimal or limited budget is a challenging issue for the research community. In spite of much literature on network immunization, no comprehensive research has been conducted on the evaluation and comparison of immunization algorithms. In this paper, we propose an evaluation framework for immunization algorithms regarding the available amount of vaccination resources, the goal of the immunization program, and time complexity. The evaluation framework is designed based on network topological metrics, which is extensible to all epidemic spreading models. Applying the evaluation framework to well-known targeted immunization algorithms shows that, in general, immunization based on PageRank centrality outperforms other targeting strategies in various types of networks, whereas closeness and eigenvector centrality exhibit the worst-case performance.

  11. A Self-Reconstructing Algorithm for Single and Multiple-Sensor Fault Isolation Based on Auto-Associative Neural Networks

    Directory of Open Access Journals (Sweden)

    Hamidreza Mousavi

    2017-01-01

    Full Text Available Recently, different approaches have been developed in the field of sensor fault diagnostics based on the Auto-Associative Neural Network (AANN). In this paper we present a novel algorithm called the Self-reconstructing Auto-Associative Neural Network (S-AANN), which is able to detect and isolate a single faulty sensor via reconstruction. We have also extended the algorithm to be applicable in multiple-fault conditions. The algorithm uses a calibration model based on an AANN. The AANN can reconstruct the faulty sensor using the non-faulty sensors due to correlation between the process variables, and the mean of the difference between reconstructed and original data determines which sensors are faulty. The algorithms are tested on a dimerization process. The simulation results show that the S-AANN can isolate multiple faulty sensors with low computational time, which makes the algorithm an appropriate candidate for online applications.
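
    Independent of the network details, the isolation rule described above (flagging the sensors whose mean residual between measured and reconstructed signals is large) might be sketched as follows; the reconstruction model is assumed to be given, and the toy data are hypothetical.

    import numpy as np

    def isolate_faulty_sensors(measurements, reconstruct, threshold):
        # measurements: array of shape (n_samples, n_sensors);
        # reconstruct: a trained model (e.g. an auto-associative network) mapping a
        # batch of sensor vectors to their reconstructions -- assumed available here.
        residuals = measurements - reconstruct(measurements)
        mean_abs_residual = np.mean(np.abs(residuals), axis=0)   # one value per sensor
        return np.where(mean_abs_residual > threshold)[0]        # indices of faulty sensors

    # Toy example: the "model" simply predicts the nominal value (zero) for every
    # sensor, and sensor 2 carries a bias fault in the data.
    data = np.random.normal(size=(100, 4))
    data[:, 2] += 3.0
    reconstruct = lambda x: np.zeros_like(x)
    print(isolate_faulty_sensors(data, reconstruct, threshold=1.5))   # -> [2]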

  12. Variable threshold algorithm for division of labor analyzed as a dynamical system.

    Science.gov (United States)

    Castillo-Cagigal, Manuel; Matallanas, Eduardo; Navarro, Iñaki; Caamaño-Martín, Estefanía; Monasterio-Huelin, Félix; Gutiérrez, Álvaro

    2014-12-01

    Division of labor is a widely studied aspect of colony behavior of social insects. Division of labor models indicate how individuals distribute themselves in order to perform different tasks simultaneously. However, models that study division of labor from a dynamical system point of view cannot be found in the literature. In this paper, we define a division of labor model as a discrete-time dynamical system, in order to study the equilibrium points and their properties related to convergence and stability. By making use of this analytical model, an adaptive algorithm based on division of labor can be designed to satisfy dynamic criteria. In this way, we have designed and tested an algorithm that varies the response thresholds in order to modify the dynamic behavior of the system. This behavior modification allows the system to adapt to specific environmental and collective situations, making the algorithm a good candidate for distributed control applications. The variable threshold algorithm is based on specialization mechanisms. It is able to achieve an asymptotically stable behavior of the system in different environments and independently of the number of individuals. The algorithm has been successfully tested under several initial conditions and number of individuals.
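
    Response-threshold models of division of labor typically use a sigmoidal probability of engaging in a task as a function of the stimulus s and an individual threshold. The sketch below illustrates that rule together with a simple specialization-style threshold update; the exact update law analyzed in the record is not reproduced.

    def response_probability(stimulus, threshold, n=2):
        # Classic response-threshold rule: P(engage) = s**n / (s**n + theta**n).
        return stimulus**n / (stimulus**n + threshold**n)

    def update_threshold(threshold, engaged, xi=0.1, phi=0.05):
        # Illustrative specialization rule (not the paper's exact law):
        # performing the task lowers the threshold, idling raises it.
        return max(0.0, threshold - xi) if engaged else threshold + phi

    theta = 0.6
    for step in range(5):
        p = response_probability(stimulus=0.8, threshold=theta)
        theta = update_threshold(theta, engaged=(p > 0.5))
        print(f"step {step}: P(engage)={p:.2f}, new threshold={theta:.2f}")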

  13. Comparison and application of wind retrieval algorithms for small unmanned aerial systems

    Science.gov (United States)

    Bonin, T. A.; Chilson, P. B.; Zielke, B. S.; Klein, P. M.; Leeman, J. R.

    2013-07-01

    Recently, there has been an increase in use of Unmanned Aerial Systems (UASs) as platforms for conducting fundamental and applied research in the lower atmosphere due to their relatively low cost and ability to collect samples with high spatial and temporal resolution. Concurrent with this development comes the need for accurate instrumentation and measurement methods suitable for small meteorological UASs. Moreover, the instrumentation to be integrated into such platforms must be small and lightweight. Whereas thermodynamic variables can be easily measured using well-aspirated sensors onboard, it is much more challenging to accurately measure the wind with a UAS. Several algorithms have been developed that incorporate GPS observations as a means of estimating the horizontal wind vector, with each algorithm exhibiting its own particular strengths and weaknesses. In the present study, the performance of three such GPS-based wind-retrieval algorithms has been investigated and compared with wind estimates from rawinsonde and sodar observations. Each of the algorithms considered agreed well with the wind measurements from sounding and sodar data. Through the integration of UAS-retrieved profiles of thermodynamic and kinematic parameters, one can investigate the static and dynamic stability of the atmosphere and relate them to the state of the boundary layer across a variety of times and locations, which might be difficult to access using conventional instrumentation.

  14. Development and comparisons of wind retrieval algorithms for small unmanned aerial systems

    Science.gov (United States)

    Bonin, T. A.; Chilson, P. B.; Zielke, B. S.; Klein, P. M.; Leeman, J. R.

    2012-12-01

    Recently, there has been an increase in use of Unmanned Aerial Systems (UASs) as platforms for conducting fundamental and applied research in the lower atmosphere due to their relatively low cost and ability to collect samples with high spatial and temporal resolution. Concurrent with this development comes the need for accurate instrumentation and measurement methods suitable for small meteorological UASs. Moreover, the instrumentation to be integrated into such platforms must be small and lightweight. Whereas thermodynamic variables can be easily measured using well aspirated sensors onboard, it is much more challenging to accurately measure the wind with a UAS. Several algorithms have been developed that incorporate GPS observations as a means of estimating the horizontal wind vector, with each algorithm exhibiting its own particular strengths and weaknesses. In the present study, the performance of three such GPS-based wind-retrieval algorithms has been investigated and compared with wind estimates from rawinsonde and sodar observations. Each of the algorithms considered agreed well with the wind measurements from sounding and sodar data. Through the integration of UAS-retrieved profiles of thermodynamic and kinematic parameters, one can investigate the static and dynamic stability of the atmosphere and relate them to the state of the boundary layer across a variety of times and locations, which might be difficult to access using conventional instrumentation.

  15. Optimization of antibacterial peptides by genetic algorithms and cheminformatics

    DEFF Research Database (Denmark)

    Fjell, Christopher D.; Jenssen, Håvard; Cheung, Warren A.

    2011-01-01

    Pathogens resistant to available drug therapies are a pressing global health problem. Short, cationic peptides represent a novel class of agents that have lower rates of drug resistance than derivatives of current antibiotics. Previously, we created a software system utilizing artificial neural...... 47 of the top rated 50 peptides chosen from an in silico library of nearly 100 000 sequences. Here, we report a method of generating candidate peptide sequences using the heuristic evolutionary programming method of genetic algorithms (GA), which provided a large (19-fold) improvement...

  16. Improved algorithms for approximate string matching (extended abstract)

    Directory of Open Access Journals (Sweden)

    Papamichail Georgios

    2009-01-01

    Full Text Available Abstract Background The problem of approximate string matching is important in many different areas such as computational biology, text processing and pattern recognition. A great effort has been made to design efficient algorithms addressing several variants of the problem, including comparison of two strings, approximate pattern identification in a string, or calculation of the longest common subsequence that two strings share. Results We designed an output-sensitive algorithm solving the edit distance problem between two strings of lengths n and m, respectively, in time O((s - |n - m|)·min(m, n, s) + m + n) and linear space, where s is the edit distance between the two strings. This worst-case time bound sets the quadratic factor of the algorithm independent of the longest string length and improves existing theoretical bounds for this problem. The implementation of our algorithm also excels in practice, especially in cases where the two strings compared differ significantly in length. Conclusion We have provided the design, analysis and implementation of a new algorithm for calculating the edit distance of two strings, with both theoretical and practical implications. Source code of our algorithm is available online.

  17. On the use of harmony search algorithm in the training of wavelet neural networks

    Science.gov (United States)

    Lai, Kee Huong; Zainuddin, Zarita; Ong, Pauline

    2015-10-01

    Wavelet neural networks (WNNs) are a class of feedforward neural networks that have been used in a wide range of industrial and engineering applications to model the complex relationships between the given inputs and outputs. The training of WNNs involves the configuration of the weight values between neurons. The backpropagation training algorithm, which is a gradient-descent method, can be used for this training purpose. Nonetheless, the solutions found by this algorithm often get trapped at local minima. In this paper, a harmony search-based algorithm is proposed for the training of WNNs. The training of WNNs can thus be formulated as a continuous optimization problem, where the objective is to maximize the overall classification accuracy. Each candidate solution proposed by the harmony search algorithm represents a specific WNN architecture. In order to speed up the training process, the solution space is divided into disjoint partitions during the random initialization step of the harmony search algorithm. The proposed training algorithm is tested on three benchmark problems from the UCI machine learning repository, as well as one real-life application, namely, the classification of electroencephalography signals in the task of epileptic seizure detection. The results obtained show that the proposed algorithm outperforms the traditional harmony search algorithm in terms of overall classification accuracy.
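
    A compact, generic harmony search loop for continuous variables is sketched below to illustrate the memory-consideration, pitch-adjustment and random-selection steps the record builds on; the WNN-specific weight encoding and the partitioned initialisation are omitted, and the sphere function stands in for the (negated) classification-accuracy objective.

    import random

    def harmony_search(objective, dim, bounds, hms=10, hmcr=0.9, par=0.3,
                       bandwidth=0.05, iterations=2000):
        lo, hi = bounds
        memory = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
        scores = [objective(h) for h in memory]
        for _ in range(iterations):
            new = []
            for d in range(dim):
                if random.random() < hmcr:                 # memory consideration
                    value = random.choice(memory)[d]
                    if random.random() < par:              # pitch adjustment
                        value += random.uniform(-bandwidth, bandwidth)
                else:                                      # random selection
                    value = random.uniform(lo, hi)
                new.append(min(hi, max(lo, value)))
            new_score = objective(new)
            worst = max(range(hms), key=lambda i: scores[i])
            if new_score < scores[worst]:                  # replace the worst harmony
                memory[worst], scores[worst] = new, new_score
        best = min(range(hms), key=lambda i: scores[i])
        return memory[best], scores[best]

    solution, value = harmony_search(lambda x: sum(v * v for v in x), dim=5, bounds=(-5, 5))
    print(value)   # close to 0 for the sphere function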

  18. The Modified Frequency Algorithm of Digital Watermarking of Still Images Resistant to JPEG Compression

    Directory of Open Access Journals (Sweden)

    V. A. Batura

    2015-01-01

    Full Text Available Digital watermarking is an effective means of copyright protection for multimedia products (in particular, still images). Digital watermarking is the process of embedding into the protected object a digital watermark that is invisible to the human eye. However, there is a rather large number of harmful influences capable of destroying a watermark embedded into a still image. The most widespread attack is JPEG compression, owing to the efficiency of this compression format and its prevalence on the Internet. The new algorithm presented in this article is a modification of the algorithm of Elham. The algorithm embeds a watermark into the frequency coefficients of the discrete Hadamard transform of selected image blocks. Image blocks for embedding the digital watermark are chosen on the basis of a set entropy threshold of their pixels. Low-frequency coefficients for embedding are chosen by comparing the values of the discrete cosine transform coefficients with a predetermined threshold, which depends on the product of the embedded watermark coefficient and the change coefficient. The resistance of the new algorithm to JPEG compression, noise, filtering, color change, resizing and histogram equalization is analysed in detail. The study of the algorithm consists in comparing the watermark extracted from the damaged image with the embedded logo. The ability of the algorithm to embed a watermark with a minimum level of image distortion is additionally analysed. It is established that the new algorithm, in comparison with the original algorithm of Elham, showed full resistance to JPEG compression, as well as improved resistance to noise, brightness change and histogram equalization. The developed algorithm can be used for copyright protection of static images. Further studies will be used to study the

  19. A Hybrid Adaptive Routing Algorithm for Event-Driven Wireless Sensor Networks

    Science.gov (United States)

    Figueiredo, Carlos M. S.; Nakamura, Eduardo F.; Loureiro, Antonio A. F.

    2009-01-01

    Routing is a basic function in wireless sensor networks (WSNs). For these networks, routing algorithms depend on the characteristics of the applications and, consequently, there is no self-contained algorithm suitable for every case. In some scenarios, the network behavior (traffic load) may vary a lot, such as an event-driven application, favoring different algorithms at different instants. This work presents a hybrid and adaptive algorithm for routing in WSNs, called Multi-MAF, that adapts its behavior autonomously in response to the variation of network conditions. In particular, the proposed algorithm applies both reactive and proactive strategies for routing infrastructure creation, and uses an event-detection estimation model to change between the strategies and save energy. To show the advantages of the proposed approach, it is evaluated through simulations. Comparisons with independent reactive and proactive algorithms show improvements on energy consumption. PMID:22423207

  20. Biology, the way it should have been, experiments with a Lamarckian algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Brown, F.M.; Snider, J. [Univ. of Kansas, Lawrence, KS (United States)

    1996-12-31

    This paper investigates the case where some information can be extracted directly from the fitness function of a genetic algorithm so that mutation may be achieved essentially on the Lamarckian principle of acquired characteristics. The basic rationale is that such additional information will provide better mutations, thus speeding up the search process. Comparisons are made between a pure Neo-Darwinian genetic algorithm and this Lamarckian algorithm on a number of problems, including a problem of interest to the US Army.

  1. Inter-algorithm lesion volumetry comparison of real and 3D simulated lung lesions in CT

    Science.gov (United States)

    Robins, Marthony; Solomon, Justin; Hoye, Jocelyn; Smith, Taylor; Ebner, Lukas; Samei, Ehsan

    2017-03-01

    The purpose of this study was to establish volumetric exchangeability between real and computational lung lesions in CT. We compared the overall relative volume estimation performance of segmentation tools when used to measure real lesions in actual patient CT images and computational lesions virtually inserted into the same patient images (i.e., hybrid datasets). Pathologically confirmed malignancies from 30 thoracic patient cases from the Reference Image Database to Evaluate Therapy Response (RIDER) were modeled and used as the basis for the comparison. Lesions included isolated nodules as well as those attached to the pleura or other lung structures. Patient images were acquired using a 16 detector row or 64 detector row CT scanner (Lightspeed 16 or VCT; GE Healthcare). Scans were acquired using standard chest protocols during a single breath-hold. Virtual 3D lesion models based on real lesions were developed in the Duke Lesion Tool (Duke University) and inserted using a validated image-domain insertion program. Nodule volumes were estimated using multiple commercial segmentation tools (iNtuition, TeraRecon, Inc.; Syngo.via, Siemens Healthcare; and IntelliSpace, Philips Healthcare). Consensus-based volume comparison showed consistent trends in volume measurement between real and virtual lesions across all software. The average percent bias (+/- standard error) was -9.2+/-3.2% for real lesions versus -6.7+/-1.2% for virtual lesions with tool A, 3.9+/-2.5% and 5.0+/-0.9% for tool B, and 5.3+/-2.3% and 1.8+/-0.8% for tool C, respectively. Virtual lesion volumes were statistically similar to those of real lesions (p > 0.05 in most cases). Results suggest that hybrid datasets had similar inter-algorithm variability compared to real datasets.

  2. Comparison of new and existing algorithms for the analysis of 2D radioxenon beta gamma spectra

    International Nuclear Information System (INIS)

    Deshmukh, Nikhil; Prinke, Amanda; Miller, Brian; McIntyre, Justin

    2017-01-01

    The aim of this study is to compare radioxenon beta–gamma analysis algorithms using simulated spectra with experimentally measured background, where the ground truth of the signal is known. We believe that this is among the largest efforts to date in terms of the number of synthetic spectra generated and number of algorithms compared using identical spectra. We generate an estimate for the minimum detectable counts for each isotope using each algorithm. The paper also points out a conceptual model to put the various algorithms into a continuum. Finally, our results show that existing algorithms can be improved and some newer algorithms can be better than the ones currently used.

  3. Comparison of new and existing algorithms for the analysis of 2D radioxenon beta gamma spectra

    International Nuclear Information System (INIS)

    Deshmukh, Nikhil; Prinke, Amanda; Miller, Brian; McIntyre, Justin

    2017-01-01

    The aim of this paper is to compare radioxenon beta-gamma analysis algorithms using simulated spectra with experimentally measured background, where the ground truth of the signal is known. We believe that this is among the largest efforts to date in terms of the number of synthetic spectra generated and number of algorithms compared using identical spectra. We generate an estimate for the minimum detectable counts for each isotope using each algorithm. The paper also points out a conceptual model to put the various algorithms into a continuum. Our results show that existing algorithms can be improved and some newer algorithms can be better than the ones currently used. (author)

  4. Model-based fault diagnosis techniques design schemes, algorithms, and tools

    CERN Document Server

    Ding, Steven

    2008-01-01

    The objective of this book is to introduce basic model-based FDI schemes, advanced analysis and design algorithms, and the needed mathematical and control theory tools at a level for graduate students and researchers as well as for engineers. This is a textbook with extensive examples and references. Most methods are given in the form of an algorithm that enables a direct implementation in a programme. Comparisons among different methods are included when possible.

  5. Improved algorithm for solving nonlinear parabolized stability equations

    International Nuclear Information System (INIS)

    Zhao Lei; Zhang Cun-bo; Liu Jian-xin; Luo Ji-sheng

    2016-01-01

    Due to its high computational efficiency and ability to consider nonparallel and nonlinear effects, the nonlinear parabolized stability equations (NPSE) approach has been widely used to study stability and transition mechanisms. However, it often diverges in hypersonic boundary layers when the amplitude of the disturbance reaches a certain level. In this study, an improved algorithm for solving the NPSE is developed. In this algorithm, the mean flow distortion is included in the linear operator instead of in the nonlinear forcing terms of the NPSE. An under-relaxation factor for computing the nonlinear terms is introduced during the iteration process to guarantee the robustness of the algorithm. Two case studies, the nonlinear development of stationary crossflow vortices and the fundamental resonance of the second-mode disturbance in hypersonic boundary layers, are presented to validate the proposed algorithm for the NPSE. Results from direct numerical simulation (DNS) are regarded as the baseline for comparison. Good agreement can be found between the proposed algorithm and DNS, which indicates the great potential of the proposed method for studying crossflow and streamwise instability in hypersonic boundary layers. (paper)

  6. Novel Adaptive Bacteria Foraging Algorithms for Global Optimization

    Directory of Open Access Journals (Sweden)

    Ahmad N. K. Nasir

    2014-01-01

    Full Text Available This paper presents improved versions of the bacterial foraging algorithm (BFA). The chemotaxis feature of bacteria through random motion is an effective strategy for exploring the optimum point in a search area. Selecting a small step size for the bacteria motion leads to high accuracy in the solution but offers slow convergence. On the contrary, a large step size in the motion provides faster convergence, but the bacteria may be unable to locate the optimum point, reducing the fitness accuracy. In order to overcome such problems, novel linear and nonlinear mathematical relationships based on the index of iteration, index of bacteria, and fitness cost are adopted, which can dynamically vary the step size of the bacteria movement. The proposed algorithms are tested with several unimodal and multimodal benchmark functions in comparison with the original BFA. Moreover, the application of the proposed algorithms to the modelling of a twin rotor system is presented. The results show that the proposed algorithms outperform the predecessor algorithm in all test functions and acquire a better model for the twin rotor system.

  7. A comparison of thermal algorithms of fuel rod performance code systems

    International Nuclear Information System (INIS)

    Park, C. J.; Park, J. H.; Kang, K. H.; Ryu, H. J.; Moon, J. S.; Jeong, I. H.; Lee, C. Y.; Song, K. C.

    2003-11-01

    The goal of fuel rod performance analysis is to identify the robustness of a fuel rod with its cladding material. Computer simulation of fuel rod performance has become an important part of designing and evaluating new nuclear fuels and claddings. To construct a computing code system for fuel rod performance, several algorithms of the existing fuel rod performance code systems are compared and summarized as preliminary work. Among several code systems, FRAPCON and FEMAXI for LWRs, ELESTRES for CANDU reactors, and LIFE for fast reactors are reviewed. The thermal algorithms of the above codes are investigated, including their methodologies and subroutines. This work will be utilized to construct a computing code system for dry process fuel rod performance

  8. A comparison of thermal algorithms of fuel rod performance code systems

    Energy Technology Data Exchange (ETDEWEB)

    Park, C. J.; Park, J. H.; Kang, K. H.; Ryu, H. J.; Moon, J. S.; Jeong, I. H.; Lee, C. Y.; Song, K. C

    2003-11-01

    The goal of fuel rod performance analysis is to identify the robustness of a fuel rod with its cladding material. Computer simulation of fuel rod performance has become an important part of designing and evaluating new nuclear fuels and claddings. To construct a computing code system for fuel rod performance, several algorithms of the existing fuel rod performance code systems are compared and summarized as preliminary work. Among several code systems, FRAPCON and FEMAXI for LWRs, ELESTRES for CANDU reactors, and LIFE for fast reactors are reviewed. The thermal algorithms of the above codes are investigated, including their methodologies and subroutines. This work will be utilized to construct a computing code system for dry process fuel rod performance.

  9. Comparison of switching control algorithms effective in restricting the switching in the neighborhood of the origin

    International Nuclear Information System (INIS)

    Joung, JinWook; Chung, Lan; Smyth, Andrew W

    2010-01-01

    The active interaction control (AIC) system, consisting of a primary structure, an auxiliary structure and an interaction element, was proposed to protect the primary structure against earthquakes and winds. The objective of the AIC system in reducing the responses of the primary structure is fulfilled by activating or deactivating the switching between the engagement and the disengagement of the primary and auxiliary structures through the interaction element. The status of the interaction element is controlled by switching control algorithms. The previously developed switching control algorithms require an excessive amount of switching, which is inefficient. In this paper, the excessive amount of switching is restricted by imposing an appropriately designed switching boundary region, where switching is prohibited, on pre-designed engagement–disengagement conditions. Two different approaches are used in designing the newly proposed AID-off and AID-off 2 algorithms. The AID-off 2 algorithm is designed to affect the deactivated switching regions explicitly, unlike the AID-off algorithm, which follows the same procedure for designing the engagement–disengagement conditions as the previously developed algorithms, by using the current status of the AIC system. Both algorithms are shown to be effective in reducing the number of switching events triggered by the previously developed AID algorithm under an appropriately selected control sampling period for different earthquakes, but the AID-off 2 algorithm outperforms the AID-off algorithm in reducing the number of switching events

  10. GPU-based fast pencil beam algorithm for proton therapy

    International Nuclear Information System (INIS)

    Fujimoto, Rintaro; Nagamine, Yoshihiko; Kurihara, Tsuneya

    2011-01-01

    Performance of a treatment planning system is an essential factor in making sophisticated plans. The dose calculation is a major time-consuming process in planning operations. The standard algorithm for proton dose calculations is the pencil beam algorithm which produces relatively accurate results, but is time consuming. In order to shorten the computational time, we have developed a GPU (graphics processing unit)-based pencil beam algorithm. We have implemented this algorithm and calculated dose distributions in the case of a water phantom. The results were compared to those obtained by a traditional method with respect to the computational time and discrepancy between the two methods. The new algorithm shows 5-20 times faster performance using the NVIDIA GeForce GTX 480 card in comparison with the Intel Core-i7 920 processor. The maximum discrepancy of the dose distribution is within 0.2%. Our results show that GPUs are effective for proton dose calculations.

  11. Curve fitting using a genetic algorithm for the X-ray fluorescence measurement of lead in bone

    International Nuclear Information System (INIS)

    Luo, L.; McMaster University, Hamilton; Chettle, D.R.; Nie, H.; McNeill, F.E.; Popovic, M.

    2006-01-01

    We investigated the potential application of the genetic algorithm in the analysis of X-ray fluorescence spectra from measurements of lead in bone. Candidate solutions are first designed based on field knowledge, and the whole cycle of evaluation, selection, crossover and mutation is then repeated until a given convergence criterion is met. An average-parameters-based genetic algorithm is suggested to improve the fitting precision and accuracy. The relative standard deviations (RSD%) of the fitted amplitude, peak position and width are 1.3-7.1, 0.009-0.14 and 1.4-3.3, respectively. The genetic algorithm was shown to give a good resolution and fitting of the K lines of Pb and the γ elastic peaks. (author)

  12. Workflow Scheduling Using Hybrid GA-PSO Algorithm in Cloud Computing

    Directory of Open Access Journals (Sweden)

    Ahmad M. Manasrah

    2018-01-01

    Full Text Available Cloud computing environments provide several on-demand services and resource sharing for clients. Business processes are managed using workflow technology over the cloud, and using the resources efficiently is a challenge because of the dependencies between the tasks. In this paper, a Hybrid GA-PSO algorithm is proposed to allocate tasks to the resources efficiently. The Hybrid GA-PSO algorithm aims to reduce the makespan and the cost and to balance the load of the dependent tasks over the heterogeneous resources in cloud computing environments. The experimental results show that the GA-PSO algorithm decreases the total execution time of the workflow tasks in comparison with the GA, PSO, HSGA, WSGA, and MTCT algorithms. Furthermore, it reduces the execution cost. In addition, it improves the load balancing of the workflow application over the available resources. Finally, the obtained results also show that the proposed algorithm converges to optimal solutions faster and with higher quality compared to the other algorithms.
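
    The sketch below illustrates only the fitness side of such a scheduler: scoring one candidate task-to-VM assignment by a weighted sum of makespan, monetary cost and load imbalance. The instance data, the weights and the omission of task dependencies are all assumptions, not details from the paper.

```python
import numpy as np
from itertools import product

# hypothetical workflow: task lengths (million instructions) and VM speeds/prices
task_length = np.array([800.0, 1200.0, 400.0, 950.0, 600.0, 1500.0])   # MI
vm_mips     = np.array([500.0, 1000.0, 2000.0])                        # MIPS
vm_price    = np.array([0.05, 0.12, 0.30])                             # $ per second

def evaluate(assignment, w=(0.4, 0.3, 0.3)):
    """Fitness of one candidate assignment (task i -> VM assignment[i]); lower is better."""
    load = np.zeros(len(vm_mips))
    cost = 0.0
    for t, vm in enumerate(assignment):
        runtime = task_length[t] / vm_mips[vm]
        load[vm] += runtime
        cost += runtime * vm_price[vm]
    makespan = load.max()
    imbalance = load.std()
    return w[0] * makespan + w[1] * cost + w[2] * imbalance

# exhaustive check on this tiny instance (3^6 candidates) just to show the evaluation in use;
# a GA-PSO hybrid would search this space instead of enumerating it
best = min(product(range(len(vm_mips)), repeat=len(task_length)), key=evaluate)
print("best assignment:", best, "fitness:", round(evaluate(best), 3))
```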

  13. Fast treatment plan modification with an over-relaxed Cimmino algorithm

    International Nuclear Information System (INIS)

    Wu Chuan; Jeraj, Robert; Lu Weiguo; Mackie, Thomas R.

    2004-01-01

    A method to quickly modify a treatment plan in adaptive radiotherapy was proposed and studied. The method is based on a Cimmino-type algorithm from linear programming. The fast convergence speed is achieved by over-relaxing the algorithm relaxation parameter from its sufficient convergence range of (0, 2) to (0, ∞). The algorithm parameters are selected so that the over-relaxed Cimmino (ORC) algorithm can effectively approximate an unconstrained re-optimization process in adaptive radiotherapy. To demonstrate the effectiveness and flexibility of the proposed method in adaptive radiotherapy, two scenarios with different organ motion/deformation of one nasopharyngeal case were presented, with comparisons made between this method and the re-optimization method. In both scenarios, the ORC-modified treatment plans have dose distributions that are similar to those given by the re-optimized treatment plans. Using the ORC algorithm, a treatment plan modification is completed at least three times faster than with the re-optimization procedure.
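
    A minimal sketch of a Cimmino-type iteration with a tunable relaxation parameter is shown below, applied to a toy linear system rather than a dose-constraint formulation; the over-relaxed value is included only to illustrate the speed-up idea, and the matrices are random stand-ins.

```python
import numpy as np

def cimmino(A, b, lam=1.0, iters=500, x0=None):
    """Cimmino-type iteration for A x ~= b.

    Each row of A defines a hyperplane; the update averages the projections of the
    current iterate onto all hyperplanes and moves by a relaxation factor lam.
    Sufficient convergence holds for lam in (0, 2); larger values are the over-relaxed regime.
    """
    m, n = A.shape
    x = np.zeros(n) if x0 is None else x0.copy()
    row_norm2 = np.sum(A * A, axis=1)
    for _ in range(iters):
        residual = b - A @ x
        x = x + lam * (A.T @ (residual / row_norm2)) / m   # average of per-row projection steps
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 10))
b = A @ rng.standard_normal(10)     # consistent system

for lam in (1.0, 1.9, 4.0):         # the last value lies outside (0, 2), i.e. over-relaxed
    x = cimmino(A, b, lam=lam)
    print(f"lam={lam:>3}: residual norm = {np.linalg.norm(A @ x - b):.3e}")
```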

  14. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities

    International Nuclear Information System (INIS)

    Carrasco, P.; Jornet, N.; Duch, M. A.; Panettieri, V.; Weber, L.; Eudaldo, T.; Ginjaume, M.; Ribas, M.

    2007-01-01

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10x10, 5x5, and 2x2 cm²) were studied in two phantom configurations and a bone equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2x2 cm² field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV, where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity by 3%-5%. The rebuildup at the bone-water interface and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values within

  15. Comparison of dose calculation algorithms in slab phantoms with cortical bone equivalent heterogeneities.

    Science.gov (United States)

    Carrasco, P; Jornet, N; Duch, M A; Panettieri, V; Weber, L; Eudaldo, T; Ginjaume, M; Ribas, M

    2007-08-01

    To evaluate the dose values predicted by several calculation algorithms in two treatment planning systems, Monte Carlo (MC) simulations and measurements by means of various detectors were performed in heterogeneous layer phantoms with water- and bone-equivalent materials. Percentage depth doses (PDDs) were measured with thermoluminescent dosimeters (TLDs), metal-oxide semiconductor field-effect transistors (MOSFETs), plane parallel and cylindrical ionization chambers, and beam profiles with films. The MC code used for the simulations was the PENELOPE code. Three different field sizes (10 x 10, 5 x 5, and 2 x 2 cm2) were studied in two phantom configurations and a bone equivalent material. These two phantom configurations contained heterogeneities of 5 and 2 cm of bone, respectively. We analyzed the performance of four correction-based algorithms and one based on convolution superposition. The correction-based algorithms were the Batho, the Modified Batho, the Equivalent TAR implemented in the Cadplan (Varian) treatment planning system (TPS), and the Helax-TMS Pencil Beam from the Helax-TMS (Nucletron) TPS. The convolution-superposition algorithm was the Collapsed Cone implemented in the Helax-TMS. All the correction-based calculation algorithms underestimated the dose inside the bone-equivalent material for 18 MV compared to MC simulations. The maximum underestimation, in terms of root-mean-square (RMS), was about 15% for the Helax-TMS Pencil Beam (Helax-TMS PB) for a 2 x 2 cm2 field inside the bone-equivalent material. In contrast, the Collapsed Cone algorithm yielded values around 3%. A more complex behavior was found for 6 MV, where the Collapsed Cone performed less well, overestimating the dose inside the heterogeneity by 3%-5%. The rebuildup at the bone-water interface and the penumbra shrinking in high-density media were not predicted by any of the calculation algorithms except the Collapsed Cone, and only the MC simulations matched the experimental values

  16. A synthesis/design optimization algorithm for Rankine cycle based energy systems

    International Nuclear Information System (INIS)

    Toffolo, Andrea

    2014-01-01

    The algorithm presented in this work has been developed to search for the optimal topology and design parameters of a set of Rankine cycles forming an energy system that absorbs/releases heat at different temperature levels and converts part of the absorbed heat into electricity. This algorithm can deal with several applications in the field of energy engineering: e.g., steam cycles or bottoming cycles in combined/cogenerative plants, steam networks, low temperature organic Rankine cycles. The main purpose of this algorithm is to overcome the limitations of the search space introduced by the traditional mixed-integer programming techniques, which assume that possible solutions are derived from a single superstructure embedding them all. The algorithm presented in this work is a hybrid evolutionary/traditional optimization algorithm organized in two levels. A complex original codification of the topology and the intensive design parameters of the system is managed by the upper level evolutionary algorithm according to the criteria set by the HEATSEP method, which are used for the first time to automatically synthesize a “basic” system configuration from a set of elementary thermodynamic cycles. The lower SQP (sequential quadratic programming) algorithm optimizes the objective function(s) with respect to cycle mass flow rates only, taking into account the heat transfer feasibility constraint within the undefined heat transfer section. A challenging example of application is also presented to show the capabilities of the algorithm. - Highlights: • Energy systems based on Rankine cycles are used in many applications. • A hybrid algorithm is proposed to optimize the synthesis/design of such systems. • The topology of the candidate solutions is not limited by a superstructure. • Topology is managed by the genetic operators of the upper level algorithm. • The effectiveness of the algorithm is proved in a complex test case
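
    The sketch below illustrates only the lower level of such a two-level scheme: for one fixed, hypothetical cycle configuration, SciPy's SLSQP optimizes the cycle mass flow rates under a heat-availability constraint. The numbers and the linear cycle model are assumptions, and the upper-level evolutionary search over topology is not reproduced.

```python
import numpy as np
from scipy.optimize import minimize

# hypothetical two-cycle configuration: specific work and specific heat demand per unit mass flow
w_net   = np.array([450.0, 180.0])    # kJ/kg produced by each elementary cycle
q_in    = np.array([2800.0, 1400.0])  # kJ/kg absorbed by each cycle from the hot source
Q_avail = 9000.0                      # kW available from the heat source

def neg_power(mdot):
    return -float(w_net @ mdot)                 # maximize net power -> minimize its negative

def heat_feasibility(mdot):
    return Q_avail - float(q_in @ mdot)         # >= 0: cycles cannot absorb more heat than available

res = minimize(
    neg_power,
    x0=np.array([1.0, 1.0]),                    # initial mass flow rates, kg/s
    method="SLSQP",
    bounds=[(0.0, None), (0.0, None)],
    constraints=[{"type": "ineq", "fun": heat_feasibility}],
)
print("optimal mass flow rates [kg/s]:", np.round(res.x, 3), "net power [kW]:", round(-res.fun, 1))
```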

  17. Evaluating and comparing algorithms for respiratory motion prediction

    International Nuclear Information System (INIS)

    Ernst, F; Dürichen, R; Schlaefer, A; Schweikard, A

    2013-01-01

    In robotic radiosurgery, it is necessary to compensate for systematic latencies arising from target tracking and mechanical constraints. This compensation is usually achieved by means of an algorithm which computes the future target position. In most scientific works on respiratory motion prediction, only one or two algorithms are evaluated on a limited amount of very short motion traces. The purpose of this work is to gain more insight into the real world capabilities of respiratory motion prediction methods by evaluating many algorithms on an unprecedented amount of data. We have evaluated six algorithms, the normalized least mean squares (nLMS), recursive least squares (RLS), multi-step linear methods (MULIN), wavelet-based multiscale autoregression (wLMS), extended Kalman filtering, and ε-support vector regression (SVRpred) methods, on an extensive database of 304 respiratory motion traces. The traces were collected during treatment with the CyberKnife (Accuray, Inc., Sunnyvale, CA, USA) and feature an average length of 71 min. Evaluation was done using a graphical prediction toolkit, which is available to the general public, as is the data we used. The experiments show that the nLMS algorithm—which is one of the algorithms currently used in the CyberKnife—is outperformed by all other methods. This is especially true in the case of the wLMS, the SVRpred, and the MULIN algorithms, which perform much better. The nLMS algorithm produces a relative root mean square (RMS) error of 75% or less (i.e., a reduction in error of 25% or more when compared to not doing prediction) in only 38% of the test cases, whereas the MULIN and SVRpred methods reach this level in more than 77%, the wLMS algorithm in more than 84% of the test cases. Our work shows that the wLMS algorithm is the most accurate algorithm and does not require parameter tuning, making it an ideal candidate for clinical implementation. Additionally, we have seen that the structure of a patient
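
    A minimal sketch of the nLMS baseline predictor on a synthetic breathing-like trace is shown below, together with the relative RMS metric used in the study (prediction error relative to the error of simply using the latency-delayed signal). The tap count, step size and signal are assumptions.

```python
import numpy as np

def nlms_predict(signal, horizon=5, taps=16, mu=0.5, eps=1e-6):
    """Normalized LMS predictor that estimates the sample `horizon` steps ahead."""
    w = np.zeros(taps)
    pred = np.zeros_like(signal)
    for k in range(taps, len(signal) - horizon):
        x = signal[k - taps:k][::-1]            # most recent samples first
        pred[k + horizon] = w @ x               # prediction made at time k for time k+horizon
        e = signal[k + horizon] - pred[k + horizon]
        w += mu * e * x / (eps + x @ x)         # normalized LMS weight update
    return pred

t = np.arange(0.0, 120.0, 0.1)                  # 2 minutes sampled at 10 Hz
resp = np.sin(2 * np.pi * 0.25 * t) + 0.05 * np.random.default_rng(3).standard_normal(t.size)
pred = nlms_predict(resp)

valid = slice(200, len(t) - 10)
rms_pred = np.sqrt(np.mean((resp[valid] - pred[valid]) ** 2))
rms_none = np.sqrt(np.mean((resp[valid] - np.roll(resp, 5)[valid]) ** 2))   # "no prediction": delayed signal
print(f"relative RMS error: {100 * rms_pred / rms_none:.1f}%")
```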

  18. Performance tests of the Kramers equation and boson algorithms for simulations of QCD

    International Nuclear Information System (INIS)

    Jansen, K.; Liu Chuan; Jegerlehner, B.

    1995-12-01

    We present a performance comparison of the Kramers equation and the boson algorithms for simulations of QCD with two flavors of dynamical Wilson fermions and gauge group SU(2). Results are obtained on 6³×12, 8³×12 and 16⁴ lattices. In both algorithms a number of optimizations are implemented. (orig.)

  19. Benchmark Framework for Mobile Robots Navigation Algorithms

    Directory of Open Access Journals (Sweden)

    Nelson David Muñoz-Ceballos

    2014-01-01

    Full Text Available Despite the wide variety of studies and research on mobile robot systems, performance metrics are not often examined. This makes it difficult to establish an objective comparison of achievements. In this paper, the navigation of an autonomous mobile robot is evaluated. Several metrics are described. Collectively, these metrics provide an indication of navigation quality, useful for comparing and analyzing navigation algorithms of mobile robots. This method is suggested as an educational tool that allows the student to optimize the algorithms' quality, relating to important aspects of science, technology and engineering teaching, such as energy consumption, optimization and design.

  20. A novel heuristic algorithm for capacitated vehicle routing problem

    Science.gov (United States)

    Kır, Sena; Yazgan, Harun Reşit; Tüncel, Emre

    2017-09-01

    The vehicle routing problem with capacity constraints was considered in this paper. It is difficult to achieve an optimal solution with traditional optimization methods because of the high computational complexity of large-scale problems. Consequently, new heuristic or metaheuristic approaches have been developed to solve this problem. In this paper, we constructed a new heuristic algorithm based on tabu search and adaptive large neighborhood search (ALNS) with several specifically designed operators and features to solve the capacitated vehicle routing problem (CVRP). The effectiveness of the proposed algorithm was illustrated on benchmark problems. The algorithm performs better on large-scale instances and has an advantage in terms of CPU time. In addition, we solved a real-life CVRP using the proposed algorithm and found encouraging results in comparison with the company's current practice.
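
    A minimal capacity-aware nearest-neighbour construction, the kind of starting solution a tabu/ALNS metaheuristic would then improve, is sketched below on a made-up instance; it is not the paper's algorithm.

```python
import numpy as np

rng = np.random.default_rng(4)
n_customers, capacity = 12, 50.0
coords = rng.uniform(0.0, 100.0, size=(n_customers + 1, 2))   # index 0 is the depot
demand = np.concatenate([[0.0], rng.uniform(5.0, 20.0, n_customers)])
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=2)

def nearest_neighbour_cvrp():
    """Greedy construction: keep visiting the nearest unserved customer that still fits the vehicle."""
    unserved = set(range(1, n_customers + 1))
    routes = []
    while unserved:
        route, load, current = [], 0.0, 0
        while True:
            feasible = [c for c in unserved if load + demand[c] <= capacity]
            if not feasible:
                break
            nxt = min(feasible, key=lambda c: dist[current, c])
            route.append(nxt)
            load += demand[nxt]
            unserved.remove(nxt)
            current = nxt
        routes.append(route)
    return routes

def total_cost(routes):
    cost = 0.0
    for r in routes:
        tour = [0] + r + [0]
        cost += sum(dist[a, b] for a, b in zip(tour, tour[1:]))
    return cost

routes = nearest_neighbour_cvrp()
print("routes:", routes)
print("total distance:", round(total_cost(routes), 1))
```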

  1. Comparison of a constraint directed search to a genetic algorithm in a scheduling application

    International Nuclear Information System (INIS)

    Abbott, L.

    1993-01-01

    Scheduling plutonium containers for blending is a time-intensive operation. Several constraints must be taken into account, including the number of containers in a dissolver run, the size of each dissolver run, and the size and target purity of the blended mixture formed from these runs. Two types of algorithms have been used to solve this problem: a constraint directed search and a genetic algorithm. This paper discusses the implementation of these two different approaches to the problem and the strengths and weaknesses of each algorithm.

  2. Iterative algorithm for joint zero diagonalization with application in blind source separation.

    Science.gov (United States)

    Zhang, Wei-Tao; Lou, Shun-Tian

    2011-07-01

    A new iterative algorithm for the nonunitary joint zero diagonalization of a set of matrices is proposed for blind source separation applications. On one hand, since the zero diagonalizer of the proposed algorithm is constructed iteratively by successive multiplications of an invertible matrix, the singular solutions that occur in the existing nonunitary iterative algorithms are naturally avoided. On the other hand, compared to the algebraic method for joint zero diagonalization, the proposed algorithm requires fewer matrices to be zero diagonalized to yield even better performance. The extension of the algorithm to the complex and nonsquare mixing cases is also addressed. Numerical simulations on both synthetic data and blind source separation using time-frequency distributions illustrate the performance of the algorithm and provide a comparison to the leading joint zero diagonalization schemes.

  3. Comparison of some classification algorithms based on deterministic and nondeterministic decision rules

    KAUST Repository

    Delimata, Paweł

    2010-01-01

    We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. Rules of the first kind, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). Contrary to this, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used for improving the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.

  4. A comparison of algorithms for inference and learning in probabilistic graphical models.

    Science.gov (United States)

    Frey, Brendan J; Jojic, Nebojsa

    2005-09-01

    Research into methods for reasoning under uncertainty is currently one of the most exciting areas of artificial intelligence, largely because it has recently become possible to record, store, and process large amounts of data. While impressive achievements have been made in pattern classification problems such as handwritten character recognition, face detection, speaker identification, and prediction of gene function, it is even more exciting that researchers are on the verge of introducing systems that can perform large-scale combinatorial analyses of data, decomposing the data into interacting components. For example, computational methods for automatic scene analysis are now emerging in the computer vision community. These methods decompose an input image into its constituent objects, lighting conditions, motion patterns, etc. Two of the main challenges are finding effective representations and models in specific applications and finding efficient algorithms for inference and learning in these models. In this paper, we advocate the use of graph-based probability models and their associated inference and learning algorithms. We review exact techniques and various approximate, computationally efficient techniques, including iterated conditional modes, the expectation maximization (EM) algorithm, Gibbs sampling, the mean field method, variational techniques, structured variational techniques and the sum-product algorithm ("loopy" belief propagation). We describe how each technique can be applied in a vision model of multiple, occluding objects and contrast the behaviors and performances of the techniques using a unifying cost function, free energy.

  5. CAS algorithm-based optimum design of PID controller in AVR system

    International Nuclear Information System (INIS)

    Zhu Hui; Li Lixiang; Zhao Ying; Guo Yu; Yang Yixian

    2009-01-01

    This paper presents a novel design method for determining the optimal PID controller parameters of an automatic voltage regulator (AVR) system using the chaotic ant swarm (CAS) algorithm. In the tuning process of parameters, the CAS algorithm is iterated to give the optimal parameters of the PID controller based on the fitness theory, where the position vector of each ant in the CAS algorithm corresponds to the parameter vector of the PID controller. The proposed CAS-PID controllers can ensure better control system performance with respect to the reference input in comparison with GA-PID controllers. Numerical simulations are provided to verify the effectiveness and feasibility of PID controller based on CAS algorithm.
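
    The sketch below shows only the fitness evaluation such a tuner needs: simulating a PID loop around a simple first-order plant and scoring the step response. The plant, the gain bounds, the integral-of-absolute-error criterion and the random search standing in for the chaotic ant swarm are all assumptions, not the paper's setup.

```python
import numpy as np

def step_response_cost(kp, ki, kd, dt=0.01, t_end=5.0, tau=0.5, gain=1.0):
    """Integral of absolute error of a PID loop around a first-order plant (toy stand-in for an AVR)."""
    n = int(t_end / dt)
    y, integ, prev_err, cost = 0.0, 0.0, 1.0, 0.0
    for _ in range(n):
        err = 1.0 - y                               # unit step reference
        integ += err * dt
        deriv = (err - prev_err) / dt
        u = kp * err + ki * integ + kd * deriv      # PID control law
        y += dt * (-y + gain * u) / tau             # first-order plant: tau*dy/dt = -y + gain*u
        prev_err = err
        cost += abs(err) * dt
        if not np.isfinite(y) or abs(y) > 1e6:      # guard against unstable gain combinations
            return 1e9
    return cost

# crude random search over the gains stands in for the chaotic-ant-swarm search of the paper
rng = np.random.default_rng(5)
best = min(
    (tuple(rng.uniform([0.1, 0.0, 0.0], [5.0, 5.0, 0.5])) for _ in range(500)),
    key=lambda g: step_response_cost(*g),
)
print("best (Kp, Ki, Kd):", tuple(round(float(v), 3) for v in best),
      "IAE:", round(step_response_cost(*best), 4))
```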

  6. Implementation of software-based sensor linearization algorithms on low-cost microcontrollers.

    Science.gov (United States)

    Erdem, Hamit

    2010-10-01

    Nonlinear sensors and microcontrollers are used in many embedded system designs. As the input-output characteristic of most sensors is nonlinear in nature, obtaining data from a nonlinear sensor by using an integer microcontroller has always been a design challenge. This paper discusses the implementation of six software-based sensor linearization algorithms for low-cost microcontrollers. The comparative study of the linearization algorithms is performed by using a nonlinear optical distance-measuring sensor. The performance of the algorithms is examined with respect to memory space usage, linearization accuracy and algorithm execution time. The implementation and comparison results can be used for selection of a linearization algorithm based on the sensor transfer function, expected linearization accuracy and microcontroller capacity. Copyright © 2010 ISA. Published by Elsevier Ltd. All rights reserved.
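
    Two of the common software linearization strategies, a breakpoint lookup table with linear interpolation and an inverse polynomial fit, are sketched below on a made-up sensor transfer function; the curve, the number of calibration points and the resulting error figures are illustrative assumptions, not the paper's measurements.

```python
import numpy as np

# made-up nonlinear transfer function of an optical distance sensor: output voltage vs distance (cm)
def sensor_voltage(distance_cm):
    return 2.7 / (distance_cm / 10.0 + 0.42)

distances = np.linspace(10.0, 80.0, 500)
adc = sensor_voltage(distances)

# method 1: breakpoint lookup table with linear interpolation (cheap on a small integer MCU)
lut_d = np.linspace(10.0, 80.0, 9)                # 9 calibration points
lut_v = sensor_voltage(lut_d)
order = np.argsort(lut_v)                         # np.interp needs ascending x values
lut_estimate = np.interp(adc, lut_v[order], lut_d[order])

# method 2: polynomial fit of distance as a function of voltage
coeffs = np.polyfit(lut_v, lut_d, deg=4)
poly_estimate = np.polyval(coeffs, adc)

for name, est in (("LUT + interpolation", lut_estimate), ("4th-order polynomial", poly_estimate)):
    err = est - distances
    print(f"{name:22s} max error = {np.abs(err).max():.3f} cm, RMS = {np.sqrt(np.mean(err**2)):.3f} cm")
```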

  7. A Research on the Comparison of the Multiple Intelligence Types of the Candidates Who Succeeded and Failed in the Entrance Exams of Physical Education and Sports School

    Directory of Open Access Journals (Sweden)

    Murat KUL

    2014-07-01

    Full Text Available The purpose of the study is to compare the multiple intelligence profiles of candidates who took the special aptitude test of a Physical Education and Sports School and were eligible to register with those of candidates who were not. A survey (scan) model was used in the research. Of the 785 candidates who applied to the Bartin University School of Physical Education and Sports special ability test for the 2013-2014 academic year, 536 volunteer candidates with an average age of 21.15 ± 2.66 years took part. As data collection tools, a personal information form and the "Multiple Intelligences Inventory", developed by Özden (2003) for the identification of multiple intelligences, were applied; the reliability coefficient was found to be .96. Data were evaluated with the SPSS data analysis program using frequency, mean and standard deviation from the descriptive statistical techniques and, taking the normal distribution of the data into account, the Independent Sample T-test. According to the findings, a statistically significant difference was found in the "Bodily-Kinesthetic Intelligence" area of the Multiple Intelligences: candidates who passed scored higher than average compared with candidates who did not. Statistically significant differences were also observed in the "Social-Interpersonal Intelligence" levels between candidates eligible to register and those who were not, the successful candidates showing the more dominant features in this area. Results for the "Verbal-Linguistic Intelligence", "Logical-Mathematical Intelligence", "Musical-Rhythmic Intelligence", "Bodily-Kinesthetic Intelligence" and "Social-Interpersonal Intelligence" areas of the Multiple Intelligence Areas of candidates who participated in Physical Education

  8. An Extremely Red and Two Other Nearby L Dwarf Candidates Previously Overlooked in 2MASS, WISE, and Other Surveys

    Science.gov (United States)

    Scholz, Ralf-Dieter; Bell, Cameron P. M.

    2018-02-01

    We present three new nearby L dwarf candidates, found in a continued combined color/proper motion search using WISE, 2MASS, and other survey data, where we included extended WISE sources and looked closer to the Galactic plane region. Their spectral types and distances were estimated from photometric comparisons to well-known L dwarfs with trigonometric parallaxes. The first object, 2MASS J07555430-3259589, is an extremely red L7.5p dwarf candidate at a photometric distance of about 16 pc. Its position, proper motion and distance are consistent with membership in the Carina-Near young moving group. The second one, 2MASS J07414279-0506464, is resolved in Gaia DR1 as a close binary (separation 0.3 arcsec), and we classify it as an equal-mass binary candidate consisting of two L5 dwarfs at 19 pc. Our nearest new neighbor, 2MASS J19251275+0700362, is an L7 dwarf candidate at 10 pc.

  9. To develop a universal gamut mapping algorithm

    International Nuclear Information System (INIS)

    Morovic, J.

    1998-10-01

    using various algorithms was also looked at, and a strong and positive correlation was found between these two properties. It was also shown that the reproductions made with GCUSP were pleasant in isolation, which makes it a very good candidate for a standard universal gamut mapping algorithm. (author)

  10. Updating the CHAOS series of field models using Swarm data and resulting candidate models for IGRF-12

    DEFF Research Database (Denmark)

    Finlay, Chris; Olsen, Nils; Tøffner-Clausen, Lars

    th order spline representation with knot points spaced at 0.5 year intervals. The resulting field model is able to consistently fit data from six independent low Earth orbit satellites: Oersted, CHAMP, SAC-C and the three Swarm satellites. As an example, we present comparisons of the excellent model...... therefore conclude that Swarm data is suitable for building high-resolution models of the large-scale internal field, and proceed to extract IGRF-12 candidate models for the main field in epochs 2010 and 2015, as well as the predicted linear secular variarion for 2015-2020. The properties of these IGRF...... candidate models are briefly presented....

  11. An Improved Direction Finding Algorithm Based on Toeplitz Approximation

    Directory of Open Access Journals (Sweden)

    Qing Wang

    2013-01-01

    Full Text Available In this paper, a novel direction of arrival (DOA) estimation algorithm called the Toeplitz fourth order cumulants multiple signal classification (TFOC-MUSIC) algorithm is proposed by combining a fast MUSIC-like algorithm, termed the modified fourth order cumulants MUSIC (MFOC-MUSIC) algorithm, with Toeplitz approximation. In the proposed algorithm, the redundant information in the cumulants is removed. Besides, the computational complexity is reduced due to the decreased dimension of the fourth-order cumulants matrix, which is equal to the number of the virtual array elements. That is, the effective array aperture of a physical array remains unchanged. However, due to finite sampling snapshots, there exists an estimation error in the reduced-rank FOC matrix and thus the capacity of DOA estimation degrades. In order to improve the estimation performance, Toeplitz approximation is introduced to recover the Toeplitz structure of the reduced-dimension FOC matrix, just like the ideal matrix whose Toeplitz structure yields optimal estimation results. The theoretical formulas of the proposed algorithm are derived, and simulation results are presented. From the simulations, in comparison with the MFOC-MUSIC algorithm, it is concluded that the TFOC-MUSIC algorithm yields excellent performance in both spatially white and spatially colored noise environments.

  12. Comparison of four machine learning algorithms for their applicability in satellite-based optical rainfall retrievals

    Science.gov (United States)

    Meyer, Hanna; Kühnlein, Meike; Appelhans, Tim; Nauss, Thomas

    2016-03-01

    Machine learning (ML) algorithms have successfully been demonstrated to be valuable tools in satellite-based rainfall retrievals which show the practicability of using ML algorithms when faced with high dimensional and complex data. Moreover, recent developments in parallel computing with ML present new possibilities for training and prediction speed and therefore make their usage in real-time systems feasible. This study compares four ML algorithms - random forests (RF), neural networks (NNET), averaged neural networks (AVNNET) and support vector machines (SVM) - for rainfall area detection and rainfall rate assignment using MSG SEVIRI data over Germany. Satellite-based proxies for cloud top height, cloud top temperature, cloud phase and cloud water path serve as predictor variables. The results indicate an overestimation of rainfall area delineation regardless of the ML algorithm (averaged bias = 1.8) but a high probability of detection ranging from 81% (SVM) to 85% (NNET). On a 24-hour basis, the performance of the rainfall rate assignment yielded R2 values between 0.39 (SVM) and 0.44 (AVNNET). Though the differences in the algorithms' performance were rather small, NNET and AVNNET were identified as the most suitable algorithms. On average, they demonstrated the best performance in rainfall area delineation as well as in rainfall rate assignment. NNET's computational speed is an additional advantage in work with large datasets such as in remote sensing based rainfall retrievals. However, since no single algorithm performed considerably better than the others we conclude that further research in providing suitable predictors for rainfall is of greater necessity than an optimization through the choice of the ML algorithm.
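
    A minimal sketch of this kind of algorithm comparison with scikit-learn is shown below. The synthetic features stand in for the SEVIRI-derived cloud-property predictors, the averaged neural network (AVNNET) is omitted, and ROC AUC is used here simply as an illustrative score rather than the verification statistics reported in the study.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# synthetic stand-in for cloud-property predictors (cloud top height/temperature, phase, water path, ...)
X, y = make_classification(n_samples=2000, n_features=8, n_informative=5, random_state=0)

models = {
    "RF":   RandomForestClassifier(n_estimators=200, random_state=0),
    "NNET": make_pipeline(StandardScaler(),
                          MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
    "SVM":  make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0)),
}

for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5, scoring="roc_auc")
    print(f"{name:5s} mean ROC AUC = {scores.mean():.3f} +/- {scores.std():.3f}")
```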

  13. Comparison of different reconstruction algorithms for three-dimensional ultrasound imaging in a neurosurgical setting.

    Science.gov (United States)

    Miller, D; Lippert, C; Vollmer, F; Bozinov, O; Benes, L; Schulte, D M; Sure, U

    2012-09-01

    Freehand three-dimensional ultrasound imaging (3D-US) is increasingly used in image-guided surgery. During image acquisition, a set of B-scans is acquired that is distributed in a non-parallel manner over the area of interest. Reconstructing these images into a regular array allows 3D visualization. However, the reconstruction process may introduce artefacts and may therefore reduce image quality. The aim of the study is to compare different algorithms with respect to image quality and diagnostic value for image guidance in neurosurgery. 3D-US data sets were acquired during surgery of various intracerebral lesions using an integrated ultrasound-navigation device. They were stored for post-hoc evaluation. Five different reconstruction algorithms, a standard multiplanar reconstruction with interpolation (MPR), a pixel nearest neighbour method (PNN), a voxel nearest neighbour method (VNN) and two voxel based distance-weighted algorithms (VNN2 and DW) were tested with respect to image quality and artefact formation. The capability of the algorithm to fill gaps within the sample volume was investigated and a clinical evaluation with respect to the diagnostic value of the reconstructed images was performed. MPR was significantly worse than the other algorithms in filling gaps. In an image subtraction test, VNN2 and DW reliably reconstructed images even if large amounts of data were missing. However, the quality of the reconstruction improved, if data acquisition was performed in a structured manner. When evaluating the diagnostic value of reconstructed axial, sagittal and coronal views, VNN2 and DW were judged to be significantly better than MPR and VNN. VNN2 and DW could be identified as robust algorithms that generate reconstructed US images with a high diagnostic value. These algorithms improve the utility and reliability of 3D-US imaging during intraoperative navigation. Copyright © 2012 John Wiley & Sons, Ltd.

  14. An efficient biological pathway layout algorithm combining grid-layout and spring embedder for complicated cellular location information.

    Science.gov (United States)

    Kojima, Kaname; Nagasaki, Masao; Miyano, Satoru

    2010-06-18

    Graph drawing is one of the important techniques for understanding biological regulations in a cell or among cells at the pathway level. Among many available layout algorithms, the spring embedder algorithm is widely used not only for pathway drawing but also for circuit placement and www visualization and so on because of the harmonized appearance of its results. For pathway drawing, location information is essential for its comprehension. However, complex shapes need to be taken into account when torus-shaped location information such as nuclear inner membrane, nuclear outer membrane, and plasma membrane is considered. Unfortunately, the spring embedder algorithm cannot easily handle such information. In addition, crossings between edges and nodes are usually not considered explicitly. We proposed a new grid-layout algorithm based on the spring embedder algorithm that can handle location information and provide layouts with harmonized appearance. In grid-layout algorithms, the mapping of nodes to grid points that minimizes a cost function is searched. By imposing positional constraints on grid points, location information including complex shapes can be easily considered. Our layout algorithm includes the spring embedder cost as a component of the cost function. We further extend the layout algorithm to enable dynamic update of the positions and sizes of compartments at each step. The new spring embedder-based grid-layout algorithm and a spring embedder algorithm are applied to three biological pathways; endothelial cell model, Fas-induced apoptosis model, and C. elegans cell fate simulation model. From the positional constraints, all the results of our algorithm satisfy location information, and hence, more comprehensible layouts are obtained as compared to the spring embedder algorithm. From the comparison of the number of crossings, the results of the grid-layout-based algorithm tend to contain more crossings than those of the spring embedder algorithm due to

  15. Comparisons of feature extraction algorithm based on unmanned aerial vehicle image

    Directory of Open Access Journals (Sweden)

    Xi Wenfei

    2017-07-01

    Full Text Available Feature point extraction technology has become a research hotspot in photogrammetry and computer vision. The commonly used point feature extraction operators are the SIFT operator, Forstner operator, Harris operator and Moravec operator, etc. With its high spatial resolution, a UAV image differs from a traditional aerial image. Based on these characteristics of unmanned aerial vehicle (UAV) imagery, this paper uses the operators referred to above to extract feature points from building images, grassland images, shrubbery images, and vegetable greenhouse images. Through practical case analysis, the performance, advantages, disadvantages and adaptability of each algorithm are compared and analyzed with respect to speed and accuracy. Finally, suggestions on how to adapt the different algorithms to diverse environments are proposed.

  16. Target recognition of ladar range images using slice image: comparison of four improved algorithms

    Science.gov (United States)

    Xia, Wenze; Han, Shaokun; Cao, Jingya; Wang, Liang; Zhai, Yu; Cheng, Yang

    2017-07-01

    Compared with traditional 3-D shape data, ladar range images possess strong noise, shape degeneracy, and sparsity, which make feature extraction and representation difficult. The slice image is an effective feature descriptor for resolving this problem. We propose four improved algorithms for target recognition of ladar range images using the slice image. In order to improve the resolution invariance of the slice image, mean value detection instead of maximum value detection is applied in all four improved algorithms. In order to improve the rotation invariance of the slice image, three new improved feature descriptors (the feature slice image, slice-Zernike moments, and slice-Fourier moments) are applied to the last three improved algorithms, respectively. Backpropagation neural networks are used as feature classifiers in the last two improved algorithms. The performance of these four improved recognition systems is analyzed comprehensively with respect to the three invariances, recognition rate, and execution time. The final experimental results show that the improvements in these four algorithms achieve the desired effect, that the three invariances of the feature descriptors are not directly related to the final recognition performance of the recognition systems, and that the four improved recognition systems perform differently under different conditions.

  17. Estimating the chance of success in IVF treatment using a ranking algorithm.

    Science.gov (United States)

    Güvenir, H Altay; Misirli, Gizem; Dilbaz, Serdar; Ozdegirmenci, Ozlem; Demir, Berfu; Dilbaz, Berna

    2015-09-01

    In medicine, estimating the chance of success of a treatment is important in deciding whether to begin the treatment or not. This paper focuses on the domain of in vitro fertilization (IVF), where estimating the outcome of a treatment is crucial in the decision to proceed with treatment, for both the clinicians and the infertile couples. IVF treatment is a stressful and costly process. It is very stressful for couples who want to have a baby. If an initial evaluation indicates a low pregnancy rate, the couple may decide not to start the IVF treatment. The aim of this study is twofold: firstly, to develop a technique that can be used to estimate the chance of success for a couple who wants to have a baby, and secondly, to determine the attributes and their particular values affecting the outcome of IVF treatment. We propose a new technique, called success estimation using a ranking algorithm (SERA), for estimating the success of a treatment using a ranking-based algorithm. The particular ranking algorithm used here is RIMARC. The performance of the new algorithm is compared with two well-known algorithms that assign class probabilities to query instances: the Naïve Bayes classifier and Random Forest. The comparison is done in terms of area under the ROC curve, accuracy and execution time, using tenfold stratified cross-validation. The results indicate that the proposed SERA algorithm has the potential to be used successfully to estimate the probability of success in medical treatment.
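
    Since SERA is a ranking method, the natural evaluation metric is the area under the ROC curve. The sketch below shows, on made-up scores, that the AUC equals the probability that a randomly chosen successful cycle is ranked above a randomly chosen unsuccessful one; the scores and labels are synthetic, not the study's data.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(6)
# hypothetical ranking scores for 200 treatment cycles: higher score = higher estimated chance of success
y = rng.integers(0, 2, 200)                      # 1 = pregnancy achieved, 0 = not
scores = y * rng.normal(0.65, 0.2, 200) + (1 - y) * rng.normal(0.45, 0.2, 200)

# AUC computed by scikit-learn
auc = roc_auc_score(y, scores)

# the same quantity interpreted as a ranking statistic:
# the probability that a random positive case is ranked above a random negative case
pos, neg = scores[y == 1], scores[y == 0]
pairs = pos[:, None] - neg[None, :]
auc_rank = (np.sum(pairs > 0) + 0.5 * np.sum(pairs == 0)) / pairs.size

print(f"roc_auc_score = {auc:.4f}, pairwise ranking estimate = {auc_rank:.4f}")
```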

  18. A computational fluid dynamics algorithm on a massively parallel computer

    International Nuclear Information System (INIS)

    Jespersen, D.C.; Levit, C.

    1989-01-01

    The implementation and performance of a finite-difference algorithm for the compressible Navier-Stokes equations in two or three dimensions on the Connection Machine are described. This machine is a single-instruction multiple-data machine with up to 65536 physical processors. The implicit portion of the algorithm is of particular interest. Running times and megaflop rates are given for two- and three-dimensional problems. Included are comparisons with the standard codes on a Cray X-MP/48. 15 refs

  19. A DIFFERENTIAL EVOLUTION ALGORITHM DEVELOPED FOR A NURSE SCHEDULING PROBLEM

    Directory of Open Access Journals (Sweden)

    Shahnazari-Shahrezaei, P.

    2012-11-01

    Full Text Available Nurse scheduling is a type of manpower allocation problem that tries to satisfy hospital managers' objectives and nurses' preferences as much as possible by generating fair shift schedules. This paper presents a nurse scheduling problem based on a real case study, and proposes two meta-heuristics, a differential evolution algorithm (DE) and a greedy randomised adaptive search procedure (GRASP), to solve it. To investigate the efficiency of the proposed algorithms, two problems are solved. Furthermore, some comparison metrics are applied to examine the reliability of the proposed algorithms. The computational results in this paper show that the proposed DE outperforms the GRASP.
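
    A minimal DE/rand/1/bin sketch on a continuous test function is given below; a real nurse-scheduling DE would additionally need an encoding of shift assignments and penalty-based constraint handling, which are omitted here, and the control parameters are generic defaults rather than the paper's settings.

```python
import numpy as np

def differential_evolution(f, bounds, np_pop=30, F=0.8, CR=0.9, gens=200, seed=7):
    """Classic DE/rand/1/bin on a box-constrained continuous minimization problem."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds[:, 0], bounds[:, 1]
    dim = len(lo)
    pop = rng.uniform(lo, hi, size=(np_pop, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(np_pop):
            a, b, c = pop[rng.choice([j for j in range(np_pop) if j != i], 3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)              # mutation
            cross = rng.random(dim) < CR
            cross[rng.integers(dim)] = True                        # ensure at least one gene crosses
            trial = np.where(cross, mutant, pop[i])                # binomial crossover
            f_trial = f(trial)
            if f_trial <= fit[i]:                                  # greedy one-to-one selection
                pop[i], fit[i] = trial, f_trial
    best = np.argmin(fit)
    return pop[best], fit[best]

# Rastrigin function as a stand-in objective (a real nurse roster would use a penalty-based cost)
rastrigin = lambda x: 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))
bounds = np.array([[-5.12, 5.12]] * 5)
x_best, f_best = differential_evolution(rastrigin, bounds)
print("best solution:", np.round(x_best, 3), "objective:", round(float(f_best), 4))
```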

  20. Results of Evolution Supervised by Genetic Algorithms

    Directory of Open Access Journals (Sweden)

    Lorentz JÄNTSCHI

    2010-09-01

    Full Text Available The efficiency of a genetic algorithm is frequently assessed using a series of operators of evolution such as crossover operators, mutation operators or other dynamic parameters. The present paper aimed to review the main results of evolution supervised by genetic algorithms used to identify solutions to hard agricultural and horticultural problems, and to discuss the results of using genetic algorithms on structure-activity relationships in terms of the behavior of evolution supervised by genetic algorithms. A genetic algorithm had been developed and implemented in order to identify the optimal solution, in terms of estimation power, of a multiple linear regression approach for structure-activity relationships. Three survival and three selection strategies (proportional, deterministic and tournament) were investigated in order to identify the best survival-selection strategy able to lead to the model with the highest estimation power. The Molecular Descriptors Family for structure characterization of a sample of 206 polychlorinated biphenyls with measured octanol-water partition coefficients was used as a case study. Evolution using different selection and survival strategies proved to create populations of genotypes living in the evolution space with different diversity and variability. Under a series of comparison criteria these populations proved to be grouped, and the groups were shown to be statistically different from one another. The conclusions about genetic algorithm evolution according to a number of criteria were also highlighted.