Experience with CANDID: Comparison algorithm for navigating digital image databases
Energy Technology Data Exchange (ETDEWEB)
Kelly, P.; Cannon, M.
1994-10-01
This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.
CANDID: Comparison algorithm for navigating digital image databases
Energy Technology Data Exchange (ETDEWEB)
Kelly, P.M.; Cannon, T.M.
1994-02-21
In this paper, we propose a method for calculating the similarity between two digital images. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized distance between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to an example target image. The algorithm is applied to the problem of search and retrieval for a database containing pulmonary CT imagery, and experimental results are provided.
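As an illustrative sketch (not CANDID's actual implementation), the query-by-example matching described above can be mimicked with histogram signatures and a normalized similarity measure; the bin count, the random stand-in "images", and the cosine-style measure are all assumptions:

```python
import numpy as np

def image_signature(image, bins=16):
    """Global signature: a normalized histogram (empirical PDF) of pixel
    values, standing in for CANDID's texture/shape/color feature densities."""
    hist, _ = np.histogram(image, bins=bins, range=(0.0, 1.0))
    return hist / hist.sum()

def normalized_similarity(sig_a, sig_b):
    """Normalized inner-product similarity between two signatures:
    1.0 for identical densities, approaching 0.0 for disjoint ones."""
    return float(np.dot(sig_a, sig_b) /
                 (np.linalg.norm(sig_a) * np.linalg.norm(sig_b)))

# Query-by-example: rank database images by similarity to an example image.
rng = np.random.default_rng(0)
database = {name: rng.random((32, 32)) for name in ("a", "b", "c")}
query = database["b"].copy()  # the user-provided example image
ranked = sorted(
    database,
    key=lambda n: normalized_similarity(image_signature(database[n]),
                                        image_signature(query)),
    reverse=True,
)
print(ranked[0])
```

Since the query is a copy of image "b", that image ranks first with similarity exactly 1.0.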
Lee, K J; Jenet, F A; Martinez, J; Dartez, L P; Mata, A; Lunsford, G; Cohen, S; Biwer, C M; Rohr, M; Flanigan, J; Walker, A; Banaszak, S; Allen, B; Barr, E D; Bhat, N D R; Bogdanov, S; Brazier, A; Camilo, F; Champion, D J; Chatterjee, S; Cordes, J; Crawford, F; Deneva, J; Desvignes, G; Ferdman, R D; Freire, P; Hessels, J W T; Karuppusamy, R; Kaspi, V M; Knispel, B; Kramer, M; Lazarus, P; Lynch, R; Lyne, A; McLaughlin, M; Ransom, S; Scholz, P; Siemens, X; Spitler, L; Stairs, I; Tan, M; van Leeuwen, J; Zhu, W W
2013-01-01
Modern radio pulsar surveys produce a large volume of prospective candidates, the majority of which are polluted by human-created radio frequency interference or other forms of noise. Typically, large numbers of candidates need to be visually inspected in order to determine if they are real pulsars. This process can be labor intensive. In this paper, we introduce an algorithm called PEACE (Pulsar Evaluation Algorithm for Candidate Extraction) which improves the efficiency of identifying pulsar signals. The algorithm ranks the candidates based on a score function. Unlike popular machine-learning based algorithms, no prior training data sets are required. This algorithm has been applied to data from several large-scale radio pulsar surveys. Using the human-based ranking results generated by students in the Arecibo Remote Command Center programme, the statistical performance of PEACE was evaluated. It was found that PEACE ranked 68% of the student-identified pulsars within the top 0.17% of sorted candidates, 95% ...
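A minimal sketch of score-function ranking without training data, as described above; the feature names and the product-form score are hypothetical, not PEACE's actual quality factors:

```python
def score(candidate):
    """Toy score function: higher signal-to-noise and a sharper folded
    profile both raise the rank; no training data is involved."""
    return candidate["snr"] * candidate["profile_sharpness"]

# Hypothetical survey candidates (invented numbers for illustration).
candidates = [
    {"id": "cand-1", "snr": 8.0,  "profile_sharpness": 0.2},
    {"id": "cand-2", "snr": 25.0, "profile_sharpness": 0.9},  # pulsar-like
    {"id": "cand-3", "snr": 12.0, "profile_sharpness": 0.1},
]
ranked = sorted(candidates, key=score, reverse=True)
print([c["id"] for c in ranked])  # → ['cand-2', 'cand-1', 'cand-3']
```

A human then inspects only the top of the sorted list rather than every candidate.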
Theoretical Comparison Between Candidates for Dark Matter
McKeough, James; Hira, Ajit; Valdez, Alexandra
2017-01-01
Since the generally-accepted view among astrophysicists is that the matter component of the universe is mostly dark matter, the search for dark matter particles continues unabated. The Large Underground Xenon (LUX) improvements, aided by advanced computer simulations at the U.S. Department of Energy's Lawrence Berkeley National Laboratory's (Berkeley Lab) National Energy Research Scientific Computing Center (NERSC) and Brown University's Center for Computation and Visualization (CCV), can potentially eliminate some particle models of dark matter. Generally, the proposed candidates can be put in three categories: baryonic dark matter, hot dark matter, and cold dark matter. The Lightest Supersymmetric Particle (LSP) of supersymmetric models is a dark matter candidate, and is classified as a Weakly Interacting Massive Particle (WIMP). Similar to the cosmic microwave background radiation left over from the Big Bang, there is a background of low-energy neutrinos in our Universe. According to some researchers, these may be the explanation for the dark matter. One advantage of the Neutrino Model is that neutrinos are known to exist. Dark matter made from neutrinos is termed ``hot dark matter''. We formulate a novel empirical function for the average density profile of cosmic voids, identified via the watershed technique in ΛCDM N-body simulations. This function adequately treats both void size and redshift, and describes the scale radius and the central density of voids. We started with a five-parameter model. Our research focuses mainly on the LSP and Neutrino models.
Comparison of Text Categorization Algorithms
Institute of Scientific and Technical Information of China (English)
SHI Yong-feng; ZHAO Yan-ping
2004-01-01
This paper summarizes several automatic text categorization algorithms in common use recently, and analyzes and compares their advantages and disadvantages. It provides clues for choosing appropriate automatic classification algorithms in different fields. Finally, some evaluations and summaries of these algorithms are given, and directions for further research are pointed out.
Comparison of fast discrete wavelet transform algorithms
Institute of Scientific and Technical Information of China (English)
MENG Shu-ping; TIAN Feng-chun; XU Xin
2005-01-01
This paper presents an analysis on and experimental comparison of several typical fast algorithms for the discrete wavelet transform (DWT) and their implementation in image compression, particularly the Mallat algorithm, the FFT-based algorithm, the Short-length-based algorithm and the Lifting algorithm. The principles, structures and computational complexity of these algorithms are explored in detail. The results of the experiments for comparison are consistent with those simulated in MATLAB. It is found that there are limitations in the implementation of the DWT: some algorithms are workable only for special wavelet transforms and lack generality. Above all, the speed of the wavelet transform, which governs the speed of image processing, is in fact the limiting factor for real-time image processing.
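Of the algorithms compared, the lifting scheme is the easiest to illustrate; below is a one-level Haar DWT via lifting (a predict step followed by an update step), a standard textbook construction rather than the paper's implementation:

```python
def haar_lifting(signal):
    """One level of the Haar DWT via lifting: split into even/odd samples,
    predict odds from evens (difference), then update evens (average)."""
    evens, odds = signal[0::2], signal[1::2]
    detail = [o - e for e, o in zip(evens, odds)]         # predict step
    approx = [e + d / 2 for e, d in zip(evens, detail)]   # update step
    return approx, detail

approx, detail = haar_lifting([2.0, 4.0, 6.0, 8.0])
print(approx, detail)  # → [3.0, 7.0] [2.0, 2.0]
```

The lifting structure is attractive for speed because it halves the multiplications of the direct convolution form and can be computed in place.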
Evaluation of GPM candidate algorithms on hurricane observations
Le, M.; Chandrasekar, C. V.
2012-12-01
storms and hurricanes. In this paper, the performance of GPM candidate algorithms [2][3] for profile classification, melting-region detection and drop size distribution retrieval will be presented for hurricane Earl. This analysis will be compared with observations of storms that are not tropical storms. The philosophy of the algorithm is based on the vertical characteristics of the measured dual-frequency ratio (DFRm), defined as the difference in measured radar reflectivities at the two frequencies. It helps our understanding of how hurricanes such as Earl form and intensify rapidly. References: [1] T. Iguchi, R. Oki, A. Eric and Y. Furuhama, "Global precipitation measurement program and the development of dual-frequency precipitation radar," J. Commun. Res. Lab. (Japan), 49, 37-45, 2002. [2] M. Le and V. Chandrasekar, Recent updates on precipitation classification and hydrometeor identification algorithm for GPM-DPR, Geoscience and Remote Sensing Symposium, IGARSS 2012, IEEE International, Munich, Germany. [3] M. Le, V. Chandrasekar and S. Lim, Microphysical retrieval from dual-frequency precipitation radar on board GPM, Geoscience and Remote Sensing Symposium, IGARSS 2010, IEEE International, Honolulu, USA.
Institute of Scientific and Technical Information of China (English)
WANG ShunJin; ZHANG Hua
2007-01-01
Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
Supercomputers and biological sequence comparison algorithms.
Core, N G; Edmiston, E W; Saltz, J H; Smith, R M
1989-12-01
Comparison of biological (DNA or protein) sequences provides insight into molecular structure, function, and homology and is increasingly important as the available databases become larger and more numerous. One method of increasing the speed of the calculations is to perform them in parallel. We present the results of initial investigations using two dynamic programming algorithms on the Intel iPSC hypercube and the Connection Machine as well as an inexpensive, heuristically-based algorithm on the Encore Multimax.
Dynamic programming algorithms for biological sequence comparison.
Pearson, W R; Miller, W
1992-01-01
Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N2)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N2) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
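The O(N2)-time, O(N)-space dynamic programming idea described above can be sketched as a generic global-alignment score with a linear gap penalty of the form g = rk; the scoring values are illustrative, and this computes only the optimal score (not the alignment itself), keeping a single row of the matrix in memory:

```python
def alignment_score(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score with linear gap penalty, computed in
    O(len(a)*len(b)) time but only O(len(b)) space: each row of the
    dynamic-programming matrix overwrites the previous one."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i, ca in enumerate(a, 1):
        curr = [i * gap]
        for j, cb in enumerate(b, 1):
            diag = prev[j - 1] + (match if ca == cb else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]

# Six matches plus one gap: 6*(+1) + 1*(-2) = 4.
print(alignment_score("GATTACA", "GATACA"))  # → 4
```

Recovering the alignment itself in linear space requires the divide-and-conquer refinement (Hirschberg's technique), which the rigorous programs mentioned in the abstract employ.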
An Algorithm for Selecting QGP Candidate Events from Relativistic Heavy Ion Collision Data Sample
Lian Shou Liu; Yuan, H B; Lianshou, Liu; Qinghua, Chen; Yuan, Hu
1998-01-01
The formation of quark-gluon plasma (QGP) in relativistic heavy ion collisions is expected to be accompanied by a background of ordinary collision events without phase transition. In this short note an algorithm is proposed to select the QGP candidate events from the whole event sample. This algorithm is based on a simple geometrical consideration together with some ordinary QGP signal, e.g. an increase in the $K/\pi$ ratio. The efficiency of this algorithm in raising the 'signal/noise ratio' of QGP events in the selected sub-sample is shown explicitly by Monte-Carlo simulation.
A Comparison of Algorithms for the Construction of SZ Cluster Catalogues
Melin, J -B; Bartelmann, M; Bartlett, J G; Betoule, M; Bobin, J; Carvalho, P; Chon, G; Delabrouille, J; Diego, J M; Harrison, D L; Herranz, D; Hobson, M; Kneissl, R; Lasenby, A N; Jeune, M Le; Lopez-Caniego, M; Mazzotta, P; Rocha, G M; Schaefer, B M; Starck, J -L; Waizmann, J -C; Yvon, D
2012-01-01
We evaluate the construction methodology of an all-sky catalogue of galaxy clusters detected through the Sunyaev-Zel'dovich (SZ) effect. We perform an extensive comparison of twelve algorithms applied to the same detailed simulations of the millimeter and submillimeter sky based on a Planck-like case. We present the results of this "SZ Challenge" in terms of catalogue completeness, purity, and astrometric and photometric reconstruction. Our results provide a comparison of a representative sample of SZ detection algorithms and highlight important issues in their application. In our case study, we show that the exact expected number of clusters remains uncertain (about a thousand cluster candidates at |b| > 20 deg with 90% purity), depending on the SZ model, on the detailed sky simulations, and on the algorithmic implementation of the detection methods. We also estimate the astrometric precision of the cluster candidates, which is found to be of the order of ~2 arcmin on average, and the photometric uncertainty of...
The Performance Comparisons between the Unconstrained and Constrained Equalization Algorithms
Institute of Scientific and Technical Information of China (English)
HE Zhong-qiu; LI Dao-ben
2003-01-01
This paper proposes two unconstrained algorithms, the Steepest Descent (SD) algorithm and the Conjugate Gradient (CG) algorithm, based on an excellent cost function [1-3]. At the same time, two constrained algorithms, the Constrained Steepest Descent (CSD) algorithm and the Constrained Conjugate Gradient (CCG) algorithm, are derived subject to a new constraint condition. Both are implemented in the unitary transform domain. The computational complexities of the constrained algorithms are compared to those of the unconstrained algorithms, and simulation results illustrate the performance comparison.
[SKLOF: a new algorithm to reduce the range of supernova candidates].
Tu, Liang-ping; Wei, Hui-ming; Wei, Peng; Pan, Jing-chang; Luo, A-li; Zhao, Yong-heng
2015-01-01
Supernovae (SNe) serve as "standard candles" in cosmology; the probability of an outburst in any given galaxy is very low, making them a special and rare class of astronomical objects. Only by surveying a large number of galaxies do we have a chance of finding a supernova. A supernova in the midst of its explosion can outshine its entire host galaxy, so the galaxy spectra we obtain show obvious supernova features. However, the number of supernovae found so far is very small relative to the huge number of astronomical objects, and the time needed to search for supernova candidates determines whether follow-up observations are possible, so an efficient method is required. The time complexity of the density-based outlier detection algorithm (LOF) is not ideal, which limits its application to large datasets. By improving the LOF algorithm, a new algorithm, named SKLOF, is introduced that reduces the search range for supernova candidates in a flood of galaxy spectra. Firstly, the spectral datasets are pruned, discarding most objects that cannot be outliers. Secondly, the improved LOF algorithm is used to calculate the local outlier factors (LOFs) of the remaining spectra, and all LOFs are arranged in descending order. Finally, we obtain a smaller search range of supernova candidates for subsequent identification. The experimental results show that the algorithm is very effective: it not only improves accuracy but also reduces the running time compared with the LOF algorithm while guaranteeing detection accuracy.
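A toy sketch of the prune-then-rank idea behind SKLOF, using a k-nearest-neighbour distance on 1-D points in place of the full local-outlier-factor computation; the data, the k value, and the pruning threshold are invented for illustration:

```python
def knn_distance(point, data, k=2):
    """Distance to the k-th nearest neighbour: a crude density proxy
    standing in for the full local-outlier-factor score."""
    dists = sorted(abs(point - q) for q in data if q is not point)
    return dists[k - 1]

# 1-D stand-ins for galaxy spectra; one outlying "supernova-like" object.
spectra = [1.0, 1.1, 0.9, 1.05, 0.95, 5.0]

# Pruning step: discard objects whose k-NN distance is below a threshold,
# as SKLOF prunes objects that cannot possibly be outliers.
candidates = [x for x in spectra if knn_distance(x, spectra) > 0.5]

# Rank the survivors by outlier score in descending order.
ranked = sorted(candidates, key=lambda x: knn_distance(x, spectra), reverse=True)
print(ranked)  # → [5.0]
```

The pruning step is where the speedup comes from: the expensive per-object scoring is applied only to the small surviving subset.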
Tradeoffs Between Branch Mispredictions and Comparisons for Sorting Algorithms
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Moruz, Gabriel
2005-01-01
Branch mispredictions are an important factor affecting running time in practice. In this paper we consider tradeoffs between the number of branch mispredictions and the number of comparisons for sorting algorithms in the comparison model. We prove that a sorting algorithm using O(dnlog n...
Line Balancing Using Largest Candidate Rule Algorithm In A Garment Industry: A Case Study
Directory of Open Access Journals (Sweden)
V. P. Jaganathan
2014-12-01
The emergence of fast changes in fashion has given rise to the need to shorten production cycle times in the garment industry. As effective usage of resources has a significant effect on the productivity and efficiency of production operations, garment manufacturers are urged to utilize their resources effectively in order to meet dynamic customer demand. This paper focuses specifically on line balancing and layout modification. The aim of assembly line balancing in sewing lines is to assign tasks to the workstations, so that the machines of each workstation can perform the assigned tasks with a balanced loading. The Largest Candidate Rule (LCR) algorithm has been deployed in this paper.
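The Largest Candidate Rule is simple enough to sketch; this version ignores the precedence constraints a real sewing line imposes, and the task times are hypothetical:

```python
def largest_candidate_rule(tasks, cycle_time):
    """Largest Candidate Rule: sort tasks by descending task time, then
    fill each workstation greedily until the cycle time would be exceeded.
    Precedence constraints, present in a real assembly line, are omitted."""
    remaining = sorted(tasks.items(), key=lambda kv: kv[1], reverse=True)
    stations = []
    while remaining:
        load, station = 0.0, []
        for name, t in remaining[:]:
            if load + t <= cycle_time:
                station.append(name)
                load += t
                remaining.remove((name, t))
        stations.append(station)
    return stations

# Hypothetical garment-assembly task times (minutes); not from the paper.
tasks = {"cut": 5.0, "sew_body": 9.0, "sew_sleeve": 7.0,
         "button": 3.0, "press": 4.0, "pack": 2.0}
stations = largest_candidate_rule(tasks, cycle_time=10.0)
print(stations)
```

Each station's total load stays within the cycle time, which is the balanced-loading goal the abstract describes.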
Triad pattern algorithm for predicting strong promoter candidates in bacterial genomes
Directory of Open Access Journals (Sweden)
Sakanyan Vehary
2008-05-01
Background: Bacterial promoters, which increase the efficiency of gene expression, differ from other promoters by several characteristics. This difference, not yet widely exploited in bioinformatics, looks promising for the development of relevant computational tools to search for strong promoters in bacterial genomes. Results: We describe a new triad pattern algorithm that predicts strong promoter candidates in annotated bacterial genomes by matching specific patterns for the group I σ70 factors of Escherichia coli RNA polymerase. It detects promoter-specific motifs by consecutively matching three patterns, consisting of a UP-element, required for interaction with the α subunit, and then optimally-separated patterns of -35 and -10 boxes, required for interaction with the σ70 subunit of RNA polymerase. Analysis of 43 bacterial genomes revealed that the frequency of candidate sequences depends on the A+T content of the DNA under examination. The accuracy of the in silico prediction was experimentally validated for the genome of a hyperthermophilic bacterium, Thermotoga maritima, by applying a cell-free expression assay using the predicted strong promoters. In this organism, the strong promoters govern genes for translation, energy metabolism, transport, cell movement, and other as-yet unidentified functions. Conclusion: The triad pattern algorithm developed for predicting strong bacterial promoters is well suited for analyzing bacterial genomes with an A+T content of less than 62%. This computational tool opens new prospects for investigating global gene expression, and individual strong promoters in bacteria of medical and/or economic significance.
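A much-reduced sketch of the pattern-matching step: a -35 box consensus, a bounded spacer, then a -10 box consensus, expressed as a regular expression. The exact-consensus patterns (TTGACA, TATAAT are the standard σ70 consensus sequences) and the spacer bounds are illustrative simplifications; the actual algorithm matches a UP-element first and tolerates degenerate positions:

```python
import re

# -35 box, a spacer of bounded length, then the -10 box.
PROMOTER = re.compile(r"TTGACA[ACGT]{15,19}TATAAT")

def find_promoter_candidates(genome):
    """Return start positions of candidate promoter motifs."""
    return [m.start() for m in PROMOTER.finditer(genome)]

# Synthetic test sequence with one planted motif at position 4.
genome = "CCCC" + "TTGACA" + "A" * 17 + "TATAAT" + "GGGG"
print(find_promoter_candidates(genome))  # → [4]
```

Allowing mismatches per position, as real promoter scanners do, would replace the exact regex with a position-weight-matrix score and a threshold.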
Trust Based Algorithm for Candidate Node Selection in Hybrid MANET-DTN
Directory of Open Access Journals (Sweden)
Jan Papaj
2014-01-01
The hybrid MANET-DTN is a mobile network that enables transport of data between groups of disconnected mobile nodes. The network combines the benefits of Mobile Ad-Hoc Networks (MANET) and Delay Tolerant Networks (DTN). The main problem of the MANET occurs if the communication path is broken or disconnected for some short time period. On the other hand, DTN allows sending data in a disconnected environment thanks to its higher tolerance to delay. The hybrid MANET-DTN provides an optimal solution for transporting information in emergency situations. Moreover, security is a critical factor because the data are transported by mobile devices. In this paper, we investigate the issue of secure candidate node selection for transportation of data in a disconnected environment in the hybrid MANET-DTN. To achieve secure selection of reliable mobile nodes, a trust algorithm is introduced. The algorithm enables the selection of reliable nodes based on collected routing information. This algorithm is implemented in the OPNET Modeler simulator.
COMPARISON OF LOSSLESS DATA COMPRESSION ALGORITHMS FOR TEXT DATA
Directory of Open Access Journals (Sweden)
U. S. Amarasinghe
2010-12-01
Data compression is a common requirement for most computerized applications. There are a number of data compression algorithms, which are dedicated to compressing different data formats. Even for a single data type there are a number of different compression algorithms, which use different approaches. This paper examines lossless data compression algorithms and compares their performance. A set of selected algorithms is examined and implemented to evaluate their performance in compressing text data. An experimental comparison of a number of different lossless data compression algorithms is presented in this paper. The article concludes by stating which algorithm performs well for text data.
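The experimental setup, compressing the same text with several lossless codecs and comparing sizes, can be reproduced in miniature with Python's standard library; these three codecs are stand-ins for the paper's selection, and which one wins depends on the data:

```python
import bz2
import lzma
import zlib

def compare_compressors(data: bytes):
    """Compress the same input with three stdlib lossless codecs and
    report the compressed sizes in bytes."""
    return {
        "zlib": len(zlib.compress(data)),
        "bz2": len(bz2.compress(data)),
        "lzma": len(lzma.compress(data)),
    }

text = b"the quick brown fox jumps over the lazy dog " * 200
sizes = compare_compressors(text)
print(sorted(sizes, key=sizes.get))  # codecs ordered best-to-worst here
```

A fuller comparison would also measure compression and decompression time, since ratio alone does not decide which algorithm "performs well".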
Directory of Open Access Journals (Sweden)
Ait-Ali Lamia
2011-11-01
Background: To propose a new diagnostic algorithm for candidates for Fontan and identify those who can skip cardiac catheterization (CC). Methods: Forty-four candidates for Fontan (median age 4.8 years, range: 2-29 years) were prospectively evaluated by trans-thoracic echocardiography (TTE), cardiovascular magnetic resonance (CMR) and CC. Before CC, according to clinical, echo and CMR findings, patients were divided into two groups: Group I comprised 18 patients deemed suitable for Fontan without requiring CC; Group II comprised 26 patients indicated for CC either in order to detect more details, or for interventional procedures. Results: In Group I ("CC not required") no unexpected new information affecting surgical planning was provided by CC. Conversely, in Group II new information was provided by CC in three patients (0 vs 11.5%, p = 0.35), and in six an interventional procedure was performed. During CC, minor complications occurred in one patient from Group I and in three from Group II (6 vs 14%, p = 0.7). The radiation dose-area product was similar in the two groups (median 20 Gy·cm2, range: 5-40 vs 26.5 Gy·cm2, range: 9-270, p = 0.37). All 18 Group I patients and 19 Group II patients underwent a total cavo-pulmonary anastomosis; of the remaining seven Group II patients, four were excluded from Fontan, two are awaiting Fontan, and one refused the intervention. Conclusion: In this paper we propose a new diagnostic algorithm in a pre-Fontan setting. An accurate non-invasive evaluation comprising TTE and CMR could select patients who can skip CC.
Comparison between Two Text Digital Watermarking Algorithms
Institute of Scientific and Technical Information of China (English)
TANG Sheng; XUE Xu-ce
2011-01-01
In this paper, two text digital watermarking methods are compared in terms of their robustness performance. A nonlinear watermarking algorithm embeds the watermark into the reordered DCT coefficients of a text image, and utilizes a nonlinear detector to detect the watermark under various attacks. Compared with the classical watermarking algorithm, experimental results show that this nonlinear watermarking algorithm has some potential merits.
Comparison of greedy algorithms for α-decision tree construction
Alkhalid, Abdulaziz
2011-01-01
A comparison among different heuristics used by greedy algorithms that construct approximate decision trees (α-decision trees) is presented. The comparison is conducted using decision tables based on 24 data sets from the UCI Machine Learning Repository [2]. Complexity of decision trees is estimated relative to several cost functions: depth, average depth, number of nodes, number of nonterminal nodes, and number of terminal nodes. Costs of trees built by greedy algorithms are compared with minimum costs calculated by an algorithm based on dynamic programming. The results of the experiments assign to each cost function a set of potentially good heuristics that minimize it. © 2011 Springer-Verlag.
Comparison of the SLAM algorithms: Hangar experiments
Directory of Open Access Journals (Sweden)
Korkmaz Mehmet
2016-01-01
This study aims to compare two well-known algorithms in an application scenario of simultaneous localization and mapping (SLAM) and to present issues related to them as well. The most commonly used SLAM algorithms, the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF), are compared in terms of the accuracy of the robot states, localization and mapping. In contrast to most implementations in previous studies, the simulation environments are chosen to be as large as possible in order to provide reliable results; in this study, two different hangar regions are simulated. According to the outcomes of the applications, the UKF-based SLAM algorithm has superior performance over the EKF-based one, apart from elapsed time.
Does a Least-Preferred Candidate Win a Seat? A Comparison of Three Electoral Systems
Directory of Open Access Journals (Sweden)
Yoichi Hizen
2015-01-01
In this paper, the differences between two variations of proportional representation (PR), open-list PR and closed-list PR, are analyzed in terms of their ability to accurately reflect voter preference. The single nontransferable vote (SNTV) is also included in the comparison as a benchmark. We construct a model of voting equilibria with a candidate who is least preferred by voters, in the sense that replacing the least-preferred candidate in the set of winners with any loser is Pareto improving, and our focus is on whether the least-preferred candidate wins under each electoral system. We demonstrate that the least-preferred candidate never wins under the SNTV, but can win under open-list PR, although this is less likely than winning under closed-list PR.
Comparison of satellite reflectance algorithms for estimating ...
We analyzed 10 established and 4 new satellite reflectance algorithms for estimating chlorophyll-a (Chl-a) in a temperate reservoir in southwest Ohio using coincident hyperspectral aircraft imagery and dense water truth collected within one hour of image acquisition to develop simple proxies for algal blooms and to facilitate portability between multispectral satellite imagers for regional algal bloom monitoring. Narrow band hyperspectral aircraft images were upscaled spectrally and spatially to simulate 5 current and near future satellite imaging systems. Established and new Chl-a algorithms were then applied to the synthetic satellite images and then compared to calibrated Chl-a water truth measurements collected from 44 sites within one hour of aircraft acquisition of the imagery. Masks based on the spatial resolution of the synthetic satellite imagery were then applied to eliminate mixed pixels including vegetated shorelines. Medium-resolution Landsat and finer resolution data were evaluated against 29 coincident water truth sites. Coarse-resolution MODIS and MERIS-like data were evaluated against 9 coincident water truth sites. Each synthetic satellite data set was then evaluated for the performance of a variety of spectrally appropriate algorithms with regard to the estimation of Chl-a concentrations against the water truth data set. The goal is to inform water resource decisions on the appropriate satellite data acquisition and processing for the es
Reranking candidate gene models with cross-species comparison for improved gene prediction
Directory of Open Access Journals (Sweden)
Pereira Fernando CN
2008-10-01
Background: Most gene finders score candidate gene models with state-based methods, typically HMMs, by combining local properties (coding potential, splice donor and acceptor patterns, etc.). Competing models with similar state-based scores may be distinguishable with additional information. In particular, functional and comparative genomics datasets may help to select among competing models of comparable probability by exploiting features likely to be associated with the correct gene models, such as conserved exon/intron structure or protein sequence features. Results: We have investigated the utility of a simple post-processing step for selecting among a set of alternative gene models, using global scoring rules to rerank competing models for more accurate prediction. For each gene locus, we first generate the K best candidate gene models using the gene finder Evigan, and then rerank these models using comparisons with putative orthologous genes from closely-related species. Candidate gene models with lower scores in the original gene finder may be selected if they exhibit strong similarity to probable orthologs in coding sequence, splice site location, or signal peptide occurrence. Experiments on Drosophila melanogaster demonstrate that reranking based on cross-species comparison outperforms the best gene models identified by Evigan alone, and also outperforms the comparative gene finders GeneWise and Augustus+. Conclusion: Reranking gene models with cross-species comparison improves gene prediction accuracy. This straightforward method can be readily adapted to incorporate additional lines of evidence, as it requires only a ranked source of candidate gene models.
Algorithmic parameterization of mixed treatment comparisons
van Valkenhoef, Gert; Tervonen, Tommi; de Brock, Bert; Hillege, Hans
2012-01-01
Mixed Treatment Comparisons (MTCs) enable the simultaneous meta-analysis (data pooling) of networks of clinical trials comparing ≥2 alternative treatments. Inconsistency models are critical in MTC to assess the overall consistency between evidence sources. Only in the absence
The DCA:SOMe Comparison A comparative study between two biologically-inspired algorithms
Greensmith, Julie; Aickelin, Uwe
2010-01-01
The Dendritic Cell Algorithm (DCA) is an immune-inspired algorithm, developed for the purpose of anomaly detection. The algorithm performs multi-sensor data fusion and correlation which results in a 'context aware' detection system. Previous applications of the DCA have included the detection of potentially malicious port scanning activity, where it has produced high rates of true positives and low rates of false positives. In this work we aim to compare the performance of the DCA and of a Self-Organizing Map (SOM) when applied to the detection of SYN port scans, through experimental analysis. A SOM is an ideal candidate for comparison as it shares similarities with the DCA in terms of the data fusion method employed. It is shown that the results of the two systems are comparable, and both produce false positives for the same processes. This shows that the DCA can produce anomaly detection results to the same standard as an established technique.
An Adaptive Algorithm for Pairwise Comparison-based Preference Measurement
DEFF Research Database (Denmark)
Meissner, Martin; Decker, Reinhold; Scholz, Sören W.
2011-01-01
The Pairwise Comparison‐based Preference Measurement (PCPM) approach has been proposed for products featuring a large number of attributes. In the PCPM framework, a static two‐cyclic design is used to reduce the number of pairwise comparisons. However, adaptive questioning routines that maximize the information gained from pairwise comparisons promise to further increase the efficiency of this approach. This paper introduces a new adaptive algorithm for PCPM, which accounts for several response errors. The suggested approach is compared with an adaptive algorithm that was proposed for the Analytic...
Comparison of Three Web Search Algorithms
Institute of Scientific and Technical Information of China (English)
Ying Bao; Zi-hu Zhu
2006-01-01
In this paper we discuss three important kinds of Markov chains used in Web search algorithms: the maximal irreducible Markov chain, the minimal irreducible Markov chain, and the middle irreducible Markov chain. We discuss the stationary distributions, the convergence rates, and the Maclaurin series of the stationary distributions of the three kinds of Markov chains. Among other things, our results show that the maximal and minimal Markov chains have the same stationary distribution and that the stationary distribution of the middle Markov chain reflects the real Web structure more objectively. Our results also prove that the maximal and middle Markov chains have the same convergence rate and that the maximal Markov chain converges faster than the minimal Markov chain when the damping factor α > 1/√2.
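The damped chains discussed in this abstract are PageRank-style Markov chains. As an illustrative sketch (not the paper's own construction), the stationary distribution of such a chain can be approximated by power iteration over a row-stochastic transition matrix:

```python
def stationary_distribution(P, damping=0.85, tol=1e-10, max_iter=1000):
    """Power iteration for the stationary distribution of a damped
    row-stochastic transition matrix P (PageRank-style chain).
    Parameter names here are illustrative, not from the paper."""
    n = len(P)
    pi = [1.0 / n] * n
    for _ in range(max_iter):
        new = [(1 - damping) / n
               + damping * sum(pi[i] * P[i][j] for i in range(n))
               for j in range(n)]
        if max(abs(a - b) for a, b in zip(new, pi)) < tol:
            return new
        pi = new
    return pi
```

For a symmetric two-state chain the iteration converges immediately to the uniform distribution, as expected.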
A competitive comparison of different types of evolutionary algorithms
Hrstka, O; Leps, M; Zeman, J; 10.1016/S0045-7949(03)00217-7
2009-01-01
This paper presents a comparison of several stochastic optimization algorithms developed by the authors in their previous works for the solution of problems arising in Civil Engineering. The optimization methods introduced are: the integer augmented simulated annealing (IASA), the real-coded augmented simulated annealing (RASA), differential evolution (DE) in its original form as developed by R. Storn and K. Price, and the simplified real-coded differential genetic algorithm (SADE). Each of these methods was developed for a specific optimization problem, namely the Chebychev trial polynomial problem, the so-called type 0 function, and two engineering problems: the reinforced concrete beam layout and the periodic unit cell problem, respectively. Detailed and extensive numerical tests were performed to examine the stability and efficiency of the proposed algorithms. The results of our experiments suggest that the performance and robustness of the RASA, IASA and SADE methods are comparable, while the DE algorithm perfor...
Novel algorithm of finding good candidate pre-configuration cycles in survivable WDM mesh network
Institute of Scientific and Technical Information of China (English)
ZHAO Tai-fei; YU Hong-fang; LI Le-min
2006-01-01
We present a novel algorithm for finding cycles, called the Fast Cycles Mining Algorithm (FCMA), for efficient p-cycle network design in WDM networks. The algorithm is also flexible in that the number and the length of the cycles generated are controlled by several input parameters. The problem of wavelength assignment on p-cycles is considered in the algorithm. This algorithm is scalable and especially suitable for survivable WDM mesh networks. Finally, the performance of the algorithm is gauged by running it on some real-world network topologies.
Comparison of algorithms for determination of solar wind regimes
Neugebauer, Marcia; Reisenfeld, Daniel; Richardson, Ian G.
2016-09-01
This study compares the designation of different solar wind flow regimes (transient, coronal hole, and streamer belt) according to two algorithms derived from observations by the Solar Wind Ion Composition Spectrometer, the Solar Wind Electron Proton Alpha Monitor, and the Magnetometer on the ACE spacecraft, with a similar regime determination performed on board the Genesis spacecraft. The comparison is made for the interval from late 2001 to early 2004 when Genesis was collecting solar wind ions for return to Earth. The agreement between hourly regime assignments from any pair of algorithms was less than two thirds, while the simultaneous agreement between all three algorithms was only 49%. When the results of the algorithms were compared to a catalog of interplanetary coronal mass ejection events, it was found that almost all the events in the catalog were confirmed by the spacecraft algorithms. On the other hand, many short transient events, lasting 1 to 13 h, that were unanimously selected as transient like by the algorithms, were not included in the catalog.
Criteria for the comparison of synchronization algorithms for spatially separated time and frequency measures
Koval, Yuriy; Kostyrya, Alexander; Pryimak, Viacheslav; Al-Tvezhri, Basim
2012-01-01
This paper describes the role of, and gives a classification for, synchronization algorithms for spatially separated time and frequency measures. Criteria for comparing the algorithms are introduced and illustrated using one of the algorithms as an example.
Comparison of face Recognition Algorithms on Dummy Faces
Directory of Open Access Journals (Sweden)
Aruni Singh
2012-09-01
In the age of rising crime, face recognition is enormously important in the contexts of computer vision, psychology, surveillance, fraud detection, pattern recognition, neural networks, content-based video processing, etc. The face is a strong, non-intrusive biometric for identification, and hence criminals always try to hide their facial features by different artificial means such as plastic surgery, disguise and dummies. The availability of a comprehensive face database is crucial to test the performance of face recognition algorithms. However, while existing publicly available face databases contain face images with a wide variety of poses, illumination, gestures and face occlusions, no dummy face database is available in the public domain. The contributions of this research paper are: (i) preparation of a dummy face database of 110 subjects, (ii) comparison of some texture-based, feature-based and holistic face recognition algorithms on that dummy face database, and (iii) critical analysis of these types of algorithms on the dummy face database.
Comparison of machine learning algorithms for detecting coral reef
Directory of Open Access Journals (Sweden)
Eduardo Tusa
2014-09-01
(Received: 2014/07/31 - Accepted: 2014/09/23) This work focuses on developing a fast coral reef detector for use on an autonomous underwater vehicle (AUV). Fast detection allows the AUV to stabilize itself with respect to an area of reef as quickly as possible, and prevents devastating collisions. We use the algorithm of Purser et al. (2009) because of its precision. This detector has two parts: feature extraction, which uses Gabor wavelet filters, and feature classification, which uses machine learning based on neural networks. Due to the extensive running time of the neural networks, we exchange them for a classification algorithm based on decision trees. We use a database of 621 images of coral reef in Belize (110 images for training and 511 images for testing). We implement the bank of Gabor wavelet filters using C++ and the OpenCV library. We compare the accuracy and running time of 9 machine learning algorithms, resulting in the selection of the decision tree algorithm. Our coral detector runs in 70 ms, compared with the 22 s taken by the algorithm of Purser et al. (2009).
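The Gabor wavelet filter bank mentioned above is built from kernels of the following family. This pure-Python sketch of the real part of a Gabor kernel is an illustration only, not the authors' C++/OpenCV implementation; parameter names (`sigma`, `theta`, `lam`, `psi`) follow the usual convention:

```python
import math

def gabor_kernel(size, sigma, theta, lam, psi=0.0):
    """Real part of a Gabor filter kernel: a Gaussian envelope times a
    cosine carrier oriented at angle theta, as a size x size grid."""
    half = size // 2
    kernel = []
    for y in range(-half, half + 1):
        row = []
        for x in range(-half, half + 1):
            # rotate coordinates into the filter's orientation
            xr = x * math.cos(theta) + y * math.sin(theta)
            yr = -x * math.sin(theta) + y * math.cos(theta)
            row.append(math.exp(-(xr ** 2 + yr ** 2) / (2 * sigma ** 2))
                       * math.cos(2 * math.pi * xr / lam + psi))
        kernel.append(row)
    return kernel
```

Convolving an image patch with a bank of such kernels at several orientations and wavelengths yields the texture features that the classifier then consumes.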
Bosworth, Edward L., Jr.
1987-01-01
The focus of this research is the investigation of data structures and associated search algorithms for automated fault diagnosis of complex systems such as the Hubble Space Telescope. Such data structures and algorithms will form the basis of a more sophisticated Knowledge Based Fault Diagnosis System. As a part of the research, several prototypes were written in VAXLISP and implemented on one of the VAX-11/780's at the Marshall Space Flight Center. This report describes and gives the rationale for both the data structures and algorithms selected. A brief discussion of a user interface is also included.
Comparison of evolutionary algorithms in gene regulatory network model inference.
LENUS (Irish Health Repository)
2010-01-01
ABSTRACT: BACKGROUND: The evolution of high-throughput technologies that measure gene expression levels has created a data base for inferring gene regulatory networks (GRNs), a process also known as reverse engineering of GRNs. However, the nature of these data has made this process very difficult. At the moment, several methods exist for discovering qualitative causal relationships between genes with high accuracy from microarray data, but large-scale quantitative analysis on real biological datasets cannot be performed to date, as existing approaches are not suitable for real microarray data, which are noisy and insufficient. RESULTS: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and to offer a comprehensive comparison of approaches under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and their ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. CONCLUSIONS: Presented is a comparison framework for the assessment of evolutionary algorithms used to infer gene regulatory networks. Promising methods are identified and a platform for the development of appropriate model formalisms is established.
COMPARISON OF VOLUMETRIC REGISTRATION ALGORITHMS FOR TENSOR-BASED MORPHOMETRY
Villalon, Julio; Joshi, Anand A.; Toga, Arthur W.; Thompson, Paul M.
2015-01-01
Nonlinear registration of brain MRI scans is often used to quantify morphological differences associated with disease or genetic factors. Recently, surface-guided fully 3D volumetric registrations have been developed that combine intensity-guided volume registrations with cortical surface constraints. In this paper, we compare one such algorithm to two popular high-dimensional volumetric registration methods: large-deformation viscous fluid registration, formulated in a Riemannian framework, and the diffeomorphic “Demons” algorithm. We performed an objective morphometric comparison, using a large MRI dataset from 340 young adult twin subjects to examine 3D patterns of correlations in anatomical volumes. Surface-constrained volume registration gave greater effect sizes for detecting morphometric associations near the cortex, while the other two approaches gave greater effect sizes subcortically. These findings suggest novel ways to combine the advantages of multiple methods in the future. PMID:26925198
Comparison between Galileo CBOC Candidates and BOC(1,1) in Terms of Detection Performance
Directory of Open Access Journals (Sweden)
Fabio Dovis
2008-01-01
Many scientific activities within the navigation field have been focused on the analysis of innovative modulations for both GPS L1C and Galileo E1 OS, after the 2004 agreement between the United States and the European Commission on the development of GPS and Galileo. The joint effort by scientists of both parties has been focused on the multiplexed binary offset carrier (MBOC), which is defined on the basis of its spectrum; in this sense, different time waveforms can be selected as possible modulation candidates. The goal of this paper is to present the detection performance of the composite BOC (CBOC) implementation of an MBOC signal in terms of detection and false alarm probabilities. A comparison between the CBOC and BOC(1,1) modulations is also presented to show how the CBOC solution, designed to have excellent tracking performance and multipath rejection capabilities, does not limit the acquisition process.
Comparison of algorithms for ultrasound image segmentation without ground truth
Sikka, Karan; Deserno, Thomas M.
2010-02-01
Image segmentation is a prerequisite to medical image analysis. A variety of segmentation algorithms have been proposed, and most are evaluated on a small dataset or based on classification of a single feature. The lack of a gold standard (ground truth) further adds to the discrepancy in these comparisons. This work proposes a new methodology for comparing image segmentation algorithms without ground truth by building a matrix called the region-correlation matrix. Subsequently, suitable distance measures are proposed for quantitative assessment of similarity. The first measure takes into account the degree of region overlap or identical match. The second considers the degree of splitting or misclassification by using an appropriate penalty term. These measures are shown to satisfy the axioms of a quasi-metric. They are applied to a comparative analysis of synthetic segmentation maps to show their direct correlation with human intuition of similar segmentations. Since ultrasound images are difficult to segment and usually lack a ground truth, the measures are further used to compare the recently proposed spectral clustering algorithm (encoding spatial and edge information) with standard k-means over abdominal ultrasound images. Improving the parameterization and enlarging the feature space for k-means steadily increased segmentation quality to that of spectral clustering.
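The region-correlation idea can be illustrated with a toy sketch: count co-occurring labels between two segmentation maps, then score the degree of overlap. This is a loose stand-in for the paper's actual measures, whose exact definitions are not given in the abstract:

```python
from collections import Counter

def region_correlation(seg_a, seg_b):
    """Region-correlation matrix as a sparse counter: how many pixels
    carry label a in one map and label b in the other (maps are
    flattened, equal-length label sequences)."""
    return Counter(zip(seg_a, seg_b))

def overlap_similarity(seg_a, seg_b):
    """Fraction of pixels lying in best-matched region pairs -- a crude
    stand-in for the paper's first (overlap-based) measure."""
    m = region_correlation(seg_a, seg_b)
    best = {}
    for (a, b), n in m.items():
        if n > best.get(a, (0, None))[0]:
            best[a] = (n, b)   # best-matching region in seg_b for region a
    matched = sum(n for n, _ in best.values())
    return matched / len(seg_a)
```

Two maps whose regions correspond one-to-one score 1.0 regardless of the label values used, which is the property a ground-truth-free comparison needs.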
CANDIDATE TREE-IN-BUD PATTERN SELECTION AND CLASSIFICATION USING BALL SCALE ENCODING ALGORITHM
Directory of Open Access Journals (Sweden)
T. Akilandeswari
2013-10-01
Asthma, chronic obstructive pulmonary disease, influenza, pneumonia, tuberculosis, lung cancer and many other breathing problems are leading causes of death and disability all over the world. These diseases affect the lung. Radiology is a primary assessment method, but it has low specificity in predicting the presence of these diseases. Computer Assisted Detection (CAD) will help specialists detect one of these diseases at an early stage. A method has been proposed by Ulas Bagci to detect lung abnormalities using fuzzy connected object estimation, ball scale encoding, and comparison of various features extracted from local patches of lung CT images. In this paper, the Tree-in-Bud patterns are selected after segmentation by using the ball scale encoding algorithm.
A benchmark for comparison of dental radiography analysis algorithms.
Wang, Ching-Wei; Huang, Cheng-Ta; Lee, Jia-Hong; Li, Chung-Hsing; Chang, Sheng-Wei; Siao, Ming-Jhih; Lai, Tat-Ming; Ibragimov, Bulat; Vrtovec, Tomaž; Ronneberger, Olaf; Fischer, Philipp; Cootes, Tim F; Lindner, Claudia
2016-07-01
Dental radiography plays an important role in clinical diagnosis, treatment and surgery. In recent years, efforts have been made on developing computerized dental X-ray image analysis systems for clinical usage. A novel framework for objective evaluation of automatic dental radiography analysis algorithms has been established under the auspices of the IEEE International Symposium on Biomedical Imaging 2015 Bitewing Radiography Caries Detection Challenge and Cephalometric X-ray Image Analysis Challenge. In this article, we present the datasets, methods and results of the challenge and lay down the principles for future uses of this benchmark. The main contributions of the challenge include the creation of the dental anatomy data repository of bitewing radiographs, the creation of the anatomical abnormality classification data repository of cephalometric radiographs, and the definition of objective quantitative evaluation for comparison and ranking of the algorithms. With this benchmark, seven automatic methods for analysing cephalometric X-ray images and two automatic methods for detecting bitewing radiography caries have been compared, and detailed quantitative evaluation results are presented in this paper. Based on the quantitative evaluation results, we believe automatic dental radiography analysis is still a challenging and unsolved problem. The datasets and the evaluation software will be made available to the research community, further encouraging future developments in this field. (http://www-o.ntust.edu.tw/~cweiwang/ISBI2015/).
Kim, Dae-Won; Byun, Yong-Ik; Alcock, Charles; Khardon, Roni
2011-01-01
We present a new QSO selection algorithm using a Support Vector Machine (SVM), a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars and microlensing events using 58 known QSOs, 1,629 variable stars and 4,288 non-variables from the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ~80% of known QSOs with a 25% false positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) dataset, which consists of 40 million lightcurves, and found 1,620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false po...
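The classification step can be sketched with a toy linear max-margin classifier trained by sub-gradient descent on the hinge loss. This illustrates the SVM principle only; it is not the authors' pipeline, which applied full SVM machinery to MACHO light-curve features:

```python
def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Toy linear SVM via sub-gradient descent on the regularized hinge
    loss. X: list of feature vectors; y: labels in {-1, +1}."""
    d = len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            margin = yi * (sum(wj * xj for wj, xj in zip(w, xi)) + b)
            if margin < 1:   # inside the margin: hinge gradient is active
                w = [wj + lr * (yi * xj - lam * wj) for wj, xj in zip(w, xi)]
                b += lr * yi
            else:            # outside the margin: only shrink the weights
                w = [wj - lr * lam * wj for wj in w]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1
```

Trained on a small separable 2D set, the classifier assigns the two clusters to opposite sides of the learned hyperplane.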
A comparison of computational methods and algorithms for the complex gamma function
Ng, E. W.
1974-01-01
A survey and comparison of some computational methods and algorithms for the gamma and log-gamma functions of complex arguments are presented. The methods and algorithms covered include Chebyshev approximations, Padé expansion and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421, published in the Communications of the ACM by H. Kuki, is the best program either for individual application or for inclusion in subroutine libraries.
A theoretical comparison of evolutionary algorithms and simulated annealing
Energy Technology Data Exchange (ETDEWEB)
Hart, W.E.
1995-08-28
This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that, under mild conditions, a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
Comparison of heterogeneity quantification algorithms for brain SPECT perfusion images
Modzelewski, Romain; Janvresse, Elise; De La Rue, Thierry; Vera, Pierre
2012-01-01
Background: Several algorithms from the literature were compared with the original random walk (RW) algorithm for brain perfusion heterogeneity quantification purposes. The algorithms are compared on a set of 210 brain single photon emission computed tomography (SPECT) simulations and 40 patient exams. Methods: Five algorithms were tested on numerical phantoms. The numerical anthropomorphic Zubal head phantom was used to generate 42 (6 × 7) different brain SPECT simulations. Seven diffuse cortical ...
Institute of Scientific and Technical Information of China (English)
Li Xi; Ji Hong; Zheng Ruiming; Li Ting
2009-01-01
In order to improve the performance of peer-to-peer file sharing systems under mobile distributed environments, a novel always-optimally-coordinated (AOC) criterion and a corresponding candidate selection algorithm are proposed in this paper. Compared with the traditional min-hops criterion, the new approach introduces a fuzzy knowledge combination theory to investigate several important factors that influence file transfer success rate and efficiency. Whereas min-hops-based protocols only ask the nearest candidate peer for desired files, the selection algorithm based on AOC comprehensively considers users' preferences and network requirements with flexible balancing rules. Furthermore, its advantage is also expressed in its independence from specific resource discovery protocols, allowing for scalability. The simulation results show that when using the AOC-based peer selection algorithm, system performance is much better than with the min-hops scheme, with the file transfer success rate improved by more than 50% and transfer time reduced by at least 20%.
Directory of Open Access Journals (Sweden)
Erkan Beşdok
2009-08-01
This paper introduces a comparison of training algorithms for radial basis function (RBF) neural networks for classification purposes. RBF networks provide effective solutions in many science and engineering fields. They are especially popular in the pattern classification and signal processing areas. Several algorithms have been proposed for training RBF networks. The Artificial Bee Colony (ABC) algorithm is a new, very simple and robust population-based optimization algorithm that is inspired by the intelligent behavior of honey bee swarms. The training performance of the ABC algorithm is compared with the genetic algorithm, the Kalman filtering algorithm and the gradient descent algorithm. In the experiments, not only well-known classification problems from the UCI repository, such as the Iris, Wine and Glass datasets, were used, but an experimental setup was also designed in which inertial-sensor-based terrain classification for autonomous ground vehicles was achieved. Experimental results show that the use of the ABC algorithm results in better learning than that of the others.
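Whatever training algorithm is used (ABC, genetic, Kalman filtering, or gradient descent), the network being trained has the same form. A minimal sketch of a single-output Gaussian RBF network's forward pass, with hypothetical centers, widths, and weights:

```python
import math

def rbf_forward(x, centers, widths, weights, bias=0.0):
    """Forward pass of a single-output RBF network with Gaussian units:
    output = bias + sum_k w_k * exp(-||x - c_k||^2 / (2 * s_k^2))."""
    activations = [
        math.exp(-sum((xi - ci) ** 2 for xi, ci in zip(x, c))
                 / (2 * s ** 2))
        for c, s in zip(centers, widths)
    ]
    return bias + sum(w * a for w, a in zip(weights, activations))
```

Training then means choosing the centers, widths, weights, and bias to minimize classification error; the algorithms compared in the abstract differ only in how they search that parameter space.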
Directory of Open Access Journals (Sweden)
DURUSU, A.
2014-08-01
Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they make the solar panels operate at the maximum power point (MPP) whatever the changes in environmental conditions are. For this reason, they take an important place in increasing PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of MPPT algorithms have been proposed in the literature. Comparisons of MPPT algorithms in the literature are made with sun-simulator-based test systems under laboratory conditions for short durations. In this study, however, the performances of the four most commonly used MPPT algorithms are compared under real environmental conditions for longer periods. A dual identical experimental setup is designed to compare two of the considered MPPT algorithms in a synchronized manner. As a result of this study, a ranking among these algorithms is presented, and the results show that the Incremental Conductance (IC) algorithm gives the best performance.
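The winning Incremental Conductance algorithm exploits the fact that at the MPP, dP/dV = I + V·dI/dV = 0, i.e. the incremental conductance dI/dV equals the negative instantaneous conductance -I/V. A hedged sketch of one tracker update (variable names and the fixed perturbation step are illustrative, not from the paper's setup):

```python
def inc_cond_step(v, i, v_prev, i_prev, v_ref, step=0.1):
    """One update of a simplified Incremental Conductance MPPT tracker.
    Returns the new voltage reference for the PV converter."""
    dv, di = v - v_prev, i - i_prev
    if dv == 0:
        if di == 0:
            return v_ref                     # no change: assume at MPP
        return v_ref + step if di > 0 else v_ref - step
    inc = di / dv                            # incremental conductance
    inst = -i / v                            # negative instantaneous conductance
    if abs(inc - inst) < 1e-9:
        return v_ref                         # dP/dV == 0: at the MPP
    # dP/dV > 0 (inc > inst): climb toward higher voltage, else retreat
    return v_ref + step if inc > inst else v_ref - step
```

On a toy power curve P(V) = 25 - (V - 5)^2 with I = P/V, the reference moves up when operating below the MPP at V = 5 and down when above it.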
A First Comparison of Kepler Planet Candidates in Single and Multiple Systems
Latham, David W; Quinn, Samuel N; Batalha, Natalie M; Borucki, William J; Brown, Timothy M; Bryson, Stephen T; Buchhave, Lars A; Caldwell, Douglas A; Carter, Joshua A; Christiansen, Jesse L; Ciardi, David R; Cochran, William D; Dunham, Edward W; Fabrycky, Daniel C; Ford, Eric B; Gautier, Thomas N; Gilliland, Ronald L; Holman, Matthew J; Howell, Steve B; Ibrahim, Khadeejah A; Isaacson, Howard; Basri, Gibor; Furesz, Gabor; Geary, John C; Jenkins, Jon M; Koch, David G; Lissauer, Jack J; Marcy, Geoffrey W; Quintana, Elisa V; Ragozzine, Darin; Sasselov, Dimitar D; Shporer, Avi; Steffen, Jason H; Welsh, William F; Wohler, Bill
2011-01-01
In this letter we present an overview of the rich population of systems with multiple candidate transiting planets found in the first four months of Kepler data. The census of multiples includes 115 targets that show 2 candidate planets, 45 with 3, 8 with 4, and 1 each with 5 and 6, for a total of 170 systems with 408 candidates. When compared to the 827 systems with only one candidate, the multiples account for 17 percent of the total number of systems, and a third of all the planet candidates. We compare the characteristics of candidates found in multiples with those found in singles. False positives due to eclipsing binaries are much less common for the multiples, as expected. Singles and multiples are both dominated by planets smaller than Neptune; 69 +2/-3 percent for singles and 86 +2/-5 percent for multiples. This result, that systems with multiple transiting planets are less likely to include a transiting giant planet, suggests that close-in giant planets tend to disrupt the orbital inclinations of sm...
Geier, J. E.; Bath, A.; Stephansson, O.; Luukkonen, A.
2012-12-01
Site characterizations for deep radioactive-waste repositories consider rock properties, groundwater conditions, and the influences of regional settings and site-specific evolution. We present a comparison of these aspects for two candidate repository sites that have similar rocks and coastal settings, but are 200 km apart on opposite sides of the Gulf of Bothnia. The Olkiluoto site in Finland and the Forsmark site in Sweden are both in hard crystalline rock (migmatite gneiss and metagranite, respectively) with groundwater flow mainly via fractures. Both sites are undergoing licensing for a high-level radioactive-waste repository. The licensing is stepwise in Finland, and operation in both countries will be strictly regulated, but all responsibility lies with the implementers until accepted closure. The comparison reveals many expected similarities but also unexplained differences, which illustrate the complexities of site characterization in fractured crystalline rock. Both sites underwent a similar sequence of hydrologic conditions over the Weichselian and earlier glacial cycles. Hydrogeologically, Forsmark has more conductive upper bedrock, contributing to a very flat water table. Deep bedrock at Olkiluoto is more fractured in the horizontal plane. At repository depth and below, Forsmark likely contains larger volumes of low-conductivity rock. At both sites, the local model is connected to regional-scale boundaries via submarine deformation zones which (especially at Olkiluoto) are poorly characterized. Stress measurements at the two sites have shown that vertical stress is in agreement with the weight of overburden while horizontal stresses differ in magnitude and orientation. Interpreted overcoring stress measurements from Forsmark are almost twice the magnitudes estimated from hydraulic methods. Rock mechanical differences include the possibility that Olkiluoto bedrock is more prone to spalling than Forsmark. Olkiluoto bedrock is more anisotropic in terms of
Comparison of the 3D Protein Structure Prediction Algorithms
Fadhl M. Al-Akwaa,; Husam Elhetari
2014-01-01
Determining protein 3D structure is important for understanding protein function. Protein structure can be determined experimentally or computationally. Experimental methods are expensive and time-consuming, whereas computational methods are the alternative solution. On the other hand, computational methods require strong computing power, assumed models and effective algorithms. In this paper we compare the performance of these algorithms. We find that a Genetic Algorithm with impro...
Lillo-Box, J; Bouy, H
2014-01-01
The Kepler mission has discovered thousands of planet candidates. Currently, some of them have already been discarded; more than 200 have been confirmed by follow-up observations, and several hundred have been validated. However, most of them are still awaiting confirmation. Thus, priorities (in terms of the probability of the candidate being a real planet) must be established for subsequent observations. The motivation of this work is to provide a set of isolated (good) host candidates to be further tested by other techniques. We identify close companions of the candidates that could have contaminated the light curve of the planet host. We used the AstraLux North instrument located at the 2.2 m telescope in the Calar Alto Observatory to obtain diffraction-limited images of 174 Kepler objects of interest. The lucky-imaging technique used in this work is compared to other AO and speckle imaging observations of Kepler planet host candidates. We define a new parameter, the blended source confidence level (B...
A comparison of performance measures for online algorithms
DEFF Research Database (Denmark)
Boyar, Joan; Irani, Sandy; Larsen, Kim Skak
2009-01-01
This paper provides a systematic study of several recently suggested measures for online algorithms in the context of a specific problem, namely, the two-server problem on three collinear points. Even though the problem is simple, it encapsulates a core challenge in online algorithms which is to b...
Comparison of Duty Cycle Generator Algorithms for SPICE Simulation of SMPS
Directory of Open Access Journals (Sweden)
Alexander Abramovitz
2012-01-01
The paper presents and discusses an algorithm for average modeling of the PWM modulator in switch-mode power systems using general-purpose electronic circuit simulators such as PSPICE. A comparison with previous theoretical models is conducted. To test the accuracy of the average PWM models, a comparison to cycle-by-cycle simulation was conducted. The proposed algorithm shows better accuracy than earlier counterparts.
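The idea behind average modeling — replacing the switching waveform with its duty-cycle-weighted mean — can be illustrated outside any SPICE engine by driving a simple RC load both ways and comparing the settled outputs. All parameter values below are illustrative:

```python
def simulate_rc(vin, duty, tau, t_switch, dt, n_steps, averaged):
    """Integrate dv/dt = (src - v) / tau with either the averaged source
    (duty * vin) or the raw cycle-by-cycle PWM square wave."""
    v = 0.0
    for k in range(n_steps):
        if averaged:
            src = duty * vin                    # average PWM model
        else:                                   # cycle-by-cycle switching
            src = vin if (k * dt) % t_switch < duty * t_switch else 0.0
        v += dt * (src - v) / tau               # forward-Euler RC step
    return v
```

When the switching period is much shorter than the load time constant, the averaged model settles to essentially the same output as the full switched simulation, which is why average models are so much cheaper to run.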
A Comparison of Evolutionary Algorithms for Tracking Time-Varying Recursive Systems
Directory of Open Access Journals (Sweden)
White Michael S
2003-01-01
A comparison is made of the behaviour of some evolutionary algorithms in time-varying adaptive recursive filter systems. Simulations show that an algorithm including random immigrants outperforms a more conventional algorithm using the breeder genetic algorithm as the mutation operator when the time variation is discontinuous, but neither algorithm performs well when the time variation is rapid but smooth. To address this deficit, a new hybrid algorithm which uses a hill climber as an additional genetic operator, applied for several steps at each generation, is introduced. A comparison is made of the effect of applying the hill-climbing operator a few times to all members of the population or a larger number of times solely to the best individual; it is found that applying it to the whole population yields the better results, substantially improved compared with those obtained using earlier methods.
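The hill-climbing operator described above can be sketched as follows: for a fixed number of steps, perturb an individual and keep the perturbation only if it improves fitness. This is an illustrative reconstruction, not the authors' exact operator:

```python
import random

def hill_climb(x, fitness, steps=5, scale=0.1, rng=None):
    """Hill climbing as a genetic operator: accept a Gaussian
    perturbation of the genome only if it raises fitness."""
    rng = rng or random.Random(0)
    best, best_f = list(x), fitness(x)
    for _ in range(steps):
        cand = [g + rng.gauss(0.0, scale) for g in best]
        f = fitness(cand)
        if f > best_f:
            best, best_f = cand, f
    return best
```

By construction the returned individual is never worse than the input, so applying the operator to every member of the population (the variant the paper found best) can only improve the population's fitness at each generation.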
Hees, A; Guéna, J; Abgrall, M; Bize, S; Wolf, P
2016-08-05
We use 6 yrs of accurate hyperfine frequency comparison data of the dual rubidium and caesium cold atom fountain FO2 at LNE-SYRTE to search for a massive scalar dark matter candidate. Such a scalar field can induce harmonic variations of the fine structure constant, of the mass of fermions, and of the quantum chromodynamic mass scale, which will directly impact the rubidium/caesium hyperfine transition frequency ratio. We find no signal consistent with a scalar dark matter candidate but provide improved constraints on the coupling of the putative scalar field to standard matter. Our limits are complementary to previous results that were only sensitive to the fine structure constant and improve them by more than an order of magnitude when only a coupling to electromagnetism is assumed.
Hees, A; Abgrall, M; Bize, S; Wolf, P
2016-01-01
We use six years of accurate hyperfine frequency comparison data of the dual Rubidium and Caesium cold atom fountain FO2 at LNE-SYRTE to search for a massive scalar dark matter candidate. Such a scalar field can induce harmonic variations of the fine structure constant, of the mass of fermions and of the quantum chromodynamic mass scale, which will directly impact the Rubidium/Caesium hyperfine transition frequency ratio. We find no signal consistent with a scalar dark matter candidate but provide improved constraints on the coupling of the putative scalar field to standard matter. Our limits are complementary to previous results that were only sensitive to the fine-structure constant, and improve them by more than an order of magnitude when only a coupling to electromagnetism is assumed.
Comparison of Workflow Scheduling Algorithms in Cloud Computing
Directory of Open Access Journals (Sweden)
Navjot Kaur
2011-10-01
Full Text Available Cloud computing has gained popularity in recent times. Cloud computing is internet-based computing, whereby shared resources, software, and information are provided to computers and other devices on demand, like a public utility. It is a technology that uses the internet and central remote servers to maintain data and applications, allowing consumers and businesses to use applications without installation and to access their personal files from any computer with internet access. The main aim of this work is to study various problems, issues, and types of scheduling algorithms for cloud workflows, as well as to design new workflow algorithms for a cloud workflow management system. The proposed algorithms are implemented on a real-time cloud developed using Microsoft .NET technologies. The algorithms are compared with each other on the basis of parameters such as total execution time, execution time for the algorithm, and estimated execution time. Experimental results generated via simulation show that Algorithm 2 is much better than Algorithm 1, as it reduces makespan time.
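The abstract's Algorithm 1 and Algorithm 2 are not specified, so as an illustration only, here is a minimal sketch of how two stand-in schedulers (round-robin and greedy longest-processing-time-first dispatch) can be compared on makespan; all names and task times below are hypothetical.

```python
# Hypothetical illustration: round-robin vs. greedy LPT dispatching of
# independent tasks to VMs; makespan = finish time of the busiest VM.

def round_robin(tasks, n_vms):
    loads = [0.0] * n_vms
    for i, t in enumerate(tasks):
        loads[i % n_vms] += t          # cyclic dispatch, ignores current load
    return max(loads)                  # makespan

def greedy_lpt(tasks, n_vms):
    loads = [0.0] * n_vms
    for t in sorted(tasks, reverse=True):
        loads[loads.index(min(loads))] += t  # longest task to least-loaded VM
    return max(loads)

tasks = [7, 3, 9, 2, 4, 8, 5]          # invented task execution times
rr, lpt = round_robin(tasks, 3), greedy_lpt(tasks, 3)
```

On this toy instance the load-aware greedy scheduler achieves a smaller makespan than blind round-robin, which is the kind of difference the paper's comparison parameters are meant to expose.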
Jelen, Birsen
2015-01-01
In recent years almost every newly opened government-funded university in Turkey has established a music department where future music teachers are educated, and piano is compulsory for every music teacher candidate in Turkey. The aim of this research is to compare piano teaching instructors' and their students' perceptions about the current…
Teacher Candidates' Attitudes towards Inclusion Education and Comparison of Self-Compassion Levels
Aydin, Aydan; Kuzu, Seher
2013-01-01
This study was designed to compare teacher candidates' attitudes toward inclusion education in terms of several variables and their self-compassion levels. The sample consists of Grade 4 students (547) of Marmara University Atatürk Faculty of Education and the Faculty of Science and Letters. In this study, a personnel…
Performance Comparison of Constrained Artificial Bee Colony Algorithm
Directory of Open Access Journals (Sweden)
Soudeh Babaeizadeh
2015-06-01
Full Text Available This study aims to evaluate, analyze, and compare the performance of the constrained Artificial Bee Colony (ABC) algorithms available in the literature. In recent decades, many different variants of the ABC algorithm have been suggested to solve Constrained Optimization Problems (COPs). However, to the best of the authors' knowledge, there are few comparative studies on the numerical performance of those algorithms. This study considers a set of well-known benchmark problems from the test problems of the Congress on Evolutionary Computation 2006 (CEC2006).
Comparison of tracking algorithms implemented in OpenCV
Directory of Open Access Journals (Sweden)
Janku Peter
2016-01-01
Full Text Available Computer vision is a very progressive and modern part of computer science. From a scientific point of view, theoretical aspects of computer vision algorithms prevail in many papers and publications. The underlying theory is really important, but on the other hand, the final implementation of an algorithm significantly affects its performance and robustness. For this reason, this paper tries to compare real implementations of tracking algorithms (one class of computer vision problems) that can be found in the very popular OpenCV library. Moreover, the possibilities for optimization are discussed.
An Empirical Comparison of Boosting and Bagging Algorithms
Directory of Open Access Journals (Sweden)
R. Kalaichelvi Chandrahasan
2011-11-01
Full Text Available Classification is one of the data mining techniques that analyses a given data set and induces a model for each class based on the features present in the data. Bagging and boosting are heuristic approaches to developing classification models. These techniques generate a diverse ensemble of classifiers by manipulating the training data given to a base learning algorithm. They are very successful in improving the accuracy of some algorithms on artificial and real-world datasets. We review algorithms such as AdaBoost, Bagging, ADTree, and Random Forest in conjunction with the Meta classifier and the Decision Tree classifier. We also describe a large empirical study comparing several variants. The algorithms are analyzed on accuracy, precision, error rate, and execution time.
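As a sketch of the bagging idea described above (an ensemble built from bootstrap samples of the training data, combined by majority vote), the following pure-Python example bags one-feature decision stumps on a toy 1-D dataset; the stump learner and data are illustrative, not the paper's setup.

```python
import random

# Illustrative bagging sketch: bootstrap resampling plus majority voting
# over one-feature decision stumps ("predict 1 if x > threshold").

def stump_fit(X, y):
    """Pick the threshold t maximizing accuracy of 'predict 1 if x > t'."""
    best_t, best_acc = None, -1.0
    for t in X:
        acc = sum((x > t) == bool(label) for x, label in zip(X, y)) / len(y)
        if acc > best_acc:
            best_t, best_acc = t, acc
    return best_t

def bagging_fit(X, y, n_models=11, seed=0):
    rng = random.Random(seed)
    models = []
    for _ in range(n_models):
        idx = [rng.randrange(len(X)) for _ in range(len(X))]  # bootstrap sample
        models.append(stump_fit([X[i] for i in idx], [y[i] for i in idx]))
    return models

def bagging_predict(models, x):
    votes = sum(x > t for t in models)   # each stump votes 0 or 1
    return int(votes > len(models) / 2)  # majority vote

X = [1, 2, 3, 4, 10, 11, 12, 13]
y = [0, 0, 0, 0, 1, 1, 1, 1]
models = bagging_fit(X, y)
preds = [bagging_predict(models, x) for x in X]
```

Each bootstrap sample yields a slightly different stump, and the vote smooths out the individual stumps' boundary placement, which is exactly the diversity-by-data-manipulation mechanism the abstract describes.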
Checking EABC performance in comparison others cryptography algorithms
Directory of Open Access Journals (Sweden)
Hamid Mehdi
2013-08-01
Full Text Available Nowadays, selecting an algorithm for encrypting data is very important, considering that attacks are varied and that there are many encryption algorithms available to protect information; choosing one algorithm among many is therefore hard. Data confidentiality is one of the most important functions of encryption algorithms: data transferred between different systems remains unintelligible to unauthorized systems or people. Moreover, encryption algorithms must maintain data integrity and provide availability of information. New encryption methods prevent attackers from simply accessing the information and do not allow them to discover the relationship between the information and its encrypted form, so recovering the plaintext is difficult for them; this complexity increases their longevity and effectiveness. In this article, EABC performance has been checked considering execution time, CPU utilization, and throughput of encrypting/decrypting a database.
Advanced reconstruction algorithms for electron tomography: From comparison to combination
Energy Technology Data Exchange (ETDEWEB)
Goris, B. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Roelandts, T. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Batenburg, K.J. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1098XG Amsterdam (Netherlands); Heidari Mezerji, H. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Bals, S., E-mail: sara.bals@ua.ac.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)
2013-04-15
In this work, the simultaneous iterative reconstruction technique (SIRT), the total variation minimization (TVM) reconstruction technique and the discrete algebraic reconstruction technique (DART) for electron tomography are compared and their advantages and disadvantages are discussed. Furthermore, we describe how the result of a three-dimensional (3D) reconstruction based on TVM can provide objective information that is needed as the input for a DART reconstruction. This approach results in a tomographic reconstruction whose segmentation is carried out in an objective manner. - Highlights: ► A comparative study between different reconstruction algorithms for tomography is performed. ► Reconstruction algorithms that use prior knowledge about the specimen yield superior results. ► One reconstruction algorithm can provide the prior knowledge for a second algorithm.
COMPARISON OF DIFFERENT SEGMENTATION ALGORITHMS FOR DERMOSCOPIC IMAGES
Directory of Open Access Journals (Sweden)
A.A. Haseena Thasneem
2015-05-01
Full Text Available This paper compares different algorithms for the segmentation of skin lesions in dermoscopic images. The basic segmentation algorithms compared are thresholding techniques (global and adaptive), region-based techniques (K-means, fuzzy C-means, expectation maximization, and statistical region merging), contour models (active contour model and Chan-Vese model), and spectral clustering. Accuracy, sensitivity, specificity, border error, Hammoude distance, Hausdorff distance, MSE, PSNR, and elapsed time metrics were used to evaluate the various segmentation techniques.
Comparison of Hierarchical Agglomerative Algorithms for Clustering Medical Documents
Directory of Open Access Journals (Sweden)
Rafa E. Al-Qutaish
2012-06-01
Full Text Available The extensive amount of data stored in medical documents requires developing methods that help users to find what they are looking for effectively by organizing large amounts of information into a small number of meaningful clusters. The produced clusters contain groups of objects which are more similar to each other than to the members of any other group. Thus, the aim of high-quality document clustering algorithms is to determine a set of clusters in which the inter-cluster similarity is minimized and the intra-cluster similarity is maximized. The most important feature in many clustering algorithms is treating the clustering problem as an optimization process, that is, maximizing or minimizing a particular clustering criterion function defined over the whole clustering solution. The only real difference between agglomerative algorithms is how they choose which clusters to merge. The main purpose of this paper is to compare different agglomerative algorithms based on the evaluation of the cluster quality produced by different hierarchical agglomerative clustering algorithms using different criterion functions for the problem of clustering medical documents. Our experimental results showed that the agglomerative algorithm that uses I1 as its criterion function for choosing which clusters to merge produced better cluster quality than the other criterion functions in terms of entropy and purity as external measures.
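A minimal sketch of the agglomerative scheme described above, assuming a single-link merge criterion in place of the paper's I1 criterion function; 1-D points are used for brevity.

```python
# Single-link hierarchical agglomerative clustering sketch; the paper's
# criterion functions (e.g. I1) would replace `single_link_distance`.

def single_link_distance(a, b):
    return min(abs(x - y) for x in a for y in b)

def agglomerate(points, n_clusters):
    clusters = [[p] for p in points]
    while len(clusters) > n_clusters:
        # the only real difference between agglomerative algorithms:
        # which pair of clusters to merge next
        i, j = min(
            ((i, j) for i in range(len(clusters))
                    for j in range(i + 1, len(clusters))),
            key=lambda ij: single_link_distance(clusters[ij[0]], clusters[ij[1]]),
        )
        clusters[i] += clusters.pop(j)   # merge the closest pair
    return [sorted(c) for c in clusters]

result = agglomerate([1.0, 1.2, 5.0, 5.1, 9.0], 3)
```

Swapping the `key` function is all it takes to turn this into complete-link, average-link, or a criterion-function-driven variant, which is why the merge criterion dominates the comparison.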
A comparison of cohesive features in IELTS writing of Chinese candidates and IELTS examiners
Institute of Scientific and Technical Information of China (English)
刘可
2012-01-01
This study aims at investigating the cohesive ties applied in IELTS written texts produced by Chinese candidates and IELTS examiners, uncovering the differences in the use of cohesive features between the two groups, and analyzing whether the employment of cohesive ties is a possible problem in the Chinese candidates' writing. Six written texts are analyzed in the study: three IELTS essays by Chinese candidates and three by IELTS examiners. The findings show that there exist differences in the use of cohesive devices between the two groups. Compared to the IELTS examiners' writing, the Chinese candidates employed excessive conjunctions, with relatively fewer comparative and demonstrative reference ties used in their texts. Additionally, it appears that overusing repetition ties constitutes a potential problem in the candidates' writing. Implications and suggestions about raising learners' awareness and helping them to use cohesive devices effectively are discussed.
Ha, Jeongmok; Jeong, Hong
2016-07-01
This study investigates the directed acyclic subgraph (DAS) algorithm, which is used to solve discrete labeling problems much more rapidly than other Markov-random-field-based inference methods but with competitive accuracy. However, the mechanism by which the DAS algorithm simultaneously achieves competitive accuracy and fast execution speed has not been elucidated by a theoretical derivation. We analyze the DAS algorithm by comparing it with a message passing algorithm. Graphical models, inference methods, and energy-minimization frameworks are compared between the DAS and message passing algorithms. Moreover, the performances of the DAS and other message passing methods [sum-product belief propagation (BP), max-product BP, and tree-reweighted message passing] are experimentally compared.
Direct Imaging of Extra-Solar Planets – Homogeneous Comparison of Detected Planets and Candidates
Neuhäuser, Ralph; Schmidt, Tobias
2012-01-01
Searching the literature, we found 25 stars with directly imaged planets and candidates. We gathered photometric and spectral information for all these objects to derive their luminosities in a homogeneous way, taking a bolometric correction into account. Using theoretical evolutionary models, one can then estimate the mass from luminosity, temperature, and age. According to our mass estimates, all of them can have a mass below 25 Jupiter masses, so that they can be considered planets.
A Comparison of Three Algorithms for Orion Drogue Parachute Release
Matz, Daniel A.; Braun, Robert D.
2015-01-01
The Orion Multi-Purpose Crew Vehicle is susceptible to flipping apex forward between drogue parachute release and main parachute inflation. A smart drogue release algorithm is required to select a drogue release condition that will not result in an apex-forward main parachute deployment. The baseline algorithm is simple and elegant, but does not perform as well as desired in drogue failure cases. A simple modification to the baseline algorithm can improve performance, but can also sometimes fail to identify a good release condition. A new algorithm employing simplified rotational dynamics and a numeric predictor to minimize a rotational energy metric is proposed. A Monte Carlo analysis of a drogue failure scenario is used to compare the performance of the algorithms. The numeric predictor prevents more of the cases from flipping apex forward, and also results in an improvement in the capsule attitude at main bag extraction. The sensitivity of the numeric predictor to aerodynamic dispersions, errors in the navigated state, and execution rate is investigated, showing little degradation in performance.
Performance comparison of several optimization algorithms in matched field inversion
Institute of Scientific and Technical Information of China (English)
ZOU Shixin; YANG Kunde; MA Yuanliang
2004-01-01
Optimization efficiencies and mechanisms of simulated annealing, the genetic algorithm, differential evolution, and downhill simplex differential evolution are compared and analyzed. Simulated annealing and the genetic algorithm use a directed random process to search the parameter space for an optimal solution. They include the ability to avoid local minima, but as no gradient information is used, searches may be relatively inefficient. Differential evolution uses distance and azimuth information between individuals of a population to search the parameter space; the initial search is effective, but the search speed decreases quickly because the differential information between individuals of the population vanishes. Local downhill simplex and global differential evolution methods are developed separately and combined to produce a hybrid downhill simplex differential evolution algorithm. The hybrid algorithm is sensitive to gradients of the objective function, and its search of the parameter space is effective. These algorithms are applied to matched field inversion with synthetic data. Optimal values of the parameters, the final values of the objective function, and inversion times are presented and compared.
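The differential evolution mechanism described in this abstract (perturbing one individual with the difference of two others, crossover, then greedy selection) can be illustrated with a minimal DE/rand/1/bin implementation on a 2-D sphere function; the control parameters are generic textbook defaults, not those used in the paper.

```python
import random

# Minimal DE/rand/1/bin sketch minimizing a 2-D sphere function.

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9,
                           generations=200, seed=1):
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        for i in range(pop_size):
            # the difference of two individuals perturbs a third (mutation)
            a, b, c = rng.sample([p for k, p in enumerate(pop) if k != i], 3)
            j_rand = rng.randrange(dim)    # guarantee one mutated component
            trial = [a[j] + F * (b[j] - c[j])
                     if (rng.random() < CR or j == j_rand) else pop[i][j]
                     for j in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            if f(trial) <= f(pop[i]):      # greedy selection
                pop[i] = trial
    return min(pop, key=f)

def sphere(x):
    return sum(v * v for v in x)

best = differential_evolution(sphere, [(-5, 5), (-5, 5)])
```

As the population converges, the difference vectors `b - c` shrink, which is the vanishing differential information the abstract identifies as the cause of the slowing search.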
Comparison of Algorithms for an Electronic Nose in Identifying Liquors
Institute of Scientific and Technical Information of China (English)
Zhi-biao Shi; Tao Yu; Qun Zhao; Yang Li; Yu-bin Lan
2008-01-01
When an electronic nose is used to identify different varieties of distilled liquors, the pattern recognition algorithm is usually chosen on the basis of experience, which lacks a guiding principle. In this research, different brands of distilled spirits were identified using pattern recognition algorithms (principal component analysis and the artificial neural network). The recognition rates of the different algorithms were compared. The recognition rate of the Back Propagation Neural Network (BPNN) is the highest. Owing to its slow convergence speed, the BPNN tends to get trapped easily in local minima. A chaotic BPNN was tried in order to overcome these disadvantages of the BPNN. The convergence speed of the chaotic BPNN is 75.5 times faster than that of the BPNN.
A structural comparison of measurement-based admission control algorithms
Institute of Scientific and Technical Information of China (English)
GU Yi-ran; WANG Suo-ping; WU Hai-ya
2006-01-01
Measurement-based admission control (MBAC) algorithms are designed for relaxed real-time service. In contrast to traditional connection admission control mechanisms, the most attractive feature of an MBAC algorithm is that it does not require a prior traffic model, which is very difficult for a user to come up with before establishing a flow. Other advantages of MBAC include that it can achieve higher network utilization and offer quality service to users. In this article, a study of the equations in MBAC shows that they can all be expressed in the same form. Based on this common form, some MBAC algorithms can achieve the same performance only if they satisfy certain conditions.
Comparison of evolutionary algorithms for LPDA antenna optimization
Lazaridis, Pavlos I.; Tziris, Emmanouil N.; Zaharis, Zaharias D.; Xenos, Thomas D.; Cosmas, John P.; Gallion, Philippe B.; Holmes, Violeta; Glover, Ian A.
2016-08-01
A novel approach to broadband log-periodic antenna design is presented, in which some of the most powerful evolutionary algorithms are applied and compared for the optimal design of wire log-periodic dipole arrays (LPDA) using the Numerical Electromagnetics Code. The target is to achieve an optimal antenna design with respect to maximum gain, gain flatness, front-to-rear ratio (F/R) and standing wave ratio. The LPDA parameters optimized are the dipole lengths, the spacing between the dipoles, and the dipole wire diameters. The evolutionary algorithms compared are Differential Evolution (DE), Particle Swarm Optimization (PSO), Taguchi, Invasive Weed Optimization (IWO), and Adaptive Invasive Weed Optimization (ADIWO). Superior performance is achieved by the IWO (best results) and PSO (fast convergence) algorithms.
Performance comparison of SLFN training algorithms for DNA microarray classification.
Huynh, Hieu Trung; Kim, Jung-Ja; Won, Yonggwan
2011-01-01
The classification of biological samples measured by DNA microarrays has been a major topic of interest in the last decade, and several approaches to this topic have been investigated. However, classifying the high-dimensional data of microarrays still presents a challenge to researchers. In this chapter, we focus on evaluating the performance of training algorithms for single hidden layer feedforward neural networks (SLFNs) in classifying DNA microarrays. The training algorithms are backpropagation (BP), the extreme learning machine (ELM), regularized least squares ELM (RLS-ELM), and a recently proposed effective algorithm called neural-SVD. We also compare the performance of the neural network approaches with popular classifiers such as the support vector machine (SVM), principal component analysis (PCA), and Fisher discriminant analysis (FDA).
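A minimal sketch of the ELM training scheme named above, assuming the standard formulation (random, fixed hidden-layer weights with output weights solved analytically by least squares); toy data stands in for microarray data.

```python
import numpy as np

# ELM sketch: random fixed hidden layer, analytic least-squares output weights.

def elm_train(X, y, n_hidden=20, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # input weights, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # hidden-layer activations
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # output weights in one solve
    return W, b, beta

def elm_predict(W, b, beta, X):
    return np.tanh(X @ W + b) @ beta

X = np.random.default_rng(1).normal(size=(200, 3))
y = (X[:, 0] > 0).astype(float)                   # toy binary target
W, b, beta = elm_train(X, y)
acc = float(np.mean((elm_predict(W, b, beta, X) > 0.5) == (y > 0.5)))
```

The absence of iterative weight updates is what makes ELM training dramatically faster than backpropagation, at the cost of needing enough random hidden units to span the target function.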
Smail, Linda
2016-06-01
The basic task of any probabilistic inference system in Bayesian networks is computing the posterior probability distribution for a subset or subsets of random variables, given values or evidence for some other variables from the same Bayesian network. Many methods and algorithms have been developed for exact and approximate inference in Bayesian networks. This work compares two exact inference methods in Bayesian networks, Lauritzen-Spiegelhalter and the successive restrictions algorithm, from the perspective of computational efficiency. The two methods were applied for comparison to a Chest Clinic Bayesian network. Results indicate that the successive restrictions algorithm shows more computational efficiency than the Lauritzen-Spiegelhalter algorithm.
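The basic task described above (computing a posterior given evidence) can be illustrated by exact inference via enumeration on a hypothetical two-node fragment reminiscent of the Chest Clinic network; the probabilities are invented for illustration, and real algorithms like Lauritzen-Spiegelhalter organize this same computation far more efficiently on large networks.

```python
# Exact inference by enumeration on a hypothetical two-node fragment
# (Smoking -> Bronchitis); probability values are invented.

p_s = {True: 0.3, False: 0.7}          # P(Smoking)
p_b_given_s = {True: 0.6, False: 0.1}  # P(Bronchitis=true | Smoking)

def posterior_smoking(bronchitis=True):
    # enumerate the joint, then normalize by the evidence probability
    like = p_b_given_s if bronchitis else {s: 1 - p for s, p in p_b_given_s.items()}
    joint = {s: p_s[s] * like[s] for s in (True, False)}
    z = sum(joint.values())            # P(Bronchitis = evidence)
    return {s: joint[s] / z for s in joint}

post = posterior_smoking(True)         # P(Smoking | Bronchitis = true)
```

Enumeration costs grow exponentially with the number of variables, which is precisely why the efficiency of structured exact methods is worth comparing.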
Comparison of Supervised and Unsupervised Learning Algorithms for Pattern Classification
Directory of Open Access Journals (Sweden)
R. Sathya
2013-02-01
Full Text Available This paper presents a comparative account of unsupervised and supervised learning models and their pattern classification evaluations as applied to the higher education scenario. Classification plays a vital role in machine-based learning algorithms, and in the present study we found that, though the error back-propagation learning algorithm provided by the supervised learning model is very efficient for a number of non-linear real-time problems, the KSOM of the unsupervised learning model offers an efficient solution and classification in the present study.
Comparison of parameter estimation algorithms in hydrological modelling
DEFF Research Database (Denmark)
Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan
2006-01-01
Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well for th...
Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches
Directory of Open Access Journals (Sweden)
Ufuk Çelik
2015-01-01
Full Text Available The present study evaluated the diagnostic accuracy of immune system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. They are divided into four types: migraine, tension, cluster, and other primary headaches. With this main objective in mind, three different neurologists were asked to enter the medical records of 850 patients into our web-based expert system hosted on our project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. AIS are classification algorithms inspired by the biological immune system mechanism, which involves significant and distinct capabilities. These algorithms simulate specialties of the immune system such as discrimination, learning, and the memorizing process in order to be used for classification, optimization, or pattern recognition. According to the results, the accuracy of the classifiers used in this study ranged from 95% to 99%, except for one poorly performing classifier that yielded 71% accuracy.
Evaluation and Comparison of Motion Estimation Algorithms for Video Compression
Directory of Open Access Journals (Sweden)
Avinash Nayak
2013-08-01
Full Text Available Video compression has become an essential component of broadcast and entertainment media. Motion estimation and compensation techniques, which can effectively eliminate temporal redundancy between adjacent frames, have been widely applied in popular video compression coding standards such as MPEG-2 and MPEG-4. Traditional fast block matching algorithms are easily trapped in local minima, resulting in some degradation of video quality after decoding. In this paper, various computing techniques are evaluated in video compression for achieving a globally optimal solution for motion estimation. Zero motion prejudgment is implemented to find static macroblocks (MBs), which do not need to perform the remaining search, thus reducing the computational cost. The Adaptive Rood Pattern Search (ARPS) motion estimation algorithm is also adopted to reduce the motion vector overhead in frame prediction. The simulation results showed that the ARPS algorithm is very effective in reducing the computational overhead and achieves very good Peak Signal to Noise Ratio (PSNR) values. This method significantly reduces the computational complexity involved in frame prediction and also yields the least prediction error in all video sequences. Thus the ARPS technique is more efficient than conventional search algorithms in video compression.
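As context for the abstract above, here is a minimal full-search block-matching sketch, the exhaustive baseline that fast patterns such as ARPS approximate, using the sum of absolute differences (SAD) criterion; the frames and block sizes are toy values.

```python
# Full-search block matching with the SAD criterion on tiny frames.

def sad(block_a, block_b):
    return sum(abs(a - b) for ra, rb in zip(block_a, block_b)
                          for a, b in zip(ra, rb))

def get_block(frame, y, x, size):
    return [row[x:x + size] for row in frame[y:y + size]]

def full_search(ref, cur, y, x, size, radius):
    block = get_block(cur, y, x, size)
    best = None
    for dy in range(-radius, radius + 1):       # exhaustive search window
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if 0 <= yy <= len(ref) - size and 0 <= xx <= len(ref[0]) - size:
                cost = sad(block, get_block(ref, yy, xx, size))
                if best is None or cost < best[0]:
                    best = (cost, dy, dx)
    return best  # (SAD, dy, dx) = best-matching motion vector

ref = [[0] * 6 for _ in range(6)]
ref[1][1], ref[1][2], ref[2][1], ref[2][2] = 9, 8, 7, 6   # 2x2 object
cur = [[0] * 6 for _ in range(6)]
cur[2][2], cur[2][3], cur[3][2], cur[3][3] = 9, 8, 7, 6   # shifted by (1, 1)
mv = full_search(ref, cur, 2, 2, 2, 2)
```

Full search evaluates every candidate in the window; fast methods like ARPS probe only a rood-shaped subset of these candidates, trading a small risk of local minima for a large reduction in SAD evaluations.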
A benchmark for comparison of cell tracking algorithms
M. Maška (Martin); V. Ulman (Vladimír); K. Svoboda; P. Matula (Pavel); P. Matula (Petr); C. Ederra (Cristina); A. Urbiola (Ainhoa); T. España (Tomás); R. Venkatesan (Rajkumar); D.M.W. Balak (Deepak); P. Karas (Pavel); T. Bolcková (Tereza); M. Štreitová (Markéta); C. Carthel (Craig); S. Coraluppi (Stefano); N. Harder (Nathalie); K. Rohr (Karl); K.E.G. Magnusson (Klas E.); J. Jaldén (Joakim); H.M. Blau (Helen); O.M. Dzyubachyk (Oleh); P. Křížek (Pavel); G.M. Hagen (Guy); D. Pastor-Escuredo (David); D. Jimenez-Carretero (Daniel); M.J. Ledesma-Carbayo (Maria); A. Muñoz-Barrutia (Arrate); E. Meijering (Erik); M. Kozubek (Michal); C. Ortiz-De-Solorzano (Carlos)
2014-01-01
Motivation: Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International Sy
Comparison Between Four Detection Algorithms for GEO Objects
Yanagisawa, T.; Uetsuhara, M.; Banno, H.; Kurosaki, H.; Kinoshita, D.; Kitazawa, Y.; Hanada, T.
2012-09-01
Four detection algorithms for GEO objects are being developed in a collaboration between Kyushu University, IHI Corporation, and JAXA. Each algorithm is designed to process CCD images to detect GEO objects. The first is a PC-based stacking method which has been developed at JAXA since 2000. Numerous CCD images are used to detect faint GEO objects below the limiting magnitude of a single CCD image. Sub-images are cropped from many CCD images to fit the movement of the objects, and a median image of all the sub-images is then created. Although this method has the ability to detect faint objects, it takes time to analyze. The second is the line-identifying technique, which also uses many CCD frames and finds any series of objects arrayed on a straight line from the first frame to the last frame. This can analyze data faster than the stacking method, but cannot detect objects as faint as the stacking method can. The third is the robust stacking method developed by IHI Corporation, which uses the average instead of the median to reduce analysis time. This has the same analysis speed as the line-identifying technique and better detection capability for faint objects. The fourth is the FPGA-based stacking method, which uses binarized images and a new algorithm installed on an FPGA board, reducing analysis time by roughly a factor of one thousand. All four algorithms analyzed the same sets of data to evaluate their advantages and disadvantages. By comparing their analysis times and results, an optimal usage of these algorithms is considered.
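The stacking idea described above (combining motion-aligned sub-images pixel-wise so a faint object's repeated signal survives while uncorrelated noise and outliers are suppressed) can be sketched as follows; the sub-images are tiny invented arrays, and the robust variant would swap the median for a mean.

```python
import statistics

# Median stacking sketch: combine aligned sub-images pixel by pixel.

def median_stack(sub_images):
    rows, cols = len(sub_images[0]), len(sub_images[0][0])
    return [[statistics.median(img[r][c] for img in sub_images)
             for c in range(cols)] for r in range(rows)]

subs = [
    [[0, 0, 0], [0, 5, 0], [0, 0, 90]],   # 5 = faint object at the centre
    [[0, 80, 0], [0, 5, 0], [0, 0, 0]],   # 80/90/70 = transient outliers
    [[0, 0, 0], [0, 5, 70], [0, 0, 0]],
]
stacked = median_stack(subs)
```

The median keeps the persistent value 5 at the object's position but zeros out the one-frame outliers, which is why median stacking detects fainter objects than a single exposure, while a mean stack would smear the outliers into the result in exchange for speed.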
Parallel divide and conquer bio-sequence comparison based on Smith-Waterman algorithm
Institute of Scientific and Technical Information of China (English)
ZHANG Fa; QIAO Xiangzhen; LIU Zhiyong
2004-01-01
Tools for pair-wise bio-sequence alignment have long played a central role in computational biology, and several algorithms for bio-sequence alignment have been developed. The Smith-Waterman algorithm, based on dynamic programming, is considered the most fundamental alignment algorithm in bioinformatics. However, the existing parallel Smith-Waterman algorithm needs a large memory space, and this disadvantage limits the size of the sequences that can be handled. As the volume of biological sequence data expands rapidly, the memory requirement of the existing parallel Smith-Waterman algorithm has become a critical problem. To solve this problem, we develop a new parallel bio-sequence alignment algorithm using the strategy of divide and conquer, named the PSW-DC algorithm. In our algorithm, we first partition the query sequence into several subsequences and distribute them to the processors, then compare each subsequence with the whole subject sequence in parallel using the Smith-Waterman algorithm to obtain interim results, and finally obtain the optimal alignment between the query sequence and the subject sequence through a special combination and extension method. The memory space required by our algorithm is reduced significantly in comparison with existing ones. We also develop a key technique of combination and extension, named the C&E method, to manipulate the interim results and obtain the final sequence alignment. We implement the new parallel bio-sequence alignment algorithm, the PSW-DC, on a cluster parallel system.
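A minimal, score-only sketch of the Smith-Waterman dynamic-programming kernel that PSW-DC distributes over subsequences, assuming a linear gap penalty; the scoring values (+2 match, -1 mismatch, -1 gap) are conventional illustrative choices.

```python
# Score-only Smith-Waterman local alignment, linear gap penalty.

def smith_waterman(a, b, match=2, mismatch=-1, gap=-1):
    H = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    best = 0
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            s = match if a[i - 1] == b[j - 1] else mismatch
            H[i][j] = max(0,                      # local alignment can restart
                          H[i - 1][j - 1] + s,    # match / mismatch
                          H[i - 1][j] + gap,      # gap in b
                          H[i][j - 1] + gap)      # gap in a
            best = max(best, H[i][j])
    return best

score = smith_waterman("ACACACTA", "AGCACACA")
```

The full H matrix is what makes the memory footprint quadratic in the sequence lengths, which is exactly the cost that PSW-DC attacks by aligning query subsequences independently and stitching the interim results together.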
A Limited Comparison of the Thermal Durability of Polyimide Candidate Matrix Polymers with PMR-15
Bowles, Kenneth J.; Papadopoulos, Demetrios S.; Scheiman, Daniel A.; Inghram, Linda L.; McCorkle, Linda S.; Klans, Ojars V.
2003-01-01
Studies were conducted with six different candidate high-temperature neat matrix resin specimens of varied geometric shapes to investigate the mechanisms involved in the thermal degradation of polyimides like PMR-15. The metrics for assessing the quality of these candidates were chosen to be glass transition temperature (T(sub g)), thermo-oxidative stability, dynamic mechanical properties, microstructural changes, and dimensional stability. The processing and mechanical properties were not investigated in the study reported herein. The dimensional changes and surface layer growth were measured and recorded. The data were in agreement with earlier published data. An initial weight increase reaction was observed to be dominating at the lower temperatures. However, at the more elevated temperatures, the weight loss reactions were prevalent and probably masked the weight gain reaction. These data confirmed the findings of the existence of an initial weight gain reaction previously reported. Surface- and core-dependent weight losses were shown to control the polymer degradation at the higher temperatures.
Elder, Katherine A; Grilo, Carlos M; Masheb, Robin M; Rothschild, Bruce S; Burke-Martindale, Carolyn H; Brody, Michelle L
2006-04-01
This study compared two self-report methods for assessing binge eating in severely obese bariatric surgery candidates. Participants were 249 gastric bypass candidates who completed the Questionnaire on Eating and Weight Patterns-Revised (QEWP-R) and the Eating Disorder Examination-Questionnaire (EDE-Q) prior to surgery. Participants were classified by binge eating status (i.e., no or recurrent binge eating) with each of the measures. The degree of agreement was examined, as well as the relationship between binge eating and measures of convergent validity. The two measures identified a similar number of patients with recurrent binge eating (i.e., at least 1 binge/week); however, overlap was modest (kappa=.26). Agreement on twice weekly binge eating was poor (kappa=.05). The QEWP-R and EDE-Q both identified clinically meaningful groups of binge eaters. The EDE-Q appeared to differentiate between non/infrequent bingers and recurrent bingers better than the QEWP-R, based on measures of convergent validity. In addition, the EDE-Q demonstrated an advantage because it identified binge eaters with elevated weight and shape overconcern. Using the self-report measures concurrently did not improve identification of binge eating in this study. More work is needed to determine the construct validity and clinical utility of these measures with gastric bypass patients.
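The agreement statistic reported above can be computed as follows; this is a generic Cohen's kappa sketch for two binary ratings (recurrent binge eating yes/no) of the same patients, with invented per-patient classifications.

```python
# Cohen's kappa: chance-corrected agreement between two binary raters.

def cohens_kappa(a, b):
    n = len(a)
    p_o = sum(x == y for x, y in zip(a, b)) / n     # observed agreement
    p_e = ((sum(a) / n) * (sum(b) / n)              # both say "yes" by chance
           + (1 - sum(a) / n) * (1 - sum(b) / n))   # both say "no" by chance
    return (p_o - p_e) / (1 - p_e)

qewpr = [1, 1, 0, 0, 1, 0, 0, 0]   # hypothetical QEWP-R classifications
edeq  = [1, 0, 0, 0, 1, 1, 0, 0]   # hypothetical EDE-Q classifications
kappa = cohens_kappa(qewpr, edeq)
```

Note how two raters can agree on most patients yet earn only a modest kappa, because kappa discounts the agreement expected by chance; this is how the study's measures could both flag similar numbers of binge eaters while overlapping only modestly (kappa = .26).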
Energy Technology Data Exchange (ETDEWEB)
Fotina, Irina; Kragl, Gabriele; Kroupa, Bernhard; Trausmuth, Robert; Georg, Dietmar [Medical Univ. Vienna (Austria). Division of Medical Radiation Physics, Dept. of Radiotherapy
2011-07-15
Comparison of the dosimetric accuracy of the enhanced collapsed cone (eCC) algorithm with a commercially available Monte Carlo (MC) dose calculation for complex treatment techniques. A total of 8 intensity-modulated radiotherapy (IMRT) and 2 stereotactic body radiotherapy (SBRT) lung cases were calculated with the eCC and MC algorithms using the treatment planning systems (TPS) Oncentra MasterPlan 3.2 (Nucletron) and Monaco 2.01 (Elekta/CMS). Fluence optimization as well as sequencing of IMRT plans was primarily performed using Monaco. Dose prediction errors were calculated using MC as the reference. The dose-volume histogram (DVH) analysis was complemented with 2D and 3D gamma evaluation. Both algorithms were compared to measurements using the Delta4 system (Scandidos). IMRT plans recalculated with eCC resulted in lower planning target volume (PTV) coverage, as well as in lower organ-at-risk (OAR) doses of up to 8%. Small deviations between MC and eCC in PTV dose (1-2%) were detected for IMRT cases, while larger deviations were observed for SBRT (up to 5%). Conformity indices of both calculations were similar; however, the homogeneity of the eCC-calculated plans was slightly better. Delta4 measurements confirmed the high dosimetric accuracy of both TPS. Mean dose prediction errors < 3% for the PTV suggest that both algorithms enable highly accurate dose calculations under clinical conditions. However, users should be aware of slightly underestimated OAR doses when using the eCC algorithm. (orig.)
Comparison of Adaptive Antenna Arrays Controlled by Gradient Algorithms
Directory of Open Access Journals (Sweden)
Z. Raida
1994-09-01
The paper presents the Simple Kalman Filter (SKF), which has been designed for the control of digital adaptive antenna arrays. The SKF has been applied to both the pilot-signal system and the steering-vector system. These SKF-based systems are compared with adaptive antenna arrays controlled by the classical LMS and Variable Step Size (VSS) LMS algorithms and by the pure Kalman filter. It is shown that the pure Kalman filter is the most convenient for the control of adaptive arrays because it does not require any a priori information about noise statistics and excels in a high rate of convergence and low misadjustment. Extremely high computational requirements are the drawback of this filter. Hence, if only low signal-processor computational power is available, the SKF is recommended. Computational requirements of the SKF are of the same order as those of the classical LMS algorithm. On the other hand, all the important features of the pure Kalman filter are inherited by the SKF. The paper shows that the presented Kalman filters can be regarded as special gradient algorithms, which is why they can be compared with the LMS family.
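As a hedged illustration of the gradient-algorithm family compared above, the following Python sketch implements the classical fixed-step LMS update on a synthetic system-identification task. The tap count, step size, and signals are illustrative assumptions, not the paper's antenna-array configuration.

```python
import random

def lms_filter(x, d, n_taps, mu):
    """Classical LMS: adapt FIR weights w so that w . x_k tracks d_k."""
    w = [0.0] * n_taps
    sq_err = []
    for k in range(n_taps, len(x)):
        window = x[k - n_taps:k][::-1]              # [x[k-1], x[k-2], ...]
        y = sum(wi * xi for wi, xi in zip(w, window))
        e = d[k] - y                                # instantaneous error
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]  # LMS update
        sq_err.append(e * e)
    return w, sq_err

# Identify a hidden 2-tap system h = [0.6, -0.3] from its input/output.
random.seed(1)
x = [random.uniform(-1.0, 1.0) for _ in range(2000)]
d = [0.0, 0.0] + [0.6 * x[k - 1] - 0.3 * x[k - 2] for k in range(2, len(x))]
w, sq_err = lms_filter(x, d, n_taps=2, mu=0.05)
```

With a fixed step size the weights converge toward the hidden taps; a VSS-LMS variant would instead shrink `mu` as the error decreases to trade convergence rate against misadjustment.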
Comparison of cluster expansion fitting algorithms for interactions at surfaces
Herder, Laura M.; Bray, Jason M.; Schneider, William F.
2015-10-01
Cluster expansions (CEs) are Ising-type interaction models that are increasingly used to model interaction and ordering phenomena at surfaces, such as the adsorbate-adsorbate interactions that control coverage-dependent adsorption or surface-vacancy interactions that control surface reconstructions. CEs are typically fit to a limited set of data derived from density functional theory (DFT) calculations. The CE fitting process involves iterative selection of DFT data points to include in a fit set and selection of interaction clusters to include in the CE. Here we compare the performance of three CE fitting algorithms-the MIT Ab-initio Phase Stability code (MAPS, the default in ATAT software), a genetic algorithm (GA), and a steepest descent (SD) algorithm-against synthetic data. The synthetic data is encoded in model Hamiltonians of varying complexity motivated by the observed behavior of atomic adsorbates on a face-centered-cubic transition metal close-packed (111) surface. We compare the performance of the leave-one-out cross-validation score against the true fitting error available from knowledge of the hidden CEs. For these systems, SD achieves lowest overall fitting and prediction error independent of the underlying system complexity. SD also most accurately predicts cluster interaction energies without ignoring or introducing extra interactions into the CE. MAPS achieves good results in fewer iterations, while the GA performs least well for these particular problems.
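A minimal sketch of the fit-and-score loop described above, using a synthetic Hamiltonian with known interactions in the spirit of the paper's hidden-CE methodology. The least-squares fit and leave-one-out cross-validation score are generic stand-ins, not the MAPS/GA/SD implementations.

```python
import random

def solve(A, b):
    """Gauss-Jordan elimination with partial pivoting for small dense systems."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [u - f * v for u, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def fit_ce(X, E):
    """Least-squares effective cluster interactions J: minimize ||X J - E||^2."""
    n = len(X[0])
    A = [[sum(x[i] * x[j] for x in X) for j in range(n)] for i in range(n)]
    b = [sum(x[i] * e for x, e in zip(X, E)) for i in range(n)]
    return solve(A, b)

def loocv(X, E):
    """Leave-one-out cross-validation score (RMS prediction error)."""
    s = 0.0
    for k in range(len(X)):
        J = fit_ce(X[:k] + X[k + 1:], E[:k] + E[k + 1:])
        pred = sum(j * x for j, x in zip(J, X[k]))
        s += (pred - E[k]) ** 2
    return (s / len(X)) ** 0.5

# Hidden "true" CE: constant, point, and nearest-neighbour pair terms.
random.seed(0)
J_true = [1.0, -0.4, 0.15]
X = [[1.0, random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(30)]
E = [sum(j * x for j, x in zip(J_true, xi)) for xi in X]
J_fit = fit_ce(X, E)
score = loocv(X, E)
```

With noiseless synthetic energies the fit recovers the hidden interactions exactly, mirroring how the paper compares the CV score against the true fitting error known from the hidden CE.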
A comparison of clustering algorithms in article recommendation system
Tantanasiriwong, Supaporn
2012-01-01
A recommendation system is a tool that can suggest to researchers resources suitable for their research interests using content-based filtering. In this paper, clustering algorithms, as unsupervised learning, are introduced for grouping objects based on feature selection and similarities. Publication information from the Science Citation Index is used as the dataset for clustering, with feature extraction serving as dimensionality reduction of the articles; Latent Dirichlet Allocation (LDA), Principal Component Analysis (PCA), and K-Means are compared to determine the best algorithm. In the experiment, the selected dataset consists of 2625 documents extracted from the SCI corpus from 2001 to 2009. Cluster counts of 50, 100, 200, and 250 were considered, and the F-measure was used to evaluate the three algorithms. The results show that the LDA technique achieved accuracy of up to 95.5%, the highest among the clustering techniques compared.
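A small Python sketch of the cluster-then-evaluate workflow used in the comparison above: k-means on synthetic 2-D features, scored with a pairwise F-measure. The data, feature space, and cluster count are illustrative assumptions, not the SCI-corpus setup.

```python
import random

def kmeans(points, k, iters=50, seed=0):
    """Plain Lloyd's k-means on tuples of floats; returns cluster labels."""
    rng = random.Random(seed)
    centers = rng.sample(points, k)
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            clusters[j].append(p)
        centers = [tuple(sum(col) / len(cl) for col in zip(*cl)) if cl else centers[i]
                   for i, cl in enumerate(clusters)]
    return [min(range(k), key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            for p in points]

def f_measure(true, pred):
    """Pairwise F-measure: agreement on pairs grouped together in both labelings."""
    tp = fp = fn = 0
    for i in range(len(true)):
        for j in range(i + 1, len(true)):
            same_t, same_p = true[i] == true[j], pred[i] == pred[j]
            if same_p and same_t: tp += 1
            elif same_p: fp += 1
            elif same_t: fn += 1
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    return 2 * p * r / (p + r) if p + r else 0.0

# Two well-separated synthetic "topic" clusters in a 2-D feature space.
rng = random.Random(1)
pts = [(rng.gauss(0, 0.3), rng.gauss(0, 0.3)) for _ in range(40)] + \
      [(rng.gauss(4, 0.3), rng.gauss(4, 0.3)) for _ in range(40)]
truth = [0] * 40 + [1] * 40
pred = kmeans(pts, 2)
score = f_measure(truth, pred)
```

In the paper the features come from LDA or PCA projections of documents rather than raw coordinates; only the clustering-and-scoring skeleton is shown here.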
Ebtehaj, Isa; Bonakdari, Hossein
2014-01-01
The existence of sediments in wastewater greatly affects the performance of sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. The article reviews the performance of the genetic algorithm (GA) and the imperialist competitive algorithm (ICA) in minimizing the target function (mean square error of observed and predicted Froude number). To study the impact of bed load transport parameters, six different models have been presented using four non-dimensional groups. Moreover, the roulette wheel selection method is used to select the parents. For the selected model, the ICA (root mean square error (RMSE) = 0.007, mean absolute percentage error (MAPE) = 3.5%) shows better results than the GA (RMSE = 0.007, MAPE = 5.6%). For all six models, the ICA returns better results than the GA. Also, the results of these two algorithms were compared with a multi-layer perceptron and existing equations.
Comparison with reconstruction algorithms in magnetic induction tomography.
Han, Min; Cheng, Xiaolin; Xue, Yuyan
2016-05-01
Magnetic induction tomography (MIT) is an imaging technology that uses the principle of electromagnetic detection to measure the conductivity distribution. In this research, we make an effort to improve the quality of image reconstruction in MIT, covering both the forward problem and the image reconstruction. With respect to the forward problem, the variational finite element method is adopted. We transform the solution of a nonlinear partial differential equation into linear equations by using field subdivision and appropriate interpolation functions, so that the voltage data of the sensing coils can be calculated. With respect to the image reconstruction, a modified iterative Newton-Raphson (NR) algorithm is presented in order to improve the quality of the image. In the iterative NR, a weighting matrix and L1-norm regularization are introduced to overcome the drawbacks of large estimation errors and poor stability of the reconstructed image. On the other hand, within the incomplete-data framework of the expectation maximization (EM) algorithm, the image reconstruction can be converted to an EM problem through the likelihood function, improving the under-determined problem. In the EM, missing data are introduced and the measurement data and sensitivity matrix are compensated to overcome the drawback that the number of measured voltages is far smaller than the number of unknowns. In addition to the two aspects above, image segmentation is also used to make the treatment of lesions more flexible and adaptive to patients' real conditions, which provides a theoretical reference for the application of the MIT technique in clinical settings. The results show that solving the forward problem with the variational finite element method can provide the measurement voltage data for image reconstruction, and that the improved iterative NR method and the EM algorithm can enhance the image quality.
Kim, Dae-Won; Protopapas, Pavlos; Byun, Yong-Ik; Alcock, Charles; Khardon, Roni; Trichas, Markos
2011-07-01
We present a new quasi-stellar object (QSO) selection algorithm using a Support Vector Machine, a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars, and microlensing events using 58 known QSOs, 1629 variable stars, and 4288 non-variables in the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ~80% of known QSOs with a 25% false-positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) data set, which consists of 40 million light curves, and found 1620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false-positive rate, we crossmatched the candidates with astronomical catalogs including the Spitzer Surveying the Agents of a Galaxy's Evolution LMC catalog and a few X-ray catalogs. The results further suggest that the majority of the candidates, more than 70%, are QSOs.
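The train/cross-validate workflow above can be sketched as follows. Since the paper's classifier is a Support Vector Machine, note that this stdlib-only stand-in substitutes a nearest-centroid classifier, and the two-feature synthetic data (standing in for period, amplitude, etc.) is an illustration, not MACHO data.

```python
import random

def centroid(rows):
    return [sum(c) / len(c) for c in zip(*rows)]

def nearest_centroid_fit(X, y):
    """Per-class centroids; a simple stand-in for the paper's SVM training."""
    return {c: centroid([x for x, yi in zip(X, y) if yi == c]) for c in sorted(set(y))}

def predict(model, x):
    return min(model, key=lambda c: sum((a - b) ** 2 for a, b in zip(x, model[c])))

def cross_validate(X, y, folds=5, seed=0):
    """k-fold cross-validation; returns overall accuracy."""
    idx = list(range(len(X)))
    random.Random(seed).shuffle(idx)
    correct = 0
    for f in range(folds):
        test = set(idx[f::folds])
        Xtr = [X[i] for i in idx if i not in test]
        ytr = [y[i] for i in idx if i not in test]
        model = nearest_centroid_fit(Xtr, ytr)
        correct += sum(predict(model, X[i]) == y[i] for i in test)
    return correct / len(X)

# Synthetic two-feature toy data: "QSO-like" vs "variable-star-like" classes.
rng = random.Random(2)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(100)] + \
    [[rng.gauss(3, 1), rng.gauss(3, 1)] for _ in range(100)]
y = ['qso'] * 100 + ['var'] * 100
acc = cross_validate(X, y)
```

The paper additionally reports a true-positive/false-positive breakdown per class; the same fold loop applies, tallying per-class counts instead of overall accuracy.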
Comparison of Algorithms for Control of Loads for Voltage Regulation
DEFF Research Database (Denmark)
Douglass, Philip James; Han, Xue; You, Shi
2014-01-01
Autonomous flexible loads can be utilized to regulate voltage on low voltage feeders. This paper compares two algorithms for controlling loads: a simple voltage droop, where load power consumption is varied in proportion to RMS voltage; and a normalized relative voltage droop, which modifies the simple voltage droop by subtracting the mean voltage value at the bus and dividing by the standard deviation. These two controllers are applied to hot water heaters simulated in a simple residential feeder. The simulation results show that both controllers reduce the frequency of undervoltage events…
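The two droop laws can be sketched as below. The nominal voltage, gains, baseline power, and the exact normalization window are illustrative assumptions, not the paper's controller parameters.

```python
import statistics

def simple_droop(v, v_nom=230.0, gain=0.5, p_base=1000.0):
    """Load power varies in proportion to the RMS voltage deviation."""
    return p_base * (1.0 + gain * (v - v_nom) / v_nom)

def normalized_droop(v, history, gain=0.1, p_base=1000.0):
    """Droop on the z-score of voltage: subtract the mean at the bus and
    divide by the standard deviation of recent samples (the window and
    gain here are assumptions for illustration)."""
    mu = statistics.mean(history)
    sigma = statistics.stdev(history)
    z = (v - mu) / sigma if sigma > 0 else 0.0
    return p_base * (1.0 + gain * z)

# A sagging bus voltage: both controllers cut the load's consumption.
history = [228.0, 229.5, 230.0, 231.0, 231.5]   # recent bus voltages
p1 = simple_droop(226.0)
p2 = normalized_droop(226.0, history)
```

Normalizing by the local mean and spread makes the response depend on how unusual the voltage is at that bus, rather than on its absolute deviation from nominal.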
Liu, Hongbo; Yuan, Mei; Yang, Xiaolan; Hu, Xiaolei; Liao, Juan; Dang, Jizheng; Xie, Yanling; Pu, Jun; Li, Yuanli; Zhan, Chang-Guo; Liao, Fei
2015-01-01
Spectrophotometric-dual-enzyme-simultaneous-assay (SDESA) for enzyme-linked-immunosorbent-assay (ELISA) of two components in one well is a patented platform when a special pair of labels is accessible. With microplate readers, alkaline phosphatase acting on 4-nitro-1-naphthylphosphate (4NNPP) served as label A; Pseudomonas aeruginosa arylsulfatase (PAAS) and acetylcholinesterase (AChE), acting on their substrates derived from 4-nitrophenol/analogue, served as candidate label B and were compared for SDESA with an engineered alkaline phosphatase of Escherichia coli (ECAP). For SDESA, the interference from overlapped absorbance was corrected based on the linear additivity of absorbance to derive initial rates, reflected by absorbance change at 450 nm for ECAP and at 405 nm for PAAS or AChE, after correction for spontaneous hydrolysis. For SDESA with ECAP, AChE already had sufficient activity in an optimized buffer; PAAS was more favorable for substrate stability and product absorbance, except for lower activity. Therefore, PAAS engineered for sufficient activity, plus alkaline phosphatase, is attractive for ELISA via SDESA.
A Damage Resistance Comparison Between Candidate Polymer Matrix Composite Feedline Materials
Nettles, A. T
2000-01-01
As part of NASA's focused technology programs for future reusable launch vehicles, a task is underway to study the feasibility of using polymer matrix composite feedlines instead of metal ones on propulsion systems. This is desirable to reduce weight and manufacturing costs. The task consists of comparing several prototype composite feedlines made by various methods. These methods are electron-beam curing, standard hand lay-up and autoclave cure, solvent assisted resin transfer molding, and thermoplastic tape laying. One of the critical technology drivers for composite components is resistance to foreign object damage. This paper presents results of an experimental study of the damage resistance of the candidate materials from which the prototype feedlines are manufactured. The materials examined all have a 5-harness weave of IM7 as the fiber constituent (except for the thermoplastic, which is unidirectional tape laid up in a bidirectional configuration). The resins tested were 977-6, PR 520, SE-SA-1, RS-E3 (e-beam curable), Cycom 823 and PEEK. The results showed that the 977-6 and PEEK were the most damage resistant in all tested cases.
Energy Technology Data Exchange (ETDEWEB)
Carroll, Mark C
2014-09-01
High-purity graphite is the core structural material of choice in the Very High Temperature Reactor (VHTR) design, a graphite-moderated, helium-cooled configuration that is capable of producing thermal energy for power generation as well as process heat for industrial applications that require temperatures higher than the outlet temperatures of present nuclear reactors. The Baseline Graphite Characterization Program is endeavoring to minimize the conservative estimates of as-manufactured mechanical and physical properties in nuclear-grade graphites by providing comprehensive data that captures the level of variation in measured values. In addition to providing a thorough comparison between these values in different graphite grades, the program is also carefully tracking individual specimen source, position, and orientation information in order to provide comparisons both in specific properties and in the associated variability between different lots, different billets, and different positions from within a single billet. This report is a preliminary comparison between each of the grades of graphite that are considered “candidate” grades from four major international graphite producers. These particular grades (NBG-18, NBG-17, PCEA, IG-110, and 2114) are the major focus of the evaluations presently underway on irradiated graphite properties through the series of Advanced Graphite Creep (AGC) experiments. NBG-18, a medium-grain pitch coke graphite from SGL from which billets are formed via vibration molding, was the favored structural material in the pebble-bed configuration. NBG-17 graphite from SGL is essentially NBG-18 with the grain size reduced by a factor of two. PCEA, petroleum coke graphite from GrafTech with a similar grain size to NBG-17, is formed via an extrusion process and was initially considered the favored grade for the prismatic layout. IG-110 and 2114, from Toyo Tanso and Mersen (formerly Carbone Lorraine), respectively, are fine-grain grades
VennPainter: A Tool for the Comparison and Identification of Candidate Genes Based on Venn Diagrams.
Directory of Open Access Journals (Sweden)
Guoliang Lin
VennPainter is a program for depicting unique and shared sets of gene lists and generating Venn diagrams, using the Qt C++ framework. The software produces Classic Venn, Edwards' Venn and Nested Venn diagrams and allows for eight sets in graph mode and 31 sets in data-processing mode only. In comparison, previous programs produce Classic Venn and Edwards' Venn diagrams and allow for a maximum of six sets. The software incorporates user-friendly features and works in Windows, Linux and Mac OS. Its graphical interface does not require a user to have programming skills. Users can modify diagram content for up to eight datasets because of the Scalable Vector Graphics output. VennPainter can provide output results in vertical, horizontal and matrix formats, which facilitates sharing datasets as required for further identification of candidate genes. Users can obtain gene lists from shared sets by clicking the numbers on the diagram. Thus, VennPainter is an easy-to-use, highly efficient, cross-platform and powerful program that provides a comprehensive tool for identifying candidate genes and visualizing the relationships among genes or gene families in comparative analysis.
Fast and Faster: A Comparison of Two Streamed Matrix Decomposition Algorithms
Řeh{\\ru}řek, Radim
2011-01-01
With the explosion in the size of digital datasets, the limiting factor for decomposition algorithms is the number of passes over the input, as the input is often stored out-of-core or even off-site. Moreover, we are only interested in algorithms that operate in constant memory with respect to the input size, so that arbitrarily large inputs can be processed. In this paper, we present a practical comparison of two such algorithms: a distributed method that operates in a single pass over the input vs. a streamed two-pass stochastic algorithm. The experiments track the effect of distributed computing, oversampling and memory trade-offs on the accuracy and performance of the two algorithms. To ensure meaningful results, we choose the input to be a real dataset, namely the whole of the English Wikipedia, in the application setting of Latent Semantic Analysis.
Comparison and evaluation of network clustering algorithms applied to genetic interaction networks.
Hou, Lin; Wang, Lin; Berg, Arthur; Qian, Minping; Zhu, Yunping; Li, Fangting; Deng, Minghua
2012-01-01
The goal of network clustering algorithms is to detect dense clusters in a network, providing a first step towards the understanding of large-scale biological networks. With numerous recent advances in biotechnologies, large-scale genetic interaction data are widely available, but there is a limited understanding of which clustering algorithms may be most effective. In order to address this problem, we conducted a systematic study to compare and evaluate six clustering algorithms for analyzing genetic interaction networks, and investigated the factors that influence the choice of algorithm. The algorithms considered in this comparison include hierarchical clustering, topological overlap matrix, bi-clustering, Markov clustering, Bayesian discriminant analysis based community detection, and the variational Bayes approach to modularity. Both experimentally identified and synthetically constructed networks were used in this comparison. The accuracy of the algorithms is measured by the Jaccard index when comparing predicted gene modules with benchmark gene sets. The results suggest that the choice differs according to the network topology and evaluation criteria. Hierarchical clustering proved best at predicting protein complexes; Bayesian discriminant analysis based community detection proved best on epistatic miniarray profile (EMAP) datasets; the variational Bayes approach to modularity was noticeably better than the other algorithms on genome-scale networks.
Comparison of Greedy Algorithms for Decision Tree Optimization
Alkhalid, Abdulaziz
2013-01-01
This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values of average depth, depth, number of nodes, number of terminal nodes, and number of nonterminal nodes of decision trees. We compare average depth, depth, number of nodes, number of terminal nodes and number of nonterminal nodes of constructed trees with minimum values of the considered parameters obtained based on a dynamic programming approach. We report experiments performed on data sets from UCI ML Repository and randomly generated binary decision tables. As a result, for depth, average depth, and number of nodes we propose a number of good heuristics. © Springer-Verlag Berlin Heidelberg 2013.
Comparison between margin-growing algorithms in radiotherapy software environments.
Smith, D W; Morgan, A M; Pooler, A M; Thwaites, D I
2008-05-01
Margin-growing algorithms are commonly used tools that are available within virtual simulation and treatment planning software. We report on the accuracy of the margin-growing algorithms available in six commercially available radiotherapy software environments. A phantom containing two differently sized spheres and two rods (one level and one inclined) was constructed and scanned by CT with 1.25 mm, 2.5 mm, 3.75 mm and 5 mm slice thicknesses. The objects were outlined on a GE Advantage Simulator, and the outlined volumes recorded. Images and structures were transferred to MasterPlan, Xio, Pinnacle, Eclipse and Prosoma, where imported volumes were recorded. The contours on each system were grown isotropically by 10 mm, 20 mm and 30 mm, and volumes for each grown contour were recorded. Transfer of structure sets created on the GE Advantage Simulator to the other software environments showed that the reported volumes of the four structures differ on each system. Results showed no correlation between volume accuracy and slice thickness. In general, margin growth of up to 30 mm for the rods and spheres is shown to be consistent between systems to within 1.33 mm for all slice thicknesses. Slice thickness did not appear to influence the accuracy of margin growth. Although this work highlights apparent differences in the reported volumes grown from the same original structure sets, the significance of this aspect of the planning process needs to be weighed against reported intra- and inter-clinician variability in contour definition. It is not unreasonable, however, to expect that software packages should at least be consistent in the volume information provided to the user.
COMPARISON OF TDOA LOCATION ALGORITHMS WITH DIRECT SOLUTION METHOD
Institute of Scientific and Technical Information of China (English)
Li Chun; Liu Congfeng; Liao Guisheng
2011-01-01
For Time Difference Of Arrival (TDOA) location based on a multi-ground-station scene, two direct solution methods are proposed to solve for the target position. The solving methods are realized in rectangular and polar coordinates. In rectangular coordinates, the radial range between the target and the reference station is solved first, and then the location of the target is calculated. In polar coordinates, the azimuth between the target and the reference station is solved first, then the radial range between them is found, and finally the location of the target is obtained. Simulation and comparison analysis are given in detail and show that the polar solving method has better ambiguity performance than the rectangular-coordinate method.
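One common linearized direct (non-iterative) solution in rectangular coordinates can be sketched as follows, with the reference station at the origin and four stations total. This is a generic TDOA linearization, not necessarily the exact formulation proposed in the paper.

```python
def tdoa_locate(stations, ddiffs):
    """Direct TDOA solution in rectangular coordinates.

    stations[0] is the reference at the origin; ddiffs[i] = r_{i+1} - r_ref
    are measured range differences (TDOA times propagation speed).
    Squaring r_i = r_ref + d_i gives equations linear in (x, y, r_ref):
        -2*x*x_i - 2*y*y_i - 2*d_i*r_ref = d_i^2 - (x_i^2 + y_i^2)."""
    A, b = [], []
    for (xi, yi), d in zip(stations[1:], ddiffs):
        A.append([-2 * xi, -2 * yi, -2 * d])
        b.append(d * d - (xi * xi + yi * yi))
    # Solve the 3x3 system by Gauss-Jordan elimination with pivoting.
    M = [row + [bi] for row, bi in zip(A, b)]
    for c in range(3):
        p = max(range(c, 3), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(3):
            if r != c and M[r][c]:
                f = M[r][c] / M[c][c]
                M[r] = [u - f * v for u, v in zip(M[r], M[c])]
    x, y, r_ref = (M[i][3] / M[i][i] for i in range(3))
    return x, y

# Ground-truth target at (300, 400); reference station at the origin.
stations = [(0.0, 0.0), (1000.0, 0.0), (0.0, 1000.0), (1000.0, 1000.0)]
tx, ty = 300.0, 400.0
r = [((tx - sx) ** 2 + (ty - sy) ** 2) ** 0.5 for sx, sy in stations]
ddiffs = [ri - r[0] for ri in r[1:]]
x, y = tdoa_locate(stations, ddiffs)
```

With noiseless range differences the linear system recovers the target (and the radial range to the reference station) exactly; with noise, a least-squares variant of the same linearization is typical.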
Performance Comparison of Total Variation based Image Regularization Algorithms
Directory of Open Access Journals (Sweden)
Kamalaveni Vanjigounder
2016-07-01
The mathematical approach of the calculus of variations is commonly used to find an unknown function that minimizes or maximizes a functional. Retrieving the original image from a degraded one is an example of an inverse problem; the most basic such problem is image denoising. Variational methods are formulated as optimization problems and provide a good solution to image denoising. Three such variational methods, the Tikhonov model, the ROF model and the Total Variation-L1 model for image denoising, are studied and implemented. The performance of these variational algorithms is analyzed for different values of the regularization parameter. It is found that a small value of the regularization parameter causes better noise removal, whereas a large value of the regularization parameter preserves sharp edges well. The Euler-Lagrange equation corresponding to the energy functional used in the variational methods is solved using the gradient descent method, and the resulting partial differential equation is solved using the forward Euler finite difference method. The quality metrics are computed and the results are compared in this paper.
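A 1-D sketch of the gradient-descent approach described above, applied to a smoothed ROF-type energy discretized with forward differences. The smoothing parameter `eps`, step size, and regularization weight are illustrative choices, not the paper's settings.

```python
import math
import random

def tv_denoise(f, lam=1.0, tau=0.02, iters=2000, eps=0.01):
    """Gradient descent on the smoothed 1-D ROF energy
       E(u) = sum_i sqrt((u[i+1]-u[i])^2 + eps) + (lam/2) * sum_i (u[i]-f[i])^2.
    eps smooths |.| so the gradient exists everywhere; tau is the step size."""
    u = f[:]
    n = len(u)
    for _ in range(iters):
        g = [lam * (u[i] - f[i]) for i in range(n)]      # fidelity gradient
        for i in range(n - 1):
            d = u[i + 1] - u[i]
            t = d / math.sqrt(d * d + eps)               # d/dd of smoothed |d|
            g[i] -= t
            g[i + 1] += t
        u = [ui - tau * gi for ui, gi in zip(u, g)]
    return u

# Noisy step edge: TV smooths the noise but keeps the jump sharp.
random.seed(3)
clean = [0.0] * 30 + [1.0] * 30
noisy = [c + random.gauss(0, 0.1) for c in clean]
out = tv_denoise(noisy)
rmse_noisy = (sum((a - b) ** 2 for a, b in zip(noisy, clean)) / len(clean)) ** 0.5
rmse_out = (sum((a - b) ** 2 for a, b in zip(out, clean)) / len(clean)) ** 0.5
```

This illustrates the regularization-parameter trade-off discussed above: shrinking `lam` (stronger TV weight relative to fidelity) flattens noise more aggressively but also erodes the step height slightly.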
Comparison of different classification algorithms for landmine detection using GPR
Karem, Andrew; Fadeev, Aleksey; Frigui, Hichem; Gader, Paul
2010-04-01
The Edge Histogram Detector (EHD) is a landmine detection algorithm that has been developed for ground penetrating radar (GPR) sensor data. It has been tested extensively and has demonstrated excellent performance. The EHD consists of two main components. The first one maps the raw data to a lower dimension using edge histogram based feature descriptors. The second component uses a possibilistic K-Nearest Neighbors (pK-NN) classifier to assign a confidence value. In this paper we show that performance of the baseline EHD could be improved by replacing the pK-NN classifier with model based classifiers. In particular, we investigate two such classifiers: Support Vector Regression (SVR), and Relevance Vector Machines (RVM). We investigate the adaptation of these classifiers to the landmine detection problem with GPR, and we compare their performance to the baseline EHD with a pK-NN classifier. As in the baseline EHD, we treat the problem as a two class classification problem: mine vs. clutter. Model parameters for the SVR and the RVM classifiers are estimated from training data using logarithmic grid search. For testing, soft labels are assigned to the test alarms. A confidence of zero indicates the maximum probability of being a false alarm. Similarly, a confidence of one represents the maximum probability of being a mine. Results on large and diverse GPR data collections show that the proposed modification to the classifier component can improve the overall performance of the EHD significantly.
Effective Comparison and Evaluation of DES and Rijndael Algorithm (AES
Directory of Open Access Journals (Sweden)
Prof. N. Penchalaiah
2010-08-01
This paper discusses the effective coding of the Rijndael algorithm, the Advanced Encryption Standard (AES), in the hardware description language Verilog. In this work we analyze the structure and design of the new AES, following three criteria: (a) resistance against all known attacks; (b) speed and code compactness on a wide range of platforms; and (c) design simplicity; as well as its similarities and dissimilarities with other symmetric ciphers. On the other hand, the principal advantages of the new AES with respect to DES, as well as its limitations, are investigated. Thus, for example, the fact that the new cipher and its inverse use different components, which practically eliminates the possibility of weak and semi-weak keys as exist for DES, and the non-linearity of the key expansion, which practically eliminates the possibility of equivalent keys, are two of the principal advantages of the new cipher. Finally, the implementation aspects of the Rijndael cipher and its inverse are treated. Although Rijndael is well suited to efficient implementation on a wide range of processors and in dedicated hardware, we have concentrated our study on 8-bit processors, typical of current smart cards, and on 32-bit processors, typical of PCs.
A comparison between two algorithms for the retrieval of soil moisture using AMSR-E data
A comparison between two algorithms for estimating soil moisture with microwave satellite data was carried out by using the datasets collected on the four Agricultural Research Service (ARS) watershed sites in the US from 2002 to 2009. These sites collectively represent a wide range of ground condit...
V.A.F. Dallagnol (V. A F); J.H. van den Berg (Jan); L. Mous (Lonneke)
2009-01-01
In this paper, we present a comparison of the application of particle swarm optimization and genetic algorithms to portfolio management, in a constrained portfolio optimization problem where no short sales are allowed. The objective function to be minimized is the value at risk calculated…
Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison
Directory of Open Access Journals (Sweden)
Olympia Roeva
2005-12-01
In this paper the problem of parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of an E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal, so traditional (gradient-based) local optimization methods fail to arrive at satisfactory solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms have proved very suitable for the optimization of highly non-linear problems with many variables, offering global search capability and robustness. These facts make them advantageous for parameter identification of fermentation models. A comparison between simple, modified and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm. The considered algorithms converged to very similar cost values, but the modified algorithm is several times faster than the other two.
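A toy real-coded GA for parameter estimation in the spirit of the comparison above: tournament selection, blend crossover, Gaussian mutation, and elitism. The linear test model and all GA settings are illustrative assumptions, far simpler than the E. coli fermentation model.

```python
import random

def fitness(params, data):
    """Negative sum of squared errors of a linear model y = a*t + b."""
    a, b = params
    return -sum((a * t + b - y) ** 2 for t, y in data)

def genetic_estimate(data, pop_size=60, gens=100, seed=4):
    """Real-coded GA: tournament selection, blend crossover, Gaussian mutation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5), rng.uniform(-5, 5)] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=lambda p: fitness(p, data), reverse=True)
        new = scored[:2]                                  # elitism: keep best two
        while len(new) < pop_size:
            p1 = max(rng.sample(scored, 3), key=lambda p: fitness(p, data))
            p2 = max(rng.sample(scored, 3), key=lambda p: fitness(p, data))
            w = rng.random()                              # blend crossover
            child = [w * u + (1 - w) * v for u, v in zip(p1, p2)]
            if rng.random() < 0.3:                        # Gaussian mutation
                child[rng.randrange(2)] += rng.gauss(0, 0.3)
            new.append(child)
        pop = new
    return max(pop, key=lambda p: fitness(p, data))

# Recover slope/intercept of y = 1.7 t + 0.5 from exact samples.
data = [(t, 1.7 * t + 0.5) for t in range(10)]
a, b = genetic_estimate(data)
```

The "modified" and "multi-population" variants compared in the paper change the selection/migration scheme around this same skeleton; only the basic loop is shown here.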
A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller
Energy Technology Data Exchange (ETDEWEB)
Tapp, P.A.
1992-04-01
A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms are discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs.
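A minimal simulation of a PI loop on a first-order process, with a crude grid-search "tuning" step standing in for the STPI algorithms. The plant constants and the tuning heuristic are assumptions for illustration, not the Bristol-Babcock rules.

```python
def simulate_pi(kp, ki, setpoint=1.0, k_proc=2.0, tau=5.0, dt=0.1, steps=600):
    """Discretized first-order process  tau*dy/dt = -y + k_proc*u
    under PI control; returns the output trajectory."""
    y, integral, out = 0.0, 0.0, []
    for _ in range(steps):
        e = setpoint - y
        integral += e * dt
        u = kp * e + ki * integral        # PI control law
        y += dt / tau * (-y + k_proc * u)  # forward-Euler plant update
        out.append(y)
    return out

# Crude auto-tuning heuristic (an assumption, not a Bristol-Babcock rule):
# pick the gain pair minimizing integral absolute error over a small grid.
best = min(((kp, ki) for kp in (0.5, 1.0, 2.0) for ki in (0.1, 0.3, 0.9)),
           key=lambda g: sum(abs(1.0 - y) for y in simulate_pi(*g)))
trajectory = simulate_pi(*best)
```

The report's STPI algorithms replace this offline grid search with online adaptation (closed-loop cycling, pattern recognition, or model-based identification), but the underlying PI loop and test-process simulation are of this form.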
Determining OBS Instrument Orientations: A Comparison of Algorithms
Doran, A. K.; Laske, G.
2015-12-01
The alignment of the orientation of the horizontal seismometer components with the geographical coordinate system is critical for a wide variety of seismic analyses, but the traditional deployment method of ocean bottom seismometers (OBS) precludes knowledge of this parameter. Current techniques for determining the orientation predominantly rely on body and surface wave data recorded from teleseismic events with sufficiently large magnitudes. Both wave types experience lateral refraction between the source and receiver as a result of heterogeneity and anisotropy, and therefore the arrival angle of any one phase can significantly deviate from the great circle minor arc. We systematically compare the results and uncertainties obtained through current determination methods, as well as describe a new algorithm that uses body wave, surface wave, and differential pressure gauge data (where available) to invert for horizontal orientation. To start with, our method is based on the easily transportable computer code of Stachnik et al. (2012) that is publicly available through IRIS. A major addition is that we utilize updated global dispersion maps to account for lateral refraction, as was done by Laske (1995). We also make measurements in a wide range of frequencies, and analyze surface wave trains of repeat orbits. Our method has the advantage of requiring fewer total events to achieve high precision estimates, which is beneficial for OBS deployments that can be as short as weeks. Although the program is designed for the purpose of use with OBS instruments, it also works with standard land installations. We intend to provide the community with a program that is easy to use, requires minimal user input, and is optimized to work with data cataloged at the IRIS DMC.
Lu, Jing; Chen, Lei; Yin, Jun; Huang, Tao; Bi, Yi; Kong, Xiangyin; Zheng, Mingyue; Cai, Yu-Dong
2016-01-01
Lung cancer, characterized by uncontrolled cell growth in the lung tissue, is the leading cause of global cancer deaths. To date, effective treatments for this disease remain limited. Many synthetic compounds have emerged with the advancement of combinatorial chemistry. Identifying effective lung cancer candidate drug compounds among them is a great challenge. Thus, it is necessary to build effective computational methods that can assist us in selecting potential lung cancer drug compounds. In this study, a computational method was proposed to tackle this problem. The chemical-chemical interactions and chemical-protein interactions were utilized to select candidate drug compounds that have close associations with approved lung cancer drugs and lung cancer-related genes. A permutation test and the K-means clustering algorithm were employed to exclude candidate drugs with a low likelihood of treating lung cancer. The final analysis suggests that the remaining drug compounds have potential anti-lung cancer activities and most of them are structurally dissimilar to the approved drugs for lung cancer.
Practical comparison of aberration detection algorithms for biosurveillance systems.
Zhou, Hong; Burkom, Howard; Winston, Carla A; Dey, Achintya; Ajani, Umed
2015-10-01
National syndromic surveillance systems require optimal anomaly detection methods. For method performance comparison, we injected multi-day signals stochastically drawn from lognormal distributions into time series of aggregated daily visit counts from the U.S. Centers for Disease Control and Prevention's BioSense syndromic surveillance system. The time series corresponded to three different syndrome groups: rash, upper respiratory infection, and gastrointestinal illness. We included a sample of facilities with data reported every day and with median daily syndromic counts ⩾1 over the entire study period. We compared anomaly detection methods of five control chart adaptations, a linear regression model, and a Poisson regression model. We assessed the sensitivity and timeliness of these methods for detection of multi-day signals. At daily background alert rates of 1% and 2%, the sensitivities and timeliness ranged from 24 to 77% and 3.3 to 6.1 days, respectively. The overall sensitivity and timeliness increased substantially after stratification by weekday versus weekend and holiday. Adjusting the baseline syndromic count by the total number of facility visits gave consistently improved sensitivity and timeliness without stratification, but it provided better performance when combined with stratification. The daily syndrome/total-visit proportion method did not improve the performance. In general, alerting based on linear regression outperformed control chart based methods. A Poisson regression model obtained the best sensitivity in the series with high-count data.
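One of the simplest control-chart adaptations of the kind compared above is a one-sided CUSUM on standardized daily counts. This is a generic sketch, not the exact BioSense variant; the counts, injected signal, and thresholds below are invented for illustration.

```python
def cusum_alerts(counts, mean, std, k=0.5, h=4.0):
    """One-sided CUSUM on standardized daily counts: accumulate
    standardized excess over an allowance k and alert when the
    running sum exceeds the decision threshold h."""
    s, alerts = 0.0, []
    for day, c in enumerate(counts):
        s = max(0.0, s + (c - mean) / std - k)
        if s > h:
            alerts.append(day)
    return alerts

# quiet background of ~10 visits/day, then a hypothetical injected
# multi-day signal starting on day 10
background = [10, 11, 9, 10, 12, 10, 9, 11, 10, 10]
signal_days = [18, 20, 19, 21]
series = background + signal_days
alerts = cusum_alerts(series, mean=10.0, std=1.0, k=0.5, h=4.0)
```

Timeliness in the sense used above would be the gap between the signal's first day (10) and the first alerted day.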
Gharsalli, Leila; Mohammad-Djafari, Ali; Fraysse, Aurélia; Rodet, Thomas
2013-08-01
Our aim is to solve a linear inverse problem using various methods based on the Variational Bayesian Approximation (VBA). We choose to take sparsity into account via a scale mixture prior, more precisely a Student-t model. The joint posterior of the unknowns and the hidden variables of the mixture is approximated via the VBA. Classically, this approximation is computed with an alternating algorithm, but this method is not the most efficient. Recently, other optimization algorithms have been proposed: classical iterative optimization algorithms such as the steepest descent method and the conjugate gradient have been studied in the space of the probability densities involved in the Bayesian methodology to treat this problem. The main object of this work is to present these three algorithms and a numerical comparison of their performances.
Gallenne, A; Kervella, P; Monnier, J D; Schaefer, G H; Baron, F; Breitfelder, J; Bouquin, J B Le; Roettenbacher, R M; Gieren, W; Pietrzynski, G; McAlister, H; Brummelaar, T ten; Sturmann, J; Sturmann, L; Turner, N; Ridgway, S; Kraus, S
2015-01-01
Long-baseline interferometry is an important technique to spatially resolve binary or multiple systems in close orbits. By combining several telescopes together and spectrally dispersing the light, it is possible to detect faint components around bright stars. Aims. We provide a rigorous and detailed method to search for high-contrast companions around stars, determine the detection level, and estimate the dynamic range from interferometric observations. We developed the code CANDID (Companion Analysis and Non-Detection in Interferometric Data), a set of Python tools that allows us to search systematically for point-source, high-contrast companions and estimate the detection limit. The search procedure is performed on an N x N grid of fits, whose minimum needed resolution is estimated a posteriori. It includes a tool to estimate the detection level of the companion in numbers of sigma. The code CANDID also incorporates a robust method to set a 3σ detection limit on the flux ratio, which is based on an a...
Directory of Open Access Journals (Sweden)
B. Y. Volochiy
2014-12-01
Full Text Available Introduction. Providing the necessary efficiency indexes of a radioelectronic complex system through the design of its behavior algorithm is a topical task. Several methods are used for solving this task, and an intercomparison of them is required. Main part. For the behavior algorithm of a radioelectronic complex system, four mathematical models were built by two known methods (the space-of-states method and the algorithmic-algebras method) and by the new scheme-of-paths method. A scheme of paths is a compact representation of the radioelectronic complex system's behavior and is formed easily and directly from the behavior algorithm's flowchart. Efficiency indexes of the tested behavior algorithm, namely the probability and the mean time of successful performance, were obtained. An intercomparison of the estimated results was carried out. Conclusion. The model of the behavior algorithm constructed using the scheme-of-paths method gives efficiency-index values commensurate with those of the mathematical models of the same behavior algorithm obtained by the space-of-states and algorithmic-algebras methods.
Comparison of GOES Cloud Classification Algorithms Employing Explicit and Implicit Physics
Bankert, Richard L.; Mitrescu, Cristian; Miller, Steven D.; Wade, Robert H.
2009-01-01
Cloud-type classification based on multispectral satellite imagery data has been widely researched and demonstrated to be useful for distinguishing a variety of classes using a wide range of methods. The research described here is a comparison of the classifier output from two very different algorithms applied to Geostationary Operational Environmental Satellite (GOES) data over the course of one year. The first algorithm employs spectral channel thresholding and additional physically based tests. The second algorithm was developed through a supervised learning method with characteristic features of expertly labeled image samples used as training data for a 1-nearest-neighbor classification. The latter's ability to identify classes is also based in physics, but those relationships are embedded implicitly within the algorithm. A pixel-to-pixel comparison analysis was done for hourly daytime scenes within a region in the northeastern Pacific Ocean. Considerable agreement was found in this analysis, with many of the mismatches or disagreements providing insight to the strengths and limitations of each classifier. Depending upon user needs, a rule-based or other postprocessing system that combines the output from the two algorithms could provide the most reliable cloud-type classification.
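The 1-nearest-neighbor step of the second algorithm can be sketched in a few lines. The feature names, values, and class labels below are invented for illustration; they are not the classifier's actual expertly labeled training data.

```python
import math

# hypothetical (brightness temperature, texture) feature vectors with
# expert labels -- illustrative only, not the real training set
train = [
    ((200.0, 0.1), "cirrus"),
    ((205.0, 0.2), "cirrus"),
    ((280.0, 0.8), "stratus"),
    ((278.0, 0.7), "stratus"),
    ((260.0, 2.5), "cumulus"),
]

def nearest_neighbor(features):
    """1-nearest-neighbor: label a pixel with the class of the closest
    labeled training sample in feature space."""
    return min(train, key=lambda s: math.dist(s[0], features))[1]

label = nearest_neighbor((282.0, 0.75))
```

The physics is implicit here: whatever relationships distinguish the classes are embedded in the geometry of the labeled samples, in contrast to the explicit spectral thresholds of the first algorithm.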
Directory of Open Access Journals (Sweden)
Nur Ariffin Mohd Zin
2012-01-01
Full Text Available This paper presents a comparative study of three proposed techniques for solving the Travelling Salesman Problem: exhaustive search, a heuristic, and a genetic algorithm. Each solution seeks an optimal path through the 25 available contiguous cities in England, with all solutions written in Prolog. Comparisons were made with emphasis on time consumed and closeness to the optimal solution. Based on the experiments, we found that the heuristic is very promising in terms of time taken, while the genetic algorithm is outstanding for large numbers of traversals, producing the shortest path among the three.
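Two of the three techniques (exhaustive search and a nearest-neighbour heuristic) can be sketched compactly, here in Python rather than the paper's Prolog, on a toy five-city instance with invented coordinates.

```python
import math, itertools

def tour_length(points, order):
    """Length of the closed tour visiting points in the given order."""
    return sum(math.dist(points[order[i]], points[order[(i + 1) % len(order)]])
               for i in range(len(order)))

def nearest_neighbour(points, start=0):
    """Greedy heuristic: always visit the closest unvisited city.
    Fast, but with no optimality guarantee."""
    unvisited = set(range(len(points))) - {start}
    tour = [start]
    while unvisited:
        nxt = min(unvisited, key=lambda j: math.dist(points[tour[-1]], points[j]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

def exhaustive(points):
    """Exact search over all (n-1)! permutations -- feasible only for tiny n."""
    n = len(points)
    return min((list((0,) + p) for p in itertools.permutations(range(1, n))),
               key=lambda o: tour_length(points, o))

cities = [(0, 0), (1, 5), (5, 2), (6, 6), (2, 1)]   # toy instance
greedy = nearest_neighbour(cities)
best = exhaustive(cities)
```

The trade-off the paper measures is visible even here: the heuristic touches each city once, while the exhaustive search cost grows factorially with the number of cities.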
Two-dimensional phase unwrapping algorithms for fringe pattern analysis: a comparison study
Yang, Fang; Wang, Zhaomin; Wen, Yongfu; Qu, Weijuan
2015-03-01
Phase unwrapping is the process of reconstructing the absolute phase from a wrapped phase map whose range is (-π, π]. As the absolute phase cannot be directly extracted from the fringe pattern, phase unwrapping is required by phase-measuring techniques. Many phase unwrapping algorithms have been proposed. In this paper, four popular phase unwrapping algorithms, including Goldstein's branch-cut method, the quality-guided method, the Phase Unwrapping via Max Flow (PUMA) method, and phase estimation using adaptive regularization based on local smoothing (PERALS), are reviewed and discussed. Detailed accuracy comparisons of these methods are provided as well.
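The 2-D algorithms above generalize a basic 1-D path-following idea (Itoh's method), which fits in a few lines; the 2-D methods differ mainly in how they choose and order the integration path around noise and residues.

```python
import math

def unwrap_1d(phase):
    """Itoh's 1-D unwrap: wrap each sample-to-sample difference back
    into (-pi, pi], then integrate the differences."""
    out = [phase[0]]
    for p in phase[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))   # wrap difference
        out.append(out[-1] + d)
    return out

# a linear phase ramp, wrapped into (-pi, pi] as a fringe analysis would see it
true_phase = [0.5 * i for i in range(20)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true_phase]
recovered = unwrap_1d(wrapped)
```

This succeeds because the true phase changes by less than π per sample; the hard cases the four compared algorithms address are exactly those where noise or undersampling breaks that assumption.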
DEFF Research Database (Denmark)
Knöös, Tommy; Wieslander, Elinore; Cozzi, Luca;
2006-01-01
A study of the performance of five commercial radiotherapy treatment planning systems (TPSs) for common treatment sites regarding their ability to model heterogeneities and scattered photons has been performed. The comparison was based on CT information for prostate, head and neck, breast and lung...... correction-based equivalent path length algorithms to model-based algorithms. These were divided into two groups based on how changes in electron transport are accounted for ((a) not considered and (b) considered). Increasing the complexity from the relatively homogeneous pelvic region to the very...
Zhao, Zhongming; Webb, Bradley T; Jia, Peilin; Bigdeli, T Bernard; Maher, Brion S; van den Oord, Edwin; Bergen, Sarah E; Amdur, Richard L; O'Neill, Francis A; Walsh, Dermot; Thiselton, Dawn L; Chen, Xiangning; Pato, Carlos N; Riley, Brien P; Kendler, Kenneth S; Fanous, Ayman H
2013-01-01
Integrating evidence from multiple domains is useful in prioritizing disease candidate genes for subsequent testing. We ranked all known human genes (n=3819) under linkage peaks in the Irish Study of High-Density Schizophrenia Families using three different evidence domains: 1) a meta-analysis of microarray gene expression results using the Stanley Brain collection, 2) a schizophrenia protein-protein interaction network, and 3) a systematic literature search. Each gene was assigned a domain-specific p-value and ranked after evaluating the evidence within each domain. For comparison to this ranking process, a large-scale candidate gene hypothesis was also tested by including genes with Gene Ontology terms related to neurodevelopment. Subsequently, genotypes of 3725 SNPs in 167 genes from a custom Illumina iSelect array were used to evaluate the top ranked vs. hypothesis selected genes. Seventy-three genes were both highly ranked and involved in neurodevelopment (category 1) while 42 and 52 genes were exclusive to neurodevelopment (category 2) or highly ranked (category 3), respectively. The most significant associations were observed in genes PRKG1, PRKCE, and CNTN4 but no individual SNPs were significant after correction for multiple testing. Comparison of the approaches showed an excess of significant tests using the hypothesis-driven neurodevelopment category. Random selection of similar sized genes from two independent genome-wide association studies (GWAS) of schizophrenia showed the excess was unlikely by chance. In a further meta-analysis of three GWAS datasets, four candidate SNPs reached nominal significance. Although gene ranking using integrated sources of prior information did not enrich for significant results in the current experiment, gene selection using an a priori hypothesis (neurodevelopment) was superior to random selection. As such, further development of gene ranking strategies using more carefully selected sources of information is warranted.
A COMPARISON BETWEEN TWO ALGORITHMS FOR THE RETRIEVAL OF SOIL MOISTURE USING AMSR-E DATA
Directory of Open Access Journals (Sweden)
Simonetta ePaloscia
2015-04-01
Full Text Available A comparison between two algorithms for estimating soil moisture with microwave satellite data was carried out by using the datasets collected on the four Agricultural Research Service (ARS) watershed sites in the US from 2002 to 2009. These sites collectively represent a wide range of ground conditions and precipitation regimes (from natural to agricultural surfaces and from desert to humid regions) and provide long-term in-situ data. One of the algorithms is the artificial neural network-based algorithm (HydroAlgo) developed by the Institute of Applied Physics of the National Research Council (IFAC-CNR), and the second one is the Single Channel Algorithm (SCA) developed by USDA-ARS (US Department of Agriculture-Agricultural Research Service). Both algorithms are based on the same radiative transfer equations but are implemented very differently. Both made use of datasets provided by the Japanese Aerospace Exploration Agency (JAXA), within the framework of the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) and Global Change Observation Mission-Water (GCOM/AMSR-2) programs. Results demonstrated that both algorithms perform better than the mission-specified accuracy, with Root Mean Square Error (RMSE) ≤0.06 m3/m3 and bias <0.02 m3/m3. These results expand on previous investigations using different algorithms and sites. The novelty of the paper consists of the fact that it is the first intercomparison of the HydroAlgo algorithm with a more traditional retrieval algorithm, which offers an approach to higher spatial resolution products.
Energy Technology Data Exchange (ETDEWEB)
Antoniucci, S.; Giannini, T.; Li Causi, G.; Lorenzetti, D., E-mail: simone.antoniucci@oa-roma.inaf.it, E-mail: teresa.giannini@oa-roma.inaf.it, E-mail: gianluca.licausi@oa-roma.inaf.it, E-mail: dario.lorenzetti@oa-roma.inaf.it [INAF-Osservatorio Astronomico di Roma, via Frascati 33, I-00040 Monte Porzio (Italy)
2014-02-10
Aiming to statistically study the variability in the mid-IR of young stellar objects, we have compared the 3.6, 4.5, and 24 μm Spitzer fluxes of 1478 sources belonging to the C2D (Cores to Disks) legacy program with the WISE fluxes at 3.4, 4.6, and 22 μm. From this comparison, we have selected a robust sample of 34 variable sources. Their variations were classified per spectral Class (according to the widely accepted scheme of Class I/flat/II/III protostars), and per star forming region. On average, the number of variable sources decreases with increasing Class and is definitely higher in Perseus and Ophiuchus than in Chamaeleon and Lupus. According to the paradigm Class ≡ Evolution, the photometric variability can be considered to be a feature more pronounced in less evolved protostars, and, as such, related to accretion processes. Moreover, our statistical findings agree with the current knowledge of star formation activity in different regions. The 34 selected variables were further investigated for similarities with known young eruptive variables, namely the EXors. In particular, we analyzed (1) the shape of the spectral energy distribution, (2) the IR excess over the stellar photosphere, (3) magnitude versus color variations, and (4) output parameters of model fitting. This first systematic search for EXors ends up with 11 bona fide candidates that can be considered as suitable targets for monitoring or future investigations.
Spranger, K.; Capelli, C.; Bosi, G.M.; Schievano, S.; Ventikos, Y.
2015-01-01
In this paper, we perform a comparative analysis between two computational methods for virtual stent deployment: a novel fast virtual stenting method, which is based on a spring–mass model, is compared with detailed finite element analysis in a sequence of in silico experiments. Given the results of the initial comparison, we present a way to optimise the fast method by calibrating a set of parameters with the help of a genetic algorithm, which utilises the outcomes of the finite element analysis as a learning reference. As a result of the calibration phase, we were able to substantially reduce the force measure discrepancy between the two methods and validate the fast stenting method by assessing the differences in the final device configurations. PMID:26664007
EXPERIMENTAL COMPARISON OF HOMODYNE DEMODULATION ALGORITHMS FOR PHASE FIBER-OPTIC SENSOR
Directory of Open Access Journals (Sweden)
M. N. Belikin
2015-11-01
Full Text Available Subject of Research. The paper presents the results of an experimental comparative analysis of homodyne demodulation algorithms based on the differential cross multiplying method and on the arctangent method under the same conditions. The dependencies of the output signal parameters on the optical radiation intensity are studied for the considered demodulation algorithms. Method. A prototype of a single fiber-optic phase interferometric sensor has been used for the experimental comparison of the signal demodulation algorithms. Main Results. We have found that homodyne demodulation based on the arctangent method provides a greater (by 7 dB on average) signal-to-noise ratio of the output signals over the acoustic frequency band from 100 Hz to 500 Hz as compared to the differential cross multiplying algorithm. We have demonstrated that no change in the output signal amplitude occurs over the studied range of optical pulse amplitudes. The obtained results indicate that homodyne demodulation based on the arctangent method is the most suitable for application in phase fiber-optic sensors: it provides higher repeatability of their characteristics than the differential cross multiplying algorithm. Practical Significance. Algorithms of interferometric signal demodulation are widely used in phase fiber-optic sensors. Improvement of their characteristics has a positive effect on the performance of such sensors.
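The arctangent principle is easy to sketch: with quadrature components I = A·cos φ and Q = A·sin φ, the phase atan2(Q, I) is independent of the intensity scale A, which is consistent with the amplitude insensitivity reported above. This is a generic illustration, not the prototype's actual DSP chain, and the sampling rate, signal frequency, and amplitudes are invented.

```python
import math

def atan_demod(i_samples, q_samples):
    """Arctangent demodulation of quadrature components with a running
    unwrap, so phase excursions beyond (-pi, pi] are tracked."""
    phase, prev = [], None
    for i, q in zip(i_samples, q_samples):
        p = math.atan2(q, i)
        if prev is not None:
            p += 2 * math.pi * round((prev - p) / (2 * math.pi))
        phase.append(p)
        prev = p
    return phase

# simulate a 300 Hz acoustic phase signal sampled at 10 kHz; the optical
# intensity factor a_opt cancels in atan2(q, i)
fs, f, amp, a_opt = 10_000, 300, 2.5, 0.7
true = [amp * math.sin(2 * math.pi * f * n / fs) for n in range(200)]
i_s = [a_opt * math.cos(p) for p in true]
q_s = [a_opt * math.sin(p) for p in true]
rec = atan_demod(i_s, q_s)
```

The differential-cross-multiplying alternative instead forms I·dQ/dt - Q·dI/dt, whose output scales with A², which is one way to see why its repeatability depends on optical power.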
Particle Swarm Algorithms to Solve Engineering Problems: A Comparison of Performance
Directory of Open Access Journals (Sweden)
Giordano Tomassetti
2013-01-01
Full Text Available In many disciplines, the use of evolutionary algorithms to perform optimizations is limited because of the extensive number of objective evaluations required. In fact, in real-world problems, each objective evaluation is frequently obtained by time-expensive numerical calculations. On the other hand, gradient-based algorithms are able to identify optima with a reduced number of objective evaluations, but they have limited exploration capabilities of the search domain and some restrictions when dealing with noncontinuous functions. In this paper, two PSO-based algorithms are compared to evaluate their pros and cons with respect to the effort required to find acceptable solutions. The algorithms implement two different methodologies to solve widely used engineering benchmark problems. The comparison is made both in terms of fixed-iteration tests, to judge the solution quality reached, and fixed-threshold tests, to evaluate how quickly each algorithm reaches near-optimal solutions. The results indicate that one PSO algorithm achieves better solutions than the other in the fixed-iteration tests, while the latter achieves acceptable results in fewer function evaluations than the first PSO in the fixed-threshold tests.
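A plain global-best PSO can be written compactly; this is a generic sketch, not either of the two algorithms compared in the paper, and the benchmark here is the sphere function rather than the paper's engineering problems. The inertia and acceleration coefficients are conventional textbook values.

```python
import random

def pso(f, dim, bounds, n_particles=20, iters=200, w=0.7, c1=1.5, c2=1.5):
    """Global-best PSO: each particle is pulled toward its personal best
    and the swarm best, with inertia w damping the velocity."""
    random.seed(1)
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_f = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            fi = f(pos[i])
            if fi < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], fi
                if fi < gbest_f:
                    gbest, gbest_f = pos[i][:], fi
    return gbest, gbest_f

sphere = lambda x: sum(v * v for v in x)
best, best_f = pso(sphere, dim=3, bounds=(-5.0, 5.0))
```

Counting calls to `f` here makes the paper's fixed-threshold comparison concrete: every particle costs one objective evaluation per iteration.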
Inversion of Land Surface Temperature (LST Using Terra ASTER Data: A Comparison of Three Algorithms
Directory of Open Access Journals (Sweden)
Milton Isaya Ndossi
2016-12-01
Full Text Available Land Surface Temperature (LST) is an important measurement in studies related to the Earth surface's processes. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the Terra spacecraft is the currently available Thermal Infrared (TIR) imaging sensor with the highest spatial resolution. This study involves the comparison of LSTs inverted from the sensor using the Split Window Algorithm (SWA), the Single Channel Algorithm (SCA), and the Planck function. This study used the National Oceanic and Atmospheric Administration's (NOAA) data to model and compare the results from the three algorithms. The data from the sensor have been processed in the Python programming language within a free and open source software package (QGIS) to enable users to make use of the algorithms. The study revealed that the three algorithms are suitable for LST inversion, whereby the Planck function showed the highest level of accuracy, the SWA had a moderate level of accuracy, and the SCA had the least accuracy. The algorithms produced results with Root Mean Square Errors (RMSE) of 2.29 K, 3.77 K, and 2.88 K for the Planck function, the SCA, and the SWA, respectively.
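The Planck-function approach is the simplest of the three to sketch: invert the Planck radiance law for brightness temperature, neglecting the atmospheric and emissivity corrections that a full LST retrieval applies. The radiation constants are standard values and the ~10.66 µm wavelength corresponds to an ASTER TIR band; the example is illustrative, not the paper's QGIS implementation.

```python
import math

C1 = 1.19104e8    # first radiation constant, W um^4 m^-2 sr^-1
C2 = 1.43877e4    # second radiation constant, um K

def brightness_temp(radiance, wavelength_um):
    """Inverse Planck: radiance in W m^-2 sr^-1 um^-1 -> temperature in K."""
    return C2 / (wavelength_um * math.log(C1 / (wavelength_um ** 5 * radiance) + 1.0))

def planck_radiance(temp_k, wavelength_um):
    """Forward Planck law for spectral radiance at one wavelength."""
    return C1 / (wavelength_um ** 5 * (math.exp(C2 / (wavelength_um * temp_k)) - 1.0))

# round trip at ~10.66 um for a 300 K surface
L = planck_radiance(300.0, 10.66)
T = brightness_temp(L, 10.66)
```

The SCA and SWA refine this brightness temperature using water-vapor and emissivity terms, which is where their differing RMSEs arise.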
K-Means Re-Clustering: Algorithmic Options with Quantifiable Performance Comparisons
Energy Technology Data Exchange (ETDEWEB)
Meyer, A W; Paglieroni, D; Asteneh, C
2002-12-17
This paper presents various architectural options for implementing a K-Means Re-Clustering algorithm suitable for unsupervised segmentation of hyperspectral images. Performance metrics are developed based upon quantitative comparisons of convergence rates and segmentation quality. A methodology for making these comparisons is developed and used to establish K values that produce the best segmentations with minimal processing requirements. Convergence rates depend on the initial choice of cluster centers. Consequently, this same methodology may be used to evaluate the effectiveness of different initialization techniques.
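A baseline Lloyd's K-Means that also reports its convergence iteration count (one of the metrics discussed above) can be sketched as follows; this is a generic implementation, not the Re-Clustering variant of the paper, and the 2-D points are invented.

```python
import random

def kmeans(points, k, iters=100, seed=0):
    """Lloyd's algorithm with randomly sampled initial centers.
    Returns centers, labels, and the iteration at which assignments
    stopped changing -- the convergence-rate metric of interest."""
    rng = random.Random(seed)
    centers = [list(p) for p in rng.sample(points, k)]
    labels = [-1] * len(points)
    for it in range(1, iters + 1):
        new = [min(range(k),
                   key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
               for p in points]
        if new == labels:
            return centers, labels, it
        labels = new
        for c in range(k):
            members = [p for p, l in zip(points, labels) if l == c]
            if members:
                centers[c] = [sum(col) / len(members) for col in zip(*members)]
    return centers, labels, iters

# two well-separated clusters -> k = 2 converges in a handful of iterations
data = [(0.0, 0.1), (0.2, 0.0), (0.1, 0.2), (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]
centers, labels, n_iter = kmeans(data, k=2)
```

Rerunning with different seeds shows the dependence of convergence rate on initial centers that the paper's methodology quantifies.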
Code Syntax-Comparison Algorithm Based on Type-Redefinition-Preprocessing and Rehash Classification
Directory of Open Access Journals (Sweden)
Baojiang Cui
2011-08-01
Full Text Available Code comparison technology plays an important role in the fields of software security protection and plagiarism detection. There are currently five main approaches to plagiarism detection: file-attribute-based, text-based, token-based, syntax-based, and semantic-based. The first three approaches have their own limitations, while the syntax-based technique suffers from limited detection ability and low efficiency, so none of these approaches meets the requirements of large-scale software plagiarism detection. Based on our prior research, we propose an algorithm for type-redefinition plagiarism detection, which can detect simple type redefinition, repeating-pattern redefinition, and redefinition of types with pointers. Besides, this paper also proposes a code syntax-comparison algorithm based on rehash classification, which enhances the node storage structure of the syntax tree and greatly improves efficiency.
Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region
Directory of Open Access Journals (Sweden)
Matko Šarić
2008-06-01
Full Text Available The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We propose a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant-color pixels to the total number of pixels. With this approach, the detection of gradual transitions is improved by decreasing the number of false positives caused by certain camera operations. We also compared the performance of our algorithm with that of the standard twin-comparison method.
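The twin-comparison idea itself can be sketched on a sequence of frame-difference values: one high threshold catches cuts, while a lower threshold starts accumulating evidence for a gradual transition. The difference series and thresholds below are invented, and the paper's dominant-color gating is omitted.

```python
def twin_comparison(diffs, t_low, t_high):
    """Twin-comparison on frame-difference values: a single jump above
    t_high is a cut; a run of frames above t_low whose accumulated
    difference exceeds t_high is a gradual transition."""
    cuts, graduals = [], []
    i = 0
    while i < len(diffs):
        if diffs[i] >= t_high:
            cuts.append(i)
            i += 1
        elif diffs[i] >= t_low:
            start, acc = i, 0.0
            while i < len(diffs) and t_low <= diffs[i] < t_high:
                acc += diffs[i]
                i += 1
            if acc >= t_high:
                graduals.append((start, i - 1))
        else:
            i += 1
    return cuts, graduals

# hypothetical histogram-difference series: a cut at frame 3,
# a dissolve spread over frames 7-10
d = [0.1, 0.2, 0.1, 5.0, 0.1, 0.2, 0.1, 1.0, 1.2, 1.1, 0.9, 0.1]
cuts, graduals = twin_comparison(d, t_low=0.5, t_high=3.0)
```

The false positives the paper targets arise when camera motion keeps the difference above `t_low` long enough to accumulate past `t_high`; the dominant-color ratio is used to reject those runs.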
Spectrum Feature Retrieval and Comparison of Remote Sensing Images Using Improved ISODATA Algorithm
Institute of Scientific and Technical Information of China (English)
刘磊; 敬忠良; 肖刚
2004-01-01
Due to the large quantity of data and the high correlation among the spectra of remote sensing images, the K-L transformation is used to eliminate the correlation. An improved ISODATA (Iterative Self-Organizing Data Analysis Technique A) algorithm is used to extract the spectrum features of the images. The computation is greatly reduced and dynamic adjustment of the arguments is realized. The comparison of features between two images is carried out, and good results are achieved in simulation.
PN code acquisition algorithm in DS-UWB system based on threshold comparison criterion
Institute of Scientific and Technical Information of China (English)
Qi Lina; Gan Zongliang; Zhu Hongbo
2009-01-01
The direct sequence ultra-wideband (DS-UWB) is a promising technology for short-range wireless communications. The UWB signal is a stream of very low power density and ultra-short pulses, and the great potential of DS-UWB depends critically on the success of timing acquisition. A rapid acquisition algorithm for reducing the acquisition time of the coarse pseudo-noise (PN) sequences is proposed. The algorithm utilizes an auxiliary sequence and a bisearch strategy based on the threshold comparison criterion. Both theoretical analysis and simulation tests show that, with the proposed search strategy and simple operations over the symbol duration at the receiver, the proposed algorithm can considerably reduce the acquisition time while maintaining the PN sequence acquisition probability in the DS-UWB system over a dense multipath environment.
Directory of Open Access Journals (Sweden)
Ji Xinglai
2010-08-01
Full Text Available Abstract Background We are developing a cross-species comparison strategy to distinguish between cancer driver- and passenger gene alteration candidates, by utilizing the difference in genomic location of orthologous genes between the human and other mammals. As an initial test of this strategy, we conducted a pilot study with human colorectal cancer (CRC and its mouse model C57BL/6J ApcMin/+, focusing on human 5q22.2 and 18q21.1-q21.2. Methods We first performed bioinformatics analysis on the evolution of 5q22.2 and 18q21.1-q21.2 regions. Then, we performed exon-targeted sequencing, real time quantitative polymerase chain reaction (qPCR, and real time quantitative reverse transcriptase PCR (qRT-PCR analyses on a number of genes of both regions with both human and mouse colon tumors. Results These two regions (5q22.2 and 18q21.1-q21.2 are frequently deleted in human CRCs and encode genuine colorectal tumor suppressors APC and SMAD4. They also encode genes such as MCC (mutated in colorectal cancer with their role in CRC etiology unknown. We have discovered that both regions are evolutionarily unstable, resulting in genes that are clustered in each human region being found scattered at several distinct loci in the genome of many other species. For instance, APC and MCC are within 200 kb apart in human 5q22.2 but are 10 Mb apart in the mouse genome. Importantly, our analyses revealed that, while known CRC driver genes APC and SMAD4 were disrupted in both human colorectal tumors and tumors from ApcMin/+ mice, the questionable MCC gene was disrupted in human tumors but appeared to be intact in mouse tumors. Conclusions These results indicate that MCC may not actually play any causative role in early colorectal tumorigenesis. We also hypothesize that its disruption in human CRCs is likely a mere result of its close proximity to APC in the human genome. Expanding this pilot study to the entire genome may identify more questionable genes like MCC
A comparison of semiglobal and local dense matching algorithms for surface reconstruction
Dall'Asta, E.; Roncella, R.
2014-06-01
Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has become one of the most investigated research topics of recent years in photogrammetry and computer vision. The paper is focused on the comparison of some stereo matching algorithms (local and global) which are very popular both in photogrammetry and computer vision. In particular, the Semi-Global Matching (SGM), which realizes a pixel-wise matching and relies on the application of consistency constraints during the matching cost aggregation, will be discussed. The results of some tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware software codes like MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan) and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparisons will also consider the completeness and the level of detail within fine structures, and the reliability and repeatability of the obtainable data.
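For concreteness, the simplest end of the spectrum compared above, a local winner-take-all matcher with a sum-of-absolute-differences cost, can be sketched as follows; SGM differs by additionally aggregating smoothness-penalized costs along several image paths. The image pair below is synthetic and rectified so that disparities are purely horizontal.

```python
def disparity_map(left, right, max_disp, window=1):
    """Local winner-take-all matching along scanlines with a SAD cost
    over a (2*window+1)-pixel horizontal support."""
    h, w = len(left), len(left[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            best, best_cost = 0, float("inf")
            for disp in range(min(max_disp + 1, x + 1)):
                cost = 0
                for dx in range(-window, window + 1):
                    xl, xr = x + dx, x - disp + dx
                    if 0 <= xl < w and 0 <= xr < w:
                        cost += abs(left[y][xl] - right[y][xr])
                if cost < best_cost:
                    best, best_cost = disp, cost
            out[y][x] = best
    return out

# synthetic pair: the right image is the left shifted by 2 pixels
left = [[0, 0, 10, 20, 30, 40, 0, 0, 0, 0]] * 3
right = [[10, 20, 30, 40, 0, 0, 0, 0, 0, 0]] * 3
dmap = disparity_map(left, right, max_disp=4)
```

In the textureless zero regions the cost is ambiguous and the matcher defaults to disparity 0, which is exactly the failure mode that SGM's path-wise smoothness constraints are designed to resolve.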
Lehmann, Christoph; Koenig, Thomas; Jelic, Vesna; Prichep, Leslie; John, Roy E; Wahlund, Lars-Olof; Dodge, Yadolah; Dierks, Thomas
2007-04-15
The early detection of subjects with probable Alzheimer's disease (AD) is crucial for the effective application of treatment strategies. Here we explored the ability of a multitude of linear and non-linear classification algorithms to discriminate between the electroencephalograms (EEGs) of patients with varying degrees of AD and their age-matched control subjects. Absolute and relative spectral power, distribution of spectral power, and measures of spatial synchronization were calculated from recordings of resting eyes-closed continuous EEGs of 45 healthy controls, 116 patients with mild AD and 81 patients with moderate AD, recruited in two different centers (Stockholm, New York). The applied classification algorithms were: principal component linear discriminant analysis (PC LDA), partial least squares LDA (PLS LDA), principal component logistic regression (PC LR), partial least squares logistic regression (PLS LR), bagging, random forest, support vector machines (SVM) and feed-forward neural network. Based on 10-fold cross-validation runs it could be demonstrated that even though modern computer-intensive classification algorithms such as random forests, SVM and neural networks show a slight superiority, more classical classification algorithms performed nearly equally well. Using random forest classification, a considerable sensitivity of up to 85% and a specificity of 78% was reached even for the test of only mild AD patients, whereas for the comparison of moderate AD vs. controls, using SVM and neural networks, values of 89% and 88% for sensitivity and specificity were achieved. Such a remarkable performance proves the value of these classification algorithms for clinical diagnostics.
Mikhaylova, E.; Kolstein, M.; De Lorenzo, G.; Chmeissani, M.
2014-07-01
A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show the great potential of the VIP design to produce high-resolution images even under extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm3), image reconstruction is a challenge. Optimization is therefore needed to find the best algorithm in order to correctly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculating image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. The region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics.
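The merit parameters used in such a comparison (bias, variance, and MSE between the true phantom and a reconstruction) can be sketched as below. This is an illustrative pure-Python helper, not the GAMOS-based analysis code; it uses the standard decomposition MSE = variance + bias^2 of the voxel-wise error.

```python
def merit_parameters(true_img, recon_img):
    """Bias, error variance, and MSE of a reconstruction against the true phantom."""
    n = len(true_img)
    errors = [r - t for t, r in zip(true_img, recon_img)]
    bias = sum(errors) / n
    mse = sum(e * e for e in errors) / n
    variance = mse - bias * bias   # MSE decomposes as variance + bias^2
    return bias, variance, mse
```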
Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS
Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.
Extremely Large Telescopes are very challenging concerning their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement the real-time control in Adaptive Optics at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N^2) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. The performance, as well as the response in the presence of noise and under various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT, with a total of 5402 actuators. These comparisons, made on a common simulator, highlight the pros and cons of the various methods and give us a better understanding of the type of reconstruction algorithm that an ELT demands.
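The O(N^2) scaling of the MVM reference reconstructor comes directly from applying a dense reconstruction matrix to the measured slope vector once per loop iteration, one multiply-add per matrix entry. A minimal illustrative sketch (not the OCTOPUS implementation):

```python
def mvm_reconstruct(recon_matrix, slopes):
    """Dense matrix-vector multiply: actuator commands from sensor slopes.

    For N actuators and ~N slopes this costs O(N^2) operations per loop,
    which is why faster reconstructors (FrIM, FTR) are needed at ELT scale.
    """
    return [sum(r * s for r, s in zip(row, slopes)) for row in recon_matrix]
```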
Scanpath similarity depends on how you look at it: Evaluating a ‘MultiMatch’ comparison algorithm
Dewhurst, Richard; Nyström, Marcus; Jarodzka, Halszka; Holmqvist, Kenneth
2011-01-01
Dewhurst, R., Nyström, M., Jarodzka, H., & Holmqvist, K. (2011, August). Scanpath similarity depends on how you look at it: Evaluating a ‘MultiMatch’ comparison algorithm. Presentation at ECEM, Marseille, France.
Limongelli, Carla; Sciarrone, Filippo; Temperini, Marco; Vaste, Giulia
2011-01-01
LS-Lab provides automatic support to comparison/evaluation of the Learning Object Sequences produced by different Curriculum Sequencing Algorithms. Through this framework a teacher can verify the correspondence between the behaviour of different sequencing algorithms and her pedagogical preferences. In fact the teacher can compare algorithms…
Sensitivity study of voxel-based PET image comparison to image registration algorithms
Energy Technology Data Exchange (ETDEWEB)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Chen, Aileen B.; Berbeco, Ross [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States); Aerts, Hugo J. W. L. [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 and Department of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, Massachusetts 02115 (United States)
2014-11-01
Purpose: Accurate deformable registration is essential for voxel-based comparison of sequential positron emission tomography (PET) images for proper adaptation of treatment plans and treatment response assessment. The comparison may be sensitive to the method of deformable registration, as the optimal algorithm is unknown. This study investigated the impact of registration algorithm choice on therapy response evaluation. Methods: Sixteen patients with 20 lung tumors underwent pre- and post-chemoradiotherapy computed tomography (CT) and 4D FDG-PET scans. All CT images were coregistered using a rigid and ten deformable registration algorithms. The resulting transformations were then applied to the respective PET images. The tumor region defined by a physician on the registered PET images was classified into progressor, stable-disease, and responder subvolumes. Specifically, voxels with standardized uptake value (SUV) decreases >30% were classified as responder, while voxels with SUV increases >30% were classified as progressor. All other voxels were considered stable-disease. The agreement of the subvolumes resulting from different registration algorithms was assessed by the Dice similarity index (DSI). The coefficient of variation (CV) was computed to assess the variability of DSI between individual tumors. The root mean square difference (RMS{sub rigid}) of the rigidly registered CT images was used to measure the degree of tumor deformation. RMS{sub rigid} and DSI were correlated by the Spearman correlation coefficient (R) to investigate the effect of tumor deformation on DSI. Results: Median DSI{sub rigid} was found to be 72%, 66%, and 80% for progressor, stable-disease, and responder, respectively. Median DSI{sub deformable} was 63%-84%, 65%-81%, and 82%-89%. Variability of DSI was substantial and similar for both rigid and deformable algorithms, with CV > 10% for all subvolumes. Tumor deformation had moderate to significant impact on DSI for progressor
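The voxel classification rule and the Dice similarity index described above can be sketched in a few lines. This is an illustrative pure-Python sketch (function names are ours, not from the study):

```python
def classify_voxel(suv_pre, suv_post, threshold=0.30):
    """Responder: SUV decrease >30%; progressor: SUV increase >30%; else stable-disease."""
    change = (suv_post - suv_pre) / suv_pre
    if change < -threshold:
        return "responder"
    if change > threshold:
        return "progressor"
    return "stable-disease"

def dice_similarity(voxels_a, voxels_b):
    """DSI = 2|A n B| / (|A| + |B|) for two sets of voxel indices."""
    a, b = set(voxels_a), set(voxels_b)
    if not a and not b:
        return 1.0
    return 2.0 * len(a & b) / (len(a) + len(b))
```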
Comparison of Different Independent Component Analysis Algorithms for Output-Only Modal Analysis
Directory of Open Access Journals (Sweden)
Jianying Wang
2016-01-01
Full Text Available From the principle of independent component analysis (ICA) and the uncertainty of the amplitude, order, and number of source signals, this paper expounds the root causes of modal energy uncertainty, identified-order uncertainty, and missing modes in output-only modal analysis based on ICA methods. Aiming at the lack of comparison and evaluation of different ICA algorithms for output-only modal analysis, this paper studies the different objective functions and optimization methods of ICA for output-only modal parameter identification. Simulation results on a simply supported beam verify the effectiveness, robustness, and convergence rate of five different ICA algorithms for output-only modal parameter identification, and show that the negentropy-maximization ICA method with quasi-Newton iteration is more suitable for modal parameter identification.
The comparison of network congestion avoidance algorithms in data exchange networks.
Grzyb, S.; Orłowski, P.
2017-01-01
Effective congestion control strategies help maintain low delay and high throughput in data exchange networks. These requirements appear to be the most desired by network environment participants. A wide range of algorithms has been proposed in the literature to approach these ideal parameters. All of these approaches focus on alleviating the consequences of sudden, unexpected changes in network state. This paper discusses a comparison of four control strategies focused on congestion avoidance. All of them are in charge of queue length control in active network nodes. For the purposes of this research, a non-stationary, discrete, dynamical model of a communication channel was used. Research results are presented in table and chart form.
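As an illustration of queue-length control in an active network node, the sketch below applies a proportional drop probability to a discrete fluid-queue model. This is not one of the four compared strategies; the model, parameters, and names are our own assumptions.

```python
def simulate_queue(arrivals, service=5.0, target=20.0, kp=0.05, q0=0.0):
    """Each step admits arrivals scaled by a drop probability proportional
    to how far the queue length exceeds its target (AQM-style sketch)."""
    q, trace = q0, []
    for a in arrivals:
        drop = min(1.0, max(0.0, kp * (q - target)))  # drop more when queue exceeds target
        q = max(0.0, q + a * (1.0 - drop) - service)
        trace.append(q)
    return trace
```

Under a constant overload (arrivals of 8 against a service rate of 5), the controlled queue settles near target + (arrivals - service)/(kp * arrivals) = 27.5 instead of growing without bound.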
Directory of Open Access Journals (Sweden)
V. Sedenka
2010-09-01
Full Text Available The paper deals with an efficiency comparison of two global evolutionary optimization methods implemented in MATLAB. Attention is turned to the elitist Non-dominated Sorting Genetic Algorithm (NSGA-II) and a novel multi-objective Particle Swarm Optimization (PSO). The performance of the optimizers is compared on three different test functions and on the synthesis of a cavity resonator. The microwave resonator is modeled using the Finite Element Method (FEM). The hit rate and the quality of the Pareto front distribution are evaluated.
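Both NSGA-II and multi-objective PSO rank candidate solutions by Pareto dominance. The core test, sketched here in pure Python for a minimization problem (illustrative only, not the MATLAB implementations):

```python
def dominates(u, v):
    """u dominates v (minimization): no worse in every objective, strictly better in one."""
    return all(a <= b for a, b in zip(u, v)) and any(a < b for a, b in zip(u, v))

def nondominated(points):
    """The Pareto front: points not dominated by any other point in the set."""
    return [p for p in points if not any(dominates(q, p) for q in points)]
```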
Bircher, Pascal; Liniger, Hanspeter; Prasuhn, Volker
2016-04-01
Soil erosion is a well-known challenge both from a global perspective and in Switzerland, and it is assessed and discussed in many projects (e.g. national or European erosion risk maps). Meaningful assessment of soil erosion requires models that adequately reflect surface water flows. Various studies have attempted to achieve better modelling results by including multiple flow algorithms in the topographic length and slope factor (LS-factor) of the Revised Universal Soil Loss Equation (RUSLE). The choice of multiple flow algorithms is wide, and many of them have been implemented in programs or tools like Saga-Gis, GrassGis, ArcGIS, ArcView, Taudem, and others. This study compares six different multiple flow algorithms with the aim of identifying a suitable approach to calculating the LS factor for a new soil erosion risk map of Switzerland. The comparison of multiple flow algorithms is part of a broader project to model soil erosion for the entire agriculturally used area in Switzerland and to renew and optimize the current erosion risk map of Switzerland (ERM2). The ERM2 was calculated in 2009, using a high resolution digital elevation model (2 m) and a multiple flow algorithm in ArcView. This map has provided the basis for enforcing soil protection regulations since 2010 and has proved its worth in practice, but it has become outdated (new basic data are now available, e.g. data on land use change, a new rainfall erosivity map, a new digital elevation model, etc.) and is no longer user friendly (ArcView). In a first step towards its renewal, a new data set from the Swiss Federal Office of Topography (Swisstopo) was used to generate the agricultural area based on the existing field block map. A field block is an area consisting of farmland, pastures, and meadows which is bounded by hydrological borders such as streets, forests, villages, surface waters, etc. In our study, we compared the six multiple flow algorithms with the LS factor calculation approach used in
Directory of Open Access Journals (Sweden)
Prabhat Kumar Giri
2016-01-01
Full Text Available In the present era of globalization and competitive markets, cellular manufacturing has become a vital tool for meeting the challenges of improving productivity, which is the way to sustain growth. Getting the best results from cellular manufacturing depends on the formation of machine cells and part families. This paper examines the advantages of the ART method of cell formation over array-based clustering algorithms, namely ROC-2 and DCA. The cell formation methods are compared and evaluated, and the most appropriate approach is selected and used to form the cellular manufacturing system. The comparison and evaluation are done on the basis of grouping efficiency as the performance measure, and improvements over the existing cellular manufacturing system are presented.
Comparison of algorithms to infer genetic population structure from unlinked molecular markers.
Peña-Malavera, Andrea; Bruno, Cecilia; Fernandez, Elmer; Balzarini, Monica
2014-08-01
Identifying population genetic structure (PGS) is crucial for breeding and conservation. Several clustering algorithms are available to identify the underlying PGS from genetic data of maize genotypes. In this work, six methods to identify PGS from unlinked molecular marker data were compared using simulated and experimental data consisting of multilocus-biallelic genotypes. Datasets were delineated under different biological scenarios characterized by three levels of genetic divergence among populations (low, medium, and high FST) and two numbers of sub-populations (K=3 and K=5). The relative performance of hierarchical and non-hierarchical clustering, as well as model-based clustering (STRUCTURE) and clustering from neural networks (SOM-RP-Q), was evaluated. The clustering error rate of genotypes into discrete sub-populations was used as the comparison criterion. In scenarios with a high level of divergence among genotype groups, all methods performed well. With a moderate level of genetic divergence (FST=0.2), the algorithms SOM-RP-Q and STRUCTURE performed better than hierarchical and non-hierarchical clustering. In all simulated scenarios with low genetic divergence, and in the experimental SNP maize panel (largely unlinked), SOM-RP-Q achieved the lowest clustering error rate. The SOM algorithm used here is more effective than the other evaluated methods for sparse unlinked genetic data.
A comparison of two adaptive algorithms for the control of active engine mounts
Hillis, A. J.; Harrison, A. J. L.; Stoten, D. P.
2005-08-01
This paper describes work conducted in order to control automotive active engine mounts, consisting of a conventional passive mount and an internal electromagnetic actuator. Active engine mounts seek to cancel the oscillatory forces generated by the rotation of out-of-balance masses within the engine. The actuator generates a force dependent on a control signal from an algorithm implemented with a real-time DSP. The filtered-x least-mean-square (FXLMS) adaptive filter is used as a benchmark for comparison with a new implementation of the error-driven minimal controller synthesis (Er-MCSI) adaptive controller. Both algorithms are applied to an active mount fitted to a saloon car equipped with a four-cylinder turbo-diesel engine, and have no a priori knowledge of the system dynamics. The steady-state and transient performance of the two algorithms are compared and the relative merits of the two approaches are discussed. The Er-MCSI strategy offers significant computational advantages as it requires no cancellation path modelling. The Er-MCSI controller is found to perform in a fashion similar to the FXLMS filter—typically reducing chassis vibration by 50-90% under normal driving conditions.
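The FXLMS algorithm extends the basic least-mean-square (LMS) update by filtering the reference signal through a model of the cancellation path. The underlying LMS step itself is shown below as an illustrative pure-Python sketch (not the paper's real-time DSP implementation):

```python
def lms_step(weights, x_buf, desired, mu=0.1):
    """One LMS iteration: filter output y = w.x, error e = d - y,
    gradient-descent weight update w <- w + mu * e * x."""
    y = sum(w * x for w, x in zip(weights, x_buf))
    e = desired - y
    return [w + mu * e * x for w, x in zip(weights, x_buf)], e
```

Iterated against a plant, the weights converge so that the filter output cancels the disturbance, which is the mechanism the active mount exploits.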
Directory of Open Access Journals (Sweden)
Gaurav Prakash
2016-01-01
Conclusions: Preoperative whole eye HOA were similar for refractive surgery candidates of Arab and South Asian origin. The values were comparable to historical data for Caucasian eyes and were lower than Asian (Chinese) eyes. These findings may aid in refining refractive nomograms for wavefront ablations.
Comparison of PID Controller Tuning Methods with Genetic Algorithm for FOPTD System
Directory of Open Access Journals (Sweden)
K. Mohamed Hussain
2014-02-01
Full Text Available Measurement of level, temperature, pressure and flow parameters is vital in all process industries. A combination of a few transducers with a controller, forming a closed-loop system, leads to a stable and effective process. This article deals with the control of a process tank and a comparative analysis of various PID control techniques and the Genetic Algorithm (GA) technique. The model for such a real-time process is identified as a First Order Plus Dead Time (FOPTD) process and validated. The need for improved process performance has led to the development of model-based controllers. Well-designed conventional Proportional, Integral and Derivative (PID) controllers are the most widely used controllers in the chemical process industries because of their simplicity, robustness and successful practical applications. Many tuning methods have been proposed for obtaining better PID controller parameter settings. The various tuning methods for the FOPTD process are analysed using simulation software; our purpose in this study is the comparison of these tuning methods for single-input single-output (SISO) systems using computer simulation. The efficiency of the various PID controllers is also investigated for different performance metrics, such as Integral Square Error (ISE), Integral Absolute Error (IAE), Integral Time Absolute Error (ITAE), and Mean Square Error (MSE), and simulation is carried out. This paper explores the basic concepts, mathematics, and design aspects of PID controllers. A comparison between the PID controller and the Genetic Algorithm (GA) is carried out to determine the best controller for the temperature system.
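A minimal discrete simulation of a PID loop around an FOPTD plant, scoring a tuning by its ISE, can be sketched as follows. This is illustrative only; the plant parameters and gains are arbitrary assumptions, not those identified in the article.

```python
def simulate_pid_foptd(kp, ki, kd, K=1.0, tau=5.0, theta=2.0,
                       dt=0.1, t_end=50.0, sp=1.0):
    """Discrete PID on a first-order-plus-dead-time plant; returns the ISE."""
    n = int(t_end / dt)
    delay = int(theta / dt)          # dead time implemented as a delay buffer
    u_hist = [0.0] * delay
    y, integ, e_prev, ise = 0.0, 0.0, sp, 0.0
    for _ in range(n):
        e = sp - y
        integ += e * dt
        deriv = (e - e_prev) / dt
        u = kp * e + ki * integ + kd * deriv
        e_prev = e
        u_hist.append(u)
        u_delayed = u_hist.pop(0)    # control applied theta seconds later
        y += dt * (K * u_delayed - y) / tau   # first-order lag (Euler step)
        ise += e * e * dt
    return ise
```

A GA-based tuner would simply search the (kp, ki, kd) space to minimize this ISE (or IAE/ITAE/MSE) score.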
An Incremental Algorithm of Text Clustering Based on Semantic Sequences
Institute of Scientific and Technical Information of China (English)
FENG Zhonghui; SHEN Junyi; BAO Junpeng
2006-01-01
This paper proposed an incremental text clustering algorithm based on semantic sequences. Using the similarity relation of semantic sequences and calculating the cover of the similar semantic sequence set, the candidate cluster with the minimum entropy overlap value is selected as a result cluster at each step of the algorithm. The comparison of experimental results shows that the precision of the algorithm is higher than that of other algorithms under the same conditions, and this is especially evident on sets of long documents.
Algorithm, applications and evaluation for protein comparison by Ramanujan Fourier transform.
Zhao, Jian; Wang, Jiasong; Hua, Wei; Ouyang, Pingkai
2015-12-01
The amino acid sequence of a protein determines its chemical properties, chain conformation and biological functions. Protein sequence comparison is of great importance to identify similarities of protein structures and infer their functions. Many properties of a protein correspond to the low-frequency signals within the sequence. Low-frequency modes in protein sequences are linked to the secondary structures, membrane protein types, and sub-cellular localizations of the proteins. In this paper, we present the Ramanujan Fourier transform (RFT) with a fast algorithm to analyze the low-frequency signals of protein sequences. The RFT method is applied to similarity analysis of protein sequences with the Resonant Recognition Model (RRM). The results show that the proposed fast RFT method for protein comparison is more efficient than the commonly used discrete Fourier transform (DFT). RFT can detect common frequencies as significant features for specific protein families, and the RFT spectrum heat-map of protein sequences demonstrates the information conservation in the sequence comparison. The proposed method offers a new tool for pattern recognition, feature extraction and structural analysis of protein sequences.
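The RFT expands a signal over Ramanujan sums c_q(n) rather than complex exponentials; the sums themselves are integer-valued and cheap to compute. A small sketch of c_q(n) using its standard definition (the function name is ours):

```python
from math import cos, gcd, pi

def ramanujan_sum(q, n):
    """c_q(n) = sum of cos(2*pi*k*n/q) over 1 <= k <= q with gcd(k, q) = 1.

    The sum is always an integer, e.g. c_q(0) = Euler's totient of q.
    """
    return round(sum(cos(2 * pi * k * n / q)
                     for k in range(1, q + 1) if gcd(k, q) == 1))
```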
Comparison of Bayesian Land Surface Temperature algorithm performance with Terra MODIS observations
Morgan, J A
2009-01-01
An approach to land surface temperature (LST) estimation that relies upon Bayesian inference has been validated against multiband infrared radiometric imagery from the Terra MODIS instrument. Bayesian LST estimators are shown to reproduce standard MODIS product LST values starting from a parsimoniously chosen (hence, uninformative) range of prior band emissivity knowledge. Two estimation methods have been tested. The first is the iterative contraction mapping of joint expectation values for LST and surface emissivity described in a previous paper. In the second method, the Bayesian algorithm is reformulated as a Maximum A-Posteriori (MAP) search for the maximum joint a-posteriori probability for LST, given observed sensor aperture radiances and a-priori probabilities for LST and emissivity. Two MODIS data granules each for daytime and nighttime were used for the comparison. The granules were chosen to be largely cloud-free, with limited vertical relief in those portions of the granules fo...
A damage diagnostic imaging algorithm based on the quantitative comparison of Lamb wave signals
Wang, Dong; Ye, Lin; Lu, Ye; Li, Fucai
2010-06-01
With the objective of improving the temperature stability of the quantitative comparison of Lamb wave signals captured in different states, a damage diagnostic imaging algorithm integrated with Shannon-entropy-based interrogation was proposed. It was evaluated experimentally by identifying surface damage in a stiffener-reinforced CF/EP quasi-isotropic woven laminate. The variations in Shannon entropy of the reference (without damage) and present (with damage) signals from individual sensing paths were calibrated as damage signatures and utilized to estimate the probability of the presence of damage in the monitoring area enclosed by an active sensor network. The effects of temperature change on calibration of the damage signatures and estimation of the probability values for the presence of damage were investigated using a set of desynchronized signals. The results demonstrate that the Shannon-entropy-based damage diagnostic imaging algorithm with improved robustness in the presence of temperature change has the capability of providing accurate identification of damage in actual environments.
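The Shannon entropy of a signal's amplitude distribution, of the kind used here to calibrate damage signatures, can be sketched as follows. This is an illustrative helper; the binning choices are our own assumptions, not those of the study.

```python
from math import log2

def shannon_entropy(signal, bins=16):
    """Entropy (in bits) of the normalized amplitude histogram of a signal."""
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / bins or 1.0          # guard against a constant signal
    counts = [0] * bins
    for v in signal:
        counts[min(int((v - lo) / width), bins - 1)] += 1
    n = len(signal)
    return -sum(c / n * log2(c / n) for c in counts if c)
```

Comparing this entropy between reference and present signals on each sensing path yields a scalar damage signature that can feed a probabilistic imaging map.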
Comparison of the most common HRV computation algorithms from the systems designer point of view.
Manis, G
2009-01-01
In this paper we examine the most commonly used algorithms for the computation of heart rate variability as well as some other interesting approaches. The aim of the paper is to study the problem from a different point of view, that of the systems designer. The selected algorithms are compared to each other through experimental analysis and theoretical study. The comparison criteria are efficiency, complexity, size of the object code, memory requirements, power consumption, parallel complexity and speedup achieved, ability to respond in real time, simplicity of the interface and implementation in hardware. The motivation is strong since heart rate variability is an interesting problem which finds application not only in conventional computing systems, but also in small, even wearable devices implemented using embedded systems technology or directly in hardware. The computation of heart rate variability for a set of signal recordings and the classification achieved for these signals with each examined method are also presented. Signals have been recorded from young and elderly subjects and the examined methods are used to classify them into these two distinct groups.
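Typical time-domain HRV statistics of the kind such systems compute are SDNN (the standard deviation of RR intervals) and RMSSD (the root mean square of successive differences). A minimal sketch, simple enough even for embedded targets (illustrative only; not one of the compared implementations):

```python
def sdnn(rr_ms):
    """Sample standard deviation of RR intervals (milliseconds)."""
    m = sum(rr_ms) / len(rr_ms)
    return (sum((x - m) ** 2 for x in rr_ms) / (len(rr_ms) - 1)) ** 0.5

def rmssd(rr_ms):
    """Root mean square of successive RR-interval differences."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return (sum(d * d for d in diffs) / len(diffs)) ** 0.5
```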
Comparison of Semi-Lagrangian Algorithms for Solving Vlasov-type Equations
Brunner, Stephan
2005-10-01
In view of pursuing CRPP's effort in carrying out gyrokinetic simulations using an Eulerian-type approach [M. Brunetti et al., Comp. Phys. Comm. 163, 1 (2004)], different alternative algorithms have been considered. The issue is to identify the most appropriate time-stepping scheme, from the point of view of both numerical accuracy and numerical efficiency. Our efforts have concentrated on two semi-Lagrangian approaches: the widely used cubic B-spline interpolation scheme, based on the original work of Cheng and Knorr [C. Z. Cheng and G. Knorr, J. Comp. Phys. 22, 330 (1976)], as well as the Cubic Interpolation Propagation (CIP) scheme, based on cubic Hermite interpolation, which has only more recently been applied to solving Vlasov-type equations [T. Nakamura and T. Yabe, Comp. Phys. Comm. 120, 122 (1999)]. A systematic comparison of these algorithms with respect to their basic spectral (diffusion/dispersion) properties, as well as their ability to avoid the overshoot (Gibbs) problem, is first presented. Results from solving a guiding-center model of the two-dimensional Kelvin-Helmholtz instability are then compared. This test problem makes it possible to address some of the key technical issues also met with the more complex gyrokinetic-type equations.
Lu, Lee-Jane W.; Nishino, Thomas K.; Johnson, Raleigh F.; Nayeem, Fatima; Brunder, Donald G.; Ju, Hyunsu; Leonard, Morton H., Jr.; Grady, James J.; Khamapirad, Tuenchit
2012-11-01
Women with mostly mammographically dense fibroglandular tissue (breast density, BD) have a four- to six-fold increased risk for breast cancer compared to women with little BD. BD is most frequently estimated from two-dimensional (2D) views of mammograms by a histogram segmentation approach (HSM) and more recently by a mathematical algorithm consisting of mammographic imaging parameters (MATH). Two non-invasive clinical magnetic resonance imaging (MRI) protocols: 3D gradient-echo (3DGRE) and short tau inversion recovery (STIR) were modified for 3D volumetric reconstruction of the breast for measuring fatty and fibroglandular tissue volumes by a Gaussian-distribution curve-fitting algorithm. Replicate breast exams (N = 2 to 7 replicates in six women) by 3DGRE and STIR were highly reproducible for all tissue-volume estimates (coefficients of variation tissue, (2) 0.72-0.82, 0.64-0.96, and 0.77-0.91, for glandular volume, (3) 0.87-0.98, 0.94-1.07, and 0.89-0.99, for fat volume, and (4) 0.89-0.98, 0.94-1.00, and 0.89-0.98, for total breast volume. For all values estimated, the correlation was stronger for comparisons between the two MRI than between each MRI versus mammography, and between each MRI versus MATH data than between each MRI versus HSM data. All ICC values were >0.75, indicating that all four methods were reliable for measuring BD and that the mathematical algorithm and the two complementary non-invasive MRI protocols could objectively and reliably estimate different types of breast tissues.
Knöös, Tommy; Wieslander, Elinore; Cozzi, Luca; Brink, Carsten; Fogliata, Antonella; Albers, Dirk; Nyström, Håkan; Lassen, Søren
2006-11-21
A study of the performance of five commercial radiotherapy treatment planning systems (TPSs) for common treatment sites regarding their ability to model heterogeneities and scattered photons has been performed. The comparison was based on CT information for prostate, head and neck, breast and lung cancer cases. The TPSs were installed locally at different institutions and commissioned for clinical use based on local procedures. For the evaluation, beam qualities as identical as possible were used: low energy (6 MV) and high energy (15 or 18 MV) x-rays. All relevant anatomical structures were outlined and simple treatment plans were set up. Images, structures and plans were exported, anonymized and distributed to the participating institutions using the DICOM protocol. The plans were then re-calculated locally and exported back for evaluation. The TPSs cover dose calculation techniques from correction-based equivalent path length algorithms to model-based algorithms. These were divided into two groups based on how changes in electron transport are accounted for ((a) not considered and (b) considered). Increasing the complexity from the relatively homogeneous pelvic region to the very inhomogeneous lung region resulted in less accurate dose distributions. Improvements in the calculated dose have been shown when models consider volume scatter and changes in electron transport, especially when the extension of the irradiated volume was limited and when low densities were present in or adjacent to the fields. A Monte Carlo calculated algorithm input data set and a benchmark set for a virtual linear accelerator have been produced which have facilitated the analysis and interpretation of the results. The more sophisticated models in the type b group exhibit changes in both absorbed dose and its distribution which are congruent with the simulations performed by Monte Carlo-based virtual accelerator.
2010-01-01
In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs. We have compared a modified algorithm of the Dual ternary indexing algorithm for music indexing and retrieval with the multi-key hashing indexing algorithm proposed by us. The modification in the dual ternary algorithm was essential to handle variable length query phrase and to accommodate features specific to Carnatic music. The dual ternary indexing algorithm is ...
Truntzler, M; Barrière, Y; Sawkins, M C; Lespinasse, D; Betran, J; Charcosset, A; Moreau, L
2010-11-01
A meta-analysis of quantitative trait loci (QTL) associated with plant digestibility and cell wall composition in maize was carried out using results from 11 different mapping experiments. Statistical methods implemented in "MetaQTL" software were used to build a consensus map, project QTL positions and perform meta-analysis. Fifty-nine QTL for traits associated with digestibility and 150 QTL for traits associated with cell wall composition were included in the analysis. We identified 26 and 42 metaQTL for digestibility and cell wall composition traits, respectively. Fifteen metaQTL with confidence interval (CI) smaller than 10 cM were identified. As expected from trait correlations, 42% of metaQTL for digestibility displayed overlapping CIs with metaQTL for cell wall composition traits. Coincidences were particularly strong on chromosomes 1 and 3. In a second step, 356 genes selected from the MAIZEWALL database as candidates for the cell wall biosynthesis pathway were positioned on our consensus map. Colocalizations between candidate genes and metaQTL positions appeared globally significant based on χ(2) tests. This study contributed in identifying key chromosomal regions involved in silage quality and potentially associated genes for most of these regions. These genes deserve further investigation, in particular through association mapping.
Directory of Open Access Journals (Sweden)
Mirek eFatyga
2015-02-01
Full Text Available Background: Commonly used methods of assessing the accuracy of Deformable Image Registration (DIR) rely on image segmentation or landmark selection. These methods are very labor intensive and are thus limited to a relatively small number of image pairs. Direct voxel-by-voxel comparison can be automated to examine fluctuations in DIR quality over a long series of image pairs. Methods: A voxel-by-voxel comparison of three DIR algorithms applied to lung patients is presented. Registrations are compared by comparing volume histograms formed both with individual DIR maps and with a voxel-by-voxel subtraction of the two maps. When two DIR maps agree, one concludes that both maps are interchangeable in treatment planning applications, though one cannot conclude that either one agrees with the ground truth. If two DIR maps significantly disagree, one concludes that at least one of the maps deviates from the ground truth. We use the method to compare three DIR algorithms applied to peak inhale-peak exhale registrations of 4DFBCT data obtained from thirteen patients. Results: All three algorithms appear to be nearly equivalent when compared using DICE similarity coefficients. A comparison based on Jacobian volume histograms shows that all three algorithms measure changes in total lung volume with reasonable accuracy, but show large differences in the variance of the Jacobian distribution on all contoured structures. Analysis of voxel-by-voxel subtraction of DIR maps shows that the three algorithms differ to a degree sufficient to create a potential for dosimetric discrepancy during dose accumulation. Conclusions: DIR algorithms can perform well in some clinical applications, while potentially failing in others. These algorithms are best treated as potentially useful approximations of tissue deformation that need to be separately validated for every intended clinical application.
Query by image example: The CANDID approach
Energy Technology Data Exchange (ETDEWEB)
Kelly, P.M.; Cannon, M. [Los Alamos National Lab., NM (United States). Computer Research and Applications Group; Hush, D.R. [Univ. of New Mexico, Albuquerque, NM (United States). Dept. of Electrical and Computer Engineering
1995-02-01
CANDID (Comparison Algorithm for Navigating Digital Image Databases) was developed to enable content-based retrieval of digital imagery from large databases using a query-by-example methodology. A user provides an example image to the system, and images in the database that are similar to that example are retrieved. The development of CANDID was inspired by the N-gram approach to document fingerprinting, where a "global signature" is computed for every document in a database and these signatures are compared to one another to determine the similarity between any two documents. CANDID computes a global signature for every image in a database, where the signature is derived from various image features such as localized texture, shape, or color information. A distance between probability density functions of feature vectors is then used to compare signatures. In this paper, the authors present CANDID and highlight two results from their current research: subtracting a "background" signature from every signature in a database in an attempt to improve system performance when using inner-product similarity measures, and visualizing the contribution of individual pixels in the matching process. These ideas are applicable to any histogram-based comparison technique.
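The signature-matching idea described above can be sketched in a few lines. This is an illustrative reconstruction, not the CANDID code: a gray-level histogram and a cosine (normalized inner-product) similarity stand in for the richer texture/shape/color signatures and similarity measures used in the actual system.

```python
# Sketch of histogram-signature comparison in the spirit of CANDID
# (illustrative only; the bin count and pixel values are made up).
import math

def signature(pixels, bins=8, max_val=256):
    """Normalized histogram of pixel values: a crude 'global signature'."""
    hist = [0.0] * bins
    for p in pixels:
        hist[p * bins // max_val] += 1.0
    total = sum(hist)
    return [h / total for h in hist]

def similarity(sig_a, sig_b):
    """Cosine (normalized inner-product) similarity between two signatures."""
    dot = sum(a * b for a, b in zip(sig_a, sig_b))
    norm_a = math.sqrt(sum(a * a for a in sig_a))
    norm_b = math.sqrt(sum(b * b for b in sig_b))
    return dot / (norm_a * norm_b)

query = signature([10, 12, 200, 210, 15, 220])
match = signature([11, 13, 198, 215, 14, 225])
other = signature([100, 120, 130, 110, 125, 115])
assert similarity(query, match) > similarity(query, other)
```

The "background subtraction" mentioned in the abstract would amount to subtracting a database-wide average histogram from each signature before the inner product is taken.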
Comparison of algorithms for blood stain detection applied to forensic hyperspectral imagery
Yang, Jie; Messinger, David W.; Mathew, Jobin J.; Dube, Roger R.
2016-05-01
Blood stains are among the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Early detection of blood stains is particularly important, since blood reacts physically and chemically with air and materials over time. Accurate identification of blood remnants, including regions that might have been intentionally cleaned, is an important aspect of forensic investigation. Hyperspectral imaging is a potential method for detecting blood stains because it is non-contact and provides substantial spectral information that can be used to identify regions in a scene with trace amounts of blood. Such scenes can be highly complex, given the range of material types and conditions in which blood stains may appear at a crime scene. Some stains are hard to detect by the unaided eye, especially if a conscious effort to clean the scene has occurred (we refer to these as "latent" blood stains). In this paper we present the initial results of a study of the use of hyperspectral imaging algorithms for blood detection in complex scenes. We describe a hyperspectral imaging system which generates images covering the 400-700 nm visible range with a spectral resolution of 10 nm. Three image sets of 31 wavelength bands were generated using this camera for a simulated indoor crime scene in which blood stains were placed on a T-shirt and walls. To detect blood stains in the scene, Principal Component Analysis (PCA), Subspace Reed-Xiaoli Detection (SRXD), and Topological Anomaly Detection (TAD) algorithms were used. Comparison of the three hyperspectral image analysis techniques shows that TAD is most suitable for detecting blood stains and discovering latent blood stains.
A Comparison of Prediction Algorithms for Prefetching in the Current Web
Josep Domenech; Sahuquillo Borrás, Julio; Gil Salinas, José Antonio; Pont Sanjuan, Ana
2012-01-01
This paper reviews a representative subset of the prediction algorithms used for Web prefetching classifying them according to the information gathered. Then, the DDG algorithm is described. The main novelty of this algorithm lies in the fact that, unlike previous algorithms, it creates a prediction model according to the structure of the current web. To this end, the algorithm distinguishes between container objects and embedded objects. Its performance is compared against important existing...
Sivakumar, P. Bagavathi; Mohandas, V. P.
Stock price prediction and stock trend prediction are the two major research problems of financial time series analysis. In this work, the performance of various attribute-set reduction algorithms was compared for short-term stock price prediction: forward selection, backward elimination, optimized selection, optimized selection based on brute force, weight-guided selection, and optimized selection based on evolutionary principles and strategies. Different selection schemes and crossover types were explored. To supplement learning and modeling, a support vector machine was also used in combination. The algorithms were applied to real-time Indian stock data, namely the CNX Nifty. The experimental study was conducted using the open-source data mining tool RapidMiner. Performance was compared in terms of root mean squared error, squared error, and execution time. The obtained results indicate the superiority of evolutionary algorithms; the optimized selection algorithm based on evolutionary principles outperforms the others.
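Forward selection, the first scheme named above, greedily adds whichever attribute most improves a model score. A minimal sketch follows; the `toy_score` function is an invented stand-in for the SVM-based evaluation the study would have used.

```python
# Greedy forward selection over a set of candidate attributes.
# The scoring function is a toy stand-in, not a real model evaluation.
def forward_selection(features, score, max_features=None):
    """Repeatedly add the feature that most improves the score; stop
    when no feature helps or the size limit is reached."""
    selected, remaining = [], list(features)
    best_score = score(selected)
    while remaining and (max_features is None or len(selected) < max_features):
        gains = [(score(selected + [f]), f) for f in remaining]
        top_score, top_f = max(gains)
        if top_score <= best_score:
            break  # no remaining feature improves the model
        selected.append(top_f)
        remaining.remove(top_f)
        best_score = top_score
    return selected

# Toy scoring: pretend attributes 'a' and 'c' are the informative ones.
useful = {'a': 0.6, 'b': 0.0, 'c': 0.3, 'd': 0.0}
toy_score = lambda subset: sum(useful[f] for f in subset)
assert forward_selection('abcd', toy_score) == ['a', 'c']
```

Backward elimination is the mirror image: start from the full set and greedily drop the attribute whose removal hurts the score least.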
Energy Technology Data Exchange (ETDEWEB)
Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.
2000-01-01
We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.
Ueno, Hiroki; Urasaki, Naoya; Natsume, Satoshi; Yoshida, Kentaro; Tarora, Kazuhiko; Shudo, Ayano; Terauchi, Ryohei; Matsumura, Hideo
2015-04-01
The sex type of papaya (Carica papaya) is determined by the pair of sex chromosomes (XX, female; XY, male; and XY(h), hermaphrodite), in which there is a non-recombining genomic region in the Y and Y(h) chromosomes. This region is presumed to be involved in determination of males and hermaphrodites; it is designated as the male-specific region in the Y chromosome (MSY) and the hermaphrodite-specific region in the Y(h) chromosome (HSY). Here, we identified the genes determining male and hermaphrodite sex types by comparing MSY and HSY genomic sequences. In the MSY and HSY genomic regions, we identified 14,528 nucleotide substitutions and 965 short indels with a large gap and two highly diverged regions. In the predicted genes expressed in flower buds, we found no nucleotide differences leading to amino acid changes between the MSY and HSY. However, we found an HSY-specific transposon insertion in a gene (SVP like) showing a similarity to the Short Vegetative Phase (SVP) gene. Study of SVP-like transcripts revealed that the MSY allele encoded an intact protein, while the HSY allele encoded a truncated protein. Our findings demonstrated that the SVP-like gene is a candidate gene for male-hermaphrodite determination in papaya.
Fast Fourier transform for Voigt profile: Comparison with some other algorithms
Abousahl, S.; Gourma, M.; Bickel, M.
1997-02-01
There are different algorithms describing the Voigt profile. This profile is encountered in many areas of physics, where its measurement can be limited by the resolution of the instrumentation used and by other phenomena, such as the interaction between the emitted waves and matter. In the nuclear measurement field, the codes used to characterise radionuclides rely on algorithms resolving the Voigt profile equation. The Fast Fourier Transform (FFT) algorithm allows the validation of some of these algorithms.
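The Voigt profile is the convolution of a Gaussian and a Lorentzian, which is why FFT-based evaluation is natural. A common cheap alternative, shown here purely as an illustrative sketch (it is not the FFT method evaluated in the paper), is the pseudo-Voigt linear mix of the two shapes:

```python
# Pseudo-Voigt approximation: a weighted sum of a Lorentzian and a
# Gaussian, often used as a cheap substitute for the true convolution.
import math

def gaussian(x, sigma):
    return math.exp(-x * x / (2 * sigma * sigma)) / (sigma * math.sqrt(2 * math.pi))

def lorentzian(x, gamma):
    return gamma / (math.pi * (x * x + gamma * gamma))

def pseudo_voigt(x, sigma, gamma, eta):
    """Linear mix of Lorentzian and Gaussian, with mixing weight 0 <= eta <= 1."""
    return eta * lorentzian(x, gamma) + (1 - eta) * gaussian(x, sigma)

# eta = 0 reduces to a pure Gaussian, eta = 1 to a pure Lorentzian.
assert pseudo_voigt(0.5, 1.0, 1.0, 0.0) == gaussian(0.5, 1.0)
assert pseudo_voigt(0.5, 1.0, 1.0, 1.0) == lorentzian(0.5, 1.0)
```

In practice `eta` is chosen from the ratio of the Gaussian and Lorentzian widths; comparing such approximations against an FFT-based convolution is exactly the kind of validation the abstract describes.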
Institute of Scientific and Technical Information of China (English)
姜斌; 罗阿理; 赵永恒
2012-01-01
An automatic and efficient method for selecting cataclysmic variable (CV) candidates from the massive spectra of the LAMOST (Guo Shou Jing) telescope is presented. Spectra of identified CVs were used as templates, and a classification model was trained with the random forest algorithm on the templates together with randomly selected spectra. The model ranks the importance of the flux at each wavelength; this ranking is used for dimension reduction and CV discrimination. Most non-candidates are excluded by the method, template matching is used to identify the final candidates, and confirmed candidates are fed back to enrich the template library. Sixteen new CV candidates were found in the experiment, showing that this approach to finding special celestial objects in LAMOST data is feasible.
Martin-Fernandez, Manuel; Revuelta, Javier
2017-01-01
This study compares the performance of two estimation algorithms of new usage, the Metropolis-Hastings Robins-Monro (MHRM) and the Hamiltonian MCMC (HMC), with two consolidated algorithms in the psychometric literature, the marginal likelihood via EM algorithm (MML-EM) and the Markov chain Monte Carlo (MCMC), in the estimation of multidimensional…
Comparison and analysis of nonlinear algorithms for compressed sensing in MRI.
Yu, Yeyang; Hong, Mingjian; Liu, Feng; Wang, Hua; Crozier, Stuart
2010-01-01
Compressed sensing (CS) theory has been recently applied in Magnetic Resonance Imaging (MRI) to accelerate the overall imaging process. In the CS implementation, various algorithms have been used to solve the nonlinear equation system for better image quality and reconstruction speed. However, there are no explicit criteria for an optimal CS algorithm selection in the practical MRI application. A systematic and comparative study of those commonly used algorithms is therefore essential for the implementation of CS in MRI. In this work, three typical algorithms, namely, the Gradient Projection For Sparse Reconstruction (GPSR) algorithm, Interior-point algorithm (l(1)_ls), and the Stagewise Orthogonal Matching Pursuit (StOMP) algorithm are compared and investigated in three different imaging scenarios, brain, angiogram and phantom imaging. The algorithms' performances are characterized in terms of image quality and reconstruction speed. The theoretical results show that the performance of the CS algorithms is case sensitive; overall, the StOMP algorithm offers the best solution in imaging quality, while the GPSR algorithm is the most efficient one among the three methods. In the next step, the algorithm performances and characteristics will be experimentally explored. It is hoped that this research will further support the applications of CS in MRI.
Mosconi, E; Sima, D M; Osorio Garcia, M I; Fontanella, M; Fiorini, S; Van Huffel, S; Marzola, P
2014-04-01
Proton magnetic resonance spectroscopy (MRS) is a sensitive method for investigating the biochemical compounds in a tissue. The interpretation of the data relies on the quantification algorithms applied to MR spectra. Each of these algorithms has certain underlying assumptions and may allow one to incorporate prior knowledge, which could influence the quality of the fit. The most commonly considered types of prior knowledge include the line-shape model (Lorentzian, Gaussian, Voigt), knowledge of the resonating frequencies, modeling of the baseline, constraints on the damping factors and phase, etc. In this article, we study whether the statistical outcome of a biological investigation can be influenced by the quantification method used. We chose to study lipid signals because of their emerging role in the investigation of metabolic disorders. Lipid spectra, in particular, are characterized by peaks that are in most cases not Lorentzian, because measurements are often performed in difficult body locations, e.g. in visceral fats close to peristaltic movements in humans or very small areas close to different tissues in animals. This leads to spectra with several peak distortions. Linear combination of Model spectra (LCModel), Advanced Method for Accurate Robust and Efficient Spectral fitting (AMARES), quantitation based on QUantum ESTimation (QUEST), Automated Quantification of Short Echo-time MRS (AQSES)-Lineshape and Integration were applied to simulated spectra, and area under the curve (AUC) values, which are proportional to the quantity of the resonating molecules in the tissue, were compared with true values. A comparison between techniques was also carried out on lipid signals from obese and lean Zucker rats, for which the polyunsaturation value expressed in white adipose tissue should be statistically different, as confirmed by high-resolution NMR measurements (considered the gold standard) on the same animals. LCModel, AQSES-Lineshape, QUEST and Integration
Comparison Of Hybrid Sorting Algorithms Implemented On Different Parallel Hardware Platforms
Directory of Open Access Journals (Sweden)
Dominik Zurek
2013-01-01
Sorting is a common problem in computer science. There are many well-known sorting algorithms created for sequential execution on a single processor. Recent hardware platforms enable the creation of widely parallel algorithms: standard processors consist of multiple cores, and hardware accelerators such as GPUs are available. Graphics cards, with their parallel architecture, give new possibilities for speeding up many algorithms. In this paper we describe the results of implementing several different sorting algorithms on GPU cards and multicore processors. A hybrid algorithm is then presented, which consists of parts executed on both platforms, standard CPU and GPU.
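The hybrid scheme splits the data, sorts the parts on different devices, and merges the results. The sketch below keeps that split/sort/merge skeleton but runs every chunk on the CPU; in the hybrid version each sorted chunk would be produced by a GPU or CPU worker.

```python
# Split/sort/merge skeleton of a hybrid sort (single-device sketch).
import heapq

def hybrid_sort(data, n_chunks=4):
    """Sort chunks independently (as parallel workers would), then
    k-way merge the sorted runs."""
    size = max(1, (len(data) + n_chunks - 1) // n_chunks)
    chunks = [sorted(data[i:i + size]) for i in range(0, len(data), size)]
    return list(heapq.merge(*chunks))

values = [9, 1, 7, 3, 8, 2, 6, 4, 5, 0]
assert hybrid_sort(values) == sorted(values)
```

The merge step is sequential here; on real hardware the balance between chunk-sort time (parallel) and merge time (often on the CPU) is what decides whether the hybrid pays off.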
Antoniucci, S; Causi, G Li; Lorenzetti, D
2014-01-01
Aiming at a statistical study of the mid-IR variability of young stellar objects (YSOs), we have compared the 3.6, 4.5, and 24 μm Spitzer fluxes of 1478 sources belonging to the C2D (Cores to Disks) legacy program with the WISE fluxes at 3.4, 4.6, and 22 μm. From this comparison we have selected a robust sample of 34 variable sources. Their variations were classified per spectral Class (according to the widely accepted scheme of Class I/flat/II/III protostars) and per star-forming region. On average, the number of variable sources decreases with increasing Class and is definitely higher in Perseus and Ophiuchus than in Chamaeleon and Lupus. According to the paradigm of Class evolution, the photometric variability can be considered a feature more pronounced in less evolved protostars and, as such, related to accretion processes. Moreover, our statistical findings agree with the current knowledge of the star formation activity in different regions. The 34 selected variables were further investigate...
Directory of Open Access Journals (Sweden)
Li Li
2012-07-01
Background: Several biclustering algorithms have been proposed to identify biclusters, in which genes share similar expression patterns across a number of conditions. However, different algorithms yield different biclusters and can lead to distinct conclusions; testing and comparison of these algorithms are therefore strongly required. Methods: In this study, five biclustering algorithms (BIMAX, FABIA, ISA, QUBIC and SAMBA) were compared with each other when used to handle two expression datasets (GDS1620 and pathway) with different dimensions in Arabidopsis thaliana (A. thaliana). GO (gene ontology) annotation and PPI (protein-protein interaction) networks were used to verify the corresponding biological significance of biclusters from the five algorithms. To compare the algorithms' performance and evaluate the quality of identified biclusters, two scoring methods, namely weighted enrichment (WE) scoring and PPI scoring, were proposed in our study. For each dataset, after combining the scores of all biclusters into one unified ranking, we could evaluate the performance and behavior of the five biclustering algorithms in a better way. Results: Both the WE and PPI scoring methods proved effective for validating the biological significance of the biclusters, and a significantly positive correlation between the two sets of scores demonstrated the consistency of the two methods. A comparative study of the above five algorithms revealed that: (1) ISA is the most effective one among the five algorithms on the GDS1620 dataset, and BIMAX outperforms the other algorithms on the pathway dataset. (2) Both ISA and BIMAX are data-dependent. The former does not work well on datasets with few genes, while the latter holds well for datasets with more conditions. (3) FABIA and QUBIC perform poorly in this study and they may be suitable to large datasets with more genes and
A comparison of kinematic algorithms to estimate gait events during overground running.
Smith, Laura; Preece, Stephen; Mason, Duncan; Bramah, Christopher
2015-01-01
The gait cycle is frequently divided into two distinct phases, stance and swing, which can be accurately determined from ground reaction force data. In the absence of such data, kinematic algorithms can be used to estimate footstrike and toe-off. The performance of previously published algorithms is not consistent between studies. Furthermore, previous algorithms have not been tested at higher running speeds nor used to estimate ground contact times. Therefore the purpose of this study was to both develop a new, custom-designed, event detection algorithm and compare its performance with four previously tested algorithms at higher running speeds. Kinematic and force data were collected on twenty runners during overground running at 5.6m/s. The five algorithms were then implemented and estimated times for footstrike, toe-off and contact time were compared to ground reaction force data. There were large differences in the performance of each algorithm. The custom-designed algorithm provided the most accurate estimation of footstrike (True Error 1.2 ± 17.1 ms) and contact time (True Error 3.5 ± 18.2 ms). Compared to the other tested algorithms, the custom-designed algorithm provided an accurate estimation of footstrike and toe-off across different footstrike patterns. The custom-designed algorithm provides a simple but effective method to accurately estimate footstrike, toe-off and contact time from kinematic data.
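A minimal kinematic event-detection rule can illustrate the idea: treat footstrike as the frame where the foot marker's vertical velocity turns from negative (descending) to non-negative. The function names and the synthetic height series below are ours, not the custom-designed algorithm from the study.

```python
# Toy kinematic footstrike detector: find local minima of foot height
# via upward zero crossings of the vertical velocity.
def zero_crossings(series):
    """Indices where the series crosses zero going upward."""
    return [i for i in range(1, len(series))
            if series[i - 1] < 0 <= series[i]]

def detect_footstrikes(foot_height, dt=0.005):
    velocity = [(foot_height[i + 1] - foot_height[i]) / dt
                for i in range(len(foot_height) - 1)]
    # velocity[i] is the velocity leaving frame i; an upward zero
    # crossing means frame i is a local height minimum (footstrike).
    return zero_crossings(velocity)

# Synthetic marker heights with two ground contacts (frames 3 and 9).
heights = [3, 2, 1, 0.5, 0.6, 1.5, 2.5, 2, 1, 0.4, 0.5, 1.2]
assert detect_footstrikes(heights) == [3, 9]
```

Real algorithms add filtering and footstrike-pattern-specific logic; the point of the comparison in the abstract is precisely that such refinements change accuracy by tens of milliseconds.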
Analysis of data mining classification by comparison of C4.5 and ID algorithms
Sudrajat, R.; Irianingsih, I.; Krisnawan, D.
2017-01-01
The rapid development of information technology has been accompanied by its intensive use; for example, data mining is widely used in investment. Among the many techniques that can assist in investment, the method used here for classification is the decision tree. Decision trees have a variety of algorithms, such as C4.5 and ID3. The two algorithms can generate different models, with different accuracy, for similar data sets. The C4.5 and ID3 algorithms with discrete data give accuracies of 87.16% and 99.83%, respectively, and the C4.5 algorithm with numerical data gives 89.69%. With discrete data, C4.5 and ID3 classify 520 and 598 customers, respectively, and C4.5 with numerical data classifies 546 customers. From the analysis, both algorithms classify quite well, since the error rate is less than 15%.
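The two algorithms differ mainly in their split criterion: ID3 uses information gain, which C4.5 refines into gain ratio and extends to numerical attributes. A minimal sketch of ID3's information-gain computation (the toy "outlook" data is invented for illustration):

```python
# ID3's split criterion: information gain = entropy reduction from
# partitioning the labels by an attribute's values.
import math
from collections import Counter

def entropy(labels):
    counts = Counter(labels)
    total = len(labels)
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

def information_gain(rows, labels, attr):
    total = len(labels)
    subsets = {}
    for row, label in zip(rows, labels):
        subsets.setdefault(row[attr], []).append(label)
    remainder = sum(len(s) / total * entropy(s) for s in subsets.values())
    return entropy(labels) - remainder

rows = [{'outlook': 'sunny'}, {'outlook': 'sunny'},
        {'outlook': 'rain'}, {'outlook': 'rain'}]
labels = ['no', 'no', 'yes', 'yes']
assert information_gain(rows, labels, 'outlook') == 1.0  # a perfect split
```

C4.5 divides this gain by the split's own entropy (gain ratio) to avoid favoring many-valued attributes, which is one reason the two algorithms can select different splits and reach different accuracies on the same data.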
Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung
2016-02-01
Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
Korean Medication Algorithm for Bipolar Disorder 2014: comparisons with other treatment guidelines
Directory of Open Access Journals (Sweden)
Jeong JH
2015-06-01
with MS or AAP for dysphoric/psychotic mania. Aripiprazole, olanzapine, quetiapine, and risperidone were the first-line AAPs in nearly all of the phases of bipolar disorder across the guidelines. Most guidelines advocated newer AAPs as first-line treatment options in all phases, and lamotrigine in depressive and maintenance phases. Lithium and valproic acid were commonly used as MSs in all phases of bipolar disorder. As research evidence accumulated over time, recommendations of newer AAPs – such as asenapine, paliperidone, lurasidone, and long-acting injectable risperidone – became prominent. This comparison identifies that the treatment recommendations of the KMAP-BP 2014 are similar to those of other treatment guidelines and reflect current changes in prescription patterns for bipolar disorder based on accumulated research data. Further studies are needed to address several issues identified in our review. Keywords: bipolar disorder, pharmacotherapy, treatment algorithm, guideline comparison, KMAP-2014
Institute of Scientific and Technical Information of China (English)
吴文玲; 贺也平; 冯登国; 卿斯汉
2001-01-01
This paper briefly introduces the basic design ideas, recent analysis results, and validity of the 17 candidate block cipher algorithms recently published by the European NESSIE (New European Schemes for Signatures, Integrity, and Encryption) project.
A comparison of two incompressible Navier-Stokes algorithms for unsteady internal flow
Wiltberger, N. Lyn; Rogers, Stuart E.; Kwak, Dochan
1993-01-01
A comparative study of two different incompressible Navier-Stokes algorithms for solving an unsteady, incompressible, internal flow problem is performed. The first algorithm uses an artificial compressibility method coupled with upwind differencing and a line relaxation scheme. The second algorithm uses a fractional step method with a staggered grid, finite volume approach. Unsteady, viscous, incompressible, internal flow through a channel with a constriction is computed using the first algorithm. A grid resolution study and parameter studies on the artificial compressibility coefficient and the maximum allowable residual of the continuity equation are performed. The periodicity of the solution is examined and several periodic data sets are generated using the first algorithm. These computational results are compared with previously published results computed using the second algorithm and experimental data.
A Comparison of the Effects of K-Anonymity on Machine Learning Algorithms
Directory of Open Access Journals (Sweden)
Hayden Wimmer
2014-11-01
While research has been conducted on machine learning algorithms and on privacy preserving data mining (PPDM), a gap exists in the literature combining the two areas to determine how PPDM affects common machine learning algorithms. The aim of this research is to narrow this gap by investigating how a common PPDM algorithm, K-Anonymity, affects common machine learning and data mining algorithms, namely neural networks, logistic regression, decision trees, and Bayesian classifiers. This applied research reveals practical implications for applying PPDM to data mining and machine learning, and serves as a critical first step in learning how to apply PPDM to machine learning algorithms and what the effects of PPDM on machine learning are. Results indicate that certain machine learning algorithms are more suited for use with PPDM techniques.
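K-Anonymity itself is a simple property to state: every combination of quasi-identifier values must be shared by at least k records. A minimal checker follows; the three-record table is invented for illustration.

```python
# Check the k-anonymity property: each quasi-identifier combination
# must occur at least k times in the released table.
from collections import Counter

def is_k_anonymous(records, quasi_identifiers, k):
    combos = Counter(tuple(r[q] for q in quasi_identifiers) for r in records)
    return all(count >= k for count in combos.values())

records = [
    {'age': '30-40', 'zip': '123**', 'disease': 'flu'},
    {'age': '30-40', 'zip': '123**', 'disease': 'cold'},
    {'age': '40-50', 'zip': '456**', 'disease': 'flu'},
]
assert is_k_anonymous(records[:2], ['age', 'zip'], 2) is True
assert is_k_anonymous(records, ['age', 'zip'], 2) is False  # lone third record
```

Achieving the property (by generalizing or suppressing values, as in the `30-40` and `123**` entries above) is what distorts the features and thus degrades some learners more than others, which is the effect the study measures.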
Guo, Liyong; Yan, Zhiqiang; Zheng, Xiliang; Hu, Liang; Yang, Yongliang; Wang, Jin
2014-07-01
In protein-ligand docking, an optimization algorithm is used to find the best binding pose of a ligand against a protein target. This algorithm plays a vital role in determining the docking accuracy. To evaluate the relative performance of different optimization algorithms and provide guidance for real applications, we performed a comparative study on six efficient optimization algorithms, containing two evolutionary algorithm (EA)-based optimizers (LGA, DockDE) and four particle swarm optimization (PSO)-based optimizers (SODock, varCPSO, varCPSO-ls, FIPSDock), which were implemented into the protein-ligand docking program AutoDock. We unified the objective functions by applying the same scoring function, and built a new fitness accuracy as the evaluation criterion that incorporates optimization accuracy, robustness, and efficiency. The varCPSO and varCPSO-ls algorithms show high efficiency with fast convergence speed. However, their accuracy is not optimal, as they cannot reach very low energies. SODock has the highest accuracy and robustness. In addition, SODock shows good performance in efficiency when optimizing drug-like ligands with less than ten rotatable bonds. FIPSDock shows excellent robustness and is close to SODock in accuracy and efficiency. In general, the four PSO-based algorithms show superior performance than the two EA-based algorithms, especially for highly flexible ligands. Our method can be regarded as a reference for the validation of new optimization algorithms in protein-ligand docking.
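A minimal global-best PSO, the family to which SODock, varCPSO, and FIPSDock belong, can be sketched as follows. This is a generic textbook variant with made-up parameters, not any of the six docking optimizers from the study; the quadratic "energy" stands in for a docking scoring function.

```python
# Generic global-best particle swarm optimizer (sketch).
import random

def pso_minimize(f, dim, bounds, n_particles=20, iters=100,
                 w=0.7, c1=1.5, c2=1.5, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                # personal best positions
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] = min(hi, max(lo, pos[i][d] + vel[i][d]))
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy 'energy' with its minimum (0) at the origin.
energy = lambda x: sum(xi * xi for xi in x)
best, best_val = pso_minimize(energy, dim=3, bounds=(-5.0, 5.0))
assert best_val < 0.1
```

In docking, `dim` would cover the ligand's translation, rotation, and torsion angles, and `f` would be the scoring function; the study's variants differ chiefly in the neighborhood topology and velocity-update rules.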
Comparison of strapdown inertial navigation algorithm based on rotation vector and dual quaternion
Institute of Scientific and Technical Information of China (English)
Wang Zhenhuan; Chen Xijun; Zeng Qingshuang
2013-01-01
For the navigation algorithm of a strapdown inertial navigation system, a comparison of the dual-quaternion and quaternion equations shows that, when the navigation frame rotates, the attitude algorithm based on dual quaternions is more accurate than those based on rotation vectors. Comparing the update algorithm for the gravitational velocity in the dual-quaternion solution with the compensation algorithm for the harmful acceleration in the traditional velocity solution likewise shows the accuracy advantage of the dual-quaternion gravitational velocity. Building on the dual-quaternion attitude and velocity algorithms, an improved navigation algorithm is proposed whose computational complexity matches that of the rotation vector algorithm; with this method, the attitude quaternion does not require compensation as the navigation frame rotates. To verify the correctness of the theoretical analysis, software simulations were carried out; the results show that the accuracy of the improved algorithm is approximately equal to that of the dual-quaternion algorithm.
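The attitude-update step common to these algorithms can be illustrated with plain quaternions: a body-frame rotation increment (a rotation vector) is converted to a delta quaternion and composed with the current attitude. This sketch is generic and omits the dual-quaternion velocity/gravity machinery discussed in the abstract.

```python
# Quaternion attitude update from a rotation-vector increment.
import math

def quat_mult(q, r):
    """Hamilton product of quaternions (w, x, y, z)."""
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = r
    return (w1*w2 - x1*x2 - y1*y2 - z1*z2,
            w1*x2 + x1*w2 + y1*z2 - z1*y2,
            w1*y2 - x1*z2 + y1*w2 + z1*x2,
            w1*z2 + x1*y2 - y1*x2 + z1*w2)

def update_attitude(q, rotation_vector):
    """Compose q with the delta quaternion of a body-frame rotation vector."""
    angle = math.sqrt(sum(c * c for c in rotation_vector))
    if angle < 1e-12:
        return q
    s = math.sin(angle / 2) / angle
    dq = (math.cos(angle / 2),) + tuple(s * c for c in rotation_vector)
    return quat_mult(q, dq)

# Two successive 45-degree rotations about z equal one 90-degree rotation.
q = (1.0, 0.0, 0.0, 0.0)
half = (0.0, 0.0, math.pi / 4)
q = update_attitude(update_attitude(q, half), half)
expected = (math.cos(math.pi / 4), 0.0, 0.0, math.sin(math.pi / 4))
assert all(abs(a - b) < 1e-12 for a, b in zip(q, expected))
```

A dual quaternion extends this by carrying translation alongside rotation in one algebraic object, which is what lets the attitude and velocity updates share a single update law.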
Directory of Open Access Journals (Sweden)
Rajeswari Sridhar
2010-07-01
In this work we compare two indexing algorithms that have been used to index and retrieve Carnatic music songs: a modified version of the dual ternary indexing algorithm for music indexing and retrieval, and the multi-key hashing indexing algorithm proposed by us. The modification of the dual ternary algorithm was essential to handle variable-length query phrases and to accommodate features specific to Carnatic music; the algorithm is adapted to Carnatic music by segmenting with a segmentation technique for Carnatic music. The dual ternary algorithm is compared with our multi-key hashing algorithm, in which features such as MFCC, spectral flux, melody string, and spectral centroid are used to index data into a hash table. The way collision resolution is handled by this hash table differs from the usual hash table approaches. We observed that multi-key hashing based retrieval had a lower time complexity than dual-ternary based indexing. The algorithms were also compared for their precision and recall, where multi-key hashing had better recall than modified dual ternary indexing for the sample data considered.
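The multi-key idea, inserting each song under several feature keys and letting a query vote across them, can be sketched as follows. The class and the quantized feature values are invented for illustration; the real system uses MFCC, spectral flux, melody string, and spectral centroid features and a custom collision-resolution scheme.

```python
# Toy multi-key hash index: one table keyed by (feature name, value),
# with query-time voting across the keys.
from collections import defaultdict, Counter

class MultiKeyIndex:
    def __init__(self):
        self.table = defaultdict(list)  # (feature, value) -> [song ids]

    def insert(self, song_id, features):
        for name, value in features.items():
            self.table[(name, value)].append(song_id)

    def query(self, features):
        """Rank songs by how many feature keys they share with the query."""
        votes = Counter()
        for name, value in features.items():
            for song_id in self.table[(name, value)]:
                votes[song_id] += 1
        return [song for song, _ in votes.most_common()]

index = MultiKeyIndex()
index.insert('song_a', {'mfcc': 3, 'flux': 1, 'centroid': 7})
index.insert('song_b', {'mfcc': 3, 'flux': 9, 'centroid': 2})
assert index.query({'mfcc': 3, 'flux': 1, 'centroid': 7})[0] == 'song_a'
```

Because each lookup is a constant-time hash probe, query cost grows with the number of features rather than with the size of the song collection, which is consistent with the lower time complexity reported for the multi-key approach.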
Directory of Open Access Journals (Sweden)
Imam Ahmad Ashari
2016-11-01
Course scheduling at a university is a complex type of scheduling problem, and the scheduling process must be carried out at the turn of every semester. The core difficulty of scheduling courses at a university is the number of components that must be considered in making the schedule: students, lecturers, times, and rooms, subject to limits and conditions that must hold so that there are no collisions in the schedule, such as a double-booked room or lecturer. The most appropriate technique for resolving such a scheduling problem is optimization, which can give the best achievable results. Metaheuristic algorithms explore many candidate solutions and can approach the optimal one. In this paper, we use two metaheuristic algorithms, a genetic algorithm and ant colony optimization, to solve the course scheduling problem. The two algorithms are tested and compared to determine which performs best. The algorithms were tested using course schedule data from a university in Semarang. From the experimental results we conclude that the genetic algorithm performs better than the ant colony optimization algorithm in this course scheduling case.
Battiste, Vernol; Lawton, George; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Johnson, Walter W.
2012-01-01
Managing the interval between arrival aircraft is a major part of the en route and TRACON controller's job. In an effort to reduce controller workload and low-altitude vectoring, algorithms have been developed to allow pilots to take responsibility for achieving and maintaining proper spacing. Additionally, algorithms have been developed to create dynamic weather-free arrival routes in the presence of convective weather. In a recent study we examined an algorithm to handle dynamic re-routing in the presence of convective weather together with two distinct spacing algorithms. The spacing algorithms originated from different core algorithms; both were enhanced with trajectory intent data for the study. These two algorithms were used simultaneously in a human-in-the-loop (HITL) simulation in which pilots performed weather-impacted arrival operations into Louisville International Airport while also performing interval management (IM) on some trials. The controllers retained responsibility for separation and for managing the en route airspace and, on some trials, for managing IM. The goal was a stress test of dynamic arrival algorithms with ground and airborne spacing concepts. The flight deck spacing algorithms or controller-managed spacing not only had to be robust to the dynamic nature of aircraft re-routing around weather but also had to be compatible with two alternative algorithms for achieving the spacing goal. Flight deck interval management spacing in this simulation provided a clear reduction in controller workload relative to when controllers were responsible for spacing the aircraft. At the same time, spacing was much less variable with the flight deck automated spacing. Even though the two spacing algorithms took slightly different approaches to achieving the interval management goals, they proved well matched, both achieving the interval management goal of 130 sec by the TRACON boundary.
Wolock, David M.; McCabe, Gregory J.
1995-05-01
Single flow direction (sfd) and multiple flow direction (mfd) algorithms were used to compute the spatial and statistical distributions of the topographic index used in the watershed model TOPMODEL. An sfd algorithm assumes that subsurface flow occurs only in the steepest downslope direction from any given point; an mfd algorithm assumes that subsurface flow occurs in all downslope directions from any given point. The topographic index in TOPMODEL is ln (a/tan β), where ln is the natural logarithm, a is the upslope area per unit contour length, and tan β is the slope gradient. The ln (a/tan β) distributions were computed from digital elevation model (DEM) data for locations with diverse topography in Arizona, Colorado, Louisiana, Nebraska, North Carolina, Oregon, Pennsylvania, Tennessee, Vermont, and Virginia. The means of the ln (a/tan β) distributions were higher when the mfd algorithm was used for computation compared to when the sfd algorithm was used. The variances and skews of the distributions were lower for the mfd algorithm compared to the sfd algorithm. The differences between the mfd and sfd algorithms in the mean, variance, and skew of the ln (a/tan β) distribution were almost identical for the various DEMs and were not affected by DEM resolution or watershed size. TOPMODEL model efficiency and simulated flow paths were affected only slightly when the ln (a/tan β) distribution was computed with the sfd algorithm instead of the mfd algorithm. Any difference in the model efficiency and simulated flow paths between the sfd and mfd algorithms essentially disappeared when the model was calibrated by adjusting subsurface hydraulic parameters.
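A minimal sketch of the two accumulation schemes, assuming a unit cell size and a toy DEM tilted toward one corner (an illustration, not the TOPMODEL code): sfd routes each cell's upslope area to its single steepest downslope neighbour, mfd splits it among all downslope neighbours in proportion to slope, and ln (a/tan β) is then evaluated for every cell that has a downslope neighbour.

```python
import math

def flow_accumulation(dem, multiple=True):
    """Upslope area a (in unit cells) via single (D8) or multiple flow directions."""
    n = len(dem)
    order = sorted(((dem[i][j], i, j) for i in range(n) for j in range(n)),
                   reverse=True)                 # process from high to low elevation
    acc = [[1.0] * n for _ in range(n)]          # each cell contributes its own area
    for z, i, j in order:
        nbrs = []
        for di in (-1, 0, 1):
            for dj in (-1, 0, 1):
                if di == dj == 0:
                    continue
                a, b = i + di, j + dj
                if 0 <= a < n and 0 <= b < n and dem[a][b] < z:
                    dist = math.hypot(di, dj)
                    nbrs.append(((z - dem[a][b]) / dist, a, b))
        if not nbrs:
            continue                             # outlet or pit: area stays here
        if multiple:                             # mfd: split in proportion to slope
            total = sum(s for s, _, _ in nbrs)
            for s, a, b in nbrs:
                acc[a][b] += acc[i][j] * s / total
        else:                                    # sfd: all area to steepest neighbour
            _, a, b = max(nbrs)
            acc[a][b] += acc[i][j]
    return acc

def topographic_index(dem, multiple):
    """ln(a / tan beta) for every cell with a downslope neighbour."""
    n = len(dem)
    acc = flow_accumulation(dem, multiple)
    ti = []
    for i in range(n):
        for j in range(n):
            slopes = [(dem[i][j] - dem[a][b]) / math.hypot(i - a, j - b)
                      for a in range(max(0, i - 1), min(n, i + 2))
                      for b in range(max(0, j - 1), min(n, j + 2))
                      if (a, b) != (i, j) and dem[a][b] < dem[i][j]]
            if slopes:                           # skip the outlet (tan beta undefined)
                ti.append(math.log(acc[i][j] / max(slopes)))
    return ti

dem = [[i + j for j in range(8)] for i in range(8)]   # plane tilted toward (0,0)
for multiple in (False, True):
    ti = topographic_index(dem, multiple)
    print("mfd" if multiple else "sfd", round(sum(ti) / len(ti), 3))
```

All area drains to the single lowest cell, so the accumulated area at the outlet equals the grid size for both schemes; only its distribution over the grid, and hence the ln (a/tan β) statistics, differs.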
Sridhar, Rajeswari; Karthiga, S; T, Geetha; 10.5121/ijaia.2010.1305
2010-01-01
In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs: a modified version of the dual ternary indexing algorithm for music indexing and retrieval, and the multi-key hashing indexing algorithm proposed by us. The modification of the dual ternary algorithm was essential to handle variable-length query phrases and to accommodate features specific to Carnatic music; the algorithm is adapted to Carnatic music by segmenting with the segmentation technique for Carnatic music. The dual ternary algorithm is compared with the multi-key hashing algorithm designed by us for indexing and retrieval, in which features such as MFCC, spectral flux, melody string and spectral centroid are used to index data into a hash table. The way in which this hash table handles collision resolution differs from the usual hash table approaches. It was observed that multi-key hashing based retrieval had a lesser ...
DEFF Research Database (Denmark)
Nica, Florin Valentin Traian; Ritchie, Ewen; Leban, Krisztina Monika
2013-01-01
Nowadays the requirements imposed by industry and the economy ask for better quality and performance while the price must be maintained in the same range. To achieve this goal, optimization must be introduced in the design process. Two of the best known optimization algorithms for machine design, the genetic algorithm and particle swarm optimization, are shortly presented in this paper. These two algorithms are tested to determine their performance on five different benchmark test functions, based on three requirements: precision of the result, number of iterations and calculation time. Both algorithms are also tested on an analytical design process of a Transverse Flux Permanent Magnet Generator to observe their performance in an electrical machine design application.
Banks, H Thomas; Hu, Shuhua; Joyner, Michele; Broido, Anna; Canter, Brandi; Gayvert, Kaitlyn; Link, Kathryn
2012-07-01
In this paper, we investigate three particular algorithms: a stochastic simulation algorithm (SSA), and explicit and implicit tau-leaping algorithms. To compare these methods, we used them to analyze two infection models: a Vancomycin-resistant enterococcus (VRE) infection model at the population level, and a Human Immunodeficiency Virus (HIV) within-host infection model. While the first has a low species count and few transitions, the second is more complex, with a comparable number of species involved. The relative efficiency of each algorithm is determined based on computational time and the degree of precision required. The numerical results suggest that all three algorithms have similar computational efficiency for the simpler VRE model, and the SSA is the best choice due to its simplicity and accuracy. In addition, we have found that with the larger and more complex HIV model, implementation and modification of tau-leaping methods are preferred.
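Gillespie's direct-method SSA, the simplest of the three, can be sketched on a toy S-I-S infection model (the model and rate constants here are illustrative, not the paper's VRE or HIV models): draw an exponential waiting time from the total propensity, then pick a reaction in proportion to its propensity.

```python
import math
import random

def gillespie_sis(s, i, beta=0.3, gamma=0.1, t_end=50.0, seed=0):
    """Direct-method SSA for a toy S-I-S infection model with fixed population."""
    rng = random.Random(seed)
    n = s + i
    t, times, infected = 0.0, [0.0], [i]
    while t < t_end and i > 0:
        a1 = beta * s * i / n              # propensity: infection  S -> I
        a2 = gamma * i                     # propensity: recovery   I -> S
        a0 = a1 + a2
        # exponential waiting time to the next reaction (1-u keeps log's arg > 0)
        t += -math.log(1.0 - rng.random()) / a0
        if rng.random() * a0 < a1:         # choose reaction proportional to propensity
            s, i = s - 1, i + 1
        else:
            s, i = s + 1, i - 1
        times.append(t)
        infected.append(i)
    return times, infected

times, infected = gillespie_sis(s=99, i=1)
print(len(times), infected[-1])
```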
Dai, Chenyun; Li, Yejin; Christie, Anita; Bonato, Paolo; McGill, Kevin C; Clancy, Edward A
2015-01-01
The reliability of clinical and scientific information provided by algorithms that automatically decompose the electromyogram (EMG) depends on the algorithms' accuracies. We used experimental and simulated data to assess the agreement and accuracy of three publicly available decomposition algorithms: EMGlab (McGill, 2005) (single channel data only), Fuzzy Expert (Erim and Lim, 2008) and Montreal (Florestal, 2009). Data consisted of quadrifilar needle EMGs from the tibialis anterior of 12 subjects at 10%, 20% and 50% maximum voluntary contraction (MVC); single channel needle EMGs from the biceps brachii of 10 controls and 10 patients during contractions just above threshold; and matched simulated data. Performance was assessed via agreement between pairs of algorithms for experimental data and accuracy with respect to the known decomposition for simulated data. For the quadrifilar experimental data, median agreements between the Montreal and Fuzzy Expert algorithms at 10%, 20%, and 50% MVC were 95%, 86%, and 64%, respectively. For the single channel control and patient data, median agreements between the three algorithm pairs were statistically similar at ∼97% and ∼92%, respectively. Accuracy on the simulated data exceeded this performance. Agreement/accuracy was strongly related to the Decomposability Index (Florestal, 2009). When agreement was high between algorithm pairs applied to simulated data, so was accuracy.
Comparison of two approximal proximal point algorithms for monotone variational inequalities
Institute of Scientific and Technical Information of China (English)
TAO Min
2007-01-01
Proximal point algorithms (PPA) are attractive methods for solving monotone variational inequalities (MVI). Since solving the sub-problem exactly in each iteration is costly or sometimes impossible, various approximate versions of PPA (APPA) have been developed for practical applications. In this paper, we compare two APPA methods, both of which can be viewed as prediction-correction methods; the only difference is that they use different search directions in the correction step. By extending the general forward-backward splitting methods, we obtain Algorithm I; in the same way, Algorithm II is proposed by extending the general extra-gradient methods. Our analysis explains theoretically why Algorithm II usually outperforms Algorithm I. For computational practice, we consider a class of MVI with a special structure and choose the extended Algorithm II to implement, inspired by the idea of the Gauss-Seidel iteration method of making full use of information about the latest iterate. In particular, self-adaptive techniques are adopted to adjust relevant parameters for faster convergence. Finally, numerical experiments are reported on the separated MVI. The numerical results show that the extended Algorithm II is feasible and easy to implement with relatively low computational load.
Directory of Open Access Journals (Sweden)
Rajeswari Sridhar
2010-07-01
Full Text Available In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs: a modified version of the dual ternary indexing algorithm for music indexing and retrieval, and the multi-key hashing indexing algorithm proposed by us. The modification of the dual ternary algorithm was essential to handle variable-length query phrases and to accommodate features specific to Carnatic music; the algorithm is adapted to Carnatic music by segmenting with the segmentation technique for Carnatic music. The dual ternary algorithm is compared with the multi-key hashing algorithm designed by us for indexing and retrieval, in which features such as MFCC, spectral flux, melody string and spectral centroid are used to index data into a hash table. The way in which this hash table handles collision resolution differs from the usual hash table approaches. It was observed that multi-key hashing based retrieval had a lower time complexity than dual-ternary based indexing. The algorithms were also compared for their precision and recall, in which multi-key hashing had a better recall than modified dual ternary indexing for the sample data considered.
Comparison of fractal dimension estimation algorithms for epileptic seizure onset detection
Polychronaki, G. E.; Ktonas, P. Y.; Gatzonis, S.; Siatouni, A.; Asvestas, P. A.; Tsekou, H.; Sakas, D.; Nikita, K. S.
2010-08-01
Fractal dimension (FD) is a natural measure of the irregularity of a curve. In this study the performances of three waveform FD estimation algorithms (i.e. Katz's, Higuchi's and the k-nearest neighbour (k-NN) algorithm) were compared in terms of their ability to detect the onset of epileptic seizures in scalp electroencephalogram (EEG). The selection of parameters involved in FD estimation, evaluation of the accuracy of the different algorithms and assessment of their robustness in the presence of noise were performed based on synthetic signals of known FD. When applied to scalp EEG data, Katz's and Higuchi's algorithms were found to be incapable of producing consistent changes of a single type (either a drop or an increase) during seizures. On the other hand, the k-NN algorithm produced a drop, starting close to the seizure onset, in most seizures of all patients. The k-NN algorithm outperformed both Katz's and Higuchi's algorithms in terms of robustness in the presence of noise and seizure onset detection ability. The seizure detection methodology, based on the k-NN algorithm, yielded in the training data set a sensitivity of 100% with 10.10 s mean detection delay and a false positive rate of 0.27 h⁻¹, while the corresponding values in the testing data set were 100%, 8.82 s and 0.42 h⁻¹, respectively. The above detection results compare favourably to those of other seizure onset detection methodologies applied to scalp EEG in the literature. The methodology described, based on the k-NN algorithm, appears to be promising for the detection of the onset of epileptic seizures based on scalp EEG.
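Higuchi's algorithm, one of the three estimators compared, can be sketched in a few lines: build k subsampled curves, average their normalised lengths L(k), and take the slope of log L(k) against log(1/k). The kmax value and the ramp test signal below are illustrative choices, not the study's parameter selection.

```python
import math

def higuchi_fd(x, kmax=8):
    """Higuchi's fractal dimension estimate of a 1-D time series."""
    n = len(x)
    logs = []
    for k in range(1, kmax + 1):
        lengths = []
        for m in range(k):                        # k subsampled curves
            pts = [x[i] for i in range(m, n, k)]
            if len(pts) < 2:
                continue
            dist = sum(abs(pts[i] - pts[i - 1]) for i in range(1, len(pts)))
            norm = (n - 1) / ((len(pts) - 1) * k)  # Higuchi length normalisation
            lengths.append(dist * norm / k)
        logs.append((math.log(1.0 / k), math.log(sum(lengths) / len(lengths))))
    # least-squares slope of log L(k) against log(1/k) is the FD estimate
    mx = sum(a for a, _ in logs) / len(logs)
    my = sum(b for _, b in logs) / len(logs)
    num = sum((a - mx) * (b - my) for a, b in logs)
    den = sum((a - mx) ** 2 for a, _ in logs)
    return num / den

ramp = [0.01 * i for i in range(1000)]            # smooth curve: FD should be ~1
print(round(higuchi_fd(ramp), 3))
```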
Comparison of SAR calculation algorithms for the finite-difference time-domain method.
Laakso, Ilkka; Uusitupa, Tero; Ilvonen, Sami
2010-08-07
Finite-difference time-domain (FDTD) simulations of specific absorption rate (SAR) have several uncertainty factors. For example, significantly varying SAR values may result from the use of different algorithms for determining the SAR from the FDTD electric field. The objective of this paper is to rigorously study the divergence of SAR values due to different SAR calculation algorithms and to examine whether some SAR calculation algorithm should be preferred over others. For this purpose, numerical FDTD results are compared to analytical solutions in a one-dimensional layered model and a three-dimensional spherical object. Additionally, the implications of SAR calculation algorithms for dosimetry of anatomically realistic whole-body models are studied. The results show that the trapezium algorithm, based on the trapezium integration rule, is always conservative compared to the analytic solution, making it a good choice for worst-case exposure assessment. In contrast, the mid-ordinate algorithm, named after the mid-ordinate integration rule, usually underestimates the analytic SAR. The linear algorithm, which is approximately a weighted average of the two, seems to be the most accurate choice overall, typically giving the best fit with the shape of the analytic SAR distribution. For anatomically realistic models, the whole-body SAR difference between different algorithms is relatively independent of the body model used, the incident direction and the polarization of the plane wave. The main factors affecting the difference are cell size and frequency. The choice of the SAR calculation algorithm is an important simulation parameter in high-frequency FDTD SAR calculations, and it should be stated explicitly to allow intercomparison of results between different studies.
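The qualitative difference between the integration rules can be reproduced in one dimension. The sketch below uses assumed values (attenuation, cell size) with |E(z)|² decaying exponentially across a single cell: because the integrand is convex, the trapezium rule overestimates the exact cell average and the mid-ordinate rule underestimates it, while a Simpson-style weighted average of the two (standing in for the paper's linear algorithm, which is only approximately such an average) lands much closer.

```python
import math

# One cell of size d; |E(z)|^2 decays exponentially with depth (a convex
# integrand), as in a lossy layer. Attenuation and cell size are assumed values.
a, d = 50.0, 0.004                     # attenuation (1/m) and cell size (m)
f = lambda z: math.exp(-2 * a * z)     # |E(z)|^2 up to constant factors

exact = (1 - math.exp(-2 * a * d)) / (2 * a * d)   # analytic cell average
trap  = (f(0) + f(d)) / 2                          # trapezium rule
mid   = f(d / 2)                                   # mid-ordinate rule
simpson = (trap + 2 * mid) / 3                     # weighted average of the two

print(round(exact, 6), round(trap, 6), round(mid, 6), round(simpson, 6))
```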
Iwan Solihin, Mahmud; Fauzi Zanil, Mohd
2016-11-01
Cuckoo Search (CS) and Differential Evolution (DE) are considerably robust meta-heuristic algorithms for solving constrained optimization problems. In this study, the performance of CS and DE is compared in solving constrained optimization problems from selected benchmark functions. Selection of the benchmark functions is based on active or inactive constraints and on the dimensionality of the variables (i.e. the number of solution variables). In addition, specific constraint-handling and stopping-criterion techniques are adopted in the optimization algorithms. The results show that the CS approach outperforms DE in terms of repeatability and the quality of the optimum solutions.
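A DE/rand/1/bin loop with a static quadratic penalty is one common way to handle such constraints; the sketch below uses that scheme on an invented toy benchmark (the sphere function with one linear constraint), not the paper's benchmark set or its specific constraint-handling and stopping-criterion techniques.

```python
import random

def de_constrained(f, g, bounds, np_=30, cr=0.9, fmut=0.7, gens=300, seed=3):
    """DE/rand/1/bin with a static quadratic penalty for constraints g(x) <= 0."""
    rng = random.Random(seed)
    dim = len(bounds)

    def penalised(x):
        return f(x) + 1e6 * sum(max(0.0, gi) ** 2 for gi in g(x))

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    for _ in range(gens):
        for i in range(np_):
            r1, r2, r3 = rng.sample([k for k in range(np_) if k != i], 3)
            jr = rng.randrange(dim)             # forced crossover position
            trial = list(pop[i])
            for j in range(dim):
                if j == jr or rng.random() < cr:
                    v = pop[r1][j] + fmut * (pop[r2][j] - pop[r3][j])
                    trial[j] = min(max(v, bounds[j][0]), bounds[j][1])
            if penalised(trial) <= penalised(pop[i]):   # greedy selection
                pop[i] = trial
    return min(pop, key=penalised)

# Toy benchmark (assumed): minimise the sphere subject to x0 + x1 >= 1,
# whose optimum is (0.5, 0.5) with objective value 0.5.
sphere = lambda x: sum(v * v for v in x)
constraint = lambda x: [1.0 - (x[0] + x[1])]    # rewritten in g(x) <= 0 form
best = de_constrained(sphere, constraint, [(-5, 5), (-5, 5)])
print([round(v, 3) for v in best], round(sphere(best), 3))
```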
Performance Comparison of Reconstruction Algorithms in Discrete Blind Multi-Coset Sampling
DEFF Research Database (Denmark)
Grigoryan, Ruben; Arildsen, Thomas; Tandur, Deepaknath
2012-01-01
This paper investigates the performance of different reconstruction algorithms in discrete blind multi-coset sampling. The multi-coset scheme is a promising compressed sensing architecture that can replace traditional Nyquist-rate sampling in applications with multi-band frequency-sparse signals. The performance of the existing compressed sensing reconstruction algorithms has not yet been investigated for discrete multi-coset sampling. We compare the following algorithms: orthogonal matching pursuit, multiple signal classification, subspace-augmented multiple signal classification, focal under...
Performance comparison of some evolutionary algorithms on job shop scheduling problems
Mishra, S. K.; Rao, C. S. P.
2016-09-01
Job shop scheduling is a state-space search problem in the NP-hard category, owing to its complexity and the combinatorial explosion of states. Several naturally inspired evolutionary methods have been developed to solve job shop scheduling problems. In this paper the evolutionary methods Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization and Music Based Harmony Search algorithms are applied and fine-tuned to model and solve job shop scheduling problems. About 250 benchmark instances have been used to evaluate the performance of these algorithms, and the capabilities of each algorithm in solving job shop scheduling problems are outlined.
Comparison of Algorithms for Prediction of Protein Structural Features from Evolutionary Data.
Bywater, Robert P
2016-01-01
Proteins have many functions and predicting these is still one of the major challenges in theoretical biophysics and bioinformatics. Foremost amongst these functions is the need to fold correctly thereby allowing the other genetically dictated tasks that the protein has to carry out to proceed efficiently. In this work, some earlier algorithms for predicting protein domain folds are revisited and they are compared with more recently developed methods. In dealing with intractable problems such as fold prediction, when different algorithms show convergence onto the same result there is every reason to take all algorithms into account such that a consensus result can be arrived at. In this work it is shown that the application of different algorithms in protein structure prediction leads to results that do not converge as such but rather they collude in a striking and useful way that has never been considered before.
Comparison between summing-up algorithms to determine areas of small peaks on high baselines
Shi, Quanlin; Zhang, Jiamei; Chang, Yongfu; Qian, Shaojun
2005-12-01
It is found that the minimum detectable activity (MDA) has the same tendency as the relative standard deviation (RSD), and that a particular application is characterized by the ratio of the peak area to the baseline height. Different applications need different algorithms to reduce the RSD of peak areas or the MDA of potential peaks. A model of Gaussian peaks superposed on linear baselines is established to simulate the multichannel spectrum, and summing-up algorithms such as total peak area (TPA), Covell and Sterlinski are compared to find the most appropriate algorithm for different applications. The results show that optimal Covell and Sterlinski algorithms yield an MDA or RSD roughly half that of TPA when the areas of small peaks on high baselines are to be determined. The conclusion is confirmed by experiment.
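The two estimator families can be sketched on a noise-free synthetic spectrum (a Gaussian peak on a linear baseline; all amplitudes and window widths below are invented): TPA subtracts a trapezoidal baseline across the whole region of interest, while the Covell estimator sums only the central 2m+1 channels and subtracts a straight-line chord, deliberately trading away part of the peak area for a smaller baseline contribution (and hence, with noise, a smaller variance).

```python
import math

# Synthetic spectrum (no noise): Gaussian peak on a high linear baseline.
amp, mu, sigma = 100.0, 50, 3.0
counts = [200.0 + 0.5 * ch + amp * math.exp(-((ch - mu) ** 2) / (2 * sigma ** 2))
          for ch in range(100)]
true_area = amp * sigma * math.sqrt(2 * math.pi)   # analytic peak area

def tpa(c, lo, hi):
    """Total peak area: sum over the ROI minus a trapezoidal baseline."""
    return sum(c[lo:hi + 1]) - (hi - lo + 1) * (c[lo] + c[hi]) / 2

def covell(c, centre, m):
    """Covell estimator: central 2m+1 channels minus a straight-line chord."""
    window = c[centre - m:centre + m + 1]
    return sum(window) - (2 * m + 1) * (window[0] + window[-1]) / 2

print(round(true_area, 1), round(tpa(counts, 35, 65), 1),
      round(covell(counts, 50, 6), 1))
```

On noise-free data TPA recovers the full area while Covell returns a smaller, clipped area; the abstract's point is that on noisy, high-baseline spectra the clipped estimator can have the lower RSD and MDA.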
Institute of Scientific and Technical Information of China (English)
Liu Jie; Shi Shu-Ting; Zhao Jun-Chan
2013-01-01
The three most widely used methods for reconstructing the underlying time series via the recurrence plots (RPs) of a dynamical system are compared with each other in this paper. We aim to reconstruct a toy series, a periodical series, a random series, and a chaotic series to compare the effectiveness of the most widely used typical methods in terms of signal correlation analysis. The application of the most effective algorithm to the typical chaotic Lorenz system verifies the correctness of such an effective algorithm. It is verified that, based on the unthresholded RPs, one can reconstruct the original attractor by choosing different RP thresholds based on the Hirata algorithm. It is shown that, in real applications, it is possible to reconstruct the underlying dynamics by using quite little information from observations of real dynamical systems. Moreover, rules for choosing the threshold in the algorithm are also suggested.
Algorithm comparison and benchmarking using a parallel spectra transform shallow water model
Energy Technology Data Exchange (ETDEWEB)
Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)
1995-04-01
In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPS, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.
Directory of Open Access Journals (Sweden)
D.V.MANJUNATHA
2011-10-01
Full Text Available In digital video communication it is not practical to store the full digital video without processing, because of the problems encountered in storage and transmission, so the processing technique called video compression is essential. In video compression, one of the computationally expensive and resource-hungry key elements is motion estimation, the process which determines the motion between two or more frames of video. In this paper, four block matching motion estimation algorithms, namely Exhaustive Search (ES), Three Step Search (TSS), New Three Step Search (NTSS), and Diamond Search (DS), are compared and implemented for different distances between the frames of the video by exploiting the temporal correlation between successive frames of the mristack and foreman slow-motion videos. Extensive simulation results and comparative analysis show that the Diamond Search (DS) algorithm is the best matching motion estimation algorithm, achieving the best tradeoff between search speed (number of computations) and reconstructed picture quality.
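The cost difference between exhaustive and reduced search patterns can be sketched with synthetic frames (a smooth Gaussian blob, invented sizes, and a ±7-pixel search range; the convention assumed here is that the motion vector points from the current block to its match in the reference frame):

```python
import math

def make_frame(w, h, cx, cy):
    """Smooth synthetic frame: a Gaussian blob centred at (cx, cy)."""
    return [[math.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 50.0)
             for x in range(w)] for y in range(h)]

def sad(ref, cur, bx, by, dx, dy, bs):
    """Sum of absolute differences for displacement (dx, dy)."""
    return sum(abs(cur[by + y][bx + x] - ref[by + dy + y][bx + dx + x])
               for y in range(bs) for x in range(bs))

def exhaustive_search(ref, cur, bx, by, bs, p=7):
    cands = [(dx, dy) for dy in range(-p, p + 1) for dx in range(-p, p + 1)]
    best = min(cands, key=lambda c: sad(ref, cur, bx, by, c[0], c[1], bs))
    return best, len(cands)                    # (2p+1)^2 SAD evaluations

def three_step_search(ref, cur, bx, by, bs):
    best, checked = (0, 0), 0
    for step in (4, 2, 1):                     # shrinking 9-point pattern
        cands = [(best[0] + sx, best[1] + sy)
                 for sy in (-step, 0, step) for sx in (-step, 0, step)]
        checked += len(cands)
        best = min(cands, key=lambda c: sad(ref, cur, bx, by, c[0], c[1], bs))
    return best, checked                       # only 27 SAD evaluations

ref = make_frame(48, 48, 20, 24)               # blob in the reference frame
cur = make_frame(48, 48, 23, 22)               # same blob, moved by (+3, -2)
fs, n_fs = exhaustive_search(ref, cur, 12, 16, 16)
ts, n_ts = three_step_search(ref, cur, 12, 16, 16)
print(fs, n_fs, ts, n_ts)
```

Because the blob moved by (+3, -2), both searches should return the vector (-3, +2), but the three step search needs far fewer SAD evaluations than the exhaustive search.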
MOHAMMED, M. A. SI; BOUSSADIA, H.; BELLAR, A.; ADNANE, A.
2017-01-01
This paper presents a brief synthesis and a useful performance analysis of different attitude filtering algorithms (attitude determination algorithms, attitude estimation algorithms, and nonlinear observers) applied to a Low Earth Orbit satellite, in terms of accuracy, convergence time, amount of memory, and computation time. The latter is calculated in two ways, using a personal computer and also using the On-Board Computer 750 (OBC 750) that is used in many SSTL Earth observation missions. This comparative study can serve as a design aid when choosing among attitude determination, attitude estimation, and attitude observer algorithms. The simulation results clearly indicate that the nonlinear observer is the more logical choice.
Directory of Open Access Journals (Sweden)
Natarajan Meghanathan
2013-05-01
Full Text Available The high-level contribution of this paper is an exhaustive simulation-based comparison study of three categories (density, node id and stability-based of algorithms to determine connected dominating sets (CDS for mobile ad hoc networks and evaluate their performance under two categories (random node mobility and grid-based vehicular ad hoc network of mobility models. The CDS algorithms studied are the maximum density-based (MaxD-CDS, node ID-based (ID-CDS and the minimum velocity-based (MinV-CDS algorithms representing the density, node id and stability categories respectively. The node mobility models used are the Random Waypoint model (representing random node mobility and the City Section and Manhattan mobility models (representing the grid-based vehicular ad hoc networks. The three CDS algorithms under the three mobility models are evaluated with respect to two critical performance metrics: the effective CDS lifetime (calculated taking into consideration the CDS connectivity and absolute CDS lifetime and the CDS node size. Simulations are conducted under a diverse set of conditions representing low, moderate and high network density, coupled with low, moderate and high node mobility scenarios. For each CDS, the paper identifies the mobility model that can be employed to simultaneously maximize the lifetime and minimize the node size with minimal tradeoff. For the two VANET mobility models, the impact of the grid block length on the CDS lifetime and node size is also evaluated.
Cost-conscious comparison of supervised learning algorithms over multiple data sets
Ulaş, Aydın; Yıldız, Olcay Taner; Alpaydın, Ahmet İbrahim Ethem
2012-01-01
In the literature, there exist statistical tests to compare supervised learning algorithms on multiple data sets in terms of accuracy, but they do not always generate an ordering. We propose Multi2Test, a generalization of our previous work, for ordering multiple learning algorithms on multiple data sets from "best" to "worst", where our goodness measure is composed of a prior cost term in addition to generalization error. Our simulations show that Multi2Test generates orderings using pairwise...
Routing Flow-Shop with Buffers and Ready Times – Comparison of Selected Solution Algorithms
Directory of Open Access Journals (Sweden)
Józefczyk Jerzy
2014-12-01
Full Text Available This article extends the former results concerning the routing flow-shop problem to minimize the makespan to the case with buffers, non-zero ready times and different machine speeds. The corresponding combinatorial optimization problem is formulated. An exact algorithm as well as four heuristic solution algorithms are presented; the branch and bound approach is applied for the former. The heuristic algorithms employ a known constructive idea proposed for the earlier version of the problem as well as the Tabu Search metaheuristic. Moreover, an improvement procedure is proposed to enhance the quality of both heuristic algorithms. The conducted simulation experiments allow all algorithms to be evaluated. First, the heuristic algorithms are compared with the exact one for small instances of the problem in terms of the criterion value and execution times. Then, for larger instances, the heuristic algorithms are compared with one another. A case study regarding the maintenance of software products, given in the final part of the paper, illustrates the applicability of the results to real-world manufacturing systems.
Khare, Kshitij; 10.1214/11-AOS916
2012-01-01
The data augmentation (DA) algorithm is a widely used Markov chain Monte Carlo algorithm that is easy to implement but often suffers from slow convergence. The sandwich algorithm is an alternative that can converge much faster while requiring roughly the same computational effort per iteration. Theoretically, the sandwich algorithm always converges at least as fast as the corresponding DA algorithm in the sense that $\\Vert {K^*}\\Vert \\le \\Vert {K}\\Vert$, where $K$ and $K^*$ are the Markov operators associated with the DA and sandwich algorithms, respectively, and $\\Vert\\cdot\\Vert$ denotes operator norm. In this paper, a substantial refinement of this operator norm inequality is developed. In particular, under regularity conditions implying that $K$ is a trace-class operator, it is shown that $K^*$ is also a positive, trace-class operator, and that the spectrum of $K^*$ dominates that of $K$ in the sense that the ordered elements of the former are all less than or equal to the corresponding elements of the lat...
Comparison of ultrasonic array imaging algorithms for non-destructive evaluation
Zhang, J.; Drinkwater, B. W.; Wilcox, P. D.
2013-01-01
Ultrasonic array imaging algorithms have been widely used and developed in nondestructive evaluation in the last 10 years. In this paper, three imaging algorithms (Total Focusing Method (TFM), Phase Coherent Imaging (PCI), and Spatial Compounding Imaging (SCI)) are compared through both simulation and experimental measurements. In the simulation, array data sets were generated using a hybrid forward model containing a single defect amongst a multitude of randomly distributed point scatterers to represent backscatter from material microstructure. The Signal to Noise Ratio (SNR) of the final images and their resolution were used to indicate the quality of the different imaging algorithms. The images of different types of defect (point reflectors and planar cracks) were used to investigate the robustness of the imaging algorithms. It is shown that PCI can yield higher image resolution than the TFM, but that the images of cracks are distorted. Overall, the TFM is the most robust algorithm across a range of different types of defects. It is also shown that the detection limit of all three imaging algorithms is almost equal for weakly scattering defects.
Comparison of load-based and queue-based active queue management algorithms
Kwon, Minseok; Fahmy, Sonia
2002-07-01
A number of active queue management algorithms have been studied since Random Early Detection (RED) was first introduced in 1993. While analytical and experimental studies have debated whether dropping/marking should be based on average or instantaneous queue length or, alternatively, based on input and output rates (or queue length slope), the merits and drawbacks of the proposed algorithms, and the effect of load-based versus queue-based control have not been adequately examined. In particular, only RED has been tested in realistic configurations and in terms of user metrics, such as response times and average delays. In this paper, we examine active queue management (AQM) that uses both load and queuing delay to determine its packet drop/mark probabilities. This class of algorithms, which we call load/delay controllers (LDC), has the advantage of controlling the queuing delay as well as accurately anticipating incipient congestion. We compare LDC to a number of well-known active queue management algorithms including RED, BLUE, FRED, SRED, and REM in configurations with multiple bottlenecks, round trip times and bursty Web traffic. We evaluate each algorithm in terms of Web response time, delay, packet loss, and throughput, in addition to examining algorithm complexity and ease of configuration. Our results demonstrate that load information, along with queue length, can aid in making more accurate packet drop/mark decisions that reduce the Web response time.
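RED's core decision rule, common to this family of AQM algorithms, can be sketched in a few lines (the thresholds and averaging weight below are illustrative, not values from the paper): an exponentially weighted moving average of the queue length is mapped linearly onto a drop/mark probability between two thresholds.

```python
def red_mark_probability(avg, min_th=5.0, max_th=15.0, max_p=0.1):
    """RED drop/mark probability as a function of the average queue length."""
    if avg < min_th:
        return 0.0                   # no congestion signal
    if avg >= max_th:
        return 1.0                   # force drop/mark
    return max_p * (avg - min_th) / (max_th - min_th)

def update_avg(avg, q, w=0.002):
    """Exponentially weighted moving average of the instantaneous queue length."""
    return (1 - w) * avg + w * q

avg = 0.0
for q in [0, 2, 8, 12, 20, 25, 30]:  # sample instantaneous queue lengths
    avg = update_avg(avg, q, w=0.5)  # large w here just to make the average move
    print(round(avg, 2), round(red_mark_probability(avg), 4))
```

Load/delay controllers (LDC) in the abstract replace the queue-length argument with a combination of input/output rate and queuing delay, but the thresholded mapping to a drop/mark probability is structurally the same.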
Comparison of Web Proxy Caching Algorithm Performance
Institute of Scientific and Technical Information of China (English)
Wen Zhihao
2012-01-01
The Web proxy is an important intermediate network device in the modern Internet. The quality of the proxy caching algorithm affects not only the client's browsing speed but also the performance of the target server and the overall performance of the intermediate communication network. This paper compares and studies several currently popular Web proxy caching algorithms.
Institute of Scientific and Technical Information of China (English)
Song Yuan; Yao Xianghua; Zhang Xinman
2012-01-01
In opportunistic routing for wireless Ad Hoc networks, the forwarding candidate set of a node is usually selected based on the shortest-path expected transmission count (ETX), without fully considering the broadcast nature of data forwarding by wireless network nodes. In this paper, we take the multipath expected transmission count as the routing metric and propose an optimal forwarding candidate set algorithm, MCET (multipath-considered expected transmission). For every node in the wireless network other than the destination, it selects a forwarding candidate set that takes the multipath forwarding expected value into account, and then assigns priorities in the order in which the nodes were selected. Simulation results indicate that, compared with traditional opportunistic routing based on shortest-path ETX, opportunistic routing using the optimal forwarding candidate set algorithm noticeably reduces the average number of data transmissions and increases the successful delivery rate of data packets.
Comparison of algorithms for automatic border detection of melanoma in dermoscopy images
Srinivasa Raghavan, Sowmya; Kaur, Ravneet; LeAnder, Robert
2016-09-01
Melanoma is one of the most rapidly accelerating cancers in the world [1]. Early diagnosis is critical to an effective cure. We propose a new algorithm for more accurately detecting melanoma borders in dermoscopy images. Proper border detection requires eliminating occlusions like hair and bubbles by processing the original image. The preprocessing step involves transforming the RGB image to the CIE L*u*v* color space, in order to decouple brightness from color information, then increasing contrast using contrast-limited adaptive histogram equalization (CLAHE), followed by artifact removal using a Gaussian filter. After preprocessing, the Chan-Vese technique segments the preprocessed images to create a lesion mask, which undergoes a morphological closing operation. Next, the largest central blob in the lesion is detected, after which the blob is dilated to generate an image output mask. Finally, the automatically generated mask is compared to the manual mask by calculating the XOR error [3]. Our border detection algorithm was developed using training and test sets of 30 and 20 images, respectively. This detection method was compared to the SRM method [4] by calculating the average XOR error for each of the two algorithms. The average error for the test images was 0.10 using the new algorithm and 0.99 using the SRM method. The average XOR error for our technique is thus lower than for the SRM method, implying that the new algorithm detects melanoma borders more accurately than the SRM algorithm.
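The XOR error can be sketched as the area of disagreement between the automatic and manual binary masks, normalised by the area of the manual mask; the exact normalisation in [3] may differ slightly, so treat this as illustrative:

```python
# Sketch of the XOR border-error metric: pixels where the automatic and manual
# masks disagree, normalised by the manual mask's area. Masks are binary
# 2-D lists (1 = lesion). The precise definition in [3] may vary.

def xor_error(auto_mask, manual_mask):
    disagree = sum(a != m for row_a, row_m in zip(auto_mask, manual_mask)
                   for a, m in zip(row_a, row_m))
    manual_area = sum(sum(row) for row in manual_mask)
    return disagree / manual_area
```

A perfect match gives 0; an error of 1.0 means the disagreement area equals the manual lesion area.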
Directory of Open Access Journals (Sweden)
C. Keim
2009-05-01
Full Text Available This paper presents a first statistical validation of tropospheric ozone products derived from measurements of the satellite instrument IASI. Since the end of 2006, IASI (Infrared Atmospheric Sounding Interferometer), aboard the polar orbiter Metop-A, has measured infrared spectra of the Earth's atmosphere in nadir geometry. This validation covers the northern mid-latitudes and the period from July 2007 to August 2008. The comparison of the ozone products with the vertical ozone concentration profiles from balloon sondes leads to estimates of the systematic and random errors in the IASI ozone products. The intercomparison of the retrieval results from four different sources (including the EUMETSAT ozone products) shows systematic differences due to the methods and algorithms used. On average, the tropospheric columns have a small bias of less than 2 Dobson Units (DU) when compared to the sonde-measured columns. The comparison of the still pre-operational EUMETSAT columns shows higher mean differences of about 5 DU.
Comparison of Nonequilibrium Solution Algorithms Applied to Chemically Stiff Hypersonic Flows
Palmer, Grant; Venkatapathy, Ethiraj
1995-01-01
Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel, are used to compute nonequilibrium flow around the Apollo 4 return capsule at the 62-km altitude point in its descent trajectory. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15 and 30, the lower-upper symmetric Gauss-Seidel method produces an eight-order-of-magnitude drop in the energy residual in one-third to one-half the Cray C-90 computer time compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 30 and above. At Mach 40, the performance of the lower-upper symmetric Gauss-Seidel algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated, and grid dependency questions are explored.
Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.
2016-06-01
High resolution in satellite imagery brings a fundamental problem: the large amount of telemetry data that must be stored after the downlink operation. Moreover, after the post-processing and image enhancement steps that follow image acquisition, file sizes increase even further, making the data harder to store and more time-consuming to transmit from one source to another; hence, compressing both the raw data and the various levels of processed data is a necessity for archiving stations to save space. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. To this end, well-known open-source programs supporting the relevant compression algorithms were applied to processed GeoTIFF images of Airbus Defence & Space's SPOT 6 & 7 satellites (1.5 m GSD), acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms tested were Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain algorithm (LZMA & LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate & Deflate64, Prediction by Partial Matching (PPMd or PPM2) and the Burrows-Wheeler Transform (BWT), in order to observe their compression performance over sample datasets in terms of how much of the image data can be compressed while ensuring lossless compression.
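A comparison of this kind can be sketched with Python's standard library, which covers three of the listed families: `zlib` implements Deflate, `lzma` the LZMA algorithm, and `bz2` a BWT-based scheme (LZW, LZO, PPMd and Deflate64 have no stdlib bindings, so they are omitted here):

```python
import bz2
import lzma
import zlib

# Sketch of a lossless-compression comparison: ratio of original size to
# compressed size for each stdlib codec, on the same byte string.
def compression_ratios(data):
    codecs = {
        "deflate": zlib.compress,    # Deflate
        "lzma": lzma.compress,       # LZMA
        "bwt (bz2)": bz2.compress,   # Burrows-Wheeler based
    }
    return {name: len(data) / len(fn(data)) for name, fn in codecs.items()}
```

Highly repetitive data (like large uniform image regions) yields high ratios under all three codecs; near-random data compresses hardly at all.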
Comparison of algorithms for finding the air-ground interface in ground penetrating radar signals
Wood, Joshua; Bolton, Jeremy; Casella, George; Collins, Leslie; Gader, Paul; Glenn, Taylor; Ho, Jeffery; Lee, Wen; Mueller, Richard; Smock, Brandon; Torrione, Peter; Watford, Ken; Wilson, Joseph
2011-06-01
In using GPR images for landmine detection it is often useful to identify the air-ground interface in the GPR signal for alignment purposes. A number of algorithms have been proposed to solve the air-ground interface detection problem, including some which use only A-scan data, and others which track the ground in B-scans or C-scans. Here we develop a framework for comparing these algorithms relative to one another and we examine the results. The evaluations are performed on data that have been categorized in terms of features that make the air-ground interface difficult to find or track. The data also have associated human selected ground locations, from multiple evaluators, that can be used for determining correctness. A distribution is placed over each of the human selected ground locations, with the sum of these distributions at the algorithm selected location used as a measure of its correctness. Algorithms are also evaluated in terms of how they affect the false alarm and true positive rates of mine detection algorithms that use ground aligned data.
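The correctness measure described above can be sketched by placing a distribution over each human-selected ground location and summing their values at the algorithm-selected location; the Gaussian form and the 1-D depth coordinate used here are assumptions, since the abstract does not specify the distribution:

```python
import math

# Sketch of the correctness score: sum, over human-selected ground locations,
# of an assumed 1-D Gaussian evaluated at the algorithm-selected location.
def correctness_score(algo_loc, human_locs, sigma=2.0):
    return sum(math.exp(-((algo_loc - h) ** 2) / (2 * sigma ** 2))
               for h in human_locs)
```

A pick at the humans' consensus scores near the number of evaluators; a far-off pick scores near zero.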
Wu, Vincent W C; Tse, Teddy K H; Ho, Cola L M; Yeung, Eric C Y
2013-01-01
Monte Carlo (MC) simulation is currently the most accurate dose calculation algorithm in radiotherapy planning but requires relatively long processing time. Faster model-based algorithms such as the anisotropic analytical algorithm (AAA) in the Eclipse treatment planning system and multigrid superposition (MGS) in the XiO treatment planning system are two commonly used algorithms. This study compared AAA and MGS against MC, as the gold standard, on brain, nasopharynx, lung, and prostate cancer patients. Computed tomography of six patients of each cancer type was used. The same hypothetical treatment plan using the same machine and treatment prescription was computed for each case by each planning system using its respective dose calculation algorithm. The doses at reference points including (1) soft tissues only, (2) bones only, (3) air cavities only, (4) the soft tissue-bone boundary (Soft/Bone), (5) the soft tissue-air boundary (Soft/Air), and (6) the bone-air boundary (Bone/Air) were measured and compared using the mean absolute percentage error (MAPE), which was a function of the percentage dose deviations from MC. In addition, the computation time of each treatment plan was recorded and compared. The MAPEs of MGS were significantly lower than those of AAA in all types of cancers, whereas the computation time of the AAA plans was significantly lower than that of the MGS plans. Both algorithms demonstrated dose deviations of less than 4.0% in most clinical cases, and their performance was better in homogeneous tissues than at tissue boundaries. In general, MGS demonstrated relatively smaller dose deviations than AAA but required longer computation time.
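The MAPE used to rank the algorithms against the Monte Carlo gold standard is straightforward; a minimal sketch over a set of reference-point doses:

```python
# Sketch of the mean absolute percentage error (MAPE): mean absolute
# percentage deviation of an algorithm's doses from the Monte Carlo doses.
def mape(doses, mc_doses):
    deviations = [abs(d - m) / m * 100.0 for d, m in zip(doses, mc_doses)]
    return sum(deviations) / len(deviations)
```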
Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre
2016-01-01
A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749
Lebreton, Carole; Stelzer, Kerstin; Brockmann, Carsten; Bertels, Luc; Pringle, Nicholas; Paperin, Michael; Danne, Olaf; Knaeps, Els; Ruddick, Kevin
2016-08-01
Image processing for satellite water quality products requires reliable cloud and cloud shadow detection and cloud classification before atmospheric correction. Within the FP7/HIGHROC ("HIGH spatial and temporal Resolution Ocean Colour") Project, it was necessary to improve cloud detection and the cloud classification algorithms for the spatial high resolution sensors, aiming at Sentinel 2 and using Landsat 8 as a precursor. We present a comparison of three different algorithms, AFAR developed by RBINS; ACCAm created by VITO, and IDEPIX developed by Brockmann Consult. We show image comparisons and the results of the comparison using a pixel identification database (PixBox); FMASK results are also presented as reference.
Olbrich, Sebastian; Fischer, Marie M; Sander, Christian; Hegerl, Ulrich; Wirtz, Hubert; Bosse-Henck, Andrea
2015-08-01
The regulation of wakefulness is important for higher-order organisms. Its dysregulation is involved in the pathomechanism of several psychiatric disorders. Thus, a tool for its objective but less time-consuming assessment would be of importance. The Vigilance Algorithm Leipzig allows the objective measurement of sleep propensity based on a single resting-state electroencephalogram. To compare the Vigilance Algorithm Leipzig with the standard for objective assessment of excessive daytime sleepiness, a four-trial Multiple Sleep Latency Test was conducted in 25 healthy subjects. Between the first two trials, a 15-min, 25-channel resting electroencephalogram was recorded, and the Vigilance Algorithm Leipzig was used to classify the sleep propensity (i.e., type of vigilance regulation) of each subject. The results of both methods showed significant correlations with the Epworth Sleepiness Scale (ρ = -0.70 and ρ = 0.45, respectively) and correlated with each other (ρ = -0.54). Subjects with a stable electroencephalogram-vigilance regulation yielded significantly increased sleep latencies compared with those with an unstable regulation (multiple sleep latency 898.5 s versus 549.9 s; P = 0.03). Further, Vigilance Algorithm Leipzig classifications allowed the identification of subjects with short average sleep latencies. The Vigilance Algorithm Leipzig thus provides similar information on wakefulness regulation to the much more cost- and time-consuming Multiple Sleep Latency Test. Due to its high sensitivity and specificity for large sleep propensity, the Vigilance Algorithm Leipzig could be an effective and reliable alternative to the Multiple Sleep Latency Test, for example for screening purposes in large cohorts where objective information about wakefulness regulation is needed.
Directory of Open Access Journals (Sweden)
M. Mohammadi
2015-01-01
This paper presents the optimal planning of harmonic passive filters in a distribution system using three intelligent methods, genetic algorithm (GA), particle swarm optimization (PSO) and artificial bee colony (ABC), and, as a new contribution, compares them with the biogeography-based optimization (BBO) algorithm. In this work, the objective function is to minimize the investment cost of the filters and the total harmonic distortion of the three-phase current. It is shown that through economical placement and sizing of LC passive filters, the total voltage harmonic distortion and cost can be minimized simultaneously. BBO is a novel evolutionary algorithm based on the mathematics of biogeography. In the BBO model, problem solutions are represented as islands, and the sharing of features between solutions is represented as immigration and emigration between the islands. The simulation results show that the proposed method is efficient for solving the presented problem.
COMPARISON AND ANALYSIS OF WATERMARKING ALGORITHMS IN COLOR IMAGES – IMAGE SECURITY PARADIGM
Directory of Open Access Journals (Sweden)
D. Biswas
2011-06-01
This paper is based on a comparative study between different watermarking techniques such as the LSB hiding algorithm, (2, 2) visual cryptography based watermarking for color images [3,4] and the randomized LSB-MSB hiding algorithm [1]. Here, we embed the secret image in a host or original image by using these bit-wise pixel manipulation algorithms. This is followed by a comparative study of the resultant images through Peak Signal to Noise Ratio (PSNR) calculation. The property-wise variation of the different types of secret images that are embedded into the host image plays an important role in this context. The Peak Signal to Noise Ratio is calculated for the different color channels (red, green, blue) and also for their equivalent gray-level images. From the results, we try to predict which technique is more suitable for which type of secret image.
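Two of the building blocks named above, plain LSB embedding and the PSNR figure of merit, can be sketched for 8-bit pixels (so MAX = 255); the flat pixel lists here stand in for image channels:

```python
import math

# Sketch of LSB hiding and the PSNR metric used in the comparison.
def embed_lsb(pixels, bits):
    """Replace the least significant bit of each 8-bit pixel with a secret bit."""
    return [(p & ~1) | b for p, b in zip(pixels, bits)]

def psnr(original, modified):
    """Peak signal-to-noise ratio in dB for 8-bit pixel sequences."""
    mse = sum((o - m) ** 2 for o, m in zip(original, modified)) / len(original)
    if mse == 0:
        return float("inf")
    return 10 * math.log10(255 ** 2 / mse)
```

LSB embedding changes each pixel by at most one level, so the PSNR of the watermarked image stays high (around 48 dB or more), which is why it is a common baseline in such comparisons.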
Performance Comparison of Different System Identification Algorithms for FACET and ATF2
Pfingstner, J; Schulte, D
2013-01-01
Good system knowledge is an essential ingredient for the operation of modern accelerator facilities. For example, beam-based alignment algorithms and orbit feedbacks rely strongly on a precise measurement of the orbit response matrix. The quality of the measurement of this matrix can be improved over time by statistically combining the effects of small system excitations with the help of system identification algorithms. These small excitations can be applied in a parasitic mode without stopping accelerator operation (on-line). In this work, different system identification algorithms are used in simulation studies for the response matrix measurement at ATF2. The results for ATF2 are finally compared with the results for FACET, the latter originating from an earlier work.
Institute of Scientific and Technical Information of China (English)
LI Gui; LIN Hui; WU Ai-Dong; SONG Gang; WU Yi-Can
2008-01-01
To determine the electron energy spectra for medical accelerators effectively, we investigate a nonlinear programming model with several nonlinear regression algorithms, including the Levenberg-Marquardt, Quasi-Newton, Gradient, Conjugate Gradient, Newton, Principal-Axis and NMinimize algorithms. The local relaxation-bound method is also developed to increase the calculation accuracy. The testing results demonstrate that the above methods can reconstruct the electron energy spectra effectively. In particular, combined with the local relaxation-bound method, the Levenberg-Marquardt, Newton and NMinimize algorithms can precisely obtain both the electron energy spectra and the photon contamination. Further study shows that ignoring about 4% photon contamination would increase the error greatly and inaccurately make the electron energy spectra 'drift' toward the low energy end.
Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models
Directory of Open Access Journals (Sweden)
Hiroki Yoshioka
2011-07-01
The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. The LMM-based algorithm produces results that can vary to a certain degree, depending on the model assumptions. For example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could be used to assess the robustness of LMM-based algorithms under a two-endmember assumption. The factor was derived from the analytical relationship between FVC values determined according to several previously described algorithms. It depends on the target spectra, endmember spectra, and choice of the spectral vegetation index. Numerical simulations were conducted to demonstrate the dependence and the usefulness of the technique in terms of robustness against measurement noise.
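The two-endmember LMM models the observed reflectance as `fvc * veg + (1 - fvc) * soil` per band; one common retrieval is a least-squares projection onto the line between the endmembers. A sketch (the endmember spectra in the test are made-up numbers for illustration):

```python
# Sketch of FVC retrieval by unmixing a two-endmember linear mixture model:
# rho_observed = fvc * rho_veg + (1 - fvc) * rho_soil, solved per pixel by
# least squares over the spectral bands.
def fvc_from_lmm(observed, veg, soil):
    num = sum((o - s) * (v - s) for o, v, s in zip(observed, veg, soil))
    den = sum((v - s) ** 2 for v, s in zip(veg, soil))
    return num / den
```

Noise in `observed` perturbs the recovered FVC; the paper's robustness factor quantifies how strongly, depending on the spectra and the vegetation index used.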
Amooee, Golriz; Bagheri-Dehnavi, Malihe
2012-01-01
In the current competitive world, industrial companies seek to manufacture products of higher quality, which can be achieved by increasing reliability, maintainability and thus the availability of products. On the other hand, improvement in the product lifecycle is necessary for achieving high reliability. Typically, maintenance activities aim to reduce failures of industrial machinery and minimize the consequences of such failures. Industrial companies therefore try to improve their efficiency by using different fault detection techniques. One strategy is to process and analyze previously generated data to predict future failures. The purpose of this paper is to detect wasted parts using different data mining algorithms and to compare the accuracy of these algorithms. A combination of thermal and physical characteristics has been used, and the algorithms were implemented on Ahanpishegan's current data to estimate the availability of its produced parts. Keywords: Data Mining, Fault Detection, Availability, Prediction
A comparison of two estimation algorithms for Samejima's continuous IRT model.
Zopluoglu, Cengiz
2013-03-01
This study compares two algorithms, as implemented in two different computer programs, that have appeared in the literature for estimating item parameters of Samejima's continuous response model (CRM) in a simulation environment. In addition to the simulation study, a real-data illustration is provided, and CRM is used as a potential psychometric tool for analyzing measurement outcomes in the context of curriculum-based measurement (CBM) in the field of education. The results indicate that a simplified expectation-maximization (EM) algorithm is as effective and efficient as the traditional EM algorithm for estimating the CRM item parameters. The results also show promise for using this psychometric model to analyze CBM outcomes, although more research is needed in order to recommend CRM as a standard practice in the CBM context.
Directory of Open Access Journals (Sweden)
Murat KUL
2014-07-01
The purpose of this study was to compare the Multiple Intelligence areas of candidates who participated in the special aptitude test of a School of Physical Education and Sports with those of the candidates who were eligible to register. A survey model was used. Of the 785 candidates who applied to the Bartin University School of Physical Education and Sports Special Ability Test for the 2013-2014 academic year, 536 volunteer candidates with an average age of 21.15 ± 2.66 constituted the sample. As data collection tools, a personal information form and the "Multiple Intelligences Inventory" developed by Özden (2003) for the identification of multiple intelligences were applied; the reliability coefficient was found to be .96. Data were evaluated with the SPSS data analysis program, using frequency, mean and standard deviation among the descriptive statistical techniques and, given the normal distribution of the data, the independent-samples t-test. The findings showed a statistically significant difference in the "Bodily-Kinesthetic Intelligence" area of Multiple Intelligences: candidates who won a place are seen to have higher than average scores compared with candidates who did not. Statistically significant results were also observed in the "Social-Interpersonal Intelligence" levels of candidates qualifying to register compared with those who did not qualify; in this area, winning candidates carry the dominant features compared with the others. The comparison covered the "Verbal-Linguistic", "Logical-Mathematical", "Musical-Rhythmic", "Bodily-Kinesthetic" and "Social-Interpersonal" areas of Multiple Intelligence for the candidates who participated in the Physical Education
Li, Haisen S; Romeijn, H Edwin; Fox, Christopher; Palta, Jatinder R; Dempsey, James F
2008-03-01
The authors present a comparative study of intensity modulated proton therapy (IMPT) treatment planning employing algorithms of three-dimensional (3D) modulation, 2.5-dimensional (2.5D) modulation, and intensity modulated distal edge tracking (DET) [A. Lomax, Phys. Med. Biol. 44, 185-205 (1999)] applied to head-and-neck cancer radiotherapy. These three approaches were also compared with 6 MV photon intensity modulated radiation therapy (IMRT). All algorithms were implemented in the University of Florida Optimized Radiation Therapy system using a finite-sized pencil beam dose model and a convex fluence map optimization model. The 3D IMPT and DET algorithms showed considerable advantages over photon IMRT in terms of dose conformity and sparing of organs at risk when the beam number was not constrained. The 2.5D algorithm did not show an advantage over photon IMRT except in the dose reduction to distant healthy tissues, which is inherent in proton beam delivery. The influences of proton beam number and pencil beam size on IMPT plan quality were also studied. Of the 24 cases studied, three could be adequately planned with one beam and 12 with two beams, but the dose uniformity was often only marginally acceptable. Adding one or two more beams in each case dramatically improved the dose uniformity. The finite pencil beam size had more influence on the plan quality of the 2.5D and DET algorithms than on that of the 3D IMPT. To obtain a satisfactory plan quality, a 0.5 cm pencil beam size was required for the 3D IMPT and a 0.3 cm size for the 2.5D and DET algorithms. Delivery of the IMPT plans produced in this study would require a proton beam spot scanning technique that has yet to be developed clinically.
Modeling Signal Transduction Networks: A comparison of two Stochastic Kinetic Simulation Algorithms
Energy Technology Data Exchange (ETDEWEB)
Pettigrew, Michel F.; Resat, Haluk
2005-09-15
Simulations of a scalable four-compartment reaction model based on the well known epidermal growth factor receptor (EGFR) signal transduction system are used to compare two stochastic algorithms, StochSim and the Gibson-Gillespie. It is concluded that the Gibson-Gillespie is the algorithm of choice for most realistic cases, with the possible exception of signal transduction networks characterized by a moderate number (< 100) of complex types, each with a very small population, but with a high degree of connectivity amongst the complex types. Keywords: Signal transduction networks, Stochastic simulation, StochSim, Gillespie
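The family of algorithms being compared descends from Gillespie's direct method; a minimal sketch for a toy reaction A → B with rate constant k (far simpler than the four-compartment EGFR model, but it shows the core loop of propensity, exponential waiting time, and state update):

```python
import math
import random

# Minimal Gillespie direct-method SSA sketch for the single reaction A -> B.
def gillespie_decay(n_a, k, t_end, rng=random.Random(0)):
    """Simulate A -> B with rate constant k; return the (time, n_A) trajectory."""
    t, traj = 0.0, [(0.0, n_a)]
    while n_a > 0:
        propensity = k * n_a
        t += -math.log(1.0 - rng.random()) / propensity   # exponential waiting time
        if t > t_end:
            break
        n_a -= 1                                          # fire the reaction
        traj.append((t, n_a))
    return traj
```

Gibson-Bray's next-reaction refinement and StochSim's per-molecule scheme differ in how the next event is chosen, not in this underlying stochastic model.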
Institute of Scientific and Technical Information of China (English)
Haixing Liu; Jing Lu; Ming Zhao; Yixing Yuan
2016-01-01
In order to compare two advanced multi-objective evolutionary algorithms, a multi-objective water distribution problem is formulated in this paper. Multi-objective optimization has received increasing attention in water distribution system design. On the one hand, the cost of a water distribution system, including capital, operational and maintenance cost, has always been the issue of most concern to utilities; on the other hand, improving the performance of water distribution systems is of equal importance, and this often conflicts with the cost goal. Many performance metrics for water networks have been developed in recent years, including total or maximum pressure deficit, resilience, inequity, probabilistic robustness, and risk measures. In this paper, a new resilience metric based on an energy analysis of water distribution systems is proposed. The two optimization objectives are capital cost and the new resilience index. A heuristic algorithm, speed-constrained multi-objective particle swarm optimization (SMPSO), extended from the multi-objective particle swarm algorithm, is introduced and compared with another state-of-the-art heuristic algorithm, NSGA-II. The solutions are evaluated by two metrics, namely spread and hypervolume. To illustrate the capability of SMPSO to efficiently identify good designs, two benchmark problems (the two-loop network and the Hanoi network) are employed. From several aspects the results demonstrate that SMPSO is a competitive and promising tool for tackling the optimization of complex systems.
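The hypervolume indicator used to compare the SMPSO and NSGA-II fronts can be sketched for the two-objective case (both objectives minimised): it is the area dominated by the front, bounded by a reference point:

```python
# Sketch of the 2-objective hypervolume indicator (minimisation): the area
# between a Pareto front of (f1, f2) points and a reference point that is
# worse in both objectives. Larger hypervolume = better front.
def hypervolume_2d(front, ref):
    pts = sorted(front)                  # ascending f1; f2 descends on a true front
    area, prev_f2 = 0.0, ref[1]
    for f1, f2 in pts:
        if f2 < prev_f2:                 # skip dominated points
            area += (ref[0] - f1) * (prev_f2 - f2)
            prev_f2 = f2
    return area
```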
Delimata, Paweł
2010-01-01
We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. The first kind, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). In contrast, a rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used to improve the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.
Pick-N Multiple Choice-Exams: A Comparison of Scoring Algorithms
Bauer, Daniel; Holzer, Matthias; Kopp, Veronika; Fischer, Martin R.
2011-01-01
To compare different scoring algorithms for Pick-N multiple correct answer multiple-choice (MC) exams regarding test reliability, student performance, total item discrimination and item difficulty. Data from six 3rd year medical students' end of term exams in internal medicine from 2005 to 2008 at Munich University were analysed (1,255 students,…
Comparison of SAR Wind Speed Retrieval Algorithms for Evaluating Offshore Wind Energy Resources
DEFF Research Database (Denmark)
Kozai, K.; Ohsawa, T.; Takeyama, Y.
2010-01-01
Envisat/ASAR-derived offshore wind speeds and energy densities based on 4 different SAR wind speed retrieval algorithms (CMOD4, CMOD-IFR2, CMOD5, CMOD5.N) are compared with observed wind speeds and energy densities for evaluating offshore wind energy resources. CMOD4 ignores effects of atmospheric...
DEFF Research Database (Denmark)
Cook, Gerald; Lin, Ching-Fang
1980-01-01
The local linearization algorithm is presented as a possible numerical integration scheme to be used in real-time simulation. A second-order nonlinear example problem is solved using different methods. The local linearization approach is shown to require less computing time and give significant improvement in accuracy over the classical second-order integration methods.
1990-01-01
A new recursive prediction error routine is compared with the backpropagation method of training neural networks. Results based on simulated systems, the prediction of Canadian Lynx data and the modelling of an automotive diesel engine indicate that the recursive prediction error algorithm is far superior to backpropagation.
Directory of Open Access Journals (Sweden)
V. Elamaran
2012-12-01
In this study, we present the Embedded Zerotree Wavelet (EZW) algorithm to compress images using different wavelet filters, such as Biorthogonal, Coiflets, Daubechies, Symlets and Reverse Biorthogonal, and to remove noise by setting an appropriate threshold value while decoding. Compression methods are important in telemedicine applications, reducing the number of bits per pixel needed to adequately represent the image. Data storage requirements are reduced and transmission efficiency is improved by compressing the image. The EZW algorithm is an effective and computationally efficient technique in image coding. Obtaining the best image quality for a given bit rate, and accomplishing this task in an embedded fashion, are the two problems addressed by the EZW algorithm. Techniques to decompose images using wavelets have gained a great deal of popularity in recent years. Apart from very good compression performance, the EZW algorithm has the property that the bitstream can be truncated at any point and still be decoded into a good-quality image. All the standard wavelet filters are used, and the results are compared with different thresholds in the encoding section. Bit rate versus PSNR simulation results are obtained for the 256×256 Barbara image with the different wavelet filters. They show that the Daubechies wavelet filters involve greater computational overhead but produce better results: they pick up fine details, i.e., higher-frequency components, that are missed by the other wavelet filter families.
A comparison of two open source LiDAR surface classification algorithms
With the progression of LiDAR (Light Detection and Ranging) towards a mainstream resource management tool, it has become necessary to understand how best to process and analyze the data. While most ground surface identification algorithms remain proprietary and have high purchase costs; a few are op...
A comparison of 12 algorithms for matching on the propensity score.
Austin, Peter C
2014-03-15
Propensity-score matching is increasingly being used to reduce the confounding that can occur in observational studies examining the effects of treatments or interventions on outcomes. We used Monte Carlo simulations to examine the following algorithms for forming matched pairs of treated and untreated subjects: optimal matching, greedy nearest neighbor matching without replacement, and greedy nearest neighbor matching without replacement within specified caliper widths. For each of the latter two algorithms, we examined four different sub-algorithms defined by the order in which treated subjects were selected for matching to an untreated subject: lowest to highest propensity score, highest to lowest propensity score, best match first, and random order. We also examined matching with replacement. We found that (i) nearest neighbor matching induced the same balance in baseline covariates as did optimal matching; (ii) when at least some of the covariates were continuous, caliper matching tended to induce balance on baseline covariates that was at least as good as the other algorithms; (iii) caliper matching tended to result in estimates of treatment effect with less bias compared with optimal and nearest neighbor matching; (iv) optimal and nearest neighbor matching resulted in estimates of treatment effect with negligibly less variability than did caliper matching; (v) caliper matching had amongst the best performance when assessed using mean squared error; (vi) the order in which treated subjects were selected for matching had at most a modest effect on estimation; and (vii) matching with replacement did not have superior performance compared with caliper matching without replacement.
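One of the 12 algorithms, greedy nearest-neighbor matching without replacement within a caliper, can be sketched as follows; subjects are given as (id, propensity score) pairs, the caliper is an absolute score distance, and treated subjects are processed in the order supplied (the paper's "random order" variant):

```python
# Sketch of greedy nearest-neighbor caliper matching without replacement.
def greedy_caliper_match(treated, untreated, caliper):
    """Return (treated_id, untreated_id) pairs matched on propensity score."""
    available = dict(untreated)          # id -> score, still unmatched
    pairs = []
    for t_id, t_score in treated:
        if not available:
            break
        u_id = min(available, key=lambda u: abs(available[u] - t_score))
        if abs(available[u_id] - t_score) <= caliper:
            pairs.append((t_id, u_id))
            del available[u_id]          # without replacement: consume the match
    return pairs
```

Dropping the caliper test gives plain nearest-neighbor matching; skipping the `del` gives matching with replacement; the paper's other sub-algorithms differ only in how the treated subjects are ordered.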
Comparison of adaptive algorithms for the control of tonal disturbances in mechanical systems
Zilletti, M.; Elliott, S. J.; Cheer, J.
2016-09-01
This paper presents a study on the performance of adaptive control algorithms designed to reduce the vibration of mechanical systems excited by a harmonic disturbance. The mechanical system consists of a mass suspended on a spring and a damper. The system is equipped with a force actuator in parallel with the suspension. The control signal driving the actuator is generated by adjusting the amplitude and phase of a sinusoidal reference signal at the same frequency as the excitation. An adaptive feedforward control algorithm is used to adapt the amplitude and phase of the control signal, to minimise the mean square velocity of the mass. Two adaptation strategies are considered in which the control signal is either updated after each period of the oscillation or at every time sample. The first strategy is traditionally used in vibration control in helicopters for example; the second strategy is normally referred to as the filtered-x least mean square algorithm and is often used to control engine noise in cars. The two adaptation strategies are compared through a parametric study, which investigates the influence of the properties of both the mechanical system and the control system on the convergence speed of the two algorithms.
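The per-sample adaptation strategy described above, the filtered-x LMS for a single tone, amounts to adjusting the in-phase and quadrature weights of a sinusoidal reference at every sample. A minimal Python sketch, assuming for simplicity a unity secondary path between actuator and error sensor (the step size and signals are illustrative):

```python
import math

def fxlms_tonal(d, omega, mu=0.01):
    """Per-sample filtered-x LMS for a single tone.

    Adapts in-phase/quadrature weights (wc, ws) so the control signal
    y(n) cancels the tonal disturbance d(n); the secondary path is
    taken as unity here.  Returns the error signal e(n) = d(n) + y(n).
    """
    wc = ws = 0.0
    errors = []
    for n, dn in enumerate(d):
        xc, xs = math.cos(omega * n), math.sin(omega * n)
        y = wc * xc + ws * xs          # control output
        e = dn + y                     # residual vibration
        wc -= mu * e * xc              # LMS gradient step
        ws -= mu * e * xs
        errors.append(e)
    return errors

# tonal disturbance at the same frequency as the reference
omega = 0.3
d = [math.cos(omega * n + 0.7) for n in range(4000)]
e = fxlms_tonal(d, omega, mu=0.05)
```

The periodic-update strategy would instead hold the weights fixed over each oscillation period and apply one correction per period.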
Bonte, M.H.A.
2005-01-01
During the last decades, Finite Element (FEM) simulations of metal forming processes have become important tools for designing feasible production processes. In more recent years, several authors recognised the potential of coupling FEM simulations to mathematical optimisation algorithms to design
Directory of Open Access Journals (Sweden)
Yong Tian
2014-12-01
Full Text Available State of charge (SOC) estimation is essential to battery management systems in electric vehicles (EVs) to ensure the safe operation of batteries and to provide drivers with the remaining range of the EVs. A number of estimation algorithms have been developed to get an accurate SOC value because the SOC cannot be directly measured with sensors and is closely related to various factors, such as ambient temperature, current rate and battery aging. In this paper, two model-based adaptive algorithms, the adaptive unscented Kalman filter (AUKF) and the adaptive slide mode observer (ASMO), are applied and compared in terms of convergence behavior, tracking accuracy, computational cost and estimation robustness against parameter uncertainties of the battery model in SOC estimation. Two typical driving cycles, the Dynamic Stress Test (DST) and the New European Driving Cycle (NEDC), are applied to evaluate the performance of the two algorithms. Comparison results show that the AUKF has merits in convergence ability and tracking accuracy with an accurate battery model, while the ASMO has lower computational cost and better estimation robustness against parameter uncertainties of the battery model.
Directory of Open Access Journals (Sweden)
Manel Hlaili
2016-01-01
Full Text Available Photovoltaic (PV) energy is one of the most important energy sources since it is clean and inexhaustible. It is important to operate PV energy conversion systems at the maximum power point (MPP) to maximize the output energy of PV arrays. An MPPT control is necessary to extract maximum power from the PV arrays. In recent years, a large number of techniques have been proposed for tracking the maximum power point. This paper presents a comparison of different MPPT methods, proposes one which uses a power estimator, and analyses their suitability for systems which experience a wide range of operating conditions. The classic analysed methods, the incremental conductance (IncCond), perturbation and observation (P&O) and ripple correlation (RC) algorithms, are suitable and practical. Simulation results of a single-phase NPC grid-connected PV system operating with the aforementioned methods are presented to confirm the effectiveness of the scheme and algorithms. Simulation results verify the correct operation of the different MPPT methods and the proposed algorithm.
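Of the classic methods compared, perturbation and observation (P&O) is the simplest to sketch: perturb the operating voltage and reverse the perturbation direction whenever the measured power drops. The toy P-V curve and step sizes below are illustrative, not from the paper:

```python
def perturb_and_observe(pv_power, v0=20.0, dv=0.5, steps=60):
    """Perturb & Observe MPPT sketch.

    pv_power : function mapping array voltage V -> output power P.
    Climbs the P-V curve by reversing the perturbation direction
    whenever the last step decreased the measured power; at steady
    state the operating point oscillates around the MPP.
    """
    v, direction = v0, +1.0
    p_prev = pv_power(v)
    for _ in range(steps):
        v += direction * dv
        p = pv_power(v)
        if p < p_prev:          # power fell: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# toy parabolic P-V curve with its maximum at V = 30
curve = lambda v: -(v - 30.0) ** 2 + 100.0
v_mpp = perturb_and_observe(curve)
```

The residual oscillation of roughly one step size around the MPP is the classic drawback that incremental conductance and ripple correlation methods aim to reduce.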
Otfinowski, P.; Maj, P.; Deptuch, G.; Fahim, F.; Hoff, J.
2017-01-01
Charge sharing is the fractional collection of the charge cloud generated in a detector by two or more adjacent pixels. Depending on how discrimination thresholds are set in a typical photon counting pixel detector, it may lead to excessive or inefficient registration of hits compared to the number of impinging photons. The problems are particularly pronounced for fine pixel sizes and/or for thick planar detectors. The presence of charge sharing is one of the limiting factors that discourages decreasing pixel sizes in photon counting X-ray radiation imaging systems. Currently, a few different approaches tackling the charge sharing problem exist (e.g. Medipix3RX, PIXIE, miniVIPIC or PIX45). The general idea is, first, to reconstruct the entire signal from adjacent pixels and, secondly, to allocate the hit to a single pixel. This paper focuses on the latter part of the process, i.e. on a comparison of how different hit allocation algorithms affect the spatial accuracy and the false registration vs. missed hit probability. Different hit allocation algorithms were simulated, including standard photon counting (no full signal reconstruction) and the C8P1 algorithm. Also, a novel approach, based on a detection of patterns, with significantly limited analog signal processing, was proposed and characterized.
Betremieux, Yan
2015-01-01
Atmospheric refraction affects, to various degrees, exoplanet transit, lunar eclipse, and stellar occultation observations. Exoplanet retrieval algorithms often use analytical expressions for the column abundance along a ray traversing the atmosphere as well as for the deflection of that ray, which are first-order approximations valid for low densities in a spherically symmetric homogeneous isothermal atmosphere. We derive new analytical formulae for both of these quantities, which are valid for higher densities, and use them to refine and validate a new ray tracing algorithm which can be used for arbitrary atmospheric temperature-pressure profiles. We illustrate with simple isothermal atmospheric profiles the consequences of our model for different planets: temperate Earth-like and Jovian-like planets, as well as HD189733b and GJ1214b. We find that, for both hot exoplanets, our treatment of refraction does not make much of a difference to pressures as high as 10 atmospheres, but that it is important to ...
Energy Technology Data Exchange (ETDEWEB)
Fan, Chengguang [College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha 410073, PR China and Department of Mechanical Engineering, University of Bristol, Queen's Building, University Walk, Bristol BS8 1TR (United Kingdom)]; Drinkwater, Bruce W. [Department of Mechanical Engineering, University of Bristol, Queen's Building, University Walk, Bristol BS8 1TR (United Kingdom)]
2014-02-18
In this paper the performance of total focusing method is compared with the widely used time-reversal MUSIC super resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.
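The total focusing method compared above is a delay-and-sum over every transmitter-receiver pair of the array's full matrix capture. A simplified Python sketch (nearest-sample lookup, no envelope detection; the geometry and units are illustrative):

```python
import math

def tfm_image(fmc, elements, grid, c, fs):
    """Total Focusing Method: delay-and-sum on full-matrix-capture data.

    fmc[i][j] : time trace received on element j when element i fires
    elements  : list of (x, z) element positions
    grid      : list of (x, z) image points
    c, fs     : wave speed and sampling frequency
    Returns one focused amplitude per grid point.
    """
    image = []
    for (px, pz) in grid:
        acc = 0.0
        for i, (exi, ezi) in enumerate(elements):
            d_tx = math.hypot(px - exi, pz - ezi)   # transmit path
            for j, (exj, ezj) in enumerate(elements):
                d_rx = math.hypot(px - exj, pz - ezj)  # receive path
                n = int(round((d_tx + d_rx) / c * fs))
                if n < len(fmc[i][j]):
                    acc += fmc[i][j][n]
        image.append(acc)
    return image
```

Time-reversal MUSIC, by contrast, builds an image from the noise subspace of the inter-element transfer matrix rather than by summing delayed samples, which is what gives it super-resolution at low noise.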
QOS Comparison of BNP Scheduling Algorithms with Expanded Fuzzy System
Directory of Open Access Journals (Sweden)
Amita Sharma
2013-06-01
Full Text Available Parallel processing is a field in which different systems run together to save processing time and to increase the performance of the system. It is also related to the concept of load balancing. Previous algorithms like HLFET, MCP, DLS and ETF have shown that they can reduce the burden on the processor by working as a simultaneous working system. In our research work, we have combined HLFET, MCP, DLS and ETF with fuzzy logic to check what effect this makes on the parameters taken from the previous work, namely makespan, SLR, speedup and processor utilization. It has been found that the fuzzy logic system works better than the single algorithms.
Directory of Open Access Journals (Sweden)
M. Frutos
2013-01-01
Full Text Available Many of the problems that arise in production systems can be handled with multiobjective techniques. One of those problems is that of scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective Evolutionary Algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world Job-Shop contexts using three algorithms, NSGAII, SPEA2, and IBEA. Using two performance indexes, Hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. On the other hand, IBEA seems to be the better choice of tool since it yields more solutions in the approximate Pareto frontier.
Comparison Study on the Battery SoC Estimation with EKF and UKF Algorithms
Directory of Open Access Journals (Sweden)
Hongwen He
2013-09-01
Full Text Available The battery state of charge (SoC), whose estimation is one of the basic functions of a battery management system (BMS), is a vital input parameter in the energy management and power distribution control of electric vehicles (EVs). In this paper, two methods based on an extended Kalman filter (EKF) and an unscented Kalman filter (UKF), respectively, are proposed to estimate the SoC of a lithium-ion battery used in EVs. The lithium-ion battery is modeled with the Thevenin model and the model parameters are identified based on experimental data and validated with the Beijing Driving Cycle. Then the state-space equations used for SoC estimation are established. The SoC estimation results with EKF and UKF are compared in terms of accuracy and convergence. It is concluded that the two algorithms both perform well, while the UKF algorithm performs much better, with a faster convergence ability and a higher accuracy.
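For orientation, the predict-update loop common to such filters can be reduced to a one-state EKF. The linear OCV curve, noise covariances, and battery parameters below are illustrative stand-ins, not the paper's identified Thevenin model:

```python
def ekf_soc(currents, voltages, dt, q_ah, r0, soc0=0.5):
    """One-state extended Kalman filter for battery SoC (illustrative).

    State:   soc(k+1) = soc(k) - i(k)*dt/(3600*q_ah)   (coulomb counting)
    Measure: v(k) = ocv(soc) - r0*i(k), with an assumed linear OCV curve.
    currents in A (discharge positive), voltages in V, q_ah in Ah.
    """
    ocv = lambda s: 3.2 + 0.8 * s          # assumed linear OCV curve
    h_jac = 0.8                            # d(ocv)/d(soc)
    soc, p = soc0, 0.1
    q_proc, r_meas = 1e-7, 1e-3            # tuning covariances
    est = []
    for i_k, v_k in zip(currents, voltages):
        # predict: coulomb counting
        soc -= i_k * dt / (3600.0 * q_ah)
        p += q_proc
        # update with the terminal-voltage measurement
        k_gain = p * h_jac / (h_jac * p * h_jac + r_meas)
        soc += k_gain * (v_k - (ocv(soc) - r0 * i_k))
        p *= (1.0 - k_gain * h_jac)
        est.append(soc)
    return est
```

A UKF replaces the Jacobian `h_jac` with sigma-point propagation through the measurement function, which is what improves its accuracy for strongly nonlinear OCV curves.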
Directory of Open Access Journals (Sweden)
Y. Zhang
2009-09-01
Full Text Available Infiltration into frozen and unfrozen soils is critical in hydrology, controlling active layer soil water dynamics and influencing runoff. Few Land Surface Models (LSMs) and Hydrological Models (HMs) have been developed, adapted or tested for frozen conditions and permafrost soils. Considering the vast geographical area influenced by freeze/thaw processes and permafrost, and the rapid environmental change observed worldwide in these regions, a need exists to improve models to better represent their hydrology.
In this study, various infiltration algorithms and parameterisation methods, which are commonly employed in current LSMs and HMs, were tested against detailed measurements at three sites in Canada's discontinuous permafrost region with organic soil depths ranging from 0.02 to 3 m. Field data from two consecutive years were used to calibrate and evaluate the infiltration algorithms and parameterisations. Important conclusions include: (1) the single most important factor that controls infiltration at permafrost sites is ground thaw depth; (2) differences among the infiltration simulated by different algorithms and parameterisations were only found when the ground was frozen or during the initial fast thawing stages, but not after ground thaw reaches a critical depth of 15-30 cm; (3) despite similarities in simulated total infiltration after ground thaw reaches the critical depth, the choice of algorithm influenced the distribution of water among the soil layers; and (4) the ice impedance factor for hydraulic conductivity, which is commonly used in LSMs and HMs, may not be necessary once the water-potential-driven frozen soil parameterisation is employed. Results from this work provide guidelines that can be directly implemented in LSMs and HMs to improve their application in organic-covered permafrost soils.
Sheta, B.; M. Elhabiby; Sheimy, N.
2012-01-01
A robust scale- and rotation-invariant image matching algorithm is vital for the Visual Based Navigation (VBN) of aerial vehicles, where matches between existing geo-referenced database images and real-time captured images are used to georeference the real-time captured image from the UAV (i.e. estimate six transformation parameters: three rotations and three translations) through the collinearity equations. The georeferencing information is then used in aiding the INS integration Kalman filter a...
Directory of Open Access Journals (Sweden)
Y. Zhang
2010-05-01
Full Text Available Infiltration into frozen and unfrozen soils is critical in hydrology, controlling active layer soil water dynamics and influencing runoff. Few Land Surface Models (LSMs) and Hydrological Models (HMs) have been developed, adapted or tested for frozen conditions and permafrost soils. Considering the vast geographical area influenced by freeze/thaw processes and permafrost, and the rapid environmental change observed worldwide in these regions, a need exists to improve models to better represent their hydrology.
In this study, various infiltration algorithms and parameterisation methods, which are commonly employed in current LSMs and HMs, were tested against detailed measurements at three sites in Canada's discontinuous permafrost region with organic soil depths ranging from 0.02 to 3 m. Field data from two consecutive years were used to calibrate and evaluate the infiltration algorithms and parameterisations. Important conclusions include: (1) the single most important factor that controls infiltration at permafrost sites is ground thaw depth; (2) differences among the infiltration simulated by different algorithms and parameterisations were only found when the ground was frozen or during the initial fast thawing stages, but not after ground thaw reaches a critical depth of 15 to 30 cm; (3) despite similarities in simulated total infiltration after ground thaw reaches the critical depth, the choice of algorithm influenced the distribution of water among the soil layers; and (4) the ice impedance factor for hydraulic conductivity, which is commonly used in LSMs and HMs, may not be necessary once the water-potential-driven frozen soil parameterisation is employed. Results from this work provide guidelines that can be directly implemented in LSMs and HMs to improve their application in organic-covered permafrost soils.
Comparison and quantitative verification of mapping algorithms for whole genome bisulfite sequencing
Coupling bisulfite conversion with next-generation sequencing (Bisulfite-seq) enables genome-wide measurement of DNA methylation, but poses unique challenges for mapping. However, despite a proliferation of Bisulfite-seq mapping tools, no systematic comparison of their genomic coverage and quantitat...
Comparison of different breast planning techniques and algorithms for radiation therapy treatment.
Borges, C; Cunha, G; Monteiro-Grillo, I; Vaz, P; Teixeira, N
2014-03-01
This work aims at investigating the impact of treating breast cancer using different radiation therapy (RT) techniques--forwardly-planned intensity-modulated, f-IMRT, inversely-planned IMRT and dynamic conformal arc (DCART) RT--and their effects on the whole-breast irradiation and in the undesirable irradiation of the surrounding healthy tissues. Two algorithms of iPlan BrainLAB treatment planning system were compared: Pencil Beam Convolution (PBC) and commercial Monte Carlo (iMC). Seven left-sided breast patients submitted to breast-conserving surgery were enrolled in the study. For each patient, four RT techniques--f-IMRT, IMRT using 2-fields and 5-fields (IMRT2 and IMRT5, respectively) and DCART - were applied. The dose distributions in the planned target volume (PTV) and the dose to the organs at risk (OAR) were compared analyzing dose-volume histograms; further statistical analysis was performed using IBM SPSS v20 software. For PBC, all techniques provided adequate coverage of the PTV. However, statistically significant dose differences were observed between the techniques, in the PTV, OAR and also in the pattern of dose distribution spreading into normal tissues. IMRT5 and DCART spread low doses into greater volumes of normal tissue, right breast, right lung and heart than tangential techniques. However, IMRT5 plans improved distributions for the PTV, exhibiting better conformity and homogeneity in target and reduced high dose percentages in ipsilateral OAR. DCART did not present advantages over any of the techniques investigated. Differences were also found comparing the calculation algorithms: PBC estimated higher doses for the PTV, ipsilateral lung and heart than the iMC algorithm predicted.
A Comparison of Three Voting Methods for Bagging with the MLEM2 Algorithm
Energy Technology Data Exchange (ETDEWEB)
Clinton Cohagan; Jerzy W. Grzymala-Busse; Zdzislaw S. Hippe
2010-03-17
This paper presents results of experiments on some data sets using bagging on the MLEM2 rule induction algorithm. Three different methods of ensemble voting, based on support (a non-democratic voting in which ensembles vote with their strengths), strength only (an ensemble with the largest strength decides to which concept a case belongs) and democratic voting (each ensemble has at most one vote) were used. Our conclusions are that though in most cases democratic voting was the best, it is not significantly better than voting based on support. The strength voting was the worst voting method.
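The three voting schemes can be contrasted with a small sketch. The vote combiner below is illustrative only and does not reproduce MLEM2 rule induction itself:

```python
from collections import defaultdict

def ensemble_vote(rule_votes, method="democratic"):
    """Combine ensemble votes for a single case (illustrative).

    rule_votes : list of (concept, strength) pairs, one per ensemble member.
    method     : 'democratic' - each member casts at most one vote;
                 'support'    - members vote with their strengths;
                 'strength'   - the single strongest member decides.
    """
    if method == "strength":
        return max(rule_votes, key=lambda cs: cs[1])[0]
    tally = defaultdict(float)
    for concept, strength in rule_votes:
        tally[concept] += strength if method == "support" else 1.0
    return max(tally, key=tally.get)

# toy votes: three weak members favour "cold", two strong ones "flu"
votes = [("flu", 0.9), ("flu", 0.8),
         ("cold", 0.4), ("cold", 0.3), ("cold", 0.2)]
```

On this toy input the schemes disagree: democratic voting picks the concept with the most members behind it, while support and strength voting favour the stronger members.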
DEFF Research Database (Denmark)
Fabricius, Anne; Watt, Dominic; Johnson, Daniel Ezra
2009-01-01
This paper evaluates a speaker-intrinsic vowel formant frequency normalization algorithm initially proposed in Watt & Fabricius (2002). We compare how well this routine, known as the S-centroid procedure, performs as a sociophonetic research tool in three ways: reducing variance in area ratios...... from RP and Aberdeen English (northeast Scotland). We conclude that, for the data examined here, the S-centroid W&F procedure performs at least as well as the two most recognized speaker-intrinsic, vowel-extrinsic, formant-intrinsic normalization methods, Lobanov's (1971) z-score procedure and Nearey...
Directory of Open Access Journals (Sweden)
R. Stübi
2009-12-01
Full Text Available This paper presents a first statistical validation of tropospheric ozone products derived from measurements of the IASI satellite instrument. Since the end of 2006, IASI (Infrared Atmospheric Sounding Interferometer) aboard the polar orbiter Metop-A has measured infrared spectra of the Earth's atmosphere in nadir geometry. This validation covers the northern mid-latitudes and the period from July 2007 to August 2008. Retrieval results from four different sources are presented: three are scientific products (LATMOS, LISA, LPMAA) and the fourth is the pre-operational product distributed by EUMETSAT (version 4.2). The products are derived from different algorithms with different approaches; the differences and their implications for the retrieved products are discussed. In order to evaluate the quality and the performance of each product, comparisons with the vertical ozone concentration profiles measured by balloon sondes are performed and lead to estimates of the systematic and random errors in the IASI ozone products (profiles and partial columns). A first comparison is performed on the given profiles; a second comparison takes into account the altitude-dependent sensitivity of the retrievals. Tropospheric columnar amounts are compared to the sonde for a lower tropospheric column (surface to about 6 km) and a "total" tropospheric column (surface to about 11 km). On average both tropospheric columns have small biases for the scientific products, less than 2 Dobson Units (DU) for the lower troposphere and less than 1 DU for the total troposphere. The comparison of the still pre-operational EUMETSAT columns shows higher mean differences of about 5 DU.
2003-03-01
An algorithm referred to as Derivative-Switching (DS), designed by Dr. Brad Liebst and Capt. Mike Chapa of the Air Force Institute of Technology (AFIT), used the first and second derivatives of the pilot's... The methods of reducing phase lag published by Dr. Rundqwist and Capt. Chapa were the subjects for comparison in this study.
Hoffman, Ryan A; Kothari, Sonal; Wang, May D
2014-01-01
Automated processing of digital histopathology slides has the potential to streamline patient care and provide new tools for cancer classification and grading. Before automatic analysis is possible, quality control procedures are applied to ensure that each image can be read consistently. One important quality control step is color normalization of the slide image, which adjusts for color variances (batch-effects) caused by differences in stain preparation and image acquisition equipment. Color batch-effects affect color-based features and reduce the performance of supervised color segmentation algorithms on images acquired separately. To identify an optimal normalization technique for histopathological color segmentation applications, five color normalization algorithms were compared in this study using 204 images from four image batches. Among the normalization methods, two global color normalization methods normalized colors from all stains simultaneously and three stain color normalization methods normalized colors from individual stains extracted using color deconvolution. Stain color normalization methods performed significantly better than global color normalization methods in 11 of 12 cross-batch experiments. A stain color normalization method using k-means clustering was found to be the best choice because of its high stain segmentation accuracy and low computational complexity.
Directory of Open Access Journals (Sweden)
Parul Rastogi
2011-03-01
Full Text Available Search engines are the basic tool for fetching information on the web. The IT revolution has affected not only technocrats but native users as well, who now also tend to look for information on the web. This creates a need for effective search engines that fulfill native users' needs and provide them information in their native languages. The major population of India uses Hindi as a first language, yet Hindi-language web information retrieval is not in a satisfactory condition. Besides other technical setbacks, Hindi-language search engines face the problem of sense ambiguity. Our WSD method is based on Highest Sense Count (HSC) and works well with Google. The objective of the paper is a comparative analysis of the WSD algorithm results on three Hindi-language search engines: Google, Raftaar and Guruji. We have taken a test sample of 100 queries to check the performance level of the WSD algorithm on the various search engines. The results show promising improvement in the performance of the Google search engine, whereas the least performance improvement was seen in the Guruji search engine.
Chandratilleke, Dinusha; Silvestrini, Roger; Culican, Sue; Campbell, David; Byth-Wilson, Karen; Swaminathan, Sanjay; Lin, Ming-Wei
2016-08-01
Extractable nuclear antigen (ENA) antibody testing is often requested in patients with suspected connective tissue diseases. Most laboratories in Australia use a two step process involving a high sensitivity screening assay followed by a high specificity confirmation test. Multiplexing technology with Addressable Laser Bead Immunoassay (e.g., FIDIS) offers simultaneous detection of multiple antibody specificities, allowing a single step screening and confirmation. We compared our current diagnostic laboratory testing algorithm [Organtec ELISA screen / Euroimmun line immunoassay (LIA) confirmation] and the FIDIS Connective Profile. A total of 529 samples (443 consecutive+86 known autoantibody positivity) were run through both algorithms, and 479 samples (90.5%) were concordant. The same autoantibody profile was detected in 100 samples (18.9%) and 379 were concordant negative samples (71.6%). The 50 discordant samples (9.5%) were subdivided into 'likely FIDIS or current method correct' or 'unresolved' based on ancillary data. 'Unresolved' samples (n = 25) were subclassified into 'potentially' versus 'potentially not' clinically significant based on the change to clinical interpretation. Only nine samples (1.7%) were deemed to be 'potentially clinically significant'. Overall, we found that the FIDIS Connective Profile ENA kit is non-inferior to the current ELISA screen/LIA characterisation. Reagent and capital costs may be limiting factors in using the FIDIS, but potential benefits include a single step analysis and simultaneous detection of dsDNA antibodies.
Energy Technology Data Exchange (ETDEWEB)
Dong, Feng; Pierpaoli, Elena; Gunn, James E.; Wechsler, Risa H.
2007-10-29
We present a modified adaptive matched filter algorithm designed to identify clusters of galaxies in wide-field imaging surveys such as the Sloan Digital Sky Survey. The cluster-finding technique is fully adaptive to imaging surveys with spectroscopic coverage, multicolor photometric redshifts, no redshift information at all, and any combination of these within one survey. It works with high efficiency in multi-band imaging surveys where photometric redshifts can be estimated with well-understood error distributions. Tests of the algorithm on realistic mock SDSS catalogs suggest that the detected sample is approximately 85% complete and over 90% pure for clusters with masses above 1.0 x 10^14 h^-1 M_⊙ and redshifts up to z = 0.45. The errors of estimated cluster redshifts from the maximum likelihood method are shown to be small (typically less than 0.01) over the whole redshift range, with photometric redshift errors typical of those found in the Sloan survey. Inside the spherical radius corresponding to a galaxy overdensity of Δ = 200, we find the derived cluster richness Λ_200 to be a roughly linear indicator of its virial mass M_200, which recovers well the relation between total luminosity and cluster mass of the input simulation.
Kim, R S J; Postman, M; Strauss, M A; Bahcall, Neta A; Gunn, J E; Lupton, R H; Annis, J; Nichol, R C; Castander, F J; Brinkmann, J; Brunner, R J; Connolly, A; Csabai, I; Hindsley, R B; Ivezic, Z; Vogeley, M S; York, D G; Kim, Rita S. J.; Kepner, Jeremy V.; Postman, Marc; Strauss, Michael A.; Bahcall, Neta A.; Gunn, James E.; Lupton, Robert H.; Annis, James; Nichol, Robert C.; Castander, Francisco J.; Brunner, Robert J.; Connolly, Andrew; Csabai, Istvan; Hindsley, Robert B.; Ivezic, Zeljko; Vogeley, Michael S.; York, Donald G.
2002-01-01
We present a comparison of three cluster finding algorithms from imaging data using Monte Carlo simulations of clusters embedded in a 25 deg^2 region of Sloan Digital Sky Survey (SDSS) imaging data: the Matched Filter (MF; Postman et al. 1996), the Adaptive Matched Filter (AMF; Kepner et al. 1999) and a color-magnitude filtered Voronoi Tessellation Technique (VTT). Among the two matched filters, we find that the MF is more efficient in detecting faint clusters, whereas the AMF evaluates the redshifts and richnesses more accurately, therefore suggesting a hybrid method (HMF) that combines the two. The HMF outperforms the VTT when using a background that is uniform, but it is more sensitive to the presence of a non-uniform galaxy background than is the VTT; this is due to the assumption of a uniform background in the HMF model. We thus find that for the detection thresholds we determine to be appropriate for the SDSS data, the performance of both algorithms is similar; we present the selection function for eac...
Pulse shape analysis of a two fold clover detector with an EMD based new algorithm: A comparison
Energy Technology Data Exchange (ETDEWEB)
Siwal, Davinder, E-mail: dev84sonu@gmail.com [Department of Physics and Astrophysics, University of Delhi, Delhi 110007 (India); Mandal, S. [Department of Physics and Astrophysics, University of Delhi, Delhi 110007 (India); Palit, R.; Sethi, J. [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Mumbai 400005 (India); Garg, R. [Department of Physics and Astrophysics, University of Delhi, Delhi 110007 (India); Saha, S. [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Mumbai 400005 (India); Prasad, Awadhesh [Department of Physics and Astrophysics, University of Delhi, Delhi 110007 (India); Chavan, P.B.; Naidu, B.S.; Jadhav, S.; Donthi, R. [Department of Nuclear and Atomic Physics, Tata Institute of Fundamental Research, Mumbai 400005 (India); Schaffner, H.; Adamczewski-Musch, J.; Kurz, N.; Wollersheim, H.J. [GSI Helmholtzzentrum für Schwerionenforschung GmbH, 64291 Darmstadt (Germany); Singh, R. [Amity Institute of Nuclear Science and Technology, Amity University, Noida 201303 (India)
2014-03-21
An investigation of an Empirical Mode Decomposition (EMD) based noise filtering algorithm has been carried out on a mirror signal from a two-fold germanium clover detector. The EMD technique can decompose linear as well as nonlinear and chaotic signals with a precise frequency resolution. It allows the preamplifier signal (charge pulse) to be decomposed on an event-by-event basis. The filtering algorithm provides information about the Intrinsic Mode Functions (IMFs) mainly dominated by the noise. It preserves the signal information and separates the overriding noise oscillations from the signals. The identification of the noise structure is based on the frequency distributions of the different IMFs. The preamplifier noise components which distort the azimuthal co-ordinate information have been extracted on the basis of the correlation between the different IMFs and the mirror signal. The correlation studies have been carried out both in the frequency and time domains. The extracted correlation coefficient provides important information regarding the pulse shape of the γ-ray interaction in the detector. A comparison between the EMD-based and state-of-the-art wavelet-based denoising techniques has also been made and discussed. It has been observed that the fractional noise strength distribution varies with the position of the collimated gamma-ray source. The above trend has been reproduced by both denoising techniques.
Azin, Meysam; Chiel, Hillel J; Mohseni, Pedram
2007-01-01
Finite impulse response (FIR) and infinite impulse response (IIR) temporal filtering techniques are investigated to assess the feasibility of very-large-scale-integrated (VLSI) implementation of a subtraction-based stimulus artifact rejection (SAR) algorithm in implantable, closed-loop neuroprostheses. The two approaches are compared in terms of their system architectures, overall performances, and the associated computational costs. Pre-recorded neural data from an Aplysia californica are used to demonstrate the functionality of the proposed implementations. Digital building blocks for an FIR-based system are also simulated in a 0.18-μm CMOS technology to estimate the total power consumption; an IIR-based system can further reduce the required power consumption and die area.
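A subtraction-based SAR scheme of the kind investigated can be illustrated by estimating an artifact template and removing it at each stimulus onset. This plain-Python sketch is illustrative only and does not reproduce the paper's FIR/IIR filter architectures:

```python
def subtract_artifact(recording, stim_onsets, template_len):
    """Subtraction-based stimulus artifact rejection sketch.

    Averages the samples following each stimulus onset to build an
    artifact template, then subtracts that template at every onset,
    leaving the underlying neural signal.
    """
    # build the template as the mean artifact across all stimuli
    template = [0.0] * template_len
    for s in stim_onsets:
        for k in range(template_len):
            template[k] += recording[s + k] / len(stim_onsets)
    # subtract the template at each onset
    cleaned = list(recording)
    for s in stim_onsets:
        for k in range(template_len):
            cleaned[s + k] -= template[k]
    return cleaned
```

In a hardware realisation the running template average maps naturally onto an FIR structure, while a recursive (exponentially weighted) template update maps onto an IIR structure with less storage.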
VHDL IMPLEMENTATION AND COMPARISON OF COMPLEX MULTIPLIER USING BOOTH'S AND VEDIC ALGORITHM
Directory of Open Access Journals (Sweden)
Rajashri K. Bhongade
2015-11-01
Full Text Available The basic idea for designing a complex-number multiplier is adopted from the design of an ordinary multiplier. The ancient Indian mathematical system of the Vedas is used for designing the multiplier unit. Of the 16 sutras in the Vedas, the Urdhva Tiryakbhyam sutra (method) was selected for implementing complex multiplication, since Urdhva Tiryakbhyam is applicable to all cases of multiplication. Using this sutra, any multi-bit multiplication can be reduced to single-bit multiplications and additions performed vertically and crosswise. The partial products and sums are generated in a single step, which reduces carry propagation from LSB to MSB. In this paper, simulation results for 4-bit complex-number multiplication using Booth's algorithm and using the Vedic sutra are illustrated. The implementation of the Vedic mathematics applied to the complex multiplier was checked for parameters such as propagation delay.
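A minimal sketch of Urdhva Tiryakbhyam (vertically and crosswise) multiplication on decimal digits follows; the paper works on 4-bit binary operands in VHDL, so the digit base and function name here are illustrative only.

```python
def urdhva_multiply(x, y):
    """Urdhva Tiryakbhyam multiplication sketch on decimal digits.

    Digits are processed least-significant first; all cross products
    for a given column are summed in one step (the 'vertical and
    crosswise' pattern), and carries are resolved in a single final sweep.
    """
    a = [int(d) for d in str(x)][::-1]
    b = [int(d) for d in str(y)][::-1]
    cols = [0] * (len(a) + len(b))
    for i, da in enumerate(a):
        for j, db in enumerate(b):
            cols[i + j] += da * db          # vertical/crosswise products
    carry, digits = 0, []
    for c in cols:                           # single carry-propagation sweep
        total = c + carry
        digits.append(total % 10)
        carry = total // 10
    while carry:
        digits.append(carry % 10)
        carry //= 10
    return int("".join(map(str, digits[::-1])))
```

The same column-sum structure is what lets a hardware implementation generate all partial products in parallel before one carry chain.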
Koloch, Grzegorz; Kaminski, Bogumil
2010-10-01
In the paper we examine a modification of the classical Vehicle Routing Problem (VRP) in which the shapes of the transported cargo are accounted for. This problem, known as the three-dimensional VRP with loading constraints (3D-VRP), is appropriate when transported commodities are not perfectly divisible but have fixed and heterogeneous dimensions. Restrictions on allowable cargo positionings are also considered; these restrictions are derived from business practice, and they extend the baseline 3D-VRP formulation considered by Koloch and Kaminski (2010). In particular, we investigate how the additional restrictions influence the relative performance of two proposed optimization algorithms: the nested and the joint one. The performance of both methods is compared on artificial problems and on a large-scale real-life case study.
Martin, Jacob A.; Gross, Kevin C.
2016-05-01
As off-nadir viewing platforms become increasingly prevalent in remote sensing, material identification techniques must be robust to changing viewing geometries. Current identification strategies generally rely on estimating reflectivity or emissivity, both of which vary with viewing angle. Presented here is a technique, leveraging polarimetric and hyperspectral imaging (P-HSI), to estimate index of refraction which is invariant to viewing geometry. Results from a quartz window show that index of refraction can be retrieved to within 0.08 rms error from 875-1250 cm-1 for an amorphous material. Results from a silicon carbide (SiC) wafer, which has much sharper features than quartz glass, show the index of refraction can be retrieved to within 0.07 rms error. The results from each of these datasets show an improvement when compared with a maximum smoothness TES algorithm.
A comparison algorithm to check LTSA Layer 1 and SCORM compliance in e-Learning sites
Sengupta, Souvik; Banerjee, Nilanjan
2012-01-01
The success of e-Learning largely depends on the impact of its multimedia-aided learning content on the learner over hypermedia. E-Learning portals with different proportions of multimedia elements have different impacts on the learner, as there is a lack of standardization. The Learning Technology System Architecture (LTSA) Layer 1 deals with the effect of the environment on the learner; from an information technology perspective, it specifies learner interaction from the environment to the learner via multimedia content. The Sharable Content Object Reference Model (SCORM) is a collection of standards and specifications for web-based e-Learning content and specifies how a JavaScript API can be used to integrate content development. In this paper, the design features of the interactive multimedia components of learning packages are examined by creating an algorithm which gives a comparative study of the multimedia components used by different learning packages. The resultant graph as output helps...
Directory of Open Access Journals (Sweden)
Peter Domonkos
2013-01-01
Full Text Available Efficiency evaluations of the change point detection methods used in nine major objective homogenization methods (DOHMs) are presented. The evaluations are conducted using ten different simulated datasets and four efficiency measures: detection skill, skill of linear trend estimation, sum of squared error, and a combined efficiency measure. The test datasets applied have a diverse set of inhomogeneity (IH) characteristics and include one dataset that is similar to the monthly benchmark temperature dataset of the European benchmarking effort known by the acronym COST HOME. The performance of DOHMs is highly dependent on the characteristics of the test datasets and efficiency measures. Measures of skill differ markedly according to the frequency and mean duration of inhomogeneities and vary with the ratio of IH magnitudes and background noise. The study focuses on cases when high-quality relative time series (i.e., the difference between a candidate and reference series) can be created, but the frequency and intensity of inhomogeneities are high. Results show that in these cases the Caussinus-Mestre method is the most effective, although appreciably good results can also be achieved by the use of several other DOHMs, such as the Multiple Analysis of Series for Homogenisation, the Bayes method, Multiple Linear Regression, and the Standard Normal Homogeneity Test.
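As one concrete example of the change point detection underlying several DOHMs, a single-break Standard Normal Homogeneity Test (SNHT) statistic can be sketched as follows; this is a simplified textbook form, not the benchmarking code used in the study.

```python
import numpy as np

def snht_statistic(q):
    """Single-break SNHT sketch for a relative (candidate minus reference) series q.

    The series is standardized, and for every candidate break position k the
    statistic T(k) = k*z1^2 + (n-k)*z2^2 is computed from the means of the two
    segments. Returns the maximum T and the most likely break position.
    """
    z = (q - q.mean()) / q.std()
    n = len(z)
    best_T, best_k = -np.inf, None
    for k in range(1, n):
        z1, z2 = z[:k].mean(), z[k:].mean()
        T = k * z1**2 + (n - k) * z2**2
        if T > best_T:
            best_T, best_k = T, k
    return best_T, best_k
```

A large maximum T relative to a critical value signals an inhomogeneity at position k.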
Comparison of IPSA and HIPO inverse planning optimization algorithms for prostate HDR brachytherapy.
Panettieri, Vanessa; Smith, Ryan L; Mason, Natasha J; Millar, Jeremy L
2014-11-08
Publications have reported the benefits of using high-dose-rate brachytherapy (HDRB) for the treatment of prostate cancer, since it provides similar biochemical control to other treatments while showing the lowest long-term complication rates for the organs at risk (OAR). With the inclusion of anatomy-based inverse planning optimizers, HDRB has the advantage of potentially allowing dose escalation. Among the algorithms used, the Inverse Planning Simulated Annealing (IPSA) optimizer is widely employed since it provides adequate dose coverage while minimizing dose to the OAR, but it is known to generate large dwell times at particular positions of the catheter. As an alternative, the Hybrid Inverse treatment Planning Optimization (HIPO) algorithm was recently implemented in Oncentra Brachytherapy V. 4.3. The aim of this work was to compare, with the aid of radiobiological models, plans obtained with IPSA and HIPO to assess their use in our clinical practice. Plans for thirty patients were calculated with IPSA and HIPO to achieve our department's clinical constraints. To evaluate their performance, dosimetric data were collected: Prostate PTV D90(%), V100(%), V150(%), and V200(%), Urethra D10(%), Rectum D2cc(%), and conformity indices. Additionally, tumor control probability (TCP) and normal tissue complication probability (NTCP) were calculated with the BioSuite software. The HIPO optimization was performed first with the Prostate PTV as priority 1 (HIPOPTV) and then with the Urethra as priority 1 (HIPOurethra). Initial optimization constraints were then modified to see the effects on dosimetric parameters, TCPs, and NTCPs. HIPO optimizations could reduce TCPs by up to 10%-20% for all PTVs smaller than 74 cm3. For the urethra, IPSA and HIPOurethra provided similar NTCPs for the majority of volume sizes, whereas HIPOPTV resulted in large NTCP values. These findings were in agreement with dosimetric values. By increasing the PTV maximum dose constraints for HIPOurethra plans, TCPs were found to be in agreement with
A comparison of Eulerian and Lagrangian transport and non-linear reaction algorithms
Benson, David A.; Aquino, Tomás; Bolster, Diogo; Engdahl, Nicholas; Henri, Christopher V.; Fernàndez-Garcia, Daniel
2017-01-01
When laboratory-measured chemical reaction rates are used in simulations at the field scale, the models typically overpredict the apparent reaction rates. The discrepancy is primarily due to poorer mixing of chemically distinct waters at the larger scale. As a result, realistic field-scale predictions require accurate simulation of the degree of mixing between fluids. The Lagrangian particle-tracking (PT) method is a now-standard way to simulate the transport of conservative or sorbing solutes. The method's main advantage is the absence of numerical dispersion (and its artificial mixing) when simulating advection. New algorithms allow particles of different species to interact in nonlinear (e.g., bimolecular) reactions. Therefore, the PT methods hold the promise of more accurate field-scale simulation of reactive transport because they eliminate the masking effects of spurious mixing due to advection errors inherent in grid-based methods. A hypothetical field-scale reaction scenario is constructed and run in PT and Eulerian (finite-volume/finite-difference) simulators. Grid-based advection schemes considered here include 1st- to 3rd-order spatially accurate total-variation-diminishing flux-limiting schemes, which are widely used in current transport/reaction codes. A homogeneous velocity field in which the Courant number is everywhere unity, so that the chosen Eulerian methods incur no error when simulating advection, shows that both the Eulerian and PT methods can achieve convergence in the L1 (integrated concentration) norm, but neither shows stricter pointwise convergence. In this specific case with a constant dispersion coefficient and bimolecular reaction A + B → P, the correct total amount of product is 0.221 MA0, where MA0 is the original mass of reactant A. When the Courant number drops, the grid-based simulations can show remarkable errors due to spurious over- and under-mixing. In a heterogeneous velocity field (keeping the same constant and
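The bimolecular reaction term A + B → P that both the Eulerian and PT codes must couple to transport can be sketched in a well-mixed (zero-dimensional) form; the rate constant and step sizes below are illustrative, not the paper's field-scale setup.

```python
def react_bimolecular(a0, b0, k, dt, steps):
    """Well-mixed A + B -> P under mass action, explicit-Euler sketch.

    Illustrates the nonlinear reaction term that transport codes couple
    to at every cell or particle; k, dt and steps are arbitrary values.
    """
    a, b, p = a0, b0, 0.0
    for _ in range(steps):
        r = k * a * b * dt          # mass reacted during this step
        r = min(r, a, b)            # never consume more than is present
        a, b, p = a - r, b - r, p + r
    return a, b, p
```

In the transported case, spurious numerical mixing inflates the local product a*b, which is exactly how grid-based advection errors overpredict the reaction.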
DEFF Research Database (Denmark)
Tamborrini, Marco; Stoffel, Sabine A; Westerfeld, Nicole;
2011-01-01
In clinical trials, immunopotentiating reconstituted influenza virosomes (IRIVs) have shown great potential as a versatile antigen delivery platform for synthetic peptides derived from Plasmodium falciparum antigens. This study describes the immunogenicity of a virosomally-formulated recombinant fusion protein comprising domains of the two malaria vaccine candidate antigens MSP3 and GLURP.
Park, No-Wook; Jang, Dong-Ho
2014-01-01
This paper compares the predictive performance of different geostatistical kriging algorithms for intertidal surface sediment facies mapping using grain size data. Indicator kriging, which maps facies types from conditional probabilities of predefined facies types, is first considered. In the second approach, grain size fractions are first predicted using cokriging and the facies types are then mapped. As grain size fractions are compositional data, their characteristics should be considered during spatial prediction. For efficient prediction of compositional data, additive log-ratio transformation is applied before cokriging analysis. The predictive performance of cokriging of the transformed variables is compared with that of cokriging of raw fractions in terms of both prediction errors of fractions and facies mapping accuracy. From a case study of the Baramarae tidal flat, Korea, the mapping method based on cokriging of log-ratio transformation of fractions outperformed the one based on cokriging of untransformed fractions in the prediction of fractions and produced the best facies mapping accuracy. Indicator kriging that could not account for the variation of fractions within each facies type showed the worst mapping accuracy. These case study results indicate that the proper processing of grain size fractions as compositional data is important for reliable facies mapping.
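The additive log-ratio transform and its inverse, applied before and after cokriging of the grain-size fractions, can be sketched as follows; the eps guard against zero fractions is an assumption of this sketch, not something stated in the paper.

```python
import numpy as np

def alr(fractions, eps=1e-6):
    """Additive log-ratio transform of compositional data.

    Each row of `fractions` sums to 1; the last component serves as the
    reference denominator. A small eps guards against zero fractions.
    """
    f = np.clip(np.asarray(fractions, dtype=float), eps, None)
    return np.log(f[:, :-1] / f[:, -1:])

def alr_inverse(z):
    """Back-transform ALR coordinates to fractions summing to one."""
    e = np.exp(z)
    denom = 1.0 + e.sum(axis=1, keepdims=True)
    return np.hstack([e / denom, 1.0 / denom])
```

Cokriging is performed on the ALR coordinates, and predictions are back-transformed so the predicted fractions remain positive and sum to one.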
Directory of Open Access Journals (Sweden)
No-Wook Park
2014-01-01
Full Text Available This paper compares the predictive performance of different geostatistical kriging algorithms for intertidal surface sediment facies mapping using grain size data. Indicator kriging, which maps facies types from conditional probabilities of predefined facies types, is first considered. In the second approach, grain size fractions are first predicted using cokriging and the facies types are then mapped. As grain size fractions are compositional data, their characteristics should be considered during spatial prediction. For efficient prediction of compositional data, additive log-ratio transformation is applied before cokriging analysis. The predictive performance of cokriging of the transformed variables is compared with that of cokriging of raw fractions in terms of both prediction errors of fractions and facies mapping accuracy. From a case study of the Baramarae tidal flat, Korea, the mapping method based on cokriging of log-ratio transformation of fractions outperformed the one based on cokriging of untransformed fractions in the prediction of fractions and produced the best facies mapping accuracy. Indicator kriging that could not account for the variation of fractions within each facies type showed the worst mapping accuracy. These case study results indicate that the proper processing of grain size fractions as compositional data is important for reliable facies mapping.
Directory of Open Access Journals (Sweden)
Deyuan Meng
2014-05-01
Full Text Available The dynamics of pneumatic systems are highly nonlinear and normally exhibit a large extent of model uncertainty, so precision motion trajectory tracking control of pneumatic cylinders remains a challenge. In this paper, two typical nonlinear controllers, an adaptive controller and a deterministic robust controller, are first constructed. Considering that each has both benefits and limitations, an adaptive robust controller (ARC) is further proposed. The ARC is a combination of the first two controllers; it employs online recursive least squares estimation (RLSE) to reduce the extent of parametric uncertainties, and utilizes the robust control method to attenuate the effects of parameter estimation errors, unmodeled dynamics, and disturbances. In order to resolve the conflicts between the robust control design and the parameter adaptation law design, projection mapping is used to condition the RLSE algorithm so that the parameter estimates are kept within a known bounded convex set. Theoretically, the ARC possesses the advantages of both adaptive control and deterministic robust control, and thus an even better tracking performance can be expected. Extensive comparative experimental results are presented to illustrate the achievable performance of the three proposed controllers and the robustness of their performance to parameter variations and sudden disturbances.
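The projection-conditioned RLSE step at the heart of the ARC design can be sketched as follows, using a simple box as the bounded convex set; the names, forgetting factor and bounds are illustrative assumptions, not the paper's controller.

```python
import numpy as np

def rls_projected(phi_seq, y_seq, theta0, bounds, lam=0.98):
    """Recursive least squares with projection onto a bounded box.

    Each measurement y ~ phi . theta updates the estimate via the standard
    RLS gain; the estimate is then projected (here, clipped) so that it
    stays inside a known convex set, as in projection-conditioned ARC.
    """
    theta = np.array(theta0, dtype=float)
    P = np.eye(len(theta)) * 100.0          # large initial covariance
    lo, hi = bounds
    for phi, y in zip(phi_seq, y_seq):
        phi = np.asarray(phi, dtype=float)
        k = P @ phi / (lam + phi @ P @ phi)
        theta = theta + k * (y - phi @ theta)
        theta = np.clip(theta, lo, hi)      # projection onto the convex set
        P = (P - np.outer(k, phi @ P)) / lam
    return theta
```

The projection guarantees bounded estimates even before convergence, which is what allows the robust-control part of the design to assume worst-case parameter bounds.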
Implementing and Comparison between Two Algorithms to Make a Decision in a Wireless Sensors Network
Directory of Open Access Journals (Sweden)
Fouad Essahlaoui
2016-08-01
Full Text Available The clinical presentation of acute poisoning by CO and hydrocarbon gas (butane, CAS 106-97-8) varies depending on the terrain, humidity, temperature, duration of exposure and concentration of the toxic gas: from consciousness disorders (100 ppm, or 15%), which rapidly limit miners to ambient air and oxygen, up to sudden coma (300 ppm, or 45%) requiring hospitalization in a monitoring unit; without rapid intervention, death follows within a few minutes at the poisoning site [1]. A leak of butane gas at a filling plant located very close to the Faculty motivated a gas detection project, in which a set of sensors was deployed to warn of possible leaks that could affect the students, teachers and staff of the institution. This document therefore describes the implementation of two methods, an averaging filter and the CUSUM algorithm, for making a warning decision based on the signals given by the wireless sensors [9], [14-15] installed on the inner side of the Faculty of Science and Technology in Errachidia.
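A minimal one-sided CUSUM decision rule of the kind described can be sketched as follows; the reference level k and threshold h are illustrative values, not the paper's tuning.

```python
def cusum_alarm(samples, target, k=0.5, h=5.0):
    """One-sided CUSUM sketch for gas-sensor alarm decisions.

    Accumulates deviations above `target + k` (the allowance); raises an
    alarm as soon as the cumulative sum exceeds the threshold `h`.
    Returns the sample index at which the alarm fires, or -1 if none.
    """
    s = 0.0
    for i, x in enumerate(samples):
        s = max(0.0, s + (x - target - k))
        if s > h:
            return i
    return -1
```

Compared with a plain averaging filter, CUSUM accumulates small persistent shifts, so it detects a slow concentration rise sooner for the same false-alarm rate.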
Comparison of most adaptive meta model With newly created Quality Meta-Model using CART Algorithm
Directory of Open Access Journals (Sweden)
Jasbir Malik
2012-09-01
Full Text Available To ensure that the software developed is of high quality, it is now widely accepted that the various artifacts generated during the development process should be rigorously evaluated using a domain-specific quality model. However, a domain-specific quality model should be derived from a generic quality model which is time-proven, well-validated and widely accepted. This thesis lays down a clear definition of a quality meta-model and then identifies the various quality meta-models existing in the research and practice domains. It then compares the existing quality meta-models, using a set of criteria, to identify which model is the most adaptable to various domains. In this comparison we specify the categories: since the CART algorithm is entirely a tree architecture whose meta-model decision making operates on true-or-false tests, an item found to satisfy a category falls under the true branch, and otherwise under the false branch.
Comparison of parametric FBP and OS-EM reconstruction algorithm images for PET dynamic study
Energy Technology Data Exchange (ETDEWEB)
Oda, Keiichi; Uemura, Koji; Kimura, Yuichi; Senda, Michio [Tokyo Metropolitan Inst. of Gerontology (Japan). Positron Medical Center; Toyama, Hinako; Ikoma, Yoko
2001-10-01
An ordered subsets expectation maximization (OS-EM) algorithm is used for image reconstruction to suppress image noise and to produce non-negative-valued images. We have applied OS-EM to a digital brain phantom and to human brain 18F-FDG PET kinetic studies to generate parametric images. A 45 min dynamic scan was performed, starting at the injection of FDG, with a 2D PET scanner. The images were reconstructed with OS-EM (6 iterations, 16 subsets) and with filtered backprojection (FBP), and K1, k2 and k3 images were created by the Marquardt non-linear least squares method based on the 3-parameter kinetic model. Although the OS-EM activity images correlated fairly well with those obtained by FBP, the pixel correlations were poor for the k2 and k3 parametric images; nevertheless, the plots were scattered along the line of identity, and the mean values of K1, k2 and k3 obtained by OS-EM were almost equal to those by FBP. The kinetic fitting error for OS-EM was no smaller than that for FBP. The results suggest that OS-EM is not necessarily superior to FBP for creating parametric images. (author)
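The multiplicative EM update that gives OS-EM its non-negativity property can be sketched for a single subset (i.e., plain MLEM) on a toy system matrix; this is an illustrative sketch, not the scanner's reconstruction code.

```python
import numpy as np

def mlem(A, y, n_iter=500):
    """Maximum-likelihood EM reconstruction sketch (OS-EM with one subset).

    A is the system matrix (detectors x pixels), y the measured counts.
    The multiplicative update keeps the image non-negative throughout,
    one of the properties of OS-EM noted in the abstract.
    """
    x = np.ones(A.shape[1])
    sens = A.sum(axis=0)                         # sensitivity image
    for _ in range(n_iter):
        proj = A @ x                             # forward projection
        ratio = y / np.maximum(proj, 1e-12)      # measured / estimated
        x = x * (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x
```

OS-EM accelerates this by cycling the update over ordered subsets of the projection data rather than using all of y at once.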
ECG De-noising: A comparison between EEMD-BLMS and DWT-NN algorithms.
Kærgaard, Kevin; Jensen, Søren Hjøllund; Puthusserypady, Sadasivan
2015-08-01
Electrocardiogram (ECG) is a widely used non-invasive method to study the rhythmic activity of the heart and thereby to detect abnormalities. However, these signals are often obscured by artifacts from various sources, and minimization of these artifacts is of paramount importance. This paper proposes two adaptive techniques, namely the EEMD-BLMS (Ensemble Empirical Mode Decomposition in conjunction with the Block Least Mean Square algorithm) and DWT-NN (Discrete Wavelet Transform followed by Neural Network) methods, for minimizing the artifacts in recorded ECG signals, and compares their performance. The methods were first compared on two types of simulated noise-corrupted ECG signals: Type-I (desired ECG plus noise frequencies outside the ECG frequency band) and Type-II (ECG plus noise frequencies both inside and outside the ECG frequency band). Subsequently, they were tested on real ECG recordings. Results clearly show that both methods work equally well when used on Type-I signals. However, on Type-II signals the DWT-NN performed better. In the case of real ECG data, though both methods performed similarly, the DWT-NN method was slightly better in terms of minimizing the high-frequency artifacts.
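The block variant of the LMS update used in the EEMD-BLMS method can be sketched as follows; the filter length, block size and step size are illustrative, not the paper's settings.

```python
import numpy as np

def block_lms(x, d, n_taps=4, block=8, mu=0.05):
    """Block LMS adaptive filter sketch.

    The weight vector is updated once per block using the gradient
    accumulated over the whole block, rather than after every sample
    as in standard LMS. Returns final weights and the filter output.
    """
    w = np.zeros(n_taps)
    y = np.zeros(len(x))
    for start in range(0, len(x) - block + 1, block):
        grad = np.zeros(n_taps)
        for n in range(start, start + block):
            u = x[max(0, n - n_taps + 1): n + 1][::-1]   # newest sample first
            u = np.pad(u, (0, n_taps - len(u)))          # zero-pad early samples
            y[n] = w @ u
            grad += (d[n] - y[n]) * u
        w += mu * grad / block                           # one update per block
    return w, y
```

Block updates average the gradient, which lowers the update rate and suits hardware or batch-oriented implementations.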
Characterization and Comparison of the 10-2 SITA-Standard and Fast Algorithms
Directory of Open Access Journals (Sweden)
Yaniv Barkana
2012-01-01
Full Text Available Purpose: To compare the 10-2 SITA-standard and SITA-fast visual field programs in patients with glaucoma. Methods: We enrolled 26 patients with open angle glaucoma with involvement of at least one paracentral location on the 24-2 SITA-standard field test. Each subject performed 10-2 SITA-standard and SITA-fast tests. Within 2 months this sequence of tests was repeated. Results: SITA-fast was 30% shorter than SITA-standard (5.5±1.1 vs 7.9±1.1 minutes, p<0.001). Mean MD was statistically significantly higher for SITA-standard compared with SITA-fast at the first visit (Δ=0.3 dB, p=0.017) but not the second visit. The inter-visit difference in MD or in the number of depressed points was not significant for either program. Bland-Altman analysis showed that clinically significant variations can exist in individual instances between the 2 programs and between repeat tests with the same program. Conclusions: The 10-2 SITA-fast algorithm is significantly shorter than SITA-standard. The two programs have similar long-term variability. Average same-visit between-program and same-program between-visit sensitivity results were similar for the study population, but clinically significant variability was observed for some individual test pairs. Group inter- and intra-program test results may be comparable, but in the management of the individual patient field change should be verified by repeat testing.
A task-based comparison of two reconstruction algorithms for digital breast tomosynthesis
Mahadevan, Ravi; Ikejimba, Lynda C.; Lin, Yuan; Samei, Ehsan; Lo, Joseph Y.
2014-03-01
Digital breast tomosynthesis (DBT) generates 3-D reconstructions of the breast by taking X-ray projections at various angles around the breast. DBT improves cancer detection as it minimizes the tissue overlap that is present in traditional 2-D mammography. In this work, two methods of reconstruction, filtered backprojection (FBP) and Newton-Raphson iterative reconstruction, were used to create 3-D reconstructions from phantom images acquired on a breast tomosynthesis system. The task-based image analysis method was used to compare the performance of each reconstruction technique. The task simulated a 10 mm lesion within the breast containing iodine concentrations between 0.0 mg/ml and 8.6 mg/ml. The TTF was calculated using the reconstruction of an edge phantom, and the NPS was measured with a structured breast phantom (CIRS 020) over different exposure levels. The detectability index d' was calculated to assess image quality of the reconstructed phantom images. Image quality was assessed for both conventional single-energy and dual-energy subtracted reconstructions. Dose allocation between the high- and low-energy scans was also examined. Over the full range of dose allocations, the iterative reconstruction yielded a higher detectability index than FBP for single-energy reconstructions. For dual-energy subtraction, the detectability index was maximized when most of the dose was allocated to the high-energy image. With that dose allocation, the performance trend for the reconstruction algorithms reversed; FBP performed better than the corresponding iterative reconstruction. However, FBP performance varied very erratically with changing dose allocation. Therefore, iterative reconstruction is preferred for both imaging modalities despite underperforming dual-energy FBP, as it provides stable results.
Directory of Open Access Journals (Sweden)
Armando Marino
2015-04-01
Full Text Available The surveillance of maritime areas with remote sensing is vital for security reasons, as well as for the protection of the environment. Satellite-borne synthetic aperture radar (SAR) offers large-scale surveillance, which is not reliant on solar illumination and is rather independent of weather conditions. The main feature of vessels in SAR images is a higher backscattering compared to the sea background. This peculiarity has led to the development of several ship detectors focused on identifying anomalies in the intensity of SAR images. More recently, different approaches relying on the information kept in the spectrum of a single-look complex (SLC) SAR image were proposed. This paper is focused on two main issues. Firstly, two recently developed sub-look detectors are applied for the first time to ship detection. Secondly, new and well-known ship detection algorithms are compared in order to understand which has the best performance under certain circumstances and whether the sub-look analysis improves ship detection. The comparison is done on real SAR data exploiting diversity in frequency and polarization. Specifically, the employed data consist of six RADARSAT-2 fine quad-pol acquisitions over the North Sea, five TerraSAR-X HH/VV dual-polarimetric data-takes, also over the North Sea, and one ALOS-PALSAR quad-polarimetric dataset over Tokyo Bay. Simultaneously with the SAR images, validation data were collected, which include the automatic identification system (AIS) positions of ships and wind speeds. The results of the analysis show that the performance of the different sub-look algorithms considered here is strongly dependent on polarization, frequency and resolution. Interestingly, these sub-look detectors are able to outperform the classical SAR intensity detector when the sea state is particularly high, leading to a strong clutter contribution. It was also observed that there are situations where the performance improvement thanks to the sub
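Intensity-based anomaly detection of the kind the classical detectors implement can be illustrated with a one-dimensional cell-averaging CFAR sketch; the window sizes and scale factor are assumptions of this sketch, not values from the paper.

```python
import numpy as np

def ca_cfar_1d(intensity, guard=2, train=8, scale=4.0):
    """Cell-averaging CFAR sketch for intensity-based target detection.

    Each cell under test is compared against `scale` times the mean of the
    surrounding training cells, with guard cells excluded so the target's
    own energy does not inflate the clutter estimate.
    """
    n = len(intensity)
    half = guard + train
    detections = []
    for i in range(half, n - half):
        left = intensity[i - half: i - guard]
        right = intensity[i + guard + 1: i + half + 1]
        noise = np.concatenate([left, right]).mean()
        if intensity[i] > scale * noise:
            detections.append(i)
    return detections
```

When the sea state is high, the local clutter mean rises, which is exactly the regime where the abstract reports sub-look detectors outperforming plain intensity thresholding.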
Middle matching mining algorithm
Institute of Scientific and Technical Information of China (English)
GUO Ping; CHEN Li
2003-01-01
A new algorithm for the fast discovery of sequential patterns, referred to as the middle matching algorithm, is presented to solve the problem of the excessive number of candidate sets generated by SPADE. Experiments on a large customer transaction database consisting of customer_id, transaction time, and transaction items demonstrate that the proposed algorithm performs better than SPADE, owing to its strategy of generating a candidate set by matching two sequences at the middle position, which reduces the number of candidate sets.
Directory of Open Access Journals (Sweden)
P. S. Hiremath
2014-11-01
Full Text Available In mobile ad-hoc networks (MANETs), the movement of the nodes may quickly change the network topology, resulting in an increase of overhead messages for topology maintenance. The nodes communicate with each other by exchanging hello packets and constructing the neighbor list at each node. MANETs are vulnerable to attacks such as the black hole attack, gray hole attack, worm hole attack and Sybil attack. A black hole attack has a serious impact on routing, packet delivery ratio, throughput, and end-to-end delay of packets. In this paper, the performance of clustering-based and threshold-based algorithms for the detection and prevention of cooperative black hole attacks in MANETs is compared. In this study every node is monitored by its own cluster head (CH), while a server (SV) monitors the entire network by a channel-overhearing method. The server computes the trust value based on the sent and received packet counts of the receiver node. It is implemented using the AODV routing protocol in NS2 simulations. The results are obtained by comparing the performance of the clustering-based and threshold-based methods while varying the concentration of black hole nodes, and are analyzed in terms of throughput and packet delivery ratio. The results demonstrate that the threshold-based method outperforms the clustering-based method in terms of throughput, packet delivery ratio and end-to-end delay.
Hudson, Parisa; Hudson, Stephen D; Handler, William B; Scholl, Timothy J; Chronik, Blaine A
2010-04-01
High-performance shim coils are required for high-field magnetic resonance imaging and spectroscopy. Complete sets of high-power and high-performance shim coils were designed using two different methods: the minimum inductance and the minimum power target field methods. A quantitative comparison of shim performance in terms of merit of inductance (ML) and merit of resistance (MR) was made for shim coils designed using the minimum inductance and the minimum power design algorithms. In each design case, the difference in ML and the difference in MR given by the two design methods was small. Minimum inductance designs tend to feature oscillations within the current density, while minimum power designs tend to feature less rapidly varying current densities and lower power dissipation. Overall, the differences in coil performance obtained by the two methods are relatively small. For the specific case of shim systems customized for small animal imaging, the reduced power dissipation obtained when using the minimum power method is judged to be more significant than the improvements in switching speed obtained from the minimum inductance method.
Directory of Open Access Journals (Sweden)
Tamborrini Marco
2011-12-01
Full Text Available Abstract Background In clinical trials, immunopotentiating reconstituted influenza virosomes (IRIVs) have shown great potential as a versatile antigen delivery platform for synthetic peptides derived from Plasmodium falciparum antigens. This study describes the immunogenicity of a virosomally-formulated recombinant fusion protein comprising domains of the two malaria vaccine candidate antigens MSP3 and GLURP. Methods The highly purified recombinant protein GMZ2 was coupled to phosphatidylethanolamine and the conjugates incorporated into the membrane of IRIVs. The immunogenicity of this adjuvant-free virosomal formulation was compared to GMZ2 formulated with the adjuvants Montanide ISA 720 and Alum in three mouse strains with different genetic backgrounds. Results Intramuscular injections of all three candidate vaccine formulations induced GMZ2-specific antibody responses in all mice tested. In general, the humoral immune response in outbred NMRI mice was stronger than that in inbred BALB/c and C57BL/6 mice. ELISA with the recombinant antigens demonstrated immunodominance of the GLURP component over the MSP3 component. However, compared to the Al(OH)3-adjuvanted formulation, the two other formulations elicited in NMRI mice a larger proportion of anti-MSP3 antibodies. Analyses of the induced GMZ2-specific IgG subclass profiles showed for all three formulations a predominance of the IgG1 isotype. Immune sera against all three formulations exhibited cross-reactivity with in vitro cultivated blood-stage parasites. Immunofluorescence and immunoblot competition experiments showed that both components of the hybrid protein induced IgG cross-reactive with the corresponding native proteins. Conclusion A virosomal formulation of the chimeric protein GMZ2 induced P. falciparum blood stage parasite cross-reactive IgG responses specific for both MSP3 and GLURP. GMZ2 thus represents a candidate component suitable for inclusion into a multi-valent virosomal
Directory of Open Access Journals (Sweden)
Yinliang Wang
Full Text Available The leaf beetle Ambrostoma quadriimpressum (Coleoptera: Chrysomelidae) is a predominant forest pest that causes substantial damage to the lumber industry and city management. However, no effective and environmentally friendly chemical method has been discovered to control this pest. Until recently, the molecular basis of the olfactory system in A. quadriimpressum was completely unknown. In this study, antenna and leg transcriptomes were analyzed and compared using deep sequencing data to identify the olfactory genes in A. quadriimpressum. Moreover, the expression profiles of both male and female candidate olfactory genes were analyzed and validated by bioinformatics, motif analysis, homology analysis, semi-quantitative RT-PCR and RT-qPCR experiments in antennal and non-olfactory organs to explore the candidate olfactory genes that might play key roles in the life cycle of A. quadriimpressum. As a result, approximately 102.9 million and 97.3 million clean reads were obtained from the libraries created from the antennae and legs, respectively. Annotation led to 34344 Unigenes, which were matched to known proteins. Annotation data revealed that the number of genes in the antennae with binding functions and receptor activity was greater than that in the legs. Furthermore, many pathway genes were differentially expressed in the two organs. Sixteen candidate odorant binding proteins (OBPs), 10 chemosensory proteins (CSPs), 34 odorant receptors (ORs), 20 ionotropic receptors (IRs) and 2 sensory neuron membrane proteins (SNMPs) and their isoforms were identified. Additionally, 15 OBPs, 9 CSPs, 18 ORs, 6 IRs and 2 SNMPs were predicted to be complete ORFs. Using RT-PCR, RT-qPCR and homology analysis, AquaOBP1/2/4/7/C1/C6, AquaCSP3/9, AquaOR8/9/10/14/15/18/20/26/29/33, AquaIR8a/13/25a showed olfactory-specific expression, indicating that these genes might play a key role in olfaction-related behaviors in A. quadriimpressum such as foraging and seeking. AquaOBP4/C5, Aqua
Wang, Yinliang; Chen, Qi; Zhao, Hanbo; Ren, Bingzhong
2016-01-01
The leaf beetle Ambrostoma quadriimpressum (Coleoptera: Chrysomelidae) is a predominant forest pest that causes substantial damage to the lumber industry and city management. However, no effective and environmentally friendly chemical method has been discovered to control this pest. Until recently, the molecular basis of the olfactory system in A. quadriimpressum was completely unknown. In this study, antennae and leg transcriptomes were analyzed and compared using deep sequencing data to identify the olfactory genes in A. quadriimpressum. Moreover, the expression profiles of both male and female candidate olfactory genes were analyzed and validated by bioinformatics, motif analysis, homology analysis, semi-quantitative RT-PCR and RT-qPCR experiments in antennal and non-olfactory organs to explore the candidate olfactory genes that might play key roles in the life cycle of A. quadriimpressum. As a result, approximately 102.9 million and 97.3 million clean reads were obtained from the libraries created from the antennae and legs, respectively. Annotation led to 34,344 unigenes, which were matched to known proteins. Annotation data revealed that the number of genes in the antennae with binding functions and receptor activity was greater than in the legs. Furthermore, many pathway genes were differentially expressed in the two organs. Sixteen candidate odorant binding proteins (OBPs), 10 chemosensory proteins (CSPs), 34 odorant receptors (ORs), 20 ionotropic receptors (IRs) and 2 sensory neuron membrane proteins (SNMPs) and their isoforms were identified. Additionally, 15 OBPs, 9 CSPs, 18 ORs, 6 IRs and 2 SNMPs were predicted to be complete ORFs. Using RT-PCR, RT-qPCR and homology analysis, AquaOBP1/2/4/7/C1/C6, AquaCSP3/9, AquaOR8/9/10/14/15/18/20/26/29/33, AquaIR8a/13/25a showed olfactory-specific expression, indicating that these genes might play a key role in olfaction-related behaviors in A. quadriimpressum such as foraging and seeking. AquaOBP4/C5, AquaCSP7
A comparison of iterative algorithms and a mixed approach for in-line x-ray phase retrieval.
Meng, Fanbo; Zhang, Da; Wu, Xizeng; Liu, Hong
2009-08-15
Previous studies have shown that iterative in-line x-ray phase retrieval algorithms may have higher precision than direct retrieval algorithms. This communication compares three iterative phase retrieval algorithms in terms of accuracy and efficiency using computer simulations. We found that the Fourier-transform-based algorithm (FT) converges fastest, while the Poisson-solver-based algorithm (PS) has higher precision. The traditional Gerchberg-Saxton algorithm (GS) is very slow and sometimes did not converge in our tests. A mixed FT-PS algorithm is then presented to achieve both high efficiency and high accuracy. The mixed algorithm is tested using simulated images with different noise levels and experimentally obtained images of a piece of chicken breast muscle.
Directory of Open Access Journals (Sweden)
Małgorzata Stramska
2013-02-01
The quasi-synoptic view available from satellites has been broadly used in recent years to observe in near-real time the large-scale dynamics of marine ecosystems and to estimate primary productivity in the world ocean. However, the standard global NASA ocean colour algorithms generally do not produce good results in the Baltic Sea. In this paper, we compare the ability of seven algorithms to estimate depth-integrated daily primary production (PP, mg C m-2) in the Baltic Sea. All the algorithms use surface chlorophyll concentration, sea surface temperature, photosynthetically available radiation, latitude, longitude and day of the year as input data. Algorithm-derived PP is then compared with PP estimates obtained from 14C uptake measurements. The results indicate that the best agreement between the modelled and measured PP in the Baltic Sea is obtained with the DESAMBEM algorithm. This result supports the notion that a regional approach should be used in the interpretation of ocean colour satellite data in the Baltic Sea.
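Comparisons between algorithm-derived and 14C-measured PP are usually summarized with log-space error statistics, since PP spans orders of magnitude. A minimal sketch of such statistics; the function name and the choice of metrics are illustrative, not taken from the paper:

```python
import math

def pp_skill(modelled, measured):
    """Log10-space bias and RMSE for comparing algorithm-derived primary
    production (PP) against in situ (e.g. 14C-uptake) estimates."""
    logs = [math.log10(m) - math.log10(o) for m, o in zip(modelled, measured)]
    n = len(logs)
    bias = sum(logs) / n                              # systematic over/underestimation
    rmse = math.sqrt(sum(d * d for d in logs) / n)    # overall log-space scatter
    return bias, rmse

# hypothetical PP values (mg C m-2): a uniform 10x overestimate gives bias ~ rmse ~ 1
bias, rmse = pp_skill([1000.0, 100.0], [100.0, 10.0])
```

A bias near zero with small RMSE in log space is the kind of behaviour the paper reports for the best-performing regional algorithm.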
Directory of Open Access Journals (Sweden)
Fatemeh Masoudnia
2013-11-01
In this paper, three optimal approaches to designing a PID controller for a Gryphon Robot are presented: the Artificial Bee Colony algorithm, the Shuffled Frog Leaping algorithm and a neuro-fuzzy system. The design goal is to minimize the integral absolute error and to improve the transient response by minimizing the overshoot, settling time and rise time of the step response. An objective function over these indexes is defined and minimized using the Shuffled Frog Leaping (SFL) algorithm, the Artificial Bee Colony (ABC) algorithm and the neuro-fuzzy system (FNN). After optimization of the objective function, the optimal parameters for the PID controller are obtained. Simulation results show that the FNN has a remarkable effect in decreasing the settling time and rise time and in eliminating the steady-state error, while the SFL algorithm performs better on steady-state error and the ABC algorithm is better at decreasing the overshoot. In the steady state, all of the methods react robustly to the disturbance, but the FNN shows more stability in the transient response.
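The objective function combines IAE, overshoot, settling time and rise time. A sketch of how such indexes can be computed from an equally spaced sampled step response; the function name, the 90% rise criterion and the 2% settling band are common conventions assumed here, not details from the paper:

```python
def step_metrics(t, y, setpoint=1.0, tol=0.02):
    """Transient-response indexes of the kind combined in the PID
    objective function: integral absolute error (IAE), overshoot,
    rise time and settling time."""
    dt = t[1] - t[0]
    iae = sum(abs(setpoint - yi) * dt for yi in y)          # rectangle rule
    overshoot = max(0.0, (max(y) - setpoint) / setpoint)
    # rise time: first instant the response reaches 90% of the setpoint
    rise = next((ti for ti, yi in zip(t, y) if yi >= 0.9 * setpoint), None)
    # settling time: last instant the response is outside the tolerance band
    settle = 0.0
    for ti, yi in zip(t, y):
        if abs(yi - setpoint) > tol * setpoint:
            settle = ti
    return iae, overshoot, rise, settle

iae, ovs, rise, settle = step_metrics([0, 1, 2, 3, 4], [0.0, 0.5, 1.2, 1.01, 1.0])
```

A weighted sum of these four numbers is then what SFL, ABC or the FNN would drive down by adjusting the PID gains.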
Directory of Open Access Journals (Sweden)
Deleuze Jean-Francois
2006-04-01
Abstract Background The recent advances in genotyping and molecular techniques have greatly increased knowledge of the human genome's structure. Millions of polymorphisms are reported and freely available in public databases. As a result, there is now a need to identify, among all these data, the relevant markers for genetic association studies. Recently, several methods have been published to select subsets of markers, usually Single Nucleotide Polymorphisms (SNPs), that best represent genetic polymorphisms in the studied candidate gene or region. Results In this paper, we compared four of these selection methods, two based on haplotype information and two based on pairwise linkage disequilibrium (LD). The methods were applied to genotype data on twenty genes with different patterns of LD and different numbers of SNPs. A measure of the efficiency of the different methods to select SNPs was obtained by comparing, for each gene and under several single disease susceptibility models, the power to detect an association achieved with the selected SNP subsets. Conclusion None of the four selection methods stands out systematically from the others. Methods based on pairwise LD information turn out to be the most interesting in the context of candidate-gene association studies. In a context where the number of SNPs to be tested in a given region needs to be more limited, as in large-scale studies or genome-wide scans, one of the two methods based on haplotype information would be more suitable.
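To make the pairwise-LD idea concrete, here is a greedy tag-SNP selection sketch in the spirit of the pairwise methods compared (not the authors' exact procedure): repeatedly pick the SNP whose r^2 >= threshold covers the most still-untagged SNPs. The matrix values below are hypothetical:

```python
def greedy_tag_snps(r2, threshold=0.8):
    """Greedy pairwise-LD tag-SNP selection. `r2` is a symmetric matrix
    with 1.0 on the diagonal, so every SNP covers at least itself and
    the loop always terminates."""
    uncovered, tags = set(range(len(r2))), []
    while uncovered:
        # SNP covering the largest number of still-uncovered SNPs
        best = max(range(len(r2)),
                   key=lambda i: sum(1 for j in uncovered if r2[i][j] >= threshold))
        tags.append(best)
        uncovered -= {j for j in uncovered if r2[best][j] >= threshold}
    return tags

# SNPs 0 and 1 are in strong LD; SNP 2 is independent and must tag itself
r2 = [[1.0, 0.9, 0.1],
      [0.9, 1.0, 0.2],
      [0.1, 0.2, 1.0]]
tags = greedy_tag_snps(r2)
```

The haplotype-based methods evaluated in the paper instead score subsets by how much haplotype diversity they preserve, which this pairwise sketch does not capture.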
Lemasson, Elise; Bertin, Sophie; Hennig, Philippe; Lesellier, Eric; West, Caroline
2016-11-11
Impurity profiling of organic products synthesized as possible drug candidates represents a major analytical challenge. Complementary analytical methods are required to ensure that all impurities are detected. Both high-performance liquid chromatography (HPLC) and supercritical fluid chromatography (SFC) can be used for this purpose. In this study, we compared ultra-high-performance LC (UHPLC) and ultra-high-performance SFC (UHPSFC) using a large dataset of 140 pharmaceutical compounds. Four previously optimized methods (two on each technique) were selected to ensure fast high-resolution separations. The four methods were evaluated based on response rate, peak capacity, peak shape and capability to detect impurities (UV). The orthogonality between all methods was also assessed. The best UHPLC and UHPSFC methods provided comparable quality for the 140 compounds included in this study. Moreover, they were found to be highly orthogonal. Finally, the potential of the combined use of UHPLC and UHPSFC for impurity profiling is illustrated with practical examples.
DEFF Research Database (Denmark)
Ivanova, N.; Pedersen, L. T.; Tonboe, R. T.;
2015-01-01
Sea ice concentration has been retrieved in polar regions with satellite microwave radiometers for over 30 years. However, the question remains as to what is an optimal sea ice concentration retrieval method for climate monitoring. This paper presents some of the key results of an extensive algorithm inter-comparison and evaluation experiment. The skills of 30 sea ice algorithms were evaluated systematically over low and high sea ice concentrations. Evaluation criteria included standard deviation relative to independent validation data, performance in the presence of thin ice and melt ponds … to retrieve sea ice concentration globally for climate monitoring purposes. This approach consists of a combination of two algorithms plus dynamic tie points implementation and atmospheric correction of input brightness temperatures. The method minimizes inter-sensor calibration discrepancies and sensitivity …
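At their core, many of the compared retrievals interpolate the observed brightness temperature between tie points. A deliberately simplified one-channel sketch (real algorithms combine several channels and polarizations; the kelvin values below are hypothetical):

```python
def sic_tiepoint(tb, tb_water, tb_ice):
    """One-channel tie-point sea ice concentration: linear interpolation
    of the observed brightness temperature between the open-water and
    consolidated-ice tie points. "Dynamic tie points" means tb_water and
    tb_ice are re-derived per sensor and period, which is what reduces
    inter-sensor calibration discrepancies."""
    sic = (tb - tb_water) / (tb_ice - tb_water)
    return min(1.0, max(0.0, sic))    # clip to the physical range [0, 1]

# hypothetical tie points in kelvin: halfway between water and ice -> SIC 0.5
half_ice = sic_tiepoint(205.0, tb_water=160.0, tb_ice=250.0)
```

Atmospheric correction of the input brightness temperatures, as mentioned above, would be applied to `tb` before this step.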
Mohamad Yusefi, Mahboobeh; Safari, Abdolreza; Shahbazi, Anahita; Foroughi, Ismael
2016-04-01
In local-scale applications, Radial Basis Functions (RBFs) are appropriate tools for high spatial/spectral resolution gravity field modeling. Due to the availability of different types of RBF kernels, different behaviors are expected in both the spectral and spatial domains. While the spectral behavior of RBFs depends on the type of kernel, their spatial behavior depends significantly on the choice of bandwidth. In this study, the functionality of various types of RBF kernels is addressed in coastal gravity field modeling. Four of the most well-known gravimetric RBF kernels, the point-mass, radial multipole, Poisson wavelet and Poisson kernels, are considered for comparison. The area under consideration is the coastal region of the Persian Gulf, which comprises 6244 terrestrial/marine gravity observations. The optimal RBF parameterization of the gravity field, i.e. specifying the optimal number of kernels and their 3D spatial configuration (their horizontal locations in the area of interest and their depth below the Bjerhammar sphere), is performed using the iterative Levenberg-Marquardt Algorithm (LMA). Our previous studies indicated that the LMA is a practical choice for dealing with the ill-conditioned problem of gravity field modeling. The stopping criterion is the minimum L2-norm of the differences between the predicted and observed quantities at independent control points. The numerical experiments reveal that the accuracy of the gravity field and geoid models obtained with the different RBF kernels depends on the selection of the RBF parameters; if the RBF parameters are spatially optimized, the kernels lead to almost the same results.
Novel multi-objective optimization algorithm
Institute of Scientific and Technical Information of China (English)
Jie Zeng; Wei Nie
2014-01-01
Many multi-objective evolutionary algorithms (MOEAs) can converge to the Pareto optimal front and work well on two or three objectives, but they deteriorate when faced with many-objective problems. Indicator-based MOEAs, which adopt various indicators to evaluate the fitness values (instead of the Pareto-dominance relation) to select candidate solutions, have been regarded as promising schemes that yield more satisfactory results than well-known algorithms, such as the non-dominated sorting genetic algorithm (NSGA-II) and the strength Pareto evolutionary algorithm (SPEA2). However, they can suffer from a slow convergence speed. This paper proposes a new indicator-based multi-objective optimization algorithm, namely, the multi-objective shuffled frog leaping algorithm based on the ε indicator (ε-MOSFLA). This algorithm adopts a memetic meta-heuristic, namely, the SFLA, which is characterized by a powerful capability for global search and quick convergence, as the evolutionary strategy, and a simple and effective ε-indicator as the fitness assignment scheme to conduct the search procedure. Experimental results, in comparison with other representative indicator-based MOEAs and traditional Pareto-based MOEAs on several standard test problems with up to 50 objectives, show that ε-MOSFLA is the best algorithm for solving many-objective optimization problems in terms of both solution quality and speed of convergence.
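The additive ε-indicator at the heart of such fitness assignment has a compact definition. A sketch of the indicator itself for minimisation problems (not of the full ε-MOSFLA fitness assignment, which applies it between individuals of the population):

```python
def eps_indicator(A, B):
    """Additive epsilon-indicator I_eps(A, B): the smallest shift eps
    such that every objective vector in B is weakly dominated by some
    vector of A translated by eps. Negative values mean A is strictly
    better than B everywhere."""
    return max(
        min(max(a_i - b_i for a_i, b_i in zip(a, b)) for a in A)
        for b in B
    )
```

For example, a set that dominates another by 1 in every objective has indicator -1 against it, while two mutually non-dominated points give a positive value.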
Congcong Li; Jie Wang; Lei Wang; Luanyun Hu; Peng Gong
2014-01-01
Although a large number of new image classification algorithms have been developed, they are rarely tested with the same classification task. In this research, with the same Landsat Thematic Mapper (TM) data set and the same classification scheme over Guangzhou City, China, we tested two unsupervised and 13 supervised classification algorithms, including a number of machine learning algorithms that became popular in remote sensing during the past 20 years. Our analysis focused primarily on ...
Bein, Berthold; Gruenewald, Matthias; Masing, Sarah; Huenges, Katharina; Haneya, Assad; Steinfath, Markus; Renner, Jochen
2016-01-01
Objective. Today, there exist several different pulse contour algorithms for calculation of cardiac output (CO). The aim of the present study was to compare the accuracy of nine different pulse contour algorithms with transpulmonary thermodilution before and after cardiopulmonary bypass (CPB). Methods. Thirty patients scheduled for elective coronary surgery were studied before and after CPB. A passive leg raising maneuver was also performed. Measurements included CO obtained by transpulmonary thermodilution (COTPTD) and by nine pulse contour algorithms (COX1–9). Calibration of pulse contour algorithms was performed by esophageal Doppler ultrasound after induction of anesthesia and 15 min after CPB. Correlations, Bland-Altman analysis, four-quadrant, and polar analysis were also calculated. Results. There was only a poor correlation between COTPTD and COX1–9 during passive leg raising and in the period before and after CPB. Percentage error exceeded the required 30% limit. Four-quadrant and polar analysis revealed poor trending ability for most algorithms before and after CPB. The Liljestrand-Zander algorithm revealed the best reliability. Conclusions. Estimation of CO by nine different pulse contour algorithms revealed poor accuracy compared with transpulmonary thermodilution. Furthermore, the less-invasive algorithms showed an insufficient capability for trending hemodynamic changes before and after CPB. The Liljestrand-Zander algorithm demonstrated the highest reliability. This trial is registered with NCT02438228 (ClinicalTrials.gov).
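The agreement statistics named above (Bland-Altman analysis and percentage error) can be sketched as follows; the helper name and the 30% interchangeability threshold mentioned in the comment follow common practice for CO method comparison, and the sample values are hypothetical:

```python
import math

def bland_altman_pe(co_ref, co_test):
    """Bland-Altman bias, 95% limits of agreement, and percentage error
    (PE) for comparing a pulse-contour cardiac output estimate against a
    reference such as transpulmonary thermodilution; PE above ~30% is
    the usual limit for rejecting interchangeability."""
    diffs = [t - r for r, t in zip(co_ref, co_test)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    mean_co = (sum(co_ref) + sum(co_test)) / (2 * n)
    loa = (bias - 1.96 * sd, bias + 1.96 * sd)
    pe = 100.0 * 1.96 * sd / mean_co
    return bias, loa, pe

bias, loa, pe = bland_altman_pe([5.0, 5.0, 5.0, 5.0], [5.5, 4.5, 5.5, 4.5])
```

Trending ability, by contrast, is judged on the direction of paired changes (four-quadrant/polar plots), which these summary statistics do not capture.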
Energy Technology Data Exchange (ETDEWEB)
Kobayashi, Naoko [Univ. of Nebraska, Lincoln, NE (United States); Barnes, Austin [Univ. of Nebraska, Lincoln, NE (United States); Jensen, Travis [Univ. of Nebraska, Lincoln, NE (United States); Noel, Eric [Univ. of Nebraska, Lincoln, NE (United States); Andlay, Gunjan [Synaptic Research, Baltimore, MD (United States); Rosenberg, Julian N. [Johns Hopkins Univ., Baltimore, MD (United States); Betenbaugh, Michael J. [Johns Hopkins Univ., Baltimore, MD (United States); Guarnieri, Michael T. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Oyler, George A. [Univ. of Nebraska, Lincoln, NE (United States); Johns Hopkins Univ., Baltimore, MD (United States); Synaptic Research, Baltimore, MD (United States)
2015-09-01
Chlorella species from the UTEX collection, classified by rDNA-based phylogenetic analysis, were screened for biomass and lipid production in different scales and modes of culture. The lead candidate strains, C. sorokiniana UTEX 1230 and C. vulgaris UTEX 395 and 259, were compared in shake-flask cultivation between vigorous aeration with filtered atmospheric air and 3% CO2. We found that UTEX 1230 produced two-fold higher biomass, at 652 mg L-1 dry weight, under both ambient-CO2 vigorous aeration and 3% CO2 conditions, while UTEX 395 and 259 under 3% CO2 increased to three-fold higher biomass, at 863 mg L-1 dry weight, than under ambient-CO2 vigorous aeration. The triacylglycerol contents of UTEX 395 and 259 increased more than 30-fold, to 30% of dry weight, with 3% CO2, indicating that supplemental CO2 is essential for both biomass and lipid accumulation in UTEX 395 and 259.
Energy Technology Data Exchange (ETDEWEB)
Kapp, Eugene; Schutz, Frederick; Connolly, Lisa M.; Chakel, John A.; Meza, Jose E.; Miller, Christine A.; Fenyo, David; Eng, Jimmy K.; Adkins, Joshua N.; Omenn, Gilbert; Simpson, Richard
2005-08-01
MS/MS and associated database search algorithms are essential proteomic tools for identifying peptides. Given their widespread use, it is now time to perform a systematic analysis of the various algorithms currently in use. Using blood specimens from the HUPO Plasma Proteome Project, we evaluated five search algorithms with respect to their sensitivity and specificity, and also accurately benchmarked them based on specified false-positive (FP) rates. Spectrum Mill and SEQUEST performed well in terms of sensitivity, but were inferior to MASCOT, X-Tandem, and Sonar in terms of specificity. Overall, MASCOT, a probabilistic search algorithm, correctly identified the most peptides at a specified FP rate. The rescoring algorithm Peptide Prophet enhanced the overall performance of the SEQUEST algorithm and provided predictable FP error rates. Ideally, score thresholds should be calculated for each peptide spectrum or, minimally, derived from a reversed-sequence search, as demonstrated in this study on a validated data set. The availability of open-source search algorithms, such as X-Tandem, makes it feasible to further improve the validation process (manual or automatic) on the basis of "consensus scoring", i.e., the use of multiple (at least two) search algorithms to reduce the number of FPs.
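The reversed-sequence idea can be sketched compactly: hits against the reversed ("decoy") database model random matching, so the decoy count above a score threshold estimates how many target hits at that threshold are false. A generic sketch with hypothetical scores, not any search engine's actual scoring:

```python
def decoy_fp_rate(target_scores, decoy_scores, threshold):
    """Reversed-sequence estimate of the false-positive rate: the number
    of decoy hits scoring at or above the threshold, divided by the
    number of target hits at or above the same threshold."""
    targets = sum(1 for s in target_scores if s >= threshold)
    decoys = sum(1 for s in decoy_scores if s >= threshold)
    return decoys / targets if targets else 0.0

rate = decoy_fp_rate([10, 20, 30, 40], [5, 15], threshold=12)
```

Sweeping the threshold until this rate drops to the specified FP level is one way to derive the per-dataset score cutoffs the abstract recommends.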
DEFF Research Database (Denmark)
Sossan, Fabrizio; Bindner, Henrik W.
2012-01-01
… responses provided by three algorithms for controlling electric space heating through a broadcast price signal are compared. The algorithms have been tested in a software platform with a population of buildings, using a hardware-in-the-loop approach that allows feeding back into the simulation the thermal …
Indian Academy of Sciences (India)
P Chitra; P Venkatesh; R Rajaram
2011-04-01
The task scheduling problem in heterogeneous distributed computing systems (HDCS) is a multiobjective optimization problem (MOP). In an HDCS there is a possibility of processor and network failures, and this affects the applications running on the HDCS. To reduce the impact of failures on an application running on an HDCS, scheduling algorithms must be devised which minimize not only the schedule length (makespan) but also the failure probability of the application (reliability). These objectives conflict, and it is not possible to minimize both at the same time; scheduling algorithms are therefore needed that account for both schedule length and failure probability. Multiobjective evolutionary computation algorithms (MOEAs) are well-suited for multiobjective task scheduling in heterogeneous environments. Two such algorithms, a Multiobjective Genetic Algorithm (MOGA) and Multiobjective Evolutionary Programming (MOEP), both with non-dominated sorting, are developed and compared on various random task graphs and on a real-time numerical application graph. Metrics for evaluating the convergence and diversity of the non-dominated solutions obtained by the two algorithms are reported. The simulation results confirm that the proposed algorithms can solve the task scheduling problem at reduced computational times compared to the weighted-sum-based biobjective algorithm in the literature.
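The non-dominated sorting step both algorithms rely on reduces to a Pareto-dominance test over (makespan, failure probability) pairs. A minimal sketch with hypothetical schedule values (extraction of the first front only; full non-dominated sorting repeats this on the remainder):

```python
def dominates(a, b):
    """a dominates b when it is no worse in every objective and strictly
    better in at least one (both makespan and failure probability are
    minimised here)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def first_front(points):
    """The non-dominated front: points no other point dominates."""
    return [p for p in points if not any(dominates(q, p) for q in points)]

# (makespan, failure probability) of four hypothetical schedules
schedules = [(3.0, 0.10), (2.0, 0.20), (4.0, 0.05), (3.0, 0.20)]
front = first_front(schedules)
```

Here (3.0, 0.20) is dominated by (3.0, 0.10) (same makespan, lower failure probability), while the other three trade the objectives off against each other and all survive.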
Matsuda, Kiku; Chaudhari, Atul A; Lee, John Hwa
2011-09-01
We evaluated a recently developed live vaccine candidate for fowl typhoid (FT), JOL916, a lon/cpxR mutant of Salmonella Gallinarum (SG), by comparing its safety and efficacy with those of the well-known rough mutant strain SG9R vaccine in 6-wk-old Hy-Line hens. Forty-five chickens were divided into three groups of 15 chickens each. The chickens were then intramuscularly inoculated with 2 x 10^7 colony-forming units (CFUs) of JOL916 (JOL916 group), 2 x 10^7 CFUs of SG9R (SG9R group), or phosphate-buffered saline (control group). After vaccination, no clinical symptoms were observed in any of the groups. No differences in body weight increase were detected among the three groups postvaccination. A cellular immune response was observed at 2 wk postvaccination (wpv) in the JOL916 group with the peripheral lymphocyte proliferation assay, whereas no response was detected in the SG9R group. Elevation of SG antigen-specific plasma immunoglobulin was observed at 2 and 3 wpv in the JOL916 and SG9R vaccine groups, respectively. After virulent challenge on day 25 postvaccination, 0, 1, and 15 chickens in the JOL916 group, SG9R group, and control group, respectively, had died by 12 days postchallenge; the death rate of the SG9R vaccine group was statistically similar to that of the JOL916 group. Postmortem examination revealed that the JOL916 vaccine offered more efficient protection than the SG9R vaccine, with significantly decreased hepatic necrotic foci scores, splenic enlargement scores, and necrotic foci scores, and reduced recovery of the challenge strain from the spleen. Vaccination with JOL916 appears to be safe and offers better protection than SG9R against FT in chickens.
Brewin, Robert J.W.; Sathyendranath, Shubha; Muller, Dagmar; Brockmann, Carsten; Deschamps, Pierre-Yves; Devred, Emmanuel; Doerffer, Roland; Fomferra, Norman; Franz, Bryan; Grant, Mike; Groom, Steve; Horseman, Andrew; Hu, Chuanmin; Krasemann, Hajo; Lee, ZhongPing; Maritorena, Stephane; Melin, Frederic; Peters, Marco; Platt, Trevor; Regner, Peter; Smyth, Tim; Steinmetz, Francois; Swinton, John; Werdell, Jeremy; White, George N., III
2013-01-01
Satellite-derived remote-sensing reflectance (Rrs) can be used for mapping biogeochemically relevant variables, such as the chlorophyll concentration and the Inherent Optical Properties (IOPs) of the water, at global scale for use in climate-change studies. Prior to generating such products, suitable algorithms have to be selected that are appropriate for the purpose. Algorithm selection needs to account for both qualitative and quantitative requirements. In this paper we develop an objective methodology designed to rank the quantitative performance of a suite of bio-optical models. The objective classification is applied using the NASA bio-Optical Marine Algorithm Dataset (NOMAD). Using in situ Rrs as input to the models, the performance of eleven semi-analytical models, as well as five empirical chlorophyll algorithms and an empirical diffuse attenuation coefficient algorithm, is ranked for spectrally-resolved IOPs, chlorophyll concentration and the diffuse attenuation coefficient at 489 nm. The sensitivity of the objective classification and the uncertainty in the ranking are tested using a Monte-Carlo approach (bootstrapping). Results indicate that the performance of the semi-analytical models varies depending on the product and wavelength of interest. For chlorophyll retrieval, empirical algorithms perform better than semi-analytical models, in general. The performance of these empirical models reflects either their immunity to scale errors or instrument noise in Rrs data, or simply that the data used for model parameterisation were not independent of NOMAD. Nonetheless, uncertainty in the classification suggests that the performance of some semi-analytical algorithms at retrieving chlorophyll is comparable with that of the empirical algorithms. For phytoplankton absorption at 443 nm, some semi-analytical models also perform with similar accuracy to an empirical model. We discuss the potential biases, limitations and uncertainty in the approach, as well as additional
Abuhadi, Nouf; Bradley, David; Katarey, Dev; Podolyak, Zsolt; Sassi, Salem
2014-03-01
Introduction: Single-Photon Emission Computed Tomography (SPECT) is used to measure and quantify radiopharmaceutical distribution within the body. The accuracy of quantification depends on acquisition parameters and reconstruction algorithms. Until recently, most SPECT images were reconstructed using filtered back projection techniques with no attenuation or scatter corrections. The introduction of 3-D iterative reconstruction algorithms, with the availability of both computed tomography (CT)-based attenuation correction and scatter correction, may provide more accurate measurement of radiotracer bio-distribution. The effect of attenuation and scatter corrections on the accuracy of SPECT measurements is well researched, and it has been suggested that the combination of CT-based attenuation correction and scatter correction allows for more accurate quantification of radiopharmaceutical distribution in SPECT studies (Bushberg et al., 2012). However, the effect of respiratory-induced cardiac motion on SPECT images acquired using higher-resolution algorithms, such as 3-D iterative reconstruction with attenuation and scatter corrections, has not been investigated. Aims: To investigate the quantitative accuracy of 3-D iterative reconstruction algorithms in comparison to filtered back projection (FBP) methods implemented in cardiac SPECT/CT imaging, with and without CT attenuation and scatter corrections; to investigate the effects of respiratory-induced cardiac motion on myocardial perfusion quantification; and to present a comparison of spatial resolution for FBP and ordered subset expectation maximization (OSEM) Flash 3D, with and without respiratory-induced motion and with and without attenuation and scatter correction. Methods: This study was performed on a Siemens Symbia T16 SPECT/CT system using clinical acquisition protocols. Respiratory-induced cardiac motion was simulated by imaging a cardiac phantom insert whilst moving it using a respiratory motion motor
Knox, James; Gregory, Claire; Prendergast, Louise; Perera, Chandrika; Robson, Jennifer; Waring, Lynette
2017-01-01
Stool specimens spiked with a panel of 46 carbapenemase-producing Enterobacteriaceae (CPE) and 59 non-carbapenemase producers were used to compare the diagnostic accuracy of 4 testing algorithms for the detection of intestinal carriage of CPE: (1) culture on Brilliance ESBL agar followed by the Carba NP test; (2) Brilliance ESBL followed by the Carba NP test, plus chromID OXA-48 agar with no Carba NP test; (3) chromID CARBA agar followed by the Carba NP test; (4) chromID CARBA followed by the Carba NP test, plus chromID OXA-48 with no Carba NP test. All algorithms were 100% specific. When comparing algorithms (1) and (3), Brilliance ESBL agar followed by the Carba NP test was significantly more sensitive than the equivalent chromID CARBA algorithm at the lower of 2 inoculum strengths tested (84.8% versus 63.0%, respectively [P…]). … algorithms was marginally increased.
Obuchowski, Nancy A; Barnhart, Huiman X; Buckler, Andrew J; Pennello, Gene; Wang, Xiao-Feng; Kalpathy-Cramer, Jayashree; Kim, Hyun J Grace; Reeves, Anthony P
2015-02-01
Quantitative imaging biomarkers are being used increasingly in medicine to diagnose and monitor patients' disease. The computer algorithms that measure quantitative imaging biomarkers have different technical performance characteristics. In this paper we illustrate the appropriate statistical methods for assessing and comparing the bias, precision, and agreement of computer algorithms. We use data from three studies of pulmonary nodules. The first study is a small phantom study used to illustrate metrics for assessing repeatability. The second study is a large phantom study allowing assessment of four algorithms' bias and reproducibility for measuring tumor volume and the change in tumor volume. The third study is a small clinical study of patients whose tumors were measured on two occasions. This study allows a direct assessment of six algorithms' performance for measuring tumor change. With these three examples we compare and contrast study designs and performance metrics, and we illustrate the advantages and limitations of various common statistical methods for quantitative imaging biomarker studies.
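Repeatability, the metric assessed in the first phantom study, is conventionally summarized by the within-subject standard deviation and the repeatability coefficient. A sketch of those two quantities, pooled over replicate measurements of each nodule (function name and data are illustrative, not the paper's):

```python
import math

def repeatability(replicates):
    """Within-subject standard deviation (wSD) and repeatability
    coefficient (RC = 1.96 * sqrt(2) * wSD, about 2.77 * wSD): two
    repeat measurements of the same nodule are expected to differ by
    less than RC in roughly 95% of cases."""
    ss, dof = 0.0, 0
    for reps in replicates:          # one list of repeat values per nodule
        mean = sum(reps) / len(reps)
        ss += sum((x - mean) ** 2 for x in reps)
        dof += len(reps) - 1
    wsd = math.sqrt(ss / dof)
    return wsd, 2.77 * wsd

# two hypothetical nodules, each measured twice (e.g. volume in mm^3)
wsd, rc = repeatability([[10.0, 12.0], [20.0, 22.0]])
```

Reproducibility and agreement between different algorithms, also discussed in the paper, additionally require bias terms and concordance measures beyond this single-algorithm statistic.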
Chen, R C; Rigon, L; Longo, R
2013-03-25
Phase retrieval is a technique for extracting quantitative phase information from X-ray propagation-based phase-contrast tomography (PPCT). In this paper, the performance of different single-distance phase retrieval algorithms is investigated: the phase-attenuation duality Born algorithm (PAD-BA), the phase-attenuation duality Rytov algorithm (PAD-RA), the phase-attenuation duality modified Bronnikov algorithm (PAD-MBA), the phase-attenuation duality Paganin algorithm (PAD-PA) and the phase-attenuation duality Wu algorithm (PAD-WA). All are based on the phase-attenuation duality property and on weak absorption of the sample, and all employ only single-distance PPCT data. They are investigated via simulated noise-free PPCT data, considering the fulfillment of the PAD property and the weakly absorbing condition; with experimental PPCT data of a mixture sample containing absorbing and weakly absorbing materials; and with data of a polymer sample, considering different degrees of statistical and structural noise. The simulations show that all algorithms can quantitatively reconstruct the 3D refractive index of a quasi-homogeneous weakly absorbing object from noise-free PPCT data. When the weakly absorbing condition is violated, PAD-RA and PAD-PA/WA obtain better results than PAD-BA and PAD-MBA, as shown in both the simulation and the mixture-sample results. When statistical noise is considered, the contrast-to-noise ratio values decrease as the photon number is reduced. The structural noise study shows that the result is progressively corrupted by ring-like artifacts as the structural noise (i.e. phantom thickness) increases. PAD-RA and PAD-PA/WA attain better density resolution than PAD-BA and PAD-MBA in both the statistical and structural noise studies.
Indian Academy of Sciences (India)
Sachin Vrajlal Rajani; Vivek J Pandya
2015-02-01
Solar energy is a clean, green and renewable source of energy, available in abundance in nature. Solar cells, by photovoltaic action, convert solar energy into electric current. The output power of a solar cell depends on factors such as solar irradiation (insolation), temperature and other climatic conditions. Present commercial efficiency of solar cells is no greater than 15%, so the available output must be exploited to the maximum possible extent, and maximum power point tracking (MPPT), applied to the solar array with the aid of power electronics, makes this possible. Many algorithms have been proposed to realize maximum power point tracking, each with its own merits and limitations. In this paper, an attempt is made to understand the basic functionality of the two most popular algorithms, viz. the Perturb and Observe (P&O) algorithm and the Incremental Conductance algorithm. These algorithms are compared by simulating a 100 kW solar power generating station connected to the grid. MATLAB M-files are generated to understand MPPT and its dependency on insolation and temperature, and MATLAB Simulink software is used to simulate the MPPT systems. Simulation results are presented to verify these assumptions.
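The P&O logic is simple enough to sketch in a few lines: perturb the array voltage, observe the power, keep going while power rises and turn back when it falls. The function name, step size and sample values are illustrative, not the paper's MATLAB implementation:

```python
def perturb_and_observe(v, p, v_prev, p_prev, step=0.5):
    """One Perturb and Observe (P&O) iteration. Returns the next
    voltage reference for the converter."""
    moved_up = v >= v_prev
    if p >= p_prev:
        direction = 1.0 if moved_up else -1.0   # power rose: keep going
    else:
        direction = -1.0 if moved_up else 1.0   # power fell: turn back
    return v + direction * step

# power rose after an upward perturbation, so keep increasing the voltage
v_next = perturb_and_observe(v=10.5, p=101.0, v_prev=10.0, p_prev=100.0)
```

Incremental Conductance instead compares dI/dV against -I/V (they are equal at the maximum power point), which is what lets it stop perturbing under steady insolation rather than oscillating the way P&O does.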
Kamel, Laurent; Tang, Nianwu; Malbreil, Mathilde; San Clemente, Hélène; Le Marquer, Morgane; Roux, Christophe; Frei dit Frey, Nicolas
2017-01-01
Arbuscular mycorrhizal fungi (AMF), belonging to the fungal phylum Glomeromycota, form mutualistic symbioses with roots of almost 80% of land plants. The release of genomic data from the ubiquitous AMF Rhizophagus irregularis revealed that this species possesses a large set of putative secreted proteins (RiSPs) that could be of major importance for establishing the symbiosis. In the present study, we aimed to identify SPs involved in the establishment of AM symbiosis based on comparative gene expression analyses. We first curated the secretome of the R. irregularis DAOM 197198 strain based on two available genomic assemblies. Then we analyzed the expression patterns of the putative RiSPs obtained from the fungus in symbiotic association with three phylogenetically distant host plants—a monocot, a dicot and a liverwort—in comparison with non-symbiotic stages. We found that 33 out of 84 RiSPs induced in planta were commonly up-regulated in these three hosts. Most of these common RiSPs are small proteins of unknown function that may represent putative host non-specific effector proteins. We further investigated the expressed secretome of Gigaspora rosea, an AM fungal species phylogenetically distant from R. irregularis. G. rosea also presents original symbiotic features, a narrower host spectrum and a restrictive geographic distribution compared to R. irregularis. Interestingly, when analyzing up-regulated G. rosea SPs (GrSPs) in different hosts, a higher ratio of host-specific GrSPs was found compared to RiSPs. Such difference of expression patterns may mirror the restrained host spectrum of G. rosea compared to R. irregularis. Finally, we identified a set of conserved SPs, commonly up-regulated by both fungi in all hosts tested, that could correspond to common keys of AMF to colonize host plants. Our data thus highlight the specificities of two distant AM fungi and help in understanding their conserved and specific strategies to invade different hosts.
Independent candidates in Mexico
Campos, Gonzalo Santiago
2014-01-01
In this paper we discuss the issue of independent candidates in Mexico: the so-called political reform of 2012 incorporated into the Political Constitution of the United Mexican States the right of citizens to register as independent candidates. In September 2013, Article 116 of the Constitution was further reformed in order to allow independent candidates in each state of the Republic. However, prior to the constitutio...
Cunliffe, Alexandra R; White, Bradley; Justusson, Julia; Straus, Christopher; Malik, Renuka; Al-Hallaq, Hania A; Armato, Samuel G
2015-12-01
We evaluated the image registration accuracy achieved using two deformable registration algorithms when radiation-induced normal tissue changes were present between serial computed tomography (CT) scans. Two thoracic CT scans were collected for each of 24 patients who underwent radiation therapy (RT) treatment for lung cancer, eight of whom experienced radiologically evident normal tissue damage between pre- and post-RT scan acquisition. For each patient, 100 landmark point pairs were manually placed in anatomically corresponding locations between each pre- and post-RT scan. Each post-RT scan was then registered to the pre-RT scan using (1) the Plastimatch demons algorithm and (2) the Fraunhofer MEVIS algorithm. The registration accuracy for each scan pair was evaluated by comparing the distance between landmark points that were manually placed in the post-RT scans and points that were automatically mapped from pre- to post-RT scans using the displacement vector fields output by the two registration algorithms. For both algorithms, the registration accuracy was significantly decreased when normal tissue damage was present in the post-RT scan. Using the Plastimatch algorithm, registration accuracy was 2.4 mm, on average, in the absence of radiation-induced damage and 4.6 mm, on average, in the presence of damage. When the Fraunhofer MEVIS algorithm was instead used, registration errors decreased to 1.3 mm, on average, in the absence of damage and 2.5 mm, on average, when damage was present. This work demonstrated that the presence of lung tissue changes introduced following RT treatment for lung cancer can significantly decrease the registration accuracy achieved using deformable registration.
Directory of Open Access Journals (Sweden)
M O Qutub
2011-01-01
Purpose: To evaluate the usefulness of applying either the two-step algorithm (Ag-EIA and CCNA) or the three-step algorithm (all three assays) for better confirmation of toxigenic Clostridium difficile. The antigen enzyme immunoassay (Ag-EIA) can accurately identify the glutamate dehydrogenase antigen of toxigenic and nontoxigenic Clostridium difficile. It is therefore used in combination with a toxin-detecting assay [the cell line culture neutralization assay (CCNA) or the enzyme immunoassay for toxins A and B (TOX-A/BII EIA)] to provide specific evidence of Clostridium difficile-associated diarrhoea. Materials and Methods: A total of 151 nonformed stool specimens were tested by Ag-EIA, TOX-A/BII EIA, and CCNA. All tests were performed according to the manufacturer's instructions, and the results of Ag-EIA and TOX-A/BII EIA were read using a spectrophotometer at a wavelength of 450 nm. Results: A total of 61 (40.7%), 38 (25.3%), and 52 (34.7%) specimens tested positive with Ag-EIA, TOX-A/BII EIA, and CCNA, respectively. Overall, the sensitivity, specificity, negative predictive value, and positive predictive value for Ag-EIA were 94%, 87%, 96.6%, and 80.3%, respectively, whereas for TOX-A/BII EIA they were 73.1%, 100%, 87.5%, and 100%. With the two-step algorithm, all 61 Ag-EIA-positive cases required 2 days for confirmation. With the three-step algorithm, 37 (60.7%) cases were reported immediately, and the remaining 24 (39.3%) required further testing by CCNA. By applying the two-step algorithm, the workload and cost could be reduced by 28.2% compared with the three-step algorithm. Conclusions: The two-step algorithm is the most practical for accurately detecting toxigenic Clostridium difficile, but it is time-consuming.
Nieuwlaat, Robby; Hubers, Lowiek M; Spyropoulos, Alex C; Eikelboom, John W; Connolly, Benjamin J; Van Spall, Harriette G C; Schulze, Karleen M; Cuddy, Spencer M; Stehouwer, Alexander C; Schulman, Sam; Connolly, Stuart J
2012-12-01
Excellent control of the international normalised ratio (INR) is associated with improved clinical outcomes in patients receiving warfarin, and can be achieved by anticoagulation clinics but is difficult in general practice. Anticoagulation clinics have often used validated commercial computer systems to manage the INR, but these are not usually available to general practitioners. The objective of this study was to perform a randomised trial of a simple one-step warfarin dosing algorithm against a widely used computerised dosing system. During the period of introduction of a commercial computerised warfarin dosing system (DAWN AC) to an anticoagulation clinic, patients were randomised to have warfarin dose adjustment done according to the recommendations of either the existing warfarin dosing algorithm or the computerised system. The study tested whether the computerised system was non-inferior to the existing algorithm for the primary outcome of time in therapeutic INR range of 2.0-3.0 (TTR), with a one-sided non-inferiority margin of 4.5%. There were 541 patients randomised to the commercial computerised system and 527 to the algorithm. Median follow-up was 159 days. A dose recommendation was provided and followed on 91% of occasions for the computerised system and on 90% for the algorithm (p=0.03). The mean TTR was 71.0% (standard deviation [SD] 23.2) for the computerised system and 71.9% (SD 22.9) for the algorithm (difference 0.9% [95% confidence interval: -1.4% to 4.1%]; p-value for non-inferiority=0.002; p-value for superiority=0.34). In conclusion, similar maintenance control of the INR was achieved with a simple one-step dosing algorithm and a commercial computerised management system.
Palmer, Grant; Venkatapathy, Ethiraj
1993-01-01
Three solution algorithms, explicit under-relaxation, point implicit, and lower-upper symmetric Gauss-Seidel (LUSGS), are used to compute nonequilibrium flow around the Apollo 4 return capsule at 62 km altitude. By varying the Mach number, the efficiency and robustness of the solution algorithms were tested for different levels of chemical stiffness. The performance of the solution algorithms degraded as the Mach number and stiffness of the flow increased. At Mach 15, 23, and 30, the LUSGS method produces an eight-order-of-magnitude drop in the L2 norm of the energy residual in 1/3 to 1/2 the Cray C-90 computer time compared to the point implicit and explicit under-relaxation methods. The explicit under-relaxation algorithm experienced convergence difficulties at Mach 23 and above. At Mach 40 the performance of the LUSGS algorithm deteriorates to the point that it is outperformed by the point implicit method. The effects of the viscous terms are investigated. Grid dependency questions are explored.
Energy Technology Data Exchange (ETDEWEB)
Yin, Jiandong; Yang, Jiawen; Guo, Qiyong [Shengjing Hospital of China Medical University, Department of Radiology, Shenyang (China)
2015-05-01
Arterial input function (AIF) plays an important role in the quantification of cerebral hemodynamics. The purpose of this study was to select the best reproducible clustering method for AIF detection by comparing three algorithms reported previously in terms of detection accuracy and computational complexity. First, three reproducible clustering methods, normalized cut (Ncut), hierarchy (HIER), and fast affine propagation (FastAP), were applied independently to simulated data which contained the true AIF. Next, a clinical verification was performed where 42 subjects participated in dynamic susceptibility contrast MRI (DSC-MRI) scanning. The manual AIF and AIFs based on the different algorithms were obtained. The performance of each algorithm was evaluated based on shape parameters of the estimated AIFs and the true or manual AIF. Moreover, the execution time of each algorithm was recorded to determine the algorithm that operated more rapidly in clinical practice. In terms of the detection accuracy, Ncut and HIER method produced similar AIF detection results, which were closer to the expected AIF and more accurate than those obtained using FastAP method; in terms of the computational efficiency, the Ncut method required the shortest execution time. Ncut clustering appears promising because it facilitates the automatic and robust determination of AIF with high accuracy and efficiency. (orig.)
Burt, Adam O.; Tinker, Michael L.
2014-01-01
In this paper, genetic algorithm based and gradient-based topology optimization is presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods that prove to provide major weight savings by addressing the structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely-used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and therefore were proven to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.
Directory of Open Access Journals (Sweden)
Jeng-Fung Chen
2014-10-01
Predicting student academic performance with high accuracy facilitates admission decisions and enhances educational services at educational institutions. This raises the need for a model that predicts student performance based on the results of standardized exams, including university entrance exams, high school graduation exams, and other influential factors. In this study, an approach based on the artificial neural network (ANN) and two meta-heuristic algorithms inspired by cuckoo birds and their lifestyle, namely Cuckoo Search (CS) and the Cuckoo Optimization Algorithm (COA), is proposed. In particular, we used previous exam results and other factors, such as the location of the student's high school and the student's gender, as input variables to predict student academic performance. The standard CS and standard COA were separately utilized to train the feed-forward network for prediction. The algorithms optimized the weights between layers and the biases of the neural network. The simulation results were then discussed and analyzed to investigate the prediction ability of the neural network trained by these two algorithms. The findings demonstrated that both CS and COA have potential in training ANNs, and ANN-COA obtained slightly better results for predicting student academic performance in this case. It is expected that this work may be used to support student admission procedures and strengthen the service system in educational institutions.
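The cuckoo-search step described in this record can be sketched in isolation. Below is a minimal, illustrative implementation of standard CS (Lévy flights via Mantegna's algorithm), minimizing a simple test function as a stand-in for the ANN training loss; all function names and parameter values are mine, not the authors'.

```python
import math
import random

def cuckoo_search(f, dim, n_nests=15, iters=200, pa=0.25, seed=1):
    # Minimal standard Cuckoo Search: Lévy-flight moves scaled by the
    # distance to the best nest, greedy replacement of a random nest,
    # and abandonment of a fraction pa of the worst nests each iteration.
    rng = random.Random(seed)
    beta = 1.5
    # Mantegna's algorithm for heavy-tailed (Lévy-stable) step lengths
    sigma = (math.gamma(1 + beta) * math.sin(math.pi * beta / 2)
             / (math.gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    nests = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_nests)]
    fit = [f(n) for n in nests]
    best = min(range(n_nests), key=lambda i: fit[i])
    for _ in range(iters):
        for i in range(n_nests):
            step = [rng.gauss(0, sigma) / abs(rng.gauss(0, 1)) ** (1 / beta)
                    for _ in range(dim)]
            cand = [nests[i][d] + 0.01 * step[d] * (nests[i][d] - nests[best][d])
                    for d in range(dim)]
            fc = f(cand)
            j = rng.randrange(n_nests)   # compare against a randomly chosen nest
            if fc < fit[j]:
                nests[j], fit[j] = cand, fc
        # abandon the worst nests and re-lay them at random positions
        for i in sorted(range(n_nests), key=lambda i: fit[i],
                        reverse=True)[:int(pa * n_nests)]:
            nests[i] = [rng.uniform(-5, 5) for _ in range(dim)]
            fit[i] = f(nests[i])
        best = min(range(n_nests), key=lambda i: fit[i])
    return nests[best], fit[best]

# toy objective standing in for the network's prediction error
sphere = lambda x: sum(v * v for v in x)
best_x, best_f = cuckoo_search(sphere, dim=3)
```

In the study, `f` would evaluate the feed-forward network's prediction error for a candidate weight/bias vector; here a sphere function keeps the sketch self-contained.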
Xu, Beijie; Recker, Mimi; Qi, Xiaojun; Flann, Nicholas; Ye, Lei
2013-01-01
This article examines clustering as an educational data mining method. In particular, two clustering algorithms, the widely used K-means and the model-based Latent Class Analysis, are compared, using usage data from an educational digital library service, the Instructional Architect (IA.usu.edu). Using a multi-faceted approach and multiple data…
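For reference, the K-means algorithm compared in this record is Lloyd's two-step iteration (assign each point to its nearest center, then move each center to its group's mean). A minimal sketch on toy 2-D data; the data and names are mine, not from the study:

```python
def kmeans(points, centers, iters=20):
    # Lloyd's algorithm: alternate assignment and center-update steps.
    groups = [[] for _ in centers]
    for _ in range(iters):
        groups = [[] for _ in centers]
        for p in points:
            # assignment step: each point joins its nearest center
            i = min(range(len(centers)),
                    key=lambda c: sum((a - b) ** 2 for a, b in zip(p, centers[c])))
            groups[i].append(p)
        # update step: move each center to the mean of its group
        centers = [[sum(col) / len(g) for col in zip(*g)] if g else list(centers[gi])
                   for gi, g in enumerate(groups)]
    return centers, groups

# two well-separated toy clusters
points = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
centers, groups = kmeans(points, [(0, 0), (9, 9)])
```

Latent Class Analysis, by contrast, is a probabilistic model fit by maximum likelihood, so cluster membership is soft rather than the hard assignment shown here.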
Directory of Open Access Journals (Sweden)
Kekana M.C
2015-09-01
In this paper, Volterra integro-differential equations are solved using the Adomian decomposition method. The solutions are obtained in the form of infinite series and compared to the Runge-Kutta 4 (RK4) algorithm. The technique is described and illustrated with examples; numerical results are also presented graphically. The software used in this study is Mathematica 10.
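As a point of reference for the RK4 side of such a comparison, a Volterra integro-differential equation with a simple kernel can be reduced to an ODE system and advanced with classic fourth-order Runge-Kutta. A sketch under that assumption (the specific test equation and names are mine, not from the paper):

```python
import math

def rk4_system(rhs, y0, t0, t1, n):
    # classic fourth-order Runge-Kutta for a first-order system y' = rhs(t, y)
    h = (t1 - t0) / n
    t, y = t0, list(y0)
    for _ in range(n):
        k1 = rhs(t, y)
        k2 = rhs(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k1)])
        k3 = rhs(t + h/2, [yi + h/2 * ki for yi, ki in zip(y, k2)])
        k4 = rhs(t + h, [yi + h * ki for yi, ki in zip(y, k3)])
        y = [yi + h/6 * (a + 2*b + 2*c + d)
             for yi, a, b, c, d in zip(y, k1, k2, k3, k4)]
        t += h
    return y

# y'(t) = 1 - \int_0^t y(s) ds with y(0) = 0 has exact solution y = sin(t);
# letting z(t) = \int_0^t y ds turns it into the ODE system y' = 1 - z, z' = y.
rhs = lambda t, y: [1 - y[1], y[0]]
y_end, _ = rk4_system(rhs, [0.0, 0.0], 0.0, 1.0, 100)
```

The Adomian decomposition method would instead build the solution as a series of recursively computed terms; the RK4 result above provides the numerical benchmark.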
Energy Technology Data Exchange (ETDEWEB)
Kamph, Jerome Henri; Robinson, Darren; Wetter, Michael
2009-09-01
There is an increasing interest in the use of computer algorithms to identify combinations of parameters which optimise the energy performance of buildings. For such problems, the objective function can be multi-modal and needs to be approximated numerically using building energy simulation programs. As these programs contain iterative solution algorithms, they introduce discontinuities in the numerical approximation to the objective function. Metaheuristics often work well for such problems, but their convergence to a global optimum cannot be established formally. Moreover, different algorithms tend to be suited to particular classes of optimization problems. To shed light on this issue we compared the performance of two metaheuristics, the hybrid CMA-ES/HDE and the hybrid PSO/HJ, in minimizing standard benchmark functions and real-world building energy optimization problems of varying complexity. From this we find that the CMA-ES/HDE performs well on more complex objective functions, but that the PSO/HJ more consistently identifies the global minimum for simpler objective functions. Both identified similar values in the objective functions arising from energy simulations, but with different combinations of model parameters. This may suggest that the objective function is multi-modal. The algorithms also correctly identified some non-intuitive parameter combinations that were caused by a simplified control sequence of the building energy system that does not represent actual practice, further reinforcing their utility.
Ohazulike, A.E.; Brands, T.
2013-01-01
Genetic algorithms (GAs) are widely accepted by researchers as a method of solving multi-objective optimization problems (MOPs), at least for listing a high quality approximation of the Pareto front of a MOP. In traffic management, it has been long established that tolls can be used to optimally dis
Novel hybrid genetic algorithm for progressive multiple sequence alignment.
Afridi, Muhammad Ishaq
2013-01-01
The family of evolutionary or genetic algorithms is used in various fields of bioinformatics. Genetic algorithms (GAs) can be used for simultaneous comparison of a large pool of DNA or protein sequences. This article explains how the GA is used in combination with other methods like the progressive multiple sequence alignment strategy to get an optimal multiple sequence alignment (MSA). Optimal MSA get much importance in the field of bioinformatics and some other related disciplines. Evolutionary algorithms evolve and improve their performance. In this optimisation, the initial pair-wise alignment is achieved through a progressive method and then a good objective function is used to select and align more alignments and profiles. Child and subpopulation initialisation is based upon changes in the probability of similarity or the distance matrix of the alignment population. In this genetic algorithm, optimisation of mutation, crossover and migration in the population of candidate solution reflect events of natural organic evolution.
Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.
2016-03-01
The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decisionmaking regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically-relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracy of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care
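The evaluation protocol used in this record, k-fold cross-validation of classifiers against a ground-truth database, can be sketched generically. Below is a minimal KNN classifier with interleaved folds on hypothetical separable data; it is illustrative only, not the authors' MSI pipeline.

```python
def knn_predict(train, labels, x, k=3):
    # majority vote among the k nearest training points (squared Euclidean)
    order = sorted(range(len(train)),
                   key=lambda i: sum((a - b) ** 2 for a, b in zip(train[i], x)))
    votes = [labels[i] for i in order[:k]]
    return max(set(votes), key=votes.count)

def cross_val_accuracy(X, y, folds=5, k=3):
    # k-fold cross-validation with simple interleaved fold assignment
    n = len(X)
    correct = 0
    for f in range(folds):
        test_idx = set(range(f, n, folds))
        train = [X[i] for i in range(n) if i not in test_idx]
        labels = [y[i] for i in range(n) if i not in test_idx]
        for i in test_idx:
            if knn_predict(train, labels, X[i], k) == y[i]:
                correct += 1
    return correct / n

# two hypothetical, well-separated tissue classes
X = [(0, 0), (5, 5), (0, 1), (5, 6), (1, 0), (6, 5),
     (1, 1), (6, 6), (0, 2), (5, 7)]
y = [0, 1, 0, 1, 0, 1, 0, 1, 0, 1]
acc = cross_val_accuracy(X, y)
```

In the study, each feature vector would be a multispectral pixel and the labels the six histopathology-derived tissue types; the same fold-and-score loop applies to all eight algorithms compared.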
New focused crawling algorithm
Institute of Scientific and Technical Information of China (English)
Su Guiyang; Li Jianhua; Ma Yinghua; Li Shenghong; Song Juping
2005-01-01
Focused crawling is a new research approach for search engines. It restricts information retrieval to a specific topic area and provides search services within it. The focused crawling search algorithm is a key technique of a focused crawler and directly affects search quality. This paper first introduces several traditional topic-specific crawling algorithms; then an inverse-link-based topic-specific crawling algorithm is put forward. A comparison experiment proves that this algorithm has good performance in recall, clearly better than the traditional Breadth-First and Shark-Search algorithms. The experiment also proves that this algorithm achieves good precision.
Directory of Open Access Journals (Sweden)
A. A. Kokhanovsky
2010-07-01
Remote sensing of aerosol from space is a challenging and typically underdetermined retrieval task, requiring many assumptions to be made with respect to the aerosol and surface models. Therefore, the quality of a priori information plays a central role in any retrieval process (apart from the cloud screening procedure and the forward radiative transfer model, which to be most accurate should include the treatment of light polarization and molecular-aerosol coupling). In this paper the performance of various algorithms with respect to spectral aerosol optical thickness determination from optical spaceborne measurements is studied. The algorithms are based on various types of measurements (spectral, angular, polarization, or some combination of these). It is confirmed that multiangular spectropolarimetric measurements provide more powerful constraints compared to spectral intensity measurements alone, particularly those acquired at a single view angle and which rely on a priori assumptions regarding the particle phase function in the retrieval process.
Khehra, Baljit Singh; Pharwaha, Amar Partap Singh
2016-06-01
Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed by using many features. Selecting an optimal subset of features from a large number of available features is a difficult search problem: for n features, the total number of possible subsets is 2^n, so the selection of an optimal subset of features belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets using the genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from benign and malignant MCC samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier. A support vector machine is used as the classifier. From the experimental results, it is observed that the performance of the PSO-based and BBO-based algorithms in selecting an optimal subset of features for classifying MCCs as benign or malignant is better than that of the GA-based algorithm.
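To make the 2^n search space concrete: a feature subset can be encoded as a bitmask and searched with a small elitist GA instead of exhaustively. The sketch below uses a toy fitness in place of a classifier's accuracy; all names and parameter values are illustrative, not the paper's setup.

```python
import random

def ga_feature_select(n, fitness, pop=20, gens=40, seed=0):
    # Elitist GA over length-n bitmasks: for n features there are 2**n
    # candidate subsets, so exhaustive search quickly becomes infeasible.
    rng = random.Random(seed)
    population = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=fitness, reverse=True)
        parents = population[:pop // 2]          # keep the best half
        children = []
        while len(children) < pop - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)
            child = a[:cut] + b[cut:]            # one-point crossover
            i = rng.randrange(n)
            child[i] ^= rng.random() < 0.1       # occasional bit-flip mutation
            children.append(child)
        population = parents + children
    return max(population, key=fitness)

# toy stand-in for the SVM classification rate: three "informative"
# features are rewarded, every extra selected feature is penalized
informative = {0, 2, 5}
def toy_fitness(mask):
    return sum(mask[i] for i in informative) - 0.5 * sum(
        mask[i] for i in range(len(mask)) if i not in informative)

best = ga_feature_select(8, toy_fitness)
```

In the study, `fitness` would train and score the support vector machine on the 50 extracted MCC features; PSO and BBO explore the same bitmask space with different update rules.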
Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok
2016-01-01
This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293
Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok
2016-05-23
This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level.
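The relative/absolute fusion idea reduces, in the scalar linear case, to an ordinary Kalman filter in which the rate gyro drives the prediction and the absolute attitude fix drives the correction; the paper's EKF/UKF generalize this to nonlinear multi-axis attitude. A minimal sketch with invented noise parameters, not the authors' filter:

```python
def kalman_attitude(gyro, fixes, dt=0.1, q=0.01, r=0.5):
    # Scalar Kalman filter: integrate the relative sensor (gyro) in the
    # predict step, correct with the noisy absolute attitude measurement.
    # q: process noise variance, r: absolute-sensor noise variance.
    theta, p = 0.0, 1.0
    out = []
    for w, z in zip(gyro, fixes):
        theta += w * dt            # predict from the rate gyro
        p += q
        k = p / (p + r)            # Kalman gain
        theta += k * (z - theta)   # correct with the absolute fix
        p *= (1 - k)
        out.append(theta)
    return out

# ideal 1 rad/s gyro; absolute fixes corrupted by alternating +/-0.2 rad noise
gyro = [1.0] * 20
true = [0.1 * (k + 1) for k in range(20)]
fixes = [t + (0.2 if k % 2 == 0 else -0.2) for k, t in enumerate(true)]
est = kalman_attitude(gyro, fixes)
```

The estimate tracks the true ramp far more closely than the raw absolute fixes, which is the point of fusing the two sensor classes.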
Kim, Sung Jin; Kim, Sung Kyu
2015-01-01
Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms provided by our planning system, pencil beam (PB), collapsed cone (CC), and Monte Carlo (MC), were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated using the PB, CC, and MC algorithms. Planning treatment volume and organs at risk (OAR) delineation was performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3 to 0.5 cm computed tomography slices taken under normal respiration conditions. Four intensity-modulated radiation therapy plans were calculated according to each algorithm for each patient. The plans were conducted on the Oncentra MasterPlan and CMS Monaco treatment planning systems, for 6 MV. The plans were compared in terms of the dose distribution in target, OAR volumes, and...
Directory of Open Access Journals (Sweden)
Gilson Alexandre Pinto
2005-06-01
The partial hydrolysis of cheese whey proteins, carried out by enzymes immobilized on an inert support, can alter or bring out functional properties of the polypeptides produced, thus broadening their applications. Controlling the pH of the proteolysis reactor is of fundamental importance for modulating the molecular weight distribution of the peptides formed. The pH and temperature signals used by the control and state-inference algorithm may be subject to considerable noise, making their filtering important. This work presents the results of implementing, in the process monitoring system, an off-line smoothing algorithm that uses penalized least squares for post-treatment of the data. The performance of different algorithms for on-line filtering of the signals used by the control system is also compared: artificial neural networks, moving average, and the aforementioned smoother.
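Penalized least squares smoothing of the kind described here is often written as the Whittaker smoother: minimize ||y - z||^2 + lam * ||Dz||^2, where D is the second-difference operator and lam controls smoothness. A small dense-solver sketch (not the authors' code; lam and the data are illustrative):

```python
def whittaker_smooth(y, lam=10.0):
    # Solve (I + lam * D'D) z = y, where D takes second differences,
    # by building the dense system and using Gaussian elimination.
    n = len(y)
    A = [[0.0] * n for _ in range(n)]
    for r in range(n - 2):
        idx, coef = (r, r + 1, r + 2), (1.0, -2.0, 1.0)
        for i, ci in zip(idx, coef):
            for j, cj in zip(idx, coef):
                A[i][j] += lam * ci * cj
    for i in range(n):
        A[i][i] += 1.0
    # Gaussian elimination with partial pivoting
    b = list(y)
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[piv] = A[piv], A[c]
        b[c], b[piv] = b[piv], b[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n):
                A[r][k] -= f * A[c][k]
            b[r] -= f * b[c]
    z = [0.0] * n
    for r in range(n - 1, -1, -1):
        z[r] = (b[r] - sum(A[r][k] * z[k] for k in range(r + 1, n))) / A[r][r]
    return z

# noisy ramp standing in for a pH signal sampled off-line
y = [i + (1.0 if i % 2 == 0 else -1.0) for i in range(20)]
z = whittaker_smooth(y, lam=10.0)
```

Because z minimizes the penalized objective, its second-difference roughness is strictly smaller than that of the raw signal; in practice the banded structure of the system would be exploited instead of a dense solve.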
Energy Technology Data Exchange (ETDEWEB)
Puchner, Stefan B. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Medical University of Vienna, Department of Biomedical Imaging and Image-Guided Therapy, Vienna (Austria); Ferencik, Maros [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Harvard Medical School, Division of Cardiology, Massachusetts General Hospital, Boston, MA (United States); Maurovich-Horvat, Pal [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Semmelweis University, MTA-SE Lenduelet Cardiovascular Imaging Research Group, Heart and Vascular Center, Budapest (Hungary); Nakano, Masataka; Otsuka, Fumiyuki; Virmani, Renu [CV Path Institute Inc., Gaithersburg, MD (United States); Kauczor, Hans-Ulrich [University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany); Hoffmann, Udo [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Schlett, Christopher L. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany)
2015-01-15
To evaluate whether iterative reconstruction algorithms improve the diagnostic accuracy of coronary CT angiography (CCTA) for detection of lipid-core plaque (LCP) compared to histology. CCTA and histological data were acquired from three ex vivo hearts. CCTA images were reconstructed using filtered back projection (FBP), adaptive-statistical (ASIR) and model-based (MBIR) iterative algorithms. Vessel cross-sections were co-registered between FBP/ASIR/MBIR and histology. Plaque area <60 HU was semiautomatically quantified in CCTA. LCP was defined by histology as fibroatheroma with a large lipid/necrotic core. Area under the curve (AUC) was derived from logistic regression analysis as a measure of diagnostic accuracy. Overall, 173 CCTA triplets (FBP/ASIR/MBIR) were co-registered with histology. LCP was present in 26 cross-sections. Average measured plaque area <60 HU was significantly larger in LCP compared to non-LCP cross-sections (mm²: 5.78 ± 2.29 vs. 3.39 ± 1.68 FBP; 5.92 ± 1.87 vs. 3.43 ± 1.62 ASIR; 6.40 ± 1.55 vs. 3.49 ± 1.50 MBIR; all p < 0.0001). AUC for detecting LCP was 0.803/0.850/0.903 for FBP/ASIR/MBIR and was significantly higher for MBIR compared to FBP (p = 0.01). MBIR increased sensitivity for detection of LCP by CCTA. Plaque area <60 HU in CCTA was associated with LCP in histology regardless of the reconstruction algorithm. However, MBIR demonstrated higher accuracy for detecting LCP, which may improve vulnerable plaque detection by CCTA. (orig.)
DiStasio, Robert J., Jr.; Resmini, Ronald G.
2010-04-01
The in-scene atmospheric compensation (ISAC) algorithm of Young et al. (2002) [14] (and as implemented in the ENVI® software system [16] as 'Thermal Atm Correction') is commonly applied to thermal infrared multi- and hyperspectral imagery (MSI and HSI, respectively). ISAC estimates atmospheric transmissivity and upwelling radiance using only the scene data. The ISAC-derived transmissivity and upwelling radiance are compared to those derived from the emissive empirical line method (EELM), another in-scene atmospheric compensation algorithm for thermal infrared MSI and HSI data. EELM is based on the presence of calibration targets (e.g., panels, water pools) captured in the spectral image data for which the emissivity and temperature are well known at the moment of MSI/HSI data acquisition. EELM is similar in concept to the empirical line method (ELM) algorithm commonly applied to visible/near-infrared to shortwave infrared (VNIR/SWIR) spectral imagery and is implemented as a custom ENVI® plugin application. Both ISAC and EELM are in-scene methods and do not require radiative transfer modeling. ISAC and EELM have been applied to airborne longwave infrared (LWIR; ~7.5 μm to ~13.5 μm) HSI data. Captured in the imagery are calibration panels and/or water pools maintained at different temperatures facilitating the application of EELM. Overall, the atmospheric compensation parameters derived from the two methods are in close agreement: the EELM-derived ground-leaving radiance spectra generally contain fewer residual atmospheric spectral features, although ISAC sometimes produces smoother ground-leaving radiance spectra. Nonetheless, the agreement is viewed as validation of ISAC. ISAC is an effective atmospheric compensation algorithm that is readily available to the remote sensing community in the ENVI® software system. Thus studies such as the present testing and comparing ISAC to other methods are important. The ISAC and EELM algorithms are discussed as are the
Vio, R; Wamsteker, W
2004-01-01
It is well known that the noise associated with the collection of an astronomical image by a CCD camera is, in large part, Poissonian. One would expect, therefore, that computational approaches incorporating this a priori information would be more effective than those that do not. The Richardson-Lucy (RL) algorithm, for example, can be viewed as a maximum-likelihood (ML) method for image deblurring when the data noise is assumed to be Poissonian. Least-squares (LS) approaches, on the other hand, arise from the assumption that the noise is Gaussian with fixed variance across pixels, which is rarely accurate. Given this, it is surprising that in many cases results obtained using LS techniques are relatively insensitive to whether the noise is Poissonian or Gaussian. Furthermore, in the presence of Poisson noise, results obtained using LS techniques are often comparable with those obtained by the RL algorithm. We seek an explanation of these phenomena via an examination of the regularization properties of par...
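The RL iteration mentioned above can be sketched in one dimension. This is the generic textbook multiplicative update (1-D, known PSF, noise-free toy data), not the paper's implementation:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Richardson-Lucy deconvolution, 1-D sketch (Poisson maximum likelihood).

    Multiplicative update: x <- x * correlate(observed / (x * psf), psf),
    which keeps the estimate non-negative for non-negative data.
    """
    psf_flipped = psf[::-1]  # convolution with flipped PSF = correlation
    estimate = np.full_like(observed, observed.mean())
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode='same')
        ratio = observed / np.maximum(blurred, 1e-12)  # guard against /0
        estimate = estimate * np.convolve(ratio, psf_flipped, mode='same')
    return estimate

# Blur a two-spike signal and recover it
truth = np.zeros(64)
truth[20] = 5.0
truth[40] = 3.0
psf = np.array([0.25, 0.5, 0.25])
observed = np.convolve(truth, psf, mode='same')
restored = richardson_lucy(observed, psf, iterations=200)
```

On noise-free data the iteration progressively re-concentrates the blurred spikes; with real Poisson noise one would stop early or regularize.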
Comparison of Chlorophyll-A Algorithms for the Transition Zone Between the North Sea and Baltic Sea
Huber, Silvia; Hansen, Lars B.; Rasmussen, Mads O.; Kaas, Hanne
2015-12-01
Monitoring the water quality of the transition zone between the North Sea and the Baltic Sea from space is still a challenge because of the optically complex waters. The presence of suspended sediments and dissolved substances often interferes with the phytoplankton signal and thus confounds conventional case-1 algorithms developed for the open ocean. Specific calibration to case-2 waters may compensate for this. In this study we compared chlorophyll-a (chl-a) concentrations derived with three different case-2 algorithms, C2R, FUB/WeW, and CoastColour, using MERIS data as a basis. Default C2R and FUB clearly underestimate higher chl-a concentrations. However, with local tuning we could significantly improve the fit with in-situ data. For instance, the root mean square error is reduced by roughly 50%, from 3.06 to 1.6 μg/L, for the calibrated C2R processor as compared to the default C2R. This study is part of the FP7 project AQUA-USERS, which has the overall goal of providing the aquaculture industry with timely information based on satellite data and optical in-situ measurements. One of the products is chlorophyll-a concentration.
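The roughly 50% improvement quoted above is in terms of root mean square error between matched satellite retrievals and in-situ measurements. For reference, a minimal RMSE helper with invented matched pairs (not the study's data):

```python
import numpy as np

def rmse(predicted, observed):
    """Root mean square error between matched chl-a retrievals (µg/L)."""
    predicted = np.asarray(predicted, dtype=float)
    observed = np.asarray(observed, dtype=float)
    return float(np.sqrt(np.mean((predicted - observed) ** 2)))

# Illustrative (invented) matched pairs: in-situ vs. two processor variants
insitu = [2.0, 4.0, 8.0, 12.0]
default = [1.0, 2.5, 4.0, 6.0]       # underestimates higher concentrations
calibrated = [1.8, 3.6, 7.5, 11.0]   # locally tuned
err_default = rmse(default, insitu)
err_calibrated = rmse(calibrated, insitu)
```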
Umehara, K-I; Iwatsubo, T; Noguchi, K; Kamimura, H
2007-06-01
In this study, the transport of the substrates 1-methyl-4-phenylpyridinium (MPP) and tetraethylammonium (TEA), the inhibition potency of inhibitors (biguanides and H(2)-blockers) for human and rat organic cation transporters (hOCTs and rOcts), and the inhibition type of these inhibitors were investigated and compared using HEK293 cells that stably express hOCT/rOct. The concentration-dependent uptake of [(3)H]-MPP and [(14)C]-TEA by hOCT1-3/rOct1-3 showed K(m) values similar to those in the literature. It was also deduced that MPP and TEA are competitive inhibitors of hOCT1-2/rOct1-2. The K(i) values for phenformin inhibition of [(3)H]-MPP and [(14)C]-TEA uptake by hOCT1-3/rOct1-3 were lower than those for metformin. The [(3)H]-MPP uptake by hOCT1/rOct1 and hOCT3/rOct3 was inhibited by famotidine and ranitidine, whereas that by hOCT2/rOct2 was not. The inhibitory potency of cimetidine for hOCT1-2 was very weak. In most cases, the differences in the V(max)/K(m) values of substrates and the K(i) values of inhibitors between hOCT and rOct were minor. The acquisition of information on OCT/Oct-mediated transport and inhibition, such as that presented in this report, is very useful for further understanding certain aspects of the uptake, distribution, and excretion of drug candidates.
Energy Technology Data Exchange (ETDEWEB)
Mennecke, Angelika; Svergun, Stanislav; Doerfler, Arnd; Struffert, Tobias [University of Erlangen-Nuremberg, Department of Neuroradiology, Erlangen (Germany); Scholz, Bernhard [Siemens Healthcare GmbH, Forchheim (Germany); Royalty, Kevin [Siemens Medical Solutions, USA, Inc., Hoffman Estates, IL (United States)
2017-01-15
Metal artefacts can impair accurate diagnosis of haemorrhage using flat detector CT (FD-CT), especially after aneurysm coiling. Within this work we evaluate a prototype metal artefact reduction algorithm by comparison of the artefact-reduced and the non-artefact-reduced FD-CT images to pre-treatment FD-CT and multi-slice CT images. Twenty-five patients with acute aneurysmal subarachnoid haemorrhage (SAH) were selected retrospectively. FD-CT and multi-slice CT before endovascular treatment as well as FD-CT data sets after treatment were available for all patients. The algorithm was applied to post-treatment FD-CT. The effect of the algorithm was evaluated utilizing the pre-post concordance of a modified Fisher score, a subjective image quality assessment, the range of the Hounsfield units within three ROIs, and the pre-post slice-wise Pearson correlation. The pre-post concordance of the modified Fisher score, the subjective image quality, and the pre-post correlation of the ranges of the Hounsfield units were significantly higher for artefact-reduced than for non-artefact-reduced images. Within the metal-affected slices, the pre-post slice-wise Pearson correlation coefficient was higher for artefact-reduced than for non-artefact-reduced images. The overall diagnostic quality of the artefact-reduced images was improved and reached the level of the pre-interventional FD-CT images. The metal-unaffected parts of the image were not modified. (orig.)
Zaki, Mohammad Reza; Varshosaz, Jaleh; Fathi, Milad
2015-05-20
The multivariate nature of manufacturing drug-loaded nanospheres, in terms of the multiplicity of factors involved, makes it a time-consuming and expensive process. In this study the genetic algorithm (GA) and the artificial neural network (ANN), two tools inspired by natural processes, were employed to optimize and simulate the manufacturing process of agar nanospheres. The efficiency of GA was evaluated against response surface methodology (RSM). The studied responses included particle size, polydispersity index, zeta potential, drug loading, and release efficiency. GA predicted greater extremum values for the response factors compared to RSM. However, real values showed some deviations from the predicted data. Good agreement was found between ANN-predicted and real values for all five response factors, with high correlation coefficients. GA was more successful than RSM in optimization and, along with ANN, was an efficient tool for optimizing and modeling the fabrication process of drug-loaded agar nanospheres.
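As a rough illustration of the GA side of the comparison, the sketch below runs a minimal real-coded GA on a toy two-factor response surface. The factor names, bounds, and response model are invented for the example and are not from the study:

```python
import random

def genetic_optimize(fitness, bounds, pop_size=30, generations=60,
                     mutation=0.1, seed=1):
    """Minimal real-coded GA: elitism, blend crossover, Gaussian mutation.

    bounds: list of (lo, hi) per factor. Returns the best factor vector found.
    """
    rng = random.Random(seed)

    def clip(x, lo, hi):
        return min(max(x, lo), hi)

    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        pop = scored[:2]  # elitism: carry the two best forward unchanged
        while len(pop) < pop_size:
            p1, p2 = rng.sample(scored[:10], 2)          # select among the fittest
            child = [(a + b) / 2 for a, b in zip(p1, p2)]  # blend crossover
            if rng.random() < mutation:                   # Gaussian mutation
                i = rng.randrange(len(child))
                lo, hi = bounds[i]
                child[i] = clip(child[i] + rng.gauss(0, 0.1 * (hi - lo)), lo, hi)
            pop.append(child)
    return max(pop, key=fitness)

# Toy response surface with optimum at (factor1 = 2.0, factor2 = 0.5);
# the GA should maximize this surrogate "release efficiency".
def response(x):
    return -((x[0] - 2.0) ** 2 + (x[1] - 0.5) ** 2)

best = genetic_optimize(response, [(0.0, 5.0), (0.0, 2.0)])
```

A real application would replace `response` with the fitted model of the measured responses, as the study does against RSM.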
Comparison between SARS CoV and MERS CoV Using Apriori Algorithm, Decision Tree, SVM
Directory of Open Access Journals (Sweden)
Jang Seongpil
2016-01-01
Full Text Available MERS (Middle East Respiratory Syndrome) is a worldwide disease these days. The number of infected people is 1038 (08/03/2015) in Saudi Arabia and 186 (08/03/2015) in South Korea. MERS has spread all over the world, including Europe, East Asia, and the Middle East, and its fatality rate is 38.8%. MERS is also known as a cousin of SARS (Severe Acute Respiratory Syndrome) because both diseases show similar symptoms, such as high fever and difficulty in breathing. This is why we compared MERS with SARS. We used data on the spike glycoprotein from NCBI. As ways of analyzing the protein, the apriori algorithm, decision trees, and SVM were used; in particular, SVM was run with normal, polynomial, and sigmoid kernels. The results show that MERS and SARS are alike but also differ in some ways.
2010-05-01
dominated (NSGA-II). We introduce the Limiting Index Sort (LIS) algorithm and demonstrate its superiority for sorting positively correlated or uncorrelated data. LIS indexes individual solutions on the basis of
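For context, the baseline that a sorting improvement such as LIS is measured against is the non-dominated sorting used in NSGA-II. A naive sketch of that baseline follows (LIS itself is not described in enough detail above to reproduce):

```python
def dominates(a, b):
    """True if objective vector a Pareto-dominates b (minimization)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_fronts(points):
    """Naive non-dominated sorting: peel off successive Pareto fronts.

    Returns a list of fronts, each a sorted list of indices into `points`.
    """
    remaining = set(range(len(points)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(points[j], points[i])
                            for j in remaining if j != i)]
        fronts.append(sorted(front))
        remaining -= set(front)
    return fronts

# Two-objective example: first three points are mutually non-dominated
pts = [(1, 5), (2, 2), (5, 1), (3, 3), (4, 4)]
fronts = non_dominated_fronts(pts)
```

This peeling approach is O(n²) per front; improved sorts (of which LIS appears to be one) aim to reduce the number of dominance comparisons.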
Irha, E; Vrdoljak, J
2000-01-01
The aim of this study was to distinguish children with pathological lesions of the intra-articular structures from children with identical complaints but no pathological intra-articular changes. The younger the child, the more difficult it is to make the diagnosis, and the more the expected distribution of pathological findings changes. This is particularly pronounced in children younger than 13 years. Synovial inflammatory alterations are more frequent, and osteochondral and chondral fractures appear to be more problematic than meniscal and cruciate ligament lesions. Before establishing the indication for knee arthroscopy, it is mandatory to implement the algorithm of diagnostic and conservative therapeutic procedures. The indication for knee arthroscopy is considered in cases where complaints persist after conservative treatment, a lesion of intra-articular structures is suspected, and the pathological condition is deemed arthroscopically treatable. Arthroscopy before conservative treatment is justified only in acute cases.
Rueda, Antonio J.; Noguera, José M.; Luque, Adrián
2016-02-01
In recent years GPU computing has gained wide acceptance as a simple, low-cost solution for speeding up computationally expensive processing in many scientific and engineering applications. However, in most cases accelerating a traditional CPU implementation on a GPU is a non-trivial task that requires a thorough refactoring of the code and specific optimizations that depend on the architecture of the device. OpenACC is a promising technology that aims at reducing the effort required to accelerate C/C++/Fortran code on an attached multicore device. With this technology the CPU code only has to be augmented with a few compiler directives that identify the regions to be accelerated and the way data have to be moved between the CPU and GPU. Its potential benefits are multiple: better code readability, less development time, lower risk of errors, and less dependency on the underlying architecture and future evolution of GPU technology. Our aim in this work is to evaluate the pros and cons of using OpenACC against native GPU implementations in computationally expensive hydrological applications, using the classic D8 algorithm of O'Callaghan and Mark for river network extraction as a case study. We implemented the flow accumulation step of this algorithm on the CPU, with OpenACC, and in two different CUDA versions, comparing the length and complexity of the code and its performance with different datasets. Although OpenACC cannot match the performance of an optimized CUDA implementation (×3.5 slower on average), it provides a significant performance improvement over a CPU implementation (×2-6) with by far simpler code and less implementation effort.
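The flow accumulation step that the authors accelerate can be sketched as a serial reference version (plain Python here, not their OpenACC or CUDA code): each cell drains to its steepest D8 neighbour, and counts are propagated from high to low elevation so every cell's upstream total is final before being passed on.

```python
import numpy as np

# D8 neighbour offsets (O'Callaghan & Mark): 4 cardinal + 4 diagonal
OFFSETS = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
           (0, 1), (1, -1), (1, 0), (1, 1)]

def d8_flow_accumulation(dem):
    """Serial reference D8 flow accumulation.

    Each cell drains to its steepest downslope neighbour (slope = drop /
    distance, with diagonal distance sqrt(2)); cells are processed from
    highest to lowest elevation.
    """
    rows, cols = dem.shape
    acc = np.ones(dem.shape, dtype=np.int64)  # every cell contributes itself
    for idx in np.argsort(dem, axis=None)[::-1]:  # descending elevation
        r, c = divmod(int(idx), cols)
        steepest, target = 0.0, None
        for dr, dc in OFFSETS:
            rr, cc = r + dr, c + dc
            if 0 <= rr < rows and 0 <= cc < cols:
                drop = (dem[r, c] - dem[rr, cc]) / (dr * dr + dc * dc) ** 0.5
                if drop > steepest:
                    steepest, target = drop, (rr, cc)
        if target is not None:  # pits/flats keep their own count
            acc[target] += acc[r, c]
    return acc

# Toy DEM: a ramp draining to the right; accumulation grows downslope
dem = np.array([[3.0, 2.0, 1.0, 0.0]])
acc = d8_flow_accumulation(dem)
```

The per-cell neighbour scan is what parallelizes well on a GPU; the elevation-ordered dependency is what makes the parallel formulations non-trivial.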
Cai, Xiuhong; Li, Xiang; Qi, Hong; Wei, Fang; Chen, Jianyong; Shuai, Jianwei
2016-10-01
The gating properties of the inositol 1,4,5-trisphosphate (IP3) receptor (IP3R) are determined by the binding and unbinding of Ca2+ ions and IP3 messengers. Patch-clamp experiments have characterized the stationary properties of the Xenopus oocyte type-1 IP3R (Oo-IP3R1), type-3 IP3R (Oo-IP3R3), and Spodoptera frugiperda IP3R (Sf-IP3R). In this paper, in order to provide insight into the relation between the observed gating characteristics and the gating parameters of different IP3Rs, we apply the immune algorithm to fit the parameters of a modified DeYoung-Keizer model. By comparing the fitted parameter distributions of the three IP3Rs, we suggest that the three types of IP3Rs have similar open sensitivity in response to IP3. The Oo-IP3R3 channel opens readily in response to low Ca2+ concentrations, while the Sf-IP3R channel is easily inhibited at high Ca2+ concentrations. We also show that the IP3 binding rate is not a sensitive parameter for the stationary gating dynamics of the three IP3Rs, but the inhibitory Ca2+ binding/unbinding rates are sensitive parameters for the gating dynamics of both Oo-IP3R1 and Oo-IP3R3 channels. Such differences may be important in generating the spatially and temporally complex Ca2+ oscillations in cells. Our study also demonstrates that the immune algorithm can be applied to model parameter searching in biological systems.
Directory of Open Access Journals (Sweden)
Songqiu Deng
2016-12-01
Full Text Available Individual tree delineation using remotely sensed data plays a very important role in precision forestry because it can provide detailed forest information on a large scale, which is required by forest managers. This study aimed to evaluate the utility of airborne laser scanning (ALS data for individual tree detection and species classification in Japanese coniferous forests with a high canopy density. Tree crowns in the study area were first delineated by the individual tree detection approach using a canopy height model (CHM derived from the ALS data. Then, the detected tree crowns were classified into four classes—Pinus densiflora, Chamaecyparis obtusa, Larix kaempferi, and broadleaved trees—using a tree crown-based classification approach with different combinations of 23 features derived from the ALS data and true-color (red-green-blue—RGB orthoimages. To determine the best combination of features for species classification, several loops were performed using a forward iteration method. Additionally, several classification algorithms were compared in the present study. The results of this study indicate that the combination of the RGB images with laser intensity, convex hull area, convex hull point volume, shape index, crown area, and crown height features produced the highest classification accuracy of 90.8% with the use of the quadratic support vector machines (QSVM classifier. Compared to only using the spectral characteristics of the orthophotos, the overall accuracy was improved by 14.1%, 9.4%, and 8.8% with the best combination of features when using the QSVM, neural network (NN, and random forest (RF approaches, respectively. In terms of different classification algorithms, the findings of our study recommend the QSVM approach rather than NNs and RFs to classify the tree species in the study area. However, these classification approaches should be further tested in other forests using different data. This study demonstrates
Withofs, Nadia; Bernard, Claire; Van der Rest, Catherine; Martinive, Philippe; Hatt, Mathieu; Jodogne, Sebastien; Visvikis, Dimitris; Lee, John A; Coucke, Philippe A; Hustinx, Roland
2014-09-08
PET/CT imaging could improve delineation of rectal carcinoma gross tumor volume (GTV) and reduce interobserver variability. The objective of this work was to compare various functional volume delineation algorithms. We enrolled 31 consecutive patients with locally advanced rectal carcinoma. The FDG PET/CT and the high-dose CT (CTRT) were performed in the radiation treatment position. For each patient, the anatomical GTVRT was delineated based on the CTRT and compared to six different functional/metabolic GTVPET derived from two automatic segmentation approaches (FLAB and a gradient-based method), a relative threshold (45% of the SUVmax), and an absolute threshold (SUV > 2.5), using two different commercially available software products (Philips EBW4 and Segami OASIS). The spatial sizes and shapes of all volumes were compared using the conformity index (CI). All the delineated metabolic tumor volumes (MTVs) were significantly different. The MTVs were as follows (mean ± SD): GTVRT (40.6 ± 31.28 ml); FLAB (21.36 ± 16.34 ml); the gradient-based method (18.97 ± 16.83 ml); OASIS 45% (15.89 ± 12.68 ml); Philips 45% (14.52 ± 10.91 ml); OASIS 2.5 (41.62 ± 33.26 ml); Philips 2.5 (40 ± 31.27 ml). CI between these various volumes ranged from 0.40 to 0.90. The mean CI between the different MTVs and the GTVCT varied with the algorithms and the software products. The manipulation of PET/CT images and MTVs, such as the DICOM transfer to the Radiation Oncology Department, induced additional volume variations.
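The conformity index used above quantifies the spatial overlap of two delineated volumes; a common form is the intersection over the union of the binary masks. A minimal sketch on toy masks (the paper's exact CI definition may differ):

```python
import numpy as np

def conformity_index(mask_a, mask_b):
    """Conformity index between two segmented volumes: |A ∩ B| / |A ∪ B|.

    Takes boolean voxel masks of the same shape; returns a value in [0, 1],
    1.0 meaning identical volumes.
    """
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0  # two empty volumes are trivially identical
    return float(np.logical_and(a, b).sum() / union)

# Toy 2-D "volumes": 4 voxels vs. 6 voxels sharing 4
a = np.zeros((4, 4), dtype=bool)
a[1:3, 1:3] = True
b = np.zeros((4, 4), dtype=bool)
b[1:3, 1:4] = True
ci = conformity_index(a, b)  # 4 shared / 6 in the union
```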
Directory of Open Access Journals (Sweden)
S. V. Bukharin
2016-01-01
Full Text Available The financial condition of an enterprise can be estimated by a set of characteristics (solvency and liquidity, capital structure, profitability, etc.). Some financial coefficients are low-informative, while others are interrelated. To eliminate this ambiguity we pass to generalized indicators (rating numbers), and as the main research tool we propose the theory of expert systems. A characteristic of the modern theory of expert systems is the application of intelligent data-processing methods, i.e. data mining. We propose embedding the problem of comparing the financial condition of economic objects into an expert shell in the class of artificial intelligence systems (the method of the analysis of hierarchies, contiguity learning of a neural network, and a training algorithm with a softmax activation function). A generalized indicator of capital structure in the form of a rating number is introduced, and a feature (factor) space is constructed for seven specific enterprises. Quantitative features (financial coefficients of capital structure) are selected and normalized according to the rules of expert-system theory. The method of the analysis of hierarchies is then applied to the resulting set of generalized indicators: using T. Saaty's linguistic scale, feature ranks reflecting the relative importance of the various financial coefficients are defined, and a matrix of pairwise comparisons is constructed. The vector of feature priorities is computed from the eigenvalues and eigenvectors of this matrix. As a result, the findings are visualized, which eliminates difficulties in interpreting small and negative values of the generalized indicator. The neural network with contiguity learning and
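The priority-vector step of the analysis of hierarchies described above can be sketched as the principal eigenvector of the pairwise comparison matrix, normalized to sum to one. The toy weights below are invented for illustration, not the study's data:

```python
import numpy as np

def ahp_priorities(pairwise):
    """Priority weights from a Saaty pairwise comparison matrix.

    Returns the principal eigenvector (largest real eigenvalue),
    normalized so the weights sum to 1.
    """
    vals, vecs = np.linalg.eig(np.asarray(pairwise, dtype=float))
    principal = np.real(vecs[:, np.argmax(np.real(vals))])
    principal = np.abs(principal)  # eigenvector sign is arbitrary
    return principal / principal.sum()

# Perfectly consistent toy matrix built from weights 0.6 : 0.3 : 0.1,
# so entry (i, j) = w_i / w_j and the recovered priorities equal w exactly.
w = np.array([0.6, 0.3, 0.1])
pairwise = w[:, None] / w[None, :]
priorities = ahp_priorities(pairwise)
```

With real expert judgments the matrix is only approximately consistent, and the largest eigenvalue exceeds the matrix dimension; the gap is the basis of Saaty's consistency ratio.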
Primary and Presidential Candidates
DEFF Research Database (Denmark)
Goddard, Joseph
2012-01-01
This article looks at primary and presidential candidates in 2008 and 2012. Evidence suggests that voters are less influenced by candidates' color, gender, or religious observance than previously. Conversely, markers of difference remain salient in the imaginations of pollsters and journalists...
Directory of Open Access Journals (Sweden)
Raju Datla
2016-02-01
Full Text Available The radiometric calibration equations for the thermal emissive bands (TEB) and the reflective solar bands (RSB) measurements of earth scenes by the polar satellite sensors (Terra and Aqua MODIS, and Suomi NPP VIIRS) and the geostationary sensors (GOES Imager and the GOES-R Advanced Baseline Imager (ABI)) are analyzed towards calibration algorithm harmonization on the basis of SI traceability, which is one of the goals of the NOAA National Calibration Center (NCC). One of the overarching goals of NCC is to provide a knowledge base on the NOAA operational satellite sensors and to recommend best practices for achieving SI traceability for radiance measurements on-orbit. As such, the calibration methodologies of these satellite optical sensors are reviewed in light of the recommended practice for radiometric calibration at the National Institute of Standards and Technology (NIST). The equivalence of some of the spectral bands in these sensors for their end products is presented. The operational and calibration features of the sensors for on-orbit observation of radiance are also compared in tabular form. This review also serves as a quick cross-reference for researchers and analysts on how the observed signals from these sensors in space are converted to radiances.
Odindi, John; Adam, Elhadi; Ngubane, Zinhle; Mutanga, Onisimo; Slotow, Rob
2014-01-01
Plant species invasion is known to be a major threat to socioeconomic and ecological systems. Due to the high cost and limited extents of urban green spaces, high mapping accuracy is necessary to optimize the management of such spaces. We compare the performance of the new-generation WorldView-2 (WV-2) and SPOT-5 images in mapping the bracken fern [Pteridium aquilinum (L.) Kuhn] in a conserved urban landscape. Using the random forest algorithm, grid-search approaches based on the out-of-bag error estimate were used to determine the optimal ntree and mtry combinations. The variable importance and backward feature elimination techniques were further used to determine the influence of the image bands on mapping accuracy. Additionally, the value of the commonly used vegetation indices in enhancing the classification accuracy was tested on the better-performing image data. Results show that the performance of the new WV-2 bands was better than that of the traditional bands. Overall classification accuracies of 84.72 and 72.22% were achieved for the WV-2 and SPOT images, respectively. Use of selected indices from the WV-2 bands increased the overall classification accuracy to 91.67%. The findings in this study show the suitability of new-generation imagery for mapping the bracken fern within the often vulnerable urban natural vegetation cover types.
Repetto, Silvia A; Ruybal, Paula; Solana, María Elisa; López, Carlota; Berini, Carolina A; Alba Soto, Catalina D; Cappa, Stella M González
2016-05-01
Underdiagnosis of chronic infection with the nematode Strongyloides stercoralis may lead to severe disease in the immunosuppressed. Thus, we have set up a specific and highly sensitive molecular diagnostic in stool samples. Here, we compared the accuracy of our polymerase chain reaction (PCR)-based method with that of conventional diagnostic methods for chronic infection. We also analyzed clinical and epidemiological predictors of infection to propose an algorithm for the diagnosis of strongyloidiasis useful to the clinician. Molecular and gold-standard methods were performed to evaluate a cohort of 237 individuals recruited in Buenos Aires, Argentina. Subjects were assigned according to their immunological status, eosinophilia, and/or history of residence in endemic areas. Diagnosis of strongyloidiasis by PCR on the first stool sample was achieved in 71/237 (29.9%) individuals, whereas only 65/237 (27.4%) were positive by conventional methods, which required up to four serial stool samples at weekly intervals. Eosinophilia and history of residence in endemic areas were revealed as independent factors that increase the likelihood of detecting the parasite in our study population. Our results underscore the usefulness of robust molecular tools for diagnosing chronic S. stercoralis infection. The evidence also highlights the need to survey patients with eosinophilia even when a history of residence in an endemic area is absent.
Hayat, Nasir; Ameen, Muhammad Tahir; Tariq, Muhammad Kashif; Shah, Syed Nadeem Abbas; Naveed, Ahmad
2017-03-01
Low-potential waste thermal energy can be exploited for useful net power output using organic Rankine cycle (ORC) systems. In the current article, dual-objective (η_th and SIC) optimization of ORC systems [basic organic Rankine cycle (BORC) and recuperative organic Rankine cycle (RORC)] has been carried out using the non-dominated sorting genetic algorithm II (NSGA-II). Seven organic compounds (R-123, R-1234ze, R-152a, R-21, R-236ea, R-245ca and R-601) were employed in the basic cycle and four dry compounds (R-123, R-236ea, R-245ca and R-601) in the recuperative cycle to investigate the behaviour of the two systems and compare their performance. Sensitivity analyses show that recuperation boosts the thermodynamic performance of the systems but also raises the specific investment cost significantly. R-21, R-245ca and R-601 show attractive performance in BORC, whereas R-601 and R-236ea do in RORC. RORC, due to higher total investment cost and operation and maintenance costs, has longer payback periods than BORC.
Serag, Ahmed; Blesa, Manuel; Moore, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Wilkinson, A. G.; MacNaught, Gillian; Semple, Scott I.; Boardman, James P.
2016-03-01
Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases ‘uniformly’ distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.
Directory of Open Access Journals (Sweden)
Shankar Chakraborty
2012-01-01
Full Text Available Control chart pattern (CCP) recognition can act as a problem identification tool in any manufacturing organization. Feature-based rules in the form of decision trees have become quite popular in recent years for CCP recognition, because practitioners can clearly understand how a particular pattern has been identified from the relevant shape features. Moreover, since the extracted features represent the main characteristics of the original data in condensed form, they can also facilitate efficient pattern recognition. The reported feature-based decision trees can recognize eight types of CCPs using extracted values of seven shape features. In this paper, a different set of seven most useful features is presented that can recognize nine main CCPs, including the mixture pattern. Based on these features, decision trees are developed using the CART (classification and regression tree) and QUEST (quick unbiased efficient statistical tree) algorithms. The relative performance of the CART- and QUEST-based decision trees is studied extensively using simulated pattern data. The results show that the CART-based decision trees give better recognition performance but lower consistency, whereas the QUEST-based decision trees give better consistency but lower recognition performance.
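As an illustration of feature-based CCP recognition, the sketch below extracts two simple shape features (least-squares slope and residual mean square) from a pattern window; such features are the kind of inputs a decision tree would split on. These two features are generic examples, not the paper's set of seven:

```python
import numpy as np

def shape_features(window):
    """Two illustrative shape features for a control-chart pattern window:
    the least-squares slope of the fitted line and the mean squared residual
    about that line. A trend pattern yields a large slope; a cyclic or
    mixture pattern would inflate the residual term instead."""
    t = np.arange(len(window))
    slope, intercept = np.polyfit(t, window, 1)
    residuals = window - (slope * t + intercept)
    return float(slope), float(np.mean(residuals ** 2))

# Simulated patterns: in-control noise vs. an upward trend
rng = np.random.default_rng(0)
normal = rng.normal(0.0, 1.0, 32)                      # normal pattern
trend = 0.3 * np.arange(32) + rng.normal(0.0, 1.0, 32)  # upward trend
s_norm, _ = shape_features(normal)
s_trend, _ = shape_features(trend)
```

A decision tree trained on many simulated windows would learn thresholds on such features (e.g., "slope above t → trend"), which is what makes the resulting rules interpretable to practitioners.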
DEFF Research Database (Denmark)
Larsen, Thomas Ostenfeld; Petersen, Bent O.; Duus, Jens Øllgaard;
2005-01-01
X-hitting, a newly developed algorithm for automated comparison of UV data, has been used for the tracking of two novel spiro-quinazoline metabolites, lapatins A (1) and B (2), in a screening study targeting quinazolines. The structures of 1 and 2 were elucidated by analysis of spectroscopic data, primarily 2D NMR.
Directory of Open Access Journals (Sweden)
Li Zhen
2008-05-01
analysis of data sets in which in vitro bioassay data is being used to predict in vivo chemical toxicology. From our analysis, we can recommend that several ML methods, most notably SVM and ANN, are good candidates for use in real world applications in this area.
Directory of Open Access Journals (Sweden)
Matthew M Cousins
Full Text Available Multi-assay algorithms (MAAs) can be used to estimate HIV incidence in cross-sectional surveys. We compared the performance of two MAAs that use HIV diversity as one of four biomarkers for analysis of HIV incidence. Both MAAs included two serologic assays (LAg-Avidity assay and BioRad-Avidity assay), HIV viral load, and an HIV diversity assay. HIV diversity was quantified using either a high resolution melting (HRM) diversity assay that does not require HIV sequencing (HRM score for a 239 base pair env region) or sequence ambiguity (the percentage of ambiguous bases in a 1,302 base pair pol region). Samples were classified as MAA positive (likely from individuals with recent HIV infection) if they met the criteria for all of the assays in the MAA. The following performance characteristics were assessed: (1) the proportion of samples classified as MAA positive as a function of duration of infection, (2) the mean window period, (3) the shadow (the time period before sample collection that is being assessed by the MAA), and (4) the accuracy of cross-sectional incidence estimates for three cohort studies. The proportion of samples classified as MAA positive as a function of duration of infection was nearly identical for the two MAAs. The mean window period was 141 days for the HRM-based MAA and 131 days for the sequence ambiguity-based MAA. The shadows for both MAAs were <1 year. Both MAAs provided cross-sectional HIV incidence estimates that were very similar to longitudinal incidence estimates based on HIV seroconversion. MAAs that include the LAg-Avidity assay, the BioRad-Avidity assay, HIV viral load, and HIV diversity can provide accurate HIV incidence estimates. Sequence ambiguity measures obtained using a commercially available HIV genotyping system can be used as an alternative to HRM scores in MAAs for cross-sectional HIV incidence estimation.
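The sequence ambiguity biomarker described above is straightforward to compute: the percentage of IUPAC-ambiguous (non-A/C/G/T) bases in a consensus nucleotide sequence. A minimal sketch with an invented illustrative sequence:

```python
def sequence_ambiguity(seq):
    """Percent of ambiguous (non-A/C/G/T) IUPAC bases in a nucleotide
    sequence. In HIV genotyping consensus sequences, mixed-base calls
    (R, Y, S, W, K, M, ...) reflect within-host diversity, which
    accumulates with duration of infection."""
    seq = seq.upper()
    ambiguous = sum(1 for base in seq if base not in "ACGT")
    return 100.0 * ambiguous / len(seq)

# Illustrative 20-base sequence with two mixed-base calls (R and Y)
score = sequence_ambiguity("ACGTRYACGTACGTACGTAC")
```

In the MAA, this score would be thresholded together with the serologic and viral load criteria to classify a sample as recent or non-recent.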
Pepijn Veefkind, J.; de Haan, Johan F.; Sneep, Maarten; Levelt, Pieternel F.
2016-12-01
The OMI (Ozone Monitoring Instrument, on board NASA's Earth Observing System (EOS) Aura satellite) OMCLDO2 cloud product supports trace gas retrievals of, for example, ozone and nitrogen dioxide. The OMCLDO2 algorithm derives the effective cloud fraction and effective cloud pressure using a DOAS (differential optical absorption spectroscopy) fit of the O2-O2 absorption feature around 477 nm. A new version of the OMI OMCLDO2 cloud product is presented that contains several improvements, of which the introduction of a temperature correction on the O2-O2 slant columns and the updated look-up tables have the largest impact. Whereas the differences in the effective cloud fraction are on average limited to 0.01, the differences in the effective cloud pressure can be up to 200 hPa, especially at cloud fractions below 0.3. As expected, the temperature correction depends on latitude and season. The updated look-up tables have a systematic effect on the cloud pressure at low cloud fractions. The improvements at low cloud fractions are very important for the retrieval of trace gases in the lower troposphere, for example nitrogen dioxide and formaldehyde. The cloud pressure retrievals of the improved algorithm are compared with ground-based radar-lidar observations for three mid-latitude sites. For low clouds with limited vertical extent the comparison yields good agreement. For higher clouds, which are vertically extensive and often contain several layers, the satellite retrievals give a lower cloud height. For high clouds, mixed results are obtained.
1989-05-01
pilot selection system and to best support up-front track selection for SUPT? Assumptions: The USAF Trainer Masterplan does not include a plan to ... replace the T-41 with a new flight screening aircraft. In addition, the Masterplan states that candidates will be track selected prior to entry into primary ... training. (3:10) While the Masterplan is not a static document and aircraft procurement plans and/or the timing of track selection are subject to
Release of uranium from candidate wasteforms
Collier, N.; Harrison, M.; Brogden, M.; Hanson, B.
2012-01-01
Large volumes of depleted natural and low-enriched uranium exist in the UK waste inventory. This work reports on initial investigations of the leaching performance of candidate glass and cement encapsulation matrices containing UO3 powder, as well as that of uranium oxide powders. The surface areas of the UO3 powder and of the monolith samples of UO3 conditioned in the glass and cement matrices were very different, making leaching comparisons difficult. The results showed that for both types of monoli...
Comparison Research on the Algorithms of Network Traffic Classification
Institute of Scientific and Technical Information of China (English)
彭勃
2012-01-01
Accurate traffic classification is of fundamental importance to numerous network activities, and it has long been a hot topic in network measurement. A comparison of six flow-feature-based traffic classification algorithms is conducted. Analysis and experiments show that, with feature selection, the support vector machine (SVM) method achieves high accuracy and good computational performance for network traffic classification.
Sambuelli, L.; Bohm, G.; Capizzi, P.; Cardarelli, E.; Cosentino, P.
2011-09-01
By late 2008, one of the most important pieces of the 'Museo delle Antichità Egizie' of Turin, the sculpture of the Pharaoh with the god Amun, was planned to be one of the masterpieces of a travelling exhibition in Japan. The 'Fondazione Museo delle Antichità Egizie di Torino', which manages the museum, was concerned with the integrity of the base of the statue, which presents visible signs of restoration dating back to the early 19th century. It was required to estimate the persistence of the visible fractures, to search for unknown ones and to provide information about the overall mechanical strength of the base. To tackle the first question, a GPR reflection survey along three sides of the base was performed and the results were assembled in a 3D rendering. As far as the second question is concerned, two parallel, horizontal ultrasonic 2D tomograms across the base were made. We acquired, for each section, 723 ultrasonic signals corresponding to different transmitter and receiver positions. The tomographic data were inverted using four different software packages based upon different algorithms. The obtained velocity images were then compared with each other, with the GPR results and with the visible fractures in the base. A critical analysis of the comparisons is finally presented.
Directory of Open Access Journals (Sweden)
Tummala Pradeep
2011-11-01
This paper investigates the use of the variable learning rate back-propagation algorithm and the Levenberg-Marquardt back-propagation algorithm in an intrusion detection system for detecting attacks. In the present study, these two neural network (NN) algorithms are compared according to their speed, accuracy, and performance using the mean squared error (MSE); the closer the MSE is to 0, the higher the performance. Based on the study and test results, the Levenberg-Marquardt algorithm has been found to be faster and to have better accuracy and performance than the variable learning rate back-propagation algorithm.
Application of Hybrid Optimization Algorithm in the Synthesis of Linear Antenna Array
Directory of Open Access Journals (Sweden)
Ezgi Deniz Ülker
2014-01-01
The use of hybrid algorithms for solving real-world optimization problems has become popular since their solution quality can be made better than that of the algorithms that form them by combining their desirable features. The newly proposed hybrid method, called the Hybrid Differential, Particle, and Harmony (HDPH) algorithm, is different from other hybrid forms since it uses all features of the merged algorithms in order to perform efficiently for a wide variety of problems. In the proposed algorithm the control parameters are randomized, which makes its implementation easy and provides a fast response. This paper describes the application of the HDPH algorithm to linear antenna array synthesis. The results obtained with the HDPH algorithm are compared with the three merged optimization techniques that are used in HDPH. The comparison shows that the performance of the proposed algorithm is comparatively better in both solution quality and robustness. The proposed hybrid algorithm HDPH can be an efficient candidate for real-time optimization problems since it yields reliable performance at all times when it gets executed.
Kim, Sung Jin; Kim, Sung Kyu; Kim, Dong Ho
2015-07-01
Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms, pencil beam (PB), collapsed cone (CC), and Monte Carlo (MC), provided by our planning system were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated by using the PB, CC, and MC algorithms. Planning treatment volume (PTV) and organs at risk (OARs) delineations were performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3-0.5 cm computed tomography (CT) slices taken under normal respiration conditions. Intensity-modulated radiation therapy (IMRT) plans were calculated with the three algorithms for each patient. The plans were conducted on the Oncentra MasterPlan (PB and CC) and CMS Monaco (MC) treatment planning systems for 6 MV. The plans were compared in terms of the dose distribution in the target, the OAR volumes, and the monitor units (MUs). Furthermore, absolute dosimetry was measured using a three-dimensional diode array detector (ArcCHECK) to evaluate the dose differences in a homogeneous phantom. Comparing the dose distributions planned by using the PB, CC, and MC algorithms, the PB algorithm provided adequate coverage of the PTV. The MUs calculated using the PB algorithm were fewer than those calculated by using the CC and MC algorithms. The MC algorithm showed the highest accuracy in terms of the absolute dosimetry. Differences were found when comparing the calculation algorithms. The PB algorithm estimated higher doses for the target than the CC and the MC algorithms, actually overestimating the dose compared with the CC and MC calculations. The MC algorithm showed better accuracy than the other algorithms.
Energy Technology Data Exchange (ETDEWEB)
Ortiz J, J. [Instituto Nacional de Investigaciones Nucleares, Depto. Sistemas Nucleares, A.P. 18-1027, 11801 Mexico D.F. (Mexico); Requena, I. [Universidad de Granada (Spain)
2002-07-01
In this work, the results of a genetic algorithm (GA) and a recurrent multi-state neural network (RNRME) for optimizing the fuel reload of 5 cycles of the Laguna Verde nuclear power plant (CNLV) are presented. The fuel reloads obtained by both methods are compared, and it was observed that the RNRME creates better fuel distributions than the GA. Moreover, a comparison of the utility of using one technique or the other is made. (Author)
Rocha, Helder R. de O.; Benincá, Matheus O. L.; Castellani, Carlos E. S.; Pontes, Maria J.; Segatto, Marcelo E. V.; Silva, Jair A. L.
2016-08-01
This paper presents a performance analysis and comparison of optimized multipump Raman and hybrid erbium-doped fiber amplifier (EDFA) + Raman amplifiers, operating simultaneously at the conventional (C) and long (L) bands, using multiobjective optimization based on the evolutionary elitist nondominated sorting genetic algorithm. The amplifiers' performance was measured in terms of on-off gain, ripple, optical signal-to-noise ratio (OSNR) and noise figure (NF), after propagation over 90 and 180 km of single-mode fiber (SMF). Numerical simulation results of the first analysis show that only three pumps are necessary to generate optimal gains in both amplifiers. Comparing the results of the second performance analysis, we conclude that, after 90 km of SMF, the two amplifiers have the same on-off gain if the total pump power (1807.1 mW) of the Raman amplifier is approximately double that (100+994.7 mW) of the hybrid amplifier, when the EDFA is operating at 1480 nm with 5 m of doped fiber. Furthermore, the Raman amplifier needs a single laser with at most 741.1 mW, against 343.9 mW for the distributed Raman amplifier (DRA) pump in the hybrid system. Finally, the results of the last analysis, which considers only the EDFA + Raman amplifier, show that with an on-off gain of 26.14 dB, a ripple close to 1.54 dB over a bandwidth of 66 nm and three pump lasers in the DRA, the achieved OSNR was 39.6 dB with an NF lower than 3.3 dB, after 90 km of SMF.
Nandre, Rahul M; Eo, Seong Kug; Park, Sang Youel; Lee, John Hwa
2015-07-01
This study compared a new live attenuated Salmonella Enteritidis vaccine candidate secreting Escherichia coli heat-labile enterotoxin B subunit (SE-LTB) with a commercial Salmonella Enteritidis (SE) vaccine for efficacy of protection against SE infection in laying hens. Chickens were divided into 3 groups of 20 each. Group A chickens were inoculated orally with phosphate-buffered saline and served as controls, group B chickens were inoculated orally with the vaccine candidate, and group C chickens were inoculated intramuscularly with a commercial vaccine, the primary inoculation in groups B and C being at 10 wk of age and the booster at 16 wk. Groups B and C showed significantly higher titers of plasma immunoglobulin G, intestinal secretory immunoglobulin A, and egg yolk immunoglobulin Y antibodies compared with the control group, and both vaccinated groups showed a significantly elevated cellular immune response. After virulent challenge, group B had significantly lower production of thin-shelled and/or malformed eggs and a significantly lower rate of SE contamination of eggs compared with the control group. Furthermore, the challenge strain was detected significantly less in all of the examined organs of group B compared with the control group. Group C had lower gross lesion scores only in the spleen and had lower bacterial counts only in the spleen, ceca, and ovary. These findings indicate that vaccination with the SE-LTB vaccine candidate can efficiently reduce internal egg and internal organ contamination by Salmonella and has advantages over the commercial vaccine.
An inversion algorithm for general tridiagonal matrix
Institute of Scientific and Technical Information of China (English)
Rui-sheng RAN; Ting-zhu HUANG; Xing-ping LIU; Tong-xiang GU
2009-01-01
An algorithm for the inverse of a general tridiagonal matrix is presented. For a tridiagonal matrix having the Doolittle factorization, an inversion algorithm is established. The algorithm is then generalized to deal with a general tridiagonal matrix without any restriction. Comparison with other methods is provided, indicating the low computational complexity of the proposed algorithm and its applicability to general tridiagonal matrices.
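The Doolittle (LU) route to tridiagonal inversion can be illustrated with a short sketch: the forward elimination / back substitution below is the classic Thomas-style solve, and the inverse is assembled column by column by solving against unit vectors. This is only an illustration of the general idea, assuming the factorization exists (no vanishing pivots); it is not the authors' recursion for the unrestricted case, and the variable names are ours.

```python
def tridiag_solve(a, b, c, d):
    """Solve T x = d for a tridiagonal T with sub-diagonal a, diagonal b,
    and super-diagonal c (a[0] and c[-1] are unused). Assumes the Doolittle
    (LU) factorization of T exists, i.e. no pivot vanishes."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0] = c[0] / b[0]                      # forward elimination
    dp[0] = d[0] / b[0]
    for i in range(1, n):
        denom = b[i] - a[i] * cp[i - 1]
        cp[i] = c[i] / denom if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i] * dp[i - 1]) / denom
    x = [0.0] * n                            # back substitution
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x

def tridiag_inverse(a, b, c):
    """Build T^{-1} one column at a time by solving T x = e_j."""
    n = len(b)
    cols = [tridiag_solve(a, b, c, [1.0 if i == j else 0.0 for i in range(n)])
            for j in range(n)]
    # cols[j] is column j of the inverse; transpose into row-major form
    return [[cols[j][i] for j in range(n)] for i in range(n)]
```

Each solve is O(n), so assembling the full inverse this way costs O(n^2) overall.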
Many-Objective Distinct Candidates Optimization using Differential Evolution
DEFF Research Database (Denmark)
Justesen, Peter; Ursem, Rasmus Kjær
2010-01-01
for each objective. The Many-Objective Distinct Candidates Optimization using Differential Evolution (MODCODE) algorithm takes a novel approach by focusing search using a user-defined number of subpopulations each returning a distinct optimal solution within the preferred region of interest. In this paper...
Energy Technology Data Exchange (ETDEWEB)
Llacer Martos, S.; Herraiz Lablanca, M. D.; Puchal Ane, R.
2011-07-01
This paper evaluates the image quality obtained with each of the algorithms, as well as their running times, in order to optimize the choice of algorithm by taking into account both the quality of the reconstructed image and the time spent on the reconstruction.
Comparison of Two Fast Space Vector Pulse Width Modulation Algorithms
Institute of Scientific and Technical Information of China (English)
范必双; 谭冠政; 樊绍胜; 王玉凤
2014-01-01
A comparison is made between two fast space vector pulse width modulation (SVPWM) algorithms: the 60° non-orthogonal coordinate SVPWM and the 45° rotating coordinate SVPWM. New general methods of the 60° and 45° algorithms for any-level SVPWM are also provided, which need only the angle and the modulation depth to generate and arrange the final vector sequence. The analysis shows the latter offers better flexibility with fewer calculations and is well suited for digital implementation. Both methods are implemented in a field programmable gate array (FPGA) with very high speed integrated circuit hardware description language (VHDL) and compared on the basis of implementation complexity and logic resources required. Simulation results show the overwhelming advantage of the 45° rotating coordinate SVPWM in brevity and efficiency. Finally, the experimental test results for a three-level neutral-point-clamped (NPC) inverter are presented.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
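The basic concepts named above (selection, crossover, mutation) can be sketched in a few lines. The following is a minimal, generic GA, not the software tool described in the record; the tournament selection scheme, the parameter values, and the "one-max" fitness function are illustrative assumptions.

```python
import random

def genetic_algorithm(fitness, n_bits=20, pop_size=30, generations=60,
                      crossover_rate=0.9, mutation_rate=0.02, seed=1):
    """Minimal generational GA: tournament selection, one-point crossover,
    bit-flip mutation. Parameter names and values are illustrative."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def tournament():
        # binary tournament: fitter of two random individuals survives
        a, b = rng.choice(pop), rng.choice(pop)
        return a if fitness(a) >= fitness(b) else b

    for _ in range(generations):
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament()[:], tournament()[:]
            if rng.random() < crossover_rate:      # one-point crossover
                cut = rng.randrange(1, n_bits)
                p1, p2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (p1, p2):
                for i in range(n_bits):            # bit-flip mutation
                    if rng.random() < mutation_rate:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
    return max(pop, key=fitness)

# "one-max" toy problem: fitness is simply the number of 1 bits
best = genetic_algorithm(sum)
```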
Kleinberg, Jon
2006-01-01
Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. The book teaches students a range of design and analysis techniques for problems that arise in computing applications. The text encourages an understanding of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science.
Pirotta, Martin; Aquilina, Dorothy; Bhikha, Tilluck; Georg, Dietmar
2005-01-01
The ESTRO formalism for monitor unit (MU) calculations was evaluated and implemented to replace a previous methodology based on dosimetric data measured in a full-scatter phantom. This traditional method relies on data normalised at the depth of dose maximum (Zm), as well as on the utilisation of the BJR 25 table for the conversion of rectangular fields into equivalent square fields. The treatment planning system (TPS) was subsequently updated to reflect the new beam data normalised at a depth ZR of 10 cm. Comparisons were then carried out between the ESTRO formalism, the Clarkson-based dose calculation algorithm on the TPS (with beam data normalised at Zm and ZR), and the traditional "full-scatter" methodology. All methodologies, except for the "full-scatter" methodology, separated head-scatter from phantom-scatter effects, and none of the methodologies, except for the ESTRO formalism, utilised wedge depth dose information for calculations. The accuracy of MU calculations was verified against measurements in a homogeneous phantom for square and rectangular open and wedged fields, as well as blocked open and wedged fields, at 5, 10, and 20 cm depths, under fixed SSD and isocentric geometries for 6 and 10 MV. Overall, the ESTRO formalism showed the most accurate performance, with the root mean square (RMS) error with respect to measurements remaining below 1% even for the most complex beam set-ups investigated. The RMS error for the TPS deteriorated with the introduction of a wedge, with a worse RMS error for the beam data normalised at Zm (4% at 6 MV and 1.6% at 10 MV) than at ZR (1.9% at 6 MV and 1.1% at 10 MV). The further addition of blocking had only a marginal impact on the accuracy of this methodology. The "full-scatter" methodology showed a loss in accuracy for calculations involving either wedges or blocking, and performed worst for blocked wedged fields (RMS errors of 7.1% at 6 MV and 5% at 10 MV). The origins of these discrepancies were quantified and the
Determination of candidate subjects for better recognition of faces
Wang, Xuansheng; Chen, Zhen; Teng, Zhongming
2016-05-01
In order to improve the accuracy of face recognition and to address the problem of varying poses, we present an improved collaborative representation classification (CRC) algorithm using the original training samples and the corresponding mirror images. First, the mirror images are generated from the original training samples. Second, both the original training samples and their mirror images are simultaneously used to represent the test sample via improved collaborative representation. Then, classes that are "close" to the test sample are coarsely selected as candidate classes. At last, the candidate classes are used to represent the test sample again, and the class most similar to the test sample is finally determined. The experimental results show our proposed algorithm is more robust than the original CRC algorithm and can effectively improve the accuracy of face recognition.
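The coding step at the heart of CRC can be sketched as an l2-regularized least-squares fit over all training samples, followed by per-class reconstruction residuals. The snippet below shows that core step only, under assumed conventions (training samples as matrix columns, `lam` as the regularization weight); the paper's mirror-image generation and two-stage candidate selection are omitted.

```python
import numpy as np

def crc_classify(train, labels, test, lam=0.01):
    """Collaborative representation classification (CRC) core step.

    train:  (d, n) matrix whose columns are training samples (the paper
            additionally appends mirrored samples as extra columns);
    labels: length-n list of class labels, one per column;
    test:   length-d sample to classify.
    Codes the test sample over ALL training samples via ridge regression,
    then assigns the class with the smallest reconstruction residual.
    A minimal illustration, not the authors' improved two-stage variant.
    """
    A = np.asarray(train, dtype=float)
    y = np.asarray(test, dtype=float)
    # ridge coding: x = (A^T A + lam I)^{-1} A^T y
    x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ y)
    residuals = {}
    for c in set(labels):
        mask = np.array([l == c for l in labels])
        xc = np.where(mask, x, 0.0)          # keep only class-c coefficients
        residuals[c] = np.linalg.norm(y - A @ xc)
    return min(residuals, key=residuals.get)
```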
DEFF Research Database (Denmark)
Olsen, Emil; Boye, Jenny Katrine; Pfau, Thilo;
2012-01-01
Motion capture is frequently used over ground in equine locomotion science to study kinematics. Determination of gait events (hoof-on/off and stance) without force plates is essential to cut the data into strides. The lack of comparative evidence emphasises the need to compare existing algorithms...... and use robust and validated algorithms. It is the objective of this study to compare accuracy (bias) and precision (SD) for five published human and equine motion capture foot-on/off and stance phase detection algorithms during walk. Six horses were walked over 8 seamlessly embedded force plates......
Themistocleous, Kyriacos; Hadjimitsis, Diofantos G.; Alexakis, Dimitrios
2011-11-01
Darkest pixel atmospheric correction is the simplest and fully image-based correction method. This paper presents an overview of a proposed 'fast atmospheric correction algorithm' developed in MATLAB, based on the radiative transfer (RT) equation and the darkest pixel approach. The task is to retrieve the aerosol optical thickness (AOT) from the application of this atmospheric correction. The effectiveness of this algorithm is assessed by comparing the AOT values from the algorithm with those measured in situ both with the MICROTOPS II hand-held sunphotometer and the CIMEL sunphotometer (AERONET).
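The darkest pixel principle itself is simple enough to state in code: the minimum value in each band is taken as an estimate of the additive atmospheric path radiance and subtracted everywhere. The sketch below shows only this principle, with an assumed dict-of-bands input format; the AOT retrieval via the RT equation in the authors' MATLAB tool is not reproduced here.

```python
def darkest_pixel_correction(bands):
    """Dark-object subtraction: for each spectral band, assume the darkest
    pixel should have (near) zero reflectance, so its value estimates the
    additive atmospheric contribution and is subtracted from every pixel.

    bands: dict mapping band name -> 2-D list of digital numbers
           (an illustrative input format, not the authors').
    """
    corrected = {}
    for name, img in bands.items():
        dark = min(min(row) for row in img)   # haze estimate for this band
        corrected[name] = [[v - dark for v in row] for row in img]
    return corrected
```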
Building Better Nurse Scheduling Algorithms
Aickelin, Uwe
2008-01-01
The aim of this research is twofold: Firstly, to model and solve a complex nurse scheduling problem with an integer programming formulation and evolutionary algorithms. Secondly, to detail a novel statistical method of comparing and hence building better scheduling algorithms by identifying successful algorithm modifications. The comparison method captures the results of algorithms in a single figure that can then be compared using traditional statistical techniques. Thus, the proposed method of comparing algorithms is an objective procedure designed to assist in the process of improving an algorithm. This is achieved even when some results are non-numeric or missing due to infeasibility. The final algorithm outperforms all previous evolutionary algorithms, which relied on human expertise for modification.
Indian Academy of Sciences (India)
SHIDROKH GOUDARZI; WAN HASLINA HASSAN; MOHAMMAD HOSSEIN ANISI; SEYED AHMAD SOLEYMANI
2016-07-01
Genetic algorithms (GAs) and simulated annealing (SA) have emerged as leading methods for search and optimization problems in heterogeneous wireless networks. In this paradigm, various access technologies need to be interconnected; thus, vertical handovers are necessary for seamless mobility. In this paper, a hybrid algorithm for real-time vertical handover using different objective functions is presented to find the optimal network to connect to, with a good quality of service, in accordance with the user's preferences. The characteristics of current mobile devices call for fast and efficient algorithms that provide solutions near to real-time. These constraints have moved us to develop intelligent algorithms that avoid slow and massive computations. Specifically, the aim was to solve two major problems in GA optimization, i.e., premature convergence and a slow convergence rate, by employing simulated annealing in the population-merging phase of the search. The hybrid algorithm was expected to improve on the pure GA in two ways, i.e., improved solutions for a given number of evaluations, and more stability over many runs. This paper compares the formulation and results of four recent optimization algorithms: artificial bee colony (ABC), genetic algorithm (GA), differential evolution (DE), and particle swarm optimization (PSO). Moreover, a cost function is used to sustain the desired QoS during the transition between networks, measured in terms of the bandwidth, BER, ABR, SNR, and monetary cost. Simulation results indicated that choosing the SA rules would minimize the cost function, and the GA-SA algorithm could decrease the number of unnecessary handovers and thereby prevent the 'Ping-Pong' effect.
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applications.
Directory of Open Access Journals (Sweden)
Pongpan Nakkaew
2016-06-01
In manufacturing processes where efficiency is crucial in order to remain competitive, the flowshop is a common configuration in which machines are arranged in series and products are produced through the stages one by one. In certain production processes, the machines are frequently configured such that each production stage may contain multiple processing units in parallel (a hybrid flowshop). Moreover, along with precedence conditions, sequence-dependent setup times may exist. Finally, in case there is no buffer, a machine is said to be blocked if the next stage to handle its output is occupied. For such an NP-hard problem, referred to as the Blocking Hybrid Flowshop Scheduling Problem with Sequence-Dependent Setup/Changeover Times, it is usually not possible to find the best exact solution satisfying optimization objectives such as minimization of the overall production time. Thus, it is usually solved by approximate algorithms such as metaheuristics. In this paper, we comparatively investigate the effectiveness of two approaches: a Genetic Algorithm (GA) and an Artificial Bee Colony (ABC) algorithm. GA is inspired by the process of natural selection. ABC, in the same manner, resembles the way types of bees perform specific functions and work collectively to find their food by means of division of labor. Additionally, we apply an algorithm to improve the GA and ABC algorithms so that they can take advantage of the parallel processing resources of modern multiple-core processors while eliminating the need to screen the optimal parameters of both algorithms in advance.
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.
Schmitt, Joseph R; Fischer, Debra A; Jek, Kian J; Moriarty, John C; Boyajian, Tabetha S; Schwamb, Megan E; Lintott, Chris; Smith, Arfon M; Parrish, Michael; Schawinski, Kevin; Lynn, Stuart; Simpson, Robert; Omohundro, Mark; Winarski, Troy; Goodman, Samuel J; Jebson, Tony; Lacourse, Daryll
2013-01-01
We report the discovery of 14 new transiting planet candidates in the Kepler field from the Planet Hunters citizen science program. None of these candidates overlap with Kepler Objects of Interest (KOIs), and five of the candidates were missed by the Kepler Transit Planet Search (TPS) algorithm. The new candidates have periods ranging from 124-904 days, eight residing in their host star's habitable zone (HZ) and two (now) in multiple planet systems. We report the discovery of one more addition to the six planet candidate system around KOI-351, marking the first seven planet candidate system from Kepler. Additionally, KOI-351 bears some resemblance to our own solar system, with the inner five planets ranging from Earth to mini-Neptune radii and the outer planets being gas giants; however, this system is very compact, with all seven planet candidates orbiting $\\lesssim 1$ AU from their host star. We perform a numerical integration of the orbits and show that the system remains stable for over 100 million years....
An Improved Weighted Clustering Algorithm in MANET
Institute of Scientific and Technical Information of China (English)
WANG Jin; XU Li; ZHENG Bao-yu
2004-01-01
The original clustering algorithms in Mobile Ad hoc Networks (MANETs) are first analyzed in this paper. Based on them, an Improved Weighted Clustering Algorithm (IWCA) is proposed. Then, the principle and steps of our algorithm are explained in detail, and a comparison is made between the original algorithms and our improved method in terms of average cluster number, topology stability, clusterhead load balance and network lifetime. The experimental results show that our improved algorithm has the best performance on average.
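A weighted clustering election of the kind analyzed above can be sketched as follows: each node receives a combined weight built from its degree difference, distance sum, mobility and remaining battery, and the lowest-weight node in each not-yet-assigned neighbourhood becomes a clusterhead. The combined-weight formula, the field names and the coefficient split are generic WCA-style assumptions, not the actual IWCA metric.

```python
def elect_clusterheads(nodes, weights=(0.7, 0.2, 0.05, 0.05)):
    """WCA-style clusterhead election sketch.

    Each node is a dict with keys: id, degree, ideal_degree, dist_sum,
    mobility, battery, neighbors. The weight mix below is an illustrative
    assumption: low degree difference, distance sum and mobility are good,
    and more remaining battery lowers the weight further.
    """
    w1, w2, w3, w4 = weights

    def combined(n):
        return (w1 * abs(n['degree'] - n['ideal_degree'])
                + w2 * n['dist_sum']
                + w3 * n['mobility']
                - w4 * n['battery'])          # more battery -> lower weight

    heads, assigned = [], set()
    for n in sorted(nodes, key=combined):     # best candidates first
        if n['id'] in assigned:
            continue                          # already a member of a cluster
        heads.append(n['id'])
        assigned.add(n['id'])
        assigned.update(n['neighbors'])       # neighbours join this head
    return heads
```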
Larsen, Ross E; Bedard-Hearn, Michael J; Schwartz, Benjamin J
2006-10-12
Mixed quantum/classical (MQC) molecular dynamics simulation has become the method of choice for simulating the dynamics of quantum mechanical objects that interact with condensed-phase systems. There are many MQC algorithms available, however, and in cases where nonadiabatic coupling is important, different algorithms may lead to different results. Thus, it has been difficult to reach definitive conclusions about relaxation dynamics using nonadiabatic MQC methods because one is never certain whether any given algorithm includes enough of the necessary physics. In this paper, we explore the physics underlying different nonadiabatic MQC algorithms by comparing and contrasting the excited-state relaxation dynamics of the prototypical condensed-phase MQC system, the hydrated electron, calculated using different algorithms, including: fewest-switches surface hopping, stationary-phase surface hopping, and mean-field dynamics with surface hopping. We also describe in detail how a new nonadiabatic algorithm, mean-field dynamics with stochastic decoherence (MF-SD), is to be implemented for condensed-phase problems, and we apply MF-SD to the excited-state relaxation of the hydrated electron. Our discussion emphasizes the different ways quantum decoherence is treated in each algorithm and the resulting implications for hydrated-electron relaxation dynamics. We find that for three MQC methods that use Tully's fewest-switches criterion to determine surface hopping probabilities, the excited-state lifetime of the electron is the same. Moreover, the nonequilibrium solvent response function of the excited hydrated electron is the same with all of the nonadiabatic MQC algorithms discussed here, so that all of the algorithms would produce similar agreement with experiment. Despite the identical solvent response predicted by each MQC algorithm, we find that MF-SD allows much more mixing of multiple basis states into the quantum wave function than do other methods. This leads to an
Comparison of algorithms for filtering speckle noise in digital holography
Institute of Scientific and Technical Information of China (English)
潘云; 潘卫清; 晁明举
2011-01-01
In the recording process of digital holographic measurement, the hologram is easily polluted by speckle noise, which may decrease the resolution of the hologram. In addition, the reconstruction quality is seriously affected by speckle noise in digital reconstruction. It is therefore important to study speckle-filtering algorithms for digital holograms. The median filtering algorithm, Lee filtering algorithm, Kuan filtering algorithm and SUSAN filtering algorithm were applied to filter the speckle noise in the hologram and the reconstructed image, and these algorithms were then compared. The results showed that the SUSAN filtering algorithm performed best in digital holography: the speckle noise was suppressed significantly while the information in the reconstructed images was well preserved.
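Of the four filters compared, the median filter is the simplest to state in code. Below is a plain-Python sketch; the 3x3 window size and the clamped border handling are our choices, not the paper's.

```python
def median_filter(img, k=3):
    """k x k median filter over a 2-D list of pixel values.

    Median filtering is robust to impulsive speckle because an isolated
    outlier rarely survives as the median of its window. Windows are
    clamped at the image borders rather than padded.
    """
    h, w = len(img), len(img[0])
    r = k // 2
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            window = [img[j][i]
                      for j in range(max(0, y - r), min(h, y + r + 1))
                      for i in range(max(0, x - r), min(w, x + r + 1))]
            window.sort()
            out[y][x] = window[len(window) // 2]
    return out
```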
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
A Clustal Alignment Improver Using Evolutionary Algorithms
DEFF Research Database (Denmark)
Thomsen, Rene; Fogel, Gary B.; Krink, Thimo
2002-01-01
Multiple sequence alignment (MSA) is a crucial task in bioinformatics. In this paper we extended previous work with evolutionary algorithms (EA) by using MSA solutions obtained from the wellknown Clustal V algorithm as a candidate solution seed of the initial EA population. Our results clearly show...
Multithreaded Implementation of Hybrid String Matching Algorithm
Directory of Open Access Journals (Sweden)
Akhtar Rasool
2012-03-01
After reviewing many books and articles and analyzing the naive algorithm, the Boyer-Moore algorithm, the Knuth-Morris-Pratt (KMP) algorithm and a variety of improved algorithms, this paper summarizes the advantages and disadvantages of these pattern matching algorithms. On this basis, a new algorithm, the Multithreaded Hybrid algorithm, is introduced. The algorithm draws on the Boyer-Moore algorithm, the KMP algorithm and the ideas behind the improved algorithms: it utilizes the last character of the current window, the next character, and comparison from both sides, adjusting the comparison direction and order so as to maximize the shift distance at each step and reduce the pattern matching time. The algorithm reduces the number of comparisons and the number of pattern shifts, improving matching efficiency. The multithreaded implementation of the hybrid pattern matching algorithm performs parallel string searching on different parts of the text by executing a number of threads simultaneously. This approach is advantageous over the other string matching algorithms in terms of time complexity, further improving the overall string matching efficiency.
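The "last character" shift at the heart of such hybrid matchers comes from the Boyer-Moore family. As a baseline sketch of that idea (the Boyer-Moore-Horspool simplification, not the multithreaded hybrid itself):

```python
def horspool_search(text, pattern):
    """Boyer-Moore-Horspool search returning all match positions.

    On each attempt, the pattern is shifted by the bad-character distance
    of the text character aligned with the pattern's LAST position, so many
    text characters are never examined at all.
    """
    m, n = len(pattern), len(text)
    if m == 0 or n < m:
        return []
    # shift table: distance from the rightmost occurrence of each character
    # (excluding the final position) to the end of the pattern
    shift = {ch: m - i - 1 for i, ch in enumerate(pattern[:-1])}
    matches, pos = [], 0
    while pos <= n - m:
        if text[pos:pos + m] == pattern:
            matches.append(pos)
        pos += shift.get(text[pos + m - 1], m)
    return matches
```

A multithreaded variant along the lines described above would split the text into chunks overlapping by len(pattern) - 1 characters and run this search on each chunk in a separate thread.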
Encoded expansion: an efficient algorithm to discover identical string motifs.
Directory of Open Access Journals (Sweden)
Aqil M Azmi
A major task in computational biology is the discovery of short recurring string patterns known as motifs. Most schemes to discover motifs are either stochastic or combinatorial in nature. Stochastic approaches do not guarantee finding the correct motifs, while combinatorial schemes tend to have an exponential time complexity with respect to motif length. To alleviate the cost, the combinatorial approach exploits dynamic data structures such as trees or graphs. Recently, Karci (2009, "Efficient automatic exact motif discovery algorithms for biological sequences", Expert Systems with Applications 36:7952-7963) devised a deterministic algorithm that finds all the identical copies of string motifs of all sizes [Formula: see text] in a theoretical time complexity of [Formula: see text] and a space complexity of [Formula: see text], where [Formula: see text] is the length of the input sequence and [Formula: see text] is the length of the longest possible string motif. In this paper, we present a significant improvement on Karci's original algorithm. The algorithm that we propose reports all identical string motifs of sizes [Formula: see text] that occur at least [Formula: see text] times. Our algorithm starts with string motifs of size 2, and at each iteration it expands the candidate string motifs by one symbol, throwing out those that occur fewer than [Formula: see text] times in the entire input sequence. We use a simple array and data encoding to achieve a theoretical worst-case time complexity of [Formula: see text] and a space complexity of [Formula: see text]. Encoding of the substrings speeds up the comparison between string motifs. Experimental results on random and real biological sequences confirm that our algorithm has indeed a linear time complexity and that it is more scalable in terms of sequence length than existing algorithms.
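The expand-and-prune iteration described above can be sketched as follows. This toy version uses plain substring keys rather than the paper's array-and-encoding scheme, so it illustrates the iteration structure only, not the claimed linear complexity:

```python
from collections import defaultdict


def identical_motifs(seq: str, q: int, max_len: int) -> dict:
    """Report every substring of length 2..max_len occurring >= q times,
    by iterative expansion: start from size-2 motifs, extend survivors by
    one symbol per round, and prune motifs seen fewer than q times."""
    motifs = {}
    current = defaultdict(list)           # motif -> list of start positions
    for i in range(len(seq) - 1):
        current[seq[i:i + 2]].append(i)
    for length in range(2, max_len + 1):
        # prune: keep only motifs of this length with >= q occurrences
        current = {m: pos for m, pos in current.items() if len(pos) >= q}
        motifs.update(current)
        # expand each surviving motif by one symbol to the right
        nxt = defaultdict(list)
        for m, positions in current.items():
            for i in positions:
                if i + length < len(seq):
                    nxt[seq[i:i + length + 1]].append(i)
        current = nxt
    return motifs
```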
Tel, G.
1993-01-01
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of distributed algorithms.
Abrams, D.; Williams, C.
1999-01-01
This thesis describes several new quantum algorithms. These include a polynomial-time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
Institute of Scientific and Technical Information of China (English)
张斐; 谭军; 谢竞博
2009-01-01
This paper studies the main prediction models and algorithms for transcription factor binding sites (TFBSs). Three representative regulatory-element prediction algorithms, MEME, Gibbs sampling, and Weeder, are compared by predicting motifs in the Arabidopsis thaliana genome. The comparison shows that the Gibbs sampling and Weeder algorithms are more efficient at predicting both long and short motifs. The MEME algorithm is analyzed in depth, and an optimized method for motif finding that combines MEME with the other algorithms is proposed. Experiments verify that this method effectively improves prediction efficiency.
Comparison of Several Pattern Matching Algorithms Based on Snort
Institute of Scientific and Technical Information of China (English)
王敏杰; 朱连轩
2011-01-01
String pattern matching is the key to intrusion detection. Several algorithms, including BM, BMG, AC, and AC-BM, are discussed, and the running time and memory consumption of these four algorithms are measured in the Snort intrusion detection system. The results show that with a large number of patterns, AC and AC-BM run faster than BM and BMG but consume relatively more memory; with a small number of patterns, BM and BMG outperform AC and AC-BM.
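The AC side of this comparison is the Aho-Corasick automaton, which matches all patterns in a single pass over the text. A compact Python sketch for illustration (Snort's implementations are in C, so this shows the algorithm, not the measured code):

```python
from collections import deque


def aho_corasick(text: str, patterns: list[str]) -> dict:
    """Multi-pattern matching: build the goto trie plus failure links,
    then scan the text once, reporting every occurrence of every pattern."""
    goto, out = [{}], [set()]            # node 0 is the trie root
    for pat in patterns:
        node = 0
        for ch in pat:
            if ch not in goto[node]:
                goto.append({})
                out.append(set())
                goto[node][ch] = len(goto) - 1
            node = goto[node][ch]
        out[node].add(pat)
    # failure links via breadth-first search; depth-1 nodes keep fail = 0
    fail = [0] * len(goto)
    queue = deque(goto[0].values())
    while queue:
        u = queue.popleft()
        for ch, v in goto[u].items():
            queue.append(v)
            f = fail[u]
            while f and ch not in goto[f]:
                f = fail[f]
            fail[v] = goto[f].get(ch, 0)
            out[v] |= out[fail[v]]       # inherit patterns ending here
    hits = {p: [] for p in patterns}
    node = 0
    for i, ch in enumerate(text):
        while node and ch not in goto[node]:
            node = fail[node]
        node = goto[node].get(ch, 0)
        for p in out[node]:
            hits[p].append(i - len(p) + 1)
    return hits
```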
Analysis and Comparisons of Four Kinds of Ensemble Pulsar Time Algorithms
Institute of Scientific and Technical Information of China (English)
仲崇霞; 杨廷高
2009-01-01
Pulsars, rapidly rotating neutron stars, have extremely stable rotation periods. The pulsar time defined by a single pulsar is influenced by several noise sources; to weaken these influences, ensemble analysis is used to obtain an ensemble pulsar time, improving its long-term stability. In this paper, four algorithms — the classical weighted average algorithm, the wavelet analysis algorithm, the Wiener filtration algorithm and Wiener filtration in the wavelet domain — are applied to construct an ensemble pulsar time. The data used are the timing residuals of two millisecond pulsars (PSR B1855+09 and PSR B1937+21) observed at Arecibo Observatory. First, the classical weighted average algorithm, developed by Petit, allows only one weight per single pulsar time over the whole observation interval, with the stability σ_x²(T) of each single pulsar time as the criterion for the weight. Second, an ensemble pulsar time algorithm is developed based on wavelet multi-resolution analysis and wavelet packet analysis: the observation residuals of the pulsars are decomposed, the components of different frequency bands are extracted, and weights are chosen according to the stability of each component, expressed by the wavelet variance. Third, the pulsar timing residuals are caused by the reference atomic clock and by the pulsar itself, and these contributions are uncorrelated. Considering this and the properties of Wiener filtration, we put forward an ensemble pulsar time algorithm based on Wiener filtration. Using this algorithm, the errors from the atomic clock and from the pulsar itself can be separated in the post-fit timing residuals; the atomic-scale component can be filtered from the pulsar phase variations and the remainder integrated into the ensemble pulsar time, with weights chosen according to the root mean square. Fourth, the wavelet analysis and the
A Modern Non Candidate Approach for sequential pattern mining with Dynamic Minimum Support
Directory of Open Access Journals (Sweden)
Kumudbala Saxena
2011-12-01
Finding frequent patterns in data mining plays a significant role in discovering relational patterns. Data mining, also called knowledge discovery, applies to many kinds of databases, including mobile databases and heterogeneous environments. In this paper we propose a modern non-candidate approach for sequential pattern mining with dynamic minimum support. The approach is divided into six parts: (1) accept the dataset from the heterogeneous input set; (2) generate tokens based on the characters, generating only posterior tokens; (3) let the user enter the minimum support according to need and place; (4) find the frequent patterns that satisfy the dynamic minimum support; (5) find associated members according to the token values; (6) find useful patterns after applying pruning. Because our approach is not based on candidate generation, it takes less time and memory than previous algorithms. Its other main feature is the dynamic minimum support, which gives the flexibility to find frequent patterns based on location and user requirements.
Capacity Constrained Routing Algorithms for Evacuation Route Planning
2006-05-04
April 30, 2006 DRAFT. D. Scope and Outline of the Paper: The main focus of the paper is on the analysis of a heuristic algorithm which effectively... CCRP Algorithms: In this section, we present a generic description of the Capacity Constrained Route Planner (CCRP). CCRP is a heuristic algorithm which... qualifies to be a candidate algorithm. E. Solution Quality of CCRP: Since CCRP is a heuristic algorithm, it does not produce optimal solutions for all
Genetic algorithms as global random search methods
Peck, Charles C.; Dhawan, Atam P.
1995-01-01
Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
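The components discussed, proportional (roulette-wheel) selection and recombination over a population of candidate solutions, can be sketched on a bit-string population. The fitness function and parameters below are illustrative, not from the paper:

```python
import random


def genetic_algorithm(fitness, n_bits=16, pop_size=40, generations=60,
                      p_cross=0.9, p_mut=0.02, rng=None):
    """Minimal GA sketch: proportional (roulette-wheel) selection,
    one-point recombination, bit-flip mutation, elitist best-tracking."""
    rng = rng or random.Random(0)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]

    def select(pop, fits):
        # roulette wheel: probability proportional to (non-negative) fitness
        total = sum(fits)
        r = rng.uniform(0, total)
        acc = 0.0
        for ind, f in zip(pop, fits):
            acc += f
            if acc >= r:
                return ind
        return pop[-1]

    best = max(pop, key=fitness)
    for _ in range(generations):
        fits = [fitness(ind) for ind in pop]
        nxt = []
        while len(nxt) < pop_size:
            a, b = select(pop, fits), select(pop, fits)
            if rng.random() < p_cross:
                cut = rng.randrange(1, n_bits)      # one-point crossover
                a = a[:cut] + b[cut:]
            child = [bit ^ (rng.random() < p_mut) for bit in a]
            nxt.append(child)
        pop = nxt
        best = max(pop + [best], key=fitness)
    return best
```

On the "onemax" problem (maximize the number of 1-bits) the recorded best rapidly approaches the all-ones string.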
Energy Technology Data Exchange (ETDEWEB)
Pokhrel, D; Sood, S; Badkul, R; Jiang, H; Saleh, H; Wang, F [University of Kansas Hospital, Kansas City, KS (United States)
2015-06-15
Purpose: To compare dose distributions calculated using the PB-hete vs. XVMC algorithms for SRT treatments of cavernous sinus tumors. Methods: Using PB-hete SRT, five patients with cavernous sinus tumors received a prescription dose of 25 Gy in 5 fractions to the planning target volume, PTV(V100%)=95%. The gross tumor volume (GTV) and organs at risk (OARs) were delineated on T1/T2 MRI-CT-fused images. The PTV (range 2.1-84.3 cc, mean 21.7 cc) was generated using a 5 mm uniform margin around the GTV. PB-hete SRT plans included a combination of non-coplanar conformal arcs/static beams delivered by a Novalis-TX with HD-MLCs and a 6 MV-SRS (1000 MU/min) beam. Plans were re-optimized using the XVMC algorithm with identical beam geometry and MLC positions. Plan-specific PTV(V99%), maximal, mean, and isocenter doses, and total monitor units (MUs) were compared. Maximal doses to OARs such as the brainstem, optic pathway, spinal cord, and lenses, as well as the normal-tissue volume receiving 12 Gy (V12), were compared between the two algorithms. All analysis was performed using two-tailed paired t-tests with an upper-bound p-value of <0.05. Results: Using either algorithm, no dosimetrically significant differences in PTV coverage (PTV V99%, maximal, mean, isocenter doses) or total number of MUs were observed (all p-values >0.05, mean ratios within 2%). However, maximal doses to the optic chiasm and nerves were significantly under-predicted by PB-hete (p=0.04). Maximal brainstem, spinal cord, and lens doses and V12 were all comparable between the two algorithms, with the exception of one patient with the largest PTV, who exhibited 11% higher V12 with XVMC. Conclusion: Unlike for lung tumors, XVMC and PB-hete treatment plans provided similar PTV coverage for cavernous sinus tumors. The majority of OAR doses were comparable between the two algorithms, except for small structures such as the optic chiasm/nerves, which could potentially receive higher doses when using the XVMC algorithm. Special attention may need to be paid on a case
We analyzed 10 established and 4 new satellite reflectance algorithms for estimating chlorophyll-a (Chl-a) in a temperate reservoir in southwest Ohio using coincident hyperspectral aircraft imagery and dense water truth collected within one hour of image acquisition to develop si...
DEFF Research Database (Denmark)
Rahimi, Maryam; Nielsen, Jesper Ødum; Pedersen, Troels
2014-01-01
in multiple-input-multiple-output scenarios, but it has lower performance than the virtual signal-to-interference-plus-noise ratio algorithm in theory and practice. A real measurement environment consisting of two BSs and two users has been studied to evaluate the simulation results.
Comparison Research of Two Improved Variable-Step-Length MPPT Algorithms
Institute of Scientific and Technical Information of China (English)
潘逸菎; 窦伟
2016-01-01
Photovoltaic array maximum power point tracking (MPPT) is one of the key technologies in photovoltaic power generation. Building on recent academic research, this paper optimizes the two most widely applied MPPT techniques, the variable-step incremental conductance algorithm and the perturb-and-observe algorithm, and compares their advantages and disadvantages in detail. Simulation and experimental results show that both improved algorithms track the maximum power point quickly and accurately; the improved perturb-and-observe algorithm, being simpler, is better suited to practical products.
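The perturb-and-observe idea with a variable (here, shrinking) step can be sketched as follows. The PV power curve is a toy quadratic stand-in with its peak at an assumed 17 V, not a real array model:

```python
def p_and_o(power_of, v0=10.0, step0=1.0, iterations=60, decay=0.95):
    """Perturb-and-observe MPPT sketch with a variable step:
    keep perturbing the operating voltage in the same direction while the
    measured power rises; reverse direction and shrink the step when it
    falls (i.e. after overshooting the peak)."""
    v, step = v0, step0
    p_prev = power_of(v)
    direction = 1.0
    for _ in range(iterations):
        v += direction * step
        p = power_of(v)
        if p < p_prev:              # overshot the maximum power point
            direction = -direction
            step *= decay           # smaller step -> smaller oscillation
        p_prev = p
    return v


# toy PV power curve, maximum power point at v = 17 (assumed model)
mpp_voltage = p_and_o(lambda v: 100.0 - (v - 17.0) ** 2)
```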
Abramowicz, H.; Abt, I.; Adamczyk, L.; Adamus, M.; Aggarwal, R.; Antonelli, S.; Antonioli, P.; Antonov, A.; Arneodo, M.; Aushev, V.; Aushev, Y.; Bachynska, O.; Bamberger, A.; Barakbaev, A. N.; Barbagli, G.; Bari, G.; Barreiro, F.; Bartsch, D.; Basile, M.; Behnke, O.; Behr, J.; Behrens, U.; Bellagamba, L.; Bertolin, A.; Bhadra, S.; Bindi, M.; Blohm, C.; Bold, T.; Boos, E. G.; Borodin, M.; Borras, K.; Boscherini, D.; Boutle, S. K.; Brock, I.; Brownson, E.; Brugnera, R.; Bruemmer, N.; Bruni, A.; Bruni, G.; Brzozowska, B.; Bussey, P. J.; Butterworth, J. M.; Bylsma, B.; Caldwell, A.; Capua, M.; Carlin, R.; Catterall, C. D.; Chekanov, S.; Chwastowski, J.; Ciborowski, J.; Ciesielski, R.; Cifarelli, L.; Cindolo, F.; Contin, A.; Cooper-Sarkar, A. M.; Coppola, N.; Corradi, M.; Corriveau, F.; Costa, M.; D'Agostini, G.; Dal Corso, F.; de Favereau, J.; del Peso, J.; Dementiev, R. K.; De Pasquale, S.; Derrick, M.; Devenish, R. C. E.; Dobur, D.; Dolgoshein, B. A.; Doyle, A. T.; Drugakov, V.; Durkin, L. S.; Dusini, S.; Eisenberg, Y.; Ermolov, P. F.; Eskreys, A.; Fazio, S.; Ferrando, J.; Ferrero, M. I.; Figiel, J.; Forrest, M.; Foster, B.; Fourletov, S.; Gach, G.; Galas, A.; Gallo, E.; Garfagnini, A.; Geiser, A.; Gialas, I.; Gladilin, L. K.; Gladkov, D.; Glasman, C.; Gogota, O.; Golubkov, Yu. A.; Goettlicher, P.; Grabowska-Bold, I.; Grebenyuk, J.; Gregor, I.; Grigorescu, G.; Grzelak, G.; Gwenlan, C.; Haas, T.; Hain, W.; Hamatsu, R.; Hart, J. C.; Hartmann, H.; Hartner, G.; Hilger, E.; Hochman, D.; Holm, U.; Hori, R.; Horton, K.; Huettmann, A.; Iacobucci, G.; Ibrahim, Z. A.; Iga, Y.; Ingbir, R.; Ishitsuka, M.; Jakob, H. -P.; Januschek, F.; Jimenez, M.; Jones, T. W.; Juengst, M.; Kadenko, I.; Kahle, B.; Kamaluddin, B.; Kananov, S.; Kanno, T.; Karshon, U.; Karstens, F.; Katkov, I. I.; Kaur, M.; Kaur, P.; Keramidas, A.; Khein, L. A.; Kim, J. Y.; Kisielewska, D.; Kitamura, S.; Klanner, R.; Klein, U.; Kollar, D.; Kooijman, P.; Korol, Ie.; Korzhavina, I. 
A.; Kotanski, A.; Koetz, U.; Kowalski, H.; Kulinski, P.; Kuprash, O.; Kuze, M.; Kuzmin, V. A.; Lee, A.; Levchenko, B. B.; Libov, V.; Limentani, S.; Ling, T. Y.; Lisovyi, M.; Lobodzinska, E.; Lohmann, W.; Loehr, B.; Lohrmann, E.; Loizides, J. H.; Long, K. R.; Longhin, A.; Lontkovskyi, D.; Lukina, O. Yu.; Luzniak, P.; Maeda, J.; Magill, S.; Makarenko, I.; Malka, J.; Mankel, R.; Margotti, A.; Marini, G.; Mastroberardino, A.; Matsumoto, T.; Mattingly, M. C. K.; Melzer-Pellmann, I. -A.; Miglioranzi, S.; Idris, F. Mohamad; Monaco, V.; Montanari, A.; Musgrave, B.; Nagano, K.; Namsoo, T.; Nania, R.; Nicholass, D.; Nigro, A.; Ning, Y.; Noor, U.; Notz, D.; Nowak, R. J.; Nuncio-Quiroz, A. E.; Oh, B. Y.; Okazaki, N.; Oliver, K.; Olkiewicz, K.; Onishchuk, Yu.; Ota, O.; Papageorgiu, K.; Parenti, A.; Pawlak, J. M.; Pawlik, B.; Pelfer, P. G.; Pellegrino, A.; Perlanski, W.; Perrey, H.; Piotrzkowski, K.; Plucinski, P.; Pokrovskiy, N. S.; Polini, A.; Proskuryakov, A. S.; Przybycien, M.; Raval, A.; Reeder, D. D.; Reisert, B.; Ren, Z.; Repond, J.; Ri, Y. D.; Robertson, A.; Roloff, P.; Ron, E.; Rubinsky, I.; Ruspa, M.; Sacchi, R.; Salii, A.; Samson, U.; Sartorelli, G.; Savin, A. A.; Saxon, D. H.; Schioppa, M.; Schlenstedt, S.; Schleper, P.; Schmidke, W. B.; Schneekloth, U.; Schoenberg, V.; Schoerner-Sadenius, T.; Schwartz, J. .; Sciulli, F.; Shcheglova, L. M.; Shehzadi, R.; Singh, I.; Skillicorn, I. O.; Slominski, W.; Smith, W. H.; Sola, V.; Solano, A.; Son, D.; Sosnovtsev, V.; Spiridonov, A.; Stadie, H.; Stanco, L.; Stern, A.; Stewart, T. P.; Stifutkin, A.; Stopa, P.; Suchkov, S.; Susinno, G.; Suszycki, L.; Sztuk, J.; Szuba, D.; Szuba, J.; Tapper, A. D.; Tassi, E.; Terron, J.; Theedt, T.; Tiecke, H.; Tokushuku, K.; Tomalak, O.; Tomaszewska, J.; Tsurugai, T.; Turcato, M.; Tymieniecka, T.; Uribe-Estrada, C.; Vazquez, M.; Verbytskyi, A.; Viazloz, V.; Vlasov, N. N.; Volynets, O.; Walczak, R.; Abdullah, W. A. T. Wan; Whitmore, J. 
J.; Whyte, J.; Wing, M.; Wlasenko, M.; Wolf, G.; Wolfe, H.; Wrona, K.; Yaguees-Molina, A. G.; Yamada, S.; Yamazaki, Y.; Yoshida, R.; Youngman, C.; Zarnecki, A. F.; Zawiejski, L.; Zenaiev, O.; Zeuner, W.; Zhautykov, B. O.; Zhmak, N.; Zichichi, A.; Zolko, M.; Zotkin, D. S.; Zulkapli, Z.
2010-01-01
For the first time, differential inclusive-jet cross sections have been measured in neutral current deep inelastic ep scattering using the anti-k_T and SIScone algorithms. The measurements were made for boson virtualities Q^2 > 125 GeV^2 with the ZEUS detector at HERA using an integrated luminosity
Energy Technology Data Exchange (ETDEWEB)
Abramowicz, H. [Tel Aviv University (Israel). Raymond and Beverly Sackler Faculty of Exact Sciences, School of Physics; Max Planck Inst., Munich (Germany); Abt, I. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Adamczyk, L. [AGH-University of Science and Technology, Cracow (PL). Faculty of Physics and Applied Computer Science] (and others)
2010-03-15
For the first time, differential inclusive-jet cross sections have been measured in neutral current deep inelastic ep scattering using the anti-k{sub T} and SIScone algorithms. The measurements were made for boson virtualities Q{sup 2} > 125 GeV{sup 2} with the ZEUS detector at HERA using an integrated luminosity of 82 pb{sup -1} and the jets were identified in the Breit frame. The performance and suitability of the jet algorithms for their use in hadron-like reactions were investigated by comparing the measurements to those performed with the k{sub T} algorithm. Next-to-leading-order QCD calculations give a good description of the measurements. Measurements of the ratios of cross sections using different jet algorithms are also presented; the measured ratios are well described by calculations including up to O({alpha}{sub s}{sup 3}) terms. Values of {alpha}{sub s}(M{sub Z}) were extracted from the data; the results are compatible with and have similar precision to the value extracted from the k{sub T} analysis. (orig.)
Othman, Arsalan; Gloaguen, Richard
2015-04-01
Topographic effects and complex vegetation cover hinder lithology classification in mountain regions, not only in the field but also with reflectance remote sensing data. The area of interest, "Bardi-Zard", is located in NE Iraq. It is part of the Zagros orogenic belt, where seven lithological units outcrop, and it is known for its chromite deposits. The aim of this study is to compare three machine learning algorithms (MLAs): Maximum Likelihood (ML), Support Vector Machines (SVM), and Random Forest (RF), in the context of a supervised lithology classification task using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite data, its derived products, spatial information (spatial coordinates) and geomorphic data. We emphasize the enhancement in remote sensing lithological mapping accuracy that arises from integrating geomorphic features and spatial information (spatial coordinates) into the classifications. This study finds that RF outperforms the ML and SVM algorithms in almost all of the sixteen dataset combinations that were tested. The overall accuracy of the best dataset combination with the RF map for all seven classes reaches ~80%, the producer's and user's accuracies are ~73.91% and ~76.09% respectively, and the kappa coefficient is ~0.76. TPI is more effective with the SVM algorithm than with the RF algorithm. This paper demonstrates that adding geomorphic indices such as TPI and spatial information to the dataset increases the lithological classification accuracy.
Comparison of Performances of Three Types of Active Noise Control Algorithms
Institute of Scientific and Technical Information of China (English)
陈珏; 玉昊昕; 陈克安
2013-01-01
In order to reasonably choose an active noise control (ANC) algorithm in practical engineering, the performances of three typical ANC algorithms, FxLMS, GSFxAP, and FsLMS, were investigated under different conditions through simulations and anechoic-chamber experiments, and the conditions of application of each algorithm were studied. It was concluded that if the secondary path is linear and the convergence speed does not need to be very high, the FxLMS algorithm offers the best performance-to-cost ratio; if the noise to be controlled (the primary noise) is non-stationary or fast convergence is required, the GSFxAP algorithm is the suitable choice; and if the correlation between the primary noise and the reference signal is weak, the FsLMS algorithm is the reasonable choice. These conclusions provide a theoretical guide for the choice of ANC algorithm in practical engineering.
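A bare-bones FxLMS sketch for reference: the one-tap secondary path and identity primary path below are simplifying assumptions made for illustration, not the experimental setup of the study:

```python
import math


def fxlms_anc(noise, sec_path=(0.5,), taps=4, mu=0.05):
    """Filtered-x LMS sketch. The secondary-path model sec_path is assumed
    known; the adaptive FIR w drives the anti-noise source so that the
    error-microphone signal e = d + S*y tends to zero. The primary path
    is taken as the identity, so d equals the reference noise."""
    w = [0.0] * taps
    xbuf = [0.0] * taps              # reference signal history
    fxbuf = [0.0] * taps             # filtered-reference history
    ybuf = [0.0] * len(sec_path)     # controller-output history
    errors = []
    for x in noise:
        xbuf = [x] + xbuf[:-1]
        y = sum(wi * xi for wi, xi in zip(w, xbuf))        # anti-noise drive
        ybuf = [y] + ybuf[:-1]
        anti = sum(s * yi for s, yi in zip(sec_path, ybuf))
        e = x + anti                                       # residual at mic
        # filtered-x: reference passed through the secondary-path model
        fx = sum(s * xi for s, xi in zip(sec_path, xbuf))
        fxbuf = [fx] + fxbuf[:-1]
        w = [wi - mu * e * fi for wi, fi in zip(w, fxbuf)] # LMS update
        errors.append(e)
    return errors


# tonal primary noise; the error power should fall as the filter adapts
errs = fxlms_anc([math.sin(0.3 * n) for n in range(3000)])
```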
76 FR 4896 - Call for Candidates
2011-01-27
From the Federal Register Online via the Government Publishing Office. FEDERAL ACCOUNTING STANDARDS ADVISORY BOARD. Call for Candidates. AGENCY: Federal Accounting Standards Advisory Board. ACTION: Notice... The Federal Accounting Standards Advisory Board (FASAB) is currently seeking candidates (candidates must...
Directory of Open Access Journals (Sweden)
Hunt Anthony
2005-09-01
Abstract. Background: Accurate measurement of the QT interval is very important from a clinical and pharmaceutical drug-safety screening perspective. Expert manual measurement is both imprecise and imperfectly reproducible, yet it is used as the reference standard to assess the accuracy of current automatic computer algorithms, which thus produce reproducible but incorrect measurements of the QT interval. There is a scientific imperative to evaluate the most commonly used algorithms against an accurate and objective 'gold standard' and to investigate novel automatic algorithms if the commonly used algorithms are found to be deficient. Methods: This study uses a validated computer simulation of 8 different noise-contaminated ECG waveforms (with known QT intervals of 461 and 495 ms), generated from a cell array using Luo-Rudy membrane kinetics and the Crank-Nicolson method, as a reference standard to assess the accuracy of commonly used QT measurement algorithms. Each ECG, contaminated with 39 mixtures of noise at 3 levels of intensity, was first filtered and then subjected to three threshold methods (T1, T2, T3), two T-wave slope methods (S1, S2) and a novel method. The reproducibility and accuracy of each algorithm were compared for each ECG. Results: The coefficients of variation for methods T1, T2, T3, S1, S2 and the novel method were 0.36, 0.23, 1.9, 0.93, 0.92 and 0.62 respectively. For ECGs with a true QT interval of 461 ms, methods T1, T2, T3, S1, S2 and the novel method calculated mean QT intervals (standard deviations) of 379.4 (1.29), 368.5 (0.8), 401.3 (8.4), 358.9 (4.8), 381.5 (4.6) and 464 (4.9) ms respectively. For ECGs with a true QT interval of 495 ms, the corresponding mean QT intervals (standard deviations) were 396.9 (1.7), 387.2 (0.97), 424.9 (8.7), 386.7 (2.2), 396.8 (2.8) and 493 (0.97) ms respectively. These results showed significant differences between means at the >95% confidence level. Shifting ECG baselines caused large QT interval errors with T1 and T2
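A toy version of a threshold method, in the spirit of the T1/T2 methods studied: the T-wave end is taken as the first sample after the T peak that falls below a fixed fraction of the peak amplitude. The synthetic "ECG" here is just a Gaussian bump standing in for a T wave, not simulated membrane kinetics:

```python
import math


def qt_interval(ecg, fs, q_onset, frac=0.05):
    """Threshold method sketch: locate the T peak after q_onset, then walk
    forward to the first sample below frac * peak; the QT interval runs
    from q_onset to that crossing, in seconds."""
    t_peak = max(range(q_onset, len(ecg)), key=lambda i: ecg[i])
    peak = ecg[t_peak]
    t_end = t_peak
    while t_end < len(ecg) - 1 and ecg[t_end] > frac * peak:
        t_end += 1
    return (t_end - q_onset) / fs


# flat baseline with a Gaussian "T wave" centred at sample 600
fs = 1000  # Hz
ecg = [math.exp(-((i - 600) / 40.0) ** 2) for i in range(900)]
qt = qt_interval(ecg, fs, q_onset=200)
```

The example also shows why pure threshold methods are fragile: the measured end point, and hence the QT value, moves with the chosen threshold fraction and with any baseline shift.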
On security arguments of the second round SHA-3 candidates
DEFF Research Database (Denmark)
Andreeva, Elena; Bogdanov, Andrey; Mennink, Bart;
2012-01-01
In 2007, the US National Institute of Standards and Technology (NIST) announced a call for the design of a new cryptographic hash algorithm in response to vulnerabilities, such as differential attacks, identified in existing hash functions like MD5 and SHA-1. NIST received many submissions, 51 of which were accepted to the first round. 14 candidates were left in the second round, out of which five candidates have recently been chosen for the final round. An important criterion in the selection process is the security of the SHA-3 hash function. We identify two important classes of security arguments for the new designs: (1) possible reductions of the hash function security to the security of its underlying building blocks, and (2) arguments against differential attacks on the building blocks. In this paper, we compare the state-of-the-art provable security reductions for the second round candidates...
Mahmoodabadi, M J; Taherkhorsandi, M; Bagheri, A
2014-01-01
An optimal robust state feedback tracking controller is introduced to control a biped robot. In the literature, the parameters of the controller are usually determined by a tedious trial and error process. To eliminate this process and design the parameters of the proposed controller, the multiobjective evolutionary algorithms, that is, the proposed method, modified NSGAII, Sigma method, and MATLAB's Toolbox MOGA, are employed in this study. Among the used evolutionary optimization algorithms to design the controller for biped robots, the proposed method operates better in the aspect of designing the controller since it provides ample opportunities for designers to choose the most appropriate point based upon the design criteria. Three points are chosen from the nondominated solutions of the obtained Pareto front based on two conflicting objective functions, that is, the normalized summation of angle errors and normalized summation of control effort. Obtained results elucidate the efficiency of the proposed controller in order to control a biped robot.
A Comparison of Two Implementation Methods for the Template Matching Algorithm
Institute of Scientific and Technical Information of China (English)
谢方方; 杨文飞; 陈静; 李芳; 于越
2012-01-01
The template matching algorithm is commonly applied in image registration and video tracking systems. It is first analyzed in detail, and on this basis its performance in two implementation environments, VS2010 and System Generator, is compared. The experimental results show that the template matching algorithm implemented in System Generator is more efficient and has a shorter development cycle.
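Independent of the VS2010 or System Generator implementations compared here, the core of template matching is an exhaustive similarity scan. A minimal sum-of-absolute-differences version:

```python
def match_template(image, template):
    """Exhaustive template matching: slide the template over every valid
    position in the image and return the (row, col) of the placement with
    the smallest sum of absolute differences (SAD)."""
    H, W = len(image), len(image[0])
    h, w = len(template), len(template[0])
    best, best_pos = float("inf"), (0, 0)
    for r in range(H - h + 1):
        for c in range(W - w + 1):
            sad = sum(abs(image[r + i][c + j] - template[i][j])
                      for i in range(h) for j in range(w))
            if sad < best:
                best, best_pos = sad, (r, c)
    return best_pos
```

Hardware implementations (such as the System Generator one above) pipeline exactly this window-by-window accumulation, which is why they gain so much over a sequential CPU loop.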
Energy Technology Data Exchange (ETDEWEB)
Birchler, W.D.; Schilling, S.A.
2001-02-01
The purpose of this report is to demonstrate that modern computer-aided design (CAD), computer-aided manufacturing (CAM), and computer-aided engineering (CAE) systems can be used in the Department of Energy (DOE) Nuclear Weapons Complex (NWC) to design new and remodel old products, fabricate old and new parts, and reproduce legacy data within the inspection uncertainty limits. In this study, two two-dimensional splines are compared with several modern CAD curve-fitting modeling algorithms. The first curve-fitting algorithm is the Wilson-Fowler Spline (WFS), and the second is a parametric cubic spline (PCS). Modern CAD systems usually utilize parametric cubic and/or B-splines.
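For reference, one segment of a parametric cubic spline can be evaluated in a few lines. This sketch uses the Catmull-Rom construction (tangents taken from neighbouring points), which is one common parametric-cubic choice, not necessarily the PCS variant compared in the report:

```python
def catmull_rom(p0, p1, p2, p3, t):
    """One segment of a parametric cubic spline (Catmull-Rom form):
    interpolates p1 at t = 0 and p2 at t = 1, with tangents derived from
    the neighbouring control points p0 and p3. Points are coordinate
    tuples; each coordinate is blended independently."""
    def h(a, b, c, d):
        return 0.5 * ((2 * b) + (-a + c) * t
                      + (2 * a - 5 * b + 4 * c - d) * t * t
                      + (-a + 3 * b - 3 * c + d) * t ** 3)
    return tuple(h(a, b, c, d) for a, b, c, d in zip(p0, p1, p2, p3))
```

Chaining segments over successive point quadruples gives a C1-continuous curve through all interior control points, which is the property inspection-style curve fits rely on.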
Katz, Sandor; Nogradi, Daniel; Torok, Csaba
2016-01-01
We study three possible ways to circumvent the sign problem in the O(3) nonlinear sigma model in 1+1 dimensions. We compare the results of the worm algorithm to complex Langevin and multi-parameter reweighting. Using the worm algorithm, the thermodynamics of the model is investigated, and continuum results are shown for the pressure at different $\mu/T$ values in the range $0-4$. By performing $T=0$ simulations using the worm algorithm, the Silver Blaze phenomenon is reproduced. Regarding complex Langevin, we test various implementations of discretizing the complex Langevin equation. We found that the exponentialized Euler discretization of the Langevin equation gives wrong results for the action and the density at low $T/m$. By performing a continuum extrapolation we found that this discrepancy does not disappear and depends slightly on temperature. The discretization with spherical coordinates performs similarly at low $\mu/T$, but also goes wrong at some higher temperatures at high $\mu/T$. However, a third discre...
Leutheuser, Heike; Schuldhaus, Dominik; Eskofier, Bjoern M
2013-01-01
Insufficient physical activity is the 4th leading risk factor for mortality. Methods for assessing individual daily life activity (DLA) are of major interest for monitoring current health status and providing feedback about individual quality of life. The conventional assessment of DLAs with self-reports raises problems of reliability, validity, and sensitivity. Assessment of DLAs with small, lightweight wearable sensors (e.g. inertial measurement units) provides a reliable and objective method. State-of-the-art human physical activity classification systems differ in, e.g., the number and kind of sensors, the performed activities, and the sampling rate. Hence, it is difficult to compare newly proposed classification algorithms to existing approaches in the literature, and no commonly used dataset exists. We generated a publicly available benchmark dataset for the classification of DLAs. Inertial data were recorded with four sensor nodes, each consisting of a triaxial accelerometer and a triaxial gyroscope, placed on the wrist, hip, chest, and ankle. Further, we developed a novel, hierarchical, multi-sensor based classification system for distinguishing a large set of DLAs. Our hierarchical classification system reached an overall mean classification rate of 89.6% and was diligently compared to existing state-of-the-art algorithms using our benchmark dataset. For future research, the dataset can be used in the evaluation of new classification algorithms and could speed up the process of finding the best performing and most appropriate DLA classification system.
Halopentacenes: Promising Candidates for Organic Semiconductors
Institute of Scientific and Technical Information of China (English)
DU Gong-He; REN Zhao-Yu; GUO Ping; ZHENG Ji-Ming
2009-01-01
We introduce polar substituents such as F, Cl and Br into pentacene to enhance its solubility in common organic solvents while retaining the high charge-carrier mobilities of pentacene. Geometric structures, dipole moments, frontier molecular orbitals, ionization potentials and electron affinities, as well as reorganization energies of these molecules, and of pentacene for comparison, are calculated by density functional theory. The results indicate that halopentacenes have rather small reorganization energies (< 0.2 eV), and that when the substituents are in position 2, or positions 2 and 9, they are polar molecules. Thus we conjecture that they dissolve easily in common organic solvents and are promising candidates for organic semiconductors.
Performance Comparison of Four TSVR-type Learning Algorithms
Institute of Scientific and Technical Information of China (English)
李艳蒙; 范丽亚
2016-01-01
It is well known that the computational complexity and sparsity of learning algorithms based on support vector regression machines (SVRs) are two main factors in analyzing and processing big data, especially high-dimensional data. Accordingly, scholars have done a great deal of research on these two factors and proposed many improved SVR-type learning algorithms. Among these improved algorithms, some share essentially the same starting point and differ only slightly in their solution methods; others start from distinctly different points and lead to different optimization problems, yet are solved in largely similar ways. To understand these improved algorithms more deeply and to be more selective in applications, this paper analyzes and compares the performance of four representative TSVR-type algorithms.
Comparison of three IntMDCT algorithms in audio compression%音频压缩中3种整数型MDCT变换的比较
Institute of Scientific and Technical Information of China (English)
王膂; 伍家松; Senhadji Lotfi; 舒华忠
2012-01-01
In order to improve the computational efficiency of the integer modified discrete cosine transform (IntMDCT), three algorithms, based respectively on the lifting scheme, the modulo transform and the infinity-norm rotation transform, are formulated for computing the 12-point IntMDCT. First, the 12-point MDCT is converted into the 6-point type-IV discrete cosine transform (DCT-IV), which is then factorized into a product of 7 Givens rotation matrices. The Givens rotations are approximated with integer arithmetic by the lifting scheme, the modulo transform and the infinity-norm rotation transform, respectively. Finally, the speed and accuracy of these three IntMDCT algorithms are compared in both lossless and lossy audio compression. The experimental results show that among the three algorithms, the IntMDCT algorithm based on the modulo transform has the highest computation speed, while the IntMDCT algorithm based on the infinity-norm rotation transform has the highest accuracy and achieves the highest signal-to-noise ratio (SNR) in lossy audio compression.
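The lifting-scheme idea described above, a Givens rotation factored into three shear (lifting) steps with rounding after each step so the map stays integer-to-integer and exactly invertible, can be sketched as follows. This is an illustrative reconstruction, not the paper's 12-point IntMDCT code; the function names are ours.

```python
import math

def lift_rotate(x, y, theta):
    """Integer-to-integer approximation of a Givens rotation by angle theta.
    The rotation matrix is factored into three shears; rounding at each
    lifting step keeps the values integer while preserving invertibility."""
    p = -math.tan(theta / 2.0)   # shear coefficient of the outer lifting steps
    s = math.sin(theta)          # shear coefficient of the middle lifting step
    x = x + round(p * y)
    y = y + round(s * x)
    x = x + round(p * y)
    return x, y

def lift_rotate_inv(x, y, theta):
    """Exact inverse: undo the three lifting steps in reverse order."""
    p = -math.tan(theta / 2.0)
    s = math.sin(theta)
    x = x - round(p * y)
    y = y - round(s * x)
    x = x - round(p * y)
    return x, y
```

The inverse is exact despite the rounding, which is precisely what makes lossless (integer) transform coding possible; the forward result only approximates the floating-point rotation.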
Performance Analysis of Cone Detection Algorithms
Mariotti, Letizia
2015-01-01
Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of two popular cone detection algorithms and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the three algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimat...
Candidate Prediction Models and Methods
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik
2005-01-01
This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines...
Candidate gene prioritization with Endeavour.
Tranchevent, Léon-Charles; Ardeshirdavani, Amin; ElShal, Sarah; Alcaide, Daniel; Aerts, Jan; Auboeuf, Didier; Moreau, Yves
2016-07-08
Genomic studies and high-throughput experiments often produce large lists of candidate genes among which only a small fraction are truly relevant to the disease, phenotype or biological process of interest. Gene prioritization tackles this problem by ranking candidate genes by profiling candidates across multiple genomic data sources and integrating this heterogeneous information into a global ranking. We describe an extended version of our gene prioritization method, Endeavour, now available for six species and integrating 75 data sources. The performance (Area Under the Curve) of Endeavour on cross-validation benchmarks using 'gold standard' gene sets varies from 88% (for human phenotypes) to 95% (for worm gene function). In addition, we have also validated our approach using a time-stamped benchmark derived from the Human Phenotype Ontology, which provides a setting close to prospective validation. With this benchmark, using 3854 novel gene-phenotype associations, we observe a performance of 82%. Altogether, our results indicate that this extended version of Endeavour efficiently prioritizes candidate genes. The Endeavour web server is freely available at https://endeavour.esat.kuleuven.be/.
Candidate cave entrances on Mars
Cushing, Glen E.
2012-01-01
This paper presents newly discovered candidate cave entrances into Martian near-surface lava tubes, volcano-tectonic fracture systems, and pit craters and describes their characteristics and exploration possibilities. These candidates are all collapse features that occur either intermittently along laterally continuous trench-like depressions or in the floors of sheer-walled atypical pit craters. As viewed from orbit, locations of most candidates are visibly consistent with known terrestrial features such as tube-fed lava flows, volcano-tectonic fractures, and pit craters, each of which forms by mechanisms that can produce caves. Although we cannot determine subsurface extents of the Martian features discussed here, some may continue unimpeded for many kilometers if terrestrial examples are indeed analogous. The features presented here were identified in images acquired by the Mars Odyssey's Thermal Emission Imaging System visible-wavelength camera, and by the Mars Reconnaissance Orbiter's Context Camera. Select candidates have since been targeted by the High-Resolution Imaging Science Experiment. Martian caves are promising potential sites for future human habitation and astrobiology investigations; understanding their characteristics is critical for long-term mission planning and for developing the necessary exploration technologies.
Directory of Open Access Journals (Sweden)
James E Skinner
2009-08-01
Full Text Available James E Skinner (Vicor Technologies, Inc., Boca Raton, FL, USA), Michael Meyer (Max Planck Institute for Experimental Physiology, Goettingen, Germany), Brian A Nester (Lehigh Valley Hospital, Allentown, PA, USA), Una Geary, Pamela Taggart, Antoinette Mangione, William C Dalsey (Albert Einstein Medical Center, Philadelphia, PA, USA), George Ramalanjaona (North Shore University Hospital, Plainview, NY, USA), Carol Terregino (Cooper Medical Center, Camden, NJ, USA).
Objective: Comparative algorithmic evaluation of heartbeat series in low-to-high risk cardiac patients for the prospective prediction of risk of arrhythmic death (AD).
Background: Heartbeat variation reflects cardiac autonomic function and risk of AD. Indices based on linear stochastic models are independent risk factors for AD in post-myocardial infarction (post-MI) cohorts. Indices based on nonlinear deterministic models have superior predictability in retrospective data.
Methods: Patients were enrolled (N = 397) in three emergency departments upon presenting with chest pain and were determined to be at low-to-high risk of acute MI (>7%). Brief ECGs were recorded (15 min) and R-R intervals assessed by three nonlinear algorithms (PD2i, DFA, and ApEn) and four conventional linear-stochastic measures (SDNN, MNN, 1/f-Slope, LF/HF). Out-of-hospital AD was determined by modified Hinkle–Thaler criteria.
Results: All-cause mortality at one-year follow-up was 10.3%, with 7.7% adjudicated to be AD. The sensitivity and relative risk for predicting AD were highest at all time-points for the nonlinear PD2i algorithm (p ≤ 0.001). The sensitivity at 30 days was 100%, specificity 58%, and relative risk >100 (p ≤ 0.001); sensitivity at 360 days was 95%, specificity 58%, and relative risk >11.4 (p ≤ 0.001).
Conclusions: Heartbeat analysis by the time-dependent nonlinear PD2i algorithm is comparatively the superior test.
Keywords: autonomic nervous system, regulatory systems, electrophysiology, heart rate
Comparing Online Algorithms for Bin Packing Problems
DEFF Research Database (Denmark)
Epstein, Leah; Favrholdt, Lene Monrad; Kohrt, Jens Svalgaard
2012-01-01
The relative worst-order ratio is a measure of the quality of online algorithms. In contrast to the competitive ratio, this measure compares two online algorithms directly instead of using an intermediate comparison with an optimal offline algorithm. In this paper, we apply the relative worst-order ratio to online algorithms for several common variants of the bin packing problem. We mainly consider pairs of algorithms that are not distinguished by the competitive ratio and show that the relative worst-order ratio prefers the intuitively better algorithm of each pair.
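The kind of direct comparison that the relative worst-order ratio formalizes, looking at each algorithm's worst ordering of the same item set rather than comparing to an offline optimum, can be sketched with First-Fit and Best-Fit bin packing. This is an illustrative toy (the `worst_order` helper and the item values are ours, and brute-forcing permutations only works for tiny inputs):

```python
from itertools import permutations

def first_fit(items, cap=1.0):
    """Place each item in the first open bin with room; count bins used."""
    bins = []
    for it in items:
        for i, load in enumerate(bins):
            if load + it <= cap + 1e-9:
                bins[i] = load + it
                break
        else:
            bins.append(it)
    return len(bins)

def best_fit(items, cap=1.0):
    """Place each item in the feasible bin with the least remaining room."""
    bins = []
    for it in items:
        feasible = [(cap - load - it, i) for i, load in enumerate(bins)
                    if load + it <= cap + 1e-9]
        if feasible:
            _, i = min(feasible)
            bins[i] += it
        else:
            bins.append(it)
    return len(bins)

def worst_order(algo, items):
    """Worst-case bin count over all orderings of the same item multiset."""
    return max(algo(list(p)) for p in permutations(items))
```

Comparing `worst_order(first_fit, items)` against `worst_order(best_fit, items)` gives the flavor of the measure; on many small instances the two algorithms tie.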
Institute of Scientific and Technical Information of China (English)
杨胜龙; 张禹; 张衡; 樊伟
2015-01-01
Catch per unit of effort (CPUE) is often used as an index of relative abundance in fisheries stock assessments. However, the trends in nominal CPUE can be influenced by many factors in addition to stock abundance, including the choice of fishing location and target species, and environmental conditions. Therefore, CPUE standardization is a basic task in stock assessment and management. CPUE standardization research is a rapidly developing field, and many statistical models have been used in this field. Improvement of data quality and continued evaluation of model performance should be given priority so as to provide recommendations for management and conservation. In this paper, we evaluated the performance of 5 candidate methods (artificial neural networks (ANN), regression trees (RT), random forests (RF), support vector machines (SVM) and generalized linear models (GLM)) using actual fishery data for bigeye tuna (Thunnus obesus) from the International Commission for the Conservation of Atlantic Tunas (ICCAT). Statistical performances of these 5 models were compared based on mean square error (MSE), mean absolute error (MAE), 3 kinds of correlation coefficients (Pearson's, Kendall's rank and Spearman's rank) and normalized mean square error (NMSE), all measured by the difference between the observed and the corresponding predicted values. The results showed that the performance of the SVM was better than (or equivalent to) the RF, and their MSE, MAE, 3 kinds of correlation coefficients and NMSE were almost the same. These 2 algorithms were superior to the other methods based on the results from the training and testing datasets and all data, except the NMSE value in the training dataset. The NMSE value of the RT was better than the SVM and RF. The performance of the RT was better than that of the ANN, but inferior to that of the SVM and RF except the NMSE value in the training dataset. The performance of the ANN was better than that of the GLM. The...
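The comparison criteria used above are standard textbook definitions; as a minimal illustration (not the study's code), they can be sketched as:

```python
import math

def mse(obs, pred):
    """Mean square error between observed and predicted values."""
    return sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)

def mae(obs, pred):
    """Mean absolute error."""
    return sum(abs(o - p) for o, p in zip(obs, pred)) / len(obs)

def pearson(obs, pred):
    """Pearson correlation coefficient."""
    n = len(obs)
    mo, mp = sum(obs) / n, sum(pred) / n
    cov = sum((o - mo) * (p - mp) for o, p in zip(obs, pred))
    so = math.sqrt(sum((o - mo) ** 2 for o in obs))
    sp = math.sqrt(sum((p - mp) ** 2 for p in pred))
    return cov / (so * sp)

def nmse(obs, pred):
    """Normalized MSE: MSE divided by the variance of the observations,
    one common normalization convention (the paper's may differ)."""
    n = len(obs)
    mo = sum(obs) / n
    var = sum((o - mo) ** 2 for o in obs) / n
    return mse(obs, pred) / var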
A star identification algorithm for large FOV observations
Duan, Yu; Niu, Zhaodong; Chen, Zengping
2016-10-01
Due to the broader extent of observation and higher detection probability of space targets, large FOV (field of view) optical instruments are widely used in astronomical applications. However, the high density of observed stars and the distortion of the optical system often bring about inaccuracy in star locations, so in large FOV observations many conventional star identification algorithms do not perform very well. In this paper, we propose a star identification method with a low requirement for observation accuracy and thus suitable for large FOV circumstances. The proposed method includes two stages. The former is based on the match group algorithm, in addition to which we exploit the information of differential angles of inclination for verification. The inclinations of satellite stars are computed by reference to the selected pole stars. Then we obtain a set of identified stars for further recognition. The latter stage involves four steps. First, we derive the relationship between the rectangular coordinates of catalog stars and sensor stars with the identified locations obtained. Second, we transform the sensor coordinates to the catalog coordinates and find the catalog stars at close range as candidates. Third, we calculate the angle of inclination of each unidentified sensor star in relation to the nearest previously identified one, and the angular separation between them as well, to compare with those of the candidates. At last, candidates satisfying the limitations are considered the appropriate correspondences. The experimental results show that in large FOV observations, the proposed method presents better performance in comparison with several typical star identification methods in the open literature.
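The angular-separation computation that such matching steps rely on is a standard spherical geometry formula; a minimal sketch (function names are ours, not the paper's):

```python
import math

def radec_to_unit(ra_deg, dec_deg):
    """Convert right ascension / declination (degrees) to a unit vector."""
    ra, dec = math.radians(ra_deg), math.radians(dec_deg)
    return (math.cos(dec) * math.cos(ra),
            math.cos(dec) * math.sin(ra),
            math.sin(dec))

def angular_separation(a, b):
    """Angle in degrees between two stars given as (ra, dec) in degrees."""
    u, v = radec_to_unit(*a), radec_to_unit(*b)
    # clamp the dot product to guard against rounding outside [-1, 1]
    dot = max(-1.0, min(1.0, sum(x * y for x, y in zip(u, v))))
    return math.degrees(math.acos(dot))
```

For very small separations a haversine-style formula is numerically better behaved than the plain arccos of the dot product.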
ISINA: INTEGRAL Source Identification Network Algorithm
Scaringi, S; Clark, D J; Dean, A J; Hill, A B; McBride, V A; Shaw, S E
2008-01-01
We give an overview of ISINA: INTEGRAL Source Identification Network Algorithm. This machine learning algorithm, using Random Forests, is applied to the IBIS/ISGRI dataset in order to ease the production of unbiased future soft gamma-ray source catalogues. First we introduce the dataset and the problems encountered when dealing with images obtained using the coded mask technique. The initial step of source candidate searching is introduced and an initial candidate list is created. A description of the feature extraction on the initial candidate list is then given, together with feature merging for these candidates. Three training and testing sets are created in order to deal with the diverse timescales encountered when dealing with the gamma-ray sky. Three independent Random Forests are built: one dealing with faint persistent source recognition, one dealing with strong persistent sources and a final one dealing with transients. For the latter, a new transient detection technique is introduced and described...
Bergamino, Maurizio; Barletta, Laura; Castellan, Lucio; Mancardi, Gianluigi; Roccatagliata, Luca
2015-12-01
Dynamic contrast-enhanced magnetic resonance imaging (DCE-MRI) is a well-established technique for studying blood-brain barrier (BBB) permeability that allows measurements to be made for a wide range of brain pathologies, including multiple sclerosis and brain tumors (BT). This latter application is particularly interesting, because high-grade gliomas are characterized by increased microvascular permeability and a loss of BBB function due to the structural abnormalities of the endothelial layer. In this study, we compared the extended Tofts-Kety (ETK) model and an extended derived class from phenomenological universalities, called EU1, in 30 adult patients with different BT grades. A total of 75 regions of interest were manually drawn on the MRI and subsequently analyzed using the ETK and EU1 algorithms. Significant linear correlations were found among the parameters obtained by these two algorithms. The mean R^2 values obtained using the ETK and EU1 models for high-grade tumors were 0.81 and 0.91, while those for low-grade tumors were 0.82 and 0.85, respectively; therefore, these two models are equivalent. In conclusion, we can confirm that the application of the EU1 model to the DCE-MRI experimental data might be a useful alternative to pharmacokinetic models in the study of BT, because the analytic results can be generated more quickly and easily than with the ETK model.
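The extended Tofts-Kety model referenced above has the standard form Ct(t) = vp*Cp(t) + Ktrans * integral from 0 to t of Cp(tau)*exp(-kep*(t - tau)) dtau. A discretized sketch of that convolution (parameter values below are hypothetical, not from the study):

```python
import math

def etk_concentration(cp, dt, ktrans, kep, vp):
    """Discretized extended Tofts-Kety model.
    cp: plasma concentration samples (arterial input function),
    dt: sampling interval; returns tissue concentration samples."""
    ct = []
    for i in range(len(cp)):
        # rectangular-rule approximation of the exponential convolution
        conv = sum(cp[j] * math.exp(-kep * (i - j) * dt)
                   for j in range(i + 1)) * dt
        ct.append(vp * cp[i] + ktrans * conv)
    return ct
```

With a constant input function the curve rises toward the plateau vp + Ktrans/kep, a quick sanity check on the discretization.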
Directory of Open Access Journals (Sweden)
V.B.Kirubanand
2010-03-01
Full Text Available The main theme of this paper is to evaluate the performance of Hub, Switch and Bluetooth technologies using the Queueing Petri-net (QPN) model and the Markov algorithm, with Steganography for security. This paper mainly focuses on the comparison of Hub, Switch and Bluetooth technologies in terms of service rate and arrival rate by using the Markov algorithm (M/M(1,b)/1). When comparing the service rates of the Hub network, the Switch network and the Bluetooth technology, it has been found that the service rate of the Bluetooth technology is very efficient for implementation. The values obtained from the Bluetooth technology can be used for calculating the performance of other wireless technologies. QPNs facilitate the integration of both hardware and software aspects of the system behavior in the improved model. The purpose of Steganography is to send hidden information from one system to another through the Bluetooth technology with security measures. Queueing Petri Nets are very powerful as a performance analysis and prediction tool. By demonstrating the power of QPNs as a modeling paradigm in forthcoming technologies we hope to motivate further research in this area.
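For reference, the steady-state formulas for the plain M/M/1 special case of the queueing model above can be sketched as follows. These are textbook results; the paper's M/M(1,b)/1 batch-service model is more involved:

```python
def mm1_metrics(lam, mu):
    """Steady-state metrics for an M/M/1 queue with arrival rate lam and
    service rate mu (the queue is stable only when lam < mu)."""
    if lam >= mu:
        raise ValueError("unstable queue: arrival rate must be below service rate")
    rho = lam / mu          # server utilization
    L = rho / (1 - rho)     # mean number of customers in the system
    W = 1 / (mu - lam)      # mean time in the system (Little's law: L = lam * W)
    Wq = rho / (mu - lam)   # mean waiting time in the queue
    return {"rho": rho, "L": L, "W": W, "Wq": Wq}
```

For example, with arrivals at rate 2 and service at rate 3, utilization is 2/3 and a customer spends on average one time unit in the system.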
Directory of Open Access Journals (Sweden)
Żaneta Kaszta
2016-09-01
Full Text Available Separation of savanna land cover components is challenging due to the high heterogeneity of this landscape and spectral similarity of compositionally different vegetation types. In this study, we tested the usability of very high spatial and spectral resolution WorldView-2 (WV-2) imagery to classify land cover components of African savanna in wet and dry season. We compared the performance of Object-Based Image Analysis (OBIA) and the pixel-based approach with several algorithms: k-nearest neighbor (k-NN), maximum likelihood (ML), random forests (RF), classification and regression trees (CART) and support vector machines (SVM). Results showed that classifications of WV-2 imagery produce high accuracy results (>77%) regardless of the applied classification approach. However, OBIA had a significantly higher accuracy for almost every classifier, with the highest overall accuracy score of 93%. Amongst tested classifiers, SVM and RF provided the highest accuracies. Overall, classifications of the wet season image provided better results, with 93% for RF. However, considering woody leaf-off conditions, the dry season classification also performed well, with an overall accuracy of 83% (SVM) and high producer accuracy for the tree cover (91%). Our findings demonstrate the potential of imagery like WorldView-2 with OBIA and advanced supervised machine-learning algorithms in seasonal fine-scale land cover classification of African savanna.
Energy Technology Data Exchange (ETDEWEB)
Chang, Liyun, E-mail: cliyun2000@gmail.com [Department of Medical Imaging and Radiological Sciences, I-Shou University, Kaohsiung, Taiwan (China); Ho, Sheng-Yow [Department of Radiation Oncology, Chi Mei Medical Center, Liouying, Tainan, Taiwan (China); Lee, Tsair-Fwu [Medical Physics and Informatics Laboratory, Department of Electronics Engineering, National Kaohsiung University of Applied Sciences, Kaohsiung, Taiwan (China); Yeh, Shyh-An [Department of Medical Imaging and Radiological Sciences, I-Shou University, Kaohsiung, Taiwan (China); Department of Radiation Oncology, E-Da Hospital, Kaohsiung, Taiwan (China); Ding, Hueisch-Jy [Department of Medical Imaging and Radiological Sciences, I-Shou University, Kaohsiung, Taiwan (China); Chen, Pang-Yu, E-mail: pangyuchen@yahoo.com.tw [Department of Radiation Oncology, Sinlau Christian Hospital, Tainan, Taiwan (China)
2015-03-21
EBT2 film is a convenient dosimetry quality-assurance (QA) tool with high 2D dosimetry resolution and a self-development property for use in verifications of radiation therapy treatment planning and special projects; however, the user will suffer from a relatively high degree of uncertainty (more than ±6% by Hartmann et al. [29]) and the trouble of cutting one piece of film into small pieces and then reintegrating them each time. To avoid this tedious cutting work, and to save calibration time and budget, a dose range analysis is presented in this study for EBT2 film calibration using the Percentage-Depth-Dose (PDD) method. Different combinations of the three dose ranges, 9-26 cGy, 33-97 cGy and 109-320 cGy, with two types of curve-fitting algorithms, converting film pixel values and net optical densities into doses, were tested and compared. With the lowest error and an acceptable inaccuracy of less than 3 cGy for the clinical dose range (9-320 cGy), a single film calibrated by the net optical density algorithm with the dose range 109-320 cGy was suggested for routine calibration.
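The net-optical-density conversion and a two-parameter dose fit of the kind described above can be sketched as follows. The functional form dose = a*netOD + b*netOD^n is a common choice in radiochromic film dosimetry; the exponent and the coefficients below are illustrative, not the paper's values:

```python
import math

def net_od(pv_unexposed, pv_exposed):
    """Net optical density from scanner pixel values of the same film
    region before and after exposure."""
    return math.log10(pv_unexposed / pv_exposed)

def fit_dose_curve(net_ods, doses, n=2.5):
    """Least-squares fit of dose = a*netOD + b*netOD**n.
    With n fixed, the model is linear in (a, b), so the normal equations
    reduce to a 2x2 system solved here by Cramer's rule."""
    s11 = sum(x * x for x in net_ods)
    s12 = sum(x * x ** n for x in net_ods)
    s22 = sum(x ** n * x ** n for x in net_ods)
    t1 = sum(x * d for x, d in zip(net_ods, doses))
    t2 = sum(x ** n * d for x, d in zip(net_ods, doses))
    det = s11 * s22 - s12 * s12
    a = (t1 * s22 - t2 * s12) / det
    b = (s11 * t2 - s12 * t1) / det
    return a, b
```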
Giacometti, Achille; Gögelein, Christoph; Lado, Fred; Sciortino, Francesco; Ferrari, Silvano; Pastore, Giorgio
2014-03-07
Building upon past work on the phase diagram of Janus fluids [F. Sciortino, A. Giacometti, and G. Pastore, Phys. Rev. Lett. 103, 237801 (2009)], we perform a detailed study of integral equation theory of the Kern-Frenkel potential with coverage that is tuned from the isotropic square-well fluid to the Janus limit. An improved algorithm for the reference hypernetted-chain (RHNC) equation for this problem is implemented that significantly extends the range of applicability of RHNC. Results for both structure and thermodynamics are presented and compared with numerical simulations. Unlike previous attempts, this algorithm is shown to be stable down to the Janus limit, thus paving the way for analyzing the frustration mechanism characteristic of the gas-liquid transition in the Janus system. The results are also compared with Barker-Henderson thermodynamic perturbation theory on the same model. We then discuss the pros and cons of both approaches within a unified treatment. On balance, RHNC integral equation theory, even with an isotropic hard-sphere reference system, is found to be a good compromise between accuracy of the results, computational effort, and uniform quality to tackle self-assembly processes in patchy colloids of complex nature. Further improvement in RHNC however clearly requires an anisotropic reference bridge function.
Directory of Open Access Journals (Sweden)
Xiaolei Yu
2014-10-01
Full Text Available Accurate inversion of land surface geo/biophysical variables from remote sensing data for earth observation applications is an essential and challenging topic for global change research. Land surface temperature (LST) is one of the key parameters in the physics of earth surface processes from local to global scales. The importance of LST is being increasingly recognized and there is a strong interest in developing methodologies to measure LST from space. The Landsat 8 Thermal Infrared Sensor (TIRS) is the newest thermal infrared sensor for the Landsat project, providing two adjacent thermal bands, which is of great benefit for LST inversion. In this paper, we compared three different approaches for LST inversion from TIRS: the radiative transfer equation-based method, the split-window (SW) algorithm and the single-channel (SC) method. Four selected energy balance monitoring sites from the Surface Radiation Budget Network (SURFRAD) were used for validation, combined with the MODIS 8-day emissivity product. For the investigated sites and scenes, results show that the LST inverted from the radiative transfer equation-based method using band 10 has the highest accuracy, with RMSE lower than 1 K, while the SW algorithm has moderate accuracy and the SC method has the lowest accuracy.
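The final step of the radiative transfer equation-based method inverts Planck's law using the sensor's thermal conversion constants. A sketch with approximate Landsat 8 TIRS band 10 constants follows; the constants are taken from typical product metadata and should be verified against the scene's own MTL file, and the variable names are ours:

```python
import math

# Approximate thermal conversion constants for Landsat 8 TIRS band 10
# (verify against the scene's MTL metadata file)
K1 = 774.8853   # W / (m^2 * sr * um)
K2 = 1321.0789  # Kelvin

def brightness_temperature(radiance):
    """Invert Planck's law to get at-sensor brightness temperature (K)."""
    return K2 / math.log(K1 / radiance + 1.0)

def surface_radiance(l_sensor, tau, l_up, l_down, emissivity):
    """Radiative transfer equation: recover surface-leaving radiance from
    at-sensor radiance, given atmospheric transmittance (tau), upwelling
    and downwelling path radiances, and surface emissivity."""
    return (l_sensor - l_up - tau * (1.0 - emissivity) * l_down) / (tau * emissivity)
```

Applying `brightness_temperature` to the output of `surface_radiance` yields the LST estimate for that pixel.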
Automatic Classification of Kepler Planetary Transit Candidates
McCauliff, Sean D.; Jenkins, Jon M.; Catanzarite, Joseph; Burke, Christopher J.; Coughlin, Jeffrey L.; Twicken, Joseph D.; Tenenbaum, Peter; Seader, Shawn; Li, Jie; Cote, Miles
2014-01-01
In the first three years of operation the Kepler mission found 3,697 planet candidates from a set of 18,406 transit-like features detected on over 200,000 distinct stars. Vetting candidate signals manually by inspecting light curves and other diagnostic information is a labor intensive effort. Additionally, this classification methodology does not yield any information about the quality of planet candidates; all candidates are as credible as any other candidate. The torrent of exoplanet disco...
Solving Maximal Clique Problem through Genetic Algorithm
Rajawat, Shalini; Hemrajani, Naveen; Menghani, Ekta
2010-11-01
Genetic algorithm is one of the most interesting heuristic search techniques. It depends basically on three operations: selection, crossover and mutation. The outcome of the three operations is a new population for the next generation, and these operations are repeated until the termination condition is reached. All the operations in the algorithm are accessible with today's molecular biotechnology. The simulations show that with this new computing algorithm, it is possible to get a solution from a very small initial data pool, avoiding enumerating all candidate solutions. For randomly generated problems, the genetic algorithm can give the correct solution within a few cycles with high probability.
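The selection/crossover/mutation loop described above can be sketched for the maximal clique problem as follows. This is an illustrative repair-based GA of our own, not the paper's implementation; the repair step drops conflicting vertices and then greedily extends, so every individual is always a valid clique:

```python
import random

def ga_max_clique(adj, pop_size=40, generations=60, seed=1):
    """Genetic-algorithm sketch for the maximal clique problem.
    adj: dict mapping each vertex to the set of its neighbours."""
    rng = random.Random(seed)
    verts = sorted(adj)

    def repair(subset):
        # drop vertices until the subset is a clique, then extend greedily
        s = set(subset)
        while True:
            bad = [v for v in s if not (s - {v}) <= adj[v]]
            if not bad:
                break
            s.discard(rng.choice(bad))
        for v in rng.sample(verts, len(verts)):
            if v not in s and s <= adj[v]:
                s.add(v)
        return frozenset(s)

    pop = [repair(rng.sample(verts, rng.randint(1, len(verts))))
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=len, reverse=True)
        keep = pop[: pop_size // 2]           # selection: keep the fitter half
        children = []
        while len(keep) + len(children) < pop_size:
            a, b = rng.sample(keep, 2)
            child = set(a) | set(b)           # crossover: merge two cliques
            if rng.random() < 0.3:            # mutation: toss in a random vertex
                child.add(rng.choice(verts))
            children.append(repair(child))
        pop = keep + children
    return max(pop, key=len)
```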
Undercover Stars Among Exoplanet Candidates
2005-03-01
Very Large Telescope Finds Planet-Sized Transiting Star Summary An international team of astronomers have accurately determined the radius and mass of the smallest core-burning star known until now. The observations were performed in March 2004 with the FLAMES multi-fibre spectrograph on the 8.2-m VLT Kueyen telescope at the ESO Paranal Observatory (Chile). They are part of a large programme aimed at measuring accurate radial velocities for sixty stars for which a temporary brightness "dip" has been detected during the OGLE survey. The astronomers find that the dip seen in the light curve of the star known as OGLE-TR-122 is caused by a very small stellar companion, eclipsing this solar-like star once every 7.3 days. This companion is 96 times heavier than planet Jupiter but only 16% larger. It is the first time that direct observations demonstrate that stars less massive than 1/10th of the solar mass are of nearly the same size as giant planets. This fact will obviously have to be taken into account during the current search for transiting exoplanets. In addition, the observations with the Very Large Telescope have led to the discovery of seven new eclipsing binaries, that harbour stars with masses below one-third the mass of the Sun, a real bonanza for the astronomers. PR Photo 06a/05: Brightness "Dip" and Velocity Variations of OGLE-TR-122. PR Photo 06b/05: Properties of Low-Mass Stars and Planets. PR Photo 06c/05: Comparison Between OGLE-TR-122b, Jupiter and the Sun. The OGLE Survey When a planet happens to pass in front of its parent star (as seen from the Earth), it blocks a small fraction of the star's light from our view [1]. These "planetary transits" are of great interest as they allow astronomers to measure in a unique way the mass and the radius of exoplanets. Several surveys are therefore underway which attempt to find these faint signatures of other worlds. One of these programmes is the OGLE survey which was originally devised to detect microlensing
A Novel Algorithm for Finding Interspersed Repeat Regions
Institute of Scientific and Technical Information of China (English)
Dongdong Li; Zhengzhi Wang; Qingshan Ni
2004-01-01
The analysis of repeats in DNA sequences is an important subject in bioinformatics. In this paper, we propose a novel projection-assemble algorithm to find unknown interspersed repeats in DNA sequences. The algorithm employs a random projection algorithm to obtain a candidate fragment set, and an exhaustive search algorithm to examine each pair of fragments from the candidate fragment set for potential linkage, and then assemble them together. The complexity of our projection-assemble algorithm is nearly linear in the length of the genome sequence, and its memory usage is limited by the hardware. We tested our algorithm with both simulated data and real biological data, and the results show that our projection-assemble algorithm is efficient. By means of this algorithm, we found an unlabeled repeat region that occurs five times in the Escherichia coli genome, with a length of more than 5,000 bp and a mismatch probability of less than 4%.
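The random-projection step described above can be sketched as follows: hash every L-mer by the letters at k randomly chosen positions, and treat L-mers that land in the same bucket as candidate repeat pairs. This is an illustrative reconstruction of the general technique, not the paper's code:

```python
import random
from collections import defaultdict

def candidate_repeat_pairs(seq, L=12, k=6, trials=5, seed=0):
    """Random-projection candidate generation for interspersed repeats.
    Returns pairs of start positions whose L-mers share a projected key
    in at least one trial; mismatches outside the projected positions
    are tolerated, which is the point of projecting."""
    rng = random.Random(seed)
    candidates = set()
    for _ in range(trials):
        positions = sorted(rng.sample(range(L), k))
        buckets = defaultdict(list)
        for i in range(len(seq) - L + 1):
            key = "".join(seq[i + p] for p in positions)
            buckets[key].append(i)
        for idxs in buckets.values():
            for a in range(len(idxs)):
                for b in range(a + 1, len(idxs)):
                    if idxs[b] - idxs[a] >= L:   # skip trivially overlapping pairs
                        candidates.add((idxs[a], idxs[b]))
    return candidates
```

A full pipeline would then verify each candidate pair by alignment and assemble overlapping verified fragments into repeat regions.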
Kwon, Ji-Wook; Kim, Jin Hyo; Seo, Jiwon
2015-05-06
This paper proposes a Multiple Leader Candidate (MLC) structure and a Competitive Position Allocation (CPA) algorithm that are applicable to various applications including environmental sensing. Unlike previous formation structures, such as virtual-leader and actual-leader structures with position allocation based on a rigid assignment or on optimization, a formation employing the proposed MLC structure and CPA algorithm is robust against the fault (or disappearance) of member robots and reduces the entire cost. In the MLC structure, a leader of the entire system is chosen among leader candidate robots. The CPA algorithm is a decentralized position allocation algorithm that assigns the robots to the vertices of the formation via competition among adjacent robots. Numerical simulations and experimental results are included to show the feasibility and the performance of the multiple robot system employing the proposed MLC structure and the CPA algorithm.
Enthalpy screen of drug candidates.
Schön, Arne; Freire, Ernesto
2016-11-15
The enthalpic and entropic contributions to the binding affinity of drug candidates have been acknowledged to be important determinants of the quality of a drug molecule. These quantities, usually summarized in the thermodynamic signature, provide a rapid assessment of the forces that drive the binding of a ligand. Having access to the thermodynamic signature in the early stages of the drug discovery process will provide critical information towards the selection of the best drug candidates for development. In this paper, the Enthalpy Screen technique is presented. The enthalpy screen allows fast and accurate determination of the binding enthalpy for hundreds of ligands. As such, it appears to be ideally suited to aid in the ranking of the hundreds of hits that are usually identified after standard high throughput screening.
Leishmaniasis: vaccine candidates and perspectives.
Singh, Bhawana; Sundar, Shyam
2012-06-06
Leishmania is a protozoan parasite and a causative agent of the various clinical forms of leishmaniasis. High cost, resistance and toxic side effects of traditional drugs entail identification and development of therapeutic alternatives. A sound understanding of parasite biology is key for identifying novel drug targets that can induce cell-mediated immunity (mainly CD4+ and CD8+ IFN-gamma mediated responses) polarized towards a Th1 response. These aspects are important in designing a new vaccine, along with consideration of the candidates with respect to their ability to raise a memory response in order to improve vaccine performance. This review is an effort to identify molecules according to their homology with the host and their ability to be used as potent vaccine candidates.
Song, Miao
2009-01-01
We implement for comparative purposes the Feynman algorithm within a C++-based framework for two-layer uniform facet elastic object for real-time softbody simulation based on physics modeling methods. To facilitate the comparison, we implement initial timing measurements on the same hardware against that of Euler integrator in the softbody framework by varying different algorithm parameters. Due to a relatively large number of such variations we implement a GLUI-based user-interface to allow for much more finer control over the simulation process at real-time, which was lacking completely in the previous versions of the framework. We show our currents results based on the enhanced framework. The two-layered elastic object consists of inner and outer elastic mass-spring surfaces and compressible internal pressure. The density of the inner layer can be set differently from the density of the outer layer; the motion of the inner layer can be opposite to the motion of the outer layer. These special features, whic...
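The Euler integrator used as the timing baseline above can be sketched for a mass-spring system as follows. This is an illustrative 1D version with hypothetical parameters, not the framework's two-layered 3D C++ implementation:

```python
def euler_step(pos, vel, mass, springs, dt, damping=0.1):
    """One explicit Euler step for a 1D mass-spring system.
    springs: list of (i, j, rest_length, stiffness) tuples."""
    forces = [0.0] * len(pos)
    for i, j, rest, k in springs:
        d = pos[j] - pos[i]
        # Hooke's law along the line between the two masses
        f = k * (abs(d) - rest) * (1.0 if d > 0 else -1.0)
        forces[i] += f
        forces[j] -= f
    # explicit Euler: advance positions with the current velocities,
    # then velocities with the current forces (plus viscous damping)
    new_pos = [p + dt * v for p, v in zip(pos, vel)]
    new_vel = [v + dt * (f / mass - damping * v) for v, f in zip(vel, forces)]
    return new_pos, new_vel
```

Explicit Euler is cheap but only conditionally stable, which is why frameworks like the one above compare it against alternative integrators at different time steps.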
El-habashi, A.; Ahmed, S.
2015-10-01
New approaches are described that use the Ocean Color Remote Sensing Reflectance readings (OC Rrs) available from the existing Visible Infrared Imaging Radiometer Suite (VIIRS) bands to detect and retrieve Karenia brevis (KB) Harmful Algal Blooms (HABs) that frequently plague the coasts of the West Florida Shelf (WFS). Unfortunately, VIIRS, unlike MODIS, does not have a 678 nm channel to detect chlorophyll fluorescence, which is used with MODIS in the normalized fluorescence line height (nFLH) algorithm that has been shown to help in effectively detecting and tracking KB HABs. We present here the use of neural network (NN) algorithms for KB HABs retrievals in the WFS. These NNs, previously reported by us, were trained, using a wide range of suitably parametrized synthetic data typical of coastal waters, to form a multiband inversion algorithm which models the relationship between Rrs values at the 486, 551 and 671 nm VIIRS bands and the values of phytoplankton absorption (aph), CDOM absorption (ag), non-algal particle (NAP) absorption (aNAP) and the particulate backscattering coefficient (bbp), all at 443 nm, and permits retrievals of these parameters. We use the NN to retrieve aph443 in the WFS. The retrieved aph443 values are then filtered by applying known limiting conditions on minimum chlorophyll concentration [Chla] and the low backscatter properties associated with KB HABs in the WFS, thereby identifying, delineating and quantifying the aph443 values, and hence [Chla] concentrations, representing KB HABs. Comparisons with in-situ measurements and other techniques, including MODIS nFLH, confirm the viability of both the NN retrievals and the filtering approaches devised.
Directory of Open Access Journals (Sweden)
Wenjuan Li
2015-11-01
Full Text Available The leaf area index (LAI) and the fraction of photosynthetically active radiation absorbed by green vegetation (FAPAR) are essential climatic variables in surface process models. FCOVER is also important to separate vegetation and soil for energy balance processes. Currently, several LAI, FAPAR and FCOVER satellite products are derived at moderate to coarse spatial resolution. The launch of Sentinel-2 in 2015 will provide data at decametric resolution with a high revisit frequency, allowing the canopy functioning to be quantified at local to regional scales. The aim of this study is thus to evaluate the performance of a neural-network-based algorithm to derive LAI, FAPAR and FCOVER products at decametric spatial resolution and high temporal sampling. The algorithm is generic, i.e., it is applied without any knowledge of the landcover. A time series of high spatial resolution SPOT4_HRVIR (16 scenes) and Landsat 8 (18 scenes) images acquired in 2013 over the southwestern France site were used to generate the LAI, FAPAR and FCOVER products. For each sensor and each biophysical variable, a neural network was first trained over PROSPECT+SAIL radiative transfer model simulations of top-of-canopy reflectance data for the green, red, near-infrared and shortwave infrared bands. Our results show a good spatial and temporal consistency between the variables derived from both sensors: almost half the pixels show an absolute difference between SPOT and Landsat estimates of lower than 0.5 unit for LAI, and 0.05 unit for FAPAR and FCOVER. Finally, downward-looking digital hemispherical camera measurements were acquired over the main land cover types to validate the accuracy of the products. Results show that the derived products are strongly correlated with the field measurements (R2 > 0.79), corresponding to RMSE = 0.49 for LAI, RMSE = 0.10 (RMSE = 0.12) for black-sky (white-sky) FAPAR and RMSE = 0.15 for FCOVER. It is concluded that the proposed generic algorithm provides a good
Toward organometallic antischistosomal drug candidates.
Hess, Jeannine; Keiser, Jennifer; Gasser, Gilles
2015-01-01
In recent years, there has been growing interest in the use of novel approaches for the treatment of parasitic diseases such as schistosomiasis. Among the different approaches used, organometallic compounds were found to offer unique opportunities in the design of antiparasitic drug candidates. A ferrocenyl derivative, namely ferroquine, has even entered clinical trials as a novel antimalarial. In this short review, we report on studies describing the use of organometallic compounds against schistosomiasis.
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with a mesh adaptive direct search and a real-coded genetic class of algorithms. The aim is to estimate the real-valued parameters and non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
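The information-criterion idea behind this model selection can be illustrated briefly. The paper selects full ARMA(p, q) orders through a Kalman-filter likelihood inside MINLP solvers; the sketch below simplifies drastically to least-squares AR(p) fits scored by AIC on synthetic data, with all names and numbers ours.

```python
import numpy as np

def ar_aic(x, p):
    """Fit AR(p) by ordinary least squares; return Akaike's criterion."""
    n = len(x)
    y = x[p:]
    lags = [x[p - k: n - k] for k in range(1, p + 1)]
    X = np.column_stack([np.ones(n - p)] + lags)
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    rss = float(np.sum((y - X @ coef) ** 2))
    k = p + 1                                   # parameters incl. intercept
    return (n - p) * np.log(rss / (n - p)) + 2 * k

# Synthetic AR(2) data: x_t = 0.6 x_{t-1} - 0.3 x_{t-2} + e_t.
rng = np.random.default_rng(0)
e = rng.standard_normal(500)
x = np.zeros(500)
x[:2] = e[:2]
for t in range(2, 500):
    x[t] = 0.6 * x[t - 1] - 0.3 * x[t - 2] + e[t]

aics = {p: ar_aic(x, p) for p in range(6)}
best_p = min(aics, key=aics.get)                # order with minimum AIC
```

The MINLP formulation in the article additionally treats (p, q) as integer decision variables with stability and invertibility constraints, which the brute-force loop above sidesteps.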
Directory of Open Access Journals (Sweden)
Hamed Piarehzadeh
2012-08-01
Full Text Available This study attempts optimal distributed generation (DG) allocation for voltage stability improvement in radial distribution systems. Voltage instability implies an uncontrolled decrease in voltage triggered by a disturbance, leading to voltage collapse, and is primarily caused by dynamics connected with the load. Based on the time spectrum of the incident phenomena, the instability is divided into steady-state and transient voltage instability. The analysis is accomplished using a steady-state voltage stability index which can be evaluated at each node of the distribution system. Several optimal capacities and locations are used to check these results. The location of the DG has the main effect on the voltage stability of the system. The effects of location and capacity on improving steady-state voltage stability in radial distribution systems are examined through the Harmony Search Algorithm (HSA), and finally the results are compared to Particle Swarm Optimization (PSO) in terms of speed, convergence and accuracy.
Comparison and Analysis of Traffic Sign Recognition Algorithms
Institute of Scientific and Technical Information of China (English)
钟玲; 于雅洁; 张志佳; 靳永超
2016-01-01
Traffic sign recognition is a typical machine vision application for which a variety of machine vision algorithms have been widely used. A convolutional neural network can avoid the explicit hand-crafted feature extraction process. This paper therefore introduces a convolutional neural network for traffic sign recognition and carries out comparative experiments with a BP neural network and a support vector machine. Analysis of the experimental results shows that the convolutional neural network is significantly better than the other two algorithms in both recognition rate and training speed, and achieves the best recognition performance.
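The "learned filter" idea that lets a CNN avoid hand-crafted features can be sketched with a single convolution layer. This is an illustration only, not the paper's network: the kernel is fixed by hand here, whereas a CNN would learn its values during training.

```python
import numpy as np

def conv2d_valid(img, kernel):
    """Valid 2D cross-correlation followed by a ReLU nonlinearity."""
    kh, kw = kernel.shape
    oh, ow = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return np.maximum(out, 0.0)   # ReLU

img = np.zeros((5, 5))
img[:, 2] = 1.0                   # a vertical edge, as on a sign's border
vertical_edge = np.array([[-1.0, 0.0, 1.0]] * 3)
fmap = conv2d_valid(img, vertical_edge)
```

Stacking many such layers, with kernels adjusted by backpropagation, yields the feature hierarchy that replaces manual feature engineering.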
Calzado, A; Geleijns, J; Joemai, R M S; Veldkamp, W J H
2014-01-01
Objective: To compare low-contrast detectability (LCDet) performance between a model [non–pre-whitening matched filter with an eye filter (NPWE)] and human observers in CT images reconstructed with filtered back projection (FBP) and iterative [adaptive iterative dose reduction three-dimensional (AIDR 3D; Toshiba Medical Systems, Zoetermeer, Netherlands)] algorithms. Methods: Images of the Catphan® phantom (Phantom Laboratories, New York, NY) were acquired with Aquilion ONE™ 320-detector row CT (Toshiba Medical Systems, Tokyo, Japan) at five tube current levels (20–500 mA range) and reconstructed with FBP and AIDR 3D. Samples containing either low-contrast objects (diameters, 2–15 mm) or background were extracted and analysed by the NPWE model and four human observers in a two-alternative forced choice detection task study. Proportion correct (PC) values were obtained for each analysed object and used to compare human and model observer performances. An efficiency factor (η) was calculated to normalize NPWE to human results. Results: Human and NPWE model PC values (normalized by the efficiency, η = 0.44) were highly correlated for the whole dose range. The Pearson's product-moment correlation coefficients (95% confidence interval) between human and NPWE were 0.984 (0.972–0.991) for AIDR 3D and 0.984 (0.971–0.991) for FBP, respectively. Bland–Altman plots based on PC results showed excellent agreement between human and NPWE [mean absolute difference 0.5 ± 0.4%; range of differences (−4.7%, 5.6%)]. Conclusion: The NPWE model observer can predict human performance in LCDet tasks in phantom CT images reconstructed with FBP and AIDR 3D algorithms at different dose levels. Advances in knowledge: Quantitative assessment of LCDet in CT can accurately be performed using software based on a model observer. PMID:24837275
Explicit filtering of building blocks for genetic algorithms
Kemenade, C.H.M. van
1996-01-01
Genetic algorithms are often applied to building block problems. We have developed a simple filtering algorithm that can locate building blocks within a bit-string, and does not make assumptions regarding the linkage of the bits. A comparison between the filtering algorithm and genetic algorithms re
A heuristic path-estimating algorithm for large-scale real-time traffic information calculating
Institute of Scientific and Technical Information of China (English)
2008-01-01
Because the original Global Positioning System (GPS) data in Floating Car Data suffer from accuracy problems, this paper proposes a heuristic path-estimating algorithm for large-scale real-time traffic information calculation. It uses a heuristic search method, imposes restrictions through geometric operations, and compares the vectors formed by the vehicular GPS points with a special road network model to search the set of candidate vehicular travel routes. Finally, it chooses the optimal one according to weight. Experimental results indicate that the algorithm achieves considerable accuracy (over 92.7%) and computational speed (up to 8000 GPS records per second) when handling GPS tracking data whose sampling interval is longer than 1 min, even under complex road network conditions.
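The weighting step can be sketched with simple geometry. This is a minimal stand-in, assuming a mean point-to-segment distance as the weight; the paper's actual heuristic combines more restrictions, and the road names and coordinates below are invented.

```python
import numpy as np

def point_segment_dist(p, a, b):
    """Distance from point p to the segment from a to b."""
    ab, ap = b - a, p - a
    t = np.clip(np.dot(ap, ab) / np.dot(ab, ab), 0.0, 1.0)
    return float(np.linalg.norm(p - (a + t * ab)))

def score_route(gps_points, segment):
    """Weight of one candidate road: mean distance to the GPS fixes."""
    a, b = segment
    return sum(point_segment_dist(p, a, b) for p in gps_points) / len(gps_points)

gps = [np.array([0.1, 0.05]), np.array([0.5, -0.03]), np.array([0.9, 0.04])]
candidates = {
    "road_A": (np.array([0.0, 0.0]), np.array([1.0, 0.0])),  # runs along the track
    "road_B": (np.array([0.0, 1.0]), np.array([1.0, 1.0])),  # parallel, 1 unit away
}
best = min(candidates, key=lambda name: score_route(gps, candidates[name]))
```

The heuristic search in the paper prunes the candidate set before this scoring step, which is what makes the per-second throughput feasible.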
Bovchaliuk, Valentyn; Goloub, Philippe; Podvin, Thierry; Veselovskii, Igor; Tanre, Didier; Chaikovsky, Anatoli; Dubovik, Oleg; Mortier, Augustin; Lopatin, Anton; Korenskiy, Mikhail; Victori, Stephane
2016-07-01
Aerosol particles are important and highly variable components of the terrestrial atmosphere, and they affect both air quality and climate. In order to evaluate their multiple impacts, the most important requirement is to precisely measure their characteristics. Remote sensing technologies such as lidar (light detection and ranging) and sun/sky photometers are powerful tools for determining aerosol optical and microphysical properties. In our work, we applied several methods to joint or separate lidar and sun/sky-photometer data to retrieve aerosol properties. The Raman technique and inversion with regularization use only lidar data. The LIRIC (LIdar-Radiometer Inversion Code) and recently developed GARRLiC (Generalized Aerosol Retrieval from Radiometer and Lidar Combined data) inversion methods use joint lidar and sun/sky-photometer data. This paper presents a comparison and discussion of aerosol optical properties (extinction coefficient profiles and lidar ratios) and microphysical properties (volume concentrations, complex refractive index values, and effective radius values) retrieved using the aforementioned methods. The comparison showed inconsistencies in the retrieved lidar ratios. However, other aerosol properties were found to be generally in close agreement with the AERONET (AErosol RObotic NETwork) products. In future studies, more cases should be analysed in order to clearly define the peculiarities in our results.
Automatic extraction of candidate nomenclature terms using the doublet method
Directory of Open Access Journals (Sweden)
Berman Jules J
2005-10-01
Full Text Available Abstract Background New terminology continuously enters the biomedical literature. How can curators identify new terms that can be added to existing nomenclatures? The most direct method, and one that has served well, involves reading the current literature. The scholarly curator adds new terms as they are encountered. Present-day scholars are severely challenged by the enormous volume of biomedical literature. Curators of medical nomenclatures need computational assistance if they hope to keep their terminologies current. The purpose of this paper is to describe a method of rapidly extracting new, candidate terms from huge volumes of biomedical text. The resulting lists of terms can be quickly reviewed by curators and added to nomenclatures, if appropriate. The candidate term extractor uses a variation of the previously described doublet coding method. The algorithm, which operates on virtually any nomenclature, derives from the observation that most terms within a knowledge domain are composed entirely of word combinations found in other terms from the same knowledge domain. Terms can be expressed as sequences of overlapping word doublets that have more specific meaning than the individual words that compose the term. The algorithm parses through text, finding contiguous sequences of word doublets that are known to occur somewhere in the reference nomenclature. When a sequence of matching word doublets is encountered, it is compared with whole terms already included in the nomenclature. If the doublet sequence is not already in the nomenclature, it is extracted as a candidate new term. Candidate new terms can be reviewed by a curator to determine if they should be added to the nomenclature. An implementation of the algorithm is demonstrated, using a corpus of published abstracts obtained through the National Library of Medicine's PubMed query service and using "The developmental lineage classification and taxonomy of neoplasms" as a reference
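The doublet mechanics described above can be shown in miniature. This sketch uses a tiny invented nomenclature and sentence; the real implementation operates over full nomenclatures and PubMed-scale corpora.

```python
# Reference nomenclature (illustrative three-term fragment).
nomenclature = {
    "squamous cell carcinoma",
    "basal cell carcinoma",
    "renal cell carcinoma",
}

def doublets(words):
    """Overlapping word pairs of a token list."""
    return [(words[i], words[i + 1]) for i in range(len(words) - 1)]

# All doublets that occur somewhere in the reference nomenclature.
known = {d for term in nomenclature for d in doublets(term.split())}

def candidate_terms(text):
    """Find maximal runs of contiguous known doublets; keep new phrases."""
    words = text.lower().split()
    found, run = [], []
    for pair in doublets(words):
        if pair in known:
            run = list(pair) if not run else run + [pair[1]]
        else:
            if len(run) >= 2:
                found.append(" ".join(run))
            run = []
    if len(run) >= 2:
        found.append(" ".join(run))
    # Candidates are doublet runs not already whole terms in the nomenclature.
    return [t for t in found if t not in nomenclature]

cands = candidate_terms("The biopsy showed transitional cell carcinoma of the bladder")
```

Here the run "cell carcinoma" is extracted because both of its doublet constituents occur in known terms while the phrase itself is not yet a term; a curator would then decide whether the surrounding context warrants a new nomenclature entry.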
A secured Cryptographic Hashing Algorithm
Mohanty, Rakesh; Bishi, Sukant kumar
2010-01-01
Cryptographic hash functions for calculating the message digest of a message have been in practical use as an effective measure to maintain message integrity for a few decades. This message digest is unique, irreversible and avoids all types of collisions for any given input string. The message digest calculated by such an algorithm is propagated over the communication medium along with the original message from the sender side, and on the receiver side the integrity of the message can be verified by recalculating the message digest of the received message and comparing the two digest values. In this paper we have designed and developed a new algorithm for calculating the message digest of any message and implemented it using a high-level programming language. An experimental analysis and comparison with the existing MD5 hashing algorithm, which is predominantly used as a cryptographic hashing tool, shows this algorithm to provide more randomness and greater strength against intrusion attacks. In this algorithm th...
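The sender/receiver integrity check described above can be sketched with the standard library. The paper's own hash algorithm is not publicly available, so SHA-256 stands in for any cryptographic hash; message contents are illustrative.

```python
import hashlib

def digest(message: bytes) -> str:
    """Hex message digest of a byte string."""
    return hashlib.sha256(message).hexdigest()

# Sender side: transmit the message together with its digest.
message = b"transfer 100 units to account 42"
sent = (message, digest(message))

# Receiver side: recompute the digest and compare the two values.
def verify(message: bytes, received_digest: str) -> bool:
    return digest(message) == received_digest

ok = verify(*sent)
tampered = verify(b"transfer 900 units to account 42", sent[1])
```

Any change to the message flips the recomputed digest, so `verify` fails on the tampered payload while accepting the original.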
Improved Tiled Bitmap Forensic Analysis Algorithm
Directory of Open Access Journals (Sweden)
C. D. Badgujar, G. N. Dhanokar
2012-12-01
Full Text Available In the computer network world, the need for security and proper systems of control is obvious, as is the need to find the intruders who modify data. Nowadays, frauds that occur in companies are committed not only by outsiders but also by insiders. An insider may perform illegal activity and try to hide it. Companies would like to be assured that such illegal activity, i.e. tampering, has not occurred, or that if it does, it is quickly discovered. Mechanisms now exist that detect tampering of a database through the use of cryptographically strong hash functions. This paper contains a survey which explores various approaches to database forensics through different methodologies using forensic algorithms and tools for investigations. Forensic analysis algorithms are used to determine who, when, and what data had been tampered with. The Tiled Bitmap Algorithm introduces the notion of a candidate set (all possible locations of detected tamperings) and provides a complete characterization of the candidate set and its cardinality. The improved tiled bitmap algorithm overcomes the drawbacks of the existing tiled bitmap algorithm.
Directory of Open Access Journals (Sweden)
Quandalle P.
2006-11-01
Full Text Available This article makes a comparative study of different iterative methods for matrix solving on a CRAY 1 computer. The selected methods have been described in the petroleum literature, but their (more or less vectorizable) structure makes them of renewed interest on computers such as the CRAY 1 or CYBER 205. The context dealt with here is that of simulating a three-phase, three-dimensional flow in a porous medium on a Black Oil model. We assume that the equations describing the flow are discretized by the finite-difference method using a five-point scheme [1]. The algorithms we are going to investigate are derived from three methods: the Block Successive Over-Relaxation method, the Strong Implicit Procedure, and the Orthomin method. Examples will be used to bring out information on both their execution speed and the quality of their solutions.
Energy Technology Data Exchange (ETDEWEB)
Hudobivnik, Nace; Dedes, George; Parodi, Katia; Landry, Guillaume, E-mail: g.landry@lmu.de [Department of Medical Physics, Ludwig-Maximilians-University, Munich 85748 (Germany); Schwarz, Florian; Johnson, Thorsten; Sommer, Wieland H. [Institute for Clinical Radiology, Ludwig Maximilians University Hospital Munich, 81377 Munich (Germany); Agolli, Linda [Department of Radiation Oncology, Ludwig-Maximilians-University, Munich 81377, Germany and Radiation Oncology, Sant’ Andrea Hospital, Sapienza University, Rome 00189 (Italy); Tessonnier, Thomas [Department of Medical Physics, Ludwig-Maximilians-University, Munich 85748, Germany and Department of Radiation Oncology, Heidelberg University Hospital, Heidelberg (Germany); Verhaegen, Frank [Department of Radiation Oncology (MAASTRO), GROW School for Oncology and Developmental Biology, Maastricht 6229 ET, the Netherlands and Medical Physics Unit, Department of Oncology, McGill University, Montreal, Quebec H3A 0G4 (Canada); Thieke, Christian; Belka, Claus [Department of Radiation Oncology, Ludwig-Maximilians-University, Munich 81377 (Germany)
2016-01-15
Purpose: Dual energy CT (DECT) has recently been proposed as an improvement over single energy CT (SECT) for stopping power ratio (SPR) estimation for proton therapy treatment planning (TP), thereby potentially reducing range uncertainties. Published literature has investigated phantoms. This study aims at performing proton therapy TP on SECT and DECT head images of the same patients and at evaluating whether the reported improved DECT SPR accuracy translates into clinically relevant range shifts in clinical head treatment scenarios. Methods: Two phantoms were scanned on a last-generation dual source DECT scanner at 90 and 150 kVp with Sn filtration. The first phantom (Gammex phantom) was used to calibrate the scanner in terms of SPR, while the second served as evaluation (CIRS phantom). DECT images of five head trauma patients were used as surrogate cancer patient images for proton therapy TP. Pencil beam algorithm based TP was performed on SECT and DECT images, and the dose distributions corresponding to the optimized proton plans were calculated with a Monte Carlo (MC) simulation platform using the same patient geometry for both plans, obtained from conversion of the 150 kVp images. Range shifts between the MC dose distributions from SECT and DECT plans were assessed using 2D range maps. Results: SPR root mean square errors (RMSEs) for the inserts of the Gammex phantom were 1.9%, 1.8%, and 1.2% for SECT phantom calibration (SECT_phantom), SECT stoichiometric calibration (SECT_stoichiometric), and DECT calibration, respectively. For the CIRS phantom, these were 3.6%, 1.6%, and 1.0%. When investigating patient anatomy, group median range differences of up to −1.4% were observed for head cases when comparing SECT_stoichiometric with DECT. For this calibration the 25th and 75th percentiles varied from −2% to 0% across the five patients. The group median was found to be limited to 0.5% when using SECT_phantom and the 25th and 75th percentiles
An Efficient Hybrid Face Recognition Algorithm Using PCA and GABOR Wavelets
Directory of Open Access Journals (Sweden)
Hyunjong Cho
2014-04-01
Full Text Available With the rapid development of computers and the increasing mass use of high-tech mobile devices, vision-based face recognition has advanced significantly. However, it is hard to conclude that the performance of computers surpasses that of humans, as humans have generally exhibited better performance in challenging situations involving occlusion or variations. Motivated by the recognition method of humans, who utilize both holistic and local features, we present a computationally efficient hybrid face recognition method that employs dual-stage holistic and local feature-based recognition algorithms. In the first, coarse recognition stage, the proposed algorithm utilizes Principal Component Analysis (PCA) to identify a test image. The recognition ends at this stage if the confidence level of the result turns out to be reliable. Otherwise, the algorithm uses this result to filter out top candidate images with a high degree of similarity and passes them to the next, fine recognition stage, where Gabor filters are employed. As is well known, recognizing a face image with Gabor filters is a computationally heavy task. The contribution of our work is in proposing a flexible dual-stage algorithm that enables fast, hybrid face recognition. Experimental tests were performed with the Extended Yale Face Database B to verify the effectiveness and validity of the research, and we obtained better results than PCA- and Gabor-wavelet-based recognition algorithms under illumination variations, not only in terms of computation time but also in terms of recognition rate.
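The accept-or-defer logic of such a cascade can be sketched abstractly. This is a toy stand-in: one-dimensional "features", the gallery values, and the margin threshold are all invented, standing in for PCA projections (coarse stage) and Gabor responses (fine stage).

```python
# Coarse feature per enrolled identity (illustrative scalars).
gallery = {"alice": 0.10, "bob": 0.52, "carol": 0.55}

def coarse_match(q, threshold=2.0):
    """Return (identity, None) if confident, else (None, shortlist)."""
    dists = sorted((abs(q - v), name) for name, v in gallery.items())
    (d1, best), (d2, _) = dists[0], dists[1]
    if d2 / max(d1, 1e-12) >= threshold:       # wide margin: accept here
        return best, None
    return None, [name for _, name in dists[:2]]  # ambiguous: defer shortlist

easy_id, _ = coarse_match(0.11)            # near alice only -> accepted cheaply
hard_id, shortlist = coarse_match(0.535)   # between bob and carol -> deferred
```

Only the deferred shortlist would reach the expensive Gabor-filter stage, which is the source of the method's speed advantage.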
Institute of Scientific and Technical Information of China (English)
Armand BABOLI; Mohammadali Pirayesh NEGHAB; Rasoul HAJI
2008-01-01
This paper considers a two-level supply chain consisting of one warehouse and one retailer. In this model we determine the optimal ordering policy according to inventory and transportation costs. We assume that the demand rate at the retailer is known. Shortages are allowed neither at the retailer nor at the warehouse. We study this model in two cases: decentralized and centralized. In the decentralized case the retailer and the warehouse independently minimize their own costs, while in the centralized case the warehouse and the retailer are considered as a whole firm. We propose an algorithm to find economic order quantities for both the retailer and the warehouse which minimize the total system cost in the centralized case. The total system cost contains the holding and ordering costs at the retailer and the warehouse as well as the transportation cost from the warehouse to the retailer. Applying this model to the pharmaceutical downstream supply chain of a public hospital yields significant savings. Through numerical examples, the costs are computed in MATLAB to compare the centralized case with the decentralized one and to propose a saving-sharing mechanism through quantity discounts.
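The kind of quantity such an algorithm optimizes is illustrated by the classical single-echelon economic order quantity. This is a sketch only: the paper's centralized algorithm additionally couples warehouse, retailer, and transportation costs, and the parameter values here are invented.

```python
import math

def eoq(demand_rate, ordering_cost, holding_cost):
    """Classical EOQ: Q* = sqrt(2 * D * K / h)."""
    return math.sqrt(2 * demand_rate * ordering_cost / holding_cost)

def total_cost(q, demand_rate, ordering_cost, holding_cost):
    """Annual ordering cost D*K/Q plus holding cost h*Q/2."""
    return demand_rate * ordering_cost / q + holding_cost * q / 2

# Hypothetical retailer: D = 1200 units/yr, K = 50 per order, h = 6 per unit-yr.
q_star = eoq(demand_rate=1200, ordering_cost=50, holding_cost=6)
```

At `q_star` the ordering and holding components balance, which is why any other order quantity raises the total cost.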
Pines, S.
1982-01-01
The results of an investigation carried out for the Langley Research Center Terminal Configured Vehicle Program are presented. The investigation generated and compared three path update algorithms designed to provide smooth transition for an aircraft guidance system from DME, VORTAC, and barometric navaids to the more precise MLS by modifying the desired 3-D flight path. The first, called Zero Cross Track, eliminates the discontinuity in cross-track and altitude error by designating the first valid MLS aircraft position as the desired first waypoint, while retaining all subsequent waypoints. The discontinuity in track angle is left unaltered. The second, called Tangent Path, also eliminates the discontinuity in cross track and altitude and chooses a new desired heading to be tangent to the next oncoming circular arc turn. The third, called Continued Track, eliminates the discontinuity in cross track, altitude and track angle by accepting the current MLS position and track angle as the desired ones and recomputes the location of the next waypoint. A method is presented for providing waypoint guidance path reconstruction which treats turns of less than, and greater than, 180 degrees in a uniform manner to construct the desired path.
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
Immunological Evaluation and Comparison of Different EV71 Vaccine Candidates
Directory of Open Access Journals (Sweden)
Ai-Hsiang Chou
2012-01-01
Full Text Available Enterovirus 71 (EV71) and coxsackievirus A16 (CVA16) are major causative agents of hand, foot, and mouth disease (HFMD), and EV71 is now recognized as an emerging neurotropic virus in Asia. Effective medications and/or prophylactic vaccines against HFMD are not available. The current results from mouse immunogenicity studies using in-house standardized RD cell virus neutralization assays indicate that (1) VP1 peptide (residues 211–225) formulated with Freund's adjuvant (CFA/IFA) elicited a low virus-neutralizing antibody response (1/32 titer); (2) recombinant virus-like particles produced from baculovirus formulated with CFA/IFA elicited a good virus neutralization titer (1/160); (3) of the individual recombinant EV71 antigens (VP1, VP2, and VP3) formulated with CFA/IFA, only VP1 elicited an antibody response, with a 1/128 virus neutralization titer; and (4) the formalin-inactivated EV71 formulated in alum elicited antibodies that cross-neutralized different EV71 genotypes (1/640) but failed to neutralize CVA16. In contrast, rabbit antisera could cross-neutralize strongly against different genotypes of EV71 but weakly against CVA16, with average titers of 1/6400 and 1/32, respectively. The VP1 amino acid sequence dissimilarity between CVA16 and EV71 could partially explain why mouse antibodies failed to cross-neutralize CVA16. Therefore, the best formulation for producing a cost-effective HFMD vaccine is a combination of formalin-inactivated EV71 and CVA16 virions.
Institute of Scientific and Technical Information of China (English)
王明辉; 金红军; 王文勇
2016-01-01
Massive MIMO is regarded as one of the key techniques for next-generation communication (5G) systems. In order to achieve better massive MIMO performance, the performances of different precoding algorithms for the downlink massive MIMO system are analyzed and compared. In this paper, MRT (Maximum Ratio Transmission) precoding, ZF (Zero-Forcing) precoding, and NS (Neumann Series) precoding are compared; the computational complexity, capacity and BER of these precoding schemes are analyzed; and finally the performances at different SNRs are obtained. Simulation results indicate that the computational complexity of MRT precoding is the lowest, but its capacity and BER are the worst, while NS precoding can achieve nearly the capacity-approaching performance of ZF precoding with lower computational complexity.
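The two closed-form precoders compared above can be written in a few lines of linear algebra. The dimensions below are small and illustrative ("massive" systems simply scale the antenna count M up); the NS precoder, not shown, replaces the explicit matrix inverse with a truncated Neumann series to cut complexity.

```python
import numpy as np

rng = np.random.default_rng(1)
K, M = 4, 16   # single-antenna users, base-station antennas
# i.i.d. Rayleigh channel, unit variance per complex entry.
H = (rng.standard_normal((K, M)) + 1j * rng.standard_normal((K, M))) / np.sqrt(2)

W_mrt = H.conj().T                                   # maximum ratio transmission
W_zf = H.conj().T @ np.linalg.inv(H @ H.conj().T)    # zero-forcing

# Effective channels seen by the users after precoding.
eff_zf = H @ W_zf    # identity: ZF removes inter-user interference
eff_mrt = H @ W_mrt  # non-diagonal: MRT leaves residual interference
```

The off-diagonal entries of `eff_mrt` are the residual interference responsible for MRT's worse BER, while ZF pays for its clean effective channel with a K-by-K inverse, which is the cost the Neumann-series approximation targets.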
Directory of Open Access Journals (Sweden)
Cristina Anton
2012-01-01
Full Text Available OBJECTIVE: Differentiation between benign and malignant ovarian neoplasms is essential for creating a system for patient referrals. Therefore, the contributions of the tumor markers CA125 and human epididymis protein 4 (HE4) as well as the risk of ovarian malignancy algorithm (ROMA) and risk of malignancy index (RMI) values were considered individually and in combination to evaluate their utility for establishing this type of patient referral system. METHODS: Patients who had been diagnosed with ovarian masses through imaging analyses (n = 128) were assessed for their expression of the tumor markers CA125 and HE4. The ROMA and RMI values were also determined. The sensitivity and specificity of each parameter were calculated using receiver operating characteristic curves according to the area under the curve (AUC) for each method. RESULTS: The sensitivities associated with the ability of CA125, HE4, ROMA, or RMI to distinguish between malignant and benign ovarian masses were 70.4%, 79.6%, 74.1%, and 63%, respectively. Among carcinomas, the sensitivities of CA125, HE4, ROMA (pre- and post-menopausal), and RMI were 93.5%, 87.1%, 80%, 95.2%, and 87.1%, respectively. The most accurate numerical values were obtained with RMI, although the four parameters were shown to be statistically equivalent. CONCLUSION: There were no differences in accuracy between CA125, HE4, ROMA, and RMI for differentiating between types of ovarian masses. RMI had the lowest sensitivity but was the most numerically accurate method. HE4 demonstrated the best overall sensitivity for the evaluation of malignant ovarian tumors and the differential diagnosis of endometriosis. All of the parameters demonstrated increased sensitivity when tumors with low malignancy potential were considered low-risk, which may be used as an acceptable assessment method for referring patients to reference centers.
Ghosh, Aniruddha; Joshi, P. K.
2014-02-01
Bamboo is used by different communities in India to develop indigenous products, maintain livelihoods and sustain life. The Indian National Bamboo Mission focuses on the evaluation, monitoring and development of bamboo as an important plant resource. Knowledge of the spatial distribution of bamboo therefore becomes necessary in this context. The present study attempts to map bamboo patches using very high resolution (VHR) WorldView-2 (WV-2) imagery in parts of South 24 Parganas, West Bengal, India using both pixel- and object-based approaches. A combined layer of pan-sharpened multi-spectral (MS) bands, the first three principal components (PCs) of these bands, and seven second-order texture measures based on Gray Level Co-occurrence Matrices (GLCM) of the first three PCs were used as input variables. For pixel-based image analysis (PBIA), recursive feature elimination (RFE) based feature selection was carried out to identify the most important input variables. Results of the feature selection indicate that the 10 most important variables include PC 1, PC 2 and their GLCM means, along with six MS bands. Three different sets of predictor variables (the 5 and 10 most important variables, and all 32 variables) were classified with Support Vector Machine (SVM) and Random Forest (RF) algorithms. Producer accuracy for bamboo was found to be highest when the 10 most important variables selected by RFE were classified with SVM (82%). However, object-based image analysis (OBIA) achieved higher classification accuracy than PBIA using the same 32 variables, but with fewer training samples. Using an object-based SVM classifier, the producer accuracy for bamboo reached 94%. The significance of this study is that the present framework is capable of accurately identifying bamboo patches as well as detecting other tree species in a tropical region with heterogeneous land use land cover (LULC), which could further aid the mandate of the National Bamboo Mission and related programs.
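The RFE mechanics can be sketched without a remote-sensing stack. This toy version fits a least-squares linear model and repeatedly drops the weakest feature; synthetic data stand in for the study's 32 image variables, and real pipelines typically use `sklearn.feature_selection.RFE` with an SVM instead.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 200, 6
X = rng.standard_normal((n, p))
# Only features 0 and 3 carry signal (illustrative ground truth).
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.1 * rng.standard_normal(n)

def rfe(X, y, n_keep):
    """Recursive feature elimination with a least-squares linear model."""
    active = list(range(X.shape[1]))
    while len(active) > n_keep:
        w, *_ = np.linalg.lstsq(X[:, active], y, rcond=None)
        active.pop(int(np.argmin(np.abs(w))))   # drop the weakest feature
    return active

selected = rfe(X, y, n_keep=2)
```

At each pass the model is refit on the surviving features, so correlated variables are re-ranked after their neighbors are removed, which is what distinguishes RFE from a single filter ranking.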
Comparison of Three Classical Single Channel Speech Enhancement Algorithms
Institute of Scientific and Technical Information of China (English)
陈玉霞
2014-01-01
This paper introduces three classical single-channel speech enhancement algorithms, namely the subspace method, the minimum mean-square error estimator (MMSE), and spectral subtraction, and then discusses the advantages and disadvantages of the three methods. Simulation results show that the signal enhanced by the subspace method has the least noise and the greatest speech distortion, and that the subspace method's runtime is the longest. Although the signal enhanced by MMSE has more noise than the signal enhanced by the subspace method, it has the least speech distortion, and the runtime of MMSE ranks in the middle. The noise reduction and speech distortion of spectral subtraction rank in the middle, and the runtime of spectral subtraction is the shortest.
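Of the three methods compared, spectral subtraction is the simplest to sketch: estimate a noise magnitude spectrum from a noise-only segment and subtract it frame by frame. The frame length, spectral floor, and noise estimate below are illustrative choices, not the paper's settings.

```python
# Minimal spectral-subtraction sketch (one of the three compared methods).
import numpy as np

def spectral_subtract(noisy, noise_only, frame=256):
    """Subtract an estimated noise magnitude spectrum frame by frame."""
    noise_mag = np.abs(np.fft.rfft(noise_only[:frame]))  # noise estimate
    out = np.zeros_like(noisy)
    for start in range(0, len(noisy) - frame + 1, frame):
        spec = np.fft.rfft(noisy[start:start + frame])
        # Subtract the noise magnitude, keeping a small spectral floor.
        mag = np.maximum(np.abs(spec) - noise_mag, 0.01 * np.abs(spec))
        out[start:start + frame] = np.fft.irfft(mag * np.exp(1j * np.angle(spec)), frame)
    return out

rng = np.random.default_rng(0)
fs, n = 8000, 4096
t = np.arange(n) / fs
clean = np.sin(2 * np.pi * 437.5 * t)   # tone aligned to an FFT bin, for simplicity
noise = 0.5 * rng.standard_normal(n)
enhanced = spectral_subtract(clean + noise, noise)
```

The residual "musical noise" this produces is one reason the paper finds spectral subtraction's noise reduction only middling.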
Five modified boundary scan adaptive test generation algorithms
Institute of Scientific and Technical Information of China (English)
Niu Chunping; Ren Zheping; Yao Zongzhong
2006-01-01
To study the diagnosis of Wire-OR (W-O) interconnect faults on PCBs (Printed Circuit Boards), five modified boundary-scan adaptive algorithms for interconnect test are put forward. These algorithms apply the Global-diagnosis sequence algorithm in place of the equal-weight algorithm of the primary test, so the test time is shortened without changing the fault diagnostic capability. Descriptions of the five modified adaptive test algorithms are presented, and a capability comparison between the modified algorithms and the original algorithm demonstrates their validity.
A thermodynamic approach to the affinity optimization of drug candidates.
Freire, Ernesto
2009-11-01
High throughput screening and other techniques commonly used to identify lead candidates for drug development usually yield compounds with binding affinities to their intended targets in the mid-micromolar range. The affinity of these molecules needs to be improved by several orders of magnitude before they become viable drug candidates. Traditionally, this task has been accomplished by establishing structure activity relationships to guide chemical modifications and improve the binding affinity of the compounds. As the binding affinity is a function of two quantities, the binding enthalpy and the binding entropy, it is evident that a more efficient optimization would be accomplished if both quantities were considered and improved simultaneously. Here, an optimization algorithm based upon enthalpic and entropic information generated by Isothermal Titration Calorimetry is presented.
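The two quantities referred to above combine through the standard thermodynamic identities, which is why enthalpy and entropy can be optimized jointly rather than through affinity alone:

```latex
\Delta G = \Delta H - T\,\Delta S, \qquad
K_a = \exp\!\left(-\frac{\Delta G}{RT}\right)
```

Improving affinity means making $\Delta G$ more negative, which can come from a more favorable (more negative) $\Delta H$, a more favorable (more positive) $\Delta S$, or both; isothermal titration calorimetry measures the two contributions separately.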
Planetary transit candidates in COROT-IRa01 field
Carpano, S; Alonso, R; Barge, P; Aigrain, S; Almenara, J -M; Bordé, P; Bouchy, F; Carone, L; Deeg, H J; De la Reza, R; Deleuil, M; Dvorak, R; Erikson, A; Fressin, F; Fridlund, M; Gondoin, P; Guillot, T; Hatzes, A; Jorda, L; Lammer, H; Léger, A; Llebaria, A; Magain, P; Moutou, C; Ofir, A; Ollivier, M; Pacheco, E J; Pátzold, M; Pont, F; Queloz, D; Rauer, H; Régulo, C; Renner, S; Rouan, D; Samuel, B; Schneider, J; Wuchterl, G
2009-01-01
Context: CoRoT is a pioneering space mission devoted to the analysis of stellar variability and the photometric detection of extrasolar planets. Aims: We present the list of planetary transit candidates detected in the first field observed by CoRoT, IRa01, the initial run toward the Galactic anticenter, which lasted for 60 days. Methods: We analysed 3898 sources in the coloured bands and 5974 in the monochromatic band. Instrumental noise and stellar variability were taken into account using detrending tools before applying various transit search algorithms. Results: Fifty sources were classified as planetary transit candidates and the most reliable 40 detections were declared targets for follow-up ground-based observations. Two of these targets have so far been confirmed as planets, COROT-1b and COROT-4b, for which a complete characterization and specific studies were performed.
Directory of Open Access Journals (Sweden)
J. M. A. C. Souza
2011-03-01
Full Text Available Three methods for the automatic detection of mesoscale coherent structures are applied to Sea Level Anomaly (SLA) fields in the South Atlantic. The first method is based on the wavelet packet decomposition of the SLA data, the second on the estimation of the Okubo-Weiss parameter, and the third on a geometric criterion using the winding-angle approach. The results provide a comprehensive picture of the mesoscale eddies over the South Atlantic Ocean, emphasizing their main characteristics: amplitude, diameter, duration and propagation velocity. Five areas of particular eddy dynamics were selected: the Brazil Current, the Agulhas eddies propagation corridor, the Agulhas Current retroflection, the Brazil-Malvinas confluence zone and the northern branch of the Antarctic Circumpolar Current (ACC). For these areas, mean propagation velocities and amplitudes were calculated. Two regions with long-duration eddies were observed, corresponding to the propagation of Agulhas and ACC eddies. Through the comparison between the identification methods, their main advantages and shortcomings were detailed. The geometric criterion presents the best performance, mainly in terms of number of detections, duration of the eddies and propagation velocities. The results are particularly good for the Agulhas Rings, which presented the longest lifetimes of all South Atlantic eddies.
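The Okubo-Weiss criterion used as the second method is straightforward to sketch: W = s_n² + s_s² − ω² (normal strain, shear strain, relative vorticity), with vortex cores flagged where W < 0. The velocity field below is a synthetic solid-body vortex standing in for SLA-derived geostrophic velocities, so the grid and units are illustrative.

```python
# Okubo-Weiss parameter on a gridded velocity field.
import numpy as np

def okubo_weiss(u, v, dx=1.0, dy=1.0):
    u_y, u_x = np.gradient(u, dy, dx)   # axis 0 is y, axis 1 is x
    v_y, v_x = np.gradient(v, dy, dx)
    s_n = u_x - v_y        # normal strain
    s_s = v_x + u_y        # shear strain
    omega = v_x - u_y      # relative vorticity
    return s_n**2 + s_s**2 - omega**2   # W < 0 inside vortex cores

# Solid-body rotation u = -y, v = x gives W = -4 everywhere.
y, x = np.mgrid[-5:6, -5:6].astype(float)
W = okubo_weiss(-y, x)
```

In practice a threshold such as W < −0.2 σ_W is applied to the W field computed from geostrophic velocities, which is one source of the method's sensitivity to noise.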
Béland, Laurent K; Stoller, Roger; Xu, Haixuan
2014-01-01
We present a comparison of the kinetic Activation-Relaxation Technique (k-ART) and the Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC), two off-lattice, on-the-fly kinetic Monte Carlo (KMC) techniques that were recently used to solve several materials science problems. We show that if the initial displacements are localized, the dimer method and the Activation-Relaxation Technique nouveau provide similar performance. We also show that k-ART and SEAKMC, although based on different approximations, are in agreement with each other, as demonstrated by the examples of 50 vacancies in a 1950-atom Fe box and of interstitial loops in 16000-atom boxes. Generally speaking, k-ART's treatment of geometry and flickers is more flexible (it can handle amorphous systems, for example) and more rigorous than SEAKMC's, while the latter's concept of active volumes permits a significant speedup of simulations for the systems under consideration and therefore allows investigations of processes requiring large systems that are not acc...
Directory of Open Access Journals (Sweden)
Robin Roj
2014-07-01
Full Text Available This paper presents three different search engines for the detection of CAD parts in large databases. The analysis of the contained information is performed by exporting the data stored in the structure trees of the CAD models. A preparation program generates one XML file for every model which, in addition to the data of the structure tree, also records certain physical properties of each part. The first search engine specializes in the discovery of standard parts, like screws or washers. The second program uses user input as search parameters, and therefore has the ability to perform personalized queries. The third one compares a given reference part with all parts in the database, and locates files that are identical or similar to the reference part. All approaches run automatically, and have the analysis of the structure tree in common. Files constructed with CATIA V5 and search engines written in Python were used for the implementation. The paper also includes a short comparison of the advantages and disadvantages of each program, as well as a performance test.
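A hedged sketch of the third search engine's core idea: compare a reference part against database entries via their exported structure trees. The toy XML tags (`pad`, `hole`, `fillet`) and the Jaccard-style multiset similarity below are illustrative assumptions, not the authors' export format or metric.

```python
# Reduce each exported structure tree to a multiset of tags and compare.
import xml.etree.ElementTree as ET
from collections import Counter

def tree_signature(xml_text):
    """Multiset of structure-tree element tags (a crude part signature)."""
    root = ET.fromstring(xml_text)
    return Counter(elem.tag for elem in root.iter())

def similarity(sig_a, sig_b):
    """Jaccard similarity over tag multisets: 1.0 = identical signatures."""
    inter = sum((sig_a & sig_b).values())
    union = sum((sig_a | sig_b).values())
    return inter / union if union else 1.0

ref = "<part><pad/><hole/><hole/></part>"
same = "<part><pad/><hole/><hole/></part>"
other = "<part><pad/><fillet/></part>"
s_same = similarity(tree_signature(ref), tree_signature(same))
s_other = similarity(tree_signature(ref), tree_signature(other))
```

A real implementation would also weigh the physical properties stored alongside the tree (mass, volume, bounding box) before ranking candidates.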
Kramer, Oliver
2017-01-01
This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalisms and thus opens the subject to a broader audience in comparison to manuscripts overloaded by notations and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.
Wind Turbines Support Techniques during Frequency Drops — Energy Utilization Comparison
Directory of Open Access Journals (Sweden)
Ayman B. Attya
2014-08-01
Full Text Available The supportive role of wind turbines during frequency drops is still not clear enough, although many algorithms have been proposed. Most of the offered techniques make the wind turbine deviate from optimum power generation to special operation modes, in order to guarantee the availability of reasonable power support when the system suffers frequency deviations. This paper summarizes the most dominant support algorithms and derives wind turbine power curves for each one. It also conducts a comparison from the point of view of wasted energy, with respect to optimum power generation. The authors confirm the advantage of a frequency support algorithm they previously presented, as it achieves lower amounts of wasted energy. This analysis is performed for two locations that are promising candidates for hosting wind farms in Egypt. Additionally, two different types of wind turbines from two different manufacturers are considered. Matlab and Simulink are the simulation environments used.
Planetary Candidates Observed by Kepler VI: Planet Sample from Q1-Q16 (47 Months)
Mullally, F; Thompson, Susan E; Rowe, Jason; Burke, Christopher; Latham, David W; Batalha, Natalie M; Bryson, Stephen T; Christiansen, Jessie; Henze, Christopher E; Ofir, Aviv; Quarles, Billy; Shporer, Avi; Van Eylen, Vincent; Van Laerhoven, Christa; Shah, Yash; Wolfgang, Angie; Chaplin, W J; Xie, Ji-Wei; Akeson, Rachel; Argabright, Vic; Bachtell, Eric; Barclay, Thomas; Borucki, William J; Caldwell, Douglas A; Campbell, Jennifer R; Catanzarite, Joseph H; Cochran, William D; Duren, Riley M; Fleming, Scott W; Fraquelli, Dorothy; Girouard, Forrest R; Haas, Michael R; Hełminiak, Krzysztof G; Howell, Steve B; Huber, Daniel; Larson, Kipp; Gautier, Thomas N; Jenkins, Jon; Li, Jie; Lissauer, Jack J; McArthur, Scot; Miller, Chris; Morris, Robert L; Patil-Sabale, Anima; Plavchan, Peter; Putnam, Dustin; Quintana, Elisa V; Ramirez, Solange; Aguirre, V Silva; Seader, Shawn; Smith, Jeffrey C; Steffen, Jason H; Stewart, Chris; Stober, Jeremy; Still, Martin; Tenenbaum, Peter; Troeltzsch, John; Twicken, Joseph D; Zamudio, Khadeejah A
2015-01-01
We present the sixth catalog of Kepler candidate planets based on nearly 4 years of high precision photometry. This catalog builds on the legacy of previous catalogs released by the Kepler project and includes 1493 new Kepler Objects of Interest (KOIs) of which 554 are planet candidates, and 131 of these candidates have best fit radii 50 days to provide a consistently vetted sample that can be used to improve planet occurrence rate calculations. We discuss the performance of our planet detection algorithms, and the consistency of our vetting products. The full catalog is publicly available at the NASA Exoplanet Archive.
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. As forward speed increases, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. Application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
Institute of Scientific and Technical Information of China (English)
Pei Yusheng; Cai Tong; Gao Hua; Tan Dejiang; Zhang Yuchen; Zhang Guolai
2014-01-01
Background: The bacterial endotoxins test (BET) is a method used to detect or quantify endotoxins (lipopolysaccharide, LPS) and is widely used in the quality control of parenteral medicines/vaccines and clinical dialysis fluid. It is also used in the diagnosis of endotoxemia and in environmental air quality control. Although the BET has been adopted by most pharmacopoeias, the result judgment algorithms (RJAs) of the test for interfering factors still differ between certain pharmacopoeias. We have evaluated RJAs of the test for interfering factors for the revision of the BET described in the Chinese Pharmacopoeia 2010 (CHP2010). Methods: Original data from 1748 samples were judged by the RJAs of the Chinese Pharmacopoeia 2010, the Japanese Pharmacopoeia 2011 (JP2011), the European Pharmacopoeia 7.0 (EP7.0), the United States Pharmacopoeia 36 (USP36), and the Indian Pharmacopoeia 2010 (IP2010), respectively. A SAS software package was used for the statistical analysis. Results: The results using CHP2010 and USP36, JP2011, EP7.0, and IP2010 showed no significant difference (P = 0.7740). The CHP2010 results for the 1748 samples showed that 132 samples (7.6%) required an additional step, whereas there was no such requirement under the other pharmacopoeias. The kappa value of the two RJAs (CHP2010 and EP7.0) was 0.6900 (0.6297-0.7504), indicating that CHP2010 and the other pharmacopoeias have good consistency. Conclusions: The results using CHP2010 and USP36, JP2011, EP7.0, and IP2010 have different characteristics. The CHP2010 method shows good performance in specificity, mistaken-diagnosis rate, agreement rate, predictive value for suspicious rate, and predictive value for passed rate. The CHP2010 method only had disadvantages in sensitivity compared with the other pharmacopoeias. We suggest that the Chinese Pharmacopoeia interference test be revised in accordance with the USP36, JP2011, EP7.0, and IP2010 judgment model.
Directory of Open Access Journals (Sweden)
Jyoti Kalyani
2006-01-01
Full Text Available Security of wired and wireless networks is among the most challenging problems in today's computer world. The aim of this study was to give a brief introduction to viruses and worms, their creators, and the characteristics of the algorithms used by viruses. Both wired and wireless network viruses are elaborated. Viruses are also compared with the human immune system, and on the basis of this comparison four guidelines are given for detecting viruses so that more secure systems can be built. This study concludes that security remains a major challenge, and that more secure models are required which automatically detect viruses and protect the system from their effects.
Excursion-Set-Mediated Genetic Algorithm
Noever, David; Baskaran, Subbiah
1995-01-01
Excursion-set-mediated genetic algorithm (ESMGA) is an embodiment of a method of searching for and optimizing computerized mathematical models. It incorporates powerful search and optimization techniques based on concepts analogous to natural selection and the laws of genetics. In comparison with other genetic algorithms, this one achieves a stronger condition for implicit parallelism. It includes three stages of operations in each cycle, analogous to biological generations.
Ravari, Alireza Norouzzadeh; Taghirad, Hamid D
2014-10-01
In this paper the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image gathered as a mobile robot observation. In contrast to high-dimensional feature-based representations, in this model the dimension of the sensor measurements' representations is reduced. Although loop closure detection amounts to a clustering problem in a high-dimensional space, little attention has been paid to the curse of dimensionality in the existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than the original sensor observations. Exploiting algorithmic information theory, the representation is developed such that it is geometrically transformation-invariant in the sense of Kolmogorov complexity. A universal normalized metric is used for the comparison of complexity-based representations of image models. Finally, a distinctive property of the normalized compression distance is exploited for detecting similar places and rejecting incorrect loop closure candidates. Experimental results show the efficiency and accuracy of the proposed method in comparison to the state-of-the-art algorithms and some recently proposed methods.
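The normalized compression distance (NCD) at the heart of the comparison step can be approximated with any real compressor; zlib below is an illustrative stand-in, not the compressor used by the authors, and the byte strings stand in for serialized sparse image models.

```python
# NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y)),
# where C(.) is the compressed length; small values mean "similar".
import zlib

def ncd(x: bytes, y: bytes) -> float:
    cx, cy = len(zlib.compress(x)), len(zlib.compress(y))
    cxy = len(zlib.compress(x + y))
    return (cxy - min(cx, cy)) / max(cx, cy)

a = b"abcabcabc" * 50                 # stand-in for one place's model
b_sim = b"abcabcabc" * 50 + b"xyz"    # nearly the same place
c_diff = bytes(range(256)) * 5        # a very different place

d_sim = ncd(a, b_sim)
d_diff = ncd(a, c_diff)
```

A loop-closure candidate would then be accepted only when its NCD to a stored place model falls below a threshold, which is the rejection mechanism the abstract alludes to.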
Hybrid ant colony algorithm for traveling salesman problem
Institute of Scientific and Technical Information of China (English)
[author not listed]
2003-01-01
A hybrid approach based on the ant colony algorithm for the traveling salesman problem is proposed: an improved algorithm characterized by the addition of a local search mechanism, a cross-removing strategy and candidate lists. Experimental results show that it is competitive in terms of solution quality and computation time.
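A rough sketch of the three ingredients named above: tour construction guided by pheromone but restricted to candidate lists (nearest neighbours), followed by a 2-opt pass that removes crossing edges. Population size, evaporation rate, and iteration counts are illustrative, not the paper's settings.

```python
# Toy hybrid ant-colony TSP solver with candidate lists and 2-opt.
import numpy as np

rng = np.random.default_rng(1)
pts = rng.random((20, 2))
D = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
n = len(pts)
cand = np.argsort(D, axis=1)[:, 1:6]   # 5-nearest candidate lists
tau = np.ones((n, n))                  # pheromone matrix

def tour_length(t):
    return sum(D[t[i], t[(i + 1) % n]] for i in range(n))

def build_tour():
    """Greedy-stochastic construction, preferring candidate-list cities."""
    tour, seen = [0], {0}
    while len(tour) < n:
        cur = tour[-1]
        opts = [j for j in cand[cur] if j not in seen] or \
               [j for j in range(n) if j not in seen]
        w = np.array([tau[cur, j] / (D[cur, j] + 1e-9) for j in opts])
        tour.append(opts[rng.choice(len(opts), p=w / w.sum())])
        seen.add(tour[-1])
    return tour

def two_opt(t):
    """Cross-removing local search: reverse segments while it helps."""
    improved = True
    while improved:
        improved = False
        for i in range(n - 1):
            for k in range(i + 2, n - (i == 0)):
                a, b, c, d = t[i], t[i + 1], t[k], t[(k + 1) % n]
                if D[a, b] + D[c, d] > D[a, c] + D[b, d] + 1e-12:
                    t[i + 1:k + 1] = t[i + 1:k + 1][::-1]
                    improved = True
    return t

best, best_len = None, float("inf")
for _ in range(20):                    # ants x iterations, greatly simplified
    t = two_opt(build_tour())
    L = tour_length(t)
    if L < best_len:
        best, best_len = t, L
    for i in range(n):                 # deposit pheromone along the tour
        tau[t[i], t[(i + 1) % n]] += 1.0 / L
    tau *= 0.95                        # evaporation
```

Candidate lists cut construction cost from O(n²) toward O(n·k) per ant, which is where most of the runtime advantage comes from on large instances.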
Indicators of Psychical Stability Among Junior and Youth Track and Field National Team Candidates
Directory of Open Access Journals (Sweden)
Romualdas K. Malinauskas
2014-03-01
Full Text Available This article deals with questions of psychical stability among junior and youth track and field national team candidates. Two methods were used to carry out the survey: the Competitive State Anxiety Inventory developed by Martens et al. and the Athletes' Psychical Stability Questionnaire developed by Milman. The random sample consists of 81 junior and youth track and field national team candidates: 39 youth team and 42 junior national team candidates. It was determined that, in comparison with the junior national team candidates, the anxiety of the youth national team candidates is lower (p < 0.05). The psychical stability of the youth candidates was also significantly higher than that of the junior candidates: the youth candidates scored higher (p < 0.05) on the following components of psychical stability: precompetitive emotional stability and self-regulation.
A New Algorithm for Mining Frequent Pattern
Institute of Scientific and Technical Information of China (English)
李力; 靳蕃
2002-01-01
Mining frequent patterns in transaction databases, time-series databases, and many other kinds of databases has been studied extensively in data mining research. Most previous studies adopt an Apriori-like candidate-set generation-and-test approach. However, candidate-set generation is very costly. Han J. proposed a novel algorithm, FP-growth, that can generate frequent patterns without a candidate set. Based on an analysis of the FP-growth algorithm, this paper proposes the concept of an equivalent FP-tree and an improved algorithm, denoted FP-growth*, which is much faster and easier to implement. FP-growth* adopts a modified structure of the FP-tree and header table, and only generates a header table in each recursive operation, projecting the tree onto the original FP-tree. The two algorithms produce the same frequent pattern set on the same transaction database, but performance studies show that the speed of the improved algorithm, FP-growth*, is at least twice that of FP-growth.
Microgenetic optimization algorithm for optimal wavefront shaping
Anderson, Benjamin R; Gunawidjaja, Ray; Eilers, Hergen
2015-01-01
One of the main limitations of utilizing optimal wavefront shaping in imaging and authentication applications is the slow speed of the optimization algorithms currently being used. To address this problem we develop a micro-genetic optimization algorithm ($\\mu$GA) for optimal wavefront shaping. We test the abilities of the $\\mu$GA and make comparisons to previous algorithms (iterative and simple-genetic) by using each algorithm to optimize transmission through an opaque medium. From our experiments we find that the $\\mu$GA is faster than both the iterative and simple-genetic algorithms and that both genetic algorithms are more resistant to noise and sample decoherence than the iterative algorithm.
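A minimal micro-genetic algorithm sketch: a tiny elitist population that is re-seeded around the incumbent whenever it converges, which is the feature distinguishing a µGA from a simple GA. The quadratic objective below merely stands in for transmission through the opaque medium, and all parameters (population of 5, restart spread, generation count) are illustrative.

```python
# Micro-GA: 5 individuals, elitism, crossover only, restart on convergence.
import numpy as np

rng = np.random.default_rng(0)
target = rng.random(16)            # plays the role of the optimal wavefront

def fitness(x):
    return -np.sum((x - target) ** 2)

pop = rng.random((5, 16))          # micro-population
f0 = max(fitness(p) for p in pop)  # initial best, kept for reference
for gen in range(300):
    fit = np.array([fitness(p) for p in pop])
    elite = pop[np.argmax(fit)].copy()
    if np.ptp(pop, axis=0).mean() < 0.05:   # converged: restart around elite
        pop = np.clip(elite + 0.1 * rng.standard_normal((5, 16)), 0, 1)
        pop[0] = elite
        continue
    # Random parents + uniform crossover; elitism keeps the best individual.
    new = [elite]
    while len(new) < 5:
        a, b = pop[rng.choice(5, 2, replace=False)]
        mask = rng.random(16) < 0.5
        new.append(np.where(mask, a, b))
    pop = np.array(new)

best = pop[np.argmax([fitness(p) for p in pop])]
```

Because each generation evaluates only five candidates, the per-iteration measurement cost is far lower than for a conventional GA, which is the speed advantage the abstract reports.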
Image Classification through integrated K-Means Algorithm
Directory of Open Access Journals (Sweden)
Balasubramanian Subbiah
2012-03-01
Full Text Available Image classification has a significant role in the field of medical diagnosis as well as mining analysis, and has even been used for cancer diagnosis in recent years. Clustering analysis is a valuable and useful tool for image classification and object diagnosis. A variety of clustering algorithms are available, and this remains a topic of interest in the image processing field. However, these clustering algorithms are confronted with difficulties in meeting the optimum quality, automation, and robustness requirements. In this paper, we propose two clustering algorithm combinations that integrate the K-Means algorithm and can tackle some of these problems. A comparison study is made between these two novel combination algorithms. The experimental results demonstrate that the proposed algorithms are very effective in producing the desired clusters of the given data sets as well as in diagnosis. These algorithms are very useful for image classification as well as for the extraction of objects.
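The K-Means building block that the proposed combinations integrate can be sketched in a few lines. The one-dimensional "image" below (three intensity populations, e.g. background, tissue, lesion) is a synthetic stand-in for real medical imagery, and the quantile initialization is an illustrative choice for determinism.

```python
# Plain K-Means on pixel intensities with deterministic quantile init.
import numpy as np

def kmeans(X, k, iters=100):
    centers = np.quantile(X, np.linspace(0, 1, k))   # spread initial centers
    for _ in range(iters):
        # Assign each pixel to its nearest center.
        labels = np.argmin(np.abs(X[:, None] - centers[None, :]), axis=1)
        # Move each center to the mean of its cluster (guard empty clusters).
        new = np.array([X[labels == j].mean() if np.any(labels == j)
                        else centers[j] for j in range(k)])
        if np.allclose(new, centers):
            break
        centers = new
    return labels, centers

rng = np.random.default_rng(1)
img = np.concatenate([rng.normal(m, 5, 300) for m in (40, 120, 200)])
labels, centers = kmeans(img, 3)
```

For 2-D images the same loop applies after flattening pixels (or pixel feature vectors); the combination algorithms in the paper wrap additional steps around this core.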
Teacher Candidates' Communication Skills and Communicator Styles
Cem ÇUHADAR; Özgür, Hasan; Akgün, Fatma; GÜNDÜZ, Şemseddin
2015-01-01
The purpose of this study is to find out the relationship between the communication skills and the communicator styles of teacher candidates. This research was conducted among senior class students studying at Trakya University, Faculty of Education, in the fall semester of the 2012-2013 academic year. A total of 315 teacher candidates, 205 women and 110 men, participated in the research. As a result, it has been observed that the teacher candidates bear animated/expressive features the...
A Hybrid Intelligent Algorithm for Optimal Birandom Portfolio Selection Problems
Directory of Open Access Journals (Sweden)
Qi Li
2014-01-01
Full Text Available Birandom portfolio selection problems have been well developed and widely applied in recent years. To solve these problems better, this paper designs a new hybrid intelligent algorithm which combines the improved LGMS-FOA algorithm with birandom simulation. Since all the existing algorithms solving these problems are based on genetic algorithm and birandom simulation, some comparisons between the new hybrid intelligent algorithm and the existing algorithms are given in terms of numerical experiments, which demonstrate that the new hybrid intelligent algorithm is more effective and precise when the numbers of the objective function computations are the same.
AN INCREMENTAL UPDATING ALGORITHM FOR MINING ASSOCIATION RULES
Institute of Scientific and Technical Information of China (English)
Xu Baowen; Yi Tong; Wu Fangjun; Chen Zhenqiang
2002-01-01
In this letter, on the basis of the Frequent Pattern (FP) tree, a support function to update the FP-tree is introduced, and an Incremental FP (IFP) algorithm for mining association rules is proposed. The IFP algorithm considers not only adding new data to the database but also removing old data from it. Furthermore, it simplifies five cases to three. The algorithm proposed in this letter avoids generating large numbers of candidate items, and it is highly efficient.
Casanova, Henri; Robert, Yves
2008-01-01
""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
Institute of Scientific and Technical Information of China (English)
金英连; 王斌锐; 徐崟
2013-01-01
Image stabilization is key to accurate docking operations of robots with vision. The complete image stabilization algorithm is established, including an image kinematics model, KLT feature detection, SAD feature matching, and filtering. Kalman and FIR filters are designed for smoothing the image motion parameters and built in MATLAB. Simulations of filtering the unintended motion parameters are implemented to show the jitter-removal effect, and the Kalman filter is compared with the FIR filter. Comparison curves and tables are given, which demonstrate that the Kalman filter outperforms the FIR filter in the robot vision image stabilization process. Based on VC++ and OpenCV, image stabilization software was programmed, and experiments were completed on a dual moving-robot docking operation platform. The algorithm's running time is less than the sampling period, so the system satisfies the real-time and precision requirements of robot docking operations.
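A minimal sketch of the Kalman smoothing step this record describes: a one-dimensional constant-velocity model filtering a single jittery image-motion parameter (e.g. horizontal shift). The process and measurement noise values are illustrative tuning choices, not the paper's settings.

```python
# Constant-velocity Kalman filter smoothing one motion parameter.
import numpy as np

def kalman_smooth(z, q=1e-3, r=1.0):
    F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (pos, vel)
    H = np.array([[1.0, 0.0]])               # we observe position only
    Q = q * np.eye(2)                         # process noise covariance
    x = np.array([z[0], 0.0])
    P = np.eye(2)
    out = []
    for m in z:
        x = F @ x                             # predict
        P = F @ P @ F.T + Q
        S = H @ P @ H.T + r                   # innovation covariance
        K = (P @ H.T) / S                     # Kalman gain, shape (2, 1)
        x = x + (K * (m - H @ x)).ravel()     # update with measurement m
        P = (np.eye(2) - K @ H) @ P
        out.append(x[0])
    return np.array(out)

rng = np.random.default_rng(0)
t = np.arange(200, dtype=float)
smooth_motion = 0.1 * t                       # intended camera pan
jitter = rng.normal(0, 1.0, 200)              # hand/platform shake
filtered = kalman_smooth(smooth_motion + jitter)
```

An FIR smoother would instead convolve the measured motion with a fixed window, trading the Kalman filter's model-based lag-free tracking of ramps for simpler tuning, which matches the comparison reported above.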
Characterization for Fusion Candidate Vanadium Alloys
Institute of Scientific and Technical Information of China (English)
T. Muroga; T. Nagasaka; J. M. Chen; Z. Y. Xu; Q. Y. Huang; Y. C. Wu
2004-01-01
This paper summarizes recent achievements in the characterization of candidate vanadium alloys for fusion, obtained in the framework of the Japan-China Core University Program. The National Institute for Fusion Science (NIFS) has a program of fabricating high-purity V-4Cr-4Ti alloys. The resulting products (NIFS-HEAT-1 and -2) were characterized by various research groups in the world, including the Chinese partners. The South Western Institute of Physics (SWIP) fabricated a new V-4Cr-4Ti alloy (SWIP-Heat) and carried out a comparative evaluation of the hydrogen embrittlement of the NIFS-HEATs and the SWIP-Heat. Tensile tests of hydrogen-doped alloys showed that the NIFS-HEAT maintained its ductility to relatively high hydrogen levels. Comparison of these data with those of previous studies suggested that the reduced oxygen level in the NIFS-HEATs is responsible for the increased resistance to hydrogen embrittlement. Based on the chemical analysis data of the NIFS-HEATs and SWIP-Heats, neutron-induced activation was analyzed at the Institute of Plasma Physics (IPP-CAS) as a function of cooling time after use in a fusion first wall. The results showed that the low level of Co dominates the activity for up to 50 years, followed by a domination of Nb, or Nb and Al, in the respective alloys. It was suggested that reduction of Co and Nb, both of which are thought to have been introduced into the alloys via cross-contamination from the molds used, is crucial for further reducing the activation.
An Improved Heuristic Algorithm of Attribute Reduction in Rough Set
Institute of Scientific and Technical Information of China (English)
Shunxiang Wu; Maoqing Li; Wenting Huang; Sifeng Liu
2004-01-01
This paper introduces the background of rough set theory, then proposes a new algorithm for finding an optimal reduction and makes a comparison between the original algorithm and the improved one through experiments on the nine standard data sets in the UL database to demonstrate the validity of the improved heuristic algorithm.