Experience with CANDID: Comparison algorithm for navigating digital image databases
Energy Technology Data Exchange (ETDEWEB)
Kelly, P.; Cannon, M.
1994-10-01
This paper presents results from the authors' experience with CANDID (Comparison Algorithm for Navigating Digital Image Databases), which was designed to facilitate image retrieval by content using a query-by-example methodology. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized similarity measure between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to a user-provided example image. Results for three test applications are included.
CANDID: Comparison algorithm for navigating digital image databases
Energy Technology Data Exchange (ETDEWEB)
Kelly, P.M.; Cannon, T.M.
1994-02-21
In this paper, we propose a method for calculating the similarity between two digital images. A global signature describing the texture, shape, or color content is first computed for every image stored in a database, and a normalized distance between probability density functions of feature vectors is used to match signatures. This method can be used to retrieve images from a database that are similar to an example target image. This algorithm is applied to the problem of search and retrieval for a database containing pulmonary CT imagery, and experimental results are provided.
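The signature-matching idea in the two CANDID abstracts above can be sketched in a few lines. The sketch below is an illustration under stated assumptions, not the CANDID implementation: it uses one-dimensional histogram signatures and the Bhattacharyya coefficient as a normalized similarity measure between density estimates; the feature choice and function names are hypothetical.

```python
import numpy as np

def signature(values, bins=16, range_=(0.0, 1.0)):
    """Global signature: a normalized histogram approximating the probability
    density of some image feature (e.g., a texture or color statistic)."""
    hist, _ = np.histogram(values, bins=bins, range=range_)
    return hist / hist.sum()

def similarity(p, q):
    """Normalized similarity between two density estimates, here the
    Bhattacharyya coefficient: 1.0 for identical densities, near 0 for disjoint."""
    return float(np.sum(np.sqrt(p * q)))

# Simulated feature distributions standing in for three database images.
rng = np.random.default_rng(0)
query = signature(rng.beta(2, 5, 10_000))
match = signature(rng.beta(2, 5, 10_000))   # drawn from the same density
other = signature(rng.beta(5, 2, 10_000))   # drawn from a different density

assert similarity(query, match) > similarity(query, other)
```

Retrieval then amounts to ranking all database signatures by similarity to the query's signature and returning the top matches.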
Lee, K J; Jenet, F A; Martinez, J; Dartez, L P; Mata, A; Lunsford, G; Cohen, S; Biwer, C M; Rohr, M; Flanigan, J; Walker, A; Banaszak, S; Allen, B; Barr, E D; Bhat, N D R; Bogdanov, S; Brazier, A; Camilo, F; Champion, D J; Chatterjee, S; Cordes, J; Crawford, F; Deneva, J; Desvignes, G; Ferdman, R D; Freire, P; Hessels, J W T; Karuppusamy, R; Kaspi, V M; Knispel, B; Kramer, M; Lazarus, P; Lynch, R; Lyne, A; McLaughlin, M; Ransom, S; Scholz, P; Siemens, X; Spitler, L; Stairs, I; Tan, M; van Leeuwen, J; Zhu, W W
2013-01-01
Modern radio pulsar surveys produce a large volume of prospective candidates, the majority of which arise from human-created radio frequency interference or other forms of noise. Typically, large numbers of candidates need to be visually inspected in order to determine if they are real pulsars. This process can be labor intensive. In this paper, we introduce an algorithm called PEACE (Pulsar Evaluation Algorithm for Candidate Extraction) which improves the efficiency of identifying pulsar signals. The algorithm ranks the candidates based on a score function. Unlike popular machine-learning based algorithms, no prior training data sets are required. This algorithm has been applied to data from several large-scale radio pulsar surveys. Using the human-based ranking results generated by students in the Arecibo Remote Command Center programme, the statistical performance of PEACE was evaluated. It was found that PEACE ranked 68% of the student-identified pulsars within the top 0.17% of sorted candidates, 95% ...
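The score-function ranking described for PEACE needs no training data: each candidate gets a deterministic score and candidates are sorted by it. The sketch below illustrates only that pattern; the features and weights are invented for illustration and are not PEACE's actual score function.

```python
# Rank pulsar-search candidates by a fixed score function, with no training data.
# The feature names and weights here are illustrative, not PEACE's actual score.
def score(c):
    return 2.0 * c["snr"] + 1.0 * c["persistence"] - 3.0 * c["dm_zero_proximity"]

candidates = [
    {"id": "A", "snr": 9.0, "persistence": 0.8, "dm_zero_proximity": 0.1},
    {"id": "B", "snr": 4.0, "persistence": 0.2, "dm_zero_proximity": 0.9},  # RFI-like
    {"id": "C", "snr": 7.5, "persistence": 0.9, "dm_zero_proximity": 0.0},
]
ranked = sorted(candidates, key=score, reverse=True)
print([c["id"] for c in ranked])  # ['A', 'C', 'B'] -- strongest candidates first
```

Human inspection then only needs to cover the top fraction of the sorted list, which is exactly how the paper evaluates PEACE against student rankings.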
Comparison of Text Categorization Algorithms
Institute of Scientific and Technical Information of China (English)
SHI Yong-feng; ZHAO Yan-ping
2004-01-01
This paper summarizes several automatic text categorization algorithms in common use recently, and analyzes and compares their advantages and disadvantages. It provides clues for choosing appropriate automatic classification algorithms in different fields. Finally, some evaluations and summaries of these algorithms are discussed, and directions for further research are pointed out.
Comparison of fast discrete wavelet transform algorithms
Institute of Scientific and Technical Information of China (English)
MENG Shu-ping; TIAN Feng-chun; XU Xin
2005-01-01
This paper presents an analysis and experimental comparison of several typical fast algorithms for the discrete wavelet transform (DWT) and their implementation in image compression, particularly the Mallat algorithm, the FFT-based algorithm, the short-length-based algorithm and the lifting algorithm. The principles, structures and computational complexity of these algorithms are explored in detail. The results of the comparison experiments are consistent with those simulated in MATLAB. It is found that there are limitations in the implementation of the DWT: some algorithms work only for special wavelet transforms and lack generality. Above all, the speed of the wavelet transform, the governing element in the speed of image processing, is in fact the retarding factor for real-time image processing.
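The lifting algorithm compared above can be illustrated with the simplest possible case, the Haar wavelet. The sketch below is a minimal assumption-level example (one lifting level, unnormalized coefficients), not the paper's implementation:

```python
def haar_lifting(signal):
    """One level of the Haar DWT via lifting: a predict step (differences)
    followed by an update step (running averages). Length must be even."""
    even, odd = signal[0::2], signal[1::2]
    detail = [o - e for e, o in zip(even, odd)]          # predict step
    approx = [e + d / 2 for e, d in zip(even, detail)]   # update step
    return approx, detail

a, d = haar_lifting([2.0, 4.0, 6.0, 8.0])
print(a, d)  # [3.0, 7.0] [2.0, 2.0]
```

Lifting computes the transform in place with roughly half the arithmetic of the direct convolution form, which is one reason it features in speed comparisons like the one above.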
Evaluation of GPM candidate algorithms on hurricane observations
Le, M.; Chandrasekar, C. V.
2012-12-01
storms and hurricanes. In this paper, the performance of GPM candidate algorithms [2][3] for profile classification, melting region detection and drop size distribution retrieval for hurricane Earl will be presented. This analysis will be compared with other storm observations that are not tropical storms. The philosophy of the algorithm is based on the vertical characteristic of the measured dual-frequency ratio (DFRm), defined as the difference in measured radar reflectivities at the two frequencies. It helps our understanding of how hurricanes such as Earl form and intensify rapidly. References: [1] T. Iguchi, R. Oki, A. Eric and Y. Furuhama, "Global precipitation measurement program and the development of dual-frequency precipitation radar," J. Commun. Res. Lab. (Japan), 49, 37-45, 2002. [2] M. Le and V. Chandrasekar, "Recent updates on precipitation classification and hydrometeor identification algorithm for GPM-DPR," IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2012, Munich, Germany. [3] M. Le, V. Chandrasekar and S. Lim, "Microphysical retrieval from dual-frequency precipitation radar on board GPM," IEEE International Geoscience and Remote Sensing Symposium, IGARSS 2010, Honolulu, USA.
Institute of Scientific and Technical Information of China (English)
WANG ShunJin; ZHANG Hua
2007-01-01
Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with the Runge-Kutta algorithm and the symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
Supercomputers and biological sequence comparison algorithms.
Core, N G; Edmiston, E W; Saltz, J H; Smith, R M
1989-12-01
Comparison of biological (DNA or protein) sequences provides insight into molecular structure, function, and homology and is increasingly important as the available databases become larger and more numerous. One method of increasing the speed of the calculations is to perform them in parallel. We present the results of initial investigations using two dynamic programming algorithms on the Intel iPSC hypercube and the Connection Machine as well as an inexpensive, heuristically-based algorithm on the Encore Multimax.
A stereo matching algorithm using multi-peak candidate matches and geometric constraints
Institute of Scientific and Technical Information of China (English)
Yepeng Guan
2008-01-01
A gray cross-correlation matching technique is adopted to extract candidate matches whose gray cross-correlation coefficients lie within a certain range of the maximal correlation coefficient, called multi-peak candidate matches. Multi-peak candidates are first extracted corresponding to the three closest feature points. The corresponding multi-peak candidate matches are used to construct the model polygon. Correspondence is determined based on the local geometric relations between the three feature points and the multi-peak candidates. A disparity test and a global consistency check are applied to eliminate the remaining ambiguous matches that are not removed by the local geometric relations test. Experimental results show that the proposed algorithm is feasible and accurate.
Dynamic programming algorithms for biological sequence comparison.
Pearson, W R; Miller, W
1992-01-01
Efficient dynamic programming algorithms are available for a broad class of protein and DNA sequence comparison problems. These algorithms require computer time proportional to the product of the lengths of the two sequences being compared [O(N^2)] but require memory space proportional only to the sum of these lengths [O(N)]. Although the requirement for O(N^2) time limits use of the algorithms to the largest computers when searching protein and DNA sequence databases, many other applications of these algorithms, such as calculation of distances for evolutionary trees and comparison of a new sequence to a library of sequence profiles, are well within the capabilities of desktop computers. In particular, the results of library searches with rapid searching programs, such as FASTA or BLAST, should be confirmed by performing a rigorous optimal alignment. Whereas rapid methods do not overlook significant sequence similarities, FASTA limits the number of gaps that can be inserted into an alignment, so that a rigorous alignment may extend the alignment substantially in some cases. BLAST does not allow gaps in the local regions that it reports; a calculation that allows gaps is very likely to extend the alignment substantially. Although a Monte Carlo evaluation of the statistical significance of a similarity score with a rigorous algorithm is much slower than the heuristic approach used by the RDF2 program, the dynamic programming approach should take less than 1 hr on a 386-based PC or desktop Unix workstation. For descriptive purposes, we have limited our discussion to methods for calculating similarity scores and distances that use gap penalties of the form g = rk. Nevertheless, programs for the more general case (g = q+rk) are readily available. Versions of these programs that run either on Unix workstations, IBM-PC class computers, or the Macintosh can be obtained from either of the authors.
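The O(N^2)-time / O(N)-space property mentioned above comes from the fact that each dynamic-programming row depends only on the previous row. The sketch below shows that trick for a plain Needleman-Wunsch global alignment score; the scoring parameters are illustrative, not those of any particular program named in the abstract.

```python
def global_alignment_score(a, b, match=1, mismatch=-1, gap=-2):
    """Needleman-Wunsch score in O(len(a)*len(b)) time but O(len(b)) space,
    keeping only the previous DP row (the idea behind linear-space alignment)."""
    prev = [j * gap for j in range(len(b) + 1)]
    for i in range(1, len(a) + 1):
        curr = [i * gap]
        for j in range(1, len(b) + 1):
            diag = prev[j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            curr.append(max(diag, prev[j] + gap, curr[j - 1] + gap))
        prev = curr
    return prev[-1]

print(global_alignment_score("ACGT", "AGT"))  # 1
```

Recovering the alignment itself in linear space takes the additional divide-and-conquer step of Hirschberg's algorithm, but the score alone already suffices for distance calculations such as those for evolutionary trees.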
An Algorithm for Selecting QGP Candidate Events from Relativistic Heavy Ion Collision Data Sample
Liu, Lianshou; Chen, Qinghua; Hu, Yuan
1998-01-01
The formation of quark-gluon plasma (QGP) in relativistic heavy ion collisions is expected to be accompanied by a background of ordinary collision events without phase transition. In this short note an algorithm is proposed to select QGP candidate events from the whole event sample. This algorithm is based on a simple geometrical consideration together with some ordinary QGP signal, e.g. an increase of the $K/\pi$ ratio. The efficiency of this algorithm in raising the 'signal/noise ratio' of QGP events in the selected sub-sample is shown explicitly by using Monte Carlo simulation.
A Comparison of Candidate Seal Designs for Future Docking Systems
Dunlap, Patrick H., Jr.; Steinetz, Bruce M.
2012-01-01
NASA is developing a new docking system to support future space exploration missions to low Earth orbit, the Moon, and other destinations. A key component of this system is the seal at the main docking interface which inhibits the loss of cabin air once docking is complete. Depending on the mission, the seal must be able to dock in either a seal-on-flange or seal-on-seal configuration. Seal-on-flange mating would occur when a docking system equipped with a seal docks to a system with a flat metal flange. This would occur when a vehicle docks to a node on the International Space Station. Seal-on-seal mating would occur when two docking systems equipped with seals dock to each other. Two types of seal designs were identified for this application: Gask-O-seals and multi-piece seals. Both types of seals had a pair of seal bulbs to satisfy the redundancy requirement. A series of performance assessments and comparisons were made between the candidate seal designs indicating that they meet the requirements for leak rate and compression and adhesion loads under a range of operating conditions. Other design factors such as part count, integration into the docking system tunnel, seal-on-seal mating, and cost were also considered leading to the selection of the multi-piece seal design for the new docking system. The results of this study can be used by designers of future docking systems and other habitable volumes to select the seal design best-suited for their particular application.
A Comparison of Algorithms for the Construction of SZ Cluster Catalogues
Melin, J -B; Bartelmann, M; Bartlett, J G; Betoule, M; Bobin, J; Carvalho, P; Chon, G; Delabrouille, J; Diego, J M; Harrison, D L; Herranz, D; Hobson, M; Kneissl, R; Lasenby, A N; Jeune, M Le; Lopez-Caniego, M; Mazzotta, P; Rocha, G M; Schaefer, B M; Starck, J -L; Waizmann, J -C; Yvon, D
2012-01-01
We evaluate the construction methodology of an all-sky catalogue of galaxy clusters detected through the Sunyaev-Zel'dovich (SZ) effect. We perform an extensive comparison of twelve algorithms applied to the same detailed simulations of the millimeter and submillimeter sky based on a Planck-like case. We present the results of this "SZ Challenge" in terms of catalogue completeness, purity, and astrometric and photometric reconstruction. Our results provide a comparison of a representative sample of SZ detection algorithms and highlight important issues in their application. In our study case, we show that the exact expected number of clusters remains uncertain (about a thousand cluster candidates at |b| > 20 deg with 90% purity) and that it depends on the SZ model, on the detailed sky simulations, and on the algorithmic implementation of the detection methods. We also estimate the astrometric precision of the cluster candidates, which is found to be of the order of ~2 arcmin on average, and the photometric uncertainty of...
The Performance Comparisons between the Unconstrained and Constrained Equalization Algorithms
Institute of Scientific and Technical Information of China (English)
HE Zhong-qiu; LI Dao-ben
2003-01-01
This paper proposes two unconstrained algorithms, the Steepest Descent (SD) algorithm and the Conjugate Gradient (CG) algorithm, based on a superexcellent cost function [1-3]. At the same time, two constrained algorithms, the Constrained Steepest Descent (CSD) algorithm and the Constrained Conjugate Gradient (CCG) algorithm, are deduced subject to a new constraint condition. Both are implemented in the unitary transform domain. The computational complexities of the constrained algorithms are compared to those of the unconstrained algorithms. Simulation results show their performance comparison.
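The SD/CG distinction above is easiest to see on a small quadratic cost. The sketch below is generic (a 2-by-2 symmetric positive-definite system, not the paper's cost function or constraint): with exact line search, steepest descent only converges linearly, while conjugate gradient terminates in at most n steps.

```python
import numpy as np

# Minimize f(x) = 0.5 x^T A x - b^T x for a symmetric positive-definite A.
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x_star = np.linalg.solve(A, b)

def steepest_descent(x, iters):
    for _ in range(iters):
        r = b - A @ x                      # negative gradient
        alpha = (r @ r) / (r @ A @ r)      # exact line search
        x = x + alpha * r
    return x

def conjugate_gradient(x, iters):
    r = b - A @ x
    p = r.copy()
    for _ in range(iters):
        alpha = (r @ r) / (p @ A @ p)
        x = x + alpha * p
        r_new = r - alpha * (A @ p)
        beta = (r_new @ r_new) / (r @ r)   # Fletcher-Reeves update
        p = r_new + beta * p
        r = r_new
    return x

# CG solves an n-dimensional SPD system in at most n steps (here n = 2);
# steepest descent is still short of the minimizer after the same 2 steps.
assert np.allclose(conjugate_gradient(np.zeros(2), 2), x_star)
assert np.linalg.norm(steepest_descent(np.zeros(2), 2) - x_star) > 1e-10
```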
[SKLOF: a new algorithm to reduce the range of supernova candidates].
Tu, Liang-ping; Wei, Hui-ming; Wei, Peng; Pan, Jing-chang; Luo, A-li; Zhao, Yong-heng
2015-01-01
Supernovae (SNe) are called the "standard candles" of cosmology; the probability of an outbreak in any one galaxy is very low, making them a special, rare kind of astronomical object. Only across a large number of galaxies do we have a chance to find a supernova. A supernova in the midst of its explosion will illuminate the entire galaxy, so the spectra of such galaxies show obvious supernova features. But the number of supernovae found so far is very small relative to the large number of astronomical objects. The computation time of the supernova search is key to whether follow-up observations can be made, so an efficient method is needed. The time complexity of the density-based outlier detection algorithm (LOF) is not ideal, which limits its application to large datasets. Through an improvement of the LOF algorithm, a new algorithm that reduces the search range of supernova candidates in a flood of galaxy spectra is introduced and named SKLOF. Firstly, the spectral datasets are pruned, discarding most objects that cannot be outliers. Secondly, we use the improved LOF algorithm to calculate the local outlier factors (LOFs) of the remaining spectra and arrange all LOFs in descending order. Finally, we obtain a smaller search range of supernova candidates for subsequent identification. The experimental results show that the algorithm is very effective, not only improving accuracy but also reducing the operation time compared with the LOF algorithm while guaranteeing detection accuracy. PMID:25993860
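The local-outlier-factor idea behind LOF and SKLOF is a ratio of local densities. The sketch below is a deliberately simplified density-ratio score on one-dimensional data (in the spirit of LOF, not the full algorithm and not SKLOF's pruning); all data are made up.

```python
def knn_dists(points, i, k):
    """Distances from points[i] to its k nearest neighbours (1-D data)."""
    d = sorted(abs(points[i] - p) for j, p in enumerate(points) if j != i)
    return d[:k]

def outlier_factor(points, i, k=3):
    """Simplified density-based outlier score: the point's mean k-NN distance
    divided by the mean k-NN distance of its neighbours. Values >> 1 flag outliers."""
    own = sum(knn_dists(points, i, k)) / k
    neigh_idx = sorted(range(len(points)),
                       key=lambda j: abs(points[i] - points[j]))[1:k + 1]
    neigh = sum(sum(knn_dists(points, j, k)) / k for j in neigh_idx) / k
    return own / neigh

data = [1.0, 1.1, 0.9, 1.05, 0.95, 8.0]   # last point is a clear outlier
scores = [outlier_factor(data, i) for i in range(len(data))]
assert max(range(len(data)), key=lambda i: scores[i]) == 5
```

Sorting the scores in descending order and keeping only the top of the list is the "reduced search range" step that SKLOF applies to galaxy spectra.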
COMPARISON OF LOSSLESS DATA COMPRESSION ALGORITHMS FOR TEXT DATA
Directory of Open Access Journals (Sweden)
U.S. Amarasinghe
2010-12-01
Full Text Available Data compression is a common requirement for most computerized applications. There are a number of data compression algorithms which are dedicated to compressing different data formats. Even for a single data type there are a number of different compression algorithms which use different approaches. This paper examines lossless data compression algorithms and compares their performance. A set of selected algorithms is examined and implemented to evaluate their performance in compressing text data. An experimental comparison of a number of different lossless data compression algorithms is presented in this paper. The article is concluded by stating which algorithm performs well for text data.
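A comparison of this kind can be reproduced with the lossless compressors in the Python standard library. The algorithms below (DEFLATE via zlib, Burrows-Wheeler via bz2, LZMA) are not necessarily those examined in the paper; the text sample is made up.

```python
import bz2
import lzma
import zlib

text = ("the quick brown fox jumps over the lazy dog. " * 200).encode()

# Compare three standard-library lossless compressors on the same text.
results = {
    "zlib": len(zlib.compress(text, 9)),
    "bz2": len(bz2.compress(text, 9)),
    "lzma": len(lzma.compress(text)),
}
for name, size in sorted(results.items(), key=lambda kv: kv[1]):
    print(f"{name}: {len(text)} -> {size} bytes (ratio {len(text) / size:.1f}:1)")
```

On highly repetitive text like this all three achieve large ratios; on real prose the ranking and ratios differ, which is exactly what such experimental comparisons measure.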
Trust Based Algorithm for Candidate Node Selection in Hybrid MANET-DTN
Directory of Open Access Journals (Sweden)
Jan Papaj
2014-01-01
Full Text Available The hybrid MANET-DTN is a mobile network that enables transport of data between groups of disconnected mobile nodes. The network provides the benefits of Mobile Ad-Hoc Networks (MANET) and Delay Tolerant Networks (DTN). The main problem of the MANET occurs if the communication path is broken or disconnected for some short time period. On the other side, DTN allows sending data in a disconnected environment with a higher tolerance to delay. The hybrid MANET-DTN provides an optimal solution for transporting information in emergency situations. Moreover, security is a critical factor because the data are transported by mobile devices. In this paper, we investigate the issue of secure candidate node selection for transportation of the data in a disconnected environment for the hybrid MANET-DTN. To achieve the secure selection of reliable mobile nodes, a trust algorithm is introduced. The algorithm enables the selection of reliable nodes based on collected routing information. This algorithm is implemented in the OPNET Modeler simulator.
A Comparison of First-order Algorithms for Machine Learning
Wei, Yu; Thomas, Pock
2014-01-01
Using an optimization algorithm to solve a machine learning problem is one of the mainstream approaches in the field. In this work, we present a comprehensive comparison of some state-of-the-art first-order optimization algorithms for convex optimization problems in machine learning. We concentrate on several smooth and non-smooth machine learning problems with a loss function plus a regularizer. The overall experimental results show the superiority of primal-dual algorithms in solving a mac...
A systematic comparison of genome-scale clustering algorithms
Jay, Jeremy J.; Eblen, John D; Zhang, Yun; Benson, Mikael; Perkins, Andy D.; Saxton, Arnold M.; Voy, Brynn H.; Elissa J Chesler; Langston, Michael A.
2012-01-01
Background: A wealth of clustering algorithms has been applied to gene co-expression experiments. These algorithms cover a broad range of approaches, from conventional techniques such as k-means and hierarchical clustering, to graphical approaches such as k-clique communities, weighted gene co-expression networks (WGCNA) and paraclique. Comparison of these methods to evaluate their relative effectiveness provides guidance to algorithm selection, development and implementation. Most prior work...
Comparison between Two Text Digital Watermarking Algorithms
Institute of Scientific and Technical Information of China (English)
TANG Sheng; XUE Xu-ce
2011-01-01
In this paper, two text digital watermarking methods are compared in terms of their robustness performance. A nonlinear watermarking algorithm embeds the watermark into the reordered DCT coefficients of a text image, and utilizes a nonlinear detector to detect the watermark under several attacks. Compared with the classical watermarking algorithm, experimental results show that this nonlinear watermarking algorithm has some potential merits.
Garg, Poonam
2010-01-01
Genetic algorithms are population-based metaheuristics. They have been successfully applied to many optimization problems. However, premature convergence is an inherent characteristic of such classical genetic algorithms that makes them incapable of searching numerous solutions of the problem domain. A memetic algorithm is an extension of the traditional genetic algorithm. It uses a local search technique to reduce the likelihood of premature convergence. The cryptanalysis of the simplified Data Encryption Standard can be formulated as an NP-hard combinatorial problem. In this paper, a comparison between a memetic algorithm and a genetic algorithm was made in order to investigate their performance for the cryptanalysis of simplified Data Encryption Standard problems (SDES). The methods were tested, and various experimental results show that the memetic algorithm performs better than the genetic algorithm for this type of NP-hard combinatorial problem. This paper represents our first effort toward efficient memetic algo...
Directory of Open Access Journals (Sweden)
Ait-Ali Lamia
2011-11-01
Full Text Available Abstract. Background: To propose a new diagnostic algorithm for candidates for Fontan and identify those who can skip cardiac catheterization (CC). Methods: Forty-four candidates for Fontan (median age 4.8 years, range: 2-29 years) were prospectively evaluated by trans-thoracic echocardiography (TTE), cardiovascular magnetic resonance (CMR) and CC. Before CC, according to clinical, echo and CMR findings, patients were divided into two groups: Group I comprised 18 patients deemed suitable for Fontan without requiring CC; Group II comprised 26 patients indicated for CC either in order to detect more details, or for interventional procedures. Results: In Group I ("CC not required") no unexpected new information affecting surgical planning was provided by CC. Conversely, in Group II new information was provided by CC in three patients (0 vs 11.5%, p = 0.35) and in six an interventional procedure was performed. During CC, minor complications occurred in one patient from Group I and in three from Group II (6 vs 14%, p = 0.7). Radiation dose-area product was similar in the two groups (median 20 Gycm2, range: 5-40 vs 26.5 Gycm2, range: 9-270; p = 0.37). All 18 Group I patients and 19 Group II patients underwent a total cavo-pulmonary anastomosis; of the remaining seven Group II patients, four were excluded from Fontan, two are awaiting Fontan, and one refused the intervention. Conclusion: In this paper we propose a new diagnostic algorithm in a pre-Fontan setting. An accurate non-invasive evaluation comprising TTE and CMR could select patients who can skip CC.
Comparison of greedy algorithms for α-decision tree construction
Alkhalid, Abdulaziz
2011-01-01
A comparison among different heuristics that are used by greedy algorithms which construct approximate decision trees (α-decision trees) is presented. The comparison is conducted using decision tables based on 24 data sets from the UCI Machine Learning Repository [2]. Complexity of decision trees is estimated relative to several cost functions: depth, average depth, number of nodes, number of nonterminal nodes, and number of terminal nodes. Costs of trees built by greedy algorithms are compared with minimum costs calculated by an algorithm based on dynamic programming. The results of experiments assign to each cost function a set of potentially good heuristics that minimize it. © 2011 Springer-Verlag.
Comparison of the SLAM algorithms: Hangar experiments
Directory of Open Access Journals (Sweden)
Korkmaz Mehmet
2016-01-01
Full Text Available This study aims to compare two known algorithms in an application scenario of simultaneous localization and mapping (SLAM) and to present issues related to them as well. The most commonly used SLAM algorithms, the Extended Kalman Filter (EKF) and the Unscented Kalman Filter (UKF), are compared with respect to the accuracy of the robot states, localization and mapping. Considering most implementations in previous studies, the simulation environments are chosen to be as large as possible to provide reliable results. In this study, two different hangar regions are simulated. According to the outcomes of the applications, the UKF-based SLAM algorithm has superior performance over the EKF-based one, apart from the elapsed time.
Does a Least-Preferred Candidate Win a Seat? A Comparison of Three Electoral Systems
Directory of Open Access Journals (Sweden)
Yoichi Hizen
2015-01-01
Full Text Available In this paper, the differences between two variations of proportional representation (PR, open-list PR and closed-list PR, are analyzed in terms of their ability to accurately reflect voter preference. The single nontransferable vote (SNTV is also included in the comparison as a benchmark. We construct a model of voting equilibria with a candidate who is least preferred by voters in the sense that replacing the least-preferred candidate in the set of winners with any loser is Pareto improving, and our focus is on whether the least-preferred candidate wins under each electoral system. We demonstrate that the least-preferred candidate never wins under the SNTV, but can win under open-list PR, although this is less likely than winning under closed-list PR.
Algorithmic parameterization of mixed treatment comparisons
van Valkenhoef, Gert; Tervonen, Tommi; de Brock, Bert; Hillege, Hans
2012-01-01
Mixed Treatment Comparisons (MTCs) enable the simultaneous meta-analysis (data pooling) of networks of clinical trials comparing ≥2 alternative treatments. Inconsistency models are critical in MTC to assess the overall consistency between evidence sources. Only in the absence
A Comparison of learning algorithms on the Arcade Learning Environment
Defazio, Aaron; Graepel, Thore
2014-01-01
Reinforcement learning agents have traditionally been evaluated on small toy problems. With advances in computing power and the advent of the Arcade Learning Environment, it is now possible to evaluate algorithms on diverse and difficult problems within a consistent framework. We discuss some challenges posed by the arcade learning environment which do not manifest in simpler environments. We then provide a comparison of model-free, linear learning algorithms on this challenging problem set.
An Adaptive Algorithm for Pairwise Comparison-based Preference Measurement
DEFF Research Database (Denmark)
Meissner, Martin; Decker, Reinhold; Scholz, Sören W.
2011-01-01
The Pairwise Comparison-based Preference Measurement (PCPM) approach has been proposed for products featuring a large number of attributes. In the PCPM framework, a static two-cyclic design is used to reduce the number of pairwise comparisons. However, adaptive questioning routines that maximize the information gained from pairwise comparisons promise to further increase the efficiency of this approach. This paper introduces a new adaptive algorithm for PCPM, which accounts for several response errors. The suggested approach is compared with an adaptive algorithm that was proposed for the Analytic Hierarchy Process as well as a random selection of pairwise comparisons. By means of Monte Carlo simulations, we quantify the extent to which the adaptive selection of pairwise comparisons increases the efficiency of the respective approach.
Directory of Open Access Journals (Sweden)
Amin Mubark Alamin Ibrahim
2015-04-01
Full Text Available The subject of text matching, or searching within texts, is an important topic in the field of computer science and is used in many programs, such as the spelling correction and search-and-replace features of Microsoft Word, among other uses. The aim of this study was to compare text matching algorithms, of which there are very many; we applied the Horspool and Brute Force algorithms. According to the criteria of number of comparisons and execution time, the study found Horspool's algorithm preferable.
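The comparison criterion used in the study above (number of character comparisons) can be instrumented directly. The sketch below implements both algorithms with comparison counters; the sample text and pattern are made up, and counts will vary with the input.

```python
def brute_force(text, pat):
    """Return (first match index, number of character comparisons)."""
    comps = 0
    for i in range(len(text) - len(pat) + 1):
        for j in range(len(pat)):
            comps += 1
            if text[i + j] != pat[j]:
                break
        else:
            return i, comps
    return -1, comps

def horspool(text, pat):
    """Boyer-Moore-Horspool: on a mismatch, shift by the last window
    character's distance from the pattern end, skipping hopeless windows."""
    m = len(pat)
    shift = {c: m - 1 - k for k, c in enumerate(pat[:-1])}
    comps, i = 0, 0
    while i <= len(text) - m:
        j = m - 1
        while j >= 0:
            comps += 1
            if text[i + j] != pat[j]:
                break
            j -= 1
        if j < 0:
            return i, comps
        i += shift.get(text[i + m - 1], m)
    return -1, comps

text = "here is a simple example with a needle in it"
pos_bf, c_bf = brute_force(text, "needle")
pos_h, c_h = horspool(text, "needle")
assert pos_bf == pos_h == text.index("needle")
assert c_h < c_bf   # Horspool skips ahead, doing fewer comparisons
```

On typical natural-language text Horspool's larger shifts make it clearly cheaper than the brute-force scan, matching the study's conclusion.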
Novel algorithm of finding good candidate pre-configuration cycles in survivable WDM mesh network
Institute of Scientific and Technical Information of China (English)
ZHAO Tai-fei; YU Hong-fang; LI Le-min
2006-01-01
We present a novel algorithm of finding cycles, called the Fast Cycles Mining Algorithm (FCMA), for efficient p-cycle network design in WDM networks. The algorithm is also flexible in that the number and the length of cycles generated are controlled by several input parameters. The problem of wavelength assignment on p-cycles is considered in the algorithm. This algorithm is scalable and especially suitable for survivable WDM mesh networks. Finally, the performance of the algorithm is gauged by running on some real-world network topologies.
Comparison of face Recognition Algorithms on Dummy Faces
Directory of Open Access Journals (Sweden)
Aruni Singh
2012-09-01
Full Text Available In the age of rising crime, face recognition is enormously important in the contexts of computer vision, psychology, surveillance, fraud detection, pattern recognition, neural networks, content-based video processing, etc. The face is a non-intrusive, strong biometric for identification, and hence criminals always try to hide their facial organs by different artificial means such as plastic surgery, disguise and dummies. The availability of a comprehensive face database is crucial to test the performance of these face recognition algorithms. However, while existing publicly-available face databases contain face images with a wide variety of poses, illumination, gestures and face occlusions, no dummy face database is available in the public domain. The contributions of this research paper are: (i) preparation of a dummy face database of 110 subjects; (ii) comparison of some texture-based, feature-based and holistic face recognition algorithms on that dummy face database; (iii) critical analysis of these types of algorithms on the dummy face database.
Selection of candidate plus phenotypes of Jatropha curcas L. using method of paired comparisons
Energy Technology Data Exchange (ETDEWEB)
Mishra, D.K. [Silviculture Division, Arid Forest Research Institute, P.O. Krishi Mandi, New Pali Road, Jodhpur 342005, Rajasthan (India)
2009-03-15
Jatropha curcas L. (Euphorbiaceae) is an oil bearing species with multiple uses and considerable potential as a biodiesel crop. The present communication deals with the method of selecting plus phenotypes of J. curcas for exploiting genetic variability for further improvement. Candidate plus tree selection is the first and most important stage in any tree improvement programme. The selection of candidate plus plants (CPPs) is based upon various important attributes associated with the species and their relative ranking. Relative preference between various traits and scoring for each trait has been worked out by using the method of paired comparisons for the selection of CPP in J. curcas L. The most important ones are seed and oil yields. (author)
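The method of paired comparisons used above reduces to a simple tournament: every trait is judged against every other trait, and a trait's score is the number of pairs it wins. The traits and preferences in the sketch below are illustrative placeholders, not Mishra's actual data.

```python
from itertools import combinations

# Method of paired comparisons: score each trait by the pairs it wins.
# Traits and preferences are invented for illustration.
traits = ["seed yield", "oil content", "branching", "disease resistance"]
prefer = {  # prefer[(a, b)] is the judged winner of the pair (a, b)
    ("seed yield", "oil content"): "seed yield",
    ("seed yield", "branching"): "seed yield",
    ("seed yield", "disease resistance"): "seed yield",
    ("oil content", "branching"): "oil content",
    ("oil content", "disease resistance"): "oil content",
    ("branching", "disease resistance"): "disease resistance",
}
score = {t: 0 for t in traits}
for pair in combinations(traits, 2):
    score[prefer[pair]] += 1

ranking = sorted(traits, key=score.get, reverse=True)
print(ranking)  # ['seed yield', 'oil content', 'disease resistance', 'branching']
```

The resulting trait weights can then be used to rank candidate plus plants, with the top-weighted traits (here seed and oil yield, as in the abstract) dominating the selection.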
Amin Mubark Alamin Ibrahim; Mustafa Elgili Mustafa
2015-01-01
Text matching and text search are important topics in the field of computer science, used in many programs such as Microsoft Word for correcting spelling mistakes and for search & replace, among other uses. The aim of this study was to compare text matching algorithms, applied here to the Horspool and Brute Force algorithms, according to the standard number of comparisons and the execution time. The study pointed on prefer...
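As an illustration of the two algorithms named in the record above, here is a minimal Python sketch (not the study's own code; the comparison counter is added purely to make the study's metric concrete):

```python
def brute_force(text, pat):
    """Return (index of first match or -1, number of character comparisons)."""
    n, m = len(text), len(pat)
    comparisons = 0
    for i in range(n - m + 1):
        j = 0
        while j < m:
            comparisons += 1
            if text[i + j] != pat[j]:
                break
            j += 1
        if j == m:
            return i, comparisons
    return -1, comparisons

def horspool(text, pat):
    """Boyer-Moore-Horspool: right-to-left scan with the bad-character shift."""
    n, m = len(text), len(pat)
    # Shift table: distance from each character's last occurrence
    # (excluding the final pattern position) to the pattern's end.
    shift = {}
    for j in range(m - 1):
        shift[pat[j]] = m - 1 - j
    comparisons = 0
    i = 0
    while i <= n - m:
        j = m - 1
        while j >= 0:
            comparisons += 1
            if text[i + j] != pat[j]:
                break
            j -= 1
        if j < 0:
            return i, comparisons
        i += shift.get(text[i + m - 1], m)  # unseen character: full shift
    return -1, comparisons
```

Both return the same match position; Horspool typically performs fewer comparisons on natural-language text because mismatched window characters allow multi-position shifts.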
Comparison of machine learning algorithms for detecting coral reef
Directory of Open Access Journals (Sweden)
Eduardo Tusa
2014-09-01
(Received: 2014/07/31 - Accepted: 2014/09/23) This work focuses on developing a fast coral reef detector for use on an autonomous underwater vehicle (AUV). Fast detection secures the AUV's stabilization with respect to an area of reef as quickly as possible and prevents devastating collisions. We use the algorithm of Purser et al. (2009) because of its precision. This detector has two parts: feature extraction, which uses Gabor wavelet filters, and feature classification, which uses machine learning based on neural networks. Because of the neural networks' long running time, we substitute a classification algorithm based on decision trees. We use a database of 621 images of coral reef in Belize (110 images for training and 511 images for testing). We implement the bank of Gabor wavelet filters using C++ and the OpenCV library. We compare the accuracy and running time of 9 machine learning algorithms, which led to the selection of the decision tree algorithm. Our coral detector runs in 70 ms, compared to the 22 s taken by the algorithm of Purser et al. (2009).
Parallel Branch and Bound Algorithm - A comparison between serial, OpenMP and MPI implementations
International Nuclear Information System (INIS)
This paper presents a comparison of an extended version of the regular Branch and Bound algorithm previously implemented in serial with a new parallel implementation, using both MPI (distributed memory parallel model) and OpenMP (shared memory parallel model). The branch-and-bound algorithm is an enumerative optimization technique, where finding a solution to a mixed integer programming (MIP) problem is based on the construction of a tree where nodes represent candidate problems and branches represent the new restrictions to be considered. Through this tree all integer solutions of the feasible region of the problem are listed explicitly or implicitly ensuring that all the optimal solutions will be found. A common approach to solve such problems is to convert sub-problems of the mixed integer problem to linear programming problems, thereby eliminating some of the integer constraints, and then trying to solve that problem using an existing linear program approach. The paper describes the general branch and bound algorithm used and provides details on the implementation and the results of the comparison.
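The abstract above describes the core branch-and-bound idea: a tree of candidate problems pruned by a relaxation bound. A minimal serial Python sketch on a toy 0/1 knapsack (an assumed example, not the paper's MIP solver), where the bound is the fractional LP relaxation:

```python
def knapsack_branch_and_bound(values, weights, capacity):
    """0/1 knapsack by branch and bound.

    The bound at each node is the LP relaxation: remaining items may be
    taken fractionally, which can only overestimate the best integer value.
    """
    n = len(values)
    # Sort items by value density so the greedy fractional bound is tight.
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)

    def bound(idx, value, room):
        # Optimistic value: fill remaining room greedily, fractionally.
        for i in order[idx:]:
            if weights[i] <= room:
                room -= weights[i]
                value += values[i]
            else:
                return value + values[i] * room / weights[i]
        return value

    best = 0
    stack = [(0, 0, capacity)]  # (next item position, value so far, room left)
    while stack:
        idx, value, room = stack.pop()
        best = max(best, value)
        if idx == n or bound(idx, value, room) <= best:
            continue  # prune: the relaxation cannot beat the incumbent
        i = order[idx]
        if weights[i] <= room:  # branch: take item i
            stack.append((idx + 1, value + values[i], room - weights[i]))
        stack.append((idx + 1, value, room))  # branch: skip item i
    return best
```

The stack of nodes is exactly the work pool that the paper's MPI and OpenMP variants distribute among processes or threads.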
Comparison of evolutionary algorithms in gene regulatory network model inference.
LENUS (Irish Health Repository)
2010-01-01
ABSTRACT: BACKGROUND: The evolution of high throughput technologies that measure gene expression levels has created a database for inferring GRNs (a process also known as reverse engineering of GRNs). However, the nature of these data has made this process very difficult. At the moment, several methods of discovering qualitative causal relationships between genes with high accuracy from microarray data exist, but large scale quantitative analysis on real biological datasets cannot be performed, to date, as existing approaches are not suitable for real microarray data which are noisy and insufficient. RESULTS: This paper performs an analysis of several existing evolutionary algorithms for quantitative gene regulatory network modelling. The aim is to present the techniques used and offer a comprehensive comparison of approaches, under a common framework. Algorithms are applied to both synthetic and real gene expression data from DNA microarrays, and ability to reproduce biological behaviour, scalability and robustness to noise are assessed and compared. CONCLUSIONS: Presented is a comparison framework for assessment of evolutionary algorithms, used to infer gene regulatory networks. Promising methods are identified and a platform for development of appropriate model formalisms is established.
A Comparison of Efficient Algorithms for Scheduling Parallel Data Redistribution
Directory of Open Access Journals (Sweden)
Marios-Evangelos Kogias
2014-06-01
Data redistribution in parallel is an often-addressed issue in modern computer networks. In this context, we study the case of data redistribution over a switching network. Data from the source stations need to be transferred to the destination stations in the minimum time possible. Unfortunately, the time required to complete the transfer is burdened by each switching, and thus producing an optimal schedule is proven to be computationally intractable. For the purposes of this paper we consider two algorithms which have been proved to be very efficient in the past. To improve on previous approaches, we propose splitting the data into two clusters depending on the size of the data to be transferred. To prove the efficiency of our approach we ran experiments on all three algorithms, comparing the time spans of the schedules produced as well as the running times to produce those schedules. The test cases we ran indicate that our newly proposed algorithm not only yields better results in terms of the schedule produced but runs faster as well.
Comparison of algorithms for ultrasound image segmentation without ground truth
Sikka, Karan; Deserno, Thomas M.
2010-02-01
Image segmentation is a pre-requisite to medical image analysis. A variety of segmentation algorithms have been proposed, and most are evaluated on a small dataset or based on classification of a single feature. The lack of a gold standard (ground truth) further adds to the discrepancy in these comparisons. This work proposes a new methodology for comparing image segmentation algorithms without ground truth by building a matrix called region-correlation matrix. Subsequently, suitable distance measures are proposed for quantitative assessment of similarity. The first measure takes into account the degree of region overlap or identical match. The second considers the degree of splitting or misclassification by using an appropriate penalty term. These measures are shown to satisfy the axioms of a quasi-metric. They are applied for a comparative analysis of synthetic segmentation maps to show their direct correlation with human intuition of similar segmentation. Since ultrasound images are difficult to segment and usually lack a ground truth, the measures are further used to compare the recently proposed spectral clustering algorithm (encoding spatial and edge information) with standard k-means over abdominal ultrasound images. Improving the parameterization and enlarging the feature space for k-means steadily increased segmentation quality to that of spectral clustering.
Comparison of total variation algorithms for electrical impedance tomography.
Zhou, Zhou; Sato dos Santos, Gustavo; Dowrick, Thomas; Avery, James; Sun, Zhaolin; Xu, Hui; Holder, David S
2015-06-01
The applications of total variation (TV) algorithms for electrical impedance tomography (EIT) have been investigated. The use of the TV regularisation technique helps to preserve discontinuities in reconstruction, such as the boundaries of perturbations and sharp changes in conductivity, which are unintentionally smoothed by traditional l2 norm regularisation. However, the non-differentiability of TV regularisation has led to the use of different algorithms. Recent advances in TV algorithms such as the primal dual interior point method (PDIPM), the linearised alternating direction method of multipliers (LADMM) and the split Bregman (SB) method have all demonstrated successful EIT applications, but no direct comparison of the techniques has been made. Their noise performance, spatial resolution and convergence rate applied to time difference EIT were studied in simulations on 2D cylindrical meshes with different noise levels, 2D cylindrical tank and 3D anatomically head-shaped phantoms containing vegetable material with complex conductivity. LADMM had the fastest calculation speed but worst resolution due to the exclusion of the second derivative; PDIPM reconstructed the sharpest change in conductivity but with lower contrast than SB; SB had a faster convergence rate than PDIPM and the lowest image errors. PMID:26008768
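The edge-preserving property claimed above can be demonstrated numerically: a sharp step and a gradual ramp of the same height have equal TV, while the squared-gradient (l2) penalty strongly prefers the ramp and therefore smooths edges. A small illustrative sketch (toy 1-row images, not the paper's EIT reconstruction code):

```python
def tv(img):
    """Anisotropic total variation: sum of absolute neighbour differences."""
    h = sum(abs(img[r][c + 1] - img[r][c])
            for r in range(len(img)) for c in range(len(img[0]) - 1))
    v = sum(abs(img[r + 1][c] - img[r][c])
            for r in range(len(img) - 1) for c in range(len(img[0])))
    return h + v

def grad_l2(img):
    """Squared l2 norm of the discrete gradient, the classical smoothness penalty."""
    h = sum((img[r][c + 1] - img[r][c]) ** 2
            for r in range(len(img)) for c in range(len(img[0]) - 1))
    v = sum((img[r + 1][c] - img[r][c]) ** 2
            for r in range(len(img) - 1) for c in range(len(img[0])))
    return h + v

# A 1-row "image" with a sharp unit step versus a gradual ramp.
step = [[0, 0, 0, 1, 1, 1]]
ramp = [[0, 0.2, 0.4, 0.6, 0.8, 1]]
```

TV assigns both profiles a cost of 1, so minimising TV does not penalise the discontinuity; the l2 penalty is five times larger for the step than for the ramp.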
Comparison between Galileo CBOC Candidates and BOC(1,1) in Terms of Detection Performance
Directory of Open Access Journals (Sweden)
Fabio Dovis
2008-01-01
Many scientific activities within the navigation field have been focused on the analysis of innovative modulations for both GPS L1C and Galileo E1 OS, after the 2004 agreement between the United States and the European Commission on the development of GPS and Galileo. The joint effort by scientists of both parties has been focused on the multiplexed binary offset carrier (MBOC), which is defined on the basis of its spectrum, and in this sense different time waveforms can be selected as possible modulation candidates. The goal of this paper is to present the detection performance of the composite BOC implementation of an MBOC signal in terms of detection and false alarm probabilities. A comparison between the CBOC and BOC(1,1) modulations is also presented to show how the CBOC solution, designed to have excellent tracking performance and multipath rejection capabilities, does not limit the acquisition process.
Directory of Open Access Journals (Sweden)
Saira Beg
2011-11-01
This paper presents a performance evaluation of the Bionomic Algorithm (BA) for the Shortest Path Finding (SPF) problem, compared with the performance of the Genetic Algorithm (GA) for the same problem. SPF is a classical problem with many applications in networks, robotics, electronics, etc. The SPF problem has been solved using different algorithms such as Dijkstra's algorithm and Floyd's algorithm, as well as GA, Neural Networks (NN), Tabu Search (TS) and Ant Colony Optimization (ACO). We have employed the Bionomic Algorithm to solve the SPF problem and give a performance comparison of BA vs. GA for the same problem. Simulation results, carried out using MATLAB, are presented at the end.
Comparison of Adhesion and Retention Forces for Two Candidate Docking Seal Elastomers
Hartzler, Brad D.; Panickar, Marta B.; Wasowski, Janice L.; Daniels, Christopher C.
2011-01-01
To successfully mate two pressurized vehicles or structures in space, advanced seals are required at the interface to prevent the loss of breathable air to the vacuum of space. A critical part of the development testing of candidate seal designs was a verification of the integrity of the retaining mechanism that holds the silicone seal component to the structure. Failure to retain the elastomer seal during flight could liberate seal material in the event of high adhesive loads during undocking. This work presents an investigation of the force required to separate the elastomer from its metal counter-face surface during simulated undocking as well as a comparison to that force which was necessary to destructively remove the elastomer from its retaining device. Two silicone elastomers, Wacker 007-49524 and Esterline ELASA-401, were evaluated. During the course of the investigation, modifications were made to the retaining devices to determine if the modifications improved the force needed to destructively remove the seal. The tests were completed at the expected operating temperatures of -50, +23, and +75 C. Under the conditions investigated, the comparison indicated that the adhesion between the elastomer and the metal counter-face was significantly less than the force needed to forcibly remove the elastomer seal from its retainer, and no failure would be expected.
Comparison of depletion algorithms for large systems of nuclides
International Nuclear Information System (INIS)
In this work five algorithms for solving the system of decay and transmutation equations with constant reaction rates encountered in burnup calculations were compared. These are the Chebyshev rational approximation method (CRAM), which is a new matrix exponential method; the matrix exponential power series with instant decay and secular equilibrium approximations for short-lived nuclides, which is used in ORIGEN; and three different variants of transmutation trajectory analysis (TTA), which is also known as the linear chains method. The common feature of these methods is their ability to deal with thousands of nuclides and reactions. Consequently, there is no need to simplify the system of equations and all nuclides can be accounted for explicitly. The methods were compared in single depletion steps using decay and cross-section data taken from the default ORIGEN libraries. Very accurate reference solutions were obtained from a high precision TTA algorithm. The results from CRAM and TTA were found to be very accurate. While ORIGEN was not as accurate, it should still be sufficient for most purposes. All TTA variants are much slower than the other two, which are so fast that their running time should be negligible in most, if not all, applications. The combination of speed and accuracy makes CRAM the clear winner of the comparison.
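The system compared above is linear, dN/dt = M N, so its exact solution is N(t) = exp(Mt) N(0); the methods differ in how they evaluate that matrix exponential. A toy sketch on a two-nuclide decay chain A → B (a naive Taylor-series exponential, which works only because this tiny matrix is well scaled; CRAM exists precisely because real burnup matrices are far too stiff for this), checked against the analytic Bateman solution:

```python
import math

def expm(M, t, terms=60):
    """exp(M t) via its Taylor series -- adequate only for small, mild matrices."""
    n = len(M)
    A = [[M[i][j] * t for j in range(n)] for i in range(n)]
    result = [[float(i == j) for j in range(n)] for i in range(n)]  # identity
    term = [row[:] for row in result]
    for k in range(1, terms):
        term = [[sum(term[i][p] * A[p][j] for p in range(n)) / k
                 for j in range(n)] for i in range(n)]
        result = [[result[i][j] + term[i][j] for j in range(n)] for i in range(n)]
    return result

# Decay chain A -> B -> (removed), with decay constants lam_a, lam_b.
lam_a, lam_b = 0.5, 0.1
M = [[-lam_a, 0.0],
     [lam_a, -lam_b]]
t = 3.0
E = expm(M, t)
n0 = [1.0, 0.0]                       # start with pure nuclide A
nt = [sum(E[i][j] * n0[j] for j in range(2)) for i in range(2)]

# Bateman (analytic) solution for the same chain.
na = math.exp(-lam_a * t)
nb = lam_a / (lam_b - lam_a) * (math.exp(-lam_a * t) - math.exp(-lam_b * t))
```

The matrix-exponential result agrees with the Bateman chain solution to machine precision, which is the comparison baseline idea used in the paper (there via a high-precision TTA reference).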
Comparison of New Multilevel Association Rule Algorithm with MAFIA
Arpna Shrivastava; Jain, R. C.; Ajay Kumar Shrivastava
2014-01-01
Multilevel association rules provide more precise and specific information. The Apriori algorithm is an established algorithm for finding association rules. A fast Apriori implementation is modified to develop a new algorithm for finding frequent item sets and mining multilevel association rules. MAFIA is another established algorithm for finding frequent item sets. In this paper, the performance of the new algorithm is analyzed and compared with the MAFIA algorithm.
Comparison of New Multilevel Association Rule Algorithm with MAFIA
Directory of Open Access Journals (Sweden)
Arpna Shrivastava
2014-10-01
Multilevel association rules provide more precise and specific information. The Apriori algorithm is an established algorithm for finding association rules. A fast Apriori implementation is modified to develop a new algorithm for finding frequent item sets and mining multilevel association rules. MAFIA is another established algorithm for finding frequent item sets. In this paper, the performance of the new algorithm is analyzed and compared with the MAFIA algorithm.
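For reference, the classical Apriori frequent-itemset step that both records above build on can be sketched in a few lines of Python (a textbook level-wise implementation on toy transactions, not the modified fast implementation the paper describes):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Level-wise frequent itemset mining (classical Apriori).

    Size-(k+1) candidates are joined from frequent k-itemsets and pruned
    using the anti-monotone property: every subset of a frequent set is frequent.
    """
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}
    frequent = {}
    level = [frozenset([i]) for i in sorted(items)]
    k = 1
    while level:
        counts = {c: sum(1 for t in transactions if c <= t) for c in level}
        current = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(current)
        # Join step: merge frequent sets into (k+1)-candidates, prune step: keep
        # a candidate only if all of its k-subsets are frequent.
        level = []
        for a, b in combinations(sorted(current, key=sorted), 2):
            cand = a | b
            if (len(cand) == k + 1 and cand not in level and
                    all(frozenset(s) in current for s in combinations(cand, k))):
                level.append(cand)
        k += 1
    return frequent
```

On five toy transactions with minimum support 3, the singletons and pairs survive but the triple {a, b, c} is pruned by its count.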
Performance Comparison Of Evolutionary Algorithms For Image Clustering
Civicioglu, P.; Atasever, U. H.; Ozkan, C.; Besdok, E.; Karkinli, A. E.; Kesikoglu, A.
2014-09-01
Evolutionary computation tools are able to process real-valued numerical sets in order to extract suboptimal solutions of a designed problem. Data clustering algorithms have been intensively used for image segmentation in remote sensing applications. Despite the wide usage of evolutionary algorithms for data clustering, their clustering performance has been scarcely studied using clustering validation indexes. In this paper, recently proposed evolutionary algorithms (the Artificial Bee Colony Algorithm (ABC), Gravitational Search Algorithm (GSA), Cuckoo Search Algorithm (CS), Adaptive Differential Evolution Algorithm (JADE), Differential Search Algorithm (DSA) and Backtracking Search Optimization Algorithm (BSA)) and some classical image clustering techniques (k-means, FCM and SOM networks) have been used to cluster images, and their performance has been compared using four clustering validation indexes. Experimental results showed that evolutionary algorithms give more reliable cluster centers than classical clustering techniques, but their convergence time is quite long.
A comparison of computational methods and algorithms for the complex gamma function
Ng, E. W.
1974-01-01
A survey and comparison of some computational methods and algorithms for gamma and log-gamma functions of complex arguments are presented. Methods and algorithms reported include Chebyshev approximations, Pade expansions and Stirling's asymptotic series. The comparison leads to the conclusion that Algorithm 421, published in the Communications of the ACM by H. Kuki, is the best program either for individual application or for inclusion in subroutine libraries.
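A modern, compact alternative to the methods surveyed above is the Lanczos approximation, sketched here for complex log-gamma (the g = 7, 9-term coefficient set is a widely published choice, not taken from the paper; the reflection formula handles Re(z) < 0.5):

```python
import cmath
import math

# Lanczos coefficients for g = 7, n = 9 (a widely published set).
_G = 7
_C = [0.99999999999980993, 676.5203681218851, -1259.1392167224028,
      771.32342877765313, -176.61502916214059, 12.507343278686905,
      -0.13857109526572012, 9.9843695780195716e-6, 1.5056327351493116e-7]

def lgamma_complex(z):
    """log Gamma(z) for complex z via the Lanczos approximation."""
    if z.real < 0.5:
        # Reflection formula: Gamma(z) Gamma(1-z) = pi / sin(pi z).
        return cmath.log(math.pi / cmath.sin(math.pi * z)) - lgamma_complex(1 - z)
    z -= 1
    x = _C[0] + sum(_C[i] / (z + i) for i in range(1, len(_C)))
    t = z + _G + 0.5
    return 0.5 * math.log(2 * math.pi) + (z + 0.5) * cmath.log(t) - t + cmath.log(x)
```

For real arguments this agrees with the standard library's `math.lgamma` to roughly double precision, e.g. log Gamma(5) = log 24.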
Directory of Open Access Journals (Sweden)
DURUSU, A.
2014-08-01
Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they make the solar panels operate at the maximum power point (MPP) whatever the changes in environmental conditions are. For this reason, they take an important place in the increase of PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of MPPT algorithms have been proposed in the literature. Comparisons of MPPT algorithms in the literature are made with sun-simulator-based test systems under laboratory conditions for short durations. In this study, however, the performances of the four most commonly used MPPT algorithms are compared under real environmental conditions for longer periods. A dual identical experimental setup is designed to compare two of the considered MPPT algorithms at a time in a synchronized manner. As a result of this study, the ranking among these algorithms is presented, and the results show that the Incremental Conductance (IC) algorithm gives the best performance.
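The Incremental Conductance rule that wins the comparison above exploits the fact that dP/dV = 0 at the MPP, i.e. dI/dV = -I/V. A minimal sketch on an assumed toy panel model (a real panel follows a diode equation; the linear I-V curve below just keeps the illustration short, with its MPP at V = 20):

```python
def pv_current(v):
    """Toy PV model: current falls linearly from Isc at V=0 to 0 at Voc."""
    isc, voc = 8.0, 40.0
    return max(0.0, isc * (1 - v / voc))

def mppt_ic(v=5.0, step=0.25, iters=200):
    """Incremental Conductance MPPT: at the MPP, dI/dV equals -I/V."""
    i = pv_current(v)
    for _ in range(iters):
        v_new = v + step
        i_new = pv_current(v_new)
        dv, di = v_new - v, i_new - i
        # Compare incremental conductance dI/dV with -I/V to pick the direction.
        if di / dv > -i_new / v_new:
            step = abs(step)       # left of the MPP: keep increasing V
        else:
            step = -abs(step)      # right of the MPP: decrease V
        v, i = v_new, i_new
    return v
```

With the linear model the tracker climbs to the MPP and then oscillates around it within one step size, which is the characteristic steady-state behaviour of perturb-style trackers.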
Institute of Scientific and Technical Information of China (English)
Li Xi; Ji Hong; Zheng Ruiming; Li Ting
2009-01-01
In order to improve the performance of peer-to-peer file sharing systems in mobile distributed environments, a novel always-optimally-coordinated (AOC) criterion and a corresponding candidate selection algorithm are proposed in this paper. Compared with the traditional min-hops criterion, the new approach introduces a fuzzy knowledge combination theory to investigate several important factors that influence file transfer success rate and efficiency. Whereas min-hops-based protocols only ask the nearest candidate peer for desired files, the selection algorithm based on AOC comprehensively considers users' preferences and network requirements with flexible balancing rules. Furthermore, its advantage is also expressed in its independence from any specific resource discovery protocol, allowing for scalability. The simulation results show that when the AOC-based peer selection algorithm is used, system performance is much better than with the min-hops scheme, with the file transfer success rate improved by more than 50% and the transfer time reduced by at least 20%.
Comparison of cone beam artifacts reduction: two pass algorithm vs TV-based CS algorithm
Choi, Shinkook; Baek, Jongduk
2015-03-01
In cone beam computed tomography (CBCT), the severity of the cone beam artifacts increases as the cone angle increases. To reduce the cone beam artifacts, several modified FDK algorithms and compressed-sensing-based iterative algorithms have been proposed. In this paper, we used the two-pass algorithm and the Gradient-Projection-Barzilai-Borwein (GPBB) algorithm to reduce the cone beam artifacts, and compared their performance using the structural similarity (SSIM) index. The two-pass algorithm assumes that the cone beam artifacts are mainly caused by extreme-density (ED) objects; it therefore reproduces the cone beam artifacts (i.e., the error image) produced by the ED objects and then subtracts them from the original image. The GPBB algorithm is a compressed-sensing-based iterative algorithm that minimizes an energy function, calculating the gradient projection with the step size determined by the Barzilai-Borwein formulation, and can therefore estimate missing data caused by the cone beam artifacts. To evaluate the performance of the two algorithms, we used test objects consisting of 7 ellipsoids separated along the z direction, and cone beam artifacts were generated using a 30 degree cone angle. Even though the FDK algorithm produced severe cone beam artifacts at this large cone angle, the two-pass algorithm reduced the cone beam artifacts, with small residual errors caused by inaccuracy of the ED objects. In contrast, the GPBB algorithm completely removed the cone beam artifacts and restored the original shape of the objects.
The Comparison and Application of Corner Detection Algorithms
Jie Chen; Li-hui Zou; Juan Zhang; Li-hua Dou
2009-01-01
Corners in images represent a lot of important information. Extracting corners accurately is significant to image processing, which can reduce much of the calculations. In this paper, two widely used corner detection algorithms, SUSAN and Harris corner detection algorithms which are both based on intensity, were compared in stability, noise immunity and complexity quantificationally via stability factor η, anti-noise factor ρ and the runtime of each algorithm. It concluded that Harris corner ...
Comparison of two global digital algorithms for Minkowski tensor estimation
DEFF Research Database (Denmark)
Christensen, Sabrina Tang; Kiderlen, Markus
2016-01-01
The geometry of real world objects can be described by Minkowski tensors. Algorithms have been suggested to approximate Minkowski tensors if only a binary image of the object is available. This paper presents implementations of two such algorithms. The theoretical convergence properties are confirmed by simulations on test sets, and recommendations for input arguments of the algorithms are given. For increasing resolutions, we obtain more accurate estimators for the Minkowski tensors. Digitisations of more complicated objects are shown to require higher resolutions.
The Comparison and Application of Corner Detection Algorithms
Directory of Open Access Journals (Sweden)
Jie Chen
2009-12-01
Corners in images carry a lot of important information. Extracting corners accurately is significant to image processing and can reduce much of the calculation. In this paper, two widely used corner detection algorithms, the SUSAN and Harris corner detection algorithms, which are both intensity-based, were compared quantitatively in stability, noise immunity and complexity via a stability factor η, an anti-noise factor ρ and the runtime of each algorithm. It was concluded that the Harris corner detection algorithm was superior to the SUSAN corner detection algorithm on the whole. Moreover, the SUSAN and Harris detection algorithms were improved by selecting an adaptive grey-difference threshold and by changing the directional differentials, respectively, and were compared using these three criteria. In addition, the SUSAN and Harris corner detectors were applied to an image matching experiment. It was verified that the quantitative evaluations of the corner detection algorithms were valid, through calculating match efficiency, defined as the number of correctly matched corner pairs divided by the matching time, which can reflect the performance of a corner detection algorithm comprehensively. Furthermore, the better corner detector was used in an image mosaic experiment, and the result was satisfactory. The work of this paper can provide a direction for the improvement and utilization of these two corner detection algorithms.
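The intensity-based Harris detector compared above scores each window by the structure tensor M of the image gradients: R = det(M) - k·trace(M)², which is positive only when both eigenvalues are large (a corner). A minimal sketch on hand-made gradient samples (illustrative data, not the paper's experiments):

```python
def harris_response(gradients, k=0.04):
    """Harris response R = det(M) - k*trace(M)^2 of the structure tensor M,
    accumulated from (Ix, Iy) gradient samples within a window."""
    sxx = sum(ix * ix for ix, iy in gradients)
    syy = sum(iy * iy for ix, iy in gradients)
    sxy = sum(ix * iy for ix, iy in gradients)
    det = sxx * syy - sxy * sxy
    trace = sxx + syy
    return det - k * trace * trace

# Edge: all gradients point one way -> one large eigenvalue, R < 0.
edge = [(1.0, 0.0)] * 8
# Corner: strong gradients in two directions -> two large eigenvalues, R > 0.
corner = [(1.0, 0.0)] * 4 + [(0.0, 1.0)] * 4
# Flat region: no gradient energy -> R = 0.
flat = [(0.0, 0.0)] * 8
```

The sign of R thus separates corners from edges and flat regions, which is what the stability and noise-immunity factors in the paper ultimately measure.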
A First Comparison of Kepler Planet Candidates in Single and Multiple Systems
Latham, David W; Quinn, Samuel N; Batalha, Natalie M; Borucki, William J; Brown, Timothy M; Bryson, Stephen T; Buchhave, Lars A; Caldwell, Douglas A; Carter, Joshua A; Christiansen, Jesse L; Ciardi, David R; Cochran, William D; Dunham, Edward W; Fabrycky, Daniel C; Ford, Eric B; Gautier, Thomas N; Gilliland, Ronald L; Holman, Matthew J; Howell, Steve B; Ibrahim, Khadeejah A; Isaacson, Howard; Basri, Gibor; Furesz, Gabor; Geary, John C; Jenkins, Jon M; Koch, David G; Lissauer, Jack J; Marcy, Geoffrey W; Quintana, Elisa V; Ragozzine, Darin; Sasselov, Dimitar D; Shporer, Avi; Steffen, Jason H; Welsh, William F; Wohler, Bill
2011-01-01
In this letter we present an overview of the rich population of systems with multiple candidate transiting planets found in the first four months of Kepler data. The census of multiples includes 115 targets that show 2 candidate planets, 45 with 3, 8 with 4, and 1 each with 5 and 6, for a total of 170 systems with 408 candidates. When compared to the 827 systems with only one candidate, the multiples account for 17 percent of the total number of systems, and a third of all the planet candidates. We compare the characteristics of candidates found in multiples with those found in singles. False positives due to eclipsing binaries are much less common for the multiples, as expected. Singles and multiples are both dominated by planets smaller than Neptune; 69 +2/-3 percent for singles and 86 +2/-5 percent for multiples. This result, that systems with multiple transiting planets are less likely to include a transiting giant planet, suggests that close-in giant planets tend to disrupt the orbital inclinations of sm...
A Comparison of Evolutionary Algorithms for Tracking Time-Varying Recursive Systems
Directory of Open Access Journals (Sweden)
White Michael S
2003-01-01
A comparison is made of the behaviour of some evolutionary algorithms in time-varying adaptive recursive filter systems. Simulations show that an algorithm including random immigrants outperforms a more conventional algorithm using the breeder genetic algorithm as the mutation operator when the time variation is discontinuous, but neither algorithm performs well when the time variation is rapid but smooth. To meet this deficit, a new hybrid algorithm, which uses a hill climber as an additional genetic operator applied for several steps at each generation, is introduced. A comparison is made of the effect of applying the hill-climbing operator a few times to all members of the population or a larger number of times solely to the best individual; it is found that applying it to the whole population yields the better results, substantially improved compared with those obtained using earlier methods.
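The random-immigrants idea mentioned above is simple to state: each generation, a few individuals are replaced by fresh random ones, preserving the diversity needed to re-converge after a discontinuous jump of the optimum. A toy sketch on a 1-D moving target (an assumed illustration, not the paper's recursive-filter benchmark):

```python
import random

def evolve(track_target, generations=150, pop_size=30, immigrants=5, seed=1):
    """Minimal real-valued GA with random immigrants: each generation, the
    `immigrants` worst slots are refilled with random newcomers, which keeps
    diversity when the optimum jumps discontinuously."""
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    errors = []
    for g in range(generations):
        target = track_target(g)
        pop.sort(key=lambda x: abs(x - target))   # fitness = -|x - target|
        errors.append(abs(pop[0] - target))       # best-of-generation error
        parents = pop[:pop_size // 2]
        children = [rng.choice(parents) + rng.gauss(0, 0.3)
                    for _ in range(pop_size - immigrants)]
        newcomers = [rng.uniform(-10, 10) for _ in range(immigrants)]
        pop = children + newcomers
    return errors

# Discontinuous time variation: the optimum jumps every 50 generations.
errors = evolve(lambda g: [-6.0, 4.0, 8.0][g // 50])
```

After each jump the error spikes and then decays again as immigrants and selection rediscover the new optimum, which mirrors the discontinuous-variation case where the paper found random immigrants to help.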
Lillo-Box, J; Bouy, H
2014-01-01
The Kepler mission has discovered thousands of planet candidates. Currently, some of them have already been discarded; more than 200 have been confirmed by follow-up observations, and several hundred have been validated. However, most of them are still awaiting confirmation. Thus, priorities (in terms of the probability of the candidate being a real planet) must be established for subsequent observations. The motivation of this work is to provide a set of isolated (good) host candidates to be further tested by other techniques. We identify close companions of the candidates that could have contaminated the light curve of the planet host. We used the AstraLux North instrument located at the 2.2 m telescope in the Calar Alto Observatory to obtain diffraction-limited images of 174 Kepler objects of interest. The lucky-imaging technique used in this work is compared to other AO and speckle imaging observations of Kepler planet host candidates. We define a new parameter, the blended source confidence level (B...
Performance Comparison of Constrained Artificial Bee Colony Algorithm
Directory of Open Access Journals (Sweden)
Soudeh Babaeizadeh
2015-06-01
This study aims to evaluate, analyze and compare the performance of the constrained Artificial Bee Colony (ABC) algorithms available in the literature. In recent decades, many different variants of the ABC algorithm have been suggested to solve Constrained Optimization Problems (COPs). However, to the best of the authors' knowledge, there are rarely comparative studies on the numerical performance of those algorithms. This study considers a set of well-known benchmark problems from the test problems of the Congress on Evolutionary Computation 2006 (CEC2006).
DURUSU, A.; NAKIR, I.; AJDER, A.; Ayaz, R.; Akca, H.; TANRIOVEN, M.
2014-01-01
Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels as they make the solar panels to operate at the maximum power point (MPP) whatever the changes of environmental conditions are. For this reason, they take an important place in the increase of PV system efficiency. MPPTs are driven by MPPT algorithms and a number of MPPT algorithms are proposed in the literature. The comparison of the MPPT algorithms in literature are ...
Advanced reconstruction algorithms for electron tomography: From comparison to combination
Energy Technology Data Exchange (ETDEWEB)
Goris, B. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Roelandts, T. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Batenburg, K.J. [Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1098XG Amsterdam (Netherlands); Heidari Mezerji, H. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Bals, S., E-mail: sara.bals@ua.ac.be [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium)
2013-04-15
In this work, the simultaneous iterative reconstruction technique (SIRT), the total variation minimization (TVM) reconstruction technique and the discrete algebraic reconstruction technique (DART) for electron tomography are compared, and the advantages and disadvantages are discussed. Furthermore, we describe how the result of a three dimensional (3D) reconstruction based on TVM can provide objective information that is needed as the input for a DART reconstruction. This approach results in a tomographic reconstruction of which the segmentation is carried out in an objective manner. - Highlights: ► A comparative study between different reconstruction algorithms for tomography is performed. ► Reconstruction algorithms that use prior knowledge about the specimen have a superior result. ► One reconstruction algorithm can provide the prior knowledge for a second algorithm.
Fast Quantum Search Algorithms in Protein Sequence Comparison - Quantum Biocomputing
Hollenberg, L C L
2000-01-01
Quantum search algorithms are considered in the context of protein sequence comparison in biocomputing. Given a sample protein sequence of length m (i.e. m residues), the problem considered is to find an optimal match in a large database containing N residues. Initially, Grover's quantum search algorithm is applied to a simple illustrative case - namely where the database forms a complete set of states over the 2^m basis states of an m qubit register, and thus is known to contain the exact sequence of interest. This example demonstrates explicitly the typical O(sqrt{N}) speedup on the classical O(N) requirements. An algorithm is then presented for the (more realistic) case where the database may contain repeat sequences, and may not necessarily contain an exact match to the sample sequence. In terms of minimizing the Hamming distance between the sample sequence and the database subsequences the algorithm finds an optimal alignment, in O(sqrt{N}) steps, by employing an extension of Grover's algorithm, due to Boyer, Brassard,...
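Grover's iteration, on which the record above builds, can be simulated classically on a small statevector: the oracle flips the sign of the marked amplitude, and the diffusion operator inverts all amplitudes about their mean. A minimal sketch (an illustrative N = 8 search, not the paper's sequence-alignment oracle):

```python
import math

def grover_success_probability(n_qubits, marked, iterations):
    """Classical statevector simulation of Grover search over N = 2**n states."""
    n = 2 ** n_qubits
    amp = [1 / math.sqrt(n)] * n        # uniform superposition
    for _ in range(iterations):
        amp[marked] = -amp[marked]      # oracle: flip the marked amplitude
        mean = sum(amp) / n             # diffusion: inversion about the mean
        amp = [2 * mean - a for a in amp]
    return amp[marked] ** 2

# For N = 8 the optimal iteration count is round(pi/4 * sqrt(8)) = 2.
p = grover_success_probability(3, marked=5, iterations=2)
```

Two iterations suffice to raise the success probability from 1/8 to about 0.945, illustrating the O(sqrt{N}) behaviour the abstract refers to.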
Comparison of parameter estimation algorithms in hydrological modelling
DEFF Research Database (Denmark)
Blasone, Roberta-Serena; Madsen, Henrik; Rosbjerg, Dan
2006-01-01
Local search methods have been applied successfully in calibration of simple groundwater models, but might fail in locating the optimum for models of increased complexity, due to the more complex shape of the response surface. Global search algorithms have been demonstrated to perform well for these types of models, although at a more expensive computational cost. The main purpose of this study is to investigate the performance of a global and a local parameter optimization algorithm, respectively the Shuffled Complex Evolution (SCE) algorithm and the gradient-based Gauss-Marquardt-Levenberg algorithm (implemented in the PEST software), when applied to a steady-state and a transient groundwater model. The results show that PEST can have severe problems in locating the global optimum and in being trapped in local regions of attraction. The global SCE procedure is, in general, more effective...
COMPARISON OF DIFFERENT SEGMENTATION ALGORITHMS FOR DERMOSCOPIC IMAGES
Directory of Open Access Journals (Sweden)
A.A. Haseena Thasneem
2015-05-01
Full Text Available This paper compares different algorithms for the segmentation of skin lesions in dermoscopic images. The basic segmentation algorithms compared are thresholding techniques (global and adaptive), region-based techniques (K-means, fuzzy C-means, expectation maximization and statistical region merging), contour models (active contour model and Chan-Vese model) and spectral clustering. Accuracy, sensitivity, specificity, border error, Hammoude distance, Hausdorff distance, MSE, PSNR and elapsed time metrics were used to evaluate the various segmentation techniques.
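Of the boundary-error metrics listed, the Hausdorff distance is the least standard to implement; a minimal NumPy sketch (illustrative, using toy point sets rather than real lesion boundaries) is:

```python
import numpy as np

def directed_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Max over points of a of the distance to the nearest point of b."""
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    return float(d.min(axis=1).max())

def hausdorff(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Hausdorff distance between two boundary point sets."""
    return max(directed_hausdorff(a, b), directed_hausdorff(b, a))
```

In segmentation evaluation, `a` and `b` would be the (row, col) coordinates of the automatic and manual lesion borders.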
An Empirical Comparison of Learning Algorithms for Nonparametric Scoring
Depecker, Marine; Clémençon, Stéphan; Vayatis, Nicolas
2011-01-01
The TreeRank algorithm was recently proposed as a scoring-based method based on recursive partitioning of the input space. This tree induction algorithm builds orderings by recursively optimizing the Receiver Operating Characteristic (ROC) curve through a one-step optimization procedure called LeafRank. One of the aims of this paper is an in-depth analysis of the empirical performance of the variants of the TreeRank/LeafRank method. Numerical experiments based on both artificial and real data sets...
A comparison of surface fitting algorithms for geophysical data
El Abbass, Tihama; Jallouli, C.; Albouy, Yves; Diament, M.
1990-01-01
This paper presents the results of a comparison of different surface fitting algorithms. For each of these algorithms (polynomial approximation, spline-Laplace combination, kriging, least-squares approximation, finite element method), the suitability for different data sets and the limits of application are discussed.
Comparison of Hierarchical Agglomerative Algorithms for Clustering Medical Documents
Directory of Open Access Journals (Sweden)
Rafa E. Al-Qutaish
2012-06-01
Full Text Available Extensive amount of data stored in medical documents require developing methods that help users to find what they are looking for effectively by organizing large amounts of information into a small number of meaningful clusters. The produced clusters contain groups of objects which are more similar to each other than to the members of any other group. Thus, the aim of high-quality document clustering algorithms is to determine a set of clusters in which the inter-cluster similarity is minimized and intra-cluster similarity is maximized. The most important feature in many clustering algorithms is treating the clustering problem as an optimization process, that is, maximizing or minimizing a particular clustering criterion function defined over the whole clustering solution. The only real difference between agglomerative algorithms is how they choose which clusters to merge. The main purpose of this paper is to compare different agglomerative algorithms based on the evaluation of the cluster quality produced by different hierarchical agglomerative clustering algorithms using different criterion functions for the problem of clustering medical documents. Our experimental results showed that the agglomerative algorithm that uses I1 as its criterion function for choosing which clusters to merge produced better cluster quality than the other criterion functions in terms of entropy and purity as external measures.
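The two external measures mentioned, entropy and purity, can be sketched as follows (an illustrative implementation; here `clusters` holds the true class labels of the documents assigned to each cluster):

```python
import math
from collections import Counter

def purity(clusters):
    """Weighted fraction of documents belonging to their cluster's majority class."""
    n = sum(len(c) for c in clusters)
    return sum(Counter(c).most_common(1)[0][1] for c in clusters) / n

def entropy(clusters, n_classes):
    """Weighted class entropy per cluster (log base = number of classes); 0 is best."""
    n = sum(len(c) for c in clusters)
    total = 0.0
    for c in clusters:
        h = -sum((v / len(c)) * math.log(v / len(c), n_classes)
                 for v in Counter(c).values())
        total += len(c) / n * h
    return total
```

A perfect clustering has purity 1.0 and entropy 0.0; random assignment drives purity down and entropy up.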
Teacher Candidates' Attitudes towards Inclusion Education and Comparison of Self-Compassion Levels
Aydin, Aydan; Kuzu, Seher
2013-01-01
This study was designed to compare teacher candidates' attitudes toward inclusion education in terms of several variables and their self-compassion levels. The sample consists of 547 fourth-year students of Marmara University's Ataturk Faculty of Education and Faculty of Science and Letters. In this study, a personnel…
Jelen, Birsen
2015-01-01
In recent years almost every newly opened government funded university in Turkey has established a music department where future music teachers are educated and piano is compulsory for every single music teacher candidate in Turkey. The aim of this research is to compare piano teaching instructors' and their students' perceptions about the current…
Hees, A; Guéna, J; Abgrall, M; Bize, S; Wolf, P
2016-08-01
We use 6 yrs of accurate hyperfine frequency comparison data of the dual rubidium and caesium cold atom fountain FO2 at LNE-SYRTE to search for a massive scalar dark matter candidate. Such a scalar field can induce harmonic variations of the fine structure constant, of the mass of fermions, and of the quantum chromodynamic mass scale, which will directly impact the rubidium/caesium hyperfine transition frequency ratio. We find no signal consistent with a scalar dark matter candidate but provide improved constraints on the coupling of the putative scalar field to standard matter. Our limits are complementary to previous results that were only sensitive to the fine structure constant and improve them by more than an order of magnitude when only a coupling to electromagnetism is assumed.
A Comparison of Improved Artificial Bee Colony Algorithms Based on Differential Evolution
Directory of Open Access Journals (Sweden)
Jianfeng Qiu
2013-10-01
Full Text Available The Artificial Bee Colony (ABC) algorithm is an active field of optimization based on swarm intelligence in recent years. Inspired by the mutation strategies used in the Differential Evolution (DE) algorithm, this paper introduces three types of strategies (“rand”, “best”, and “current-to-best”) and one or two disturbance vectors into the ABC algorithm. Although individual DE mutation strategies have been used in the ABC algorithm by some researchers on different occasions, there has not been a comprehensive application and comparison of the mutation strategies used in the ABC algorithm. In this paper, the improved ABC algorithms are analyzed on a set of test functions, including the rapidity of convergence. The results show that the improvements based on DE achieve better performance on the whole than the basic ABC algorithm.
Performance comparison of several optimization algorithms in matched field inversion
Institute of Scientific and Technical Information of China (English)
ZOU Shixin; YANG Kunde; MA Yuanliang
2004-01-01
Optimization efficiencies and mechanisms of simulated annealing, the genetic algorithm, differential evolution and downhill simplex differential evolution are compared and analyzed. Simulated annealing and the genetic algorithm use a directed random process to search the parameter space for an optimal solution. They include the ability to avoid local minima, but as no gradient information is used, the search may be relatively inefficient. Differential evolution uses the distance and azimuth information between individuals of a population to search the parameter space; the initial search is effective, but the search speed decreases quickly because the differential information between the individuals of the population vanishes. The local downhill simplex and global differential evolution methods are developed separately, and combined to produce a hybrid downhill simplex differential evolution algorithm. The hybrid algorithm is sensitive to gradients of the objective function and searches the parameter space effectively. These algorithms are applied to matched field inversion with synthetic data. The optimal values of the parameters, the final values of the objective function and the inversion time are presented and compared.
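As background for the comparison, a minimal DE/rand/1/bin minimizer can be sketched as follows (an illustrative textbook version, not the matched-field inversion code; the population size and control parameters F and CR are arbitrary defaults):

```python
import random

def differential_evolution(f, bounds, pop_size=20, F=0.8, CR=0.9, iters=200, seed=1):
    """Minimize f over box bounds [(lo, hi), ...] with DE/rand/1/bin."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    cost = [f(x) for x in pop]
    for _ in range(iters):
        for i in range(pop_size):
            # mutate: difference of two random members added to a third
            a, b, c = rng.sample([j for j in range(pop_size) if j != i], 3)
            jrand = rng.randrange(dim)  # ensure at least one mutated component
            trial = [pop[a][k] + F * (pop[b][k] - pop[c][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(dim)]
            trial = [min(max(t, lo), hi) for t, (lo, hi) in zip(trial, bounds)]
            fc = f(trial)
            if fc <= cost[i]:  # greedy one-to-one replacement
                pop[i], cost[i] = trial, fc
    best = min(range(pop_size), key=cost.__getitem__)
    return pop[best], cost[best]
```

The abstract's observation that the search slows as "differential information vanishes" corresponds to the `pop[b][k] - pop[c][k]` term shrinking as the population contracts.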
Comparison of Algorithms for an Electronic Nose in Identifying Liquors
Institute of Scientific and Technical Information of China (English)
Zhi-biao Shi; Tao Yu; Qun Zhao; Yang Li; Yu-bin Lan
2008-01-01
When the electronic nose is used to identify different varieties of distilled liquors, the pattern recognition algorithm is usually chosen on the basis of experience, which lacks a guiding principle. In this research, different brands of distilled spirits were identified using pattern recognition algorithms (principal component analysis and the artificial neural network), and the recognition rates of the different algorithms were compared. The recognition rate of the Back Propagation Neural Network (BPNN) is the highest. Owing to its slow convergence speed, the BPNN easily gets trapped in a local minimum, so a chaotic BPNN was tried to overcome this disadvantage. The convergence speed of the chaotic BPNN is 75.5 times faster than that of the BPNN.
Comparison of evolutionary algorithms for LPDA antenna optimization
Lazaridis, Pavlos I.; Tziris, Emmanouil N.; Zaharis, Zaharias D.; Xenos, Thomas D.; Cosmas, John P.; Gallion, Philippe B.; Holmes, Violeta; Glover, Ian A.
2016-08-01
A novel approach to broadband log-periodic antenna design is presented, where some of the most powerful evolutionary algorithms are applied and compared for the optimal design of wire log-periodic dipole arrays (LPDA) using Numerical Electromagnetics Code. The target is to achieve an optimal antenna design with respect to maximum gain, gain flatness, front-to-rear ratio (F/R) and standing wave ratio. The parameters of the LPDA optimized are the dipole lengths, the spacing between the dipoles, and the dipole wire diameters. The evolutionary algorithms compared are the Differential Evolution (DE), Particle Swarm (PSO), Taguchi, Invasive Weed (IWO), and Adaptive Invasive Weed Optimization (ADIWO). Superior performance is achieved by the IWO (best results) and PSO (fast convergence) algorithms.
Comparison of Supervised and Unsupervised Learning Algorithms for Pattern Classification
Directory of Open Access Journals (Sweden)
R. Sathya
2013-02-01
Full Text Available This paper presents a comparative account of unsupervised and supervised learning models and their pattern classification evaluations as applied to the higher education scenario. Classification plays a vital role in machine based learning algorithms and in the present study, we found that, though the error back-propagation learning algorithm as provided by supervised learning model is very efficient for a number of non-linear real-time problems, KSOM of unsupervised learning model, offers efficient solution and classification in the present study.
A comparison of cohesive features in IELTS writing of Chinese candidates and IELTS examiners
Institute of Scientific and Technical Information of China (English)
刘可
2012-01-01
This study aims at investigating cohesive ties applied in IELTS written texts produced by Chinese candidates and IELTS examiners, uncovering the differences in the use of cohesive features between the two groups, and analyzing whether the employment of cohesive ties is a possible problem in the Chinese candidates' writing. Six written texts are analyzed in the study: three IELTS essays by Chinese candidates and three by IELTS examiners. The findings show that there exist differences in the use of cohesive devices between the two groups. Compared to the IELTS examiners' writing, the Chinese candidates employed excessive conjunctions, with relatively fewer comparative and demonstrative reference ties used in their texts. Additionally, it appears that overusing repetition ties constitutes a potential problem in the candidates' writing. Implications and suggestions about raising learners' awareness and helping them to use cohesive devices effectively are discussed.
Direct Imaging of Extra-solar Planets - Homogeneous Comparison of Detected Planets and Candidates
Neuhäuser, Ralph; Schmidt, Tobias
2012-01-01
Searching the literature, we found 25 stars with directly imaged planets and candidates. We gathered photometric and spectral information for all these objects to derive their luminosities in a homogeneous way, taking a bolometric correction into account. Using theoretical evolutionary models, one can then estimate the mass from luminosity, temperature, and age. According to our mass estimates, all of them can have a mass below 25 Jupiter masses, so that they can be considered planets.
Diagnostic Accuracy Comparison of Artificial Immune Algorithms for Primary Headaches
Çelik, Ufuk; Yurtay, Nilüfer; Koç, Emine Rabia; Tepe, Nermin; Güllüoğlu, Halil; Ertaş, Mustafa
2015-01-01
The present study evaluated the diagnostic accuracy of immune system algorithms with the aim of classifying the primary types of headache that are not related to any organic etiology. They are divided into four types: migraine, tension, cluster, and other primary headaches. After we took this main objective into consideration, three different neurologists were required to fill in the medical records of 850 patients into our web-based expert system hosted on our project web site. In the evaluation process, Artificial Immune Systems (AIS) were used as the classification algorithms. The AIS are classification algorithms that are inspired by the biological immune system mechanism that involves significant and distinct capabilities. These algorithms simulate the specialties of the immune system such as discrimination, learning, and the memorizing process in order to be used for classification, optimization, or pattern recognition. According to the results, the accuracy level of the classifier used in this study reached a success continuum ranging from 95% to 99%, except for the inconvenient one that yielded 71% accuracy. PMID:26075014
Evaluation and Comparison of Motion Estimation Algorithms for Video Compression
Directory of Open Access Journals (Sweden)
Avinash Nayak
2013-08-01
Full Text Available Video compression has become an essential component of broadcast and entertainment media. Motion estimation and compensation techniques, which can effectively eliminate temporal redundancy between adjacent frames, have been widely applied in popular video compression coding standards such as MPEG-2 and MPEG-4. Traditional fast block matching algorithms are easily trapped in local minima, resulting in some degradation of video quality after decoding. In this paper various computing techniques for achieving a globally optimal motion estimation solution in video compression are evaluated. Zero-motion prejudgment is implemented to find static macro blocks (MB) which do not need to perform the remaining search, thus reducing the computational cost. The Adaptive Rood Pattern Search (ARPS) motion estimation algorithm is also adopted to reduce the motion vector overhead in frame prediction. The simulation results show that the ARPS algorithm is very effective in reducing the computational overhead and achieves very good Peak Signal to Noise Ratio (PSNR) values. This method significantly reduces the computational complexity involved in frame prediction and also yields the least prediction error in all video sequences. Thus the ARPS technique is more efficient than conventional search algorithms in video compression.
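For context, the exhaustive (full-search) block matching that fast algorithms such as ARPS approximate can be sketched as follows (illustrative only; the block size and search radius are arbitrary choices, and real codecs work on luma macroblocks):

```python
import numpy as np

def full_search_sad(ref, cur, block_xy, block=8, radius=4):
    """Motion vector (dy, dx) minimizing the sum of absolute differences (SAD)."""
    y, x = block_xy
    target = cur[y:y + block, x:x + block].astype(np.int32)
    best, best_mv = None, (0, 0)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            yy, xx = y + dy, x + dx
            if yy < 0 or xx < 0 or yy + block > ref.shape[0] or xx + block > ref.shape[1]:
                continue  # candidate block falls outside the reference frame
            sad = np.abs(ref[yy:yy + block, xx:xx + block].astype(np.int32) - target).sum()
            if best is None or sad < best:
                best, best_mv = sad, (dy, dx)
    return best_mv, best
```

Full search evaluates (2·radius+1)² candidates per block; ARPS-style patterns visit only a handful, which is where the computational savings reported above come from.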
A benchmark for comparison of cell tracking algorithms
M. Maška (Martin); V. Ulman (Vladimír); K. Svoboda; P. Matula (Pavel); P. Matula (Petr); C. Ederra (Cristina); A. Urbiola (Ainhoa); T. España (Tomás); R. Venkatesan (Rajkumar); D.M.W. Balak (Deepak); P. Karas (Pavel); T. Bolcková (Tereza); M. Štreitová (Markéta); C. Carthel (Craig); S. Coraluppi (Stefano); N. Harder (Nathalie); K. Rohr (Karl); K.E.G. Magnusson (Klas E.); J. Jaldén (Joakim); H.M. Blau (Helen); O.M. Dzyubachyk (Oleh); P. Křížek (Pavel); G.M. Hagen (Guy); D. Pastor-Escuredo (David); D. Jimenez-Carretero (Daniel); M.J. Ledesma-Carbayo (Maria); A. Muñoz-Barrutia (Arrate); E. Meijering (Erik); M. Kozubek (Michal); C. Ortiz-De-Solorzano (Carlos)
2014-01-01
Motivation: Automatic tracking of cells in multidimensional time-lapse fluorescence microscopy is an important task in many biomedical applications. A novel framework for objective evaluation of cell tracking algorithms has been established under the auspices of the IEEE International Sy
Comparison Between Four Detection Algorithms for GEO Objects
Yanagisawa, T.; Uetsuhara, M.; Banno, H.; Kurosaki, H.; Kinoshita, D.; Kitazawa, Y.; Hanada, T.
2012-09-01
Four detection algorithms for GEO objects are being developed in a collaboration between Kyushu University, IHI Corporation and JAXA. Each algorithm is designed to process CCD images to detect GEO objects. The first is a PC-based stacking method, developed at JAXA since 2000. Numerous CCD images are used to detect faint GEO objects below the limiting magnitude of a single CCD image. Sub-images are cropped from many CCD images to fit the movement of the objects, and a median image of all the sub-images is then created. Although this method can detect faint objects, it takes time to analyze. The second is the line-identifying technique, which also uses many CCD frames and finds any series of objects arrayed on a straight line from the first frame to the last frame. This analyzes data faster than the stacking method, but cannot detect objects as faint as the stacking method can. The third is the robust stacking method developed by IHI Corporation, which uses the average instead of the median to reduce analysis time. This has the same analysis speed as the line-identifying technique and better detection capability in terms of faintness. The fourth is an FPGA-based stacking method, which uses binarized images and a new algorithm installed on an FPGA board that reduces analysis time by about a factor of one thousand. All four algorithms analyzed the same sets of data to evaluate their advantages and disadvantages. By comparing their analysis times and results, the optimal usage of these algorithms is considered.
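The stacking method's core idea, shifting frames to cancel an assumed object motion and taking a pixel-wise median so that anything not moving at that rate is rejected, can be sketched as follows (illustrative; wrap-around shifts are used for brevity, and the motion rate would in practice be scanned over many candidate values):

```python
import numpy as np

def shift_and_stack(frames, velocity):
    """Shift each frame to cancel an assumed object motion, then take a median.

    frames: 2D arrays taken at t = 0, 1, 2, ...
    velocity: (vy, vx) assumed object motion in pixels per frame.
    """
    vy, vx = velocity
    shifted = [np.roll(f, shift=(-vy * t, -vx * t), axis=(0, 1))
               for t, f in enumerate(frames)]
    # the median keeps signal that aligns in every frame and rejects
    # stars, hot pixels and noise that appear at varying positions
    return np.median(shifted, axis=0)
```

An object moving at the assumed rate survives the median at full brightness, while a static star is smeared across positions and suppressed, which is why faint objects below the single-frame limiting magnitude become detectable.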
Parallel divide and conquer bio-sequence comparison based on Smith-Waterman algorithm
Institute of Scientific and Technical Information of China (English)
ZHANG Fa; QIAO Xiangzhen; LIU Zhiyong
2004-01-01
Tools for pair-wise bio-sequence alignment have long played a central role in computational biology, and several algorithms for bio-sequence alignment have been developed. The Smith-Waterman algorithm, based on dynamic programming, is considered the most fundamental alignment algorithm in bioinformatics. However, the existing parallel Smith-Waterman algorithm needs a large memory space, and this disadvantage limits the size of the sequences that can be handled. As biological sequence data expand rapidly, the memory requirement of the existing parallel Smith-Waterman algorithm has become a critical problem. To solve this problem, we develop a new parallel bio-sequence alignment algorithm using a divide-and-conquer strategy, named the PSW-DC algorithm. In our algorithm, we first partition the query sequence into several subsequences and distribute them to the processors, then compare each subsequence with the whole subject sequence in parallel using the Smith-Waterman algorithm to obtain interim results, and finally obtain the optimal alignment between the query sequence and the subject sequence through a special combination and extension method. The memory space required by our algorithm is reduced significantly in comparison with existing ones. We also develop a key technique of combination and extension, named the C&E method, to manipulate the interim results and obtain the final sequence alignment. We implement the new parallel bio-sequence alignment algorithm, PSW-DC, on a cluster parallel system.
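The serial Smith-Waterman recurrence that each processor applies to its subsequence can be sketched as follows (scoring parameters are illustrative defaults, not the paper's; linear rather than affine gap costs for brevity):

```python
def smith_waterman(a: str, b: str, match=2, mismatch=-1, gap=-1) -> int:
    """Best local alignment score via the standard O(len(a)*len(b)) DP."""
    rows, cols = len(a) + 1, len(b) + 1
    H = [[0] * cols for _ in range(rows)]
    best = 0
    for i in range(1, rows):
        for j in range(1, cols):
            diag = H[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            # clamping at 0 is what makes the alignment local
            H[i][j] = max(0, diag, H[i - 1][j] + gap, H[i][j - 1] + gap)
            best = max(best, H[i][j])
    return best
```

The full H matrix is the memory bottleneck the abstract refers to: splitting the query into subsequences shrinks each processor's matrix, at the cost of the combination-and-extension step needed to stitch the partial alignments back together.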
A comparison of updating algorithms for large $N$ reduced models
Pérez, Margarita García; Keegan, Liam; Okawa, Masanori; Ramos, Alberto
2015-01-01
We investigate Monte Carlo updating algorithms for simulating $SU(N)$ Yang-Mills fields on a single-site lattice, such as for the Twisted Eguchi-Kawai model (TEK). We show that performing only over-relaxation (OR) updates of the gauge links is a valid simulation algorithm for the Fabricius and Haan formulation of this model, and that this decorrelates observables faster than using heat-bath updates. We consider two different methods of implementing the OR update: either updating the whole $SU(N)$ matrix at once, or iterating through $SU(2)$ subgroups of the $SU(N)$ matrix. We find the same critical exponent in both cases, and only a slight difference between the two.
Ebtehaj, Isa; Bonakdari, Hossein
2014-01-01
The existence of sediments in wastewater greatly affects the performance of the sewer and wastewater transmission systems. Increased sedimentation in wastewater collection systems causes problems such as reduced transmission capacity and early combined sewer overflow. The article reviews the performance of the genetic algorithm (GA) and imperialist competitive algorithm (ICA) in minimizing the target function (mean square error of observed and predicted Froude number). To study the impact of bed load transport parameters, using four non-dimensional groups, six different models have been presented. Moreover, the roulette wheel selection method is used to select the parents. The ICA, with root mean square error (RMSE) = 0.007 and mean absolute percentage error (MAPE) = 3.5%, shows better results than the GA (RMSE = 0.007, MAPE = 5.6%) for the selected model. The ICA returns better results than the GA for all six models. Also, the results of these two algorithms were compared with multi-layer perceptron and existing equations. PMID:25429460
Comparison of Algorithms for Control of Loads for Voltage Regulation
Douglass, Philip James; Han, Xue; You, Shi
2014-01-01
Autonomous flexible loads can be utilized to regulate voltage on low voltage feeders. This paper compares two algorithms for controlling loads: a simple voltage droop, where load power consumption is varied in proportion to RMS voltage; and a normalized relative voltage droop, which modifies the simple voltage droop by subtracting the mean voltage value at the bus and dividing by the standard deviation. These two controllers are applied to hot water heaters simulated in a simple reside...
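A minimal sketch of the normalized controller's difference from the simple droop (the gain, the 0.5 offset, and the use of running statistics over all past samples are assumptions for illustration, not the paper's design):

```python
class NormalizedVoltageDroop:
    """Load power setpoint from the z-score of the measured RMS voltage.

    The simple droop uses (v - v_nominal) directly; the normalized variant
    replaces it with (v - mean) / std of the locally observed voltage history.
    """

    def __init__(self, p_rated: float, gain: float = 0.25):
        self.p_rated, self.gain = p_rated, gain
        self.samples = []

    def update(self, v_rms: float) -> float:
        self.samples.append(v_rms)
        mean = sum(self.samples) / len(self.samples)
        var = sum((s - mean) ** 2 for s in self.samples) / len(self.samples)
        std = var ** 0.5 or 1.0  # guard against zero spread at start-up
        z = (v_rms - mean) / std
        frac = min(max(0.5 + self.gain * z, 0.0), 1.0)
        return self.p_rated * frac
```

Normalizing by local statistics lets loads at different points of a feeder, which see systematically different voltage levels, respond to relative rather than absolute deviations.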
A comparison of fitness scaling methods in evolutionary algorithms
Bertone, E.; Alfonso, Hugo; Gallard, Raúl Hector
1999-01-01
Proportional selection (PS), as a selection mechanism for mating (reproduction with emphasis), selects individuals according to their fitness. Consequently, the probability of an individual obtaining a number of offspring is directly proportional to its fitness value. This can lead to a loss of selective pressure in the final stages of the evolutionary process, degrading the search. This presentation discusses performance results of evolutionary algorithms optimizing two highly multimodal ...
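A common remedy studied in this literature is linear fitness scaling combined with roulette-wheel (proportional) selection; a sketch under the usual textbook formulation (the scaling target c = 2, i.e. the best individual gets twice the average's expected offspring, is an illustrative default):

```python
import random

def linear_scale(fitnesses, c=2.0):
    """Linear scaling f' = a*f + b preserving the mean, with max = c * mean."""
    mean_f, max_f = sum(fitnesses) / len(fitnesses), max(fitnesses)
    if max_f == mean_f:
        return list(fitnesses)  # flat population: nothing to rescale
    a = (c - 1.0) * mean_f / (max_f - mean_f)
    b = mean_f * (1.0 - a)
    return [max(a * f + b, 0.0) for f in fitnesses]

def roulette_select(fitnesses, rng):
    """Proportional selection: index i chosen with probability f_i / sum(f)."""
    r = rng.random() * sum(fitnesses)
    acc = 0.0
    for i, f in enumerate(fitnesses):
        acc += f
        if acc >= r:
            return i
    return len(fitnesses) - 1
```

Scaling keeps the best-to-average ratio at c even when raw fitnesses have nearly converged, which is exactly the late-stage loss of selective pressure the abstract describes.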
Comparison of four Adaboost algorithm based artificial neural networks in wind speed predictions
International Nuclear Information System (INIS)
Highlights: • Four hybrid algorithms are proposed for the wind speed decomposition. • The Adaboost algorithm is adopted to provide a hybrid training framework. • MLP neural networks are built to do the forecasting computation. • Four important network training algorithms are included in the MLP networks. • All the proposed hybrid algorithms are suitable for wind speed predictions. - Abstract: The technology of wind speed prediction is important to guarantee the safety of wind power utilization. In this paper, four different hybrid methods are proposed for high-precision multi-step wind speed predictions based on the Adaboost (Adaptive Boosting) algorithm and MLP (Multilayer Perceptron) neural networks. In the hybrid Adaboost–MLP forecasting architecture, four important algorithms are adopted for the training and modeling of the MLP neural networks: the GD-ALR-BP algorithm, the GDM-ALR-BP algorithm, the CG-BP-FR algorithm and the BFGS algorithm. The aim of the study is to investigate how much the Adaboost algorithm's optimization improves the forecasting of the MLP neural networks under the various training algorithms. The hybrid models in the performance comparison include Adaboost–GD-ALR-BP–MLP, Adaboost–GDM-ALR-BP–MLP, Adaboost–CG-BP-FR–MLP, Adaboost–BFGS–MLP, GD-ALR-BP–MLP, GDM-ALR-BP–MLP, CG-BP-FR–MLP and BFGS–MLP. The experimental results show that: (1) the proposed hybrid Adaboost–MLP forecasting architecture is effective for wind speed predictions; (2) the Adaboost algorithm has promoted the forecasting performance of the MLP neural networks considerably; (3) among the proposed Adaboost–MLP forecasting models, the Adaboost–CG-BP-FR–MLP model has the best performance; and (4) the improvement of the MLP neural networks by the Adaboost algorithm decreases step by step in the following sequence of training algorithms: GD-ALR-BP, GDM-ALR-BP, CG-BP-FR and BFGS
Comparison with reconstruction algorithms in magnetic induction tomography.
Han, Min; Cheng, Xiaolin; Xue, Yuyan
2016-05-01
Magnetic induction tomography (MIT) is an imaging technology that uses the principle of electromagnetic detection to measure the conductivity distribution. In this research, we make an effort to improve the quality of MIT image reconstruction, covering both the forward problem and the image reconstruction. With respect to the forward problem, the variational finite element method is adopted. We transform the solution of a nonlinear partial differential equation into linear equations by using field subdivision and an appropriate interpolation function so that the voltage data of the sensing coils can be calculated. With respect to the image reconstruction, a modified iterative Newton-Raphson (NR) algorithm is presented in order to improve the quality of the image. In the iterative NR, a weighting matrix and L1-norm regularization are introduced to overcome the drawbacks of large estimation errors and poor stability of the reconstructed image. On the other hand, within the incomplete-data framework of the expectation maximization (EM) algorithm, the image reconstruction can be converted to an EM problem through the likelihood function, improving the under-determined problem. In the EM, missing data are introduced, and the measurement data and the sensitivity matrix are compensated to overcome the drawback that the number of measurement voltages is far less than the number of unknowns. In addition to the two aspects above, image segmentation is also used to make the lesion more flexible and adaptive to patients' real conditions, which provides a theoretical reference for the development of the MIT technique in clinical applications. The results show that solving the forward problem with the variational finite element method can provide the measurement voltage data for image reconstruction, and the improved iterative NR method and EM algorithm can enhance the image
A COMPARISON BETWEEN TWO ALGORITHMS FOR THE RETRIEVAL OF SOIL MOISTURE USING AMSR-E DATA
Simonetta Paloscia; Emanuele Santi; Simone Pettinato; Iliana Mladenova; Tom Jackson; Michael Cosh
2015-01-01
A comparison between two algorithms for estimating soil moisture with microwave satellite data was carried out by using the datasets collected on the four Agricultural Research Service (ARS) watershed sites in the US from 2002 to 2009. These sites collectively represent a wide range of ground conditions and precipitation regimes (from natural to agricultural surfaces and from desert to humid regions) and provide long-term in-situ data. One of the algorithms is the artificial neural network-ba...
Absorption, refraction and scattering in analyzer-based imaging: comparison of different algorithms.
Diemoz, P. C.; Coan, P.; Glaser, C; Bravin, A.
2010-01-01
Many mathematical methods have been so far proposed in order to separate absorption, refraction and ultra-small angle scattering information in phase-contrast analyzer-based images. These algorithms all combine a given number of images acquired at different positions of the crystal analyzer along its rocking curve. In this paper a comprehensive quantitative comparison between five of the most widely used phase extraction algorithms based on the geometrical optics approximation is presented: t...
International Nuclear Information System (INIS)
We present a new quasi-stellar object (QSO) selection algorithm using a Support Vector Machine, a supervised classification method, on a set of extracted time series features including period, amplitude, color, and autocorrelation value. We train a model that separates QSOs from variable stars, non-variable stars, and microlensing events using 58 known QSOs, 1629 variable stars, and 4288 non-variables in the MAssive Compact Halo Object (MACHO) database as a training set. To estimate the efficiency and the accuracy of the model, we perform a cross-validation test using the training set. The test shows that the model correctly identifies ∼80% of known QSOs with a 25% false-positive rate. The majority of the false positives are Be stars. We applied the trained model to the MACHO Large Magellanic Cloud (LMC) data set, which consists of 40 million light curves, and found 1620 QSO candidates. During the selection none of the 33,242 known MACHO variables were misclassified as QSO candidates. In order to estimate the true false-positive rate, we crossmatched the candidates with astronomical catalogs including the Spitzer Surveying the Agents of a Galaxy's Evolution LMC catalog and a few X-ray catalogs. The results further suggest that the majority of the candidates, more than 70%, are QSOs.
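Two of the extracted time-series features mentioned, the autocorrelation value and the period, can be sketched as follows (an illustrative estimator on an evenly sampled light curve, not the paper's pipeline; the peak-picking heuristic is an assumption):

```python
import numpy as np

def autocorrelation(x, lag: int) -> float:
    """Sample autocorrelation of a light curve at a given positive lag."""
    x = np.asarray(x, dtype=float)
    x = x - x.mean()
    return float(np.dot(x[:-lag], x[lag:]) / np.dot(x, x))

def dominant_period(x) -> int:
    """Crude period estimate: skip the initial positive run of the
    autocorrelation, then return the lag of its highest remaining peak."""
    lags = list(range(1, len(x) // 2))
    ac = [autocorrelation(x, lag) for lag in lags]
    start = next((i for i, v in enumerate(ac) if v < 0), 0)
    return lags[max(range(start, len(ac)), key=lambda i: ac[i])]
```

Variable stars yield strong periodic autocorrelation peaks, while QSO variability is stochastic and aperiodic, which is why features like these carry discriminating power for the SVM.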
Comparison of Greedy Algorithms for Decision Tree Optimization
Alkhalid, Abdulaziz
2013-01-01
This chapter is devoted to the study of 16 types of greedy algorithms for decision tree construction. The dynamic programming approach is used for construction of optimal decision trees. Optimization is performed relative to minimal values of average depth, depth, number of nodes, number of terminal nodes, and number of nonterminal nodes of decision trees. We compare average depth, depth, number of nodes, number of terminal nodes and number of nonterminal nodes of constructed trees with minimum values of the considered parameters obtained based on a dynamic programming approach. We report experiments performed on data sets from UCI ML Repository and randomly generated binary decision tables. As a result, for depth, average depth, and number of nodes we propose a number of good heuristics. © Springer-Verlag Berlin Heidelberg 2013.
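A minimal greedy builder for binary decision tables, reporting two of the five optimization criteria mentioned above (depth and number of nodes). The split heuristic below is a generic misclassification count, a sketch only, not any specific one of the chapter's 16 greedy algorithms:

```python
def build_tree(rows, labels):
    """Greedily build a decision tree over binary attributes (nested dicts)."""
    if len(set(labels)) == 1:
        return {"leaf": labels[0]}
    # Only consider attributes that actually split the current table.
    candidates = [a for a in range(len(rows[0]))
                  if len({row[a] for row in rows}) > 1]
    if not candidates:
        return {"leaf": max(set(labels), key=labels.count)}  # majority leaf
    def impurity(attr):
        # Number of rows misclassified if each branch predicts its majority label.
        score = 0
        for val in (0, 1):
            sub = [lbl for row, lbl in zip(rows, labels) if row[attr] == val]
            if sub:
                score += len(sub) - max(sub.count(c) for c in set(sub))
        return score
    best = min(candidates, key=impurity)
    node = {"attr": best}
    for val in (0, 1):
        part = [(r, l) for r, l in zip(rows, labels) if r[best] == val]
        if part:
            sub_rows, sub_labels = zip(*part)
            node[val] = build_tree(list(sub_rows), list(sub_labels))
    return node

def predict(tree, row):
    while "leaf" not in tree:
        tree = tree[row[tree["attr"]]]
    return tree["leaf"]

def depth(tree):
    return 0 if "leaf" in tree else 1 + max(depth(tree[v]) for v in (0, 1) if v in tree)

def n_nodes(tree):
    return 1 if "leaf" in tree else 1 + sum(n_nodes(tree[v]) for v in (0, 1) if v in tree)

rows = [[0, 0], [0, 1], [1, 0], [1, 1]]
labels = [0, 0, 0, 1]                      # the AND function
tree = build_tree(rows, labels)
print(depth(tree), n_nodes(tree))          # 2 5
```

The dynamic programming comparison in the chapter asks how far such greedy trees are from the provably minimal depth or node count over all possible trees.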
Energy Technology Data Exchange (ETDEWEB)
Carroll, Mark C
2014-09-01
High-purity graphite is the core structural material of choice in the Very High Temperature Reactor (VHTR) design, a graphite-moderated, helium-cooled configuration that is capable of producing thermal energy for power generation as well as process heat for industrial applications that require temperatures higher than the outlet temperatures of present nuclear reactors. The Baseline Graphite Characterization Program is endeavoring to minimize the conservative estimates of as-manufactured mechanical and physical properties in nuclear-grade graphites by providing comprehensive data that captures the level of variation in measured values. In addition to providing a thorough comparison between these values in different graphite grades, the program is also carefully tracking individual specimen source, position, and orientation information in order to provide comparisons both in specific properties and in the associated variability between different lots, different billets, and different positions from within a single billet. This report is a preliminary comparison between each of the grades of graphite that are considered “candidate” grades from four major international graphite producers. These particular grades (NBG-18, NBG-17, PCEA, IG-110, and 2114) are the major focus of the evaluations presently underway on irradiated graphite properties through the series of Advanced Graphite Creep (AGC) experiments. NBG-18, a medium-grain pitch coke graphite from SGL from which billets are formed via vibration molding, was the favored structural material in the pebble-bed configuration. NBG-17 graphite from SGL is essentially NBG-18 with the grain size reduced by a factor of two. PCEA, petroleum coke graphite from GrafTech with a similar grain size to NBG-17, is formed via an extrusion process and was initially considered the favored grade for the prismatic layout. IG-110 and 2114, from Toyo Tanso and Mersen (formerly Carbone Lorraine), respectively, are fine-grain grades
International Nuclear Information System (INIS)
Development of attenuated mutants for use as vaccines is in progress for other viruses, including influenza, rotavirus, varicella-zoster, cytomegalovirus, and hepatitis-A virus (HAV). Attenuated viruses may be derived from naturally occurring mutants that infect human or nonhuman hosts. Alternatively, attenuated mutants may be generated by passage of wild-type virus in cell culture. Production of attenuated viruses in cell culture is a laborious and empiric process. Despite previous empiric successes, understanding the molecular basis for attenuation of vaccine viruses could facilitate future development and use of live-virus vaccines. Comparison of the complete nucleotide sequences of wild-type (virulent) and vaccine (attenuated) viruses has been reported for polioviruses and yellow fever virus. Here, the authors compare the nucleotide sequence of wild-type HAV HM-175 with that of a candidate vaccine derivative
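Once the two genomes are aligned, comparing a wild-type and a vaccine-candidate virus reduces to listing the nucleotide substitutions between them. A sketch on toy fragments, not the actual HM-175 sequence:

```python
def substitutions(wild_type, attenuated):
    """List (position, wild-type base, mutant base) for aligned sequences."""
    if len(wild_type) != len(attenuated):
        raise ValueError("sequences must be aligned to equal length")
    return [(i, a, b)
            for i, (a, b) in enumerate(zip(wild_type, attenuated))
            if a != b]

# Toy aligned fragments, purely illustrative.
wt  = "ATGGCAATTC"
vac = "ATGACAATCC"
print(substitutions(wt, vac))  # [(3, 'G', 'A'), (8, 'T', 'C')]
```

Real genome comparisons additionally need an alignment step to handle insertions and deletions; this sketch assumes the sequences are already aligned base-for-base.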
VennPainter: A Tool for the Comparison and Identification of Candidate Genes Based on Venn Diagrams.
Lin, Guoliang; Chai, Jing; Yuan, Shuo; Mai, Chao; Cai, Li; Murphy, Robert W; Zhou, Wei; Luo, Jing
2016-01-01
VennPainter is a program for depicting unique and shared sets of gene lists and generating Venn diagrams, using the Qt C++ framework. The software produces Classic Venn, Edwards' Venn and Nested Venn diagrams and allows for eight sets in graph mode and 31 sets in data processing mode only. In comparison, previous programs produce Classic Venn and Edwards' Venn diagrams and allow for a maximum of six sets. The software incorporates user-friendly features and works in Windows, Linux and Mac OS. Its graphical interface does not require a user to have programming skills. Users can modify diagram content for up to eight datasets because of the Scalable Vector Graphics output. VennPainter can provide output results in vertical, horizontal and matrix formats, which facilitates sharing datasets as required for further identification of candidate genes. Users can obtain gene lists from shared sets by clicking the numbers on the diagram. Thus, VennPainter is an easy-to-use, highly efficient, cross-platform and powerful program that provides a more comprehensive tool for identifying candidate genes and visualizing the relationships among genes or gene families in comparative analysis. PMID:27120465
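The core computation behind any Venn diagram is partitioning the input gene lists into exclusive intersection regions. A sketch for three toy sets (VennPainter itself scales to 31 sets in data processing mode); the gene names are made up:

```python
from itertools import product

def venn_regions(sets):
    """Map each membership pattern (1 = in set i) to the genes exclusive to it."""
    names = list(sets)
    regions = {}
    for pattern in product((0, 1), repeat=len(names)):
        if not any(pattern):
            continue
        inside = [sets[n] for n, bit in zip(names, pattern) if bit]
        outside = [sets[n] for n, bit in zip(names, pattern) if not bit]
        # Genes in every "inside" set and no "outside" set.
        regions[pattern] = set.intersection(*inside) - set().union(*outside)
    return regions

gene_sets = {  # toy gene lists, purely illustrative
    "A": {"g1", "g2", "g3"},
    "B": {"g2", "g3", "g4"},
    "C": {"g3", "g5"},
}
regions = venn_regions(gene_sets)
print(regions[(1, 1, 1)])  # genes shared by all three sets: {'g3'}
```

Clicking a number in VennPainter's diagram corresponds to looking up one such region and listing its genes.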
COMPARISON OF TDOA LOCATION ALGORITHMS WITH DIRECT SOLUTION METHOD
Institute of Scientific and Technical Information of China (English)
Li Chun; Liu Congfeng; Liao Guisheng
2011-01-01
For Time Difference Of Arrival (TDOA) location in a multi-ground-station scene, two direct solution methods are proposed to solve for the target position. The solutions are carried out in rectangular and polar coordinates. In rectangular coordinates, the radial range between the target and the reference station is solved first, and the target location is then calculated from it. In polar coordinates, the azimuth between the target and the reference station is solved first, then the radial range, and finally the target location. Simulations and a detailed comparison are given, showing that the polar solution resolves position ambiguity better than the rectangular-coordinate method.
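The rectangular-coordinate direct solution can be illustrated by linearizing the range-difference equations: with the reference station at the origin, each non-reference station contributes one linear equation in the unknowns (x, y, d0), where d0 is the radial range to the reference station. The geometry below is hypothetical, and this is a generic sketch of the linearization, not the paper's exact derivation:

```python
import math

def tdoa_solve(stations, deltas):
    """Direct 2-D TDOA solution with the reference station at the origin.

    stations: three non-reference stations (x, y); deltas: range differences
    d_i - d_0. Squaring |p - s_i| = d_0 + delta_i and cancelling |p|^2 gives,
    per station, 2 s_i . p + 2 delta_i d_0 = |s_i|^2 - delta_i^2, a 3x3
    linear system in (x, y, d_0) solved here by Gaussian elimination.
    """
    n = 3
    M = []
    for (sx, sy), d in zip(stations, deltas):
        M.append([2 * sx, 2 * sy, 2 * d, sx * sx + sy * sy - d * d])
    # Gaussian elimination with partial pivoting.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x[0], x[1]  # target position; x[2] is the radial range d_0

# Hypothetical scene: target at (3, 4), reference station at the origin.
target = (3.0, 4.0)
stations = [(10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
d0 = math.hypot(*target)
deltas = [math.hypot(target[0] - sx, target[1] - sy) - d0 for sx, sy in stations]
print(tdoa_solve(stations, deltas))  # recovers approximately (3.0, 4.0)
```

With noise-free differences the linear system is exact, so the solver returns the true position up to floating-point error; with measured TDOAs one would solve it in a least-squares sense.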
Performance Comparison of Total Variation based Image Regularization Algorithms
Directory of Open Access Journals (Sweden)
Kamalaveni Vanjigounder
2016-07-01
Full Text Available The calculus of variations is commonly used to find an unknown function that minimizes or maximizes a functional. Problems of retrieving the original image from a degraded one are called inverse problems; the most basic example is image denoising. Variational methods, formulated as optimization problems, provide a good solution to image denoising. Three such variational methods for image denoising, the Tikhonov model, the ROF model and the Total Variation-L1 model, are studied and implemented. The performance of these variational algorithms is analyzed for different values of the regularization parameter. It is found that a small value of the regularization parameter yields better noise removal, whereas a large value preserves sharp edges well. The Euler-Lagrange equation corresponding to the energy functional used in the variational methods is solved using the gradient descent method, and the resulting partial differential equation is solved using Euler's forward finite difference method. The quality metrics are computed and the results are compared in this paper.
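The gradient-descent solution of the Euler-Lagrange equation can be sketched for the Tikhonov model on a 1-D signal (the ROF and TV-L1 models differ only in the smoothness term); the signal, noise level, and parameter values are illustrative:

```python
import random

def tikhonov_denoise(f, lam, steps=500, dt=0.2):
    """Explicit gradient descent for the 1-D Tikhonov energy
    E(u) = sum_i (u_{i+1} - u_i)^2 + lam * sum_i (u_i - f_i)^2,
    i.e. the discrete Euler-Lagrange flow u_t = u'' - lam (u - f)
    with replicated (Neumann) boundaries.
    """
    u = list(f)
    n = len(u)
    for _ in range(steps):
        new = u[:]
        for i in range(n):
            left = u[i - 1] if i > 0 else u[0]
            right = u[i + 1] if i < n - 1 else u[-1]
            new[i] = u[i] + dt * ((left - 2 * u[i] + right) - lam * (u[i] - f[i]))
        u = new
    return u

random.seed(0)
clean = [1.0] * 20 + [3.0] * 20                      # piecewise-constant signal
noisy = [v + random.gauss(0, 0.3) for v in clean]
denoised = tikhonov_denoise(noisy, lam=0.3)
mse = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
print(mse(denoised, clean) < mse(noisy, clean))      # smoothing reduces the error
```

The quadratic smoothness term blurs the step edge, which is exactly the behavior the ROF and TV-L1 models were introduced to avoid, and raising `lam` (weighting the data term more) trades noise removal for edge preservation, mirroring the regularization-parameter study described above.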
Effective Comparison and Evaluation of DES and Rijndael Algorithm (AES)
Directory of Open Access Journals (Sweden)
Penchalaiah, N.
2010-08-01
Full Text Available This paper discusses the effective coding of the Rijndael algorithm, the Advanced Encryption Standard (AES), in the hardware description language Verilog. In this work we analyze the structure and design of the new AES, following three criteria: (a) resistance against all known attacks; (b) speed and code compactness on a wide range of platforms; and (c) design simplicity; as well as its similarities and dissimilarities with other symmetric ciphers. On the other side, the principal advantages of the new AES with respect to DES, as well as its limitations, are investigated. Thus, for example, the fact that the new cipher and its inverse use different components, which practically eliminates the possibility of weak and semi-weak keys, as exist for DES, and the non-linearity of the key expansion, which practically eliminates the possibility of equivalent keys, are two of the principal advantages of the new cipher. Finally, the implementation aspects of the Rijndael cipher and its inverse are treated. Thus, although Rijndael is well suited to efficient implementation on a wide range of processors and in dedicated hardware, we have concentrated our study on 8-bit processors, typical of current smart cards, and on 32-bit processors, typical of PCs.
Genetic Algorithms for a Parameter Estimation of a Fermentation Process Model: A Comparison
Directory of Open Access Journals (Sweden)
Olympia Roeva
2005-12-01
Full Text Available In this paper the problem of parameter estimation using genetic algorithms is examined. A case study considering the estimation of 6 parameters of a nonlinear dynamic model of E. coli fermentation is presented as a test problem. The parameter estimation problem is stated as a nonlinear programming problem subject to nonlinear differential-algebraic constraints. This problem is known to be frequently ill-conditioned and multimodal. Thus, traditional (gradient-based) local optimization methods fail to arrive at satisfactory solutions. To overcome their limitations, the use of different genetic algorithms as stochastic global optimization methods is explored. These algorithms have proved to be very suitable for the optimization of highly nonlinear problems with many variables, and their robustness makes them advantageous for parameter identification of fermentation models. A comparison between simple, modified and multi-population genetic algorithms is presented. The best result is obtained using the modified genetic algorithm. The considered algorithms converge to very similar cost values, but the modified algorithm is several times faster than the other two.
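A minimal GA for model-parameter estimation can be sketched with a 2-parameter exponential-growth model standing in for the 6-parameter E. coli model; the operators (truncation selection, blend crossover, Gaussian mutation) and all constants are illustrative, not those of the compared algorithms:

```python
import math
import random

random.seed(1)

# Toy stand-in for the fermentation model: biomass x(t) = x0 * exp(mu * t).
def model(params, t):
    x0, mu = params
    return x0 * math.exp(mu * t)

TRUE = (1.0, 0.5)
DATA = [(t, model(TRUE, t)) for t in range(6)]   # synthetic "measurements"

def cost(params):
    """Sum of squared errors between model output and data."""
    return sum((model(params, t) - x) ** 2 for t, x in DATA)

def simple_ga(pop_size=40, generations=150, bounds=((0.1, 2.0), (0.0, 1.0))):
    rand_ind = lambda: [random.uniform(lo, hi) for lo, hi in bounds]
    pop = [rand_ind() for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        survivors = pop[: pop_size // 2]                  # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            child = [(x + y) / 2 for x, y in zip(a, b)]   # blend crossover
            for i, (lo, hi) in enumerate(bounds):         # Gaussian mutation
                child[i] = min(hi, max(lo, child[i] + random.gauss(0, 0.05)))
            children.append(child)
        pop = survivors + children                        # elitist replacement
    return min(pop, key=cost)

best = simple_ga()
print(best)  # best-fit (x0, mu)
```

Because survivors are carried over unchanged, the best-so-far cost decreases monotonically; the modified and multi-population variants compared in the paper differ mainly in how selection and migration are organized.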
V.A.F. Dallagnol (V. A F); J.H. van den Berg (Jan); L. Mous (Lonneke)
2009-01-01
In this paper, a comparison is shown of the application of particle swarm optimization and genetic algorithms to portfolio management, in a constrained portfolio optimization problem where no short sales are allowed. The objective function to be minimized is the value at risk calculat...
A comparison between two algorithms for the retrieval of soil moisture using AMSR-E data
A comparison between two algorithms for estimating soil moisture with microwave satellite data was carried out by using the datasets collected on the four Agricultural Research Service (ARS) watershed sites in the US from 2002 to 2009. These sites collectively represent a wide range of ground condit...
Tang, Jie; Nett, Brian E.; Chen, Guang-Hong
2009-10-01
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution were introduced to objectively evaluate reconstruction performance. A comparison is presented between the three algorithms for a constant undersampling factor comparing different algorithms at several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions of the measurements from our studies are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
A comparison of three self-tuning control algorithms developed for the Bristol-Babcock controller
Energy Technology Data Exchange (ETDEWEB)
Tapp, P.A.
1992-04-01
A brief overview of adaptive control methods relating to the design of self-tuning proportional-integral-derivative (PID) controllers is given. The methods discussed include gain scheduling, self-tuning, auto-tuning, and model-reference adaptive control systems. Several process identification and parameter adjustment methods are discussed. Characteristics of the two most common types of self-tuning controllers implemented by industry (i.e., pattern recognition and process identification) are summarized. The substance of the work is a comparison of three self-tuning proportional-plus-integral (STPI) control algorithms developed to work in conjunction with the Bristol-Babcock PID control module. The STPI control algorithms are based on closed-loop cycling theory, pattern recognition theory, and model-based theory. A brief theory of operation of these three STPI control algorithms is given. Details of the process simulations developed to test the STPI algorithms are given, including an integrating process, a first-order system, a second-order system, a system with initial inverse response, and a system with variable time constant and delay. The STPI algorithms' performance with regard to both setpoint changes and load disturbances is evaluated, and their robustness is compared. The dynamic effects of process deadtime and noise are also considered. Finally, the limitations of each of the STPI algorithms is discussed, some conclusions are drawn from the performance comparisons, and a few recommendations are made. 6 refs.
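The non-adaptive core shared by all three STPI algorithms is a discrete PI loop. A sketch on a first-order process with hand-picked gains and time constants (all values hypothetical; the self-tuning schemes described above adjust kp and ki online rather than fixing them):

```python
def simulate_pi(kp, ki, setpoint=1.0, steps=200, dt=0.1, tau=1.0):
    """Discrete PI controller driving a first-order process dy/dt = (u - y)/tau."""
    y, integral = 0.0, 0.0
    history = []
    for _ in range(steps):
        error = setpoint - y
        integral += error * dt
        u = kp * error + ki * integral     # PI control law
        y += dt * (u - y) / tau            # explicit Euler process update
        history.append(y)
    return history

out = simulate_pi(kp=2.0, ki=1.0)
print(abs(out[-1] - 1.0) < 0.02)  # the loop settles near the setpoint
```

A self-tuner wraps a loop like this with an identification step (e.g. observing closed-loop cycling or fitting a process model) that recomputes kp and ki as the process drifts.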
Lu, Jing; Chen, Lei; Yin, Jun; Huang, Tao; Bi, Yi; Kong, Xiangyin; Zheng, Mingyue; Cai, Yu-Dong
2016-01-01
Lung cancer, characterized by uncontrolled cell growth in the lung tissue, is the leading cause of global cancer deaths. Until now, effective treatment of this disease is limited. Many synthetic compounds have emerged with the advancement of combinatorial chemistry. Identification of effective lung cancer candidate drug compounds among them is a great challenge. Thus, it is necessary to build effective computational methods that can assist us in selecting for potential lung cancer drug compounds. In this study, a computational method was proposed to tackle this problem. The chemical-chemical interactions and chemical-protein interactions were utilized to select candidate drug compounds that have close associations with approved lung cancer drugs and lung cancer-related genes. A permutation test and K-means clustering algorithm were employed to exclude candidate drugs with low possibilities to treat lung cancer. The final analysis suggests that the remaining drug compounds have potential anti-lung cancer activities and most of them have structural dissimilarity with approved drugs for lung cancer.
Gallenne, A; Kervella, P; Monnier, J D; Schaefer, G H; Baron, F; Breitfelder, J; Bouquin, J B Le; Roettenbacher, R M; Gieren, W; Pietrzynski, G; McAlister, H; Brummelaar, T ten; Sturmann, J; Sturmann, L; Turner, N; Ridgway, S; Kraus, S
2015-01-01
Long-baseline interferometry is an important technique to spatially resolve binary or multiple systems in close orbits. By combining several telescopes and spectrally dispersing the light, it is possible to detect faint components around bright stars. Aims. We provide a rigorous and detailed method to search for high-contrast companions around stars, determine the detection level, and estimate the dynamic range from interferometric observations. We developed the code CANDID (Companion Analysis and Non-Detection in Interferometric Data), a set of Python tools that allows us to search systematically for point-source, high-contrast companions and estimate the detection limit. The search procedure is performed on an N x N grid of fits, whose minimum required resolution is estimated a posteriori. It includes a tool to estimate the detection level of the companion in numbers of sigma. The code CANDID also incorporates a robust method to set a 3σ detection limit on the flux ratio, which is based on an a...
Directory of Open Access Journals (Sweden)
Devesh Batra
2014-11-01
Full Text Available The Internet paved the way for information sharing all over the world decades ago, and its popularity for the distribution of data has spread like wildfire ever since. Data in the form of images, sounds, animations and videos is gaining users' preference over plain text all across the globe. Despite unprecedented progress in the fields of data storage, computing speed and data transmission speed, the demands of available data and its size (due to the increase in both quality and quantity) continue to overpower the supply of resources. One reason for this may be how uncompressed data is compressed before being sent across the network. This paper compares the two most widely used training algorithms for multilayer perceptron (MLP) image compression: the Levenberg-Marquardt algorithm and the Scaled Conjugate Gradient algorithm. We test the performance of the two training algorithms by compressing the standard test image (Lena, or Lenna) in terms of accuracy and speed. Based on our results, we conclude that both algorithms were comparable in terms of speed and accuracy. However, the Levenberg-Marquardt algorithm showed slightly better performance in terms of accuracy (as found in the average training accuracy and mean squared error), whereas the Scaled Conjugate Gradient algorithm fared better in terms of speed (as found in the average training iterations) on a simple MLP structure (2 hidden layers).
International Nuclear Information System (INIS)
The objective of this work is to present the capabilities of the NUMERICS web platform for evaluation of the performance of image registration algorithms. The NUMERICS platform is a web-accessible tool which provides access to dedicated numerical algorithms for registration and comparison of medical images (http://numerics.phys.uni-sofia.bg). The platform allows comparison of noisy medical images by means of different types of image comparison algorithms, which are based on statistical tests for outliers. The platform also allows 2D image registration with different techniques, such as Elastic Thin-Plate Spline registration, registration based on rigid transformations, affine transformations, as well as non-rigid image registration based on Mobius transformations. In this work we demonstrate how the platform can be used as a tool for evaluation of the quality of the image registration process. We demonstrate performance evaluation of a deformable image registration technique based on Mobius transformations. The transformations are applied with appropriate cost functions such as mutual information, correlation coefficient, and sum of squared differences. The emphasis is on the results provided by the platform to the user and their interpretation in the context of the performance evaluation of 2D image registration. The NUMERICS image registration and image comparison platform provides detailed statistical information about submitted image registration jobs and can be used to perform quantitative evaluation of the performance of different image registration techniques. (authors)
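Two of the cost functions named above are straightforward to state in code. A sketch of SSD and the correlation coefficient on toy intensity lists (mutual information is omitted for brevity; the image data is made up):

```python
import math

def ssd(a, b):
    """Sum of squared differences between two equal-size images (flat lists)."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def correlation_coefficient(a, b):
    """Pearson correlation between the intensities of two images."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    va = math.sqrt(sum((x - ma) ** 2 for x in a))
    vb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (va * vb)

fixed  = [0, 1, 2, 3, 4, 5, 6, 7]        # toy 1-D "images"
moved  = [0, 1, 2, 3, 4, 5, 6, 7]        # perfectly registered copy
scaled = [2 * v for v in fixed]          # same structure, different intensity scale

print(ssd(fixed, moved))                       # 0 for identical images
print(correlation_coefficient(fixed, scaled))  # 1.0: correlation ignores scaling
```

The contrast between the two outputs on `scaled` illustrates why a registration platform offers several cost functions: SSD penalizes global intensity differences that the correlation coefficient deliberately ignores.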
Li Li; Guo Yang; Wu Wenwu; Shi Youyi; Cheng Jian; Tao Shiheng
2012-01-01
Abstract. Background: Several biclustering algorithms have been proposed to identify biclusters, in which genes share similar expression patterns across a number of conditions. However, different algorithms yield different biclusters and can lead to distinct conclusions. Therefore, testing and comparisons between these algorithms are strongly required. Methods: In this study, five biclustering algorithms (i.e. BIMAX, FABIA, ISA, QUBIC and SAMBA) were compared with each other in th...
A COMPARISON BETWEEN TWO ALGORITHMS FOR THE RETRIEVAL OF SOIL MOISTURE USING AMSR-E DATA
Directory of Open Access Journals (Sweden)
Simonetta Paloscia
2015-04-01
Full Text Available A comparison between two algorithms for estimating soil moisture with microwave satellite data was carried out by using the datasets collected on the four Agricultural Research Service (ARS) watershed sites in the US from 2002 to 2009. These sites collectively represent a wide range of ground conditions and precipitation regimes (from natural to agricultural surfaces and from desert to humid regions) and provide long-term in-situ data. One of the algorithms is the artificial neural network-based algorithm developed by the Institute of Applied Physics of the National Research Council (IFAC-CNR) (HydroAlgo), and the second one is the Single Channel Algorithm (SCA) developed by USDA-ARS (US Department of Agriculture-Agricultural Research Service). Both algorithms are based on the same radiative transfer equations but are implemented very differently. Both made use of datasets provided by the Japanese Aerospace Exploration Agency (JAXA) within the framework of the Advanced Microwave Scanning Radiometer-Earth Observing System (AMSR-E) and Global Change Observation Mission-Water (GCOM/AMSR-2) programs. Results demonstrated that both algorithms perform better than the mission-specified accuracy, with Root Mean Square Error (RMSE) ≤0.06 m3/m3 and Bias <0.02 m3/m3. These results expand on previous investigations using different algorithms and sites. The novelty of the paper consists in the fact that it is the first intercomparison of the HydroAlgo algorithm with a more traditional retrieval algorithm, which offers an approach to higher spatial resolution products.
EXPERIMENTAL COMPARISON OF HOMODYNE DEMODULATION ALGORITHMS FOR PHASE FIBER-OPTIC SENSOR
Directory of Open Access Journals (Sweden)
M. N. Belikin
2015-11-01
Full Text Available Subject of Research. The paper presents the results of an experimental comparative analysis of homodyne demodulation algorithms based on the differential cross multiplying method and on the arctangent method under the same conditions. The dependencies of output signal parameters on the optical radiation intensity are studied for the considered demodulation algorithms. Method. A prototype of a single fiber-optic phase interferometric sensor was used for the experimental comparison of the signal demodulation algorithms. Main Results. We have found that homodyne demodulation based on the arctangent method provides a greater (by 7 dB on average) signal-to-noise ratio of output signals over the acoustic frequency band from 100 Hz to 500 Hz as compared to the differential cross multiplying algorithm. We have demonstrated that no change in the output signal amplitude occurs over the studied range of optical pulse amplitudes. The obtained results indicate that homodyne demodulation based on the arctangent method is the most suitable for application in phase fiber-optic sensors, providing higher repeatability of their characteristics than the differential cross multiplying algorithm. Practical Significance. Algorithms of interferometric signal demodulation are widely used in phase fiber-optic sensors. Improvement of their characteristics has a positive effect on the performance of such sensors.
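Arctangent demodulation recovers the interferometric phase from quadrature components I = cos φ and Q = sin φ via atan2 followed by unwrapping. A sketch on a synthetic phase sweep, with no fiber-optic specifics assumed:

```python
import math

def atan_demodulate(i_samples, q_samples):
    """Arctangent homodyne demodulation with phase unwrapping."""
    phase = [math.atan2(q, i) for i, q in zip(i_samples, q_samples)]
    # Unwrap: remove 2*pi jumps between consecutive samples.
    out = [phase[0]]
    for p in phase[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))
        out.append(out[-1] + d)
    return out

# Synthetic interferometric signal: a 5-radian sinusoidal phase sweep.
true_phase = [5.0 * math.sin(2 * math.pi * k / 100) for k in range(200)]
i_sig = [math.cos(p) for p in true_phase]
q_sig = [math.sin(p) for p in true_phase]
recovered = atan_demodulate(i_sig, q_sig)
max_err = max(abs(r - t) for r, t in zip(recovered, true_phase))
print(max_err < 1e-9)  # exact recovery up to floating-point error
```

Unwrapping is valid as long as the phase changes by less than π between samples; the differential cross multiplying method avoids the arctangent entirely by combining derivatives of I and Q, at the cost of the intensity sensitivity measured in the paper.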
A Comparison of the Machine Learning Algorithm for Evaporation Duct Estimation
Directory of Open Access Journals (Sweden)
C. Yang
2013-06-01
Full Text Available In this research, a comparison of the relevance vector machine (RVM), least squares support vector machine (LSSVM) and radial basis function neural network (RBFNN) for evaporation duct estimation is presented. The parabolic equation model is adopted as the forward propagation model and is used to establish the training database between the radar sea clutter power and the evaporation duct height. The comparison of the RVM, LSSVM and RBFNN for evaporation duct estimation is investigated via experimental and simulation studies, and a statistical analysis method is employed to analyze the performance of the three machine learning algorithms in the simulation study. The analysis demonstrates that the M profile estimated by the RBFNN matches the measured profile relatively well in the experimental study; in the simulation study, the LSSVM is the most precise of the three machine learning algorithms, while the performance of the RVM is basically identical to that of the RBFNN.
Code Syntax-Comparison Algorithm Based on Type-Redefinition-Preprocessing and Rehash Classification
Directory of Open Access Journals (Sweden)
Baojiang Cui
2011-08-01
Full Text Available Code comparison technology plays an important role in the fields of software security protection and plagiarism detection. Nowadays there are mainly five approaches to plagiarism detection: file-attribute-based, text-based, token-based, syntax-based and semantic-based. The first three approaches have their own limitations, while the syntax-based technique suffers from limited detection ability and low efficiency, so that none of these approaches meets the requirements of large-scale software plagiarism detection. Based on our prior research, we propose an algorithm for type-redefinition plagiarism detection, which can detect simple type redefinition, repeating-pattern redefinition, and redefinition of types with pointers. This paper also proposes a code syntax-comparison algorithm based on rehash classification, which enhances the node storage structure of the syntax tree and greatly improves efficiency.
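A token-level sketch of the two ideas: identifier normalization (a crude stand-in for the paper's type-redefinition preprocessing) followed by hashed token n-gram comparison. The real algorithm operates on syntax trees with a rehash-classified node store; everything below, including the keyword list, is illustrative:

```python
import re

KEYWORDS = {"int", "long", "typedef", "return", "for", "if", "else", "while"}

def normalize(code):
    """Tokenize and map every non-keyword identifier to ID, numbers to NUM.

    Renaming identifiers makes simple variable/type renaming invisible to
    the comparison, a crude analogue of type-redefinition preprocessing.
    """
    tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", code)
    return [t if t in KEYWORDS or not re.match(r"[A-Za-z_\d]", t) else
            ("NUM" if t[0].isdigit() else "ID") for t in tokens]

def similarity(code_a, code_b, n=3):
    """Jaccard similarity over hashed token n-grams."""
    def grams(tokens):
        return {hash(tuple(tokens[i:i + n])) for i in range(len(tokens) - n + 1)}
    ga, gb = grams(normalize(code_a)), grams(normalize(code_b))
    return len(ga & gb) / len(ga | gb) if ga | gb else 1.0

a = "int total = 0; for (int i = 0; i < 10; i++) total += i;"
b = "int sum = 0; for (int k = 0; k < 10; k++) sum += k;"  # renamed copy of a
c = "if (x) return 0; else return 1;"                      # unrelated code
print(similarity(a, b))  # 1.0: renaming alone does not hide the copy
```

Hashing the n-grams, rather than storing them verbatim, is what keeps comparisons cheap at the scale the paper targets.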
Shot Boundary Detection in Soccer Video using Twin-comparison Algorithm and Dominant Color Region
Directory of Open Access Journals (Sweden)
Matko Šarić
2008-06-01
Full Text Available The first step in generic video processing is temporal segmentation, i.e. shot boundary detection. Camera shot transitions can be either abrupt (e.g. cuts) or gradual (e.g. fades, dissolves, wipes). Sports video is one of the most challenging domains for robust shot boundary detection. We propose a shot boundary detection algorithm for soccer video based on the twin-comparison method and the absolute difference between frames in their ratios of dominant-colored pixels to the total number of pixels. With this approach, the detection of gradual transitions is improved by decreasing the number of false positives caused by certain camera operations. We also compare the performance of our algorithm with that of the standard twin-comparison method.
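The twin-comparison logic can be sketched as follows: a high threshold detects abrupt cuts, while a lower threshold opens a candidate gradual transition whose frame differences are accumulated and tested against the high threshold. The difference values and thresholds below are placeholders, and the authors' dominant-color-ratio term is omitted.

```python
def twin_comparison(diffs, t_high, t_low):
    """Classify shot boundaries from frame-to-frame difference values.
    t_high flags abrupt cuts; t_low opens a candidate gradual transition
    whose differences are accumulated and tested against t_high."""
    cuts, graduals = [], []
    i, n = 0, len(diffs)
    while i < n:
        if diffs[i] >= t_high:                    # abrupt cut
            cuts.append(i)
            i += 1
        elif diffs[i] >= t_low:                   # potential gradual start
            start, acc = i, diffs[i]
            i += 1
            while i < n and diffs[i] >= t_low:
                acc += diffs[i]
                i += 1
            if acc >= t_high:                     # accumulated change ~ a cut
                graduals.append((start, i - 1))
        else:
            i += 1
    return cuts, graduals

# One hard cut at frame 1 and a dissolve spanning frames 3-5.
cuts, graduals = twin_comparison([0.1, 0.9, 0.1, 0.3, 0.3, 0.3, 0.1],
                                 t_high=0.8, t_low=0.2)
```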
Energy Technology Data Exchange (ETDEWEB)
Antoniucci, S.; Giannini, T.; Li Causi, G.; Lorenzetti, D., E-mail: simone.antoniucci@oa-roma.inaf.it, E-mail: teresa.giannini@oa-roma.inaf.it, E-mail: gianluca.licausi@oa-roma.inaf.it, E-mail: dario.lorenzetti@oa-roma.inaf.it [INAF-Osservatorio Astronomico di Roma, via Frascati 33, I-00040 Monte Porzio (Italy)
2014-02-10
Aiming to statistically study the variability in the mid-IR of young stellar objects, we have compared the 3.6, 4.5, and 24 μm Spitzer fluxes of 1478 sources belonging to the C2D (Cores to Disks) legacy program with the WISE fluxes at 3.4, 4.6, and 22 μm. From this comparison, we have selected a robust sample of 34 variable sources. Their variations were classified per spectral Class (according to the widely accepted scheme of Class I/flat/II/III protostars), and per star forming region. On average, the number of variable sources decreases with increasing Class and is definitely higher in Perseus and Ophiuchus than in Chamaeleon and Lupus. According to the paradigm Class ≡ Evolution, the photometric variability can be considered to be a feature more pronounced in less evolved protostars, and, as such, related to accretion processes. Moreover, our statistical findings agree with the current knowledge of star formation activity in different regions. The 34 selected variables were further investigated for similarities with known young eruptive variables, namely the EXors. In particular, we analyzed (1) the shape of the spectral energy distribution, (2) the IR excess over the stellar photosphere, (3) magnitude versus color variations, and (4) output parameters of model fitting. This first systematic search for EXors ends up with 11 bona fide candidates that can be considered as suitable targets for monitoring or future investigations.
A comparison between genetic algorithms and neural networks for optimizing fuel recharges in BWR
International Nuclear Information System (INIS)
In this work the results of a genetic algorithm (GA) and a recurrent multi-state neural network (RNRME) for optimizing the fuel reloads of 5 cycles of the Laguna Verde nuclear power plant (CNLV) are presented. The fuel reloads obtained by the two methods are compared, and it was observed that the RNRME creates better fuel distributions than the GA. A comparison of the practical utility of the two techniques is also made. (Author)
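A reload optimization of this kind typically encodes a loading pattern as a permutation of assemblies. The sketch below is a generic permutation genetic algorithm (elitism, order crossover, swap mutation) with a user-supplied surrogate cost; the actual core-physics objective evaluated for the plant is not modelled.

```python
import random

def ga_permutation(cost, n, pop_size=30, generations=200, seed=0):
    """Tiny permutation GA sketch. 'cost' is a stand-in for the core
    simulator a real reload optimization would call."""
    rng = random.Random(seed)
    pop = [rng.sample(range(n), n) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=cost)
        elite = pop[: pop_size // 2]              # keep the better half
        children = []
        while len(elite) + len(children) < pop_size:
            p1, p2 = rng.sample(elite, 2)
            a, b = sorted(rng.sample(range(n), 2))
            child = [None] * n
            child[a:b] = p1[a:b]                  # order crossover (OX)
            rest = [g for g in p2 if g not in child]
            for i in range(n):
                if child[i] is None:
                    child[i] = rest.pop(0)
            if rng.random() < 0.2:                # swap mutation
                i, j = rng.sample(range(n), 2)
                child[i], child[j] = child[j], child[i]
            children.append(child)
        pop = elite + children
    return min(pop, key=cost)

# Toy objective: total displacement from the identity arrangement.
cost = lambda p: sum(abs(g - i) for i, g in enumerate(p))
best = ga_permutation(cost, 6)
```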
A comparison of semiglobal and local dense matching algorithms for surface reconstruction
Dall'Asta, E.; Roncella, R.
2014-06-01
Encouraged by the growing interest in automatic 3D image-based reconstruction, the development and improvement of robust stereo matching techniques has been one of the most investigated research topics of recent years in photogrammetry and computer vision. This paper focuses on a comparison of some local and global stereo matching algorithms that are popular in both photogrammetry and computer vision. In particular, Semi-Global Matching (SGM), which performs pixel-wise matching and relies on the application of consistency constraints during matching cost aggregation, will be discussed. The results of tests performed on real and simulated stereo image datasets, evaluating in particular the accuracy of the obtained digital surface models, will be presented. Several algorithms and different implementations are considered in the comparison, using freeware codes such as MICMAC and OpenCV, commercial software (e.g. Agisoft PhotoScan), and proprietary codes implementing Least Squares and Semi-Global Matching algorithms. The comparison also considers the completeness and level of detail within fine structures, and the reliability and repeatability of the obtainable data.
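As a concrete illustration of what SGM's cost aggregation does, the sketch below aggregates matching costs along a single left-to-right scanline; the cost array and the P1/P2 penalties are placeholders, and a full SGM implementation sums such path costs over 8 or 16 directions before the winner-take-all disparity selection.

```python
import numpy as np

def sgm_scanline(cost, p1=1.0, p2=4.0):
    """Left-to-right SGM cost aggregation along one scanline.
    cost: (n_pixels, n_disparities) matching-cost array.
    p1 penalizes disparity changes of 1; p2 penalizes larger jumps."""
    n, d = cost.shape
    L = np.empty_like(cost, dtype=float)
    L[0] = cost[0]
    for x in range(1, n):
        prev = L[x - 1]
        m = prev.min()
        cand = np.stack([
            prev,                               # same disparity
            np.r_[np.inf, prev[:-1]] + p1,      # disparity - 1
            np.r_[prev[1:], np.inf] + p1,       # disparity + 1
            np.full(d, m + p2),                 # any larger jump
        ])
        L[x] = cost[x] + cand.min(0) - m        # subtract m to bound growth
    return L.argmin(1)                          # winner-take-all per pixel

# Toy cost volume whose true disparity is 2 at every pixel.
cost = np.ones((5, 4))
cost[:, 2] = 0.0
disp = sgm_scanline(cost)
```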
Directory of Open Access Journals (Sweden)
Ji Xinglai
2010-08-01
Full Text Available Abstract Background We are developing a cross-species comparison strategy to distinguish between cancer driver and passenger gene alteration candidates, by utilizing the difference in genomic location of orthologous genes between the human and other mammals. As an initial test of this strategy, we conducted a pilot study with human colorectal cancer (CRC) and its mouse model C57BL/6J ApcMin/+, focusing on human 5q22.2 and 18q21.1-q21.2. Methods We first performed bioinformatics analysis on the evolution of the 5q22.2 and 18q21.1-q21.2 regions. Then, we performed exon-targeted sequencing, real time quantitative polymerase chain reaction (qPCR), and real time quantitative reverse transcriptase PCR (qRT-PCR) analyses on a number of genes of both regions with both human and mouse colon tumors. Results These two regions (5q22.2 and 18q21.1-q21.2) are frequently deleted in human CRCs and encode the genuine colorectal tumor suppressors APC and SMAD4. They also encode genes such as MCC (mutated in colorectal cancer) whose role in CRC etiology is unknown. We have discovered that both regions are evolutionarily unstable, so that genes clustered in each human region are found scattered at several distinct loci in the genomes of many other species. For instance, APC and MCC are less than 200 kb apart in human 5q22.2 but are 10 Mb apart in the mouse genome. Importantly, our analyses revealed that, while the known CRC driver genes APC and SMAD4 were disrupted in both human colorectal tumors and tumors from ApcMin/+ mice, the questionable MCC gene was disrupted in human tumors but appeared to be intact in mouse tumors. Conclusions These results indicate that MCC may not actually play any causative role in early colorectal tumorigenesis. We also hypothesize that its disruption in human CRCs is likely a mere result of its close proximity to APC in the human genome. Expanding this pilot study to the entire genome may identify more questionable genes like MCC, facilitating the discovery of new CRC driver gene candidates.
International Nuclear Information System (INIS)
We are developing a cross-species comparison strategy to distinguish between cancer driver and passenger gene alteration candidates, by utilizing the difference in genomic location of orthologous genes between the human and other mammals. As an initial test of this strategy, we conducted a pilot study with human colorectal cancer (CRC) and its mouse model C57BL/6J ApcMin/+, focusing on human 5q22.2 and 18q21.1-q21.2. We first performed bioinformatics analysis on the evolution of the 5q22.2 and 18q21.1-q21.2 regions. Then, we performed exon-targeted sequencing, real time quantitative polymerase chain reaction (qPCR), and real time quantitative reverse transcriptase PCR (qRT-PCR) analyses on a number of genes of both regions with both human and mouse colon tumors. These two regions (5q22.2 and 18q21.1-q21.2) are frequently deleted in human CRCs and encode the genuine colorectal tumor suppressors APC and SMAD4. They also encode genes such as MCC (mutated in colorectal cancer) whose role in CRC etiology is unknown. We have discovered that both regions are evolutionarily unstable, so that genes clustered in each human region are found scattered at several distinct loci in the genomes of many other species. For instance, APC and MCC are less than 200 kb apart in human 5q22.2 but are 10 Mb apart in the mouse genome. Importantly, our analyses revealed that, while the known CRC driver genes APC and SMAD4 were disrupted in both human colorectal tumors and tumors from ApcMin/+ mice, the questionable MCC gene was disrupted in human tumors but appeared to be intact in mouse tumors. These results indicate that MCC may not actually play any causative role in early colorectal tumorigenesis. We also hypothesize that its disruption in human CRCs is likely a mere result of its close proximity to APC in the human genome. Expanding this pilot study to the entire genome may identify more questionable genes like MCC, facilitating the discovery of new CRC driver gene candidates.
Comparison of Reconstruction and Control algorithms on the ESO end-to-end simulator OCTOPUS
Montilla, I.; Béchet, C.; Lelouarn, M.; Correia, C.; Tallon, M.; Reyes, M.; Thiébaut, É.
Extremely Large Telescopes are very challenging with regard to their Adaptive Optics requirements. Their diameters, the specifications demanded by the science for which they are being designed, and the planned use of Extreme Adaptive Optics systems imply a huge increase in the number of degrees of freedom in the deformable mirrors. It is necessary to study new reconstruction algorithms to implement real-time Adaptive Optics control at the required speed. We have studied the performance, applied to the case of the European ELT, of three different algorithms: the matrix-vector multiplication (MVM) algorithm, considered as a reference; the Fractal Iterative Method (FrIM); and the Fourier Transform Reconstructor (FTR). The algorithms have been tested on ESO's OCTOPUS software, which simulates the atmosphere, the deformable mirror, the sensor and the closed-loop control. The MVM is the default reconstruction and control method implemented in OCTOPUS, but it scales as O(N²) operations per loop, so it is not considered a fast algorithm for wave-front reconstruction and control on an Extremely Large Telescope. The two other methods are the fast algorithms studied in the E-ELT Design Study. Their performance, as well as their response in the presence of noise and under various atmospheric conditions, has been compared using a Single Conjugate Adaptive Optics configuration for a 42 m diameter ELT with a total of 5402 actuators. These comparisons, made on a common simulator, highlight the pros and cons of the various methods and give us a better understanding of the type of reconstruction algorithm that an ELT demands.
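The reference MVM reconstructor amounts to one precomputed matrix applied to the slope vector at each loop iteration, which is the source of its per-loop cost. The interaction matrix below is a random stand-in, not an ELT model.

```python
import numpy as np

# Matrix-vector-multiply (MVM) reconstruction sketch. The reconstructor is
# the least-squares pseudo-inverse of an interaction matrix D that maps
# actuator commands to sensor slopes; D here is a random stand-in.
rng = np.random.default_rng(1)
n_slopes, n_act = 40, 10
D = rng.standard_normal((n_slopes, n_act))   # interaction matrix (assumed)
R = np.linalg.pinv(D)                        # precomputed reconstructor
true_cmd = rng.standard_normal(n_act)
slopes = D @ true_cmd                        # noiseless sensor measurement
cmd = R @ slopes                             # one matrix-vector product per loop
```

With noiseless slopes and a full-column-rank interaction matrix, the pseudo-inverse recovers the commands exactly; the fast methods in the paper avoid forming and applying this dense matrix.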
Directory of Open Access Journals (Sweden)
Rhythm Suren Wadhwa
2011-11-01
Full Text Available The paper presents a comparison and application of metaheuristic population-based optimization algorithms to a flexible manufacturing automation scenario in a metacasting foundry. It presents a novel application and comparison of the Bee Colony Algorithm (BCA) with variations of Particle Swarm Optimization (PSO) and Ant Colony Optimization (ACO) for an object recognition problem in a robot material handling system. To enable robust pick-and-place handling of metal-cast parts by a six-axis industrial robot manipulator, the correct orientation of the parts must be supplied to the manipulator via the digital image captured by the vision system. This information is then used to orient the robot gripper to grip the part from a moving conveyor belt. The objective is to find the reference templates of the manufactured parts in the target landscape picture, which may contain noise. The normalized cross-correlation (NCC) function is used as the objective function in the optimization procedure. The ultimate goal is to test improved algorithms that could prove useful in practical manufacturing automation scenarios.
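The NCC objective that the swarm algorithms maximize is easy to state; the exhaustive scan below is the brute-force baseline that BCA/PSO/ACO replace by sampling candidate (row, column) positions. The zero background and the template placement are illustrative assumptions.

```python
import numpy as np

def ncc(template, patch):
    """Normalized cross-correlation between a template and an equally
    sized image patch; returns a score in [-1, 1]."""
    t = template - template.mean()
    p = patch - patch.mean()
    denom = np.sqrt((t * t).sum() * (p * p).sum())
    return float((t * p).sum() / denom) if denom else 0.0

def best_match(image, template):
    """Exhaustive NCC search over all template positions."""
    th, tw = template.shape
    best, pos = -2.0, (0, 0)
    for r in range(image.shape[0] - th + 1):
        for c in range(image.shape[1] - tw + 1):
            s = ncc(template, image[r:r + th, c:c + tw])
            if s > best:
                best, pos = s, (r, c)
    return pos, best

# Plant the template in a synthetic scene and recover its position.
rng = np.random.default_rng(0)
template = rng.random((5, 5))
image = np.zeros((20, 20))
image[7:12, 3:8] = template
pos, score = best_match(image, template)
```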
Limongelli, Carla; Sciarrone, Filippo; Temperini, Marco; Vaste, Giulia
2011-01-01
LS-Lab provides automatic support to comparison/evaluation of the Learning Object Sequences produced by different Curriculum Sequencing Algorithms. Through this framework a teacher can verify the correspondence between the behaviour of different sequencing algorithms and her pedagogical preferences. In fact the teacher can compare algorithms…
Sensitivity study of voxel-based PET image comparison to image registration algorithms
Energy Technology Data Exchange (ETDEWEB)
Yip, Stephen, E-mail: syip@lroc.harvard.edu; Chen, Aileen B.; Berbeco, Ross [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 (United States); Aerts, Hugo J. W. L. [Department of Radiation Oncology, Brigham and Women’s Hospital, Dana-Farber Cancer Institute and Harvard Medical School, Boston, Massachusetts 02115 and Department of Radiology, Brigham and Women’s Hospital and Harvard Medical School, Boston, Massachusetts 02115 (United States)
2014-11-01
Purpose: Accurate deformable registration is essential for voxel-based comparison of sequential positron emission tomography (PET) images for proper adaptation of the treatment plan and treatment response assessment. The comparison may be sensitive to the method of deformable registration, as the optimal algorithm is unknown. This study investigated the impact of registration algorithm choice on therapy response evaluation. Methods: Sixteen patients with 20 lung tumors underwent computed tomography (CT) and 4D FDG-PET scans before and after chemoradiotherapy. All CT images were coregistered using a rigid and ten deformable registration algorithms. The resulting transformations were then applied to the respective PET images. The tumor region defined by a physician on the registered PET images was classified into progressor, stable-disease, and responder subvolumes. Specifically, voxels with standardized uptake value (SUV) decreases >30% were classified as responder, while voxels with SUV increases >30% were classified as progressor. All other voxels were considered stable-disease. The agreement of the subvolumes resulting from different registration algorithms was assessed by the Dice similarity index (DSI). The coefficient of variation (CV) was computed to assess the variability of DSI between individual tumors. The root mean square difference (RMS_rigid) of the rigidly registered CT images was used to measure the degree of tumor deformation. RMS_rigid and DSI were correlated by the Spearman correlation coefficient (R) to investigate the effect of tumor deformation on DSI. Results: Median DSI_rigid was found to be 72%, 66%, and 80% for progressor, stable-disease, and responder, respectively. Median DSI_deformable was 63%-84%, 65%-81%, and 82%-89%. Variability of DSI was substantial and similar for both rigid and deformable algorithms, with CV > 10% for all subvolumes. Tumor deformation had moderate to significant impact on DSI for progressor
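The voxel classification and the Dice similarity index used for agreement can be stated directly in code; the 30% SUV-change thresholds follow the abstract, and the toy arrays are illustrative.

```python
import numpy as np

def classify_response(suv_pre, suv_post):
    """Voxel classification from the abstract: SUV decrease > 30% ->
    responder, SUV increase > 30% -> progressor, otherwise stable."""
    change = (suv_post - suv_pre) / suv_pre
    labels = np.full(suv_pre.shape, "stable", dtype=object)
    labels[change > 0.30] = "progressor"
    labels[change < -0.30] = "responder"
    return labels

def dice(mask_a, mask_b):
    """Dice similarity index between two binary masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    total = mask_a.sum() + mask_b.sum()
    return 2.0 * inter / total if total else 1.0

# Toy 4-voxel example.
labels = classify_response(np.array([1.0, 1.0, 1.0, 1.0]),
                           np.array([1.5, 0.5, 1.0, 1.4]))
```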
Digital Sound Synthesis Algorithms: a Tutorial Introduction and Comparison of Methods
Lee, J. Robert
The objectives of the dissertation are to provide both a compendium of sound-synthesis methods with detailed descriptions and sound examples, as well as a comparison of the relative merits of each method based on ease of use, observed sound quality, execution time, and data storage requirements. The methods are classified under the general headings of wavetable-lookup synthesis, additive synthesis, subtractive synthesis, nonlinear methods, and physical modelling. The nonlinear methods comprise a large group that ranges from the well-known frequency-modulation synthesis to waveshaping. The final category explores computer modelling of real musical instruments and includes numerical and analytical solutions to the classical wave equation of motion, along with some of the more sophisticated time-domain models that are possible through the prudent combination of simpler synthesis techniques. The dissertation is intended to be understandable by a musician who is mathematically literate but who does not necessarily have a background in digital signal processing. With this limitation in mind, a brief and somewhat intuitive description of digital sampling theory is provided in the introduction. Other topics such as filter theory are discussed as the need arises. By employing each of the synthesis methods to produce the same type of sound, interesting comparisons can be made. For example, a struck string sound, such as that typical of a piano, can be produced by algorithms in each of the synthesis classifications. Many sounds, however, are peculiar to a single algorithm and must be examined independently. Psychoacoustic studies were conducted as an aid in the comparison of the sound quality of several implementations of the synthesis algorithms. Other psychoacoustic experiments were conducted to supplement the established notions of which timbral issues are important in the re-synthesis of the sounds of acoustic musical instruments.
Bircher, Pascal; Liniger, Hanspeter; Prasuhn, Volker
2016-04-01
Soil erosion is a well-known challenge both from a global perspective and in Switzerland, and it is assessed and discussed in many projects (e.g. national or European erosion risk maps). Meaningful assessment of soil erosion requires models that adequately reflect surface water flows. Various studies have attempted to achieve better modelling results by including multiple flow algorithms in the topographic length and slope factor (LS-factor) of the Revised Universal Soil Loss Equation (RUSLE). The choice of multiple flow algorithms is wide, and many of them have been implemented in programs or tools like Saga-Gis, GrassGis, ArcGIS, ArcView, Taudem, and others. This study compares six different multiple flow algorithms with the aim of identifying a suitable approach to calculating the LS factor for a new soil erosion risk map of Switzerland. The comparison of multiple flow algorithms is part of a broader project to model soil erosion for the entire agriculturally used area in Switzerland and to renew and optimize the current erosion risk map of Switzerland (ERM2). The ERM2 was calculated in 2009, using a high resolution digital elevation model (2 m) and a multiple flow algorithm in ArcView. This map has provided the basis for enforcing soil protection regulations since 2010 and has proved its worth in practice, but it has become outdated (new basic data are now available, e.g. data on land use change, a new rainfall erosivity map, a new digital elevation model, etc.) and is no longer user friendly (ArcView). In a first step towards its renewal, a new data set from the Swiss Federal Office of Topography (Swisstopo) was used to generate the agricultural area based on the existing field block map. A field block is an area consisting of farmland, pastures, and meadows which is bounded by hydrological borders such as streets, forests, villages, surface waters, etc. In our study, we compared the six multiple flow algorithms with the LS factor calculation approach used in
Directory of Open Access Journals (Sweden)
Prabhat Kumar Giri
2016-01-01
Full Text Available In the present era of globalization and competitive markets, cellular manufacturing has become a vital tool for meeting the challenges of improving productivity, which is the way to sustain growth. Getting the best results from cellular manufacturing depends on the formation of machine cells and part families. This paper examines the advantages of the ART method of cell formation over array-based clustering algorithms, namely ROC-2 and DCA. A comparison and evaluation of the cell formation methods has been carried out in the study. The most appropriate approach is selected and used to form the cellular manufacturing system. The comparison and evaluation is done on the basis of the grouping efficiency performance measure, and improvements over the existing cellular manufacturing system are presented.
Pulliam, T. H.; Nemec, M.; Holst, T.; Zingg, D. W.; Kwak, Dochan (Technical Monitor)
2002-01-01
A comparison between an Evolutionary Algorithm (EA) and an Adjoint-Gradient (AG) Method applied to a two-dimensional Navier-Stokes code for airfoil design is presented. Both approaches use a common function evaluation code, the steady-state explicit part of the code, ARC2D. The parameterization of the design space is a common B-spline approach for an airfoil surface, which, together with a common gridding approach, restricts the AG and EA to the same design space. Results are presented for a class of viscous transonic airfoils in which the optimization tradeoff between drag minimization as one objective and lift maximization as another produces the multi-objective design space. Comparisons are made for efficiency, accuracy and design consistency.
A comparison of two adaptive algorithms for the control of active engine mounts
Hillis, A. J.; Harrison, A. J. L.; Stoten, D. P.
2005-08-01
This paper describes work conducted in order to control automotive active engine mounts, consisting of a conventional passive mount and an internal electromagnetic actuator. Active engine mounts seek to cancel the oscillatory forces generated by the rotation of out-of-balance masses within the engine. The actuator generates a force dependent on a control signal from an algorithm implemented with a real-time DSP. The filtered-x least-mean-square (FXLMS) adaptive filter is used as a benchmark for comparison with a new implementation of the error-driven minimal controller synthesis (Er-MCSI) adaptive controller. Both algorithms are applied to an active mount fitted to a saloon car equipped with a four-cylinder turbo-diesel engine, and have no a priori knowledge of the system dynamics. The steady-state and transient performance of the two algorithms are compared and the relative merits of the two approaches are discussed. The Er-MCSI strategy offers significant computational advantages as it requires no cancellation path modelling. The Er-MCSI controller is found to perform in a fashion similar to the FXLMS filter—typically reducing chassis vibration by 50-90% under normal driving conditions.
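The FXLMS update used as the benchmark above can be sketched in a few lines. This is a generic illustration, not the authors' implementation: the secondary path is assumed perfectly modelled (and set to unity in the demo), and the tonal reference stands in for an engine-order signal.

```python
import numpy as np

def fxlms(x, d, s_hat, mu=0.1, n_taps=8):
    """Filtered-x LMS sketch: adapt FIR weights so the control signal,
    passed through the secondary path, cancels the disturbance d.
    Assumes the true secondary path equals its model s_hat."""
    n = len(x)
    w = np.zeros(n_taps)
    xf = np.convolve(x, s_hat)[:n]              # reference filtered by path model
    y = np.zeros(n)
    e = np.zeros(n)
    for k in range(n_taps, n):
        xb = x[k - n_taps + 1:k + 1][::-1]      # reference buffer (newest first)
        y[k] = w @ xb                           # actuator drive signal
        ys = np.convolve(y[:k + 1], s_hat)[k]   # drive through secondary path
        e[k] = d[k] - ys                        # residual vibration at sensor
        xfb = xf[k - n_taps + 1:k + 1][::-1]
        w += mu * e[k] * xfb                    # FXLMS weight update
    return e

# Cancel a correlated tonal disturbance; the residual decays toward zero.
t = np.arange(3000)
x = np.sin(0.1 * t)                             # reference signal
d = 0.8 * np.sin(0.1 * t + 0.3)                 # disturbance at the sensor
e = fxlms(x, d, s_hat=np.array([1.0]))
```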
Marchant, Benjamin; Platnick, Steven; Meyer, Kerry; Arnold, G. Thomas; Riedi, Jérôme
2016-04-01
Cloud thermodynamic phase (ice, liquid, undetermined) classification is an important first step for cloud retrievals from passive sensors such as MODIS (Moderate Resolution Imaging Spectroradiometer). Because ice and liquid phase clouds have very different scattering and absorbing properties, an incorrect cloud phase decision can lead to substantial errors in the cloud optical and microphysical property products such as cloud optical thickness or effective particle radius. Furthermore, it is well established that ice and liquid clouds have different impacts on the Earth's energy budget and hydrological cycle, thus accurately monitoring the spatial and temporal distribution of these clouds is of continued importance. For MODIS Collection 6 (C6), the shortwave-derived cloud thermodynamic phase algorithm used by the optical and microphysical property retrievals has been completely rewritten to improve the phase discrimination skill for a variety of cloudy scenes (e.g., thin/thick clouds, over ocean/land/desert/snow/ice surface, etc). To evaluate the performance of the C6 cloud phase algorithm, extensive granule-level and global comparisons have been conducted against the heritage C5 algorithm and CALIOP. A wholesale improvement is seen for C6 compared to C5.
SKLOF: A New Algorithm to Reduce the Range of Supernova Candidates
Institute of Scientific and Technical Information of China (English)
屠良平; 魏会明; 韦鹏; 潘景昌; 罗阿理; 赵永恒
2015-01-01
Supernovae (SNe) are the "standard candles" of cosmology. The probability of an outbreak in any given galaxy is very low, making supernovae a special, rare class of astronomical objects; only by surveying a large number of galaxies do we have a chance of finding one. A supernova in the midst of its explosion illuminates its entire galaxy, so the galaxy spectra we obtain show obvious supernova features. However, the number of supernovae found so far is very small relative to the large number of astronomical objects, and the computation time needed to search for them determines whether follow-up observations are possible, so an efficient method is required. The time complexity of the density-based outlier detection algorithm (LOF) is not ideal, which limits its application to large datasets. By improving the LOF algorithm, we introduce a new algorithm, named SKLOF, that reduces the search range of supernova candidates in a flood of galaxy spectra. First, the spectral datasets are pruned to remove most objects that cannot be outliers. Second, the improved LOF algorithm is used to calculate the local outlier factors (LOFs) of the remaining spectra, and all LOFs are arranged in descending order. Finally, we obtain a much smaller search range of supernova candidates for subsequent identification. The experimental results show that the algorithm is very effective: it not only improves accuracy but also reduces the running time compared with the LOF algorithm while guaranteeing detection accuracy.
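For reference, the plain LOF computation that SKLOF accelerates can be written densely in NumPy; the pruning of non-outlier spectra, which is the paper's contribution, is not reproduced, and the 2-D toy points stand in for spectral feature vectors.

```python
import numpy as np

def lof_scores(X, k=3):
    """Plain O(n^2) Local Outlier Factor: ratio of each point's local
    reachability density to that of its k nearest neighbours."""
    d = np.sqrt(((X[:, None, :] - X[None, :, :]) ** 2).sum(-1))
    np.fill_diagonal(d, np.inf)
    idx = np.argsort(d, axis=1)[:, :k]          # k nearest neighbours
    nd = np.take_along_axis(d, idx, 1)          # distances to them
    kdist = nd[:, -1]                           # k-distance of each point
    reach = np.maximum(kdist[idx], nd)          # reachability distances
    lrd = k / reach.sum(1)                      # local reachability density
    return lrd[idx].mean(1) / lrd               # LOF score (outliers >> 1)

# Five clustered points plus one far outlier (stand-ins for spectra).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1], [0.5, 0.5], [10, 10]], float)
scores = lof_scores(X)
```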
Algorithm, applications and evaluation for protein comparison by Ramanujan Fourier transform.
Zhao, Jian; Wang, Jiasong; Hua, Wei; Ouyang, Pingkai
2015-12-01
The amino acid sequence of a protein determines its chemical properties, chain conformation and biological functions. Protein sequence comparison is of great importance for identifying similarities between protein structures and inferring their functions. Many properties of a protein correspond to low-frequency signals within the sequence. Low-frequency modes in protein sequences are linked to the secondary structures, membrane protein types, and sub-cellular localizations of the proteins. In this paper, we present the Ramanujan Fourier transform (RFT), with a fast algorithm, to analyze the low-frequency signals of protein sequences. The RFT method is applied to similarity analysis of protein sequences with the Resonant Recognition Model (RRM). The results show that the proposed fast RFT method for protein comparison is more efficient than the commonly used discrete Fourier transform (DFT). RFT can detect common frequencies as significant features for specific protein families, and the RFT spectrum heat-map of protein sequences demonstrates the information conservation in the sequence comparison. The proposed method offers a new tool for pattern recognition, feature extraction and structural analysis of protein sequences.
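The building block of the RFT is the Ramanujan sum c_q(n); a direct implementation from its definition takes a few lines (the paper's fast algorithm and the RRM pipeline are not reproduced here).

```python
from math import gcd, cos, pi

def ramanujan_sum(q, n):
    """c_q(n) = sum of cos(2*pi*k*n/q) over 1 <= k <= q with gcd(k, q) = 1.
    The value is always an integer, so the float sum is rounded."""
    return round(sum(cos(2 * pi * k * n / q)
                     for k in range(1, q + 1) if gcd(k, q) == 1))
```

An RFT expands a signal over these sums instead of complex exponentials; a few known values serve as a sanity check (c_q(0) equals Euler's totient of q).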
An Incremental Algorithm of Text Clustering Based on Semantic Sequences
Institute of Scientific and Technical Information of China (English)
FENG Zhonghui; SHEN Junyi; BAO Junpeng
2006-01-01
This paper proposes an incremental text clustering algorithm based on semantic sequences. Using the similarity relation of semantic sequences and calculating the cover of similar semantic sequence sets, the candidate cluster with the minimum entropy overlap value is selected as a result cluster each time. Experimental comparison shows that the precision of the algorithm is higher than that of other algorithms under the same conditions, and this is especially evident on sets of long documents.
Directory of Open Access Journals (Sweden)
Gaurav Prakash
2016-01-01
Conclusions: Preoperative whole-eye HOA were similar for refractive surgery candidates of Arab and South Asian origin. The values were comparable to historical data for Caucasian eyes and were lower than Asian (Chinese) eyes. These findings may aid in refining refractive nomograms for wavefront ablations.
Comparison of Bayesian Land Surface Temperature algorithm performance with Terra MODIS observations
Morgan, J A
2009-01-01
An approach to land surface temperature (LST) estimation that relies upon Bayesian inference has been validated against multiband infrared radiometric imagery from the Terra MODIS instrument. Bayesian LST estimators are shown to reproduce standard MODIS product LST values starting from a parsimoniously chosen (hence, uninformative) range of prior band emissivity knowledge. Two estimation methods have been tested. The first is the iterative contraction mapping of joint expectation values for LST and surface emissivity described in a previous paper. In the second method, the Bayesian algorithm is reformulated as a maximum a posteriori (MAP) search for the maximum joint a posteriori probability for LST, given observed sensor aperture radiances and a priori probabilities for LST and emissivity. Two MODIS data granules each for daytime and nighttime were used for the comparison. The granules were chosen to be largely cloud-free, with limited vertical relief in those portions of the granules fo...
Ivanova, Natalia; Pedersen, Leif T.; Lavergne, Thomas; Tonboe, Rasmus T.; Saldo, Roberto; Mäkynen, Marko; Heygster, Georg; Rösel, Anja; Kern, Stefan; Dybkjær, Gorm; Sørensen, Atle; Brucker, Ludovic; Shokr, Mohammed; Korosov, Anton; Hansen, Morten W.
2015-04-01
Sea ice concentration (SIC) has been derived globally from satellite passive microwave observations since the 1970s by a multitude of algorithms. However, existing datasets and algorithms, although agreeing in the large-scale picture, differ substantially in the details and have disadvantages in summer and fall due to presence of melt ponds and thin ice. There is thus a need for understanding of the causes for the differences and identifying the most suitable method to retrieve SIC. Therefore, during the ESA Climate Change Initiative effort 30 algorithms have been implemented, inter-compared and validated by a standardized reference dataset. The algorithms were evaluated over low and high sea ice concentrations and thin ice. Based on the findings, an optimal approach to retrieve sea ice concentration globally for climate purposes was suggested and validated. The algorithm was implemented with atmospheric correction and dynamical tie points in order to produce the final sea ice concentration dataset with per-pixel uncertainties. The issue of melt ponds was addressed in particular because they are interpreted as open water by the algorithms and thus SIC can be underestimated by up to 40%. To improve our understanding of this issue, melt-pond signatures in AMSR2 images were investigated based on their physical properties with help of observations of melt pond fraction from optical (MODIS and MERIS) and active microwave (SAR) satellite measurements.
Comparison of algorithms for blood stain detection applied to forensic hyperspectral imagery
Yang, Jie; Messinger, David W.; Mathew, Jobin J.; Dube, Roger R.
2016-05-01
Blood stains are among the most important types of evidence for forensic investigation. They contain valuable DNA information, and the pattern of the stains can suggest specifics about the nature of the violence that transpired at the scene. Early detection of blood stains is particularly important since the blood reacts physically and chemically with air and materials over time. Accurate identification of blood remnants, including regions that might have been intentionally cleaned, is an important aspect of forensic investigation. Hyperspectral imaging is a potential method for detecting blood stains because it is non-contact and provides substantial spectral information that can be used to identify regions in a scene with trace amounts of blood. Scenes of violent crime can be highly complex when the range of material types and conditions containing blood stains is considered. Some stains are hard to detect by the unaided eye, especially if a conscious effort has been made to clean the scene (we refer to these as "latent" blood stains). In this paper we present the initial results of a study of the use of hyperspectral imaging algorithms for blood detection in complex scenes. We describe a hyperspectral imaging system which generates images covering the 400 nm - 700 nm visible range with a spectral resolution of 10 nm. Three image sets of 31 wavelength bands were generated using this camera for a simulated indoor crime scene in which blood stains were placed on a T-shirt and walls. To detect blood stains in the scene, Principal Component Analysis (PCA), Subspace Reed-Xiaoli Detection (SRXD), and Topological Anomaly Detection (TAD) algorithms were used. Comparison of the three hyperspectral image analysis techniques shows that TAD is most suitable for detecting blood stains and discovering latent blood stains.
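A common baseline for hyperspectral anomaly detection is the global RX score, i.e. the Mahalanobis distance of each pixel spectrum from the scene background; the SRXD and TAD methods evaluated in the paper refine this idea. The cube below is synthetic, and the single-background assumption is a simplification.

```python
import numpy as np

def rx_scores(cube):
    """Global RX detector: squared Mahalanobis distance of every pixel
    spectrum from the scene mean (a simplified stand-in for SRXD/TAD)."""
    h, w, b = cube.shape
    X = cube.reshape(-1, b).astype(float)
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False) + 1e-6 * np.eye(b)  # regularized covariance
    diff = X - mu
    return np.einsum("ij,jk,ik->i", diff,
                     np.linalg.inv(cov), diff).reshape(h, w)

# Synthetic 10x10 scene with 5 bands and one planted spectral anomaly.
rng = np.random.default_rng(0)
cube = rng.standard_normal((10, 10, 5))
cube[4, 7] += 10.0
scores = rx_scores(cube)
```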
Fedorova, E.; Vasylenko, A.; Hnatyk, B. I.; Zhdanov, V. I.
2016-02-01
We analyze the X-ray properties of the Compton-thick Seyfert 1.9 radio-quiet AGN in NGC 1194 using INTEGRAL (ISGRI), XMM-Newton (EPIC), Swift (BAT and XRT), and Suzaku (XIS) observations. There is a set of Fe-K lines in the NGC 1194 spectrum with complex relativistic profiles that can be considered a sign of either a warped Bardeen-Petterson accretion disk or a double black hole. We compare our results on NGC 1194 with two other megamaser warped-disk candidates, NGC 1068 and NGC 4258, to trace other properties that may be typical of AGNs with warped accretion disks. To finally confirm or disprove the double black-hole hypothesis, further observations of the iron lines and of the evolution of their shape with time are necessary. Based on observations made with INTEGRAL, XMM-Newton, Swift, and Suzaku.
Energy Technology Data Exchange (ETDEWEB)
Chatziioannou, A.; Qi, J.; Moore, A.; Annala, A.; Nguyen, K.; Leahy, R.M.; Cherry, S.R.
2000-01-01
We have evaluated the performance of two three-dimensional reconstruction algorithms with data acquired from microPET, a high-resolution tomograph dedicated to small animal imaging. The first was a linear filtered-backprojection algorithm (FBP) with reprojection of the missing data and the second was a statistical maximum a posteriori probability algorithm (MAP). The two algorithms were evaluated in terms of their resolution performance, both in phantoms and in vivo. Sixty independent realizations of a phantom simulating the brain of a baby monkey were acquired, each containing 3 million counts. Each of these realizations was reconstructed independently with both algorithms. The ensemble of the sixty reconstructed realizations was used to estimate the standard deviation as a measure of the noise for each reconstruction algorithm. More detail was recovered in the MAP reconstruction without an increase in noise relative to FBP. Studies in a simple cylindrical compartment phantom demonstrated improved recovery of known activity ratios with MAP. Finally, in vivo studies also demonstrated a clear improvement in spatial resolution using the MAP algorithm. The quantitative accuracy of the MAP reconstruction was also evaluated by comparison with autoradiography and direct well counting of tissue samples and was shown to be superior.
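As a toy illustration of the statistical side of this comparison, the sketch below implements the basic MLEM update for Poisson count data; a MAP algorithm of the kind evaluated here adds a prior (penalty) term, and with a flat prior MAP reduces to this iteration. The tiny system matrix in the test is invented for illustration.

```python
import numpy as np

def mlem(A, y, n_iter=100):
    """Maximum-likelihood EM reconstruction for Poisson data y ~ A @ x.

    A: (n_detectors, n_voxels) system matrix with nonnegative entries.
    y: measured counts. Returns the estimated emission image x.
    """
    x = np.ones(A.shape[1])          # strictly positive starting image
    sens = A.sum(axis=0)             # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                              # forward projection
        ratio = y / np.maximum(proj, 1e-12)       # measured / estimated
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)  # multiplicative update
    return x
```

The multiplicative form keeps the image nonnegative at every iteration, one reason EM-type methods are attractive for low-count PET data.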
Ueno, Hiroki; Urasaki, Naoya; Natsume, Satoshi; Yoshida, Kentaro; Tarora, Kazuhiko; Shudo, Ayano; Terauchi, Ryohei; Matsumura, Hideo
2015-04-01
The sex type of papaya (Carica papaya) is determined by the pair of sex chromosomes (XX, female; XY, male; and XY(h), hermaphrodite), in which there is a non-recombining genomic region in the Y and Y(h) chromosomes. This region is presumed to be involved in determination of males and hermaphrodites; it is designated as the male-specific region in the Y chromosome (MSY) and the hermaphrodite-specific region in the Y(h) chromosome (HSY). Here, we identified the genes determining male and hermaphrodite sex types by comparing MSY and HSY genomic sequences. In the MSY and HSY genomic regions, we identified 14,528 nucleotide substitutions and 965 short indels with a large gap and two highly diverged regions. In the predicted genes expressed in flower buds, we found no nucleotide differences leading to amino acid changes between the MSY and HSY. However, we found an HSY-specific transposon insertion in a gene (SVP like) showing a similarity to the Short Vegetative Phase (SVP) gene. Study of SVP-like transcripts revealed that the MSY allele encoded an intact protein, while the HSY allele encoded a truncated protein. Our findings demonstrated that the SVP-like gene is a candidate gene for male-hermaphrodite determination in papaya.
International Nuclear Information System (INIS)
Work in the respective areas included assessment of conditions related to sinkhole development. Information collected and assessed involved geology, hydrogeology, land use, lineaments and linear trends, identification of karst features and zones, and inventory of historical sinkhole development and type. Karstification of the candidate, Rhea County, and Morristown study areas, in comparison to other karst areas in Tennessee, can be classified informally as youthful, submature, and mature, respectively. Historical sinkhole development in the more karstified areas is attributed to the greater degree of structural deformation by faulting and fracturing, subsequent solutioning of bedrock, thinness of residuum, and degree of development by man. Sinkhole triggering mechanisms identified are progressive solution of bedrock, water-level fluctuations, piping, and loading. 68 refs., 18 figs., 11 tabs
Humeau-Heurtier, Anne; Mahé, Guillaume; Abraham, Pierre
2015-12-01
Laser speckle contrast imaging (LSCI) enables a noninvasive monitoring of microvascular perfusion. Some studies have proposed to extract information from LSCI data through their multiscale entropy (MSE). However, for reaching a large range of scales, the original MSE algorithm may require long recordings for reliability. Recently, a novel approach to compute MSE with shorter data sets has been proposed: the short-time MSE (sMSE). Our goal is to apply, for the first time, the sMSE algorithm to LSCI data and to compare results with those given by the original MSE. Moreover, we apply the original MSE algorithm on data of different lengths and compare results with those given by longer recordings. For this purpose, synthetic signals and 192 LSCI regions of interest (ROIs) of different sizes are processed. Our results show that the sMSE algorithm is valid to compute the MSE of LSCI data. Moreover, with time series shorter than those initially proposed, the sMSE and original MSE algorithms give results with no statistical difference from those of the original MSE algorithm with longer data sets. The minimal acceptable length depends on the ROI size. Comparisons of MSE from healthy and pathological subjects can be performed with shorter data sets than those proposed until now. PMID:26220209
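The coarse-graining step that distinguishes multiscale entropy from plain sample entropy can be sketched as follows. This is a minimal illustration of the original MSE recipe (not the sMSE variant); the template-counting convention used here is a common simplification, and the tolerance factor is an assumed typical value.

```python
import numpy as np

def coarse_grain(x, scale):
    """Average consecutive non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return np.asarray(x[:n * scale], dtype=float).reshape(n, scale).mean(axis=1)

def sample_entropy(x, m=2, r_factor=0.2):
    """Sample entropy: -log of the conditional probability that sequences
    matching for m points (within tolerance r) also match for m+1 points."""
    x = np.asarray(x, dtype=float)
    r = r_factor * x.std()
    def count_matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        # Chebyshev distance between every pair of templates.
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=2)
        return np.sum(d <= r) - len(templates)   # exclude self-matches
    b, a = count_matches(m), count_matches(m + 1)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def multiscale_entropy(x, scales):
    """MSE curve: sample entropy of each coarse-grained series."""
    return [sample_entropy(coarse_grain(x, s)) for s in scales]
```

Coarse-graining divides the usable length by the scale factor, which is exactly why large scales demand long recordings in the original algorithm.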
Comparison Of Hybrid Sorting Algorithms Implemented On Different Parallel Hardware Platforms
Directory of Open Access Journals (Sweden)
Dominik Zurek; Marcin Pietron; Maciej Wielgosz; Kazimierz Wiatr
2013-01-01
Sorting is a common problem in computer science. There are many well-known sorting algorithms designed for sequential execution on a single processor. Recently, hardware platforms have enabled the creation of widely parallel algorithms: standard processors consist of multiple cores, and hardware accelerators such as GPUs are available. Graphics cards, with their parallel architecture, offer new possibilities for speeding up many algorithms. In this paper we describe the results of implementing several different sorting algorithms on GPU cards and multicore processors. A hybrid algorithm is then presented which consists of parts executed on both platforms, standard CPU and GPU.
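The split/sort/merge structure of such a hybrid can be sketched on the CPU alone: chunks are sorted concurrently (standing in for the GPU stage) and then k-way merged on the host. The worker count and chunking strategy are illustrative choices, not the paper's implementation.

```python
import heapq
from concurrent.futures import ThreadPoolExecutor

def hybrid_sort(data, n_workers=4):
    """Split the input into chunks, sort each chunk concurrently (standing
    in for the accelerator stage of a hybrid sort), then k-way merge the
    sorted runs on the host."""
    if not data:
        return []
    size = max(1, len(data) // n_workers)
    chunks = [data[i:i + size] for i in range(0, len(data), size)]
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        runs = list(pool.map(sorted, chunks))    # independent chunk sorts
    return list(heapq.merge(*runs))              # final merge on the CPU
```

In a real CPU/GPU hybrid the chunk sorts would run as GPU kernels and only the merge would remain on the host, but the data flow is the same.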
Li, Zhaokun; Cao, Jingtai; Liu, Wei; Feng, Jianfeng; Zhao, Xiaohui
2015-03-01
Conventional adaptive optics systems that compensate atmospheric turbulence in free-space optical (FSO) communication systems rely on wavefront measurements from a Shack-Hartmann sensor (SH), which become unreliable under strong scintillation. Since wavefront-sensorless adaptive optics is a feasible alternative, we propose several swarm intelligence algorithms to compensate the wavefront aberration caused by atmospheric interference in FSO, and discuss the algorithm principles, basic flows, and simulation results. The numerical simulation experiments and result analysis show that, compared with the SPGD algorithm, the proposed algorithms can effectively restrain wavefront aberration and improve the convergence rate of the algorithms and the coupling efficiency of the receiver to a large extent.
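The SPGD baseline mentioned above perturbs all control channels simultaneously and steps along the estimated gradient of a scalar performance metric (such as coupling efficiency). A minimal sketch, with assumed gain and perturbation values:

```python
import numpy as np

def spgd(metric, u0, gain=0.3, perturb=0.1, n_iter=500, seed=0):
    """Stochastic parallel gradient descent: apply random +/- perturbations
    to all control channels at once and step along the estimated gradient
    of the performance metric (to be maximized)."""
    rng = np.random.default_rng(seed)
    u = np.asarray(u0, dtype=float)
    for _ in range(n_iter):
        delta = perturb * rng.choice([-1.0, 1.0], size=u.shape)
        dJ = metric(u + delta) - metric(u - delta)  # two-sided estimate
        u += gain * dJ * delta                      # gradient ascent step
    return u
```

Because SPGD needs only metric evaluations, not wavefront measurements, it is the natural baseline against which wavefront-sensorless swarm methods are compared.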
Mosconi, E; Sima, D M; Osorio Garcia, M I; Fontanella, M; Fiorini, S; Van Huffel, S; Marzola, P
2014-04-01
Proton magnetic resonance spectroscopy (MRS) is a sensitive method for investigating the biochemical compounds in a tissue. The interpretation of the data relies on the quantification algorithms applied to MR spectra. Each of these algorithms has certain underlying assumptions and may allow one to incorporate prior knowledge, which could influence the quality of the fit. The most commonly considered types of prior knowledge include the line-shape model (Lorentzian, Gaussian, Voigt), knowledge of the resonating frequencies, modeling of the baseline, constraints on the damping factors and phase, etc. In this article, we study whether the statistical outcome of a biological investigation can be influenced by the quantification method used. We chose to study lipid signals because of their emerging role in the investigation of metabolic disorders. Lipid spectra, in particular, are characterized by peaks that are in most cases not Lorentzian, because measurements are often performed in difficult body locations, e.g. in visceral fats close to peristaltic movements in humans or very small areas close to different tissues in animals. This leads to spectra with several peak distortions. Linear combination of Model spectra (LCModel), Advanced Method for Accurate Robust and Efficient Spectral fitting (AMARES), quantitation based on QUantum ESTimation (QUEST), Automated Quantification of Short Echo-time MRS (AQSES)-Lineshape and Integration were applied to simulated spectra, and area under the curve (AUC) values, which are proportional to the quantity of the resonating molecules in the tissue, were compared with true values. A comparison between techniques was also carried out on lipid signals from obese and lean Zucker rats, for which the polyunsaturation value expressed in white adipose tissue should be statistically different, as confirmed by high-resolution NMR measurements (considered the gold standard) on the same animals. LCModel, AQSES-Lineshape, QUEST and Integration
A comparison of kinematic algorithms to estimate gait events during overground running.
Smith, Laura; Preece, Stephen; Mason, Duncan; Bramah, Christopher
2015-01-01
The gait cycle is frequently divided into two distinct phases, stance and swing, which can be accurately determined from ground reaction force data. In the absence of such data, kinematic algorithms can be used to estimate footstrike and toe-off. The performance of previously published algorithms is not consistent between studies. Furthermore, previous algorithms have not been tested at higher running speeds nor used to estimate ground contact times. Therefore, the purpose of this study was to both develop a new, custom-designed, event detection algorithm and compare its performance with four previously tested algorithms at higher running speeds. Kinematic and force data were collected on twenty runners during overground running at 5.6 m/s. The five algorithms were then implemented, and the estimated times for footstrike, toe-off and contact time were compared to ground reaction force data. There were large differences in the performance of each algorithm. The custom-designed algorithm provided the most accurate estimation of footstrike (True Error 1.2 ± 17.1 ms) and contact time (True Error 3.5 ± 18.2 ms). Compared to the other tested algorithms, the custom-designed algorithm provided an accurate estimation of footstrike and toe-off across different footstrike patterns. The custom-designed algorithm provides a simple but effective method to accurately estimate footstrike, toe-off and contact time from kinematic data.
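The ground-reaction-force reference against which such kinematic algorithms are scored can be sketched as a simple threshold detector. The 20 N threshold and the synthetic force profile in the test are assumptions for illustration, not the study's values.

```python
import numpy as np

def grf_events(fz, fs, threshold=20.0):
    """Reference footstrike and toe-off from vertical ground reaction force:
    the first and last samples above a small force threshold.

    fz: vertical GRF samples (N); fs: sampling rate (Hz).
    Returns (footstrike_s, toeoff_s, contact_time_s).
    """
    idx = np.flatnonzero(fz > threshold)     # samples within stance
    footstrike = idx[0] / fs
    toeoff = idx[-1] / fs
    return footstrike, toeoff, toeoff - footstrike
```

Kinematic algorithms are then judged by the difference ("true error") between their event estimates and these force-derived times.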
Directory of Open Access Journals (Sweden)
Li Li
2012-07-01
Abstract Background: Several biclustering algorithms have been proposed to identify biclusters, in which genes share similar expression patterns across a number of conditions. However, different algorithms yield different biclusters and can lead to distinct conclusions. Therefore, testing and comparisons between these algorithms are strongly required. Methods: In this study, five biclustering algorithms (BIMAX, FABIA, ISA, QUBIC and SAMBA) were compared with each other by using them to handle two expression datasets (GDS1620 and pathway) with different dimensions in Arabidopsis thaliana (A. thaliana). GO (gene ontology) annotation and PPI (protein-protein interaction) networks were used to verify the corresponding biological significance of biclusters from the five algorithms. To compare the algorithms' performance and evaluate the quality of identified biclusters, two scoring methods, namely weighted enrichment (WE) scoring and PPI scoring, were proposed in our study. For each dataset, after combining the scores of all biclusters into one unified ranking, we could evaluate the performance and behavior of the five biclustering algorithms in a better way. Results: Both WE and PPI scoring methods have been proved effective for validating the biological significance of the biclusters, and a significantly positive correlation between the two sets of scores has been tested to demonstrate the consistency of the two methods. A comparative study of the above five algorithms has revealed that: (1) ISA is the most effective among the five algorithms on the GDS1620 dataset, and BIMAX outperforms the other algorithms on the pathway dataset. (2) Both ISA and BIMAX are data-dependent. The former does not work well on datasets with few genes, while the latter holds up well for datasets with more conditions. (3) FABIA and QUBIC perform poorly in this study and they may be suitable to large datasets with more genes and
Kim, Ye-seul; Park, Hye-suk; Lee, Haeng-Hwa; Choi, Young-Wook; Choi, Jae-Gu; Kim, Hak Hee; Kim, Hee-Joung
2016-02-01
Digital breast tomosynthesis (DBT) is a recently developed system for three-dimensional imaging that offers the potential to reduce the false positives of mammography by preventing tissue overlap. Many qualitative evaluations of digital breast tomosynthesis were previously performed by using a phantom with an unrealistic model and with heterogeneous background and noise, which is not representative of real breasts. The purpose of the present work was to compare reconstruction algorithms for DBT by using various breast phantoms; validation was also performed by using patient images. DBT was performed by using a prototype unit that was optimized for very low exposures and rapid readout. Three algorithms were compared: a back-projection (BP) algorithm, a filtered BP (FBP) algorithm, and an iterative expectation maximization (EM) algorithm. To compare the algorithms, three types of breast phantoms (homogeneous background phantom, heterogeneous background phantom, and anthropomorphic breast phantom) were evaluated, and clinical images were also reconstructed by using the different reconstruction algorithms. The in-plane image quality was evaluated based on the line profile and the contrast-to-noise ratio (CNR), and out-of-plane artifacts were evaluated by means of the artifact spread function (ASF). Parenchymal texture features of contrast and homogeneity were computed based on reconstructed images of an anthropomorphic breast phantom. The clinical images were studied to validate the effect of reconstruction algorithms. The results showed that the CNRs of masses reconstructed by using the EM algorithm were slightly higher than those obtained by using the BP algorithm, whereas the FBP algorithm yielded much lower CNR due to its high fluctuations of background noise. The FBP algorithm provides the best conspicuity for larger calcifications by enhancing their contrast and sharpness more than the other algorithms; however, in the case of small-size and low
Korean Medication Algorithm for Bipolar Disorder 2014: comparisons with other treatment guidelines
Directory of Open Access Journals (Sweden)
Jeong JH
2015-06-01
with MS or AAP for dysphoric/psychotic mania. Aripiprazole, olanzapine, quetiapine, and risperidone were the first-line AAPs in nearly all of the phases of bipolar disorder across the guidelines. Most guidelines advocated newer AAPs as first-line treatment options in all phases, and lamotrigine in depressive and maintenance phases. Lithium and valproic acid were commonly used as MSs in all phases of bipolar disorder. As research evidence accumulated over time, recommendations of newer AAPs – such as asenapine, paliperidone, lurasidone, and long-acting injectable risperidone – became prominent. This comparison identifies that the treatment recommendations of the KMAP-BP 2014 are similar to those of other treatment guidelines and reflect current changes in prescription patterns for bipolar disorder based on accumulated research data. Further studies are needed to address several issues identified in our review. Keywords: bipolar disorder, pharmacotherapy, treatment algorithm, guideline comparison, KMAP-2014
Li, Borui; Mu, Chundi; WANG, Tao; Peng, Qian
2014-01-01
This is a revised version of our paper published in the Journal of Convergence Information Technology (JCIT): "Comparison of Feature Point Extraction Algorithms for Vision Based Autonomous Aerial Refueling". We corrected some errors, including measurement unit errors, spelling errors, and so on. Since published papers in JCIT are not allowed to be modified, we submit the revised version to arXiv.org to make the paper more rigorous and to avoid confusing other researchers.
Berlich, Rüdiger; Kunze, Marcel
1997-02-01
The Supervised Growing Neural Gas algorithm (SGNG) provides an interesting alternative to standard Multi-Layer Perceptrons (MLP). A comparison is drawn between the performance of SGNG and MLP in the domain of function mapping. A further field of interest is classification power, which has been investigated with real data taken by PS197 at CERN. The characteristics of the two network models will be discussed from a practical point of view as well as their advantages and disadvantages.
Binnicker, Matthew J.; Jespersen, Deborah J.; Rollins, Leonard O.
2012-01-01
We describe the first direct comparison of the reverse and traditional syphilis screening algorithms in a population with a low prevalence of syphilis. Among 1,000 patients tested, the results for 6 patients were falsely reactive by reverse screening, compared to none by traditional testing. However, reverse screening identified 2 patients with possible latent syphilis that were not detected by rapid plasma reagin (RPR).
Comparison of New Tau PET-Tracer Candidates With [18F]T808 and [18F]T807.
Declercq, Lieven; Celen, Sofie; Lecina, Joan; Ahamed, Muneer; Tousseyn, Thomas; Moechars, Diederik; Alcazar, Jesus; Ariza, Manuela; Fierens, Katleen; Bottelbergs, Astrid; Mariën, Jonas; Vandenberghe, Rik; Andres, Ignacio Jose; Van Laere, Koen; Verbruggen, Alfons; Bormans, Guy
2016-01-01
Early clinical results of two tau tracers, [(18)F]T808 and [(18)F]T807, have recently been reported. In the present study, biodistribution, radiometabolite quantification, and competition-binding studies were performed in order to acquire comparative preclinical data as well as to establish the value of T808 and T807 as benchmark compounds for assessment of the binding affinities of eight new/other tau tracers. Biodistribution studies in mice showed high brain uptake and fast washout. In vivo radiometabolite analysis using high-performance liquid chromatography showed the presence of polar radiometabolites in plasma and brain. No specific binding of [(18)F]T808 was found in transgenic mice expressing mutant human P301L tau. In semiquantitative autoradiography studies on human Alzheimer disease slices, we observed more than 50% tau-selective blocking of [(18)F]T808 in the presence of 1 µmol/L of the novel ligands. This study provides a straightforward comparison of the binding affinity and selectivity for tau of the reported radiolabeled tracers BF-158, BF-170, THK5105, lansoprazole, astemizole, and novel tau positron emission tomography ligands against T807 and T808. Therefore, these data are helpful to identify structural requirements for selective interaction with tau and to compare the performance of new highly selective and specific radiolabeled tau tracers. PMID:27030397
Antoniucci, S; Causi, G Li; Lorenzetti, D
2014-01-01
Aiming at statistically studying the variability in the mid-IR of young stellar objects (YSOs), we have compared the 3.6, 4.5, and 24 µm Spitzer fluxes of 1478 sources belonging to the C2D (Cores to Disks) legacy program with the WISE fluxes at 3.4, 4.6, and 22 µm. From this comparison we have selected a robust sample of 34 variable sources. Their variations were classified per spectral Class (according to the widely accepted scheme of Class I/flat/II/III protostars) and per star-forming region. On average, the number of variable sources decreases with increasing Class and is definitely higher in Perseus and Ophiuchus than in Chamaeleon and Lupus. According to the Class evolution paradigm, the photometric variability can be considered a feature more pronounced in less evolved protostars and, as such, related to accretion processes. Moreover, our statistical findings agree with the current knowledge of the star formation activity in different regions. The 34 selected variables were further investigate...
An Improved Chaotic Bat Algorithm for Solving Integer Programming Problems
Directory of Open Access Journals (Sweden)
Osama Abdel Raouf
2014-08-01
Bat Algorithm is a recently developed method in the field of computational intelligence. In this paper an improved version of a bat meta-heuristic algorithm (IBACH) is presented for solving integer programming problems. The proposed algorithm uses chaotic behaviour to generate candidate solutions, in a manner similar to acoustic monophony. Numerical results show that the IBACH is able to obtain the optimal results in comparison to traditional methods (branch and bound), the particle swarm optimization algorithm (PSO), the standard bat algorithm and other harmony search algorithms. However, the benefit of the proposed algorithm is its ability to obtain the optimal solution with less computation, which saves time in comparison with the branch and bound algorithm (an exact solution method).
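A minimal sketch of a bat-style search with a chaotic (logistic) map substituted for one of the random draws is shown below. It is a simplified illustration of the idea, not the IBACH algorithm itself; all parameter values are assumed, and integrality is handled by simple rounding.

```python
import numpy as np

def chaotic_bat(f, dim=3, n_bats=20, n_iter=200, lo=-10, hi=10, seed=1):
    """Simplified bat-style search for integer programming: continuous bat
    dynamics with a logistic chaotic map driving the frequency draws, and
    rounding to the nearest integer before evaluating the objective f."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_bats, dim))
    v = np.zeros((n_bats, dim))
    fit = np.array([f(np.rint(xi)) for xi in x])
    best = x[fit.argmin()].copy()
    c = 0.7                                # chaotic map state in (0, 1)
    for _ in range(n_iter):
        for i in range(n_bats):
            c = 4.0 * c * (1.0 - c)        # logistic map replaces rand()
            freq = c                       # "frequency" in [0, 1]
            v[i] += (x[i] - best) * freq
            cand = np.clip(x[i] + v[i], lo, hi)
            if rng.random() < 0.5:         # local random walk around best
                cand = np.clip(best + 0.1 * rng.normal(size=dim), lo, hi)
            fc = f(np.rint(cand))
            if fc <= fit[i]:               # greedy acceptance
                x[i], fit[i] = cand, fc
                if fc < f(np.rint(best)):
                    best = cand.copy()
    return np.rint(best), f(np.rint(best))
```

The deterministic chaotic sequence gives a different (often better-spread) exploration pattern than independent uniform draws, which is the premise behind chaotic variants of swarm algorithms.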
Performance Comparison of Known ICA Algorithms to a Wavelet-ICA Merger
Janett Walters-Williams, Yan Li
2011-01-01
EEG signals are, however, contaminated with artifacts, which must be removed to obtain pure EEG signals. These artifacts can be removed by Independent Component Analysis (ICA). In this paper we studied the performance of three ICA algorithms (FastICA, JADE, and Radical) as well as our newly developed ICA technique. Comparing these ICA algorithms, we observed that our new technique performs as well as these algorithms at denoising EEG signals.
Jie TANG; Nett, Brian E; Chen, Guang-Hong
2009-01-01
Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical rec...
Directory of Open Access Journals (Sweden)
Rajeswari Sridhar
2010-07-01
In this work we compare two indexing algorithms that have been used to index and retrieve Carnatic music songs. We compare a modified version of the dual ternary indexing algorithm for music indexing and retrieval with the multi-key hashing indexing algorithm proposed by us. The modification of the dual ternary algorithm was essential to handle variable-length query phrases and to accommodate features specific to Carnatic music. The dual ternary indexing algorithm is adapted for Carnatic music by segmenting with the segmentation technique for Carnatic music. The dual ternary algorithm is compared with the multi-key hashing algorithm designed by us for indexing and retrieval, in which features such as MFCC, spectral flux, melody string and spectral centroid are used for indexing data into a hash table. The way collision resolution is handled by this hash table differs from normal hash table approaches. It was observed that multi-key hashing based retrieval had a lower time complexity than dual-ternary based indexing. The algorithms were also compared for precision and recall, with multi-key hashing showing better recall than modified dual ternary indexing for the sample data considered.
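A multi-key feature index of the kind described can be sketched as a hash table keyed by a tuple of quantized features, with colliding items chained in buckets. The bin width and two-feature layout are illustrative assumptions, not the authors' design.

```python
from collections import defaultdict

class MultiKeyIndex:
    """Toy multi-key feature index: each item is keyed by a tuple of
    quantized features (standing in for MFCC, spectral flux, centroid, ...);
    collisions are resolved by chaining items in the same bucket."""

    def __init__(self, bin_width=0.5):
        self.bin_width = bin_width
        self.buckets = defaultdict(list)

    def _key(self, features):
        # Quantize each feature so that nearby values share a bucket.
        return tuple(int(f // self.bin_width) for f in features)

    def add(self, song_id, features):
        self.buckets[self._key(features)].append(song_id)

    def query(self, features):
        return list(self.buckets.get(self._key(features), []))
```

Chaining similar songs under one quantized key is what makes retrieval a constant-time bucket lookup rather than a scan, at the cost of boundary effects between adjacent bins.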
Guo, Liyong; Yan, Zhiqiang; Zheng, Xiliang; Hu, Liang; Yang, Yongliang; Wang, Jin
2014-07-01
In protein-ligand docking, an optimization algorithm is used to find the best binding pose of a ligand against a protein target. This algorithm plays a vital role in determining the docking accuracy. To evaluate the relative performance of different optimization algorithms and provide guidance for real applications, we performed a comparative study on six efficient optimization algorithms, containing two evolutionary algorithm (EA)-based optimizers (LGA, DockDE) and four particle swarm optimization (PSO)-based optimizers (SODock, varCPSO, varCPSO-ls, FIPSDock), which were implemented into the protein-ligand docking program AutoDock. We unified the objective functions by applying the same scoring function, and built a new fitness accuracy as the evaluation criterion that incorporates optimization accuracy, robustness, and efficiency. The varCPSO and varCPSO-ls algorithms show high efficiency with fast convergence speed. However, their accuracy is not optimal, as they cannot reach very low energies. SODock has the highest accuracy and robustness. In addition, SODock shows good performance in efficiency when optimizing drug-like ligands with less than ten rotatable bonds. FIPSDock shows excellent robustness and is close to SODock in accuracy and efficiency. In general, the four PSO-based algorithms show superior performance than the two EA-based algorithms, especially for highly flexible ligands. Our method can be regarded as a reference for the validation of new optimization algorithms in protein-ligand docking.
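As a point of reference for the PSO-based optimizers above, a plain global-best PSO (much simpler than SODock, varCPSO, or FIPSDock) looks like the sketch below. The inertia and acceleration coefficients are common textbook values, and the sphere function in the test merely stands in for a docking scoring function.

```python
import numpy as np

def pso(score, dim=2, n_particles=30, n_iter=200, lo=-5.0, hi=5.0, seed=0):
    """Global-best particle swarm optimization minimizing `score`.
    Each particle tracks its personal best; all particles are attracted
    to the swarm's global best."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros((n_particles, dim))
    pbest, pbest_val = x.copy(), np.array([score(p) for p in x])
    g = pbest[pbest_val.argmin()].copy()
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia, cognitive, social
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, dim))
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.array([score(p) for p in x])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        g = pbest[pbest_val.argmin()].copy()
    return g, pbest_val.min()
```

In docking, the particle coordinates would encode the ligand pose (translation, orientation, torsions) and `score` would be the docking energy function.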
Comparison of strapdown inertial navigation algorithm based on rotation vector and dual quaternion
Institute of Scientific and Technical Information of China (English)
Wang Zhenhuan; Chen Xijun; Zeng Qingshuang
2013-01-01
For the navigation algorithm of the strapdown inertial navigation system, by comparing the equations of the dual quaternion and the quaternion, the superiority in accuracy of the attitude algorithm based on the dual quaternion over those based on the rotation vector is analyzed for the case of a rotating navigation frame. By comparing the update algorithm of the gravitational velocity in the dual quaternion solution with the compensation algorithm for the harmful acceleration in the traditional velocity solution, the accuracy advantage of the gravitational velocity based on the dual quaternion is addressed. Following the idea of the attitude and velocity algorithms based on the dual quaternion, an improved navigation algorithm is proposed that matches the rotation vector algorithm in computational complexity. With this method, the attitude quaternion does not require compensation as the navigation frame rotates. To verify the correctness of the theoretical analysis, simulations were carried out, and the results show that the accuracy of the improved algorithm is approximately equal to that of the dual quaternion algorithm.
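The rotation-vector attitude update that the dual-quaternion algorithm is compared against can be sketched as a quaternion composition step: each gyro increment is converted to a delta quaternion and composed with the current attitude. This is a textbook single-axis illustration, not the paper's full navigation algorithm.

```python
import numpy as np

def quat_mul(a, b):
    """Hamilton product of quaternions in [w, x, y, z] order."""
    w1, x1, y1, z1 = a
    w2, x2, y2, z2 = b
    return np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])

def attitude_update(q, phi):
    """One strapdown attitude step: convert the body rotation vector `phi`
    (rad, from integrated gyro increments) to a delta quaternion and
    compose it with the current attitude quaternion `q`."""
    angle = np.linalg.norm(phi)
    if angle < 1e-12:
        dq = np.array([1.0, 0.0, 0.0, 0.0])   # no measurable rotation
    else:
        axis = phi / angle
        dq = np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))
    q = quat_mul(q, dq)
    return q / np.linalg.norm(q)              # renormalize to unit length
```

The dual-quaternion formulation extends this idea to carry translation and rotation in a single algebraic object, which is where the accuracy argument in the abstract comes from.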
Comparison of several algorithms of the electric force calculation in particle plasma models
International Nuclear Information System (INIS)
This work is devoted to plasma modelling using the technique of molecular dynamics. The crucial problem in most such models is the efficient calculation of the electric force. This is usually solved with the particle-in-cell (PIC) algorithm. However, PIC is an approximative algorithm, as it underestimates the short-range interactions of charged particles. We propose a hybrid algorithm which adds these interactions to PIC. We then include this algorithm in a set of algorithms which we test against each other in a two-dimensional collisionless magnetized plasma model. Besides our hybrid algorithm, this set includes two variants of pure PIC and the direct application of Coulomb's law. We compare particle forces, particle trajectories, total energy conservation and the speed of the algorithms. We find that the hybrid algorithm can be a good replacement for the direct application of Coulomb's law (quite accurate and much faster). It is, however, probably unnecessary in practical 2D models.
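The direct application of Coulomb's law used as the accuracy reference in this comparison is an O(N²) pairwise sum. A 2D sketch, with an assumed softening constant to avoid singularities at very small separations:

```python
import numpy as np

def coulomb_forces(pos, q, k=1.0, soft=1e-6):
    """Direct pairwise Coulomb forces in 2D: the O(N^2) 'exact' reference
    that PIC approximates. `soft` is a small softening term added to r^2.

    pos: (N, 2) particle positions; q: (N,) charges.
    Returns the (N, 2) force on each particle.
    """
    d = pos[:, None, :] - pos[None, :, :]      # r_i - r_j for all pairs
    r2 = np.sum(d * d, axis=-1) + soft
    np.fill_diagonal(r2, np.inf)               # exclude self-interaction
    inv_r3 = r2 ** -1.5
    # F_i = sum_j k q_i q_j (r_i - r_j) / |r_ij|^3
    f = k * (q[:, None] * q[None, :] * inv_r3)[..., None] * d
    return f.sum(axis=1)
```

The O(N²) cost of this sum is precisely what PIC avoids by depositing charge on a grid, at the price of smoothing away the short-range pair forces the abstract discusses.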
Banks, H Thomas; Hu, Shuhua; Joyner, Michele; Broido, Anna; Canter, Brandi; Gayvert, Kaitlyn; Link, Kathryn
2012-07-01
In this paper, we investigate three particular algorithms: a stochastic simulation algorithm (SSA), and explicit and implicit tau-leaping algorithms. To compare these methods, we used them to analyze two infection models: a vancomycin-resistant enterococcus (VRE) infection model at the population level, and a human immunodeficiency virus (HIV) within-host infection model. While the first has a low species count and few transitions, the second is more complex, with a comparable number of species involved. The relative efficiency of each algorithm is determined based on computational time and the degree of precision required. The numerical results suggest that all three algorithms have similar computational efficiency for the simpler VRE model, and the SSA is the best choice due to its simplicity and accuracy. In addition, we have found that with the larger and more complex HIV model, implementation and modification of tau-leaping methods are preferred.
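The SSA referred to above (Gillespie's direct method) can be sketched for a toy birth-death process, a stand-in for the population-level model; the rate constants in the test are invented for illustration.

```python
import numpy as np

def ssa_birth_death(x0, birth, death, t_end, seed=0):
    """Gillespie SSA for a birth-death process: draw an exponential waiting
    time from the total propensity, then pick one reaction proportionally
    to its rate, and repeat until t_end or extinction."""
    rng = np.random.default_rng(seed)
    t, x = 0.0, x0
    times, states = [t], [x]
    while t < t_end:
        rates = np.array([birth * x, death * x])   # reaction propensities
        total = rates.sum()
        if total == 0:
            break                                   # absorbed at extinction
        t += rng.exponential(1.0 / total)           # time to next reaction
        x += 1 if rng.random() < rates[0] / total else -1
        times.append(t)
        states.append(x)
    return np.array(times), np.array(states)
```

Tau-leaping accelerates exactly this loop by firing many reactions per (larger) time step instead of one, trading exactness for speed on stiff or high-count systems such as the HIV model.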
DEFF Research Database (Denmark)
Nica, Florin Valentin Traian; Ritchie, Ewen; Leban, Krisztina Monika
2013-01-01
Nowadays the requirements imposed by industry and the economy call for better quality and performance while the price must be kept in the same range. To achieve this goal, optimization must be introduced into the design process. Two of the best-known optimization algorithms for machine design, the genetic algorithm and particle swarm optimization, are briefly presented in this paper. These two algorithms are tested to determine their performance on five different benchmark test functions. The algorithms are tested against three requirements: precision of the result, number of iterations and calculation time. Both algorithms are also tested on the analytical design process of a Transverse Flux Permanent Magnet Generator to observe their performance in an electrical machine design application.
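A bare-bones particle swarm optimizer of the kind benchmarked above can be sketched as follows. The swarm size, inertia weight, and acceleration coefficients are illustrative defaults, not the settings used in the paper, and the sphere function stands in for one of the benchmark test functions:

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=1):
    """Bare-bones particle swarm optimization (illustrative parameters)."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.7, 1.5, 1.5              # inertia and acceleration coefficients
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:         # update personal and global bests
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

sphere = lambda x: sum(v * v for v in x)   # a standard benchmark test function
best, best_val = pso(sphere, dim=3)
```

Precision, iteration count, and wall-clock time, the three criteria named above, can all be read off or instrumented from this loop.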
Tang, Y.; Reed, P.; Wagner, T.
2005-12-01
This study provides the first comprehensive assessment of state-of-the-art evolutionary multiobjective optimization (EMO) tools' relative effectiveness in calibrating integrated hydrologic models. The relative computational efficiency, accuracy, and ease-of-use of the following EMO algorithms are tested: the Epsilon-Dominance Nondominated Sorted Genetic Algorithm-II (ε-NSGAII), the Multiobjective Shuffled Complex Evolution Metropolis algorithm (MOSCEM-UA), and the Strength Pareto Evolutionary Algorithm 2 (SPEA2). This study assesses the performance of these three evolutionary multiobjective algorithms using a formal metrics-based methodology, in two phases of testing. In the first phase, a suite of standard computer science test problems is used to validate the algorithms' abilities to perform global search effectively, efficiently, and reliably. The second phase compares the algorithms' performance on a computationally intensive multiobjective calibration of an integrated hydrologic model for the Shale Hills watershed, located within the Valley and Ridge province of the Susquehanna River Basin in north central Pennsylvania. The Shale Hills test case demonstrates the computational challenges posed by the paradigmatic shift in environmental and water resources simulation tools towards highly nonlinear physical models that seek to holistically simulate the water cycle. Specifically, the Shale Hills test case is an excellent test for the three EMO algorithms due to the large number of continuous decision variables, the increased computational demands posed by simulating fully coupled hydrologic processes, and the highly multimodal nature of the search space. A challenge and contribution of this work is the development of a comprehensive methodology for comparing EMO algorithms that have different search operators and randomization techniques.
Comparison of Fully Numerical Predictor-Corrector and Apollo Skip Entry Guidance Algorithms
Brunner, Christopher W.; Lu, Ping
2012-09-01
The dramatic increase in computational power since the Apollo program has enabled the development of numerical predictor-corrector (NPC) entry guidance algorithms that allow on-board accurate determination of a vehicle's trajectory. These algorithms are sufficiently mature to be flown. They are highly adaptive, especially in the face of extreme dispersion and off-nominal situations compared with reference-trajectory following algorithms. The performance and reliability of entry guidance are critical to mission success. This paper compares the performance of a recently developed fully numerical predictor-corrector entry guidance (FNPEG) algorithm with that of the Apollo skip entry guidance. Through extensive dispersion testing, it is clearly demonstrated that the Apollo skip entry guidance algorithm would be inadequate in meeting the landing precision requirement for missions with medium (4000-7000 km) and long (>7000 km) downrange capability requirements under moderate dispersions chiefly due to poor modeling of atmospheric drag. In the presence of large dispersions, a significant number of failures occur even for short-range missions due to the deviation from planned reference trajectories. The FNPEG algorithm, on the other hand, is able to ensure high landing precision in all cases tested. All factors considered, a strong case is made for adopting fully numerical algorithms for future skip entry missions.
Energy Technology Data Exchange (ETDEWEB)
Gotway, C.A. [Nebraska Univ., Lincoln, NE (United States). Dept. of Biometry; Rutherford, B.M. [Sandia National Labs., Albuquerque, NM (United States)
1993-09-01
Stochastic simulation has been suggested as a viable method for characterizing the uncertainty associated with the prediction of a nonlinear function of a spatially-varying parameter. Geostatistical simulation algorithms generate realizations of a random field with specified statistical and geostatistical properties. A nonlinear function is evaluated over each realization to obtain an uncertainty distribution of a system response that reflects the spatial variability and uncertainty in the parameter. Crucial management decisions, such as potential regulatory compliance of proposed nuclear waste facilities and optimal allocation of resources in environmental remediation, are based on the resulting system response uncertainty distribution. Many geostatistical simulation algorithms have been developed to generate the random fields, and each algorithm will produce fields with different statistical properties. These different properties will result in different distributions for system response, and potentially, different managerial decisions. The statistical properties of the resulting system response distributions are not completely understood, nor is the ability of the various algorithms to generate response distributions that adequately reflect the associated uncertainty. This paper reviews several of the algorithms available for generating random fields. Algorithms are compared in a designed experiment using seven exhaustive data sets with different statistical and geostatistical properties. For each exhaustive data set, a number of realizations are generated using each simulation algorithm. The realizations are used with each of several deterministic transfer functions to produce a cumulative uncertainty distribution function of a system response. The uncertainty distributions are then compared to the single value obtained from the corresponding exhaustive data set.
Directory of Open Access Journals (Sweden)
Rajeswari Sridhar
2010-07-01
In this work we have compared two indexing algorithms that have been used to index and retrieve Carnatic music songs. We have compared a modified version of the dual ternary indexing algorithm for music indexing and retrieval with the multi-key hashing indexing algorithm proposed by us. The modification in the dual ternary algorithm was essential to handle variable-length query phrases and to accommodate features specific to Carnatic music. The dual ternary indexing algorithm is adapted for Carnatic music by segmenting using the segmentation technique for Carnatic music. The dual ternary algorithm is compared with the multi-key hashing algorithm designed by us for indexing and retrieval, in which features like MFCC, spectral flux, melody string and spectral centroid are used as features for indexing data into a hash table. The way in which collision resolution is handled by this hash table differs from the usual hash table approaches. It was observed that multi-key hashing based retrieval had a lower time complexity than dual ternary based indexing. The algorithms were also compared for their precision and recall, in which multi-key hashing had a better recall than modified dual ternary indexing for the sample data considered.
Dai, Chenyun; Li, Yejin; Christie, Anita; Bonato, Paolo; McGill, Kevin C; Clancy, Edward A
2015-01-01
The reliability of clinical and scientific information provided by algorithms that automatically decompose the electromyogram (EMG) depends on the algorithms' accuracy. We used experimental and simulated data to assess the agreement and accuracy of three publicly available decomposition algorithms: EMGlab (McGill, 2005) (single channel data only), Fuzzy Expert (Erim and Lim, 2008) and Montreal (Florestal, 2009). Data consisted of quadrifilar needle EMGs from the tibialis anterior of 12 subjects at 10%, 20% and 50% maximum voluntary contraction (MVC); single channel needle EMGs from the biceps brachii of 10 controls and 10 patients during contractions just above threshold; and matched simulated data. Performance was assessed via agreement between pairs of algorithms for experimental data and accuracy with respect to the known decomposition for simulated data. For the quadrifilar experimental data, median agreements between the Montreal and Fuzzy Expert algorithms at 10%, 20%, and 50% MVC were 95%, 86%, and 64%, respectively. For the single channel control and patient data, median agreements between the three algorithm pairs were statistically similar at ∼ 97% and ∼ 92%, respectively. Accuracy on the simulated data exceeded this performance. Agreement/accuracy was strongly related to the Decomposability Index (Florestal, 2009). When agreement was high between algorithm pairs applied to simulated data, so was accuracy. PMID:24876131
Comparison of two approximal proximal point algorithms for monotone variational inequalities
Institute of Scientific and Technical Information of China (English)
TAO Min
2007-01-01
Proximal point algorithms (PPA) are attractive methods for solving monotone variational inequalities (MVI). Since solving the sub-problem exactly in each iteration is costly or sometimes impossible, various approximate versions of PPA (APPA) have been developed for practical applications. In this paper, we compare two APPA methods, both of which can be viewed as prediction-correction methods. The only difference is that they use different search directions in the correction step. By extending the general forward-backward splitting methods, we obtain Algorithm I; in the same way, Algorithm II is proposed by extending the general extra-gradient methods. Our analysis explains theoretically why Algorithm II usually outperforms Algorithm I. For computational practice, we consider a class of MVI with a special structure and choose the extended Algorithm II for implementation, inspired by the idea of the Gauss-Seidel iteration method of making full use of information about the latest iterate. In particular, self-adaptive techniques are adopted to adjust the relevant parameters for faster convergence. Finally, some numerical experiments are reported on the separated MVI. The numerical results show that the extended Algorithm II is feasible and easy to implement with a relatively low computational load.
COMPARISON PROCESS LONG EXECUTION BETWEEN PQ ALGORITHM AND NEW FUZZY LOGIC ALGORITHM FOR VOIP
Directory of Open Access Journals (Sweden)
Suardinata
2011-01-01
The transmission of voice over IP networks can generate network congestion due to weak supervision of incoming packet traffic, queuing and scheduling. This congestion negatively affects Quality of Service (QoS) metrics such as delay, packet drop and packet loss. Packet delay in turn affects other aspects of QoS: unstable voice packet delivery, packet jitter, packet loss and echo. The Priority Queuing (PQ) algorithm is a popular technique used in VoIP networks to reduce delays. In operation, PQ uses sorting, searching and route planning methods to classify packets on the router. This packet classification method can result in repetition of the process, and this recursive loop leads to starvation of the next queue. In this paper, to solve these problems, there are three phases, namely a queuing phase, a classifying phase and a scheduling phase. The PQ algorithm technique is based on priority. It is applied to a fuzzy inference system to classify the queued incoming packets (voice, video and text), which can reduce the recursive loop and starvation. After an incoming packet is classified, the packet is sent to the packet buffer. In addition, to justify the research objective, the improved PQ algorithm is compared against the existing PQ algorithm found in the literature using metrics such as delay, packet drop and packet loss. This paper describes the difference in execution length between PQ and our algorithm. Our algorithm simplifies the execution process that causes the starvation occurring in the PQ algorithm.
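Strict priority queuing of the kind the paper improves upon can be sketched with a binary heap; the three traffic classes and packet labels below are illustrative. Note how a steady stream of voice packets would starve the text queue, which is the problem the fuzzy classifier is meant to mitigate:

```python
import heapq
import itertools

# Lower number = higher priority: voice before video before text.
PRIORITY = {"voice": 0, "video": 1, "text": 2}

def make_scheduler():
    counter = itertools.count()   # tie-breaker keeps FIFO order within a class
    heap = []
    def enqueue(kind, packet):
        heapq.heappush(heap, (PRIORITY[kind], next(counter), packet))
    def dequeue():
        return heapq.heappop(heap)[2]
    return enqueue, dequeue

enqueue, dequeue = make_scheduler()
enqueue("text", "t1")
enqueue("voice", "v1")
enqueue("video", "d1")
enqueue("voice", "v2")
order = [dequeue() for _ in range(4)]   # all voice packets drain first
```

Here `order` comes out as `["v1", "v2", "d1", "t1"]`: as long as voice packets keep arriving, text packets never leave the queue.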
Performance comparison of some evolutionary algorithms on job shop scheduling problems
Mishra, S. K.; Rao, C. S. P.
2016-09-01
Job shop scheduling is a state space search problem belonging to the NP-hard category due to its complexity and combinatorial explosion of states. Several naturally inspired evolutionary methods have been developed to solve job shop scheduling problems. In this paper the evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music Based Harmony Search algorithms, are applied and fine-tuned to model and solve job shop scheduling problems. About 250 benchmark instances have been used to evaluate the performance of these algorithms. The capabilities of each of these algorithms in solving job shop scheduling problems are outlined.
DEFF Research Database (Denmark)
Knöös, Tommy; Wieslander, Elinore; Cozzi, Luca;
2006-01-01
…correction-based equivalent path length algorithms to model-based algorithms. These were divided into two groups based on how changes in electron transport are accounted for ((a) not considered and (b) considered). Increasing the complexity from the relatively homogeneous pelvic region to the very… …to the fields. A Monte Carlo calculated algorithm input data set and a benchmark set for a virtual linear accelerator have been produced, which have facilitated the analysis and interpretation of the results. The more sophisticated models in the type (b) group exhibit changes in both absorbed dose and its…
A comparison of three additive tree algorithms that rely on a least-squares loss criterion.
Smith, T J
1998-11-01
The performances of three additive tree algorithms which seek to minimize a least-squares loss criterion were compared. The algorithms included the penalty-function approach of De Soete (1983), the iterative projection strategy of Hubert & Arabie (1995) and the two-stage ADDTREE algorithm (Corter, 1982; Sattath & Tversky, 1977). Model fit, comparability of structure, processing time and metric recovery were assessed. Results indicated that the iterative projection strategy consistently located the best-fitting tree, but also displayed a wider range and larger number of local optima. PMID:9854946
Verbeeck, Cis; Colak, Tufan; Watson, Fraser T; Delouille, Veronique; Mampaey, Benjamin; Qahwaji, Rami
2011-01-01
Since the Solar Dynamics Observatory (SDO) began recording ~ 1 TB of data per day, there has been an increased need to automatically extract features and events for further analysis. Here we compare the overall detection performance, correlations between extracted properties, and usability for feature tracking of four solar feature-detection algorithms: the Solar Monitor Active Region Tracker (SMART) detects active regions in line-of-sight magnetograms; the Automated Solar Activity Prediction code (ASAP) detects sunspots and pores in white-light continuum images; the Sunspot Tracking And Recognition Algorithm (STARA) detects sunspots in white-light continuum images; the Spatial Possibilistic Clustering Algorithm (SPoCA) automatically segments solar EUV images into active regions (AR), coronal holes (CH) and quiet Sun (QS). One month of data from the SOHO/MDI and SOHO/EIT instruments during 12 May - 23 June 2003 is analysed. The overall detection performance of each algorithm is benchmarked against National Oc...
Algorithm comparison and benchmarking using a parallel spectral transform shallow water model
Energy Technology Data Exchange (ETDEWEB)
Worley, P.H. [Oak Ridge National Lab., TN (United States); Foster, I.T.; Toonen, B. [Argonne National Lab., IL (United States)
1995-04-01
In recent years, a number of computer vendors have produced supercomputers based on a massively parallel processing (MPP) architecture. These computers have been shown to be competitive in performance with conventional vector supercomputers for some applications. As spectral weather and climate models are heavy users of vector supercomputers, it is interesting to determine how these models perform on MPPs, and which MPPs are best suited to the execution of spectral models. The benchmarking of MPPs is complicated by the fact that different algorithms may be more efficient on different architectures. Hence, a comprehensive benchmarking effort must answer two related questions: which algorithm is most efficient on each computer and how do the most efficient algorithms compare on different computers. In general, these are difficult questions to answer because of the high cost associated with implementing and evaluating a range of different parallel algorithms on each MPP platform.
Comparison of Algorithms for Prediction of Protein Structural Features from Evolutionary Data.
Bywater, Robert P
2016-01-01
Proteins have many functions and predicting these is still one of the major challenges in theoretical biophysics and bioinformatics. Foremost amongst these functions is the need to fold correctly thereby allowing the other genetically dictated tasks that the protein has to carry out to proceed efficiently. In this work, some earlier algorithms for predicting protein domain folds are revisited and they are compared with more recently developed methods. In dealing with intractable problems such as fold prediction, when different algorithms show convergence onto the same result there is every reason to take all algorithms into account such that a consensus result can be arrived at. In this work it is shown that the application of different algorithms in protein structure prediction leads to results that do not converge as such but rather they collude in a striking and useful way that has never been considered before.
Institute of Scientific and Technical Information of China (English)
Liu Jie; Shi Shu-Ting; Zhao Jun-Chan
2013-01-01
The three most widely used methods for reconstructing the underlying time series from the recurrence plots (RPs) of a dynamical system are compared with each other in this paper. We aim to reconstruct a toy series, a periodic series, a random series, and a chaotic series to compare the effectiveness of the most widely used typical methods in terms of signal correlation analysis. The application of the most effective algorithm to the typical chaotic Lorenz system verifies its correctness. It is verified that, based on the unthresholded RPs, one can reconstruct the original attractor by choosing different RP thresholds based on the Hirata algorithm. It is shown that, in real applications, it is possible to reconstruct the underlying dynamics by using quite little information from observations of real dynamical systems. Moreover, rules for choosing the threshold in the algorithm are also suggested.
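The recurrence plot that these reconstruction methods start from is itself straightforward to compute. The sketch below builds the unthresholded distance matrix of a time-delay embedding and optionally thresholds it; the embedding dimension, delay, threshold, and the periodic test series are illustrative choices, not values from the paper:

```python
import numpy as np

def recurrence_matrix(x, dim=2, tau=1, eps=None):
    """Recurrence plot of a scalar series via time-delay embedding.
    eps=None returns the unthresholded distance matrix; otherwise a 0/1 RP."""
    n = len(x) - (dim - 1) * tau
    # Embedded vectors: rows are (x[i], x[i+tau], ..., x[i+(dim-1)*tau]).
    emb = np.column_stack([x[i * tau : i * tau + n] for i in range(dim)])
    dist = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return dist if eps is None else (dist <= eps).astype(int)

t = np.linspace(0, 8 * np.pi, 200)
series = np.sin(t)                       # a periodic series: diagonal line structures
rp = recurrence_matrix(series, dim=2, tau=5, eps=0.1)
```

The unthresholded matrix corresponds to the "unthresholded RPs" mentioned above; sweeping `eps` produces the family of thresholded plots used by the Hirata-style reconstruction.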
A comparison of two extended Kalman filter algorithms for air-to-air passive ranging.
Ewing, Ward Hubert.
1983-01-01
Approved for public release; distribution is unlimited Two Extended Kalman Filter algorithms for air-to-air passive ranging are proposed, and examined by computer simulation. One algorithm uses only bearing observations while the other uses both bearing and elevation angles. Both are tested using a flat-Earth model and also using a spherical-Earth model where the benefit of a simple correction for the curvature-of-the-Earth effect on elevation angle is examined. The effects of varied an...
A Comparison of Two Open Source LiDAR Surface Classification Algorithms
Danny G Marks; Nancy F. Glenn; Timothy E. Link; Hudak, Andrew T.; Rupesh Shrestha; Michael J. Falkowski; Alistair M. S. Smith; Hongyu Huang; Wade T. Tinkham
2011-01-01
With the progression of LiDAR (Light Detection and Ranging) towards a mainstream resource management tool, it has become necessary to understand how best to process and analyze the data. While most ground surface identification algorithms remain proprietary and have high purchase costs, a few are openly available, free to use, and supported by published results. Two of the latter are the multiscale curvature classification and the Boise Center Aerospace Laboratory LiDAR (BCAL) algorithms....
A Comparison and Selection on Basic Type of Searching Algorithm in Data Structure
Kamlesh Kumar Pandey; Narendra Pradhan
2014-01-01
Many problems arise in practical fields of computer science, database management systems, networks, data mining and artificial intelligence. Searching is a common fundamental operation that solves the search problem in the different formats of these fields. This research paper presents the basic types of searching algorithms of data structures: linear search, binary search, and hash search. We have tried to cover some technical aspects of these searching algorithms. This research is provi...
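The three basic searches named above can be sketched and compared directly; the sample data is illustrative:

```python
from bisect import bisect_left

def linear_search(seq, target):
    """O(n): scan every element; works on unsorted data."""
    for i, v in enumerate(seq):
        if v == target:
            return i
    return -1

def binary_search(sorted_seq, target):
    """O(log n): repeatedly halve the range; requires sorted input."""
    i = bisect_left(sorted_seq, target)
    return i if i < len(sorted_seq) and sorted_seq[i] == target else -1

def hash_search(seq, target):
    """O(1) average lookup after an O(n) one-time index build."""
    index = {v: i for i, v in enumerate(seq)}
    return index.get(target, -1)

data = [3, 7, 11, 19, 23, 42]
# All three agree on sorted data; they differ in preprocessing cost and memory.
assert linear_search(data, 19) == binary_search(data, 19) == hash_search(data, 19) == 3
```

The trade-off mirrors the paper's comparison: linear search needs no structure, binary search needs sorted input, and hash search trades memory for near-constant lookup time.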
Huh, Hee Jin; Chung, Jae-Woo; Park, Seong Yeon; Chae, Seok Lae
2015-01-01
Background: Automated Mediace Treponema pallidum latex agglutination (TPLA) and Mediace rapid plasma reagin (RPR) assays are used by many laboratories for syphilis diagnosis. This study compared the results of the traditional syphilis screening algorithm and a reverse algorithm using automated Mediace RPR or Mediace TPLA as first-line screening assays in subjects undergoing a health checkup. Methods: Samples from 24,681 persons were included in this study. We routinely performed Mediace RPR and...
Directory of Open Access Journals (Sweden)
Natarajan Meghanathan
2013-05-01
The high-level contribution of this paper is an exhaustive simulation-based comparison study of three categories (density-based, node ID-based and stability-based) of algorithms to determine connected dominating sets (CDS) for mobile ad hoc networks, evaluating their performance under two categories of mobility models (random node mobility and grid-based vehicular ad hoc networks). The CDS algorithms studied are the maximum density-based (MaxD-CDS), node ID-based (ID-CDS) and minimum velocity-based (MinV-CDS) algorithms, representing the density, node ID and stability categories respectively. The node mobility models used are the Random Waypoint model (representing random node mobility) and the City Section and Manhattan mobility models (representing grid-based vehicular ad hoc networks). The three CDS algorithms under the three mobility models are evaluated with respect to two critical performance metrics: the effective CDS lifetime (calculated taking into consideration the CDS connectivity and absolute CDS lifetime) and the CDS node size. Simulations are conducted under a diverse set of conditions representing low, moderate and high network density, coupled with low, moderate and high node mobility scenarios. For each CDS, the paper identifies the mobility model that can be employed to simultaneously maximize the lifetime and minimize the node size with minimal tradeoff. For the two VANET mobility models, the impact of the grid block length on the CDS lifetime and node size is also evaluated.
Performance Comparison Research of the FECG Signal Separation Based on the BSS Algorithm
Directory of Open Access Journals (Sweden)
Xinling Wen
2012-08-01
The fetal electrocardiogram (FECG) is a weak signal, monitored indirectly by placing electrodes on the maternal abdominal surface, that contains all forms of jamming signals. Separating the FECG from the strong background interference therefore has important clinical application value. Independent Component Analysis (ICA) is a Blind Source Separation (BSS) technology developed in recent years. This study adopted the ICA method for the extraction of the FECG and carried out blind signal separation using the FastICA algorithm and the natural gradient algorithm. The experimental results show that both algorithms can obtain good separation results. However, because the natural gradient algorithm can achieve online FECG separation and its separation effect is better than that of the FastICA algorithm, the natural gradient algorithm is the better choice for FECG separation. It will help to monitor congenital heart disease, neonatal arrhythmia, intrauterine fetal growth retardation and other diseases, which has very important practical application value.
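A toy version of the FastICA separation step can be sketched with NumPy. This symmetric fixed-point variant with a tanh nonlinearity is a simplification of the full algorithm, and the mixing matrix, synthetic source signals, and iteration count below are illustrative, not taken from the FECG study:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Tiny symmetric FastICA (tanh nonlinearity) for illustration only."""
    rng = np.random.default_rng(seed)
    X = X - X.mean(axis=1, keepdims=True)
    # Whiten: decorrelate and scale mixtures to unit variance.
    d, E = np.linalg.eigh(np.cov(X))
    Z = (E @ np.diag(d ** -0.5) @ E.T) @ X
    n = Z.shape[0]
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        G = np.tanh(W @ Z)
        # Fixed-point update: E[z g(w'z)] - E[g'(w'z)] w for every row of W.
        W_new = (G @ Z.T) / Z.shape[1] - np.diag((1 - G ** 2).mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W')^(-1/2) W keeps rows orthonormal.
        d2, E2 = np.linalg.eigh(W_new @ W_new.T)
        W = E2 @ np.diag(d2 ** -0.5) @ E2.T @ W_new
    return W @ Z   # estimated sources, recovered up to order and sign

t = np.linspace(0, 1, 2000)
S = np.vstack([np.sin(2 * np.pi * 7 * t),          # stand-in "maternal" source
               np.sign(np.sin(2 * np.pi * 3 * t))]) # stand-in "fetal" source
A = np.array([[0.6, 0.4], [0.45, 0.55]])            # hypothetical mixing matrix
recovered = fastica(A @ S)
```

The natural gradient algorithm favored in the study updates the unmixing matrix incrementally per sample, which is what makes online separation possible; the batch fixed point above is the offline alternative.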
Woon, Y. L.; Heng, S. P.; Wong, J. H. D.; Ung, N. M.
2016-03-01
Inhomogeneity correction is recommended for accurate dose calculation in radiotherapy treatment planning, since the human body is highly inhomogeneous due to the presence of bones and air cavities. However, each dose calculation algorithm has its own limitations. This study assesses the accuracy of five algorithms that are currently implemented for treatment planning: pencil beam convolution (PBC), superposition (SP), the anisotropic analytical algorithm (AAA), Monte Carlo (MC) and Acuros XB (AXB). The calculated dose was compared with the dose measured using radiochromic film (Gafchromic EBT2) in inhomogeneous phantoms. In addition, the dosimetric impact of the different algorithms on intensity modulated radiotherapy (IMRT) was studied for the head and neck region. MC had the best agreement with the measured percentage depth dose (PDD) within the inhomogeneous region, followed by AXB, AAA, SP and PBC. For IMRT planning, the MC algorithm is recommended in preference to PBC and SP. The MC and AXB algorithms were found to have better accuracy in terms of inhomogeneity correction and should be used for tumour volumes in the proximity of inhomogeneous structures.
Khare, Kshitij; 10.1214/11-AOS916
2012-01-01
The data augmentation (DA) algorithm is a widely used Markov chain Monte Carlo algorithm that is easy to implement but often suffers from slow convergence. The sandwich algorithm is an alternative that can converge much faster while requiring roughly the same computational effort per iteration. Theoretically, the sandwich algorithm always converges at least as fast as the corresponding DA algorithm in the sense that $\Vert K^*\Vert \le \Vert K\Vert$, where $K$ and $K^*$ are the Markov operators associated with the DA and sandwich algorithms, respectively, and $\Vert\cdot\Vert$ denotes the operator norm. In this paper, a substantial refinement of this operator norm inequality is developed. In particular, under regularity conditions implying that $K$ is a trace-class operator, it is shown that $K^*$ is also a positive, trace-class operator, and that the spectrum of $K^*$ dominates that of $K$ in the sense that the ordered elements of the former are all less than or equal to the corresponding elements of the lat...
Institute of Scientific and Technical Information of China (English)
宋渊; 姚向华; 张新曼
2012-01-01
In opportunistic routing for wireless ad hoc networks, a node's forwarding candidate set is usually selected based on the shortest-path expected transmission count (ETX), without fully considering the broadcast nature of wireless network nodes when forwarding data. In this paper, we take the multipath ETX as the routing metric and propose an optimal forwarding candidate set algorithm, MCET (multipath-considered expected transmission). For every node in the wireless network other than the destination, it selects a forwarding candidate set that takes the multipath forwarding expected value into consideration, and then prioritises the candidates in the order in which the nodes were selected. Simulation results indicate that, compared with traditional opportunistic routing based on shortest-path ETX, opportunistic routing using the optimal forwarding candidate set algorithm noticeably reduces the average number of data transmissions and increases the successful delivery rate of data packets.
Montilla, I; Béchet, C; Le Louarn, M; Reyes, M; Tallon, M
2010-11-01
Extremely Large Telescopes (ELTs) are very challenging with respect to their adaptive optics (AO) requirements. Their diameters and the specifications required by the astronomical science for which they are being designed imply a huge increment in the number of degrees of freedom in the deformable mirrors. Faster algorithms are needed to implement the real-time reconstruction and control in AO at the required speed. We present the results of a study of the AO correction performance of three different algorithms applied to the case of a 42-m ELT: one considered as a reference, the matrix-vector multiply (MVM) algorithm; and two considered fast, the fractal iterative method (FrIM) and the Fourier transform reconstructor (FTR). The MVM and the FrIM both provide a maximum a posteriori estimation, while the FTR provides a least-squares one. The algorithms are tested on the European Southern Observatory (ESO) end-to-end simulator, OCTOPUS. The performance is compared using a natural guide star single-conjugate adaptive optics configuration. The results demonstrate that the methods have similar performance in a large variety of simulated conditions. However, with respect to system misregistrations, the fast algorithms demonstrate an interesting robustness. PMID:21045895
Akoguz, A.; Bozkurt, S.; Gozutok, A. A.; Alp, G.; Turan, E. G.; Bogaz, M.; Kent, S.
2016-06-01
The high resolution of modern satellite imagery brings with it a fundamental problem: the large amount of telemetry data to be stored after the downlink operation. Moreover, after the post-processing and image enhancement steps applied once the image is acquired, file sizes increase even more; the data then become harder to store and take much more time to transmit from one source to another. Hence, compressing the raw data and the various levels of processed data to save space is a necessity for archiving stations. The lossless data compression algorithms examined in this study aim to provide compression without any loss of the data holding spectral information. To this end, well-known open source programs supporting the relevant compression algorithms were applied to processed GeoTIFF images from Airbus Defence & Space's SPOT 6 and 7 satellites, with 1.5 m GSD, which were acquired and stored by the ITU Center for Satellite Communications and Remote Sensing (ITU CSCRS). The algorithms tested were Lempel-Ziv-Welch (LZW), the Lempel-Ziv-Markov chain algorithm (LZMA and LZMA2), Lempel-Ziv-Oberhumer (LZO), Deflate and Deflate64, Prediction by Partial Matching (PPMd or PPM2), and the Burrows-Wheeler Transform (BWT), in order to observe the compression performance of these algorithms on sample datasets in terms of how much of the image data can be compressed while ensuring lossless compression.
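A small version of this kind of benchmark can be run with Python's standard library codecs: zlib's Deflate and the lzma module correspond to two of the families named above, and bz2 (a BWT-based compressor) stands in for the Burrows-Wheeler entry. The sample payload below is synthetic, not satellite data:

```python
import bz2
import lzma
import zlib

def compare_lossless(data: bytes):
    """Compress the same payload with three stdlib codecs and report sizes."""
    compressed = {
        "deflate/zlib": zlib.compress(data, level=9),
        "lzma": lzma.compress(data, preset=6),
        "bzip2 (BWT)": bz2.compress(data, compresslevel=9),
    }
    # Lossless: every codec must round-trip to the exact original bytes.
    assert zlib.decompress(compressed["deflate/zlib"]) == data
    assert lzma.decompress(compressed["lzma"]) == data
    assert bz2.decompress(compressed["bzip2 (BWT)"]) == data
    return {name: len(blob) for name, blob in compressed.items()}

# Repetitive synthetic payload standing in for a raster band.
sample = (b"band1 " * 4000) + bytes(range(256)) * 40
sizes = compare_lossless(sample)
```

Ranking the resulting sizes (and timing the calls) reproduces in miniature the trade-off the study measures between compression ratio and processing cost.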
Comparison of optimization algorithms in intensity-modulated radiation therapy planning
Kendrick, Rachel
Intensity-modulated radiation therapy is used to better conform the radiation dose to the target, which includes avoiding healthy tissue. Planning programs employ optimization methods to search for the best fluence of each photon beam, and therefore to create the best treatment plan. The Computational Environment for Radiotherapy Research (CERR), a program written in MATLAB, was used to examine some commonly used algorithms for one 5-beam plan. The algorithms include the genetic algorithm, quadratic programming, pattern search, constrained nonlinear optimization, simulated annealing, the optimization method used in Varian Eclipse™, and some hybrids of these. Quadratic programming, simulated annealing, and a quadratic/simulated annealing hybrid were also separately compared using different prescription doses. The results of each dose-volume histogram as well as the visual dose color wash were used to compare the plans. CERR's built-in quadratic programming provided the best overall plan, but avoidance of the organ-at-risk was rivaled by other programs. Hybrids of quadratic programming with some of these algorithms suggest the possibility of better planning programs, as shown by the improved quadratic/simulated annealing plan when compared to the simulated annealing algorithm alone. Further experimentation will be done to improve cost functions and computational time.
Farrington, C. Paddy; Noufaily, Angela; Andrews, Nick J.; Charlett, Andre
2016-01-01
A large-scale multiple surveillance system for infectious disease outbreaks has been in operation in England and Wales since the early 1990s. Changes to the statistical algorithm at the heart of the system were proposed and the purpose of this paper is to compare two new algorithms with the original algorithm. Test data to evaluate performance are created from weekly counts of the number of cases of each of more than 2000 diseases over a twenty-year period. The time series of each disease is separated into one series giving the baseline (background) disease incidence and a second series giving disease outbreaks. One series is shifted forward by twelve months and the two are then recombined, giving a realistic series in which it is known where outbreaks have been added. The metrics used to evaluate performance include a scoring rule that appropriately balances sensitivity against specificity and is sensitive to variation in probabilities near 1. In the context of disease surveillance, a scoring rule can be adapted to reflect the size of outbreaks and this was done. Results indicate that the two new algorithms are comparable to each other and better than the algorithm they were designed to replace. PMID:27513749
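The scoring-rule idea above can be illustrated with the Brier score, a standard proper scoring rule; the paper's actual rule is more elaborate (and weighted by outbreak size), and the probabilities and outcomes below are invented.

```python
# A proper scoring rule rewards forecast probabilities close to the observed
# 0/1 outcome, implicitly balancing sensitivity against specificity.
def brier_score(probs, outcomes):
    """Mean squared difference between forecast probability and outcome (lower is better)."""
    return sum((p - o) ** 2 for p, o in zip(probs, outcomes)) / len(probs)

# A sharp, well-calibrated detector beats a hedging one on the same weeks.
outcomes = [1, 0, 0, 1, 0]           # 1 = outbreak week, 0 = baseline
sharp    = [0.9, 0.1, 0.2, 0.8, 0.1]
hedged   = [0.5, 0.5, 0.5, 0.5, 0.5]
print(brier_score(sharp, outcomes), brier_score(hedged, outcomes))
```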
Directory of Open Access Journals (Sweden)
C. Keim
2009-05-01
Full Text Available This paper presents a first statistical validation of tropospheric ozone products derived from measurements of the satellite instrument IASI. Since the end of 2006, IASI (Infrared Atmospheric Sounding Interferometer), aboard the polar orbiter Metop-A, has measured infrared spectra of the Earth's atmosphere in nadir geometry. This validation covers the northern mid-latitudes and the period from July 2007 to August 2008. The comparison of the ozone products with the vertical ozone concentration profiles from balloon sondes leads to estimates of the systematic and random errors in the IASI ozone products. The intercomparison of the retrieval results from four different sources (including the EUMETSAT ozone products) shows systematic differences due to the methods and algorithms used. On average the tropospheric columns have a small bias of less than 2 Dobson Units (DU) when compared to the sonde-measured columns. The comparison of the still pre-operational EUMETSAT columns shows higher mean differences of about 5 DU.
Comparison of the Noise Robustness of FVC Retrieval Algorithms Based on Linear Mixture Models
Directory of Open Access Journals (Sweden)
Hiroki Yoshioka
2011-07-01
Full Text Available The fraction of vegetation cover (FVC) is often estimated by unmixing a linear mixture model (LMM) to assess the horizontal spread of vegetation within a pixel based on a remotely sensed reflectance spectrum. The LMM-based algorithm produces results that can vary to a certain degree, depending on the model assumptions. For example, the robustness of the results depends on the presence of errors in the measured reflectance spectra. The objective of this study was to derive a factor that could be used to assess the robustness of LMM-based algorithms under a two-endmember assumption. The factor was derived from the analytical relationship between FVC values determined according to several previously described algorithms. The factor depended on the target spectra, endmember spectra, and choice of the spectral vegetation index. Numerical simulations were conducted to demonstrate the dependence and usefulness of the technique in terms of robustness against measurement noise.
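A minimal sketch of two-endmember LMM unmixing by least squares, under the model rho = f*veg + (1-f)*soil; the endmember spectra and target fraction are invented numbers, not from the study.

```python
# Closed-form least-squares estimate of FVC for a two-endmember linear mixture.
def unmix_fvc(target, veg, soil):
    """Least-squares fraction f minimizing ||target - (f*veg + (1-f)*soil)||."""
    num = sum((t - s) * (v - s) for t, v, s in zip(target, veg, soil))
    den = sum((v - s) ** 2 for v, s in zip(veg, soil))
    f = num / den
    return min(1.0, max(0.0, f))  # clamp to the physically meaningful range

veg  = [0.05, 0.08, 0.04, 0.45]   # hypothetical red/NIR-style reflectances
soil = [0.20, 0.25, 0.30, 0.35]
f_true = 0.6
mixed = [f_true * v + (1 - f_true) * s for v, s in zip(veg, soil)]
print(unmix_fvc(mixed, veg, soil))
```

With noise-free data the true fraction is recovered exactly; the study's robustness factor concerns how this estimate degrades once the measured spectrum contains errors.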
Amooee, Golriz; Bagheri-Dehnavi, Malihe
2012-01-01
In today's competitive world, industrial companies seek to manufacture products of higher quality, which can be achieved by increasing reliability, maintainability and thus the availability of products. On the other hand, improvement in the product lifecycle is necessary for achieving high reliability. Typically, maintenance activities aim to reduce failures of industrial machinery and minimize the consequences of such failures. Industrial companies therefore try to improve their efficiency by using different fault detection techniques. One strategy is to process and analyze previously generated data to predict future failures. The purpose of this paper is to detect wasted parts using different data mining algorithms and to compare the accuracy of these algorithms. A combination of thermal and physical characteristics has been used, and the algorithms were implemented on Ahanpishegan's current data to estimate the availability of its produced parts. Keywords: Data Mining, Fault Detection, Availability, Predictio...
Directory of Open Access Journals (Sweden)
M. Mohammadi
2015-01-01
Full Text Available This paper presents the optimal planning of harmonic passive filters in distribution systems using three intelligent methods, the genetic algorithm (GA), particle swarm optimization (PSO) and the artificial bee colony (ABC), and, as a new contribution, compares them with the biogeography-based optimization (BBO) algorithm. In this work, the objective function is to minimize the investment cost of the filters and the total harmonic distortion of the three-phase current. It is shown that through economical placement and sizing of LC passive filters, the total voltage harmonic distortion and the cost can be minimized simultaneously. BBO is a novel evolutionary algorithm based on the mathematics of biogeography. In the BBO model, problem solutions are represented as islands, and the sharing of features between solutions is represented as immigration and emigration between the islands. The simulation results show that the proposed method is efficient for solving the presented problem.
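A bare-bones sketch of one of the compared methods, PSO, minimizing an invented quadratic stand-in for the filter-cost/THD objective; all constants (swarm size, inertia, acceleration coefficients, bounds) are illustrative.

```python
# Canonical PSO: each particle tracks its personal best, the swarm tracks a
# global best, and velocities blend inertia with attraction to both.
import random

def pso(cost, dim, n=20, iters=200, w=0.7, c1=1.5, c2=1.5, lo=-10.0, hi=10.0, seed=1):
    rng = random.Random(seed)
    pos = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pbest_f = [cost(p) for p in pos]
    g = min(range(n), key=lambda i: pbest_f[i])
    gbest, gbest_f = pbest[g][:], pbest_f[g]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            f = cost(pos[i])
            if f < pbest_f[i]:
                pbest[i], pbest_f[i] = pos[i][:], f
                if f < gbest_f:
                    gbest, gbest_f = pos[i][:], f
    return gbest, gbest_f

# Toy separable quadratic with its minimum at (1, 2).
best, best_f = pso(lambda x: (x[0] - 1) ** 2 + (x[1] - 2) ** 2, dim=2)
print(best, best_f)
```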
COMPARISON AND ANALYSIS OF WATERMARKING ALGORITHMS IN COLOR IMAGES – IMAGE SECURITY PARADIGM
Directory of Open Access Journals (Sweden)
D. Biswas
2011-06-01
Full Text Available This paper is based on a comparative study of different watermarking techniques: the LSB hiding algorithm, (2, 2) visual cryptography based watermarking for color images [3,4] and the randomized LSB-MSB hiding algorithm [1]. Here, we embed the secret image in a host or original image using these bit-wise pixel manipulation algorithms. This is followed by a comparative study of the resultant images through Peak Signal to Noise Ratio (PSNR) calculation. The property-wise variation of the different types of secret images embedded into the host image plays an important role in this context. The Peak Signal to Noise Ratio is calculated for different color levels (red, green, blue) and also for their equivalent gray-level images. From the results, we attempt to determine which technique is most suitable for which type of secret image.
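A hedged sketch of LSB hiding and the PSNR metric, using single-channel 8-bit "images" stored as flat lists; the pixel values and secret bits are invented.

```python
# LSB hiding: overwrite the least significant bit of each host pixel with one
# secret bit, then score the distortion with PSNR against the original host.
import math

def embed_lsb(host, bits):
    """Replace the least significant bit of each host pixel with a secret bit."""
    return [(p & ~1) | b for p, b in zip(host, bits)]

def extract_lsb(stego):
    return [p & 1 for p in stego]

def psnr(a, b, peak=255):
    mse = sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
    return float("inf") if mse == 0 else 10 * math.log10(peak ** 2 / mse)

host = [52, 55, 61, 66, 70, 61, 64, 73]
secret = [1, 0, 1, 1, 0, 0, 1, 0]
stego = embed_lsb(host, secret)
print(extract_lsb(stego) == secret, psnr(host, stego))
```

Because only the lowest bit ever changes, per-pixel error is at most 1 and the PSNR stays high, which is why LSB hiding is nearly imperceptible.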
Vijay Alagappan, A.; Narasimha Rao, K. V.; Krishna Kumar, R.
2015-02-01
Tyre models are a prerequisite for any vehicle dynamics simulation. Tyre models range from the simplest mathematical models that consider only the cornering stiffness to a complex set of formulae. Among all the steady-state tyre models that are in use today, the Magic Formula tyre model is unique and most popular. Though the Magic Formula tyre model is widely used, obtaining the model coefficients from either the experimental or the simulation data is not straightforward due to its nonlinear nature and the presence of a large number of coefficients. A common procedure used for this extraction is the least-squares minimisation that requires considerable experience for initial guesses. Various researchers have tried different algorithms, namely, gradient and Newton-based methods, differential evolution, artificial neural networks, etc. The issues involved in all these algorithms are setting bounds or constraints, sensitivity of the parameters, the features of the input data such as the number of points, noisy data, experimental procedure used such as slip angle sweep or tyre measurement (TIME) procedure, etc. The extracted Magic Formula coefficients are affected by these variants. This paper highlights the issues that are commonly encountered in obtaining these coefficients with different algorithms, namely, least-squares minimisation using trust region algorithms, Nelder-Mead simplex, pattern search, differential evolution, particle swarm optimisation, cuckoo search, etc. A key observation is that not all the algorithms give the same Magic Formula coefficients for a given data. The nature of the input data and the type of the algorithm decide the set of the Magic Formula tyre model coefficients.
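A toy illustration of the extraction problem, assuming the bare three-coefficient Magic Formula y = D*sin(C*arctan(B*x)); a coarse grid search over noise-free synthetic data stands in for the least-squares and evolutionary methods discussed, and shows why sensible bounds matter.

```python
# Recover B, C, D of the bare Magic Formula from synthetic, noise-free data by
# grid search over bounded candidate values.
import math

def magic(x, B, C, D):
    return D * math.sin(C * math.atan(B * x))

def sse(params, xs, ys):
    B, C, D = params
    return sum((magic(x, B, C, D) - y) ** 2 for x, y in zip(xs, ys))

xs = [i * 0.01 for i in range(-100, 101)]   # slip values
true = (10.0, 1.5, 1.0)                     # hypothetical "measured" tyre parameters
ys = [magic(x, *true) for x in xs]

grid = [(b, c, d)
        for b in [8.0, 9.0, 10.0, 11.0]
        for c in [1.3, 1.4, 1.5, 1.6]
        for d in [0.9, 1.0, 1.1]]
best = min(grid, key=lambda p: sse(p, xs, ys))
print(best)
```

With noisy data or the full coefficient set, this error surface acquires many near-equivalent minima, which is exactly the ambiguity the paper reports across algorithms.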
Recent Research and Comparison of QoS Routing Algorithms for MPLS Networks
Directory of Open Access Journals (Sweden)
Santosh Kulkarni
2012-03-01
Full Text Available MPLS enables service providers to meet the challenges brought about by explosive growth and provides the opportunity for differentiated services without necessitating the sacrifice of the existing infrastructure. MPLS is a highly scalable data-carrying mechanism that forwards packets to an outgoing interface based only on the label value. An MPLS network has the capability of routing with specific constraints to support the desired QoS. In this paper we compare recent QoS routing algorithms for MPLS networks. We present simulation results that focus on the computational complexity of each algorithm and its performance under a wide range of workload, topology and system parameters.
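One common constrained-QoS-routing idea can be sketched as bandwidth-pruned Dijkstra: drop links whose residual bandwidth is below the request, then minimize delay over what remains. The topology and numbers below are invented, and real algorithms in this literature are considerably more elaborate.

```python
# Dijkstra on delay, restricted to links that satisfy a bandwidth constraint.
import heapq

def qos_route(graph, src, dst, min_bw):
    """graph: {u: [(v, delay, bandwidth), ...]}. Returns (delay, path) or None."""
    pq = [(0, src, [src])]
    seen = set()
    while pq:
        delay, u, path = heapq.heappop(pq)
        if u == dst:
            return delay, path
        if u in seen:
            continue
        seen.add(u)
        for v, d, bw in graph.get(u, []):
            if bw >= min_bw and v not in seen:   # constraint: enough bandwidth
                heapq.heappush(pq, (delay + d, v, path + [v]))
    return None

graph = {
    "A": [("B", 1, 100), ("C", 4, 10)],
    "B": [("D", 5, 100)],
    "C": [("D", 1, 10)],
}
print(qos_route(graph, "A", "D", min_bw=50))   # forced onto the high-bandwidth path
```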
Modeling Signal Transduction Networks: A comparison of two Stochastic Kinetic Simulation Algorithms
Energy Technology Data Exchange (ETDEWEB)
Pettigrew, Michel F.; Resat, Haluk
2005-09-15
Simulations of a scalable four-compartment reaction model based on the well-known epidermal growth factor receptor (EGFR) signal transduction system are used to compare two stochastic algorithms: StochSim and the Gibson-Gillespie. It is concluded that the Gibson-Gillespie is the algorithm of choice for most realistic cases, with the possible exception of signal transduction networks characterized by a moderate number (< 100) of complex types, each with a very small population, but with a high degree of connectivity amongst the complex types. Keywords: Signal transduction networks, Stochastic simulation, StochSim, Gillespie
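A minimal direct-method Gillespie sketch for a single birth-death channel, a toy stand-in for one reaction of the EGFR model; the rates are invented, and neither StochSim nor the Gibson-Bruck next-reaction optimization is implemented here.

```python
# Direct-method stochastic simulation: draw an exponential waiting time from
# the total propensity, then pick which reaction fired in proportion to its rate.
import math
import random

def gillespie_birth_death(k_birth, k_death, x0, t_end, seed=0):
    rng = random.Random(seed)
    t, x = 0.0, x0
    traj = [(t, x)]
    while t < t_end:
        a1, a2 = k_birth, k_death * x          # propensities of birth and death
        a0 = a1 + a2
        if a0 == 0:
            break
        t += -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        x += 1 if rng.random() * a0 < a1 else -1
        traj.append((t, x))
    return traj

traj = gillespie_birth_death(k_birth=10.0, k_death=0.1, x0=0, t_end=50.0)
print(len(traj), traj[-1][1])
```

The population fluctuates around the steady-state mean k_birth/k_death; the Gibson-Bruck variant accelerates exactly this loop when many reaction channels compete.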
International Nuclear Information System (INIS)
Multichannel pulse height measurements with a cylindrical 3He proportional counter obtained at a reactor filter of natural iron are taken to investigate the properties of three algorithms for neutron spectrum unfolding. For a systematic application of uncertainty propagation the covariance matrix of previously determined 3He response functions is evaluated. The calculated filter transmission function together with a covariance matrix estimated from cross-section uncertainties of the filter material is used as fluence pre-information. The results obtained from algorithms with and without pre-information differ in shape and uncertainties for single group fluence values, but there is sufficient agreement when evaluating integrals over neutron energy intervals
A Comparison of the Machine Learning Algorithm for Evaporation Duct Estimation
Yang, C.
2013-01-01
In this research, a comparison of the relevance vector machine (RVM), the least squares support vector machine (LSSVM) and the radial basis function neural network (RBFNN) for evaporation duct estimation is presented. The parabolic equation model is adopted as the forward propagation model and is used to establish the training database between the radar sea clutter power and the evaporation duct height. The comparison of the RVM, LSSVM and RBFNN for evaporation duct estimation are investig...
Pande, Saket; Sharma, Ashish
2014-05-01
This study is motivated by the need to robustly specify, identify, and forecast runoff generation processes for hydroelectricity production. At a minimum, this requires the identification of significant predictors of runoff generation and the influence of each such predictor on runoff response. To this end, we compare two non-parametric algorithms for predictor subset selection. One is based on information theory and assesses predictor significance (and hence selection) using the Partial Information (PI) rationale of Sharma and Mehrotra (2014). The other is based on a frequentist approach that uses the bounds on probability of error concept of Pande (2005), assesses all possible predictor subsets on the fly, and converges to a predictor subset in a computationally efficient manner. Both algorithms approximate the underlying system by locally constant functions and select predictor subsets corresponding to these functions. The performance of the two algorithms is compared on a set of synthetic case studies as well as a real-world case study of inflow forecasting. References: Sharma, A., and R. Mehrotra (2014), An information theoretic alternative to model a natural system using observational information alone, Water Resources Research, 49, doi:10.1002/2013WR013845. Pande, S. (2005), Generalized local learning in water resource management, PhD dissertation, Utah State University, UT-USA, 148p.
Institute of Scientific and Technical Information of China (English)
Haixing Liu, Jing Lu, Ming Zhao, Yixing Yuan
2016-01-01
In order to compare two advanced multi-objective evolutionary algorithms, a multi-objective water distribution problem is formulated in this paper. Multi-objective optimization has received increasing attention in water distribution system design. On the one hand, the cost of a water distribution system, including capital, operational and maintenance cost, is the issue of most concern to utilities; on the other hand, improving the performance of water distribution systems is of equal importance, and is often in conflict with the previous goal. Many performance metrics for water networks have been developed in recent years, including total or maximum pressure deficit, resilience, inequity, probabilistic robustness, and risk measures. In this paper, a new resilience metric based on the energy analysis of water distribution systems is proposed. The two optimization objectives are capital cost and the new resilience index. A heuristic algorithm, speed-constrained multi-objective particle swarm optimization (SMPSO), extended from the multi-objective particle swarm algorithm, is introduced and compared with another state-of-the-art heuristic algorithm, NSGA-II. The solutions are evaluated by two metrics, namely spread and hypervolume. To illustrate the capability of SMPSO to efficiently identify good designs, two benchmark problems (the two-loop network and the Hanoi network) are employed. In several respects the results demonstrate that SMPSO is a competitive and promising tool for tackling the optimization of complex systems.
DEFF Research Database (Denmark)
Cook, Gerald; Lin, Ching-Fang
1980-01-01
The local linearization algorithm is presented as a possible numerical integration scheme to be used in real-time simulation. A second-order nonlinear example problem is solved using different methods. The local linearization approach is shown to require less computing time and give significant...
Pick-N Multiple Choice-Exams: A Comparison of Scoring Algorithms
Bauer, Daniel; Holzer, Matthias; Kopp, Veronika; Fischer, Martin R.
2011-01-01
To compare different scoring algorithms for Pick-N multiple correct answer multiple-choice (MC) exams regarding test reliability, student performance, total item discrimination and item difficulty. Data from six 3rd year medical students' end of term exams in internal medicine from 2005 to 2008 at Munich University were analysed (1,255 students,…
A comparison of 12 algorithms for matching on the propensity score.
Austin, Peter C
2014-03-15
Propensity-score matching is increasingly being used to reduce the confounding that can occur in observational studies examining the effects of treatments or interventions on outcomes. We used Monte Carlo simulations to examine the following algorithms for forming matched pairs of treated and untreated subjects: optimal matching, greedy nearest neighbor matching without replacement, and greedy nearest neighbor matching without replacement within specified caliper widths. For each of the latter two algorithms, we examined four different sub-algorithms defined by the order in which treated subjects were selected for matching to an untreated subject: lowest to highest propensity score, highest to lowest propensity score, best match first, and random order. We also examined matching with replacement. We found that (i) nearest neighbor matching induced the same balance in baseline covariates as did optimal matching; (ii) when at least some of the covariates were continuous, caliper matching tended to induce balance on baseline covariates that was at least as good as the other algorithms; (iii) caliper matching tended to result in estimates of treatment effect with less bias compared with optimal and nearest neighbor matching; (iv) optimal and nearest neighbor matching resulted in estimates of treatment effect with negligibly less variability than did caliper matching; (v) caliper matching had amongst the best performance when assessed using mean squared error; (vi) the order in which treated subjects were selected for matching had at most a modest effect on estimation; and (vii) matching with replacement did not have superior performance compared with caliper matching without replacement.
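A hedged sketch of greedy nearest-neighbour matching without replacement, with an optional caliper; the propensity scores below are invented, and the selection order simply follows dictionary insertion order, one of the orderings whose effect the paper examines.

```python
# Greedy nearest-neighbour propensity-score matching without replacement.
def greedy_match(treated, control, caliper=None):
    """treated/control: {id: propensity score}. Returns list of (t_id, c_id) pairs."""
    available = dict(control)
    pairs = []
    for t_id, t_ps in treated.items():     # selection order matters, as the paper notes
        if not available:
            break
        c_id = min(available, key=lambda c: abs(available[c] - t_ps))
        if caliper is None or abs(available[c_id] - t_ps) <= caliper:
            pairs.append((t_id, c_id))
            del available[c_id]            # without replacement
    return pairs

treated = {"t1": 0.30, "t2": 0.70}
control = {"c1": 0.32, "c2": 0.50, "c3": 0.71}
print(greedy_match(treated, control, caliper=0.05))
```

Tightening the caliper discards poor matches entirely rather than accepting them, which is the mechanism behind the bias reduction the paper reports for caliper matching.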
Delimata, Paweł
2010-01-01
We discuss two, in a sense extreme, kinds of nondeterministic rules in decision tables. The first kind of rules, called inhibitory rules, block only one decision value (i.e., they have all but one of the possible decisions on their right-hand sides). Contrary to this, any rule of the second kind, called a bounded nondeterministic rule, can have only a few decisions on its right-hand side. We show that both kinds of rules can be used for improving the quality of classification. In the paper, two lazy classification algorithms of polynomial time complexity are considered. These algorithms are based on deterministic and inhibitory decision rules, but the direct generation of rules is not required. Instead, for any new object the considered algorithms efficiently extract from a given decision table some information about the set of rules. Next, this information is used by a decision-making procedure. The reported results of experiments show that the algorithms based on inhibitory decision rules are often better than those based on deterministic decision rules. We also present an application of bounded nondeterministic rules in the construction of rule-based classifiers. We include the results of experiments showing that by combining rule-based classifiers based on minimal decision rules with bounded nondeterministic rules having confidence close to 1 and sufficiently large support, it is possible to improve the classification quality. © 2010 Springer-Verlag.
Performance Comparison of Reconstruction Algorithms in Discrete Blind Multi-Coset Sampling
DEFF Research Database (Denmark)
Grigoryan, Ruben; Arildsen, Thomas; Tandur, Deepaknath;
2012-01-01
This paper investigates the performance of different reconstruction algorithms in discrete blind multi-coset sampling. Multi-coset scheme is a promising compressed sensing architecture that can replace traditional Nyquist-rate sampling in the applications with multi-band frequency sparse signals...
A comparison of reconstruction algorithms for C-arm mammography tomosynthesis
International Nuclear Information System (INIS)
Digital tomosynthesis is an imaging technique to produce a tomographic image from a series of angular digital images in a manner similar to conventional focal plane tomography. Unlike film focal plane tomography, the acquisition of the data in a C-arm geometry causes the image receptor to be positioned at various angles to the reconstruction tomogram. The digital nature of the data allows for input images to be combined into the desired plane with the flexibility of generating tomograms of many separate planes from a single set of input data. Angular datasets were obtained of a low contrast detectability (LCD) phantom and a cadaver breast utilizing a Lorad stereotactic biopsy unit with a coupled source and digital detector in a C-arm configuration. Datasets of 9 and 41 low-dose projections were collected over a 30 deg. angular range. Tomographic images were reconstructed using a Backprojection (BP) algorithm, an Iterative Subtraction (IS) algorithm that allows the partial subtraction of out-of-focus planes, and an Algebraic Reconstruction (AR) algorithm. These were compared with single-view digital radiographs. The methods' effectiveness at enhancing visibility of an obscured LCD phantom was quantified in terms of the Signal to Noise Ratio (SNR) and Signal to Background Ratio (SBR), all normalized to the metric value for the single projection image. The methods' effectiveness at removing ghosting artifacts in a cadaver breast was quantified in terms of the Artifact Spread Function (ASF). The technology proved effective at partially removing out-of-focus structures and enhancing SNR and SBR. The normalized SNR was highest at 4.85 for the obscured LCD phantom, using nine projections and the IS algorithm. The normalized SBR was highest at 23.2 for the obscured LCD phantom, using 41 projections and the AR algorithm. The highest normalized metric values occurred with the obscured phantom. This supports the assertion that the greatest value of tomosynthesis is in imaging
Comparison between Acuros XB and Brainlab Monte Carlo algorithms for photon dose calculation
Energy Technology Data Exchange (ETDEWEB)
Misslbeck, M.; Kneschaurek, P. [Klinikum rechts der Isar der Technischen Univ. Muenchen (Germany). Klinik und Poliklinik fuer Strahlentherapie und Radiologische Onkologie
2012-07-15
Purpose: The Acuros® XB dose calculation algorithm by Varian and the Monte Carlo algorithm XVMC by Brainlab were compared with each other and with the well-established AAA algorithm, which is also from Varian. Methods: First, square fields were applied to two different artificial phantoms: (1) a 'slab phantom' with a 3 cm water layer, followed by a 2 cm bone layer, a 7 cm lung layer, and another 18 cm water layer and (2) a 'lung phantom' with water surrounding an eccentric lung block. For the slab phantom, depth-dose curves along the central beam axis were compared. The lung phantom was used to compare profiles at depths of 6 and 14 cm. As clinical cases, the CTs of three different patients were used. The original AAA plans using open fields were recalculated with all three algorithms. Results: There were only minor differences between Acuros and XVMC in all artificial phantom depth doses and profiles; however, this was different for AAA, which had deviations of up to 13% in depth dose and a few percent in the profiles in the lung phantom. These deviations did not translate into the clinical cases, where the dose-volume histograms of all algorithms were close to each other for open fields. Conclusion: Only within artificial phantoms with clearly separated layers of simulated tissue does AAA show differences at layer boundaries compared to XVMC or Acuros. In real patient CTs, these differences were not observed in the dose-volume histogram of the planning target volume. (orig.)
Directory of Open Access Journals (Sweden)
Yong Tian
2014-12-01
Full Text Available State of charge (SOC) estimation is essential to battery management systems in electric vehicles (EVs) to ensure the safe operation of batteries and to provide drivers with the remaining range of the EVs. A number of estimation algorithms have been developed to obtain an accurate SOC value, because the SOC cannot be directly measured with sensors and is closely related to various factors such as ambient temperature, current rate and battery aging. In this paper, two model-based adaptive algorithms, the adaptive unscented Kalman filter (AUKF) and the adaptive slide mode observer (ASMO), are applied and compared in terms of convergence behavior, tracking accuracy, computational cost and estimation robustness against parameter uncertainties of the battery model in SOC estimation. Two typical driving cycles, the Dynamic Stress Test (DST) and the New European Driving Cycle (NEDC), are applied to evaluate the performance of the two algorithms. The comparison results show that the AUKF has merits in convergence ability and tracking accuracy given an accurate battery model, while the ASMO has lower computational cost and better estimation robustness against parameter uncertainties of the battery model.
Directory of Open Access Journals (Sweden)
Manel Hlaili
2016-01-01
Full Text Available Photovoltaic (PV) energy is one of the most important energy sources since it is clean and inexhaustible. It is important to operate PV energy conversion systems at the maximum power point (MPP) to maximize the output energy of PV arrays. An MPPT control is necessary to extract maximum power from the PV arrays. In recent years, a large number of techniques have been proposed for tracking the maximum power point. This paper presents a comparison of different MPPT methods, proposes one that uses a power estimator, and analyses their suitability for systems that experience a wide range of operating conditions. The classic methods analysed, the incremental conductance (IncCond), perturbation and observation (P&O), and ripple correlation (RC) algorithms, are suitable and practical. Simulation results of a single-phase NPC grid-connected PV system operating with the aforementioned methods are presented to confirm the effectiveness of the scheme and algorithms. The simulation results verify the correct operation of the different MPPT methods and the proposed algorithm.
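A minimal perturb-and-observe (P&O) sketch on an invented linear-current PV model; the step size and curve are illustrative choices, not the NPC grid-connected system of the paper.

```python
# P&O: perturb the operating voltage, observe the power change, and reverse
# direction whenever power falls. The operating point hunts around the MPP.
def pv_power(v):
    return v * max(0.0, 8.0 - 0.2 * v)     # toy I(V) = 8 - 0.2*V, so MPP at V = 20

def perturb_and_observe(v0=5.0, step=0.5, iters=100):
    v, p = v0, pv_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_new = v + direction * step
        p_new = pv_power(v_new)
        if p_new < p:                       # power fell: reverse the perturbation
            direction = -direction
        v, p = v_new, p_new
    return v, p

v_mpp, p_mpp = perturb_and_observe()
print(v_mpp, p_mpp)
```

The residual oscillation of one step around the MPP is the classic P&O drawback that incremental conductance and ripple-correlation methods try to reduce.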
Directory of Open Access Journals (Sweden)
Murat KUL
2014-07-01
Full Text Available The purpose of the study is to compare the Multiple Intelligence domains of candidates who took the special aptitude test of a School of Physical Education and Sports, distinguishing those who qualified for registration from those who did not. The survey model was used in the research. Of the 785 candidates who applied to the Bartin University School of Physical Education and Sports Special Ability Test for the 2013-2014 academic year, the sample comprised 536 volunteer candidates with a mean age of 21.15 ± 2.66 years. As data collection tools, a personal information form and the "Multiple Intelligences Inventory" developed by Özden (2003) for the identification of multiple intelligences were applied. The reliability coefficient was found to be .96. The SPSS data analysis program was used to evaluate the data, employing frequency, mean and standard deviation from the descriptive statistical techniques; in addition, taking into account the normal distribution of the data, the independent-sample t-test was used. According to the findings of the study, a statistically significant difference was found in the "Bodily-Kinesthetic Intelligence" domain of Multiple Intelligences, with successful candidates scoring higher on average than unsuccessful ones. A statistically significant result was also observed in the "Social-Interpersonal Intelligence" levels of candidates who qualified for registration compared with those who did not; the successful candidates carried the dominant features in this area. As a result, for the "Verbal-Linguistic Intelligence", "Logical-Mathematical Intelligence", "Musical-Rhythmic Intelligence", "Bodily-Kinesthetic Intelligence" and "Social-Interpersonal Intelligence" domains of the Multiple Intelligence Areas, candidates who participated in Physical Education
Energy Technology Data Exchange (ETDEWEB)
Fan, Chengguang [College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha 410073, PR China and Department of Mechanical Engineering, University of Bristol, Queen's Building, University Walk, Bristol BS8 1TR (United Kingdom); Drinkwater, Bruce W. [Department of Mechanical Engineering, University of Bristol, Queen's Building, University Walk, Bristol BS8 1TR (United Kingdom)
2014-02-18
In this paper the performance of the total focusing method is compared with the widely used time-reversal MUSIC super-resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time-domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low-noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.
Comparison Study on the Battery SoC Estimation with EKF and UKF Algorithms
Directory of Open Access Journals (Sweden)
Hongwen He
2013-09-01
Full Text Available The battery state of charge (SoC), whose estimation is one of the basic functions of the battery management system (BMS), is a vital input parameter in the energy management and power distribution control of electric vehicles (EVs). In this paper, two methods based on an extended Kalman filter (EKF) and an unscented Kalman filter (UKF), respectively, are proposed to estimate the SoC of a lithium-ion battery used in EVs. The lithium-ion battery is modeled with the Thevenin model and the model parameters are identified based on experimental data and validated with the Beijing Driving Cycle. Then the state-space equations used for SoC estimation are established. The SoC estimation results with the EKF and UKF are compared in terms of accuracy and convergence. It is concluded that both algorithms perform well, while the UKF performs better, with faster convergence and higher accuracy.
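A hedged one-state Kalman-filter sketch: coulomb-counting prediction corrected by a voltage measurement through an assumed linear OCV(SoC) relation. Because the model is linear, this reduces to a plain KF rather than the paper's EKF/UKF, and every battery number below is invented.

```python
# 1-state Kalman filter for SoC: predict by coulomb counting, correct with a
# voltage measurement under a hypothetical linear OCV model v = 3.0 + 1.0*soc.
import random

def soc_kf(currents, voltages, dt, cap_as, soc0=0.5, p0=0.1, q=1e-7, r=1e-3):
    soc, p = soc0, p0
    out = []
    for i, v in zip(currents, voltages):
        soc -= i * dt / cap_as            # predict: coulomb counting
        p += q
        h = 1.0                           # d(v)/d(soc) for the linear OCV
        k = p * h / (h * p * h + r)       # Kalman gain
        soc += k * (v - (3.0 + 1.0 * soc))
        p *= (1 - k * h)
        out.append(soc)
    return out

rng = random.Random(0)
cap = 3600.0                              # 1 Ah in ampere-seconds (hypothetical cell)
true_soc, dt, i_load = 0.8, 1.0, 1.0
currents, voltages = [], []
for _ in range(200):
    true_soc -= i_load * dt / cap
    currents.append(i_load)
    voltages.append(3.0 + true_soc + rng.gauss(0, 0.01))
est = soc_kf(currents, voltages, dt, cap, soc0=0.5)   # start from a wrong SoC guess
print(est[-1], true_soc)
```

Even with a deliberately wrong initial SoC, the voltage correction pulls the estimate onto the true trajectory; the nonlinear OCV curves of real cells are what force the EKF/UKF machinery of the paper.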
A comparison of thermal algorithms of fuel rod performance code systems
International Nuclear Information System (INIS)
The goal of fuel rod performance analysis is to assess the robustness of a fuel rod and its cladding material. Computer simulation of fuel rod performance has become an important part of designing and evaluating new nuclear fuels and claddings. To construct a computing code system for fuel rod performance, several algorithms of the existing fuel rod performance code systems are compared and summarized as a preliminary work. Among several code systems, FRAPCON and FEMAXI for LWRs, ELESTRES for CANDU reactors, and LIFE for fast reactors are reviewed. The thermal algorithms of the above codes are investigated, including methodologies and subroutines. This work will be utilized to construct a computing code system for dry process fuel rod performance
Salim, Umer
2010-01-01
In multi-user communication from one base station (BS) to multiple users, the problem of minimizing the transmit power to achieve some target guaranteed performance (rates) at users has been well investigated in the literature. Similarly various user selection algorithms have been proposed and analyzed when the BS has to transmit to a subset of the users in the system, mostly for the objective of the sum rate maximization. We study the joint problem of minimizing the transmit power at the BS to achieve specific signal-to-interference-and-noise ratio (SINR) targets at users in conjunction with user scheduling. The general analytical results for the average transmit power required to meet guaranteed performance at the users' side are difficult to obtain even without user selection due to joint optimization required over beamforming vectors and power allocation scalars. We study the transmit power minimization problem with various user selection algorithms, namely semi-orthogonal user selection (SUS), norm-based...
Directory of Open Access Journals (Sweden)
M. Frutos
2013-01-01
Many of the problems that arise in production systems can be handled with multiobjective techniques. One such problem is that of scheduling operations subject to constraints on the availability of machines and buffer capacity. In this paper we analyze different multiobjective evolutionary algorithms (MOEAs) for this kind of problem. We consider an experimental framework in which we schedule production operations for four real-world job-shop contexts using three algorithms: NSGAII, SPEA2, and IBEA. Using two performance indexes, hypervolume and R2, we found that SPEA2 and IBEA are the most efficient for the tasks at hand. IBEA, moreover, seems to be the better choice of tool, since it yields more solutions in the approximate Pareto frontier.
Directory of Open Access Journals (Sweden)
Howard Williams
2014-05-01
Stochastic diffusion search (SDS) is a multi-agent global optimisation technique based on the behaviour of ants, rooted in the partial evaluation of an objective function and direct communication between agents. Standard SDS, the fundamental algorithm at work in all SDS processes, is presented here. Parameter estimation is the task of suitably fitting a model to given data; some form of parameter estimation is a key element of many computer vision processes. Here, the task of hyperplane estimation in many dimensions is investigated. Following RANSAC (random sample consensus), a widely used optimisation technique and a standard approach to many parameter estimation problems, increasingly sophisticated data-driven forms of SDS are developed. The performance of these SDS algorithms and of RANSAC is analysed and compared for a hyperplane estimation task. SDS is shown to perform similarly to RANSAC, with potential for tuning to particular search problems for improved results.
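RANSAC, the baseline technique the SDS variants are compared against, is easy to sketch. The following minimal 2-D line-fitting version (not the paper's many-dimensional hyperplane implementation; parameters are illustrative) shows the hypothesise-and-count structure:

```python
import numpy as np

def ransac_line(points, n_iter=200, tol=0.1, rng=None):
    """Fit a 2-D line y = a*x + b by random sample consensus.

    Repeatedly draws 2 points, fits the line through them, and keeps
    the model supported by the largest set of inliers (residual < tol).
    """
    rng = np.random.default_rng(rng)
    best_model, best_inliers = None, 0
    for _ in range(n_iter):
        i, j = rng.choice(len(points), size=2, replace=False)
        (x1, y1), (x2, y2) = points[i], points[j]
        if x1 == x2:                      # degenerate vertical sample
            continue
        a = (y2 - y1) / (x2 - x1)
        b = y1 - a * x1
        residuals = np.abs(points[:, 1] - (a * points[:, 0] + b))
        inliers = int(np.sum(residuals < tol))
        if inliers > best_inliers:
            best_model, best_inliers = (a, b), inliers
    return best_model, best_inliers
```

The same hypothesise-and-test loop generalises to hyperplanes by sampling d points in d dimensions and solving the resulting linear system.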
Betremieux, Yan
2015-01-01
Atmospheric refraction affects, to various degrees, exoplanet transit, lunar eclipse, and stellar occultation observations. Exoplanet retrieval algorithms often use analytical expressions for the column abundance along a ray traversing the atmosphere and for the deflection of that ray; these are first-order approximations valid for low densities in a spherically symmetric, homogeneous, isothermal atmosphere. We derive new analytical formulae for both of these quantities, which are valid for higher densities, and use them to refine and validate a new ray-tracing algorithm which can be used for arbitrary atmospheric temperature-pressure profiles. We illustrate with simple isothermal atmospheric profiles the consequences of our model for different planets: temperate Earth-like and Jovian-like planets, as well as HD189733b and GJ1214b. We find that, for both hot exoplanets, our treatment of refraction does not make much of a difference at pressures as high as 10 atmospheres, but that it is important to ...
International Nuclear Information System (INIS)
In this paper the performance of the total focusing method is compared with the widely used time-reversal MUSIC super-resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time-domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that in the low-noise situation, time-reversal MUSIC provides enhanced lateral resolution compared to the total focusing method. However, at higher noise levels, the total focusing method remains robust, whilst the performance of time-reversal MUSIC is significantly degraded.
Verbeeck, Cis; Higgins, Paul A.; Colak, Tufan; Watson, Fraser T.; Delouille, Veronique; Mampaey, Benjamin; Qahwaji, Rami
2011-01-01
Since the Solar Dynamics Observatory (SDO) began recording ~ 1 TB of data per day, there has been an increased need to automatically extract features and events for further analysis. Here we compare the overall detection performance, correlations between extracted properties, and usability for feature tracking of four solar feature-detection algorithms: the Solar Monitor Active Region Tracker (SMART) detects active regions in line-of-sight magnetograms; the Automated Solar Activity Prediction...
Design Optimization of Induction Motor by Genetic Algorithm and Comparison with Existing Motor
Çunkaş, Mehmet; AKKAYA, Ramazan
2006-01-01
This paper presents an optimal design method to optimize three-phase induction motor in manufacturing process. The optimally designed motor is compared with an existing motor having the same ratings. The Genetic Algorithm is used for optimization and three objective functions namely torque, efficiency, and cost are considered. The motor design procedure consists of a system of non-linear equations, which imposes induction motor characteristics, motor performance, magnetic stresses and thermal...
Performance Comparison of Three Parallel Implementations of a Schwarz Splitting Algorithm
Gamble, Jim; Ribbens, Calvin J.
1989-01-01
We describe three implementations of a Schwarz splitting algorithm for the numerical solution of two-dimensional, second-order, linear elliptic partial differential equations. One implementation makes use of the SCHEDULE package. A second uses the language extensions available in SEQUENT Fortran for creating and controlling parallel processes. The third implementation is a hybrid of the first two, using explicit (non-portable) calls to create and control parallel processes, but using dat...
A comparison of waveform processing algorithms for single-wavelength LiDAR bathymetry
Wang, Chisheng; Li, Qingquan; Liu, Yanxiong; Wu, Guofeng; Liu, Peng; Ding, Xiaoli
2015-03-01
Due to their low cost and lightweight units, single-wavelength LiDAR bathymetric systems are an ideal option for shallow-water bathymetric retrieval. In this paper, we test six algorithms for single-wavelength bathymetric waveform processing: peak detection (PD), the average square difference function (ASDF), Gaussian decomposition (GD), quadrilateral fitting (QF), Richardson-Lucy deconvolution (RLD), and Wiener filter deconvolution (WD). To date, most of these algorithms have only been applied to topographic LiDAR waveforms captured over land. A simulated dataset and an Optech Aquarius dataset were used to assess the algorithms, with the focus on their capability of extracting the depth and the bottom response. The influence of a number of water and equipment parameters was also investigated by means of a Monte Carlo method. The results showed that the RLD method had superior performance in terms of a high detection rate and low errors in the retrieved depth and magnitude. The attenuation coefficient, noise level, water depth, and bottom reflectance had significant influences on the measurement error of the retrieved depth, while the effects of scan angle and water-surface roughness were less pronounced.
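Of the six algorithms, Richardson-Lucy deconvolution is the best performer reported above. A minimal 1-D sketch of the iteration (the generic textbook form, not the authors' bathymetric implementation; the kernel and iteration count are illustrative) looks like this:

```python
import numpy as np

def richardson_lucy(observed, psf, n_iter=50):
    """1-D Richardson-Lucy deconvolution (all signals non-negative).

    Iteratively refines an estimate of the true signal so that, once
    convolved with the system response psf, it reproduces `observed`.
    """
    psf = psf / psf.sum()                 # normalize the kernel
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_flipped = psf[::-1]               # correlation kernel
    for _ in range(n_iter):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate
```

Applied to a waveform formed by blurring two sparse returns, the iteration sharpens the blurred peaks back toward their original positions, which is why RLD recovers depth (peak separation) well.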
Comparison of the period detection algorithms based on Pi of the Sky data
Opiela, Rafał; Mankiewicz, Lech; Żarnecki, Aleksander Filip
2015-09-01
The Pi of the Sky is a system of five autonomous detectors designed for continuous observation of the night sky, mainly looking for optical flashes of astrophysical origin, in particular from gamma-ray bursts (GRBs). In the Pi of the Sky project we also study many kinds of variable stars (with periods in the range 0.5 d - 1000.0 d) and take part in multiwavelength observing campaigns, such as the DG CVn outburst observations. Our wide-field-of-view robotic telescopes are located at the San Pedro de Atacama Observatory, Chile, and the INTA El Arenosillo Observatory, Spain, and were designed for monitoring a large fraction of the sky down to the 12m-13m range with a time resolution of the order of 1-10 seconds. In the analysis of variable star observations, accurate determination of the variability parameters is very important. Many algorithms can be used for the variability analysis of the observed stars. In this article, using Monte Carlo analysis, we compare all the period-detection algorithms we use for data of astronomical origin. Based on the tests performed, we show which algorithm gives the best period-detection quality and try to derive an approximate formula describing the period-detection error. We also give some examples of this calculation based on variable stars observed by our detectors. At the end of this article we show how removing bad measurements from the analysed light curve affects the accuracy of the period detection.
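As one concrete example of a simple period-detection algorithm of the kind compared in such studies, here is a sketch of the Lafler-Kinman string-length method. The Pi of the Sky pipeline's actual algorithms are not specified in this abstract, so this is purely illustrative.

```python
import numpy as np

def string_length_period(t, mag, trial_periods):
    """Lafler-Kinman string-length period search.

    For each trial period, fold the light curve in phase and sum the
    magnitude differences between phase-adjacent points; the true
    period minimises this "string length".
    """
    best_p, best_len = None, np.inf
    for p in trial_periods:
        phase = (t / p) % 1.0             # fold times into [0, 1)
        order = np.argsort(phase)
        m = mag[order]
        # total "string" length, closing the loop from last to first
        length = np.sum(np.abs(np.diff(m))) + abs(m[0] - m[-1])
        if length < best_len:
            best_p, best_len = p, length
    return best_p
```

Monte Carlo tests of such searches, as in the article, repeat this on many simulated light curves with varying noise and sampling to estimate the period-detection error.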
Institute of Scientific and Technical Information of China (English)
Paweł CZARNUL
2014-01-01
This paper compares the quality and execution times of several algorithms for scheduling service-based workflow applications with changeable service availability and parameters. A workflow is defined as an acyclic directed graph with nodes corresponding to tasks and edges to dependencies between tasks. For each task, one out of several available services needs to be chosen and scheduled to minimize the workflow execution time and keep the cost of the services within the budget. During the execution of a workflow, some services may become unavailable, new ones may appear, and costs and execution times may change with a certain probability. Rescheduling is needed to obtain a better schedule. A solution is proposed showing how integer linear programming can be used to solve this problem, obtaining optimal solutions for smaller problems or suboptimal solutions for larger ones. It is compared side-by-side with GAIN, divide-and-conquer, and genetic algorithms for various probabilities of service unavailability or change in service parameters. The algorithms are implemented and subsequently tested in a real BeesyCluster environment.
Wong, Y M; Wong, Yin Mei; Wilkie, Joshua
2006-01-01
Since the introduction of the Black-Scholes model, stochastic processes have played an increasingly important role in mathematical finance. In many cases prices, volatility and other quantities can be modeled using stochastic ordinary differential equations. Available methods for solving such equations have until recently been markedly inferior to analogous methods for deterministic ordinary differential equations. Recently, a number of methods which employ variable stepsizes to control local error have been developed and appear to offer greatly improved speed and accuracy. Here we conduct a comparative study of the performance of these algorithms on problems taken from the mathematical finance literature.
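The simplest fixed-stepsize baseline against which such variable-stepsize methods are measured is the Euler-Maruyama scheme. A sketch for geometric Brownian motion (the Black-Scholes asset model dS = mu*S dt + sigma*S dW) follows; all parameter values are illustrative:

```python
import numpy as np

def euler_maruyama_gbm(s0, mu, sigma, T, n_steps, rng):
    """Simulate dS = mu*S dt + sigma*S dW with the Euler-Maruyama scheme."""
    dt = T / n_steps
    s = np.empty(n_steps + 1)
    s[0] = s0
    for k in range(n_steps):
        dw = rng.normal(0.0, np.sqrt(dt))   # Brownian increment
        s[k + 1] = s[k] + mu * s[k] * dt + sigma * s[k] * dw
    return s
```

Variable-stepsize solvers refine this idea by shrinking dt where an embedded local-error estimate is large, which is the source of the speed and accuracy gains the abstract refers to.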
Directory of Open Access Journals (Sweden)
R. Stübi
2009-12-01
This paper presents a first statistical validation of tropospheric ozone products derived from measurements of the IASI satellite instrument. Since the end of 2006, IASI (Infrared Atmospheric Sounding Interferometer), aboard the polar orbiter Metop-A, has measured infrared spectra of the Earth's atmosphere in nadir geometry. This validation covers the northern mid-latitudes and the period from July 2007 to August 2008. Retrieval results from four different sources are presented: three are scientific products (LATMOS, LISA, LPMAA) and the fourth is the pre-operational product distributed by EUMETSAT (version 4.2). The products are derived from different algorithms with different approaches. These differences and their implications for the retrieved products are discussed. In order to evaluate the quality and the performance of each product, comparisons with the vertical ozone concentration profiles measured by balloon sondes are performed, leading to estimates of the systematic and random errors in the IASI ozone products (profiles and partial columns). A first comparison is performed on the given profiles; a second comparison takes into account the altitude-dependent sensitivity of the retrievals. Tropospheric columnar amounts are compared to the sondes for a lower tropospheric column (surface to about 6 km) and a "total" tropospheric column (surface to about 11 km). On average, both tropospheric columns have small biases for the scientific products: less than 2 Dobson Units (DU) for the lower troposphere and less than 1 DU for the total troposphere. The comparison of the still pre-operational EUMETSAT columns shows higher mean differences of about 5 DU.
Energy Technology Data Exchange (ETDEWEB)
Dong, Feng; Pierpaoli, Elena; Gunn, James E.; Wechsler, Risa H.
2007-10-29
We present a modified adaptive matched filter algorithm designed to identify clusters of galaxies in wide-field imaging surveys such as the Sloan Digital Sky Survey. The cluster-finding technique is fully adaptive to imaging surveys with spectroscopic coverage, multicolor photometric redshifts, no redshift information at all, or any combination of these within one survey. It works with high efficiency in multi-band imaging surveys where photometric redshifts can be estimated with well-understood error distributions. Tests of the algorithm on realistic mock SDSS catalogs suggest that the detected sample is approximately 85% complete and over 90% pure for clusters with masses above 1.0 x 10^14 h^-1 M and redshifts up to z = 0.45. The errors of the cluster redshifts estimated by the maximum likelihood method are shown to be small (typically less than 0.01) over the whole redshift range, with photometric redshift errors typical of those found in the Sloan survey. Inside the spherical radius corresponding to a galaxy overdensity of Δ = 200, we find the derived cluster richness Λ_200 to be a roughly linear indicator of the virial mass M_200, which recovers well the relation between total luminosity and cluster mass of the input simulation.
Chandratilleke, Dinusha; Silvestrini, Roger; Culican, Sue; Campbell, David; Byth-Wilson, Karen; Swaminathan, Sanjay; Lin, Ming-Wei
2016-08-01
Extractable nuclear antigen (ENA) antibody testing is often requested in patients with suspected connective tissue diseases. Most laboratories in Australia use a two-step process involving a high-sensitivity screening assay followed by a high-specificity confirmation test. Multiplexing technology with the Addressable Laser Bead Immunoassay (e.g., FIDIS) offers simultaneous detection of multiple antibody specificities, allowing single-step screening and confirmation. We compared our current diagnostic laboratory testing algorithm [Organtec ELISA screen / Euroimmun line immunoassay (LIA) confirmation] with the FIDIS Connective Profile. A total of 529 samples (443 consecutive + 86 of known autoantibody positivity) were run through both algorithms, and 479 samples (90.5%) were concordant. The same autoantibody profile was detected in 100 samples (18.9%), and 379 were concordant negative samples (71.6%). The 50 discordant samples (9.5%) were subdivided into 'likely FIDIS or current method correct' or 'unresolved' based on ancillary data. 'Unresolved' samples (n = 25) were subclassified into 'potentially' versus 'potentially not' clinically significant based on the change to the clinical interpretation. Only nine samples (1.7%) were deemed 'potentially clinically significant'. Overall, we found that the FIDIS Connective Profile ENA kit is non-inferior to the current ELISA screen/LIA characterisation. Reagent and capital costs may be limiting factors in adopting the FIDIS, but potential benefits include single-step analysis and simultaneous detection of dsDNA antibodies.
Energy Technology Data Exchange (ETDEWEB)
Fabri, Daniella; Bhatia, Amon [Medical Univ. of Vienna (Austria). Center of Medical Physics and Biomedical Engineering; Zambrano, Valentina [Medical Univ. of Vienna (Austria). Dept. of Radiotherapy; and others
2013-07-01
We present an evaluation of various non-rigid registration algorithms for the purpose of compensating interfractional motion of the target volume and organs-at-risk areas when acquiring CBCT image data prior to irradiation. Three different deformable registration (DR) methods were used: the Demons algorithm implemented in the iPlan software (BrainLAB AG, Feldkirchen, Germany) and two custom-developed piecewise methods using either a normalized correlation or a mutual information metric (featurelet_NC and featurelet_MI). These methods were tested on data acquired using a novel purpose-built phantom for deformable registration and on clinical CT/CBCT data of prostate and lung cancer patients. The Dice similarity coefficient (DSC) between manually drawn contours and the contours generated by a derived deformation field of the structures in question was compared to the result obtained with rigid registration (RR). For the phantom, the piecewise methods were slightly superior: featurelet_NC for the intramodality and featurelet_MI for the intermodality registrations. For the prostate cases, the DSC was improved over RR in less than 50% of the images studied. Deformable registration methods improved the outcome over rigid registration for the lung cases and in the phantom study, but not significantly for the prostate study. A significantly superior deformation method could not be identified. (orig.)
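The Dice similarity coefficient used as the evaluation metric above is straightforward to compute; a minimal sketch for binary masks:

```python
import numpy as np

def dice_coefficient(a, b):
    """Dice similarity coefficient between two binary masks:
    DSC = 2|A ∩ B| / (|A| + |B|); 1 = perfect overlap, 0 = disjoint."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    denom = a.sum() + b.sum()
    if denom == 0:
        return 1.0   # both masks empty: define as perfect agreement
    return 2.0 * np.logical_and(a, b).sum() / denom
```

In a registration study such as this one, `a` would be the manually drawn contour mask and `b` the mask produced by warping the contour with the derived deformation field.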
Piironen, A. K.; Eloranta, E. W.
1992-01-01
This paper presents wind measurements made with the University of Wisconsin Volume Imaging Lidar (VIL) during Aug. 1989 as part of the First ISLSCP (International Satellite Land Surface Climatology Project) Field Experiment (FIFE). Enhancements to the algorithm are described. Comparisons of these results to aircraft, balloon, and surface based wind measurements are presented. Observations of the spatial variance of aerosol backscatter are also compared to measurements of the convective boundary layer depth. Measurements are based on two-dimensional cross correlations between horizontal image planes showing the spatial distribution of aerosol scattering observed by the lidar at intervals of approximately 3 minutes. Each image plane covers an area of 500-1000 sq km and the winds calculated represent area averages.
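The core of such a correlation wind-retrieval scheme, estimating the displacement between two aerosol image planes, can be sketched with an FFT-based cross-correlation. This is a generic integer-pixel version, not the VIL pipeline itself; the wind vector would then be the estimated shift divided by the time between frames.

```python
import numpy as np

def estimate_shift(frame1, frame2):
    """Estimate the integer-pixel displacement between two frames via
    FFT-based circular cross-correlation."""
    f1 = frame1 - frame1.mean()
    f2 = frame2 - frame2.mean()
    # cross-correlation theorem: corr = IFFT[conj(FFT(f1)) * FFT(f2)]
    corr = np.fft.ifft2(np.fft.fft2(f1).conj() * np.fft.fft2(f2)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts into the range [-N/2, N/2)
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return tuple(shifts)
```

Sub-pixel accuracy, as needed for real wind retrievals, is usually obtained by interpolating the correlation surface around the peak.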
Kim, R S J; Postman, M; Strauss, M A; Bahcall, Neta A; Gunn, J E; Lupton, R H; Annis, J; Nichol, R C; Castander, F J; Brinkmann, J; Brunner, R J; Connolly, A; Csabai, I; Hindsley, R B; Ivezic, Z; Vogeley, M S; York, D G; Kim, Rita S. J.; Kepner, Jeremy V.; Postman, Marc; Strauss, Michael A.; Bahcall, Neta A.; Gunn, James E.; Lupton, Robert H.; Annis, James; Nichol, Robert C.; Castander, Francisco J.; Brunner, Robert J.; Connolly, Andrew; Csabai, Istvan; Hindsley, Robert B.; Ivezic, Zeljko; Vogeley, Michael S.; York, Donald G.
2002-01-01
We present a comparison of three cluster finding algorithms from imaging data using Monte Carlo simulations of clusters embedded in a 25 deg^2 region of Sloan Digital Sky Survey (SDSS) imaging data: the Matched Filter (MF; Postman et al. 1996), the Adaptive Matched Filter (AMF; Kepner et al. 1999) and a color-magnitude filtered Voronoi Tessellation Technique (VTT). Of the two matched filters, we find that the MF is more efficient in detecting faint clusters, whereas the AMF evaluates the redshifts and richnesses more accurately, suggesting a hybrid method (HMF) that combines the two. The HMF outperforms the VTT when using a background that is uniform, but it is more sensitive to the presence of a non-uniform galaxy background than is the VTT; this is due to the assumption of a uniform background in the HMF model. We thus find that for the detection thresholds we determine to be appropriate for the SDSS data, the performance of both algorithms is similar; we present the selection function for eac...
Comparison of Three Greedy Routing Algorithms for Efficient Packet Forwarding in VANET
Directory of Open Access Journals (Sweden)
K. Lakshmi
2012-01-01
VANETs (Vehicular Ad hoc Networks) are highly mobile wireless ad hoc networks that will play an important role in public safety communications and commercial applications. In a VANET, the nodes, which are vehicles, can move safely at high speed and must communicate quickly and reliably. When an accident occurs on a road or highway, alarm messages must be disseminated, rather than routed ad hoc, to inform all other vehicles. Vehicular ad hoc network architecture and cellular technology are combined to achieve intelligent communication and improve road traffic safety and efficiency. A VANET can perform effective communication by utilizing routing information. In this paper, we discuss three greedy routing algorithms and compare them to show which is most efficient at delivering packets in terms of mobility, number of nodes and transmission range.
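The common core of greedy geographic routing schemes, forwarding the packet to the neighbour that makes the most progress toward the destination, can be sketched as follows. This is an illustration of the shared greedy-forwarding idea, not any one of the three algorithms of the paper; coordinates and the local-maximum handling are simplified.

```python
import math

def greedy_next_hop(current, destination, neighbours, radio_range):
    """Greedy geographic forwarding: hand the packet to the neighbour
    within radio range that is closest to the destination, or return
    None if no neighbour improves on the current node (a local maximum,
    where real protocols fall back to a recovery mode)."""
    def dist(a, b):
        return math.hypot(a[0] - b[0], a[1] - b[1])

    best, best_d = None, dist(current, destination)
    for n in neighbours:
        if dist(current, n) <= radio_range and dist(n, destination) < best_d:
            best, best_d = n, dist(n, destination)
    return best
```

Variants of greedy routing differ mainly in the progress metric (distance, direction, or a combination) and in how they recover from the local-maximum case returned as `None` here.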
A comparison algorithm to check LTSA Layer 1 and SCORM compliance in e-Learning sites
Sengupta, Souvik; Banerjee, Nilanjan
2012-01-01
The success of e-Learning is largely dependent on the impact of its multimedia-aided learning content on the learner over the hypermedia. e-Learning portals with different proportions of multimedia elements have different impacts on the learner, as there is a lack of standardization. The Learning Technology System Architecture (LTSA) Layer 1 deals with the effect of the environment on the learner. From an information technology perspective, it specifies learner interaction from the environment to the learner via multimedia content. The Sharable Content Object Reference Model (SCORM) is a collection of standards and specifications for the content of web-based e-Learning and specifies how a JavaScript API can be used to integrate content development. In this paper an examination is made of the design features of the interactive multimedia components of learning packages, by creating an algorithm which gives a comparative study of the multimedia components used by different learning packages. The resultant graph as output helps...
VHDL IMPLEMENTATION AND COMPARISON OF COMPLEX MULTIPLIER USING BOOTH'S AND VEDIC ALGORITHM
Directory of Open Access Journals (Sweden)
Rajashri K. Bhongade
2015-11-01
The basic idea for designing a complex-number multiplier is adopted from the design of an ordinary multiplier. The ancient Indian mathematics of the "Vedas" is used for designing the multiplier unit. There are 16 sutras in the Vedas, from which the Urdhva Tiryakbhyam sutra (method) was selected for implementing complex multiplication; the Urdhva Tiryakbhyam sutra is applicable to all cases of multiplication. Using this sutra, which works vertically and crosswise, any multi-bit multiplication can be reduced to single-bit multiplications and additions. The partial products and sums are generated in a single step, which reduces the carry propagation from LSB to MSB. In this paper, simulation results for 4-bit complex-number multiplication using Booth's algorithm and using the Vedic sutra are illustrated. The implementation of the Vedic mathematics and its application to the complex multiplier were checked for parameters like propagation delay.
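The vertical-and-crosswise (Urdhva Tiryakbhyam) scheme described above can be sketched in software terms: column k of the product collects all digit products a[i]*b[j] with i+j = k, and carries are propagated once at the end. This illustrates the arithmetic pattern only, not the paper's VHDL design; digit lists and base are illustrative.

```python
def urdhva_multiply(a_digits, b_digits, base=10):
    """Urdhva Tiryakbhyam (vertical and crosswise) multiplication.

    a_digits and b_digits are most-significant-first digit lists of
    equal length n. Column k of the result sums all crosswise digit
    products a[i]*b[j] with i+j = k; carries then propagate once from
    the least to the most significant column.
    """
    n = len(a_digits)
    cols = [0] * (2 * n - 1)
    for i in range(n):
        for j in range(n):
            cols[i + j] += a_digits[i] * b_digits[j]
    # propagate carries from the least significant column upwards
    result, carry = [], 0
    for c in reversed(cols):
        carry, digit = divmod(c + carry, base)
        result.append(digit)
    while carry:
        carry, digit = divmod(carry, base)
        result.append(digit)
    return result[::-1]   # most-significant-first digits
```

In hardware, the appeal is that all column products form in parallel in a single step, so the carry chain is the only sequential part; with base=2 the same routine models the binary multiplier case.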
Martin, Jacob A.; Gross, Kevin C.
2016-05-01
As off-nadir viewing platforms become increasingly prevalent in remote sensing, material identification techniques must be robust to changing viewing geometries. Current identification strategies generally rely on estimating reflectivity or emissivity, both of which vary with viewing angle. Presented here is a technique, leveraging polarimetric and hyperspectral imaging (P-HSI), to estimate index of refraction which is invariant to viewing geometry. Results from a quartz window show that index of refraction can be retrieved to within 0.08 rms error from 875-1250 cm-1 for an amorphous material. Results from a silicon carbide (SiC) wafer, which has much sharper features than quartz glass, show the index of refraction can be retrieved to within 0.07 rms error. The results from each of these datasets show an improvement when compared with a maximum smoothness TES algorithm.
Directory of Open Access Journals (Sweden)
Peter Domonkos
2013-01-01
Efficiency evaluations for the change-point Detection methods used in nine major Objective Homogenization Methods (DOHMs) are presented. The evaluations are conducted using ten different simulated datasets and four efficiency measures: detection skill, skill of linear trend estimation, sum of squared errors, and a combined efficiency measure. The test datasets applied have a diverse set of inhomogeneity (IH) characteristics and include one dataset that is similar to the monthly benchmark temperature dataset of the European benchmarking effort known by the acronym COST HOME. The performance of DOHMs is highly dependent on the characteristics of the test datasets and efficiency measures. Measures of skill differ markedly according to the frequency and mean duration of inhomogeneities and vary with the ratio of IH magnitudes to background noise. The study focuses on cases in which high-quality relative time series (i.e., the difference between a candidate and a reference series) can be created, but the frequency and intensity of inhomogeneities are high. The results show that in these cases the Caussinus-Mestre method is the most effective, although appreciably good results can also be achieved with several other DOHMs, such as Multiple Analysis of Series for Homogenisation, the Bayes method, Multiple Linear Regression, and the Standard Normal Homogeneity Test.
Trofimov, Alexey O; Kalentiev, George; Voennov, Oleg; Yuriev, Michail; Agarkova, Darya; Trofimova, Svetlana; Bragin, Denis E
2016-01-01
The aim of this work was a comparison of two algorithms of perfusion computed tomography (PCT) data analysis for evaluation of cerebral microcirculation in the perifocal zone of chronic subdural hematoma (CSDH). Twenty patients with CSDH after polytrauma were included in the study. The same PCT data were assessed quantitatively in the cortical brain region beneath the CSDH (zone 1) and in the corresponding contralateral brain hemisphere (zone 2), without and with the use of a perfusion calculation mode excluding vascular pixels, 'Remote Vessels' (RV); the 1st and 2nd analysis methods, respectively. Comparison with normal values for perfusion indices in zone 1 with the 1st analysis method showed a significant (p < 0.01) increase in CBV and CBF, and no significant increase in MTT and TTP. Use of the RV mode (2nd analysis method) showed no statistically reliable change of perfusion parameters in the microcirculatory blood flow of the 2nd zone. Maintenance of microcirculatory blood flow perfusion reflects the preservation of cerebral blood flow autoregulation in patients with CSDH. PMID:27526170
Sun, X. H.; Rudnick, L.; Akahori, Takuya; Anderson, C. S.; Bell, M. R.; Bray, J. D.; Farnes, J. S.; Ideguchi, S.; Kumazaki, K.; O'Brien, T.; O'Sullivan, S. P.; Scaife, A. M. M.; Stepanov, R.; Stil, J.; Takahashi, K.; van Weeren, R. J.; Wolleben, M.
2015-02-01
Faraday rotation measures (RMs) and more general Faraday structures are key parameters for studying cosmic magnetism and are also sensitive probes of faint ionized thermal gas. A definition of which derived quantities are required for various scientific studies is needed, as well as addressing the challenges in determining Faraday structures. A wide variety of algorithms has been proposed to reconstruct these structures. In preparation for the Polarization Sky Survey of the Universe's Magnetism (POSSUM) to be conducted with the Australian Square Kilometre Array Pathfinder and the ongoing Galactic Arecibo L-band Feeds Array Continuum Transit Survey (GALFACTS), we run a Faraday structure determination data challenge to benchmark the currently available algorithms, including Faraday synthesis (previously called RM synthesis in the literature), wavelet, compressive sampling, and QU-fitting. The input models include sources with one Faraday thin component, two Faraday thin components, and one Faraday thick component. The frequency set is similar to POSSUM/GALFACTS with a 300 MHz bandwidth from 1.1 to 1.4 GHz. We define three figures of merit motivated by the underlying science: (1) an average RM weighted by polarized intensity, RM_wtd, (2) the separation Δφ of two Faraday components, and (3) the reduced chi-squared χ_r^2. Based on the current test data with a signal-to-noise ratio of about 32, we find the following. (1) When only one Faraday thin component is present, most methods perform as expected, with occasional failures where two components are incorrectly found. (2) For two Faraday thin components, QU-fitting routines perform the best, with errors close to the theoretical ones for RM_wtd but with significantly higher errors for Δφ. All other methods, including standard Faraday synthesis, frequently identify only one component when Δφ is below or near the width of the Faraday point-spread function. (3) No methods as currently implemented work well for
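The discrete Faraday synthesis sum at the heart of several of these methods is compact; a minimal sketch, using the usual λ²-centred form of the sum, with an illustrative channel set and RM grid:

```python
import numpy as np

def faraday_synthesis(P, lam2, phi_grid):
    """Discrete Faraday synthesis: reconstruct the Faraday dispersion
    function F(phi) from the complex polarization P measured at
    wavelengths-squared lam2, derotating about the mean lambda^2."""
    lam2_0 = lam2.mean()
    return np.array([np.mean(P * np.exp(-2j * phi * (lam2 - lam2_0)))
                     for phi in phi_grid])
```

For a Faraday-thin source, the phasors align at the true RM, so |F(φ)| peaks there; the width of that peak is the Faraday point-spread function whose role in separating close components the abstract discusses.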
Comparison of fringe tracking algorithms for single-mode near-infrared long baseline interferometers
Choquet, Élodie; Perrin, Guy; Cassaing, Frédéric; Lacour, Sylvestre; Eisenhauer, Frank
2014-01-01
To enable optical long baseline interferometry toward faint objects, long integrations are necessary despite atmospheric turbulence. Fringe trackers are needed to stabilize the fringes and thus increase the fringe visibility and phase signal-to-noise ratio (SNR), with efficient controllers robust to instrumental vibrations, and to subsequent path fluctuations and flux drop-outs. We report on simulations, analysis and comparison of the performances of a classical integrator controller and of a Kalman controller, both optimized to track fringes under realistic observing conditions for different source magnitudes, disturbance conditions, and sampling frequencies. The key parameters of our simulations (instrument photometric performance, detection noise, turbulence and vibrations statistics) are based on typical observing conditions at the Very Large Telescope observatory and on the design of the GRAVITY instrument, a 4-telescope single-mode long baseline interferometer in the near-infrared, next in line to be in...
Kitsionas, S; Federrath, C; Schmidt, W; Price, D; Dursi, J; Gritschneder, M; Walch, S; Piontek, R; Kim, J; Jappsen, A -K; Ciecielag, P; Mac Low, M -M
2008-01-01
Simulations of astrophysical turbulence have reached a level of sophistication at which quantitative results are now starting to emerge. Contradictory results have been reported in the literature, however, with respect to the performance of the numerical techniques employed for its study and their relevance to the physical systems modelled. We aim to characterise the performance of a number of hydrodynamics codes on the modelling of turbulence decay. This is the first such large-scale comparison ever conducted. We have driven compressible, supersonic, isothermal turbulence with GADGET and then let it decay in the absence of gravity, using a number of grid codes (ENZO, FLASH, TVD, ZEUS) and SPH codes (GADGET, VINE, PHANTOM). We have analysed the results of our numerical experiments using a variety of statistical measures, ranging from energy spectrum functions (power spectra), to velocity structure functions, to probability distribution functions. At the low numerical resolution employed here, the performance of the var...
ECG De-noising: A comparison between EEMD-BLMS and DWT-NN algorithms.
Kærgaard, Kevin; Jensen, Søren Hjøllund; Puthusserypady, Sadasivan
2015-08-01
Electrocardiogram (ECG) is a widely used non-invasive method to study the rhythmic activity of the heart and thereby to detect abnormalities. However, these signals are often obscured by artifacts from various sources, and minimization of these artifacts is of paramount importance. This paper proposes two adaptive techniques, namely EEMD-BLMS (Ensemble Empirical Mode Decomposition in conjunction with the Block Least Mean Square algorithm) and DWT-NN (Discrete Wavelet Transform followed by a Neural Network), for minimizing the artifacts in recorded ECG signals, and compares their performance. The methods were first compared on two types of simulated noise-corrupted ECG signals: Type-I (desired ECG plus noise frequencies outside the ECG frequency band) and Type-II (ECG plus noise frequencies both inside and outside the ECG frequency band). Subsequently, they were tested on real ECG recordings. The results clearly show that both methods work equally well on Type-I signals. On Type-II signals, however, the DWT-NN performed better. On real ECG data, though both methods performed similarly, the DWT-NN method was slightly better at minimizing the high-frequency artifacts.
Comparison of parametric FBP and OS-EM reconstruction algorithm images for PET dynamic study
Energy Technology Data Exchange (ETDEWEB)
Oda, Keiichi; Uemura, Koji; Kimura, Yuichi; Senda, Michio [Tokyo Metropolitan Inst. of Gerontology (Japan). Positron Medical Center; Toyama, Hinako; Ikoma, Yoko
2001-10-01
An ordered subsets expectation maximization (OS-EM) algorithm is used for image reconstruction to suppress image noise and to produce non-negative-valued images. We have applied OS-EM to a digital brain phantom and to human brain {sup 18}F-FDG PET kinetic studies to generate parametric images. A 45 min dynamic scan was performed, starting at the injection of FDG, with a 2D PET scanner. The images were reconstructed with OS-EM (6 iterations, 16 subsets) and with filtered backprojection (FBP), and K1, k2 and k3 images were created by the Marquardt non-linear least squares method based on the 3-parameter kinetic model. Although the OS-EM activity images correlated fairly well with those obtained by FBP, the pixel correlations were poor for the k2 and k3 parametric images; nevertheless, the plots were scattered along the line of identity, and the mean values of K1, k2 and k3 obtained by OS-EM were almost equal to those obtained by FBP. The kinetic fitting error for OS-EM was no smaller than that for FBP. These results suggest that OS-EM is not necessarily superior to FBP for creating parametric images. (author)
Implementing and Comparison between Two Algorithms to Make a Decision in a Wireless Sensors Network
Directory of Open Access Journals (Sweden)
Fouad Essahlaoui
2016-08-01
Full Text Available The clinical presentation of acute poisoning by CO and hydrocarbon gas (butane, CAS 106-97-8) varies depending on terrain, humidity, temperature, duration of exposure and the concentration of the toxic gas: from consciousness disorders (100 ppm, or 15%), which rapidly force miners back to ambient air and oxygen, to sudden coma (300 ppm, or 45%) requiring hospitalization in a monitoring unit; without intervention within a few minutes, death occurs at the poisoning site [1]. A leak from the butane filling plant located very close to the Faculty motivated a gas-detection project, in which a set of sensors was deployed to warn of possible leaks that could affect the students, teachers and staff of the institution. This paper therefore describes the implementation of two methods, an averaging filter and the CUSUM algorithm, to make a warning decision from the signal given by the wireless sensors [9], [14-15] installed on the inner side of the Faculty of Science and Technology in Errachidia.
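The two decision methods named above can be sketched as follows. This is a minimal illustration with hypothetical parameters: the target level, slack and alarm threshold below are illustrative, not values from the paper.

```python
# Sketch of an averaging filter plus a one-sided CUSUM test for
# raising a gas-leak alarm from a wireless sensor signal.

def moving_average(signal, window=5):
    """Smooth the raw sensor readings with a simple averaging filter."""
    out = []
    for i in range(len(signal)):
        lo = max(0, i - window + 1)
        chunk = signal[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

def cusum_alarm(signal, target=50.0, slack=5.0, threshold=40.0):
    """One-sided CUSUM: accumulate deviations above target+slack and
    raise an alarm when the cumulative sum exceeds the threshold.
    Returns the index of the first alarm, or None."""
    s = 0.0
    for i, x in enumerate(signal):
        s = max(0.0, s + (x - target - slack))
        if s > threshold:
            return i
    return None

# Simulated concentration readings (ppm): normal level, then a leak.
readings = [48, 52, 50, 49, 51] * 4 + [70, 85, 95, 110, 130]
smoothed = moving_average(readings)
alarm_at = cusum_alarm(smoothed)  # index where the alarm fires
```

Smoothing first suppresses isolated spikes, so the CUSUM stage reacts to a sustained rise rather than to single noisy samples.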
Comparison of most adaptive meta model With newly created Quality Meta-Model using CART Algorithm
Directory of Open Access Journals (Sweden)
Jasbir Malik
2012-09-01
Full Text Available To ensure that the software developed is of high quality, it is now widely accepted that the various artifacts generated during the development process should be rigorously evaluated using a domain-specific quality model. However, a domain-specific quality model should be derived from a generic quality model which is time-proven, well-validated and widely accepted. This thesis lays down a clear definition of a quality meta-model and then identifies the various quality meta-models existing in the research and practice domains. It then compares the existing quality meta-models, using a set of criteria, to identify which model is the most adaptable to various domains. The categories are specified using the CART algorithm, which is a purely tree-structured architecture whose decision making works on true/false tests against the meta-model: if an item is found in a given category it falls under the true branch, otherwise under the false branch.
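The true/false splitting at the heart of CART can be illustrated with a minimal Gini-based split search. The feature names and toy data below are hypothetical, standing in for meta-model comparison criteria; this is not the thesis's actual model.

```python
# Minimal sketch of a CART-style binary split: pick the feature and
# threshold whose true/false partition minimizes weighted Gini impurity.

def gini(labels):
    """Gini impurity of a list of class labels."""
    n = len(labels)
    if n == 0:
        return 0.0
    return 1.0 - sum((labels.count(c) / n) ** 2 for c in set(labels))

def best_split(rows, labels, feature_count):
    """Return (feature, threshold, weighted_gini) of the best split."""
    best = (None, None, float("inf"))
    n = len(rows)
    for f in range(feature_count):
        for t in sorted({r[f] for r in rows}):
            true_y = [y for r, y in zip(rows, labels) if r[f] <= t]
            false_y = [y for r, y in zip(rows, labels) if r[f] > t]
            score = (len(true_y) * gini(true_y) +
                     len(false_y) * gini(false_y)) / n
            if score < best[2]:
                best = (f, t, score)
    return best

# Toy data: [criteria_met, domains_covered] -> adaptable? (1/0)
X = [[3, 1], [5, 4], [2, 1], [6, 5], [1, 2], [7, 6]]
y = [0, 1, 0, 1, 0, 1]
feature, threshold, score = best_split(X, y, 2)
```

Each node produced this way answers one true/false question; recursing on the two partitions yields the full decision tree.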
Swartz, W. H.; Bucsela, E. J.; Lamsal, L. N.; Celarier, E. A.; Krotkov, N. A.; Bhartia, P. K.; Strahan, S. E.; Gleason, J. F.; Herman, J.; Pickering, K.
2012-01-01
Nitrogen oxides (NOx = NO + NO2) are important atmospheric trace constituents that impact tropospheric air pollution chemistry and air quality. We have developed a new NASA algorithm for the retrieval of stratospheric and tropospheric NO2 vertical column densities using measurements from the nadir-viewing Ozone Monitoring Instrument (OMI) on NASA's Aura satellite. The new products rely on an improved approach to stratospheric NO2 column estimation and stratosphere-troposphere separation and a new monthly NO2 climatology based on the NASA Global Modeling Initiative chemistry-transport model. The retrieval does not rely on daily model profiles, minimizing the influence of a priori information. We evaluate the retrieved tropospheric NO2 columns using surface in situ (e.g., AQS/EPA), ground-based (e.g., DOAS), and airborne measurements (e.g., DISCOVER-AQ). The new, improved OMI tropospheric NO2 product is available at high spatial resolution for the years 2005-present. We believe that this product is valuable for the evaluation of chemistry-transport models, examining the spatial and temporal patterns of NOx emissions, constraining top-down NOx inventories, and for the estimation of NOx lifetimes.
Ben Said, Mourad; Galai, Yousr; Mhadhbi, Moez; Jedidi, Mohamed; de la Fuente, José; Darghouth, Mohamed Aziz
2012-11-23
The ixodid ticks of the genus Hyalomma are important pests of livestock, having major medical and veterinary significance in Northern Africa. Besides their direct pathogenic effects, these species are vectors of important diseases of livestock and, in some instances, of zoonoses. Anti-tick vaccines developed in Australia and Cuba based on the concealed antigen Bm86 have variable efficacy against H. anatolicum and H. dromedarii. This variation in vaccine efficacy could be explained by the variability in protein sequence between the recombinant Bm86 vaccine and the Bm86 orthologs expressed in different Hyalomma species. Bm86 orthologs from three Hyalomma tick species were amplified in two overlapping fragments and sequenced. The amino acid identity of Hmm86, He86 and Hdr86 (the orthologs of Bm86 in H. marginatum marginatum, H. excavatum and H. dromedarii, respectively) with the Bm86 proteins from Rhipicephalus microplus (Australia, Argentina and Mozambique) ranged between 60 and 66%. The obtained amino acid sequences of Hmm86, He86 and Hdr86 were compared with the Hd86-A1 sequence from H. scupense used as an experimental vaccine. The results showed identities of 91, 88 and 87% for Hmm86, He86 and Hdr86, respectively. A specific program was used to predict B-cell epitope sites. The comparison of antigenic sites between Hd86-A1 and Hmm86/Hdr86/He86 revealed a diversity affecting 4, 8 and 12 antigenic peptides, respectively, out of a total of 28 antigenic peptides. When the Bm86 ortholog amplification protocol adopted in this study was applied to H. excavatum, two alleles named He86p2a1 and He86p2a2 were detected in this species. This is the first time that two different alleles of the Bm86 gene have been recorded in the same tick specimen. He86p2a1 and He86p2a2 showed an amino acid identity of 92%. When He86p2a1 and He86p2a2 were compared to the corresponding sequence of the Hd86-A1 protein, identities of 86.4 and 91.0% were recorded, respectively. When
A task-based comparison of two reconstruction algorithms for digital breast tomosynthesis
Mahadevan, Ravi; Ikejimba, Lynda C.; Lin, Yuan; Samei, Ehsan; Lo, Joseph Y.
2014-03-01
Digital breast tomosynthesis (DBT) generates 3-D reconstructions of the breast by taking X-ray projections at various angles around the breast. DBT improves cancer detection as it minimizes the tissue overlap that is present in traditional 2-D mammography. In this work, two methods of reconstruction, filtered backprojection (FBP) and Newton-Raphson iterative reconstruction, were used to create 3-D reconstructions from phantom images acquired on a breast tomosynthesis system. A task-based image analysis method was used to compare the performance of each reconstruction technique. The task simulated a 10 mm lesion within the breast containing iodine concentrations between 0.0 mg/ml and 8.6 mg/ml. The task transfer function (TTF) was calculated using the reconstruction of an edge phantom, and the noise power spectrum (NPS) was measured with a structured breast phantom (CIRS 020) over different exposure levels. The detectability index d' was calculated to assess the image quality of the reconstructed phantom images. Image quality was assessed for both conventional single-energy and dual-energy subtracted reconstructions. Dose allocation between the high- and low-energy scans was also examined. Over the full range of dose allocations, the iterative reconstruction yielded a higher detectability index than FBP for single-energy reconstructions. For dual-energy subtraction, the detectability index was maximized when most of the dose was allocated to the high-energy image. With that dose allocation, the performance trend for the reconstruction algorithms reversed: FBP performed better than the corresponding iterative reconstruction. However, FBP performance varied very erratically with changing dose allocation. Therefore, iterative reconstruction is preferred for both imaging modalities despite underperforming dual-energy FBP, as it provides stable results.
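The abstract does not give the exact formula combining TTF and NPS into d'. One common choice in this literature is the non-prewhitening model observer, d'^2 = (Σ W²·TTF²·df)² / Σ W²·TTF²·NPS·df, where W is the task function; the curves below are purely illustrative, not the study's measurements.

```python
# Hedged sketch of a non-prewhitening detectability index computed
# from a task function W(f), a TTF(f) and an NPS(f) on a 1-D
# frequency grid. All inputs here are synthetic placeholders.
import math

def detectability(freqs, task, ttf, nps):
    df = freqs[1] - freqs[0]
    signal = sum(w * w * t * t for w, t in zip(task, ttf)) * df
    noise = sum(w * w * t * t * n
                for w, t, n in zip(task, ttf, nps)) * df
    return math.sqrt(signal * signal / noise)

freqs = [0.1 * i for i in range(1, 51)]               # cycles/mm
task = [1.0 / f for f in freqs]                       # disc-like task
ttf = [math.exp(-f) for f in freqs]                   # resolution roll-off
nps = [0.01 * f * math.exp(-0.5 * f) for f in freqs]  # ramp-like noise

d_prime = detectability(freqs, task, ttf, nps)
```

One sanity check this form satisfies: halving the NPS (e.g. doubling dose in the quantum-limited regime) raises d' by exactly a factor of sqrt(2).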
Characterization and Comparison of the 10-2 SITA-Standard and Fast Algorithms
Directory of Open Access Journals (Sweden)
Yaniv Barkana
2012-01-01
Full Text Available Purpose: To compare the 10-2 SITA-standard and SITA-fast visual field programs in patients with glaucoma. Methods: We enrolled 26 patients with open angle glaucoma with involvement of at least one paracentral location on the 24-2 SITA-standard field test. Each subject performed 10-2 SITA-standard and SITA-fast tests. Within 2 months this sequence of tests was repeated. Results: SITA-fast was 30% shorter than SITA-standard (5.5±1.1 vs 7.9±1.1 minutes, p<0.001). Mean MD was statistically significantly higher for SITA-standard compared with SITA-fast at the first visit (Δ=0.3 dB, p=0.017) but not the second visit. The inter-visit difference in MD or in the number of depressed points was not significant for either program. Bland-Altman analysis showed that clinically significant variations can exist in individual instances between the 2 programs and between repeat tests with the same program. Conclusions: The 10-2 SITA-fast algorithm is significantly shorter than SITA-standard. The two programs have similar long-term variability. Average same-visit between-program and same-program between-visit sensitivity results were similar for the study population, but clinically significant variability was observed for some individual test pairs. Group inter- and intra-program test results may be comparable, but in the management of the individual patient, field change should be verified by repeat testing.
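The Bland-Altman analysis mentioned above reduces to a bias (mean paired difference) and 95% limits of agreement. A minimal sketch follows; the paired MD values are illustrative, not the study's data.

```python
# Hedged sketch of a Bland-Altman agreement analysis between paired
# measurements from two test programs.
import math

def bland_altman(a, b):
    """Return (bias, lower limit, upper limit of agreement)."""
    diffs = [x - y for x, y in zip(a, b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = math.sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical MD values (dB) from the two programs, same patients.
md_standard = [-4.2, -6.1, -2.8, -9.5, -3.3, -7.0]
md_fast     = [-4.6, -5.9, -3.5, -9.1, -3.8, -7.6]
bias, lower, upper = bland_altman(md_standard, md_fast)
```

Individual pairs falling outside (lower, upper) are the "clinically significant variations in individual instances" the abstract refers to.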
Xie, H.; Hendrickx, J.; Kurc, S.; Small, E.
2002-12-01
Evapotranspiration (ET) is one of the most important components of the water balance, but also one of the most difficult to measure. Field techniques such as soil water balances and Bowen ratio or eddy covariance techniques are local, ranging from point to field scale. SEBAL (Surface Energy Balance Algorithm for Land) is an image-processing model that calculates ET and other energy exchanges at the earth's surface. SEBAL uses satellite image data (TM/ETM+, MODIS, AVHRR, ASTER, and so on) measuring visible, near-infrared, and thermal infrared radiation. The SEBAL algorithms predict a complete radiation and energy balance for the surface along with fluxes of sensible heat and aerodynamic surface roughness (Bastiaanssen et al., 1998; Allen et al., 2001). We are constructing a GIS-based database that includes spatially distributed estimates of ET from remotely sensed data at a resolution of about 30 m. The SEBAL code will be optimized for this region via comparison with surface-based observations of ET, reference ET (from windspeed, solar radiation, humidity, air temperature, and rainfall records), surface temperature, albedo, and so on. The observed data are collected at a series of towers in the middle Rio Grande Basin. The satellite image provides only the instantaneous ET (ET_inst); estimating 24-hour ET (ET_24) therefore requires assumptions. Two such assumptions will be evaluated for the study area: (1) that the instantaneous evaporative fraction (EF) is equal to the 24-hour averaged value, and (2) that the instantaneous ETrF (analogous to a 'crop coefficient', equal to instantaneous ET divided by instantaneous reference ET) is equal to the 24-hour averaged value. Seasonal ET will be estimated by expanding the 24-hour ET proportionally to a reference ET derived from weather data. References: Bastiaanssen, W.G.M., M. Menenti, R.A. Feddes, and A.A.M. Holtslag, 1998, A remote sensing surface energy balance algorithm for land (SEBAL): 1
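The two extrapolation assumptions can be written out directly. A minimal sketch follows; the overpass fluxes, daily available energy and reference-ET values are hypothetical numbers chosen only to show the arithmetic.

```python
# Sketch of the two 24-hour ET extrapolation assumptions: (1) constant
# evaporative fraction EF, and (2) constant reference-ET fraction ETrF.

LAMBDA = 2.45e6  # latent heat of vaporization, J/kg (~20 degC)

def et24_from_ef(le_inst, rn_inst, g_inst, rn24_minus_g24):
    """Assumption 1: EF = LE/(Rn - G) at overpass holds all day.
    rn24_minus_g24 in J/m^2/day -> ET in mm/day (kg/m^2/day)."""
    ef = le_inst / (rn_inst - g_inst)
    return ef * rn24_minus_g24 / LAMBDA

def et24_from_etrf(et_inst, etr_inst, etr_24):
    """Assumption 2: ETrF = ET_inst/ETr_inst ('crop coefficient')
    holds all day. Inputs in mm/h (instantaneous) and mm/day."""
    return (et_inst / etr_inst) * etr_24

# Hypothetical overpass values: LE=300 W/m^2, Rn=500, G=50 -> EF=2/3;
# daily available energy 12 MJ/m^2.
et_a = et24_from_ef(300.0, 500.0, 50.0, 12.0e6)
# Hypothetical ET_inst=0.5 mm/h, ETr_inst=0.7 mm/h, ETr_24=7 mm.
et_b = et24_from_etrf(0.5, 0.7, 7.0)
```

The two assumptions generally give different ET_24 values for the same scene, which is exactly why the abstract proposes evaluating both against tower data.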
Directory of Open Access Journals (Sweden)
Congcong Li
2014-01-01
Full Text Available Although a large number of new image classification algorithms have been developed, they are rarely tested on the same classification task. In this research, with the same Landsat Thematic Mapper (TM) data set and the same classification scheme over Guangzhou City, China, we tested two unsupervised and 13 supervised classification algorithms, including a number of machine learning algorithms that became popular in remote sensing during the past 20 years. Our analysis focused primarily on the spectral information provided by the TM data. We assessed all algorithms in a per-pixel classification decision experiment and all supervised algorithms in a segment-based experiment. We found that when sufficiently representative training samples were used, most algorithms performed reasonably well. A lack of training samples led to greater classification accuracy discrepancies than the classification algorithms themselves. Some algorithms were more tolerant of insufficient (less representative) training samples than others. Many algorithms improved the overall accuracy marginally with per-segment decision making.
Hudson, Parisa; Hudson, Stephen D; Handler, William B; Scholl, Timothy J; Chronik, Blaine A
2010-04-01
High-performance shim coils are required for high-field magnetic resonance imaging and spectroscopy. Complete sets of high-power and high-performance shim coils were designed using two different methods: the minimum inductance and the minimum power target field methods. A quantitative comparison of shim performance in terms of merit of inductance (ML) and merit of resistance (MR) was made for shim coils designed using the minimum inductance and the minimum power design algorithms. In each design case, the differences in ML and in MR given by the two design methods were small. Minimum inductance designs tend to feature oscillations within the current density, while minimum power designs tend to feature less rapidly varying current densities and lower power dissipation. Overall, the differences in coil performance obtained by the two methods are relatively small. For the specific case of shim systems customized for small-animal imaging, the reduced power dissipation obtained when using the minimum power method is judged to be more significant than the improvements in switching speed obtained from the minimum inductance method.
Directory of Open Access Journals (Sweden)
P. S. Hiremath
2014-11-01
Full Text Available In mobile ad-hoc networks (MANET), the movement of the nodes may quickly change the network's topology, increasing the message overhead of topology maintenance. The nodes communicate with each other by exchanging hello packets and constructing the neighbor list at each node. MANETs are vulnerable to attacks such as the black hole attack, gray hole attack, worm hole attack and Sybil attack. A black hole attack has a serious impact on routing, packet delivery ratio, throughput, and end-to-end delay of packets. In this paper, the performance of clustering-based and threshold-based algorithms for the detection and prevention of cooperative black hole attacks in MANETs is examined. In this study every node is monitored by its own cluster head (CH), while a server (SV) monitors the entire network by the channel-overhearing method. The server computes a trust value based on the sent and received packet counts of the receiver node. It is implemented using the AODV routing protocol in NS2 simulations. The results are obtained by comparing the performance of the clustering-based and threshold-based methods while varying the concentration of black hole nodes, and are analyzed in terms of throughput and packet delivery ratio. The results demonstrate that the threshold-based method outperforms the clustering-based method in terms of throughput, packet delivery ratio and end-to-end delay.
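The threshold-based idea can be sketched as follows: the server derives a trust value from each node's sent/received packet counts and flags nodes whose forwarding ratio falls below a threshold. The node names, counts and threshold are hypothetical; the paper's exact trust formula is not given in the abstract.

```python
# Hedged sketch of threshold-based black hole detection from
# packet counts observed by a monitoring server.

def trust_value(packets_received, packets_forwarded):
    """Fraction of received packets the node actually forwarded."""
    if packets_received == 0:
        return 1.0  # no evidence against the node yet
    return packets_forwarded / packets_received

def detect_black_holes(stats, threshold=0.4):
    """stats: {node_id: (received, forwarded)} as observed via
    channel overhearing. Returns suspected black hole node ids."""
    return sorted(node for node, (rx, fw) in stats.items()
                  if trust_value(rx, fw) < threshold)

observed = {
    "n1": (120, 115),  # well-behaved forwarder
    "n2": (90, 4),     # drops nearly everything -> suspected
    "n3": (0, 0),      # no traffic observed yet
}
suspects = detect_black_holes(observed)
```

A black hole node advertises attractive routes and then silently drops traffic, which is exactly the low forwarded/received ratio this rule targets.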
A comparison of iterative algorithms and a mixed approach for in-line x-ray phase retrieval.
Meng, Fanbo; Zhang, Da; Wu, Xizeng; Liu, Hong
2009-08-15
Previous studies have shown that iterative in-line x-ray phase retrieval algorithms may achieve higher precision than direct retrieval algorithms. This communication compares three iterative phase retrieval algorithms in terms of accuracy and efficiency using computer simulations. We found that the Fourier-transformation-based algorithm (FT) converges fastest, while the Poisson-solver-based algorithm (PS) has higher precision. The traditional Gerchberg-Saxton algorithm (GS) is very slow and sometimes did not converge in our tests. A mixed FT-PS algorithm is then presented to achieve both high efficiency and high accuracy. The mixed algorithm is tested using simulated images with different noise levels and experimentally obtained images of a piece of chicken breast muscle.
A Comparison of Standard One-Step DDA Circular Interpolators with a New Cheap Two-Step Algorithm
Directory of Open Access Journals (Sweden)
Leonid Moroz
2014-01-01
Full Text Available We present and study existing digital differential analyzer (DDA) algorithms for circle generation, including an improved two-step DDA algorithm which can be implemented solely in terms of elementary shifts, addition, and subtraction.
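The shift-and-add principle these generators share can be shown with the classic one-step DDA (the Minsky-style recurrence x += y>>n; y -= x>>n). The paper's improved two-step variant is not reproduced in the abstract, so the sketch below shows only the standard one-step form it refines; the radius, step exponent and fixed-point width are illustrative.

```python
# One-step DDA circle generator using only shifts and add/subtract,
# in fixed-point integer arithmetic.

def dda_circle(radius, n, steps):
    """Trace a circle of the given radius with angular step ~2**-n.
    Uses 16 fractional bits; every update is a shift plus add/sub."""
    x, y = radius << 16, 0
    pts = []
    for _ in range(steps):
        x += y >> n
        y -= x >> n   # uses the already-updated x: keeps orbit stable
        pts.append((x >> 16, y >> 16))
    return pts

pts = dda_circle(100, 6, 400)       # ~one full revolution at n=6
r2 = [px * px + py * py for px, py in pts]
max_r2, min_r2 = max(r2), min(r2)   # should stay near 100**2
```

Using the updated x in the y update is what makes the orbit closed (it traces a near-circular ellipse) instead of spiraling outward.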
Directory of Open Access Journals (Sweden)
Małgorzata Stramska
2013-02-01
Full Text Available The quasi-synoptic view available from satellites has been broadly used in recent years to observe in near-real time the large-scale dynamics of marine ecosystems and to estimate primary productivity in the world ocean. However, the standard global NASA ocean colour algorithms generally do not produce good results in the Baltic Sea. In this paper, we compare the ability of seven algorithms to estimate depth-integrated daily primary production (PP, mg C m-2) in the Baltic Sea. All the algorithms use surface chlorophyll concentration, sea surface temperature, photosynthetic available radiation, latitude, longitude and day of the year as input data. Algorithm-derived PP is then compared with PP estimates obtained from 14C uptake measurements. The results indicate that the best agreement between the modelled and measured PP in the Baltic Sea is obtained with the DESAMBEM algorithm. This result supports the notion that a regional approach should be used in the interpretation of ocean colour satellite data in the Baltic Sea.
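None of the seven algorithms is spelled out in the abstract. As an illustration of the input-output shape they share (chlorophyll, SST and light in; mg C m-2 d-1 out), here is a VGPM-style estimate in the manner of Behrenfeld and Falkowski; the Baltic input values are hypothetical.

```python
# Hedged sketch of a VGPM-style depth-integrated daily primary
# production estimate from surface chlorophyll, SST and PAR.

def pb_opt(sst):
    """Optimal carbon fixation rate (mg C mg Chl^-1 h^-1), the VGPM
    7th-degree polynomial of sea surface temperature."""
    c = [1.2956, 2.749e-1, 6.17e-2, -2.05e-2, 2.462e-3,
         -1.348e-4, 3.4132e-6, -3.27e-8]
    return sum(ci * sst ** i for i, ci in enumerate(c))

def daily_pp(chl, sst, par, day_length, z_eu):
    """Depth-integrated daily PP, mg C m^-2 d^-1 (VGPM form)."""
    return (0.66125 * pb_opt(sst) * par / (par + 4.1)
            * chl * z_eu * day_length)

# Illustrative Baltic summer values: Chl 3 mg m^-3, SST 16 degC,
# PAR 45 mol photons m^-2 d^-1, 17 h day, 15 m euphotic depth.
pp = daily_pp(3.0, 16.0, 45.0, 17.0, 15.0)
```

Regional algorithms such as DESAMBEM replace pieces of this chain (notably the photosynthesis-temperature-light parameterization) with locally tuned relationships, which is the point the abstract makes about the Baltic.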
Directory of Open Access Journals (Sweden)
Fatemeh Masoudnia
2013-11-01
Full Text Available In this paper three optimal approaches to designing a PID controller for a Gryphon robot are presented. The three applied approaches are the Artificial Bee Colony and Shuffled Frog Leaping algorithms and a neuro-fuzzy system. The design goal is to minimize the integral absolute error and to improve the transient response by minimizing the overshoot, settling time and rise time of the step response. An objective function of these indexes is defined and minimized by applying the Shuffled Frog Leaping (SFL) algorithm, the Artificial Bee Colony (ABC) algorithm and a neuro-fuzzy system (FNN). After optimization of the objective function, the optimal parameters for the PID controller are adjusted. Simulation results show that the FNN has a remarkable effect in decreasing the settling time and rise time and eliminating the steady-state error, while the SFL algorithm performs better on steady-state error and the ABC algorithm is better at decreasing the overshoot. In the steady state, all of the methods react robustly to the disturbance, but the FNN shows more stability in the transient response.
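The performance indexes being traded off above (IAE, overshoot, settling time, rise time) can be computed directly from a sampled step response. A minimal sketch follows; the synthetic second-order response stands in for a closed-loop PID simulation, and the 2% settling band is an assumption.

```python
# Sketch of the step-response performance indexes an optimizer
# would fold into its objective function.
import math

def step_metrics(t, y, setpoint=1.0, settle_band=0.02):
    dt = t[1] - t[0]
    iae = sum(abs(setpoint - yi) * dt for yi in y)
    overshoot = max(0.0, (max(y) - setpoint) / setpoint * 100.0)
    settling = 0.0  # last instant the response leaves the +/-2% band
    for ti, yi in zip(t, y):
        if abs(yi - setpoint) > settle_band * setpoint:
            settling = ti
    rise = next(ti for ti, yi in zip(t, y) if yi >= 0.9 * setpoint)
    return iae, overshoot, settling, rise

# Underdamped 2nd-order response: y(t) = 1 - e^-t (cos 3t + sin(3t)/3)
dt = 0.01
t = [i * dt for i in range(1500)]
y = [1 - math.exp(-ti) * (math.cos(3 * ti) + math.sin(3 * ti) / 3)
     for ti in t]
iae, ov, ts, tr = step_metrics(t, y)
```

A weighted sum of these four numbers is the kind of scalar objective that SFL, ABC or a neuro-fuzzy tuner would minimize over the PID gains.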
Małgorzata Stramska; Agata Zuzewicz
2013-01-01
The quasi-synoptic view available from satellites has been broadly used in recent years to observe in near-real time the large-scale dynamics of marine ecosystems and to estimate primary productivity in the world ocean. However, the standard global NASA ocean colour algorithms generally do not produce good results in the Baltic Sea. In this paper, we compare the ability of seven algorithms to estimate depth-integrated daily primary production (PP, mg C m-2) in the Baltic Sea. All the algorith...
Institute of Scientific and Technical Information of China (English)
Ikuo Nagashima; Tadahiro Takada; Miki Adachi; Hirokazu Nagawa; Tetsuichiro Muto; Kota Okinaga
2006-01-01
AIM: To accurately select good candidates for hepatic resection of colorectal liver metastases. METHODS: Thirteen clinicopathological features, which were recognized only before or during surgery, were selected retrospectively in 81 consecutive patients in one hospital (Group I). These features were entered into a multivariate analysis to determine independent and significant variables affecting long-term prognosis after hepatectomy. Using the selected variables, we created a scoring formula to classify patients with colorectal liver metastases and select good candidates for hepatic resection. The usefulness of the new scoring system was examined in a series of 92 patients from another hospital (Group II), comparing the number of selected variables. RESULTS: Among the 81 patients of Group I, multivariate analysis (Cox regression analysis) showed that multiple tumors, a largest tumor greater than 5 cm in diameter, and resectable extrahepatic metastases were significant and independent prognostic factors for poor survival after hepatectomy (P < 0.05). In addition, three factors (serosa invasion, local lymph node metastases of the primary cancer, and a postoperative disease-free interval of less than 1 year, including synchronous hepatic metastasis) were not significant; however, they were selected by a stepwise method of Cox regression analysis (0.05 < P < 0.20). Using these six variables, we created a new scoring formula to classify patients with colorectal liver metastases. Finally, our new scoring system classified patients not only in Group I very well, but also in Group II, according to long-term outcomes after hepatic resection. The positive number of these six variables also classified them well. CONCLUSION: Both our new scoring system and the positive number of significant prognostic factors are useful to classify patients with colorectal liver metastases in the preoperative selection of good candidates for hepatic resection.
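The simpler of the two classifiers in the conclusion, counting positive prognostic factors, can be sketched as follows. The field names and the cut-off are hypothetical; the paper's actual weighted scoring formula is not reproduced in the abstract.

```python
# Sketch of classification by the number of positive prognostic
# factors among the six variables named above.

FACTORS = [
    "multiple_tumors", "largest_tumor_gt_5cm",
    "resectable_extrahepatic_metastases", "serosa_invasion",
    "local_lymph_node_metastases", "disease_free_interval_lt_1yr",
]

def risk_score(patient):
    """Number of positive prognostic factors (0-6)."""
    return sum(1 for f in FACTORS if patient.get(f, False))

def good_candidate(patient, cutoff=2):
    """Illustrative rule: fewer than `cutoff` positive factors."""
    return risk_score(patient) < cutoff

p = {"multiple_tumors": True, "serosa_invasion": True,
     "largest_tumor_gt_5cm": False}
score = risk_score(p)
```

A weighted formula (as derived from the Cox regression coefficients) would replace the equal-weight count with per-factor weights, but the selection logic is the same.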
Directory of Open Access Journals (Sweden)
Tamborrini Marco
2011-12-01
Full Text Available Abstract Background In clinical trials, immunopotentiating reconstituted influenza virosomes (IRIVs) have shown great potential as a versatile antigen delivery platform for synthetic peptides derived from Plasmodium falciparum antigens. This study describes the immunogenicity of a virosomally-formulated recombinant fusion protein comprising domains of the two malaria vaccine candidate antigens MSP3 and GLURP. Methods The highly purified recombinant protein GMZ2 was coupled to phosphatidylethanolamine and the conjugates incorporated into the membrane of IRIVs. The immunogenicity of this adjuvant-free virosomal formulation was compared to GMZ2 formulated with the adjuvants Montanide ISA 720 and Alum in three mouse strains with different genetic backgrounds. Results Intramuscular injections of all three candidate vaccine formulations induced GMZ2-specific antibody responses in all mice tested. In general, the humoral immune response in outbred NMRI mice was stronger than that in inbred BALB/c and C57BL/6 mice. ELISA with the recombinant antigens demonstrated immunodominance of the GLURP component over the MSP3 component. However, compared to the Al(OH)3-adjuvanted formulation, the two other formulations elicited in NMRI mice a larger proportion of anti-MSP3 antibodies. Analyses of the induced GMZ2-specific IgG subclass profiles showed for all three formulations a predominance of the IgG1 isotype. Immune sera against all three formulations exhibited cross-reactivity with in vitro cultivated blood-stage parasites. Immunofluorescence and immunoblot competition experiments showed that both components of the hybrid protein induced IgG cross-reactive with the corresponding native proteins. Conclusion A virosomal formulation of the chimeric protein GMZ2 induced P. falciparum blood stage parasite cross-reactive IgG responses specific for both MSP3 and GLURP. GMZ2 thus represents a candidate component suitable for inclusion into a multi-valent virosomal
Malartic, Cécile; Morel, Olivier; Rivain, Anne-Laure; Placé, Vinciane; Le Dref, Olivier; Dohan, Anthony; Gayat, Etienne; Barranger, Emmanuel; Soyer, Philippe
2013-01-01
Ultrasonographic and magnetic resonance (MR) imaging examinations of 68 women with uterine fibroids were reviewed to determine whether MR imaging may alter the therapeutic approach based on ultrasonography alone before uterine embolization. Therapeutic decisions based on ultrasonography alone were compared to those obtained after MR imaging. Discordant findings between both examinations involved 51 women (75%), and 19 (28%) had their therapeutic approaches based on ultrasonography alone altered by MR imaging. Ultrasonography and MR imaging showed concordant findings in 17 women (25%) for whom no changes in therapeutic option were made. MR imaging alters the therapeutic approach based on ultrasonography alone in 28% of candidates for uterine artery embolization. PMID:23206612
Directory of Open Access Journals (Sweden)
Yinliang Wang
Full Text Available The leaf beetle Ambrostoma quadriimpressum (Coleoptera: Chrysomelidae) is a predominant forest pest that causes substantial damage to the lumber industry and city management. However, no effective and environmentally friendly chemical method has been discovered to control this pest. Until recently, the molecular basis of the olfactory system in A. quadriimpressum was completely unknown. In this study, antennal and leg transcriptomes were analyzed and compared using deep sequencing data to identify the olfactory genes in A. quadriimpressum. Moreover, the expression profiles of the candidate olfactory genes of both males and females were analyzed and validated by bioinformatics, motif analysis, homology analysis, semi-quantitative RT-PCR and RT-qPCR experiments in antennal and non-olfactory organs to explore the candidate olfactory genes that might play key roles in the life cycle of A. quadriimpressum. As a result, approximately 102.9 million and 97.3 million clean reads were obtained from the libraries created from the antennae and legs, respectively. Annotation led to 34344 Unigenes, which were matched to known proteins. The annotation data revealed that the number of genes in the antennae with binding functions and receptor activity was greater than that in the legs. Furthermore, many pathway genes were differentially expressed in the two organs. Sixteen candidate odorant-binding proteins (OBPs), 10 chemosensory proteins (CSPs), 34 odorant receptors (ORs), 20 ionotropic receptors (IRs) and 2 sensory neuron membrane proteins (SNMPs) and their isoforms were identified. Additionally, 15 OBPs, 9 CSPs, 18 ORs, 6 IRs and 2 SNMPs were predicted to be complete ORFs. Using RT-PCR, RT-qPCR and homology analysis, AquaOBP1/2/4/7/C1/C6, AquaCSP3/9, AquaOR8/9/10/14/15/18/20/26/29/33 and AquaIR8a/13/25a showed olfactory-specific expression, indicating that these genes might play a key role in olfaction-related behaviors of A. quadriimpressum such as foraging and mate seeking. AquaOBP4/C5, Aqua
Directory of Open Access Journals (Sweden)
Deleuze Jean-Francois
2006-04-01
Full Text Available Abstract Background The recent advances in genotyping and molecular techniques have greatly increased the knowledge of the human genome structure. Millions of polymorphisms are reported and freely available in public databases. As a result, there is now a need to identify, among all these data, the relevant markers for genetic association studies. Recently, several methods have been published to select subsets of markers, usually Single Nucleotide Polymorphisms (SNPs), that best represent genetic polymorphisms in the studied candidate gene or region. Results In this paper, we compared four of these selection methods, two based on haplotype information and two based on pairwise linkage disequilibrium (LD). The methods were applied to genotype data on twenty genes with different patterns of LD and different numbers of SNPs. A measure of the efficiency of the different methods to select SNPs was obtained by comparing, for each gene and under several single-disease-susceptibility models, the power to detect an association achieved with the selected SNP subsets. Conclusion None of the four selection methods stands out systematically from the others. Methods based on pairwise LD information turn out to be the most interesting in the context of association studies in candidate genes. In contexts where the number of SNPs to be tested in a given region needs to be more limited, as in large-scale studies or genome-wide scans, one of the two methods based on haplotype information would be more suitable.
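A pairwise-LD selection method of the kind compared above is often implemented as a greedy set cover: repeatedly pick the SNP that tags (r^2 at or above a threshold with) the most currently untagged SNPs. The sketch below uses a toy r^2 matrix with two LD blocks; it is an illustration of the approach, not any specific published method.

```python
# Hedged sketch of greedy pairwise-LD tag-SNP selection.

def greedy_tag_snps(r2, threshold=0.8):
    """r2: symmetric matrix of pairwise r^2 values.
    Returns a sorted list of tag-SNP indices covering all SNPs."""
    n = len(r2)
    untagged = set(range(n))
    tags = []
    while untagged:
        # SNP covering the most currently untagged SNPs (incl. itself)
        best = max(untagged, key=lambda i: sum(
            1 for j in untagged if r2[i][j] >= threshold))
        tags.append(best)
        untagged -= {j for j in untagged if r2[best][j] >= threshold}
    return sorted(tags)

# 5 SNPs: 0-1-2 form one high-LD block, 3-4 another.
r2 = [
    [1.0, 0.9, 0.85, 0.1, 0.0],
    [0.9, 1.0, 0.95, 0.2, 0.1],
    [0.85, 0.95, 1.0, 0.1, 0.0],
    [0.1, 0.2, 0.1, 1.0, 0.9],
    [0.0, 0.1, 0.0, 0.9, 1.0],
]
tags = greedy_tag_snps(r2)  # expect one tag per LD block
```

Haplotype-based methods instead choose SNPs that preserve haplotype diversity within blocks, which is why the two families behave differently in the comparison above.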
DEFF Research Database (Denmark)
Ivanova, N.; Pedersen, L. T.; Tonboe, R. T.;
2015-01-01
Sea ice concentration has been retrieved in polar regions with satellite microwave radiometers for over 30 years. However, the question remains as to what is an optimal sea ice concentration retrieval method for climate monitoring. This paper presents some of the key results of an extensive algorithm inter-comparison and evaluation experiment. The skills of 30 sea ice algorithms were evaluated systematically over low and high sea ice concentrations. Evaluation criteria included standard deviation relative to independent validation data and performance in the presence of thin ice and melt ponds... An approach is proposed to retrieve sea ice concentration globally for climate monitoring purposes. This approach consists of a combination of two algorithms plus a dynamic tie points implementation and atmospheric correction of input brightness temperatures. The method minimizes inter-sensor calibration discrepancies and sensitivity...
Novel multi-objective optimization algorithm
Institute of Scientific and Technical Information of China (English)
Jie Zeng; Wei Nie
2014-01-01
Many multi-objective evolutionary algorithms (MOEAs) can converge to the Pareto optimal front and work well on two or three objectives, but they deteriorate when faced with many-objective problems. Indicator-based MOEAs, which adopt various indicators to evaluate the fitness values (instead of the Pareto-dominance relation) to select candidate solutions, have been regarded as promising schemes that yield more satisfactory results than well-known algorithms, such as the non-dominated sorting genetic algorithm (NSGA-II) and the strength Pareto evolutionary algorithm (SPEA2). However, they can suffer from a slow convergence speed. This paper proposes a new indicator-based multi-objective optimization algorithm, namely, the multi-objective shuffled frog leaping algorithm based on the ε indicator (ε-MOSFLA). This algorithm adopts a memetic meta-heuristic, namely the SFLA, which is characterized by a powerful capability for global search and quick convergence, as an evolutionary strategy, and a simple and effective ε-indicator as a fitness assignment scheme to conduct the search procedure. Experimental results, in comparison with other representative indicator-based MOEAs and traditional Pareto-based MOEAs on several standard test problems with up to 50 objectives, show that ε-MOSFLA is the best algorithm for solving many-objective optimization problems in terms of both solution quality and speed of convergence.
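The ε-indicator at the core of such fitness assignment is easy to state. Below is a minimal sketch of the additive ε-indicator and a simplified IBEA-style fitness; the exact assignment used inside ε-MOSFLA may differ, so treat this as an illustration of the idea only (minimization assumed).

```python
def eps_indicator(a, b):
    """Additive epsilon-indicator I(a, b) for minimization: the smallest
    shift eps such that a, moved by eps, weakly dominates b."""
    return max(ai - bi for ai, bi in zip(a, b))

def indicator_fitness(pop):
    """Simplified indicator-based fitness: for each solution p, the smallest
    I(q, p) over the rest of the population. A value <= 0 means some other
    solution already weakly dominates p, so larger is better."""
    return [min(eps_indicator(q, p) for q in pop if q is not p) for p in pop]
```

In the example below, [2.0, 3.0] is dominated by [1.0, 2.0] and therefore receives a negative fitness, while the two mutually non-dominated points score positively.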
Suarez, Max J. (Editor); Chang, Alfred T. C.; Chiu, Long S.
1997-01-01
Seventeen months of rainfall data (August 1987-December 1988) from nine satellite rainfall algorithms (Adler, Chang, Kummerow, Prabhakara, Huffman, Spencer, Susskind, and Wu) were analyzed to examine the uncertainty of satellite-derived rainfall estimates. The variability among algorithms, measured as the standard deviation computed from the ensemble of algorithms, shows that regions of high algorithm variability tend to coincide with regions of high rain rates. Histograms of pattern correlation (PC) between algorithms suggest a bimodal distribution, with a separation at a PC value of about 0.85. Applying this threshold as a criterion for similarity, our analyses show that algorithms using the same sensor or satellite input tend to be similar, suggesting the dominance of sampling errors in these satellite estimates.
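The similarity criterion can be sketched directly: compute the centred pattern correlation between two (flattened) rain-rate maps and compare it with the 0.85 threshold. This is a generic implementation of standard centred correlation, not the paper's exact processing chain.

```python
import math

def pattern_correlation(x, y):
    """Centred pattern correlation between two rain-rate maps,
    flattened to 1-D sequences of grid-point values."""
    mx, my = sum(x) / len(x), sum(y) / len(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = math.sqrt(sum((a - mx) ** 2 for a in x)
                    * sum((b - my) ** 2 for b in y))
    return num / den

def algorithms_similar(x, y, threshold=0.85):
    """Apply the PC >= 0.85 similarity criterion from the analysis."""
    return pattern_correlation(x, y) >= threshold
```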
Jones, Andrew Osler
There is an increasing interest in the use of inhomogeneity corrections for lung, air, and bone in radiotherapy treatment planning. Traditionally, corrections based on physical density have been used. Modern algorithms use the electron density derived from CT images. Small fields are used in both conformal radiotherapy and IMRT; however, their beam characteristics in inhomogeneous media have not been extensively studied. This work compares traditional and modern treatment planning algorithms to Monte Carlo simulations in and near low-density inhomogeneities. Field sizes ranging from 0.5 cm to 5 cm in diameter are projected onto a phantom containing inhomogeneities, and depth dose curves are compared. Comparisons of the Dose Perturbation Factors (DPF) are presented as functions of density and field size. Dose Correction Factors (DCF), which scale the algorithms to the Monte Carlo data, are compared for each algorithm. Physical scaling algorithms such as Batho and Equivalent Pathlength (EPL) predict an increase in dose for small fields passing through lung tissue, where Monte Carlo simulations show a sharp dose drop. The physical model-based collapsed cone convolution (CCC) algorithm correctly predicts the dose drop, but does not accurately predict its magnitude. Because the model-based algorithms do not correctly account for the change in backscatter, the dose drop predicted by CCC occurs further downstream compared to that predicted by the Monte Carlo simulations. Beyond the tissue inhomogeneity, all of the algorithms studied predict dose distributions in close agreement with Monte Carlo simulations. Dose-volume relationships are important in understanding the effects of radiation to the lung. Dose within the lung is affected by a complex function of beam energy, lung tissue density, and field size. Dose algorithms vary in their ability to correctly predict the dose to the lung tissue. A thorough analysis of the effects of density and field size on dose to the lung
Institute of Scientific and Technical Information of China (English)
万润泽; 雷建军; 王海军
2013-01-01
The LEACH protocol ignores the importance of residual node energy in cluster-head election, so low-energy nodes may be elected cluster heads and die prematurely, reducing the lifetime of the whole network. To address this, we propose grading nodes by residual energy so that higher-level nodes are more likely to be elected cluster heads. In addition, a non-cluster-head node that receives broadcast messages from multiple candidate cluster heads chooses which cluster to join according to a combined evaluation of residual energy and distance. In the transmission phase, cluster heads aggregate intra-cluster data and forward it to the sink over multiple hops; to make energy consumption more balanced, an unequal clustering scheme is adopted so that cluster heads near the sink do not deplete their energy much faster than distant ones. Simulation results show that the proposed algorithm balances the network load and effectively prolongs the life cycle of the network.
Institute of Scientific and Technical Information of China (English)
魏强; 涂子学; 周静生; 周嘉男
2012-01-01
To reasonably select locations for emergency facilities, a model that minimizes the number of facilities, under the presupposition of covering all demand points, was built. From a network-location perspective, the distribution of emergency facilities on the network was optimized based on set covering, so that the decision space is no longer limited to a discrete set of points. To overcome the difficulties that the continuity of the decision space poses for solving the problem, a candidate-site algorithm was used to discretize the continuous solution space by solving for the set of possible effective paths; this greatly reduces the solution space and converts the network design problem into a 0-1 integer programming problem. The candidate-site algorithm was applied in a case study to obtain the number and locations of emergency service facilities in the region. The results show that solving the 0-1 integer programming problem achieves an optimized distribution of emergency facilities on the network.
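Once the candidate sites are discretized, the set-covering location model is a standard 0-1 program: choose the fewest sites such that every demand point is covered. As an illustration only (the paper solves the 0-1 integer program; the instance below is hypothetical), a tiny exact solver by enumeration:

```python
from itertools import combinations

def min_cover(candidates, demand_points):
    """Smallest subset of candidate sites covering every demand point,
    by exhaustive enumeration (adequate for small discretized instances;
    larger ones call for a 0-1 integer programming solver).
    candidates: site -> set of demand points that site can serve."""
    sites = list(candidates)
    required = set(demand_points)
    for k in range(1, len(sites) + 1):          # try smaller covers first
        for combo in combinations(sites, k):
            if set().union(*(candidates[s] for s in combo)) >= required:
                return set(combo)
    return None                                  # infeasible: some point uncoverable
```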
Indian Academy of Sciences (India)
P Chitra; P Venkatesh; R Rajaram
2011-04-01
The task scheduling problem in heterogeneous distributed computing systems (HDCS) is a multiobjective optimization problem (MOP). In HDCS, processor and network failures are possible, and these affect the applications running on the system. To reduce the impact of failures on an application running on HDCS, scheduling algorithms must be devised that minimize not only the schedule length (makespan) but also the failure probability of the application (reliability). These objectives are conflicting, and it is not possible to minimize both at the same time. Thus, scheduling algorithms are needed that account for both schedule length and failure probability. Multiobjective evolutionary computation algorithms (MOEAs) are well suited for multiobjective task scheduling in heterogeneous environments. Two multiobjective evolutionary algorithms, the multiobjective genetic algorithm (MOGA) and multiobjective evolutionary programming (MOEP) with non-dominated sorting, are developed and compared on various random task graphs and on a real-time numerical application graph. Metrics for evaluating the convergence and diversity of the non-dominated solutions obtained by the two algorithms are reported. The simulation results confirm that the proposed algorithms can solve the task scheduling problem at reduced computational times compared to the weighted-sum-based biobjective algorithm in the literature.
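The non-dominated sorting step used by both algorithms can be sketched as follows: peel off successive Pareto fronts over the two objectives (makespan, failure probability), both minimized. This is a generic O(n²)-per-front sketch, not the paper's implementation.

```python
def dominates(a, b):
    """a Pareto-dominates b when every objective (makespan, failure
    probability, ...) is no worse and at least one is strictly better."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def non_dominated_sort(pop):
    """Return indices of pop grouped into successive Pareto fronts,
    front 0 (the non-dominated set) first."""
    remaining = list(range(len(pop)))
    fronts = []
    while remaining:
        front = [i for i in remaining
                 if not any(dominates(pop[j], pop[i]) for j in remaining if j != i)]
        fronts.append(front)
        remaining = [i for i in remaining if i not in front]
    return fronts
```

For four schedules with (makespan, failure probability) trade-offs, the dominated one falls into the second front.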
International Nuclear Information System (INIS)
Astrophysical black hole candidates are thought to be the Kerr black holes predicted by General Relativity. However, in order to confirm the Kerr nature of these objects, we need to probe the geometry of the space-time around them and check that observations are consistent with the predictions of the Kerr metric. That can be achieved, for instance, by studying the properties of the electromagnetic radiation emitted by the gas in the accretion disk. The high-frequency quasi-periodic oscillations observed in the X-ray flux of some stellar-mass black hole candidates might do the job. As the frequencies of these oscillations depend only very weakly on the observed X-ray flux, it is thought they are mainly determined by the metric of the space-time. In this paper, I consider the resonance models proposed by Abramowicz and Kluzniak and I extend previous results to the case of non-Kerr space-times. The emerging picture is more complicated than the one around a Kerr black hole and there is a larger number of possible combinations between different modes. I then compare the bounds inferred from the twin peak high-frequency quasi-periodic oscillations observed in three micro-quasars (GRO J1655-40, XTE J1550-564, and GRS 1915+105) with the measurements from the continuum-fitting method of the same objects. For Kerr black holes, the two approaches do not provide consistent results. In a non-Kerr geometry, this conflict may be solved if the observed quasi-periodic oscillations are produced by the resonance νθ:νr = 3:1, where νθ and νr are the two epicyclic frequencies. It is at least worth mentioning that the deformation from the Kerr solution required by observations would be consistent with the one suggested in another recent work discussing the possibility that steady jets are powered by the spin of these compact objects.
Energy Technology Data Exchange (ETDEWEB)
Kobayashi, Naoko [Univ. of Nebraska, Lincoln, NE (United States); Barnes, Austin [Univ. of Nebraska, Lincoln, NE (United States); Jensen, Travis [Univ. of Nebraska, Lincoln, NE (United States); Noel, Eric [Univ. of Nebraska, Lincoln, NE (United States); Andlay, Gunjan [Synaptic Research, Baltimore, MD (United States); Rosenberg, Julian N. [Johns Hopkins Univ., Baltimore, MD (United States); Betenbaugh, Michael J. [Johns Hopkins Univ., Baltimore, MD (United States); Guarnieri, Michael T. [National Renewable Energy Lab. (NREL), Golden, CO (United States); Oyler, George A. [Univ. of Nebraska, Lincoln, NE (United States); Johns Hopkins Univ., Baltimore, MD (United States); Synaptic Research, Baltimore, MD (United States)
2015-09-01
Chlorella species from the UTEX collection, classified by rDNA-based phylogenetic analysis, were screened based on biomass and lipid production in different scales and modes of culture. Lead candidate strains C. sorokiniana UTEX 1230 and C. vulgaris UTEX 395 and 259 were compared between conditions of vigorous aeration with filtered atmospheric air and 3% CO_{2} shake-flask cultivation. We found that UTEX 1230 produced 2-fold higher biomass, at 652 mg L^{-1} dry weight, under both ambient-CO_{2} vigorous aeration and 3% CO_{2} conditions, while the biomass of UTEX 395 and 259 under 3% CO_{2} increased 3-fold, to 863 mg L^{-1} dry weight, relative to ambient-CO_{2} vigorous aeration. The triacylglycerol contents of UTEX 395 and 259 increased more than 30-fold, to 30% dry weight, with 3% CO_{2}, indicating that additional CO_{2} is essential for both biomass and lipid accumulation in UTEX 395 and 259.
Neuner, Philippe; Peier, Andrea M; Talamo, Fabio; Ingallinella, Paolo; Lahm, Armin; Barbato, Gaetano; Di Marco, Annalise; Desai, Kunal; Zytko, Karolina; Qian, Ying; Du, Xiaobing; Ricci, Davide; Monteagudo, Edith; Laufer, Ralph; Pocai, Alessandro; Bianchi, Elisabetta; Marsh, Donald J; Pessi, Antonello
2014-01-01
Neuromedin U (NMU) is an endogenous peptide implicated in the regulation of feeding, energy homeostasis, and glycemic control, which is being considered for the therapy of obesity and diabetes. A key liability of NMU as a therapeutic is its very short half-life in vivo. We show here that conjugation of NMU to human serum albumin (HSA) yields a compound with long circulatory half-life, which maintains full potency at both the peripheral and central NMU receptors. Initial attempts to conjugate NMU via the prevalent strategy of reacting a maleimide derivative of the peptide with the free thiol of Cys34 of HSA met with limited success, because the resulting conjugate was unstable in vivo. Use of a haloacetyl derivative of the peptide led instead to the formation of a metabolically stable conjugate. HSA-NMU displayed long-lasting, potent anorectic, and glucose-normalizing activity. When compared side by side with a previously described PEG conjugate, HSA-NMU proved superior on a molar basis. Collectively, our results reinforce the notion that NMU-based therapeutics are promising candidates for the treatment of obesity and diabetes. PMID:24222478
Brewin, Robert J.W.; Sathyendranath, Shubha; Muller, Dagmar; Brockmann, Carsten; Deschamps, Pierre-Yves; Devred, Emmanuel; Doerffer, Roland; Fomferra, Norman; Franz, Bryan; Grant, Mike; Groom, Steve; Horseman, Andrew; Hu, Chuanmin; Krasemann, Hajo; Lee, ZhongPing; Maritorena, Stephane; Melin, Frederic; Peters, Marco; Platt, Trevor; Regner, Peter; Smyth, Tim; Steinmetz, Francois; Swinton, John; Werdell, Jeremy; White, George N., III
2013-01-01
Satellite-derived remote-sensing reflectance (Rrs) can be used for mapping biogeochemically relevant variables, such as the chlorophyll concentration and the Inherent Optical Properties (IOPs) of the water, at global scale for use in climate-change studies. Prior to generating such products, suitable algorithms have to be selected that are appropriate for the purpose. Algorithm selection needs to account for both qualitative and quantitative requirements. In this paper we develop an objective methodology designed to rank the quantitative performance of a suite of bio-optical models. The objective classification is applied using the NASA bio-Optical Marine Algorithm Dataset (NOMAD). Using in situ Rrs as input to the models, the performance of eleven semianalytical models, as well as five empirical chlorophyll algorithms and an empirical diffuse attenuation coefficient algorithm, is ranked for spectrally-resolved IOPs, chlorophyll concentration and the diffuse attenuation coefficient at 489 nm. The sensitivity of the objective classification and the uncertainty in the ranking are tested using a Monte-Carlo approach (bootstrapping). Results indicate that the performance of the semi-analytical models varies depending on the product and wavelength of interest. For chlorophyll retrieval, empirical algorithms perform better than semi-analytical models, in general. The performance of these empirical models reflects either their immunity to scale errors or instrument noise in Rrs data, or simply that the data used for model parameterisation were not independent of NOMAD. Nonetheless, uncertainty in the classification suggests that the performance of some semi-analytical algorithms at retrieving chlorophyll is comparable with the empirical algorithms. For phytoplankton absorption at 443 nm, some semi-analytical models also perform with similar accuracy to an empirical model. We discuss the potential biases, limitations and uncertainty in the approach, as well as additional
Matsuda, Kiku; Chaudhari, Atul A; Lee, John Hwa
2011-09-01
We evaluated a recently developed live vaccine candidate for fowl typhoid (FT), JOL916, a lon/cpxR mutant of Salmonella Gallinarum (SG), by comparing its safety and efficacy with that of the well-known rough mutant strain SG9R vaccine in 6-wk-old Hy-Line hens. Forty-five chickens were divided into three groups of 15 chickens each. The chickens were then intramuscularly inoculated with 2 x 10^7 colony-forming units (CFU) of JOL916 (JOL916 group), 2 x 10^7 CFU of SG9R (SG9R group), or phosphate-buffered saline (control group). After vaccination, no clinical symptoms were observed in any of the groups. No differences in body weight increase were detected among the three groups postvaccination. A cellular immune response was observed at 2 wk postvaccination (wpv) in the JOL916 group with the peripheral lymphocyte proliferation assay, whereas no response was detected in the SG9R group. Elevation of SG antigen-specific plasma immunoglobulin was observed at 2 and 3 wpv in the JOL916 and SG9R vaccine groups, respectively. After virulent challenge on day 25 postvaccination, 0, 1, and 15 chickens in the JOL916 group, SG9R group, and control group, respectively, died by 12 days postchallenge; the death rate of the SG9R vaccine group was statistically similar to that of the JOL916 group. Postmortem examination revealed that the JOL916 vaccine offered more efficient protection than the SG9R vaccine, with significantly decreased hepatic necrotic foci scores, splenic enlargement scores, necrotic foci scores, and recovery of the challenge strain from the spleen. Vaccination with JOL916 appears to be safe and offers better protection than SG9R against FT in chickens.
Abuhadi, Nouf; Bradley, David; Katarey, Dev; Podolyak, Zsolt; Sassi, Salem
2014-03-01
Introduction: Single-Photon Emission Computed Tomography (SPECT) is used to measure and quantify radiopharmaceutical distribution within the body. The accuracy of quantification depends on acquisition parameters and reconstruction algorithms. Until recently, most SPECT images were reconstructed using Filtered Back Projection techniques with no attenuation or scatter corrections. The introduction of 3-D iterative reconstruction algorithms, with the availability of both computed tomography (CT)-based attenuation correction and scatter correction, may provide more accurate measurement of radiotracer bio-distribution. The effect of attenuation and scatter corrections on the accuracy of SPECT measurements is well researched. It has been suggested that the combination of CT-based attenuation correction and scatter correction can allow more accurate quantification of radiopharmaceutical distribution in SPECT studies (Bushberg et al., 2012). However, the effect of respiratory-induced cardiac motion on SPECT images acquired using higher-resolution algorithms, such as 3-D iterative reconstruction with attenuation and scatter corrections, has not been investigated. Aims: To investigate the quantitative accuracy of 3-D iterative reconstruction algorithms in comparison to filtered back projection (FBP) methods implemented in cardiac SPECT/CT imaging, with and without CT-based attenuation and scatter corrections; to investigate the effects of respiratory-induced cardiac motion on myocardial perfusion quantification; and to present a comparison of spatial resolution for FBP and ordered subset expectation maximization (OSEM) Flash 3D, with and without respiratory-induced motion, and with and without attenuation and scatter correction. Methods: This study was performed on a Siemens Symbia T16 SPECT/CT system using clinical acquisition protocols. Respiratory-induced cardiac motion was simulated by imaging a cardiac phantom insert while moving it using a respiratory motion motor
Fykse, Egil
2013-01-01
The objective of this thesis is to compare the suitability of FPGAs, GPUs and DSPs for digital image processing applications. Normalized cross-correlation is used as a benchmark, because this algorithm includes convolution, a common operation in image processing and elsewhere. Normalized cross-correlation is a template matching algorithm that is used to locate predefined objects in a scene image. Because the throughput of DSPs is low for efficient calculation of normalized cross-correlation, ...
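The benchmark named in the thesis, normalized cross-correlation template matching, reduces to sliding a template over the scene and scoring each patch. A minimal zero-mean NCC sketch (a generic textbook formulation, not the thesis code):

```python
import math

def ncc(template, patch):
    """Zero-mean normalized cross-correlation of two equally sized
    2-D patches; 1.0 indicates a perfect match."""
    t = [v for row in template for v in row]
    p = [v for row in patch for v in row]
    mt, mp = sum(t) / len(t), sum(p) / len(p)
    num = sum((a - mt) * (b - mp) for a, b in zip(t, p))
    den = math.sqrt(sum((a - mt) ** 2 for a in t) * sum((b - mp) ** 2 for b in p))
    return num / den if den else 0.0   # constant patch: define score as 0

def match_template(scene, template):
    """Slide the template over the scene; return ((row, col), score)
    of the best-matching location."""
    th, tw = len(template), len(template[0])
    best_score, best_pos = -2.0, None
    for r in range(len(scene) - th + 1):
        for c in range(len(scene[0]) - tw + 1):
            patch = [row[c:c + tw] for row in scene[r:r + th]]
            score = ncc(template, patch)
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos, best_score
```

The inner correlation is exactly the convolution-like operation whose throughput the thesis compares across FPGAs, GPUs and DSPs.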
Ivanova, N.; Pedersen, L. T.; Tonboe, R. T.; Kern, S.; Heygster, G.; Lavergne, T.; Sørensen, A.; Saldo, R.; Dybkjær, G.; Brucker, L.; Shokr, M.
2015-09-01
Sea ice concentration has been retrieved in polar regions with satellite microwave radiometers for over 30 years. However, the question remains as to what is an optimal sea ice concentration retrieval method for climate monitoring. This paper presents some of the key results of an extensive algorithm inter-comparison and evaluation experiment. The skills of 30 sea ice algorithms were evaluated systematically over low and high sea ice concentrations. Evaluation criteria included standard deviation relative to independent validation data, performance in the presence of thin ice and melt ponds, and sensitivity to error sources with seasonal to inter-annual variations and potential climatic trends, such as atmospheric water vapour and water-surface roughening by wind. A selection of 13 algorithms is shown in the article to demonstrate the results. Based on the findings, a hybrid approach is suggested to retrieve sea ice concentration globally for climate monitoring purposes. This approach consists of a combination of two algorithms plus dynamic tie points implementation and atmospheric correction of input brightness temperatures. The method minimizes inter-sensor calibration discrepancies and sensitivity to the mentioned error sources.
Indian Academy of Sciences (India)
Sachin Vrajlal Rajani; Vivek J Pandya
2015-02-01
Solar energy is a clean, green and renewable source of energy. It is available in abundance in nature. Solar cells by photovoltaic action are able to convert the solar energy into electric current. The output power of solar cell depends upon factors such as solar irradiation (insolation), temperature and other climatic conditions. Present commercial efficiency of solar cells is not greater than 15% and therefore the available efficiency is to be exploited to the maximum possible value and the maximum power point tracking (MPPT) with the aid of power electronics to solar array can make this possible. There are many algorithms proposed to realize maximum power point tracking. These algorithms have their own merits and limitations. In this paper, an attempt is made to understand the basic functionality of the two most popular algorithms viz. Perturb and Observe (P & O) algorithm and Incremental conductance algorithm. These algorithms are compared by simulating a 100 kW solar power generating station connected to grid. MATLAB M-files are generated to understand MPPT and its dependency on insolation and temperature. MATLAB Simulink software is used to simulate the MPPT systems. Simulation results are presented to verify these assumptions.
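The core of the Perturb and Observe method compared above fits in a few lines: keep perturbing the operating voltage in the direction that increased output power, and reverse when power falls. A minimal sketch of one P&O update (fixed step size; real controllers refine this):

```python
def perturb_and_observe(power, voltage, prev_power, prev_voltage, step=0.5):
    """One P&O iteration: return the next voltage reference for the
    converter, climbing the P-V curve toward the maximum power point."""
    dP = power - prev_power
    dV = voltage - prev_voltage
    if dP == 0:
        return voltage                # already at (or oscillating about) the MPP
    if (dP > 0) == (dV > 0):
        return voltage + step         # last perturbation helped: keep direction
    return voltage - step             # power dropped: reverse direction
```

On a toy P-V curve with its maximum at 17 V, repeated updates climb to the peak and then oscillate within one step of it, the characteristic steady-state behaviour of P&O.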
Directory of Open Access Journals (Sweden)
Khalid A. Almahorg
2013-11-01
Full Text Available Mobile Ad Hoc networks (MANETs are gaining increased interest due to their wide range of potential applications in civilian and military sectors. The self-control, self-organization, topology dynamism, and bandwidth limitation of the wireless communication channel make implementation of MANETs a challenging task. The Connected Dominating Set (CDS has been proposed to facilitate MANETs realization. Minimizing the CDS size has several advantages; however, this minimization is NP complete problem; therefore, approximation algorithms are used to tackle this problem. The fastest CDS creation algorithm is Wu and Li algorithm; however, it generates a relatively high signaling overhead. Utilizing the location information of network members reduces the signaling overhead of Wu and Li algorithm. In this paper, we compare the performance of Wu and Li algorithm with its Location-Information-Based version under two types of Medium Access Control protocols, and several network sizes. The MAC protocols used are: a virtual ideal MAC protocol, and the IEEE 802.11 MAC protocol. The use of a virtual ideal MAC enables us to investigate how the real-world performance of these algorithms deviates from their ideal-conditions counterpart. The simulator used in this research is the ns-2 network simulator.
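The Wu and Li algorithm starts from a simple marking process: a node joins the CDS if it has two neighbours that are not themselves neighbours. A minimal sketch of that rule on an adjacency map (the subsequent pruning rules of the full algorithm, and the location-based variant, are omitted):

```python
def wu_li_marking(adj):
    """Wu and Li marking process: node v is marked (joins the CDS) when
    some pair of its neighbours are not directly connected, i.e. v is
    needed to relay between them. adj: node -> set of neighbour nodes."""
    cds = set()
    for v, nbrs in adj.items():
        ns = sorted(nbrs)
        if any(ns[j] not in adj[ns[i]]
               for i in range(len(ns)) for j in range(i + 1, len(ns))):
            cds.add(v)
    return cds
```

On a 3-node path only the middle node is marked; in a triangle, where every pair of neighbours is already connected, no node is.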
Directory of Open Access Journals (Sweden)
M O Qutub
2011-01-01
Purpose: To evaluate the usefulness of applying either the two-step algorithm (Ag-EIA and CCNA) or the three-step algorithm (all three assays) for better confirmation of toxigenic Clostridium difficile. The antigen enzyme immunoassay (Ag-EIA) can accurately identify the glutamate dehydrogenase antigen of toxigenic and nontoxigenic Clostridium difficile. Therefore, it is used in combination with a toxin-detecting assay [the cell line culture neutralization assay (CCNA) or the enzyme immunoassay for toxins A and B (TOX-A/BII EIA)] to provide specific evidence of Clostridium difficile-associated diarrhoea. Materials and Methods: A total of 151 nonformed stool specimens were tested by Ag-EIA, TOX-A/BII EIA, and CCNA. All tests were performed according to the manufacturer's instructions, and the results of Ag-EIA and TOX-A/BII EIA were read using a spectrophotometer at a wavelength of 450 nm. Results: A total of 61 (40.7%), 38 (25.3%), and 52 (34.7%) specimens tested positive with Ag-EIA, TOX-A/BII EIA, and CCNA, respectively. Overall, the sensitivity, specificity, negative predictive value, and positive predictive value for Ag-EIA were 94%, 87%, 96.6%, and 80.3%, respectively, whereas for TOX-A/BII EIA they were 73.1%, 100%, 87.5%, and 100%, respectively. With the two-step algorithm, all 61 Ag-EIA-positive cases required 2 days for confirmation. With the three-step algorithm, 37 (60.7%) cases were reported immediately, and the remaining 24 (39.3%) required further testing by CCNA. By applying the two-step algorithm, the workload and cost could be reduced by 28.2% compared with the three-step algorithm. Conclusions: The two-step algorithm is the most practical for accurately detecting toxigenic Clostridium difficile, but it is time-consuming.
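The reporting logic of the three-step algorithm, as described in the abstract, can be sketched as a small decision function. The function name and result strings are illustrative, not from the paper; the point is that concordant Ag-EIA/toxin-EIA results are reported immediately and only discordant ones reflex to CCNA.

```python
def three_step(ag_pos, tox_eia_pos, ccna_pos=None):
    """Sketch of the three-step reporting logic: Ag-EIA and TOX-A/BII EIA
    are run up front; antigen-negative and antigen+toxin-positive stools
    are reported immediately, while antigen-positive / toxin-EIA-negative
    stools are arbitrated by CCNA."""
    if not ag_pos:
        return "negative"                         # reported immediately
    if tox_eia_pos:
        return "toxigenic"                        # reported immediately
    # discordant: antigen-positive but toxin-EIA-negative -> CCNA decides
    return "toxigenic" if ccna_pos else "nontoxigenic"
```

This mirrors why 60.7% of antigen-positive cases could be reported at once while the remainder waited on CCNA.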
Cunliffe, Alexandra R; White, Bradley; Justusson, Julia; Straus, Christopher; Malik, Renuka; Al-Hallaq, Hania A; Armato, Samuel G
2015-12-01
We evaluated the image registration accuracy achieved using two deformable registration algorithms when radiation-induced normal tissue changes were present between serial computed tomography (CT) scans. Two thoracic CT scans were collected for each of 24 patients who underwent radiation therapy (RT) treatment for lung cancer, eight of whom experienced radiologically evident normal tissue damage between pre- and post-RT scan acquisition. For each patient, 100 landmark point pairs were manually placed in anatomically corresponding locations between each pre- and post-RT scan. Each post-RT scan was then registered to the pre-RT scan using (1) the Plastimatch demons algorithm and (2) the Fraunhofer MEVIS algorithm. The registration accuracy for each scan pair was evaluated by comparing the distance between landmark points that were manually placed in the post-RT scans and points that were automatically mapped from pre- to post-RT scans using the displacement vector fields output by the two registration algorithms. For both algorithms, the registration accuracy was significantly decreased when normal tissue damage was present in the post-RT scan. Using the Plastimatch algorithm, registration accuracy was 2.4 mm, on average, in the absence of radiation-induced damage and 4.6 mm, on average, in the presence of damage. When the Fraunhofer MEVIS algorithm was instead used, registration errors decreased to 1.3 mm, on average, in the absence of damage and 2.5 mm, on average, when damage was present. This work demonstrated that the presence of lung tissue changes introduced following RT treatment for lung cancer can significantly decrease the registration accuracy achieved using deformable registration.
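The accuracy metric in this study is the mean distance between manually placed landmarks and landmarks mapped through the registration's displacement vector field. A minimal sketch of that computation (point coordinates assumed already in mm):

```python
import math

def mean_landmark_error(manual_points, mapped_points):
    """Registration accuracy as the mean 3-D Euclidean distance between
    landmarks placed manually in the post-RT scan and the corresponding
    pre-RT landmarks mapped through the deformation field."""
    dists = [math.dist(a, b) for a, b in zip(manual_points, mapped_points)]
    return sum(dists) / len(dists)
```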
A comparison of step-and-shoot leaf sequencing algorithms that eliminate tongue-and-groove effects
Energy Technology Data Exchange (ETDEWEB)
Kamath, Srijit [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Sahni, Sartaj [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Ranka, Sanjay [Department of Computer and Information Science and Engineering, University of Florida, Gainesville, FL (United States); Li, Jonathan [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States); Palta, Jatinder [Department of Radiation Oncology, University of Florida, Gainesville, FL (United States)
2004-07-21
The performances of three recently published leaf sequencing algorithms for step-and-shoot intensity-modulated radiation therapy delivery that eliminate tongue-and-groove underdosage are evaluated. Proofs are given to show that the algorithm of Que et al (2004 Phys. Med. Biol. 49 399-405) generates leaf sequences free of tongue-and-groove underdosage and interdigitation. However, the total beam-on times could be up to n times those of the sequences generated by the algorithms of Kamath et al (2004 Phys. Med. Biol. 49 N7-N19), which are optimal in beam-on time for unidirectional leaf movement under the same constraints, where n is the total number of involved leaf pairs. Using 19 clinical fluence matrices and 100 000 randomly generated 15 x 15 matrices, the average monitor units and number of segments of the leaf sequences generated using the algorithm of Que et al are about two to four times those generated by the algorithm of Kamath et al.
Detecting candidate cosmic bubble collisions with optimal filters
McEwen, J D; Johnson, M C; Peiris, H V
2012-01-01
We review an optimal-filter-based algorithm for detecting candidate sources of unknown and differing size embedded in a stochastic background, and its application to detecting candidate cosmic bubble collision signatures in Wilkinson Microwave Anisotropy Probe (WMAP) 7-year observations. The algorithm provides an enhancement in sensitivity over previous methods by a factor of approximately two. Moreover, it is optimal in the sense that no other filter-based approach can provide a superior enhancement of these signatures. Applying this algorithm to WMAP 7-year observations, eight new candidate bubble collision signatures are detected for follow-up analysis.
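The idea behind optimal-filter detection can be illustrated in its simplest setting: correlate the data against the expected source profile and flag positions where the filter output exceeds a threshold. This is the flat, 1-D, white-noise analogue of the paper's spherical filters, for illustration only.

```python
def matched_filter(signal, template):
    """Slide a source template along a 1-D data stream and return the
    correlation at each offset; peaks mark candidate sources."""
    n, m = len(signal), len(template)
    return [sum(signal[i + k] * template[k] for k in range(m))
            for i in range(n - m + 1)]

def detect(signal, template, threshold):
    """Offsets whose filter response reaches the detection threshold."""
    return [i for i, v in enumerate(matched_filter(signal, template))
            if v >= threshold]
```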
Jentz, R. R.; Wackerman, C. C.; Shuchman, R. A.; Onstott, R. G.; Gloersen, Per; Cavalieri, Don; Ramseier, Rene; Rubinstein, Irene; Comiso, Joey; Hollinger, James
1991-01-01
Previous research studies have focused on producing algorithms for extracting geophysical information from passive microwave data regarding ice floe size, sea ice concentration, open water lead locations, and sea ice extent. These studies have resulted in four separate algorithms for extracting these geophysical parameters. Sea ice concentration estimates generated from each of these algorithms (i.e., NASA/Team, NASA/Comiso, AES/York, and Navy) are compared to ice concentration estimates produced from coincident high-resolution synthetic aperture radar (SAR) data. The SAR concentration estimates are produced from data collected in the Beaufort Sea and the Greenland Sea in March 1988 and March 1989, respectively. The SAR data are coincident with the passive microwave data generated by the Special Sensor Microwave/Imager (SSM/I).
Burt, Adam O.; Tinker, Michael L.
2014-01-01
In this paper, genetic-algorithm-based and gradient-based topology optimization are presented in application to a real hardware design problem. Preliminary design of a planetary lander mockup structure is accomplished using these methods, which prove to provide major weight savings by addressing structural efficiency during the design cycle. This paper presents two alternative formulations of the topology optimization problem. The first is the widely used gradient-based implementation using commercially available algorithms. The second is formulated using genetic algorithms and internally developed capabilities. These two approaches are applied to a practical design problem for hardware that has been built, tested and proven to be functional. Both formulations converged on similar solutions and therefore were proven to be equally valid implementations of the process. This paper discusses both of these formulations at a high level.
CASE via MS: Ranking Structure Candidates by Mass Spectra
Kerber, Adalbert; Meringer, Markus; Rücker, Christoph
2006-01-01
Two important tasks in computer-aided structure elucidation (CASE) are the generation of candidate structures from a given molecular formula, and the ranking of structure candidates according to compatibility with an experimental spectrum. Candidate ranking with respect to electron impact mass spectra is based on virtual fragmentation of a candidate structure and comparison of the fragments’ isotope distributions against the spectrum of the unknown compound, whence a structure–spectrum compat...
Xu, Beijie; Recker, Mimi; Qi, Xiaojun; Flann, Nicholas; Ye, Lei
2013-01-01
This article examines clustering as an educational data mining method. In particular, two clustering algorithms, the widely used K-means and the model-based Latent Class Analysis, are compared, using usage data from an educational digital library service, the Instructional Architect (IA.usu.edu). Using a multi-faceted approach and multiple data…
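The assign-then-recompute loop of K-means, one of the two clustering algorithms compared above, can be sketched in a few lines. This is a minimal illustration on synthetic 2-D points, not the Instructional Architect usage data; the deterministic first/last-point initialization is an assumption for reproducibility (production K-means typically uses random restarts or k-means++):

```python
def kmeans(points, k, iters=20):
    """Plain k-means: assign each point to its nearest centroid, then recompute."""
    centroids = [points[0], points[-1]] if k == 2 else points[:k]  # deterministic init
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:
            i = min(range(k),
                    key=lambda c: (p[0] - centroids[c][0]) ** 2 + (p[1] - centroids[c][1]) ** 2)
            clusters[i].append(p)
        centroids = [(sum(m[0] for m in ms) / len(ms), sum(m[1] for m in ms) / len(ms))
                     if ms else centroids[ci] for ci, ms in enumerate(clusters)]
    return centroids, clusters

# two well-separated synthetic "usage groups"
pts = [(0.1 * i, 0.1 * i) for i in range(10)] + [(5 + 0.1 * i, 5 + 0.1 * i) for i in range(10)]
cents, clusters = kmeans(pts, 2)
print(sorted(len(c) for c in clusters))  # [10, 10]
```

Model-based Latent Class Analysis differs in that cluster membership is probabilistic and the number of classes is chosen by model fit rather than fixed in advance.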
Directory of Open Access Journals (Sweden)
Jeng-Fung Chen
2014-10-01
Predicting student academic performance with a high accuracy facilitates admission decisions and enhances educational services at educational institutions. This raises the need to propose a model that predicts student performance based on the results of standardized exams, including university entrance exams, high school graduation exams, and other influential factors. In this study, an approach to the problem based on the artificial neural network (ANN) with two meta-heuristic algorithms inspired by cuckoo birds and their lifestyle, namely Cuckoo Search (CS) and the Cuckoo Optimization Algorithm (COA), is proposed. In particular, we used previous exam results and other factors, such as the location of the student's high school and the student's gender, as input variables to predict student academic performance. The standard CS and standard COA were separately utilized to train the feed-forward network for prediction. The algorithms optimized the weights between layers and the biases of the neural network. The simulation results were then discussed and analyzed to investigate the prediction ability of the neural network trained by these two algorithms. The findings demonstrated that both CS and COA have potential in training ANNs, and ANN-COA obtained slightly better results for predicting student academic performance in this case. It is expected that this work may be used to support student admission procedures and strengthen the service system in educational institutions.
Energy Technology Data Exchange (ETDEWEB)
Kamph, Jerome Henri; Robinson, Darren; Wetter, Michael
2009-09-01
There is an increasing interest in the use of computer algorithms to identify combinations of parameters which optimise the energy performance of buildings. For such problems, the objective function can be multi-modal and needs to be approximated numerically using building energy simulation programs. As these programs contain iterative solution algorithms, they introduce discontinuities in the numerical approximation to the objective function. Metaheuristics often work well for such problems, but their convergence to a global optimum cannot be established formally. Moreover, different algorithms tend to be suited to particular classes of optimization problems. To shed light on this issue we compared the performance of two metaheuristics, the hybrid CMA-ES/HDE and the hybrid PSO/HJ, in minimizing standard benchmark functions and real-world building energy optimization problems of varying complexity. From this we find that the CMA-ES/HDE performs well on more complex objective functions, but that the PSO/HJ more consistently identifies the global minimum for simpler objective functions. Both identified similar values in the objective functions arising from energy simulations, but with different combinations of model parameters. This may suggest that the objective function is multi-modal. The algorithms also correctly identified some non-intuitive parameter combinations that were caused by a simplified control sequence of the building energy system that does not represent actual practice, further reinforcing their utility.
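The particle-swarm half of the PSO/HJ hybrid discussed above can be sketched in bare-bones form on a standard benchmark function. The parameter values (inertia 0.7, cognitive/social weights 1.5, swarm size 20) are illustrative assumptions, not those used in the study, and the sphere function stands in for a building-energy objective:

```python
import random

def pso(f, dim, n=20, iters=100, seed=1):
    """Minimal particle swarm: inertia 0.7, cognitive/social weights 1.5."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vel = [[0.0] * dim for _ in range(n)]
    pbest = [p[:] for p in pos]
    pval = [f(p) for p in pos]
    gbest = pbest[pval.index(min(pval))][:]
    for _ in range(iters):
        for i in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pval[i]:                 # update personal and global bests
                pval[i], pbest[i] = v, pos[i][:]
                if v < f(gbest):
                    gbest = pos[i][:]
    return gbest, f(gbest)

sphere = lambda x: sum(xi * xi for xi in x)  # a standard benchmark function
best, val = pso(sphere, dim=3)
print(val)  # close to zero
```

The Hooke-Jeeves (HJ) component of the hybrid would then refine `best` with a deterministic pattern search, which helps on the discontinuous objective functions that iterative building-simulation solvers produce.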
Directory of Open Access Journals (Sweden)
Kekana M.C
2015-09-01
In this paper, Volterra integro-differential equations are solved using the Adomian decomposition method. The solutions are obtained in the form of infinite series and compared to the fourth-order Runge-Kutta (RK4) algorithm. The technique is described and illustrated with examples; numerical results are also presented graphically. The software used in this study is Mathematica 10.
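For reference, the RK4 scheme used as the comparison baseline can be sketched for a plain ODE; a Volterra integro-differential equation would additionally require a quadrature rule for the integral term at each step, which is omitted here. The test problem y' = -y is an illustrative assumption, not one of the paper's examples:

```python
import math

def rk4(f, y0, t0, t1, n):
    """Classic fourth-order Runge-Kutta with n fixed steps."""
    h = (t1 - t0) / n
    t, y = t0, y0
    for _ in range(n):
        k1 = f(t, y)
        k2 = f(t + h / 2, y + h * k1 / 2)
        k3 = f(t + h / 2, y + h * k2 / 2)
        k4 = f(t + h, y + h * k3)
        y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
        t += h
    return y

# y' = -y, y(0) = 1  →  exact solution e^{-t}
approx = rk4(lambda t, y: -y, 1.0, 0.0, 1.0, 100)
print(abs(approx - math.exp(-1)))  # error far below 1e-6
```

The Adomian method, by contrast, yields a truncated series solution in closed form, so the two approaches trade step-wise numerical error against series-truncation error.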
Squiers, John J.; Li, Weizhi; King, Darlene R.; Mo, Weirong; Zhang, Xu; Lu, Yang; Sellke, Eric W.; Fan, Wensheng; DiMaio, J. Michael; Thatcher, Jeffrey E.
2016-03-01
The clinical judgment of expert burn surgeons is currently the standard on which diagnostic and therapeutic decision-making regarding burn injuries is based. Multispectral imaging (MSI) has the potential to increase the accuracy of burn depth assessment and the intraoperative identification of viable wound bed during surgical debridement of burn injuries. A highly accurate classification model must be developed using machine-learning techniques in order to translate MSI data into clinically relevant information. An animal burn model was developed to build an MSI training database and to study the burn tissue classification ability of several models trained via common machine-learning algorithms. The algorithms tested, from least to most complex, were: K-nearest neighbors (KNN), decision tree (DT), linear discriminant analysis (LDA), weighted linear discriminant analysis (W-LDA), quadratic discriminant analysis (QDA), ensemble linear discriminant analysis (EN-LDA), ensemble K-nearest neighbors (EN-KNN), and ensemble decision tree (EN-DT). After the ground-truth database of six tissue types (healthy skin, wound bed, blood, hyperemia, partial injury, full injury) was generated by histopathological analysis, we used 10-fold cross validation to compare the algorithms' performances based on their accuracies in classifying data against the ground truth, and each algorithm was tested 100 times. The mean test accuracies of the algorithms were KNN 68.3%, DT 61.5%, LDA 70.5%, W-LDA 68.1%, QDA 68.9%, EN-LDA 56.8%, EN-KNN 49.7%, and EN-DT 36.5%. LDA had the highest test accuracy, reflecting the bias-variance tradeoff over the range of complexities inherent to the algorithms tested. Several algorithms were able to match the current standard in burn tissue classification, the clinical judgment of expert burn surgeons. These results will guide further development of an MSI burn tissue classification system. Given that there are few surgeons and facilities specializing in burn care
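The simplest algorithm in the comparison, KNN with 10-fold cross validation, can be sketched end-to-end. The 1-D Gaussian classes below are a synthetic stand-in for the MSI tissue spectra (an assumption for illustration only), and `k=3` is likewise arbitrary:

```python
import random

def knn_predict(train, x, k=3):
    """Majority vote among the k nearest training samples (1-D Euclidean)."""
    nearest = sorted(train, key=lambda s: abs(s[0] - x))[:k]
    return 1 if sum(label for _, label in nearest) * 2 > k else 0

def cross_val_accuracy(data, folds=10):
    """Manual 10-fold CV: fold f holds out every folds-th sample."""
    accs = []
    for f in range(folds):
        test = data[f::folds]
        train = [s for i, s in enumerate(data) if i % folds != f]
        accs.append(sum(knn_predict(train, x) == y for x, y in test) / len(test))
    return sum(accs) / folds

# two synthetic, well-separated classes standing in for tissue classes
rng = random.Random(0)
data = [(rng.gauss(0, 1), 0) for _ in range(50)] + [(rng.gauss(4, 1), 1) for _ in range(50)]
rng.shuffle(data)
acc = cross_val_accuracy(data)
print(acc)  # high: the classes barely overlap
```

In the study the same CV protocol was repeated 100 times per algorithm to obtain the mean accuracies quoted above.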
New focused crawling algorithm
Institute of Scientific and Technical Information of China (English)
Su Guiyang; Li Jianhua; Ma Yinghua; Li Shenghong; Song Juping
2005-01-01
Focused crawling is a new research approach in search engines. It restricts information retrieval and provides a search service in a specific topic area. The focused crawling search algorithm is a key technique of a focused crawler and directly affects search quality. This paper first introduces several traditional topic-specific crawling algorithms; then an inverse-link-based topic-specific crawling algorithm is put forward. A comparison experiment proves that this algorithm performs well in recall, clearly better than the traditional Breadth-First and Shark-Search algorithms. The experiment also proves that this algorithm achieves good precision.
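The core of any topic-specific crawler is a best-first frontier: expand the most relevant known page next instead of crawling breadth-first. The link graph and relevance scores below are entirely hypothetical, and a precomputed score table stands in for the paper's inverse-link scoring:

```python
import heapq

# Hypothetical link graph and per-page topic-relevance scores (illustrative only).
links = {
    "start": ["a", "b"],
    "a": ["c", "d"],
    "b": ["e"],
    "c": [], "d": ["f"], "e": [], "f": [],
}
relevance = {"start": 0.5, "a": 0.9, "b": 0.2, "c": 0.8, "d": 0.7, "e": 0.1, "f": 0.95}

def focused_crawl(seed, budget):
    """Best-first crawl: always expand the most relevant known page next."""
    frontier = [(-relevance[seed], seed)]    # max-heap via negated scores
    seen, order = {seed}, []
    while frontier and len(order) < budget:
        _, page = heapq.heappop(frontier)
        order.append(page)
        for nxt in links[page]:
            if nxt not in seen:
                seen.add(nxt)
                heapq.heappush(frontier, (-relevance[nxt], nxt))
    return order

print(focused_crawl("start", 4))  # ['start', 'a', 'c', 'd'] — high-relevance pages first
```

A breadth-first crawler with the same budget would visit ['start', 'a', 'b', 'c'], wasting a fetch on the low-relevance page b, which is the behaviour the recall comparison above penalizes.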
Shokr, Mohammed; Markus, Thorsten
2006-01-01
Ice concentration retrieved from spaceborne passive-microwave observations is a prime input to operational sea-ice-monitoring programs, numerical weather prediction models, and global climate models. Atmospheric Environment Service (AES)-York and the Enhanced National Aeronautics and Space Administration Team (NT2) are two algorithms that calculate ice concentration from Special Sensor Microwave/Imager observations. This paper furnishes a comparison between ice concentrations (total, thin, and thick types) output from the NT2 and AES-York algorithms against the corresponding estimates from the operational analysis of Radarsat images in the Canadian Ice Service (CIS). A new data fusion technique, which incorporates the actual sensor's footprint, was developed to facilitate this study. Results have shown that the NT2 and AES-York algorithms underestimate total ice concentration by 18.35% and 9.66% concentration counts on average, with 16.8% and 15.35% standard deviation, respectively. However, the retrieved concentrations of thin and thick ice show much greater discrepancy from the operational CIS estimates when either one of these two types dominates the viewing area. This is more likely to occur when the total ice concentration approaches 100%. If thin and thick ice types coexist in comparable concentrations, the algorithms' estimates agree with CIS's estimates. In terms of ice concentration retrieval, thin ice is more problematic than thick ice. The concept of using a single tie point to represent a thin ice surface is not realistic and provides the largest error source for retrieval accuracy. While AES-York provides total ice concentration in slightly better agreement with CIS's estimates, NT2 provides better agreement in retrieving thin and thick ice concentrations.
Jeng-Fung Chen; Ho-Nien Hsieh; Quang Hung Do
2014-01-01
Predicting student academic performance with a high accuracy facilitates admission decisions and enhances educational services at educational institutions. This raises the need to propose a model that predicts student performance, based on the results of standardized exams, including university entrance exams, high school graduation exams, and other influential factors. In this study, an approach to the problem based on the artificial neural network (ANN) with the two meta-heuristic algorithm...
Khehra, Baljit Singh; Pharwaha, Amar Partap Singh
2016-06-01
Ductal carcinoma in situ (DCIS) is one type of breast cancer. Clusters of microcalcifications (MCCs) are symptoms of DCIS that are recognized by mammography. Selection of a robust feature vector is the process of selecting an optimal subset of features from a large number of available features in a given problem domain, after feature extraction and before any classification scheme. Feature selection reduces the feature space, which improves the performance of the classifier and decreases the computational burden imposed on the classifier by using many features. Selection of an optimal subset of features from a large number of available features is a difficult search problem: for n features, the total number of possible subsets is 2^n. Thus, the optimal feature subset selection problem belongs to the category of NP-hard problems. In this paper, an attempt is made to find the optimal subset of MCC features from all possible subsets of features using a genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO). For simulation, a total of 380 benign and malignant MCC samples have been selected from mammogram images of the DDSM database. A total of 50 features extracted from the benign and malignant MCC samples are used in this study. In these algorithms, the fitness function is the correct classification rate of the classifier. A support vector machine is used as the classifier. From the experimental results, it is observed that the performance of the PSO-based and BBO-based algorithms in selecting an optimal subset of features for classifying MCCs as benign or malignant is better than that of the GA-based algorithm.
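The GA variant of the feature-subset search can be sketched with a bitmask encoding, one-point crossover and bit-flip mutation. A toy separable fitness (per-feature utility minus a size penalty) stands in for the paper's actual fitness, the SVM's correct classification rate; all numeric parameters here are illustrative assumptions:

```python
import random

rng = random.Random(42)
N_FEATURES = 10
# toy per-feature usefulness; in the paper the fitness is an SVM's
# correct classification rate on the selected MCC features
utility = [rng.random() for _ in range(N_FEATURES)]

def fitness(mask):
    picked = [u for u, bit in zip(utility, mask) if bit]
    # reward useful features, mildly penalise subset size
    return sum(picked) - 0.1 * len(picked)

def ga(pop_size=30, gens=40):
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        survivors = pop[: pop_size // 2]        # elitist selection
        children = []
        while len(children) < pop_size - len(survivors):
            a, b = rng.sample(survivors, 2)
            cut = rng.randrange(1, N_FEATURES)  # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:              # bit-flip mutation
                child[rng.randrange(N_FEATURES)] ^= 1
            children.append(child)
        pop = survivors + children
    return max(pop, key=fitness)

best = ga()
ideal = [1 if u > 0.1 else 0 for u in utility]  # optimum of this separable toy fitness
print(fitness(best), fitness(ideal))
```

PSO and BBO search the same 2^n bitmask space but replace crossover/mutation with velocity updates and habitat migration, respectively, which is where the performance differences reported above arise.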
Ruvio, Giuseppe; Solimene, Raffaele; Cuccaro, Antonio; Ammann, Max
2013-01-01
A comparative analysis of an imaging method based on a multi-frequency Multiple Signal Classification (MUSIC) approach against two common linear detection algorithms based on non-coherent migration is made. The different techniques are tested using synthetic data generated through CST Microwave Studio and a phantom developed from MRI scans of a mostly fat breast. The multi-frequency MUSIC approach shows an overall superior performance compared to the non-coherent techniques. This paper report...
Energy Technology Data Exchange (ETDEWEB)
Mantini, D [ITAB-Institute of Advanced Biomedical Technologies, University Foundation 'G. d'Annunzio', University of Chieti (Italy)]; Hild II, K E [Department of Radiology, University of California at San Francisco, CA (United States)]; Alleva, G [ITAB-Institute of Advanced Biomedical Technologies, University Foundation 'G. d'Annunzio', University of Chieti (Italy)]; Comani, S [ITAB-Institute of Advanced Biomedical Technologies, University Foundation 'G. d'Annunzio', University of Chieti (Italy); Department of Clinical Sciences and Bio-imaging, University of Chieti (Italy)]
2006-02-21
Independent component analysis (ICA) algorithms have been successfully used for signal extraction tasks in the field of biomedical signal processing. We studied the performances of six algorithms (FastICA, CubICA, JADE, Infomax, TDSEP and MRMI-SIG) for fetal magnetocardiography (fMCG). Synthetic datasets were used to check the quality of the separated components against the original traces. Real fMCG recordings were simulated with linear combinations of typical fMCG source signals: maternal and fetal cardiac activity, ambient noise, maternal respiration, sensor spikes and thermal noise. Clusters of different dimensions (19, 36 and 55 sensors) were prepared to represent different MCG systems. Two types of signal-to-interference ratios (SIR) were measured. The first involves averaging over all estimated components and the second is based solely on the fetal trace. The computation time to reach a minimum of 20 dB SIR was measured for all six algorithms. No significant dependency on gestational age or cluster dimension was observed. Infomax performed poorly when a sub-Gaussian source was included; TDSEP and MRMI-SIG were sensitive to additive noise, whereas FastICA, CubICA and JADE showed the best performances. Of all six methods considered, FastICA had the best overall performance in terms of both separation quality and computation times.
Kim, Sung Jin; Kim, Sung Kyu
2015-01-01
Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms provided by our planning system, pencil beam (PB), collapsed cone (CC), and Monte Carlo (MC), were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated using the PB, CC, and MC algorithms. Planning treatment volume (PTV) and organs at risk (OAR) delineation was performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3 to 0.5 cm computed tomography slices taken under normal respiration conditions. Four intensity-modulated radiation therapy plans were calculated according to each algorithm for each patient. The plans were conducted on the Oncentra MasterPlan and CMS Monaco treatment planning systems, for 6 MV. The plans were compared in terms of the dose distribution in target, OAR volumes, and...
Directory of Open Access Journals (Sweden)
Muhammad Ilyas
2016-05-01
This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level.
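The relative/absolute fusion idea can be illustrated with a deliberately simplified scalar Kalman filter: one attitude angle, a biased rate gyro as the relative sensor, and noisy absolute attitude fixes as the correction. This is not the authors' EKF/UKF design; all noise figures are invented for the sketch, and the process noise is inflated to absorb the unmodelled gyro bias:

```python
import random

def kalman_fuse(n_steps=200, dt=0.1, seed=7):
    """Scalar Kalman filter: propagate one attitude angle with a biased rate
    gyro (relative sensor), correct with noisy absolute attitude fixes."""
    rng = random.Random(seed)
    true_att, rate = 0.0, 0.05            # rad, rad/s
    gyro_bias, gyro_sd, abs_sd = 0.01, 0.002, 0.05
    x, P = 0.0, 1.0                       # estimate and its variance
    Q = 1e-4                              # inflated to absorb the unmodelled bias
    R = abs_sd ** 2
    for _ in range(n_steps):
        true_att += rate * dt
        gyro = rate + gyro_bias + rng.gauss(0, gyro_sd)
        x += gyro * dt                    # predict with the relative sensor
        P += Q
        z = true_att + rng.gauss(0, abs_sd)
        K = P / (P + R)                   # update with the absolute sensor
        x += K * (z - x)
        P *= 1 - K
    return abs(x - true_att)

err = kalman_fuse()
print(err)  # small residual error despite the gyro bias
```

Without the absolute updates the bias alone would accumulate 0.01 x dt x n_steps = 0.2 rad of drift; the fusion keeps the error well below that, which is the qualitative point of the paper.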
Ilyas, Muhammad; Hong, Beomjin; Cho, Kuk; Baeg, Seung-Ho; Park, Sangdeok
2016-01-01
This paper provides algorithms to fuse relative and absolute microelectromechanical systems (MEMS) navigation sensors, suitable for micro planetary rovers, to provide a more accurate estimation of navigation information, specifically, attitude and position. Planetary rovers have extremely slow speed (~1 cm/s) and lack conventional navigation sensors/systems, hence the general methods of terrestrial navigation may not be applicable to these applications. While relative attitude and position can be tracked in a way similar to those for ground robots, absolute navigation information is hard to achieve on a remote celestial body, like Moon or Mars, in contrast to terrestrial applications. In this study, two absolute attitude estimation algorithms were developed and compared for accuracy and robustness. The estimated absolute attitude was fused with the relative attitude sensors in a framework of nonlinear filters. The nonlinear Extended Kalman filter (EKF) and Unscented Kalman filter (UKF) were compared in pursuit of better accuracy and reliability in this nonlinear estimation problem, using only on-board low cost MEMS sensors. Experimental results confirmed the viability of the proposed algorithms and the sensor suite, for low cost and low weight micro planetary rovers. It is demonstrated that integrating the relative and absolute navigation MEMS sensors reduces the navigation errors to the desired level. PMID:27223293
Energy Technology Data Exchange (ETDEWEB)
Abels, Benjamin [University Hospital Erlangen, Institute of Radiology, Erlangen (Germany); University Hospital Heidelberg, Diagnostic and Interventional Radiology, Heidelberg (Germany); Villablanca, J.P. [UCLA Medical Center, Department of Neuroradiology, Los Angeles, CA (United States); Tomandl, Bernd F. [Klinikum Bremen-Mitte, Department of Neuroradiology, Bremen (Germany); Uder, Michael; Lell, Michael M. [University Hospital Erlangen, Institute of Radiology, Erlangen (Germany)
2012-12-15
To compare ischaemic lesions predicted by different CT perfusion (CTP) post-processing techniques and validate CTP lesions compared with final lesion size in stroke patients. Fifty patients underwent CT, CTP and CT angiography. Quantitative values and colour maps were calculated using least mean square deconvolution (LMSD), maximum slope (MS) and conventional singular value decomposition deconvolution (SVDD) algorithms. Quantitative results, core/penumbra lesion sizes and Alberta Stroke Programme Early CT Score (ASPECTS) were compared among the algorithms; lesion sizes and ASPECTS were compared with final lesions on follow-up MRI + MRA or CT + CTA as a reference standard, accounting for recanalisation status. Differences in quantitative values and lesion sizes were statistically significant, but therapeutic decisions based on ASPECTS and core/penumbra ratios would have been the same in all cases. CTP lesion sizes were highly predictive of final infarct size: Coefficients of determination (R {sup 2}) for CTP versus follow-up lesion sizes in the recanalisation group were 0.87, 0.82 and 0.61 (P < 0.001) for LMSD, MS and SVDD, respectively, and 0.88, 0.87 and 0.76 (P < 0.001), respectively, in the non-recanalisation group. Lesions on CT perfusion are highly predictive of final infarct. Different CTP post-processing algorithms usually lead to the same clinical decision, but for assessing lesion size, LMSD and MS appear superior to SVDD. (orig.)
Directory of Open Access Journals (Sweden)
Gilson Alexandre Pinto
2005-06-01
The partial hydrolysis of cheese whey proteins, carried out by enzymes immobilized on an inert support, can alter or enhance functional properties of the polypeptides produced, thereby broadening their applications. Controlling the pH of the proteolysis reactor is of fundamental importance for modulating the molecular weight distribution of the peptides formed. The pH and temperature signals used by the control and state-inference algorithm may be subject to considerable noise, making their filtering important. This work presented the results of implementing, in the process monitoring system, an off-line smoothing algorithm that uses penalized least squares for post-treatment of the data. The performance of different algorithms for on-line filtering of the signals used by the control system was also compared, namely: artificial neural networks, moving average and the aforementioned smoother.
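The simplest of the on-line filters compared above, a centred moving average, can be sketched on a synthetic noisy signal (a sine wave standing in for the reactor's pH/temperature traces; window size and noise level are illustrative assumptions):

```python
import math
import random

def moving_average(signal, window=5):
    """Centred moving average; edges use a shrunken window."""
    n, half = len(signal), window // 2
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

rng = random.Random(1)
true = [math.sin(0.1 * i) for i in range(100)]          # slowly varying "process signal"
noisy = [t + rng.gauss(0, 0.3) for t in true]           # measurement noise
smooth = moving_average(noisy)

mse = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)
print(mse(noisy, true) > mse(smooth, true))  # True: smoothing reduces the error
```

The penalized-least-squares smoother used off-line in the paper instead minimizes a fit term plus a roughness penalty over the whole record, which gives better edge behaviour than a sliding window but is not causal, hence its off-line role.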
Exploring Optimal Topology and Routing Algorithm for 3D Network on Chip
Directory of Open Access Journals (Sweden)
N. Viswanathan
2012-01-01
Problem statement: Network on Chip (NoC) is an appropriate candidate to implement interconnections in SoCs. An increase in the number of IP blocks in a 2D NoC leads to increases in chip area, global interconnect length, length of the communication channel, number of hops traversed by a packet, latency and difficulty in clock distribution. 3D NoC evolved to overcome the drawbacks of 2D NoC. Topology, switching mechanism and routing algorithm are major areas of 3D NoC research. In this study, three topologies (3D-MT, 3D-ST and 3D-RNT) and a routing algorithm for 3D NoC are presented. Approach: An experiment is conducted to evaluate the performance of the topologies and routing algorithm. Evaluation parameters are latency, probability, network diameter and energy dissipation. Results: A comparison of experimental results demonstrates that 3D-RNT is a suitable candidate for 3D NoC topology. Conclusion: The performance of the topologies and routing algorithm for 3D NoC is analysed. 3D-MT is not a suitable candidate for 3D NoC; 3D-ST is a suitable candidate provided interlayer communications are frequent; and 3D-RNT is a suitable candidate as interlayer communications are limited.
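One of the evaluation parameters above, network diameter, is easy to compute for any candidate topology by running BFS from every node. The sketch below uses a plain 4x4x4 3D mesh as an illustrative topology (the paper's 3D-MT/3D-ST/3D-RNT structures are not reproduced here):

```python
from collections import deque
from itertools import product

def mesh3d_neighbors(node, dims):
    """Six-connected neighbours inside a 3D mesh of size dims."""
    x, y, z = node
    for dx, dy, dz in ((1,0,0), (-1,0,0), (0,1,0), (0,-1,0), (0,0,1), (0,0,-1)):
        nx, ny, nz = x + dx, y + dy, z + dz
        if 0 <= nx < dims[0] and 0 <= ny < dims[1] and 0 <= nz < dims[2]:
            yield (nx, ny, nz)

def eccentricity(src, dims):
    """Longest shortest-path (in hops) from src, via BFS."""
    dist = {src: 0}
    q = deque([src])
    while q:
        u = q.popleft()
        for v in mesh3d_neighbors(u, dims):
            if v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return max(dist.values())

dims = (4, 4, 4)
diameter = max(eccentricity(n, dims) for n in product(*(range(d) for d in dims)))
print(diameter)  # 9: (4-1) hops per axis between opposite corners
```

The same harness applied to a candidate topology's adjacency function gives its diameter directly, which is one axis along which the three topologies above are compared.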
Energy Technology Data Exchange (ETDEWEB)
Puchner, Stefan B. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Medical University of Vienna, Department of Biomedical Imaging and Image-Guided Therapy, Vienna (Austria); Ferencik, Maros [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Harvard Medical School, Division of Cardiology, Massachusetts General Hospital, Boston, MA (United States); Maurovich-Horvat, Pal [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Semmelweis University, MTA-SE Lenduelet Cardiovascular Imaging Research Group, Heart and Vascular Center, Budapest (Hungary); Nakano, Masataka; Otsuka, Fumiyuki; Virmani, Renu [CV Path Institute Inc., Gaithersburg, MD (United States); Kauczor, Hans-Ulrich [University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany); Hoffmann, Udo [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); Schlett, Christopher L. [Massachusetts General Hospital, Harvard Medical School, Cardiac MR PET CT Program, Department of Radiology, Boston, MA (United States); University Hospital Heidelberg, Ruprecht-Karls-University of Heidelberg, Department of Diagnostic and Interventional Radiology, Heidelberg (Germany)
2015-01-15
To evaluate whether iterative reconstruction algorithms improve the diagnostic accuracy of coronary CT angiography (CCTA) for detection of lipid-core plaque (LCP) compared to histology. CCTA and histological data were acquired from three ex vivo hearts. CCTA images were reconstructed using filtered back projection (FBP), adaptive-statistical (ASIR) and model-based (MBIR) iterative algorithms. Vessel cross-sections were co-registered between FBP/ASIR/MBIR and histology. Plaque area <60 HU was semiautomatically quantified in CCTA. LCP was defined by histology as fibroatheroma with a large lipid/necrotic core. Area under the curve (AUC) was derived from logistic regression analysis as a measure of diagnostic accuracy. Overall, 173 CCTA triplets (FBP/ASIR/MBIR) were co-registered with histology. LCP was present in 26 cross-sections. Average measured plaque area <60 HU was significantly larger in LCP compared to non-LCP cross-sections (mm{sup 2}: 5.78 ± 2.29 vs. 3.39 ± 1.68 FBP; 5.92 ± 1.87 vs. 3.43 ± 1.62 ASIR; 6.40 ± 1.55 vs. 3.49 ± 1.50 MBIR; all p < 0.0001). AUC for detecting LCP was 0.803/0.850/0.903 for FBP/ASIR/MBIR and was significantly higher for MBIR compared to FBP (p = 0.01). MBIR increased sensitivity for detection of LCP by CCTA. Plaque area <60 HU in CCTA was associated with LCP in histology regardless of the reconstruction algorithm. However, MBIR demonstrated higher accuracy for detecting LCP, which may improve vulnerable plaque detection by CCTA. (orig.)
Energy Technology Data Exchange (ETDEWEB)
Rothe, Jan Holger, E-mail: jan-holger.rothe@charite.de [Klinik für Radiologie, Campus Virchow-Klinikum, Charité – Universitätsmedizin, Berlin (Germany); Grieser, Christian [Klinik für Radiologie, Campus Virchow-Klinikum, Charité – Universitätsmedizin, Berlin (Germany); Lehmkuhl, Lukas [Abteilung für Diagnostische und Interventionelle Radiologie, Herzzentrum Leipzig (Germany); Schnapauff, Dirk; Fernandez, Carmen Perez; Maurer, Martin H.; Mussler, Axel; Hamm, Bernd; Denecke, Timm; Steffen, Ingo G. [Klinik für Radiologie, Campus Virchow-Klinikum, Charité – Universitätsmedizin, Berlin (Germany)
2013-11-01
Objective: To compare different three-dimensional volumetric algorithms (3D-algorithms) and RECIST for size measurement and response assessment in liver metastases from colorectal and pancreatic cancer. Methods: The volumes of a total of 102 liver metastases in 45 patients (pancreatic cancer, n = 22; colon cancer, n = 23) were estimated using three volumetric methods (seeded region growing method, slice-based segmentation, threshold-based segmentation) and the RECIST 1.1 method with volume calculation based on the largest axial diameter. Each measurement was performed three times by one observer. All four methods were applied in the follow-up of 55 liver metastases in 29 patients undergoing systemic treatment (median follow-up, 3.5 months; range, 1–10 months). Analysis of variance (ANOVA) with post hoc tests was performed to analyze intraobserver variability and intermethod differences. Results: ANOVA showed significantly higher volumes calculated according to the RECIST guideline compared to the other measurement methods (p < 0.001), with relative differences ranging from 0.4% to 41.1%. Intraobserver variability was significantly higher (p < 0.001) for RECIST and threshold-based segmentation (3.6–32.8%) compared with slice segmentation (0.4–13.7%) and the seeded region growing method (0.6–10.8%). In the follow-up study, the 3D-algorithms and the assessment following RECIST 1.1 showed a discordant classification of treatment response in 10–21% of the patients. Conclusions: This study supports the use of volumetric measurement methods due to significantly higher intraobserver reproducibility compared to RECIST. Substantial discrepancies in tumor response classification between RECIST and volumetric methods depending on applied thresholds confirm the requirement of a consensus concerning volumetric criteria for response assessment.
Directory of Open Access Journals (Sweden)
Morita Mitsuo
2011-06-01
Background: A Bayesian approach based on a Dirichlet process (DP) prior is useful for inferring genetic population structures because it can infer the number of populations and the assignment of individuals simultaneously. However, the properties of the DP prior method are not well understood, and therefore, the use of this method is relatively uncommon. We characterized the DP prior method to increase its practical use. Results: First, we evaluated the usefulness of the sequentially-allocated merge-split (SAMS) sampler, which is a technique for improving the mixing of Markov chain Monte Carlo algorithms. Although this sampler has been implemented in a preceding program, HWLER, its effectiveness has not been investigated. We showed that this sampler was effective for population structure analysis. Implementation of this sampler was useful with regard to the accuracy of inference and computational time. Second, we examined the effect of a hyperparameter for the prior distribution of allele frequencies and showed that the specification of this parameter was important and could be resolved by considering the parameter as a variable. Third, we compared the DP prior method with other Bayesian clustering methods and showed that the DP prior method was suitable for data sets with unbalanced sample sizes among populations. In contrast, although current popular algorithms for population structure analysis, such as those implemented in STRUCTURE, were suitable for data sets with uniform sample sizes, inferences with these algorithms for unbalanced sample sizes tended to be less accurate than those with the DP prior method. Conclusions: The clustering method based on the DP prior was found to be useful because it can infer the number of populations and simultaneously assign individuals into populations, and it is suitable for data sets with unbalanced sample sizes among populations. Here we presented a novel program, DPART, that implements the SAMS
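The DP prior's ability to infer the number of clusters can be illustrated with the Chinese restaurant process, the sequential sampling scheme underlying DP mixture samplers (this is the generative prior only, not the SAMS sampler or genotype likelihoods of the paper):

```python
import random

def crp_partition(n, alpha, rng):
    """Chinese restaurant process: customer i joins an existing table t with
    probability |t|/(i+alpha), or opens a new table with probability alpha/(i+alpha)."""
    tables = []
    for i in range(n):
        r = rng.uniform(0, i + alpha)
        acc = 0.0
        for t in tables:
            acc += len(t)
            if r < acc:
                t.append(i)
                break
        else:
            tables.append([i])      # open a new table
    return tables

rng = random.Random(3)
tables = crp_partition(100, alpha=1.0, rng=rng)
print(len(tables), sum(len(t) for t in tables))  # number of tables grows ~ alpha*log(n)
```

Because the number of occupied tables is random and unbounded, a DP-based clustering method never has to fix the number of populations in advance, which is exactly the property the abstract highlights.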
Directory of Open Access Journals (Sweden)
Noureddine Bouhmala
2012-11-01
In this work, a hierarchical population-based memetic algorithm for solving the satisfiability problem is presented. The approach suggests looking at the evolution as a hierarchical process evolving from a coarse population, where the basic unit of a gene is composed of a cluster of variables that represent the problem, to a fine population, where each gene represents a single variable. The optimization process is carried out by letting the converged population at a child level serve as the initial population of the parent level. A benchmark composed of industrial instances is used to compare the effectiveness of the hierarchical approach against its single-level counterpart.
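The local-search component that memetic SAT solvers combine with evolutionary operators can be sketched as a WalkSAT-style routine. The hierarchical clustering machinery of the paper is not reproduced, and the CNF instance below is a toy example:

```python
import random

def walksat(clauses, n_vars, max_flips=10000, p=0.5, seed=0):
    """WalkSAT-style local search; literals are signed 1-indexed integers."""
    rng = random.Random(seed)
    assign = [rng.random() < 0.5 for _ in range(n_vars + 1)]  # index 0 unused
    def satisfied(clause):
        return any(assign[abs(l)] == (l > 0) for l in clause)
    for _ in range(max_flips):
        unsat = [c for c in clauses if not satisfied(c)]
        if not unsat:
            return assign
        clause = rng.choice(unsat)
        if rng.random() < p:
            var = abs(rng.choice(clause))          # random-walk step
        else:
            def broken(v):                         # unsatisfied clauses if v is flipped
                assign[v] = not assign[v]
                n_bad = sum(not satisfied(c) for c in clauses)
                assign[v] = not assign[v]
                return n_bad
            var = min((abs(l) for l in clause), key=broken)  # greedy step
        assign[var] = not assign[var]
    return None

# (x1 v x2) & (~x1 v x3) & (~x2 v ~x3) & (x1 v x3)
cnf = [[1, 2], [-1, 3], [-2, -3], [1, 3]]
model = walksat(cnf, 3)
print(model is not None)  # True: e.g. x1=T, x2=F, x3=T satisfies the formula
```

In the hierarchical scheme above, such a routine would refine individuals at each level, with the coarse levels flipping whole variable clusters at once rather than single variables.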
Braun, Theodore E. D.; And Others
1988-01-01
Two different approaches to teaching Voltaire's "Candide", one deriving meaning from the textual fabric or "inside" of the story and the other focusing on the author's "external" intent in writing the story, are presented and compared. (MSE)
Vio, R; Wamsteker, W
2004-01-01
It is well-known that the noise associated with the collection of an astronomical image by a CCD camera is, in large part, Poissonian. One would expect, therefore, that computational approaches that incorporate this a priori information will be more effective than those that do not. The Richardson-Lucy (RL) algorithm, for example, can be viewed as a maximum-likelihood (ML) method for image deblurring when the data noise is assumed to be Poissonian. Least-squares (LS) approaches, on the other hand, arise from the assumption that the noise is Gaussian with fixed variance across pixels, which is rarely accurate. Given this, it is surprising that in many cases results obtained using LS techniques are relatively insensitive to whether the noise is Poissonian or Gaussian. Furthermore, in the presence of Poisson noise, results obtained using LS techniques are often comparable with those obtained by the RL algorithm. We seek an explanation of these phenomena via an examination of the regularization properties of par...
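The multiplicative RL update referred to in this abstract is easy to sketch. Below is a minimal 1-D illustration in Python; the signal and kernel are invented for the example, and this is a conceptual sketch rather than the authors' implementation:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """Richardson-Lucy deconvolution, the ML estimate under Poisson noise.

    Each iteration re-blurs the current estimate, compares it with the
    observed data, and back-projects the ratio through the flipped PSF.
    """
    estimate = np.full_like(observed, observed.mean())
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        reblurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(reblurred, 1e-12)
        estimate = estimate * np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# Invented toy signal: two point sources blurred by a small symmetric PSF.
truth = np.zeros(32)
truth[10], truth[20] = 5.0, 3.0
psf = np.array([0.25, 0.5, 0.25])
blurred = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(blurred, psf, iterations=200)
```

Because the update is multiplicative, the estimate stays non-negative, which is exactly the property that makes RL natural for photon-count data.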
Comparison of Chlorophyll-A Algorithms for the Transition Zone Between the North Sea and Baltic Sea
Huber, Silvia; Hansen, Lars B.; Rasmussen, Mads O.; Kaas, Hanne
2015-12-01
Monitoring water quality of the transition zone between the North Sea and Baltic Sea from space is still a challenge because of the optically complex waters. The presence of suspended sediments and dissolved substances often interferes with the phytoplankton signal and thus confounds conventional case-1 algorithms developed for the open ocean. Specific calibration to case-2 waters may compensate for this. In this study we compared chlorophyll-a (chl-a) concentrations derived with three different case-2 algorithms: C2R, FUB/WeW and CoastColour, using MERIS data as a basis. Default C2R and FUB clearly underestimate higher chl-a concentrations. However, with local tuning we could significantly improve the fit with in-situ data. For instance, the root mean square error is reduced by roughly 50% from 3.06 to 1.6 μ g/L for the calibrated C2R processor as compared to the default C2R. This study is part of the FP7 project AQUA-USERS which has the overall goal to provide the aquaculture industry with timely information based on satellite data and optical in-situ measurements. One of the products is chlorophyll-a concentration.
Comparison between SARS CoV and MERS CoV Using Apriori Algorithm, Decision Tree, SVM
Directory of Open Access Journals (Sweden)
Jang Seongpil
2016-01-01
Full Text Available MERS (Middle East Respiratory Syndrome) is a worldwide disease these days. The number of infected people is 1038 (08/03/2015) in Saudi Arabia and 186 (08/03/2015) in South Korea. MERS has spread across the world, including Europe, East Asia and the Middle East, and the fatality rate is 38.8%. MERS is also known as a cousin of SARS (Severe Acute Respiratory Syndrome) because both diseases show similar symptoms such as high fever and difficulty in breathing. This is why we compared MERS with SARS. We used data of the spike glycoprotein from NCBI. As a way of analyzing the protein, the apriori algorithm, decision tree and SVM were used, and in particular the SVM was iterated with normal, polynomial, and sigmoid settings. The result came out that MERS and SARS are alike but also different in some ways.
Nagayama, T; Mancini, R C; Florido, R; Tommasini, R; Koch, J A; Delettrez, J A; Regan, S P; Smalyuk, V A; Welser-Sherrill, L A; Golovkin, I E
2008-10-01
Detailed analysis of x-ray narrow-band images from argon-doped deuterium-filled inertial confinement fusion implosion experiments yields information about the temperature spatial structure in the core at the collapse of the implosion. We discuss the analysis of direct-drive implosion experiments at OMEGA, in which multiple narrow-band images were recorded with a multimonochromatic x-ray imaging instrument. The temperature spatial structure is investigated by using the sensitivity of the Ly beta/He beta line emissivity ratio to the temperature. Three analysis methods that consider the argon He beta and Ly beta image data are discussed and the results compared. The methods are based on a ratio of image intensities, ratio of Abel-inverted emissivities, and a search and reconstruction technique driven by a Pareto genetic algorithm. PMID:19044576
Irha, E; Vrdoljak, J
2000-01-01
The aim of this study was to select children with pathological lesions of the intra-articular structures from children with identical complaints but with no pathological intra-articular changes. The younger the child, the more difficult it is to make the diagnosis, and the expected distribution of pathology changes increasingly. This is particularly stressed in children aged younger than 13 years. Synovial inflammatory alterations are more frequent, and osteochondral and chondral fractures appear to be more problematic than meniscal and cruciate ligament lesions. Before establishing the indication for knee arthroscopy it is mandatory to implement the algorithm of diagnostic and conservative therapeutic procedures. The indication for knee arthroscopy is considered in cases when complaints persist after conservative treatment, a lesion of intra-articular segments is suspected, and the pathological condition is deemed arthroscopically treatable. Arthroscopy before conservative treatment is justified only in acute cases.
Energy Technology Data Exchange (ETDEWEB)
Sun, Shangjin; Gill, Michelle; Li, Yifei; Huang, Mitchell; Byrd, R. Andrew, E-mail: byrdra@mail.nih.gov [National Cancer Institute, Structural Biophysics Laboratory (United States)
2015-05-15
The advantages of non-uniform sampling (NUS) in offering time savings and resolution enhancement in NMR experiments have been increasingly recognized. The possibility of sensitivity gain by NUS has also been demonstrated. Application of NUS to multidimensional NMR experiments requires the selection of a sampling scheme and a reconstruction scheme to generate uniformly sampled time domain data. In this report, an efficient reconstruction scheme is presented and used to evaluate a range of regularization algorithms that collectively yield a generalized solution to processing NUS data in multidimensional NMR experiments. We compare l1-norm (L1), iterative re-weighted l1-norm (IRL1), and Gaussian smoothed l0-norm (Gaussian-SL0) regularization for processing multidimensional NUS NMR data. Based on the reconstruction of different multidimensional NUS NMR data sets, L1 is demonstrated to be a fast and accurate reconstruction method for both quantitative, high dynamic range applications (e.g. NOESY) and for all J-coupled correlation experiments. Compared to L1, both IRL1 and Gaussian-SL0 are shown to produce slightly higher quality reconstructions with improved linearity in peak intensities, albeit with a computational cost. Finally, a generalized processing system, NESTA-NMR, is described that utilizes a fast and accurate first-order gradient descent algorithm (NESTA) recently developed in the compressed sensing field. NESTA-NMR incorporates L1, IRL1, and Gaussian-SL0 regularization. NESTA-NMR is demonstrated to provide an efficient, streamlined approach to handling all types of multidimensional NMR data using proteins ranging in size from 8 to 32 kDa.
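The l1-norm reconstruction compared in this abstract can be illustrated, under assumptions, with the classic iterative soft-thresholding (ISTA) scheme. NESTA itself is a different, accelerated first-order solver, so the following is only a conceptual sketch with an invented measurement matrix, not the NESTA-NMR code:

```python
import numpy as np

def ista_l1(A, y, lam=0.01, iterations=500):
    """Minimise 0.5*||Ax - y||^2 + lam*||x||_1 by iterative soft thresholding.

    A stand-in for l1-regularised reconstruction of undersampled data:
    a gradient step on the data-fit term, then a soft-threshold shrinkage.
    """
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iterations):
        z = x - A.T @ (A @ x - y) / L          # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x

# Invented 2-sparse "spectrum" observed through a random measurement matrix.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40)) / np.sqrt(20)
x_true = np.zeros(40)
x_true[5], x_true[17] = 1.5, -1.0
x_hat = ista_l1(A, A @ x_true)
```

The re-weighted l1 (IRL1) variant mentioned in the abstract repeats such a solve with per-coefficient weights, which is where its extra computational cost comes from.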
International Nuclear Information System (INIS)
Highlights: • Simultaneous minimization of the thermal resistance and pressure drop is shown. • Genetic algorithm is capable of securing above objectives. • Experimental data using the microchannel heat sinks is limited. • Utilization of experimental data from ammonia-cooled microchannel which is scarce. • Outcomes present potentials for exploratory research into new coolants. - Abstract: Minimization of the thermal resistance and pressure drop of a microchannel heat sink is desirable for efficient heat removal which is becoming a serious challenge due to the demand for continuous miniaturization of such cooling systems with increasing high heat generation rate. However, a reduction in the thermal resistance generally leads to the increase in the pressure drop and vice versa. This paper reports the outcome of optimization of the hydraulic diameter and wall width to channel width ratio of square and circular microchannel heat sink for the simultaneous minimization of the two objectives; thermal resistance and pressure drop. The procedure was completed with multi-objective genetic algorithm (MOGA). Environmentally friendly liquid ammonia was used as the coolant and the thermophysical properties have been obtained based on the average experimental saturation temperatures measured along an ammonia-cooled 3.0 mm internal diameter horizontal microchannel rig. The optimized results showed that with the same hydraulic diameter and pumping power, circular microchannels have lower thermal resistance. Based on the same number of microchannels per square cm, the thermal resistance for the circular channels is lower by 21% at the lowest pumping power and lower by 35% at the highest pumping power than the thermal resistance for the square microchannels. Results obtained at 10 °C and 5 °C showed no significant difference, probably due to the slight difference in properties at these temperatures.
Directory of Open Access Journals (Sweden)
Robert J Hickey
2007-01-01
Full Text Available Introduction: As an alternative to DNA microarrays, mass spectrometry based analysis of proteomic patterns has shown great potential in cancer diagnosis. The ultimate application of this technique in clinical settings relies on the advancement of the technology itself and the maturity of the computational tools used to analyze the data. A number of computational algorithms constructed on different principles are available for the classification of disease status based on proteomic patterns. Nevertheless, few studies have addressed the difference in the performance of these approaches. In this report, we describe a comparative case study on the classification accuracy of hepatocellular carcinoma based on the serum proteomic pattern generated from a Surface Enhanced Laser Desorption/Ionization (SELDI) mass spectrometer. Methods: Nine supervised classification algorithms are implemented in R software and compared for the classification accuracy. Results: We found that the support vector machine with radial function is preferable as a tool for classification of hepatocellular carcinoma using features in SELDI mass spectra. Among the rest of the methods, random forest and prediction analysis of microarrays have better performance. A permutation-based technique reveals that the support vector machine with a radial function seems intrinsically superior in learning from the training data since it has a lower prediction error than others when there is essentially no differential signal. On the other hand, the performance of the random forest and prediction analysis of microarrays rely on their capability of capturing the signals with substantial differentiation between groups. Conclusions: Our finding is similar to a previous study, where classification methods based on the Matrix Assisted Laser Desorption/Ionization (MALDI) mass spectrometry are compared for the prediction accuracy of ovarian cancer. The support vector machine, random forest and prediction
Many-Objective Distinct Candidates Optimization using Differential Evolution
DEFF Research Database (Denmark)
Justesen, Peter; Ursem, Rasmus Kjær
2010-01-01
, we present the novel MODCODE algorithm incorporating the ROD measure to measure and control candidate distinctiveness. MODCODE is tested against GDE3 on three real world centrifugal pump design problems supplied by Grundfos. Our algorithm outperforms GDE3 on all problems with respect to all...
Directory of Open Access Journals (Sweden)
S. V. Bukharin
2016-01-01
Full Text Available The financial condition of an enterprise can be estimated by a set of characteristics (solvency and liquidity, capital structure, profitability, etc.). Some financial coefficients carry little information, while others are interrelated. To eliminate this ambiguity, we pass to generalized indicators, i.e. rating numbers, and propose the theory of expert systems as the main research tool. A characteristic of the modern theory of expert systems is the application of intelligent data-processing methods, i.e. data mining. A method is proposed that embeds the problem of comparing the financial condition of economic objects into an expert shell in the class of artificial-intelligence systems (the analytic hierarchy process, a neural network with contiguity learning, and a training algorithm with softmax activation). A generalized indicator of capital structure in the form of a rating number is introduced, and a feature (factor) space for seven specific enterprises is constructed. Quantitative features (financial coefficients of capital structure) are selected and normalized according to the rules of expert-system theory. The analytic hierarchy process is then applied to the resulting set of generalized indicators: based on T. Saaty's linguistic scale, ranks reflecting the relative importance of the various financial coefficients are defined, and a matrix of pairwise comparisons is constructed. The vector of feature priorities is computed from the eigenvalues and eigenvectors of this matrix. As a result, the obtained values are visualized, which eliminates the difficulty of interpreting small and negative values of the generalized indicator. A neural network with contiguity learning and
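The priority-vector step of the analytic hierarchy process described above (the principal eigenvector of the pairwise-comparison matrix) can be sketched as follows; the 3x3 Saaty-scale matrix is hypothetical, not the paper's data:

```python
import numpy as np

def ahp_priorities(pairwise, iterations=100):
    """Priority weights from a pairwise-comparison matrix (Saaty's AHP).

    Power iteration approximates the principal eigenvector, which is then
    normalised so the weights sum to one.
    """
    w = np.ones(pairwise.shape[0]) / pairwise.shape[0]
    for _ in range(iterations):
        w = pairwise @ w
        w = w / w.sum()
    return w

# Hypothetical 3x3 matrix on Saaty's 1-9 scale: criterion A is judged 3 times
# as important as B and 5 times as important as C; reciprocals below diagonal.
M = np.array([[1.0, 3.0, 5.0],
              [1/3, 1.0, 2.0],
              [1/5, 1/2, 1.0]])
weights = ahp_priorities(M)
```

For a perfectly consistent matrix the weights reproduce the stated importance ratios; in practice one also checks Saaty's consistency ratio before trusting them.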
Serag, Ahmed; Blesa, Manuel; Moore, Emma J.; Pataky, Rozalia; Sparrow, Sarah A.; Wilkinson, A. G.; MacNaught, Gillian; Semple, Scott I.; Boardman, James P.
2016-03-01
Accurate whole-brain segmentation, or brain extraction, of magnetic resonance imaging (MRI) is a critical first step in most neuroimage analysis pipelines. The majority of brain extraction algorithms have been developed and evaluated for adult data and their validity for neonatal brain extraction, which presents age-specific challenges for this task, has not been established. We developed a novel method for brain extraction of multi-modal neonatal brain MR images, named ALFA (Accurate Learning with Few Atlases). The method uses a new sparsity-based atlas selection strategy that requires a very limited number of atlases ‘uniformly’ distributed in the low-dimensional data space, combined with a machine learning based label fusion technique. The performance of the method for brain extraction from multi-modal data of 50 newborns is evaluated and compared with results obtained using eleven publicly available brain extraction methods. ALFA outperformed the eleven compared methods providing robust and accurate brain extraction results across different modalities. As ALFA can learn from partially labelled datasets, it can be used to segment large-scale datasets efficiently. ALFA could also be applied to other imaging modalities and other stages across the life course.
Directory of Open Access Journals (Sweden)
Raju Datla
2016-02-01
Full Text Available The radiometric calibration equations for the thermal emissive bands (TEB) and the reflective solar bands (RSB) measurements of the earth scenes by the polar satellite sensors (Terra and Aqua MODIS, and Suomi NPP VIIRS) and geostationary sensors (GOES Imager and the GOES-R Advanced Baseline Imager, ABI) are analyzed towards calibration algorithm harmonization on the basis of SI traceability, which is one of the goals of the NOAA National Calibration Center (NCC). One of the overarching goals of NCC is to provide a knowledge base on the NOAA operational satellite sensors and recommend best practices for achieving SI traceability for the radiance measurements on-orbit. As such, the calibration methodologies of these satellite optical sensors are reviewed in light of the recommended practice for radiometric calibration at the National Institute of Standards and Technology (NIST). The equivalence of some of the spectral bands in these sensors for their end products is presented. The operational and calibration features of the sensors for on-orbit observation of radiance are also compared in tabular form. This review is also to serve as a quick cross reference to researchers and analysts on how the observed signals from these sensors in space are converted to radiances.
Odindi, John; Adam, Elhadi; Ngubane, Zinhle; Mutanga, Onisimo; Slotow, Rob
2014-01-01
Plant species invasion is known to be a major threat to socioeconomic and ecological systems. Due to high cost and limited extents of urban green spaces, high mapping accuracy is necessary to optimize the management of such spaces. We compare the performance of the new-generation WorldView-2 (WV-2) and SPOT-5 images in mapping the bracken fern [Pteridium aquilinum (L) kuhn] in a conserved urban landscape. Using the random forest algorithm, grid-search approaches based on out-of-bag estimate error were used to determine the optimal ntree and mtry combinations. The variable importance and backward feature elimination techniques were further used to determine the influence of the image bands on mapping accuracy. Additionally, the value of the commonly used vegetation indices in enhancing the classification accuracy was tested on the better performing image data. Results show that the performance of the new WV-2 bands was better than that of the traditional bands. Overall classification accuracies of 84.72 and 72.22% were achieved for the WV-2 and SPOT images, respectively. Use of selected indices from the WV-2 bands increased the overall classification accuracy to 91.67%. The findings in this study show the suitability of the new generation in mapping the bracken fern within the often vulnerable urban natural vegetation cover types.
Repetto, Silvia A; Ruybal, Paula; Solana, María Elisa; López, Carlota; Berini, Carolina A; Alba Soto, Catalina D; Cappa, Stella M González
2016-05-01
Underdiagnosis of chronic infection with the nematode Strongyloides stercoralis may lead to severe disease in the immunosuppressed. Thus, we have set up a specific and highly sensitive molecular diagnosis in stool samples. Here, we compared the accuracy of our polymerase chain reaction (PCR)-based method with that of conventional diagnostic methods for chronic infection. We also analyzed clinical and epidemiological predictors of infection to propose an algorithm for the diagnosis of strongyloidiasis useful for the clinician. Molecular and gold standard methods were performed to evaluate a cohort of 237 individuals recruited in Buenos Aires, Argentina. Subjects were assigned according to their immunological status, eosinophilia and/or history of residence in endemic areas. Diagnosis of strongyloidiasis by PCR on the first stool sample was achieved in 71/237 (29.9%) individuals whereas only 65/237 (27.4%) were positive by conventional methods, requiring up to four serial stool samples at weekly intervals. Eosinophilia and history of residence in endemic areas have been revealed as independent factors as they increase the likelihood of detecting the parasite according to our study population. Our results underscore the usefulness of robust molecular tools aimed to diagnose chronic S. stercoralis infection. Evidence also highlights the need to survey patients with eosinophilia even when history of an endemic area is absent.
Directory of Open Access Journals (Sweden)
Li Zhen
2008-05-01
analysis of data sets in which in vitro bioassay data is being used to predict in vivo chemical toxicology. From our analysis, we can recommend that several ML methods, most notably SVM and ANN, are good candidates for use in real world applications in this area.
DEFF Research Database (Denmark)
Larsen, Thomas Ostenfeld; Petersen, Bent O.; Duus, Jens Øllgaard;
2005-01-01
X-hitting, a newly developed algorithm for automated comparison of UV data, has been used for the tracking of two novel spiro-quinazoline metabolites, lapatins A (1) and B (2), in a screening study targeting quinazolines. The structures of 1 and 2 were elucidated by analysis of spectroscopic data, p...
Directory of Open Access Journals (Sweden)
George J Burghel
Full Text Available Colorectal cancer (CRC) is a leading cause of cancer deaths worldwide. Chromosomal instability (CIN) is a major driving force of microsatellite stable (MSS) sporadic CRC. CIN tumours are characterised by a large number of somatic chromosomal copy number aberrations (SCNA) that frequently affect oncogenes and tumour suppressor genes. The main aim of this work was to identify novel candidate CRC driver genes affected by recurrent and focal SCNA. High resolution genome-wide comparative genome hybridisation (CGH) arrays were used to compare tumour and normal DNA for 53 sporadic CRC cases. Context corrected common aberration (COCA) analysis and custom algorithms identified 64 deletions and 32 gains of focal minimal common regions (FMCR) at high frequency (>10%). Comparison of these FMCR with published genomic profiles from CRC revealed common overlap (42.2% of deletions and 34.4% of copy gains). Pathway analysis showed that apoptosis and p53 signalling pathways were commonly affected by deleted FMCR, and MAPK and potassium channel pathways by gains of FMCR. Candidate tumour suppressor genes in deleted FMCR included RASSF3, IFNAR1, IFNAR2 and NFKBIA and candidate oncogenes in gained FMCR included PRDM16, TNS1, RPA3 and KCNMA1. In conclusion, this study confirms some previously identified aberrations in MSS CRC and provides in silico evidence for some novel candidate driver genes.
Burghel, George J.; Lin, Wei-Yu; Whitehouse, Helen; Brock, Ian; Hammond, David; Bury, Jonathan; Stephenson, Yvonne; George, Rina; Cox, Angela
2013-01-01
Colorectal cancer (CRC) is a leading cause of cancer deaths worldwide. Chromosomal instability (CIN) is a major driving force of microsatellite stable (MSS) sporadic CRC. CIN tumours are characterised by a large number of somatic chromosomal copy number aberrations (SCNA) that frequently affect oncogenes and tumour suppressor genes. The main aim of this work was to identify novel candidate CRC driver genes affected by recurrent and focal SCNA. High resolution genome-wide comparative genome hybridisation (CGH) arrays were used to compare tumour and normal DNA for 53 sporadic CRC cases. Context corrected common aberration (COCA) analysis and custom algorithms identified 64 deletions and 32 gains of focal minimal common regions (FMCR) at high frequency (>10%). Comparison of these FMCR with published genomic profiles from CRC revealed common overlap (42.2% of deletions and 34.4% of copy gains). Pathway analysis showed that apoptosis and p53 signalling pathways were commonly affected by deleted FMCR, and MAPK and potassium channel pathways by gains of FMCR. Candidate tumour suppressor genes in deleted FMCR included RASSF3, IFNAR1, IFNAR2 and NFKBIA and candidate oncogenes in gained FMCR included PRDM16, TNS1, RPA3 and KCNMA1. In conclusion, this study confirms some previously identified aberrations in MSS CRC and provides in silico evidence for some novel candidate driver genes. PMID:24367615
Primary and Presidential Candidates
DEFF Research Database (Denmark)
Goddard, Joseph
2012-01-01
This article looks at primary and presidential candidates in 2008 and 2012. Evidence suggests that voters are less influenced by candidates’ color, gender, or religious observation than previously. Conversely, markers of difference remain salient in the imaginations of pollsters and journalists...
Improved comparison inspection algorithm between ICT images & CAD model
Institute of Scientific and Technical Information of China (English)
张志波; 曾理; 何洪举
2012-01-01
This paper improved an algorithm for analyzing the manufacturing error based on comparison inspection between ICT images and the CAD model. Firstly, it segmented the ICT images by the 3D Otsu threshold method, and then extracted the edge surface and corner features. Secondly, it calculated the oriented bounding boxes (OBB) of the ICT images' corner features and of the workpiece's CAD model using the presented rotating projection method, and realized the rough registration from the two OBBs. Then the singular value decomposition and iterative closest point (SVD-ICP) algorithm was used to complete the precise registration between the CAD model and the corner features of the ICT images; a k-d tree was used to speed up the search for the closest point. Finally, it displayed the error on the edge surface. The experimental results indicate that the rough registration of this paper is more accurate and widely applicable, and the whole comparison inspection is more efficient, with a considerable gain in speed.
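The SVD step at the heart of the SVD-ICP registration described above can be sketched as follows. This is a generic Kabsch-style rigid alignment on an invented point cloud, not the paper's implementation:

```python
import numpy as np

def svd_rigid_align(P, Q):
    """Least-squares rotation R and translation t with R @ p + t ≈ q.

    The SVD step used inside SVD-ICP: centre both point sets, decompose the
    cross-covariance, and fix a possible reflection via the determinant.
    """
    p_mean, q_mean = P.mean(axis=0), Q.mean(axis=0)
    H = (P - p_mean).T @ (Q - q_mean)       # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # +1, or -1 for a reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = q_mean - R @ p_mean
    return R, t

# Invented cloud transformed by a known rotation about z plus a translation.
rng = np.random.default_rng(1)
P = rng.standard_normal((10, 3))
c, s = np.cos(0.3), np.sin(0.3)
Rz = np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
Q = P @ Rz.T + t_true
R, t = svd_rigid_align(P, Q)
```

In full ICP this solve is repeated after each round of closest-point matching (the step the paper accelerates with a k-d tree), since the correspondences are not known in advance.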
Sambuelli, L.; Bohm, G.; Capizzi, P.; Cardarelli, E.; Cosentino, P.
2011-09-01
By late 2008 one of the most important pieces of the 'Museo delle Antichità Egizie' of Turin, the sculpture of the Pharaoh with god Amun, was planned to be one of the masterpieces of a travelling exhibition in Japan. The 'Fondazione Museo delle Antichità Egizie di Torino', who manages the museum, was concerned with the integrity of the base of the statue, which presents visible signs of restoration dating back to the early 19th century. It was required to estimate the persistence of the visible fractures, to search for unknown ones and to provide information about the overall mechanical strength of the base. To tackle the first question, a GPR reflection survey along three sides of the base was performed and the results were assembled in a 3D rendering. As far as the second question is concerned, two parallel, horizontal ultrasonic 2D tomograms across the base were made. We acquired, for each section, 723 ultrasonic signals corresponding to different transmitter and receiver positions. The tomographic data were inverted using four different software packages based upon different algorithms. The obtained velocity images were then compared with each other, with the GPR results and with the visible fractures in the base. A critical analysis of the comparisons is finally presented.
DEFF Research Database (Denmark)
Fogliata, Antonella; Vanetti, Eugenio; Albers, Dirk;
2007-01-01
A comparative study was performed to reveal differences and relative figures of merit of seven different calculation algorithms for photon beams when applied to inhomogeneous media. The following algorithms were investigated: Varian Eclipse: the anisotropic analytical algorithm, and the pencil beam...... a systematic deficiency in managing the presence of heterogeneous media. In contrast, complicated patterns were observed for the advanced algorithms with significant discrepancies observed between algorithms in the lighter materials (rho = 0.035 g cm(-3)), enhanced for the most energetic beam. For denser...
Directory of Open Access Journals (Sweden)
Tummala Pradeep
2011-11-01
Full Text Available This paper investigates the use of the variable learning rate back-propagation algorithm and the Levenberg-Marquardt back-propagation algorithm in an intrusion detection system for detecting attacks. In the present study, these 2 neural network (NN) algorithms are compared according to their speed, accuracy, and performance using the mean squared error (MSE) (the closer the MSE is to 0, the higher the performance). Based on the study and test results, the Levenberg-Marquardt algorithm has been found to be faster and to have higher accuracy and performance than the variable learning rate back-propagation algorithm.
Kim, Sung Jin; Kim, Sung Kyu; Kim, Dong Ho
2015-07-01
Treatment planning system calculations in inhomogeneous regions may present significant inaccuracies due to loss of electronic equilibrium. In this study, three different dose calculation algorithms, pencil beam (PB), collapsed cone (CC), and Monte-Carlo (MC), provided by our planning system were compared to assess their impact on the three-dimensional planning of lung and breast cases. A total of five breast and five lung cases were calculated by using the PB, CC, and MC algorithms. Planning treatment volume (PTV) and organs at risk (OARs) delineations were performed according to our institution's protocols on the Oncentra MasterPlan image registration module, on 0.3-0.5 cm computed tomography (CT) slices taken under normal respiration conditions. Intensity-modulated radiation therapy (IMRT) plans were calculated with the three algorithms for each patient. The plans were conducted on the Oncentra MasterPlan (PB and CC) and CMS Monaco (MC) treatment planning systems for 6 MV. The plans were compared in terms of the dose distribution in the target, the OAR volumes, and the monitor units (MUs). Furthermore, absolute dosimetry was measured using a three-dimensional diode array detector (ArcCHECK) to evaluate the dose differences in a homogeneous phantom. Comparing the dose distributions planned by using the PB, CC, and MC algorithms, the PB algorithm provided adequate coverage of the PTV. The MUs calculated using the PB algorithm were less than those calculated using the other algorithms. The MC algorithm showed the highest accuracy in terms of the absolute dosimetry. Differences were found when comparing the calculation algorithms. The PB algorithm estimated higher doses for the target than the CC and the MC algorithms; in fact, the PB algorithm overestimated the dose compared with those calculated by using the CC and the MC algorithms. The MC algorithm showed better accuracy than the other algorithms.
Institute of Scientific and Technical Information of China (English)
马苗; 刘艳丽
2012-01-01
Aiming at the performance comparison of swarm intelligence optimization algorithms, for which qualified research findings are lacking, we constructed a platform for comparing the performance of such algorithms in which digital images serve as the living environment of the swarms. We then proposed novel performance evaluation criteria, the convergence relational degree and the convergence area, based on the changes of the best individual. Specifically, we compared and tested the performances of several swarm intelligence optimization algorithms, such as the genetic algorithm (GA), particle swarm optimization (PSO) algorithm, artificial fish swarm (AFS) algorithm, bacterial foraging (BF) algorithm and artificial bee colony (ABC) algorithm. Experimental results showed that the platform and criteria of performance evaluation proposed in this paper can be effectively used to compare the capability of optimization search under different mechanisms.
Energy Technology Data Exchange (ETDEWEB)
Ortiz J, J. [Instituto Nacional de Investigaciones Nucleares, Depto. Sistemas Nucleares, A.P. 18-1027, 11801 Mexico D.F. (Mexico); Requena, I. [Universidad de Granada (Spain)
2002-07-01
In this work the results of a genetic algorithm (GA) and a recurrent multi-state neural network (RNRME) for optimizing the fuel reload of 5 cycles of the Laguna Verde nuclear power plant (CNLV) are presented. The fuel reloads obtained by both methods are compared, and it was observed that the RNRME creates better fuel distributions than the GA. Moreover, a comparison of the utility of using one technique or the other is made. (Author)
Constructing a Scheduling Algorithm For Multidirectional Elevators
Edlund, Joakim; Berntsson, Fredrik
2015-01-01
With this thesis we aim to create an efficient scheduling algorithm for elevators that can move in multiple directions, and to establish if and when the algorithm is efficient in comparison to algorithms constructed for traditional elevators. To measure efficiency, a simulator is constructed to simulate an elevator system implementing different algorithms. Because of the challenge of constructing a simulator, and since we did not find either a simulator or any algorithms for use in mult...
Institute of Scientific and Technical Information of China (English)
马超
2012-01-01
A comparative approach was used to evaluate the performance of the genetic algorithm against the Dijkstra algorithm when solving the shortest path problem in a dynamic-weight system. Both algorithms were applied to the same actual game model to test their stability, intelligence, and time complexity; the game model simulates many kinds of dynamic-weight systems. To make the genetic algorithm more reliable, its mutation process was optimized for faster convergence and higher reliability. The experimental data show that the genetic algorithm's score on each map, as well as the time it uses, is generally higher than the Dijkstra algorithm's. The experiment concludes that the stability and expected results of the genetic algorithm are clearly better than those of the Dijkstra algorithm for the shortest path problem in a dynamic-weight system, but that its time complexity is higher.
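The baseline in the comparison above is Dijkstra's algorithm. A minimal sketch, on an illustrative graph rather than the paper's game model, might look like:

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source over non-negative edge weights.
    graph: dict mapping node -> list of (neighbor, weight) pairs."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale heap entry; a shorter path was already found
        for v, w in graph[u]:
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

# Small map-like graph; in a dynamic-weight system the weights would be
# refreshed and the search rerun.
graph = {
    "A": [("B", 2), ("C", 5)],
    "B": [("A", 2), ("C", 1), ("D", 4)],
    "C": [("A", 5), ("B", 1), ("D", 1)],
    "D": [("B", 4), ("C", 1)],
}
print(dijkstra(graph, "A"))  # {'A': 0, 'B': 2, 'C': 3, 'D': 4}
```

The GA side of the comparison would instead evolve candidate paths, which trades the guaranteed optimality above for adaptability when weights change.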
An inversion algorithm for general tridiagonal matrix
Institute of Scientific and Technical Information of China (English)
Rui-sheng RAN; Ting-zhu HUANG; Xing-ping LIU; Tong-xiang GU
2009-01-01
An algorithm for the inverse of a general tridiagonal matrix is presented. For a tridiagonal matrix having the Doolittle factorization, an inversion algorithm is established. The algorithm is then generalized to deal with a general tridiagonal matrix without any restriction. Comparison with other methods is provided, indicating the low computational complexity of the proposed algorithm and its applicability to general tridiagonal matrices.
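The abstract does not reproduce the authors' Doolittle-based construction. As an illustration of recurrence-based tridiagonal inversion, here is a sketch using the well-known θ/φ continuant recurrences (Usmani's formula, an assumption standing in for the paper's algorithm):

```python
def tridiag_inverse(a, b, c):
    """Inverse of a tridiagonal matrix via continuant recurrences.
    a: main diagonal (length n), b: superdiagonal (n-1), c: subdiagonal (n-1).
    Assumes all leading principal minors (theta values) are nonzero."""
    n = len(a)
    # Forward continuants: theta[i] is the determinant of the leading i x i block.
    theta = [0.0] * (n + 1)
    theta[0], theta[1] = 1.0, a[0]
    for i in range(2, n + 1):
        theta[i] = a[i-1] * theta[i-1] - b[i-2] * c[i-2] * theta[i-2]
    # Backward continuants for the trailing blocks.
    phi = [0.0] * (n + 2)
    phi[n+1], phi[n] = 1.0, a[n-1]
    for i in range(n - 1, 0, -1):
        phi[i] = a[i-1] * phi[i+1] - b[i-1] * c[i-1] * phi[i+2]
    det = theta[n]
    inv = [[0.0] * n for _ in range(n)]
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            if i <= j:
                prod = 1.0
                for k in range(i, j):
                    prod *= b[k-1]
                inv[i-1][j-1] = (-1) ** (i + j) * prod * theta[i-1] * phi[j+1] / det
            else:
                prod = 1.0
                for k in range(j, i):
                    prod *= c[k-1]
                inv[i-1][j-1] = (-1) ** (i + j) * prod * theta[j-1] * phi[i+1] / det
    return inv

# Symmetric example: T = [[2,1,0],[1,2,1],[0,1,2]]
inv = tridiag_inverse([2.0, 2.0, 2.0], [1.0, 1.0], [1.0, 1.0])
print([[round(x, 3) for x in row] for row in inv])
# [[0.75, -0.5, 0.25], [-0.5, 1.0, -0.5], [0.25, -0.5, 0.75]]
```

Each recurrence is O(n), so the dominant cost is writing out the n² entries of the inverse, consistent with the low complexity claimed above.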
Comparison of Classification Algorithms in a Coal Data Analysis System
Institute of Scientific and Technical Information of China (English)
莫洪武; 万荣泽
2013-01-01
Coal mining requires analysis and research on the collected exploration data, in order to mine more valuable information from them. Focusing on several data classification algorithms, this paper researches and analyzes their role in coal exploration data analysis. By studying and comparing the performance of multiple classification algorithms in this data analysis work, we identify the classification algorithms that can process exploration data more effectively.
Institute of Scientific and Technical Information of China (English)
李林
2012-01-01
This paper analyzes the design ideas and features of two clustering algorithms: GN clustering for network community division and AP clustering from pattern recognition. Taking library borrowing records as an example, a customer-clustering data set was constructed and the two algorithms were compared. The results indicate that the two algorithms reveal the structure of the customer group from different angles: the outcome of GN clustering is close to the customers' macroscopic classification, while the result of AP clustering reflects the distribution of customer demand. The effect of the algorithms' design principles on the experimental results is also discussed. This work provides a useful reference for the design and improvement of clustering algorithms and for data mining of customer behavior.
Energy Technology Data Exchange (ETDEWEB)
Llacer Martos, S.; Herraiz Lablanca, M. D.; Puchal Ane, R.
2011-07-01
This paper compares the image quality obtained with each of the algorithms and evaluates their running times, in order to optimize the choice of algorithm by taking into account both the quality of the reconstructed image and the time spent on the reconstruction.
Comparison of Two Fast Space Vector Pulse Width Modulation Algorithms
Institute of Scientific and Technical Information of China (English)
范必双; 谭冠政; 樊绍胜; 王玉凤
2014-01-01
A comparison is made between two fast space vector pulse width modulation (SVPWM) algorithms: the 60° non-orthogonal coordinate SVPWM and the 45° rotating coordinate SVPWM, intended as a theoretical and practical reference for engineers choosing between them. New general methods of the 60° and 45° algorithms for any-level SVPWM are also provided, which need only the angle and the modulation depth to generate and arrange the final vector switching sequence. The analysis shows that the latter offers better flexibility with fewer calculations and is well suited for digital implementation. Both methods are implemented in a field programmable gate array (FPGA) with the very high speed integrated circuit hardware description language (VHDL) and compared on the basis of implementation complexity and logic resources required. Simulation results show the overwhelming advantages of the 45° rotating coordinate SVPWM in brevity and efficiency. Finally, experimental test results for a three-level neutral-point-clamped (NPC) inverter are presented for both general algorithms.
Institute of Scientific and Technical Information of China (English)
李红蕾; 袁召; 冯英; 喻新林; 李辉; 何俊佳; 潘垣
2015-01-01
To implement controlled fault interruption, it is necessary to restore the waveform of the fault current and perform zero-point prediction rapidly and accurately. For this purpose, researchers at home and abroad have proposed various algorithms, such as the safe-point algorithm, the adaptive self-checking algorithm, and the improved half-wave Fourier algorithm; however, no detailed research concretely comparing these three common algorithms has been reported to date. On the basis of presenting the mathematical model of each of the three algorithms, and according to the features of short-circuit current, a Matlab-based verification of the three algorithms is performed on calculation examples. Considering the effects of harmonics and the DC component of the fault current on the calculation results, a simulation contrast of the computational results of the three algorithms is conducted. The results of both waveform fitting and parameter calculation indicate that harmonics affect the three algorithms differently. Based on the simulation results and a theoretical analysis of the three mathematical models, the applicable scope of each algorithm is finally obtained.
Pirotta, Martin; Aquilina, Dorothy; Bhikha, Tilluck; Georg, Dietmar
2005-01-01
The ESTRO formalism for monitor unit (MU) calculations was evaluated and implemented to replace a previous methodology based on dosimetric data measured in a full-scatter phantom. This traditional method relies on data normalised at the depth of dose maximum (Zm), as well as on the utilisation of the BJR 25 table for the conversion of rectangular fields into equivalent square fields. The treatment planning system (TPS) was subsequently updated to reflect the new beam data normalised at a depth ZR of 10 cm. Comparisons were then carried out between the ESTRO formalism, the Clarkson-based dose calculation algorithm on the TPS (with beam data normalised at Zm and ZR), and the traditional "full-scatter" methodology. All methodologies except the "full-scatter" methodology separated head-scatter from phantom-scatter effects, and none of the methodologies except the ESTRO formalism utilised wedge depth dose information for calculations. The accuracy of MU calculations was verified against measurements in a homogeneous phantom for square and rectangular open and wedged fields, as well as blocked open and wedged fields, at 5, 10, and 20 cm depths, under fixed SSD and isocentric geometries for 6 and 10 MV. Overall, the ESTRO formalism showed the most accurate performance, with the root mean square (RMS) error with respect to measurements remaining below 1% even for the most complex beam set-ups investigated. The RMS error for the TPS deteriorated with the introduction of a wedge, with a worse RMS error for the beam data normalised at Zm (4% at 6 MV and 1.6% at 10 MV) than at ZR (1.9% at 6 MV and 1.1% at 10 MV). The further addition of blocking had only a marginal impact on the accuracy of this methodology. The "full-scatter" methodology showed a loss in accuracy for calculations involving either wedges or blocking, and performed worst for blocked wedged fields (RMS errors of 7.1% at 6 MV and 5% at 10 MV). The origins of these discrepancies were quantified and the
Fitness inheritance in the Bayesian optimization algorithm
Pelikan, Martin; Sastry, Kumara
2004-01-01
This paper describes how fitness inheritance can be used to estimate fitness for a proportion of newly sampled candidate solutions in the Bayesian optimization algorithm (BOA). The goal of estimating fitness for some candidate solutions is to reduce the number of fitness evaluations for problems where fitness evaluation is expensive. Bayesian networks used in BOA to model promising solutions and generate the new ones are extended to allow not only for modeling and sampling candidate solutions...
Building Better Nurse Scheduling Algorithms
Aickelin, Uwe
2008-01-01
The aim of this research is twofold: Firstly, to model and solve a complex nurse scheduling problem with an integer programming formulation and evolutionary algorithms. Secondly, to detail a novel statistical method of comparing and hence building better scheduling algorithms by identifying successful algorithm modifications. The comparison method captures the results of algorithms in a single figure that can then be compared using traditional statistical techniques. Thus, the proposed method of comparing algorithms is an objective procedure designed to assist in the process of improving an algorithm. This is achieved even when some results are non-numeric or missing due to infeasibility. The final algorithm outperforms all previous evolutionary algorithms, which relied on human expertise for modification.
DEFF Research Database (Denmark)
Olsen, Emil; Boye, Jenny Katrine; Pfau, Thilo;
2012-01-01
Motion capture is frequently used over ground in equine locomotion science to study kinematics. Determination of gait events (hoof-on/off and stance) without force plates is essential to cut the data into strides. The lack of comparative evidence emphasises the need to compare existing algorithms and use robust and validated algorithms. The objective of this study is to compare accuracy (bias) and precision (SD) for five published human and equine motion capture foot-on/off and stance phase detection algorithms during walk. Six horses were walked over 8 seamlessly embedded force plates...
General cardinality genetic algorithms
Koehler; Bhattacharyya; Vose
1997-01-01
A complete generalization of the Vose genetic algorithm model from the binary to the higher cardinality case is provided. Boolean AND and EXCLUSIVE-OR operators are replaced by multiplication and addition over rings of integers. Walsh matrices are generalized with finite Fourier transforms for higher cardinality usage. A comparison of results to the binary case is provided. PMID:10021767
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
Energy Technology Data Exchange (ETDEWEB)
Pennington, A; Selvaraj, R; Kirkpatrick, S; Oliveira, S [21st Century Oncology, Deerfield Beach, FL (United States); Leventouri, T [Florida Atlantic University, Boca Raton, FL (United States)
2014-06-01
Purpose: The latest publications indicate that the Ray Tracing (RT) algorithm significantly overestimates the dose delivered as compared to the Monte Carlo (MC) algorithm. The purpose of this study is to quantify this overestimation and to identify significant correlations between the RT and MC calculated dose distributions. Methods: Preliminary results are based on 50 preexisting RT algorithm dose optimization and calculation treatment plans prepared on the Multiplan treatment planning system (Accuray Inc., Sunnyvale, CA). The analysis will be expanded to include 100 plans. These plans are recalculated using the MC algorithm, with high resolution and 1% uncertainty. The geometry and number of beams for a given plan, as well as the number of monitor units, are constant for the calculations with both algorithms, and normalized differences are compared. Results: MC calculated doses were significantly smaller than RT doses. The D95 of the PTV was 27% lower for the MC calculation. The GTV and PTV mean coverage were 13% and 39% less for the MC calculation. The first parameter of conformality, defined as the ratio of the prescription isodose volume to the PTV volume, was on average 1.18 for RT and 0.62 for MC. Maximum doses delivered to OARs were reduced in the MC plans. The doses for 1000 and 1500 cc of total lung minus PTV were reduced by 39% and 53%, respectively, for the MC plans. The correlation of the ratio of air in the PTV to the PTV with the difference in PTV coverage had a coefficient of −0.54. Conclusion: The preliminary results confirm that the RT algorithm significantly overestimates the doses delivered, confirming previous analyses. Finally, subdividing the data into different size regimes increased the correlation for the smaller PTVs, indicating that the improvement of the MC algorithm versus the RT algorithm depends on the size of the PTV.
Indian Academy of Sciences (India)
SHIDROKH GOUDARZI; WAN HASLINA HASSAN; MOHAMMAD HOSSEIN ANISI; SEYED AHMAD SOLEYMANI
2016-07-01
Genetic algorithms (GAs) and simulated annealing (SA) have emerged as leading methods for search and optimization problems in heterogeneous wireless networks. In this paradigm, various access technologies need to be interconnected; thus, vertical handovers are necessary for seamless mobility. In this paper, a hybrid algorithm for real-time vertical handover using different objective functions is presented to find the optimal network to connect with, offering a good quality of service in accordance with the user's preferences. The characteristics of current mobile devices recommend using fast and efficient algorithms to provide solutions near real-time. These constraints moved us to develop intelligent algorithms that avoid slow and massive computations, specifically to solve two major problems in GA optimization, premature convergence and a slow convergence rate, by applying simulated annealing in the population-merging phase of the search. The hybrid algorithm was expected to improve on the pure GA in two ways: improved solutions for a given number of evaluations, and more stability over many runs. This paper compares the formulation and results of four recent optimization algorithms: artificial bee colony (ABC), genetic algorithm (GA), differential evolution (DE), and particle swarm optimization (PSO). Moreover, a cost function is used to sustain the desired QoS during the transition between networks, measured in terms of bandwidth, BER, ABR, SNR, and monetary cost. Simulation results indicated that choosing the SA rules would minimize the cost function, and that the GA-SA algorithm could decrease the number of unnecessary handovers and thereby prevent the 'Ping-Pong' effect.
Electoral Systems and Candidate Selection
Hazan, Reuven Y.; Voerman, Gerrit
2006-01-01
Electoral systems at the national level and candidate selection methods at the party level are connected, maybe not causally but they do influence each other. More precisely, the electoral system constrains and conditions the parties' menu of choices concerning candidate selection. Moreover, in ligh
Directory of Open Access Journals (Sweden)
Pongpan Nakkaew
2016-06-01
In manufacturing processes where efficiency is crucial in order to remain competitive, the flowshop is a common configuration in which machines are arranged in series and products are produced through the stages one by one. In certain production processes, the machines are frequently configured so that each production stage may contain multiple processing units in parallel (hybrid flowshop). Moreover, along with precedence conditions, sequence-dependent setup times may exist. Finally, when there is no buffer, a machine is said to be blocked if the next stage to handle its output is occupied. For such an NP-hard problem, referred to as the Blocking Hybrid Flowshop Scheduling Problem with Sequence-Dependent Setup/Changeover Times, it is usually not possible to find the exact best solution for optimization objectives such as minimization of the overall production time, so it is usually solved by approximate algorithms such as metaheuristics. In this paper, we comparatively investigate the effectiveness of two approaches: a Genetic Algorithm (GA) and an Artificial Bee Colony (ABC) algorithm. GA is inspired by the process of natural selection. ABC, in a similar manner, resembles the way different types of bees perform specific functions and work collectively to find their food by means of division of labor. Additionally, we apply an algorithm to improve the GA and ABC algorithms so that they can take advantage of the parallel processing resources of modern multi-core processors, while eliminating the need to screen for the optimal parameters of both algorithms in advance.
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
An Improved Weighted Clustering Algorithm in MANET
Institute of Scientific and Technical Information of China (English)
WANG Jin; XU Li; ZHENG Bao-yu
2004-01-01
The original clustering algorithms in Mobile Ad hoc Networks (MANET) are first analyzed in this paper. Based on this analysis, an Improved Weighted Clustering Algorithm (IWCA) is proposed. The principle and steps of our algorithm are then explained in detail, and a comparison is made between the original algorithms and our improved method in terms of average cluster number, topology stability, clusterhead load balance, and network lifetime. The experimental results show that our improved algorithm has the best performance on average.
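Weighted clustering schemes of this family elect clusterheads by a combined weight over several node metrics. A minimal sketch follows; the four factors (degree deviation, distance sum, mobility, battery drain) and the coefficients are illustrative assumptions in the spirit of such algorithms, not the paper's exact IWCA formulation:

```python
def clusterhead_weights(nodes, w=(0.7, 0.2, 0.05, 0.05), ideal_degree=3):
    """Combined-weight clusterhead election: lower weight -> better candidate.
    nodes: dict name -> (degree, distance_sum, mobility, battery_drain).
    Coefficients w and the ideal degree are illustrative, not from the paper."""
    w1, w2, w3, w4 = w
    scores = {}
    for name, (degree, dist_sum, mobility, drain) in nodes.items():
        scores[name] = (w1 * abs(degree - ideal_degree)
                        + w2 * dist_sum + w3 * mobility + w4 * drain)
    # The node with the minimum combined weight is elected clusterhead.
    return min(scores, key=scores.get), scores

nodes = {"a": (3, 1.0, 0.1, 0.1),   # well-connected, slow-moving, fresh battery
         "b": (1, 5.0, 2.0, 1.0)}   # isolated, fast-moving, drained
head, scores = clusterhead_weights(nodes)
print(head)  # a
```

Tuning the coefficients trades off the comparison criteria listed in the abstract, e.g. raising the mobility weight favors topology stability.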
Blind Alley Aware ACO Routing Algorithm
Yoshikawa, Masaya; Otani, Kazuo
2010-10-01
The routing problem is applied in various engineering fields, and many researchers study it. In this paper, we propose a new routing algorithm based on Ant Colony Optimization. The proposed algorithm introduces a tabu search mechanism to escape blind alleys; thus, it can find the shortest route even if the map data contains blind alleys. Experiments using map data prove its effectiveness in comparison with the Dijkstra algorithm, the most popular conventional routing algorithm.
Tel, G.
1993-01-01
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of distri
Multithreaded Implementation of Hybrid String Matching Algorithm
Directory of Open Access Journals (Sweden)
Akhtar Rasool
2012-03-01
Reading and taking reference from many books and articles, and then analyzing the naive algorithm, the Boyer-Moore algorithm, the Knuth-Morris-Pratt (KMP) algorithm, and a variety of improved algorithms, we summarize the advantages and disadvantages of these pattern matching algorithms. On this basis, a new algorithm, the Multithreaded Hybrid algorithm, is introduced. The algorithm draws on the Boyer-Moore algorithm, the KMP algorithm, and the thinking behind the improved algorithms: it utilizes the last character of the window, the next character, and side-to-side comparison, yielding a new hybrid pattern matching algorithm. It adjusts the comparison direction and order so as to maximize the shift distance at each step and reduce the pattern matching time. The algorithm reduces the number of comparisons, greatly reduces the number of pattern shifts, and improves matching efficiency. The multithreaded implementation of the hybrid pattern matching algorithm performs parallel string searching on different text data by executing a number of threads simultaneously. This approach is advantageous over all other string pattern matching algorithms in terms of time complexity. This again improves the overall string matching efficiency.
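The key shift idea the hybrid builds on can be illustrated with the Horspool simplification of the Boyer-Moore bad-character rule, a hedged sketch, not the paper's hybrid algorithm:

```python
def horspool_search(text, pattern):
    """Return all start indices of pattern in text using the
    Boyer-Moore-Horspool bad-character shift."""
    m, n = len(pattern), len(text)
    if m == 0 or m > n:
        return []
    # Shift table: distance from a character's last occurrence (excluding
    # the final position) to the end of the pattern.
    shift = {ch: m - i - 1 for i, ch in enumerate(pattern[:-1])}
    hits, pos = [], 0
    while pos <= n - m:
        j = m - 1
        # Compare right to left, as in Boyer-Moore.
        while j >= 0 and text[pos + j] == pattern[j]:
            j -= 1
        if j < 0:
            hits.append(pos)
        # Shift by the table entry for the character under the window's end.
        pos += shift.get(text[pos + m - 1], m)
    return hits

print(horspool_search("abacaba", "aba"))      # [0, 4]
print(horspool_search("hello world", "world"))  # [6]
```

A multithreaded variant as described above would partition the text (with overlaps of length m-1 at the seams) and run this search on each chunk concurrently.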
A Clustal Alignment Improver Using Evolutionary Algorithms
DEFF Research Database (Denmark)
Thomsen, Rene; Fogel, Gary B.; Krink, Thimo
2002-01-01
Multiple sequence alignment (MSA) is a crucial task in bioinformatics. In this paper we extended previous work with evolutionary algorithms (EA) by using MSA solutions obtained from the well-known Clustal V algorithm as a candidate solution seed of the initial EA population. Our results clearly show...
Beckers, M L; Buydens, L M; Pikkemaat, J A; Altona, C
1997-01-01
The three-dimensional spatial structure of a methylene-acetal-linked thymine dimer present in a 10 basepair (bp) sense-antisense DNA duplex was studied with a genetic algorithm designed to interpret NOE distance restraints. Trial solutions were represented by torsion angles. This means that bond angles for the dimer trial structures are kept fixed during the genetic algorithm optimization. Bond angle values were extracted from a 10 bp sense-antisense duplex model that was subjected to energy minimization by means of a modified AMBER force field. A set of 63 proton-proton distance restraints defining the methylene-acetal-linked thymine dimer was available. The genetic algorithm minimizes the difference between distances in the trial structures and distance restraints. A large conformational search space could be covered in the genetic algorithm optimization by allowing a wide range of torsion angles. The genetic algorithm optimization in all cases led to one family of structures. This family of the methylene-acetal-linked thymine dimer in the duplex differs from the family that was suggested from distance geometry calculations. It is demonstrated that the bond angle geometry around the methylene-acetal linkage plays an important role in the optimization. PMID:9081542
Comparison of Several Pattern Matching Algorithms Based on Snort
Institute of Scientific and Technical Information of China (English)
王敏杰; 朱连轩
2011-01-01
The string pattern matching algorithm is key to intrusion detection. Several algorithms, including the BM, BMG, AC, and AC-BM algorithms, are discussed. To test the performance of these four algorithms, their running time and memory consumption were measured on the Snort intrusion detection system using Snort's pattern matching framework. The results show that when the number of patterns is large, the AC and AC-BM algorithms run faster than BM and BMG but consume relatively more memory; when the number of patterns is small, the BM and BMG algorithms outperform AC and AC-BM.
Institute of Scientific and Technical Information of China (English)
张斐; 谭军; 谢竞博
2009-01-01
This paper studies the main prediction models and algorithms for transcription factor binding sites (TFBS). Three representative algorithms based on regulatory element prediction, MEME, Gibbs sampling, and Weeder, are compared by predicting on the Arabidopsis thaliana genome. The comparison shows that the Gibbs sampling and Weeder algorithms are highly efficient at predicting long and short motifs. The MEME algorithm is analyzed in depth, an optimized method for motif finding that combines MEME with the other algorithms is proposed, and experiments verify that this method can effectively improve prediction efficiency.
Schmitt, Joseph R; Fischer, Debra A; Jek, Kian J; Moriarty, John C; Boyajian, Tabetha S; Schwamb, Megan E; Lintott, Chris; Smith, Arfon M; Parrish, Michael; Schawinski, Kevin; Lynn, Stuart; Simpson, Robert; Omohundro, Mark; Winarski, Troy; Goodman, Samuel J; Jebson, Tony; Lacourse, Daryll
2013-01-01
We report the discovery of 14 new transiting planet candidates in the Kepler field from the Planet Hunters citizen science program. None of these candidates overlap with Kepler Objects of Interest (KOIs), and five of the candidates were missed by the Kepler Transit Planet Search (TPS) algorithm. The new candidates have periods ranging from 124-904 days, eight residing in their host star's habitable zone (HZ) and two (now) in multiple planet systems. We report the discovery of one more addition to the six planet candidate system around KOI-351, marking the first seven planet candidate system from Kepler. Additionally, KOI-351 bears some resemblance to our own solar system, with the inner five planets ranging from Earth to mini-Neptune radii and the outer planets being gas giants; however, this system is very compact, with all seven planet candidates orbiting $\\lesssim 1$ AU from their host star. We perform a numerical integration of the orbits and show that the system remains stable for over 100 million years....
Orbital objects detection algorithm using faint streaks
Tagawa, Makoto; Yanagisawa, Toshifumi; Kurosaki, Hirohisa; Oda, Hiroshi; Hanada, Toshiya
2016-02-01
This study proposes an algorithm to detect orbital objects that are small or moving at high apparent velocities from optical images by utilizing their faint streaks. In the conventional object-detection algorithm, a high signal-to-noise-ratio (e.g., 3 or more) is required, whereas in our proposed algorithm, the signals are summed along the streak direction to improve object-detection sensitivity. Lower signal-to-noise ratio objects were detected by applying the algorithm to a time series of images. The algorithm comprises the following steps: (1) image skewing, (2) image compression along the vertical axis, (3) detection and determination of streak position, (4) searching for object candidates using the time-series streak-position data, and (5) selecting the candidate with the best linearity and reliability. Our algorithm's ability to detect streaks with signals weaker than the background noise was confirmed using images from the Australia Remote Observatory.
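Steps (1)-(3) of the pipeline, shearing the image along an assumed streak direction and summing columns so a faint streak collapses into one bright column, can be sketched on synthetic data (an illustration, not the authors' code):

```python
def streak_response(image, slope):
    """Shear rows by `slope` (pixels per row) and sum along the vertical
    axis, so that a streak aligned with the shear collapses into a single
    bright column whose sum rises above the per-pixel noise level."""
    rows, cols = len(image), len(image[0])
    sums = [0.0] * cols
    for r, row in enumerate(image):
        offset = round(slope * r)  # horizontal shift applied to this row
        for c, v in enumerate(row):
            cc = c - offset
            if 0 <= cc < cols:
                sums[cc] += v
    return sums

# Synthetic 8x8 frame: a faint diagonal streak (signal 1.0 at column r of
# row r) on a zero background.
img = [[0.0] * 8 for _ in range(8)]
for r in range(8):
    img[r][r] = 1.0
resp = streak_response(img, slope=1.0)
print(resp.index(max(resp)), max(resp))  # column 0 accumulates the full streak: 0 8.0
```

In practice the streak direction is unknown, so the skew-and-sum step is repeated over a grid of candidate slopes, and time-series consistency of the detected column (steps 4-5) rejects noise-induced candidates.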
Analysis and Comparison of Four Ensemble Pulsar Time Algorithms
Institute of Scientific and Technical Information of China (English)
仲崇霞; 杨廷高
2009-01-01
Pulsars, rapidly rotating neutron stars, have extremely stable rotation periods. The pulsar time defined by a single pulsar is influenced by several noise sources. To weaken these influences, ensemble analysis is used to obtain an ensemble pulsar time, so that its long-term stability can be improved. In this paper, four algorithms, the classical weighted average algorithm, the wavelet analysis algorithm, the Wiener filtration algorithm, and Wiener filtration in the wavelet domain, are applied to synthesize an ensemble pulsar time. The data used are the residuals of two millisecond pulsars (PSR B1855+09 and PSR B1937+21) observed at Arecibo Observatory. First, the classical weighted average algorithm was developed by Petit: only one weight can be chosen over the whole observation interval for each single pulsar time, and the criterion for the weight is the stability σ_x²(T) of each single pulsar time. Second, an ensemble pulsar time algorithm is developed based on wavelet multi-resolution analysis and wavelet packet analysis: the observation residuals of the pulsars are decomposed, the components in different frequency bands are extracted, and the weights are chosen according to the stability of each component, denoted by its wavelet variance. Third, the pulsar timing residuals are caused by the reference atomic clock and by the pulsar itself, and these contributions are uncorrelated. Considering this lack of correlation and the properties of Wiener filtration, we put forward an ensemble pulsar time algorithm based on Wiener filtration, in which the errors from the atomic clock and from the pulsar itself can be separated in the post-fit pulsar timing residuals; the atomic-scale component is filtered from the pulsar phase variations and the remainder is integrated into the ensemble pulsar time, with weights chosen according to the root mean square. Fourth, the wavelet analysis and the
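The classical weighted-average step can be sketched as follows; using inverse-variance weights is an illustrative choice consistent with the stability-based weighting described above, not Petit's exact recipe:

```python
def ensemble_residual(residuals, variances):
    """Classical weighted-average ensemble pulsar time sketch: at each epoch
    the single-pulsar timing residuals are averaged with weights inversely
    proportional to each pulsar's stability variance."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    n_epochs = len(residuals[0])
    return [sum(w * r[k] for w, r in zip(weights, residuals)) / total
            for k in range(n_epochs)]

# Two hypothetical pulsars over two epochs; the noisier pulsar (variance 3)
# carries a third of the weight of the stable one.
ens = ensemble_residual([[1.0, 2.0], [3.0, 6.0]], [1.0, 3.0])
print(ens)  # approximately [1.5, 3.0]
```

The wavelet-domain variants in the abstract differ in that the weighting is applied per frequency band rather than with a single weight over the whole interval.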
Fast Algorithm for N-2 Contingency Problem
Turitsyn, K. S.; Kaplunovich, P. A.
2013-01-01
We present a novel selection algorithm for the N-2 contingency analysis problem. The algorithm is based on iterative bounding of line outage distribution factors and successive pruning of the set of contingency pair candidates. The selection procedure is non-heuristic and is certified to identify all events that lead to thermal constraint violations in the DC approximation. The complexity of the algorithm is O(N²), comparable to the complexity of the N-1 contingency problem. We validat...
An Improved Ant Colony Routing Algorithm for WSNs
Tan Zhi; Zhang Hui
2015-01-01
The ant colony algorithm is a classical routing algorithm, used in a variety of applications because it is economical and self-organizing. However, the routing algorithm expends a huge amount of energy at the beginning. In this paper, based on the idea of Dijkstra's algorithm, an improved ant colony algorithm is proposed to balance the energy consumption of the network. Through simulation and comparison with the basic ant colony algorithm, it is obvious that the improved algorithm can effectively...
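The Dijkstra step that the improved algorithm borrows can be sketched as follows. This is a standard shortest-path implementation for illustration only; how the paper seeds the ant colony's pheromone from these costs is not specified here, and the graph shown is hypothetical.

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path costs from `source` over weighted links.
    `graph` maps node -> {neighbour: link_cost}."""
    dist = {source: 0.0}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue  # stale queue entry
        for v, w in graph.get(u, {}).items():
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist

g = {"A": {"B": 1, "C": 4}, "B": {"C": 2}, "C": {}}
costs = dijkstra(g, "A")
```

Link costs could encode per-hop energy, so the initial routes already favour energy-cheap paths before the ants refine them.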
Genetic algorithms as global random search methods
Peck, Charles C.; Dhawan, Atam P.
1995-01-01
Genetic algorithm behavior is described in terms of the construction and evolution of the sampling distributions over the space of candidate solutions. This novel perspective is motivated by analysis indicating that the schema theory is inadequate for completely and properly explaining genetic algorithm behavior. Based on the proposed theory, it is argued that the similarities of candidate solutions should be exploited directly, rather than encoding candidate solutions and then exploiting their similarities. Proportional selection is characterized as a global search operator, and recombination is characterized as the search process that exploits similarities. Sequential algorithms and many deletion methods are also analyzed. It is shown that by properly constraining the search breadth of recombination operators, convergence of genetic algorithms to a global optimum can be ensured.
Performance comparison of two improved variable-step MPPT algorithms
Institute of Scientific and Technical Information of China (English)
潘逸菎; 窦伟
2016-01-01
Photovoltaic array maximum power point tracking (MPPT) is one of the key technologies in photovoltaic power generation. In this paper, building on recent academic research, the two most widely applied MPPT techniques, the variable-step incremental conductance algorithm and the perturb-and-observe algorithm, were optimized, and their advantages and disadvantages are compared in detail. Simulation and experimental results show that both improved algorithms achieve fast and accurate maximum power point tracking; the improved perturb-and-observe algorithm, being simpler, is more suitable for practical products.
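A single iteration of a variable-step perturb-and-observe tracker can be sketched as below. This is a generic illustration, not the paper's exact rule: the step is scaled by |dP/dV| (a common variable-step heuristic), and the gains `step_base` and `k` are made-up values.

```python
def perturb_and_observe(power, voltage, prev_power, prev_voltage,
                        step_base=0.5, k=0.05):
    """One variable-step P&O iteration. Returns the voltage
    perturbation to apply next: move toward higher power, with a step
    that shrinks near the maximum power point (where dP/dV -> 0)."""
    dP = power - prev_power
    dV = voltage - prev_voltage
    if dV == 0:
        return step_base  # no observation yet; take a default step
    slope = dP / dV
    step = min(step_base, k * abs(slope))  # small steps near the MPP
    return step if slope > 0 else -step    # climb the P-V curve
```

Left of the MPP the slope is positive and the controller perturbs the voltage upward; right of it, downward.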
Energy Technology Data Exchange (ETDEWEB)
Abramowicz, H. [Tel Aviv University (Israel). Raymond and Beverly Sackler Faculty of Exact Sciences, School of Physics; Max Planck Inst., Munich (Germany); Abt, I. [Max-Planck-Institut fuer Physik, Muenchen (Germany); Adamczyk, L. [AGH-University of Science and Technology, Cracow (PL). Faculty of Physics and Applied Computer Science] (and others)
2010-03-15
For the first time, differential inclusive-jet cross sections have been measured in neutral current deep inelastic ep scattering using the anti-k_T and SIScone algorithms. The measurements were made for boson virtualities Q^2 > 125 GeV^2 with the ZEUS detector at HERA using an integrated luminosity of 82 pb^-1 and the jets were identified in the Breit frame. The performance and suitability of the jet algorithms for their use in hadron-like reactions were investigated by comparing the measurements to those performed with the k_T algorithm. Next-to-leading-order QCD calculations give a good description of the measurements. Measurements of the ratios of cross sections using different jet algorithms are also presented; the measured ratios are well described by calculations including up to O(α_s^3) terms. Values of α_s(M_Z) were extracted from the data; the results are compatible with and have similar precision to the value extracted from the k_T analysis. (orig.)
Othman, Arsalan; Gloaguen, Richard
2015-04-01
Topographic effects and complex vegetation cover hinder lithology classification in mountain regions, not only in field surveys but also in reflectance remote sensing data. The area of interest, Bardi-Zard, is located in NE Iraq. It is part of the Zagros orogenic belt, where seven lithological units outcrop, and it is known for its chromite deposits. The aim of this study is to compare three machine learning algorithms (MLAs): Maximum Likelihood (ML), Support Vector Machines (SVM), and Random Forest (RF), for a supervised lithology classification task using Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) satellite data, its derivatives, spatial information (spatial coordinates) and geomorphic data. We emphasize the gain in remote sensing lithological mapping accuracy that arises from integrating geomorphic features and spatial information into the classifications. This study finds that RF outperforms the ML and SVM algorithms in almost all sixteen dataset combinations tested. For the best dataset combination, the overall accuracy of the RF map across all seven classes reaches ~80%, the producer's and user's accuracies are ~73.91% and ~76.09% respectively, and the kappa coefficient is ~0.76. TPI is more effective with the SVM algorithm than with the RF algorithm. This paper demonstrates that adding geomorphic indices such as TPI and spatial information to the dataset increases lithological classification accuracy.
Directory of Open Access Journals (Sweden)
Hajar Abbasi
2015-06-01
Conclusions: An ANN is a powerful method for predicting the Farinograph properties of dough. The performance criteria showed that the GA is more powerful than trial and error in determining the critical parameters of the ANN's structure and in improving its performance. Keywords: Artificial neural network, Genetic algorithm, Rheological characterization, Wheat-flour dough
Masunun, P.; Tangboonduangjit, P.; Dumrongkijudom, N.
2016-03-01
The purpose of this study is to compare the build-up region doses on the breast Rando phantom surface with bolus covering, the doses in the breast Rando phantom, and the doses in the lung (a heterogeneous region), as computed by two algorithms. The AAA in the Eclipse TPS and the collapsed cone convolution (CCC) algorithm in the Pinnacle treatment planning system were used to plan a tangential-field technique with a 6 MV photon beam at a 200 cGy total dose in the breast Rando phantom with bolus covering (5 mm and 10 mm). TLDs were calibrated with Cobalt-60 and used to measure the doses during irradiation. The treatment planning results show that the doses in the build-up region and in the breast phantom were closely matched between the two algorithms, with less than 2% difference. However, the AAA overestimated the dose in the lung (L2), with 13.78% and 6.06% differences at 5 mm and 10 mm bolus thickness respectively, compared with the CCC algorithm. The TLD measurements show underestimates in the build-up region and in the breast phantom, but the doses in the lung (L2) were overestimated compared with the doses in the two treatment plans at both bolus thicknesses.
We analyzed 10 established and 4 new satellite reflectance algorithms for estimating chlorophyll-a (Chl-a) in a temperate reservoir in southwest Ohio using coincident hyperspectral aircraft imagery and dense water truth collected within one hour of image acquisition to develop si...
International Nuclear Information System (INIS)
For the first time, differential inclusive-jet cross sections have been measured in neutral current deep inelastic ep scattering using the anti-kT and SIScone algorithms. The measurements were made for boson virtualities Q2>125 GeV2 with the ZEUS detector at HERA using an integrated luminosity of 82 pb-1 and the jets were identified in the Breit frame. The performance and suitability of the jet algorithms for their use in hadron-like reactions were investigated by comparing the measurements to those performed with the kT algorithm. Next-to-leading-order QCD calculations give a good description of the measurements. Measurements of the ratios of cross sections using different jet algorithms are also presented; the measured ratios are well described by calculations including up to O(αs3) terms. Values of αs(MZ) were extracted from the data; the results are compatible with and have similar precision to the value extracted from the kT analysis.
International Nuclear Information System (INIS)
For the first time, differential inclusive-jet cross sections have been measured in neutral current deep inelastic ep scattering using the anti-kT and SIScone algorithms. The measurements were made for boson virtualities Q2 > 125 GeV2 with the ZEUS detector at HERA using an integrated luminosity of 82 pb-1 and the jets were identified in the Breit frame. The performance and suitability of the jet algorithms for their use in hadron-like reactions were investigated by comparing the measurements to those performed with the kT algorithm. Next-to-leading-order QCD calculations give a good description of the measurements. Measurements of the ratios of cross sections using different jet algorithms are also presented; the measured ratios are well described by calculations including up to O(αs3) terms. Values of αs(MZ) were extracted from the data; the results are compatible with and have similar precision to the value extracted from the kT analysis. (orig.)
Abramowicz, H
2010-01-01
For the first time, differential inclusive-jet cross sections have been measured in neutral current deep inelastic ep scattering using the anti-kT and SIScone algorithms. The measurements were made for boson virtualities Q^2 > 125 GeV^2 with the ZEUS detector at HERA using an integrated luminosity of 82 pb^-1 and the jets were identified in the Breit frame. The performance and suitability of the jet algorithms for their use in hadron-like reactions were investigated by comparing the measurements to those performed with the kT algorithm. Next-to-leading-order QCD calculations give a good description of the measurements. Measurements of the ratios of cross sections using different jet algorithms are also presented; the measured ratios are well described by calculations including up to O(alphas^3) terms. Values of alphas(Mz) were extracted from the data; the results are compatible with and have similar precision to the value extracted from the kT analysis.
Comparison of the Performances of Three Types of Active Noise Control Algorithms
Institute of Scientific and Technical Information of China (English)
陈珏; 玉昊昕; 陈克安
2013-01-01
In order to choose an active noise control (ANC) algorithm appropriately in practical engineering, the performances of three typical ANC algorithms, FxLMS, GSFxAP and FsLMS, were investigated under different conditions through simulations and anechoic chamber experiments, and the conditions for applying each algorithm were studied in depth. It was concluded that if the secondary path is linear and the convergence speed does not need to be very high, the FxLMS algorithm offers the best performance-to-cost ratio; if the noise to be controlled (the primary noise) is non-stationary or a high convergence speed is required, the GSFxAP algorithm is the best choice; and if the correlation between the primary noise and the reference signal is weak, the FsLMS algorithm is the most suitable. These conclusions provide a theoretical basis for choosing ANC algorithms in practical engineering.
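The FxLMS update that the comparison centres on can be sketched as below. This is a minimal single-channel illustration, not the paper's implementation: the secondary path is assumed known (here an identity path in the demo), and all signal names and gains are made up.

```python
import numpy as np

def fxlms(reference, primary, sec_path, n_taps=8, mu=0.05):
    """Minimal FxLMS sketch: adapt a control filter w so that the
    anti-noise, after the (assumed known) secondary path, cancels the
    primary noise at the error microphone."""
    sec_path = np.asarray(sec_path, dtype=float)
    w = np.zeros(n_taps)
    x_hist = np.zeros(n_taps)                # reference samples feeding w
    fx_hist = np.zeros(n_taps)               # filtered-x samples for the update
    y_hist = np.zeros(len(sec_path))         # control outputs into the secondary path
    fx = np.convolve(reference, sec_path)[:len(reference)]  # filtered reference
    errors = np.empty(len(reference))
    for i in range(len(reference)):
        x_hist = np.roll(x_hist, 1); x_hist[0] = reference[i]
        fx_hist = np.roll(fx_hist, 1); fx_hist[0] = fx[i]
        y_hist = np.roll(y_hist, 1); y_hist[0] = w @ x_hist
        anti = sec_path @ y_hist             # anti-noise reaching the error mic
        e = primary[i] - anti                # residual noise
        w += mu * e * fx_hist                # FxLMS weight update
        errors[i] = e
    return w, errors

t = np.arange(400)
x = np.sin(0.3 * t)                          # tonal reference signal
d = 0.5 * np.sin(0.3 * t)                    # correlated primary noise
w, errors = fxlms(x, d, sec_path=[1.0], n_taps=4)
```

With a linear secondary path and a strongly correlated reference, the residual error decays quickly, which matches the regime in which the abstract recommends FxLMS.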
Undercover Stars Among Exoplanet Candidates
2005-03-01
Very Large Telescope Finds Planet-Sized Transiting Star Summary An international team of astronomers have accurately determined the radius and mass of the smallest core-burning star known until now. The observations were performed in March 2004 with the FLAMES multi-fibre spectrograph on the 8.2-m VLT Kueyen telescope at the ESO Paranal Observatory (Chile). They are part of a large programme aimed at measuring accurate radial velocities for sixty stars for which a temporary brightness "dip" has been detected during the OGLE survey. The astronomers find that the dip seen in the light curve of the star known as OGLE-TR-122 is caused by a very small stellar companion, eclipsing this solar-like star once every 7.3 days. This companion is 96 times heavier than planet Jupiter but only 16% larger. It is the first time that direct observations demonstrate that stars less massive than 1/10th of the solar mass are of nearly the same size as giant planets. This fact will obviously have to be taken into account during the current search for transiting exoplanets. In addition, the observations with the Very Large Telescope have led to the discovery of seven new eclipsing binaries, that harbour stars with masses below one-third the mass of the Sun, a real bonanza for the astronomers. PR Photo 06a/05: Brightness "Dip" and Velocity Variations of OGLE-TR-122. PR Photo 06b/05: Properties of Low-Mass Stars and Planets. PR Photo 06c/05: Comparison Between OGLE-TR-122b, Jupiter and the Sun. The OGLE Survey When a planet happens to pass in front of its parent star (as seen from the Earth), it blocks a small fraction of the star's light from our view [1]. These "planetary transits" are of great interest as they allow astronomers to measure in a unique way the mass and the radius of exoplanets. Several surveys are therefore underway which attempt to find these faint signatures of other worlds. One of these programmes is the OGLE survey which was originally devised to detect microlensing
Institute of Scientific and Technical Information of China (English)
苏秀珍
2013-01-01
On the basis of analyzing the traditional perturb and observe (P&O) maximum power point tracking algorithm, and in order to overcome that method's misjudgments under rapidly changing irradiance, this paper introduces an MPPT algorithm based on the three-point weight comparison method. To address the traditional P&O method's inability to reconcile tracking accuracy and response speed, the paper also proposes a variable-step-size algorithm. The feasibility of this method is confirmed by simulation in Matlab/Simulink. This paper provides a reference for future work on MPPT.
Institute of Scientific and Technical Information of China (English)
王风华; 孟文杰
2012-01-01
Iris recognition is susceptible to the environment, and multi-algorithm fusion is an effective way to improve the reliability of iris recognition in complicated environments. This paper presents a comparative study of normalization model selection, a key step in multi-algorithm-fusion iris recognition. An iris recognition framework based on multi-algorithm fusion is first built, and three common normalization models are compared on the UBIRIS iris database. Experimental results show that the exponential model using a double sigmoid function achieves the best recognition performance. This work provides a theoretical reference for research on multi-algorithm fusion.
A Modern Non Candidate Approach for sequential pattern mining with Dynamic Minimum Support
Directory of Open Access Journals (Sweden)
Kumudbala Saxena
2011-12-01
Finding frequent patterns in data mining plays a significant role in finding relational patterns. Data mining, also called knowledge discovery, spans several kinds of databases, including mobile databases and heterogeneous environments. In this paper we propose a modern non-candidate approach for sequential pattern mining with dynamic minimum support. The approach is divided into six parts: (1) accept the dataset from the heterogeneous input set; (2) generate tokens based on the characters, generating only posterior tokens; (3) the minimum support is entered by the user according to need and place; (4) find the frequent patterns that satisfy the dynamic minimum support; (5) find associated members according to the token values; (6) find useful patterns after applying pruning. Our approach is not based on candidate keys, so it takes less time and memory than previous algorithms. Its other main feature is the dynamic minimum support, which gives the flexibility to find frequent patterns based on location and user requirements.
A Modern Non Candidate Approach for sequential pattern mining with Dynamic Minimum Support
Directory of Open Access Journals (Sweden)
Ms. Kumudbala Saxena
2011-09-01
Finding frequent patterns in data mining plays a significant role in finding relational patterns. Data mining, also called knowledge discovery, spans several kinds of databases, including mobile databases and heterogeneous environments. In this paper we propose a modern non-candidate approach for sequential pattern mining with dynamic minimum support. The approach is divided into six parts: (1) accept the dataset from the heterogeneous input set; (2) generate tokens based on the characters, generating only posterior tokens; (3) the minimum support is entered by the user according to need and place; (4) find the frequent patterns that satisfy the dynamic minimum support; (5) find associated members according to the token values; (6) find useful patterns after applying pruning. Our approach is not based on candidate keys, so it takes less time and memory than previous algorithms. Its other main feature is the dynamic minimum support, which gives the flexibility to find frequent patterns based on location and user requirements.
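The support-counting core of steps (3) and (4) can be sketched as follows. This is an illustration only, with the token-generation and pruning steps omitted and the sample database invented; it is not the paper's algorithm.

```python
from collections import Counter

def frequent_patterns(sequences, min_support):
    """Count how many sequences contain each item and keep the items
    that meet a user-supplied (dynamic) minimum support threshold."""
    counts = Counter()
    for seq in sequences:
        counts.update(set(seq))  # count each item at most once per sequence
    return {item: c for item, c in counts.items() if c >= min_support}

db = [["a", "b", "c"], ["a", "c"], ["b", "c"], ["c"]]
```

Because the threshold is a plain argument, it can be varied per location or per user, which is the "dynamic minimum support" idea in the abstract.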
Mahmoodabadi, M J; Taherkhorsandi, M; Bagheri, A
2014-01-01
An optimal robust state feedback tracking controller is introduced to control a biped robot. In the literature, the parameters of such controllers are usually determined by a tedious trial-and-error process. To eliminate this process, the parameters of the proposed controller are designed using multiobjective evolutionary algorithms: the proposed method, modified NSGA-II, the Sigma method, and MATLAB's MOGA toolbox. Among these evolutionary optimization algorithms, the proposed method performs best for controller design, since it gives designers ample opportunity to choose the most appropriate point based on the design criteria. Three points are chosen from the nondominated solutions of the obtained Pareto front based on two conflicting objective functions: the normalized summation of angle errors and the normalized summation of control effort. The obtained results demonstrate the efficiency of the proposed controller for controlling a biped robot.
Cuba Gyllensten, Illapha; Alberto G Bonomi; Goode, Kevin M.; Reiter, Harald; Habetha, Joerg; Amft, Oliver; Cleland, John GF
2016-01-01
Background Heart Failure (HF) is a common reason for hospitalization. Admissions might be prevented by early detection of and intervention for decompensation. Conventionally, changes in weight, a possible measure of fluid accumulation, have been used to detect deterioration. Transthoracic impedance may be a more sensitive and accurate measure of fluid accumulation. Objective In this study, we review previously proposed predictive algorithms using body weight and noninvasive transthoracic bio-...
Xie, Jianwen; Douglas, Pamela K.; Wu, Ying Nian; Brody, Arthur L.; Anderson, Ariana E.
2016-01-01
Brain networks in fMRI are typically identified using spatial independent component analysis (ICA), yet mathematical constraints such as sparse coding and positivity both provide alternate biologically-plausible frameworks for generating brain networks. Non-negative Matrix Factorization (NMF) would suppress negative BOLD signal by enforcing positivity. Spatial sparse coding algorithms ($L1$ Regularized Learning and K-SVD) would impose local specialization and a discouragement of multitasking,...
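The positivity-enforcing factorization the abstract contrasts with ICA can be sketched with the standard Lee-Seung multiplicative updates. This is a generic NMF illustration, not necessarily the solver used in the paper, and the toy data are invented.

```python
import numpy as np

def nmf(X, k, n_iter=200, seed=0):
    """Non-negative matrix factorization X ~ W @ H with W, H >= 0,
    via Lee-Seung multiplicative updates. For fMRI, X could be a
    (voxels x time) matrix, so positivity suppresses negative BOLD."""
    rng = np.random.default_rng(seed)
    m, n = X.shape
    W = rng.random((m, k)) + 1e-3   # positive initialization
    H = rng.random((k, n)) + 1e-3
    for _ in range(n_iter):
        # Multiplicative updates keep every entry non-negative.
        H *= (W.T @ X) / (W.T @ W @ H + 1e-12)
        W *= (X @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Toy non-negative rank-1 data matrix (invented for illustration).
X = np.outer(np.array([1.0, 2.0, 3.0]), np.array([1.0, 0.5, 1.0]))
W, H = nmf(X, k=2)
```

The updates monotonically decrease the Frobenius reconstruction error, and both factors stay non-negative throughout.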
Energy Technology Data Exchange (ETDEWEB)
Ahn, Hye Shin; Kim, Sun Mi; Jang, Mi Jung; Yun, Bo La; Kim, Boh Young [Dept. of Radiology, Seoul National University Bundang Hospital, Seongnam (Korea, Republic of); Ko, Eun Sook; Han, Boo Kyung [Dept. of Radiology, Samsung Medical Center, Seoul (Korea, Republic of); Chang, Jung Min; Yi, Ann; Cho, Nariya; Moon, Woo Kyung [Dept. of Radiology, Seoul National University Hospital, Seoul National University College of Medicine, Seoul (Korea, Republic of); Choi, Hye Young [Dept. of Radiology, Gyeongsang National University Hospital, Jinju (Korea, Republic of)
2014-06-15
To compare new full-field digital mammography (FFDM) with and without use of an advanced post-processing algorithm to improve image quality, lesion detection, diagnostic performance, and priority rank. During a 22-month period, we prospectively enrolled 100 cases of specimen FFDM mammography (Brestige), which was performed alone or in combination with a post-processing algorithm developed by the manufacturer: group A (SMA), specimen mammography without application of "Mammogram enhancement ver. 2.0"; group B (SMB), specimen mammography with application of "Mammogram enhancement ver. 2.0". Two sets of specimen mammographies were randomly reviewed by five experienced radiologists. Image quality, lesion detection, diagnostic performance, and priority rank with regard to image preference were evaluated. Three aspects of image quality (overall quality, contrast, and noise) of the SMB were significantly superior to those of SMA (p < 0.05). SMB was significantly superior to SMA for visualizing calcifications (p < 0.05). Diagnostic performance, as evaluated by cancer score, was similar between SMA and SMB. SMB was preferred to SMA by four of the five reviewers. The post-processing algorithm may improve image quality with better image preference in FFDM than without use of the software.
International Nuclear Information System (INIS)
To compare new full-field digital mammography (FFDM) with and without use of an advanced post-processing algorithm to improve image quality, lesion detection, diagnostic performance, and priority rank. During a 22-month period, we prospectively enrolled 100 cases of specimen FFDM mammography (Brestige), which was performed alone or in combination with a post-processing algorithm developed by the manufacturer: group A (SMA), specimen mammography without application of Mammogram enhancement ver. 2.0; group B (SMB), specimen mammography with application of Mammogram enhancement ver. 2.0. Two sets of specimen mammographies were randomly reviewed by five experienced radiologists. Image quality, lesion detection, diagnostic performance, and priority rank with regard to image preference were evaluated. Three aspects of image quality (overall quality, contrast, and noise) of the SMB were significantly superior to those of SMA (p < 0.05). SMB was significantly superior to SMA for visualizing calcifications (p < 0.05). Diagnostic performance, as evaluated by cancer score, was similar between SMA and SMB. SMB was preferred to SMA by four of the five reviewers. The post-processing algorithm may improve image quality with better image preference in FFDM than without use of the software.
76 FR 4896 - Call for Candidates
2011-01-27
... From the Federal Register Online via the Government Publishing Office FEDERAL ACCOUNTING STANDARDS ADVISORY BOARD Call for Candidates AGENCY: Federal Accounting Standards Advisory Board. ACTION: Notice... Federal Accounting Standards Advisory Board (FASAB) is currently seeking candidates (candidates must...
Performance Analysis of Cone Detection Algorithms
Mariotti, Letizia
2015-01-01
Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of two popular cone detection algorithms and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use Free Response Operating Characteristic (FROC) curves to evaluate and compare the performance of the three algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimat...
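The star-detection idea borrowed from astronomy can be illustrated with a toy local-maxima detector. This is a deliberately simplified stand-in (real detectors such as those used on astronomical images also fit PSF models and estimate backgrounds), and the test image is invented.

```python
import numpy as np

def detect_cones(img, threshold):
    """Detect cones as local maxima above a brightness threshold:
    a pixel is a detection if it is the maximum of its 3x3
    neighbourhood and exceeds the threshold."""
    peaks = []
    for i in range(1, img.shape[0] - 1):
        for j in range(1, img.shape[1] - 1):
            patch = img[i - 1:i + 2, j - 1:j + 2]
            if img[i, j] >= threshold and img[i, j] == patch.max():
                peaks.append((i, j))
    return peaks

# Toy "cone mosaic" with two bright cones.
img = np.zeros((6, 6))
img[1, 1] = 5.0
img[3, 4] = 4.0
peaks = detect_cones(img, threshold=1.0)
```

Detections from such an algorithm can then be scored against the known simulated cone positions to build the FROC curves the abstract describes.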
Halopentacenes: Promising Candidates for Organic Semiconductors
Institute of Scientific and Technical Information of China (English)
DU Gong-He; REN Zhao-Yu; GUO Ping; ZHENG Ji-Ming
2009-01-01
We introduce polar substituents such as F, Cl and Br into pentacene to enhance its solubility in common organic solvents while retaining pentacene's high charge-carrier mobility. Geometric structures, dipole moments, frontier molecular orbitals, ionization potentials and electron affinities, as well as reorganization energies of these molecules, and of pentacene for comparison, are successively calculated by density functional theory. The results indicate that halopentacenes have rather small reorganization energies (< 0.2 eV), and that when the substituents are at position 2, or at positions 2 and 9, they are polar molecules. We therefore conjecture that they can easily be dissolved in common organic solvents, and are promising candidates for organic semiconductors.
Comparing Online Algorithms for Bin Packing Problems
DEFF Research Database (Denmark)
Epstein, Leah; Favrholdt, Lene Monrad; Kohrt, Jens Svalgaard
2012-01-01
The relative worst-order ratio is a measure of the quality of online algorithms. In contrast to the competitive ratio, this measure compares two online algorithms directly instead of using an intermediate comparison with an optimal offline algorithm. In this paper, we apply the relative worst-order ratio to online algorithms for several common variants of the bin packing problem. We mainly consider pairs of algorithms that are not distinguished by the competitive ratio and show that the relative worst-order ratio prefers the intuitively better algorithm of each pair.
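To make the compared objects concrete, here is the classical First Fit online bin packing algorithm. It is a standard textbook algorithm given purely as an illustration; the specific algorithm pairs analyzed in the paper may differ.

```python
def first_fit(items, capacity=1.0):
    """Online First Fit: place each arriving item into the first open
    bin with enough room, opening a new bin if none fits.
    Returns the fill level of each bin."""
    bins = []
    for item in items:
        for b in range(len(bins)):
            if bins[b] + item <= capacity + 1e-12:  # small float tolerance
                bins[b] += item
                break
        else:
            bins.append(item)  # no existing bin fits: open a new one
    return bins

bins = first_fit([0.5, 0.7, 0.5, 0.3])  # packs into 2 bins
```

An online measure like the relative worst-order ratio compares two such algorithms on their respective worst orderings of the same multiset of items.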
Empathy Development in Teacher Candidates
Boyer, Wanda
2010-01-01
Using a grounded theory research design, the author examined 180 reflective essays of teacher candidates who participated in a "Learning Process Project," in which they were asked to synthesize and document their discoveries about the learning process over the course of a completely new learning experience as naive learners. This study explored…
Candidate gene prioritization with Endeavour.
Tranchevent, Léon-Charles; Ardeshirdavani, Amin; ElShal, Sarah; Alcaide, Daniel; Aerts, Jan; Auboeuf, Didier; Moreau, Yves
2016-07-01
Genomic studies and high-throughput experiments often produce large lists of candidate genes among which only a small fraction are truly relevant to the disease, phenotype or biological process of interest. Gene prioritization tackles this problem by ranking candidate genes by profiling candidates across multiple genomic data sources and integrating this heterogeneous information into a global ranking. We describe an extended version of our gene prioritization method, Endeavour, now available for six species and integrating 75 data sources. The performance (Area Under the Curve) of Endeavour on cross-validation benchmarks using 'gold standard' gene sets varies from 88% (for human phenotypes) to 95% (for worm gene function). In addition, we have also validated our approach using a time-stamped benchmark derived from the Human Phenotype Ontology, which provides a setting close to prospective validation. With this benchmark, using 3854 novel gene-phenotype associations, we observe a performance of 82%. Altogether, our results indicate that this extended version of Endeavour efficiently prioritizes candidate genes. The Endeavour web server is freely available at https://endeavour.esat.kuleuven.be/. PMID:27131783
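The fusion step, ranking candidates per data source and merging into a global ranking, can be illustrated with a simple mean-rank aggregator. This is a toy stand-in for Endeavour's order-statistics fusion, not its actual method, and the gene names are invented.

```python
def aggregate_rankings(rankings):
    """Fuse several per-data-source rankings (best first) of candidate
    genes into one global ranking by mean rank; genes absent from a
    source are treated as ranked last in it. Ties break alphabetically."""
    genes = set().union(*map(set, rankings))
    def mean_rank(g):
        return sum(r.index(g) if g in r else len(r) for r in rankings) / len(rankings)
    return sorted(genes, key=lambda g: (mean_rank(g), g))

rankings = [["g1", "g2", "g3"], ["g2", "g1", "g3"]]
global_ranking = aggregate_rankings(rankings)
```

Each list would come from one genomic data source; the fused list is what a benchmark such as the time-stamped Human Phenotype Ontology evaluation scores.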
Candidate Prediction Models and Methods
DEFF Research Database (Denmark)
Nielsen, Henrik Aalborg; Nielsen, Torben Skov; Madsen, Henrik;
2005-01-01
This document lists candidate prediction models for Work Package 3 (WP3) of the PSO-project called ``Intelligent wind power prediction systems'' (FU4101). The main focus is on the models transforming numerical weather predictions into predictions of power production. The document also outlines the...
Candidate cave entrances on Mars
Cushing, Glen E.
2012-01-01
This paper presents newly discovered candidate cave entrances into Martian near-surface lava tubes, volcano-tectonic fracture systems, and pit craters and describes their characteristics and exploration possibilities. These candidates are all collapse features that occur either intermittently along laterally continuous trench-like depressions or in the floors of sheer-walled atypical pit craters. As viewed from orbit, locations of most candidates are visibly consistent with known terrestrial features such as tube-fed lava flows, volcano-tectonic fractures, and pit craters, each of which forms by mechanisms that can produce caves. Although we cannot determine subsurface extents of the Martian features discussed here, some may continue unimpeded for many kilometers if terrestrial examples are indeed analogous. The features presented here were identified in images acquired by the Mars Odyssey's Thermal Emission Imaging System visible-wavelength camera, and by the Mars Reconnaissance Orbiter's Context Camera. Select candidates have since been targeted by the High-Resolution Imaging Science Experiment. Martian caves are promising potential sites for future human habitation and astrobiology investigations; understanding their characteristics is critical for long-term mission planning and for developing the necessary exploration technologies.
Energy Technology Data Exchange (ETDEWEB)
Chang, Liyun, E-mail: cliyun2000@gmail.com [Department of Medical Imaging and Radiological Sciences, I-Shou University, Kaohsiung, Taiwan (China); Ho, Sheng-Yow [Department of Radiation Oncology, Chi Mei Medical Center, Liouying, Tainan, Taiwan (China); Lee, Tsair-Fwu [Medical Physics and Informatics Laboratory, Department of Electronics Engineering, National Kaohsiung University of Applied Sciences, Kaohsiung, Taiwan (China); Yeh, Shyh-An [Department of Medical Imaging and Radiological Sciences, I-Shou University, Kaohsiung, Taiwan (China); Department of Radiation Oncology, E-Da Hospital, Kaohsiung, Taiwan (China); Ding, Hueisch-Jy [Department of Medical Imaging and Radiological Sciences, I-Shou University, Kaohsiung, Taiwan (China); Chen, Pang-Yu, E-mail: pangyuchen@yahoo.com.tw [Department of Radiation Oncology, Sinlau Christian Hospital, Tainan, Taiwan (China)
2015-03-21
EBT2 film is a convenient dosimetric quality-assurance (QA) tool, offering high-resolution 2D dosimetry and self-development, for verification of radiation therapy treatment plans and special projects. However, the user faces a relatively high degree of uncertainty (more than ±6% according to Hartmann et al. [29]) and the trouble of cutting one piece of film into small pieces and then reintegrating them each time. To avoid this tedious cutting work, and to save calibration time and budget, a dose-range analysis is presented in this study for EBT2 film calibration using the percentage depth dose (PDD) method. Different combinations of the three dose ranges, 9-26 cGy, 33-97 cGy and 109-320 cGy, with two types of curve-fitting algorithms, converting either film pixel values or net optical densities into doses, were tested and compared. With the lowest error and an acceptable inaccuracy of less than 3 cGy over the clinical dose range (9-320 cGy), a single film calibrated with the net-optical-density algorithm over the 109-320 cGy dose range is suggested for routine calibration.
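The net-optical-density calibration can be sketched as follows. The netOD definition is the standard radiochromic-film one; the polynomial fitting function and all calibration points below are hypothetical, not the paper's data or exact fit model.

```python
import numpy as np
from numpy.polynomial import Polynomial

def net_optical_density(pv_exposed, pv_unexposed):
    """Net optical density from film pixel values:
    netOD = log10(PV_unexposed / PV_exposed)."""
    return np.log10(np.asarray(pv_unexposed, float) / np.asarray(pv_exposed, float))

def fit_calibration(net_od, dose_cGy, order=2):
    """Fit dose as a polynomial in netOD (illustrative fit model)."""
    return Polynomial.fit(net_od, dose_cGy, order)

# Hypothetical calibration points over the suggested 109-320 cGy range.
pv_unexposed = 40000.0
pv = np.array([36000.0, 30000.0, 24000.0])
dose = np.array([109.0, 200.0, 320.0])
cal = fit_calibration(net_optical_density(pv, pv_unexposed), dose)
```

Once fitted, `cal(netOD)` converts a measured net optical density anywhere on the film into dose.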
Directory of Open Access Journals (Sweden)
V.B.Kirubanand
2010-03-01
Full Text Available The main theme of this paper is to evaluate the performance of hub, switch and Bluetooth technologies using a queueing Petri net (QPN) model and a Markov algorithm, with steganography providing security. The paper focuses on comparing hub, switch and Bluetooth technologies in terms of service rate and arrival rate using the M/M(1,b)/1 Markov model. When comparing the service rates of the hub network, the switch network and Bluetooth, the Bluetooth service rate was found to be the most efficient for implementation, and the values obtained for Bluetooth can be used to estimate the performance of other wireless technologies. QPNs facilitate the integration of both hardware and software aspects of system behaviour in the improved model. The purpose of the steganography is to send hidden information from one system to another over Bluetooth with security measures. Queueing Petri nets are a powerful performance analysis and prediction tool; by demonstrating their power as a modelling paradigm for forthcoming technologies, we hope to motivate further research in this area.
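The abstract's batch-service M/M(1,b)/1 model is more elaborate, but the flavour of the service-rate versus arrival-rate comparison can be sketched with the plain M/M/1 queue, whose steady-state formulas are standard. The "hub-like" and "Bluetooth-like" rates below are made-up numbers for illustration:

```python
def mm1_metrics(arrival_rate, service_rate):
    """Steady-state metrics of an M/M/1 queue (requires arrival < service)."""
    if arrival_rate >= service_rate:
        raise ValueError("queue is unstable: arrival rate must be below service rate")
    rho = arrival_rate / service_rate           # server utilization
    mean_n = rho / (1.0 - rho)                  # mean number in system
    mean_t = 1.0 / (service_rate - arrival_rate)  # mean time in system
    return {"utilization": rho, "mean_in_system": mean_n, "mean_time": mean_t}

# Hypothetical comparison: a slower "hub-like" server versus a faster
# "Bluetooth-like" server facing the same arrival stream (jobs per second).
hub = mm1_metrics(arrival_rate=8.0, service_rate=10.0)
bt = mm1_metrics(arrival_rate=8.0, service_rate=16.0)
```

A higher service rate at the same arrival rate drives utilization, queue length and sojourn time down, which is the qualitative effect the paper reports for Bluetooth.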
Directory of Open Access Journals (Sweden)
Xiaolei Yu
2014-10-01
Full Text Available Accurate inversion of land surface geo/biophysical variables from remote sensing data for earth observation applications is an essential and challenging topic in global change research. Land surface temperature (LST) is one of the key parameters in the physics of earth surface processes from local to global scales. The importance of LST is being increasingly recognized, and there is a strong interest in developing methodologies to measure LST from space. The Landsat 8 Thermal Infrared Sensor (TIRS) is the newest thermal infrared sensor in the Landsat project, providing two adjacent thermal bands, which is a great benefit for LST inversion. In this paper, we compared three different approaches for LST inversion from TIRS: the radiative transfer equation-based method, the split-window (SW) algorithm and the single-channel (SC) method. Four selected energy balance monitoring sites from the Surface Radiation Budget Network (SURFRAD) were used for validation, combined with the MODIS 8-day emissivity product. For the investigated sites and scenes, results show that the LST inverted from the radiative transfer equation-based method using band 10 has the highest accuracy, with RMSE lower than 1 K, while the SW algorithm has moderate accuracy and the SC method has the lowest accuracy.
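As an illustration of the split-window idea, the sketch below implements the generic SW functional form LST = T10 + c1·ΔT + c2·ΔT² + c0 + (c3 + c4·w)(1 − ε) + (c5 + c6·w)·Δε, where ΔT is the brightness-temperature difference between the two adjacent bands, ε the mean emissivity, Δε the emissivity difference and w the column water vapour. The coefficients passed in the example are placeholders, not the calibrated values used in the paper:

```python
def split_window_lst(t10, t11, emissivity_mean, emissivity_diff,
                     water_vapour, coeffs):
    """Generic split-window form: LST (K) from two adjacent thermal bands.

    t10, t11: brightness temperatures (K) of the two TIRS bands.
    coeffs c0..c6 are sensor-specific and must come from calibration.
    """
    c0, c1, c2, c3, c4, c5, c6 = coeffs
    dt = t10 - t11
    return (t10 + c1 * dt + c2 * dt * dt + c0
            + (c3 + c4 * water_vapour) * (1.0 - emissivity_mean)
            + (c5 + c6 * water_vapour) * emissivity_diff)

# Illustrative call with placeholder coefficients (NOT published values).
lst = split_window_lst(t10=295.0, t11=293.5, emissivity_mean=0.98,
                       emissivity_diff=0.005, water_vapour=2.0,
                       coeffs=(0.268, 1.378, 0.183, 54.3, -2.238, -129.2, 16.4))
```

The two-band structure is exactly what makes TIRS attractive for the SW approach: atmospheric attenuation is estimated from the differential absorption between the adjacent bands.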
Giacometti, Achille; Gögelein, Christoph; Lado, Fred; Sciortino, Francesco; Ferrari, Silvano; Pastore, Giorgio
2014-03-01
Building upon past work on the phase diagram of Janus fluids [F. Sciortino, A. Giacometti, and G. Pastore, Phys. Rev. Lett. 103, 237801 (2009)], we perform a detailed study of integral equation theory of the Kern-Frenkel potential with coverage that is tuned from the isotropic square-well fluid to the Janus limit. An improved algorithm for the reference hypernetted-chain (RHNC) equation for this problem is implemented that significantly extends the range of applicability of RHNC. Results for both structure and thermodynamics are presented and compared with numerical simulations. Unlike previous attempts, this algorithm is shown to be stable down to the Janus limit, thus paving the way for analyzing the frustration mechanism characteristic of the gas-liquid transition in the Janus system. The results are also compared with Barker-Henderson thermodynamic perturbation theory on the same model. We then discuss the pros and cons of both approaches within a unified treatment. On balance, RHNC integral equation theory, even with an isotropic hard-sphere reference system, is found to be a good compromise between accuracy of the results, computational effort, and uniform quality to tackle self-assembly processes in patchy colloids of complex nature. Further improvement in RHNC however clearly requires an anisotropic reference bridge function.
Hromkovic, Juraj
2009-01-01
Explores the science of computing. This book starts with the development of computer science, algorithms and programming, and then explains and shows how to exploit the concepts of infinity, computability, computational complexity, nondeterminism and randomness.
Institute of Scientific and Technical Information of China (English)
De-xuan ZOU; Li-qun GAO; Steven LI
2014-01-01
This paper presents a ranked differential evolution (RDE) algorithm for solving the identification problem of nonlinear discrete-time systems based on a Volterra filter model. In the improved method, a scale factor generated by combining a sine function with randomness effectively keeps a balance between global search and local search. The mutation operation is also modified after ranking all candidate solutions of the population, to help avoid premature convergence. Finally, two examples, a highly nonlinear discrete-time rational system and a real heat exchanger, are used to evaluate the performance of the RDE algorithm against five other approaches. Numerical experiments and comparisons demonstrate that the RDE algorithm performs better than the other approaches in most cases.
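The two ingredients the abstract names, a sine-plus-randomness scale factor and rank-aware mutation, can be sketched in one differential-evolution generation. The exact formulas below (the sine schedule and the bias toward better-ranked base vectors) are assumptions for illustration, not the paper's equations:

```python
import math
import random

def ranked_de_step(population, fitness, bounds, gen, max_gen, seed=None):
    """One generation of a ranked-DE-style mutation (minimisation).

    population: list of equal-length real vectors; fitness: matching scores
    (lower is better); bounds: (lo, hi) applied to every dimension.
    """
    rng = random.Random(seed)
    order = sorted(range(len(population)), key=lambda i: fitness[i])  # best first
    lo, hi = bounds
    dim = len(population[0])
    half = max(1, len(order) // 2)
    new_pop = []
    for _ in order:
        # Scale factor combining a sine schedule with randomness, in (0, 1].
        f = 0.5 * (1.0 + math.sin(math.pi * gen / max_gen)) * rng.random()
        base = population[order[rng.randrange(half)]]   # bias toward good ranks
        b, c = rng.sample(range(len(population)), 2)
        trial = [min(hi, max(lo, base[d] + f * (population[b][d] - population[c][d])))
                 for d in range(dim)]
        new_pop.append(trial)
    return new_pop

pop = [[0.5, 0.5], [0.2, 0.8], [0.9, 0.1], [0.4, 0.3]]
fit = [1.0, 0.5, 2.0, 0.1]
next_pop = ranked_de_step(pop, fit, (0.0, 1.0), gen=10, max_gen=100, seed=1)
```

A full RDE would add crossover and greedy selection against the parents; this fragment only shows where the ranked base selection and the modulated scale factor enter.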
OPTIMISED RANDOM MUTATIONS FOR EVOLUTIONARY ALGORITHMS
Directory of Open Access Journals (Sweden)
Sean McGerty
2014-07-01
Full Text Available To demonstrate our approaches we use Sudoku puzzles, which are an excellent test bed for evolutionary algorithms. The puzzles are accessible enough for people to enjoy, yet the more complex ones require thousands of iterations before an evolutionary algorithm finds a solution. When comparing evolutionary algorithms, the iteration count to solution can serve as an indicator of relative efficiency. Evolutionary algorithms, however, include a process of random mutation of solution candidates. We show that by improving the random-mutation behaviour we were able to solve problems with minimal evolutionary optimisation. In our experiments, the random mutation was at times more effective at solving the harder problems than the evolutionary algorithms. This implies that the quality of random mutation may have a significant impact on the performance of evolutionary algorithms on Sudoku puzzles, and that improved random mutation may hold promise for reuse in hybrid evolutionary algorithm behaviours.
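The difference between a naive and an "improved" random mutation can be made concrete on a Sudoku candidate encoded as 81 digits. Both operators below are illustrative sketches: the naive one reassigns a non-clue cell blindly, while the informed one prefers digits not already used in that cell's row:

```python
import random

def mutate(candidate, fixed_mask, rng):
    """Naive mutation: reassign one non-clue cell to a uniformly random digit."""
    free = [i for i in range(81) if not fixed_mask[i]]
    child = list(candidate)
    i = rng.choice(free)
    child[i] = rng.randint(1, 9)
    return child

def informed_mutate(candidate, fixed_mask, rng):
    """Improved mutation: prefer digits not already present in the cell's row."""
    free = [i for i in range(81) if not fixed_mask[i]]
    child = list(candidate)
    i = rng.choice(free)
    row_start = (i // 9) * 9
    used = {child[row_start + c] for c in range(9) if row_start + c != i}
    choices = [v for v in range(1, 10) if v not in used] or list(range(1, 10))
    child[i] = rng.choice(choices)
    return child

rng = random.Random(0)
puzzle = [1] * 81                 # degenerate toy candidate, not a real puzzle
clues = [False] * 81
clues[0] = True                   # cell 0 is a given clue and must never change
naive_child = mutate(puzzle, clues, rng)
informed_child = informed_mutate(puzzle, clues, rng)
```

The informed operator wastes fewer evaluations on obviously conflicting assignments, which is the kind of mutation-quality effect the abstract argues can dominate solver performance.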
ISINA: INTEGRAL Source Identification Network Algorithm
Scaringi, S; Clark, D J; Dean, A J; Hill, A B; McBride, V A; Shaw, S E
2008-01-01
We give an overview of ISINA, the INTEGRAL Source Identification Network Algorithm. This machine-learning algorithm, based on Random Forests, is applied to the IBIS/ISGRI dataset in order to ease the production of unbiased future soft gamma-ray source catalogues. We first introduce the dataset and the problems encountered when dealing with images obtained using the coded-mask technique. The initial source-candidate search is introduced and an initial candidate list is created. Feature extraction on the initial candidate list is then described, together with feature merging for these candidates. Three training and testing sets are created in order to deal with the diverse timescales encountered in the gamma-ray sky. Three independent Random Forests are built: one for faint persistent source recognition, one for strong persistent sources, and a final one for transients. For the latter, a new transient detection technique is introduced and described...
A Novel Algorithm for Finding Interspersed Repeat Regions
Institute of Scientific and Technical Information of China (English)
Dongdong Li; Zhengzhi Wang; Qingshan Ni
2004-01-01
The analysis of repeats in DNA sequences is an important subject in bioinformatics. In this paper, we propose a novel projection-assemble algorithm to find unknown interspersed repeats in DNA sequences. The algorithm employs a random projection algorithm to obtain a candidate fragment set, and an exhaustive search over each pair of fragments from the candidate set to find potential linkage, and then assembles them together. The complexity of our projection-assemble algorithm is nearly linear in the length of the genome sequence, and its memory usage is limited only by the hardware. We tested our algorithm with both simulated data and real biological data, and the results show that our projection-assemble algorithm is efficient. By means of this algorithm, we found an unlabeled repeat region that occurs five times in the Escherichia coli genome, with a length of more than 5,000 bp and a mismatch probability of less than 4%.
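The projection step can be sketched as follows: windows of the sequence are bucketed by the letters at a few randomly chosen offsets, and any bucket holding more than one window becomes a candidate repeat pair for the subsequent exhaustive verification and assembly. The toy genome and parameters are made up for illustration:

```python
import random
from collections import defaultdict

def candidate_repeats(seq, window=12, proj=8, seed=0):
    """Random-projection sketch: bucket all length-`window` substrings of
    `seq` by the letters at `proj` randomly chosen offsets. Buckets with
    more than one window are repeat candidates; a downstream search would
    verify each pair and assemble overlapping hits into repeat regions."""
    rng = random.Random(seed)
    offsets = sorted(rng.sample(range(window), proj))
    buckets = defaultdict(list)
    for i in range(len(seq) - window + 1):
        key = "".join(seq[i + o] for o in offsets)
        buckets[key].append(i)
    return {k: pos for k, pos in buckets.items() if len(pos) > 1}

# A toy genome containing an exact interspersed repeat at positions 0 and 20.
genome = "ACGTACGTTTTT" + "GGGGCCCC" + "ACGTACGTTTTT"
cands = candidate_repeats(genome, window=12, proj=8)
```

Because the projection ignores some positions, it also tolerates mismatches at the masked offsets, which is what lets this family of methods find inexact repeats cheaply.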
Integrative analysis to select cancer candidate biomarkers to targeted validation
Heberle, Henry; Domingues, Romênia R.; Granato, Daniela C.; Yokoo, Sami; Canevarolo, Rafael R.; Winck, Flavia V.; Ribeiro, Ana Carolina P.; Brandão, Thaís Bianca; Filgueiras, Paulo R.; Cruz, Karen S. P.; Barbuto, José Alexandre; Poppi, Ronei J.; Minghim, Rosane; Telles, Guilherme P.; Fonseca, Felipe Paiva; Fox, Jay W.; Santos-Silva, Alan R.; Coletta, Ricardo D.; Sherman, Nicholas E.; Paes Leme, Adriana F.
2015-01-01
Targeted proteomics has flourished as the method of choice for prospecting for and validating potential candidate biomarkers in many diseases. However, challenges still remain due to the lack of standardized routines that can prioritize a limited number of proteins to be further validated in human samples. To help researchers identify candidate biomarkers that best characterize their samples under study, a well-designed integrative analysis pipeline, comprising MS-based discovery, feature selection methods, clustering techniques, bioinformatic analyses and targeted approaches was performed using discovery-based proteomic data from the secretomes of three classes of human cell lines (carcinoma, melanoma and non-cancerous). Three feature selection algorithms, namely, Beta-binomial, Nearest Shrunken Centroids (NSC), and Support Vector Machine-Recursive Features Elimination (SVM-RFE), indicated a panel of 137 candidate biomarkers for carcinoma and 271 for melanoma, which were differentially abundant between the tumor classes. We further tested the strength of the pipeline in selecting candidate biomarkers by immunoblotting, human tissue microarrays, label-free targeted MS and functional experiments. In conclusion, the proposed integrative analysis was able to pre-qualify and prioritize candidate biomarkers from discovery-based proteomics to targeted MS. PMID:26540631
Directory of Open Access Journals (Sweden)
Wenjuan Li
2015-11-01
Full Text Available The leaf area index (LAI) and the fraction of photosynthetically active radiation absorbed by green vegetation (FAPAR) are essential climatic variables in surface process models; the fraction of vegetation cover (FCOVER) is also important for separating vegetation and soil in energy balance processes. Currently, several LAI, FAPAR and FCOVER satellite products are derived at moderate to coarse spatial resolution. The launch of Sentinel-2 in 2015 will provide data at decametric resolution with a high revisit frequency, allowing the canopy functioning to be quantified at local to regional scales. The aim of this study is thus to evaluate the performance of a neural-network-based algorithm for deriving LAI, FAPAR and FCOVER products at decametric spatial resolution and high temporal sampling. The algorithm is generic, i.e., it is applied without any knowledge of the land cover. A time series of high-spatial-resolution SPOT4_HRVIR (16 scenes) and Landsat 8 (18 scenes) images acquired in 2013 over a site in southwestern France was used to generate the LAI, FAPAR and FCOVER products. For each sensor and each biophysical variable, a neural network was first trained on PROSPECT+SAIL radiative transfer model simulations of top-of-canopy reflectance for the green, red, near-infrared and shortwave-infrared bands. Our results show good spatial and temporal consistency between the variables derived from the two sensors: almost half the pixels show an absolute difference between SPOT and Landsat estimates of less than 0.5 units for LAI and 0.05 units for FAPAR and FCOVER. Finally, measurements with downward-looking digital hemispherical cameras were made over the main land cover types to validate the accuracy of the products. Results show that the derived products are strongly correlated with the field measurements (R2 > 0.79), corresponding to RMSE = 0.49 for LAI, RMSE = 0.10 (RMSE = 0.12) for black-sky (white-sky) FAPAR, and RMSE = 0.15 for FCOVER. It is concluded that the proposed generic algorithm provides a good...
Chemyakin, E.; Sawamura, P.; Mueller, D.; Burton, S. P.; Ferrare, R. A.; Hostetler, C. A.; Scarino, A. J.; Hair, J. W.; Berkoff, T.; Cook, A. L.; Harper, D. B.; Seaman, S. T.
2015-12-01
Although aerosols are only a fairly minor constituent of Earth's atmosphere, they can significantly affect its radiative energy balance. Light detection and ranging (lidar) instruments have the potential to play a crucial role in atmospheric research, as only these instruments provide information about aerosol properties at high vertical resolution. We are exploring different algorithmic approaches to retrieving the microphysical properties of aerosols using lidar. Almost two decades ago we started with inversion techniques based on Tikhonov's regularization, which became a reference point for improving the retrieval capabilities of inversion algorithms. Recently we began examining the potential of the "arrange and average" scheme, which relies on a look-up table of optical and microphysical aerosol properties. The future combination of these two different inversion schemes may help us improve the accuracy of the microphysical data products. The novel arrange-and-average algorithm was applied to retrieve aerosol optical and microphysical parameters using NASA Langley Research Center (LaRC) High Spectral Resolution Lidar (HSRL-2) data. HSRL-2 is the first airborne HSRL system able to provide advanced datasets consisting of backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm as input for aerosol microphysical retrievals. HSRL-2 was deployed on board NASA LaRC's King Air aircraft during the Deriving Information on Surface Conditions from Column and VERtically Resolved Observations Relevant to Air Quality (DISCOVER-AQ) field campaigns over the California Central Valley and Houston. Vertical profiles of aerosol optical properties and size distributions were obtained from in-situ instruments on board NASA's P-3B aircraft. As HSRL-2 flew along the same flight track as the P-3B, synergistic measurements and retrievals were obtained by these two independent platforms. We will present an...
El-habashi, A.; Ahmed, S.
2015-10-01
New approaches are described that make use of the ocean color remote sensing reflectance (Rrs) readings available from the existing Visible Infrared Imaging Radiometer Suite (VIIRS) bands to detect and retrieve Karenia brevis (KB) harmful algal blooms (HABs) that frequently plague the coasts of the West Florida Shelf (WFS). Unfortunately VIIRS, unlike MODIS, does not have a 678 nm channel to detect chlorophyll fluorescence, which MODIS uses in the normalized fluorescence height (nFLH) algorithm that has been shown to be effective in detecting and tracking KB HABs. We present here the use of neural network (NN) algorithms for KB HAB retrievals in the WFS. These NNs, previously reported by us, were trained on a wide range of suitably parametrized synthetic data typical of coastal waters to form a multiband inversion algorithm that models the relationship between Rrs values at the 486, 551 and 671 nm VIIRS bands and the values of phytoplankton absorption (aph), CDOM absorption (ag), non-algal particle (NAP) absorption (aNAP) and the particulate backscattering coefficient (bbp), all at 443 nm, and permits retrievals of these parameters. We use the NN to retrieve aph443 in the WFS. The retrieved aph443 values are then filtered by applying known limiting conditions on minimum chlorophyll concentration [Chl] and the low backscatter properties associated with KB HABs in the WFS, thereby identifying, delineating and quantifying the aph443 values, and hence [Chl] concentrations, representing KB HABs. Comparisons with in-situ measurements and other techniques, including MODIS nFLH, confirm the viability of both the NN retrievals and the filtering approaches devised.
Uilhoorn, F. E.
2016-10-01
In this article, the stochastic modelling approach proposed by Box and Jenkins is treated as a mixed-integer nonlinear programming (MINLP) problem solved with mesh adaptive direct search and real-coded genetic algorithms. The aim is to estimate the real-valued parameters and the non-negative integer, correlated structure of stationary autoregressive moving average (ARMA) processes. The maximum likelihood function of the stationary ARMA process is embedded in Akaike's information criterion and the Bayesian information criterion, whereas the estimation procedure is based on Kalman filter recursions. The constraints imposed on the objective function enforce stability and invertibility. The best ARMA model is regarded as the global minimum of the non-convex MINLP problem. The robustness and computational performance of the MINLP solvers are compared with brute-force enumeration. Numerical experiments are done for existing time series and one new data set.
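A heavily simplified stand-in for the order-selection idea is sketched below: the article scores candidate ARMA structures with a Kalman-filter likelihood inside MINLP solvers, whereas here AR-only models are fitted by least squares, scored with a Gaussian-approximation AIC, and enumerated by brute force (the benchmark the article compares against):

```python
import numpy as np

def ar_aic(x, p):
    """Least-squares AR(p) fit and its AIC under a Gaussian approximation:
    AIC = n*ln(RSS/n) + 2*(p + 1). A simplified stand-in for the paper's
    Kalman-filter likelihood."""
    x = np.asarray(x, dtype=float)
    n = len(x) - p
    if p == 0:
        rss = float(np.sum((x - x.mean()) ** 2))
    else:
        # Regressors: lagged values x[t-1], ..., x[t-p] for t = p..len(x)-1.
        X = np.column_stack([x[p - k - 1:len(x) - k - 1] for k in range(p)])
        coef, *_ = np.linalg.lstsq(X, x[p:], rcond=None)
        rss = float(np.sum((x[p:] - X @ coef) ** 2))
    return n * np.log(max(rss, 1e-12) / n) + 2 * (p + 1)

# Simulate an AR(1) process with phi = 0.8, then enumerate candidate orders.
rng = np.random.default_rng(0)
e = rng.standard_normal(500)
x = np.empty(500)
x[0] = e[0]
for t in range(1, 500):
    x[t] = 0.8 * x[t - 1] + e[t]
best_p = min(range(4), key=lambda p: ar_aic(x, p))
```

The MINLP formulation replaces this exhaustive loop with a guided search over (p, q) and the real parameters jointly, which is what makes it scale beyond toy order ranges.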
Directory of Open Access Journals (Sweden)
Hamed Piarehzadeh
2012-08-01
Full Text Available This study addresses optimal distributed generation (DG) allocation for voltage stability improvement in radial distribution systems. Voltage instability implies an uncontrolled decrease in voltage triggered by a disturbance, leading to voltage collapse, and is primarily caused by dynamics connected with the load. Based on the time frame of the phenomena, the instability is divided into steady-state and transient voltage instability. The analysis is accomplished using a steady-state voltage stability index that can be evaluated at each node of the distribution system. Several candidate capacities and locations are used to check the results. The location of the DG has the main effect on the voltage stability of the system. The effects of location and capacity on improving steady-state voltage stability in radial distribution systems are examined using the Harmony Search Algorithm (HSA), and the results are finally compared to Particle Swarm Optimization (PSO) in terms of speed, convergence and accuracy.
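The generic Harmony Search Algorithm used here is easy to sketch independently of the power-system objective. The minimal minimiser below shows the three standard HSA moves (memory consideration, pitch adjustment, random consideration) on a toy sphere function; the parameter values and the test objective are illustrative, not the paper's DG-allocation formulation:

```python
import random

def harmony_search(objective, bounds, hms=10, hmcr=0.9, par=0.3,
                   bw=0.05, iters=2000, seed=0):
    """Minimal Harmony Search (minimisation).

    hms: harmony memory size; hmcr: harmony memory considering rate;
    par: pitch adjusting rate; bw: pitch bandwidth (fraction of range).
    """
    rng = random.Random(seed)
    mem = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    scores = [objective(h) for h in mem]
    for _ in range(iters):
        new = []
        for d, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                v = mem[rng.randrange(hms)][d]        # memory consideration
                if rng.random() < par:                # pitch adjustment
                    v += rng.uniform(-bw, bw) * (hi - lo)
            else:
                v = rng.uniform(lo, hi)               # random consideration
            new.append(min(hi, max(lo, v)))
        s = objective(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                         # replace worst harmony
            mem[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return mem[best], scores[best]

# Toy usage: minimise a shifted sphere function over [-1, 1]^3.
best_x, best_f = harmony_search(lambda x: sum((xi - 0.5) ** 2 for xi in x),
                                bounds=[(-1.0, 1.0)] * 3)
```

For the paper's problem, `objective` would instead evaluate the steady-state voltage stability index of the network for a candidate DG location and capacity.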
Comparison and Analysis of Traffic Sign Recognition Algorithms
Institute of Scientific and Technical Information of China (English)
钟玲; 于雅洁; 张志佳; 靳永超
2016-01-01
Traffic sign recognition is a typical machine vision application for which a variety of machine vision algorithms have been widely used. Convolutional neural networks can avoid an explicit hand-crafted feature extraction process, so this paper applies a convolutional neural network to traffic sign recognition and compares it experimentally with a BP neural network and a support vector machine. Analysis of the experimental results shows that the convolutional neural network is significantly better than the other two algorithms in both recognition rate and training speed, achieving the best recognition performance.
Calzado, A; Geleijns, J; Joemai, R M S; Veldkamp, W J H
2014-01-01
Objective: To compare low-contrast detectability (LCDet) performance between a model [non–pre-whitening matched filter with an eye filter (NPWE)] and human observers in CT images reconstructed with filtered back projection (FBP) and iterative [adaptive iterative dose reduction three-dimensional (AIDR 3D; Toshiba Medical Systems, Zoetermeer, Netherlands)] algorithms. Methods: Images of the Catphan® phantom (Phantom Laboratories, New York, NY) were acquired with Aquilion ONE™ 320-detector row CT (Toshiba Medical Systems, Tokyo, Japan) at five tube current levels (20–500 mA range) and reconstructed with FBP and AIDR 3D. Samples containing either low-contrast objects (diameters, 2–15 mm) or background were extracted and analysed by the NPWE model and four human observers in a two-alternative forced choice detection task study. Proportion correct (PC) values were obtained for each analysed object and used to compare human and model observer performances. An efficiency factor (η) was calculated to normalize NPWE to human results. Results: Human and NPWE model PC values (normalized by the efficiency, η = 0.44) were highly correlated for the whole dose range. The Pearson's product-moment correlation coefficients (95% confidence interval) between human and NPWE were 0.984 (0.972–0.991) for AIDR 3D and 0.984 (0.971–0.991) for FBP, respectively. Bland–Altman plots based on PC results showed excellent agreement between human and NPWE [mean absolute difference 0.5 ± 0.4%; range of differences (−4.7%, 5.6%)]. Conclusion: The NPWE model observer can predict human performance in LCDet tasks in phantom CT images reconstructed with FBP and AIDR 3D algorithms at different dose levels. Advances in knowledge: Quantitative assessment of LCDet in CT can accurately be performed using software based on a model observer. PMID:24837275
Explicit filtering of building blocks for genetic algorithms
Kemenade, C.H.M. van
1996-01-01
Genetic algorithms are often applied to building block problems. We have developed a simple filtering algorithm that can locate building blocks within a bit-string, and does not make assumptions regarding the linkage of the bits. A comparison between the filtering algorithm and genetic algorithms re...
Kwon, Ji-Wook; Kim, Jin Hyo; Seo, Jiwon
2015-05-06
This paper proposes a Multiple Leader Candidate (MLC) structure and a Competitive Position Allocation (CPA) algorithm applicable to various tasks, including environmental sensing. Unlike previous formation structures such as virtual-leader and actual-leader structures, with position allocation that is either rigid or optimization-based, a formation employing the proposed MLC structure and CPA algorithm is robust against the failure (or disappearance) of member robots and reduces the overall cost. In the MLC structure, a leader of the entire system is chosen from among leader candidate robots. The CPA algorithm is a decentralized position allocation algorithm that assigns robots to the vertices of the formation via competition among adjacent robots. Numerical simulations and experimental results are included to show the feasibility and performance of a multiple-robot system employing the proposed MLC structure and CPA algorithm.
Bovchaliuk, Valentyn; Goloub, Philippe; Podvin, Thierry; Veselovskii, Igor; Tanre, Didier; Chaikovsky, Anatoli; Dubovik, Oleg; Mortier, Augustin; Lopatin, Anton; Korenskiy, Mikhail; Victori, Stephane
2016-07-01
Aerosol particles are important and highly variable components of the terrestrial atmosphere, and they affect both air quality and climate. In order to evaluate their multiple impacts, the most important requirement is to precisely measure their characteristics. Remote sensing technologies such as lidar (light detection and ranging) and sun/sky photometers are powerful tools for determining aerosol optical and microphysical properties. In our work, we applied several methods to joint or separate lidar and sun/sky-photometer data to retrieve aerosol properties. The Raman technique and inversion with regularization use only lidar data. The LIRIC (LIdar-Radiometer Inversion Code) and recently developed GARRLiC (Generalized Aerosol Retrieval from Radiometer and Lidar Combined data) inversion methods use joint lidar and sun/sky-photometer data. This paper presents a comparison and discussion of aerosol optical properties (extinction coefficient profiles and lidar ratios) and microphysical properties (volume concentrations, complex refractive index values, and effective radius values) retrieved using the aforementioned methods. The comparison showed inconsistencies in the retrieved lidar ratios. However, other aerosol properties were found to be generally in close agreement with the AERONET (AErosol RObotic NETwork) products. In future studies, more cases should be analysed in order to clearly define the peculiarities in our results.
IAEA Director General candidates announced
International Nuclear Information System (INIS)
Full text: The IAEA today confirms receipt of the nomination of five candidates for Director General of the IAEA. Nominations of the following individuals have been received by the Chairperson of the IAEA Board of Governors, Ms. Taous Feroukhi: Mr. Jean-Pol Poncelet of Belgium; Mr. Yukiya Amano of Japan; Mr. Ernest Petric of Slovenia; Mr. Abdul Samad Minty of South Africa; and Mr. Luis Echavarri of Spain. The five candidates were nominated in line with a process approved by the Board in October 2008. IAEA Director General Mohamed ElBaradei's term of office expires on 30 November 2009. He has served as Director General since 1997 and has stated that he is not available for a fourth term of office. (IAEA)
Enthalpy screen of drug candidates.
Schön, Arne; Freire, Ernesto
2016-11-15
The enthalpic and entropic contributions to the binding affinity of drug candidates have been acknowledged to be important determinants of the quality of a drug molecule. These quantities, usually summarized in the thermodynamic signature, provide a rapid assessment of the forces that drive the binding of a ligand. Having access to the thermodynamic signature in the early stages of the drug discovery process will provide critical information towards the selection of the best drug candidates for development. In this paper, the Enthalpy Screen technique is presented. The enthalpy screen allows fast and accurate determination of the binding enthalpy for hundreds of ligands. As such, it appears to be ideally suited to aid in the ranking of the hundreds of hits that are usually identified after standard high throughput screening.
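The thermodynamic signature itself is simple arithmetic once Kd and ΔH are measured: ΔG = RT·ln Kd, and the entropic term follows as −TΔS = ΔG − ΔH. The sketch below computes the signature for a hypothetical ligand (the Kd and ΔH values are made up for illustration):

```python
import math

R = 8.314  # gas constant, J mol^-1 K^-1

def thermodynamic_signature(kd, delta_h, temp=298.15):
    """Split binding free energy into the thermodynamic signature terms.

    kd: dissociation constant (mol/L); delta_h: binding enthalpy (kJ/mol).
    Returns (dG, dH, -T*dS) in kJ/mol, using dG = RT ln Kd and
    -T*dS = dG - dH.
    """
    dg = R * temp * math.log(kd) / 1000.0   # kJ/mol; negative for tight binders
    return dg, delta_h, dg - delta_h

# Hypothetical ligand: Kd = 10 nM with an exothermic enthalpy of -40 kJ/mol.
dg, dh, minus_tds = thermodynamic_signature(1e-8, -40.0)
```

A strongly negative ΔH with a small entropic penalty is the enthalpy-driven profile the screen is designed to surface among hundreds of hits.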
Leishmaniasis: vaccine candidates and perspectives.
Singh, Bhawana; Sundar, Shyam
2012-06-01
Leishmania is a protozoan parasite and a causative agent of the various clinical forms of leishmaniasis. High cost, resistance and toxic side effects of traditional drugs entail identification and development of therapeutic alternatives. The sound understanding of parasite biology is key for identifying novel drug targets, that can induce the cell mediated immunity (mainly CD4+ and CD8+ IFN-gamma mediated responses) polarized towards a Th1 response. These aspects are important in designing a new vaccine along with the consideration of the candidates with respect to their ability to raise memory response in order to improve the vaccine performance. This review is an effort to identify molecules according to their homology with the host and their ability to be used as potent vaccine candidates.
A secured Cryptographic Hashing Algorithm
Mohanty, Rakesh; Bishi, Sukant kumar
2010-01-01
Cryptographic hash functions for calculating the message digest of a message have been in practical use as an effective measure to maintain message integrity for several decades. The message digest is effectively unique and irreversible, and is designed to avoid collisions for any given input string. The message digest calculated by such an algorithm is propagated in the communication medium along with the original message from the sender side, and on the receiver side the integrity of the message can be verified by recalculating the message digest of the received message and comparing the two digest values. In this paper we have designed and developed a new algorithm for calculating the message digest of any message and implemented it using a high-level programming language. An experimental analysis and comparison with the existing MD5 hashing algorithm, which is predominantly used as a cryptographic hashing tool, shows this algorithm to provide more randomness and greater strength against intrusion attacks. In this algorithm th...
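The digest-and-verify workflow the abstract describes can be illustrated with Python's standard hashlib. The paper's own algorithm is not shown here; SHA-256 stands in for it, since MD5 (the comparison baseline) is no longer considered collision resistant:

```python
import hashlib

def digest(message: bytes, algo: str = "sha256") -> str:
    """Sender side: compute the message digest used for integrity checking."""
    return hashlib.new(algo, message).hexdigest()

def verify(message: bytes, received_digest: str, algo: str = "sha256") -> bool:
    """Receiver side: recompute the digest and compare the two values."""
    return digest(message, algo) == received_digest

msg = b"transfer 100 units to account 42"
tag = digest(msg)                 # travels alongside the message
ok = verify(msg, tag)             # unmodified message passes
tampered = verify(msg + b"0", tag)  # any modification changes the digest
```

In a real protocol the digest would additionally be keyed (an HMAC) or signed, since a bare hash can be recomputed by anyone who can alter the message in transit.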
A heuristic path-estimating algorithm for large-scale real-time traffic information calculating
Institute of Scientific and Technical Information of China (English)
2008-01-01
As the original Global Positioning System (GPS) data in Floating Car Data have an accuracy problem, this paper proposes a heuristic path-estimating algorithm for large-scale real-time traffic information calculation. It uses heuristic search, imposes restrictions via geometric operations, and compares the vectors formed by consecutive vehicular GPS points against a special road-network model to build a set of candidate vehicular travel routes, finally choosing the optimal one according to weight. Experimental results indicate that the algorithm achieves considerable accuracy (over 92.7%) and computational speed (up to 8,000 GPS records per second) when handling GPS tracking data whose sampling interval is longer than 1 min, even under complex road-network conditions.
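The core geometric step of such map matching, scoring candidate road segments by their distance to a GPS fix, can be sketched as follows. The coordinates are toy values; a real implementation would also weight heading, speed and route continuity, as the paper's weighting does:

```python
def point_segment_distance(p, a, b):
    """Euclidean distance from point p to segment a-b (all (x, y) tuples)."""
    ax, ay = a; bx, by = b; px, py = p
    dx, dy = bx - ax, by - ay
    if dx == 0 and dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    # Project p onto the segment, clamped to its endpoints.
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def best_road(gps_point, roads):
    """Pick the candidate road segment closest to a noisy GPS fix."""
    return min(roads, key=lambda seg: point_segment_distance(gps_point, *seg))

roads = [((0, 0), (10, 0)),    # hypothetical east-west road
         ((0, 5), (10, 5))]    # parallel road 5 units north
match = best_road((3.0, 1.2), roads)
```

Restricting the candidate set geometrically before scoring is what keeps per-record cost low enough for the throughput the paper reports.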
Directory of Open Access Journals (Sweden)
Quandalle P.
2006-11-01
Full Text Available This article makes a comparative study of different iterative matrix-solving methods on a CRAY 1 computer. The selected methods have been described in the petroleum literature, but their (more or less vectorizable) structure makes them of renewed interest on computers such as the CRAY 1 or CYBER 205. The context dealt with here is the simulation of a three-phase, three-dimensional flow in a porous medium on a Black Oil model. We assume that the equations describing the flow are discretized by the finite-difference method using a five-point scheme [1]. The algorithms we investigate are derived from three methods: the block successive over-relaxation method, the Strong Implicit Procedure, and the Orthomin method. Examples will be used to bring out information on both their execution speed and the quality of their solutions.
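Of the three methods compared, successive over-relaxation is the easiest to sketch. The minimal point-SOR iteration below solves a toy diagonally dominant system; the paper's block variant and the CRAY-specific vectorisation are not reproduced:

```python
def sor_solve(A, b, omega=1.5, iters=200, tol=1e-10):
    """Point successive over-relaxation for Ax = b (A as a list of rows).

    omega in (0, 2) is the relaxation factor; omega = 1 reduces the
    iteration to Gauss-Seidel. Converges for symmetric positive-definite
    and strictly diagonally dominant systems.
    """
    n = len(b)
    x = [0.0] * n
    for _ in range(iters):
        delta = 0.0
        for i in range(n):
            s = sum(A[i][j] * x[j] for j in range(n) if j != i)
            new = (1.0 - omega) * x[i] + omega * (b[i] - s) / A[i][i]
            delta = max(delta, abs(new - x[i]))
            x[i] = new
        if delta < tol:      # stop when the update is below tolerance
            break
    return x

# Toy diagonally dominant (tridiagonal) system with exact solution (1, 1, 1).
A = [[4.0, 1.0, 0.0],
     [1.0, 4.0, 1.0],
     [0.0, 1.0, 4.0]]
b = [5.0, 6.0, 5.0]
x = sor_solve(A, b)
```

The sequential dependence of each sweep on the previous unknowns is exactly what makes point SOR awkward to vectorise, and why block and red-black orderings mattered on vector machines like the CRAY 1.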
Alstad, K. P.; Venterea, R. T.; Tan, S. M.; Saad, N.
2015-12-01
Understanding chamber-based soil flux model fitting and measurement error is key to scaling soil GHG emissions and resolving the primary uncertainties in climate and management feedbacks at regional scales. One key challenge is the selection of the correct empirical model applied to soil flux rate analysis in chamber-based experiments. Another is the characterization of error in the chamber measurement. Traditionally, most chamber-based N2O and CH4 measurements and model derivations have used discrete sampling for GC analysis and have been conducted using extended chamber deployment periods (DPs), which are expected to result in substantial alteration of the pre-deployment flux. The development of high-precision, high-frequency CRDS analyzers has advanced the science of soil flux analysis by facilitating much shorter DPs and, in theory, less chamber-induced suppression of the soil-atmosphere diffusion gradient. In addition, a new software tool developed by Picarro (the "Soil Flux Processor" or "SFP") links the power of Cavity Ring-Down Spectroscopy (CRDS) technology with an easy-to-use interface that features flexible sample-ID and run schemes and provides real-time monitoring of chamber accumulations and environmental conditions. The SFP also includes a sophisticated flux analysis interface that offers user-defined model selection, including three predominant fit algorithms as defaults, and an open-code interface for user-composed algorithms. The SFP is designed to couple with the Picarro G2508 system, an analyzer that simplifies soil flux studies by simultaneously measuring the primary GHG species: N2O, CH4, CO2, and H2O. In this study, Picarro partners with the ARS USDA Soil & Water Management Research Unit (R. Venterea, St. Paul) to examine the degree to which the high-precision, high-frequency Picarro analyzer allows for much shorter DPs in chamber-based flux analysis and, in theory, less chamber-induced suppression of the soil
Improved Tiled Bitmap Forensic Analysis Algorithm
Directory of Open Access Journals (Sweden)
C. D. Badgujar, G. N. Dhanokar
2012-12-01
Full Text Available In the computer-network world, the need for security and proper systems of control is obvious, as is the need to find the intruders who modify data. Nowadays, frauds in companies are committed not only by outsiders but also by insiders. An insider may perform illegal activity and try to hide it. Companies would like to be assured that such illegal activity, i.e. tampering, has not occurred, or that if it does occur, it is quickly discovered. Mechanisms now exist that detect tampering of a database through the use of cryptographically strong hash functions. This paper contains a survey that explores various approaches to database forensics through different methodologies using forensic algorithms and tools for investigations. Forensic analysis algorithms are used to determine who tampered with the data, when, and what data was tampered with. The Tiled Bitmap Algorithm introduces the notion of a candidate set (all possible locations of detected tamperings) and provides a complete characterization of the candidate set and its cardinality. The improved tiled bitmap algorithm overcomes the drawbacks of the existing tiled bitmap algorithm.
Fast Algorithm for N-2 Contingency Problem
Turitsyn, K S
2012-01-01
We present a novel selection algorithm for the N-2 contingency analysis problem. The algorithm is based on iterative bounding of line outage distribution factors and successive pruning of the set of candidate contingency pairs. The selection procedure is non-heuristic and is certified to identify all events that lead to thermal constraint violations in the DC approximation. The complexity of the algorithm is O(N^2), comparable to the complexity of the N-1 contingency problem. We validate and test the algorithm on the Polish grid network with around 3000 lines. For this test case, two iterations of the pruning procedure reduce the total number of candidate pairs by a factor of almost 1000, from 5 million line pairs to only 6128.
Institute of Scientific and Technical Information of China (English)
金恩淑; 汪有成; 王红艳; 陈喜峰; 王星棋
2013-01-01
A wide-area protection algorithm based on negative-sequence power direction comparison is proposed. Special associated intelligent electronic device (IED) zones containing buses and transmission lines are created according to the installation locations of the IEDs. When a fault occurs in the power network, the IEDs identify the fault position by combining the fault information collected and shared within the associated zones with the fault discrimination principle defined in the paper, and they clear the fault according to a predetermined action strategy. The algorithm can serve as a primary protection that acts quickly after a system fault while also providing a back-up protection function. Case-study results show that the proposed algorithm achieves the primary and back-up protection functions well, verifying its effectiveness.
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also...
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristics and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illustrations, 23 tables. New to this edition: Chapter 9
Institute of Scientific and Technical Information of China (English)
Armand BABOLI; Mohammadali Pirayesh NEGHAB; Rasoul HAJI
2008-01-01
This paper considers a two-level supply chain consisting of one warehouse and one retailer. In this model we determine the optimal ordering policy according to inventory and transportation costs. We assume that the demand rate at the retailer is known. Shortages are allowed neither at the retailer nor at the warehouse. We study this model in two cases, decentralized and centralized. In the decentralized case the retailer and the warehouse independently minimize their own costs, while in the centralized case the warehouse and the retailer are considered as a whole firm. We propose an algorithm to find economic order quantities for both the retailer and the warehouse that minimize the total system cost in the centralized case. The total system cost comprises the holding and ordering costs at the retailer and the warehouse as well as the transportation cost from the warehouse to the retailer. Applying this model to the pharmaceutical downstream supply chain of a public hospital yields significant savings. Through numerical examples, the costs are computed in MATLAB to compare the centralized case with the decentralized one and to propose a saving-sharing mechanism through quantity discounts.
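The centralized-versus-decentralized comparison can be sketched with the classic EOQ formulas. This is a toy illustration, not the paper's cost model: all numbers are hypothetical, and the assumption that the warehouse orders in an integer multiple n of the retailer's lot is a common two-echelon simplification introduced here, not taken from the abstract.

```python
import math

def eoq(demand_rate, order_cost, holding_cost):
    """Classic economic order quantity: sqrt(2*D*K/h)."""
    return math.sqrt(2 * demand_rate * order_cost / holding_cost)

def annual_cost(q, demand_rate, order_cost, holding_cost):
    """Ordering cost plus average holding cost for lot size q."""
    return demand_rate / q * order_cost + holding_cost * q / 2

# hypothetical data: retailer demand D, ordering costs K, holding costs h
D, K_r, h_r = 1200.0, 50.0, 4.0   # retailer
K_w, h_w = 200.0, 1.5             # warehouse

# decentralized: the retailer optimizes its own lot size alone
q_dec = eoq(D, K_r, h_r)

def system_cost(q, n):
    """Centralized objective: retailer cost plus warehouse cost
    when the warehouse orders in lots of n*q."""
    return annual_cost(q, D, K_r, h_r) + annual_cost(n * q, D, K_w, h_w)

# coarse grid search over the retailer lot q and the integer multiple n
best = min((system_cost(q / 10.0, n), q / 10.0, n)
           for q in range(10, 5000) for n in range(1, 6))
```

By construction the centralized optimum is never worse than keeping the retailer's stand-alone lot size, which is the source of the savings the abstract reports.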
An Efficient Hybrid Face Recognition Algorithm Using PCA and GABOR Wavelets
Directory of Open Access Journals (Sweden)
Hyunjong Cho
2014-04-01
Full Text Available With the rapid development of computers and the increasing mass use of high-tech mobile devices, vision-based face recognition has advanced significantly. However, it is hard to conclude that the performance of computers surpasses that of humans, as humans have generally exhibited better performance in challenging situations involving occlusion or variation. Motivated by the recognition method of humans, who utilize both holistic and local features, we present a computationally efficient hybrid face recognition method that employs dual-stage holistic and local feature-based recognition algorithms. In the first, coarse recognition stage, the proposed algorithm uses Principal Component Analysis (PCA) to identify a test image. Recognition ends at this stage if the confidence level of the result turns out to be reliable. Otherwise, the algorithm uses the result to filter out the top candidate images with a high degree of similarity and passes them to the fine recognition stage, where Gabor filters are employed. As is well known, recognizing a face image with Gabor filters is a computationally heavy task. The contribution of our work is a flexible dual-stage algorithm that enables fast, hybrid face recognition. Experimental tests were performed on the Extended Yale Face Database B to verify the effectiveness and validity of the research, and we obtained better recognition results under illumination variations, not only in computation time but also in recognition rate, in comparison to PCA- and Gabor wavelet-based recognition algorithms.
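The coarse PCA stage with a confidence test can be sketched as follows. The synthetic gallery, the 10-component basis, and the simple distance-margin confidence rule are assumptions of this sketch, not the paper's actual criterion; the Gabor fine stage is only indicated by the deferred candidate list.

```python
import numpy as np

rng = np.random.default_rng(0)

# hypothetical gallery: 5 identities x 4 images, 50-dim "pixel" vectors;
# each identity gets a distinct mean face plus per-image noise
gallery = rng.normal(size=(20, 50)) + np.repeat(rng.normal(size=(5, 50)) * 3,
                                                4, axis=0)
labels = np.repeat(np.arange(5), 4)

# PCA via SVD of the mean-centred gallery; keep 10 "eigenfaces"
mu = gallery.mean(axis=0)
_, _, vt = np.linalg.svd(gallery - mu, full_matrices=False)
basis = vt[:10]

def coarse_identify(img, margin=0.15, k=3):
    """Stage 1: nearest neighbour in PCA space. If the best match does
    not beat the runner-up by `margin`, defer the top-k identities to a
    finer (e.g. Gabor-based) stage instead of deciding here."""
    proj = (gallery - mu) @ basis.T
    d = np.linalg.norm((img - mu) @ basis.T - proj, axis=1)
    order = np.argsort(d)
    confident = d[order[0]] < (1 - margin) * d[order[1]]
    return (labels[order[0]], None) if confident else (None, labels[order[:k]])
```

The design point this illustrates is the one the abstract argues: the cheap PCA distance settles easy queries, so the expensive local-feature stage only ever sees a short candidate list.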
Automatic extraction of candidate nomenclature terms using the doublet method
Directory of Open Access Journals (Sweden)
Berman Jules J
2005-10-01
Full Text Available Abstract Background New terminology continuously enters the biomedical literature. How can curators identify new terms that can be added to existing nomenclatures? The most direct method, and one that has served well, involves reading the current literature. The scholarly curator adds new terms as they are encountered. Present-day scholars are severely challenged by the enormous volume of biomedical literature. Curators of medical nomenclatures need computational assistance if they hope to keep their terminologies current. The purpose of this paper is to describe a method of rapidly extracting new, candidate terms from huge volumes of biomedical text. The resulting lists of terms can be quickly reviewed by curators and added to nomenclatures, if appropriate. The candidate term extractor uses a variation of the previously described doublet coding method. The algorithm, which operates on virtually any nomenclature, derives from the observation that most terms within a knowledge domain are composed entirely of word combinations found in other terms from the same knowledge domain. Terms can be expressed as sequences of overlapping word doublets that have more specific meaning than the individual words that compose the term. The algorithm parses through text, finding contiguous sequences of word doublets that are known to occur somewhere in the reference nomenclature. When a sequence of matching word doublets is encountered, it is compared with whole terms already included in the nomenclature. If the doublet sequence is not already in the nomenclature, it is extracted as a candidate new term. Candidate new terms can be reviewed by a curator to determine if they should be added to the nomenclature. An implementation of the algorithm is demonstrated, using a corpus of published abstracts obtained through the National Library of Medicine's PubMed query service and using "The developmental lineage classification and taxonomy of neoplasms" as a reference
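The doublet extraction described above can be sketched directly. The toy nomenclature and sentence below are invented for illustration; the real implementation operates on PubMed-scale corpora and a full reference nomenclature.

```python
def doublets(words):
    """Overlapping word pairs in a term or text."""
    return [(words[i], words[i + 1]) for i in range(len(words) - 1)]

def extract_candidates(text, nomenclature):
    """Find maximal runs of contiguous doublets known to the nomenclature;
    any run that is not already a whole nomenclature term is a candidate."""
    known = {d for term in nomenclature for d in doublets(term.split())}
    terms = {tuple(t.split()) for t in nomenclature}
    words = text.lower().split()
    candidates, run = [], []
    for i in range(len(words) - 1):
        if (words[i], words[i + 1]) in known:
            run = run or [words[i]]
            run.append(words[i + 1])
        else:
            if len(run) > 1 and tuple(run) not in terms:
                candidates.append(" ".join(run))
            run = []
    if len(run) > 1 and tuple(run) not in terms:
        candidates.append(" ".join(run))
    return candidates

# toy reference nomenclature (lowercase terms)
nomen = ["squamous cell carcinoma", "basal cell carcinoma",
         "cell carcinoma in situ"]
text = "biopsy showed squamous cell carcinoma in situ of the skin"
```

Here every doublet in "squamous cell carcinoma in situ" occurs somewhere in the nomenclature, but the whole phrase does not, so it is emitted as a candidate new term for curator review.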
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n² variables). Included are subroutines for rearranging a matrix whose upper or lower triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
Directory of Open Access Journals (Sweden)
Cristina Anton
2012-01-01
Full Text Available OBJECTIVE: Differentiation between benign and malignant ovarian neoplasms is essential for creating a system for patient referrals. Therefore, the contributions of the tumor markers CA125 and human epididymis protein 4 (HE4), as well as the risk of ovarian malignancy algorithm (ROMA) and risk of malignancy index (RMI) values, were considered individually and in combination to evaluate their utility for establishing this type of patient referral system. METHODS: Patients who had been diagnosed with ovarian masses through imaging analyses (n = 128) were assessed for their expression of the tumor markers CA125 and HE4. The ROMA and RMI values were also determined. The sensitivity and specificity of each parameter were calculated using receiver operating characteristic curves according to the area under the curve (AUC) for each method. RESULTS: The sensitivities associated with the ability of CA125, HE4, ROMA, or RMI to distinguish between malignant and benign ovarian masses were 70.4%, 79.6%, 74.1%, and 63%, respectively. Among carcinomas, the sensitivities of CA125, HE4, ROMA (pre- and post-menopausal), and RMI were 93.5%, 87.1%, 80%, 95.2%, and 87.1%, respectively. The most accurate numerical values were obtained with RMI, although the four parameters were shown to be statistically equivalent. CONCLUSION: There were no differences in accuracy between CA125, HE4, ROMA, and RMI for differentiating between types of ovarian masses. RMI had the lowest sensitivity but was the most numerically accurate method. HE4 demonstrated the best overall sensitivity for the evaluation of malignant ovarian tumors and the differential diagnosis of endometriosis. All of the parameters demonstrated increased sensitivity when tumors with low malignancy potential were considered low-risk, which may be used as an acceptable assessment method for referring patients to reference centers.
Auletta, Gianluca; Ditommaso, Rocco; Iacovino, Chiara; Carlo Ponzo, Felice; Pina Limongelli, Maria
2016-04-01
Continuous monitoring based on vibrational identification methods is increasingly employed with the aim of evaluating the state of health of existing structures and infrastructures and of evaluating the performance of safety interventions over time. In case of earthquakes, data acquired by continuous monitoring systems can be used to localize and quantify possible damage to a monitored structure using appropriate algorithms based on the variations of structural parameters. Most damage identification methods are based on the variation of a few modal and/or non-modal parameters: the former are strictly related to the structural eigenfrequencies, equivalent viscous damping factors, and mode shapes; the latter are based on the variation of parameters related to the geometric characteristics of the monitored structure whose variations could be correlated to damage. In this work, results retrieved from the application of a curvature-evolution-based method and an interpolation-error-based method are compared. The first method evaluates the variation of the curvature (related to the fundamental mode of vibration) over time and compares the variations before, during, and after the earthquake. The Interpolation Method is based on the detection of localized reductions of smoothness in the Operational Deformed Shapes (ODSs) of the structure. A damage feature is defined in terms of the error related to the use of a spline function in interpolating the ODSs of the structure: statistically significant variations of the interpolation error between two successive inspections of the structure indicate the onset of damage. Both methods have been applied using numerical data retrieved from nonlinear FE models and experimental tests on scaled structures carried out on the shaking table of the University of Basilicata. Acknowledgements This study was partially funded by the Italian Civil Protection Department within the project DPC
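The Interpolation Method's damage feature can be sketched with a deliberately crude stand-in: straight-line interpolation from the two neighbouring sensors replaces the spline, and a localized jump in the error between two inspections flags damage. All data below are hypothetical, and the threshold rule is an assumption of this sketch.

```python
def interp_error(ods):
    """Error at each interior sensor when predicted by straight-line
    interpolation of its two neighbours (a crude stand-in for the
    spline used by the Interpolation Method)."""
    return [abs(ods[i] - 0.5 * (ods[i - 1] + ods[i + 1]))
            for i in range(1, len(ods) - 1)]

def damage_indices(ods_before, ods_after, factor=3.0):
    """Interior sensors whose interpolation error grew by `factor`
    between two inspections; a localized growth suggests damage there."""
    e0, e1 = interp_error(ods_before), interp_error(ods_after)
    return [i + 1 for i, (a, b) in enumerate(zip(e0, e1))
            if b > factor * max(a, 1e-12)]

# hypothetical operational deformed shape of a 7-storey frame,
# before and after an event; a loss of smoothness is injected at sensor 3
before = [0.0, 0.15, 0.30, 0.45, 0.60, 0.75, 0.90]
after  = [0.0, 0.15, 0.30, 0.52, 0.60, 0.75, 0.90]
```

Because the neighbour-average is a discrete second difference, this sketch also hints at why the curvature-based method in the same abstract responds to the same kind of localized stiffness change.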
Five modified boundary scan adaptive test generation algorithms
Institute of Scientific and Technical Information of China (English)
Niu Chunping; Ren Zheping; Yao Zongzhong
2006-01-01
To study the diagnosis of Wire-OR (W-O) interconnect faults on PCBs (Printed Circuit Boards), five modified boundary-scan adaptive interconnect test algorithms are put forward. These algorithms replace the equal-weight algorithm of the primary test with a global-diagnosis sequence algorithm, shortening the test time without changing the fault diagnostic capability. The five modified adaptive test algorithms are described, and their capability is compared with that of the original algorithm to prove their validity.
Béland, Laurent K; Stoller, Roger; Xu, Haixuan
2014-01-01
We present a comparison of the kinetic Activation-Relaxation Technique (k-ART) and the Self-Evolving Atomistic Kinetic Monte Carlo (SEAKMC), two off-lattice, on-the-fly kinetic Monte Carlo (KMC) techniques that were recently used to solve several materials science problems. We show that if the initial displacements are localized, the dimer method and the Activation-Relaxation Technique nouveau provide similar performance. We also show that k-ART and SEAKMC, although based on different approximations, are in agreement with each other, as demonstrated by the examples of 50 vacancies in a 1950-atom Fe box and of interstitial loops in 16000-atom boxes. Generally speaking, k-ART's treatment of geometry and flickers is more flexible and rigorous than SEAKMC's (it can handle amorphous systems, for example), while the latter's concept of active volumes permits a significant speedup of simulations for the systems under consideration and therefore allows investigations of processes requiring large systems that are not acc...
Directory of Open Access Journals (Sweden)
Robin Roj
2014-07-01
Full Text Available This paper presents three different search engines for the detection of CAD parts in large databases. The analysis of the contained information is performed by exporting the data stored in the structure trees of the CAD models. A preparation program generates one XML file for every model, which, in addition to the data of the structure tree, also records certain physical properties of each part. The first search engine specializes in the discovery of standard parts, like screws or washers. The second program uses user input as search parameters and can therefore perform personalized queries. The third compares a given reference part with all parts in the database and locates files that are identical or similar to the reference part. All approaches run automatically and have the analysis of the structure tree in common. Files constructed with CATIA V5 and search engines written in Python were used for the implementation. The paper also includes a short comparison of the advantages and disadvantages of each program, as well as a performance test.
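The third engine's reference-part comparison can be sketched over such per-model XML exports. The tags, feature names, masses, and Jaccard scoring below are illustrative assumptions, not the paper's actual export schema or similarity measure:

```python
import xml.etree.ElementTree as ET

# hypothetical export format: one <model> per CAD part holding its
# structure-tree features and a physical property
MODELS = {
    "screw_m6":  "<model><features><f>thread</f><f>hex_head</f></features>"
                 "<mass>8.1</mass></model>",
    "screw_m8":  "<model><features><f>thread</f><f>hex_head</f></features>"
                 "<mass>14.6</mass></model>",
    "washer_m6": "<model><features><f>annulus</f></features>"
                 "<mass>1.2</mass></model>",
}

def signature(xml_text):
    """Feature set and mass parsed from one model's XML export."""
    root = ET.fromstring(xml_text)
    feats = frozenset(f.text for f in root.find("features"))
    return feats, float(root.find("mass").text)

def similar_to(ref_name, mass_tol=0.5):
    """Rank all other models by shared structure-tree features with the
    reference (Jaccard score); equal feature sets within mass_tol are
    additionally flagged as 'identical'."""
    rf, rm = signature(MODELS[ref_name])
    out = []
    for name, xml_text in MODELS.items():
        if name == ref_name:
            continue
        f, m = signature(xml_text)
        score = len(rf & f) / len(rf | f)
        out.append((name, score, f == rf and abs(m - rm) <= mass_tol))
    return sorted(out, key=lambda t: -t[1])
```

The M8 screw shares the M6 screw's full feature set but differs in mass, so it ranks first as similar without being flagged identical, which is exactly the identical-versus-similar distinction the abstract draws.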
Directory of Open Access Journals (Sweden)
J. M. A. C. Souza
2011-03-01
Full Text Available Three methods for automatic detection of mesoscale coherent structures are applied to Sea Level Anomaly (SLA) fields in the South Atlantic. The first method is based on the wavelet packet decomposition of the SLA data, the second on the estimation of the Okubo-Weiss parameter and the third on a geometric criterion using the winding-angle approach. The results provide a comprehensive picture of the mesoscale eddies over the South Atlantic Ocean, emphasizing their main characteristics: amplitude, diameter, duration and propagation velocity. Five areas of particular eddy dynamics were selected: the Brazil Current, the Agulhas eddies propagation corridor, the Agulhas Current retroflexion, the Brazil-Malvinas confluence zone and the northern branch of the Antarctic Circumpolar Current (ACC). For these areas, mean propagation velocities and amplitudes were calculated. Two regions with long duration eddies were observed, corresponding to the propagation of Agulhas and ACC eddies. Through the comparison between the identification methods, their main advantages and shortcomings were detailed. The geometric criterion presents a better performance, mainly in terms of number of detections, duration of the eddies and propagation velocities. The results are particularly good for the Agulhas Rings, which presented the longest lifetimes of all South Atlantic eddies.
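The second detection method can be illustrated concretely: the Okubo-Weiss parameter W = s_n^2 + s_s^2 - w^2 (normal strain, shear strain, relative vorticity) computed with centred differences, where W < 0 marks vorticity-dominated eddy cores. A minimal sketch on a synthetic solid-body vortex (u = -y, v = x), with grid spacing and field values chosen arbitrarily:

```python
def okubo_weiss(u, v, dx=1.0):
    """Okubo-Weiss parameter W = s_n^2 + s_s^2 - w^2 on a regular grid
    using centred differences; W < 0 flags vorticity-dominated (eddy)
    cores. u, v are 2-D lists of velocity components; boundary cells
    are left at 0."""
    ny, nx = len(u), len(u[0])
    W = [[0.0] * nx for _ in range(ny)]
    for j in range(1, ny - 1):
        for i in range(1, nx - 1):
            ux = (u[j][i + 1] - u[j][i - 1]) / (2 * dx)
            uy = (u[j + 1][i] - u[j - 1][i]) / (2 * dx)
            vx = (v[j][i + 1] - v[j][i - 1]) / (2 * dx)
            vy = (v[j + 1][i] - v[j - 1][i]) / (2 * dx)
            sn = ux - vy          # normal strain
            ss = vx + uy          # shear strain
            w = vx - uy           # relative vorticity
            W[j][i] = sn * sn + ss * ss - w * w
    return W

# solid-body rotation about the grid centre: pure vorticity, no strain
n = 7
u = [[-(j - n // 2) * 1.0] * n for j in range(n)]
v = [[(i - n // 2) * 1.0 for i in range(n)] for _ in range(n)]
```

For pure rotation both strain terms vanish, so W is uniformly negative in the interior, which is the signature the eddy-detection method thresholds on (real studies use a threshold such as a fraction of the standard deviation of W rather than zero).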
Directory of Open Access Journals (Sweden)
Jyoti Kalyani
2006-01-01
Full Text Available Security of wired and wireless networks is among the most challenging issues in today's computer world. The aim of this study was to give a brief introduction to viruses and worms, their creators, and the characteristics of the algorithms viruses use. Both wired and wireless network viruses are elaborated. Viruses are also compared with the human immune system, and on the basis of this comparison four guidelines are given for detecting viruses so that more secure systems can be built. The study concludes that security remains the greatest challenge, so more secure models are required that automatically detect viruses and protect the system from their effects.
Institute of Scientific and Technical Information of China (English)
Pei Yusheng; Cai Tong; Gao Hua; Tan Dejiang; Zhang Yuchen; Zhang Guolai
2014-01-01
Background: The bacterial endotoxins test (BET) is a method used to detect or quantify endotoxins (lipopolysaccharide, LPS) and is widely used in the quality control of parenteral medicines/vaccines and clinical dialysis fluid. It is also used in the diagnosis of endotoxemia and in environmental air quality control. Although the BET has been adopted by most pharmacopoeias, result judgment algorithms (RJAs) for the test for interfering factors still differ between certain pharmacopoeias. We have evaluated RJAs of the test for interfering factors for the revision of the BET described in the Chinese Pharmacopoeia 2010 (CHP2010). Methods: Original data from 1,748 samples were judged by the RJAs of the Chinese Pharmacopoeia 2010, the Japanese Pharmacopoeia 2011 (JP2011), the European Pharmacopoeia 7.0 (EP7.0), the United States Pharmacopoeia 36 (USP36), and the Indian Pharmacopoeia 2010 (IP2010), respectively. A SAS software package was used for the statistical analysis. Results: The results using CHP2010 versus USP36, JP2011, EP7.0, and IP2010 showed no significant difference (P = 0.7740). Under CHP2010, 132 of the 1,748 samples (7.6%) required an additional step, whereas there was no such requirement under the other pharmacopoeias. The kappa value between the CHP2010 and EP7.0 RJAs was 0.6900 (0.6297-0.7504), indicating that CHP2010 and the other pharmacopoeias have good consistency. Conclusions: The results using CHP2010 and USP36, JP2011, EP7.0, and IP2010 have different characteristics. The CHP2010 method performs well in specificity, misdiagnosis rate, agreement rate, predictive value for suspicious results, and predictive value for passed results; it is at a disadvantage only in sensitivity compared with the other pharmacopoeias. We suggest that the Chinese Pharmacopoeia interference test be revised in accordance with the USP36, JP2011, EP7.0, and IP2010 judgment model.
Immunological Evaluation and Comparison of Different EV71 Vaccine Candidates
Directory of Open Access Journals (Sweden)
Ai-Hsiang Chou
2012-01-01
Full Text Available Enterovirus 71 (EV71) and coxsackievirus A16 (CVA16) are major causative agents of hand, foot, and mouth disease (HFMD), and EV71 is now recognized as an emerging neurotropic virus in Asia. Effective medications and/or prophylactic vaccines against HFMD are not available. The current results from mouse immunogenicity studies using in-house standardized RD cell virus neutralization assays indicate that (1) VP1 peptide (residues 211–225) formulated with Freund's adjuvant (CFA/IFA) elicited a low virus-neutralizing antibody response (1/32 titer); (2) recombinant virus-like particles produced from baculovirus formulated with CFA/IFA elicited a good virus neutralization titer (1/160); (3) of the individual recombinant EV71 antigens (VP1, VP2, and VP3) formulated with CFA/IFA, only VP1 elicited an antibody response, with a 1/128 virus neutralization titer; and (4) the formalin-inactivated EV71 formulated in alum elicited antibodies that cross-neutralized different EV71 genotypes (1/640) but failed to neutralize CVA16. In contrast, rabbit antisera cross-neutralized strongly against different genotypes of EV71 but weakly against CVA16, with average titers of 1/6400 and 1/32, respectively. The VP1 amino acid sequence dissimilarity between CVA16 and EV71 could partially explain why mouse antibodies failed to cross-neutralize CVA16. Therefore, the best formulation for producing a cost-effective HFMD vaccine is a combination of formalin-inactivated EV71 and CVA16 virions.
Institute of Scientific and Technical Information of China (English)
崔英敏; 章国冰
2014-01-01
To address the heavy computation and prolonged sensing time of wideband spectrum sensing in cognitive radio networks, a rate-based, low-complexity wideband spectrum sensing algorithm is proposed. Building on wideband spectrum sensing, the algorithm sets selection criteria and compares the expected transmission rates of the channels to determine which channels need to be sensed, thereby reducing the channel sensing time and the computational load of the secondary-user system. Simulation analysis and comparison show that the algorithm can effectively balance the trade-off between maximizing secondary-user system throughput and limiting interference to the primary-user system.
Review on Sorting Algorithms A Comparative Study
Directory of Open Access Journals (Sweden)
Khalid Suleiman Al-Kharabsheh
2013-09-01
Full Text Available There are many popular problems in different practical fields of computer science, such as database applications, networks, and artificial intelligence. One of these basic operations is sorting, and the sorting problem has attracted a great deal of research. Many sorting algorithms have been developed to enhance performance in terms of computational complexity, and several factors must be taken into consideration: time complexity, stability, and memory space. The rapid growth of information in our world drives the continued development of sorting algorithms. A stable sorting algorithm maintains the relative order of records with equal keys. This paper compares the Grouping Comparison Sort (GCS) with conventional algorithms such as Selection sort, Quick sort, Insertion sort, Merge sort, and Bubble sort with respect to execution time, to show how much this algorithm reduces execution time.
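GCS itself is not reproduced here; as a baseline illustration of the kind of execution-time comparison the paper performs, two of the conventional sorts can be timed against Python's built-in sort on the same random input (data sizes and seed are arbitrary choices of this sketch):

```python
import random
import time

def insertion_sort(a):
    """Stable O(n^2) insertion sort on a copy of the input."""
    a = list(a)
    for i in range(1, len(a)):
        key, j = a[i], i - 1
        while j >= 0 and a[j] > key:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = key
    return a

def bubble_sort(a):
    """Stable bubble sort with early exit on an already-sorted pass."""
    a = list(a)
    for n in range(len(a) - 1, 0, -1):
        swapped = False
        for i in range(n):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
                swapped = True
        if not swapped:
            break
    return a

def timed(sort, data):
    """Wall-clock execution time of `sort` on `data`, plus its output."""
    t0 = time.perf_counter()
    out = sort(data)
    return time.perf_counter() - t0, out

data = random.Random(1).choices(range(10000), k=2000)
results = {s.__name__: timed(s, data)
           for s in (insertion_sort, bubble_sort, sorted)}
```

Comparing identical inputs and checking that every algorithm yields the same sorted output, as done here, is the minimum fairness requirement for the execution-time tables such papers report.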
Planetary transit candidates in COROT-IRa01 field
Carpano, S; Alonso, R; Barge, P; Aigrain, S; Almenara, J -M; Bordé, P; Bouchy, F; Carone, L; Deeg, H J; De la Reza, R; Deleuil, M; Dvorak, R; Erikson, A; Fressin, F; Fridlund, M; Gondoin, P; Guillot, T; Hatzes, A; Jorda, L; Lammer, H; Léger, A; Llebaria, A; Magain, P; Moutou, C; Ofir, A; Ollivier, M; Pacheco, E J; Pátzold, M; Pont, F; Queloz, D; Rauer, H; Régulo, C; Renner, S; Rouan, D; Samuel, B; Schneider, J; Wuchterl, G
2009-01-01
Context: CoRoT is a pioneering space mission devoted to the analysis of stellar variability and the photometric detection of extrasolar planets. Aims: We present the list of planetary transit candidates detected in the first field observed by CoRoT, IRa01, the initial run toward the Galactic anticenter, which lasted for 60 days. Methods: We analysed 3898 sources in the coloured bands and 5974 in the monochromatic band. Instrumental noise and stellar variability were taken into account using detrending tools before applying various transit search algorithms. Results: Fifty sources were classified as planetary transit candidates and the most reliable 40 detections were declared targets for follow-up ground-based observations. Two of these targets have so far been confirmed as planets, COROT-1b and COROT-4b, for which a complete characterization and specific studies were performed.
Congenital diaphragmatic hernia candidate genes derived from embryonic transcriptomes
DEFF Research Database (Denmark)
Russell, Meaghan K; Longoni, Mauro; Wells, Julie;
2012-01-01
Expression profiling of developing embryonic diaphragms would help identify genes likely to be associated with diaphragm defects. We generated a time series of whole-transcriptome expression profiles from laser-captured embryonic mouse diaphragms at embryonic day (E)11.5 and E12.5, when experimental perturbations lead to CDH phenotypes, and E16.5, when the diaphragm is fully formed. Gene sets defining biologically relevant pathways and temporal expression trends were identified by using a series of bioinformatic algorithms. These developmental sets were then compared with a manually curated list of genes previously shown to cause diaphragm defects in humans and in mouse models. Our integrative filtering strategy identified 27 candidates for CDH. We examined the diaphragms of knockout mice for one of the candidate genes, pre-B-cell leukemia transcription factor 1 (Pbx1), and identified a range of previously...
A thermodynamic approach to the affinity optimization of drug candidates.
Freire, Ernesto
2009-11-01
High throughput screening and other techniques commonly used to identify lead candidates for drug development usually yield compounds with binding affinities to their intended targets in the mid-micromolar range. The affinity of these molecules needs to be improved by several orders of magnitude before they become viable drug candidates. Traditionally, this task has been accomplished by establishing structure activity relationships to guide chemical modifications and improve the binding affinity of the compounds. As the binding affinity is a function of two quantities, the binding enthalpy and the binding entropy, it is evident that a more efficient optimization would be accomplished if both quantities were considered and improved simultaneously. Here, an optimization algorithm based upon enthalpic and entropic information generated by Isothermal Titration Calorimetry is presented.
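The thermodynamic identity underlying the approach, dG = dH - T*dS = RT*ln(Kd), makes the point concrete: improving the binding enthalpy at constant entropy lowers the dissociation constant by orders of magnitude. A small sketch with hypothetical numbers (the compound values are invented for illustration):

```python
import math

R = 1.987e-3   # gas constant, kcal/(mol*K)
T = 298.15     # temperature, K

def kd(dH, dS):
    """Dissociation constant (M) from binding enthalpy dH (kcal/mol)
    and binding entropy dS (kcal/(mol*K)), via dG = dH - T*dS = RT*ln(Kd)."""
    dG = dH - T * dS
    return math.exp(dG / (R * T))

# hypothetical lead compound: mid-micromolar affinity
lead = kd(-5.0, 0.007)
# enthalpically optimized analogue: more favourable dH at equal dS
optimized = kd(-9.0, 0.007)
```

Because Kd depends exponentially on dG, a 4 kcal/mol enthalpic gain at constant entropy moves the hypothetical compound from the micromolar range into the low-nanomolar range, the several-orders-of-magnitude improvement the abstract describes.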
11 CFR 100.154 - Candidate debates.
2010-01-01
Expenditures § 100.154 Candidate debates. Funds used to defray costs incurred in staging candidate debates in accordance with the provisions of 11 CFR 110.13 and 114.4(f) are not expenditures.
11 CFR 100.92 - Candidate debates.
2010-01-01
Contributions § 100.92 Candidate debates. Funds provided to defray costs incurred in staging candidate debates in accordance with the provisions of 11 CFR 110.13 and 114.4(f) are not contributions.
Excursion-Set-Mediated Genetic Algorithm
Noever, David; Baskaran, Subbiah
1995-01-01
Excursion-set-mediated genetic algorithm (ESMGA) is embodiment of method of searching for and optimizing computerized mathematical models. Incorporates powerful search and optimization techniques based on concepts analogous to natural selection and laws of genetics. In comparison with other genetic algorithms, this one achieves stronger condition of implicit parallelism. Includes three stages of operations in each cycle, analogous to biological generations.
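ESMGA itself is not specified in this abstract; as a generic illustration of the selection/crossover/mutation cycle that any genetic algorithm iterates per generation, here is a toy real-coded GA (all operators, parameters, and the test function are assumptions of this sketch, not ESMGA's excursion-set mechanism):

```python
import random

def ga_minimize(f, bounds, pop=40, gens=80, seed=0):
    """Toy real-coded genetic algorithm: tournament selection, blend
    crossover, gaussian mutation, and simple elitism. Returns the best
    solution found and its objective value."""
    rnd = random.Random(seed)
    lo, hi = bounds
    xs = [rnd.uniform(lo, hi) for _ in range(pop)]
    best = min(xs, key=f)
    for _ in range(gens):
        nxt = [best]                                   # elitism
        while len(nxt) < pop:
            a = min(rnd.sample(xs, 3), key=f)          # tournament selection
            b = min(rnd.sample(xs, 3), key=f)
            child = a + rnd.random() * (b - a)         # blend crossover
            child += rnd.gauss(0.0, 0.02 * (hi - lo))  # gaussian mutation
            nxt.append(min(max(child, lo), hi))
        xs = nxt
        best = min(xs + [best], key=f)
    return best, f(best)
```

Each loop iteration corresponds to one biological generation, the per-cycle unit of operations the abstract refers to.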
Flow enforcement algorithms for ATM networks
DEFF Research Database (Denmark)
Dittmann, Lars; Jacobsen, Søren B.; Moth, Klaus
1991-01-01
Four measurement algorithms for flow enforcement in asynchronous transfer mode (ATM) networks are presented: the leaky bucket, the rectangular sliding window, the triangular sliding window, and the exponentially weighted moving average. A comparison, based partly on teletraffic … Implementations are proposed on the block diagram level, and dimensioning examples are carried out for flow enforcement of a renewal-type connection using the four algorithms. The corresponding hardware demands are estimated and compared…
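Of the four policers, the leaky bucket is the simplest to sketch. A minimal continuous-leak variant (parameter names are ours, not the paper's):

```python
def leaky_bucket(arrival_times, rate, depth):
    """Flag each ATM cell as conforming (True) or violating (False).

    The bucket drains `rate` units per time unit; each cell adds one
    unit, and a cell that would overflow `depth` is flow-enforced."""
    level, last_t = 0.0, 0.0
    verdicts = []
    for t in arrival_times:
        level = max(0.0, level - (t - last_t) * rate)
        last_t = t
        if level + 1.0 <= depth:
            level += 1.0
            verdicts.append(True)
        else:
            verdicts.append(False)  # dropped or tagged by the enforcer
    return verdicts
```

A burst arriving faster than the leak rate overflows the bucket and gets flagged, while the same number of cells spaced at the leak rate all conform.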
Wind Turbines Support Techniques during Frequency Drops — Energy Utilization Comparison
Directory of Open Access Journals (Sweden)
Ayman B. Attya
2014-08-01
Full Text Available The supportive role of wind turbines during frequency drops is still not sufficiently clear, although many algorithms have been proposed. Most of the offered techniques make the wind turbine deviate from optimum power generation into special operation modes in order to guarantee the availability of reasonable power support when the system suffers frequency deviations. This paper summarizes the most dominant support algorithms and derives wind turbine power curves for each one. It also compares them in terms of wasted energy with respect to optimum power generation. The authors confirm the advantage of a frequency support algorithm they previously presented, as it achieved lower amounts of wasted energy. The analysis is performed for two locations that are promising candidates for hosting wind farms in Egypt, and two different types of wind turbines from two different manufacturers are integrated. Matlab and Simulink are the simulation environments used.
Planetary Candidates Observed by Kepler VI: Planet Sample from Q1-Q16 (47 Months)
Mullally, F; Thompson, Susan E; Rowe, Jason; Burke, Christopher; Latham, David W; Batalha, Natalie M; Bryson, Stephen T; Christiansen, Jessie; Henze, Christopher E; Ofir, Aviv; Quarles, Billy; Shporer, Avi; Van Eylen, Vincent; Van Laerhoven, Christa; Shah, Yash; Wolfgang, Angie; Chaplin, W J; Xie, Ji-Wei; Akeson, Rachel; Argabright, Vic; Bachtell, Eric; Borucki, Thomas Barclay William J; Caldwell, Douglas A; Campbell, Jennifer R; Catanzarite, Joseph H; Cochran, William D; Duren, Riley M; Fleming, Scott W; Fraquelli, Dorothy; Girouard, Forrest R; Haas, Michael R; Hełminiak, Krzysztof G; Howell, Steve B; Huber, Daniel; Larson, Kipp; Gautier, Thomas N; Jenkins, Jon; Li, Jie; Lissauer, Jack J; McArthur, Scot; Miller, Chris; Morris, Robert L; Patil-Sabale, Anima; Plavchan, Peter; Putnam, Dustin; Quintana, Elisa V; Ramirez, Solange; Aguirre, V Silva; Seader, Shawn; Smith, Jeffrey C; Steffen, Jason H; Stewart, Chris; Stober, Jeremy; Still, Martin; Tenenbaum, Peter; Troeltzsch, John; Twicken, Joseph D; Zamudio, Khadeejah A
2015-01-01
We present the sixth catalog of Kepler candidate planets based on nearly 4 years of high-precision photometry. This catalog builds on the legacy of previous catalogs released by the Kepler project and includes 1493 new Kepler Objects of Interest (KOIs), of which 554 are planet candidates, and 131 of these candidates have best-fit radii … 50 days to provide a consistently vetted sample that can be used to improve planet occurrence rate calculations. We discuss the performance of our planet detection algorithms and the consistency of our vetting products. The full catalog is publicly available at the NASA Exoplanet Archive.
Candidate worldviews for design theory
DEFF Research Database (Denmark)
Galle, Per
2008-01-01
Our growing body of design theory risks being infected by more inconsistency than is justifiable by genuine disagreement among design theorists. Taking my cue from C. S. Peirce, who argued that theory inevitably rests on basic metaphysical assumptions that theorists ought to be critically aware of, I demonstrate how ‘insidious inconsistency’ may infect design theory if we ignore his admonition. As a possible remedy, I propose a method by which the philosophy of design may develop sound metaphysical foundations (‘worldviews’) for design theory – and generate philosophical insights into design at the same time. Examples are given of how the first steps of the method may be carried out and a number of candidate worldviews are outlined and briefly discussed. In its own way, each worldview answers certain fundamental questions about the nature of design. These include the ontological question of what…
Hybrid ant colony algorithm for traveling salesman problem
Institute of Scientific and Technical Information of China (English)
Anonymous
2003-01-01
A hybrid approach based on ant colony algorithm for the traveling salesman problem is proposed, which is an improved algorithm characterized by adding a local search mechanism, a cross-removing strategy and candidate lists. Experimental results show that it is competitive in terms of solution quality and computation time.
Ravari, Alireza Norouzzadeh; Taghirad, Hamid D
2014-10-01
In this paper the problem of loop closing from depth or camera image information in an unknown environment is investigated. A sparse model is constructed from a parametric dictionary for every range or camera image as mobile robot observations. In contrast to high-dimensional feature-based representations, in this model, the dimension of the sensor measurements' representations is reduced. Considering the loop closure detection as a clustering problem in high-dimensional space, little attention has been paid to the curse of dimensionality in the existing state-of-the-art algorithms. In this paper, a representation is developed from a sparse model of images, with a lower dimension than original sensor observations. Exploiting the algorithmic information theory, the representation is developed such that it has the geometrically transformation invariant property in the sense of Kolmogorov complexity. A universal normalized metric is used for comparison of complexity based representations of image models. Finally, a distinctive property of normalized compression distance is exploited for detecting similar places and rejecting incorrect loop closure candidates. Experimental results show efficiency and accuracy of the proposed method in comparison to the state-of-the-art algorithms and some recently proposed methods.
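The normalized compression distance used above for rejecting loop-closure candidates can be approximated with any off-the-shelf compressor. A minimal sketch with zlib standing in for the Kolmogorov-complexity estimate (the mock "observations" are illustrative):

```python
import zlib

def C(x):
    """Compressed length as a computable stand-in for Kolmogorov complexity."""
    return len(zlib.compress(x, 9))

def ncd(x, y):
    """Normalized compression distance: near 0 for similar byte strings,
    near 1 for unrelated ones."""
    cx, cy = C(x), C(y)
    return (C(x + y) - min(cx, cy)) / max(cx, cy)

scan_a = b"range: 1.0 1.1 1.2 9.9 9.8 1.2 " * 30  # mock observation
scan_b = b"range: 1.0 1.1 1.3 9.9 9.7 1.2 " * 30  # similar place
scan_c = bytes(range(256)) * 4                     # unrelated data
```

A loop-closure candidate would be rejected when its NCD to the current observation is large, since similar observations compress well together.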
A New Algorithm for Mining Frequent Pattern
Institute of Scientific and Technical Information of China (English)
李力; 靳蕃
2002-01-01
Mining frequent patterns in transaction databases, time-series databases, and many other kinds of databases has been studied extensively in data mining research. Most previous studies adopt an Apriori-like candidate set generation-and-test approach. However, candidate set generation is very costly. Han J. proposed a novel algorithm, FP-growth, that can generate frequent patterns without a candidate set. Based on an analysis of the FP-growth algorithm, this paper proposes the concept of an equivalent FP-tree and an improved algorithm, denoted FP-growth*, which is much faster and easy to implement. FP-growth* adopts a modified structure of the FP-tree and header table, and only generates a header table in each recursive operation, projecting the tree onto the original FP-tree. The two algorithms produce the same frequent pattern set on the same transaction database, but the performance study shows that the improved algorithm, FP-growth*, is at least twice as fast as FP-growth.
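The candidate generation-and-test cost that FP-growth and FP-growth* avoid is visible in a minimal Apriori-style baseline (illustrative only; this sketches the approach the paper improves on, not the paper's algorithm):

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Apriori baseline: repeatedly generate candidate k-itemsets from
    frequent (k-1)-itemsets, then scan the database to test support."""
    items = sorted({i for t in transactions for i in t})
    frequent = {}
    candidates = [frozenset([i]) for i in items]
    k = 1
    while candidates:
        # test step: one database scan per level, one count per candidate
        counts = {c: sum(1 for t in transactions if c <= t) for c in candidates}
        survivors = {c: n for c, n in counts.items() if n >= min_support}
        frequent.update(survivors)
        # generation step: join survivors into (k+1)-item candidates
        k += 1
        candidates = list({a | b for a, b in combinations(survivors, 2)
                           if len(a | b) == k})
    return frequent

transactions = [{'a', 'b'}, {'a', 'c'}, {'a', 'b', 'c'}, {'b', 'c'}]
freq = apriori(transactions, min_support=2)
```

Every level requires materializing and counting a candidate set; FP-growth's tree projection removes exactly this step.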
Casanova, Henri; Robert, Yves
2008-01-01
""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi
Microgenetic optimization algorithm for optimal wavefront shaping
Anderson, Benjamin R; Gunawidjaja, Ray; Eilers, Hergen
2015-01-01
One of the main limitations of utilizing optimal wavefront shaping in imaging and authentication applications is the slow speed of the optimization algorithms currently being used. To address this problem we develop a micro-genetic optimization algorithm (µGA) for optimal wavefront shaping. We test the abilities of the µGA and make comparisons to previous algorithms (iterative and simple-genetic) by using each algorithm to optimize transmission through an opaque medium. From our experiments we find that the µGA is faster than both the iterative and simple-genetic algorithms and that both genetic algorithms are more resistant to noise and sample decoherence than the iterative algorithm.
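The defining µGA ingredients are a tiny population, elitism, no mutation, and a restart around the elite whenever the population converges. A hedged sketch under those assumptions (a OneMax bit-counting objective stands in for the transmission measurement; all parameter values are illustrative):

```python
import random

def micro_ga(fitness, n_bits, pop_size=5, generations=200, seed=1):
    """Micro-genetic algorithm: 5-member population, elitism, uniform
    crossover, no mutation; re-seed around the elite on convergence."""
    rng = random.Random(seed)
    new = lambda: [rng.randint(0, 1) for _ in range(n_bits)]
    pop = [new() for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        if all(g == pop[0] for g in pop):  # converged: restart
            pop = [best] + [new() for _ in range(pop_size - 1)]
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) >= fitness(b) else b
        children = [[x if rng.random() < 0.5 else y
                     for x, y in zip(tournament(), tournament())]
                    for _ in range(pop_size - 1)]
        pop = [best] + children            # elitism: keep the best
        best = max(pop, key=fitness)
    return best

best = micro_ga(sum, n_bits=16)  # maximize the number of 1 bits
```

The restart mechanism replaces mutation as the source of diversity, which is what lets the population stay small and the per-generation cost (here, the number of fitness measurements) stay low.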
Institute of Scientific and Technical Information of China (English)
陈昌敏; 谢维成; 范颂颂
2011-01-01
Aiming at the drawbacks of slow convergence and a tendency to fall into local optima of the basic ant colony algorithm in logistics vehicle routing optimization, this paper adopts an adaptive ant colony algorithm and the max-min ant colony algorithm to overcome these shortcomings. The two algorithms are analyzed and compared, and vehicle routing optimization is simulated in the Matlab environment using both. Experimental results show that the max-min ant colony algorithm outperforms the adaptive ant colony algorithm in convergence speed and shortest-path search, so the max-min ant colony algorithm is superior for logistics vehicle routing optimization.
A Hybrid Intelligent Algorithm for Optimal Birandom Portfolio Selection Problems
Directory of Open Access Journals (Sweden)
Qi Li
2014-01-01
Full Text Available Birandom portfolio selection problems have been well developed and widely applied in recent years. To solve these problems better, this paper designs a new hybrid intelligent algorithm which combines the improved LGMS-FOA algorithm with birandom simulation. Since all the existing algorithms solving these problems are based on genetic algorithm and birandom simulation, some comparisons between the new hybrid intelligent algorithm and the existing algorithms are given in terms of numerical experiments, which demonstrate that the new hybrid intelligent algorithm is more effective and precise when the numbers of the objective function computations are the same.
An Improved Ant Colony Routing Algorithm for WSNs
Directory of Open Access Journals (Sweden)
Tan Zhi
2015-01-01
Full Text Available The ant colony algorithm is a classical routing algorithm. It is used in a variety of applications because it is economical and self-organizing. However, the routing algorithm expends huge amounts of energy at the beginning. In this paper, based on the idea of Dijkstra's algorithm, an improved ant colony algorithm is proposed to balance the energy consumption of networks. Simulation and comparison with the basic ant colony algorithm show that the improved algorithm can effectively balance energy consumption and extend the lifetime of WSNs.
Fusion of Image Segmentation Algorithms using Consensus Clustering
Ozay, Mete; Vural, Fatos T. Yarman; Kulkarni, Sanjeev R.; Poor, H. Vincent
2015-01-01
A new segmentation fusion method is proposed that ensembles the output of several segmentation algorithms applied on a remotely sensed image. The candidate segmentation sets are processed to achieve a consensus segmentation using a stochastic optimization algorithm based on the Filtered Stochastic BOEM (Best One Element Move) method. For this purpose, Filtered Stochastic BOEM is reformulated as a segmentation fusion problem by designing a new distance learning approach. The proposed algorithm...
AN INCREMENTAL UPDATING ALGORITHM FOR MINING ASSOCIATION RULES
Institute of Scientific and Technical Information of China (English)
Xu Baowen; Yi Tong; Wu Fangjun; Chen Zhenqiang
2002-01-01
In this letter, on the basis of the Frequent Pattern (FP) tree, a support function to update the FP-tree is introduced, and an Incremental FP (IFP) algorithm for mining association rules is proposed. The IFP algorithm considers not only adding new data to the database but also removing old data from it. Furthermore, it can reduce five cases to three. The algorithm proposed in this letter avoids generating large numbers of candidate items and is highly efficient.
Indicators of Psychical Stability Among Junior and Youth Track and Field National Team Candidates
Directory of Open Access Journals (Sweden)
Romualdas K. Malinauskas
2014-03-01
Full Text Available This article deals with questions of psychical stability among junior and youth track and field national team candidates. Two methods were used to carry out the survey: the Competitive State Anxiety Inventory developed by Martens et al. and the Athletes' Psychical Stability Questionnaire developed by Milman. The random sample consisted of 81 junior and youth track and field national team candidates: 39 youth team and 42 junior national team candidates. It was determined that, in comparison with the junior candidates, the anxiety of the youth candidates is lower (p < 0.05). The psychical stability of the youth candidates was found to be significantly higher than that of the junior candidates: the youth candidates scored higher (p < 0.05) in the following components of psychical stability: precompetitive emotional stability and self-regulation.
Ground state occupation probabilities of neutrinoless double beta decay candidates
Kotila, Jenni; Barea, Jose
2015-10-01
A better understanding of nuclear structure can offer important constraints on the calculation of 0νββ nuclear matrix elements. A simple way to consider differences between initial and final states of neutrinoless double beta decay candidates is to look at the ground state occupation probabilities of the initial and final nuclei. As is well known, the microscopic interacting boson model (IBM-2) has been found to be very useful in the description of detailed aspects of nuclear structure. In this talk I present results for ground state occupation probabilities obtained using IBM-2 for several interesting candidates for 0νββ decay. A comparison with recent experimental results is also made. This work was supported by the Academy of Finland (Project 266437) and the Chilean Ministry of Education (Fondecyt Grant No. 1150564).
Fundamental Properties of Kepler Planet-candidate Host Stars using Asteroseismology
Huber, Daniel; Chaplin, William J.; Christensen-Dalsgaard, Jørgen; Gilliland, Ronald L.; Kjeldsen, Hans; Buchhave, Lars A.; Fischer, Debra A.; Lissauer, Jack J.; Rowe, Jason F.; Sanchis-Ojeda, Roberto; Basu, Sarbani; Handberg, Rasmus; Hekker, Saskia; Howard, Andrew W.; Isaacson, Howard
2013-01-01
We have used asteroseismology to determine fundamental properties for 66 Kepler planet-candidate host stars, with typical uncertainties of 3% and 7% in radius and mass, respectively. The results include new asteroseismic solutions for four host stars with confirmed planets (Kepler-4, Kepler-14, Kepler-23 and Kepler-25) and increase the total number of Kepler host stars with asteroseismic solutions to 77. A comparison with stellar properties in the planet-candidate catalog by Batalha et al. …
Cardiac evaluation of liver transplant candidates
Mandell, Mercedes Susan; Lindenfeld, JoAnn; Tsou, Mei-Yung; Zimmerman, Michael
2008-01-01
Physicians previously thought that heart disease was rare in patients with end stage liver disease. However, recent evidence shows that the prevalence of ischemic heart disease and cardiomyopathy is increased in transplant candidates compared to most other surgical candidates. Investigators estimate that up to 26% of all liver transplant candidates have at least one critical coronary artery stenosis and that at least half of these patients will die perioperatively of cardiac complications. …
Newly identified YSO candidates towards LDN 1188
Marton , G.; Verebélyi, E.; Kiss, Cs.; Smidla, J.
2013-11-01
We present an analysis of young stellar object (YSO) candidates towards the LDN 1188 molecular cloud. The YSO candidates were selected from the WISE all-sky catalogue, based on a statistical method. We found 601 candidates in the region, and classified them as Class I, Flat, and Class II YSOs. Groups were identified and described with the Minimal Spanning Tree (MST) method. Previously identified molecular cores show evidence of ongoing star formation at different stages throughout the cloud complex.
An Improved Heuristic Algorithm of Attribute Reduction in Rough Set
Institute of Scientific and Technical Information of China (English)
ShunxiangWu; MaoqingLi; WentingHuang; SifengLiu
2004-01-01
This paper introduces the background of rough set theory, then proposes a new algorithm for finding an optimal reduction and compares the original algorithm with the improved one in experiments on the nine standard data sets in the UL database to demonstrate the validity of the improved heuristic algorithm.
A Framing Link Based Tabu Search Algorithm for Large-Scale Multidepot Vehicle Routing Problems
Directory of Open Access Journals (Sweden)
Xuhao Zhang
2014-01-01
Full Text Available A framing link (FL) based tabu search algorithm is proposed in this paper for the large-scale multidepot vehicle routing problem (LSMDVRP). Framing links are generated during continuous optimization of current solutions and then taken as skeletons so as to improve the optimum-seeking ability, speed up the optimization process, and obtain better results. Based on the comparison between pre- and post-mutation routes in the current solution, different parts are extracted. In the current optimization period, links involved in the optimal solution are regarded as candidates for the FL base. Multiple optimization periods exist in the whole algorithm, and there are several potential FLs in each period. If the update condition is satisfied, the FL base is updated, new FLs are added into the current route, and the next period starts. By adjusting the borderline of the multidepot sharing area with dynamic parameters, the authors define candidate selection principles for three kinds of customer connections, respectively. Link split and the roulette approach are employed to choose FLs. Eighteen LSMDVRP instances in three groups are studied, and new optimal solution values are obtained for nine of them, with higher computation speed and reliability.
Hospital Case Cost Estimates Modelling - Algorithm Comparison
Andru, Peter
2008-01-01
Ontario (Canada) Health System stakeholders support the idea and necessity of an integrated source of data that would include both clinical (e.g. diagnosis, intervention, length of stay, case mix group) and financial (e.g. cost per weighted case, cost per diem) characteristics of Ontario healthcare system activities at the patient-specific level. At present, actual patient-level case costs in explicit form are not available in the financial databases for all hospitals. The goal of this research effort is to develop financial models that will assign each clinical case in the patient-specific data warehouse a dollar value representing the cost incurred by the Ontario health care facility which treated the patient. Five mathematical models have been developed and verified using a real dataset. All models can be classified into two groups based on their underlying method: 1. models based on relative intensity weights of the cases, and 2. models based on cost per diem.
Characterization for Fusion Candidate Vanadium Alloys
Institute of Scientific and Technical Information of China (English)
T. Muroga; T. Nagasaka; J. M. Chen; Z. Y. Xu; Q. Y. Huang; Y. C. Wu
2004-01-01
This paper summarizes recent achievements in the characterization of candidate vanadium alloys for fusion, obtained in the framework of the Japan-China Core University Program. The National Institute for Fusion Science (NIFS) has a program of fabricating high-purity V-4Cr-4Ti alloys. The resulting products (NIFS-HEAT-1 and -2) were characterized by various research groups around the world, including the Chinese partners. The South Western Institute of Physics (SWIP) fabricated a new V-4Cr-4Ti alloy (SWIP-Heat) and carried out a comparative evaluation of hydrogen embrittlement of the NIFS-HEATs and SWIP-Heat. Tensile tests of hydrogen-doped alloys showed that the NIFS-HEAT maintained ductility up to relatively high hydrogen levels. Comparison of the data with those of previous studies suggested that the reduced oxygen level in the NIFS-HEATs is responsible for the increased resistance to hydrogen embrittlement. Based on the chemical analysis data of the NIFS-HEATs and SWIP-Heats, neutron-induced activation was analyzed at the Institute of Plasma Physics (IPP-CAS) as a function of cooling time after use in the fusion first wall. The results showed that the low level of Co dominates the activity up to 50 years, followed by a domination of Nb, or Nb and Al, in the respective alloys. It was suggested that reduction of Co and Nb, both of which are thought to have been introduced into the alloys via cross-contamination from the molds used, is crucial for further reducing the activation.
The Books Recommend Service System Based on Improved Algorithm for Mining Association Rules
Institute of Scientific and Technical Information of China (English)
王萍
2009-01-01
The Apriori algorithm is a classical method of association rules mining. Based on an analysis of this theory, the paper provides an improved Apriori algorithm. The improved algorithm combines a hash table technique with reduction of candidate item sets to enhance the usage efficiency of resources as well as the individualized service of the data library.
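The hash-table technique combined with candidate reduction resembles the classic DHP refinement of Apriori: hash every 2-itemset into a small bucket table during the first scan, then prune candidate pairs whose bucket total cannot reach the support threshold. A hedged sketch of that idea (bucket count and data are illustrative, not from the paper):

```python
from itertools import combinations

def hash_pruned_pairs(transactions, min_support, n_buckets=101):
    """DHP-style pruning: a pair can only be frequent if its hash
    bucket's total count reaches min_support, so most candidate
    2-itemsets are discarded before the expensive counting pass."""
    buckets = [0] * n_buckets
    for t in transactions:
        for pair in combinations(sorted(t), 2):
            buckets[hash(pair) % n_buckets] += 1
    return {pair
            for t in transactions
            for pair in combinations(sorted(t), 2)
            if buckets[hash(pair) % n_buckets] >= min_support}

baskets = [{'a', 'b'}, {'a', 'b'}, {'a', 'c'}]
candidates = hash_pruned_pairs(baskets, min_support=2)
```

Bucket collisions can let an infrequent pair survive as a candidate, but never eliminate a truly frequent one, so the pruning is safe.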
Evolutionary Graph Drawing Algorithms
Institute of Scientific and Technical Information of China (English)
Huang Jing-wei; Wei Wen-fang
2003-01-01
In this paper, graph drawing algorithms based on genetic algorithms are designed for general undirected graphs and directed graphs. As shown, graph drawing algorithms designed with genetic algorithms have the following advantages: the frameworks of the algorithms are unified, the method is simple, and different algorithms may be obtained by designing different objective functions, thereby enhancing the reuse of the algorithms. Aesthetics or constraints may also be added to satisfy different requirements.
Energy Technology Data Exchange (ETDEWEB)
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Special Education Teacher Candidate Assessment: A Review
McCall, Zach; McHatton, Patricia Alvarez; Shealey, Monika Williams
2014-01-01
Teacher preparation has been under intense scrutiny in recent years. In order for preparation of special education teacher candidates to remain viable, candidate assessment practices must apply practices identified in the extant literature base, while special education teacher education researchers must extend this base with rigorous efforts to…
11 CFR 110.13 - Candidate debates.
2010-01-01
... political parties may stage candidate debates in accordance with this section and 11 CFR 114.4(f). (2... also cover or carry candidate debates in accordance with 11 CFR part 100, subparts B and C and part 100... CFR 114.4(f), provided that they are not owned or controlled by a political party, political...
Adaptive link selection algorithms for distributed estimation
Xu, Songcen; de Lamare, Rodrigo C.; Poor, H. Vincent
2015-12-01
This paper presents adaptive link selection algorithms for distributed estimation and considers their application to wireless sensor networks and smart grids. In particular, exhaustive search-based least mean squares (LMS) / recursive least squares (RLS) link selection algorithms and sparsity-inspired LMS / RLS link selection algorithms that can exploit the topology of networks with poor-quality links are considered. The proposed link selection algorithms are then analyzed in terms of their stability, steady-state, and tracking performance and computational complexity. In comparison with the existing centralized or distributed estimation strategies, the key features of the proposed algorithms are as follows: (1) more accurate estimates and faster convergence speed can be obtained and (2) the network is equipped with the ability of link selection that can circumvent link failures and improve the estimation performance. The performance of the proposed algorithms for distributed estimation is illustrated via simulations in applications of wireless sensor networks and smart grids.
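The LMS building block underlying these link-selection schemes can be sketched in its scalar form (this is plain LMS, not the paper's distributed, link-selecting variant; names and values are ours):

```python
def lms_estimate(samples, mu=0.05):
    """Scalar LMS: track an unknown parameter w from observations
    d = w*x + noise via the update w_hat += mu * x * (d - w_hat * x)."""
    w_hat = 0.0
    for x, d in samples:
        w_hat += mu * x * (d - w_hat * x)
    return w_hat

# Noiseless stream for illustration: d = 2.0 * x
stream = [(1.0, 2.0)] * 200
```

The step size mu trades convergence speed against steady-state error; in the distributed setting, each node runs such an update and the link-selection layer decides whose estimates to combine.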
State transition algorithm for traveling salesman problem
Chunhua, Yang; Xiaojun, Zhou; Weihua, Gui
2012-01-01
A discrete version of the state transition algorithm is proposed in order to solve the traveling salesman problem. Three special operators for discrete optimization, named the swap, shift, and symmetry transformations, are presented. Convergence analysis and the time complexity of the algorithm are also considered. To keep the algorithm simple and efficient, no parameter adjustment is suggested in the current version. Experiments are carried out to test the performance of the strategy, and comparisons with simulated annealing and ant colony optimization demonstrate the effectiveness of the proposed algorithm. The results also show that the discrete state transition algorithm consumes much less time and has better search ability than its counterparts, which indicates that the state transition algorithm has strong adaptability.
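The three named operators can be sketched on a city permutation, wrapped in a greedy accept-if-better loop (a simplification of the paper's algorithm; parameters and the tiny test instance are ours):

```python
import random

def swap(t, rng):
    """Swap transformation: exchange two randomly chosen cities."""
    t = t[:]
    i, j = rng.sample(range(len(t)), 2)
    t[i], t[j] = t[j], t[i]
    return t

def shift(t, rng):
    """Shift transformation: move one city to another position."""
    t = t[:]
    i, j = rng.sample(range(len(t)), 2)
    t.insert(j, t.pop(i))
    return t

def symmetry(t, rng):
    """Symmetry transformation: reverse a random segment (2-opt style)."""
    t = t[:]
    i, j = sorted(rng.sample(range(len(t) + 1), 2))
    t[i:j] = reversed(t[i:j])
    return t

def state_transition_tsp(dist, n_iter=2000, seed=0):
    """Greedy discrete state transition search over the three operators."""
    rng = random.Random(seed)
    n = len(dist)
    cost = lambda t: sum(dist[t[k]][t[(k + 1) % n]] for k in range(n))
    best = list(range(n))
    rng.shuffle(best)
    for _ in range(n_iter):
        cand = rng.choice((swap, shift, symmetry))(best, rng)
        if cost(cand) < cost(best):
            best = cand
    return best, cost(best)

# Four cities on a unit square: the optimal tour is the perimeter (length 4).
pts = [(0, 0), (0, 1), (1, 1), (1, 0)]
dist = [[((ax - bx) ** 2 + (ay - by) ** 2) ** 0.5 for bx, by in pts]
        for ax, ay in pts]
tour, best_cost = state_transition_tsp(dist)
```

Each operator always yields a valid permutation, so the search never needs a repair step.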
An adaptively spatial color gamut mapping algorithm
Institute of Scientific and Technical Information of China (English)
Xiandou Zhang; Haisong Xu
2009-01-01
To improve the accuracy of color image reproduction from displays to printers, an adaptively spatial color gamut mapping algorithm (ASCGMA) is proposed. In this algorithm, the compression degree of out-of-reproduction-gamut colors is related not only to the position of the color in CIELCH color space but also to the neighborhood of the color to be mapped. A psychophysical pair-comparison experiment is carried out to evaluate and compare this new algorithm with the HPMINDE and SGCK gamut mapping algorithms recommended by the International Commission on Illumination (CIE). The experimental results indicate that the proposed algorithm outperforms the HPMINDE and SGCK algorithms except for very dark images.
A novel algorithm for satellite data transmission
Institute of Scientific and Technical Information of China (English)
ZHANG ShouJuan; ZHOU Quan
2009-01-01
For remote sensing satellite data transmission, a novel algorithm is proposed in this paper. It integrates different types of feature descriptors into multistage recognizers. In the first level, a dynamic clustering algorithm is used. In the second level, an improved support vector machines algorithm demonstrates its validity. In the third level, a shape matrix similarity comparison algorithm shows excellent performance. The single child recognizers are connected in series but are independent of each other. Objects that are not recognized correctly by the lower-level recognizers are passed to the higher-level recognizers. Experimental results show that the multistage recognition algorithm greatly improves accuracy with higher-level feature descriptors and higher-level recognizers. The algorithm may offer a new methodology for high-speed satellite data transmission.
Research on Palmprint Identification Method Based on Quantum Algorithms
Directory of Open Access Journals (Sweden)
Hui Li
2014-01-01
Full Text Available Quantum image recognition is a technology that uses quantum algorithms to process image information, and it can obtain better results than classical algorithms. In this paper, four different quantum algorithms are used in the three stages of palmprint recognition. First, a quantum adaptive median filtering algorithm is presented for palmprint filtering; comparison shows that it obtains a better filtering result than the classical algorithm. Next, the quantum Fourier transform (QFT) is used to extract pattern features in only one operation, thanks to quantum parallelism; the proposed algorithm exhibits an exponential speed-up compared with the discrete Fourier transform in feature extraction. Finally, quantum set operations and Grover's algorithm are used in palmprint matching. According to the experimental results, the quantum algorithm needs only on the order of the square root of N operations to find the target palmprint, whereas the traditional method needs N calculations. At the same time, the matching accuracy of the quantum algorithm is almost 100%.
A new classification algorithm based on RGH-tree search
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
In this paper, we put forward a new classification algorithm based on RGH-tree search and perform a classification analysis and comparison study. This algorithm can save computing resources and increase classification efficiency. The experiments show that the algorithm achieves good results on three-dimensional, multi-class data, and that it has good generalization ability for small training sets and large testing sets.
Algorithmic Relative Complexity
Directory of Open Access Journals (Sweden)
Daniele Cerra
2011-04-01
Full Text Available Information content and compression are tightly related concepts that can be addressed through both classical and algorithmic information theories, on the basis of Shannon entropy and Kolmogorov complexity, respectively. The definition of several entities in Kolmogorov’s framework relies upon ideas from classical information theory, and these two approaches share many common traits. In this work, we expand the relations between these two frameworks by introducing algorithmic cross-complexity and relative complexity, counterparts of the cross-entropy and relative entropy (or Kullback-Leibler divergence) found in Shannon’s framework. We define the cross-complexity of an object x with respect to another object y as the amount of computational resources needed to specify x in terms of y, and the complexity of x related to y as the compression power which is lost when adopting such a description for x, compared to the shortest representation of x. Properties of analogous quantities in classical information theory hold for these new concepts. As these notions are incomputable, a suitable approximation based upon data compression is derived to enable the application to real data, yielding a divergence measure applicable to any pair of strings. Example applications are outlined, involving authorship attribution and satellite image classification, as well as a comparison to similar established techniques.
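The compression-based approximation the authors derive is in the spirit of the well-known normalized compression distance (NCD); a minimal sketch of that related measure, using zlib as a stand-in compressor, shows how compressed sizes substitute for incomputable Kolmogorov complexities:

```python
import zlib

def C(data: bytes) -> int:
    """Approximate the Kolmogorov complexity of `data` by its compressed size."""
    return len(zlib.compress(data, 9))

def ncd(x: bytes, y: bytes) -> float:
    """Normalized compression distance between two strings.

    NCD(x, y) = (C(xy) - min(C(x), C(y))) / max(C(x), C(y));
    near 0 for closely related strings, near 1 for unrelated ones.
    """
    cx, cy, cxy = C(x), C(y), C(x + y)
    return (cxy - min(cx, cy)) / max(cx, cy)

x = b"abc" * 200              # highly redundant string
y = bytes(range(256)) * 2     # unrelated byte content
d_same, d_diff = ncd(x, x), ncd(x, y)
```

A string concatenated with itself compresses almost as well as one copy, so `d_same` is small, while `d_diff` approaches 1; any real compressor only approximates the ideal, which is why such measures are approximations of the incomputable quantities defined in the paper.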
Planetary Candidates Observed by Kepler V: Planet Sample from Q1-Q12 (36 Months)
Rowe, Jason F; Antoci, Victoria; Barclay, Thomas; Batalha, Natalie M; Borucki, William J; Burke, Christopher J; Bryson, Steven T; Caldwell, Douglas A; Campbell, Jennifer R; Catanzarite, Joseph H; Christiansen, Jessie L; Cochran, William; Gilliland, Ronald L; Girouard, Forrest R; Haas, Michael R; Helminiak, Krzysztof G; Henze, Christopher E; Hoffman, Kelsey L; Howell, Steve B; Huber, Daniel; Hunter, Roger C; Jang-Condell, Hannah; Jenkins, Jon M; Klaus, Todd C; Latham, David W; Li, Jie; Lissauer, Jack J; McCauliff, Sean D; Morris, Robert L; Mullally, F; Ofir, Aviv; Quarles, Billy; Quintana, Elisa; Sabale, Anima; Seader, Shawn; Shporer, Avi; Smith, Jeffrey C; Steffen, Jason H; Still, Martin; Tenenbaum, Peter; Thompson, Susan E; Twicken, Joseph D; Van Laerhoven, Christa; Wolfgang, Angie; Zamudio, Khadeejah A
2015-01-01
The Kepler mission discovered 2842 exoplanet candidates with 2 years of data. We provide updates to the Kepler planet candidate sample based upon 3 years (Q1-Q12) of data. Through a series of tests to exclude false positives, primarily caused by eclipsing binary stars and instrumental systematics, 855 additional planetary candidates have been discovered, bringing the total number known to 3697. We provide revised transit parameters and accompanying posterior distributions based on a Markov chain Monte Carlo algorithm for the cumulative catalogue of Kepler Objects of Interest. There are now 130 candidates in the cumulative catalogue that receive less than twice the flux the Earth receives, and more than 1100 have a radius less than 1.5 Rearth. There are now a dozen candidates meeting both criteria, roughly doubling the number of candidate Earth analogs. A majority of planetary candidates have a high probability of being bona fide planets; however, there are populations of likely false positives. We discuss and s...
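The building block of the MCMC posterior estimates mentioned above is a Metropolis-Hastings step; a minimal random-walk sketch is shown below. The Gaussian target is a toy stand-in for a real transit-parameter posterior, and the numbers are illustrative only.

```python
import math
import random

def metropolis(log_post, x0, steps, scale, seed=0):
    """Random-walk Metropolis-Hastings sampler.

    Draws a chain of samples from a posterior given only its
    unnormalised log density `log_post`.
    """
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    chain = []
    for _ in range(steps):
        prop = x + rng.gauss(0.0, scale)      # propose a nearby value
        lp_prop = log_post(prop)
        if math.log(rng.random()) < lp_prop - lp:  # accept/reject
            x, lp = prop, lp_prop
        chain.append(x)
    return chain

# Toy target: Gaussian "posterior" for a transit depth, mean 0.01, sigma 0.002.
chain = metropolis(lambda d: -0.5 * ((d - 0.01) / 0.002) ** 2,
                   x0=0.0, steps=5000, scale=0.001)
mean = sum(chain[1000:]) / len(chain[1000:])  # discard burn-in
```

After discarding burn-in, summary statistics of the chain (mean, credible intervals) estimate the posterior, which is how per-candidate transit-parameter posteriors are typically reported.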
Geometrical interpretation and improvements of the Blahut-Arimoto's algorithm
Naja, Ziad; Duhamel, P; 10.1109/ICASSP.2009.4960131
2010-01-01
The paper first recalls the Blahut-Arimoto algorithm for computing the capacity of arbitrary discrete memoryless channels, as an example of an iterative algorithm working with probability density estimates. Then, a geometrical interpretation of this algorithm based on projections onto linear and exponential families of probabilities is provided. Finally, this understanding also allows the Blahut-Arimoto algorithm to be rewritten as a true proximal point algorithm. It is shown that the corresponding version has an improved convergence rate compared with the initial algorithm, as well as with other improved versions.
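The plain (non-proximal) Blahut-Arimoto iteration can be sketched as follows; the binary symmetric channel at the end is a standard test case with known capacity 1 − H(0.1) ≈ 0.531 bits. This sketch illustrates only the baseline algorithm, not the paper's improved proximal-point version.

```python
import math

def blahut_arimoto(P, tol=1e-9, max_iter=10000):
    """Blahut-Arimoto iteration for the capacity of a discrete memoryless channel.

    P[x][y] is the transition probability p(y|x).  Returns (capacity in
    bits, capacity-achieving input distribution).
    """
    n_in, n_out = len(P), len(P[0])
    r = [1.0 / n_in] * n_in                      # input distribution estimate
    for _ in range(max_iter):
        # Backward step: q(x|y) proportional to r(x) p(y|x).
        q = [[0.0] * n_in for _ in range(n_out)]
        for y in range(n_out):
            z = sum(r[x] * P[x][y] for x in range(n_in))
            for x in range(n_in):
                q[y][x] = r[x] * P[x][y] / z if z > 0 else 0.0
        # Forward step: r(x) proportional to exp(sum_y p(y|x) ln q(x|y)).
        r_new = [math.exp(sum(P[x][y] * math.log(q[y][x])
                              for y in range(n_out) if P[x][y] > 0))
                 for x in range(n_in)]
        z = sum(r_new)
        r_new = [v / z for v in r_new]
        done = max(abs(a - b) for a, b in zip(r, r_new)) < tol
        r = r_new
        if done:
            break
    # Mutual information I(r; P) at the final input distribution, in bits.
    cap = 0.0
    for x in range(n_in):
        for y in range(n_out):
            if P[x][y] > 0:
                py = sum(r[xx] * P[xx][y] for xx in range(n_in))
                cap += r[x] * P[x][y] * math.log2(P[x][y] / py)
    return cap, r

# Binary symmetric channel with crossover probability 0.1.
bsc = [[0.9, 0.1],
       [0.1, 0.9]]
capacity, r_opt = blahut_arimoto(bsc)
```

Each pass alternates the two closed-form updates, which is exactly the alternating-projection structure the geometric interpretation in the paper makes explicit.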
Performance analysis of cone detection algorithms.
Mariotti, Letizia; Devaney, Nicholas
2015-04-01
Many algorithms have been proposed to help clinicians evaluate cone density and spacing, as these may be related to the onset of retinal diseases. However, there has been no rigorous comparison of the performance of these algorithms. In addition, the performance of such algorithms is typically determined by comparison with human observers. Here we propose a technique to simulate realistic images of the cone mosaic. We use the simulated images to test the performance of three popular cone detection algorithms, and we introduce an algorithm which is used by astronomers to detect stars in astronomical images. We use free-response receiver operating characteristic (FROC) curves to evaluate and compare the performance of the four algorithms. This allows us to optimize the performance of each algorithm. We observe that performance is significantly enhanced by up-sampling the images. We investigate the effect of noise and image quality on cone mosaic parameters estimated using the different algorithms, finding that the estimated regularity is the most sensitive parameter. PMID:26366758
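One building block of a FROC analysis is deciding which detections count as true positives. A hedged sketch of greedy nearest-neighbour matching of detected points against a ground-truth mosaic (not necessarily the paper's exact matching protocol; the acceptance radius is an assumed parameter) is:

```python
import math

def match_detections(detected, truth, radius):
    """Greedily match detected (x, y) points to ground-truth points.

    Each truth point may be claimed by at most one detection lying
    within `radius`.  Returns (sensitivity, false-positive count),
    the quantities swept to trace a FROC curve.
    """
    unmatched = list(truth)
    tp = 0
    for d in detected:
        best, best_dist = None, radius
        for t in unmatched:
            dist = math.hypot(d[0] - t[0], d[1] - t[1])
            if dist <= best_dist:
                best, best_dist = t, dist
        if best is not None:
            unmatched.remove(best)   # truth point is now claimed
            tp += 1
    fp = len(detected) - tp
    sensitivity = tp / len(truth) if truth else 0.0
    return sensitivity, fp

# One detection near a true cone, one spurious detection far from any.
sens, fp = match_detections([(0.5, 0.0), (20.0, 20.0)],
                            [(0.0, 0.0), (10.0, 10.0)], radius=2.0)
```

Sweeping a detector's confidence threshold and recording (sensitivity, false positives per image) pairs at each setting traces out the FROC curve used to compare algorithms.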
Bayesian Optimisation Algorithm for Nurse Scheduling
Li, Jingpeng
2008-01-01
Our research has shown that schedules can be built mimicking a human scheduler by using a set of rules that involve domain knowledge. This chapter presents a Bayesian Optimization Algorithm (BOA) for the nurse scheduling problem that chooses suitable scheduling rules from a set for each nurse's assignment. Based on the idea of using probabilistic models, the BOA builds a Bayesian network over the set of promising solutions and samples this network to generate new candidate solutions. Computational results from 52 real data instances demonstrate the success of this approach. It is also suggested that the learning mechanism in the proposed algorithm may be suitable for other scheduling problems.
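The full BOA learns a Bayesian network over the promising solutions; as a simplified sketch of the same estimate-sample-select loop, the univariate special case below models each assignment slot's rule choice independently (PBIL-style). The rule indices and toy fitness are hypothetical stand-ins for real scheduling rules and costs.

```python
import random

def univariate_eda(n_slots, n_rules, fitness,
                   generations=60, pop=50, elite=10, seed=0):
    """Simplified estimation-of-distribution algorithm.

    The full BOA fits a Bayesian network to the elite solutions; here
    each slot gets an independent categorical distribution instead.
    A solution is a list of rule indices, one per assignment slot.
    """
    rng = random.Random(seed)
    # probs[s][r] = probability of picking rule r for slot s.
    probs = [[1.0 / n_rules] * n_rules for _ in range(n_slots)]
    best = None
    for _ in range(generations):
        # Sample a population from the current probabilistic model.
        population = [
            [rng.choices(range(n_rules), weights=probs[s])[0]
             for s in range(n_slots)]
            for _ in range(pop)
        ]
        population.sort(key=fitness, reverse=True)
        if best is None or fitness(population[0]) > fitness(best):
            best = population[0]
        # Re-estimate each slot's distribution from the elite solutions.
        for s in range(n_slots):
            counts = [0] * n_rules
            for sol in population[:elite]:
                counts[sol[s]] += 1
            # Laplace smoothing keeps every rule reachable.
            probs[s] = [(c + 1) / (elite + n_rules) for c in counts]
    return best

# Toy fitness: prefer rule 2 in every slot (stand-in for a real scheduling cost).
sched = univariate_eda(8, 4, fitness=lambda sol: sum(r == 2 for r in sol))
```

The loop alternates model estimation and sampling exactly as the chapter describes; replacing the independent per-slot distributions with a learned Bayesian network recovers the full BOA.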