WorldWideScience

Sample records for evaluating algorithm performance

  1. Interactive segmentation techniques algorithms and performance evaluation

    CERN Document Server

    He, Jia; Kuo, C-C Jay

    2013-01-01

    This book focuses on interactive segmentation techniques, which have been extensively studied in recent decades. Interactive segmentation emphasizes clear extraction of objects of interest, whose locations are roughly indicated by human interactions based on high level perception. This book will first introduce classic graph-cut segmentation algorithms and then discuss state-of-the-art techniques, including graph matching methods, region merging and label propagation, clustering methods, and segmentation methods based on edge detection. A comparative analysis of these methods will be provided

  2. Performance Evaluation of Incremental K-means Clustering Algorithm

    OpenAIRE

    Chakraborty, Sanjay; Nagwani, N. K.

    2014-01-01

    The incremental K-means clustering algorithm has already been proposed and analysed in [Chakraborty and Nagwani, 2011]. It is an innovative approach applicable in a periodically incremental environment that deals with bulk updates. In this paper, the performance evaluation of this incremental K-means clustering algorithm is carried out using an air pollution database. This paper also describes the comparison of the performance evaluations between the existing K-means clustering and i...
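
    A minimal sketch of the general idea, assuming Python with NumPy (the record does not give the exact update rule of the cited algorithm): new records are folded into existing centroids as running means instead of re-running K-means on the full data.

      import numpy as np

      def incremental_kmeans_update(centroids, counts, new_points):
          # Assign each newly arrived record to its nearest centroid and update
          # that centroid as a running mean, without re-clustering old data.
          centroids = centroids.astype(float).copy()
          counts = counts.copy()
          for x in new_points:
              j = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
              counts[j] += 1
              centroids[j] += (x - centroids[j]) / counts[j]
          return centroids, counts

      # toy usage: two existing clusters receive a periodic batch of new records
      rng = np.random.default_rng(0)
      centroids = np.array([[0.0, 0.0], [5.0, 5.0]])
      counts = np.array([50, 50])
      batch = rng.normal(size=(20, 2)) + 5.0
      centroids, counts = incremental_kmeans_update(centroids, counts, batch)
      print(centroids, counts)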

  3. Evaluation of Activity Recognition Algorithms for Employee Performance Monitoring

    OpenAIRE

    Mehreen Mumtaz; Hafiz Adnan Habib

    2012-01-01

    Successful Human Resource Management plays a key role in the success of any organization. Traditionally, human resource managers rely on various information technology solutions such as Payroll and Work Time Systems incorporating RFID and biometric technologies. This research evaluates activity recognition algorithms for employee performance monitoring. An activity recognition algorithm has been implemented that categorizes employee activity into the following classes: job activities and...

  4. Performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Giger, M.L.; Chen, C.T.; Sullivan, B.J.

    1990-01-01

    In this paper the authors evaluate the expectation maximization (EM) algorithm, both qualitatively and quantitatively, as a technique for enhancing radiographic images. Previous studies have qualitatively shown the usefulness of the EM algorithm but have failed to quantify and compare its performance with those of other image processing techniques. Recent studies by Loo et al, Ishida et al, and Giger et al, have explained improvements in image quality quantitatively in terms of a signal-to-noise ratio (SNR) derived from signal detection theory. In this study, we take a similar approach in quantifying the effect of the EM algorithm on detection of simulated low-contrast square objects superimposed on radiographic mottle. The SNRs of the original and processed images are calculated taking into account both the human visual system response and the screen-film transfer function as well as a noise component internal to the eye-brain system. The EM algorithm was also implemented on digital screen-film images of test patterns and clinical mammograms
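
    The record does not reproduce the paper's EM formulation; the classic EM iteration for a Poisson imaging model is the Richardson-Lucy update, sketched below with an assumed Gaussian blur kernel and a low-contrast square test object (Python with NumPy/SciPy assumed).

      import numpy as np
      from scipy.signal import fftconvolve

      def em_richardson_lucy(observed, psf, iterations=30):
          # Classic EM (Richardson-Lucy) iteration for a Poisson imaging model:
          # estimate <- estimate * (psf^T applied to observed / (psf * estimate))
          psf = psf / psf.sum()
          psf_mirror = psf[::-1, ::-1]
          estimate = np.full_like(observed, observed.mean(), dtype=float)
          for _ in range(iterations):
              blurred = fftconvolve(estimate, psf, mode="same")
              ratio = observed / np.maximum(blurred, 1e-12)
              estimate *= fftconvolve(ratio, psf_mirror, mode="same")
          return estimate

      # assumed 5x5 Gaussian blur kernel and a noisy low-contrast square test image
      x, y = np.meshgrid(np.arange(-2, 3), np.arange(-2, 3))
      psf = np.exp(-(x**2 + y**2) / 2.0)
      image = np.ones((64, 64)); image[24:40, 24:40] += 0.2
      observed = fftconvolve(image, psf / psf.sum(), mode="same")
      observed += 0.01 * np.random.default_rng(0).normal(size=observed.shape)
      restored = em_richardson_lucy(observed, psf)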

  5. Performance evaluation of PCA-based spike sorting algorithms.

    Science.gov (United States)

    Adamos, Dimitrios A; Kosmidis, Efstratios K; Theophilidis, George

    2008-09-01

    Deciphering the electrical activity of individual neurons from multi-unit noisy recordings is critical for understanding complex neural systems. A widely used spike sorting algorithm is being evaluated for single-electrode nerve trunk recordings. The algorithm is based on principal component analysis (PCA) for spike feature extraction. In the neuroscience literature it is generally assumed that the use of the first two or most commonly three principal components is sufficient. We estimate the optimum PCA-based feature space by evaluating the algorithm's performance on simulated series of action potentials. A number of modifications are made to the open source nev2lkit software to enable systematic investigation of the parameter space. We introduce a new metric to define clustering error considering over-clustering more favorable than under-clustering as proposed by experimentalists for our data. Both the program patch and the metric are available online. Correlated and white Gaussian noise processes are superimposed to account for biological and artificial jitter in the recordings. We report that the employment of more than three principal components is in general beneficial for all noise cases considered. Finally, we apply our results to experimental data and verify that the sorting process with four principal components is in agreement with a panel of electrophysiology experts.
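
    A rough stand-in for the described pipeline, assuming Python with scikit-learn and a toy simulated spike series: PCA feature extraction followed by clustering, scored here with the adjusted Rand index rather than the paper's custom over-/under-clustering metric.

      import numpy as np
      from sklearn.decomposition import PCA
      from sklearn.cluster import KMeans
      from sklearn.metrics import adjusted_rand_score

      def sort_spikes(waveforms, n_components, n_units):
          # Project spike waveforms onto the first principal components,
          # then cluster the PCA scores into putative units.
          scores = PCA(n_components=n_components).fit_transform(waveforms)
          return KMeans(n_clusters=n_units, n_init=10, random_state=0).fit_predict(scores)

      # toy simulated series: three units with distinct mean waveforms plus jitter
      rng = np.random.default_rng(1)
      templates = rng.normal(size=(3, 48))
      labels_true = rng.integers(0, 3, size=600)
      waveforms = templates[labels_true] + 0.3 * rng.normal(size=(600, 48))

      for k in (2, 3, 4, 5):   # compare feature-space dimensionality
          labels_pred = sort_spikes(waveforms, n_components=k, n_units=3)
          print(k, round(adjusted_rand_score(labels_true, labels_pred), 3))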

  6. A string matching based algorithm for performance evaluation of ...

    Indian Academy of Sciences (India)

    In this paper, we have addressed the problem of automated performance evaluation of Mathematical Expression (ME) recognition. Automated evaluation requires that the recognition output and the ground truth, both in some editable format such as LaTeX or MathML, be matched. But standard forms can have extraneous symbols ...
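
    The paper matches recognised output against ground truth in editable formats; a token-level Levenshtein distance over hypothetical LaTeX token sequences is the simplest string-matching stand-in (Python).

      def edit_distance(a, b):
          # Levenshtein distance between two token sequences via dynamic programming
          prev = list(range(len(b) + 1))
          for i, ta in enumerate(a, 1):
              curr = [i]
              for j, tb in enumerate(b, 1):
                  curr.append(min(prev[j] + 1,                 # deletion
                                  curr[j - 1] + 1,             # insertion
                                  prev[j - 1] + (ta != tb)))   # substitution
              prev = curr
          return prev[-1]

      # hypothetical LaTeX token sequences: recognised output vs ground truth
      ground_truth = ["\\frac", "{", "a", "}", "{", "b", "}", "+", "c"]
      recognised   = ["\\frac", "{", "a", "}", "{", "6", "}", "+", "c"]
      print(edit_distance(recognised, ground_truth))  # 1 substitution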

  7. Performance indices and evaluation of algorithms in building energy efficient design optimization

    International Nuclear Information System (INIS)

    Si, Binghui; Tian, Zhichao; Jin, Xing; Zhou, Xin; Tang, Peng; Shi, Xing

    2016-01-01

    Building energy efficient design optimization is an emerging technique that is increasingly being used to design buildings with better overall performance and a particular emphasis on energy efficiency. To achieve building energy efficient design optimization, algorithms are vital to generate new designs and thus drive the design optimization process. Therefore, the performance of algorithms is crucial to achieving effective energy efficient design techniques. This study evaluates algorithms used for building energy efficient design optimization. A set of performance indices, namely, stability, robustness, validity, speed, coverage, and locality, is proposed to evaluate the overall performance of algorithms. A benchmark building and a design optimization problem are also developed. Hooke–Jeeves algorithm, Multi-Objective Genetic Algorithm II, and Multi-Objective Particle Swarm Optimization algorithm are evaluated by using the proposed performance indices and benchmark design problem. Results indicate that no algorithm performs best in all six areas. Therefore, when facing an energy efficient design problem, the algorithm must be carefully selected based on the nature of the problem and the performance indices that matter the most. - Highlights: • Six indices of algorithm performance in building energy optimization are developed. • For each index, its concept is defined and the calculation formulas are proposed. • A benchmark building and benchmark energy efficient design problem are proposed. • The performance of three selected algorithms is evaluated.
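
    Two of the named indices, stability and speed, in purely illustrative form (Python with NumPy assumed); the paper defines its own formulas, which are not reproduced in the record.

      import numpy as np

      def stability_index(best_costs):
          # Illustrative stability measure: 1 minus the relative spread of the
          # best objective value over repeated runs of the same algorithm.
          best_costs = np.asarray(best_costs, dtype=float)
          return 1.0 - best_costs.std() / abs(best_costs.mean())

      def speed_index(evals_to_converge, evaluation_budget):
          # Illustrative speed measure: share of the evaluation budget left unused.
          return 1.0 - np.mean(evals_to_converge) / evaluation_budget

      # e.g. five repeated optimisation runs of one algorithm on the benchmark case
      print(stability_index([101.2, 100.8, 101.5, 100.9, 101.1]))
      print(speed_index([1200, 950, 1100, 1300, 1000], evaluation_budget=2000))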

  8. Evaluation of the performance of different firefly algorithms to the ...

    African Journals Online (AJOL)

    of firefly algorithms are applied to solve the nonlinear ELD problem. ... problem using those recent variants and the classical firefly algorithm for different test cases. Efficiency ... International Journal of Machine Learning and Computing, Vol.

  9. General Video Game Evaluation Using Relative Algorithm Performance Profiles

    DEFF Research Database (Denmark)

    Nielsen, Thorbjørn; Barros, Gabriella; Togelius, Julian

    2015-01-01

    In order to generate complete games through evolution we need generic and reliable evaluation functions for games. It has been suggested that game quality could be characterised through playing a game with different controllers and comparing their performance. This paper explores that idea throug...
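
    A toy stand-in for the idea, in Python: play a candidate game with several controllers and use the spread of their mean scores as a quality signal; the controllers and the one-step "game" below are hypothetical, not the controllers used in the paper.

      import random

      def performance_profile(game, controllers, episodes=50, seed=0):
          # Play each controller on the candidate game and record its mean score;
          # a game that separates skilled from random play yields a wide spread.
          random.seed(seed)
          means = {name: sum(game(play) for _ in range(episodes)) / episodes
                   for name, play in controllers.items()}
          spread = max(means.values()) - min(means.values())
          return means, spread

      # hypothetical one-step "game": guess a hidden digit, score 1 if correct
      def game(policy):
          hidden = random.randint(0, 9)
          return 1.0 if policy(hidden) == hidden else 0.0

      controllers = {"random": lambda _: random.randint(0, 9),
                     "cheater": lambda hidden: hidden}   # upper-bound controller
      print(performance_profile(game, controllers))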

  10. Evaluating Prognostics Performance for Algorithms Incorporating Uncertainty Estimates

    Data.gov (United States)

    National Aeronautics and Space Administration — Uncertainty Representation and Management (URM) are an integral part of the prognostic system development. As capabilities of prediction algorithms evolve, research...

  11. A Performance Evaluation of Lightning-NO Algorithms in CMAQ

    Science.gov (United States)

    In the Community Multiscale Air Quality (CMAQv5.2) model, we have implemented two algorithms for lightning NO production; one algorithm is based on the hourly observed cloud-to-ground lightning strike data from National Lightning Detection Network (NLDN) to replace the previous m...

  12. Performance evaluation of image segmentation algorithms on microscopic image data

    Czech Academy of Sciences Publication Activity Database

    Beneš, Miroslav; Zitová, Barbara

    2015-01-01

    Vol. 275, No. 1 (2015), pp. 65-85 ISSN 0022-2720 R&D Projects: GA ČR GAP103/12/2211 Institutional support: RVO:67985556 Keywords: image segmentation * performance evaluation * microscopic images Subject RIV: JC - Computer Hardware; Software Impact factor: 2.136, year: 2015 http://library.utia.cas.cz/separaty/2014/ZOI/zitova-0434809-DOI.pdf

  13. Ridge Distance Estimation in Fingerprint Images: Algorithm and Performance Evaluation

    Directory of Open Access Journals (Sweden)

    Tian Jie

    2004-01-01

    Full Text Available It is important to accurately estimate the ridge distance, an intrinsic texture property of a fingerprint image. Up to now, only a few articles have touched directly upon ridge distance estimation. Little has been published providing a detailed evaluation of methods for ridge distance estimation, in particular the traditional spectral analysis method applied in the frequency domain. In this paper, a novel method on nonoverlapping blocks, called the statistical method, is presented to estimate the ridge distance. Direct estimation ratio (DER) and estimation accuracy (EA) are defined and used as parameters, along with time consumption (TC), to evaluate the performance of these two methods for ridge distance estimation. Based on a comparison of the performances of these two methods, a third hybrid method is developed to combine the merits of both. Experimental results indicate that DER is 44.7%, 63.8%, and 80.6%; EA is 84%, 93%, and 91%; and TC is , , and seconds, with the spectral analysis method, statistical method, and hybrid method, respectively.
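
    A minimal sketch of the spectral-analysis baseline mentioned in the record (Python with NumPy assumed); the statistical and hybrid methods, and the DER/EA/TC evaluation, are not reproduced.

      import numpy as np

      def ridge_distance_spectral(block):
          # Spectral approach: the dominant peak of the block's 2-D amplitude
          # spectrum gives the ridge frequency; distance = block size / cycles.
          f = np.abs(np.fft.fftshift(np.fft.fft2(block - block.mean())))
          cy, cx = f.shape[0] // 2, f.shape[1] // 2
          f[cy, cx] = 0.0                                  # suppress residual DC
          py, px = np.unravel_index(np.argmax(f), f.shape)
          cycles = np.hypot(py - cy, px - cx)
          return block.shape[0] / cycles if cycles else float("inf")

      # synthetic block of vertical ridges spaced 8 pixels apart
      xx = np.arange(64)
      block = np.cos(2 * np.pi * xx / 8.0)[None, :].repeat(64, axis=0)
      print(round(ridge_distance_spectral(block), 2))  # close to 8.0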

  14. Performance Evaluation of Spectral Clustering Algorithm using Various Clustering Validity Indices

    OpenAIRE

    M. T. Somashekara; D. Manjunatha

    2014-01-01

    Despite the popularity of the spectral clustering algorithm, its evaluation procedures are still at a developmental stage. In this article, we have taken the benchmark IRIS dataset for performing a comparative study of twelve indices for evaluating the spectral clustering algorithm. The results of the spectral clustering technique were also compared with the k-means algorithm. The validity of the indices was also verified with accuracy and Normalized Mutual Information (NMI) scores. Spectral clustering algo...
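
    A compact version of the comparison on the IRIS dataset, assuming scikit-learn, showing only one external index (NMI) and one internal index (silhouette) out of the twelve used in the article.

      from sklearn.datasets import load_iris
      from sklearn.cluster import SpectralClustering, KMeans
      from sklearn.metrics import normalized_mutual_info_score, silhouette_score

      X, y = load_iris(return_X_y=True)
      spectral = SpectralClustering(n_clusters=3, affinity="nearest_neighbors",
                                    random_state=0).fit_predict(X)
      kmeans = KMeans(n_clusters=3, n_init=10, random_state=0).fit_predict(X)

      for name, labels in (("spectral", spectral), ("k-means", kmeans)):
          print(name,
                round(normalized_mutual_info_score(y, labels), 3),  # external index
                round(silhouette_score(X, labels), 3))              # internal index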

  15. NETRA: A parallel architecture for integrated vision systems 2: Algorithms and performance evaluation

    Science.gov (United States)

    Choudhary, Alok N.; Patel, Janak H.; Ahuja, Narendra

    1989-01-01

    In part 1 architecture of NETRA is presented. A performance evaluation of NETRA using several common vision algorithms is also presented. Performance of algorithms when they are mapped on one cluster is described. It is shown that SIMD, MIMD, and systolic algorithms can be easily mapped onto processor clusters, and almost linear speedups are possible. For some algorithms, analytical performance results are compared with implementation performance results. It is observed that the analysis is very accurate. Performance analysis of parallel algorithms when mapped across clusters is presented. Mappings across clusters illustrate the importance and use of shared as well as distributed memory in achieving high performance. The parameters for evaluation are derived from the characteristics of the parallel algorithms, and these parameters are used to evaluate the alternative communication strategies in NETRA. Furthermore, the effect of communication interference from other processors in the system on the execution of an algorithm is studied. Using the analysis, performance of many algorithms with different characteristics is presented. It is observed that if communication speeds are matched with the computation speeds, good speedups are possible when algorithms are mapped across clusters.

  16. Evaluation of the performance of different firefly algorithms to the ...

    African Journals Online (AJOL)

    To solve the economic load dispatch problem, traditional and intelligent techniques were applied. Researchers have shown interest in utilizing metaheuristic methods to solve complex optimization problems in real life applications. In this paper, three alternatives of firefly algorithms are applied to solve the nonlinear ELD ...

  17. New Algorithm for Evaluating the Green Supply Chain Performance in an Uncertain Environment

    Directory of Open Access Journals (Sweden)

    Pan Liu

    2016-09-01

    Full Text Available An effective green supply chain (GSC) can help an enterprise obtain more benefits and reduce costs. Therefore, developing an effective evaluation method for GSC performance evaluation is becoming increasingly important. In this study, the advantages and disadvantages of the current performance evaluations and algorithms for GSC performance evaluations were discussed and evaluated. Based on these findings, an improved five-dimensional balanced scorecard was proposed in which the green performance indicators were revised to facilitate their measurement. A model based on Rough Set theory, the Genetic Algorithm, and the Levenberg Marquardt Back Propagation (LMBP) neural network algorithm was proposed. Next, using Matlab, the Rosetta tool, and the practical data of company F, a case study was conducted. The results indicate that the proposed model has a high convergence speed and an accurate prediction ability. The credibility and effectiveness of the proposed model was validated. In comparison with the normal Back Propagation neural network algorithm and the LMBP neural network algorithm, the proposed model has greater credibility and effectiveness. In practice, this method provides a more suitable indicator system and algorithm for enterprises to be able to implement GSC performance evaluations in an uncertain environment. Academically, the proposed method addresses the lack of a theoretical basis for GSC performance evaluation, thus representing a new development in GSC performance evaluation theory.

  18. A comprehensive performance evaluation on the prediction results of existing cooperative transcription factors identification algorithms.

    Science.gov (United States)

    Lai, Fu-Jou; Chang, Hong-Tsun; Huang, Yueh-Min; Wu, Wei-Sheng

    2014-01-01

    Eukaryotic transcriptional regulation is known to be highly connected through the networks of cooperative transcription factors (TFs). Measuring the cooperativity of TFs is helpful for understanding the biological relevance of these TFs in regulating genes. The recent advances in computational techniques led to various predictions of cooperative TF pairs in yeast. As each algorithm integrated different data resources and was developed based on different rationales, each possessed its own merits and claimed to outperform the others. However, these claims were prone to subjectivity because each algorithm was compared with only a few other algorithms, using only a small set of performance indices. This motivated us to propose a series of indices to objectively evaluate the prediction performance of existing algorithms and, based on the proposed performance indices, to conduct a comprehensive performance evaluation. We collected 14 sets of predicted cooperative TF pairs (PCTFPs) in yeast from 14 existing algorithms in the literature. Using the eight performance indices we adopted/proposed, the cooperativity of each PCTFP was measured, and a ranking score according to the mean cooperativity of the set was given to each set of PCTFPs under evaluation for each performance index. It was seen that the ranking scores of a set of PCTFPs vary with different performance indices, implying that an algorithm used in predicting cooperative TF pairs has strengths in some respects but may have weaknesses in others. We finally made a comprehensive ranking for these 14 sets. The results showed that Wang J's study obtained the best performance evaluation on the prediction of cooperative TF pairs in yeast. In this study, we adopted/proposed eight performance indices to make a comprehensive performance evaluation on the prediction results of 14 existing cooperative TF identification algorithms. Most importantly, these proposed indices can be easily applied to measure the performance of new
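
    An illustrative way to turn per-index mean cooperativities into per-index ranks and an overall ranking (Python with NumPy assumed); the paper's exact scoring scheme is not reproduced, and the score values below are hypothetical.

      import numpy as np

      def overall_ranking(mean_cooperativity):
          # mean_cooperativity[i, j]: mean cooperativity of prediction set i under
          # performance index j (higher is better). Rank sets per index, then
          # aggregate the per-index ranks into one overall ranking.
          scores = np.asarray(mean_cooperativity, dtype=float)
          per_index_rank = (-scores).argsort(axis=0).argsort(axis=0) + 1  # 1 = best
          mean_rank = per_index_rank.mean(axis=1)
          return per_index_rank, mean_rank.argsort().argsort() + 1

      # hypothetical scores for 4 prediction sets under 3 indices
      ranks, overall = overall_ranking([[0.9, 0.7, 0.8],
                                        [0.6, 0.9, 0.5],
                                        [0.8, 0.8, 0.9],
                                        [0.4, 0.3, 0.2]])
      print(ranks); print(overall)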

  19. Evaluation of the performance of existing non-laboratory based cardiovascular risk assessment algorithms

    Science.gov (United States)

    2013-01-01

    Background The high burden and rising incidence of cardiovascular disease (CVD) in resource constrained countries necessitates implementation of robust and pragmatic primary and secondary prevention strategies. Many current CVD management guidelines recommend absolute cardiovascular (CV) risk assessment as a clinically sound guide to preventive and treatment strategies. Development of non-laboratory based cardiovascular risk assessment algorithms enable absolute risk assessment in resource constrained countries. The objective of this review is to evaluate the performance of existing non-laboratory based CV risk assessment algorithms using the benchmarks for clinically useful CV risk assessment algorithms outlined by Cooney and colleagues. Methods A literature search to identify non-laboratory based risk prediction algorithms was performed in MEDLINE, CINAHL, Ovid Premier Nursing Journals Plus, and PubMed databases. The identified algorithms were evaluated using the benchmarks for clinically useful cardiovascular risk assessment algorithms outlined by Cooney and colleagues. Results Five non-laboratory based CV risk assessment algorithms were identified. The Gaziano and Framingham algorithms met the criteria for appropriateness of statistical methods used to derive the algorithms and endpoints. The Swedish Consultation, Framingham and Gaziano algorithms demonstrated good discrimination in derivation datasets. Only the Gaziano algorithm was externally validated where it had optimal discrimination. The Gaziano and WHO algorithms had chart formats which made them simple and user friendly for clinical application. Conclusion Both the Gaziano and Framingham non-laboratory based algorithms met most of the criteria outlined by Cooney and colleagues. External validation of the algorithms in diverse samples is needed to ascertain their performance and applicability to different populations and to enhance clinicians’ confidence in them. PMID:24373202

  20. A string matching based algorithm for performance evaluation of ...

    Indian Academy of Sciences (India)

    Zanibbi et al (2011) have proposed performance metrics based on bipartite graphs at the stroke level. ... bipartite graphs on which metrics based on Hamming distances are defined. ... Document Image Analysis for Libraries 320–331 ... Lee H J and Wang J S 1997 Design of a mathematical expression understanding system.

  1. Performance evaluation of recommendation algorithms on Internet of Things services

    Science.gov (United States)

    Mashal, Ibrahim; Alsaryrah, Osama; Chung, Tein-Yaw

    2016-06-01

    Internet of Things (IoT) is the next wave of the industry revolution that will initiate many services, such as personal health care and green energy monitoring, to which people may subscribe for their convenience. Recommending IoT services to users based on the objects they own will become crucial for the success of IoT. In this work, we introduce the concept of service recommender systems in IoT through a formal model. As a first attempt in this direction, we have proposed a hyper-graph model for an IoT recommender system in which each hyper-edge connects users, objects, and services. Next, we studied the usefulness of traditional recommendation schemes and their hybrid approaches for IoT service recommendation (IoTSRS) based on existing well-known metrics. The preliminary results show that existing approaches perform reasonably well but further extension is required for IoTSRS. Several challenges are discussed to point out the direction of future development of IoTSRS.
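
    A minimal frequency-based sketch of recommending services over (user, object, service) hyper-edges, with hypothetical object and service names (Python); the hybrid schemes actually evaluated in the paper are not reproduced.

      from collections import Counter

      def recommend_services(hyperedges, user, top_k=3):
          # hyperedges: iterable of (user, object, service) triples, mirroring the
          # hyper-graph model in which one edge connects a user, an object and a
          # service. Recommend services most often attached to objects the user owns.
          owned = {o for u, o, s in hyperedges if u == user}
          already_used = {s for u, o, s in hyperedges if u == user}
          counts = Counter(s for u, o, s in hyperedges
                           if o in owned and s not in already_used)
          return [service for service, _ in counts.most_common(top_k)]

      edges = [("alice", "thermostat", "green-energy-monitoring"),
               ("bob", "thermostat", "heating-schedule"),
               ("bob", "wristband", "personal-health-care"),
               ("carol", "thermostat", "heating-schedule")]
      print(recommend_services(edges, "alice"))   # ['heating-schedule']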

  2. Performance Evaluation of New Joint EDF-RM Scheduling Algorithm for Real Time Distributed System

    Directory of Open Access Journals (Sweden)

    Rashmi Sharma

    2014-01-01

    Full Text Available In a real-time system, meeting deadlines is the main target of every scheduling algorithm. Earliest Deadline First (EDF), Rate Monotonic (RM), and Least Laxity First are some renowned algorithms that work well in their own contexts. As is well known, EDF suffers from the domino effect, which arises under overload conditions (EDF does not work well in overload situations). Similarly, the performance of RM degrades under underload conditions. We can say that both algorithms are complements of each other. Deadline misses in both cases happen because of their utilization-bounding strategies. Therefore, in this paper we propose a new scheduling algorithm that overcomes the drawbacks of both existing algorithms. The joint EDF-RM scheduling algorithm is implemented in a global scheduler that permits a task migration mechanism between processors in the system. In order to check the improved behavior of the proposed algorithm, we perform simulations. Results are achieved and evaluated in terms of Success Ratio (SR), Average CPU Utilization (ECU), Failure Ratio (FR), and Maximum Tardiness parameters. In the end, the results are compared with the existing EDF, RM, and D_R_EDF algorithms. It is shown that the proposed algorithm performs better under overload as well as underload conditions.
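
    The record does not give the joint algorithm's switching rule; the sketch below only contrasts the two base priority rules (EDF vs. RM) and the Success Ratio metric used in the evaluation (Python).

      def pick_next(ready_jobs, policy):
          # ready_jobs: list of dicts with absolute 'deadline' and 'period'.
          # EDF runs the job with the earliest deadline; RM runs the job with the
          # shortest period (highest rate-monotonic priority).
          key = (lambda j: j["deadline"]) if policy == "EDF" else (lambda j: j["period"])
          return min(ready_jobs, key=key)

      def success_ratio(jobs_meeting_deadline, jobs_arrived):
          # SR, one of the metrics used when comparing the schedulers.
          return jobs_meeting_deadline / jobs_arrived if jobs_arrived else 0.0

      ready = [{"name": "T1", "deadline": 12, "period": 20},
               {"name": "T2", "deadline": 15, "period": 10}]
      print(pick_next(ready, "EDF")["name"])  # T1 (earlier deadline)
      print(pick_next(ready, "RM")["name"])   # T2 (shorter period)
      print(success_ratio(47, 50))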

  3. A high-performance spatial database based approach for pathology imaging algorithm evaluation

    Directory of Open Access Journals (Sweden)

    Fusheng Wang

    2013-01-01

    Full Text Available Background: Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. Context: The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. Aims: (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. Materials and Methods: We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The

  4. A high-performance spatial database based approach for pathology imaging algorithm evaluation.

    Science.gov (United States)

    Wang, Fusheng; Kong, Jun; Gao, Jingjing; Cooper, Lee A D; Kurc, Tahsin; Zhou, Zhengwen; Adler, David; Vergara-Niedermayr, Cristobal; Katigbak, Bryan; Brat, Daniel J; Saltz, Joel H

    2013-01-01

    Algorithm evaluation provides a means to characterize variability across image analysis algorithms, validate algorithms by comparison with human annotations, combine results from multiple algorithms for performance improvement, and facilitate algorithm sensitivity studies. The sizes of images and image analysis results in pathology image analysis pose significant challenges in algorithm evaluation. We present an efficient parallel spatial database approach to model, normalize, manage, and query large volumes of analytical image result data. This provides an efficient platform for algorithm evaluation. Our experiments with a set of brain tumor images demonstrate the application, scalability, and effectiveness of the platform. The paper describes an approach and platform for evaluation of pathology image analysis algorithms. The platform facilitates algorithm evaluation through a high-performance database built on the Pathology Analytic Imaging Standards (PAIS) data model. (1) Develop a framework to support algorithm evaluation by modeling and managing analytical results and human annotations from pathology images; (2) Create a robust data normalization tool for converting, validating, and fixing spatial data from algorithm or human annotations; (3) Develop a set of queries to support data sampling and result comparisons; (4) Achieve high performance computation capacity via a parallel data management infrastructure, parallel data loading and spatial indexing optimizations in this infrastructure. We have considered two scenarios for algorithm evaluation: (1) algorithm comparison where multiple result sets from different methods are compared and consolidated; and (2) algorithm validation where algorithm results are compared with human annotations. We have developed a spatial normalization toolkit to validate and normalize spatial boundaries produced by image analysis algorithms or human annotations. The validated data were formatted based on the PAIS data model and

  5. Assessment of Performances of Various Machine Learning Algorithms During Automated Evaluation of Descriptive Answers

    Directory of Open Access Journals (Sweden)

    C. Sunil Kumar

    2014-07-01

    Full Text Available Automation of descriptive answer evaluation is the need of the hour because of the huge increase in the number of students enrolling each year in educational institutions and the limited staff available to spare their time for evaluations. In this paper, we use a machine learning workbench called LightSIDE to accomplish automatic evaluation and scoring of descriptive answers. We attempted to identify the best supervised machine learning algorithm for a scenario with a limited training-set sample size. We evaluated the performances of the Bayes, SVM, Logistic Regression, Random Forests, Decision Stump and Decision Tree algorithms. We confirmed SVM as the best-performing algorithm based on quantitative measurements of accuracy, kappa, training speed and prediction accuracy on the supplied test set.
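
    A scikit-learn stand-in for the LightSIDE workflow described above, on a tiny hypothetical corpus and with resubstitution scoring only; a subset of the listed classifiers is shown.

      from sklearn.pipeline import make_pipeline
      from sklearn.feature_extraction.text import TfidfVectorizer
      from sklearn.naive_bayes import MultinomialNB
      from sklearn.svm import LinearSVC
      from sklearn.linear_model import LogisticRegression
      from sklearn.ensemble import RandomForestClassifier
      from sklearn.metrics import accuracy_score, cohen_kappa_score

      # tiny hypothetical corpus of graded answers (0 = poor, 1 = good)
      answers = ["explains the process clearly with examples",
                 "partially correct but misses key steps",
                 "irrelevant response",
                 "complete and well structured explanation",
                 "no meaningful content",
                 "covers most points with minor omissions"]
      grades = [1, 0, 0, 1, 0, 1]

      models = {"naive bayes": MultinomialNB(),
                "svm": LinearSVC(),
                "logistic": LogisticRegression(max_iter=1000),
                "random forest": RandomForestClassifier(random_state=0)}

      for name, clf in models.items():
          pipe = make_pipeline(TfidfVectorizer(), clf).fit(answers, grades)
          pred = pipe.predict(answers)   # resubstitution only, for illustration
          print(name, accuracy_score(grades, pred), cohen_kappa_score(grades, pred))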

  6. A quantitative performance evaluation of the EM algorithm applied to radiographic images

    International Nuclear Information System (INIS)

    Brailean, J.C.; Sullivan, B.J.; Giger, M.L.; Chen, C.T.

    1991-01-01

    In this paper, the authors quantitatively evaluate the performance of the Expectation Maximization (EM) algorithm as a restoration technique for radiographic images. The perceived signal-to-noise ratio (SNR) of simple radiographic patterns processed by the EM algorithm is calculated on the basis of a statistical decision theory model that includes both the observer's visual response function and a noise component internal to the eye-brain system. The relative SNR (ratio of the processed SNR to the original SNR) is calculated and used as a metric to quantitatively compare the effects of the EM algorithm to two popular image enhancement techniques: contrast enhancement (windowing) and unsharp mask filtering

  7. Performance evaluation of 2D image registration algorithms with the numeric image registration and comparison platform

    International Nuclear Information System (INIS)

    Gerganov, G.; Kuvandjiev, V.; Dimitrova, I.; Mitev, K.; Kawrakow, I.

    2012-01-01

    The objective of this work is to present the capabilities of the NUMERICS web platform for evaluation of the performance of image registration algorithms. The NUMERICS platform is a web-accessible tool which provides access to dedicated numerical algorithms for registration and comparison of medical images (http://numerics.phys.uni-sofia.bg). The platform allows comparison of noisy medical images by means of different types of image comparison algorithms, which are based on statistical tests for outliers. The platform also allows 2D image registration with different techniques, such as Elastic Thin-Plate Spline registration, registration based on rigid transformations, affine transformations, as well as non-rigid image registration based on Mobius transformations. In this work we demonstrate how the platform can be used as a tool for evaluation of the quality of the image registration process. We demonstrate performance evaluation of a deformable image registration technique based on Mobius transformations. The transformations are applied with appropriate cost functions such as Mutual Information, Correlation Coefficient, and Sum of Squared Differences. The emphasis is on the results provided by the platform to the user and their interpretation in the context of the performance evaluation of 2D image registration. The NUMERICS image registration and image comparison platform provides detailed statistical information about submitted image registration jobs and can be used to perform quantitative evaluation of the performance of different image registration techniques. (authors)
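
    The three cost functions named in the record, in minimal NumPy form (the NUMERICS platform itself is not shown); the shifted test image is synthetic.

      import numpy as np

      def sum_of_squared_differences(a, b):
          return float(np.mean((a - b) ** 2))

      def correlation_coefficient(a, b):
          return float(np.corrcoef(a.ravel(), b.ravel())[0, 1])

      def mutual_information(a, b, bins=64):
          # Histogram-based MI estimate between the intensities of two images.
          joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
          pxy = joint / joint.sum()
          px, py = pxy.sum(axis=1), pxy.sum(axis=0)
          nz = pxy > 0
          return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px[:, None] * py[None, :])[nz])))

      rng = np.random.default_rng(0)
      fixed = rng.random((128, 128))
      moving = np.roll(fixed, 3, axis=1) + 0.05 * rng.random((128, 128))  # shifted copy
      print(sum_of_squared_differences(fixed, moving),
            correlation_coefficient(fixed, moving),
            mutual_information(fixed, moving))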

  8. Algoritmi selektivnog šifrovanja - pregled sa ocenom performansi / Selective encryption algorithms: Overview with performance evaluation

    Directory of Open Access Journals (Sweden)

    Boriša Ž. Jovanović

    2010-10-01

    name says, it consists of encrypting only a subset of the data. The aim of selective encryption is to reduce the amount of data to encrypt while preserving a sufficient level of security.
    Theoretical foundation of selective encryption: The first theoretical foundation of selective encryption was given indirectly by Claude Elwood Shannon in his work on the communication theory of secrecy systems. It is well known that the statistics of image and video data differ greatly from those of classical text data. Indeed, image and video data are strongly correlated and have strong spatial/temporal redundancy.
    Evaluation criteria for selective encryption algorithm performance: We need to define a set of evaluation criteria that will help in evaluating and comparing selective encryption algorithms: tunability, visual degradation, cryptographic security, encryption ratio, compression friendliness, format compliance, and error tolerance.
    Classification of selective encryption algorithms: One possible classification of selective encryption algorithms is relative to when encryption is performed with respect to compression. This classification is adequate since it has intrinsic consequences on the behavior of selective encryption algorithms. We consider three classes of algorithms: precompression, incompression, and postcompression.
    Overview of selective encryption algorithms: In accordance with their previously defined classification, selective encryption algorithms were compared, briefly described with their advantages and disadvantages, and their quality was assessed.
    Applications: Selective encryption mechanisms have become more and more important and can be applied in many different areas. Some potential application areas are: monitoring encrypted content; PDAs (Personal Digital Assistants), mobile phones, and other mobile terminals; multiple encryptions; and transcodability/scalability of encrypted content.
    Conclusion: As we can see through the foregoing analysis, we can notice

  9. Evaluation of Cutting Performance of Diamond Saw Machine Using Artificial Bee Colony (ABC) Algorithm

    Directory of Open Access Journals (Sweden)

    Masoud Akhyani

    2017-12-01

    Full Text Available Artificial Intelligence (AI) techniques are used for solving intractable engineering problems. This study aims to investigate the application of the artificial bee colony algorithm for predicting the performance of a circular diamond saw in sawing hard rocks. For this purpose, fourteen types of hard rock were cut in the laboratory using a cutting rig at 5 mm depth of cut, 40 cm/min feed rate and 3000 rpm peripheral speed. Four major mechanical and physical properties of the studied rocks, namely uniaxial compressive strength (UCS), Schimazek abrasivity factor (SF-a), Mohs hardness (Mh), and Young’s modulus (Ym), were determined in the rock mechanics laboratory. The artificial bee colony (ABC) algorithm was used to classify the performance of the circular diamond saw based on the mentioned mechanical properties of the rocks. Ampere consumption and wear rate of the diamond saw were selected as criteria to evaluate the results of the ABC algorithm. Ampere consumption was determined during the cutting process, and the average wear rate of the diamond saw was calculated from width, length and height loss. The comparison between the ABC results and the cutting performance (ampere consumption and wear rate of the diamond saw) indicated the ability of metaheuristic algorithms such as ABC to evaluate cutting performance.

  10. Performance evaluation of Genetic Algorithms on loading pattern optimization of PWRs

    International Nuclear Information System (INIS)

    Tombakoglu, M.; Bekar, K.B.; Erdemli, A.O.

    2001-01-01

    Genetic Algorithm (GA) based systems are used for search and optimization problems. There are several applications of GAs in the literature successfully applied to loading pattern optimization problems. In this study, we have selected the loading pattern optimization problem of a Pressurised Water Reactor (PWR). The main objective of this work is to evaluate the performance of Genetic Algorithm operators such as regional crossover, crossover and mutation, and selection, as well as the construction of the initial population and its size, for PWR loading pattern optimization problems. The performance of GA with antithetic variates is compared to traditional GA. Antithetic variates are used to generate the initial population, and their use with GA operators is also discussed. Finally, the results of multi-cycle optimization problems are discussed for an objective function taking into account cycle burn-up and discharge burn-up. (author)
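
    A sketch of the antithetic-variates initialization mentioned in the record (Python with NumPy assumed); how the real-coded chromosomes map to PWR loading patterns is not shown.

      import numpy as np

      def antithetic_population(pop_size, n_genes, rng=None):
          # Build the initial GA population from antithetic pairs: for each random
          # chromosome u drawn uniformly in [0, 1)^n, its mirror 1 - u is also kept,
          # so the pair is negatively correlated and spreads over the search space.
          rng = rng or np.random.default_rng(0)
          half = rng.random((pop_size // 2, n_genes))
          return np.vstack([half, 1.0 - half])

      population = antithetic_population(pop_size=20, n_genes=16)
      print(population.shape)                   # (20, 16)
      print(population[:10] + population[10:])  # each antithetic pair sums to 1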

  11. Performance evaluation of the Champagne source reconstruction algorithm on simulated and real M/EEG data.

    Science.gov (United States)

    Owen, Julia P; Wipf, David P; Attias, Hagai T; Sekihara, Kensuke; Nagarajan, Srikantan S

    2012-03-01

    In this paper, we present an extensive performance evaluation of a novel source localization algorithm, Champagne. It is derived in an empirical Bayesian framework that yields sparse solutions to the inverse problem. It is robust to correlated sources and learns the statistics of non-stimulus-evoked activity to suppress the effect of noise and interfering brain activity. We tested Champagne on both simulated and real M/EEG data. The source locations used for the simulated data were chosen to test the performance on challenging source configurations. In simulations, we found that Champagne outperforms the benchmark algorithms in terms of both the accuracy of the source localizations and the correct estimation of source time courses. We also demonstrate that Champagne is more robust to correlated brain activity present in real MEG data and is able to resolve many distinct and functionally relevant brain areas with real MEG and EEG data. Copyright © 2011 Elsevier Inc. All rights reserved.

  12. Performance Evaluation of Machine Learning Algorithms for Urban Pattern Recognition from Multi-spectral Satellite Images

    Directory of Open Access Journals (Sweden)

    Marc Wieland

    2014-03-01

    Full Text Available In this study, a classification and performance evaluation framework for the recognition of urban patterns in medium (Landsat ETM, TM and MSS) and very high resolution (WorldView-2, Quickbird, Ikonos) multi-spectral satellite images is presented. The study aims at exploring the potential of machine learning algorithms in the context of an object-based image analysis and to thoroughly test the algorithms' performance under varying conditions to optimize their usage for urban pattern recognition tasks. Four classification algorithms, Normal Bayes, K Nearest Neighbors, Random Trees and Support Vector Machines, which represent different concepts in machine learning (probabilistic, nearest neighbor, tree-based, function-based), have been selected and implemented on a free and open-source basis. Particular focus is given to assessing the generalization ability of machine learning algorithms and the transferability of trained learning machines between different image types and image scenes. Moreover, the influence of the number and choice of training data, the influence of the size and composition of the feature vector, and the effect of image segmentation on the classification accuracy are evaluated.

  13. Evaluation of odometry algorithm performances using a railway vehicle dynamic model

    Science.gov (United States)

    Allotta, B.; Pugi, L.; Ridolfi, A.; Malvezzi, M.; Vettori, G.; Rindi, A.

    2012-05-01

    In modern railway Automatic Train Protection and Automatic Train Control systems, odometry is a safety-relevant on-board subsystem which estimates the instantaneous speed and the travelled distance of the train; high reliability of the odometry estimate is fundamental, since an error on the train position may lead to a potentially dangerous overestimation of the distance available for braking. To improve the odometry estimate accuracy, data fusion of different inputs coming from a redundant sensor layout may be used. Simplified two-dimensional models of railway vehicles have usually been used for Hardware in the Loop test rig testing of conventional odometry algorithms and of on-board safety-relevant subsystems (like the Wheel Slide Protection braking system) in which the train speed is estimated from measurements of the wheel angular speed. Two-dimensional models are not suitable for developing solutions such as inertial-type localisation algorithms (using 3D accelerometers and 3D gyroscopes) and the introduction of the Global Positioning System (or similar) or the magnetometer. In order to test these algorithms correctly and increase odometry performance, a three-dimensional multibody model of a railway vehicle has been developed, using Matlab-Simulink™, including an efficient contact model which can simulate degraded adhesion conditions (the development and prototyping of odometry algorithms involve the simulation of realistic environmental conditions). In this paper, the authors show how a 3D railway vehicle model, able to simulate the complex interactions arising between different on-board subsystems, can be useful for evaluating the performance of odometry algorithms and of safety-relevant on-board subsystems.

  14. Performance evaluation of grid-enabled registration algorithms using bronze-standards

    CERN Document Server

    Glatard, T; Montagnat, J

    2006-01-01

    Evaluating registration algorithms is difficult due to the lack of a gold standard in most clinical procedures. The bronze standard is a real-data-based statistical method providing an alternative registration reference through a computationally intensive image database registration procedure. We propose in this paper an efficient implementation of this method through a grid-interfaced workflow enactor enabling the concurrent processing of hundreds of image registrations in only a couple of hours. The performances of two different grid infrastructures were compared. We computed the accuracy of 4 different rigid registration algorithms on longitudinal MRI images of brain tumors. Results showed an average subvoxel accuracy of 0.4 mm and 0.15 degrees in rotation.

  15. On e-business strategy planning and performance evaluation: An adaptive algorithmic managerial approach

    Directory of Open Access Journals (Sweden)

    Alexandra Lipitakis

    2017-07-01

    Full Text Available A new e-business strategy planning and performance evaluation scheme based on adaptive algorithmic modelling techniques is presented. The effect of the financial and non-financial performance of organizations on e-business strategy planning is investigated. The relationships between the four strategic planning parameters are examined, the directions of these relationships are given, and six additional basic components are also considered. A new conceptual model has been constructed for e-business strategic planning and performance evaluation, and an adaptive algorithmic modelling approach is presented. The new adaptive algorithmic modelling scheme, including eleven dynamic modules, can be optimized and used effectively in e-business strategic planning and strategic planning evaluation of various e-services in very large organizations and businesses. A synoptic statistical analysis and comparative numerical results for the cases of the UK and Greece are given. The proposed e-business models indicate how e-business strategic planning may affect financial and non-financial performance in businesses and organizations by exploring whether models which are used for strategy planning can be applied to e-business planning and whether these models would be valid in different environments. A conceptual model has been constructed and qualitative research methods have been used for testing a predetermined number of considered hypotheses. The proposed models have been tested in the UK and Greece, and the conclusions, including numerical results and statistical analyses, indicated existing relationships between the considered dependent and independent variables. The proposed e-business models are expected to contribute to the e-business strategy planning of businesses and organizations, and managers should consider applying these models to their e-business strategy planning to improve their companies' performances. This research study brings together elements of e

  16. Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer

    Science.gov (United States)

    2017-01-05

    Yu-Ren Chien, Daryush D. Mehta, Jón Guðnason, Matías Zañartu, and Thomas F. Quatieri. Abstract: Glottal inverse filtering aims to

  17. Performance Evaluation of Bidding-Based Multi-Agent Scheduling Algorithms for Manufacturing Systems

    Directory of Open Access Journals (Sweden)

    Antonio Gordillo

    2014-10-01

    Full Text Available Artificial Intelligence techniques have been applied to many problems in manufacturing systems in recent years. In the specific field of manufacturing scheduling, many studies have been published trying to cope with the complexity of the manufacturing environment. One of the most utilized approaches is (multi-)agent-based scheduling. Nevertheless, despite the large list of studies reported in this field, there is no resource or scientific study on the performance measurement of this type of approach under very common and critical execution situations. This paper focuses on multi-agent system (MAS) based algorithms for task allocation, particularly in manufacturing applications. The goal is to provide a mechanism to measure the performance of agent-based scheduling approaches for manufacturing systems under key critical situations such as: dynamic environment, rescheduling, and priority change. With this mechanism it will be possible to simulate critical situations and to stress the system in order to measure the performance of a given agent-based scheduling method. The proposed mechanism is a pioneering approach for performance evaluation of bidding-based MAS approaches for manufacturing scheduling. The proposed method and evaluation methodology can be used to run tests on different manufacturing floors since they are independent of the workshop configuration. Moreover, the evaluation results presented in this paper show the key factors and scenarios that most affect market-like MAS approaches for manufacturing scheduling.

  18. Performance Evaluation of Block Acquisition and Tracking Algorithms Using an Open Source GPS Receiver Platform

    Science.gov (United States)

    Ramachandran, Ganesh K.; Akopian, David; Heckler, Gregory W.; Winternitz, Luke B.

    2011-01-01

    Location technologies have many applications in wireless communications, military and space missions, etc. The US Global Positioning System (GPS) and other existing and emerging Global Navigation Satellite Systems (GNSS) are expected to provide accurate location information to enable such applications. While GNSS systems perform very well in strong signal conditions, their operation in many urban, indoor, and space applications is not robust, or even impossible, due to weak signals and strong distortions. The search for less costly, faster and more sensitive receivers is still in progress. As the research community addresses more and more complicated phenomena, there is a demand for flexible multimode reference receivers, associated SDKs, and development platforms which may accelerate and facilitate the research. One such concept is the software GPS/GNSS receiver (GPS SDR), which permits facilitated access to algorithmic libraries and the possibility to integrate more advanced algorithms without hardware and essential software updates. The GNU-SDR and GPS-SDR open source receiver platforms are popular examples. This paper evaluates the performance of recently proposed block-correlator techniques for acquisition and tracking of GPS signals using the open source GPS-SDR platform.
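
    A simplified FFT-based parallel code-phase search, assuming Python with NumPy, a single coherent block, and a code replica already resampled to the block length; this is not the paper's block-correlator implementation.

      import numpy as np

      def acquire(signal, code_replica, doppler_bins, fs):
          # For each Doppler bin, wipe off the carrier and circular-correlate
          # with the local code replica; the largest peak gives the estimates.
          n = len(signal)
          t = np.arange(n) / fs
          code_fft = np.conj(np.fft.fft(code_replica))
          best = (0.0, None, None)
          for fd in doppler_bins:
              wiped = signal * np.exp(-2j * np.pi * fd * t)
              corr = np.abs(np.fft.ifft(np.fft.fft(wiped) * code_fft))
              if corr.max() > best[0]:
                  best = (float(corr.max()), float(fd), int(corr.argmax()))
          return best   # (peak, Doppler estimate in Hz, code phase in samples)

      # toy check: a +/-1 code delayed by 100 samples and shifted by 1 kHz is found
      rng = np.random.default_rng(0)
      fs, n = 1.023e6, 1023
      code = rng.choice([-1.0, 1.0], size=n)
      t = np.arange(n) / fs
      rx = np.roll(code, 100) * np.exp(2j * np.pi * 1000 * t) + 0.5 * rng.normal(size=n)
      print(acquire(rx, code, np.arange(-5000, 5001, 500), fs))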

  19. Performance evaluation of firefly algorithm with variation in sorting for non-linear benchmark problems

    Science.gov (United States)

    Umbarkar, A. J.; Balande, U. T.; Seth, P. D.

    2017-06-01

    The field of nature-inspired computing and optimization techniques has evolved to solve difficult optimization problems in diverse fields of engineering, science and technology. The firefly attraction process is mimicked in the algorithm for solving optimization problems. In the Firefly Algorithm (FA), the fireflies are ranked using a sorting algorithm. The original FA was proposed with bubble sort for ranking the fireflies. In this paper, quick sort replaces bubble sort to decrease the time complexity of FA. The dataset used consists of unconstrained benchmark functions from CEC 2005 [22]. The comparison of FA using bubble sort and FA using quick sort is performed with respect to best, worst, mean, standard deviation, number of comparisons and execution time. The experimental results show that FA using quick sort requires fewer comparisons but more execution time. Increasing the number of fireflies helps convergence to the optimal solution, and the algorithm performed better at lower dimensions than at higher dimensions.
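
    One FA iteration in sketch form (Python with NumPy assumed); the ranking step uses NumPy's argsort, whose default kind is a quicksort-family routine, echoing the quick-sort variant studied in the paper.

      import numpy as np

      def firefly_step(positions, brightness, beta0=1.0, gamma=1.0, alpha=0.1, rng=None):
          # One Firefly Algorithm iteration: fireflies are ranked by brightness,
          # and every firefly moves toward each brighter one with attractiveness
          # decaying with squared distance, plus a small random walk.
          rng = rng or np.random.default_rng(0)
          positions = positions.copy()
          order = np.argsort(brightness)        # ranking step (quicksort-family)
          dim = positions.shape[1]
          for i in order:
              for j in order:
                  if brightness[j] > brightness[i]:
                      r2 = float(np.sum((positions[j] - positions[i]) ** 2))
                      beta = beta0 * np.exp(-gamma * r2)
                      positions[i] += (beta * (positions[j] - positions[i])
                                       + alpha * (rng.random(dim) - 0.5))
          return positions

      # toy usage on the sphere function (a CEC-style benchmark stand-in)
      rng = np.random.default_rng(1)
      pos = rng.uniform(-5, 5, size=(15, 4))
      bright = -np.sum(pos ** 2, axis=1)        # brighter = lower sphere cost
      pos = firefly_step(pos, bright, rng=rng)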

  20. Performance evaluation of an algorithm for fast optimization of beam weights in anatomy-based intensity modulated radiotherapy

    International Nuclear Information System (INIS)

    Ranganathan, Vaitheeswaran; Sathiya Narayanan, V.K.; Bhangle, Janhavi R.; Gupta, Kamlesh K.; Basu, Sumit; Maiya, Vikram; Joseph, Jolly; Nirhali, Amit

    2010-01-01

    This study aims to evaluate the performance of a new algorithm for optimization of beam weights in anatomy-based intensity modulated radiotherapy (IMRT). The algorithm uses a numerical technique called Gaussian elimination that derives the optimum beam weights in an exact, non-iterative way. The distinct feature of the algorithm is that it takes only a fraction of a second to optimize the beam weights, irrespective of the complexity of the given case. The algorithm has been implemented using MATLAB with a Graphical User Interface (GUI) option for convenient specification of dose constraints and penalties to different structures. We have tested the numerical and clinical capabilities of the proposed algorithm in several patient cases in comparison with the KonRad inverse planning system. The comparative analysis shows that the algorithm can generate anatomy-based IMRT plans with about a 50% reduction in the number of MUs and a 60% reduction in the number of apertures, while producing dose distributions comparable to those of beamlet-based IMRT plans. Hence, it is clearly evident from the study that the proposed algorithm can be effectively used for clinical applications. (author)
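
    The record describes a non-iterative solve; a minimal sketch, assuming Python with NumPy and a hypothetical dose-influence matrix, solves the normal equations by Gaussian elimination and clips negative weights (the paper's constraint and penalty handling is not reproduced).

      import numpy as np

      def optimize_beam_weights(dose_matrix, prescription):
          # Non-iterative weight optimisation sketch: solve D w = p in the
          # least-squares sense via the normal equations, then clip negative
          # weights, since beam weights cannot be negative.
          D = np.asarray(dose_matrix, dtype=float)
          p = np.asarray(prescription, dtype=float)
          w = np.linalg.solve(D.T @ D, D.T @ p)   # exact (Gaussian-elimination) solve
          return np.clip(w, 0.0, None)

      # hypothetical 5 dose points x 3 beams influence matrix and prescribed doses
      D = np.array([[0.9, 0.1, 0.2],
                    [0.8, 0.2, 0.1],
                    [0.1, 0.9, 0.3],
                    [0.2, 0.8, 0.2],
                    [0.1, 0.2, 0.9]])
      p = np.array([60.0, 60.0, 60.0, 60.0, 20.0])
      print(optimize_beam_weights(D, p))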

  1. Efficiency maximization and performance evaluation of hybrid dual channel semitransparent photovoltaic thermal module using fuzzyfied genetic algorithm

    International Nuclear Information System (INIS)

    Singh, Sonveer; Agrawal, Sanjay

    2016-01-01

    Highlights: • Thermal modeling of a novel dual-channel semitransparent photovoltaic thermal hybrid module. • Efficiency maximization and performance evaluation of the dual-channel photovoltaic thermal module. • Annual performance has been evaluated for Srinagar, Jodhpur, Bangalore and New Delhi (India). • There are improvements in results for the optimized system as compared to the un-optimized system. - Abstract: The work has been carried out in two steps. First, the parameters of the hybrid dual-channel semitransparent photovoltaic thermal module have been optimized using a fuzzyfied genetic algorithm. During the course of optimization, overall exergy efficiency is considered as the objective function and different design parameters of the proposed module have been optimized. A fuzzy controller is used to improve the performance of the genetic algorithm, and the approach is called a fuzzyfied genetic algorithm. In the second step, the performance of the module has been analyzed for four cities of India: Srinagar, Bangalore, Jodhpur and New Delhi. The performance of the module has been evaluated for daytime 08:00 AM to 05:00 PM and annually from January to December. It is noted that an average improvement occurs in the electrical efficiency of the optimized module, and simultaneously there is also a reduction in solar cell temperature as compared to the un-optimized module.

  2. SU-E-T-605: Performance Evaluation of MLC Leaf-Sequencing Algorithms in Head-And-Neck IMRT

    International Nuclear Information System (INIS)

    Jing, J; Lin, H; Chow, J

    2015-01-01

    Purpose: To investigate the efficiency of three multileaf collimator (MLC) leaf-sequencing algorithms proposed by Galvin et al, Chen et al and Siochi et al using external beam treatment plans for head-and-neck intensity modulated radiation therapy (IMRT). Methods: IMRT plans for head-and-neck were created using the CORVUS treatment planning system. The plans were optimized and the fluence maps for all photon beams determined. Three different MLC leaf-sequencing algorithms based on Galvin et al, Chen et al and Siochi et al were used to calculate the final photon segmental fields and their monitor units in delivery. For comparison purposes, the maximum intensity of the fluence map was kept constant in different plans. The number of beam segments and the total number of monitor units were calculated for the three algorithms. Results: From the results, we found that the algorithm of Galvin et al had the largest number of monitor units, about 70% more than the other two algorithms. Moreover, both the algorithms of Galvin et al and Siochi et al had relatively fewer beam segments compared to that of Chen et al. Although the number of beam segments and the total number of monitor units calculated by the different algorithms varied with the head-and-neck plans, the algorithms of Galvin et al and Siochi et al performed well with a lower number of beam segments, though the algorithm of Galvin et al had a larger total number of monitor units than that of Siochi et al. Conclusion: Although the performance of the leaf-sequencing algorithms varied with different IMRT plans having different fluence maps, an evaluation is possible based on the calculated number of beam segments and monitor units. In this study, the algorithm by Siochi et al was found to be more efficient in head-and-neck IMRT. The Project Sponsored by the Fundamental Research Funds for the Central Universities (J2014HGXJ0094) and the Scientific Research Foundation for the

  3. SU-E-T-605: Performance Evaluation of MLC Leaf-Sequencing Algorithms in Head-And-Neck IMRT

    Energy Technology Data Exchange (ETDEWEB)

    Jing, J; Lin, H [Hefei University of Technology, Hefei, Anhui (China); Chow, J [Princess Margaret Hospital, Toronto, ON (Canada)

    2015-06-15

    Purpose: To investigate the efficiency of three multileaf collimator (MLC) leaf-sequencing algorithms proposed by Galvin et al, Chen et al and Siochi et al using external beam treatment plans for head-and-neck intensity modulated radiation therapy (IMRT). Methods: IMRT plans for head-and-neck were created using the CORVUS treatment planning system. The plans were optimized and the fluence maps for all photon beams determined. Three different MLC leaf-sequencing algorithms based on Galvin et al, Chen et al and Siochi et al were used to calculate the final photon segmental fields and their monitor units in delivery. For comparison purposes, the maximum intensity of the fluence map was kept constant in different plans. The number of beam segments and the total number of monitor units were calculated for the three algorithms. Results: From the results, we found that the algorithm of Galvin et al had the largest number of monitor units, about 70% more than the other two algorithms. Moreover, both the algorithms of Galvin et al and Siochi et al had relatively fewer beam segments compared to that of Chen et al. Although the number of beam segments and the total number of monitor units calculated by the different algorithms varied with the head-and-neck plans, the algorithms of Galvin et al and Siochi et al performed well with a lower number of beam segments, though the algorithm of Galvin et al had a larger total number of monitor units than that of Siochi et al. Conclusion: Although the performance of the leaf-sequencing algorithms varied with different IMRT plans having different fluence maps, an evaluation is possible based on the calculated number of beam segments and monitor units. In this study, the algorithm by Siochi et al was found to be more efficient in head-and-neck IMRT. The Project Sponsored by the Fundamental Research Funds for the Central Universities (J2014HGXJ0094) and the Scientific Research Foundation for the

  4. Performance evaluation of the ORNL multi-elemental XRF analysis algorithms

    Energy Technology Data Exchange (ETDEWEB)

    McElroy, Robert Dennis [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2016-11-01

    Hybrid K-Edge Densitometer (HKED) systems integrate both K-Edge Densitometry (KED) and X-Ray Fluorescence (XRF) analyses to provide accurate, rapid assay results for the uranium and plutonium content of dissolver solution samples from nuclear fuel reprocessing facilities. Introduced for international safeguards applications in the late 1980s, the XRF component of the hybrid analysis is limited to quantification of U and Pu over a narrow range of U:Pu concentration ratios in the vicinity of ≈100. The analysis was further limited regarding the presence of minor actinide components, where only a single minor actinide (typically Am) is included in the analysis and then only treated as an interference. The evolving nuclear fuel cycle has created the need to assay more complex dissolver solutions where uranium may no longer be the dominant actinide in the solution and the concentrations of the so-called minor actinides (e.g., Th, Np, Am, and Cm) are sufficiently high that they can no longer be treated as impurities and ignored. Extension of the traditional HKED Region of Interest (ROI) based analysis to include these additional actinides is not possible due to the increased complexity of the XRF spectra. Oak Ridge National Laboratory (ORNL) has developed a spectral fitting approach to the HKED XRF measurement with an enhanced algorithm set to accommodate these complex XRF spectra. This report provides a summary of the spectral fitting methodology and examines the performance of these algorithms using data obtained from the ORNL HKED system, as well as data provided by the International Atomic Energy Agency (IAEA) on actual dissolver solutions.

  5. Comprehensive eye evaluation algorithm

    Science.gov (United States)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated in two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma, respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.
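
    As a sketch of the kind of comparison reported here (not the authors' pipeline, with purely synthetic data and hypothetical feature names), the following snippet contrasts the cross-validated AUC of a classifier trained on image features alone against one trained on image features plus clinical metadata, using scikit-learn.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 200
image_feats = rng.normal(size=(n, 10))      # e.g. lesion counts, texture (hypothetical)
clinical = rng.normal(size=(n, 4))          # e.g. age, diabetes duration (hypothetical)
y = (image_feats[:, 0] + 0.5 * clinical[:, 0] + rng.normal(size=n) > 0).astype(int)

def cv_auc(X, y):
    """Five-fold cross-validated AUC for a simple logistic regression."""
    clf = LogisticRegression(max_iter=1000)
    scores = cross_val_predict(clf, X, y, cv=5, method="predict_proba")[:, 1]
    return roc_auc_score(y, scores)

auc_image = cv_auc(image_feats, y)
auc_multimodal = cv_auc(np.hstack([image_feats, clinical]), y)
print(f"image only: {auc_image:.3f}  multimodal: {auc_multimodal:.3f}")
```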

  6. Performance measurement, modeling, and evaluation of integrated concurrency control and recovery algorithms in distributed data base systems

    Energy Technology Data Exchange (ETDEWEB)

    Jenq, B.C.

    1986-01-01

    The performance evaluation of integrated concurrency-control and recovery mechanisms for distributed data base systems is studied using a distributed testbed system. In addition, a queueing network model was developed to analyze the two-phase locking scheme in the distributed testbed system. The combination of testbed measurement and analytical modeling provides an effective tool for understanding the performance of integrated concurrency control and recovery algorithms in distributed database systems. The design and implementation of the distributed testbed system, CARAT, are presented. The concurrency control and recovery algorithms implemented in CARAT include: a two-phase locking scheme with distributed deadlock detection, a distributed version of the optimistic approach, before-image and after-image journaling mechanisms for transaction recovery, and a two-phase commit protocol. Many performance measurements were conducted using a variety of workloads. A queueing network model is developed to analyze the performance of the CARAT system using the two-phase locking scheme with before-image journaling. The combination of testbed measurements and analytical modeling provides significant improvements in understanding the performance impacts of the concurrency control and recovery algorithms in distributed database systems.

  7. Performance evaluation of an automated single-channel sleep–wake detection algorithm

    Directory of Open Access Journals (Sweden)

    Kaplan RF

    2014-10-01

    Full Text Available Richard F Kaplan,1 Ying Wang,1 Kenneth A Loparo,1,2 Monica R Kelly,3 Richard R Bootzin3 1General Sleep Corporation, Euclid, OH, USA; 2Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH, USA; 3Department of Psychology, University of Arizona, Tucson, AZ, USA Background: A need exists, from both a clinical and a research standpoint, for objective sleep measurement systems that are both easy to use and can accurately assess sleep and wake. This study evaluates the output of an automated sleep–wake detection algorithm (Z-ALG) used in the Zmachine (a portable, single-channel, electroencephalographic [EEG] acquisition and analysis system) against laboratory polysomnography (PSG) using a consensus of expert visual scorers. Methods: Overnight laboratory PSG studies from 99 subjects (52 females/47 males, 18–60 years, median age 32.7 years), including both normal sleepers and those with a variety of sleep disorders, were assessed. PSG data obtained from the differential mastoids (A1–A2) were assessed by Z-ALG, which determines sleep versus wake every 30 seconds using low-frequency, intermediate-frequency, and high-frequency and time domain EEG features. PSG data were independently scored by two to four certified PSG technologists, using standard Rechtschaffen and Kales guidelines, and these score files were combined on an epoch-by-epoch basis, using a majority voting rule, to generate a single score file per subject to compare against the Z-ALG output. Both epoch-by-epoch and standard sleep indices (eg, total sleep time, sleep efficiency, latency to persistent sleep, and wake after sleep onset) were compared between the Z-ALG output and the technologist consensus score files. Results: Overall, the sensitivity and specificity for detecting sleep using the Z-ALG as compared to the technologist consensus are 95.5% and 92.5%, respectively, across all subjects, and the positive predictive value and the
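
    A minimal sketch of the epoch-by-epoch agreement statistics used above (sensitivity, specificity and positive predictive value for 30-second sleep/wake labels) is shown below; it assumes NumPy and is not the Z-ALG itself.

```python
import numpy as np

def epoch_agreement(algorithm, consensus, sleep_label=1):
    """Epoch-by-epoch sensitivity, specificity and PPV for detecting sleep,
    given two equal-length label vectors (1 = sleep, 0 = wake)."""
    a = np.asarray(algorithm) == sleep_label
    c = np.asarray(consensus) == sleep_label
    tp = np.sum(a & c)        # sleep correctly detected
    tn = np.sum(~a & ~c)      # wake correctly detected
    fp = np.sum(a & ~c)
    fn = np.sum(~a & c)
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)
    return sensitivity, specificity, ppv

# toy example: ten 30-second epochs
alg = [1, 1, 1, 0, 1, 1, 0, 0, 1, 1]
con = [1, 1, 1, 0, 0, 1, 0, 0, 1, 1]
print(epoch_agreement(alg, con))
```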

  8. Evaluation of Four Encryption Algorithms for Viability, Reliability and Performance Estimation

    Directory of Open Access Journals (Sweden)

    J. B. Awotunde

    2016-12-01

    Full Text Available Data and information in storage, in transit or during processing are found in various computers and computing devices with a wide range of hardware specifications. Cryptography is the knowledge of using codes to encrypt and decrypt data. It enables one to store sensitive information or transmit it across computer networks in a more secure way so that it cannot be read by anyone except the intended receiver. Cryptography also allows secure storage of sensitive data on any computer. Cryptography as an approach to computer security comes at a cost in terms of resource utilization, such as time, memory and CPU time, which in some cases may not be available in abundance to achieve the stated objective of protecting data. This work looked into the memory consumption rate, different key sizes, CPU utilization time and encryption speed of the four algorithms to determine the amount of computer resources expended and how long it takes each algorithm to complete its task. Results show that the key length of a cryptographic algorithm is proportional to its resource utilization in most cases, as found for the key lengths of the Blowfish, AES, 3DES and DES algorithms respectively. Further research can be carried out in order to determine the power utilization of each of these algorithms.
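
    A rough timing sketch of the kind of measurement described is shown below. It assumes the PyCryptodome package, encrypts a fixed buffer with AES, DES, 3DES and Blowfish in CBC mode, and reports throughput; it is not the paper's exact measurement protocol.

```python
import time
from Crypto.Cipher import AES, DES, DES3, Blowfish
from Crypto.Random import get_random_bytes

# 1 MiB buffer, a multiple of both 8-byte and 16-byte block sizes
data = get_random_bytes(1024 * 1024)

ciphers = {
    "AES-128":  AES.new(get_random_bytes(16), AES.MODE_CBC, get_random_bytes(16)),
    "DES":      DES.new(get_random_bytes(8), DES.MODE_CBC, get_random_bytes(8)),
    "3DES":     DES3.new(DES3.adjust_key_parity(get_random_bytes(24)),
                         DES3.MODE_CBC, get_random_bytes(8)),
    "Blowfish": Blowfish.new(get_random_bytes(16), Blowfish.MODE_CBC,
                             get_random_bytes(8)),
}

for name, cipher in ciphers.items():
    start = time.perf_counter()
    cipher.encrypt(data)                       # encrypt the whole buffer once
    elapsed = time.perf_counter() - start
    print(f"{name:8s} {len(data) / elapsed / 1e6:8.1f} MB/s")
```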

  9. Fatigue evaluation algorithms: Review

    Energy Technology Data Exchange (ETDEWEB)

    Passipoularidis, V.A.; Broendsted, P.

    2009-11-15

    A progressive damage fatigue simulator for variable amplitude loads named FADAS is discussed in this work. FADAS (Fatigue Damage Simulator) performs ply-by-ply stress analysis using classical lamination theory and implements adequate stiffness discount tactics based on the failure criterion of Puck, to model the degradation caused by failure events at ply level. Residual strength is incorporated as the fatigue damage accumulation metric. Once the typical fatigue and static properties of the constitutive ply are determined, the performance of an arbitrary lay-up under uniaxial and/or multiaxial load time series can be simulated. The predictions are validated against fatigue life data both from repeated block tests at a single stress ratio as well as against spectral fatigue using the WISPER, WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, the one using single-step and the other using incremental application of each load cycle (in case of ply failure) are implemented and compared. Simulation results confirm the ability of the algorithm to take into account load sequence effects. In general, FADAS performs well in predicting life under both spectral and block loading fatigue. (author)

  10. Performance Evaluation of Various STL File Mesh Refining Algorithms Applied for FDM-RP Process

    Science.gov (United States)

    Ledalla, Siva Rama Krishna; Tirupathi, Balaji; Sriram, Venkatesh

    2018-06-01

    Layered manufacturing machines use the stereolithography (STL) file to build parts. When a curved surface is converted from a computer aided design (CAD) file to STL, it results in a geometrical distortion and chordal error. Parts manufactured with this file might not satisfy geometric dimensioning and tolerance requirements due to the approximated geometry. Current algorithms built into CAD packages have export options to globally reduce this distortion, which leads to an increase in the file size and pre-processing time. In this work, different mesh subdivision algorithms are applied to the STL file of a part with complex geometric features using MeshLab software. The mesh subdivision algorithms considered in this work are the modified butterfly subdivision technique, the Loop subdivision technique and the general triangular midpoint subdivision technique. A comparative study is made with respect to volume and the build time using the above techniques. It is found that the triangular midpoint subdivision algorithm is more suitable for the geometry under consideration. Only the wheel cap part is then manufactured on a Stratasys MOJO FDM machine. The surface roughness of the part is measured on a Talysurf surface roughness tester.
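
    For illustration, one step of general triangular midpoint subdivision (each triangle replaced by four triangles formed by its edge midpoints) can be written in a few lines of Python; this is a minimal sketch of the idea, not MeshLab's implementation.

```python
import numpy as np

def midpoint_subdivide(vertices, faces):
    """One step of triangular midpoint subdivision: every triangle is replaced
    by four triangles formed by its edge midpoints.
    vertices: (n, 3) float array; faces: (m, 3) int array of vertex indices."""
    vertices = list(map(tuple, np.asarray(vertices, dtype=float)))
    midpoint_cache = {}                      # shared midpoints along common edges

    def midpoint(i, j):
        key = (min(i, j), max(i, j))
        if key not in midpoint_cache:
            vertices.append(tuple((np.array(vertices[i]) + np.array(vertices[j])) / 2.0))
            midpoint_cache[key] = len(vertices) - 1
        return midpoint_cache[key]

    new_faces = []
    for a, b, c in faces:
        ab, bc, ca = midpoint(a, b), midpoint(b, c), midpoint(c, a)
        new_faces += [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]
    return np.array(vertices), np.array(new_faces)

# toy example: a single triangle becomes four
v = [[0, 0, 0], [1, 0, 0], [0, 1, 0]]
f = [[0, 1, 2]]
v2, f2 = midpoint_subdivide(v, f)
print(len(f2), "faces after one subdivision step")   # 4
```

    Each step multiplies the face count by four, which is why applying subdivision globally inflates file size and pre-processing time, whereas refining only the curved regions limits the growth.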

  11. Performance Evaluation of the Approaches and Algorithms Using Hamburg Airport Operations

    Science.gov (United States)

    Zhu, Zhifan; Okuniek, Nikolai; Gerdes, Ingrid; Schier, Sebastian; Lee, Hanbong; Jung, Yoon

    2016-01-01

    The German Aerospace Center (DLR) and the National Aeronautics and Space Administration (NASA) have been independently developing and testing their own concepts and tools for airport surface traffic management. Although these concepts and tools have been tested individually for European and US airports, they have never been compared or analyzed side-by-side. This paper presents the collaborative research devoted to the evaluation and analysis of two different surface management concepts. Hamburg Airport was used as a common test bed airport for the study. First, two independent simulations using the same traffic scenario were conducted; one by the DLR team using the Controller Assistance for Departure Optimization (CADEO) and the Taxi Routing for Aircraft: Creation and Controlling (TRACC) in a real-time simulation environment, and one by the NASA team based on the Spot and Runway Departure Advisor (SARDA) in a fast-time simulation environment. A set of common performance metrics was defined. The simulation results showed that both approaches produced operational benefits in efficiency, such as reducing taxi times, while maintaining runway throughput. Both approaches generated the gate pushback schedule to meet the runway schedule, such that the runway utilization was maximized. The conflict-free taxi guidance by TRACC helped avoid taxi conflicts and reduced taxiing stops, but the taxi benefit needed to be assessed together with runway throughput to analyze the overall performance objective.

  13. The Performance Evaluation of an IEEE 802.11 Network Containing Misbehavior Nodes under Different Backoff Algorithms

    Directory of Open Access Journals (Sweden)

    Trong-Minh Hoang

    2017-01-01

    Full Text Available Security of any wireless network is always an important issue due to its serious impacts on network performance. Practically, the IEEE 802.11 medium access control can be violated by several naive or smart attacks that result in degraded network performance. In recent years, several studies have used analytical models to analyze the medium access control (MAC) layer misbehavior issue, but they have focused on binary exponential backoff only. Moreover, a practical condition such as the freezing backoff issue is not included in the previous models. Hence, this paper presents a novel analytical model of the IEEE 802.11 MAC to thoroughly understand the impact of misbehaving nodes on network throughput and delay parameters. In particular, the model can express detailed backoff algorithms, so that the network performance under some typical attacks can easily be evaluated through numerical simulation results.
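
    The effect of a misbehaving node can be illustrated with a deliberately simplified slotted contention model (not the analytical model of the paper): one node never doubles its contention window after a collision and therefore captures a disproportionate share of the successful transmissions.

```python
import random

def simulate(slots=200_000, n_nodes=5, cw_min=16, cw_max=1024, cheater=0):
    """Toy slotted contention model with binary exponential backoff.
    Node `cheater` never doubles its contention window after a collision."""
    cw = [cw_min] * n_nodes
    backoff = [random.randrange(c) for c in cw]
    wins = [0] * n_nodes
    for _ in range(slots):
        transmitters = [i for i, b in enumerate(backoff) if b == 0]
        if len(transmitters) == 1:                      # successful transmission
            i = transmitters[0]
            wins[i] += 1
            cw[i] = cw_min
            backoff[i] = random.randrange(cw[i])
        elif len(transmitters) > 1:                     # collision
            for i in transmitters:
                if i != cheater:                        # honest nodes back off
                    cw[i] = min(2 * cw[i], cw_max)
                backoff[i] = random.randrange(cw[i])
        for i in range(n_nodes):
            if i not in transmitters:                   # everyone else counts down
                backoff[i] -= 1
    return wins

print(simulate())   # the cheating node typically wins far more slots than the others
```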

  14. Performance evaluation of iterative reconstruction algorithms for achieving CT radiation dose reduction — a phantom study

    Science.gov (United States)

    Dodge, Cristina T.; Tamm, Eric P.; Cody, Dianna D.; Liu, Xinming; Jensen, Corey T.; Wei, Wei; Kundra, Vikas

    2016-01-01

    The purpose of this study was to characterize image quality and dose performance with GE CT iterative reconstruction techniques, adaptive statistical iterative reconstruction (ASiR), and model‐based iterative reconstruction (MBIR), over a range of typical to low‐dose intervals using the Catphan 600 and the anthropomorphic Kyoto Kagaku abdomen phantoms. The scope of the project was to quantitatively describe the advantages and limitations of these approaches. The Catphan 600 phantom, supplemented with a fat‐equivalent oval ring, was scanned using a GE Discovery HD750 scanner at 120 kVp, 0.8 s rotation time, and pitch factors of 0.516, 0.984, and 1.375. The mA was selected for each pitch factor to achieve CTDIvol values of 24, 18, 12, 6, 3, 2, and 1 mGy. Images were reconstructed at 2.5 mm thickness with filtered back‐projection (FBP); 20%, 40%, and 70% ASiR; and MBIR. The potential for dose reduction and low‐contrast detectability were evaluated from noise and contrast‐to‐noise ratio (CNR) measurements in the CTP 404 module of the Catphan. Hounsfield units (HUs) of several materials were evaluated from the cylinder inserts in the CTP 404 module, and the modulation transfer function (MTF) was calculated from the air insert. The results were confirmed in the anthropomorphic Kyoto Kagaku abdomen phantom at 6, 3, 2, and 1 mGy. MBIR reduced noise levels five‐fold and increased CNR by a factor of five compared to FBP below 6 mGy CTDIvol, resulting in a substantial improvement in image quality. Compared to ASiR and FBP, HU in images reconstructed with MBIR were consistently lower, and this discrepancy was reversed by higher pitch factors in some materials. MBIR improved the conspicuity of the high‐contrast spatial resolution bar pattern, and MTF quantification confirmed the superior spatial resolution performance of MBIR versus FBP and ASiR at higher dose levels. While ASiR and FBP were relatively insensitive to changes in dose and pitch, the spatial
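
    As a reminder of how the CNR figures quoted in such studies are typically obtained, the sketch below computes CNR from region-of-interest statistics on a synthetic low-contrast insert; the ROI placement and the definition used (absolute mean difference divided by background noise) are one common convention, not necessarily the exact one used in this study.

```python
import numpy as np

def contrast_to_noise_ratio(image, roi_mask, background_mask):
    """CNR = |mean(ROI) - mean(background)| / std(background),
    a common definition in low-contrast detectability studies."""
    roi = image[roi_mask]
    bkg = image[background_mask]
    return abs(roi.mean() - bkg.mean()) / bkg.std()

# toy example: noisy background with an 8 HU low-contrast insert
rng = np.random.default_rng(1)
img = rng.normal(0.0, 10.0, size=(128, 128))           # noise in HU
img[40:60, 40:60] += 8.0                                # insert
roi = np.zeros_like(img, dtype=bool); roi[45:55, 45:55] = True
bkg = np.zeros_like(img, dtype=bool); bkg[90:110, 90:110] = True
print(f"CNR = {contrast_to_noise_ratio(img, roi, bkg):.2f}")
```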

  15. Comparative Performance Evaluation of Orthogonal-Signal-Generators-Based Single-Phase PLL Algorithms

    DEFF Research Database (Denmark)

    Han, Yang; Luo, Mingyu; Zhao, Xin

    2016-01-01

    The orthogonal signal generator based phase-locked loops (OSG-PLLs) are among the most popular single-phase PLLs within the areas of power electronics and power systems, mainly because they are often easy to implement and offer robust performance against grid disturbances. The main aim o...

  16. Performance Evaluation of a Novel Optimization Sequential Algorithm (SeQ Code for FTTH Network

    Directory of Open Access Journals (Sweden)

    Fazlina C.A.S.

    2017-01-01

    Full Text Available The SeQ code has advantages such as a variable cross-correlation property at any given number of users and weights, as well as effective suppression of the impact of phase-induced intensity noise (PIIN) and a multiple access interference (MAI) cancellation property. The results revealed that, at a system performance analysis of BER = 10^-9, the SeQ code is capable of achieving 1 Gbps up to 60 km.

  17. Evaluating performance of a user-trained MR lung tumor autocontouring algorithm in the context of intra- and interobserver variations.

    Science.gov (United States)

    Yip, Eugene; Yun, Jihyun; Gabos, Zsolt; Baker, Sarah; Yee, Don; Wachowicz, Keith; Rathee, Satyapal; Fallone, B Gino

    2018-01-01

    Real-time tracking of lung tumors using magnetic resonance imaging (MRI) has been proposed as a potential strategy to mitigate the ill-effects of breathing motion in radiation therapy. Several autocontouring methods have been evaluated against a "gold standard" of a single human expert user. However, contours drawn by experts have inherent intra- and interobserver variations. In this study, we aim to evaluate our user-trained autocontouring algorithm with manually drawn contours from multiple expert users, and to contextualize the accuracy of these autocontours within intra- and interobserver variations. Six nonsmall cell lung cancer patients were recruited, with institutional ethics approval. Patients were imaged with a clinical 3 T Philips MR scanner using a dynamic 2D balanced SSFP sequence under free breathing. Three radiation oncology experts, each in two separate sessions, contoured 130 dynamic images for each patient. For autocontouring, the first 30 images were used for algorithm training, and the remaining 100 images were autocontoured and evaluated. Autocontours were compared against manual contours in terms of Dice's coefficient (DC) and Hausdorff distances (d_H). Intra- and interobserver variations of the manual contours were also evaluated. When compared with the manual contours of the expert user who trained it, the algorithm generates autocontours whose evaluation metrics (same session: DC = 0.90(0.03), d_H = 3.8(1.6) mm; different session DC = 0.88(0.04), d_H = 4.3(1.5) mm) are similar to or better than intraobserver variations (DC = 0.88(0.04), and d_H = 4.3(1.7) mm) between two sessions. The algorithm's autocontours are also compared to the manual contours from different expert users with evaluation metrics (DC = 0.87(0.04), d_H = 4.8(1.7) mm) similar to interobserver variations (DC = 0.87(0.04), d_H = 4.7(1.6) mm). Our autocontouring algorithm delineates tumor contours (algorithm may be a key component of the real
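
    The two evaluation metrics used above have compact definitions for binary masks; the following sketch (NumPy/SciPy, not the authors' code) computes Dice's coefficient and a symmetric Hausdorff distance between an autocontour mask and a manual mask.

```python
import numpy as np
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(mask_a, mask_b):
    """DC = 2 |A ∩ B| / (|A| + |B|) for two binary masks."""
    a, b = mask_a.astype(bool), mask_b.astype(bool)
    return 2.0 * np.logical_and(a, b).sum() / (a.sum() + b.sum())

def hausdorff_distance(mask_a, mask_b, pixel_spacing=1.0):
    """Symmetric Hausdorff distance between the nonzero pixel sets of two
    binary masks, in the units implied by pixel_spacing."""
    pts_a = np.argwhere(mask_a)
    pts_b = np.argwhere(mask_b)
    d_ab = directed_hausdorff(pts_a, pts_b)[0]
    d_ba = directed_hausdorff(pts_b, pts_a)[0]
    return max(d_ab, d_ba) * pixel_spacing

# toy example: two slightly shifted discs standing in for auto and manual contours
yy, xx = np.mgrid[:64, :64]
auto = (yy - 32) ** 2 + (xx - 32) ** 2 < 10 ** 2
manual = (yy - 33) ** 2 + (xx - 31) ** 2 < 10 ** 2
print(dice_coefficient(auto, manual), hausdorff_distance(auto, manual, 2.7))
```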

  18. Simulation-Based Evaluation of the Performances of an Algorithm for Detecting Abnormal Disease-Related Features in Cattle Mortality Records.

    Science.gov (United States)

    Perrin, Jean-Baptiste; Durand, Benoît; Gay, Emilie; Ducrot, Christian; Hendrikx, Pascal; Calavas, Didier; Hénaux, Viviane

    2015-01-01

    We performed a simulation study to evaluate the performances of an anomaly detection algorithm considered within the framework of an automated surveillance system of cattle mortality. The method consisted of a combination of temporal regression and spatial cluster detection which allows identifying, for a given week, clusters of spatial units showing an excess of deaths in comparison with their own historical fluctuations. First, we simulated 1,000 outbreaks of a disease causing extra deaths in the French cattle population (about 200,000 herds and 20 million cattle) according to a model mimicking the spreading patterns of an infectious disease and injected these disease-related extra deaths into an authentic mortality dataset, spanning from January 2005 to January 2010. Second, we applied our algorithm to each of the 1,000 semi-synthetic datasets to identify clusters of spatial units showing an excess of deaths considering their own historical fluctuations. Third, we verified if the clusters identified by the algorithm did contain simulated extra deaths in order to evaluate the ability of the algorithm to identify unusual mortality clusters caused by an outbreak. Among the 1,000 simulations, the median duration of simulated outbreaks was 8 weeks, with a median number of 5,627 simulated deaths and 441 infected herds. Within the 12-week trial period, 73% of the simulated outbreaks were detected, with a median timeliness of 1 week, and a mean of 1.4 weeks. The proportion of outbreak weeks flagged by an alarm was 61% (i.e. sensitivity) whereas one in three alarms was a true alarm (i.e. positive predictive value). The performances of the detection algorithm were evaluated for alternative combinations of epidemiologic parameters. The results of our study confirmed that in certain conditions automated algorithms could help identify abnormal cattle mortality increases possibly related to unidentified health events.

  19. Evaluation of Variable Refrigerant Flow Systems Performance and the Enhanced Control Algorithm on Oak Ridge National Laboratory s Flexible Research Platform

    Energy Technology Data Exchange (ETDEWEB)

    Im, Piljae [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Munk, Jeffrey D [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Gehl, Anthony C [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)

    2015-06-01

    A research project “Evaluation of Variable Refrigerant Flow (VRF) Systems Performance and the Enhanced Control Algorithm on Oak Ridge National Laboratory’s (ORNL’s) Flexible Research Platform” was performed to (1) install and validate the performance of Samsung VRF systems compared with the baseline rooftop unit (RTU) variable-air-volume (VAV) system and (2) evaluate the enhanced control algorithm for the VRF system on the two-story flexible research platform (FRP) in Oak Ridge, Tennessee. Based on the VRF system designed by Samsung and ORNL, the system was installed from February 18 through April 15, 2014. The final commissioning and system optimization were completed on June 2, 2014, and the initial test for system operation was started the following day, June 3, 2014. In addition, the enhanced control algorithm was implemented and updated on June 18. After a series of additional commissioning actions, the energy performance data from the RTU and the VRF system were monitored from July 7, 2014, through February 28, 2015. Data monitoring and analysis were performed for the cooling season and heating season separately, and the calibrated simulation model was developed and used to estimate the energy performance of the RTU and VRF systems. This final report includes discussion of the design and installation of the VRF system, the data monitoring and analysis plan, the cooling season and heating season data analysis, and the building energy modeling study

  20. A hybrid algorithm for instant optimization of beam weights in anatomy-based intensity modulated radiotherapy: a performance evaluation study

    International Nuclear Information System (INIS)

    Vaitheeswaran, Ranganathan; Sathiya Narayanan, V.K.; Bhangle, Janhavi R.; Nirhali, Amit; Kumar, Namita; Basu, Sumit; Maiya, Vikram

    2011-01-01

    The study aims to introduce a hybrid optimization algorithm for anatomy-based intensity modulated radiotherapy (AB-IMRT). Our proposal is that by integrating an exact optimization algorithm with a heuristic optimization algorithm, the advantages of both the algorithms can be combined, which will lead to an efficient global optimizer solving the problem at a very fast rate. Our hybrid approach combines Gaussian elimination algorithm (exact optimizer) with fast simulated annealing algorithm (a heuristic global optimizer) for the optimization of beam weights in AB-IMRT. The algorithm has been implemented using MATLAB software. The optimization efficiency of the hybrid algorithm is clarified by (i) analysis of the numerical characteristics of the algorithm and (ii) analysis of the clinical capabilities of the algorithm. The numerical and clinical characteristics of the hybrid algorithm are compared with Gaussian elimination method (GEM) and fast simulated annealing (FSA). The numerical characteristics include convergence, consistency, number of iterations and overall optimization speed, which were analyzed for the respective cases of 8 patients. The clinical capabilities of the hybrid algorithm are demonstrated in cases of (a) prostate and (b) brain. The analyses reveal that (i) the convergence speed of the hybrid algorithm is approximately three times higher than that of FSA algorithm (ii) the convergence (percentage reduction in the cost function) in hybrid algorithm is about 20% improved as compared to that in GEM algorithm (iii) the hybrid algorithm is capable of producing relatively better treatment plans in terms of Conformity Index (CI) (∼ 2% - 5% improvement) and Homogeneity Index (HI) (∼ 4% - 10% improvement) as compared to GEM and FSA algorithms (iv) the sparing of organs at risk in hybrid algorithm-based plans is better than that in GEM-based plans and comparable to that in FSA-based plans; and (v) the beam weights resulting from the hybrid algorithm are

  1. Evaluation of Real-Time Performance of the Virtual Seismologist Earthquake Early Warning Algorithm in Switzerland and California

    Science.gov (United States)

    Behr, Y.; Cua, G. B.; Clinton, J. F.; Heaton, T. H.

    2012-12-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms - the other two being ElarmS (Allen and Kanamori, 2003) and On-Site (Wu and Kanamori, 2005; Boese et al., 2008) algorithms - that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS will be installed and tested at other European networks. VS has been running in real-time on stations of the Southern California Seismic Network (SCSN) since July 2008, and on stations of the Berkeley Digital Seismic Network (BDSN) and the USGS Menlo Park strong motion network in northern California since February 2009. In Switzerland, VS has been running in real-time on stations monitored by the Swiss Seismological Service (including stations from Austria, France, Germany, and Italy) since 2010. We present summaries of the real-time performance of VS in Switzerland and California over the past two and three years respectively. The empirical relationships used by VS to estimate magnitudes and ground motion, originally derived from southern California data, are demonstrated to perform well in northern California and Switzerland. Implementation in real-time and off-line testing in Europe will potentially be extended to southern Italy, western Greece, Istanbul, Romania, and Iceland. Integration of the VS algorithm into both the CISN Advanced

  2. Monitoring endemic livestock diseases using laboratory diagnostic data: A simulation study to evaluate the performance of univariate process monitoring control algorithms.

    Science.gov (United States)

    Lopes Antunes, Ana Carolina; Dórea, Fernanda; Halasa, Tariq; Toft, Nils

    2016-05-01

    Surveillance systems are critical for accurate, timely monitoring and effective disease control. In this study, we investigated the performance of univariate process monitoring control algorithms in detecting changes in seroprevalence for endemic diseases. We also assessed the effect of sample size (number of sentinel herds tested in the surveillance system) on the performance of the algorithms. Three univariate process monitoring control algorithms were compared: the Shewhart p chart (PSHEW), cumulative sum (CUSUM) and exponentially weighted moving average (EWMA). Increases in seroprevalence were simulated from 0.10 to 0.15 and 0.20 over 4, 8, 24, 52 and 104 weeks. Each epidemic scenario was run with 2000 iterations. The cumulative sensitivity (CumSe) and timeliness were used to evaluate the algorithms' performance with a 1% false alarm rate. Using these performance evaluation criteria, it was possible to assess the accuracy and timeliness of the surveillance system working in real-time. The results showed that EWMA and PSHEW had higher CumSe (when compared with the CUSUM) from week 1 until the end of the period for all simulated scenarios. Changes in seroprevalence from 0.10 to 0.20 were more easily detected (higher CumSe) than changes from 0.10 to 0.15 for all three algorithms. Similar results were found with EWMA and PSHEW, based on the median time to detection. Changes in the seroprevalence were detected later with CUSUM, compared to EWMA and PSHEW for the different scenarios. Increasing the sample size 10-fold halved the time to detection (CumSe=1), whereas increasing the sample size 100-fold reduced the time to detection by a factor of 6. This study investigated the performance of three univariate process monitoring control algorithms in monitoring endemic diseases. It was shown that automated systems based on these detection methods identified changes in seroprevalence at different times. Increasing the number of tested herds would lead to faster
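
    The three detectors have very small cores. The sketch below gives minimal weekly-prevalence versions of the Shewhart p chart, CUSUM and EWMA rules; the baselines, allowances and thresholds are placeholders, not the calibrated values used in the study.

```python
import numpy as np

def shewhart_p_alarms(p, n, p0, z=2.33):
    """Shewhart p chart: alarm when the observed proportion exceeds the upper
    control limit p0 + z * sqrt(p0 (1 - p0) / n)."""
    ucl = p0 + z * np.sqrt(p0 * (1 - p0) / n)
    return p > ucl

def cusum_alarms(p, p0, k=0.01, h=0.05):
    """One-sided CUSUM: accumulate excess over the baseline plus allowance k,
    alarm when the running sum exceeds h."""
    s, alarms = 0.0, []
    for x in p:
        s = max(0.0, s + (x - p0 - k))
        alarms.append(s > h)
    return np.array(alarms)

def ewma_alarms(p, p0, lam=0.3, z=2.33, sigma=0.02):
    """EWMA of weekly seroprevalence with an approximate control limit."""
    limit = p0 + z * sigma * np.sqrt(lam / (2 - lam))
    ewma, alarms = p0, []
    for x in p:
        ewma = lam * x + (1 - lam) * ewma
        alarms.append(ewma > limit)
    return np.array(alarms)

# simulated slow rise from 0.10 to 0.15 over 24 weeks, 100 herds tested weekly
rng = np.random.default_rng(0)
true_p = np.concatenate([np.full(52, 0.10), np.linspace(0.10, 0.15, 24)])
observed = rng.binomial(100, true_p) / 100
for name, alarms in [("Shewhart", shewhart_p_alarms(observed, n=100, p0=0.10)),
                     ("CUSUM",    cusum_alarms(observed, p0=0.10)),
                     ("EWMA",     ewma_alarms(observed, p0=0.10))]:
    print(name, "alarm weeks:", np.where(alarms)[0])
```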

  3. Evaluation of Real-Time and Off-Line Performance of the Virtual Seismologist Earthquake Early Warning Algorithm in Switzerland

    Science.gov (United States)

    Behr, Yannik; Clinton, John; Cua, Georgia; Cauzzi, Carlo; Heimers, Stefan; Kästli, Philipp; Becker, Jan; Heaton, Thomas

    2013-04-01

    The Virtual Seismologist (VS) method is a Bayesian approach to regional network-based earthquake early warning (EEW) originally formulated by Cua and Heaton (2007). Implementation of VS into real-time EEW codes has been an on-going effort of the Swiss Seismological Service at ETH Zürich since 2006, with support from ETH Zürich, various European projects, and the United States Geological Survey (USGS). VS is one of three EEW algorithms that form the basis of the California Integrated Seismic Network (CISN) ShakeAlert system, a USGS-funded prototype end-to-end EEW system that could potentially be implemented in California. In Europe, VS is currently operating as a real-time test system in Switzerland. As part of the on-going EU project REAKT (Strategies and Tools for Real-Time Earthquake Risk Reduction), VS installations in southern Italy, western Greece, Istanbul, Romania, and Iceland are planned or underway. In Switzerland, VS has been running in real-time on stations monitored by the Swiss Seismological Service (including stations from Austria, France, Germany, and Italy) since 2010. While originally based on the Earthworm system it has recently been ported to the SeisComp3 system. Besides taking advantage of SeisComp3's picking and phase association capabilities it greatly simplifies the potential installation of VS at networks in particular those already running SeisComp3. We present the architecture of the new SeisComp3 based version and compare its results from off-line tests with the real-time performance of VS in Switzerland over the past two years. We further show that the empirical relationships used by VS to estimate magnitudes and ground motion, originally derived from southern California data, perform well in Switzerland.

  4. Development of design window evaluation and display system. 2. Confirmations of the basic performance of genetic algorithm

    International Nuclear Information System (INIS)

    Murakami, Satoshi; Muramatsu, Toshiharu

    2003-05-01

    A large-scale sodium-cooled fast breeder reactor in the feasibility studies on commercialized fast reactors, which tends toward a simplified and compact system configuration, is being investigated; however, special attention should be paid in the thermohydraulic design to gas entrainment from free surfaces, flow-induced vibration of in-vessel components, thermal shock to various structures due to high-speed coolant flows, nonsymmetrical coolant flows, etc. in the reactor vessel. Because many thermal-hydraulic issues are intricately related to each other in the reactor design, multiple-criteria decision-making based on an understanding of the relationships among the thermal-hydraulic issues is indispensable for designing the reactor efficiently. A Genetic Algorithm (GA), which is one of the methods for multiple-criteria decision-making, was applied to typical single-objective optimization problems and its basic performance was then confirmed. From the analyses, the following results have been obtained. (1) In the unimodal optimization problem, it was confirmed that the GA has sufficient searching ability. (2) It was confirmed that the GA can also be applied to discrete optimization problems. (3) In the case of applying the GA to the combinatorial optimization problem, the searching efficiency is improved more by increasing the number of experiment repetitions than by increasing the maximum number of generations. (4) In the case of applying the GA to the multimodal optimization problem, the searching ability is improved by using the two genetic operators (i.e., mutation and the elite strategy) at the same time. (author)
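
    A minimal real-coded GA using the two operators mentioned above (mutation and an elite strategy), applied to a simple unimodal test function, is sketched below; it illustrates the kind of basic-performance check described, not the design-window evaluation system itself.

```python
import random

def genetic_algorithm(fitness, bounds, pop_size=40, generations=100,
                      mutation_rate=0.1, elite=2):
    """Minimal real-coded GA: tournament selection, blend crossover,
    Gaussian mutation and an elite strategy (minimization)."""
    lo, hi = bounds
    dim = len(lo)
    pop = [[random.uniform(lo[d], hi[d]) for d in range(dim)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness)
        new_pop = scored[:elite]                              # elite strategy
        while len(new_pop) < pop_size:
            p1 = min(random.sample(scored, 3), key=fitness)   # tournament selection
            p2 = min(random.sample(scored, 3), key=fitness)
            child = [(a + b) / 2 for a, b in zip(p1, p2)]     # blend crossover
            for d in range(dim):                              # Gaussian mutation
                if random.random() < mutation_rate:
                    child[d] += random.gauss(0, 0.1 * (hi[d] - lo[d]))
                    child[d] = min(max(child[d], lo[d]), hi[d])
            new_pop.append(child)
        pop = new_pop
    return min(pop, key=fitness)

# unimodal test problem: minimise the sphere function on [-5, 5]^3
best = genetic_algorithm(lambda x: sum(v * v for v in x),
                         bounds=([-5] * 3, [5] * 3))
print("best solution:", best)
```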

  5. Performance of the "CCS Algorithm" in real world patients.

    Science.gov (United States)

    LaHaye, Stephen A; Olesen, Jonas B; Lacombe, Shawn P

    2015-06-01

    With the publication of the 2014 Focused Update of the Canadian Cardiovascular Society Guidelines for the Management of Atrial Fibrillation, the Canadian Cardiovascular Society Atrial Fibrillation Guidelines Committee has introduced a new triage and management algorithm; the so-called "CCS Algorithm". The CCS Algorithm is based upon expert opinion of the best available evidence; however, the CCS Algorithm has not yet been validated. Accordingly, the purpose of this study is to evaluate the performance of the CCS Algorithm in a cohort of real world patients. We compared the CCS Algorithm with the European Society of Cardiology (ESC) Algorithm in 172 hospital inpatients who are at risk of stroke due to non-valvular atrial fibrillation in whom anticoagulant therapy was being considered. The CCS Algorithm and the ESC Algorithm were concordant in 170/172 patients (99% of the time). There were two patients (1%) with vascular disease, but no other thromboembolic risk factors, which were classified as requiring oral anticoagulant therapy using the ESC Algorithm, but for whom ASA was recommended by the CCS Algorithm. The CCS Algorithm appears to be unnecessarily complicated in so far as it does not appear to provide any additional discriminatory value above and beyond the use of the ESC Algorithm, and its use could result in under treatment of patients, specifically female patients with vascular disease, whose real risk of stroke has been understated by the Guidelines.

  6. Comparing the analytical performances of Micro-NIR and FT-NIR spectrometers in the evaluation of acerola fruit quality, using PLS and SVM regression algorithms.

    Science.gov (United States)

    Malegori, Cristina; Nascimento Marques, Emanuel José; de Freitas, Sergio Tonetto; Pimentel, Maria Fernanda; Pasquini, Celio; Casiraghi, Ernestina

    2017-04-01

    The main goal of this study was to investigate the analytical performances of a state-of-the-art device, one of the smallest dispersion NIR spectrometers on the market (MicroNIR 1700), making a critical comparison with a benchtop FT-NIR spectrometer in the evaluation of the prediction accuracy. In particular, the aim of this study was to estimate, in a non-destructive manner, titratable acidity and ascorbic acid content in acerola fruit during ripening, with a view to the direct in-field applicability of this new miniaturised handheld device. Acerola (Malpighia emarginata DC.) is a super-fruit characterised by a considerable amount of ascorbic acid, ranging from 1.0% to 4.5%. However, during ripening, acerola colour changes and the fruit may lose as much as half of its ascorbic acid content. Because the variability of chemical parameters followed a non-strictly linear profile, two different regression algorithms were compared: PLS and SVM. Regression models obtained with Micro-NIR spectra give better results using the SVM algorithm, for both ascorbic acid and titratable acidity estimation. FT-NIR data give comparable results using both SVM and PLS algorithms, with lower errors for SVM regression. The prediction ability of the two instruments was statistically compared using the Passing-Bablok regression algorithm; the outcomes are critically discussed together with the regression models, showing the suitability of the portable Micro-NIR for in-field monitoring of chemical parameters of interest in acerola fruits. Copyright © 2016 Elsevier B.V. All rights reserved.
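
    A compact way to run this kind of PLS-versus-SVM comparison with scikit-learn is sketched below on synthetic spectra; the data, component count and SVR hyperparameters are placeholders, not those of the acerola study.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.svm import SVR
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score

# synthetic "spectra": 150 samples x 125 wavelength channels, with a weakly
# non-linear relation to the response (standing in for ascorbic acid content)
rng = np.random.default_rng(0)
X = rng.normal(size=(150, 125))
y = X[:, 10] + 0.3 * X[:, 40] ** 2 + 0.1 * rng.normal(size=150)

pls = PLSRegression(n_components=8)
svm = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.05))

for name, model in [("PLS", pls), ("SVM", svm)]:
    rmse = -cross_val_score(model, X, y, cv=5,
                            scoring="neg_root_mean_squared_error").mean()
    print(f"{name}: RMSECV = {rmse:.3f}")
```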

  7. Partial Evaluation of the Euclidian Algorithm

    DEFF Research Database (Denmark)

    Danvy, Olivier; Goldberg, Mayer

    1997-01-01

    -like behavior. Each of them presents a challenge for partial evaluation. The Euclidian algorithm is one of them, and in this article, we make it amenable to partial evaluation. We observe that the number of iterations in the Euclidian algorithm is bounded by a number that can be computed given either of the two...... arguments. We thus rephrase this algorithm using bounded recursion. The resulting program is better suited for automatic unfolding and thus for partial evaluation. Its specialization is efficient....
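
    The rephrasing described can be sketched as follows: one well-known bound (Lamé's theorem) limits the number of division steps to five times the number of decimal digits of the smaller argument, so the loop count is computable from a single argument; the bound and formulation actually used by the authors may differ.

```python
def gcd_bounded(a, b):
    """Euclid's gcd rewritten with a statically bounded loop. By Lamé's theorem
    the number of division steps is at most 5 * (decimal digits of min(a, b)),
    so the iteration count can be fixed from one argument alone -- the property
    that makes the algorithm better suited to unfolding and partial evaluation."""
    bound = 5 * len(str(min(a, b))) if min(a, b) > 0 else 1
    for _ in range(bound):
        if b != 0:              # once b reaches 0, the remaining iterations are no-ops
            a, b = b, a % b
    return a

assert gcd_bounded(1071, 462) == 21
```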

  8. Performance of Jet Algorithms in CMS

    CERN Document Server

    CMS Collaboration

    The CMS Combined Software and Analysis Challenge 2007 (CSA07) is well underway and expected to produce a wealth of physics analyses to be applied to the first incoming detector data in 2008. The JetMET group of CMS supports four different jet clustering algorithms for the CSA07 Monte Carlo samples, with two different parameterizations each: fast kT, SISCone, midpoint cone, and iterative cone. We present several studies comparing the performance of these algorithms using QCD dijet and ttbar Monte Carlo samples. We specifically observe that the SISCone algorithm performs equal to or better than the midpoint cone algorithm in all presented studies and propose that SISCone be adopted as the preferred cone-based jet clustering algorithm in future CMS physics analyses, as it is preferred by theorists for its infrared- and collinear-safety to all orders of perturbative QCD. We furthermore encourage the use of the fast kT algorithm which is found to perform as good as any other algorithm under study, features dramatically reduc...

  9. Evaluation of virtual monoenergetic imaging algorithms for dual-energy carotid and intracerebral CT angiography: Effects on image quality, artefacts and diagnostic performance for the detection of stenosis.

    Science.gov (United States)

    Leithner, Doris; Mahmoudi, Scherwin; Wichmann, Julian L; Martin, Simon S; Lenga, Lukas; Albrecht, Moritz H; Booz, Christian; Arendt, Christophe T; Beeres, Martin; D'Angelo, Tommaso; Bodelle, Boris; Vogl, Thomas J; Scholtz, Jan-Erik

    2018-02-01

    To investigate the impact of traditional (VMI) and noise-optimized virtual monoenergetic imaging (VMI+) algorithms on quantitative and qualitative image quality, and the assessment of stenosis in carotid and intracranial dual-energy CTA (DE-CTA). DE-CTA studies of 40 patients performed on a third-generation 192-slice dual-source CT scanner were included in this retrospective study. 120-kVp image-equivalent linearly-blended, VMI and VMI+ series were reconstructed. Quantitative analysis included evaluation of contrast-to-noise ratios (CNR) of the aorta, common carotid artery, internal carotid artery, middle cerebral artery, and basilar artery. VMI and VMI+ with highest CNR, and linearly-blended series were rated qualitatively. Three radiologists assessed artefacts and suitability for evaluation at shoulder height, carotid bifurcation, siphon, and intracranial using 5-point Likert scales. Detection and grading of stenosis were performed at carotid bifurcation and siphon. Highest CNR values were observed for 40-keV VMI+ compared to 65-keV VMI and linearly-blended images (P evaluation at shoulder and bifurcation height. Suitability was significantly higher in VMI+ and VMI compared to linearly-blended images for intracranial and ICA assessment (P performance. 40-keV VMI+ showed improved quantitative image quality compared to 65-keV VMI and linearly-blended series in supraaortic DE-CTA. VMI and VMI+ provided increased suitability for carotid and intracranial artery evaluation with excellent assessment of stenosis, but did not translate into increased diagnostic performance. Copyright © 2017 Elsevier B.V. All rights reserved.

  10. Evaluating the statistical performance of less applied algorithms in classification of worldview-3 imagery data in an urbanized landscape

    Science.gov (United States)

    Ranaie, Mehrdad; Soffianian, Alireza; Pourmanafi, Saeid; Mirghaffari, Noorollah; Tarkesh, Mostafa

    2018-03-01

    In the recent decade, analyzing remotely sensed imagery has become one of the most common and widely used procedures in environmental studies. In this case, supervised image classification techniques play a central role. Hence, using a high-resolution Worldview-3 image over a mixed urbanized landscape in Iran, three less applied image classification methods, including Bagged CART, the stochastic gradient boosting model and a neural network with feature extraction, were tested and compared with two prevalent methods: random forest and a support vector machine with a linear kernel. To do so, each method was run ten times and three validation techniques were used to estimate the accuracy statistics, consisting of cross-validation, independent validation and validation with the full training data. Moreover, using ANOVA and the Tukey test, the statistical significance of the differences between the classification methods was assessed. In general, the results showed that random forest, with a marginal difference compared to Bagged CART and the stochastic gradient boosting model, is the best performing method, whilst based on independent validation there was no significant difference between the performances of the classification methods. It should finally be noted that the neural network with feature extraction and the linear support vector machine had better processing speed than the others.

  11. Component evaluation testing and analysis algorithms.

    Energy Technology Data Exchange (ETDEWEB)

    Hart, Darren M.; Merchant, Bion John

    2011-10-01

    The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.

  12. Flavour Tagging Algorithms and Performances in LHCb

    CERN Document Server

    Calvi, M; Musy, M

    2007-01-01

    In this note we describe the general characteristics of the LHCb flavour tagging algorithms and summarize the tagging performances on the Monte Carlo samples generated for the Data Challenge 2004 in different decay channels. We also discuss some systematics effects and possible methods to extract the mistag fraction in real data.

  13. Analysis of ANSI N13.11: the performance algorithm

    International Nuclear Information System (INIS)

    Roberson, P.L.; Hadley, R.T.; Thorson, M.R.

    1982-06-01

    The method of performance testing for personnel dosimeters specified in draft ANSI N13.11, Criteria for Testing Personnel Dosimetry Performance is evaluated. Points addressed are: (1) operational behavior of the performance algorithm; (2) dependence on the number of test dosimeters; (3) basis for choosing an algorithm; and (4) other possible algorithms. The performance algorithm evaluated for each test category is formed by adding the calibration bias and its standard deviation. This algorithm is not optimal due to a high dependence on the standard deviation. The dependence of the calibration bias on the standard deviation is significant because of the low number of dosimeters (15) evaluated per category. For categories with large standard deviations the uncertainty in determining the performance criterion is large. To have a reasonable chance of passing all categories in one test, we required a 95% probability of passing each category. Then, the maximum permissible standard deviation is 30% even with a zero bias. For test categories with standard deviations <10%, the bias can be as high as 35%. For intermediate standard deviations, the chance of passing a category is improved by using a 5 to 10% negative bias. Most multipurpose personnel dosimetry systems will probably require detailed calibration adjustments to pass all categories within two rounds of testing
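
    The performance quantity described (calibration bias plus its standard deviation) can be computed directly from reported and delivered doses, as in the sketch below; the tolerance level shown is a placeholder, since the standard sets category-specific limits.

```python
import numpy as np

def performance_quantity(reported, delivered):
    """ANSI N13.11-style performance quantity for one test category:
    magnitude of the mean calibration bias plus the standard deviation of
    the individual biases, computed over the test dosimeters."""
    p = (np.asarray(reported) - np.asarray(delivered)) / np.asarray(delivered)
    bias = p.mean()
    std = p.std(ddof=1)
    return abs(bias) + std

# 15 dosimeters with a +5% systematic bias and 10% random spread
rng = np.random.default_rng(0)
delivered = rng.uniform(1.0, 10.0, size=15)            # delivered doses
reported = delivered * (1.05 + 0.10 * rng.normal(size=15))
score = performance_quantity(reported, delivered)
TOLERANCE = 0.5   # placeholder value; the standard's limits vary by test category
print(f"|B| + S = {score:.2f}  ->  {'pass' if score <= TOLERANCE else 'fail'}")
```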

  14. Genetic algorithm for nuclear data evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Arthur, Jennifer Ann [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-02-02

    These are slides on genetic algorithm for nuclear data evaluation. The following is covered: initial population, fitness (outer loop), calculate fitness, selection (first part of inner loop), reproduction (second part of inner loop), solution, and examples.

  15. Algorithms evaluation for fundus images enhancement

    International Nuclear Information System (INIS)

    Braem, V; Marcos, M; Bizai, G; Drozdowicz, B; Salvatelli, A

    2011-01-01

    Color images of the retina inherently involve noise and illumination artifacts. In order to improve the diagnostic quality of the images, it is desirable to homogenize the non-uniform illumination and increase contrast while preserving color characteristics. The visual result of different pre-processing techniques can be very dissimilar, and it is necessary to make an objective assessment of the techniques in order to select the most suitable one. In this article the performance of eight algorithms for correcting non-uniform illumination, modifying contrast and preserving color was evaluated. In order to choose the most suitable one, a general score was proposed. The results made a good impression on the experts, although some differences suggest that the image with the best statistical quality is not necessarily the one with the best diagnostic quality to the trained clinician's eye. This means that the best pre-processing algorithm for automatic classification may differ from the most suitable one for visual diagnosis. However, both should result in the same final diagnosis.

  16. Development and Evaluation of Algorithms for Breath Alcohol Screening.

    Science.gov (United States)

    Ljungblad, Jonas; Hök, Bertil; Ekström, Mikael

    2016-04-01

    Breath alcohol screening is important for traffic safety, access control and other areas of health promotion. A family of sensor devices useful for these purposes is being developed and evaluated. This paper is focusing on algorithms for the determination of breath alcohol concentration in diluted breath samples using carbon dioxide to compensate for the dilution. The examined algorithms make use of signal averaging, weighting and personalization to reduce estimation errors. Evaluation has been performed by using data from a previously conducted human study. It is concluded that these features in combination will significantly reduce the random error compared to the signal averaging algorithm taken alone.

  17. An Evaluation of Concurrent Priority Queue Algorithms

    Science.gov (United States)

    1991-02-01

    [OCR-damaged record: the scanned text is largely illegible. The legible fragments indicate an evaluation of concurrent priority queue algorithms by Qin Huang (B.S., University of Science and Technology of China), with algorithms for a shortest-path problem among those tested.]

  18. Evaluation of train-speed control algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Slavik, M.M. [BKS Advantech (Pty.) Ltd., Pretoria (South Africa)

    2000-07-01

    A relatively simple and fast simulator has been developed and used for the preliminary testing of train cruise-control algorithms. The simulation is done in software on a PC. The simulator is used to gauge the consequences and feasibility of a cruise-control strategy prior to more elaborate testing and evaluation. The tool was used to design and pre-test a train-cruise control algorithm called NSS, which does not require knowledge of exact train mass, vertical alignment, or actual braking force. Only continuous measurements on the speed of the train and electrical current are required. With this modest input, the NSS algorithm effected speed changes smoothly and efficiently for a wide range of operating conditions. (orig.)

  19. Image quality evaluation of full reference algorithm

    Science.gov (United States)

    He, Nannan; Xie, Kai; Li, Tong; Ye, Yushan

    2018-03-01

    Image quality evaluation is a classic research topic; the goal is to design algorithms whose evaluation values are consistent with subjective perception. This paper mainly introduces several typical full-reference objective evaluation methods: Mean Squared Error (MSE), Peak Signal-to-Noise Ratio (PSNR), Structural Similarity (SSIM) and Feature Similarity (FSIM). The different evaluation methods are tested in MATLAB, and their advantages and disadvantages are obtained by analyzing and comparing them. MSE and PSNR are simple, but they do not introduce human visual system (HVS) characteristics into the image quality evaluation, so their evaluation results are not ideal. SSIM shows good correlation and is simple to compute, because it incorporates the human visual effect into the image quality evaluation; however, the SSIM method is based on a hypothesis, so its evaluation result is limited. The FSIM method can be used for testing both gray images and color images, and its results are better. Experimental results show that the new image quality evaluation algorithm based on FSIM is more accurate.
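
    MSE and PSNR, the two simplest of the metrics above, have one-line definitions; a small NumPy sketch follows (SSIM and FSIM involve structural comparisons and are usually taken from a library rather than re-implemented).

```python
import numpy as np

def mse(reference, test):
    """Mean squared error between a reference and a test image."""
    diff = reference.astype(np.float64) - test.astype(np.float64)
    return np.mean(diff ** 2)

def psnr(reference, test, max_value=255.0):
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    m = mse(reference, test)
    return float("inf") if m == 0 else 10.0 * np.log10(max_value ** 2 / m)

# toy example: a reference image and a noisy copy
rng = np.random.default_rng(0)
ref = rng.integers(0, 256, size=(256, 256)).astype(np.uint8)
noisy = np.clip(ref + rng.normal(0, 5, ref.shape), 0, 255).astype(np.uint8)
print(f"MSE = {mse(ref, noisy):.1f}, PSNR = {psnr(ref, noisy):.1f} dB")
```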

  20. Using network properties to evaluate targeted immunization algorithms

    Directory of Open Access Journals (Sweden)

    Bita Shams

    2014-09-01

    Full Text Available Immunization of complex networks with a minimal or limited budget is a challenging issue for the research community. Despite a large literature on network immunization, no comprehensive research has been conducted on the evaluation and comparison of immunization algorithms. In this paper, we propose an evaluation framework for immunization algorithms regarding the available amount of vaccination resources, the goal of the immunization program, and time complexity. The evaluation framework is designed based on network topological metrics and is extensible to all epidemic spreading models. Applying the evaluation framework to well-known targeted immunization algorithms shows that, in general, immunization based on PageRank centrality outperforms other targeting strategies in various types of networks, whereas closeness and eigenvector centrality exhibit the worst performance.
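
    As an illustration of targeting by PageRank centrality (a sketch, not the paper's evaluation framework), the snippet below vaccinates the highest-PageRank nodes of a synthetic scale-free network and reports the size of the largest remaining connected component as a simple topological proxy for residual epidemic potential.

```python
import networkx as nx

def immunize_by_pagerank(graph, budget):
    """Targeted immunization sketch: remove the `budget` nodes with the highest
    PageRank centrality and report the size of the largest connected component
    of the residual network."""
    ranked = sorted(nx.pagerank(graph).items(), key=lambda kv: kv[1], reverse=True)
    vaccinated = [node for node, _ in ranked[:budget]]
    residual = graph.copy()
    residual.remove_nodes_from(vaccinated)
    if residual.number_of_nodes() == 0:
        return vaccinated, 0
    giant = max(nx.connected_components(residual), key=len)
    return vaccinated, len(giant)

g = nx.barabasi_albert_graph(1000, 3, seed=1)          # synthetic scale-free network
for budget in (10, 50, 100):
    _, giant_size = immunize_by_pagerank(g, budget)
    print(f"budget {budget:3d}: largest component after immunization = {giant_size}")
```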

  1. Atmospheric turbulence and sensor system effects on biometric algorithm performance

    Science.gov (United States)

    Espinola, Richard L.; Leonard, Kevin R.; Byrd, Kenneth A.; Potvin, Guy

    2015-05-01

    Biometric technologies composed of electro-optical/infrared (EO/IR) sensor systems and advanced matching algorithms are being used in various force protection/security and tactical surveillance applications. To date, most of these sensor systems have been widely used in controlled conditions with varying success (e.g., short range, uniform illumination, cooperative subjects). However the limiting conditions of such systems have yet to be fully studied for long range applications and degraded imaging environments. Biometric technologies used for long range applications will invariably suffer from the effects of atmospheric turbulence degradation. Atmospheric turbulence causes blur, distortion and intensity fluctuations that can severely degrade image quality of electro-optic and thermal imaging systems and, for the case of biometrics technology, translate to poor matching algorithm performance. In this paper, we evaluate the effects of atmospheric turbulence and sensor resolution on biometric matching algorithm performance. We use a subset of the Facial Recognition Technology (FERET) database and a commercial algorithm to analyze facial recognition performance on turbulence degraded facial images. The goal of this work is to understand the feasibility of long-range facial recognition in degraded imaging conditions, and the utility of camera parameter trade studies to enable the design of the next generation biometrics sensor systems.

  2. Community detection algorithm evaluation with ground-truth data

    Science.gov (United States)

    Jebabli, Malek; Cherifi, Hocine; Cherifi, Chantal; Hamouda, Atef

    2018-02-01

    Community structure is of paramount importance for the understanding of complex networks. Consequently, there is a tremendous effort in order to develop efficient community detection algorithms. Unfortunately, the issue of a fair assessment of these algorithms is a thriving open question. If the ground-truth community structure is available, various clustering-based metrics are used in order to compare it versus the one discovered by these algorithms. However, these metrics defined at the node level are fairly insensitive to the variation of the overall community structure. To overcome these limitations, we propose to exploit the topological features of the 'community graphs' (where the nodes are the communities and the links represent their interactions) in order to evaluate the algorithms. To illustrate our methodology, we conduct a comprehensive analysis of overlapping community detection algorithms using a set of real-world networks with known a priori community structure. Results provide a better perception of their relative performance as compared to classical metrics. Moreover, they show that more emphasis should be put on the topology of the community structure. We also investigate the relationship between the topological properties of the community structure and the alternative evaluation measures (quality metrics and clustering metrics). It appears clearly that they present different views of the community structure and that they must be combined in order to evaluate the effectiveness of community detection algorithms.

  3. Evaluation of Algorithms for Compressing Hyperspectral Data

    Science.gov (United States)

    Cook, Sid; Harsanyi, Joseph; Faber, Vance

    2003-01-01

    With EO-1 Hyperion in orbit, NASA is showing its continued commitment to hyperspectral imaging (HSI). As HSI sensor technology continues to mature, the ever-increasing amounts of sensor data generated will result in a need for more cost-effective communication and data handling systems. Lockheed Martin, with considerable experience in spacecraft design and in developing special-purpose onboard processors, has teamed with Applied Signal & Image Technology (ASIT), which has an extensive heritage in HSI spectral compression, and Mapping Science (MSI), for JPEG 2000 spatial compression expertise, to develop a real-time, intelligent onboard processing (OBP) system to reduce HSI sensor downlink requirements. Our goal is to reduce the downlink requirement by a factor > 100, while retaining the spectral and spatial fidelity of the sensor data needed to satisfy the many science, military, and intelligence goals of these systems. Our compression algorithms leverage commercial off-the-shelf (COTS) spectral and spatial exploitation algorithms. We are currently evaluating these compression algorithms through statistical analysis and review by NASA scientists. We are also developing special-purpose processors for executing these algorithms onboard a spacecraft.

  4. Evaluation of Underwater Image Enhancement Algorithms under Different Environmental Conditions

    Directory of Open Access Journals (Sweden)

    Marino Mangeruga

    2018-01-01

    Full Text Available Underwater images usually suffer from poor visibility, lack of contrast and colour casting, mainly due to light absorption and scattering. In the literature, there are many algorithms aimed at enhancing the quality of underwater images through different approaches. Our purpose was to identify an algorithm that performs well in different environmental conditions. We selected some state-of-the-art algorithms and employed them to enhance a dataset of images produced at various underwater sites, representing different environmental and illumination conditions. These enhanced images were then evaluated through quantitative metrics. By analysing the results of these metrics, we tried to understand which of the selected algorithms performed better than the others. Another purpose of our research was to establish whether a quantitative metric is enough to judge the behaviour of an underwater image enhancement algorithm. We aim to demonstrate that, even if the metrics can provide an indicative estimation of image quality, they can lead to inconsistent or erroneous evaluations.

  5. Queue and stack sorting algorithm optimization and performance analysis

    Science.gov (United States)

    Qian, Mingzhu; Wang, Xiaobao

    2018-04-01

    Sorting algorithms are among the basic operations of software development, and data structures courses cover a wide range of them. The performance of a sorting algorithm is directly related to the efficiency of the software, and much research effort has gone into making such algorithms as efficient as possible. Here the authors further investigate sorting algorithms that combine a queue with stacks. The algorithm relies on alternating operations between queue and stack storage, thus avoiding the large number of exchange or move operations required by traditional sorts. Building on existing work, the algorithm is improved and optimized with a focus on its time complexity, and its space complexity and stability are studied as well. The experimental results show that the improvement is effective and that the improved and optimized algorithm is more practical.
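
    The paper's own optimized procedure is not given in the abstract; as a purely illustrative sketch of the general idea of sorting by alternating between a queue and a stack, the snippet below keeps the sorted prefix on a stack and parks displaced elements back at the front of a deque-based queue, so elements are moved rather than swapped in place.

    ```python
    from collections import deque

    def queue_stack_sort(items):
        """Insertion-sort `items` by alternating between a queue and a stack."""
        queue = deque(items)
        stack = []                               # sorted prefix, smallest at the bottom
        while queue:
            current = queue.popleft()
            while stack and stack[-1] > current:
                queue.appendleft(stack.pop())    # park larger elements back in the queue
            stack.append(current)
        return stack                             # ascending order

    print(queue_stack_sort([5, 1, 4, 2, 3]))     # [1, 2, 3, 4, 5]
    ```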

  6. Evaluating and comparing algorithms for respiratory motion prediction

    International Nuclear Information System (INIS)

    Ernst, F; Dürichen, R; Schlaefer, A; Schweikard, A

    2013-01-01

    In robotic radiosurgery, it is necessary to compensate for systematic latencies arising from target tracking and mechanical constraints. This compensation is usually achieved by means of an algorithm which computes the future target position. In most scientific works on respiratory motion prediction, only one or two algorithms are evaluated on a limited amount of very short motion traces. The purpose of this work is to gain more insight into the real world capabilities of respiratory motion prediction methods by evaluating many algorithms on an unprecedented amount of data. We have evaluated six algorithms, the normalized least mean squares (nLMS), recursive least squares (RLS), multi-step linear methods (MULIN), wavelet-based multiscale autoregression (wLMS), extended Kalman filtering, and ε-support vector regression (SVRpred) methods, on an extensive database of 304 respiratory motion traces. The traces were collected during treatment with the CyberKnife (Accuray, Inc., Sunnyvale, CA, USA) and feature an average length of 71 min. Evaluation was done using a graphical prediction toolkit, which is available to the general public, as is the data we used. The experiments show that the nLMS algorithm—which is one of the algorithms currently used in the CyberKnife—is outperformed by all other methods. This is especially true in the case of the wLMS, the SVRpred, and the MULIN algorithms, which perform much better. The nLMS algorithm produces a relative root mean square (RMS) error of 75% or less (i.e., a reduction in error of 25% or more when compared to not doing prediction) in only 38% of the test cases, whereas the MULIN and SVRpred methods reach this level in more than 77%, the wLMS algorithm in more than 84% of the test cases. Our work shows that the wLMS algorithm is the most accurate algorithm and does not require parameter tuning, making it an ideal candidate for clinical implementation. Additionally, we have seen that the structure of a patient
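
    For reference, the nLMS baseline evaluated above can be sketched as a generic normalized least-mean-squares predictor of a breathing trace; the filter length, step size, and prediction horizon below are arbitrary illustrative values, not those of the paper or of the CyberKnife implementation, and the update is written in the usual offline-evaluation form (the weight update is applied once the future sample becomes known).

    ```python
    import numpy as np

    def nlms_predict(signal, horizon=5, taps=10, mu=0.5, eps=1e-6):
        """Predict signal[t + horizon] from the last `taps` samples with normalized LMS."""
        signal = np.asarray(signal, dtype=float)
        w = np.zeros(taps)
        pred = np.zeros_like(signal)
        for t in range(taps, len(signal) - horizon):
            x = signal[t - taps:t][::-1]              # most recent sample first
            pred[t + horizon] = w @ x
            err = signal[t + horizon] - pred[t + horizon]
            w += mu * err * x / (eps + x @ x)         # normalized LMS update
        return pred

    # Relative RMS error: prediction error divided by the error of simply
    # using the delayed signal (i.e. not predicting at all).
    def relative_rms(signal, pred, horizon=5, taps=10):
        signal, pred = np.asarray(signal, float), np.asarray(pred, float)
        s = slice(taps + horizon, len(signal))
        delayed = signal[taps:len(signal) - horizon]
        err_pred = np.sqrt(np.mean((signal[s] - pred[s]) ** 2))
        err_none = np.sqrt(np.mean((signal[s] - delayed) ** 2))
        return err_pred / err_none
    ```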

  7. A new BP Fourier algorithm and its application in English teaching evaluation

    Science.gov (United States)

    Pei, Xuehui; Pei, Guixin

    2017-08-01

    The BP neural network algorithm has wide adaptability and good accuracy when used for evaluating complicated systems, but computational defects such as slow convergence have limited its practical application. This paper speeds up the convergence of the BP neural network algorithm with Fourier basis functions and presents a new BP Fourier algorithm for complicated system evaluation. First, the shortcomings and working principle of the BP algorithm are analyzed to guide the subsequent targeted improvements; second, the proposed BP Fourier algorithm adopts Fourier basis functions to simplify the calculation structure, designs a new transfer function between the input and output layers, and is analyzed theoretically to establish its efficiency; finally, the algorithm is applied to evaluating university English teaching, and the results show that the BP Fourier algorithm achieves better calculation efficiency and evaluation accuracy and can be used in practice for evaluating complicated systems.

  8. Evaluation of Topology-Aware Broadcast Algorithms for Dragonfly Networks

    Energy Technology Data Exchange (ETDEWEB)

    Dorier, Matthieu; Mubarak, Misbah; Ross, Rob; Li, Jianping Kelvin; Carothers, Christopher D.; Ma, Kwan-Liu

    2016-09-12

    Two-tiered direct network topologies such as Dragonflies have been proposed for future post-petascale and exascale machines, since they provide a high-radix, low-diameter, fast interconnection network. Such topologies call for redesigning MPI collective communication algorithms in order to attain the best performance. Yet as increasingly more applications share a machine, it is not clear how these topology-aware algorithms will react to interference with concurrent jobs accessing the same network. In this paper, we study three topology-aware broadcast algorithms, including one designed by ourselves. We evaluate their performance through event-driven simulation for small- and large-sized broadcasts (in terms of both data size and number of processes). We study the effect of different routing mechanisms on the topology-aware collective algorithms, as well as their sensitivity to network contention with other jobs. Our results show that while topology-aware algorithms dramatically reduce link utilization, their advantage in terms of latency is more limited.

  9. A comparison of performance measures for online algorithms

    DEFF Research Database (Denmark)

    Boyar, Joan; Irani, Sandy; Larsen, Kim Skak

    2009-01-01

    is to balance greediness and adaptability. We examine how these measures evaluate the Greedy Algorithm and Lazy Double Coverage, commonly studied algorithms in the context of server problems. We examine Competitive Analysis, the Max/Max Ratio, the Random Order Ratio, Bijective Analysis and Relative Worst Order...... Analysis and determine how they compare the two algorithms. We find that by the Max/Max Ratio and Bijective Analysis, Greedy is the better algorithm. Under the other measures Lazy Double Coverage is better, though Relative Worst Order Analysis indicates that Greedy is sometimes better. Our results also...... provide the first proof of optimality of an algorithm under Relative Worst Order Analysis....

  10. An objective procedure for evaluation of adaptive antifeedback algorithms in hearing aids.

    Science.gov (United States)

    Freed, Daniel J; Soli, Sigfrid D

    2006-08-01

    This study evaluated the performance of nine adaptive antifeedback algorithms. There were two goals: first, to identify objective procedures that are useful for evaluating these algorithms, and second, to identify strengths and weaknesses of existing algorithms. The algorithms were evaluated in behind-the-ear implementations on the Knowles Electronics Manikin for Acoustic Research (KEMAR). Different acoustic conditions were created by placing a telephone handset or a hat on KEMAR. Electroacoustic techniques were devised to measure the following performance aspects of each algorithm: (1) additional gain made available before oscillation, (2) gain lost in specific frequency regions, (3) reduction of suboscillatory peaks in the frequency response, (4) speed of adaptation to changing acoustic conditions, and (5) robustness in the presence of tonal input signals. For each measurement, performance varied widely across algorithms. No single algorithm was clearly superior or inferior to the others. Generally, the feedback cancellation algorithms were less likely to sacrifice gain in specific frequency regions and better at reducing suboscillatory peaks, whereas the algorithms that used noncancellation techniques were more tolerant of tonal input signals. For those algorithms equipped with special operational modes intended for music listening, the music mode improved the response to tonal inputs but sometimes sacrificed other performance aspects. Algorithms that required an acoustic measurement for initialization purposes tended to perform poorly in acoustic conditions dissimilar to the condition in which initialization was performed. The objective methods devised for this study appear useful for evaluating the performance of adaptive antifeedback algorithms. Currently available algorithms demonstrate a wide range of performance, and further research is required to develop new algorithms that combine the best features of existing algorithms.

  11. Deploy Nalu/Kokkos algorithmic infrastructure with performance benchmarking.

    Energy Technology Data Exchange (ETDEWEB)

    Domino, Stefan P. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ananthan, Shreyas [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Knaus, Robert C. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Williams, Alan B. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2017-09-29

    The former Nalu interior heterogeneous algorithm design, which was originally designed to manage matrix assembly operations over all elemental topology types, has been modified to operate over homogeneous collections of mesh entities. This newly templated kernel design allows for removal of workset variable resize operations that were formerly required at each loop over a Sierra ToolKit (STK) bucket (nominally, 512 entities in size). Extensive usage of the Standard Template Library (STL) std::vector has been removed in favor of intrinsic Kokkos memory views. In this milestone effort, the transition to Kokkos as the underlying infrastructure to support performance and portability on many-core architectures has been deployed for key matrix algorithmic kernels. A unit-test driven design effort has developed a homogeneous entity algorithm that employs a team-based thread parallelism construct. The STK Single Instruction Multiple Data (SIMD) infrastructure is used to interleave data for improved vectorization. The collective algorithm design, which allows for concurrent threading and SIMD management, has been deployed for the core low-Mach element-based algorithm. Several tests to ascertain SIMD performance on Intel KNL and Haswell architectures have been carried out. The performance test matrix includes evaluation of both low- and higher-order methods. The higher-order low-Mach methodology builds on polynomial promotion of the core low-order control volume finite element method (CVFEM). Performance testing of the Kokkos-view/SIMD design indicates low-order matrix assembly kernel speed-up ranging between two and four times depending on mesh loading and node count. Better speedups are observed for higher-order meshes (currently only P=2 has been tested) especially on KNL. The increased workload per element on higher-order meshes benefits from the wide SIMD width on KNL machines. Combining multiple threads with SIMD on KNL achieves a 4.6x speedup over the baseline, with

  12. Design and performance evaluation of the "iTIVA" algorithm for manual infusion of intravenous anesthetics based on effect-site target

    NARCIS (Netherlands)

    D.E. Ramírez (David Eduardo); J.A. Calvache (Jose Andrés)

    2016-01-01

    Introduction: Remifentanil and propofol infusion using TCI pumps has proven to be beneficial for the practice of anesthesia but the availability of these systems is limited. Objective: Designing a pharmacokinetic model-based algorithm for calculating manual infusion regimens to achieve

  13. On the performance of pre-microRNA detection algorithms

    DEFF Research Database (Denmark)

    Saçar Demirci, Müşerref Duygu; Baumbach, Jan; Allmer, Jens

    2017-01-01

    assess 13 ab initio pre-miRNA detection approaches using all relevant, published, and novel data sets while judging algorithm performance based on ten intrinsic performance measures. We present an extensible framework, izMiR, which allows for the unbiased comparison of existing algorithms, adding new...

  14. Airport Traffic Conflict Detection and Resolution Algorithm Evaluation

    Science.gov (United States)

    Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Ballard, Kathryn M.; Otero, Sharon D.; Barker, Glover D.

    2016-01-01

    Two conflict detection and resolution (CD&R) algorithms for the terminal maneuvering area (TMA) were evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. One CD&R algorithm, developed at NASA, was designed to enhance surface situation awareness and provide cockpit alerts of potential conflicts during runway, taxi, and low altitude air-to-air operations. The second algorithm, Enhanced Traffic Situation Awareness on the Airport Surface with Indications and Alerts (SURF IA), was designed to increase flight crew awareness of the runway environment and facilitate an appropriate and timely response to potential conflict situations. The purpose of the study was to evaluate the performance of the aircraft-based CD&R algorithms during various runway, taxiway, and low altitude scenarios, multiple levels of CD&R system equipage, and various levels of horizontal position accuracy. Algorithm performance was assessed through various metrics including the collision rate, nuisance and missed alert rate, and alert toggling rate. The data suggests that, in general, alert toggling, nuisance and missed alerts, and unnecessary maneuvering occurred more frequently as the position accuracy was reduced. Collision avoidance was more effective when all of the aircraft were equipped with CD&R and maneuvered to avoid a collision after an alert was issued. In order to reduce the number of unwanted (nuisance) alerts when taxiing across a runway, a buffer is needed between the hold line and the alerting zone so alerts are not generated when an aircraft is behind the hold line. All of the results support RTCA horizontal position accuracy requirements for performing a CD&R function to reduce the likelihood and severity of runway incursions and collisions.

  15. Tuning and performance evaluation of PID controller for superheater steam temperature control of 200 MW boiler using gain phase assignment algorithm

    Science.gov (United States)

    Begum, A. Yasmine; Gireesh, N.

    2018-04-01

    In a superheater, steam temperature is controlled in a cascade control loop consisting of PI and PID controllers. To improve superheater steam temperature control, the controller gains in the cascade loop have to be tuned efficiently. The mathematical model of the superheater is given by sets of nonlinear partial differential equations. The tuning methods studied here are designed for first-order-plus-time-delay (FOPTD) transfer function models; hence, an FOPTD model is derived from the dynamic model of the superheater using the frequency response method. Then, using the Chien-Hrones-Reswick tuning algorithm and the gain-phase assignment algorithm, optimum controller gains are found based on the least value of the integral of time-weighted absolute error.
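
    As a worked illustration of rule-based tuning from an FOPTD model, the sketch below applies the commonly published Chien-Hrones-Reswick set-point formulas (0% overshoot variant) to a model K*exp(-L*s)/(T*s + 1); the model parameters are invented for demonstration, and the gain-phase assignment step of the paper is not reproduced here.

    ```python
    def chr_pid_setpoint(K, T, L):
        """Chien-Hrones-Reswick PID tuning (set-point tracking, 0% overshoot)
        for an FOPTD model G(s) = K * exp(-L*s) / (T*s + 1).
        Returns (Kp, Ti, Td) for the controller Kp * (1 + 1/(Ti*s) + Td*s)."""
        return 0.6 * T / (K * L), T, 0.5 * L

    # Hypothetical FOPTD fit of a superheater temperature loop (illustrative only).
    K, T, L = 1.8, 120.0, 25.0                   # gain, time constant [s], dead time [s]
    Kp, Ti, Td = chr_pid_setpoint(K, T, L)
    print(f"Kp = {Kp:.3f}, Ti = {Ti:.1f} s, Td = {Td:.1f} s")
    ```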

  16. Generic algorithms for high performance scalable geocomputing

    Science.gov (United States)

    de Jong, Kor; Schmitz, Oliver; Karssenberg, Derek

    2016-04-01

    During the last decade, the characteristics of computing hardware have changed a lot. For example, instead of a single general purpose CPU core, personal computers nowadays contain multiple cores per CPU and often general purpose accelerators, like GPUs. Additionally, compute nodes are often grouped together to form clusters or a supercomputer, providing enormous amounts of compute power. For existing earth simulation models to be able to use modern hardware platforms, their compute intensive parts must be rewritten. This can be a major undertaking and may involve many technical challenges. Compute tasks must be distributed over CPU cores, offloaded to hardware accelerators, or distributed to different compute nodes. And ideally, all of this should be done in such a way that the compute task scales well with the hardware resources. This presents two challenges: 1) how to make good use of all the compute resources and 2) how to make these compute resources available for developers of simulation models, who may not (want to) have the required technical background for distributing compute tasks. The first challenge requires the use of specialized technology (e.g., threads, OpenMP, MPI, OpenCL, CUDA). The second challenge requires the abstraction of the logic handling the distribution of compute tasks from the model-specific logic, hiding the technical details from the model developer. To assist the model developer, we are developing a C++ software library (called Fern) containing algorithms that can use all CPU cores available in a single compute node (distributing tasks over multiple compute nodes will be done at a later stage). The algorithms are grid-based (finite difference) and include local and spatial operations such as convolution filters. The algorithms handle distribution of the compute tasks to CPU cores internally. In the resulting model the low-level details of how this is done are separated from the model-specific logic representing the modeled system

  17. The Parallel Algorithm Based on Genetic Algorithm for Improving the Performance of Cognitive Radio

    Directory of Open Access Journals (Sweden)

    Liu Miao

    2018-01-01

    Full Text Available The intercarrier interference (ICI problem of cognitive radio (CR is severe. In this paper, the machine learning algorithm is used to obtain the optimal interference subcarriers of an unlicensed user (un-LU. Masking the optimal interference subcarriers can suppress the ICI of CR. Moreover, the parallel ICI suppression algorithm is designed to improve the calculation speed and meet the practical requirement of CR. Simulation results show that the data transmission rate threshold of un-LU can be set, the data transmission quality of un-LU can be ensured, the ICI of a licensed user (LU is suppressed, and the bit error rate (BER performance of LU is improved by implementing the parallel suppression algorithm. The ICI problem of CR is solved well by the new machine learning algorithm. The computing performance of the algorithm is improved by designing a new parallel structure and the communication performance of CR is enhanced.

  18. High Performance Parallel Multigrid Algorithms for Unstructured Grids

    Science.gov (United States)

    Frederickson, Paul O.

    1996-01-01

    We describe a high performance parallel multigrid algorithm for a rather general class of unstructured grid problems in two and three dimensions. The algorithm PUMG, for parallel unstructured multigrid, is related in structure to the parallel multigrid algorithm PSMG introduced by McBryan and Frederickson, for they both obtain a higher convergence rate through the use of multiple coarse grids. Another reason for the high convergence rate of PUMG is its smoother, an approximate inverse developed by Baumgardner and Frederickson.

  19. Cardiovascular System Sonographic Evaluation Algorithm: A New Sonographic Algorithm for Evaluation of the Fetal Cardiovascular System in the Second Trimester.

    Science.gov (United States)

    De León-Luis, Juan; Bravo, Coral; Gámez, Francisco; Ortiz-Quintana, Luis

    2015-07-01

    To evaluate the reproducibility and feasibility of the new cardiovascular system sonographic evaluation algorithm for studying the extended fetal cardiovascular system, including the portal, thymic, and supra-aortic areas, in the second trimester of pregnancy (19-22 weeks). We performed a cross-sectional study of pregnant women with healthy fetuses (singleton and twin pregnancies) attending our center from March to August 2011. The extended fetal cardiovascular system was evaluated by following the new algorithm, a sequential acquisition of axial views comprising the following (caudal to cranial): I, portal sinus; II, ductus venosus; III, hepatic veins; IV, 4-chamber view; V, left ventricular outflow tract; VI, right ventricular outflow tract; VII, 3-vessel and trachea view; VIII, thy-box; and IX, subclavian arteries. Interobserver agreement on the feasibility and exploration time was estimated in a subgroup of patients. The feasibility and exploration time were determined for the main cohort. Maternal, fetal, and sonographic factors affecting both features were evaluated. Interobserver agreement was excellent for all views except view VIII; the difference in the mean exploration time between observers was 1.5 minutes (95% confidence interval, 0.7-2.1 minutes; P cardiovascular system sonographic evaluation algorithm is a reproducible and feasible approach for exploration of the extended fetal cardiovascular system in a second-trimester scan. It can be used to explore these areas in normal and abnormal conditions and provides an integrated image of extended fetal cardiovascular anatomy. © 2015 by the American Institute of Ultrasound in Medicine.

  20. Analysing the performance of dynamic multi-objective optimisation algorithms

    CSIR Research Space (South Africa)

    Helbig, M

    2013-06-01

    Full Text Available and the goal of the algorithm is to track a set of tradeoff solutions over time. Analysing the performance of a dynamic multi-objective optimisation algorithm (DMOA) is not a trivial task. For each environment (before a change occurs) the DMOA has to find a set...

  1. Predicting Students’ Performance using Modified ID3 Algorithm

    OpenAIRE

    Ramanathan L; Saksham Dhanda; Suresh Kumar D

    2013-01-01

    The ability to predict the performance of students is crucial in our present education system, and data mining concepts can be used for this purpose. The ID3 algorithm is one of the best-known algorithms for generating decision trees. However, it has the shortcoming of being biased towards attributes with many values. This research therefore aims to overcome this shortcoming by using the gain ratio (instead of information gain) as well as by giving weights to each attribute at every...
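
    The gain-ratio criterion referred to above (the same criterion used by C4.5) can be sketched as follows; the attribute-weighting part of the proposed method is not shown, and the data layout (class labels partitioned by the values of a candidate attribute) is an assumption made for illustration.

    ```python
    import math
    from collections import Counter

    def entropy(labels):
        total = len(labels)
        return -sum((c / total) * math.log2(c / total) for c in Counter(labels).values())

    def gain_ratio(labels, groups):
        """Information gain of a split divided by its split information.
        `labels` is the full list of class labels; `groups` is the same data
        partitioned by the values of the candidate attribute."""
        total = len(labels)
        gain = entropy(labels) - sum(len(g) / total * entropy(g) for g in groups)
        split_info = -sum((len(g) / total) * math.log2(len(g) / total) for g in groups)
        return gain / split_info if split_info > 0 else 0.0

    # An attribute with many distinct values yields a large split_info, so it is
    # penalized relative to plain information gain.
    labels = ["yes", "yes", "no", "no", "yes", "no"]
    groups = [["yes", "yes"], ["no", "no"], ["yes", "no"]]
    print(round(gain_ratio(labels, groups), 3))
    ```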

  2. Towards a Framework for Evaluating and Comparing Diagnosis Algorithms

    Science.gov (United States)

    Kurtoglu, Tolga; Narasimhan, Sriram; Poll, Scott; Garcia,David; Kuhn, Lukas; deKleer, Johan; vanGemund, Arjan; Feldman, Alexander

    2009-01-01

    Diagnostic inference involves the detection of anomalous system behavior and the identification of its cause, possibly down to a failed unit or to a parameter of a failed unit. Traditional approaches to solving this problem include expert/rule-based, model-based, and data-driven methods. Each approach (and various techniques within each approach) use different representations of the knowledge required to perform the diagnosis. The sensor data is expected to be combined with these internal representations to produce the diagnosis result. In spite of the availability of various diagnosis technologies, there have been only minimal efforts to develop a standardized software framework to run, evaluate, and compare different diagnosis technologies on the same system. This paper presents a framework that defines a standardized representation of the system knowledge, the sensor data, and the form of the diagnosis results and provides a run-time architecture that can execute diagnosis algorithms, send sensor data to the algorithms at appropriate time steps from a variety of sources (including the actual physical system), and collect resulting diagnoses. We also define a set of metrics that can be used to evaluate and compare the performance of the algorithms, and provide software to calculate the metrics.

  3. Human Performance Evaluation System

    International Nuclear Information System (INIS)

    Hardwick, R.J. Jr.

    1985-01-01

    Operating nuclear power plants requires high standards of performance, extensive training and responsive management. Despite our best efforts, inappropriate human actions do occur, but they can be managed. An extensive review of Licensee Event Reports (LERs) was conducted which indicated continuing inadequacy in human performance and in the evaluation of root causes. Of some 31,000 LERs, about 5,000, or 16%, were directly attributable to inappropriate actions. A recent analysis of 87 Significant Event Reports (issued by INPO in 1983) identified inappropriate actions as the most frequent root cause (44% of the total). A more recent analysis of SERs issued in 1983 and 1984 indicates that 52% of the root causes were attributed to human performance. The Human Performance Evaluation System (HPES) is a comprehensive, coordinated utility/industry system for evaluating and reporting human performance situations. HPES is a result of the realization that current reporting systems provide limited treatment of human performance and rarely provide adequate information about the root causes of inappropriate actions by individuals. The HPES was implemented to identify and eliminate root causes of inappropriate actions

  4. Enabling high performance computational science through combinatorial algorithms

    International Nuclear Information System (INIS)

    Boman, Erik G; Bozdag, Doruk; Catalyurek, Umit V; Devine, Karen D; Gebremedhin, Assefaw H; Hovland, Paul D; Pothen, Alex; Strout, Michelle Mills

    2007-01-01

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation

  5. Enabling high performance computational science through combinatorial algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Boman, Erik G [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Bozdag, Doruk [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Catalyurek, Umit V [Biomedical Informatics, and Electrical and Computer Engineering, Ohio State University (United States); Devine, Karen D [Discrete Algorithms and Math Department, Sandia National Laboratories (United States); Gebremedhin, Assefaw H [Computer Science and Center for Computational Science, Old Dominion University (United States); Hovland, Paul D [Mathematics and Computer Science Division, Argonne National Laboratory (United States); Pothen, Alex [Computer Science and Center for Computational Science, Old Dominion University (United States); Strout, Michelle Mills [Computer Science, Colorado State University (United States)

    2007-07-15

    The Combinatorial Scientific Computing and Petascale Simulations (CSCAPES) Institute is developing algorithms and software for combinatorial problems that play an enabling role in scientific and engineering computations. Discrete algorithms will be increasingly critical for achieving high performance for irregular problems on petascale architectures. This paper describes recent contributions by researchers at the CSCAPES Institute in the areas of load balancing, parallel graph coloring, performance improvement, and parallel automatic differentiation.

  6. Assessment of various supervised learning algorithms using different performance metrics

    Science.gov (United States)

    Susheel Kumar, S. M.; Laxkar, Deepak; Adhikari, Sourav; Vijayarajan, V.

    2017-11-01

    Our work presents a comparison of the performance of supervised machine learning algorithms on a binary classification task. The supervised machine learning algorithms taken into consideration are Support Vector Machine (SVM), Decision Tree (DT), K-Nearest Neighbour (KNN), Naïve Bayes (NB) and Random Forest (RF). The paper focuses on comparing the performance of the above-mentioned algorithms on one binary classification task by analysing metrics such as accuracy, F-measure, G-measure, precision, misclassification rate, false positive rate, true positive rate, specificity and prevalence.
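
    For reference, most of the listed metrics follow directly from the binary confusion matrix; the sketch below gives the standard definitions (the counts are invented, and G-measure is taken here as the geometric mean of precision and recall, which is one common convention).

    ```python
    def binary_metrics(tp, fp, tn, fn):
        """Standard binary-classification metrics from confusion-matrix counts."""
        total = tp + fp + tn + fn
        precision = tp / (tp + fp)
        recall = tp / (tp + fn)                        # true positive rate / sensitivity
        return {
            "accuracy": (tp + tn) / total,
            "misclassification_rate": (fp + fn) / total,
            "precision": precision,
            "true_positive_rate": recall,
            "false_positive_rate": fp / (fp + tn),
            "specificity": tn / (tn + fp),
            "prevalence": (tp + fn) / total,
            "f_measure": 2 * precision * recall / (precision + recall),
            "g_measure": (precision * recall) ** 0.5,
        }

    print(binary_metrics(tp=40, fp=10, tn=45, fn=5))
    ```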

  7. Evaluating the Performance of Rijndael Encryption

    Directory of Open Access Journals (Sweden)

    Bogdan CIOBANU

    2012-01-01

    Full Text Available In this paper we present a comparative analysis of the performance of the Rijndael algorithm, implemented in two programming languages, namely C and Matlab. The main goal is to get a full, detailed picture of how this algorithm functions. In order to evaluate the performance of the Rijndael algorithm for the two different implementations, we controlled the variable factors within each type of implementation so as to avoid sources of spurious running-time differences (for instance, the two implementations are compared with the same encryption key length). We chose to use the traditional algorithm for both implementations, in which the input is transformed into 4 blocks of 4 bytes, followed by the handling of each byte from each individual column
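
    As an illustration of the kind of throughput measurement such a comparison rests on, the sketch below times AES/Rijndael encryption of a fixed buffer in Python, assuming the PyCryptodome package is installed; it mirrors the idea of holding the key length constant across runs, but it is not the C or Matlab code compared in the paper.

    ```python
    import time
    from Crypto.Cipher import AES          # PyCryptodome (assumed available)

    def encrypt_throughput(key_bytes=16, data_mb=16):
        key = bytes(range(key_bytes))                   # fixed 128-bit key, held constant across runs
        data = bytes(data_mb * 1024 * 1024)             # zero-filled test buffer
        cipher = AES.new(key, AES.MODE_ECB)
        start = time.perf_counter()
        cipher.encrypt(data)
        return data_mb / (time.perf_counter() - start)  # MB/s

    print(f"AES-128 ECB throughput: {encrypt_throughput():.1f} MB/s")
    ```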

  8. Performance comparison of extracellular spike sorting algorithms for single-channel recordings.

    Science.gov (United States)

    Wild, Jiri; Prekopcsak, Zoltan; Sieger, Tomas; Novak, Daniel; Jech, Robert

    2012-01-30

    Proper classification of action potentials from extracellular recordings is essential for making an accurate study of neuronal behavior. Many spike sorting algorithms have been presented in the technical literature. However, no comparative analysis has hitherto been performed. In our study, three widely-used publicly-available spike sorting algorithms (WaveClus, KlustaKwik, OSort) were compared with regard to their parameter settings. The algorithms were evaluated using 112 artificial signals (publicly available online) with 2-9 different neurons and varying noise levels between 0.00 and 0.60. An optimization technique based on Adjusted Mutual Information was employed to find near-optimal parameter settings for a given artificial signal and algorithm. All three algorithms performed significantly better (psorting algorithm, receiving the best evaluation score for 60% of all signals. OSort operated at almost five times the speed of the other algorithms. In terms of accuracy, OSort performed significantly less well (palgorithms was optimal in general. The accuracy of the algorithms depended on proper choice of the algorithm parameters and also on specific properties of the examined signal. Copyright © 2011 Elsevier B.V. All rights reserved.
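
    The Adjusted Mutual Information criterion used for the parameter optimization above is available in scikit-learn; the snippet below is a generic illustration of scoring a sorter's cluster assignments against ground-truth unit labels (the label vectors are invented).

    ```python
    from sklearn.metrics import adjusted_mutual_info_score

    # Ground-truth unit of each detected spike vs. the cluster assigned by a sorter.
    true_units     = [0, 0, 1, 1, 1, 2, 2, 0, 1, 2]
    sorted_cluster = [1, 1, 0, 0, 0, 2, 2, 1, 0, 0]

    # AMI is 1.0 for a perfect (label-permutation-invariant) match and close to 0 for chance.
    print(adjusted_mutual_info_score(true_units, sorted_cluster))
    ```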

  9. Radionuclide calibrators performance evaluation

    International Nuclear Information System (INIS)

    Mora Ramirez, E.; Zeledon Fonseca, P.; Jimenez Cordero, M.

    2008-01-01

    Radionuclide calibrators are used to estimate activity accurately prior to administration to a patient, so it is very important that this equipment meets its performance requirements. The purpose of this paper is to compare the commercially available 'Calicheck' (Calcorp. Inc), used to assess linearity, against the well-known source decay method, and also to show our results after performing several recommended quality control tests. The evaluations were carried out using the Capintec CRC-15R and CRC-15 β radionuclide calibrators. The evaluated tests were: high voltage, display, zero adjust, background, reproducibility, source constancy, accuracy, precision and linearity. The first six tests were evaluated in daily practice (here we analyzed the data recorded in 2007), and the last three were evaluated once a year. During the daily evaluations, the performance of both calibrators was satisfactory compared with the manufacturer's requirements. The accuracy test shows results within the ± 10% allowed for a field instrument. Precision performance is within the ± 1% allowed. On the other hand, the linearity test shows that using the source decay method the correlation coefficient is 0.9998 for both instruments, and using the Calicheck the correlation coefficient is 0.997. However, looking at the percentage error, during the 'Calicheck' test its range goes from 0.0% up to -25.35%, and using the source decay method the range goes from 0.0% up to -31.05%, taking into account both instruments. Checking the 'Calicheck' results we can see that the errors vary randomly, but using the source decay method the percentage error increases as the source activity decreases. We conclude that both devices meet their manufacturer's requirements; in the case of linearity measured by the decay method, the percentage error increases as the source activity decreases, which may be due to the age of the equipment. (author)
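
    The source decay method referred to above compares each reading with the activity expected from radioactive decay of the initial calibration; the sketch below is a minimal illustration, assuming a Tc-99m source (half-life about 6.01 h) and invented readings.

    ```python
    import math

    def expected_activity(a0, hours_elapsed, half_life_h=6.01):
        """Activity expected from pure radioactive decay of the initial reading a0."""
        return a0 * math.exp(-math.log(2) * hours_elapsed / half_life_h)

    # (time since first reading [h], measured activity [MBq]) - illustrative values only.
    readings = [(0.0, 800.0), (6.0, 396.0), (12.0, 203.0), (24.0, 49.0)]
    a0 = readings[0][1]
    for t, measured in readings:
        expected = expected_activity(a0, t)
        error_pct = 100.0 * (measured - expected) / expected
        print(f"t = {t:5.1f} h  expected = {expected:7.1f} MBq  error = {error_pct:+6.2f} %")
    ```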

  10. Evaluation Of Algorithms Of Anti- HIV Antibody Tests

    Directory of Open Access Journals (Sweden)

    Paranjape R.S

    1997-01-01

    Full Text Available Research question: Can alternate algorithms be used in place of the conventional algorithm for epidemiological studies of HIV infection at lower cost? Objective: To compare the results of HIV sero-prevalence as determined by test algorithms combining three kits with the conventional test algorithm. Study design: Cross-sectional. Participants: 282 truck drivers. Statistical analysis: Sensitivity and specificity analysis and predictive values. Results: Three different algorithms that do not include the Western Blot (WB) were compared with the conventional algorithm in a truck driver population with 5.6% prevalence of HIV-1 infection. Algorithms with one EIA (Genetic Systems or Biotest) and a rapid test (Immunocomb), or with two EIAs, showed 100% positive predictive value in relation to the conventional algorithm. Using an algorithm with an EIA as a screening test and a rapid test as a confirmatory test was 50 to 70% less expensive than the conventional algorithm per positive serum sample. These algorithms obviate the interpretation of indeterminate results and also give a differential diagnosis of HIV-2 infection. Alternate algorithms are ideally suited for community-based control programmes in developing countries. Application of these algorithms in populations with low prevalence should also be studied in order to evaluate universal applicability.

  11. Experimental analysis of the performance of machine learning algorithms in the classification of navigation accident records

    Directory of Open Access Journals (Sweden)

    REIS, M V. S. de A.

    2017-06-01

    Full Text Available This paper aims to evaluate the use of machine learning techniques in a database of marine accidents. We analyzed and evaluated the main causes and types of marine accidents in the Northern Fluminense region. For this, machine learning techniques were used. The study showed that the modeling can be done in a satisfactory manner using different configurations of classification algorithms, varying the activation functions and training parameters. The SMO (Sequential Minimal Optimization) algorithm showed the best performance result.

  12. DiamondTorre Algorithm for High-Performance Wave Modeling

    Directory of Open Access Journals (Sweden)

    Vadim Levchenko

    2016-08-01

    Full Text Available Effective algorithms for the numerical modeling of physical media are discussed. The computation rate of such problems is limited by memory bandwidth if they are implemented with traditional algorithms. The numerical solution of the wave equation is considered. A finite difference scheme with a cross stencil and a high order of approximation is used. The DiamondTorre algorithm is constructed with regard to the specifics of the GPGPU’s (general purpose graphical processing unit) memory hierarchy and parallelism. The advantages of this algorithm are a high level of data localization, as well as the property of asynchrony, which allows one to effectively utilize all levels of GPGPU parallelism. The computational intensity of the algorithm is greater than that of the best traditional algorithms with stepwise synchronization. As a consequence, it becomes possible to overcome the above-mentioned limitation. The algorithm is implemented with CUDA. For the scheme with the second order of approximation, a calculation performance of 50 billion cells per second is achieved. This exceeds the result of the best traditional algorithm by a factor of five.

  13. Improved Ant Colony Clustering Algorithm and Its Performance Study

    Science.gov (United States)

    Gao, Wei

    2016-01-01

    Clustering analysis is used in many disciplines and applications; it is an important tool that descriptively identifies homogeneous groups of objects based on attribute values. The ant colony clustering algorithm is a swarm-intelligent method used for clustering problems that is inspired by the behavior of ant colonies that cluster their corpses and sort their larvae. A new abstraction ant colony clustering algorithm using a data combination mechanism is proposed to improve the computational efficiency and accuracy of the ant colony clustering algorithm. The abstraction ant colony clustering algorithm is used to cluster benchmark problems, and its performance is compared with the ant colony clustering algorithm and other methods used in existing literature. Based on similar computational difficulties and complexities, the results show that the abstraction ant colony clustering algorithm produces results that are not only more accurate but also more efficiently determined than the ant colony clustering algorithm and the other methods. Thus, the abstraction ant colony clustering algorithm can be used for efficient multivariate data clustering. PMID:26839533

  14. Supercapacitors Performance Evaluation

    OpenAIRE

    Zhang, S; Pan, N

    2015-01-01

    The performance of a supercapacitor can be characterized by a series of key parameters, including the cell capacitance, operating voltage, equivalent series resistance, power density, energy density, and time constant. To accurately measure these parameters, a variety of methods have been proposed and are used in academia and industry. As a result, some confusion has been caused due to the inconsistencies between different evaluation methods and practices. Such confusion hinders effective com...
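
    For orientation, several of the key parameters named above follow from simple relations (E = C*V^2/2, tau = R*C, matched-load power P = V^2/(4*R)); the sketch below evaluates them for an invented cell and is not tied to any particular evaluation method discussed in the paper.

    ```python
    def supercap_figures(capacitance_f, voltage_v, esr_ohm, mass_kg):
        """Basic supercapacitor figures of merit from nameplate quantities."""
        energy_j = 0.5 * capacitance_f * voltage_v ** 2
        return {
            "energy_Wh_per_kg": energy_j / 3600.0 / mass_kg,
            "max_power_W_per_kg": voltage_v ** 2 / (4.0 * esr_ohm) / mass_kg,
            "time_constant_s": esr_ohm * capacitance_f,
        }

    # Hypothetical 100 F, 2.7 V cell with 10 mOhm ESR weighing 20 g.
    print(supercap_figures(capacitance_f=100.0, voltage_v=2.7, esr_ohm=0.01, mass_kg=0.02))
    ```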

  15. High performance deformable image registration algorithms for manycore processors

    CERN Document Server

    Shackleford, James; Sharp, Gregory

    2013-01-01

    High Performance Deformable Image Registration Algorithms for Manycore Processors develops highly data-parallel image registration algorithms suitable for use on modern multi-core architectures, including graphics processing units (GPUs). Focusing on deformable registration, we show how to develop data-parallel versions of the registration algorithm suitable for execution on the GPU. Image registration is the process of aligning two or more images into a common coordinate frame and is a fundamental step to be able to compare or fuse data obtained from different sensor measurements. E

  16. Convergence Performance of Adaptive Algorithms of L-Filters

    Directory of Open Access Journals (Sweden)

    Robert Hudec

    2003-01-01

    Full Text Available This paper deals with determining the convergence parameters of the adaptive algorithms used in adaptive L-filter design. The stability of the adaptation process, the convergence rate (or adaptation time), and the behaviour of the convergence curve are among the basic properties of adaptive algorithms. L-filters with a variety of adaptive algorithms were used to determine them. Determining the convergence performance of adaptive filters is important mainly for hardware applications, where real-time filtering or adaptation of the filter coefficients with a limited amount of input data is required.

  17. A method for evaluating discoverability and navigability of recommendation algorithms.

    Science.gov (United States)

    Lamprecht, Daniel; Strohmaier, Markus; Helic, Denis

    2017-01-01

    Recommendations are increasingly used to support and enable discovery, browsing, and exploration of items. This is especially true for entertainment platforms such as Netflix or YouTube, where frequently, no clear categorization of items exists. Yet, the suitability of a recommendation algorithm to support these use cases cannot be comprehensively evaluated by any recommendation evaluation measures proposed so far. In this paper, we propose a method to expand the repertoire of existing recommendation evaluation techniques with a method to evaluate the discoverability and navigability of recommendation algorithms. The proposed method tackles this by means of first evaluating the discoverability of recommendation algorithms by investigating structural properties of the resulting recommender systems in terms of bow tie structure, and path lengths. Second, the method evaluates navigability by simulating three different models of information seeking scenarios and measuring the success rates. We show the feasibility of our method by applying it to four non-personalized recommendation algorithms on three data sets and also illustrate its applicability to personalized algorithms. Our work expands the arsenal of evaluation techniques for recommendation algorithms, extends from a one-click-based evaluation towards multi-click analysis, and presents a general, comprehensive method to evaluating navigability of arbitrary recommendation algorithms.

  18. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    Directory of Open Access Journals (Sweden)

    Simon Fong

    2014-01-01

    Full Text Available Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.

  19. Towards enhancement of performance of K-means clustering using nature-inspired optimization algorithms.

    Science.gov (United States)

    Fong, Simon; Deb, Suash; Yang, Xin-She; Zhuang, Yan

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario.

  20. Towards Enhancement of Performance of K-Means Clustering Using Nature-Inspired Optimization Algorithms

    Science.gov (United States)

    Deb, Suash; Yang, Xin-She

    2014-01-01

    Traditional K-means clustering algorithms have the drawback of getting stuck at local optima that depend on the random values of initial centroids. Optimization algorithms have their advantages in guiding iterative computation to search for global optima while avoiding local optima. The algorithms help speed up the clustering process by converging into a global optimum early with multiple search agents in action. Inspired by nature, some contemporary optimization algorithms which include Ant, Bat, Cuckoo, Firefly, and Wolf search algorithms mimic the swarming behavior allowing them to cooperatively steer towards an optimal objective within a reasonable time. It is known that these so-called nature-inspired optimization algorithms have their own characteristics as well as pros and cons in different applications. When these algorithms are combined with K-means clustering mechanism for the sake of enhancing its clustering quality by avoiding local optima and finding global optima, the new hybrids are anticipated to produce unprecedented performance. In this paper, we report the results of our evaluation experiments on the integration of nature-inspired optimization methods into K-means algorithms. In addition to the standard evaluation metrics in evaluating clustering quality, the extended K-means algorithms that are empowered by nature-inspired optimization methods are applied on image segmentation as a case study of application scenario. PMID:25202730
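
    A minimal sketch of the hybrid idea is given below: standard K-means assignment and update steps interleaved with a nature-inspired random perturbation of the centroids that is accepted only when it lowers the within-cluster sum of squares. The move rule and parameters are placeholders for illustration, not the Ant, Bat, Cuckoo, Firefly, or Wolf search schemes evaluated in the paper.

    ```python
    import numpy as np

    def wcss(X, centroids, labels):
        return sum(np.sum((X[labels == k] - c) ** 2) for k, c in enumerate(centroids))

    def hybrid_kmeans(X, k=3, iters=50, step=0.1, seed=0):
        rng = np.random.default_rng(seed)
        centroids = X[rng.choice(len(X), k, replace=False)].astype(float)
        for _ in range(iters):
            # Standard K-means assignment and centroid update.
            labels = np.argmin(np.linalg.norm(X[:, None] - centroids[None], axis=2), axis=1)
            for j in range(k):
                if np.any(labels == j):
                    centroids[j] = X[labels == j].mean(axis=0)
            # Nature-inspired refinement: accept a random jitter only if it improves WCSS.
            candidate = centroids + step * rng.standard_normal(centroids.shape)
            cand_labels = np.argmin(np.linalg.norm(X[:, None] - candidate[None], axis=2), axis=1)
            if wcss(X, candidate, cand_labels) < wcss(X, centroids, labels):
                centroids, labels = candidate, cand_labels
        return centroids, labels
    ```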

  1. Accuracy evaluation of a new real-time continuous glucose monitoring algorithm in hypoglycemia

    DEFF Research Database (Denmark)

    Mahmoudi, Zeinab; Jensen, Morten Hasselstrøm; Johansen, Mette Dencker

    2014-01-01

    Background: The purpose of this study was to evaluate the performance of a new continuous glucose monitoring (CGM) calibration algorithm and to compare it with the Guardian(®) REAL-Time (RT) (Medtronic Diabetes, Northridge, CA) calibration algorithm in hypoglycemia. SUBJECTS...... AND METHODS: CGM data were obtained from 10 type 1 diabetes patients undergoing insulin-induced hypoglycemia. Data were obtained in two separate sessions using the Guardian RT CGM device. Data from the same CGM sensor were calibrated by two different algorithms: the Guardian RT algorithm and a new calibration...... algorithm. The accuracy of the two algorithms was compared using four performance metrics. RESULTS: The median (mean) of absolute relative deviation in the whole range of plasma glucose was 20.2% (32.1%) for the Guardian RT calibration and 17.4% (25.9%) for the new calibration algorithm. The mean (SD...
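
    The accuracy figure quoted above, the median/mean absolute relative deviation (ARD) between the calibrated sensor readings and reference plasma glucose, is straightforward to compute; the snippet below is a generic illustration with invented paired readings.

    ```python
    import numpy as np

    def absolute_relative_deviation(cgm, reference):
        """ARD (%) of each CGM reading against its time-matched reference value."""
        cgm, reference = np.asarray(cgm, float), np.asarray(reference, float)
        return 100.0 * np.abs(cgm - reference) / reference

    # Invented paired readings (mg/dL) for illustration only.
    cgm = [72, 65, 58, 80, 95]
    ref = [60, 62, 55, 90, 100]
    ard = absolute_relative_deviation(cgm, ref)
    print(f"median ARD = {np.median(ard):.1f} %, mean ARD = {np.mean(ard):.1f} %")
    ```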

  2. Performance of multiobjective computational intelligence algorithms for the routing and wavelength assignment problem

    Directory of Open Access Journals (Sweden)

    Jorge Patiño

    2016-01-01

    Full Text Available This paper presents a performance evaluation of computational intelligence algorithms based on multiobjective theory for the solution of the Routing and Wavelength Assignment (RWA) problem in optical networks. The study evaluates the Firefly Algorithm, the Differential Evolution Algorithm, the Simulated Annealing Algorithm and two versions of the Particle Swarm Optimization algorithm. The paper provides a description of the multiobjective algorithms; then, an evaluation of the performance of the multiobjective algorithms versus mono-objective approaches is presented for different traffic loads, different numbers of wavelengths and wavelength conversion over the NSFNet topology. Simulation results show that mono-objective algorithms properly solve the RWA problem for low traffic loads and a low number of wavelengths. However, the multiobjective approaches adapt better to online traffic when the number of wavelengths available in the network increases, as well as when wavelength conversion is implemented in the nodes.

  3. Standardized evaluation of algorithms for computer-aided diagnosis of dementia based on structural MRI

    DEFF Research Database (Denmark)

    Bron, Esther E.; Smits, Marion; van der Flier, Wiesje M.

    2015-01-01

    algorithms based on a clinically representative multi-center data set. Using clinical practice as the starting point, the goal was to reproduce the clinical diagnosis. Therefore, we evaluated algorithms for multi-class classification of three diagnostic groups: patients with probable Alzheimer's disease...... of aging). The best performing algorithm yielded an accuracy of 63.0% and an area under the receiver-operating-characteristic curve (AUC) of 78.8%. In general, the best performances were achieved using feature extraction based on voxel-based morphometry or a combination of features that included volume......Abstract Algorithms for computer-aided diagnosis of dementia based on structural MRI have demonstrated high performance in the literature, but are difficult to compare as different data sets and methodology were used for evaluation. In addition, it is unclear how the algorithms would perform...

  4. A High-Performance Genetic Algorithm: Using Traveling Salesman Problem as a Case

    Directory of Open Access Journals (Sweden)

    Chun-Wei Tsai

    2014-01-01

    Full Text Available This paper presents a simple but efficient algorithm for reducing the computation time of genetic algorithm (GA and its variants. The proposed algorithm is motivated by the observation that genes common to all the individuals of a GA have a high probability of surviving the evolution and ending up being part of the final solution; as such, they can be saved away to eliminate the redundant computations at the later generations of a GA. To evaluate the performance of the proposed algorithm, we use it not only to solve the traveling salesman problem but also to provide an extensive analysis on the impact it may have on the quality of the end result. Our experimental results indicate that the proposed algorithm can significantly reduce the computation time of GA and GA-based algorithms while limiting the degradation of the quality of the end result to a very small percentage compared to traditional GA.

  5. Performance analysis of manufacturing systems : queueing approximations and algorithms

    NARCIS (Netherlands)

    Vuuren, van M.

    2007-01-01

    Performance Analysis of Manufacturing Systems Queueing Approximations and Algorithms This thesis is concerned with the performance analysis of manufacturing systems. Manufacturing is the application of tools and a processing medium to the transformation of raw materials into finished goods for sale.

  6. Dentate Gyrus circuitry features improve performance of sparse approximation algorithms.

    Directory of Open Access Journals (Sweden)

    Panagiotis C Petrantonakis

    Full Text Available Memory-related activity in the Dentate Gyrus (DG) is characterized by sparsity. Memory representations are seen as activated neuronal populations of granule cells, the main encoding cells in DG, which are estimated to engage 2-4% of the total population. This sparsity is assumed to enhance the ability of DG to perform pattern separation, one of the most valuable contributions of DG during memory formation. In this work, we investigate how features of the DG such as its excitatory and inhibitory connectivity diagram can be used to develop theoretical algorithms performing Sparse Approximation, a widely used strategy in the Signal Processing field. Sparse approximation stands for the algorithmic identification of few components from a dictionary that approximate a certain signal. The ability of DG to achieve pattern separation by sparsifying its representations is exploited here to improve the performance of the state-of-the-art sparse approximation algorithm "Iterative Soft Thresholding" (IST) by adding new algorithmic features inspired by the DG circuitry. Lateral inhibition of granule cells, either direct or indirect, via mossy cells, is shown to enhance the performance of the IST. Apart from revealing the potential of DG-inspired theoretical algorithms, this work presents new insights regarding the function of particular cell types in the pattern separation task of the DG.
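
    The Iterative Soft Thresholding baseline that the DG-inspired variants build on can be written in a few lines; the version below is the textbook ISTA update for min_x 0.5*||Ax - y||^2 + lam*||x||_1, without the lateral-inhibition features proposed in the paper, and the dictionary sizes are arbitrary.

    ```python
    import numpy as np

    def soft_threshold(x, t):
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def ista(A, y, lam=0.05, iters=300):
        """Iterative Soft Thresholding for min_x 0.5*||A x - y||^2 + lam*||x||_1."""
        L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(iters):
            x = soft_threshold(x - A.T @ (A @ x - y) / L, lam / L)
        return x

    # Recover a sparse vector from a random dictionary (illustrative sizes).
    rng = np.random.default_rng(0)
    A = rng.standard_normal((50, 200))
    x_true = np.zeros(200)
    x_true[[3, 50, 120]] = [1.0, -2.0, 0.5]
    y = A @ x_true
    print(np.nonzero(np.abs(ista(A, y)) > 0.1)[0])   # indices of the recovered components
    ```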

  7. Performance Analysis of Binary Search Algorithm in RFID

    Directory of Open Access Journals (Sweden)

    Xiangmei SONG

    2014-12-01

    Full Text Available Binary search algorithm (BS) is an important anti-collision algorithm in Radio Frequency Identification (RFID) and one of the key technologies that determine whether the information in a tag is identified by the reader-writer quickly and reliably. The performance of BS directly affects the quality of service in the Internet of Things. This paper adopts an automated formal technique, probabilistic model checking, to analyze the performance of the BS algorithm formally. Firstly, according to the working principle of the BS algorithm, its dynamic behavior is abstracted into a Discrete Time Markov Chain which can describe deterministic, discrete-time and probabilistic selection. On this model we then calculate the probability of the data being sent successfully and the expected time for tags to complete the data transmission. Compared to S-ALOHA, another typical anti-collision protocol in RFID, experimental results show that with an increase in the number of tags the BS algorithm has lower space and time consumption, its average number of conflicts increases more slowly than under the S-ALOHA protocol, it needs less expected time to complete the data transmission, and its average data transmission speed is 1.6 times that of the S-ALOHA protocol.
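
    The prefix-splitting behaviour of a binary-search anti-collision scheme can be illustrated with a small simulation. This is a simplified toy model for intuition only, not the Discrete Time Markov Chain formulation analysed in the paper; the bit width and tag IDs are arbitrary.

        def binary_search_anticollision(tag_ids, n_bits=8):
            # The reader queries an ID prefix; a collision splits the remaining
            # ID space into two halves until every tag is singled out.
            identified, queries, stack = [], 0, [""]
            while stack:
                prefix = stack.pop()
                queries += 1
                replies = [t for t in tag_ids
                           if format(t, "0%db" % n_bits).startswith(prefix)]
                if len(replies) == 1:
                    identified.append(replies[0])
                elif len(replies) > 1:
                    stack.extend([prefix + "1", prefix + "0"])
            return identified, queries

        tags, n_queries = binary_search_anticollision([12, 37, 38, 200])
        print(tags, n_queries)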

  8. Selective epidemic vaccination under the performant routing algorithms

    Science.gov (United States)

    Bamaarouf, O.; Alweimine, A. Ould Baba; Rachadi, A.; EZ-Zahraouy, H.

    2018-04-01

    Despite the extensive research on traffic dynamics and epidemic spreading, the effect of routing algorithm strategies on traffic-driven epidemic spreading has not received adequate attention. It is well known that more performant routing algorithm strategies are used to overcome the congestion problem. However, our main result shows, unexpectedly, that these algorithms favor virus spreading more than the case where the shortest-path-based algorithm is used. In this work, we studied virus spreading in a complex network using the efficient-path and global dynamic routing algorithms as compared to the shortest-path strategy. Some previous studies have tried to modify the routing rules to limit virus spreading, but at the expense of reducing the traffic transport efficiency. This work proposed a solution to overcome this drawback by using a selective vaccination procedure instead of the random vaccination often used in the literature. We found that selective vaccination succeeded in eradicating the virus better than a pure random intervention for the performant routing algorithm strategies.

  9. Massively parallel performance of neutron transport response matrix algorithms

    International Nuclear Information System (INIS)

    Hanebutte, U.R.; Lewis, E.E.

    1993-01-01

    Massively parallel red/black response matrix algorithms for the solution of within-group neutron transport problems are implemented on the Connection Machines CM-2, CM-200 and CM-5. The response matrices are derived from the diamond-difference and linear-linear nodal discrete ordinate and variational nodal P3 approximations. The unaccelerated performance of the iterative procedure is examined relative to the maximum rated performances of the machines. The effects of processor partition size, of virtual processor ratio and of problem size are examined in detail. For the red/black algorithm, the ratio of inter-node communication to computing time is found to be quite small, normally of the order of ten percent or less. Performance increases with problem size and with virtual processor ratio, within the memory-per-physical-processor limitation. Algorithm adaptation to coarser-grain machines is straightforward, with total computing time being virtually inversely proportional to the number of physical processors. (orig.)

  10. Attacks, applications, and evaluation of known watermarking algorithms with Checkmark

    Science.gov (United States)

    Meerwald, Peter; Pereira, Shelby

    2002-04-01

    The Checkmark benchmarking tool was introduced to provide a framework for application-oriented evaluation of watermarking schemes. In this article we introduce new attacks and applications into the existing Checkmark framework. In addition to describing new attacks and applications, we also compare the performance of some well-known watermarking algorithms (proposed by Bruyndonckx, Cox, Fridrich, Dugad, Kim, Wang, Xia, Xie, Zhu and Pereira) with respect to the Checkmark benchmark. In particular, we consider the non-geometric application, which contains tests that do not change the geometry of the image. This attack constraint is artificial yet important for research purposes, since a number of algorithms may be interesting but would score poorly with respect to specific applications simply because geometric compensation has not been incorporated. We note, however, that with the help of image registration, even research algorithms that do not have counter-measures against geometric distortion -- such as a template or reference watermark -- can be evaluated. In the first version of the Checkmark benchmarking program, application-oriented evaluation was introduced, along with many new attacks not already considered in the literature. A second goal of this paper is to introduce new attacks and new applications into the Checkmark framework. In particular, we introduce the following new applications: video frame watermarking, medical imaging and watermarking of logos. Video frame watermarking includes low-compression attacks and distortions which warp the edges of the video, as well as general projective transformations which may result from someone filming the screen at a cinema. With respect to medical imaging, only small distortions are considered and furthermore it is essential that no distortions are present at embedding. Finally, for logos, we consider images of small sizes and particularly compression, scaling, aspect ratio and other small distortions. The challenge

  11. Building test data from real outbreaks for evaluating detection algorithms.

    Science.gov (United States)

    Texier, Gaetan; Jackson, Michael L; Siwe, Leonel; Meynard, Jean-Baptiste; Deparis, Xavier; Chaudet, Herve

    2017-01-01

    Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak signals.
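
    A minimal sketch of the core idea, rescaling a historical epidemic curve to a target duration and drawing case dates from it, is given below. The function and its parameters are illustrative assumptions and do not reproduce the full set of resampling algorithms compared in the study.

        import numpy as np

        def resample_outbreak(historical_daily_cases, n_days, n_cases, seed=None):
            # Rescale the historical epidemic curve to the target duration
            # (the homothetic transformation), then draw individual case dates
            # by sampling from the rescaled curve (inverse-transform style).
            rng = np.random.default_rng(seed)
            curve = np.interp(np.linspace(0.0, 1.0, n_days),
                              np.linspace(0.0, 1.0, len(historical_daily_cases)),
                              historical_daily_cases)
            days = rng.choice(n_days, size=n_cases, p=curve / curve.sum())
            return np.bincount(days, minlength=n_days)

        historic = [1, 2, 5, 12, 20, 14, 7, 3, 1]          # cases per day (toy data)
        simulated = resample_outbreak(historic, n_days=14, n_cases=60, seed=0)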

  12. Building test data from real outbreaks for evaluating detection algorithms.

    Directory of Open Access Journals (Sweden)

    Gaetan Texier

    Full Text Available Benchmarking surveillance systems requires realistic simulations of disease outbreaks. However, obtaining these data in sufficient quantity, with a realistic shape and covering a sufficient range of agents, size and duration, is known to be very difficult. The dataset of outbreak signals generated should reflect the likely distribution of authentic situations faced by the surveillance system, including very unlikely outbreak signals. We propose and evaluate a new approach based on the use of historical outbreak data to simulate tailored outbreak signals. The method relies on a homothetic transformation of the historical distribution followed by resampling processes (Binomial, Inverse Transform Sampling Method-ITSM, Metropolis-Hasting Random Walk, Metropolis-Hasting Independent, Gibbs Sampler, Hybrid Gibbs Sampler). We carried out an analysis to identify the most important input parameters for simulation quality and to evaluate performance for each of the resampling algorithms. Our analysis confirms the influence of the type of algorithm used and simulation parameters (i.e. days, number of cases, outbreak shape, overall scale factor) on the results. We show that, regardless of the outbreaks, algorithms and metrics chosen for the evaluation, simulation quality decreased with the increase in the number of days simulated and increased with the number of cases simulated. Simulating outbreaks with fewer cases than days of duration (i.e. overall scale factor less than 1) resulted in an important loss of information during the simulation. We found that Gibbs sampling with a shrinkage procedure provides a good balance between accuracy and data dependency. If dependency is of little importance, binomial and ITSM methods are accurate. Given the constraint of keeping the simulation within a range of plausible epidemiological curves faced by the surveillance system, our study confirms that our approach can be used to generate a large spectrum of outbreak

  13. Power efficient and high performance VLSI architecture for AES algorithm

    Directory of Open Access Journals (Sweden)

    K. Kalaiselvi

    2015-09-01

    Full Text Available Advanced encryption standard (AES) algorithm has been widely deployed in cryptographic applications. This work proposes a low power and high throughput implementation of the AES algorithm using a key expansion approach. We minimize the power consumption and critical path delay using the proposed high performance architecture. It supports both encryption and decryption using 256-bit keys with a throughput of 0.06 Gbps. The VHDL language is utilized for simulating the design and an FPGA chip has been used for the hardware implementations. Experimental results reveal that the proposed AES architectures offer superior performance to the existing VLSI architectures in terms of power, throughput and critical path delay.

  14. Towards an evaluation framework for process mining algorithms

    NARCIS (Netherlands)

    Rozinat, A.; Alves De Medeiros, A.K.; Günther, C.W.; Weijters, A.J.M.M.; Aalst, van der W.M.P.

    2007-01-01

    Although there has been a lot of progress in developing process mining algorithms in recent years, no effort has been put into developing a common means of assessing the quality of the models discovered by these algorithms. In this paper, we outline elements of an evaluation framework that is intended

  15. Evaluation of models generated via hybrid evolutionary algorithms ...

    African Journals Online (AJOL)

    2016-04-02

    Apr 2, 2016 ... Evaluation of models generated via hybrid evolutionary algorithms for the prediction of Microcystis ... evolutionary algorithms (HEA) proved to be highly applicable to the hypertrophic reservoirs of South Africa. .... discovered and optimised using a large-scale parallel computational device and relevant soft-.

  16. A Performance Weighted Collaborative Filtering algorithm for personalized radiology education.

    Science.gov (United States)

    Lin, Hongli; Yang, Xuedong; Wang, Weisheng; Luo, Jiawei

    2014-10-01

    Devising an accurate prediction algorithm that can predict the difficulty level of cases for individuals and then select suitable cases for them is essential to the development of a personalized training system. In this paper, we propose a novel approach, called Performance Weighted Collaborative Filtering (PWCF), to predict the difficulty level of each case for individuals. The main idea of PWCF is to assign an optimal weight to each rating used for predicting the difficulty level of a target case for a trainee, rather than using an equal weight for all ratings as in traditional collaborative filtering methods. The assigned weight is a function of the performance level of the trainee at which the rating was made. The PWCF method and the traditional method are compared using two datasets. The experimental data are then evaluated by means of the MAE metric. Our experimental results show that PWCF outperforms the traditional methods by 8.12% and 17.05%, respectively, over the two datasets, in terms of prediction precision. This suggests that PWCF is a viable method for the development of personalized training systems in radiology education. Copyright © 2014. Published by Elsevier Inc.
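
    The central idea, weighting each rating by the performance level at which it was given rather than weighting all ratings equally, can be sketched roughly as follows. The weighting function and data layout are placeholders chosen for illustration, not the formulation defined in the paper.

        def pwcf_predict(ratings, performance, similarity, trainee, case):
            # ratings[u][c]    : difficulty rating trainee u gave to case c (if any)
            # performance[u][c]: performance level of u when that rating was made
            # similarity[u][v] : similarity between trainees u and v
            num = den = 0.0
            for other, row in ratings.items():
                if other == trainee or case not in row:
                    continue
                # Performance-dependent weight replaces the equal weighting of
                # classical collaborative filtering (illustrative choice only).
                w = similarity[trainee][other] * performance[other][case]
                num += w * row[case]
                den += abs(w)
            return num / den if den else None

        ratings = {"A": {1: 4, 2: 2}, "B": {1: 5}, "C": {2: 3}}
        performance = {"A": {1: 0.8, 2: 0.6}, "B": {1: 0.4}, "C": {2: 0.9}}
        similarity = {"X": {"A": 0.7, "B": 0.5, "C": 0.2}}
        print(pwcf_predict(ratings, performance, similarity, "X", 1))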

  17. Evaluating Algorithm Performance Metrics Tailored for Prognostics

    Data.gov (United States)

    National Aeronautics and Space Administration — Prognostics has taken center stage in Condition Based Maintenance (CBM) where it is desired to estimate Remaining Useful Life (RUL) of a system so that remedial...

  18. Paired Model Evaluation of OCR Algorithms

    National Research Council Canada - National Science Library

    Kanungo, Tapas; Marton, Gregory A; Bulbul, Osama

    1998-01-01

    Characterizing the performance of Optical Character Recognition (OCR) systems is crucial for monitoring technical progress, predicting OCR performance, providing scientific explanations for system behavior, and identifying open problems...

  19. Evaluation of a New Backtrack Free Path Planning Algorithm for Manipulators

    Science.gov (United States)

    Islam, Md. Nazrul; Tamura, Shinsuke; Murata, Tomonari; Yanase, Tatsuro

    This paper evaluates a newly proposed backtrack-free path planning algorithm (BFA) for manipulators. BFA is an exact algorithm, i.e. it is resolution complete. Unlike existing resolution-complete algorithms, its computation time and memory space are proportional to the number of arms. Therefore paths can be calculated within a practical and predetermined time even for manipulators with many arms, and it becomes possible to plan complicated motions of multi-arm manipulators in fully automated environments. The performance of BFA is evaluated for 2-dimensional environments while changing the number of arms and obstacle placements. Its performance under locus and attitude constraints is also evaluated. Evaluation results show that the computation volume of the algorithm is almost the same as the theoretical one, i.e. it increases linearly with the number of arms even in complicated environments. Moreover, BFA achieves constant performance independent of the environment.

  20. Superficial dose evaluation of four dose calculation algorithms

    Science.gov (United States)

    Cao, Ying; Yang, Xiaoyu; Yang, Zhen; Qiu, Xiaoping; Lv, Zhiping; Lei, Mingjun; Liu, Gui; Zhang, Zijian; Hu, Yongmei

    2017-08-01

    Accurate superficial dose calculation is of major importance because of the skin toxicity in radiotherapy, especially within the initial 2 mm depth, which is considered more clinically relevant. The aim of this study is to evaluate the superficial dose calculation accuracy of four commonly used algorithms in commercially available treatment planning systems (TPS) by Monte Carlo (MC) simulation and film measurements. The superficial dose in a simple geometrical phantom with a size of 30 cm×30 cm×30 cm was calculated by PBC (Pencil Beam Convolution), AAA (Analytical Anisotropic Algorithm) and AXB (Acuros XB) in the Eclipse system, and by CCC (Collapsed Cone Convolution) in the RayStation system, under the conditions of a source-to-surface distance (SSD) of 100 cm and a field size (FS) of 10×10 cm². The EGSnrc (BEAMnrc/DOSXYZnrc) program was used to simulate the central axis dose distribution of a Varian Trilogy accelerator, combined with measurements of the superficial dose distribution by an extrapolation method using multilayer radiochromic films, to estimate the dose calculation accuracy of the four algorithms in the superficial region, which is described in detail by the ICRU (International Commission on Radiation Units and Measurements) and the ICRP (International Commission on Radiological Protection). In the superficial region, good agreement was achieved between the MC simulation and the film extrapolation method, with mean differences less than 1%, 2% and 5% for 0°, 30° and 60°, respectively. The relative skin dose errors were 0.84%, 1.88% and 3.90%; the mean dose discrepancies (0°, 30° and 60°) between each of the four algorithms and the MC simulation were (2.41±1.55%, 3.11±2.40%, and 1.53±1.05%), (3.09±3.00%, 3.10±3.01%, and 3.77±3.59%), (3.16±1.50%, 8.70±2.84%, and 18.20±4.10%) and (14.45±4.66%, 10.74±4.54%, and 3.34±3.26%) for AXB, CCC, AAA and PBC respectively. Monte Carlo simulation verified the feasibility of the superficial dose measurements by multilayer Gafchromic films. And the rank

  1. Robust ray-tracing algorithms for interactive dose rate evaluation

    International Nuclear Information System (INIS)

    Perrotte, L.

    2011-01-01

    More than ever, it is essential today to develop simulation tools to rapidly evaluate the dose rate received by operators working on nuclear sites. In order to easily study numerous different scenarios of intervention, the computation times of available software tools all have to be reduced. This mainly implies accelerating the geometrical computations needed for the dose rate evaluation. These computations consist of finding and sorting the whole list of intersections between a big 3D scene and multiple groups of 'radiative' rays meeting at the point where the dose has to be measured. In order to perform all these computations in less than a second, we first propose a GPU algorithm that enables the efficient management of one big group of coherent rays. Then we present a modification of this algorithm that guarantees the robustness of the ray-triangle intersection tests through the elimination of the precision issues due to floating-point arithmetic. This modification does not require the definition of scene-dependent coefficients ('epsilon' style) and only implies a small loss of performance (less than 10%). Finally we propose an efficient strategy to handle multiple ray groups (corresponding to multiple radiative objects) which uses the previous results. Thanks to these improvements, we are able to perform an interactive and robust dose rate evaluation on big 3D scenes: all of the intersections (more than 13 million) between 700 000 triangles and 12 groups of 100 000 rays each are found, sorted along each ray and transferred to the CPU in 470 milliseconds. (author) [fr
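
    For context, a conventional epsilon-based ray-triangle test is sketched below; the thesis's contribution is precisely to remove the need for such scene-dependent tolerances, which this standard Moller-Trumbore sketch does not attempt to reproduce.

        def ray_triangle_intersect(orig, d, v0, v1, v2, eps=1e-9):
            # Classic Moller-Trumbore test returning the distance t along the
            # ray, or None. The epsilon guards against near-parallel rays.
            def sub(a, b): return (a[0] - b[0], a[1] - b[1], a[2] - b[2])
            def cross(a, b): return (a[1]*b[2] - a[2]*b[1],
                                     a[2]*b[0] - a[0]*b[2],
                                     a[0]*b[1] - a[1]*b[0])
            def dot(a, b): return a[0]*b[0] + a[1]*b[1] + a[2]*b[2]
            e1, e2 = sub(v1, v0), sub(v2, v0)
            p = cross(d, e2)
            det = dot(e1, p)
            if abs(det) < eps:
                return None                      # ray parallel to triangle plane
            inv_det = 1.0 / det
            t_vec = sub(orig, v0)
            u = dot(t_vec, p) * inv_det
            if u < 0.0 or u > 1.0:
                return None
            q = cross(t_vec, e1)
            v = dot(d, q) * inv_det
            if v < 0.0 or u + v > 1.0:
                return None
            t = dot(e2, q) * inv_det
            return t if t > eps else None

        print(ray_triangle_intersect((0, 0, -1), (0, 0, 1),
                                     (-1, -1, 0), (1, -1, 0), (0, 1, 0)))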

  2. Criteria for performance evaluation

    Directory of Open Access Journals (Sweden)

    David J. Weiss

    2009-03-01

    Full Text Available Using a cognitive task (mental calculation) and a perceptual-motor task (stylized golf putting), we examined differential proficiency using the CWS index and several other quantitative measures of performance. The CWS index (Weiss and Shanteau, 2003) is a coherence criterion that looks only at internal properties of the data without incorporating an external standard. In Experiment 1, college students (n = 20) carried out 2- and 3-digit addition and multiplication problems under time pressure. In Experiment 2, experienced golfers (n = 12), also college students, putted toward a target from nine different locations. Within each experiment, we analyzed the same responses using different methods. For the arithmetic tasks, accuracy information (mean absolute deviation from the correct answer, MAD) using a coherence criterion was available; for golf, accuracy information using a correspondence criterion (mean deviation from the target, also MAD) was available. We ranked the performances of the participants according to each measure, then compared the orders using Spearman's r_s. For mental calculation, the CWS order correlated moderately (r_s = .46) with that of MAD. However, a different coherence criterion, degree of model fit, did not correlate with either CWS or accuracy. For putting, the ranking generated by CWS correlated .68 with that generated by MAD. Consensual answers were also available for both experiments, and the rankings they generated correlated highly with those of MAD. The coherence vs. correspondence distinction did not map well onto criteria for performance evaluation.
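
    Comparing two proficiency rankings with Spearman's r_s, as done in both experiments, amounts to the computation sketched below. The scores are made-up examples for illustration, not the study's data.

        from scipy.stats import spearmanr

        # Hypothetical proficiency scores for the same six participants under
        # two criteria (e.g. CWS index and mean absolute deviation, MAD).
        cws_scores = [3.1, 2.4, 4.0, 1.8, 2.9, 3.6]
        mad_scores = [0.42, 0.55, 0.31, 0.70, 0.48, 0.37]

        # MAD is an error measure (lower is better), so it is negated before
        # correlating so that both lists point in the same direction.
        rho, p_value = spearmanr(cws_scores, [-m for m in mad_scores])
        print("Spearman r_s = %.2f (p = %.3f)" % (rho, p_value))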

  3. Ramp - Metering Algorithms Evaluated within Simplified Conditions

    Science.gov (United States)

    Janota, Aleš; Holečko, Peter; Gregor, Michal; Hruboš, Marián

    2017-12-01

    Freeway networks reach their limits, since it is usually impossible to increase traffic volumes by indefinitely extending transport infrastructure through adding new traffic lanes. One of the possible solutions is to use advanced intelligent transport systems, particularly ramp metering systems. The paper shows how two particular algorithms of local and traffic-responsive control (Zone, ALINEA) can be adapted to simplified conditions corresponding to Slovak freeways. Both control strategies are modelled and simulated using PTV Vissim software, including the module VisVAP. Presented results demonstrate the properties of both control strategies, which are compared mutually as well as with the initial situation in which no control strategy is applied

  4. Ramp - Metering Algorithms Evaluated within Simplified Conditions

    Directory of Open Access Journals (Sweden)

    Janota Aleš

    2017-12-01

    Full Text Available Freeway networks reach their limits, since it is usually impossible to increase traffic volumes by indefinitely extending transport infrastructure through adding new traffic lanes. One of the possible solutions is to use advanced intelligent transport systems, particularly ramp metering systems. The paper shows how two particular algorithms of local and traffic-responsive control (Zone, ALINEA) can be adapted to simplified conditions corresponding to Slovak freeways. Both control strategies are modelled and simulated using PTV Vissim software, including the module VisVAP. Presented results demonstrate the properties of both control strategies, which are compared mutually as well as with the initial situation in which no control strategy is applied

  5. Relevance as a metric for evaluating machine learning algorithms

    NARCIS (Netherlands)

    Kota Gopalakrishna, A.; Ozcelebi, T.; Liotta, A.; Lukkien, J.J.

    2013-01-01

    In machine learning, the choice of a learning algorithm that is suitable for the application domain is critical. The performance metric used to compare different algorithms must also reflect the concerns of users in the application domain under consideration. In this work, we propose a novel

  6. ComprehensiveBench: a Benchmark for the Extensive Evaluation of Global Scheduling Algorithms

    Science.gov (United States)

    Pilla, Laércio L.; Bozzetti, Tiago C.; Castro, Márcio; Navaux, Philippe O. A.; Méhaut, Jean-François

    2015-10-01

    Parallel applications that present tasks with imbalanced loads or complex communication behavior usually do not exploit the underlying resources of parallel platforms to their full potential. In order to mitigate this issue, global scheduling algorithms are employed. As finding the optimal task distribution is an NP-Hard problem, identifying the most suitable algorithm for a specific scenario and comparing algorithms are not trivial tasks. In this context, this paper presents ComprehensiveBench, a benchmark for global scheduling algorithms that enables the variation of a vast range of parameters that affect performance. ComprehensiveBench can be used to assist in the development and evaluation of new scheduling algorithms, to help choose a specific algorithm for an arbitrary application, to emulate other applications, and to enable statistical tests. We illustrate its use in this paper with an evaluation of Charm++ periodic load balancers that stresses their characteristics.

  7. Performance of genetic algorithms in search for water splitting perovskites

    DEFF Research Database (Denmark)

    Jain, A.; Castelli, Ivano Eligio; Hautier, G.

    2013-01-01

    We examine the performance of genetic algorithms (GAs) in uncovering solar water light splitters over a space of almost 19,000 perovskite materials. The entire search space was previously calculated using density functional theory to determine solutions that fulfill constraints on stability, band...

  8. Performance and development for the Inner Detector Trigger Algorithms at ATLAS

    CERN Document Server

    Penc, Ondrej; The ATLAS collaboration

    2015-01-01

    A redesign of the tracking algorithms for the ATLAS trigger for Run 2 starting in spring 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves.

  9. The performance and development for the Inner Detector Trigger algorithms at ATLAS

    International Nuclear Information System (INIS)

    Penc, Ondrej

    2015-01-01

    A redesign of the tracking algorithms for the ATLAS trigger for LHC's Run 2 starting in 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. (paper)

  10. The performance and development for the Inner Detector Trigger algorithms at ATLAS

    CERN Document Server

    Penc, O; The ATLAS collaboration

    2015-01-01

    A redesign of the tracking algorithms for the ATLAS trigger for LHC's Run 2 starting in 2015 is in progress. The ATLAS HLT software has been restructured to run as a more flexible single stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. The new tracking strategy employed for Run 2 will use a Fast Track Finder (FTF) algorithm to seed subsequent Precision Tracking, and will result in improved track parameter resolution and faster execution times than achieved during Run 1. The performance of the new algorithms has been evaluated to identify those aspects where code optimisation would be most beneficial. The performance and timing of the algorithms for electron and muon reconstruction in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves.

  11. Algorithms and Methods for High-Performance Model Predictive Control

    DEFF Research Database (Denmark)

    Frison, Gianluca

    routines employed in the numerical tests. The main focus of this thesis is on linear MPC problems. In this thesis, both the algorithms and their implementation are equally important. About the implementation, a novel implementation strategy for the dense linear algebra routines in embedded optimization...... is proposed, aiming at improving the computational performance in case of small matrices. About the algorithms, they are built on top of the proposed linear algebra, and they are tailored to exploit the high-level structure of the MPC problems, with special care on reducing the computational complexity....

  12. Exchange inlet optimization by genetic algorithm for improved RBCC performance

    Science.gov (United States)

    Chorkawy, G.; Etele, J.

    2017-09-01

    A genetic algorithm based on real-parameter representation, using a variable selection pressure and a variable probability of mutation, is used to optimize an annular air-breathing rocket inlet called the Exchange Inlet. A rapid and accurate design method which provides estimates for air-breathing, mixing, and isentropic flow performance is used as the engine of the optimization routine. Comparison to detailed numerical simulations shows that the design method yields desired exit Mach numbers to within approximately 1% over 75% of the annular exit area and predicts entrained air mass flows to between 1% and 9% of numerically simulated values, depending on the flight condition. Optimum designs are shown to be obtained within approximately 8000 fitness function evaluations in a search space on the order of 10^6. The method is also shown to be able to identify beneficial values for particular alleles when they exist, while showing the ability to handle cases where physical and aphysical designs co-exist at particular values of a subset of alleles within a gene. For an air-breathing engine based on a hydrogen-fuelled rocket, an exchange inlet is designed which yields a predicted air entrainment ratio within 95% of the theoretical maximum.

  13. Performance comparison of some evolutionary algorithms on job shop scheduling problems

    Science.gov (United States)

    Mishra, S. K.; Rao, C. S. P.

    2016-09-01

    Job Shop Scheduling is a state-space search problem that belongs to the NP-hard category owing to its complexity and the combinatorial explosion of states. Several nature-inspired evolutionary methods have been developed to solve Job Shop Scheduling Problems. In this paper the evolutionary methods, namely Particle Swarm Optimization, Artificial Intelligence, Invasive Weed Optimization, Bacterial Foraging Optimization, and Music Based Harmony Search algorithms, are applied and fine-tuned to model and solve Job Shop Scheduling Problems. About 250 benchmark instances have been used to evaluate and compare the performance of these algorithms. The capabilities of each of these algorithms in solving Job Shop Scheduling Problems are outlined.

  14. Evaluating ortholog prediction algorithms in a yeast model clade.

    Directory of Open Access Journals (Sweden)

    Leonidas Salichos

    Full Text Available BACKGROUND: Accurate identification of orthologs is crucial for evolutionary studies and for functional annotation. Several algorithms have been developed for ortholog delineation, but so far, manually curated genome-scale biological databases of orthologous genes for algorithm evaluation have been lacking. We evaluated four popular ortholog prediction algorithms (MultiParanoid; OrthoMCL; RBH: Reciprocal Best Hit; RSD: Reciprocal Smallest Distance; the last two extended into the clustering algorithms cRBH and cRSD, respectively, so that they can predict orthologs across multiple taxa) against a set of 2,723 groups of high-quality curated orthologs from 6 Saccharomycete yeasts in the Yeast Gene Order Browser. RESULTS: Examination of sensitivity [TP/(TP+FN)], specificity [TN/(TN+FP)], and accuracy [(TP+TN)/(TP+TN+FP+FN)] across a broad parameter range showed that cRBH was the most accurate and specific algorithm, whereas OrthoMCL was the most sensitive. Evaluation of the algorithms across a varying number of species showed that cRBH had the highest accuracy and lowest false discovery rate [FP/(FP+TP)], followed by cRSD. Of the six species in our set, three descended from an ancestor that underwent whole genome duplication. Subsequent differential duplicate loss events in the three descendants resulted in distinct classes of gene loss patterns, including cases where the genes retained in the three descendants are paralogs, constituting 'traps' for ortholog prediction algorithms. We found that the false discovery rate of all algorithms dramatically increased in these traps. CONCLUSIONS: These results suggest that simple algorithms, like cRBH, may be better ortholog predictors than more complex ones (e.g., OrthoMCL and MultiParanoid) for evolutionary and functional genomics studies where the objective is the accurate inference of single-copy orthologs (e.g., molecular phylogenetics), but that all algorithms fail to accurately predict orthologs when paralogy
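
    The evaluation metrics quoted above reduce to simple ratios of confusion-matrix counts; the following is a direct transcription, with hypothetical counts supplied only to make the call runnable.

        def ortholog_metrics(tp, fp, tn, fn):
            # Direct transcription of the definitions quoted in the abstract.
            return {
                "sensitivity": tp / (tp + fn),
                "specificity": tn / (tn + fp),
                "accuracy": (tp + tn) / (tp + tn + fp + fn),
                "false_discovery_rate": fp / (fp + tp),
            }

        # Hypothetical confusion counts for one algorithm at one parameter setting.
        print(ortholog_metrics(tp=2400, fp=150, tn=9000, fn=323))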

  15. Jet algorithms performance in 13 TeV data

    CERN Document Server

    CMS Collaboration

    2017-01-01

    The performance of jet algorithms with data collected by the CMS detector at the LHC in 2015 with a center-of-mass energy of 13 TeV, corresponding to 2.3 fb$^{-1}$ of integrated luminosity, is reported. The criteria used to reject jets originating from detector noise are discussed and the efficiency and noise jet rejection rate are measured. A likelihood discriminant designed to differentiate jets initiated by light-quark partons from jets initiated from gluons is studied. A multivariate discriminator is built to distinguish jets initiated by a single high $p_{\mathrm{T}}$ quark or gluon from jets originating from the overlap of multiple low $p_{\mathrm{T}}$ particles from non-primary vertices (pileup jets). Algorithms used to identify large radius jets reconstructed from the decay products of highly Lorentz boosted W bosons and top quarks are discussed, and the efficiency and background rejection rates of these algorithms are measured.

  16. Performance of the ALICE secondary vertex b-tagging algorithm

    CERN Document Server

    INSPIRE-00262232

    2016-11-04

    The identification of jets originating from beauty quarks in heavy-ion collisions is important to study the properties of the hot and dense matter produced in such collisions. A variety of algorithms for b-jet tagging was elaborated at the LHC experiments. They rely on the properties of B hadrons, i.e. their long lifetime, large mass and large multiplicity of decay products. In this work, the b-tagging algorithm based on displaced secondary-vertex topologies is described. We present Monte Carlo based performance studies of the algorithm for charged jets reconstructed with the ALICE tracking system in p-Pb collisions at $\sqrt{s_\text{NN}}$ = 5.02 TeV. The tagging efficiency, rejection rate and the correction of the smearing effects of non-ideal detector response are presented.

  17. Comparative evaluation of community detection algorithms: a topological approach

    International Nuclear Information System (INIS)

    Orman, Günce Keziban; Labatut, Vincent; Cherifi, Hocine

    2012-01-01

    Community detection is one of the most active fields in complex network analysis, due to its potential value in practical applications. Many works inspired by different paradigms are devoted to the development of algorithmic solutions allowing the community structure of the network, i.e. its cohesive subgroups, to be revealed. Comparative studies reported in the literature usually rely on a performance measure considering the community structure as a partition (Rand index, normalized mutual information, etc.). However, this type of comparison neglects the topological properties of the communities. In this paper, we present a comprehensive comparative study of a representative set of community detection methods, in which we adopt both types of evaluation. Community-oriented topological measures are used to qualify the communities and evaluate their deviation from the reference structure. In order to mimic real-world systems, we use artificially generated realistic networks. It turns out there is no equivalence between the two approaches: a high performance does not necessarily correspond to correct topological properties, and vice versa. They can therefore be considered as complementary, and we recommend applying both of them in order to perform a complete and accurate assessment. (paper)

  18. Partial Evaluation of the Euclidian Algorithm

    DEFF Research Database (Denmark)

    Danvy, Olivier; Goldberg, Mayer

    1997-01-01

    Some programs are easily amenable to partial evaluation because their control flow clearly depends on one of their parameters. Specializing such programs with respect to this parameter eliminates the associated interpretive overhead. Some other programs, however, do not exhibit this interpreter-l...

  19. Performance study of LMS based adaptive algorithms for unknown system identification

    International Nuclear Information System (INIS)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-01-01

    Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of LMS algorithms on their robustness and misalignment
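
    A minimal sketch of the system-identification setup used to compare such algorithms, here with plain LMS and its normalized variant NLMS, is given below. The filter order, step sizes and test system are illustrative assumptions, and the other variants mentioned in the abstract (LMS-Newton, TDLMS, APA) are not shown.

        import numpy as np

        def identify_system(x, d, order=8, mu=0.5, normalized=True, eps=1e-6):
            # LMS / NLMS adaptive filter for system identification: x is the
            # input signal, d the (noisy) output of the unknown system, and the
            # returned vector w approximates the unknown impulse response.
            w = np.zeros(order)
            for n in range(order - 1, len(x)):
                u = x[n - order + 1:n + 1][::-1]     # x[n], x[n-1], ... newest first
                e = d[n] - w @ u                     # a-priori estimation error
                step = mu / (eps + u @ u) if normalized else mu
                w = w + step * e * u                 # coefficient update
            return w

        rng = np.random.default_rng(0)
        h_true = np.array([0.6, -0.3, 0.2, 0.1, 0.0, 0.05, 0.0, 0.0])
        x = rng.standard_normal(5000)
        d = np.convolve(x, h_true)[:len(x)] + 0.01 * rng.standard_normal(len(x))
        w_hat = identify_system(x, d)                # should end up close to h_true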

  20. Performance study of LMS based adaptive algorithms for unknown system identification

    Energy Technology Data Exchange (ETDEWEB)

    Javed, Shazia; Ahmad, Noor Atinah [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 Penang (Malaysia)

    2014-07-10

    Adaptive filtering techniques have gained much popularity in the modeling of the unknown system identification problem. These techniques can be classified as either iterative or direct. Iterative techniques include the stochastic descent method and its improved versions in affine space. In this paper we present a comparative study of the least mean square (LMS) algorithm and some improved versions of LMS, more precisely the normalized LMS (NLMS), LMS-Newton, transform domain LMS (TDLMS) and affine projection algorithm (APA). The performance evaluation of these algorithms is carried out using an adaptive system identification (ASI) model with random input signals, in which the unknown (measured) signal is assumed to be contaminated by output noise. Simulation results are recorded to compare the performance in terms of convergence speed, robustness, misalignment, and their sensitivity to the spectral properties of input signals. The main objective of this comparative study is to observe the effects of the fast convergence rate of the improved versions of LMS algorithms on their robustness and misalignment.

  1. Performance modeling of parallel algorithms for solving neutron diffusion problems

    International Nuclear Information System (INIS)

    Azmy, Y.Y.; Kirk, B.L.

    1995-01-01

    Neutron diffusion calculations are the most common computational methods used in the design, analysis, and operation of nuclear reactors and related activities. Here, mathematical performance models are developed for the parallel algorithm used to solve the neutron diffusion equation on message passing and shared memory multiprocessors represented by the Intel iPSC/860 and the Sequent Balance 8000, respectively. The performance models are validated through several test problems, and these models are used to estimate the performance of each of the two considered architectures in situations typical of practical applications, such as fine meshes and a large number of participating processors. While message passing computers are capable of producing speedup, the parallel efficiency deteriorates rapidly as the number of processors increases. Furthermore, the speedup fails to improve appreciably for massively parallel computers so that only small- to medium-sized message passing multiprocessors offer a reasonable platform for this algorithm. In contrast, the performance model for the shared memory architecture predicts very high efficiency over a wide range of number of processors reasonable for this architecture. Furthermore, the model efficiency of the Sequent remains superior to that of the hypercube if its model parameters are adjusted to make its processors as fast as those of the iPSC/860. It is concluded that shared memory computers are better suited for this parallel algorithm than message passing computers

  2. Evaluation of a Cross Layer Scheduling Algorithm for LTE Downlink

    Directory of Open Access Journals (Sweden)

    A. Popovska Avramova

    2013-06-01

    Full Text Available The LTE standard is a leading standard in the wireless broadband market. The Radio Resource Management at the base station plays a major role in satisfying users' demand for high data rates and quality of service. This paper evaluates a cross-layer scheduling algorithm that aims at minimizing the resource utilization. The algorithm makes decisions based on channel conditions, the size of transmission buffers and different quality of service demands. Simulation results show that the new algorithm improves the resource utilization and provides better guarantees for service quality.

  3. Performance Test of Core Protection and Monitoring Algorithm with DLL for SMART Simulator Implementation

    International Nuclear Information System (INIS)

    Koo, Bonseung; Hwang, Daehyun; Kim, Keungkoo

    2014-01-01

    A multi-purpose best-estimate simulator for SMART is being established, which is intended to be used as a tool to evaluate the impacts of design changes on the safety performance, and to improve and/or optimize the operating procedure of SMART. In keeping with these intentions, a real-time model of the digital core protection and monitoring systems was developed and the real-time performance of the models was verified for various simulation scenarios. In this paper, a performance test of the core protection and monitoring algorithm with a DLL file for the SMART simulator implementation was performed. A DLL file of the simulator application code was made and several real-time evaluation tests were conducted for the steady-state and transient conditions with simulated system variables. A performance test of the core protection and monitoring algorithms for the SMART simulator was performed. A DLL file of the simulator version code was made and several real-time evaluation tests were conducted for various scenarios with a DLL file and simulated system variables. The results of all test cases showed good agreement with the reference results, and some features caused by the algorithm change were properly reflected in the DLL results. Therefore, it was concluded that the SCOPS S SIM and SCOMS S SIM algorithms and calculational capabilities are appropriate for the core protection and monitoring program in the SMART simulator

  4. Performance Analysis of the Decentralized Eigendecomposition and ESPRIT Algorithm

    Science.gov (United States)

    Suleiman, Wassim; Pesavento, Marius; Zoubir, Abdelhak M.

    2016-05-01

    In this paper, we consider performance analysis of the decentralized power method for the eigendecomposition of the sample covariance matrix based on the averaging consensus protocol. An analytical expression of the second order statistics of the eigenvectors obtained from the decentralized power method which is required for computing the mean square error (MSE) of subspace-based estimators is presented. We show that the decentralized power method is not an asymptotically consistent estimator of the eigenvectors of the true measurement covariance matrix unless the averaging consensus protocol is carried out over an infinitely large number of iterations. Moreover, we introduce the decentralized ESPRIT algorithm which yields fully decentralized direction-of-arrival (DOA) estimates. Based on the performance analysis of the decentralized power method, we derive an analytical expression of the MSE of DOA estimators using the decentralized ESPRIT algorithm. The validity of our asymptotic results is demonstrated by simulations.
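
    For reference, the centralized power method that the decentralized scheme approximates can be written compactly as below; in the decentralized variant the matrix-vector product is obtained through averaging-consensus exchanges among the sensor nodes, which this sketch does not model. The array sizes are illustrative.

        import numpy as np

        def power_method(R, iters=100, seed=None):
            # Dominant eigenpair of a sample covariance matrix by power iteration.
            v = np.random.default_rng(seed).standard_normal(R.shape[0])
            v /= np.linalg.norm(v)
            for _ in range(iters):
                v = R @ v                   # replaced by consensus averaging in
                v /= np.linalg.norm(v)      # the decentralized formulation
            return v, float(v @ R @ v)

        X = np.random.default_rng(1).standard_normal((6, 500))   # 6 sensors, 500 snapshots
        R_hat = X @ X.T / X.shape[1]                              # sample covariance
        v1, lambda1 = power_method(R_hat, seed=2)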

  5. An evaluation of classification algorithms for intrusion detection ...

    African Journals Online (AJOL)

    An evaluation of classification algorithms for intrusion detection. ... Most of the available IDSs use all the 41 features in the network to evaluate and search for intrusive patterns, in which ...

  6. Evaluating progressive-rendering algorithms in appearance design tasks.

    Science.gov (United States)

    Jiawei Ou; Karlik, Ondrej; Křivánek, Jaroslav; Pellacini, Fabio

    2013-01-01

    Progressive rendering is becoming a popular alternative to precomputational approaches to appearance design. However, progressive algorithms create images exhibiting visual artifacts at early stages. A user study investigated these artifacts' effects on user performance in appearance design tasks. Novice and expert subjects performed lighting and material editing tasks with four algorithms: random path tracing, quasirandom path tracing, progressive photon mapping, and virtual-point-light rendering. Both the novices and experts strongly preferred path tracing to progressive photon mapping and virtual-point-light rendering. None of the participants preferred random path tracing to quasirandom path tracing or vice versa; the same situation held between progressive photon mapping and virtual-point-light rendering. The user workflow didn’t differ significantly with the four algorithms. The Web Extras include a video showing how four progressive-rendering algorithms converged (at http://youtu.be/ck-Gevl1e9s), the source code used, and other supplementary materials.

  7. Performance of the ATLAS primary vertex reconstruction algorithms

    CERN Document Server

    Zhang, Matt

    2017-01-01

    The reconstruction of primary vertices in the busy, high pile-up environment of the LHC is a challenging task. The challenges and novel methods developed by the ATLAS experiment to reconstruct vertices in such environments will be presented. Advances in vertex seeding, including methods taken from medical imaging that allow for the reconstruction of very nearby vertices, will be highlighted. The performance of the current vertexing algorithms using early Run-2 data will be presented and compared to results from simulation.

  8. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple LOGO-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... No disks are moved from A to B using C as an auxiliary rod. • move_disk (A, C); the (N0 + 1)th disk is moved from A to C directly ...

  9. TEACHER PERFORMANCE EVALUATION

    Directory of Open Access Journals (Sweden)

    Guadalupe Iván Martínez-Chairez

    2015-07-01

    Full Text Available This research report comes from a study carried out during the 2013-2014 and 2014-2015 school years in the central-southern region of the state of Chihuahua, in education sector 25, which consists of five school zones serving the municipalities of Meoqui, Julimes and Delicias. The study follows a mixed-methods, correlational design with a sequential comprehensive procedure. Among the results, a correlation (.578) was found between teachers' years of service and the score assigned to students in the teaching career program, but there is no association between teacher performance and the context in which the teacher works; in addition, there is no relationship between teacher performance and students' school performance on standardized tests. 2.4% of the representative sample showed excellent teaching performance and 7.3% showed poor teaching performance, but it is noteworthy that 39% of the teachers observed showed good teaching performance.

  10. A Formal Approach for RT-DVS Algorithms Evaluation Based on Statistical Model Checking

    Directory of Open Access Journals (Sweden)

    Shengxin Dai

    2015-01-01

    Full Text Available Energy saving is a crucial concern in embedded real time systems. Many RT-DVS algorithms have been proposed to save energy while preserving deadline guarantees. This paper presents a novel approach to evaluate RT-DVS algorithms using statistical model checking. A scalable framework is proposed for RT-DVS algorithms evaluation, in which the relevant components are modeled as stochastic timed automata, and the evaluation metrics including utilization bound, energy efficiency, battery awareness, and temperature awareness are expressed as statistical queries. Evaluation of these metrics is performed by verifying the corresponding queries using UPPAAL-SMC and analyzing the statistical information provided by the tool. We demonstrate the applicability of our framework via a case study of five classical RT-DVS algorithms.

  11. Performances of the New Real Time Tsunami Detection Algorithm applied to tide gauges data

    Science.gov (United States)

    Chierici, F.; Embriaco, D.; Morucci, S.

    2017-12-01

    Real-time tsunami detection algorithms play a key role in any Tsunami Early Warning System. We have developed a new algorithm for tsunami detection (TDA) based on the real-time tide removal and real-time band-pass filtering of seabed pressure time series acquired by Bottom Pressure Recorders. The TDA algorithm greatly increases the tsunami detection probability, shortens the detection delay and enhances detection reliability with respect to the most widely used tsunami detection algorithm, while containing the computational cost. The algorithm is designed to be used also in autonomous early warning systems with a set of input parameters and procedures which can be reconfigured in real time. We have also developed a methodology based on Monte Carlo simulations to test the tsunami detection algorithms. The algorithm performance is estimated by defining and evaluating statistical parameters, namely the detection probability, the detection delay, which are functions of the tsunami amplitude and wavelength, and the occurring rate of false alarms. In this work we present the performance of the TDA algorithm applied to tide gauge data. We have adapted the new tsunami detection algorithm and the Monte Carlo test methodology to tide gauges. Sea level data acquired by coastal tide gauges in different locations and environmental conditions have been used in order to consider real working scenarios in the test. We also present an application of the algorithm to the tsunami event generated by Tohoku earthquake on March 11th 2011, using data recorded by several tide gauges scattered all over the Pacific area.
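
    A much-simplified sketch of the general detect-by-filtering idea, detrending and band-pass filtering a sea-level record and thresholding the result, is given below. The sampling rate, frequency band and threshold are assumptions chosen for illustration and do not reproduce the TDA algorithm described above.

        import numpy as np
        from scipy.signal import butter, sosfilt

        def detect_tsunami(pressure, fs=1/15, band=(1/7200, 1/120), threshold=0.02):
            # Band-pass the sea-level record to an assumed tsunami band (periods
            # of roughly 2 minutes to 2 hours here), which removes the tide and
            # short wind waves, then flag samples exceeding a fixed threshold.
            sos = butter(4, band, btype="bandpass", fs=fs, output="sos")
            filtered = sosfilt(sos, pressure - np.mean(pressure))
            return filtered, np.flatnonzero(np.abs(filtered) > threshold)

        t = np.arange(0, 24 * 3600, 15.0)                     # one day, 15 s sampling
        tide = 1.0 * np.sin(2 * np.pi * t / (12.42 * 3600))   # semidiurnal tide (toy)
        tsunami = 0.05 * np.sin(2 * np.pi * t / 900) * (t > 18 * 3600)
        _, alarm_samples = detect_tsunami(tide + tsunami)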

  12. SURF IA Conflict Detection and Resolution Algorithm Evaluation

    Science.gov (United States)

    Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Barker, Glover D.

    2012-01-01

    The Enhanced Traffic Situational Awareness on the Airport Surface with Indications and Alerts (SURF IA) algorithm was evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. SURF IA is designed to increase flight crew situation awareness of the runway environment and facilitate an appropriate and timely response to potential conflict situations. The purpose of the study was to evaluate the performance of the SURF IA algorithm under various runway scenarios, multiple levels of conflict detection and resolution (CD&R) system equipage, and various levels of horizontal position accuracy. This paper gives an overview of the SURF IA concept, simulation study, and results. Runway incursions are a serious aviation safety hazard. As such, the FAA is committed to reducing the severity, number, and rate of runway incursions by implementing a combination of guidance, education, outreach, training, technology, infrastructure, and risk identification and mitigation initiatives [1]. Progress has been made in reducing the number of serious incursions - from a high of 67 in Fiscal Year (FY) 2000 to 6 in FY2010. However, the rate of all incursions has risen steadily over recent years - from a rate of 12.3 incursions per million operations in FY2005 to a rate of 18.9 incursions per million operations in FY2010 [1, 2]. The National Transportation Safety Board (NTSB) also considers runway incursions to be a serious aviation safety hazard, listing runway incursion prevention as one of their most wanted transportation safety improvements [3]. The NTSB recommends that immediate warning of probable collisions/incursions be given directly to flight crews in the cockpit [4].

  13. Evaluation of TCP Congestion Control Algorithms on the Windows Vista Platform

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yee-Ting; /SLAC

    2006-07-07

    CTCP, an innovative TCP congestion control algorithm developed by Microsoft, is evaluated and compared to HSTCP and StandardTCP. Tests were performed on the production Internet from Stanford Linear Accelerator Center (SLAC) to various geographically located hosts to give a broad overview of the performances. We find that certain issues were apparent during testing (not directly related to the congestion control algorithms) which may skew results. With this in mind, we find that CTCP performed similarly to HSTCP across a multitude of different network environments. However, to improve the fairness and to reduce the impact of CTCP upon existing StandardTCP traffic, two areas of further research were investigated. Algorithmic additions to CTCP for burst control to reduce the aggressiveness of its cwnd increments demonstrated beneficial improvements in both fairness and throughput over the original CTCP algorithm. Similarly, γ auto-tuning algorithms were investigated to dynamically adapt CTCP flows to their network conditions for optimal performance. While the effects of these auto-tuning algorithms when used in addition to burst control showed little to no benefit to fairness nor throughput for the limited number of network paths tested, one of the auto-tuning algorithms performed such that there was negligible impact upon StandardTCP. With these improvements, CTCP was found to perform better than HSTCP in terms of fairness and similarly in terms of throughput under the production environments tested.

  14. Evaluation of TCP Congestion Control Algorithms on the Windows Vista Platform

    International Nuclear Information System (INIS)

    Li, Y

    2006-01-01

    CTCP, an innovative TCP congestion control algorithm developed by Microsoft, is evaluated and compared to HSTCP and StandardTCP. Tests were performed on the production Internet from Stanford Linear Accelerator Center (SLAC) to various geographically located hosts to give a broad overview of the performances. We find that certain issues were apparent during testing (not directly related to the congestion control algorithms) which may skew results. With this in mind, we find that CTCP performed similarly to HSTCP across a multitude of different network environments. However, to improve the fairness and to reduce the impact of CTCP upon existing StandardTCP traffic, two areas of further research were investigated. Algorithmic additions to CTCP for burst control to reduce the aggressiveness of its cwnd increments demonstrated beneficial improvements in both fairness and throughput over the original CTCP algorithm. Similarly, auto-tuning algorithms were investigated to dynamically adapt CTCP flows to their network conditions for optimal performance. Whilst the effects of these auto-tuning algorithms when used in addition to burst control showed little to no benefit to fairness nor throughput for the limited number of network paths tested, one of the auto-tuning algorithms performed such that there was negligible impact upon StandardTCP. With these improvements, CTCP was found to perform better than HSTCP in terms of fairness and similarly in terms of throughput under the production environments tested

  15. CoSMOS: Performance of Kurtosis Algorithm for Radio Frequency Interference Detection and Mitigation

    DEFF Research Database (Denmark)

    Misra, Sidharth; Kristensen, Steen Savstrup; Skou, Niels

    2007-01-01

    The performance of a previously developed algorithm for Radio Frequency Interference (RFI) detection and mitigation is experimentally evaluated. Results obtained from CoSMOS, an airborne campaign using a fully polarimetric L-band radiometer are analyzed for this purpose. Data is collected using two...

  16. Performance comparison between total variation (TV)-based compressed sensing and statistical iterative reconstruction algorithms

    International Nuclear Information System (INIS)

    Tang Jie; Nett, Brian E; Chen Guanghong

    2009-01-01

    Of all available reconstruction methods, statistical iterative reconstruction algorithms appear particularly promising since they enable accurate physical noise modeling. The newly developed compressive sampling/compressed sensing (CS) algorithm has shown the potential to accurately reconstruct images from highly undersampled data. The CS algorithm can be implemented in the statistical reconstruction framework as well. In this study, we compared the performance of two standard statistical reconstruction algorithms (penalized weighted least squares and q-GGMRF) to the CS algorithm. In assessing the image quality using these iterative reconstructions, it is critical to utilize realistic background anatomy as the reconstruction results are object dependent. A cadaver head was scanned on a Varian Trilogy system at different dose levels. Several figures of merit, including the relative root mean square error and a quality factor which accounts for the noise performance and the spatial resolution, were introduced to objectively evaluate reconstruction performance. A comparison between the three algorithms at a constant undersampling factor is presented for several dose levels. To facilitate this comparison, the original CS method was formulated in the framework of the statistical image reconstruction algorithms. Important conclusions from our measurements are that (1) for realistic neuro-anatomy, over 100 projections are required to avoid streak artifacts in the reconstructed images even with CS reconstruction, (2) regardless of the algorithm employed, it is beneficial to distribute the total dose to more views as long as each view remains quantum noise limited and (3) the total variation-based CS method is not appropriate for very low dose levels because while it can mitigate streaking artifacts, the images exhibit patchy behavior, which is potentially harmful for medical diagnosis.
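    For orientation, a penalized weighted least-squares objective with a total-variation penalty is one common way to write the kind of statistical reconstruction compared above (a generic formulation, not necessarily the authors' exact one):

        \hat{x} = \arg\min_{x} \; (y - Ax)^{T} W (y - Ax) + \lambda\, \mathrm{TV}(x),
        \qquad
        \mathrm{TV}(x) = \sum_{j} \sqrt{(\nabla_h x)_j^2 + (\nabla_v x)_j^2},

    where y holds the measured projections, A is the forward projection operator, W is a diagonal statistical weighting matrix, and λ controls the strength of the regularization.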

  17. Simulation environment for algorithms and agents evaluation.

    Directory of Open Access Journals (Sweden)

    Pablo CHAMOSO

    2016-06-01

    Full Text Available This article presents an adaptive platform that can simulate the centralized control of different smart city areas, for example, public lighting and its intelligent management, public zones of buildings, energy distribution, etc. It can operate the hardware infrastructure and perform optimization of both energy consumption and economic control from a modular architecture which is fully adaptable to most cities. Machine-to-machine (M2M) communication permits connecting all the sensors of the city so that they provide the platform with a complete perspective of the global city status. To carry out this optimization, the platform offers developers software that operates on the hardware infrastructure and merges various techniques of artificial intelligence (AI) and statistics, such as artificial neural networks (ANN), multi-agent systems (MAS) or a Service Oriented Approach (SOA), forming an Internet of Services (IoS). Different case studies were tested using the presented platform, and further development is still underway with additional case studies.

  18. A Parallel Particle Swarm Optimization Algorithm Accelerated by Asynchronous Evaluations

    Science.gov (United States)

    Venter, Gerhard; Sobieszczanski-Sobieski, Jaroslaw

    2005-01-01

    A parallel Particle Swarm Optimization (PSO) algorithm is presented. Particle swarm optimization is a fairly recent addition to the family of non-gradient based, probabilistic search algorithms that is based on a simplified social model and is closely tied to swarming theory. Although PSO algorithms present several attractive properties to the designer, they are plagued by high computational cost as measured by elapsed time. One approach to reduce the elapsed time is to make use of coarse-grained parallelization to evaluate the design points. Previous parallel PSO algorithms were mostly implemented in a synchronous manner, where all design points within a design iteration are evaluated before the next iteration is started. This approach leads to poor parallel speedup in cases where a heterogeneous parallel environment is used and/or where the analysis time depends on the design point being analyzed. This paper introduces an asynchronous parallel PSO algorithm that greatly improves the parallel efficiency. The asynchronous algorithm is benchmarked on a cluster assembled from Apple Macintosh G5 desktop computers, using the multi-disciplinary optimization of a typical transport aircraft wing as an example.
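    For readers unfamiliar with the underlying update rule, a minimal serial sketch of the standard PSO particle update is given below; in the asynchronous scheme described above, the same update would be applied to each particle as soon as its analysis returns rather than once per swarm iteration. The coefficient values are common defaults, not necessarily those used in the study.

        import random

        def pso_update(position, velocity, personal_best, global_best,
                       w=0.729, c1=1.49, c2=1.49):
            """One standard PSO velocity/position update for a single particle."""
            new_position, new_velocity = [], []
            for x, v, pb, gb in zip(position, velocity, personal_best, global_best):
                r1, r2 = random.random(), random.random()
                v_new = w * v + c1 * r1 * (pb - x) + c2 * r2 * (gb - x)
                new_velocity.append(v_new)
                new_position.append(x + v_new)
            return new_position, new_velocity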

  19. Instrument performance evaluation

    International Nuclear Information System (INIS)

    Swinth, K.L.

    1993-03-01

    Deficiencies exist in both the performance and the quality of health physics instruments. Recognizing the implications of such deficiencies for the protection of workers and the public, in the early 1980s the DOE and the NRC encouraged the development of a performance standard and established a program to test a series of instruments against criteria in the standard. The purpose of the testing was to establish the practicality of the criteria in the standard, to determine the performance of a cross section of available instruments, and to establish a testing capability. Over 100 instruments were tested, resulting in a practical standard and an understanding of the deficiencies in available instruments. In parallel with the instrument testing, a value-impact study clearly established the benefits of implementing a formal testing program. An ad hoc committee also met several times to establish recommendations for the voluntary implementation of a testing program based on the studies and the performance standard. For several reasons, a formal program did not materialize. Ongoing tests and studies have supported the development of specific instruments and have helped specific clients understand the performance of their instruments. The purpose of this presentation is to trace the history of instrument testing to date and suggest the benefits of a centralized formal program

  20. Airport Traffic Conflict Detection and Resolution Algorithm Evaluation

    Science.gov (United States)

    Jones, Denise R.; Chartrand, Ryan C.; Wilson, Sara R.; Commo, Sean A.; Otero, Sharon D.; Barker, Glover D.

    2012-01-01

    A conflict detection and resolution (CD&R) concept for the terminal maneuvering area (TMA) was evaluated in a fast-time batch simulation study at the National Aeronautics and Space Administration (NASA) Langley Research Center. The CD&R concept is being designed to enhance surface situation awareness and provide cockpit alerts of potential conflicts during runway, taxi, and low altitude air-to-air operations. The purpose of the study was to evaluate the performance of aircraft-based CD&R algorithms in the TMA, as a function of surveillance accuracy. This paper gives an overview of the CD&R concept, simulation study, and results. The Next Generation Air Transportation System (NextGen) concept for the year 2025 and beyond envisions the movement of large numbers of people and goods in a safe, efficient, and reliable manner [1]. NextGen will remove many of the constraints in the current air transportation system, support a wider range of operations, and provide an overall system capacity up to three times that of current operating levels. Emerging NextGen operational concepts [2], such as four-dimensional trajectory based airborne and surface operations, equivalent visual operations, and super density arrival and departure operations, require a different approach to air traffic management and as a result, a dramatic shift in the tasks, roles, and responsibilities for the flight deck and air traffic control (ATC) to ensure a safe, sustainable air transportation system.

  1. Evaluation of single and multi-threshold entropy-based algorithms for folded substrate analysis

    Directory of Open Access Journals (Sweden)

    Magdolna Apro

    2011-10-01

    Full Text Available This paper presents a detailed evaluation of two variants of the Maximum Entropy image segmentation algorithm (single and multi-thresholding) with respect to their performance on segmenting test images showing folded substrates. The segmentation quality was determined by evaluating values of four different measures: misclassification error, modified Hausdorff distance, relative foreground area error and positive-negative false detection ratio. New normalization methods were proposed in order to combine all parameters into a unique algorithm evaluation rating. The segmentation algorithms were tested on images obtained by three different digitalisation methods covering four different surface textures. In addition, the methods were also tested on three images presenting a perfect fold. The obtained results showed that the Multi-Maximum Entropy algorithm is better suited for the analysis of images showing folded substrates.
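    As background, the single-threshold variant evaluated above follows the classic maximum-entropy (Kapur-style) criterion: choose the grey level that maximizes the summed entropies of the background and foreground histograms. A compact sketch of that criterion (not the authors' implementation) is:

        import math

        def max_entropy_threshold(histogram):
            """Return the grey level t that maximizes H(background) + H(foreground)."""
            total = float(sum(histogram))
            p = [h / total for h in histogram]
            best_t, best_h = 0, -1.0
            for t in range(1, len(p)):
                w0 = sum(p[:t])          # background class probability
                w1 = 1.0 - w0            # foreground class probability
                if w0 <= 0 or w1 <= 0:
                    continue
                h0 = -sum(pi / w0 * math.log(pi / w0) for pi in p[:t] if pi > 0)
                h1 = -sum(pi / w1 * math.log(pi / w1) for pi in p[t:] if pi > 0)
                if h0 + h1 > best_h:
                    best_h, best_t = h0 + h1, t
            return best_t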

  2. An Evaluation of the Sniffer Global Optimization Algorithm Using Standard Test Functions

    Science.gov (United States)

    Butler, Roger A. R.; Slaminka, Edward E.

    1992-03-01

    The performance of Sniffer—a new global optimization algorithm—is compared with that of Simulated Annealing. Using the number of function evaluations as a measure of efficiency, the new algorithm is shown to be significantly better at finding the global minimum of seven standard test functions. Several of the test functions used have many local minima and very steep walls surrounding the global minimum. Such functions are intended to thwart global minimization algorithms.

  3. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics which are dated much before Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming.

  4. Evaluation of algorithms used to order markers on genetic maps.

    Science.gov (United States)

    Mollinari, M; Margarido, G R A; Vencovsky, R; Garcia, A A F

    2009-12-01

    When building genetic maps, it is necessary to choose from several marker ordering algorithms and criteria, and the choice is not always simple. In this study, we evaluate the efficiency of algorithms try (TRY), seriation (SER), rapid chain delineation (RCD), recombination counting and ordering (RECORD) and unidirectional growth (UG), as well as the criteria PARF (product of adjacent recombination fractions), SARF (sum of adjacent recombination fractions), SALOD (sum of adjacent LOD scores) and LHMC (likelihood through hidden Markov chains), used with the RIPPLE algorithm for error verification, in the construction of genetic linkage maps. A linkage map of a hypothetical diploid and monoecious plant species was simulated containing one linkage group and 21 markers with fixed distance of 3 cM between them. In all, 700 F(2) populations were randomly simulated with 100 and 400 individuals with different combinations of dominant and co-dominant markers, as well as 10 and 20% of missing data. The simulations showed that, in the presence of co-dominant markers only, any combination of algorithm and criteria may be used, even for a reduced population size. In the case of a smaller proportion of dominant markers, any of the algorithms and criteria (except SALOD) investigated may be used. In the presence of high proportions of dominant markers and smaller samples (around 100), the probability of repulsion linkage increases between them and, in this case, use of the algorithms TRY and SER associated to RIPPLE with criterion LHMC would provide better results.
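    To make the ordering criteria concrete, the SARF criterion mentioned above simply sums the recombination fractions between adjacent markers in a candidate order, and orders with a smaller sum are preferred. A minimal sketch with a hypothetical pairwise recombination-fraction matrix:

        def sarf(order, rf):
            """Sum of adjacent recombination fractions for a candidate marker order.

            order: sequence of marker indices, e.g. [0, 2, 1, 3]
            rf:    symmetric matrix (list of lists) of pairwise recombination fractions
            """
            return sum(rf[a][b] for a, b in zip(order, order[1:]))

        # Hypothetical four-marker example; a lower SARF indicates a better order.
        rf = [[0.00, 0.03, 0.06, 0.10],
              [0.03, 0.00, 0.03, 0.07],
              [0.06, 0.03, 0.00, 0.04],
              [0.10, 0.07, 0.04, 0.00]]
        print(sarf([0, 1, 2, 3], rf))   # 0.10
        print(sarf([0, 2, 1, 3], rf))   # 0.16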

  5. HLA RTI performance evaluation

    CSIR Research Space (South Africa)

    Malinga, L

    2009-07-01

    Full Text Available size of the UDP packet of the network, namely 64 KB, when using the best effort mode. The performance analysis task of the different RTIs was undertaken for two reasons. The first is to re-establish a High Level Architecture (HLA) in our Research... exchange messages over the network with the RTI Gateway process, via TCP sockets or UDP in order to realise the services associated with the RTI. The allocation of CPU resources to the federate and the RTIA process is exclusively managed...

  6. The effect of load imbalances on the performance of Monte Carlo algorithms in LWR analysis

    International Nuclear Information System (INIS)

    Siegel, A.R.; Smith, K.; Romano, P.K.; Forget, B.; Felker, K.

    2013-01-01

    A model is developed to predict the impact of particle load imbalances on the performance of domain-decomposed Monte Carlo neutron transport algorithms. Expressions for upper bound performance “penalties” are derived in terms of simple machine characteristics, material characterizations and initial particle distributions. The hope is that these relations can be used to evaluate tradeoffs among different memory decomposition strategies in next generation Monte Carlo codes, and perhaps as a metric for triggering particle redistribution in production codes

  7. An Elementary Algorithm to Evaluate Trigonometric Functions to High Precision

    Science.gov (United States)

    Johansson, B. Tomas

    2018-01-01

    Evaluation of the cosine function is done via a simple Cordic-like algorithm, together with a package for handling arbitrary-precision arithmetic in the computer program Matlab. Approximations to the cosine function having hundreds of correct decimals are presented with a discussion around errors and implementation.
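    The article's Cordic-like recurrence (and its Matlab arbitrary-precision package) is not reproduced here, but the same overall idea of reducing the argument, summing a rapidly converging series and then undoing the reduction can be sketched with Python's decimal module, using the double-angle identity cos(2a) = 2 cos^2(a) - 1. All names and tolerances below are illustrative.

        from decimal import Decimal, getcontext

        def cos_high_precision(x, digits=100):
            """Approximate cos(x) to roughly `digits` decimal digits."""
            getcontext().prec = digits + 10            # guard digits
            x = Decimal(str(x))
            # Halve the argument until it is small, so the Taylor series converges fast.
            halvings = 0
            while abs(x) > Decimal("0.01"):
                x /= 2
                halvings += 1
            # Taylor series for cos on the reduced argument.
            term, total, k = Decimal(1), Decimal(1), 0
            while abs(term) > Decimal(10) ** -(digits + 5):
                k += 2
                term *= -x * x / (k * (k - 1))
                total += term
            # Undo the halvings with cos(2a) = 2*cos(a)**2 - 1.
            for _ in range(halvings):
                total = 2 * total * total - 1
            return +total                              # round to working precision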

  8. Review of the convolution algorithm for evaluating service integrated systems

    DEFF Research Database (Denmark)

    Iversen, Villy Bæk

    1997-01-01

    In this paper we give a review of the applicability of the convolution algorithm. By this we are able to evaluate communication networks end--to--end with e.g. BPP multi-ratetraffic models insensitive to the holding time distribution. Rearrangement, minimum allocation, and maximum allocation...

  9. Evaluation of Static JavaScript Call Graph Algorithms

    NARCIS (Netherlands)

    J.-J. Dijkstra (Jorryt-Jan)

    2014-01-01

    This thesis consists of a replication study in which two algorithms to compute JavaScript call graphs have been implemented and evaluated. Existing IDE support for JavaScript is hampered due to the dynamic nature of the language. Previous studies partially solve call graph computation

  10. Experimental Performance of a Genetic Algorithm for Airborne Strategic Conflict Resolution

    Science.gov (United States)

    Karr, David A.; Vivona, Robert A.; Roscoe, David A.; DePascale, Stephen M.; Consiglio, Maria

    2009-01-01

    The Autonomous Operations Planner, a research prototype flight-deck decision support tool to enable airborne self-separation, uses a pattern-based genetic algorithm to resolve predicted conflicts between the ownship and traffic aircraft. Conflicts are resolved by modifying the active route within the ownship's flight management system according to a predefined set of maneuver pattern templates. The performance of this pattern-based genetic algorithm was evaluated in the context of batch-mode Monte Carlo simulations running over 3600 flight hours of autonomous aircraft in en-route airspace under conditions ranging from typical current traffic densities to several times that level. Encountering over 8900 conflicts during two simulation experiments, the genetic algorithm was able to resolve all but three conflicts, while maintaining a required time of arrival constraint for most aircraft. Actual elapsed running time for the algorithm was consistent with conflict resolution in real time. The paper presents details of the genetic algorithm's design, along with mathematical models of the algorithm's performance and observations regarding the effectiveness of using complementary maneuver patterns when multiple resolutions by the same aircraft were required.

  11. Evaluation of Term Ranking Algorithms for Pseudo-Relevance Feedback in MEDLINE Retrieval.

    Science.gov (United States)

    Yoo, Sooyoung; Choi, Jinwook

    2011-06-01

    The purpose of this study was to investigate the effects of query expansion algorithms for MEDLINE retrieval within a pseudo-relevance feedback framework. A number of query expansion algorithms were tested using various term ranking formulas, focusing on query expansion based on pseudo-relevance feedback. The OHSUMED test collection, which is a subset of the MEDLINE database, was used as a test corpus. Various ranking algorithms were tested in combination with different term re-weighting algorithms. Our comprehensive evaluation showed that the local context analysis ranking algorithm, when used in combination with one of the reweighting algorithms - Rocchio, the probabilistic model, and our variants - significantly outperformed other algorithm combinations by up to 12% (paired t-test) across the algorithm pairs tested, at least in the context of the OHSUMED corpus. Comparative experiments on term ranking algorithms were performed in the context of a subset of MEDLINE documents. With medical documents, local context analysis, which uses co-occurrence with all query terms, significantly outperformed various term ranking methods based on both frequency and distribution analyses. Furthermore, the results of the experiments demonstrated that the term rank-based re-weighting method contributed to a remarkable improvement in mean average precision.
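    For context, the Rocchio re-weighting mentioned above moves the query vector toward the centroid of the top-ranked (pseudo-relevant) documents. A minimal sketch over term-weight dictionaries, with illustrative coefficient values and no negative-feedback term:

        def rocchio_expand(query, relevant_docs, alpha=1.0, beta=0.75):
            """Rocchio-style re-weighting using only pseudo-relevant feedback documents.

            query:         dict mapping term -> weight
            relevant_docs: list of dicts mapping term -> weight (top-ranked documents)
            """
            expanded = {t: alpha * w for t, w in query.items()}
            if not relevant_docs:
                return expanded
            for doc in relevant_docs:
                for term, weight in doc.items():
                    expanded[term] = expanded.get(term, 0.0) + beta * weight / len(relevant_docs)
            return expanded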

  12. Performance Analysis of Evolutionary Algorithms for Steiner Tree Problems.

    Science.gov (United States)

    Lai, Xinsheng; Zhou, Yuren; Xia, Xiaoyun; Zhang, Qingfu

    2017-01-01

    The Steiner tree problem (STP) aims to determine some Steiner nodes such that the minimum spanning tree over these Steiner nodes and a given set of special nodes has the minimum weight, which is NP-hard. STP includes several important cases. The Steiner tree problem in graphs (GSTP) is one of them. Many heuristics have been proposed for STP, and some of them have proved to be performance guarantee approximation algorithms for this problem. Since evolutionary algorithms (EAs) are general and popular randomized heuristics, it is significant to investigate the performance of EAs for STP. Several empirical investigations have shown that EAs are efficient for STP. However, up to now, there is no theoretical work on the performance of EAs for STP. In this article, we reveal that the (1+1) EA achieves 3/2-approximation ratio for STP in a special class of quasi-bipartite graphs in expected runtime [Formula: see text], where [Formula: see text], [Formula: see text], and [Formula: see text] are, respectively, the number of Steiner nodes, the number of special nodes, and the largest weight among all edges in the input graph. We also show that the (1+1) EA is better than two other heuristics on two GSTP instances, and the (1+1) EA may be inefficient on a constructed GSTP instance.
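    For readers unfamiliar with the (1+1) EA analyzed above, its generic form is a single-parent hill climber with standard bit-flip mutation. The sketch below uses a placeholder fitness function (OneMax) rather than a Steiner-tree-specific evaluation, which would instead decode the bit string as a set of selected Steiner nodes and score the resulting spanning tree.

        import random

        def one_plus_one_ea(fitness, n, generations=10000):
            """Generic (1+1) EA on bit strings: flip each bit with probability 1/n,
            keep the offspring if it is at least as good as the parent (maximization)."""
            parent = [random.randint(0, 1) for _ in range(n)]
            best = fitness(parent)
            for _ in range(generations):
                child = [1 - bit if random.random() < 1.0 / n else bit for bit in parent]
                child_fitness = fitness(child)
                if child_fitness >= best:
                    parent, best = child, child_fitness
            return parent, best

        # Placeholder fitness: OneMax (count of ones).
        solution, value = one_plus_one_ea(sum, n=20)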

  13. Evaluation of an automated single-channel sleep staging algorithm

    Directory of Open Access Journals (Sweden)

    Wang Y

    2015-09-01

    Full Text Available Ying Wang,1 Kenneth A Loparo,1,2 Monica R Kelly,3 Richard F Kaplan1 1General Sleep Corporation, Euclid, OH, 2Department of Electrical Engineering and Computer Science, Case Western Reserve University, Cleveland, OH, 3Department of Psychology, University of Arizona, Tucson, AZ, USA Background: We previously published the performance evaluation of an automated electroencephalography (EEG)-based single-channel sleep–wake detection algorithm called Z-ALG used by the Zmachine® sleep monitoring system. The objective of this paper is to evaluate the performance of a new algorithm called Z-PLUS, which further differentiates sleep as detected by Z-ALG into Light Sleep, Deep Sleep, and Rapid Eye Movement (REM) Sleep, against laboratory polysomnography (PSG) using a consensus of expert visual scorers. Methods: Single night, in-lab PSG recordings from 99 subjects (52F/47M, 18–60 years, median age 32.7 years), including both normal sleepers and those reporting a variety of sleep complaints consistent with chronic insomnia, sleep apnea, and restless leg syndrome, as well as those taking selective serotonin reuptake inhibitor/serotonin–norepinephrine reuptake inhibitor antidepressant medications, previously evaluated using Z-ALG were re-examined using Z-PLUS. EEG data collected from electrodes placed at the differential-mastoids (A1–A2) were processed by Z-ALG to determine wake and sleep, then those epochs detected as sleep were further processed by Z-PLUS to differentiate into Light Sleep, Deep Sleep, and REM. EEG data were visually scored by multiple certified polysomnographic technologists according to the Rechtschaffen and Kales criterion, and then combined using a majority-voting rule to create a PSG Consensus score file for each of the 99 subjects. Z-PLUS output was compared to the PSG Consensus score files for both epoch-by-epoch (eg, sensitivity, specificity, and kappa) and sleep stage-related statistics (eg, Latency to Deep Sleep, Latency to REM

  14. Comparative performance of conventional OPC concrete and HPC designed by densified mixture design algorithm

    Science.gov (United States)

    Huynh, Trong-Phuoc; Hwang, Chao-Lung; Yang, Shu-Ti

    2017-12-01

    This experimental study evaluated the performance of normal ordinary Portland cement (OPC) concrete and high-performance concrete (HPC) that were designed by the conventional method (ACI) and the densified mixture design algorithm (DMDA) method, respectively. Engineering properties and durability performance of both the OPC and HPC samples were studied using tests of workability, compressive strength, water absorption, ultrasonic pulse velocity, and electrical surface resistivity. Test results show that the HPC exhibited good fresh properties and further showed better performance in terms of strength and durability as compared to the OPC.

  15. Evaluation of HIV-1 rapid tests and identification of alternative testing algorithms for use in Uganda.

    Science.gov (United States)

    Kaleebu, Pontiano; Kitandwe, Paul Kato; Lutalo, Tom; Kigozi, Aminah; Watera, Christine; Nanteza, Mary Bridget; Hughes, Peter; Musinguzi, Joshua; Opio, Alex; Downing, Robert; Mbidde, Edward Katongole

    2018-02-27

    The World Health Organization recommends that countries conduct two-phase evaluations of HIV rapid tests (RTs) in order to come up with the best algorithms. In this report, we present the first ever such evaluation in Uganda, involving both blood-based and oral-based RTs. The role of weak positive (WP) bands on the accuracy of the individual RTs and on the algorithms was also investigated. In total, 11 blood-based and 3 oral-transudate kits were evaluated. Altogether, 2746 participants from seven sites, covering the four different regions of Uganda, participated. Two enzyme immunoassays (EIAs) run in parallel were used as the gold standard. The performance and cost of the different algorithms were calculated, with a pre-determined price cut-off of being either cheaper than or within 20% of the price of the current algorithm of Determine + Statpak + Unigold. In the second phase, the three best algorithms selected in phase I were used at the point of care for purposes of quality control using finger-stick whole blood. We identified three algorithms as having performed better and met the cost requirements: Determine + SD Bioline + Statpak and Determine + Statpak + SD Bioline, both with the same sensitivity and specificity of 99.2% and 99.1%, respectively, and Determine + Statpak + Insti, with sensitivity and specificity of 99.1% and 99%, respectively. There were 15 other algorithms that performed better than the current one but exceeded the 20% price threshold. None of the 3 oral mucosal transudate kits were suitable for inclusion in an algorithm because of their low sensitivities. Band intensity affected the performance of individual RTs but not the final algorithms. We have come up with three algorithms that we recommend for public or Government procurement based on accuracy and cost. If one algorithm is preferred, we recommend replacing Unigold, the current tie-breaker, with SD Bioline. We further recommend that all the 18 algorithms that have shown better performance than the current one are made

  16. Empirical and Statistical Evaluation of the Effectiveness of Four Lossless Data Compression Algorithms

    Directory of Open Access Journals (Sweden)

    N. A. Azeez

    2017-04-01

    Full Text Available Data compression is the process of reducing the size of a file to effectively reduce storage space and communication cost. The evolution of technology and the digital age has led to an unparalleled usage of digital files in the current decade. This usage has resulted in an increase in the amount of data being transmitted via various channels of data communication, which has prompted the need to look into current lossless data compression algorithms and check their level of effectiveness, so as to maximally reduce the bandwidth requirement in communication and transfer of data. Four lossless data compression algorithms (the Lempel-Ziv-Welch algorithm, the Shannon-Fano algorithm, the Adaptive Huffman algorithm and Run-Length encoding) were selected for implementation. The choice of these algorithms was based on their similarities, particularly in application areas. Their level of efficiency and effectiveness was evaluated using a set of predefined performance evaluation metrics, namely compression ratio, compression factor, compression time, saving percentage, entropy and code efficiency. The algorithms were implemented in the NetBeans Integrated Development Environment using Java as the programming language. Through the statistical analysis performed using Boxplot and ANOVA and comparison made on the four algo
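    The evaluation metrics named above are straightforward to compute. A short sketch follows, using one common set of definitions and zlib as a stand-in for the four algorithms actually implemented in the study:

        import math
        import zlib
        from collections import Counter

        def compression_metrics(data: bytes, compressed: bytes):
            """Compute common lossless-compression evaluation metrics."""
            ratio = len(compressed) / len(data)            # compression ratio
            factor = len(data) / len(compressed)           # compression factor
            saving = (1 - ratio) * 100                     # saving percentage
            counts = Counter(data)
            entropy = -sum(c / len(data) * math.log2(c / len(data)) for c in counts.values())
            return {"ratio": ratio, "factor": factor,
                    "saving_percent": saving, "entropy_bits_per_symbol": entropy}

        text = b"the quick brown fox jumps over the lazy dog " * 50
        print(compression_metrics(text, zlib.compress(text)))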

  17. A comparison of thermal algorithms of fuel rod performance code systems

    International Nuclear Information System (INIS)

    Park, C. J.; Park, J. H.; Kang, K. H.; Ryu, H. J.; Moon, J. S.; Jeong, I. H.; Lee, C. Y.; Song, K. C.

    2003-11-01

    The goal of fuel rod performance analysis is to identify the robustness of a fuel rod and its cladding material. Computer simulation of fuel rod performance has become an important part of designing and evaluating new nuclear fuels and claddings. To construct a computing code system for fuel rod performance, several algorithms of the existing fuel rod performance code systems are compared and summarized as preliminary work. Among several code systems, FRAPCON and FEMAXI for LWRs, ELESTRES for CANDU reactors, and LIFE for fast reactors are reviewed. Thermal algorithms of the above codes are investigated, including methodologies and subroutines. This work will be utilized to construct a computing code system for dry process fuel rod performance.

  18. A comparison of thermal algorithms of fuel rod performance code systems

    Energy Technology Data Exchange (ETDEWEB)

    Park, C. J.; Park, J. H.; Kang, K. H.; Ryu, H. J.; Moon, J. S.; Jeong, I. H.; Lee, C. Y.; Song, K. C

    2003-11-01

    The goal of fuel rod performance analysis is to identify the robustness of a fuel rod and its cladding material. Computer simulation of fuel rod performance has become an important part of designing and evaluating new nuclear fuels and claddings. To construct a computing code system for fuel rod performance, several algorithms of the existing fuel rod performance code systems are compared and summarized as preliminary work. Among several code systems, FRAPCON and FEMAXI for LWRs, ELESTRES for CANDU reactors, and LIFE for fast reactors are reviewed. Thermal algorithms of the above codes are investigated, including methodologies and subroutines. This work will be utilized to construct a computing code system for dry process fuel rod performance.

  19. HIV misdiagnosis in sub-Saharan Africa: performance of diagnostic algorithms at six testing sites

    Science.gov (United States)

    Kosack, Cara S.; Shanks, Leslie; Beelaert, Greet; Benson, Tumwesigye; Savane, Aboubacar; Ng’ang’a, Anne; Andre, Bita; Zahinda, Jean-Paul BN; Fransen, Katrien; Page, Anne-Laure

    2017-01-01

    Introduction: We evaluated the diagnostic accuracy of HIV testing algorithms at six programmes in five sub-Saharan African countries. Methods: In this prospective multisite diagnostic evaluation study (Conakry, Guinea; Kitgum, Uganda; Arua, Uganda; Homa Bay, Kenya; Douala, Cameroon and Baraka, Democratic Republic of Congo), samples from clients (five years of age or older) testing for HIV were collected and compared to a state-of-the-art algorithm from the AIDS reference laboratory at the Institute of Tropical Medicine, Belgium. The reference algorithm consisted of an enzyme-linked immuno-sorbent assay, a line-immunoassay, a single antigen-enzyme immunoassay and a DNA polymerase chain reaction test. Results: Between August 2011 and January 2015, over 14,000 clients were tested for HIV at 6 HIV counselling and testing sites. Of those, 2786 (median age: 30; 38.1% males) were included in the study. Sensitivity of the testing algorithms ranged from 89.5% in Arua to 100% in Douala and Conakry, while specificity ranged from 98.3% in Douala to 100% in Conakry. Overall, 24 (0.9%) clients, and as many as 8 per site (1.7%), were misdiagnosed, with 16 false-positive and 8 false-negative results. Six false-negative specimens were retested with the on-site algorithm on the same sample and were found to be positive. Conversely, 13 false-positive specimens were retested: 8 remained false-positive with the on-site algorithm. Conclusions: The performance of algorithms at several sites failed to meet expectations and thresholds set by the World Health Organization, with unacceptably high rates of false results. Alongside the careful selection of rapid diagnostic tests and the validation of algorithms, strictly observing correct procedures can reduce the risk of false results. In the meantime, to identify false-positive diagnoses at initial testing, patients should be retested upon initiating antiretroviral therapy. PMID:28691437

  20. A Study on Improvement of Algorithm for Source Term Evaluation

    International Nuclear Information System (INIS)

    Park, Jeong Ho; Park, Do Hyung; Lee, Jae Hee

    2010-03-01

    The program developed by KAERI for source term assessment of radwastes from the advanced nuclear fuel cycle consists of a spent fuel database analysis module, a spent fuel arising projection module, and an automatic characterization module for radwastes from the pyroprocess. To improve the algorithm adopted in the developed program, the following items were carried out: - development of an algorithm to decrease analysis time for the spent fuel database - development of a setup routine for the analysis procedure - improvement of the interface for the spent fuel arising projection module - optimization of the data management algorithm needed for the massive calculations required to estimate source terms of radwastes from the advanced fuel cycle. The program developed through this study has the capability to perform source term estimation even when several spent fuel assemblies with different fuel designs, initial enrichments, irradiation histories, discharge burnups, and cooling times are processed at the same time in the pyroprocess. It is expected that this program will be very useful for the design of the unit processes of the pyroprocess and the disposal system

  1. Comparative performance of an elitist teaching-learning-based optimization algorithm for solving unconstrained optimization problems

    Directory of Open Access Journals (Sweden)

    R. Venkata Rao

    2013-01-01

    Full Text Available Teaching-Learning-based optimization (TLBO) is a recently proposed population-based algorithm which simulates the teaching-learning process of the classroom. This algorithm requires only the common control parameters and does not require any algorithm-specific control parameters. In this paper, the effect of elitism on the performance of the TLBO algorithm is investigated while solving unconstrained benchmark problems. The effects of common control parameters such as the population size and the number of generations on the performance of the algorithm are also investigated. The proposed algorithm is tested on 76 unconstrained benchmark functions with different characteristics and the performance of the algorithm is compared with that of other well known optimization algorithms. A statistical test is also performed to investigate the results obtained using different algorithms. The results have proved the effectiveness of the proposed elitist TLBO algorithm.
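    As an illustration of the mechanism being tuned, the teacher phase at the core of TLBO moves each learner toward the current best solution and away from the population mean, keeping the move only if it improves fitness; the elitist variant studied above additionally replaces the worst solutions with the best ones after each generation. A minimal sketch (minimization, illustrative parameter choices):

        import random

        def teacher_phase(population, fitness):
            """One TLBO teacher phase over a population of real-valued vectors."""
            dim = len(population[0])
            mean = [sum(ind[d] for ind in population) / len(population) for d in range(dim)]
            teacher = min(population, key=fitness)          # best learner (minimization)
            new_population = []
            for learner in population:
                tf = random.choice([1, 2])                  # teaching factor
                candidate = [x + random.random() * (teacher[d] - tf * mean[d])
                             for d, x in enumerate(learner)]
                new_population.append(candidate if fitness(candidate) < fitness(learner)
                                      else learner)
            return new_population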

  2. Performance of an open-source heart sound segmentation algorithm on eight independent databases.

    Science.gov (United States)

    Liu, Chengyu; Springer, David; Clifford, Gari D

    2017-08-01

    Heart sound segmentation is a prerequisite step for the automatic analysis of heart sound signals, facilitating the subsequent identification and classification of pathological events. Recently, hidden Markov model-based algorithms have received increased interest due to their robustness in processing noisy recordings. In this study we aim to evaluate the performance of the recently published logistic regression based hidden semi-Markov model (HSMM) heart sound segmentation method, by using a wider variety of independently acquired data of varying quality. Firstly, we constructed a systematic evaluation scheme based on a new collection of heart sound databases, which we assembled for the PhysioNet/CinC Challenge 2016. This collection includes a total of more than 120 000 s of heart sounds recorded from 1297 subjects (including both healthy subjects and cardiovascular patients) and comprises eight independent heart sound databases sourced from multiple independent research groups around the world. Then, the HSMM-based segmentation method was evaluated using the assembled eight databases. The common evaluation metrics of sensitivity, specificity, accuracy, as well as the [Formula: see text] measure were used. In addition, the effect of varying the tolerance window for determining a correct segmentation was evaluated. The results confirm the high accuracy of the HSMM-based algorithm on a separate test dataset comprised of 102 306 heart sounds. An average [Formula: see text] score of 98.5% for segmenting S1 and systole intervals and 97.2% for segmenting S2 and diastole intervals were observed. The [Formula: see text] score was shown to increase with an increase in the tolerance window size, as expected. The high segmentation accuracy of the HSMM-based algorithm on a large database confirmed the algorithm's effectiveness. The described evaluation framework, combined with the largest collection of open access heart sound data, provides essential resources for

  3. Evaluation of several state-of-charge algorithms

    Science.gov (United States)

    Espinosa, J. M.; Martin, M. E.; Burke, A. F.

    1988-09-01

    One of the important needs in marketing an electric vehicle is a device which reliably indicates battery state-of-charge for all types of driving. The purpose of the state-of-charge indicator is analogous to that of a gas gauge in an internal combustion engine powered vehicle. Many different approaches have been tried to accurately predict battery state-of-charge. This report evaluates several of these approaches. Four different algorithms were implemented in software on an IBM PC and tested using a battery test database for ALCO 2200 lead-acid batteries generated at the INEL. The database was obtained under controlled conditions comparable to the battery response in real EV use. Each algorithm is described in detail as to theory and operational functionality. Also discussed are the hardware and data requirements particular to implementing the individual algorithms. The algorithms were evaluated for accuracy using constant power, stepped power, and simulated vehicle (SFUDS79) discharge profiles. Attempts were made to explain the cause of differences between the predicted and actual state-of-charge and to provide possible remedies to correct them. Recommendations for future work on battery state-of-charge indicators are presented that utilize the hardware and software now in place in the INEL Battery Laboratory.
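    One of the simplest approaches in this family, ampere-hour (coulomb) counting, integrates measured current over time; the sketch below is a generic illustration with assumed parameter names and an assumed charging efficiency, and is not necessarily one of the four algorithms tested against the INEL database.

        def coulomb_count_soc(current_samples, dt_hours, capacity_ah, soc_initial=1.0,
                              coulombic_efficiency=0.95):
            """Track state of charge by integrating current (positive = discharge)."""
            soc = soc_initial
            history = []
            for amps in current_samples:
                delta_ah = amps * dt_hours
                if amps >= 0:                      # discharging
                    soc -= delta_ah / capacity_ah
                else:                              # charging (apply efficiency penalty)
                    soc -= coulombic_efficiency * delta_ah / capacity_ah
                soc = min(max(soc, 0.0), 1.0)
                history.append(soc)
            return history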

  4. Dry Process Fuel Performance Evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Yang, Myung Seung; Song, K. C.; Moon, J. S. and others

    2005-04-15

    The objective of the project is to establish the performance evaluation system of DUPIC fuel during the Phase II R and D. In order to fulfil these objectives, irradiation tests of DUPIC fuel were carried out in HANARO using the non-instrumented and SPND-instrumented rigs. Also, analysis of the in-reactor behavior of DUPIC fuel, out-pile tests using simulated DUPIC fuel, as well as performance and integrity assessment in a commercial reactor were performed during this Phase. The R and D results of Phase II are summarized as follows: - Performance evaluation of DUPIC fuel via irradiation test in HANARO - Post irradiation examination of irradiated fuel and performance analysis - Development of DUPIC fuel performance code (modified ELESTRES) considering material properties of DUPIC fuel - Irradiation behavior and integrity assessment under the design power envelope of DUPIC fuel - Fundamental technology development of thermal/mechanical performance evaluation using ANSYS (FEM package)

  5. Dry Process Fuel Performance Evaluation

    International Nuclear Information System (INIS)

    Yang, Myung Seung; Song, K. C.; Moon, J. S. and others

    2005-04-01

    The objective of the project is to establish the performance evaluation system of DUPIC fuel during the Phase II R and D. In order to fulfil these objectives, irradiation tests of DUPIC fuel were carried out in HANARO using the non-instrumented and SPND-instrumented rigs. Also, analysis of the in-reactor behavior of DUPIC fuel, out-pile tests using simulated DUPIC fuel, as well as performance and integrity assessment in a commercial reactor were performed during this Phase. The R and D results of Phase II are summarized as follows: - Performance evaluation of DUPIC fuel via irradiation test in HANARO - Post irradiation examination of irradiated fuel and performance analysis - Development of DUPIC fuel performance code (modified ELESTRES) considering material properties of DUPIC fuel - Irradiation behavior and integrity assessment under the design power envelope of DUPIC fuel - Fundamental technology development of thermal/mechanical performance evaluation using ANSYS (FEM package)

  6. Formula over Function? From Algorithms to Values in Judicial Evaluation

    Directory of Open Access Journals (Sweden)

    Francesco Contini

    2014-12-01

    Full Text Available This paper discusses the forms and effects of the ‘invasion’ of the ‘temples of the law’ by new economic and managerial forms of performance evaluation. While traditional judicial evaluation focused on how to select and promote individual judges and on the legal quality of the single case, new quantitative methods and formulas are being introduced to assess efficiency, productivity and timeliness of judges and courts. Building on two case studies, from Spain and the Netherlands, the paper illustrates two contrasting approaches to judicial performance evaluation. On the one hand individual judges' productivity is evaluated through quantitative data and mathematical algorithms: in the extreme case considered here, judge's remuneration was adjusted accordingly. On the other hand quantitative and qualitative data, collected by a variety of methods and theoretical frameworks, are used as the basis of a multi-layered negotiation process designed to find a synthesis between competing economic, legal and social values aimed at improving overall organizational performance. Considering the flaws of unidimensional measurement and evaluation systems and considering the incommensurability of the results of the multiple evaluative frameworks (economic, legal, sociological) required to overcome such flaws, the authors argue there is a need for political dialogue between relevant players in order to allocate the values appropriate to judicial evaluation.

  7. Optimization of diesel engine performance by the Bees Algorithm

    Science.gov (United States)

    Azfanizam Ahmad, Siti; Sunthiram, Devaraj

    2018-03-01

    Biodiesel has recently been receiving great attention in the world market due to the depletion of existing fossil fuels. Biodiesel is also an alternative to diesel No. 2 fuel, possessing characteristics such as being biodegradable and oxygenated. However, there is evidence suggesting that biodiesel does not have features equivalent to diesel No. 2 fuel, as it has been reported that the usage of biodiesel gives an increment in the brake specific fuel consumption (BSFC). The objective of this study is to find the maximum brake power and brake torque as well as the minimum BSFC to optimize the operating condition of a diesel engine when using biodiesel fuel. This optimization was conducted using the Bees Algorithm (BA) under specific biodiesel percentages in the fuel mixture, engine speeds and engine loads. The result showed that 58.33 kW of brake power, 310.33 N.m of brake torque and 200.29/(kW.h) of BSFC were the optimum values. Compared to the ones obtained by another algorithm, the BA produced a fine brake power and a better brake torque and BSFC. This finding showed that the BA can be used to optimize the performance of a diesel engine based on the optimum values of the brake power, brake torque and BSFC.

  8. Performance of 12 DIR algorithms in low-contrast regions for mass and density conserving deformation

    International Nuclear Information System (INIS)

    Yeo, U. J.; Supple, J. R.; Franich, R. D.; Taylor, M. L.; Smith, R.; Kron, T.

    2013-01-01

    Purpose: Deformable image registration (DIR) has become a key tool for adaptive radiotherapy to account for inter- and intrafraction organ deformation. Of contemporary interest, the application to deformable dose accumulation requires accurate deformation even in low contrast regions where dose gradients may exist within near-uniform tissues. One expects high-contrast features to generally be deformed more accurately by DIR algorithms. The authors systematically assess the accuracy of 12 DIR algorithms and quantitatively examine, in particular, low-contrast regions, where accuracy has not previously been established.Methods: This work investigates DIR algorithms in three dimensions using deformable gel (DEFGEL) [U. J. Yeo, M. L. Taylor, L. Dunn, R. L. Smith, T. Kron, and R. D. Franich, “A novel methodology for 3D deformable dosimetry,” Med. Phys. 39, 2203–2213 (2012)], for application to mass- and density-conserving deformations. CT images of DEFGEL phantoms with 16 fiducial markers (FMs) implanted were acquired in deformed and undeformed states for three different representative deformation geometries. Nonrigid image registration was performed using 12 common algorithms in the public domain. The optimum parameter setup was identified for each algorithm and each was tested for deformation accuracy in three scenarios: (I) original images of the DEFGEL with 16 FMs; (II) images with eight of the FMs mathematically erased; and (III) images with all FMs mathematically erased. The deformation vector fields obtained for scenarios II and III were then applied to the original images containing all 16 FMs. The locations of the FMs estimated by the algorithms were compared to actual locations determined by CT imaging. The accuracy of the algorithms was assessed by evaluation of three-dimensional vectors between true marker locations and predicted marker locations.Results: The mean magnitude of 16 error vectors per sample ranged from 0.3 to 3.7, 1.0 to 6.3, and 1.3 to 7

  9. Performance Analyses of IDEAL Algorithm on Highly Skewed Grid System

    Directory of Open Access Journals (Sweden)

    Dongliang Sun

    2014-03-01

    Full Text Available IDEAL is an efficient segregated algorithm for fluid flow and heat transfer problems. This algorithm has now been extended to 3D nonorthogonal curvilinear coordinates. Highly skewed grids in nonorthogonal curvilinear coordinates can decrease the convergence rate and degrade the calculation stability. In this study, the feasibility of the IDEAL algorithm on a highly skewed grid system is analyzed by investigating the lid-driven flow in an inclined cavity. It can be concluded that the IDEAL algorithm is more robust and more efficient than the traditional SIMPLER algorithm, especially for highly skewed and fine grid systems. For example, at θ = 5° and a grid number of 70 × 70 × 70, the convergence rate of the IDEAL algorithm is 6.3 times faster than that of the SIMPLER algorithm, and the IDEAL algorithm can converge at almost any time step multiple.

  10. Evaluation of Six Algorithms to Monitor Wheat Leaf Nitrogen Concentration

    Directory of Open Access Journals (Sweden)

    Xia Yao

    2015-11-01

    Full Text Available The rapid and non-destructive monitoring of the canopy leaf nitrogen concentration (LNC) in crops is important for precise nitrogen (N) management. Nowadays, there is an urgent need to identify next-generation bio-physical variable retrieval algorithms that can be incorporated into an operational processing chain for hyperspectral satellite missions. We assessed six retrieval algorithms for estimating LNC from canopy reflectance of winter wheat in eight field experiments. These experiments represented variations in the N application rates, planting densities, ecological sites and cultivars and yielded a total of 821 samples from various places in Jiangsu, China over nine consecutive years. Based on the reflectance spectra and their first derivatives, six methods using different numbers of wavelengths were applied to construct predictive models for estimating wheat LNC, including continuum removal (CR), vegetation indices (VIs), stepwise multiple linear regression (SMLR), partial least squares regression (PLSR), artificial neural networks (ANNs), and support vector machines (SVMs). To assess the performance of these six methods, we provided a systematic evaluation of the estimation accuracies using six metrics: the coefficients of determination for the calibration (R2C) and validation (R2V) sets, the root mean square errors of prediction (RMSEP) for the calibration and validation sets, the ratio of prediction to deviation (RPD), the computational efficiency (CE) and the complexity level (CL). The following results were obtained: (1) For the VIs method, SAVI(R1200, R705) produced a more accurate estimation of the LNC than other indices, with R²C, R²V, RMSEP, RPD and CE values of 0.844, 0.795, 0.384, 2.005 and 0.10 min, respectively; (2) For the SMLR, PLSR, ANNs and SVMs methods, the SVMs using the first derivative canopy spectra (SVM-FDS) offered the best accuracy in terms of R²C, R²V, RMSEP, RPD, and CE, at 0.96, 0.78, 0.37, 2.02, and 21
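    For reference, the three accuracy metrics used throughout this comparison can be computed as in the generic sketch below, which is not tied to the specific calibration/validation splits of the study:

        import math

        def regression_metrics(observed, predicted):
            """Coefficient of determination (R2), RMSEP and ratio of prediction to deviation (RPD)."""
            n = len(observed)
            mean_obs = sum(observed) / n
            ss_res = sum((o - p) ** 2 for o, p in zip(observed, predicted))
            ss_tot = sum((o - mean_obs) ** 2 for o in observed)
            r2 = 1 - ss_res / ss_tot
            rmsep = math.sqrt(ss_res / n)
            sd_obs = math.sqrt(ss_tot / (n - 1))
            rpd = sd_obs / rmsep
            return {"R2": r2, "RMSEP": rmsep, "RPD": rpd}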

  11. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  12. A Hybrid Neural Network-Genetic Algorithm Technique for Aircraft Engine Performance Diagnostics

    Science.gov (United States)

    Kobayashi, Takahisa; Simon, Donald L.

    2001-01-01

    In this paper, a model-based diagnostic method, which utilizes Neural Networks and Genetic Algorithms, is investigated. Neural networks are applied to estimate the engine internal health, and Genetic Algorithms are applied for sensor bias detection and estimation. This hybrid approach takes advantage of the nonlinear estimation capability provided by neural networks while improving the robustness to measurement uncertainty through the application of Genetic Algorithms. The hybrid diagnostic technique also has the ability to rank multiple potential solutions for a given set of anomalous sensor measurements in order to reduce false alarms and missed detections. The performance of the hybrid diagnostic technique is evaluated through some case studies derived from a turbofan engine simulation. The results show this approach is promising for reliable diagnostics of aircraft engines.

  13. Optimization Solutions for Improving the Performance of the Parallel Reduction Algorithm Using Graphics Processing Units

    Directory of Open Access Journals (Sweden)

    Ion LUNGU

    2012-01-01

    Full Text Available In this paper, we research, analyze and develop optimization solutions for the parallel reduction function using graphics processing units (GPUs) that implement the Compute Unified Device Architecture (CUDA), a modern and novel approach for improving the software performance of data processing applications and algorithms. Many of these applications and algorithms make use of the reduction function in their computational steps. After having designed the function and its algorithmic steps in CUDA, we have progressively developed and implemented optimization solutions for the reduction function. In order to confirm, test and evaluate the solutions' efficiency, we have developed a custom tailored benchmark suite. We have analyzed the obtained experimental results regarding: the comparison of the execution time and bandwidth when using graphics processing units covering the main CUDA architectures (Tesla GT200, Fermi GF100, Kepler GK104) and a central processing unit; the data type influence; the binary operator's influence.

  14. Computer vision algorithm for diabetic foot injury identification and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Castaneda M, C. L.; Solis S, L. O.; Martinez B, M. R.; Ortiz R, J. M.; Garza V, I.; Martinez F, M.; Castaneda M, R.; Vega C, H. R., E-mail: lsolis@uaz.edu.mx [Universidad Autonoma de Zacatecas, 98000 Zacatecas, Zac. (Mexico)

    2016-10-15

    Diabetic foot is one of the most devastating consequences related to diabetes. It is relevant because of its incidence and the elevated percentage of amputations and deaths that the disease implies. Given the fact that the existing tests and laboratories designed to diagnose it are limited and expensive, the most common evaluation is still based on signs and symptoms. This means that the specialist completes a questionnaire based solely on observation and an invasive wound measurement. Using the questionnaire, the physician issues a diagnosis. In this sense, the diagnosis relies only on the criteria and the specialist's experience. For some variables, such as the lesions' area or their location, this dependency is not acceptable. Currently, bio-engineering plays a key role in the diagnosis of different chronic degenerative diseases. A timely diagnosis has proven to be the best tool against diabetic foot. The clinical evaluation of the diabetic foot increases the possibility of identifying risks and further complications. The main goal of this paper is to present the development of an algorithm based on digital image processing techniques which makes it possible to optimize the results of diabetic foot lesion evaluation. Using advanced techniques for object segmentation and adjusting the sensitivity parameter allows correlation between the wounds identified by the algorithm and those observed by the physician. Using the developed algorithm it is possible to identify and assess the wounds, their size, and location, in a non-invasive way. (Author)

  15. Computer vision algorithm for diabetic foot injury identification and evaluation

    International Nuclear Information System (INIS)

    Castaneda M, C. L.; Solis S, L. O.; Martinez B, M. R.; Ortiz R, J. M.; Garza V, I.; Martinez F, M.; Castaneda M, R.; Vega C, H. R.

    2016-10-01

    Diabetic foot is one of the most devastating consequences related to diabetes. It is relevant because of its incidence and the elevated percentage of amputations and deaths that the disease implies. Given the fact that the existing tests and laboratories designed to diagnose it are limited and expensive, the most common evaluation is still based on signs and symptoms. This means that the specialist completes a questionnaire based solely on observation and an invasive wound measurement. Using the questionnaire, the physician issues a diagnosis. In this sense, the diagnosis relies only on the criteria and the specialist's experience. For some variables, such as the lesions' area or their location, this dependency is not acceptable. Currently, bio-engineering plays a key role in the diagnosis of different chronic degenerative diseases. A timely diagnosis has proven to be the best tool against diabetic foot. The clinical evaluation of the diabetic foot increases the possibility of identifying risks and further complications. The main goal of this paper is to present the development of an algorithm based on digital image processing techniques which makes it possible to optimize the results of diabetic foot lesion evaluation. Using advanced techniques for object segmentation and adjusting the sensitivity parameter allows correlation between the wounds identified by the algorithm and those observed by the physician. Using the developed algorithm it is possible to identify and assess the wounds, their size, and location, in a non-invasive way. (Author)

  16. Performance Evaluations in Audit Firms

    DEFF Research Database (Denmark)

    Riise Johansen, Thomas; Christoffersen, Jeppe

    2017-01-01

    Previous research has only minimally examined the association between the behaviour and performance evaluations of individual auditors beyond the use of efficiency-focused evaluations. We examine the association between dysfunctional auditor behaviour and three evaluation foci: an efficiency focus......, a client focus and a quality focus. Our results, which are based on questionnaire responses from 196 auditors, demonstrate that an efficiency focus is not associated with dysfunctional behaviour. A client focus is found to be associated with dysfunctional behaviour. Finally, and perhaps most importantly......, our results show that it seems possible to limit dysfunctional behaviours through a quality focus in performance evaluations. Our results provide insights of use to practitioners and regulators on how performance evaluations may not only induce but also reduce dysfunctional auditor behaviours....

  17. Evaluating Multicore Algorithms on the Unified Memory Model

    Directory of Open Access Journals (Sweden)

    John E. Savage

    2009-01-01

    Full Text Available One of the challenges to achieving good performance on multicore architectures is the effective utilization of the underlying memory hierarchy. While this is an issue for single-core architectures, it is a critical problem for multicore chips. In this paper, we formulate the unified multicore model (UMM) to help understand the fundamental limits on cache performance on these architectures. The UMM seamlessly handles different types of multiple-core processors with varying degrees of cache sharing at different levels. We demonstrate that our model can be used to study a variety of multicore architectures on a variety of applications. In particular, we use it to analyze an option pricing problem using the trinomial model and develop an algorithm for it that has near-optimal memory traffic between cache levels. We have implemented the algorithm on two Quad-Core Intel Xeon 5310 1.6 GHz processors (8 cores). It achieves a peak performance of 19.5 GFLOPs, which is 38% of the theoretical peak of the multicore system. We demonstrate that our algorithm outperforms compiler-optimized and auto-parallelized code by a factor of up to 7.5.
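    As context for the benchmark application, a plain trinomial-tree pricer for a European call is sketched below; the cache-aware blocking that gives the near-optimal memory traffic described above is deliberately omitted, and the standard log-price parameterization used here is an assumption, not necessarily the one used in the paper.

        import math

        def trinomial_european_call(s0, strike, rate, sigma, maturity, steps):
            """Price a European call on a recombining trinomial tree over log-price."""
            dt = maturity / steps
            dx = sigma * math.sqrt(3 * dt)                       # log-price step
            nu = rate - 0.5 * sigma ** 2
            pu = 0.5 * ((sigma ** 2 * dt + nu ** 2 * dt ** 2) / dx ** 2 + nu * dt / dx)
            pd = 0.5 * ((sigma ** 2 * dt + nu ** 2 * dt ** 2) / dx ** 2 - nu * dt / dx)
            pm = 1.0 - pu - pd
            disc = math.exp(-rate * dt)
            # Terminal payoffs over 2*steps + 1 nodes.
            values = [max(s0 * math.exp((j - steps) * dx) - strike, 0.0)
                      for j in range(2 * steps + 1)]
            # Backward induction, shrinking the node range by one on each side per step.
            for step in range(steps, 0, -1):
                values = [disc * (pu * values[j + 2] + pm * values[j + 1] + pd * values[j])
                          for j in range(2 * step - 1)]
            return values[0]

        # Converges toward the Black-Scholes value (about 10.45 for these inputs).
        print(trinomial_european_call(100, 100, 0.05, 0.2, 1.0, 200))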

  18. Evaluation of hybrids algorithms for mass detection in digitalized mammograms

    International Nuclear Information System (INIS)

    Cordero, Jose; Garzon Reyes, Johnson

    2011-01-01

    Breast cancer remains a significant public health problem; early detection of lesions can increase the chances of successful treatment. Mammography is an imaging modality effective for early diagnosis of abnormalities, in which the image of the mammary gland is obtained with low-dose X-rays; this allows a tumor or circumscribed mass to be detected two to three years before it becomes clinically palpable, and it is the only method that has so far achieved a reduction in mortality from breast cancer. In this paper three hybrid algorithms for circumscribed mass detection in digitalized mammograms are evaluated. The first stage corresponds to a review of the enhancement and segmentation techniques used in the processing of mammographic images. Afterwards, shape filtering was applied to the resulting regions. The surviving regions were then processed by means of a Bayesian filter, and the feature vector for the classifier was constructed with few measurements. Finally, the implemented algorithms were evaluated by ROC curves, for which 40 images were used: 20 normal images and 20 images with circumscribed lesions. The advantages and disadvantages of each algorithm in the correct detection of a lesion are discussed.

  19. Evaluation of nine HIV rapid test kits to develop a national HIV testing algorithm in Nigeria

    Directory of Open Access Journals (Sweden)

    Orji Bassey

    2015-05-01

    Full Text Available Background: Non-cold chain-dependent HIV rapid testing has been adopted in many resource-constrained nations as a strategy for reaching out to populations. HIV rapid test kits (RTKs) have the advantage of ease of use, low operational cost and short turnaround times. Before 2005, different RTKs had been used in Nigeria without formal evaluation. Between 2005 and 2007, a study was conducted to formally evaluate a number of RTKs and construct HIV testing algorithms. Objectives: The objectives of this study were to assess and select HIV RTKs and develop national testing algorithms. Method: Nine RTKs were evaluated using 528 well-characterised plasma samples. These comprised 198 HIV-positive specimens (37.5%) and 330 HIV-negative specimens (62.5%), collected nationally. Sensitivity and specificity were calculated with 95% confidence intervals for all nine RTKs singly and for serial and parallel combinations of six RTKs; and relative costs were estimated. Results: Six of the nine RTKs met the selection criteria, including the minimum sensitivity and specificity (both ≥ 99.0%) requirements. There were no significant differences in sensitivities or specificities of RTKs in the serial and parallel algorithms, but the cost of RTKs in parallel algorithms was twice that in serial algorithms. Consequently, three serial algorithms, comprising four test kits (BundiTM, DetermineTM, Stat-Pak® and Uni-GoldTM) with 100.0% sensitivity and 99.1% – 100.0% specificity, were recommended and adopted as national interim testing algorithms in 2007. Conclusion: This evaluation provides the first evidence for reliable combinations of RTKs for HIV testing in Nigeria. However, these RTKs need further evaluation in the field (Phase II) to re-validate their performance.
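
    As a rough illustration of the sensitivity/specificity calculation with 95% confidence intervals described in this record, the following sketch uses a Wilson score interval; the counts passed in at the bottom are made-up placeholders, not the study data.

```python
import math

def wilson_interval(successes, total, z=1.96):
    """Wilson score 95% confidence interval for a proportion."""
    if total == 0:
        return (0.0, 0.0)
    p = successes / total
    denom = 1.0 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2))
    return (max(0.0, centre - half), min(1.0, centre + half))

def evaluate_rapid_test(true_pos, false_neg, true_neg, false_pos):
    """Return sensitivity and specificity, each with a 95% CI."""
    sens = true_pos / (true_pos + false_neg)
    spec = true_neg / (true_neg + false_pos)
    return {
        "sensitivity": (sens, wilson_interval(true_pos, true_pos + false_neg)),
        "specificity": (spec, wilson_interval(true_neg, true_neg + false_pos)),
    }

if __name__ == "__main__":
    # Hypothetical counts for one rapid test kit (not the published figures).
    print(evaluate_rapid_test(true_pos=197, false_neg=1, true_neg=327, false_pos=3))
```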

  20. Revisiting Mutual Fund Performance Evaluation

    OpenAIRE

    Angelidis, Timotheos; Giamouridis, Daniel; Tessaromatis, Nikolaos

    2012-01-01

    Mutual fund manager excess performance should be measured relative to their self-reported benchmark rather than the return of a passive portfolio with the same risk characteristics. Ignoring the self-reported benchmark introduces biases in the measurement of stock selection and timing components of excess performance. We revisit baseline empirical evidence in mutual fund performance evaluation utilizing stock selection and timing measures that address these biases. We introduce a new factor e...

  1. Licensee Performance Evaluation: Phase II

    International Nuclear Information System (INIS)

    Chakoff, H.E.; Speaker, D.M.; Thompson, S.R.; Cohen, S.C.

    1979-08-01

    This report details work performed during the second phase of a two-phase contract to develop methodology for Licensee Performance Evaluation. The Phase I report, NUREG/CR-0110, details initial efforts on the contract. The model developed in Phase I was used to evaluate nine additional facilities for this report. Performance indicators from noncompliance data were also evaluated. A methodology employing the noncompliance indicators was developed and used for 12 case studies. It was found that licensee event report indicators could be more easily identified and utilized than noncompliance indicators based on presently available data systems. However, noncompliance data, appropriately related to cause, could provide real insight into why performance was what it was.

  2. Evaluation of clustering algorithms for protein-protein interaction networks

    Directory of Open Access Journals (Sweden)

    van Helden Jacques

    2006-11-01

    Full Text Available Abstract Background Protein interactions are crucial components of all cellular processes. Recently, high-throughput methods have been developed to obtain a global description of the interactome (the whole network of protein interactions for a given organism). In 2002, the yeast interactome was estimated to contain up to 80,000 potential interactions. This estimate is based on the integration of data sets obtained by various methods (mass spectrometry, two-hybrid methods, genetic studies). High-throughput methods are known, however, to yield a non-negligible rate of false positives, and to miss a fraction of existing interactions. The interactome can be represented as a graph where nodes correspond to proteins and edges to pairwise interactions. In recent years, clustering methods have been developed and applied in order to extract relevant modules from such graphs. These algorithms require the specification of parameters that may drastically affect the results. In this paper we present a comparative assessment of four algorithms: Markov Clustering (MCL), Restricted Neighborhood Search Clustering (RNSC), Super Paramagnetic Clustering (SPC), and Molecular Complex Detection (MCODE). Results A test graph was built on the basis of 220 complexes annotated in the MIPS database. To evaluate the robustness to false positives and false negatives, we derived 41 altered graphs by randomly removing edges from or adding edges to the test graph in various proportions. Each clustering algorithm was applied to these graphs with various parameter settings, and the clusters were compared with the annotated complexes. We analyzed the sensitivity of the algorithms to the parameters and determined their optimal parameter values. We also evaluated their robustness to alterations of the test graph. We then applied the four algorithms to six graphs obtained from high-throughput experiments and compared the resulting clusters with the annotated complexes. Conclusion This
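
    The robustness test described in this record (deriving altered graphs by randomly removing or adding edges in fixed proportions) can be sketched as follows; the edge-list representation, proportions and toy graph are illustrative assumptions, not the MIPS-derived test graph.

```python
import random
from itertools import combinations

def perturb_graph(nodes, edges, remove_frac=0.1, add_frac=0.1, seed=0):
    """Return an altered edge set with a fraction of edges removed and a
    fraction of random spurious edges added, as used to probe clustering
    robustness to false negatives/positives."""
    rng = random.Random(seed)
    edges = {tuple(sorted(e)) for e in edges}

    # Randomly drop a fraction of true edges (simulated false negatives).
    n_remove = int(len(edges) * remove_frac)
    kept = set(rng.sample(sorted(edges), len(edges) - n_remove))

    # Randomly add spurious edges (simulated false positives).
    candidates = [e for e in combinations(sorted(nodes), 2) if e not in edges]
    n_add = int(len(edges) * add_frac)
    added = rng.sample(candidates, min(n_add, len(candidates)))

    return kept | set(added)

if __name__ == "__main__":
    nodes = list(range(20))
    edges = [(i, i + 1) for i in range(19)]          # toy chain graph
    print(sorted(perturb_graph(nodes, edges, remove_frac=0.2, add_frac=0.5)))
```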

  3. The ATLAS Trigger algorithms upgrade and performance in Run 2

    CERN Document Server

    Bernius, Catrin; The ATLAS collaboration

    2017-01-01

    Title: The ATLAS Trigger algorithms upgrade and performance in Run 2 (TDAQ) The ATLAS trigger has been used very successfully for the online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a center-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which result from the almost doubled center-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. At the Level-1 trigger the undertaken impr...

  4. Analysis for Performance of Symbiosis Co-evolutionary Algorithm

    OpenAIRE

    根路銘, もえ子; 遠藤, 聡志; 山田, 孝治; 宮城, 隼夫; Nerome, Moeko; Endo, Satoshi; Yamada, Koji; Miyagi, Hayao

    2000-01-01

    In this paper, we analyze the behavior of a symbiotic evolution algorithm for the N-Queens problem, a benchmark problem for search methods in the field of artificial intelligence. It is shown that this algorithm improves the ability of the evolutionary search method. When the problem is solved by Genetic Algorithms (GAs), an ordinal representation is often used as one of the gene conversion methods which convert from phenotype to genotype and back. The representation can hinder occurrence of leth...

  5. Optimization, evaluation, and comparison of standard algorithms for image reconstruction with the VIP-PET.

    Science.gov (United States)

    Mikhaylova, E; Kolstein, M; De Lorenzo, G; Chmeissani, M

    2014-07-01

    A novel positron emission tomography (PET) scanner design based on a room-temperature pixelated CdTe solid-state detector is being developed within the framework of the Voxel Imaging PET (VIP) Pathfinder project [1]. The simulation results show a great potential of the VIP to produce high-resolution images even in extremely challenging conditions such as the screening of a human head [2]. With an unprecedentedly high channel density (450 channels/cm3), image reconstruction is a challenge. Therefore optimization is needed to find the best algorithm in order to correctly exploit the promising detector potential. The following reconstruction algorithms are evaluated: 2-D Filtered Backprojection (FBP), Ordered Subset Expectation Maximization (OSEM), List-Mode OSEM (LM-OSEM), and the Origin Ensemble (OE) algorithm. The evaluation is based on the comparison of a true image phantom with a set of reconstructed images obtained by each algorithm. This is achieved by calculation of image quality merit parameters such as the bias, the variance and the mean square error (MSE). A systematic optimization of each algorithm is performed by varying the reconstruction parameters, such as the cutoff frequency of the noise filters and the number of iterations. A region of interest (ROI) analysis of the reconstructed phantom is also performed for each algorithm and the results are compared. Additionally, the performance of the image reconstruction methods is compared by calculating the modulation transfer function (MTF). The reconstruction time is also taken into account to choose the optimal algorithm. The analysis is based on GAMOS [3] simulation including the expected CdTe and electronic specifics.
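
    A minimal sketch of the image-quality figures of merit mentioned above (bias, variance and MSE of reconstructed images against the known phantom). The array shapes, the toy phantom and the noise realisations are illustrative assumptions, not the VIP simulation data.

```python
import numpy as np

def image_quality_metrics(true_phantom, reconstructions):
    """Compute mean bias, variance and MSE of a set of reconstructed images
    (e.g. independent noise realisations) against the known phantom."""
    recs = np.asarray(reconstructions, dtype=float)    # shape: (n_images, ny, nx)
    truth = np.asarray(true_phantom, dtype=float)
    mean_img = recs.mean(axis=0)
    bias = mean_img - truth                            # per-pixel bias
    variance = recs.var(axis=0)                        # per-pixel variance
    mse = ((recs - truth) ** 2).mean(axis=0)           # per-pixel MSE = bias^2 + variance
    return bias.mean(), variance.mean(), mse.mean()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    phantom = np.zeros((64, 64))
    phantom[24:40, 24:40] = 1.0                        # toy hot square
    noisy = [phantom + rng.normal(0, 0.05, phantom.shape) for _ in range(10)]
    print(image_quality_metrics(phantom, noisy))
```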

  6. Performance of target detection algorithm in compressive sensing miniature ultraspectral imaging compressed sensing system

    Science.gov (United States)

    Gedalin, Daniel; Oiknine, Yaniv; August, Isaac; Blumberg, Dan G.; Rotman, Stanley R.; Stern, Adrian

    2017-04-01

    Compressive sensing theory was proposed to deal with the high quantity of measurements demanded by traditional hyperspectral systems. Recently, a compressive spectral imaging technique dubbed compressive sensing miniature ultraspectral imaging (CS-MUSI) was presented. This system uses a voltage-controlled liquid crystal device to create multiplexed hyperspectral cubes. We evaluate the utility of the data captured using the CS-MUSI system for the task of target detection. Specifically, we compare the performance of the matched filter target detection algorithm on traditional hyperspectral systems and on CS-MUSI multiplexed hyperspectral cubes. We found that the target detection algorithm performs similarly in both cases, despite the fact that the CS-MUSI data volume is up to an order of magnitude smaller than that of conventional hyperspectral cubes. Moreover, the target detection is approximately an order of magnitude faster on CS-MUSI data.
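
    A bare-bones version of the spectral matched filter detector referred to above. The background statistics are estimated globally from the cube and the toy data are synthetic; the CS-MUSI multiplexing itself is not modelled here.

```python
import numpy as np

def matched_filter_scores(cube, target_spectrum):
    """Classic spectral matched filter: score = (t - m)^T C^-1 (x - m) / (t - m)^T C^-1 (t - m),
    with background mean m and covariance C estimated from the data."""
    h, w, bands = cube.shape
    pixels = cube.reshape(-1, bands)
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(bands)   # regularised covariance
    cov_inv = np.linalg.inv(cov)
    d = target_spectrum - mean
    scores = (pixels - mean) @ cov_inv @ d / (d @ cov_inv @ d)
    return scores.reshape(h, w)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    cube = rng.normal(0.3, 0.05, (32, 32, 40))      # toy hyperspectral cube
    target = np.linspace(0.2, 0.8, 40)              # toy target spectrum
    cube[10, 10] = target                           # implant one target pixel
    scores = matched_filter_scores(cube, target)
    print(np.unravel_index(scores.argmax(), scores.shape))   # expected: (10, 10)
```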

  7. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    Science.gov (United States)

    Atanassov, E.; Dimitrov, D.; Gurov, T.

    2015-10-01

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.

  8. Evaluation of stochastic algorithms for financial mathematics problems from point of view of energy-efficiency

    Energy Technology Data Exchange (ETDEWEB)

    Atanassov, E.; Dimitrov, D., E-mail: d.slavov@bas.bg, E-mail: emanouil@parallel.bas.bg, E-mail: gurov@bas.bg; Gurov, T. [Institute of Information and Communication Technologies, BAS, Acad. G. Bonchev str., bl. 25A, 1113 Sofia (Bulgaria)

    2015-10-28

    The recent developments in the area of high-performance computing are driven not only by the desire for ever higher performance but also by the rising costs of electricity. The use of various types of accelerators like GPUs, Intel Xeon Phi has become mainstream and many algorithms and applications have been ported to make use of them where available. In Financial Mathematics the question of optimal use of computational resources should also take into account the limitations on space, because in many use cases the servers are deployed close to the exchanges. In this work we evaluate various algorithms for option pricing that we have implemented for different target architectures in terms of their energy and space efficiency. Since it has been established that low-discrepancy sequences may be better than pseudorandom numbers for these types of algorithms, we also test the Sobol and Halton sequences. We present the raw results, the computed metrics and conclusions from our tests.
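
    To illustrate the low-discrepancy sequences mentioned in the two records above, here is a hedged sketch of a quasi-Monte Carlo estimate of a European call under geometric Brownian motion driven by a Halton (base-2, i.e. van der Corput) sequence. The option parameters are placeholders and the inverse-normal transform uses Python's statistics.NormalDist; this is not the authors' accelerator implementation.

```python
import math
from statistics import NormalDist

def halton(index, base):
    """The index-th element (1-based) of the van der Corput sequence in the given base."""
    f, result = 1.0, 0.0
    i = index
    while i > 0:
        f /= base
        result += f * (i % base)
        i //= base
    return result

def qmc_call_price(s0, k, r, sigma, t, n=4096):
    """Quasi-Monte Carlo price of a European call under GBM using a 1-D Halton sequence."""
    inv = NormalDist().inv_cdf
    disc = math.exp(-r * t)
    drift = (r - 0.5 * sigma ** 2) * t
    vol = sigma * math.sqrt(t)
    total = 0.0
    for i in range(1, n + 1):
        z = inv(halton(i, 2))                   # low-discrepancy normal draw
        st = s0 * math.exp(drift + vol * z)     # terminal asset price
        total += max(st - k, 0.0)               # call payoff
    return disc * total / n

if __name__ == "__main__":
    # Illustrative parameters; the Black-Scholes value is about 10.45.
    print(qmc_call_price(s0=100.0, k=100.0, r=0.05, sigma=0.2, t=1.0))
```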

  9. The JPSS Ground Project Algorithm Verification, Test and Evaluation System

    Science.gov (United States)

    Vicente, G. A.; Jain, P.; Chander, G.; Nguyen, V. T.; Dixon, V.

    2016-12-01

    The Government Resource for Algorithm Verification, Independent Test, and Evaluation (GRAVITE) is an operational system that provides services to the Suomi National Polar-orbiting Partnership (S-NPP) Mission. It is also a unique environment for Calibration/Validation (Cal/Val) and Data Quality Assessment (DQA) of the Joint Polar Satellite System (JPSS) mission data products. GRAVITE provides fast and direct access to the data and products created by the Interface Data Processing Segment (IDPS), the NASA/NOAA operational system that converts Raw Data Records (RDRs) generated by sensors on the S-NPP into calibrated geo-located Sensor Data Records (SDRs) and generates Mission Unique Products (MUPs). It also facilitates algorithm investigation, integration, checkout and tuning, instrument and product calibration and data quality support, monitoring and data/product distribution. GRAVITE is the portal for the latest S-NPP and JPSS baselined Processing Coefficient Tables (PCTs) and Look-Up Tables (LUTs) and hosts a number of DQA offline tools that take advantage of the proximity to the near-real-time data flows. It also contains a set of automated and ad-hoc Cal/Val tools used for algorithm analysis and updates, including an instance of the IDPS called the GRAVITE Algorithm Development Area (G-ADA), which has the latest installation of the IDPS algorithms running on identical software and hardware platforms. Two other important GRAVITE components are the Investigator-led Processing System (IPS) and the Investigator Computing Facility (ICF). The IPS is a dedicated environment where authorized users run automated scripts called Product Generation Executables (PGEs) to support Cal/Val and data quality assurance offline. This data-rich and data-driven service holds its own distribution system and allows operators to retrieve science data products. The ICF is a workspace where users can share computing applications and resources and have full access to libraries and

  10. An algorithm to biological tissues evaluation in pediatric examinations

    International Nuclear Information System (INIS)

    Souza, R.T.F.; Miranda, J.R.A.; Alvarez, M.; Velo, A.F.; Pina, D.R.

    2011-01-01

    A prerequisite for the construction of phantoms is the quantification of the average thickness of biological tissues and the conversion of these tissues into equivalent thicknesses of simulator material. This study aims to develop an algorithm to classify and quantify tissues based on the normal distribution of CT numbers of the anatomical structures found in the mean free path of the X-ray beam, using the examination histogram to carry out this evaluation. We have considered an algorithm for the determination of equivalent biological tissue thicknesses from histograms. This algorithm classifies the different biological tissues in tomographic exams in DICOM format and calculates the average thickness of these tissues. The results found are coherent with the literature, presenting discrepancies of up to 21.6% relative to bone tissue, analyzed for an anthropomorphic phantom (RANDO). These results allow using this methodology on living tissues for the construction of homogeneous thorax phantoms of newborn and infant patients, which will be used later in the optimization process of pediatric radiographic images. (author)
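
    The classification of CT numbers into tissue classes and the conversion of voxel counts into an accumulated tissue thickness could look roughly like the following; the Hounsfield-unit windows and the voxel size are illustrative assumptions, not the thresholds used by the authors.

```python
import numpy as np

# Illustrative Hounsfield-unit windows; real thresholds depend on the protocol.
TISSUE_WINDOWS = {
    "lung":        (-900, -500),
    "adipose":     (-150,  -50),
    "soft_tissue": ( -50,  100),
    "bone":        ( 300, 2000),
}

def tissue_thickness_from_profile(hu_profile, voxel_size_mm):
    """Classify the CT numbers sampled along the X-ray path and return the
    accumulated thickness per tissue class (voxel count times voxel size)."""
    hu = np.asarray(hu_profile)
    thickness = {}
    for name, (lo, hi) in TISSUE_WINDOWS.items():
        n_voxels = int(np.count_nonzero((hu >= lo) & (hu < hi)))
        thickness[name] = n_voxels * voxel_size_mm
    return thickness

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    profile = np.concatenate([rng.normal(40, 10, 120),      # synthetic soft tissue
                              rng.normal(700, 100, 15)])    # synthetic bone
    print(tissue_thickness_from_profile(profile, voxel_size_mm=0.8))
```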

  11. An analytical phantom for the evaluation of medical flow imaging algorithms

    International Nuclear Information System (INIS)

    Pashaei, A; Fatouraee, N

    2009-01-01

    Blood flow characteristics (e.g. velocity, pressure, shear stress, streamlines and volumetric flow rate) are effective tools in the diagnosis of cardiovascular diseases such as atherosclerotic plaque, aneurysm and cardiac muscle failure. Noninvasive estimation of cardiovascular blood flow characteristics is mostly limited to the measurement of velocity components by medical imaging modalities. Once the velocity field is obtained from the images, other flow characteristics within the cardiovascular system can be determined using algorithms relating them to the velocity components. In this work, we propose an analytical flow phantom to evaluate these algorithms accurately. The Navier-Stokes equations are used to derive this flow phantom: the exact solution of these equations yields analytical expressions for the flow characteristics inside the domain. Features such as pulsatility, incompressibility and viscosity of the flow are included in a three-dimensional domain. The velocity field of the resulting system is presented as reference images. These images can be employed to evaluate the performance of different flow characteristic algorithms. In this study, we also present some applications of the obtained phantom. The calculation of the pressure field from velocity data, volumetric flow rate, wall shear stress and particle traces are the characteristics whose algorithms are evaluated here. We also present the application of this phantom in the analysis of noisy and low-resolution images. The presented phantom can be considered as a benchmark test to compare the accuracy of different flow characteristic algorithms.

  12. Performance Analysis of the Consensus-Based Distributed LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Gonzalo Mateos

    2009-01-01

    Full Text Available Low-cost estimation of stationary signals and reduced-complexity tracking of nonstationary processes are well-motivated tasks that can be accomplished using ad hoc wireless sensor networks (WSNs). To this end, a fully distributed least mean-square (D-LMS) algorithm is developed in this paper, in which sensors exchange messages with single-hop neighbors to consent on the network-wide estimates adaptively. The novel approach does not require a Hamiltonian cycle or a special bridge subset of sensors, while communications among sensors are allowed to be noisy. A mean-square error (MSE) performance analysis of D-LMS is conducted in the presence of a time-varying parameter vector, which adheres to a first-order autoregressive model. For sensor observations that are related to the parameter vector of interest via a linear Gaussian model, and after adopting simplifying independence assumptions, exact closed-form expressions are derived for the global and sensor-level MSE evolution as well as its steady-state (s.s.) values. Mean- and MSE-sense stability of D-LMS are also established. Interestingly, extensive numerical tests demonstrate that for small step-sizes the results accurately extend to the pragmatic setting whereby sensors acquire temporally correlated, not necessarily Gaussian data.
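
    A toy sketch of a consensus-style distributed LMS update, in which each sensor performs a local LMS step on its own data and then averages its estimate with single-hop neighbours. The ring topology, step size and plain averaging rule are illustrative simplifications, not the exact D-LMS message-passing scheme of the paper.

```python
import numpy as np

def consensus_lms(neighbors, regressors, observations, mu=0.05, n_iter=200):
    """Each sensor k keeps a local estimate w_k, applies an LMS (stochastic
    gradient) step on its own measurement, then averages with its neighbours."""
    n_sensors, n_samples, dim = regressors.shape
    w = np.zeros((n_sensors, dim))
    for t in range(n_iter):
        # Local LMS step at every sensor.
        for k in range(n_sensors):
            h = regressors[k, t % n_samples]
            err = observations[k, t % n_samples] - h @ w[k]
            w[k] = w[k] + mu * err * h
        # Consensus step: average with single-hop neighbours.
        w = np.array([(w[k] + w[list(neighbors[k])].sum(axis=0)) / (1 + len(neighbors[k]))
                      for k in range(n_sensors)])
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_w = np.array([1.0, -0.5, 0.25])
    sensors, samples = 4, 300
    H = rng.normal(size=(sensors, samples, 3))
    y = H @ true_w + 0.05 * rng.normal(size=(sensors, samples))
    ring = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}   # ring topology
    print(consensus_lms(ring, H, y).round(3))              # each row approaches true_w
```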

  13. Finite sample performance of the E-M algorithm for ranks data modelling

    Directory of Open Access Journals (Sweden)

    Angela D'Elia

    2007-10-01

    Full Text Available We check the finite sample performance of the maximum likelihood estimators of the parameters of a mixture distribution recently introduced for modelling ranks/preference data. The estimates are derived by the E-M algorithm and the performance is evaluated from both univariate and bivariate points of view. While the results are generally acceptable as far as the bias is concerned, the Monte Carlo experiment shows a different behaviour of the estimators' efficiency for the two parameters of the mixture, mainly depending upon their location in the admissible parametric space. Some operative suggestions conclude the paper.

  14. A comprehensive evaluation of alignment algorithms in the context of RNA-seq.

    Directory of Open Access Journals (Sweden)

    Robert Lindner

    Full Text Available Transcriptome sequencing (RNA-Seq overcomes limitations of previously used RNA quantification methods and provides one experimental framework for both high-throughput characterization and quantification of transcripts at the nucleotide level. The first step and a major challenge in the analysis of such experiments is the mapping of sequencing reads to a transcriptomic origin including the identification of splicing events. In recent years, a large number of such mapping algorithms have been developed, all of which have in common that they require algorithms for aligning a vast number of reads to genomic or transcriptomic sequences. Although the FM-index based aligner Bowtie has become a de facto standard within mapping pipelines, a much larger number of possible alignment algorithms have been developed also including other variants of FM-index based aligners. Accordingly, developers and users of RNA-seq mapping pipelines have the choice among a large number of available alignment algorithms. To provide guidance in the choice of alignment algorithms for these purposes, we evaluated the performance of 14 widely used alignment programs from three different algorithmic classes: algorithms using either hashing of the reference transcriptome, hashing of reads, or a compressed FM-index representation of the genome. Here, special emphasis was placed on both precision and recall and the performance for different read lengths and numbers of mismatches and indels in a read. Our results clearly showed the significant reduction in memory footprint and runtime provided by FM-index based aligners at a precision and recall comparable to the best hash table based aligners. Furthermore, the recently developed Bowtie 2 alignment algorithm shows a remarkable tolerance to both sequencing errors and indels, thus, essentially making hash-based aligners obsolete.

  15. The performance of the backpropagation algorithm with varying slope of the activation function

    International Nuclear Information System (INIS)

    Bai Yanping; Zhang Haixia; Hao Yilong

    2009-01-01

    Some adaptations of the basic BP algorithm are proposed in order to provide an efficient method for non-linear data learning and prediction. In this paper, an adapted BP algorithm with a varying slope of the activation function and different learning rates is put forward. The results of the experiments indicate that this algorithm achieves very good training performance. We also test the prediction performance of our adapted BP algorithm on 16 instances and compare the test results to those of the BP algorithm with gradient descent momentum and an adaptive learning rate. The results indicate that the adapted BP algorithm gives the best performance (100%) on the test examples, which suggests that it produces a smoothed reconstruction that generalizes better to new prediction function values than the BP algorithm improved with momentum.
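
    A small sketch showing how a slope (gain) parameter enters the logistic activation and its derivative in a backpropagation weight update. The single-neuron setup, learning rate and toy OR task are illustrative assumptions, not the network or data of the paper.

```python
import numpy as np

def sigmoid(x, slope=1.0):
    """Logistic activation with an adjustable slope (gain) parameter."""
    return 1.0 / (1.0 + np.exp(-slope * x))

def sigmoid_deriv(y, slope=1.0):
    """Derivative expressed through the activation output y = sigmoid(x)."""
    return slope * y * (1.0 - y)

def train_single_neuron(X, targets, slope=2.0, lr=0.5, epochs=2000, seed=0):
    """Plain backpropagation (delta rule) for one sigmoid neuron."""
    rng = np.random.default_rng(seed)
    w = rng.normal(scale=0.1, size=X.shape[1])
    b = 0.0
    for _ in range(epochs):
        y = sigmoid(X @ w + b, slope)
        delta = (targets - y) * sigmoid_deriv(y, slope)   # error signal
        w += lr * X.T @ delta / len(X)
        b += lr * delta.mean()
    return w, b

if __name__ == "__main__":
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    t = np.array([0, 1, 1, 1], dtype=float)               # logical OR
    w, b = train_single_neuron(X, t)
    print(np.round(sigmoid(X @ w + b, slope=2.0), 2))
```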

  16. Unveiling the development of intracranial injury using dynamic brain EIT: an evaluation of current reconstruction algorithms.

    Science.gov (United States)

    Li, Haoting; Chen, Rongqing; Xu, Canhua; Liu, Benyuan; Tang, Mengxing; Yang, Lin; Dong, Xiuzhen; Fu, Feng

    2017-08-21

    Dynamic brain electrical impedance tomography (EIT) is a promising technique for continuously monitoring the development of cerebral injury. While there are many reconstruction algorithms available for brain EIT, there is still a lack of studies comparing their performance in the context of dynamic brain monitoring. To address this problem, we develop a framework for evaluating current algorithms with respect to their ability to correctly identify small intracranial conductivity changes. First, a simulated 3D head phantom with a realistic layered structure and impedance distribution is developed. Next, several reconstruction algorithms, such as back projection (BP), damped least-squares (DLS), Bayesian, split Bregman (SB) and GREIT, are introduced. We investigate their temporal response, noise performance, and location and shape error with respect to different noise levels on the simulation phantom. The results show that the SB algorithm demonstrates superior performance in reducing image error. To further improve the location accuracy, we optimize SB by incorporating brain structure-based conductivity distribution priors, in which differences of the conductivities between different brain tissues and the inhomogeneous conductivity distribution of the skull are considered. We compare this novel algorithm (called SB-IBCD) with SB and DLS using anatomically correct head-shaped phantoms with spatially varying skull conductivity. Main results and significance: The results showed that SB-IBCD is the most effective in unveiling small intracranial conductivity changes, as it can reduce the image error by an average of 30.0% compared to DLS.

  17. WDM Multicast Tree Construction Algorithms and Their Comparative Evaluations

    Science.gov (United States)

    Makabe, Tsutomu; Mikoshi, Taiju; Takenaka, Toyofumi

    We propose novel tree construction algorithms for multicast communication in photonic networks. Since multicast communications consume many more link resources than unicast communications, effective algorithms for route selection and wavelength assignment are required. We propose a novel tree construction algorithm, called the Weighted Steiner Tree (WST) algorithm and a variation of the WST algorithm, called the Composite Weighted Steiner Tree (CWST) algorithm. Because these algorithms are based on the Steiner Tree algorithm, link resources among source and destination pairs tend to be commonly used and link utilization ratios are improved. Because of this, these algorithms can accept many more multicast requests than other multicast tree construction algorithms based on the Dijkstra algorithm. However, under certain delay constraints, the blocking characteristics of the proposed Weighted Steiner Tree algorithm deteriorate since some light paths between source and destinations use many hops and cannot satisfy the delay constraint. In order to adapt the approach to the delay-sensitive environments, we have devised the Composite Weighted Steiner Tree algorithm comprising the Weighted Steiner Tree algorithm and the Dijkstra algorithm for use in a delay constrained environment such as an IPTV application. In this paper, we also give the results of simulation experiments which demonstrate the superiority of the proposed Composite Weighted Steiner Tree algorithm compared with the Distributed Minimum Hop Tree (DMHT) algorithm, from the viewpoint of the light-tree request blocking.

  18. Performance Evaluation of Feature Sets of Minutiae Quadruplets

    African Journals Online (AJOL)

    databases. This shows that the evaluation of algorithms on just one or two databases is not sufficient to confirm the performance of techniques as they may be database-dependent. Much work was done to find a feature-set that would have a good performance across three FVC databases of the FVC 2000, 2002 and 2004 ...

  19. Evaluation of feature detection algorithms for structure from motion

    CSIR Research Space (South Africa)

    Govender, N

    2009-11-01

    Full Text Available Abstract—Structure from motion is a widely-used technique in computer vision to perform 3D reconstruction. The 3D...

  20. Performance Evaluation of Security Protocols

    DEFF Research Database (Denmark)

    Bodei, Chiara; Buchholtz, Mikael; Curti, Michele

    2005-01-01

    We use a special operational semantics which guides us in inferring quantitative measures on systems describing cryptographic protocols. We assign rates to transitions by only looking at their labels. The rates reflect the distributed architecture running the applications and the use of possibly different cryptosystems. We then map transition systems to Markov chains and evaluate the performance of systems, using standard tools.

  1. Evaluation of EIT system performance.

    Science.gov (United States)

    Yasin, Mamatjan; Böhm, Stephan; Gaggero, Pascal O; Adler, Andy

    2011-07-01

    An electrical impedance tomography (EIT) system images internal conductivity from surface electrical stimulation and measurement. Such systems necessarily comprise multiple design choices from cables and hardware design to calibration and image reconstruction. In order to compare EIT systems and study the consequences of changes in system performance, this paper describes a systematic approach to evaluate the performance of the EIT systems. The system to be tested is connected to a saline phantom in which calibrated contrasting test objects are systematically positioned using a position controller. A set of evaluation parameters are proposed which characterize (i) data and image noise, (ii) data accuracy, (iii) detectability of single contrasts and distinguishability of multiple contrasts, and (iv) accuracy of reconstructed image (amplitude, resolution, position and ringing). Using this approach, we evaluate three different EIT systems and illustrate the use of these tools to evaluate and compare performance. In order to facilitate the use of this approach, all details of the phantom, test objects and position controller design are made publicly available including the source code of the evaluation and reporting software.

  2. Performances of new reconstruction algorithms for CT-TDLAS (computer tomography-tunable diode laser absorption spectroscopy)

    International Nuclear Information System (INIS)

    Jeon, Min-Gyu; Deguchi, Yoshihiro; Kamimoto, Takahiro; Doh, Deog-Hee; Cho, Gyeong-Rae

    2017-01-01

    Highlights: • The measured data were successfully used for generating absorption spectra. • Four different reconstruction algorithms, ART, MART, SART and SMART, were evaluated. • The calculation speed of convergence by the SMART algorithm was the fastest. • SMART was the most reliable algorithm for reconstructing the multiple signals. - Abstract: The recent advent of tunable lasers has made it possible to measure temperature and concentration fields of gases simultaneously. CT-TDLAS (computed tomography-tunable diode laser absorption spectroscopy) is one of the leading techniques for the measurement of temperature and concentration fields of gases. In CT-TDLAS, the accuracy of the measurement results is strongly dependent upon the reconstruction algorithm. In this study, four different reconstruction algorithms have been tested numerically using experimental data sets measured by thermocouples for combustion fields. Three reconstruction algorithms, the MART (multiplicative algebraic reconstruction technique) algorithm, the SART (simultaneous algebraic reconstruction technique) algorithm and the SMART (simultaneous multiplicative algebraic reconstruction technique) algorithm, are newly proposed for CT-TDLAS in this study. The calculation results obtained by the three algorithms have been compared with the previous algorithm, the ART (algebraic reconstruction technique) algorithm. Phantom data sets have been generated by the use of thermocouple data obtained in an actual experiment. The data of the Harvard HITRAN table, in which the thermodynamic properties and the light spectrum of H2O are listed, were used for the numerical test. The reconstructed temperature and concentration fields were compared with the original HITRAN data, through which the constructed methods are validated. The performances of the four reconstruction algorithms were demonstrated. This method is expected to enhance the practicality of CT-TDLAS.
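
    A minimal additive ART (Kaczmarz-type) iteration of the kind compared in this record; the toy ray-sum matrix and relaxation factor are illustrative assumptions, and the multiplicative variants (MART/SMART) differ in using ratio-based rather than additive updates.

```python
import numpy as np

def art_reconstruct(A, b, n_sweeps=50, relax=0.5):
    """Additive algebraic reconstruction technique (Kaczmarz): project the
    current estimate onto the hyperplane of one ray-sum equation at a time."""
    x = np.zeros(A.shape[1])
    row_norms = (A ** 2).sum(axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] == 0:
                continue
            residual = b[i] - A[i] @ x
            x += relax * residual / row_norms[i] * A[i]
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x_true = rng.uniform(0, 1, 16)            # toy 4x4 "absorbance" image, flattened
    A = rng.uniform(0, 1, (24, 16))           # toy projection (ray-sum) matrix
    b = A @ x_true                            # noiseless projections
    x_rec = art_reconstruct(A, b, n_sweeps=200)
    print(np.abs(x_rec - x_true).max())       # reconstruction error
```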

  3. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  4. Performance Analysis of a Decoding Algorithm for Algebraic Geometry Codes

    DEFF Research Database (Denmark)

    Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund; Høholdt, Tom

    1998-01-01

    We analyse the known decoding algorithms for algebraic geometry codes in the case where the number of errors is greater than or equal to [(dFR-1)/2]+1, where dFR is the Feng-Rao distance.

  5. Diagnostic performance of line-immunoassay based algorithms for incident HIV-1 infection

    Directory of Open Access Journals (Sweden)

    Schüpbach Jörg

    2012-04-01

    Full Text Available Abstract Background Serologic testing algorithms for recent HIV seroconversion (STARHS provide important information for HIV surveillance. We have previously demonstrated that a patient's antibody reaction pattern in a confirmatory line immunoassay (INNO-LIA™ HIV I/II Score provides information on the duration of infection, which is unaffected by clinical, immunological and viral variables. In this report we have set out to determine the diagnostic performance of Inno-Lia algorithms for identifying incident infections in patients with known duration of infection and evaluated the algorithms in annual cohorts of HIV notifications. Methods Diagnostic sensitivity was determined in 527 treatment-naive patients infected for up to 12 months. Specificity was determined in 740 patients infected for longer than 12 months. Plasma was tested by Inno-Lia and classified as either incident ( Results The 10 best algorithms had a mean raw sensitivity of 59.4% and a mean specificity of 95.1%. Adjustment for overrepresentation of patients in the first quarter year of infection further reduced the sensitivity. In the preferred model, the mean adjusted sensitivity was 37.4%. Application of the 10 best algorithms to four annual cohorts of HIV-1 notifications totalling 2'595 patients yielded a mean IIR of 0.35 in 2005/6 (baseline and of 0.45, 0.42 and 0.35 in 2008, 2009 and 2010, respectively. The increase between baseline and 2008 and the ensuing decreases were highly significant. Other adjustment models yielded different absolute IIR, although the relative changes between the cohorts were identical for all models. Conclusions The method can be used for comparing IIR in annual cohorts of HIV notifications. The use of several different algorithms in combination, each with its own sensitivity and specificity to detect incident infection, is advisable as this reduces the impact of individual imperfections stemming primarily from relatively low sensitivities and

  6. Performance Estimation and Fault Diagnosis Based on Levenberg–Marquardt Algorithm for a Turbofan Engine

    Directory of Open Access Journals (Sweden)

    Junjie Lu

    2018-01-01

    Full Text Available Establishing schemes for accurate and computationally efficient performance estimation and fault diagnosis of turbofan engines has become a new research focus and challenge, as such schemes can increase the reliability and stability of the turbofan engine and reduce life cycle costs. Accurate estimation of turbofan engine performance relies on a thorough understanding of the components' performance, which is described by component characteristic maps; the fault of each component can be regarded as a change in its characteristic maps. In this paper, a novel method based on a Levenberg–Marquardt (LM) algorithm is proposed to enhance the fidelity of the performance estimation and the credibility of the fault diagnosis for the turbofan engine. The presented method utilizes the LM algorithm to figure out the operating point in the characteristic maps, preparing for performance estimation and fault diagnosis. The accuracy of the proposed method is evaluated for estimating performance parameters in the transient case with Rayleigh process noise and Gaussian measurement noise. A comparison among the extended Kalman filter (EKF) method, the particle filter (PF) method and the proposed method is implemented for an abrupt fault case and a gradual degeneration case, and it is shown that the proposed method leads to more accurate results for performance estimation and fault diagnosis of turbofan engines than the currently popular EKF and PF diagnosis methods.
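
    The Levenberg–Marquardt step referred to above, for a generic nonlinear least-squares problem, can be sketched as follows. The finite-difference Jacobian, the damping update rule and the toy exponential-fit problem are common textbook choices assumed for illustration, not the engine-specific implementation.

```python
import numpy as np

def levenberg_marquardt(residual_fn, x0, n_iter=100, lam=1e-3):
    """Minimise ||r(x)||^2 with damped Gauss-Newton (Levenberg-Marquardt) steps."""
    x = np.asarray(x0, dtype=float)

    def jacobian(x, eps=1e-6):
        r0 = residual_fn(x)
        J = np.zeros((r0.size, x.size))
        for j in range(x.size):
            dx = np.zeros_like(x)
            dx[j] = eps
            J[:, j] = (residual_fn(x + dx) - r0) / eps
        return J

    for _ in range(n_iter):
        r = residual_fn(x)
        J = jacobian(x)
        # Damped normal equations: (J^T J + lam I) step = -J^T r
        step = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -J.T @ r)
        if np.sum(residual_fn(x + step) ** 2) < np.sum(r ** 2):
            x, lam = x + step, lam * 0.5        # accept step, reduce damping
        else:
            lam *= 2.0                          # reject step, increase damping
    return x

if __name__ == "__main__":
    # Toy problem: fit y = a * exp(b * t) to noisy samples.
    t = np.linspace(0, 1, 30)
    y = 2.0 * np.exp(1.5 * t) + 0.01 * np.random.default_rng(0).normal(size=t.size)
    residuals = lambda p: p[0] * np.exp(p[1] * t) - y
    print(levenberg_marquardt(residuals, x0=[1.0, 1.0]).round(3))   # approx [2.0, 1.5]
```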

  7. A domain specific language for performance portable molecular dynamics algorithms

    Science.gov (United States)

    Saunders, William Robert; Grant, James; Müller, Eike Hermann

    2018-03-01

    Developers of Molecular Dynamics (MD) codes face significant challenges when adapting existing simulation packages to new hardware. In a continuously diversifying hardware landscape it becomes increasingly difficult for scientists to be experts both in their own domain (physics/chemistry/biology) and specialists in the low level parallelisation and optimisation of their codes. To address this challenge, we describe a "Separation of Concerns" approach for the development of parallel and optimised MD codes: the science specialist writes code at a high abstraction level in a domain specific language (DSL), which is then translated into efficient computer code by a scientific programmer. In a related context, an abstraction for the solution of partial differential equations with grid based methods has recently been implemented in the (Py)OP2 library. Inspired by this approach, we develop a Python code generation system for molecular dynamics simulations on different parallel architectures, including massively parallel distributed memory systems and GPUs. We demonstrate the efficiency of the auto-generated code by studying its performance and scalability on different hardware and compare it to other state-of-the-art simulation packages. With growing data volumes the extraction of physically meaningful information from the simulation becomes increasingly challenging and requires equally efficient implementations. A particular advantage of our approach is the easy expression of such analysis algorithms. We consider two popular methods for deducing the crystalline structure of a material from the local environment of each atom, show how they can be expressed in our abstraction and implement them in the code generation framework.

  8. Standardized evaluation framework for evaluating coronary artery stenosis detection, stenosis quantification and lumen segmentation algorithms in computed tomography angiography.

    Science.gov (United States)

    Kirişli, H A; Schaap, M; Metz, C T; Dharampal, A S; Meijboom, W B; Papadopoulou, S L; Dedic, A; Nieman, K; de Graaf, M A; Meijs, M F L; Cramer, M J; Broersen, A; Cetin, S; Eslami, A; Flórez-Valencia, L; Lor, K L; Matuszewski, B; Melki, I; Mohr, B; Oksüz, I; Shahzad, R; Wang, C; Kitslaar, P H; Unal, G; Katouzian, A; Örkisz, M; Chen, C M; Precioso, F; Najman, L; Masood, S; Ünay, D; van Vliet, L; Moreno, R; Goldenberg, R; Vuçini, E; Krestin, G P; Niessen, W J; van Walsum, T

    2013-12-01

    Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease in the past decades, computed tomography angiography (CTA) has rapidly emerged, and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of the algorithms devised to detect and quantify the coronary artery stenoses, and to segment the coronary artery lumen in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenosis on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with expert's manual annotation. A database consisting of 48 multicenter multivendor cardiac CTA datasets with corresponding reference standards are described and made available. The algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second-reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/. Copyright © 2013 Elsevier B.V. All rights reserved.

  9. An Algorithm for Glaucoma Screening in Clinical Settings and Its Preliminary Performance Profile

    Directory of Open Access Journals (Sweden)

    S-Farzad Mohammadi

    2013-01-01

    Full Text Available Purpose: To devise and evaluate a screening algorithm for glaucoma in clinical settings. Methods: Screening included examination of the optic disc for vertical cupping (≥0.4) and asymmetry (≥0.15), Goldmann applanation tonometry (≥21 mmHg), adjusted or unadjusted for central corneal thickness, and automated perimetry. In the diagnostic step, retinal nerve fiber layer imaging was performed using scanning laser polarimetry. Performance of the screening protocol was assessed in an eye hospital-based program in which 124 non-physician personnel aged 40 years or above were examined. A single ophthalmologist carried out the examinations and in equivocal cases, a glaucoma subspecialist's opinion was sought. Results: Glaucoma was diagnosed in six cases (prevalence 4.8%; 95% confidence interval, 0.01–0.09), of whom five were new. The likelihood of making a definite diagnosis of glaucoma for those who were screened positively was 8.5 times higher than the estimated baseline risk for the reference population; the positive predictive value of the screening protocol was 30%. Screening excluded 80% of the initial population. Conclusion: Application of a formal screening protocol (such as our algorithm or its equivalent) in clinical settings can be helpful in detecting new cases of glaucoma. Preliminary performance assessment of the algorithm showed its applicability and effectiveness in detecting glaucoma among subjects without any visual complaint.
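
    The screening step described above largely reduces to a simple rule over a few measurements. A hedged sketch follows: the thresholds are taken from the abstract, the perimetry criterion is omitted, and the field names are invented for illustration.

```python
def glaucoma_screen_positive(cup_disc_ratio_right, cup_disc_ratio_left,
                             iop_right_mmhg, iop_left_mmhg):
    """Flag an examinee for the diagnostic step if any screening criterion is met:
    vertical cupping >= 0.4, cup asymmetry >= 0.15, or IOP >= 21 mmHg.
    (Automated perimetry, also part of the protocol, is not modelled here.)"""
    cupping = max(cup_disc_ratio_right, cup_disc_ratio_left) >= 0.4
    asymmetry = abs(cup_disc_ratio_right - cup_disc_ratio_left) >= 0.15
    pressure = max(iop_right_mmhg, iop_left_mmhg) >= 21
    return cupping or asymmetry or pressure

if __name__ == "__main__":
    # Hypothetical examinee: cupping and asymmetry criteria both met.
    print(glaucoma_screen_positive(0.45, 0.30, 16, 17))   # True
```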

  10. Performance Assessment Method for a Forged Fingerprint Detection Algorithm

    Science.gov (United States)

    Shin, Yong Nyuo; Jun, In-Kyung; Kim, Hyun; Shin, Woochang

    The threat of invasion of privacy and of the illegal appropriation of information both increase with the expansion of the biometrics service environment to open systems. However, while certificates or smart cards can easily be cancelled and reissued if found to be missing, there is no way to recover the unique biometric information of an individual following a security breach. With the recognition that this threat factor may disrupt the large-scale civil service operations approaching implementation, such as electronic ID cards and e-Government systems, many agencies and vendors around the world continue to develop forged fingerprint detection technology, but no objective performance assessment method has, to date, been reported. Therefore, in this paper, we propose a methodology designed to evaluate the objective performance of the forged fingerprint detection technology that is currently attracting a great deal of attention.

  11. Self-karaoke patterns: an interactive audio-visual system for handsfree live algorithm performance

    OpenAIRE

    Eldridge, Alice

    2014-01-01

    Self-karaoke Patterns is an audiovisual study for improvised cello and live algorithms. The work is motivated in part by addressing the practical needs of the performer in 'handsfree' live algorithm contexts and in part by an aesthetic concern with resolving the tension between a conceptual dedication to autonomous algorithms and a musical dedication to coherent performance. The elected approach is inspired by recent work investigating the role of 'shape' in musical performance.

  12. Empirical study of self-configuring genetic programming algorithm performance and behaviour

    International Nuclear Information System (INIS)

    Semenkin, E.; Semenkina, M. (Siberian State Aerospace University named after Academician M.F. Reshetnev, 31 Krasnoyarskiy Rabochiy prospect, Krasnoyarsk, 660014, Russian Federation)

    2015-01-01

    The behaviour of the self-configuring genetic programming algorithm with a modified uniform crossover operator, which implements a selective pressure on the recombination stage, is studied on symbolic programming problems. The interplay of the operator's probabilistic rates is studied and the influence of operator variants on algorithm performance is investigated. Algorithm modifications based on the results of these investigations are suggested. The performance improvement of the algorithm is demonstrated by a comparative analysis of the suggested algorithms on benchmark and real-world problems.

  13. Evaluation of machine learning algorithms for improved risk assessment for Down's syndrome.

    Science.gov (United States)

    Koivu, Aki; Korpimäki, Teemu; Kivelä, Petri; Pahikkala, Tapio; Sairanen, Mikko

    2018-05-04

    Prenatal screening generates a great amount of data that is used for predicting risk of various disorders. Prenatal risk assessment is based on multiple clinical variables and overall performance is defined by how well the risk algorithm is optimized for the population in question. This article evaluates machine learning algorithms to improve the performance of first-trimester screening for Down syndrome. Machine learning algorithms pose an adaptive alternative to develop better risk assessment models using the existing clinical variables. Two real-world data sets were used to experiment with multiple classification algorithms. Implemented models were tested with a third, real-world, data set and performance was compared to a predicate method, a commercial risk assessment software. The best performing deep neural network model gave an area under the curve of 0.96 and a detection rate of 78% at a 1% false positive rate with the test data. The support vector machine model gave an area under the curve of 0.95 and a detection rate of 61% at a 1% false positive rate with the same test data. When compared with the predicate method, the best support vector machine model was slightly inferior, but an optimized deep neural network model was able to give higher detection rates at the same false positive rate, or a similar detection rate with a markedly lower false positive rate. This finding could further improve first-trimester screening for Down syndrome by using existing clinical variables and a large training data set derived from a specific population. Copyright © 2018 Elsevier Ltd. All rights reserved.
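
    The headline figures above (detection rate at a fixed 1% false-positive rate) can be computed from raw classifier scores as in the following sketch; the scores and labels are synthetic placeholders, not the screening data.

```python
import numpy as np

def detection_rate_at_fpr(scores, labels, target_fpr=0.01):
    """Fraction of positives detected when the decision threshold is set so that
    at most target_fpr of the negatives are (falsely) flagged positive."""
    scores, labels = np.asarray(scores), np.asarray(labels).astype(bool)
    neg_scores = np.sort(scores[~labels])[::-1]           # negatives, highest first
    k = int(np.floor(target_fpr * neg_scores.size))       # allowed false positives
    threshold = neg_scores[k] if k < neg_scores.size else -np.inf
    return float((scores[labels] > threshold).mean())

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    neg = rng.normal(0.0, 1.0, 5000)       # synthetic unaffected pregnancies
    pos = rng.normal(2.5, 1.0, 100)        # synthetic affected pregnancies
    scores = np.concatenate([neg, pos])
    labels = np.concatenate([np.zeros(5000), np.ones(100)])
    print(detection_rate_at_fpr(scores, labels, target_fpr=0.01))
```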

  14. An Evaluation of Algorithms for Identifying Metastatic Breast, Lung, or Colorectal Cancer in Administrative Claims Data.

    Science.gov (United States)

    Whyte, Joanna L; Engel-Nitz, Nicole M; Teitelbaum, April; Gomez Rey, Gabriel; Kallich, Joel D

    2015-07-01

    Administrative health care claims data are used for epidemiologic, health services, and outcomes cancer research and thus play a significant role in policy. Cancer stage, which is often a major driver of cost and clinical outcomes, is not typically included in claims data. Evaluate algorithms used in a dataset of cancer patients to identify patients with metastatic breast (BC), lung (LC), or colorectal (CRC) cancer using claims data. Clinical data on BC, LC, or CRC patients (between January 1, 2007 and March 31, 2010) were linked to a health care claims database. Inclusion required health plan enrollment ≥3 months before initial cancer diagnosis date. Algorithms were used in the claims database to identify patients' disease status, which was compared with physician-reported metastases. Generic and tumor-specific algorithms were evaluated using ICD-9 codes, varying diagnosis time frames, and including/excluding other tumors. Positive and negative predictive values, sensitivity, and specificity were assessed. The linked databases included 14,480 patients; of whom, 32%, 17%, and 14.2% had metastatic BC, LC, and CRC, respectively, at diagnosis and met inclusion criteria. Nontumor-specific algorithms had lower specificity than tumor-specific algorithms. Tumor-specific algorithms' sensitivity and specificity were 53% and 99% for BC, 55% and 85% for LC, and 59% and 98% for CRC, respectively. Algorithms to distinguish metastatic BC, LC, and CRC from locally advanced disease should use tumor-specific primary cancer codes with 2 claims for the specific primary cancer >30-42 days apart to reduce misclassification. These performed best overall in specificity, positive predictive values, and overall accuracy to identify metastatic cancer in a health care claims database.

  15. Preliminary evaluation of the MLAA algorithm with the Philips Ingenuity PET/MR

    International Nuclear Information System (INIS)

    Lougovski, Alexandr; Schramm, Georg; Maus, Jens; Hofheinz, Frank; Ho, Jörg van den

    2014-01-01

    Combined PET/MR is a promising tool for simultaneous investigation of soft tissue morphology and function. However, contrary to CT, MR images do not provide information on photon attenuation in tissue. In the currently available systems this issue is solved by synthesizing attenuation maps from MR images using segmentation algorithms. This approach has been shown to provide reasonable results in most cases. However, sporadically occurring segmentation errors can cause serious problems. Recently, algorithms for simultaneous estimation of attenuation and tracer distribution (MLAA) have been introduced. So far, the validity of MLAA has mainly been demonstrated on simulated data. We have integrated the MLAA algorithm [2] into the THOR reconstruction []. An evaluation of MLAA was performed using both phantom and patient data acquired with the Ingenuity PET/MR.

  16. Evaluating and Improving Automatic Sleep Spindle Detection by Using Multi-Objective Evolutionary Algorithms

    Directory of Open Access Journals (Sweden)

    Min-Yin Liu

    2017-05-01

    Full Text Available Sleep spindles are brief bursts of brain activity in the sigma frequency range (11–16 Hz) measured by electroencephalography (EEG) mostly during non-rapid eye movement (NREM) stage 2 sleep. These oscillations are of great biological and clinical interest because they potentially play an important role in identifying and characterizing the processes of various neurological disorders. Conventionally, sleep spindles are identified by expert sleep clinicians via visual inspection of EEG signals. The process is laborious and the results are inconsistent among different experts. To resolve the problem, numerous computerized methods have been developed to automate the process of sleep spindle identification. Still, the performance of these automated sleep spindle detection methods varies inconsistently from study to study. There are two reasons: (1) the lack of common benchmark databases, and (2) the lack of commonly accepted evaluation metrics. In this study, we focus on tackling the second problem by proposing to evaluate the performance of a spindle detector in a multi-objective optimization context and hypothesize that using the resultant Pareto fronts for deriving evaluation metrics will improve automatic sleep spindle detection. We use a popular multi-objective evolutionary algorithm (MOEA), the Strength Pareto Evolutionary Algorithm (SPEA2), to optimize six existing frequency-based sleep spindle detection algorithms. They include three Fourier, one continuous wavelet transform (CWT), and two Hilbert-Huang transform (HHT) based algorithms. We also explore three hybrid approaches. Trained and tested on the open-access DREAMS and MASS databases, two new hybrid methods of combining Fourier with HHT algorithms show significant performance improvement, with F1-scores of 0.726–0.737.
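
    The Pareto-front idea used above for deriving evaluation metrics can be illustrated with a tiny non-dominated filter over (sensitivity, precision) operating points, together with the F1-score used for comparison. The candidate points are invented, and the full SPEA2 machinery is not reproduced.

```python
def pareto_front(points):
    """Return the non-dominated subset of (sensitivity, precision) pairs: a point
    is dominated if another point is at least as good in both objectives and
    strictly better in at least one."""
    front = []
    for p in points:
        dominated = any(q[0] >= p[0] and q[1] >= p[1] and q != p for q in points)
        if not dominated:
            front.append(p)
    return sorted(front)

def f1(sensitivity, precision):
    """F1-score used to compare spindle detectors on the resulting front."""
    return 2 * sensitivity * precision / (sensitivity + precision)

if __name__ == "__main__":
    # Invented detector operating points (sensitivity, precision).
    candidates = [(0.60, 0.80), (0.70, 0.75), (0.65, 0.70), (0.80, 0.60), (0.55, 0.85)]
    front = pareto_front(candidates)
    print(front, [round(f1(*p), 3) for p in front])
```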

  17. Computed radiography systems performance evaluation

    International Nuclear Information System (INIS)

    Xavier, Clarice C.; Nersissian, Denise Y.; Furquim, Tania A.C.

    2009-01-01

    The performance of a computed radiography system was evaluated according to AAPM Report No. 93. The evaluation tests proposed by the publication were performed, and the following non-conformities were found: imaging plate (IP) dark noise, which compromises the clinical image acquired using the IP; an uncalibrated exposure indicator, which can cause underexposure of the IP; non-linearity of the system response, which causes overexposure; a resolution limit below that declared by the manufacturer and uncalibrated erasure thoroughness, both impairing the visualization of structures; a Moire pattern visible in the grid response; and an IP throughput above that specified by the manufacturer. These non-conformities indicate that a lack of calibration of digital imaging systems can cause an increase in dose so that image problems can be solved. (author)

  18. Clinical implementation and evaluation of the Acuros dose calculation algorithm.

    Science.gov (United States)

    Yan, Chenyu; Combine, Anthony G; Bednarz, Greg; Lalonde, Ronald J; Hu, Bin; Dickens, Kathy; Wynn, Raymond; Pavord, Daniel C; Saiful Huq, M

    2017-09-01

    The main aim of this study is to validate the Acuros XB dose calculation algorithm for a Varian Clinac iX linac in our clinics, and subsequently compare it with the widely used AAA algorithm. The source models for both Acuros XB and AAA were configured by importing the same measured beam data into the Eclipse treatment planning system. Both algorithms were validated by comparing calculated dose with measured dose on a homogeneous water phantom for field sizes ranging from 6 cm × 6 cm to 40 cm × 40 cm. Central axis and off-axis points with different depths were chosen for the comparison. In addition, the accuracy of Acuros was evaluated for wedge fields with wedge angles from 15 to 60°. Similarly, variable field sizes for an inhomogeneous phantom were chosen to validate the Acuros algorithm. In addition, doses calculated by Acuros and AAA at the center of lung-equivalent tissue from three different VMAT plans were compared to the ion chamber measured doses in the QUASAR phantom, and the dose distributions calculated by the two algorithms and their differences on patients were compared. Computation time on VMAT plans was also evaluated for Acuros and AAA. Differences between dose-to-water (calculated by AAA and Acuros XB) and dose-to-medium (calculated by Acuros XB) on patient plans were compared and evaluated. For open 6 MV photon beams on the homogeneous water phantom, both Acuros XB and AAA calculations were within 1% of measurements. For 23 MV photon beams, the calculated doses were within 1.5% of measured doses for Acuros XB and 2% for AAA. Testing on the inhomogeneous phantom demonstrated that AAA overestimated doses by up to 8.96% at a point close to the lung/solid water interface, while Acuros XB reduced that to 1.64%. The test on the QUASAR phantom showed that Acuros achieved better agreement in lung-equivalent tissue while AAA underestimated dose for all VMAT plans by up to 2.7%. Acuros XB computation time was about three times faster than AAA for VMAT plans, and

  19. ON CONSTRUCTION OF A RELIABLE GROUND TRUTH FOR EVALUATION OF VISUAL SLAM ALGORITHMS

    Directory of Open Access Journals (Sweden)

    Jan Bayer

    2016-11-01

    Full Text Available In this work we address the problem of localization accuracy evaluation for visual-based Simultaneous Localization and Mapping (SLAM) techniques. Quantitative evaluation of SLAM algorithm performance is usually done using the established metrics of Relative pose error and Absolute trajectory error, which require a precise and reliable ground truth. Such a ground truth is usually hard to obtain, since it requires an expensive external localization system. In this work we propose to use the SLAM algorithm itself to construct a reliable ground truth by offline frame-by-frame processing. The generated ground truth is suitable for evaluation of different SLAM systems, as well as for tuning the parametrization of the on-line SLAM. The presented practical experimental results indicate the feasibility of the proposed approach.
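
    As a minimal illustration of the Absolute trajectory error metric mentioned above, the sketch below computes ATE as the RMSE between time-aligned estimated and ground-truth positions after removing the constant offset between frames; the trajectories, the 2-D setting and the simple alignment are illustrative assumptions, not the authors' evaluation code.

    ```python
    import numpy as np

    def absolute_trajectory_error(estimate, ground_truth):
        """RMSE of translational differences between time-aligned poses.

        estimate, ground_truth: (N, 2) arrays of x, y positions captured at
        the same timestamps (a simplifying assumption for this sketch).
        """
        est = np.asarray(estimate, dtype=float)
        gt = np.asarray(ground_truth, dtype=float)
        # Remove the constant offset between the two coordinate frames
        # (a full evaluation would also align rotation, e.g. with Umeyama).
        est = est - est.mean(axis=0) + gt.mean(axis=0)
        errors = np.linalg.norm(est - gt, axis=1)
        return np.sqrt(np.mean(errors ** 2))

    if __name__ == "__main__":
        gt = np.array([[0, 0], [1, 0], [2, 0.1], [3, 0.2]], dtype=float)
        est = gt + np.random.default_rng(0).normal(scale=0.05, size=gt.shape)
        print(f"ATE (RMSE): {absolute_trajectory_error(est, gt):.3f} m")
    ```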

  20. Computational performance of a projection and rescaling algorithm

    OpenAIRE

    Pena, Javier; Soheili, Negar

    2018-01-01

    This paper documents a computational implementation of a {\em projection and rescaling algorithm} for finding most interior solutions to the pair of feasibility problems \[ \text{find} \; x\in L\cap\mathbb{R}^n_{+} \quad \text{ and } \quad \text{find} \; \hat x\in L^\perp\cap\mathbb{R}^n_{+}, \] where $L$ denotes a linear subspace in $\mathbb{R}^n$ and $L^\perp$ denotes its orthogonal complement. The projection and rescaling algorithm is a recently developed method that combines a {\...

  1. Testing the performance of empirical remote sensing algorithms in the Baltic Sea waters with modelled and in situ reflectance data

    Directory of Open Access Journals (Sweden)

    Martin Ligi

    2017-01-01

    Full Text Available Remote sensing studies published up to now show that the performance of empirical (band-ratio type) algorithms in different parts of the Baltic Sea is highly variable. The best performing algorithms differ between the regions of the Baltic Sea. Moreover, there is indication that the algorithms have to be seasonal, as the optical properties of the phytoplankton assemblages dominating in spring and summer are different. We modelled 15,600 reflectance spectra using the HydroLight radiative transfer model to test 58 previously published empirical algorithms. 7200 of the spectra were modelled using specific inherent optical properties (SIOPs) of the open parts of the Baltic Sea in summer and 8400 with SIOPs of the spring season. The concentration ranges of chlorophyll-a, coloured dissolved organic matter (CDOM) and suspended matter used in the model simulations were based on the actually measured values available in the literature. For each optically active constituent we added one concentration below the actually measured minimum and one concentration above the actually measured maximum value in order to test the performance of the algorithms over a wider range. 77 in situ reflectance spectra from rocky (Sweden) and sandy (Estonia, Latvia) coastal areas were used to evaluate the performance of the algorithms also in coastal waters. Seasonal differences in algorithm performance were confirmed, but we also found algorithms that can be used in both spring and summer conditions. The algorithms that use bands available on OLCI, launched in February 2016, are highlighted, as this sensor will be available for Baltic Sea monitoring for coming decades.
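
    For readers unfamiliar with the band-ratio family tested here, such algorithms typically map a ratio of reflectances in two bands to chlorophyll-a through an empirical polynomial; the sketch below shows only the general form, with made-up coefficients and bands rather than any of the 58 published algorithms.

    ```python
    import numpy as np

    def band_ratio_chla(r_blue, r_green, coeffs=(0.3, -2.5, 1.0)):
        """Empirical band-ratio estimate of chlorophyll-a (mg m^-3).

        r_blue, r_green: remote-sensing reflectances at a blue and a green
        band (e.g. two OLCI bands); coeffs are illustrative polynomial
        coefficients, not values from any published algorithm.
        """
        x = np.log10(np.asarray(r_blue) / np.asarray(r_green))
        a0, a1, a2 = coeffs
        return 10.0 ** (a0 + a1 * x + a2 * x ** 2)

    # Example: a higher blue/green ratio gives a lower chlorophyll estimate
    print(band_ratio_chla(0.004, 0.006))
    print(band_ratio_chla(0.008, 0.006))
    ```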

  2. Evaluating acoustic speaker normalization algorithms: evidence from longitudinal child data.

    Science.gov (United States)

    Kohn, Mary Elizabeth; Farrington, Charlie

    2012-03-01

    Speaker vowel formant normalization, a technique that controls for variation introduced by physical differences between speakers, is necessary in variationist studies to compare speakers of different ages, genders, and physiological makeup in order to understand non-physiological variation patterns within populations. Many algorithms have been established to reduce variation introduced into vocalic data from physiological sources. The lack of real-time studies tracking the effectiveness of these normalization algorithms from childhood through adolescence inhibits exploration of child participation in vowel shifts. This analysis compares normalization techniques applied to data collected from ten African American children across five time points. Linear regressions compare the reduction in variation attributable to age and gender for each speaker for the vowels BEET, BAT, BOT, BUT, and BOAR. A normalization technique is successful if it maintains variation attributable to a reference sociolinguistic variable, while reducing variation attributable to age. Results indicate that normalization techniques which rely on both a measure of central tendency and range of the vowel space perform best at reducing variation attributable to age, although some variation attributable to age persists after normalization for some sections of the vowel space. © 2012 Acoustical Society of America
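
    A common member of the normalization family compared here relies on a speaker-wise measure of central tendency and spread, i.e. z-scoring the formants within each speaker (Lobanov-style); the toy sketch below illustrates that idea and is not the specific procedure or data of the study.

    ```python
    from collections import defaultdict

    def lobanov_normalize(tokens):
        """Z-score F1/F2 within each speaker (Lobanov-style normalization).

        tokens: list of dicts with keys 'speaker', 'F1', 'F2' (Hz).
        Adds 'F1_norm'/'F2_norm' fields and returns the tokens.
        """
        by_speaker = defaultdict(list)
        for t in tokens:
            by_speaker[t["speaker"]].append(t)

        def stats(values):
            mean = sum(values) / len(values)
            var = sum((v - mean) ** 2 for v in values) / len(values)
            return mean, (var ** 0.5 or 1.0)   # avoid dividing by zero

        out = []
        for speaker, toks in by_speaker.items():
            for dim in ("F1", "F2"):
                mean, sd = stats([t[dim] for t in toks])
                for t in toks:
                    t[dim + "_norm"] = (t[dim] - mean) / sd
            out.extend(toks)
        return out

    tokens = [
        {"speaker": "child_1", "F1": 850, "F2": 2400},
        {"speaker": "child_1", "F1": 500, "F2": 1400},
        {"speaker": "child_2", "F1": 700, "F2": 2100},
        {"speaker": "child_2", "F1": 420, "F2": 1200},
    ]
    for t in lobanov_normalize(tokens):
        print(t["speaker"], round(t["F1_norm"], 2), round(t["F2_norm"], 2))
    ```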

  3. Evaluation of clinical image processing algorithms used in digital mammography.

    Science.gov (United States)

    Zanca, Federica; Jacobs, Jurgen; Van Ongeval, Chantal; Claus, Filip; Celis, Valerie; Geniets, Catherine; Provost, Veerle; Pauwels, Herman; Marchal, Guy; Bosmans, Hilde

    2009-03-01

    Screening is the only proven approach to reduce the mortality of breast cancer, but significant numbers of breast cancers remain undetected even when all quality assurance guidelines are implemented. With the increasing adoption of digital mammography systems, image processing may be a key factor in the imaging chain. Although to our knowledge statistically significant effects of manufacturer-recommended image processing algorithms have not been previously demonstrated, the subjective experience of our radiologists, that the apparent image quality can vary considerably between different algorithms, motivated this study. This article addresses the impact of five such algorithms on the detection of clusters of microcalcifications. A database of unprocessed (raw) images of 200 normal digital mammograms, acquired with the Siemens Novation DR, was collected retrospectively. Realistic simulated microcalcification clusters were inserted in half of the unprocessed images. All unprocessed images were subsequently processed with five manufacturer-recommended image processing algorithms (Agfa Musica 1, IMS Raffaello Mammo 1.2, Sectra Mamea AB Sigmoid, Siemens OPVIEW v2, and Siemens OPVIEW v1). Four breast imaging radiologists were asked to locate and score the clusters in each image on a five-point rating scale. The free-response data were analyzed by the jackknife free-response receiver operating characteristic (JAFROC) method and, for comparison, also with the receiver operating characteristic (ROC) method. JAFROC analysis revealed highly significant differences between the image processing algorithms (F = 8.51, p < 0.0001), suggesting that image processing strongly impacts the detectability of clusters. Siemens OPVIEW2 and Siemens OPVIEW1 yielded the highest and lowest performances, respectively. ROC analysis of the data also revealed significant differences between the processing algorithms but at lower significance (F = 3.47, p = 0.0305) than JAFROC. Both statistical analysis methods revealed that the

  4. Comparison between beamforming and super resolution imaging algorithms for non-destructive evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Fan, Chengguang [College of Mechatronic Engineering and Automation, National University of Defense Technology, Changsha 410073, PR China and Department of Mechanical Engineering, University of Bristol, Queen' s Building, University Walk, Bristol BS8 1TR (United Kingdom); Drinkwater, Bruce W. [Department of Mechanical Engineering, University of Bristol, Queen' s Building, University Walk, Bristol BS8 1TR (United Kingdom)

    2014-02-18

    In this paper the performance of total focusing method is compared with the widely used time-reversal MUSIC super resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded.
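
    The total focusing method referred to in this comparison is, at its core, a delay-and-sum of the full matrix of transmit–receive signals evaluated at every image pixel; the following sketch illustrates that summation on a synthetic full matrix capture and is not the authors' implementation (geometry, sampling and wave speed are placeholder values).

    ```python
    import numpy as np

    def tfm_image(fmc, elem_x, grid_x, grid_z, c, fs):
        """Total Focusing Method: delay-and-sum over the full matrix capture.

        fmc:     array (n_tx, n_rx, n_samples) of time-domain signals
        elem_x:  array (n_elem,) of element x positions (m), elements at z = 0
        grid_x, grid_z: 1-D arrays defining the image grid (m)
        c:       wave speed (m/s), fs: sampling frequency (Hz)
        """
        n_tx, n_rx, n_samp = fmc.shape
        image = np.zeros((grid_z.size, grid_x.size))
        for iz, z in enumerate(grid_z):
            for ix, x in enumerate(grid_x):
                d = np.hypot(elem_x - x, z)          # element-to-pixel distances
                acc = 0.0
                for tx in range(n_tx):
                    for rx in range(n_rx):
                        t = (d[tx] + d[rx]) / c      # round-trip delay
                        s = int(round(t * fs))       # nearest sample
                        if s < n_samp:
                            acc += fmc[tx, rx, s]
                # a full implementation would sum the analytic-signal envelope
                image[iz, ix] = abs(acc)
        return image

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        fmc = rng.normal(size=(4, 4, 2000))          # synthetic stand-in data
        img = tfm_image(fmc, np.linspace(0, 3e-3, 4),
                        np.linspace(-5e-3, 5e-3, 21),
                        np.linspace(1e-3, 2e-2, 21), c=5900.0, fs=25e6)
        print(img.shape)
    ```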

  5. Comparison between beamforming and super resolution imaging algorithms for non-destructive evaluation

    International Nuclear Information System (INIS)

    Fan, Chengguang; Drinkwater, Bruce W.

    2014-01-01

    In this paper the performance of total focusing method is compared with the widely used time-reversal MUSIC super resolution technique. The algorithms are tested with simulated and experimental ultrasonic array data, each containing different noise levels. The simulated time domain signals allow the effects of array geometry, frequency, scatterer location, scatterer size, scatterer separation and random noise to be carefully controlled. The performance of the imaging algorithms is evaluated in terms of resolution and sensitivity to random noise. It is shown that for the low noise situation, time-reversal MUSIC provides enhanced lateral resolution when compared to the total focusing method. However, for higher noise levels, the total focusing method shows robustness, whilst the performance of time-reversal MUSIC is significantly degraded

  6. EVALUATING THE ACCURACY OF DEM GENERATION ALGORITHMS FROM UAV IMAGERY

    Directory of Open Access Journals (Sweden)

    J. J. Ruiz

    2013-08-01

    Full Text Available In this work we evaluated how the use of different positioning systems affects the accuracy of Digital Elevation Models (DEMs) generated from aerial imagery obtained with Unmanned Aerial Vehicles (UAVs). In this domain, state-of-the-art DEM generation algorithms suffer from the typical errors of GPS/INS devices in the position measurements associated with each picture obtained. The deviations of these measurements from real-world positions are on the order of meters. The experiments have been carried out using a small quadrotor in the indoor testbed at the Center for Advanced Aerospace Technologies (CATEC). This testbed houses a system that is able to track small markers mounted on the UAV and along the scenario with millimeter precision. This provides very precise position measurements, to which we can add random noise to simulate errors in different GPS receivers. The results showed that final DEM accuracy clearly depends on the positioning information.

  7. Collaborative en-route and slot allocation algorithm based on fuzzy comprehensive evaluation

    Science.gov (United States)

    Yang, Shangwen; Guo, Baohua; Xiao, Xuefei; Gao, Haichao

    2018-01-01

    To allocate en-routes and slots to flights with collaborative decision making, a collaborative en-route and slot allocation algorithm based on fuzzy comprehensive evaluation was proposed. The evaluation indexes include flight delay costs, delay time and the number of turning points. The analytic hierarchy process is applied to determine the index weights. A remark set is established for the current two flights that have not yet obtained an en-route and slot in the flight schedule. Then, fuzzy comprehensive evaluation is performed, and the en-route and slot for the current two flights are determined. The next flights that have not yet obtained an en-route and a slot are then selected, and fuzzy comprehensive evaluation is performed until all flights have obtained their en-routes and slots. MATLAB R2007b was used for a numerical test based on simulated data for a civil en-route. The test results show that, compared with the traditional strategy of first come first served, the algorithm achieves better results. The effectiveness of the algorithm was verified.
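
    Fuzzy comprehensive evaluation of the kind described above composes an AHP-derived weight vector with a membership matrix defined over the remark set and ranks the candidates by the resulting score; the toy sketch below shows that composition step for two hypothetical flights, with all weights and memberships invented for illustration.

    ```python
    import numpy as np

    # Remark set, e.g. {"good", "fair", "poor"}; membership matrix rows are the
    # indexes (delay cost, delay time, number of turning points) and columns
    # their membership degrees in each remark. All values are invented.
    weights = np.array([0.5, 0.3, 0.2])            # assumed AHP-derived weights

    def fuzzy_score(membership):
        """Weighted fuzzy composition, reduced to a scalar score for ranking."""
        b = weights @ membership                   # composite membership vector
        b = b / b.sum()                            # normalise
        remark_values = np.array([1.0, 0.6, 0.2])  # numeric value of each remark
        return float(b @ remark_values)

    flight_a = np.array([[0.6, 0.3, 0.1],
                         [0.5, 0.4, 0.1],
                         [0.2, 0.5, 0.3]])
    flight_b = np.array([[0.3, 0.4, 0.3],
                         [0.6, 0.3, 0.1],
                         [0.4, 0.4, 0.2]])

    scores = {"A": fuzzy_score(flight_a), "B": fuzzy_score(flight_b)}
    best = max(scores, key=scores.get)
    print(scores, "-> assign en-route/slot to flight", best)
    ```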

  8. Evaluation of an electron Monte Carlo dose calculation algorithm for treatment planning.

    Science.gov (United States)

    Chamberland, Eve; Beaulieu, Luc; Lachance, Bernard

    2015-05-08

    The purpose of this study is to evaluate the accuracy of the electron Monte Carlo (eMC) dose calculation algorithm included in a commercial treatment planning system and compare its performance against an electron pencil beam algorithm. Several tests were performed to explore the system's behavior in simple geometries and in configurations encountered in clinical practice. The first series of tests were executed in a homogeneous water phantom, where experimental measurements and eMC-calculated dose distributions were compared for various combinations of energy and applicator. More specifically, we compared beam profiles and depth-dose curves at different source-to-surface distances (SSDs) and gantry angles, by using dose difference and distance to agreement. We also compared output factors, studied the effects of the algorithm input parameters (the random number generator seed and the calculation grid size), and performed a calculation time evaluation. Three different inhomogeneous solid phantoms were built, using high- and low-density material inserts, to clinically simulate relevant heterogeneity conditions: a small air cylinder within a homogeneous phantom, a lung phantom, and a chest wall phantom. We also used an anthropomorphic phantom to perform comparison of eMC calculations to measurements. Finally, we proceeded with an evaluation of the eMC algorithm on a clinical case of nose cancer. In all mentioned cases, measurements, carried out by means of XV-2 films, radiographic films or EBT2 Gafchromic films, were used to compare eMC calculations with dose distributions obtained from an electron pencil beam algorithm. eMC calculations in the water phantom were accurate. Discrepancies for depth-dose curves and beam profiles were under 2.5% and 2 mm. Dose calculations with eMC for the small air cylinder and the lung phantom agreed within 2% and 4%, respectively. eMC calculations for the chest wall phantom and the anthropomorphic phantom also

  9. Performance assessment of electric power generations using an adaptive neural network algorithm and fuzzy DEA

    Energy Technology Data Exchange (ETDEWEB)

    Javaheri, Zahra

    2010-09-15

    Modeling, evaluating and analyzing the performance of Iranian thermal power plants is the main goal of this study, which is based on multivariate methods of analysis. These methods include fuzzy DEA and an adaptive neural network algorithm. First, indicators are determined; then data are collected; next, ranking and efficiency values are obtained by fuzzy DEA. The case study covers thermal power plants. Since the investment required to establish a power plant is very high, power plant maintenance is expensive, and the use of fossil fuels affects the environment, optimal production from the current power plants is important.

  10. A synthetic dataset for evaluating soft and hard fusion algorithms

    Science.gov (United States)

    Graham, Jacob L.; Hall, David L.; Rimland, Jeffrey

    2011-06-01

    There is an emerging demand for the development of data fusion techniques and algorithms that are capable of combining conventional "hard" sensor inputs such as video, radar, and multispectral sensor data with "soft" data including textual situation reports, open-source web information, and "hard/soft" data such as image or video data that includes human-generated annotations. New techniques that assist in sense-making over a wide range of vastly heterogeneous sources are critical to improving tactical situational awareness in counterinsurgency (COIN) and other asymmetric warfare situations. A major challenge in this area is the lack of realistic datasets available for test and evaluation of such algorithms. While "soft" message sets exist, they tend to be of limited use for data fusion applications due to the lack of critical message pedigree and other metadata. They also lack corresponding hard sensor data that presents reasonable "fusion opportunities" to evaluate the ability to make connections and inferences that span the soft and hard data sets. This paper outlines the design methodologies, content, and some potential use cases of a COIN-based synthetic soft and hard dataset created under a United States Multi-disciplinary University Research Initiative (MURI) program funded by the U.S. Army Research Office (ARO). The dataset includes realistic synthetic reports from a variety of sources, corresponding synthetic hard data, and an extensive supporting database that maintains "ground truth" through logical grouping of related data into "vignettes." The supporting database also maintains the pedigree of messages and other critical metadata.

  11. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  12. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ..... Explicit comparisons are made in line (1) where maximum and minimum is ... It can be shown that the function T(n) = 3/2n -2 is the solution to the above ...

  13. Evaluation of ultrasonic array imaging algorithms for inspection of a coarse grained material

    Science.gov (United States)

    Van Pamel, A.; Lowe, M. J. S.; Brett, C. R.

    2014-02-01

    Improving the ultrasound inspection capability for coarse grain metals remains of longstanding interest to industry and the NDE research community and is expected to become increasingly important for next generation power plants. A test sample of coarse grained Inconel 625, which is representative of future power plant components, has been manufactured to test the detectability of different inspection techniques. Conventional ultrasonic A, B, and C-scans showed the sample to be extraordinarily difficult to inspect due to its scattering behaviour. However, in recent years, array probes and Full Matrix Capture (FMC) imaging algorithms, which extract the maximum amount of information possible, have unlocked exciting possibilities for improvements. This article proposes a robust methodology to evaluate the detection performance of imaging algorithms, applying this to three FMC imaging algorithms: Total Focusing Method (TFM), Phase Coherent Imaging (PCI), and Decomposition of the Time Reversal Operator with Multiple Scattering (DORT MSF). The methodology considers the statistics of detection, presenting the detection performance as Probability of Detection (POD) and Probability of False Alarm (PFA). The data is captured in pulse-echo mode using 64 element array probes at centre frequencies of 1 MHz and 5 MHz. All three algorithms are shown to perform very similarly when comparing their flaw detection capabilities on this particular case.
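
    Under a simple thresholding assumption, the POD/PFA statistics used by such a methodology reduce to counting inspections in which the imaged amplitude exceeds a threshold over the known flaw location (a detection) or elsewhere (a false alarm); the sketch below computes both rates on synthetic images and is only a schematic stand-in for the paper's methodology.

    ```python
    import numpy as np

    def pod_pfa(images, flaw_masks, threshold):
        """Probability of Detection / of False Alarm for thresholded images.

        images:     list of 2-D amplitude maps (one per inspection)
        flaw_masks: list of boolean masks marking the true flaw region
        threshold:  detection threshold on image amplitude
        """
        detections, false_alarms = 0, 0
        for img, mask in zip(images, flaw_masks):
            above = img >= threshold
            if np.any(above & mask):       # any pixel over the flaw fires
                detections += 1
            if np.any(above & ~mask):      # any pixel off the flaw fires
                false_alarms += 1
        n = len(images)
        return detections / n, false_alarms / n

    rng = np.random.default_rng(1)
    imgs, masks = [], []
    for _ in range(200):
        img = rng.normal(size=(32, 32))              # synthetic grain noise
        mask = np.zeros((32, 32), dtype=bool)
        mask[14:18, 14:18] = True
        img[mask] += 2.0                             # synthetic flaw response
        imgs.append(img)
        masks.append(mask)

    print(pod_pfa(imgs, masks, threshold=3.5))
    ```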

  14. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    Energy Technology Data Exchange (ETDEWEB)

    Madduri, Kamesh; Ediger, David; Jiang, Karl; Bader, David A.; Chavarria-Miranda, Daniel

    2009-02-15

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
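
    Betweenness centrality itself measures the fraction of shortest paths that pass through each vertex; a serial reference computation (Brandes' algorithm, as implemented in NetworkX) and its sampled approximation are sketched below purely to show what the parallel kernel computes. This is not the lock-free algorithm of the paper, and the small test graph is illustrative only.

    ```python
    import networkx as nx

    # Small-world test graph standing in for the massive networks of the paper
    G = nx.watts_strogatz_graph(n=1000, k=6, p=0.1, seed=42)

    # Exact betweenness centrality (Brandes' serial algorithm)
    bc = nx.betweenness_centrality(G, normalized=True)

    # Approximate variant: sample a subset of source vertices, as is common
    # for massive graphs where the exact computation is too expensive
    bc_approx = nx.betweenness_centrality(G, k=100, seed=42, normalized=True)

    top = sorted(bc, key=bc.get, reverse=True)[:5]
    for v in top:
        print(v, round(bc[v], 4), round(bc_approx[v], 4))
    ```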

  15. Performance Comparison of Widely-Used Maximum Power Point Tracker Algorithms under Real Environmental Conditions

    Directory of Open Access Journals (Sweden)

    DURUSU, A.

    2014-08-01

    Full Text Available Maximum power point trackers (MPPTs) play an essential role in extracting power from photovoltaic (PV) panels, as they make the solar panels operate at the maximum power point (MPP) whatever the changes in environmental conditions are. For this reason, they take an important place in increasing PV system efficiency. MPPTs are driven by MPPT algorithms, and a number of MPPT algorithms have been proposed in the literature. Comparisons of MPPT algorithms in the literature are made using sun-simulator-based test systems under laboratory conditions for short durations. In this study, however, the performances of the four most commonly used MPPT algorithms are compared under real environmental conditions for longer periods. A dual identical experimental setup is designed to compare two of the considered MPPT algorithms in a synchronized manner. As a result of this study, the ranking among these algorithms is presented, and the results show that the Incremental Conductance (IC) algorithm gives the best performance.
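
    The Incremental Conductance algorithm singled out above adjusts the operating voltage according to how the incremental conductance dI/dV compares with the negative instantaneous conductance −I/V; one tracking step is sketched below, with the fixed step size and sign conventions as illustrative assumptions rather than the implementation used in the experiment.

    ```python
    def incremental_conductance_step(v, i, v_prev, i_prev, v_ref, step=0.5):
        """One update of the Incremental Conductance MPPT algorithm.

        v, i:           present PV voltage (V) and current (A)
        v_prev, i_prev: values at the previous sample
        v_ref:          present voltage reference handed to the converter
        step:           fixed perturbation step (V), an illustrative choice
        """
        dv, di = v - v_prev, i - i_prev
        if dv == 0:
            if di > 0:            # irradiance increased -> MPP moved up
                v_ref += step
            elif di < 0:
                v_ref -= step
        else:
            g_inc, g = di / dv, -i / v
            if g_inc > g:         # left of the MPP: dP/dV > 0
                v_ref += step
            elif g_inc < g:       # right of the MPP: dP/dV < 0
                v_ref -= step
            # g_inc == g: at the MPP, keep v_ref unchanged
        return v_ref

    # Example: operating left of the MPP, so the reference should increase
    print(incremental_conductance_step(v=28.0, i=7.9,
                                       v_prev=27.5, i_prev=7.95, v_ref=28.0))
    ```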

  16. Performability Modelling Tools, Evaluation Techniques and Applications

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.H.M.

    1990-01-01

    This thesis deals with three aspects of quantitative evaluation of fault-tolerant and distributed computer and communication systems: performability evaluation techniques, performability modelling tools, and performability modelling applications. Performability modelling is a relatively new

  17. Evaluate Data Center Network Performance

    DEFF Research Database (Denmark)

    Pilimon, Artur

    Data is transported through a data center network, which is usually built with layer 2 switches and layer 3 routers. The topology of the data center network is crucial for latency in the data communication to and from the data center and between servers in the data center. Tests can be conducted to measure latency and other performance parameters for different data center network topologies. It is however important that tests can be repeated and reproduced to have comparable information from the tests. There are, of course, many topologies that can be used for data center networks. At DTU Fotonik, Department of Photonics Engineering, scientists evaluate data center network topologies with an SDN-based (Software-Defined Networking) control framework measuring network performance – primarily latency. This can be used to plan data center scaling by testing how a new topology will function before changes are made.

  18. A high performance hardware implementation image encryption with AES algorithm

    Science.gov (United States)

    Farmani, Ali; Jafari, Mohamad; Miremadi, Seyed Sohrab

    2011-06-01

    This paper describes the implementation of a high-speed encryption algorithm with high throughput for encrypting images. We therefore select a highly secure symmetric key encryption algorithm, AES (Advanced Encryption Standard), and increase the speed and throughput using a four-stage pipeline technique, a control unit based on logic gates, optimal design of the multiplier blocks in the MixColumns phase, and simultaneous production of keys and rounds. This procedure makes AES suitable for fast image encryption. A 128-bit AES implementation on an Altera FPGA has been completed, with the following results: a throughput of 6 Gbps at 471 MHz. The encryption time for a test image of size 32×32 is 1.15 ms.
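
    Functionally, encrypting an image with AES amounts to running its raw pixel bytes through the cipher; the short software sketch below illustrates this with the Python cryptography package in CTR mode as a stand-in for the FPGA pipeline described above (key, nonce, mode and image size are arbitrary choices for the example).

    ```python
    import os
    from cryptography.hazmat.backends import default_backend
    from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

    # Illustrative 128-bit key and a 32x32 grayscale "image" of raw bytes
    key = os.urandom(16)
    nonce = os.urandom(16)
    image = os.urandom(32 * 32)            # stand-in for real pixel data

    encryptor = Cipher(algorithms.AES(key), modes.CTR(nonce),
                       backend=default_backend()).encryptor()
    cipher_image = encryptor.update(image) + encryptor.finalize()

    decryptor = Cipher(algorithms.AES(key), modes.CTR(nonce),
                       backend=default_backend()).decryptor()
    assert decryptor.update(cipher_image) + decryptor.finalize() == image
    print(len(cipher_image), "bytes encrypted")
    ```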

  19. Comparison of predictive performance of data mining algorithms in predicting body weight in Mengali rams of Pakistan

    Directory of Open Access Journals (Sweden)

    Senol Celik

    Full Text Available ABSTRACT The present study aimed at comparing the predictive performance of some data mining algorithms (CART, CHAID, Exhaustive CHAID, MARS, MLP, and RBF) on biometrical data of Mengali rams. To compare the predictive capability of the algorithms, the biometrical data regarding body (body length, withers height, and heart girth) and testicular (testicular length, scrotal length, and scrotal circumference) measurements of Mengali rams were evaluated, using goodness-of-fit criteria, for their ability to predict live body weight. In addition, age was considered as a continuous independent variable. In this context, the MARS data mining algorithm was used for the first time to predict body weight, in two forms: without (MARS_1) and with (MARS_2) interaction terms. The order of predictive accuracy of the algorithms was found to be CART > CHAID ≈ Exhaustive CHAID > MARS_2 > MARS_1 > RBF > MLP. Moreover, all tested algorithms provided strong predictive accuracy for estimating body weight. However, MARS is the only algorithm that generated a prediction equation for body weight. It is therefore hoped that these results may present a valuable contribution to predicting body weight, to describing the relationship between body weight and body and testicular measurements, and to revealing breed standards and conserving indigenous gene sources for Mengali sheep breeding, making more profitable and productive sheep production possible. The use of data mining algorithms is useful for revealing the relationship between body weight and testicular traits when describing breed standards of Mengali sheep.

  20. A Literature Review on Recommender Systems Algorithms, Techniques and Evaluations

    Directory of Open Access Journals (Sweden)

    Kasra Madadipouya

    2017-07-01

    Full Text Available One of the most crucial issues nowadays is to provide personalized services to each individual based on their preferences. To achieve this goal, a recommender system can be utilized as a tool to help users in the decision-making process by offering different items and options. Recommender systems are utilized to predict and recommend relevant items to end users, where an item could be anything such as a document, a location, a movie, an article or even a user (friend suggestion). The main objective of recommender systems is to suggest items which have great potential to be liked by users. In modern recommender systems, various methods are combined with the aim of extracting patterns from available datasets. The combination of different algorithms makes prediction more involved, since various parameters have to be taken into account when providing recommendations. Recommendations can be personalized or non-personalized. In the non-personalized type, the selection of items for a user is based on the number of times an item has been visited in the past by other users. In the personalized type, the main objective is to provide the best items to the user based on her taste and preferences. Although recommender systems have achieved significant improvements in many domains and provide better services for users, further research is still required to improve the accuracy of recommendations in many respects. In fact, currently available recommender systems are far from the ideal model of a recommender system. This paper reviews the state of the art in recommender system algorithms and techniques, which is necessary to identify gaps and areas for improvement. In addition, we provide possible solutions to overcome the shortcomings and known issues of recommender systems, and we discuss recommender system evaluation methods and metrics in detail.

  1. Algorithms evaluation for transformers differential protection; Avaliacao de algoritmos para protecao diferencial de transformadores

    Energy Technology Data Exchange (ETDEWEB)

    Piovesan, Luis Sergio

    1997-07-01

    The application of two algorithms is evaluated, one based on Fourier analysis and the other based on a rectangular transform technique built on Fourier analysis, for use in digital logic circuits (digital protection relays) for the differential protection of power transformers (ANSI 87T). The first chapter gives a brief introduction to electrical protection. The second chapter discusses the general problems of transformer protection, the development of digital technology and, in more detail, the differential protection associated with this technology. This chapter presents the particular aspects of transformer differential protection concerning sensitivity, inrush current situations and harmonic distortions caused by transformer core saturation, as well as the differential protection algorithms and their application in a specific relay design. In chapter three, a method that makes it possible to test the protection performance is developed. This work applies digital simulations using EMTP to generate current signals for transformer operation and fault conditions. A digital simulation using Matlab is used to simulate the protection. The EMTP-generated field signals are sent to the relay under test, furnishing data for normal operation, internal and external faults. The relay logic simulator in Matlab then processes these data, making it possible to verify and evaluate the algorithm behavior and performance. Chapter 4 shows the protection operation in simulations of several transformer operation and fault conditions. The last chapter presents a conclusion about the protection performance, a discussion of all the methods applied in this work and suggestions for further studies. (author)
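
    A Fourier-based differential element of the kind evaluated in this work estimates the fundamental (and second-harmonic) phasors of the winding currents from one cycle of samples and trips when the operating current exceeds a percentage-restrained threshold; the sketch below shows that chain on synthetic waveforms, with all settings and the simplified single-phase model being illustrative assumptions, not the thesis' algorithms.

    ```python
    import numpy as np

    F0 = 50.0                 # system frequency (Hz)
    FS = 1600.0               # sampling rate (Hz) -> 32 samples per cycle
    N = int(FS / F0)

    def phasor(samples, harmonic=1):
        """Full-cycle DFT phasor of the given harmonic from the last N samples."""
        n = np.arange(N)
        ref = np.exp(-2j * np.pi * harmonic * n / N)
        return 2.0 / N * np.sum(samples[-N:] * ref)

    def differential_trip(i_primary, i_secondary, pickup=0.3, slope=0.25,
                          second_harmonic_block=0.15):
        """Percentage-restrained differential element with 2nd-harmonic blocking."""
        i1, i2 = phasor(i_primary), phasor(i_secondary)
        i_op = abs(i1 + i2)                    # operating (differential) current
        i_res = 0.5 * (abs(i1) + abs(i2))      # restraint current
        h2 = abs(phasor(i_primary + i_secondary, harmonic=2))
        if i_op > 0 and h2 / max(i_op, 1e-9) > second_harmonic_block:
            return False                       # inrush signature: block tripping
        return i_op > pickup + slope * i_res

    t = np.arange(2 * N) / FS
    through_fault = (np.sin(2 * np.pi * F0 * t) * 5.0,     # enters one side...
                     -np.sin(2 * np.pi * F0 * t) * 5.0)    # ...leaves the other
    internal_fault = (np.sin(2 * np.pi * F0 * t) * 5.0,
                      np.sin(2 * np.pi * F0 * t) * 4.0)
    print(differential_trip(*through_fault), differential_trip(*internal_fault))
    ```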

  2. Evaluation of proposed degradation algorithms for multiburst environments

    International Nuclear Information System (INIS)

    Olness, D.U.; Warshawsky, A.S.

    1993-01-01

    This work is part of an ongoing effort of the Defense Nuclear Agency's Intermediate Dose Program to investigate the effects of intermediate radiation doses on combat unit performance. The objective of this study is to develop an improved technique for applying performance degradation factors to combat crews in simulated battles following multiple radiation doses on the tactical battlefield. A further objective of the study is to quantify differences in Janus results when crew performance factors, following multiple radiation doses, are obtained from the improved technique instead of from the technique used previously. In this paper, the authors describe and evaluate three methods previously identified for determining performance degradation from multiple exposures. They also present the observed quantitative differences in outcomes of conventional battles begun a few hours after multiple radiation exposures when alternate techniques for calculating combat crew performance degradation factors are included in the Janus combat simulation

  3. Evaluation of Algorithms for Photon Depth of Interaction Estimation for the TRIMAGE PET Component

    Science.gov (United States)

    Camarlinghi, Niccolò; Belcari, Nicola; Cerello, Piergiorgio; Pennazio, Francesco; Sportelli, Giancarlo; Zaccaro, Emanuele; Del Guerra, Alberto

    2016-02-01

    The TRIMAGE consortium aims to develop a multimodal PET/MR/EEG brain scanner dedicated to the early diagnosis of schizophrenia and other mental health disorders. The TRIMAGE PET component features a full ring made of 18 detectors, each one consisting of twelve 8×8 Silicon Photomultiplier (SiPM) tiles coupled to two segmented LYSO crystal matrices with staggered layers. The identification of the pixel where a photon interacted is performed on-line at the front-end level, thus allowing the FPGA board to emit fully digital event packets. This makes it possible to increase the effective bandwidth, but imposes restrictions on the complexity of the algorithms to be implemented. In this work, two algorithms whose implementation is feasible directly on an FPGA are presented and evaluated. The first algorithm is driven by physical considerations, while the other consists of a two-class linear Support Vector Machine (SVM). The validation of the algorithm performance is carried out using simulated data generated with the GAMOS Monte Carlo. The obtained results show that the achieved accuracy in layer identification is above 90% for both proposed approaches. The feasibility of tagging and rejecting events that underwent multiple interactions within the detector is also discussed.

  4. Plant operator performance evaluation system

    International Nuclear Information System (INIS)

    Ujita, Hiroshi; Fukuda, Mitsuko; Kubota, Ryuji.

    1989-01-01

    A plant operator performance evaluation system to analyze plant operation records during accident training and to identify and classify operator errors has been developed for the purpose of supporting realization of a training and education system for plant operators. A knowledge engineering technique was applied to the evaluation of operator behavior by both event-based and symptom-based procedures, in various situations including event transition due to multiple failures or operational errors. The system classifies the identified errors as to their single and double types based on Swain's error classification and the error levels reflecting Rasmussen's cognitive level, and it also evaluates the effect of errors on plant state and then classifies error influence, using 'knowledge for phenomena and operations', as represented by frames. It has additional functions for analysis of error statistics and knowledge acquisition support of 'knowledge for operations'. The system was applied to a training analysis for a scram event in a BWR plant, and its error analysis function was confirmed to be effective by operational experts. (author)

  5. Forecasting Ability But No Profitability: An Empirical Evaluation of Genetic Algorithm-optimised Technical Trading Rules

    OpenAIRE

    Pereira, Robert

    1999-01-01

    This paper evaluates the performance of several popular technical trading rules applied to the Australian share market. The optimal trading rule parameter values over the in-sample period of 4/1/82 to 31/12/89 are found using a genetic algorithm. These optimal rules are then evaluated in terms of their forecasting ability and economic profitability during the out-of-sample period from 2/1/90 to the 31/12/97. The results indicate that the optimal rules outperform the benchmark given by a risk-...

  6. Fast and Accurate Ground Truth Generation for Skew-Tolerance Evaluation of Page Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Okun Oleg

    2006-01-01

    Full Text Available Many image segmentation algorithms are known, but often there is an inherent obstacle in the unbiased evaluation of segmentation quality: the absence or lack of a common objective representation for segmentation results. Such a representation, known as the ground truth, is a description of what one should obtain as the result of ideal segmentation, independently of the segmentation algorithm used. The creation of ground truth is a laborious process and therefore any degree of automation is always welcome. Document image analysis is one of the areas where ground truths are employed. In this paper, we describe an automated tool called GROTTO intended to generate ground truths for skewed document images, which can be used for the performance evaluation of page segmentation algorithms. Some of these algorithms are claimed to be insensitive to skew (tilt of text lines). However, this fact is usually supported only by a visual comparison of what one obtains and what one should obtain, since ground truths are mostly available for upright images, that is, those without skew. As a result, the evaluation is both subjective, that is, prone to errors, and tedious. Our tool allows users to quickly and easily produce many sufficiently accurate ground truths that can be employed in practice and therefore it facilitates automatic performance evaluation. The main idea is to utilize the ground truths available for upright images and the concept of the representative square [9] in order to produce the ground truths for skewed images. The usefulness of our tool is demonstrated through a number of experiments with real document images of complex layout.

  7. Optimum Performance-Based Seismic Design Using a Hybrid Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    S. Talatahari

    2014-01-01

    Full Text Available A hybrid optimization method is presented for the optimum seismic design of steel frames considering four performance levels. These performance levels are considered in determining the optimum design of structures in order to reduce the structural cost. A pushover analysis of steel building frameworks subject to equivalent-static earthquake loading is utilized. The algorithm is based on the concepts of the charged system search, in which each agent is affected by the local and global best positions stored in the charged memory, considering the governing laws of electrical physics. Comparison of the results of the hybrid algorithm with those of other metaheuristic algorithms shows the efficiency of the hybrid algorithm.

  8. Evaluation of the algorithms for recovering reflectance from virtual digital camera response

    Directory of Open Access Journals (Sweden)

    Ana Gebejes

    2012-10-01

    Full Text Available In recent years many new methods for quality control in the graphic industry have been proposed. All of these methods have one thing in common – using a digital camera as a capturing device and an appropriate image processing method/algorithm to obtain the desired information. With the development of new, more accurate sensors, digital cameras became even more dominant and the use of cameras as measuring devices became more emphasized. The idea of using a camera as a spectrophotometer is interesting because this kind of measurement would be more economical, faster and widely available, and it would provide the possibility of capturing multiple colours with a single shot. This can be very useful for capturing colour targets for characterization of different properties of a print device. A lot of effort is put into enabling commercial colour CCD cameras (3 acquisition channels) to obtain enough information for reflectance recovery. Unfortunately, the RGB camera was not made with the idea of performing colour measurements but rather for producing an image that is visually pleasant for the observer. This somewhat complicates the task and calls for the development of different algorithms that estimate the reflectance information from the available RGB camera responses with the minimal possible error. In this paper three different reflectance estimation algorithms are evaluated (orthogonal projection, Wiener and optimized Wiener estimation), together with a method for reflectance approximation based on principal component analysis (PCA). The aim was to perform reflectance estimation pixel-wise and to analyze the performance of the reflectance estimation algorithms locally, at specific pixels in the image, and globally, on the whole image. The performance of each algorithm was evaluated visually and numerically by obtaining the pixel-wise colour difference and the pixel-wise difference of the estimated reflectance from the original values. It was concluded that the Wiener method gives the best reflectance estimation
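
    Wiener estimation, identified above as giving the best reflectance estimates, builds a linear operator from the covariance of a set of training reflectances and the camera model and applies it to the RGB response; the sketch below shows that construction on synthetic data, where the camera sensitivities, spectra and noise level are random placeholders rather than measured quantities.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    n_bands, n_train = 31, 200           # e.g. 400-700 nm in 10 nm steps (illustrative)

    # Placeholder training reflectances and RGB camera sensitivities
    R_train = rng.uniform(0.05, 0.95, size=(n_train, n_bands))
    S = rng.uniform(size=(3, n_bands))   # rows: R, G, B spectral sensitivities

    # Wiener estimation matrix: W = K_r S^T (S K_r S^T + K_n)^-1
    K_r = np.cov(R_train, rowvar=False)             # reflectance covariance
    K_n = 1e-4 * np.eye(3)                          # assumed noise covariance
    W = K_r @ S.T @ np.linalg.inv(S @ K_r @ S.T + K_n)

    # Estimate the reflectance of a new sample from its (noise-free) RGB response
    r_true = rng.uniform(0.05, 0.95, size=n_bands)
    r_hat = W @ (S @ r_true)
    print("RMS reflectance error:", np.sqrt(np.mean((r_hat - r_true) ** 2)))
    ```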

  9. Performance of fusion algorithms for computer-aided detection and classification of mines in very shallow water obtained from testing in navy Fleet Battle Exercise-Hotel 2000

    Science.gov (United States)

    Ciany, Charles M.; Zurawski, William; Kerfoot, Ian

    2001-10-01

    The performance of Computer Aided Detection/Computer Aided Classification (CAD/CAC) fusion algorithms on side-scan sonar images was evaluated using data taken at the Navy's Fleet Battle Exercise-Hotel held in Panama City, Florida, in August 2000. A 2-of-3 binary fusion algorithm is shown to provide robust performance. The algorithm accepts the classification decisions and associated contact locations from three different CAD/CAC algorithms, clusters the contacts based on Euclidean distance, and then declares a valid target when a clustered contact is declared by at least 2 of the 3 individual algorithms. This simple binary fusion provided a 96 percent probability of correct classification at a false alarm rate of 0.14 false alarms per image per side. This performance represented a 3.8:1 reduction in false alarms over the best performing single CAD/CAC algorithm, with no loss in probability of correct classification.
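
    A minimal sketch of the 2-of-3 binary fusion rule described above: contacts reported by the three CAD/CAC algorithms are clustered by Euclidean distance and a target is declared for any cluster containing detections from at least two algorithms; the clustering radius and the contact list are illustrative values, not those of the exercise data.

    ```python
    import math

    def fuse_2_of_3(contacts, radius=10.0, min_algorithms=2):
        """contacts: list of (algorithm_id, x, y); returns fused target positions."""
        clusters = []     # each cluster: member positions, contributing algorithms, centroid
        for alg, x, y in contacts:
            for c in clusters:
                cx, cy = c["centroid"]
                if math.hypot(x - cx, y - cy) <= radius:
                    c["members"].append((x, y))
                    c["algs"].add(alg)
                    n = len(c["members"])
                    c["centroid"] = (sum(p[0] for p in c["members"]) / n,
                                     sum(p[1] for p in c["members"]) / n)
                    break
            else:
                clusters.append({"members": [(x, y)], "algs": {alg}, "centroid": (x, y)})
        return [c["centroid"] for c in clusters if len(c["algs"]) >= min_algorithms]

    contacts = [("A", 100, 200), ("B", 103, 198), ("C", 400, 50),   # two algorithms agree
                ("B", 250, 250)]                                    # lone false alarm
    print(fuse_2_of_3(contacts))   # -> one fused target near (101.5, 199)
    ```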

  10. 48 CFR 436.604 - Performance evaluation.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 4 2010-10-01 2010-10-01 false Performance evaluation... Performance evaluation. Preparation of performance evaluation reports. (a) In addition to the requirements of FAR 36.604, performance evaluation reports shall be prepared for indefinite-delivery type contracts...

  11. Examining applying high performance genetic data feature selection and classification algorithms for colon cancer diagnosis.

    Science.gov (United States)

    Al-Rajab, Murad; Lu, Joan; Xu, Qiang

    2017-07-01

    This paper examines the accuracy and efficiency (time complexity) of high performance genetic data feature selection and classification algorithms for colon cancer diagnosis. The need for this research derives from the urgent and increasing need for accurate and efficient algorithms. Colon cancer is a leading cause of death worldwide, hence it is vitally important for the cancer tissues to be expertly identified and classified in a rapid and timely manner, to assure both a fast detection of the disease and to expedite the drug discovery process. In this research, a three-phase approach was proposed and implemented: Phases One and Two examined the feature selection algorithms and classification algorithms employed separately, and Phase Three examined the performance of the combination of these. It was found from Phase One that the Particle Swarm Optimization (PSO) algorithm performed best with the colon dataset as a feature selection (29 genes selected) and from Phase Two that the Support Vector Machine (SVM) algorithm outperformed other classifications, with an accuracy of almost 86%. It was also found from Phase Three that the combined use of PSO and SVM surpassed other algorithms in accuracy and performance, and was faster in terms of time analysis (94%). It is concluded that applying feature selection algorithms prior to classification algorithms results in better accuracy than when the latter are applied alone. This conclusion is important and significant to industry and society. Copyright © 2017 Elsevier B.V. All rights reserved.

  12. LEARNING ALGORITHM EFFECT ON MULTILAYER FEED FORWARD ARTIFICIAL NEURAL NETWORK PERFORMANCE IN IMAGE CODING

    Directory of Open Access Journals (Sweden)

    OMER MAHMOUD

    2007-08-01

    Full Text Available One of the essential factors that affect the performance of Artificial Neural Networks is the learning algorithm. The performance of Multilayer Feed Forward Artificial Neural Networks in image compression using different learning algorithms is examined in this paper. Based on Gradient Descent, Conjugate Gradient and Quasi-Newton techniques, three different error back-propagation algorithms have been developed for use in training two types of neural networks: a single-hidden-layer network and a three-hidden-layer network. The essence of this study is to investigate the most efficient and effective training methods for use in image compression and its subsequent applications. The obtained results show that the Quasi-Newton based algorithm has better performance as compared to the other two algorithms.

  13. Compositional Performability Evaluation for STATEMATE

    NARCIS (Netherlands)

    Böde, Eckard; Herbstritt, Marc; Hermanns, Holger; Johr, Sven; Peikenkamp, Thomas; Pulungan, Reza; Wimmer, Ralf; Becker, Bernd

    2006-01-01

    This paper reports on our efforts to link an industrial state-of-the-art modelling tool to academic state-of-the-art analysis algorithms. In a nutshell, we enable timed reachability analysis of uniform continuous-time Markov decision processes, which are generated from STATEMATE models. We give a

  14. Evaluation of a Cross Layer Scheduling Algorithm for LTE Downlink

    DEFF Research Database (Denmark)

    Popovska Avramova, Andrijana; Yan, Ying; Dittmann, Lars

    2013-01-01

    the resource utilization. The algorithm makes decisions based on the channel conditions, the size of the transmission buffers and different quality of service demands. The simulation results show that the new algorithm improves the resource utilization and provides better guaranties for service quality....

  15. Development and evaluation of thermal model reduction algorithms for spacecraft

    Science.gov (United States)

    Deiml, Michael; Suderland, Martin; Reiss, Philipp; Czupalla, Markus

    2015-05-01

    This paper is concerned with the reduction of thermal models of spacecraft. The work presented here has been conducted in cooperation with the company OHB AG, formerly Kayser-Threde GmbH, and the Institute of Astronautics at Technische Universität München, with the goal of shortening and automating the time-consuming and manual process of thermal model reduction. The reduction of thermal models can be divided into the simplification of the geometry model for calculation of external heat flows and radiative couplings, and the reduction of the underlying mathematical model. For the simplification, a method has been developed which approximates the reduced geometry model with the help of an optimization algorithm. Different linear and nonlinear model reduction techniques have been evaluated for their applicability to reduction of the mathematical model. Compatibility with the thermal analysis tool ESATAN-TMS is of major concern here, as it restricts the useful application of these methods. Additional model reduction methods have been developed which take these constraints into account. The Matrix Reduction method allows the approximation of the differential equation to reference values exactly, except for numerical errors. The summation method enables a useful, applicable reduction of thermal models that can be used in industry. In this work a framework for model reduction of thermal models has been created, which can be used together with a newly developed graphical user interface for the reduction of thermal models in industry.

  16. Dynamic statistical optimization of GNSS radio occultation bending angles: advanced algorithm and performance analysis

    Science.gov (United States)

    Li, Y.; Kirchengast, G.; Scherllin-Pirscher, B.; Norman, R.; Yuan, Y. B.; Fritzer, J.; Schwaerz, M.; Zhang, K.

    2015-08-01

    We introduce a new dynamic statistical optimization algorithm to initialize ionosphere-corrected bending angles of Global Navigation Satellite System (GNSS)-based radio occultation (RO) measurements. The new algorithm estimates background and observation error covariance matrices with geographically varying uncertainty profiles and realistic global-mean correlation matrices. The error covariance matrices estimated by the new approach are more accurate and realistic than in simplified existing approaches and can therefore be used in statistical optimization to provide optimal bending angle profiles for high-altitude initialization of the subsequent Abel transform retrieval of refractivity. The new algorithm is evaluated against the existing Wegener Center Occultation Processing System version 5.6 (OPSv5.6) algorithm, using simulated data on two test days from January and July 2008 and real observed CHAllenging Minisatellite Payload (CHAMP) and Constellation Observing System for Meteorology, Ionosphere, and Climate (COSMIC) measurements from the complete months of January and July 2008. The following is achieved for the new method's performance compared to OPSv5.6: (1) significant reduction of random errors (standard deviations) of optimized bending angles down to about half of their size or more; (2) reduction of the systematic differences in optimized bending angles for simulated MetOp data; (3) improved retrieval of refractivity and temperature profiles; and (4) realistically estimated global-mean correlation matrices and realistic uncertainty fields for the background and observations. Overall the results indicate high suitability for employing the new dynamic approach in the processing of long-term RO data into a reference climate record, leading to well-characterized and high-quality atmospheric profiles over the entire stratosphere.
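
    Statistical optimization of bending angles combines the observed profile with a background profile, weighting each by the inverse of its error covariance; the few lines below show that optimal linear combination for a single profile with diagonal covariances, using synthetic numbers in place of the geographically varying uncertainty estimates of the new algorithm.

    ```python
    import numpy as np

    def statistically_optimized_profile(obs, bg, obs_var, bg_var):
        """Optimal combination x = bg + B (B + R)^-1 (obs - bg), per level.

        obs, bg:          observed and background bending-angle profiles
        obs_var, bg_var:  their error variances (diagonal covariances here)
        """
        R = np.diag(obs_var)                 # observation error covariance
        B = np.diag(bg_var)                  # background error covariance
        K = B @ np.linalg.inv(B + R)         # weight given to the observation
        return bg + K @ (obs - bg)

    levels = np.linspace(40, 80, 9)                      # impact heights (km)
    bg = 1e-5 * np.exp(-(levels - 40) / 7.0)             # synthetic background profile
    obs = bg * (1 + 0.1 * np.random.default_rng(2).normal(size=levels.size))
    obs_var = (0.2 * bg) ** 2                            # noisier observation at altitude
    bg_var = (0.05 * bg) ** 2
    print(statistically_optimized_profile(obs, bg, obs_var, bg_var))
    ```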

  17. A Comparative Evaluation of Unsupervised Anomaly Detection Algorithms for Multivariate Data

    Science.gov (United States)

    Goldstein, Markus; Uchida, Seiichi

    2016-01-01

    Anomaly detection is the process of identifying unexpected items or events in datasets, which differ from the norm. In contrast to standard classification tasks, anomaly detection is often applied on unlabeled data, taking only the internal structure of the dataset into account. This challenge is known as unsupervised anomaly detection and is addressed in many practical applications, for example in network intrusion detection, fraud detection as well as in the life science and medical domain. Dozens of algorithms have been proposed in this area, but unfortunately the research community still lacks a comparative universal evaluation as well as common publicly available datasets. These shortcomings are addressed in this study, where 19 different unsupervised anomaly detection algorithms are evaluated on 10 different datasets from multiple application domains. By publishing the source code and the datasets, this paper aims to be a new well-founded basis for unsupervised anomaly detection research. Additionally, this evaluation reveals the strengths and weaknesses of the different approaches for the first time. Besides the anomaly detection performance, the computational effort, the impact of parameter settings as well as the global/local anomaly detection behavior are outlined. As a conclusion, we give advice on algorithm selection for typical real-world tasks. PMID:27093601

  18. Using internal evaluation measures to validate the quality of diverse stream clustering algorithms

    NARCIS (Netherlands)

    Hassani, M.; Seidl, T.

    2017-01-01

    Measuring the quality of a clustering algorithm has shown to be as important as the algorithm itself. It is a crucial part of choosing the clustering algorithm that performs best for an input data. Streaming input data have many features that make them much more challenging than static ones. They

  19. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    International Nuclear Information System (INIS)

    Martin-Haugh, Stewart

    2014-01-01

    A description of the algorithms and the performance of the ATLAS Inner Detector trigger for LHC Run 1 are presented, as well as prospects for a redesign of the tracking algorithms in Run 2. The Inner Detector trigger algorithms are vital for many trigger signatures at ATLAS. The performance of the algorithms for electrons is presented. The ATLAS trigger software will be restructured from two software levels into a single stage which poses a big challenge for the trigger algorithms in terms of execution time and maintaining the physics performance. Expected future improvements in the timing and efficiencies of the Inner Detector triggers are discussed, utilising the planned merging of the current two stages of the ATLAS trigger.

  20. Performance analysis of a decoding algorithm for algebraic-geometry codes

    DEFF Research Database (Denmark)

    Høholdt, Tom; Jensen, Helge Elbrønd; Nielsen, Rasmus Refslund

    1999-01-01

    The fast decoding algorithm for one point algebraic-geometry codes of Sakata, Elbrond Jensen, and Hoholdt corrects all error patterns of weight less than half the Feng-Rao minimum distance. In this correspondence we analyze the performance of the algorithm for heavier error patterns. It turns out...

  1. Performance tests of the Kramers equation and boson algorithms for simulations of QCD

    International Nuclear Information System (INIS)

    Jansen, K.; Liu Chuan; Jegerlehner, B.

    1995-12-01

    We present a performance comparison of the Kramers equation and the boson algorithms for simulations of QCD with two flavors of dynamical Wilson fermions and gauge group SU(2). Results are obtained on 6³×12, 8³×12 and 16⁴ lattices. In both algorithms a number of optimizations are implemented. (orig.)

  2. Performance evaluation methodology for historical document image binarization.

    Science.gov (United States)

    Ntirogiannis, Konstantinos; Gatos, Basilis; Pratikakis, Ioannis

    2013-02-01

    Document image binarization is of great importance in the document image analysis and recognition pipeline since it affects further stages of the recognition process. The evaluation of a binarization method aids in studying its algorithmic behavior, as well as verifying its effectiveness, by providing qualitative and quantitative indication of its performance. This paper addresses a pixel-based binarization evaluation methodology for historical handwritten/machine-printed document images. In the proposed evaluation scheme, the recall and precision evaluation measures are properly modified using a weighting scheme that diminishes any potential evaluation bias. Additional performance metrics of the proposed evaluation scheme consist of the percentage rates of broken and missed text, false alarms, background noise, character enlargement, and merging. Several experiments conducted in comparison with other pixel-based evaluation measures demonstrate the validity of the proposed evaluation scheme.
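
    In pixel-based binarization evaluation, recall and precision are computed from the foreground pixels of the binarization result against a ground-truth image; the proposed methodology additionally applies a weighting scheme, but the unweighted baseline these measures start from is only a few lines, sketched below on toy arrays.

    ```python
    import numpy as np

    def pixel_recall_precision(result, ground_truth):
        """Unweighted pixel-based recall, precision and F-measure.

        result, ground_truth: boolean arrays where True marks text (foreground).
        """
        tp = np.sum(result & ground_truth)     # correctly detected text pixels
        fp = np.sum(result & ~ground_truth)    # background reported as text
        fn = np.sum(~result & ground_truth)    # missed text pixels
        recall = tp / (tp + fn) if tp + fn else 0.0
        precision = tp / (tp + fp) if tp + fp else 0.0
        f = (2 * recall * precision / (recall + precision)
             if recall + precision else 0.0)
        return recall, precision, f

    gt = np.zeros((5, 5), dtype=bool); gt[1:4, 1:4] = True     # ground-truth text
    res = np.zeros((5, 5), dtype=bool); res[1:4, 1:3] = True   # partially detected
    print(pixel_recall_precision(res, gt))
    ```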

  3. 48 CFR 2936.604 - Performance evaluation.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 7 2010-10-01 2010-10-01 false Performance evaluation... Performance evaluation. (a) The HCA must establish procedures to evaluate architect-engineer contractor... reports must be made using Standard Form 1421, Performance Evaluation (Architect-Engineer) as prescribed...

  4. 13 CFR 304.4 - Performance evaluations.

    Science.gov (United States)

    2010-01-01

    ... 13 Business Credit and Assistance 1 2010-01-01 2010-01-01 false Performance evaluations. 304.4... ECONOMIC DEVELOPMENT DISTRICTS § 304.4 Performance evaluations. (a) EDA shall evaluate the management... of at least one (1) other District Organization in the performance evaluation on a cost-reimbursement...

  5. Performance assessment of electric power generations using an adaptive neural network algorithm

    International Nuclear Information System (INIS)

    Azadeh, A.; Ghaderi, S.F.; Anvari, M.; Saberi, M.

    2007-01-01

    Efficiency frontier analysis has been an important approach to evaluating firms' performance in the private and public sectors. Many efficiency frontier analysis methods have been reported in the literature. However, the assumptions made for each of these methods are restrictive, and each methodology has its strengths as well as major limitations. This study proposes a non-parametric efficiency frontier analysis method based on the adaptive neural network technique for measuring efficiency, as a complementary tool to the common techniques used in previous efficiency studies. The proposed computational method is able to find a stochastic frontier based on a set of input-output observational data and does not require explicit assumptions about the functional structure of the stochastic frontier. In this algorithm, an approach similar to econometric methods is used for calculating the efficiency scores. Moreover, the effect of the return to scale of decision-making units (DMUs) on their efficiency is included, and the unit used for the correction is selected with regard to its scale (under the constant return to scale assumption). An example using real data is presented for illustrative purposes. In the application to the power generation sector of Iran, we find that the neural network provides more robust results and identifies more efficient units than the conventional methods, since better performance patterns are explored. Moreover, principal component analysis (PCA) is used to verify the findings of the proposed algorithm

  6. Comparison Performance of Genetic Algorithm and Ant Colony Optimization in Course Scheduling Optimizing

    Directory of Open Access Journals (Sweden)

    Imam Ahmad Ashari

    2016-11-01

    Full Text Available Scheduling problems at a university are a complex type of scheduling problem, and the scheduling process has to be carried out at every turn of the semester. The core difficulty of university course scheduling is the number of components that must be considered when building the schedule (students, lecturers, time slots and rooms), together with limits and constraints that must be respected so that no collisions occur in the schedule, such as room or lecturer clashes. The most appropriate technique for resolving such a scheduling problem is optimization, which can deliver the best achievable results. Metaheuristic algorithms provide many ways of approaching such problems and can come close to the optimal solution. In this paper, we use a genetic algorithm and an ant colony optimization algorithm, both metaheuristics, to solve the course scheduling problem. The two algorithms are tested and compared to determine which performs best. The algorithms were tested using course scheduling data from a university in Semarang. From the experimental results we conclude that the genetic algorithm performs better than the ant colony optimization algorithm in this course scheduling case.

  7. 48 CFR 236.604 - Performance evaluation.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 3 2010-10-01 2010-10-01 false Performance evaluation... Architect-Engineer Services 236.604 Performance evaluation. (a) Preparation of performance reports. Use DD Form 2631, Performance Evaluation (Architect-Engineer), instead of SF 1421. (2) Prepare a separate...

  8. Performance Evaluation of Frequent Subgraph Discovery Techniques

    Directory of Open Access Journals (Sweden)

    Saif Ur Rehman

    2014-01-01

    Full Text Available Due to the rapid development of Internet technology and new scientific advances, the number of applications that model data as graphs is increasing, because graphs have a highly expressive power for modeling complicated structures. Graph mining is a well-explored area of research which is gaining popularity in the data mining community. A graph is a general model to represent data and has been used in many domains such as cheminformatics, web information management systems, computer networks, and bioinformatics, to name a few. In graph mining the frequent subgraph discovery is a challenging task. Frequent subgraph mining is concerned with the discovery of those subgraphs which have frequent or multiple instances within a given graph dataset. In the literature a large number of frequent subgraph mining algorithms have been proposed; these include FSG, AGM, gSpan, CloseGraph, SPIN, Gaston, and Mofa. The objective of this research work is to perform a quantitative comparison of the above-listed techniques. The performance of these techniques has been evaluated through a number of experiments based on three different state-of-the-art graph datasets. This work provides a basis for anyone who is working to design a new frequent subgraph discovery technique.

  9. Evaluation of Four Encryption Algorithms for Viability, Reliability and ...

    African Journals Online (AJOL)

    Akorede

    power utilization of each of these algorithms. KEYWORDS: ... business, military, power, health and so on. .... During data transmission, the sender encrypts the plain text with the ..... Schemes in Wireless Devices Unpublished Thesis, university.

  10. Evaluation of Hierarchical Clustering Algorithms for Document Datasets

    National Research Council Canada - National Science Library

    Zhao, Ying; Karypis, George

    2002-01-01

    Fast and high-quality document clustering algorithms play an important role in providing intuitive navigation and browsing mechanisms by organizing large amounts of information into a small number of meaningful clusters...

  11. Crane Double Cycling in Container Ports: Algorithms, Evaluation, and Planning

    OpenAIRE

    Goodchild, Anne Victoria

    2005-01-01

    Loading ships as they are unloaded (double-cycling) can improve the efficiency of a quay crane and container port. This dissertation describes the double-cycling problem, and presents solution algorithms and simple formulae to estimate benefits. In Chapter 2 we focus on reducing the number of operations necessary to turn around a ship. First an intuitive lower bound is developed. We then present a greedy algorithm that was developed based on the physical properties of the problem and yields a...

  12. Software for evaluation of EPR-dosimetry performance

    International Nuclear Information System (INIS)

    Shishkina, E.A.; Timofeev, Yu.S.; Ivanov, D.V.

    2014-01-01

    Electron paramagnetic resonance (EPR) with tooth enamel is a method extensively used for retrospective external dosimetry. Different research groups apply different equipment, sample preparation procedures and spectrum processing algorithms for EPR dosimetry. A uniform algorithm for description and comparison of performances was designed and implemented in a new computer code. The aim of the paper is to introduce the new software 'EPR-dosimetry performance'. The computer code is a user-friendly tool for providing a full description of method-specific capabilities of EPR tooth dosimetry, from metrological characteristics to practical limitations in applications. The software designed for scientists and engineers has several applications, including support of method calibration by evaluation of calibration parameters, evaluation of critical value and detection limit for registration of radiation-induced signal amplitude, estimation of critical value and detection limit for dose evaluation, estimation of minimal detectable value for anthropogenic dose assessment and description of method uncertainty. (authors)

  13. Performance and Complexity Analysis of Blind FIR Channel Identification Algorithms Based on Deterministic Maximum Likelihood in SIMO Systems

    DEFF Research Database (Denmark)

    De Carvalho, Elisabeth; Omar, Samir; Slock, Dirk

    2013-01-01

    We analyze two algorithms that have been introduced previously for Deterministic Maximum Likelihood (DML) blind estimation of multiple FIR channels. The first one is a modification of the Iterative Quadratic ML (IQML) algorithm. IQML gives biased estimates of the channel and performs poorly at low...... to the initialization. Its asymptotic performance does not reach the DML performance though. The second strategy, called Pseudo-Quadratic ML (PQML), is naturally denoised. The denoising in PQML is furthermore more efficient than in DIQML: PQML yields the same asymptotic performance as DML, as opposed to DIQML......, but requires a consistent initialization. We furthermore compare DIQML and PQML to the strategy of alternating minimization w.r.t. symbols and channel for solving DML (AQML). An asymptotic performance analysis, a complexity evaluation and simulation results are also presented. The proposed DIQML and PQML...

  14. Evaluation of GMI and PMI diffeomorphic‐based demons algorithms for aligning PET and CT Images

    Science.gov (United States)

    Yang, Juan; Zhang, You; Yin, Yong

    2015-01-01

    Fusion of anatomic information in computed tomography (CT) and functional information in F18‐FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, designing radiotherapy treatment plan and staging of cancer. Although current PET and CT images can be acquired from combined F18‐FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce potential positional errors in global and local caused by respiratory motion or organ peristalsis. So registration (alignment) of whole‐body PET and CT images is a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)‐based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point‐wise mutual information (PMI) diffeomorphic‐based demons algorithm whose external force was modified by replacing the image intensity difference in diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer(s) were enrolled in this IRB‐approved study. Whole‐body PET and CT images were acquired from a combined F18‐FDG PET/CT scanner for each patient. The modified Hausdorff distance (dMH) was used to evaluate the registration accuracy of the two algorithms. Of all patients, the mean values and standard deviations (SDs) of dMH were 6.65 (± 1.90) voxels and 6.01 (± 1.90) after the GMI‐based demons and the PMI diffeomorphic‐based demons registration algorithms respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images could not be neglected, although a combined F18‐FDG PET/CT scanner was used for image acquisition. The PMI

  15. Evaluation of GMI and PMI diffeomorphic-based demons algorithms for aligning PET and CT Images.

    Science.gov (United States)

    Yang, Juan; Wang, Hongjun; Zhang, You; Yin, Yong

    2015-07-08

    Fusion of anatomic information in computed tomography (CT) and functional information in 18F-FDG positron emission tomography (PET) is crucial for accurate differentiation of tumor from benign masses, designing radiotherapy treatment plans and staging of cancer. Although current PET and CT images can be acquired from a combined 18F-FDG PET/CT scanner, the two acquisitions are scanned separately and take a long time, which may induce potential global and local positional errors caused by respiratory motion or organ peristalsis. So registration (alignment) of whole-body PET and CT images is a prerequisite for their meaningful fusion. The purpose of this study was to assess the performance of two multimodal registration algorithms for aligning PET and CT images. The proposed gradient of mutual information (GMI)-based demons algorithm, which incorporated the GMI between two images as an external force to facilitate the alignment, was compared with the point-wise mutual information (PMI) diffeomorphic-based demons algorithm whose external force was modified by replacing the image intensity difference in diffeomorphic demons algorithm with the PMI to make it appropriate for multimodal image registration. Eight patients with esophageal cancer(s) were enrolled in this IRB-approved study. Whole-body PET and CT images were acquired from a combined 18F-FDG PET/CT scanner for each patient. The modified Hausdorff distance (dMH) was used to evaluate the registration accuracy of the two algorithms. Of all patients, the mean values and standard deviations (SDs) of dMH were 6.65 (± 1.90) voxels and 6.01 (± 1.90) after the GMI-based demons and the PMI diffeomorphic-based demons registration algorithms, respectively. Preliminary results on oncological patients showed that the respiratory motion and organ peristalsis in PET/CT esophageal images could not be neglected, although a combined 18F-FDG PET/CT scanner was used for image acquisition. The PMI diffeomorphic-based demons
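
    For reference, one common definition of the modified Hausdorff distance (the maximum of the two directed mean point-to-set distances, following Dubuisson and Jain) can be sketched as below; whether the study used exactly this variant is not stated in the abstract, so treat the formula as an assumption.

        import numpy as np

        def directed_mean_distance(a, b):
            """Mean, over the points of a, of the distance to the nearest point of b."""
            # a: (N, 3) and b: (M, 3) arrays of point coordinates (e.g. voxel landmarks)
            d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # (N, M) pairwise distances
            return d.min(axis=1).mean()

        def modified_hausdorff(a, b):
            """Modified Hausdorff distance d_MH between two point sets."""
            return max(directed_mean_distance(a, b), directed_mean_distance(b, a))

        # usage: d = modified_hausdorff(points_from_ct_structure, points_from_pet_structure)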

  16. Evaluation of emergency department performance

    DEFF Research Database (Denmark)

    Sørup, Christian Michel; Jacobsen, Peter; Forberg, Jakob Lundager

    2013-01-01

    eligibility criteria include: 1) the main purpose was to discuss, analyse, or promote performance measures best reflecting ED performance, 2) the article was a review article, and 3) the article reported macro-level performance measures, thus reflecting an overall departmental performance level. Results...... measures that related directly to the patient. Performance measures related to employees were only stated in two of the 14 included articles. Conclusions A total of 55 ED performance measures were identified. ED time intervals were the most recommended performance measures followed by patient centeredness...... and safety performance measures. ED employee-related performance measures were rarely mentioned in the investigated literature. The study’s results allow for advancement towards improved performance measurement and standardised assessment across EDs....

  17. Performance Evaluation of Tree Object Matching

    DEFF Research Database (Denmark)

    Somchaipeng, Kerawit; Sporring, Jon; Kreiborg, Sven

    2005-01-01

    Multi-Scale Singularity Trees (MSSTs) represent the deep structure of images in scale-space and provide both the connections between image features at different scales and their strengths. In this report we present and evaluate an algorithm that exploits the MSSTs for image matching. Two versions...

  18. Performance evaluation of image denoising developed using convolutional denoising autoencoders in chest radiography

    Science.gov (United States)

    Lee, Donghoon; Choi, Sunghoon; Kim, Hee-Joung

    2018-03-01

    When processing medical images, image denoising is an important pre-processing step. Various image denoising algorithms have been developed in the past few decades. Recently, image denoising using the deep learning method has shown excellent performance compared to conventional image denoising algorithms. In this study, we introduce an image denoising technique based on a convolutional denoising autoencoder (CDAE) and evaluate its clinical applicability by comparing it with existing image denoising algorithms. We train the proposed CDAE model using 3000 chest radiograms as training data. To evaluate the performance of the developed CDAE model, we compare it with conventional denoising algorithms including median filter, total variation (TV) minimization, and non-local mean (NLM) algorithms. Furthermore, to verify the clinical effectiveness of the developed denoising model with CDAE, we investigate the performance of the developed denoising algorithm on chest radiograms acquired from real patients. The results demonstrate that the proposed denoising algorithm developed using CDAE achieves a superior noise-reduction effect in chest radiograms compared to TV minimization and NLM algorithms, which are state-of-the-art algorithms for image noise reduction. For example, the peak signal-to-noise ratio and structural similarity index measure of CDAE were at least 10% higher compared to conventional denoising algorithms. In conclusion, the image denoising algorithm developed using CDAE effectively eliminated noise without loss of information on anatomical structures in chest radiograms. It is expected that the proposed denoising algorithm developed using CDAE will be effective for medical images with microscopic anatomical structures, such as terminal bronchioles.
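
    A minimal convolutional denoising autoencoder of the kind described can be sketched with Keras as below; the layer sizes, the additive Gaussian noise model and the training settings are illustrative assumptions, not the configuration used in the study.

        import numpy as np
        from tensorflow.keras import layers, models

        def build_cdae(shape=(256, 256, 1)):
            """Small convolutional denoising autoencoder: noisy image in, clean image out."""
            inp = layers.Input(shape=shape)
            x = layers.Conv2D(32, 3, activation="relu", padding="same")(inp)
            x = layers.MaxPooling2D(2, padding="same")(x)
            x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
            x = layers.MaxPooling2D(2, padding="same")(x)      # encoder
            x = layers.Conv2D(64, 3, activation="relu", padding="same")(x)
            x = layers.UpSampling2D(2)(x)
            x = layers.Conv2D(32, 3, activation="relu", padding="same")(x)
            x = layers.UpSampling2D(2)(x)                      # decoder
            out = layers.Conv2D(1, 3, activation="sigmoid", padding="same")(x)
            model = models.Model(inp, out)
            model.compile(optimizer="adam", loss="mse")
            return model

        # Training pairs: simulated noisy radiograms as input, originals as target.
        # noisy = np.clip(clean + np.random.normal(0, 0.05, clean.shape), 0, 1)
        # build_cdae().fit(noisy, clean, epochs=20, batch_size=16)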

  19. A semi-active suspension control algorithm for vehicle comprehensive vertical dynamics performance

    Science.gov (United States)

    Nie, Shida; Zhuang, Ye; Liu, Weiping; Chen, Fan

    2017-08-01

    Comprehensive performance of the vehicle, including ride qualities and road-holding, is of great value in practice. Many up-to-date semi-active control algorithms improve vehicle dynamics performance effectively. However, it is hard to improve comprehensive performance because of the conflict between ride qualities and road-holding around the second-order resonance. Hence, a new control algorithm is proposed to achieve a good trade-off between ride qualities and road-holding. In this paper, the properties of the invariant points are analysed, which gives an insight into the performance conflict around the second-order resonance. Based on this analysis, the new control algorithm is designed. The algorithm employs a novel frequency selector to balance suspension ride and handling performance by adopting a medium damping around the second-order resonance. The results of this study show that the proposed control algorithm could improve the performance of ride qualities and suspension working space by up to 18.3% and 8.2%, respectively, with little loss of road-holding compared to the passive suspension. Consequently, the comprehensive performance can be improved by 6.6%. Hence, the proposed algorithm has great potential to be implemented in practice.

  20. Evaluation of the global orbit correction algorithm for the APS real-time orbit feedback system

    International Nuclear Information System (INIS)

    Carwardine, J.; Evans, K. Jr.

    1997-01-01

    The APS real-time orbit feedback system uses 38 correctors per plane and has available up to 320 rf beam position monitors. Orbit correction is implemented using multiple digital signal processors. Singular value decomposition is used to generate a correction matrix from a linear response matrix model of the storage ring lattice. This paper evaluates the performance of the APS system in terms of its ability to correct localized and distributed sources of orbit motion. The impact of regulator gain and bandwidth, choice of beam position monitors, and corrector dynamics are discussed. The weighted least-squares algorithm is reviewed in the context of local feedback
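
    The SVD step described above amounts to forming a pseudo-inverse of the measured corrector-to-BPM response matrix, with small singular values discarded to keep the correction well conditioned; a plain numpy sketch (the dimensions and cut-off are illustrative, not APS parameters) is shown below.

        import numpy as np

        def correction_matrix(R, sv_cutoff=1e-3):
            """Orbit-correction matrix from a response matrix R.

            R maps corrector kicks to BPM readings (n_bpm x n_corrector); singular
            values below sv_cutoff * s_max are dropped before pseudo-inversion.
            """
            U, s, Vt = np.linalg.svd(R, full_matrices=False)
            s_inv = np.where(s > sv_cutoff * s[0], 1.0 / s, 0.0)
            return Vt.T @ np.diag(s_inv) @ U.T      # n_corrector x n_bpm

        # usage: kicks = -correction_matrix(R) @ measured_orbit_error   (least-squares correction)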

  1. Energy performance evaluation of AAC

    Science.gov (United States)

    Aybek, Hulya

    The U.S. building industry constitutes the largest consumer of energy (i.e., electricity, natural gas, petroleum) in the world. The building sector uses almost 41 percent of the primary energy and approximately 72 percent of the available electricity in the United States. As global energy-generating resources are being depleted at exponential rates, the amount of energy consumed and wasted cannot be ignored. Professionals concerned about the environment have placed a high priority on finding solutions that reduce energy consumption while maintaining occupant comfort. Sustainable design and the judicious combination of building materials comprise one solution to this problem. A future including sustainable energy may result from using energy simulation software to accurately estimate energy consumption and from applying building materials that achieve the potential results derived through simulation analysis. Energy-modeling tools assist professionals with making informed decisions about energy performance during the early planning phases of a design project, such as determining the most advantageous combination of building materials, choosing mechanical systems, and determining building orientation on the site. By implementing energy simulation software to estimate the effect of these factors on the energy consumption of a building, designers can make adjustments to their designs during the design phase when the effect on cost is minimal. The primary objective of this research consisted of identifying a method with which to properly select energy-efficient building materials and involved evaluating the potential of these materials to earn LEED credits when properly applied to a structure. In addition, this objective included establishing a framework that provides suggestions for improvements to currently available simulation software that enhance the viability of the estimates concerning energy efficiency and the achievements of LEED credits. The primary objective

  2. Evaluation of segmentation algorithms for optical coherence tomography images of ovarian tissue

    Science.gov (United States)

    Sawyer, Travis W.; Rice, Photini F. S.; Sawyer, David M.; Koevary, Jennifer W.; Barton, Jennifer K.

    2018-02-01

    Ovarian cancer has the lowest survival rate among all gynecologic cancers due to predominantly late diagnosis. Early detection of ovarian cancer can increase 5-year survival rates from 40% up to 92%, yet no reliable early detection techniques exist. Optical coherence tomography (OCT) is an emerging technique that provides depth-resolved, high-resolution images of biological tissue in real time and demonstrates great potential for imaging of ovarian tissue. Mouse models are crucial to quantitatively assess the diagnostic potential of OCT for ovarian cancer imaging; however, due to small organ size, the ovaries must first be separated from the image background using the process of segmentation. Manual segmentation is time-intensive, as OCT yields three-dimensional data. Furthermore, speckle noise complicates OCT images, frustrating many processing techniques. While much work has investigated noise-reduction and automated segmentation for retinal OCT imaging, little has considered the application to the ovaries, which exhibit higher variance and inhomogeneity than the retina. To address these challenges, we evaluated a set of algorithms to segment OCT images of mouse ovaries. We examined five preprocessing techniques and six segmentation algorithms. While all pre-processing methods improve segmentation, Gaussian filtering is most effective, showing an improvement of 32% +/- 1.2%. Of the segmentation algorithms, active contours performs best, segmenting with an accuracy of 0.948 +/- 0.012 compared with manual segmentation (1.0 being identical). Nonetheless, further optimization could lead to maximizing the performance for segmenting OCT images of the ovaries.
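
    The comparison against manual segmentation can be illustrated with a simple overlap score after Gaussian pre-filtering; this sketch uses scipy with a crude threshold-based segmenter as a stand-in, not the active-contour method that performed best in the study.

        import numpy as np
        from scipy import ndimage

        def segment_after_gaussian(image, sigma=2.0):
            """Toy segmenter: Gaussian speckle suppression followed by a global threshold."""
            smoothed = ndimage.gaussian_filter(np.asarray(image, float), sigma=sigma)
            return smoothed > (smoothed.mean() + smoothed.std())   # placeholder threshold rule

        def dice(auto_mask, manual_mask):
            """Overlap between automatic and manual masks (1.0 means identical)."""
            a, m = np.asarray(auto_mask, bool), np.asarray(manual_mask, bool)
            return 2.0 * np.count_nonzero(a & m) / (np.count_nonzero(a) + np.count_nonzero(m))

        # usage: score = dice(segment_after_gaussian(oct_bscan), manual_mask)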

  3. An evaluation of the potential of GPUs to accelerate tracking algorithms for the ATLAS trigger

    CERN Document Server

    Baines, JTM; The ATLAS collaboration; Emeliyanov, D; Howard, JR; Kama, S; Washbrook, AJ; Wynne, BM

    2014-01-01

    The potential of GPUs has been evaluated as a possible way to accelerate trigger algorithms for the ATLAS experiment located at the Large Hadron Collider (LHC). During LHC Run-1 ATLAS employed a three-level trigger system to progressively reduce the LHC collision rate of 20 MHz to a storage rate of about 600 Hz for offline processing. Reconstruction of charged particles trajectories through the Inner Detector (ID) was performed at the second (L2) and third (EF) trigger levels. The ID contains pixel, silicon strip (SCT) and straw-tube technologies. Prior to tracking, data-preparation algorithms processed the ID raw data producing measurements of the track position at each detector layer. The data-preparation and tracking consumed almost three-quarters of the total L2 CPU resources during 2012 data-taking. Detailed performance studies of a CUDA™ implementation of the L2 pixel and SCT data-preparation and tracking algorithms running on a Nvidia® Tesla C2050 GPU have shown a speed-up by a factor of 12 for the ...

  4. The algorithmic performance of J-Tpeak for drug safety clinical trial.

    Science.gov (United States)

    Chien, Simon C; Gregg, Richard E

    The interval from J-point to T-wave peak (JTp) in ECG is a new biomarker able to identify drugs that prolong the QT interval but have different ion channel effects. If JTp is not prolonged, the prolonged QT may be associated with multi ion channel block that may have low torsade de pointes risk. From the automatic ECG measurement perspective, accurate and repeatable measurement of JTp involves different challenges than QT. We evaluated algorithm performance and JTp challenges using the Philips DXL diagnostic 12/16/18-lead algorithm. Measurement of JTp represents a different use model. Standard use of corrected QT interval is clinical risk assessment on patients with cardiac disease or suspicion of heart disease. Drug safety trials involve a very different population - young healthy subjects - who commonly have J-waves, notches and slurs. Drug effects include difficult and unusual morphology such as flat T-waves, gentle notches, and multiple T-wave peaks. The JTp initiative study provided ECGs collected from 22 young subjects (11 males and females) in randomized testing of dofetilide, quinidine, ranolazine, verapamil and placebo. We compare the JTp intervals between DXL algorithm and the FDA published measurements. The lead wise, vector-magnitude (VM), root-mean-square (RMS) and principal-component-analysis (PCA) representative beats were used to measure JTp and QT intervals. We also implemented four different methods for T peak detection for comparison. We found that JTp measurements were closer to the reference for combined leads RMS and PCA than individual leads. Differences in J-point location led to part of the JTp measurement difference because of the high prevalence of J-waves, notches and slurs. Larger differences were noted for drug effect causing multiple distinct T-wave peaks (Tp). The automated algorithm chooses the later peak while the reference was the earlier peak. Choosing among different algorithmic strategies in T peak measurement results in the

  5. Evaluating algorithms for the Generation of Referring Expressions using a balanced corpus

    NARCIS (Netherlands)

    Gatt, A.; van der Sluis, Ielka; van Deemter, Kees

    2007-01-01

    Despite being the focus of intensive research, evaluation of algorithms that generate referring expressions is still in its infancy. We describe a corpus-based evaluation methodology, applied to a number of classic algorithms in this area. The methodology focuses on balance and semantic transparency

  6. An efficient impedance method for induced field evaluation based on a stabilized Bi-conjugate gradient algorithm

    International Nuclear Information System (INIS)

    Wang Hua; Liu Feng; Crozier, Stuart; Xia Ling

    2008-01-01

    This paper presents a stabilized Bi-conjugate gradient algorithm (BiCGstab) that can significantly improve the performance of the impedance method, which has been widely applied to model low-frequency field induction phenomena in voxel phantoms. The improved impedance method offers remarkable computational advantages in terms of convergence performance and memory consumption over the conventional, successive over-relaxation (SOR)-based algorithm. The scheme has been validated against other numerical/analytical solutions on a lossy, multilayered sphere phantom excited by an ideal coil loop. To demonstrate the computational performance and application capability of the developed algorithm, the induced fields inside a human phantom due to a low-frequency hyperthermia device is evaluated. The simulation results show the numerical accuracy and superior performance of the method.
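
    For context, a stabilized bi-conjugate gradient solver of the kind used here is available off the shelf in scipy; the sketch below solves a stand-in sparse system (a random diagonally dominant matrix, not an impedance-method admittance matrix) to show the call pattern.

        import numpy as np
        from scipy.sparse import identity, random as sparse_random
        from scipy.sparse.linalg import bicgstab

        # Stand-in sparse system A x = b; in the impedance method, A would encode the
        # node admittances of the voxel network and b the induced source terms.
        n = 2000
        A = sparse_random(n, n, density=1e-3, format="csr") + 10.0 * identity(n, format="csr")
        b = np.ones(n)

        x, info = bicgstab(A, b, maxiter=5000)   # info == 0 signals convergence
        print("converged" if info == 0 else f"bicgstab info = {info}")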

  7. An efficient impedance method for induced field evaluation based on a stabilized Bi-conjugate gradient algorithm.

    Science.gov (United States)

    Wang, Hua; Liu, Feng; Xia, Ling; Crozier, Stuart

    2008-11-21

    This paper presents a stabilized Bi-conjugate gradient algorithm (BiCGstab) that can significantly improve the performance of the impedance method, which has been widely applied to model low-frequency field induction phenomena in voxel phantoms. The improved impedance method offers remarkable computational advantages in terms of convergence performance and memory consumption over the conventional, successive over-relaxation (SOR)-based algorithm. The scheme has been validated against other numerical/analytical solutions on a lossy, multilayered sphere phantom excited by an ideal coil loop. To demonstrate the computational performance and application capability of the developed algorithm, the induced fields inside a human phantom due to a low-frequency hyperthermia device is evaluated. The simulation results show the numerical accuracy and superior performance of the method.

  8. Comparison of Controller and Flight Deck Algorithm Performance During Interval Management with Dynamic Arrival Trees (STARS)

    Science.gov (United States)

    Battiste, Vernol; Lawton, George; Lachter, Joel; Brandt, Summer; Koteskey, Robert; Dao, Arik-Quang; Kraut, Josh; Ligda, Sarah; Johnson, Walter W.

    2012-01-01

    Managing the interval between arrival aircraft is a major part of the en route and TRACON controller's job. In an effort to reduce controller workload and low-altitude vectoring, algorithms have been developed to allow pilots to take responsibility for, achieve and maintain proper spacing. Additionally, algorithms have been developed to create dynamic weather-free arrival routes in the presence of convective weather. In a recent study we examined an algorithm to handle dynamic re-routing in the presence of convective weather and two distinct spacing algorithms. The spacing algorithms originated from different core algorithms; both were enhanced with trajectory intent data for the study. These two algorithms were used simultaneously in a human-in-the-loop (HITL) simulation where pilots performed weather-impacted arrival operations into Louisville International Airport while also performing interval management (IM) on some trials. The controllers retained responsibility for separation and for managing the en route airspace and, on some trials, for managing IM. The goal was a stress test of dynamic arrival algorithms with ground and airborne spacing concepts. The flight deck spacing algorithms or controller managed spacing not only had to be robust to the dynamic nature of aircraft re-routing around weather but also had to be compatible with two alternative algorithms for achieving the spacing goal. Flight deck interval management spacing in this simulation provided a clear reduction in controller workload relative to when controllers were responsible for spacing the aircraft. At the same time, spacing was much less variable with the flight deck automated spacing. Even though the approaches taken by the two spacing algorithms to achieve the interval management goals were slightly different, they seem to be simpatico in achieving the interval management goal of 130 sec by the TRACON boundary.

  9. Learning Activity Predictors from Sensor Data: Algorithms, Evaluation, and Applications.

    Science.gov (United States)

    Minor, Bryan; Doppa, Janardhan Rao; Cook, Diane J

    2017-12-01

    Recent progress in Internet of Things (IoT) platforms has allowed us to collect large amounts of sensing data. However, there are significant challenges in converting this large-scale sensing data into decisions for real-world applications. Motivated by applications such as health monitoring, intervention, and home automation, we consider a novel problem called Activity Prediction, where the goal is to predict future activity occurrence times from sensor data. In this paper, we make three main contributions. First, we formulate and solve the activity prediction problem in the framework of imitation learning and reduce it to a simple regression learning problem. This approach allows us to leverage powerful regression learners that can reason about the relational structure of the problem with negligible computational overhead. Second, we present several metrics to evaluate activity predictors in the context of real-world applications. Third, we evaluate our approach using real sensor data collected from 24 smart home testbeds. We also embed the learned predictor into a mobile-device-based activity prompter and evaluate the app for 9 participants living in smart homes. Our results indicate that our activity predictor performs better than the baseline methods, and offers a simple approach for predicting activities from sensor data.

  10. A new algorithm for reducing the workload of experts in performing systematic reviews.

    Science.gov (United States)

    Matwin, Stan; Kouznetsov, Alexandre; Inkpen, Diana; Frunza, Oana; O'Blenis, Peter

    2010-01-01

    To determine whether a factorized version of the complement naïve Bayes (FCNB) classifier can reduce the time spent by experts reviewing journal articles for inclusion in systematic reviews of drug class efficacy for disease treatment. The proposed classifier was evaluated on a test collection built from 15 systematic drug class reviews used in previous work. The FCNB classifier was constructed to classify each article as containing high-quality, drug class-specific evidence or not. Weight engineering (WE) techniques were added to reduce underestimation for Medical Subject Headings (MeSH)-based and Publication Type (PubType)-based features. Cross-validation experiments were performed to evaluate the classifier's parameters and performance. Work saved over sampling (WSS) at no less than a 95% recall was used as the main measure of performance. The minimum workload reduction for a systematic review for one topic, achieved with a FCNB/WE classifier, was 8.5%; the maximum was 62.2% and the average over the 15 topics was 33.5%. This is 15.0% higher than the average workload reduction obtained using a voting perceptron-based automated citation classification system. The FCNB/WE classifier is simple, easy to implement, and produces significantly better results in reducing the workload than previously achieved. The results support it being a useful algorithm for machine-learning-based automation of systematic reviews of drug class efficacy for disease treatment.
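
    A rough scikit-learn analogue of screening abstracts with a complement naive Bayes classifier is sketched below; the factorization and weight-engineering refinements described in the paper are not reproduced, and the ranking-based screening strategy is a simplifying assumption.

        import numpy as np
        from sklearn.feature_extraction.text import TfidfVectorizer
        from sklearn.naive_bayes import ComplementNB
        from sklearn.pipeline import make_pipeline

        def rank_for_screening(train_texts, train_labels, new_texts):
            """Order new articles from most to least likely relevant, so reviewers
            can stop reading once the remaining tail is unlikely to contain includes."""
            clf = make_pipeline(TfidfVectorizer(stop_words="english"), ComplementNB())
            clf.fit(train_texts, train_labels)                      # labels: 1 = relevant, 0 = not
            p_relevant = clf.predict_proba(new_texts)[:, list(clf.classes_).index(1)]
            order = np.argsort(-p_relevant)
            return order, p_relevant[order]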

  11. Mapping the Conjugate Gradient Algorithm onto High Performance Heterogeneous Computers

    Science.gov (United States)

    2014-05-01

    ...final implementations were nearly always performed using fixed-point or integer arithmetic (Parker 2009). With the recent...

  12. 48 CFR 36.604 - Performance evaluation.

    Science.gov (United States)

    2010-10-01

    ... 48 Federal Acquisition Regulations System 1 2010-10-01 2010-10-01 false Performance evaluation. 36.604 Section 36.604 Federal Acquisition Regulations System FEDERAL ACQUISITION REGULATION SPECIAL... Performance evaluation. See 42.1502(f) for the requirements for preparing past performance evaluations for...

  13. Performance evaluation and financial market runs

    NARCIS (Netherlands)

    Wagner, W.B.

    2013-01-01

    This paper develops a model in which performance evaluation causes runs by fund managers and results in asset fire sales. Performance evaluation nonetheless is efficient as it disciplines managers. Optimal performance evaluation combines absolute and relative components in order to make runs less

  14. Performance of b tagging algorithms in proton-proton collisions at 13 TeV with Phase 1 CMS detector

    CERN Document Server

    CMS Collaboration

    2018-01-01

    Many measurements as well as searches for new physics beyond the standard model at the LHC rely on the efficient identification of heavy flavour jets, i.e. jets containing b or c hadrons. In this Detector Performance Summary, the performance of these algorithms is presented, based on proton-proton collision data recorded by the CMS experiment at 13 TeV. The expected performance of the heavy flavour identification algorithms with the upgraded tracker detector is presented. Correction factors for the differences in performance between data and simulation are evaluated in 41.9 fb-1 of collision data collected in 2017. Finally, the reconstruction of observables relevant for heavy flavour identification in 2018 data is studied.

  15. Algorithm to determine electrical submersible pump performance considering temperature changes for viscous crude oils

    Energy Technology Data Exchange (ETDEWEB)

    Valderrama, A. [Petroleos de Venezuela, S.A., Distrito Socialista Tecnologico (Venezuela); Valencia, F. [Petroleos de Venezuela, S.A., Instituto de Tecnologia Venezolana para el Petroleo (Venezuela)

    2011-07-01

    In the heavy oil industry, electrical submersible pumps (ESPs) are used to transfer energy to fluids through stages made up of one impeller and one diffuser. Since liquid temperature increases through the different stages, viscosity might change between the inlet and outlet of the pump, thus affecting performance. The aim of this research was to create an algorithm to determine ESPs' performance curves considering temperature changes through the stages. A computational algorithm was developed and then compared with data collected in a laboratory with a CG2900 ESP. Results confirmed that when the fluid's viscosity is affected by the temperature changes, the stages of multistage pump systems do not have the same performance. Thus the developed algorithm could help production engineers to take viscosity changes into account and optimize the ESP design. This study developed an algorithm to take into account the fluid viscosity changes through pump stages.

  16. Performance of the ATLAS Inner Detector Trigger algorithms in pp collisions at 7 TeV

    CERN Document Server

    Masik, Jiri; The ATLAS collaboration

    2011-01-01

    The ATLAS trigger performs online event selection in three stages. The Inner Detector information is used in the second (Level 2) and third (Event Filter) stages. Track reconstruction in the silicon detectors and transition radiation tracker contributes significantly to the rejection of uninteresting events while retaining a high signal efficiency. To achieve an overall trigger execution time of 40 ms per event, Level 2 tracking uses fast custom algorithms. The Event Filter tracking uses modified offline algorithms, with an overall execution time of 4s per event. Performance of the trigger tracking algorithms with data collected by ATLAS in 2011 is shown. The high efficiency and track quality of the trigger tracking algorithms for identification of physics signatures is presented. We also discuss the robustness of the reconstruction software with respect to the presence of multiple interactions per bunch crossing, an increasingly important feature for optimal performance moving towards the design luminosities...

  17. A Genetic algorithm for evaluating the zeros (roots) of polynomial ...

    African Journals Online (AJOL)

    This paper presents Genetic Algorithm software (a computational search technique) for finding the zeros (roots) of any given polynomial function, and for optimizing and solving N-dimensional systems of equations. The software is particularly useful since most of the classic schemes are not all-embracing.
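
    As a hedged illustration of the idea (the referenced software itself is not shown in the abstract), a bare-bones real-coded genetic algorithm that searches for a real zero of a polynomial might look like the following.

        import random

        def ga_polynomial_root(coeffs, lo=-10.0, hi=10.0, pop_size=60, generations=200):
            """Search for x with p(x) close to 0; coeffs are given highest degree first."""
            def p(x):
                y = 0.0
                for c in coeffs:
                    y = y * x + c                    # Horner evaluation
                return y

            fitness = lambda x: -abs(p(x))           # closer to a root = fitter
            pop = [random.uniform(lo, hi) for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness, reverse=True)
                parents = pop[: pop_size // 2]       # keep the better half
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    child = 0.5 * (a + b) + random.gauss(0.0, 0.1)   # blend crossover + mutation
                    children.append(min(hi, max(lo, child)))
                pop = parents + children
            return max(pop, key=fitness)

        # example: a root of x^2 - 2 (should come out near 1.414 or -1.414)
        print(ga_polynomial_root([1.0, 0.0, -2.0]))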

  18. A message passing algorithm for the evaluation of social influence

    NARCIS (Netherlands)

    Vassio, Luca; Fagnani, Fabio; Frasca, Paolo; Ozdaglar, Asuman

    2014-01-01

    In this paper, we define a new measure of node centrality in social networks, the Harmonic Influence Centrality, which emerges naturally in the study of social influence over networks. Next, we introduce a distributed message passing algorithm to compute the Harmonic Influence Centrality of each

  19. A Novel AHRS Inertial Sensor-Based Algorithm for Wheelchair Propulsion Performance Analysis

    Directory of Open Access Journals (Sweden)

    Jonathan Bruce Shepherd

    2016-08-01

    Full Text Available With the rise of professionalism in sport, athletes, teams, and coaches are looking to technology to monitor performance in both games and training in order to find a competitive advantage. The use of inertial sensors has been proposed as a cost-effective and adaptable measurement device for monitoring wheelchair kinematics; however, the outcomes are dependent on the reliability of the processing algorithms. Though a variety of algorithms have been proposed to monitor wheelchair propulsion in court sports, they all have limitations. Through experimental testing, we have shown the Attitude and Heading Reference System (AHRS)-based algorithm to be a suitable and reliable candidate algorithm for estimating velocity, distance, and approximating trajectory. The proposed algorithm is computationally inexpensive, agnostic of wheel camber, not sensitive to sensor placement, and can be embedded for real-time implementations. The research is conducted under Griffith University Ethics (GU Ref No: 2016/294).

  1. Development and performance analysis of a lossless data reduction algorithm for VoIP

    International Nuclear Information System (INIS)

    Misbahuddin, S.; Boulejfen, N.

    2014-01-01

    VoIP (Voice Over IP) is becoming an alternative means of voice communication over the Internet. To better utilize voice call bandwidth, some standard compression algorithms are applied in VoIP systems. However, these algorithms degrade voice quality at high compression ratios. This paper presents a lossless data reduction technique to improve the VoIP data transfer rate over the IP network. The proposed algorithm exploits the data redundancies in digitized VFs (Voice Frames) generated by VoIP systems. The performance of the proposed data reduction algorithm is presented in terms of compression ratio. The proposed algorithm helps retain voice quality along with improving VoIP data transfer rates. (author)
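
    The abstract does not spell out the redundancy-removal scheme, so the sketch below is only a generic example of the idea: delta-coding consecutive voice-frame samples and run-length encoding the resulting zero runs, which is lossless and exploits sample-to-sample similarity.

        def delta_rle_encode(samples):
            """Lossless toy codec for a voice frame: delta coding + run-length of zero deltas."""
            deltas = [samples[0]] + [b - a for a, b in zip(samples, samples[1:])]
            out, i = [], 0
            while i < len(deltas):
                if deltas[i] == 0:                   # compress runs of unchanged samples
                    run = 1
                    while i + run < len(deltas) and deltas[i + run] == 0:
                        run += 1
                    out.append(("Z", run))
                    i += run
                else:
                    out.append(("D", deltas[i]))
                    i += 1
            return out

        def delta_rle_decode(tokens):
            deltas = []
            for kind, value in tokens:
                deltas.extend([0] * value if kind == "Z" else [value])
            samples, acc = [], 0
            for d in deltas:
                acc += d
                samples.append(acc)
            return samples

        frame = [3, 3, 3, 4, 5, 5, 5, 5]
        assert delta_rle_decode(delta_rle_encode(frame)) == frame   # round trip is exact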

  2. Improving performance of wavelet-based image denoising algorithm using complex diffusion process

    DEFF Research Database (Denmark)

    Nadernejad, Ehsan; Sharifzadeh, Sara; Korhonen, Jari

    2012-01-01

    Image enhancement and de-noising is an essential pre-processing step in many image processing algorithms. In any image de-noising algorithm, the main concern is to keep the interesting structures of the image. Such interesting structures often correspond to the discontinuities (edges...... The proposed algorithm has been evaluated using a variety of standard images and its performance has been compared against several de-noising algorithms known from the prior art. Experimental results show that the proposed algorithm preserves the edges better and, in most cases, improves the measured visual quality of the denoised images in comparison to the existing methods known from the literature. The improvement is obtained without excessive computational cost, and the algorithm works well on a wide range of different types of noise.

  3. Performance comparison of weighted sum-minimum mean square error and virtual signal-to-interference plus noise ratio algorithms in simulated and measured channels

    DEFF Research Database (Denmark)

    Rahimi, Maryam; Nielsen, Jesper Ødum; Pedersen, Troels

    2014-01-01

    A comparison of the data rates achieved by two well-known algorithms, using both simulated and real measured channel data, is presented. The algorithms maximise the data rate in a cooperative base station (BS) multiple-input single-output scenario. The weighted sum-minimum mean square error algorithm could be used...... in multiple-input multiple-output scenarios, but it has lower performance than the virtual signal-to-interference-plus-noise ratio algorithm in theory and practice. A real measurement environment consisting of two BSs and two users has been studied to evaluate the simulation results.

  4. Performance evaluation software moving object detection and tracking in videos

    CERN Document Server

    Karasulu, Bahadir

    2013-01-01

    Performance Evaluation Software: Moving Object Detection and Tracking in Videos introduces a software approach for the real-time evaluation and performance comparison of the methods specializing in moving object detection and/or tracking (D&T) in video processing. Digital video content analysis is an important item for multimedia content-based indexing (MCBI), content-based video retrieval (CBVR) and visual surveillance systems. There are some frequently-used generic algorithms for video object D&T in the literature, such as Background Subtraction (BS), Continuously Adaptive Mean-shift (CMS),

  5. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy

    Science.gov (United States)

    2017-01-01

    Background Machine learning techniques may be an effective and efficient way to classify open-text reports on doctor’s activity for the purposes of quality assurance, safety, and continuing professional development. Objective The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors’ professional performance in the United Kingdom. Methods We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians’ colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Results Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for codes relating to “popular” (recall=.97), “innovator” (recall=.98), and “respected” (recall=.87) codes and was lower for the “interpersonal” (recall=.80) and “professional” (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as “respected,” “professional,” and “interpersonal” related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P.05). Conclusions Machine learning algorithms can classify open-text feedback

  6. Supervised Machine Learning Algorithms Can Classify Open-Text Feedback of Doctor Performance With Human-Level Accuracy.

    Science.gov (United States)

    Gibbons, Chris; Richards, Suzanne; Valderas, Jose Maria; Campbell, John

    2017-03-15

    Machine learning techniques may be an effective and efficient way to classify open-text reports on doctor's activity for the purposes of quality assurance, safety, and continuing professional development. The objective of the study was to evaluate the accuracy of machine learning algorithms trained to classify open-text reports of doctor performance and to assess the potential for classifications to identify significant differences in doctors' professional performance in the United Kingdom. We used 1636 open-text comments (34,283 words) relating to the performance of 548 doctors collected from a survey of clinicians' colleagues using the General Medical Council Colleague Questionnaire (GMC-CQ). We coded 77.75% (1272/1636) of the comments into 5 global themes (innovation, interpersonal skills, popularity, professionalism, and respect) using a qualitative framework. We trained 8 machine learning algorithms to classify comments and assessed their performance using several training samples. We evaluated doctor performance using the GMC-CQ and compared scores between doctors with different classifications using t tests. Individual algorithm performance was high (range F score=.68 to .83). Interrater agreement between the algorithms and the human coder was highest for codes relating to "popular" (recall=.97), "innovator" (recall=.98), and "respected" (recall=.87) codes and was lower for the "interpersonal" (recall=.80) and "professional" (recall=.82) codes. A 10-fold cross-validation demonstrated similar performance in each analysis. When combined together into an ensemble of multiple algorithms, mean human-computer interrater agreement was .88. Comments that were classified as "respected," "professional," and "interpersonal" related to higher doctor scores on the GMC-CQ compared with comments that were not classified (P.05). Machine learning algorithms can classify open-text feedback of doctor performance into multiple themes derived by human raters with high

  7. 40 CFR 63.2354 - What performance tests, design evaluations, and performance evaluations must I conduct?

    Science.gov (United States)

    2010-07-01

    ... evaluations, and performance evaluations must I conduct? 63.2354 Section 63.2354 Protection of Environment... tests, design evaluations, and performance evaluations must I conduct? (a)(1) For each performance test... procedures specified in subpart SS of this part. (3) For each performance evaluation of a continuous emission...

  8. Realization of 3D evaluation algorithm in dose-guided radiotherapy

    International Nuclear Information System (INIS)

    Wang Yu; Li Gui; Wang Dong; Wu Yican; FDS Team

    2012-01-01

    A 3D evaluation algorithm, rather than the 2D evaluation methods used in clinical dose verification, is highly needed for dose evaluation in Dose-guided Radiotherapy. A 3D version of three evaluation methods, including Dose Difference, Distance-To-Agreement and γ analysis, was implemented in Visual C++ according to their defining formulas. Two plans were designed to test the algorithm: plan 1 was radiation on an equivalent-water phantom using a square field, for verification of the algorithm's correctness; plan 2 was radiation on an emulation head phantom using a conformal field, for verification of the algorithm's practicality. For plan 1, the Dose Difference pass rate within the tolerance range was 100%, and the Distance-To-Agreement and γ analysis pass rates were 100% within the tolerance range and 99±1% at the boundary of the range. For plan 2, the pass rates of the three evaluation methods were 88.35%, 100% and 95.07%, respectively. It can be concluded that the 3D evaluation algorithm is feasible and could be used to evaluate 3D dose distributions in Dose-guided Radiotherapy. (authors)
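
    A simplified 1-D gamma-index computation conveys how the dose-difference and distance-to-agreement criteria are combined; the 3%/3 mm tolerances and the sampling grid below are illustrative assumptions, and the paper's 3D implementation is not reproduced.

        import numpy as np

        def gamma_index(ref_dose, eval_dose, spacing_mm, dose_tol=0.03, dist_tol_mm=3.0):
            """Per-point gamma for two dose profiles sampled on the same 1-D grid.

            A point passes the combined dose-difference / distance-to-agreement
            criterion when its gamma value is <= 1.
            """
            ref_dose = np.asarray(ref_dose, float)
            eval_dose = np.asarray(eval_dose, float)
            positions = np.arange(ref_dose.size) * spacing_mm
            dose_norm = dose_tol * ref_dose.max()            # global dose criterion

            gammas = np.empty(eval_dose.size)
            for i, (x_e, d_e) in enumerate(zip(positions, eval_dose)):
                dd = (d_e - ref_dose) / dose_norm            # dose-difference term
                dx = (x_e - positions) / dist_tol_mm         # distance term
                gammas[i] = np.sqrt(dd ** 2 + dx ** 2).min()
            return gammas

        # pass rate: np.mean(gamma_index(ref, ev, spacing_mm=1.0) <= 1.0)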

  9. Application of Machine Learning Algorithms for the Query Performance Prediction

    Directory of Open Access Journals (Sweden)

    MILICEVIC, M.

    2015-08-01

    Full Text Available This paper analyzes the relationship between the system load/throughput and the query response time in a real Online transaction processing (OLTP) system environment. Although OLTP systems are characterized by short transactions, which normally entail high availability and consistent short response times, the need for operational reporting may jeopardize these objectives. We suggest a new approach to performance prediction for concurrent database workloads, based on the system state vector which consists of 36 attributes. There is no bias to the importance of certain attributes, but the machine learning methods are used to determine which attributes better describe the behavior of the particular database server and how to model that system. During the learning phase, the system's profile is created using multiple reference queries, which are selected to represent frequent business processes. The possibility of the accurate response time prediction may be a foundation for automated decision-making for database (DB) query scheduling. Possible applications of the proposed method include adaptive resource allocation, quality of service (QoS) management or real-time dynamic query scheduling (e.g. estimation of the optimal moment for a complex query execution).

  10. The ATLAS Trigger Algorithms Upgrade and Performance in Run-2

    CERN Document Server

    Bernius, Catrin; The ATLAS collaboration

    2017-01-01

    The ATLAS trigger has been used very successfully for the online event selection during the first part of the second LHC run (Run-2) in 2015/16 at a center-of-mass energy of 13 TeV. The trigger system is composed of a hardware Level-1 trigger and a software-based high-level trigger; it reduces the event rate from the bunch-crossing rate of 40 MHz to an average recording rate of about 1 kHz. The excellent performance of the ATLAS trigger has been vital for the ATLAS physics program of Run-2, selecting interesting collision events for a wide variety of physics signatures with high efficiency. The trigger selection capabilities of ATLAS during Run-2 have been significantly improved compared to Run-1, in order to cope with the higher event rates and pile-up which are the result of the almost doubling of the center-of-mass collision energy and the increase in the instantaneous luminosity of the LHC. At the Level-1 trigger, the improvements undertaken resulted in more pile-up-robust selection efficiencies and event ra...

  11. Medicare Administrative Contractor Performance Evaluation

    Data.gov (United States)

    U.S. Department of Health & Human Services — The Centers for Medicare and Medicaid Services (CMS) has compiled a summary of overall Medicare Administrative Contractor (MAC) performance information as measured...

  12. Enhancement and evaluation of an algorithm for atmospheric profiling continuity from Aqua to Suomi-NPP

    Science.gov (United States)

    Lipton, A.; Moncet, J. L.; Payne, V.; Lynch, R.; Polonsky, I. N.

    2017-12-01

    We will present recent results from an algorithm for producing climate-quality atmospheric profiling earth system data records (ESDRs) for application to data from hyperspectral sounding instruments, including the Atmospheric InfraRed Sounder (AIRS) on EOS Aqua and the Cross-track Infrared Sounder (CrIS) on Suomi-NPP, along with their companion microwave sounders, AMSU and ATMS, respectively. The ESDR algorithm uses an optimal estimation approach and the implementation has a flexible, modular software structure to support experimentation and collaboration. Data record continuity benefits from the fact that the same algorithm can be applied to different sensors, simply by providing suitable configuration and data files. Developments to be presented include the impact of a radiance-based pre-classification method for the atmospheric background. In addition to improving retrieval performance, pre-classification has the potential to reduce the sensitivity of the retrievals to the climatological data from which the background estimate and its error covariance are derived. We will also discuss evaluation of a method for mitigating the effect of clouds on the radiances, and enhancements of the radiative transfer forward model.

  13. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment

    Science.gov (United States)

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting computational needs of large task sizes. Optimal scheduling of tasks in cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing. PMID:28467505
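
    To make one of the compared heuristics concrete, a plain Min-min scheduler over an expected-time-to-compute matrix is sketched below; the cost, degree-of-imbalance and throughput metrics of the study are not reproduced.

        import numpy as np

        def min_min_schedule(etc):
            """Min-min heuristic. etc[i, j] = execution time of task i on machine j.

            Repeatedly picks the unscheduled task whose best completion time is the
            smallest and assigns it to the machine that achieves that completion time.
            """
            etc = np.asarray(etc, float)
            n_tasks, n_machines = etc.shape
            ready = np.zeros(n_machines)                 # machine availability times
            assignment = [-1] * n_tasks
            unscheduled = set(range(n_tasks))

            while unscheduled:
                best = None                              # (completion time, task, machine)
                for t in unscheduled:
                    completion = ready + etc[t]
                    m = int(completion.argmin())
                    if best is None or completion[m] < best[0]:
                        best = (completion[m], t, m)
                finish, t, m = best
                ready[m] = finish
                assignment[t] = m
                unscheduled.remove(t)
            return assignment, ready.max()               # task-to-machine mapping and makespan

        # example: assignment, makespan = min_min_schedule([[4, 6], [3, 7], [5, 2]])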

  14. Performance of direct and iterative algorithms on an optical systolic processor

    Science.gov (United States)

    Ghosh, A. K.; Casasent, D.; Neuman, C. P.

    1985-11-01

    The frequency-multiplexed optical linear algebra processor (OLAP) is treated in detail with attention to its performance in the solution of systems of linear algebraic equations (LAEs). General guidelines suitable for most OLAPs, including digital-optical processors, are advanced concerning system and component error source models, guidelines for appropriate use of direct and iterative algorithms, the dominant error sources, and the effect of multiple simultaneous error sources. Specific results are advanced on the quantitative performance of both direct and iterative algorithms in the solution of systems of LAEs and in the solution of nonlinear matrix equations. Acoustic attenuation is found to dominate iterative algorithms and detector noise to dominate direct algorithms. The effect of multiple spatial errors is found to be additive. A theoretical expression for the amount of acoustic attenuation allowed is advanced and verified. Simulations and experimental data are included.

  15. Performance comparison of heuristic algorithms for task scheduling in IaaS cloud computing environment.

    Science.gov (United States)

    Madni, Syed Hamid Hussain; Abd Latiff, Muhammad Shafie; Abdullahi, Mohammed; Abdulhamid, Shafi'i Muhammad; Usman, Mohammed Joda

    2017-01-01

    Cloud computing infrastructure is suitable for meeting the computational needs of large task sizes. Optimal scheduling of tasks in a cloud computing environment has been proved to be an NP-complete problem, hence the need for the application of heuristic methods. Several heuristic algorithms have been developed and used in addressing this problem, but choosing the appropriate algorithm for solving a task assignment problem of a particular nature is difficult since the methods are developed under different assumptions. Therefore, six rule-based heuristic algorithms are implemented and used to schedule autonomous tasks in homogeneous and heterogeneous environments with the aim of comparing their performance in terms of cost, degree of imbalance, makespan and throughput. First Come First Serve (FCFS), Minimum Completion Time (MCT), Minimum Execution Time (MET), Max-min, Min-min and Sufferage are the heuristic algorithms considered for the performance comparison and analysis of task scheduling in cloud computing.

  16. HPC-NMF: A High-Performance Parallel Algorithm for Nonnegative Matrix Factorization

    Energy Technology Data Exchange (ETDEWEB)

    2016-08-22

    NMF is a useful tool for many applications in different domains such as topic modeling in text mining, background separation in video analysis, and community detection in social networks. Despite its popularity in the data mining community, there is a lack of efficient distributed algorithms to solve the problem for big data sets. We propose a high-performance distributed-memory parallel algorithm that computes the factorization by iteratively solving alternating non-negative least squares (NLS) subproblems for $W$ and $H$. It maintains the data and factor matrices in memory (distributed across processors), uses MPI for interprocessor communication, and, in the dense case, provably minimizes communication costs (under mild assumptions). As opposed to previous implementations, our algorithm is also flexible: it performs well for both dense and sparse matrices, and allows the user to choose any one of the multiple algorithms for solving the updates to the low rank factors $W$ and $H$ within the alternating iterations.
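
    As a point of reference, the following single-node Python sketch shows the alternating non-negative least squares (ANLS) scheme the abstract describes; the distributed-memory, MPI-based aspects of HPC-NMF are not reproduced, and the matrix sizes and iteration count are placeholder assumptions.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    def anls_nmf(A, k, iters=20, seed=0):
        """Factor a non-negative matrix A (m x n) into W (m x k) and H (k x n)."""
        rng = np.random.default_rng(seed)
        m, n = A.shape
        W = rng.random((m, k))
        H = rng.random((k, n))
        for _ in range(iters):
            # Fix W, solve one non-negative least squares problem per column of A for H.
            for j in range(n):
                H[:, j], _ = nnls(W, A[:, j])
            # Fix H, solve one NNLS problem per row of A for W (transpose to reuse nnls).
            for i in range(m):
                W[i, :], _ = nnls(H.T, A[i, :])
        return W, H

    A = np.abs(np.random.default_rng(1).random((20, 12)))
    W, H = anls_nmf(A, k=3)
    print("relative error:", np.linalg.norm(A - W @ H) / np.linalg.norm(A))
    ```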

  17. An integrated environment for fast development and performance assessment of sonar image processing algorithms - SSIE

    DEFF Research Database (Denmark)

    Henriksen, Lars

    1996-01-01

    The sonar simulator integrated environment (SSIE) is a tool for developing high performance processing algorithms for single or sequences of sonar images. The tool is based on MATLAB, providing a very short lead time from concept to executable code and thereby assessment of the algorithms tested...... of the algorithms is the availability of sonar images. To accommodate this problem the SSIE has been equipped with a simulator capable of generating high fidelity sonar images for a given scene of objects, sea-bed, AUV path, etc. In the paper the main components of the SSIE are described and examples of different...... processing steps are given...

  18. A new computerized diagnostic algorithm for quantitative evaluation of binocular misalignment in patients with strabismus

    Science.gov (United States)

    Nam, Kyoung Won; Kim, In Young; Kang, Ho Chul; Yang, Hee Kyung; Yoon, Chang Ki; Hwang, Jeong Min; Kim, Young Jae; Kim, Tae Yun; Kim, Kwang Gi

    2012-10-01

    Accurate measurement of binocular misalignment between both eyes is important for proper preoperative management, surgical planning, and postoperative evaluation of patients with strabismus. In this study, we proposed a new computerized diagnostic algorithm that can calculate the angle of binocular eye misalignment photographically by using a dedicated three-dimensional eye model mimicking the structure of the natural human eye. To evaluate the performance of the proposed algorithm, eight healthy volunteers and eight individuals with strabismus were recruited in this study; the horizontal deviation angle, vertical deviation angle, and angle of eye misalignment were calculated, and the angular differences between the healthy and the strabismus groups were evaluated using the nonparametric Mann-Whitney test and the Pearson correlation test. The experimental results demonstrated a statistically significant difference between the healthy and strabismus groups (p = 0.015 < 0.05). The measurements of the two methods were highly correlated (r = 0.969, p < 0.05), supporting the use of the dedicated three-dimensional model of the natural human eye to diagnose non-invasively the severity of strabismus.

  19. Evaluation of reconstruction algorithms in SPECT neuroimaging: Pt. 1

    International Nuclear Information System (INIS)

    Heejoung Kim; Zeeberg, B.R.; Reba, R.C.

    1993-01-01

    In the presence of statistical noise, an iterative reconstruction algorithm (IRA) for the quantitative reconstruction of single-photon-emission computed tomographic (SPECT) brain images overcomes major limitations of applying the standard filtered back projection (FBP) reconstruction algorithm to projection data which have been degraded by convolution of the true radioactivity distribution with a finite-resolution distance-dependent detector response: (a) the non-uniformity within the grey (or white) matter voxels which results even though the true model is uniform within these voxels; (b) a significantly lower ratio of grey/white matter voxel values than in the true model; and (c) an inability to detect an altered radioactivity value within the grey (or white) matter voxels. It is normally expected that an algorithm which improves spatial resolution and quantitative accuracy might also increase the magnitude of the statistical noise in the reconstructed image. However, the noise properties in the IRA images are very similar to those in the FBP images. (Author)

  20. Aging evaluation of active components by using performance evaluation

    International Nuclear Information System (INIS)

    Jung, S. K.; Jin, T. E.; Kim, J. S.; Jung, I. S.; Kim, T. R.

    2003-01-01

    Risk analysis and performance evaluation methodology were applied to the aging evaluation of active components in the periodic safety review of Wolsung unit 1. We conclude that evaluation of performance is more effective in discriminating the aging degradation of active components than evaluation of the aging mechanism. It is essential to analyze the common cause failures of low-performance components in order to evaluate the adequacy of the present maintenance system. The past 10 years of failure history were used to establish the performance criteria, and the past 2 years of failure history were used to evaluate the recent performance condition. We analyzed the failure modes of the components to improve the maintenance system. Performance evaluation methodology is useful for the quantitative evaluation of aging degradation of active components, and analysis of repeated failures can provide useful feedback to the maintenance plan and interval.

  1. A Novel AHRS Inertial Sensor-Based Algorithm for Wheelchair Propulsion Performance Analysis

    OpenAIRE

    Jonathan Bruce Shepherd; Tomohito Wada; David Rowlands; Daniel Arthur James

    2016-01-01

    With the increasing rise of professionalism in sport, athletes, teams, and coaches are looking to technology to monitor performance in both games and training in order to find a competitive advantage. The use of inertial sensors has been proposed as a cost effective and adaptable measurement device for monitoring wheelchair kinematics; however, the outcomes are dependent on the reliability of the processing algorithms. Though there are a variety of algorithms that have been proposed to monito...

  2. Portfolio optimization and performance evaluation

    DEFF Research Database (Denmark)

    Juhl, Hans Jørn; Christensen, Michael

    2013-01-01

    Based on an exclusive business-to-business database comprising nearly 1,000 customers, the applicability of portfolio analysis is documented, and it is examined how such an optimization analysis can be used to explore the growth potential of a company. As opposed to any previous analyses, optimal...... customer portfolios are determined, and it is shown how marketing decision-makers can use this information in their marketing strategies to optimize the revenue growth of the company. Finally, our analysis is the first analysis which applies portfolio based methods to measure customer performance......, and it is shown how these performance measures complement the optimization analysis....

  3. Preliminary Evaluation of Intelligent Intention Estimation Algorithms for an Actuated Lower-Limb Exoskeleton

    Directory of Open Access Journals (Sweden)

    Mervin Chandrapal

    2013-02-01

    Full Text Available This paper describes the experimental testing of an actuated lower-limb exoskeleton. The exoskeleton is designed to alleviate the loading at the knee joint by supplying assistive torque. It is hypothesized that the support provided will reduce the muscular effort required to perform activities of daily living and thus facilitate the execution of these movements by those who previously had limited mobility. The exoskeleton is actuated by four pneumatic artificial muscles, each providing 150 N of pulling force to assist in the flexion and extension of the knee joint. The exoskeleton system estimates the user's intended motion using muscle activity information recorded from five thigh muscles, together with the knee angle. To experimentally evaluate the performance of the device, the exoskeleton was worn by an able-bodied user whilst performing the sit-to-stand-to-sit movement. In addition, three intention estimation algorithms were tested to determine their influence on the support provided. The results show a significant reduction in the user's muscle activity (≈ 20%) when assisted by the exoskeleton in a predictable manner.

  4. The Parameters Selection of PSO Algorithm influencing On performance of Fault Diagnosis

    Directory of Open Access Journals (Sweden)

    He Yan

    2016-01-01

    Full Text Available The particle swarm optimization (PSO) algorithm is an optimization algorithm based on swarm intelligence. The selection of PSO parameters plays an important role in the performance and efficiency of the algorithm. In this paper, the performance of PSO is analyzed as the control parameters vary, including particle number, acceleration constants, inertia weight and maximum limited velocity. PSO with dynamic parameters is then applied to neural network training for gearbox fault diagnosis, and the results obtained with different PSO parameters are compared and analyzed. Finally, some suggestions for parameter selection are proposed to improve the performance of PSO.
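
    A compact sketch of the canonical PSO update loop clarifies which parameters the abstract refers to (particle number, acceleration constants, inertia weight and the velocity clamp). The objective function and parameter values below are illustrative assumptions, not those used for the gearbox diagnosis experiments.

    ```python
    import numpy as np

    def pso(objective, dim, n_particles=30, iters=100,
            w=0.7, c1=1.5, c2=1.5, v_max=0.5, bounds=(-5.0, 5.0), seed=0):
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))      # particle positions
        v = np.zeros((n_particles, dim))                 # particle velocities
        pbest = x.copy()                                 # personal best positions
        pbest_val = np.array([objective(p) for p in x])
        gbest = pbest[np.argmin(pbest_val)].copy()       # global best position
        for _ in range(iters):
            r1, r2 = rng.random((2, n_particles, dim))
            # velocity update: inertia + cognitive (c1) + social (c2) terms
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            v = np.clip(v, -v_max, v_max)                # maximum limited velocity
            x = np.clip(x + v, lo, hi)
            vals = np.array([objective(p) for p in x])
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[np.argmin(pbest_val)].copy()
        return gbest, pbest_val.min()

    best_x, best_f = pso(lambda p: np.sum(p ** 2), dim=5)   # toy sphere objective
    print(best_x, best_f)
    ```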

  5. Evaluating judge performance in sport.

    Science.gov (United States)

    Looney, Marilyn A

    2004-01-01

    Many sports, such as, gymnastics, diving, ski jumping, and figure skating, use judges' scores to determine the winner of a competition. These judges use some type of rating scale when judging performances (e.g., figure skating: 0.0 - 6.0). Sport governing bodies have the responsibility of setting and enforcing quality control parameters for judge performance. Given the judging scandals in figure skating at the 1998 and 2002 Olympics, judge performance in sport is receiving greater scrutiny. The purpose of this article is to illustrate how results from Rasch analyses can be used to provide in-depth feedback to judges about their scoring patterns. Nine judges' scores for 20 pairs of figure skaters who competed at the 2002 Winter Olympics were analyzed using a four-faceted (skater pair ability, skating aspect difficulty, program difficulty, and judge severity) Rasch rating scale model that was not common to all judges. Fit statistics, the logical ordering of skating aspects, skating programs, and separation indices all indicated a good fit of the data to the model. The type of feedback that can be given to judges about their scoring pattern was illustrated for one judge (USA) whose performance was flagged as being unpredictable. Feedback included a detailed description of how the rating scale was used; for example, 10% of all marks given by the American judge were unexpected by the model (Z > |2|). Three figures illustrated differences between the judge's observed and expected marks arranged according to the pairs' skating order and final placement in the competition. Scores which may represent "nationalistic bias" or a skating order influence were flagged by looking at these figures. If sport governing bodies wish to improve the performance of their judges, they need to employ methods that monitor the internal consistency of each judge as a many-facet Rasch analysis does.

  6. Investigating the performance of neural network backpropagation algorithms for TEC estimations using South African GPS data

    Science.gov (United States)

    Habarulema, J. B.; McKinnell, L.-A.

    2012-05-01

    In this work, results obtained by investigating the application of different neural network backpropagation training algorithms are presented. This was done to assess the performance accuracy of each training algorithm in total electron content (TEC) estimations using identical datasets in models development and verification processes. Investigated training algorithms are standard backpropagation (SBP), backpropagation with weight delay (BPWD), backpropagation with momentum (BPM) term, backpropagation with chunkwise weight update (BPC) and backpropagation for batch (BPB) training. These five algorithms are inbuilt functions within the Stuttgart Neural Network Simulator (SNNS) and the main objective was to find out the training algorithm that generates the minimum error between the TEC derived from Global Positioning System (GPS) observations and the modelled TEC data. Another investigated algorithm is the MatLab based Levenberg-Marquardt backpropagation (L-MBP), which achieves convergence after the least number of iterations during training. In this paper, neural network (NN) models were developed using hourly TEC data (for 8 years: 2000-2007) derived from GPS observations over a receiver station located at Sutherland (SUTH) (32.38° S, 20.81° E), South Africa. Verification of the NN models for all algorithms considered was performed on both "seen" and "unseen" data. Hourly TEC values over SUTH for 2003 formed the "seen" dataset. The "unseen" dataset consisted of hourly TEC data for 2002 and 2008 over Cape Town (CPTN) (33.95° S, 18.47° E) and SUTH, respectively. The models' verification showed that all algorithms investigated provide comparable results statistically, but differ significantly in terms of time required to achieve convergence during input-output data training/learning. This paper therefore provides a guide to neural network users for choosing appropriate algorithms based on the availability of computation capabilities used for research.
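
    To make the distinction between the training variants concrete, here is a minimal sketch of backpropagation with a momentum term for a one-hidden-layer network. The architecture, learning rate and data are placeholders rather than the configuration used for the TEC models, and the SNNS and MatLab implementations compared in the study are not reproduced.

    ```python
    import numpy as np

    def train_mlp_momentum(X, y, hidden=10, lr=0.05, momentum=0.9, epochs=500, seed=0):
        rng = np.random.default_rng(seed)
        W1 = rng.normal(0, 0.5, (X.shape[1], hidden)); b1 = np.zeros(hidden)
        W2 = rng.normal(0, 0.5, (hidden, 1));          b2 = np.zeros(1)
        vW1 = np.zeros_like(W1); vb1 = np.zeros_like(b1)
        vW2 = np.zeros_like(W2); vb2 = np.zeros_like(b2)
        for _ in range(epochs):
            h = np.tanh(X @ W1 + b1)                 # forward pass through the hidden layer
            out = h @ W2 + b2
            err = out - y                            # gradient of squared error at the output
            gW2 = h.T @ err / len(X);  gb2 = err.mean(0)
            dh = (err @ W2.T) * (1 - h ** 2)         # backpropagate through tanh
            gW1 = X.T @ dh / len(X);   gb1 = dh.mean(0)
            # momentum update: new step = momentum * previous step - lr * gradient
            vW2 = momentum * vW2 - lr * gW2;  W2 += vW2
            vb2 = momentum * vb2 - lr * gb2;  b2 += vb2
            vW1 = momentum * vW1 - lr * gW1;  W1 += vW1
            vb1 = momentum * vb1 - lr * gb1;  b1 += vb1
        return W1, b1, W2, b2

    X = np.random.default_rng(1).uniform(-1, 1, (200, 2))   # toy inputs
    y = np.sin(3 * X[:, :1]) + 0.5 * X[:, 1:]               # toy target
    params = train_mlp_momentum(X, y)
    ```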

  7. Evaluation of multilayer perceptron algorithms for an analysis of network flow data

    Science.gov (United States)

    Bieniasz, Jedrzej; Rawski, Mariusz; Skowron, Krzysztof; Trzepiński, Mateusz

    2016-09-01

    The volume of information exchanged through IP networks is larger than ever and still growing. It creates a space for both benign and malicious activities. The latter raises awareness of network security devices, as well as of the network infrastructure and the system as a whole. One of the basic tools to prevent cyber attacks is the Network Intrusion Detection System (NIDS). A NIDS can be realized as a signature-based detector or an anomaly-based one. In the last few years the emphasis has been placed on the latter type, because of the possibility of applying smart and intelligent solutions. An ideal NIDS of the next generation should be composed of self-learning algorithms that can react to known and unknown malicious network activities, respectively. In this paper we evaluated a machine learning approach for detection of anomalies in IP network data represented as NetFlow records. We considered the Multilayer Perceptron (MLP) as the classifier and we used two types of learning algorithms - Backpropagation (BP) and Particle Swarm Optimization (PSO). This paper includes a comprehensive survey on determining the optimal MLP learning algorithm for the classification problem in application to network flow data. The performance, training time and convergence of the BP and PSO methods were compared. The results show that the PSO algorithm implemented by the authors outperformed other solutions if accuracy of classification is considered. The major disadvantage of PSO is training time, which may not be acceptable for larger data sets or in real network applications. At the end we compare some key findings with the results from other papers to show that in all cases the results from this study outperformed them.

  8. Evaluation of algorithms for photon depth of interaction estimation for the TRIMAGE PET component

    Energy Technology Data Exchange (ETDEWEB)

    Camarlinghi, Niccolo; Belcari, Nicola [University of Pisa (Italy); Cerello, Piergiorgio [University of Torino (Italy); Sportelli, Giancarlo [University of Pisa (Italy); Pennazio, Francesco [University of Torino (Italy); Zaccario, Emanuele; Del Guerra, Alberto [University of Pisa (Italy)

    2015-05-18

    The TRIMAGE consortium aims to develop a multimodal PET/MR/EEG brain scanner dedicated to the early diagnosis of schizophrenia and other mental health disorders. The PET component features a full ring made of 18 detectors, each one consisting of twelve 8x8 Silicon PhotoMultiplier (SiPM) tiles coupled to two segmented LYSO crystal matrices with staggered layers. In each module, the crystals belonging to the bottom layer are coupled one to one to the SiPMs, while each crystal of the top layer is coupled to four crystals of the bottom layer. This configuration allows the crystal thickness to be increased while reducing the depth-of-interaction uncertainty, as photons interacting in different layers are expected to produce different light patterns on the SiPMs. The PET scanner will implement the pixel/layer identification on a front-end FPGA. This will increase the effective bandwidth, while at the same time setting restrictions on the complexity of the algorithms to be implemented. In this work two algorithms whose implementation is feasible directly on an FPGA are presented and evaluated. The first algorithm implements a method based on adaptive thresholding, while the other uses a linear Support Vector Machine (SVM) trained to distinguish the light patterns coming from the two different layers. The validation of the algorithm performance is carried out using simulated data generated with the GAMOS Monte Carlo. The obtained results show that the achieved accuracy in layer and pixel identification is above 90% for both of the proposed approaches.
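
    As an illustration of the SVM-based approach (the adaptive-thresholding variant is not shown), the sketch below trains a linear SVM to separate top-layer from bottom-layer events using toy light-pattern features. The feature definitions, class separation and use of scikit-learn are assumptions made purely for demonstration; they are not taken from the TRIMAGE front-end implementation.

    ```python
    import numpy as np
    from sklearn.svm import LinearSVC
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import accuracy_score

    rng = np.random.default_rng(0)
    n = 2000
    # Hypothetical per-event features: fraction of charge in the hottest SiPM and the
    # RMS spread of the light pattern; top-layer events are assumed to share light more widely.
    frac_bottom  = rng.normal(0.80, 0.05, n)   # class 0: bottom layer (narrow pattern)
    width_bottom = rng.normal(1.2, 0.2, n)
    frac_top     = rng.normal(0.55, 0.05, n)   # class 1: top layer (wider pattern)
    width_top    = rng.normal(2.0, 0.2, n)

    X = np.concatenate([np.column_stack([frac_bottom, width_bottom]),
                        np.column_stack([frac_top, width_top])])
    y = np.concatenate([np.zeros(n), np.ones(n)])

    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
    clf = LinearSVC(C=1.0).fit(X_tr, y_tr)
    print("layer identification accuracy:", accuracy_score(y_te, clf.predict(X_te)))
    ```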

  9. Design and Large-Scale Evaluation of Educational Games for Teaching Sorting Algorithms

    Science.gov (United States)

    Battistella, Paulo Eduardo; von Wangenheim, Christiane Gresse; von Wangenheim, Aldo; Martina, Jean Everson

    2017-01-01

    The teaching of sorting algorithms is an essential topic in undergraduate computing courses. Typically the courses are taught through traditional lectures and exercises involving the implementation of the algorithms. As an alternative, this article presents the design and evaluation of three educational games for teaching Quicksort and Heapsort.…

  10. Profiling high performance dense linear algebra algorithms on multicore architectures for power and energy efficiency

    KAUST Repository

    Ltaief, Hatem; Luszczek, Piotr R.; Dongarra, Jack

    2011-01-01

    This paper presents the power profile of two high performance dense linear algebra libraries i.e., LAPACK and PLASMA. The former is based on block algorithms that use the fork-join paradigm to achieve parallel performance. The latter uses fine

  11. Evaluation of in silico algorithms for use with ACMG/AMP clinical variant interpretation guidelines.

    Science.gov (United States)

    Ghosh, Rajarshi; Oak, Ninad; Plon, Sharon E

    2017-11-28

    The American College of Medical Genetics and Genomics and the Association for Molecular Pathology (ACMG/AMP) variant classification guidelines for clinical reporting are widely used in diagnostic laboratories for variant interpretation. The ACMG/AMP guidelines recommend complete concordance of predictions among all in silico algorithms used, without specifying the number or types of algorithms. The subjective nature of this recommendation contributes to discordance of variant classification among clinical laboratories and prevents definitive classification of variants. Using 14,819 benign or pathogenic missense variants from the ClinVar database, we compared the performance of 25 algorithms across datasets differing in distinct biological and technical variables. There was wide variability in concordance among different combinations of algorithms, with particularly low concordance for benign variants. We also identify a previously unreported source of error in variant interpretation (false concordance) where concordant in silico predictions are opposite to the evidence provided by other sources. We identified recently developed algorithms with high predictive power that are robust to variables such as disease mechanism, gene constraint, and mode of inheritance, although poorer performing algorithms are more frequently used based on review of the clinical genetics literature (2011-2017). Our analyses identify algorithms with high performance characteristics independent of underlying disease mechanisms. We describe combinations of algorithms with increased concordance that should improve in silico algorithm usage during assessment of clinically relevant variants using the ACMG/AMP guidelines.
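
    The concordance bookkeeping described above can be made concrete with a small sketch: given per-variant calls from several tools, flag fully concordant variants and the "false concordance" cases where a unanimous in silico call contradicts the reference assertion. The tool names and calls below are hypothetical and are not taken from the study's dataset.

    ```python
    import pandas as pd

    df = pd.DataFrame({
        "clinvar": ["pathogenic", "benign", "benign",     "pathogenic"],
        "tool_a":  ["pathogenic", "benign", "pathogenic", "benign"],
        "tool_b":  ["pathogenic", "benign", "pathogenic", "benign"],
        "tool_c":  ["pathogenic", "pathogenic", "pathogenic", "benign"],
    })
    tools = ["tool_a", "tool_b", "tool_c"]

    concordant = df[tools].nunique(axis=1) == 1                       # all tools agree
    false_concordant = concordant & (df[tools[0]] != df["clinvar"])   # agree, but contradict ClinVar
    print(df.assign(concordant=concordant, false_concordant=false_concordant))
    ```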

  12. Metric Accuracy Evaluation of Dense Matching Algorithms in Archeological Applications

    Directory of Open Access Journals (Sweden)

    C. Re

    2011-12-01

    Full Text Available In the cultural heritage field, the recording and documentation of small and medium size objects with very detailed Digital Surface Models (DSM) is readily possible through the use of high resolution and high precision triangulation laser scanners. 3D surface recording of archaeological objects can be easily achieved in museums; however, this type of record can be quite expensive. In many cases photogrammetry can provide a viable alternative for the generation of DSMs. The photogrammetric procedure has some benefits with respect to laser surveying. The research described in this paper sets out to verify the reconstruction accuracy of DSMs of some archaeological artifacts obtained by photogrammetric survey. The experimentation has been carried out on some objects preserved in the Petrie Museum of Egyptian Archaeology at University College London (UCL). DSMs produced by two photogrammetric software packages are compared with the digital 3D model obtained by a state of the art triangulation color laser scanner. Intercomparison between the generated DSMs has allowed an evaluation of the metric accuracy of the photogrammetric approach applied to archaeological documentation and of the precision performance of the two software packages.

  13. High performance graphics processor based computed tomography reconstruction algorithms for nuclear and other large scale applications.

    Energy Technology Data Exchange (ETDEWEB)

    Jimenez, Edward S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Orr, Laurel J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Thompson, Kyle R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2013-09-01

    The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks, such as memory latencies, occur that are non-existent in single-threaded algorithms. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.

  14. A comparison and evaluation of five biclustering algorithms by quantifying goodness of biclusters for gene expression data

    Directory of Open Access Journals (Sweden)

    Li Li

    2012-07-01

    Full Text Available Abstract Background Several biclustering algorithms have been proposed to identify biclusters, in which genes share similar expression patterns across a number of conditions. However, different algorithms would yield different biclusters and further lead to distinct conclusions. Therefore, some testing and comparisons between these algorithms are strongly required. Methods In this study, five biclustering algorithms (i.e. BIMAX, FABIA, ISA, QUBIC and SAMBA) were compared with each other in the cases where they were used to handle two expression datasets (GDS1620 and pathway) with different dimensions in Arabidopsis thaliana (A. thaliana). GO (gene ontology) annotation and PPI (protein-protein interaction) network were used to verify the corresponding biological significance of biclusters from the five algorithms. To compare the algorithms' performance and evaluate the quality of identified biclusters, two scoring methods, namely weighted enrichment (WE) scoring and PPI scoring, were proposed in our study. For each dataset, after combining the scores of all biclusters into one unified ranking, we could evaluate the performance and behavior of the five biclustering algorithms in a better way. Results Both the WE and PPI scoring methods have been proved effective in validating the biological significance of the biclusters, and a significantly positive correlation between the two sets of scores demonstrates the consistency of these two methods. A comparative study of the above five algorithms has revealed that: (1) ISA is the most effective one among the five algorithms on the dataset of GDS1620 and BIMAX outperforms the other algorithms on the dataset of pathway. (2) Both ISA and BIMAX are data-dependent. The former one does not work well on the datasets with few genes, while the latter one holds well for the datasets with more conditions. (3) FABIA and QUBIC perform poorly in this study and they may be suitable to large datasets with more genes and

  15. Evaluation of Item-Based Top-N Recommendation Algorithms

    Science.gov (United States)

    2000-09-15

    Furthermore, one of the advantages of the item-based algorithm is that it has much smaller computational requirements ... items, utilized by many e-commerce sites, cannot take advantage of pre-computed user-to-user similarities. Consequently, even though the throughput of ...

    Dataset     Users   Items   Non-Zeros
    ecommerce    6667   17491      91222
    catalog     50918   39080     435524
    ccard       42629   68793     398619
    skills       4374    2125      82612
    movielens     943    1682     100000
    Table 1: The ...
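
    For context, a minimal sketch of the item-based top-N scheme being evaluated: pre-compute item-item cosine similarities from the user-item matrix, then score unseen items for a user by their similarity to the items the user already has. The toy matrix below is hypothetical, and details of the actual algorithm (neighbourhood size, similarity normalisation) are omitted.

    ```python
    import numpy as np

    def item_similarities(R):
        """R is a binary user-item matrix (rows = users, columns = items)."""
        norms = np.linalg.norm(R, axis=0) + 1e-12
        sim = (R.T @ R) / np.outer(norms, norms)   # cosine similarity between item columns
        np.fill_diagonal(sim, 0.0)                 # an item should not recommend itself
        return sim

    def top_n(user_row, sim, n=3):
        scores = sim @ user_row                    # aggregate similarity to the user's items
        scores[user_row > 0] = -np.inf             # never recommend items already owned
        return np.argsort(scores)[::-1][:n]

    R = np.array([[1, 1, 0, 0, 1],
                  [0, 1, 1, 0, 0],
                  [1, 0, 1, 1, 0],
                  [0, 1, 0, 1, 1]], dtype=float)
    sim = item_similarities(R)
    print(top_n(R[0], sim, n=2))                   # top-2 recommendations for the first user
    ```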

  16. Loudspeaker Design and Performance Evaluation

    Science.gov (United States)

    Mäkivirta, Aki Vihtori

    A loudspeaker comprises transducers converting an electrical driving signal into sound pressure, an enclosure working as a holder for transducers, front baffle and box to contain and eliminate the rear-radiating audio signal, and electronic components. Modeling of transducers as well as enclosures is treated in Chap. 32 of this handbook. The purpose of the present chapter is to shed light on the design choices and options for the electronic circuits conditioning the electrical signal fed into loudspeaker transducers in order to optimize the acoustic performance of the loudspeaker.

  17. Performance comparison of attitude determination, attitude estimation, and nonlinear observers algorithms

    Science.gov (United States)

    MOHAMMED, M. A. SI; BOUSSADIA, H.; BELLAR, A.; ADNANE, A.

    2017-01-01

    This paper presents a brief synthesis and useful performance analysis of different attitude filtering algorithms (attitude determination algorithms, attitude estimation algorithms, and nonlinear observers) applied to a Low Earth Orbit satellite in terms of accuracy, convergence time, amount of memory, and computation time. The latter is calculated in two ways, using a personal computer and also using the On-Board Computer 750 (OBC 750) that is being used in many SSTL Earth observation missions. This comparative study can serve as a design aid for choosing among attitude determination, attitude estimation, and attitude observer algorithms. The simulation results clearly indicate that the nonlinear observer is the most logical choice.

  18. Performance Analysis of Blind Beamforming Algorithms in Adaptive Antenna Array in Rayleigh Fading Channel Model

    International Nuclear Information System (INIS)

    Yasin, M; Akhtar, Pervez; Pathan, Amir Hassan

    2013-01-01

    In this paper, we analyze the performance of adaptive blind algorithms – i.e. the Kaiser Constant Modulus Algorithm (KCMA) and Hamming CMA (HAMCMA) – against CMA in a wireless cellular communication system using a digital modulation technique. These blind algorithms are used in the digital signal processor of an adaptive antenna to make it smart and to change the weights of the antenna array system dynamically. The simulation results revealed that KCMA and HAMCMA provide the minimum mean square error (MSE), with 1.247 dB and 1.077 dB antenna gain enhancement and a 75% reduction in bit error rate (BER), respectively, over that of CMA. Therefore, the KCMA and HAMCMA algorithms give a cost-effective solution for a communication system.
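
    As background, the sketch below shows the plain CMA stochastic-gradient weight update on which KCMA and HAMCMA build (the Kaiser/Hamming windowing that distinguishes them is not shown). The array geometry, step size and signal model are illustrative assumptions, not the simulation setup of the paper.

    ```python
    import numpy as np

    def cma_beamformer(X, mu=0.01, R2=1.0):
        """X: snapshots of shape (num_snapshots, num_elements), complex baseband."""
        n_snap, n_elem = X.shape
        w = np.zeros(n_elem, dtype=complex)
        w[0] = 1.0                                   # initialise to a single active element
        for x in X:
            y = np.vdot(w, x)                        # array output y = w^H x
            e = y * (np.abs(y) ** 2 - R2)            # constant-modulus error signal
            w = w - mu * np.conj(e) * x              # stochastic-gradient weight update
        return w

    # Toy example: one unit-modulus source at 20 degrees on a 4-element half-wavelength array.
    rng = np.random.default_rng(0)
    steer = np.exp(1j * np.pi * np.arange(4) * np.sin(np.deg2rad(20)))
    s = np.exp(1j * rng.uniform(0, 2 * np.pi, 5000))          # constant-modulus symbols
    noise = 0.05 * (rng.normal(size=(5000, 4)) + 1j * rng.normal(size=(5000, 4)))
    X = np.outer(s, steer) + noise
    w = cma_beamformer(X)
    ```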

  19. Evaluation of Droplet Splashing Algorithm in LEWICE 3.0

    Science.gov (United States)

    Homenko, Hilary N.

    2004-01-01

    The Icing Branch at NASA Glenn Research Center has developed a computer program to simulate ice formation on the leading edge of an aircraft wing during flight through cold, moist air. As part of the branch's current research, members have developed software known as LEWICE. This program is capable of predicting the formation of ice under designated weather conditions. The success of LEWICE is an asset to airplane manufacturers, ice protection system manufacturers, and the airline industry. Simulating ice formation in the tunnel and in flight is costly and time-consuming. However, the danger of in-flight icing continues to be a concern for both commercial and military pilots. The LEWICE software is a step towards inexpensive and time-efficient prediction of ice collection. In the most recent version of the program, LEWICE contains an algorithm for droplet splashing. Droplet splashing is a natural occurrence that affects the accumulation of ice on aircraft surfaces. At impingement, water droplets lose a portion of their mass to splashing. With part of each droplet joining the airflow and failing to freeze, early versions of LEWICE without the splashing algorithm over-predicted the collection of ice on the leading edge. The objective of my project was to determine whether the revised version of LEWICE accurately reflected the ice collection data obtained from the Icing Research Tunnel (IRT). The experimental data from the IRT was collected by Mark Potapczuk in January, March and July of 2001 and April and December of 2002. Experimental data points were the result of ice tracings conducted shortly after testing in the tunnel. Run sheets, which included a record of velocity, temperature, liquid water content and droplet diameter, served as the input of the LEWICE computer program. Parameters identical to the tunnel conditions were used to run LEWICE 2.0 and LEWICE 3.0. The results from the IRT and both versions of LEWICE were compared graphically. After entering the raw

  20. Evaluation channel performance in multichannel environments

    NARCIS (Netherlands)

    Gensler, S.; Dekimpe, M.; Skiera, B.

    2007-01-01

    Evaluating channel performance is crucial for actively managing multiple sales channels, and requires understanding the customers' channel preferences. Two key components of channel performance are (i) the existing customers' intrinsic loyalty to a particular channel and (ii) the channel's ability

  1. Model Performance Evaluation and Scenario Analysis (MPESA)

    Science.gov (United States)

    Model Performance Evaluation and Scenario Analysis (MPESA) assesses the performance with which models predict time series data. The tool was developed for the Hydrological Simulation Program-Fortran (HSPF) and the Stormwater Management Model (SWMM).

  2. Performance evaluation of Louisiana superpave mixtures.

    Science.gov (United States)

    2008-12-01

    This report documents the performance of Louisiana Superpave mixtures through laboratory mechanistic tests, mixture : volumetric properties, gradation analysis, and early field performance. Thirty Superpave mixtures were evaluated in this : study. Fo...

  3. Performance evaluation of ventilation radiators

    International Nuclear Information System (INIS)

    Myhren, Jonn Are; Holmberg, Sture

    2013-01-01

    A ventilation radiator is a combined ventilation and heat emission unit currently of interest due to its potential for increasing energy efficiency in exhaust-ventilated buildings with warm water heating. This paper presents results of performance tests of several ventilation radiator models conducted under controlled laboratory conditions. The purpose of the study was to validate results achieved by Computational Fluid Dynamics (CFD) in an earlier study and identify possible improvements in the performance of such systems. The main focus was on heat transfer from internal convection fins, but comfort and health aspects related to ventilation rates and air temperatures were also considered. The general results from the CFD simulations were confirmed; the heat output of ventilation radiators may be improved by at least 20% without sacrificing ventilation efficiency or thermal comfort. Improved thermal efficiency of ventilation radiators allows a lower supply water temperature and energy savings both for heating up and distribution of warm water in heat pumps or district heating systems. A secondary benefit is that a high ventilation rate can be maintained all year around without risk for cold draught. -- Highlights: ► Low temperature heat emitters are currently of interest due to their potential for increasing energy efficiency. ► A ventilation radiator is a combined ventilation and heat emission unit which can be adapted to low temperature heating systems. ► We examine how ventilation radiators can be made to be more efficient in terms of energy consumption and thermal comfort. ► Current work focuses on heat transfer mechanisms and convection fin configuration of ventilation radiators

  4. Assessing the Performance of a Machine Learning Algorithm in Identifying Bubbles in Dust Emission

    Science.gov (United States)

    Xu, Duo; Offner, Stella S. R.

    2017-12-01

    Stellar feedback created by radiation and winds from massive stars plays a significant role in both physical and chemical evolution of molecular clouds. This energy and momentum leaves an identifiable signature (“bubbles”) that affects the dynamics and structure of the cloud. Most bubble searches are performed “by eye,” which is usually time-consuming, subjective, and difficult to calibrate. Automatic classifications based on machine learning make it possible to perform systematic, quantifiable, and repeatable searches for bubbles. We employ a previously developed machine learning algorithm, Brut, and quantitatively evaluate its performance in identifying bubbles using synthetic dust observations. We adopt magnetohydrodynamics simulations, which model stellar winds launching within turbulent molecular clouds, as an input to generate synthetic images. We use a publicly available three-dimensional dust continuum Monte Carlo radiative transfer code, HYPERION, to generate synthetic images of bubbles in three Spitzer bands (4.5, 8, and 24 μm). We designate half of our synthetic bubbles as a training set, which we use to train Brut along with citizen-science data from the Milky Way Project (MWP). We then assess Brut’s accuracy using the remaining synthetic observations. We find that Brut’s performance after retraining increases significantly, and it is able to identify yellow bubbles, which are likely associated with B-type stars. Brut continues to perform well on previously identified high-score bubbles, and over 10% of the MWP bubbles are reclassified as high-confidence bubbles, which were previously marginal or ambiguous detections in the MWP data. We also investigate the influence of the size of the training set, dust model, evolutionary stage, and background noise on bubble identification.

  5. Evaluation of Multiple Kernel Learning Algorithms for Crop Mapping Using Satellite Image Time-Series Data

    Science.gov (United States)

    Niazmardi, S.; Safari, A.; Homayouni, S.

    2017-09-01

    Crop mapping through classification of Satellite Image Time-Series (SITS) data can provide very valuable information for several agricultural applications, such as crop monitoring, yield estimation, and crop inventory. However, SITS data classification is not straightforward, because different images of a SITS dataset carry different levels of information regarding the classification problem. Moreover, SITS data are four-dimensional and cannot be classified using conventional classification algorithms. To address these issues, in this paper we present a classification strategy based on Multiple Kernel Learning (MKL) algorithms for SITS data classification. In this strategy, different kernels are initially constructed from different images of the SITS data and are then combined into a composite kernel using the MKL algorithms. The composite kernel, once constructed, can be used for the classification of the data using kernel-based classification algorithms. We compared the computational time and the classification performance of the proposed classification strategy using different MKL algorithms for the purpose of crop mapping. The considered MKL algorithms are: MKL-Sum, SimpleMKL, LPMKL and Group-Lasso MKL algorithms. The experimental tests of the proposed strategy on two SITS data sets, acquired by SPOT satellite sensors, showed that this strategy was able to provide better performance when compared to the standard classification algorithm. The results also showed that the optimization method of the MKL algorithms used affects both the computational time and the classification accuracy of this strategy.
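
    To illustrate the composite-kernel idea, the sketch below builds one RBF kernel per acquisition date and combines them with fixed, uniform weights into a precomputed kernel for an SVM. The MKL algorithms named above (MKL-Sum, SimpleMKL, LPMKL, Group-Lasso MKL) learn these weights instead of fixing them, which is not shown here, and the data are synthetic placeholders rather than SPOT imagery.

    ```python
    import numpy as np
    from sklearn.metrics.pairwise import rbf_kernel
    from sklearn.svm import SVC

    def composite_kernel(series_a, series_b, weights, gamma=0.5):
        """series_*: lists with one (n_samples, n_bands) array per acquisition date."""
        return sum(w * rbf_kernel(Xa, Xb, gamma=gamma)
                   for w, Xa, Xb in zip(weights, series_a, series_b))

    rng = np.random.default_rng(0)
    dates = 4
    train = [rng.normal(size=(60, 5)) for _ in range(dates)]   # hypothetical SITS samples
    test = [rng.normal(size=(20, 5)) for _ in range(dates)]
    y_train = rng.integers(0, 3, 60)                            # hypothetical crop labels
    weights = np.full(dates, 1.0 / dates)                       # uniform kernel combination

    clf = SVC(kernel="precomputed").fit(composite_kernel(train, train, weights), y_train)
    pred = clf.predict(composite_kernel(test, train, weights))
    print(pred)
    ```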

  6. Statistical evaluation of diagnostic performance topics in ROC analysis

    CERN Document Server

    Zou, Kelly H; Bandos, Andriy I; Ohno-Machado, Lucila; Rockette, Howard E

    2016-01-01

    Statistical evaluation of diagnostic performance in general and Receiver Operating Characteristic (ROC) analysis in particular are important for assessing the performance of medical tests and statistical classifiers, as well as for evaluating predictive models or algorithms. This book presents innovative approaches in ROC analysis, which are relevant to a wide variety of applications, including medical imaging, cancer research, epidemiology, and bioinformatics. Statistical Evaluation of Diagnostic Performance: Topics in ROC Analysis covers areas including monotone-transformation techniques in parametric ROC analysis, ROC methods for combined and pooled biomarkers, Bayesian hierarchical transformation models, sequential designs and inferences in the ROC setting, predictive modeling, multireader ROC analysis, and free-response ROC (FROC) methodology. The book is suitable for graduate-level students and researchers in statistics, biostatistics, epidemiology, public health, biomedical engineering, radiology, medi...

  7. A Performance Evaluation of Online Warehouse Update Algorithms

    Science.gov (United States)

    1998-01-01

    able to present a fully consistent version of the warehouse to the queries while the warehouse is being updated. Multiversioning has been used ... LST97]). Specialized multiversion access structures have also been proposed ([LS89, LS90, dBS96, BC97, VV97, MOPW98]). In the context of OLTP systems ... collection processes. 2.1 Multiversioning. MVNL supports multiple versions by using Time Travel ([Sto87]). Each row has two extra attributes, Tmin

  8. Performance Evaluation of Proportional Fair Scheduling Algorithm with Measured Channels

    DEFF Research Database (Denmark)

    Sørensen, Troels Bundgaard; Pons, Manuel Rubio

    2005-01-01

    subjected to measured channel traces. Specifically, we applied measured signal fading recorded from GSM cell phone users making calls on an indoor wireless office system. Different from reference channel models, these measured channels have much more irregular fading between users, which as we show...

  9. Performance evaluation of spot detection algorithms in fluorescence microscopy images

    CSIR Research Space (South Africa)

    Mabaso, M

    2012-10-01

    Full Text Available triggered the development of a highly sophisticated imaging tool known as fluorescence microscopy. This is used to visualise and study intracellular processes. The use of fluorescence microscopy and a specific staining method make biological molecules... was first used in astronomical applications [2] to detect isotropic objects, and was then introduced to biological applications [3]. Olivio-Marin[3] approached the problem of feature extraction based on undecimated wavelet representation of the image...

  10. Design and evaluation of basic standard encryption algorithm modules using nanosized complementary metal oxide semiconductor molecular circuits

    Science.gov (United States)

    Masoumi, Massoud; Raissi, Farshid; Ahmadian, Mahmoud; Keshavarzi, Parviz

    2006-01-01

    We are proposing that the recently proposed semiconductor-nanowire-molecular architecture (CMOL) is an optimum platform on which to realize encryption algorithms. The basic modules for the advanced encryption standard algorithm (Rijndael) have been designed using the CMOL architecture. The performance of this design has been evaluated with respect to chip area and speed. It is observed that CMOL provides considerable improvement over implementations with a regular CMOS architecture, even with a 20% defect rate. Pseudo-optimum gate placement and routing are provided for the Rijndael building blocks, and the possibility of designing high speed, attack tolerant and long key encryptions is discussed.

  11. Positioning performance analysis of the time sum of arrival algorithm with error features

    Science.gov (United States)

    Gong, Feng-xun; Ma, Yan-qiu

    2018-03-01

    The theoretical positioning accuracy of multilateration (MLAT) with the time difference of arrival (TDOA) algorithm is very high. However, there are some problems in practical applications. Here we analyze the localization performance of the time sum of arrival (TSOA) algorithm in terms of the root mean square error (RMSE) and geometric dilution of precision (GDOP) in an additive white Gaussian noise (AWGN) environment. The TSOA localization model is constructed and used to present the distribution of the location ambiguity region with 4 base stations. The location performance analysis then starts from the 4-base-station case by calculating the RMSE and GDOP variation. Subsequently, the performance trends of the TSOA location algorithm are shown as the location parameters are changed in terms of the number of base stations, base station layout and so on, revealing the TSOA location characteristics and performance. The RMSE and GDOP trends prove the anti-noise performance and robustness of the TSOA localization algorithm. The TSOA anti-noise performance will be used for reducing the blind zone and the false location rate of MLAT systems.

  12. Performance of the reconstruction algorithms of the FIRST experiment pixel sensors vertex detector

    Energy Technology Data Exchange (ETDEWEB)

    Rescigno, R., E-mail: regina.rescigno@iphc.cnrs.fr [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Finck, Ch.; Juliani, D. [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Spiriti, E. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali di Frascati (Italy); Istituto Nazionale di Fisica Nucleare - Sezione di Roma 3 (Italy); Baudot, J. [Institut Pluridisciplinaire Hubert Curien, 23 rue du Loess, 67037 Strasbourg Cedex 2 (France); Abou-Haidar, Z. [CNA, Sevilla (Spain); Agodi, C. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Alvarez, M.A.G. [CNA, Sevilla (Spain); Aumann, T. [GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt (Germany); Battistoni, G. [Istituto Nazionale di Fisica Nucleare - Sezione di Milano (Italy); Bocci, A. [CNA, Sevilla (Spain); Böhlen, T.T. [European Organization for Nuclear Research CERN, Geneva (Switzerland); Medical Radiation Physics, Karolinska Institutet and Stockholm University, Stockholm (Sweden); Boudard, A. [CEA-Saclay, IRFU/SPhN, Gif sur Yvette Cedex (France); Brunetti, A.; Carpinelli, M. [Istituto Nazionale di Fisica Nucleare - Sezione di Cagliari (Italy); Università di Sassari (Italy); Cirrone, G.A.P. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Cortes-Giraldo, M.A. [Departamento de Fisica Atomica, Molecular y Nuclear, University of Sevilla, 41080-Sevilla (Spain); Cuttone, G.; De Napoli, M. [Istituto Nazionale di Fisica Nucleare - Laboratori Nazionali del Sud (Italy); Durante, M. [GSI Helmholtzzentrum für Schwerionenforschung, Darmstadt (Germany); and others

    2014-12-11

    Hadrontherapy treatments use charged particles (e.g. protons and carbon ions) to treat tumors. During a therapeutic treatment with carbon ions, the beam undergoes nuclear fragmentation processes giving rise to significant yields of secondary charged particles. An accurate prediction of these production rates is necessary to estimate precisely the dose deposited into the tumours and the surrounding healthy tissues. Nowadays, a limited set of double differential carbon fragmentation cross-section is available. Experimental data are necessary to benchmark Monte Carlo simulations for their use in hadrontherapy. The purpose of the FIRST experiment is to study nuclear fragmentation processes of ions with kinetic energy in the range from 100 to 1000 MeV/u. Tracks are reconstructed using information from a pixel silicon detector based on the CMOS technology. The performances achieved using this device for hadrontherapy purpose are discussed. For each reconstruction step (clustering, tracking and vertexing), different methods are implemented. The algorithm performances and the accuracy on reconstructed observables are evaluated on the basis of simulated and experimental data.

  13. Performance of rapid tests and algorithms for HIV screening in Abidjan, Ivory Coast.

    Science.gov (United States)

    Loukou, Y G; Cabran, M A; Yessé, Zinzendorf Nanga; Adouko, B M O; Lathro, S J; Agbessi-Kouassi, K B T

    2014-01-01

    Seven rapid diagnostic tests (RDTs) for HIV were evaluated by a panel group that collected serum samples from patients in Abidjan (HIV-1 = 203, HIV-2 = 25, HIV-dual = 25, HIV = 305). Kit performances were assessed against the reference technique (enzyme-linked immunosorbent assay). The following RDTs showed a sensitivity of 100% and a specificity higher than 99%: Determine, Oraquick, SD Bioline, BCP, and Stat-Pak. These kits were used to establish infection screening strategies. The combination of 2 or 3 of these tests in series or parallel algorithms showed that series combinations with 2 tests (Oraquick and Bioline) and 3 tests (Determine, BCP, and Stat-Pak) gave the best performances (sensitivity, specificity, positive predictive value, and negative predictive value of 100%). However, the combination with 2 tests appeared to be more onerous than the combination with 3 tests. The combination with Determine, BCP, and Stat-Pak tests serving as a tiebreaker could be an alternative for HIV/AIDS serological screening in Abidjan.
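
    For intuition about how two-test algorithms combine, the sketch below gives the textbook serial and parallel combination rules under a conditional-independence assumption. This is a simplification for illustration only (the study measured combined performance empirically), and the numeric inputs are hypothetical rather than the values reported above.

    ```python
    def serial(se1, sp1, se2, sp2):
        # positive only if both tests are positive (second test run on first-test positives)
        return se1 * se2, sp1 + (1 - sp1) * sp2

    def parallel(se1, sp1, se2, sp2):
        # positive if either test is positive
        return 1 - (1 - se1) * (1 - se2), sp1 * sp2

    # hypothetical per-test sensitivity/specificity pairs
    print("serial   (sens, spec):", serial(0.99, 0.995, 1.00, 0.99))
    print("parallel (sens, spec):", parallel(0.99, 0.995, 1.00, 0.99))
    ```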

  14. Method to evaluate steering and alignment algorithms for controlling emittance growth

    International Nuclear Information System (INIS)

    Adolphsen, C.; Raubenheimer, T.

    1993-04-01

    Future linear colliders will likely use sophisticated beam-based alignment and/or steering algorithms to control the growth of the beam emittance in the linac. In this paper, a mathematical framework is presented which simplifies the evaluation of the effectiveness of these algorithms. As an application, a quad alignment that uses beam data taken with the nominal linac optics, and with a scaled optics, is evaluated in terms of the dispersive emittance growth remaining after alignment

  15. Evaluation of global synchronization for iterative algebra algorithms on many-core

    KAUST Repository

    ul Hasan Khan, Ayaz; Al-Mouhamed, Mayez; Firdaus, Lutfi A.

    2015-01-01

    © 2015 IEEE. Massively parallel computing is applied extensively in various scientific and engineering domains. With the growing interest in many-core architectures and due to the lack of explicit support for inter-block synchronization specifically in GPUs, synchronization becomes necessary to minimize inter-block communication time. In this paper, we have proposed two new inter-block synchronization techniques: 1) Relaxed Synchronization, and 2) Block-Query Synchronization. These schemes are used in implementing numerical iterative solvers where computation/communication overlap is one optimization used to enhance application performance. We have evaluated and analyzed the performance of the proposed synchronization techniques using a Jacobi Iterative Solver in comparison to the state-of-the-art inter-block lock-free synchronization techniques. We have achieved about 1-8% performance improvement in terms of execution time over lock-free synchronization depending on the problem size and the number of thread blocks. We have also evaluated the proposed algorithm on GPU and MIC architectures and obtained about 8-26% performance improvement over the barrier synchronization available in the OpenMP programming environment depending on the problem size and number of cores used.

  16. Evaluation of global synchronization for iterative algebra algorithms on many-core

    KAUST Repository

    ul Hasan Khan, Ayaz

    2015-06-01

    © 2015 IEEE. Massively parallel computing is applied extensively in various scientific and engineering domains. With the growing interest in many-core architectures and due to the lack of explicit support for inter-block synchronization specifically in GPUs, synchronization becomes necessary to minimize inter-block communication time. In this paper, we have proposed two new inter-block synchronization techniques: 1) Relaxed Synchronization, and 2) Block-Query Synchronization. These schemes are used in implementing numerical iterative solvers where computation/communication overlap is one optimization used to enhance application performance. We have evaluated and analyzed the performance of the proposed synchronization techniques using a Jacobi Iterative Solver in comparison to the state-of-the-art inter-block lock-free synchronization techniques. We have achieved about 1-8% performance improvement in terms of execution time over lock-free synchronization depending on the problem size and the number of thread blocks. We have also evaluated the proposed algorithm on GPU and MIC architectures and obtained about 8-26% performance improvement over the barrier synchronization available in the OpenMP programming environment depending on the problem size and number of cores used.

  17. Evaluation of margining algorithms in commercial treatment planning systems

    International Nuclear Information System (INIS)

    Pooler, Alistair M.; Mayles, Helen M.; Naismith, Olivia F.; Sage, John P.; Dearnaley, David P.

    2008-01-01

    Introduction: During commissioning of the Pinnacle (Philips) treatment planning system (TPS) the margining algorithm was investigated and was found to produce larger PTVs than Plato (Nucletron) for identical GTVs. Subsequent comparison of PTV volumes resulting from the QA outlining exercise for the CHHIP (Conventional or Hypofractionated High Dose IMRT for Prostate Ca.) trial confirmed that there were differences in TPS's margining algorithms. Margining and the clinical impact of the different PTVs in seven different planning and virtual simulation systems (Pinnacle, Plato, Prosoma (MedCom), Eclipse (7.3 and 7.5) (Varian), MasterPlan (Nucletron), Xio (CMS) and Advantage Windows (AW) (GE)) is investigated, and a simple test for 3D margining consistency is proposed. Methods: Using each TPS, two different sets of prostate GTVs on 2.5 mm and 5 mm slices were margined according to the CHHIP protocol to produce PTV3 (prostate + 5 mm/0 mm post), PTV2 (PTV3 + 5 mm) and PTV1 (prostate and seminal vesicles + 10 mm). GTVs and PTVs were imported into Pinnacle for volume calculation. DVHs for 5 mm slice plans, created using the smallest PTVs, were recalculated on the largest PTV dataset and vice versa. Since adding a margin of 50 mm to a structure should give the same result as adding five margins of 10 mm, this was tested for each TPS (consistency test) using an octahedron as the GTV and CT datasets with 2.5 mm and 5 mm slices. Results: The CHHIP PTV3 and PTV1 volumes had a standard deviation, across the seven systems, of 5% and PTV2 (margined twice) 9%, on the 5 mm slices. For 2.5 mm slices the standard deviations were 4% and 6%. The ratio of the Pinnacle and the Eclipse 7.3 PTV2 volumes was 1.25. Rectal doses were significantly increased when encompassing Pinnacle PTVs (V 50 42.8%), compared to Eclipse 7.3 PTVs (V 50 = 36.4%). Conversely, fields that adequately treated an Eclipse 7.3 PTV2 were inadequate for a Pinnacle PTV2. AW and Plato PTV volumes were the most consistent

  18. An improved multileaving algorithm for online ranker evaluation

    DEFF Research Database (Denmark)

    Brost, Brian; Cox, Ingemar Johansson; Seldin, Yevgeny

    2016-01-01

    Online ranker evaluation is a key challenge in information retrieval. An important task in the online evaluation of rankers is using implicit user feedback for inferring preferences between rankers. Interleaving methods have been found to be efficient and sensitive, i.e. they can quickly detect even...

  19. Methodology for quantitative evaluation of diagnostic performance

    International Nuclear Information System (INIS)

    Metz, C.

    1981-01-01

    Of various approaches that might be taken to the diagnostic performance evaluation problem, Receiver Operating Characteristic (ROC) analysis holds great promise. Further development of the methodology for a unified, objective, and meaningful approach to evaluating the usefulness of medical imaging procedures is done by consideration of statistical significance testing, optimal sequencing of correlated studies, and analysis of observer performance
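
    As a minimal illustration of the ROC methodology referred to above, the sketch below estimates the area under the ROC curve from observer confidence ratings via the rank-sum relationship; the ratings are hypothetical and the function is an assumption for illustration, not taken from the cited work.

    ```python
    import numpy as np

    def roc_auc(scores_pos, scores_neg):
        """Area under the ROC curve, computed as P(score_pos > score_neg)
        plus half the probability of a tie (the Mann-Whitney statistic)."""
        pos = np.asarray(scores_pos, dtype=float)
        neg = np.asarray(scores_neg, dtype=float)
        greater = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        return (greater + 0.5 * ties) / (len(pos) * len(neg))

    # Hypothetical observer ratings (1 = definitely absent ... 5 = definitely present)
    auc = roc_auc([4, 5, 3, 5, 4], [1, 2, 3, 2, 1])
    print(f"AUC = {auc:.2f}")
    ```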

  20. The Study on Food Sensory Evaluation based on Particle Swarm Optimization Algorithm

    OpenAIRE

    Hairong Wang; Huijuan Xu

    2015-01-01

    This study explores the procedures and methods of establishing a system for food sensory evaluation based on the particle swarm optimization algorithm, by explaining the interpretation of sensory evaluation and sensory analysis and reviewing how sensory evaluation is applied in the food industry.
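
    For readers unfamiliar with the optimizer itself, the following sketch shows a generic particle swarm optimization loop minimizing a toy objective; it illustrates how PSO operates under standard assumptions and is not the food sensory evaluation system described in the study.

    ```python
    import numpy as np

    def pso(objective, dim, n_particles=30, iters=200,
            w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0), seed=0):
        """Minimal particle swarm optimizer (illustrative sketch)."""
        rng = np.random.default_rng(seed)
        lo, hi = bounds
        x = rng.uniform(lo, hi, (n_particles, dim))        # positions
        v = np.zeros_like(x)                               # velocities
        pbest = x.copy()                                   # personal bests
        pbest_val = np.apply_along_axis(objective, 1, x)
        gbest = pbest[pbest_val.argmin()].copy()           # global best
        for _ in range(iters):
            r1, r2 = rng.random(x.shape), rng.random(x.shape)
            v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
            x = np.clip(x + v, lo, hi)
            vals = np.apply_along_axis(objective, 1, x)
            improved = vals < pbest_val
            pbest[improved], pbest_val[improved] = x[improved], vals[improved]
            gbest = pbest[pbest_val.argmin()].copy()
        return gbest, pbest_val.min()

    best_x, best_f = pso(lambda p: np.sum(p**2), dim=3)    # toy objective
    ```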

  1. A Comparative Study of Classification and Regression Algorithms for Modelling Students' Academic Performance

    Science.gov (United States)

    Strecht, Pedro; Cruz, Luís; Soares, Carlos; Mendes-Moreira, João; Abreu, Rui

    2015-01-01

    Predicting the success or failure of a student in a course or program is a problem that has recently been addressed using data mining techniques. In this paper we evaluate some of the most popular classification and regression algorithms on this problem. We address two problems: prediction of approval/failure and prediction of grade. The former is…

  2. The Performance and Development of the Inner Detector Trigger Algorithms at ATLAS for LHC Run 2

    CERN Document Server

    Sowden, Benjamin Charles; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly reimplemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is provided. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2 for the HLT. This new strategy will use a Fast Track Finder (FTF) algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution and significantly faster execution times than achieved during Run 1 but with no significant reduction in efficiency. The performance and timing of the algorithms for numerous physics signatures in the trigger are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performan...

  3. A novel evaluation of two related and two independent algorithms for eye movement classification during reading.

    Science.gov (United States)

    Friedman, Lee; Rigas, Ioannis; Abdulin, Evgeny; Komogortsev, Oleg V

    2018-05-15

    Nyström and Holmqvist have published a method for the classification of eye movements during reading (ONH) (Nyström & Holmqvist, 2010). When we applied this algorithm to our data, the results were not satisfactory, so we modified the algorithm (now the MNH) to better classify our data. The changes included: (1) reducing the amount of signal filtering, (2) excluding a new type of noise, (3) removing several adaptive thresholds and replacing them with fixed thresholds, (4) changing the way that the start and end of each saccade was determined, (5) employing a new algorithm for detecting PSOs, and (6) allowing a fixation period to either begin or end with noise. A new method for the evaluation of classification algorithms is presented. It was designed to provide comprehensive feedback to an algorithm developer, in a time-efficient manner, about the types and numbers of classification errors that an algorithm produces. This evaluation was conducted by three expert raters independently, across 20 randomly chosen recordings, each classified by both algorithms. The MNH made many fewer errors in determining when saccades start and end, and it also detected some fixations and saccades that the ONH did not. The MNH fails to detect very small saccades. We also evaluated two additional algorithms: the EyeLink Parser and a more current, machine-learning-based algorithm. The EyeLink Parser tended to find more saccades that ended too early than did the other methods, and we found numerous problems with the output of the machine-learning-based algorithm.
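
    The sketch below illustrates the simplest form of fixed-threshold classification mentioned above: labelling gaze samples as saccade or fixation with a single fixed velocity threshold. It is a deliberate simplification with assumed parameters; the MNH additionally handles noise, post-saccadic oscillation detection, and saccade onset/offset refinement.

    ```python
    import numpy as np

    def classify_fixed_threshold(x_deg, y_deg, fs_hz, vel_thresh_deg_s=30.0):
        """Return True for samples whose gaze speed exceeds a fixed velocity
        threshold (saccade), False otherwise (fixation)."""
        vx = np.gradient(x_deg) * fs_hz          # deg/s
        vy = np.gradient(y_deg) * fs_hz
        speed = np.hypot(vx, vy)
        return speed > vel_thresh_deg_s

    # Hypothetical 1 kHz recording: a fixation, a rapid 8-degree shift, another fixation
    fs = 1000.0
    x = np.concatenate([np.zeros(200), np.linspace(0, 8, 30), np.full(200, 8.0)])
    y = np.zeros_like(x)
    is_saccade = classify_fixed_threshold(x, y, fs)
    ```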

  4. Development and Evaluation of an Automated Machine Learning Algorithm for In-Hospital Mortality Risk Adjustment Among Critical Care Patients.

    Science.gov (United States)

    Delahanty, Ryan J; Kaufman, David; Jones, Spencer S

    2018-06-01

    Risk adjustment algorithms for ICU mortality are necessary for measuring and improving ICU performance. Existing risk adjustment algorithms are not widely adopted. Key barriers to adoption include licensing and implementation costs as well as labor costs associated with human-intensive data collection. Widespread adoption of electronic health records makes automated risk adjustment feasible. Using modern machine learning methods and open source tools, we developed and evaluated a retrospective risk adjustment algorithm for in-hospital mortality among ICU patients. The Risk of Inpatient Death score can be fully automated and is reliant upon data elements that are generated in the course of usual hospital processes. One hundred thirty-one ICUs in 53 hospitals operated by Tenet Healthcare. A cohort of 237,173 ICU patients discharged between January 2014 and December 2016. The data were randomly split into training (36 hospitals), and validation (17 hospitals) data sets. Feature selection and model training were carried out using the training set while the discrimination, calibration, and accuracy of the model were assessed in the validation data set. Model discrimination was evaluated based on the area under receiver operating characteristic curve; accuracy and calibration were assessed via adjusted Brier scores and visual analysis of calibration curves. Seventeen features, including a mix of clinical and administrative data elements, were retained in the final model. The Risk of Inpatient Death score demonstrated excellent discrimination (area under receiver operating characteristic curve = 0.94) and calibration (adjusted Brier score = 52.8%) in the validation dataset; these results compare favorably to the published performance statistics for the most commonly used mortality risk adjustment algorithms. Low adoption of ICU mortality risk adjustment algorithms impedes progress toward increasing the value of the healthcare delivered in ICUs. The Risk of Inpatient Death
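
    A hedged sketch of the general workflow (train a random forest on tabular features, then assess discrimination with the area under the ROC curve and calibration/accuracy with a Brier-type score) is given below using scikit-learn on synthetic data; the feature matrix, labels, and hyperparameters are placeholders, not the Risk of Inpatient Death model.

    ```python
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score, brier_score_loss
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for EHR-derived features; the real model used 17
    # clinical/administrative features and a hospital-level train/validation split.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(5000, 17))
    logit = X[:, :5].sum(axis=1) - 4.0
    y = (rng.random(5000) < 1 / (1 + np.exp(-logit))).astype(int)   # "in-hospital death"

    X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.3, random_state=0)
    model = RandomForestClassifier(n_estimators=300, random_state=0).fit(X_tr, y_tr)
    p = model.predict_proba(X_va)[:, 1]

    print("AUROC:", roc_auc_score(y_va, p))       # discrimination
    print("Brier:", brier_score_loss(y_va, p))    # lower is better (unadjusted)
    ```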

  5. PERFORMANCE EVALUATION: LITERATURE REVIEW AND TIME EVOLUTION.

    Directory of Open Access Journals (Sweden)

    Pintea Mirela-Oana

    2012-07-01

    Full Text Available Performance evaluation of an economic entity requires approaching several criteria, such as industry and economic entity type, managerial and entrepreneurial strategy, competitive environment, and the human and material resources available, using a system of appropriate performance indicators for this purpose. The exigencies of communication arising from the growing number of phenomena that have marked the global economy in recent decades (internationalization and relocation of business, crises and turmoil in financial markets) demand that performance measurement be made in a comprehensive way, using both financial and non-financial criteria. Indicators are measures of performance used by management to measure, report and improve the performance of the economic entity. The relationship between indicators and management is ensured by the existence of performance measurement systems. Studies to date indicate that economic entities using balanced performance measurement systems as a key management tool registered superior performance compared to entities not using such systems. This study attempts to address the issue of performance evaluation by presenting the opinions of different authors concerning the process of performance measurement and to present, after reviewing the literature, the evolution of performance evaluation systems. We undertook this literature review because sustainable development and, therefore, globalization require new standards of performance that exceed the economic field, both for domestic companies and for international ones. These standards should be integrated into corporate strategy development to ensure the sustainability of activities undertaken by harmonizing economic, social and environmental objectives. To assess the performance of economic entities it is required that performance evaluation be done with a balanced multidimensional system, including both financial ratios and non-financial indicators in order to reduce the limits of

  6. Performance in population models for count data, part II: a new SAEM algorithm

    Science.gov (United States)

    Savic, Radojka; Lavielle, Marc

    2009-01-01

    Analysis of count data from clinical trials using mixed effect analysis has recently become widely used. However, algorithms available for the parameter estimation, including LAPLACE and Gaussian quadrature (GQ), are associated with certain limitations, including bias in parameter estimates and long analysis runtimes. The stochastic approximation expectation maximization (SAEM) algorithm has proven to be a very efficient and powerful tool in the analysis of continuous data. The aim of this study was to implement and investigate the performance of a new SAEM algorithm for application to count data. A new SAEM algorithm was implemented in MATLAB for estimation of both parameters and the Fisher information matrix. Stochastic Monte Carlo simulations followed by re-estimation were performed according to scenarios used in previous studies (part I) to investigate properties of alternative algorithms (1). A single scenario was used to explore six probability distribution models. For parameter estimation, the relative bias was less than 0.92% and 4.13% for fixed and random effects, for all models studied including ones accounting for over- or under-dispersion. Empirical and estimated relative standard errors were similar, with the distance between them being <1.7% for all explored scenarios. The longest CPU time was 95 s for parameter estimation and 56 s for SE estimation. The SAEM algorithm was extended for analysis of count data. It provides accurate estimates of both parameters and standard errors. The estimation is significantly faster compared to LAPLACE and GQ. The algorithm is implemented in Monolix 3.1 (beta-version available in July 2009). PMID:19680795

  7. A Study on the Enhanced Best Performance Algorithm for the Just-in-Time Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Sivashan Chetty

    2015-01-01

    Full Text Available The Just-In-Time (JIT) scheduling problem is an important subject of study. It essentially constitutes the problem of scheduling critical business resources in an attempt to optimize given business objectives. This problem is NP-Hard in nature, hence requiring efficient solution techniques. To solve the JIT scheduling problem presented in this study, a new local search metaheuristic algorithm, namely, the enhanced Best Performance Algorithm (eBPA), is introduced. This is part of the initial study of the algorithm for scheduling problems. The current problem setting is the allocation of a large number of jobs required to be scheduled on multiple and identical machines which run in parallel. The due date of a job is characterized by a window frame of time, rather than a specific point in time. The performance of the eBPA is compared against Tabu Search (TS) and Simulated Annealing (SA). SA and TS are well-known local search metaheuristic algorithms. The results show the potential of the eBPA as a metaheuristic algorithm.
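
    For context, the sketch below shows a bare-bones simulated annealing baseline of the kind the eBPA is compared against, assigning jobs with due-date windows to identical parallel machines and penalizing earliness and tardiness; the cost model, neighbourhood move, and parameters are simplifying assumptions, not the study's exact formulation.

    ```python
    import math
    import random

    def sa_schedule(jobs, n_machines, iters=20_000, t0=50.0, alpha=0.9995, seed=1):
        """Simulated-annealing sketch: jobs = list of
        (processing_time, window_start, window_end)."""
        rng = random.Random(seed)

        def cost(assign):
            finish = [0.0] * n_machines
            total = 0.0
            for j, m in enumerate(assign):       # jobs processed in listed order
                p, ws, we = jobs[j]
                finish[m] += p
                total += max(ws - finish[m], 0.0) + max(finish[m] - we, 0.0)
            return total

        cur = [rng.randrange(n_machines) for _ in jobs]
        cur_cost = cost(cur)
        best, best_cost, t = cur[:], cur_cost, t0
        for _ in range(iters):
            cand = cur[:]
            cand[rng.randrange(len(jobs))] = rng.randrange(n_machines)  # move one job
            cand_cost = cost(cand)
            if cand_cost < cur_cost or rng.random() < math.exp((cur_cost - cand_cost) / t):
                cur, cur_cost = cand, cand_cost
                if cur_cost < best_cost:
                    best, best_cost = cur[:], cur_cost
            t *= alpha
        return best, best_cost

    jobs = [(3, 5, 8), (2, 2, 4), (4, 6, 10), (1, 1, 3), (5, 9, 14)]
    assignment, penalty = sa_schedule(jobs, n_machines=2)
    ```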

  8. Signal and image processing algorithm performance in a virtual and elastic computing environment

    Science.gov (United States)

    Bennett, Kelly W.; Robertson, James

    2013-05-01

    The U.S. Army Research Laboratory (ARL) supports the development of classification, detection, tracking, and localization algorithms using multiple sensing modalities including acoustic, seismic, E-field, magnetic field, PIR, and visual and IR imaging. Multimodal sensors collect large amounts of data in support of algorithm development. The resulting large amount of data, and the associated high-performance computing needs, strains and challenges existing computing infrastructures. Purchasing computer power as a commodity using a Cloud service offers low-cost, pay-as-you-go pricing models, scalability, and elasticity that may provide solutions to develop and optimize algorithms without having to procure additional hardware and resources. This paper provides a detailed look at using a commercial cloud service provider, such as Amazon Web Services (AWS), to develop and deploy simple signal and image processing algorithms in a cloud and run the algorithms on a large set of data archived in the ARL Multimodal Signatures Database (MMSDB). Analytical results will provide performance comparisons with existing infrastructure. A discussion of using cloud computing with government data covers best security practices that exist within cloud services, such as AWS.

  9. Performance of humans vs. exploration algorithms on the Tower of London Test.

    Directory of Open Access Journals (Sweden)

    Eric Fimbel

    Full Text Available The Tower of London Test (TOL), used to assess executive functions, was inspired by Artificial Intelligence tasks used to test problem-solving algorithms. In this study, we compare the performance of humans and of exploration algorithms. Instead of absolute execution times, we focus on how the execution time varies with the tasks and/or the number of moves. This approach, used in Algorithmic Complexity, provides a fair comparison between humans and computers, although humans are several orders of magnitude slower. On easy tasks (1 to 5 moves), healthy elderly persons performed like exploration algorithms using bounded memory resources, i.e., the execution time grew exponentially with the number of moves. This result was replicated with a group of healthy young participants. However, for difficult tasks (5 to 8 moves), the execution time of young participants did not increase significantly, whereas for exploration algorithms, the execution time keeps on increasing exponentially. A pre- and post-test control task showed a 25% improvement of visuo-motor skills but this was insufficient to explain this result. The findings suggest that naive participants used systematic exploration to solve the problem but under the effect of practice, they developed markedly more efficient strategies using the information acquired during the test.

  10. PEACE: pulsar evaluation algorithm for candidate extraction - a software package for post-analysis processing of pulsar survey candidates

    Science.gov (United States)

    Lee, K. J.; Stovall, K.; Jenet, F. A.; Martinez, J.; Dartez, L. P.; Mata, A.; Lunsford, G.; Cohen, S.; Biwer, C. M.; Rohr, M.; Flanigan, J.; Walker, A.; Banaszak, S.; Allen, B.; Barr, E. D.; Bhat, N. D. R.; Bogdanov, S.; Brazier, A.; Camilo, F.; Champion, D. J.; Chatterjee, S.; Cordes, J.; Crawford, F.; Deneva, J.; Desvignes, G.; Ferdman, R. D.; Freire, P.; Hessels, J. W. T.; Karuppusamy, R.; Kaspi, V. M.; Knispel, B.; Kramer, M.; Lazarus, P.; Lynch, R.; Lyne, A.; McLaughlin, M.; Ransom, S.; Scholz, P.; Siemens, X.; Spitler, L.; Stairs, I.; Tan, M.; van Leeuwen, J.; Zhu, W. W.

    2013-07-01

    Modern radio pulsar surveys produce a large volume of prospective candidates, the majority of which are polluted by human-created radio frequency interference or other forms of noise. Typically, large numbers of candidates need to be visually inspected in order to determine if they are real pulsars. This process can be labour intensive. In this paper, we introduce an algorithm called Pulsar Evaluation Algorithm for Candidate Extraction (PEACE) which improves the efficiency of identifying pulsar signals. The algorithm ranks the candidates based on a score function. Unlike popular machine-learning-based algorithms, no prior training data sets are required. This algorithm has been applied to data from several large-scale radio pulsar surveys. Using the human-based ranking results generated by students in the Arecibo Remote Command Center programme, the statistical performance of PEACE was evaluated. It was found that PEACE ranked 68 per cent of the student-identified pulsars within the top 0.17 per cent of sorted candidates, 95 per cent within the top 0.34 per cent and 100 per cent within the top 3.7 per cent. This clearly demonstrates that PEACE significantly increases the pulsar identification rate by a factor of about 50 to 1000. To date, PEACE has been directly responsible for the discovery of 47 new pulsars, 5 of which are millisecond pulsars that may be useful for pulsar timing based gravitational-wave detection projects.
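
    The evaluation statistic quoted above (the fraction of known pulsars appearing within the top x per cent of score-ranked candidates) can be computed as in the following sketch; the scores and labels are synthetic placeholders, and the score function itself is not reproduced here.

    ```python
    import numpy as np

    def fraction_within_top(scores, is_pulsar, top_fraction):
        """Fraction of known pulsars falling within the top `top_fraction`
        of candidates when ranked by descending score."""
        order = np.argsort(scores)[::-1]
        n_top = max(1, int(round(top_fraction * len(scores))))
        top = set(order[:n_top])
        pulsar_idx = np.flatnonzero(is_pulsar)
        return float(np.mean([i in top for i in pulsar_idx]))

    # Hypothetical scores for 10,000 candidates, 20 of them real pulsars
    rng = np.random.default_rng(0)
    scores = rng.normal(0.0, 1.0, 10_000)
    is_pulsar = np.zeros(10_000, dtype=bool)
    is_pulsar[:20] = True
    scores[:20] += 4.0                     # real pulsars tend to score higher
    print(fraction_within_top(scores, is_pulsar, 0.037))   # cf. "100% within top 3.7%"
    ```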

  11. Proposed evaluation framework for assessing operator performance with multisensor displays

    Science.gov (United States)

    Foyle, David C.

    1992-01-01

    Despite aggressive work on the development of sensor fusion algorithms and techniques, no formal evaluation procedures have been proposed. Based on existing integration models in the literature, an evaluation framework is developed to assess an operator's ability to use multisensor, or sensor fusion, displays. The proposed evaluation framework for evaluating the operator's ability to use such systems is a normative approach: The operator's performance with the sensor fusion display can be compared to the models' predictions based on the operator's performance when viewing the original sensor displays prior to fusion. This allows for the determination as to when a sensor fusion system leads to: 1) poorer performance than one of the original sensor displays (clearly an undesirable system in which the fused sensor system causes some distortion or interference); 2) better performance than with either single sensor system alone, but at a sub-optimal (compared to the model predictions) level; 3) optimal performance (compared to model predictions); or, 4) super-optimal performance, which may occur if the operator were able to use some highly diagnostic 'emergent features' in the sensor fusion display, which were unavailable in the original sensor displays. An experiment demonstrating the usefulness of the proposed evaluation framework is discussed.

  12. Evaluation of Retrieval Algorithms for Ice Microphysics Using CALIPSO/CloudSat and Earthcare

    Directory of Open Access Journals (Sweden)

    Okamoto Hajime

    2016-01-01

    We performed several sensitivity studies to evaluate uncertainties in the retrieved ice microphysics due to ice particle orientation and shape. It was found that the implementation of a horizontally oriented ice plate model in the algorithm drastically improved the retrieval results for both nadir and off-nadir lidar pointing periods. Differences in the retrieved microphysics between the purely randomly oriented ice model (3D-ice) and a mixture of the 3D-ice and Q2D-plate models were large, especially in the off-nadir period, e.g., 100% in effective radius and one order of magnitude in ice water content, respectively. Differences in the retrieved ice microphysics among different mixture models were smaller than about 50% for effective radius in the nadir period.

  13. Performance Comparison of Reconstruction Algorithms in Discrete Blind Multi-Coset Sampling

    DEFF Research Database (Denmark)

    Grigoryan, Ruben; Arildsen, Thomas; Tandur, Deepaknath

    2012-01-01

    This paper investigates the performance of different reconstruction algorithms in discrete blind multi-coset sampling. Multi-coset scheme is a promising compressed sensing architecture that can replace traditional Nyquist-rate sampling in the applications with multi-band frequency sparse signals...

  14. Drug Safety Monitoring in Children: Performance of Signal Detection Algorithms and Impact of Age Stratification

    NARCIS (Netherlands)

    O.U. Osokogu (Osemeke); C. Dodd (Caitlin); A.C. Pacurariu (Alexandra C.); F. Kaguelidou (Florentia); D.M. Weibel (Daniel); M.C.J.M. Sturkenboom (Miriam)

    2016-01-01

    textabstractIntroduction: Spontaneous reports of suspected adverse drug reactions (ADRs) can be analyzed to yield additional drug safety evidence for the pediatric population. Signal detection algorithms (SDAs) are required for these analyses; however, the performance of SDAs in the pediatric

  15. State-Dependent Decoding Algorithms Improve the Performance of a Bidirectional BMI in Anesthetized Rats

    Directory of Open Access Journals (Sweden)

    Vito De Feo

    2017-05-01

    Full Text Available Brain-machine interfaces (BMIs) promise to improve the quality of life of patients suffering from sensory and motor disabilities by creating a direct communication channel between the brain and the external world. Yet, their performance is currently limited by the relatively small amount of information that can be decoded from neural activity recorded from the brain. We have recently proposed that such decoding performance may be improved when using state-dependent decoding algorithms that predict and discount the large component of the trial-to-trial variability of neural activity which is due to the dependence of neural responses on the network's current internal state. Here we tested this idea by using a bidirectional BMI to investigate the gain in performance arising from using a state-dependent decoding algorithm. This BMI, implemented in anesthetized rats, controlled the movement of a dynamical system using neural activity decoded from motor cortex and fed back to the brain the dynamical system's position by electrically microstimulating somatosensory cortex. We found that using state-dependent algorithms that tracked the dynamics of ongoing activity led to an increase in the amount of information extracted from neural activity by 22%, with a consequent increase in all of the indices measuring the BMI's performance in controlling the dynamical system. This suggests that state-dependent decoding algorithms may be used to enhance BMIs at moderate computational cost.

  16. State-Dependent Decoding Algorithms Improve the Performance of a Bidirectional BMI in Anesthetized Rats.

    Science.gov (United States)

    De Feo, Vito; Boi, Fabio; Safaai, Houman; Onken, Arno; Panzeri, Stefano; Vato, Alessandro

    2017-01-01

    Brain-machine interfaces (BMIs) promise to improve the quality of life of patients suffering from sensory and motor disabilities by creating a direct communication channel between the brain and the external world. Yet, their performance is currently limited by the relatively small amount of information that can be decoded from neural activity recorded from the brain. We have recently proposed that such decoding performance may be improved when using state-dependent decoding algorithms that predict and discount the large component of the trial-to-trial variability of neural activity which is due to the dependence of neural responses on the network's current internal state. Here we tested this idea by using a bidirectional BMI to investigate the gain in performance arising from using a state-dependent decoding algorithm. This BMI, implemented in anesthetized rats, controlled the movement of a dynamical system using neural activity decoded from motor cortex and fed back to the brain the dynamical system's position by electrically microstimulating somatosensory cortex. We found that using state-dependent algorithms that tracked the dynamics of ongoing activity led to an increase in the amount of information extracted from neural activity by 22%, with a consequent increase in all of the indices measuring the BMI's performance in controlling the dynamical system. This suggests that state-dependent decoding algorithms may be used to enhance BMIs at moderate computational cost.

  17. Design, implementation and evaluation of a practical pseudoknot folding algorithm based on thermodynamics

    Directory of Open Access Journals (Sweden)

    Giegerich Robert

    2004-08-01

    Full Text Available Abstract Background: The general problem of RNA secondary structure prediction under the widely used thermodynamic model is known to be NP-complete when the structures considered include arbitrary pseudoknots. For restricted classes of pseudoknots, several polynomial time algorithms have been designed, where the O(n^6) time and O(n^4) space algorithm by Rivas and Eddy is currently the best available program. Results: We introduce the class of canonical simple recursive pseudoknots and present an algorithm that requires O(n^4) time and O(n^2) space to predict the energetically optimal structure of an RNA sequence, possibly containing such pseudoknots. Evaluation against a large collection of known pseudoknotted structures shows the adequacy of the canonization approach and our algorithm. Conclusions: RNA pseudoknots of medium size can now be predicted reliably as well as efficiently by the new algorithm.

  18. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Seung Geun; Seong, Poong Hyun [KAIST, Daejeon (Korea, Republic of)

    2016-10-15

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. Conventional machine learning techniques are considered incapable of handling complex situations in NPPs. Due to these kinds of issues, automation is not actively adopted, although human error probability drastically increases during abnormal situations in NPPs due to overload of information, high workload, and the short time available for diagnosis. Recently, new machine learning techniques known as ‘deep learning’ have been actively applied to many fields, and deep-learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), one of the deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many kinds of games. Also in 2016, ‘AlphaGo’, which was developed by ‘Google DeepMind’ based on deep learning techniques to play the game of Go (i.e. Baduk), defeated Se-dol Lee, the world Go champion, with a score of 4:1. As part of the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm that can cover various situations in NPPs. As the first part, a quantitative, real-time NPP safety evaluation method is being developed in order to provide the training criteria for the automation algorithm. For that, the EWS concept from the medical field was adopted, and its applicability is investigated in this paper. Practically, the application of full automation (i.e. fully replacing human operators) may require much more time for the validation and investigation of side effects after the development of the automation algorithm, and so adoption in the form of full automation will take a long time.

  19. Development of Nuclear Power Plant Safety Evaluation Method for the Automation Algorithm Application

    International Nuclear Information System (INIS)

    Kim, Seung Geun; Seong, Poong Hyun

    2016-01-01

    It is commonly believed that replacing human operators with automated systems would guarantee greater efficiency, lower workloads, and fewer human errors. Conventional machine learning techniques are considered incapable of handling complex situations in NPPs. Due to these kinds of issues, automation is not actively adopted, although human error probability drastically increases during abnormal situations in NPPs due to overload of information, high workload, and the short time available for diagnosis. Recently, new machine learning techniques known as ‘deep learning’ have been actively applied to many fields, and deep-learning-based artificial intelligences (AIs) are showing better performance than conventional AIs. In 2015, the deep Q-network (DQN), one of the deep learning techniques, was developed and applied to train an AI that automatically plays various Atari 2600 games, and this AI surpassed human-level play in many kinds of games. Also in 2016, ‘AlphaGo’, which was developed by ‘Google DeepMind’ based on deep learning techniques to play the game of Go (i.e. Baduk), defeated Se-dol Lee, the world Go champion, with a score of 4:1. As part of the effort to reduce human error in NPPs, the ultimate goal of this study is the development of an automation algorithm that can cover various situations in NPPs. As the first part, a quantitative, real-time NPP safety evaluation method is being developed in order to provide the training criteria for the automation algorithm. For that, the EWS concept from the medical field was adopted, and its applicability is investigated in this paper. Practically, the application of full automation (i.e. fully replacing human operators) may require much more time for the validation and investigation of side effects after the development of the automation algorithm, and so adoption in the form of full automation will take a long time.

  20. Evaluation of a new commercial Monte Carlo dose calculation algorithm for electron beams.

    Science.gov (United States)

    Vandervoort, Eric J; Tchistiakova, Ekaterina; La Russa, Daniel J; Cygler, Joanna E

    2014-02-01

    In this report the authors present the validation of a Monte Carlo dose calculation algorithm (XiO EMC from Elekta Software) for electron beams. Calculated and measured dose distributions were compared for homogeneous water phantoms and for a 3D heterogeneous phantom meant to approximate the geometry of a trachea and spine. Comparisons of measurements and calculated data were performed using 2D and 3D gamma index dose comparison metrics. Measured outputs agree with calculated values within estimated uncertainties for standard and extended SSDs for open applicators, and for cutouts, with the exception of the 17 MeV electron beam at extended SSD for cutout sizes smaller than 5 × 5 cm(2). Good agreement was obtained between calculated and experimental depth dose curves and dose profiles (minimum number of measurements that pass a 2%/2 mm agreement 2D gamma index criteria for any applicator or energy was 97%). Dose calculations in a heterogeneous phantom agree with radiochromic film measurements (>98% of pixels pass a 3 dimensional 3%/2 mm γ-criteria) provided that the steep dose gradient in the depth direction is considered. Clinically acceptable agreement (at the 2%/2 mm level) between the measurements and calculated data for measurements in water are obtained for this dose calculation algorithm. Radiochromic film is a useful tool to evaluate the accuracy of electron MC treatment planning systems in heterogeneous media.
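
    The 2%/2 mm criterion used above combines a dose-difference tolerance with a distance-to-agreement tolerance. The sketch below computes a globally normalized 1D gamma index between two toy dose profiles; it illustrates the metric under simplifying assumptions and is not the commissioning data or software from the study.

    ```python
    import numpy as np

    def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose,
                 dose_crit=0.02, dist_crit_mm=2.0):
        """Globally normalized 1D gamma index: a point passes if gamma <= 1."""
        d_max = ref_dose.max()
        gamma = np.empty(len(eval_pos))
        for i, (x, d) in enumerate(zip(eval_pos, eval_dose)):
            dist_term = ((ref_pos - x) / dist_crit_mm) ** 2
            dose_term = ((ref_dose - d) / (dose_crit * d_max)) ** 2
            gamma[i] = np.sqrt(np.min(dist_term + dose_term))
        return gamma

    x = np.linspace(0, 50, 251)                    # mm
    ref = np.exp(-((x - 20.0) / 12.0) ** 2)        # toy depth-dose curves
    ev = np.exp(-((x - 20.5) / 12.0) ** 2)
    g = gamma_1d(x, ref, x, ev)
    print("2%/2 mm pass rate:", np.mean(g <= 1.0))
    ```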

  1. Comparison of algorithms of testing for use in automated evaluation of sensation.

    Science.gov (United States)

    Dyck, P J; Karnes, J L; Gillen, D A; O'Brien, P C; Zimmerman, I R; Johnson, D M

    1990-10-01

    Estimates of vibratory detection threshold may be used to detect, characterize, and follow the course of sensory abnormality in neurologic disease. The approach is especially useful in epidemiologic and controlled clinical trials. We studied which algorithm of testing and finding threshold should be used in automatic systems by comparing among algorithms and stimulus conditions for the index finger of healthy subjects and for the great toe of patients with mild neuropathy. Appearance thresholds obtained by linear ramps increasing at a rate less than 4.15 microns/sec provided accurate and repeatable thresholds compared with thresholds obtained by forced-choice testing. These rates would be acceptable if only sensitive sites were studied, but they were too slow for use in automatic testing of insensitive parts. Appearance thresholds obtained by fast linear rates (4.15 or 16.6 microns/sec) overestimated threshold, especially for sensitive parts. Use of the mean of appearance and disappearance thresholds, with the stimulus increasing exponentially at rates of 0.5 or 1.0 just noticeable difference (JND) units per second, and interspersion of null stimuli, Békésy with null stimuli, provided accurate, repeatable, and fast estimates of threshold for sensitive parts. Despite the good performance of Békésy testing, we prefer forced choice for evaluation of the sensation of patients with neuropathy.

  2. A modified scout bee for artificial bee colony algorithm and its performance on optimization problems

    Directory of Open Access Journals (Sweden)

    Syahid Anuar

    2016-10-01

    Full Text Available The artificial bee colony (ABC) is one of the swarm intelligence algorithms used to solve optimization problems, inspired by the foraging behaviour of honey bees. In this paper, an artificial bee colony with a rate-of-change technique, which models the behaviour of the scout bee to improve the performance of the standard ABC in terms of exploration, is introduced. The technique is called artificial bee colony rate of change (ABC-ROC) because the scout bee process depends on the rate of change on the performance graph, replacing the parameter limit. The performance of ABC-ROC is analysed on a set of benchmark problems and also with respect to the effect of the parameter colony size. Furthermore, the performance of ABC-ROC is compared with state-of-the-art algorithms.

  3. Evaluation of an Automated Swallow-Detection Algorithm Using Visual Biofeedback in Healthy Adults and Head and Neck Cancer Survivors.

    Science.gov (United States)

    Constantinescu, Gabriela; Kuffel, Kristina; Aalto, Daniel; Hodgetts, William; Rieger, Jana

    2017-11-02

    Mobile health (mHealth) technologies may offer an opportunity to address longstanding clinical challenges, such as access and adherence to swallowing therapy. Mobili-T ® is an mHealth device that uses surface electromyography (sEMG) to provide biofeedback on submental muscles activity during exercise. An automated swallow-detection algorithm was developed for Mobili-T ® . This study evaluated the performance of the swallow-detection algorithm. Ten healthy participants and 10 head and neck cancer (HNC) patients were fitted with the device. Signal was acquired during regular, effortful, and Mendelsohn maneuver saliva swallows, as well as lip presses, tongue, and head movements. Signals of interest were tagged during data acquisition and used to evaluate algorithm performance. Sensitivity and positive predictive values (PPV) were calculated for each participant. Saliva swallows were compared between HNC and controls in the four sEMG-based parameters used in the algorithm: duration, peak amplitude ratio, median frequency, and 15th percentile of the power spectrum density. In healthy participants, sensitivity and PPV were 92.3 and 83.9%, respectively. In HNC patients, sensitivity was 92.7% and PPV was 72.2%. In saliva swallows, HNC patients had longer event durations (U = 1925.5, p performed well with healthy participants and retained a high sensitivity, but had lowered PPV with HNC patients. With respect to Mobili-T ® , the algorithm will next be evaluated using the mHealth system.
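
    The two headline metrics in this record reduce to simple counts of correctly detected, falsely detected, and missed events, as in the sketch below; the counts shown are hypothetical and not taken from the study.

    ```python
    def sensitivity_ppv(tp, fp, fn):
        """Sensitivity = TP / (TP + FN); positive predictive value = TP / (TP + FP)."""
        return tp / (tp + fn), tp / (tp + fp)

    # Hypothetical counts for one participant: 24 tagged swallows,
    # 22 detected correctly, 5 false detections, 2 missed.
    sens, ppv = sensitivity_ppv(tp=22, fp=5, fn=2)
    print(f"sensitivity = {sens:.1%}, PPV = {ppv:.1%}")
    ```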

  4. Performance improvement of VAV air conditioning system through feedforward compensation decoupling and genetic algorithm

    International Nuclear Information System (INIS)

    Wang Jun; Wang Yan

    2008-01-01

    A VAV (variable air volume) control system contains multiple control loops. When all the control loops work together, they interfere with and influence each other. This paper designs a decoupling compensation unit for the VAV system using feedforward compensation. It also designs the controller parameters of the VAV system by means of inverse deduction and the genetic algorithm. Experimental results demonstrate that the combination of feedforward compensation decoupling and controller optimization by the genetic algorithm can improve the performance of the VAV control system.

  5. δ-Similar Elimination to Enhance Search Performance of Multiobjective Evolutionary Algorithms

    Science.gov (United States)

    Aguirre, Hernán; Sato, Masahiko; Tanaka, Kiyoshi

    In this paper, we propose δ-similar elimination to improve the search performance of multiobjective evolutionary algorithms in combinatorial optimization problems. This method eliminates similar individuals in objective space to fairly distribute selection among the different regions of the instantaneous Pareto front. We investigate four eliminating methods analyzing their effects using NSGA-II. In addition, we compare the search performance of NSGA-II enhanced by our method and NSGA-II enhanced by controlled elitism.
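
    A minimal sketch of the elimination step described above follows: individuals whose objective vectors lie within δ of an already retained individual are discarded. The Euclidean distance measure and the toy population are illustrative assumptions; the paper investigates four eliminating variants inside NSGA-II.

    ```python
    import numpy as np

    def delta_similar_elimination(objectives, delta):
        """Return indices of individuals kept so that no two retained
        objective vectors are closer than `delta` in objective space."""
        kept = []
        for i, f in enumerate(objectives):
            if all(np.linalg.norm(f - objectives[j]) > delta for j in kept):
                kept.append(i)
        return kept

    # Hypothetical bi-objective population (both objectives minimized)
    pop = np.array([[0.10, 0.90], [0.11, 0.89], [0.50, 0.50],
                    [0.51, 0.52], [0.90, 0.10]])
    print(delta_similar_elimination(pop, delta=0.05))   # -> [0, 2, 4]
    ```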

  6. Utilizing Machine Learning and Automated Performance Metrics to Evaluate Robot-Assisted Radical Prostatectomy Performance and Predict Outcomes.

    Science.gov (United States)

    Hung, Andrew J; Chen, Jian; Che, Zhengping; Nilanon, Tanachat; Jarc, Anthony; Titus, Micha; Oh, Paul J; Gill, Inderbir S; Liu, Yan

    2018-05-01

    Surgical performance is critical for clinical outcomes. We present a novel machine learning (ML) method of processing automated performance metrics (APMs) to evaluate surgical performance and predict clinical outcomes after robot-assisted radical prostatectomy (RARP). We trained three ML algorithms utilizing APMs directly from robot system data (training material) and hospital length of stay (LOS; training label) (≤2 days and >2 days) from 78 RARP cases, and selected the algorithm with the best performance. The selected algorithm categorized the cases as "Predicted as expected LOS (pExp-LOS)" and "Predicted as extended LOS (pExt-LOS)." We compared postoperative outcomes of the two groups (Kruskal-Wallis/Fisher's exact tests). The algorithm then predicted individual clinical outcomes, which we compared with actual outcomes (Spearman's correlation/Fisher's exact tests). Finally, we identified five most relevant APMs adopted by the algorithm during predicting. The "Random Forest-50" (RF-50) algorithm had the best performance, reaching 87.2% accuracy in predicting LOS (73 cases as "pExp-LOS" and 5 cases as "pExt-LOS"). The "pExp-LOS" cases outperformed the "pExt-LOS" cases in surgery time (3.7 hours vs 4.6 hours, p = 0.007), LOS (2 days vs 4 days, p = 0.02), and Foley duration (9 days vs 14 days, p = 0.02). Patient outcomes predicted by the algorithm had significant association with the "ground truth" in surgery time (p algorithm in predicting, were largely related to camera manipulation. To our knowledge, ours is the first study to show that APMs and ML algorithms may help assess surgical RARP performance and predict clinical outcomes. With further accrual of clinical data (oncologic and functional data), this process will become increasingly relevant and valuable in surgical assessment and training.

  7. Evaluation of sea-surface photosynthetically available radiation algorithms under various sky conditions and solar elevations.

    Science.gov (United States)

    Somayajula, Srikanth Ayyala; Devred, Emmanuel; Bélanger, Simon; Antoine, David; Vellucci, V; Babin, Marcel

    2018-04-20

    In this study, we report on the performance of satellite-based photosynthetically available radiation (PAR) algorithms used in published oceanic primary production models. The performance of these algorithms was evaluated using buoy observations under clear and cloudy skies, and for the particular case of low sun angles typically encountered at high latitudes or at moderate latitudes in winter. The PAR models consisted of (i) the standard one from the NASA-Ocean Biology Processing Group (OBPG), (ii) the Gregg and Carder (GC) semi-analytical clear-sky model, and (iii) look-up-tables based on the Santa Barbara DISORT atmospheric radiative transfer (SBDART) model. Various combinations of atmospheric inputs, empirical cloud corrections, and semi-analytical irradiance models yielded a total of 13 (11 + 2 developed in this study) different PAR products, which were compared with in situ measurements collected at high frequency (15 min) at a buoy site in the Mediterranean Sea (the "BOUée pour l'acquiSition d'une Série Optique à Long termE," or, "BOUSSOLE" site). An objective ranking method applied to the algorithm results indicated that seven PAR products out of 13 were well in agreement with the in situ measurements. Specifically, the OBPG method showed the best overall performance with a root mean square difference (RMSD) (bias) of 19.7% (6.6%) and 10% (6.3%) followed by the look-up-table method with a RMSD (bias) of 25.5% (6.8%) and 9.6% (2.6%) at daily and monthly scales, respectively. Among the four methods based on clear-sky PAR empirically corrected for cloud cover, the Dobson and Smith method consistently underestimated daily PAR while the Budyko formulation overestimated daily PAR. Empirically cloud-corrected methods using cloud fraction (CF) performed better under quasi-clear skies (CF0.7), however, all methods showed larger RMSD differences (biases) ranging between 32% and 80.6% (-54.5%-8.7%). Finally, three methods tested for low sun elevations revealed

  8. NPP Krsko natural circulation performance evaluation

    International Nuclear Information System (INIS)

    Segon, Velimir; Bajs, Tomislav; Frogheri, Monica

    1999-01-01

    The present document deals with an evaluation of the natural circulation performance of the Krsko nuclear power plant. Two calculations have been performed using the NPP Krsko nodalization (both similar to the LOBI A2-77 natural circulation experiment) - the first with the present steam generators at NPP Krsko (Westinghouse, 18% plugged), the second with the future steam generators (Siemens, 0% plugged). The results were evaluated using the natural circulation flow map derived in /1/, and were compared to evaluate the influence of the new steam generators on the natural circulation performance. (author)

  9. Evaluation of software based redundancy algorithms for the EOS storage system at CERN

    International Nuclear Information System (INIS)

    Peters, Andreas-Joachim; Sindrilaru, Elvin Alin; Zigann, Philipp

    2012-01-01

    EOS is a new disk based storage system used in production at CERN since autumn 2011. It is implemented using the plug-in architecture of the XRootD software framework and allows remote file access via XRootD protocol or POSIX-like file access via FUSE mounting. EOS was designed to fulfill specific requirements of disk storage scalability and IO scheduling performance for LHC analysis use cases. This is achieved by following a strategy of decoupling disk and tape storage as individual storage systems. A key point of the EOS design is to provide high availability and redundancy of files via a software implementation which uses disk-only storage systems without hardware RAID arrays. All this is aimed at reducing the overall cost of the system and also simplifying the operational procedures. This paper presents the advantages and disadvantages of redundancy by hardware (most classical storage installations) in comparison to redundancy by software. The latter is implemented in the EOS system and achieves its goal by spawning data and parity stripes via remote file access over nodes. The gain in redundancy and reliability comes with a trade-off in the following areas:
    • Increased complexity of the network connectivity
    • CPU intensive parity computations during file creation and recovery
    • Performance loss through remote disk coupling
    An evaluation and performance figures of several redundancy algorithms are presented for dual parity RAID and Reed-Solomon codecs. Moreover, the characteristics and applicability of these algorithms are discussed in the context of reliable data storage systems.
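
    As a toy illustration of redundancy by software striping, the sketch below computes a single XOR parity stripe and rebuilds a lost stripe from the survivors; the dual parity RAID and Reed-Solomon layouts evaluated in the paper generalize this idea to tolerate more than one lost stripe. The data and function names are illustrative, not EOS code.

    ```python
    def xor_parity(stripes):
        """XOR parity over equally sized data stripes."""
        parity = bytearray(len(stripes[0]))
        for stripe in stripes:
            for i, byte in enumerate(stripe):
                parity[i] ^= byte
        return bytes(parity)

    def recover(surviving_stripes, parity):
        """Rebuild one missing stripe from the survivors plus the parity."""
        return xor_parity(list(surviving_stripes) + [parity])

    data = [b"AAAA", b"BBBB", b"CCCC"]                   # file split into 3 stripes
    p = xor_parity(data)
    assert recover([data[0], data[2]], p) == data[1]     # stripe 1 lost and rebuilt
    ```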

  10. Benchmark for Peak Detection Algorithms in Fiber Bragg Grating Interrogation and a New Neural Network for its Performance Improvement

    Science.gov (United States)

    Negri, Lucas; Nied, Ademir; Kalinowski, Hypolito; Paterno, Aleksander

    2011-01-01

    This paper presents a benchmark for peak detection algorithms employed in fiber Bragg grating spectrometric interrogation systems. The accuracy, precision, and computational performance of currently used algorithms and those of a new proposed artificial neural network algorithm are compared. Centroid and gaussian fitting algorithms are shown to have the highest precision but produce systematic errors that depend on the FBG refractive index modulation profile. The proposed neural network displays relatively good precision with reduced systematic errors and improved computational performance when compared to other networks. Additionally, suitable algorithms may be chosen with the general guidelines presented. PMID:22163806
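
    Of the classical methods benchmarked above, the centroid algorithm is among the simplest; the sketch below estimates the Bragg wavelength as the intensity-weighted centroid of the spectrum above a fractional threshold. The toy spectrum, threshold, and parameters are illustrative assumptions, and the proposed neural network is not reproduced here.

    ```python
    import numpy as np

    def centroid_peak(wavelengths_nm, reflection, threshold_frac=0.5):
        """Intensity-weighted centroid of the spectral samples above a
        fractional threshold of the maximum reflection."""
        mask = reflection >= threshold_frac * reflection.max()
        w, r = wavelengths_nm[mask], reflection[mask]
        return np.sum(w * r) / np.sum(r)

    # Toy FBG reflection spectrum centred at 1550.10 nm with a little noise
    wl = np.linspace(1549.5, 1550.7, 600)
    rng = np.random.default_rng(0)
    spec = np.exp(-((wl - 1550.10) / 0.08) ** 2) + 0.01 * rng.normal(size=wl.size)
    print(f"estimated Bragg wavelength: {centroid_peak(wl, spec):.4f} nm")
    ```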

  11. A Systematic Evaluation of Feature Selection and Classification Algorithms Using Simulated and Real miRNA Sequencing Data

    Directory of Open Access Journals (Sweden)

    Sheng Yang

    2015-01-01

    Full Text Available Sequencing is widely used to discover associations between microRNAs (miRNAs) and diseases. However, the negative binomial distribution (NB) and high dimensionality of data obtained using sequencing can lead to low-power results and low reproducibility. Several statistical learning algorithms have been proposed to address sequencing data, and although evaluation of these methods is essential, such studies are relatively rare. The performance of seven feature selection (FS) algorithms, including baySeq, DESeq, edgeR, the rank sum test, lasso, particle swarm optimistic decision tree, and random forest (RF), was compared by simulation under different conditions based on the difference of the mean, the dispersion parameter of the NB, and the signal to noise ratio. Real data were used to evaluate the performance of RF, logistic regression, and support vector machine. Based on the simulation and real data, we discuss the behaviour of the FS and classification algorithms. The Apriori algorithm identified frequent item sets (mir-133a, mir-133b, mir-183, mir-937, and mir-96) from among the deregulated miRNAs of six datasets from The Cancer Genomics Atlas. Taking these findings altogether and considering computational memory requirements, we propose a strategy that combines edgeR and DESeq for large sample sizes.

  12. Evaluation of vertical coordinate and vertical mixing algorithms in the HYbrid-Coordinate Ocean Model (HYCOM)

    Science.gov (United States)

    Halliwell, George R.

    Vertical coordinate and vertical mixing algorithms included in the HYbrid Coordinate Ocean Model (HYCOM) are evaluated in low-resolution climatological simulations of the Atlantic Ocean. The hybrid vertical coordinates are isopycnic in the deep ocean interior, but smoothly transition to level (pressure) coordinates near the ocean surface, to sigma coordinates in shallow water regions, and back again to level coordinates in very shallow water. By comparing simulations to climatology, the best model performance is realized using hybrid coordinates in conjunction with one of the three available differential vertical mixing models: the nonlocal K-Profile Parameterization, the NASA GISS level 2 turbulence closure, and the Mellor-Yamada level 2.5 turbulence closure. Good performance is also achieved using the quasi-slab Price-Weller-Pinkel dynamical instability model. Differences among these simulations are too small relative to other errors and biases to identify the "best" vertical mixing model for low-resolution climate simulations. Model performance deteriorates slightly when the Kraus-Turner slab mixed layer model is used with hybrid coordinates. This deterioration is smallest when solar radiation penetrates beneath the mixed layer and when shear instability mixing is included. A simulation performed using isopycnic coordinates to emulate the Miami Isopycnic Coordinate Ocean Model (MICOM), which uses Kraus-Turner mixing without penetrating shortwave radiation and shear instability mixing, demonstrates that the advantages of switching from isopycnic to hybrid coordinates and including more sophisticated turbulence closures outweigh the negative numerical effects of maintaining hybrid vertical coordinates.

  13. Decision Diagram Based Symbolic Algorithm for Evaluating the Reliability of a Multistate Flow Network

    Directory of Open Access Journals (Sweden)

    Rongsheng Dong

    2016-01-01

    Full Text Available Evaluating the reliability of a Multistate Flow Network (MFN) is an NP-hard problem. Ordered binary decision diagram (OBDD) or variants thereof, such as multivalued decision diagram (MDD), are compact and efficient data structures suitable for dealing with large-scale problems. Two symbolic algorithms for evaluating the reliability of MFN, MFN_OBDD and MFN_MDD, are proposed in this paper. In the algorithms, several operating functions are defined to prune the generated decision diagrams. Thereby the state space of capacity combinations is further compressed and the operational complexity of the decision diagrams is further reduced. Meanwhile, the related theoretical proofs and complexity analysis are carried out. Experimental results show the following: (1) compared to the existing decomposition algorithm, the proposed algorithms take less memory space and fewer loops. (2) The number of nodes and the number of variables of the MDD generated in the MFN_MDD algorithm are much smaller than those of the OBDD built in the MFN_OBDD algorithm. (3) In two cases with the same number of arcs, the proposed algorithms are more suitable for calculating the reliability of sparse networks.

  14. Bobcat 2013: a hyperspectral data collection supporting the development and evaluation of spatial-spectral algorithms

    Science.gov (United States)

    Kaufman, Jason; Celenk, Mehmet; White, A. K.; Stocker, Alan D.

    2014-06-01

    The amount of hyperspectral imagery (HSI) data currently available is relatively small compared to other imaging modalities, and what is suitable for developing, testing, and evaluating spatial-spectral algorithms is virtually nonexistent. In this work, a significant amount of coincident airborne hyperspectral and high spatial resolution panchromatic imagery that supports the advancement of spatial-spectral feature extraction algorithms was collected to address this need. The imagery was collected in April 2013 for Ohio University by the Civil Air Patrol, with their Airborne Real-time Cueing Hyperspectral Enhanced Reconnaissance (ARCHER) sensor. The target materials, shapes, and movements throughout the collection area were chosen such that evaluation of change detection algorithms, atmospheric compensation techniques, image fusion methods, and material detection and identification algorithms is possible. This paper describes the collection plan, data acquisition, and initial analysis of the collected imagery.

  15. CVFEM for Multiphase Flow with Disperse and Interface Tracking, and Algorithms Performances

    Directory of Open Access Journals (Sweden)

    M. Milanez

    2015-12-01

    Full Text Available A Control-Volume Finite-Element Method (CVFEM) is newly formulated within Eulerian and spatial averaging frameworks for effective simulation of disperse transport, deposit distribution and interface tracking. Their algorithms are implemented alongside an existing continuous phase algorithm. Flow terms are newly implemented for a control volume (CV) fixed in space, and the CVs' equations are assembled based on a finite element method (FEM). Upon impacting stationary and moving boundaries, the disperse phase changes its phase and the solver triggers identification of CVs with excess deposit and their neighboring CVs for its accommodation in front of an interface. The solver then updates boundary conditions on the moving interface as well as domain conditions on the accumulating deposit. Corroboration of the algorithms' performances is conducted on illustrative simulations with novel and existing Eulerian and Lagrangian solutions, such as other (i.e. external) methods with analytical and physical experimental formulations, and characteristics internal to the CVFEM.

  16. NESSIE: A European Approach to Evaluate Cryptographic Algorithms

    OpenAIRE

    Preneel, Bart

    2002-01-01

    The NESSIE project (New European Schemes for Signature, Integrity and Encryption) intends to put forward a portfolio containing the next generation of cryptographic primitives. These primitives will offer a higher security level than existing primitives, and/or will offer a higher confidence level, built up by an open evaluation process. Moreover, they should be better suited for the constraints of future hardware and software environments. In order to reach this goal, the project has launche...

  17. Conductor gestures influence evaluations of ensemble performance.

    Science.gov (United States)

    Morrison, Steven J; Price, Harry E; Smedley, Eric M; Meals, Cory D

    2014-01-01

    Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor's gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance: articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and non-majors (N = 285) viewed sixteen 30 s performances and evaluated the quality of the ensemble's articulation, dynamics, technique, and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble's performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity.

  18. Conductor gestures influence evaluations of ensemble performance

    Directory of Open Access Journals (Sweden)

    Steven eMorrison

    2014-07-01

    Full Text Available Previous research has found that listener evaluations of ensemble performances vary depending on the expressivity of the conductor’s gestures, even when performances are otherwise identical. It was the purpose of the present study to test whether this effect of visual information was evident in the evaluation of specific aspects of ensemble performance: articulation and dynamics. We constructed a set of 32 music performances that combined auditory and visual information and were designed to feature a high degree of contrast along one of two target characteristics: articulation and dynamics. We paired each of four music excerpts recorded by a chamber ensemble in both a high- and low-contrast condition with video of four conductors demonstrating high- and low-contrast gesture specifically appropriate to either articulation or dynamics. Using one of two equivalent test forms, college music majors and nonmajors (N = 285) viewed sixteen 30-second performances and evaluated the quality of the ensemble’s articulation, dynamics, technique and tempo along with overall expressivity. Results showed significantly higher evaluations for performances featuring high rather than low conducting expressivity regardless of the ensemble’s performance quality. Evaluations for both articulation and dynamics were strongly and positively correlated with evaluations of overall ensemble expressivity.

  19. Accuracy of claims-based algorithms for epilepsy research: Revealing the unseen performance of claims-based studies.

    Science.gov (United States)

    Moura, Lidia M V R; Price, Maggie; Cole, Andrew J; Hoch, Daniel B; Hsu, John

    2017-04-01

    To evaluate published algorithms for the identification of epilepsy cases in medical claims data using a unique linked dataset with both clinical and claims data. Using data from a large, regional health delivery system, we identified all patients contributing biologic samples to the health system's Biobank (n = 36K). We identified all subjects with at least one diagnosis potentially consistent with epilepsy, for example, epilepsy, convulsions, syncope, or collapse, between 2014 and 2015, or who were seen at the epilepsy clinic (n = 1,217), plus a random sample of subjects with neither claims nor clinic visits (n = 435); we then performed a medical chart review in a random subsample of 1,377 to assess the epilepsy diagnosis status. Using the chart review as the reference standard, we evaluated the test characteristics of six published algorithms. The best-performing algorithm used diagnostic and prescription drug data (sensitivity = 70%, 95% confidence interval [CI] 66-73%; specificity = 77%, 95% CI 73-81%; and area under the curve [AUC] = 0.73, 95% CI 0.71-0.76) when applied to patients aged 18 years or older. Restricting the sample to adults aged 18-64 years resulted in a mild improvement in accuracy (AUC = 0.75, 95% CI 0.73-0.78). Adding information about current antiepileptic drug use to the algorithm increased test performance (AUC = 0.78, 95% CI 0.76-0.80). Other algorithms varied in their included data types and performed worse. Current approaches for identifying patients with epilepsy in insurance claims have important limitations when applied to the general population. Approaches incorporating a range of information, for example, diagnoses, treatments, and site of care/specialty of physician, improve the performance of identification and could be useful in epilepsy studies using large datasets. Wiley Periodicals, Inc. © 2017 International League Against Epilepsy.
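
    As a rough illustration of how such test characteristics are derived once chart review provides the reference standard, the Python sketch below computes sensitivity, specificity, and a rank-based AUC from made-up case labels and algorithm scores; none of the names or numbers come from the study.

        import numpy as np

        def sensitivity_specificity(reference, predicted):
            # reference, predicted: 0/1 arrays; 1 = epilepsy per chart review / per algorithm
            reference, predicted = np.asarray(reference), np.asarray(predicted)
            tp = np.sum((reference == 1) & (predicted == 1))
            fn = np.sum((reference == 1) & (predicted == 0))
            tn = np.sum((reference == 0) & (predicted == 0))
            fp = np.sum((reference == 0) & (predicted == 1))
            return tp / (tp + fn), tn / (tn + fp)

        def rank_auc(reference, scores):
            # Mann-Whitney (rank-based) AUC for a continuous algorithm score; no tie handling
            reference, scores = np.asarray(reference), np.asarray(scores)
            ranks = np.empty(len(scores))
            ranks[np.argsort(scores)] = np.arange(1, len(scores) + 1)
            pos = reference == 1
            n_pos, n_neg = pos.sum(), (~pos).sum()
            return (ranks[pos].sum() - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

        ref = [1, 1, 1, 0, 0, 0, 1, 0]                      # made-up chart-review labels
        pred = [1, 0, 1, 0, 1, 0, 1, 0]                      # made-up algorithm classifications
        score = [0.9, 0.4, 0.8, 0.2, 0.7, 0.1, 0.6, 0.3]     # made-up algorithm risk scores
        sens, spec = sensitivity_specificity(ref, pred)
        print(f"sensitivity={sens:.2f}, specificity={spec:.2f}, AUC={rank_auc(ref, score):.2f}")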

  20. An evaluation of the multi-state node networks reliability using the traditional binary-state networks reliability algorithm

    International Nuclear Information System (INIS)

    Yeh, W.-C.

    2003-01-01

    A system where the components and the system itself are allowed to have a number of performance levels is called a Multi-state system (MSS). A multi-state node network (MNN) is a generalization of the MSS that does not satisfy the flow conservation law. Evaluating the MNN reliability arises at the design and exploitation stage of many types of technical systems. Up to now, the known existing methods can only evaluate a special MNN reliability called the multi-state node acyclic network (MNAN), in which no cycle is allowed. However, no method exists for evaluating the general MNN reliability. The main purpose of this article is to show first that each MNN reliability can be solved using any traditional binary-state network (TBSN) reliability algorithm with a special code for the state probability. A simple heuristic SDP algorithm based on minimal cuts (MC) for estimating the MNN reliability is presented as an example to show how the TBSN reliability algorithm is revised to solve the MNN reliability problem. To the author's knowledge, this study is the first to discuss the relationships between MNN and TBSN and also the first to present methods to solve the exact and approximate MNN reliability. One example is illustrated to show how the exact MNN reliability is obtained using the proposed algorithm.
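
    For readers unfamiliar with minimal-cut-based evaluation of traditional binary-state network reliability, the sketch below computes exact system unreliability by inclusion-exclusion over minimal cut sets, assuming independent component failures; the cut sets and probabilities are illustrative and are not taken from the article.

        # Exact unreliability of a binary-state network from its minimal cut sets,
        # assuming independent component failures (illustrative data only).
        from itertools import combinations

        def unreliability(min_cuts, q):
            # min_cuts: list of sets of component names (minimal cuts)
            # q: dict mapping component name -> failure probability
            # The system fails iff every component of some minimal cut fails.
            total = 0.0
            for k in range(1, len(min_cuts) + 1):            # inclusion-exclusion
                sign = (-1) ** (k + 1)
                for combo in combinations(min_cuts, k):
                    comps = set().union(*combo)              # union of the k cuts
                    p = 1.0
                    for c in comps:
                        p *= q[c]                            # all components must fail
                    total += sign * p
            return total

        # Hypothetical 4-component network with two minimal cuts
        cuts = [{"a", "b"}, {"b", "c", "d"}]
        q = {"a": 0.1, "b": 0.05, "c": 0.2, "d": 0.1}
        print("system unreliability =", unreliability(cuts, q))
        print("system reliability   =", 1 - unreliability(cuts, q))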

  1. On The Effective Construction of Asymmetric Chudnovsky Multiplication Algorithms in Finite Fields Without Derivated Evaluation

    OpenAIRE

    Ballet, Stéphane; Baudru, Nicolas; Bonnecaze, Alexis; Tukumuli, Mila

    2016-01-01

    The Chudnovsky and Chudnovsky algorithm for multiplication in extensions of finite fields provides a bilinear complexity which is uniformly linear with respect to the degree of the extension. Recently, Randriambololona has generalized the method, allowing asymmetry in the interpolation procedure and leading to new upper bounds on the bilinear complexity. We describe the effective algorithm of this asymmetric method, without derivated evaluation. Finally, we give examples with the finite ...

  2. Cloud-Based Evaluation of Anatomical Structure Segmentation and Landmark Detection Algorithms : VISCERAL Anatomy Benchmarks

    OpenAIRE

    Jimenez-del-Toro, Oscar; Muller, Henning; Krenn, Markus; Gruenberg, Katharina; Taha, Abdel Aziz; Winterstein, Marianne; Eggel, Ivan; Foncubierta-Rodriguez, Antonio; Goksel, Orcun; Jakab, Andres; Kontokotsios, Georgios; Langs, Georg; Menze, Bjoern H.; Fernandez, Tomas Salas; Schaer, Roger

    2016-01-01

    Variations in the shape and appearance of anatomical structures in medical images are often relevant radiological signs of disease. Automatic tools can help automate parts of this manual process. A cloud-based evaluation framework is presented in this paper including results of benchmarking current state-of-the-art medical imaging algorithms for anatomical structure segmentation and landmark detection: the VISCERAL Anatomy benchmarks. The algorithms are implemented in virtual machines in the ...

  3. Profiling high performance dense linear algebra algorithms on multicore architectures for power and energy efficiency

    KAUST Repository

    Ltaief, Hatem

    2011-08-31

    This paper presents the power profile of two high performance dense linear algebra libraries, i.e., LAPACK and PLASMA. The former is based on block algorithms that use the fork-join paradigm to achieve parallel performance. The latter uses fine-grained task parallelism that recasts the computation to operate on submatrices called tiles. In this way tile algorithms are formed. We show results from the power profiling of the most common routines, which permits us to clearly identify the different phases of the computations. This allows us to isolate the bottlenecks in terms of energy efficiency. Our results show that PLASMA surpasses LAPACK not only in terms of performance but also in terms of energy efficiency. © 2011 Springer-Verlag.

  4. WEAR PERFORMANCE OPTIMIZATION OF SILICON NITRIDE USING GENETIC AND SIMULATED ANNEALING ALGORITHM

    Directory of Open Access Journals (Sweden)

    SACHIN GHALME

    2017-12-01

    Full Text Available Replacing a damaged joint with a suitable alternative material is a prime requirement in a patient who has arthritis. Generation of wear particles in the artificial joint during action or movement is a serious issue and leads to aseptic loosening of the joint. Research in the field of bio-tribology is trying to evaluate materials with minimum wear volume loss so as to extend joint life. Silicon nitride (Si3N4) is a non-oxide ceramic suggested as a new alternative for hip/knee joint replacement. Hexagonal Boron Nitride (hBN) is recommended as a solid additive lubricant to improve the wear performance of Si3N4. In this paper, an attempt has been made to evaluate the optimum combination of load and % volume of hBN in Si3N4 to minimize wear volume loss (WVL). The experiments were conducted according to the Design of Experiments (DoE) – Taguchi method and a mathematical model was developed. Further, this model was processed with a Genetic Algorithm (GA) and Simulated Annealing (SA) to find out the optimum percentage of hBN in Si3N4 to minimize wear volume loss against an Alumina (Al2O3) counterface. The Taguchi method suggests a 15 N load and 8% volume of hBN to minimize the WVL of Si3N4, while GA and SA optimization offer 11.08 N load, 12.115% volume of hBN and 11.0789 N load, 12.128% volume of hBN respectively to minimize WVL in Si3N4.
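
    The sketch below shows how a simulated annealing search over load and %hBN might be set up once a regression model of wear volume loss exists; the quadratic objective used here is a placeholder, not the Taguchi-derived model from the paper.

        # Simulated annealing over (load in N, % volume hBN) for a hypothetical
        # wear-volume-loss response surface. The objective is invented for illustration.
        import math, random

        def wvl(load, hbn):
            # Placeholder quadratic model, NOT the paper's fitted model.
            return 0.02 * (load - 11.0) ** 2 + 0.05 * (hbn - 12.0) ** 2 + 1.0

        def simulated_annealing(bounds, iters=5000, t0=1.0, cooling=0.999):
            (lo1, hi1), (lo2, hi2) = bounds
            x = [random.uniform(lo1, hi1), random.uniform(lo2, hi2)]
            best, best_f = list(x), wvl(*x)
            t = t0
            for _ in range(iters):
                cand = [min(hi1, max(lo1, x[0] + random.gauss(0, 0.5))),
                        min(hi2, max(lo2, x[1] + random.gauss(0, 0.5)))]
                df = wvl(*cand) - wvl(*x)
                if df < 0 or random.random() < math.exp(-df / t):   # accept better or, sometimes, worse
                    x = cand
                    if wvl(*x) < best_f:
                        best, best_f = list(x), wvl(*x)
                t *= cooling                                         # geometric cooling schedule
            return best, best_f

        print(simulated_annealing(bounds=[(5.0, 20.0), (0.0, 16.0)]))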

  5. Laboratory performance evaluation reports for management

    International Nuclear Information System (INIS)

    Lindahl, P.C.; Hensley, J.E.; Bass, D.A.; Johnson, P.L.; Marr, J.J.; Streets, W.E.; Warren, S.W.; Newberry, R.W.

    1995-01-01

    In support of the US DOE's environmental restoration efforts, the Integrated Performance Evaluation Program (IPEP) was developed to produce laboratory performance evaluation reports for management. These reports will provide information necessary to allow DOE headquarters and field offices to determine whether or not contracted analytical laboratories have the capability to produce environmental data of the quality necessary for the remediation program. This document describes the management report

  6. Diagnostic tests and algorithms used in the investigation of haematuria: systematic reviews and economic evaluation.

    Science.gov (United States)

    Rodgers, M; Nixon, J; Hempel, S; Aho, T; Kelly, J; Neal, D; Duffy, S; Ritchie, G; Kleijnen, J; Westwood, M

    2006-06-01

    To determine the most effective diagnostic strategy for the investigation of microscopic and macroscopic haematuria in adults. Electronic databases from inception to October 2003, updated in August 2004. A systematic review was undertaken according to published guidelines. Decision analytic modelling was undertaken, based on the findings of the review, expert opinion and additional information from the literature, to assess the relative cost-effectiveness of plausible alternative tests that are part of diagnostic algorithms for haematuria. A total of 118 studies met the inclusion criteria. No studies that evaluated the effectiveness of diagnostic algorithms for haematuria or the effectiveness of screening for haematuria or investigating its underlying cause were identified. Eighteen out of 19 identified studies evaluated dipstick tests and data from these suggested that these are moderately useful in establishing the presence of, but cannot be used to rule out, haematuria. Six studies using haematuria as a test for the presence of a disease indicated that the detection of microhaematuria cannot alone be considered a useful test either to rule in or rule out the presence of a significant underlying pathology (urinary calculi or bladder cancer). Forty-eight of 80 studies addressed methods to localise the source of bleeding (renal or lower urinary tract). The methods and thresholds described in these studies varied greatly, precluding any estimate of a 'best performance' threshold that could be applied across patient groups. However, studies of red blood cell morphology that used a cut-off value of 80% dysmorphic cells for glomerular disease reported consistently high specificities (potentially useful in ruling in a renal cause for haematuria). The reported sensitivities were generally low. Twenty-eight studies included data on the accuracy of laboratory tests (tumour markers, cytology) for the diagnosis of bladder cancer. The majority of tumour marker studies

  7. MO-FG-204-05: Evaluation of a Novel Algorithm for Improved 4DCT Resolution

    Energy Technology Data Exchange (ETDEWEB)

    Glide-Hurst, C; Briceno, J; Chetty, I. J. [Henry Ford Health System, Detroit, MI (United States); Klahr, P [Philips Healthcare, Highland Heights, OH (United States)

    2015-06-15

    Purpose: Accurate tumor motion characterization is critical for increasing the therapeutic ratio of radiation therapy. To accommodate the divergent fan-beam geometry of the scanner, the current 4D-CT algorithm utilizes a larger temporal window to ensure that pixel values are valid throughout the entire FOV. To minimize the impact on temporal resolution, a cos² weighting is employed. We propose a novel exponential weighting (“exponential”) 4DCT reconstruction algorithm that has a sharper slope and provides a more optimal temporal resolution. Methods: A respiratory motion platform translated a lung-mimicking Styrofoam slab with several high and low-contrast inserts 2 cm in the superior-inferior direction. Breathing rates (10–15 bpm) and couch pitch (0.06–0.1 A.U.) were varied to assess interplay between parameters. Multi-slice helical 4DCTs were acquired with 0.5 sec gantry rotation and data were reconstructed with cos² and exponential weighting. Mean and standard deviation were calculated via region of interest analysis. Intensity profiles evaluated object boundaries. Retrospective raw data reconstructions were performed for both 4DCT algorithms for 3 liver and lung cancer patients. Image quality (temporal blurring/sharpness) and subtraction images were compared between reconstructions. Results: In the phantom, profile analysis revealed that sharper boundaries were obtained with exponential reconstructions at transitioning breathing phases (i.e. mid-inhale or mid-exhale). Reductions in full-width half maximum were ∼1 mm in the superior-inferior direction and appreciable sharpness could be observed in difference maps. This reduction also yielded a slight reduction in target volume between reconstruction algorithms. For patient cases, coronal views showed less blurring at object boundaries and local intensity differences near the tumor and diaphragm with exponential weighted reconstruction. Conclusion: Exponential weighted 4DCT offers potential
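
    To illustrate why a steeper weighting narrows the effective temporal window, the short sketch below compares the full width at half maximum of a cos²-shaped and an exponential-shaped gating weight; both functional forms and the decay constant are assumptions for illustration, not the scanner's actual kernels.

        # Compare the effective temporal width (FWHM) of two illustrative gating weights;
        # the exact kernels used clinically are not specified in the abstract.
        import numpy as np

        t = np.linspace(-0.5, 0.5, 2001)            # seconds around the target phase
        T = 0.5                                     # half-window (assumed)
        w_cos2 = np.cos(np.pi * t / (2 * T)) ** 2   # cos^2 weighting
        w_exp = np.exp(-np.abs(t) / 0.12)           # exponential weighting, tau assumed

        def fwhm(t, w):
            above = t[w >= 0.5 * w.max()]
            return above.max() - above.min()

        print("cos^2       FWHM: %.3f s" % fwhm(t, w_cos2))
        print("exponential FWHM: %.3f s" % fwhm(t, w_exp))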

  8. Evaluation of high-performance computing software

    Energy Technology Data Exchange (ETDEWEB)

    Browne, S.; Dongarra, J. [Univ. of Tennessee, Knoxville, TN (United States); Rowan, T. [Oak Ridge National Lab., TN (United States)

    1996-12-31

    The absence of unbiased and up-to-date comparative evaluations of high-performance computing software complicates a user's search for the appropriate software package. The National HPCC Software Exchange (NHSE) is attacking this problem using an approach that includes independent evaluations of software, incorporation of author and user feedback into the evaluations, and Web access to the evaluations. We are applying this approach to the Parallel Tools Library (PTLIB), a new software repository for parallel systems software and tools, and HPC-Netlib, a high performance branch of the Netlib mathematical software repository. Updating the evaluations with feedback and making them available via the Web helps ensure accuracy and timeliness, and using independent reviewers produces unbiased comparative evaluations that are difficult to find elsewhere.

  9. SU-G-JeP1-12: Head-To-Head Performance Characterization of Two Multileaf Collimator Tracking Algorithms for Radiotherapy

    International Nuclear Information System (INIS)

    Caillet, V; Colvill, E; O’Brien, R; Keall, P; Poulsen, P; Moore, D; Booth, J; Sawant, A

    2016-01-01

    Purpose: Multi-leaf collimator (MLC) tracking is being clinically pioneered to continuously compensate for thoracic and abdominal motion during radiotherapy. The purpose of this work is to characterize the performance of two MLC tracking algorithms for cancer radiotherapy, based on a direct optimization and a piecewise leaf fitting approach respectively. Methods: To test the algorithms, both physical and in silico experiments were performed. Previously published high and low modulation VMAT plans for lung and prostate cancer cases were used along with eight patient-measured organ-specific trajectories. For both MLC tracking algorithms, the plans were run with their corresponding patient trajectories. The physical experiments were performed on a Trilogy Varian linac and a programmable phantom (HexaMotion platform). For each MLC tracking algorithm, plan and patient trajectory, the tracking accuracy was quantified as the difference in aperture area between the ideal and fitted MLC. To compare algorithms, the average cumulative tracking error area for each experiment was calculated. The two-sample Kolmogorov-Smirnov (KS) test was used to evaluate the cumulative tracking errors between algorithms. Results: Comparison of tracking errors for the physical and in silico experiments showed minor differences between the two algorithms. The KS D-statistics for the physical experiments were below 0.05, denoting no significant differences between the two distribution patterns, and the average error areas (direct optimization/piecewise leaf fitting) were comparable (66.64 cm2/65.65 cm2). For the in silico experiments, the KS D-statistics were below 0.05 and the average error areas were also equivalent (49.38 cm2/48.98 cm2). Conclusion: The comparison of the two leaf-fitting algorithms demonstrated no significant differences in tracking errors, either in a clinically realistic environment or in silico. The similarities in the two independent algorithms give confidence in the use
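
    A minimal sketch of the statistical comparison described above, a two-sample Kolmogorov-Smirnov test on per-experiment tracking-error areas, is given below using SciPy; the error values are placeholders rather than measurements from the study.

        # Two-sample KS test comparing tracking-error areas (cm^2) from the two MLC
        # fitting algorithms; the numbers below are placeholders for illustration.
        from scipy.stats import ks_2samp

        errors_direct_opt = [66.1, 67.0, 65.8, 66.9, 67.2, 66.4]
        errors_piecewise = [65.2, 66.0, 65.4, 65.9, 66.1, 65.3]

        stat, p_value = ks_2samp(errors_direct_opt, errors_piecewise)
        print(f"KS D-statistic = {stat:.3f}, p-value = {p_value:.3f}")
        # A small D-statistic indicates the two error distributions are similar.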

  10. SU-G-JeP1-12: Head-To-Head Performance Characterization of Two Multileaf Collimator Tracking Algorithms for Radiotherapy

    Energy Technology Data Exchange (ETDEWEB)

    Caillet, V; Colvill, E [School of Medecine, The University of Sydney, Sydney, NSW (Australia); Royal North Shore Hospital, St Leonards, Sydney (Australia); O’Brien, R; Keall, P [School of Medecine, The University of Sydney, Sydney, NSW (Australia); Poulsen, P [Aarhus University Hospital, Aarhus (Denmark); Moore, D [UT Southwestern Medical Center, Dallas, TX (United States); University of Maryland School of Medicine, Baltimore, MD (United States); Booth, J [Royal North Shore Hospital, St Leonards, Sydney (Australia); Sawant, A [University of Maryland School of Medicine, Baltimore, MD (United States)

    2016-06-15

    Purpose: Multi-leaf collimator (MLC) tracking is being clinically pioneered to continuously compensate for thoracic and abdominal motion during radiotherapy. The purpose of this work is to characterize the performance of two MLC tracking algorithms for cancer radiotherapy, based on a direct optimization and a piecewise leaf fitting approach respectively. Methods: To test the algorithms, both physical and in silico experiments were performed. Previously published high and low modulation VMAT plans for lung and prostate cancer cases were used along with eight patient-measured organ-specific trajectories. For both MLC tracking algorithms, the plans were run with their corresponding patient trajectories. The physical experiments were performed on a Trilogy Varian linac and a programmable phantom (HexaMotion platform). For each MLC tracking algorithm, plan and patient trajectory, the tracking accuracy was quantified as the difference in aperture area between the ideal and fitted MLC. To compare algorithms, the average cumulative tracking error area for each experiment was calculated. The two-sample Kolmogorov-Smirnov (KS) test was used to evaluate the cumulative tracking errors between algorithms. Results: Comparison of tracking errors for the physical and in silico experiments showed minor differences between the two algorithms. The KS D-statistics for the physical experiments were below 0.05, denoting no significant differences between the two distribution patterns, and the average error areas (direct optimization/piecewise leaf fitting) were comparable (66.64 cm2/65.65 cm2). For the in silico experiments, the KS D-statistics were below 0.05 and the average error areas were also equivalent (49.38 cm2/48.98 cm2). Conclusion: The comparison of the two leaf-fitting algorithms demonstrated no significant differences in tracking errors, either in a clinically realistic environment or in silico. The similarities in the two independent algorithms give confidence in the use

  11. The performance of the ATLAS Inner Detector Trigger algorithms in pp collisions at the LHC

    International Nuclear Information System (INIS)

    Sutton, Mark

    2011-01-01

    The ATLAS [The ATLAS Collaboration, The ATLAS Experiment at the CERN Large Hadron Collider, JINST 3:S08003, 2008 (2008)] Inner Detector trigger algorithms have been running online during data taking with proton-proton collisions at the Large Hadron Collider (LHC) since December 2009. Preliminary results on the performance of the algorithms in collisions at centre-of-mass energies of 900 GeV and 7 TeV, are discussed. The ATLAS trigger performs the online event selection in three stages. The Inner Detector information is used in the second and third triggering stages, referred to as Level-2 trigger (L2) and Event Filter (EF) respectively, or collectively as the High Level Trigger (HLT). The HLT runs software algorithms on large farms of commercial CPUs and is designed to reject collision events in real time, keeping the most interesting few events in every thousand. The average execution times per event at L2 and the EF are around 40 ms and 4 s respectively and the Inner Detector trigger algorithms can use only a fraction of these times. Within these times, data from interesting regions of the Inner Detector have to be read out through the network, unpacked, clustered and converted to the ATLAS global coordinates. The pattern recognition follows to identify the trajectories of charged particles (tracks), which are then used in combination with information from the other subdetectors to accept or reject events depending on whether they satisfy certain trigger signatures.

  12. Performance-Based Evaluation and School Librarians

    Science.gov (United States)

    Church, Audrey P.

    2015-01-01

    Evaluation of instructional personnel is standard procedure in our Pre-K-12 public schools, and its purpose is to document educator effectiveness. With Race to the Top and No Child Left Behind waivers, states are required to implement performance-based evaluations that demonstrate student academic progress. This three-year study describes the…

  13. Building Leadership Talent through Performance Evaluation

    Science.gov (United States)

    Clifford, Matthew

    2015-01-01

    Most states and districts scramble to provide professional development to support principals, but "principal evaluation" is often lost amid competing priorities. Evaluation is an important method for supporting principal growth, communicating performance expectations to principals, and improving leadership practice. It provides leaders…

  14. Algorithmic, LOCS and HOCS (chemistry) exam questions: performance and attitudes of college students

    Science.gov (United States)

    Zoller, Uri

    2002-02-01

    The performance of freshmen biology and physics-mathematics majors and chemistry majors as well as pre- and in-service chemistry teachers in two Israeli universities on algorithmic (ALG), lower-order cognitive skills (LOCS), and higher-order cognitive skills (HOCS) chemistry exam questions were studied. The driving force for the study was an interest in moving science and chemistry instruction from an algorithmic and factual recall orientation dominated by LOCS, to a decision-making, problem-solving and critical system thinking approach, dominated by HOCS. College students' responses to the specially designed ALG, LOCS and HOCS chemistry exam questions were scored and analysed for differences and correlation between the performance means within and across universities by the questions' category. This was followed by a combined student interview - 'speaking aloud' problem solving session for assessing the thinking processes involved in solving these types of questions and the students' attitudes towards them. The main findings were: (1) students in both universities performed consistently in each of the three categories in the order of ALG > LOCS > HOCS; their 'ideological' preference, was HOCS > algorithmic/LOCS, - referred to as 'computational questions', but their pragmatic preference was the reverse; (2) success on algorithmic/LOCS does not imply success on HOCS questions; algorithmic questions constitute a category on its own as far as students success in solving them is concerned. Our study and its results support the effort being made, worldwide, to integrate HOCS-fostering teaching and assessment strategies and, to develop HOCS-oriented science-technology-environment-society (STES)-type curricula within science and chemistry education.

  15. Performance Evaluation Of Behavioral Biometric Systems

    OpenAIRE

    Cherifi , Fouad; Hemery , Baptiste; Giot , Romain; Pasquet , Marc; Rosenberger , Christophe

    2009-01-01

    We present in this chapter an overview of techniques for the performance evaluation of behavioral biometric systems. The BioAPI standard that defines the architecture of a biometric system is presented in the first part of the chapter... The general methodology for the evaluation of biometric systems is given, including statistical metrics, the definition of benchmark databases, and subjective evaluation. These considerations rely on the ISO/IEC19795-1 standard describing the biometric performanc...

  16. Performance of operational satellite bio-optical algorithms in different water types in the southeastern Arabian Sea

    Directory of Open Access Journals (Sweden)

    P. Minu

    2016-10-01

    Full Text Available The in situ remote sensing reflectance (Rrs) and optically active substances (OAS), measured using a hyperspectral radiometer, were used for optical classification of coastal waters in the southeastern Arabian Sea. The spectral Rrs showed three distinct water types that were associated with the variability in OAS such as chlorophyll-a (chl-a), chromophoric dissolved organic matter (CDOM) and the volume scattering function at 650 nm (β650). The water types were classified as Type-I, Type-II and Type-III respectively for the three Rrs spectra. The Type-I waters showed the peak Rrs in the blue band (470 nm), whereas in the case of Type-II and III waters the peak Rrs was at 560 and 570 nm respectively. The shift of the peak Rrs to the longer wavelength was due to an increase in the concentration of OAS. Further, we evaluated six bio-optical algorithms (OC3C, OC4O, OC4, OC4E, OC3M and OC4O2) used operationally to retrieve chl-a from the Coastal Zone Colour Scanner (CZCS), Ocean Colour Temperature Scanner (OCTS), Sea-viewing Wide Field-of-view Sensor (SeaWiFS), MEdium Resolution Imaging Spectrometer (MERIS), Moderate Resolution Imaging Spectroradiometer (MODIS) and Ocean Colour Monitor (OCM2). For chl-a concentrations greater than 1.0 mg m−3, algorithms based on the reference band ratios 488/510/520 nm to 547/550/555/560/565 nm have to be considered. The assessment of algorithms showed better performance of OC3M and OC4. All the algorithms exhibited better performance in Type-I waters. However, the performance was poor in Type-II and Type-III waters, which could be attributed to the significant co-variance of chl-a with CDOM.
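
    The OCx algorithms referred to above share a common maximum-band-ratio form, sketched below in Python; the polynomial coefficients shown are placeholders, since each sensor and reprocessing version uses its own tuned values.

        # Generic OCx maximum-band-ratio chlorophyll retrieval. The coefficients a0..a4
        # below are placeholders; each sensor (SeaWiFS OC4, MODIS OC3M, ...) has its own.
        import numpy as np

        def ocx_chla(rrs_blue_bands, rrs_green, coeffs=(0.33, -3.0, 2.7, -1.2, -0.6)):
            # rrs_blue_bands: Rrs at the candidate blue bands (e.g. 443, 490, 510 nm)
            # rrs_green: Rrs at the green reference band (e.g. 547-565 nm)
            # Returns chlorophyll-a in mg m^-3.
            mbr = np.max(rrs_blue_bands) / rrs_green            # maximum band ratio
            x = np.log10(mbr)
            a0, a1, a2, a3, a4 = coeffs
            log10_chl = a0 + a1 * x + a2 * x**2 + a3 * x**3 + a4 * x**4
            return 10.0 ** log10_chl

        # Hypothetical reflectances (sr^-1)
        print(ocx_chla([0.0045, 0.0052, 0.0048], 0.0030))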

  17. Dependency of human target detection performance on clutter and quality of supporting image analysis algorithms in a video surveillance task

    Science.gov (United States)

    Huber, Samuel; Dunau, Patrick; Wellig, Peter; Stein, Karin

    2017-10-01

    Background: In target detection, the success rates depend strongly on human observer performances. Two prior studies tested the contributions of target detection algorithms and prior training sessions. The aim of this Swiss-German cooperation study was to evaluate the dependency of human observer performance on the quality of supporting image analysis algorithms. Methods: The participants were presented 15 different video sequences. Their task was to detect all targets in the shortest possible time. Each video sequence showed a heavily cluttered simulated public area from a different viewing angle. In each video sequence, the number of avatars in the area was altered to 100, 150 and 200 subjects. The number of targets appearing was kept at 10%. The number of marked targets varied from 0, 5, 10, 20 up to 40 marked subjects while keeping the positive predictive value of the detection algorithm at 20%. During the task, workload level was assessed by applying an acoustic secondary task. Detection rates and detection times for the targets were analyzed using inferential statistics. Results: The study found Target Detection Time to increase and Target Detection Rates to decrease with increasing numbers of avatars. The same is true for the Secondary Task Reaction Time while there was no effect on Secondary Task Hit Rate. Furthermore, we found a trend for a u-shaped correlation between the numbers of markings and RTST indicating increased workload. Conclusion: The trial results may indicate useful criteria for the design of training and support of observers in observational tasks.

  18. Evaluation of optimal dual axis concentrated photovoltaic thermal system with active ventilation using Frog Leap algorithm

    International Nuclear Information System (INIS)

    Gholami, H.; Sarwat, A.I.; Hosseinian, H.; Khalilnejad, A.

    2015-01-01

    Highlights: • Electro-thermal performance of open-loop controlled dual axis CPVT is investigated. • For using the absorbed heat, active ventilation with a heat storage tank is used. • Economic optimization of the system is performed, using the Frog Leap algorithm. • Detailed model of all sections is simulated with their characteristics evaluation. • Triple-junction photovoltaic cells, which are the most recent technology, are used. - Abstract: In this study, design and optimization of a concentrated photovoltaic thermal (CPVT) system considering electrical, mechanical, and economic aspects is investigated. For this purpose, each section of the system is simulated in MATLAB, in detail. Triple-junction photovoltaic cells, which are the most recent technology, are used in this study. They are more efficient in comparison to conventional photovoltaic cells. Unlike ordinary procedures, in this work active ventilation is used for absorbing the thermal power of radiation, using heat storage tanks, which not only results in increasing the electrical efficiency of the system through decreasing the temperature, but also leads to storing and managing produced thermal energy and increasing the total efficiency of the system up to 85 percent. The operation of the CPVT system is investigated for all hours of the year, considering the needed thermal load, meteorological conditions, and hourly radiation of Khuznin, a city in Qazvin province, Iran. Finally, the collector used for this system is optimized economically, using the frog leap algorithm, which resulted in a cost of 13.4 $/m² for a collector with an optimal distance between tubes of 6.34 cm.

  19. Determining the Effectiveness of Incorporating Geographic Information Into Vehicle Performance Algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Sera White

    2012-04-01

    This thesis presents a research study using one year of driving data obtained from plug-in hybrid electric vehicles (PHEV) located in Sacramento and San Francisco, California to determine the effectiveness of incorporating geographic information into vehicle performance algorithms. Sacramento and San Francisco were chosen because of the availability of high resolution (1/9 arc second) digital elevation data. First, I present a method for obtaining instantaneous road slope, given a latitude and longitude, and introduce its use into common driving intensity algorithms. I show that for trips characterized by >40 m of net elevation change (from key on to key off), the use of instantaneous road slope significantly changes the results of driving intensity calculations. For trips exhibiting elevation loss, algorithms ignoring road slope overestimated driving intensity by as much as 211 Wh/mile, while for trips exhibiting elevation gain these algorithms underestimated driving intensity by as much as 333 Wh/mile. Second, I describe and test an algorithm that incorporates vehicle route type into computations of city and highway fuel economy. Route type was determined by intersecting trip GPS points with ESRI StreetMap road types and assigning each trip as either city or highway route type according to whichever road type comprised the largest distance traveled. The fuel economy results produced by the geographic classification were compared to the fuel economy results produced by algorithms that assign route type based on average speed or driving style. Most results were within 1 mile per gallon (~3%) of one another; the largest difference was 1.4 miles per gallon for charge depleting highway trips.
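
    A hedged sketch of the first step, deriving road slope from consecutive GPS and elevation samples and folding the grade term into an energy-per-distance figure, is shown below; the haversine distance, vehicle mass, and the simple m·g·Δh gravity term are generic physics rather than the thesis's exact algorithm.

        # Instantaneous road slope from consecutive (lat, lon, elevation) samples, plus
        # the gravitational contribution to driving intensity. Generic illustration only.
        import math

        def haversine_m(lat1, lon1, lat2, lon2):
            r = 6371000.0                          # mean Earth radius, metres
            p1, p2 = math.radians(lat1), math.radians(lat2)
            dp = math.radians(lat2 - lat1)
            dl = math.radians(lon2 - lon1)
            a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
            return 2 * r * math.asin(math.sqrt(a))

        def grade_energy_wh_per_mile(lat1, lon1, e1, lat2, lon2, e2, mass_kg=1700.0):
            d = haversine_m(lat1, lon1, lat2, lon2)   # horizontal distance, metres
            if d == 0:
                return 0.0, 0.0
            slope = (e2 - e1) / d                     # rise over run
            e_joules = mass_kg * 9.81 * (e2 - e1)     # potential-energy change
            wh_per_mile = (e_joules / 3600.0) / (d / 1609.34)
            return slope, wh_per_mile

        # Hypothetical pair of consecutive samples with a 12 m climb
        print(grade_energy_wh_per_mile(38.58, -121.49, 10.0, 38.59, -121.49, 22.0))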

  20. Performance Evaluation and Modelling of Container Terminals

    Science.gov (United States)

    Venkatasubbaiah, K.; Rao, K. Narayana; Rao, M. Malleswara; Challa, Suresh

    2018-02-01

    The present paper evaluates and analyzes the performance of 28 container terminals of South East Asia through data envelopment analysis (DEA), principal component analysis (PCA) and a hybrid DEA-PCA method. The DEA technique is utilized to identify efficient decision making units (DMUs) and to rank DMUs in a peer appraisal mode. PCA is a multivariate statistical method to evaluate the performance of container terminals. In the hybrid method, DEA is integrated with PCA to arrive at the ranking of container terminals. Based on the composite ranking, performance modelling and optimization of container terminals is carried out through response surface methodology (RSM).
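
    A minimal sketch of the PCA step, standardizing the terminal indicators, extracting principal components, and ranking terminals by a variance-weighted composite score, follows; the indicator matrix is fabricated and the paper's actual weighting scheme may differ.

        # PCA-based composite scoring of container terminals from a matrix of performance
        # indicators (rows = terminals, columns = indicators, larger assumed better).
        import numpy as np

        X = np.array([[4.8, 1.2, 310.0],       # fabricated indicators per terminal
                      [6.1, 0.9, 450.0],
                      [3.5, 1.5, 280.0],
                      [5.4, 1.1, 390.0]])

        Z = (X - X.mean(axis=0)) / X.std(axis=0)        # standardize each indicator
        eigvals, eigvecs = np.linalg.eigh(np.cov(Z, rowvar=False))
        order = np.argsort(eigvals)[::-1]               # sort components by variance
        eigvals, eigvecs = eigvals[order], eigvecs[:, order]

        scores = Z @ eigvecs                            # principal component scores
        weights = eigvals / eigvals.sum()               # explained-variance weights
        composite = scores @ weights                    # one composite score per terminal
        print("ranking (best first):", np.argsort(composite)[::-1])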

  1. Performance evaluation of Central European companies

    Directory of Open Access Journals (Sweden)

    Petr Fiala

    2015-10-01

    Full Text Available The paper presents a modelling approach for performance comparison of Central European companies on three levels: country, industry, and company. The approach is based on Data Envelopment Analysis and Analytic Hierarchy Process. The proposed model consists of two basic sections. The first section estimates the importance of selected industries in the countries, whereas the second section evaluates the performance of companies within industries. The results of both sections are synthesized and finally the country performance is estimated. The evaluation is based on the data set resulting from a survey of companies from selected industries.

  2. [Evaluation of the toxoplasmosis seroprevalence in pregnant women and creating a diagnostic algorithm].

    Science.gov (United States)

    Mumcuoglu, Ipek; Toyran, Alparslan; Cetin, Feyza; Coskun, Feride Alaca; Baran, Irmak; Aksu, Neriman; Aksoy, Altan

    2014-04-01

    Toxoplasma gondii, an obligate intracellular protozoan, is widely distributed around the world and can infect all mammals and birds. While acquired toxoplasmosis is usually asymptomatic in healthy subjects, acute infection during pregnancy may lead to abortion, stillbirth, and fetal neurological and ocular damage. For the prevention of congenital toxoplasmosis, it is recommended that a screening programme and a diagnostic algorithm for pregnant women should be implemented while considering cost effectiveness. Thus, it is necessary to determine the seroprevalence of toxoplasmosis in pregnant women and the actual risk of T.gondii transmission during pregnancy in a given area. The aims of this study were to determine T.gondii seropositivity in pregnant women admitted to our hospital and to create a diagnostic algorithm in order to solve the problems arising from interpretation of the serological test results. A total of 6140 women aged 15-49 years who were admitted to our hospital between April 1st, 2010 and July 31st, 2013 were evaluated retrospectively. In the serum samples, T.gondii IgM, IgG and IgG avidity tests were performed on a VIDAS automated analyzer using TOXO IgM, TOXO IgG II and TOXO IgG avidity kits (bioMerieux, France). Both T.gondii IgM and IgG tests were requested for 4758 (77.5%) of the pregnant women, while only the IgM test was requested for 1382 (22.5%) cases. Sole IgM positivity was found in 0.2% (11/6140), IgG in 26.4% (1278/4758), and both IgM + IgG in 0.9% (44/4758). T.gondii IgG avidity tests were requested for 12 of the 44 women who were found both IgM and IgG positive; eight of them showed high avidity and four low avidity. The avidity test was ordered for 91 (7.1%) of the 1278 sole IgG positive cases, and four of them were found to have low avidity. The IgG avidity test was ordered for 554 (16.2%) of the IgM and/or IgG negative subjects; however, the test was not performed, in accordance with the laboratory's rejection criteria. It was noticed that

  3. Supplier Performance Evaluation and Rating System (SPEARS)

    International Nuclear Information System (INIS)

    Oged, M.; Warner, D.; Gurbuz, E.

    1993-03-01

    The SSCL Magnet Quality Assurance Department has implemented a Supplier Performance Evaluation and Rating System (SPEARS) to assess supplier performance throughout the development and production stages of the SSCL program. The main objectives of SPEARS are to promote teamwork and recognize performance. This paper examines the current implementation of SPEARS. MSD QA supports the development and production of SSC superconducting magnets while implementing the requirements of DOE Order 5700.6C. The MSD QA program is based on the concept of continuous improvement in quality and productivity. The QA program requires that procurement of items and services be controlled to assure conformance to specification. SPEARS has been implemented to meet DOE requirements and to enhance overall confidence in supplier performance. Key elements of SPEARS include supplier evaluation and selection as well as evaluation of furnished quality through source inspection, audit, and receipt inspection. These elements are described in this paper.

  4. Supplier Performance Evaluation and Rating System (SPEARS)

    International Nuclear Information System (INIS)

    Oged, M.; Warner, D.G.; Gurbuz, E.

    1994-01-01

    The SSCL Magnet Quality Assurance Department has implemented a Supplier Performance Evaluation and Rating System (SPEARS) to assess supplier performance throughout the development and production stages of the SSCL program. The main objectives of SPEARS are to promote teamwork and recognize performance. This paper examines the current implementation of SPEARS. MSD QA supports the development and production of SSC superconducting magnets while implementing the requirements of DOE Order 5700.6C. The MSD QA program is based on the concept of continuous improvement in quality and productivity. The QA program requires that procurement of items and services be controlled to assure conformance to specification. SPEARS has been implemented to meet DOE requirements and to enhance overall confidence in supplier performance. Key elements of SPEARS include supplier evaluation and selection as well as evaluation of furnished quality through source inspection, audit, and receipt inspection. These elements are described in this paper

  5. Toward a High Performance Tile Divide and Conquer Algorithm for the Dense Symmetric Eigenvalue Problem

    KAUST Repository

    Haidar, Azzam

    2012-01-01

    Classical solvers for the dense symmetric eigenvalue problem suffer from the first step, which involves a reduction to tridiagonal form that is dominated by the cost of accessing memory during the panel factorization. The solution is to reduce the matrix to a banded form, which then requires the eigenvalues of the banded matrix to be computed. The standard divide and conquer algorithm can be modified for this purpose. The paper combines this insight with tile algorithms that can be scheduled via a dynamic runtime system to multicore architectures. A detailed analysis of performance and accuracy is included. Performance improvements of 14-fold and 4-fold speedups are reported relative to LAPACK and Intel's Math Kernel Library.

  6. Evaluation of two "integrated" polarimetric Quantitative Precipitation Estimation (QPE) algorithms at C-band

    Science.gov (United States)

    Tabary, Pierre; Boumahmoud, Abdel-Amin; Andrieu, Hervé; Thompson, Robert J.; Illingworth, Anthony J.; Le Bouar, Erwan; Testud, Jacques

    2011-08-01

    Two so-called "integrated" polarimetric rate estimation techniques, ZPHI (Testud et al., 2000) and ZZDR (Illingworth and Thompson, 2005), are evaluated using 12 episodes of the year 2005 observed by the French C-band operational Trappes radar, located near Paris. The term "integrated" means that the concentration parameter of the drop size distribution is assumed to be constant over some area and the algorithms retrieve it using the polarimetric variables in that area. The evaluation is carried out in ideal conditions (no partial beam blocking, no ground-clutter contamination, no bright band contamination, a posteriori calibration of the radar variables ZH and ZDR) using hourly rain gauges located at distances less than 60 km from the radar. Also included in the comparison, for the sake of benchmarking, is a conventional Z = 282 R^1.66 estimator, with and without attenuation correction and with and without adjustment by rain gauges as currently done operationally at Météo France. Under those ideal conditions, the two polarimetric algorithms, which rely solely on radar data, appear to perform as well as, if not better than, the conventional algorithms, depending on the measurement conditions (attenuation, rain rates, …), even when the latter take into account rain gauges through the adjustment scheme. ZZDR with attenuation correction is the best estimator for hourly rain gauge accumulations lower than 5 mm h -1 and ZPHI is the best one above that threshold. A perturbation analysis has been conducted to assess the sensitivity of the various estimators with respect to biases on ZH and ZDR, taking into account the typical accuracy and stability that can be reasonably achieved with modern operational radars these days (1 dB on ZH and 0.2 dB on ZDR). A +1 dB positive bias on ZH (radar too hot) results in a +14% overestimation of the rain rate with the conventional estimator used in this study (Z = 282 R^1.66), a -19% underestimation with ZPHI and a +23
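
    For reference, the conventional estimator quoted above inverts a power-law Z-R relation; a short sketch of that standard conversion (with the usual dBZ-to-linear reflectivity step assumed) is given below.

        # Rain rate from reflectivity with the conventional Z = 282 R^1.66 relation used
        # as a benchmark in the study. The dBZ-to-linear conversion is the standard
        # 10^(dBZ/10); a calibration bias can be added to dBZ to mimic a "hot" radar.
        def rain_rate_mm_per_h(dbz, a=282.0, b=1.66, bias_db=0.0):
            z_linear = 10.0 ** ((dbz + bias_db) / 10.0)   # mm^6 m^-3
            return (z_linear / a) ** (1.0 / b)

        for dbz in (20.0, 30.0, 40.0):
            print(dbz, "dBZ ->", round(rain_rate_mm_per_h(dbz), 2), "mm/h")
        # A +1 dB bias raises the retrieved rate by roughly 15% for b = 1.66, of the same
        # order as the sensitivity reported above for the conventional estimator.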

  7. Performance Evaluation Model for Application Layer Firewalls.

    Science.gov (United States)

    Xuan, Shichang; Yang, Wu; Dong, Hui; Zhang, Jiangchuan

    2016-01-01

    Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.
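
    As a sketch of the kind of quantity an Erlang-type queuing model yields for a single layer, the snippet below computes the Erlang C waiting probability and mean queueing delay for an M/M/c station; the arrival rate, service rate, and server count are placeholders rather than the paper's parameters.

        # Erlang C (M/M/c) waiting probability and mean queueing delay for one processing
        # layer of the firewall model; lambda, mu and c below are placeholders.
        from math import factorial

        def erlang_c(lam, mu, c):
            a = lam / mu                        # offered load (Erlangs)
            rho = a / c                         # utilisation, must be < 1 for stability
            if rho >= 1:
                raise ValueError("unstable queue: utilisation >= 1")
            top = a ** c / factorial(c)
            bottom = (1 - rho) * sum(a ** k / factorial(k) for k in range(c)) + top
            p_wait = top / bottom               # probability an arriving packet queues
            w_q = p_wait / (c * mu - lam)       # mean wait in queue (seconds)
            return p_wait, w_q

        p_wait, w_q = erlang_c(lam=800.0, mu=300.0, c=4)   # packets/s, per-server rate, servers
        print(f"P(wait) = {p_wait:.3f}, mean queueing delay = {1000 * w_q:.2f} ms")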

  8. Performance Evaluation Model for Application Layer Firewalls.

    Directory of Open Access Journals (Sweden)

    Shichang Xuan

    Full Text Available Application layer firewalls protect the trusted area network against information security risks. However, firewall performance may affect user experience. Therefore, performance analysis plays a significant role in the evaluation of application layer firewalls. This paper presents an analytic model of the application layer firewall, based on a system analysis to evaluate the capability of the firewall. In order to enable users to improve the performance of the application layer firewall with limited resources, resource allocation was evaluated to obtain the optimal resource allocation scheme in terms of throughput, delay, and packet loss rate. The proposed model employs the Erlangian queuing model to analyze the performance parameters of the system with regard to the three layers (network, transport, and application layers). Then, the analysis results of all the layers are combined to obtain the overall system performance indicators. A discrete event simulation method was used to evaluate the proposed model. Finally, limited service desk resources were allocated to obtain the values of the performance indicators under different resource allocation scenarios in order to determine the optimal allocation scheme. Under limited resource allocation, this scheme enables users to maximize the performance of the application layer firewall.

  9. Validation Study of a Predictive Algorithm to Evaluate Opioid Use Disorder in a Primary Care Setting

    Science.gov (United States)

    Sharma, Maneesh; Lee, Chee; Kantorovich, Svetlana; Tedtaotao, Maria; Smith, Gregory A.

    2017-01-01

    Background: Opioid abuse in chronic pain patients is a major public health issue. Primary care providers are frequently the first to prescribe opioids to patients suffering from pain, yet do not always have the time or resources to adequately evaluate the risk of opioid use disorder (OUD). Purpose: This study seeks to determine the predictability of aberrant behavior to opioids using a comprehensive scoring algorithm (“profile”) incorporating phenotypic and, more uniquely, genotypic risk factors. Methods and Results: In a validation study with 452 participants diagnosed with OUD and 1237 controls, the algorithm successfully categorized patients at high and moderate risk of OUD with 91.8% sensitivity. Regardless of changes in the prevalence of OUD, sensitivity of the algorithm remained >90%. Conclusion: The algorithm correctly stratifies primary care patients into low-, moderate-, and high-risk categories to appropriately identify patients in need for additional guidance, monitoring, or treatment changes. PMID:28890908

  10. Evaluation of Gear Condition Indicator Performance on Rotorcraft Fleet

    Science.gov (United States)

    Antolick, Lance J.; Branning, Jeremy S.; Wade, Daniel R.; Dempsey, Paula J.

    2010-01-01

    The U.S. Army is currently expanding its fleet of Health Usage Monitoring Systems (HUMS) equipped aircraft at significant rates, to now include over 1,000 rotorcraft. Two different on-board HUMS, the Honeywell Modern Signal Processing Unit (MSPU) and the Goodrich Integrated Vehicle Health Management System (IVHMS), are collecting vibration health data on aircraft that include the Apache, Blackhawk, Chinook, and Kiowa Warrior. The objective of this paper is to recommend the most effective gear condition indicators for fleet use based on both a theoretical foundation and field data. Gear diagnostics with better performance will be recommended based on both a theoretical foundation and results of in-fleet use. In order to evaluate the gear condition indicator performance on rotorcraft fleets, results of more than five years of health monitoring for gear faults in the entire HUMS equipped Army helicopter fleet will be presented. More than ten examples of gear faults indicated by the gear CI have been compiled and each reviewed for accuracy. False alarms indications will also be discussed. Performance data from test rigs and seeded fault tests will also be presented. The results of the fleet analysis will be discussed, and a performance metric assigned to each of the competing algorithms. Gear fault diagnostic algorithms that are compliant with ADS-79A will be recommended for future use and development. The performance of gear algorithms used in the commercial units and the effectiveness of the gear CI as a fault identifier will be assessed using the criteria outlined in the standards in ADS-79A-HDBK, an Army handbook that outlines the conversion from Reliability Centered Maintenance to the On-Condition status of Condition Based Maintenance.

  11. High-definition computed tomography for coronary artery stents imaging: Initial evaluation of the optimal reconstruction algorithm.

    Science.gov (United States)

    Cui, Xiaoming; Li, Tao; Li, Xin; Zhou, Weihua

    2015-05-01

    The aim of this study was to evaluate the in vivo performance of four image reconstruction algorithms in a high-definition CT (HDCT) scanner with improved spatial resolution for the evaluation of coronary artery stents and intrastent lumina. Thirty-nine consecutive patients with a total of 71 implanted coronary stents underwent coronary CT angiography (CCTA) on an HDCT (Discovery CT 750 HD; GE Healthcare) with the high-resolution scanning mode. Four different reconstruction algorithms (HD-stand, HD-detail, HD-stand-plus, HD-detail-plus) were applied to reconstruct the stented coronary arteries. Image quality for stent characterization was assessed. Image noise and intrastent luminal diameter were measured. The relationships between the measured inner stent diameter (ISD), the true stent diameter (TSD), and stent type were analysed. The stent-dedicated kernel (HD-detail) offered the highest percentage (53.5%) of good image quality for stent characterization and the highest ratio (68.0±8.4%) of visible stent lumen/true stent lumen for luminal diameter measurement, at the expense of increased overall image noise. The Pearson correlation coefficient between the ISD and TSD measurements and the Spearman correlation coefficient between the ISD measurement and stent type were 0.83 and 0.48, respectively. Compared with standard reconstruction algorithms, the high-definition CT imaging technique with a dedicated high-resolution reconstruction algorithm provides more accurate stent characterization and intrastent luminal diameter measurement. Copyright © 2015 Elsevier Ireland Ltd. All rights reserved.

  12. Evaluation of AVHRR Aerosol Properties Over Mainland China from Deepblue Algorithm

    Science.gov (United States)

    Xue, Y.; Che, Y.; She, L.

    2017-12-01

    The Advanced Very High Resolution Radiometer (AVHRR) on board the NOAA series of satellites is the only operational sensor that has been observing the Earth's surface and clouds for over 30 years, since 1979. Such long time coverage has helped extend the application of AVHRR to aerosol property retrieval over both land and ocean. In 2017, the Deep Blue Project published the AVHRR 'Deep Blue' dataset version 001 (V001) using the 'Deep Blue (DB)' algorithm (Sayer et al., 2017). This dataset includes not only aerosol properties over land but also an oceanic aerosol product for three periods (NOAA-11: 1989-1990, NOAA-14: 1995-1999, NOAA-18: 2006-2011). We pay particular attention to DB's performance over mainland China. Therefore, in the present paper, we focus on validating the AVHRR/DB dataset over different land covers in China in 2007, 2008 and 2010. Data from the ground-based Aerosol Robotic NETwork (AERONET) and the China Aerosol Remote Sensing Network (CARSNET) are used as reference data. The collocation method matches data within a time range around the satellite overpass and within a spatial window of pixels around each ground-based site. In total, data from 18 AERONET and 25 CARSNET sites are used, yielding 922 matches with AERONET and 2325 matches with CARSNET. Additionally, we introduced a corrected RMS error as the main evaluation metric. As a result, AVHRR/DB increasingly underestimates AOD, and more uncertainties and errors are introduced as AOD grows. The performance of AVHRR/DB is better when compared with AERONET data than with CARSNET data, with RMSbc of 0.35 vs. 0.42; the corresponding Rs (0.757 vs. 0.654) confirm this. For urban areas, performance in Beijing is better than in Xi'an in terms of RMSbc, although the RMS in Xi'an (0.324) is lower than the others' (0.346 and 0.383), mainly because of the small observed AOD range and low R (0.624). For croplands, performances are at similar levels, with RMSbc from 0.312 to 0

  13. Data and software tools for gamma radiation spectral threat detection and nuclide identification algorithm development and evaluation

    International Nuclear Information System (INIS)

    Portnoy, David; Fisher, Brian; Phifer, Daniel

    2015-01-01

    The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: access to data and common performance metrics along with baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definition of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets as well as software tools for performance evaluation with associated performance baselines to national labs, academic institutions, government agencies, and industry. At present, datasets for two tracks, or application domains, have been developed: one that includes temporal

  14. Data and software tools for gamma radiation spectral threat detection and nuclide identification algorithm development and evaluation

    Energy Technology Data Exchange (ETDEWEB)

    Portnoy, David; Fisher, Brian; Phifer, Daniel

    2015-06-01

    The detection of radiological and nuclear threats is extremely important to national security. The federal government is spending significant resources developing new detection systems and attempting to increase the performance of existing ones. The detection of illicit radionuclides that may pose a radiological or nuclear threat is a challenging problem complicated by benign radiation sources (e.g., cat litter and medical treatments), shielding, and large variations in background radiation. Although there is a growing acceptance within the community that concentrating efforts on algorithm development (independent of the specifics of fully assembled systems) has the potential for significant overall system performance gains, there are two major hindrances to advancements in gamma spectral analysis algorithms under the current paradigm: access to data and common performance metrics along with baseline performance measures. Because many of the signatures collected during performance measurement campaigns are classified, dissemination to algorithm developers is extremely limited. This leaves developers no choice but to collect their own data if they are lucky enough to have access to material and sensors. This is often combined with their own definition of metrics for measuring performance. These two conditions make it all but impossible for developers and external reviewers to make meaningful comparisons between algorithms. Without meaningful comparisons, performance advancements become very hard to achieve and (more importantly) recognize. The objective of this work is to overcome these obstacles by developing and freely distributing real and synthetically generated gamma-spectra data sets as well as software tools for performance evaluation with associated performance baselines to national labs, academic institutions, government agencies, and industry. At present, datasets for two tracks, or application domains, have been developed: one that includes temporal

  15. Parallel implementation and evaluation of motion estimation system algorithms on a distributed memory multiprocessor using knowledge based mappings

    Science.gov (United States)

    Choudhary, Alok Nidhi; Leung, Mun K.; Huang, Thomas S.; Patel, Janak H.

    1989-01-01

    Several techniques for static and dynamic load balancing in vision systems are presented. These techniques are novel in the sense that they capture the computational requirements of a task by examining the data when it is produced. Furthermore, they can be applied to many vision systems because many algorithms in different systems are either the same or have similar computational characteristics. These techniques are evaluated by applying them to a parallel implementation of the algorithms in a motion estimation system on a hypercube multiprocessor system. The motion estimation system consists of the following steps: (1) extraction of features; (2) stereo match of images in one time instant; (3) time match of images from different time instants; (4) stereo match to compute final unambiguous points; and (5) computation of motion parameters. It is shown that the performance gains are significant when these data decomposition and load balancing techniques are used, and that the overhead of using them is minimal.
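
    The abstract describes balancing work by examining the data (for example, per-region feature counts) as it is produced. The sketch below is a generic greedy longest-processing-time assignment under that idea; the feature counts and processor count are hypothetical, and this is not the authors' hypercube implementation.

```python
import heapq

def balance_regions(feature_counts, n_procs):
    """Greedy static load balancing: assign image regions to processors so that
    the per-processor feature totals (a proxy for computation) stay even.

    feature_counts: {region_id: number of extracted features in that region}.
    Returns {processor_id: [region_id, ...]}.
    """
    # Longest-processing-time-first: sort regions by decreasing load and always
    # hand the next region to the currently least-loaded processor.
    heap = [(0, p) for p in range(n_procs)]        # (current load, processor id)
    heapq.heapify(heap)
    assignment = {p: [] for p in range(n_procs)}
    for region, load in sorted(feature_counts.items(), key=lambda kv: -kv[1]):
        total, proc = heapq.heappop(heap)
        assignment[proc].append(region)
        heapq.heappush(heap, (total + load, proc))
    return assignment

if __name__ == "__main__":
    # Hypothetical feature counts per image region after the feature-extraction step.
    counts = {0: 120, 1: 45, 2: 300, 3: 80, 4: 95, 5: 210, 6: 30, 7: 150}
    print(balance_regions(counts, n_procs=4))
```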

  16. Artificial Bee Colony Algorithm for Transient Performance Augmentation of Grid Connected Distributed Generation

    Science.gov (United States)

    Chatterjee, A.; Ghoshal, S. P.; Mukherjee, V.

    In this paper, a conventional thermal power system equipped with an automatic voltage regulator, an IEEE-type dual-input power system stabilizer (PSS) PSS3B, and an integral-controlled automatic generation control loop is considered. A distributed generation (DG) system consisting of an aqua electrolyzer, photovoltaic cells, a diesel engine generator, and other energy storage devices such as a flywheel energy storage system and a battery energy storage system is modeled. This hybrid distributed system is connected to the grid. When this DG system is integrated with the conventional thermal power system, improved transient performance is observed. Further improvement in the transient performance of this grid-connected DG is obtained with the use of a superconducting magnetic energy storage device. The tunable parameters of the proposed hybrid power system model are optimized by the artificial bee colony (ABC) algorithm. The optimal solutions offered by the ABC algorithm are compared with those offered by a genetic algorithm (GA), and the results reveal that the optimizing performance of the ABC is better than that of the GA for this specific application.
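
    Since the record only names the ABC algorithm, the following is a minimal sketch of the standard artificial bee colony scheme (employed, onlooker, and scout phases). The sphere objective and all parameter values are stand-ins; the authors' actual cost function is the transient performance of the hybrid power system model.

```python
import numpy as np

def abc_minimize(objective, bounds, n_sources=20, limit=30, max_iter=200, seed=0):
    """Minimal artificial bee colony (ABC) minimizer.

    objective: callable mapping a parameter vector to a scalar cost (>= 0 assumed).
    bounds:    (dim, 2) array of [low, high] per parameter.
    """
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    dim = bounds.shape[0]
    low, high = bounds[:, 0], bounds[:, 1]

    def random_source():
        return low + rng.random(dim) * (high - low)

    sources = np.array([random_source() for _ in range(n_sources)])
    costs = np.array([objective(s) for s in sources])
    trials = np.zeros(n_sources, dtype=int)

    def try_neighbor(i):
        # Perturb one randomly chosen dimension toward/away from a random partner.
        k = rng.integers(n_sources - 1)
        k = k if k < i else k + 1
        j = rng.integers(dim)
        cand = sources[i].copy()
        cand[j] += rng.uniform(-1, 1) * (sources[i, j] - sources[k, j])
        cand = np.clip(cand, low, high)
        c = objective(cand)
        if c < costs[i]:                       # greedy selection
            sources[i], costs[i], trials[i] = cand, c, 0
        else:
            trials[i] += 1

    for _ in range(max_iter):
        for i in range(n_sources):             # employed bee phase
            try_neighbor(i)
        fitness = 1.0 / (1.0 + costs)          # onlooker phase: fitness-proportional choice
        probs = fitness / fitness.sum()
        for _ in range(n_sources):
            try_neighbor(rng.choice(n_sources, p=probs))
        worn = np.argmax(trials)               # scout phase: abandon an exhausted source
        if trials[worn] > limit:
            sources[worn] = random_source()
            costs[worn] = objective(sources[worn])
            trials[worn] = 0

    best = np.argmin(costs)
    return sources[best], costs[best]

if __name__ == "__main__":
    # Sphere function as a stand-in for the transient-performance cost.
    best_x, best_c = abc_minimize(lambda x: float(np.sum(x ** 2)), bounds=[[-5, 5]] * 4)
    print(best_x, best_c)
```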

  17. On the performance of SART and ART algorithms for microwave imaging

    Science.gov (United States)

    Aprilliyani, Ria; Prabowo, Rian Gilang; Basari

    2018-02-01

    Advances in technology have changed lifestyles in modern society. One adverse consequence is the rise of degenerative diseases such as cancers and tumors, in addition to common infectious diseases. Every year the number of cancer and tumor victims grows significantly, making these diseases one of the leading causes of death in the world. In its early stage, a cancer or tumor has no definite symptoms, but it grows abnormally within the tissue and damages normal tissue; hence, early detection is required. Common diagnostic modalities such as MRI, CT, and PET are difficult to operate in home or mobile environments such as an ambulance; they are also costly, unpleasant, complex, less safe, and hard to move. This paper therefore considers a microwave imaging system for its portability and low cost. In the current study, we address the performance of the simultaneous algebraic reconstruction technique (SART) algorithm applied to microwave imaging. In addition, the SART performance is compared with our previous work on the algebraic reconstruction technique (ART), particularly in terms of reconstructed image quality. The results show that applying the SART algorithm to microwave imaging allows a suspicious cancer or tumor to be detected with better image quality.
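
    As a rough illustration of the two reconstruction schemes named above, the sketch below implements textbook ART (row-by-row Kaczmarz updates) and SART (simultaneous, normalized updates) on a tiny hypothetical ray-sum system; it is not the paper's microwave forward model, and the toy matrix and true image are assumptions.

```python
import numpy as np

def art(A, b, n_sweeps=50, lam=1.0):
    """Kaczmarz-type ART: update the image with one measurement (row) at a time."""
    x = np.zeros(A.shape[1])
    row_norms = np.sum(A ** 2, axis=1)
    for _ in range(n_sweeps):
        for i in range(A.shape[0]):
            if row_norms[i] > 0:
                x += lam * (b[i] - A[i] @ x) / row_norms[i] * A[i]
    return x

def sart(A, b, n_iters=50, lam=1.0):
    """SART: simultaneous update using all rows, normalized by row and column sums."""
    x = np.zeros(A.shape[1])
    row_sums = np.sum(np.abs(A), axis=1)
    col_sums = np.sum(np.abs(A), axis=0)
    row_sums[row_sums == 0] = 1.0
    col_sums[col_sums == 0] = 1.0
    for _ in range(n_iters):
        residual = (b - A @ x) / row_sums
        x += lam * (A.T @ residual) / col_sums
    return x

if __name__ == "__main__":
    # Toy 2x2-pixel "image" probed by four hypothetical ray sums (rows of A).
    A = np.array([[1., 1., 0., 0.],
                  [0., 0., 1., 1.],
                  [1., 0., 1., 0.],
                  [0., 1., 0., 1.]])
    x_true = np.array([0.2, 0.8, 0.5, 0.1])
    b = A @ x_true
    # Both return approximate reconstructions consistent with the measurements.
    print("ART :", np.round(art(A, b), 3))
    print("SART:", np.round(sart(A, b), 3))
```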

  18. Performance analysis of multidimensional wavefront algorithms with application to deterministic particle transport

    International Nuclear Information System (INIS)

    Hoisie, A.; Lubeck, O.; Wasserman, H.

    1998-01-01

    The authors develop a model for the parallel performance of algorithms that consist of concurrent, two-dimensional wavefronts implemented in a message passing environment. The model, based on a LogGP machine parameterization, combines the separate contributions of computation and communication wavefronts. They validate the model on three important supercomputer systems, on up to 500 processors. They use data from a deterministic particle transport application taken from the ASCI workload, although the model is general to any wavefront algorithm implemented on a 2-D processor domain. They also use the validated model to make estimates of performance and scalability of wavefront algorithms on 100-TFLOPS computer systems expected to be in existence within the next decade as part of the ASCI program and elsewhere. In this context, the authors analyze two problem sizes. Their model shows that on the largest such problem (1 billion cells), inter-processor communication performance is not the bottleneck. Single-node efficiency is the dominant factor.
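
    The record describes a LogGP-based model for pipelined 2-D wavefronts. The sketch below is a deliberately simplified stand-in for such a model: it charges one compute-plus-exchange step per pipeline stage, with the message cost built from assumed LogGP-style latency, overhead, and per-byte gap parameters. The numbers and the formula are illustrative assumptions, not the authors' validated parameterization.

```python
# Simplified pipelined-wavefront cost estimate in the spirit of a LogGP-style model.
# The parameter values below are illustrative assumptions, not the authors' model,
# which treats computation and communication wavefronts in more detail.

def wavefront_time(px, py, n_blocks, t_block, L, o, G, msg_bytes):
    """Estimate one sweep time on a px-by-py processor grid.

    px, py    : processor grid dimensions
    n_blocks  : work blocks each processor computes per sweep
    t_block   : compute time per block (s)
    L, o, G   : latency (s), per-message overhead (s), per-byte gap (s/byte)
    msg_bytes : bytes sent to each downstream neighbor per block
    """
    t_msg = L + 2 * o + G * msg_bytes                    # one boundary exchange
    pipeline_fill = (px + py - 2) * (t_block + t_msg)    # wavefront reaches the last processor
    steady_state = n_blocks * (t_block + t_msg)          # remaining blocks on the critical path
    return pipeline_fill + steady_state

if __name__ == "__main__":
    # Hypothetical numbers: 500 processors arranged as 25 x 20.
    est = wavefront_time(px=25, py=20, n_blocks=400,
                         t_block=2e-4, L=5e-6, o=2e-6, G=1e-9, msg_bytes=8192)
    print(f"estimated sweep time: {est * 1e3:.2f} ms")
```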

  19. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    CERN Document Server

    Martin-haugh, Stewart; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly re-implemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is presented. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single-stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2. This new strategy will use a Fast Track Finder (FTF) algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution and significantly faster execution times than were achieved during Run 1, together with better efficiency. The performance and timing of the algorithms for electron and tau track triggers are presented. The profiling infrastructure, constructed to provide prompt feedback from the optimisation, is described, including the methods used to monitor the relative performance improvements as the code evolves. The o...

  20. Performance of a rain retrieval algorithm using TRMM data in the Eastern Mediterranean

    Directory of Open Access Journals (Sweden)

    D. Katsanos

    2006-01-01

    Full Text Available This study aims to make a regional characterization of the performance of the rain retrieval algorithm BRAIN. This algorithm estimates the rain rate from brightness temperatures measured by the TRMM Microwave Imager (TMI) onboard the TRMM satellite. In this stage of the study, a comparison between the rain estimated from the Precipitation Radar (PR) onboard TRMM (2A25, version 5) and the rain retrieved by the BRAIN algorithm is presented, for about 30 satellite overpasses over the Central and Eastern Mediterranean during the period October 2003–March 2004, in order to assess the behavior of the algorithm in the Eastern Mediterranean region. BRAIN was built and tested using PR rain estimates distributed randomly over the whole TRMM sampling region. Characterizing the differences between PR and BRAIN over a specific region is thus interesting because it might reveal a local trend for one or the other instrument. Checking the BRAIN results against the PR rain estimate appears consistent with earlier results, i.e., a somewhat marked discrepancy for the highest rain rates. This difference arises from a known problem that affects rain retrievals based on passive microwave radiometer measurements, but some of the higher radar rain rates could also be questioned. As an independent test, a good correlation between the rain retrieved by BRAIN and lightning data (obtained by the UK Met Office long-range detection system) is also emphasized in the paper.

  1. Optimization of thermal performance of a smooth flat-plate solar air heater using teaching–learning-based optimization algorithm

    Directory of Open Access Journals (Sweden)

    R. Venkata Rao

    2015-12-01

    Full Text Available This paper presents the performance of the teaching–learning-based optimization (TLBO) algorithm in obtaining the optimum set of design and operating parameters for a smooth flat-plate solar air heater (SFPSAH). The TLBO algorithm is a recently proposed population-based algorithm that simulates the teaching–learning process of the classroom. Maximization of thermal efficiency is considered as the objective function for the thermal performance of the SFPSAH. The number of glass plates, irradiance, and the Reynolds number are considered as the design parameters, and wind velocity, tilt angle, ambient temperature, and emissivity of the plate are considered as the operating parameters for obtaining the thermal performance of the SFPSAH using the TLBO algorithm. The computational results show that the TLBO algorithm is better than or competitive with other optimization algorithms recently reported in the literature for the considered problem.
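
    A minimal sketch of the TLBO teacher and learner phases described above, written for a generic maximization problem. The stand-in objective and bounds are assumptions; the authors' real objective is the thermal efficiency of the SFPSAH as a function of the listed design and operating parameters.

```python
import numpy as np

def tlbo_maximize(objective, bounds, pop_size=20, max_iter=100, seed=0):
    """Minimal teaching-learning-based optimization (TLBO) for maximization."""
    rng = np.random.default_rng(seed)
    bounds = np.asarray(bounds, dtype=float)
    low, high = bounds[:, 0], bounds[:, 1]
    dim = len(bounds)

    pop = low + rng.random((pop_size, dim)) * (high - low)
    scores = np.array([objective(x) for x in pop])

    for _ in range(max_iter):
        # Teacher phase: move every learner toward the best solution found so far.
        teacher = pop[np.argmax(scores)]
        mean = pop.mean(axis=0)
        tf = rng.integers(1, 3)                     # teaching factor, 1 or 2
        for i in range(pop_size):
            cand = np.clip(pop[i] + rng.random(dim) * (teacher - tf * mean), low, high)
            s = objective(cand)
            if s > scores[i]:
                pop[i], scores[i] = cand, s
        # Learner phase: each learner interacts with a random classmate.
        for i in range(pop_size):
            j = rng.integers(pop_size - 1)
            j = j if j < i else j + 1
            direction = pop[i] - pop[j] if scores[i] > scores[j] else pop[j] - pop[i]
            cand = np.clip(pop[i] + rng.random(dim) * direction, low, high)
            s = objective(cand)
            if s > scores[i]:
                pop[i], scores[i] = cand, s

    best = np.argmax(scores)
    return pop[best], scores[best]

if __name__ == "__main__":
    # Stand-in objective with a known maximum at the origin.
    x, val = tlbo_maximize(lambda v: -float(np.sum(v ** 2)), bounds=[[-3, 3]] * 5)
    print(x, val)
```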

  2. Evaluation of downwelling diffuse attenuation coefficient algorithms in the Red Sea

    KAUST Repository

    Tiwari, Surya Prakash

    2016-05-07

    Despite the importance of optical properties such as the downwelling diffuse attenuation coefficient for characterizing the upper water column, until recently no in situ optical measurements were published for the Red Sea. Kirby et al. used observations from the Coastal Zone Color Scanner to characterize the spatial and temporal variability of the diffuse attenuation coefficient (Kd(490)) in the Red Sea. To better understand optical variability and its utility in the Red Sea, it is imperative to comprehend the diffuse attenuation coefficient and its relationship with in situ properties. Two apparent optical properties, spectral remote sensing reflectance (Rrs) and the downwelling diffuse attenuation coefficient (Kd), are calculated from vertical profile measurements of downwelling irradiance (Ed) and upwelling radiance (Lu). Kd characterizes light penetration into the water column, which is important for understanding both the physical and biogeochemical environment, including water quality and the health of the ocean environment. Our study tests the performance of the existing Kd(490) algorithms in the Red Sea and compares them against direct in situ measurements within various subdivisions of the Red Sea. Most standard algorithms either overestimated or underestimated the measured in situ values of Kd; consequently, they provided poor retrievals of Kd(490) for the Red Sea. Random errors were high for all algorithms and the correlation coefficients (r2) with in situ measurements were quite low. Hence, these algorithms may not be suitable for the Red Sea. Overall, statistical analyses of the various algorithms indicated that the existing algorithms are inadequate for the Red Sea. The present study suggests that reparameterizing existing algorithms or developing new regional algorithms is required to improve retrieval of Kd(490) for the Red Sea. © (2016) COPYRIGHT Society of Photo-Optical Instrumentation Engineers (SPIE).
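
    The algorithms evaluated in the record estimate Kd(490) from satellite reflectance; the sketch below instead shows the in situ quantity they are validated against, computing Kd as the negative slope of ln(Ed) versus depth for a synthetic irradiance profile. The profile values and noise level are assumptions.

```python
import numpy as np

def kd_from_ed_profile(depth_m, ed):
    """Diffuse attenuation coefficient Kd (1/m) from a downwelling irradiance profile.

    Assumes Ed(z) ~ Ed(0-) * exp(-Kd * z) over the fitted layer, so Kd is the
    negative slope of ln(Ed) versus depth.
    """
    depth_m = np.asarray(depth_m, dtype=float)
    ed = np.asarray(ed, dtype=float)
    slope, _ = np.polyfit(depth_m, np.log(ed), 1)
    return -slope

if __name__ == "__main__":
    # Synthetic Ed(490) profile with Kd = 0.08 1/m plus a little measurement noise.
    rng = np.random.default_rng(1)
    z = np.linspace(1, 30, 30)
    ed = 1.4 * np.exp(-0.08 * z) * (1 + 0.02 * rng.standard_normal(z.size))
    print(f"retrieved Kd(490) ~ {kd_from_ed_profile(z, ed):.3f} 1/m")
```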

  3. Traffic Congestion Evaluation and Signal Control Optimization Based on Wireless Sensor Networks: Model and Algorithms

    Directory of Open Access Journals (Sweden)

    Wei Zhang

    2012-01-01

    Full Text Available This paper presents a model and algorithms for traffic flow data monitoring and optimal traffic light control based on wireless sensor networks. For the scenario in which sensor nodes are sparsely deployed along the segments between signalized intersections, an analytical model is built using the continuum traffic equation, and a method is developed to estimate traffic parameters from the scattered sensor data. Based on the traffic data and the principle of traffic congestion formation, we introduce a congestion factor that can be used to evaluate the real-time traffic congestion status along a segment and to predict the subcritical state of traffic jams. The result is expected to support the timing-phase optimization of traffic light control for the purpose of avoiding traffic congestion before it forms. We simulate the traffic monitoring based on the Mobile Century dataset and analyze the performance of traffic light control on the VISSIM platform when the congestion factor is introduced into the signal timing optimization model. The simulation results show that this method can improve the spatial-temporal resolution of traffic data monitoring and evaluate traffic congestion status with high precision. It helps to remarkably alleviate urban traffic congestion and to decrease average traffic delays and maximum queue lengths.
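
    The paper's congestion factor is derived from the continuum traffic equation and is not reproduced here. The sketch below uses a common density-based stand-in (density from the fundamental relation q = rho * v, normalized by an assumed jam density) only to illustrate how sparse sensor readings could feed a pre-congestion indicator; the threshold and all numbers are assumptions.

```python
# A stand-in congestion indicator, not the paper's congestion factor.

def congestion_factor(flow_veh_per_h, speed_km_per_h, jam_density_veh_per_km=120.0):
    """Density-based congestion indicator in [0, 1] for one road segment."""
    speed = max(speed_km_per_h, 1e-3)             # avoid division by zero at standstill
    density = flow_veh_per_h / speed              # veh/km from the fundamental relation
    return min(density / jam_density_veh_per_km, 1.0)

if __name__ == "__main__":
    # Hypothetical (flow, speed) readings from three sparsely placed sensors.
    readings = [(1500, 52.0), (1450, 31.0), (1300, 14.0)]
    factors = [congestion_factor(q, v) for q, v in readings]
    print("per-sensor congestion factors:", [round(f, 2) for f in factors])
    print("segment status:", "pre-congested" if max(factors) > 0.7 else "free flow")
```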

  4. Evaluation of focused ultrasound algorithms: Issues for reducing pre-focal heating and treatment time.

    Science.gov (United States)

    Yiannakou, Marinos; Trimikliniotis, Michael; Yiallouras, Christos; Damianou, Christakis

    2016-02-01

    Due to heating in the pre-focal field, the delay between successive movements in high-intensity focused ultrasound (HIFU) is sometimes as long as 60 s, resulting in treatment times on the order of 2-3 h. Because there is generally a requirement to reduce treatment time, we were motivated to explore alternative transducer motion algorithms in order to reduce pre-focal heating and treatment time. A 1 MHz single-element transducer with 4 cm diameter and 10 cm focal length was used. A simulation model was developed that estimates the temperature, thermal dose and lesion development in the pre-focal field. The simulated temperature history, combined with the motion algorithms, produced thermal maps in the pre-focal region. A polyacrylamide gel phantom was used to evaluate the induced pre-focal heating for each motion algorithm and to assess the accuracy of the simulation model. Three of the six algorithms, those with successive steps close to each other, exhibited severe heating in the pre-focal field. Minimal heating was produced by the algorithms whose successive steps are far apart (square, square spiral and random). The last three algorithms were improved further (with a small cost in time), thus eliminating the pre-focal heating completely and substantially reducing the treatment time compared with traditional algorithms. Out of the six algorithms, 3 were successful in eliminating the pre-focal heating completely. Because these 3 algorithms required no delay between successive movements (except in the last part of the motion), the treatment time was reduced by 93%. Therefore, it will be possible in the future to achieve treatment times for focused ultrasound therapies shorter than 30 min. The rate of ablated volume achieved with one of the proposed algorithms was 71 cm³/h. The intention of this pilot study was to demonstrate that the navigation algorithms play the most important role in reducing pre-focal heating. By evaluating in
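
    To illustrate why step spacing matters, the sketch below generates two of the simplest possible visiting orders for a grid of sonication points, a raster scan and a random permutation, and compares the mean distance between successive points. The grid size and the 5 mm spacing are assumptions, and these are not the exact patterns evaluated in the paper.

```python
import numpy as np

def raster_path(n):
    """Boustrophedon (row-by-row) visiting order over an n-by-n grid of sonication points."""
    pts = []
    for r in range(n):
        cols = range(n) if r % 2 == 0 else range(n - 1, -1, -1)
        pts.extend((r, c) for c in cols)
    return np.array(pts, dtype=float)

def random_path(n, seed=0):
    """Random permutation of the same grid points (successive steps tend to be far apart)."""
    pts = np.array([(r, c) for r in range(n) for c in range(n)], dtype=float)
    return pts[np.random.default_rng(seed).permutation(len(pts))]

def mean_step(path, spacing_mm=5.0):
    """Mean Euclidean distance between successive sonication points, in mm."""
    return float(np.mean(np.linalg.norm(np.diff(path, axis=0), axis=1))) * spacing_mm

if __name__ == "__main__":
    n = 8
    print(f"raster mean step: {mean_step(raster_path(n)):.1f} mm")
    print(f"random mean step: {mean_step(random_path(n)):.1f} mm")
```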

  5. An Approach to a Comprehensive Test Framework for Analysis and Evaluation of Text Line Segmentation Algorithms

    Directory of Open Access Journals (Sweden)

    Zoran N. Milivojevic

    2011-09-01

    Full Text Available The paper introduces a testing framework for the evaluation and validation of text line segmentation algorithms. Text line segmentation represents the key step for correct optical character recognition. Many of the tests for the evaluation of text line segmentation algorithms use text databases as reference templates; because of this mismatch, a reliable testing framework is required. Hence, a new approach to a comprehensive experimental framework for the evaluation of text line segmentation algorithms is proposed. It consists of synthetic multi-line text samples as well as real handwritten text. Although the tests are mutually independent, the results are cross-linked. The proposed method can be used for different types of scripts and languages. Furthermore, two different procedures for the evaluation of algorithm efficiency, based on the obtained error-type classification, are proposed. The first is based on the segmentation-line error description, while the second incorporates well-known signal detection theory. Each has different capabilities and conveniences, but they can be used as supplements to make the evaluation process efficient. Overall, the proposed procedure based on the segmentation-line error description has some advantages, characterized by five measures that describe the measurement procedure.

  6. Case-Mix for Performance Management: A Risk Algorithm Based on ICD-10-CM.

    Science.gov (United States)

    Gao, Jian; Moran, Eileen; Almenoff, Peter L

    2018-06-01

    Accurate risk adjustment is the key to a reliable comparison of cost and quality performance among providers and hospitals. However, the existing case-mix algorithms based on age, sex, and diagnoses can only explain up to 50% of the cost variation, so more accurate risk adjustment is desired for provider performance assessment and improvement. The objective was to develop a case-mix algorithm that hospitals and payers can use to measure and compare the cost and quality performance of their providers. All 6,048,895 patients with valid diagnoses and cost recorded in the US Veterans health care system in fiscal year 2016 were included in this study. The dependent variable was total cost at the patient level, and the explanatory variables were age, sex, and comorbidities represented by 762 clinically homogeneous groups, which were created by expanding the 283 categories from the Clinical Classifications Software based on ICD-10-CM codes. The split-sample method was used to assess model overfitting and coefficient stability. The predictive power of the algorithms was ascertained by comparing the R², mean absolute percentage error, root-mean-square error, predictive ratios, and c-statistics. The expansion of the Clinical Classifications Software categories resulted in higher predictive power: the R² reached 0.72 and 0.52 for the transformed- and raw-scale cost, respectively. The case-mix algorithm we developed based on age, sex, and diagnoses outperformed the existing case-mix models reported in the literature. The method developed in this study can be used by other health systems to produce tailored risk models for their specific purposes.
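
    A toy version of the kind of case-mix regression described above: log cost regressed on age, sex, and comorbidity-group indicators by ordinary least squares. It uses 10 synthetic groups rather than the 762 clinically homogeneous groups in the paper, and all data below are simulated.

```python
import numpy as np

def fit_case_mix(age, sex, comorbidity, cost, n_groups):
    """Fit log-cost ~ age + sex + comorbidity-group dummies (group 0 as reference).

    Returns the coefficient vector and design matrix; fitted values exp(X @ beta)
    can serve as expected costs for risk adjustment.
    """
    n = len(cost)
    X = np.zeros((n, 3 + n_groups - 1))
    X[:, 0] = 1.0                       # intercept
    X[:, 1] = age
    X[:, 2] = sex
    for g in range(1, n_groups):        # dummy indicators, group 0 is the baseline
        X[:, 2 + g] = (comorbidity == g)
    beta, *_ = np.linalg.lstsq(X, np.log(cost), rcond=None)
    return beta, X

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    n, n_groups = 5000, 10              # toy stand-ins for the 762 clinical groups
    age = rng.integers(20, 95, n)
    sex = rng.integers(0, 2, n)
    grp = rng.integers(0, n_groups, n)
    true_effect = np.linspace(0.0, 1.5, n_groups)
    cost = np.exp(6 + 0.01 * age + 0.1 * sex + true_effect[grp]
                  + 0.3 * rng.standard_normal(n))
    beta, X = fit_case_mix(age, sex, grp, cost, n_groups)
    resid = np.log(cost) - X @ beta
    r2 = 1 - resid.var() / np.log(cost).var()
    print(f"R^2 on the transformed (log) scale: {r2:.2f}")
```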

  7. High-performance bidiagonal reduction using tile algorithms on homogeneous multicore architectures

    KAUST Repository

    Ltaief, Hatem

    2013-04-01

    This article presents a new high-performance bidiagonal reduction (BRD) for homogeneous multicore architectures. This article is an extension of the high-performance tridiagonal reduction implemented by the same authors [Luszczek et al., IPDPS 2011] to the BRD case. The BRD is the first step toward computing the singular value decomposition of a matrix, which is one of the most important algorithms in numerical linear algebra due to its broad impact in computational science. The high performance of the BRD described in this article comes from the combination of four important features: (1) tile algorithms with tile data layout, which provide an efficient data representation in main memory; (2) a two-stage reduction approach that allows most of the computation during the first stage (reduction to band form) to be cast into calls to Level 3 BLAS and reduces the memory traffic during the second stage (reduction from band to bidiagonal form) by using high-performance kernels optimized for cache reuse; (3) a data dependence translation layer that maps the general algorithm with column-major data layout into the tile data layout; and (4) a dynamic runtime system that efficiently schedules the newly implemented kernels across the processing units and ensures that the data dependencies are not violated. A detailed analysis is provided to understand the critical impact of the tile size on the total execution time, which also corresponds to the matrix bandwidth size after the reduction of the first stage. The performance results show a significant improvement over currently established alternatives. The new high-performance BRD achieves up to a 30-fold speedup on a 16-core Intel Xeon machine with a 12000×12000 matrix size against state-of-the-art open-source and commercial numerical software packages, namely LAPACK compiled with optimized and multithreaded BLAS from MKL, as well as Intel MKL version 10.2. © 2013 ACM.
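
    The sketch below illustrates only feature (1) above, the tile data layout: it repacks a conventional 2-D array into contiguous nb-by-nb tiles and back. It is a NumPy illustration of the storage idea, not the optimized kernels or the two-stage reduction itself, and it assumes dimensions divisible by the tile size.

```python
import numpy as np

def to_tile_layout(A, nb):
    """Repack a matrix into contiguous nb-by-nb tiles (tile data layout).

    Each tile is stored contiguously, so a kernel working on one tile touches a
    single block of memory. Assumes dimensions are multiples of nb for simplicity.
    """
    m, n = A.shape
    assert m % nb == 0 and n % nb == 0, "sketch assumes dimensions divisible by nb"
    tiles = A.reshape(m // nb, nb, n // nb, nb).transpose(0, 2, 1, 3)
    return np.ascontiguousarray(tiles)        # shape: (m/nb, n/nb, nb, nb)

def from_tile_layout(tiles):
    """Inverse repacking, back to the conventional 2-D layout."""
    mt, nt, nb, _ = tiles.shape
    return tiles.transpose(0, 2, 1, 3).reshape(mt * nb, nt * nb)

if __name__ == "__main__":
    A = np.arange(8 * 8, dtype=float).reshape(8, 8)
    T = to_tile_layout(A, nb=4)
    assert np.array_equal(from_tile_layout(T), A)
    print("tile (0, 1):\n", T[0, 1])          # top-right 4x4 block, stored contiguously
```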

  8. Virginia power's human performance evaluation system (HPES)

    International Nuclear Information System (INIS)

    Patterson, W.E.

    1991-01-01

    This paper reports on the Human Performance Evaluation System (HPES) which was initially developed by the Institute of Nuclear Power Operations (INPO) using the Aviation Safety Reporting System (ASRS) as a guide. After a pilot program involving three utilities ended in 1983, the present day program was instituted. A methodology was developed, for specific application to nuclear power plant employees, to aid trained coordinators/evaluators in determining those factors that exert a negative influence on human behavior in the nuclear power plant environment. HPES is for anyone and everyone on site, from contractors to plant staff to plant management. No one is excluded from participation. Virginia Power's HPES program goal is to identify and correct the root causes of human performance problems. Evaluations are performed on reported real or perceived conditions that may have an adverse influence on members of the nuclear team. A report is provided to management identifying root cause and contributing factors along with recommended corrective actions

  9. Decision-making in pediatrics: a practical algorithm to evaluate complementary and alternative medicine for children.

    Science.gov (United States)

    Renella, Raffaele; Fanconi, Sergio

    2006-07-01

    We herein present a preliminary practical algorithm for evaluating complementary and alternative medicine (CAM) for children which relies on basic bioethical principles and considers the influence of CAM on global child healthcare. CAM is currently involved in almost all sectors of pediatric care and frequently represents a challenge to the pediatrician. The aim of this article is to provide a decision-making tool to assist the physician, especially as it remains difficult to keep up-to-date with the latest developments in the field. The reasonable application of our algorithm together with common sense should enable the pediatrician to decide whether pediatric (P)-CAM represents potential harm to the patient, and allow ethically sound counseling. In conclusion, we propose a pragmatic algorithm designed to evaluate P-CAM, briefly explain the underlying rationale and give a concrete clinical example.

  10. A novel orthoimage mosaic method using a weighted A∗ algorithm - Implementation and evaluation

    Science.gov (United States)

    Zheng, Maoteng; Xiong, Xiaodong; Zhu, Junfeng

    2018-04-01

    The implementation and evaluation of a weighted A∗ algorithm for orthoimage mosaicking with UAV (Unmanned Aerial Vehicle) imagery is proposed. The initial seam-line network is first generated by the standard Voronoi diagram algorithm; an edge diagram is generated based on DSM (Digital Surface Model) data; the vertices (conjunction nodes of seam-lines) of the initial network are relocated if they lie on high objects (buildings, trees and other artificial structures); and the initial seam-lines are refined using the weighted A∗ algorithm based on the edge diagram and the relocated vertices. Our method was tested with three real UAV datasets. Two quantitative terms are introduced to evaluate the results of the proposed method. Preliminary results show that the method is suitable for regularly and irregularly aligned UAV images over most terrain types (flat or mountainous areas), and is better than the state-of-the-art method in both quality and efficiency on the test datasets.
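
    A minimal weighted A∗ sketch in the spirit of the seam-line refinement described above: the search runs on a 2-D cost grid (standing in for the DSM-derived edge diagram) with the heuristic inflated by a weight w >= 1. The grid, costs, and weight are assumptions, not the paper's parameters.

```python
import heapq
import numpy as np

def weighted_astar(cost_map, start, goal, w=1.5):
    """Weighted A*: f = g + w * h over a 2-D cost grid, 4-connected moves.

    cost_map[r, c] is the per-cell traversal cost (e.g., high on building edges so
    the seam-line avoids elevated objects). Returns a list of (row, col) cells.
    """
    rows, cols = cost_map.shape
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
    open_heap = [(w * h(start), 0.0, start)]
    best_g, parent = {start: 0.0}, {start: None}
    while open_heap:
        _, g, cur = heapq.heappop(open_heap)
        if cur == goal:
            path = []
            while cur is not None:
                path.append(cur)
                cur = parent[cur]
            return path[::-1]
        if g > best_g.get(cur, float("inf")):
            continue                                          # stale heap entry
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nxt = (cur[0] + dr, cur[1] + dc)
            if 0 <= nxt[0] < rows and 0 <= nxt[1] < cols:
                ng = g + float(cost_map[nxt])
                if ng < best_g.get(nxt, float("inf")):
                    best_g[nxt], parent[nxt] = ng, cur
                    heapq.heappush(open_heap, (ng + w * h(nxt), ng, nxt))
    return None

if __name__ == "__main__":
    cost = np.ones((20, 20))
    cost[5:15, 8:12] = 50.0                 # a "building" block the seam should skirt
    path = weighted_astar(cost, start=(0, 10), goal=(19, 10), w=2.0)
    print("path length:", len(path))
```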

  11. Extreme-Scale Algorithms & Software Resilience (EASIR) Architecture-Aware Algorithms for Scalable Performance and Resilience on Heterogeneous Architectures

    Energy Technology Data Exchange (ETDEWEB)

    Demmel, James W. [Univ. of California, Berkeley, CA (United States)

    2017-09-14

    This project addresses both communication-avoiding algorithms, and reproducible floating-point computation. Communication, i.e. moving data, either between levels of memory or processors over a network, is much more expensive per operation than arithmetic (measured in time or energy), so we seek algorithms that greatly reduce communication. We developed many new algorithms for both dense and sparse, and both direct and iterative linear algebra, attaining new communication lower bounds, and getting large speedups in many cases. We also extended this work in several ways: (1) We minimize writes separately from reads, since writes may be much more expensive than reads on emerging memory technologies, like Flash, sometimes doing asymptotically fewer writes than reads. (2) We extend the lower bounds and optimal algorithms to arbitrary algorithms that may be expressed as perfectly nested loops accessing arrays, where the array subscripts may be arbitrary affine functions of the loop indices (eg A(i), B(i,j+k, k+3*m-7, …) etc.). (3) We extend our communication-avoiding approach to some machine learning algorithms, such as support vector machines. This work has won a number of awards. We also address reproducible floating-point computation. We define reproducibility to mean getting bitwise identical results from multiple runs of the same program, perhaps with different hardware resources or other changes that should ideally not change the answer. Many users depend on reproducibility for debugging or correctness. However, dynamic scheduling of parallel computing resources, combined with nonassociativity of floating point addition, makes attaining reproducibility a challenge even for simple operations like summing a vector of numbers, or more complicated operations like the Basic Linear Algebra Subprograms (BLAS). We describe an algorithm that computes a reproducible sum of floating point numbers, independent of the order of summation. The algorithm depends only on a
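
    The following is not the project's reproducible summation algorithm; it only demonstrates the property such an algorithm provides. A naive left-to-right sum changes when the order of the addends changes, while an exactly rounded sum (math.fsum here) is bitwise identical for every ordering.

```python
import math
import random

def naive_sum(values):
    total = 0.0
    for v in values:
        total += v          # rounding error depends on the order of additions
    return total

if __name__ == "__main__":
    random.seed(0)
    values = [random.uniform(-1, 1) * 10 ** random.randint(-8, 8) for _ in range(10_000)]
    orderings = [sorted(values), sorted(values, reverse=True), values]
    # The naive sum typically yields several distinct bit patterns across orderings;
    # the exactly rounded sum yields exactly one.
    print("naive :", {naive_sum(o).hex() for o in orderings})
    print("fsum  :", {math.fsum(o).hex() for o in orderings})
```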

  12. Evaluating Nonclinical Performance of the Academic Pathologist

    Directory of Open Access Journals (Sweden)

    Austin Blackburn Wiles MD

    2018-02-01

    Full Text Available Academic pathologists perform clinical duties, as well as valuable nonclinical activities. Nonclinical activities may consist of research, teaching, and administrative management among many other important tasks. While clinical duties have many clear metrics to measure productivity, like the relative value units of Medicare reimbursement, nonclinical performance is often difficult to measure. Despite the difficulty of evaluating nonclinical activities, nonclinical productivity is used to determine promotion, funding, and inform professional evaluations of performance. In order to better evaluate the important nonclinical performance of academic pathologists, we present an evaluation system for leadership use. This system uses a Microsoft Excel workbook to provide academic pathologist respondents and reviewing leadership a transparent, easy-to-complete system that is both flexible and scalable. This system provides real-time feedback to academic pathologist respondents and a clear executive summary that allows for focused guidance of the respondent. This system may be adapted to fit practices of varying size, measure performance differently based on years of experience, and can work with many different institutional values.

  13. Performance analysis of visual tracking algorithms for motion-based user interfaces on mobile devices

    Science.gov (United States)

    Winkler, Stefan; Rangaswamy, Karthik; Tedjokusumo, Jefry; Zhou, ZhiYing

    2008-02-01

    Determining the self-motion of a camera is useful for many applications. A number of visual motion-tracking algorithms have been developed to date, each with its own advantages and restrictions. Some of them have also made their foray into the mobile world, powering augmented reality-based applications on phones with built-in cameras. In this paper, we compare the performance of three feature- or landmark-guided motion tracking algorithms, namely marker-based tracking with MXRToolkit, face tracking based on CamShift, and MonoSLAM. We analyze and compare the complexity, accuracy, sensitivity, robustness and restrictions of each of the above methods. Our performance tests are conducted over two stages: the first stage uses video sequences created with simulated camera movements along the six degrees of freedom in order to compare tracking accuracy, while the second stage analyzes the robustness of the algorithms by testing against factors such as image scaling and frame-skipping.

  14. Effects of directional microphone and adaptive multichannel noise reduction algorithm on cochlear implant performance.

    Science.gov (United States)

    Chung, King; Zeng, Fan-Gang; Acker, Kyle N

    2006-10-01

    Although cochlear implant (CI) users have enjoyed good speech recognition in quiet, they still have difficulties understanding speech in noise. We conducted three experiments to determine whether a directional microphone and an adaptive multichannel noise reduction algorithm could enhance CI performance in noise and whether Speech Transmission Index (STI) can be used to predict CI performance in various acoustic and signal processing conditions. In Experiment I, CI users listened to speech in noise processed by 4 hearing aid settings: omni-directional microphone, omni-directional microphone plus noise reduction, directional microphone, and directional microphone plus noise reduction. The directional microphone significantly improved speech recognition in noise. Both directional microphone and noise reduction algorithm improved overall preference. In Experiment II, normal hearing individuals listened to the recorded speech produced by 4- or 8-channel CI simulations. The 8-channel simulation yielded similar speech recognition results as in Experiment I, whereas the 4-channel simulation produced no significant difference among the 4 settings. In Experiment III, we examined the relationship between STIs and speech recognition. The results suggested that STI could predict actual and simulated CI speech intelligibility with acoustic degradation and the directional microphone, but not the noise reduction algorithm. Implications for intelligibility enhancement are discussed.

  15. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    AUTHOR|(INSPIRE)INSPIRE-00372074; The ATLAS collaboration; Sotiropoulou, Calliope Louisa; Annovi, Alberto; Kordas, Kostantinos

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade that has the goal of providing a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate and with a very small latency, on the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thereby reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component of the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avail...
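
    As a software-level illustration of what a 2D pixel clustering stage does, the sketch below groups fired pixels into connected clusters and reports unweighted centroids. It is a generic flood-fill clustering with hypothetical hit coordinates, not the real-time implementation on the Input Mezzanine card.

```python
import numpy as np

def cluster_pixels(hits, connectivity=8):
    """Group fired pixels into 2-D clusters and return each cluster's size and centroid.

    hits: iterable of (column, row) pixel coordinates.
    Uses a simple flood fill; diagonal neighbours count when connectivity=8.
    """
    hits = set(map(tuple, hits))
    if connectivity == 8:
        nbrs = [(dc, dr) for dc in (-1, 0, 1) for dr in (-1, 0, 1) if (dc, dr) != (0, 0)]
    else:
        nbrs = [(1, 0), (-1, 0), (0, 1), (0, -1)]
    clusters = []
    while hits:
        stack = [hits.pop()]
        cluster = []
        while stack:
            c, r = stack.pop()
            cluster.append((c, r))
            for dc, dr in nbrs:
                if (c + dc, r + dr) in hits:
                    hits.remove((c + dc, r + dr))
                    stack.append((c + dc, r + dr))
        clusters.append(cluster)
    return [(len(cl), tuple(np.mean(cl, axis=0))) for cl in clusters]

if __name__ == "__main__":
    # Hypothetical fired pixels on one module: two small clusters plus isolated hits.
    fired = [(10, 4), (10, 5), (11, 5), (40, 20), (41, 20), (41, 21), (80, 3)]
    for size, centroid in cluster_pixels(fired):
        print(f"cluster of {size} pixel(s), centroid (col, row) = {centroid}")
```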

  16. Performance of a Real-time Multipurpose 2-Dimensional Clustering Algorithm Developed for the ATLAS Experiment

    CERN Document Server

    Gkaitatzis, Stamatios; The ATLAS collaboration

    2016-01-01

    In this paper the performance of the 2D pixel clustering algorithm developed for the Input Mezzanine card of the ATLAS Fast TracKer system is presented. Fast TracKer is an approved ATLAS upgrade that has the goal of providing a complete list of tracks to the ATLAS High Level Trigger for each level-1 accepted event, at up to 100 kHz event rate and with a very small latency, on the order of 100 µs. The Input Mezzanine card is the input stage of the Fast TracKer system. Its role is to receive data from the silicon detector and perform real-time clustering, thereby reducing the amount of data propagated to the subsequent processing levels with minimal information loss. We focus on the most challenging component of the Input Mezzanine card, the 2D clustering algorithm executed on the pixel data. We compare two different implementations of the algorithm. The first, called the ideal implementation, searches for clusters of pixels in the whole silicon module at once and calculates the cluster centroids exploiting the whole avai...

  17. Performance and development for the Inner Detector Trigger algorithms at ATLAS

    CERN Document Server

    Penc, O; The ATLAS collaboration

    2014-01-01

    The performance of the ATLAS Inner Detector (ID) Trigger algorithms being developed for running on the ATLAS High Level Trigger (HLT) processor farm during Run 2 of the LHC is presented. During the 2013-14 LHC long shutdown, modifications are being carried out to the LHC accelerator to increase both the beam energy and luminosity. These modifications will pose significant challenges for the ID Trigger algorithms, both in terms of execution time and physics performance. To meet these challenges, the ATLAS HLT software is being restructured to run as a more flexible single-stage HLT, instead of two separate stages (Level 2 and Event Filter) as in Run 1. This will reduce the overall data volume that needs to be requested by the HLT system, since data will no longer need to be requested for each of the two separate processing stages. Development of the ID Trigger algorithms for Run 2, currently expected to be ready for detector commissioning near the end of 2014, is progressing well and the current efforts towards op...

  18. Performance and development plans for the Inner Detector trigger algorithms at ATLAS

    CERN Document Server

    Martin-haugh, Stewart; The ATLAS collaboration

    2015-01-01

    A description of the design and performance of the newly re-implemented tracking algorithms for the ATLAS trigger for LHC Run 2, to commence in spring 2015, is presented. The ATLAS High Level Trigger (HLT) has been restructured to run as a more flexible single stage process, rather than the two separate Level 2 and Event Filter stages used during Run 1. To make optimal use of this new scenario, a new tracking strategy has been implemented for Run 2. This new strategy will use a FastTrackFinder algorithm to directly seed the subsequent Precision Tracking, and will result in improved track parameter resolution and significantly faster execution times than achieved during Run 1 and with better efficien