WorldWideScience

Sample records for quantification algorithm including

  1. A fast and robust hepatocyte quantification algorithm including vein processing

    Directory of Open Access Journals (Sweden)

    Homeyer André

    2010-03-01

    Full Text Available Abstract Background Quantification of different types of cells is often needed for analysis of histological images. In our project, we compute the relative number of proliferating hepatocytes for the evaluation of the regeneration process after partial hepatectomy in normal rat livers. Results Our presented automatic approach for hepatocyte (HC) quantification is suitable for the analysis of an entire digitized histological section given in the form of a series of images. It is the main part of an automatic hepatocyte quantification tool that allows for the computation of the ratio between the number of proliferating HC-nuclei and the total number of all HC-nuclei for a series of images in one processing run. The processing pipeline allows us to obtain desired and valuable results for a wide range of images with different properties without additional parameter adjustment. Comparing the obtained segmentation results with a manually retrieved segmentation mask which is considered to be the ground truth, we achieve results with sensitivity above 90% and false positive fraction below 15%. Conclusions The proposed automatic procedure gives results with high sensitivity and low false positive fraction and can be applied to process entire stained sections.
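    The evaluation described above reduces to comparing a binary nucleus mask against the manually derived ground-truth mask and forming the proliferation ratio. A minimal sketch of those two computations (hypothetical NumPy arrays, not the authors' pipeline; note that definitions of the false positive fraction vary) might look like this:

```python
import numpy as np

def sensitivity_fpf(pred, truth):
    """Sensitivity and false-positive fraction of a binary mask vs. ground truth."""
    pred, truth = pred.astype(bool), truth.astype(bool)
    tp = np.logical_and(pred, truth).sum()
    fp = np.logical_and(pred, ~truth).sum()
    fn = np.logical_and(~pred, truth).sum()
    sensitivity = tp / (tp + fn)
    fpf = fp / (tp + fp)   # one common definition; the paper may define it differently
    return sensitivity, fpf

def proliferation_ratio(n_proliferating_nuclei, n_total_nuclei):
    """Ratio of proliferating hepatocyte nuclei to all hepatocyte nuclei."""
    return n_proliferating_nuclei / n_total_nuclei

# Toy usage with random masks:
rng = np.random.default_rng(0)
truth = rng.random((256, 256)) > 0.7
pred = truth ^ (rng.random((256, 256)) > 0.97)   # ground truth with a few flipped pixels
print(sensitivity_fpf(pred, truth), proliferation_ratio(120, 400))
```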

  2. Development of computational algorithms for quantification of pulmonary structures

    International Nuclear Information System (INIS)

    Oliveira, Marcela de; Alvarez, Matheus; Alves, Allan F.F.; Miranda, Jose R.A.; Pina, Diana R.

    2012-01-01

    High-resolution computed tomography (HRCT) has become the imaging diagnostic exam most commonly used for the evaluation of the sequelae of Paracoccidioidomycosis (PCM). Subjective evaluation of the radiological abnormalities found on HRCT images does not provide an accurate quantification. Computer-aided diagnosis systems produce a more objective assessment of the abnormal patterns found in HRCT images. Thus, this research proposes the development of algorithms in the MATLAB® computing environment that can semi-automatically quantify pathologies such as pulmonary fibrosis and emphysema. The algorithm consists in selecting a region of interest (ROI) and, through the use of masks, density filters and morphological operators, obtaining a quantification of the injured area relative to the area of healthy lung. The proposed method was tested on ten HRCT scans of patients with confirmed PCM. The results of the semi-automatic measurements were compared with subjective evaluations performed by a specialist in radiology, reaching an agreement of 80% for emphysema and 58% for fibrosis. (author)
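    The core of the described measurement is a density-mask step: counting ROI voxels whose Hounsfield values fall in an "injured" band and relating them to the healthy lung area. A minimal sketch, with illustrative HU thresholds that are assumptions rather than the values used in the paper, could be:

```python
import numpy as np

# Illustrative HU thresholds (assumptions, not the values used in the paper).
EMPHYSEMA_MAX_HU = -950                # voxels below this are scored as emphysematous
FIBROSIS_RANGE_HU = (-500, -200)       # hypothetical dense band taken as fibrotic

def density_mask_percentages(roi_hu, lung_mask):
    """Percentage of lung ROI voxels falling in each 'injured' density band."""
    lung = roi_hu[lung_mask]
    emphysema = (lung < EMPHYSEMA_MAX_HU).mean() * 100.0
    fibrosis = ((lung >= FIBROSIS_RANGE_HU[0]) & (lung <= FIBROSIS_RANGE_HU[1])).mean() * 100.0
    return emphysema, fibrosis

# Example with a synthetic ROI standing in for an HRCT slice:
rng = np.random.default_rng(0)
roi = rng.normal(-800, 150, size=(128, 128))
mask = np.ones_like(roi, dtype=bool)
print(density_mask_percentages(roi, mask))
```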

  3. Efficient Algorithms for Electrostatic Interactions Including Dielectric Contrasts

    Directory of Open Access Journals (Sweden)

    Christian Holm

    2013-10-01

    Full Text Available Coarse-grained models of soft matter are usually combined with implicit solvent models that take the electrostatic polarizability into account via a dielectric background. In biophysical or nanoscale simulations that include water, this constant can vary greatly within the system. Performing molecular dynamics or other simulations that need to compute exact electrostatic interactions between charges in those systems is computationally demanding. We review here several algorithms developed by us that perform exactly this task. For planar dielectric surfaces in partial periodic boundary conditions, the arising image charges can be either treated with the MMM2D algorithm in a very efficient and accurate way or with the electrostatic layer correction term, which enables the user to use his favorite 3D periodic Coulomb solver. Arbitrarily-shaped interfaces can be dealt with using induced surface charges with the induced charge calculation (ICC*) algorithm. Finally, the local electrostatics algorithm, MEMD (Maxwell Equations Molecular Dynamics), even allows one to employ a smoothly varying dielectric constant in the systems. We introduce the concepts of these three algorithms and an extension for the inclusion of boundaries that are to be held fixed at a constant potential (metal conditions). For each method, we present a showcase application to highlight the importance of dielectric interfaces.

  4. A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification

    Science.gov (United States)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.

    MRS signal quantification is a rather involved procedure and has attracted the interest of the medical engineering community regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), demonstrated a good performance but not without drawbacks already discussed by the authors. On the other hand, preliminary application of Genetic Algorithms (GA) has already been reported in the literature by the authors regarding the peak detection problem encountered in MRS quantification using the Voigt line shape model. This paper investigates a novel constrained genetic algorithm involving a generic and adaptively defined fitness function which extends the simple genetic algorithm methodology to the case of noisy signals. The applicability of this new algorithm is scrutinized through experiments on artificial MRS signals interleaved with noise, regarding its signal fitting capabilities. Although extensive experiments with real-world MRS signals are necessary, the performance shown here illustrates the method's potential to be established as a generic MRS metabolite quantification procedure.
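    The constrained-GA idea can be illustrated with a bare-bones real-coded genetic algorithm that fits a single noisy peak under box constraints; the Lorentzian stand-in, the bounds, and all GA settings below are assumptions for illustration, not the paper's Voigt model or adaptive fitness function:

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "MRS-like" signal: one Lorentzian peak plus noise (toy stand-in for Voigt).
t = np.linspace(0, 10, 400)
def lorentzian(p, x):
    amp, pos, width = p
    return amp * width**2 / ((x - pos)**2 + width**2)
true_p = np.array([1.0, 4.0, 0.5])
signal = lorentzian(true_p, t) + rng.normal(0, 0.05, t.size)

# Parameter bounds act as the "constraints" of a constrained GA.
bounds = np.array([[0.1, 3.0], [0.0, 10.0], [0.05, 2.0]])

def fitness(pop):
    # Lower residual -> higher fitness; adapting this to the noise level is
    # the kind of extension the paper investigates.
    return np.array([-np.sum((lorentzian(ind, t) - signal)**2) for ind in pop])

def evolve(pop_size=60, generations=150, mut_sigma=0.1):
    pop = rng.uniform(bounds[:, 0], bounds[:, 1], size=(pop_size, 3))
    for _ in range(generations):
        fit = fitness(pop)
        # Tournament selection
        idx = rng.integers(0, pop_size, size=(pop_size, 2))
        winners = np.where(fit[idx[:, 0]] > fit[idx[:, 1]], idx[:, 0], idx[:, 1])
        parents = pop[winners]
        # Arithmetic crossover between pairs of selected parents
        alpha = rng.random((pop_size, 1))
        children = alpha * parents + (1 - alpha) * parents[::-1]
        # Gaussian mutation, clipped back into the constraint box
        children += rng.normal(0, mut_sigma, children.shape)
        pop = np.clip(children, bounds[:, 0], bounds[:, 1])
        # Elitism: keep the best selected individual unchanged
        pop[0] = parents[np.argmax(fit[winners])]
    return pop[np.argmax(fitness(pop))]

print("estimated (amp, pos, width):", evolve())
```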

  5. Scalable Hierarchical Algorithms for stochastic PDEs and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander; Chavez, Gustavo; Keyes, David E.; Ltaief, Hatem; Yokota, Rio

    2015-01-01

    number of degrees of freedom in the discretization. The storage is reduced to the log-linear as well. This hierarchical structure is a good starting point for parallel algorithms. Parallelization on shared and distributed memory systems was pioneered by R

  6. Scalable Hierarchical Algorithms for stochastic PDEs and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander

    2015-01-05

    H-matrices and the Fast Multipole Method (FMM) are powerful methods for approximating linear operators arising from partial differential and integral equations, and for reducing the computational cost from quadratic or cubic to log-linear (O(n log n)), where n is the number of degrees of freedom in the discretization. The storage is likewise reduced to log-linear. This hierarchical structure is a good starting point for parallel algorithms. Parallelization on shared and distributed memory systems was pioneered by R. Kriemann, 2005. Since 2005, the area of parallel architectures and software has been developing very fast. Progress in GPUs and many-core systems (e.g. Xeon Phi with 64 cores) motivated us to extend the work started in [1,2,7,8].
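    The log-linear complexity comes from the fact that admissible (well-separated) off-diagonal blocks of such kernel matrices are numerically low-rank, so they can be stored in factored form. A small self-contained illustration of that compression (toy 1/|x-y| kernel; SVD truncation is used here for simplicity rather than the adaptive cross approximation used in practice):

```python
import numpy as np

# Toy kernel K(x, y) = 1/|x - y| between two well-separated point clusters; such
# admissible off-diagonal blocks are numerically low-rank, which is what
# H-matrices / FMM exploit to reach log-linear storage and work.
n = 200
x = np.linspace(0.0, 1.0, n)            # source cluster
y = np.linspace(2.0, 3.0, n)            # well-separated target cluster
block = 1.0 / np.abs(x[:, None] - y[None, :])

U, s, Vt = np.linalg.svd(block)
k = int(np.sum(s > s[0] * 1e-8))        # rank needed for ~1e-8 relative accuracy
approx = (U[:, :k] * s[:k]) @ Vt[:k]

print("numerical rank:", k, "of", n)
print("relative error:", np.linalg.norm(block - approx) / np.linalg.norm(block))
print("storage: full =", n * n, "entries, low-rank factors =", 2 * n * k + k, "entries")
```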

  7. A new automated quantification algorithm for the detection and evaluation of focal liver lesions with contrast-enhanced ultrasound.

    Science.gov (United States)

    Gatos, Ilias; Tsantis, Stavros; Spiliopoulos, Stavros; Skouroliakou, Aikaterini; Theotokas, Ioannis; Zoumpoulis, Pavlos; Hazle, John D; Kagadis, George C

    2015-07-01

    To detect and classify focal liver lesions (FLLs) from contrast-enhanced ultrasound (CEUS) imaging by means of an automated quantification algorithm. The proposed algorithm employs a sophisticated segmentation method to detect and contour focal lesions from 52 CEUS video sequences (30 benign and 22 malignant). Lesion detection involves wavelet transform zero crossings utilization as an initialization step to the Markov random field model toward the lesion contour extraction. After FLL detection across frames, the time-intensity curve (TIC) is computed, which captures the contrast agent's behavior at all vascular phases with respect to the adjacent parenchyma for each patient. From each TIC, eight features were automatically calculated and employed into the support vector machines (SVMs) classification algorithm in the design of the image analysis model. With regard to FLL detection accuracy, all lesions detected had an average overlap value of 0.89 ± 0.16 with manual segmentations for all CEUS frame-subsets included in the study. The highest classification accuracy from the SVM model was 90.3%, misdiagnosing three benign and two malignant FLLs, with sensitivity and specificity values of 93.1% and 86.9%, respectively. The proposed quantification system that employs FLL detection and classification algorithms may be of value to physicians as a second opinion tool for avoiding unnecessary invasive procedures.
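    The quantification step of such a pipeline, reduced to its essentials, is: summarize each lesion's time-intensity curve into a few scalar features and feed them to an SVM. The sketch below uses synthetic gamma-variate TICs and an illustrative feature set (not necessarily the paper's eight features):

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
t = np.linspace(0, 120, 240)                      # seconds after contrast injection

def tic_features(t, intensity):
    """A few illustrative TIC descriptors."""
    peak = intensity.max()
    time_to_peak = t[intensity.argmax()]
    auc = np.trapz(intensity, t)                  # area under the curve
    washout = intensity[-1] - peak                # late-phase drop
    return [peak, time_to_peak, auc, washout]

def make_tic(fast):
    """Synthetic gamma-variate TIC; 'fast' kinetics stand in for malignant-like lesions."""
    a = rng.uniform(2.0, 3.0) if fast else rng.uniform(1.0, 1.5)
    b = rng.uniform(8, 12) if fast else rng.uniform(18, 25)
    curve = (t ** a) * np.exp(-t / b)
    return curve / curve.max() + rng.normal(0, 0.02, t.size)

X = np.array([tic_features(t, make_tic(fast=i < 26)) for i in range(52)])
y = np.array([1] * 26 + [0] * 26)                 # 1 = malignant-like, 0 = benign-like

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
print("cross-validated accuracy (toy):", cross_val_score(clf, X, y, cv=5).mean())
```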

  8. Dataset exploited for the development and validation of automated cyanobacteria quantification algorithm, ACQUA

    Directory of Open Access Journals (Sweden)

    Emanuele Gandola

    2016-09-01

    Full Text Available The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that present filamentous cyanobacteria strains in different environments. The presented data sets were used to estimate the abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA (“ACQUA: Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning”, Gandola et al., 2016 [1]). This strategy was used to assess the algorithm performance and to set up the denoising algorithm. Abundance and total length estimations were used for software development; to this aim we evaluated the efficiency of the statistical tools and mathematical algorithms described here. Image convolution with the Sobel filter was chosen to denoise input images from background signals; spline curves and the least-squares method were then used to parameterize detected filaments and to recombine crossing and interrupted sections, aimed at performing precise abundance estimations and morphometric measurements. Keywords: Comparing data, Filamentous cyanobacteria, Algorithm, Denoising, Natural sample
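    The two central processing steps named in the abstract, Sobel-based denoising and least-squares spline parameterization of filaments, can be sketched on a synthetic image as follows (the filament image, thresholds and the row-wise ordering are simplifications introduced here, not the ACQUA implementation):

```python
import numpy as np
from scipy import ndimage
from scipy.interpolate import splprep, splev

# Synthetic image with one bright curved filament on a noisy background.
rng = np.random.default_rng(3)
img = rng.normal(0.1, 0.02, (200, 200))
rows = np.arange(30, 170)
cols = (100 + 40 * np.sin(rows / 25.0)).astype(int)
img[rows, cols] = 1.0
img = ndimage.gaussian_filter(img, 1.0)

# 1) Sobel gradient magnitude suppresses the flat background signal.
edges = np.hypot(ndimage.sobel(img, axis=0), ndimage.sobel(img, axis=1))
mask = edges > edges.mean() + 3 * edges.std()

# 2) Least-squares smoothing spline through the filament, parameterized here by
#    the mean detected column per row (a simplification of the recombination step).
ys, xs = np.nonzero(mask)
uniq_rows = np.unique(ys)
centers = np.array([xs[ys == r].mean() for r in uniq_rows])
tck, _ = splprep([centers, uniq_rows.astype(float)], s=len(uniq_rows))
x_fit, y_fit = splev(np.linspace(0, 1, 200), tck)

# 3) Morphometry: approximate filament length from the fitted spline polyline.
length = np.sum(np.hypot(np.diff(x_fit), np.diff(y_fit)))
print("estimated filament length (pixels):", round(float(length), 1))
```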

  9. Adaptation of the Maracas algorithm for carotid artery segmentation and stenosis quantification on CT images

    International Nuclear Information System (INIS)

    Maria A Zuluaga; Maciej Orkisz; Edgar J F Delgado; Vincent Dore; Alfredo Morales Pinzon; Marcela Hernandez Hoyos

    2010-01-01

    This paper describes the adaptations of the Maracas algorithm to the segmentation and quantification of vascular structures in CTA images of the carotid artery. The Maracas algorithm, which is based on an elastic model and on a multi-scale eigen-analysis of the inertia matrix, was originally designed to segment a single artery in MRA images. The modifications are primarily aimed at addressing the specificities of CT images and the bifurcations. The algorithms implemented in this new version are classified into two levels. 1. The low-level processing (filtering of noise and directional artifacts, enhancement and pre-segmentation) to improve the quality of the image and to pre-segment it. These techniques are based on a priori information about noise, artifacts and typical gray-level ranges of lumen, background and calcifications. 2. The high-level processing to extract the centerline of the artery, to segment the lumen and to quantify the stenosis. At this level, we apply a priori knowledge of the shape and anatomy of vascular structures. The method was evaluated on 31 datasets from the carotid lumen segmentation and stenosis grading grand challenge 2009. The segmentation results obtained an average Dice similarity score of 80.4% compared to the reference segmentation, and the mean stenosis quantification error was 14.4%.
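    The reported segmentation score is the Dice similarity coefficient between the automatic and reference lumen masks; a minimal sketch of that metric on toy volumes:

```python
import numpy as np

def dice(seg, ref):
    """Dice similarity coefficient between two binary volumes."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    inter = np.logical_and(seg, ref).sum()
    return 2.0 * inter / (seg.sum() + ref.sum())

# Toy example: two overlapping spheres standing in for lumen segmentations.
z, y, x = np.ogrid[:64, :64, :64]
ref = (z - 32)**2 + (y - 32)**2 + (x - 32)**2 < 15**2
seg = (z - 34)**2 + (y - 32)**2 + (x - 30)**2 < 15**2
print("Dice:", round(dice(seg, ref), 3))
```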

  10. Computer-assisted imaging algorithms facilitate histomorphometric quantification of kidney damage in rodent renal failure models

    Directory of Open Access Journals (Sweden)

    Marcin Klapczynski

    2012-01-01

    Full Text Available Introduction: Surgical 5/6 nephrectomy and adenine-induced kidney failure in rats are frequently used models of progressive renal failure. In both models, rats develop significant morphological changes in the kidneys and quantification of these changes can be used to measure the efficacy of prophylactic or therapeutic approaches. In this study, the Aperio Genie Pattern Recognition technology, along with the Positive Pixel Count, Nuclear and Rare Event algorithms were used to quantify histological changes in both rat renal failure models. Methods: Analysis was performed on digitized slides of whole kidney sagittal sections stained with either hematoxylin and eosin or immunohistochemistry with an anti-nestin antibody to identify glomeruli, regenerating tubular epithelium, and tubulointerstitial myofibroblasts. An anti-polymorphonuclear neutrophil (PMN) antibody was also used to investigate neutrophil tissue infiltration. Results: Image analysis allowed for rapid and accurate quantification of relevant histopathologic changes such as increased cellularity and expansion of glomeruli, renal tubular dilatation and degeneration, tissue inflammation, and mineral aggregation. The algorithms provided reliable and consistent results in both control and experimental groups and presented a quantifiable degree of damage associated with each model. Conclusion: These algorithms represent useful tools for the uniform and reproducible characterization of common histomorphologic features of renal injury in rats.

  11. Comparison of machine learning and semi-quantification algorithms for [123I]FP-CIT classification: the beginning of the end for semi-quantification?

    Science.gov (United States)

    Taylor, Jonathan Christopher; Fenner, John Wesley

    2017-11-29

    Semi-quantification methods are well established in the clinic for assisted reporting of [123I]ioflupane images. Arguably, these are limited diagnostic tools. Recent research has demonstrated the potential for improved classification performance offered by machine learning algorithms. A direct comparison between methods is required to establish whether a move towards widespread clinical adoption of machine learning algorithms is justified. This study compared three machine learning algorithms with a range of semi-quantification methods, using the Parkinson's Progression Markers Initiative (PPMI) research database and a locally derived clinical database for validation. Machine learning algorithms were based on support vector machine classifiers with three different sets of features: (1) voxel intensities; (2) principal components of image voxel intensities; (3) striatal binding ratios from the putamen and caudate. Semi-quantification methods were based on striatal binding ratios (SBRs) from both putamina, with and without consideration of the caudates. Normal limits for the SBRs were defined through four different methods: (1) minimum of age-matched controls; (2) mean minus 1/1.5/2 standard deviations from age-matched controls; (3) linear regression of normal patient data against age (minus 1/1.5/2 standard errors); (4) selection of the optimum operating point on the receiver operator characteristic curve from normal and abnormal training data. Each machine learning and semi-quantification technique was evaluated with stratified, nested 10-fold cross-validation, repeated 10 times. The mean accuracy of the semi-quantitative methods for classification of local data into Parkinsonian and non-Parkinsonian groups varied from 0.78 to 0.87, contrasting with 0.89 to 0.95 for classifying PPMI data into healthy controls and Parkinson's disease groups. The machine learning algorithms gave mean accuracies between 0.88-0.92 and 0.95-0.97 for local and PPMI data, respectively. Classification
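    The semi-quantification side of this comparison boils down to computing striatal binding ratios and a normal limit such as the mean minus k standard deviations of age-matched controls. A small sketch with hypothetical count values (illustrative numbers only):

```python
import numpy as np

def striatal_binding_ratio(striatal_counts, reference_counts):
    """SBR = (striatal - reference) / reference, computed per volume of interest."""
    return (striatal_counts - reference_counts) / reference_counts

def normal_limit(control_sbrs, k=2.0):
    """Lower normal limit as mean minus k standard deviations of controls."""
    return control_sbrs.mean() - k * control_sbrs.std(ddof=1)

# Hypothetical counts purely for illustration.
rng = np.random.default_rng(4)
controls = striatal_binding_ratio(rng.normal(9000, 600, 40), rng.normal(3000, 200, 40))
patient = striatal_binding_ratio(np.array([5200.0]), np.array([3100.0]))
limit = normal_limit(controls, k=2.0)
print("cutoff:", round(limit, 2), "patient SBR:", round(float(patient[0]), 2),
      "abnormal:", bool(patient[0] < limit))
```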

  12. Automatic quantification of defect size using normal templates: a comparative clinical study of three commercially available algorithms

    International Nuclear Information System (INIS)

    Sutter, J. de; Wiele, C. van de; Bondt, P. de; Dierckx, R.; D'Asseler, Y.; Backer, G. de; Rigo, P.

    2000-01-01

    Infarct size assessed by myocardial single-photon emission tomography (SPET) imaging is an important prognostic parameter after myocardial infarction (MI). We compared three commercially available automatic quantification algorithms that make use of normal templates for the evaluation of infarct extent and severity in a large population of patients with remote MI. We studied 100 consecutive patients (80 men, mean age 63±11 years, mean LVEF 47%±15%) with a remote MI who underwent resting technetium-99m tetrofosmin gated SPET study for infarct extent and severity quantification. The quantification algorithms used for comparison were a short-axis algorithm (Cedars-Emory quantitative analysis software, CEqual), a vertical long-axis algorithm (VLAX) and a three-dimensional fitting algorithm (Perfit). Semiquantitative visual infarct extent and severity assessment using a 20-segment model with a 5-point score and the relation of infarct extent and severity with rest LVEF determined by quantitative gated SPET (QGS) were used as standards to compare the different algorithms. Mean infarct extent was similar for visual analysis (30%±21%) and the VLAX algorithm (25%±17%), but CEqual (15%±11%) and Perfit (5%±6%) mean infarct extents were significantly lower compared with visual analysis and the VLAX algorithm. Moreover, infarct extent determined by Perfit was significantly lower than infarct extent determined by CEqual. Correlations between automatic and visual infarct extent and severity evaluations were moderate (r=0.47) and were lower for non-anterior infarctions and obese patients (n=32) than for anterior infarctions and non-obese patients for all three algorithms. In this large series of post-MI patients, results of infarct extent and severity determination by automatic quantification algorithms that make use of normal templates were not interchangeable and correlated only moderately with semiquantitative visual analysis and LVEF. (orig.)

  13. A novel synthetic quantification standard including virus and internal report targets: application for the detection and quantification of emerging begomoviruses on tomato

    OpenAIRE

    Péréfarres, Frédéric; Hoareau, Murielle; Chiroleu, Frédéric; Reynaud, Bernard; Dintinger, Jacques; Lett, Jean-Michel

    2011-01-01

    Abstract Background Begomovirus is a genus of phytopathogenic single-stranded DNA viruses, transmitted by the whitefly Bemisia tabaci. This genus includes emerging and economically significant viruses such as those associated with Tomato Yellow Leaf Curl Disease, for which diagnostic tools are needed to prevent dispersion and new introductions. Five real-time PCRs with an internal tomato reporter gene were developed for accurate detection and quantification of monopartite begomoviruses, inclu...

  14. A new warfarin dosing algorithm including VKORC1 3730 G > A polymorphism: comparison with results obtained by other published algorithms.

    Science.gov (United States)

    Cini, Michela; Legnani, Cristina; Cosmi, Benilde; Guazzaloca, Giuliana; Valdrè, Lelia; Frascaro, Mirella; Palareti, Gualtiero

    2012-08-01

    Warfarin dosing is affected by clinical and genetic variants, but the contribution of the genotype associated with warfarin resistance in pharmacogenetic algorithms has not been well assessed yet. We developed a new dosing algorithm including polymorphisms associated both with warfarin sensitivity and resistance in the Italian population, and its performance was compared with those of eight previously published algorithms. Clinical and genetic data (CYP2C9*2, CYP2C9*3, VKORC1 -1639 G > A, and VKORC1 3730 G > A) were used to elaborate the new algorithm. Derivation and validation groups comprised 55 (58.2% men, mean age 69 years) and 40 (57.5% men, mean age 70 years) patients, respectively, who were on stable anticoagulation therapy for at least 3 months with different oral anticoagulation therapy (OAT) indications. Performance of the new algorithm, evaluated with mean absolute error (MAE) defined as the absolute value of the difference between observed daily maintenance dose and predicted daily dose, correlation with the observed dose and R² value, was comparable with or slightly lower than that obtained using the other algorithms. The new algorithm could correctly assign 53.3%, 50.0%, and 57.1% of patients to the low (≤25 mg/week), intermediate (26-44 mg/week) and high (≥ 45 mg/week) dosing range, respectively. Our data showed a significant increase in predictive accuracy among patients requiring high warfarin dose compared with the other algorithms (ranging from 0% to 28.6%). The algorithm including VKORC1 3730 G > A, associated with warfarin resistance, allowed a more accurate identification of resistant patients who require higher warfarin dosage.
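    The two evaluation quantities described here, the mean absolute error against the observed maintenance dose and the assignment to the low/intermediate/high weekly dose ranges, are simple to express; the doses below are hypothetical examples:

```python
import numpy as np

def mean_absolute_error(observed, predicted):
    """MAE between observed and predicted maintenance doses (mg/week)."""
    return np.mean(np.abs(np.asarray(observed) - np.asarray(predicted)))

def dose_range(weekly_dose_mg):
    """Low / intermediate / high ranges as defined in the abstract."""
    if weekly_dose_mg <= 25:
        return "low"
    if weekly_dose_mg < 45:
        return "intermediate"   # 26-44 mg/week
    return "high"

observed = [18.0, 32.5, 50.0]          # hypothetical observed doses
predicted = [21.0, 30.0, 41.5]         # hypothetical algorithm output
print("MAE:", round(mean_absolute_error(observed, predicted), 2), "mg/week")
print([dose_range(d) for d in predicted])
```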

  15. Optimal Feature Space Selection in Detecting Epileptic Seizure based on Recurrent Quantification Analysis and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Saleh Lashkari

    2016-06-01

    Full Text Available Selecting optimal features based on the nature of the phenomenon and with high discriminant ability is very important in data classification problems. Since Recurrence Quantification Analysis (RQA) does not require any assumptions about stationarity or about the size of the signal and the noise, it may be useful for epileptic seizure detection. In this study, RQA was used to discriminate ictal EEG from normal EEG, with optimal features selected by a combination of a genetic algorithm and a Bayesian classifier. Recurrence plots of one hundred samples in each of the two categories were obtained with five distance norms in this study: Euclidean, Maximum, Minimum, Normalized and Fixed Norm. In order to choose the optimal threshold for each norm, ten thresholds of ε were generated and the best feature space was then selected by the genetic algorithm in combination with a Bayesian classifier. The results show that the proposed method is capable of discriminating ictal EEG from normal EEG, where for the Minimum norm and 0.1<ε<1 the accuracy was 100%. In addition, the sensitivity of the proposed framework to the ε and distance norm parameters was low. The optimal feature presented in this study is Trans, which was selected in most feature spaces with high accuracy.
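    The RQA part of this approach starts from a recurrence plot built with a chosen distance norm and threshold ε, from which scalar measures are derived. A minimal sketch (the time-delay embedding parameters and the recurrence-rate measure are illustrative choices, not the study's full feature set):

```python
import numpy as np

def recurrence_plot(x, eps, norm=np.inf, dim=3, delay=1):
    """Binary recurrence matrix of a time-delay embedded signal."""
    n = len(x) - (dim - 1) * delay
    emb = np.column_stack([x[i * delay:i * delay + n] for i in range(dim)])
    d = np.linalg.norm(emb[:, None, :] - emb[None, :, :], ord=norm, axis=2)
    return (d < eps).astype(int)

def recurrence_rate(rp):
    """Fraction of recurrent points, one of the basic RQA measures."""
    return rp.mean()

# Toy comparison: a quasi-periodic signal vs. white noise.
rng = np.random.default_rng(5)
t = np.linspace(0, 20 * np.pi, 600)
rp_sine = recurrence_plot(np.sin(t), eps=0.3, norm=np.inf)      # "Maximum" norm
rp_noise = recurrence_plot(rng.standard_normal(600), eps=0.3, norm=np.inf)
print("RR(sine):", round(recurrence_rate(rp_sine), 3),
      "RR(noise):", round(recurrence_rate(rp_noise), 3))
```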

  16. Adaptive multiscale MCMC algorithm for uncertainty quantification in seismic parameter estimation

    KAUST Repository

    Tan, Xiaosi

    2014-08-05

    Formulating an inverse problem in a Bayesian framework has several major advantages (Sen and Stoffa, 1996). It allows finding multiple solutions subject to flexible a priori information and performing uncertainty quantification in the inverse problem. In this paper, we consider Bayesian inversion for the parameter estimation in seismic wave propagation. Bayes' theorem allows writing the posterior distribution via the likelihood function and the prior distribution where the latter represents our prior knowledge about physical properties. One of the popular algorithms for sampling this posterior distribution is Markov chain Monte Carlo (MCMC), which involves making proposals and calculating their acceptance probabilities. However, for large-scale problems, MCMC is prohibitively expensive as it requires many forward runs. In this paper, we propose a multilevel MCMC algorithm that employs multilevel forward simulations. Multilevel forward simulations are derived using Generalized Multiscale Finite Element Methods that we have proposed earlier (Efendiev et al., 2013a; Chung et al., 2013). Our overall Bayesian inversion approach provides a substantial speed-up both in the process of the sampling via preconditioning using approximate posteriors and the computation of the forward problems for different proposals by using the adaptive nature of multiscale methods. These aspects of the method are discussed in the paper. This paper is motivated by earlier work of M. Sen and his collaborators (Hong and Sen, 2007; Hong, 2008) who proposed the development of efficient MCMC techniques for seismic applications. In the paper, we present some preliminary numerical results.
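    The sampling engine referred to here, before any multilevel acceleration, is a Metropolis-type MCMC that proposes a new parameter value and accepts it with a probability based on the posterior ratio. A toy single-level random-walk Metropolis sketch for a scalar "wave speed" parameter (the forward model and prior below are invented stand-ins for the expensive seismic simulation):

```python
import numpy as np

rng = np.random.default_rng(6)

# Toy inverse problem: infer a scalar wave speed c from noisy "travel time" data
# d = 1/c + noise, with a Gaussian prior on c.
c_true, sigma = 2.0, 0.02
data = 1.0 / c_true + rng.normal(0, sigma, 20)

def log_posterior(c):
    if c <= 0:
        return -np.inf
    log_prior = -0.5 * ((c - 1.5) / 1.0) ** 2                 # N(1.5, 1) prior
    log_like = -0.5 * np.sum((data - 1.0 / c) ** 2) / sigma ** 2
    return log_prior + log_like

def metropolis(n_steps=20000, step=0.05, c0=1.5):
    chain = np.empty(n_steps)
    c, lp = c0, log_posterior(c0)
    for i in range(n_steps):
        prop = c + rng.normal(0, step)                        # random-walk proposal
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:               # acceptance probability
            c, lp = prop, lp_prop
        chain[i] = c
    return chain

chain = metropolis()
print("posterior mean of c:", round(chain[5000:].mean(), 3))
```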

  17. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.

    2013-12-01

    Calibration of subsurface flow models is an essential step for managing ground water aquifers, designing of contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification, and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to different models of different levels of complexities. In this work, we report the first successful application of nested sampling for calibration of several nonlinear subsurface flow problems. The estimated Bayesian evidence by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
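    The basic nested-sampling loop can be sketched on a toy one-dimensional problem: maintain a set of live points drawn from the prior, repeatedly replace the lowest-likelihood point by a new prior draw constrained to higher likelihood, and accumulate the evidence from the shrinking prior mass. The rejection-sampling replacement step below is a deliberate simplification (real implementations use smarter constrained moves):

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy model: uniform prior on theta over [0, 10], Gaussian likelihood centred at 4.
def log_like(theta):
    return -0.5 * ((theta - 4.0) / 0.5) ** 2 - 0.5 * np.log(2 * np.pi * 0.5 ** 2)

def nested_sampling(n_live=100, n_iter=800):
    live = rng.uniform(0, 10, n_live)
    live_ll = log_like(live)
    log_z, log_x_prev = -np.inf, 0.0
    for i in range(1, n_iter + 1):
        worst = np.argmin(live_ll)
        ll_star = live_ll[worst]
        log_x = -i / n_live                              # expected log prior mass left
        log_w = np.log(np.exp(log_x_prev) - np.exp(log_x))
        log_z = np.logaddexp(log_z, ll_star + log_w)     # accumulate evidence
        log_x_prev = log_x
        # Replace the worst live point by a prior draw with L > L* (rejection sampling).
        while True:
            cand = rng.uniform(0, 10)
            if log_like(cand) > ll_star:
                live[worst], live_ll[worst] = cand, log_like(cand)
                break
    # Contribution of the remaining live points.
    log_z = np.logaddexp(log_z, np.logaddexp.reduce(live_ll) - np.log(n_live) + log_x_prev)
    return log_z

# Analytic evidence for this toy problem is ~ 1/10 (the prior density) => log Z ~ -2.30.
print("estimated log-evidence:", round(float(nested_sampling()), 2))
```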

  18. A novel synthetic quantification standard including virus and internal report targets: application for the detection and quantification of emerging begomoviruses on tomato.

    Science.gov (United States)

    Péréfarres, Frédéric; Hoareau, Murielle; Chiroleu, Frédéric; Reynaud, Bernard; Dintinger, Jacques; Lett, Jean-Michel

    2011-08-05

    Begomovirus is a genus of phytopathogenic single-stranded DNA viruses, transmitted by the whitefly Bemisia tabaci. This genus includes emerging and economically significant viruses such as those associated with Tomato Yellow Leaf Curl Disease, for which diagnostic tools are needed to prevent dispersion and new introductions. Five real-time PCRs with an internal tomato reporter gene were developed for accurate detection and quantification of monopartite begomoviruses, including two strains of the Tomato yellow leaf curl virus (TYLCV; Mld and IL strains), the Tomato leaf curl Comoros virus-like viruses (ToLCKMV-like viruses) and the two molecules of the bipartite Potato yellow mosaic virus. These diagnostic tools have a unique standard quantification, comprising the targeted viral and internal report amplicons. These duplex real-time PCRs were applied to artificially inoculated plants to monitor and compare their viral development. Real-time PCRs were optimized for accurate detection and quantification over a range of 2 × 10⁹ to 2 × 10³ copies of genomic viral DNA/μL for TYLCV-Mld, TYLCV-IL and PYMV-B and 2 × 10⁸ to 2 × 10³ copies of genomic viral DNA/μL for PYMV-A and ToLCKMV-like viruses. These real-time PCRs were applied to artificially inoculated plants and viral loads were compared at 10, 20 and 30 days post-inoculation. Different patterns of viral accumulation were observed between the bipartite and the monopartite begomoviruses. Interestingly, PYMV accumulated more viral DNA at each date for both genomic components compared to all the monopartite viruses. Also, PYMV reached its highest viral load at 10 dpi contrary to the other viruses (20 dpi). The accumulation kinetics of the two strains of emergent TYLCV differed from the ToLCKMV-like viruses in the higher quantities of viral DNA produced in the early phase of the infection and in the shorter time to reach this peak viral load. To detect and quantify a wide range of begomoviruses, five duplex
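    Absolute quantification with such a synthetic standard rests on a standard curve relating Ct values of a known dilution series to log10 copy number. A minimal sketch with purely hypothetical Ct values:

```python
import numpy as np

# Hypothetical dilution series of a synthetic quantification standard:
# known copies/uL and the Ct values they produced (illustrative numbers only).
std_copies = np.array([2e3, 2e4, 2e5, 2e6, 2e7, 2e8, 2e9])
std_ct = np.array([33.1, 29.8, 26.4, 23.0, 19.7, 16.3, 12.9])

# Linear standard curve: Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0          # amplification efficiency

def copies_from_ct(ct):
    """Absolute quantification of an unknown sample from its Ct value."""
    return 10 ** ((ct - intercept) / slope)

print("slope:", round(slope, 2), "efficiency:", f"{efficiency:.1%}")
print("sample at Ct=21.5 ->", f"{copies_from_ct(21.5):.2e}", "copies/uL")
```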

  19. A novel synthetic quantification standard including virus and internal report targets: application for the detection and quantification of emerging begomoviruses on tomato

    Directory of Open Access Journals (Sweden)

    Lett Jean-Michel

    2011-08-01

    Full Text Available Abstract Background Begomovirus is a genus of phytopathogenic single-stranded DNA viruses, transmitted by the whitefly Bemisia tabaci. This genus includes emerging and economically significant viruses such as those associated with Tomato Yellow Leaf Curl Disease, for which diagnostic tools are needed to prevent dispersion and new introductions. Five real-time PCRs with an internal tomato reporter gene were developed for accurate detection and quantification of monopartite begomoviruses, including two strains of the Tomato yellow leaf curl virus (TYLCV; Mld and IL strains), the Tomato leaf curl Comoros virus-like viruses (ToLCKMV-like viruses) and the two molecules of the bipartite Potato yellow mosaic virus. These diagnostic tools have a unique standard quantification, comprising the targeted viral and internal report amplicons. These duplex real-time PCRs were applied to artificially inoculated plants to monitor and compare their viral development. Results Real-time PCRs were optimized for accurate detection and quantification over a range of 2 × 10⁹ to 2 × 10³ copies of genomic viral DNA/μL for TYLCV-Mld, TYLCV-IL and PYMV-B and 2 × 10⁸ to 2 × 10³ copies of genomic viral DNA/μL for PYMV-A and ToLCKMV-like viruses. These real-time PCRs were applied to artificially inoculated plants and viral loads were compared at 10, 20 and 30 days post-inoculation. Different patterns of viral accumulation were observed between the bipartite and the monopartite begomoviruses. Interestingly, PYMV accumulated more viral DNA at each date for both genomic components compared to all the monopartite viruses. Also, PYMV reached its highest viral load at 10 dpi contrary to the other viruses (20 dpi). The accumulation kinetics of the two strains of emergent TYLCV differed from the ToLCKMV-like viruses in the higher quantities of viral DNA produced in the early phase of the infection and in the shorter time to reach this peak viral load. Conclusions To detect and

  20. Uncertainty Quantification of the Reverse Taylor Impact Test and Localized Asynchronous Space-Time Algorithm

    Science.gov (United States)

    Subber, Waad; Salvadori, Alberto; Lee, Sangmin; Matous, Karel

    2017-06-01

    The reverse Taylor impact is a common experiment to investigate the dynamical response of materials at high strain rates. To better understand the physical phenomena and to provide a platform for code validation and Uncertainty Quantification (UQ), a co-designed simulation and experimental paradigm is investigated. For validation under uncertainty, quantities of interest (QOIs) within subregions of the computational domain are introduced. For such simulations where regions of interest can be identified, the computational cost for UQ can be reduced by confining the random variability within these regions of interest. This observation inspired us to develop an asynchronous space and time computational algorithm with localized UQ. In the region of interest, high resolution space and time discretization schemes are used for a stochastic model. Apart from the region of interest, low spatial and temporal resolutions are allowed for a stochastic model with a low dimensional representation of uncertainty. The model is exercised on linear elastodynamics and shows a potential in reducing the UQ computational cost. Although we consider wave propagation in solids, the proposed framework is general and can be used for fluid flow problems as well. Department of Energy, National Nuclear Security Administration (PSAAP-II).

  1. BWR transient analysis using neutronic / thermal hydraulic coupled codes including uncertainty quantification

    International Nuclear Information System (INIS)

    Hartmann, C.; Sanchez, V.; Tietsch, W.; Stieglitz, R.

    2012-01-01

    The KIT is involved in the development and qualification of best estimate methodologies for BWR transient analysis in cooperation with industrial partners. The goal is to establish the most advanced thermal hydraulic system codes coupled with 3D reactor dynamics codes to be able to perform a more realistic evaluation of the BWR behavior under accidental conditions. For this purpose a computational chain based on the lattice code (SCALE6/GenPMAXS), the coupled neutronic/thermal hydraulic code (TRACE/PARCS) as well as a Monte Carlo based uncertainty and sensitivity package (SUSA) has been established and applied to different kinds of transients of a Boiling Water Reactor (BWR). This paper will describe the multidimensional models of the plant elaborated for TRACE and PARCS to perform the investigations mentioned before. For the uncertainty quantification of the coupled code TRACE/PARCS, and specifically to take into account the influence of the kinetics parameters in such studies, the PARCS code has been extended to facilitate the change of model parameters in such a way that the SUSA package can be used in connection with TRACE/PARCS for the uncertainty and sensitivity studies. This approach will be presented in detail. The results obtained for a rod drop transient with TRACE/PARCS using the SUSA methodology showed clearly the importance of some kinetic parameters on the transient progression, demonstrating that the coupling of best-estimate codes with uncertainty and sensitivity tools is very promising and of great importance for the safety assessment of nuclear reactors. (authors)

  2. Development of computational algorithms for quantification of pulmonary structures; Desenvolvimento de algoritmos computacionais para quantificacao de estruturas pulmonares

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Marcela de; Alvarez, Matheus; Alves, Allan F.F.; Miranda, Jose R.A., E-mail: marceladeoliveira@ig.com.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Hospital das Clinicas. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2012-12-15

    High-resolution computed tomography (HRCT) has become the imaging diagnostic exam most commonly used for the evaluation of the sequelae of Paracoccidioidomycosis (PCM). Subjective evaluation of the radiological abnormalities found on HRCT images does not provide an accurate quantification. Computer-aided diagnosis systems produce a more objective assessment of the abnormal patterns found in HRCT images. Thus, this research proposes the development of algorithms in the MATLAB® computing environment that can semi-automatically quantify pathologies such as pulmonary fibrosis and emphysema. The algorithm consists in selecting a region of interest (ROI) and, through the use of masks, density filters and morphological operators, obtaining a quantification of the injured area relative to the area of healthy lung. The proposed method was tested on ten HRCT scans of patients with confirmed PCM. The results of the semi-automatic measurements were compared with subjective evaluations performed by a specialist in radiology, reaching an agreement of 80% for emphysema and 58% for fibrosis. (author)

  3. Improved algorithm for computerized detection and quantification of pulmonary emphysema at high-resolution computed tomography (HRCT)

    Science.gov (United States)

    Tylen, Ulf; Friman, Ola; Borga, Magnus; Angelhed, Jan-Erik

    2001-05-01

    Emphysema is characterized by destruction of lung tissue with development of small or large holes within the lung. These areas will have Hounsfield values (HU) approaching -1000. It is possible to detect and quantify such areas using a simple density mask technique. However, the edge enhancement reconstruction algorithm, gravity, and motion of the heart and vessels during scanning cause artefacts. The purpose of our work was to construct an algorithm that detects such image artefacts and corrects them. The first step is to apply inverse filtering to the image, removing much of the effect of the edge enhancement reconstruction algorithm. The next step is computation of the antero-posterior density gradient caused by gravity and correction for it. Motion artefacts are corrected for in a third step by use of normalized averaging, thresholding and region growing. Twenty healthy volunteers were investigated, 10 with slight emphysema and 10 without. Using the simple density mask technique it was not possible to separate persons with disease from those without. Our algorithm improved separation of the two groups considerably. Our algorithm needs further refinement, but may form a basis for further development of methods for computerized diagnosis and quantification of emphysema by HRCT.

  4. A practical algorithm for distribution state estimation including renewable energy sources

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher [Electronic and Electrical Department, Shiraz University of Technology, Modares Blvd., P.O. 71555-313, Shiraz (Iran); Firouzi, Bahman Bahmani [Islamic Azad University Marvdasht Branch, Marvdasht (Iran)

    2009-11-15

    Renewable energy is energy that is in continuous supply over time. These kinds of energy sources are divided into five principal renewable sources of energy: the sun, the wind, flowing water, biomass and heat from within the earth. According to some studies carried out by the research institutes, about 25% of the new generation will be generated by Renewable Energy Sources (RESs) in the near future. Therefore, it is necessary to study the impact of RESs on the power systems, especially on the distribution networks. This paper presents a practical Distribution State Estimation (DSE) including RESs and some practical consideration. The proposed algorithm is based on the combination of Nelder-Mead simplex search and Particle Swarm Optimization (PSO) algorithms, called PSO-NM. The proposed algorithm can estimate load and RES output values by Weighted Least-Square (WLS) approach. Some practical considerations are var compensators, Voltage Regulators (VRs), Under Load Tap Changer (ULTC) transformer modeling, which usually have nonlinear and discrete characteristics, and unbalanced three-phase power flow equations. The comparison results with other evolutionary optimization algorithms such as original PSO, Honey Bee Mating Optimization (HBMO), Neural Networks (NNs), Ant Colony Optimization (ACO), and Genetic Algorithm (GA) for a test system demonstrate that PSO-NM is extremely effective and efficient for the DSE problems. (author)
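    Stripped of the network model and the Nelder-Mead hybridization, the core of such a DSE approach is a particle swarm minimizing a weighted least-squares measurement-mismatch objective. A toy linear-measurement sketch (the measurement matrix, weights and PSO settings are invented for illustration, not the paper's feeder model):

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy WLS state estimation: recover two unknown injections x from noisy
# measurements z = H x + e (a stand-in for the nonlinear feeder model).
H = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [0.5, -0.3]])
x_true = np.array([0.8, 0.3])
weights = np.array([100.0, 100.0, 50.0, 50.0])
z = H @ x_true + rng.normal(0, 0.01, 4)

def wls_cost(x):
    r = z - H @ x
    return np.sum(weights * r ** 2)

def pso(n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    pos = rng.uniform(-2, 2, (n_particles, 2))
    vel = np.zeros_like(pos)
    pbest, pbest_cost = pos.copy(), np.array([wls_cost(p) for p in pos])
    gbest = pbest[np.argmin(pbest_cost)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        cost = np.array([wls_cost(p) for p in pos])
        improved = cost < pbest_cost
        pbest[improved], pbest_cost[improved] = pos[improved], cost[improved]
        gbest = pbest[np.argmin(pbest_cost)].copy()
    return gbest

print("true state:", x_true, "PSO estimate:", np.round(pso(), 3))
```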

  5. Automatic Detection and Quantification of WBCs and RBCs Using Iterative Structured Circle Detection Algorithm

    Directory of Open Access Journals (Sweden)

    Yazan M. Alomari

    2014-01-01

    Full Text Available Segmentation and counting of blood cells are considered an important step that helps to extract features to diagnose some specific diseases like malaria or leukemia. The manual counting of white blood cells (WBCs) and red blood cells (RBCs) in microscopic images is an extremely tedious, time consuming, and inaccurate process. Automatic analysis will allow hematologist experts to perform faster and more accurately. The proposed method uses an iterative structured circle detection algorithm for the segmentation and counting of WBCs and RBCs. The separation of WBCs from RBCs was achieved by thresholding, and specific preprocessing steps were developed for each cell type. Counting was performed for each image using the proposed method based on modified circle detection, which automatically counted the cells. Several modifications were made to the basic RCD algorithm to solve the initialization problem, detect irregular circles (cells), select the optimal circle from the candidate circles, and determine the number of iterations in a fully dynamic way to enhance detection accuracy and running time. The validation method used to determine segmentation accuracy was a quantitative analysis that included Precision, Recall, and F-measurement tests. The average accuracy of the proposed method was 95.3% for RBCs and 98.4% for WBCs.
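    The counting logic of circle-detection approaches like this one can be illustrated with an off-the-shelf circular Hough transform on a synthetic two-size "smear" (a scikit-image based sketch, not the authors' iterative RCD variant; overlapping cells are ignored here):

```python
import numpy as np
from skimage import draw, feature, transform

# Synthetic "blood smear": disks of two size classes (large ~ WBC-like,
# small ~ RBC-like) on a dark background.
rng = np.random.default_rng(9)
img = np.zeros((300, 300))
truth = {"small": 0, "large": 0}
for _ in range(12):
    r = rng.choice([12, 22])                       # two illustrative radii
    cy, cx = rng.integers(30, 270, 2)
    rr, cc = draw.disk((cy, cx), r, shape=img.shape)
    img[rr, cc] = 1.0
    truth["large" if r > 15 else "small"] += 1

# Circle detection: edge map -> circular Hough transform -> strongest peaks.
edges = feature.canny(img, sigma=2.0)
radii = np.arange(10, 26, 2)
hough = transform.hough_circle(edges, radii)
_, cx, cy, found_radii = transform.hough_circle_peaks(
    hough, radii, total_num_peaks=12, min_xdistance=15, min_ydistance=15)

counts = {"small": int(np.sum(found_radii <= 15)), "large": int(np.sum(found_radii > 15))}
print("ground truth:", truth, "detected:", counts)
```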

  6. ALGORITHMS AND PROGRAMS FOR STRONG GRAVITATIONAL LENSING IN KERR SPACE-TIME INCLUDING POLARIZATION

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Bin; Maddumage, Prasad [Research Computing Center, Department of Scientific Computing, Florida State University, Tallahassee, FL 32306 (United States); Kantowski, Ronald; Dai, Xinyu; Baron, Eddie, E-mail: bchen3@fsu.edu [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, Norman, OK 73019 (United States)

    2015-05-15

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.

  7. ALGORITHMS AND PROGRAMS FOR STRONG GRAVITATIONAL LENSING IN KERR SPACE-TIME INCLUDING POLARIZATION

    International Nuclear Information System (INIS)

    Chen, Bin; Maddumage, Prasad; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie

    2015-01-01

    Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python

  8. A constriction factor based particle swarm optimisation algorithm to solve the economic dispatch problem including losses

    Energy Technology Data Exchange (ETDEWEB)

    Young, Steven; Montakhab, Mohammad; Nouri, Hassan

    2011-07-15

    Economic dispatch (ED) is one of the most important problems to be solved in power generation, as fractional percentage fuel reductions represent significant cost savings. ED aims to optimise the power generated by each generating unit in a system in order to find the minimum operating cost at a required load demand, whilst ensuring both equality and inequality constraints are met. For the process of optimisation, a model must be created for each generating unit. The particle swarm optimisation technique is an evolutionary computation technique and one of the most powerful methods for solving global optimisation problems. The aim of this paper is to add a constriction factor to the particle swarm optimisation algorithm (CFBPSO). Results show that the algorithm is very good at solving the ED problem; for CFBPSO to work in a practical environment, a valve point effect with transmission losses should be included in future work.
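    The constriction factor referred to in the title is the Clerc-Kennedy coefficient chi = 2 / |2 - phi - sqrt(phi^2 - 4*phi)| with phi = c1 + c2 > 4, which scales the whole velocity update. A tiny sketch:

```python
import math

def constriction_factor(c1=2.05, c2=2.05):
    """Clerc-Kennedy constriction coefficient; requires phi = c1 + c2 > 4."""
    phi = c1 + c2
    return 2.0 / abs(2.0 - phi - math.sqrt(phi ** 2 - 4.0 * phi))

chi = constriction_factor()
print("chi =", round(chi, 4))   # ~0.7298 for c1 = c2 = 2.05

# Velocity update used inside a constriction-factor PSO (per particle, per dimension):
# v = chi * (v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x))
```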

  9. Comparison of quantification algorithms for circulating cell-free DNA methylation biomarkers in blood plasma from cancer patients.

    Science.gov (United States)

    de Vos, Luka; Gevensleben, Heidrun; Schröck, Andreas; Franzen, Alina; Kristiansen, Glen; Bootz, Friedrich; Dietrich, Dimo

    2017-01-01

    SHOX2 and SEPT9 methylation in circulating cell-free DNA (ccfDNA) in blood are established powerful and clinically valuable biomarkers for diagnosis, staging, prognosis, and monitoring of cancer patients. The aim of the present study was to evaluate different quantification algorithms (relative quantification, absolute quantification, quasi-digital PCR) with regard to their clinical performance. Methylation analyses were performed in a training cohort (141 patients with head and neck squamous cell carcinoma [HNSCC], 170 control cases) and a testing cohort (137 HNSCC cases, 102 controls). DNA was extracted from plasma samples, bisulfite-converted, and analyzed via quantitative real-time PCR. SHOX2 and SEPT9 methylations were assessed separately and as a panel [mean SEPT9/SHOX2] using the ΔCT method for absolute quantification and the ΔΔCT method for relative quantification. Quasi-digital PCR was defined as the number of amplification-positive PCR replicates. The diagnostic (sensitivity, specificity, area under the curve (AUC) of the receiver operating characteristic (ROC)) and prognostic accuracy (hazard ratio (HR) from Cox regression) were evaluated. Sporadic methylation in control samples necessitated the introduction of cutoffs, resulting in 61-63% sensitivity/90-92% specificity (SEPT9/training), 53-57% sensitivity/87-90% specificity (SHOX2/training), and 64-65% sensitivity/90-91% specificity (mean SEPT9/SHOX2/training). Results were confirmed in a testing cohort with 54-56% sensitivity/88-90% specificity (SEPT9/testing), 43-48% sensitivity/93-95% specificity (SHOX2/testing), and 49-58% sensitivity/88-94% specificity (mean SEPT9/SHOX2/testing). All algorithms showed comparable cutoff-independent diagnostic accuracy with largely overlapping 95% confidence intervals (SEPT9: AUC(training) = 0.79-0.80; AUC(testing) = 0.74-0.75; SHOX2: AUC(training) = 0.78-0.81, AUC(testing) = 0.77-0.79; mean SEPT9/SHOX2: AUC(training) = 0
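    The three compared quantification algorithms map onto a few short formulas: ΔCT against a reference gene, 2^-ΔΔCT relative to a calibrator, and a quasi-digital count of amplification-positive replicates. A sketch with hypothetical Ct values (the 45-cycle positivity limit and the reference gene are assumptions):

```python
import numpy as np

def delta_ct(ct_target, ct_reference):
    """dCT: methylation target normalized to a reference gene (assumed here)."""
    return ct_target - ct_reference

def ddct_relative_quantification(ct_target, ct_reference, calibrator_dct):
    """ddCT relative quantification: 2^-(dCT_sample - dCT_calibrator)."""
    return 2.0 ** -(delta_ct(ct_target, ct_reference) - calibrator_dct)

def quasi_digital_pcr(replicate_cts, max_ct=45.0):
    """Quasi-digital readout: number of amplification-positive PCR replicates."""
    reps = np.asarray(replicate_cts, dtype=float)
    return int(np.sum(~np.isnan(reps) & (reps < max_ct)))

# Hypothetical triplicate measurement of one plasma sample.
print("dCT:", delta_ct(35.2, 28.4))
print("relative methylation:", round(ddct_relative_quantification(35.2, 28.4, calibrator_dct=3.0), 4))
print("positive replicates:", quasi_digital_pcr([35.2, 36.0, np.nan]))
```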

  10. A practical algorithm for optimal operation management of distribution network including fuel cell power plants

    Energy Technology Data Exchange (ETDEWEB)

    Niknam, Taher; Meymand, Hamed Zeinoddini; Nayeripour, Majid [Electrical and Electronic Engineering Department, Shiraz University of Technology, Shiraz (Iran)

    2010-08-15

    Fuel cell power plants (FCPPs) have received a great deal of consideration in recent years. The continuing growth of power demand together with environmental constraints is increasing interest in using FCPPs in power systems. Since FCPPs are usually connected to distribution networks, the effect of FCPPs on the distribution network is greater than on other sections of the power system. One of the most important issues in distribution networks is optimal operation management (OOM), which can be affected by FCPPs. This paper proposes a new approach for optimal operation management of distribution networks including FCPPs. In the article, we consider the total electrical energy losses, the total electrical energy cost and the total emission as the objective functions which should be minimized. Since the optimal operation of distribution networks is a nonlinear mixed integer optimization problem, the optimal solution could be obtained through an evolutionary method. We use a new evolutionary algorithm based on Fuzzy Adaptive Particle Swarm Optimization (FAPSO) to solve the optimal operation problem and compare this method with Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE), Ant Colony Optimization (ACO) and Tabu Search (TS) over two distribution test feeders. (author)

  11. Study of Power Flow Algorithm of AC/DC Distribution System including VSC-MTDC

    Directory of Open Access Journals (Sweden)

    Haifeng Liang

    2015-08-01

    Full Text Available In recent years, distributed generation and a large number of sensitive AC and DC loads have been connected to distribution networks, which introduce a series of challenges to distribution network operators (DNOs). In addition, the advantages of DC distribution networks, such as the energy conservation and emission reduction, mean that the voltage source converter based multi-terminal direct current (VSC-MTDC) for AC/DC distribution systems demonstrates a great potential, hence drawing growing research interest. In this paper, considering losses of the reactor, the filter and the converter, a mathematical model of VSC-HVDC for the load flow analysis is derived. An AC/DC distribution network architecture has been built, based on which the differences in modified equations of the VSC-MTDC-based network under different control modes are analyzed. In addition, corresponding interface functions under five control modes are provided, and a back/forward iterative algorithm which is applied to power flow calculation of the AC/DC distribution system including VSC-MTDC is proposed. Finally, by calculating the power flow of the modified IEEE14 AC/DC distribution network, the efficiency and validity of the model and algorithm are evaluated. With various distributed generations connected to the network at appropriate locations, power flow results show that network losses and utilization of transmission networks are effectively reduced.

  12. Standardized evaluation framework for evaluating coronary artery stenosis detection, stenosis quantification and lumen segmentation algorithms in computed tomography angiography.

    Science.gov (United States)

    Kirişli, H A; Schaap, M; Metz, C T; Dharampal, A S; Meijboom, W B; Papadopoulou, S L; Dedic, A; Nieman, K; de Graaf, M A; Meijs, M F L; Cramer, M J; Broersen, A; Cetin, S; Eslami, A; Flórez-Valencia, L; Lor, K L; Matuszewski, B; Melki, I; Mohr, B; Oksüz, I; Shahzad, R; Wang, C; Kitslaar, P H; Unal, G; Katouzian, A; Örkisz, M; Chen, C M; Precioso, F; Najman, L; Masood, S; Ünay, D; van Vliet, L; Moreno, R; Goldenberg, R; Vuçini, E; Krestin, G P; Niessen, W J; van Walsum, T

    2013-12-01

    Though conventional coronary angiography (CCA) has been the standard of reference for diagnosing coronary artery disease in the past decades, computed tomography angiography (CTA) has rapidly emerged, and is nowadays widely used in clinical practice. Here, we introduce a standardized evaluation framework to reliably evaluate and compare the performance of the algorithms devised to detect and quantify the coronary artery stenoses, and to segment the coronary artery lumen in CTA data. The objective of this evaluation framework is to demonstrate the feasibility of dedicated algorithms to: (1) (semi-)automatically detect and quantify stenosis on CTA, in comparison with quantitative coronary angiography (QCA) and CTA consensus reading, and (2) (semi-)automatically segment the coronary lumen on CTA, in comparison with expert's manual annotation. A database consisting of 48 multicenter multivendor cardiac CTA datasets with corresponding reference standards are described and made available. The algorithms from 11 research groups were quantitatively evaluated and compared. The results show that (1) some of the current stenosis detection/quantification algorithms may be used for triage or as a second-reader in clinical practice, and that (2) automatic lumen segmentation is possible with a precision similar to that obtained by experts. The framework is open for new submissions through the website, at http://coronary.bigr.nl/stenoses/. Copyright © 2013 Elsevier B.V. All rights reserved.

  13. Performance of thigh-mounted triaxial accelerometer algorithms in objective quantification of sedentary behaviour and physical activity in older adults.

    Directory of Open Access Journals (Sweden)

    Jorgen A Wullems

    Full Text Available Accurate monitoring of sedentary behaviour and physical activity is key to investigate their exact role in healthy ageing. To date, accelerometers using cut-off point models are most preferred for this, however, machine learning seems a highly promising future alternative. Hence, the current study compared between cut-off point and machine learning algorithms, for optimal quantification of sedentary behaviour and physical activity intensities in the elderly. Thus, in a heterogeneous sample of forty participants (aged ≥60 years, 50% female) energy expenditure during laboratory-based activities (ranging from sedentary behaviour through to moderate-to-vigorous physical activity) was estimated by indirect calorimetry, whilst wearing triaxial thigh-mounted accelerometers. Three cut-off point algorithms and a Random Forest machine learning model were developed and cross-validated using the collected data. Detailed analyses were performed to check algorithm robustness, and examine and benchmark both overall and participant-specific balanced accuracies. This revealed that the four models can at least be used to confidently monitor sedentary behaviour and moderate-to-vigorous physical activity. Nevertheless, the machine learning algorithm outperformed the cut-off point models by being robust for all individual's physiological and non-physiological characteristics and showing more performance of an acceptable level over the whole range of physical activity intensities. Therefore, we propose that Random Forest machine learning may be optimal for objective assessment of sedentary behaviour and physical activity in older adults using thigh-mounted triaxial accelerometry.
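    A machine-learning pipeline of the kind compared here can be sketched as: compute simple per-epoch features from the thigh acceleration signal and train a Random Forest on labelled epochs. Everything below (sampling rate, feature set, synthetic orientations) is an illustrative assumption, not the study's protocol:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score, StratifiedKFold

rng = np.random.default_rng(10)

def epoch_features(acc_xyz):
    """Simple per-epoch features from a (n_samples, 3) thigh-acceleration block."""
    mag = np.linalg.norm(acc_xyz, axis=1)
    incl = np.degrees(np.arccos(np.clip(acc_xyz[:, 2] / np.maximum(mag, 1e-9), -1, 1)))
    return [mag.mean(), mag.std(), incl.mean(), incl.std()]

def synth_epoch(level):
    """Synthetic 1-minute epoch at 50 Hz: 0 = sedentary, 1 = light, 2 = MVPA."""
    n = 50 * 60
    base = np.tile([0.0, 0.0, 1.0], (n, 1))           # gravity along the thigh axis
    if level == 0:
        base = np.tile([0.9, 0.0, 0.4], (n, 1))       # reclined/sitting orientation
    noise = rng.normal(0, 0.02 + 0.15 * level, (n, 3))
    return base + noise

X = np.array([epoch_features(synth_epoch(lvl)) for lvl in np.repeat([0, 1, 2], 60)])
y = np.repeat([0, 1, 2], 60)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
print("balanced CV accuracy (toy):",
      cross_val_score(clf, X, y, cv=cv, scoring="balanced_accuracy").mean())
```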

  14. Performance of thigh-mounted triaxial accelerometer algorithms in objective quantification of sedentary behaviour and physical activity in older adults

    Science.gov (United States)

    Verschueren, Sabine M. P.; Degens, Hans; Morse, Christopher I.; Onambélé, Gladys L.

    2017-01-01

    Accurate monitoring of sedentary behaviour and physical activity is key to investigate their exact role in healthy ageing. To date, accelerometers using cut-off point models are most preferred for this, however, machine learning seems a highly promising future alternative. Hence, the current study compared between cut-off point and machine learning algorithms, for optimal quantification of sedentary behaviour and physical activity intensities in the elderly. Thus, in a heterogeneous sample of forty participants (aged ≥60 years, 50% female) energy expenditure during laboratory-based activities (ranging from sedentary behaviour through to moderate-to-vigorous physical activity) was estimated by indirect calorimetry, whilst wearing triaxial thigh-mounted accelerometers. Three cut-off point algorithms and a Random Forest machine learning model were developed and cross-validated using the collected data. Detailed analyses were performed to check algorithm robustness, and examine and benchmark both overall and participant-specific balanced accuracies. This revealed that the four models can at least be used to confidently monitor sedentary behaviour and moderate-to-vigorous physical activity. Nevertheless, the machine learning algorithm outperformed the cut-off point models by being robust for all individual’s physiological and non-physiological characteristics and showing more performance of an acceptable level over the whole range of physical activity intensities. Therefore, we propose that Random Forest machine learning may be optimal for objective assessment of sedentary behaviour and physical activity in older adults using thigh-mounted triaxial accelerometry. PMID:29155839

  15. From Pixels to Region: A Salient Region Detection Algorithm for Location-Quantification Image

    Directory of Open Access Journals (Sweden)

    Mengmeng Zhang

    2014-01-01

    Full Text Available Image saliency detection has become increasingly important with the development of intelligent identification and machine vision technology. This process is essential for many image processing algorithms such as image retrieval, image segmentation, image recognition, and adaptive image compression. We propose a salient region detection algorithm for full-resolution images. This algorithm analyzes the randomness and correlation of image pixels and employs a pixel-to-region saliency computation mechanism. The algorithm first obtains points with a higher saliency probability by using an improved smallest univalue segment assimilating nucleus (SUSAN) operator. It then reconstructs the entire salient region by taking these points as references and combining them with the image spatial color distribution, as well as regional and global contrasts. The results for subjective and objective image saliency detection show that the proposed algorithm exhibits outstanding performance in terms of technology indices such as precision and recall rates.

  16. Nested sampling algorithm for subsurface flow model selection, uncertainty quantification, and nonlinear calibration

    KAUST Repository

    Elsheikh, A. H.; Wheeler, M. F.; Hoteit, Ibrahim

    2013-01-01

    Calibration of subsurface flow models is an essential step for managing groundwater aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known

  17. Fitting-free algorithm for efficient quantification of collagen fiber alignment in SHG imaging applications.

    Science.gov (United States)

    Hall, Gunnsteinn; Liang, Wenxuan; Li, Xingde

    2017-10-01

    Collagen fiber alignment derived from second harmonic generation (SHG) microscopy images can be important for disease diagnostics. Image processing algorithms are needed to robustly quantify the alignment in images with high sensitivity and reliability. Fourier transform (FT) magnitude, 2D power spectrum, and image autocorrelation have previously been used to extract fiber information from images by assuming a certain mathematical model (e.g. Gaussian distribution of the fiber-related parameters) and fitting. The fitting process is slow and fails to converge when the data is not Gaussian. Herein we present an efficient constant-time deterministic algorithm which characterizes the symmetricity of the FT magnitude image in terms of a single parameter, named the fiber alignment anisotropy R ranging from 0 (randomized fibers) to 1 (perfect alignment). This represents an important improvement of the technology and may bring us one step closer to utilizing the technology for various applications in real time. In addition, we present a digital image phantom-based framework for characterizing and validating the algorithm, as well as assessing the robustness of the algorithm against different perturbations.
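
    The abstract does not give the exact definition of the anisotropy parameter R, so the sketch below is only one plausible, fitting-free reading: it summarizes the symmetry of the 2D FT power spectrum through its second moments (a structure-tensor-style measure) and maps it to a value between 0 and 1.

```python
# Illustrative sketch (not the authors' exact formula): estimate a 0-to-1 fiber
# alignment anisotropy from the symmetry of the 2D FT magnitude, using second
# moments of the power spectrum instead of any model fitting.
import numpy as np

def alignment_anisotropy(image):
    """Return R in [0, 1]: 0 ~ isotropic (random fibers), 1 ~ strongly aligned."""
    f = np.fft.fftshift(np.fft.fft2(image - image.mean()))
    power = np.abs(f) ** 2
    ny, nx = power.shape
    ky, kx = np.meshgrid(np.arange(ny) - ny // 2, np.arange(nx) - nx // 2,
                         indexing="ij")
    w = power / power.sum()
    # Second-moment (inertia) tensor of the power spectrum.
    mxx = np.sum(w * kx * kx)
    myy = np.sum(w * ky * ky)
    mxy = np.sum(w * kx * ky)
    lo, hi = np.linalg.eigvalsh(np.array([[mxx, mxy], [mxy, myy]]))
    return (hi - lo) / (hi + lo + 1e-12)

# Example: horizontal stripes (aligned "fibers") vs. white noise (random).
yy, xx = np.mgrid[0:256, 0:256]
print(round(alignment_anisotropy(np.sin(yy / 4.0)), 2))           # close to 1
print(round(alignment_anisotropy(np.random.rand(256, 256)), 2))   # close to 0
```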

  18. Spot quantification in two dimensional gel electrophoresis image analysis: comparison of different approaches and presentation of a novel compound fitting algorithm

    Science.gov (United States)

    2014-01-01

    Background Various computer-based methods exist for the detection and quantification of protein spots in two dimensional gel electrophoresis images. Area-based methods are commonly used for spot quantification: an area is assigned to each spot and the sum of the pixel intensities in that area, the so-called volume, is used as a measure for the spot signal. Other methods use the optical density, i.e. the intensity of the most intense pixel of a spot, or calculate the volume from the parameters of a fitted function. Results In this study we compare the performance of different spot quantification methods using synthetic and real data. We propose a ready-to-use algorithm for spot detection and quantification that uses fitting of two dimensional Gaussian function curves for the extraction of data from two dimensional gel electrophoresis (2-DE) images. The algorithm implements fitting using logical compounds and is computationally efficient. The applicability of the compound fitting algorithm was evaluated for various simulated data and compared with other quantification approaches. We provide evidence that even if an incorrect bell-shaped function is used, the fitting method is superior to other approaches, especially when spots overlap. Finally, we validated the method with experimental data of urea-based 2-DE of Aβ peptides and re-analyzed published data sets. Our methods showed higher precision and accuracy than other approaches when applied to exposure time series and standard gels. Conclusion Compound fitting as a quantification method for 2-DE spots shows several advantages over other approaches and could be combined with various spot detection methods. The algorithm was scripted in MATLAB (Mathworks) and is available as a supplemental file. PMID:24915860
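
    For illustration, a minimal single-spot version of the underlying idea is sketched below: fit a 2D Gaussian to an image patch and compute the spot volume from the fitted parameters. This is not the paper's compound-fitting implementation (which handles overlapping spots); the synthetic patch and starting values are assumptions.

```python
# Minimal sketch of volume estimation for a single 2-DE spot by fitting a 2D
# Gaussian; the paper's compound fitting of overlapping spots is not reproduced.
import numpy as np
from scipy.optimize import curve_fit

def gauss2d(coords, amp, x0, y0, sx, sy, offset):
    x, y = coords
    return (amp * np.exp(-((x - x0) ** 2 / (2 * sx ** 2)
                           + (y - y0) ** 2 / (2 * sy ** 2))) + offset).ravel()

# Synthetic spot on a small image patch.
x, y = np.meshgrid(np.arange(40), np.arange(40))
true = gauss2d((x, y), amp=1000, x0=20, y0=18, sx=3.0, sy=4.0,
               offset=50).reshape(40, 40)
patch = true + np.random.normal(0, 10, true.shape)

p0 = [patch.max() - patch.min(), 20, 20, 5, 5, patch.min()]
popt, _ = curve_fit(gauss2d, (x, y), patch.ravel(), p0=p0)
amp, x0, y0, sx, sy, offset = popt

# Spot volume from the fitted parameters (integral of the Gaussian above background).
volume = 2 * np.pi * amp * abs(sx) * abs(sy)
print(f"fitted volume: {volume:.0f}")
```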

  19. A Quantification of the 3D Modeling Capabilities of the Kinectfusion Algorithm

    Science.gov (United States)

    2014-03-27

    experiment, several tests of the experiment setup were run with the Kinect for Xbox 360 sensor, the only sensor on-hand at the start of the testing phase. As...pairing the KinectFusion algorithm with a higher fidelity sensor, such as a Light Detection and Ranging (LiDAR) sensor or the newly released Xbox One Kinect...or three-fold improvement still be possible with LiDAR or Xbox One data? 5.1.3 KinectFusion and Vicon Info. Another source of noise (or error) in the

  20. Performance quantification of clustering algorithms for false positive removal in fMRI by ROC curves

    Directory of Open Access Journals (Sweden)

    André Salles Cunha Peres

    Full Text Available Abstract Introduction Functional magnetic resonance imaging (fMRI) is a non-invasive technique that allows the detection of specific cerebral functions in humans based on hemodynamic changes. The contrast changes are about 5%, making visual inspection impossible. Thus, statistical strategies are applied to infer which brain region is engaged in a task. However, the traditional methods like the general linear model and cross-correlation utilize voxel-wise calculation, introducing many false positives. So, in this work we tested post-processing clustering algorithms to diminish the false positives. Methods In this study, three clustering algorithms (hierarchical clustering, k-means and self-organizing maps) were tested and compared for false-positive removal in the post-processing of cross-correlation analyses. Results Our results showed that hierarchical clustering presented the best performance in removing false positives in fMRI, being 2.3 times more accurate than k-means, and 1.9 times more accurate than self-organizing maps. Conclusion Hierarchical clustering presented the best performance in false-positive removal because it uses the inconsistency coefficient threshold, while k-means and self-organizing maps require an a priori cluster number (number of centroids and neurons); thus, hierarchical clustering avoids clustering scattered voxels, as the inconsistency coefficient threshold allows only those voxels to be clustered that are within a minimum distance of some cluster.
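
    A small sketch of the post-processing idea highlighted in the conclusion: cluster the supra-threshold voxel coordinates hierarchically and cut the dendrogram with an inconsistency-coefficient criterion, so scattered voxels do not form clusters. The specific cutoff, linkage method and minimum cluster size below are illustrative assumptions.

```python
# Sketch: hierarchical clustering of supra-threshold voxel coordinates with an
# inconsistency coefficient cutoff. The 1.15 cutoff, average linkage and
# minimum cluster size are illustrative.
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(1)

# Voxel coordinates whose cross-correlation exceeded the activation threshold:
# one compact "active" blob plus a few scattered false positives.
blob = rng.normal(loc=(30, 40, 20), scale=1.0, size=(50, 3))
scatter = rng.uniform(low=0, high=60, size=(10, 3))
coords = np.vstack([blob, scatter])

Z = linkage(coords, method="average")
labels = fcluster(Z, t=1.15, criterion="inconsistent")

# Keep only clusters with enough members; scattered voxels end up in tiny clusters.
ids, counts = np.unique(labels, return_counts=True)
kept = ids[counts >= 10]
print("clusters kept:", kept, "voxels retained:", int(np.isin(labels, kept).sum()))
```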

  1. Characterization results and Markov chain Monte Carlo algorithms including exact simulation for some spatial point processes

    DEFF Research Database (Denmark)

    Häggström, Olle; Lieshout, Marie-Colette van; Møller, Jesper

    1999-01-01

    The area-interaction process and the continuum random-cluster model are characterized in terms of certain functional forms of their respective conditional intensities. In certain cases, these two point process models can be derived from a bivariate point process model which in many respects is simpler to analyse and simulate. Using this correspondence we devise a two-component Gibbs sampler, which can be used for fast and exact simulation by extending the recent ideas of Propp and Wilson. We further introduce a Swendsen-Wang type algorithm. The relevance of the results within spatial statistics...

  2. Quantification of the myocardial area at risk using coronary CT angiography and Voronoi algorithm-based myocardial segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Kurata, Akira; Kono, Atsushi; Coenen, Adriaan; Saru-Chelu, Raluca G.; Krestin, Gabriel P. [Erasmus University Medical Center, Department of Radiology, Rotterdam (Netherlands); Sakamoto, Tsuyoshi [AZE inc, Development Division, Chiyoda, Tokyo (Japan); Kido, Teruhito; Mochizuki, Teruhito [Ehime University Graduate School of Medicine, Department of Radiology, Toon, Ehime (Japan); Higashino, Hiroshi [Yotsuba Circulation Clinic, Department of Radiology, Matsuyama, Ehime (Japan); Abe, Mitsunori [Yotsuba Circulation Clinic, Department of Cardiology, Matsuyama, Ehime (Japan); Feyter, Pim J. de; Nieman, Koen [Erasmus University Medical Center, Department of Radiology, Rotterdam (Netherlands); Erasmus University Medical Center, Department of Cardiology, Rotterdam (Netherlands)

    2015-01-15

    The purpose of this study was to estimate the myocardial area at risk (MAAR) using coronary computed tomography angiography (CTA) and Voronoi algorithm-based myocardial segmentation in comparison with single-photon emission computed tomography (SPECT). Thirty-four patients with coronary artery disease underwent 128-slice coronary CTA, stress/rest thallium-201 SPECT, and coronary angiography (CAG). CTA-based MAAR was defined as the sum of all CAG stenosis (>50 %) related territories (the ratio of the left ventricular volume). Using automated quantification software (17-segment model, 5-point scale), SPECT-based MAAR was defined as the number of segments with a score above zero as compared to the total 17 segments by summed stress score (SSS), difference (SDS) score map, and comprehensive SPECT interpretation with either SSS or SDS best correlating CAG findings (SSS/SDS). Results were compared using Pearson's correlation coefficient. Forty-nine stenoses were observed in 102 major coronary territories. Mean value of CTA-based MAAR was 28.3 ± 14.0 %. SSS-based, SDS-based, and SSS/SDS-based MAAR was 30.1 ± 6.1 %, 20.1 ± 15.8 %, and 26.8 ± 15.7 %, respectively. CTA-based MAAR was significantly related to SPECT-based MAAR (r = 0.531 for SSS; r = 0.494 for SDS; r = 0.814 for SSS/SDS; P < 0.05 in each). CTA-based Voronoi algorithm myocardial segmentation reliably quantifies SPECT-based MAAR. (orig.)

  3. Quantification of the myocardial area at risk using coronary CT angiography and Voronoi algorithm-based myocardial segmentation

    International Nuclear Information System (INIS)

    Kurata, Akira; Kono, Atsushi; Coenen, Adriaan; Saru-Chelu, Raluca G.; Krestin, Gabriel P.; Sakamoto, Tsuyoshi; Kido, Teruhito; Mochizuki, Teruhito; Higashino, Hiroshi; Abe, Mitsunori; Feyter, Pim J. de; Nieman, Koen

    2015-01-01

    The purpose of this study was to estimate the myocardial area at risk (MAAR) using coronary computed tomography angiography (CTA) and Voronoi algorithm-based myocardial segmentation in comparison with single-photon emission computed tomography (SPECT). Thirty-four patients with coronary artery disease underwent 128-slice coronary CTA, stress/rest thallium-201 SPECT, and coronary angiography (CAG). CTA-based MAAR was defined as the sum of all CAG stenosis (>50 %) related territories (the ratio of the left ventricular volume). Using automated quantification software (17-segment model, 5-point scale), SPECT-based MAAR was defined as the number of segments with a score above zero as compared to the total 17 segments by summed stress score (SSS), difference (SDS) score map, and comprehensive SPECT interpretation with either SSS or SDS best correlating CAG findings (SSS/SDS). Results were compared using Pearson's correlation coefficient. Forty-nine stenoses were observed in 102 major coronary territories. Mean value of CTA-based MAAR was 28.3 ± 14.0 %. SSS-based, SDS-based, and SSS/SDS-based MAAR was 30.1 ± 6.1 %, 20.1 ± 15.8 %, and 26.8 ± 15.7 %, respectively. CTA-based MAAR was significantly related to SPECT-based MAAR (r = 0.531 for SSS; r = 0.494 for SDS; r = 0.814 for SSS/SDS; P < 0.05 in each). CTA-based Voronoi algorithm myocardial segmentation reliably quantifies SPECT-based MAAR. (orig.)

  4. Two Algorithms for High-throughput and Multi-parametric Quantification of Drosophila Neuromuscular Junction Morphology.

    Science.gov (United States)

    Castells-Nobau, Anna; Nijhof, Bonnie; Eidhof, Ilse; Wolf, Louis; Scheffer-de Gooyert, Jolanda M; Monedero, Ignacio; Torroja, Laura; van der Laak, Jeroen A W M; Schenck, Annette

    2017-05-03

    Synaptic morphology is tightly related to synaptic efficacy, and in many cases morphological synapse defects ultimately lead to synaptic malfunction. The Drosophila larval neuromuscular junction (NMJ), a well-established model for glutamatergic synapses, has been extensively studied for decades. Identification of mutations causing NMJ morphological defects revealed a repertoire of genes that regulate synapse development and function. Many of these were identified in large-scale studies that focused on qualitative approaches to detect morphological abnormalities of the Drosophila NMJ. A drawback of qualitative analyses is that many subtle players contributing to NMJ morphology likely remain unnoticed. Whereas quantitative analyses are required to detect the subtler morphological differences, such analyses are not yet commonly performed because they are laborious. This protocol describes in detail two image analysis algorithms "Drosophila NMJ Morphometrics" and "Drosophila NMJ Bouton Morphometrics", available as Fiji-compatible macros, for quantitative, accurate and objective morphometric analysis of the Drosophila NMJ. This methodology is developed to analyze NMJ terminals immunolabeled with the commonly used markers Dlg-1 and Brp. Additionally, its wider application to other markers such as Hrp, Csp and Syt is presented in this protocol. The macros are able to assess nine morphological NMJ features: NMJ area, NMJ perimeter, number of boutons, NMJ length, NMJ longest branch length, number of islands, number of branches, number of branching points and number of active zones in the NMJ terminal.

  5. Dicotyledon Weed Quantification Algorithm for Selective Herbicide Application in Maize Crops.

    Science.gov (United States)

    Laursen, Morten Stigaard; Jørgensen, Rasmus Nyholm; Midtiby, Henrik Skov; Jensen, Kjeld; Christiansen, Martin Peter; Giselsson, Thomas Mosgaard; Mortensen, Anders Krogh; Jensen, Peter Kryger

    2016-11-04

    The stricter legislation within the European Union for the regulation of herbicides that are prone to leaching causes a greater economic burden on the agricultural industry through taxation. Owing to the increased economic burden, research in reducing herbicide usage has been prompted. High-resolution images from digital cameras support the studying of plant characteristics. These images can also be utilized to analyze shape and texture characteristics for weed identification. Instead of detecting weed patches, weed density can be estimated at a sub-patch level, through which even the identification of a single plant is possible. The aim of this study is to adapt the monocot and dicot coverage ratio vision (MoDiCoVi) algorithm to estimate dicotyledon leaf cover, perform grid spraying in real time, and present initial results in terms of potential herbicide savings in maize. The authors designed and executed an automated, large-scale field trial supported by the Armadillo autonomous tool carrier robot. The field trial consisted of 299 maize plots. Half of the plots (parcels) were planned with additional seeded weeds; the other half were planned with naturally occurring weeds. The in-situ evaluation showed that, compared to conventional broadcast spraying, the proposed method can reduce herbicide usage by 65% without measurable loss in biological effect.

  6. Branch-and-Bound algorithm applied to uncertainty quantification of a Boiling Water Reactor Station Blackout

    Energy Technology Data Exchange (ETDEWEB)

    Nielsen, Joseph, E-mail: joseph.nielsen@inl.gov [Idaho National Laboratory, 1955 N. Fremont Avenue, P.O. Box 1625, Idaho Falls, ID 83402 (United States); University of Idaho, Department of Mechanical Engineering and Nuclear Engineering Program, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States); Tokuhiro, Akira [University of Idaho, Department of Mechanical Engineering and Nuclear Engineering Program, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States); Hiromoto, Robert [University of Idaho, Department of Computer Science, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States); Tu, Lei [University of Idaho, Department of Mechanical Engineering and Nuclear Engineering Program, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States)

    2015-12-15

    Highlights: • Dynamic Event Tree solutions have been optimized using the Branch-and-Bound algorithm. • A 60% efficiency in optimization has been achieved. • Modeling uncertainty within a risk-informed framework is evaluated. - Abstract: Evaluation of the impacts of uncertainty and sensitivity in modeling presents a significant set of challenges, in particular for high-fidelity modeling. Computational costs and validation of models create a need for cost-effective decision making with regard to experiment design. Experiments designed to validate computational models can be used to reduce uncertainty in the physical model. In some cases, large uncertainty in a particular aspect of the model may or may not have a large impact on the final results. For example, modeling of a relief valve may result in large uncertainty, however, the actual effects on final peak clad temperature in a reactor transient may be small and the large uncertainty with respect to valve modeling may be considered acceptable. Additionally, the ability to determine the adequacy of a model and the validation supporting it should be considered within a risk informed framework. Low fidelity modeling with large uncertainty may be considered adequate if the uncertainty is considered acceptable with respect to risk. In other words, models that are used to evaluate the probability of failure should be evaluated more rigorously with the intent of increasing safety margin. Probabilistic risk assessment (PRA) techniques have traditionally been used to identify accident conditions and transients. Traditional classical event tree methods utilize analysts’ knowledge and experience to identify the important timing of events in coordination with thermal-hydraulic modeling. These methods lack the capability to evaluate complex dynamic systems. In these systems, time and energy scales associated with transient events may vary as a function of transition times and energies to arrive at a different physical

  7. Branch-and-Bound algorithm applied to uncertainty quantification of a Boiling Water Reactor Station Blackout

    International Nuclear Information System (INIS)

    Nielsen, Joseph; Tokuhiro, Akira; Hiromoto, Robert; Tu, Lei

    2015-01-01

    Highlights: • Dynamic Event Tree solutions have been optimized using the Branch-and-Bound algorithm. • A 60% efficiency in optimization has been achieved. • Modeling uncertainty within a risk-informed framework is evaluated. - Abstract: Evaluation of the impacts of uncertainty and sensitivity in modeling presents a significant set of challenges, in particular for high-fidelity modeling. Computational costs and validation of models create a need for cost-effective decision making with regard to experiment design. Experiments designed to validate computational models can be used to reduce uncertainty in the physical model. In some cases, large uncertainty in a particular aspect of the model may or may not have a large impact on the final results. For example, modeling of a relief valve may result in large uncertainty, however, the actual effects on final peak clad temperature in a reactor transient may be small and the large uncertainty with respect to valve modeling may be considered acceptable. Additionally, the ability to determine the adequacy of a model and the validation supporting it should be considered within a risk informed framework. Low fidelity modeling with large uncertainty may be considered adequate if the uncertainty is considered acceptable with respect to risk. In other words, models that are used to evaluate the probability of failure should be evaluated more rigorously with the intent of increasing safety margin. Probabilistic risk assessment (PRA) techniques have traditionally been used to identify accident conditions and transients. Traditional classical event tree methods utilize analysts’ knowledge and experience to identify the important timing of events in coordination with thermal-hydraulic modeling. These methods lack the capability to evaluate complex dynamic systems. In these systems, time and energy scales associated with transient events may vary as a function of transition times and energies to arrive at a different physical

  8. Quantification of Parkinson Tremor Intensity Based On EMG Signal Analysis Using Fast Orthogonal Search Algorithm

    Directory of Open Access Journals (Sweden)

    H. Rezghian Moghadam

    2018-06-01

    Full Text Available Tremor is one of the common symptoms of Parkinson's disease. Patients suffering from Parkinson's disease have difficulty in controlling their movements owing to tremor. The intensity of the disease can be determined by specifying the range of intensity values of involuntary tremor in Parkinson patients. The level of disease in patients is rated on an empirical scale of 0-5. In the early stages of Parkinson's disease, resting tremor can be very mild and intermittent, so diagnosing the level of disease is difficult but important, since only medication therapy is available. The aim of this study is to quantify the intensity of tremor by analysis of the electromyogram signal. The solution proposed in this paper is to employ a polynomial function model to estimate the Unified Parkinson's Disease Rating Scale (UPDRS) value. The Fast Orthogonal Search (FOS) algorithm, which is based on the identification of orthogonal basis functions, was utilized for model identification. Some linear and nonlinear features extracted from the wrist surface electromyogram signal were considered as the input of the model identified by FOS, and the model output was the UPDRS value. The proposed model was designed with two different structures, called the single structure and the parallel structure, and the efficiency of the models with the different structures was evaluated. The evaluation results using a K-fold cross-validation approach showed that the proposed model with a parallel structure could determine the tremor severity of Parkinson's disease with an accuracy of 99.25% ±0.41, sensitivity of 97.17% ±1.9 and specificity of 99.72% ±0.18.

  9. Algorithms

    Indian Academy of Sciences (India)

    polynomial) division have been found in Vedic Mathematics that are dated much earlier than Euclid's algorithm. A programming language is used to describe an algorithm for execution on a computer. An algorithm expressed using a programming ...

  10. BROMOCEA Code: An Improved Grand Canonical Monte Carlo/Brownian Dynamics Algorithm Including Explicit Atoms.

    Science.gov (United States)

    Solano, Carlos J F; Pothula, Karunakar R; Prajapati, Jigneshkumar D; De Biase, Pablo M; Noskov, Sergei Yu; Kleinekathöfer, Ulrich

    2016-05-10

    All-atom molecular dynamics simulations have a long history of applications studying ion and substrate permeation across biological and artificial pores. While offering unprecedented insights into the underpinning transport processes, MD simulations are limited in time-scales and ability to simulate physiological membrane potentials or asymmetric salt solutions and require substantial computational power. While several approaches to circumvent all of these limitations were developed, Brownian dynamics simulations remain an attractive option to the field. The main limitation, however, is an apparent lack of protein flexibility important for the accurate description of permeation events. In the present contribution, we report an extension of the Brownian dynamics scheme which includes conformational dynamics. To achieve this goal, the dynamics of amino-acid residues was incorporated into the many-body potential of mean force and into the Langevin equations of motion. The developed software solution, called BROMOCEA, was applied to ion transport through OmpC as a test case. Compared to fully atomistic simulations, the results show a clear improvement in the ratio of permeating anions and cations. The present tests strongly indicate that pore flexibility can enhance permeation properties which will become even more important in future applications to substrate translocation.

  11. Algorithms

    Indian Academy of Sciences (India)

    to as 'divide-and-conquer'. Although there has been a large effort in realizing efficient algorithms, there are not many universally accepted algorithm design paradigms. In this article, we illustrate algorithm design techniques such as balancing, greedy strategy, dynamic programming strategy, and backtracking or traversal of ...

  12. A quasi-static algorithm that includes effects of characteristic time scales for simulating failures in brittle materials

    KAUST Repository

    Liu, Jinxing

    2013-04-24

    When the brittle heterogeneous material is simulated via lattice models, the quasi-static failure depends on the relative magnitudes of Telem, the characteristic releasing time of the internal forces of the broken elements, and Tlattice, the characteristic relaxation time of the lattice, both of which are infinitesimal compared with Tload, the characteristic loading period. The load-unload (L-U) method is used for one extreme, Telem << Tlattice, whereas the force-release (F-R) method is used for the other, Telem >> Tlattice. For cases between the above two extremes, we develop a new algorithm by combining the L-U and the F-R trial displacement fields to construct the new trial field. As a result, our algorithm includes both L-U and F-R failure characteristics, which allows us to observe the influence of the ratio of Telem to Tlattice by adjusting their contributions in the trial displacement field. Therefore, the material dependence of the snap-back instabilities is implemented by introducing one snap-back parameter γ. Although in principle catastrophic failures can hardly be predicted accurately without knowing all microstructural information, effects of γ can be captured by numerical simulations conducted on samples with exactly the same microstructure but different γs. Such a same-specimen-based study shows how the lattice behaves along with the changing ratio of the L-U and F-R components. © 2013 The Author(s).

  13. The Role of the Pulmonary Embolism Response Team: How to Build One, Who to Include, Scenarios, Organization, and Algorithms.

    Science.gov (United States)

    Galmer, Andrew; Weinberg, Ido; Giri, Jay; Jaff, Michael; Weinberg, Mitchell

    2017-09-01

    Pulmonary embolism response teams (PERTs) are multidisciplinary response teams aimed at delivering a range of diagnostic and therapeutic modalities to patients with pulmonary embolism. These teams have gained traction on a national scale. However, despite sharing a common goal, individual PERT programs are quite individualized, varying in their methods of operation, team structures, and practice patterns. The tendency of such response teams is to become intensely structured, algorithmic, and inflexible. However, in their current form, PERT programs are quite the opposite. They are being creatively customized to meet the needs of the individual institution based on available resources, skills, personnel, and institutional goals. After a review of the essential core elements needed to create and operate a PERT team in any form, this article will discuss the more flexible feature development of the nascent PERT team. These include team planning, member composition, operational structure, benchmarking, market analysis, and rudimentary financial operations. Copyright © 2017 Elsevier Inc. All rights reserved.

  14. Quantification of the influence of the choice of the algorithm and planning system on the calculation of a treatment plan

    International Nuclear Information System (INIS)

    Moral, F. del; Ramos, A.; Salgado, M.; Andrade, B; Munoz, V.

    2010-01-01

    In this work, an analysis of the influence of the choice of algorithm or planning system on the calculation of the same treatment plan is presented. For this purpose, specific software has been developed for comparing plans of a series of IMRT cases of prostate and head and neck cancer calculated using the convolution, superposition and fast superposition algorithms implemented in the XiO 4.40 planning system (CMS). It has also been used for the comparison of the same treatment plan for lung pathology calculated in XiO with the mentioned algorithms and calculated in the Plan 4.1 planning system (Brainlab) using its pencil beam algorithm. Differences in dose among the treatment plans have been quantified using a set of metrics. The recommendation that the dosimetrist choose the algorithm carefully has been numerically confirmed. (Author).
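
    The abstract does not list the specific metrics used, so the sketch below only illustrates the general idea of comparing two dose grids of the same plan, calculated with different algorithms, through a few simple global figures of merit. The function name, the high-dose mask and the toy data are assumptions.

```python
# Hedged sketch: compare two dose distributions of the same plan calculated with
# different algorithms, using a few simple global difference metrics.
import numpy as np

def compare_dose(dose_a, dose_b, prescription):
    """Return simple difference metrics between two 3D dose grids (in Gy)."""
    diff = dose_b - dose_a
    rel = diff / prescription * 100.0            # percent of prescription dose
    high_dose = dose_a >= 0.5 * prescription     # restrict to clinically relevant region
    return {
        "mean_diff_pct": float(rel[high_dose].mean()),
        "max_abs_diff_pct": float(np.abs(rel[high_dose]).max()),
        "d_mean_a": float(dose_a[high_dose].mean()),
        "d_mean_b": float(dose_b[high_dose].mean()),
    }

# Toy example with two slightly different dose grids.
rng = np.random.default_rng(2)
base = rng.uniform(0, 78, size=(40, 40, 40))
print(compare_dose(base, base * 1.02, prescription=78.0))
```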

  15. Algorithms

    Indian Academy of Sciences (India)

    ticians but also forms the foundation of computer science. Two ... with methods of developing algorithms for solving a variety of problems but ... applications of computers in science and engineer- ... numerical calculus are as important. We will ...

  16. A NEW IMAGE RETRIEVAL ALGORITHM BASED ON VECTOR QUANTIFICATION

    Institute of Scientific and Technical Information of China (English)

    冀鑫; 冀小平

    2016-01-01

    We propose a new colour feature extraction algorithm to address the shortcomings of current colour-based image retrieval algorithms in colour feature extraction. The algorithm first uses the LBG algorithm to vector-quantify the colour information in HSI space, and then counts the frequency of each code word in the image to form a colour histogram. In this way, distortion of the original image features during colour feature extraction is kept as low as possible. Meanwhile, by setting a threshold value and comparing recall and precision rates over several experiments, a satisfactory threshold was found, making the retrieval algorithm more refined. Experimental results show that the algorithm can effectively improve the accuracy of image retrieval.
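
    A minimal sketch of the core idea is given below: train an LBG (Linde-Buzo-Gray) codebook on per-pixel colour vectors and describe each image by its codeword-frequency histogram. The codebook size, the Lloyd-iteration count and the toy pixel data are assumptions; the HSI conversion and the threshold-based matching step are omitted.

```python
# Sketch: LBG vector quantization of colour vectors and a codeword histogram.
import numpy as np

def lbg_codebook(vectors, size=16, eps=1e-3, n_iter=20):
    """Grow a codebook by splitting, refining each time with Lloyd iterations."""
    codebook = vectors.mean(axis=0, keepdims=True)
    while len(codebook) < size:
        codebook = np.vstack([codebook * (1 + eps), codebook * (1 - eps)])  # split
        for _ in range(n_iter):                                             # refine
            d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
            nearest = d.argmin(axis=1)
            for k in range(len(codebook)):
                members = vectors[nearest == k]
                if len(members):
                    codebook[k] = members.mean(axis=0)
    return codebook

def codeword_histogram(vectors, codebook):
    d = np.linalg.norm(vectors[:, None, :] - codebook[None, :, :], axis=2)
    counts = np.bincount(d.argmin(axis=1), minlength=len(codebook))
    return counts / counts.sum()

# Toy usage: random "pixels" standing in for the HSI colour vectors of one image.
pixels = np.random.rand(5000, 3)
cb = lbg_codebook(pixels, size=16)
print(codeword_histogram(pixels, cb).round(3))
```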

  17. A flexible and accurate quantification algorithm for electron probe X-ray microanalysis based on thin-film element yields

    International Nuclear Information System (INIS)

    Schalm, O.; Janssens, K.

    2003-01-01

    Quantitative analysis by means of electron probe X-ray microanalysis (EPXMA) of low Z materials such as silicate glasses can be hampered by the fact that ice or other contaminants build up on the Si(Li) detector beryllium window or (in the case of a windowless detector) on the Si(Li) crystal itself. These layers act as an additional absorber in front of the detector crystal, decreasing the detection efficiency at low energies (<5 keV). Since the layer thickness gradually changes with time, the detector efficiency in the low-energy region is also not constant. Using the normal ZAF approach to quantification of EPXMA data is cumbersome in these conditions, because spectra from reference materials and from unknown samples must be acquired within a fairly short period of time in order to avoid the effect of the change in efficiency. To avoid this problem, an alternative approach to quantification of EPXMA data is proposed, following a philosophy often employed in quantitative analysis of X-ray fluorescence (XRF) and proton-induced X-ray emission (PIXE) data. This approach is based on the (experimental) determination of thin-film element yields, rather than starting from infinitely thick and single element calibration standards. These thin-film sensitivity coefficients can also be interpolated to allow quantification of elements for which no suitable standards are available. The change in detector efficiency can be monitored by collecting an X-ray spectrum of one multi-element glass standard. This information is used to adapt the previously determined thin-film sensitivity coefficients to the actual detector efficiency conditions valid on the day that the experiments were carried out. The main advantage of this method is that spectra collected from the standards and from the unknown samples need not be acquired within a short period of time. This new approach is evaluated for glass and metal matrices and is compared with a standard ZAF method.

  18. Modeling a novel CCHP system including solar and wind renewable energy resources and sizing by a CC-MOPSO algorithm

    International Nuclear Information System (INIS)

    Soheyli, Saman; Shafiei Mayam, Mohamad Hossein; Mehrjoo, Mehri

    2016-01-01

    Highlights: • Considering renewable energy resources as the main prime movers in CCHP systems. • Simultaneous application of FEL and FTL by optimizing two probability functions. • Simultaneous optimization of the equipment and penalty factors by the CC-MOPSO algorithm. • Reducing fuel consumption and pollution up to 263 and 353 times, respectively. - Abstract: Due to problems such as equipment heat losses, low energy efficiency, increasing pollution and fossil fuel consumption, combined cooling, heating, and power (CCHP) systems have attracted lots of attention during the last decade. In this paper, for minimizing fossil fuel consumption and pollution, a novel CCHP system including photovoltaic (PV) modules, wind turbines, and solid oxide fuel cells (SOFC) as the prime movers is considered. Moreover, in order to minimize the excess electrical and heat energy production of the CCHP system and thus reduce the need for the local power grid and any auxiliary heat production system, following electrical load (FEL) and following thermal load (FTL) operation strategies are considered simultaneously. In order to determine the optimal number of each system component and also to set the penalty factors in the penalty function used, a co-constrained multi-objective particle swarm optimization (CC-MOPSO) algorithm is applied. Utilization of the renewable energy resources, the annual total cost (ATC) and the CCHP system area are considered as the objective functions. It also includes constraints such as loss of power supply probability (LPSP), loss of heat supply probability (LHSP), state of battery charge (SOC), and the number of each CCHP component. A case study of a hypothetical hotel in Kermanshah, Iran, is conducted to verify the feasibility of the proposed system. The numerical results give 10 wind turbines, 430 PV modules, 11 SOFCs, 106 batteries and 2 heat storage tanks (HST), with spring as the best season in terms of decreasing cost and fuel consumption. Comparing the results

  19. Front-face fluorescence spectroscopy combined with second-order multivariate algorithms for the quantification of polyphenols in red wine samples.

    Science.gov (United States)

    Cabrera-Bañegil, Manuel; Hurtado-Sánchez, María Del Carmen; Galeano-Díaz, Teresa; Durán-Merás, Isabel

    2017-04-01

    The potential of front-face fluorescence spectroscopy combined with second-order chemometric methods was investigated for the quantification of the main polyphenols present in wine samples. Parallel factor analysis (PARAFAC) and unfolded-partial least squares coupled to residual bilinearization (U-PLS/RBL) were assessed for the quantification of catechin, epicatechin, quercetin, resveratrol, caffeic acid, gallic acid, p-coumaric acid, and vanillic acid in red wines. Excitation-emission matrices of different red wine samples, without pretreatment, were obtained in front-face mode, recording emission between 290 and 450 nm, exciting between 240 and 290 nm, for the analysis of epicatechin, catechin, caffeic acid, gallic acid, and vanillic acid; and excitation and emission between 300-360 and 330-400 nm, respectively, for the analysis of resveratrol. The U-PLS/RBL algorithm provided the best results, and this methodology was validated against an optimized liquid chromatography procedure with diode array and fluorimetric detection, obtaining a very good correlation for vanillic acid, caffeic acid, epicatechin and resveratrol. Copyright © 2016 Elsevier Ltd. All rights reserved.
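
    Since U-PLS/RBL is not available off the shelf in common Python libraries, the sketch below only illustrates the "unfolded PLS" part of the approach: each excitation-emission matrix (EEM) is vectorised and regressed against analyte concentration with scikit-learn's PLSRegression. The EEM sizes, concentrations and pure-component profile are simulated assumptions, and the residual bilinearization step is omitted.

```python
# Hedged stand-in for U-PLS (no RBL): unfold each simulated EEM into a vector
# and calibrate a PLS model against concentration.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(3)

n_samples, n_ex, n_em = 30, 20, 40
conc = rng.uniform(0.1, 5.0, n_samples)               # e.g. a polyphenol, in mg/L

# Simulate EEMs as concentration * (pure analyte profile) + noise.
pure = np.outer(np.exp(-np.linspace(0, 3, n_ex)), np.exp(-np.linspace(0, 2, n_em)))
eems = conc[:, None, None] * pure[None] + rng.normal(0, 0.02, (n_samples, n_ex, n_em))

X = eems.reshape(n_samples, -1)                        # unfold each EEM into a vector
pls = PLSRegression(n_components=3)
pls.fit(X[:20], conc[:20])                             # calibration set
pred = pls.predict(X[20:]).ravel()                     # validation set
print(np.round(np.c_[conc[20:], pred], 2))             # true vs. predicted
```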

  20. Fast mode decision algorithm in MPEG-2 to H.264/AVC transcoding including group of picture structure conversion

    Science.gov (United States)

    Lee, Kangjun; Jeon, Gwanggil; Jeong, Jechang

    2009-05-01

    The H.264/AVC baseline profile is used in many applications, including digital multimedia broadcasting, Internet protocol television, and storage devices, while the MPEG-2 main profile is widely used in applications, such as high-definition television and digital versatile disks. The MPEG-2 main profile supports B pictures for bidirectional motion prediction. Therefore, transcoding the MPEG-2 main profile to the H.264/AVC baseline is necessary for universal multimedia access. In the cascaded pixel domain transcoder architecture, the calculation of the rate distortion cost as part of the mode decision process in the H.264/AVC encoder requires extremely complex computations. To reduce the complexity inherent in the implementation of a real-time transcoder, we propose a fast mode decision algorithm based on complexity information from the reference region that is used for motion compensation. In this study, an adaptive mode decision process was used based on the modes assigned to the reference regions. Simulation results indicated that a significant reduction in complexity was achieved without significant degradation of video quality.

  1. Algorithms

    Indian Academy of Sciences (India)

    algorithm design technique called 'divide-and-conquer'. One of ... Turtle graphics, September. 1996. 5. ... whole list named 'PO' is a pointer to the first element of the list; ..... Program for computing matrices X and Y and placing the result in C *).

  2. Algorithms

    Indian Academy of Sciences (India)

    algorithm that it is implicitly understood that we know how to generate the next natural ... Explicit comparisons are made in line (1) where the maximum and minimum are ... It can be shown that the function T(n) = 3n/2 - 2 is the solution to the above ...

  3. Algorithms

    Indian Academy of Sciences (India)

    will become clear in the next article when we discuss a simple Logo-like programming language. ... Rod B may be used as an auxiliary store. The problem is to find an algorithm which performs this task. ... N0 disks are moved from A to B using C as the auxiliary rod. • move_disk(A, C); the (N0 + 1)th disk is moved from A to C directly ...
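
    The fragment above describes the classic Towers of Hanoi recursion. A minimal sketch is shown below; move_disk is the primitive operation named in the snippet, implemented here simply as a print statement.

```python
# Minimal sketch of the recursion described above: move n disks from rod A to
# rod C using rod B as the auxiliary store.
def move_disk(src, dst):
    print(f"move disk {src} -> {dst}")

def hanoi(n, src="A", dst="C", aux="B"):
    if n == 0:
        return
    hanoi(n - 1, src, aux, dst)   # move the top n-1 disks out of the way
    move_disk(src, dst)           # move the largest remaining disk directly
    hanoi(n - 1, aux, dst, src)   # move the n-1 disks onto it

hanoi(3)  # 2**3 - 1 = 7 moves
```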

  4. A flexible and accurate quantification algorithm for electron probe X-ray microanalysis based on thin-film element yields

    Science.gov (United States)

    Schalm, O.; Janssens, K.

    2003-04-01

    Quantitative analysis by means of electron probe X-ray microanalysis (EPXMA) of low Z materials such as silicate glasses can be hampered by the fact that ice or other contaminants build up on the Si(Li) detector beryllium window or (in the case of a windowless detector) on the Si(Li) crystal itself. These layers act as an additional absorber in front of the detector crystal, decreasing the detection efficiency at low energies (<5 keV). Since the layer thickness gradually changes with time, the detector efficiency in the low-energy region is also not constant. Using the normal ZAF approach to quantification of EPXMA data is cumbersome in these conditions, because spectra from reference materials and from unknown samples must be acquired within a fairly short period of time in order to avoid the effect of the change in efficiency. To avoid this problem, an alternative approach to quantification of EPXMA data is proposed, following a philosophy often employed in quantitative analysis of X-ray fluorescence (XRF) and proton-induced X-ray emission (PIXE) data. This approach is based on the (experimental) determination of thin-film element yields, rather than starting from infinitely thick and single element calibration standards. These thin-film sensitivity coefficients can also be interpolated to allow quantification of elements for which no suitable standards are available. The change in detector efficiency can be monitored by collecting an X-ray spectrum of one multi-element glass standard. This information is used to adapt the previously determined thin-film sensitivity coefficients to the actual detector efficiency conditions valid on the day that the experiments were carried out. The main advantage of this method is that spectra collected from the standards and from the unknown samples need not be acquired within a short period of time. This new approach is evaluated for glass and metal matrices and is compared with a standard ZAF method.

  5. Multimaterial Decomposition Algorithm for the Quantification of Liver Fat Content by Using Fast-Kilovolt-Peak Switching Dual-Energy CT: Clinical Evaluation.

    Science.gov (United States)

    Hyodo, Tomoko; Yada, Norihisa; Hori, Masatoshi; Maenishi, Osamu; Lamb, Peter; Sasaki, Kosuke; Onoda, Minori; Kudo, Masatoshi; Mochizuki, Teruhito; Murakami, Takamichi

    2017-04-01

    Purpose To assess the clinical accuracy and reproducibility of liver fat quantification with the multimaterial decomposition (MMD) algorithm, comparing the performance of MMD with that of magnetic resonance (MR) spectroscopy by using liver biopsy as the reference standard. Materials and Methods This prospective study was approved by the institutional ethics committee, and patients provided written informed consent. Thirty-three patients suspected of having hepatic steatosis underwent non-contrast material-enhanced and triple-phase dynamic contrast-enhanced dual-energy computed tomography (CT) (80 and 140 kVp) and single-voxel proton MR spectroscopy within 30 days before liver biopsy. Percentage fat volume fraction (FVF) images were generated by using the MMD algorithm on dual-energy CT data to measure hepatic fat content. FVFs determined by using dual-energy CT and percentage fat fractions (FFs) determined by using MR spectroscopy were compared with histologic steatosis grade (0-3, as defined by the nonalcoholic fatty liver disease activity score system) by using Jonckheere-Terpstra trend tests and were compared with each other by using Bland-Altman analysis. Real non-contrast-enhanced FVFs were compared with triple-phase contrast-enhanced FVFs to determine the reproducibility of MMD by using Bland-Altman analyses. Results Both dual-energy CT FVF and MR spectroscopy FF increased with increasing histologic steatosis grade (trend test, P < ...). ... Conclusion The MMD algorithm quantifying hepatic fat in dual-energy CT images is accurate and reproducible across imaging phases. © RSNA, 2017 Online supplemental material is available for this article.

  6. TH-CD-206-01: Expectation-Maximization Algorithm-Based Tissue Mixture Quantification for Perfusion MRI

    Energy Technology Data Exchange (ETDEWEB)

    Han, H; Xing, L [Stanford University, Palo Alto, CA (United States); Liang, Z [Stony Brook University, Stony Brook, NY (United States); Li, L [City University of New York College of Staten Island, Staten Island, NY (United States)

    2016-06-15

    Purpose: To investigate the feasibility of estimating the tissue mixture perfusions and quantifying cerebral blood flow change in arterial spin labeled (ASL) perfusion MR images. Methods: The proposed perfusion MR image analysis framework consists of 5 steps: (1) Inhomogeneity correction was performed on the T1- and T2-weighted images, which are available for each studied perfusion MR dataset. (2) We used the publicly available FSL toolbox to strip off the non-brain structures from the T1- and T2-weighted MR images. (3) We applied a multi-spectral tissue-mixture segmentation algorithm on both T1- and T2-structural MR images to roughly estimate the fraction of each tissue type - white matter, grey matter and cerebral spinal fluid inside each image voxel. (4) The distributions of the three tissue types or tissue mixture across the structural image array are down-sampled and mapped onto the ASL voxel array via a co-registration operation. (5) The presented 4-dimensional expectation-maximization (4D-EM) algorithm takes the down-sampled three tissue type distributions on perfusion image data to generate the perfusion mean, variance and percentage images for each tissue type of interest. Results: Experimental results on three volunteer datasets demonstrated that the multi-spectral tissue-mixture segmentation algorithm was effective to initialize tissue mixtures from T1- and T2-weighted MR images. Compared with the conventional ASL image processing toolbox, the proposed 4D-EM algorithm not only generated comparable perfusion mean images, but also produced perfusion variance and percentage images, which the ASL toolbox cannot obtain. It is observed that the perfusion contribution percentages may not be the same as the corresponding tissue mixture volume fractions estimated in the structural images. Conclusion: A specific application to brain ASL images showed that the presented perfusion image analysis method is promising for detecting subtle changes in tissue perfusions

  7. TH-CD-206-01: Expectation-Maximization Algorithm-Based Tissue Mixture Quantification for Perfusion MRI

    International Nuclear Information System (INIS)

    Han, H; Xing, L; Liang, Z; Li, L

    2016-01-01

    Purpose: To investigate the feasibility of estimating the tissue mixture perfusions and quantifying cerebral blood flow change in arterial spin labeled (ASL) perfusion MR images. Methods: The proposed perfusion MR image analysis framework consists of 5 steps: (1) Inhomogeneity correction was performed on the T1- and T2-weighted images, which are available for each studied perfusion MR dataset. (2) We used the publicly available FSL toolbox to strip off the non-brain structures from the T1- and T2-weighted MR images. (3) We applied a multi-spectral tissue-mixture segmentation algorithm on both T1- and T2-structural MR images to roughly estimate the fraction of each tissue type - white matter, grey matter and cerebral spinal fluid inside each image voxel. (4) The distributions of the three tissue types or tissue mixture across the structural image array are down-sampled and mapped onto the ASL voxel array via a co-registration operation. (5) The presented 4-dimensional expectation-maximization (4D-EM) algorithm takes the down-sampled three tissue type distributions on perfusion image data to generate the perfusion mean, variance and percentage images for each tissue type of interest. Results: Experimental results on three volunteer datasets demonstrated that the multi-spectral tissue-mixture segmentation algorithm was effective to initialize tissue mixtures from T1- and T2-weighted MR images. Compared with the conventional ASL image processing toolbox, the proposed 4D-EM algorithm not only generated comparable perfusion mean images, but also produced perfusion variance and percentage images, which the ASL toolbox cannot obtain. It is observed that the perfusion contribution percentages may not be the same as the corresponding tissue mixture volume fractions estimated in the structural images. Conclusion: A specific application to brain ASL images showed that the presented perfusion image analysis method is promising for detecting subtle changes in tissue perfusions

  8. A two-stage algorithm for Clostridium difficile including PCR: can we replace the toxin EIA?

    Science.gov (United States)

    Orendi, J M; Monnery, D J; Manzoor, S; Hawkey, P M

    2012-01-01

    A two-step, three-test algorithm for Clostridium difficile infection (CDI) was reviewed. Stool samples were tested by enzyme immunoassays for C. difficile common antigen glutamate dehydrogenase (G) and toxin A/B (T). Samples with discordant results were tested by polymerase chain reaction detecting the toxin B gene (P). The algorithm quickly identified patients with detectable toxin A/B, whereas a large group of patients excreting toxigenic C. difficile but with toxin A/B production below the detection level (G(+)T(-)P(+)) was identified separately. The average white blood cell count in patients with a G(+)T(+) result was higher than in those with a G(+)T(-)P(+) result. Copyright © 2011 The Healthcare Infection Society. Published by Elsevier Ltd. All rights reserved.
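
    The two-step, three-test flow described above can be written out as a small decision function, shown below. The result categories follow the abstract; the handling of the rarer discordant combinations (e.g. GDH-negative, toxin-positive) is an assumption, since the abstract only reports the G(+)T(-)P(+) group explicitly.

```python
# The two-step testing flow sketched as a decision function. Inputs are the GDH
# (G) and toxin A/B (T) EIA results; the toxin-B PCR (P) is only requested when
# G and T are discordant, as in the algorithm described above.
def cdi_algorithm(gdh_positive, toxin_positive, run_pcr):
    """run_pcr is a callable returning True/False, invoked only when needed."""
    if gdh_positive and toxin_positive:
        return "G(+)T(+): toxigenic C. difficile with detectable toxin"
    if not gdh_positive and not toxin_positive:
        return "G(-)T(-): CDI excluded"
    # Discordant G/T results: resolve with PCR for the toxin B gene.
    pcr_positive = run_pcr()
    if gdh_positive and pcr_positive:
        return "G(+)T(-)P(+): toxigenic C. difficile, toxin below detection level"
    return ("discordant, PCR " + ("positive" if pcr_positive else "negative")
            + ": review clinically")

print(cdi_algorithm(True, False, run_pcr=lambda: True))
```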

  9. An algorithm to include the bremsstrahlung component in the determination of the absorbed dose in electron beams

    Energy Technology Data Exchange (ETDEWEB)

    Klevenhagen, S C [The Royal London Hospital, London (United Kingdom). Medical Physics Dept.

    1996-08-01

    Currently used dosimetry protocols for absolute dose determination of electron beams from accelerators in radiation therapy do not account for the effect of the bremsstrahlung contamination of the beam. This results in slightly erroneous doses calculated from ionization chamber measurements. In this report the deviation is calculated and an improved algorithm, which accounts for the effect of the bremsstrahlung component of the beam, is suggested. (author). 14 refs, 2 figs, 1 tab.

  10. A new automatic algorithm for quantification of myocardial infarction imaged by late gadolinium enhancement cardiovascular magnetic resonance: experimental validation and comparison to expert delineations in multi-center, multi-vendor patient data.

    Science.gov (United States)

    Engblom, Henrik; Tufvesson, Jane; Jablonowski, Robert; Carlsson, Marcus; Aletras, Anthony H; Hoffmann, Pavel; Jacquier, Alexis; Kober, Frank; Metzler, Bernhard; Erlinge, David; Atar, Dan; Arheden, Håkan; Heiberg, Einar

    2016-05-04

    Late gadolinium enhancement (LGE) cardiovascular magnetic resonance (CMR) using magnitude inversion recovery (IR) or phase sensitive inversion recovery (PSIR) has become clinical standard for assessment of myocardial infarction (MI). However, there is no clinical standard for quantification of MI even though multiple methods have been proposed. Simple thresholds have yielded varying results and advanced algorithms have only been validated in single center studies. Therefore, the aim of this study was to develop an automatic algorithm for MI quantification in IR and PSIR LGE images and to validate the new algorithm experimentally and compare it to expert delineations in multi-center, multi-vendor patient data. The new automatic algorithm, EWA (Expectation Maximization, weighted intensity, a priori information), was implemented using an intensity threshold by Expectation Maximization (EM) and a weighted summation to account for partial volume effects. The EWA algorithm was validated in-vivo against triphenyltetrazolium-chloride (TTC) staining (n = 7 pigs with paired IR and PSIR images) and against ex-vivo high resolution T1-weighted images (n = 23 IR and n = 13 PSIR images). The EWA algorithm was also compared to expert delineation in 124 patients from multi-center, multi-vendor clinical trials 2-6 days following first time ST-elevation myocardial infarction (STEMI) treated with percutaneous coronary intervention (PCI) (n = 124 IR and n = 49 PSIR images). Infarct size by the EWA algorithm in vivo in pigs showed a bias to ex-vivo TTC of -1 ± 4%LVM (R = 0.84) in IR and -2 ± 3%LVM (R = 0.92) in PSIR images and a bias to ex-vivo T1-weighted images of 0 ± 4%LVM (R = 0.94) in IR and 0 ± 5%LVM (R = 0.79) in PSIR images. In multi-center patient studies, infarct size by the EWA algorithm showed a bias to expert delineation of -2 ± 6 %LVM (R = 0.81) in IR images (n = 124) and 0 ± 5%LVM (R = 0.89) in
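
    The sketch below is a simplified, hypothetical illustration of the two ingredients named above, an EM-derived intensity threshold and a weighted summation to account for partial-volume pixels; it is not the published EWA implementation, and the toy intensity distributions are assumptions.

```python
# Simplified illustration: EM (Gaussian mixture) separation of remote vs.
# infarcted myocardium, then linear weighting of pixels between the two means
# to approximate partial-volume effects.
import numpy as np
from sklearn.mixture import GaussianMixture

rng = np.random.default_rng(4)

# Toy LGE intensities inside the myocardium: remote (dark) and infarcted (bright).
remote = rng.normal(200, 20, 1500)
infarct = rng.normal(600, 60, 500)
intensities = np.concatenate([remote, infarct])

gm = GaussianMixture(n_components=2, random_state=0).fit(intensities.reshape(-1, 1))
mu_remote, mu_core = np.sort(gm.means_.ravel())

# Weight each pixel linearly between the remote and core means (partial volume),
# then sum the weights to get infarct size as a fraction of the myocardium.
w = np.clip((intensities - mu_remote) / (mu_core - mu_remote), 0.0, 1.0)
infarct_fraction = w.sum() / len(intensities)
print(f"infarct size: {100 * infarct_fraction:.1f}% of myocardium")
```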

  11. Abdominal adipose tissue quantification on water-suppressed and non-water-suppressed MRI at 3T using semi-automated FCM clustering algorithm

    Science.gov (United States)

    Valaparla, Sunil K.; Peng, Qi; Gao, Feng; Clarke, Geoffrey D.

    2014-03-01

    Accurate measurements of human body fat distribution are desirable because excessive body fat is associated with impaired insulin sensitivity, type 2 diabetes mellitus (T2DM) and cardiovascular disease. In this study, we hypothesized that the performance of water suppressed (WS) MRI is superior to non-water suppressed (NWS) MRI for volumetric assessment of abdominal subcutaneous (SAT), intramuscular (IMAT), visceral (VAT), and total (TAT) adipose tissues. We acquired T1-weighted images on a 3T MRI system (TIM Trio, Siemens), which were analyzed using semi-automated segmentation software that employs a fuzzy c-means (FCM) clustering algorithm. Sixteen contiguous axial slices, centered at the L4-L5 level of the abdomen, were acquired in eight T2DM subjects with water suppression (WS) and without (NWS). Histograms from WS images show improved separation of non-fatty tissue pixels from fatty tissue pixels, compared to NWS images. Paired t-tests of WS versus NWS showed a significantly lower volume of lipid in the WS images for VAT (145.3 cc less, p=0.006) and IMAT (305 cc less, p < ...), but not SAT (14.1 cc more, NS). WS measurements of TAT also resulted in lower fat volumes (436.1 cc less, p=0.002). There is strong correlation between WS and NWS quantification methods for SAT measurements (r=0.999), but poorer correlation for VAT studies (r=0.845). These results suggest that NWS pulse sequences may overestimate adipose tissue volumes and that WS pulse sequences are more desirable due to the higher contrast generated between fatty and non-fatty tissues.
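
    A minimal fuzzy c-means (FCM) sketch on one-dimensional voxel intensities is given below to illustrate the kind of clustering used by the segmentation software described above. It is not that software; the two-class setup (fat vs. non-fat) and the toy intensity values are assumptions.

```python
# Minimal fuzzy c-means clustering of voxel intensities into fat / non-fat.
import numpy as np

def fcm(x, n_clusters=2, m=2.0, n_iter=100, seed=0):
    """Return cluster centres and the membership matrix (n_samples, n_clusters)."""
    rng = np.random.default_rng(seed)
    u = rng.random((len(x), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(n_iter):
        um = u ** m
        centres = (um * x[:, None]).sum(axis=0) / um.sum(axis=0)
        d = np.abs(x[:, None] - centres[None, :]) + 1e-9
        inv = d ** (-2.0 / (m - 1.0))
        u = inv / inv.sum(axis=1, keepdims=True)
    return centres, u

# Toy T1-weighted intensities: non-fatty tissue around 300, adipose tissue around 900.
x = np.concatenate([np.random.normal(300, 40, 4000), np.random.normal(900, 50, 1000)])
centres, u = fcm(x)
fat_cluster = centres.argmax()
fat_voxels = (u.argmax(axis=1) == fat_cluster).sum()
print(f"centres: {centres.round(0)}, fat voxels: {fat_voxels}")
```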

  12. Optimum Design of Braced Steel Space Frames including Soil-Structure Interaction via Teaching-Learning-Based Optimization and Harmony Search Algorithms

    OpenAIRE

    Ayse T. Daloglu; Musa Artar; Korhan Ozgan; Ali İ. Karakas

    2018-01-01

    Optimum design of braced steel space frames including soil-structure interaction is studied by using harmony search (HS) and teaching-learning-based optimization (TLBO) algorithms. A three-parameter elastic foundation model is used to incorporate the soil-structure interaction effect. A 10-storey braced steel space frame example taken from literature is investigated according to four different bracing types for the cases with/without soil-structure interaction. X, V, Z, and eccentric V-shaped...

  13. Quantification of steviol glycosides in food products, Stevia leaves and formulations by planar chromatography, including proof of absence for steviol and isosteviol.

    Science.gov (United States)

    Wald, Julian P; Morlock, Gertrud E

    2017-07-14

    Steviol glycosides may degrade in food products under certain processing and storage conditions. Hence, a method was developed that separated in the same chromatographic run seven important steviol glycosides, and additionally as a sum parameter, their reported breakdown products steviol and isosteviol. Through derivatizations with the 2-naphthol and the primuline reagent, the detection was selective and inexpensive. In case needed, the baseline separation of steviol and isosteviol was also demonstrated after a plate cut and subsequent short development (two-step method). The HPTLC method was robust with regard to varying sample matrix loads, as the stationary phase was used only once. A high sample throughput was achieved, i.e. 23 separations were performed in parallel on one plate. The total analysis time took 1h (30min application, 15min separation and 15min derivatization/densitometry) leading to a calculated analysis time of 2.6min per sample. The solvent consumption was 8mL in total (0.4mL per analysis). HPTLC-ESI-MS was employed for confirmation of the results obtained. Mass spectra were recorded only from the zones of interest, and not from matrix or background, leading to decisive advantages, such as less need for MS cleaning. The optimized HPTLC method was shown to effectively support quality control, as marketed samples may be falsified with cheaper synthetic sweeteners, which was also demonstrated in this study. The accuracy of the densitometric quantification in HPTLC was considered as high, as standards and samples were separated on fresh adsorbent and detected simultaneously under identical conditions, which minimized the influence of errors. Finally, the Aliivibrio fischeri bioassay was employed to obtain information on bioactive compounds in Stevia leaf extracts. Copyright © 2017 Elsevier B.V. All rights reserved.

  14. Incremental Yield of Including Determine-TB LAM Assay in Diagnostic Algorithms for Hospitalized and Ambulatory HIV-Positive Patients in Kenya.

    Science.gov (United States)

    Huerga, Helena; Ferlazzo, Gabriella; Bevilacqua, Paolo; Kirubi, Beatrice; Ardizzoni, Elisa; Wanjala, Stephen; Sitienei, Joseph; Bonnet, Maryline

    2017-01-01

    Determine-TB LAM assay is a urine point-of-care test useful for TB diagnosis in HIV-positive patients. We assessed the incremental diagnostic yield of adding LAM to algorithms based on clinical signs, sputum smear-microscopy, chest X-ray and Xpert MTB/RIF in HIV-positive patients with symptoms of pulmonary TB (PTB). Prospective observational cohort of ambulatory (either severely ill or CD4<200cells/μl or with Body Mass Index<17Kg/m2) and hospitalized symptomatic HIV-positive adults in Kenya. Incremental diagnostic yield of adding LAM was the difference in the proportion of confirmed TB patients (positive Xpert or MTB culture) diagnosed by the algorithm with LAM compared to the algorithm without LAM. The multivariable mortality model was adjusted for age, sex, clinical severity, BMI, CD4, ART initiation, LAM result and TB confirmation. Among 474 patients included, 44.1% were severely ill, 69.6% had CD4<200cells/μl, 59.9% had initiated ART, 23.2% could not produce sputum. LAM, smear-microscopy, Xpert and culture in sputum were positive in 39.0% (185/474), 21.6% (76/352), 29.1% (102/350) and 39.7% (92/232) of the patients tested, respectively. Of 156 patients with confirmed TB, 65.4% were LAM positive. Of those classified as non-TB, 84.0% were LAM negative. Adding LAM increased the diagnostic yield of the algorithms by 36.6%, from 47.4% (95%CI:39.4-55.6) to 84.0% (95%CI:77.3-89.4%), when using clinical signs and X-ray; by 19.9%, from 62.2% (95%CI:54.1-69.8) to 82.1% (95%CI:75.1-87.7), when using clinical signs and microscopy; and by 13.4%, from 74.4% (95%CI:66.8-81.0) to 87.8% (95%CI:81.6-92.5), when using clinical signs and Xpert. LAM positive patients had an increased risk of 2-months mortality (aOR:2.7; 95%CI:1.5-4.9). LAM should be included in TB diagnostic algorithms in parallel to microscopy or Xpert request for HIV-positive patients either ambulatory (severely ill or CD4<200cells/μl) or hospitalized. LAM allows same day treatment initiation in patients at
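
    The incremental yield reported above is simply a difference of proportions among confirmed TB cases. A small sketch of that bookkeeping; the integer counts are back-calculated from the reported clinical-signs plus X-ray percentages (74/156 ≈ 47.4%, 131/156 ≈ 84.0%) and are illustrative only.

      def incremental_yield(confirmed_total, diagnosed_without_lam, diagnosed_with_lam):
          """Difference in the proportion of confirmed TB patients identified by an
          algorithm with LAM versus the same algorithm without LAM."""
          without = diagnosed_without_lam / confirmed_total
          with_lam = diagnosed_with_lam / confirmed_total
          return with_lam - without, without, with_lam

      # Illustrative counts shaped like the clinical signs + X-ray comparison
      gain, base, extended = incremental_yield(156, 74, 131)
      print(f"without LAM {base:.1%}, with LAM {extended:.1%}, increment {gain:.1%}")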

  15. An optimized method for fatty acid analysis, including quantification of trans fatty acids, in human adipose tissue by gas-liquid chromatography

    DEFF Research Database (Denmark)

    Bysted, Anette; Cold, S; Hølmer, Gunhild Kofoed

    1999-01-01

    Considering the need for a quick direct method for measurement of the fatty acid composition including trans isomers of human adipose tissue, we have developed a procedure using gas-liquid chromatography (GLC) alone, which is thus suitable for validation of fatty acid status in epidemiological studies...... for 25 min, and finally raised at 25 degrees C/min to 225 degrees C. The trans and cis isomers of 18:1 were well separated from each other, as shown by silver-ion thin-layer chromatography. Verification by standards showed that the trans 18:1 isomers with a double bond in position 12 or lower were...

  16. A Study on the quantification of hydration and the strength development mechanism of cementitious materials including amorphous phases by using XRD/Rietveld method

    International Nuclear Information System (INIS)

    Yamada, Kazuo; Hoshino, Seiichi; Hirao, Hiroshi; Yamashita, Hiroki

    2008-01-01

    The X-ray diffraction (XRD)/Rietveld method was applied to measure the phase composition of cement. Quantitative analysis of the progress of hydration was accomplished with a maximum error of about 2-3%, even in the presence of amorphous materials such as blast furnace slag, fly ash, silica fume and C-S-H. The influence of limestone fine powder addition on compressive strength was studied through hydration analysis by the Rietveld method. Two stages were observed in the strength development mechanism of cement: the promotion of C3S hydration in the early stage and the filling of cavities by carbonate hydrate over the longer term. The use of various admixture materials is beneficial for building a resource-recycling society and for improving the durability of concrete. (author)
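
    The abstract does not state how the amorphous content was separated out; one standard route is the internal-standard relation, in which a known mass fraction of a crystalline spike is added and the Rietveld result for the spike is compared with the weighed value. A sketch of that relation, offered as background rather than as the procedure of this study:

      def amorphous_fraction(ws, rs):
          """Internal-standard estimate of amorphous content.

          ws : weighed mass fraction of the crystalline spike in the mixture (0-1)
          rs : mass fraction of the spike reported by Rietveld refinement (0-1),
               which normalises all crystalline phases to 1
          Returns the amorphous mass fraction of the original, spike-free sample.
          """
          if not (0 < ws < 1) or rs <= 0:
              raise ValueError("require 0 < ws < 1 and rs > 0")
          crystalline_total = ws / rs      # true crystalline fraction of the spiked mixture
          return (1.0 - crystalline_total) / (1.0 - ws)

      # Example: a 20 wt% corundum spike refined to 25 wt% of the crystalline phases
      print(f"amorphous content ≈ {amorphous_fraction(0.20, 0.25):.1%}")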

  17. The effectiveness of extracorporeal shockwave therapy in common lower limb conditions: a systematic review including quantification of patient-rated pain reduction.

    Science.gov (United States)

    Korakakis, Vasileios; Whiteley, Rodney; Tzavara, Alexander; Malliaropoulos, Nikolaos

    2018-03-01

    To evaluate extracorporeal shockwave therapy (ESWT) in treating Achilles tendinopathy (AT), greater trochanteric pain syndrome (GTPS), medial tibial stress syndrome (MTSS), patellar tendinopathy (PT) and proximal hamstring tendinopathy (PHT). Systematic review. Randomised and non-randomised studies assessing ESWT in patients with AT, GTPS, MTSS, PT and PHT were included. Risk of bias and quality of studies were evaluated. Moderate-level evidence suggests (1) no difference between focused ESWT and placebo ESWT at short and mid-term in PT and (2) radial ESWT is superior to conservative treatment at short, mid and long term in PHT. Low-level evidence suggests that ESWT (1) is comparable to eccentric training, but superior to wait-and-see policy at 4 months in mid-portion AT; (2) is superior to eccentric training at 4 months in insertional AT; (3) less effective than corticosteroid injection at short term, but ESWT produced superior results at mid and long term in GTPS; (4) produced comparable results to control treatment at long term in GTPS; and (5) is superior to control conservative treatment at long term in PT. Regarding the rest of the results, there was only very low or no level of evidence. 13 studies showed high risk of bias largely due to methodology, blinding and reporting. Low level of evidence suggests that ESWT may be effective for some lower limb conditions in all phases of the rehabilitation. © Article author(s) (or their employer(s) unless otherwise stated in the text of the article) 2018. All rights reserved. No commercial use is permitted unless otherwise expressly granted.

  18. Automated quantification of cerebral edema following hemispheric infarction: Application of a machine-learning algorithm to evaluate CSF shifts on serial head CTs

    Directory of Open Access Journals (Sweden)

    Yasheng Chen

    2016-01-01

    . This algorithm will allow us to efficiently and accurately measure the evolution of cerebral edema in future studies including large multi-site patient populations.

  19. Cost-effectiveness of collaborative care including PST and an antidepressant treatment algorithm for the treatment of major depressive disorder in primary care; a randomised clinical trial

    Directory of Open Access Journals (Sweden)

    Beekman Aartjan TF

    2007-03-01

    Full Text Available Abstract Background Depressive disorder is currently one of the most burdensome disorders worldwide. Evidence-based treatments for depressive disorder are already available, but these are used insufficiently, and with less positive results than possible. Earlier research in the USA has shown good results in the treatment of depressive disorder based on a collaborative care approach with Problem Solving Treatment and an antidepressant treatment algorithm, and research in the UK has also shown good results with Problem Solving Treatment. These treatment strategies may also work well in the Netherlands, even though health care systems differ between countries. Methods/design This study is a two-armed randomised clinical trial, with randomisation at the patient level. The aim of the trial is to evaluate the treatment of depressive disorder in primary care in the Netherlands by means of an adapted collaborative care framework, including contracting and adherence-improving strategies, combined with Problem Solving Treatment and antidepressant medication according to a treatment algorithm. Forty general practices will be randomised to either the intervention group or the control group. Included will be patients who are diagnosed with moderate to severe depression, based on DSM-IV criteria, and stratified according to comorbid chronic physical illness. Patients in the intervention group will receive treatment based on the collaborative care approach, and patients in the control group will receive care as usual. Baseline measurements and follow-up measures (3, 6, 9 and 12 months) are assessed using questionnaires and an interview. The primary outcome measure is severity of depressive symptoms, according to the PHQ9. Secondary outcome measures are remission as measured with the PHQ9 and the IDS-SR, and cost-effectiveness measured with the TiC-P, the EQ-5D and the SF-36. Discussion In this study, an American model to enhance care for patients with a

  20. Optimum Design of Braced Steel Space Frames including Soil-Structure Interaction via Teaching-Learning-Based Optimization and Harmony Search Algorithms

    Directory of Open Access Journals (Sweden)

    Ayse T. Daloglu

    2018-01-01

    Full Text Available Optimum design of braced steel space frames including soil-structure interaction is studied by using harmony search (HS) and teaching-learning-based optimization (TLBO) algorithms. A three-parameter elastic foundation model is used to incorporate the soil-structure interaction effect. A 10-storey braced steel space frame example taken from literature is investigated according to four different bracing types for the cases with/without soil-structure interaction. X, V, Z, and eccentric V-shaped bracing types are considered in the study. Optimum solutions of examples are carried out by a computer program coded in MATLAB interacting with SAP2000-OAPI for two-way data exchange. The stress constraints according to AISC-ASD (American Institute of Steel Construction-Allowable Stress Design), maximum lateral displacement constraints, interstorey drift constraints, and beam-to-column connection constraints are taken into consideration in the optimum design process. The parameters of the foundation model are calculated depending on soil surface displacements by using an iterative approach. The results obtained in the study show that bracing types and soil-structure interaction play very important roles in the optimum design of steel space frames. Finally, the techniques used in the optimum design seem to be quite suitable for practical applications.
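
    For readers unfamiliar with HS, a bare-bones harmony search loop on a generic continuous objective is sketched below. The memory size, HMCR, PAR and bandwidth values are generic defaults, and the toy objective merely stands in for frame weight; the actual design problem couples the search to SAP2000-OAPI analyses, discrete steel sections and the listed constraints, none of which is reproduced here.

      import numpy as np

      def harmony_search(objective, bounds, hms=20, hmcr=0.9, par=0.3, bw=0.05,
                         iterations=2000, seed=0):
          """Minimise `objective` over the box `bounds` with a basic harmony search."""
          rng = np.random.default_rng(seed)
          lo, hi = np.array(bounds, dtype=float).T
          memory = rng.uniform(lo, hi, size=(hms, lo.size))      # harmony memory
          scores = np.apply_along_axis(objective, 1, memory)
          for _ in range(iterations):
              new = np.empty(lo.size)
              for j in range(lo.size):
                  if rng.random() < hmcr:                        # memory consideration
                      new[j] = memory[rng.integers(hms), j]
                      if rng.random() < par:                     # pitch adjustment
                          new[j] += bw * (hi[j] - lo[j]) * rng.uniform(-1, 1)
                  else:                                          # random selection
                      new[j] = rng.uniform(lo[j], hi[j])
              new = np.clip(new, lo, hi)
              f = objective(new)
              worst = int(np.argmax(scores))
              if f < scores[worst]:                              # replace the worst harmony
                  memory[worst], scores[worst] = new, f
          best = int(np.argmin(scores))
          return memory[best], scores[best]

      # Toy usage: a shifted sphere function standing in for frame weight
      x, fx = harmony_search(lambda v: float(np.sum((v - 3.0) ** 2)), [(-10, 10)] * 5)
      print(x.round(3), round(float(fx), 6))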

  1. Multimaterial Decomposition Algorithm for the Quantification of Liver Fat Content by Using Fast-Kilovolt-Peak Switching Dual-Energy CT: Experimental Validation.

    Science.gov (United States)

    Hyodo, Tomoko; Hori, Masatoshi; Lamb, Peter; Sasaki, Kosuke; Wakayama, Tetsuya; Chiba, Yasutaka; Mochizuki, Teruhito; Murakami, Takamichi

    2017-02-01

    Purpose To assess the ability of fast-kilovolt-peak switching dual-energy computed tomography (CT) by using the multimaterial decomposition (MMD) algorithm to quantify liver fat. Materials and Methods Fifteen syringes that contained various proportions of swine liver obtained from an abattoir, lard in food products, and iron (saccharated ferric oxide) were prepared. Approval of this study by the animal care and use committee was not required. Solid cylindrical phantoms that consisted of a polyurethane epoxy resin 20 and 30 cm in diameter that held the syringes were scanned with dual- and single-energy 64-section multidetector CT. CT attenuation on single-energy CT images (in Hounsfield units) and MMD-derived fat volume fraction (FVF; dual-energy CT FVF) were obtained for each syringe, as were magnetic resonance (MR) spectroscopy measurements by using a 1.5-T imager (fat fraction [FF] of MR spectroscopy). Reference values of FVF (FVF ref ) were determined by using the Soxhlet method. Iron concentrations were determined by inductively coupled plasma optical emission spectroscopy and divided into three ranges (0 mg per 100 g, 48.1-55.9 mg per 100 g, and 92.6-103.0 mg per 100 g). Statistical analysis included Spearman rank correlation and analysis of covariance. Results Both dual-energy CT FVF (ρ = 0.97; P iron. Phantom size had a significant effect on dual-energy CT FVF after controlling for FVF ref (P iron concentrations, the linear coefficients of dual-energy CT FVF decreased and those of MR spectroscopy FF increased (P iron, dual-energy CT FVF led to underestimation of FVF ref to a lesser degree than FF of MR spectroscopy led to overestimation of FVF ref. © RSNA, 2016 Online supplemental material is available for this article.

  2. A new and improved algorithm for the quantification of chromatin condensation from microscopic data shows decreased chromatin condensation in regenerating axolotl limb cells.

    Directory of Open Access Journals (Sweden)

    Julian Sosnik

    Full Text Available The nuclear landscape plays an important role in the regulation of tissue- and position-specific genes in embryonic and developing cells. Changes in this landscape can be dynamic, and are associated with the differentiation of cells during embryogenesis, and the de-differentiation of cells during induced pluripotent stem cell (iPSC) formation and in many cancers. However, tools to quantitatively characterize these changes are limited, especially in the in vivo context, where numerous tissue types are present and cells are arranged in multiple layers. Previous tools have been optimized for the monolayer nature of cultured cells. Therefore, we present a new algorithm to quantify the condensation of chromatin in two in vivo systems. We first developed this algorithm to quantify changes in chromatin compaction and validated it in differentiating spermatids in zebrafish testes. Our algorithm successfully detected the typical increase in chromatin compaction as these cells differentiate. We then employed the algorithm to quantify the changes that occur in amphibian limb cells as they participate in a regenerative response. We observed that the chromatin in the limb cells de-compacts as they contribute to the regenerating organ. We present this new tool as open-source software that can be readily accessed and optimized to quantify chromatin compaction in complex multi-layered samples.
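
    The published algorithm is not spelled out in the abstract, so the sketch below only illustrates the general idea of scoring chromatin condensation per nucleus from stain intensities, using the share of total signal carried by the brightest pixels as a crude proxy; the metric, the 20% cutoff and the toy data are assumptions for illustration, not the authors' method.

      import numpy as np

      def condensation_score(nucleus_pixels, top_fraction=0.2):
          """Share of the total DNA-stain signal carried by the brightest pixels.
          Uniformly stained (decondensed) nuclei give values near `top_fraction`;
          nuclei with dense heterochromatin foci give values well above it."""
          px = np.sort(np.asarray(nucleus_pixels, dtype=float))[::-1]
          n_top = max(1, int(round(top_fraction * px.size)))
          return float(px[:n_top].sum() / px.sum())

      # Toy comparison: a uniformly stained vs. a focally condensed nucleus
      rng = np.random.default_rng(1)
      uniform = np.clip(rng.normal(100, 5, 2000), 1, None)
      condensed = np.clip(np.concatenate([rng.normal(60, 5, 1600),
                                          rng.normal(400, 30, 400)]), 1, None)
      print(round(condensation_score(uniform), 2), round(condensation_score(condensed), 2))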

  3. A flexibility-based method via the iterated improved reduction system and the cuckoo optimization algorithm for damage quantification with limited sensors

    International Nuclear Information System (INIS)

    Zare Hosseinzadeh, Ali; Ghodrati Amiri, Gholamreza; Bagheri, Abdollah; Koo, Ki-Young

    2014-01-01

    In this paper, a novel and effective damage diagnosis algorithm is proposed to localize and quantify structural damage using incomplete modal data, considering the existence of some limitations in the number of attached sensors on structures. The damage detection problem is formulated as an optimization problem by computing static displacements in the reduced model of a structure subjected to a unique static load. The static responses are computed through the flexibility matrix of the damaged structure obtained based on the incomplete modal data of the structure. In the algorithm, an iterated improved reduction system method is applied to prepare an accurate reduced model of a structure. The optimization problem is solved via a new evolutionary optimization algorithm called the cuckoo optimization algorithm. The efficiency and robustness of the presented method are demonstrated through three numerical examples. Moreover, the efficiency of the method is verified by an experimental study of a five-story shear building structure on a shaking table considering only two sensors. The obtained damage identification results for the numerical and experimental studies show the suitable and stable performance of the proposed damage identification method for structures with limited sensors. (paper)
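
    The flexibility matrix used in such formulations is commonly approximated from the first few identified modes as F ≈ Σ φiφiᵀ/ωi², after which static responses to a chosen load follow directly. A sketch of that step (mass-normalised mode shapes assumed), shown as background to the optimization described above rather than as the paper's code:

      import numpy as np

      def modal_flexibility(mode_shapes, omegas):
          """Flexibility approximated from incomplete modal data.

          mode_shapes : (n_dof, n_modes) mass-normalised mode shapes
          omegas      : (n_modes,) natural circular frequencies [rad/s]
          F ~ sum_i phi_i phi_i^T / omega_i^2; the lowest modes dominate, which is
          why a truncated (incomplete) mode set already gives a usable estimate.
          """
          phi = np.asarray(mode_shapes, dtype=float)
          w2 = np.asarray(omegas, dtype=float) ** 2
          return (phi / w2) @ phi.T

      def static_response(mode_shapes, omegas, load):
          """Static displacement of the (reduced) structure under a chosen load."""
          return modal_flexibility(mode_shapes, omegas) @ np.asarray(load, dtype=float)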

  4. An effective medium inversion algorithm for gas hydrate quantification and its application to laboratory and borehole measurements of gas hydrate-bearing sediments

    Science.gov (United States)

    Chand, S.; Minshull, T.A.; Priest, J.A.; Best, A.I.; Clayton, C.R.I.; Waite, W.F.

    2006-01-01

    The presence of gas hydrate in marine sediments alters their physical properties. In some circumstances, gas hydrate may cement sediment grains together and dramatically increase the seismic P- and S-wave velocities of the composite medium. Hydrate may also form a load-bearing structure within the sediment microstructure, but with different seismic wave attenuation characteristics, changing the attenuation behaviour of the composite. Here we introduce an inversion algorithm based on effective medium modelling to infer hydrate saturations from velocity and attenuation measurements on hydrate-bearing sediments. The velocity increase is modelled as extra binding developed by gas hydrate that strengthens the sediment microstructure. The attenuation increase is modelled through a difference in fluid flow properties caused by different permeabilities in the sediment and hydrate microstructures. We relate velocity and attenuation increases in hydrate-bearing sediments to their hydrate content, using an effective medium inversion algorithm based on the self-consistent approximation (SCA), differential effective medium (DEM) theory, and Biot and squirt flow mechanisms of fluid flow. The inversion algorithm is able to convert observations in compressional and shear wave velocities and attenuations to hydrate saturation in the sediment pore space. We applied our algorithm to a data set from the Mallik 2L–38 well, Mackenzie delta, Canada, and to data from laboratory measurements on gas-rich and water-saturated sand samples. Predictions using our algorithm match the borehole data and water-saturated laboratory data if the proportion of hydrate contributing to the load-bearing structure increases with hydrate saturation. The predictions match the gas-rich laboratory data if that proportion decreases with hydrate saturation. We attribute this difference to differences in hydrate formation mechanisms between the two environments.

  5. Variability and accuracy of coronary CT angiography including use of iterative reconstruction algorithms for plaque burden assessment as compared with intravascular ultrasound - an ex vivo study

    Energy Technology Data Exchange (ETDEWEB)

    Stolzmann, Paul [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Boston, MA (United States); University Hospital Zurich, Institute of Diagnostic and Interventional Radiology, Zurich (Switzerland); Schlett, Christopher L.; Maurovich-Horvat, Pal; Scheffel, Hans; Engel, Leif-Christopher; Karolyi, Mihaly; Hoffmann, Udo [Massachusetts General Hospital and Harvard Medical School, Cardiac MR PET CT Program, Boston, MA (United States); Maehara, Akiko; Ma, Shixin; Mintz, Gary S. [Columbia University Medical Center, Cardiovascular Research Foundation, New York, NY (United States)

    2012-10-15

    To systematically assess inter-technique and inter-/intra-reader variability of coronary CT angiography (CTA) to measure plaque burden compared with intravascular ultrasound (IVUS) and to determine whether iterative reconstruction algorithms affect variability. IVUS and CTA data were acquired from nine human coronary arteries ex vivo. CT images were reconstructed using filtered back projection (FBPR) and iterative reconstruction algorithms: adaptive-statistical (ASIR) and model-based (MBIR). After co-registration of 284 cross-sections between IVUS and CTA, two readers manually delineated the cross-sectional plaque area in all images presented in random order. Average plaque burden by IVUS was 63.7 {+-} 10.7% and correlated significantly with all CTA measurements (r = 0.45-0.52; P < 0.001), while CTA overestimated the burden by 10 {+-} 10%. There were no significant differences among FBPR, ASIR and MBIR (P > 0.05). Increased overestimation was associated with smaller plaques, eccentricity and calcification (P < 0.001). Reproducibility of plaque burden by CTA and IVUS datasets was excellent with a low mean intra-/inter-reader variability of <1/<4% for CTA and <0.5/<1% for IVUS respectively (P < 0.05) with no significant difference between CT reconstruction algorithms (P > 0.05). In ex vivo coronary arteries, plaque burden by coronary CTA had extremely low inter-/intra-reader variability and correlated significantly with IVUS measurements. Accuracy as well as reader reliability were independent of CT image reconstruction algorithm. (orig.)

  6. Creation of operation algorithms for combined operation of anti-lock braking system (ABS) and electric machine included in the combined power plant

    Science.gov (United States)

    Bakhmutov, S. V.; Ivanov, V. G.; Karpukhin, K. E.; Umnitsyn, A. A.

    2018-02-01

    The paper considers the Anti-lock Braking System (ABS) operation algorithm, which enables the implementation of hybrid braking, i.e. the braking process combining friction brake mechanisms and an e-machine (electric machine) operating in the energy recovery mode. The provided materials focus only on the rectilinear motion of the vehicle. The ABS task consists in maintaining the target wheel slip ratio, which depends on the tyre-road adhesion coefficient. The tyre-road adhesion coefficient was defined based on the vehicle deceleration. In the course of computational studies, the following operation algorithm of hybrid braking was determined. At adhesion coefficient ≤0.1, driving axle braking occurs only due to the e-machine operating in the energy recovery mode. In other cases, depending on adhesion coefficient, the e-machine provides the brake torque, which changes from 35 to 100% of the maximum available brake torque. Virtual tests showed that values of the wheel slip ratio are close to the required ones. Thus, this algorithm makes it possible to implement hybrid braking by means of the two sources creating the brake torque.
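
    A literal transcription of the torque-split rule described above, written as a single function; the linear ramp between the 35% and 100% end points and the 0.8 upper adhesion knee are illustrative assumptions, since the abstract gives only the range.

      def emachine_brake_share(mu):
          """Requested e-machine brake torque as a fraction of its maximum available
          torque, following the rule described above: below mu = 0.1 the driving axle
          is braked by energy recovery alone; above that the share ramps from 35% to
          100% with adhesion (the linear ramp and the mu = 0.8 knee are assumed here)."""
          if mu <= 0.1:
              return 1.0
          return min(1.0, 0.35 + 0.65 * (mu - 0.1) / 0.7)

      for mu in (0.05, 0.2, 0.5, 0.8):
          print(mu, round(emachine_brake_share(mu), 2))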

  7. Uncertainty quantification theory, implementation, and applications

    CERN Document Server

    Smith, Ralph C

    2014-01-01

    The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers ca...

  8. Solving the competitive facility location problem considering the reactions of competitor with a hybrid algorithm including Tabu Search and exact method

    Science.gov (United States)

    Bagherinejad, Jafar; Niknam, Azar

    2018-03-01

    In this paper, a leader-follower competitive facility location problem considering the reactions of the competitors is studied. A model for locating new facilities and determining levels of quality for the facilities of the leader firm is proposed. Moreover, changes in the location and quality of existing facilities in a competitive market where a competitor offers the same goods or services are taken into account. The competitor could react by opening new facilities, closing existing ones, and adjusting the quality levels of its existing facilities. The market share, captured by each facility, depends on its distance to customer and its quality that is calculated based on the probabilistic Huff's model. Each firm aims to maximize its profit subject to constraints on quality levels and budget of setting up new facilities. This problem is formulated as a bi-level mixed integer non-linear model. The model is solved using a combination of Tabu Search with an exact method. The performance of the proposed algorithm is compared with an upper bound that is achieved by applying Karush-Kuhn-Tucker conditions. Computational results show that our algorithm finds near the upper bound solutions in a reasonable time.
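
    The Huff-type capture probabilities that drive the market-share terms in such models have a simple closed form: a facility's utility for a customer is its quality divided by a power of the distance, normalised over all facilities. A sketch, with the distance-decay exponent and the toy data as assumptions:

      import numpy as np

      def huff_shares(distances, qualities, decay=2.0):
          """Huff-model capture probabilities.

          distances : (n_customers, n_facilities) customer-facility distances
          qualities : (n_facilities,) attractiveness / quality levels
          Returns an (n_customers, n_facilities) matrix whose rows sum to 1: the
          probability that each customer patronises each facility.
          """
          util = np.asarray(qualities, dtype=float) / np.asarray(distances, dtype=float) ** decay
          return util / util.sum(axis=1, keepdims=True)

      # Two customers; a leader facility (quality 8) versus a competitor facility (quality 5)
      d = np.array([[2.0, 4.0],
                    [5.0, 1.5]])
      print(huff_shares(d, [8.0, 5.0]).round(3))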

  9. Quantification of differences between nailfold capillaroscopy images with a scleroderma pattern and normal pattern using measures of geometric and algorithmic complexity.

    Science.gov (United States)

    Urwin, Samuel George; Griffiths, Bridget; Allen, John

    2017-02-01

    This study aimed to quantify and investigate differences in the geometric and algorithmic complexity of the microvasculature in nailfold capillaroscopy (NFC) images displaying a scleroderma pattern and those displaying a 'normal' pattern. 11 NFC images were qualitatively classified by a capillary specialist as indicative of 'clear microangiopathy' (CM), i.e. a scleroderma pattern, and 11 as 'not clear microangiopathy' (NCM), i.e. a 'normal' pattern. Pre-processing was performed, and fractal dimension (FD) and Kolmogorov complexity (KC) were calculated following image binarisation. FD and KC were compared between groups, and a k-means cluster analysis (n  =  2) on all images was performed, without prior knowledge of the group assigned to them (i.e. CM or NCM), using FD and KC as inputs. CM images had significantly reduced FD and KC compared to NCM images, and the cluster analysis displayed promising results that the quantitative classification of images into CM and NCM groups is possible using the mathematical measures of FD and KC. The analysis techniques used show promise for quantitative microvascular investigation in patients with systemic sclerosis.
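
    Of the two measures, the fractal dimension is the easier to sketch: a standard box-counting estimate on a binarised capillary image. The box sizes and the log-log least-squares fit below are the usual generic choices, not necessarily the exact implementation used in the study.

      import numpy as np

      def box_counting_dimension(binary_image, box_sizes=(2, 4, 8, 16, 32, 64)):
          """Box-counting fractal dimension of a 2D binary (0/1) image."""
          img = np.asarray(binary_image, dtype=bool)
          counts = []
          for s in box_sizes:
              h = (img.shape[0] // s) * s            # crop to a multiple of the box size
              w = (img.shape[1] // s) * s
              blocks = img[:h, :w].reshape(h // s, s, w // s, s)
              counts.append(max(int(blocks.any(axis=(1, 3)).sum()), 1))
          # the slope of log N(s) versus log(1/s) estimates the dimension
          slope, _ = np.polyfit(np.log(1.0 / np.array(box_sizes)), np.log(counts), 1)
          return slope

      # Sanity check on a filled square: expected dimension close to 2
      print(round(box_counting_dimension(np.ones((256, 256))), 2))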

  10. Superposition Quantification

    Science.gov (United States)

    Chang, Li-Na; Luo, Shun-Long; Sun, Yuan

    2017-11-01

    The principle of superposition is universal and lies at the heart of quantum theory. Although ever since the inception of quantum mechanics a century ago, superposition has occupied a central and pivotal place, rigorous and systematic studies of the quantification issue have attracted significant interests only in recent years, and many related problems remain to be investigated. In this work we introduce a figure of merit which quantifies superposition from an intuitive and direct perspective, investigate its fundamental properties, connect it to some coherence measures, illustrate it through several examples, and apply it to analyze wave-particle duality. Supported by Science Challenge Project under Grant No. TZ2016002, Laboratory of Computational Physics, Institute of Applied Physics and Computational Mathematics, Beijing, Key Laboratory of Random Complex Structures and Data Science, Chinese Academy of Sciences, Grant under No. 2008DP173182

  11. Standardless quantification by parameter optimization in electron probe microanalysis

    International Nuclear Information System (INIS)

    Limandri, Silvina P.; Bonetto, Rita D.; Josa, Víctor Galván; Carreras, Alejo C.; Trincavelli, Jorge C.

    2012-01-01

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested in a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in the 74% of the cases studied. In addition, the performance of the method proposed is compared with the first principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for the 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: ► A method for standardless quantification in EPMA is presented. ► It gives better results than the commercial software GENESIS Spectrum. ► It gives better results than the software DTSA. ► It allows the determination of the conductive coating thickness. ► It gives an estimation for the concentration uncertainties.
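
    The core of such a method is a least-squares fit of an analytical spectrum model to the measured one, with the quantities of interest among the free parameters. The sketch below fits a toy model of Gaussian characteristic lines on a linear background with SciPy; the model is a drastic simplification of POEMA's analytical prediction and is only meant to show the optimization structure, with the line energies and noise model chosen for illustration.

      import numpy as np
      from scipy.optimize import least_squares

      def model(params, energy, line_energies):
          """Toy spectrum: linear background plus one Gaussian per characteristic line.
          params = [b0, b1, width, area_1, ..., area_n]; the areas play the role of
          the concentration-dependent quantities being optimised."""
          b0, b1, width = params[:3]
          spec = b0 + b1 * energy
          for area, e0 in zip(params[3:], line_energies):
              spec = spec + area * np.exp(-0.5 * ((energy - e0) / width) ** 2)
          return spec

      def fit_spectrum(energy, measured, line_energies):
          p0 = np.concatenate([[measured.min(), 0.0, 0.05],
                               np.full(len(line_energies), measured.max())])
          res = least_squares(lambda p: model(p, energy, line_energies) - measured, p0)
          return res.x

      # Synthetic example with two characteristic lines
      rng = np.random.default_rng(0)
      e = np.linspace(1.0, 10.0, 500)
      true = model(np.array([50.0, -2.0, 0.08, 800.0, 300.0]), e, [3.69, 6.40])
      noisy = true + rng.normal(0, np.sqrt(true))
      print(fit_spectrum(e, noisy, [3.69, 6.40])[3:].round(1))   # recovered line areas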

  12. Algorithms in ambient intelligence

    NARCIS (Netherlands)

    Aarts, E.H.L.; Korst, J.H.M.; Verhaegh, W.F.J.; Weber, W.; Rabaey, J.M.; Aarts, E.

    2005-01-01

    We briefly review the concept of ambient intelligence and discuss its relation with the domain of intelligent algorithms. By means of four examples of ambient intelligent systems, we argue that new computing methods and quantification measures are needed to bridge the gap between the class of

  13. freeQuant: A Mass Spectrometry Label-Free Quantification Software Tool for Complex Proteome Analysis.

    Science.gov (United States)

    Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong

    2015-01-01

    The study of complex proteomes places higher demands on mass spectrometry-based quantification methods. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis which makes full use of tandem mass spectrometry (MS/MS) spectral count, protein sequence length, shared peptides, and ion intensity. It adopts spectral count for quantitative analysis and builds a new method for shared peptides to accurately evaluate the abundance of isoforms. For proteins with low abundance, MS/MS total ion count coupled with spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports the large-scale functional annotations for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant can improve the accuracy of quantification with a better dynamic range.

  14. Nature-inspired optimization algorithms

    CERN Document Server

    Yang, Xin-She

    2014-01-01

    Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning

  15. Application of Fuzzy Comprehensive Evaluation Method in Trust Quantification

    Directory of Open Access Journals (Sweden)

    Shunan Ma

    2011-10-01

    Full Text Available Trust can play an important role for the sharing of resources and information in open network environments. Trust quantification is thus an important issue in dynamic trust management. By considering the fuzziness and uncertainty of trust, in this paper, we propose a fuzzy comprehensive evaluation method to quantify trust along with a trust quantification algorithm. Simulation results show that the trust quantification algorithm that we propose can effectively quantify trust and the quantified value of an entity's trust is consistent with the behavior of the entity.
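
    The fuzzy comprehensive evaluation step itself is compact: a weight vector over evaluation factors is composed with a factor-by-grade membership matrix and the result is defuzzified, e.g. by grade expectation. The factors, weights, grades and membership values below are invented for illustration; the paper's algorithm additionally derives such inputs from interaction behaviour.

      import numpy as np

      def fuzzy_comprehensive_trust(weights, membership, grade_values):
          """Weighted-average fuzzy comprehensive evaluation.

          weights      : (n_factors,) importance of each evaluation factor (sums to 1)
          membership   : (n_factors, n_grades) degree to which each factor supports each grade
          grade_values : (n_grades,) numeric value attached to each trust grade
          """
          w = np.asarray(weights, dtype=float)
          b = w @ np.asarray(membership, dtype=float)     # composite membership over grades
          b = b / b.sum()                                 # normalise
          return float(b @ np.asarray(grade_values, dtype=float))

      # Three factors (e.g. service quality, honesty, timeliness) and four trust grades
      weights = [0.5, 0.3, 0.2]
      membership = [[0.1, 0.2, 0.4, 0.3],
                    [0.0, 0.1, 0.5, 0.4],
                    [0.2, 0.3, 0.3, 0.2]]
      grade_values = [0.2, 0.4, 0.7, 0.9]                 # distrust ... full trust
      print(round(fuzzy_comprehensive_trust(weights, membership, grade_values), 3))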

  16. Two-stream Convolutional Neural Network for Methane Emissions Quantification

    Science.gov (United States)

    Wang, J.; Ravikumar, A. P.; McGuire, M.; Bell, C.; Tchapmi, L. P.; Brandt, A. R.

    2017-12-01

    Methane, a key component of natural gas, has a 25x higher global warming potential than carbon dioxide on a 100-year basis. Accurately monitoring and mitigating methane emissions require cost-effective detection and quantification technologies. Optical gas imaging, one of the most commonly used leak detection technologies and one adopted by the Environmental Protection Agency, cannot estimate leak sizes. In this work, we harness advances in computer science to allow for rapid and automatic leak quantification. Particularly, we utilize two-stream deep Convolutional Networks (ConvNets) to estimate leak size by capturing complementary spatial information from still plume frames, and temporal information from plume motion between frames. We build large leak datasets for training and evaluating purposes by collecting about 20 videos (i.e. 397,400 frames) of leaks. The videos were recorded at six distances from the source, covering 10-60 ft. Leak sources included natural gas well-heads, separators, and tanks. All frames were labeled with a true leak size, which has eight levels ranging from 0 to 140 MCFH. Preliminary analysis shows that two-stream ConvNets provide a significant accuracy advantage over single-stream ConvNets. The spatial-stream ConvNet achieves an accuracy of 65.2% by extracting important features, including texture, plume area, and pattern. The temporal stream, fed by the results of optical flow analysis, reaches an accuracy of 58.3%. The integration of the two-stream ConvNets gives a combined accuracy of 77.6%. For future work, we will split the training and testing datasets in distinct ways in order to test the generalization of the algorithm for different leak sources. Several analytic metrics, including confusion matrix and visualization of key features, will be used to understand accuracy rates and occurrences of false positives. The quantification algorithm can help to find and fix super-emitters, and improve the cost-effectiveness of leak detection and repair

  17. Lung involvement quantification in chest radiographs

    International Nuclear Information System (INIS)

    Giacomini, Guilherme; Alvarez, Matheus; Oliveira, Marcela de; Miranda, Jose Ricardo A.; Pina, Diana R.; Pereira, Paulo C.M.; Ribeiro, Sergio M.

    2014-01-01

    Tuberculosis (TB), caused by Mycobacterium tuberculosis, is an infectious disease which remains a global health problem. Chest radiography is the method commonly employed to assess the evolution of TB. Methods for quantification of chest abnormalities are usually applied to computed tomography (CT) scans. This quantification is important for assessing TB evolution and treatment and for comparing different treatments. However, precise quantification is not feasible given the number of CT scans that would be required. The purpose of this work is to develop a methodology for quantification of lung damage caused by TB through chest radiographs. An algorithm was developed in the MATLAB computing environment for computational processing of the exams, which creates a 3D representation of the lungs with the compromised, dilated regions inside. The quantification of lung lesions was also made for the same patients through CT scans. The measurements from the two methods were compared, resulting in a strong correlation. Applying Bland-Altman statistical analysis, all samples were within the limits of agreement, with a confidence interval of 95%. The results showed an average variation of around 13% between the two quantification methods. The results suggest the effectiveness and applicability of the method developed, providing a better risk-benefit ratio for the patient and a better cost-benefit ratio for the institution. (author)
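
    The Bland-Altman check mentioned above reduces to the mean difference between paired measurements and its ±1.96 SD limits of agreement; a sketch with placeholder paired volumes (not the study data):

      import numpy as np

      def bland_altman(a, b):
          """Bland-Altman statistics for two paired measurement methods."""
          a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
          diff = a - b
          bias = diff.mean()
          sd = diff.std(ddof=1)
          return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

      # Placeholder paired lesion volumes: radiograph-based vs. CT-based (arbitrary units)
      xr = [120, 340, 95, 410, 220, 180]
      ct = [110, 360, 100, 430, 200, 170]
      bias, (lo, hi) = bland_altman(xr, ct)
      print(f"bias {bias:.1f}, limits of agreement [{lo:.1f}, {hi:.1f}]")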

  18. Algorithmically specialized parallel computers

    CERN Document Server

    Snyder, Lawrence; Gannon, Dennis B

    1985-01-01

    Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer.This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster

  19. Standardless quantification by parameter optimization in electron probe microanalysis

    Energy Technology Data Exchange (ETDEWEB)

    Limandri, Silvina P. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Bonetto, Rita D. [Centro de Investigacion y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco (CINDECA), CONICET, 47 Street 257, (1900) La Plata (Argentina); Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 1 and 47 Streets (1900) La Plata (Argentina); Josa, Victor Galvan; Carreras, Alejo C. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Trincavelli, Jorge C., E-mail: trincavelli@famaf.unc.edu.ar [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina)

    2012-11-15

    A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations, along with their uncertainties. The method was tested in a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in the 74% of the cases studied. In addition, the performance of the method proposed is compared with the first principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 for the 66% of the cases for POEMA, GENESIS and DTSA, respectively. - Highlights: ► A method for standardless quantification in EPMA is presented. ► It gives better results than the commercial software GENESIS Spectrum. ► It gives better results than the software DTSA. ► It allows the determination of the conductive coating thickness. ► It gives an estimation for the concentration uncertainties.

  20. Final Report: Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Marzouk, Youssef [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Conrad, Patrick [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Bigoni, Daniele [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States); Parno, Matthew [Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)

    2017-06-09

    QUEST (\\url{www.quest-scidac.org}) is a SciDAC Institute that is focused on uncertainty quantification (UQ) in large-scale scientific computations. Our goals are to (1) advance the state of the art in UQ mathematics, algorithms, and software; and (2) provide modeling, algorithmic, and general UQ expertise, together with software tools, to other SciDAC projects, thereby enabling and guiding a broad range of UQ activities in their respective contexts. QUEST is a collaboration among six institutions (Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University) with a history of joint UQ research. Our vision encompasses all aspects of UQ in leadership-class computing. This includes the well-founded setup of UQ problems; characterization of the input space given available data/information; local and global sensitivity analysis; adaptive dimensionality and order reduction; forward and inverse propagation of uncertainty; handling of application code failures, missing data, and hardware/software fault tolerance; and model inadequacy, comparison, validation, selection, and averaging. The nature of the UQ problem requires the seamless combination of data, models, and information across this landscape in a manner that provides a self-consistent quantification of requisite uncertainties in predictions from computational models. Accordingly, our UQ methods and tools span an interdisciplinary space across applied math, information theory, and statistics. The MIT QUEST effort centers on statistical inference and methods for surrogate or reduced-order modeling. MIT personnel have been responsible for the development of adaptive sampling methods, methods for approximating computationally intensive models, and software for both forward uncertainty propagation and statistical inverse problems. A key software product of the MIT QUEST effort is the MIT

  1. Sci-Fri PM: Radiation Therapy, Planning, Imaging, and Special Techniques - 11: Quantification of chest wall motion during deep inspiration breast hold treatments using cine EPID images and a physics based algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Alpuche Aviles, Jorge E.; VanBeek, Timothy [CancerCare Manitoba, Winnipeg (Canada); Sasaki, David; Rivest, Ryan; Akra, Mohamed [CancerCare Manitoba, Winnipeg (Canada); University of Manitoba, Winnipeg (Canada)

    2016-08-15

    Purpose: This work presents an algorithm used to quantify intra-fraction motion for patients treated using deep inspiration breath hold (DIBH). The algorithm quantifies the position of the chest wall in breast tangent fields using electronic portal images. Methods: The algorithm assumes that image profiles, taken along a direction perpendicular to the medial border of the field, follow a monotonically and smooth decreasing function. This assumption is invalid in the presence of lung and can be used to calculate chest wall position. The algorithm was validated by determining the position of the chest wall for varying field edge positions in portal images of a thoracic phantom. The algorithm was used to quantify intra-fraction motion in cine images for 7 patients treated with DIBH. Results: Phantom results show that changes in the distance between chest wall and field edge were accurate within 0.1 mm on average. For a fixed field edge, the algorithm calculates the position of the chest wall with a 0.2 mm standard deviation. Intra-fraction motion for DIBH patients was within 1 mm 91.4% of the time and within 1.5 mm 97.9% of the time. The maximum intra-fraction motion was 3.0 mm. Conclusions: A physics based algorithm was developed and can be used to quantify the position of chest wall irradiated in tangent portal images with an accuracy of 0.1 mm and precision of 0.6 mm. Intra-fraction motion for patients treated with DIBH at our clinic is less than 3 mm.

  2. Autonomous Star Tracker Algorithms

    DEFF Research Database (Denmark)

    Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren

    1998-01-01

    Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.

  3. In vivo MRS metabolite quantification using genetic optimization

    Science.gov (United States)

    Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; van Ormondt, D.; Graveron-Demilly, D.

    2011-11-01

    The in vivo quantification of metabolites' concentrations, revealed in magnetic resonance spectroscopy (MRS) spectra, constitutes the main subject under investigation in this work. Significant contributions based on artificial intelligence tools, such as neural networks (NNs), with good results have been presented lately but have shown several drawbacks, regarding their quantification accuracy under difficult conditions. A general framework that encounters the quantification procedure as an optimization problem, which is solved using a genetic algorithm (GA), is proposed in this paper. Two different lineshape models are examined, while two GA configurations are applied on artificial data. Moreover, the introduced quantification technique deals with metabolite peaks' overlapping, a considerably difficult situation occurring under real conditions. Appropriate experiments have proved the efficiency of the introduced methodology, in artificial MRS data, by establishing it as a generic metabolite quantification procedure.
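
    As a toy version of the GA-as-quantifier idea, the sketch below fits the amplitude, position and width of a single Lorentzian peak to a noisy spectrum with a small real-coded genetic algorithm (tournament selection, blend crossover, Gaussian mutation, simple elitism). The operators and population settings are generic choices, not those of the paper, and the real problem involves several overlapping metabolite lineshapes.

      import numpy as np

      rng = np.random.default_rng(0)

      def lorentzian(f, amp, f0, width):
          return amp * (0.5 * width) ** 2 / ((f - f0) ** 2 + (0.5 * width) ** 2)

      def fitness(p, f, spectrum):
          return -np.sum((lorentzian(f, *p) - spectrum) ** 2)      # higher is better

      def ga_fit(f, spectrum, bounds, pop_size=60, generations=200):
          lo, hi = np.array(bounds, dtype=float).T
          pop = rng.uniform(lo, hi, size=(pop_size, lo.size))
          for _ in range(generations):
              scores = np.array([fitness(p, f, spectrum) for p in pop])
              children = [pop[np.argmax(scores)].copy()]           # elitism
              while len(children) < pop_size:
                  i, j = rng.integers(pop_size, size=2)            # tournament of two
                  a = pop[i] if scores[i] > scores[j] else pop[j]
                  i, j = rng.integers(pop_size, size=2)
                  b = pop[i] if scores[i] > scores[j] else pop[j]
                  w = rng.random(lo.size)                          # blend crossover
                  child = w * a + (1 - w) * b
                  child += rng.normal(0, 0.02 * (hi - lo))         # Gaussian mutation
                  children.append(np.clip(child, lo, hi))
              pop = np.array(children)
          scores = np.array([fitness(p, f, spectrum) for p in pop])
          return pop[np.argmax(scores)]

      f = np.linspace(0, 5, 400)                                   # toy chemical-shift axis
      clean = lorentzian(f, amp=10.0, f0=2.0, width=0.1)
      best = ga_fit(f, clean + rng.normal(0, 0.2, f.size), [(0, 20), (0, 5), (0.01, 1.0)])
      print(best.round(3))                                         # roughly [10, 2, 0.1]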

  4. In vivo MRS metabolite quantification using genetic optimization

    International Nuclear Information System (INIS)

    Papakostas, G A; Mertzios, B G; Karras, D A; Van Ormondt, D; Graveron-Demilly, D

    2011-01-01

    The in vivo quantification of metabolites' concentrations, revealed in magnetic resonance spectroscopy (MRS) spectra, constitutes the main subject under investigation in this work. Significant contributions based on artificial intelligence tools, such as neural networks (NNs), with good results have been presented lately but have shown several drawbacks, regarding their quantification accuracy under difficult conditions. A general framework that encounters the quantification procedure as an optimization problem, which is solved using a genetic algorithm (GA), is proposed in this paper. Two different lineshape models are examined, while two GA configurations are applied on artificial data. Moreover, the introduced quantification technique deals with metabolite peaks' overlapping, a considerably difficult situation occurring under real conditions. Appropriate experiments have proved the efficiency of the introduced methodology, in artificial MRS data, by establishing it as a generic metabolite quantification procedure

  5. Deep learning approach for the detection and quantification of intraretinal cystoid fluid in multivendor optical coherence tomography

    Science.gov (United States)

    Venhuizen, Freerk G.; van Ginneken, Bram; Liefers, Bart; van Asten, Freekje; Schreur, Vivian; Fauser, Sascha; Hoyng, Carel; Theelen, Thomas; Sánchez, Clara I.

    2018-01-01

    We developed a deep learning algorithm for the automatic segmentation and quantification of intraretinal cystoid fluid (IRC) in spectral domain optical coherence tomography (SD-OCT) volumes independent of the device used for acquisition. A cascade of neural networks was introduced to include prior information on the retinal anatomy, boosting performance significantly. The proposed algorithm approached human performance reaching an overall Dice coefficient of 0.754 ± 0.136 and an intraclass correlation coefficient of 0.936, for the task of IRC segmentation and quantification, respectively. The proposed method allows for fast quantitative IRC volume measurements that can be used to improve patient care, reduce costs, and allow fast and reliable analysis in large population studies. PMID:29675301

  6. Automated Quantification of Pneumothorax in CT

    Science.gov (United States)

    Do, Synho; Salvaggio, Kristen; Gupta, Supriya; Kalra, Mannudeep; Ali, Nabeel U.; Pien, Homer

    2012-01-01

    An automated, computer-aided diagnosis (CAD) algorithm for the quantification of pneumothoraces from Multidetector Computed Tomography (MDCT) images has been developed. Algorithm performance was evaluated through comparison to manual segmentation by expert radiologists. A combination of two-dimensional and three-dimensional processing techniques was incorporated to reduce required processing time by two-thirds (as compared to similar techniques). Volumetric measurements on relative pneumothorax size were obtained and the overall performance of the automated method shows an average error of just below 1%. PMID:23082091

  7. Exploring Heterogeneous Multicore Architectures for Advanced Embedded Uncertainty Quantification.

    Energy Technology Data Exchange (ETDEWEB)

    Phipps, Eric T.; Edwards, Harold C.; Hu, Jonathan J.

    2014-09-01

    We explore rearrangements of classical uncertainty quantification methods with the aim of achieving higher aggregate performance for uncertainty quantification calculations on emerging multicore and manycore architectures. We show a rearrangement of the stochastic Galerkin method leads to improved performance and scalability on several computational architectures whereby uncertainty information is propagated at the lowest levels of the simulation code improving memory access patterns, exposing new dimensions of fine grained parallelism, and reducing communication. We also develop a general framework for implementing such rearrangements for a diverse set of uncertainty quantification algorithms as well as computational simulation codes to which they are applied.

  8. Algorithming the Algorithm

    DEFF Research Database (Denmark)

    Mahnke, Martina; Uprichard, Emma

    2014-01-01

    Imagine sailing across the ocean. The sun is shining, vastness all around you. And suddenly [BOOM] you’ve hit an invisible wall. Welcome to the Truman Show! Ever since Eli Pariser published his thoughts on a potential filter bubble, this movie scenario seems to have become reality, just with slight...... changes: it’s not the ocean, it’s the internet we’re talking about, and it’s not a TV show producer, but algorithms that constitute a sort of invisible wall. Building on this assumption, most research is trying to ‘tame the algorithmic tiger’. While this is a valuable and often inspiring approach, we...

  9. Quantification in emission tomography

    International Nuclear Information System (INIS)

    Buvat, Irene

    2011-11-01

    The objective of this lecture is to understand the possibilities and limitations of the quantitative analysis of single photon emission computed tomography (SPECT) and positron emission tomography (PET) images. It is also to identify the conditions to be fulfilled to obtain reliable quantitative measurements from images. Content: 1 - Introduction: Quantification in emission tomography - definition and challenges; quantification biasing phenomena 2 - Main problems impacting quantification in PET and SPECT: problems, consequences, correction methods, results (Attenuation, scattering, partial volume effect, movement, un-stationary spatial resolution in SPECT, fortuitous coincidences in PET, standardisation in PET); 3 - Synthesis: accessible efficiency, know-how, Precautions, beyond the activity measurement

  10. Quantum Computation and Algorithms

    International Nuclear Information System (INIS)

    Biham, O.; Biron, D.; Biham, E.; Grassi, M.; Lidar, D.A.

    1999-01-01

    It is now firmly established that quantum algorithms provide a substantial speedup over classical algorithms for a variety of problems, including the factorization of large numbers and the search for a marked element in an unsorted database. In this talk I will review the principles of quantum algorithms, the basic quantum gates and their operation. The combination of superposition and interference, that makes these algorithms efficient, will be discussed. In particular, Grover's search algorithm will be presented as an example. I will show that the time evolution of the amplitudes in Grover's algorithm can be found exactly using recursion equations, for any initial amplitude distribution
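
    The amplitude recursion alluded to for Grover's algorithm (one marked item among N, starting from the uniform superposition) can be written down directly: each iteration maps the marked amplitude k and the common unmarked amplitude l linearly. A short sketch under those standard assumptions:

      import math

      def grover_amplitudes(n_items, iterations):
          """Amplitudes (marked, unmarked) of Grover's algorithm for a single marked
          item among n_items, starting from the uniform superposition."""
          N = n_items
          k = l = 1.0 / math.sqrt(N)
          history = [(k, l)]
          for _ in range(iterations):
              k, l = ((N - 2) / N) * k + (2 * (N - 1) / N) * l, ((N - 2) / N) * l - (2 / N) * k
              history.append((k, l))
          return history

      N = 64
      steps = round(math.pi / 4 * math.sqrt(N))       # near-optimal iteration count
      final_k, _ = grover_amplitudes(N, steps)[-1]
      print(steps, round(final_k ** 2, 4))            # success probability close to 1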

  11. 'Motion frozen' quantification and display of myocardial perfusion gated SPECT

    International Nuclear Information System (INIS)

    Slomka, P.J.; Hurwitz, G.A.; Baddredine, M.; Baranowski, J.; Aladl, U.E.

    2002-01-01

    Aim: Gated SPECT imaging incorporates both functional and perfusion information of the left ventricle (LV). However, perfusion data are confounded by the effect of ventricular motion. Most existing quantification paradigms simply add all gated frames and then proceed to extract the perfusion information from static images, discarding the effects of cardiac motion. In an attempt to improve the reliability and accuracy of cardiac SPECT quantification we propose to eliminate the LV motion prior to the perfusion quantification via an automated image warping algorithm. Methods: A pilot series of 14 male and 11 female gated stress SPECT images acquired with 8 time bins have been co-registered to the coordinates of the 3D normal templates. Subsequently the LV endo- and epicardial 3D points (300-500) were identified on end-systolic (ES) and end-diastolic (ED) frames, defining the ES-ED motion vectors. The nonlinear image warping algorithm (thin-plate-spline) was then applied to warp the end-systolic frame onto the end-diastolic frame using the corresponding ES-ED motion vectors. The remaining 6 intermediate frames were also transformed to the ED coordinates using fractions of the motion vectors. Such warped images were then summed to provide the LV perfusion image in the ED phase but with counts from the full cycle. Results: The identification of the ED/ES corresponding points was successful in all cases. The corrected displacement between ED and ES images was up to 25 mm. The summed images had the appearance of the ED frames but were much less noisy since all the counts were used. The spatial resolution of such images appeared higher than that of summed gated images, especially in the female scans. These 'motion frozen' images could be displayed and quantified as regular non-gated tomograms including the polar map paradigm. Conclusions: This image processing technique may improve the effective image resolution of summed gated myocardial perfusion images used for

  12. Quantification of local mobilities

    DEFF Research Database (Denmark)

    Zhang, Y. B.

    2018-01-01

    A new method for quantification of mobilities of local recrystallization boundary segments is presented. The quantification is based on microstructures characterized using electron microscopy and on determination of migration velocities and driving forces for local boundary segments. Pure aluminium...... is investigated and the results show that even for a single recrystallization boundary, different boundary segments migrate differently, and the differences can be understood based on variations in mobilities and local deformed microstructures. The present work has important implications for understanding...

  13. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real...... variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...

  14. Quantification of susceptibility change at high-concentrated SPIO-labeled target by characteristic phase gradient recognition.

    Science.gov (United States)

    Zhu, Haitao; Nie, Binbin; Liu, Hua; Guo, Hua; Demachi, Kazuyuki; Sekino, Masaki; Shan, Baoci

    2016-05-01

    Phase map cross-correlation detection and quantification may produce a highlighted signal at superparamagnetic iron oxide nanoparticles and distinguish them from other hypointensities. The method may quantify susceptibility change by performing a least squares analysis between a theoretically generated magnetic field template and an experimentally scanned phase image. Because characteristic phase recognition requires the removal of phase wrap and phase background, the additional steps of phase unwrapping and filtering may increase the chance of computing error and enlarge the inconsistency among algorithms. To solve this problem, a phase gradient cross-correlation and quantification method was developed that recognizes the characteristic phase gradient pattern instead of the phase image, because the phase gradient operation inherently includes unwrapping and filtering functions. However, few studies have mentioned the detectable limit of currently used phase gradient calculation algorithms. This limit may lead to an underestimation of the large magnetic susceptibility change caused by high-concentrated iron accumulation. In this study, a mathematical derivation establishes the maximum detectable phase gradient calculated by the differential chain algorithm in both the spatial and Fourier domains. To break through this limit, a modified quantification method is proposed that uses unwrapped forward differentiation for phase gradient generation. The method enlarges the detectable range of phase gradient measurement and avoids the underestimation of magnetic susceptibility. Simulation and phantom experiments were used to quantitatively compare the different methods. The in vivo application performs MRI scanning on nude mice implanted with iron-labeled human cancer cells. Results validate the limit of detectable phase gradient and the consequent susceptibility underestimation. Results also demonstrate the advantage of unwrapped forward differentiation compared with differential chain algorithms for susceptibility
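
    The snippet below illustrates the detectable limit referred to above: a chain (wrapped) phase difference cannot represent gradients beyond plus or minus pi per pixel, which is why steep gradients from concentrated iron are underestimated. It is a 1D synthetic illustration only; the paper's unwrapped forward differentiation and its 3D processing are not reproduced.

```python
# Hedged 1D illustration of the pi-per-pixel limit of a wrapped ("chain") phase difference.
import numpy as np

true_grad = np.linspace(0.0, 1.5 * np.pi, 200)           # rad/pixel, ramping past pi
phase = np.concatenate(([0.0], np.cumsum(true_grad)))    # ideal (unwrapped) phase
wrapped = np.angle(np.exp(1j * phase))                    # what a phase image provides

grad_chain = np.angle(np.exp(1j * np.diff(wrapped)))      # wrapped chain difference
err = np.abs(grad_chain - true_grad)
alias_at = int(np.argmax(err > 1e-3))
print("chain difference aliases once the true gradient reaches %.3f rad/pixel"
      % true_grad[alias_at])
```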

  15. Probabilistic production simulation including CHP plants

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, H.V.; Palsson, H.; Ravn, H.F.

    1997-04-01

    A probabilistic production simulation method is presented for an energy system containing combined heat and power plants. The method permits incorporation of stochastic failures (forced outages) of the plants and is well suited for analysis of the dimensioning of the system, that is, for finding the appropriate types and capacities of production plants in relation to expansion planning. The method is in the tradition of similar approaches for the analysis of power systems, based on the load duration curve. The present method extends this by considering a two-dimensional load duration curve where the two dimensions represent heat and power. The method permits the analysis of a combined heat and power system which includes all the basic relevant types of plants, viz., condensing plants, back pressure plants, extraction plants and heat plants. The focus of the method is on the situation where the heat side has priority. This implies that on the power side there may be imbalances between demand and production. The method permits quantification of the expected power overflow, the expected unserviced power demand, and the expected unserviced heat demand. It is shown that a discretization method as well as double Fourier series may be applied in algorithms based on the method. (au) 1 tab., 28 ills., 21 refs.
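
    As a concrete illustration of the load-duration-curve machinery the method builds on, the sketch below performs the classical one-dimensional convolution of an equivalent load duration curve with unit forced outages and reads off an expected unserved power demand. The two-dimensional heat/power extension described in the report is not attempted, and all system data are invented.

```python
# Hedged sketch of the classical one-dimensional building block (equivalent load duration
# curve convolved with unit forced outages) that the two-dimensional heat/power method
# generalizes; all system data below are invented for illustration.
import numpy as np

def convolve_unit(F, capacity, q, dx):
    """F[k] = Pr{equivalent load > k*dx}; capacity in MW; q = forced outage rate."""
    shift = int(round(capacity / dx))
    F_shifted = np.concatenate((np.ones(shift), F[:-shift])) if shift else F
    return (1.0 - q) * F + q * F_shifted

dx = 10.0                                          # MW grid step
grid = np.arange(0.0, 2601.0, dx)
F = np.clip(1.0 - grid / 1200.0, 0.0, 1.0)         # Pr{load exceeds x}, toy load duration curve
units = [(400.0, 0.05), (400.0, 0.08), (500.0, 0.10)]   # (capacity MW, forced outage rate)

for cap, q in units:                               # convolve units in merit order
    F = convolve_unit(F, cap, q, dx)

installed = sum(c for c, _ in units)
unserved = F[int(installed / dx):].sum() * dx      # expected unserved power demand (MW)
print("expected unserved power demand ~ %.2f MW" % unserved)
```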

  16. Mixture quantification using PLS in plastic scintillation measurements

    Energy Technology Data Exchange (ETDEWEB)

    Bagan, H.; Tarancon, A.; Rauret, G. [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Garcia, J.F., E-mail: jfgarcia@ub.ed [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain)

    2011-06-15

    This article reports the capability of plastic scintillation (PS) combined with multivariate calibration (Partial least squares; PLS) to detect and quantify alpha and beta emitters in mixtures. While several attempts have been made with this purpose in mind using liquid scintillation (LS), no attempt had been made using PS, which has the great advantage of not producing mixed waste after the measurements are performed. Following this objective, ternary mixtures of alpha and beta emitters ({sup 241}Am, {sup 137}Cs and {sup 90}Sr/{sup 90}Y) have been quantified. Procedure optimisation evaluated the use of the net spectra or the sample spectra, the inclusion of different spectra obtained at different values of the Pulse Shape Analysis parameter and the application of the PLS1 or PLS2 algorithms. The conclusions show that the use of PS+PLS2 applied to the sample spectra, without the use of any pulse shape discrimination, allows quantification of the activities with relative errors below 10% in most cases. This procedure not only allows quantification of mixtures but also reduces measurement time (no blanks are required), and its application does not require detectors that include the pulse shape analysis parameter.
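
    A minimal sketch of the PLS2 step is given below: one multivariate PLS model maps measured spectra to the three activities simultaneously. The spectral shapes, activities and noise level are synthetic placeholders, not the paper's plastic-scintillation calibration data.

```python
# Hedged sketch of mixture quantification with PLS2: a single multivariate PLS model
# predicts all three activities from a spectrum. Spectral shapes and data are synthetic.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
channels = np.arange(256)
# hypothetical unit-activity spectral shapes standing in for 241Am, 137Cs and 90Sr/90Y
shapes = np.stack([np.exp(-0.5 * ((channels - c) / w) ** 2)
                   for c, w in [(60, 12), (120, 25), (180, 35)]])

A_train = rng.uniform(0, 100, size=(40, 3))                   # training activities (Bq)
X_train = A_train @ shapes + rng.normal(0, 0.5, (40, 256))    # simulated sample spectra

pls = PLSRegression(n_components=3).fit(X_train, A_train)     # PLS2: all analytes at once

A_test = np.array([[20.0, 50.0, 30.0]])
X_test = A_test @ shapes + rng.normal(0, 0.5, (1, 256))
print("true:", A_test[0], " predicted:", pls.predict(X_test)[0].round(1))
```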

  17. Sound algorithms

    OpenAIRE

    De Götzen , Amalia; Mion , Luca; Tache , Olivier

    2007-01-01

    International audience; We call sound algorithms the categories of algorithms that deal with digital sound signals. Sound algorithms appeared in the very infancy of computing. Sound algorithms present strong specificities that are the consequence of two dual considerations: the properties of the digital sound signal itself and its uses, and the properties of auditory perception.

  18. Genetic algorithms

    Science.gov (United States)

    Wang, Lui; Bayer, Steven E.

    1991-01-01

    Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
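
    To make the basic concepts concrete, here is a deliberately minimal generational GA (tournament selection, one-point crossover, bit-flip mutation) applied to a toy one-max objective. It is an illustrative sketch only and has no connection to the software tool described in the record.

```python
# Hedged sketch of a minimal generational genetic algorithm on a toy "one-max" objective.
import random

def ga(n_bits=40, pop_size=60, generations=80, p_mut=0.02, seed=42):
    random.seed(seed)
    fitness = lambda ind: sum(ind)                     # one-max: count the 1 bits
    pop = [[random.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        new_pop = []
        while len(new_pop) < pop_size:
            a = max(random.sample(pop, 3), key=fitness)            # tournament selection
            b = max(random.sample(pop, 3), key=fitness)
            cut = random.randrange(1, n_bits)
            child = a[:cut] + b[cut:]                              # one-point crossover
            child = [bit ^ (random.random() < p_mut) for bit in child]   # bit-flip mutation
            new_pop.append(child)
        pop = new_pop
    return max(pop, key=fitness)

print("best fitness found:", sum(ga()))
```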

  19. The Parallel C++ Statistical Library ‘QUESO’: Quantification of Uncertainty for Estimation, Simulation and Optimization

    KAUST Repository

    Prudencio, Ernesto E.; Schulz, Karl W.

    2012-01-01

    QUESO is a collection of statistical algorithms and programming constructs supporting research into the uncertainty quantification (UQ) of models and their predictions. It has been designed with three objectives: it should (a) be sufficiently

  20. An overview of quantification methods in energy-dispersive X-ray ...

    Indian Academy of Sciences (India)

    methods for thin samples, samples with intermediate thickness and thick ... algorithms and quantification methods based on scattered primary radiation. ... technique for in situ characterization of materials such as contaminated soil, archaeo-.

  1. Fluorescent quantification of melanin.

    Science.gov (United States)

    Fernandes, Bruno; Matamá, Teresa; Guimarães, Diana; Gomes, Andreia; Cavaco-Paulo, Artur

    2016-11-01

    Melanin quantification is reportedly performed by absorption spectroscopy, commonly at 405 nm. Here, we propose the implementation of fluorescence spectroscopy for melanin assessment. In a typical in vitro assay to assess melanin production in response to an external stimulus, absorption spectroscopy clearly overvalues melanin content. This method is also incapable of distinguishing non-melanotic/amelanotic control cells from those that are actually capable of performing melanogenesis. Therefore, fluorescence spectroscopy is the best method for melanin quantification as it proved to be highly specific and accurate, detecting even small variations in the synthesis of melanin. This method can also be applied to the quantification of melanin in more complex biological matrices like zebrafish embryos and human hair. © 2016 John Wiley & Sons A/S. Published by John Wiley & Sons Ltd.

  2. Improved multivariate polynomial factoring algorithm

    International Nuclear Information System (INIS)

    Wang, P.S.

    1978-01-01

    A new algorithm for factoring multivariate polynomials over the integers based on an algorithm by Wang and Rothschild is described. The new algorithm has improved strategies for dealing with the known problems of the original algorithm, namely, the leading coefficient problem, the bad-zero problem and the occurrence of extraneous factors. It has an algorithm for correctly predetermining leading coefficients of the factors. A new and efficient p-adic algorithm named EEZ is described. Basically it is a linearly convergent variable-by-variable parallel construction. The improved algorithm is generally faster and requires less storage than the original algorithm. Machine examples with comparative timing are included

  3. Application of Genetic Algorithm (GA) Assisted Partial Least Square (PLS) Analysis on Trilinear and Non-trilinear Fluorescence Data Sets to Quantify the Fluorophores in Multifluorophoric Mixtures: Improving Quantification Accuracy of Fluorimetric Estimations of Dilute Aqueous Mixtures.

    Science.gov (United States)

    Kumar, Keshav

    2018-03-29

    Excitation-emission matrix fluorescence (EEMF) and total synchronous fluorescence spectroscopy (TSFS) are the two fluorescence techniques that are commonly used for the analysis of multifluorophoric mixtures. These two fluorescence techniques are conceptually different and provide certain advantages over each other. The manual analysis of such highly correlated, large-volume EEMF and TSFS data sets towards developing a calibration model is difficult. Partial least squares (PLS) analysis can analyze the large volume of EEMF and TSFS data sets by finding important factors that maximize the correlation between the spectral and concentration information for each fluorophore. However, the application of PLS analysis on entire data sets often does not provide a robust calibration model and requires the application of a suitable pre-processing step. The present work evaluates the application of genetic algorithm (GA) analysis prior to PLS analysis on EEMF and TSFS data sets towards improving the precision and accuracy of the calibration model. The GA essentially combines the advantages provided by stochastic methods with those provided by deterministic approaches and can find the set of EEMF and TSFS variables that correlate well with the concentration of each of the fluorophores present in the multifluorophoric mixtures. The utility of GA-assisted PLS analysis is successfully validated using (i) EEMF data sets acquired for dilute aqueous mixtures of four biomolecules and (ii) TSFS data sets acquired for dilute aqueous mixtures of four carcinogenic polycyclic aromatic hydrocarbons (PAHs). In the present work, it is shown that by using the GA it is possible to significantly improve the accuracy and precision of the PLS calibration models developed for both EEMF and TSFS data sets. Hence, GA must be considered a useful pre-processing technique while developing EEMF and TSFS calibration models.
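
    The sketch below shows the GA-as-pre-processing idea in its simplest form: binary chromosomes select spectral variables, and the fitness of a chromosome is the validation error of a PLS model restricted to those variables. The data, population size and GA operators are illustrative assumptions, not the settings used in the paper.

```python
# Hedged sketch of GA-assisted variable selection for a PLS calibration model.
# Synthetic data; chromosome = binary mask over spectral variables.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(1)
X = rng.normal(size=(80, 120))                      # stand-in for EEMF/TSFS variables
y = X[:, :10] @ rng.uniform(0.5, 1.5, 10) + rng.normal(0, 0.1, 80)
X_tr, X_val, y_tr, y_val = X[:60], X[60:], y[:60], y[60:]

def fitness(mask):
    """Negative validation MSE of a PLS model built on the selected variables."""
    if mask.sum() < 5:
        return -np.inf
    cols = mask.astype(bool)
    pls = PLSRegression(n_components=3).fit(X_tr[:, cols], y_tr)
    pred = np.ravel(pls.predict(X_val[:, cols]))
    return -np.mean((pred - y_val) ** 2)

pop = rng.integers(0, 2, size=(20, X.shape[1]))     # binary chromosomes = variable masks
for _ in range(30):
    scores = np.array([fitness(m) for m in pop])
    parents = pop[np.argsort(scores)[-10:]]          # keep the 10 best masks
    children = parents.copy()
    cuts = rng.integers(1, X.shape[1], size=10)
    for i, cut in enumerate(cuts):                   # one-point crossover with the next parent
        children[i, cut:] = parents[(i + 1) % 10, cut:]
    children ^= (rng.random(children.shape) < 0.01)  # bit-flip mutation
    pop = np.vstack([parents, children])

best = pop[np.argmax([fitness(m) for m in pop])]
print("variables kept: %d of %d, validation MSE: %.4f"
      % (best.sum(), X.shape[1], -fitness(best)))
```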

  4. Statistical Assessment of Gene Fusion Detection Algorithms using RNA-Sequencing Data

    NARCIS (Netherlands)

    Varadan, V.; Janevski, A.; Kamalakaran, S.; Banerjee, N.; Harris, L.; Dimitrova, D.

    2012-01-01

    The detection and quantification of fusion transcripts have both biological and clinical implications. RNA sequencing technology provides a means for unbiased and high resolution characterization of fusion transcript information in tissue samples. We evaluated two fusion-detection algorithms,

  5. Exact reliability quantification of highly reliable systems with maintenance

    Energy Technology Data Exchange (ETDEWEB)

    Bris, Radim, E-mail: radim.bris@vsb.c [VSB-Technical University Ostrava, Faculty of Electrical Engineering and Computer Science, Department of Applied Mathematics, 17. listopadu 15, 70833 Ostrava-Poruba (Czech Republic)

    2010-12-15

    When a system is composed of highly reliable elements, exact reliability quantification may be problematic, because computer accuracy is limited. Inaccuracy can be due to different aspects. For example, an error may be made when subtracting two numbers that are very close to each other, or in the process of summation of many very different numbers, etc. The basic objective of this paper is to find a procedure which eliminates errors made by a PC when calculations close to an error limit are executed. The highly reliable system is represented by a directed acyclic graph composed of terminal nodes, i.e. highly reliable input elements, internal nodes representing subsystems, and edges that bind all of these nodes. Three admissible unavailability models of terminal nodes are introduced, including both corrective and preventive maintenance. The algorithm for exact unavailability calculation of terminal nodes is based on the merits of MATLAB, a high-performance language for technical computing. The system unavailability quantification procedure applied to a graph structure, which considers both independent and dependent (i.e. repeatedly occurring) terminal nodes, is based on a combinatorial principle. This principle requires the summation of many very different non-negative numbers, which may be a source of inaccuracy. That is why another algorithm for exact summation of such numbers is designed in the paper. The summation procedure uses the benefits of a special number system with the base represented by the value 2{sup 32}. The computational efficiency of the new computing methodology is compared with advanced simulation software. Various calculations on systems from references are performed to emphasize the merits of the methodology.
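
    The floating-point pitfall the paper targets is easy to reproduce. In the sketch below, naive accumulation loses every tiny term next to a large one, while exact rational arithmetic (Python's Fraction, used here merely as a stand-in for the paper's base-2^32 number system) preserves the sum.

```python
# Hedged illustration of the precision problem the paper solves: a naive floating-point
# sum absorbs many tiny terms next to one large term, while exact arithmetic keeps them.
from fractions import Fraction

terms = [1.0] + [1e-16] * 100_000             # one large term plus many tiny ones

naive = 0.0
for t in terms:
    naive += t                                 # each 1e-16 is rounded away against 1.0

exact = sum(Fraction(t) for t in terms)        # exact rational accumulation
print("floating-point sum:", naive)            # 1.0 -- the ~1e-11 contribution is lost
print("exact sum         :", float(exact))     # ~1.00000000001
```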

  6. Automated image analysis for quantification of filamentous bacteria

    DEFF Research Database (Denmark)

    Fredborg, Marlene; Rosenvinge, Flemming Schønning; Spillum, Erik

    2015-01-01

    in systems relying on colorimetry or turbidometry (such as Vitek-2, Phoenix, MicroScan WalkAway). The objective was to examine an automated image analysis algorithm for quantification of filamentous bacteria using the 3D digital microscopy imaging system, oCelloScope. Results Three E. coli strains displaying...

  7. Algorithmic cryptanalysis

    CERN Document Server

    Joux, Antoine

    2009-01-01

    Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program. Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic

  8. Disease quantification in dermatology

    DEFF Research Database (Denmark)

    Greve, Tanja Maria; Kamp, Søren; Jemec, Gregor B E

    2013-01-01

    Accurate documentation of disease severity is a prerequisite for clinical research and the practice of evidence-based medicine. The quantification of skin diseases such as psoriasis currently relies heavily on clinical scores. Although these clinical scoring methods are well established and very ...

  9. Algorithmic mathematics

    CERN Document Server

    Hougardy, Stefan

    2016-01-01

    Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.

  10. The quantification of risk and tourism

    Directory of Open Access Journals (Sweden)

    Piet Croucamp

    2014-01-01

    Full Text Available Tourism in South Africa comprises 9.5% of Gross Domestic Product (GDP), but remains an under-researched industry, especially regarding the quantification of the risks prevailing in the social, political and economic environment in which the industry operates. Risk prediction and extrapolation forecasting are conducted largely in the context of a qualitative methodology. This article reflects on the quantification of social constructs as variables of risk in the tourism industry with reference to South Africa. The theory and methodology of quantification is briefly reviewed and the indicators of risk are conceptualized and operationalized. The identified indicators are scaled in indices for purposes of quantification. Risk assessments and the quantification of constructs rely heavily on the experience - often personal - of the researcher and this scholarly endeavour is, therefore, not inclusive of all possible identified indicators of risk. It is accepted that tourism in South Africa is an industry comprising a large diversity of sectors, each with a different set of risk indicators and risk profiles. The emphasis of this article is thus on the methodology to be applied to a risk profile. A secondary endeavour is to provide clarity about the conceptual and operational confines of risk in general, as well as how quantified risk relates to the tourism industry. The indices provided include both domestic and international risk indicators. The motivation for the article is to encourage a greater emphasis on quantitative research in our efforts to understand and manage a risk profile for the tourist industry.

  11. Total algorithms

    NARCIS (Netherlands)

    Tel, G.

    We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of

  12. A comprehensive evaluation of alignment algorithms in the context of RNA-seq.

    Directory of Open Access Journals (Sweden)

    Robert Lindner

    Full Text Available Transcriptome sequencing (RNA-Seq) overcomes limitations of previously used RNA quantification methods and provides one experimental framework for both high-throughput characterization and quantification of transcripts at the nucleotide level. The first step and a major challenge in the analysis of such experiments is the mapping of sequencing reads to a transcriptomic origin including the identification of splicing events. In recent years, a large number of such mapping algorithms have been developed, all of which have in common that they require algorithms for aligning a vast number of reads to genomic or transcriptomic sequences. Although the FM-index based aligner Bowtie has become a de facto standard within mapping pipelines, a much larger number of alignment algorithms has been developed, including other variants of FM-index based aligners. Accordingly, developers and users of RNA-seq mapping pipelines have a choice among a large number of available alignment algorithms. To provide guidance in the choice of alignment algorithms for these purposes, we evaluated the performance of 14 widely used alignment programs from three different algorithmic classes: algorithms using either hashing of the reference transcriptome, hashing of reads, or a compressed FM-index representation of the genome. Here, special emphasis was placed on both precision and recall and the performance for different read lengths and numbers of mismatches and indels in a read. Our results clearly showed the significant reduction in memory footprint and runtime provided by FM-index based aligners at a precision and recall comparable to the best hash table based aligners. Furthermore, the recently developed Bowtie 2 alignment algorithm shows a remarkable tolerance to both sequencing errors and indels, thus essentially making hash-based aligners obsolete.

  13. New Optimization Algorithms in Physics

    CERN Document Server

    Hartmann, Alexander K

    2004-01-01

    Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.

  14. Impact of muscular uptake and statistical noise on tumor quantification based on simulated FDG-PET studies

    International Nuclear Information System (INIS)

    Silva-Rodríguez, Jesús; Domínguez-Prado, Inés; Pardo-Montero, Juan; Ruibal, Álvaro

    2017-01-01

    Purpose: The aim of this work is to study the effect of physiological muscular uptake variations and statistical noise on tumor quantification in FDG-PET studies. Methods: We designed a realistic framework based on simulated FDG-PET acquisitions from an anthropomorphic phantom that included different muscular uptake levels and three spherical lung lesions with diameters of 31, 21 and 9 mm. A distribution of muscular uptake levels was obtained from 136 patients remitted to our center for whole-body FDG-PET. Simulated FDG-PET acquisitions were obtained by using the Simulation System for Emission Tomography (SimSET) Monte Carlo package. Simulated data were reconstructed by using an iterative Ordered Subset Expectation Maximization (OSEM) algorithm implemented in the Software for Tomographic Image Reconstruction (STIR) library. Tumor quantification was carried out by using estimations of SUVmax, SUV50 and SUVmean from different noise realizations, lung lesions and multiple muscular uptakes. Results: Our analysis provided quantification variability values of 17–22% (SUVmax), 11–19% (SUV50) and 8–10% (SUVmean) when muscular uptake variations and statistical noise were included. Meanwhile, quantification variability due only to statistical noise was 7–8% (SUVmax), 3–7% (SUV50) and 1–2% (SUVmean) for large tumors (>20 mm) and 13% (SUVmax), 16% (SUV50) and 8% (SUVmean) for small tumors (<10 mm), thus showing that the variability in tumor quantification is mainly affected by muscular uptake variations when large enough tumors are considered. In addition, our results showed that quantification variability is strongly dominated by statistical noise when the injected dose decreases below 222 MBq. Conclusions: Our study revealed that muscular uptake variations between patients who are totally relaxed should be considered as an uncertainty source of tumor quantification values. - Highlights: • Distribution of muscular uptake from 136 PET
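
    For reference, the three voxel statistics compared above can be computed from a lesion VOI as in the sketch below. SUV50 is taken here to be the mean of voxels above 50% of SUVmax, which is an assumed (though common) definition; the voxel values are synthetic.

```python
# Hedged sketch of SUVmax, SUV50 and SUVmean computed over a lesion VOI.
import numpy as np

def suv_metrics(voi_values):
    v = np.asarray(voi_values, dtype=float)
    suv_max = v.max()
    suv_50 = v[v >= 0.5 * suv_max].mean()      # assumed definition: mean above 50% of SUVmax
    suv_mean = v.mean()
    return suv_max, suv_50, suv_mean

rng = np.random.default_rng(3)
voi = rng.gamma(shape=9.0, scale=0.8, size=500)        # synthetic lesion voxel SUVs
print("SUVmax %.2f  SUV50 %.2f  SUVmean %.2f" % suv_metrics(voi))
```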

  15. Semioptimal practicable algorithmic cooling

    International Nuclear Information System (INIS)

    Elias, Yuval; Mor, Tal; Weinstein, Yossi

    2011-01-01

    Algorithmic cooling (AC) of spins applies entropy manipulation algorithms in open spin systems in order to cool spins far beyond Shannon's entropy bound. Algorithmic cooling of nuclear spins was demonstrated experimentally and may contribute to nuclear magnetic resonance spectroscopy. Several cooling algorithms were suggested in recent years, including practicable algorithmic cooling (PAC) and exhaustive AC. Practicable algorithms have simple implementations, yet their level of cooling is far from optimal; exhaustive algorithms, on the other hand, cool much better, and some even reach (asymptotically) an optimal level of cooling, but they are not practicable. We introduce here semioptimal practicable AC (SOPAC), wherein a few cycles (typically two to six) are performed at each recursive level. Two classes of SOPAC algorithms are proposed and analyzed. Both attain cooling levels significantly better than PAC and are much more efficient than the exhaustive algorithms. These algorithms are shown to bridge the gap between PAC and exhaustive AC. In addition, we calculated the number of spins required by SOPAC in order to purify qubits for quantum computation. As few as 12 and 7 spins are required (in an ideal scenario) to yield a mildly pure spin (60% polarized) from initial polarizations of 1% and 10%, respectively. In the latter case, about five more spins are sufficient to produce a highly pure spin (99.99% polarized), which could be relevant for fault-tolerant quantum computing.

  16. Stochastic approach for radionuclides quantification

    Science.gov (United States)

    Clement, A.; Saurel, N.; Perrin, G.

    2018-01-01

    Gamma spectrometry is a passive non-destructive assay used to quantify radionuclides present in more or less complex objects. Basic methods using empirical calibration with a standard in order to quantify the activity of nuclear materials by determining a calibration coefficient are useless on non-reproducible, complex and single nuclear objects such as waste packages. Package specifications such as composition or geometry change from one package to another and involve a high variability of objects. The current quantification process uses numerical modelling of the measured scene with the few available data, such as geometry or composition. These data are density, material, screen, geometric shape, matrix composition, and matrix and source distribution. Some of them are strongly dependent on package data knowledge and operator background. The French Commissariat à l'Energie Atomique (CEA) is developing a new methodology to quantify nuclear materials in waste packages and waste drums without operator adjustment and without knowledge of the internal package configuration. The method combines a global stochastic approach, which uses, among others, surrogate models to simulate the gamma attenuation behaviour; a Bayesian approach, which considers conditional probability densities of the problem inputs; and Markov chain Monte Carlo (MCMC) algorithms, which solve the inverse problem, using the gamma-ray emission spectrum of the radionuclides and the outside dimensions of the objects of interest. The methodology is being tested by quantifying actinide activity in different kinds of matrices, compositions and source configurations, against standards with known actinide masses, locations and distributions. Activity uncertainties are taken into account by this adjustment methodology.
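
    The Bayesian/MCMC ingredient can be sketched with a toy forward model, as below: a random-walk Metropolis sampler infers a source activity from one measured count rate. All numbers and the attenuation model are invented for illustration; this is not the CEA methodology, which relies on surrogate models of the full gamma transport.

```python
# Hedged sketch: random-walk Metropolis inference of a source activity A from a count rate,
# with a made-up forward model (efficiency times exponential attenuation).
import numpy as np

rng = np.random.default_rng(4)
eff, mu, thickness, sigma_y = 1e-3, 0.12, 5.0, 5.0      # efficiency, cm^-1, cm, noise (s^-1)
forward = lambda A: eff * A * np.exp(-mu * thickness)   # expected count rate (s^-1)

A_true = 2.0e6                                          # Bq
y_obs = rng.normal(forward(A_true), sigma_y)

def log_post(A):                                        # flat prior on A > 0
    return -np.inf if A <= 0 else -0.5 * ((y_obs - forward(A)) / sigma_y) ** 2

chain, A, lp = [], 1.0e6, log_post(1.0e6)
for _ in range(20000):
    A_prop = A + rng.normal(0.0, 5e4)                   # random-walk proposal
    lp_prop = log_post(A_prop)
    if np.log(rng.random()) < lp_prop - lp:
        A, lp = A_prop, lp_prop
    chain.append(A)

post = np.array(chain[5000:])                           # discard burn-in
print("posterior activity: %.3e +/- %.1e Bq" % (post.mean(), post.std()))
```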

  17. Inverse Problems and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander

    2014-01-06

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.

  18. Inverse Problems and Uncertainty Quantification

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann G.

    2014-01-01

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.

  19. Inverse problems and uncertainty quantification

    KAUST Repository

    Litvinenko, Alexander

    2013-12-18

    In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
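
    The linear Bayesian update mentioned in these records reduces, in the purely linear-Gaussian case, to the familiar Kalman formula sketched below; the quadratic update and the functional (polynomial chaos) discretisation are not reproduced, and the numbers are arbitrary.

```python
# Hedged sketch of a linear (Kalman-type) Bayesian update:
# K = C H^T (H C H^T + R)^{-1},  x_post = x_prior + K (y - H x_prior).
import numpy as np

x_prior = np.array([1.0, 0.5])                 # prior mean of the uncertain parameters
C = np.array([[0.30, 0.05],
              [0.05, 0.20]])                   # prior covariance
H = np.array([[1.0, 2.0]])                     # observation operator
R = np.array([[0.10]])                         # measurement noise covariance
y = np.array([2.6])                            # observed data

K = C @ H.T @ np.linalg.inv(H @ C @ H.T + R)   # Kalman gain
x_post = x_prior + K @ (y - H @ x_prior)
C_post = (np.eye(2) - K @ H) @ C
print("posterior mean:", x_post.round(3))
print("posterior cov :\n", C_post.round(3))
```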

  20. Lamb Wave Damage Quantification Using GA-Based LS-SVM

    Directory of Open Access Journals (Sweden)

    Fuqiang Sun

    2017-06-01

    Full Text Available Lamb waves have been reported to be an efficient tool for non-destructive evaluations (NDE) for various application scenarios. However, accurate and reliable damage quantification using the Lamb wave method is still a practical challenge, due to the complex underlying mechanism of Lamb wave propagation and damage detection. This paper presents a Lamb wave damage quantification method using a least square support vector machine (LS-SVM) and a genetic algorithm (GA). Three damage sensitive features, namely, normalized amplitude, phase change, and correlation coefficient, were proposed to describe changes of Lamb wave characteristics caused by damage. In view of commonly used data-driven methods, the GA-based LS-SVM model using the proposed three damage sensitive features was implemented to evaluate the crack size. The GA method was adopted to optimize the model parameters. The results of GA-based LS-SVM were validated using coupon test data and lap joint component test data with naturally developed fatigue cracks. Cases of different loading and manufacturer were also included to further verify the robustness of the proposed method for crack quantification.

  1. Lamb Wave Damage Quantification Using GA-Based LS-SVM.

    Science.gov (United States)

    Sun, Fuqiang; Wang, Ning; He, Jingjing; Guan, Xuefei; Yang, Jinsong

    2017-06-12

    Lamb waves have been reported to be an efficient tool for non-destructive evaluations (NDE) for various application scenarios. However, accurate and reliable damage quantification using the Lamb wave method is still a practical challenge, due to the complex underlying mechanism of Lamb wave propagation and damage detection. This paper presents a Lamb wave damage quantification method using a least square support vector machine (LS-SVM) and a genetic algorithm (GA). Three damage sensitive features, namely, normalized amplitude, phase change, and correlation coefficient, were proposed to describe changes of Lamb wave characteristics caused by damage. In view of commonly used data-driven methods, the GA-based LS-SVM model using the proposed three damage sensitive features was implemented to evaluate the crack size. The GA method was adopted to optimize the model parameters. The results of GA-based LS-SVM were validated using coupon test data and lap joint component test data with naturally developed fatigue cracks. Cases of different loading and manufacturer were also included to further verify the robustness of the proposed method for crack quantification.
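
    Below is a compact sketch of the LS-SVM regression core: with an RBF kernel the dual problem reduces to a single linear system in (b, alpha), and a GA (not shown) would tune gamma and sigma as in the paper. The three input features and the crack-size relation are synthetic stand-ins.

```python
# Hedged sketch of LS-SVM regression with an RBF kernel; hyperparameters are fixed here
# rather than GA-optimized, and the data are synthetic.
import numpy as np

def rbf(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf(X, X, sigma)
    M = np.block([[np.zeros((1, 1)), np.ones((1, n))],
                  [np.ones((n, 1)),  K + np.eye(n) / gamma]])
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]
    return lambda Xq: rbf(Xq, X, sigma) @ alpha + b      # predictor closure

rng = np.random.default_rng(5)
X = rng.uniform(0.0, 1.0, size=(40, 3))        # normalized amplitude, phase change, correlation
crack = 2.0 * X[:, 0] + 1.5 * X[:, 1] - X[:, 2] + rng.normal(0, 0.05, 40)  # toy crack size (mm)
predict = lssvm_fit(X, crack)
print("predicted:", predict(X[:3]).round(2), " true:", crack[:3].round(2))
```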

  2. Subnuclear foci quantification using high-throughput 3D image cytometry

    Science.gov (United States)

    Wadduwage, Dushan N.; Parrish, Marcus; Choi, Heejin; Engelward, Bevin P.; Matsudaira, Paul; So, Peter T. C.

    2015-07-01

    Ionising radiation causes various types of DNA damage, including double strand breaks (DSBs). DSBs are often recognized by the DNA repair protein ATM, which forms gamma-H2AX foci at the site of the DSBs that can be visualized using immunohistochemistry. However, most such experiments are of low throughput in terms of imaging and image analysis techniques. Most of the studies still use manual counting or classification. Hence they are limited to counting a low number of foci per cell (5 foci per nucleus) as the quantification process is extremely labour intensive. Therefore we have developed a high-throughput instrumentation and computational pipeline specialized for gamma-H2AX foci quantification. A population of cells with highly clustered foci inside nuclei was imaged, in 3D with submicron resolution, using an in-house developed high-throughput image cytometer. Imaging speeds as high as 800 cells/second in 3D were achieved by using HiLo wide-field depth-resolved imaging and a remote z-scanning technique. The number of foci per cell nucleus was then quantified using a 3D extended-maxima-transform-based algorithm. Our results suggest that while most other 2D imaging and manual quantification studies can count only up to about 5 foci per nucleus, our method is capable of counting more than 100. Moreover we show that 3D analysis is significantly superior compared to the 2D techniques.
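
    A stripped-down version of the counting step is sketched below: an extended-maxima (h-maxima) transform on a smoothed image followed by connected-component labelling. For brevity it is 2D and uses a synthetic nucleus; the cited pipeline works on 3D stacks and includes additional segmentation steps.

```python
# Hedged 2D sketch of foci counting with an h-maxima transform and labelling; synthetic image.
import numpy as np
from scipy.ndimage import gaussian_filter, label
from skimage.morphology import h_maxima

rng = np.random.default_rng(6)
img = np.zeros((128, 128))
rows, cols = rng.integers(10, 118, size=(2, 25))        # 25 synthetic foci positions
img[rows, cols] = rng.uniform(10, 20, size=25)
img = gaussian_filter(img + rng.normal(0, 0.2, img.shape), sigma=2)   # blur + noise

maxima = h_maxima(img, h=0.2)                           # keep maxima at least h above surroundings
_, n_foci = label(maxima)
print("foci counted:", n_foci)
```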

  3. Uncertainty quantification and error analysis

    Energy Technology Data Exchange (ETDEWEB)

    Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL

    2010-01-01

    UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.

  4. Geometric approximation algorithms

    CERN Document Server

    Har-Peled, Sariel

    2011-01-01

    Exact algorithms for dealing with geometric objects are complicated, hard to implement in practice, and slow. Over the last 20 years a theory of geometric approximation algorithms has emerged. These algorithms tend to be simple, fast, and more robust than their exact counterparts. This book is the first to cover geometric approximation algorithms in detail. In addition, more traditional computational geometry techniques that are widely used in developing such algorithms, like sampling, linear programming, etc., are also surveyed. Other topics covered include approximate nearest-neighbor search, shape approximation, coresets, dimension reduction, and embeddings. The topics covered are relatively independent and are supplemented by exercises. Close to 200 color figures are included in the text to illustrate proofs and ideas.

  5. Influence of Co-57 and CT Transmission Measurements on the Quantification Accuracy and Partial Volume Effect of a Small Animal PET Scanner.

    Science.gov (United States)

    Mannheim, Julia G; Schmid, Andreas M; Pichler, Bernd J

    2017-12-01

    Non-invasive in vivo positron emission tomography (PET) provides high detection sensitivity in the nano- to picomolar range and, in addition to other advantages, the possibility to absolutely quantify the acquired data. The present study focuses on the comparison of transmission data acquired with an X-ray computed tomography (CT) scanner or a Co-57 source for the Inveon small animal PET scanner (Siemens Healthcare, Knoxville, TN, USA), and determines their influence on the quantification accuracy and the partial volume effect (PVE). A special focus was the impact of the performed calibration on the quantification accuracy. Phantom measurements were carried out to determine the quantification accuracy, the influence of the object size on the quantification, and the PVE for different sphere sizes, along the field of view and for different contrast ratios. An influence of the emission activity on the Co-57 transmission measurements was discovered (deviations of up to 24.06 % between measured and true activity), whereas no influence of the emission activity on the CT attenuation correction was identified. Deviations were influenced by the applied calibration factor and by the object size. The PVE demonstrated a dependency on the sphere size, the position within the field of view, the reconstruction and correction algorithms and the count statistics. Depending on the reconstruction algorithm, only ∼30-40 % of the true activity within a small sphere could be resolved. The iterative 3D reconstruction algorithms yielded substantially increased recovery values compared to the analytical and 2D iterative reconstruction algorithms (up to 70.46 % and 80.82 % recovery for the smallest and largest sphere using iterative 3D reconstruction algorithms). The transmission measurement (CT or Co-57 source) used to correct for attenuation did not severely influence the PVE. The analysis of the quantification accuracy and the PVE revealed an influence of the object size, the reconstruction

  6. A simplified Excel® algorithm for estimating the least limiting water range of soils

    Directory of Open Access Journals (Sweden)

    Leão Tairone Paiva

    2004-01-01

    Full Text Available The least limiting water range (LLWR) of soils has been employed as a methodological approach for evaluation of soil physical quality in different agricultural systems, including forestry, grasslands and major crops. However, the absence of a simplified methodology for the quantification of the LLWR has hampered the popularization of its use among researchers and soil managers. Taking this into account, this work has the objective of proposing and describing a simplified algorithm developed in Excel® software for quantification of the LLWR, including the calculation of the critical bulk density, at which the LLWR becomes zero. Despite the simplicity of the procedures and numerical techniques of optimization used, the nonlinear regression produced reliable results when compared to those found in the literature.

  7. The impact of reconstruction method on the quantification of DaTSCAN images

    Energy Technology Data Exchange (ETDEWEB)

    Dickson, John C.; Erlandsson, Kjell; Hutton, Brian F. [UCLH NHS Foundation Trust and University College London, Institute of Nuclear Medicine, London (United Kingdom); Tossici-Bolt, Livia [Southampton University Hospitals NHS Trust, Department of Medical Physics, Southampton (United Kingdom); Sera, Terez [University of Szeged, Department of Nuclear Medicine and Euromedic Szeged, Szeged (Hungary); Varrone, Andrea [Psychiatry Section and Stockholm Brain Institute, Karolinska Institute, Department of Clinical Neuroscience, Stockholm (Sweden); Tatsch, Klaus [EANM/European Network of Excellence for Brain Imaging, Vienna (Austria)

    2010-01-15

    Reconstruction of DaTSCAN brain studies using OS-EM iterative reconstruction offers better image quality and more accurate quantification than filtered back-projection. However, reconstruction must proceed for a sufficient number of iterations to achieve stable and accurate data. This study assessed the impact of the number of iterations on the image quantification, comparing the results of the iterative reconstruction with filtered back-projection data. A striatal phantom filled with {sup 123}I using striatal to background ratios between 2:1 and 10:1 was imaged on five different gamma camera systems. Data from each system were reconstructed using OS-EM (which included depth-independent resolution recovery) with various combinations of iterations and subsets to achieve up to 200 EM-equivalent iterations and with filtered back-projection. Using volume of interest analysis, the relationships between image reconstruction strategy and quantification of striatal uptake were assessed. For phantom filling ratios of 5:1 or less, significant convergence of measured ratios occurred close to 100 EM-equivalent iterations, whereas for higher filling ratios, measured uptake ratios did not display a convergence pattern. Assessment of the count concentrations used to derive the measured uptake ratio showed that nonconvergence of low background count concentrations caused peaking in higher measured uptake ratios. Compared to filtered back-projection, OS-EM displayed larger uptake ratios because of the resolution recovery applied in the iterative algorithm. The number of EM-equivalent iterations used in OS-EM reconstruction influences the quantification of DaTSCAN studies because of incomplete convergence and possible bias in areas of low activity due to the nonnegativity constraint in OS-EM reconstruction. Nevertheless, OS-EM using 100 EM-equivalent iterations provides the best linear discriminatory measure to quantify the uptake in DaTSCAN studies. (orig.)
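
    The sketch below shows the basic MLEM/EM update that OS-EM accelerates by cycling over projection subsets, which is why iterations times subsets is reported above as EM-equivalent iterations. It uses a random toy system matrix, includes no resolution recovery, and is not a vendor implementation.

```python
# Hedged sketch of the EM update underlying OS-EM:  x <- x / (A^T 1) * A^T (y / A x).
import numpy as np

rng = np.random.default_rng(7)
A = rng.uniform(0.0, 1.0, size=(60, 16))        # system matrix: 60 projection bins, 16 voxels
x_true = rng.uniform(1.0, 5.0, 16)
y = rng.poisson(A @ x_true)                     # noisy projection data

x = np.ones(16)                                 # non-negative initial image
sens = A.T @ np.ones(60)                        # sensitivity image A^T 1
for _ in range(100):                            # 100 EM-equivalent iterations
    x *= (A.T @ (y / np.maximum(A @ x, 1e-12))) / sens

print("max relative voxel error: %.2f" % float(np.abs(x - x_true).max() / x_true.max()))
```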

  8. Euclidean shortest paths exact or approximate algorithms

    CERN Document Server

    Li, Fajie

    2014-01-01

    This book reviews algorithms for the exact or approximate solution of shortest-path problems, with a specific focus on a class of algorithms called rubberband algorithms. The coverage includes mathematical proofs for many of the given statements.

  9. WE-G-17A-03: MRIgRT: Quantification of Organ Motion

    International Nuclear Information System (INIS)

    Stanescu, T; Tadic, T; Jaffray, D

    2014-01-01

    Purpose: To develop an MRI-based methodology and tools required for the quantification of organ motion on a dedicated MRI-guided radiotherapy system. A three-room facility, consisting of a TrueBeam 6X linac vault, a 1.5T MR suite and a brachytherapy interventional room, is currently under commissioning at our institution. The MR scanner can move and image in either room for diagnostic and treatment guidance purposes. Methods: A multi-imaging modality (MR, kV) phantom, featuring programmable 3D simple and complex motion trajectories, was used for the validation of several image sorting algorithms. The testing was performed on MRI (e.g. TrueFISP, TurboFLASH), 4D CT and 4D CBCT. The image sorting techniques were based on a) direct image pixel manipulation into columns or rows, b) single and aggregated pixel data tracking and c) using computer vision techniques for global pixel analysis. Subsequently, the motion phantom and sorting algorithms were utilized for commissioning of MR fast imaging techniques for 2D-cine and 4D data rendering. MR imaging protocols were optimized (e.g. readout gradient strength vs. SNR) to minimize the presence of susceptibility-induced distortions, which were reported through phantom experiments and numerical simulations. The system-related distortions were also quantified (dedicated field phantom) and treated as systematic shifts where relevant. Results: Image sorting algorithms were validated for specific MR-based applications such as quantification of organ motion, local data sampling, and 4D MRI for pre-RT delivery with accuracy better than the raw image pixel size (e.g. 1 mm). MR fast imaging sequences were commissioned and imaging strategies were developed to mitigate spatial artifacts with minimal penalty on the image spatial and temporal sampling. Workflows (e.g. liver) were optimized to include the new motion quantification tools for RT planning and daily patient setup verification. Conclusion: Comprehensive methods were developed

  10. Fast geometric algorithms

    International Nuclear Information System (INIS)

    Noga, M.T.

    1984-01-01

    This thesis addresses a number of important problems that fall within the framework of the new discipline of Computational Geometry. The list of topics covered includes sorting and selection, convex hull algorithms, the L1 hull, determination of the minimum encasing rectangle of a set of points, the Euclidean and L1 diameter of a set of points, the metric traveling salesman problem, and finding the superrange of star-shaped and monotone polygons. The main theme of all the work was to develop a set of very fast state-of-the-art algorithms that supersede any rivals in terms of speed and ease of implementation. In some cases existing algorithms were refined; for others new techniques were developed that add to the present database of fast adaptive geometric algorithms. What emerges is a collection of techniques that is successful at merging modern tools developed in analysis of algorithms with those of classical geometry

  11. Unsupervised learning algorithms

    CERN Document Server

    Aydin, Kemal

    2016-01-01

    This book summarizes the state-of-the-art in unsupervised learning. The contributors discuss how with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation have resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...

  12. Composite Differential Search Algorithm

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2014-01-01

    Full Text Available Differential search algorithm (DS) is a relatively new evolutionary algorithm inspired by the Brownian-like random-walk movement which is used by an organism to migrate. It has been verified to be more effective than ABC, JDE, JADE, SADE, EPSDE, GSA, PSO2011, and CMA-ES. In this paper, we propose four improved solution search algorithms, namely “DS/rand/1,” “DS/rand/2,” “DS/current to rand/1,” and “DS/current to rand/2,” to search the new space and enhance the convergence rate for the global optimization problem. In order to verify the performance of the different solution search methods, 23 benchmark functions are employed. Experimental results indicate that the proposed algorithm performs better than, or at least comparably to, the original algorithm when considering the quality of the solution obtained. However, these schemes still cannot achieve the best solution for all functions. In order to further enhance the convergence rate and the diversity of the algorithm, a composite differential search algorithm (CDS) is proposed in this paper. This new algorithm combines three of the proposed search schemes, namely “DS/rand/1,” “DS/rand/2,” and “DS/current to rand/1,” with three control parameters, using a random method to generate the offspring. Experimental results show that CDS has a faster convergence rate and better search ability based on the 23 benchmark functions.
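
    The sketch below illustrates a "rand/1"-type donor generation plus greedy selection, mirroring the differential-style move that the DS/rand/1 scheme is named after. The full DS/CDS machinery (random scale factors, crossover maps, composition of three schemes) is intentionally omitted, and the sphere function is only a stand-in benchmark.

```python
# Hedged sketch of a rand/1-style donor step: x_r1 + F*(x_r2 - x_r3), then greedy selection.
import numpy as np

def rand_1_donors(pop, F=0.8, seed=0):
    rng = np.random.default_rng(seed)
    n = pop.shape[0]
    donors = np.empty_like(pop)
    for i in range(n):
        r1, r2, r3 = rng.choice([j for j in range(n) if j != i], size=3, replace=False)
        donors[i] = pop[r1] + F * (pop[r2] - pop[r3])
    return donors

rng = np.random.default_rng(8)
pop = rng.uniform(-5.0, 5.0, size=(20, 10))
sphere = lambda X: (X ** 2).sum(axis=1)               # benchmark objective (minimize)
donors = rand_1_donors(pop)
better = sphere(donors) < sphere(pop)
pop[better] = donors[better]                          # keep improving members
print("members improved in one step:", int(better.sum()))
```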

  13. Accident sequence quantification with KIRAP

    International Nuclear Information System (INIS)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong.

    1997-01-01

    The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data and, finally, the accident sequence quantification. In the PSA, the accident sequence quantification calculates the core damage frequency and performs importance analysis and uncertainty analysis. Accident sequence quantification requires an understanding of the whole PSA model, because it has to combine all event tree and fault tree models, and requires an efficient computer code, because it takes a long computation time. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures to perform accident sequence quantification, the method to use KIRAP's cut set generator, and the method to perform accident sequence quantification with KIRAP. (author). 6 refs

  14. Accident sequence quantification with KIRAP

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Tae Un; Han, Sang Hoon; Kim, Kil You; Yang, Jun Eon; Jeong, Won Dae; Chang, Seung Cheol; Sung, Tae Yong; Kang, Dae Il; Park, Jin Hee; Lee, Yoon Hwan; Hwang, Mi Jeong

    1997-01-01

    The tasks of probabilistic safety assessment (PSA) consist of the identification of initiating events, the construction of an event tree for each initiating event, the construction of fault trees for the event tree logic, the analysis of reliability data and, finally, the accident sequence quantification. In the PSA, the accident sequence quantification calculates the core damage frequency and performs importance analysis and uncertainty analysis. Accident sequence quantification requires an understanding of the whole PSA model, because it has to combine all event tree and fault tree models, and requires an efficient computer code, because it takes a long computation time. The Advanced Research Group of the Korea Atomic Energy Research Institute (KAERI) has developed the PSA workstation KIRAP (Korea Integrated Reliability Analysis Code Package) for PSA work. This report describes the procedures to perform accident sequence quantification, the method to use KIRAP's cut set generator, and the method to perform accident sequence quantification with KIRAP. (author). 6 refs.
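
    The final quantification step described in these two records can be illustrated with a toy example: given the minimal cut sets of an accident sequence and basic-event probabilities, the sequence frequency is approximated by the rare-event sum or the min-cut upper bound. The events and numbers below are invented; this is not KIRAP itself.

```python
# Hedged sketch of minimal cut set quantification (rare-event approximation and MCUB).
import math

p = {"IE": 1e-2, "A": 3e-3, "B": 5e-4, "C": 2e-2}      # basic-event probabilities/frequencies
cut_sets = [("IE", "A"), ("IE", "B"), ("IE", "A", "C")]

def cut_prob(cs):
    prob = 1.0
    for event in cs:
        prob *= p[event]
    return prob

rare_event = sum(cut_prob(cs) for cs in cut_sets)                 # first-order approximation
mcub = 1.0 - math.prod(1.0 - cut_prob(cs) for cs in cut_sets)     # min-cut upper bound

print("rare-event approximation: %.3e" % rare_event)
print("min-cut upper bound     : %.3e" % mcub)
```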

  15. Foundations of genetic algorithms 1991

    CERN Document Server

    1991-01-01

    Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems.This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition

  16. Combinatorial optimization algorithms and complexity

    CERN Document Server

    Papadimitriou, Christos H

    1998-01-01

    This clearly written, mathematically rigorous text includes a novel algorithmic exposition of the simplex method and also discusses the Soviet ellipsoid algorithm for linear programming; efficient algorithms for network flow, matching, spanning trees, and matroids; the theory of NP-complete problems; approximation algorithms, local search heuristics for NP-complete problems, more. All chapters are supplemented by thought-provoking problems. A useful work for graduate-level students with backgrounds in computer science, operations research, and electrical engineering.

  17. Algorithms in Singular

    Directory of Open Access Journals (Sweden)

    Hans Schonemann

    1996-12-01

    Full Text Available Some algorithms for singularity theory and algebraic geometry. The use of Grobner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Grobner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Grobner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring in 0. Recent versions include algorithms to factorize polynomials and a factorizing Grobner basis algorithm. For a complete description of SINGULAR see [Si].

  18. Uncertainty quantification for hyperbolic and kinetic equations

    CERN Document Server

    Pareschi, Lorenzo

    2017-01-01

    This book explores recent advances in uncertainty quantification for hyperbolic, kinetic, and related problems. The contributions address a range of different aspects, including: polynomial chaos expansions, perturbation methods, multi-level Monte Carlo methods, importance sampling, and moment methods. The interest in these topics is rapidly growing, as their applications have now expanded to many areas in engineering, physics, biology and the social sciences. Accordingly, the book provides the scientific community with a topical overview of the latest research efforts.

  19. Uncertainty Quantification - an Overview

    KAUST Repository

    Litvinenko, Alexander

    2018-03-01

    1. Introduction to UQ 2. Low-rank tensors for representation of big/high-dimensional data 3. Inverse Problem via Bayesian Update 4. R-INLA and advance numerics for spatio-temporal statistics 5. High Performance Computing, parallel algorithms

  20. An Uncertainty Quantification Framework for Remote Sensing Retrievals

    Science.gov (United States)

    Braverman, A. J.; Hobbs, J.

    2017-12-01

    Remote sensing data sets produced by NASA and other space agencies are the result of complex algorithms that infer geophysical state from observed radiances using retrieval algorithms. The processing must keep up with the downlinked data flow, and this necessitates computational compromises that affect the accuracies of retrieved estimates. The algorithms are also limited by imperfect knowledge of physics and of ancillary inputs that are required. All of this contributes to uncertainties that are generally not rigorously quantified by stepping outside the assumptions that underlie the retrieval methodology. In this talk we discuss a practical framework for uncertainty quantification that can be applied to a variety of remote sensing retrieval algorithms. Ours is a statistical approach that uses Monte Carlo simulation to approximate the sampling distribution of the retrieved estimates. We will discuss the strengths and weaknesses of this approach, and provide a case-study example from the Orbiting Carbon Observatory 2 mission.
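
    The Monte Carlo idea described above can be sketched in a few lines: perturb the radiances with the assumed noise, rerun the (here, toy linear) retrieval on each realization, and summarize the spread of the retrieved states as an approximation to the sampling distribution. The forward model, noise level and scalar state are invented for illustration; an operational retrieval such as OCO-2's is far more complex.

```python
# Hedged Monte Carlo sketch: propagate radiance noise through a toy linear retrieval.
import numpy as np

rng = np.random.default_rng(9)
H = np.array([[0.8], [-0.3]])                          # toy 2-channel linear forward operator
offset = np.array([1.0, 2.0])
forward = lambda state: offset + H[:, 0] * state
retrieve = lambda radiance: np.linalg.lstsq(H, radiance - offset, rcond=None)[0][0]

state_true, noise_sd, n_mc = 400.0, 0.5, 2000
estimates = [retrieve(forward(state_true) + rng.normal(0.0, noise_sd, 2))
             for _ in range(n_mc)]
print("Monte Carlo mean %.2f, spread (1-sigma) %.2f" % (np.mean(estimates), np.std(estimates)))
```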

  1. Advancing agricultural greenhouse gas quantification*

    Science.gov (United States)

    Olander, Lydia; Wollenberg, Eva; Tubiello, Francesco; Herold, Martin

    2013-03-01

    1. Introduction Better information on greenhouse gas (GHG) emissions and mitigation potential in the agricultural sector is necessary to manage these emissions and identify responses that are consistent with the food security and economic development priorities of countries. Critical activity data (what crops or livestock are managed in what way) are poor or lacking for many agricultural systems, especially in developing countries. In addition, the currently available methods for quantifying emissions and mitigation are often too expensive or complex or not sufficiently user friendly for widespread use. The purpose of this focus issue is to capture the state of the art in quantifying greenhouse gases from agricultural systems, with the goal of better understanding our current capabilities and near-term potential for improvement, with particular attention to quantification issues relevant to smallholders in developing countries. This work is timely in light of international discussions and negotiations around how agriculture should be included in efforts to reduce and adapt to climate change impacts, and considering that significant climate financing to developing countries in post-2012 agreements may be linked to their increased ability to identify and report GHG emissions (Murphy et al 2010, CCAFS 2011, FAO 2011). 2. Agriculture and climate change mitigation The main agricultural GHGs—methane and nitrous oxide—account for 10%-12% of anthropogenic emissions globally (Smith et al 2008), or around 50% and 60% of total anthropogenic methane and nitrous oxide emissions, respectively, in 2005. Net carbon dioxide fluxes between agricultural land and the atmosphere linked to food production are relatively small, although significant carbon emissions are associated with degradation of organic soils for plantations in tropical regions (Smith et al 2007, FAO 2012). Population growth and shifts in dietary patterns toward more meat and dairy consumption will lead to

  2. Algorithms in combinatorial design theory

    CERN Document Server

    Colbourn, CJ

    1985-01-01

    The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.

  3. Essential algorithms a practical approach to computer algorithms

    CERN Document Server

    Stephens, Rod

    2013-01-01

    A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures s

  4. Algorithmic alternatives

    International Nuclear Information System (INIS)

    Creutz, M.

    1987-11-01

    A large variety of Monte Carlo algorithms are being used for lattice gauge simulations. For purely bosonic theories, present approaches are generally adequate; nevertheless, overrelaxation techniques promise savings by a factor of about three in computer time. For fermionic fields the situation is more difficult and less clear. Algorithms which involve an extrapolation to a vanishing step size are all quite closely related. Methods which do not require such an approximation tend to require computer time which grows as the square of the volume of the system. Recent developments combining global accept/reject stages with Langevin or microcanonical updatings promise to reduce this growth to V^(4/3).

  5. Combinatorial algorithms

    CERN Document Server

    Hu, T C

    2002-01-01

    Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristic and near optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9

  6. Model Uncertainty Quantification Methods In Data Assimilation

    Science.gov (United States)

    Pathiraja, S. D.; Marshall, L. A.; Sharma, A.; Moradkhani, H.

    2017-12-01

    Data Assimilation involves utilising observations to improve model predictions in a seamless and statistically optimal fashion. Its applications are wide-ranging, from improving weather forecasts to tracking targets such as in the Apollo 11 mission. The use of Data Assimilation methods in high dimensional complex geophysical systems is an active area of research, where there exist many opportunities to enhance existing methodologies. One of the central challenges is in model uncertainty quantification; the outcome of any Data Assimilation study is strongly dependent on the uncertainties assigned to both observations and models. I focus on developing improved model uncertainty quantification methods that are applicable to challenging real world scenarios. These include developing methods for cases where the system states are only partially observed, where there is little prior knowledge of the model errors, and where the model error statistics are likely to be highly non-Gaussian.

  7. SPECT quantification of regional radionuclide distributions

    International Nuclear Information System (INIS)

    Jaszczak, R.J.; Greer, K.L.; Coleman, R.E.

    1986-01-01

    SPECT quantification of regional radionuclide activities within the human body is affected by several physical and instrumental factors including attenuation of photons within the patient, Compton scattered events, the system's finite spatial resolution and object size, finite number of detected events, partial volume effects, the radiopharmaceutical biokinetics, and patient and/or organ motion. Furthermore, other instrumentation factors such as calibration of the center-of-rotation, sampling, and detector nonuniformities will affect the SPECT measurement process. These factors are described, together with examples of compensation methods that are currently available for improving SPECT quantification. SPECT offers the potential to improve in vivo estimates of absorbed dose, provided the acquisition, reconstruction, and compensation procedures are adequately implemented and utilized. 53 references, 2 figures

  8. Uncertainty Quantification in Numerical Aerodynamics

    KAUST Repository

    Litvinenko, Alexander

    2017-05-16

    We consider the uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate the propagation of these uncertainties into the solution (pressure, velocity and density fields as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations, which involve a large number of variables. In the numerical section we compare five methods, namely quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, a gradient-enhanced version of Kriging, radial basis functions and point collocation polynomial chaos, in their efficiency in estimating statistics of aerodynamic performance upon random perturbation to the airfoil geometry [D. Liu et al. '17]. For modeling we used the TAU code, developed at DLR, Germany.

  9. Pore REconstruction and Segmentation (PORES) method for improved porosity quantification of nanoporous materials

    Energy Technology Data Exchange (ETDEWEB)

    Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)

    2015-01-15

    Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. Especially accurate quantification of pore-space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach, for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region to the pores during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.

  10. Algorithms for parallel computers

    International Nuclear Information System (INIS)

    Churchhouse, R.F.

    1985-01-01

    Until relatively recently almost all the algorithms for use on computers had been designed on the (usually unstated) assumption that they were to be run on single processor, serial machines. With the introduction of vector processors, array processors and interconnected systems of mainframes, minis and micros, however, various forms of parallelism have become available. The advantage of parallelism is that it offers increased overall processing speed, but it also raises some fundamental questions, including: (i) Which, if any, of the existing 'serial' algorithms can be adapted for use in the parallel mode? (ii) How close to optimal can such adapted algorithms be and, where relevant, what are the convergence criteria? (iii) How can we design new algorithms specifically for parallel systems? (iv) For multi-processor systems, how can we handle the software aspects of the interprocessor communications? Aspects of these questions illustrated by examples are considered in these lectures. (orig.)

  11. Verb aspect, alternations and quantification

    Directory of Open Access Journals (Sweden)

    Svetla Koeva

    2015-11-01

    Full Text Available Verb aspect, alternations and quantification. In this paper we briefly discuss the nature of Bulgarian verb aspect and argue that verb aspect pairs are different lexical units with different (although related) meanings, different argument structures (reflecting categories, explicitness and referential status of arguments) and different sets of semantic and syntactic alternations. The verb prefixes resulting in the derivation of perfective verbs can in some cases be interpreted as lexical quantifiers as well. Thus Bulgarian verb aspect is related (in different ways) both to the potential for the generation of alternations and to prefixal lexical quantification. It is shown that the scope of lexical quantification by means of verbal prefixes is the quantified verb phrase, and that the scope remains constant in all derived alternations. The paper concerns the basic issues of these complex problems, while the detailed description of the conditions satisfying a particular alternation or a particular lexical quantification is the subject of a more detailed study.

  12. Autodriver algorithm

    Directory of Open Access Journals (Sweden)

    Anna Bourmistrova

    2011-02-01

    Full Text Available The autodriver algorithm is an intelligent method to eliminate the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on coinciding the actual vehicle center of rotation and the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increase of forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.

  13. An open tool for input function estimation and quantification of dynamic PET FDG brain scans.

    Science.gov (United States)

    Bertrán, Martín; Martínez, Natalia; Carbajal, Guillermo; Fernández, Alicia; Gómez, Álvaro

    2016-08-01

    Positron emission tomography (PET) analysis of clinical studies is mostly restricted to qualitative evaluation. Quantitative analysis of PET studies is highly desirable to be able to compute an objective measurement of the process of interest in order to evaluate treatment response and/or compare patient data. But implementation of quantitative analysis generally requires the determination of the input function: the arterial blood or plasma activity which indicates how much tracer is available for uptake in the brain. The purpose of our work was to share with the community an open software tool that can assist in the estimation of this input function, and the derivation of a quantitative map from the dynamic PET study. Arterial blood sampling during the PET study is the gold standard method to get the input function, but is uncomfortable and risky for the patient so it is rarely used in routine studies. To overcome the lack of a direct input function, different alternatives have been devised and are available in the literature. These alternatives derive the input function from the PET image itself (image-derived input function) or from data gathered from previous similar studies (population-based input function). In this article, we present ongoing work that includes the development of a software tool that integrates several methods with novel strategies for the segmentation of blood pools and parameter estimation. The tool is available as an extension to the 3D Slicer software. Tests on phantoms were conducted in order to validate the implemented methods. We evaluated the segmentation algorithms over a range of acquisition conditions and vasculature size. Input function estimation algorithms were evaluated against ground truth of the phantoms, as well as on their impact over the final quantification map. End-to-end use of the tool yields quantification maps with [Formula: see text] relative error in the estimated influx versus ground truth on phantoms. The main

  14. MDCT quantification is the dominant parameter in decision-making regarding chest tube drainage for stable patients with traumatic pneumothorax

    Science.gov (United States)

    Cai, Wenli; Lee, June-Goo; Fikry, Karim; Yoshida, Hiroyuki; Novelline, Robert; de Moya, Marc

    2013-01-01

    It is commonly believed that the size of a pneumothorax is an important determinant of the treatment decision, in particular regarding whether chest tube drainage (CTD) is required. However, the volumetric quantification of pneumothoraces has not routinely been performed in clinics. In this paper, we introduced an automated computer-aided volumetry (CAV) scheme for quantification of the volume of pneumothoraces in chest multi-detector CT (MDCT) images. Moreover, we investigated the impact of accurate pneumothorax volume on the performance of decision-making regarding CTD in the management of traumatic pneumothoraces. For this purpose, an occurrence frequency map was calculated for quantitative analysis of the importance of each clinical parameter in the decision-making regarding CTD by a computer simulation of decision-making using a genetic algorithm (GA) and a support vector machine (SVM). A total of 14 clinical parameters, including the volume of pneumothorax calculated by our CAV scheme, were collected as parameters available for decision-making. The results showed that volume was the dominant parameter in decision-making regarding CTD, with an occurrence frequency value of 1.00. The results also indicated that the inclusion of volume provided the best performance, which was statistically significant compared to the other tests in which volume was excluded from the clinical parameters. This study provides the scientific evidence for the application of the CAV scheme in MDCT volumetric quantification of pneumothoraces in the management of clinically stable chest trauma patients with traumatic pneumothorax. PMID:22560899

  15. Quantification of informed opinion

    International Nuclear Information System (INIS)

    Rasmuson, D.M.

    1985-01-01

    The objective of this session, Quantification of Informed Opinion, is to provide the statistician with a better understanding of this important area. The NRC uses informed opinion, sometimes called engineering judgment or subjective judgment, in many areas. Sometimes informed opinion is the only source of information that exists, especially in phenomenological areas, such as steam explosions, where experiments are costly and phenomena are very difficult to measure. There are many degrees of informed opinion. These vary from the weatherman who makes predictions concerning relatively high probability events with a large data base to the phenomenological expert who must use his intuition tempered with basic knowledge and little or no measured data to predict the behavior of events with a low probability of occurrence. The first paper in this session provides the reader with an overview of the subject area. The second paper provides some aspects that must be considered in the collection of informed opinion to improve the quality of the information. The final paper contains an example of the use of informed opinion in the area of seismic hazard characterization. These papers should be useful to researchers and statisticians who need to collect and use informed opinion in their work

  16. Quantification In Neurology

    Directory of Open Access Journals (Sweden)

    Netravati M

    2005-01-01

    Full Text Available There has been a distinct shift of emphasis in clinical neurology in the last few decades. A few years ago, it was sufficient for a clinician to precisely record the history, document signs, establish a diagnosis and write a prescription. In the present context, there has been a significant intrusion of scientific culture into clinical practice. Several criteria have been proposed, refined and redefined to ascertain an accurate diagnosis for many neurological disorders. The introduction of the concepts of impairment, disability, handicap and quality of life has added a new dimension to the measurement of health and disease, and neurological disorders are no exception. "Best guess" treatment modalities are no longer accepted, and evidence-based medicine has become an integral component of medical care. Traditional treatments need validation and new therapies require rigorous trials. Thus, proper quantification has become essential, both in practice and in research methodology in neurology. While this aspect is widely acknowledged, there is limited access to a comprehensive document pertaining to measurements in neurology. The following description is a critical appraisal of various measurements and also provides certain commonly used rating scales/scores in neurological practice.

  17. A refined methodology for modeling volume quantification performance in CT

    Science.gov (United States)

    Chen, Baiyu; Wilson, Joshua; Samei, Ehsan

    2014-03-01

    The utility of the CT lung nodule volume quantification technique depends on the precision of the quantification. To enable the evaluation of quantification precision, we previously developed a mathematical model that related precision to image resolution and noise properties in uniform backgrounds in terms of an estimability index (e'). The e' was shown to predict empirical precision across 54 imaging and reconstruction protocols, but with different correlation qualities for FBP and iterative reconstruction (IR) due to the non-linearity of IR impacted by anatomical structure. To better account for the non-linearity of IR, this study aimed to refine the noise characterization of the model in the presence of textured backgrounds. Repeated scans of an anthropomorphic lung phantom were acquired. Subtracted images were used to measure the image quantum noise, which was then used to adjust the noise component of the e' calculation measured from a uniform region. In addition to the model refinement, the validation of the model was further extended to 2 nodule sizes (5 and 10 mm) and 2 segmentation algorithms. Results showed that the magnitude of IR's quantum noise was significantly higher in structured backgrounds than in uniform backgrounds (ASiR, 30-50%; MBIR, 100-200%). With the refined model, the correlation between e' values and empirical precision no longer depended on the reconstruction algorithm. In conclusion, the model with refined noise characterization reflected the nonlinearity of iterative reconstruction in structured backgrounds, and further showed successful prediction of quantification precision across a variety of nodule sizes, dose levels, slice thicknesses, reconstruction algorithms, and segmentation software.

  18. An automatic quantification system for MS lesions with integrated DICOM structured reporting (DICOM-SR) for implementation within a clinical environment

    Science.gov (United States)

    Jacobs, Colin; Ma, Kevin; Moin, Paymann; Liu, Brent

    2010-03-01

    Multiple Sclerosis (MS) is a common neurological disease affecting the central nervous system characterized by pathologic changes including demyelination and axonal injury. MR imaging has become the most important tool to evaluate the disease progression of MS which is characterized by the occurrence of white matter lesions. Currently, radiologists evaluate and assess the multiple sclerosis lesions manually by estimating the lesion volume and amount of lesions. This process is extremely time-consuming and sensitive to intra- and inter-observer variability. Therefore, there is a need for automatic segmentation of the MS lesions followed by lesion quantification. We have developed a fully automatic segmentation algorithm to identify the MS lesions. The segmentation algorithm is accelerated by parallel computing using Graphics Processing Units (GPU) for practical implementation into a clinical environment. Subsequently, characterized quantification of the lesions is performed. The quantification results, which include lesion volume and amount of lesions, are stored in a structured report together with the lesion location in the brain to establish a standardized representation of the disease progression of the patient. The development of this structured report in collaboration with radiologists aims to facilitate outcome analysis and treatment assessment of the disease and will be standardized based on DICOM-SR. The results can be distributed to other DICOM-compliant clinical systems that support DICOM-SR such as PACS. In addition, the implementation of a fully automatic segmentation and quantification system together with a method for storing, distributing, and visualizing key imaging and informatics data in DICOM-SR for MS lesions improves the clinical workflow of radiologists and visualizations of the lesion segmentations and will provide 3-D insight into the distribution of lesions in the brain.

  19. Uncertainty quantification for PZT bimorph actuators

    Science.gov (United States)

    Bravo, Nikolas; Smith, Ralph C.; Crews, John

    2018-03-01

    In this paper, we discuss the development of a high-fidelity model for a PZT bimorph actuator used for micro-air vehicles, which include the Robobee. We developed the model for the actuator using the homogenized energy model (HEM) framework, which quantifies the nonlinear, hysteretic, and rate-dependent behavior inherent to PZT in dynamic operating regimes. We then discuss an inverse problem on the model and include local and global sensitivity analysis of the parameters in the high-fidelity model. Finally, we discuss the results of Bayesian inference and uncertainty quantification on the HEM.

  20. Algorithmic Self

    DEFF Research Database (Denmark)

    Markham, Annette

    This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.

  1. Handbook of Memetic Algorithms

    CERN Document Server

    Cotta, Carlos; Moscato, Pablo

    2012-01-01

    Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems.  The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes.   “Handbook of Memetic Algorithms” organizes, in a structured way, all the most important results in the field of MAs since their earliest definition until now.  A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, uncertainties, is analysed separately and, for each problem,  memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...

  2. ROMA: representation and quantification of module activity from target expression data

    Directory of Open Access Journals (Sweden)

    Loredana eMartignetti

    2016-02-01

    Full Text Available In many analyses of high-throughput data in systems biology, there is a need to quantify the activity of a set of genes in individual samples. A typical example is the case where it is necessary to estimate the activity of a transcription factor (which is often not directly measurable) from the expression of its target genes. We present here ROMA (Representation and quantification Of Module Activities), Java software designed for fast and robust computation of the activity of gene sets (or modules) with coordinated expression. ROMA activity quantification is based on the simplest uni-factor linear model of gene regulation, which approximates the expression data of a gene set by its first principal component. The proposed algorithm implements novel functionalities: it provides several method modifications for principal component computation, including weighted, robust and centered methods; it distinguishes overdispersed modules (based on the variance explained by the first principal component) and coordinated modules (based on the significance of the spectral gap); finally, it computes the statistical significance of the estimated module overdispersion or coordination. ROMA can be applied in many contexts, from estimating differential activities of transcription factors to finding overdispersed pathways in single-cell transcriptomics data. We describe here the principles of ROMA, providing several practical examples of its use. ROMA source code is available at https://github.com/sysbio-curie/Roma.
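
    The uni-factor model described above amounts to taking the first principal component of a gene-module expression matrix and reading the sample scores as module activity. A minimal numpy sketch of that idea, with invented toy data rather than ROMA itself, might look like this:

```python
# Minimal sketch: module activity as the first principal component of a
# gene-set expression matrix. Data and dimensions are hypothetical.
import numpy as np

rng = np.random.default_rng(0)
expr = rng.normal(size=(20, 50))          # 20 module genes x 50 samples (toy data)

centered = expr - expr.mean(axis=1, keepdims=True)   # center each gene
U, s, Vt = np.linalg.svd(centered, full_matrices=False)

gene_weights = U[:, 0]                    # loadings of the first component
module_activity = s[0] * Vt[0]            # one activity score per sample

# "Overdispersion" in ROMA's sense relates to the variance carried by PC1.
explained = s[0]**2 / np.sum(s**2)
print(f"variance explained by PC1: {explained:.2f}")
```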

  3. Myocardial Infarction Area Quantification using High-Resolution SPECT Images in Rats

    Energy Technology Data Exchange (ETDEWEB)

    Oliveira, Luciano Fonseca Lemos de [Divisão de Cardiologia, Departamento de Clínica Médica, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, SP (Brazil); Mejia, Jorge [Faculdade de Medicina de São José do Rio Preto, São José do Rio Preto, SP (Brazil); Carvalho, Eduardo Elias Vieira de; Lataro, Renata Maria; Frassetto, Sarita Nasbine [Divisão de Cardiologia, Departamento de Clínica Médica, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, SP (Brazil); Fazan, Rubens Jr.; Salgado, Hélio Cesar [Departamento de Fisiologia, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, SP (Brazil); Galvis-Alonso, Orfa Yineth [Faculdade de Medicina de São José do Rio Preto, São José do Rio Preto, SP (Brazil); Simões, Marcus Vinícius, E-mail: msimoes@fmrp.usp.br [Divisão de Cardiologia, Departamento de Clínica Médica, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, SP (Brazil)

    2013-07-15

    Imaging techniques enable in vivo sequential assessment of the morphology and function of animal organs in experimental models. We developed a device for high-resolution single photon emission computed tomography (SPECT) imaging based on an adapted pinhole collimator. To determine the accuracy of this system for quantification of myocardial infarct area in rats. Thirteen male Wistar rats (250 g) underwent experimental myocardial infarction by occlusion of the left coronary artery. After 4 weeks, SPECT images were acquired 1.5 hours after intravenous injection of 555 MBq of 99mTc-Sestamibi. The tomographic reconstruction was performed using specially developed software based on the Maximum Likelihood algorithm. The analysis of the data included the correlation between the area of perfusion defects detected by scintigraphy and the extent of myocardial fibrosis assessed by histology. The images showed a high target organ/background ratio with adequate visualization of the left ventricular walls and cavity. All animals presenting infarction areas were correctly identified by the perfusion images. There was no difference between the infarct area as measured by SPECT (21.1 ± 21.2%) and by histology (21.7 ± 22.0%; p=0.45). There was a strong correlation between individual values of the area of infarction measured by these two methods. The developed system presented adequate spatial resolution and high accuracy for the detection and quantification of myocardial infarction areas, constituting a low-cost and versatile option for high-resolution SPECT imaging of small rodents.
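
    The record states only that a Maximum Likelihood algorithm was used for the tomographic reconstruction; the following is a generic ML-EM iteration sketched in numpy with toy data, not the authors' implementation.

```python
# Generic ML-EM update for emission tomography, sketched with a toy system
# matrix A and Poisson-noisy projections y (all values are invented).
import numpy as np

rng = np.random.default_rng(1)
A = rng.random((60, 25))           # system matrix: 60 projection bins x 25 voxels
x_true = rng.random(25)            # "true" activity distribution
y = rng.poisson(A @ x_true * 50)   # noisy projection data

x = np.ones(25)                    # positive initial estimate
sens = A.T @ np.ones(A.shape[0])   # sensitivity image (back-projection of ones)

for _ in range(50):
    forward = A @ x                          # forward projection
    ratio = y / np.maximum(forward, 1e-12)   # measured / estimated projections
    x *= (A.T @ ratio) / sens                # multiplicative ML-EM update

print(np.round(x, 3))
```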

  4. Histogram-Based Thresholding for Detection and Quantification of Hemorrhages in Retinal Images

    Directory of Open Access Journals (Sweden)

    Hussain Fadhel Hamdan Jaafar

    2016-12-01

    Full Text Available Retinal image analysis is commonly used for the detection and quantification of diabetic retinopathy. In retinal images, dark lesions, including hemorrhages and microaneurysms, are the earliest warnings of vision loss. In this paper, a new algorithm for the extraction and quantification of hemorrhages in fundus images is presented. Hemorrhage candidates are extracted in a preliminary step as a coarse segmentation, followed by a fine segmentation step. Local variation processes are applied in the coarse segmentation step to determine the boundaries of all candidates with distinct edges. Fine segmentation processes are based on histogram thresholding to extract real hemorrhages from the segmented candidates locally. The proposed method was trained and tested using an image dataset of 153 manually labeled retinal images. At the pixel level, the proposed method could identify abnormal retinal images with 90.7% sensitivity and an 85.1% predictive value. These performance measurements indicate that the technique could be used for computer-aided mass screening of retinal diseases.
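
    A hedged sketch of the coarse-then-fine idea described above: a local-variation map proposes candidate regions, and each candidate is then thresholded on its own local histogram (Otsu's method is used here as a stand-in). The phantom image, window size and percentile are illustrative assumptions, not the paper's settings.

```python
# Two-stage dark-lesion candidate segmentation on a synthetic fundus-like patch.
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

rng = np.random.default_rng(2)
img = rng.normal(0.6, 0.05, (128, 128))
img[40:60, 40:60] = rng.normal(0.3, 0.05, (20, 20))   # dark "lesion" patch

# Coarse step: local standard deviation highlights regions with distinct edges.
local_var = ndimage.generic_filter(img, np.std, size=7)
candidates = local_var > np.percentile(local_var, 95)
labels, n = ndimage.label(candidates)

# Fine step: threshold each candidate's neighbourhood using its own histogram.
mask = np.zeros_like(img, dtype=bool)
for region in ndimage.find_objects(labels):
    patch = img[region]
    t = threshold_otsu(patch)
    mask[region] |= patch < t        # hemorrhages are darker than background

print("lesion pixels found:", int(mask.sum()))
```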

  5. Standardless quantification methods in electron probe microanalysis

    Energy Technology Data Exchange (ETDEWEB)

    Trincavelli, Jorge, E-mail: trincavelli@famaf.unc.edu.ar [Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Ciudad Universitaria, 5000 Córdoba (Argentina); Instituto de Física Enrique Gaviola, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Medina Allende s/n, Ciudad Universitaria, 5000 Córdoba (Argentina); Limandri, Silvina, E-mail: s.limandri@conicet.gov.ar [Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Ciudad Universitaria, 5000 Córdoba (Argentina); Instituto de Física Enrique Gaviola, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Medina Allende s/n, Ciudad Universitaria, 5000 Córdoba (Argentina); Bonetto, Rita, E-mail: bonetto@quimica.unlp.edu.ar [Centro de Investigación y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Facultad de Ciencias Exactas, de la Universidad Nacional de La Plata, Calle 47 N° 257, 1900 La Plata (Argentina)

    2014-11-01

    The elemental composition of a solid sample can be determined by electron probe microanalysis with or without the use of standards. The standardless algorithms are considerably faster than the methods that require standards; they are useful when a suitable set of standards is not available or for rough samples, and they also help to solve the problem of current variation, for example, in equipment with a cold field emission gun. Due to significant advances in the accuracy achieved during the last years, a product of the successive efforts made to improve the description of the generation, absorption and detection of X-rays, the standardless methods have increasingly become an interesting option for the user. Nevertheless, up to now, algorithms that use standards are still more precise than standardless methods. It is important to remark that care must be taken with results provided by standardless methods that normalize the calculated concentration values to 100%, unless an estimate of the errors is reported. In this work, a comprehensive discussion of the key features of the main standardless quantification methods, as well as the level of accuracy achieved by them, is presented. - Highlights: • Standardless methods are a good alternative when no suitable standards are available. • Their accuracy reaches 10% for 95% of the analyses when traces are excluded. • Some of them are suitable for the analysis of rough samples.

  6. Contact-impact algorithms on parallel computers

    International Nuclear Information System (INIS)

    Zhong Zhihua; Nilsson, Larsgunnar

    1994-01-01

    Contact-impact algorithms on parallel computers are discussed within the context of explicit finite element analysis. The algorithms concerned include a contact searching algorithm and an algorithm for contact force calculations. The contact searching algorithm is based on the territory concept of the general HITA algorithm. However, no distinction is made between different contact bodies, or between different contact surfaces. All contact segments from contact boundaries are taken as a single set. Hierarchy territories and contact territories are expanded. A three-dimensional bucket sort algorithm is used to sort contact nodes. The defence node algorithm is used in the calculation of contact forces. Both the contact searching algorithm and the defence node algorithm are implemented on the connection machine CM-200. The performance of the algorithms is examined under different circumstances, and numerical results are presented. ((orig.))
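
    As an illustration of the three-dimensional bucket sort mentioned above, the sketch below bins contact node coordinates into spatial buckets so that candidate neighbours can be gathered from adjacent buckets instead of an all-pairs search. The node set and bucket size are invented, and this is not the HITA or defence node implementation itself.

```python
# Spatial bucket sort of contact node coordinates (illustrative values only).
import numpy as np
from collections import defaultdict

rng = np.random.default_rng(3)
nodes = rng.random((1000, 3)) * 10.0     # 1000 contact node coordinates
bucket_size = 0.5                        # roughly the contact segment size

buckets = defaultdict(list)
for i, p in enumerate(nodes):
    key = tuple((p // bucket_size).astype(int))   # integer 3D bucket index
    buckets[key].append(i)

def neighbours(i):
    """Candidate neighbours of node i: its bucket plus the 26 adjacent buckets."""
    kx, ky, kz = (nodes[i] // bucket_size).astype(int)
    out = []
    for dx in (-1, 0, 1):
        for dy in (-1, 0, 1):
            for dz in (-1, 0, 1):
                out.extend(buckets.get((kx + dx, ky + dy, kz + dz), []))
    return [j for j in out if j != i]

print(len(neighbours(0)), "candidate neighbours for node 0")
```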

  7. The Copenhagen Triage Algorithm

    DEFF Research Database (Denmark)

    Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia

    2016-01-01

    is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...

  8. Parallel algorithms

    CERN Document Server

    Casanova, Henri; Robert, Yves

    2008-01-01

    ""…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi

  9. Uncertainty quantification for environmental models

    Science.gov (United States)

    Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming

    2012-01-01

    Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10

  10. Strategy study of quantification harmonization of SUV in PET/CT images

    International Nuclear Information System (INIS)

    Fischer, Andreia Caroline Fischer da Silveira

    2014-01-01

    In clinical practice, PET/CT images are often analyzed qualitatively, by visual comparison of tumor lesion and normal tissue uptake, and semi-quantitatively, by means of a parameter called SUV (Standardized Uptake Value). To ensure that longitudinal studies acquired on different scanners are interchangeable, and that quantification information is comparable, it is necessary to establish a strategy to harmonize the quantification of SUV. The aim of this study is to evaluate a strategy to harmonize the quantification of PET/CT images performed with different scanner models and manufacturers. For this purpose, a survey of the technical characteristics of the equipment and the acquisition protocols of clinical images from different PET/CT services in the state of Rio Grande do Sul was conducted. For each scanner, the accuracy of SUV quantification and the Recovery Coefficient (RC) curves were determined, using the clinically relevant and available reconstruction parameters. From these data, harmonized performance specifications among the evaluated scanners were identified, as well as the algorithm that produces, for each one, the most accurate quantification. Finally, the most appropriate reconstruction parameters to harmonize the SUV quantification in each scanner, either regionally or internationally, were identified. It was found that the RC values of the analyzed scanners were overestimated by up to 38%, particularly for objects larger than 17 mm. These results demonstrate the need for further optimization, through modification of the reconstruction parameters and even a change of the reconstruction algorithm used in each scanner. It was observed that there is a decoupling between the best image for PET/CT qualitative analysis and the best image for quantification studies. Thus, the choice of reconstruction method should be tied to the purpose of the PET/CT study in question, since the same reconstruction algorithm is not adequate, in one scanner, for qualitative
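
    A minimal sketch of the recovery-coefficient idea underlying such harmonization: RC is the ratio of measured to true activity concentration per sphere size, checked against an agreed tolerance band. The sphere diameters, concentrations and limits below are hypothetical placeholders, not values from this study or any published standard.

```python
# Illustrative recovery-coefficient (RC) check for SUV harmonization.
import numpy as np

diameters_mm = np.array([10, 13, 17, 22, 28, 37])
true_conc = 4.0 * np.ones_like(diameters_mm, dtype=float)       # kBq/mL (assumed)
measured_conc = np.array([2.1, 2.9, 3.6, 4.3, 4.9, 5.2])        # kBq/mL (invented)

rc = measured_conc / true_conc

# Flag reconstructions whose RC falls outside a placeholder tolerance band.
lower, upper = 0.75, 1.10
for d, r in zip(diameters_mm, rc):
    status = "ok" if lower <= r <= upper else "out of band"
    print(f"{d} mm sphere: RC = {r:.2f} ({status})")
```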

  11. Parallel Algorithms and Patterns

    Energy Technology Data Exchange (ETDEWEB)

    Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2016-06-16

    This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
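
    One of the patterns named above, the prefix scan, in a short sketch: a blocked two-phase scan that mirrors how the work would be split across workers, checked against numpy's serial cumulative sum.

```python
# Inclusive prefix scan: serial reference vs. a blocked (two-phase) version.
import numpy as np

data = np.arange(1, 17)

reference = np.cumsum(data)                 # serial reference

# Phase 1: scan each block independently; Phase 2: add the carry of the
# preceding blocks -- the structure a parallel implementation would exploit.
blocks = np.split(data, 4)
local = [np.cumsum(b) for b in blocks]
carries = np.cumsum([0] + [l[-1] for l in local[:-1]])
scanned = np.concatenate([l + c for l, c in zip(local, carries)])

assert np.array_equal(scanned, reference)
print(scanned)
```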

  12. Validation of a method for accurate and highly reproducible quantification of brain dopamine transporter SPECT studies

    DEFF Research Database (Denmark)

    Jensen, Peter S; Ziebell, Morten; Skouboe, Glenna

    2011-01-01

    In nuclear medicine brain imaging, it is important to delineate regions of interest (ROIs) so that the outcome is both accurate and reproducible. The purpose of this study was to validate a new time-saving algorithm (DATquan) for accurate and reproducible quantification of the striatal dopamine transporter (DAT) with appropriate radioligands and SPECT and without the need for structural brain scanning.

  13. Automatic Drusen Quantification and Risk Assessment of Age-related Macular Degeneration on Color Fundus Images

    NARCIS (Netherlands)

    Grinsven, M.J.J.P. van; Lechanteur, Y.T.E.; Ven, J.P.H. van de; Ginneken, B. van; Hoyng, C.B.; Theelen, T.; Sanchez, C.I.

    2013-01-01

    PURPOSE: To evaluate a machine learning algorithm that allows for computer aided diagnosis (CAD) of non-advanced age-related macular degeneration (AMD) by providing an accurate detection and quantification of drusen location, area and size. METHODS: Color fundus photographs of 407 eyes without AMD

  14. Where are the parallel algorithms?

    Science.gov (United States)

    Voigt, R. G.

    1985-01-01

    Four paradigms that can be useful in developing parallel algorithms are discussed. These include computational complexity analysis, changing the order of computation, asynchronous computation, and divide and conquer. Each is illustrated with an example from scientific computation, and it is shown that computational complexity must be used with great care or an inefficient algorithm may be selected.

  15. Synthesis of Greedy Algorithms Using Dominance Relations

    Science.gov (United States)

    Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.

    2010-01-01

    Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
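
    Activity selection, cited above as a classic greedy example, in its textbook earliest-finish-time form; this is included only as an illustration and does not reproduce the paper's dominance-relation derivation.

```python
# Classic greedy activity selection: always take the compatible activity that
# finishes earliest; later choices are never hurt by this (a dominance argument).
def select_activities(intervals):
    """Return a maximum-size set of pairwise non-overlapping (start, end) intervals."""
    chosen = []
    last_end = float("-inf")
    for start, end in sorted(intervals, key=lambda iv: iv[1]):  # earliest finish first
        if start >= last_end:
            chosen.append((start, end))
            last_end = end
    return chosen

activities = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9), (6, 10), (8, 11)]
print(select_activities(activities))   # e.g. [(1, 4), (5, 7), (8, 11)]
```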

  16. Brain-inspired algorithms for retinal image analysis

    NARCIS (Netherlands)

    ter Haar Romeny, B.M.; Bekkers, E.J.; Zhang, J.; Abbasi-Sureshjani, S.; Huang, F.; Duits, R.; Dasht Bozorg, Behdad; Berendschot, T.T.J.M.; Smit-Ockeloen, I.; Eppenhof, K.A.J.; Feng, J.; Hannink, J.; Schouten, J.; Tong, M.; Wu, H.; van Triest, J.W.; Zhu, S.; Chen, D.; He, W.; Xu, L.; Han, P.; Kang, Y.

    2016-01-01

    Retinal image analysis is a challenging problem due to the precise quantification required and the huge numbers of images produced in screening programs. This paper describes a series of innovative brain-inspired algorithms for automated retinal image analysis, recently developed for the RetinaCheck

  17. Rapid quantification and sex determination of forensic evidence materials.

    Science.gov (United States)

    Andréasson, Hanna; Allen, Marie

    2003-11-01

    DNA quantification of forensic evidence is very valuable for an optimal use of the available biological material. Moreover, sex determination is of great importance as additional information in criminal investigations as well as in identification of missing persons, no suspect cases, and ancient DNA studies. While routine forensic DNA analysis based on short tandem repeat markers includes a marker for sex determination, analysis of samples containing scarce amounts of DNA is often based on mitochondrial DNA, and sex determination is not performed. In order to allow quantification and simultaneous sex determination on minute amounts of DNA, an assay based on real-time PCR analysis of a marker within the human amelogenin gene has been developed. The sex determination is based on melting curve analysis, while an externally standardized kinetic analysis allows quantification of the nuclear DNA copy number in the sample. This real-time DNA quantification assay has proven to be highly sensitive, enabling quantification of single DNA copies. Although certain limitations were apparent, the system is a rapid, cost-effective, and flexible assay for analysis of forensic casework samples.
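
    A sketch of the kinetic, externally standardized quantification described above: Ct values of known standards are regressed against log10 copy number and an unknown sample is interpolated. All numbers are invented for illustration; the melting-curve sex determination is an instrument-side analysis and is not reproduced here.

```python
# qPCR standard-curve quantification with toy Ct values.
import numpy as np

std_copies = np.array([1e1, 1e2, 1e3, 1e4, 1e5])   # known standard dilutions
std_ct = np.array([33.1, 29.6, 26.2, 22.8, 19.4])  # measured Ct values (invented)

slope, intercept = np.polyfit(np.log10(std_copies), std_ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0            # amplification efficiency

unknown_ct = 27.5                                  # hypothetical casework sample
copies = 10 ** ((unknown_ct - intercept) / slope)
print(f"efficiency ~ {efficiency:.2f}, estimated copies ~ {copies:.0f}")
```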

  18. Quantification of the influence of the choice of the algorithm and planning system on the calculation of a treatment plan; Cuantificacion de la influencia que tiene la eleccion del algoritmo y del sistema de planificacion en el calculo de una dosimetria clinica

    Energy Technology Data Exchange (ETDEWEB)

    Moral, F. del; Ramos, A.; Salgado, M.; Andrade, B; Munoz, V.

    2010-07-01

    In this work, an analysis of the influence of the choice of the algorithm or planning system on the calculation of the same treatment plan is introduced. For this purpose, specific software has been developed for comparing plans of a series of IMRT cases of prostate and head and neck cancer calculated using the convolution, superposition and fast superposition algorithms implemented in the XiO 4.40 planning system (CMS). It has also been used for the comparison of the same treatment plan for lung pathology calculated in XiO with the mentioned algorithms, and calculated in the Plan 4.1 planning system (Brainlab) using its pencil beam algorithm. Differences in dose among the treatment plans have been quantified using a set of metrics. The recommendation of a careful choice of the algorithm for clinical dosimetry has been numerically confirmed. (Author).

  19. Quantification of abdominal aortic deformation after EVAR

    Science.gov (United States)

    Demirci, Stefanie; Manstad-Hulaas, Frode; Navab, Nassir

    2009-02-01

    Quantification of abdominal aortic deformation is an important requirement for the evaluation of endovascular stenting procedures and the further refinement of stent graft design. During endovascular aortic repair (EVAR) treatment, the aortic shape is subject to severe deformation that is imposed by medical instruments such as guide wires, catheters, and the stent graft. This deformation can affect the flow characteristics and morphology of the aorta, which have been shown to be elicitors of stent graft failure and a reason for the reappearance of aneurysms. We present a method for quantifying the deformation of an aneurysmatic aorta imposed by an inserted stent graft device. The outline of the procedure includes initial rigid alignment of the two abdominal scans, segmentation of abdominal vessel trees, and automatic reduction of their centerline structures to one specified region of interest around the aorta. This is accomplished by preprocessing and remodeling of the pre- and postoperative aortic shapes before performing a non-rigid registration. We further narrow the resulting displacement fields to only include local non-rigid deformation and therefore eliminate all remaining global rigid transformations. Finally, deformations for specified locations can be calculated from the resulting displacement fields. In order to evaluate our method, experiments for the extraction of aortic deformation fields are conducted on 15 patient datasets from endovascular aortic repair (EVAR) treatment. A visual assessment of the registration results and evaluation of the usage of deformation quantification were performed by two vascular surgeons and one interventional radiologist who are all experts in EVAR procedures.

  20. Algorithmic chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Fontana, W.

    1990-12-13

    In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful for discussing the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.

  1. Recursive Algorithm For Linear Regression

    Science.gov (United States)

    Varanasi, S. V.

    1988-01-01

    Order of model determined easily. Linear-regression algorithm includes recursive equations for the coefficients of a model of increased order. The algorithm eliminates duplicative calculations and facilitates the search for the minimum order of linear-regression model that fits a set of data satisfactorily.
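
    A hedged sketch in the spirit of the record: the polynomial order is grown by orthogonalizing each new column against those already fitted, so earlier coefficients are not recomputed and the minimum satisfactory order is found incrementally. The exact recursions of the cited algorithm are not reproduced; the data, tolerance and maximum order are invented.

```python
# Order-recursive polynomial regression via Gram-Schmidt orthogonalization.
import numpy as np

rng = np.random.default_rng(4)
x = np.linspace(-1, 1, 60)
y = 1.5 - 2.0 * x + 0.8 * x**3 + rng.normal(0, 0.05, x.size)

basis, coeffs = [], []
residual = y.copy()
for order in range(6):
    col = x**order
    for q in basis:                       # orthogonalize against fitted columns
        col = col - (col @ q) * q
    q = col / np.linalg.norm(col)
    c = residual @ q                      # new coefficient; old ones untouched
    residual = residual - c * q
    basis.append(q)
    coeffs.append(c)
    rms = np.sqrt(np.mean(residual**2))
    print(f"order {order}: residual RMS = {rms:.4f}")
    if rms < 0.06:                        # stop at the minimum satisfactory order
        break
```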

  2. Partitional clustering algorithms

    CERN Document Server

    2015-01-01

    This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
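
    Partitional clustering in its most familiar form is k-means; a minimal scikit-learn sketch on synthetic data, with the dataset and number of clusters chosen arbitrarily for illustration.

```python
# k-means as a representative partitional (nonhierarchical) clustering algorithm.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.datasets import make_blobs

X, _ = make_blobs(n_samples=300, centers=3, cluster_std=0.8, random_state=0)

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("inertia (within-cluster sum of squares):", round(km.inertia_, 2))
print("cluster sizes:", np.bincount(km.labels_))
```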

  3. The Parallel C++ Statistical Library ‘QUESO’: Quantification of Uncertainty for Estimation, Simulation and Optimization

    KAUST Repository

    Prudencio, Ernesto E.

    2012-01-01

    QUESO is a collection of statistical algorithms and programming constructs supporting research into the uncertainty quantification (UQ) of models and their predictions. It has been designed with three objectives: it should (a) be sufficiently abstract in order to handle a large spectrum of models, (b) be algorithmically extensible, allowing an easy insertion of new and improved algorithms, and (c) take advantage of parallel computing, in order to handle realistic models. Such objectives demand a combination of an object-oriented design with robust software engineering practices. QUESO is written in C++, uses MPI, and leverages libraries already available to the scientific community. We describe some UQ concepts, present QUESO, and list planned enhancements.

  4. Wavelets in quantification of liver tumors in contrasted computed tomography images

    International Nuclear Information System (INIS)

    Rodrigues, Bruna T.; Alvarez, Matheus; Souza, Rafael T.F.; Miranda, Jose R.A.; Romeiro, Fernando G.; Pina, Diana R. de; Trindade, Andre Petean

    2012-01-01

    This paper presents an original methodology for liver tumor segmentation based on the wavelet transform. A virtual phantom was constructed with the same mean and standard deviation of gray-level intensity as the measured liver tissue. The optimized algorithm had a sensitivity ranging from 0.81 to 0.83, with a specificity of 0.95, for differentiating hepatic tumors from normal tissue. We obtained 96% agreement between the pixels segmented by an experienced radiologist and by the algorithm presented here. According to the results shown in this work, the algorithm is suitable to begin testing the quantification of liver tumors in retrospective surveys. (author)
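
    The paper's exact pipeline is not reproduced in the abstract, so the following is only an illustrative sketch of the underlying idea: take the low-frequency band of a single-level 2-D Haar wavelet transform and flag pixels whose intensity deviates strongly from the liver statistics. The Haar level, the threshold factor `k`, and deriving the statistics from the image itself (instead of from the virtual phantom described above) are assumptions.

```python
import numpy as np

def haar2d(img):
    """One level of a 2-D Haar wavelet transform (image sides must be even)."""
    a = (img[0::2, :] + img[1::2, :]) / 2.0   # row-wise average
    d = (img[0::2, :] - img[1::2, :]) / 2.0   # row-wise detail
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def candidate_lesion_mask(img, k=2.0):
    """Flag low-frequency regions whose intensity deviates from the mean by
    more than k standard deviations (statistics taken from the image itself
    here; the paper derives them from a virtual phantom)."""
    LL, *_ = haar2d(img.astype(float))
    mu, sigma = LL.mean(), LL.std()
    return np.abs(LL - mu) > k * sigma
```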

  5. Boosting foundations and algorithms

    CERN Document Server

    Schapire, Robert E

    2012-01-01

    Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
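
    To make the "combining weak rules of thumb" idea concrete, here is a minimal AdaBoost sketch with one-feature threshold stumps as the weak learners; the brute-force stump search, the number of rounds, and the numerical floor on the weighted error are illustrative choices, not details taken from the book.

```python
import numpy as np

def adaboost_stumps(X, y, n_rounds=50):
    """Minimal AdaBoost with threshold stumps; y must contain +/-1 labels."""
    n, d = X.shape
    w = np.full(n, 1.0 / n)
    ensemble = []                                  # (alpha, feature, threshold, sign)
    for _ in range(n_rounds):
        best = None
        for j in range(d):
            for thr in np.unique(X[:, j]):
                for s in (1, -1):
                    pred = s * np.where(X[:, j] > thr, 1, -1)
                    err = w[pred != y].sum()
                    if best is None or err < best[0]:
                        best = (err, j, thr, s, pred)
        err, j, thr, s, pred = best
        alpha = 0.5 * np.log((1 - err) / max(err, 1e-12))
        w *= np.exp(-alpha * y * pred)             # up-weight misclassified points
        w /= w.sum()
        ensemble.append((alpha, j, thr, s))
    return ensemble

def predict(ensemble, X):
    score = sum(a * s * np.where(X[:, j] > t, 1, -1) for a, j, t, s in ensemble)
    return np.sign(score)
```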

  6. Collagen Quantification in Tissue Specimens.

    Science.gov (United States)

    Coentro, João Quintas; Capella-Monsonís, Héctor; Graceffa, Valeria; Wu, Zhuning; Mullen, Anne Maria; Raghunath, Michael; Zeugolis, Dimitrios I

    2017-01-01

    Collagen is the major extracellular protein in mammals. Accurate quantification of collagen is essential in the biomaterials (e.g., reproducible collagen scaffold fabrication), drug discovery (e.g., assessment of collagen in pathophysiologies, such as fibrosis), and tissue engineering (e.g., quantification of cell-synthesized collagen) fields. Although measuring hydroxyproline content is the most widely used method to quantify collagen in biological specimens, the process is very laborious. To this end, the Sircol™ Collagen Assay is widely used due to its inherent simplicity and convenience. However, this method leads to overestimation of collagen content due to the interaction of Sirius red with basic amino acids of non-collagenous proteins. Herein, we describe the addition of an ultrafiltration purification step in the process to accurately determine collagen content in tissues.

  7. Preliminary study on computer automatic quantification of brain atrophy

    International Nuclear Information System (INIS)

    Li Chuanfu; Zhou Kangyuan

    2006-01-01

    Objective: To study the variability of normal brain volume with sex and age, and to put forward an objective standard for automatic computer quantification of brain atrophy. Methods: The cranial volume, brain volume and brain parenchymal fraction (BPF) of 487 cases of brain atrophy (310 males, 177 females) and 1901 normal subjects (993 males, 908 females) were calculated with the newly developed algorithm for automatic quantification of brain atrophy. Using polynomial curve fitting, the mathematical relationship between BPF and age in normal subjects was analyzed. Results: The cranial volume, brain volume and BPF of normal subjects were (1 271 322 ± 128 699) mm³, (1 211 725 ± 122 077) mm³ and (95.3471 ± 2.3453)%, respectively, and those of atrophy subjects were (1 276 900 ± 125 180) mm³, (1 203 400 ± 117 760) mm³ and (91.8115 ± 2.3035)%, respectively. The difference in BPF between the two groups was highly significant (P < 0.05). The expression P(x) = -0.0008x² + 0.0193x + 96.9999 accurately described the mathematical relationship between BPF and age in normal subjects (lower limit of the 95% CI: y = -0.0008x² + 0.0184x + 95.1090). Conclusion: The lower limit of the 95% confidence interval of the relationship between BPF and age could be used as an objective criterion for automatic computer quantification of brain atrophy. (authors)
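
    A direct use of the reported fit is to compare a measured BPF against the age-dependent lower limit of the normal range; the sketch below simply evaluates the quadratic coefficients quoted in the abstract, and the function names and decision rule are our own illustration rather than the authors' software.

```python
def bpf_lower_limit(age):
    """Lower bound of the 95% CI of normal brain parenchymal fraction (%)
    as a function of age, using the coefficients reported in the abstract."""
    return -0.0008 * age**2 + 0.0184 * age + 95.1090

def is_atrophic(bpf_percent, age):
    """Flag atrophy when measured BPF falls below the age-specific lower limit."""
    return bpf_percent < bpf_lower_limit(age)

# e.g. a BPF of 91.8% at age 60 falls below the normal lower limit (about 93.3%)
print(is_atrophic(91.8, 60))   # True
```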

  8. Tissue quantification for development of pediatric phantom

    International Nuclear Information System (INIS)

    Alves, A.F.F.; Miranda, J.R.A.; Pina, D.R.

    2013-01-01

    Optimization of the risk-benefit ratio is a major concern in pediatric radiology because of the greater vulnerability of children, compared with adults, to the late somatic and genetic effects of radiation exposure. In Brazil, head trauma is estimated to account for 18% of deaths in the 1-5 year age group, and radiography is the primary diagnostic test for the detection of skull fractures. Since image quality is essential to ensure the identification of anatomical structures and to minimize errors of diagnostic interpretation, this paper proposes the development and construction of homogeneous skull phantoms for the 1-5 year age group. The homogeneous phantoms were constructed from the classification and quantification of the tissues present in the skulls of pediatric patients. In this procedure, computational algorithms written in Matlab were used to quantify the distinct biological tissues present in the anatomical regions studied, using retrospective CT images. Preliminary measurements show that, for ages 1-5 years, a pediatric skull region with an average anteroposterior diameter of 145.73 ± 2.97 mm can be represented by 92.34 ± 5.22 mm of Lucite and 1.75 ± 0.21 mm of aluminum plates in a patient-equivalent phantom (PEP) arrangement. Once constructed, the phantoms will be used for image and dose optimization of pediatric protocols in computed radiography examinations.

  9. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, that employ matrix factorizations, incur a cubic cost which quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right hand sides. Second, for this linear system we developed a novel, mixed precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much needed quadratic cost but in addition offers excellent opportunities for scaling at massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance 73% of theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications. Copyright © 2009 ACM.
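
    The core trick, estimating the diagonal of an inverse matrix stochastically, can be sketched in a few lines. The sketch below uses Rademacher probe vectors and a dense direct solve as a stand-in for the mixed-precision iterative refinement described above; the sample count and function name are arbitrary choices.

```python
import numpy as np

def estimate_inverse_diagonal(A, n_samples=200, seed=0):
    """Hutchinson-style stochastic estimator of diag(A^{-1}).
    Each Rademacher probe vector v contributes v * (A^{-1} v); averaging over
    probes converges to the diagonal.  The solve is done directly here for
    clarity; in practice it would be replaced by an iterative solver with
    mixed-precision refinement, as in the paper."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    acc = np.zeros(n)
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=n)
        x = np.linalg.solve(A, v)          # stands in for an iterative solver
        acc += v * x
    return acc / n_samples
```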

  10. Majorization arrow in quantum-algorithm design

    International Nuclear Information System (INIS)

    Latorre, J.I.; Martin-Delgado, M.A.

    2002-01-01

    We apply majorization theory to study the quantum algorithms known so far and find that there is a majorization principle underlying the way they operate. Grover's algorithm is a neat instance of this principle where majorization works step by step until the optimal target state is found. Extensions of this situation are also found in algorithms based on quantum adiabatic evolution and the family of quantum phase-estimation algorithms, including Shor's algorithm. We state that in quantum algorithms the time arrow is a majorization arrow.

  11. Lung involvement quantification in chest radiographs; Quantificacao de comprometimento pulmonar em radiografias de torax

    Energy Technology Data Exchange (ETDEWEB)

    Giacomini, Guilherme; Alvarez, Matheus; Oliveira, Marcela de; Miranda, Jose Ricardo A. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R.; Pereira, Paulo C.M.; Ribeiro, Sergio M., E-mail: giacomini@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Faculdade de Medicina. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2014-12-15

    Tuberculosis (TB), caused by Mycobacterium tuberculosis, is an infectious disease that remains a global health problem. Chest radiography is the method commonly employed to assess the evolution of TB. Quantification of chest abnormalities, which is important for assessing the evolution of the disease and its treatment and for comparing different treatments, is usually performed on CT scans; however, precise quantification is not feasible given the number of CT scans that would be required. The purpose of this work is to develop a methodology for quantifying lung damage caused by TB from chest radiographs. An algorithm for computational processing of the examinations was developed in Matlab; it creates a 3D representation of the lungs with the compromised, dilated regions inside. Lung lesions were also quantified for the same patients from CT scans. The measurements from the two methods were compared and showed a strong correlation. Applying Bland-Altman statistical analysis, all samples fell within the limits of agreement at a 95% confidence level. The results showed an average variation of around 13% between the two quantification methods, suggesting the effectiveness and applicability of the method developed, with a better risk-benefit for the patient and cost-benefit for the institution. (author)
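
    The Bland-Altman agreement analysis mentioned above is straightforward to reproduce; the following is a generic sketch (bias and 95% limits of agreement) rather than the authors' Matlab code, and the argument names are assumptions.

```python
import numpy as np

def bland_altman(radiograph_vol, ct_vol):
    """Bland-Altman agreement statistics between two quantification methods:
    mean difference (bias) and 95% limits of agreement."""
    a = np.asarray(radiograph_vol, float)
    b = np.asarray(ct_vol, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)
```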

  12. Strawberry: Fast and accurate genome-guided transcript reconstruction and quantification from RNA-Seq.

    Science.gov (United States)

    Liu, Ruolin; Dickerson, Julie

    2017-11-01

    We propose a novel method and software tool, Strawberry, for transcript reconstruction and quantification from RNA-Seq data under the guidance of genome alignment and independent of gene annotation. Strawberry consists of two modules: assembly and quantification. The novelty of Strawberry is that the two modules use different optimization frameworks but utilize the same data graph structure, which allows a highly efficient, expandable and accurate algorithm for dealing with large data. The assembly module parses aligned reads into splicing graphs, and uses network flow algorithms to select the most likely transcripts. The quantification module uses a latent class model to assign read counts from the nodes of splicing graphs to transcripts. Strawberry simultaneously estimates the transcript abundances and corrects for sequencing bias through an EM algorithm. Based on simulations, Strawberry outperforms Cufflinks and StringTie in terms of both assembly and quantification accuracies. In an evaluation on a real data set, the transcript expression estimated by Strawberry has the highest correlation with Nanostring probe counts, an independent experimental measure of transcript expression. Strawberry is written in C++14, and is available as open source software at https://github.com/ruolin/strawberry under the MIT license.

  13. Elementary functions algorithms and implementation

    CERN Document Server

    Muller, Jean-Michel

    2016-01-01

    This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...

  14. Reference genes for real-time PCR quantification of messenger RNAs and microRNAs in mouse model of obesity.

    Science.gov (United States)

    Matoušková, Petra; Bártíková, Hana; Boušová, Iva; Hanušová, Veronika; Szotáková, Barbora; Skálová, Lenka

    2014-01-01

    Obesity and the metabolic syndrome are an increasing health problem worldwide. Among other approaches, nutritional intervention using phytochemicals is an important method for the treatment and prevention of this disease. Recent studies have shown that certain phytochemicals can alter the expression of specific genes and microRNAs (miRNAs) that play a fundamental role in the pathogenesis of obesity. For the study of obesity and its treatment, monosodium glutamate (MSG)-injected mice, which develop central obesity, insulin resistance and liver lipid accumulation, are a frequently used animal model. To understand the mechanism of phytochemical action in obese animals, the study of the expression of selected genes together with miRNA quantification is extremely important. For this purpose, real-time quantitative PCR is a sensitive and reproducible method, but it depends entirely on proper normalization. The aim of the present study was to identify appropriate reference genes for mRNA and miRNA quantification in MSG mice treated with green tea catechins, potential anti-obesity phytochemicals. Two sets of reference genes were tested: the first set contained seven genes commonly used for normalization of messenger RNA; the second set of candidate reference genes included ten small RNAs for normalization of miRNA. The expression stability of these reference genes was tested upon treatment of mice with catechins using the geNorm, NormFinder and BestKeeper algorithms. The selected normalizers for mRNA quantification were tested and validated on the expression of quinone oxidoreductase, a biotransformation enzyme known to be modified by catechins. The effect of the selected normalizers for miRNA quantification was tested on an obesity-related and a diabetes-related miRNA, miR-221 and miR-29b, respectively. Finally, the combinations B2M/18S/HPRT1 and miR-16/sno234 were validated as optimal reference genes for mRNA and miRNA quantification in the liver, and 18S/RPlP0/HPRT1 and sno234/miR-186 in the small intestine of MSG mice. These
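
    Among the stability algorithms cited, geNorm is the simplest to sketch: a candidate reference gene's M value is the average standard deviation of its log-ratios to the other candidates, and a lower M means more stable expression. The sketch below is a simplified, numpy-only version of that statistic, not the published geNorm tool; the expected input scale is an assumption.

```python
import numpy as np

def genorm_m_values(expr):
    """Simplified geNorm-style stability measure (lower M = more stable).
    `expr` is an (n_samples, n_genes) array of relative expression values on a
    linear scale (e.g. 2**-Cq).  For each candidate gene, M is the mean standard
    deviation of its log2 ratio to every other candidate across samples."""
    logx = np.log2(expr)
    n_genes = logx.shape[1]
    M = np.empty(n_genes)
    for j in range(n_genes):
        ratios = logx[:, j:j + 1] - np.delete(logx, j, axis=1)
        M[j] = ratios.std(axis=0, ddof=1).mean()
    return M
```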

  15. Benchmarking monthly homogenization algorithms

    Science.gov (United States)

    Venema, V. K. C.; Mestre, O.; Aguilar, E.; Auer, I.; Guijarro, J. A.; Domonkos, P.; Vertacnik, G.; Szentimrey, T.; Stepanek, P.; Zahradnicek, P.; Viarre, J.; Müller-Westermeier, G.; Lakatos, M.; Williams, C. N.; Menne, M.; Lindau, R.; Rasol, D.; Rustemeier, E.; Kolokythas, K.; Marinova, T.; Andresen, L.; Acquaotta, F.; Fratianni, S.; Cheval, S.; Klancar, M.; Brunetti, M.; Gruber, C.; Prohom Duran, M.; Likso, T.; Esteban, P.; Brandsma, T.

    2011-08-01

    The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added. Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve precipitation data
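
    Metric (i) above, the centered root-mean-square error against the true homogeneous series, can be written down directly; the sketch below is one plausible reading of that metric (long-term means removed before differencing) and is not taken from the COST-HOME validation code.

```python
import numpy as np

def centered_rmse(homogenized, truth):
    """Centered root-mean-square error between a homogenized series and the
    true homogeneous series: the means are removed first, so the score
    measures how well the shape of the series is recovered."""
    h = np.asarray(homogenized, float)
    t = np.asarray(truth, float)
    return np.sqrt(np.mean(((h - h.mean()) - (t - t.mean())) ** 2))
```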

  16. Uncertainty Quantification Bayesian Framework for Porous Media Flows

    Science.gov (United States)

    Demyanov, V.; Christie, M.; Erbas, D.

    2005-12-01

    Uncertainty quantification is an increasingly important aspect of many areas of applied science, where the challenge is to make reliable predictions about the performance of complex physical systems in the absence of complete or reliable data. Predicting the flow of fluids through subsurface reservoirs is an example of a complex system where accuracy in prediction is needed (e.g., in the oil industry it is essential for financial reasons). Simulation of fluid flow in oil reservoirs is usually carried out using large, commercially written finite-difference simulators that solve conservation equations describing the multi-phase flow through the porous reservoir rocks, which is a highly computationally expensive task. This work examines a Bayesian framework for uncertainty quantification in porous media flows that uses a stochastic sampling algorithm to generate models that match observed time series data. The framework is flexible enough for a wide range of general physical/statistical parametric models, which are used to describe the temporal dynamics of the underlying hydro-geological process. The approach is based on exploration of the parameter space and updating of the prior beliefs about which model definitions are most likely. Optimization problems for highly parametric physical models usually have multiple solutions, which affect the uncertainty of the resulting predictions. A stochastic search algorithm (e.g., a genetic algorithm) allows multiple "good enough" models to be identified in the parameter space. Furthermore, inference over the generated model ensemble via an MCMC-based algorithm evaluates the posterior probability of the generated models and quantifies the uncertainty of the predictions. Machine learning algorithms - artificial neural networks (ANNs) - are used to speed up the identification of regions in parameter space where good matches to observed data can be found. The adaptive nature of ANNs allows different ways of integrating them into the Bayesian framework to be developed: as direct time
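
    A minimal building block of such a framework is a random-walk Metropolis sampler over the reservoir-model parameters, with the flow simulator hidden behind a misfit (negative log-likelihood) callable. The sketch below assumes a symmetric Gaussian proposal and a user-supplied `misfit` function; it is a generic MCMC illustration, not the authors' implementation.

```python
import numpy as np

def metropolis_history_match(misfit, theta0, n_steps=10_000, step=0.1, seed=0):
    """Random-walk Metropolis sampler over model parameters.
    `misfit(theta)` returns the negative log-likelihood of the observed
    production data given parameters theta (e.g. a sum-of-squares mismatch
    computed by a flow simulator; here any Python callable)."""
    rng = np.random.default_rng(seed)
    theta = np.asarray(theta0, float)
    current = misfit(theta)
    chain = []
    for _ in range(n_steps):
        proposal = theta + step * rng.standard_normal(theta.shape)
        prop_misfit = misfit(proposal)
        # Accept with probability exp(current misfit - proposal misfit).
        if np.log(rng.random()) < current - prop_misfit:
            theta, current = proposal, prop_misfit
        chain.append(theta.copy())
    return np.array(chain)
```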

  17. Treatment Algorithm for Ameloblastoma

    Directory of Open Access Journals (Sweden)

    Madhumati Singh

    2014-01-01

    Full Text Available Ameloblastoma is the second most common benign odontogenic tumour (Shafer et al. 2006), constituting 1–3% of all cysts and tumours of the jaw, with locally aggressive behaviour, a high recurrence rate, and a malignant potential (Chaine et al. 2009). Various treatment algorithms for ameloblastoma have been reported; however, a universally accepted approach remains unsettled and controversial (Chaine et al. 2009). The treatment algorithm to be chosen depends on size (Escande et al. 2009; Sampson and Pogrel 1999), anatomical location (Feinberg and Steinberg 1996), histologic variant (Philipsen and Reichart 1998), and anatomical involvement (Jackson et al. 1996). In this paper, various such treatment modalities, including enucleation with peripheral osteotomy, partial maxillectomy, segmental resection with fibula graft reconstruction, and radical resection with rib graft reconstruction, are reviewed together with their recurrence rates and a study of five cases.

  18. Pseudo-deterministic Algorithms

    OpenAIRE

    Goldwasser , Shafi

    2012-01-01

    In this talk we describe a new type of probabilistic algorithm which we call Bellagio Algorithms: a randomized algorithm which is guaranteed to run in expected polynomial time, and to produce a correct and unique solution with high probability. These algorithms are pseudo-deterministic: they can not be distinguished from deterministic algorithms in polynomial time by a probabilistic polynomial time observer with black box access to the algorithm. We show a necessary an...

  19. New seismograph includes filters

    Energy Technology Data Exchange (ETDEWEB)

    1979-11-02

    The new Nimbus ES-1210 multichannel signal-enhancement seismograph from EG&G Geometrics has recently been redesigned to include multimode signal filters on each amplifier. The ES-1210F is a shallow-exploration seismograph for near-surface exploration such as depth-to-bedrock determination, geological hazard location, mineral exploration, and landslide investigations.

  20. Array architectures for iterative algorithms

    Science.gov (United States)

    Jagadish, Hosagrahar V.; Rao, Sailesh K.; Kailath, Thomas

    1987-01-01

    Regular mesh-connected arrays are shown to be isomorphic to a class of so-called regular iterative algorithms. For a wide variety of problems it is shown how to obtain appropriate iterative algorithms and then how to translate these algorithms into arrays in a systematic fashion. Several 'systolic' arrays presented in the literature are shown to be specific cases of the variety of architectures that can be derived by the techniques presented here. These include arrays for Fourier Transform, Matrix Multiplication, and Sorting.

  1. Analytic device including nanostructures

    KAUST Repository

    Di Fabrizio, Enzo M.; Fratalocchi, Andrea; Totero Gongora, Juan Sebastian; Coluccio, Maria Laura; Candeloro, Patrizio; Cuda, Gianni

    2015-01-01

    A device for detecting an analyte in a sample comprising: an array including a plurality of pixels, each pixel including a nanochain comprising: a first nanostructure, a second nanostructure, and a third nanostructure, wherein size of the first nanostructure is larger than that of the second nanostructure, and size of the second nanostructure is larger than that of the third nanostructure, and wherein the first nanostructure, the second nanostructure, and the third nanostructure are positioned on a substrate such that when the nanochain is excited by an energy, an optical field between the second nanostructure and the third nanostructure is stronger than an optical field between the first nanostructure and the second nanostructure, wherein the array is configured to receive a sample; and a detector arranged to collect spectral data from a plurality of pixels of the array.

  2. MDCT quantification is the dominant parameter in decision-making regarding chest tube drainage for stable patients with traumatic pneumothorax.

    Science.gov (United States)

    Cai, Wenli; Lee, June-Goo; Fikry, Karim; Yoshida, Hiroyuki; Novelline, Robert; de Moya, Marc

    2012-07-01

    It is commonly believed that the size of a pneumothorax is an important determinant of the treatment decision, in particular regarding whether chest tube drainage (CTD) is required. However, volumetric quantification of pneumothoraces has not routinely been performed in clinics. In this paper, we introduce an automated computer-aided volumetry (CAV) scheme for quantification of pneumothorax volume in chest multi-detector CT (MDCT) images. Moreover, we investigated the impact of accurate pneumothorax volume on improving the performance of decision-making regarding CTD in the management of traumatic pneumothoraces. For this purpose, an occurrence frequency map was calculated for quantitative analysis of the importance of each clinical parameter in the decision-making regarding CTD, by a computer simulation of decision-making using a genetic algorithm (GA) and a support vector machine (SVM). A total of 14 clinical parameters, including the pneumothorax volume calculated by our CAV scheme, were collected as parameters available for decision-making. The results showed that volume was the dominant parameter in decision-making regarding CTD, with an occurrence frequency value of 1.00. The results also indicated that the inclusion of volume provided the best performance, which was statistically significantly better than that of the other tests in which volume was excluded from the clinical parameters. This study provides scientific evidence for the application of the CAV scheme to MDCT volumetric quantification of pneumothoraces in the management of clinically stable chest trauma patients with traumatic pneumothorax. Copyright © 2012 Elsevier Ltd. All rights reserved.

  3. Saskatchewan resources. [including uranium

    Energy Technology Data Exchange (ETDEWEB)

    1979-09-01

    The production of chemicals and minerals for the chemical industry in Saskatchewan are featured, with some discussion of resource taxation. The commodities mentioned include potash, fatty amines, uranium, heavy oil, sodium sulfate, chlorine, sodium hydroxide, sodium chlorate and bentonite. Following the successful outcome of the Cluff Lake inquiry, the uranium industry is booming. Some developments and production figures for Gulf Minerals, Amok, Cenex and Eldorado are mentioned.

  4. Inverse problems and uncertainty quantification

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann G.

    2013-01-01

    computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example

  5. A retrodictive stochastic simulation algorithm

    International Nuclear Information System (INIS)

    Vaughan, T.G.; Drummond, P.D.; Drummond, A.J.

    2010-01-01

    In this paper we describe a simple method for inferring the initial states of systems evolving stochastically according to master equations, given knowledge of the final states. This is achieved through the use of a retrodictive stochastic simulation algorithm which complements the usual predictive stochastic simulation approach. We demonstrate the utility of this new algorithm by applying it to example problems, including the derivation of likely ancestral states of a gene sequence given a Markovian model of genetic mutation.
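
    For orientation, the predictive counterpart that the retrodictive algorithm complements is the standard Gillespie stochastic simulation algorithm, sketched below; the retrodictive variant itself is not reproduced here. The interface (a stoichiometry matrix plus a propensity callable) is an assumed, generic one.

```python
import numpy as np

def gillespie(x0, stoich, rates, t_max, seed=0):
    """Standard (predictive) stochastic simulation of a master equation.
    `stoich` is an (n_reactions, n_species) matrix of state changes and
    `rates(x)` returns the propensity of each reaction in state x."""
    rng = np.random.default_rng(seed)
    t = 0.0
    x = np.array(x0, float)
    path = [(t, x.copy())]
    while t < t_max:
        a = rates(x)
        a0 = a.sum()
        if a0 <= 0:                       # no reaction can fire
            break
        t += rng.exponential(1.0 / a0)    # time to next event
        j = rng.choice(len(a), p=a / a0)  # which reaction fires
        x = x + stoich[j]
        path.append((t, x.copy()))
    return path
```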

  6. Algorithmic causets

    International Nuclear Information System (INIS)

    Bolognesi, Tommaso

    2011-01-01

    In the context of quantum gravity theories, several researchers have proposed causal sets as appropriate discrete models of spacetime. We investigate families of causal sets obtained from two simple models of computation - 2D Turing machines and network mobile automata - that operate on 'high-dimensional' supports, namely 2D arrays of cells and planar graphs, respectively. We study a number of quantitative and qualitative emergent properties of these causal sets, including dimension, curvature and localized structures, or 'particles'. We show how the possibility to detect and separate particles from background space depends on the choice between a global or local view at the causal set. Finally, we spot very rare cases of pseudo-randomness, or deterministic chaos; these exhibit a spontaneous phenomenon of 'causal compartmentation' that appears as a prerequisite for the occurrence of anything of physical interest in the evolution of spacetime.

  7. Kinetic quantification of plyometric exercise intensity.

    Science.gov (United States)

    Ebben, William P; Fauth, McKenzie L; Garceau, Luke R; Petushek, Erich J

    2011-12-01

    Ebben, WP, Fauth, ML, Garceau, LR, and Petushek, EJ. Kinetic quantification of plyometric exercise intensity. J Strength Cond Res 25(12): 3288-3298, 2011-Quantification of plyometric exercise intensity is necessary to understand the characteristics of these exercises and the proper progression of this mode of exercise. The purpose of this study was to assess the kinetic characteristics of a variety of plyometric exercises. This study also sought to assess gender differences in these variables. Twenty-six men and 23 women with previous experience in performing plyometric training served as subjects. The subjects performed a variety of plyometric exercises including line hops, 15.24-cm cone hops, squat jumps, tuck jumps, countermovement jumps (CMJs), loaded CMJs equal to 30% of 1 repetition maximum squat, depth jumps normalized to the subject's jump height (JH), and single leg jumps. All plyometric exercises were assessed with a force platform. Outcome variables associated with the takeoff, airborne, and landing phase of each plyometric exercise were evaluated. These variables included the peak vertical ground reaction force (GRF) during takeoff, the time to takeoff, flight time, JH, peak power, landing rate of force development, and peak vertical GRF during landing. A 2-way mixed analysis of variance with repeated measures for plyometric exercise type demonstrated main effects for exercise type and all outcome variables (p ≤ 0.05) and for the interaction between gender and peak vertical GRF during takeoff (p ≤ 0.05). Bonferroni-adjusted pairwise comparisons identified a number of differences between the plyometric exercises for the outcome variables assessed (p ≤ 0.05). These findings can be used to guide the progression of plyometric training by incorporating exercises of increasing intensity over the course of a program.

  8. Unconventional Algorithms: Complementarity of Axiomatics and Construction

    Directory of Open Access Journals (Sweden)

    Gordana Dodig Crnkovic

    2012-10-01

    Full Text Available In this paper, we analyze axiomatic and constructive issues of unconventional computations from a methodological and philosophical point of view. We explain how the new models of algorithms and unconventional computations change the algorithmic universe, making it open and allowing increased flexibility and expressive power that augment creativity. At the same time, the greater power of new types of algorithms also results in the greater complexity of the algorithmic universe, transforming it into the algorithmic multiverse and demanding new tools for its study. That is why we analyze new powerful tools brought forth by local mathematics, local logics, logical varieties and the axiomatic theory of algorithms, automata and computation. We demonstrate how these new tools allow efficient navigation in the algorithmic multiverse. Further work includes study of natural computation by unconventional algorithms and constructive approaches.

  9. Chinese handwriting recognition an algorithmic perspective

    CERN Document Server

    Su, Tonghua

    2013-01-01

    This book provides an algorithmic perspective on the recent development of Chinese handwriting recognition. Two technically sound strategies, the segmentation-free and integrated segmentation-recognition strategy, are investigated and algorithms that have worked well in practice are primarily focused on. Baseline systems are initially presented for these strategies and are subsequently expanded on and incrementally improved. The sophisticated algorithms covered include: 1) string sample expansion algorithms which synthesize string samples from isolated characters or distort realistic string samples; 2) enhanced feature representation algorithms, e.g. enhanced four-plane features and Delta features; 3) novel learning algorithms, such as Perceptron learning with dynamic margin, MPE training and distributed training; and lastly 4) ensemble algorithms, that is, combining the two strategies using both parallel structure and serial structure. All the while, the book moves from basic to advanced algorithms, helping ...

  10. Portable Health Algorithms Test System

    Science.gov (United States)

    Melcher, Kevin J.; Wong, Edmond; Fulton, Christopher E.; Sowers, Thomas S.; Maul, William A.

    2010-01-01

    A document discusses the Portable Health Algorithms Test (PHALT) System, which has been designed as a means for evolving the maturity and credibility of algorithms developed to assess the health of aerospace systems. Comprising an integrated hardware-software environment, the PHALT system allows systems health management algorithms to be developed in a graphical programming environment, to be tested and refined using system simulation or test data playback, and to be evaluated in a real-time hardware-in-the-loop mode with a live test article. The integrated hardware and software development environment provides a seamless transition from algorithm development to real-time implementation. The portability of the hardware makes it quick and easy to transport between test facilities. This hardware/software architecture is flexible enough to support a variety of diagnostic applications and test hardware, and the GUI-based rapid prototyping capability is sufficient to support the development, execution, and testing of custom diagnostic algorithms. The PHALT operating system supports execution of diagnostic algorithms under real-time constraints. PHALT can perform real-time capture and playback of test rig data with the ability to augment/modify the data stream (e.g. inject simulated faults). It performs algorithm testing using a variety of data input sources, including real-time data acquisition, test data playback, and system simulations, and also provides system feedback to evaluate closed-loop diagnostic response and mitigation control.

  11. Being Included and Excluded

    DEFF Research Database (Denmark)

    Korzenevica, Marina

    2016-01-01

    Following the civil war of 1996–2006, there was a dramatic increase in the labor mobility of young men and the inclusion of young women in formal education, which led to the transformation of the political landscape of rural Nepal. Mobility and schooling represent a level of prestige that rural...... politics. It analyzes how formal education and mobility either challenge or reinforce traditional gendered norms which dictate a lowly position for young married women in the household and their absence from community politics. The article concludes that women are simultaneously excluded and included from...... community politics. On the one hand, their mobility and decision-making powers decrease with the increase in the labor mobility of men and their newly gained education is politically devalued when compared to the informal education that men gain through mobility, but on the other hand, schooling strengthens...

  12. Developing and Implementing the Data Mining Algorithms in RAVEN

    International Nuclear Information System (INIS)

    Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian

    2015-01-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in the dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. recognizing patterns in the data. This report focuses on development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.

  13. Developing and Implementing the Data Mining Algorithms in RAVEN

    Energy Technology Data Exchange (ETDEWEB)

    Sen, Ramazan Sonat [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Daniel Patrick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2015-09-01

    The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in the dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics code model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e. recognizing patterns in the data. This report focuses on development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.

  14. Hamiltonian Algorithm Sound Synthesis

    OpenAIRE

    大矢, 健一

    2013-01-01

    Hamiltonian Algorithm (HA) is an algorithm for searching for solutions in optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA. Because of this transition effect, totally new waveforms are produced.

  15. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; de Berg, M.T.; Bouts, Q.W.; ten Brink, Alex P.; Buchin, K.A.; Westenberg, M.A.

    2015-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  16. Progressive geometric algorithms

    NARCIS (Netherlands)

    Alewijnse, S.P.A.; Bagautdinov, T.M.; Berg, de M.T.; Bouts, Q.W.; Brink, ten A.P.; Buchin, K.; Westenberg, M.A.

    2014-01-01

    Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms

  17. Microhemodynamic parameters quantification from intravital microscopy videos

    International Nuclear Information System (INIS)

    Ortiz, Daniel; Cabrales, Pedro; Briceño, Juan Carlos

    2014-01-01

    Blood flow and blood–endothelium interactions are closely associated with the genesis of cardiovascular diseases. Therefore, quantitative analysis of blood flow dynamics at the microcirculation level is of special interest. Regulatory mechanisms mediated by blood flow have been studied in detail using in vitro approaches. However, these mechanisms have not been fully validated in vivo due to technical limitations that arise when quantifying microhemodynamics with the required level of detail. Intravital microscopy combined with high-speed video recordings has been used for the analysis of blood flow in small blood vessels of chronic and acute experimental tissue preparations. This tool can be used to study the interaction between the flowing blood and the vessel walls of arterioles and venules with sufficient temporal and spatial resolution. Our objective was to develop a simple and robust cross-correlation algorithm for the automatic analysis of high-speed video recordings of microcirculatory blood flow. The algorithm was validated using in vitro and in vivo systems. Results indicate that the algorithm's ability to estimate the velocity of local red blood cells as a function of blood vessel radius is highly accurate. They thereby suggest that the algorithm could be used to explore dynamic changes in blood flow under different experimental conditions including a wide range of flow rates and hematocrit levels. The algorithm can also be used to measure volumetric flow rates, radial velocity profiles, wall shear rate, and wall shear stress. Several applications are presently explored, including the analysis of velocity profiles in the branches of arterial bifurcations. This work demonstrates the robustness of the cross-correlation technique in various flow conditions and elucidates its potential application for in vivo determination of blood flow dynamics in the microcirculation. (paper)
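
    The velocity-estimation step of such a cross-correlation scheme can be sketched for a single centreline profile: correlate two intensity profiles recorded a short time apart and convert the lag of the correlation peak into a velocity. This is a simplified 1-D illustration under assumed inputs, not the authors' full algorithm.

```python
import numpy as np

def line_velocity(profile_t0, profile_t1, dt, pixel_size):
    """Estimate red-blood-cell displacement along a vessel centreline by
    cross-correlating two intensity profiles taken dt seconds apart.
    Returns velocity in the same spatial units as pixel_size per second."""
    a = profile_t0 - np.mean(profile_t0)
    b = profile_t1 - np.mean(profile_t1)
    corr = np.correlate(b, a, mode="full")
    shift = corr.argmax() - (len(a) - 1)     # lag (in pixels) of the best match
    return shift * pixel_size / dt
```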

  18. The Algorithmic Imaginary

    DEFF Research Database (Denmark)

    Bucher, Taina

    2017-01-01

    the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...... of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...

  19. The BR eigenvalue algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics

    1997-11-01

    The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.

  20. Planar graphs theory and algorithms

    CERN Document Server

    Nishizeki, T

    1988-01-01

    Collected in this volume are most of the important theorems and algorithms currently known for planar graphs, together with constructive proofs for the theorems. Many of the algorithms are written in Pidgin PASCAL, and are the best-known ones; the complexities are linear or O(n log n). The first two chapters provide the foundations of graph theoretic notions and algorithmic techniques. The remaining chapters discuss the topics of planarity testing, embedding, drawing, vertex- or edge-coloring, maximum independence set, subgraph listing, planar separator theorem, Hamiltonian cycles, and single- or multicommodity flows. Suitable for a course on algorithms, graph theory, or planar graphs, the volume will also be useful for computer scientists and graph theorists at the research level. An extensive reference section is included.

  1. Optimally stopped variational quantum algorithms

    Science.gov (United States)

    Vinci, Walter; Shabani, Alireza

    2018-04-01

    Quantum processors promise a paradigm shift in high-performance computing which needs to be assessed by accurate benchmarking measures. In this article, we introduce a benchmark for the variational quantum algorithm (VQA), recently proposed as a heuristic algorithm for small-scale quantum processors. In VQA, a classical optimization algorithm guides the processor's quantum dynamics to yield the best solution for a given problem. A complete assessment of the scalability and competitiveness of VQA should take into account both the quality and the time of dynamics optimization. The method of optimal stopping, employed here, provides such an assessment by explicitly including time as a cost factor. Here, we showcase this measure for benchmarking VQA as a solver for some quadratic unconstrained binary optimization problems. Moreover, we show that a better choice for the cost function of the classical routine can significantly improve the performance of VQA and even improve its scaling properties.

  2. A quantum causal discovery algorithm

    Science.gov (United States)

    Giarmatzi, Christina; Costa, Fabio

    2018-03-01

    Finding a causal model for a set of classical variables is now a well-established task—but what about the quantum equivalent? Even the notion of a quantum causal model is controversial. Here, we present a causal discovery algorithm for quantum systems. The input to the algorithm is a process matrix describing correlations between quantum events. Its output consists of different levels of information about the underlying causal model. Our algorithm determines whether the process is causally ordered by grouping the events into causally ordered non-signaling sets. It detects if all relevant common causes are included in the process, which we label Markovian, or alternatively if some causal relations are mediated through some external memory. For a Markovian process, it outputs a causal model, namely the causal relations and the corresponding mechanisms, represented as quantum states and channels. Our algorithm opens the route to more general quantum causal discovery methods.

  3. Amplicon sequencing for the quantification of spoilage microbiota in complex foods including bacterial spores

    NARCIS (Netherlands)

    Boer, de P.; Caspers, M.; Sanders, J.W.; Kemperman, R.; Wijman, J.; Lommerse, G.; Roeselers, G.; Montijn, R.; Abee, T.; Kort, R.

    2015-01-01

    Background
    Spoilage of food products is frequently caused by bacterial spores and lactic acid bacteria. Identification of these organisms by classic cultivation methods is limited by their ability to form colonies on nutrient agar plates. In this study, we adapted and optimized 16S rRNA amplicon

  4. Comparison of five DNA quantification methods

    DEFF Research Database (Denmark)

    Nielsen, Karsten; Mogensen, Helle Smidt; Hedman, Johannes

    2008-01-01

    Six commercial preparations of human genomic DNA were quantified using five quantification methods: UV spectrometry, SYBR-Green dye staining, slot blot hybridization with the probe D17Z1, Quantifiler Human DNA Quantification kit and RB1 rt-PCR. All methods measured higher DNA concentrations than...... Quantification kit in two experiments. The measured DNA concentrations with Quantifiler were 125 and 160% higher than expected based on the manufacturers' information. When the Quantifiler human DNA standard (Raji cell line) was replaced by the commercial human DNA preparation G147A (Promega) to generate the DNA...... standard curve in the Quantifiler Human DNA Quantification kit, the DNA quantification results of the human DNA preparations were 31% higher than expected based on the manufacturers' information. The results indicate a calibration problem with the Quantifiler human DNA standard for its use...

  5. A method for the 3-D quantification of bridging ligaments during crack propagation

    International Nuclear Information System (INIS)

    Babout, L.; Janaszewski, M.; Marrow, T.J.; Withers, P.J.

    2011-01-01

    This letter shows how a hole-closing algorithm can be used to identify and quantify crack-bridging ligaments from a sequence of X-ray tomography images of intergranular stress corrosion cracking. This allows automatic quantification of the evolution of bridging ligaments through the crack propagation sequence providing fracture mechanics insight previously unobtainable from fractography. The method may also be applied to other three-dimensional materials science problems, such as closing walls in foams.

  6. Current position of high-resolution MS for drug quantification in clinical & forensic toxicology.

    Science.gov (United States)

    Meyer, Markus R; Helfer, Andreas G; Maurer, Hans H

    2014-08-01

    This paper reviews high-resolution MS approaches published from January 2011 until March 2014 for the quantification of drugs (of abuse) and/or their metabolites in biosamples using LC-MS with time-of-flight or Orbitrap™ mass analyzers. Corresponding approaches are discussed including sample preparation and mass spectral settings. The advantages and limitations of high-resolution MS for drug quantification, as well as the demand for a certain resolution or a specific mass accuracy are also explored.

  7. PIV Uncertainty Quantification and Beyond

    NARCIS (Netherlands)

    Wieneke, B.F.A.

    2017-01-01

    The fundamental properties of computed flow fields using particle imaging velocimetry (PIV) have been investigated, viewing PIV processing as a black box without going in detail into algorithmic details. PIV processing can be analyzed using a linear filter model, i.e. assuming that the computed

  8. Efficient RNA structure comparison algorithms.

    Science.gov (United States)

    Arslan, Abdullah N; Anandan, Jithendar; Fry, Eric; Monschke, Keith; Ganneboina, Nitin; Bowerman, Jason

    2017-12-01

    Recently proposed relative addressing-based ([Formula: see text]) RNA secondary structure representation has important features by which an RNA structure database can be stored in a suffix array. A fast substructure search algorithm has been proposed based on binary search over this suffix array. Using this substructure search algorithm, we present a fast algorithm that finds the largest common substructure of multiple given RNA structures in [Formula: see text] format. The multiple RNA structure comparison problem is NP-hard in its general formulation. We introduced a new problem for comparing multiple RNA structures. This problem has a stricter similarity definition and objective, and we propose an algorithm that solves it efficiently. We also develop another comparison algorithm that iteratively calls this algorithm to locate nonoverlapping large common substructures in the compared RNAs. With the new resulting tools, we improved the RNASSAC website (linked from http://faculty.tamuc.edu/aarslan ). This website now also includes two drawing tools: one specialized for preparing RNA substructures that can be used as input by the search tool, and another for automatically drawing the entire RNA structure from a given structure sequence.
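
    The suffix-array binary search at the heart of the substructure search can be sketched independently of the [Formula: see text] encoding, which is treated below as an opaque string; both the toy structure string and the quadratic-time suffix-array construction are simplifications for illustration.

```python
def suffix_array(s):
    """Suffix array by sorting suffixes (O(n^2 log n); fine for a sketch)."""
    return sorted(range(len(s)), key=lambda i: s[i:])

def contains(s, sa, pattern):
    """Binary search the suffix array of s for a suffix starting with pattern."""
    lo, hi = 0, len(sa)
    while lo < hi:
        mid = (lo + hi) // 2
        if s[sa[mid]:sa[mid] + len(pattern)] < pattern:
            lo = mid + 1
        else:
            hi = mid
    return lo < len(sa) and s[sa[lo]:sa[lo] + len(pattern)] == pattern

db = "((..))((...))"            # stand-in for an encoded structure string
sa = suffix_array(db)
print(contains(db, sa, "((."))  # True
```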

  9. Subspace-based Inverse Uncertainty Quantification for Nuclear Data Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Khuwaileh, B.A., E-mail: bakhuwai@ncsu.edu; Abdel-Khalik, H.S.

    2015-01-15

    Safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. An inverse problem can be defined and solved to assess the sources of uncertainty, and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. In this work a subspace-based algorithm for inverse sensitivity/uncertainty quantification (IS/UQ) has been developed to enable analysts to account for all sources of nuclear data uncertainties in support of target accuracy assessment-type analysis. An approximate analytical solution of the optimization problem is used to guide the search for the dominant uncertainty subspace. By limiting the search to a subspace, the degrees of freedom available for the optimization search are significantly reduced. A quarter PWR fuel assembly is modeled and the accuracy of the multiplication factor and the fission reaction rate are used as reactor attributes whose uncertainties are to be reduced. Numerical experiments are used to demonstrate the computational efficiency of the proposed algorithm. Our ongoing work is focusing on extending the proposed algorithm to account for various forms of feedback, e.g., thermal-hydraulics and depletion effects.

  10. Development of Quantification Method for Bioluminescence Imaging

    International Nuclear Information System (INIS)

    Kim, Hyeon Sik; Min, Jung Joon; Lee, Byeong Il; Choi, Eun Seo; Tak, Yoon O; Choi, Heung Kook; Lee, Ju Young

    2009-01-01

    Optical molecular luminescence imaging is widely used for the detection and imaging of bio-photons emitted upon activation of luminescent luciferase. The photons measured with this method indicate the degree of molecular alteration or the number of cells, with the advantage of a high signal-to-noise ratio. To extract useful information from the measured results, analysis based on a proper quantification method is necessary. In this research, we propose a quantification method that yields a linear response of the measured light signal to measurement time. We detected the luminescence signal using lab-made optical imaging equipment, the animal light imaging system (ALIS), and two different kinds of light sources. One consisted of three bacterial light-emitting sources containing different numbers of bacteria; the other consisted of three different non-bacterial sources emitting very weak light. Using the concepts of the candela and the flux, we derived a simplified linear quantification formula. After experimentally measuring the light intensity, the data were processed with the proposed quantification function. We obtained a linear response of photon counts to measurement time by applying the pre-determined quantification function. The ratio of the recalculated photon counts to the measurement time remained constant even when different light sources were applied. The quantification function for linear response could be applied to a standard quantification process. The proposed method could be used for exact quantitative analysis in various light-imaging systems that present a linear response of constant light-emitting sources to measurement time.

  11. Efficient uncertainty quantification in fully-integrated surface and subsurface hydrologic simulations

    Science.gov (United States)

    Miller, K. L.; Berg, S. J.; Davison, J. H.; Sudicky, E. A.; Forsyth, P. A.

    2018-01-01

    Although high performance computers and advanced numerical methods have made the application of fully-integrated surface and subsurface flow and transport models such as HydroGeoSphere commonplace, run times for large complex basin models can still be on the order of days to weeks, thus limiting the usefulness of traditional workhorse algorithms for uncertainty quantification (UQ) such as Latin Hypercube simulation (LHS) or Monte Carlo simulation (MCS), which generally require thousands of simulations to achieve an acceptable level of accuracy. In this paper we investigate non-intrusive polynomial chaos expansion (PCE) for uncertainty quantification, which, in contrast to random sampling methods (e.g., LHS and MCS), represents a model response of interest as a weighted sum of polynomials over the random inputs. Once a chaos expansion has been constructed, approximating the mean, covariance, probability density function, cumulative distribution function, and other common statistics as well as local and global sensitivity measures is straightforward and computationally inexpensive, thus making PCE an attractive UQ method for hydrologic models with long run times. Our polynomial chaos implementation was validated through comparison with analytical solutions as well as solutions obtained via LHS for simple numerical problems. It was then used to quantify parametric uncertainty in a series of numerical problems with increasing complexity, including a two-dimensional fully-saturated, steady flow and transient transport problem with six uncertain parameters and one quantity of interest; a one-dimensional variably-saturated column test involving transient flow and transport, four uncertain parameters, and two quantities of interest at 101 spatial locations and five different times each (1010 total); and a three-dimensional fully-integrated surface and subsurface flow and transport problem for a small test catchment involving seven uncertain parameters and three quantities of interest at
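
    A minimal non-intrusive polynomial chaos sketch, assuming a cheap scalar model with independent standard-normal inputs: sample the model, regress onto a total-degree-2 Hermite basis, and read the mean and variance directly off the coefficients. The toy model stands in for the hydrologic simulator.

```python
import numpy as np
from itertools import product
from math import factorial
from numpy.polynomial.hermite_e import hermeval  # probabilists' Hermite polynomials He_n

def model(x):                                    # toy stand-in for the expensive simulator
    return np.exp(0.3 * x[:, 0]) + 0.5 * x[:, 0] * x[:, 1]

dim, degree, nsamp = 2, 2, 200
multis = [m for m in product(range(degree + 1), repeat=dim) if sum(m) <= degree]

rng = np.random.default_rng(0)
X = rng.standard_normal((nsamp, dim))            # standard-normal germ
Y = model(X)

def he(n, x):
    return hermeval(x, [0] * n + [1])            # He_n evaluated at x

# Regression matrix: each column is a product of 1D Hermite polynomials.
A = np.column_stack([np.prod([he(m[j], X[:, j]) for j in range(dim)], axis=0) for m in multis])
coef, *_ = np.linalg.lstsq(A, Y, rcond=None)

norm2 = np.array([np.prod([factorial(mj) for mj in m]) for m in multis])  # E[Psi_k^2]
mean = coef[multis.index((0,) * dim)]
var = np.sum(coef ** 2 * norm2) - mean ** 2
print("PCE mean:", mean, " PCE variance:", var)
```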

  12. Extreme-Scale Bayesian Inference for Uncertainty Quantification of Complex Simulations

    Energy Technology Data Exchange (ETDEWEB)

    Biros, George [Univ. of Texas, Austin, TX (United States)

    2018-01-12

    Uncertainty quantification (UQ)—that is, quantifying uncertainties in complex mathematical models and their large-scale computational implementations—is widely viewed as one of the outstanding challenges facing the field of CS&E over the coming decade. The EUREKA project set out to address the most difficult class of UQ problems: those for which both the underlying PDE model and the uncertain parameters are of extreme scale. In the project we worked on these extreme-scale challenges in the following four areas: 1. Scalable parallel algorithms for sampling and characterizing the posterior distribution that exploit the structure of the underlying PDEs and parameter-to-observable map. These include structure-exploiting versions of the randomized maximum likelihood method, which aims to overcome the intractability of employing conventional MCMC methods for solving extreme-scale Bayesian inversion problems by appealing to and adapting ideas from large-scale PDE-constrained optimization, which have been very successful at exploring high-dimensional spaces. 2. Scalable parallel algorithms for construction of prior and likelihood functions based on learning methods and non-parametric density estimation. Constructing problem-specific priors remains a critical challenge in Bayesian inference, and more so in high dimensions. Another challenge is construction of likelihood functions that capture unmodeled couplings between observations and parameters. We will create parallel algorithms for non-parametric density estimation using high dimensional N-body methods and combine them with supervised learning techniques for the construction of priors and likelihood functions. 3. Bayesian inadequacy models, which augment physics models with stochastic models that represent their imperfections. The success of the Bayesian inference framework depends on the ability to represent the uncertainty due to imperfections of the mathematical model of the phenomena of interest. This is a

  13. Guided Wave Delamination Detection and Quantification With Wavefield Data Analysis

    Science.gov (United States)

    Tian, Zhenhua; Campbell Leckey, Cara A.; Seebo, Jeffrey P.; Yu, Lingyu

    2014-01-01

    Unexpected damage can occur in aerospace composites due to impact events or material stress during off-nominal loading events. In particular, laminated composites are susceptible to delamination damage due to weak transverse tensile and inter-laminar shear strengths. Developments of reliable and quantitative techniques to detect delamination damage in laminated composites are imperative for safe and functional optimally-designed next-generation composite structures. In this paper, we investigate guided wave interactions with delamination damage and develop quantification algorithms by using wavefield data analysis. The trapped guided waves in the delamination region are observed from the wavefield data and further quantitatively interpreted by using different wavenumber analysis methods. The frequency-wavenumber representation of the wavefield shows that new wavenumbers are present and correlate to trapped waves in the damage region. These new wavenumbers are used to detect and quantify the delamination damage through the wavenumber analysis, which can show how the wavenumber changes as a function of wave propagation distance. The location and spatial duration of the new wavenumbers can be identified, providing a useful means not only for detecting the presence of delamination damage but also allowing for estimation of the delamination size. Our method has been applied to detect and quantify real delamination damage with complex geometry (grown using a quasi-static indentation technique). The detection and quantification results show the location, size, and shape of the delamination damage.
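
    The core frequency-wavenumber step can be sketched as a 2D FFT of a space-time wavefield u[t, x]; extra wavenumber components (such as those from trapped waves) then show up as additional peaks at the excitation frequency. The synthetic two-mode field and sampling below are illustrative assumptions.

```python
import numpy as np

dt, dx = 1e-6, 1e-3                                  # assumed time and space sampling
t = np.arange(0, 2e-4, dt)
x = np.arange(0, 0.2, dx)
T, X = np.meshgrid(t, x, indexing="ij")

# Synthetic wavefield: one propagating mode plus a higher-wavenumber "trapped" component.
f0, k0, k1 = 200e3, 200.0, 450.0
u = np.sin(2 * np.pi * (f0 * T - k0 * X)) + 0.4 * np.sin(2 * np.pi * (f0 * T - k1 * X))

U = np.fft.fftshift(np.fft.fft2(u))                  # frequency-wavenumber spectrum
freqs = np.fft.fftshift(np.fft.fftfreq(len(t), dt))      # Hz
wavenums = np.fft.fftshift(np.fft.fftfreq(len(x), dx))   # cycles/m

# Wavenumbers present near the excitation frequency:
row = np.argmin(np.abs(freqs - f0))
peaks = wavenums[np.argsort(np.abs(U[row]))[-2:]]
print("dominant wavenumbers near f0 (cycles/m):", sorted(np.abs(peaks)))
```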

  14. Fermion cluster algorithms

    International Nuclear Information System (INIS)

    Chandrasekharan, Shailesh

    2000-01-01

    Cluster algorithms have been recently used to eliminate sign problems that plague Monte-Carlo methods in a variety of systems. In particular such algorithms can also be used to solve sign problems associated with the permutation of fermion world lines. This solution leads to the possibility of designing fermion cluster algorithms in certain cases. Using the example of free non-relativistic fermions we discuss the ideas underlying the algorithm

  15. Quantification of Uncertainty in Extreme Scale Computations (QUEST)

    Energy Technology Data Exchange (ETDEWEB)

    Ghanem, Roger [Univ. of Southern California, Los Angeles, CA (United States)

    2017-04-18

    QUEST was a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, the Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University. The mission of QUEST is to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The USC effort centered on the development of reduced models and efficient algorithms for implementing various components of the UQ pipeline. USC personnel were responsible for the development of adaptive bases, adaptive quadrature, and reduced models to be used in estimation and inference.

  16. A new algorithm for hip fracture surgery

    DEFF Research Database (Denmark)

    Palm, Henrik; Krasheninnikoff, Michael; Holck, Kim

    2012-01-01

    Background and purpose Treatment of hip fracture patients is controversial. We implemented a new operative and supervision algorithm (the Hvidovre algorithm) for surgical treatment of all hip fractures, primarily based on own previously published results. Methods 2,000 consecutive patients over 50...... years of age who were admitted and operated on because of a hip fracture were prospectively included. 1,000 of these patients were included after implementation of the algorithm. Demographic parameters, hospital treatment, and reoperations within the first postoperative year were assessed from patient...... by reoperations was reduced from 24% of total hospitalization before the algorithm was introduced to 18% after it was introduced. Interpretation It is possible to implement an algorithm for treatment of all hip fracture patients in a large teaching hospital. In our case, the Hvidovre algorithm both raised...

  17. Uncertainty Quantification for Large-Scale Ice Sheet Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Ghattas, Omar [Univ. of Texas, Austin, TX (United States)

    2016-02-05

    This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.
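
    For intuition, the Hessian-based idea reduces, in the linear-Gaussian special case, to a posterior covariance built from the data-misfit Hessian and the prior precision, which can then be propagated to a scalar quantity of interest. The matrices and the quantity-of-interest functional below are illustrative stand-ins for the ice sheet problem.

```python
import numpy as np

rng = np.random.default_rng(0)
n_param, n_obs = 20, 15

F = rng.standard_normal((n_obs, n_param))      # linearized parameter-to-observable map
Gamma_noise = 0.1 ** 2 * np.eye(n_obs)         # observation noise covariance
Gamma_prior = np.eye(n_param)                  # prior covariance

# Laplace/Gaussian posterior: covariance = (F^T Gamma_noise^-1 F + Gamma_prior^-1)^-1
H_misfit = F.T @ np.linalg.solve(Gamma_noise, F)
Gamma_post = np.linalg.inv(H_misfit + np.linalg.inv(Gamma_prior))

# Propagate to a scalar quantity of interest q = g^T m (e.g., a mass-flux-like functional).
g = rng.standard_normal(n_param)
print("prior     std of q:", np.sqrt(g @ Gamma_prior @ g))
print("posterior std of q:", np.sqrt(g @ Gamma_post @ g))
```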

  18. Good quantification practices of flavours and fragrances by mass spectrometry.

    Science.gov (United States)

    Begnaud, Frédéric; Chaintreau, Alain

    2016-10-28

    Over the past 15 years, chromatographic techniques with mass spectrometric detection have been increasingly used to monitor the rapidly expanding list of regulated flavour and fragrance ingredients. This trend entails a need for good quantification practices suitable for complex media, especially for multi-analytes. In this article, we present experimental precautions needed to perform the analyses and ways to process the data according to the most recent approaches. This notably includes the identification of analytes during their quantification and method validation, when applied to real matrices, based on accuracy profiles. A brief survey of application studies based on such practices is given. This article is part of the themed issue 'Quantitative mass spectrometry'. © 2016 The Authors.

  19. Clinical applications of MS-based protein quantification.

    Science.gov (United States)

    Sabbagh, Bassel; Mindt, Sonani; Neumaier, Michael; Findeisen, Peter

    2016-04-01

    Mass spectrometry-based assays are increasingly important in clinical laboratory medicine and nowadays are already commonly used in several areas of routine diagnostics. These include therapeutic drug monitoring, toxicology, endocrinology, pediatrics, and microbiology. Accordingly, some of the most common analyses are therapeutic drug monitoring of immunosuppressants, vitamin D, steroids, newborn screening, and bacterial identification. However, MS-based quantification of peptides and proteins for routine diagnostic use is rather rare up to now despite excellent analytical specificity and good sensitivity. Here, we want to give an overview over current fit-for-purpose assays for MS-based protein quantification. Advantages as well as challenges of this approach will be discussed with focus on feasibility for routine diagnostic use. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  20. A verified LLL algorithm

    NARCIS (Netherlands)

    Divasón, Jose; Joosten, Sebastiaan; Thiemann, René; Yamada, Akihisa

    2018-01-01

    The Lenstra-Lenstra-Lovász basis reduction algorithm, also known as LLL algorithm, is an algorithm to find a basis with short, nearly orthogonal vectors of an integer lattice. Thereby, it can also be seen as an approximation to solve the shortest vector problem (SVP), which is an NP-hard problem,
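
    For intuition, the two-dimensional special case of lattice basis reduction (Lagrange-Gauss reduction) fits in a few lines; full LLL generalizes the same size-reduce-and-swap idea to higher dimensions. This sketch is unrelated to the verified formalization described in the record.

```python
import numpy as np

def gauss_reduce(u, v):
    """Lagrange-Gauss reduction of a 2D lattice basis: returns a basis of the
    same lattice whose first vector is a shortest nonzero lattice vector."""
    u, v = np.array(u, dtype=np.int64), np.array(v, dtype=np.int64)
    while True:
        if v @ v < u @ u:                 # keep u as the shorter vector
            u, v = v, u
        m = round((u @ v) / (u @ u))      # size-reduce v against u
        if m == 0:
            return u, v
        v = v - m * u

print(gauss_reduce([201, 37], [1648, 297]))
```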

  1. Testing algorithms for critical slowing down

    Directory of Open Access Journals (Sweden)

    Cossu Guido

    2018-01-01

    Full Text Available We present the preliminary tests on two modifications of the Hybrid Monte Carlo (HMC algorithm. Both algorithms are designed to travel much farther in the Hamiltonian phase space for each trajectory and reduce the autocorrelations among physical observables thus tackling the critical slowing down towards the continuum limit. We present a comparison of costs of the new algorithms with the standard HMC evolution for pure gauge fields, studying the autocorrelation times for various quantities including the topological charge.

  2. Dynamic training algorithm for dynamic neural networks

    International Nuclear Information System (INIS)

    Tan, Y.; Van Cauwenberghe, A.; Liu, Z.

    1996-01-01

    The widely used backpropagation algorithm for training neural networks based on gradient descent has a significant drawback of slow convergence. A Gauss-Newton method based recursive least squares (RLS) type algorithm with dynamic error backpropagation is presented to speed up the learning procedure of neural networks with local recurrent terms. Finally, simulation examples concerning the applications of the RLS type algorithm to identification of nonlinear processes using a local recurrent neural network are also included in this paper.
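
    A compact sketch of a recursive least squares (RLS) update for a model that is linear in its weights; in the paper's setting the regressor vector would come from the recurrent network's sensitivities. The forgetting factor and toy identification data are assumptions.

```python
import numpy as np

class RLS:
    """Recursive least squares for y ~ w^T x with exponential forgetting."""
    def __init__(self, n, lam=0.99, delta=100.0):
        self.w = np.zeros(n)
        self.P = delta * np.eye(n)        # inverse of the (weighted) information matrix
        self.lam = lam

    def update(self, x, y):
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)      # gain vector
        e = y - self.w @ x                # a priori error
        self.w += k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e

rng = np.random.default_rng(0)
true_w = np.array([0.5, -1.2, 2.0])
est = RLS(3)
for _ in range(500):
    x = rng.standard_normal(3)
    y = true_w @ x + 0.01 * rng.standard_normal()
    est.update(x, y)
print("estimated weights:", est.w)
```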

  3. Biased Monte Carlo algorithms on unitary groups

    International Nuclear Information System (INIS)

    Creutz, M.; Gausterer, H.; Sanielevici, S.

    1989-01-01

    We introduce a general updating scheme for the simulation of physical systems defined on unitary groups, which eliminates the systematic errors due to inexact exponentiation of algebra elements. The essence is to work directly with group elements for the stochastic noise. Particular cases of the scheme include the algorithm of Metropolis et al., overrelaxation algorithms, and globally corrected Langevin and hybrid algorithms. The latter are studied numerically for the case of SU(3) theory

  4. Fast algorithm of track detection

    International Nuclear Information System (INIS)

    Nehrguj, B.

    1980-01-01

    A fast algorithm of variable-slope histograms is proposed, which allows a considerable reduction in computer memory size and is quite simple to carry out. Corresponding FORTRAN subprograms, giving a triple speed gain, have been included in the spiral reader data handling software.

  5. The NASA Langley Multidisciplinary Uncertainty Quantification Challenge

    Science.gov (United States)

    Crespo, Luis G.; Kenny, Sean P.; Giesy, Daniel P.

    2014-01-01

    This paper presents the formulation of an uncertainty quantification challenge problem consisting of five subproblems. These problems focus on key aspects of uncertainty characterization, sensitivity analysis, uncertainty propagation, extreme-case analysis, and robust design.

  6. Uncertainty Quantification in Aerodynamics Simulations, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — The objective of the proposed work (Phases I and II) is to develop uncertainty quantification methodologies and software suitable for use in CFD simulations of...

  7. Quantification of virus syndrome in chili peppers

    African Journals Online (AJOL)

    Jane

    2011-06-15

    Jun 15, 2011 ... alternative for the quantification of the disease syndromes with regard to this crop. The result of these ..... parison of treatments such as cultivars or control measures and ..... Vascular discoloration and stem necrosis. 2.

  8. Low cost high performance uncertainty quantification

    KAUST Repository

    Bekas, C.; Curioni, A.; Fedulova, I.

    2009-01-01

    Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, which employ matrix factorizations, incur a cubic cost
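
    One way to avoid the cubic cost is a stochastic probing estimator, sketched below for a small symmetric positive definite matrix: diag(A^-1) is estimated from solves against random ±1 vectors. The matrix size, probe count, and dense solver are illustrative; this is not the authors' specific implementation.

```python
import numpy as np

rng = np.random.default_rng(0)
n, n_probe = 200, 400

# A small SPD "covariance" matrix standing in for the (huge) real one.
B = rng.standard_normal((n, n))
A = B @ B.T + n * np.eye(n)

num = np.zeros(n)
den = np.zeros(n)
for _ in range(n_probe):
    v = rng.choice([-1.0, 1.0], size=n)      # Rademacher probe
    x = np.linalg.solve(A, v)                # in practice: an iterative/sparse solver
    num += v * x
    den += v * v
diag_est = num / den                         # diag(A^-1) ~ E[v * (A^-1 v)]

diag_exact = np.diag(np.linalg.inv(A))
print("max relative error:", np.max(np.abs(diag_est - diag_exact) / diag_exact))
```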

  9. Direct qPCR quantification using the Quantifiler(®) Trio DNA quantification kit.

    Science.gov (United States)

    Liu, Jason Yingjie

    2014-11-01

    The effectiveness of a direct quantification assay is essential to the adoption of the combined direct quantification/direct STR workflow. In this paper, the feasibility of using the Quantifiler(®) Trio DNA quantification kit for the direct quantification of forensic casework samples was investigated. Both low-level touch DNA samples and blood samples were collected on PE swabs and quantified directly. The increased sensitivity of the Quantifiler(®) Trio kit enabled the detection of less than 10 pg of DNA in unprocessed touch samples and also minimizes the stochastic effect experienced by different targets in the same sample. The DNA quantity information obtained from a direct quantification assay using the Quantifiler(®) Trio kit can also be used to accurately estimate the optimal input DNA quantity for a direct STR amplification reaction. The correlation between the direct quantification results (Quantifiler(®) Trio kit) and the direct STR results (GlobalFiler™ PCR amplification kit(*)) for low-level touch DNA samples indicates that direct quantification using the Quantifiler(®) Trio DNA quantification kit is more reliable than the Quantifiler(®) Duo DNA quantification kit for predicting the STR results of unprocessed touch DNA samples containing less than 10 pg of DNA. Copyright © 2014 Elsevier Ireland Ltd. All rights reserved.

  10. Bayesian deconvolution and quantification of metabolites in complex 1D NMR spectra using BATMAN.

    Science.gov (United States)

    Hao, Jie; Liebeke, Manuel; Astle, William; De Iorio, Maria; Bundy, Jacob G; Ebbels, Timothy M D

    2014-01-01

    Data processing for 1D NMR spectra is a key bottleneck for metabolomic and other complex-mixture studies, particularly where quantitative data on individual metabolites are required. We present a protocol for automated metabolite deconvolution and quantification from complex NMR spectra by using the Bayesian automated metabolite analyzer for NMR (BATMAN) R package. BATMAN models resonances on the basis of a user-controllable set of templates, each of which specifies the chemical shifts, J-couplings and relative peak intensities for a single metabolite. Peaks are allowed to shift position slightly between spectra, and peak widths are allowed to vary by user-specified amounts. NMR signals not captured by the templates are modeled non-parametrically by using wavelets. The protocol covers setting up user template libraries, optimizing algorithmic input parameters, improving prior information on peak positions, quality control and evaluation of outputs. The outputs include relative concentration estimates for named metabolites together with associated Bayesian uncertainty estimates, as well as the fit of the remainder of the spectrum using wavelets. Graphical diagnostics allow the user to examine the quality of the fit for multiple spectra simultaneously. This approach offers a workflow to analyze large numbers of spectra and is expected to be useful in a wide range of metabolomics studies.

  11. Automated and Adaptable Quantification of Cellular Alignment from Microscopic Images for Tissue Engineering Applications

    Science.gov (United States)

    Xu, Feng; Beyazoglu, Turker; Hefner, Evan; Gurkan, Umut Atakan

    2011-01-01

    Cellular alignment plays a critical role in functional, physical, and biological characteristics of many tissue types, such as muscle, tendon, nerve, and cornea. Current efforts toward regeneration of these tissues include replicating the cellular microenvironment by developing biomaterials that facilitate cellular alignment. To assess the functional effectiveness of the engineered microenvironments, one essential criterion is quantification of cellular alignment. Therefore, there is a need for rapid, accurate, and adaptable methodologies to quantify cellular alignment for tissue engineering applications. To address this need, we developed an automated method, binarization-based extraction of alignment score (BEAS), to determine cell orientation distribution in a wide variety of microscopic images. This method combines a sequenced application of median and band-pass filters, locally adaptive thresholding approaches and image processing techniques. Cellular alignment score is obtained by applying a robust scoring algorithm to the orientation distribution. We validated the BEAS method by comparing the results with the existing approaches reported in literature (i.e., manual, radial fast Fourier transform-radial sum, and gradient based approaches). Validation results indicated that the BEAS method resulted in statistically comparable alignment scores with the manual method (coefficient of determination R2=0.92). Therefore, the BEAS method introduced in this study could enable accurate, convenient, and adaptable evaluation of engineered tissue constructs and biomaterials in terms of cellular alignment and organization. PMID:21370940
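
    A hedged sketch of the general idea (not the published BEAS code): estimate local orientations from image gradients, form an orientation distribution, and score alignment by the length of the circular mean resultant vector (1 for perfect alignment, near 0 for isotropy). The synthetic striped image is a placeholder for a microscopy image.

```python
import numpy as np

def alignment_score(img):
    """Orientation distribution and alignment score from image gradients."""
    gy, gx = np.gradient(img.astype(float))
    mag = np.hypot(gx, gy)
    theta = np.arctan2(gy, gx)                       # gradient direction, [-pi, pi]
    # Orientations are axial (theta and theta+pi are equivalent): work with 2*theta.
    w = mag.ravel()
    c = np.sum(w * np.cos(2 * theta.ravel())) / np.sum(w)
    s = np.sum(w * np.sin(2 * theta.ravel())) / np.sum(w)
    score = np.hypot(c, s)                           # mean resultant length in [0, 1]
    mean_dir = 0.5 * np.arctan2(s, c)                # dominant orientation (radians)
    return score, mean_dir

# Synthetic "aligned cells": a striped pattern, versus random noise.
y, x = np.mgrid[0:128, 0:128]
stripes = np.sin(2 * np.pi * (x * np.cos(0.3) + y * np.sin(0.3)) / 12.0)
noise = np.random.default_rng(0).standard_normal((128, 128))
print("stripes:", alignment_score(stripes)[0], " noise:", alignment_score(noise)[0])
```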

  12. Multi-fidelity uncertainty quantification in large-scale predictive simulations of turbulent flow

    Science.gov (United States)

    Geraci, Gianluca; Jofre-Cruanyes, Lluis; Iaccarino, Gianluca

    2017-11-01

    The performance characterization of complex engineering systems often relies on accurate but computationally intensive numerical simulations. It is also well recognized that in order to obtain a reliable numerical prediction the propagation of uncertainties needs to be included. Therefore, Uncertainty Quantification (UQ) plays a fundamental role in building confidence in predictive science. Despite the great improvement in recent years, even the more advanced UQ algorithms are still limited to fairly simplified applications and only moderate parameter dimensionality. Moreover, in the case of extremely large dimensionality, sampling methods, i.e. Monte Carlo (MC) based approaches, appear to be the only viable alternative. In this talk we describe and compare a family of approaches which aim to accelerate the convergence of standard MC simulations. These methods are based on hierarchies of generalized numerical resolutions (multi-level) or model fidelities (multi-fidelity), and attempt to leverage the correlation between Low- and High-Fidelity (HF) models to obtain a more accurate statistical estimator without introducing additional HF realizations. The performance of these methods is assessed on an irradiated particle laden turbulent flow (PSAAP II solar energy receiver). This investigation was funded by the United States Department of Energy's (DoE) National Nuclear Security Administration (NNSA) under the Predictive Science Academic Alliance Program (PSAAP) II at Stanford University.
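
    A minimal two-fidelity control-variate estimator showing how correlation between a cheap low-fidelity (LF) model and an expensive high-fidelity (HF) model reduces estimator variance without additional HF realizations; the analytic toy models stand in for the flow solvers.

```python
import numpy as np

rng = np.random.default_rng(0)

def hf(x):  # "expensive" high-fidelity model (toy)
    return np.exp(0.9 * x) + 0.05 * np.sin(5 * x)

def lf(x):  # cheap, correlated low-fidelity model (toy)
    return 1.0 + 0.9 * x + 0.4 * x ** 2

n_hf, n_lf = 50, 5000
x_hf = rng.standard_normal(n_hf)
x_lf = rng.standard_normal(n_lf)                 # extra LF-only samples

y_hf, y_lf_paired = hf(x_hf), lf(x_hf)
alpha = np.cov(y_hf, y_lf_paired)[0, 1] / np.var(y_lf_paired, ddof=1)

# Control-variate estimator: HF mean corrected by the better-resolved LF mean.
mu_cv = y_hf.mean() + alpha * (lf(x_lf).mean() - y_lf_paired.mean())
print("plain MC (50 HF samples):", y_hf.mean())
print("two-fidelity estimate   :", mu_cv)
print("reference E[exp(0.9 Z)] :", np.exp(0.9 ** 2 / 2))
```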

  13. VISUALIZATION OF PAGERANK ALGORITHM

    OpenAIRE

    Perhaj, Ervin

    2013-01-01

    The goal of the thesis is to develop a web application that helps users understand the functioning of the PageRank algorithm. The thesis consists of two parts. First we develop an algorithm to calculate PageRank values of web pages. The input of the algorithm is a list of web pages and links between them. The user enters the list through the web interface. From the data the algorithm calculates a PageRank value for each page. The algorithm repeats the process, until the difference of PageRank va...
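
    A short power-iteration PageRank sketch of the kind the thesis describes, assuming the link structure is supplied as an adjacency list; the damping factor 0.85 and the tiny example graph are the usual illustrative choices.

```python
import numpy as np

def pagerank(links, d=0.85, tol=1e-10):
    """links[i] = list of pages that page i links to (indices 0..n-1)."""
    n = len(links)
    pr = np.full(n, 1.0 / n)
    while True:
        new = np.full(n, (1.0 - d) / n)
        for i, outs in enumerate(links):
            if outs:
                new[outs] += d * pr[i] / len(outs)
            else:                              # dangling page: spread its rank uniformly
                new += d * pr[i] / n
        if np.abs(new - pr).sum() < tol:       # iterate until the change is negligible
            return new
        pr = new

# Example: 0 -> 1,2 ; 1 -> 2 ; 2 -> 0 ; 3 -> 2
print(pagerank([[1, 2], [2], [0], [2]]))
```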

  14. Parallel sorting algorithms

    CERN Document Server

    Akl, Selim G

    1985-01-01

    Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the

  15. Modified Clipped LMS Algorithm

    Directory of Open Access Journals (Sweden)

    Lotfizad Mojtaba

    2005-01-01

    Full Text Available Abstract A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves the threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
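
    A sketch of the update being described, with the three-level (dead-zone) quantization applied to the input vector in the weight update; the threshold, step size, and system identification setup are assumptions rather than the paper's exact configuration.

```python
import numpy as np

def three_level(x, t):
    """Three-level quantization: -1, 0, +1 with a dead zone of width 2t."""
    return np.where(np.abs(x) <= t, 0.0, np.sign(x))

rng = np.random.default_rng(0)
h_true = np.array([0.8, -0.4, 0.2, 0.1])          # unknown system to identify
mu, thresh, n_taps = 0.02, 0.5, 4

w = np.zeros(n_taps)
x_hist = np.zeros(n_taps)
for _ in range(20000):
    x_hist = np.roll(x_hist, 1)
    x_hist[0] = rng.standard_normal()
    d = h_true @ x_hist + 0.01 * rng.standard_normal()   # desired signal
    e = d - w @ x_hist                                   # estimation error
    w += mu * e * three_level(x_hist, thresh)            # MCLMS-style update
print("identified weights:", w)
```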

  16. Automated quantification of renal interstitial fibrosis for computer-aided diagnosis: A comprehensive tissue structure segmentation method.

    Science.gov (United States)

    Tey, Wei Keat; Kuang, Ye Chow; Ooi, Melanie Po-Leen; Khoo, Joon Joon

    2018-03-01

    Interstitial fibrosis in renal biopsy samples is a scarring tissue structure that may be visually quantified by pathologists as an indicator of the presence and extent of chronic kidney disease. The standard method of quantification by visual evaluation presents reproducibility issues in the diagnoses. This study proposes an automated quantification system for measuring the amount of interstitial fibrosis in renal biopsy images as a consistent basis of comparison among pathologists. The system extracts and segments the renal tissue structures based on colour information and structural assumptions about the tissue structures. The regions in the biopsy representing interstitial fibrosis are deduced through the elimination of non-interstitial-fibrosis structures from the biopsy area and quantified as a percentage of the total area of the biopsy sample. A ground-truth image dataset was manually prepared in consultation with an experienced pathologist for the validation of the segmentation algorithms. The results from experiments involving experienced pathologists demonstrated a good correlation in quantification results between the automated system and the pathologists' visual evaluation. Experiments investigating the variability among pathologists also showed the automated quantification error rate to be on par with the average intra-observer variability in the pathologists' quantification.

  17. Algorithm improvement program nuclide identification algorithm scoring criteria and scoring application.

    Energy Technology Data Exchange (ETDEWEB)

    Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-02-01

    The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.

  18. Quantum walks and search algorithms

    CERN Document Server

    Portugal, Renato

    2013-01-01

    This book addresses an interesting area of quantum computation called quantum walks, which play an important role in building quantum algorithms, in particular search algorithms. Quantum walks are the quantum analogue of classical random walks. It is known that quantum computers have great power for searching unsorted databases. This power extends to many kinds of searches, particularly to the problem of finding a specific location in a spatial layout, which can be modeled by a graph. The goal is to find a specific node knowing that the particle uses the edges to jump from one node to the next. This book is self-contained with main topics that include: Grover's algorithm, describing its geometrical interpretation and evolution by means of the spectral decomposition of the evolution operator; Analytical solutions of quantum walks on important graphs like line, cycles, two-dimensional lattices, and hypercubes using Fourier transforms; Quantum walks on generic graphs, describing methods to calculate the limiting d...

  19. Universal algorithm of time sharing

    International Nuclear Information System (INIS)

    Silin, I.N.; Fedyun'kin, E.D.

    1979-01-01

    A timesharing algorithm is proposed for a wide class of single- and multiprocessor computer configurations. The dynamic priority is a piecewise-constant function of the channel characteristic and of the system time quantum. The interactive job quantum has variable length. A recurrent formula for the characteristic is derived. The concept of a background job is introduced: a background job loads the processor when high-priority jobs are inactive. A background quality function is defined on the basis of statistical data gathered during the timesharing process. The algorithm includes an optimal swap-out procedure for replacing jobs in memory. Sharing of the system time in proportion to the external priorities is guaranteed for all sufficiently active computing channels (background included). A fast response is guaranteed for interactive jobs that use little time and memory. External priority control is left to the high-level scheduler. Experience with the implementation of the algorithm on the BESM-6 computer at JINR is discussed.

  20. Scalable algorithms for contact problems

    CERN Document Server

    Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít

    2016-01-01

    This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...

  1. The Chandra Source Catalog: Algorithms

    Science.gov (United States)

    McDowell, Jonathan; Evans, I. N.; Primini, F. A.; Glotfelty, K. J.; McCollough, M. L.; Houck, J. C.; Nowak, M. A.; Karovska, M.; Davis, J. E.; Rots, A. H.; Siemiginowska, A. L.; Hain, R.; Evans, J. D.; Anderson, C. S.; Bonaventura, N. R.; Chen, J. C.; Doe, S. M.; Fabbiano, G.; Galle, E. C.; Gibbs, D. G., II; Grier, J. D.; Hall, D. M.; Harbo, P. N.; He, X.; Lauer, J.; Miller, J. B.; Mitschang, A. W.; Morgan, D. L.; Nichols, J. S.; Plummer, D. A.; Refsdal, B. L.; Sundheim, B. A.; Tibbetts, M. S.; van Stone, D. W.; Winkelman, S. L.; Zografou, P.

    2009-09-01

    Creation of the Chandra Source Catalog (CSC) required adjustment of existing pipeline processing, adaptation of existing interactive analysis software for automated use, and development of entirely new algorithms. Data calibration was based on the existing pipeline, but more rigorous data cleaning was applied and the latest calibration data products were used. For source detection, a local background map was created including the effects of ACIS source readout streaks. The existing wavelet source detection algorithm was modified and a set of post-processing scripts used to correct the results. To analyse the source properties we ran the SAOTrace ray-trace code for each source to generate a model point spread function, allowing us to find encircled energy correction factors and estimate source extent. Further algorithms were developed to characterize the spectral, spatial and temporal properties of the sources and to estimate the confidence intervals on count rates and fluxes. Finally, sources detected in multiple observations were matched, and best estimates of their merged properties derived. In this paper we present an overview of the algorithms used, with more detailed treatment of some of the newly developed algorithms presented in companion papers.

  2. Quantification of rice bran oil in oil blends

    Energy Technology Data Exchange (ETDEWEB)

    Mishra, R.; Sharma, H. K.; Sengar, G.

    2012-11-01

    Blends consisting of physically refined rice bran oil (PRBO): sunflower oil (SnF) and PRBO: safflower oil (SAF) in different proportions were analyzed for various physicochemical parameters. The quantification of pure rice bran oil in the blended oils was carried out using different methods including gas chromatography, HPLC, ultrasonic velocity and methods based on physico-chemical parameters. The physicochemical parameters such as ultrasonic velocity, relative association and acoustic impedance at 2 MHz, iodine value, palmitic acid content and oryzanol content reflected significant changes with increased proportions of PRBO in the blended oils. These parameters were selected as the dependent parameters and the % PRBO proportion was selected as the independent parameter. The study revealed that regression equations based on the oryzanol content, palmitic acid composition, ultrasonic velocity, relative association, acoustic impedance, and iodine value can be used for the quantification of rice bran oil in blended oils. The rice bran oil can easily be quantified in the blended oils based on the oryzanol content by HPLC even at a 1% level. The palmitic acid content in blended oils can also be used as an indicator to quantify rice bran oil at or above the 20% level in blended oils whereas the method based on ultrasonic velocity, acoustic impedance and relative association showed initial promise in the quantification of rice bran oil. (Author) 23 refs.

  3. Uncertainty Quantification in High Throughput Screening ...

    Science.gov (United States)

    Using uncertainty quantification, we aim to improve the quality of modeling data from high throughput screening assays for use in risk assessment. ToxCast is a large-scale screening program that analyzes thousands of chemicals using over 800 assays representing hundreds of biochemical and cellular processes, including endocrine disruption, cytotoxicity, and zebrafish development. Over 2.6 million concentration response curves are fit to models to extract parameters related to potency and efficacy. Models built on ToxCast results are being used to rank and prioritize the toxicological risk of tested chemicals and to predict the toxicity of tens of thousands of chemicals not yet tested in vivo. However, the data size also presents challenges. When fitting the data, the choice of models, model selection strategy, and hit call criteria must reflect the need for computational efficiency and robustness, requiring hard and somewhat arbitrary cutoffs. When coupled with unavoidable noise in the experimental concentration response data, these hard cutoffs cause uncertainty in model parameters and the hit call itself. The uncertainty will then propagate through all of the models built on the data. Left unquantified, this uncertainty makes it difficult to fully interpret the data for risk assessment. We used bootstrap resampling methods to quantify the uncertainty in fitting models to the concentration response data. Bootstrap resampling determines confidence intervals for
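
    A hedged sketch of the bootstrap idea: resample the concentration-response points with replacement, refit a Hill-type curve each time, and read a confidence interval off the resampled potency estimates. The Hill model, data, and bootstrap size are placeholders, not the ToxCast pipeline.

```python
import numpy as np
from scipy.optimize import curve_fit

def hill(c, top, ac50):
    return top * c / (ac50 + c)

rng = np.random.default_rng(0)
conc = np.array([0.1, 0.3, 1.0, 3.0, 10.0, 30.0, 100.0])
resp = hill(conc, top=80.0, ac50=5.0) + rng.normal(0, 5.0, conc.size)   # noisy toy data

ac50_boot = []
for _ in range(1000):
    idx = rng.integers(0, conc.size, conc.size)         # resample points with replacement
    try:
        popt, _ = curve_fit(hill, conc[idx], resp[idx], p0=[max(resp), 5.0], maxfev=2000)
        ac50_boot.append(popt[1])
    except RuntimeError:                                 # fit failed for this resample
        pass

lo, hi = np.percentile(ac50_boot, [2.5, 97.5])
print(f"AC50 95% bootstrap interval: [{lo:.2f}, {hi:.2f}]")
```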

  4. Verification Validation and Uncertainty Quantification for CGS

    Energy Technology Data Exchange (ETDEWEB)

    Rider, William J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kamm, James R. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Weirs, V. Gregory [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-01-01

    The overall conduct of verification, validation and uncertainty quantification (VVUQ) is discussed through the construction of a workflow relevant to computational modeling including the turbulence problem in the coarse grained simulation (CGS) approach. The workflow contained herein is defined at a high level and constitutes an overview of the activity. Nonetheless, the workflow represents an essential activity in predictive simulation and modeling. VVUQ is complex and necessarily hierarchical in nature. The particular characteristics of VVUQ elements depend upon where the VVUQ activity takes place in the overall hierarchy of physics and models. In this chapter, we focus on the differences between and interplay among validation, calibration and UQ, as well as the difference between UQ and sensitivity analysis. The discussion in this chapter is at a relatively high level and attempts to explain the key issues associated with the overall conduct of VVUQ. The intention is that computational physicists can refer to this chapter for guidance regarding how VVUQ analyses fit into their efforts toward conducting predictive calculations.

  5. Information theoretic quantification of diagnostic uncertainty.

    Science.gov (United States)

    Westover, M Brandon; Eiseman, Nathaniel A; Cash, Sydney S; Bianchi, Matt T

    2012-01-01

    Diagnostic test interpretation remains a challenge in clinical practice. Most physicians receive training in the use of Bayes' rule, which specifies how the sensitivity and specificity of a test for a given disease combine with the pre-test probability to quantify the change in disease probability incurred by a new test result. However, multiple studies demonstrate physicians' deficiencies in probabilistic reasoning, especially with unexpected test results. Information theory, a branch of probability theory dealing explicitly with the quantification of uncertainty, has been proposed as an alternative framework for diagnostic test interpretation, but is even less familiar to physicians. We have previously addressed one key challenge in the practical application of Bayes theorem: the handling of uncertainty in the critical first step of estimating the pre-test probability of disease. This essay aims to present the essential concepts of information theory to physicians in an accessible manner, and to extend previous work regarding uncertainty in pre-test probability estimation by placing this type of uncertainty within a principled information theoretic framework. We address several obstacles hindering physicians' application of information theoretic concepts to diagnostic test interpretation. These include issues of terminology (mathematical meanings of certain information theoretic terms differ from clinical or common parlance) as well as the underlying mathematical assumptions. Finally, we illustrate how, in information theoretic terms, one can understand the effect on diagnostic uncertainty of considering ranges instead of simple point estimates of pre-test probability.
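
    The two viewpoints being contrasted can be illustrated in a few lines: Bayes' rule gives the post-test probability, while the information-theoretic view reports the expected reduction in binary entropy (in bits) produced by the test. The sensitivity, specificity, and pre-test probability below are illustrative.

```python
import numpy as np

def post_test(prior, sens, spec, positive=True):
    """Bayes' rule for disease probability after a test result."""
    if positive:
        return sens * prior / (sens * prior + (1 - spec) * (1 - prior))
    return (1 - sens) * prior / ((1 - sens) * prior + spec * (1 - prior))

def h2(p):
    """Binary entropy in bits."""
    p = np.clip(p, 1e-12, 1 - 1e-12)
    return -(p * np.log2(p) + (1 - p) * np.log2(1 - p))

prior, sens, spec = 0.10, 0.90, 0.95
p_pos = sens * prior + (1 - spec) * (1 - prior)          # probability of a positive result

post_pos = post_test(prior, sens, spec, True)
post_neg = post_test(prior, sens, spec, False)
expected_post_entropy = p_pos * h2(post_pos) + (1 - p_pos) * h2(post_neg)

print("post-test prob (+):", round(post_pos, 3), " (-):", round(post_neg, 4))
print("information gained by the test (bits):", round(h2(prior) - expected_post_entropy, 3))
```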

  6. Identification of Reliable Reference Genes for Quantification of MicroRNAs in Serum Samples of Sulfur Mustard-Exposed Veterans.

    Science.gov (United States)

    Gharbi, Sedigheh; Shamsara, Mehdi; Khateri, Shahriar; Soroush, Mohammad Reza; Ghorbanmehr, Nassim; Tavallaei, Mahmood; Nourani, Mohammad Reza; Mowla, Seyed Javad

    2015-01-01

    In spite of accumulating information about pathological aspects of sulfur mustard (SM), the precise mechanism responsible for its effects is not well understood. Circulating microRNAs (miRNAs) are promising biomarkers for disease diagnosis and prognosis. Accurate normalization using appropriate reference genes is a critical step in miRNA expression studies. In this study, we aimed to identify an appropriate reference gene for microRNA quantification in serum samples of SM victims. In this case and control experimental study, using quantitative real-time polymerase chain reaction (qRT-PCR), we evaluated the suitability of a panel of small RNAs including SNORD38B, SNORD49A, U6, 5S rRNA, miR-423-3p, miR-191, miR-16 and miR-103 in sera of 28 SM-exposed veterans of Iran-Iraq war (1980-1988) and 15 matched control volunteers. Different statistical algorithms including geNorm, Normfinder, best-keeper and comparative delta-quantification cycle (Cq) method were employed to find the least variable reference gene. miR-423-3p was identified as the most stably expressed reference gene, and miR-103 and miR-16 ranked after that. We demonstrate that non-miRNA reference genes have the least stability in serum samples and that some house-keeping miRNAs may be used as more reliable reference genes for miRNAs in serum. In addition, using the geometric mean of two reference genes could increase the reliability of the normalizers.

  7. Detection and quantification of Leveillula taurica growth in pepper leaves.

    Science.gov (United States)

    Zheng, Zheng; Nonomura, Teruo; Bóka, Károly; Matsuda, Yoshinori; Visser, Richard G F; Toyoda, Hideyoshi; Kiss, Levente; Bai, Yuling

    2013-06-01

    Leveillula taurica is an obligate fungal pathogen that causes powdery mildew disease on a broad range of plants, including important crops such as pepper, tomato, eggplant, onion, cotton, and so on. The early stage of this disease is difficult to diagnose and the disease can easily spread unobserved; for example, in pepper and tomato production fields and greenhouses. The objective of this study was to develop a detection and quantification method of L. taurica biomass in pepper leaves with special regard to the early stages of infection. We monitored the development of the disease to time the infection process on the leaf surface as well as inside the pepper leaves. The initial and final steps of the infection taking place on the leaf surface were consecutively observed using a dissecting microscope and a scanning electron microscope. The development of the intercellular mycelium in the mesophyll was followed by light and transmission electron microscopy. A pair of L. taurica-specific primers was designed based on the internal transcribed spacer sequence of L. taurica and used in real-time polymerase chain reaction (PCR) assay to quantify the fungal DNA during infection. The specificity of this assay was confirmed by testing the primer pair with DNA from host plants and also from another powdery mildew species, Oidium neolycopersici, infecting tomato. A standard curve was obtained for absolute quantification of L. taurica biomass. In addition, we tested a relative quantification method by using a plant gene as reference and the obtained results were compared with the visual disease index scoring. The real-time PCR assay for L. taurica provides a valuable tool for detection and quantification of this pathogen in breeding activities as well in plant-microbe interaction studies.
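
    A small sketch of the absolute-quantification step: fit the standard curve of Cq versus log10(template amount), then invert it for unknown samples and report the amplification efficiency. The Cq values are placeholders, not the paper's calibration data.

```python
import numpy as np

# Hypothetical standard dilution series (template in pg) and measured Cq values.
std_amount = np.array([1e4, 1e3, 1e2, 1e1, 1e0])
std_cq = np.array([16.1, 19.5, 22.9, 26.3, 29.8])

slope, intercept = np.polyfit(np.log10(std_amount), std_cq, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0              # ~1.0 corresponds to 100% efficiency

def quantify(cq):
    """Invert the standard curve: Cq -> estimated template amount."""
    return 10 ** ((cq - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print("unknown sample with Cq = 24.0 ->", round(quantify(24.0), 2), "pg")
```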

  8. Quantification of global myocardial function by cine MRI deformable registration-based analysis: Comparison with MR feature tracking and speckle-tracking echocardiography.

    Science.gov (United States)

    Lamacie, Mariana M; Thavendiranathan, Paaladinesh; Hanneman, Kate; Greiser, Andreas; Jolly, Marie-Pierre; Ward, Richard; Wintersperger, Bernd J

    2017-04-01

    To evaluate deformable registration algorithm (DRA)-based quantification of cine steady-state free-precession (SSFP) imaging for myocardial strain assessment in comparison with feature-tracking (FT) and speckle-tracking echocardiography (STE). Data sets of 28 patients/10 volunteers undergoing same-day 1.5T cardiac MRI and echocardiography were included. LV global longitudinal (GLS), circumferential (GCS) and radial (GRS) peak systolic strain were assessed on cine SSFP data using commercially available FT algorithms and prototype DRA-based algorithms. STE was applied as the standard of reference for accuracy, precision and intra-/interobserver reproducibility testing. DRA showed narrower limits of agreement compared to STE for GLS (-4.0 [-0.9, -7.9]) and GCS (-5.1 [1.1, -11.2]) than FT (3.2 [11.2, -4.9]; 3.8 [13.9, -6.3], respectively). While both DRA and FT demonstrated significant differences to STE for GLS and GCS (all p [...]), [...] cine MRI. • Inverse DRA demonstrated superior reproducibility compared to feature-tracking (FT) methods. • Cine MR DRA and FT analysis demonstrate differences to speckle-tracking echocardiography. • DRA demonstrated better correlation with STE than FT for MR-derived global strain data.

  9. Parallel Algorithms for Groebner-Basis Reduction

    Science.gov (United States)

    1987-09-25

    Technical report: Parallel Algorithms for Groebner-Basis Reduction (Productivity Engineering in the UNIX Environment).

  10. Learning from nature: Nature-inspired algorithms

    DEFF Research Database (Denmark)

    Albeanu, Grigore; Madsen, Henrik; Popentiu-Vladicescu, Florin

    2016-01-01

    .), genetic and evolutionary strategies, artificial immune systems etc. Well-known examples of applications include: aircraft wing design, wind turbine design, bionic car, bullet train, optimal decisions related to traffic, appropriate strategies to survive under a well-adapted immune system etc. Based......During the last decade, nature has inspired researchers to develop new algorithms. The largest collection of nature-inspired algorithms is biology-inspired: swarm intelligence (particle swarm optimization, ant colony optimization, cuckoo search, bees' algorithm, bat algorithm, firefly algorithm etc...... on collective social behaviour of organisms, researchers have developed optimization strategies taking into account not only the individuals, but also groups and the environment. However, learning from nature, new classes of approaches can be identified, tested and compared against already available algorithms...

  11. Quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Shenvi, Neil; Whaley, K. Birgitta; Kempe, Julia

    2003-01-01

    Quantum random walks on graphs have been shown to display many interesting properties, including exponentially fast hitting times when compared with their classical counterparts. However, it is still unclear how to use these novel properties to gain an algorithmic speedup over classical algorithms. In this paper, we present a quantum search algorithm based on the quantum random-walk architecture that provides such a speedup. It will be shown that this algorithm performs an oracle search on a database of N items with O(√(N)) calls to the oracle, yielding a speedup similar to other quantum search algorithms. It appears that the quantum random-walk formulation has considerable flexibility, presenting interesting opportunities for development of other, possibly novel quantum algorithms

  12. Comparison of Suitability of the Most Common Ancient DNA Quantification Methods.

    Science.gov (United States)

    Brzobohatá, Kristýna; Drozdová, Eva; Smutný, Jiří; Zeman, Tomáš; Beňuš, Radoslav

    2017-04-01

    Ancient DNA (aDNA) extracted from historical bones is damaged and fragmented into short segments, present in low quantity, and usually copurified with microbial DNA. A wide range of DNA quantification methods are available. The aim of this study was to compare the five most common DNA quantification methods for aDNA. Quantification methods were tested on DNA extracted from skeletal material originating from an early medieval burial site. The tested methods included ultraviolet (UV) absorbance, real-time quantitative polymerase chain reaction (qPCR) based on SYBR ® green detection, real-time qPCR based on a forensic kit, quantification via fluorescent dyes bonded to DNA, and fragmentary analysis. Differences between groups were tested using a paired t-test. Methods that measure total DNA present in the sample (NanoDrop ™ UV spectrophotometer and Qubit ® fluorometer) showed the highest concentrations. Methods based on real-time qPCR underestimated the quantity of aDNA. The most accurate method of aDNA quantification was fragmentary analysis, which also allows DNA quantification of the desired length and is not affected by PCR inhibitors. Methods based on the quantification of the total amount of DNA in samples are unsuitable for ancient samples as they overestimate the amount of DNA presumably due to the presence of microbial DNA. Real-time qPCR methods give undervalued results due to DNA damage and the presence of PCR inhibitors. DNA quantification methods based on fragment analysis show not only the quantity of DNA but also fragment length.

  13. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data

    International Nuclear Information System (INIS)

    Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P.A.; Schmid, Adrien W.

    2016-01-01

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. - Highlights: • A flexible strategy for analyzing MS and LC-MS data of lipid molecules is proposed. • Isotope distribution spectra of theoretically possible compounds were generated. • High resolution MS and LC-MS data were resolved by least squares spectral resolution. • The method proposed compounds that are likely to occur in the analyzed samples. • The proposed compounds matched results from manual interpretation of fragment spectra.
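
    A hedged sketch of the least squares resolution step: given theoretical isotope-pattern spectra for candidate compounds on a common m/z grid, a nonnegative least squares fit estimates how much of each candidate is present in an observed spectrum. The three-column "patterns" are toy stand-ins for real isotope distributions.

```python
import numpy as np
from scipy.optimize import nnls

# Columns: theoretical isotope-distribution spectra of candidate compounds on a common m/z grid.
patterns = np.array([
    [1.00, 0.00, 0.00],
    [0.55, 0.80, 0.00],
    [0.30, 0.44, 1.00],
    [0.10, 0.15, 0.62],
    [0.02, 0.03, 0.25],
])

true_amounts = np.array([2.0, 0.0, 1.5])
observed = patterns @ true_amounts + 0.01 * np.random.default_rng(0).standard_normal(5)

amounts, residual = nnls(patterns, observed)      # least squares with nonnegativity
print("estimated amounts:", np.round(amounts, 3), " residual:", round(residual, 4))
```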

  14. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Ying-Xu [Department of Chemistry, University of Bergen, PO Box 7803, N-5020 Bergen (Norway); Mjøs, Svein Are, E-mail: svein.mjos@kj.uib.no [Department of Chemistry, University of Bergen, PO Box 7803, N-5020 Bergen (Norway); David, Fabrice P.A. [Bioinformatics and Biostatistics Core Facility, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL) and Swiss Institute of Bioinformatics (SIB), Lausanne (Switzerland); Schmid, Adrien W. [Proteomics Core Facility, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne (Switzerland)

    2016-03-31

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. - Highlights: • A flexible strategy for analyzing MS and LC-MS data of lipid molecules is proposed. • Isotope distribution spectra of theoretically possible compounds were generated. • High resolution MS and LC-MS data were resolved by least squares spectral resolution. • The method proposed compounds that are likely to occur in the analyzed samples. • The proposed compounds matched results from manual interpretation of fragment spectra.

  15. Langevin simulations of QCD, including fermions

    International Nuclear Information System (INIS)

    Kronfeld, A.S.

    1986-02-01

    We encounter critical slowing down in updating when ξ/a → ∞ and in matrix inversion (needed to include fermions) when m_q a → 0. A simulation that purports to solve QCD numerically will encounter these limits, so to face the challenge in the title of this workshop, we must cure the disease of critical slowing down. Physically, this critical slowing down is due to the reluctance of changes at short distances to propagate to large distances. Numerically, the stability of an algorithm at short wavelengths requires a (moderately) small step size; critical slowing down occurs when the effective long-wavelength step size becomes tiny. The remedy for this disease is an algorithm that propagates signals quickly throughout the system, i.e. one whose effective step size is not reduced for the long-wavelength components of the fields. (Here the effective "step size" is essentially an inverse decorrelation time.) To do so one must resolve the various wavelengths of the system and modify the dynamics (in CPU time) of the simulation so that all modes evolve at roughly the same rate. This can be achieved by introducing Fourier transforms. I show how to implement Fourier acceleration for Langevin updating and for conjugate gradient matrix inversion. The crucial feature of these algorithms that lends them to Fourier acceleration is that they update the lattice globally; hence the Fourier transforms are computed once per sweep rather than once per hit. (orig./HSI)
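
    A minimal illustration of Fourier acceleration for Langevin updating, using a free 1D scalar field (Gaussian action) instead of QCD: the drift and noise are applied in momentum space with a mode-dependent step size proportional to 1/(k̂² + m²), so long-wavelength modes decorrelate about as fast as short ones. Lattice size, mass, and step size are illustrative.

```python
import numpy as np

N, m2, eps, n_steps = 64, 0.01, 0.01, 20000
k = 2 * np.pi * np.fft.fftfreq(N)
khat2 = 2.0 - 2.0 * np.cos(k)                       # lattice momentum squared
eps_k = eps * (khat2.max() + m2) / (khat2 + m2)     # mode-dependent (accelerated) step sizes

rng = np.random.default_rng(0)
phi = np.zeros(N)
acc = np.zeros(N)
n_meas = 0
for step in range(n_steps):
    phi_k = np.fft.fft(phi)
    drift_k = (khat2 + m2) * phi_k                  # FFT of (-laplacian + m^2) phi
    eta_k = np.fft.fft(rng.standard_normal(N))      # real-space white noise, transformed
    phi_k = phi_k - eps_k * drift_k + np.sqrt(2.0 * eps_k) * eta_k
    phi = np.real(np.fft.ifft(phi_k))
    if step > n_steps // 2:                         # crude thermalization cut
        acc += np.abs(np.fft.fft(phi)) ** 2 / N
        n_meas += 1

measured = acc / n_meas
exact = 1.0 / (khat2 + m2)                          # free-field propagator
print("k=0 mode  :", round(measured[0], 1), " exact:", round(exact[0], 1))
print("k=pi mode :", round(measured[N // 2], 3), " exact:", round(exact[N // 2], 3))
```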

  16. A novel method for quantification of beam's-eye-view tumor tracking performance.

    Science.gov (United States)

    Hu, Yue-Houng; Myronakis, Marios; Rottmann, Joerg; Wang, Adam; Morf, Daniel; Shedlock, Daniel; Baturin, Paul; Star-Lack, Josh; Berbeco, Ross

    2017-11-01

    In-treatment imaging using an electronic portal imaging device (EPID) can be used to confirm patient and tumor positioning. Real-time tumor tracking performance using current digital megavolt (MV) imagers is hindered by poor image quality. Novel EPID designs may help to improve the quantum noise response while preserving the high spatial resolution of the current clinical detector. Recently investigated EPID design improvements include, but are not limited to, multi-layer imager (MLI) architecture, thick crystalline and amorphous scintillators, and phosphor pixelation and focusing. The goal of the present study was to provide a method of quantifying improvement in tracking performance and to reveal the physical underpinnings of detector design that impact tracking quality. The study employs a generalizable ideal observer methodology for the quantification of tumor tracking performance. The analysis is applied to study both the effect of increasing scintillator thickness on a standard, single-layer imager (SLI) design and the effect of MLI architecture on tracking performance. The present study uses the ideal observer signal-to-noise ratio (d') as a surrogate for tracking performance. We employ functions which model clinically relevant tasks and generalized frequency-domain imaging metrics to connect image quality with tumor tracking. A detection task for relevant Cartesian shapes (i.e., spheres and cylinders) was used to quantify the trackability of cases employing fiducial markers. Automated lung tumor tracking algorithms often leverage the differences in benign and malignant lung tissue textures. These types of algorithms (e.g., soft-tissue localization - STiL) were simulated by designing a discrimination task, which quantifies the differentiation of tissue textures, measured experimentally and fit to a power-law trend (with exponent β) using a cohort of MV images of patient lungs. The modeled MTF and NPS were used to investigate the effect of
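
    As a hedged sketch of the frequency-domain calculation this methodology implies, the snippet below computes an ideal-observer detectability index d' from a detection-task function combined with an MTF and an NPS; the MTF, NPS and task models are simple invented placeholders, not the detector models of the study.

      import numpy as np

      n, pix = 256, 0.5                               # grid size, pixel pitch (mm), assumed
      f = np.fft.fftfreq(n, d=pix)
      fx, fy = np.meshgrid(f, f)
      fr = np.hypot(fx, fy)

      # Placeholder frequency-domain detector models (assumptions, not measured data).
      mtf = np.exp(-fr / 0.4)                         # modulation transfer function
      nps = 1.0e-6 * (1.0 + 0.5 * np.exp(-fr / 0.2))  # noise power spectrum

      # Detection task: Fourier transform of a 5 mm diameter disc of unit contrast.
      x = (np.arange(n) - n // 2) * pix
      xx, yy = np.meshgrid(x, x)
      disc = (np.hypot(xx, yy) <= 2.5).astype(float)
      task = np.abs(np.fft.fft2(disc)) * pix ** 2     # task function W(u, v)

      df = f[1] - f[0]
      d_prime2 = np.sum(task ** 2 * mtf ** 2 / nps) * df * df
      print("ideal-observer d' =", np.sqrt(d_prime2))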

  17. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Science.gov (United States)

    Huan, Xun; Safta, Cosmin; Sargsyan, Khachik; Geraci, Gianluca; Eldred, Michael S.; Vane, Zachary P.; Lacaze, Guilhem; Oefelein, Joseph C.; Najm, Habib N.

    2018-03-01

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system's stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. These methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.
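
    For readers unfamiliar with the sensitivity step, the sketch below estimates first-order Sobol indices with a standard Monte Carlo (Saltelli-type) estimator; the cheap analytic test function stands in for the expensive flow simulation and is purely illustrative.

      import numpy as np

      rng = np.random.default_rng(1)

      def model(x):        # cheap analytic stand-in for the expensive simulation
          return (np.sin(x[:, 0]) + 7.0 * np.sin(x[:, 1]) ** 2
                  + 0.1 * x[:, 2] ** 4 * np.sin(x[:, 0]))

      d, n = 3, 20000
      A = rng.uniform(-np.pi, np.pi, (n, d))
      B = rng.uniform(-np.pi, np.pi, (n, d))
      fA, fB = model(A), model(B)
      var = np.var(np.concatenate([fA, fB]))

      for i in range(d):   # first-order Sobol index of each uncertain input
          ABi = A.copy()
          ABi[:, i] = B[:, i]          # replace column i of A with column i of B
          S_i = np.mean(fB * (model(ABi) - fA)) / var
          print(f"S_{i + 1} ~ {S_i:.2f}")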

  18. Planar imaging quantification using 3D attenuation correction data and Monte Carlo simulated buildup factors

    International Nuclear Information System (INIS)

    Miller, C.; Filipow, L.; Jackson, S.; Riauka, T.

    1996-01-01

    A new method to correct for attenuation and the buildup of scatter in planar imaging quantification is presented. The method is based on the combined use of 3D density information provided by computed tomography to correct for attenuation and the application of Monte Carlo simulated buildup factors to correct for buildup in the projection pixels. CT and nuclear medicine images were obtained for a purpose-built nonhomogeneous phantom that models the human anatomy in the thoracic and abdominal regions. The CT transverse slices of the phantom were converted to a set of consecutive density maps. An algorithm was developed that projects the 3D information contained in the set of density maps to create opposing pairs of accurate 2D correction maps that were subsequently applied to planar images acquired from a dual-head gamma camera. A comparison of results obtained by the new method and the geometric mean approach based on published techniques is presented for some of the source arrangements used. Excellent results were obtained for various source-phantom configurations used to evaluate the method. Activity quantification of a line source at most locations in the nonhomogeneous phantom produced errors of less than 2%. Additionally, knowledge of the actual source depth is not required for accurate activity quantification. Quantification of volume sources placed in foam, Perspex and aluminium produced errors of less than 7% for the abdominal and thoracic configurations of the phantom. (author)
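
    To make the arithmetic of conjugate-view quantification concrete, a minimal sketch with an attenuation and buildup correction is given below; the counts, attenuation coefficient, buildup factor and calibration are invented numbers, not the phantom measurements of the paper.

      import numpy as np

      # Conjugate-view (geometric mean) activity estimate; illustrative values only.
      I_ant, I_post = 1520.0, 980.0   # counts/s in opposing anterior/posterior views
      mu = 0.12                       # effective linear attenuation coefficient (1/cm)
      L = 22.0                        # body/phantom thickness along the ray (cm)
      buildup = 1.35                  # Monte Carlo simulated buildup factor
      cal = 90.0                      # system sensitivity (counts/s per MBq)

      geometric_mean = np.sqrt(I_ant * I_post)
      # Attenuation correction over half the thickness makes the estimate
      # independent of the source depth; buildup divides out the scatter excess.
      activity = geometric_mean * np.exp(mu * L / 2.0) / (buildup * cal)
      print(f"estimated activity: {activity:.2f} MBq")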

  19. Tool for objective quantification of pulmonary sequelae in monitoring of patients with tuberculosis

    International Nuclear Information System (INIS)

    Giacomini, Guilherme; Alvarez, Matheus; Pina, Diana R. de; Bacchim Neto, Fernando A.; Pereira, Paulo C.M.; Ribeiro, Sergio M.; Miranda, Jose Ricardo de A.

    2014-01-01

    Tuberculosis (TB), caused by Mycobacterium tuberculosis, is an ancient infectious disease that remains a global health problem. Chest radiography is the method commonly employed in assessing the evolution of TB. However, lung damage quantification methods are usually performed on computed tomography (CT). This objective quantification is important in the radiological monitoring of the patient, for assessing the progression and treatment of TB. However, precise quantification is not feasible given the number of CT examinations required, due to the high dose delivered to the patient and the high cost to the institution. The purpose of this work is to develop a tool to quantify pulmonary sequelae caused by TB through chest X-rays. To this end, a computational algorithm was developed that creates a three-dimensional representation of the lungs, with regions of dilated sequelae inside. The pulmonary sequelae of these patients were also quantified through CT scans performed on nearby dates, minimizing the differences in disease progression. The measurements from the two methods were compared, and the results suggest the effectiveness and applicability of the developed tool, allowing lower-dose radiological monitoring of the patient during treatment

  20. Global Sensitivity Analysis and Estimation of Model Error, Toward Uncertainty Quantification in Scramjet Computations

    Energy Technology Data Exchange (ETDEWEB)

    Huan, Xun [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Geraci, Gianluca [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Eldred, Michael S. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Vane, Zachary P. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lacaze, Guilhem [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Oefelein, Joseph C. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)

    2018-02-09

    The development of scramjet engines is an important research area for advancing hypersonic and orbital flights. Progress toward optimal engine designs requires accurate flow simulations together with uncertainty quantification. However, performing uncertainty quantification for scramjet simulations is challenging due to the large number of uncertain parameters involved and the high computational cost of flow simulations. These difficulties are addressed in this paper by developing practical uncertainty quantification algorithms and computational methods, and deploying them in the current study to large-eddy simulations of a jet in crossflow inside a simplified HIFiRE Direct Connect Rig scramjet combustor. First, global sensitivity analysis is conducted to identify influential uncertain input parameters, which can help reduce the system’s stochastic dimension. Second, because models of different fidelity are used in the overall uncertainty quantification assessment, a framework for quantifying and propagating the uncertainty due to model error is presented. Finally, these methods are demonstrated on a nonreacting jet-in-crossflow test problem in a simplified scramjet geometry, with parameter space up to 24 dimensions, using static and dynamic treatments of the turbulence subgrid model, and with two-dimensional and three-dimensional geometries.

  1. Introduction to Evolutionary Algorithms

    CERN Document Server

    Yu, Xinjie

    2010-01-01

    Evolutionary algorithms (EAs) are becoming increasingly attractive for researchers from various disciplines, such as operations research, computer science, industrial engineering, electrical engineering, social science, economics, etc. This book presents an insightful, comprehensive, and up-to-date treatment of EAs, such as genetic algorithms, differential evolution, evolution strategy, constraint optimization, multimodal optimization, multiobjective optimization, combinatorial optimization, evolvable hardware, estimation of distribution algorithms, ant colony optimization, particle swarm optimization

  2. Recursive forgetting algorithms

    DEFF Research Database (Denmark)

    Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan

    1992-01-01

    In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied...... to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm...
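
    For orientation, the sketch below shows the simplest member of this family, recursive least squares with a uniform exponential forgetting factor; the selective, non-uniform forgetting analysed in the paper would replace the single lambda with a direction-dependent scheme.

      import numpy as np

      def rls_forgetting(Phi, y, lam=0.98, delta=100.0):
          """Recursive least squares with exponential forgetting factor lam."""
          n = Phi.shape[1]
          theta = np.zeros(n)             # parameter estimate
          P = delta * np.eye(n)           # covariance-like matrix
          for x, yk in zip(Phi, y):
              x = x.reshape(-1, 1)
              k = P @ x / (lam + x.T @ P @ x)        # gain vector
              theta = theta + k.flatten() * (yk - x.flatten() @ theta)
              P = (P - k @ x.T @ P) / lam            # discount old information
          return theta

      # Identify y = 2*u1 - 3*u2 from noisy data.
      rng = np.random.default_rng(0)
      Phi = rng.standard_normal((500, 2))
      y = Phi @ np.array([2.0, -3.0]) + 0.1 * rng.standard_normal(500)
      print(rls_forgetting(Phi, y))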

  3. Packet-Based Control Algorithms for Cooperative Surveillance and Reconnaissance

    National Research Council Canada - National Science Library

    Murray, Richard M

    2007-01-01

    ..., and repeated transmissions. Results include analysis and design of estimation and control algorithms in the presence of packet loss and across multi-hop data networks, distributed estimation and sensor fusion algorithms...

  4. Robustness of Multiple Clustering Algorithms on Hyperspectral Images

    National Research Council Canada - National Science Library

    Williams, Jason P

    2007-01-01

    .... Various clustering algorithms were employed, including a hierarchical method, ISODATA, K-means, and X-means, and were used on a simple two dimensional dataset in order to discover potential problems with the algorithms...

  5. Breast density quantification with cone-beam CT: a post-mortem study

    International Nuclear Information System (INIS)

    Johnson, Travis; Ding, Huanjun; Le, Huy Q; Ducote, Justin L; Molloi, Sabee

    2013-01-01

    Forty post-mortem breasts were imaged with a flat-panel based cone-beam x-ray CT system at 50 kVp. The feasibility of breast density quantification has been investigated using standard histogram thresholding and an automatic segmentation method based on the fuzzy c-means algorithm (FCM). The breasts were chemically decomposed into water, lipid, and protein immediately after image acquisition was completed. The per cent fibroglandular volume (%FGV) from chemical analysis was used as the gold standard for breast density comparison. Both image-based segmentation techniques showed good precision in breast density quantification with high linear coefficients between the right and left breast of each pair. When comparing with the gold standard using %FGV from chemical analysis, Pearson's r-values were estimated to be 0.983 and 0.968 for the FCM clustering and the histogram thresholding techniques, respectively. The standard error of the estimate was also reduced from 3.92% to 2.45% by applying the automatic clustering technique. The results of the postmortem study suggested that breast tissue can be characterized in terms of water, lipid and protein contents with high accuracy by using chemical analysis, which offers a gold standard for breast density studies comparing different techniques. In the investigated image segmentation techniques, the FCM algorithm had high precision and accuracy in breast density quantification. In comparison to conventional histogram thresholding, it was more efficient and reduced inter-observer variation. (paper)
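
    A minimal sketch of the fuzzy c-means step used here for density quantification: voxels are softly assigned to adipose and fibroglandular clusters and %FGV is read from the resulting labels. The synthetic intensities and the compact two-cluster implementation are illustrative assumptions, not the study's pipeline.

      import numpy as np

      def fuzzy_cmeans_1d(x, c=2, m=2.0, iters=100):
          """Basic fuzzy c-means on 1D intensities; returns centers and memberships."""
          rng = np.random.default_rng(0)
          centers = rng.choice(x, c)
          for _ in range(iters):
              d = np.abs(x[:, None] - centers[None, :]) + 1e-12     # distances
              u = d ** (-2.0 / (m - 1.0))                           # unnormalized
              u /= u.sum(axis=1, keepdims=True)                     # memberships
              centers = (u ** m * x[:, None]).sum(0) / (u ** m).sum(0)
          return centers, u

      # Synthetic breast CT voxels: adipose ~30, fibroglandular ~80 (arbitrary units).
      rng = np.random.default_rng(1)
      voxels = np.concatenate([rng.normal(30, 5, 7000), rng.normal(80, 6, 3000)])
      centers, u = fuzzy_cmeans_1d(voxels)
      fg = int(np.argmax(centers))                 # fibroglandular = brighter cluster
      pct_fgv = 100.0 * (np.argmax(u, axis=1) == fg).mean()
      print(f"percent fibroglandular volume ~ {pct_fgv:.1f}%")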

  6. Serendipity: Global Detection and Quantification of Plant Stress

    Science.gov (United States)

    Schimel, D.; Verma, M.; Drewry, D.

    2016-12-01

    Detecting and quantifying plant stress is a grand challenge for remote sensing, and is important for understanding climate impacts on ecosystems broadly and also for early warning systems supporting food security. The long record from moderate resolution sensors providing frequent data has allowed using phenology to detect stress in forest and agroecosystems, but can fail or give ambiguous results when stress occurs during later phases of growth and in high leaf area systems. The recent recognition that greenhouse gas satellites such as GOSAT and OCO-2 observe Solar-Induced Fluorescence has added a new and complementary tool for the quantification of stress, but algorithms to detect and quantify stress using SIF are in their infancy. Here we report new results showing a more complex response of SIF to stress by evaluating spaceborne SIF against in situ eddy covariance data. The response observed is as predicted by theory, and shows that SIF, used in conjunction with moderate resolution remote sensing, can detect and likely quantify stress by indexing the nonlinear part of the SIF-GPP relationship using the photochemical reflectance index and remotely observed light absorption. There are several exciting opportunities on the near horizon for the implementation of SIF, together with synergistic measurements such as PRI and evapotranspiration, that suggest the next few years will be a golden age for global ecology. Advancing the science and associated algorithms now is essential to fully exploiting the next wave of missions.

  7. Explaining algorithms using metaphors

    CERN Document Server

    Forišek, Michal

    2013-01-01

    There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes fo

  8. Algorithms in Algebraic Geometry

    CERN Document Server

    Dickenstein, Alicia; Sommese, Andrew J

    2008-01-01

    In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry, but now are of interest for use in applications and some of these algorithms were originally designed for applications, but now are of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry that was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and Its

  9. Shadow algorithms data miner

    CERN Document Server

    Woo, Andrew

    2012-01-01

    Digital shadow generation continues to be an important aspect of visualization and visual effects in film, games, simulations, and scientific applications. This resource offers a thorough picture of the motivations, complexities, and categorized algorithms available to generate digital shadows. From general fundamentals to specific applications, it addresses shadow algorithms and how to manage huge data sets from a shadow perspective. The book also examines the use of shadow algorithms in industrial applications, in terms of what algorithms are used and what software is applicable.

  10. Spectral Decomposition Algorithm (SDA)

    Data.gov (United States)

    National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...

  11. Quick fuzzy backpropagation algorithm.

    Science.gov (United States)

    Nikov, A; Stoeva, S

    2001-03-01

    A modification of the fuzzy backpropagation (FBP) algorithm called the QuickFBP algorithm is proposed, in which the computation of the net function is significantly quicker. It is proved that the FBP algorithm is of exponential time complexity, while the QuickFBP algorithm is of polynomial time complexity. Convergence conditions of the QuickFBP and the FBP algorithms are defined and proved for: (1) single output neural networks in the case of training patterns with different targets; and (2) multiple output neural networks in the case of training patterns with an equivalued target vector. They support the automation of the weights training process (quasi-unsupervised learning), establishing the target value(s) depending on the network's input values. In these cases the simulation results confirm the convergence of both algorithms. An example with a large-sized neural network illustrates the significantly greater training speed of the QuickFBP compared to the FBP algorithm. The adaptation of an interactive web system to users on the basis of the QuickFBP algorithm is presented. Since the QuickFBP algorithm ensures quasi-unsupervised learning, this implies its broad applicability in areas such as adaptive and adaptable interactive systems, data mining, and other applications.

  12. Portfolios of quantum algorithms.

    Science.gov (United States)

    Maurer, S M; Hogg, T; Huberman, B A

    2001-12-17

    Quantum computation holds promise for the solution of many intractable problems. However, since many quantum algorithms are stochastic in nature they can find the solution of hard problems only probabilistically. Thus the efficiency of the algorithms has to be characterized by both the expected time to completion and the associated variance. In order to minimize both the running time and its uncertainty, we show that portfolios of quantum algorithms analogous to those of finance can outperform single algorithms when applied to NP-complete problems such as 3-satisfiability.

  13. Hybrid Reduced Order Modeling Algorithms for Reactor Physics Calculations

    Science.gov (United States)

    Bang, Youngsuk

    Reduced order modeling (ROM) has been recognized as an indispensable approach when the engineering analysis requires many executions of high fidelity simulation codes. Examples of such engineering analyses in nuclear reactor core calculations, representing the focus of this dissertation, include the functionalization of the homogenized few-group cross-sections in terms of the various core conditions, e.g. burn-up, fuel enrichment, temperature, etc. This is done via assembly calculations which are executed many times to generate the required functionalization for use in the downstream core calculations. Other examples are sensitivity analysis used to determine important core attribute variations due to input parameter variations, and uncertainty quantification employed to estimate core attribute uncertainties originating from input parameter uncertainties. ROM constructs a surrogate model with quantifiable accuracy which can replace the original code for subsequent engineering analysis calculations. This is achieved by reducing the effective dimensionality of the input parameter, the state variable, or the output response spaces, by projection onto the so-called active subspaces. Confining the variations to the active subspace allows one to construct an ROM model of reduced complexity which can be solved more efficiently. This dissertation introduces a new algorithm to render reduction with the reduction errors bounded based on a user-defined error tolerance which represents the main challenge of existing ROM techniques. Bounding the error is the key to ensuring that the constructed ROM models are robust for all possible applications. Providing such error bounds represents one of the algorithmic contributions of this dissertation to the ROM state-of-the-art. Recognizing that ROM techniques have been developed to render reduction at different levels, e.g. the input parameter space, the state space, and the response space, this dissertation offers a set of novel

  14. Algorithm 426 : Merge sort algorithm [M1

    NARCIS (Netherlands)

    Bron, C.

    1972-01-01

    Sorting by means of a two-way merge has a reputation of requiring a clerically complicated and cumbersome program. This ALGOL 60 procedure demonstrates that, using recursion, an elegant and efficient algorithm can be designed, the correctness of which is easily proved [2]. Sorting n objects gives
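
    The recursive two-way merge the note describes, transcribed as a short sketch (in Python rather than ALGOL 60):

      def merge_sort(a):
          """Recursive two-way merge sort; returns a new sorted list."""
          if len(a) <= 1:
              return list(a)
          mid = len(a) // 2
          left, right = merge_sort(a[:mid]), merge_sort(a[mid:])
          merged, i, j = [], 0, 0
          while i < len(left) and j < len(right):   # merge the two sorted halves
              if left[i] <= right[j]:
                  merged.append(left[i]); i += 1
              else:
                  merged.append(right[j]); j += 1
          return merged + left[i:] + right[j:]

      print(merge_sort([5, 2, 9, 1, 5, 6]))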

  15. Strategy study of quantification harmonization of SUV in PET/CT images; Estudo da estrategia de harmonizacao da quantificacao do SUV em imagens de PET/CT

    Energy Technology Data Exchange (ETDEWEB)

    Fischer, Andreia Caroline Fischer da Silveira

    2014-07-01

    In clinical practice, PET/CT images are often analyzed qualitatively by visual comparison of tumor lesion and normal tissue uptake, and semi-quantitatively by means of a parameter called SUV (Standardized Uptake Value). To ensure that longitudinal studies acquired on different scanners are interchangeable, and that quantification information is comparable, it is necessary to establish a strategy to harmonize the quantification of SUV. The aim of this study is to evaluate a strategy to harmonize the quantification of PET/CT images performed with scanners of different models and manufacturers. For this purpose, a survey of the technical characteristics of the equipment and of the acquisition protocols for clinical images was conducted across different PET/CT services in the state of Rio Grande do Sul. For each scanner, the accuracy of SUV quantification and the Recovery Coefficient (RC) curves were determined, using the clinically relevant and available reconstruction parameters. From these data, harmonized performance specifications among the evaluated scanners were identified, as well as the algorithm that produces, for each one, the most accurate quantification. Finally, the most appropriate reconstruction parameters to harmonize the SUV quantification in each scanner, either regionally or internationally, were identified. The RC values of the analyzed scanners were overestimated by up to 38%, particularly for objects larger than 17 mm. These results demonstrate the need for further optimization, through modification of the reconstruction parameters and even a change of the reconstruction algorithm used in each scanner. It was observed that there is a decoupling between the best image for PET/CT qualitative analysis and the best image for quantification studies. Thus, the choice of reconstruction method should be tied to the purpose of the PET/CT study in question, since the same reconstruction algorithm is not adequate, in one scanner, for qualitative
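
    For reference, a small sketch of the two quantities involved, SUV and the recovery coefficient (RC); the activity values are invented, not data from the surveyed scanners.

      def suv(conc_kbq_per_ml, injected_mbq, weight_kg):
          """Body-weight-normalized Standardized Uptake Value."""
          # 1 MBq = 1000 kBq; 1 kg of tissue is taken as roughly 1000 mL.
          return conc_kbq_per_ml / (injected_mbq * 1000.0 / (weight_kg * 1000.0))

      def recovery_coefficient(measured_conc, true_conc):
          """RC = measured / true activity concentration for a given object size."""
          return measured_conc / true_conc

      # Illustrative numbers only:
      print("SUV =", round(suv(12.0, 350.0, 70.0), 2))            # ~2.4
      print("RC  =", round(recovery_coefficient(18.5, 20.0), 2))  # 0.93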

  16. Cell motility dynamics: a novel segmentation algorithm to quantify multi-cellular bright field microscopy images.

    Directory of Open Access Journals (Sweden)

    Assaf Zaritsky

    Full Text Available Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional

  17. Cell motility dynamics: a novel segmentation algorithm to quantify multi-cellular bright field microscopy images.

    Science.gov (United States)

    Zaritsky, Assaf; Natan, Sari; Horev, Judith; Hecht, Inbal; Wolf, Lior; Ben-Jacob, Eshel; Tsarfaty, Ilan

    2011-01-01

    Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image-segmentation of multi-cellular regions in bright field images demonstrating enhanced quantitative analyses and better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm to separate between multi-cellular and background regions for bright field images, which is based on classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel fully automated, accurate, zero-parameters method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional fluorescence single
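
    A condensed sketch of the patch-classification idea behind MultiCellSeg: local patches are described by basic intensity features and a support vector machine separates multi-cellular from background patches. The synthetic patches and the single (non-cascaded) SVM below are simplifications for illustration; the cascade and the graph-cut refinement are omitted.

      import numpy as np
      from sklearn.svm import SVC

      rng = np.random.default_rng(0)

      def basic_features(patch):
          """Simple intensity features for one image patch."""
          return [patch.mean(), patch.std(), np.abs(np.diff(patch, axis=0)).mean()]

      # Synthetic training patches: textured "multi-cellular" vs. flat "background".
      cells = [rng.normal(0.5, 0.20, (16, 16)) for _ in range(200)]
      backgr = [rng.normal(0.5, 0.03, (16, 16)) for _ in range(200)]
      X = np.array([basic_features(p) for p in cells + backgr])
      y = np.array([1] * 200 + [0] * 200)

      clf = SVC(kernel="rbf").fit(X, y)

      # Classify a new patch (1 = multi-cellular region, 0 = background).
      test_patch = rng.normal(0.5, 0.18, (16, 16))
      print(clf.predict([basic_features(test_patch)]))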

  18. Quantitative analysis of emphysema in 3D using MDCT: Influence of different reconstruction algorithms

    International Nuclear Information System (INIS)

    Ley-Zaporozhan, Julia; Ley, Sebastian; Weinheimer, Oliver; Iliyushenko, Svitlana; Erdugan, Serap; Eberhardt, Ralf; Fuxa, Adelheid; Mews, Juergen; Kauczor, Hans-Ulrich

    2008-01-01

    Purpose: The aim of the study was to compare the influence of different reconstruction algorithms on quantitative emphysema analysis in patients with severe emphysema. Material and methods: Twenty-five patients suffering from severe emphysema were included in the study. All patients underwent inspiratory MDCT (Aquilion-16, slice thickness 1/0.8 mm). The raw data were reconstructed using six different algorithms: bone kernel with beam hardening correction (BHC), soft tissue kernel with BHC, standard soft tissue kernel, smooth soft tissue kernel (internal reference standard), standard lung kernel, and high-convolution kernel. The only difference between image data sets was the algorithm employed to reconstruct the raw data; no additional radiation was required. CT data were analysed using self-written emphysema detection and quantification software providing lung volume, emphysema volume (EV), emphysema index (EI) and mean lung density (MLD). Results: The use of kernels with BHC led to a significant decrease in MLD (5%) and EI (61-79%) in comparison with kernels without BHC. The absolute differences (from the smooth soft tissue kernel) in MLD ranged from -0.6 to -6.1 HU and were significantly different for all kernels. The EV showed absolute differences between -0.05 and -0.4 L and was significantly different for all kernels. The EI showed absolute differences between -0.8 and -5.1 and was significantly different for all kernels. Conclusion: The use of kernels with BHC led to a significant decrease in MLD and EI. The absolute differences between different kernels without BHC were small but larger than the known interscan variation in patients. Thus, for follow-up examinations the same reconstruction algorithm has to be used and the use of BHC has to be avoided
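
    For reference, a minimal sketch of the kind of quantification such software performs on a segmented lung volume: the emphysema index is the fraction of lung voxels below a density threshold and MLD is the mean lung density. The -950 HU threshold, the voxel volume and the synthetic data are assumptions, not the in-house software of the study.

      import numpy as np

      def emphysema_metrics(lung_hu, threshold_hu=-950.0, voxel_ml=0.0004):
          """Return lung volume (L), emphysema volume (L), EI (%) and MLD (HU)."""
          lung_hu = np.asarray(lung_hu, dtype=float)
          n_emph = np.count_nonzero(lung_hu < threshold_hu)
          lung_volume = lung_hu.size * voxel_ml / 1000.0     # litres
          emph_volume = n_emph * voxel_ml / 1000.0
          return lung_volume, emph_volume, 100.0 * n_emph / lung_hu.size, lung_hu.mean()

      rng = np.random.default_rng(0)
      lung = rng.normal(-870, 60, size=500_000)              # synthetic lung voxels (HU)
      lv, ev, ei, mld = emphysema_metrics(lung)
      print(f"LV={lv:.2f} L  EV={ev:.3f} L  EI={ei:.1f}%  MLD={mld:.0f} HU")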

  19. Algorithms and Their Explanations

    NARCIS (Netherlands)

    Benini, M.; Gobbo, F.; Beckmann, A.; Csuhaj-Varjú, E.; Meer, K.

    2014-01-01

    By analysing the explanation of the classical heapsort algorithm via the method of levels of abstraction mainly due to Floridi, we give a concrete and precise example of how to deal with algorithmic knowledge. To do so, we introduce a concept already implicit in the method, the ‘gradient of

  20. Finite lattice extrapolation algorithms

    International Nuclear Information System (INIS)

    Henkel, M.; Schuetz, G.

    1987-08-01

    Two algorithms for sequence extrapolation, due to von den Broeck and Schwartz and to Bulirsch and Stoer, are reviewed and critically compared. Applications to three-state and six-state quantum chains and to the (2+1)D Ising model show that the algorithm of Bulirsch and Stoer is superior, in particular if only very few finite lattice data are available. (orig.)
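
    As a generic illustration of the idea (polynomial extrapolation in 1/L, not the specific algorithms compared in the paper), assuming finite-lattice estimates approach their limit as x(L) ~ x_inf + a/L + b/L^2:

      import numpy as np

      # Finite-lattice estimates x(L) for increasing sizes L (synthetic example
      # with true limit 1.0 and 1/L, 1/L^2 corrections).
      L = np.array([4.0, 6.0, 8.0, 10.0, 12.0])
      x = 1.0 + 0.8 / L - 0.5 / L ** 2

      # Fit a low-order polynomial in h = 1/L and evaluate it at h -> 0.
      h = 1.0 / L
      coeffs = np.polyfit(h, x, deg=2)
      x_inf = np.polyval(coeffs, 0.0)
      print("extrapolated limit:", round(x_inf, 6))   # ~1.0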

  1. Recursive automatic classification algorithms

    Energy Technology Data Exchange (ETDEWEB)

    Bauman, E V; Dorofeyuk, A A

    1982-03-01

    A variational statement of the automatic classification problem is given. The dependence of the form of the optimal partition surface on the form of the classification objective functional is investigated. A recursive algorithm is proposed for maximising a functional of reasonably general form. The convergence problem is analysed in connection with the proposed algorithm. 8 references.

  2. Graph Colouring Algorithms

    DEFF Research Database (Denmark)

    Husfeldt, Thore

    2015-01-01

    This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
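
    For flavour, the classic sequential greedy colouring heuristic, the simplest of the vertex-colouring algorithms such a chapter covers:

      def greedy_colouring(adj):
          """Assign each vertex the smallest colour not used by its neighbours."""
          colour = {}
          for v in adj:                      # the visiting order affects quality
              used = {colour[u] for u in adj[v] if u in colour}
              c = 0
              while c in used:
                  c += 1
              colour[v] = c
          return colour

      # A 5-cycle: this ordering needs 3 colours.
      graph = {0: [1, 4], 1: [0, 2], 2: [1, 3], 3: [2, 4], 4: [3, 0]}
      print(greedy_colouring(graph))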

  3. 8. Algorithm Design Techniques

    Indian Academy of Sciences (India)

    Algorithms - Algorithm Design Techniques. R K Shyamasundar. Series Article, Resonance – Journal of Science Education, Volume 2, Issue 8. Author Affiliation: Computer Science Group, Tata Institute of Fundamental Research, Homi Bhabha Road, Mumbai 400 005, India.

  4. The LLL algorithm survey and applications

    CERN Document Server

    Nguyen, Phong Q

    2010-01-01

    The first book to offer a comprehensive view of the LLL algorithm, this text surveys computational aspects of Euclidean lattices and their main applications. It includes many detailed motivations, explanations and examples.

  5. Autonomous intelligent vehicles theory, algorithms, and implementation

    CERN Document Server

    Cheng, Hong

    2011-01-01

    Here is the latest on intelligent vehicles, covering object and obstacle detection and recognition and vehicle motion control. Includes a navigation approach using global views; introduces algorithms for lateral and longitudinal motion control and more.

  6. Group leaders optimization algorithm

    Science.gov (United States)

    Daskin, Anmer; Kais, Sabre

    2011-03-01

    We present a new global optimization algorithm in which the influence of the leaders in social groups is used as an inspiration for the evolutionary technique which is designed into a group architecture. To demonstrate the efficiency of the method, a standard suite of single and multi-dimensional optimization functions along with the energies and the geometric structures of Lennard-Jones clusters are given as well as the application of the algorithm on quantum circuit design problems. We show that as an improvement over previous methods, the algorithm scales as N^2.5 for Lennard-Jones clusters of N particles. In addition, an efficient circuit design is shown for a two-qubit Grover search algorithm which is a quantum algorithm providing quadratic speedup over the classical counterpart.

  7. Totally parallel multilevel algorithms

    Science.gov (United States)

    Frederickson, Paul O.

    1988-01-01

    Four totally parallel algorithms for the solution of a sparse linear system have common characteristics which become quite apparent when they are implemented on a highly parallel hypercube such as the CM2. These four algorithms are Parallel Superconvergent Multigrid (PSMG) of Frederickson and McBryan, Robust Multigrid (RMG) of Hackbusch, the FFT based Spectral Algorithm, and Parallel Cyclic Reduction. In fact, all four can be formulated as particular cases of the same totally parallel multilevel algorithm, which is referred to as TPMA. In certain cases the spectral radius of TPMA is zero, and it is recognized to be a direct algorithm. In many other cases the spectral radius, although not zero, is small enough that a single iteration per timestep keeps the local error within the required tolerance.

  8. Governance by algorithms

    Directory of Open Access Journals (Sweden)

    Francesca Musiani

    2013-08-01

    Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawing from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.

  9. Where genetic algorithms excel.

    Science.gov (United States)

    Baum, E B; Boneh, D; Garrett, C

    2001-01-01

    We analyze the performance of a genetic algorithm (GA) we call Culling, and a variety of other algorithms, on a problem we refer to as the Additive Search Problem (ASP). We show that the problem of learning the Ising perceptron is reducible to a noisy version of ASP. Noisy ASP is the first problem we are aware of where a genetic-type algorithm bests all known competitors. We generalize ASP to k-ASP to study whether GAs will achieve "implicit parallelism" in a problem with many more schemata. GAs fail to achieve this implicit parallelism, but we describe an algorithm we call Explicitly Parallel Search that succeeds. We also compute the optimal culling point for selective breeding, which turns out to be independent of the fitness function or the population distribution. We also analyze a mean field theoretic algorithm performing similarly to Culling on many problems. These results provide insight into when and how GAs can beat competing methods.

  10. Network-Oblivious Algorithms

    DEFF Research Database (Denmark)

    Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino

    2016-01-01

    A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network......-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality...... of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed....

  11. 3D automatic quantification applied to optically sectioned images to improve microscopy analysis

    Directory of Open Access Journals (Sweden)

    JE Diaz-Zamboni

    2009-08-01

    Full Text Available New fluorescence microscopy techniques, such as confocal or digital deconvolution microscopy, allow three-dimensional (3D) information to be easily obtained from specimens. However, there are few 3D quantification tools that allow extracting information from these volumes. Therefore, the amount of information acquired by these techniques is difficult to manipulate and analyze manually. The present study describes a model-based method, which for the first time shows 3D visualization and quantification of fluorescent apoptotic body signals from optical serial sections of porcine hepatocyte spheroids, correlating them to their morphological structures. The method consists of an algorithm that counts apoptotic bodies in a spheroid structure and extracts information from them, such as their centroids in Cartesian and radial coordinates, relative to the spheroid centre, and their integrated intensity. 3D visualization of the extracted information allowed us to quantify the distribution of apoptotic bodies in three different zones of the spheroid.
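
    A compact sketch of the counting-and-measurement step: bright apoptotic-body-like signals in a 3D stack are thresholded, labelled as connected components, and each object's centroid (Cartesian and radial, relative to the spheroid centre) and integrated intensity are extracted. The fixed threshold and the synthetic volume are placeholders for the model-based detection described above.

      import numpy as np
      from scipy import ndimage

      rng = np.random.default_rng(0)
      vol = rng.normal(10, 2, (40, 64, 64))                 # synthetic optical sections
      for z, y, x in [(12, 20, 30), (25, 40, 18), (30, 50, 50)]:
          vol[z - 1:z + 2, y - 2:y + 3, x - 2:x + 3] += 60  # bright "apoptotic bodies"

      mask = vol > 30                                       # placeholder threshold
      labels, n = ndimage.label(mask)
      centre = np.array(vol.shape) / 2.0                    # assumed spheroid centre

      print(f"{n} apoptotic bodies detected")
      idx = range(1, n + 1)
      for i, (centroid, total) in enumerate(
              zip(ndimage.center_of_mass(vol, labels, idx),
                  ndimage.sum(vol, labels, idx)), start=1):
          r = np.linalg.norm(np.array(centroid) - centre)   # radial coordinate
          print(f"body {i}: centroid={tuple(round(c, 1) for c in centroid)}, "
                f"r={r:.1f}, integrated intensity={total:.0f}")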

  12. Computational algorithm for molybdenite concentrate annealing

    International Nuclear Information System (INIS)

    Alkatseva, V.M.

    1995-01-01

    Computational algorithm is presented for annealing of molybdenite concentrate with granulated return dust and that of granulated molybdenite concentrate. The algorithm differs from the known analogies for sulphide raw material annealing by including the calculation of return dust mass in stationary annealing; the latter quantity varies form the return dust mass value obtained in the first iteration step. Masses of solid products are determined by distribution of concentrate annealing products, including return dust and benthonite. The algorithm is applied to computations for annealing of other sulphide materials. 3 refs

  13. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: a postmortem study.

    Science.gov (United States)

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q; Ducote, Justin L; Su, Min-Ying; Molloi, Sabee

    2013-12-01

    Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left-right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of the bias field. The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left-right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left-right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and the FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction. The investigated CLIC method

  14. Breast density quantification using magnetic resonance imaging (MRI) with bias field correction: A postmortem study

    International Nuclear Information System (INIS)

    Ding, Huanjun; Johnson, Travis; Lin, Muqing; Le, Huy Q.; Ducote, Justin L.; Su, Min-Ying; Molloi, Sabee

    2013-01-01

    Purpose: Quantification of breast density based on three-dimensional breast MRI may provide useful information for the early detection of breast cancer. However, the field inhomogeneity can severely challenge the computerized image segmentation process. In this work, the effect of the bias field in breast density quantification has been investigated with a postmortem study. Methods: T1-weighted images of 20 pairs of postmortem breasts were acquired on a 1.5 T breast MRI scanner. Two computer-assisted algorithms were used to quantify the volumetric breast density. First, standard fuzzy c-means (FCM) clustering was used on raw images with the bias field present. Then, the coherent local intensity clustering (CLIC) method estimated and corrected the bias field during the iterative tissue segmentation process. Finally, FCM clustering was performed on the bias-field-corrected images produced by the CLIC method. The left–right correlation for breasts in the same pair was studied for both segmentation algorithms to evaluate the precision of the tissue classification. Finally, the breast densities measured with the three methods were compared to the gold standard tissue compositions obtained from chemical analysis. The linear correlation coefficient, Pearson's r, was used to evaluate the two image segmentation algorithms and the effect of bias field. Results: The CLIC method successfully corrected the intensity inhomogeneity induced by the bias field. In left–right comparisons, the CLIC method significantly improved the slope and the correlation coefficient of the linear fitting for the glandular volume estimation. The left–right breast density correlation was also increased from 0.93 to 0.98. When compared with the percent fibroglandular volume (%FGV) from chemical analysis, results after bias field correction from both the CLIC and the FCM algorithms showed improved linear correlation. As a result, the Pearson's r increased from 0.86 to 0.92 with the bias field correction

  15. Terahertz identification and quantification of penicillamine enantiomers

    International Nuclear Information System (INIS)

    Ji Te; Zhao Hongwei; Chen Min; Xiao Tiqiao; Han Pengyu

    2013-01-01

    Identification and characterization of L-, D- and DL-penicillamine were demonstrated by terahertz time-domain spectroscopy (THz-TDS). To understand the physical origins of the low frequency resonant modes, density functional theory (DFT) was adopted for theoretical calculation. It was found that the collective THz frequency motions were determined by the intramolecular and intermolecular hydrogen bond interactions. Moreover, the quantification of penicillamine enantiomer mixtures was demonstrated by a THz spectra fitting method with a relative error of less than 3.5%. This technique can be a valuable tool for the discrimination and quantification of chiral drugs in the pharmaceutical industry. (authors)
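
    A sketch of the spectra-fitting quantification: the THz spectrum of a mixture is modelled as a non-negative linear combination of the pure L- and D-enantiomer spectra, and the weights give the composition. The Gaussian absorption peaks below are invented stand-ins for the measured reference spectra.

      import numpy as np
      from scipy.optimize import nnls

      freq = np.linspace(0.2, 2.5, 300)                    # frequency axis (THz)
      gauss = lambda f0, w: np.exp(-((freq - f0) / w) ** 2)

      # Stand-in reference spectra of the pure enantiomers (not measured data).
      spec_L = 1.0 * gauss(1.2, 0.08) + 0.6 * gauss(1.9, 0.10)
      spec_D = 0.9 * gauss(1.0, 0.08) + 0.7 * gauss(2.1, 0.10)

      # "Measured" mixture: 30% L + 70% D plus noise.
      rng = np.random.default_rng(0)
      mixture = 0.3 * spec_L + 0.7 * spec_D + rng.normal(0, 0.01, freq.size)

      weights, _ = nnls(np.column_stack([spec_L, spec_D]), mixture)
      fractions = weights / weights.sum()
      print(f"L fraction ~ {fractions[0]:.2f}, D fraction ~ {fractions[1]:.2f}")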

  16. Quantification accuracy and partial volume effect in dependence of the attenuation correction of a state-of-the-art small animal PET scanner

    International Nuclear Information System (INIS)

    Mannheim, Julia G; Judenhofer, Martin S; Schmid, Andreas; Pichler, Bernd J; Tillmanns, Julia; Stiller, Detlef; Sossi, Vesna

    2012-01-01

    Quantification accuracy and partial volume effect (PVE) of the Siemens Inveon PET scanner were evaluated. The influence of transmission source activities (40 and 160 MBq) on the quantification accuracy and the PVE was determined. Dynamic range, object size and PVE for different sphere sizes, contrast ratios and positions in the field of view (FOV) were evaluated. The acquired data were reconstructed using different algorithms and correction methods. The activity level of the transmission source and the total emission activity in the FOV strongly influenced the attenuation maps. Reconstruction algorithms, correction methods, object size and location within the FOV had a strong influence on the PVE in all configurations. All evaluated parameters potentially influence the quantification accuracy. Hence, all protocols should be kept constant during a study to allow a comparison between different scans. (paper)

  17. Machine vision theory, algorithms, practicalities

    CERN Document Server

    Davies, E R

    2005-01-01

    In the last 40 years, machine vision has evolved into a mature field embracing a wide range of applications including surveillance, automated inspection, robot assembly, vehicle guidance, traffic monitoring and control, signature verification, biometric measurement, and analysis of remotely sensed images. While researchers and industry specialists continue to document their work in this area, it has become increasingly difficult for professionals and graduate students to understand the essential theory and practicalities well enough to design their own algorithms and systems. This book directl

  18. The use of zeolites to generate PET phantoms for the validation of quantification strategies in oncology

    Energy Technology Data Exchange (ETDEWEB)

    Zito, Felicia; De Bernardi, Elisabetta; Soffientini, Chiara; Canzi, Cristina; Casati, Rosangela; Gerundini, Paolo; Baselli, Giuseppe [Nuclear Medicine Department, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, 20122 Milan (Italy); Bioengineering Department, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milan (Italy) and Tecnomed Foundation, University of Milano-Bicocca, via Pergolesi 33, 20900 Monza (Italy); Bioengineering Department, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milan (Italy); Nuclear Medicine Department, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, 20122 Milan (Italy); Bioengineering Department, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milan (Italy)

    2012-09-15

    -FDG solution is able to saturate the zeolite pores and that the concentration does not influence the distribution uniformity of both solution and solute, at least at the trace concentrations used for zeolite activation. An additional proof of uniformity of zeolite saturation was obtained observing a correspondence between uptake and adsorbed volume of solution, corresponding to about 27.8% of zeolite volume. As to the ground truth for zeolites positioned inside the phantom, the segmentation of finely aligned CT images provided reliable borders, as demonstrated by a mean absolute volume error of 2.8% with respect to the PET threshold segmentation corresponding to the maximum Dice. Conclusions: The proposed methodology allowed obtaining an experimental phantom data set that can be used as a feasible tool to test and validate quantification and segmentation algorithms for PET in oncology. The phantom is currently under consideration for being included in a benchmark designed by AAPM TG211, which will be available to the community to evaluate PET automatic segmentation methods.

  19. The use of zeolites to generate PET phantoms for the validation of quantification strategies in oncology

    International Nuclear Information System (INIS)

    Zito, Felicia; De Bernardi, Elisabetta; Soffientini, Chiara; Canzi, Cristina; Casati, Rosangela; Gerundini, Paolo; Baselli, Giuseppe

    2012-01-01

    saturate the zeolite pores and that the concentration does not influence the distribution uniformity of both solution and solute, at least at the trace concentrations used for zeolite activation. An additional proof of uniformity of zeolite saturation was obtained observing a correspondence between uptake and adsorbed volume of solution, corresponding to about 27.8% of zeolite volume. As to the ground truth for zeolites positioned inside the phantom, the segmentation of finely aligned CT images provided reliable borders, as demonstrated by a mean absolute volume error of 2.8% with respect to the PET threshold segmentation corresponding to the maximum Dice. Conclusions: The proposed methodology allowed obtaining an experimental phantom data set that can be used as a feasible tool to test and validate quantification and segmentation algorithms for PET in oncology. The phantom is currently under consideration for being included in a benchmark designed by AAPM TG211, which will be available to the community to evaluate PET automatic segmentation methods.

  20. Benchmarking common quantification strategies for large-scale phosphoproteomics

    DEFF Research Database (Denmark)

    Hogrebe, Alexander; von Stechow, Louise; Bekker-Jensen, Dorte B

    2018-01-01

    Comprehensive mass spectrometry (MS)-based proteomics is now feasible, but reproducible quantification remains challenging, especially for post-translational modifications such as phosphorylation. Here, we compare the most popular quantification techniques for global phosphoproteomics: label-free...

  1. Quantification of organ motion based on an adaptive image-based scale invariant feature method

    Energy Technology Data Exchange (ETDEWEB)

    Paganelli, Chiara [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133 (Italy); Peroni, Marta [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Paul Scherrer Institut, Zentrum für Protonentherapie, WMSA/C15, CH-5232 Villigen PSI (Italy); Baroni, Guido; Riboldi, Marco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, strada Campeggi 53, Pavia 27100 (Italy)

    2013-11-15

    Purpose: The availability of corresponding landmarks in IGRT image series allows quantifying the inter- and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed from the integration of a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained with the adaptive SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale vs exhale phase. The residual distances between the warped manual landmarks and their reference position in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method's invariance and robustness to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate detection of peak-to-peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT
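
    A small numeric sketch of the evaluation step, computing 3D residual distances between automatically matched landmarks and an expert's reference positions; the coordinates below are invented for illustration.

      import numpy as np

      # Landmark positions in mm: automatic matches vs. expert reference (invented).
      auto_pts = np.array([[102.3, 55.1, 40.0], [98.7, 60.2, 35.5], [110.1, 47.9, 42.2]])
      ref_pts = np.array([[101.9, 55.6, 40.4], [99.5, 59.8, 35.1], [109.6, 48.3, 43.0]])

      residuals = np.linalg.norm(auto_pts - ref_pts, axis=1)   # 3D Euclidean distances
      print("per-landmark residuals (mm):", np.round(residuals, 2))
      print(f"mean = {residuals.mean():.2f} mm, max = {residuals.max():.2f} mm")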

  2. A novel validation algorithm allows for automated cell tracking and the extraction of biologically meaningful parameters.

    Directory of Open Access Journals (Sweden)

    Daniel H Rapoport

    with high reliability and statistical significance. These include the distribution of life/cycle times and cell areas, as well as of the symmetry of cell divisions and motion analyses. The new algorithm thus allows for the quantification and parameterization of cell culture with unprecedented accuracy. To evaluate our validation algorithm, two large reference data sets were manually created. These data sets comprise more than 320,000 unstained adult pancreatic stem cells from rat, including 2592 mitotic events. The reference data sets specify every cell position and shape, and assign each cell to the correct branch of its genealogic tree. We provide these reference data sets for free use by others as a benchmark for the future improvement of automated tracking methods.

  3. A Probabilistic Framework for Peptide and Protein Quantification from Data-Dependent and Data-Independent LC-MS Proteomics Experiments

    DEFF Research Database (Denmark)

    Richardson, Katherine; Denny, R.; Hughes, C.

    2012-01-01

    A probability-based quantification framework is presented for the calculation of relative peptide and protein abundance in label-free and label-dependent LC-MS proteomics data. The results are accompanied by credible intervals and regulation probabilities. The algorithm takes into account data un...

  4. Quantification of trace-level DNA by real-time whole genome amplification.

    Science.gov (United States)

    Kang, Min-Jung; Yu, Hannah; Kim, Sook-Kyung; Park, Sang-Ryoul; Yang, Inchul

    2011-01-01

Quantification of trace amounts of DNA is a challenge in analytical applications where the concentration of a target DNA is very low or only limited amounts of samples are available for analysis. PCR-based methods including real-time PCR are highly sensitive and widely used for quantification of low-level DNA samples. However, ordinary PCR methods require at least one copy of a specific gene sequence for amplification and may not work for a sub-genomic amount of DNA. We suggest a real-time whole genome amplification method adopting the degenerate oligonucleotide primed PCR (DOP-PCR) for quantification of sub-genomic amounts of DNA. This approach enabled quantification of sub-picogram amounts of DNA independently of their sequences. When the method was applied to human placental DNA, the amount of which had been accurately determined by inductively coupled plasma-optical emission spectroscopy (ICP-OES), accurate and stable quantification was obtained for DNA samples ranging from 80 fg to 8 ng. In blind tests of laboratory-prepared DNA samples, measurement accuracies of 7.4%, -2.1%, and -13.9% with analytical precisions around 15% were achieved for 400-pg, 4-pg, and 400-fg DNA samples, respectively. A similar quantification capability was also observed for other DNA species from calf, E. coli, and lambda phage. Therefore, when provided with an appropriate standard DNA, the suggested real-time DOP-PCR method can be used as a universal method for quantification of trace amounts of DNA.
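
A real-time amplification readout is typically converted to a DNA amount through a standard curve; the sketch below is a generic illustration of that step, not the authors' procedure, and the numbers used are invented.

```python
import numpy as np

def quantify_from_ct(std_amounts_pg, std_ct, unknown_ct):
    """Standard-curve quantification sketch: fit Ct = a*log10(amount) + b on the
    standards and invert the fit to estimate the amount in the unknown sample."""
    a, b = np.polyfit(np.log10(std_amounts_pg), np.asarray(std_ct, float), 1)
    return 10.0 ** ((unknown_ct - b) / a)

# Illustrative numbers only (roughly 3.3 cycles per decade of template):
print(quantify_from_ct([0.4, 4.0, 40.0, 400.0], [33.1, 29.8, 26.5, 23.2], 28.0))
```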

  5. A New Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Medha Gupta

    2016-07-01

Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such new swarm-based metaheuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and has since been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly algorithm (MoFA) and compare its performance with the standard firefly algorithm and various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to the existing algorithms.
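
For readers unfamiliar with the baseline that such modifications build on, here is a minimal sketch of the standard firefly update (attraction toward brighter fireflies plus a random step); the parameter values and the search box are illustrative, not those of the paper.

```python
import numpy as np

def firefly_minimize(f, dim, n=20, iters=100, alpha=0.2, beta0=1.0, gamma=1.0, seed=0):
    """Minimal standard firefly algorithm for minimization (illustration only)."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-5, 5, size=(n, dim))          # fireflies in an assumed box
    light = np.array([f(xi) for xi in x])          # lower objective = brighter
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if light[j] < light[i]:            # j is brighter, so i moves toward j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    light[i] = f(x[i])
    best = np.argmin(light)
    return x[best], light[best]

# Example: minimize the sphere function in 2D
xbest, fbest = firefly_minimize(lambda v: float(np.sum(v ** 2)), dim=2)
```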

  6. Gems of combinatorial optimization and graph algorithms

    CERN Document Server

    Skutella, Martin; Stiller, Sebastian; Wagner, Dorothea

    2015-01-01

    Are you looking for new lectures for your course on algorithms, combinatorial optimization, or algorithmic game theory?  Maybe you need a convenient source of relevant, current topics for a graduate student or advanced undergraduate student seminar?  Or perhaps you just want an enjoyable look at some beautiful mathematical and algorithmic results, ideas, proofs, concepts, and techniques in discrete mathematics and theoretical computer science?   Gems of Combinatorial Optimization and Graph Algorithms is a handpicked collection of up-to-date articles, carefully prepared by a select group of international experts, who have contributed some of their most mathematically or algorithmically elegant ideas.  Topics include longest tours and Steiner trees in geometric spaces, cartograms, resource buying games, congestion games, selfish routing, revenue equivalence and shortest paths, scheduling, linear structures in graphs, contraction hierarchies, budgeted matching problems, and motifs in networks.   This ...

  7. Adaptive Filtering Algorithms and Practical Implementation

    CERN Document Server

    Diniz, Paulo S R

    2013-01-01

    In the fourth edition of Adaptive Filtering: Algorithms and Practical Implementation, author Paulo S.R. Diniz presents the basic concepts of adaptive signal processing and adaptive filtering in a concise and straightforward manner. The main classes of adaptive filtering algorithms are presented in a unified framework, using clear notations that facilitate actual implementation. The main algorithms are described in tables, which are detailed enough to allow the reader to verify the covered concepts. Many examples address problems drawn from actual applications. New material to this edition includes: Analytical and simulation examples in Chapters 4, 5, 6 and 10 Appendix E, which summarizes the analysis of set-membership algorithm Updated problems and references Providing a concise background on adaptive filtering, this book covers the family of LMS, affine projection, RLS and data-selective set-membership algorithms as well as nonlinear, sub-band, blind, IIR adaptive filtering, and more. Several problems are...

  8. A new algorithm for coding geological terminology

    Science.gov (United States)

    Apon, W.

The Geological Survey of The Netherlands has developed an algorithm to convert the plain geological language of lithologic well logs into codes suitable for computer processing and link these to existing plotting programs. The algorithm is based on the "direct method" and operates in three steps: (1) searching for defined word combinations and assigning codes; (2) deleting duplicated codes; (3) correcting incorrect code combinations. Two simple auxiliary files are used. A simple PC demonstration program is included to enable readers to experiment with this algorithm. The Department of Quaternary Geology of the Geological Survey of The Netherlands possesses a large database of shallow lithologic well logs in plain language and has been using a program based on this algorithm for about 3 yr. Erroneous codes resulting from this algorithm amount to less than 2%.
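
The three-step scheme can be sketched as follows; the term table, code values, and invalid-combination rule below are invented placeholders, not the Survey's actual coding files.

```python
# Hypothetical sketch of the three-step coding scheme described above.
TERM_CODES = {"coarse sand": "Z3", "fine sand": "Z1", "clay": "K", "gravel": "G"}
INVALID_PAIRS = {("K", "G"): ("K",)}   # e.g. replace a contradictory combination

def code_lithology(description: str) -> list[str]:
    text = description.lower()
    # 1) search for defined word combinations and assign codes (longest match first)
    codes = [code for term, code in sorted(TERM_CODES.items(), key=lambda t: -len(t[0]))
             if term in text]
    # 2) delete duplicated codes, keeping the first occurrence
    seen, unique = set(), []
    for c in codes:
        if c not in seen:
            seen.add(c)
            unique.append(c)
    # 3) correct incorrect code combinations via a lookup of invalid pairs
    for pair, replacement in INVALID_PAIRS.items():
        if all(p in unique for p in pair):
            unique = [c for c in unique if c not in pair] + list(replacement)
    return unique

print(code_lithology("Fine sand with some clay"))   # ['Z1', 'K']
```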

  9. Learning theory of distributed spectral algorithms

    International Nuclear Information System (INIS)

    Guo, Zheng-Chu; Lin, Shao-Bo; Zhou, Ding-Xuan

    2017-01-01

    Spectral algorithms have been widely used and studied in learning theory and inverse problems. This paper is concerned with distributed spectral algorithms, for handling big data, based on a divide-and-conquer approach. We present a learning theory for these distributed kernel-based learning algorithms in a regression framework including nice error bounds and optimal minimax learning rates achieved by means of a novel integral operator approach and a second order decomposition of inverse operators. Our quantitative estimates are given in terms of regularity of the regression function, effective dimension of the reproducing kernel Hilbert space, and qualification of the filter function of the spectral algorithm. They do not need any eigenfunction or noise conditions and are better than the existing results even for the classical family of spectral algorithms. (paper)

  10. Single DNA imaging and length quantification through a mobile phone microscope

    Science.gov (United States)

    Wei, Qingshan; Luo, Wei; Chiang, Samuel; Kappel, Tara; Mejia, Crystal; Tseng, Derek; Chan, Raymond Yan L.; Yan, Eddie; Qi, Hangfei; Shabbir, Faizan; Ozkan, Haydar; Feng, Steve; Ozcan, Aydogan

    2016-03-01

The development of sensitive optical microscopy methods for the detection of single DNA molecules has become an active research area which cultivates various promising applications including point-of-care (POC) genetic testing and diagnostics. Direct visualization of individual DNA molecules usually relies on sophisticated optical microscopes that are mostly available in well-equipped laboratories. For POC DNA testing/detection, there is an increasing need for the development of new single DNA imaging and sensing methods that are field-portable, cost-effective, and accessible for diagnostic applications in resource-limited or field settings. For this aim, we developed a mobile-phone integrated fluorescence microscopy platform that allows imaging and sizing of single DNA molecules that are stretched on a chip. This handheld device contains an opto-mechanical attachment integrated onto a smartphone camera module, which creates a high signal-to-noise ratio dark-field imaging condition by using an oblique illumination/excitation configuration. Using this device, we demonstrated imaging of individual linearly stretched λ DNA molecules (48 kilobase-pair, kbp) over a 2 mm2 field-of-view. We further developed a robust computational algorithm and a smartphone app that allowed the users to quickly quantify the length of each DNA fragment imaged using this mobile interface. The cellphone-based device was tested with five different DNA samples (5, 10, 20, 40, and 48 kbp), and a sizing accuracy of <1 kbp was demonstrated for DNA strands longer than 10 kbp. This mobile DNA imaging and sizing platform can be very useful for various diagnostic applications including the detection of disease-specific genes and quantification of copy-number-variations at POC settings.

  11. Magnet sorting algorithms

    International Nuclear Information System (INIS)

    Dinev, D.

    1996-01-01

    Several new algorithms for sorting of dipole and/or quadrupole magnets in synchrotrons and storage rings are described. The algorithms make use of a combinatorial approach to the problem and belong to the class of random search algorithms. They use an appropriate metrization of the state space. The phase-space distortion (smear) is used as a goal function. Computational experiments for the case of the JINR-Dubna superconducting heavy ion synchrotron NUCLOTRON have shown a significant reduction of the phase-space distortion after the magnet sorting. (orig.)
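
The combinatorial random-search idea can be sketched as a swap-and-keep loop over a user-supplied goal function; here the phase-space smear model is assumed to be provided by the accelerator optics code, and the loop itself is only an illustration of the approach.

```python
import random

def sort_magnets(magnets, smear, iters=10000, seed=1):
    """Random-search sketch: swap two magnet positions and keep the swap if the
    goal function (phase-space smear) decreases; 'smear' is a supplied model."""
    rng = random.Random(seed)
    order = list(magnets)
    best = smear(order)
    for _ in range(iters):
        i, j = rng.sample(range(len(order)), 2)
        order[i], order[j] = order[j], order[i]
        trial = smear(order)
        if trial < best:
            best = trial
        else:
            order[i], order[j] = order[j], order[i]   # undo the swap
    return order, best
```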

  12. Strain quantification in epitaxial thin films

    International Nuclear Information System (INIS)

    Cushley, M

    2008-01-01

    Strain arising in epitaxial thin films can be beneficial in some cases but devastating in others. By altering the lattice parameters, strain may give a thin film properties hitherto unseen in the bulk material. On the other hand, heavily strained systems are prone to develop lattice defects in order to relieve the strain, which can cause device failure or, at least, a decrease in functionality. Using convergent beam electron diffraction (CBED) and high-resolution transmission electron microscopy (HRTEM), it is possible to determine local strains within a material. By comparing the results from CBED and HRTEM experiments, it is possible to gain a complete view of a material, including the strain and any lattice defects present. As well as looking at how the two experimental techniques differ from each other, I will also look at how results from different image analysis algorithms compare. Strain in Si/SiGe samples and BST/SRO/MgO capacitor structures will be discussed.

  13. Colour thresholding and objective quantification in bioimaging

    Science.gov (United States)

    Fermin, C. D.; Gerber, M. A.; Torre-Bueno, J. R.

    1992-01-01

    Computer imaging is rapidly becoming an indispensable tool for the quantification of variables in research and medicine. Whilst its use in medicine has largely been limited to qualitative observations, imaging in applied basic sciences, medical research and biotechnology demands objective quantification of the variables in question. In black and white densitometry (0-256 levels of intensity) the separation of subtle differences between closely related hues from stains is sometimes very difficult. True-colour and real-time video microscopy analysis offer choices not previously available with monochrome systems. In this paper we demonstrate the usefulness of colour thresholding, which has so far proven indispensable for proper objective quantification of the products of histochemical reactions and/or subtle differences in tissue and cells. In addition, we provide interested, but untrained readers with basic information that may assist decisions regarding the most suitable set-up for a project under consideration. Data from projects in progress at Tulane are shown to illustrate the advantage of colour thresholding over monochrome densitometry and for objective quantification of subtle colour differences between experimental and control samples.
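
As a toy illustration of colour thresholding (not the authors' calibrated procedure), the fraction of "stained" pixels in an RGB image can be estimated with simple per-channel thresholds; the threshold values below are placeholders that would be tuned to the stain in question.

```python
import numpy as np

def stained_fraction(rgb, r_min=120, g_max=110, b_max=110):
    """Colour-thresholding sketch: fraction of pixels classified as 'stained'
    by per-channel thresholds on an H x W x 3 uint8 image (values illustrative)."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    mask = (r >= r_min) & (g <= g_max) & (b <= b_max)
    return float(mask.mean())

# The resulting fraction (0..1) can then be compared between experimental
# and control samples, which is the kind of objective measure discussed above.
```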

  14. Recurrence quantification analysis in Liu's attractor

    International Nuclear Information System (INIS)

    Balibrea, Francisco; Caballero, M. Victoria; Molera, Lourdes

    2008-01-01

Recurrence Quantification Analysis is used to detect transitions from chaos to periodic states, or from chaos to chaos, in a new dynamical system proposed by Liu et al. This system contains a control parameter in the second equation and was originally introduced to investigate the forming mechanism of the compound structure of the chaotic attractor which exists when the control parameter is zero.

  15. Quantification of coating aging using impedance measurements

    NARCIS (Netherlands)

    Westing, E.P.M. van; Weijde, D.H. van der; Vreijling, M.P.W.; Ferrari, G.M.; Wit, J.H.W. de

    1998-01-01

    This chapter shows the application results of a novel approach to quantify the ageing of organic coatings using impedance measurements. The ageing quantification is based on the typical impedance behaviour of barrier coatings in immersion. This immersion behaviour is used to determine the limiting

  16. Quantification analysis of CT for aphasic patients

    International Nuclear Information System (INIS)

    Watanabe, Shunzo; Ooyama, Hiroshi; Hojo, Kei; Tasaki, Hiroichi; Hanazono, Toshihide; Sato, Tokijiro; Metoki, Hirobumi; Totsuka, Motokichi; Oosumi, Noboru.

    1987-01-01

    Using a microcomputer, the locus and extent of the lesions, as demonstrated by computed tomography, for 44 aphasic patients with various types of aphasia were superimposed onto standardized matrices, composed of 10 slices with 3000 points (50 by 60). The relationships between the foci of the lesions and types of aphasia were investigated on the slices numbered 3, 4, 5, and 6 using a quantification theory, Type 3 (pattern analysis). Some types of regularities were observed on Slices 3, 4, 5, and 6. The group of patients with Broca's aphasia and the group with Wernicke's aphasia were generally separated on the 1st component and the 2nd component of the quantification theory, Type 3. On the other hand, the group with global aphasia existed between the group with Broca's aphasia and that with Wernicke's aphasia. The group of patients with amnestic aphasia had no specific findings, and the group with conduction aphasia existed near those with Wernicke's aphasia. The above results serve to establish the quantification theory, Type 2 (discrimination analysis) and the quantification theory, Type 1 (regression analysis). (author)

  17. Quantification analysis of CT for aphasic patients

    Energy Technology Data Exchange (ETDEWEB)

    Watanabe, S.; Ooyama, H.; Hojo, K.; Tasaki, H.; Hanazono, T.; Sato, T.; Metoki, H.; Totsuka, M.; Oosumi, N.

    1987-02-01

    Using a microcomputer, the locus and extent of the lesions, as demonstrated by computed tomography, for 44 aphasic patients with various types of aphasia were superimposed onto standardized matrices, composed of 10 slices with 3000 points (50 by 60). The relationships between the foci of the lesions and types of aphasia were investigated on the slices numbered 3, 4, 5, and 6 using a quantification theory, Type 3 (pattern analysis). Some types of regularities were observed on slices 3, 4, 5, and 6. The group of patients with Broca's aphasia and the group with Wernicke's aphasia were generally separated on the 1st component and the 2nd component of the quantification theory, Type 3. On the other hand, the group with global aphasia existed between the group with Broca's aphasia and that with Wernicke's aphasia. The group of patients with amnestic aphasia had no specific findings, and the group with conduction aphasia existed near those with Wernicke's aphasia. The above results serve to establish the quantification theory, Type 2 (discrimination analysis) and the quantification theory, Type 1 (regression analysis).

  18. Quantification of Cannabinoid Content in Cannabis

    Science.gov (United States)

    Tian, Y.; Zhang, F.; Jia, K.; Wen, M.; Yuan, Ch.

    2015-09-01

    Cannabis is an economically important plant that is used in many fields, in addition to being the most commonly consumed illicit drug worldwide. Monitoring the spatial distribution of cannabis cultivation and judging whether it is drug- or fiber-type cannabis is critical for governments and international communities to understand the scale of the illegal drug trade. The aim of this study was to investigate whether the cannabinoids content in cannabis could be spectrally quantified using a spectrometer and to identify the optimal wavebands for quantifying the cannabinoid content. Spectral reflectance data of dried cannabis leaf samples and the cannabis canopy were measured in the laboratory and in the field, respectively. Correlation analysis and the stepwise multivariate regression method were used to select the optimal wavebands for cannabinoid content quantification based on the laboratory-measured spectral data. The results indicated that the delta-9-tetrahydrocannabinol (THC) content in cannabis leaves could be quantified using laboratory-measured spectral reflectance data and that the 695 nm band is the optimal band for THC content quantification. This study provides prerequisite information for designing spectral equipment to enable immediate quantification of THC content in cannabis and to discriminate drug- from fiber-type cannabis based on THC content quantification in the field.

  19. Quantification of glycyrrhizin biomarker in Glycyrrhiza glabra ...

    African Journals Online (AJOL)

    Background: A simple and sensitive thin-layer chromatographic method has been established for quantification of glycyrrhizin in Glycyrrhiza glabra rhizome and baby herbal formulations by validated Reverse Phase HPTLC method. Materials and Methods: RP-HPTLC Method was carried out using glass coated with RP-18 ...

  20. Noninvasive Quantification of Pancreatic Fat in Humans

    OpenAIRE

    Lingvay, Ildiko; Esser, Victoria; Legendre, Jaime L.; Price, Angela L.; Wertz, Kristen M.; Adams-Huet, Beverley; Zhang, Song; Unger, Roger H.; Szczepaniak, Lidia S.

    2009-01-01

    Objective: To validate magnetic resonance spectroscopy (MRS) as a tool for non-invasive quantification of pancreatic triglyceride (TG) content and to measure the pancreatic TG content in a diverse human population with a wide range of body mass index (BMI) and glucose control.

  1. Cues, quantification, and agreement in language comprehension.

    Science.gov (United States)

    Tanner, Darren; Bulkes, Nyssa Z

    2015-12-01

    We investigated factors that affect the comprehension of subject-verb agreement in English, using quantification as a window into the relationship between morphosyntactic processes in language production and comprehension. Event-related brain potentials (ERPs) were recorded while participants read sentences with grammatical and ungrammatical verbs, in which the plurality of the subject noun phrase was either doubly marked (via overt plural quantification and morphological marking on the noun) or singly marked (via only plural morphology on the noun). Both acceptability judgments and the ERP data showed heightened sensitivity to agreement violations when quantification provided an additional cue to the grammatical number of the subject noun phrase, over and above plural morphology. This is consistent with models of grammatical comprehension that emphasize feature prediction in tandem with cue-based memory retrieval. Our results additionally contrast with those of prior studies that showed no effects of plural quantification on agreement in language production. These findings therefore highlight some nontrivial divergences in the cues and mechanisms supporting morphosyntactic processing in language production and comprehension.

  2. Perfusion Quantification Using Gaussian Process Deconvolution

    DEFF Research Database (Denmark)

    Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward

    2002-01-01

    The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated...

  3. A new taxonomy of sublinear keyword pattern matching algorithms

    NARCIS (Netherlands)

    Cleophas, L.G.W.A.; Watson, B.W.; Zwaan, G.

    2004-01-01

    Abstract This paper presents a new taxonomy of sublinear (multiple) keyword pattern matching algorithms. Based on an earlier taxonomy by Watson and Zwaan [WZ96, WZ95], this new taxonomy includes not only suffix-based algorithms related to the Boyer-Moore, Commentz-Walter and Fan-Su algorithms, but

  4. New algorithms for the symmetric tridiagonal eigenvalue computation

    Energy Technology Data Exchange (ETDEWEB)

    Pan, V. [City Univ. of New York, Bronx, NY (United States); International Computer Sciences Institute, Berkeley, CA (United States)]

    1994-12-31

    The author presents new algorithms that accelerate the bisection method for the symmetric eigenvalue problem. The algorithms rely on some new techniques, which include acceleration of Newton's iteration and can also be further applied to acceleration of some other iterative processes, in particular, of iterative algorithms for approximating polynomial zeros.

  5. Adaptive discrete-ordinates algorithms and strategies

    International Nuclear Information System (INIS)

    Stone, J.C.; Adams, M.L.

    2005-01-01

    We present our latest algorithms and strategies for adaptively refined discrete-ordinates quadrature sets. In our basic strategy, which we apply here in two-dimensional Cartesian geometry, the spatial domain is divided into regions. Each region has its own quadrature set, which is adapted to the region's angular flux. Our algorithms add a 'test' direction to the quadrature set if the angular flux calculated at that direction differs by more than a user-specified tolerance from the angular flux interpolated from other directions. Different algorithms have different prescriptions for the method of interpolation and/or choice of test directions and/or prescriptions for quadrature weights. We discuss three different algorithms of different interpolation orders. We demonstrate through numerical results that each algorithm is capable of generating solutions with negligible angular discretization error. This includes elimination of ray effects. We demonstrate that all of our algorithms achieve a given level of error with far fewer unknowns than does a standard quadrature set applied to an entire problem. To address a potential issue with other algorithms, we present one algorithm that retains exact integration of high-order spherical-harmonics functions, no matter how much local refinement takes place. To address another potential issue, we demonstrate that all of our methods conserve partial currents across interfaces where quadrature sets change. We conclude that our approach is extremely promising for solving the long-standing problem of angular discretization error in multidimensional transport problems. (authors)
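
The core refinement test described above can be summarized in a few lines; the relative-tolerance form used here is an assumption for illustration, not the exact prescription of any of the three algorithms.

```python
def needs_test_direction(psi_computed, psi_interpolated, tol=1e-3):
    """Adaptive-quadrature criterion (sketch): add the test direction to the local
    quadrature set when the transport-computed angular flux at that direction
    differs from the value interpolated from existing directions by more than tol."""
    scale = max(abs(psi_interpolated), 1e-30)      # guard against division by zero
    return abs(psi_computed - psi_interpolated) > tol * scale
```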

  6. Analysis and Improvement of Fireworks Algorithm

    Directory of Open Access Journals (Sweden)

    Xi-Guang Li

    2017-02-01

Full Text Available The Fireworks Algorithm is a recently developed swarm intelligence algorithm that simulates the explosion process of fireworks. Based on an analysis of each operator of the Fireworks Algorithm (FWA), this paper improves the FWA and proves that the improved algorithm converges to the global optimal solution with probability 1. The proposed algorithm aims to further boost performance and achieve global optimization, mainly through the following strategies. Firstly, the population is initialized using opposition-based learning. Secondly, a new explosion amplitude mechanism for the optimal firework is proposed. In addition, adaptive t-distribution mutation for non-optimal individuals and elite opposition-based learning for the optimal individual are used. Finally, a new selection strategy, namely Disruptive Selection, is proposed to reduce the running time of the algorithm compared with FWA. In our simulation, we apply the CEC2013 standard functions and compare the proposed algorithm (IFWA) with SPSO2011, FWA, EFWA and dynFWA. The results show that the proposed algorithm has better overall performance on the test functions.
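
One of the listed ingredients, opposition-based learning initialization, can be sketched as follows; this is a generic illustration of the idea, not the paper's exact implementation.

```python
import numpy as np

def obl_initialize(f, n, lo, hi, seed=0):
    """Opposition-based learning initialization (sketch): generate a random
    population plus its 'opposite' population and keep the n best individuals."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, float), np.asarray(hi, float)
    pop = rng.uniform(lo, hi, size=(n, lo.size))
    opposite = lo + hi - pop                     # opposite point of each candidate
    both = np.vstack([pop, opposite])
    fitness = np.array([f(x) for x in both])
    return both[np.argsort(fitness)[:n]]         # keep the n best (minimization)

# Example: initialize 20 individuals for the sphere function in [-10, 10]^2
pop = obl_initialize(lambda x: float(np.sum(x ** 2)), n=20, lo=[-10, -10], hi=[10, 10])
```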

  7. Fluid structure coupling algorithm

    International Nuclear Information System (INIS)

    McMaster, W.H.; Gong, E.Y.; Landram, C.S.; Quinones, D.F.

    1980-01-01

    A fluid-structure-interaction algorithm has been developed and incorporated into the two-dimensional code PELE-IC. This code combines an Eulerian incompressible fluid algorithm with a Lagrangian finite element shell algorithm and incorporates the treatment of complex free surfaces. The fluid structure and coupling algorithms have been verified by the calculation of solved problems from the literature and from air and steam blowdown experiments. The code has been used to calculate loads and structural response from air blowdown and the oscillatory condensation of steam bubbles in water suppression pools typical of boiling water reactors. The techniques developed have been extended to three dimensions and implemented in the computer code PELE-3D

  8. Algorithmic phase diagrams

    Science.gov (United States)

    Hockney, Roger

    1987-01-01

Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, the recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.

  9. Diagnostic Algorithm Benchmarking

    Science.gov (United States)

    Poll, Scott

    2011-01-01

    A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.

  10. Inclusive Flavour Tagging Algorithm

    International Nuclear Information System (INIS)

    Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex

    2016-01-01

    Identifying the flavour of neutral B mesons production is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about underlying physics process. It reduces the dependence on the performance of lower level identification capacities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tag the flavour of B mesons in any proton-proton experiment. (paper)

  11. Comparison of turbulence mitigation algorithms

    Science.gov (United States)

    Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric

    2017-07-01

    When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.

  12. Vector Network Coding Algorithms

    OpenAIRE

    Ebrahimi, Javad; Fragouli, Christina

    2010-01-01

We develop new algebraic algorithms for scalar and vector network coding. In vector network coding, the source multicasts information by transmitting vectors of length L, while intermediate nodes process and combine their incoming packets by multiplying them with L x L coding matrices that play a similar role as coding coefficients in scalar coding. Our algorithms for scalar network coding jointly optimize the employed field size while selecting the coding coefficients. Similarly, for vector coding, our algori...

  13. Optimization algorithms and applications

    CERN Document Server

    Arora, Rajesh Kumar

    2015-01-01

    Choose the Correct Solution Method for Your Optimization ProblemOptimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, Broyden-Fletcher-Goldfarb-Shanno algorithm, Powell method, penalty function, augmented Lagrange multiplier method, sequential quadratic programming, method of feasible direc

  14. Generalized phase retrieval algorithm based on information measures

    OpenAIRE

    Shioya, Hiroyuki; Gohara, Kazutoshi

    2006-01-01

    An iterative phase retrieval algorithm based on the maximum entropy method (MEM) is presented. Introducing a new generalized information measure, we derive a novel class of algorithms which includes the conventionally used error reduction algorithm and a MEM-type iterative algorithm which is presented for the first time. These different phase retrieval methods are unified on the basis of the framework of information measures used in information theory.
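
For context, the conventional error-reduction iteration that the generalized information-measure framework contains as a special case looks roughly like the sketch below; the nonnegativity-plus-support object constraint is an assumed example.

```python
import numpy as np

def error_reduction(measured_magnitude, support, iters=200, seed=0):
    """Error-reduction phase retrieval sketch: alternately impose the measured
    Fourier modulus and a simple object-domain constraint (support and positivity)."""
    rng = np.random.default_rng(seed)
    phase = np.exp(2j * np.pi * rng.random(measured_magnitude.shape))
    g = np.fft.ifft2(measured_magnitude * phase).real
    for _ in range(iters):
        G = np.fft.fft2(g)
        G = measured_magnitude * np.exp(1j * np.angle(G))   # impose Fourier modulus
        g = np.fft.ifft2(G).real
        g = np.where(support & (g > 0), g, 0.0)             # impose object constraints
    return g
```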

  15. Semiconvergence and Relaxation Parameters for Projected SIRT Algorithms

    DEFF Research Database (Denmark)

    Elfving, Tommy; Hansen, Per Christian; Nikazad, Touraj

    2012-01-01

We give a detailed study of the semiconvergence behavior of projected nonstationary simultaneous iterative reconstruction technique (SIRT) algorithms, including the projected Landweber algorithm. We also consider the use of a relaxation parameter strategy, proposed recently for the standard algorithms, for controlling the semiconvergence of the projected algorithms. We demonstrate the semiconvergence and the performance of our strategies by examples taken from tomographic imaging. © 2012 Society for Industrial and Applied Mathematics.
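
A minimal sketch of the projected Landweber iteration studied in this class of algorithms is given below; the nonnegativity projection and the default relaxation parameter are illustrative choices, not the relaxation strategy of the paper.

```python
import numpy as np

def projected_landweber(A, b, iters=50, relax=None, lower=0.0):
    """Projected Landweber sketch: x_{k+1} = P(x_k + lambda * A^T (b - A x_k)),
    with P a projection onto a simple constraint set (here: nonnegativity).
    The default relaxation parameter is chosen safely below 2 / ||A||^2."""
    if relax is None:
        relax = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        x = x + relax * A.T @ (b - A @ x)
        x = np.maximum(x, lower)                 # projection onto the constraint set
    return x
```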

  16. From Genetics to Genetic Algorithms

    Indian Academy of Sciences (India)

    Genetic algorithms (GAs) are computational optimisation schemes with an ... The algorithms solve optimisation problems ... Genetic Algorithms in Search, Optimisation and Machine Learning, Addison-Wesley Publishing Company, Inc., 1989.

  17. Algorithmic Principles of Mathematical Programming

    NARCIS (Netherlands)

    Faigle, Ulrich; Kern, Walter; Still, Georg

    2002-01-01

    Algorithmic Principles of Mathematical Programming investigates the mathematical structures and principles underlying the design of efficient algorithms for optimization problems. Recent advances in algorithmic theory have shown that the traditionally separate areas of discrete optimization, linear

  18. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc

    2016-06-20

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  19. RNAontheBENCH: computational and empirical resources for benchmarking RNAseq quantification and differential expression methods

    KAUST Repository

    Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe

    2016-01-01

    RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods.

  20. RFID Location Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Zi Min

    2016-01-01

Full Text Available With the development of social services and the further improvement of people's living standards, there is an urgent need for a positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in all aspects of life and production, such as logistics tracking, car alarms, and security. Using RFID technology for positioning is a new research direction for various research institutions and scholars. RFID positioning technology offers system stability, small errors, and low cost, and its location algorithm is the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID positioning methods are introduced; second, a higher-accuracy network location method is discussed; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study are put forward, with a vision of better future RFID positioning technology.
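
The LANDMARC idea mentioned above can be sketched in a few lines: the target tag's signal-strength vector is compared with those of reference tags at known positions, and the k nearest reference positions are averaged with weights proportional to 1/E². The array shapes and the value of k below are illustrative.

```python
import numpy as np

def landmarc_locate(target_rss, ref_rss, ref_positions, k=4):
    """LANDMARC-style sketch: target_rss has one RSS value per reader, ref_rss is
    (M, readers) for M reference tags, ref_positions is (M, 2) known coordinates."""
    E = np.linalg.norm(ref_rss - target_rss, axis=1)        # Euclidean RSS distance
    nearest = np.argsort(E)[:k]                             # k nearest reference tags
    w = 1.0 / (E[nearest] ** 2 + 1e-12)                     # closer tags weigh more
    w /= w.sum()
    return w @ ref_positions[nearest]                       # weighted position estimate
```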

  1. Modified Firefly Algorithm

    Directory of Open Access Journals (Sweden)

    Surafel Luleseged Tilahun

    2012-01-01

Full Text Available The Firefly algorithm is one of the new metaheuristic algorithms for optimization problems. The algorithm is inspired by the flashing behavior of fireflies. In the algorithm, randomly generated solutions are considered as fireflies, and brightness is assigned depending on their performance on the objective function. One of the rules used to construct the algorithm is that a firefly will be attracted to a brighter firefly and, if there is no brighter firefly, will move randomly. In this paper we modify this random movement of the brightest firefly by generating random directions in order to determine the best direction in which the brightness increases. If such a direction is not generated, it remains in its current position. Furthermore, the assignment of attractiveness is modified in such a way that the effect of the objective function is magnified. The simulation results show that the modified firefly algorithm performs better than the standard one in finding the best solution with less CPU time.

  2. Algorithms for adaptive histogram equalization

    International Nuclear Information System (INIS)

    Pizer, S.M.; Austin, J.D.; Cromartie, R.; Geselowitz, A.; Ter Haar Romeny, B.; Zimmerman, J.B.; Zuiderveld, K.

    1986-01-01

Adaptive histogram equalization (ahe) is a contrast enhancement method designed to be broadly applicable and with demonstrated effectiveness [Zimmerman, 1985]. However, slow speed and the overenhancement of noise it produces in relatively homogeneous regions are two problems. The authors summarize algorithms designed to overcome these and other concerns. These algorithms include interpolated ahe, to speed up the method on general purpose computers; a version of interpolated ahe designed to run in a few seconds on feedback processors; a version of full ahe designed to run in under one second on custom VLSI hardware; and clipped ahe, designed to overcome the problem of overenhancement of noise contrast. The authors conclude that clipped ahe should become a method of choice in medical imaging and probably also in other areas of digital imaging, and that clipped ahe can be made adequately fast to be routinely applied in the normal display sequence.
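
A sketch of the contrast-limiting idea behind clipped ahe, applied to a single tile, is given below; the clip level and bin count are illustrative, and a full implementation would also interpolate mappings between neighbouring tiles as in interpolated ahe.

```python
import numpy as np

def clipped_equalize(tile, clip_fraction=0.01, bins=256):
    """Clipped histogram equalization sketch for one 8-bit tile: clip the histogram
    at a ceiling and redistribute the excess before equalizing, which limits noise
    over-enhancement in homogeneous regions."""
    hist, edges = np.histogram(tile, bins=bins, range=(0, 255))
    ceiling = max(int(clip_fraction * tile.size), 1)
    excess = np.sum(np.maximum(hist - ceiling, 0))
    hist = np.minimum(hist, ceiling) + excess // bins        # redistribute the excess
    cdf = np.cumsum(hist).astype(float)
    cdf = 255.0 * (cdf - cdf[0]) / max(cdf[-1] - cdf[0], 1)  # map cdf to output range
    return np.interp(tile.ravel(), edges[:-1], cdf).reshape(tile.shape)
```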

  3. A Probabilistic Framework for Peptide and Protein Quantification from Data-Dependent and Data-Independent LC-MS Proteomics Experiments

    Science.gov (United States)

    Richardson, Keith; Denny, Richard; Hughes, Chris; Skilling, John; Sikora, Jacek; Dadlez, Michał; Manteca, Angel; Jung, Hye Ryung; Jensen, Ole Nørregaard; Redeker, Virginie; Melki, Ronald; Langridge, James I.; Vissers, Johannes P.C.

    2013-01-01

    A probability-based quantification framework is presented for the calculation of relative peptide and protein abundance in label-free and label-dependent LC-MS proteomics data. The results are accompanied by credible intervals and regulation probabilities. The algorithm takes into account data uncertainties via Poisson statistics modified by a noise contribution that is determined automatically during an initial normalization stage. Protein quantification relies on assignments of component peptides to the acquired data. These assignments are generally of variable reliability and may not be present across all of the experiments comprising an analysis. It is also possible for a peptide to be identified to more than one protein in a given mixture. For these reasons the algorithm accepts a prior probability of peptide assignment for each intensity measurement. The model is constructed in such a way that outliers of any type can be automatically reweighted. Two discrete normalization methods can be employed. The first method is based on a user-defined subset of peptides, while the second method relies on the presence of a dominant background of endogenous peptides for which the concentration is assumed to be unaffected. Normalization is performed using the same computational and statistical procedures employed by the main quantification algorithm. The performance of the algorithm will be illustrated on example data sets, and its utility demonstrated for typical proteomics applications. The quantification algorithm supports relative protein quantification based on precursor and product ion intensities acquired by means of data-dependent methods, originating from all common isotopically-labeled approaches, as well as label-free ion intensity-based data-independent methods. PMID:22871168

  4. Fast quantification of uranium ores by X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Morales, E.; De la Roca, W.

    1988-09-01

A fast and reliable method for batch quantification of uranium in mineral ores is described. It is based on verification of the homogeneity of the samples by measuring the ratio of coherent to incoherent scattering. A few samples, 6 or more, are taken from the whole batch and analyzed by the fundamental parameters method. A calibration curve using the Lachance-Traill or Lucas-Tooth algorithms is constructed so that absorption and enhancement effects can be corrected. Simple interpolation of the intensities of the remaining samples gives their concentrations. (author)
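
The Lachance-Traill-type correction referred to above has the form C_i = R_i (1 + sum_j alpha_ij C_j); a fixed-point sketch of solving it is shown below, where the influence coefficients alpha are assumed to be known from standards and the arrays are illustrative.

```python
import numpy as np

def lachance_traill(R, alpha, iters=20):
    """Lachance-Traill-style correction sketch: given relative intensities R and an
    influence-coefficient matrix alpha, iterate C = R * (1 + alpha @ C) to convergence."""
    C = np.array(R, float)
    for _ in range(iters):
        C = R * (1.0 + alpha @ C)
    return C
```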

  5. A Short Review of FDTD-Based Methods for Uncertainty Quantification in Computational Electromagnetics

    Directory of Open Access Journals (Sweden)

    Theodoros T. Zygiridis

    2017-01-01

    Full Text Available We provide a review of selected computational methodologies that are based on the deterministic finite-difference time-domain algorithm and are suitable for the investigation of electromagnetic problems involving uncertainties. As it will become apparent, several alternatives capable of performing uncertainty quantification in a variety of cases exist, each one exhibiting different qualities and ranges of applicability, which we intend to point out here. Given the numerous available approaches, the purpose of this paper is to clarify the main strengths and weaknesses of the described methodologies and help the potential readers to safely select the most suitable approach for their problem under consideration.

  6. Development of the quantification procedures for in situ XRF analysis

    International Nuclear Information System (INIS)

    Kump, P.; Necemer, M.; Rupnik, P.

    2005-01-01

For in situ XRF applications, two excitation systems (radioisotope and tube excited) and an X ray spectrometer based on a Si-PIN detector were assembled and used. The radioisotope excitation system with an Am-241 source was assembled into a prototype of a compact XRF analyser, PEDUZO-01, which is also applicable in field work. The existing quantification software QAES (quantitative analysis of environmental samples) was assessed as adequate for field work as well. This QAES software was also integrated into new software attached to the developed XRF analyser PEDUZO-01, which includes spectrum acquisition, spectrum analysis and quantification and runs in the LABVIEW environment. In assessing the Si-PIN based X ray spectrometers and QAES quantification software in field work, a comparison was made with the results obtained by the standard Si(Li) based spectrometer. The results of this study prove that the use of this spectrometer is adequate for field work. This work was accepted for publication in X ray Spectrometry. Application of a simple sample preparation of solid samples was studied in view of the analytical results obtained. It has been established that under definite conditions the results are not very different from those obtained with a homogenized sample pressed into a pellet. The influence of particle size and mineralogical effects on quantitative results was studied. A simple sample preparation kit was proposed. Sample preparation for the analysis of water samples by precipitation with APDC and aerosol analysis using a dichotomous sampler were also adapted and used in the field work. An adequate sample preparation kit was proposed. (author)

  7. Simultaneous quantification of flavonoids and triterpenoids in licorice using HPLC.

    Science.gov (United States)

    Wang, Yuan-Chuen; Yang, Yi-Shan

    2007-05-01

Numerous bioactive compounds are present in licorice (Glycyrrhizae Radix), including flavonoids and triterpenoids. In this study, a reversed-phase high-performance liquid chromatography (HPLC) method for simultaneous quantification of three flavonoids (liquiritin, liquiritigenin and isoliquiritigenin) and four triterpenoids (glycyrrhizin, 18alpha-glycyrrhetinic acid, 18beta-glycyrrhetinic acid and 18beta-glycyrrhetinic acid methyl ester) from licorice was developed and further applied to quantify these seven compounds in 20 different licorice samples. Specifically, the reverse-phase HPLC was performed with a gradient mobile phase composed of 25 mM phosphate buffer (pH 2.5)-acetonitrile featuring gradient elution steps as follows: 0 min, 100:0; 10 min, 80:20; 50 min, 70:30; 73 min, 50:50; 110 min, 50:50; 125 min, 20:80; 140 min, 20:80, and peaks were detected at 254 nm. Using our technique, rather good specificity was obtained regarding the separation of these seven compounds. The regression coefficients for the linear equations for the seven compounds lay between 0.9978 and 0.9992. The limits of detection and quantification lay in the range of 0.044-0.084 and 0.13-0.25 microg/ml, respectively. The relative recovery rates for the seven compounds lay between 96.63+/-2.43 and 103.55+/-2.77%. Coefficients of variation for intra-day and inter-day precision lay in the range of 0.20-1.84 and 0.28-1.86%, respectively. Based upon our validation results, this analytical technique is a convenient method to simultaneously quantify numerous bioactive compounds derived from licorice, featuring good quantification parameters, accuracy and precision.

  8. The role of PET quantification in cardiovascular imaging.

    Science.gov (United States)

    Slomka, Piotr; Berman, Daniel S; Alexanderson, Erick; Germano, Guido

    2014-08-01

Positron Emission Tomography (PET) has several clinical and research applications in cardiovascular imaging. Myocardial perfusion imaging with PET allows accurate global and regional measurements of myocardial perfusion, myocardial blood flow and function at stress and rest in one exam. Simultaneous assessment of function and perfusion by PET with quantitative software is currently the routine practice. Combination of ejection fraction reserve with perfusion information may improve the identification of severe disease. The myocardial viability can be estimated by quantitative comparison of fluorodeoxyglucose (18FDG) and rest perfusion imaging. The myocardial blood flow and coronary flow reserve measurements are becoming routinely included in the clinical assessment due to enhanced dynamic imaging capabilities of the latest PET/CT scanners. Absolute flow measurements allow evaluation of the coronary microvascular dysfunction and provide additional prognostic and diagnostic information for coronary disease. Standard quantitative approaches to compute myocardial blood flow from kinetic PET data in automated and rapid fashion have been developed for 13N-ammonia, 15O-water and 82Rb radiotracers. The agreement between software methods available for such analysis is excellent. Relative quantification of 82Rb PET myocardial perfusion, based on comparisons to normal databases, demonstrates high performance for the detection of obstructive coronary disease. New tracers, such as 18F-flurpiridaz, may allow further improvements in the disease detection. Computerized analysis of perfusion at stress and rest reduces the variability of the assessment as compared to visual analysis. PET quantification can be enhanced by precise coregistration with CT angiography. In emerging clinical applications, the potential to identify vulnerable plaques by quantification of atherosclerotic plaque uptake of 18FDG and 18F-sodium fluoride tracers in carotids, aorta and coronary arteries

  9. Synthesis of nanodiamond derivatives carrying amino functions and quantification by a modified Kaiser test

    Directory of Open Access Journals (Sweden)

    Gerald Jarre

    2014-11-01

    Full Text Available Nanodiamonds functionalized with different organic moieties carrying terminal amino groups have been synthesized. These include conjugates generated by Diels–Alder reactions of ortho-quinodimethanes formed in situ from pyrazine and 5,6-dihydrocyclobuta[d]pyrimidine derivatives. For the quantification of primary amino groups a modified photometric assay based on the Kaiser test has been developed and validated for different types of aminated nanodiamond. The results correspond well to values obtained by thermogravimetry. The method represents an alternative wet-chemical quantification method in cases where other techniques like elemental analysis fail due to unfavourable combustion behaviour of the analyte or other impediments.
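
The photometric readout of a Kaiser-type assay is commonly converted into an amino-group loading via the Beer-Lambert law; the sketch below uses an assumed molar absorptivity and illustrative units, not the calibration reported in the paper.

```python
def amine_loading_umol_per_g(absorbance, volume_ml, mass_mg, epsilon=15000.0, path_cm=1.0):
    """Beer-Lambert-style evaluation of a Kaiser-type photometric assay (sketch):
    converts the chromophore absorbance into an amino-group loading per gram of
    nanodiamond. 'epsilon' (L mol^-1 cm^-1) is an assumed placeholder value."""
    conc_mol_per_l = absorbance / (epsilon * path_cm)          # Beer-Lambert law
    amount_umol = conc_mol_per_l * (volume_ml / 1000.0) * 1e6  # moles -> micromoles
    return amount_umol / (mass_mg / 1000.0)                    # per gram of material

# Example (illustrative numbers): A = 0.5 in 2 mL on 5 mg of nanodiamond
print(amine_loading_umol_per_g(0.5, volume_ml=2.0, mass_mg=5.0))
```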

  10. Parallel algorithms for numerical linear algebra

    CERN Document Server

    van der Vorst, H

    1990-01-01

    This is the first in a new series of books presenting research results and developments concerning the theory and applications of parallel computers, including vector, pipeline, array, fifth/future generation computers, and neural computers.All aspects of high-speed computing fall within the scope of the series, e.g. algorithm design, applications, software engineering, networking, taxonomy, models and architectural trends, performance, peripheral devices.Papers in Volume One cover the main streams of parallel linear algebra: systolic array algorithms, message-passing systems, algorithms for p

  11. Kernel learning algorithms for face recognition

    CERN Document Server

    Li, Jun-Bao; Pan, Jeng-Shyang

    2013-01-01

Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition. It also focuses on the theoretical derivation, the system framework and experiments involving kernel-based face recognition. Included within are algorithms for kernel-based face recognition, as well as a discussion of the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new

  12. Generalization of Risch's algorithm to special functions

    International Nuclear Information System (INIS)

    Raab, Clemens G.

    2013-05-01

    Symbolic integration deals with the evaluation of integrals in closed form. We present an overview of Risch's algorithm including recent developments. The algorithms discussed are suited for both indefinite and definite integration. They can also be used to compute linear relations among integrals and to find identities for special functions given by parameter integrals. The aim of this presentation is twofold: to introduce the reader to some basic ideas of differential algebra in the context of integration and to raise awareness in the physics community of computer algebra algorithms for indefinite and definite integration.

  13. Glowworm swarm optimization theory, algorithms, and applications

    CERN Document Server

    Kaipa, Krishnanand N

    2017-01-01

    This book provides a comprehensive account of the glowworm swarm optimization (GSO) algorithm, including details of the underlying ideas, theoretical foundations, algorithm development, various applications, and MATLAB programs for the basic GSO algorithm. It also discusses several research problems at different levels of sophistication that can be attempted by interested researchers. The generality of the GSO algorithm is evident in its application to diverse problems ranging from optimization to robotics. Examples include computation of multiple optima, annual crop planning, cooperative exploration, distributed search, multiple source localization, contaminant boundary mapping, wireless sensor networks, clustering, knapsack, numerical integration, solving fixed point equations, solving systems of nonlinear equations, and engineering design optimization. The book is a valuable resource for researchers as well as graduate and undergraduate students in the area of swarm intelligence and computational intellige...

  14. Applicability of a set of tomographic reconstruction algorithms for quantitative SPECT on irradiated nuclear fuel assemblies

    Energy Technology Data Exchange (ETDEWEB)

    Jacobsson Svärd, Staffan, E-mail: staffan.jacobsson_svard@physics.uu.se; Holcombe, Scott; Grape, Sophie

    2015-05-21

A fuel assembly operated in a nuclear power plant typically contains 100–300 fuel rods, depending on fuel type, which become strongly radioactive during irradiation in the reactor core. For operational and security reasons, it is of interest to experimentally deduce rod-wise information from the fuel, preferably by means of non-destructive measurements. The tomographic SPECT technique offers such possibilities through its two-step application: (1) recording the gamma-ray flux distribution around the fuel assembly, and (2) reconstructing the assembly's internal source distribution, based on the recorded radiation field. In this paper, algorithms for performing the latter step and extracting quantitative relative rod-by-rod data are accounted for. As compared to application of SPECT in nuclear medicine, nuclear fuel assemblies present a much more heterogeneous distribution of internal attenuation to gamma radiation than the human body, typically with rods containing pellets of heavy uranium dioxide surrounded by cladding of a zirconium alloy placed in water or air. This inhomogeneity severely complicates the tomographic quantification of the rod-wise relative source content, and the deduction of conclusive data requires detailed modelling of the attenuation to be introduced in the reconstructions. However, as shown in this paper, simplified models may still produce valuable information about the fuel. Here, a set of reconstruction algorithms for SPECT on nuclear fuel assemblies is described and discussed in terms of quantitative performance for two applications: verification of fuel assemblies' completeness in nuclear safeguards, and rod-wise fuel characterization. It is argued that a request not to base the former assessment on any a priori information constrains which reconstruction methods may be used in that case, whereas the use of a priori information on geometry and material content enables highly accurate quantitative

  15. Sedimentary Processes. Quantification Using Radionuclides

    International Nuclear Information System (INIS)

    Carroll, J.; Lerche, I.

    2003-01-01

    The advent of radionuclide methods in geochronology has revolutionized our understanding of modern sedimentary processes in aquatic systems. This book examines the principles of the method and its use as a quantitative tool in marine geology, with emphasis on the Pb-210 method. The assumptions and consequences of models and their behaviour are described providing the necessary background to assess the advantages and trade-offs involved when choosing a particular model for application. One of the purposes of this volume is to disentangle the influences of complicating factors, such as sediment flux variations, post-depositional diffusion of radionuclides, and bio-irrigation of sediments, to arrive at sediment ages and to properly assess the attendant data uncertainty. Environmental impacts of chemical, nuclear, or other waste material are of concern in a variety of areas around the world today. A number of relevant examples are included, demonstrating how dating models are useful for determining sources of contaminants and interpreting their influence on the environment. The book is set at a level so that an able student or professional should have no difficulty in following the procedures and methods developed. Each chapter includes case histories showing the strengths and weaknesses of a given procedure with respect to a data example. Included with this volume is the computer source code of a new generation of modelling tools based on inverse numerical analysis techniques. This first generation of the modelling tool is included, along with detailed instructions and examples for its use, in an appendix
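
    As a hedged sketch of the simplest calculation underlying such dating models, the constant flux, constant sedimentation (CF:CS) treatment of excess Pb-210 estimates the sedimentation rate from the slope of log activity versus depth; the profile values below are invented for illustration:

      import numpy as np

      LAMBDA = np.log(2) / 22.3          # Pb-210 decay constant, 1/yr

      # Invented example profile: depth (cm) and excess Pb-210 activity (Bq/kg)
      depth    = np.array([1.0, 3.0, 5.0, 7.0, 9.0, 11.0])
      activity = np.array([95., 62., 41., 26., 17., 11.])

      # CF:CS model: ln A(z) = ln A(0) - (lambda / s) * z
      slope, intercept = np.polyfit(depth, np.log(activity), 1)
      sed_rate = -LAMBDA / slope                     # cm per year
      ages = depth / sed_rate                        # age of each layer, years

      print(f"sedimentation rate ~ {sed_rate:.2f} cm/yr")
      print("layer ages (yr):", np.round(ages, 1))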

  16. Examination of packaging materials in bakery products : a validated method for detection and quantification

    NARCIS (Netherlands)

    Raamsdonk, van L.W.D.; Pinckaers, V.G.Z.; Vliege, J.J.M.; Egmond, van H.J.

    2012-01-01

    Methods for the detection and quantification of packaging materials are necessary for the control of the prohibition of these materials according to Regulation (EC)767/2009. A method has been developed and validated at RIKILT for bakery products, including sweet bread and raisin bread. This choice

  17. Quantification methods of Black Carbon: Comparison of Rock-Eval analysis with traditional methods

    NARCIS (Netherlands)

    Poot, A.; Quik, J.T.K.; Veld, H.; Koelmans, A.A.

    2009-01-01

    Black Carbon (BC) quantification methods are reviewed, including new Rock-Eval 6 data on BC reference materials. BC has been reported to have major impacts on climate, human health and environmental quality. Especially for risk assessment of persistent organic pollutants (POPs) it is important to

  18. Comprehensive screening and quantification of veterinary drugs in milk using UPLC-ToF-MS

    NARCIS (Netherlands)

    Stolker, A.A.M.; Rutgers, P.; Oosterink, J.E.; Lasaroms, J.J.P.; Peters, R.J.B.; Rhijn, van J.A.; Nielen, M.W.F.

    2008-01-01

    Ultra-performance liquid chromatography combined with time-of-flight mass spectrometry (UPLC-ToF-MS) has been used for screening and quantification of more than 100 veterinary drugs in milk. The veterinary drugs represent different classes including benzimidazoles, macrolides, penicillins,

  19. The Dropout Learning Algorithm

    Science.gov (United States)

    Baldi, Pierre; Sadowski, Peter

    2014-01-01

    Dropout is a recently introduced algorithm for training neural networks by randomly dropping units during training to prevent their co-adaptation. A mathematical analysis of some of the static and dynamic properties of dropout is provided using Bernoulli gating variables, general enough to accommodate dropout on units or connections, and with variable rates. The framework allows a complete analysis of the ensemble averaging properties of dropout in linear networks, which is useful to understand the non-linear case. The ensemble averaging properties of dropout in non-linear logistic networks result from three fundamental equations: (1) the approximation of the expectations of logistic functions by normalized geometric means, for which bounds and estimates are derived; (2) the algebraic equality between normalized geometric means of logistic functions with the logistic of the means, which mathematically characterizes logistic functions; and (3) the linearity of the means with respect to sums, as well as products of independent variables. The results are also extended to other classes of transfer functions, including rectified linear functions. Approximation errors tend to cancel each other and do not accumulate. Dropout can also be connected to stochastic neurons and used to predict firing rates, and to backpropagation by viewing the backward propagation as ensemble averaging in a dropout linear network. Moreover, the convergence properties of dropout can be understood in terms of stochastic gradient descent. Finally, for the regularization properties of dropout, the expectation of the dropout gradient is the gradient of the corresponding approximation ensemble, regularized by an adaptive weight decay term with a propensity for self-consistent variance minimization and sparse representations. PMID:24771879
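
    The two mechanisms described above, Bernoulli gating during training and ensemble averaging approximated by scaling expected activities at test time, can be sketched in a few lines of Python; the layer size and retention probability are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(1)
      p_keep = 0.5                          # probability that a unit is retained

      def forward_train(x, W, b):
          h = 1.0 / (1.0 + np.exp(-(W @ x + b)))        # logistic units
          mask = rng.binomial(1, p_keep, size=h.shape)  # Bernoulli gating variables
          return h * mask                               # dropped units output zero

      def forward_test(x, W, b):
          # Ensemble average approximated by propagating the expected activity
          h = 1.0 / (1.0 + np.exp(-(W @ x + b)))
          return p_keep * h

      W = rng.normal(size=(4, 3)); b = np.zeros(4); x = rng.normal(size=3)
      print(forward_train(x, W, b))
      print(forward_test(x, W, b))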

  20. An implementation of the Heaviside algorithm

    International Nuclear Information System (INIS)

    Dimovski, I.H.; Spiridonova, M.N.

    2011-01-01

    The so-called Heaviside algorithm based on the operational calculus approach is intended for solving initial value problems for linear ordinary differential equations with constant coefficients. We use it in the framework of Mikusinski's operational calculus. A description and implementation of the Heaviside algorithm using a computer algebra system are considered. Special attention is paid to the features making this implementation efficient. Illustrative examples are included
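
    A hedged, modern analogue of the operational-calculus idea (not the authors' computer algebra implementation) can be reproduced with SymPy's Laplace transform machinery: the initial value problem becomes an algebraic equation in the operator variable s, which is then inverted:

      import sympy as sp

      t, s = sp.symbols('t s', positive=True)

      # Solve y'' + 3 y' + 2 y = 1,  y(0) = y'(0) = 0, operationally.
      # Transforming term by term gives (s**2 + 3*s + 2) Y(s) = 1/s.
      Y = 1 / (s * (s**2 + 3*s + 2))
      y = sp.inverse_laplace_transform(Y, s, t)
      # Expected: 1/2 - exp(-t) + exp(-2*t)/2 (SymPy may attach a Heaviside(t) factor)
      print(sp.simplify(y))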

  1. Algorithmic strategies for FPGA-based vision

    OpenAIRE

    Lim, Yoong Kang

    2016-01-01

    As demands for real-time computer vision applications increase, implementations on alternative architectures have been explored. These architectures include Field-Programmable Gate Arrays (FPGAs), which offer a high degree of flexibility and parallelism. A problem with this is that many computer vision algorithms have been optimized for serial processing, and this often does not map well to FPGA implementation. This thesis introduces the concept of FPGA-tailored computer vision algorithms...

  2. Patient motion effects on the quantification of regional myocardial blood flow with dynamic PET imaging

    Energy Technology Data Exchange (ETDEWEB)

    Hunter, Chad R. R. N.; Kemp, Robert A. de, E-mail: RAdeKemp@ottawaheart.ca [Physics Department, Room 3302 Herzberg Laboratories, Carleton University, 1125 Colonel By Drive, Ottawa, Ontario K1S 5B6, Canada and Cardiac Imaging, University of Ottawa Heart Institute, 40 Ruskin Street, Ottawa, Ontario K1Y 4W7 (Canada); Klein, Ran [Department of Nuclear Medicine, Ottawa Hospital, Civic Campus, 1053 Carling Avenue, Ottawa, Ontario K1Y 4E9 (Canada); Beanlands, Rob S. [Cardiac Imaging, University of Ottawa Heart Institute, 40 Ruskin Street, Ottawa, Ontario K1Y 4W7 (Canada)

    2016-04-15

    Purpose: Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET–CT misalignment. Methods: A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. Results: In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. Simulations

  3. Patient motion effects on the quantification of regional myocardial blood flow with dynamic PET imaging

    International Nuclear Information System (INIS)

    Hunter, Chad R. R. N.; Kemp, Robert A. de; Klein, Ran; Beanlands, Rob S.

    2016-01-01

    Purpose: Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET–CT misalignment. Methods: A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. Results: In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. Simulations

  4. Patient motion effects on the quantification of regional myocardial blood flow with dynamic PET imaging.

    Science.gov (United States)

    Hunter, Chad R R N; Klein, Ran; Beanlands, Rob S; deKemp, Robert A

    2016-04-01

    Patient motion is a common problem during dynamic positron emission tomography (PET) scans for quantification of myocardial blood flow (MBF). The purpose of this study was to quantify the prevalence of body motion in a clinical setting and evaluate with realistic phantoms the effects of motion on blood flow quantification, including CT attenuation correction (CTAC) artifacts that result from PET-CT misalignment. A cohort of 236 sequential patients was analyzed for patient motion under resting and peak stress conditions by two independent observers. The presence of motion, affected time-frames, and direction of motion was recorded; discrepancy between observers was resolved by consensus review. Based on these results, patient body motion effects on MBF quantification were characterized using the digital NURBS-based cardiac-torso phantom, with characteristic time activity curves (TACs) assigned to the heart wall (myocardium) and blood regions. Simulated projection data were corrected for attenuation and reconstructed using filtered back-projection. All simulations were performed without noise added, and a single CT image was used for attenuation correction and aligned to the early- or late-frame PET images. In the patient cohort, mild motion of 0.5 ± 0.1 cm occurred in 24% and moderate motion of 1.0 ± 0.3 cm occurred in 38% of patients. Motion in the superior/inferior direction accounted for 45% of all detected motion, with 30% in the superior direction. Anterior/posterior motion was predominant (29%) in the posterior direction. Left/right motion occurred in 24% of cases, with similar proportions in the left and right directions. Computer simulation studies indicated that errors in MBF can approach 500% for scans with severe patient motion (up to 2 cm). The largest errors occurred when the heart wall was shifted left toward the adjacent lung region, resulting in a severe undercorrection for attenuation of the heart wall. Simulations also indicated that the

  5. Prospective comparison of liver stiffness measurements between two point shear wave elastography methods: Virtual touch quantification and elastography point quantification

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Hyun Suk; Lee, Jeong Min; Yoon, Jeong Hee; Lee, Dong Ho; Chang, Won; Han, Joon Koo [Seoul National University Hospital, Seoul (Korea, Republic of)

    2016-09-15

    To prospectively compare technical success rate and reliable measurements of virtual touch quantification (VTQ) elastography and elastography point quantification (ElastPQ), and to correlate liver stiffness (LS) measurements obtained by the two elastography techniques. Our study included 85 patients, 80 of whom were previously diagnosed with chronic liver disease. The technical success rate and reliable measurements of the two kinds of point shear wave elastography (pSWE) techniques were compared by χ² analysis. LS values measured using the two techniques were compared and correlated via Wilcoxon signed-rank test, Spearman correlation coefficient, and 95% Bland-Altman limit of agreement. The intraobserver reproducibility of ElastPQ was determined by 95% Bland-Altman limit of agreement and intraclass correlation coefficient (ICC). The two pSWE techniques showed similar technical success rate (98.8% for VTQ vs. 95.3% for ElastPQ, p = 0.823) and reliable LS measurements (95.3% for VTQ vs. 90.6% for ElastPQ, p = 0.509). The mean LS measurements obtained by VTQ (1.71 ± 0.47 m/s) and ElastPQ (1.66 ± 0.41 m/s) were not significantly different (p = 0.209). The LS measurements obtained by the two techniques showed strong correlation (r = 0.820); in addition, the 95% limit of agreement of the two methods was 27.5% of the mean. Finally, the ICC of repeat ElastPQ measurements was 0.991. Virtual touch quantification and ElastPQ showed similar technical success rate and reliable measurements, with strongly correlated LS measurements. However, the two methods are not interchangeable due to the large limit of agreement.
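
    The agreement statistics used in such comparisons are standard; the following hedged sketch, with invented paired measurements, computes the bias and the 95% Bland-Altman limits of agreement:

      import numpy as np

      # Invented paired liver-stiffness measurements (m/s) from two techniques
      vtq     = np.array([1.2, 1.5, 1.8, 2.1, 1.4, 1.7, 2.4, 1.3])
      elastpq = np.array([1.1, 1.6, 1.7, 2.0, 1.5, 1.6, 2.2, 1.3])

      diff = vtq - elastpq
      bias = diff.mean()
      loa  = 1.96 * diff.std(ddof=1)   # half-width of the 95% limits of agreement

      print(f"bias = {bias:.3f} m/s, 95% LoA = [{bias - loa:.3f}, {bias + loa:.3f}] m/s")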

  6. The use of self-quantification systems for personal health information: big data management activities and prospects.

    Science.gov (United States)

    Almalki, Manal; Gray, Kathleen; Sanchez, Fernando Martin

    2015-01-01

    Self-quantification is seen as an emerging paradigm for health care self-management. Self-quantification systems (SQS) can be used for tracking, monitoring, and quantifying health aspects including mental, emotional, physical, and social aspects in order to gain self-knowledge. However, there has been a lack of a systematic approach for conceptualising and mapping the essential activities that are undertaken by individuals who are using SQS in order to improve health outcomes. In this paper, we propose a new model of personal health information self-quantification systems (PHI-SQS). PHI-SQS model describes two types of activities that individuals go through during their journey of health self-managed practice, which are 'self-quantification' and 'self-activation'. In this paper, we aimed to examine thoroughly the first type of activity in PHI-SQS which is 'self-quantification'. Our objectives were to review the data management processes currently supported in a representative set of self-quantification tools and ancillary applications, and provide a systematic approach for conceptualising and mapping these processes with the individuals' activities. We reviewed and compared eleven self-quantification tools and applications (Zeo Sleep Manager, Fitbit, Actipressure, MoodPanda, iBGStar, Sensaris Senspod, 23andMe, uBiome, Digifit, BodyTrack, and Wikilife), that collect three key health data types (Environmental exposure, Physiological patterns, Genetic traits). We investigated the interaction taking place at different data flow stages between the individual user and the self-quantification technology used. We found that these eleven self-quantification tools and applications represent two major tool types (primary and secondary self-quantification systems). In each type, the individuals experience different processes and activities which are substantially influenced by the technologies' data management capabilities. Self-quantification in personal health maintenance

  7. Statistical modeling and MAP estimation for body fat quantification with MRI ratio imaging

    Science.gov (United States)

    Wong, Wilbur C. K.; Johnson, David H.; Wilson, David L.

    2008-03-01

    We are developing small animal imaging techniques to characterize the kinetics of lipid accumulation/reduction of fat depots in response to genetic/dietary factors associated with obesity and metabolic syndromes. Recently, we developed an MR ratio imaging technique that approximately yields lipid/{lipid + water}. In this work, we develop a statistical model for the ratio distribution that explicitly includes a partial volume (PV) fraction of fat and a mixture of a Rician and multiple Gaussians. Monte Carlo hypothesis testing showed that our model was valid over a wide range of coefficient of variation of the denominator distribution (c.v.: 0–0.20) and correlation coefficient between the numerator and denominator (ρ: 0–0.95), which cover the typical values that we found in MRI data sets (c.v.: 0.027–0.063, ρ: 0.50–0.75). Then a maximum a posteriori (MAP) estimate for the fat percentage per voxel is proposed. Using a digital phantom with many PV voxels, we found that ratio values were not linearly related to PV fat content and that our method accurately described the histogram. In addition, the new method estimated the ground truth within +1.6% vs. +43% for an approach using an uncorrected ratio image, when we simply threshold the ratio image. On the six genetically obese rat data sets, the MAP estimate gave total fat volumes of 279 ± 45 mL, values 21% smaller than those from the uncorrected ratio images, principally due to the non-linear PV effect. We conclude that our algorithm can increase the accuracy of fat volume quantification even in regions having many PV voxels, e.g. ectopic fat depots.

  8. Designing the Social Context for Easier Verification, Validation, and Uncertainty Quantification of Earth Science Data

    Science.gov (United States)

    Barkstrom, B. R.; Loeb, N. G.; Wielicki, B. A.

    2017-12-01

    Verification, Validation, and Uncertainty Quantification (VVUQ) are key actions that support conclusions based on Earth science data. Communities of data producers and users must undertake VVUQ when they create and use their data. The strategies [S] and tools [T] suggested below come from successful use on two large NASA projects. The first was the Earth Radiation Budget Experiment (ERBE). The second is the investigation of Clouds and the Earth's Radiant Energy System (CERES). [S] 1. Partition the production system into subsystems that deal with data transformations confined to limited space and time scales. Simplify the subsystems to minimize the number of data transformations in each subsystem. [S] 2. Derive algorithms from the fundamental physics and chemistry governing the parameters in each subsystem including those for instrument calibration. [S] 3. Use preliminary uncertainty estimates to detect unexpected discrepancies. Removing these requires diagnostic work as well as development and testing of fixes. [S] 4. Make sure there are adequate resources to support multiple end-to-end reprocessing of all data products. [T] 1. Create file identifiers that accommodate temporal and spatial sequences of data files and subsystem version changes. [T] 2. Create libraries of parameters used in common by different subsystems to reduce errors due to inconsistent values. [T] 3. Maintain a list of action items to record progress on resolving discrepancies. [T] 4. Plan on VVUQ activities that use independent data sources and peer review before distributing and archiving data. The goal of VVUQ is to provide a transparent link between the data and the physics and chemistry governing the measured quantities. The VVUQ effort also involves specialized domain experience and nomenclature. It often requires as much effort as the original system development. ERBE and CERES demonstrated that these strategies and tools can reduce the cost of VVUQ for Earth science data products.

  9. A statistical kinematic source inversion approach based on the QUESO library for uncertainty quantification and prediction

    Science.gov (United States)

    Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo

    2014-05-01

    Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust and, more generally, how accurate fault parameterization and solution predictions are. These issues are not included in "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library that uses MCMC algorithms and Bayes theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e. for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) is no longer attributed to a decreasing misfit. Identification of this cross-over is of importance as it reveals the resolution power of the studied data set (i.e. teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
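
    The sampling machinery behind such a statistical inversion can be illustrated with a toy random-walk Metropolis sketch for a single rupture parameter and a deliberately simple forward model; it stands in for, but is not equivalent to, the QUESO-based workflow described above:

      import numpy as np

      rng = np.random.default_rng(2)

      def forward(slip):                    # toy forward model: synthetic "waveform"
          t = np.linspace(0, 1, 50)
          return slip * np.sin(2 * np.pi * t)

      true_slip, sigma = 2.0, 0.3
      data = forward(true_slip) + rng.normal(0, sigma, 50)   # synthetic observations

      def log_posterior(slip):
          if not 0.0 < slip < 10.0:                          # uniform prior bounds
              return -np.inf
          resid = data - forward(slip)
          return -0.5 * np.sum(resid**2) / sigma**2

      samples, current = [], 1.0
      lp_current = log_posterior(current)
      for _ in range(20000):
          proposal = current + rng.normal(0, 0.1)            # random-walk proposal
          lp_prop = log_posterior(proposal)
          if np.log(rng.uniform()) < lp_prop - lp_current:   # Metropolis acceptance
              current, lp_current = proposal, lp_prop
          samples.append(current)

      post = np.array(samples[5000:])                        # discard burn-in
      print(f"posterior mean slip = {post.mean():.2f} +/- {post.std():.2f}")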

  10. HPTLC in Herbal Drug Quantification

    Science.gov (United States)

    Shinde, Devanand B.; Chavan, Machindra J.; Wakte, Pravin S.

    For the past few decades, compounds from natural sources have been gaining importance because of the vast chemical diversity they offer. This has led to a phenomenal increase in the demand for herbal medicines in the last two decades, and a need has been felt for ensuring the quality, safety, and efficacy of herbal drugs. Phytochemical evaluation is one of the tools for the quality assessment, which includes preliminary phytochemical screening, chemoprofiling, and marker compound analysis using modern analytical techniques. High-performance thin-layer chromatography (HPTLC) has emerged as an important tool for the qualitative, semiquantitative, and quantitative phytochemical analysis of the herbal drugs and formulations. This includes developing TLC fingerprinting profiles and estimation of biomarkers. This review attempts to focus on the theoretical considerations of HPTLC and some examples of herbal drugs and formulations analyzed by HPTLC.

  11. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  12. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack

    2014-02-04

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms, and an analogue of a three-dimensional generalized Radon transform were, respectively, observed to strong-scale from 1-node/16-cores up to 1024-nodes/16,384-cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  13. Agency and Algorithms

    Directory of Open Access Journals (Sweden)

    Hanns Holger Rutz

    2016-11-01

    Full Text Available Although the concept of algorithms was established a long time ago, their current topicality indicates a shift in the discourse. Classical definitions based on logic seem to be inadequate to describe their aesthetic capabilities. New approaches stress their involvement in material practices as well as their incompleteness. Algorithmic aesthetics can no longer be tied to the static analysis of programs, but must take into account the dynamic and experimental nature of coding practices. It is suggested that the aesthetic objects thus produced articulate something that could be called algorithmicity or the space of algorithmic agency. This is the space or the medium – following Luhmann’s form/medium distinction – where human and machine undergo mutual incursions. In the resulting coupled “extimate” writing process, human initiative and algorithmic speculation cannot be clearly divided out any longer. An attempt is made to observe defining aspects of such a medium by drawing a trajectory across a number of sound pieces. The operation of exchange between form and medium, which I call reconfiguration, is indicated by this trajectory.

  14. Uncertainty Quantification in Numerical Aerodynamics

    KAUST Repository

    Litvinenko, Alexander; Matthies, Hermann G.; Liu, Dishi; Schillings, Claudia; Schulz, Volker

    2017-01-01

    In the numerical section we compare five methods (quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, a gradient-enhanced version of Kriging, radial basis functions, and point collocation polynomial chaos) in terms of their efficiency in estimating statistics of aerodynamic performance under random perturbations of the airfoil geometry [D.Liu et al '17]. For the modeling we used the TAU code, developed at DLR, Germany.

  15. Algebraic dynamics algorithm: Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    WANG ShunJin; ZHANG Hua

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.
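
    A hedged sketch of the underlying idea, a generic Nth-order Taylor step for an autonomous ODE dx/dt = f(x) with the higher time derivatives generated symbolically, is given below; it is a didactic reconstruction, not the authors' code:

      import sympy as sp

      def taylor_step(f_expr, x_sym, x0, h, order=6):
          """One Nth-order Taylor step for dx/dt = f(x) starting from x0."""
          derivs = [x_sym]                   # x, dx/dt, d2x/dt2, ... as functions of x
          for _ in range(order):
              # d/dt of the previous derivative via the chain rule: d/dt g(x) = g'(x) f(x)
              derivs.append(sp.diff(derivs[-1], x_sym) * f_expr)
          return float(sum(d.subs(x_sym, x0) * h**n / sp.factorial(n)
                           for n, d in enumerate(derivs)))

      x = sp.symbols('x')
      f = -x                                 # test model: dx/dt = -x, x(0) = 1
      xk, h = 1.0, 0.1
      for _ in range(10):                    # integrate to t = 1
          xk = taylor_step(f, x, xk, h)
      print(xk, "vs exact", float(sp.exp(-1)))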

  16. Algebraic dynamics algorithm:Numerical comparison with Runge-Kutta algorithm and symplectic geometric algorithm

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Based on the exact analytical solution of ordinary differential equations, a truncation of the Taylor series of the exact solution to the Nth order leads to the Nth order algebraic dynamics algorithm. A detailed numerical comparison is presented with Runge-Kutta algorithm and symplectic geometric algorithm for 12 test models. The results show that the algebraic dynamics algorithm can better preserve both geometrical and dynamical fidelity of a dynamical system at a controllable precision, and it can solve the problem of algorithm-induced dissipation for the Runge-Kutta algorithm and the problem of algorithm-induced phase shift for the symplectic geometric algorithm.

  17. Wavelets in quantification of liver tumors in contrasted computed tomography images; Wavelets na quantificacao de tumores de figado em exames contrastados de tomografia computadorizada

    Energy Technology Data Exchange (ETDEWEB)

    Rodrigues, Bruna T.; Alvarez, Matheus; Souza, Rafael T.F.; Miranda, Jose R.A., E-mail: matheus@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Romeiro, Fernando G. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Fac de Mediciana. Departamento de Clinica Medica; Pina, Diana R. de; Trindade, Andre Petean [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Fac. de Medicina. Departamento de Doencas Tropicais e Diagnostico por Imagem

    2012-12-15

    This paper presents an original methodology for liver tumor segmentation based on the wavelet transform. A virtual phantom was constructed with the same mean and standard deviation of gray-level intensity as the measured liver tissue. The optimized algorithm had a sensitivity ranging from 0.81 to 0.83, with a specificity of 0.95 for the differentiation of hepatic tumors from normal tissues. We obtained a 96% agreement between the pixels segmented by an experienced radiologist and by the algorithm presented here. According to the results shown in this work, the algorithm is well suited for initial tests of liver tumor quantification in retrospective surveys. (author)

  18. An Innovative Thinking-Based Intelligent Information Fusion Algorithm

    Directory of Open Access Journals (Sweden)

    Huimin Lu

    2013-01-01

    Full Text Available This study proposes an intelligent algorithm that can realize information fusion with reference to relevant research achievements in brain cognitive theory and innovative computation. This algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. Furthermore, the five key parts of this algorithm, namely information sense and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. This algorithm fully develops the innovative thinking skills of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influences of each parameter of this algorithm on algorithm performance are analyzed and compared with those of classical intelligent algorithms through tests. Test results suggest that the algorithm proposed in this study can obtain the optimum problem solution with fewer objective evaluations, improve optimization effectiveness, and achieve the effective fusion of information.

  19. Evaluation Of Algorithms Of Anti-HIV Antibody Tests

    Directory of Open Access Journals (Sweden)

    Paranjape R.S

    1997-01-01

    Full Text Available Research question: Can alternate algorithms be used in place of the conventional algorithm for epidemiological studies of HIV infection at lower cost? Objective: To compare the results of HIV sero-prevalence as determined by test algorithms combining three kits with the conventional test algorithm. Study design: Cross-sectional. Participants: 282 truck drivers. Statistical analysis: Sensitivity and specificity analysis and predictive values. Results: Three different algorithms that do not include Western Blot (WB) were compared with the conventional algorithm, in a truck driver population with 5.6% prevalence of HIV-1 infection. Algorithms with one EIA (Genetic Systems or Biotest) and a rapid test (Immunocomb), or with two EIAs, showed 100% positive predictive value in relation to the conventional algorithm. Using an algorithm with an EIA as screening test and a rapid test as a confirmatory test was 50 to 70% less expensive than the conventional algorithm per positive serum sample. These algorithms obviate the interpretation of indeterminate results and also give a differential diagnosis of HIV-2 infection. Alternate algorithms are ideally suited for community-based control programmes in developing countries. Application of these algorithms in populations with low prevalence should also be studied in order to evaluate universal applicability.
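
    The comparison rests on standard 2x2 diagnostic statistics; the hedged sketch below, with invented counts rather than the study's data, shows how sensitivity, specificity and positive predictive value are computed for an alternate algorithm against the reference algorithm:

      def diagnostic_stats(tp, fp, fn, tn):
          """2x2 comparison of an alternate algorithm against the reference algorithm."""
          sensitivity = tp / (tp + fn)
          specificity = tn / (tn + fp)
          ppv = tp / (tp + fp)
          return sensitivity, specificity, ppv

      # Invented counts: alternate algorithm vs conventional (WB-based) algorithm
      sens, spec, ppv = diagnostic_stats(tp=16, fp=0, fn=0, tn=266)
      print(f"sensitivity={sens:.2%} specificity={spec:.2%} PPV={ppv:.2%}")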

  20. Uncertainty Quantification in Alchemical Free Energy Methods.

    Science.gov (United States)

    Bhati, Agastya P; Wan, Shunzhou; Hu, Yuan; Sherborne, Brad; Coveney, Peter V

    2018-05-02

    Alchemical free energy methods have gained much importance recently from several reports of improved ligand-protein binding affinity predictions based on their implementation using molecular dynamics simulations. A large number of variants of such methods implementing different accelerated sampling techniques and free energy estimators are available, each claimed to be better than the others in its own way. However, the key features of reproducibility and quantification of associated uncertainties in such methods have barely been discussed. Here, we apply a systematic protocol for uncertainty quantification to a number of popular alchemical free energy methods, covering both absolute and relative free energy predictions. We show that a reliable measure of error estimation is provided by ensemble simulation-an ensemble of independent MD simulations-which applies irrespective of the free energy method. The need to use ensemble methods is fundamental and holds regardless of the duration of time of the molecular dynamics simulations performed.

  1. Uncertainty Quantification with Applications to Engineering Problems

    DEFF Research Database (Denmark)

    Bigoni, Daniele

    in measurements, predictions and manufacturing, and we can say that any dynamical system used in engineering is subject to some of these uncertainties. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor...... and thus the UQ analysis of the associated systems will benefit greatly from the application of methods which require few function evaluations. We first consider the propagation of the uncertainty and the sensitivity analysis of the non-linear dynamics of railway vehicles with suspension components whose......-scale problems, where efficient methods are necessary with today’s computational resources. The outcome of this work was also the creation of several freely available Python modules for Uncertainty Quantification, which are listed and described in the appendix....

  2. Level 2 probabilistic event analyses and quantification

    International Nuclear Information System (INIS)

    Boneham, P.

    2003-01-01

    In this paper an example of quantification of a severe accident phenomenological event is given. The analysis performed to assess the probability that the debris released from the reactor vessel was in a coolable configuration in the lower drywell is presented, together with an assessment of the type of core/concrete attack that would occur. The evaluation of ex-vessel debris coolability by an event in the Simplified Boiling Water Reactor (SBWR) Containment Event Tree (CET), and a detailed Decomposition Event Tree (DET) developed to aid in the quantification of this CET event, are considered. The headings in the DET selected to represent plant physical states (e.g., reactor vessel pressure at the time of vessel failure) and the uncertainties associated with the occurrence of critical physical phenomena (e.g., debris configuration in the lower drywell) considered important to assessing whether the debris was coolable or not coolable ex-vessel are also discussed.

  3. Adaptive polynomial chaos techniques for uncertainty quantification of a gas cooled fast reactor transient

    International Nuclear Information System (INIS)

    Perko, Z.; Gilli, L.; Lathouwers, D.; Kloosterman, J. L.

    2013-01-01

    Uncertainty quantification plays an increasingly important role in the nuclear community, especially with the rise of Best Estimate Plus Uncertainty methodologies. Sensitivity analysis, surrogate models, Monte Carlo sampling and several other techniques can be used to propagate input uncertainties. In recent years however polynomial chaos expansion has become a popular alternative providing high accuracy at affordable computational cost. This paper presents such polynomial chaos (PC) methods using adaptive sparse grids and adaptive basis set construction, together with an application to a Gas Cooled Fast Reactor transient. Comparison is made between a new sparse grid algorithm and the traditionally used technique proposed by Gerstner. An adaptive basis construction method is also introduced and is proved to be advantageous both from an accuracy and a computational point of view. As a demonstration the uncertainty quantification of a 50% loss of flow transient in the GFR2400 Gas Cooled Fast Reactor design was performed using the CATHARE code system. The results are compared to direct Monte Carlo sampling and show the superior convergence and high accuracy of the polynomial chaos expansion. Since PC techniques are easy to implement, they can offer an attractive alternative to traditional techniques for the uncertainty quantification of large scale problems. (authors)
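
    The essence of a (non-adaptive) polynomial chaos expansion can be shown in one dimension; the hedged sketch below projects a model output onto probabilists' Hermite polynomials by Gauss-Hermite quadrature and reads off the mean and variance. The model function is an arbitrary illustration, not the GFR2400/CATHARE application:

      import numpy as np
      from numpy.polynomial import hermite_e as H
      from math import factorial, sqrt, pi

      def model(xi):                       # code output as a function of a standard
          return np.exp(0.3 * xi) + xi**2  # normal input parameter (illustrative)

      order, nquad = 6, 40
      nodes, weights = H.hermegauss(nquad)          # weight exp(-x^2/2)
      weights = weights / sqrt(2 * pi)              # normalise to the N(0,1) density

      # PCE coefficients c_n = E[model(X) He_n(X)] / n!
      coeffs = []
      for n in range(order + 1):
          He_n = H.hermeval(nodes, [0] * n + [1])
          coeffs.append(np.sum(weights * model(nodes) * He_n) / factorial(n))

      mean = coeffs[0]
      variance = sum(factorial(n) * coeffs[n]**2 for n in range(1, order + 1))
      print(f"PCE mean = {mean:.4f}, PCE variance = {variance:.4f}")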

  4. Application of adaptive hierarchical sparse grid collocation to the uncertainty quantification of nuclear reactor simulators

    Energy Technology Data Exchange (ETDEWEB)

    Yankov, A.; Downar, T. [University of Michigan, 2355 Bonisteel Blvd, Ann Arbor, MI 48109 (United States)

    2013-07-01

    Recent efforts in the application of uncertainty quantification to nuclear systems have utilized methods based on generalized perturbation theory and stochastic sampling. While these methods have proven to be effective they both have major drawbacks that may impede further progress. A relatively new approach based on spectral elements for uncertainty quantification is applied in this paper to several problems in reactor simulation. Spectral methods based on collocation attempt to couple the approximation free nature of stochastic sampling methods with the determinism of generalized perturbation theory. The specific spectral method used in this paper employs both the Smolyak algorithm and adaptivity by using Newton-Cotes collocation points along with linear hat basis functions. Using this approach, a surrogate model for the outputs of a computer code is constructed hierarchically by adaptively refining the collocation grid until the interpolant is converged to a user-defined threshold. The method inherently fits into the framework of parallel computing and allows for the extraction of meaningful statistics and data that are not within reach of stochastic sampling and generalized perturbation theory. This paper aims to demonstrate the advantages of spectral methods-especially when compared to current methods used in reactor physics for uncertainty quantification-and to illustrate their full potential. (authors)

  5. Specificity and affinity quantification of protein-protein interactions.

    Science.gov (United States)

    Yan, Zhiqiang; Guo, Liyong; Hu, Liang; Wang, Jin

    2013-05-01

    Most biological processes are mediated by protein-protein interactions. Determination of protein-protein structures and insight into their interactions are vital to understand the mechanisms of protein functions. Currently, compared with isolated protein structures, only a small fraction of protein-protein structures are experimentally solved. Therefore, computational docking methods play an increasing role in predicting the structures and interactions of protein-protein complexes. The scoring function of protein-protein interactions is the key factor responsible for the accuracy of computational docking. Previous scoring functions were mostly developed by optimizing the binding affinity, which determines the stability of the protein-protein complex, but they often lack consideration of specificity, which determines the discrimination of the native protein-protein complex against competitive ones. We developed a scoring function (named SPA-PP, specificity and affinity of the protein-protein interactions) by incorporating both specificity and affinity into the optimization strategy. The testing results and comparisons with other scoring functions show that SPA-PP performs remarkably well on both predictions of binding pose and binding affinity. Thus, SPA-PP is a promising quantification of protein-protein interactions, which can be implemented in protein docking tools and applied to predictions of protein-protein structure and affinity. The algorithm is implemented in C language, and the code can be downloaded from http://dl.dropbox.com/u/1865642/Optimization.cpp.

  6. The Method of Manufactured Universes for validating uncertainty quantification methods

    KAUST Repository

    Stripling, H.F.

    2011-09-01

    The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which experimental data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented in this paper manufactures a particle-transport universe, models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new experiments within the manufactured reality. The results of this preliminary study indicate that, even in a simple problem, the improper application of a specific UQ method or unrealized effects of a modeling assumption may produce inaccurate predictions. We conclude that the validation framework presented in this paper is a powerful and flexible tool for the investigation and understanding of UQ methodologies. © 2011 Elsevier Ltd. All rights reserved.

  7. Crowdsourced Quantification and Visualization of Urban Mobility Space Inequality

    Directory of Open Access Journals (Sweden)

    Michael Szell

    2018-03-01

    Full Text Available Most cities are car-centric, allocating a privileged amount of urban space to cars at the expense of sustainable mobility like cycling. Simultaneously, privately owned vehicles are vastly underused, wasting valuable opportunities for accommodating more people in a livable urban environment by occupying spacious parking areas. Since a data-driven quantification and visualization of such urban mobility space inequality is lacking, here we explore how crowdsourced data can help to advance its understanding. In particular, we describe how the open-source online platform What the Street!? uses massive user-generated data from OpenStreetMap for the interactive exploration of city-wide mobility spaces. Using polygon packing and graph algorithms, the platform rearranges all parking and mobility spaces of cars, rails, and bicycles of a city to be directly comparable, making mobility space inequality accessible to a broad public. This crowdsourced method confirms a prevalent imbalance between modal share and space allocation in 23 cities worldwide, typically discriminating against bicycles. Analyzing the guesses of the platform's visitors about mobility space distributions, we find that this discrimination is consistently underestimated in public opinion. Finally, we discuss a visualized scenario in which extensive parking areas are regained through fleets of shared, autonomous vehicles. We outline how such accessible visualization platforms can help urban planners and policy makers reclaim road and parking space for pushing forward sustainable transport solutions.

  8. Detection of algorithmic trading

    Science.gov (United States)

    Bogoev, Dimitar; Karam, Arzé

    2017-10-01

    We develop a new approach to reflect the behavior of algorithmic traders. Specifically, we provide an analytical and tractable way to infer patterns of quote volatility and price momentum consistent with different types of strategies employed by algorithmic traders, and we propose two ratios to quantify these patterns. The quote volatility ratio is based on the rate of oscillation of the best ask and best bid quotes over an extremely short period of time, whereas the price momentum ratio is based on identifying patterns of rapid upward or downward movement in prices. The two ratios are evaluated across several asset classes. We further run a two-stage Artificial Neural Network experiment on the quote volatility ratio; the first stage is used to detect the quote volatility patterns resulting from algorithmic activity, while the second is used to validate the quality of signal detection provided by our measure.
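
    A hedged sketch of the flavour of such ratios is given below; the definitions are simplified proxies rather than the paper's exact ratios, and the quote series is synthetic:

      import numpy as np

      rng = np.random.default_rng(3)

      # Synthetic best-ask quote series: a rapidly flickering segment (the kind of
      # pattern associated with algorithmic quoting) followed by a directional move.
      ask = np.concatenate([10.00 + 0.01 * rng.integers(0, 2, 500),
                            10.00 + 0.01 * np.arange(200)])

      def oscillation_ratio(q):
          """Fraction of quote updates that immediately reverse direction (proxy only)."""
          moves = np.sign(np.diff(q))
          moves = moves[moves != 0]
          reversals = np.sum(moves[1:] * moves[:-1] < 0)
          return reversals / max(len(moves) - 1, 1)

      def momentum_ratio(q):
          """Fraction of quote updates continuing in the same direction (proxy only)."""
          moves = np.sign(np.diff(q))
          moves = moves[moves != 0]
          continuations = np.sum(moves[1:] * moves[:-1] > 0)
          return continuations / max(len(moves) - 1, 1)

      print("oscillation ratio:", round(oscillation_ratio(ask), 2))
      print("momentum ratio   :", round(momentum_ratio(ask), 2))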

  9. Algorithms in invariant theory

    CERN Document Server

    Sturmfels, Bernd

    2008-01-01

    J. Kung and G.-C. Rota, in their 1984 paper, write: "Like the Arabian phoenix rising out of its ashes, the theory of invariants, pronounced dead at the turn of the century, is once again at the forefront of mathematics". The book of Sturmfels is both an easy-to-read textbook for invariant theory and a challenging research monograph that introduces a new approach to the algorithmic side of invariant theory. The Groebner bases method is the main tool by which the central problems in invariant theory become amenable to algorithmic solutions. Students will find the book an easy introduction to this "classical and new" area of mathematics. Researchers in mathematics, symbolic computation, and computer science will get access to a wealth of research ideas, hints for applications, outlines and details of algorithms, worked out examples, and research problems.

  10. The Retina Algorithm

    CERN Multimedia

    CERN. Geneva; PUNZI, Giovanni

    2015-01-01

    Charged particle reconstruction is one of the most demanding computational tasks found in HEP, and it becomes increasingly important to perform it in real time. We envision that HEP would greatly benefit from achieving a long-term goal of making track reconstruction happen transparently as part of the detector readout ("detector-embedded tracking"). We describe here a track-reconstruction approach based on a massively parallel pattern-recognition algorithm, inspired by studies of the processing of visual images by the brain as it happens in nature (the 'RETINA algorithm'). It turns out that high-quality tracking in large HEP detectors is possible with very small latencies, when this algorithm is implemented in specialized processors, based on current state-of-the-art, high-speed/high-bandwidth digital devices.

  11. Heterogeneous architecture to process swarm optimization algorithms

    Directory of Open Access Journals (Sweden)

    Maria A. Dávila-Guzmán

    2014-01-01

    Full Text Available For some years now, parallel processing has been embedded in personal computers through the inclusion of co-processing units, such as graphics processing units, resulting in a heterogeneous platform. This paper presents the implementation of swarm algorithms on this platform to solve several benchmark functions from optimization problems, highlighting their inherent parallel processing and distributed control features. In the swarm algorithms, each individual and each problem dimension are parallelized at the granularity offered by the processing system, which also provides low communication latency between individuals through the embedded processing. To evaluate the potential of swarm algorithms on graphics processing units we have implemented two of them: the particle swarm optimization algorithm and the bacterial foraging optimization algorithm. The algorithms' performance is measured as the speedup of the NVIDIA GeForce GTX480 heterogeneous platform over a typical sequential processing platform; the results show that the particle swarm algorithm achieved a speedup of up to 36.82x and the bacterial foraging algorithm up to 9.26x. Finally, the effect of increasing the population size is evaluated; we show that both the dispersion and the quality of the solutions decrease despite the high acceleration performance, since the initial distribution of the individuals can converge to a local optimal solution.
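
    For reference, the serial core of the particle swarm update that such GPU implementations parallelize can be sketched as follows; the benchmark function and parameter values are illustrative assumptions:

      import numpy as np

      rng = np.random.default_rng(4)

      def sphere(x):                           # benchmark function (illustrative)
          return np.sum(x**2, axis=-1)

      n, dim, iters = 40, 10, 300
      w, c1, c2 = 0.7, 1.5, 1.5                # inertia and acceleration coefficients
      pos = rng.uniform(-5, 5, (n, dim))
      vel = np.zeros((n, dim))
      pbest, pbest_val = pos.copy(), sphere(pos)
      gbest = pbest[pbest_val.argmin()].copy()

      for _ in range(iters):
          r1, r2 = rng.random((n, dim)), rng.random((n, dim))
          # velocity update: inertia + cognitive pull + social pull
          vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
          pos = pos + vel
          val = sphere(pos)
          improved = val < pbest_val
          pbest[improved], pbest_val[improved] = pos[improved], val[improved]
          gbest = pbest[pbest_val.argmin()].copy()

      print("best value found:", pbest_val.min())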

  12. A Spanish model for quantification and management of construction waste

    International Nuclear Information System (INIS)

    Solis-Guzman, Jaime; Marrero, Madelyn; Montes-Delgado, Maria Victoria; Ramirez-de-Arellano, Antonio

    2009-01-01

    Currently, construction and demolition waste (C and D waste) is a worldwide issue that concerns not only governments but also the building actors involved in construction activity. In Spain, a new national decree has been regulating the production and management of C and D waste since February 2008. The present work describes the waste management model that has inspired this decree: the Alcores model implemented with good results in Los Alcores Community (Seville, Spain). A detailed model is also provided to estimate the volume of waste that is expected to be generated on the building site. The quantification of C and D waste volume, from the project stage, is essential for the building actors to properly plan and control its disposal. This quantification model has been developed by studying 100 dwelling projects, especially their bill of quantities, and defining three coefficients to estimate the demolished volume (CT), the wreckage volume (CR) and the packaging volume (CE). Finally, two case studies are included to illustrate the usefulness of the model to estimate C and D waste volume in both new construction and demolition projects.
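
    A hedged sketch of how such coefficients are applied in practice is shown below; the coefficient values and the built area are invented placeholders, not the calibrated values of the Alcores model:

      # Estimate on-site C&D waste volume from the built area using the three
      # coefficients of the model: demolished (CT), wreckage (CR) and packaging (CE).
      # All numbers below are illustrative assumptions.

      def cd_waste_volume(built_area_m2, ct, cr, ce):
          """Return estimated waste volumes (m3) per category and in total."""
          demolished = ct * built_area_m2
          wreckage   = cr * built_area_m2
          packaging  = ce * built_area_m2
          return demolished, wreckage, packaging, demolished + wreckage + packaging

      v_t, v_r, v_e, total = cd_waste_volume(built_area_m2=1200, ct=0.02, cr=0.08, ce=0.04)
      print(f"demolished={v_t:.0f} m3, wreckage={v_r:.0f} m3, "
            f"packaging={v_e:.0f} m3, total={total:.0f} m3")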

  13. Biomass to energy : GHG reduction quantification protocols and case study

    International Nuclear Information System (INIS)

    Reusing, G.; Taylor, C.; Nolan, W.; Kerr, G.

    2009-01-01

    With the growing concerns over greenhouse gases and their contribution to climate change, it is necessary to find ways of reducing environmental impacts by diversifying energy sources to include non-fossil fuel energy sources. Among the fastest growing green energy sources is energy from waste facilities that use biomass that would otherwise be landfilled or stockpiled. The quantification of greenhouse gas reductions through the use of biomass to energy systems can be calculated using various protocols and methodologies. This paper described each of these methodologies and presented a case study comparing some of these quantification methodologies. A summary and comparison of biomass to energy greenhouse gas reduction protocols in use or under development by the United Nations, the European Union, the Province of Alberta and Environment Canada was presented. It was concluded that regulatory, environmental pressures, and public policy will continue to impact the practices associated with biomass processing or landfill operations, such as composting, or in the case of landfills, gas collection systems, thus reducing the amount of potential credit available for biomass to energy facility offset projects. 10 refs., 2 tabs., 6 figs

  14. Biomass to energy : GHG reduction quantification protocols and case study

    Energy Technology Data Exchange (ETDEWEB)

    Reusing, G.; Taylor, C. [Conestoga - Rovers and Associates, Waterloo, ON (Canada); Nolan, W. [Liberty Energy, Hamilton, ON (Canada); Kerr, G. [Index Energy, Ajax, ON (Canada)

    2009-07-01

    With the growing concerns over greenhouse gases and their contribution to climate change, it is necessary to find ways of reducing environmental impacts by diversifying energy sources to include non-fossil fuel energy sources. Among the fastest growing green energy sources is energy from waste facilities that use biomass that would otherwise be landfilled or stockpiled. The quantification of greenhouse gas reductions through the use of biomass to energy systems can be calculated using various protocols and methodologies. This paper described each of these methodologies and presented a case study comparing some of these quantification methodologies. A summary and comparison of biomass to energy greenhouse gas reduction protocols in use or under development by the United Nations, the European Union, the Province of Alberta and Environment Canada was presented. It was concluded that regulatory, environmental pressures, and public policy will continue to impact the practices associated with biomass processing or landfill operations, such as composting, or in the case of landfills, gas collection systems, thus reducing the amount of potential credit available for biomass to energy facility offset projects. 10 refs., 2 tabs., 6 figs.

  15. A Spanish model for quantification and management of construction waste.

    Science.gov (United States)

    Solís-Guzmán, Jaime; Marrero, Madelyn; Montes-Delgado, Maria Victoria; Ramírez-de-Arellano, Antonio

    2009-09-01

    Currently, construction and demolition waste (C&D waste) is a worldwide issue that concerns not only governments but also the building actors involved in construction activity. In Spain, a new national decree has been regulating the production and management of C&D waste since February 2008. The present work describes the waste management model that has inspired this decree: the Alcores model implemented with good results in Los Alcores Community (Seville, Spain). A detailed model is also provided to estimate the volume of waste that is expected to be generated on the building site. The quantification of C&D waste volume, from the project stage, is essential for the building actors to properly plan and control its disposal. This quantification model has been developed by studying 100 dwelling projects, especially their bill of quantities, and defining three coefficients to estimate the demolished volume (CT), the wreckage volume (CR) and the packaging volume (CE). Finally, two case studies are included to illustrate the usefulness of the model to estimate C&D waste volume in both new construction and demolition projects.
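
    As a rough illustration of how such a coefficient-based model is used in practice, the sketch below (Python) multiplies a project's built area by the three coefficients to obtain waste volumes; the coefficient values, function name and 1200 m2 example are assumptions made for illustration, not figures taken from the Alcores model.

        # Minimal sketch of a coefficient-based C&D waste estimate (illustrative values only).
        def estimate_cd_waste(built_area_m2, ct=0.05, cr=0.08, ce=0.04):
            """Return estimated waste volumes (m3).

            ct, cr and ce stand in for the demolished, wreckage and packaging
            coefficients (m3 of waste per m2 of built area); real projects would
            take them from the quantification model and the bill of quantities.
            """
            return {
                "demolished_m3": ct * built_area_m2,
                "wreckage_m3": cr * built_area_m2,
                "packaging_m3": ce * built_area_m2,
            }

        volumes = estimate_cd_waste(1200.0)  # hypothetical 1200 m2 dwelling project
        print(volumes, "total =", round(sum(volumes.values()), 1), "m3")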

  16. Quantification of 5-methyl-2'-deoxycytidine in the DNA.

    Science.gov (United States)

    Giel-Pietraszuk, Małgorzata; Insińska-Rak, Małgorzata; Golczak, Anna; Sikorski, Marek; Barciszewska, Mirosława; Barciszewski, Jan

    2015-01-01

    Methylation at position 5 of cytosine (Cyt) at CpG sequences, leading to formation of 5-methyl-cytosine (m(5)Cyt), is an important element of epigenetic regulation of gene expression. Modification of the normal methylation pattern, unique to each organism, leads to the development of pathological processes and diseases, including cancer. Therefore, quantification of DNA methylation and analysis of changes in the methylation pattern are very important from a practical point of view and can be used for diagnostic purposes, as well as for monitoring of treatment progress. In this paper we present a new method for quantification of 5-methyl-2'-deoxycytidine (m(5)C) in the DNA. The technique is based on conversion of m(5)C into fluorescent 3,N(4)-etheno-5-methyl-2'-deoxycytidine (εm(5)C) and its identification by reversed-phase high-performance liquid chromatography (RP-HPLC). The assay was used to evaluate m(5)C concentration in DNA of calf thymus and peripheral blood of cows bred under different conditions. This approach can be applied to measure 5-methylcytosine in cellular DNA from different cells and tissues.

  17. Quantification of heterogeneity observed in medical images

    OpenAIRE

    Brooks, Frank J; Grigsby, Perry W

    2013-01-01

    Background There has been much recent interest in the quantification of visually evident heterogeneity within functional grayscale medical images, such as those obtained via magnetic resonance or positron emission tomography. In the case of images of cancerous tumors, variations in grayscale intensity imply variations in crucial tumor biology. Despite these considerable clinical implications, there is as yet no standardized method for measuring the heterogeneity observed via these imaging mod...

  18. Named Entity Linking Algorithm

    Directory of Open Access Journals (Sweden)

    M. F. Panteleev

    2017-01-01

    Full Text Available In natural language processing, Named Entity Linking (NEL) is the task of identifying an entity mentioned in a text and linking it to an entity in a knowledge base (for example, DBpedia). Currently, there is a diversity of approaches to this problem, but two main classes can be identified: graph-based approaches and machine learning-based ones. An algorithm combining graph and machine learning approaches is proposed, based on stated assumptions about the interrelations of named entities in a sentence and in the text as a whole. In the case of graph-based approaches, it is necessary to identify an optimal set of related entities according to some metric that characterizes the distance between these entities in a graph built on a knowledge base. Because of limitations in processing power, solving this task directly is impossible, so a modification of it is proposed. Machine learning alone cannot provide an independent solution, owing to the small volume of training datasets relevant to the NEL task, but it can contribute to improving the quality of the algorithm. An adaptation of the Latent Dirichlet Allocation model is proposed in order to obtain a measure of the compatibility of attributes of various entities encountered in one context. The efficiency of the proposed algorithm was tested experimentally on an independently generated test dataset, on which its performance was compared with that of the open-source product DBpedia Spotlight, which solves the NEL problem. The mockup based on the proposed algorithm was slower than DBpedia Spotlight but showed higher accuracy, which indicates that this direction of work is promising. The main directions of further development, aimed at increasing the accuracy and throughput of the system, are outlined.

  19. Visualization and quantification of evolving datasets. Final report: 8-1-93 - 4-30-97

    International Nuclear Information System (INIS)

    Zabusky, N.; Silver, D.

    1999-01-01

    The material below is the final technical/progress report of the Laboratory for Visiometrics and Modeling (Vizlab) for the grant entitled Visualization and Quantification of Evolving Phenomena. This includes coordination with DOE-supported scientists at Los Alamos National Laboratory (LANL) and Princeton Plasma Physics Laboratory (PPPL), and with theoretical and computational physicists at the National Institute of Fusion Science (NIFS) in Nagoya, Japan and the Institute of Laser Engineering (ILE) in Osaka, Japan. The authors' research areas included: enhancement and distribution of the DAVID environment, a 2D visualization environment incorporating many advanced quantifications and diagnostics useful for prediction, understanding, and reduced model formation; feature extraction, tracking and quantification of 3D time-dependent datasets of nonlinear and turbulent simulations, both compressible and incompressible, work applicable to all 3D time-varying simulations; and visiometrics in shock-interface interactions and mixing for the Richtmyer-Meshkov (RM) environment, which highlights reduced models for nonlinear evolutions and the role of density-stratified interfaces (contact discontinuities) and has application to supernova physics, laser fusion and supersonic combustion. The collaborative projects included: (1) feature extraction, tracking and quantification in 3D turbulence, compressible and incompressible; (2) the Numerical Tokamak Project (NTP); and (3) data projection and reduced modeling for shock-interface interactions and mixing (the Richtmyer-Meshkov (RM) environment relevant to laser fusion and combustion).

  20. Artifacts Quantification of Metal Implants in MRI

    Science.gov (United States)

    Vrachnis, I. N.; Vlachopoulos, G. F.; Maris, T. G.; Costaridou, L. I.

    2017-11-01

    The presence of materials with different magnetic properties, such as metal implants, causes local distortion of the magnetic field, resulting in signal voids and pile-ups, i.e. susceptibility artifacts, in MRI. Quantitative and unbiased measurement of the artifact is a prerequisite for optimization of acquisition parameters. In this study an image gradient based segmentation method is proposed for susceptibility artifact quantification. The method captures abrupt signal alterations by calculation of the image gradient. The artifact is then quantified in terms of its extent, as an image area percentage, by an automated cross entropy thresholding method. The proposed method for artifact quantification was tested in phantoms containing two orthopedic implants with significantly different magnetic permeabilities. The method was compared against a method proposed in the literature, considered as a reference, demonstrating moderate to good correlation (Spearman's rho = 0.62 and 0.802 in the case of titanium and stainless steel implants, respectively). The automated character of the proposed quantification method seems promising towards MRI acquisition parameter optimization.
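
    A minimal sketch of this kind of gradient-plus-thresholding quantification is given below (Python); it is not the authors' exact pipeline, and the use of scikit-image's minimum cross-entropy threshold and the synthetic test image are assumptions made for the example.

        import numpy as np
        from skimage.filters import threshold_li  # minimum cross-entropy threshold

        def artifact_area_percentage(image):
            """Flag pixels with abrupt signal change via the image gradient,
            threshold the gradient magnitude by cross entropy, and report the
            flagged fraction as a percentage of the image area."""
            gy, gx = np.gradient(image.astype(float))
            grad_mag = np.hypot(gy, gx)
            mask = grad_mag > threshold_li(grad_mag)
            return 100.0 * mask.sum() / mask.size

        # Synthetic image with a rectangular signal void standing in for an implant artifact
        img = np.ones((128, 128))
        img[40:60, 50:80] = 0.0
        print(round(artifact_area_percentage(img), 2), "% of image area")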

  1. Law and Order in Algorithmics

    NARCIS (Netherlands)

    Fokkinga, M.M.

    1992-01-01

    An algorithm is the input-output effect of a computer program; mathematically, the notion of algorithm comes close to the notion of function. Just as arithmetic is the theory and practice of calculating with numbers, so is ALGORITHMICS the theory and practice of calculating with algorithms. Just as

  2. A cluster algorithm for graphs

    NARCIS (Netherlands)

    S. van Dongen

    2000-01-01

    A cluster algorithm for graphs, called the Markov Cluster algorithm (MCL algorithm), is introduced. The algorithm provides basically an interface to an algebraic process defined on stochastic matrices, called the MCL process. The graphs may be both weighted (with nonnegative weight)

  3. Algorithms for Reinforcement Learning

    CERN Document Server

    Szepesvari, Csaba

    2010-01-01

    Reinforcement learning is a learning paradigm concerned with learning to control a system so as to maximize a numerical performance measure that expresses a long-term objective. What distinguishes reinforcement learning from supervised learning is that only partial feedback is given to the learner about the learner's predictions. Further, the predictions may have long term effects through influencing the future state of the controlled system. Thus, time plays a special role. The goal in reinforcement learning is to develop efficient learning algorithms, as well as to understand the algorithms'

  4. Animation of planning algorithms

    OpenAIRE

    Sun, Fan

    2014-01-01

    Planning is the process of creating a sequence of steps/actions that will satisfy a goal of a problem. The partial order planning (POP) algorithm is one Artificial Intelligence approach to problem planning. While studying the G52PAS module, I found that it is difficult for students to understand this planning algorithm just by reading its pseudocode and doing some written exercises. Students cannot see clearly how each step actually works and might miss some steps because of their confusion. ...

  5. Secondary Vertex Finder Algorithm

    CERN Document Server

    Heer, Sebastian; The ATLAS collaboration

    2017-01-01

    If a jet originates from a b-quark, a b-hadron is formed during the fragmentation process. In its dominant decay modes, the b-hadron decays into a c-hadron via the electroweak interaction. Both b- and c-hadrons have lifetimes long enough to travel a few millimetres before decaying. Thus displaced vertices from b- and subsequent c-hadron decays provide a strong signature for a b-jet. Reconstructing these secondary vertices (SV) and their properties is the aim of this algorithm. The performance of this algorithm is studied with tt̄ events, requiring at least one lepton, simulated at 13 TeV.

  6. Randomized Filtering Algorithms

    DEFF Research Database (Denmark)

    Katriel, Irit; Van Hentenryck, Pascal

    2008-01-01

    of AllDifferent and its generalization, the Global Cardinality Constraint. The first delayed filtering scheme is a Monte Carlo algorithm: its running time is superior, in the worst case, to that of enforcing arc consistency after every domain event, while its filtering effectiveness is analyzed in the expected sense. The second scheme is a Las Vegas algorithm using filtering triggers: its effectiveness is the same as enforcing arc consistency after every domain event, while in the expected case it is faster by a factor of m/n, where n and m are, respectively, the number of nodes and edges...

  7. UTV Expansion Pack: Special-Purpose Rank-Revealing Algorithms

    DEFF Research Database (Denmark)

    Fierro, Ricardo D.; Hansen, Per Christian

    2005-01-01

    This collection of Matlab 7.0 software supplements and complements the package UTV Tools from 1999, and includes implementations of special-purpose rank-revealing algorithms developed since the publication of the original package. We provide algorithms for computing and modifying symmetric rank-revealing VSV decompositions, we expand the algorithms for the ULLV decomposition of a matrix pair to handle interference-type problems with a rank-deficient covariance matrix, and we provide a robust and reliable Lanczos algorithm which - despite its simplicity - is able to capture all the dominant singular values of a sparse or structured matrix. These new algorithms have applications in signal processing, optimization and LSI information retrieval.

  8. An Ordering Linear Unification Algorithm

    Institute of Scientific and Technical Information of China (English)

    胡运发

    1989-01-01

    In this paper, we present an ordering linear unification algorithm (OLU). A new idea on substitution of the binding terms is introduced to the algorithm, which is able to overcome some drawbacks of other algorithms, e.g., the MM algorithm [1] and the RG1 and RG2 algorithms [2]. In particular, if we use directed cyclic graphs, the algorithm need not check the binding order; the OLU algorithm can then also be applied to the infinite tree data structure, and a higher efficiency can be expected. The paper focuses on the discussion of the OLU algorithm and a partial order structure with respect to the unification algorithm. The algorithm has been implemented in the GKD-PROLOG/VAX 780 interpreting system. Experimental results have shown that the algorithm is very simple and efficient.
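
    For readers unfamiliar with what a unification algorithm computes, the short textbook sketch below (Python, not the OLU algorithm itself and without the occurs check) finds a most general substitution that makes two first-order terms equal; the term representation is an assumption made for the example.

        # Variables are strings starting with '?'; compound terms are
        # (functor, arg1, arg2, ...) tuples; anything else is a constant.

        def is_var(t):
            return isinstance(t, str) and t.startswith("?")

        def substitute(t, subst):
            """Apply a substitution to a term, following chains of bindings."""
            if is_var(t):
                return substitute(subst[t], subst) if t in subst else t
            if isinstance(t, tuple):
                return (t[0],) + tuple(substitute(a, subst) for a in t[1:])
            return t

        def unify(a, b, subst=None):
            """Return a unifying substitution, or None if the terms clash."""
            subst = dict(subst or {})
            a, b = substitute(a, subst), substitute(b, subst)
            if a == b:
                return subst
            if is_var(a):
                subst[a] = b
                return subst
            if is_var(b):
                subst[b] = a
                return subst
            if isinstance(a, tuple) and isinstance(b, tuple) \
                    and a[0] == b[0] and len(a) == len(b):
                for x, y in zip(a[1:], b[1:]):
                    subst = unify(x, y, subst)
                    if subst is None:
                        return None
                return subst
            return None

        # f(?x, g(?y)) unified with f(a, g(b)) yields {'?x': 'a', '?y': 'b'}
        print(unify(("f", "?x", ("g", "?y")), ("f", "a", ("g", "b"))))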

  9. Complex fluids modeling and algorithms

    CERN Document Server

    Saramito, Pierre

    2016-01-01

    This book presents a comprehensive overview of the modeling of complex fluids, including many common substances, such as toothpaste, hair gel, mayonnaise, liquid foam, cement and blood, which cannot be described by Navier-Stokes equations. It also offers an up-to-date mathematical and numerical analysis of the corresponding equations, as well as several practical numerical algorithms and software solutions for the approximation of the solutions. It discusses industrial (molten plastics, forming process), geophysical (mud flows, volcanic lava, glaciers and snow avalanches), and biological (blood flows, tissues) modeling applications. This book is a valuable resource for undergraduate students and researchers in applied mathematics, mechanical engineering and physics.

  10. Ultra-Scalable Algorithms for Large-Scale Uncertainty Quantification in Inverse Wave Propagation

    Science.gov (United States)

    2016-03-04

    ...gradient), as well as linear systems with Hessian operators that arise in the trace estimation (along with incremental forward/adjoint wave equations) ... with the Elemental library [54] to enable fast and scalable randomized linear algebra. We have also been working on domain decomposition ... discontinuous Petrov-Galerkin method, in Recent Developments in Discontinuous Galerkin Finite Element Methods for Partial Differential Equations: 2012

  11. New Parametric Imaging Algorithm for Quantification of Binding Parameter in non-reversible compartment model: MLAIR

    International Nuclear Information System (INIS)

    Kim, Su Jin; Lee, Jae Sung; Kim, Yu Kyeong; Lee, Dong Soo

    2007-01-01

    Parametric imaging allows analysis of the entire brain or body image. Graphical approaches are commonly employed to generate parametric images through linear or multilinear regression. However, such linear regression methods have limited accuracy due to bias at high noise levels. Several methods have been proposed to reduce the bias of linear regression estimation, especially in reversible models. In this study, we focus on generating parametric images of the net accumulation rate (Ki), which is related to the binding parameter in brain receptor studies, in an irreversible compartment model using multiple linear analysis. The reliability of the newly developed multiple linear analysis method (MLAIR) was assessed through Monte Carlo simulation, and we applied it to [11C]MeNTI PET for opioid receptor
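
    The MLAIR method itself is not reproduced here, but the related Patlak graphical analysis below (Python) illustrates how a net accumulation rate Ki can be estimated for an irreversible compartment model by a simple linear fit; the synthetic curves, the t* cut-off and the parameter values are assumptions for the example.

        import numpy as np

        def patlak_ki(t, ct, cp, t_star=20.0):
            """Estimate Ki from CT(t)/Cp(t) = Ki * (integral of Cp from 0 to t)/Cp(t) + V0
            for t > t*, using a straight-line fit to the late points."""
            cum_cp = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
            x, y = cum_cp / cp, ct / cp
            late = t >= t_star
            ki, v0 = np.polyfit(x[late], y[late], 1)
            return ki, v0

        # Synthetic data: plasma input Cp and an irreversible tissue curve with Ki = 0.05/min
        t = np.linspace(0.1, 60.0, 120)                    # minutes
        cp = 10.0 * np.exp(-0.1 * t) + 1.0                 # assumed plasma curve
        cum = np.concatenate(([0.0], np.cumsum(0.5 * (cp[1:] + cp[:-1]) * np.diff(t))))
        ct = 0.05 * cum + 0.3 * cp                         # Ki * integral(Cp) + V0 * Cp
        print(patlak_ki(t, ct, cp))                        # approximately (0.05, 0.3)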

  12. Adaptive multiscale MCMC algorithm for uncertainty quantification in seismic parameter estimation

    KAUST Repository

    Tan, Xiaosi; Gibson, Richard L.; Leung, Wing Tat; Efendiev, Yalchin R.

    2014-01-01

    problem. In this paper, we consider Bayesian inversion for the parameter estimation in seismic wave propagation. The Bayes' theorem allows writing the posterior distribution via the likelihood function and the prior distribution where the latter represents

  13. Dicotyledon Weed Quantification Algorithm for Selective Herbicide Application in Maize Crops

    DEFF Research Database (Denmark)

    Laursen, Morten Stigaard; Nyholm Jørgensen, Rasmus; Midtiby, Henrik Skov

    2016-01-01

    The stricter legislation within the European Union for the regulation of herbicides that are prone to leaching causes a greater economic burden on the agricultural industry through taxation. Owing to the increased economic burden, research in reducing herbicide usage has been prompted. High-resol...

  14. Routing algorithms in networks-on-chip

    CERN Document Server

    Daneshtalab, Masoud

    2014-01-01

    This book provides a single-source reference to routing algorithms for Networks-on-Chip (NoCs), as well as in-depth discussions of advanced solutions applied to current and next generation, many core NoC-based Systems-on-Chip (SoCs). After a basic introduction to the NoC design paradigm and architectures, routing algorithms for NoC architectures are presented and discussed at all abstraction levels, from the algorithmic level to actual implementation.  Coverage emphasizes the role played by the routing algorithm and is organized around key problems affecting current and next generation, many-core SoCs. A selection of routing algorithms is included, specifically designed to address key issues faced by designers in the ultra-deep sub-micron (UDSM) era, including performance improvement, power, energy, and thermal issues, fault tolerance and reliability.   ·         Provides a comprehensive overview of routing algorithms for Networks-on-Chip and NoC-based, manycore systems; ·         Describe...

  15. FPGA helix tracking algorithm for PANDA

    Energy Technology Data Exchange (ETDEWEB)

    Liang, Yutie; Galuska, Martin; Gessler, Thomas; Kuehn, Wolfgang; Lange, Jens Soeren; Muenchow, David [II. Physikalisches Institut, University of Giessen (Germany); Ye, Hua [Institute of High Energy Physics, CAS (China); Collaboration: PANDA-Collaboration

    2016-07-01

    The PANDA detector is a general-purpose detector for physics with high-luminosity cooled antiproton beams, planned to operate at the FAIR facility in Darmstadt, Germany. The central detector includes a silicon Micro Vertex Detector (MVD) and a Straw Tube Tracker (STT). Without any hardware trigger, large amounts of raw data stream into the data acquisition system. The data reduction task is performed in the online system by reconstruction algorithms programmed on FPGAs (Field Programmable Gate Arrays) as a first level and on a farm of GPUs or PCs as a second level. One important part of the system is the online track reconstruction. In this presentation, an online algorithm for helix track reconstruction in the solenoidal field is shown. The VHDL-based algorithm is tested with different types of events at different event rates. Furthermore, a study of T0 extraction from the tracking algorithm is performed. A concept of simultaneous tracking and T0 determination is presented.

  16. Algorithmic Finance and (Limits to) Governmentality

    DEFF Research Database (Denmark)

    Borch, Christian

    2017-01-01

    In this essay I discuss algorithmic finance, specifically the use of fully automated trading, including high-frequency trading, in the light of Michel Foucault's notion of governmentality. I argue that a governmentality perspective offers a fruitful way of understanding particular aspects of high-frequency trading, such as how algorithms are designed to govern other market participants' anticipations of market dynamics. However, I also argue that, to fully understand the realm of algorithmic finance and high-frequency trading, it is important to supplement a governmentality approach with an analytical lexicon which is not primarily centred on productive forms of power. Specifically, I suggest that, according to media discourses on high-frequency trading, algorithmic finance often works in ways that are better grasped through, e.g. Elias Canetti's work on predatory power and Roger Caillois's work...

  17. Parallelization of a blind deconvolution algorithm

    Science.gov (United States)

    Matson, Charles L.; Borelli, Kathy J.

    2006-09-01

    Often it is of interest to deblur imagery in order to obtain higher-resolution images. Deblurring requires knowledge of the blurring function - information that is often not available separately from the blurred imagery. Blind deconvolution algorithms overcome this problem by jointly estimating both the high-resolution image and the blurring function from the blurred imagery. Because blind deconvolution algorithms are iterative in nature, they can take minutes to days to deblur an image, depending on how many frames of data are used for the deblurring and the platforms on which the algorithms are executed. Here we present our progress in parallelizing a blind deconvolution algorithm to increase its execution speed. This progress includes sub-frame parallelization and a code structure that is not specialized to a specific computer hardware architecture.

  18. Algorithmic Finance and (Limits to) Governmentality

    DEFF Research Database (Denmark)

    Borch, Christian

    2017-01-01

    In this essay I discuss algorithmic finance, specifically the use of fully automated trading, including high-frequency trading, in the light of Michel Foucault's notion of governmentality. I argue that a governmentality perspective offers a fruitful way of understanding particular aspects of high-frequency trading, such as how algorithms are designed to govern other market participants' anticipations of market dynamics. However, I also argue that, to fully understand the realm of algorithmic finance and high-frequency trading, it is important to supplement a governmentality approach with an analytical lexicon which is not primarily centred on productive forms of power. Specifically, I suggest that, according to media discourses on high-frequency trading, algorithmic finance often works in ways that are better grasped through, e.g. Elias Canetti's work on predatory power and Roger Caillois's work...

  19. Star point centroid algorithm based on background forecast

    Science.gov (United States)

    Wang, Jin; Zhao, Rujin; Zhu, Nan

    2014-09-01

    The calculation of the star point centroid is a key step in reducing star tracker measurement error. A star map photographed by an APS detector contains several noise sources which have a great impact on the accuracy of the star point centroid calculation. Through analysis of the characteristics of star map noise, an algorithm for calculating the star point centroid based on background forecast is presented in this paper. The experiment proves the validity of the algorithm. Compared with the classic algorithm, this algorithm not only improves the accuracy of the star point centroid calculation, but also does not need calibration data memory. This algorithm has been applied successfully in a certain star tracker.
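
    A bare-bones background-subtracted centroid, shown below in Python, conveys the core computation; it uses a simple median background estimate rather than the paper's background-forecast scheme, and the synthetic Gaussian spot is an assumption for the example.

        import numpy as np

        def star_centroid(window, background=None):
            """Intensity-weighted centroid of a star spot in a small image window.
            The local background (here the window median if not supplied) is
            subtracted first so that noise pixels contribute little."""
            window = window.astype(float)
            if background is None:
                background = np.median(window)
            signal = np.clip(window - background, 0.0, None)
            total = signal.sum()
            if total == 0.0:
                raise ValueError("no signal above background")
            ys, xs = np.indices(window.shape)
            return (ys * signal).sum() / total, (xs * signal).sum() / total

        # Synthetic Gaussian spot centred near (7.3, 9.6) on a constant background
        ys, xs = np.indices((16, 16))
        spot = 100.0 * np.exp(-((ys - 7.3) ** 2 + (xs - 9.6) ** 2) / 4.0) + 5.0
        print(star_centroid(spot))  # close to (7.3, 9.6)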

  20. Quantification of lung fibrosis and emphysema in mice using automated micro-computed tomography.

    Directory of Open Access Journals (Sweden)

    Ellen De Langhe

    Full Text Available BACKGROUND: In vivo high-resolution micro-computed tomography allows for longitudinal image-based measurements in animal models of lung disease. The combination of repetitive high-resolution imaging with fully automated quantitative image analysis in mouse models of lung fibrosis benefits preclinical research. This study aimed to develop and validate such an automated micro-computed tomography analysis algorithm for quantification of aerated lung volume in mice, an indicator of pulmonary fibrosis and emphysema severity. METHODOLOGY: Mice received an intratracheal instillation of bleomycin (n = 8), elastase (0.25 U elastase, n = 9; 0.5 U elastase, n = 8) or saline control (n = 6 for fibrosis, n = 5 for emphysema). A subset of mice was scanned without intervention, to evaluate potential radiation-induced toxicity (n = 4). Some bleomycin-instilled mice were treated with imatinib for proof of concept (n = 8). Mice were scanned weekly, until four weeks after induction, when they underwent pulmonary function testing, lung histology and collagen quantification. Aerated lung volumes were calculated with our automated algorithm. PRINCIPAL FINDINGS: Our automated image-based aerated lung volume quantification method is reproducible with low intra-subject variability. Bleomycin-treated mice had significantly lower scan-derived aerated lung volumes compared to controls. Aerated lung volume correlated with the histopathological fibrosis score and total lung collagen content. Inversely, a dose-dependent increase in lung volume was observed in elastase-treated mice. Serial scanning of individual mice is feasible and visualized dynamic disease progression. No radiation-induced toxicity was observed. Three-dimensional images provided critical topographical information. CONCLUSIONS: We report on a high-resolution in vivo micro-computed tomography image analysis algorithm that runs fully automated and allows quantification of aerated lung volume in mice. This
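
    The abstract does not spell out the segmentation itself, but a common way to obtain an aerated lung volume from CT data is to count voxels whose attenuation falls within an "aerated" window inside a lung mask; the sketch below (Python) shows that idea, with the HU window, voxel size and synthetic data all being assumptions rather than the authors' algorithm.

        import numpy as np

        def aerated_lung_volume_mm3(ct_hu, lung_mask, voxel_volume_mm3,
                                    lower_hu=-900, upper_hu=-200):
            """Count lung-mask voxels within an assumed aerated HU window and
            multiply by the voxel volume."""
            aerated = (ct_hu >= lower_hu) & (ct_hu <= upper_hu) & lung_mask
            return aerated.sum() * voxel_volume_mm3

        # Synthetic 64^3 volume with 50 micron isotropic voxels (0.05^3 mm3 each)
        rng = np.random.default_rng(0)
        ct = rng.normal(-500.0, 150.0, size=(64, 64, 64))   # fake HU values
        mask = np.ones_like(ct, dtype=bool)                  # fake lung mask
        print(round(aerated_lung_volume_mm3(ct, mask, 0.05 ** 3), 2), "mm3")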

  1. A propositional CONEstrip algorithm

    NARCIS (Netherlands)

    E. Quaeghebeur (Erik); A. Laurent; O. Strauss; B. Bouchon-Meunier; R.R. Yager (Ronald)

    2014-01-01

    We present a variant of the CONEstrip algorithm for checking whether the origin lies in a finitely generated convex cone that can be open, closed, or neither. This variant is designed to deal efficiently with problems where the rays defining the cone are specified as linear combinations

  2. Modular Regularization Algorithms

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2004-01-01

    The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into indepen...

  3. Efficient graph algorithms

    Indian Academy of Sciences (India)

    Shortest path problems: given a road network of cities, we want to navigate between cities. ... The rest of the talk: computing connectivities between all pairs of vertices, with a good algorithm with respect to both space and time to compute the exact solution. ...

  4. de Casteljau's Algorithm Revisited

    DEFF Research Database (Denmark)

    Gravesen, Jens

    1998-01-01

    It is demonstrated how all the basic properties of Bezier curves can be derived swiftly and efficiently without any reference to the Bernstein polynomials and essentially with only geometric arguments. This is achieved by viewing one step in de Casteljau's algorithm as an operator (the de Casteljau...
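
    Since the operator view is easiest to see in code, a minimal Python version of de Casteljau's algorithm follows; the quadratic example curve is chosen only for illustration.

        def de_casteljau(control_points, t):
            """Evaluate a Bezier curve at parameter t by repeated linear interpolation:
            each step replaces n control points with n-1 convex combinations."""
            points = [tuple(map(float, p)) for p in control_points]
            while len(points) > 1:
                points = [tuple((1.0 - t) * a + t * b for a, b in zip(p, q))
                          for p, q in zip(points[:-1], points[1:])]
            return points[0]

        # Quadratic Bezier with control points (0,0), (1,2), (2,0): its value at t=0.5 is (1.0, 1.0)
        print(de_casteljau([(0, 0), (1, 2), (2, 0)], 0.5))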

  5. General Algorithm (High level)

    Indian Academy of Sciences (India)

    General Algorithm (High level). Iteratively: use the tightness property to remove points of P1,...,Pi; use random sampling to get a random sample (of enough points) from the next largest cluster, Pi+1; use the random sampling procedure to approximate ci+1 using the ...

  6. Comprehensive eye evaluation algorithm

    Science.gov (United States)

    Agurto, C.; Nemeth, S.; Zamora, G.; Vahtel, M.; Soliz, P.; Barriga, S.

    2016-03-01

    In recent years, several research groups have developed automatic algorithms to detect diabetic retinopathy (DR) in individuals with diabetes (DM), using digital retinal images. Studies have indicated that diabetics have 1.5 times the annual risk of developing primary open angle glaucoma (POAG) as do people without DM. Moreover, DM patients have 1.8 times the risk for age-related macular degeneration (AMD). Although numerous investigators are developing automatic DR detection algorithms, there have been few successful efforts to create an automatic algorithm that can detect other ocular diseases, such as POAG and AMD. Consequently, our aim in the current study was to develop a comprehensive eye evaluation algorithm that not only detects DR in retinal images, but also automatically identifies glaucoma suspects and AMD by integrating other personal medical information with the retinal features. The proposed system is fully automatic and provides the likelihood of each of the three eye diseases. The system was evaluated on two datasets of 104 and 88 diabetic cases. For each eye, we used two non-mydriatic digital color fundus photographs (macula and optic disc centered) and, when available, information about age, duration of diabetes, cataracts, hypertension, gender, and laboratory data. Our results show that the combination of multimodal features can increase the AUC by up to 5%, 7%, and 8% in the detection of AMD, DR, and glaucoma respectively. Marked improvement was achieved when laboratory results were combined with retinal image features.

  7. Enhanced sampling algorithms.

    Science.gov (United States)

    Mitsutake, Ayori; Mori, Yoshiharu; Okamoto, Yuko

    2013-01-01

    In biomolecular systems (especially all-atom models) with many degrees of freedom such as proteins and nucleic acids, there exist an astronomically large number of local-minimum-energy states. Conventional simulations in the canonical ensemble are of little use, because they tend to get trapped in states of these energy local minima. Enhanced conformational sampling techniques are thus in great demand. A simulation in generalized ensemble performs a random walk in potential energy space and can overcome this difficulty. From only one simulation run, one can obtain canonical-ensemble averages of physical quantities as functions of temperature by the single-histogram and/or multiple-histogram reweighting techniques. In this article we review uses of the generalized-ensemble algorithms in biomolecular systems. Three well-known methods, namely, multicanonical algorithm, simulated tempering, and replica-exchange method, are described first. Both Monte Carlo and molecular dynamics versions of the algorithms are given. We then present various extensions of these three generalized-ensemble algorithms. The effectiveness of the methods is tested with short peptide and protein systems.
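
    As a concrete fragment of one of the methods mentioned, the snippet below (Python) implements the standard temperature-exchange acceptance test of the replica-exchange method; the example energies and inverse temperatures are arbitrary.

        import math
        import random

        def attempt_replica_swap(beta_i, energy_i, beta_j, energy_j):
            """Metropolis criterion for exchanging configurations between replicas i and j:
            accept with probability min(1, exp[(beta_i - beta_j) * (energy_i - energy_j)])."""
            delta = (beta_i - beta_j) * (energy_i - energy_j)
            return delta >= 0.0 or random.random() < math.exp(delta)

        # A cold replica (beta=1.0) holding a high-energy state always swaps with a
        # hot replica (beta=0.5) holding a low-energy state:
        print(attempt_replica_swap(beta_i=1.0, energy_i=-80.0, beta_j=0.5, energy_j=-120.0))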

  8. Algorithm Theory - SWAT 2006

    DEFF Research Database (Denmark)

    This book constitutes the refereed proceedings of the 10th Scandinavian Workshop on Algorithm Theory, SWAT 2006, held in Riga, Latvia, in July 2006. The 36 revised full papers presented together with 3 invited papers were carefully reviewed and selected from 154 submissions. The papers address all...

  9. Optimal Quadratic Programming Algorithms

    CERN Document Server

    Dostal, Zdenek

    2009-01-01

    Quadratic programming (QP) is one technique that allows for the optimization of a quadratic function in several variables in the presence of linear constraints. This title presents various algorithms for solving large QP problems. It is suitable as an introductory text on quadratic programming for graduate students and researchers

  10. Quantification of surgical blood loss.

    Science.gov (United States)

    Lee, Marcel H; Ingvertsen, Britt T; Kirpensteijn, Jolle; Jensen, Asger L; Kristensen, Annemarie T

    2006-06-01

    To compare gravimetric and colorimetric methods of quantifying surgical blood loss, and to determine if there is a correlation between a preoperative hemostatic test (buccal mucosa bleeding time [BMBT]) and intraoperative blood loss. Prospective clinical study. Dogs (n=15) admitted for cutaneous tumor excision, orthopedic procedure, or exploratory laparotomy. Intraoperative blood loss was quantified by measuring irrigation fluid and weighing surgical sponges used for blood and fluid collection during surgery. Results of gravimetric measurements were then correlated to blood loss quantified using spectrophotometric analysis of hemoglobin (Hb) content. Hemostatic variables including BMBT were measured before surgery and compared with the calculated amount of blood loss. Blood loss quantified by gravimetric measurement showed a significant correlation with colorimetric determination of Hb content in surgical sponges and collected irrigation fluid (r=0.93, P<.0001). BMBT correlated weakly but significantly with intraoperative blood loss (r=0.56, P<.05). Quantifying intraoperative blood loss using spectrophotometric Hb analysis accurately assessed the amount of blood loss; however, it is a time-consuming procedure, primarily applicable as a research tool. Gravimetric evaluation of intraoperative blood loss was found to be an accurate method, which can be recommended for use in a clinical setting. Estimation of blood loss using a gravimetric method is accurate and applicable in the clinical setting and provides surgeons with a simple and objective tool to evaluate intraoperative blood loss.
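
    The gravimetric bookkeeping behind such measurements is simple enough to show directly; the sketch below (Python) converts sponge weight gain to volume using an assumed whole-blood density of about 1.06 g/mL and corrects the suction volume for irrigation fluid, with all numbers in the example being hypothetical.

        def gravimetric_blood_loss_ml(wet_sponge_g, dry_sponge_g,
                                      suction_ml=0.0, irrigation_ml=0.0,
                                      blood_density_g_per_ml=1.06):
            """Estimate blood loss from sponge weight gain plus the suction canister
            volume corrected for irrigation fluid (density is an assumed constant)."""
            sponge_blood_ml = (wet_sponge_g - dry_sponge_g) / blood_density_g_per_ml
            return sponge_blood_ml + max(suction_ml - irrigation_ml, 0.0)

        # Sponges weigh 212 g used vs. 140 g dry; 250 mL suctioned; 180 mL irrigation used
        print(round(gravimetric_blood_loss_ml(212.0, 140.0, 250.0, 180.0)), "mL")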

  11. Online updating and uncertainty quantification using nonstationary output-only measurement

    Science.gov (United States)

    Yuen, Ka-Veng; Kuok, Sin-Chi

    2016-01-01

    Extended Kalman filter (EKF) is widely adopted for state estimation and parametric identification of dynamical systems. In this algorithm, it is required to specify the covariance matrices of the process noise and measurement noise based on prior knowledge. However, improper assignment of these noise covariance matrices leads to unreliable estimation and misleading uncertainty estimation on the system state and model parameters. Furthermore, it may induce diverging estimation. To resolve these problems, we propose a Bayesian probabilistic algorithm for online estimation of the noise parameters which are used to characterize the noise covariance matrices. There are three major appealing features of the proposed approach. First, it resolves the divergence problem in the conventional usage of EKF due to improper choice of the noise covariance matrices. Second, the proposed approach ensures the reliability of the uncertainty quantification. Finally, since the noise parameters are allowed to be time-varying, nonstationary process noise and/or measurement noise are explicitly taken into account. Examples using stationary/nonstationary response of linear/nonlinear time-varying dynamical systems are presented to demonstrate the efficacy of the proposed approach. Furthermore, comparison with the conventional usage of EKF will be provided to reveal the necessity of the proposed approach for reliable model updating and uncertainty quantification.
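
    For context, the sketch below (Python/NumPy) shows one predict/update cycle of the standard EKF that the abstract builds on; the noise covariances Q and R are exactly the quantities that the proposed Bayesian scheme would estimate online rather than fixing a priori, and the toy range-sensor model is an assumption for the example.

        import numpy as np

        def ekf_step(x, P, z, f, F, h, H, Q, R):
            """One predict/update cycle of the standard extended Kalman filter.
            f, h are the nonlinear state-transition and measurement functions;
            F, H return their Jacobians; Q, R are the noise covariances."""
            x_pred = f(x)                       # predict
            F_k = F(x)
            P_pred = F_k @ P @ F_k.T + Q
            H_k = H(x_pred)                     # update
            S = H_k @ P_pred @ H_k.T + R
            K = P_pred @ H_k.T @ np.linalg.inv(S)
            x_new = x_pred + K @ (z - h(x_pred))
            P_new = (np.eye(len(x)) - K @ H_k) @ P_pred
            return x_new, P_new

        # Toy example: constant-velocity state observed through a nonlinear range sensor
        dt = 0.1
        f = lambda x: np.array([x[0] + dt * x[1], x[1]])
        F = lambda x: np.array([[1.0, dt], [0.0, 1.0]])
        h = lambda x: np.array([np.sqrt(1.0 + x[0] ** 2)])
        H = lambda x: np.array([[x[0] / np.sqrt(1.0 + x[0] ** 2), 0.0]])
        x, P = np.array([0.0, 1.0]), np.eye(2)
        x, P = ekf_step(x, P, np.array([1.05]), f, F, h, H, 0.01 * np.eye(2), np.array([[0.04]]))
        print(x)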

  12. (including travel dates) Proposed itinerary

    Indian Academy of Sciences (India)

    Ashok

    31 July to 22 August 2012 (including travel dates). Proposed itinerary: Arrival in Bangalore on 1 August. 1-5 August: Bangalore, Karnataka. Suggested institutions: Indian Institute of Science, Bangalore. St Johns Medical College & Hospital, Bangalore. Jawaharlal Nehru Centre, Bangalore. 6-8 August: Chennai, TN.

  13. Python algorithms mastering basic algorithms in the Python language

    CERN Document Server

    Hetland, Magnus Lie

    2014-01-01

    Python Algorithms, Second Edition explains the Python approach to algorithm analysis and design. Written by Magnus Lie Hetland, author of Beginning Python, this book is sharply focused on classical algorithms, but it also gives a solid understanding of fundamental algorithmic problem-solving techniques. The book deals with some of the most important and challenging areas of programming and computer science in a highly readable manner. It covers both algorithmic theory and programming practice, demonstrating how theory is reflected in real Python programs. Well-known algorithms and data struc

  14. Quantification of regional fat volume in rat MRI

    Science.gov (United States)

    Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren

    2003-05-01

    Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. The quality of automatic segmentation has been

  15. Collaborative framework for PIV uncertainty quantification: the experimental database

    International Nuclear Information System (INIS)

    Neal, Douglas R; Sciacchitano, Andrea; Scarano, Fulvio; Smith, Barton L

    2015-01-01

    The uncertainty quantification of particle image velocimetry (PIV) measurements has recently become a topic of great interest as shown by the recent appearance of several different methods within the past few years. These approaches have different working principles, merits and limitations, which have been speculated upon in subsequent studies. This paper reports a unique experiment that has been performed specifically to test the efficacy of PIV uncertainty methods. The case of a rectangular jet, as previously studied by Timmins et al (2012) and Wilson and Smith (2013b), is used. The novel aspect of the experiment is simultaneous velocity measurements using two different time-resolved PIV systems and a hot-wire anemometry (HWA) system. The first PIV system, called the PIV measurement system (‘PIV-MS’), is intended for nominal measurements of which the uncertainty is to be evaluated. It is based on a single camera and features a dynamic velocity range (DVR) representative of typical PIV experiments. The second PIV system, called the ‘PIV-HDR’ (high dynamic range) system, features a significantly higher DVR obtained with a higher digital imaging resolution. The hot-wire is placed in close proximity to the PIV measurement domain. The three measurement systems were carefully set to simultaneously measure the flow velocity at the same time and location. The comparison between the PIV-HDR system and the HWA provides an estimate of the measurement precision of the reference velocity for evaluation of the instantaneous error in the measurement system. The discrepancy between the PIV-MS and the reference data provides the measurement error, which is later used to assess the different uncertainty quantification methods proposed in the literature. A detailed comparison of the uncertainty estimation methods based on the present datasets is presented in a second paper from Sciacchitano et al (2015). Furthermore, this database offers the potential to be used for

  16. Theory including future not excluded

    DEFF Research Database (Denmark)

    Nagao, K.; Nielsen, H.B.

    2013-01-01

    We study a complex action theory (CAT) whose path runs over not only the past but also the future. We show that, if we regard a matrix element defined in terms of the future state at time T and the past state at time TA as an expectation value in the CAT, then we are allowed to have the Heisenberg equation, Ehrenfest's theorem, and the conserved probability current density. In addition, we show that the expectation value at the present time t of a future-included theory for large T - t and large t - TA corresponds to that of a future-not-included theory with a proper inner product for large t - TA. Hence, the CAT

  17. The Algorithm for Algorithms: An Evolutionary Algorithm Based on Automatic Designing of Genetic Operators

    Directory of Open Access Journals (Sweden)

    Dazhi Jiang

    2015-01-01

    Full Text Available At present there is a wide range of evolutionary algorithms available to researchers and practitioners. Despite the great diversity of these algorithms, virtually all of them share one feature: they have been manually designed. A fundamental question is "are there any algorithms that can design evolutionary algorithms automatically?" A more complete definition of the question is "can a computer construct an algorithm which will generate algorithms according to the requirements of a problem?" In this paper, a novel evolutionary algorithm based on automatic designing of genetic operators is presented to address these questions. The resulting algorithm not only explores solutions in the problem space, as most traditional evolutionary algorithms do, but also automatically generates genetic operators in the operator space. In order to verify the performance of the proposed algorithm, comprehensive experiments on 23 well-known benchmark optimization problems are conducted. The results show that the proposed algorithm can outperform the standard differential evolution algorithm in terms of convergence speed and solution accuracy, which shows that algorithms designed automatically by computers can compete with algorithms designed by human beings.
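
    To make the contrast concrete, the deliberately plain evolutionary algorithm below (Python) uses hand-designed operators (uniform crossover plus Gaussian mutation and truncation selection); it is not the paper's operator-generating method, and all parameter values are arbitrary.

        import random

        def minimal_ea(fitness, dim=10, pop_size=30, generations=200,
                       mutation_sigma=0.1, seed=1):
            """Minimise `fitness` with fixed, hand-designed genetic operators."""
            rng = random.Random(seed)
            pop = [[rng.uniform(-5.0, 5.0) for _ in range(dim)] for _ in range(pop_size)]
            for _ in range(generations):
                parents = sorted(pop, key=fitness)[: pop_size // 2]     # truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = rng.sample(parents, 2)
                    children.append([(x if rng.random() < 0.5 else y) + # uniform crossover
                                     rng.gauss(0.0, mutation_sigma)     # Gaussian mutation
                                     for x, y in zip(a, b)])
                pop = parents + children
            return min(pop, key=fitness)

        # Minimise the sphere function; the optimum is the all-zero vector
        best = minimal_ea(lambda x: sum(v * v for v in x))
        print(round(sum(v * v for v in best), 4))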

  18. Reactive Collision Avoidance Algorithm

    Science.gov (United States)

    Scharf, Daniel; Acikmese, Behcet; Ploen, Scott; Hadaegh, Fred

    2010-01-01

    The reactive collision avoidance (RCA) algorithm allows a spacecraft to find a fuel-optimal trajectory for avoiding an arbitrary number of colliding spacecraft in real time while accounting for acceleration limits. In addition to spacecraft, the technology can be used for vehicles that can accelerate in any direction, such as helicopters and submersibles. In contrast to existing, passive algorithms that simultaneously design trajectories for a cluster of vehicles working to achieve a common goal, RCA is implemented onboard spacecraft only when an imminent collision is detected, and then plans a collision avoidance maneuver for only that host vehicle, thus preventing a collision in an off-nominal situation for which passive algorithms cannot. An example scenario for such a situation might be when a spacecraft in the cluster is approaching another one, but enters safe mode and begins to drift. Functionally, the RCA detects colliding spacecraft, plans an evasion trajectory by solving the Evasion Trajectory Problem (ETP), and then recovers after the collision is avoided. A direct optimization approach was used to develop the algorithm so it can run in real time. In this innovation, a parameterized class of avoidance trajectories is specified, and then the optimal trajectory is found by searching over the parameters. The class of trajectories is selected as bang-off-bang as motivated by optimal control theory. That is, an avoiding spacecraft first applies full acceleration in a constant direction, then coasts, and finally applies full acceleration to stop. The parameter optimization problem can be solved offline and stored as a look-up table of values. Using a look-up table allows the algorithm to run in real time. Given a colliding spacecraft, the properties of the collision geometry serve as indices of the look-up table that gives the optimal trajectory. For multiple colliding spacecraft, the set of trajectories that avoid all spacecraft is rapidly searched on

  19. New management algorithms in multiple sclerosis

    DEFF Research Database (Denmark)

    Sorensen, Per Soelberg

    2014-01-01

    PURPOSE OF REVIEW: Our current treatment algorithms include only IFN-β and glatiramer as available first-line disease-modifying drugs and natalizumab and fingolimod as second-line therapies. Today, 10 drugs have been approved in Europe and nine in the United States, making the choice of therapy more complex. The purpose of the review has been to work out new management algorithms for treatment of relapsing-remitting multiple sclerosis including new oral therapies and therapeutic monoclonal antibodies. RECENT FINDINGS: Recent large placebo-controlled trials in relapsing-remitting multiple sclerosis...

  20. Algorithms for the Computation of Debris Risk

    Science.gov (United States)

    Matney, Mark J.

    2017-01-01

    Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry - including the physical geometry of orbits and the geometry of satellites. A number of tools have been developed in NASA's Orbital Debris Program Office to handle these calculations, many of which have never been published before. These include algorithms that are used in NASA's Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper presents an introduction to these algorithms and the assumptions upon which they are based.

  1. Algorithms for the Computation of Debris Risks

    Science.gov (United States)

    Matney, Mark

    2017-01-01

    Determining the risks from space debris involves a number of statistical calculations. These calculations inevitably involve assumptions about geometry - including the physical geometry of orbits and the geometry of non-spherical satellites. A number of tools have been developed in NASA's Orbital Debris Program Office to handle these calculations, many of which have never been published before. These include algorithms that are used in NASA's Orbital Debris Engineering Model ORDEM 3.0, as well as other tools useful for computing orbital collision rates and ground casualty risks. This paper will present an introduction to these algorithms and the assumptions upon which they are based.

  2. Quantification in single photon emission computed tomography (SPECT)

    International Nuclear Information System (INIS)

    Buvat, Irene

    2005-01-01

    The objective of this lecture is to understand the possibilities and limitations of the quantitative analysis of single photon emission computed tomography (SPECT) images. It is also to identify the conditions to be fulfilled to obtain reliable quantitative measurements from images. Content: 1 - Introduction: Quantification in emission tomography - definition and challenges; quantification biasing phenomena; 2 - quantification in SPECT, problems and correction methods: Attenuation, scattering, un-stationary spatial resolution, partial volume effect, movement, tomographic reconstruction, calibration; 3 - Synthesis: actual quantification accuracy; 4 - Beyond the activity concentration measurement

  3. Digital PCR for direct quantification of viruses without DNA extraction.

    Science.gov (United States)

    Pavšič, Jernej; Žel, Jana; Milavec, Mojca

    2016-01-01

    DNA extraction before amplification is considered an essential step for quantification of viral DNA using real-time PCR (qPCR). However, this can directly affect the final measurements due to variable DNA yields and removal of inhibitors, which leads to increased inter-laboratory variability of qPCR measurements and reduced agreement on viral loads. Digital PCR (dPCR) might be an advantageous methodology for the measurement of virus concentrations, as it does not depend on any calibration material and it has higher tolerance to inhibitors. DNA quantification without an extraction step (i.e. direct quantification) was performed here using dPCR and two different human cytomegalovirus whole-virus materials. Two dPCR platforms were used for this direct quantification of the viral DNA, and these were compared with quantification of the extracted viral DNA in terms of yield and variability. Direct quantification of both whole-virus materials present in simple matrices like cell lysate or Tris-HCl buffer provided repeatable measurements of virus concentrations that were probably in closer agreement with the actual viral load than when estimated through quantification of the extracted DNA. Direct dPCR quantification of other viruses, reference materials and clinically relevant matrices is now needed to show the full versatility of this very promising and cost-efficient development in virus quantification.
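
    The arithmetic that makes calibration-free quantification possible is the Poisson correction for partitions that receive more than one template copy; the sketch below (Python) shows that standard calculation, with the droplet counts and partition volume being example values only.

        import math

        def dpcr_copies_per_ul(positive_partitions, total_partitions, partition_volume_nl):
            """Poisson estimate used in digital PCR: with a fraction p of positive
            partitions, the mean copies per partition is lambda = -ln(1 - p), and the
            concentration is lambda divided by the partition volume."""
            p = positive_partitions / total_partitions
            lam = -math.log(1.0 - p)
            return lam / (partition_volume_nl * 1e-3)  # nL -> uL

        # Example: 4500 of 20000 droplets positive, 0.85 nL droplets -> ~300 copies/uL
        print(round(dpcr_copies_per_ul(4500, 20000, 0.85)))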

  4. Development of a VHH-Based Erythropoietin Quantification Assay

    DEFF Research Database (Denmark)

    Kol, Stefan; Beuchert Kallehauge, Thomas; Adema, Simon

    2015-01-01

    Erythropoietin (EPO) quantification during cell line selection and bioreactor cultivation has traditionally been performed with ELISA or HPLC. As these techniques suffer from several drawbacks, we developed a novel EPO quantification assay. A camelid single-domain antibody fragment directed against human EPO was evaluated as a capturing antibody in a label-free biolayer interferometry-based quantification assay. Human recombinant EPO can be specifically detected in Chinese hamster ovary cell supernatants in a sensitive and pH-dependent manner. This method enables rapid and robust quantification...

  5. Quantification procedures in micro X-ray fluorescence analysis

    International Nuclear Information System (INIS)

    Kanngiesser, Birgit

    2003-01-01

    For quantification in micro X-ray fluorescence analysis, standard-free quantification procedures have become especially important. An introduction to the basic concepts of these quantification procedures is given, followed by a short survey of the procedures which are available now and the kinds of experimental situations and analytical problems they address. The last point is extended by the description of our own development of the fundamental parameter method, which makes the inclusion of non-parallel beam geometries possible. Finally, open problems for the quantification procedures are discussed.

  6. Quantification of global myocardial function by cine MRI deformable registration-based analysis: Comparison with MR feature tracking and speckle-tracking echocardiography

    International Nuclear Information System (INIS)

    Lamacie, Mariana M.; Thavendiranathan, Paaladinesh; Hanneman, Kate; Greiser, Andreas; Jolly, Marie-Pierre; Ward, Richard; Wintersperger, Bernd J.

    2017-01-01

    To evaluate deformable registration algorithms (DRA)-based quantification of cine steady-state free-precession (SSFP) for myocardial strain assessment in comparison with feature-tracking (FT) and speckle-tracking echocardiography (STE). Data sets of 28 patients/10 volunteers, undergoing same-day 1.5T cardiac MRI and echocardiography were included. LV global longitudinal (GLS), circumferential (GCS) and radial (GRS) peak systolic strain were assessed on cine SSFP data using commercially available FT algorithms and prototype DRA-based algorithms. STE was applied as standard of reference for accuracy, precision and intra-/interobserver reproducibility testing. DRA showed narrower limits of agreement compared to STE for GLS (-4.0 [-0.9,-7.9]) and GCS (-5.1 [1.1,-11.2]) than FT (3.2 [11.2,-4.9]; 3.8 [13.9,-6.3], respectively). While both DRA and FT demonstrated significant differences to STE for GLS and GCS (all p<0.001), only DRA correlated significantly to STE for GLS (r=0.47; p=0.006). However, good correlation was demonstrated between MR techniques (GLS:r=0.74; GCS:r=0.80; GRS:r=0.45, all p<0.05). Comparing DRA with FT, intra-/interobserver coefficient of variance was lower (1.6 %/3.2 % vs. 6.4 %/5.7 %) and intraclass-correlation coefficient was higher. DRA GCS and GRS data presented zero variability for repeated observations. DRA is an automated method that allows myocardial deformation assessment with superior reproducibility compared to FT. (orig.)

  7. Quantification of global myocardial function by cine MRI deformable registration-based analysis: Comparison with MR feature tracking and speckle-tracking echocardiography

    Energy Technology Data Exchange (ETDEWEB)

    Lamacie, Mariana M. [University Health Network, Department of Medical Imaging, Toronto, Ontario (Canada); Thavendiranathan, Paaladinesh [University Health Network, Department of Medical Imaging, Toronto, Ontario (Canada); University of Toronto, Department of Medicine, Division of Cardiology, Toronto, Ontario (Canada); Hanneman, Kate [University Health Network, Department of Medical Imaging, Toronto, Ontario (Canada); University of Toronto, Department of Medical Imaging, Toronto, Ontario (Canada); Greiser, Andreas [Siemens Healthcare, Erlangen (Germany); Jolly, Marie-Pierre [Medical Imaging Technologies, Siemens Healthcare, Princeton, NJ (United States); Ward, Richard [University of Toronto, Department of Medicine, Division of Cardiology, Toronto, Ontario (Canada); Wintersperger, Bernd J. [University Health Network, Department of Medical Imaging, Toronto, Ontario (Canada); University of Toronto, Department of Medical Imaging, Toronto, Ontario (Canada); Toronto General Hospital, Department of Medical Imaging, Toronto, Ontario (Canada)

    2017-04-15

    To evaluate deformable registration algorithms (DRA)-based quantification of cine steady-state free-precession (SSFP) for myocardial strain assessment in comparison with feature-tracking (FT) and speckle-tracking echocardiography (STE). Data sets of 28 patients/10 volunteers, undergoing same-day 1.5T cardiac MRI and echocardiography were included. LV global longitudinal (GLS), circumferential (GCS) and radial (GRS) peak systolic strain were assessed on cine SSFP data using commercially available FT algorithms and prototype DRA-based algorithms. STE was applied as standard of reference for accuracy, precision and intra-/interobserver reproducibility testing. DRA showed narrower limits of agreement compared to STE for GLS (-4.0 [-0.9,-7.9]) and GCS (-5.1 [1.1,-11.2]) than FT (3.2 [11.2,-4.9]; 3.8 [13.9,-6.3], respectively). While both DRA and FT demonstrated significant differences to STE for GLS and GCS (all p<0.001), only DRA correlated significantly to STE for GLS (r=0.47; p=0.006). However, good correlation was demonstrated between MR techniques (GLS:r=0.74; GCS:r=0.80; GRS:r=0.45, all p<0.05). Comparing DRA with FT, intra-/interobserver coefficient of variance was lower (1.6 %/3.2 % vs. 6.4 %/5.7 %) and intraclass-correlation coefficient was higher. DRA GCS and GRS data presented zero variability for repeated observations. DRA is an automated method that allows myocardial deformation assessment with superior reproducibility compared to FT. (orig.)

  8. Microplastics in Baltic bottom sediments: Quantification procedures and first results.

    Science.gov (United States)

    Zobkov, M; Esiukova, E

    2017-01-30

    Microplastics in the marine environment are known as a global ecological problem, but there are still no standardized analysis procedures for their quantification. The first breakthrough in this direction was the NOAA Laboratory Methods for quantifying synthetic particles in water and sediments, but fiber counts have been found to be underestimated with this approach. We propose modifications to these methods that allow microplastics in bottom sediments, including small fibers, to be analyzed. Addition of an internal standard to sediment samples and occasional empty runs are advised for analysis quality control. The microplastics extraction efficiency using the proposed modifications is 92±7%. The distribution of microplastics in bottom sediments of the Russian part of the Baltic Sea is presented. Microplastic particles were found in all of the samples, with an average concentration of 34±10 items/kg DW, which is of the same order of magnitude as reported in neighboring studies. Copyright © 2016 Elsevier Ltd. All rights reserved.
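    The quality-control quantities mentioned in this record (extraction efficiency from spiked internal standards, concentration in items per kg dry weight) follow from simple counts and weights. A minimal Python sketch with assumed, illustrative numbers, not the study's protocol:

```python
def extraction_efficiency(recovered, spiked):
    """Fraction of spiked internal-standard particles recovered after extraction."""
    return recovered / spiked

def concentration_items_per_kg_dw(items_counted, sample_dry_weight_g, efficiency=1.0):
    """Microplastic concentration normalized to dry weight (items/kg DW),
    optionally corrected for the measured extraction efficiency."""
    items_corrected = items_counted / efficiency
    return items_corrected / (sample_dry_weight_g / 1000.0)

# Hypothetical sample: 7 particles counted in 180 g dry sediment,
# with 46 of 50 spiked internal-standard particles recovered.
eff = extraction_efficiency(46, 50)                  # 0.92
print(concentration_items_per_kg_dw(7, 180.0, eff))  # ~42 items/kg DW
```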

  9. Improved core protection calculator system algorithm

    International Nuclear Information System (INIS)

    Yoon, Tae Young; Park, Young Ho; In, Wang Kee; Bae, Jong Sik; Baeg, Seung Yeob

    2009-01-01

    The Core Protection Calculator System (CPCS) is a digitized core protection system which provides core protection functions based on two reactor core operation parameters, Departure from Nucleate Boiling Ratio (DNBR) and Local Power Density (LPD). It generates a reactor trip signal when the core condition exceeds the DNBR or LPD design limit. It consists of four independent channels that adopt a two-out-of-four trip logic. This paper describes the CPCS algorithm improvement for the newly designed core protection calculator system, RCOPS (Reactor COre Protection System). New features include the improvement of the DNBR algorithm for thermal margin, the addition of pre-trip alarm generation for the auxiliary trip function, VOPT (Variable Over Power Trip) prevention during RPCS (Reactor Power Cutback System) actuation, and the improvement of the CEA (Control Element Assembly) signal checking algorithm. To verify the improved CPCS algorithm, the CPCS algorithm verification tests, 'Module Test' and 'Unit Test', will be performed on the RCOPS single-channel facility. It is expected that the improved CPCS algorithm will increase the DNBR margin and enhance plant availability by reducing unnecessary reactor trips.
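    As a rough illustration of the channel voting described above (not the actual RCOPS implementation), a two-out-of-four trip logic asserts the reactor trip whenever at least two of the four independent channels report that DNBR or LPD has violated its design limit. A minimal Python sketch with hypothetical limit values:

```python
from dataclasses import dataclass

@dataclass
class ChannelState:
    dnbr: float   # Departure from Nucleate Boiling Ratio
    lpd: float    # Local Power Density (illustrative units)

# Hypothetical design limits; real setpoints are plant-specific.
DNBR_LIMIT = 1.3   # trip if DNBR falls below this
LPD_LIMIT = 21.0   # trip if LPD rises above this

def channel_trip(ch: ChannelState) -> bool:
    """A single channel votes for a trip if either parameter violates its limit."""
    return ch.dnbr < DNBR_LIMIT or ch.lpd > LPD_LIMIT

def two_out_of_four_trip(channels) -> bool:
    """Generate a reactor trip when at least 2 of the 4 channels vote for it."""
    votes = sum(channel_trip(ch) for ch in channels)
    return votes >= 2

channels = [ChannelState(1.45, 18.0), ChannelState(1.28, 18.5),
            ChannelState(1.42, 22.3), ChannelState(1.50, 17.0)]
print(two_out_of_four_trip(channels))  # True: two channels exceed a limit
```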

  10. The Applications of Genetic Algorithms in Medicine

    Directory of Open Access Journals (Sweden)

    Ali Ghaheri

    2015-11-01

    Full Text Available A great wealth of information is hidden amid medical research data that in some cases cannot be easily analyzed, if at all, using classical statistical methods. Inspired by nature, metaheuristic algorithms have been developed to offer optimal or near-optimal solutions to complex data analysis and decision-making tasks in a reasonable time. Due to their powerful features, metaheuristic algorithms have frequently been used in other fields of science. In medicine, however, the use of these algorithms is not well known to physicians, who may well benefit from applying them to solve complex medical problems. Therefore, in this paper, we introduce the genetic algorithm and its applications in medicine. The use of the genetic algorithm has promising implications in various medical specialties including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. This review introduces the applications of the genetic algorithm in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, and enables physicians to envision possible applications of this metaheuristic method in their medical career.

  11. The Applications of Genetic Algorithms in Medicine.

    Science.gov (United States)

    Ghaheri, Ali; Shoar, Saeed; Naderan, Mohammad; Hoseini, Sayed Shahabuddin

    2015-11-01

    A great wealth of information is hidden amid medical research data that in some cases cannot be easily analyzed, if at all, using classical statistical methods. Inspired by nature, metaheuristic algorithms have been developed to offer optimal or near-optimal solutions to complex data analysis and decision-making tasks in a reasonable time. Due to their powerful features, metaheuristic algorithms have frequently been used in other fields of science. In medicine, however, the use of these algorithms is not well known to physicians, who may well benefit from applying them to solve complex medical problems. Therefore, in this paper, we introduce the genetic algorithm and its applications in medicine. The use of the genetic algorithm has promising implications in various medical specialties including radiology, radiotherapy, oncology, pediatrics, cardiology, endocrinology, surgery, obstetrics and gynecology, pulmonology, infectious diseases, orthopedics, rehabilitation medicine, neurology, pharmacotherapy, and health care management. This review introduces the applications of the genetic algorithm in disease screening, diagnosis, treatment planning, pharmacovigilance, prognosis, and health care management, and enables physicians to envision possible applications of this metaheuristic method in their medical career.
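    For readers new to the method, the sketch below shows the bare genetic-algorithm loop (selection, crossover, mutation, elitism) on a toy fitness function; it is a generic illustration in Python, not one of the medical applications surveyed in the review.

```python
import random

def genetic_algorithm(fitness, n_genes=10, pop_size=30, generations=100,
                      crossover_rate=0.8, mutation_rate=0.02):
    """Maximize `fitness` over binary chromosomes with a basic GA."""
    pop = [[random.randint(0, 1) for _ in range(n_genes)] for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        next_pop = scored[:2]                                  # elitism: keep the two best
        while len(next_pop) < pop_size:
            p1, p2 = random.sample(scored[:pop_size // 2], 2)  # select from the fitter half
            if random.random() < crossover_rate:               # single-point crossover
                cut = random.randrange(1, n_genes)
                child = p1[:cut] + p2[cut:]
            else:
                child = p1[:]
            # bit-flip mutation
            child = [1 - g if random.random() < mutation_rate else g for g in child]
            next_pop.append(child)
        pop = next_pop
    return max(pop, key=fitness)

# Toy problem: maximize the number of ones in the chromosome.
best = genetic_algorithm(fitness=sum)
print(best, sum(best))
```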

  12. Quantification of competitive value of documents

    Directory of Open Access Journals (Sweden)

    Pavel Šimek

    2009-01-01

    Full Text Available The majority of Internet users use the global network to search for information with fulltext search engines such as Google, Yahoo!, or Seznam. Web presentation operators try, with the help of different optimization techniques, to reach the top positions in the fulltext search engine results. This is where Search Engine Optimization and Search Engine Marketing become important, because ordinary users usually try only the links on the first few pages of the fulltext search engine results for certain keywords, and in catalogs they primarily use the links placed higher in the hierarchy of each category. The key to success is the application of optimization methods that deal with keywords, the structure and quality of content, domain names, individual sites, and the quantity and reliability of backward links. The process is demanding, long-lasting and without a guaranteed outcome. A website operator without advanced analytical tools cannot identify the contribution of the individual documents of which the entire web site consists. If web presentation operators want an overview of their documents and of the web site as a whole, it is appropriate to quantify these positions in a specific way, depending on specific keywords. This is the purpose of quantifying the competitive value of documents, which in turn yields the global competitive value of a web site. Quantification of competitive values is performed on a specific fulltext search engine; for each fulltext search engine the results can be, and often are, different. According to reports published by the ClickZ agency and by Market Share, Google is the search engine most widely used by English-speaking users in terms of the number of searches, with a market share of more than 80%. The overall procedure for quantifying competitive values is common to all engines; however, the initial step, the analysis of keywords, depends on the choice of the fulltext search engine.
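    The idea of a document's competitive value, as summarized above, can be pictured as a position-weighted score over the keywords for which the document ranks in a chosen fulltext search engine. The Python sketch below is a hypothetical scoring scheme for illustration only; the keyword positions, the weighting and the function names are assumptions, not the formula from the article.

```python
def document_competitive_value(rankings, max_position=100):
    """Hypothetical competitive value: positions near the top of the results
    contribute more; keywords ranked below max_position contribute nothing.

    rankings: dict mapping keyword -> position of the document in the
              fulltext search engine results for that keyword.
    """
    score = 0.0
    for keyword, position in rankings.items():
        if 1 <= position <= max_position:
            score += (max_position - position + 1) / max_position
    return score

def site_competitive_value(documents):
    """Global competitive value of a web site as the sum over its documents."""
    return sum(document_competitive_value(r) for r in documents.values())

# Hypothetical site with two documents and their keyword positions.
site = {
    "/products.html": {"garden tools": 4, "lawn mower": 18},
    "/blog/mulching.html": {"mulching tips": 2, "lawn mower": 95},
}
print(site_competitive_value(site))
```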

  13. Advances in forensic DNA quantification: a review.

    Science.gov (United States)

    Lee, Steven B; McCord, Bruce; Buel, Eric

    2014-11-01

    This review focuses upon a critical step in forensic biology: detection and quantification of human DNA from biological samples. Determination of the quantity and quality of human DNA extracted from biological evidence is important for several reasons. Firstly, depending on the source and extraction method, the quality (purity and length) and quantity of the resultant DNA extract can vary greatly. This affects the downstream method, as the quantity of input DNA and its relative length can determine which genotyping procedure to use: standard short-tandem repeat (STR) typing, mini-STR typing or mitochondrial DNA sequencing. Secondly, because it is important in forensic analysis to preserve as much of the evidence as possible for retesting, it is important to determine the total DNA amount available prior to utilizing any destructive analytical method. Lastly, results from initial quantitative and qualitative evaluations permit a more informed interpretation of downstream analytical results. Newer quantitative techniques involving real-time PCR can reveal the presence of degraded DNA and PCR inhibitors, which provide potential explanations for poor genotyping results and may indicate which methods to use for downstream typing success. In general, the more information available, the easier it is to interpret and process the sample, resulting in a higher likelihood of successful DNA typing. The history of the development of quantitative methods has involved two main goals: improving the precision of the analysis and increasing the information content of the result. This review covers advances in forensic DNA quantification methods and recent developments in RNA quantification. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
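    Real-time PCR quantification of the kind described here is commonly based on fitting a standard curve of quantification cycle (Cq) against log10 of known DNA inputs and reading unknowns off that curve. The Python sketch below shows this generic calculation with made-up calibration data; the dilution series and Cq values are illustrative assumptions, not figures from the review.

```python
import numpy as np

def fit_standard_curve(known_ng, cq_values):
    """Fit Cq = slope * log10(quantity) + intercept from DNA standards."""
    slope, intercept = np.polyfit(np.log10(known_ng), cq_values, 1)
    efficiency = 10 ** (-1.0 / slope) - 1.0   # amplification efficiency estimate
    return slope, intercept, efficiency

def quantify(cq, slope, intercept):
    """Invert the standard curve to estimate DNA quantity (ng) for a sample Cq."""
    return 10 ** ((cq - intercept) / slope)

# Hypothetical 10-fold dilution series of a human DNA standard (ng input).
standards_ng = np.array([10.0, 1.0, 0.1, 0.01])
standards_cq = np.array([24.1, 27.5, 30.9, 34.3])

slope, intercept, eff = fit_standard_curve(standards_ng, standards_cq)
print(f"amplification efficiency ~ {eff:.2f}")
print(f"sample with Cq 29.0 ~ {quantify(29.0, slope, intercept):.3f} ng")
```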

  14. Stereo-particle image velocimetry uncertainty quantification

    International Nuclear Information System (INIS)

    Bhattacharya, Sayantan; Vlachos, Pavlos P; Charonko, John J

    2017-01-01

    Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources, and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation, and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from the 2014 PIV challenge. A thorough sensitivity analysis was performed to assess the relative impact of the various parameters on the overall uncertainty. The results suggest that in the absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall, the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment on the subject and potentially lays foundations applicable to volumetric
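    The combination step described above, where the calibration/registration uncertainty is merged with the planar uncertainty fields of the individual cameras, is at its core a propagation-of-uncertainty calculation. The Python sketch below shows only a generic root-sum-square combination of independent, already-propagated contributions for one velocity component; it is not the full stereo reconstruction of the paper, and the arrays and values are illustrative.

```python
import numpy as np

def combine_uncertainties(*components):
    """Root-sum-square combination of independent 1-sigma uncertainty fields.

    Each component is an array (or scalar) broadcastable to the shape of the
    first one, representing one contribution already propagated to the
    quantity of interest (e.g. camera-1 planar uncertainty, camera-2 planar
    uncertainty, and the calibration/registration contribution).
    """
    shape = np.shape(components[0])
    stacked = np.stack([np.broadcast_to(np.asarray(c, dtype=float), shape)
                        for c in components])
    return np.sqrt((stacked ** 2).sum(axis=0))

# Hypothetical per-vector uncertainties (m/s) for one velocity component:
u_cam1 = np.array([0.021, 0.018, 0.025])
u_cam2 = np.array([0.019, 0.022, 0.020])
u_calibration = 0.010   # registration/disparity contribution, assumed uniform
print(combine_uncertainties(u_cam1, u_cam2, u_calibration))
```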

  15. An Algorithmic Diversity Diet?

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk; Schmidt, Jan-Hinrik

    2016-01-01

    With the growing influence of personalized algorithmic recommender systems on the exposure of media content to users, the relevance of discussing the diversity of recommendations increases, particularly as far as public service media (PSM) is concerned. An imagined implementation of a diversity diet system, however, triggers not only the classic discussion of the reach-distinctiveness balance for PSM, but also shows that 'diversity' is understood very differently in algorithmic recommender system communities than it is editorially and politically in the context of PSM. The design of a diversity diet system generates questions not just about editorial power, personal freedom and techno-paternalism, but also about the embedded politics of recommender systems as well as the human skills affiliated with PSM editorial work and the nature of PSM content.

  16. DAL Algorithms and Python

    CERN Document Server

    Aydemir, Bahar

    2017-01-01

    The Trigger and Data Acquisition (TDAQ) system of the ATLAS detector at the Large Hadron Collider (LHC) at CERN is composed of a large number of distributed hardware and software components. The TDAQ system consists of about 3000 computers and more than 25000 applications which, in a coordinated manner, provide the data-taking functionality of the overall system. A number of online services are required to configure, monitor and control ATLAS data taking. In particular, the configuration service is used to provide the configuration of the above components. The configuration of the ATLAS data acquisition system is stored in an XML-based object database named OKS. The DAL (Data Access Library) allows this information to be accessed by C++, Java and Python clients in a distributed environment. Some information has a quite complicated structure, so its extraction requires writing special algorithms. Algorithms are available in the C++ programming language and have been partially reimplemented in the Java programming language. The goal of the projec...
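    Since the record notes that some configuration information has a complicated nested structure and therefore needs dedicated extraction algorithms, the Python sketch below shows a generic recursive traversal of an XML configuration document. It uses only the standard library with an invented, illustrative schema; it is not the actual OKS database layout or the DAL API.

```python
import xml.etree.ElementTree as ET

def collect_applications(node, path=""):
    """Recursively walk a nested XML configuration and collect the full path
    of every <application> element (illustrative schema, not the OKS one)."""
    found = []
    current = f"{path}/{node.get('name', node.tag)}"
    if node.tag == "application":
        found.append(current)
    for child in node:
        found.extend(collect_applications(child, current))
    return found

config = ET.fromstring("""
<partition name="ATLAS">
  <segment name="HLT">
    <application name="hltsv"/>
    <segment name="farm">
      <application name="dcm-01"/>
    </segment>
  </segment>
</partition>
""")
print(collect_applications(config))   # ['/ATLAS/HLT/hltsv', '/ATLAS/HLT/farm/dcm-01']
```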

  17. Genetic algorithm essentials

    CERN Document Server

    Kramer, Oliver

    2017-01-01

    This book introduces readers to genetic algorithms (GAs) with an emphasis on making the concepts, algorithms, and applications discussed as easy to understand as possible. Further, it avoids a great deal of formalism and thus opens the subject to a broader audience compared to manuscripts overloaded with notation and equations. The book is divided into three parts, the first of which provides an introduction to GAs, starting with basic concepts like evolutionary operators and continuing with an overview of strategies for tuning and controlling parameters. In turn, the second part focuses on solution space variants like multimodal, constrained, and multi-objective solution spaces. Lastly, the third part briefly introduces theoretical tools for GAs, the intersections and hybridizations with machine learning, and highlights selected promising applications.

  18. Cuckoo search and firefly algorithm theory and applications

    CERN Document Server

    2014-01-01

    Nature-inspired algorithms such as cuckoo search and the firefly algorithm have become popular and widely used in recent years in many applications. These algorithms are flexible, efficient and easy to implement. New progress has been made in the last few years, and it is timely to summarize the latest developments of cuckoo search and the firefly algorithm and their diverse applications. This book reviews both theoretical studies and applications with detailed algorithm analysis, implementation and case studies so that readers can benefit most from it. Application topics are contributed by many leading experts in the field. Topics include cuckoo search, firefly algorithm, algorithm analysis, feature selection, image processing, travelling salesman problem, neural network, GPU optimization, scheduling, queuing, multi-objective manufacturing optimization, semantic web service, shape optimization, and others. This book can serve as an ideal reference for both graduates and researchers in computer scienc...
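    As a concrete taste of one of the algorithms covered by the book, the Python sketch below implements a bare-bones firefly algorithm for continuous minimization, in which brighter (better) fireflies attract dimmer ones. The parameter values and the toy objective are illustrative defaults, not recommendations from the book.

```python
import math
import random

def firefly_minimize(f, dim=2, n_fireflies=20, iterations=200,
                     alpha=0.2, beta0=1.0, gamma=1.0, bounds=(-5.0, 5.0)):
    """Minimize f over a box using a basic firefly algorithm."""
    lo, hi = bounds
    X = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_fireflies)]
    for _ in range(iterations):
        intensity = [f(x) for x in X]            # lower objective = brighter firefly
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if intensity[j] < intensity[i]:  # move firefly i toward brighter j
                    r2 = sum((a - b) ** 2 for a, b in zip(X[i], X[j]))
                    beta = beta0 * math.exp(-gamma * r2)
                    X[i] = [min(hi, max(lo,
                            xi + beta * (xj - xi) + alpha * (random.random() - 0.5)))
                            for xi, xj in zip(X[i], X[j])]
                    intensity[i] = f(X[i])
        alpha *= 0.98                            # gradually damp the random walk
    return min(X, key=f)

# Toy objective: the sphere function, with its minimum at the origin.
print(firefly_minimize(lambda x: sum(v * v for v in x)))
```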

  19. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR)

    Energy Technology Data Exchange (ETDEWEB)

    Chen, Baiyu [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 and Carl E. Ravin Advanced Imaging Laboratories, Duke University, Durham, North Carolina 27705 (United States); Barnhart, Huiman [Department of Biostatistics and Bioinformatics, Duke University, Durham, North Carolina 27705 (United States); Richard, Samuel [Carl E. Ravin Advanced Imaging Laboratories, Duke University, Durham, North Carolina 27705 and Department of Radiology, Duke University, Durham, North Carolina 27705 (United States); Robins, Marthony [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Colsher, James [Department of Radiology, Duke University, Durham, North Carolina 27705 (United States); Samei, Ehsan [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Carl E. Ravin Advanced Imaging Laboratories, Duke University, Durham, North Carolina 27705 (United States); Department of Radiology, Duke University, Durham, North Carolina 27705 (United States); Department of Physics, Department of Biomedical Engineering, and Department of Electronic and Computer Engineering, Duke University, Durham, North Carolina 27705 (United States)

    2013-11-15

    Purpose: Volume quantification of lung nodules with multidetector computed tomography (CT) images provides useful information for monitoring nodule development. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification, with dose and slice thickness as additional variables. Methods: Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR)] at three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR; B: iNtuition) and analyzed for accuracy and precision. Results: Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. Conclusions: The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of

  20. Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR).

    Science.gov (United States)

    Chen, Baiyu; Barnhart, Huiman; Richard, Samuel; Robins, Marthony; Colsher, James; Samei, Ehsan

    2013-11-01

    Volume quantification of lung nodules with multidetector computed tomography (CT) images provides useful information for monitoring nodule development. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification, with dose and slice thickness as additional variables. Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model-based iterative reconstruction (MBIR)] at three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR; B: iNtuition) and analyzed for accuracy and precision. Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of accuracy on reconstruction algorithms.
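    The accuracy and precision analysis summarized above reduces, for each acquisition setting, to comparing repeated volume measurements against the known volume of the synthetic nodule. A minimal Python sketch with illustrative numbers follows: percent bias as an accuracy measure and a repeatability coefficient as a precision measure. The specific measurements are assumptions, not values from the study.

```python
import numpy as np

def percent_bias(measured_volumes, true_volume):
    """Accuracy: mean signed percent error of the measured volumes vs. the truth."""
    measured = np.asarray(measured_volumes, dtype=float)
    return 100.0 * (measured.mean() - true_volume) / true_volume

def repeatability_coefficient(measured_volumes):
    """Precision: 1.96 * sqrt(2) * SD of repeated measurements of the same nodule,
    i.e. the 95% limit expected for test-retest volume differences."""
    sd = np.asarray(measured_volumes, dtype=float).std(ddof=1)
    return 1.96 * np.sqrt(2.0) * sd

# Hypothetical repeated measurements (mm^3) of the 9.5 mm synthetic nodule
# (true volume = 4/3 * pi * (9.5/2)^3, roughly 449 mm^3) for one dose,
# slice thickness and reconstruction setting.
true_volume = 4.0 / 3.0 * np.pi * (9.5 / 2.0) ** 3
measurements = [438.0, 455.0, 447.0, 461.0, 450.0]
print(percent_bias(measurements, true_volume))
print(repeatability_coefficient(measurements))
```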