A fast and robust hepatocyte quantification algorithm including vein processing
Directory of Open Access Journals (Sweden)
Homeyer André
2010-03-01
Full Text Available Abstract Background Quantification of different types of cells is often needed for the analysis of histological images. In our project, we compute the relative number of proliferating hepatocytes for the evaluation of the regeneration process after partial hepatectomy in normal rat livers. Results Our automatic approach to hepatocyte (HC) quantification is suitable for the analysis of an entire digitized histological section given in the form of a series of images. It is the main part of an automatic hepatocyte quantification tool that computes the ratio between the number of proliferating HC nuclei and the total number of HC nuclei for a series of images in one processing run. The processing pipeline yields reliable results for a wide range of images with different properties without additional parameter adjustment. Comparing the obtained segmentation results with a manually generated segmentation mask considered the ground truth, we achieve a sensitivity above 90% and a false positive fraction below 15%. Conclusions The proposed automatic procedure gives results with high sensitivity and a low false positive fraction and can be applied to entire stained sections.
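The two figures of merit quoted above can be reproduced from binary masks in a few lines. This is an illustrative sketch only (the mask data and function names are invented, not the authors' pipeline):

```python
# Hypothetical sketch: sensitivity and false-positive fraction of a
# segmentation against a manually drawn ground-truth mask. Masks are
# represented as sets of pixel coordinates.

def sensitivity(pred, truth):
    """Fraction of ground-truth pixels recovered by the algorithm."""
    return len(pred & truth) / len(truth)

def false_positive_fraction(pred, truth):
    """Fraction of predicted pixels that are not in the ground truth."""
    return len(pred - truth) / len(pred)

truth = {(x, y) for x in range(10) for y in range(10)}      # 100 px square
pred = {(x, y) for x in range(1, 11) for y in range(10)}    # shifted by 1

print(sensitivity(pred, truth))              # 0.9  -> above 90%
print(false_positive_fraction(pred, truth))  # 0.1  -> below 15%
```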
Comparison of heterogeneity quantification algorithms for brain SPECT perfusion images
Modzelewski, Romain; Janvresse, Elise; De La Rue, Thierry; Vera, Pierre
2012-01-01
Background Several algorithms from the literature were compared with the original random walk (RW) algorithm for brain perfusion heterogeneity quantification purposes. Algorithms are compared on a set of 210 brain single photon emission computed tomography (SPECT) simulations and 40 patient exams. Methods Five algorithms were tested on numerical phantoms. The numerical anthropomorphic Zubal head phantom was used to generate 42 (6 × 7) different brain SPECT simulations. Seven diffuse cortical ...
Efficient Algorithms for Electrostatic Interactions Including Dielectric Contrasts
Directory of Open Access Journals (Sweden)
Christian Holm
2013-10-01
Full Text Available Coarse-grained models of soft matter are usually combined with implicit solvent models that take the electrostatic polarizability into account via a dielectric background. In biophysical or nanoscale simulations that include water, this dielectric constant can vary greatly within the system. Performing molecular dynamics or other simulations that must compute exact electrostatic interactions between charges in such systems is computationally demanding. We review here several algorithms developed by us that perform exactly this task. For planar dielectric surfaces in partially periodic boundary conditions, the arising image charges can be treated either with the MMM2D algorithm, in a very efficient and accurate way, or with the electrostatic layer correction term, which lets the user employ their favorite 3D periodic Coulomb solver. Arbitrarily shaped interfaces can be dealt with using induced surface charges via the induced charge calculation (ICC*) algorithm. Finally, the local electrostatics algorithm MEMD (Maxwell Equations Molecular Dynamics) even allows one to employ a smoothly varying dielectric constant in the system. We introduce the concepts of these three algorithms and an extension for the inclusion of boundaries that are to be held at a constant potential (metallic boundary conditions). For each method, we present a showcase application to highlight the importance of dielectric interfaces.
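As a point of reference for the physics these solvers handle, a single planar dielectric interface admits the textbook image-charge solution. The sketch below illustrates that closed form only; it is not the MMM2D/ICC* code, and all names are ours:

```python
# Textbook image-charge construction for a charge q in medium 1 (eps1,
# z > 0) near a planar interface to medium 2 (eps2, z < 0): the potential
# in medium 1 is that of q plus an image charge q' at the mirror position.
import math

EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def image_charge(q, eps1, eps2):
    """Image charge seen from the eps1 side of a planar interface."""
    return q * (eps1 - eps2) / (eps1 + eps2)

def potential(q, z_q, eps1, eps2, z, rho):
    """Potential at (rho, z) in medium 1 from charge q at (0, z_q), z_q > 0."""
    qi = image_charge(q, eps1, eps2)
    r_direct = math.hypot(rho, z - z_q)
    r_image = math.hypot(rho, z + z_q)   # image sits at -z_q
    return (q / r_direct + qi / r_image) / (4 * math.pi * EPS0 * eps1)

# A charge on the high-permittivity side (e.g. water, eps 80, vs vacuum)
# sees a same-sign image, i.e. it is repelled from the interface:
print(image_charge(1.0, 80.0, 1.0) > 0)
```

For a uniform dielectric (eps1 == eps2) the image charge vanishes and the plain Coulomb potential is recovered, which is a handy sanity check for any of the solvers above.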
A Constrained Genetic Algorithm with Adaptively Defined Fitness Function in MRS Quantification
Papakostas, G. A.; Karras, D. A.; Mertzios, B. G.; Graveron-Demilly, D.; van Ormondt, D.
MRS signal quantification is a rather involved procedure and has attracted the interest of the medical engineering community regarding the development of computationally efficient methodologies. Significant contributions based on Computational Intelligence tools, such as Neural Networks (NNs), have demonstrated good performance, though not without drawbacks already discussed by the authors. On the other hand, a preliminary application of Genetic Algorithms (GAs) to the peak detection problem encountered in MRS quantification using the Voigt line shape model has already been reported in the literature by the authors. This paper investigates a novel constrained genetic algorithm with a generic, adaptively defined fitness function, which extends the simple genetic algorithm methodology to the case of noisy signals. The applicability of this new algorithm is scrutinized through experiments on artificial MRS signals corrupted by noise, with regard to its signal-fitting capabilities. Although extensive experiments with real-world MRS signals are still necessary, the performance shown here illustrates the method's potential to be established as a generic MRS metabolite quantification procedure.
Mosconi, E; Sima, D M; Osorio Garcia, M I; Fontanella, M; Fiorini, S; Van Huffel, S; Marzola, P
2014-04-01
Proton magnetic resonance spectroscopy (MRS) is a sensitive method for investigating the biochemical compounds in a tissue. The interpretation of the data relies on the quantification algorithms applied to MR spectra. Each of these algorithms has certain underlying assumptions and may allow one to incorporate prior knowledge, which could influence the quality of the fit. The most commonly considered types of prior knowledge include the line-shape model (Lorentzian, Gaussian, Voigt), knowledge of the resonating frequencies, modeling of the baseline, constraints on the damping factors and phase, etc. In this article, we study whether the statistical outcome of a biological investigation can be influenced by the quantification method used. We chose to study lipid signals because of their emerging role in the investigation of metabolic disorders. Lipid spectra, in particular, are characterized by peaks that are in most cases not Lorentzian, because measurements are often performed in difficult body locations, e.g. in visceral fats close to peristaltic movements in humans or very small areas close to different tissues in animals. This leads to spectra with several peak distortions. Linear combination of Model spectra (LCModel), Advanced Method for Accurate Robust and Efficient Spectral fitting (AMARES), quantitation based on QUantum ESTimation (QUEST), Automated Quantification of Short Echo-time MRS (AQSES)-Lineshape and Integration were applied to simulated spectra, and area under the curve (AUC) values, which are proportional to the quantity of the resonating molecules in the tissue, were compared with true values. A comparison between techniques was also carried out on lipid signals from obese and lean Zucker rats, for which the polyunsaturation value expressed in white adipose tissue should be statistically different, as confirmed by high-resolution NMR measurements (considered the gold standard) on the same animals. LCModel, AQSES-Lineshape, QUEST and Integration
Investigation of an MLE Algorithm for Quantification of Aerial Radiological Measurements
Energy Technology Data Exchange (ETDEWEB)
Reed, Michael; Essex, James
2012-05-10
Aerial radiation detection is routinely used by many organizations (DHS, DOE, EPA, etc.) to identify and quantify radiation along the ground. This work involves the search for lost or missing sources, as well as the characterization of large-scale releases such as might occur in a nuclear power plant accident. The standard in aerial radiological surveys is to fly large arrays of sodium-iodide detectors at altitude (15 to 700 meters) to acquire geo-referenced, 1 Hz, 1024-channel spectra. The historical shortfalls of this technology include:
• Very low spatial resolution (the typical field of view is a circle of twice the altitude)
• Relatively low detectability associated with large stand-off distances
• Fundamental challenges in performing ground-level quantification
This work uses modern computational power in conjunction with multi-dimensional deconvolution algorithms in an effort to improve spatial resolution, enhance detectability, and provide a robust framework for quantification.
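A classic MLE deconvolution of the kind this abstract alludes to is the Richardson-Lucy iteration, the standard maximum-likelihood estimator for Poisson counting data. The 1D sketch below, with invented data, is only a generic illustration of the idea, not the report's multi-dimensional algorithm:

```python
import numpy as np

def richardson_lucy(observed, psf, iterations=50):
    """1D Richardson-Lucy iteration: multiplicative MLE updates that
    sharpen counts blurred by a known detector footprint (PSF)."""
    estimate = np.full_like(observed, observed.mean(), dtype=float)
    psf_flipped = psf[::-1]
    for _ in range(iterations):
        blurred = np.convolve(estimate, psf, mode="same")
        ratio = observed / np.maximum(blurred, 1e-12)  # avoid divide-by-zero
        estimate *= np.convolve(ratio, psf_flipped, mode="same")
    return estimate

# A point source blurred by a wide (toy) detector footprint:
psf = np.array([0.05, 0.25, 0.4, 0.25, 0.05])
truth = np.zeros(21)
truth[10] = 100.0
observed = np.convolve(truth, psf, mode="same")
restored = richardson_lucy(observed, psf)
```

After the iterations, the restored profile re-concentrates the counts near the true source location, which is exactly the resolution gain the report seeks.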
Fundamental parameter based quantification algorithm for confocal nano-X-ray fluorescence analysis
Energy Technology Data Exchange (ETDEWEB)
Schoonjans, Tom, E-mail: Tom.Schoonjans@UGent.be [X-ray Microspectroscopy and Imaging Research Group (XMI), Department of Analytical Chemistry, Ghent University, Krijgslaan 281 S12, B-9000 Ghent (Belgium); Silversmit, Geert; Vekemans, Bart [X-ray Microspectroscopy and Imaging Research Group (XMI), Department of Analytical Chemistry, Ghent University, Krijgslaan 281 S12, B-9000 Ghent (Belgium); Schmitz, Sylvia [Geosciences Institute/Mineralogy, Goethe University Frankfurt, Altenhoeferallee 1, D-60438 Frankfurt (Germany); Burghammer, Manfred; Riekel, Christian [ESRF, 6 rue Jules Horowitz, BP220, F-38043 Grenoble Cedex (France); Brenker, Frank E. [Geosciences Institute/Mineralogy, Goethe University Frankfurt, Altenhoeferallee 1, D-60438 Frankfurt (Germany); Vincze, Laszlo, E-mail: Laszlo.Vincze@UGent.be [X-ray Microspectroscopy and Imaging Research Group (XMI), Department of Analytical Chemistry, Ghent University, Krijgslaan 281 S12, B-9000 Ghent (Belgium)
2012-01-15
A new method for the quantification of X-ray fluorescence (XRF) was derived based on the fundamental parameter method (FPM). The FPM equations were adapted to accommodate the special case of confocal nano-XRF, i.e. X-ray nano-beam excitation coupled with confocal detection, taking into account the special characteristics of the detector-channel polycapillary. A thorough error estimation algorithm based on the Monte Carlo method was applied, producing a detailed analysis of the uncertainties of the quantification results. The new FPM algorithm was applied to confocal nano-XRF data obtained from cometary dust returned by NASA's Stardust mission, recorded at beamline ID13 of the European Synchrotron Radiation Facility. Highlights:
• A new method for the quantification of confocal XRF is presented.
• The quantification is based on the fundamental parameter method (FPM).
• The new FPM algorithm was applied for the analysis of unique cometary dust particles.
• The cometary particles were returned by NASA's Stardust mission in 2006.
• Error estimation is based on the Monte Carlo method.
Scalable Hierarchical Algorithms for stochastic PDEs and Uncertainty Quantification
Litvinenko, Alexander
2015-01-05
Hierarchical matrices (H-matrices) and the Fast Multipole Method (FMM) are powerful techniques for approximating linear operators that arise from partial differential and integral equations, reducing the computational cost from quadratic or cubic to log-linear (O(n log n)), where n is the number of degrees of freedom in the discretization. The storage is reduced to log-linear as well. This hierarchical structure is a good starting point for parallel algorithms. Parallelization on shared and distributed memory systems was pioneered by R. Kriemann (2005). Since 2005, the area of parallel architectures and software has been developing very fast. Progress in GPUs and many-core systems (e.g. the Xeon Phi with 64 cores) motivated us to extend the work started in [1,2,7,8].
Scherer, Moritz; Cordes, Jonas; Younsi, Alexander; Sahin, Yasemin-Aylin; Götz, Michael; Möhlenbruch, Markus; Stock, Christian; Bösel, Julian; Unterberg, Andreas; Maier-Hein, Klaus; Orakcioglu, Berk
2016-11-01
ABC/2 is still widely accepted for volume estimations in spontaneous intracerebral hemorrhage (ICH) despite known limitations, which potentially accounts for controversial outcome-study results. The aim of this study was to establish and validate an automatic segmentation algorithm allowing for quick and accurate quantification of ICH. A segmentation algorithm implementing first- and second-order statistics, texture, and threshold features was trained on manual segmentations with a random-forest methodology. Quantitative data from the algorithm, manual segmentations, and ABC/2 were evaluated for agreement in a study sample (n=28) and validated in an independent sample not used for algorithm training (n=30). ABC/2 volumes were significantly larger than either manual or algorithm values, whereas no significant differences were found between the latter. Agreement with manual segmentation was excellent for the algorithm (concordance correlation coefficient 0.95 [lower 95% confidence interval 0.91]) and inferior for ABC/2 (concordance correlation coefficient 0.77 [95% confidence interval 0.64]). Validation confirmed agreement in an independent sample (algorithm concordance correlation coefficient 0.99 [95% confidence interval 0.98]; ABC/2 concordance correlation coefficient 0.82 [95% confidence interval 0.72]). The algorithm was closer to the respective manual segmentations than ABC/2 in 52/58 cases (89.7%). An automatic segmentation algorithm for volumetric analysis of spontaneous ICH was developed and validated in this study. Algorithm measurements showed strong agreement with manual segmentations, whereas ABC/2 exhibited its limitations, yielding inaccurate overestimations of ICH volume. The refined, yet time-efficient, quantification of ICH by the algorithm may facilitate evaluation of clot volume as an outcome predictor and trigger for surgical interventions in the clinical setting. © 2016 American Heart Association, Inc.
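Both quantities compared in this abstract are simple to compute: the bedside ABC/2 ellipsoid approximation and Lin's concordance correlation coefficient (the agreement metric reported). A minimal sketch with invented volumes (not the study's data):

```python
def abc2_volume(a_cm, b_cm, c_cm):
    """Bedside ABC/2 ellipsoid approximation of hematoma volume (mL)."""
    return a_cm * b_cm * c_cm / 2.0

def concordance_ccc(x, y):
    """Lin's concordance correlation coefficient: 1 = perfect agreement,
    penalizing both scale/location shifts and scatter."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sx = sum((v - mx) ** 2 for v in x) / n
    sy = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2 * sxy / (sx + sy + (mx - my) ** 2)

manual = [12.0, 25.0, 40.0, 8.0, 60.0]   # illustrative manual volumes, mL
algo = [12.5, 24.0, 41.0, 8.5, 58.0]     # close to manual
abc2 = [18.0, 33.0, 55.0, 12.0, 80.0]    # systematic overestimation

print(concordance_ccc(manual, algo) > concordance_ccc(manual, abc2))  # True
```

The systematic overestimation in the third series drags its CCC well below that of the near-identical series, mirroring the study's finding.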
Directory of Open Access Journals (Sweden)
Emanuele Gandola
2016-09-01
Full Text Available The estimation and quantification of potentially toxic cyanobacteria in lakes and reservoirs are often used as a proxy of risk for water intended for human consumption and recreational activities. Here, we present data sets collected from three volcanic Italian lakes (Albano, Vico, Nemi) that host filamentous cyanobacteria strains in different environments. The data sets were used to estimate the abundance and morphometric characteristics of potentially toxic cyanobacteria, comparing manual vs. automated estimation performed by ACQUA ("Automated Cyanobacterial Quantification Algorithm for toxic filamentous genera using spline curves, pattern recognition and machine learning", Gandola et al., 2016 [1]). This strategy was used to assess the algorithm's performance and to set up the denoising algorithm. Abundance and total-length estimations were used for software development; to this aim, we evaluated the efficiency of the statistical tools and mathematical algorithms described here. Image convolution with the Sobel filter was chosen to denoise input images from background signals; spline curves and the least-squares method were then used to parameterize detected filaments and to recombine crossing and interrupted sections, in order to perform precise abundance estimations and morphometric measurements.
Choudhari, Khoobaram S; Jidesh, Pacheeripadikkal; Sudheendra, Parampalli; Kulkarni, Suresh D
2013-08-01
A new mathematical algorithm is reported for the accurate and efficient analysis of the pore properties of nanoporous anodic alumina (NAA) membranes using scanning electron microscope (SEM) images. NAA membranes of the desired pore size were fabricated using a two-step anodic oxidation process. The surface morphology of NAA membranes with different pore properties was studied using SEM images along with computerized image processing and analysis. The main objective was to analyze the SEM images of NAA membranes quantitatively, systematically, and quickly. The method uses a regularized shock filter for contrast enhancement, mathematical morphological operators, and a segmentation process for efficient determination of pore properties. The algorithm is executed in MATLAB, which generates a statistical report on the morphology of NAA membrane surfaces and performs accurate quantification of parameters such as the average pore-size distribution, porous area fraction, and average interpore distance. Pore-property measurements obtained with our algorithm compared well with those from the ImageJ software. The algorithm, which requires little manual intervention, is useful for optimizing the experimental process parameters during the fabrication of such nanostructures. Further, it is capable of analyzing SEM images of similar or asymmetrically porous nanostructures wherever the sample and background have distinguishable contrast.
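The measurement stage (after segmentation) boils down to labelling connected pore regions and computing area statistics. The paper works in MATLAB with shock filtering and morphology; the pure-Python stand-in below sketches only the counting step, on a toy binary image:

```python
# Hedged sketch: 4-connected component labelling by flood fill, plus the
# porous-area-fraction measurement. The input grid is invented.

def label_pores(binary):
    """Return a label grid and the number of 4-connected pore regions."""
    h, w = len(binary), len(binary[0])
    labels = [[0] * w for _ in range(h)]
    n = 0
    for i in range(h):
        for j in range(w):
            if binary[i][j] and not labels[i][j]:
                n += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y][x] and not labels[y][x]:
                        labels[y][x] = n
                        stack += [(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)]
    return labels, n

def porous_area_fraction(binary):
    """Fraction of pixels classified as pore."""
    total = sum(len(row) for row in binary)
    return sum(sum(row) for row in binary) / total

img = [[0, 1, 1, 0, 0],
       [0, 1, 1, 0, 1],
       [0, 0, 0, 0, 1],
       [1, 0, 0, 0, 0]]
_, n_pores = label_pores(img)
print(n_pores, porous_area_fraction(img))  # 3 pores, fraction 0.35
```

Per-pore areas (for a pore-size distribution) follow directly by histogramming the label grid.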
Péréfarres, Frédéric; Hoareau, Murielle; Chiroleu, Frédéric; Reynaud, Bernard; Dintinger, Jacques; Lett, Jean-Michel
2011-08-05
Begomovirus is a genus of phytopathogenic single-stranded DNA viruses transmitted by the whitefly Bemisia tabaci. This genus includes emerging and economically significant viruses such as those associated with Tomato Yellow Leaf Curl Disease, for which diagnostic tools are needed to prevent dispersion and new introductions. Five real-time PCRs with an internal tomato reporter gene were developed for accurate detection and quantification of monopartite begomoviruses, including two strains of the Tomato yellow leaf curl virus (TYLCV; Mld and IL strains), the Tomato leaf curl Comoros virus-like viruses (ToLCKMV-like viruses) and the two molecules of the bipartite Potato yellow mosaic virus. These diagnostic tools have a unique quantification standard comprising the targeted viral and internal reporter amplicons. These duplex real-time PCRs were applied to artificially inoculated plants to monitor and compare viral development. Real-time PCRs were optimized for accurate detection and quantification over a range of 2 × 10^9 to 2 × 10^3 copies of genomic viral DNA/μL for TYLCV-Mld, TYLCV-IL and PYMV-B, and 2 × 10^8 to 2 × 10^3 copies of genomic viral DNA/μL for PYMV-A and the ToLCKMV-like viruses. These real-time PCRs were applied to artificially inoculated plants, and viral loads were compared at 10, 20 and 30 days post-inoculation. Different patterns of viral accumulation were observed between the bipartite and the monopartite begomoviruses. Interestingly, PYMV accumulated more viral DNA at each date for both genomic components than all the monopartite viruses. Also, PYMV reached its highest viral load at 10 dpi, in contrast to the other viruses (20 dpi). The accumulation kinetics of the two strains of emergent TYLCV differed from those of the ToLCKMV-like viruses in the higher quantities of viral DNA produced in the early phase of the infection and in the shorter time to reach this peak viral load. To detect and quantify a wide range of begomoviruses, five duplex
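Absolute quantification in real-time PCR conventionally goes through a standard curve of Ct against log10 copy number built from a dilution series, matching the copy ranges quoted above. A hedged sketch with synthetic Ct values (slope chosen at the ideal -3.32, i.e. ~100% efficiency; not the paper's data):

```python
import math

def fit_standard_curve(copies, ct):
    """Least-squares fit of Ct against log10(copy number); returns
    (slope, intercept)."""
    x = [math.log10(c) for c in copies]
    n = len(x)
    mx, my = sum(x) / n, sum(ct) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, ct))
             / sum((a - mx) ** 2 for a in x))
    return slope, my - slope * mx

def quantify(ct_sample, slope, intercept):
    """Invert the standard curve: copies for an observed Ct."""
    return 10 ** ((ct_sample - intercept) / slope)

# Ten-fold dilution series spanning 2e3..2e8 copies (the range used for
# PYMV-A above); Ct values are synthetic, decreasing 3.32 per decade.
copies = [2e3, 2e4, 2e5, 2e6, 2e7, 2e8]
cts = [35.0, 31.68, 28.36, 25.04, 21.72, 18.40]
slope, intercept = fit_standard_curve(copies, cts)
print(round(slope, 2))  # -3.32
```

Amplification efficiency follows from the slope as `10 ** (-1 / slope) - 1`, the usual check that a duplex assay has not lost efficiency to primer competition.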
A novel fuzzy C-means algorithm for unsupervised heterogeneous tumor quantification in PET.
Belhassen, Saoussen; Zaidi, Habib
2010-03-01
Accurate and robust image segmentation has been identified as one of the most challenging issues facing PET quantification in oncological imaging. This difficulty is compounded by the low spatial resolution and high noise characteristics of PET images. The fuzzy C-means (FCM) clustering algorithm has been widely used in various medical image segmentation approaches. However, the algorithm is sensitive to both noise and intensity heterogeneity, since it does not take spatial contextual information into account. To overcome this limitation, a new fuzzy segmentation technique adapted to typical noisy and low-resolution oncological PET data is proposed. PET images smoothed using a nonlinear anisotropic diffusion filter are added as a second input to the proposed FCM algorithm to incorporate spatial information (FCM-S). In addition, a methodology was developed to integrate the à trous wavelet transform into the standard FCM algorithm (FCM-SW) to allow handling of heterogeneous lesion uptake. The algorithm was applied to simulated data of the NCAT phantom incorporating heterogeneous lesions in the lung, and to clinical PET/CT images of 21 patients presenting with histologically proven non-small-cell lung cancer (NSCLC) and 7 patients presenting with laryngeal squamous cell carcinoma (LSCC), to assess its performance in segmenting tumors of arbitrary size, shape, and tracer uptake. For NSCLC patients, the maximal tumor diameters measured from macroscopic examination of the surgical specimen served as the ground truth for comparison with the maximum diameter estimated by the segmentation technique, whereas for LSCC patients, the 3D macroscopic tumor volume was considered the ground truth for comparison with the corresponding PET-based volume. The proposed algorithm was also compared to the classical FCM segmentation technique. There is a good correlation (R² = 0.942) between the actual maximal diameter of primary NSCLC tumors and the diameter estimated using the proposed PET segmentation
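The FCM-S idea of appending a smoothed-image intensity as an extra feature can be sketched with plain fuzzy C-means on two-channel data. This is a generic illustration on an invented 1D "image" (a simple moving average stands in for the anisotropic diffusion filter), not the authors' implementation:

```python
import numpy as np

def fcm(data, n_clusters=2, m=2.0, iters=100):
    """Plain fuzzy C-means on feature vectors (rows): alternate the
    weighted-centroid and membership updates of the standard algorithm."""
    rng = np.random.default_rng(0)
    u = rng.random((len(data), n_clusters))
    u /= u.sum(axis=1, keepdims=True)
    for _ in range(iters):
        w = u ** m
        centers = (w.T @ data) / w.sum(axis=0)[:, None]
        d = np.linalg.norm(data[:, None, :] - centers[None], axis=2) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))       # u_ik proportional to d^(-2/(m-1))
        u /= u.sum(axis=1, keepdims=True)
    return u, centers

# Noisy 1D "image": background ~1, lesion ~4; second feature = 3-pt smooth,
# which pulls isolated noise spikes toward their neighborhood value.
sig = np.array([1.0, 1.2, 0.9, 1.1, 4.2, 3.9, 4.1, 4.0, 1.0, 0.8])
smooth = np.convolve(sig, np.ones(3) / 3, mode="same")
feats = np.stack([sig, smooth], axis=1)
u, centers = fcm(feats)
labels = u.argmax(axis=1)
```

Defuzzifying the memberships (`argmax`) cleanly separates the lesion samples (indices 4-7) from the background.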
Energy Technology Data Exchange (ETDEWEB)
Swiler, Laura Painton; Eldred, Michael Scott
2009-09-01
This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
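The report's two-loop structure, an outer epistemic loop bounding an inner aleatory statistic, can be illustrated with a toy model. Everything below is invented (the model f(a, e) = e*a², the interval, the brute-force grid scan standing in for interval optimization); the report itself uses stochastic expansions and interval optimization:

```python
import random
import statistics

def response(a, e):
    """Toy forward model with aleatory input a and epistemic parameter e."""
    return e * a ** 2

def aleatory_mean(e, n=20000, seed=1):
    """Inner loop: Monte Carlo estimate of E_a[f(a, e)] with a ~ N(0, 1)
    (analytically equal to e for this model)."""
    rng = random.Random(seed)
    return statistics.fmean(response(rng.gauss(0.0, 1.0), e) for _ in range(n))

# Outer loop: e is only known to lie in the interval [0.5, 2.0]; scan it
# and report bounds on the aleatory mean instead of a single number.
grid = [0.5 + 1.5 * i / 15 for i in range(16)]
means = [aleatory_mean(e) for e in grid]
lower, upper = min(means), max(means)
```

The output is an interval on the statistic, [lower, upper] ≈ [0.5, 2.0] here, rather than a single mean: that is exactly the "epistemic bounds on aleatory statistics" the milestone formalizes.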
Elsheikh, A. H.
2013-12-01
Calibration of subsurface flow models is an essential step for managing groundwater aquifers, designing contaminant remediation plans, and maximizing recovery from hydrocarbon reservoirs. We investigate an efficient sampling algorithm known as nested sampling (NS), which can simultaneously sample the posterior distribution for uncertainty quantification and estimate the Bayesian evidence for model selection. Model selection statistics, such as the Bayesian evidence, are needed to choose or assign different weights to models of different levels of complexity. In this work, we report the first successful application of nested sampling to the calibration of several nonlinear subsurface flow problems. The Bayesian evidence estimated by the NS algorithm is used to weight different parameterizations of the subsurface flow models (prior model selection). The results of the numerical evaluation implicitly enforced Occam's razor, where simpler models with fewer parameters are favored over complex models. The proper level of model complexity was automatically determined based on the information content of the calibration data and the data mismatch of the calibrated model.
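A minimal nested-sampling loop makes the evidence estimate concrete. The 1D toy below (uniform prior on [0, 1], Gaussian likelihood at 0.5, naive rejection sampling for the constrained prior draw) only illustrates how NS accumulates the Bayesian evidence Z; it is not the paper's subsurface-flow implementation:

```python
import math
import random

SIGMA = 0.05

def loglike(theta):
    """Gaussian log-likelihood centred at 0.5 (toy stand-in for the
    forward-model data misfit)."""
    return -((theta - 0.5) ** 2) / (2 * SIGMA ** 2)

def nested_sampling(n_live=100, n_iter=700, seed=0):
    rng = random.Random(seed)
    live = [rng.random() for _ in range(n_live)]   # prior draws on [0, 1]
    z, x_prev = 0.0, 1.0
    for i in range(1, n_iter + 1):
        worst = min(range(n_live), key=lambda k: loglike(live[k]))
        l_min = loglike(live[worst])
        x = math.exp(-i / n_live)            # expected shrinkage of prior mass
        z += math.exp(l_min) * (x_prev - x)  # evidence increment L * dX
        x_prev = x
        while True:                          # fresh prior draw above threshold
            cand = rng.random()
            if loglike(cand) > l_min:
                live[worst] = cand
                break
    return z

z = nested_sampling()
z_true = SIGMA * math.sqrt(2 * math.pi)  # analytic evidence (tails negligible)
```

The dead points, weighted by their evidence increments, double as posterior samples, which is why NS delivers uncertainty quantification and model selection in one run.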
Directory of Open Access Journals (Sweden)
Saleh Lashkari
2016-06-01
Full Text Available Selecting optimal features based on the nature of the phenomenon, with high discriminant ability, is very important in data classification problems. Since Recurrence Quantification Analysis (RQA) does not require any assumption about the stationarity or the size of the signal and the noise, it may be useful for epileptic seizure detection. In this study, RQA was used to discriminate ictal EEG from normal EEG, with optimal features selected by a combination of a genetic algorithm and a Bayesian classifier. Recurrence plots of one hundred samples in each of the two categories were obtained with five distance norms: Euclidean, Maximum, Minimum, Normalized and Fixed norm. In order to choose the optimal threshold for each norm, ten thresholds of ε were generated, and the best feature space was then selected by the genetic algorithm in combination with the Bayesian classifier. The results show that the proposed method is capable of discriminating ictal EEG from normal EEG; for the Minimum norm and 0.1 < ε < 1, the accuracy was 100%. In addition, the sensitivity of the proposed framework to the ε and distance-norm parameters was low. The optimal feature presented in this study is Trans, which was selected in most feature spaces with high accuracy.
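Two RQA ingredients used here, the thresholded recurrence matrix (for a chosen distance norm and threshold ε) and the recurrence-rate feature, can be sketched directly. Signals and parameters below are invented; the study additionally embeds the EEG and feeds richer RQA features (e.g. Trans) to the GA-selected Bayesian classifier:

```python
import math
import random

def recurrence_matrix(series, eps, norm=abs):
    """Thresholded pairwise-distance (recurrence) matrix of a scalar series;
    `norm` plays the role of the distance norm chosen in the study."""
    n = len(series)
    return [[1 if norm(series[i] - series[j]) < eps else 0
             for j in range(n)] for i in range(n)]

def recurrence_rate(rm):
    """RR: fraction of recurrent points, one of the basic RQA features."""
    n = len(rm)
    return sum(map(sum, rm)) / (n * n)

# A regular (periodic) signal recurs far more often than uniform noise,
# which is the kind of contrast RQA features exploit for seizure detection:
regular = [math.sin(0.5 * i) for i in range(100)]
rng = random.Random(3)
noisy = [rng.uniform(-1.0, 1.0) for _ in range(100)]
rr_reg = recurrence_rate(recurrence_matrix(regular, 0.1))
rr_noise = recurrence_rate(recurrence_matrix(noisy, 0.1))
```

Features such as determinism or Trans are likewise read off the diagonal-line and transition structure of this same matrix.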
Adaptive multiscale MCMC algorithm for uncertainty quantification in seismic parameter estimation
Tan, Xiaosi
2014-08-05
Formulating an inverse problem in a Bayesian framework has several major advantages (Sen and Stoffa, 1996). It allows finding multiple solutions subject to flexible a priori information and performing uncertainty quantification in the inverse problem. In this paper, we consider Bayesian inversion for parameter estimation in seismic wave propagation. Bayes' theorem allows writing the posterior distribution via the likelihood function and the prior distribution, where the latter represents our prior knowledge about physical properties. One of the popular algorithms for sampling this posterior distribution is Markov chain Monte Carlo (MCMC), which involves making proposals and calculating their acceptance probabilities. However, for large-scale problems, MCMC is prohibitively expensive, as it requires many forward runs. In this paper, we propose a multilevel MCMC algorithm that employs multilevel forward simulations. The multilevel forward simulations are derived using the Generalized Multiscale Finite Element Methods that we have proposed earlier (Efendiev et al., 2013a; Chung et al., 2013). Our overall Bayesian inversion approach provides a substantial speed-up both in the sampling process, via preconditioning using approximate posteriors, and in the computation of the forward problems for different proposals, by using the adaptive nature of multiscale methods. These aspects of the method are discussed in the paper. This paper is motivated by earlier work of M. Sen and his collaborators (Hong and Sen, 2007; Hong, 2008), who proposed the development of efficient MCMC techniques for seismic applications. In the paper, we present some preliminary numerical results.
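The proposal/acceptance mechanics described above are those of a Metropolis-Hastings sampler. In the compact sketch below, the quadratic "forward model" f(theta) = theta², the flat prior on [0, 3] and the data are all invented stand-ins for the seismic forward solver; the paper's contribution is precisely to replace the expensive forward run inside this loop with cheap multilevel GMsFEM solves:

```python
import math
import random

def log_posterior(theta, data, sigma=0.5):
    """Log-posterior = Gaussian log-likelihood of the toy forward model
    f(theta) = theta**2, plus a flat prior on [0, 3]."""
    if not 0.0 <= theta <= 3.0:
        return -math.inf
    return -sum((d - theta ** 2) ** 2 for d in data) / (2 * sigma ** 2)

def metropolis(data, n=20000, step=0.3, seed=0):
    rng = random.Random(seed)
    theta, lp = 1.0, log_posterior(1.0, data)
    samples = []
    for _ in range(n):
        prop = theta + rng.gauss(0.0, step)                  # random-walk proposal
        lp_prop = log_posterior(prop, data)
        if rng.random() < math.exp(min(0.0, lp_prop - lp)):  # acceptance test
            theta, lp = prop, lp_prop
        samples.append(theta)
    return samples

data = [4.1, 3.9, 4.05, 3.95]          # noisy observations of f(theta) = 4
samples = metropolis(data)
posterior_mean = sum(samples[5000:]) / len(samples[5000:])  # discard burn-in
```

Each iteration costs one forward evaluation (`theta ** 2` here), which is why a cheaper multilevel surrogate for the acceptance test pays off so directly at scale.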
A Quantification of the 3D Modeling Capabilities of the KinectFusion Algorithm
2014-03-27
algorithm to achieve different results. To provide the rotation and varying sensor distances, a specialized setup was created from a motorized turntable... a Universal Serial Bus (USB) based stepper motor controller and the corresponding software. A picture of the turntable and sensor mount can be seen... algorithm. An NVIDIA Tesla K20 GPU was initially used for testing, as the K20 has 2496 CUDA cores and includes 5 GB of GDDR5 memory. Based on the basic
Energy Technology Data Exchange (ETDEWEB)
Sutter, J. de [Dept. of Cardiology, Ghent University Hospital (Belgium); Wiele, C. van de; Bondt, P. de; Dierckx, R. [Dept. of Nuclear Medicine, Ghent University Hospital (Belgium); D' Asseler, Y. [MEDISIP, ELIS, Ghent State University (Belgium); Backer, G. de [Dept. of Public Health, Ghent University Hospital (Belgium); Rigo, P. [Department of Nuclear Medicine, Centre Hospitalier Universitaire, Liege (Belgium)
2000-12-01
Infarct size assessed by myocardial single-photon emission tomography (SPET) imaging is an important prognostic parameter after myocardial infarction (MI). We compared three commercially available automatic quantification algorithms that make use of normal templates for the evaluation of infarct extent and severity in a large population of patients with remote MI. We studied 100 consecutive patients (80 men, mean age 63±11 years, mean LVEF 47%±15%) with a remote MI who underwent a resting technetium-99m tetrofosmin gated SPET study for infarct extent and severity quantification. The quantification algorithms used for comparison were a short-axis algorithm (Cedars-Emory quantitative analysis software, CEqual), a vertical long-axis algorithm (VLAX) and a three-dimensional fitting algorithm (Perfit). Semiquantitative visual assessment of infarct extent and severity using a 20-segment model with a 5-point score, and the relation of infarct extent and severity to rest LVEF determined by quantitative gated SPET (QGS), were used as standards to compare the different algorithms. Mean infarct extent was similar for visual analysis (30%±21%) and the VLAX algorithm (25%±17%), but CEqual (15%±11%) and Perfit (5%±6%) mean infarct extents were significantly lower than those of visual analysis and the VLAX algorithm. Moreover, infarct extent determined by Perfit was significantly lower than that determined by CEqual. Correlations between automatic and visual infarct extent and severity evaluations were moderate (r=0.47, P<0.0001 to r=0.62, P<0.0001) but comparable for all three algorithms. Correlations between LVEF and visual evaluation of infarct extent (r=-0.80, P<0.0001) and severity (r=-0.82, P<0.0001) were good, but correlations were significantly lower for all three algorithms (r=-0.48, P<0.0001 to r=-0.65, P<0.0001). Systematically lower correlations were found in non-anterior infarctions (n=69) and obese patients (BMI ≥30 kg/m², n=32
Energy Technology Data Exchange (ETDEWEB)
Taylor, J.C.; Matulis, C.E. [Commonwealth Scientific and Industrial Research Organization, Lucas Heights, NSW (Australia); Hinczak, I. [James Hardie Research Laboratories, Granville, NSW (Australia)
1999-12-01
Full text: Previous Rietveld quantifications of Portland cement clinkers used a fixed structural model for alite, the major clinker phase, which has a C₃S-type structure. Rietveld quantification studies using the SIROQUANT™ program on XRD powder patterns of the three NIST SRM standard clinkers 8486, 8487 and 8488 (CoKα) show that the best results are obtained using three C₃S models, one from each of the three crystal systems found in alite polymorphs. It is necessary to specify the phase crystallography well, owing to the extreme superposition of the alite and belite XRD lines. Unsatisfactory results can sometimes occur when only one or two of these C₃S polymorphs are used in the Rietveld quantifications. Polymorphism in belite (the clinker C₂S-type phase) and C₃A (the aluminate phase) is also included. The Rietveld full-profile XRD method is shown to be as powerful in quantifying the phases in Portland cement clinker as the microscope point-counting (MPC) method. The Rietveld method can, in addition, discriminate between phase polymorphs, and is far less labour-intensive than MPC. The Rietveld and MPC methods give the same results for the three NIST standard clinkers. The Rietveld oxide weight percentages, obtained from the Rietveld phase weight percentages by a reverse-Bogue calculation assuming the average chemical compositions of the clinker phases of Taylor (1990), agree well with the XRF oxide percentages. The MPC method agrees with the XRF oxide percentages as well as the Rietveld method does. Bogue weight percentages do not agree with the Rietveld or MPC figures, and transformation of the Bogue percentages to oxides by the reverse-Bogue program does not give as good a comparison with XRF as either the Rietveld or MPC method. Copyright (1999) Australian X-ray Analytical Association Inc.
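For context on the reverse-Bogue step, the forward Bogue calculation it inverts is a fixed linear map from oxide analysis to potential phase composition. The sketch below uses the standard ASTM C150 coefficients with illustrative oxide inputs (not the NIST clinker data); the paper's reverse computation goes the other way, from Rietveld phase fractions back to oxides:

```python
# Standard (forward) Bogue potential phase composition, ASTM C150 form.
# Inputs and outputs are weight percent; oxide values below are invented.

def bogue(cao, sio2, al2o3, fe2o3, so3=0.0):
    """Potential clinker phase composition (wt%) from oxide analysis (wt%)."""
    c3s = (4.071 * cao - 7.600 * sio2 - 6.718 * al2o3
           - 1.430 * fe2o3 - 2.852 * so3)
    c2s = 2.867 * sio2 - 0.7544 * c3s
    c3a = 2.650 * al2o3 - 1.692 * fe2o3
    c4af = 3.043 * fe2o3
    return {"C3S": c3s, "C2S": c2s, "C3A": c3a, "C4AF": c4af}

phases = bogue(cao=65.0, sio2=21.0, al2o3=5.5, fe2o3=3.0, so3=1.0)
```

Because this map assumes idealized phase stoichiometries, it cannot resolve the alite/belite polymorphism discussed above, which is why the Rietveld phase fractions and the Bogue figures disagree.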
Energy Technology Data Exchange (ETDEWEB)
Oliveira, Marcela de; Alvarez, Matheus; Alves, Allan F.F.; Miranda, Jose R.A., E-mail: marceladeoliveira@ig.com.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Hospital das Clinicas. Departamento de Doencas Tropicais e Diagnostico por Imagem
2012-12-15
High-resolution computed tomography (HRCT) has become the imaging examination most commonly used for evaluating the sequelae of paracoccidioidomycosis (PCM). Subjective evaluation of the radiological abnormalities found on HRCT images does not provide an accurate quantification, whereas computer-aided diagnosis systems produce a more objective assessment of the abnormal patterns found in HRCT images. This research therefore proposes algorithms, developed in the MATLAB® computing environment, that semi-automatically quantify pathologies such as pulmonary fibrosis and emphysema. The algorithm consists of selecting a region of interest (ROI) and, through the use of masks, density filters, and morphological operators, obtaining a quantification of the injured area relative to the area of healthy lung. The proposed method was tested on ten HRCT scans of patients with confirmed PCM. The results of the semi-automatic measurements were compared with subjective evaluations performed by a specialist in radiology, reaching an agreement of 80% for emphysema and 58% for fibrosis. (author)
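The ROI-based quantification described above reduces to a masked density-threshold count. A minimal sketch (the Hounsfield-unit thresholds below are common literature values assumed for illustration, not necessarily those used by the authors):

```python
import numpy as np

def quantify_roi(hu, roi_mask, emph_thr=-950.0, fib_thr=-500.0):
    """Fractions of ROI pixels flagged as emphysema-like (hyperlucent) and
    fibrosis-like (abnormally dense, below bone). The HU thresholds are
    common literature values, assumed here for illustration."""
    roi = hu[roi_mask]
    emphysema = float(np.mean(roi < emph_thr))                 # hyperlucent
    fibrosis = float(np.mean((roi > fib_thr) & (roi < 200.0)))  # dense, non-bone
    return emphysema, fibrosis
```

In a full pipeline these fractions would be computed per ROI after the masking and morphological clean-up steps mentioned in the abstract.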
A practical algorithm for distribution state estimation including renewable energy sources
Energy Technology Data Exchange (ETDEWEB)
Niknam, Taher [Electronic and Electrical Department, Shiraz University of Technology, Modares Blvd., P.O. 71555-313, Shiraz (Iran); Firouzi, Bahman Bahmani [Islamic Azad University Marvdasht Branch, Marvdasht (Iran)
2009-11-15
Renewable energy is energy that is in continuous supply over time. These energy sources are commonly divided into five principal categories: the sun, the wind, flowing water, biomass, and heat from within the earth. According to studies carried out by research institutes, about 25% of new generation will come from Renewable Energy Sources (RESs) in the near future. It is therefore necessary to study the impact of RESs on power systems, especially on distribution networks. This paper presents a practical Distribution State Estimation (DSE) including RESs and some practical considerations. The proposed algorithm is based on the combination of the Nelder-Mead simplex search and Particle Swarm Optimization (PSO) algorithms, called PSO-NM. The algorithm estimates load and RES output values by a Weighted Least-Squares (WLS) approach. The practical considerations include var compensators, Voltage Regulators (VRs), and Under-Load Tap Changer (ULTC) transformers, which usually have nonlinear and discrete characteristics, as well as unbalanced three-phase power flow equations. Comparison results with other evolutionary optimization algorithms, such as original PSO, Honey Bee Mating Optimization (HBMO), Neural Networks (NNs), Ant Colony Optimization (ACO), and the Genetic Algorithm (GA), on a test system demonstrate that PSO-NM is extremely effective and efficient for DSE problems. (author)
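The PSO-NM hybrid can be sketched as a standard PSO global search whose best particle seeds a Nelder-Mead polish (here via SciPy). The inertia and acceleration coefficients below are generic textbook values rather than the paper's tuning, and the WLS objective is abstracted to a generic function `f`:

```python
import numpy as np
from scipy.optimize import minimize

def pso_nm(f, bounds, n_particles=30, iters=100, seed=0):
    """Global PSO search followed by a Nelder-Mead local polish."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, float).T
    dim = len(lo)
    x = rng.uniform(lo, hi, (n_particles, dim))
    v = np.zeros_like(x)
    pbest = x.copy()
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + cognitive + social terms (generic coefficients)
        v = 0.7 * v + 1.5 * r1 * (pbest - x) + 1.5 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    res = minimize(f, g, method="Nelder-Mead")  # simplex polish of the PSO best
    return res.x, res.fun
```

In the paper's setting, `f` would be the WLS mismatch between measured and estimated quantities; here it can be exercised on any test function.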
Algorithms and Programs for Strong Gravitational Lensing In Kerr Space-time Including Polarization
Chen, Bin; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie; Maddumage, Prasad
2015-05-01
Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.
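KERTAP itself integrates null geodesics in Kerr space-time; as a much simpler illustration of tracing light around a black hole, the sketch below computes the bending angle for a non-spinning (Schwarzschild) hole by integrating the photon orbit equation. This is an assumption-laden stand-in for exposition, not KERTAP's algorithm:

```python
import numpy as np
from scipy.integrate import quad

def deflection_angle(M, b):
    """Total bending angle of a photon with impact parameter b around a
    non-spinning (Schwarzschild) black hole of mass M, in units G = c = 1.
    The orbit equation gives alpha = 2*I - pi with
    I = integral_0^{u1} du / sqrt(1/b^2 - u^2 + 2 M u^3),
    where u = 1/r and u1 is the closest-approach root."""
    # closest approach: smallest positive root of 2M u^3 - u^2 + 1/b^2 = 0
    roots = np.roots([2.0 * M, -1.0, 0.0, 1.0 / b**2])
    u1 = min(r.real for r in roots if abs(r.imag) < 1e-9 and r.real > 0)
    f = lambda u: 1.0 / b**2 - u**2 + 2.0 * M * u**3
    # substitute u = u1*(1 - t^2) to remove the inverse-sqrt endpoint singularity
    integrand = lambda t: 2.0 * u1 * t / np.sqrt(max(f(u1 * (1.0 - t**2)), 1e-300))
    I, _ = quad(integrand, 0.0, 1.0)
    return 2.0 * I - np.pi
```

For weak fields the result should approach the familiar 4M/b; the strong-field corrections grow as b shrinks toward the photon sphere.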
ALGORITHMS AND PROGRAMS FOR STRONG GRAVITATIONAL LENSING IN KERR SPACE-TIME INCLUDING POLARIZATION
Energy Technology Data Exchange (ETDEWEB)
Chen, Bin; Maddumage, Prasad [Research Computing Center, Department of Scientific Computing, Florida State University, Tallahassee, FL 32306 (United States); Kantowski, Ronald; Dai, Xinyu; Baron, Eddie, E-mail: bchen3@fsu.edu [Homer L. Dodge Department of Physics and Astronomy, University of Oklahoma, Norman, OK 73019 (United States)
2015-05-15
Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), then a photon’s polarization plane will be rotated by the gravitational Faraday effect. The observed X-ray flux and polarization will then be influenced significantly by the strong gravity field near the source. Consequently, linear gravitational lensing theory is inadequate for such extreme circumstances. We present simple algorithms computing the strong lensing effects of Kerr black holes, including the effects on polarization. Our algorithms are realized in a program “KERTAP” in two versions: MATLAB and Python. The key ingredients of KERTAP are a graphic user interface, a backward ray-tracing algorithm, a polarization propagator dealing with gravitational Faraday rotation, and algorithms computing observables such as flux magnification and polarization angles. Our algorithms can be easily realized in other programming languages such as FORTRAN, C, and C++. The MATLAB version of KERTAP is parallelized using the MATLAB Parallel Computing Toolbox and the Distributed Computing Server. The Python code was sped up using Cython and supports full implementation of MPI using the “mpi4py” package. As an example, we investigate the inclination angle dependence of the observed polarization and the strong lensing magnification of AGN X-ray emission. We conclude that it is possible to perform complex numerical-relativity related computations using interpreted languages such as MATLAB and Python.
Directory of Open Access Journals (Sweden)
Yazan M. Alomari
2014-01-01
Full Text Available Segmentation and counting of blood cells are an important step in extracting features to diagnose specific diseases such as malaria or leukemia. The manual counting of white blood cells (WBCs) and red blood cells (RBCs) in microscopic images is an extremely tedious, time-consuming, and inaccurate process; automatic analysis allows hematology experts to work faster and more accurately. The proposed method uses an iterative structured circle detection algorithm for the segmentation and counting of WBCs and RBCs. The separation of WBCs from RBCs was achieved by thresholding, and specific preprocessing steps were developed for each cell type. Counting was performed for each image using the proposed method based on modified circle detection, which counted the cells automatically. Several modifications were made to the basic randomized circle detection (RCD) algorithm to solve the initialization problem, detect irregular circles (cells), select the optimal circle from the candidate circles, and determine the number of iterations in a fully dynamic way so as to enhance detection accuracy and running time. The validation method used to determine segmentation accuracy was a quantitative analysis that included Precision, Recall, and F-measure tests. The average accuracy of the proposed method was 95.3% for RBCs and 98.4% for WBCs.
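The randomized circle detection idea — fit a circle to three random edge points and keep the candidate supported by the most edge points — can be sketched as follows. This is a generic RCD baseline, without the paper's modifications for irregular cells or dynamic iteration counts:

```python
import numpy as np

def circle_from_3pts(p1, p2, p3):
    """Circumscribed circle (center, radius) through three points."""
    (ax, ay), (bx, by), (cx, cy) = p1, p2, p3
    d = 2.0 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    if abs(d) < 1e-9:
        return None  # (nearly) collinear sample, no unique circle
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    return ux, uy, float(np.hypot(ax - ux, ay - uy))

def rcd(edge_pts, n_iter=500, tol=1.5, min_support=0.5, rng=None):
    """Randomized circle detection: sample 3 edge points, keep the candidate
    circle supported by the largest fraction of all edge points."""
    if rng is None:
        rng = np.random.default_rng(0)
    pts = np.asarray(edge_pts, float)
    best, best_support = None, 0.0
    for _ in range(n_iter):
        sample = pts[rng.choice(len(pts), 3, replace=False)]
        circ = circle_from_3pts(*sample)
        if circ is None:
            continue
        ux, uy, r = circ
        dist = np.abs(np.hypot(pts[:, 0] - ux, pts[:, 1] - uy) - r)
        support = float(np.mean(dist < tol))
        if support > best_support:
            best, best_support = circ, support
        if best_support >= min_support:
            break  # good enough: stop early
    return best
```

Repeated calls with detected circles removed would yield a cell count, which is roughly the iterative scheme the abstract describes.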
Hung, Peter W; Paik, David S; Napel, Sandy; Yee, Judy; Jeffrey, R Brooke; Steinauer-Gebauer, Andreas; Min, Juno; Jathavedam, Ashwin; Beaulieu, Christopher F
2002-02-01
Three bowel distention-measuring algorithms for use in computed tomographic (CT) colonography were developed, validated in phantoms, and applied to a human CT colonographic data set. The three algorithms are the cross-sectional area method, the moving spheres method, and the segmental volume method. Each algorithm effectively quantified distention, but accuracy varied between methods. Clinical feasibility was demonstrated. Depending on the desired spatial resolution and accuracy, each algorithm can quantitatively depict colonic diameter in CT colonography.
Energy Technology Data Exchange (ETDEWEB)
Niknam, Taher; Meymand, Hamed Zeinoddini; Nayeripour, Majid [Electrical and Electronic Engineering Department, Shiraz University of Technology, Shiraz (Iran)
2010-08-15
Fuel cell power plants (FCPPs) have received a great deal of consideration in recent years. The continuing growth of power demand, together with environmental constraints, is increasing interest in using FCPPs in power systems. Since FCPPs are usually connected to the distribution network, their effect on the distribution network is greater than on other sections of the power system. One of the most important issues in distribution networks is optimal operation management (OOM), which can be affected by FCPPs. This paper proposes a new approach for optimal operation management of distribution networks including FCPPs. We consider the total electrical energy losses, the total electrical energy cost, and the total emission as the objective functions to be minimized. Since optimal operation of distribution networks is a nonlinear mixed-integer optimization problem, the optimal solution is obtained through an evolutionary method. We use a new evolutionary algorithm based on Fuzzy Adaptive Particle Swarm Optimization (FAPSO) to solve the optimal operation problem and compare this method with the Genetic Algorithm (GA), Particle Swarm Optimization (PSO), Differential Evolution (DE), Ant Colony Optimization (ACO), and Tabu Search (TS) on two distribution test feeders. (author)
Study of Power Flow Algorithm of AC/DC Distribution System including VSC-MTDC
Directory of Open Access Journals (Sweden)
Haifeng Liang
2015-08-01
Full Text Available In recent years, distributed generation and a large number of sensitive AC and DC loads have been connected to distribution networks, which introduces a series of challenges for distribution network operators (DNOs). In addition, the advantages of DC distribution networks, such as energy conservation and emission reduction, mean that the voltage source converter based multi-terminal direct current (VSC-MTDC) technology for AC/DC distribution systems demonstrates great potential, hence drawing growing research interest. In this paper, considering the losses of the reactor, the filter, and the converter, a mathematical model of VSC-HVDC for load flow analysis is derived. An AC/DC distribution network architecture is built, based on which the differences in the modified equations of the VSC-MTDC-based network under different control modes are analyzed. Corresponding interface functions under five control modes are provided, and a backward/forward sweep iterative algorithm for power flow calculation of the AC/DC distribution system including VSC-MTDC is proposed. Finally, by calculating the power flow of a modified IEEE 14-bus AC/DC distribution network, the efficiency and validity of the model and algorithm are evaluated. With various distributed generations connected to the network at appropriate locations, the power flow results show that network losses and the utilization of transmission networks are effectively reduced.
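For a purely AC radial feeder, the backward/forward sweep at the core of such algorithms can be sketched as follows (the VSC-MTDC interface functions are omitted, and the bus data in the test are illustrative):

```python
import numpy as np

def bfs_power_flow(parent, z, s_load, v0=1.0, tol=1e-8, max_iter=50):
    """Backward/forward sweep load flow on a radial feeder.
    parent[i]: upstream bus of bus i (bus 0 is the slack; parent[0] unused),
    z[i]: impedance of the branch feeding bus i (pu),
    s_load[i]: complex power drawn at bus i (pu).
    Buses must be numbered so that parent[i] < i."""
    z = np.asarray(z, complex)
    s_load = np.asarray(s_load, complex)
    n = len(s_load)
    v = np.full(n, complex(v0))
    for _ in range(max_iter):
        i_branch = np.conj(s_load / v)        # load currents at each bus
        for i in range(n - 1, 0, -1):         # backward sweep: sum branch currents
            i_branch[parent[i]] += i_branch[i]
        v_new = v.copy()
        for i in range(1, n):                 # forward sweep: apply voltage drops
            v_new[i] = v_new[parent[i]] - z[i] * i_branch[i]
        if np.max(np.abs(v_new - v)) < tol:
            return v_new
        v = v_new
    return v
```

The AC/DC extension in the paper wraps sweeps like this around converter interface equations that depend on the active VSC control mode.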
Paolini, Lucia
2016-06-01
Serum protein electrophoresis (SPE), serum immunofixation (s-IFE), free light chain (FLC) measurement, and nephelometric measurement of total immunoglobulin in serum (IgTot) are some of the laboratory tests required for the management of plasma cell proliferative disorders. The monoclonal protein is usually visible on SPE as a spike (M-spike) in the γ region, and the derived densitogram is used to quantify it relative to the serum total protein concentration. IgA M-protein, however, often migrates in the β region on SPE, and its quantification can be masked by other serum proteins that migrate in this region. The immunoassay Hevylite™ (heavy/light chain, HLC) seems to solve this problem: it quantifies the involved/uninvolved isotype, calculating the IgAκ/IgAλ ratio, considered indicative of clonal proliferation. However, this test seems redundant in the case of artifacts on SPE such as obvious hemolysis or lipemia, or if the IgA M-spike is clearly visible in the β region. In conclusion, whereas the IgA HLC assay does not represent an alternative to SPE and s-IFE in the diagnostic patient workup, it may prove to be an alternative to SPE, s-IFE, and total IgA quantification in risk stratification and the evaluation of response to therapy in patients affected by multiple myeloma (MM) and other monoclonal plasma cell proliferative disorders.
DEFF Research Database (Denmark)
Sunyer Pinya, Maria Antonia; Madsen, Henrik; Rosbjerg, Dan
2014-01-01
Climate change impact studies are subject to numerous uncertainties and assumptions. One of the main sources of uncertainty arises from the interpretation of climate model projections. Probabilistic procedures based on multimodel ensembles have been suggested in the literature to quantify this source of uncertainty. However, the interpretation of multimodel ensembles remains challenging, and several assumptions are often required in the uncertainty quantification of climate model projections. For example, most methods assume that the climate models are independent and/or that changes in climate model biases are negligible. This study develops a Bayesian framework that accounts for model dependencies and changes in model biases and compares it to estimates calculated based on a frequentist approach. The Bayesian framework is used to investigate the effects of the two assumptions.
Dicotyledon Weed Quantification Algorithm for Selective Herbicide Application in Maize Crops
DEFF Research Database (Denmark)
Laursen, Morten Stigaard; Jørgensen, Rasmus Nyholm; Midtiby, Henrik Skov
2016-01-01
Instead of detecting weed patches, weed density can be estimated at a sub-patch level, through which even the identification of a single plant is possible. The aim of this study is to adapt the monocot and dicot coverage ratio vision (MoDiCoVi) algorithm to estimate dicotyledon leaf cover, perform grid spraying in real time, and present initial results in terms of potential herbicide savings in maize. The authors designed and executed an automated, large-scale field trial supported by the Armadillo autonomous tool carrier robot.
Directory of Open Access Journals (Sweden)
Yang Youjin
2016-01-01
Full Text Available Zika virus is spread by mosquitoes, and infection carries a high probability of microcephaly. The virus was first found in Uganda in 1947, but it has since broken out all around the world, especially in North and South America. Therefore, the Apriori algorithm and decision trees were used to compare polyprotein sequences of Zika virus with those of other flaviviruses: yellow fever, West Nile virus, dengue virus, and tick-borne encephalitis. In this way, similarities and dissimilarities among them were identified.
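The Apriori algorithm itself reduces to level-wise candidate generation and support counting over transactions. A minimal sketch on generic data (in a sequence-comparison setting, k-mers of the polyprotein sequences would play the role of items; the transactions below are illustrative):

```python
from itertools import combinations

def apriori(transactions, min_support=0.5):
    """Frequent-itemset mining: level-wise candidate generation + support count."""
    n = len(transactions)
    sets = [frozenset(t) for t in transactions]
    items = {i for t in sets for i in t}
    # frequent 1-itemsets
    freq = {frozenset([i]): sum(i in t for t in sets) / n for i in items}
    freq = {s: sup for s, sup in freq.items() if sup >= min_support}
    result = dict(freq)
    k = 2
    while freq:
        # join frequent (k-1)-itemsets to build k-item candidates
        cands = {a | b for a, b in combinations(freq, 2) if len(a | b) == k}
        freq = {}
        for c in cands:
            sup = sum(c <= t for t in sets) / n
            if sup >= min_support:
                freq[c] = sup
        result.update(freq)
        k += 1
    return result
```

Itemsets frequent across several virus sequences would indicate shared motifs, i.e. the similarities the abstract refers to.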
Energy Technology Data Exchange (ETDEWEB)
Li, Weixuan; Lin, Guang; Li, Bing
2016-09-01
A well-known challenge in uncertainty quantification (UQ) is the "curse of dimensionality". However, many high-dimensional UQ problems are essentially low-dimensional, because the randomness of the quantity of interest (QoI) is caused only by uncertain parameters varying within a low-dimensional subspace, known as the sufficient dimension reduction (SDR) subspace. Motivated by this observation, we propose and demonstrate in this paper an inverse regression-based UQ approach (IRUQ) for high-dimensional problems. Specifically, we use an inverse regression procedure to estimate the SDR subspace and then convert the original problem to a low-dimensional one, which can be efficiently solved by building a response surface model such as a polynomial chaos expansion. The novelty and advantages of the proposed approach are seen in its computational efficiency and practicality. Compared with Monte Carlo, the traditionally preferred approach for high-dimensional UQ, IRUQ with a comparable cost generally gives much more accurate solutions even for high-dimensional problems, and even when the dimension reduction is not exactly sufficient. Theoretically, IRUQ is proved to converge twice as fast as the approach it uses to seek the SDR subspace. For example, while a sliced inverse regression method converges to the SDR subspace at the rate of $O(n^{-1/2})$, the corresponding IRUQ converges at $O(n^{-1})$. IRUQ also provides several desired conveniences in practice. It is non-intrusive, requiring only a simulator to generate realizations of the QoI, and there is no need to compute the high-dimensional gradient of the QoI. Finally, error bars can be derived for the estimation results reported by IRUQ.
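The inverse regression step that estimates the SDR subspace can be sketched as a textbook sliced inverse regression (not necessarily the authors' exact procedure): whiten the predictors, average them within slices of the sorted response, and take the top eigenvectors of the weighted covariance of the slice means:

```python
import numpy as np

def sir_directions(X, y, n_slices=10, n_dirs=1):
    """Sliced inverse regression estimate of the SDR subspace."""
    n, p = X.shape
    Xc = X - X.mean(axis=0)
    # whiten the predictors: Z = (X - mean) @ Sigma^{-1/2}
    w, V = np.linalg.eigh(np.cov(Xc, rowvar=False))
    inv_sqrt = V @ np.diag(w ** -0.5) @ V.T
    Z = Xc @ inv_sqrt
    # slice on the sorted response and average Z within each slice
    M = np.zeros((p, p))
    for chunk in np.array_split(np.argsort(y), n_slices):
        m = Z[chunk].mean(axis=0)
        M += (len(chunk) / n) * np.outer(m, m)
    # top eigenvectors of M, mapped back to the original coordinates
    _, vecs = np.linalg.eigh(M)
    dirs = inv_sqrt @ vecs[:, ::-1][:, :n_dirs]
    return dirs / np.linalg.norm(dirs, axis=0)
```

Once the directions are estimated, the QoI can be regressed on the projected inputs, which is the low-dimensional surrogate-building step the abstract describes.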
Energy Technology Data Exchange (ETDEWEB)
Kurata, Akira; Kono, Atsushi; Coenen, Adriaan; Saru-Chelu, Raluca G.; Krestin, Gabriel P. [Erasmus University Medical Center, Department of Radiology, Rotterdam (Netherlands); Sakamoto, Tsuyoshi [AZE inc, Development Division, Chiyoda, Tokyo (Japan); Kido, Teruhito; Mochizuki, Teruhito [Ehime University Graduate School of Medicine, Department of Radiology, Toon, Ehime (Japan); Higashino, Hiroshi [Yotsuba Circulation Clinic, Department of Radiology, Matsuyama, Ehime (Japan); Abe, Mitsunori [Yotsuba Circulation Clinic, Department of Cardiology, Matsuyama, Ehime (Japan); Feyter, Pim J. de; Nieman, Koen [Erasmus University Medical Center, Department of Radiology, Rotterdam (Netherlands); Erasmus University Medical Center, Department of Cardiology, Rotterdam (Netherlands)
2015-01-15
The purpose of this study was to estimate the myocardial area at risk (MAAR) using coronary computed tomography angiography (CTA) and Voronoi algorithm-based myocardial segmentation, in comparison with single-photon emission computed tomography (SPECT). Thirty-four patients with coronary artery disease underwent 128-slice coronary CTA, stress/rest thallium-201 SPECT, and coronary angiography (CAG). CTA-based MAAR was defined as the sum of all CAG stenosis (>50%)-related territories, expressed as a ratio of the left ventricular volume. Using automated quantification software (17-segment model, 5-point scale), SPECT-based MAAR was defined as the number of segments with a score above zero relative to the total 17 segments, by summed stress score (SSS), summed difference score (SDS) map, and comprehensive SPECT interpretation using either SSS or SDS, whichever best correlated with the CAG findings (SSS/SDS). Results were compared using Pearson's correlation coefficient. Forty-nine stenoses were observed in 102 major coronary territories. The mean value of CTA-based MAAR was 28.3 ± 14.0%. SSS-based, SDS-based, and SSS/SDS-based MAAR was 30.1 ± 6.1%, 20.1 ± 15.8%, and 26.8 ± 15.7%, respectively. CTA-based MAAR was significantly related to SPECT-based MAAR (r = 0.531 for SSS; r = 0.494 for SDS; r = 0.814 for SSS/SDS; P < 0.05 in each). CTA-based Voronoi algorithm myocardial segmentation reliably quantifies SPECT-based MAAR. (orig.)
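A Voronoi-based territory assignment amounts to labeling each myocardial voxel with its nearest coronary centerline point. A minimal sketch using a k-d tree (illustrative geometry, not the software used in the study):

```python
import numpy as np
from scipy.spatial import cKDTree

def voronoi_territories(voxels, centerline_pts, artery_labels):
    """Assign each myocardial voxel to the nearest coronary centerline point
    (a discrete Voronoi partition) and return per-artery volume fractions."""
    artery_labels = np.asarray(artery_labels)
    _, idx = cKDTree(centerline_pts).query(voxels)
    vox_labels = artery_labels[idx]
    return {int(a): float(np.mean(vox_labels == a))
            for a in np.unique(artery_labels)}
```

Summing the fractions of the territories downstream of stenosed arteries would give the CTA-based MAAR ratio described above.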
Energy Technology Data Exchange (ETDEWEB)
Togan, Vedat [Department of Civil Engineering, Karadeniz Technical University, Trabzon (Turkey); Karadeniz, Halil, E-mail: h.karadeniz@tudelft.n [Faculty of Civil Engineering and Geosciences, Delft University of Technology, Delft (Netherlands); Daloglu, Ayse T. [Department of Civil Engineering, Karadeniz Technical University, Trabzon (Turkey)
2010-08-15
A reliability analysis is usually required to carry out design optimization of large structural systems, to incorporate the uncertainties in parameters such as material properties, external loads, and manufacturing conditions. This procedure is called Reliability Based Design Optimization (RBDO), and it requires a structural analysis program, a reliability analysis, and optimization tools to be coupled effectively. In this paper, an integrated framework is proposed to implement the RBDO of offshore towers. It offers two distinct approaches to evaluate the probabilistic constraints, namely the Reliability-Index based Approach (RIA) and the Performance Measure Approach (PMA). The proposed framework also provides Sequential Quadratic Programming (SQP) and Differential Evolution (DE) as optimization methods. Examples of monopod, tripod, and jacket towers under reliability constraints based on limit states of critical stress, buckling, and natural frequency are presented to demonstrate the applicability of the implemented algorithm to realistic structural systems.
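The probabilistic constraints in RBDO revolve around failure probabilities and the associated reliability index beta = -Phi^{-1}(p_f). A crude Monte Carlo sketch for a linear resistance-minus-load limit state (the distributions are illustrative, not the paper's tower limit states):

```python
import numpy as np
from scipy.stats import norm

def mc_reliability(g, sample, n=200_000, seed=0):
    """Crude Monte Carlo estimate of the failure probability P[g(X) < 0]
    and the associated reliability index beta = -Phi^{-1}(p_f)."""
    rng = np.random.default_rng(seed)
    x = sample(rng, n)
    pf = float(np.mean(g(x) < 0.0))
    return pf, float(-norm.ppf(pf))

# Illustrative limit state: resistance R ~ N(10, 1.5) vs load S ~ N(5, 1);
# g = R - S, so the exact beta is 5 / sqrt(1.5**2 + 1**2) ≈ 2.77
sample = lambda rng, n: rng.normal([10.0, 5.0], [1.5, 1.0], (n, 2))
g = lambda x: x[:, 0] - x[:, 1]
pf, beta = mc_reliability(g, sample)
```

In RIA, a constraint of the form beta >= beta_target would be evaluated from quantities like these inside the optimization loop.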
Dicotyledon Weed Quantification Algorithm for Selective Herbicide Application in Maize Crops.
Laursen, Morten Stigaard; Jørgensen, Rasmus Nyholm; Midtiby, Henrik Skov; Jensen, Kjeld; Christiansen, Martin Peter; Giselsson, Thomas Mosgaard; Mortensen, Anders Krogh; Jensen, Peter Kryger
2016-11-04
The stricter legislation within the European Union for the regulation of herbicides that are prone to leaching causes a greater economic burden on the agricultural industry through taxation. Owing to the increased economic burden, research in reducing herbicide usage has been prompted. High-resolution images from digital cameras support the studying of plant characteristics. These images can also be utilized to analyze shape and texture characteristics for weed identification. Instead of detecting weed patches, weed density can be estimated at a sub-patch level, through which even the identification of a single plant is possible. The aim of this study is to adapt the monocot and dicot coverage ratio vision (MoDiCoVi) algorithm to estimate dicotyledon leaf cover, perform grid spraying in real time, and present initial results in terms of potential herbicide savings in maize. The authors designed and executed an automated, large-scale field trial supported by the Armadillo autonomous tool carrier robot. The field trial consisted of 299 maize plots. Half of the plots (parcels) were planned with additional seeded weeds; the other half were planned with naturally occurring weeds. The in-situ evaluation showed that, compared to conventional broadcast spraying, the proposed method can reduce herbicide usage by 65% without measurable loss in biological effect.
Dicotyledon Weed Quantification Algorithm for Selective Herbicide Application in Maize Crops
Laursen, Morten Stigaard; Jørgensen, Rasmus Nyholm; Midtiby, Henrik Skov; Jensen, Kjeld; Christiansen, Martin Peter; Giselsson, Thomas Mosgaard; Mortensen, Anders Krogh; Jensen, Peter Kryger
2016-01-01
The stricter legislation within the European Union for the regulation of herbicides that are prone to leaching causes a greater economic burden on the agricultural industry through taxation. Owing to the increased economic burden, research in reducing herbicide usage has been prompted. High-resolution images from digital cameras support the studying of plant characteristics. These images can also be utilized to analyze shape and texture characteristics for weed identification. Instead of detecting weed patches, weed density can be estimated at a sub-patch level, through which even the identification of a single plant is possible. The aim of this study is to adapt the monocot and dicot coverage ratio vision (MoDiCoVi) algorithm to estimate dicotyledon leaf cover, perform grid spraying in real time, and present initial results in terms of potential herbicide savings in maize. The authors designed and executed an automated, large-scale field trial supported by the Armadillo autonomous tool carrier robot. The field trial consisted of 299 maize plots. Half of the plots (parcels) were planned with additional seeded weeds; the other half were planned with naturally occurring weeds. The in-situ evaluation showed that, compared to conventional broadcast spraying, the proposed method can reduce herbicide usage by 65% without measurable loss in biological effect. PMID:27827908
Dicotyledon Weed Quantification Algorithm for Selective Herbicide Application in Maize Crops
Directory of Open Access Journals (Sweden)
Morten Stigaard Laursen
2016-11-01
Full Text Available The stricter legislation within the European Union for the regulation of herbicides that are prone to leaching causes a greater economic burden on the agricultural industry through taxation. Owing to the increased economic burden, research in reducing herbicide usage has been prompted. High-resolution images from digital cameras support the studying of plant characteristics. These images can also be utilized to analyze shape and texture characteristics for weed identification. Instead of detecting weed patches, weed density can be estimated at a sub-patch level, through which even the identification of a single plant is possible. The aim of this study is to adapt the monocot and dicot coverage ratio vision (MoDiCoVi) algorithm to estimate dicotyledon leaf cover, perform grid spraying in real time, and present initial results in terms of potential herbicide savings in maize. The authors designed and executed an automated, large-scale field trial supported by the Armadillo autonomous tool carrier robot. The field trial consisted of 299 maize plots. Half of the plots (parcels) were planned with additional seeded weeds; the other half were planned with naturally occurring weeds. The in-situ evaluation showed that, compared to conventional broadcast spraying, the proposed method can reduce herbicide usage by 65% without measurable loss in biological effect.
Energy Technology Data Exchange (ETDEWEB)
Nielsen, Joseph, E-mail: joseph.nielsen@inl.gov [Idaho National Laboratory, 1955 N. Fremont Avenue, P.O. Box 1625, Idaho Falls, ID 83402 (United States); University of Idaho, Department of Mechanical Engineering and Nuclear Engineering Program, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States); Tokuhiro, Akira [University of Idaho, Department of Mechanical Engineering and Nuclear Engineering Program, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States); Hiromoto, Robert [University of Idaho, Department of Computer Science, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States); Tu, Lei [University of Idaho, Department of Mechanical Engineering and Nuclear Engineering Program, 1776 Science Center Drive, Idaho Falls, ID 83402-1575 (United States)
2015-12-15
Highlights: • Dynamic Event Tree solutions have been optimized using the Branch-and-Bound algorithm. • A 60% efficiency in optimization has been achieved. • Modeling uncertainty within a risk-informed framework is evaluated. - Abstract: Evaluation of the impacts of uncertainty and sensitivity in modeling presents a significant set of challenges, particularly for high-fidelity modeling. Computational costs and the validation of models create a need for cost-effective decision making with regard to experiment design. Experiments designed to validate computational models can be used to reduce uncertainty in the physical model. In some cases, large uncertainty in a particular aspect of the model may or may not have a large impact on the final results. For example, modeling of a relief valve may involve large uncertainty; however, the actual effect on the final peak clad temperature in a reactor transient may be small, and the large uncertainty with respect to valve modeling may be considered acceptable. Additionally, the ability to determine the adequacy of a model and the validation supporting it should be considered within a risk-informed framework. Low-fidelity modeling with large uncertainty may be considered adequate if the uncertainty is acceptable with respect to risk. In other words, models that are used to evaluate the probability of failure should be evaluated more rigorously, with the intent of increasing the safety margin. Probabilistic risk assessment (PRA) techniques have traditionally been used to identify accident conditions and transients. Traditional classical event tree methods utilize analysts' knowledge and experience to identify the important timing of events in coordination with thermal-hydraulic modeling. These methods lack the capability to evaluate complex dynamic systems, in which the time and energy scales associated with transient events may vary as a function of transition times and energies to arrive at a different physical state.
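The branch-and-bound optimization mentioned in the highlights prunes any branch whose bound cannot beat the incumbent solution. The generic 0/1-knapsack sketch below illustrates that pruning logic (an illustrative stand-in, not the dynamic-event-tree implementation):

```python
def knapsack_bb(values, weights, capacity):
    """0/1 knapsack via depth-first branch-and-bound with an LP-relaxation bound."""
    n = len(values)
    # sort items by value density so the fractional bound is tight
    order = sorted(range(n), key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = 0

    def bound(i, cap, val):
        # fractional (LP-relaxation) upper bound using the remaining items
        while i < n and w[i] <= cap:
            cap -= w[i]
            val += v[i]
            i += 1
        return val + (v[i] * cap / w[i] if i < n else 0)

    def dfs(i, cap, val):
        nonlocal best
        best = max(best, val)
        if i == n or bound(i, cap, val) <= best:
            return  # prune: this branch cannot beat the incumbent
        if w[i] <= cap:
            dfs(i + 1, cap - w[i], val + v[i])  # take item i
        dfs(i + 1, cap, val)                    # skip item i

    dfs(0, capacity, 0)
    return best
```

In the paper's setting the "items" are event-tree branch decisions and the bound comes from the physics model, but the prune-on-bound structure is the same.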
Skeletonization algorithm-based blood vessel quantification using in vivo 3D photoacoustic imaging
Meiburger, K. M.; Nam, S. Y.; Chung, E.; Suggs, L. J.; Emelianov, S. Y.; Molinari, F.
2016-11-01
Blood vessels are the only system to provide nutrients and oxygen to every part of the body. Many diseases have significant effects on blood vessel formation, so the vascular network can be a cue for assessing malignant tumors and ischemic tissues. Various imaging techniques can visualize blood vessel structure, but their applications are often constrained by expensive costs, contrast agents, ionizing radiation, or a combination of the above. Photoacoustic imaging combines the high contrast and spectroscopy-based specificity of optical imaging with the high spatial resolution of ultrasound imaging, and image contrast depends on optical absorption. This enables the detection of light-absorbing chromophores such as hemoglobin with a greater penetration depth than purely optical techniques. We present here a skeletonization algorithm for vessel architectural analysis using non-invasive photoacoustic 3D images acquired without the administration of any exogenous contrast agents. 3D photoacoustic images were acquired from rats (n = 4) at two different time points: before and after a burn surgery. A skeletonization technique based on the application of a vesselness filter and medial axis extraction is proposed to extract the vessel structure from the image data, and six vascular parameters (number of vascular trees (NT), vascular density (VD), number of branches (NB), 2D distance metric (DM), inflection count metric (ICM), and sum of angles metric (SOAM)) were calculated from the skeleton. The parameters were compared (1) in locations with and without the burn wound on the same day and (2) in the same anatomic location before (control) and after the burn surgery. Four of the six descriptors (VD, NB, DM, ICM) were statistically different, suggesting that the proposed approach can provide a quantitative characterization of the vascular network from 3D photoacoustic images without any exogenous contrast agent and can assess microenvironmental changes related to the burn wound.
Mohanty, P.; Rixen, D. J.
2004-08-01
Operational modal analysis procedures are efficient techniques to identify modal properties of structures excited through unknown random noise produced during operation. In many practical cases, harmonic excitations are often present in addition to the white-noise and, if the harmonic frequency is close to structural frequencies, standard identification techniques fail. Here, a method is presented to take into account the harmonic excitations while doing modal parameter identification for operational modal analysis (OMA). The proposed technique is based on the Ibrahim Time Domain method and explicitly includes the harmonic frequencies known a priori. Therefore, the modified technique allows proper identification of eigenfrequencies and modal damping even when harmonic excitation frequencies are close to the natural frequencies of the structures. Experimental results are shown in the presence of multi-harmonic loads for a steel plate to validate the method.
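The core idea of exploiting a priori knowledge of harmonic frequencies can be illustrated with a simple least-squares projection. This is a hypothetical pre-processing sketch, not the actual modified Ibrahim Time Domain method (which builds the harmonics into the identification itself): sin/cos terms at the known frequencies are fitted and subtracted, leaving the structural response.

```python
import numpy as np

def remove_known_harmonics(y, t, freqs_hz):
    """Least-squares fit and subtract sinusoids at known (a priori)
    harmonic frequencies; a simplified stand-in for explicitly
    accounting for harmonic excitation in OMA."""
    cols = []
    for f in freqs_hz:
        cols.append(np.sin(2 * np.pi * f * t))
        cols.append(np.cos(2 * np.pi * f * t))
    A = np.column_stack(cols)
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return y - A @ coef

t = np.linspace(0, 2.0, 2000)
rng = np.random.default_rng(0)
structural = np.exp(-0.5 * t) * np.sin(2 * np.pi * 7.3 * t)  # decaying mode
harmonic = 2.0 * np.sin(2 * np.pi * 25.0 * t + 0.4)          # e.g. rotating machinery
y = structural + harmonic + 0.01 * rng.standard_normal(t.size)

cleaned = remove_known_harmonics(y, t, [25.0])
# The 25 Hz component is largely removed; the residual tracks the structural part.
print(np.abs(cleaned - structural).std() < 0.1)
```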
Solano, Carlos J F; Pothula, Karunakar R; Prajapati, Jigneshkumar D; De Biase, Pablo M; Noskov, Sergei Yu; Kleinekathöfer, Ulrich
2016-05-10
All-atom molecular dynamics (MD) simulations have a long history of applications studying ion and substrate permeation across biological and artificial pores. While offering unprecedented insights into the underpinning transport processes, MD simulations are limited in accessible time-scales and in their ability to simulate physiological membrane potentials or asymmetric salt solutions, and they require substantial computational power. While several approaches to circumvent these limitations have been developed, Brownian dynamics simulations remain an attractive option for the field. Their main limitation, however, is an apparent lack of the protein flexibility that is important for the accurate description of permeation events. In the present contribution, we report an extension of the Brownian dynamics scheme which includes conformational dynamics. To achieve this goal, the dynamics of amino-acid residues was incorporated into the many-body potential of mean force and into the Langevin equations of motion. The developed software solution, called BROMOCEA, was applied to ion transport through OmpC as a test case. Compared to fully atomistic simulations, the results show a clear improvement in the ratio of permeating anions and cations. The present tests strongly indicate that pore flexibility can enhance permeation properties, which will become even more important in future applications to substrate translocation.
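The class of propagator underlying Brownian dynamics codes can be shown in one dimension. This is a generic overdamped Langevin (Euler-Maruyama) sketch of a particle in a harmonic well, illustrating the integration scheme only; BROMOCEA's many-body potential of mean force and flexible-residue terms are far more elaborate.

```python
import numpy as np

# Overdamped Brownian dynamics: x_{n+1} = x_n - (D/kBT) U'(x_n) dt
#                                        + sqrt(2 D dt) * N(0,1),
# here with U(x) = 0.5*k*x^2.
rng = np.random.default_rng(1)
D, kBT, k, dt = 1.0, 1.0, 2.0, 1e-3
x, xs = 0.0, []
for step in range(200_000):
    force = -k * x
    x += (D / kBT) * force * dt + np.sqrt(2 * D * dt) * rng.standard_normal()
    if step > 20_000:          # discard equilibration transient
        xs.append(x)

# Equipartition check: the stationary variance should approach kBT/k = 0.5.
print(round(np.var(xs), 2))
```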
Liu, Jinxing
2013-04-24
When a brittle heterogeneous material is simulated via lattice models, the quasi-static failure depends on the relative magnitudes of Telem, the characteristic releasing time of the internal forces of the broken elements, and Tlattice, the characteristic relaxation time of the lattice, both of which are infinitesimal compared with Tload, the characteristic loading period. The load-unload (L-U) method is used for one extreme, Telem << Tlattice, whereas the force-release (F-R) method is used for the other, Telem >> Tlattice. For cases between the above two extremes, we develop a new algorithm by combining the L-U and the F-R trial displacement fields to construct the new trial field. As a result, our algorithm includes both L-U and F-R failure characteristics, which allows us to observe the influence of the ratio of Telem to Tlattice by adjusting their contributions in the trial displacement field. Therefore, the material dependence of the snap-back instabilities is implemented by introducing one snap-back parameter γ. Although in principle catastrophic failures can hardly be predicted accurately without knowing all microstructural information, effects of γ can be captured by numerical simulations conducted on samples with exactly the same microstructure but different γs. Such a same-specimen-based study shows how the lattice behaves along with the changing ratio of the L-U and F-R components. © 2013 The Author(s).
de Saint Laumer, Jean‐Yves; Leocata, Sabine; Tissot, Emeline; Baroux, Lucie; Kampf, David M.; Merle, Philippe; Boschung, Alain; Seyfried, Markus
2015-01-01
We previously showed that the relative response factors of volatile compounds were predictable from either combustion enthalpies or their molecular formulae only [1]. We now extend this prediction to silylated derivatives by adding an increment in the ab initio calculation of combustion enthalpies. The accuracy of the experimental relative response factor database was also improved and its population increased to 490 values. In particular, more brominated compounds were measured, and their prediction accuracy was improved by adding a correction factor in the algorithm. The correlation coefficient between predicted and measured values increased from 0.936 to 0.972, leading to a mean prediction accuracy of ± 6%. Thus, 93% of the relative response factor values were predicted with an accuracy of better than ± 10%. The capabilities of the extended algorithm are exemplified by (i) the quick and accurate quantification of hydroxylated metabolites resulting from a biodegradation test after silylation, through prediction of their relative response factors without having the reference substances available; and (ii) the rapid purity determination of volatile compounds. This study confirms that gas chromatography with a flame ionization detector, using predicted relative response factors, is one of the few techniques that enables quantification of volatile compounds without calibrating the instrument with the pure reference substance. PMID:26179324
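The underlying idea can be sketched under one simplifying assumption: if the FID signal per mole is taken as roughly proportional to the molar combustion enthalpy, a mass-based relative response factor versus a reference compound follows from enthalpies and molecular weights. The published algorithm adds increments (e.g. for silylated or brominated compounds) on top of this; the numbers below are purely illustrative, not values from the paper's database.

```python
def rrf_mass(dhc_i, mw_i, dhc_ref, mw_ref):
    """Mass-based relative response factor estimated from combustion
    enthalpies (kJ/mol) and molecular weights (g/mol), assuming FID
    response per mole is proportional to the combustion enthalpy.
    The sign of the enthalpy cancels in the ratio."""
    return (dhc_i / mw_i) / (dhc_ref / mw_ref)

# Illustrative placeholder values (compound vs. reference):
print(rrf_mass(dhc_i=-5000.0, mw_i=100.0,
               dhc_ref=-4000.0, mw_ref=160.0))   # 2.0
```

With a predicted RRF in hand, a concentration follows from peak areas relative to an internal standard, with no calibration against the pure substance.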
Ticehurst, John R.; Aird, Deborah Z.; Dam, Lisa M.; Borek, Anita P.; Hargrove, John T.; Carroll, Karen C.
2006-01-01
We evaluated a two-step algorithm for detecting toxigenic Clostridium difficile: an enzyme immunoassay for glutamate dehydrogenase antigen (Ag-EIA) and then, for antigen-positive specimens, a concurrent cell culture cytotoxicity neutralization assay (CCNA). Antigen-negative results were ≥99% predictive of CCNA negativity. Because the Ag-EIA reduced cell culture workload by ≈75 to 80% and two-step testing was complete in ≤3 days, we decided that this algorithm would be effective. Over 6 months...
DEFF Research Database (Denmark)
Li, Chendan; Chaudhary, Sanjay; Vasquez, Juan Carlos
2014-01-01
… (i) consideration of virtual impedance parameters and (ii) higher accuracy in reactive power flow calculation. The improved power flow analysis algorithm proposed in this paper is validated by comparing the calculation results with detailed time domain simulation results. Case studies have been carried out...
Zemcov, Michael; Crill, Brendan; Ryan, Matthew; Staniszewski, Zak
2016-06-01
Mega-pixel charge-integrating detectors are common in near-IR imaging applications. Optimal signal-to-noise ratio estimates of the photocurrents, which are particularly important in the low-signal regime, are produced by fitting linear models to sequential reads of the charge on the detector. Algorithms that solve this problem have a long history, but can be computationally intensive. Furthermore, the cosmic ray background is appreciable for these detectors in Earth orbit, particularly above the Earth’s magnetic poles and the South Atlantic Anomaly, and on-board reduction routines must be capable of flagging affected pixels. In this paper, we present an algorithm that generates optimal photocurrent estimates and flags random transient charge generation from cosmic rays, and is specifically designed to fit on a computationally restricted platform. We take as a case study the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx), a NASA Small Explorer astrophysics experiment concept, and show that the algorithm can easily fit in the resource-constrained environment of such a restricted platform. Detailed simulations of the input astrophysical signals and detector array performance are used to characterize the fitting routines in the presence of complex noise properties and charge transients. We use both Hubble Space Telescope Wide Field Camera 3 and Wide-field Infrared Survey Explorer data to develop an empirical understanding of the susceptibility of near-IR detectors in low Earth orbit and build a model for realistic cosmic ray energy spectra and rates. We show that our algorithm generates an unbiased estimate of the true photocurrent that is identical to that from a standard line fitting package, and characterize the rate, energy, and timing of both detected and undetected transient events. This algorithm has significant potential for imaging with charge-integrating detectors in astrophysics, earth science, and remote sensing.
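Fitting a slope to up-the-ramp reads and flagging a cosmic-ray charge step can be sketched in a few lines. This is a deliberately simplified stand-in for the on-board routine (uniform weighting, at most one jump per ramp, no read-noise covariance), with names of our own choosing.

```python
import numpy as np

def fit_ramp(reads, dt=1.0, jump_sigma=5.0):
    """Least-squares slope (photocurrent) from sequential charge reads,
    flagging a cosmic-ray hit as an outlier in the differenced reads
    and removing the charge step before fitting."""
    reads = np.asarray(reads, float)
    diffs = np.diff(reads)
    med = np.median(diffs)
    mad = np.median(np.abs(diffs - med)) + 1e-12   # robust spread estimate
    hit = np.abs(diffs - med) > jump_sigma * 1.4826 * mad
    if hit.any():
        j = int(np.argmax(hit))            # index of the first jump
        reads = reads.copy()
        reads[j + 1:] -= diffs[j] - med    # remove the deposited charge step
    t = dt * np.arange(reads.size)
    slope = np.polyfit(t, reads, 1)[0]
    return slope, bool(hit.any())

ramp = 3.0 * np.arange(10, dtype=float)    # photocurrent of 3 units/read
ramp[6:] += 50.0                           # cosmic-ray charge step mid-ramp
slope, flagged = fit_ramp(ramp)
print(flagged, round(slope, 3))            # True 3.0
```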
DEFF Research Database (Denmark)
Stigaard Laursen, Morten; Jørgensen, Rasmus Nyholm; Midtiby, Henrik Skov;
This work contributes a statistical model and simulation framework yielding the best estimate possible for the potential herbicide reduction when using the MoDiCoVi algorithm, all the while requiring an efficacy comparable to conventional spraying. In June 2013 a maize field located in Denmark was seeded. The field was divided into parcels which were assigned to one of two main groups: 1) Control, consisting of subgroups of no spray and full dose spray; 2) MoDiCoVi algorithm, subdivided into five different leaf cover thresholds for spray activation. Also, approximately 25% of the parcels were seeded … on the initial weed coverage. However, additional field trials covering more seasons and locations are needed to verify the generalisation of these results. There is a potential for further herbicide savings, as the time interval between the first and second spraying session was not long enough for the weeds …
Directory of Open Access Journals (Sweden)
Yang Li
2014-02-01
Full Text Available Published methods for imaging and quantitatively analyzing morphological changes in neuronal axons have serious limitations because of their small sample sizes, and their time-consuming and nonobjective nature. Here we present an improved microfluidic chamber design suitable for fast and high-throughput imaging of neuronal axons. We developed the AxonQuant algorithm, which is suitable for automatic processing of axonal imaging data. This microfluidic chamber-coupled algorithm allows calculation of an 'axonal continuity index' that quantitatively measures axonal health status in a manner independent of neuronal or axonal density. This method allows quantitative analysis of axonal morphology in an automatic and nonbiased manner. Our method will facilitate large-scale high-throughput screening for genes or therapeutic compounds for neurodegenerative diseases involving axonal damage. When combined with imaging technologies utilizing different gene markers, this method will provide new insights into the mechanistic basis for axon degeneration. Our microfluidic chamber culture-coupled AxonQuant algorithm will be widely useful for studying axonal biology and neurodegenerative disorders. © 2014 S. Karger AG, Basel
Directory of Open Access Journals (Sweden)
Jason D Hipp
2011-01-01
Full Text Available Introduction: Historically, effective clinical utilization of image analysis and pattern recognition algorithms in pathology has been hampered by two critical limitations: (1) the availability of digital whole slide imagery data sets and (2) a relative domain knowledge deficit in terms of application of such algorithms, on the part of practicing pathologists. With the advent of the recent and rapid adoption of whole slide imaging solutions, the former limitation has been largely resolved. However, with the expectation that it is unlikely for the general cohort of contemporary pathologists to gain advanced image analysis skills in the short term, the latter problem remains, thus underscoring the need for a class of algorithm that has the concurrent properties of image domain (or organ system) independence and extreme ease of use, without the need for specialized training or expertise. Results: In this report, we present a novel, general case pattern recognition algorithm, Spatially Invariant Vector Quantization (SIVQ), that overcomes the aforementioned knowledge deficit. Fundamentally based on conventional Vector Quantization (VQ) pattern recognition approaches, SIVQ gains its superior performance and essentially zero-training workflow model from its use of ring vectors, which exhibit continuous symmetry, as opposed to square or rectangular vectors, which do not. By use of the stochastic matching properties inherent in continuous symmetry, a single ring vector can exhibit as much as a millionfold improvement in matching possibilities, as opposed to conventional VQ vectors. SIVQ was utilized to demonstrate rapid and highly precise pattern recognition capability in a broad range of gross and microscopic use-case settings. Conclusion: With the performance of SIVQ observed thus far, we find evidence that indeed there exist classes of image analysis/pattern recognition algorithms suitable for deployment in settings where pathologists alone can effectively …
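The ring-vector idea can be illustrated with a toy sketch: pixels sampled on a circle form a vector that a rotation of the underlying pattern merely shifts circularly, so a rotation-invariant match is a minimum over circular shifts. This is only an illustration of the continuous-symmetry property, not the actual SIVQ implementation; all names are ours.

```python
import numpy as np

def ring_vector(img, cy, cx, r, n=32):
    """Sample n pixels (nearest-neighbor) on a circle of radius r: a
    'ring vector'. Rotating the underlying pattern about (cy, cx)
    only circularly shifts this vector."""
    ang = 2 * np.pi * np.arange(n) / n
    ys = np.clip(np.round(cy + r * np.sin(ang)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + r * np.cos(ang)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs].astype(float)

def ring_distance(v, w):
    """Rotation-invariant distance: best match over all circular shifts."""
    return min(np.linalg.norm(v - np.roll(w, s)) for s in range(len(w)))

rng = np.random.default_rng(2)
img = rng.random((32, 32))
v = ring_vector(img, 16, 16, 5)
rotated = np.roll(v, 7)                     # same ring, pattern "rotated"
print(ring_distance(v, rotated) == 0.0)     # exact match despite rotation
print(ring_distance(v, ring_vector(img, 8, 8, 5)) > 0.0)  # different region
```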
Energy Technology Data Exchange (ETDEWEB)
Park, Hyung-Bok [Yonsei University Health System, Yonsei-Cedar Sinai Integrative Cardiovascular Imaging Research Center, Seoul (Korea, Republic of); Myongji Hospital, Division of Cardiology, Cardiovascular Center, Goyang (Korea, Republic of); Lee, Byoung Kwon [Yonsei University College of Medicine, Division of Cardiology, Gangnam Severance Hospital, Seoul (Korea, Republic of); Shin, Sanghoon [Yonsei University Health System, Yonsei-Cedar Sinai Integrative Cardiovascular Imaging Research Center, Seoul (Korea, Republic of); National Health Insurance Corporation Ilsan Hospital, Division of Cardiology, Goyang (Korea, Republic of); Heo, Ran; Chang, Hyuk-Jae; Chung, Namsik [Yonsei University Health System, Yonsei-Cedar Sinai Integrative Cardiovascular Imaging Research Center, Seoul (Korea, Republic of); Yonsei University Health System, Division of Cardiology, Severance Cardiovascular Hospital, Seoul (Korea, Republic of); Arsanjani, Reza [Cedars-Sinai Medical Center, Departments of Imaging and Medicine, Cedars-Sinai Heart Institute, Los Angeles, CA (United States); Kitslaar, Pieter H. [Leiden University Medical Center, Department of Radiology, Division of Image Processing, Leiden (Netherlands); Medis medical Imaging Systems B.V., Leiden (Netherlands); Broersen, Alexander; Dijkstra, Jouke [Leiden University Medical Center, Department of Radiology, Division of Image Processing, Leiden (Netherlands); Ahn, Sung Gyun [Yonsei University Wonju Severance Christian Hospital, Division of Cardiology, Wonju (Korea, Republic of); Min, James K. [New York-Presbyterian Hospital, Institute for Cardiovascular Imaging, Weill-Cornell Medical College, New York, NY (United States); Hong, Myeong-Ki; Jang, Yangsoo [Yonsei University Health System, Division of Cardiology, Severance Cardiovascular Hospital, Seoul (Korea, Republic of)
2015-10-15
To evaluate the diagnostic performance of automated coronary atherosclerotic plaque quantification (QCT) by different users (expert/non-expert/automatic). One hundred fifty coronary artery segments from 142 patients who underwent coronary computed tomography angiography (CCTA) and intravascular ultrasound (IVUS) were analyzed. Minimal lumen area (MLA), maximal lumen area stenosis percentage (%AS), mean plaque burden percentage (%PB), and plaque volume were measured semi-automatically by expert, non-expert, and fully automatic QCT analyses, and then compared to IVUS. Between IVUS and expert QCT analysis, the correlation coefficients (r) for the MLA, %AS, %PB, and plaque volume were excellent: 0.89 (p < 0.001), 0.84 (p < 0.001), 0.91 (p < 0.001), and 0.94 (p < 0.001), respectively. There were no significant differences in the mean parameters (all p values >0.05) except %AS (p = 0.01). The automatic QCT analysis showed comparable performance to non-expert QCT analysis, showing correlation coefficients (r) of the MLA (0.80 vs. 0.82), %AS (0.82 vs. 0.80), %PB (0.84 vs. 0.73), and plaque volume (0.84 vs. 0.79) when they were compared to IVUS, respectively. Fully automatic QCT analysis showed clinical utility compared with IVUS, as well as a compelling performance when compared with semiautomatic analyses. (orig.)
Zemcov, Michael; Ryan, Matthew; Staniszewski, Zak
2016-01-01
Mega-pixel charge-integrating detectors are common in near-IR imaging applications. Optimal signal-to-noise ratio estimates of the photocurrents, which are particularly important in the low-signal regime, are produced by fitting linear models to sequential reads of the charge on the detector. Algorithms that solve this problem have a long history, but can be computationally intensive. Furthermore, the cosmic ray background is appreciable for these detectors in Earth orbit, particularly above the Earth's magnetic poles and the South Atlantic Anomaly, and on-board reduction routines must be capable of flagging affected pixels. In this paper we present an algorithm that generates optimal photocurrent estimates and flags random transient charge generation from cosmic rays, and is specifically designed to fit on a computationally restricted platform. We take as a case study the Spectro-Photometer for the History of the Universe, Epoch of Reionization, and Ices Explorer (SPHEREx), a NASA Small Explorer astrophysics...
Ticehurst, John R; Aird, Deborah Z; Dam, Lisa M; Borek, Anita P; Hargrove, John T; Carroll, Karen C
2006-03-01
We evaluated a two-step algorithm for detecting toxigenic Clostridium difficile: an enzyme immunoassay for glutamate dehydrogenase antigen (Ag-EIA) and then, for antigen-positive specimens, a concurrent cell culture cytotoxicity neutralization assay (CCNA). Antigen-negative results were ≥99% predictive of CCNA negativity. Because the Ag-EIA reduced cell culture workload by approximately 75 to 80% and two-step testing was complete in ≤3 days, we decided that this algorithm would be more efficient than if CCNA alone had been performed on all 5,887 specimens.
Hassan, Rania A.
In the design of complex large-scale spacecraft systems that involve a large number of components and subsystems, many specialized state-of-the-art design tools are employed to optimize the performance of various subsystems. However, there is no structured system-level concept-architecting process. Currently, spacecraft design is heavily based on the heritage of the industry. Old spacecraft designs are modified to adapt to new mission requirements, and feasible solutions---rather than optimal ones---are often all that is achieved. During the conceptual phase of the design, the choices available to designers are predominantly discrete variables describing major subsystems' technology options and redundancy levels. The complexity of spacecraft configurations makes the number of the system design variables that need to be traded off in an optimization process prohibitive when manual techniques are used. Such a discrete problem is well suited for solution with a Genetic Algorithm, which is a global search technique that performs optimization-like tasks. This research presents a systems engineering framework that places design requirements at the core of the design activities and transforms the design paradigm for spacecraft systems to a top-down approach rather than the current bottom-up approach. To facilitate decision-making in the early phases of the design process, the population-based search nature of the Genetic Algorithm is exploited to provide computationally inexpensive---compared to the state-of-the-practice---tools for both multi-objective design optimization and design optimization under uncertainty. In terms of computational cost, those tools are nearly on the same order of magnitude as that of standard single-objective deterministic Genetic Algorithm. The use of a multi-objective design approach provides system designers with a clear tradeoff optimization surface that allows them to understand the effect of their decisions on all the design objectives
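A genetic algorithm over discrete design choices, of the kind described above, can be sketched compactly: each gene selects a technology option for one subsystem, and selection, crossover, and mutation drive the population toward better configurations. The objective below is a toy stand-in, not a spacecraft model, and all parameter values are illustrative.

```python
import random

OPTIONS = [3, 4, 2, 5]          # number of discrete options per subsystem

def fitness(design):
    """Toy objective: weighted sum of chosen option indices (a stand-in
    for a real performance/cost/reliability evaluation)."""
    return sum(g * (i + 1) for i, g in enumerate(design))

def evolve(pop_size=30, gens=60, pm=0.1, seed=3):
    rng = random.Random(seed)
    pop = [[rng.randrange(n) for n in OPTIONS] for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]           # elitist truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, len(OPTIONS))
            child = a[:cut] + b[cut:]            # one-point crossover
            for i, n in enumerate(OPTIONS):      # per-gene mutation
                if rng.random() < pm:
                    child[i] = rng.randrange(n)
            children.append(child)
        pop = parents + children
    return max(pop, key=fitness)

best = evolve()
print(best, fitness(best))
```

Multi-objective variants replace the scalar `fitness` sort with Pareto ranking, which is what makes the population-based search attractive for the trade studies described above.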
Valaparla, Sunil K.; Peng, Qi; Gao, Feng; Clarke, Geoffrey D.
2014-03-01
Accurate measurements of human body fat distribution are desirable because excessive body fat is associated with impaired insulin sensitivity, type 2 diabetes mellitus (T2DM) and cardiovascular disease. In this study, we hypothesized that the performance of water suppressed (WS) MRI is superior to non-water suppressed (NWS) MRI for volumetric assessment of abdominal subcutaneous (SAT), intramuscular (IMAT), visceral (VAT), and total (TAT) adipose tissues. We acquired T1-weighted images on a 3T MRI system (TIM Trio, Siemens), which was analyzed using semi-automated segmentation software that employs a fuzzy c-means (FCM) clustering algorithm. Sixteen contiguous axial slices, centered at the L4-L5 level of the abdomen, were acquired in eight T2DM subjects with water suppression (WS) and without (NWS). Histograms from WS images show improved separation of non-fatty tissue pixels from fatty tissue pixels, compared to NWS images. Paired t-tests of WS versus NWS showed a statistically significant lower volume of lipid in the WS images for VAT (145.3 cc less, p=0.006), IMAT (305 cc less, p < …), and total fat volumes (436.1 cc less, p=0.002). There is strong correlation between WS and NWS quantification methods for SAT measurements (r=0.999), but poorer correlation for VAT studies (r=0.845). These results suggest that NWS pulse sequences may overestimate adipose tissue volumes and that WS pulse sequences are more desirable due to the higher contrast generated between fatty and non-fatty tissues.
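Fuzzy c-means clustering, the segmentation engine named above, alternates soft membership updates with weighted centroid updates. The 1D sketch below applies textbook FCM to synthetic intensities mimicking two tissue populations; it is not the cited semi-automated software, and all parameter values are illustrative.

```python
import numpy as np

def fcm_1d(x, c=2, m=2.0, iters=100):
    """Fuzzy c-means on 1D intensities: alternately update memberships
    u_ik and centroids v_k to minimize sum_ik u_ik^m * (x_i - v_k)^2."""
    v = np.percentile(x, np.linspace(10, 90, c))   # spread initial centroids
    u = None
    for _ in range(iters):
        d = np.abs(x[:, None] - v[None, :]) + 1e-12
        u = 1.0 / (d ** (2 / (m - 1)))             # standard FCM membership
        u /= u.sum(axis=1, keepdims=True)
        v = (u ** m * x[:, None]).sum(axis=0) / (u ** m).sum(axis=0)
    return np.sort(v), u

# Two synthetic intensity populations, mimicking non-fat vs. fat pixels.
rng = np.random.default_rng(5)
x = np.concatenate([rng.normal(50, 5, 500), rng.normal(200, 10, 500)])
centers, memberships = fcm_1d(x)
print(np.round(centers))   # centroids near the two population means
```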
DEFF Research Database (Denmark)
Bysted, Anette; Cold, S; Hølmer, Gunhild Kofoed
1999-01-01
Considering the need for a quick direct method for measurement of the fatty acid composition, including trans isomers, of human adipose tissue, we have developed a procedure using gas-liquid chromatography (GLC) alone, which is thus suitable for validation of fatty acid status in epidemiological studies … for 25 min, and finally raised at 25 degrees C/min to 225 degrees C. The trans and cis isomers of 18:1 were well separated from each other, as shown by silver-ion thin-layer chromatography. Verification by standards showed that the trans 18:1 isomers with a double bond in position 12 or lower were … the fatty acid profiles including total trans in other tissues.
Introduction to uncertainty quantification
Sullivan, T J
2015-01-01
Uncertainty quantification is a topic of increasing practical importance at the intersection of applied mathematics, statistics, computation, and numerous application areas in science and engineering. This text provides a framework in which the main objectives of the field of uncertainty quantification are defined, and an overview of the range of mathematical methods by which they can be achieved. Complete with exercises throughout, the book will equip readers with both theoretical understanding and practical experience of the key mathematical and algorithmic tools underlying the treatment of uncertainty in modern applied mathematics. Students and readers alike are encouraged to apply the mathematical methods discussed in this book to their own favourite problems to understand their strengths and weaknesses, also making the text suitable for self-study. This text is designed as an introduction to uncertainty quantification for senior undergraduate and graduate students with a mathematical or statistical background…
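The simplest method treated in introductions to uncertainty quantification is Monte Carlo forward propagation: draw samples of the uncertain inputs, push them through the model, and summarize the induced output distribution. The model below is a toy spring, chosen by us for illustration.

```python
import numpy as np

def model(k, f):
    """Toy model: static displacement of a linear spring, u = f / k."""
    return f / k

rng = np.random.default_rng(6)
n = 100_000
k = rng.normal(10.0, 0.5, n)     # uncertain stiffness (mean 10, sd 0.5)
f = rng.normal(2.0, 0.1, n)      # uncertain load (mean 2, sd 0.1)
u = model(k, f)                  # propagated output samples

mean, std = u.mean(), u.std()
lo, hi = np.percentile(u, [2.5, 97.5])
print(round(mean, 3), round(std, 3))
print(lo < 0.2 < hi)             # nominal value 2/10 lies in the 95% interval
```

More advanced chapters replace brute-force sampling with surrogate models and sensitivity analysis precisely because this estimator converges only as 1/sqrt(n) in the number of model evaluations.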
de Lucena, Rodrigo F.; Taioli, Fabio
2014-09-01
This paper presents a study on Rayleigh wave modeling. After model implementation using Matlab software, unpublished studies were conducted of dispersion curve sensitivity to percentage changes in parameter values, including S- and P-wave velocities, substrate density, and layer thickness. The study of the sensitivity of dispersion curves demonstrated that parameters such as S-wave velocity and layer thickness cannot be ignored as inversion parameters, while P-wave velocity and density can be considered as known parameters since their influence is minimal. However, the results showed limitations that should be considered and overcome when choosing the known and unknown parameters through determining a good initial model or/and by gathering a priori information. A methodology considering the sensitivity study of dispersion curves was developed and evaluated to generate initial values (initial model) to be included in the local search inversion algorithm, clearly establishing initial favorable conditions for data inversion.
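The one-at-a-time percentage-perturbation test described above can be sketched generically: perturb a single parameter by a fixed percentage and measure the relative change in the output curve. The "dispersion curve" below is a toy function chosen by us to mimic the reported ranking (strong dependence on S-wave velocity, negligible dependence on density), not a real Rayleigh-wave forward model.

```python
import numpy as np

def percent_sensitivity(model, params, key, delta=0.10):
    """Relative change in a model curve when one parameter is perturbed
    by +delta (e.g. +10%), all others held fixed."""
    base = model(params)
    perturbed = dict(params)
    perturbed[key] = params[key] * (1.0 + delta)
    return np.abs(model(perturbed) - base).max() / np.abs(base).max()

freqs = np.linspace(5, 50, 50)

def toy_curve(p):
    """Stand-in curve c(f) depending strongly on 'vs', weakly on 'rho'."""
    return p["vs"] * (0.9 - 0.3 * np.exp(-p["h"] / (p["vs"] / freqs))) \
           + 1e-3 * p["rho"]

params = {"vs": 200.0, "h": 10.0, "rho": 1800.0}
s_vs = percent_sensitivity(toy_curve, params, "vs")
s_rho = percent_sensitivity(toy_curve, params, "rho")
print(s_vs > 10 * s_rho)   # Vs dominates; density is nearly negligible
```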
Chen, Yasheng; Dhar, Rajat; Heitsch, Laura; Ford, Andria; Fernandez-Cadenas, Israel; Carrera, Caty; Montaner, Joan; Lin, Weili; Shen, Dinggang; An, Hongyu; Lee, Jin-Moo
2016-01-01
Although cerebral edema is a major cause of death and deterioration following hemispheric stroke, there remains no validated biomarker that captures the full spectrum of this critical complication. We recently demonstrated that reduction in intracranial cerebrospinal fluid (CSF) volume (∆CSF) on serial computed tomography (CT) scans provides an accurate measure of cerebral edema severity, which may aid in early triaging of stroke patients for craniectomy. However, application of such a volumetric approach would be too cumbersome to perform manually on serial scans in a real-world setting. We developed and validated an automated technique for CSF segmentation via integration of random forest (RF) based machine learning with geodesic active contour (GAC) segmentation. The proposed RF + GAC approach was compared to conventional Hounsfield Unit (HU) thresholding and RF segmentation methods using Dice similarity coefficient (DSC) and the correlation of volumetric measurements, with manual delineation serving as the ground truth. CSF spaces were outlined on scans performed at baseline …; measurements were closest to the line of identity in RF + GAC. When we applied the algorithm trained from images of one stroke center to segment CTs from another center, similar findings held. In conclusion, we have developed and validated an accurate automated approach to segment CSF and calculate its shifts on serial CT scans. This algorithm will allow us to efficiently and accurately measure the evolution of cerebral edema in future studies including large multi-site patient populations.
Directory of Open Access Journals (Sweden)
Yasheng Chen
2016-01-01
… This algorithm will allow us to efficiently and accurately measure the evolution of cerebral edema in future studies including large multi-site patient populations.
Toubar, Safaa S.; Hegazy, Maha A.; Elshahed, Mona S.; Helmy, Marwa I.
2016-06-01
In this work, resolution and quantitation of spectral signals are achieved by several univariate and multivariate techniques. The novel pure component contribution algorithm (PCCA), along with mean centering of ratio spectra (MCR) and the factor-based partial least squares (PLS) algorithms, were developed for simultaneous determination of chlorzoxazone (CXZ), aceclofenac (ACF) and paracetamol (PAR) in their pure form and recently co-formulated tablets. The PCCA method allows the determination of each drug at its λmax. In the MCR method, the mean centered values at 230, 302 and 253 nm were used for quantification of CXZ, ACF and PAR, respectively. The partial least-squares (PLS) algorithm was applied as a multivariate calibration method. The three methods were successfully applied for determination of CXZ, ACF and PAR in pure form and tablets. Good linear relationships were obtained in the ranges of 2-50, 2-40 and 2-30 μg mL-1 for CXZ, ACF and PAR, in order, by both PCCA and MCR, while the PLS model was built for the three compounds each in the range of 2-10 μg mL-1. The results obtained from the proposed methods were statistically compared with a reported one. The PCCA and MCR methods were validated according to ICH guidelines, while the PLS method was validated by both cross validation and an independent data set. They are found suitable for the determination of the studied drugs in bulk powder and tablets.
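Mean centering of ratio spectra can be demonstrated on a synthetic binary mixture: dividing the mixture spectrum by one component's spectrum turns that component's contribution into a constant, which mean centering removes, leaving a signal proportional to the other component's concentration. The Gaussian bands below are synthetic stand-ins, not the real drug spectra.

```python
import numpy as np

wl = np.linspace(220, 320, 501)
band = lambda c, w: np.exp(-0.5 * ((wl - c) / w) ** 2)
A = band(250, 12)            # absorptivity spectrum of the analyte
B = band(290, 15)            # absorptivity spectrum of the interferent

def mcr_signal(mix):
    """Mean centering of ratio spectra: divide by the interferent
    spectrum (its term becomes a constant), then subtract the mean."""
    ratio = mix / B
    return ratio - ratio.mean()

# Two mixtures with different analyte concentrations, same interferent:
m1 = 1.0 * A + 3.0 * B
m2 = 2.0 * A + 3.0 * B
s1, s2 = mcr_signal(m1), mcr_signal(m2)
i = np.argmax(np.abs(s1))
print(round(s2[i] / s1[i], 3))   # recovers the concentration ratio: 2.0
```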
Reller, Megan E.; Lema, Clara A.; Perl, Trish M.; Cai, Mian; Ross, Tracy L.; Speck, Kathleen A.; Carroll, Karen C.
2007-01-01
We examined the incremental yield of stool culture (with toxin testing on isolates) versus our two-step algorithm for optimal detection of toxigenic Clostridium difficile. Per the two-step algorithm, stools were screened for C. difficile-associated glutamate dehydrogenase (GDH) antigen and, if positive, tested for toxin by a direct (stool) cell culture cytotoxicity neutralization assay (CCNA). In parallel, stools were cultured for C. difficile and tested for toxin by both indirect (isolate) C...
Institute of Scientific and Technical Information of China (English)
曹耀传; 王澄非; 潘树国
2009-01-01
To address fast acquisition of GPS L1 signals in a software-defined receiver (SDR), an optimized parallel code phase search acquisition algorithm is designed. The theoretical model of FFT-based parallel code phase search acquisition is first analyzed; exploiting the spectral distribution of the local C/A code power spectrum, a preliminary optimization uses only the information in the first half of the frequency domain. On this basis, a bit-quantification method is adopted so that a single computer storage unit holds multiple channels of binary quantized data, enabling parallel processing of multi-channel data. Experiments show that the improved parallel code phase search acquisition algorithm greatly increases acquisition speed, by a factor of five over the original, with only a small loss of signal-to-noise ratio that does not affect acquisition correctness, thereby achieving fast acquisition of GPS signals.
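The baseline parallel code phase search can be sketched directly: one FFT pair computes the circular cross-correlation against the local code, testing every code phase at once. The example below uses a random ±1 sequence as a stand-in for a real C/A code and omits the Doppler frequency search and the paper's half-spectrum and bit-quantification optimizations.

```python
import numpy as np

def parallel_code_phase_search(signal, code):
    """Circular cross-correlation via FFT: ifft(FFT(signal) * conj(FFT(code)))
    evaluates the correlation at every code phase simultaneously."""
    corr = np.fft.ifft(np.fft.fft(signal) * np.conj(np.fft.fft(code)))
    power = np.abs(corr) ** 2
    return int(np.argmax(power)), power

rng = np.random.default_rng(7)
code = rng.choice([-1.0, 1.0], size=1023)   # stand-in for a 1023-chip C/A code
true_shift = 357
signal = np.roll(code, true_shift) + 0.5 * rng.standard_normal(1023)

shift, power = parallel_code_phase_search(signal, code)
print(shift)   # the detected code phase: 357
```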
Zare Hosseinzadeh, Ali; Bagheri, Abdollah; Ghodrati Amiri, Gholamreza; Koo, Ki-Young
2014-04-01
In this paper, a novel and effective damage diagnosis algorithm is proposed to localize and quantify structural damage using incomplete modal data, considering the existence of some limitations in the number of attached sensors on structures. The damage detection problem is formulated as an optimization problem by computing static displacements in the reduced model of a structure subjected to a unique static load. The static responses are computed through the flexibility matrix of the damaged structure obtained based on the incomplete modal data of the structure. In the algorithm, an iterated improved reduction system method is applied to prepare an accurate reduced model of a structure. The optimization problem is solved via a new evolutionary optimization algorithm called the cuckoo optimization algorithm. The efficiency and robustness of the presented method are demonstrated through three numerical examples. Moreover, the efficiency of the method is verified by an experimental study of a five-story shear building structure on a shaking table considering only two sensors. The obtained damage identification results for the numerical and experimental studies show the suitable and stable performance of the proposed damage identification method for structures with limited sensors.
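The flexibility-from-modes step underlying such methods is standard: with mass-normalized mode shapes phi_i and natural frequencies w_i, the flexibility matrix F = K^{-1} is approximated from only the lowest measured modes as F ≈ Σ_i phi_i phi_i^T / w_i^2, with the 1/w^2 weighting keeping the truncation error small; static displacements then follow as u = F @ p. The toy 6-DOF chain below illustrates this approximation only; the paper's model reduction (IIRS) and cuckoo optimization steps are not reproduced.

```python
import numpy as np

n = 6
# Toy stiffness matrix: a fixed-fixed spring chain (tridiagonal, SPD).
K = 2.0 * np.eye(n) \
    + np.diag([-1.0] * (n - 1), 1) + np.diag([-1.0] * (n - 1), -1)

w2, phi = np.linalg.eigh(K)        # unit masses: standard eigenproblem,
                                   # eigh gives mass-normalized mode shapes
F_exact = np.linalg.inv(K)
# Modal flexibility from only the 3 lowest of 6 modes:
F_modal = sum(np.outer(phi[:, i], phi[:, i]) / w2[i] for i in range(3))

err = np.abs(F_modal - F_exact).max() / np.abs(F_exact).max()
u_static = F_modal @ np.ones(n)    # approximate static response to unit loads
print(round(err, 3))               # truncation error: low modes dominate F
```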
Uncertainty quantification theory, implementation, and applications
Smith, Ralph C
2014-01-01
The field of uncertainty quantification is evolving rapidly because of increasing emphasis on models that require quantified uncertainties for large-scale applications, novel algorithm development, and new computational architectures that facilitate implementation of these algorithms. Uncertainty Quantification: Theory, Implementation, and Applications provides readers with the basic concepts, theory, and algorithms necessary to quantify input and response uncertainties for simulation models arising in a broad range of disciplines. The book begins with a detailed discussion of applications where uncertainty quantification is critical for both scientific understanding and policy. It then covers concepts from probability and statistics, parameter selection techniques, frequentist and Bayesian model calibration, propagation of uncertainties, quantification of model discrepancy, surrogate model construction, and local and global sensitivity analysis. The author maintains a complementary web page where readers can…
Menéndez, María del Carmen; Rebollo, María José; Núñez, María del Carmen; Cox, Robert A.; García, María Jesús
2005-01-01
Mycobacterial species are able to control rRNA production through variations in the number and strength of promoters controlling their rrn operons. Mycobacterium chelonae and M. fortuitum are members of the rapidly growing mycobacterial group. They carry a total of five promoters each, encoded, respectively, by one and two rrn operons per genome. Quantification of precursor rrn transcriptional products (pre-rrn) has allowed detection of different promoter usage during cell growth. Bacteria gr...
Khatibinia, M.; Salajegheh, E.; Salajegheh, J.; Fadaee, M. J.
2013-10-01
A new discrete gravitational search algorithm (DGSA) and a metamodelling framework are introduced for reliability-based design optimization (RBDO) of reinforced concrete structures. The RBDO of structures with soil-structure interaction (SSI) effects is investigated in accordance with performance-based design. The proposed DGSA is based on the standard gravitational search algorithm (GSA) and optimizes the structural cost under deterministic and probabilistic constraints. The Monte Carlo simulation (MCS) method is considered the most reliable method for estimating the probabilities in the reliability analysis. In order to reduce the computational time of MCS, the proposed metamodelling framework is employed to predict the responses of the SSI system in the RBDO procedure. The metamodel consists of a weighted least squares support vector machine (WLS-SVM) with a wavelet kernel function, called WWLS-SVM. Numerical results demonstrate the efficiency and computational advantages of DGSA and the proposed metamodel for RBDO of reinforced concrete structures.
Ojala, Jarkko Juhani; Kapanen, Mika
2015-11-08
A commercialized implementation of a linear Boltzmann transport equation solver, the Acuros XB algorithm (AXB), represents the most advanced class of type 'c' photon radiotherapy dose calculation algorithms. The purpose of the study was to quantify the effects of the modifications implemented in the more recent version 11 of the AXB (AXB11), compared to the first commercial implementation, version 10 (AXB10), in various anatomical regions in clinical treatment planning. Both versions of the AXB were part of Varian's Eclipse clinical treatment planning system, and treatment plans for 10 patients were created using intensity-modulated radiotherapy (IMRT) and volumetric-modulated arc radiotherapy (VMAT). The plans were first created with the AXB10 and then recalculated with the AXB11 and full Monte Carlo (MC) simulations. With the full MC simulations as reference, a DVH analysis for gross tumor and planning target volumes (GTV and PTV) and organs at risk was performed, and 3D gamma agreement index (GAI) values within a 15% isodose region and for the PTV were determined. Although differences of up to 12% were seen in the DVH analysis between the MC simulations and the AXB, no general conclusion can be drawn from this study that the modifications made in the AXB11 render the dose calculation accuracy of the AXB10 inferior in clinical patient treatment planning. The only clear improvement of the AXB11 over the AXB10 is the dose calculation accuracy in air cavities. In general, no large deviations are present in the DVH analysis results between the two versions of the algorithm, and the results of the 3D gamma analysis do not favor one or the other. Thus it may be concluded that the results of the comprehensive studies assessing the accuracy of the AXB10 may be extended to the AXB11.
Reller, Megan E; Lema, Clara A; Perl, Trish M; Cai, Mian; Ross, Tracy L; Speck, Kathleen A; Carroll, Karen C
2007-11-01
We examined the incremental yield of stool culture (with toxin testing on isolates) versus our two-step algorithm for optimal detection of toxigenic Clostridium difficile. Per the two-step algorithm, stools were screened for C. difficile-associated glutamate dehydrogenase (GDH) antigen and, if positive, tested for toxin by a direct (stool) cell culture cytotoxicity neutralization assay (CCNA). In parallel, stools were cultured for C. difficile and tested for toxin by both indirect (isolate) CCNA and conventional PCR if the direct CCNA was negative. The "gold standard" for toxigenic C. difficile was detection of C. difficile by the GDH screen or by culture and toxin production by direct or indirect CCNA. We tested 439 specimens from 439 patients. GDH screening detected all culture-positive specimens. The sensitivity of the two-step algorithm was 77% (95% confidence interval [CI], 70 to 84%), and that of culture was 87% (95% CI, 80 to 92%). PCR results correlated completely with those of CCNA testing on isolates (29/29 positive and 32/32 negative, respectively). We conclude that GDH is an excellent screening test and that culture with isolate CCNA testing detects an additional 23% of toxigenic C. difficile missed by direct CCNA. Since culture is tedious and also detects nontoxigenic C. difficile, we conclude that culture is most useful (i) when the direct CCNA is negative but a high clinical suspicion of toxigenic C. difficile remains, (ii) in the evaluation of new diagnostic tests for toxigenic C. difficile (where the best reference standard is essential), and (iii) in epidemiologic studies (where the availability of an isolate allows for strain typing and antimicrobial susceptibility testing).
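The decision flow of the two-step algorithm can be sketched as follows; the function name and return strings are illustrative, not part of the published protocol.

```python
def two_step_c_difficile(gdh_positive, direct_ccna_toxin_positive):
    """Two-step algorithm: GDH antigen screen first, then direct (stool)
    CCNA toxin testing only on GDH-positive specimens."""
    if not gdh_positive:
        return "negative: GDH screen negative, no further testing"
    if direct_ccna_toxin_positive:
        return "positive: toxigenic C. difficile detected"
    # per the abstract, culture with isolate CCNA/PCR can recover
    # toxigenic strains that the direct CCNA misses at this point
    return "negative by direct CCNA: consider culture if suspicion remains"

print(two_step_c_difficile(True, True))
```

The third branch is exactly where the abstract reports the additional 23% yield of culture with isolate CCNA testing.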
Absolute quantification of myocardial blood flow.
Yoshinaga, Keiichiro; Manabe, Osamu; Tamaki, Nagara
2016-07-21
With the increasing availability of positron emission tomography (PET) myocardial perfusion imaging, the absolute quantification of myocardial blood flow (MBF) has become popular in clinical settings. Quantitative MBF provides important additional diagnostic and prognostic information over conventional visual assessment. The success of MBF quantification using PET/computed tomography (CT) has increased the demand for this quantitative diagnostic approach to be more accessible. In this regard, MBF quantification approaches have been developed using several other diagnostic imaging modalities, including single-photon emission computed tomography, CT, and cardiac magnetic resonance. This review will address the clinical aspects of PET MBF quantification and the new approaches to MBF quantification.
Bagherinejad, Jafar; Niknam, Azar
2017-06-01
In this paper, a leader-follower competitive facility location problem considering the reactions of the competitors is studied. A model for locating new facilities and determining the quality levels of the leader firm's facilities is proposed. Moreover, changes in the location and quality of existing facilities in a competitive market where a competitor offers the same goods or services are taken into account. The competitor can react by opening new facilities, closing existing ones, and adjusting the quality levels of its existing facilities. The market share captured by each facility depends on its distance to customers and on its quality, and is calculated using the probabilistic Huff model. Each firm aims to maximize its profit subject to constraints on quality levels and on the budget for setting up new facilities. This problem is formulated as a bi-level mixed integer non-linear model. The model is solved using a combination of tabu search with an exact method. The performance of the proposed algorithm is compared with an upper bound obtained by applying the Karush-Kuhn-Tucker conditions. Computational results show that our algorithm finds solutions near the upper bound in reasonable time.
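The Huff-model market-share computation the abstract relies on can be sketched as follows; the quality levels, distances, demand weights, and distance-decay exponent below are illustrative values, not data from the paper.

```python
import numpy as np

def huff_shares(quality, dist, lam=2.0):
    """Huff probabilities: attraction = quality / distance**lam,
    normalized per customer (rows: customers, cols: facilities)."""
    attraction = quality / dist ** lam
    return attraction / attraction.sum(axis=1, keepdims=True)

quality = np.array([5.0, 3.0])            # facility quality levels
dist = np.array([[1.0, 2.0],              # customer-facility distances
                 [3.0, 1.0]])
buying_power = np.array([100.0, 50.0])    # demand weight per customer
P = huff_shares(quality, dist)
market_share = buying_power @ P           # expected demand captured per facility
print(market_share)
```

Because the probabilities are normalized per customer, the captured shares always partition the total demand, which is what makes the leader's and follower's objectives directly competing.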
Abrams, D.; Williams, C.
1999-01-01
This thesis describes several new quantum algorithms. These include a polynomial time algorithm that uses a quantum fast Fourier transform to find eigenvalues and eigenvectors of a Hamiltonian operator, and that can be applied in cases for which all known classical algorithms require exponential time.
Signal processing of heart signals for the quantification of non-deterministic events
Directory of Open Access Journals (Sweden)
Baddour Natalie
2011-01-01
Full Text Available Abstract Background Heart signals represent an important way to evaluate cardiovascular function and often what is desired is to quantify the level of some signal of interest against the louder backdrop of the beating of the heart itself. An example of this type of application is the quantification of cavitation in mechanical heart valve patients. Methods An algorithm is presented for the quantification of high-frequency, non-deterministic events such as cavitation from recorded signals. A closed-form mathematical analysis of the algorithm investigates its capabilities. The algorithm is implemented on real heart signals to investigate usability and implementation issues. Improvements are suggested to the base algorithm including aligning heart sounds, and the implementation of the Short-Time Fourier Transform to study the time evolution of the energy in the signal. Results The improvements result in better heart beat alignment and better detection and measurement of the random events in the heart signals, so that they may provide a method to quantify nondeterministic events in heart signals. The use of the Short-Time Fourier Transform allows the examination of the random events in both time and frequency allowing for further investigation and interpretation of the signal. Conclusions The presented algorithm does allow for the quantification of nondeterministic events but proper care in signal acquisition and processing must be taken to obtain meaningful results.
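The Short-Time Fourier Transform step can be sketched directly in NumPy; the window length, hop size, and the synthetic signal with a high-frequency burst standing in for a cavitation-like event are illustrative choices, not the paper's settings.

```python
import numpy as np

def stft_energy(x, fs, win_len=256, hop=128):
    """Short-Time Fourier Transform magnitude-squared (spectrogram):
    Hann-windowed frames, FFT per frame, energy in time and frequency."""
    window = np.hanning(win_len)
    frames = [x[i:i + win_len] * window
              for i in range(0, len(x) - win_len + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(win_len, d=1.0 / fs)
    times = np.arange(len(frames)) * hop / fs
    return times, freqs, spec

# synthetic "heart" signal: low-frequency tone plus a short burst (event)
fs = 4000
t = np.arange(0, 1.0, 1.0 / fs)
x = np.sin(2 * np.pi * 50 * t)
x[1900:2300] += 2.0 * np.sin(2 * np.pi * 900 * t[1900:2300])  # burst near 0.5 s
times, freqs, spec = stft_energy(x, fs)
frame = np.argmin(np.abs(times - 0.51))
print(freqs[np.argmax(spec[frame])])      # dominant frequency in that frame
```

Localizing the burst in both time (frame index) and frequency (bin) is exactly the capability the abstract attributes to the STFT improvement.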
Deng, Ning; Li, Zhenye; Pan, Chao; Duan, Huilong
2015-01-01
The study of complex proteomes places higher demands on mass spectrometry-based quantification methods. In this paper, we present a mass spectrometry label-free quantification tool for complex proteomes, called freeQuant, which effectively integrates quantification with functional analysis. freeQuant consists of two well-integrated modules: label-free quantification and functional analysis with biomedical knowledge. freeQuant supports label-free quantitative analysis which makes full use of tandem mass spectrometry (MS/MS) spectral counts, protein sequence length, shared peptides, and ion intensity. It adopts spectral counting for quantitative analysis and introduces a new method for shared peptides to accurately evaluate the abundance of isoforms. For proteins with low abundance, the MS/MS total ion count coupled with the spectral count is included to ensure accurate protein quantification. Furthermore, freeQuant supports large-scale functional annotation for complex proteomes. Mitochondrial proteomes from the mouse heart, the mouse liver, and the human heart were used to evaluate the usability and performance of freeQuant. The evaluation showed that the quantitative algorithms implemented in freeQuant improve the accuracy of quantification with better dynamic range.
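A minimal sketch of length-corrected spectral counting in this spirit is the normalized spectral abundance factor (NSAF); this is a standard measure, not freeQuant's exact algorithm, and the counts and lengths below are made up.

```python
def nsaf(spectral_counts, lengths):
    """Normalized Spectral Abundance Factor: spectral count divided by
    protein length for each protein, normalized so all factors sum to 1."""
    saf = [c / l for c, l in zip(spectral_counts, lengths)]
    total = sum(saf)
    return [s / total for s in saf]

# hypothetical proteins: spectral counts and sequence lengths (residues)
counts = [120, 30, 30]
lengths = [600, 300, 150]
print(nsaf(counts, lengths))  # longer proteins are down-weighted
```

The length correction is why the long first protein and the short third protein end up with the same abundance factor despite a fourfold difference in raw counts.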
Protein inference: A protein quantification perspective.
He, Zengyou; Huang, Ting; Liu, Xiaoqing; Zhu, Peijun; Teng, Ben; Deng, Shengchun
2016-08-01
In mass spectrometry-based shotgun proteomics, protein quantification and protein identification are two major computational problems. To quantify protein abundance, a list of proteins must first be inferred from the raw data. Then the relative or absolute protein abundance is estimated with quantification methods such as spectral counting. Until now, most researchers have dealt with these two processes separately. In fact, the protein inference problem can be regarded as a special protein quantification problem, in the sense that truly present proteins are those whose abundance values are not zero. Some recently published papers have conceptually discussed this possibility. However, there is still a lack of rigorous experimental studies testing this hypothesis. In this paper, we investigate the feasibility of using protein quantification methods to solve the protein inference problem. Protein inference methods aim to determine whether each candidate protein is present in the sample or not. Protein quantification methods estimate the abundance value of each inferred protein. Naturally, the abundance value of an absent protein should be zero. Thus, we argue that the protein inference problem can be viewed as a special protein quantification problem in which a protein is considered present if its abundance is not zero. Based on this idea, our paper uses three simple protein quantification methods to solve the protein inference problem effectively. The experimental results on six data sets show that these three methods are competitive with previous protein inference algorithms. This demonstrates that it is plausible to model the protein inference problem as a special protein quantification task, which opens the door to devising more effective protein inference algorithms from a quantification perspective. The source code of our methods is available at: http://code.google.com/p/protein-inference/.
Ghosh, Karabi
2017-02-01
We briefly comment on a paper by N.A. Gentile [J. Comput. Phys. 230 (2011) 5100-5114] in which the Fleck factor has been modified to include the effects of temperature-dependent opacities in the implicit Monte Carlo algorithm developed by Fleck and Cummings [1,2]. Instead of the Fleck factor f = 1/(1 + βcΔtσ_P), the author derived the modified Fleck factor g = 1/(1 + βcΔtσ_P − min[σ_P′(aT_r⁴ − aT⁴)cΔt/(ρC_V), 0]) to be used in the Implicit Monte Carlo (IMC) algorithm in order to obtain more accurate solutions with much larger time steps. Here β = 4aT³/(ρC_V), σ_P is the Planck opacity, and σ_P′ = dσ_P/dT is the derivative of the Planck opacity with respect to the material temperature.
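With the notation above, both factors are one-liners; the parameter values below are purely illustrative, not taken from either paper.

```python
# Sketch of the standard and modified Fleck factors from the comment
# above; all parameter values are illustrative placeholders.
a = 7.5657e-16        # radiation constant [J m^-3 K^-4]
c = 2.9979e8          # speed of light [m/s]

def fleck_factors(T, Tr, sigma_p, dsigma_dT, rho_cv, dt):
    beta = 4.0 * a * T**3 / rho_cv
    f = 1.0 / (1.0 + beta * c * dt * sigma_p)               # Fleck-Cummings
    corr = min(dsigma_dT * (a * Tr**4 - a * T**4) * c * dt / rho_cv, 0.0)
    g = 1.0 / (1.0 + beta * c * dt * sigma_p - corr)        # Gentile
    return f, g

f, g = fleck_factors(T=1.0e6, Tr=1.2e6, sigma_p=10.0,
                     dsigma_dT=-1.0e-5, rho_cv=1.0e4, dt=1.0e-9)
print(f, g)
```

Since the `min(..., 0)` clamp makes the correction non-positive, g ≤ f always holds, i.e. the modified factor can only increase the effective absorption fraction.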
Günter, Anastasia; Balsaa, Peter; Werres, Friedrich; Schmidt, Torsten C
2016-06-10
In this study, 21 organochlorine pesticides (OCPs) were determined based on sample preparation using solid-phase extraction disks (SPE disks) coupled with programmable temperature vaporizer (PTV)-large-volume injection gas-chromatography mass spectrometry (LVI-GC-MS). The work includes a comprehensive testing scheme on the suitability of the method for routine analysis of surface and drinking water including suspended particulate matter (SPM) with regard to requirements derived from the European Water Framework Directive (WFD, Directive 2000/60/EC). SPM is an important reservoir for OCPs, which contributes to the transport of these compounds in the aquatic environment. To achieve the detection limits required by the WFD, a high pre-concentration factor during sample preparation is necessary, which was achieved by disk SPE in this study. The performance of disk SPE is strongly influenced by the drying step, which could be significantly improved by effective elimination of the residual water by combination of a high vacuum pump and a low humidity atmosphere. Detection limits of the WFD in the ng/L range were achieved by large volume injection of 100μL sample extract. The recoveries ranged from 82% to 117% with an RSD smaller than 13%. The applicability of this method to natural samples was tested for instrumental qualification and system suitability evaluation. Successful participation in an interlaboratory comparison proved the suitability of the method for routine analysis.
Standardless quantification by parameter optimization in electron probe microanalysis
Energy Technology Data Exchange (ETDEWEB)
Limandri, Silvina P. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Bonetto, Rita D. [Centro de Investigacion y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco (CINDECA), CONICET, 47 Street 257, (1900) La Plata (Argentina); Facultad de Ciencias Exactas, Universidad Nacional de La Plata, 1 and 47 Streets (1900) La Plata (Argentina); Josa, Victor Galvan; Carreras, Alejo C. [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina); Trincavelli, Jorge C., E-mail: trincavelli@famaf.unc.edu.ar [Instituto de Fisica Enrique Gaviola (IFEG), CONICET (Argentina); Facultad de Matematica, Astronomia y Fisica, Universidad Nacional de Cordoba, Medina Allende s/n, (5016) Cordoba (Argentina)
2012-11-15
A method for standardless quantification by parameter optimization in electron probe microanalysis is presented. The method consists in minimizing the quadratic differences between an experimental spectrum and an analytical function proposed to describe it, by optimizing the parameters involved in the analytical prediction. This algorithm, implemented in the software POEMA (Parameter Optimization in Electron Probe Microanalysis), allows the determination of the elemental concentrations along with their uncertainties. The method was tested on a set of 159 elemental constituents corresponding to 36 spectra of standards (mostly minerals) that include trace elements. The results were compared with those obtained with the commercial software GENESIS Spectrum® for standardless quantification. The quantifications performed with the method proposed here are better in 74% of the cases studied. In addition, the performance of the proposed method is compared with the first-principles standardless analysis procedure DTSA for a different data set, which excludes trace elements. The relative deviations with respect to the nominal concentrations are lower than 0.04, 0.08 and 0.35 in 66% of the cases for POEMA, GENESIS and DTSA, respectively. Highlights: • A method for standardless quantification in EPMA is presented. • It gives better results than the commercial software GENESIS Spectrum. • It gives better results than the software DTSA. • It allows the determination of the conductive coating thickness. • It gives an estimation of the concentration uncertainties.
Nature-inspired optimization algorithms
Yang, Xin-She
2014-01-01
Nature-Inspired Optimization Algorithms provides a systematic introduction to all major nature-inspired algorithms for optimization. The book's unified approach, balancing algorithm introduction, theoretical background and practical implementation, complements extensive literature with well-chosen case studies to illustrate how these algorithms work. Topics include particle swarm optimization, ant and bee algorithms, simulated annealing, cuckoo search, firefly algorithm, bat algorithm, flower algorithm, harmony search, algorithm analysis, constraint handling, hybrid methods, parameter tuning
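Of the methods listed, simulated annealing is the easiest to sketch in a few lines; the cooling schedule, step size, and test function below are illustrative choices, not examples from the book.

```python
import math, random

def simulated_annealing(f, x0, steps=20000, t0=1.0, seed=1):
    """Minimize f by random perturbations, accepting uphill moves with
    probability exp(-delta/T) under a geometric cooling schedule."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    T = t0
    for _ in range(steps):
        cand = x + rng.gauss(0.0, 0.5)        # random neighbor
        fc = f(cand)
        if fc < fx or rng.random() < math.exp(-(fc - fx) / T):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = x, fx
        T *= 0.999                             # geometric cooling
    return best, fbest

# multimodal test function; global minimum lies near x = -0.31
f = lambda x: x * x + 2.0 * math.sin(5.0 * x) + 2.0
best, fbest = simulated_annealing(f, x0=4.0)
print(best, fbest)
```

The acceptance of uphill moves at high temperature is what distinguishes this family from greedy local search and lets it escape the local minima of the sine term.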
Semiautomatic quantification of angiogenesis.
Boettcher, Markus; Gloe, Torsten; de Wit, Cor
2010-07-01
Angiogenesis is of major interest in developmental biology and cancer research. Different experimental approaches are available to study angiogenesis; they have in common the need for microscopy, image acquisition, and analysis. Problems encountered here include the size of the structures, which requires the generation of composite images, and the difficulty of quantifying angiogenic activity reliably and rapidly. Most graphics software packages lack some of the functions required for easy, semiautomatic quantification of angiogenesis, so multiple software packages or expensive programs have to be used to cover all necessary functions. A software package (AQuaL) to analyze angiogenic activity was developed in Java and can therefore be used platform-independently. It includes image acquisition based on the Java Media Framework and an easy-to-use image alignment tool. Multiple overlapping images can be aligned and saved, without limitation or loss of resolution, into a composite image; this requires only the selection of a single point representing a characteristic structure in adjacent images. Angiogenic activity can be quantified semiautomatically in composite images by assessing the area overgrown by cells after filtering and image binarization. In addition, tagging of capillary-like structures allows quantification of their length and branching pattern. Both methods deliver reliable and correlated data, as exemplified in the aortic ring angiogenesis assay. The developed software provides modular functions specifically targeted at quantifying angiogenesis. Whereas the area measurement is time-saving, the length measurement provides additional information about branching patterns, which is required for a qualitative differentiation of capillary growth.
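The area-overgrown measurement reduces to binarization plus pixel counting, which can be sketched as follows; the synthetic image stands in for a filtered composite micrograph.

```python
import numpy as np

def area_fraction(image, threshold):
    """Binarize a grayscale image and return the fraction of pixels
    above threshold, i.e. the area overgrown by (bright) cells."""
    binary = image > threshold
    return binary.mean()

# synthetic 100x100 image: dark background, one bright 20x30 "outgrowth"
img = np.zeros((100, 100))
img[10:30, 40:70] = 1.0
print(area_fraction(img, 0.5))  # 600 / 10000 = 0.06
```

In practice the threshold would follow the filtering step described above, and the same binary mask could feed a skeletonization step for the length and branching measurements.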
Kn, Bhanu Prakash; Gopalan, Venkatesh; Lee, Swee Shean; Velan, S Sendhil
2014-01-01
Obesity is a leading healthcare issue contributing to metabolic diseases. There is great interest in non-invasive approaches for quantifying abdominal fat in obese animals and humans. In this work, we propose an automated method to distinguish and quantify subcutaneous and visceral adipose tissues (SAT and VAT) in rodents during obesity and weight loss interventions. We have also investigated the influence of different magnetic resonance sequences and sources of variability in the quantification of fat depots. High-fat diet fed rodents were utilized for investigating the changes during obesity, exercise, and calorie restriction interventions (N = 7/cohort). Imaging was performed on a 7T Bruker ClinScan scanner using fast spin echo (FSE) and Dixon imaging methods to estimate the fat depots. Finally, we quantified the SAT and VAT volumes between the L1-L5 lumbar vertebrae using the proposed automatic hybrid geodesic region-based curve evolution algorithm. Significant changes in SAT and VAT volumes were observed across the interventions. The fat quantification correlations between the FSE (with and without water suppression) sequences and Dixon for SAT and VAT were 0.9709 and 0.9803, and 0.9955 and 0.9840, respectively. The algorithm significantly reduced the computation time from 100 sec/slice to 25 sec/slice. The pre-processing, data-derived contour placement, and avoidance of strong background-image boundaries improved the convergence accuracy of the proposed algorithm. We developed a fully automatic segmentation algorithm to quantify SAT and VAT from abdominal images of rodents, which can support large cohort studies. We additionally identified the influence of non-algorithmic variables, including cradle disturbance, animal positioning, and MR sequence, on the fat quantification. There were no large variations between FSE and Dixon-based estimation of SAT and VAT.
Algorithmically specialized parallel computers
Snyder, Lawrence; Gannon, Dennis B
1985-01-01
Algorithmically Specialized Parallel Computers focuses on the concept and characteristics of an algorithmically specialized computer. This book discusses the algorithmically specialized computers, algorithmic specialization using VLSI, and innovative architectures. The architectures and algorithms for digital signal, speech, and image processing and specialized architectures for numerical computations are also elaborated. Other topics include the model for analyzing generalized inter-processor, pipelined architecture for search tree maintenance, and specialized computer organization for raster
Mixture quantification using PLS in plastic scintillation measurements
Energy Technology Data Exchange (ETDEWEB)
Bagan, H.; Tarancon, A.; Rauret, G. [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Garcia, J.F., E-mail: jfgarcia@ub.ed [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain)
2011-06-15
This article reports the capability of plastic scintillation (PS) combined with multivariate calibration (partial least squares; PLS) to detect and quantify alpha and beta emitters in mixtures. While several attempts have been made with this purpose in mind using liquid scintillation (LS), none had used PS, which has the great advantage of not producing mixed waste after the measurements are performed. Following this objective, ternary mixtures of alpha and beta emitters (²⁴¹Am, ¹³⁷Cs and ⁹⁰Sr/⁹⁰Y) have been quantified. Procedure optimisation evaluated the use of the net spectra or the sample spectra, the inclusion of different spectra obtained at different values of the pulse shape analysis parameter, and the application of the PLS1 or PLS2 algorithms. The conclusions show that the use of PS+PLS2 applied to the sample spectra, without any pulse shape discrimination, allows quantification of the activities with relative errors below 10% in most cases. This procedure not only allows quantification of mixtures but also reduces measurement time (no blanks are required), and it does not require detectors that include the pulse shape analysis parameter.
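A PLS1 regression in the spirit of this procedure can be sketched via the classical NIPALS iterations; the two-component synthetic "spectra" below stand in for measured scintillation spectra and are not the paper's data.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """PLS1 regression via the NIPALS algorithm: returns a coefficient
    vector b such that y ≈ X @ b (X and y assumed centered)."""
    Xk, yk = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)           # weight vector
        t = Xk @ w                       # score vector
        tt = t @ t
        p = Xk.T @ t / tt                # X loading
        qk = yk @ t / tt                 # y loading
        Xk = Xk - np.outer(t, p)         # deflate X
        yk = yk - qk * t                 # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

# synthetic "spectra": noiseless mixtures of two component spectra
rng = np.random.default_rng(0)
S = rng.random((2, 50))                  # pure-component spectra (2 x channels)
A = rng.random((30, 2))                  # activities (mixing coefficients)
X = A @ S                                # measured mixture spectra
y = A[:, 0]                              # activity of the first emitter
b = pls1_fit(X - X.mean(0), y - y.mean(), n_components=2)
y_hat = (X - X.mean(0)) @ b + y.mean()
print(np.allclose(y_hat, y))             # exact recovery for noiseless data
```

With noiseless two-component mixtures, two latent variables reproduce the activities exactly; real spectra would need more components and a validation-based choice of their number.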
Automated quantification of nuclear immunohistochemical markers with different complexity.
López, Carlos; Lejeune, Marylène; Salvadó, María Teresa; Escrivà, Patricia; Bosch, Ramón; Pons, Lluis E; Alvaro, Tomás; Roig, Jordi; Cugat, Xavier; Baucells, Jordi; Jaén, Joaquín
2008-03-01
Manual quantification of immunohistochemically stained nuclear markers is still laborious and subjective, and the use of computerized systems for digital image analysis has not yet resolved the problems of nuclear clustering. In this study, we designed a new automatic procedure for quantifying various immunohistochemical nuclear markers with variable clustering complexity. This procedure consists of two combined macros. The first, developed with commercial software, enables the analysis of digital images using color and morphological segmentation, including a masking process. All information extracted with this first macro is automatically exported to an Excel datasheet, where a second macro composed of four different algorithms analyzes the information and calculates the definitive number of positive nuclei for each image. One hundred and eighteen images with different levels of clustering complexity were analyzed and compared with the manual quantification obtained by a trained observer. Statistical analysis indicated high reliability (intra-class correlation coefficient > 0.950) and no significant differences between the two methods. Bland-Altman plots and Kaplan-Meier curves indicated that the results of both methods were concordant for around 90% of the analyzed images. In conclusion, this new automated procedure is an objective, faster, and reproducible method with an excellent level of accuracy, even for digital images with high complexity.
Margolis, C Z
1983-02-04
The clinical algorithm (flow chart) is a text format that is specially suited for representing a sequence of clinical decisions, for teaching clinical decision making, and for guiding patient care. A representative clinical algorithm is described in detail; five steps for writing an algorithm and seven steps for writing a set of algorithms are outlined. Five clinical education and patient care uses of algorithms are then discussed, including a map for teaching clinical decision making and protocol charts for guiding step-by-step care of specific problems. Clinical algorithms are compared with decision analysis in terms of clinical usefulness. Three objections to clinical algorithms are answered, including the one that they restrict thinking. It is concluded that methods should be sought for writing clinical algorithms that represent expert consensus. A clinical algorithm could then be written for any area of medical decision making that can be standardized. Medical practice could then be taught more effectively, monitored accurately, and understood better.
Autonomous Star Tracker Algorithms
DEFF Research Database (Denmark)
Betto, Maurizio; Jørgensen, John Leif; Kilsgaard, Søren
1998-01-01
Proposal, in response to an ESA R.f.P., to design algorithms for autonomous star tracker operations. The proposal also included the development of a star tracker breadboard to test the algorithms' performance.
Adjoint-based uncertainty quantification and sensitivity analysis for reactor depletion calculations
Stripling, Hayes Franklin
Depletion calculations for nuclear reactors model the dynamic coupling between the material composition and neutron flux and help predict reactor performance and safety characteristics. In order to be trusted as reliable predictive tools and inputs to licensing and operational decisions, the simulations must include an accurate and holistic quantification of errors and uncertainties in their outputs. Uncertainty quantification is a formidable challenge in large, realistic reactor models because of the large number of unknowns and myriad sources of uncertainty and error. We present a framework for performing efficient uncertainty quantification in depletion problems using an adjoint approach, with emphasis on high-fidelity calculations using advanced massively parallel computing architectures. This approach calls for a solution to two systems of equations: (a) the forward, engineering system that models the reactor, and (b) the adjoint system, which is mathematically related to but different from the forward system. We use the solutions of these systems to produce sensitivity and error estimates at a cost that does not grow rapidly with the number of uncertain inputs. We present the framework in a general fashion and apply it to both the source-driven and k-eigenvalue forms of the depletion equations. We describe the implementation and verification of solvers for the forward and adjoint equations in the PDT code, and we test the algorithms on realistic reactor analysis problems. We demonstrate a new approach for reducing the memory and I/O demands on the host machine, which can be overwhelming for typical adjoint algorithms. Our conclusion is that adjoint depletion calculations using full transport solutions are not only computationally tractable, they are the most attractive option for performing uncertainty quantification on high-fidelity reactor analysis problems.
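The adjoint idea the abstract describes can be illustrated on a steady linear forward system A u = b(p) with quantity of interest J = cᵀu: a single adjoint solve Aᵀλ = c yields all parameter sensitivities dJ/dpᵢ = λᵀ(∂b/∂pᵢ), however many parameters there are. The matrices below are random stand-ins, not a depletion system.

```python
import numpy as np

# Forward system A u = b(p) with QoI J = c^T u.  One adjoint solve
# gives the full gradient, versus one forward solve per parameter
# for finite differences.
rng = np.random.default_rng(1)
n, n_params = 5, 3
A = rng.random((n, n)) + n * np.eye(n)       # well-conditioned operator
B = rng.random((n, n_params))                # b(p) = B @ p (linear source)
c = rng.random(n)
p = rng.random(n_params)

u = np.linalg.solve(A, B @ p)                # forward solve
J = c @ u
lam = np.linalg.solve(A.T, c)                # adjoint solve
grad_adjoint = B.T @ lam                     # dJ/dp from one extra solve

# check against finite differences (one forward solve per parameter)
eps = 1e-6
grad_fd = np.array([
    (c @ np.linalg.solve(A, B @ (p + eps * np.eye(n_params)[i])) - J) / eps
    for i in range(n_params)])
print(np.allclose(grad_adjoint, grad_fd, atol=1e-5))
```

This constant cost in the number of inputs is precisely the property that makes the adjoint approach attractive when a depletion model has thousands of uncertain cross sections.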
Whole‐body MRI‐based fat quantification: A comparison to air displacement plethysmography
National Research Council Canada - National Science Library
Ludwig, Ute A; Klausmann, Florian; Baumann, Sandra; Honal, Matthias; Hövener, Jan‐Bernd; König, Daniel; Deibert, Peter; Büchert, Martin
2014-01-01
To demonstrate the feasibility of an algorithm for MRI whole-body quantification of internal and subcutaneous fat and quantitative comparison of total adipose tissue to air displacement plethysmography (ADP...
Gong, Wei; Duan, Qingyun
2016-04-01
Parameterization schemes have a significant influence on the simulation ability of large, complex dynamic geophysical models, such as distributed hydrological models, land surface models, and weather and climate models. With growing knowledge of physical processes, these models include more and more processes and produce more output variables. Consequently, parameter optimization and uncertainty quantification algorithms should also be multi-objective compatible. Although such algorithms have long been available, they usually require a large number of model runs and are therefore computationally expensive for large, complex dynamic models. In this research, we have developed a surrogate-based multi-objective optimization method (MO-ASMO) and a Markov chain Monte Carlo method (MC-ASMO) for uncertainty quantification of these expensive dynamic models. The aim of MO-ASMO and MC-ASMO is to reduce the total number of model runs through an appropriate adaptive sampling strategy assisted by surrogate modeling. Moreover, we also developed a method that can steer the search process with the help of a prior parameterization scheme derived from the physical processes involved, so that all of the objectives can be improved simultaneously. The proposed algorithms have been evaluated with test problems and a land surface model, the Common Land Model (CoLM). The results demonstrated their effectiveness and efficiency.
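The surrogate-assisted loop can be sketched in one dimension: fit a cheap surrogate to the points evaluated so far, minimize the surrogate, run the true model only at the proposed point, and refit. The polynomial surrogate and the toy "expensive model" below are illustrative simplifications of the adaptive-sampling idea, not the MO-ASMO algorithm itself.

```python
import numpy as np

def surrogate_opt(f, lo, hi, n_init=5, n_iter=10):
    """Surrogate-assisted minimization sketch: polynomial surrogate,
    minimized on a dense grid; one true-model run per iteration."""
    xs = list(np.linspace(lo, hi, n_init))
    ys = [f(x) for x in xs]
    grid = np.linspace(lo, hi, 1001)
    for _ in range(n_iter):
        coeffs = np.polyfit(xs, ys, deg=min(4, len(xs) - 1))
        x_new = grid[np.argmin(np.polyval(coeffs, grid))]
        xs.append(x_new)
        ys.append(f(x_new))          # the only expensive call per iteration
    i = int(np.argmin(ys))
    return xs[i], ys[i]

expensive_model = lambda x: (x - 1.3) ** 2 + 0.5   # stand-in for a real model
x_best, y_best = surrogate_opt(expensive_model, -5.0, 5.0)
print(x_best, y_best)
```

The budget here is n_init + n_iter true-model evaluations; a direct grid search of the same resolution would need 1001, which is the saving that matters when each run is a full land surface simulation.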
Xu, Yiwen; Pickering, J Geoffrey; Nong, Zengxuan; Ward, Aaron D
2017-04-01
Immunohistochemical tissue staining enhances microvasculature characteristics, including the smooth muscle in the medial layer of the vessel walls that is responsible for regulation of blood flow. The vasculature can be imaged in a comprehensive fashion using whole-slide scanning. However, since each such image potentially contains hundreds of small vessels, manual vessel delineation and quantification is not practically feasible. In this work, we present a fully automatic segmentation and vasculature quantification algorithm for whole-slide images. We evaluated its performance on tissue samples drawn from the hind limbs of wild-type mice, stained for smooth muscle using 3,3'-Diaminobenzidine (DAB) immunostain. The algorithm was designed to be robust to vessel fragmentation due to staining irregularity, and artefactual staining of nonvessel objects. Colour deconvolution was used to isolate the DAB stain for detection of vessel wall fragments. Complete vessels were reconstructed from the fragments by joining endpoints of topological skeletons. Automatic measures of vessel density, perimeter, wall area and local wall thickness were taken. The segmentation algorithm was validated against manual measures, resulting in a Dice similarity coefficient of 89%. The relationships observed between these measures were as expected from a biological standpoint, providing further reinforcement of the accuracy of this system. This system provides a fully automated and accurate means of measuring the arteriolar and venular morphology of vascular smooth muscle.
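The colour deconvolution step used above to isolate the DAB stain can be sketched per pixel: convert intensities to optical densities (Beer-Lambert) and invert a stain matrix. The stain vectors below are illustrative optical-density triplets, not values calibrated for this assay.

```python
import math

# Colour deconvolution sketch (Ruifrok-Johnston style): unmix a pixel
# into per-stain concentrations. Stain vectors are assumed values.
HEM = [0.650, 0.704, 0.286]            # hematoxylin OD vector (assumed)
DAB = [0.269, 0.568, 0.778]            # DAB OD vector (assumed)
# Residual channel: normalized cross product completes the basis.
RES = [HEM[1]*DAB[2] - HEM[2]*DAB[1],
       HEM[2]*DAB[0] - HEM[0]*DAB[2],
       HEM[0]*DAB[1] - HEM[1]*DAB[0]]
nrm = math.sqrt(sum(v*v for v in RES))
RES = [v/nrm for v in RES]
M = [HEM, DAB, RES]                    # rows: stain OD vectors

def inv3(m):
    # 3x3 inverse via the adjugate matrix
    a, b, c = m[0]; d, e, f = m[1]; g, h, i = m[2]
    det = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    return [[(e*i - f*h)/det, (c*h - b*i)/det, (b*f - c*e)/det],
            [(f*g - d*i)/det, (a*i - c*g)/det, (c*d - a*f)/det],
            [(d*h - e*g)/det, (b*g - a*h)/det, (a*e - b*d)/det]]

# OD = M^T @ conc, so conc = (M^T)^-1 @ OD for each pixel.
MTinv = inv3([[M[r][k] for r in range(3)] for k in range(3)])

def unmix(rgb):
    od = [-math.log10(max(v, 1e-6) / 255.0) for v in rgb]
    return [sum(MTinv[r][k]*od[k] for k in range(3)) for r in range(3)]

# Round trip: synthesize a pixel with known stain amounts, recover them.
true_c = [0.8, 0.3, 0.0]               # hematoxylin, DAB, residual
od = [sum(true_c[s]*M[s][j] for s in range(3)) for j in range(3)]
rgb = [255.0 * 10**(-v) for v in od]
print([round(v, 3) for v in unmix(rgb)])   # ≈ [0.8, 0.3, 0.0]
```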
Exact reliability quantification of highly reliable systems with maintenance
Energy Technology Data Exchange (ETDEWEB)
Bris, Radim, E-mail: radim.bris@vsb.c [VSB-Technical University Ostrava, Faculty of Electrical Engineering and Computer Science, Department of Applied Mathematics, 17. listopadu 15, 70833 Ostrava-Poruba (Czech Republic)
2010-12-15
When a system is composed of highly reliable elements, exact reliability quantification may be problematic because computer accuracy is limited. Inaccuracy can arise in different ways. For example, an error may be made when subtracting two numbers that are very close to each other, or during the summation of many very different numbers. The basic objective of this paper is to find a procedure that eliminates the errors made by a computer when calculations close to its accuracy limit are executed. A highly reliable system is represented by a directed acyclic graph composed of terminal nodes (the highly reliable input elements), internal nodes representing subsystems, and edges that bind these nodes together. Three admissible unavailability models of terminal nodes are introduced, including both corrective and preventive maintenance. The algorithm for exact unavailability calculation of terminal nodes is implemented in MATLAB, a high-performance language for technical computing. The system unavailability quantification procedure, applied to a graph structure that considers both independent and dependent (i.e. repeatedly occurring) terminal nodes, is based on a combinatorial principle. This principle requires the summation of many very different non-negative numbers, which may be a source of inaccuracy. That is why another algorithm, for the exact summation of such numbers, is designed in the paper. The summation procedure exploits a special number system with base 2^32. The computational efficiency of the new computing methodology is compared with advanced simulation software. Various calculations on systems from the references are performed to emphasize the merits of the methodology.
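The core difficulty, summing many numbers of very different magnitude without round-off, can be illustrated with exact rational accumulation; Python's `fractions.Fraction` stands in here for the paper's base-2^32 multiprecision scheme, and the values are illustrative.

```python
from fractions import Fraction

# Round-off illustration: at double precision, a term of 1e-20 is far
# below the ulp of 1.0 and simply disappears during accumulation.
# Every float is exactly representable as a rational, so summing
# Fractions accumulates without any round-off.
terms = [1.0] + [1e-20] * 1000 + [-1.0]

naive = 0.0
for t in terms:
    naive += t            # each 1e-20 vanishes next to 1.0 in a double

exact = sum(Fraction(t) for t in terms)

print(naive)              # 0.0: the tiny contributions are lost
print(float(exact))       # ~1e-17: preserved by exact accumulation
```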
Qi, Wei; Zhang, Chi; Fu, Guangtao; Zhou, Huicheng
2016-02-01
It is widely recognized that optimization algorithm parameters have significant impacts on algorithm performance, but quantifying this influence is complex and difficult due to high computational demands and the dynamic nature of search parameters. The overall aim of this paper is to develop a global sensitivity analysis based framework to dynamically quantify the individual and interactive influence of algorithm parameters on algorithm performance. A variance decomposition sensitivity analysis method, Analysis of Variance (ANOVA), is used for sensitivity quantification, because it can handle small samples and is more computationally efficient than other approaches. The Shuffled Complex Evolution algorithm developed at the University of Arizona (SCE-UA) is selected as the optimization algorithm for investigation, and two criteria, i.e., convergence speed and success rate, are used to measure its performance. Results show that the proposed framework can effectively reveal the dynamic sensitivity of algorithm parameters during the search process, including the individual influences of parameters and their interactive impacts. Interactions between algorithm parameters have significant impacts on SCE-UA performance, which had not been reported in previous research. The proposed framework provides a means to understand the dynamics of algorithm parameter influence, and highlights the significance of considering interactive parameter influence for improving algorithm performance during the search process.
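The variance-decomposition idea can be sketched with a two-way ANOVA over a full factorial grid of parameter settings; the `performance` function and the parameter levels below are synthetic stand-ins for SCE-UA performance measures, not values from the paper.

```python
# Two-way ANOVA variance decomposition sketch: how much of the
# variation in an algorithm's performance comes from each parameter
# individually and from their interaction.
levels_a = [1, 2, 3]          # e.g. number of complexes (illustrative)
levels_b = [10, 20, 30]       # e.g. points per complex (illustrative)

def performance(a, b):
    # synthetic response with a genuine a×b interaction term
    return 2.0*a + 0.1*b + 0.05*a*b

y = {(a, b): performance(a, b) for a in levels_a for b in levels_b}
n_a, n_b = len(levels_a), len(levels_b)
grand = sum(y.values()) / (n_a * n_b)
mean_a = {a: sum(y[a, b] for b in levels_b)/n_b for a in levels_a}
mean_b = {b: sum(y[a, b] for a in levels_a)/n_a for b in levels_b}

# Sums of squares: main effects, interaction residual, and total.
ss_a = n_b * sum((mean_a[a] - grand)**2 for a in levels_a)
ss_b = n_a * sum((mean_b[b] - grand)**2 for b in levels_b)
ss_ab = sum((y[a, b] - mean_a[a] - mean_b[b] + grand)**2
            for a in levels_a for b in levels_b)
ss_tot = sum((v - grand)**2 for v in y.values())

for name, ss in [("A", ss_a), ("B", ss_b), ("AxB", ss_ab)]:
    print(name, round(ss / ss_tot, 3))   # fractional contribution
```

For a full factorial design the three sums of squares add up exactly to the total, which is what makes the fractional attribution well defined.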
Approximate iterative algorithms
Almudevar, Anthony Louis
2014-01-01
Iterative algorithms often rely on approximate evaluation techniques, which may include statistical estimation, computer simulation or functional approximation. This volume presents methods for the study of approximate iterative algorithms, providing tools for the derivation of error bounds and convergence rates, and for the optimal design of such algorithms. Techniques of functional analysis are used to derive analytical relationships between approximation methods and convergence properties for general classes of algorithms. This work provides the necessary background in functional analysis a
Competing Sudakov Veto Algorithms
Kleiss, Ronald
2016-01-01
We present a way to analyze the distribution produced by a Monte Carlo algorithm. We perform these analyses on several versions of the Sudakov veto algorithm, adding a cutoff, a second variable and competition between emission channels. The analysis allows us to prove that multiple, seemingly different competition algorithms, including those that are currently implemented in most parton showers, lead to the same result. Finally, we test their performance and show that there are significantly faster alternatives to the commonly used algorithms.
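The basic veto algorithm that the variants above build on can be sketched and checked against direct inversion of the Sudakov form factor. The emission density `f(t) = t` and the constant overestimate `G` are illustrative choices; real parton showers add cutoffs and competing channels as described in the abstract.

```python
import math, random
random.seed(1)

# Sudakov veto algorithm vs. direct inversion for f(t) = t on (0, T_MAX],
# with constant overestimate G >= f. Both should give the first-emission
# scale distributed as f(t) * exp(-(T_MAX^2 - t^2)/2).
T_MAX = 2.0
G = T_MAX

def f(t):
    return t

def veto_once():
    # evolve downward; accept a trial emission with probability f/G
    t = T_MAX
    while True:
        t += math.log(1.0 - random.random()) / G
        if t <= 0.0:
            return None                  # no emission above the cutoff
        if random.random() < f(t) / G:
            return t

def direct_once():
    # invert the Sudakov form factor exp(-(T_MAX^2 - t^2)/2) directly
    u = 1.0 - random.random()
    s = T_MAX**2 + 2.0 * math.log(u)
    return math.sqrt(s) if s > 0.0 else None

N = 200_000
veto   = [t for t in (veto_once()   for _ in range(N)) if t is not None]
direct = [t for t in (direct_once() for _ in range(N)) if t is not None]

mean = lambda xs: sum(xs) / len(xs)
print(round(mean(veto), 3), round(mean(direct), 3))  # agree to ~1e-2
```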
Directory of Open Access Journals (Sweden)
May-Hadford Jennifer
2011-06-01
Full Text Available Abstract Background During periods of crisis, laboratory planners may be faced with a need to make operational and clinical decisions in the face of limited information. To avoid this dilemma, our laboratory utilizes a secure web-based platform, Data Integration for Alberta Laboratories (DIAL), to make near real-time decisions. This manuscript utilizes the data collected by DIAL as well as laboratory test cost modeling to identify the relative economic impact of four proposed scenarios of testing for Pandemic H1N1 (2009) and other respiratory viral pathogens. Methods Historical data were collected from the two waves of the pandemic using DIAL. Four proposed molecular testing scenarios were generated: (A) Luminex respiratory virus panel (RVP) first, with/without the US Centers for Disease Control Influenza A Matrix gene assay (CDC-M); (B) CDC-M first, with/without RVP; (C) RVP only; and (D) CDC-M only. Relative cost estimates of the different testing algorithms were generated from a review of historical costs in the lab and were based on 2009 Canadian dollars. Results Scenarios A and B had similar costs when the rate of influenza A was low. Conclusions No one approach is applicable to all conditions. Testing costs will vary depending on the test volume, the prevalence of influenza A strains, and other circulating viruses, and a more costly algorithm involving a combination of different tests may be chosen to ensure that test results are returned to the clinician more quickly. Costing should not be the only consideration in the determination of laboratory algorithms.
DEFF Research Database (Denmark)
Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy
2007-01-01
We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n^2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel...
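A plain (non-blocked) version of packed-storage Cholesky can be sketched in a few lines; the indexing mirrors column-packed lower-triangular storage in n(n+1)/2 variables, while the cited work's cache-friendly block hybrid format is considerably more elaborate.

```python
import math

# Left-looking Cholesky on a lower triangle packed by columns.
# ap holds A's lower triangle in n(n+1)/2 slots and is overwritten
# in place with L such that A = L L^T.
def idx(i, j, n):
    # position of element (i, j), i >= j, in column-packed storage
    return j*n - j*(j - 1)//2 + (i - j)

def chol_packed(ap, n):
    for j in range(n):
        for k in range(j):               # subtract earlier columns
            ljk = ap[idx(j, k, n)]
            for i in range(j, n):
                ap[idx(i, j, n)] -= ap[idx(i, k, n)] * ljk
        d = math.sqrt(ap[idx(j, j, n)])  # pivot
        for i in range(j, n):            # scale column j
            ap[idx(i, j, n)] /= d
    return ap

# A = [[4, 2], [2, 3]] packed by columns: [a00, a10, a11]
L = chol_packed([4.0, 2.0, 3.0], 2)
print([round(v, 6) for v in L])   # [2.0, 1.0, 1.414214]
```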
Lamb Wave Damage Quantification Using GA-Based LS-SVM
Directory of Open Access Journals (Sweden)
Fuqiang Sun
2017-06-01
Full Text Available Lamb waves have been reported to be an efficient tool for non-destructive evaluations (NDE for various application scenarios. However, accurate and reliable damage quantification using the Lamb wave method is still a practical challenge, due to the complex underlying mechanism of Lamb wave propagation and damage detection. This paper presents a Lamb wave damage quantification method using a least square support vector machine (LS-SVM and a genetic algorithm (GA. Three damage sensitive features, namely, normalized amplitude, phase change, and correlation coefficient, were proposed to describe changes of Lamb wave characteristics caused by damage. In view of commonly used data-driven methods, the GA-based LS-SVM model using the proposed three damage sensitive features was implemented to evaluate the crack size. The GA method was adopted to optimize the model parameters. The results of GA-based LS-SVM were validated using coupon test data and lap joint component test data with naturally developed fatigue cracks. Cases of different loading and manufacturer were also included to further verify the robustness of the proposed method for crack quantification.
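The GA-over-hyperparameters loop described above can be sketched with a synthetic fitness function standing in for the LS-SVM validation error (training an actual LS-SVM on Lamb wave features is omitted); the population size, rates, and fitness surface are all illustrative assumptions.

```python
import random
random.seed(7)

# GA sketch for tuning two LS-SVM hyperparameters (regularization g,
# kernel width s). The fitness below is a synthetic stand-in whose
# minimum sits at (3.0, 0.5); in the paper it would be computed by
# training and validating an LS-SVM.
def fitness(g, s):
    return (g - 3.0)**2 + (s - 0.5)**2

POP, GENS = 30, 60
pop = [(random.uniform(0, 10), random.uniform(0, 2)) for _ in range(POP)]

for _ in range(GENS):
    pop.sort(key=lambda p: fitness(*p))
    elite = pop[:POP//2]                 # elitist selection
    children = []
    while len(children) < POP - len(elite):
        (g1, s1), (g2, s2) = random.sample(elite, 2)
        a = random.random()              # arithmetic crossover
        g, s = a*g1 + (1-a)*g2, a*s1 + (1-a)*s2
        if random.random() < 0.2:        # Gaussian mutation
            g += random.gauss(0, 0.3); s += random.gauss(0, 0.1)
        children.append((g, s))
    pop = elite + children

best = min(pop, key=lambda p: fitness(*p))
print(best)   # best individual, near the minimum at (3.0, 0.5)
```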
Uncertainty quantification and error analysis
Energy Technology Data Exchange (ETDEWEB)
Higdon, Dave M [Los Alamos National Laboratory; Anderson, Mark C [Los Alamos National Laboratory; Habib, Salman [Los Alamos National Laboratory; Klein, Richard [Los Alamos National Laboratory; Berliner, Mark [OHIO STATE UNIV.; Covey, Curt [LLNL; Ghattas, Omar [UNIV OF TEXAS; Graziani, Carlo [UNIV OF CHICAGO; Seager, Mark [LLNL; Sefcik, Joseph [LLNL; Stark, Philip [UC/BERKELEY; Stewart, James [SNL
2010-01-01
UQ studies all sources of error and uncertainty, including: systematic and stochastic measurement error; ignorance; limitations of theoretical models; limitations of numerical representations of those models; limitations on the accuracy and reliability of computations, approximations, and algorithms; and human error. A more precise definition for UQ is suggested below.
Inverse problems and uncertainty quantification
Litvinenko, Alexander
2013-12-18
In a Bayesian setting, inverse problems and uncertainty quantification (UQ), i.e. the propagation of uncertainty through a computational (forward) model, are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
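For a Gaussian prior and a linear observation operator, the sampling-free linear Bayesian update reduces to a Kalman-type formula, which a scalar toy example makes concrete (all numbers below are illustrative, not taken from the paper):

```python
# Linear (Kalman-type) Bayesian update in the scalar case, cross-checked
# against the precision-weighted Bayes formula for a Gaussian prior.
m, C = 1.0, 4.0        # prior mean and variance of the uncertain quantity
H, R = 1.0, 1.0        # linear observation operator and noise variance
y = 3.0                # measurement

K = C*H / (H*C*H + R)              # gain
m_post = m + K*(y - H*m)           # updated mean
C_post = (1 - K*H) * C             # updated variance

# Reference: combine prior and likelihood precisions directly.
prec = 1/C + H*H/R
m_ref = (m/C + H*y/R) / prec

print(m_post, m_ref)   # both print 2.6
print(C_post, 1/prec)  # equal up to round-off (0.8)
```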
Practical quantification of necrosis in histological whole-slide images.
Homeyer, André; Schenk, Andrea; Arlt, Janine; Dahmen, Uta; Dirsch, Olaf; Hahn, Horst K
2013-06-01
Since the histological quantification of necrosis is a common task in medical research and practice, we evaluate different image analysis methods for quantifying necrosis in whole-slide images. In a practical usage scenario, we assess the impact of different classification algorithms and feature sets on both accuracy and computation time. We show how a well-chosen combination of multiresolution features and an efficient postprocessing step enables the accurate quantification of necrosis in gigapixel images in less than a minute. The results are general enough to be applied to other areas of histological image analysis as well.
Algorithms for Quantum Computers
Smith, Jamie
2010-01-01
This paper surveys the field of quantum computer algorithms. It gives a taste of both the breadth and the depth of the known algorithms for quantum computers, focusing on some of the more recent results. It begins with a brief review of quantum Fourier transform based algorithms, followed by quantum searching and some of its early generalizations. It continues with a more in-depth description of two more recent developments: algorithms developed in the quantum walk paradigm, followed by tensor network evaluation algorithms (which include approximating the Tutte polynomial).
Fast sparse Raman spectral unmixing for chemical fingerprinting and quantification
Yaghoobi, Mehrdad; Wu, Di; Clewes, Rhea J.; Davies, Mike E.
2016-10-01
with small contributions, which are normally not detected. The proposed algorithm is easily reconfigurable to include new library entries and optional preferential threat searches in the presence of predetermined threat indicators. Under Ministry of Defence funding, we have demonstrated the algorithm for fingerprinting and rough quantification of the concentration of chemical mixtures using a set of reference spectral mixtures. In our experiments, the algorithm successfully detected chemicals with concentrations below 10 percent. The running time of the algorithm is on the order of one second, using a single core of a desktop computer.
A comprehensive evaluation of alignment algorithms in the context of RNA-seq.
Lindner, Robert; Friedel, Caroline C
2012-01-01
Transcriptome sequencing (RNA-Seq) overcomes limitations of previously used RNA quantification methods and provides one experimental framework for both high-throughput characterization and quantification of transcripts at the nucleotide level. The first step and a major challenge in the analysis of such experiments is the mapping of sequencing reads to a transcriptomic origin including the identification of splicing events. In recent years, a large number of such mapping algorithms have been developed, all of which have in common that they require algorithms for aligning a vast number of reads to genomic or transcriptomic sequences. Although the FM-index based aligner Bowtie has become a de facto standard within mapping pipelines, a much larger number of possible alignment algorithms have been developed also including other variants of FM-index based aligners. Accordingly, developers and users of RNA-seq mapping pipelines have the choice among a large number of available alignment algorithms. To provide guidance in the choice of alignment algorithms for these purposes, we evaluated the performance of 14 widely used alignment programs from three different algorithmic classes: algorithms using either hashing of the reference transcriptome, hashing of reads, or a compressed FM-index representation of the genome. Here, special emphasis was placed on both precision and recall and the performance for different read lengths and numbers of mismatches and indels in a read. Our results clearly showed the significant reduction in memory footprint and runtime provided by FM-index based aligners at a precision and recall comparable to the best hash table based aligners. Furthermore, the recently developed Bowtie 2 alignment algorithm shows a remarkable tolerance to both sequencing errors and indels, thus, essentially making hash-based aligners obsolete.
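The FM-index machinery underlying Bowtie-style aligners rests on the Burrows-Wheeler transform and backward search; a minimal exact-match counting version (no rank sampling, no mismatch tolerance, linear-time `occ` instead of the constant-time rank structures real indexes use) can be sketched as:

```python
# Minimal BWT backward-search sketch: count exact occurrences of a
# pattern, the core query behind FM-index read aligners.
def bwt(s):
    s += "$"                                   # sentinel, smallest char
    rots = sorted(s[i:] + s[:i] for i in range(len(s)))
    return "".join(r[-1] for r in rots)

def fm_count(text, pattern):
    L = bwt(text)
    C, tot = {}, 0                             # C[c]: #chars < c in text
    for c in sorted(set(L)):
        C[c] = tot
        tot += L.count(c)

    def occ(c, i):                             # #occurrences of c in L[:i]
        return L[:i].count(c)                  # O(n) here; real indexes
                                               # use sampled rank tables
    lo, hi = 0, len(L)
    for c in reversed(pattern):                # backward search
        if c not in C:
            return 0
        lo = C[c] + occ(c, lo)
        hi = C[c] + occ(c, hi)
        if lo >= hi:
            return 0
    return hi - lo

print(fm_count("abracadabra", "abra"))  # 2
print(fm_count("abracadabra", "cad"))   # 1
```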
Kleinberg, Jon
2006-01-01
Algorithm Design introduces algorithms by looking at the real-world problems that motivate them. The book teaches students a range of design and analysis techniques for problems that arise in computing applications. The text encourages an understanding of the algorithm design process and an appreciation of the role of algorithms in the broader field of computer science.
Wang, Lui; Bayer, Steven E.
1991-01-01
Genetic algorithms are mathematical, highly parallel, adaptive search procedures (i.e., problem-solving methods) based loosely on the processes of natural genetics and Darwinian survival of the fittest. Basic genetic algorithm concepts and applications are introduced, and results are presented from a project to develop a software tool that will enable the widespread use of genetic algorithm technology.
Directory of Open Access Journals (Sweden)
MARIA A ZULUAGA
Full Text Available This paper describes the adaptations made to the MARACAS algorithm in order to segment and quantify vascular structures in CTA images of the carotid artery. The MARACAS algorithm, which is based on an elastic model and on an analysis of the eigenvalues and eigenvectors of the inertia matrix, was originally designed to segment a single artery in MRA images. The modifications are primarily aimed at addressing the specificities of CTA images, as well as the presence of bifurcations. The algorithms implemented in this new version are classified into two levels. 1. Low-level processing (noise and directional-artifact filtering, presegmentation, and enhancement) to improve image quality and presegment it; these techniques are based on a priori information about noise, artifacts, and the typical gray-level ranges of the lumen, the background, and calcifications. 2. High-level processing to extract the artery centerline, segment the lumen, and quantify the stenosis; at this level, a priori knowledge about the shape and anatomy of vascular structures is applied. The method was evaluated on 31 images provided in the Carotid Lumen Segmentation and Stenosis Grading Grand Challenge 2009. The segmentation results yielded an average Dice similarity coefficient of 80.4% with respect to the reference segmentation, and the average stenosis quantification error was 14.4%.
Quantum CPU and Quantum Algorithm
Wang, An Min
1999-01-01
Making use of a universal quantum network (QCPU) proposed by the author, we obtain the whole quantum network that can implement the known quantum algorithms, including Deutsch's algorithm, the quantum Fourier transform, Shor's algorithm, and Grover's algorithm.
Directory of Open Access Journals (Sweden)
Bhanu Prakash Kn
Full Text Available BACKGROUND & AIMS: Obesity is a leading healthcare issue contributing to metabolic diseases. There is a great interest in non-invasive approaches for quantitating abdominal fat in obese animals and humans. In this work, we propose an automated method to distinguish and quantify subcutaneous and visceral adipose tissues (SAT and VAT in rodents during obesity and weight loss interventions. We have also investigated the influence of different magnetic resonance sequences and sources of variability in quantification of fat depots. MATERIALS AND METHODS: High-fat diet fed rodents were utilized for investigating the changes during obesity, exercise, and calorie restriction interventions (N = 7/cohort. Imaging was performed on a 7T Bruker ClinScan scanner using fast spin echo (FSE and Dixon imaging methods to estimate the fat depots. Finally, we quantified the SAT and VAT volumes between the L1-L5 lumbar vertebrae using the proposed automatic hybrid geodesic region-based curve evolution algorithm. RESULTS: Significant changes in SAT and VAT volumes (p<0.01 were observed between the pre- and post-intervention measurements. The SAT and VAT were 44.22±9%, 21.06±1.35% for control, -17.33±3.07%, -15.09±1.11% for exercise, and 18.56±2.05%, -3.9±0.96% for calorie restriction cohorts, respectively. The fat quantification correlation between FSE (with and without water suppression sequences and Dixon for SAT and VAT were 0.9709, 0.9803 and 0.9955, 0.9840 respectively. The algorithm significantly reduced the computation time from 100 sec/slice to 25 sec/slice. The pre-processing, data-derived contour placement and avoidance of strong background-image boundary improved the convergence accuracy of the proposed algorithm. CONCLUSIONS: We developed a fully automatic segmentation algorithm to quantitate SAT and VAT from abdominal images of rodents, which can support large cohort studies. We additionally identified the influence of non-algorithmic variables
Joux, Antoine
2009-01-01
Illustrating the power of algorithms, Algorithmic Cryptanalysis describes algorithmic methods with cryptographically relevant examples. Focusing on both private- and public-key cryptographic algorithms, it presents each algorithm either as a textual description, in pseudo-code, or in a C code program.Divided into three parts, the book begins with a short introduction to cryptography and a background chapter on elementary number theory and algebra. It then moves on to algorithms, with each chapter in this section dedicated to a single topic and often illustrated with simple cryptographic applic
A remark on collective quantification
Kontinen, J.; Szymanik, J.
2008-01-01
We consider collective quantification in natural language. For many years the common strategy in formalizing collective quantification has been to define the meanings of collective determiners, quantifying over collections, using certain type-shifting operations. These type-shifting operations, i.e.
New Optimization Algorithms in Physics
Hartmann, Alexander K
2004-01-01
Many physicists are not aware of the fact that they can solve their problems by applying optimization algorithms. Since the number of such algorithms is steadily increasing, many new algorithms have not been presented comprehensively until now. This presentation of recently developed algorithms applied in physics, including demonstrations of how they work and related results, aims to encourage their application, and as such the algorithms selected cover concepts and methods from statistical physics to optimization problems emerging in theoretical computer science.
A simplified Excel® algorithm for estimating the least limiting water range of soils
Directory of Open Access Journals (Sweden)
Leão Tairone Paiva
2004-01-01
Full Text Available The least limiting water range (LLWR) of soils has been employed as a methodological approach for the evaluation of soil physical quality in different agricultural systems, including forestry, grasslands, and major crops. However, the absence of a simplified methodology for the quantification of the LLWR has hampered the popularization of its use among researchers and soil managers. Taking this into account, this work proposes and describes a simplified algorithm, developed in Excel® software, for the quantification of the LLWR, including the calculation of the critical bulk density at which the LLWR becomes zero. Despite the simplicity of the procedures and of the numerical optimization techniques used, the nonlinear regression produced reliable results when compared to those found in the literature.
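The critical-bulk-density calculation can be sketched with hypothetical limiting-water-content curves (toy linear functions of bulk density, not fitted soil retention or penetration-resistance data) and a simple bisection on the point where the range closes:

```python
# Hypothetical sketch: the LLWR is the gap between an upper limiting
# water content (here, field capacity) and a lower one (here, the
# penetration-resistance limit), both toy monotone functions of bulk
# density Db. The critical Db is where the gap vanishes.
def theta_fc(db):   return 0.40 - 0.10*db    # illustrative upper limit
def theta_pr(db):   return 0.05 + 0.12*db    # illustrative lower limit
def llwr(db):       return max(0.0, theta_fc(db) - theta_pr(db))

def critical_db(lo=0.5, hi=2.0, tol=1e-8):
    # bisection on the signed difference theta_fc - theta_pr
    f = lambda db: theta_fc(db) - theta_pr(db)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) > 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

db_c = critical_db()
print(round(db_c, 4))   # 0.35/0.22 ≈ 1.5909
```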
Hougardy, Stefan
2016-01-01
Algorithms play an increasingly important role in nearly all fields of mathematics. This book allows readers to develop basic mathematical abilities, in particular those concerning the design and analysis of algorithms as well as their implementation. It presents not only fundamental algorithms like the sieve of Eratosthenes, the Euclidean algorithm, sorting algorithms, algorithms on graphs, and Gaussian elimination, but also discusses elementary data structures, basic graph theory, and numerical questions. In addition, it provides an introduction to programming and demonstrates in detail how to implement algorithms in C++. This textbook is suitable for students who are new to the subject and covers a basic mathematical lecture course, complementing traditional courses on analysis and linear algebra. Both authors have given this "Algorithmic Mathematics" course at the University of Bonn several times in recent years.
VESGEN Software for Mapping and Quantification of Vascular Regulators
Parsons-Wingerter, Patricia A.; Vickerman, Mary B.; Keith, Patricia A.
2012-01-01
VESsel GENeration (VESGEN) Analysis is an automated software that maps and quantifies effects of vascular regulators on vascular morphology by analyzing important vessel parameters. Quantification parameters include vessel diameter, length, branch points, density, and fractal dimension. For vascular trees, measurements are reported as dependent functions of vessel branching generation. VESGEN maps and quantifies vascular morphological events according to fractal-based vascular branching generation. It also relies on careful imaging of branching and networked vascular form. It was developed as a plug-in for ImageJ (National Institutes of Health, USA). VESGEN uses image-processing concepts of 8-neighbor pixel connectivity, skeleton, and distance map to analyze 2D, black-and-white (binary) images of vascular trees, networks, and tree-network composites. VESGEN maps typically 5 to 12 (or more) generations of vascular branching, starting from a single parent vessel. These generations are tracked and measured for critical vascular parameters that include vessel diameter, length, density and number, and tortuosity per branching generation. The effects of vascular therapeutics and regulators on vascular morphology and branching tested in human clinical or laboratory animal experimental studies are quantified by comparing vascular parameters with control groups. VESGEN provides a user interface to both guide and allow control over the user's vascular analysis process. An option is provided to select a morphological tissue type of vascular trees, network or tree-network composites, which determines the general collections of algorithms, intermediate images, and output images and measurements that will be produced.
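The 8-neighbor pixel connectivity idea that skeleton-based branching analysis rests on can be sketched on a tiny hand-made binary skeleton: endpoints have exactly one skeleton neighbour, branch points three or more.

```python
# 8-neighbour skeleton analysis sketch on a tiny "Y"-shaped skeleton.
# Real pipelines first skeletonize a segmented vessel mask; the grid
# here is written by hand for illustration.
grid = [
    "1...1",
    ".1.1.",
    "..1..",
    "..1..",
    "..1..",
]
on = {(r, c) for r, row in enumerate(grid)
             for c, ch in enumerate(row) if ch == "1"}

def neighbours(p):
    # count skeleton pixels among the 8 neighbours of p
    r, c = p
    return sum((r + dr, c + dc) in on
               for dr in (-1, 0, 1) for dc in (-1, 0, 1)
               if (dr, dc) != (0, 0))

endpoints    = [p for p in on if neighbours(p) == 1]
branchpoints = [p for p in on if neighbours(p) >= 3]
print(len(endpoints), len(branchpoints))  # 3 endpoints, 1 branch point
```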
Tel, G.
1993-01-01
We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all processes is required to reach this decision. Total algorithms are an important building block in the design of distri
Disease quantification in dermatology
DEFF Research Database (Denmark)
Greve, Tanja Maria; Kamp, Søren; Jemec, Gregor B E
2013-01-01
useful in quantifying disease severity, they require extensive clinical experience and carry a risk of subjectivity. We explore the opportunity to use in vivo near-infrared (NIR) spectra as an objective and noninvasive method for local disease severity assessment in 31 psoriasis patients in whom selected plaques were scored clinically. A partial least squares (PLS) regression model was used to analyze and predict the severity scores on the NIR spectra of psoriatic and uninvolved skin. The correlation between predicted and clinically assigned scores was R=0.94 (RMSE=0.96), suggesting that in vivo NIR provides accurate clinical quantification of psoriatic plaques. Hence, NIR may be a practical solution to clinical severity assessment of psoriasis, providing a continuous, linear, numerical value of severity.
Advancing agricultural greenhouse gas quantification*
Olander, Lydia; Wollenberg, Eva; Tubiello, Francesco; Herold, Martin
2013-03-01
1. Introduction Better information on greenhouse gas (GHG) emissions and mitigation potential in the agricultural sector is necessary to manage these emissions and identify responses that are consistent with the food security and economic development priorities of countries. Critical activity data (what crops or livestock are managed in what way) are poor or lacking for many agricultural systems, especially in developing countries. In addition, the currently available methods for quantifying emissions and mitigation are often too expensive or complex or not sufficiently user friendly for widespread use. The purpose of this focus issue is to capture the state of the art in quantifying greenhouse gases from agricultural systems, with the goal of better understanding our current capabilities and near-term potential for improvement, with particular attention to quantification issues relevant to smallholders in developing countries. This work is timely in light of international discussions and negotiations around how agriculture should be included in efforts to reduce and adapt to climate change impacts, and considering that significant climate financing to developing countries in post-2012 agreements may be linked to their increased ability to identify and report GHG emissions (Murphy et al 2010, CCAFS 2011, FAO 2011). 2. Agriculture and climate change mitigation The main agricultural GHGs—methane and nitrous oxide—account for 10%-12% of anthropogenic emissions globally (Smith et al 2008), or around 50% and 60% of total anthropogenic methane and nitrous oxide emissions, respectively, in 2005. Net carbon dioxide fluxes between agricultural land and the atmosphere linked to food production are relatively small, although significant carbon emissions are associated with degradation of organic soils for plantations in tropical regions (Smith et al 2007, FAO 2012). Population growth and shifts in dietary patterns toward more meat and dairy consumption will lead to
A New Metaheuristic Bat-Inspired Algorithm
Yang, Xin-She
2010-01-01
Metaheuristic algorithms such as particle swarm optimization, the firefly algorithm and harmony search are now becoming powerful methods for solving many tough optimization problems. In this paper, we propose a new metaheuristic method, the Bat Algorithm, based on the echolocation behaviour of bats. We also intend to combine the advantages of existing algorithms into the new bat algorithm. After a detailed formulation and explanation of its implementation, we compare the proposed algorithm with other existing algorithms, including genetic algorithms and particle swarm optimization. Simulations show that the proposed algorithm performs markedly better than these algorithms, and further studies are also discussed.
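The echolocation-inspired update rules described above can be sketched as follows. This is a minimal, illustrative implementation, not the author's reference code; the parameter values (frequency range, loudness decay `alpha`, pulse-rate growth `gamma`) are assumed defaults, and the sphere function stands in for a real objective.

```python
import numpy as np

rng = np.random.default_rng(0)

def bat_algorithm(obj, dim=2, n_bats=30, n_iter=300,
                  fmin=0.0, fmax=2.0, alpha=0.9, gamma=0.9,
                  lower=-5.0, upper=5.0):
    """Minimal sketch of the Bat Algorithm (Yang 2010)."""
    x = rng.uniform(lower, upper, (n_bats, dim))   # bat positions
    v = np.zeros((n_bats, dim))                    # velocities
    A = np.ones(n_bats)                            # loudness
    r = np.zeros(n_bats)                           # pulse emission rate
    fit = np.apply_along_axis(obj, 1, x)
    best = x[fit.argmin()].copy()
    for t in range(n_iter):
        for i in range(n_bats):
            freq = fmin + (fmax - fmin) * rng.random()   # frequency tuning
            v[i] += (x[i] - best) * freq
            cand = x[i] + v[i]
            if rng.random() > r[i]:
                # local random walk around the current best solution
                cand = best + 0.01 * rng.standard_normal(dim) * A.mean()
            cand = np.clip(cand, lower, upper)
            f_cand = obj(cand)
            if f_cand <= fit[i] and rng.random() < A[i]:
                x[i], fit[i] = cand, f_cand
                A[i] *= alpha                        # loudness decreases
                r[i] = 1 - np.exp(-gamma * (t + 1))  # pulse rate increases
            if f_cand <= obj(best):
                best = cand.copy()
    return best, obj(best)

best, fbest = bat_algorithm(lambda z: float(np.sum(z ** 2)))
print(fbest)
```

With the seed fixed, the best fitness on this 2-D sphere function drops well below the initial random sample, illustrating the loudness/pulse-rate balance between exploration and exploitation.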
Multiphysics modeling and uncertainty quantification for an active composite reflector
Peterson, Lee D.; Bradford, S. C.; Schiermeier, John E.; Agnes, Gregory S.; Basinger, Scott A.
2013-09-01
A multiphysics, high-resolution simulation of an actively controlled composite reflector panel is developed to extrapolate from ground test results to flight performance. The subject test article has previously demonstrated sub-micron corrected shape under a controlled laboratory thermal load. This paper develops a model of the on-orbit performance of the panel under realistic thermal loads, with an active heater control system, and performs an uncertainty quantification of the predicted response. The primary contribution of this paper is the first reported application of the Sandia-developed Sierra mechanics simulation tools to a spacecraft multiphysics simulation of a closed-loop system, including uncertainty quantification. The simulation was developed so as to have sufficient resolution to capture the residual panel shape error that remains after the thermal and mechanical control loops are closed. An uncertainty quantification analysis was performed to assess the predicted tolerance in the closed-loop wavefront error. Key tools used for the uncertainty quantification are also described.
Uncertainty Quantification in Numerical Aerodynamics
Litvinenko, Alexander
2017-05-16
We consider the uncertainty quantification problem in aerodynamic simulations. We identify input uncertainties, classify them, suggest an appropriate statistical model and, finally, estimate the propagation of these uncertainties into the solution (pressure, velocity and density fields, as well as the lift and drag coefficients). The deterministic problem under consideration is a compressible transonic Reynolds-averaged Navier-Stokes flow around an airfoil with random/uncertain data. Input uncertainties include: uncertain angle of attack, the Mach number, random perturbations in the airfoil geometry, mesh, shock location, turbulence model and parameters of this turbulence model. This problem requires efficient numerical/statistical methods since it is computationally expensive, especially for the uncertainties caused by random geometry variations, which involve a large number of variables. In the numerical section we compare five methods (quasi-Monte Carlo quadrature, polynomial chaos with coefficients determined by sparse quadrature, a gradient-enhanced version of Kriging, radial basis functions, and point-collocation polynomial chaos) in their efficiency in estimating statistics of aerodynamic performance under random perturbations of the airfoil geometry [D. Liu et al. '17]. For modeling we used the TAU code, developed at DLR, Germany.
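The quadrature-based polynomial chaos idea mentioned above can be illustrated on a one-dimensional toy problem: propagating a Gaussian input through a simple response model by Gauss-Hermite quadrature and comparing with plain Monte Carlo. The response `g` is an invented stand-in for an expensive CFD quantity of interest, not a model from the paper.

```python
import numpy as np

# Toy "lift vs. uncertain angle-of-attack" response; A ~ N(0, 1).
g = lambda a: np.exp(0.3 * a)

# Exact mean of g(A) for a lognormal response: exp(0.3**2 / 2).
exact = np.exp(0.045)

# (a) plain Monte Carlo sampling
rng = np.random.default_rng(1)
mc = g(rng.standard_normal(100_000)).mean()

# (b) 8-point Gauss-HermiteE quadrature (probabilists' weight exp(-x^2/2)),
#     the building block of quadrature-based polynomial chaos.
nodes, weights = np.polynomial.hermite_e.hermegauss(8)
quad = np.sum(weights * g(nodes)) / np.sqrt(2 * np.pi)

print(exact, mc, quad)
```

For a smooth response like this, eight quadrature nodes (i.e., eight "solver runs") reach far higher accuracy than the 100,000 Monte Carlo samples, which is the efficiency argument the abstract makes.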
Uncertainty Quantification in Hybrid Dynamical Systems
Sahai, Tuhin
2011-01-01
Uncertainty quantification (UQ) techniques are frequently used to ascertain output variability in systems with parametric uncertainty. Traditional algorithms for UQ are either system-agnostic and slow (such as Monte Carlo) or fast with stringent assumptions on smoothness (such as polynomial chaos and Quasi-Monte Carlo). In this work, we develop a fast UQ approach for hybrid dynamical systems by extending the polynomial chaos methodology to these systems. To capture discontinuities, we use a wavelet-based Wiener-Haar expansion. We develop a boundary layer approach to propagate uncertainty through separable reset conditions. We also introduce a transport theory based approach for propagating uncertainty through hybrid dynamical systems. Here the expansion yields a set of hyperbolic equations that are solved by integrating along characteristics. The solution of the partial differential equation along the characteristics allows one to quantify uncertainty in hybrid or switching dynamical systems. The above method...
Uncertainty quantification in hybrid dynamical systems
Sahai, Tuhin; Pasini, José Miguel
2013-03-01
Uncertainty quantification (UQ) techniques are frequently used to ascertain output variability in systems with parametric uncertainty. Traditional algorithms for UQ are either system-agnostic and slow (such as Monte Carlo) or fast with stringent assumptions on smoothness (such as polynomial chaos and Quasi-Monte Carlo). In this work, we develop a fast UQ approach for hybrid dynamical systems by extending the polynomial chaos methodology to these systems. To capture discontinuities, we use a wavelet-based Wiener-Haar expansion. We develop a boundary layer approach to propagate uncertainty through separable reset conditions. We also introduce a transport theory based approach for propagating uncertainty through hybrid dynamical systems. Here the expansion yields a set of hyperbolic equations that are solved by integrating along characteristics. The solution of the partial differential equation along the characteristics allows one to quantify uncertainty in hybrid or switching dynamical systems. The above methods are demonstrated on example problems.
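The motivation for the Wiener-Haar expansion used above (discontinuities defeat smooth polynomial bases) can be illustrated by projecting a step response onto dyadic piecewise-constant spaces, i.e., the Haar approximation spaces: the error decays steadily with refinement level and shows no Gibbs oscillation. The step location and sampling grid below are invented for illustration.

```python
import numpy as np

# Step response, e.g. a quantity that jumps across a switching surface.
f = lambda x: np.where(x < 0.3, 0.0, 1.0)
x = (np.arange(4096) + 0.5) / 4096        # midpoint samples on [0, 1]
y = f(x)

def haar_projection(y, level):
    """Average over 2**level dyadic intervals (Haar approximation space)."""
    k = 2 ** level
    means = y.reshape(k, -1).mean(axis=1)
    return np.repeat(means, y.size // k)

errs = [np.sqrt(np.mean((haar_projection(y, L) - y) ** 2))
        for L in (2, 4, 6, 8)]
print(errs)
```

Only the single dyadic interval containing the jump contributes to the error, so the root-mean-square error halves with every two refinement levels, while a global polynomial fit of comparable size would oscillate around the discontinuity.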
Uncertainty Quantification in Climate Modeling
Sargsyan, K.; Safta, C.; Berry, R.; Debusschere, B.; Najm, H.
2011-12-01
We address challenges that sensitivity analysis and uncertainty quantification methods face when dealing with complex computational models. In particular, climate models are computationally expensive and typically depend on a large number of input parameters. We consider the Community Land Model (CLM), which consists of a nested computational grid hierarchy designed to represent the spatial heterogeneity of the land surface. Each computational cell can be composed of multiple land types, and each land type can incorporate one or more sub-models describing the spatial and depth variability. Even for simulations at a regional scale, the computational cost of a single run is quite high and the number of parameters that control the model behavior is very large. Therefore, the parameter sensitivity analysis and uncertainty propagation face significant difficulties for climate models. This work employs several algorithmic avenues to address some of the challenges encountered by classical uncertainty quantification methodologies when dealing with expensive computational models, specifically focusing on the CLM as a primary application. First of all, since the available climate model predictions are extremely sparse due to the high computational cost of model runs, we adopt a Bayesian framework that effectively incorporates this lack-of-knowledge as a source of uncertainty, and produces robust predictions with quantified uncertainty even if the model runs are extremely sparse. In particular, we infer Polynomial Chaos spectral expansions that effectively encode the uncertain input-output relationship and allow efficient propagation of all sources of input uncertainties to outputs of interest. Secondly, the predictability analysis of climate models strongly suffers from the curse of dimensionality, i.e. the large number of input parameters. While single-parameter perturbation studies can be efficiently performed in a parallel fashion, the multivariate uncertainty analysis
Cai, Wenli; Lee, June-Goo; Fikry, Karim; Yoshida, Hiroyuki; Novelline, Robert; de Moya, Marc
2013-01-01
It is commonly believed that the size of a pneumothorax is an important determinant of the treatment decision, in particular regarding whether chest tube drainage (CTD) is required. However, volumetric quantification of pneumothoraces has not routinely been performed in clinics. In this paper, we introduce an automated computer-aided volumetry (CAV) scheme for quantification of the volume of pneumothoraces in chest multi-detector CT (MDCT) images. Moreover, we investigated the impact of accurate pneumothorax volume on improving the performance of decision-making regarding CTD in the management of traumatic pneumothoraces. For this purpose, an occurrence frequency map was calculated for quantitative analysis of the importance of each clinical parameter in the decision-making regarding CTD by a computer simulation of decision-making using a genetic algorithm (GA) and a support vector machine (SVM). A total of 14 clinical parameters, including the volume of pneumothorax calculated by our CAV scheme, were collected as parameters available for decision-making. The results showed that volume was the dominant parameter in decision-making regarding CTD, with an occurrence frequency value of 1.00. The results also indicated that the inclusion of volume provided the best performance, statistically significantly better than the tests in which volume was excluded from the clinical parameters. This study provides scientific evidence for the application of a CAV scheme in MDCT volumetric quantification of pneumothoraces in the management of clinically stable chest trauma patients with traumatic pneumothorax. PMID:22560899
Energy Technology Data Exchange (ETDEWEB)
Van Eyndhoven, G., E-mail: geert.vaneyndhoven@uantwerpen.be [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Kurttepeli, M. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Van Oers, C.J.; Cool, P. [Laboratory of Adsorption and Catalysis, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, B-2020 Antwerp (Belgium); Batenburg, K.J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, NL-1090 GB Amsterdam (Netherlands); Mathematical Institute, Universiteit Leiden, Niels Bohrweg 1, NL-2333 CA Leiden (Netherlands); Sijbers, J. [iMinds-Vision Lab, University of Antwerp, Universiteitsplein 1, B-2610 Wilrijk (Belgium)
2015-01-15
Electron tomography is currently a versatile tool to investigate the connection between the structure and properties of nanomaterials. However, a quantitative interpretation of electron tomography results is still far from straightforward. In particular, accurate quantification of pore space is hampered by artifacts introduced in all steps of the processing chain, i.e., acquisition, reconstruction, segmentation and quantification. Furthermore, most common approaches require subjective manual user input. In this paper, the PORES algorithm “POre REconstruction and Segmentation” is introduced; it is a tailor-made, integral approach for the reconstruction, segmentation, and quantification of porous nanomaterials. The PORES processing chain starts by calculating a reconstruction with a nanoporous-specific reconstruction algorithm: the Simultaneous Update of Pore Pixels by iterative REconstruction and Simple Segmentation algorithm (SUPPRESS). It classifies the interior region as pore space during reconstruction, while reconstructing the remaining region by reducing the error with respect to the acquired electron microscopy data. The SUPPRESS reconstruction can be directly plugged into the remaining processing chain of the PORES algorithm, resulting in accurate individual pore quantification and full sample pore statistics. The proposed approach was extensively validated on both simulated and experimental data, indicating its ability to generate accurate statistics of nanoporous materials. - Highlights: • An electron tomography reconstruction/segmentation method for nanoporous materials. • The method exploits the porous nature of the scanned material. • Validated extensively on both simulation and real data experiments. • Results in increased image resolution and improved porosity quantification.
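The "reduce the error with respect to the acquired data" loop at the heart of such iterative reconstructions can be sketched with a generic SIRT-style iteration under a nonnegativity constraint. This is not the SUPPRESS algorithm itself (which additionally classifies pore pixels during reconstruction); the projection matrix here is a random toy stand-in for a real tilt-series geometry.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.random((40, 20))          # toy projection matrix
x_true = rng.random(20)
y = A @ x_true                    # simulated, noiseless tilt-series data

row = 1.0 / A.sum(axis=1)         # inverse row sums (R)
col = 1.0 / A.sum(axis=0)         # inverse column sums (C)
x = np.zeros(20)
for _ in range(500):
    # SIRT update: x <- x + C A^T R (y - A x)
    x += col * (A.T @ (row * (y - A @ x)))
    x = np.maximum(x, 0.0)        # material density cannot be negative
print(np.linalg.norm(y - A @ x))
```

Each sweep redistributes the back-projected residual over the volume, so the data misfit shrinks monotonically for a consistent system; real electron-tomography codes add regularization and, in SUPPRESS, the pore-classification step.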
Verb aspect, alternations and quantification
Directory of Open Access Journals (Sweden)
Svetla Koeva
2015-11-01
Full Text Available In this paper we briefly discuss the nature of Bulgarian verb aspect and argue that verb aspect pairs are different lexical units with different (although related) meanings, different argument structures (reflecting categories, explicitness and referential status of arguments) and different sets of semantic and syntactic alternations. The verb prefixes that derive perfective verbs can in some cases also be interpreted as lexical quantifiers. Thus Bulgarian verb aspect is related (in different ways) both to the potential for the generation of alternations and to prefixal lexical quantification. It is shown that the scope of lexical quantification by means of verbal prefixes is the quantified verb phrase, and that this scope remains constant in all derived alternations. The paper concerns the basic issues of these complex problems, while the detailed description of the conditions satisfying a particular alternation or a particular lexical quantification is the subject of a more detailed study.
Uncertainty Quantification in Aeroelasticity
Beran, Philip; Stanford, Bret; Schrock, Christopher
2017-01-01
Physical interactions between a fluid and structure, potentially manifested as self-sustained or divergent oscillations, can be sensitive to many parameters whose values are uncertain. Of interest here are aircraft aeroelastic interactions, which must be accounted for in aircraft certification and design. Deterministic prediction of these aeroelastic behaviors can be difficult owing to physical and computational complexity. New challenges are introduced when physical parameters and elements of the modeling process are uncertain. By viewing aeroelasticity through a nondeterministic prism, where key quantities are assumed stochastic, one may gain insights into how to reduce system uncertainty, increase system robustness, and maintain aeroelastic safety. This article reviews uncertainty quantification in aeroelasticity using traditional analytical techniques not reliant on computational fluid dynamics; compares and contrasts this work with emerging methods based on computational fluid dynamics, which target richer physics; and reviews the state of the art in aeroelastic optimization under uncertainty. Barriers to continued progress, for example, the so-called curse of dimensionality, are discussed.
Quantification of epicardial fat by computed tomography: why, when and how?
Marwan, Mohamed; Achenbach, Stephan
2013-01-01
In the past decade, interest has grown in the relation between epicardial fat and cardiovascular disease. Several imaging modalities such as echocardiography, computed tomography, and magnetic resonance imaging can be used to quantify epicardial adipose tissue. Computed tomography provides high spatial resolution and true volume coverage of the heart; therefore, it constitutes an attractive approach to quantifying epicardial fat. An increasing body of evidence has been accumulated which shows a relation between epicardial fat volume and coronary atherosclerosis, cardiovascular outcomes, and even non-atherosclerotic heart disease such as atrial fibrillation. The association of increased epicardial fat volume with cardiac disease remains significant even after correction for weight, body mass index, and traditional cardiovascular risk factors. The mechanisms have not been reliably identified, but metabolic properties of epicardial fat may play a role. At the present time, epicardial fat quantification is not included in recommended algorithms for risk stratification. However, the available data are intriguing enough to warrant further research.
Volumetric motion quantification by 3D tissue phase mapped CMR
Directory of Open Access Journals (Sweden)
Lutz Anja
2012-10-01
Full Text Available Abstract Background The objective of this study was the quantification of myocardial motion from 3D tissue phase mapped (TPM) CMR. Recent work on myocardial motion quantification by TPM has focused on multi-slice 2D acquisitions, thus excluding motion information from large regions of the left ventricle. Volumetric motion assessment appears to be an important next step towards understanding volumetric myocardial motion and hence may further improve diagnosis and treatment in patients with myocardial motion abnormalities. Methods Volumetric motion quantification of the complete left ventricle was performed in 12 healthy volunteers and two patients applying a black-blood 3D TPM sequence. The resulting motion field was analysed for motion pattern differences between apical and basal locations as well as for asynchronous motion patterns between different myocardial segments in one or more slices. Motion quantification included velocity, torsion, rotation angle and strain-derived parameters. Results All investigated motion quantification parameters could be calculated from the 3D-TPM data. Parameters quantifying hypokinetic or asynchronous motion demonstrated differences between motion-impaired and healthy myocardium. Conclusions 3D-TPM enables gapless volumetric quantification of motion abnormalities of the left ventricle, which in future applications can provide additional information for a more detailed analysis of left ventricular function.
Unsupervised learning algorithms
Aydin, Kemal
2016-01-01
This book summarizes the state of the art in unsupervised learning. The contributors discuss how, with the proliferation of massive amounts of unlabeled data, unsupervised learning algorithms, which can automatically discover interesting and useful patterns in such data, have gained popularity among researchers and practitioners. The authors outline how these algorithms have found numerous applications including pattern recognition, market basket analysis, web mining, social network analysis, information retrieval, recommender systems, market research, intrusion detection, and fraud detection. They present how the difficulty of developing theoretically sound approaches that are amenable to objective evaluation has resulted in the proposal of numerous unsupervised learning algorithms over the past half-century. The intended audience includes researchers and practitioners who are increasingly using unsupervised learning algorithms to analyze their data. Topics of interest include anomaly detection, clustering,...
Foundations of genetic algorithms 1991
1991-01-01
Foundations of Genetic Algorithms 1991 (FOGA 1) discusses the theoretical foundations of genetic algorithms (GA) and classifier systems. This book compiles research papers on selection and convergence, coding and representation, problem hardness, deception, classifier system design, variation and recombination, parallelization, and population divergence. Other topics include the non-uniform Walsh-schema transform; spurious correlations and premature convergence in genetic algorithms; and variable default hierarchy separation in a classifier system. The grammar-based genetic algorithm; condition...
Parallel Architectures and Bioinspired Algorithms
Pérez, José; Lanchares, Juan
2012-01-01
This monograph presents examples of best practices when combining bioinspired algorithms with parallel architectures. The book includes recent work by leading researchers in the field and offers a map of the main paths already explored and of new ways towards the future. Parallel Architectures and Bioinspired Algorithms will be of value to specialists in bioinspired algorithms and in parallel and distributed computing, as well as to computer science students trying to understand the present and the future of Parallel Architectures and Bioinspired Algorithms.
Normal Databases for the Relative Quantification of Myocardial Perfusion.
Rubeaux, Mathieu; Xu, Yuan; Germano, Guido; Berman, Daniel S; Slomka, Piotr J
2016-08-01
Myocardial perfusion imaging (MPI) with SPECT is performed clinically worldwide to detect and monitor coronary artery disease (CAD). MPI allows an objective quantification of myocardial perfusion at stress and rest. This established technique relies on normal databases to compare patient scans against reference normal limits. In this review, we aim to introduce the process of MPI quantification with normal databases and describe the associated quantitative perfusion measures that are used. New equipment and new software reconstruction algorithms have been introduced which require the development of new normal limits. The appearance and regional count variations of normal MPI scans may differ between these new scanners and standard Anger cameras. Therefore, these new systems may require the determination of new normal limits to achieve optimal accuracy in relative myocardial perfusion quantification. Accurate diagnostic and prognostic results rivaling those obtained by expert readers can be obtained with this widely used technique. Throughout this review, we emphasize the importance of the different normal databases and the need for specific databases relative to distinct imaging procedures. The use of appropriate normal limits allows optimal quantification of MPI by taking into account subtle image differences due to the hardware and software used, and the population studied.
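The comparison of a patient scan against normal limits described above can be sketched as a per-segment z-score test. All numbers below are synthetic; the 17-segment layout and the -2.5 threshold are illustrative choices, not values from the review.

```python
import numpy as np

# Build toy "normal limits" from a cohort of normal scans: per-segment
# mean and standard deviation of relative tracer counts.
rng = np.random.default_rng(3)
normals = rng.normal(100, 10, size=(40, 17))   # 40 normal scans, 17 segments
norm_mean = normals.mean(axis=0)
norm_sd = normals.std(axis=0, ddof=1)

# Synthetic patient: identical to the normal mean except for a perfusion
# defect simulated in the first four segments.
patient = norm_mean.copy()
patient[:4] -= 40

# Relative quantification: z-score each segment against the normal database
# and report defect extent as the share of clearly abnormal segments.
z = (patient - norm_mean) / norm_sd
defect_extent = np.mean(z < -2.5) * 100        # percent of abnormal segments
print(round(defect_extent, 1))
```

This is why the review stresses scanner-specific databases: if `norm_mean` and `norm_sd` come from a different camera or reconstruction, the z-scores, and hence the reported defect extent, shift systematically.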
Myocardial Infarction Area Quantification using High-Resolution SPECT Images in Rats
Energy Technology Data Exchange (ETDEWEB)
Oliveira, Luciano Fonseca Lemos de [Divisão de Cardiologia, Departamento de Clínica Médica, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, SP (Brazil); Mejia, Jorge [Faculdade de Medicina de São José do Rio Preto, São José do Rio Preto, SP (Brazil); Carvalho, Eduardo Elias Vieira de; Lataro, Renata Maria; Frassetto, Sarita Nasbine [Divisão de Cardiologia, Departamento de Clínica Médica, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, SP (Brazil); Fazan, Rubens Jr.; Salgado, Hélio Cesar [Departamento de Fisiologia, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, SP (Brazil); Galvis-Alonso, Orfa Yineth [Faculdade de Medicina de São José do Rio Preto, São José do Rio Preto, SP (Brazil); Simões, Marcus Vinícius, E-mail: msimoes@fmrp.usp.br [Divisão de Cardiologia, Departamento de Clínica Médica, Faculdade de Medicina de Ribeirão Preto, Universidade de São Paulo, Ribeirão Preto, SP (Brazil)
2013-07-15
Imaging techniques enable in vivo sequential assessment of the morphology and function of animal organs in experimental models. We developed a device for high-resolution single photon emission computed tomography (SPECT) imaging based on an adapted pinhole collimator. The aim of this study was to determine the accuracy of this system for quantification of myocardial infarct area in rats. Thirteen male Wistar rats (250 g) underwent experimental myocardial infarction by occlusion of the left coronary artery. After 4 weeks, SPECT images were acquired 1.5 hours after intravenous injection of 555 MBq of 99mTc-Sestamibi. The tomographic reconstruction was performed using specially developed software based on the Maximum Likelihood algorithm. The analysis of the data included the correlation between the area of perfusion defects detected by scintigraphy and the extent of myocardial fibrosis assessed by histology. The images showed a high target organ/background ratio with adequate visualization of the left ventricular walls and cavity. All animals presenting infarction areas were correctly identified by the perfusion images. There was no difference between the infarct area measured by SPECT (21.1 ± 21.2%) and by histology (21.7 ± 22.0%; p=0.45). There was a strong correlation between individual values of the infarct area measured by these two methods. The developed system presented adequate spatial resolution and high accuracy for the detection and quantification of myocardial infarction areas, constituting a low-cost and versatile option for high-resolution SPECT imaging of small rodents.
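The "Maximum Likelihood algorithm" used for the tomographic reconstruction is typically implemented as an MLEM (maximum-likelihood expectation-maximization) iteration; a generic sketch follows. The toy system matrix is a stand-in for illustration, not the authors' pinhole-SPECT model.

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.random((60, 25)) + 0.01   # toy system matrix (strictly positive)
x_true = rng.random(25)           # toy activity distribution
y = A @ x_true                    # noiseless projections for simplicity

sens = A.sum(axis=0)              # sensitivity image, A^T 1
x = np.ones(25)                   # MLEM requires a positive initial image
for _ in range(200):
    # Multiplicative MLEM update: x <- x * A^T(y / Ax) / A^T 1
    ratio = y / (A @ x)
    x *= (A.T @ ratio) / sens

print(float(np.linalg.norm(y - A @ x) / np.linalg.norm(y)))
```

The multiplicative form keeps the reconstructed activity nonnegative automatically, one reason MLEM is the standard choice for emission tomography with Poisson count statistics.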
ROMA: representation and quantification of module activity from target expression data
Directory of Open Access Journals (Sweden)
Loredana eMartignetti
2016-02-01
Full Text Available In many analyses of high-throughput data in systems biology, there is a need to quantify the activity of a set of genes in individual samples. A typical example is the case where it is necessary to estimate the activity of a transcription factor (which is often not directly measurable) from the expression of its target genes. We present here ROMA (Representation and quantification Of Module Activities), Java software designed for fast and robust computation of the activity of gene sets (or modules) with coordinated expression. ROMA activity quantification is based on the simplest uni-factor linear model of gene regulation, which approximates the expression data of a gene set by its first principal component. The proposed algorithm implements novel functionalities: it provides several method modifications for principal component computation, including weighted, robust and centered methods; it distinguishes overdispersed modules (based on the variance explained by the first principal component) and coordinated modules (based on the significance of the spectral gap); finally, it computes the statistical significance of the estimated module overdispersion or coordination. ROMA can be applied in many contexts, from estimating differential activities of transcription factors to finding overdispersed pathways in single-cell transcriptomics data. We describe here the principles of ROMA, providing several practical examples of its use. ROMA source code is available at https://github.com/sysbio-curie/Roma.
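ROMA's core computation, approximating a module's expression matrix by its first principal component and reading off the sample scores as module activity, can be sketched with an SVD. The synthetic data and noise level below are invented, and ROMA itself adds weighting, robustness options and significance testing on top of this step.

```python
import numpy as np

# Simulate 12 target genes driven by one hidden regulator across 30 samples.
rng = np.random.default_rng(5)
activity_true = rng.normal(size=30)               # hidden TF activity
weights = rng.uniform(0.5, 1.5, size=12)          # per-gene response strength
X = np.outer(weights, activity_true) + 0.1 * rng.normal(size=(12, 30))

# Uni-factor linear model: center each gene, take the first principal
# component, and use the per-sample scores as the module activity.
Xc = X - X.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
activity = Vt[0] * s[0]                           # sample scores on PC1
explained = s[0] ** 2 / np.sum(s ** 2)            # "overdispersion" measure
corr = abs(np.corrcoef(activity, activity_true)[0, 1])
print(round(explained, 2), round(corr, 2))
```

When one factor truly drives the module, PC1 explains most of the variance and its scores track the hidden activity closely (up to an arbitrary sign, which is why the absolute correlation is taken).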
Benchmarking the performance of daily temperature homogenisation algorithms
Warren, Rachel; Bailey, Trevor; Jolliffe, Ian; Willett, Kate
2015-04-01
This work explores the creation of realistic synthetic data and its use as a benchmark for comparing the performance of different homogenisation algorithms on daily temperature data. Four different regions in the United States have been selected and three different inhomogeneity scenarios explored for each region. These benchmark datasets are beneficial as, unlike in the real world, the underlying truth is known a priori, thus allowing definite statements to be made about the performance of the algorithms run on them. Performance can be assessed in terms of the ability of algorithms to detect changepoints and also their ability to correctly remove inhomogeneities. The focus is on daily data, thus presenting new challenges in comparison to monthly data and pushing the boundaries of previous studies. The aims of this work are to evaluate and compare the performance of various homogenisation algorithms, aiding their improvement and enabling a quantification of the uncertainty remaining in the data even after they have been homogenised. An important outcome is also to evaluate how realistic the created benchmarks are. It is essential that any weaknesses in the benchmarks are taken into account when judging algorithm performance against them. This information in turn will help to improve future versions of the benchmarks. I intend to present a summary of this work including the method of benchmark creation, details of the algorithms run and some preliminary results. This work forms a three year PhD and feeds into the larger project of the International Surface Temperature Initiative which is working on a global scale and with monthly instead of daily data.
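The benchmarking idea described above (a synthetic series with a known inhomogeneity, against which an algorithm is scored) can be sketched as follows. The detector here is a deliberately simple mean-shift statistic, not one of the homogenisation algorithms under study, and the break date, shift size and seasonal model are invented.

```python
import numpy as np

# Create a synthetic daily temperature series with a known breakpoint.
rng = np.random.default_rng(6)
n, true_break, shift = 2000, 1200, 1.0
season = 10 * np.sin(2 * np.pi * np.arange(n) / 365.25)    # annual cycle
series = 15 + season + rng.normal(0, 1, n)
series[true_break:] += shift                               # the inhomogeneity

# Naive detector: remove the (assumed known) seasonal cycle, then scan all
# candidate split points for the largest standardized mean difference.
anomaly = series - season - 15
stat = []
for t in range(50, n - 50):
    a, b = anomaly[:t], anomaly[t:]
    pooled = np.sqrt(1 / a.size + 1 / b.size)
    stat.append(abs(a.mean() - b.mean()) / pooled)
detected = 50 + int(np.argmax(stat))
print(detected)
```

Because the truth (`true_break`) is known a priori, the detector can be scored exactly, e.g. by its localisation error in days; this is precisely what the real-world data can never provide.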
Standardless quantification methods in electron probe microanalysis
Energy Technology Data Exchange (ETDEWEB)
Trincavelli, Jorge, E-mail: trincavelli@famaf.unc.edu.ar [Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Ciudad Universitaria, 5000 Córdoba (Argentina); Instituto de Física Enrique Gaviola, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Medina Allende s/n, Ciudad Universitaria, 5000 Córdoba (Argentina); Limandri, Silvina, E-mail: s.limandri@conicet.gov.ar [Facultad de Matemática, Astronomía y Física, Universidad Nacional de Córdoba, Ciudad Universitaria, 5000 Córdoba (Argentina); Instituto de Física Enrique Gaviola, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Medina Allende s/n, Ciudad Universitaria, 5000 Córdoba (Argentina); Bonetto, Rita, E-mail: bonetto@quimica.unlp.edu.ar [Centro de Investigación y Desarrollo en Ciencias Aplicadas Dr. Jorge Ronco, Consejo Nacional de Investigaciones Científicas y Técnicas de la República Argentina, Facultad de Ciencias Exactas, de la Universidad Nacional de La Plata, Calle 47 N° 257, 1900 La Plata (Argentina)
2014-11-01
The elemental composition of a solid sample can be determined by electron probe microanalysis with or without the use of standards. Standardless algorithms are considerably faster than methods that require standards; they are useful when a suitable set of standards is not available or for rough samples, and they also help to overcome the problem of beam current variation, for example, in equipment with a cold field emission gun. Due to significant advances in accuracy achieved in recent years, a product of successive efforts to improve the description of the generation, absorption and detection of X-rays, standardless methods have increasingly become an interesting option for the user. Nevertheless, up to now, algorithms that use standards are still more precise than standardless methods. It is important to remark that care must be taken with results provided by standardless methods that normalize the calculated concentration values to 100%, unless an estimate of the errors is reported. In this work, a comprehensive discussion of the key features of the main standardless quantification methods, as well as the level of accuracy achieved by them, is presented. - Highlights: • Standardless methods are a good alternative when no suitable standards are available. • Their accuracy reaches 10% for 95% of the analyses when traces are excluded. • Some of them are suitable for the analysis of rough samples.
Gao, Simon S.; Liu, Li; Bailey, Steven T.; Flaxel, Christina J.; Huang, David; Li, Dengwang; Jia, Yali
2016-07-01
Quantification of choroidal neovascularization (CNV) as visualized by optical coherence tomography angiography (OCTA) may have importance clinically when diagnosing or tracking disease. Here, we present an automated algorithm to quantify the vessel skeleton of CNV as vessel length. Initial segmentation of the CNV on en face angiograms was achieved using saliency-based detection and thresholding. A level set method was then used to refine vessel edges. Finally, a skeleton algorithm was applied to identify vessel centerlines. The algorithm was tested on nine OCTA scans from participants with CNV and comparisons of the algorithm's output to manual delineation showed good agreement.
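The final step, reducing a segmented vessel mask to centerlines whose pixel count gives vessel length, can be illustrated with a self-contained sketch. This is not the authors' implementation: classic Zhang-Suen thinning stands in for their skeleton algorithm, and the toy mask and pixel size are invented for illustration.

```python
import numpy as np

def zhang_suen_skeletonize(mask):
    """Thin a binary mask to ~1-pixel-wide centerlines (Zhang-Suen thinning)."""
    img = np.pad(mask.astype(np.uint8), 1)

    def neighbours(r, c):
        # p2..p9, clockwise starting from the pixel above (r-1, c)
        return [img[r-1, c], img[r-1, c+1], img[r, c+1], img[r+1, c+1],
                img[r+1, c], img[r+1, c-1], img[r, c-1], img[r-1, c-1]]

    changed = True
    while changed:
        changed = False
        for step in (0, 1):
            to_delete = []
            for r, c in zip(*np.nonzero(img)):
                P = neighbours(r, c)
                B = sum(P)                                  # nonzero neighbours
                A = sum(P[i] == 0 and P[(i + 1) % 8] == 1   # 0->1 transitions
                        for i in range(8))
                if 2 <= B <= 6 and A == 1:
                    if step == 0 and P[0]*P[2]*P[4] == 0 and P[2]*P[4]*P[6] == 0:
                        to_delete.append((r, c))
                    elif step == 1 and P[0]*P[2]*P[6] == 0 and P[0]*P[4]*P[6] == 0:
                        to_delete.append((r, c))
            for r, c in to_delete:
                img[r, c] = 0
            changed = changed or bool(to_delete)
    return img[1:-1, 1:-1].astype(bool)

# Toy "vessel": a 3-pixel-thick bar stands in for one segmented CNV branch
mask = np.zeros((7, 11), dtype=bool)
mask[2:5, 1:10] = True
skeleton = zhang_suen_skeletonize(mask)
vessel_length_mm = skeleton.sum() * 0.01  # pixel count x (hypothetical) 10 um pixel
```

Counting skeleton pixels and multiplying by the pixel size is the simplest length estimate; a production implementation would also weight diagonal steps by sqrt(2).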
Directory of Open Access Journals (Sweden)
Hans Schonemann
1996-12-01
Full Text Available Some algorithms for singularity theory and algebraic geometry The use of Gröbner basis computations for treating systems of polynomial equations has become an important tool in many areas. This paper introduces the concept of standard bases (a generalization of Gröbner bases) and their application to some problems from algebraic geometry. The examples are presented as SINGULAR commands. A general introduction to Gröbner bases can be found in the textbook [CLO], an introduction to syzygies in [E] and [St1]. SINGULAR is a computer algebra system for computing information about singularities, for use in algebraic geometry. The basic algorithms in SINGULAR are several variants of a general standard basis algorithm for general monomial orderings (see [GG]). This includes well-orderings (Buchberger algorithm [B1], [B2]) and tangent cone orderings (Mora algorithm [M1], [MPT]) as special cases: it is able to work with non-homogeneous and homogeneous input and also to compute in the localization of the polynomial ring at 0. Recent versions include algorithms to factorize polynomials and a factorizing Gröbner basis algorithm. For a complete description of SINGULAR see [Si].
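The well-ordering (Buchberger) case can be reproduced in miniature outside SINGULAR. The sketch below uses SymPy purely for illustration; SINGULAR's own standard basis commands are the tools the paper actually describes, and the example ideal is invented.

```python
from sympy import groebner, symbols

x, y = symbols('x y')

# Ideal generated by a circle and a line; lexicographic order with x > y
gb = groebner([x**2 + y**2 - 1, x - y], x, y, order='lex')

# A lex Groebner basis solves the system by elimination: its last polynomial
# involves y alone, and ideal membership is decided by reduction to zero.
_, remainder = gb.reduce(x**2 - y**2)   # (x - y)(x + y) lies in the ideal
```

Here `remainder` is zero because `x**2 - y**2` factors through the generator `x - y`; a polynomial outside the ideal reduces to a nonzero normal form instead.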
Uncertainty quantification for environmental models
Hill, Mary C.; Lu, Dan; Kavetski, Dmitri; Clark, Martyn P.; Ye, Ming
2012-01-01
Environmental models are used to evaluate the fate of fertilizers in agricultural settings (including soil denitrification), the degradation of hydrocarbons at spill sites, and water supply for people and ecosystems in small to large basins and cities—to mention but a few applications of these models. They also play a role in understanding and diagnosing potential environmental impacts of global climate change. The models are typically mildly to extremely nonlinear. The persistent demand for enhanced dynamics and resolution to improve model realism [17] means that lengthy individual model execution times will remain common, notwithstanding continued enhancements in computer power. In addition, high-dimensional parameter spaces are often defined, which increases the number of model runs required to quantify uncertainty [2]. Some environmental modeling projects have access to extensive funding and computational resources; many do not. The many recent studies of uncertainty quantification in environmental model predictions have focused on uncertainties related to data error and sparsity of data, expert judgment expressed mathematically through prior information, poorly known parameter values, and model structure (see, for example, [1,7,9,10,13,18]). Approaches for quantifying uncertainty include frequentist (potentially with prior information [7,9]), Bayesian [13,18,19], and likelihood-based. A few of the numerous methods, including some sensitivity and inverse methods with consequences for understanding and quantifying uncertainty, are as follows: Bayesian hierarchical modeling and Bayesian model averaging; single-objective optimization with error-based weighting [7] and multi-objective optimization [3]; methods based on local derivatives [2,7,10]; screening methods like OAT (one at a time) and the method of Morris [14]; FAST (Fourier amplitude sensitivity testing) [14]; the Sobol' method [14]; randomized maximum likelihood [10]; Markov chain Monte Carlo (MCMC) [10
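As a concrete illustration of the Bayesian/MCMC end of this toolbox, a random-walk Metropolis sampler for a single model parameter can be written in a few lines. This is a minimal sketch on a toy linear model with synthetic data, not any of the cited environmental codes; all values are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic observations from a one-parameter toy model y = k * x
x = np.linspace(1.0, 10.0, 20)
k_true, sigma = 2.5, 0.5
y_obs = k_true * x + rng.normal(0.0, sigma, x.size)

def log_posterior(k):
    # Flat prior on k; Gaussian likelihood with known noise level sigma
    resid = y_obs - k * x
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

# Random-walk Metropolis: propose a perturbation, accept with prob min(1, p'/p)
samples, k = [], 1.0
for _ in range(5000):
    proposal = k + rng.normal(0.0, 0.05)
    if np.log(rng.uniform()) < log_posterior(proposal) - log_posterior(k):
        k = proposal
    samples.append(k)
posterior = np.array(samples[1000:])  # discard burn-in
```

The retained samples approximate the posterior of `k`; their mean and spread quantify the parameter uncertainty that the survey discusses. For the expensive models described above, each `log_posterior` call is a full model run, which is exactly why run counts dominate the cost.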
Algorithms in combinatorial design theory
Colbourn, CJ
1985-01-01
The scope of the volume includes all algorithmic and computational aspects of research on combinatorial designs. Algorithmic aspects include generation, isomorphism and analysis techniques - both heuristic methods used in practice, and the computational complexity of these operations. The scope within design theory includes all aspects of block designs, Latin squares and their variants, pairwise balanced designs and projective planes and related geometries.
Cleve, R; Henderson, L; Macchiavello, C; Mosca, M
1998-01-01
Quantum computers use the quantum interference of different computational paths to enhance correct outcomes and suppress erroneous outcomes of computations. In effect, they follow the same logical paradigm as (multi-particle) interferometers. We show how most known quantum algorithms, including quantum algorithms for factorising and counting, may be cast in this manner. Quantum searching is described as inducing a desired relative phase between two eigenvectors to yield constructive interference on the sought elements and destructive interference on the remaining terms.
Hromkovic, Juraj
2009-01-01
Explores the science of computing. This book starts with the development of computer science, algorithms and programming, and then explains and shows how to exploit the concepts of infinity, computability, computational complexity, nondeterminism and randomness.
Essential algorithms a practical approach to computer algorithms
Stephens, Rod
2013-01-01
A friendly and accessible introduction to the most useful algorithms Computer algorithms are the basic recipes for programming. Professional programmers need to know how to use algorithms to solve difficult programming problems. Written in simple, intuitive English, this book describes how and when to use the most practical classic algorithms, and even how to create new algorithms to meet future needs. The book also includes a collection of questions that can help readers prepare for a programming job interview. Reveals methods for manipulating common data structures.
Micro-RNA quantification using DNA polymerase and pyrophosphate quantification.
Yu, Hsiang-Ping; Hsiao, Yi-Ling; Pan, Hung-Yin; Huang, Chih-Hung; Hou, Shao-Yi
2011-12-15
A rapid quantification method for micro-RNA based on DNA polymerase activity and pyrophosphate quantification has been developed. The tested micro-RNA serves as the primer, unlike the DNA primer in all DNA sequencing methods, and the DNA probe serves as the template for DNA replication. After the DNA synthesis, pyrophosphate detection and quantification indicate the existence and quantity of the tested miRNA. Five femtomoles of the synthetic RNA could be detected. In 20-100 μg RNA samples purified from SiHa cells, the proposed assay measured hsa-miR-16 and hsa-miR-21 at 0.34 fmol/μg RNA and 0.71 fmol/μg RNA, respectively. This simple and inexpensive assay takes less than 5 min after total RNA purification and preparation. The quantification is not affected by the pre-miRNA, which cannot serve as the primer for DNA synthesis in this assay. The assay is general for the detection of a target RNA or DNA with a known matched DNA template probe, and could therefore be widely used for the detection of small RNA, messenger RNA, RNA viruses, and DNA.
MAMA Software Features: Quantification Verification Documentation-1
Energy Technology Data Exchange (ETDEWEB)
Ruggiero, Christy E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Porter, Reid B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-05-21
This document reviews the verification of the basic shape quantification attributes in the MAMA software against hand calculations in order to show that the calculations are implemented mathematically correctly and give the expected quantification results.
DEFF Research Database (Denmark)
Jensen, Peter S; Ziebell, Morten; Skouboe, Glenna
2011-01-01
In nuclear medicine brain imaging, it is important to delineate regions of interest (ROIs) so that the outcome is both accurate and reproducible. The purpose of this study was to validate a new time-saving algorithm (DATquan) for accurate and reproducible quantification of the striatal dopamine transporter (DAT) with appropriate radioligands and SPECT, without the need for structural brain scanning.
Grinsven, M.J.J.P. van; Lechanteur, Y.T.E.; Ven, J.P.H. van de; Ginneken, B. van; Hoyng, C.B.; Theelen, T.; Sanchez, C.I.
2013-01-01
PURPOSE: To evaluate a machine learning algorithm that allows for computer aided diagnosis (CAD) of non-advanced age-related macular degeneration (AMD) by providing an accurate detection and quantification of drusen location, area and size. METHODS: Color fundus photographs of 407 eyes without AMD
Automatic quantification of iris color
DEFF Research Database (Denmark)
Christoffersen, S.; Harder, Stine; Andersen, J. D.
2012-01-01
An automatic algorithm to quantify the eye colour and structural information from standard high-resolution photos of the human iris has been developed. Initially, the major structures in the eye region are identified, including the pupil, iris, sclera, and eyelashes. Based on this segmentation … regions. The result is a blue-brown ratio for each eye. Furthermore, an image clustering approach has been used with promising results. The approach is based on using a sparse dictionary of feature vectors learned from a training set of iris regions. The feature vectors contain both local structural … is completely data driven and it can divide a group of eye images into classes based on structure, colour or a combination of the two. The methods have been tested on a large set of photos with promising results.
Hu, T C
2002-01-01
Newly enlarged, updated second edition of a valuable, widely used text presents algorithms for shortest paths, maximum flows, dynamic programming and backtracking. Also discussed are binary trees, heuristics and near-optimums, matrix multiplication, and NP-complete problems. 153 black-and-white illus. 23 tables. New to this edition: Chapter 9
Study on Mathematical Mode of Quantification Performance-to-Price Ratio
Institute of Scientific and Technical Information of China (English)
裴兰华; 史德战; 王冠
2007-01-01
Nowadays, when purchasing merchandise, consumers often compare commodities of the same kind, including the accompanying service, before deciding what to buy. This paper discusses a mathematical mode for quantifying the performance-to-price ratio, according to which products can be designed to increase their competitiveness in the market.
Dallas, David C; Guerrero, Andres; Parker, Evan A; Robinson, Randall C; Gan, Junai; German, J Bruce; Barile, Daniela; Lebrilla, Carlito B
2015-03-01
Peptidomics is an emerging field, branching from proteomics, that targets endogenously produced protein fragments. Endogenous peptides are often functional within the body, and can be both beneficial and detrimental. This review covers the use of peptidomics in understanding digestion and in identifying functional peptides and biomarkers. Various techniques for peptide and glycopeptide extraction, at both analytical and preparative scales, and available options for peptide detection with MS are discussed. Current algorithms for peptide sequence determination, and both analytical and computational techniques for quantification, are compared. Techniques for statistical analysis, sequence mapping, enzyme prediction, and peptide function and structure prediction are explored.
Optical modulator including graphene
Energy Technology Data Exchange (ETDEWEB)
Liu, Ming; Yin, Xiaobo; Zhang, Xiang
2016-06-07
The present invention provides a graphene optical modulator having one or more layers. In a first exemplary embodiment the optical modulator includes an optical waveguide, a nanoscale oxide spacer adjacent to a working region of the waveguide, and a monolayer graphene sheet adjacent to the spacer. In a second exemplary embodiment, the optical modulator includes at least one pair of active media, where the pair includes an oxide spacer, a first monolayer graphene sheet adjacent to a first side of the spacer, and a second monolayer graphene sheet adjacent to a second side of the spacer, and at least one optical waveguide adjacent to the pair.
Visual Impairment, Including Blindness
About meanings as a rule not included in dictionaries
Directory of Open Access Journals (Sweden)
Violetta Koseska-Toszewa
2015-11-01
Full Text Available About meanings as a rule not included in dictionaries The article discusses the problem of ambiguity of the exponents of logical existential quantification occurring in the verbal group of Bulgarian, Polish and Russian sentences: Bulg. отвреме навреме, понякога, Pol. od czasu do czasu, czasem, czasami, Rus. время от времени, иногда. This ambiguity can be explained by strong and weak existential quantification. Lexical semantics did not distinguish between these two types of existentiality. Here we present a description of selected exponents of logical existential quantification in Bulgarian, Polish and Russian.
Directory of Open Access Journals (Sweden)
Anna Bourmistrova
2011-02-01
Full Text Available The autodriver algorithm is an intelligent method of eliminating the need for steering by a driver on a well-defined road. The proposed method performs best on a four-wheel-steering (4WS) vehicle, though it is also applicable to two-wheel-steering (TWS) vehicles. The algorithm is based on making the actual vehicle center of rotation coincide with the road center of curvature, by adjusting the kinematic center of rotation. The road center of curvature is assumed to be prior information for a given road, while the dynamic center of rotation is the output of the dynamic equations of motion of the vehicle, using steering angle and velocity measurements as inputs. We use the kinematic condition of steering to set the steering angles in such a way that the kinematic center of rotation of the vehicle sits at a desired point. At low speeds the ideal and actual paths of the vehicle are very close. With increasing forward speed, the road and tire characteristics, along with the motion dynamics of the vehicle, cause the vehicle to turn about time-varying points. By adjusting the steering angles, our algorithm controls the dynamic turning center of the vehicle so that it coincides with the road curvature center, hence keeping the vehicle on a given road autonomously. The position and orientation errors are used as feedback signals in a closed-loop control to adjust the steering angles. The application of the presented autodriver algorithm demonstrates reliable performance under different driving conditions.
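The core geometric condition, choosing steering angles so that the kinematic center of rotation sits at a desired point, can be sketched with a bicycle-model approximation. This is a simplification of the paper's 4WS dynamics, and all symbols and numbers below are our own, not the authors':

```python
import math

def steering_angles(a, b, xc, yc):
    """Kinematic (bicycle-model) steering angles that place the vehicle's
    instantaneous center of rotation at (xc, yc) in the body frame
    (x forward, y to the left, origin at the vehicle reference point).
    a, b: distances from the reference point to the front/rear axles.
    Returns (delta_front, delta_rear) in radians."""
    # Each wheel's velocity must be perpendicular to the line joining the
    # wheel to the center of rotation, which fixes its heading angle.
    delta_f = math.atan2(a - xc, yc)
    delta_r = math.atan2(-b - xc, yc)
    return delta_f, delta_r

# Classic front-steering check: center on the rear-axle line at radius R
a, b, R = 1.2, 1.4, 10.0
df, dr = steering_angles(a, b, xc=-b, yc=R)
# Expect dr = 0 and tan(df) = wheelbase / R (Ackermann geometry)
```

Placing the center on the rear-axle line recovers ordinary front-wheel Ackermann steering; a 4WS controller is free to move `(xc, yc)` anywhere, which is what lets the algorithm track the road's center of curvature directly.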
Fast Local Computation Algorithms
Rubinfeld, Ronitt; Vardi, Shai; Xie, Ning
2011-01-01
For input $x$, let $F(x)$ denote the set of outputs that are the "legal" answers for a computational problem $F$. Suppose $x$ and members of $F(x)$ are so large that there is not time to read them in their entirety. We propose a model of local computation algorithms which, for a given input $x$, support queries by a user to values of specified locations $y_i$ in a legal output $y \in F(x)$. When more than one legal output $y$ exists for a given $x$, the local computation algorithm should output in a way that is consistent with at least one such $y$. Local computation algorithms are intended to distill the common features of several concepts that have appeared in various algorithmic subfields, including local distributed computation, local algorithms, locally decodable codes, and local reconstruction. We develop a technique, based on known constructions of small sample spaces of $k$-wise independent random variables and Beck's analysis in his algorithmic approach to the Lovász Local Lemma, which und...
Sun, Rongrong; Wang, Yuanyuan
2008-11-01
Predicting the spontaneous termination of the atrial fibrillation (AF) leads to not only better understanding of mechanisms of the arrhythmia but also the improved treatment of the sustained AF. A novel method is proposed to characterize the AF based on structure and the quantification of the recurrence plot (RP) to predict the termination of the AF. The RP of the electrocardiogram (ECG) signal is firstly obtained and eleven features are extracted to characterize its three basic patterns. Then the sequential forward search (SFS) algorithm and Davies-Bouldin criterion are utilized to select the feature subset which can predict the AF termination effectively. Finally, the multilayer perceptron (MLP) neural network is applied to predict the AF termination. An AF database which includes one training set and two testing sets (A and B) of Holter ECG recordings is studied. Experiment results show that 97% of testing set A and 95% of testing set B are correctly classified. It demonstrates that this algorithm has the ability to predict the spontaneous termination of the AF effectively.
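The recurrence plot at the heart of this method is straightforward to construct from a delay embedding; a minimal sketch follows, with arbitrary embedding parameters and a synthetic sine signal rather than the paper's ECG data or settings:

```python
import numpy as np

def recurrence_plot(signal, dim=3, tau=2, eps=0.2):
    """Binary recurrence plot of a 1-D signal using delay embedding:
    R[i, j] = 1 iff the embedded states x_i and x_j are within eps."""
    n = len(signal) - (dim - 1) * tau
    # Delay-embedded states x_i = (s_i, s_{i+tau}, ..., s_{i+(dim-1)tau})
    emb = np.stack([signal[i * tau : i * tau + n] for i in range(dim)], axis=1)
    dists = np.linalg.norm(emb[:, None, :] - emb[None, :, :], axis=-1)
    return dists < eps

t = np.linspace(0, 8 * np.pi, 400)
rp = recurrence_plot(np.sin(t))
recurrence_rate = rp.mean()   # simplest RQA feature: fraction of recurrent pairs
```

The eleven features used in the paper are richer statistics of this same matrix (diagonal and vertical line structures, etc.); the recurrence rate shown here is only the simplest of them.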
Handbook of Memetic Algorithms
Cotta, Carlos; Moscato, Pablo
2012-01-01
Memetic Algorithms (MAs) are computational intelligence structures combining multiple and various operators in order to address optimization problems. The combination and interaction amongst operators evolves and promotes the diffusion of the most successful units and generates an algorithmic behavior which can handle complex objective functions and hard fitness landscapes. "Handbook of Memetic Algorithms" organizes, in a structured way, all the most important results in the field of MAs since their earliest definition until now. A broad review including various algorithmic solutions as well as successful applications is included in this book. Each class of optimization problems, such as constrained optimization, multi-objective optimization, continuous vs combinatorial problems, uncertainties, is analysed separately and, for each problem, memetic recipes for tackling the difficulties are given with some successful examples. Although this book contains chapters written by multiple authors, ...
Titan Science Return Quantification
Weisbin, Charles R.; Lincoln, William
2014-01-01
Each proposal for a NASA mission concept includes a Science Traceability Matrix (STM), intended to show that what is being proposed would contribute to satisfying one or more of the agency's top-level science goals. But the information traditionally provided cannot be used directly to quantitatively compare anticipated science return. We added numerical elements to NASA's STM and developed a software tool to process the data. We then applied this methodology to evaluate a group of competing concepts for a proposed mission to Saturn's moon, Titan.
Prudencio, Ernesto E.
2012-01-01
QUESO is a collection of statistical algorithms and programming constructs supporting research into the uncertainty quantification (UQ) of models and their predictions. It has been designed with three objectives: it should (a) be sufficiently abstract in order to handle a large spectrum of models, (b) be algorithmically extensible, allowing an easy insertion of new and improved algorithms, and (c) take advantage of parallel computing, in order to handle realistic models. Such objectives demand a combination of an object-oriented design with robust software engineering practices. QUESO is written in C++, uses MPI, and leverages libraries already available to the scientific community. We describe some UQ concepts, present QUESO, and list planned enhancements.
Precise Quantification of Nanoparticle Internalization
Gottstein, Claudia; Wu, Guohui; Wong, Benjamin J.; Zasadzinski, Joseph Anthony
2013-01-01
Nanoparticles have opened new exciting avenues for both diagnostic and therapeutic applications in human disease, and targeted nanoparticles are increasingly used as specific drug delivery vehicles. The precise quantification of nanoparticle internalization is of importance to measure the impact of physical and chemical properties on the uptake of nanoparticles into target cells or into cells responsible for rapid clearance. Internalization of nanoparticles has been measured...
Low cost high performance uncertainty quantification
Bekas, C.
2009-01-01
Uncertainty quantification in risk analysis has become a key application. In this context, computing the diagonal of inverse covariance matrices is of paramount importance. Standard techniques, that employ matrix factorizations, incur a cubic cost which quickly becomes intractable with the current explosion of data sizes. In this work we reduce this complexity to quadratic with the synergy of two algorithms that gracefully complement each other and lead to a radically different approach. First, we turned to stochastic estimation of the diagonal. This allowed us to cast the problem as a linear system with a relatively small number of multiple right hand sides. Second, for this linear system we developed a novel, mixed precision, iterative refinement scheme, which uses iterative solvers instead of matrix factorizations. We demonstrate that the new framework not only achieves the much needed quadratic cost but in addition offers excellent opportunities for scaling at massively parallel environments. We based our implementation on BLAS 3 kernels that ensure very high processor performance. We achieved a peak performance of 730 TFlops on 72 BG/P racks, with a sustained performance 73% of theoretical peak. We stress that the techniques presented in this work are quite general and applicable to several other important applications. Copyright © 2009 ACM.
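The first ingredient, stochastic estimation of the diagonal of an inverse matrix, can be sketched in a few lines. A dense `solve` stands in here for the iterative solver used at scale, and the test matrix and sample count are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

def estimate_diag_inv(A, n_samples=200):
    """Stochastic (Hutchinson-style) estimate of diag(A^{-1}):
    diag(A^{-1}) ~= mean over Rademacher vectors v of v * (A^{-1} v),
    which replaces a cubic-cost factorization with a batch of linear
    solves, i.e. a system with multiple right-hand sides."""
    d = np.zeros(A.shape[0])
    for _ in range(n_samples):
        v = rng.choice([-1.0, 1.0], size=A.shape[0])
        # At scale this solve would use an iterative method (e.g. CG)
        d += v * np.linalg.solve(A, v)
    return d / n_samples

# Small, well-conditioned SPD "covariance" matrix for checking the estimate
n = 30
B = 0.1 * rng.standard_normal((n, n))
A = B @ B.T + 2.0 * np.eye(n)
estimated = estimate_diag_inv(A)
exact = np.diag(np.linalg.inv(A))
```

Because each sample is an independent solve against the same matrix, the right-hand sides can be batched and distributed, which is what makes the approach amenable to the massively parallel, BLAS-3-based implementation the abstract describes.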
Directory of Open Access Journals (Sweden)
Petra Matoušková
Full Text Available Obesity and metabolic syndrome are an increasing health problem worldwide. Among other approaches, nutritional intervention using phytochemicals is an important method for the treatment and prevention of this disease. Recent studies have shown that certain phytochemicals can alter the expression of specific genes and microRNAs (miRNAs) that play a fundamental role in the pathogenesis of obesity. For the study of obesity and its treatment, monosodium glutamate (MSG)-injected mice, which develop central obesity, insulin resistance and liver lipid accumulation, are frequently used animal models. To understand the mechanism of phytochemical action in obese animals, the study of selected gene expression together with miRNA quantification is extremely important. For this purpose, real-time quantitative PCR is a sensitive and reproducible method, but it depends entirely on proper normalization. The aim of the present study was to identify appropriate reference genes for mRNA and miRNA quantification in MSG mice treated with green tea catechins, potential anti-obesity phytochemicals. Two sets of reference genes were tested: the first set contained seven genes commonly used for normalization of messenger RNA; the second set of candidate reference genes included ten small RNAs for normalization of miRNA. The expression stability of these reference genes was tested upon treatment of mice with catechins using the geNorm, NormFinder and BestKeeper algorithms. The selected normalizers for mRNA quantification were tested and validated on the expression of NAD(P)H:quinone oxidoreductase, a biotransformation enzyme known to be modified by catechins. The effect of the selected normalizers for miRNA quantification was tested on two obesity- and diabetes-related miRNAs, miR-221 and miR-29b, respectively. Finally, the combinations of B2M/18S/HPRT1 and miR-16/sno234 were validated as optimal reference genes for mRNA and miRNA quantification in the liver, and 18S/RPlP0/HPRT1 and sno234/miR-186 in the small intestine of MSG mice.
Esquinas, Pedro L.; Uribe, Carlos F.; Gonzalez, M.; Rodríguez-Rodríguez, Cristina; Häfeli, Urs O.; Celler, Anna
2017-08-01
The main applications of 188Re in radionuclide therapies include trans-arterial liver radioembolization and palliation of painful bone metastases. In order to optimize 188Re therapies, the accurate determination of the radiation dose delivered to tumors and organs at risk is required. Single photon emission computed tomography (SPECT) can be used to perform such dosimetry calculations. However, the accuracy of dosimetry estimates strongly depends on the accuracy of activity quantification in 188Re images. In this study, we performed a series of phantom experiments aiming to investigate the accuracy of activity quantification for 188Re SPECT using high-energy and medium-energy collimators. Objects of different shapes and sizes were scanned in air, non-radioactive water (cold water) and water with activity (hot water). The ordered subset expectation maximization algorithm with clinically available corrections (CT-based attenuation, triple-energy window (TEW) scatter, and resolution recovery) was used. For high activities, dead-time corrections were applied. The accuracy of activity quantification was evaluated using the ratio of the reconstructed activity in each object to this object's true activity. Each object's activity was determined with three segmentation methods: a 1% fixed threshold (for cold background), a 40% fixed threshold and a CT-based segmentation. Additionally, the activity recovered in the entire phantom, as well as the average activity concentration of the phantom background, were compared to their true values. Finally, Monte Carlo simulations of a commercial γ-camera were performed to investigate the accuracy of the TEW method. Good quantification accuracy was obtained for activity concentration and for objects in cold background segmented with a 1% threshold. However, the accuracy of activity quantification for objects segmented with the 40% threshold or CT-based methods decreased (errors >15%), mostly due to partial-volume effects. The Monte
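Among the corrections listed, the TEW scatter estimate is a simple trapezoidal interpolation between two narrow windows flanking the photopeak. The sketch below shows the standard formula; the window widths and counts are invented and are not this study's acquisition settings:

```python
def tew_scatter_correction(peak_counts, lower_counts, upper_counts,
                           peak_width, lower_width, upper_width):
    """Triple-energy-window (TEW) scatter correction: the scatter inside
    the photopeak window is approximated by a trapezoid whose sides are
    the count rates per keV in two narrow flanking windows.
    Counts refer to the same pixel/projection; widths are in keV."""
    scatter = (lower_counts / lower_width
               + upper_counts / upper_width) * peak_width / 2.0
    return max(peak_counts - scatter, 0.0), scatter

# Illustrative numbers only: a wide photopeak window and two 6 keV side windows
corrected, scatter = tew_scatter_correction(
    peak_counts=1000.0, lower_counts=60.0, upper_counts=20.0,
    peak_width=31.0, lower_width=6.0, upper_width=6.0)
```

The correction is applied projection by projection before reconstruction; its bias is exactly what the Monte Carlo simulations mentioned in the abstract are designed to probe.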
Energy Technology Data Exchange (ETDEWEB)
Giacomini, Guilherme; Alvarez, Matheus; Oliveira, Marcela de; Miranda, Jose Ricardo A. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Pina, Diana R.; Pereira, Paulo C.M.; Ribeiro, Sergio M., E-mail: giacomini@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Faculdade de Medicina. Departamento de Doencas Tropicais e Diagnostico por Imagem
2014-12-15
Tuberculosis (TB), caused by Mycobacterium tuberculosis, is an infectious disease that remains a global health problem. Chest radiography is the method commonly employed to assess the evolution of TB. Quantification of chest abnormalities is usually performed on computed tomography (CT) scans; this quantification is important for assessing TB evolution and treatment and for comparing different treatments. However, precise quantification is not feasible given the number of CT scans required. The purpose of this work is to develop a methodology for quantification of lung damage caused by TB through chest radiographs. An algorithm for computational processing of exams was developed in Matlab, which creates a 3D representation of the lungs, with compromised dilated regions inside. The quantification of lung lesions was also made for the same patients through CT scans. The measurements from the two methods were compared, resulting in strong correlation. Applying Bland-Altman statistical analysis, all samples were within the limits of agreement, with a confidence interval of 95%. The results showed an average variation of around 13% between the two quantification methods. The results suggest the effectiveness and applicability of the developed method, providing a better risk-benefit ratio for the patient and cost-benefit ratio for the institution. (author)
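The Bland-Altman analysis used to compare the two quantification methods reduces to the mean difference (bias) and the 95% limits of agreement. A minimal sketch follows, with invented paired measurements rather than the study's patient data:

```python
import numpy as np

def bland_altman(a, b):
    """Bland-Altman agreement between two measurement methods:
    returns the bias (mean difference) and the 95% limits of
    agreement, bias +/- 1.96 * sd of the differences."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    diff = a - b
    bias = diff.mean()
    sd = diff.std(ddof=1)
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical paired lesion-volume estimates (radiograph vs CT), in cm^3
xray = [12.1, 30.5, 45.2, 8.7, 22.3, 51.0]
ct = [11.0, 28.9, 47.5, 9.5, 20.8, 49.2]
bias, (lo, hi) = bland_altman(xray, ct)
```

"All samples within the limits of agreement" in the abstract means every individual difference falls between `lo` and `hi`; the plot usually drawn alongside shows differences against per-pair means.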
Analytic device including nanostructures
Di Fabrizio, Enzo M.
2015-07-02
A device for detecting an analyte in a sample comprising: an array including a plurality of pixels, each pixel including a nanochain comprising: a first nanostructure, a second nanostructure, and a third nanostructure, wherein size of the first nanostructure is larger than that of the second nanostructure, and size of the second nanostructure is larger than that of the third nanostructure, and wherein the first nanostructure, the second nanostructure, and the third nanostructure are positioned on a substrate such that when the nanochain is excited by an energy, an optical field between the second nanostructure and the third nanostructure is stronger than an optical field between the first nanostructure and the second nanostructure, wherein the array is configured to receive a sample; and a detector arranged to collect spectral data from a plurality of pixels of the array.
Laser Projection Photogrammetry and Video System for Quantification and Mensuration
Borne, L. J.; Kocak, D. M.
2005-05-01
This paper describes a novel photogrammetric laser/video system suited for a variety of underwater quantification and mensuration applications. The system comprises a purpose-built frame to which are mounted a roll/pitch motion reference sensor, a video camera, and three microlasers. The orientation of the three lasers provides for optical triangulation, which allows computation of range at a specific location in the field-of-view. From this information and that derived from the motion sensor, the spatially variant magnification can be determined over the entire field-of-view using a simple algorithm. A variety of parameters can then be estimated using image-processing techniques, including perspective overlays, range to a point or location, scale in any region of the image, and area measurements. Specialized image-processing algorithms can be added to provide object recognition, tracking, and other information. The specification of each component (i.e., laser wavelength and power, camera sensitivity and resolution, and dynamic range) and the mounting geometry are determined based on the specific application and needed accuracy. The system can be mounted for use on any subsea vehicle or platform and provides a low-cost, automated approach for obtaining quantitative information from standard undersea video. Currently, the application software allows for post-processing of the video information but could be modified to process the video information in real time. The first application of this system will be by Washington State Department of Fish and Wildlife researchers onboard DSV DELTA. The system may prove valuable for estimating the abundance of commercially and recreationally exploited groundfish species within a transect area conducted off the coast of Washington State. This non-intrusive, direct-observation technique affords a means to estimate the density of certain benthic fish species in high-relief areas that currently cannot be sampled using routine trawl…
DEFF Research Database (Denmark)
Markham, Annette
This paper takes an actor network theory approach to explore some of the ways that algorithms co-construct identity and relational meaning in contemporary use of social media. Based on intensive interviews with participants as well as activity logging and data tracking, the author presents a richly layered set of accounts to help build our understanding of how individuals relate to their devices, search systems, and social network sites. This work extends critical analyses of the power of algorithms in implicating the social self by offering narrative accounts from multiple perspectives. It also contributes an innovative method for blending actor network theory with symbolic interaction to grapple with the complexity of everyday sensemaking practices within networked global information flows.
New assignment algorithms for data association
Castanon, David A.
1992-08-01
This paper presents an overview of new classes of assignment algorithms for the solution of combinatorial optimization problems arising in data association. These algorithms are based on extensions of Bertsekas' auction algorithm. The paper includes experimental results using these algorithms and a modification of the algorithm of Jonker and Volgenant on 2-dimensional measurement-track data association problems in the presence of false alarms, missed detections, sensor bias noise, and sensor measurement noise. The results indicate that some of the new algorithms are very efficient for the solution of these data association problems.
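Bertsekas' auction algorithm, on which the above extensions are built, can be sketched as follows. This is a textbook illustration for a square benefit matrix, not the paper's implementation; the epsilon value and the matrix in the test are illustrative.

```python
def auction_assignment(benefit, eps=0.01):
    """Auction algorithm for the assignment problem (maximize total
    benefit). Each unassigned person bids for its best object, raising
    the price by its value margin over the second-best object plus eps."""
    n = len(benefit)
    prices = [0.0] * n
    owner = [None] * n      # owner[j] = person currently holding object j
    assigned = [None] * n   # assigned[i] = object held by person i
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = [benefit[i][j] - prices[j] for j in range(n)]
        j_best = max(range(n), key=lambda j: values[j])
        v_best = values[j_best]
        v_second = (max(values[j] for j in range(n) if j != j_best)
                    if n > 1 else v_best)
        prices[j_best] += v_best - v_second + eps  # the bid
        if owner[j_best] is not None:              # evict previous owner
            assigned[owner[j_best]] = None
            unassigned.append(owner[j_best])
        owner[j_best] = i
        assigned[i] = j_best
    return assigned
```

With integer benefits and eps below 1/n, the final assignment is optimal; larger eps trades accuracy for speed (epsilon-scaling).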
The Xmath Integration Algorithm
Bringslid, Odd
2009-01-01
The projects Xmath (Bringslid and Canessa, 2002) and dMath (Bringslid, de la Villa and Rodriguez, 2007) were supported by the European Commission in the so called Minerva Action (Xmath) and The Leonardo da Vinci programme (dMath). The Xmath eBook (Bringslid, 2006) includes algorithms into a wide range of undergraduate mathematical issues embedded…
The Copenhagen Triage Algorithm
DEFF Research Database (Denmark)
Hasselbalch, Rasmus Bo; Plesner, Louis Lind; Pries-Heje, Mia
2016-01-01
is non-inferior to an existing triage model in a prospective randomized trial. METHODS: The Copenhagen Triage Algorithm (CTA) study is a prospective two-center, cluster-randomized, cross-over, non-inferiority trial comparing CTA to the Danish Emergency Process Triage (DEPT). We include patients ≥16 years...
Dakota uncertainty quantification methods applied to the NEK-5000 SAHEX model.
Energy Technology Data Exchange (ETDEWEB)
Weirs, V. Gregory
2014-03-01
This report summarizes the results of a NEAMS project focused on the use of uncertainty and sensitivity analysis methods within the NEK-5000 and Dakota software framework for assessing failure probabilities as part of probabilistic risk assessment. NEK-5000 is a software tool under development at Argonne National Laboratory to perform computational fluid dynamics calculations for applications such as thermohydraulics of nuclear reactor cores. Dakota is a software tool developed at Sandia National Laboratories containing optimization, sensitivity analysis, and uncertainty quantification algorithms. The goal of this work is to demonstrate the use of uncertainty quantification methods in Dakota with NEK-5000.
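The sampling-based uncertainty propagation behind such a study can be sketched generically as follows. This is a minimal Monte Carlo illustration, not Dakota's or NEK-5000's API; the model and input distributions are made up.

```python
import random
import statistics

def monte_carlo_uq(model, samplers, n=100_000, threshold=None, seed=1):
    """Propagate input uncertainty through a model by sampling.
    samplers: callables that take an RNG and draw one uncertain input.
    Returns output mean and stdev, plus a failure probability if a
    threshold is given (fraction of samples exceeding it)."""
    rng = random.Random(seed)
    outputs = [model(*(s(rng) for s in samplers)) for _ in range(n)]
    result = {"mean": statistics.fmean(outputs),
              "stdev": statistics.stdev(outputs)}
    if threshold is not None:
        result["p_fail"] = sum(o > threshold for o in outputs) / n
    return result
```

For example, the sum of two independent standard normal inputs should show a mean near 0, a standard deviation near the square root of 2, and a 50% probability of exceeding 0.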
Energy Technology Data Exchange (ETDEWEB)
Moral, F. del; Ramos, A.; Salgado, M.; Andrade, B; Munoz, V.
2010-07-01
This work analyzes the influence of the choice of algorithm or planning system on the calculation of the same treatment plan. For this purpose, specific software was developed to compare plans for a series of IMRT cases of prostate and head-and-neck cancer calculated using the convolution, superposition and fast superposition algorithms implemented in the XiO 4.40 planning system (CMS). It was also used to compare the same treatment plan for lung pathology calculated in XiO with the mentioned algorithms and calculated in the Plan 4.1 planning system (Brainlab) using its pencil beam algorithm. Differences in dose among the treatment plans were quantified using a set of metrics. The recommendation that the algorithm be chosen carefully for dosimetry has been numerically confirmed. (Author).
Parallel Algorithms and Patterns
Energy Technology Data Exchange (ETDEWEB)
Robey, Robert W. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2016-06-16
This is a powerpoint presentation on parallel algorithms and patterns. A parallel algorithm is a well-defined, step-by-step computational procedure that emphasizes concurrency to solve a problem. Examples of problems include: Sorting, searching, optimization, matrix operations. A parallel pattern is a computational step in a sequence of independent, potentially concurrent operations that occurs in diverse scenarios with some frequency. Examples are: Reductions, prefix scans, ghost cell updates. We only touch on parallel patterns in this presentation. It really deserves its own detailed discussion which Gabe Rockefeller would like to develop.
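The prefix-scan pattern named above can be illustrated with the Hillis-Steele formulation, rendered here sequentially in Python. Within each pass every element update is independent of the others, which is what makes the pattern parallelizable.

```python
def inclusive_scan(values):
    """Hillis-Steele inclusive prefix scan: ceil(log2(n)) passes.
    On a parallel machine each pass runs as one concurrent step,
    since no update within a pass reads another update's result."""
    data = list(values)
    step = 1
    while step < len(data):
        data = [data[i] + data[i - step] if i >= step else data[i]
                for i in range(len(data))]
        step *= 2
    return data
```

This variant does O(n log n) total work; the work-efficient Blelloch scan reduces that to O(n) at the cost of a second (down-sweep) phase.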
Improved Chaff Solution Algorithm
2009-03-01
Under the Technology Demonstration Program (TDP) on the integration of shipboard sensors and weapon systems (SISWS), an algorithm was developed to automatically determine…
DEFF Research Database (Denmark)
Vissing, S.; Hededal, O.
An algorithm is presented for computing the m smallest eigenvalues and corresponding eigenvectors of the generalized eigenvalue problem (A - λB)Φ = 0, where A and B are real n x n symmetric matrices. In an iteration scheme the matrices A and B are projected simultaneously onto an m-dimensional subspace in order to establish and solve a symmetric generalized eigenvalue problem in the subspace. The algorithm is described in pseudo code and implemented in the C programming language for lower triangular matrices A and B. The implementation includes procedures for selecting initial iteration vectors…
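The projection scheme described above can be sketched in NumPy (the original is implemented in C). This assumes B is symmetric positive definite and uses dense solves purely for illustration.

```python
import numpy as np

def smallest_eigenpairs(A, B, m, iters=50, seed=0):
    """Simultaneous (subspace) iteration for (A - lambda*B)x = 0.
    Each sweep applies inverse iteration, projects A and B onto the
    m-dimensional subspace, and solves the reduced m x m problem."""
    n = A.shape[0]
    X = np.random.default_rng(seed).standard_normal((n, m))
    for _ in range(iters):
        Y = np.linalg.solve(A, B @ X)          # inverse-iteration step
        Ar, Br = Y.T @ A @ Y, Y.T @ B @ Y      # projected m x m problem
        L = np.linalg.cholesky(Br)             # reduce to standard form
        Linv = np.linalg.inv(L)
        w, V = np.linalg.eigh(Linv @ Ar @ Linv.T)
        X = Y @ (Linv.T @ V)                   # B-orthonormal Ritz vectors
    return w, X
```

The Ritz values w converge to the m smallest eigenvalues; convergence of the i-th pair is governed by the ratio of the i-th to the (m+1)-th eigenvalue.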
Martinez, Carol L.
1992-07-01
Over the last several years, Rocketdyne has actively developed condition and health monitoring techniques and their elements for rocket engine components, specifically high pressure turbopumps. Of key interest is the development of bearing signature analysis systems for real-time monitoring of the cryogen-cooled turbopump shaft bearings, which spin at speeds up to 36,000 RPM. These system elements include advanced bearing vibration sensors, signal processing techniques, wear mode algorithms, and integrated control software. Results of development efforts in the areas of signal processing and wear mode identification and quantification algorithms based on strain gage and deflectometer data are presented. Wear modes investigated include: inner race wear, cage pocket wear, outer race wear, differential ball wear, cracked inner race, and nominal wear.
Kernel Affine Projection Algorithms
Directory of Open Access Journals (Sweden)
José C. Príncipe
2008-05-01
The combination of the famed kernel trick and affine projection algorithms (APAs) yields powerful nonlinear extensions, named collectively here, KAPA. This paper is a follow-up study of the recently introduced kernel least-mean-square algorithm (KLMS). KAPA inherits the simplicity and online nature of KLMS while reducing its gradient noise, boosting performance. More interestingly, it provides a unifying model for several neural network techniques, including kernel least-mean-square algorithms, kernel adaline, sliding-window kernel recursive-least squares (KRLS), and regularization networks. Therefore, many insights can be gained into the basic relations among them and the tradeoff between computation complexity and performance. Several simulations illustrate its wide applicability.
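The KLMS building block that KAPA extends can be sketched as follows. This is a minimal illustration with a Gaussian kernel and made-up hyperparameters, not the paper's code.

```python
import numpy as np

def gauss_kernel(a, b, sigma=0.5):
    return np.exp(-np.sum((a - b) ** 2) / (2 * sigma ** 2))

class KLMS:
    """Kernel least-mean-square: every training sample becomes a
    kernel center whose coefficient is the step size times the
    prediction error observed at that sample (online learning)."""
    def __init__(self, step=0.5, sigma=0.5):
        self.step, self.sigma = step, sigma
        self.centers, self.coeffs = [], []

    def predict(self, x):
        return sum(a * gauss_kernel(c, x, self.sigma)
                   for c, a in zip(self.centers, self.coeffs))

    def update(self, x, d):
        err = d - self.predict(x)
        self.centers.append(np.asarray(x, dtype=float))
        self.coeffs.append(self.step * err)
        return err
```

The growing list of centers is the cost of the kernel trick here; KAPA and sparsification criteria address exactly this tradeoff between complexity and performance.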
Algorithms for Global Positioning
DEFF Research Database (Denmark)
Borre, Kai; Strang, Gilbert
The emergence of satellite technology has changed the lives of millions of people. In particular, GPS has brought an unprecedented level of accuracy to the field of geodesy. This text is a guide to the algorithms and mathematical principles that account for the success of GPS technology. At the heart of the matter are the positioning algorithms on which GPS technology relies, the discussion of which will affirm the mathematical contents of the previous chapters. Numerous ready-to-use MATLAB codes are included for the reader. This comprehensive guide will be invaluable for engineers…
Synthesis of Greedy Algorithms Using Dominance Relations
Nedunuri, Srinivas; Smith, Douglas R.; Cook, William R.
2010-01-01
Greedy algorithms exploit problem structure and constraints to achieve linear-time performance. Yet there is still no completely satisfactory way of constructing greedy algorithms. For example, the Greedy Algorithm of Edmonds depends upon translating a problem into an algebraic structure called a matroid, but the existence of such a translation can be as hard to determine as the existence of a greedy algorithm itself. An alternative characterization of greedy algorithms is in terms of dominance relations, a well-known algorithmic technique used to prune search spaces. We demonstrate a process by which dominance relations can be methodically derived for a number of greedy algorithms, including activity selection, and prefix-free codes. By incorporating our approach into an existing framework for algorithm synthesis, we demonstrate that it could be the basis for an effective engineering method for greedy algorithms. We also compare our approach with other characterizations of greedy algorithms.
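The activity-selection problem named above has the classic greedy solution: sort by finish time and keep each activity compatible with the last one kept. A minimal sketch of that greedy choice (not the paper's synthesis framework):

```python
def select_activities(activities):
    """Greedy activity selection: choosing the compatible activity
    with the earliest finish time yields a maximum-size schedule.
    activities: list of (start, finish) pairs."""
    chosen, last_finish = [], float("-inf")
    for start, finish in sorted(activities, key=lambda a: a[1]):
        if start >= last_finish:       # compatible with the last pick
            chosen.append((start, finish))
            last_finish = finish
    return chosen
```

In dominance-relation terms, any schedule ending later than another with the same number of activities is dominated, which is exactly why the earliest-finish choice is safe.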
Extension of least squares spectral resolution algorithm to high-resolution lipidomics data.
Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P A; Schmid, Adrien W
2016-03-31
Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis.
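The least-squares resolution step can be sketched as follows: observed intensities are modeled as a linear combination of theoretical isotope patterns. The patterns in the example are invented, and nonnegativity is enforced by simple clipping rather than the constrained solver a production tool would use.

```python
import numpy as np

def resolve_amounts(observed, patterns):
    """Solve min ||P a - y||^2, where each column of P is one species'
    theoretical isotope distribution; returns per-species amounts."""
    P = np.column_stack([np.asarray(p, dtype=float) for p in patterns])
    amounts, *_ = np.linalg.lstsq(P, np.asarray(observed, dtype=float),
                                  rcond=None)
    return np.clip(amounts, 0.0, None)  # amounts cannot be negative
```

Overlapping isotope envelopes of co-eluting species are thus deconvolved in one linear solve, which is the essence of least squares spectral resolution.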
Casanova, Henri; Robert, Yves
2008-01-01
"…The authors of the present book, who have extensive credentials in both research and instruction in the area of parallelism, present a sound, principled treatment of parallel algorithms. … This book is very well written and extremely well designed from an instructional point of view. … The authors have created an instructive and fascinating text. The book will serve researchers as well as instructors who need a solid, readable text for a course on parallelism in computing. Indeed, for anyone who wants an understandable text from which to acquire a current, rigorous, and broad vi…"
Doud, Andrea N; Weaver, Ashley A; Talton, Jennifer W; Barnard, Ryan T; Petty, John; Stitzel, Joel D
2016-01-01
Appropriate treatment at designated trauma centers (TCs) improves outcomes among injured children after motor vehicle crashes (MVCs). Advanced Automatic Crash Notification (AACN) has shown promise in improving triage to appropriate TCs. Pediatric-specific AACN algorithms have not yet been created. To create such an algorithm, it will be necessary to include some metric of development (age, height, or weight) as a covariate in the injury risk algorithm. This study sought to determine which marker of development should serve as a covariate in such an algorithm and to quantify injury risk at different levels of this metric. A retrospective review of occupants aged … was performed to inform a pediatric AACN algorithm. Clinical judgment, literature review, and chi-square analysis were used to create groupings of the chosen metric that would discriminate injury patterns. Adjusted odds of particular injury types at the different levels of this metric were calculated from logistic regression while controlling for gender, vehicle velocity change (delta V), belted status (optimal, suboptimal, or unrestrained), and crash mode (rollover, rear, frontal, near-side, or far-side). NASS-CDS analysis produced 11,541 occupants, who were grouped by age … and body mass index (BMI) classifications. Adjusted odds of key injury types with respect to these age categorizations revealed that younger children were at increased odds of sustaining Abbreviated Injury Scale (AIS) 2+ and 3+ head injuries and AIS 3+ spinal injuries, whereas older children were at increased odds of sustaining thoracic fractures, AIS 3+ abdominal injuries, and AIS 2+ upper and lower extremity injuries. The injury patterns observed across developmental metrics in this study mirror those previously described among children with blunt trauma. This study identifies age as the metric best suited for use in a pediatric AACN algorithm and utilizes 12 years of data to provide quantifiable risks of particular injuries at different levels of this metric. This risk quantification will…
Recursive Algorithm For Linear Regression
Varanasi, S. V.
1988-01-01
Order of model determined easily. The linear-regression algorithm includes recursive equations for the coefficients of a model of increased order. The algorithm eliminates duplicative calculations and facilitates the search for the minimum order of a linear-regression model that fits a set of data satisfactorily.
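The recursion can be sketched in the standard recursive least-squares form, shown here for a polynomial model with a diffuse-prior initialization. This is a generic textbook formulation, not the Tech Brief's exact equations.

```python
import numpy as np

def rls_fit(xs, ys, order, lam=1e6):
    """Recursive least squares for a polynomial model of given order.
    theta holds the coefficients; P (initialized large, a diffuse
    prior) is the inverse information matrix, updated per sample."""
    theta = np.zeros(order + 1)
    P = lam * np.eye(order + 1)
    for x, y in zip(xs, ys):
        phi = np.array([x ** k for k in range(order + 1)])  # regressor
        k_gain = P @ phi / (1.0 + phi @ P @ phi)
        theta = theta + k_gain * (y - phi @ theta)          # innovation
        P = P - np.outer(k_gain, phi @ P)
    return theta
```

Because each sample updates the estimate in O(order^2) operations, trying successive model orders avoids refitting from scratch, which is the point of the recursive formulation.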
Boer, de P.; Caspers, M.; Sanders, J.W.; Kemperman, R.; Wijman, J.; Lommerse, G.; Roeselers, G.; Montijn, R.; Abee, T.; Kort, R.
2015-01-01
Background
Spoilage of food products is frequently caused by bacterial spores and lactic acid bacteria. Identification of these organisms by classic cultivation methods is limited by their ability to form colonies on nutrient agar plates. In this study, we adapted and optimized 16S rRNA amplicon
Detection and Quantification of Neurotransmitters in Dialysates
Zapata, Agustin; Chefer, Vladimir I.; Shippenberg, Toni S.; Denoroy, Luc
2009-01-01
Sensitive analytical methods are needed for the separation and quantification of neurotransmitters obtained in microdialysate studies. This unit describes methods that permit quantification of nanomolar concentrations of monoamines and their metabolites (high-pressure liquid chromatography electrochemical detection), acetylcholine (HPLC-coupled to an enzyme reactor), and amino acids (HPLC-fluorescence detection; capillary electrophoresis with laser-induced fluorescence detection).
Comparison of five DNA quantification methods
DEFF Research Database (Denmark)
Nielsen, Karsten; Mogensen, Helle Smidt; Hedman, Johannes;
2008-01-01
Six commercial preparations of human genomic DNA were quantified using five quantification methods: UV spectrometry, SYBR-Green dye staining, slot blot hybridization with the probe D17Z1, Quantifiler Human DNA Quantification kit and RB1 rt-PCR. All methods measured higher DNA concentrations than ...
Quantification and Reconstruction in Photoacoustic Tomography
Guo, Zijian
Optical absorption is closely associated with many physiologically important parameters, such as the concentration and oxygen saturation of hemoglobin. Conventionally, accurate quantification in PAT requires knowledge of the optical fluence attenuation, acoustic pressure attenuation, and detection bandwidth. We circumvent this requirement by quantifying the optical absorption coefficients from the acoustic spectra of PA signals acquired at multiple optical wavelengths. We demonstrate the method using optical-resolution photoacoustic microscopy (OR-PAM) and acoustic-resolution photoacoustic microscopy (AR-PAM) in the optical ballistic regime and in the optical diffusive regime, respectively. The data acquisition speed in photoacoustic computed tomography (PACT) is limited by the laser repetition rate and the number of parallel ultrasound detecting channels. Reconstructing an image with fewer measurements can effectively accelerate the data acquisition and reduce the system cost. We adapted Compressed Sensing (CS) for the reconstruction in PACT. CS-based PACT was implemented as a non-linear conjugate gradient descent algorithm and tested with both phantom and in vivo experiments. Speckles have been considered ubiquitous in all scattering-based coherent imaging technologies. As a coherent imaging modality based on optical absorption, photoacoustic (PA) tomography (PAT) is generally devoid of speckles. PAT suppresses speckles by building up prominent boundary signals, via a mechanism similar to that of specular reflection. When imaging smooth boundary absorbing targets, the speckle visibility in PAT, which is defined as the ratio of the square root of the average power of speckles to that of boundaries, is inversely proportional to the square root of the absorber density. If the surfaces of the absorbing targets have uncorrelated height fluctuations, however, the boundary features may become fully developed speckles. The findings were validated by simulations.
Akhtar, T.; Shoemaker, C. A.
2009-12-01
expensive objective function evaluations are performed to generate a near optimal trade-off curve. GOMORS was tested on a 6-dimensional groundwater remediation model with 3 pumping wells and 2 management periods, and the results were compared against a widely used multi-objective evolutionary optimization algorithm: Non-Dominated Sorting Genetic Algorithm (NSGA-II). The results indicate that GOMORS matches the performance of NSGA-II with one or more orders of magnitude fewer objective function evaluations. GOMORS is a general method that can be used with other water resources applications and with other multiple objectives including those involving risks or quantification of uncertainty.
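The multi-objective comparison above relies on Pareto dominance. A minimal sketch of extracting the non-dominated front (for minimization), which is the core filter inside NSGA-II-style algorithms:

```python
def pareto_front(points):
    """Return the non-dominated points, minimizing every objective.
    p dominates q if p is no worse in every objective and strictly
    better in at least one."""
    def dominates(p, q):
        return (all(a <= b for a, b in zip(p, q))
                and any(a < b for a, b in zip(p, q)))
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]
```

The trade-off curve mentioned above is exactly this front; NSGA-II repeats the filter to rank whole populations into successive fronts, while surrogate-based methods like GOMORS aim to reach a comparable front with far fewer expensive evaluations.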
Partitional clustering algorithms
2015-01-01
This book summarizes the state-of-the-art in partitional clustering. Clustering, the unsupervised classification of patterns into groups, is one of the most important tasks in exploratory data analysis. Primary goals of clustering include gaining insight into, classifying, and compressing data. Clustering has a long and rich history that spans a variety of scientific disciplines including anthropology, biology, medicine, psychology, statistics, mathematics, engineering, and computer science. As a result, numerous clustering algorithms have been proposed since the early 1950s. Among these algorithms, partitional (nonhierarchical) ones have found many applications, especially in engineering and computer science. This book provides coverage of consensus clustering, constrained clustering, large scale and/or high dimensional clustering, cluster validity, cluster visualization, and applications of clustering. Examines clustering as it applies to large and/or high-dimensional data sets commonly encountered in reali...
Targeted Proteomic Quantification on Quadrupole-Orbitrap Mass Spectrometer*
Gallien, Sebastien; Duriez, Elodie; Crone, Catharina; Kellmann, Markus; Moehring, Thomas; Domon, Bruno
2012-01-01
There is an immediate need for improved methods to systematically and precisely quantify large sets of peptides in complex biological samples. To date protein quantification in biological samples has been routinely performed on triple quadrupole instruments operated in selected reaction monitoring mode (SRM), and two major challenges remain. Firstly, the number of peptides to be included in one survey experiment needs to be increased to routinely reach several hundreds, and secondly, the degr...
Quantum Central Processing Unit and Quantum Algorithm
Institute of Scientific and Technical Information of China (English)
王安民
2002-01-01
Based on a scalable and universal quantum network, the quantum central processing unit proposed in our previous paper [Chin. Phys. Lett. 18 (2001) 166], the whole quantum network for the known quantum algorithms, including the quantum Fourier transform, Shor's algorithm and Grover's algorithm, is obtained in a unified way.
Biology-Derived Algorithms in Engineering Optimization
Yang, Xin-She
2010-01-01
Biology-derived algorithms are an important part of computational sciences, which are essential to many scientific disciplines and engineering applications. Many computational methods are derived from or based on the analogy to natural evolution and biological activities, and these biologically inspired computations include genetic algorithms, neural networks, cellular automata, and other algorithms.
Inverse Computation and the Universal Resolving Algorithm
Institute of Scientific and Technical Information of China (English)
无
2001-01-01
We survey fundamental concepts for inverse programming and then present the Universal Resolving Algorithm, an algorithm for inverse computation in a first-order, functional programming language. We discuss the key concepts of the algorithm, including a three-step approach based on the notion of a perfect process tree, and demonstrate our implementation with several examples of inverse computation.
Boosting foundations and algorithms
Schapire, Robert E
2012-01-01
Boosting is an approach to machine learning based on the idea of creating a highly accurate predictor by combining many weak and inaccurate "rules of thumb." A remarkably rich theory has evolved around boosting, with connections to a range of topics, including statistics, game theory, convex optimization, and information geometry. Boosting algorithms have also enjoyed practical success in such fields as biology, vision, and speech processing. At various times in its history, boosting has been perceived as mysterious, controversial, even paradoxical.
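The "weak rules of thumb" idea can be made concrete with AdaBoost over one-dimensional threshold stumps. This is a minimal sketch of the standard algorithm, not the book's code; the data in the test are invented.

```python
import math

def adaboost_train(xs, ys, rounds=10):
    """AdaBoost with 1-D threshold stumps: each round fits the stump
    with the lowest weighted error, then re-weights samples so the
    next weak rule focuses on the current mistakes. ys are +/-1."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []  # (alpha, threshold, polarity)
    for _ in range(rounds):
        best = None
        for t in sorted(set(xs)):
            for pol in (1, -1):
                preds = [pol if x > t else -pol for x in xs]
                err = sum(wi for wi, p, y in zip(w, preds, ys) if p != y)
                if best is None or err < best[0]:
                    best = (err, t, pol, preds)
        err, t, pol, preds = best
        err = min(max(err, 1e-10), 1 - 1e-10)       # avoid log(0)
        alpha = 0.5 * math.log((1 - err) / err)     # stump weight
        ensemble.append((alpha, t, pol))
        w = [wi * math.exp(-alpha * y * p) for wi, y, p in zip(w, ys, preds)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def adaboost_predict(ensemble, x):
    score = sum(a * (pol if x > t else -pol) for a, t, pol in ensemble)
    return 1 if score >= 0 else -1
```

The exponential re-weighting is where the connections to game theory and convex optimization mentioned above enter: AdaBoost performs coordinate descent on an exponential loss.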
Elementary functions algorithms and implementation
Muller, Jean-Michel
2016-01-01
This textbook presents the concepts and tools necessary to understand, build, and implement algorithms for computing elementary functions (e.g., logarithms, exponentials, and the trigonometric functions). Both hardware- and software-oriented algorithms are included, along with issues related to accurate floating-point implementation. This third edition has been updated and expanded to incorporate the most recent advances in the field, new elementary function algorithms, and function software. After a preliminary chapter that briefly introduces some fundamental concepts of computer arithmetic, such as floating-point arithmetic and redundant number systems, the text is divided into three main parts. Part I considers the computation of elementary functions using algorithms based on polynomial or rational approximations and using table-based methods; the final chapter in this section deals with basic principles of multiple-precision arithmetic. Part II is devoted to a presentation of “shift-and-add” algorithm...
Evolutionary Graph Drawing Algorithms
Institute of Scientific and Technical Information of China (English)
Huang Jing-wei; Wei Wen-fang
2003-01-01
In this paper, graph drawing algorithms based on genetic algorithms are designed for general undirected graphs and directed graphs. As shown, graph drawing algorithms designed with genetic algorithms have the following advantages: the frames of the algorithms are unified, the method is simple, and different algorithms may be obtained by designing different objective functions, thereby enhancing the reuse of the algorithms. Aesthetics or constraints may also be added to satisfy different requirements.
Accessible quantification of multiparticle entanglement
Cianciaruso, Marco; Adesso, Gerardo
2015-01-01
Entanglement is a key ingredient for quantum technologies and a fundamental signature of quantumness in a broad range of phenomena encompassing many-body physics, thermodynamics, cosmology, and life sciences. For arbitrary multiparticle systems, the quantification of entanglement typically involves hard optimisation problems, and requires demanding tomographical techniques. In this paper we show that such difficulties can be overcome by developing an experimentally friendly method to evaluate measures of multiparticle entanglement via a geometric approach. The method provides exact analytical results for a relevant class of mixed states of $N$ qubits, and computable lower bounds to entanglement for any general state. For practical purposes, the entanglement determination requires local measurements in just three settings for any $N$. We demonstrate the power of our approach to quantify multiparticle entanglement in $N$-qubit bound entangled states and other states recently engineered in laboratory using quant...
Energy Technology Data Exchange (ETDEWEB)
Fontana, W.
1990-12-13
In this paper complex adaptive systems are defined by a self-referential loop in which objects encode functions that act back on these objects. A model for this loop is presented. It uses a simple recursive formal language, derived from the lambda-calculus, to provide a semantics that maps character strings into functions that manipulate symbols on strings. The interaction between two functions, or algorithms, is defined naturally within the language through function composition, and results in the production of a new function. An iterated map acting on sets of functions and a corresponding graph representation are defined. Their properties are useful to discuss the behavior of a fixed-size ensemble of randomly interacting functions. This "function gas", or "Turing gas", is studied under various conditions, and evolves cooperative interaction patterns of considerable intricacy. These patterns adapt under the influence of perturbations consisting of the addition of new random functions to the system. Different organizations emerge depending on the availability of self-replicators.
Quantification of sun-related changes in the eye in conjunctival ultraviolet autofluorescence images
Huynh, Emily; Bukowska, Danuta M.; Yazar, Seyhan; McKnight, Charlotte M.; Mian, Ajmal; Mackey, David A.
2016-01-01
Abstract. Quantification of sun-related changes in conjunctival ultraviolet autofluorescence (CUVAF) images is a subjective and tedious task, in which reproducibility of results is difficult. Thus, we have developed a semiautomatic method in MATLAB® to analyze CUVAF images retrospectively. The algorithm was validated on 200 images from 50 randomly selected participants from the Western Australian Pregnancy Cohort (Raine) study 20-year follow-up assessment, in which CUVAF area measurements were available from previous manual analysis. Algorithm performance was compared to manual measurements and yielded better than 95% correspondence in both intra- and interobserver agreement. Furthermore, the semiautomatic method significantly reduced analysis time by 50%. PMID:27610398
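The area measurement at the heart of such an analysis can be sketched as a threshold-and-count step. This is a generic illustration, not the authors' MATLAB pipeline; the threshold and pixel pitch are hypothetical.

```python
import numpy as np

def fluorescence_area_mm2(image, threshold, mm_per_pixel):
    """Count pixels brighter than the threshold and convert the count
    to physical area using the squared pixel pitch."""
    mask = np.asarray(image) > threshold
    return float(mask.sum()) * mm_per_pixel ** 2
```

A semiautomatic tool would add region-of-interest selection and manual threshold correction around this core, which is where the reported intra- and interobserver agreement is earned.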
Benchmarking monthly homogenization algorithms
Directory of Open Access Journals (Sweden)
V. K. C. Venema
2011-08-01
Full Text Available The COST (European Cooperation in Science and Technology) Action ES0601: Advances in homogenization methods of climate series: an integrated approach (HOME) has executed a blind intercomparison and validation study for monthly homogenization algorithms. Time series of monthly temperature and precipitation were evaluated because of their importance for climate studies and because they represent two important types of statistics (additive and multiplicative). The algorithms were validated against a realistic benchmark dataset. The benchmark contains real inhomogeneous data as well as simulated data with inserted inhomogeneities. Random break-type inhomogeneities were added to the simulated datasets, modeled as a Poisson process with normally distributed breakpoint sizes. To approximate real-world conditions, breaks were introduced that occur simultaneously in multiple station series within a simulated network of station data. The simulated time series also contained outliers, missing data periods and local station trends. Further, a stochastic nonlinear global (network-wide) trend was added.
Participants provided 25 separate homogenized contributions as part of the blind study as well as 22 additional solutions submitted after the details of the imposed inhomogeneities were revealed. These homogenized datasets were assessed by a number of performance metrics including (i) the centered root mean square error relative to the true homogeneous value at various averaging scales, (ii) the error in linear trend estimates and (iii) traditional contingency skill scores. The metrics were computed both using the individual station series as well as the network average regional series. The performance of the contributions depends significantly on the error metric considered. Contingency scores by themselves are not very informative. Although relative homogenization algorithms typically improve the homogeneity of temperature data, only the best ones improve
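The benchmark's simulated inhomogeneities can be mimicked with a short sketch: break positions follow a Poisson process (approximated as a per-step Bernoulli trial) and break sizes are normally distributed, as described above. The rate and size values below are illustrative assumptions, not the HOME settings.

```python
import random

def insert_breaks(series, rate, size_sd, rng):
    """Add break-type inhomogeneities to a homogeneous series: a break
    occurs at each step with probability `rate` (a per-step approximation
    of a Poisson process), and each break shifts all later values by a
    normally distributed offset."""
    out = list(series)
    offset = 0.0
    for t in range(len(out)):
        if rng.random() < rate:
            offset += rng.gauss(0.0, size_sd)
        out[t] += offset
    return out

rng = random.Random(42)
homogeneous = [0.0] * 120  # e.g. ten years of monthly anomalies
broken = insert_breaks(homogeneous, rate=1 / 60, size_sd=0.8, rng=rng)
```

A homogenization algorithm is then scored by how well it recovers `homogeneous` from `broken`.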
Objectified quantification of uncertainties in Bayesian atmospheric inversions
Directory of Open Access Journals (Sweden)
A. Berchet
2014-07-01
Full Text Available Classical Bayesian atmospheric inversions process atmospheric observations and prior emissions, the two being connected by an observation operator picturing mainly the atmospheric transport. These inversions rely on prescribed errors in the observations, the prior emissions and the observation operator. At the meso-scale, inversion results are very sensitive to the prescribed error distributions, which are not accurately known. The classical Bayesian framework experiences difficulties in quantifying the impact of mis-specified error distributions on the optimized fluxes. In order to cope with this issue, we rely on recent research results and enhance the classical Bayesian inversion framework through a marginalization over all the plausible errors that can be prescribed in the system. The marginalization consists of computing inversions for all possible error distributions, weighted by the probability of occurrence of each error distribution. The posterior distribution of the fluxes calculated by the marginalization is complicated and not explicitly describable. We therefore carry out Monte Carlo sampling relying on an approximation of the probability of occurrence of the error distributions. This approximation is deduced from the well-tested maximum likelihood algorithm. Thus, the marginalized inversion relies on an automatic, objectified diagnosis of the error statistics, without any prior knowledge about the matrices. It robustly includes the uncertainties on the error distributions, contrary to what is classically done with frozen expert-knowledge error statistics. Some expert knowledge is still used in the method for the choice of the emission aggregation pattern and the sampling protocol in order to reduce the computational cost of the method. The relevance and the robustness of the method are tested on a case study: the inversion of methane surface fluxes at the meso-scale with real observation sites in Eurasia. Observing System Simulation
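A one-dimensional toy version illustrates the marginalization idea: instead of freezing the observation-error variance, candidate variances are weighted by an (assumed) likelihood of the observed innovation and the resulting posterior flux estimates are averaged. All numbers and the scalar observation operator below are illustrative, not the paper's setup.

```python
import math, random

def marginalized_inversion(y, xb, H, B, r_candidates, rng, n=2000):
    """Toy 1-D marginalized inversion: rather than freezing the
    observation-error variance R, weight candidate values of R by the
    likelihood of the observed innovation (a stand-in for the paper's
    maximum-likelihood diagnosis), sample R accordingly, and average the
    resulting posterior flux estimates."""
    weights = []
    for R in r_candidates:
        s = H * H * B + R  # innovation variance
        weights.append(math.exp(-0.5 * (y - H * xb) ** 2 / s) / math.sqrt(s))
    draws = rng.choices(r_candidates, weights=weights, k=n)
    posteriors = []
    for R in draws:
        gain = B * H / (H * H * B + R)  # scalar Kalman-type gain
        posteriors.append(xb + gain * (y - H * xb))
    return sum(posteriors) / n

rng = random.Random(0)
flux = marginalized_inversion(y=2.0, xb=1.0, H=1.0, B=1.0,
                              r_candidates=[0.5, 1.0, 2.0], rng=rng)
```

Each sampled error variance yields a different pull toward the observation; the marginal estimate averages over that spread instead of committing to one expert-chosen variance.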
Uncertainty Quantification for Large-Scale Ice Sheet Modeling
Energy Technology Data Exchange (ETDEWEB)
Ghattas, Omar [Univ. of Texas, Austin, TX (United States)
2016-02-05
This report summarizes our work to develop advanced forward and inverse solvers and uncertainty quantification capabilities for a nonlinear 3D full Stokes continental-scale ice sheet flow model. The components include: (1) forward solver: a new state-of-the-art parallel adaptive scalable high-order-accurate mass-conservative Newton-based 3D nonlinear full Stokes ice sheet flow simulator; (2) inverse solver: a new adjoint-based inexact Newton method for solution of deterministic inverse problems governed by the above 3D nonlinear full Stokes ice flow model; and (3) uncertainty quantification: a novel Hessian-based Bayesian method for quantifying uncertainties in the inverse ice sheet flow solution and propagating them forward into predictions of quantities of interest such as ice mass flux to the ocean.
Uncertainty Quantification and Validation for RANS Turbulence Models
Oliver, Todd; Moser, Robert
2011-11-01
Uncertainty quantification and validation procedures for RANS turbulence models are developed and applied. The procedures used here rely on a Bayesian view of probability. In particular, the uncertainty quantification methodology requires stochastic model development, model calibration, and model comparison, all of which are pursued using tools from Bayesian statistics. Model validation is also pursued in a probabilistic framework. The ideas and processes are demonstrated on a channel flow example. Specifically, a set of RANS models--including Baldwin-Lomax, Spalart-Allmaras, k-ε, k-ω, and v2-f--and uncertainty representations are analyzed using DNS data for fully-developed channel flow. Predictions of various quantities of interest and the validity (or invalidity) of the various models for making those predictions will be examined. This work is supported by the Department of Energy [National Nuclear Security Administration] under Award Number [DE-FC52-08NA28615].
MORPHOLOGICAL QUANTIFICATION OF AORTIC CALCIFICATION FROM LOW MAGNIFICATION IMAGES
Directory of Open Access Journals (Sweden)
Jesús Angulo
2011-05-01
Full Text Available Atherosclerotic and medial vascular calcifications are frequent in chronic renal failure patients and predict their increased cardiovascular mortality. Experimental models for mice have recently been developed in order to study these disorders. The aim of this paper is to present the morphological image processing algorithms developed for the semi-automated measurement of calcification from sections of aorta stained using von Kossa's silver nitrate procedure and acquired at low magnification power (×2.5) as colour images. The approach is separated into two sequential phases. First, the segmentation extracts the calcification structures and demarcates the region of the atherosclerotic lesion within the tissue. The segmentation yields the image data which is the input to the second phase, the quantification. Calcified structures are measured inside and outside the lesion using a granulometric curve which allows the calculation of statistical parameters of size. The same operator computes the shape of the lesion. The relative proportion of the area of calcification is also calculated for the atherosclerotic lesion area and for the area outside such lesions. In conclusion, the method developed here allows quantification of vascular calcified deposits in mouse aorta. This method will be useful for the quantitative assessment of pathological vascular changes in animals and man.
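A granulometric curve records how much of a binary structure survives morphological openings of increasing size. The 1-D sketch below conveys the principle; it is a simplification of the paper's 2-D openings on segmented calcification masks, and the input bit pattern is invented.

```python
def opening_1d(bits, k):
    """Binary opening (erosion then dilation) with a flat structuring
    element of length k."""
    n = len(bits)
    eroded = [i + k <= n and all(bits[i:i + k]) for i in range(n)]
    out = [False] * n
    for i, hit in enumerate(eroded):
        if hit:
            for j in range(i, i + k):
                out[j] = True
    return out

def granulometric_curve(bits, max_size):
    """Area surviving an opening of size k, for k = 1..max_size: a
    pattern-spectrum style size distribution of the bright structures."""
    return [sum(opening_1d(bits, k)) for k in range(1, max_size + 1)]

# Two 'deposits': one of length 3, one of length 1.
curve = granulometric_curve([True, True, True, False, True], 4)
```

The drop in surviving area between successive opening sizes indicates how much material lies at each size class, which is what the statistical size parameters in the paper are derived from.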
Guided Wave Delamination Detection and Quantification With Wavefield Data Analysis
Tian, Zhenhua; Campbell Leckey, Cara A.; Seebo, Jeffrey P.; Yu, Lingyu
2014-01-01
Unexpected damage can occur in aerospace composites due to impact events or material stress during off-nominal loading events. In particular, laminated composites are susceptible to delamination damage due to weak transverse tensile and inter-laminar shear strengths. The development of reliable and quantitative techniques to detect delamination damage in laminated composites is imperative for safe and functional, optimally designed next-generation composite structures. In this paper, we investigate guided wave interactions with delamination damage and develop quantification algorithms by using wavefield data analysis. The trapped guided waves in the delamination region are observed from the wavefield data and further quantitatively interpreted by using different wavenumber analysis methods. The frequency-wavenumber representation of the wavefield shows that new wavenumbers are present and correlate to trapped waves in the damage region. These new wavenumbers are used to detect and quantify the delamination damage through the wavenumber analysis, which can show how the wavenumber changes as a function of wave propagation distance. The location and spatial duration of the new wavenumbers can be identified, providing a useful means not only for detecting the presence of delamination damage but also allowing for estimation of the delamination size. Our method has been applied to detect and quantify real delamination damage with complex geometry (grown using a quasi-static indentation technique). The detection and quantification results show the location, size, and shape of the delamination damage.
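The key observation, that trapped waves in the delaminated region show up as new, higher wavenumbers, can be illustrated with a plain DFT on synthetic 1-D snapshots. The signals and sampling below are invented for illustration; the paper works with full 2-D wavefield data and spatially windowed wavenumber analysis.

```python
import cmath, math

def dominant_wavenumber(snapshot, dx):
    """Dominant spatial wavenumber (rad per unit length) of a 1-D
    wavefield snapshot, via a plain DFT over space. Bin 0 (DC) and the
    negative-frequency half are skipped."""
    n = len(snapshot)
    best_k, best_mag = 1, 0.0
    for k in range(1, n // 2):
        coeff = sum(snapshot[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))
        if abs(coeff) > best_mag:
            best_mag, best_k = abs(coeff), k
    return 2 * math.pi * best_k / (n * dx)

# Synthetic snapshots: a pristine region (4 cycles across the window)
# versus a delaminated region where trapped waves triple the wavenumber.
n, dx = 128, 0.01
pristine = [math.sin(2 * math.pi * 4 * j / n) for j in range(n)]
trapped = [math.sin(2 * math.pi * 12 * j / n) for j in range(n)]
k_pristine = dominant_wavenumber(pristine, dx)
k_trapped = dominant_wavenumber(trapped, dx)
```

Sliding such a window along the propagation path reveals where the wavenumber jumps, which localizes the delamination and bounds its size.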
Directory of Open Access Journals (Sweden)
Ruholla Jafari-Marandi
2017-04-01
Full Text Available Genetic Algorithm (GA) has been one of the most popular methods for many challenging optimization problems when exact approaches are too computationally expensive. A review of the literature shows extensive research attempting to adapt and develop the standard GA. Nevertheless, the essence of GA, which consists of concepts such as chromosomes, individuals, crossover, and mutation, has rarely been the focus of recent researchers. In the method proposed in this paper, Fluid Genetic Algorithm (FGA), some of these concepts are changed or removed, and new concepts are introduced. The performance of GA and FGA are compared through seven benchmark functions. FGA not only shows a better success rate and better convergence control, but it can also be applied to a wider range of problems, including multi-objective and multi-level problems. The application of FGA to a real engineering problem, the Quadratic Assignment Problem (QAP), is also demonstrated.
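For reference, the baseline GA concepts that FGA revisits (population, selection, crossover, mutation, elitism) can be sketched compactly. This is a generic real-coded GA on the sphere benchmark, not the FGA variant itself; all hyperparameters are illustrative.

```python
import random

def genetic_algorithm(fitness, n_genes, pop_size=30, gens=60,
                      p_cross=0.9, p_mut=0.1, rng=None):
    """Baseline real-coded GA: tournament selection, one-point
    crossover, per-gene Gaussian mutation, and two-member elitism.
    Minimizes `fitness`."""
    rng = rng or random.Random()
    pop = [[rng.uniform(-5, 5) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(gens):
        pop.sort(key=fitness)
        nxt = pop[:2]                                  # elitism
        while len(nxt) < pop_size:
            p1 = min(rng.sample(pop, 3), key=fitness)  # tournament
            p2 = min(rng.sample(pop, 3), key=fitness)
            child = list(p1)
            if n_genes > 1 and rng.random() < p_cross:
                cut = rng.randrange(1, n_genes)        # one-point crossover
                child = p1[:cut] + p2[cut:]
            child = [g + rng.gauss(0.0, 0.3) if rng.random() < p_mut else g
                     for g in child]
            nxt.append(child)
        pop = nxt
    return min(pop, key=fitness)

sphere = lambda x: sum(g * g for g in x)   # a standard benchmark function
best = genetic_algorithm(sphere, n_genes=3, rng=random.Random(0))
```

FGA's contribution lies in replacing or reshaping exactly these building blocks, so having the standard loop in view makes the comparison concrete.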
The cyclic reduction algorithm
Bini, Dario; Meini, Beatrice
2009-05-01
Cyclic reduction is an algorithm invented by G. H. Golub and R. W. Hockney in the mid 1960s for solving linear systems related to the finite differences discretization of the Poisson equation over a rectangle. Among the algorithms of Gene Golub, it is one of the most versatile and powerful ever created. Recently, it has been applied to solve different problems from different application areas. In this paper we survey the main features of cyclic reduction, relate it to properties of analytic functions, recall its extension to solving more general finite and infinite linear systems, and different kinds of nonlinear matrix equations, including algebraic Riccati equations, with applications to Markov chains, queueing models and transport theory. Some new results concerning the convergence properties of cyclic reduction and its applicability are proved under very weak assumptions. New formulae for overcoming breakdown are provided.
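For the classical tridiagonal case, cyclic reduction can be written down directly: each level eliminates the odd-positioned unknowns, halving the system until one equation remains, and back substitution recovers the rest. The sketch below is a minimal serial version, assuming n = 2^q − 1; the scheme's real strength is that every level's updates are mutually independent and hence parallel.

```python
def cyclic_reduction(a, b, c, d):
    """Solve a tridiagonal system (sub-diagonal a, diagonal b,
    super-diagonal c, right-hand side d; a[0] = c[-1] = 0) by odd-even
    cyclic reduction. Assumes n = 2**q - 1."""
    n = len(b)
    a, b, c, d = list(a), list(b), list(c), list(d)
    h = 1
    while 2 * h <= n:                        # forward elimination levels
        for i in range(2 * h - 1, n, 2 * h):
            al = -a[i] / b[i - h]            # kills coupling to x[i-h]
            be = -c[i] / b[i + h] if i + h < n else 0.0
            b[i] += al * c[i - h] + (be * a[i + h] if i + h < n else 0.0)
            d[i] += al * d[i - h] + (be * d[i + h] if i + h < n else 0.0)
            a[i] = al * a[i - h]
            c[i] = be * c[i + h] if i + h < n else 0.0
        h *= 2
    x = [0.0] * n
    x[h - 1] = d[h - 1] / b[h - 1]           # single remaining unknown
    h //= 2
    while h >= 1:                            # back substitution levels
        for i in range(h - 1, n, 2 * h):
            s = d[i]
            if i - h >= 0:
                s -= a[i] * x[i - h]
            if i + h < n:
                s -= c[i] * x[i + h]
            x[i] = s / b[i]
        h //= 2
    return x

# Discrete 1-D Poisson-style system: tridiag(-1, 2, -1), n = 7.
n = 7
a = [0.0] + [-1.0] * (n - 1)
b = [2.0] * n
c = [-1.0] * (n - 1) + [0.0]
x = cyclic_reduction(a, b, c, [0, 0, 0, 2, 0, 0, 0])
```

At each level, a kept row's couplings double in distance, which is what makes the recursion terminate in log2(n+1) levels.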
Beyond-Binary Arithmetic: Algorithms and VLSI Implementations
Aoki, Takafumi; Higuchi, Tatsuo
2000-01-01
Beyond-binary arithmetic algorithms are defined as a new class of computer arithmetic algorithms which employ non-binary data representations to achieve higher performances beyond those of conventional binary algorithms. This paper presents prominent examples of beyond-binary arithmetic algorithms: examples include (i) a high-radix redundant division algorithm without using lookup tables, (ii) a high-radix redundant CORDIC algorithm for fast vector rotation, and (iii) redundant complex arithm...
Quantification of chromatin condensation level by image processing.
Irianto, Jerome; Lee, David A; Knight, Martin M
2014-03-01
The level of chromatin condensation is related to the silencing/activation of chromosomal territories and therefore impacts on gene expression. Chromatin condensation changes during cell cycle progression and differentiation, and is influenced by various physicochemical and epigenetic factors. This study describes a validated experimental technique to quantify chromatin condensation. A novel image processing procedure is developed using Sobel edge detection to quantify the level of chromatin condensation from nuclei images taken by confocal microscopy. The algorithm was developed in MATLAB and used to quantify different levels of chromatin condensation in chondrocyte nuclei achieved through alteration in osmotic pressure. The resulting chromatin condensation parameter (CCP) is in good agreement with independent multi-observer qualitative visual assessment. This image processing technique thereby provides a validated unbiased parameter for rapid and highly reproducible quantification of the level of chromatin condensation.
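The core of such a measure is a Sobel gradient magnitude computed over the nucleus image. A pure-Python sketch of the Sobel step is shown below; the paper's full pipeline (masking, normalization, and the exact CCP definition) is not reproduced, and the test image is synthetic.

```python
def sobel_magnitude(img):
    """Gradient magnitude via the 3x3 Sobel kernels (pure Python,
    border pixels left at zero)."""
    h, w = len(img), len(img[0])
    kx = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
    ky = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(kx[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(ky[j][i] * img[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = (gx * gx + gy * gy) ** 0.5
    return out

# Synthetic 'nucleus' patch with one sharp intensity step.
img = [[0, 0, 0, 10, 10, 10] for _ in range(4)]
edges = sobel_magnitude(img)
# A crude condensation proxy: mean edge strength over the patch.
edge_density = sum(map(sum, edges)) / (len(img) * len(img[0]))
```

Condensed chromatin produces sharper intensity boundaries, so a higher edge density is the intuition behind an edge-based condensation parameter.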
Quantification of Uncertainty in Extreme Scale Computations (QUEST)
Energy Technology Data Exchange (ETDEWEB)
Ghanem, Roger [Univ. of Southern California, Los Angeles, CA (United States)
2017-04-18
QUEST was a SciDAC Institute comprising Sandia National Laboratories, Los Alamos National Laboratory, the University of Southern California, the Massachusetts Institute of Technology, the University of Texas at Austin, and Duke University. The mission of QUEST was to: (1) develop a broad class of uncertainty quantification (UQ) methods/tools, and (2) provide UQ expertise and software to other SciDAC projects, thereby enabling/guiding their UQ activities. The USC effort centered on the development of reduced models and efficient algorithms for implementing various components of the UQ pipeline. USC personnel were responsible for the development of adaptive bases, adaptive quadrature, and reduced models to be used in estimation and inference.
Developing and Implementing the Data Mining Algorithms in RAVEN
Energy Technology Data Exchange (ETDEWEB)
Sen, Ramazan Sonat [Idaho National Lab. (INL), Idaho Falls, ID (United States); Maljovec, Daniel Patrick [Idaho National Lab. (INL), Idaho Falls, ID (United States); Alfonsi, Andrea [Idaho National Lab. (INL), Idaho Falls, ID (United States); Rabiti, Cristian [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2015-09-01
The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e., to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
Algorithms for defects in nanostructures
Energy Technology Data Exchange (ETDEWEB)
Chan, T.-L.; Tiago, Murilo L. [Center for Computational Materials, Institute for Computational Engineering and Sciences, University of Texas, Austin, Texas 78712 (United States); Chelikowsky, James R. [Center for Computational Materials, Institute for Computational Engineering and Sciences, University of Texas, Austin, Texas 78712 (United States); Departments of Physics and Chemical Engineering, University of Texas, Austin, Texas 78712 (United States)], E-mail: jrc@ices.utexas.edu
2007-12-15
We illustrate recent progress in developing algorithms for solving the Kohn-Sham problem. Key ingredients of our algorithm include pseudopotentials implemented on a real space grid and the use of damped-Chebyshev polynomial filtered subspace iteration. This procedure allows one to predict electronic properties for many materials across the nano-regime, i.e., from atoms to nanocrystals of sufficient size to replicate bulk properties. We will illustrate this method for large silicon quantum dots doped with phosphorus defect.
An Efficient Pattern Matching Algorithm
Sleit, Azzam; Almobaideen, Wesam; Baarah, Aladdin H.; Abusitta, Adel H.
In this study, we present an efficient algorithm for pattern matching based on the combination of hashing and search trees. The proposed solution is classified as an offline algorithm. Although this study demonstrates the merits of the technique for text matching, it can be utilized for various forms of digital data including images, audio and video. The performance superiority of the proposed solution is validated analytically and experimentally.
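The offline hash-then-index idea can be sketched as follows. A Python dict stands in for the hash table and an ordered position list for the search-tree component; the authors' exact data layout is not reproduced here.

```python
from collections import defaultdict

def build_index(text, m):
    """Offline index of every length-m window of `text`. The dict plays
    the role of the hash table; each entry holds match positions in
    ascending order (standing in for the search-tree component)."""
    index = defaultdict(list)
    for i in range(len(text) - m + 1):
        index[text[i:i + m]].append(i)   # appended in sorted order
    return index

def find(index, pattern):
    """All start positions of `pattern` (must have length m)."""
    return index.get(pattern, [])

idx = build_index("abracadabra", 3)
```

The index is built once offline; each query is then a single lookup rather than a scan of the text, which is where the speedup over online scanning comes from.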
Unconventional Algorithms: Complementarity of Axiomatics and Construction
Directory of Open Access Journals (Sweden)
Gordana Dodig Crnkovic
2012-10-01
Full Text Available In this paper, we analyze axiomatic and constructive issues of unconventional computations from a methodological and philosophical point of view. We explain how the new models of algorithms and unconventional computations change the algorithmic universe, making it open and allowing increased flexibility and expressive power that augment creativity. At the same time, the greater power of new types of algorithms also results in the greater complexity of the algorithmic universe, transforming it into the algorithmic multiverse and demanding new tools for its study. That is why we analyze new powerful tools brought forth by local mathematics, local logics, logical varieties and the axiomatic theory of algorithms, automata and computation. We demonstrate how these new tools allow efficient navigation in the algorithmic multiverse. Further work includes study of natural computation by unconventional algorithms and constructive approaches.
Chinese handwriting recognition an algorithmic perspective
Su, Tonghua
2013-01-01
This book provides an algorithmic perspective on the recent development of Chinese handwriting recognition. Two technically sound strategies, the segmentation-free and integrated segmentation-recognition strategy, are investigated and algorithms that have worked well in practice are primarily focused on. Baseline systems are initially presented for these strategies and are subsequently expanded on and incrementally improved. The sophisticated algorithms covered include: 1) string sample expansion algorithms which synthesize string samples from isolated characters or distort realistic string samples; 2) enhanced feature representation algorithms, e.g. enhanced four-plane features and Delta features; 3) novel learning algorithms, such as Perceptron learning with dynamic margin, MPE training and distributed training; and lastly 4) ensemble algorithms, that is, combining the two strategies using both parallel structure and serial structure. All the while, the book moves from basic to advanced algorithms, helping ...
Algorithm Animation with Galant.
Stallmann, Matthias F
2017-01-01
Although surveys suggest positive student attitudes toward the use of algorithm animations, it is not clear that they improve learning outcomes. The Graph Algorithm Animation Tool, or Galant, challenges and motivates students to engage more deeply with algorithm concepts, without distracting them with programming language details or GUIs. Even though Galant is specifically designed for graph algorithms, it has also been used to animate other algorithms, most notably sorting algorithms.
Bayesian deconvolution and quantification of metabolites in complex 1D NMR spectra using BATMAN.
Hao, Jie; Liebeke, Manuel; Astle, William; De Iorio, Maria; Bundy, Jacob G; Ebbels, Timothy M D
2014-01-01
Data processing for 1D NMR spectra is a key bottleneck for metabolomic and other complex-mixture studies, particularly where quantitative data on individual metabolites are required. We present a protocol for automated metabolite deconvolution and quantification from complex NMR spectra by using the Bayesian automated metabolite analyzer for NMR (BATMAN) R package. BATMAN models resonances on the basis of a user-controllable set of templates, each of which specifies the chemical shifts, J-couplings and relative peak intensities for a single metabolite. Peaks are allowed to shift position slightly between spectra, and peak widths are allowed to vary by user-specified amounts. NMR signals not captured by the templates are modeled non-parametrically by using wavelets. The protocol covers setting up user template libraries, optimizing algorithmic input parameters, improving prior information on peak positions, quality control and evaluation of outputs. The outputs include relative concentration estimates for named metabolites together with associated Bayesian uncertainty estimates, as well as the fit of the remainder of the spectrum using wavelets. Graphical diagnostics allow the user to examine the quality of the fit for multiple spectra simultaneously. This approach offers a workflow to analyze large numbers of spectra and is expected to be useful in a wide range of metabolomics studies.
Uncertainty Quantification in Aerodynamics Simulations Project
National Aeronautics and Space Administration — The objective of the proposed work (Phases I and II) is to develop uncertainty quantification methodologies and software suitable for use in CFD simulations of...
MAMA Software Features: Visual Examples of Quantification
Energy Technology Data Exchange (ETDEWEB)
Ruggiero, Christy E. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Porter, Reid B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2014-05-20
This document shows examples of the results from quantifying objects of certain sizes and types in the software. It is intended to give users a better feel for some of the quantification calculations, and, more importantly, to help users understand the challenges with using a small set of ‘shape’ quantification calculations for objects that can vary widely in shapes and features. We will add more examples to this in the coming year.
Directory of Open Access Journals (Sweden)
Li Song
2010-04-01
Full Text Available Abstract Background Quantitative proteomics technologies have been developed to comprehensively identify and quantify proteins in two or more complex samples. Quantitative proteomics based on differential stable isotope labeling is one of the proteomics quantification technologies. Mass spectrometric data generated for peptide quantification are often noisy, and peak detection and definition require various smoothing filters to remove noise in order to achieve accurate peptide quantification. Many traditional smoothing filters, such as the moving average filter, Savitzky-Golay filter and Gaussian filter, have been used to reduce noise in MS peaks. However, limitations of these filtering approaches often result in inaccurate peptide quantification. Here we present the WaveletQuant program, based on wavelet theory, for improved or alternative MS-based proteomic quantification. Results We developed a novel discrete wavelet transform (DWT) and a 'Spatial Adaptive Algorithm' to remove noise and to identify true peaks. We programmed and compiled WaveletQuant using Visual C++ 2005 Express Edition. We then incorporated the WaveletQuant program in the Trans-Proteomic Pipeline (TPP), a commonly used open source proteomics analysis pipeline. Conclusions We showed that WaveletQuant was able to quantify more proteins and to quantify them more accurately than ASAPRatio, a program that performs quantification in the TPP pipeline, first using known mixed ratios of yeast extracts and then using a data set from ovarian cancer cell lysates. The program and its documentation can be downloaded from our website at http://systemsbiozju.org/data/WaveletQuant.
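The wavelet-denoising principle behind such tools can be shown with a one-level Haar transform: transform, threshold the detail coefficients, reconstruct. This is a generic illustration, not WaveletQuant's actual DWT or its Spatial Adaptive Algorithm.

```python
def haar_dwt(signal):
    """One-level Haar transform: pairwise (approximation, detail)
    coefficients. Assumes an even-length signal."""
    s = 2 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal), 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal), 2)]
    return approx, detail

def haar_idwt(approx, detail):
    s = 2 ** 0.5
    out = []
    for av, dv in zip(approx, detail):
        out += [(av + dv) / s, (av - dv) / s]
    return out

def denoise(signal, threshold):
    """Zero the small detail coefficients (hard threshold), then invert."""
    approx, detail = haar_dwt(signal)
    detail = [dv if abs(dv) > threshold else 0.0 for dv in detail]
    return haar_idwt(approx, detail)

smoothed = denoise([5.0, 5.0, 5.0, 9.0], threshold=10.0)
```

Unlike a moving-average filter, thresholding in the wavelet domain suppresses small fluctuations while leaving large, peak-defining coefficients untouched, which is why wavelet denoising can preserve peak shape better.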
Risk Quantification and Evaluation Modelling
Directory of Open Access Journals (Sweden)
Manmohan Singh
2014-07-01
Full Text Available In this paper the authors discuss risk quantification methods and the evaluation of risks and decision parameters used to rank critical items, for prioritization of condition monitoring based risk and reliability centered maintenance (CBRRCM). As time passes, any equipment or product degrades to lower effectiveness and its rate of failure or malfunctioning increases, thereby lowering its reliability. Thus with the passage of time, or over a number of active tests or periods of work, the reliability of the product or system may fall to a low threshold value, below which the reliability should not be allowed to dip. Hence, it is necessary to fix a normal basis for determining the appropriate points in the product life cycle where predictive preventive maintenance may be applied, so that the reliability (the probability of successful functioning) can be enhanced, preferably to its original value, by reducing the failure rate and increasing the mean time between failures. This is especially important for defence applications, where reliability is of prime concern. An attempt is made to develop a mathematical model for risk assessment and for ranking risks. Based on the likeliness coefficient β1 and the risk coefficient β2, a ranking of the sub-systems can be modelled and used for CBRRCM. Defence Science Journal, Vol. 64, No. 4, July 2014, pp. 378-384, DOI:http://dx.doi.org/10.14429/dsj.64.6366
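Under the simplest constant-failure-rate (exponential) model, the maintenance point implied by a reliability threshold has a closed form, t = −ln(R_th)/λ. The sketch below uses that assumption for illustration only; the paper's coefficients β1 and β2 and its ranking model are not reproduced.

```python
import math

def maintenance_interval(failure_rate, r_threshold):
    """Latest time for predictive maintenance under a constant failure
    rate: solve exp(-failure_rate * t) = r_threshold for t."""
    return -math.log(r_threshold) / failure_rate

mtbf = 1000.0                 # hours; MTBF = 1/lambda under this model
lam = 1.0 / mtbf
t_pm = maintenance_interval(lam, r_threshold=0.9)
```

With an MTBF of 1000 hours and a 0.9 reliability floor, maintenance is due after roughly 105 hours; raising the threshold shortens the interval, which is the trade-off the ranking coefficients arbitrate across sub-systems.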
Precise quantification of nanoparticle internalization.
Gottstein, Claudia; Wu, Guohui; Wong, Benjamin J; Zasadzinski, Joseph Anthony
2013-06-25
Nanoparticles have opened new exciting avenues for both diagnostic and therapeutic applications in human disease, and targeted nanoparticles are increasingly used as specific drug delivery vehicles. The precise quantification of nanoparticle internalization is of importance to measure the impact of physical and chemical properties on the uptake of nanoparticles into target cells or into cells responsible for rapid clearance. Internalization of nanoparticles has been measured by various techniques, but comparability of data between different laboratories is impeded by lack of a generally accepted standardized assay. Furthermore, the distinction between associated and internalized particles has been a challenge for many years, although this distinction is critical for most research questions. Previously used methods to verify intracellular location are typically not quantitative and do not lend themselves to high-throughput analysis. Here, we developed a mathematical model which integrates the data from high-throughput flow cytometry measurements with data from quantitative confocal microscopy. The generic method described here will be a useful tool in biomedical nanotechnology studies. The method was then applied to measure the impact of surface coatings of vesosomes on their internalization by cells of the reticuloendothelial system (RES). RES cells are responsible for rapid clearance of nanoparticles, and the resulting fast blood clearance is one of the major challenges in biomedical applications of nanoparticles. Coating of vesosomes with long chain polyethylene glycol showed a trend for lower internalization by RES cells.
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Energy Technology Data Exchange (ETDEWEB)
Haihua Zhao; Vincent A. Mousseau
2008-09-01
This report presents the forward sensitivity analysis method as a means for quantification of uncertainty in system analysis. The traditional approach to uncertainty quantification treats the simulation tool as a “black box”: an unknown signal generator into which a distribution of inputs, drawn according to assumed probability density functions, is sent, and the distribution of the outputs is measured and correlated back to the original input distribution. This approach requires a large number of simulation runs and therefore has a high computational cost. Contrary to the “black box” method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In this approach, equations for the propagation of uncertainty are constructed and the sensitivities are solved for as variables in the same simulation. This “glass box” method can generate sensitivity information similar to that of the “black box” approach with only a few runs to cover a large uncertainty region. Because only a small number of runs is required, those runs can be done with high accuracy in space and time, ensuring that the uncertainty of the physical model is being measured and not simply the numerical error caused by coarse discretization. In the forward sensitivity method, the model is differentiated with respect to each parameter to yield an additional system of the same size as the original one, the result of which is the solution sensitivity. The sensitivity of any output variable can then be directly obtained from these sensitivities by applying the chain rule of differentiation. We extend the forward sensitivity method to include time and spatial steps as special parameters, so that the numerical errors can be quantified against other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty analysis. By knowing the relative sensitivity of time and space steps with other
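A minimal "glass box" example: for the model dy/dt = −k·y, differentiating the equation with respect to k gives a companion equation ds/dt = −y − k·s for the sensitivity s = dy/dk, which is integrated alongside the state in one run. The model and step sizes below are illustrative, not the report's systems.

```python
def forward_sensitivity(k, y0, dt, steps):
    """Integrate dy/dt = -k*y together with its parameter sensitivity
    s = dy/dk, which satisfies ds/dt = -y - k*s (obtained by
    differentiating the model equation with respect to k). One forward
    run yields both the state and its sensitivity (explicit Euler)."""
    y, s = y0, 0.0
    for _ in range(steps):
        dy = -k * y
        ds = -y - k * s
        y += dt * dy
        s += dt * ds
    return y, s

y_end, s_end = forward_sensitivity(k=0.5, y0=2.0, dt=0.001, steps=1000)
# Exact values at t = 1: y = 2*exp(-0.5) and dy/dk = -t*y.
```

One augmented run replaces the many perturbed runs a black-box finite-difference study would need for the same sensitivity information.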
Energy Technology Data Exchange (ETDEWEB)
Symbalisty, E.M.D.; Zinn, J.; Whitaker, R.W.
1995-09-01
This paper describes the history, physics, and algorithms of the computer code RADFLO and its extension HYCHEM. RADFLO is a one-dimensional, radiation-transport hydrodynamics code that is used to compute early-time fireball behavior for low-altitude nuclear bursts. The primary use of the code is the prediction of optical signals produced by nuclear explosions. It has also been used to predict thermal and hydrodynamic effects that are used for vulnerability and lethality applications. Another closely related code, HYCHEM, is an extension of RADFLO which includes the effects of nonequilibrium chemistry. Some examples of numerical results will be shown, along with scaling expressions derived from those results. We describe new computations of the structures and luminosities of steady-state shock waves and radiative thermal waves, which have been extended to cover a range of ambient air densities for high-altitude applications. We also describe recent modifications of the codes to use a one-dimensional analog of the CAVEAT fluid-dynamics algorithm in place of the former standard Richtmyer-von Neumann algorithm.
Quantification of rice bran oil in oil blends
Energy Technology Data Exchange (ETDEWEB)
Mishra, R.; Sharma, H. K.; Sengar, G.
2012-11-01
Blends consisting of physically refined rice bran oil (PRBO): sunflower oil (SnF) and PRBO: safflower oil (SAF) in different proportions were analyzed for various physicochemical parameters. The quantification of pure rice bran oil in the blended oils was carried out using different methods, including gas chromatography, HPLC, ultrasonic velocity and methods based on physicochemical parameters. The physicochemical parameters such as ultrasonic velocity, relative association and acoustic impedance at 2 MHz, iodine value, palmitic acid content and oryzanol content reflected significant changes with increased proportions of PRBO in the blended oils. These parameters were selected as dependent parameters and the % PRBO proportion was selected as the independent parameter. The study revealed that regression equations based on the oryzanol content, palmitic acid composition, ultrasonic velocity, relative association, acoustic impedance, and iodine value can be used for the quantification of rice bran oil in blended oils. Rice bran oil can easily be quantified in blended oils based on the oryzanol content by HPLC, even at a 1% level. The palmitic acid content in blended oils can also be used as an indicator to quantify rice bran oil at or above the 20% level, whereas the methods based on ultrasonic velocity, acoustic impedance and relative association showed initial promise in the quantification of rice bran oil. (Author) 23 refs.
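The regression-equation approach above amounts to fitting a calibration line of marker content against known blend proportion and inverting it for unknown samples. The calibration numbers below are made up for illustration, not the paper's measured values:

```python
def fit_line(x, y):
    """Ordinary least squares fit y = a + b*x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    b = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
        sum((xi - mx) ** 2 for xi in x)
    return my - b * mx, b

# Hypothetical calibration points: oryzanol content (%) measured by HPLC
# in blends of known PRBO proportion (%); the numbers are invented.
prbo     = [0.0, 20.0, 40.0, 60.0, 80.0, 100.0]
oryzanol = [0.02, 0.31, 0.60, 0.92, 1.21, 1.50]

a, b = fit_line(prbo, oryzanol)

def estimate_prbo(oryz):
    """Invert the regression to quantify PRBO in an unknown blend."""
    return (oryz - a) / b
```

The same fit-and-invert scheme applies to any of the markers named in the abstract (palmitic acid, ultrasonic velocity, iodine value), each with its own calibration line.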
The Rational Hybrid Monte Carlo Algorithm
Clark, M A
2006-01-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithm developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.
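The heart of RHMC is evaluating a matrix function through a rational approximation in partial-fraction form, so each term becomes a shifted linear solve. Production codes use optimal Zolotarev or Remez coefficients; the quadrature-based construction below is only a scalar sketch of the same idea, built from the integral representation x^(-1/2) = (2/pi) * integral dt / (t^2 + x):

```python
import math

def inv_sqrt_rational(N=20):
    """Rational approximation r(x) ~ x**(-1/2) in partial-fraction form
    r(x) = sum_k w_k / (x + s_k), from midpoint quadrature of
    (2/pi) * int_0^inf dt/(t^2 + x) with t = tan(theta).  RHMC uses
    optimal (Zolotarev/Remez) coefficients instead; this is a sketch."""
    shifts, weights = [], []
    for k in range(N):
        theta = (k + 0.5) * math.pi / (2 * N)
        shifts.append(math.tan(theta) ** 2)
        weights.append(1.0 / (N * math.cos(theta) ** 2))
    return shifts, weights

def evaluate(x, shifts, weights):
    # On the lattice, x would be the fermion matrix and each term a
    # shifted solve handled by a multi-shift Krylov solver.
    return sum(w / (x + s) for s, w in zip(shifts, weights))
```

The partial-fraction form is what makes the method cheap in practice: all shifted solves share one Krylov space.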
The Rational Hybrid Monte Carlo algorithm
Clark, Michael
2006-12-01
The past few years have seen considerable progress in algorithmic development for the generation of gauge fields including the effects of dynamical fermions. The Rational Hybrid Monte Carlo (RHMC) algorithm, where Hybrid Monte Carlo is performed using a rational approximation in place of the usual inverse quark matrix kernel, is one of these developments. This algorithm has been found to be extremely beneficial in many areas of lattice QCD (chiral fermions, finite temperature, Wilson fermions, etc.). We review the algorithm and some of these benefits, and we compare against other recent algorithm developments. We conclude with an update of the Berlin wall plot comparing the costs of all popular fermion formulations.
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
This paper argues that agent-based simulation can be used as a way of testing Kansei Engineering methods, which deal with the human reaction from the sensory to the mental state, that is, sensitivity, sense, sensibility, feeling, esthetics, emotion, affection and intuition. A new fuzzy linear quantification method is tested in an artificial world by agent-based modeling and simulation, and the performance of the fuzzy linear method is compared with that of a genetic algorithm. The simulations can expand people's imagination and strengthen the intuition that the new fuzzy linear quantification method is effective.
DEFF Research Database (Denmark)
Vu, Minh Thi Thuy; Jepsen, Per Meyer; Hansen, Benni Winding
2014-01-01
ignored. In this study, we propose a novel method for highly precise classification of development stages and biomass of A. tonsa, in intensive live feed cultures, using the automated ZooImage system, a freeware image analysis package. We successfully created a training set of 13 categories, including 7 copepod...... and 6 non-copepod (debris) groups. ZooImage used this training set for automatic discrimination through a random forest algorithm with a general accuracy of 92.8%. ZooImage showed no significant difference in classifying solitary eggs, or mixed nauplii stages and copepodites, compared to personal...... microscope observation. Furthermore, ZooImage was also adapted for automatic estimation of A. tonsa biomass. This is the first study that has successfully applied ZooImage software to enable fast and reliable quantification of the development stages and the biomass of A. tonsa. As a result, relevant...
Quantification of myocardial perfusion defects using three different software packages
Energy Technology Data Exchange (ETDEWEB)
Svensson, Annmarie; Aakesson, Liz [Department of Clinical Physiology, Malmoe University Hospital, 205 02, Malmoe (Sweden); Edenbrandt, Lars [Department of Clinical Physiology, Malmoe University Hospital, 205 02, Malmoe (Sweden); Department of Clinical Physiology, Sahlgrenska University Hospital, Gothenburg (Sweden)
2004-02-01
Software packages are widely used for quantification of myocardial perfusion defects. The quantification is used to assist the physician in his/her interpretation of the study. The purpose of this study was to compare the quantification of reversible perfusion defects by three different commercially available software packages. We included 50 consecutive patients who underwent myocardial perfusion single-photon emission tomography (SPET) with a 2-day technetium-99m tetrofosmin protocol. Two experienced technologists processed the studies using the following three software packages: Cedars Quantitative Perfusion SPECT, Emory Cardiac Toolbox and 4D-MSPECT. The same sets of short axis slices were used as input to all three software packages. Myocardial uptake was scored in 20 segments for both the rest and the stress studies. The summed difference score (SDS) was calculated for each patient and the SDS values were classified into: normal (<4), mildly abnormal (4-8), moderately abnormal (9-13), and severely abnormal (>13). All three software packages were in agreement that 21 patients had a normal SDS, four patients had a mildly abnormal SDS and one patient had a severely abnormal SDS. In the remaining 24 patients (48%) there was disagreement between the software packages regarding SDS classification. A difference in classification of more than one step between the highest and lowest scores, for example from normal to moderately abnormal or from mildly to severely abnormal, was found in six of these 24 patients. Widely used software packages commonly differ in their quantification of myocardial perfusion defects. The interpreting physician should be aware of these differences when using scoring systems. (orig.)
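The SDS cutoffs used for classification in the study translate directly into a small helper; a minimal sketch of the scoring bands named in the abstract:

```python
def classify_sds(sds):
    """Map a summed difference score (SDS) to the study's four classes:
    normal (<4), mildly (4-8), moderately (9-13), severely (>13) abnormal."""
    if sds < 4:
        return "normal"
    if sds <= 8:
        return "mildly abnormal"
    if sds <= 13:
        return "moderately abnormal"
    return "severely abnormal"
```

The disagreement the study reports arises exactly here: when two packages produce SDS values that straddle one of these cutoffs, the same patient lands in different classes.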
The Performance Comparisons between the Unconstrained and Constrained Equalization Algorithms
Institute of Scientific and Technical Information of China (English)
HE Zhong-qiu; LI Dao-ben
2003-01-01
This paper proposes two unconstrained algorithms, the Steepest Descent (SD) algorithm and the Conjugate Gradient (CG) algorithm, based on a superexcellent cost function [1～3]. At the same time, two constrained algorithms, the Constrained Steepest Descent (CSD) algorithm and the Constrained Conjugate Gradient (CCG) algorithm, are derived subject to a new constraint condition. All are implemented in the unitary transform domain. The computational complexities of the constrained algorithms are compared to those of the unconstrained algorithms, and simulation results comparing their performance are presented.
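The two unconstrained algorithms can be illustrated on a tiny symmetric positive-definite quadratic cost (the paper's actual cost function and constraint are not reproduced here; this is a generic sketch of SD with exact line search versus CG):

```python
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def axpy(a, x, y):  # a*x + y
    return [a * xi + yi for xi, yi in zip(x, y)]

def steepest_descent(A, b, iters=100):
    """Minimize 0.5 x'Ax - b'x by moving along the residual each step."""
    x = [0.0] * len(b)
    for _ in range(iters):
        r = [bi - ai for bi, ai in zip(b, matvec(A, x))]  # r = -gradient
        rr = dot(r, r)
        if rr == 0.0:
            break
        alpha = rr / dot(r, matvec(A, r))  # exact line search
        x = axpy(alpha, r, x)
    return x

def conjugate_gradient(A, b, iters=None):
    """Same problem; A-conjugate directions converge in at most n steps."""
    x = [0.0] * len(b)
    r = list(b)
    p = list(r)
    rr = dot(r, r)
    for _ in range(iters or len(b)):
        Ap = matvec(A, p)
        alpha = rr / dot(p, Ap)
        x = axpy(alpha, p, x)
        r = axpy(-alpha, Ap, r)
        rr_new = dot(r, r)
        if rr_new == 0.0:
            break
        p = axpy(rr_new / rr, p, r)  # p = r + beta*p
        rr = rr_new
    return x
```

On an n-dimensional quadratic, CG is exact after n iterations, which is why it is usually preferred over SD when the cost surface is badly conditioned.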
NWRA AVOSS Wake Vortex Prediction Algorithm. 3.1.1
Robins, R. E.; Delisi, D. P.; Hinton, David (Technical Monitor)
2002-01-01
This report provides a detailed description of the wake vortex prediction algorithm used in the Demonstration Version of NASA's Aircraft Vortex Spacing System (AVOSS). The report includes all equations used in the algorithm, an explanation of how to run the algorithm, and a discussion of how the source code for the algorithm is organized. Several appendices contain important supplementary information, including suggestions for enhancing the algorithm and results from test cases.
Lamacie, Mariana M; Thavendiranathan, Paaladinesh; Hanneman, Kate; Greiser, Andreas; Jolly, Marie-Pierre; Ward, Richard; Wintersperger, Bernd J
2017-04-01
To evaluate deformable registration algorithms (DRA)-based quantification of cine steady-state free-precession (SSFP) for myocardial strain assessment in comparison with feature-tracking (FT) and speckle-tracking echocardiography (STE). Data sets of 28 patients/10 volunteers, undergoing same-day 1.5T cardiac MRI and echocardiography were included. LV global longitudinal (GLS), circumferential (GCS) and radial (GRS) peak systolic strain were assessed on cine SSFP data using commercially available FT algorithms and prototype DRA-based algorithms. STE was applied as standard of reference for accuracy, precision and intra-/interobserver reproducibility testing. DRA showed narrower limits of agreement compared to STE for GLS (-4.0 [-0.9,-7.9]) and GCS (-5.1 [1.1,-11.2]) than FT (3.2 [11.2,-4.9]; 3.8 [13.9,-6.3], respectively). While both DRA and FT demonstrated significant differences to STE for GLS and GCS (all ptracking (FT) methods. • Cine MR DRA and FT analysis demonstrate differences to speckle-tracking echocardiography • DRA demonstrated better correlation with STE than FT for MR-derived global strain data.
Planar graphs theory and algorithms
Nishizeki, T
1988-01-01
Collected in this volume are most of the important theorems and algorithms currently known for planar graphs, together with constructive proofs for the theorems. Many of the algorithms are written in Pidgin PASCAL, and are the best-known ones; the complexities are linear or O(n log n). The first two chapters provide the foundations of graph theoretic notions and algorithmic techniques. The remaining chapters discuss the topics of planarity testing, embedding, drawing, vertex- or edge-coloring, maximum independence set, subgraph listing, planar separator theorem, Hamiltonian cycles, and single- or multicommodity flows. Suitable for a course on algorithms, graph theory, or planar graphs, the volume will also be useful for computer scientists and graph theorists at the research level. An extensive reference section is included.
Mathematical algorithms for approximate reasoning
Murphy, John H.; Chay, Seung C.; Downs, Mary M.
1988-01-01
Most state-of-the-art expert system environments contain a single and often ad hoc strategy for approximate reasoning. Some environments provide facilities to program the approximate reasoning algorithms. However, the next generation of expert systems should have an environment which contains a choice of several mathematical algorithms for approximate reasoning. To meet the need for validatable and verifiable coding, the expert system environment must no longer depend upon ad hoc reasoning techniques but instead must include mathematically rigorous techniques for approximate reasoning. Popular approximate reasoning techniques are reviewed, including: certainty factors, belief measures, Bayesian probabilities, fuzzy logic, and Shafer-Dempster techniques for reasoning. The focus is on a group of mathematically rigorous algorithms for approximate reasoning that could form the basis of a next-generation expert system environment. These algorithms are based upon the axioms of set theory and probability theory. To separate these algorithms for approximate reasoning, various conditions of mutual exclusivity and independence are imposed upon the assertions. Approximate reasoning algorithms presented include: reasoning with statistically independent assertions, reasoning with mutually exclusive assertions, reasoning with assertions that exhibit minimum overlay within the state space, reasoning with assertions that exhibit maximum overlay within the state space (i.e. fuzzy logic), pessimistic reasoning (i.e. worst case analysis), optimistic reasoning (i.e. best case analysis), and reasoning with assertions with absolutely no knowledge of the possible dependency among the assertions. A robust environment for expert system construction should include the two modes of inference: modus ponens and modus tollens. Modus ponens inference is based upon reasoning towards the conclusion in a statement of logical implication, whereas modus tollens inference is based upon reasoning away
Skiena, Steven S
2008-01-01
Explaining how to design algorithms and analyze their efficacy and efficiency, this book covers combinatorial algorithm technology, stressing design over analysis. It presents instruction on methods for designing and analyzing computer algorithms, and contains a catalog of algorithmic resources, implementations, and a bibliography.
Hamiltonian Algorithm Sound Synthesis
大矢, 健一
2013-01-01
The Hamiltonian Algorithm (HA) is an algorithm for searching for solutions to optimization problems. This paper introduces a sound synthesis technique using the Hamiltonian Algorithm and shows a simple example. "Hamiltonian Algorithm Sound Synthesis" uses the phase transition effect in HA; because of this transition effect, totally new waveforms are produced.
Computer-assisted quantification of CD3+ T cells in follicular lymphoma.
Abas, Fazly S; Shana'ah, Arwa; Christian, Beth; Hasserjian, Robert; Louissaint, Abner; Pennell, Michael; Sahiner, Berkman; Chen, Weijie; Niazi, Muhammad Khalid Khan; Lozanski, Gerard; Gurcan, Metin
2017-01-22
The advance of high-resolution digital scans of pathology slides has allowed the development of computer-based image analysis algorithms that may help pathologists in IHC stain quantification. While very promising, these methods require further refinement before they are implemented in a routine clinical setting. Particularly critical is to evaluate algorithm performance in a setting similar to current clinical practice. In this article, we present a pilot study that evaluates the use of a computerized cell quantification method in the clinical estimation of CD3 positive (CD3+) T cells in follicular lymphoma (FL). Our goal is to demonstrate the degree to which computerized quantification is comparable to the practice of estimation by a panel of expert pathologists. The computerized quantification method uses entropy based histogram thresholding to separate brown (CD3+) and blue (CD3-) regions after a color space transformation. A panel of four board-certified hematopathologists evaluated a database of 20 FL images using two different reading methods: visual estimation and manual marking of each CD3+ cell in the images. These image data and the readings provided a reference standard and the range of variability among readers. Sensitivity and specificity measures of the computer's segmentation of CD3+ and CD3- T cells are recorded. For all four pathologists, mean sensitivity and specificity measures are 90.97 and 88.38%, respectively. The computerized quantification method agrees more with the manual cell marking than with the visual estimations. Statistical comparison between the computerized quantification method and the pathologist readings demonstrated good agreement, with correlation coefficient values of 0.81 and 0.96 in terms of Lin's concordance correlation and Spearman's correlation coefficient, respectively. These values are higher than most of those calculated among the pathologists. In the future, the computerized quantification method may be used to investigate
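The entropy based histogram thresholding step can be sketched with Kapur's criterion; the abstract does not name the exact variant, so treat this as one plausible reading. After the color-space transformation, the stain channel's histogram is split at the threshold maximizing the summed entropies of the two classes:

```python
import math

def kapur_threshold(hist):
    """Entropy-based histogram threshold (Kapur's criterion): choose the
    split maximizing the sum of the entropies of the two class
    distributions (here, e.g., brown CD3+ vs blue CD3- pixels)."""
    total = float(sum(hist))
    p = [h / total for h in hist]
    best_t, best_h = 0, -1.0
    for t in range(1, len(hist)):
        w0 = sum(p[:t])
        w1 = 1.0 - w0
        if w0 <= 0.0 or w1 <= 0.0:
            continue
        h0 = -sum(pi / w0 * math.log(pi / w0) for pi in p[:t] if pi > 0)
        h1 = -sum(pi / w1 * math.log(pi / w1) for pi in p[t:] if pi > 0)
        if h0 + h1 > best_h:
            best_h, best_t = h0 + h1, t
    return best_t
```

On a bimodal stain histogram the maximum-entropy split falls in the valley between the two stain populations, which is what makes it usable for brown/blue separation.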
A Short Survey of Document Structure Similarity Algorithms
Energy Technology Data Exchange (ETDEWEB)
Buttler, D
2004-02-27
This paper provides a brief survey of document structural similarity algorithms, including the optimal Tree Edit Distance algorithm and various approximation algorithms. The approximation algorithms include the simple weighted tag similarity algorithm, Fourier transforms of the structure, and a new application of the shingle technique to structural similarity. We show three surprising results. First, the Fourier transform technique proves to be the least accurate of the approximation algorithms, while also being the slowest. Second, optimal Tree Edit Distance algorithms may not be the best technique for clustering pages from different sites. Third, the simplest approximation to structure may be the most effective and efficient mechanism for many applications.
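The shingle technique applied to structure can be sketched as the Jaccard similarity of k-shingles over each document's tag sequence; the details below (k = 2, tag names as tokens) are illustrative assumptions rather than the survey's exact configuration:

```python
def tag_shingles(tags, k=2):
    """All contiguous length-k runs of the document's tag sequence."""
    return {tuple(tags[i:i + k]) for i in range(len(tags) - k + 1)}

def structural_similarity(tags_a, tags_b, k=2):
    """Jaccard similarity of k-shingle sets: the shingle approximation
    to document structural similarity."""
    a, b = tag_shingles(tags_a, k), tag_shingles(tags_b, k)
    return len(a & b) / len(a | b) if a | b else 1.0
```

Because shingle sets can be hashed and sampled, this approximation scales to clustering large page collections where Tree Edit Distance is too expensive.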
DEFF Research Database (Denmark)
Bucher, Taina
2017-01-01
of algorithms affect people's use of these platforms, if at all? To help answer these questions, this article examines people's personal stories about the Facebook algorithm through tweets and interviews with 25 ordinary users. To understand the spaces where people and algorithms meet, this article develops...... the notion of the algorithmic imaginary. It is argued that the algorithmic imaginary – ways of thinking about what algorithms are, what they should be and how they function – is not just productive of different moods and sensations but plays a generative role in moulding the Facebook algorithm itself...
Energy Technology Data Exchange (ETDEWEB)
Geist, G.A. [Oak Ridge National Lab., TN (United States). Computer Science and Mathematics Div.; Howell, G.W. [Florida Inst. of Tech., Melbourne, FL (United States). Dept. of Applied Mathematics; Watkins, D.S. [Washington State Univ., Pullman, WA (United States). Dept. of Pure and Applied Mathematics
1997-11-01
The BR algorithm, a new method for calculating the eigenvalues of an upper Hessenberg matrix, is introduced. It is a bulge-chasing algorithm like the QR algorithm, but, unlike the QR algorithm, it is well adapted to computing the eigenvalues of the narrowband, nearly tridiagonal matrices generated by the look-ahead Lanczos process. This paper describes the BR algorithm and gives numerical evidence that it works well in conjunction with the Lanczos process. On the biggest problems run so far, the BR algorithm beats the QR algorithm by a factor of 30--60 in computing time and a factor of over 100 in matrix storage space.
A novel method for quantification of beam's-eye-view tumor tracking performance.
Hu, Yue-Houng; Myronakis, Marios; Rottmann, Joerg; Wang, Adam; Morf, Daniel; Shedlock, Daniel; Baturin, Paul; Star-Lack, Josh; Berbeco, Ross
2017-09-09
In-treatment imaging using an electronic portal imaging device (EPID) can be used to confirm patient and tumor positioning. Real-time tumor tracking performance using current digital megavolt (MV) imagers is hindered by poor image quality. Novel EPID designs may help to improve quantum noise response, while also preserving the high spatial resolution of the current clinical detector. Recently investigated EPID design improvements include but are not limited to multi-layer imager (MLI) architecture, thick crystalline and amorphous scintillators, and phosphor pixelation and focusing. The goal of the present study was to provide a method of quantifying improvement in tracking performance as well as to reveal the physical underpinnings of detector design that impact tracking quality. The study employs a generalizable ideal observer methodology for the quantification of tumor tracking performance. The analysis is applied to study both the effect of increasing scintillator thickness on a standard, single-layer imager (SLI) design and the effect of MLI architecture on tracking performance. The present study uses the ideal observer signal-to-noise ratio (d') as a surrogate for tracking performance. We employ functions which model clinically relevant tasks and generalized frequency-domain imaging metrics to connect image quality with tumor tracking. A detection task for relevant Cartesian shapes (i.e. spheres and cylinders) was used to quantify trackability of cases employing fiducial markers. Automated lung tumor tracking algorithms often leverage the differences in benign and malignant lung tissue textures. These types of algorithms (e.g. soft tissue localization, STiL) were simulated by designing a discrimination task, which quantifies the differentiation of tissue textures, measured experimentally and fit as a power-law in trend (with exponent β) using a cohort of MV images of patient lungs. The modeled MTF and NPS were used to investigate the effect of
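The d' surrogate reduces, in a 1-D discrete sketch, to integrating the task function weighted by the squared MTF over the NPS. The symbols and sample values below are illustrative, not the study's measured detector data:

```python
import math

def dprime(task, mtf, nps, df):
    """Ideal-observer detectability from frequency-domain quantities:
    d'^2 = sum |W(f)|^2 * MTF(f)^2 / NPS(f) * df  (1-D discrete sketch,
    W = task function, df = frequency bin width)."""
    d2 = sum((w * m) ** 2 / n * df for w, m, n in zip(task, mtf, nps))
    return math.sqrt(d2)
```

For a fiducial detection task W would be the Fourier transform of the sphere or cylinder; for a texture discrimination task (the STiL-like case) W would be built from the difference of the two power-law tissue spectra with exponent β.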
Quantification of patient-derived 3D cancer spheroids in high-content screening images
Kang, Mi-Sun; Rhee, Seon-Min; Seo, Ji-Hyun; Kim, Myoung-Hee
2017-02-01
We present a cell image quantification method for image-based drug response prediction from patient-derived glioblastoma cells. The drug response of each person differs at the cellular level; therefore, quantification of a patient-derived cell phenotype is important in drug response prediction. We performed fluorescence microscopy to understand the features of patient-derived 3D cancer spheroids. A 3D cell culture simulates the in-vivo environment more closely than 2D adherence culture, and thus allows more accurate cell analysis. Furthermore, it allows assessment of cellular aggregates. Cohesion is an important feature of cancer cells. In this paper, we demonstrate image-based quantification of cellular area, fluorescence intensity, and cohesion. To this end, we first performed image stitching to create a single image of each well of the plate under the same imaging conditions. This image shows colonies of various sizes and shapes. To automatically detect the colonies, we used an intensity-based classification algorithm. The morphological features of each cancer cell colony were measured. Next, we calculated the location correlation of each colony, which reflects the cell density within the same well environment. Finally, we compared the features for drug-treated and untreated cells. This technique could potentially be applied for drug screening and quantification of the effects of the drugs.
Protocol for Quantification of Defects in Natural Fibres for Composites
Directory of Open Access Journals (Sweden)
Ulrich Andreas Mortensen
2014-01-01
Full Text Available Natural bast-type plant fibres are attracting increasing interest for use in structural composite applications where high-quality fibres with good mechanical properties are required. A protocol for the quantification of defects in natural fibres is presented. The protocol is based on the experimental method of optical microscopy and the image analysis algorithms of the seeded region growing method and Otsu’s method. The use of the protocol is demonstrated by examining two types of differently processed flax fibres, giving mean defect contents of 6.9 and 3.9%, a difference which is tested to be statistically significant. The protocol is evaluated with respect to the selection of image analysis algorithms, and Otsu’s method is found to be more appropriate than the alternative coefficient of variation method. The traditional way of defining defect size by area is compared to the definition of defect size by width, and it is shown that both definitions can be used to give unbiased findings for the comparison between fibre types. Finally, considerations are given with respect to true measures of defect content, the number of determinations, and the number of significant figures used for the descriptive statistics.
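Otsu's method, preferred in the protocol over the coefficient-of-variation method, operates on the image histogram alone. The sketch below (the seeded region growing step is omitted, and the toy histogram is invented) thresholds a grayscale histogram and reports the defect area fraction, assuming defects image darker than sound fibre material:

```python
def otsu_threshold(hist):
    """Otsu's method: pick the threshold maximizing between-class variance."""
    total = sum(hist)
    grand_sum = sum(i * h for i, h in enumerate(hist))
    w0 = 0
    sum0 = 0
    best_t, best_var = 0, -1.0
    for t, h in enumerate(hist):
        w0 += h
        sum0 += t * h
        w1 = total - w0
        if w0 == 0:
            continue
        if w1 == 0:
            break
        m0 = sum0 / w0
        m1 = (grand_sum - sum0) / w1
        var = w0 * w1 * (m0 - m1) ** 2  # between-class variance
        if var > best_var:
            best_var, best_t = var, t
    return best_t

def defect_fraction(hist, t):
    """Defect content as the fraction of pixels at or below the threshold
    (defects assumed darker than sound fibre material)."""
    return sum(hist[:t + 1]) / sum(hist)
```

The protocol's per-fibre defect content (e.g. 6.9% vs 3.9%) is this fraction computed over segmented fibre regions rather than whole images.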
Methane Leak Detection and Emissions Quantification with UAVs
Barchyn, T.; Fox, T. A.; Hugenholtz, C.
2016-12-01
Robust leak detection and emissions quantification algorithms are required to accurately monitor greenhouse gas emissions. Unmanned aerial vehicles (UAVs, `drones') could both reduce the cost and increase the accuracy of monitoring programs. However, aspects of the platform create unique challenges. UAVs typically collect large volumes of data that are close to source (due to limited range) and often lower quality (due to weight restrictions on sensors). Here we discuss algorithm development for (i) finding sources of unknown position (`leak detection') and (ii) quantifying emissions from a source of known position. We use data from a simulated leak and field study in Alberta, Canada. First, we detail a method for localizing a leak of unknown spatial location using iterative fits against a forward Gaussian plume model. We explore sources of uncertainty, both inherent to the method and operational. Results suggest this method is primarily constrained by accurate wind direction data, distance downwind from source, and the non-Gaussian shape of close range plumes. Second, we examine sources of uncertainty in quantifying emissions with the mass balance method. Results suggest precision is constrained by flux plane interpolation errors and time offsets between spatially adjacent measurements. Drones can provide data closer to the ground than piloted aircraft, but large portions of the plume are still unquantified. Together, we find that despite larger volumes of data, working with close range plumes as measured with UAVs is inherently difficult. We describe future efforts to mitigate these challenges and work towards more robust benchmarking for application in industrial and regulatory settings.
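The iterative fit against a forward Gaussian plume model can be sketched as follows. The dispersion coefficients, wind speed, and grid are illustrative assumptions (the actual study fits noisy field data and must also contend with uncertain wind direction, which this toy ignores by fixing the wind along +x):

```python
import math

def conc(q, dx, dy, u=3.0):
    """Ground-level Gaussian plume concentration at a receptor (dx, dy)
    relative to the source, wind along +x.  The linear sigma_y/sigma_z
    coefficients and wind speed are invented, not field values."""
    if dx <= 0.0:
        return 0.0
    sy, sz = 0.08 * dx, 0.06 * dx
    return q / (math.pi * u * sy * sz) * math.exp(-dy ** 2 / (2.0 * sy ** 2))

def locate(measurements, q=1.0):
    """Leak detection step: grid search for the source position minimizing
    squared misfit between measurements and the forward plume model."""
    def sse(x0, y0):
        return sum((conc(q, x - x0, y - y0) - c) ** 2
                   for (x, y), c in measurements)
    return min(((sse(x0, y0), (x0, y0))
                for x0 in range(-10, 11) for y0 in range(-10, 11)))[1]

# Synthetic check: exact model data from a "leak" at (2, -3).
true_src = (2, -3)
receptors = [(6, -3), (8, -2), (10, -4), (12, -3), (7, -5)]
data = [((x, y), conc(1.0, x - true_src[0], y - true_src[1]))
        for x, y in receptors]
```

With noisy close-range UAV data the misfit surface flattens, which is the uncertainty the abstract attributes to wind direction error and non-Gaussian near-field plumes.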
Uncertainty quantification applied to the mode coupling phenomenon
Treimer, Martin; Allert, Baldur; Dylla, Katrin; Müller, Gerhard
2017-02-01
In this study a method for the uncertainty quantification of friction-induced vibrations based on the mode coupling phenomenon is shown. The main focus is the assessment of the phenomenon under consideration of uncertain input parameters for the robustness evaluation. Stability assessments of the system under parameter scatter are given, and it is pointed out how this is implemented within the scope of the Finite Element method. On the basis of the Euler-Bernoulli beam as a proof-of-concept model, a procedure for the assessment of the system's robustness is shown. An objective function is proposed and used to evaluate a design of experiment. By means of a regression analysis an indicator for the robustness of the system is given. Numerical results are presented on the basis of the Euler-Bernoulli beam and a Finite Element brake model. A universal procedure is shown whose approach can be used for robustness assessments in different fields of interest. The algorithm with optimal efficiency is validated by comparison with an algorithm with optimal quality of prediction. The procedure is applied to the robustness assessment of brake squeal.
Explicit pattern matching assignment algorithm
Levedahl, Mark
2002-08-01
Sharing data between two tracking systems frequently involves use of an object map: the transmitting system sends a frame of data with multiple observations, and the receiving system uses an assignment algorithm to correlate the information with its local observation data base. The usual prescription for this problem is an optimal assignment algorithm (such as JVC or auction) using a cost matrix based upon chi-squared distances between the local and remote observation data. The optimal assignment algorithm does not actually perform pattern matching, so this approach is not robust to large registration errors between the two systems when there exist differences in the number of observations held by both systems. Performance of a new assignment algorithm that uses a cost function including terms for both registration errors and track to track random errors is presented: the cost function explicitly includes a bias between the two observation sets and thus provides a maximum likelihood solution to the assignment problem. In practice, this assignment approach provides near perfect assignment accuracy in cases where the bias errors exceed the dimension of the transmitted object map and there exist mismatches in the numbers of observations made by the two systems. This performance extends to many cases where the optimal assignment algorithm methodology produces errors nearly 100% of the time. The paper includes the theoretical foundation of the assignment problem solved and comparison of achieved accuracy with existing optimal assignment approaches.
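The idea of a cost function that includes an explicit bias between the two observation sets can be sketched by brute force: jointly search a translation-bias grid and all assignments, minimizing the total squared residual. This is not the paper's algorithm (which embeds the bias in a maximum-likelihood cost), only a small-scale illustration of why modeling the bias makes the match registration-tolerant:

```python
from itertools import permutations

def assign_with_bias(local, remote, biases):
    """Jointly choose a translation bias and an assignment minimizing the
    total squared residual.  Brute force over a candidate bias grid and
    all permutations -- feasible only for small object maps."""
    best = None
    for bx, by in biases:
        shifted = [(x + bx, y + by) for x, y in remote]
        for perm in permutations(range(len(local))):
            cost = sum((local[i][0] - shifted[j][0]) ** 2 +
                       (local[i][1] - shifted[j][1]) ** 2
                       for i, j in enumerate(perm))
            if best is None or cost < best[0]:
                best = (cost, (bx, by), perm)
    return best
```

A plain optimal-assignment solver (JVC, auction) applied to the unshifted chi-squared costs would mis-correlate these tracks whenever the registration bias exceeds the inter-track spacing, which is the failure mode the paper reports.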
Separation and quantification of microalgal carbohydrates.
Templeton, David W; Quinn, Matthew; Van Wychen, Stefanie; Hyman, Deborah; Laurens, Lieve M L
2012-12-28
Structural carbohydrates can constitute a large fraction of the dry weight of algal biomass and thus accurate identification and quantification is important for summative mass closure. Two limitations to the accurate characterization of microalgal carbohydrates are the lack of a robust analytical procedure to hydrolyze polymeric carbohydrates to their respective monomers and the subsequent identification and quantification of those monosaccharides. We address the second limitation, chromatographic separation of monosaccharides, here by identifying optimum conditions for the resolution of a synthetic mixture of 13 microalgae-specific monosaccharides, comprised of 8 neutral, 2 amino sugars, 2 uronic acids and 1 alditol (myo-inositol as an internal standard). The synthetic 13-carbohydrate mix showed incomplete resolution across 11 traditional high performance liquid chromatography (HPLC) methods, but showed improved resolution and accurate quantification using anion exchange chromatography (HPAEC) as well as alditol acetate derivatization followed by gas chromatography (for the neutral- and amino-sugars only). We demonstrate the application of monosaccharide quantification using optimized chromatography conditions after sulfuric acid analytical hydrolysis for three model algae strains and compare the quantification and complexity of monosaccharides in analytical hydrolysates relative to a typical terrestrial feedstock, sugarcane bagasse.
Improved pulse laser ranging algorithm based on high speed sampling
Gao, Xuan-yi; Qian, Rui-hai; Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; He, Shi-jie; Guo, Xiao-kang
2016-10-01
Narrow pulse laser ranging achieves long-range target detection using laser pulses with low-divergence beams. Pulse laser ranging is widely used in the military, industrial, civil, engineering and transportation fields. In this paper, an improved narrow pulse laser ranging algorithm based on high speed sampling is studied. First, theoretical simulation models, including the laser emission and the pulse laser ranging algorithm, are built and analyzed, and an improved pulse ranging algorithm is developed. The new algorithm combines the matched filter algorithm and the constant fraction discrimination (CFD) algorithm. After the algorithm simulation, a laser ranging hardware system is set up to implement the improved algorithm. The laser ranging hardware system includes a laser diode, a laser detector and a high-sample-rate data logging circuit. Subsequently, using the Verilog HDL language, the improved algorithm, a fusion of the matched filter and CFD algorithms, is implemented in an FPGA chip. Finally, a laser ranging experiment is carried out to test the ranging performance of the improved algorithm against the matched filter algorithm and the CFD algorithm using the laser ranging hardware system. The test analysis demonstrates that the laser ranging hardware system realizes high speed processing and high speed sampling data transmission. The improved algorithm achieves 0.3 m distance ranging precision, meeting the expected effect and consistent with the theoretical simulation.
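The matched-filter half of the combined algorithm can be sketched in a few lines: cross-correlate the sampled return with the emitted pulse template, take the peak lag as the round-trip time, and convert to range. The sample rate, pulse shape, and delay below are invented, and the CFD refinement that the paper fuses with this step is not reproduced:

```python
def matched_filter_range(rx, template, fs, c=3.0e8):
    """Estimate range from a sampled return: the lag maximizing the
    cross-correlation with the pulse template gives the round-trip time,
    hence range = c * t / 2."""
    best_lag, best_val = 0, float("-inf")
    for lag in range(len(rx) - len(template) + 1):
        val = sum(rx[lag + i] * template[i] for i in range(len(template)))
        if val > best_val:
            best_val, best_lag = val, lag
    return c * (best_lag / fs) / 2.0

# 1 GS/s sampling; synthetic return with the pulse starting 40 samples in.
template = [0.0, 1.0, 3.0, 1.0, 0.0]
rx = [0.0] * 40 + template + [0.0] * 55
```

The lag resolution at 1 GS/s is 0.15 m of range per sample, which is why the paper's fusion with CFD (sub-sample timing on the pulse edge) is needed to reach 0.3 m precision on real, noisy returns.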
Genetic Algorithm for Optimization: Preprocessor and Algorithm
Sen, S. K.; Shaykhian, Gholam A.
2006-01-01
A genetic algorithm (GA), inspired by Darwin's theory of evolution and employed to solve optimization problems, unconstrained or constrained, uses an evolutionary process. A GA has several parameters such as the population size, search space, crossover and mutation probabilities, and fitness criterion. These parameters are not universally known or determined a priori for all problems. Depending on the problem at hand, these parameters need to be chosen such that the resulting GA performs best. We present here a preprocessor that achieves just that, i.e., it determines, for a specified problem, the foregoing parameters so that the consequent GA is best for the problem. We stress also the need for such a preprocessor, both for quality (error) and for cost (complexity) of producing the solution. The preprocessor includes, as its first step, making use of all available information, such as the nature and character of the function or system, the search space, physical or laboratory experimentation (if already done or available), and the physical environment. It also includes information that can be generated through any means: deterministic, nondeterministic, or graphical. Instead of attempting a solution of the problem straightaway through a GA without having or using knowledge of the character of the system, we do a consciously much better job of producing a solution by using the information generated in the very first step of the preprocessor. We therefore unstintingly advocate the use of a preprocessor to solve a real-world optimization problem, including NP-complete ones, before using the statistically most appropriate GA. We also include such a GA for unconstrained function optimization problems.
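A minimal real-coded GA makes the tunable parameters concrete; pop_size, pc, and pm below are exactly the knobs the proposed preprocessor would choose per problem. The operators (truncation selection, blend crossover, Gaussian mutation) are common illustrative choices, not the paper's specific GA:

```python
import random

def ga_maximize(f, bounds, pop_size=30, pc=0.9, pm=0.1, gens=60, seed=1):
    """Minimal real-coded GA for 1-D unconstrained maximization.
    pop_size, pc (crossover prob.) and pm (mutation prob.) are the
    problem-dependent parameters a preprocessor would tune."""
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    best = max(pop, key=f)
    for _ in range(gens):
        parents = sorted(pop, key=f, reverse=True)[:pop_size // 2]
        pop = []
        while len(pop) < pop_size:
            a, b = rng.sample(parents, 2)
            child = (a + b) / 2 if rng.random() < pc else a  # blend crossover
            if rng.random() < pm:
                child += rng.gauss(0.0, 0.05 * (hi - lo))    # Gaussian mutation
            pop.append(min(hi, max(lo, child)))
        best = max(pop + [best], key=f)
    return best
```

Changing pop_size, pc, or pm markedly shifts both solution quality and run cost, which is the paper's argument for deciding them from problem knowledge rather than defaults.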
Energy Technology Data Exchange (ETDEWEB)
Fischer, Andreia Caroline Fischer da Silveira
2014-07-01
In clinical practice, PET/CT images are often analyzed qualitatively, by visual comparison of uptake in tumor lesions and normal tissues, and semi-quantitatively, by means of a parameter called the SUV (Standardized Uptake Value). To ensure that longitudinal studies acquired on different scanners are interchangeable and that quantification information is comparable, it is necessary to establish a strategy to harmonize SUV quantification. The aim of this study is to evaluate a strategy for harmonizing the quantification of PET/CT images acquired on scanners of different models and manufacturers. For this purpose, a survey of the technical characteristics of the equipment and of the clinical image acquisition protocols of different PET/CT services in the state of Rio Grande do Sul was conducted. For each scanner, the accuracy of SUV quantification and the Recovery Coefficient (RC) curves were determined using the clinically relevant and available reconstruction parameters. From these data, harmonized performance specifications among the evaluated scanners were identified, as well as the algorithm that produces, for each one, the most accurate quantification. Finally, the most appropriate reconstruction parameters for harmonizing SUV quantification on each scanner, whether regionally or internationally, were identified. It was found that the RC values of the analyzed scanners were overestimated by up to 38%, particularly for objects larger than 17 mm. These results demonstrate the need for further optimization, through modification of the reconstruction parameters and even a change of the reconstruction algorithm used in each scanner. It was observed that there is a decoupling between the best image for qualitative PET/CT analysis and the best image for quantification studies. Thus, the choice of reconstruction method should be tied to the purpose of the PET/CT study in question, since the same reconstruction algorithm is not adequate, in one scanner, for qualitative
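For context, the SUV parameter discussed above is conventionally the tissue activity concentration normalized by injected dose per unit body weight. The sketch below uses the common body-weight formulation as an assumption; the abstract does not state which convention the study used.

```python
def suv(activity_conc_bq_per_ml, injected_dose_bq, body_weight_g):
    """Body-weight SUV: tissue concentration over injected dose per gram.

    Assumes the activity concentration is already decay-corrected to the
    injection time (a common convention, not spelled out in the abstract).
    """
    return activity_conc_bq_per_ml / (injected_dose_bq / body_weight_g)
```

A uniformly distributed tracer would give SUV close to 1 everywhere, which is why scanner-dependent biases of tens of percent, as reported above, directly undermine cross-site comparability.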
A new algorithm for hip fracture surgery
DEFF Research Database (Denmark)
Palm, Henrik; Krasheninnikoff, Michael; Holck, Kim
2012-01-01
Background and purpose Treatment of hip fracture patients is controversial. We implemented a new operative and supervision algorithm (the Hvidovre algorithm) for surgical treatment of all hip fractures, primarily based on our own previously published results. Methods 2,000 consecutive patients over 50...... years of age who were admitted and operated on because of a hip fracture were prospectively included. 1,000 of these patients were included after implementation of the algorithm. Demographic parameters, hospital treatment, and reoperations within the first postoperative year were assessed from patient...... by reoperations was reduced from 24% of total hospitalization before the algorithm was introduced to 18% after it was introduced. Interpretation It is possible to implement an algorithm for treatment of all hip fracture patients in a large teaching hospital. In our case, the Hvidovre algorithm both raised...
Institute of Scientific and Technical Information of China (English)
Tian-qi WU; Min YAO; Jian-hua YANG
2016-01-01
By adopting a distributed problem-solving strategy, swarm intelligence algorithms have been successfully applied to many optimization problems that are difficult to deal with using traditional methods. At present, there are many well-implemented algorithms, such as particle swarm optimization, the genetic algorithm, the artificial bee colony algorithm, and ant colony optimization, and these algorithms have shown favorable performance. However, as the target problems become increasingly complex, it is gradually more difficult for these algorithms to meet demands in terms of accuracy and time, so designing new algorithms that seek better solutions is becoming increasingly essential. Dolphins have many noteworthy biological characteristics and living habits, such as echolocation, information exchange, cooperation, and division of labor. Combining these characteristics and habits with swarm intelligence and bringing them into optimization problems, we propose a brand new algorithm named the 'dolphin swarm algorithm' in this paper. We also provide the definitions of the algorithm and specific descriptions of its four pivotal phases: the search phase, call phase, reception phase, and predation phase. Ten benchmark functions with different properties are tested using the dolphin swarm algorithm, particle swarm optimization, the genetic algorithm, and the artificial bee colony algorithm. The convergence rates and benchmark function results of these four algorithms are compared to verify the effectiveness of the dolphin swarm algorithm. The results show that in most cases the dolphin swarm algorithm performs better. The dolphin swarm algorithm possesses some notable features, such as first-slow-then-fast convergence, periodic convergence, freedom from local optima, and no specific demands on the benchmark functions. Moreover, the dolphin swarm algorithm is particularly appropriate to optimization problems, with more
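The benchmark comparison described above can be mimicked in miniature: the sketch below runs a bare-bones particle swarm optimizer and plain random search on the sphere benchmark function. The dolphin swarm algorithm itself is not reproduced here, and all parameter values are illustrative assumptions.

```python
# Toy benchmark harness in the spirit of the paper's comparison: two simple
# optimizers on the sphere function. Parameters are illustrative only.
import random

def sphere(x):
    return sum(v * v for v in x)

def random_search(f, dim=5, iters=2000, seed=1):
    rng = random.Random(seed)
    best = [rng.uniform(-5, 5) for _ in range(dim)]
    for _ in range(iters):
        cand = [rng.uniform(-5, 5) for _ in range(dim)]
        if f(cand) < f(best):
            best = cand
    return f(best)

def pso(f, dim=5, swarm=20, iters=100, seed=1):
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    vs = [[0.0] * dim for _ in range(swarm)]
    pbest = [x[:] for x in xs]
    gbest = min(xs, key=f)[:]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                vs[i][d] = (0.7 * vs[i][d]
                            + 1.5 * rng.random() * (pbest[i][d] - xs[i][d])
                            + 1.5 * rng.random() * (gbest[d] - xs[i][d]))
                xs[i][d] += vs[i][d]
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i][:]
            if f(xs[i]) < f(gbest):
                gbest = xs[i][:]
    return f(gbest)
```

Comparing final objective values and convergence curves across such a harness is essentially what the paper does for its ten benchmark functions, only with a far richer function suite.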
Temporal and spatial quantification of farm and landscape functions
DEFF Research Database (Denmark)
Andersen, Peter Stubkjær
This PhD thesis presents a study on the spatial distribution of agricultural functions at farm and landscape levels. The study focuses on the conceptualization of multifunctionality. The concrete conceptual steps include identification of indicators of four farm and landscape functions – production...... generally decreases, and a tendency toward increased segregation of the rural landscape is observed. In perspective, further studies are needed on quantification in tangible units, on synergies and trade-offs between functions at different scales, and on correlations between structures and functions....
New focused crawling algorithm
Institute of Scientific and Technical Information of China (English)
Su Guiyang; Li Jianhua; Ma Yinghua; Li Shenghong; Song Juping
2005-01-01
Focused crawling is a new research approach for search engines: it restricts information retrieval to, and provides search services within, a specific topic area. The focused crawling search algorithm is a key technique of a focused crawler and directly affects search quality. This paper first introduces several traditional topic-specific crawling algorithms, and then puts forward an inverse-link-based topic-specific crawling algorithm. Comparison experiments show that this algorithm performs well in recall, clearly better than the traditional Breadth-First and Shark-Search algorithms, and that it also achieves good precision.
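A best-first focused crawler of the kind discussed can be sketched over a toy in-memory link graph. The per-page relevance scores below stand in for the paper's inverse-link-based measure, whose exact definition the abstract does not give, so both the data layout and the scoring are assumptions.

```python
# Best-first focused crawling sketch: always expand the most topic-relevant
# page on the frontier. `scores` is a stand-in for any relevance measure.
import heapq

def focused_crawl(graph, scores, seed, budget):
    """Visit up to `budget` pages, always expanding the most relevant one."""
    visited, frontier = set(), [(-scores[seed], seed)]
    order = []
    while frontier and len(order) < budget:
        _, page = heapq.heappop(frontier)       # highest score first
        if page in visited:
            continue
        visited.add(page)
        order.append(page)
        for link in graph.get(page, []):        # enqueue outgoing links
            if link not in visited:
                heapq.heappush(frontier, (-scores.get(link, 0.0), link))
    return order
```

Breadth-First crawling corresponds to replacing the priority queue with a FIFO queue; the recall advantage reported above comes precisely from steering the frontier by relevance instead.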
Symplectic algebraic dynamics algorithm
Institute of Scientific and Technical Information of China (English)
2007-01-01
Based on the algebraic dynamics solution of ordinary differential equations and integration of , the symplectic algebraic dynamics algorithm sn is designed, which preserves the local symplectic geometric structure of a Hamiltonian system and possesses the same precision as the naive algebraic dynamics algorithm n. Computer experiments with the 4th-order algorithms are carried out for five test models and the numerical results are compared with those of the conventional symplectic geometric algorithm, indicating that sn has higher precision, that the algorithm-induced phase shift of the conventional symplectic geometric algorithm can be reduced, and that the dynamical fidelity can be improved by one order of magnitude.
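The value of preserving the symplectic structure can be seen in a standard textbook example (not the article's 4th-order algorithms): symplectic Euler keeps the energy of a harmonic oscillator H = (p² + q²)/2 bounded for arbitrarily long runs, while explicit Euler lets it grow without limit.

```python
# Textbook illustration of symplectic vs non-symplectic integration on the
# harmonic oscillator q' = p, p' = -q with Hamiltonian H = (p^2 + q^2)/2.

def explicit_euler(q, p, dt, steps):
    for _ in range(steps):
        q, p = q + dt * p, p - dt * q       # both updates use old values
    return q, p

def symplectic_euler(q, p, dt, steps):
    for _ in range(steps):
        p = p - dt * q                      # kick with the current position
        q = q + dt * p                      # drift with the *updated* momentum
    return q, p

def energy(q, p):
    return 0.5 * (q * q + p * p)
```

After 10,000 steps of size 0.01 (100 oscillation time units), the symplectic scheme stays within O(dt) of the initial energy 0.5, whereas explicit Euler inflates the energy by a factor (1 + dt²) every step.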
Adaptive cockroach swarm algorithm
Obagbuwa, Ibidun C.; Abidoye, Ademola P.
2017-07-01
An adaptive cockroach swarm optimization (ACSO) algorithm is proposed in this paper to strengthen the existing cockroach swarm optimization (CSO) algorithm. The ruthless component of the CSO algorithm is modified by employing a blend-crossover predator-prey evolution method, which helps the algorithm prevent any possible population collapse, maintain population diversity, and search adaptively in each iteration. The performance of the proposed algorithm was evaluated on 16 global optimization benchmark function problems and compared with the existing CSO, cuckoo search, differential evolution, particle swarm optimization, and artificial bee colony algorithms.
Decoherence in Search Algorithms
Abal, G; Marquezino, F L; Oliveira, A C; Portugal, R
2009-01-01
Recently several quantum search algorithms based on quantum walks were proposed. Those algorithms differ from Grover's algorithm in many aspects. The goal is to find a marked vertex in a graph faster than classical algorithms. Since the implementation of those new algorithms in quantum computers or in other quantum devices is error-prone, it is important to analyze their robustness under decoherence. In this work we analyze the impact of decoherence on quantum search algorithms implemented on two-dimensional grids and on hypercubes.
Quantification of isotopic turnover in agricultural systems
Braun, A.; Auerswald, K.; Schnyder, H.
2012-04-01
The isotopic turnover, a proxy for the metabolic rate, is gaining scientific importance. It is quantified for an increasing range of organisms, from microorganisms through plants to animals, including agricultural livestock. Additionally, the isotopic turnover is analyzed on different scales, from organs to organisms to ecosystems and even to the biosphere. In particular, the quantification of the isotopic turnover of specific tissues within the same organism, e.g. organs like liver and muscle and products like milk and faeces, has brought new insights that improve the understanding of nutrient cycles and fluxes. Thus, knowledge of the isotopic turnover is important in many areas, including physiology (e.g. milk synthesis), ecology (e.g. soil retention time of water), and medical science (e.g. cancer diagnosis). So far, the isotopic turnover has been quantified by applying time-, cost- and expertise-intensive tracer experiments. Usually, this comprises two isotopic equilibration periods: a first equilibration period with a constant isotopic input signal is followed by a second equilibration period with a distinct constant isotopic input signal, which yields a smooth signal change from the first to the second signal in the object under consideration. This approach suffers from at least three major problems. (i) The input signals must be controlled isotopically, which is almost impossible in many realistic cases like free-ranging animals. (ii) Both equilibration periods may be very long, especially when the turnover rate of the object under consideration is very slow, which aggravates the first problem. (iii) The detection of small or slow pools is improved by large isotopic signal changes, but large isotopic changes also involve a considerable change in the input material; e.g. animal studies are usually carried out as diet-switch experiments, where the diet is switched between C3 and C4 plants, since C3 and C4 plants differ strongly in their isotopic signal. The
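The two-period tracer design alluded to above is usually modelled as first-order turnover: after the input switches, the tissue signal relaxes exponentially from the old to the new equilibrium, delta(t) = d_new + (d_old - d_new)·exp(-k·t). The sketch below generates such a signal and recovers the rate with a log-linear fit; the estimator and all values are illustrative, not the authors' method.

```python
# First-order turnover model for a diet-switch experiment, plus a simple
# log-linear estimator of the turnover rate k (illustrative choice).
import math

def turnover_signal(t, d_old, d_new, k):
    """Isotopic signal at time t after the input switch."""
    return d_new + (d_old - d_new) * math.exp(-k * t)

def estimate_rate(times, deltas, d_old, d_new):
    """Estimate k by regressing the log of the remaining excess on time."""
    ys = [math.log((d - d_new) / (d_old - d_new)) for d in deltas]
    n = len(times)
    mx, my = sum(times) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(times, ys)) / \
            sum((x - mx) ** 2 for x in times)
    return -slope          # ys = -k * t, so the slope is -k
```

The d_old/d_new values in the test mimic typical C3 vs C4 diet signals; in practice both equilibria must be known or estimated, which is exactly problem (i) above.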
3D automatic quantification applied to optically sectioned images to improve microscopy analysis
Directory of Open Access Journals (Sweden)
JE Diaz-Zamboni
2009-08-01
Full Text Available New fluorescence microscopy techniques, such as confocal or digital deconvolution microscopy, make it easy to obtain three-dimensional (3D) information from specimens. However, there are few 3D quantification tools for extracting information from these volumes, so the amount of information acquired by these techniques is difficult to manipulate and analyze manually. The present study describes a model-based method which, for the first time, shows 3D visualization and quantification of fluorescent apoptotic body signals from optical serial sections of porcine hepatocyte spheroids, correlating them with their morphological structures. The method consists of an algorithm that counts apoptotic bodies in a spheroid structure and extracts information from them, such as their centroids in Cartesian and radial coordinates relative to the spheroid centre, and their integrated intensity. 3D visualization of the extracted information allowed us to quantify the distribution of apoptotic bodies in three different zones of the spheroid.
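The centroid-extraction step can be sketched for a toy labelled voxel set. The sparse dict-of-voxels data layout and the function names are assumptions for illustration, not the article's implementation.

```python
# Toy version of the counting/centroid step: from labelled 3D voxels of
# detected apoptotic bodies, compute each body's centroid and its radial
# distance from the spheroid centre.
import math

def body_centroids(labels):
    """labels: dict mapping (z, y, x) voxel -> body id. Returns id -> centroid."""
    sums, counts = {}, {}
    for (z, y, x), body in labels.items():
        s = sums.setdefault(body, [0.0, 0.0, 0.0])
        s[0] += z; s[1] += y; s[2] += x
        counts[body] = counts.get(body, 0) + 1
    return {b: tuple(v / counts[b] for v in s) for b, s in sums.items()}

def radial_distance(centroid, centre):
    """Distance of a body centroid from the spheroid centre."""
    return math.dist(centroid, centre)
```

Binning the radial distances then yields the kind of zone-wise distribution of apoptotic bodies the study reports.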
Foundations of statistical algorithms with references to R packages
Weihs, Claus; Ligges, Uwe
2013-01-01
A new and refreshingly different approach to presenting the foundations of statistical algorithms, Foundations of Statistical Algorithms: With References to R Packages reviews the historical development of basic algorithms to illuminate the evolution of today's more powerful statistical algorithms. It emphasizes recurring themes in all statistical algorithms, including computation, assessment and verification, iteration, intuition, randomness, repetition and parallelization, and scalability. Unique in scope, the book reviews the upcoming challenge of scaling many of the established techniques
Mannheim, Julia G; Judenhofer, Martin S; Schmid, Andreas; Tillmanns, Julia; Stiller, Detlef; Sossi, Vesna; Pichler, Bernd J
2012-06-21
Quantification accuracy and partial volume effect (PVE) of the Siemens Inveon PET scanner were evaluated. The influence of transmission source activities (40 and 160 MBq) on the quantification accuracy and the PVE were determined. Dynamic range, object size and PVE for different sphere sizes, contrast ratios and positions in the field of view (FOV) were evaluated. The acquired data were reconstructed using different algorithms and correction methods. The activity level of the transmission source and the total emission activity in the FOV strongly influenced the attenuation maps. Reconstruction algorithms, correction methods, object size and location within the FOV had a strong influence on the PVE in all configurations. All evaluated parameters potentially influence the quantification accuracy. Hence, all protocols should be kept constant during a study to allow a comparison between different scans.
Quantification of organ motion based on an adaptive image-based scale invariant feature method
Energy Technology Data Exchange (ETDEWEB)
Paganelli, Chiara [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133 (Italy); Peroni, Marta [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Paul Scherrer Institut, Zentrum für Protonentherapie, WMSA/C15, CH-5232 Villigen PSI (Italy); Baroni, Guido; Riboldi, Marco [Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano, piazza L. Da Vinci 32, Milano 20133, Italy and Bioengineering Unit, Centro Nazionale di Adroterapia Oncologica, strada Campeggi 53, Pavia 27100 (Italy)
2013-11-15
Purpose: The availability of corresponding landmarks in IGRT image series allows quantification of the inter- and intrafractional motion of internal organs. In this study, an approach for the automatic localization of anatomical landmarks is presented, with the aim of describing the nonrigid motion of anatomo-pathological structures in radiotherapy treatments according to local image contrast. Methods: An adaptive scale invariant feature transform (SIFT) was developed by integrating a standard 3D SIFT approach with a local image-based contrast definition. The robustness and invariance of the proposed method to shape-preserving and deformable transforms were analyzed in a CT phantom study. The application of contrast transforms to the phantom images was also tested, in order to verify the variation of the local adaptive measure in relation to the modification of image contrast. The method was also applied to a lung 4D CT dataset, relying on manual feature identification by an expert user as ground truth. The 3D residual distance between matches obtained with adaptive SIFT was then computed to verify the internal motion quantification with respect to the expert user. Extracted corresponding features in the lungs were used as regularization landmarks in a multistage deformable image registration (DIR) mapping the inhale to the exhale phase. The residual distances between the warped manual landmarks and their reference positions in the inhale phase were evaluated, in order to provide a quantitative indication of the registration performed with the three different point sets. Results: The phantom study confirmed the method's invariance and robustness to shape-preserving and deformable transforms, showing residual matching errors below the voxel dimension. The adapted SIFT algorithm on the 4D CT dataset provided automated and accurate motion detection of peak-to-peak breathing motion. The proposed method resulted in reduced residual errors with respect to standard SIFT
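The evaluation against the expert user reduces, at its core, to computing 3D residual distances between corresponding landmark sets; the data structures below (lists of coordinate tuples, in voxels or mm) are illustrative assumptions.

```python
# Residual-distance evaluation between automatically matched landmarks and
# expert-annotated reference landmarks (coordinates are illustrative).
import math

def residual_distances(auto_pts, ref_pts):
    """Pairwise 3D distances between corresponding landmark lists."""
    return [math.dist(a, r) for a, r in zip(auto_pts, ref_pts)]

def mean_residual(auto_pts, ref_pts):
    """Mean residual, the summary statistic typically reported."""
    d = residual_distances(auto_pts, ref_pts)
    return sum(d) / len(d)
```

Comparing this statistic across the three regularization point sets gives the quantitative indication of registration quality mentioned above.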
Targeted proteomic quantification on quadrupole-orbitrap mass spectrometer.
Gallien, Sebastien; Duriez, Elodie; Crone, Catharina; Kellmann, Markus; Moehring, Thomas; Domon, Bruno
2012-12-01
There is an immediate need for improved methods to systematically and precisely quantify large sets of peptides in complex biological samples. To date protein quantification in biological samples has been routinely performed on triple quadrupole instruments operated in selected reaction monitoring mode (SRM), and two major challenges remain. Firstly, the number of peptides to be included in one survey experiment needs to be increased to routinely reach several hundreds, and secondly, the degree of selectivity should be improved so as to reliably discriminate the targeted analytes from background interferences. High resolution and accurate mass (HR/AM) analysis on the recently developed Q-Exactive mass spectrometer can potentially address these issues. This instrument presents a unique configuration: it is constituted of an orbitrap mass analyzer equipped with a quadrupole mass filter as the front-end for precursor ion mass selection. This configuration enables new quantitative methods based on HR/AM measurements, including targeted analysis in MS mode (single ion monitoring) and in MS/MS mode (parallel reaction monitoring). The ability of the quadrupole to select a restricted m/z range allows one to overcome the dynamic range limitations associated with trapping devices, and the MS/MS mode provides an additional stage of selectivity. When applied to targeted protein quantification in urine samples and benchmarked with the reference SRM technique, the quadrupole-orbitrap instrument exhibits similar or better performance in terms of selectivity, dynamic range, and sensitivity. This high performance is further enhanced by leveraging the multiplexing capability of the instrument to design novel acquisition methods and apply them to large targeted proteomic studies for the first time, as demonstrated on 770 tryptic yeast peptides analyzed in one 60-min experiment. The increased quality of quadrupole-orbitrap data has the potential to improve existing protein
First approach to radionuclide mixtures quantification by using plastic scintillators
Energy Technology Data Exchange (ETDEWEB)
Tarancon, A. [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain); Garcia, J.F. [Departament de Pintura, Universitat de Barcelona, Pau Gargallo 4, E-08028 Barcelona (Spain)]. E-mail: jfgarcia@ub.edu; Rauret, G. [Departament de Quimica Analitica, Universitat de Barcelona, Diagonal 647, E-08028 Barcelona (Spain)
2007-05-08
Recent studies have evaluated the capability of plastic scintillation (PS) as an alternative to liquid scintillation (LS) in radionuclide activity determination without mixed-waste production. In order to complete the comparison, we now assess the extent to which PS can be used to quantify mixtures of radionuclides, and the influence of the diameter of the plastic scintillation beads on detection efficiency. The results show that the detection efficiency decreases and the spectra shrink toward lower energies as the size of the plastic scintillation beads increases, and that the lower the energy of the beta particle, the greater the variation. Similar behaviour has been observed for beta-gamma and alpha emitters. Two scenarios for the quantification of mixtures are considered: one includes two radionuclides ({sup 14}C and {sup 60}Co) whose spectra do not overlap significantly, and the other includes two radionuclides ({sup 137}Cs and {sup 90}Sr/{sup 90}Y) where the spectrum of one of the isotopes is totally overlapped by that of the other. The calculation has been performed using the conventional window selection procedure and a new approach in which the selected windows are those with the lowest quantification errors. Relative errors obtained using the proposed approach (less than 10%) are lower than those of the conventional procedure, even when a radionuclide is completely overlapped, except for samples with extreme activity ratios that were not included in the window optimization process.
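The window-based quantification of a two-radionuclide mixture amounts to solving a small linear system once the per-window detection efficiencies have been calibrated on pure standards. The sketch below uses invented efficiency values purely for illustration; it is not the authors' procedure.

```python
# Two-window, two-radionuclide unmixing: counts = eff @ activities, where
# eff[w][r] is the (calibrated) efficiency of radionuclide r in window w.

def unmix_two(counts, eff):
    """Solve the 2x2 system for the two activities.

    counts: (c1, c2) net count rates in windows 1 and 2.
    eff:    2x2 efficiency matrix calibrated from pure standards.
    """
    (a, b), (c, d) = eff
    det = a * d - b * c            # assumes the windows actually discriminate
    c1, c2 = counts
    return ((d * c1 - b * c2) / det, (a * c2 - c * c1) / det)
```

When one spectrum is fully contained in the other, the efficiency matrix becomes nearly singular, which is exactly why the choice of windows (the paper's optimization target) drives the quantification error.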
Energy Technology Data Exchange (ETDEWEB)
Zito, Felicia; De Bernardi, Elisabetta; Soffientini, Chiara; Canzi, Cristina; Casati, Rosangela; Gerundini, Paolo; Baselli, Giuseppe [Nuclear Medicine Department, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, 20122 Milan (Italy); Bioengineering Department, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milan (Italy) and Tecnomed Foundation, University of Milano-Bicocca, via Pergolesi 33, 20900 Monza (Italy); Bioengineering Department, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milan (Italy); Nuclear Medicine Department, Fondazione IRCCS Ca' Granda Ospedale Maggiore Policlinico, via Francesco Sforza 35, 20122 Milan (Italy); Bioengineering Department, Politecnico di Milano, piazza Leonardo da Vinci 32, 20133 Milan (Italy)
2012-09-15
-FDG solution is able to saturate the zeolite pores and that the concentration does not influence the distribution uniformity of either solution or solute, at least at the trace concentrations used for zeolite activation. An additional proof of the uniformity of zeolite saturation was obtained by observing a correspondence between uptake and the adsorbed volume of solution, corresponding to about 27.8% of the zeolite volume. As to the ground truth for zeolites positioned inside the phantom, the segmentation of finely aligned CT images provided reliable borders, as demonstrated by a mean absolute volume error of 2.8% with respect to the PET threshold segmentation corresponding to the maximum Dice coefficient. Conclusions: The proposed methodology allowed us to obtain an experimental phantom data set that can be used as a feasible tool to test and validate quantification and segmentation algorithms for PET in oncology. The phantom is currently under consideration for inclusion in a benchmark designed by AAPM TG211, which will be available to the community to evaluate PET automatic segmentation methods.
Extension of least squares spectral resolution algorithm to high-resolution lipidomics data
Energy Technology Data Exchange (ETDEWEB)
Zeng, Ying-Xu [Department of Chemistry, University of Bergen, PO Box 7803, N-5020 Bergen (Norway); Mjøs, Svein Are, E-mail: svein.mjos@kj.uib.no [Department of Chemistry, University of Bergen, PO Box 7803, N-5020 Bergen (Norway); David, Fabrice P.A. [Bioinformatics and Biostatistics Core Facility, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL) and Swiss Institute of Bioinformatics (SIB), Lausanne (Switzerland); Schmid, Adrien W. [Proteomics Core Facility, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne (Switzerland)
2016-03-31
Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This calls for powerful computational tools that can handle high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high- and low-resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. - Highlights: • A flexible strategy for analyzing MS and LC-MS data of lipid molecules is proposed. • Isotope distribution spectra of theoretically possible compounds were generated. • High-resolution MS and LC-MS data were resolved by least squares spectral resolution. • The method proposes compounds that are likely to occur in the analyzed samples. • The proposed compounds matched results from manual interpretation of fragment spectra.
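The least squares resolution step can be reduced to its essence: model the observed spectrum as a linear combination of theoretical isotope-distribution spectra and solve the normal equations for the component amounts. The two-component spectra below are invented for illustration and are far simpler than real isotope patterns.

```python
# Stripped-down least squares spectral resolution: find amounts (a1, a2)
# minimizing ||spectrum - a1*comp1 - a2*comp2|| via the 2x2 normal equations.

def resolve_two(spectrum, comp1, comp2):
    dot = lambda u, v: sum(x * y for x, y in zip(u, v))
    g11, g12, g22 = dot(comp1, comp1), dot(comp1, comp2), dot(comp2, comp2)
    b1, b2 = dot(comp1, spectrum), dot(comp2, spectrum)
    det = g11 * g22 - g12 * g12        # Gram determinant; nonzero if the
    return ((g22 * b1 - g12 * b2) / det,   # component spectra are independent
            (g11 * b2 - g12 * b1) / det)
```

The real tool extends this to many candidate compounds (with non-negativity and library-driven candidate selection), but the linear-algebra core is the same.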
Software For Genetic Algorithms
Wang, Lui; Bayer, Steve E.
1992-01-01
SPLICER computer program is genetic-algorithm software tool used to solve search and optimization problems. Provides underlying framework and structure for building genetic-algorithm application program. Written in Think C.
Single DNA imaging and length quantification through a mobile phone microscope
Wei, Qingshan; Luo, Wei; Chiang, Samuel; Kappel, Tara; Mejia, Crystal; Tseng, Derek; Chan, Raymond Yan L.; Yan, Eddie; Qi, Hangfei; Shabbir, Faizan; Ozkan, Haydar; Feng, Steve; Ozcan, Aydogan
2016-03-01
The development of sensitive optical microscopy methods for the detection of single DNA molecules has become an active research area cultivating various promising applications, including point-of-care (POC) genetic testing and diagnostics. Direct visualization of individual DNA molecules usually relies on sophisticated optical microscopes that are mostly available in well-equipped laboratories. For POC DNA testing and detection, there is an increasing need for new single-DNA imaging and sensing methods that are field-portable, cost-effective, and accessible for diagnostic applications in resource-limited or field settings. To this end, we developed a mobile-phone-integrated fluorescence microscopy platform that allows imaging and sizing of single DNA molecules stretched on a chip. This handheld device contains an opto-mechanical attachment integrated onto a smartphone camera module, which creates a high signal-to-noise-ratio dark-field imaging condition by using an oblique illumination/excitation configuration. Using this device, we demonstrated imaging of individual linearly stretched λ DNA molecules (48 kilobase pairs, kbp) over a 2 mm² field of view. We further developed a robust computational algorithm and a smartphone app that allow users to quickly quantify the length of each imaged DNA fragment through this mobile interface. The cellphone-based device was tested with five different DNA samples (5, 10, 20, 40, and 48 kbp), and its sizing accuracy was characterized. This mobile DNA imaging and sizing platform can be very useful for various diagnostic applications, including the detection of disease-specific genes and the quantification of copy-number variations at POC settings.
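Length quantification of a stretched molecule is, at its core, a calibration from pixels to kilobase pairs. The sketch below uses the textbook 0.34 nm-per-base-pair rise of B-form DNA as an assumed constant; the app's actual calibration procedure and constants are not given in the abstract.

```python
# Illustrative pixel-to-kbp sizing: 1 kbp of fully stretched B-form DNA spans
# about 0.34 um (0.34 nm per base pair); stretch_factor < 1 if understretched.

def dna_length_kbp(length_pixels, um_per_pixel, stretch_factor=1.0):
    """Convert a measured stretched-DNA length in pixels to kilobase pairs."""
    length_um = length_pixels * um_per_pixel
    return length_um / (0.34 * stretch_factor)
```

With a 0.5 um pixel size, a 33-pixel trace corresponds to about 16.5 um, i.e. close to the ~48.5 kbp of lambda DNA, which matches the calibration sample used in the study.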
Energy Technology Data Exchange (ETDEWEB)
Enghauser, Michael [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2016-02-01
The goal of the Domestic Nuclear Detection Office (DNDO) Algorithm Improvement Program (AIP) is to facilitate gamma-radiation detector nuclide identification algorithm development, improvement, and validation. Accordingly, scoring criteria have been developed to objectively assess the performance of nuclide identification algorithms. In addition, a Microsoft Excel spreadsheet application for automated nuclide identification scoring has been developed. This report provides an overview of the equations, nuclide weighting factors, nuclide equivalencies, and configuration weighting factors used by the application for scoring nuclide identification algorithm performance. Furthermore, this report presents a general overview of the nuclide identification algorithm scoring application including illustrative examples.
DEFF Research Database (Denmark)
Richardson, Katherine; Denny, R.; Hughes, C.
2012-01-01
A probability-based quantification framework is presented for the calculation of relative peptide and protein abundance in label-free and label-dependent LC-MS proteomics data. The results are accompanied by credible intervals and regulation probabilities. The algorithm takes into account data...
Progressive geometric algorithms
Directory of Open Access Journals (Sweden)
Sander P.A. Alewijnse
2015-01-01
Full Text Available Progressive algorithms are algorithms that, on the way to computing a complete solution to the problem at hand, output intermediate solutions that approximate the complete solution increasingly well. We present a framework for analyzing such algorithms, and develop efficient progressive algorithms for two geometric problems: computing the convex hull of a planar point set, and finding popular places in a set of trajectories.
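The first of the two problems mentioned can be sketched directly: a progressive convex hull that consumes points in batches and emits an intermediate hull after each batch. Andrew's monotone chain is used here as an illustrative hull subroutine; the paper's actual progressive algorithm is more refined than this naive recompute-per-batch version.

```python
# Progressive convex hull sketch: intermediate hulls approximate the final
# hull increasingly well as more input points are consumed.

def convex_hull(points):
    """Andrew's monotone chain; returns hull vertices in CCW order."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        h = []
        for p in seq:
            # pop while the turn h[-2] -> h[-1] -> p is not strictly left
            while len(h) >= 2 and ((h[-1][0] - h[-2][0]) * (p[1] - h[-2][1])
                                   - (h[-1][1] - h[-2][1]) * (p[0] - h[-2][0])) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = half(pts), half(reversed(pts))
    return lower[:-1] + upper[:-1]

def progressive_hull(points, batch):
    """Yield the hull of all points seen so far, after each batch."""
    seen = []
    for i in range(0, len(points), batch):
        seen.extend(points[i:i + batch])
        yield convex_hull(seen)
```

A consumer can render or act on each intermediate hull immediately, which is the defining behavior of a progressive algorithm.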
Borbely, Eva
2007-01-01
A quantum algorithm is a set of instructions for a quantum computer; however, unlike algorithms in classical computer science, their results cannot be guaranteed. A quantum system can undergo two types of operation, measurement and quantum state transformation, and transformations must be unitary (reversible). Most quantum algorithms involve a series of quantum state transformations followed by a measurement. Currently very few quantum algorithms are known, and no general design methodology e...
Quantification of carbon nanomaterials in vivo.
Wang, Haifang; Yang, Sheng-Tao; Cao, Aoneng; Liu, Yuanfang
2013-03-19
A diverse array of carbon nanomaterials (NMs), including fullerene, carbon nanotubes (CNTs), graphene, nanodiamonds, and carbon nanoparticles, have been discovered and widely applied in a variety of industries. Carbon NMs have been detected in the environment and have a strong possibility of entering the human body. The safety of carbon NMs has thus become a serious concern in academia and society. To achieve strict biosafety assessments, researchers need to fully understand the effects and fates of NMs in the human body, including information about absorption, distribution, metabolism, excretion, and toxicity (ADME/T). To acquire the ADME data, researchers must quantify NMs, but carbon NMs are very difficult to quantify in vivo. The carbon background in a typical biological system is high, particularly compared with the much lower concentration of carbon NMs. Moreover, carbon NMs lack a specific detection signal. Therefore, isotopic labeling, with its high sensitivity and specificity, is the first choice to quantify carbon NMs in vivo. Previously, researchers have used many isotopes, including ¹³C, ¹⁴C, ¹²⁵I, ¹³¹I, ³H, ⁶⁴Cu, ¹¹¹In, ⁸⁶Y, 99mTc, and ⁶⁷Ga, to label carbon NMs. We used these isotopic labeling methods to study the ADME of carbon NMs via different exposure pathways in animal models. Except for the metabolism of carbon NMs, which has seldom been investigated, significant amounts of data have been reported on the in vivo absorption, distribution, excretion, and toxicity of carbon NMs, which have revealed characteristic behaviors of carbon NMs, such as reticuloendothelial system (RES) capture. However, the complexity of the biological systems and diverse preparation and functionalization of the same carbon NMs have led to inconsistent results across different studies. Therefore, the data obtained so far have not provided a compatible and systematic profile of biosafety. Further efforts are needed to address these problems. In
Quantification and Propagation of Nuclear Data Uncertainties
Rising, Michael E.
The use of several uncertainty quantification and propagation methodologies is investigated in the context of the prompt fission neutron spectrum (PFNS) uncertainties and their impact on critical reactor assemblies. First, the first-order, linear Kalman filter is used as a nuclear data evaluation and uncertainty quantification tool, combining available PFNS experimental data and a modified version of the Los Alamos (LA) model. The experimental covariance matrices, not generally given in the EXFOR database, are computed using the GMA methodology used by the IAEA to establish more appropriate correlations within each experiment. Then, using systematics relating the LA model parameters across a suite of isotopes, the PFNS for both the uranium and plutonium actinides are evaluated, leading to a new evaluation including cross-isotope correlations. Next, an alternative evaluation approach, the unified Monte Carlo (UMC) method, is studied for the evaluation of the PFNS for the n(0.5 MeV)+Pu-239 fission reaction and compared to the Kalman filter. The UMC approach to nuclear data evaluation is implemented in a variety of ways to test convergence toward the Kalman filter results and to determine the nonlinearities present in the LA model. Ultimately, the UMC approach is shown to be comparable to the Kalman filter for a realistic data evaluation of the PFNS and is capable of capturing the nonlinearities present in the LA model. Next, the impact that the PFNS uncertainties have on important critical assemblies is investigated. Using the PFNS covariance matrices in the ENDF/B-VII.1 nuclear data library, the uncertainties of the effective multiplication factor, leakage, and spectral indices of the Lady Godiva and Jezebel critical assemblies are quantified. Principal component analysis of the PFNS covariance matrices shows that only 2-3 principal components are needed to retain the PFNS uncertainties. Then, using the polynomial chaos expansion (PCE) on the uncertain output
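The linear Kalman-filter update underlying this evaluation methodology can be sketched in scalar form: a model prediction with a prior variance is combined with an experimental datum weighted by its variance. All numbers below are illustrative, not evaluated PFNS data.

```python
# Minimal scalar sketch of a first-order (linear) Kalman-filter data update.

def kalman_update(x_prior, p_prior, y_meas, r_meas):
    """Return posterior estimate and variance for one scalar observation."""
    k = p_prior / (p_prior + r_meas)           # Kalman gain
    x_post = x_prior + k * (y_meas - x_prior)  # corrected estimate
    p_post = (1.0 - k) * p_prior               # reduced uncertainty
    return x_post, p_post

# Model predicts 1.00 with variance 0.04; experiment measures 1.10 with 0.01.
x, p = kalman_update(x_prior=1.00, p_prior=0.04, y_meas=1.10, r_meas=0.01)
```

The posterior lies between model and data, weighted toward the more precise of the two, and its variance is smaller than either input variance.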
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available techniques and is organized by algorithmic paradigm.
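The simplest sequential vertex-colouring algorithm of the kind surveyed in such a chapter is greedy (first-fit) colouring, which uses at most Δ+1 colours for a graph of maximum degree Δ. The graph and vertex ordering below are illustrative.

```python
# Greedy (first-fit) vertex colouring: assign each vertex the smallest
# colour not already used by one of its coloured neighbours.

def greedy_colouring(adj, order):
    """adj: dict vertex -> set of neighbours; order: sequence of vertices."""
    colour = {}
    for v in order:
        used = {colour[u] for u in adj[v] if u in colour}
        c = 0
        while c in used:   # smallest colour absent among coloured neighbours
            c += 1
        colour[v] = c
    return colour

# A 5-cycle is not bipartite, so it needs 3 colours; greedy finds a proper
# colouring with exactly 3 here.
cycle5 = {0: {1, 4}, 1: {0, 2}, 2: {1, 3}, 3: {2, 4}, 4: {3, 0}}
cols = greedy_colouring(cycle5, order=[0, 1, 2, 3, 4])
```

The worst-case guarantee depends on the ordering, which is exactly the kind of trade-off such surveys analyse.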
Optimal Mixing Evolutionary Algorithms
Thierens, D.; Bosman, P.A.N.; Krasnogor, N.
2011-01-01
A key search mechanism in Evolutionary Algorithms is the mixing or juxtaposing of partial solutions present in the parent solutions. In this paper we look at the efficiency of mixing in genetic algorithms (GAs) and estimation-of-distribution algorithms (EDAs). We compute the mixing probabilities of
Implementation of Parallel Algorithms
1991-09-30
Lecture Notes in Computer Science, Warwick, England, July 16-20... Lecture Notes in Computer Science, Springer-Verlag, Bangalore, India, December 1990. J. Reif, J. Canny, and A. Page, "An Exact Algorithm for Kinodynamic...Parallel Algorithms and its Impact on Computational Geometry, in Optimal Algorithms, H. Djidjev editor, Springer-Verlag Lecture Notes in Computer Science
Semiclassical Shor's Algorithm
Giorda, P; Sen, S; Sen, S; Giorda, Paolo; Iorio, Alfredo; Sen, Samik; Sen, Siddhartha
2003-01-01
We propose a semiclassical version of Shor's quantum algorithm to factorize integer numbers, based on spin-1/2 SU(2) generalized coherent states. Surprisingly, we find numerical evidence that the algorithm's success probability is not too severely modified by our semiclassical approximation. This suggests that it is worth pursuing practical implementations of the algorithm on semiclassical devices.
Combinatorial optimization theory and algorithms
Korte, Bernhard
2002-01-01
Combinatorial optimization is one of the youngest and most active areas of discrete mathematics, and is probably its driving force today. This book describes the most important ideas, theoretical results, and algorithms of this field. It is conceived as an advanced graduate text, and it can also be used as an up-to-date reference work for current research. The book includes the essential fundamentals of graph theory, linear and integer programming, and complexity theory. It covers classical topics in combinatorial optimization as well as very recent ones. The emphasis is on theoretical results and algorithms with provably good performance. Some applications and heuristics are mentioned, too.
Ensemble Methods Foundations and Algorithms
Zhou, Zhi-Hua
2012-01-01
An up-to-date, self-contained introduction to a state-of-the-art machine learning approach, Ensemble Methods: Foundations and Algorithms shows how these accurate methods are used in real-world tasks. It gives you the necessary groundwork to carry out further research in this evolving field. After presenting background and terminology, the book covers the main algorithms and theories, including Boosting, Bagging, Random Forest, averaging and voting schemes, the Stacking method, mixture of experts, and diversity measures. It also discusses multiclass extension, noise tolerance, error-ambiguity a
THE FEATURE SUBSET SELECTION ALGORITHM
Institute of Scientific and Technical Information of China (English)
Liu Yongguo; Li Xueming; Wu Zhongfu
2003-01-01
The motivation of data mining is to extract effective information from the huge amount of data in very large databases. However, such databases generally include some redundant and irrelevant attributes, which result in low performance and high computing complexity. So, Feature Subset Selection (FSS) becomes one important issue in the field of data mining. In this letter, an FSS model based on the filter approach is built, which uses the simulated annealing genetic algorithm. Experimental results show that convergence and stability of this algorithm are adequately achieved.
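A simulated-annealing genetic algorithm for feature subset selection can be sketched as a GA whose offspring are accepted with an annealing criterion. The fitness function below is a toy stand-in for a real filter criterion, and the "relevant" feature indices are invented for the demonstration.

```python
import math
import random

# Toy sketch: SA-GA search over bit masks selecting feature subsets.
RELEVANT = {0, 2, 5}     # hypothetical ground-truth relevant features
N_FEATURES = 8

def fitness(mask):       # stand-in for a filter criterion
    chosen = {i for i, bit in enumerate(mask) if bit}
    return 2 * len(chosen & RELEVANT) - len(chosen - RELEVANT)

def sa_ga_fss(pop_size=20, generations=60, t0=2.0, seed=1):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(N_FEATURES)]
           for _ in range(pop_size)]
    temperature = t0
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a, b = rng.sample(pop, 2)
            cut = rng.randrange(1, N_FEATURES)        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.2:                    # bit-flip mutation
                child[rng.randrange(N_FEATURES)] ^= 1
            parent = max((a, b), key=fitness)
            delta = fitness(child) - fitness(parent)
            # simulated-annealing acceptance: worse children survive early on
            if delta >= 0 or rng.random() < math.exp(delta / temperature):
                nxt.append(child)
            else:
                nxt.append(parent)
        pop = nxt
        temperature *= 0.95                           # cooling schedule
    return max(pop, key=fitness)

best = sa_ga_fss()
```

The cooling schedule gradually makes the acceptance step greedy, which is what gives the hybrid its convergence and stability properties.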
HPC Analytics Support. Requirements for Uncertainty Quantification Benchmarks
Energy Technology Data Exchange (ETDEWEB)
Paulson, Patrick R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Purohit, Sumit [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Rodriguez, Luke R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2015-05-01
This report outlines techniques for extending benchmark generation products so they support uncertainty quantification by benchmarked systems. We describe how uncertainty quantification requirements can be presented to candidate analytical tools supporting SPARQL. We describe benchmark data sets for evaluating uncertainty quantification, as well as an approach for using our benchmark generator to produce such data sets.
Quantification of fluorescent reporters in plant cells.
Pound, Michael; French, Andrew P; Wells, Darren M
2015-01-01
Fluorescent reporters are powerful tools for plant research. Many studies require accurate determination of fluorescence intensity and localization. Here, we describe protocols for the quantification of fluorescence intensity in plant cells from confocal laser scanning microscope images using semiautomated software and image analysis techniques.
Quantification of interferon signaling in avian cells
Kint, Joeri; Forlenza, Maria
2015-01-01
Activation of the type I interferon (IFN) response is an essential defense mechanism against invading pathogens such as viruses. This chapter describes two protocols to quantify activation of the chicken IFN response through analysis of gene expression by real-time quantitative PCR and by quantif
Quantification of coating aging using impedance measurements
Westing, E.P.M. van; Weijde, D.H. van der; Vreijling, M.P.W.; Ferrari, G.M.; Wit, J.H.W. de
1998-01-01
This chapter shows the application results of a novel approach to quantify the ageing of organic coatings using impedance measurements. The ageing quantification is based on the typical impedance behaviour of barrier coatings in immersion. This immersion behaviour is used to determine the limiting c
Quantification of topological concepts using ideals
Directory of Open Access Journals (Sweden)
Robert Lowen
2001-01-01
Full Text Available We introduce certain ideals of real-valued functions as a natural generalization of filters. We show that these ideals establish a canonical framework for the quantification of topological concepts, such as closedness, adherence, and compactness, in the setting of approach spaces.
Quantification of Cannabinoid Content in Cannabis
Tian, Y.; Zhang, F.; Jia, K.; Wen, M.; Yuan, Ch.
2015-09-01
Cannabis is an economically important plant that is used in many fields, in addition to being the most commonly consumed illicit drug worldwide. Monitoring the spatial distribution of cannabis cultivation and judging whether it is drug- or fiber-type cannabis is critical for governments and international communities to understand the scale of the illegal drug trade. The aim of this study was to investigate whether the cannabinoid content in cannabis could be spectrally quantified using a spectrometer and to identify the optimal wavebands for quantifying the cannabinoid content. Spectral reflectance data of dried cannabis leaf samples and the cannabis canopy were measured in the laboratory and in the field, respectively. Correlation analysis and the stepwise multivariate regression method were used to select the optimal wavebands for cannabinoid content quantification based on the laboratory-measured spectral data. The results indicated that the delta-9-tetrahydrocannabinol (THC) content in cannabis leaves could be quantified using laboratory-measured spectral reflectance data and that the 695 nm band is the optimal band for THC content quantification. This study provides prerequisite information for designing spectral equipment to enable immediate quantification of THC content in cannabis and to discriminate drug- from fiber-type cannabis based on THC content quantification in the field.
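The correlation-analysis step of this band selection can be sketched by correlating the reflectance at each waveband with measured THC content and keeping the band with the strongest correlation. The reflectance values and THC percentages below are synthetic, not the study's data.

```python
# Sketch of waveband selection by Pearson correlation against THC content.

def pearson(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

thc = [1.0, 2.0, 3.0, 4.0, 5.0]           # % THC per sample (synthetic)
reflectance = {                            # band (nm) -> per-sample values
    550: [0.30, 0.29, 0.31, 0.30, 0.29],   # uninformative band (synthetic)
    695: [0.40, 0.35, 0.30, 0.25, 0.20],   # tracks THC linearly (synthetic)
}
best_band = max(reflectance,
                key=lambda b: abs(pearson(reflectance[b], thc)))
```

With these synthetic values the 695 nm band is selected because its reflectance varies linearly with THC, while the 550 nm band carries almost no signal.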
Directory of Open Access Journals (Sweden)
Jane Tufvesson
2015-01-01
Full Text Available Introduction. Manual delineation of the left ventricle is clinical standard for quantification of cardiovascular magnetic resonance images despite being time consuming and observer dependent. Previous automatic methods generally do not account for one major contributor to stroke volume, the long-axis motion. Therefore, the aim of this study was to develop and validate an automatic algorithm for time-resolved segmentation covering the whole left ventricle, including basal slices affected by long-axis motion. Methods. Ninety subjects imaged with a cine balanced steady state free precession sequence were included in the study (training set n=40, test set n=50. Manual delineation was the reference standard and second observer analysis was performed in a subset (n=25. The automatic algorithm uses a deformable model with expectation-maximization, followed by automatic removal of papillary muscles and detection of the outflow tract. Results. The mean differences between automatic segmentation and manual delineation were EDV −11 mL, ESV 1 mL, EF −3%, and LVM 4 g in the test set. Conclusions. The automatic LV segmentation algorithm reached accuracy comparable to interobserver for manual delineation, thereby bringing automatic segmentation one step closer to clinical routine. The algorithm and all images with manual delineations are available for benchmarking.
Excursion-Set-Mediated Genetic Algorithm
Noever, David; Baskaran, Subbiah
1995-01-01
Excursion-set-mediated genetic algorithm (ESMGA) is an embodiment of a method of searching for and optimizing computerized mathematical models. It incorporates powerful search and optimization techniques based on concepts analogous to natural selection and the laws of genetics. In comparison with other genetic algorithms, this one achieves a stronger condition for implicit parallelism. It includes three stages of operations in each cycle, analogous to a biological generation.
A Runge-Kutta Nystrom algorithm.
Bettis, D. G.
1973-01-01
A Runge-Kutta algorithm of order five is presented for the solution of the initial value problem where the system of ordinary differential equations is of second order and does not contain the first derivative. The algorithm includes the Fehlberg step control procedure.
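A Runge-Kutta-Nyström method integrates y″ = f(t, y) directly, without rewriting it as a first-order system. The sketch below is a classical fourth-order Nyström scheme, not the paper's order-five method with Fehlberg step control; the harmonic-oscillator test problem is chosen for illustration.

```python
import math

def rkn4_step(f, t, y, v, h):
    """One step of a classical 4th-order Runge-Kutta-Nystrom method
    for y'' = f(t, y), where v is the first derivative y'."""
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * v + h * h / 8 * k1)
    k3 = f(t + h / 2, y + h / 2 * v + h * h / 8 * k2)
    k4 = f(t + h, y + h * v + h * h / 2 * k3)
    y_new = y + h * v + h * h / 6 * (k1 + k2 + k3)
    v_new = v + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)
    return y_new, v_new

# Harmonic oscillator y'' = -y, y(0)=1, y'(0)=0; exact solution y = cos(t).
f = lambda t, y: -y
t, y, v, h = 0.0, 1.0, 0.0, 0.1
for _ in range(10):          # integrate to t = 1
    y, v = rkn4_step(f, t, y, v, h)
    t += h
```

Because f never sees y′, the method needs fewer stage evaluations than a general Runge-Kutta scheme of the same order applied to the equivalent first-order system.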
Germain, Pierre-Luc; Vitriolo, Alessandro; Adamo, Antonio; Laise, Pasquale; Das, Vivek; Testa, Giuseppe
2016-06-20
RNA sequencing (RNAseq) has become the method of choice for transcriptome analysis, yet no consensus exists as to the most appropriate pipeline for its analysis, with current benchmarks suffering important limitations. Here, we address these challenges through a rich benchmarking resource harnessing (i) two RNAseq datasets including ERCC ExFold spike-ins; (ii) Nanostring measurements of a panel of 150 genes on the same samples; (iii) a set of internal, genetically-determined controls; (iv) a reanalysis of the SEQC dataset; and (v) a focus on relative quantification (i.e. across-samples). We use this resource to compare different approaches to each step of RNAseq analysis, from alignment to differential expression testing. We show that methods providing the best absolute quantification do not necessarily provide good relative quantification across samples, that count-based methods are superior for gene-level relative quantification, and that the new generation of pseudo-alignment-based software performs as well as established methods, at a fraction of the computing time. We also assess the impact of library type and size on quantification and differential expression analysis. Finally, we have created a R package and a web platform to enable the simple and streamlined application of this resource to the benchmarking of future methods. © The Author(s) 2016. Published by Oxford University Press on behalf of Nucleic Acids Research.
Scalable algorithms for contact problems
Dostál, Zdeněk; Sadowská, Marie; Vondrák, Vít
2016-01-01
This book presents a comprehensive and self-contained treatment of the authors’ newly developed scalable algorithms for the solutions of multibody contact problems of linear elasticity. The brand new feature of these algorithms is theoretically supported numerical scalability and parallel scalability demonstrated on problems discretized by billions of degrees of freedom. The theory supports solving multibody frictionless contact problems, contact problems with possibly orthotropic Tresca’s friction, and transient contact problems. It covers BEM discretization, jumping coefficients, floating bodies, mortar non-penetration conditions, etc. The exposition is divided into four parts, the first of which reviews appropriate facets of linear algebra, optimization, and analysis. The most important algorithms and optimality results are presented in the third part of the volume. The presentation is complete, including continuous formulation, discretization, decomposition, optimality results, and numerical experimen...
Simple Algorithm Portfolio for SAT
Nikolic, Mladen; Janicic, Predrag
2011-01-01
The importance of algorithm portfolio techniques for SAT has long been noted, and a number of very successful systems have been devised, including the most successful one --- SATzilla. However, all these systems are quite complex (to understand, reimplement, or modify). In this paper we propose a new algorithm portfolio for SAT that is extremely simple, but at the same time so efficient that it outperforms SATzilla. For a new SAT instance to be solved, our portfolio finds its k-nearest neighbors from the training set and invokes the solver that performs best on those instances. The main distinguishing feature of our algorithm portfolio is the locality of the selection procedure --- the selection of a SAT solver is based only on a few instances similar to the input one.
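The k-nearest-neighbor selection rule described above can be sketched in a few lines: find the k training instances closest in feature space and pick the solver with the best total runtime on them. The feature vectors, runtimes, and solver names below are invented for illustration.

```python
# Sketch of k-NN portfolio selection for SAT solvers.

def select_solver(instance, train, runtimes, k=3):
    """train: list of (features, name); runtimes: name -> {solver: seconds}."""
    dist = lambda a, b: sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5
    neigh = sorted(train, key=lambda fv: dist(fv[0], instance))[:k]
    solvers = next(iter(runtimes.values())).keys()
    # choose the solver minimizing total runtime over the k neighbours
    return min(solvers,
               key=lambda s: sum(runtimes[name][s] for _, name in neigh))

train = [((0.1, 0.9), "i1"), ((0.2, 0.8), "i2"), ((0.9, 0.1), "i3")]
runtimes = {"i1": {"A": 1.0, "B": 9.0},
            "i2": {"A": 2.0, "B": 8.0},
            "i3": {"A": 9.0, "B": 1.0}}
chosen = select_solver((0.15, 0.85), train, runtimes, k=2)  # near i1, i2
```

Because the decision depends only on a handful of similar instances, there is no global model to train, which is the source of the portfolio's simplicity.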
Quantum walks and search algorithms
Portugal, Renato
2013-01-01
This book addresses an interesting area of quantum computation called quantum walks, which play an important role in building quantum algorithms, in particular search algorithms. Quantum walks are the quantum analogue of classical random walks. It is known that quantum computers have great power for searching unsorted databases. This power extends to many kinds of searches, particularly to the problem of finding a specific location in a spatial layout, which can be modeled by a graph. The goal is to find a specific node knowing that the particle uses the edges to jump from one node to the next. This book is self-contained, with main topics that include: Grover's algorithm, describing its geometrical interpretation and evolution by means of the spectral decomposition of the evolution operator; analytical solutions of quantum walks on important graphs like the line, cycles, two-dimensional lattices, and hypercubes using Fourier transforms; and quantum walks on generic graphs, describing methods to calculate the limiting d...
Engineering a Cache-Oblivious Sorting Algorithm
DEFF Research Database (Denmark)
Brodal, Gerth Stølting; Fagerberg, Rolf; Vinther, Kristoffer
2007-01-01
This paper is an algorithmic engineering study of cache-oblivious sorting. We investigate by empirical methods a number of implementation issues and parameter choices for the cache-oblivious sorting algorithm Lazy Funnelsort, and compare the final algorithm with Quicksort, the established standard for comparison-based sorting, as well as with recent cache-aware proposals. The main result is a carefully implemented cache-oblivious sorting algorithm, which our experiments show can be faster than the best Quicksort implementation we are able to find, already for input sizes well within the limits of RAM. It is also at least as fast as the recent cache-aware implementations included in the test. On disk the difference is even more pronounced regarding Quicksort and the cache-aware algorithms, whereas the algorithm is slower than a careful implementation of multiway Mergesort such as TPIE.
Performance of a Distributed Stochastic Approximation Algorithm
Bianchi, Pascal; Hachem, Walid
2012-01-01
In this paper, a distributed stochastic approximation algorithm is studied. Applications of such algorithms include decentralized estimation, optimization, control or computing. The algorithm consists in two steps: a local step, where each node in a network updates a local estimate using a stochastic approximation algorithm with decreasing step size, and a gossip step, where a node computes a local weighted average between its estimates and those of its neighbors. Convergence of the estimates toward a consensus is established under weak assumptions. The approach relies on two main ingredients: the existence of a Lyapunov function for the mean field in the agreement subspace, and a contraction property of the random matrices of weights in the subspace orthogonal to the agreement subspace. A second order analysis of the algorithm is also performed under the form of a Central Limit Theorem. The Polyak-averaged version of the algorithm is also considered.
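The two-step scheme described above can be simulated in a few lines: each node runs a local stochastic-approximation update with a decreasing step size, then a gossip step averages its estimate with its neighbours'. The targets, the ring topology, and the weights below are illustrative.

```python
# Toy simulation of distributed stochastic approximation with gossip.

def simulate(targets, n_iter=2000):
    n = len(targets)
    x = [0.0] * n
    for k in range(1, n_iter + 1):
        gamma = 1.0 / k                      # decreasing step size
        # local step: each node drifts toward its own target
        x = [xi + gamma * (ti - xi) for xi, ti in zip(x, targets)]
        # gossip step on a ring: doubly stochastic averaging with neighbours
        x = [0.5 * x[i] + 0.25 * x[i - 1] + 0.25 * x[(i + 1) % n]
             for i in range(n)]
    return x

est = simulate([1.0, 2.0, 3.0, 4.0])   # estimates reach consensus near 2.5
```

Because the gossip matrix is doubly stochastic it preserves the network average, so the nodes agree on the mean of the local targets, mirroring the consensus result established in the paper.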
Visualizing output for a data learning algorithm
Carson, Daniel; Graham, James; Ternovskiy, Igor
2016-05-01
This paper details the process we went through to visualize the output of our data learning algorithm. We have been developing a hierarchical self-structuring learning algorithm based around the general principles of the LaRue model. One example of a proposed application of this algorithm would be traffic analysis, chosen because it is conceptually easy to follow and there is a significant amount of already existing data and related research material to work with. While we chose the tracking of vehicles for our initial approach, it is by no means the only target of our algorithm. Flexibility is the end goal; however, we still need somewhere to start. To that end, this paper details our creation of the visualization GUI for our algorithm, the features we included, and the initial results we obtained from our algorithm running a few of the traffic-based scenarios we designed.
Fast algorithm on string cross pattern matching
Institute of Scientific and Technical Information of China (English)
Liu Gongshen; Li Jianhua; Li Shenghong
2005-01-01
Given a set U consisting of strings defined on an alphabet ∑, string cross pattern matching is to find all the matches between every two strings in U. It is utilized in text processing tasks such as removing duplicate strings. This paper presents a fast string cross pattern matching algorithm based on extracting high-frequency strings. Compared with existing algorithms, including single-pattern and multi-pattern matching algorithms, this algorithm features both low time complexity and low space complexity. Because the Chinese alphabet is large and the average length of Chinese words is short, this algorithm is especially suitable for processing text written in Chinese, particularly when the size of ∑ is large and the number of strings is far more than the maximum length of the strings of set U.
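The problem itself can be stated as code: report every occurrence of each string of U inside every other string of U. This naive sketch illustrates only the quadratic baseline; the paper's contribution is pruning this search by extracting high-frequency strings.

```python
# Naive baseline for string cross pattern matching over a set U.

def cross_matches(U):
    """Return (i, j, pos) for each occurrence of U[i] inside U[j], i != j."""
    matches = []
    for i, p in enumerate(U):
        for j, t in enumerate(U):
            if i == j:
                continue
            start = t.find(p)
            while start != -1:              # all occurrences of p in t
                matches.append((i, j, start))
                start = t.find(p, start + 1)
    return matches

m = cross_matches(["ab", "cabab", "ab"])
```

With the small example set, "ab" occurs twice inside "cabab" and once inside the other copy of "ab", which is exactly the duplication-detection use case mentioned in the abstract.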
Detecting Danger: The Dendritic Cell Algorithm
Greensmith, Julie; Cayzer, Steve
2010-01-01
The Dendritic Cell Algorithm (DCA) is inspired by the function of the dendritic cells of the human immune system. In nature, dendritic cells are the intrusion detection agents of the human body, policing the tissue and organs for potential invaders in the form of pathogens. In this research, an abstract model of DC behaviour is developed and subsequently used to form an algorithm, the DCA. The abstraction process was facilitated through close collaboration with laboratory-based immunologists, who performed bespoke experiments, the results of which are used as an integral part of this algorithm. The DCA is a population-based algorithm, with each agent in the system represented as an 'artificial DC'. Each DC has the ability to combine multiple data streams and can add context to data suspected as anomalous. In this chapter the abstraction process and details of the resultant algorithm are given. The algorithm is applied to numerous intrusion detection problems in computer security including the detection of p...
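The signal-fusion idea at the heart of the DCA can be sketched as follows: each artificial DC combines several input streams (called "PAMP", "danger", and "safe" in the published model) into a context value, and items sampled mostly under a danger context are flagged anomalous. The weights and threshold below are illustrative, not the calibrated values from this research.

```python
# Sketch of dendritic-cell style signal fusion for anomaly detection.

def dc_context(pamp, danger, safe):
    """Positive result suggests anomalous context; negative suggests normal."""
    return 2.0 * pamp + 1.0 * danger - 2.0 * safe   # illustrative weights

def classify(samples, threshold=0.0):
    """samples: list of (pamp, danger, safe) signal triples for one item."""
    score = sum(dc_context(*s) for s in samples) / len(samples)
    return "anomalous" if score > threshold else "normal"

verdict = classify([(0.8, 0.6, 0.1), (0.7, 0.5, 0.2)])
```

The key design point is that safe signals actively suppress the anomaly score, so transient danger signals in an otherwise normal context do not trigger an alert.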
Semiconvergence and Relaxation Parameters for Projected SIRT Algorithms
DEFF Research Database (Denmark)
Elfving, Tommy; Hansen, Per Christian; Nikazad, Touraj
2012-01-01
We give a detailed study of the semiconvergence behavior of projected nonstationary simultaneous iterative reconstruction technique (SIRT) algorithms, including the projected Landweber algorithm. We also consider the use of a relaxation parameter strategy, proposed recently for the standard...
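The projected Landweber algorithm, the simplest member of the projected SIRT family studied above, takes a gradient step on ‖Ax − b‖² with a relaxation parameter and then projects onto a convex set (here the nonnegative orthant). The small system below is illustrative.

```python
# Sketch of projected Landweber iteration for min ||Ax - b||^2 s.t. x >= 0.

def projected_landweber(A, b, lam, n_iter=500):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(n_iter):
        # residual r = Ax - b
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i]
             for i in range(len(A))]
        # gradient g = A^T r
        g = [sum(A[i][j] * r[i] for i in range(len(A))) for j in range(n)]
        # relaxed step followed by projection onto the nonnegative orthant
        x = [max(0.0, x[j] - lam * g[j]) for j in range(n)]
    return x

A = [[2.0, 0.0], [0.0, 1.0]]
b = [2.0, -1.0]                          # unconstrained solution is (1, -1)
x = projected_landweber(A, b, lam=0.2)   # projection forces x[1] to 0
```

The relaxation parameter must satisfy 0 < λ < 2/‖AᵀA‖ for convergence, which is precisely the kind of parameter strategy the paper analyses.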
Akl, Selim G
1985-01-01
Parallel Sorting Algorithms explains how to use parallel algorithms to sort a sequence of items on a variety of parallel computers. The book reviews the sorting problem, the parallel models of computation, parallel algorithms, and the lower bounds on the parallel sorting problems. The text also presents twenty different algorithms for architectures such as linear arrays, mesh-connected computers, and cube-connected computers. Another example where the algorithms can be applied is the shared-memory SIMD (single instruction stream multiple data stream) computers in which the whole sequence to be sorted can fit in the
Voice Matching Using Genetic Algorithm
Directory of Open Access Journals (Sweden)
Abhishek Bal
2014-03-01
Full Text Available In this paper, the use of a Genetic Algorithm (GA) for voice recognition is described. The practical application of the GA to the solution of engineering problems is a rapidly emerging approach in the fields of control engineering and signal processing. Genetic algorithms are useful for searching large and poorly defined spaces in a multi-directional way. Voice is a signal of infinite information. Digital processing of the voice signal is very important for automatic voice recognition technology. Nowadays, voice processing is very important in security mechanisms due to the possibility of mimicry. So studying voice feature extraction in voice processing is necessary in fields such as the military, hospitals, telephone systems, and investigation bureaus. In order to extract valuable information from the voice signal, make decisions on the process, and obtain results, the data need to be manipulated and analyzed. In this paper, if the instant voice does not match the same person's reference voices in the database, then the GA is applied between two randomly chosen reference voices. The instant voice is then compared with the result of the GA, which is used with its three main steps: selection, crossover, and mutation. We illustrate our approach with different samples of voices from people in our institution.
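The three GA steps named above (selection, crossover, mutation) can be illustrated on bit-string "feature vectors", where matching a fixed reference pattern stands in for the paper's comparison against stored reference voices. All data below are invented.

```python
import random

# Toy GA illustrating selection, crossover and mutation on bit strings.
REFERENCE = [1, 0, 1, 1, 0, 0, 1, 0]    # hypothetical reference feature vector

def match_score(v):
    return sum(a == b for a, b in zip(v, REFERENCE))

def ga(pop_size=30, generations=80, seed=7):
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in REFERENCE] for _ in range(pop_size)]
    for _ in range(generations):
        nxt = []
        for _ in range(pop_size):
            a = max(rng.sample(pop, 3), key=match_score)  # tournament selection
            b = max(rng.sample(pop, 3), key=match_score)
            cut = rng.randrange(1, len(REFERENCE))        # one-point crossover
            child = a[:cut] + b[cut:]
            if rng.random() < 0.1:                        # bit-flip mutation
                child[rng.randrange(len(REFERENCE))] ^= 1
            nxt.append(child)
        pop = nxt
    return max(pop, key=match_score)

best = ga()
```

Real voice matching would compare extracted spectral features rather than raw bits, but the evolutionary loop has the same shape.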
Fourier Lucas-Kanade algorithm.
Lucey, Simon; Navarathna, Rajitha; Ashraf, Ahmed Bilal; Sridharan, Sridha
2013-06-01
In this paper, we propose a framework for both gradient descent image and object alignment in the Fourier domain. Our method centers upon the classical Lucas & Kanade (LK) algorithm where we represent the source and template/model in the complex 2D Fourier domain rather than in the spatial 2D domain. We refer to our approach as the Fourier LK (FLK) algorithm. The FLK formulation is advantageous when one preprocesses the source image and template/model with a bank of filters (e.g., oriented edges, Gabor, etc.) as 1) it can handle substantial illumination variations, 2) the inefficient preprocessing filter bank step can be subsumed within the FLK algorithm as a sparse diagonal weighting matrix, 3) unlike traditional LK, the computational cost is invariant to the number of filters and as a result is far more efficient, and 4) this approach can be extended to the Inverse Compositional (IC) form of the LK algorithm where nearly all steps (including Fourier transform and filter bank preprocessing) can be precomputed, leading to an extremely efficient and robust approach to gradient descent image matching. Further, these computational savings translate to nonrigid object alignment tasks that are considered extensions of the LK algorithm, such as those found in Active Appearance Models (AAMs).
Abello, James M
1999-01-01
The AMS and DIMACS are pleased to present this 50th volume in the DIMACS series. This series contains volumes coming out of programs at the Center for Discrete Mathematics and Theoretical Computer Science (DIMACS), which is headquartered at Rutgers University in partnership with Princeton University, AT&T Labs-Research, Bell Labs (Lucent Technologies), NEC Research Institute, and Telcordia Technologies (formerly Bellcore). The series includes topics on research and education concerning areas such as discrete and computational geometry, discrete optimization, data structures and algorithms, computational intractability, massive data sets, networks, graph theory, combinatorics, computational number theory and cryptology, discrete probability, recursive function theory and mathematical logic, Boolean functions, computational and mathematical biology, and computational algebra.
Methane bubbling: from speculation to quantification
Grinham, A. R.; Dunbabin, M.; Yuan, Z.
2013-12-01
Rates of methane bubbling (ebullition) represent a notoriously difficult emission pathway to quantify with highly variable spatial and temporal changes. However, the importance of bubbling fluxes in terms of total emissions is increasingly recognised from a number of different globally relevant natural systems including lakes, reservoirs and wetlands. This represents a critical challenge to current survey efforts to quantify greenhouse gas emissions and reduce the uncertainty associated with bubbling fluxes. A number of different methods have been proposed to overcome this challenge including bubble traps, floating chambers, echo sounders, laser spectrography and camera systems. Each method has relative merits and deficiencies with all trading-off the ability to directly quantify methane and provide spatial and temporal coverage. Here we present a novel method that allows direct measurement of methane bubble concentration as well as the ability to persistently monitor a wide spatial area. Central to the monitoring system is an Autonomous Surface Vessel (ASV) and an Optical Methane Detector (OMD). The ASV is equipped with solar panels and uses electric motors for propulsion to allow persistent environmental monitoring. The OMD has a path length of 1.3 m and 7 Hz sampling so a typical mission of 3 hours at 1 m s-1 covers an area in excess of 10 000 m2 and over 65 000 data points. The system was assessed on four sub-tropical freshwater reservoirs of varying surface area (0.5 to 100 km2), age (2 to 65 y) and catchment land use (40 to 90% natural vegetation cover). Each reservoir had unique challenges in terms of navigation and field conditions to test feasibility of this method. Deployment length varied from a single day to over 4 months to test method durability. In addition to ASV bubble surveys, floating static chambers were deployed to determine diffusive fluxes. Localised instantaneous bubble flux rates within a single reservoir ranged over three orders of
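The coverage figures quoted above can be checked with back-of-envelope arithmetic using only the mission parameters stated in the abstract: 3 h at 1 m/s with a 1.3 m optical path, sampling at 7 Hz.

```python
# Consistency check of the ASV/OMD survey coverage figures.
duration_s = 3 * 3600    # 3-hour mission
speed = 1.0              # m/s
path_width = 1.3         # m optical path of the OMD
rate_hz = 7              # OMD sampling rate

distance = duration_s * speed        # metres travelled
area = distance * path_width         # swept area, > 10 000 m^2 as stated
samples = duration_s * rate_hz       # data points, > 65 000 as stated
```

The computed values (about 14 000 m² and 75 600 points) are consistent with the "in excess of" figures given in the abstract.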
Nanotechnology-based strategies for the detection and quantification of microRNA.
Degliangeli, Federica; Pompa, Pier Paolo; Fiammengo, Roberto
2014-07-28
MicroRNAs (miRNAs) are important regulators of gene expression, and many pathological conditions, including cancer, are characterized by altered miRNA expression levels. Therefore, accurate and sensitive quantification of miRNAs may enable correct disease diagnosis, establishing these small noncoding RNA transcripts as valuable biomarkers. Aiming at overcoming some limitations of conventional quantification strategies, nanotechnology is currently providing numerous significant alternatives to miRNA sensing. In this review an up-to-date account of nanotechnology-based strategies for miRNA detection and quantification is given. The topics covered are: nanoparticle-based approaches in solution, sensing based on nanostructured surfaces, combined nanoparticle/surface sensing approaches, and single-molecule approaches.
Modified Clipped LMS Algorithm
Directory of Open Access Journals (Sweden)
Lotfizad Mojtaba
2005-01-01
Full Text Available Abstract A new algorithm is proposed for updating the weights of an adaptive filter. The proposed algorithm is a modification of an existing method, namely, the clipped LMS, and uses a three-level quantization scheme that involves threshold clipping of the input signals in the filter weight update formula. Mathematical analysis shows the convergence of the filter weights to the optimum Wiener filter weights. Also, it can be proved that the proposed modified clipped LMS (MCLMS) algorithm has better tracking than the LMS algorithm. In addition, this algorithm has reduced computational complexity relative to the unmodified one. By using a suitable threshold, it is possible to increase the tracking capability of the MCLMS algorithm compared to the LMS algorithm, but this causes slower convergence. Computer simulations confirm the mathematical analysis presented.
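The three-level clipped update described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the threshold, step size, and filter length below are arbitrary choices.

```python
import numpy as np

def three_level(v, threshold):
    """Three-level quantizer: maps each sample to +1, 0, or -1 by threshold clipping."""
    return np.where(v > threshold, 1.0, np.where(v < -threshold, -1.0, 0.0))

def mclms(x, d, num_taps, mu, threshold):
    """Sketch of a modified clipped LMS filter: the input vector in the weight
    update is replaced by its three-level quantized version, so the update
    avoids multiplications by the raw input samples."""
    w = np.zeros(num_taps)
    for n in range(num_taps - 1, len(x)):
        xn = x[n - num_taps + 1:n + 1][::-1]   # most recent sample first
        e = d[n] - w @ xn                      # a priori estimation error
        w += mu * e * three_level(xn, threshold)
    return w
```

In a noiseless system-identification setup the weights converge toward the unknown impulse response; raising the threshold zeroes out more updates, which illustrates the tracking-versus-convergence trade-off the abstract describes.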
Group Leaders Optimization Algorithm
Daskin, Anmer
2010-01-01
The complexity of global optimization algorithms makes their implementation difficult and causes them to require more computational resources during optimization. The ability to explore the whole solution space without increasing the complexity of the algorithms is of great importance, not only for obtaining reliable results but also for making the implementation of these algorithms more convenient for higher-dimensional and complex real-world problems in science and engineering. In this paper, we present a new global optimization algorithm in which the influence of leaders in social groups is used as an inspiration for an evolutionary technique, designed into a group architecture similar to that of Cooperative Coevolutionary Algorithms. We present the implementation method and experimental results for single- and multidimensional optimization test problems and a scientific real-world problem: the energies and geometric structures of Lennard-Jones clusters.
Entropy Message Passing Algorithm
Ilic, Velimir M; Branimir, Todorovic T
2009-01-01
Message passing over a factor graph can be considered a generalization of many well-known algorithms for the efficient marginalization of a multivariate function. A specific instance of the algorithm is obtained by choosing an appropriate commutative semiring for the range of the function to be marginalized. Examples include the Viterbi algorithm, obtained on the max-product semiring, and the forward-backward algorithm, obtained on the sum-product semiring. In this paper, the Entropy Message Passing (EMP) algorithm is developed. It operates over the entropy semiring, previously introduced in automata theory. It is shown how EMP extends the use of message passing over factor graphs to probabilistic model algorithms such as the Expectation Maximization algorithm, gradient methods, and the computation of model entropy, unifying the work of different authors.
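The semiring-generic view described above can be illustrated with a forward pass over a small chain-structured factor graph. The entropy semiring itself is not reproduced here; sum-product and max-product are shown as the two familiar instances the abstract names, and the factor tables are arbitrary examples.

```python
# A commutative semiring is (oplus, otimes, zero, one).
SUM_PRODUCT = (lambda a, b: a + b, lambda a, b: a * b, 0.0, 1.0)
MAX_PRODUCT = (max, lambda a, b: a * b, 0.0, 1.0)

def chain_marginal(unary, pairwise, semiring):
    """Forward message passing over a chain, generic in the semiring.

    unary[i][s]: factor value of state s at node i.
    pairwise[i][s][t]: factor between node i and node i+1.
    Returns oplus over all state sequences of the otimes of all factors:
    sum-product yields the partition function, max-product the best score.
    """
    oplus, otimes, zero, one = semiring
    msg = [one] * len(unary[0])
    for i, u in enumerate(unary):
        msg = [otimes(m, f) for m, f in zip(msg, u)]   # absorb unary factor
        if i < len(unary) - 1:                         # pass message forward
            nxt = []
            for t in range(len(unary[i + 1])):
                acc = zero
                for s in range(len(msg)):
                    acc = oplus(acc, otimes(msg[s], pairwise[i][s][t]))
                nxt.append(acc)
            msg = nxt
    total = zero
    for m in msg:
        total = oplus(total, m)
    return total
```

Swapping the semiring tuple is all it takes to turn the same recursion into a different algorithm, which is the point the abstract makes about Viterbi versus forward-backward.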
Investigating Performance of Various Natural Computing Algorithms
Directory of Open Access Journals (Sweden)
Bharat V. Chawda
2017-01-01
Full Text Available Nature has been around for millennia. Natural elements have withstood harsh complexities for years and have proved their efficiency in tackling them. This aspect has inspired many researchers, over the last couple of decades, to design algorithms based on phenomena in the natural world. Such algorithms are known as natural computing algorithms or nature-inspired algorithms. These algorithms have established their ability to solve a large number of real-world complex problems by providing optimal solutions within a reasonable time. This paper presents an investigation assessing the performance of some well-known natural computing algorithms and their variations. These algorithms include Genetic Algorithms, Ant Colony Optimization, River Formation Dynamics, the Firefly Algorithm and Cuckoo Search. The Traveling Salesman Problem (TSP) is used here as a test-bed problem for the performance evaluation of these algorithms. It is a combinatorial optimization problem and one of the most famous NP-hard problems. It is simple and easy to understand but, at the same time, very difficult to solve optimally in a reasonable time, particularly as the number of cities increases. The source code for the above natural computing algorithms was developed in MATLAB R2015b and applied to several TSP instances given in the TSPLIB library. The results obtained are analyzed based on various criteria such as tour length, required iterations, convergence time and quality of solutions. Conclusions derived from this analysis help to establish the superiority of the Firefly Algorithm over the other algorithms in comparative terms.
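To make the TSP test bed concrete, the tour-length objective and a simple nearest-neighbour baseline can be written in a few lines. This is only an illustration of the evaluation setup; the study's MATLAB implementations of the metaheuristics themselves are not reproduced.

```python
import math

def tour_length(coords, tour):
    """Euclidean length of a closed tour over 2-D city coordinates."""
    return sum(math.dist(coords[tour[i]], coords[tour[(i + 1) % len(tour)]])
               for i in range(len(tour)))

def nearest_neighbour_tour(coords, start=0):
    """Greedy construction heuristic: always visit the closest unvisited city.
    Metaheuristics such as those compared in the paper typically start from
    tours like this and improve them."""
    unvisited = set(range(len(coords))) - {start}
    tour = [start]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda c: math.dist(coords[last], coords[c]))
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour
```

The `tour_length` value is exactly the first comparison criterion listed in the abstract; a TSPLIB instance is just a list of such coordinates.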
Directory of Open Access Journals (Sweden)
Assaf Zaritsky
Full Text Available Confocal microscopy analysis of fluorescence and morphology is becoming the standard tool in cell biology and molecular imaging. Accurate quantification algorithms are required to enhance the understanding of different biological phenomena. We present a novel approach based on image segmentation of multi-cellular regions in bright field images, demonstrating enhanced quantitative analyses and a better understanding of cell motility. We present MultiCellSeg, a segmentation algorithm that separates multi-cellular regions from background in bright field images and is based on the classification of local patches within an image: a cascade of Support Vector Machines (SVMs) is applied using basic image features. Post-processing includes additional classification and graph-cut segmentation to reclassify erroneous regions and refine the segmentation. This approach leads to a parameter-free and robust algorithm. Comparison to an alternative algorithm on wound healing assay images demonstrates its superiority. The proposed approach was used to evaluate common cell migration models such as wound healing and scatter assay. It was applied to quantify the acceleration effect of Hepatocyte growth factor/scatter factor (HGF/SF) on healing rate in a time-lapse confocal microscopy wound healing assay and demonstrated that the healing rate is linear in both treated and untreated cells, and that HGF/SF accelerates the healing rate by approximately two-fold. A novel, fully automated, accurate, zero-parameter method to classify and score scatter-assay images was developed and demonstrated that multi-cellular texture is an excellent descriptor to measure HGF/SF-induced cell scattering. We show that exploitation of textural information from differential interference contrast (DIC) images on the multi-cellular level can prove beneficial for the analyses of wound healing and scatter assays. The proposed approach is generic and can be used alone or alongside traditional
Quantification and Genotyping of Human Sapoviruses in the Llobregat River Catchment, Spain
Sano, Daisuke; Pérez-Sautu, Unai; Guix, Susana; Pintó, Rosa Maria; Miura, Takayuki; Okabe, Satoshi; Bosch, Albert
2011-01-01
Human sapoviruses (SaVs) were quantified and characterized in an 18-month survey conducted along the Llobregat river catchment area in Spain. Sample types included freshwater, untreated and treated wastewater, and drinking water. All genogroups were recovered, and a seasonal distribution was observed. This is the first report of SaV quantification and genotyping in the environment outside Japan. PMID:21148702
Quantification of gap junctional intercellular communication based on digital image analysis
DEFF Research Database (Denmark)
Hofgaard, Johannes P; Mollerup, Sarah; Holstein-Rathlou, Niels-Henrik
2009-01-01
numerous cells to obtain reliable estimates of metabolic coupling. Quantification is often based on manual counting of fluorescent cells, which is time consuming and may include some degree of subjectivity. In this report, we introduce a technique based on digital image analysis, and the software...
Quantification methods of Black Carbon: Comparison of Rock-Eval analysis with traditional methods
Poot, A.; Quik, J.T.K.; Veld, H.; Koelmans, A.A.
2009-01-01
Black Carbon (BC) quantification methods are reviewed, including new Rock-Eval 6 data on BC reference materials. BC has been reported to have major impacts on climate, human health and environmental quality. Especially for risk assessment of persistent organic pollutants (POPs) it is important to ac
Almalki, Manal; Gray, Kathleen; Sanchez, Fernando Martin
2015-01-01
Self-quantification is seen as an emerging paradigm for health care self-management. Self-quantification systems (SQS) can be used for tracking, monitoring, and quantifying health aspects including mental, emotional, physical, and social aspects in order to gain self-knowledge. However, there has been a lack of a systematic approach for conceptualising and mapping the essential activities that are undertaken by individuals who are using SQS in order to improve health outcomes. In this paper, we propose a new model of personal health information self-quantification systems (PHI-SQS). The PHI-SQS model describes two types of activities that individuals go through during their journey of self-managed health practice: 'self-quantification' and 'self-activation'. Here, we aimed to examine thoroughly the first type of activity in PHI-SQS, 'self-quantification'. Our objectives were to review the data management processes currently supported in a representative set of self-quantification tools and ancillary applications, and to provide a systematic approach for conceptualising and mapping these processes with the individuals' activities. We reviewed and compared eleven self-quantification tools and applications (Zeo Sleep Manager, Fitbit, Actipressure, MoodPanda, iBGStar, Sensaris Senspod, 23andMe, uBiome, Digifit, BodyTrack, and Wikilife) that collect three key health data types (environmental exposure, physiological patterns, genetic traits). We investigated the interaction taking place at different data flow stages between the individual user and the self-quantification technology used. We found that these eleven self-quantification tools and applications represent two major tool types (primary and secondary self-quantification systems). In each type, the individuals experience different processes and activities which are substantially influenced by the technologies' data management capabilities. Self-quantification in personal health maintenance
Energy Technology Data Exchange (ETDEWEB)
Yoo, Hyun Suk; Lee, Jeong Min; Yoon, Jeong Hee; Lee, Dong Ho; Chang, Won; Han, Joon Koo [Seoul National University Hospital, Seoul (Korea, Republic of)
2016-09-15
To prospectively compare technical success rate and reliable measurements of virtual touch quantification (VTQ) elastography and elastography point quantification (ElastPQ), and to correlate liver stiffness (LS) measurements obtained by the two elastography techniques. Our study included 85 patients, 80 of whom were previously diagnosed with chronic liver disease. The technical success rate and reliable measurements of the two kinds of point shear wave elastography (pSWE) techniques were compared by χ² analysis. LS values measured using the two techniques were compared and correlated via Wilcoxon signed-rank test, Spearman correlation coefficient, and 95% Bland-Altman limit of agreement. The intraobserver reproducibility of ElastPQ was determined by 95% Bland-Altman limit of agreement and intraclass correlation coefficient (ICC). The two pSWE techniques showed similar technical success rate (98.8% for VTQ vs. 95.3% for ElastPQ, p = 0.823) and reliable LS measurements (95.3% for VTQ vs. 90.6% for ElastPQ, p = 0.509). The mean LS measurements obtained by VTQ (1.71 ± 0.47 m/s) and ElastPQ (1.66 ± 0.41 m/s) were not significantly different (p = 0.209). The LS measurements obtained by the two techniques showed strong correlation (r = 0.820); in addition, the 95% limit of agreement of the two methods was 27.5% of the mean. Finally, the ICC of repeat ElastPQ measurements was 0.991. Virtual touch quantification and ElastPQ showed similar technical success rate and reliable measurements, with strongly correlated LS measurements. However, the two methods are not interchangeable due to the large limit of agreement.
Imaging of axial spondyloarthritis including ankylosing spondylitis.
Braun, J; Baraliakos, X
2011-03-01
New bone formation of the vertebral column is pathognomonic for ankylosing spondylitis (AS), while acute and/or chronic changes in the sacroiliac joints are relevant for diagnosis. The 'gold standard' for assessment of structural changes in AS are conventional radiographs, while MRI is useful to assess inflammation. Recent MRI studies have shown that the lower half of the thoracic spine is most commonly affected in AS. Scoring tools for spinal inflammation such as the ASspiMRI-a have been proposed, successfully used in large clinical trials and compared in a multireader experiment; none was finally preferred by OMERACT. Quantification of structural spinal AS changes is performed by the modified Stokes AS Spine Score (mSASSS), which evaluates lateral cervical and lumbar radiographs. Two years was identified as the shortest possible follow-up time based on the reliability and sensitivity to change of the mSASSS. A potential disadvantage of the mSASSS is that the thoracic spine is not included. Recent data based on the mSASSS have suggested that tumour necrosis factor blockers do not inhibit radiographic progression in AS. Since the mean radiographic change is reported to be less than 1 syndesmophyte over 2 years, the sensitivity to change of the mSASSS has been questioned. However, in one study where continuous non-steroidal anti-inflammatory drug use was compared with on-demand use, a difference between these two methods of drug intake was reported. The face and construct validity of the mSASSS has been criticised because a score of '1' contains a mixture of osteodestructive (erosions) and osteoproliferative changes (squaring and sclerosis). A new scoring system, the RASSS, which concentrates only on bone formation and which includes the lower part of the thoracic spine, is currently being evaluated. The relationship between inflammation and new bone formation in AS has recently been investigated. Low sclerostin and DKK-1 serum levels, both inhibitors of bone
An introduction to the analysis of algorithms
Sedgewick, Robert
2013-01-01
Despite growing interest, basic information on methods and models for mathematically analyzing algorithms has rarely been directly accessible to practitioners, researchers, or students. An Introduction to the Analysis of Algorithms, Second Edition, organizes and presents that knowledge, fully introducing primary techniques and results in the field. Robert Sedgewick and the late Philippe Flajolet have drawn from both classical mathematics and computer science, integrating discrete mathematics, elementary real analysis, combinatorics, algorithms, and data structures. They emphasize the mathematics needed to support scientific studies that can serve as the basis for predicting algorithm performance and for comparing different algorithms on the basis of performance. Techniques covered in the first half of the book include recurrences, generating functions, asymptotics, and analytic combinatorics. Structures studied in the second half of the book include permutations, trees, strings, tries, and mappings. Numerous ...
Stewart, Robert; White, Devin; Urban, Marie; Morton, April; Webster, Clayton; Stoyanov, Miroslav; Bright, Eddie; Bhaduri, Budhendra L.
2013-05-01
The Population Density Tables (PDT) project at Oak Ridge National Laboratory (www.ornl.gov) is developing population density estimates for specific human activities under normal patterns of life based largely on information available in open source. Currently, activity-based density estimates are based on simple summary data statistics such as range and mean. Researchers are interested in improving activity estimation and uncertainty quantification by adopting a Bayesian framework that considers both data and sociocultural knowledge. Under a Bayesian approach, knowledge about population density may be encoded through the process of expert elicitation. Due to the scale of the PDT effort which considers over 250 countries, spans 50 human activity categories, and includes numerous contributors, an elicitation tool is required that can be operationalized within an enterprise data collection and reporting system. Such a method would ideally require that the contributor have minimal statistical knowledge, require minimal input by a statistician or facilitator, consider human difficulties in expressing qualitative knowledge in a quantitative setting, and provide methods by which the contributor can appraise whether their understanding and associated uncertainty was well captured. This paper introduces an algorithm that transforms answers to simple, non-statistical questions into a bivariate Gaussian distribution as the prior for the Beta distribution. Based on geometric properties of the Beta distribution parameter feasibility space and the bivariate Gaussian distribution, an automated method for encoding is developed that responds to these challenging enterprise requirements. Though created within the context of population density, this approach may be applicable to a wide array of problem domains requiring informative priors for the Beta distribution.
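The paper's encoding algorithm (simple questions mapped through a bivariate Gaussian onto the Beta parameter space) is not reproduced here. The general idea of turning elicited quantities into an informative Beta prior can, however, be illustrated by plain moment matching; the function name and inputs below are illustrative assumptions, not the paper's interface.

```python
def beta_from_mean_sd(mean, sd):
    """Moment-match a Beta(alpha, beta) prior to an elicited mean and standard
    deviation for a proportion. A stand-in illustration only: the PDT encoding
    algorithm asks non-statistical questions instead of a mean and sd."""
    var = sd ** 2
    # A Beta distribution can only represent variances below mean*(1-mean).
    if not 0.0 < var < mean * (1.0 - mean):
        raise ValueError("elicited sd is incompatible with a Beta distribution")
    common = mean * (1.0 - mean) / var - 1.0
    return mean * common, (1.0 - mean) * common
```

For example, an expert who believes a proportion is around 0.2 give-or-take 0.1 is encoded as Beta(3, 12), whose mean and standard deviation reproduce the elicited values exactly.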
Statistical modeling and MAP estimation for body fat quantification with MRI ratio imaging
Wong, Wilbur C. K.; Johnson, David H.; Wilson, David L.
2008-03-01
We are developing small animal imaging techniques to characterize the kinetics of lipid accumulation/reduction of fat depots in response to genetic/dietary factors associated with obesity and metabolic syndromes. Recently, we developed an MR ratio imaging technique that approximately yields lipid/(lipid + water). In this work, we develop a statistical model for the ratio distribution that explicitly includes a partial volume (PV) fraction of fat and a mixture of a Rician and multiple Gaussians. Monte Carlo hypothesis testing showed that our model was valid over a wide range of coefficient of variation of the denominator distribution (c.v.: 0-0.20) and correlation coefficient among the numerator and denominator (ρ: 0-0.95), which cover the typical values that we found in MRI data sets (c.v.: 0.027-0.063, ρ: 0.50-0.75). Then a maximum a posteriori (MAP) estimate for the fat percentage per voxel is proposed. Using a digital phantom with many PV voxels, we found that ratio values were not linearly related to PV fat content and that our method accurately described the histogram. In addition, the new method estimated the ground truth within +1.6% vs. +43% for an approach using an uncorrected ratio image, when we simply threshold the ratio image. On the six genetically obese rat data sets, the MAP estimate gave total fat volumes of 279 ± 45 mL, values 21% smaller than those from the uncorrected ratio images, principally due to the non-linear PV effect. We conclude that our algorithm can increase the accuracy of fat volume quantification even in regions having many PV voxels, e.g. ectopic fat depots.
Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo
2014-05-01
Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust, and, more generally, how accurate fault parameterization and solution predictions are. These issues are not addressed by "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based on (a) a forward-modeling scheme that computes synthetic (body-)waves for a given kinematic rupture model, and (b) the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters in synthetic earthquakes (i.e. for which the exact rupture history is known) in an attempt to identify the cross-over at which further model discretization (spatial and temporal resolution of the parameter space) no longer yields a decreasing misfit. Identification of this cross-over is important because it reveals the resolution power of the studied data set (i.e. teleseismic body waves), enabling one to constrain kinematic earthquake rupture histories of real earthquakes at a resolution that is supported by the data. In addition, the Bayesian approach allows for mapping complete posterior probability density functions of the desired kinematic source parameters, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
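The sampling machinery described above relies on the QUESO library, whose API is not shown here. As a hedged illustration of the underlying idea, a minimal random-walk Metropolis sampler for a scalar parameter might look like this; the function names and the toy inference problem in the usage note are assumptions for illustration only.

```python
import math
import random

def metropolis(log_post, x0, step, n, seed=1):
    """Minimal random-walk Metropolis sampler for a scalar parameter.
    log_post evaluates the unnormalized log-posterior; proposals are
    Gaussian perturbations accepted with the Metropolis rule."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n):
        prop = x + rng.gauss(0.0, step)
        lp_prop = log_post(prop)
        # Accept with probability min(1, exp(lp_prop - lp)).
        if math.log(rng.random()) < lp_prop - lp:
            x, lp = prop, lp_prop
        samples.append(x)
    return samples
```

As a toy usage: given observations with known noise level, the Gaussian log-likelihood under a flat prior can be passed as `log_post`, and the post-burn-in sample mean approximates the posterior mean, mirroring (in one dimension) the posterior mapping the abstract describes.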
HPTLC in Herbal Drug Quantification
Shinde, Devanand B.; Chavan, Machindra J.; Wakte, Pravin S.
For the past few decades, compounds from natural sources have been gaining importance because of the vast chemical diversity they offer. This has led to a phenomenal increase in the demand for herbal medicines in the last two decades, and a need has been felt for ensuring the quality, safety, and efficacy of herbal drugs. Phytochemical evaluation is one of the tools for quality assessment, which includes preliminary phytochemical screening, chemoprofiling, and marker compound analysis using modern analytical techniques. High-performance thin-layer chromatography (HPTLC) has emerged as an important tool for the qualitative, semiquantitative, and quantitative phytochemical analysis of herbal drugs and formulations. This includes developing TLC fingerprinting profiles and estimation of biomarkers. This review focuses on the theoretical considerations of HPTLC and some examples of herbal drugs and formulations analyzed by HPTLC.
Directory of Open Access Journals (Sweden)
Daniel H Rapoport
with high reliability and statistical significance. These include the distribution of life/cycle times and cell areas, as well as of the symmetry of cell divisions and motion analyses. The new algorithm thus allows for the quantification and parameterization of cell culture with unprecedented accuracy. To evaluate our validation algorithm, two large reference data sets were manually created. These data sets comprise more than 320,000 unstained adult pancreatic stem cells from rat, including 2592 mitotic events. The reference data sets specify every cell position and shape, and assign each cell to the correct branch of its genealogic tree. We provide these reference data sets for free use by others as a benchmark for the future improvement of automated tracking methods.
Uncertainty Quantification in Climate Modeling and Projection
Energy Technology Data Exchange (ETDEWEB)
Qian, Yun; Jackson, Charles; Giorgi, Filippo; Booth, Ben; Duan, Qingyun; Forest, Chris; Higdon, Dave; Hou, Z. Jason; Huerta, Gabriel
2016-05-01
The projection of future climate is one of the most complex problems undertaken by the scientific community. Although scientists have been striving to better understand the physical basis of the climate system and to improve climate models, the overall uncertainty in projections of future climate has not been significantly reduced (e.g., from the IPCC AR4 to AR5). With the rapid increase of complexity in Earth system models, reducing uncertainties in climate projections becomes extremely challenging. Since uncertainties always exist in climate models, interpreting the strengths and limitations of future climate projections is key to evaluating risks, and climate change information for use in Vulnerability, Impact, and Adaptation (VIA) studies should be provided with both well-characterized and well-quantified uncertainty. The workshop aimed at providing participants, many of them from developing countries, information on strategies to quantify the uncertainty in climate model projections and assess the reliability of climate change information for decision-making. The program included a mixture of lectures on fundamental concepts in Bayesian inference and sampling, applications, and hands-on computer laboratory exercises employing software packages for Bayesian inference, Markov Chain Monte Carlo methods, and global sensitivity analyses. The lectures covered a range of scientific issues underlying the evaluation of uncertainties in climate projections, such as the effects of uncertain initial and boundary conditions, uncertain physics, and limitations of observational records. Progress in quantitatively estimating uncertainties in hydrologic, land surface, and atmospheric models at both regional and global scales was also reviewed. The application of Uncertainty Quantification (UQ) concepts to coupled climate system models is still in its infancy. The Coupled Model Intercomparison Project (CMIP) multi-model ensemble currently represents the primary data for
Extended Forward Sensitivity Analysis for Uncertainty Quantification
Energy Technology Data Exchange (ETDEWEB)
Haihua Zhao; Vincent A. Mousseau
2011-09-01
Verification and validation (V&V) play increasingly important roles in quantifying uncertainties and realizing high-fidelity simulations in engineering system analyses, such as transients in a complex nuclear reactor system. Traditional V&V in reactor system analysis focused more on the validation part or did not differentiate verification and validation. The traditional approach to uncertainty quantification is based on a 'black box' approach. The simulation tool is treated as an unknown signal generator: a distribution of inputs according to assumed probability density functions is sent in, and the distribution of the outputs is measured and correlated back to the original input distribution. The 'black box' method mixes numerical errors with all other uncertainties. It is also not efficient for performing sensitivity analysis. Contrary to the 'black box' method, a more efficient sensitivity approach can take advantage of intimate knowledge of the simulation code. In these approaches, equations for the propagation of uncertainty are constructed and the sensitivities are directly solved for as variables in the simulation. This paper presents forward sensitivity analysis as a method to help uncertainty quantification. By including the time step and potentially the spatial step as special sensitivity parameters, the forward sensitivity method is extended as a method to quantify numerical errors. Note that by integrating local truncation errors over the whole system through the forward sensitivity analysis process, the generated time step and spatial step sensitivity information reflect global numerical errors. The discretization errors can be systematically compared against uncertainties due to other physical parameters. This extension makes the forward sensitivity method a much more powerful tool for uncertainty quantification. By knowing the relative sensitivity of time and space steps with other interested physical
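The core mechanism, augmenting the simulation with equations for the derivatives of the solution with respect to a parameter, can be sketched on a scalar decay equation. The choice of equation (dy/dt = -k*y, with sensitivity s = dy/dk obeying ds/dt = -y - k*s) and of an explicit Euler integrator is purely illustrative, not the paper's reactor code.

```python
def forward_sensitivity(k, y0, dt, steps):
    """Integrate dy/dt = -k*y together with its forward sensitivity
    s = dy/dk, which satisfies ds/dt = -y - k*s (differentiate the ODE
    with respect to k). Explicit Euler, both states advanced from the
    old values each step."""
    y, s = y0, 0.0
    for _ in range(steps):
        dy = -k * y
        ds = -y - k * s   # linearized sensitivity equation
        y += dt * dy
        s += dt * ds
    return y, s
```

Because the sensitivity recursion is the exact derivative of the discrete Euler scheme, the returned `s` matches a finite-difference derivative of the simulation output with respect to `k`, which is precisely the kind of directly solved sensitivity the abstract contrasts with 'black box' sampling.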
Enhanced peptide quantification using spectral count clustering and cluster abundance
Directory of Open Access Journals (Sweden)
Lee Seungmook
2011-10-01
Full Text Available Abstract Background Quantification of protein expression by means of mass spectrometry (MS) has been introduced in various proteomics studies. In particular, two label-free quantification methods, spectral counting and spectral feature analysis, have been extensively investigated in a wide variety of proteomic studies. The cornerstone of both methods is peptide identification based on a proteomic database search and subsequent estimation of peptide retention time. However, they often suffer from restrictive database search and inaccurate estimation of the liquid chromatography (LC) retention time. Furthermore, conventional peptide identification methods based on spectral library search algorithms such as SEQUEST or SpectraST have been found to provide neither the best match nor high-scored matches. Lastly, these methods are limited in the sense that target peptides cannot be identified unless they have been previously generated and stored into the database or spectral libraries. To overcome these limitations, we propose a novel method, namely Quantification method based on Finding the Identical Spectral set for a Homogenous peptide (Q-FISH), to estimate a peptide's abundance from its tandem mass spectrometry (MS/MS) spectra through the direct comparison of experimental spectra. Intuitively, our Q-FISH method compares all possible pairs of experimental spectra in order to identify both known and novel proteins, significantly enhancing identification accuracy by grouping replicated spectra from the same peptide targets. Results We applied Q-FISH to Nano-LC-MS/MS data obtained from human hepatocellular carcinoma (HCC) and normal liver tissue samples to identify differentially expressed peptides between the normal and disease samples. For a total of 44,318 spectra obtained through MS/MS analysis, Q-FISH yielded 14,747 clusters. Among these, 5,777 clusters were identified only in the HCC sample, 6,648 clusters only in the normal tissue sample
Broersen, Alexander; de Graaf, Michiel A; Eggermont, Jeroen; Wolterbeek, Ron; Kitslaar, Pieter H; Dijkstra, Jouke; Bax, Jeroen J; Reiber, Johan H C; Scholte, Arthur J
2016-04-01
We enhance intravascular ultrasound virtual histology (VH) tissue characterization by fully automatic quantification of the acoustic shadow behind calcified plaque. VH is unable to characterize atherosclerosis located behind calcifications. In this study, the quantified acoustic shadows are considered calcified to approximate the real dense calcium (DC) plaque volume. In total, 57 patients with 108 coronary lesions were included. A novel post-processing step is applied on the VH images to quantify the acoustic shadow and enhance the VH results. The VH and enhanced VH results are compared to quantitative computed tomography angiography (QTA) plaque characterization as reference standard. The correlation of the plaque types between enhanced VH and QTA differs significantly from the correlation with unenhanced VH. For DC, the correlation improved from 0.733 to 0.818. Instead of an underestimation of DC in VH with a bias of 8.5 mm(3), there was a smaller overestimation of 1.1 mm(3) in the enhanced VH. Although tissue characterization within the acoustic shadow in VH is difficult, the novel algorithm improved the DC tissue characterization. This algorithm contributes to accurate assessment of calcium on VH and could be applied in clinical studies.
Autonomous intelligent vehicles theory, algorithms, and implementation
Cheng, Hong
2011-01-01
Here is the latest on intelligent vehicles, covering object and obstacle detection and recognition and vehicle motion control. Includes a navigation approach using global views; introduces algorithms for lateral and longitudinal motion control and more.
A factorization algorithm to compute Pfaffians
Rubow, Jürgen; Wolff, Ulli (Institut für Physik, Humboldt Universität, Newtonstr. 15, 12489 Berlin, Germany)
2011-01-01
We describe an explicit algorithm to factorize an even-dimensional antisymmetric N × N matrix into triangular and trivial factors. This allows for a straightforward computation of Pfaffians (including their signs) at the cost of N^3/3 flops.
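For orientation, the quantity being computed can be illustrated by the textbook minor expansion of the Pfaffian. This brute-force recursion costs far more than the paper's N^3/3 factorization and is shown only to make the definition concrete:

```python
def pfaffian(A):
    # Pfaffian of an even-dimensional antisymmetric matrix (list of lists)
    # by minor expansion along the first row:
    #   pf(A) = sum_j (-1)^(j+1) * A[0][j] * pf(A with rows/cols 0, j removed)
    # Exponential cost -- illustration only, not the paper's algorithm.
    n = len(A)
    if n == 0:
        return 1.0
    if n % 2:
        return 0.0  # odd-dimensional antisymmetric matrices have pf = 0
    total = 0.0
    for j in range(1, n):
        if A[0][j] == 0:
            continue
        keep = [k for k in range(1, n) if k != j]
        minor = [[A[r][c] for c in keep] for r in keep]
        total += (-1) ** (j + 1) * A[0][j] * pfaffian(minor)
    return total
```

A useful sanity check is the identity pf(A)^2 = det(A), which also fixes the sign information the abstract emphasizes.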
The LLL algorithm survey and applications
Nguyen, Phong Q
2010-01-01
The first book to offer a comprehensive view of the LLL algorithm, this text surveys computational aspects of Euclidean lattices and their main applications. It includes many detailed motivations, explanations and examples.
Automated quantification reveals hyperglycemia inhibits endothelial angiogenic function.
Directory of Open Access Journals (Sweden)
Anthony R Prisco
Full Text Available Diabetes Mellitus (DM) has reached epidemic levels globally. A contributing factor to the development of DM is high blood glucose (hyperglycemia). One complication associated with DM is decreased angiogenesis. The Matrigel tube formation assay (TFA) is the most widely utilized in vitro assay designed to assess angiogenic factors and conditions. In spite of the widespread use of Matrigel TFAs, quantification is labor-intensive and subjective, often limiting experimental design and interpretation of results. This study describes the development and validation of an open-source software tool for high-throughput, morphometric analysis of TFA images and the validation of an in vitro hyperglycemic model of DM. Endothelial cells mimic angiogenesis when placed onto a Matrigel-coated surface by forming tube-like structures. The goal of this study was to develop an open-source software algorithm requiring minimal user input (Pipeline v1.3) to automatically quantify tubular metrics from TFA images. Using Pipeline, the ability of endothelial cells to form tubes was assessed after culture in normal or high glucose for 1 or 2 weeks. A significant decrease in the total tube length and number of branch points was found when comparing groups treated with high glucose for 2 weeks versus normal glucose or 1 week of high glucose. Using Pipeline, it was determined that hyperglycemia inhibits formation of endothelial tubes in vitro. Analysis using Pipeline was more accurate and significantly faster than manual analysis. The Pipeline algorithm was shown to have additional applications, such as detection of retinal vasculature.
A Spanish model for quantification and management of construction waste.
Solís-Guzmán, Jaime; Marrero, Madelyn; Montes-Delgado, Maria Victoria; Ramírez-de-Arellano, Antonio
2009-09-01
Currently, construction and demolition waste (C&D waste) is a worldwide issue that concerns not only governments but also the building actors involved in construction activity. In Spain, a new national decree has been regulating the production and management of C&D waste since February 2008. The present work describes the waste management model that has inspired this decree: the Alcores model implemented with good results in Los Alcores Community (Seville, Spain). A detailed model is also provided to estimate the volume of waste that is expected to be generated on the building site. The quantification of C&D waste volume, from the project stage, is essential for the building actors to properly plan and control its disposal. This quantification model has been developed by studying 100 dwelling projects, especially their bill of quantities, and defining three coefficients to estimate the demolished volume (CT), the wreckage volume (CR) and the packaging volume (CE). Finally, two case studies are included to illustrate the usefulness of the model to estimate C&D waste volume in both new construction and demolition projects.
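The three-coefficient structure of the quantification model can be sketched as a small helper. The coefficient values passed in below are hypothetical placeholders, since the paper derives CT, CR and CE from the bills of quantities of 100 dwelling projects:

```python
def estimate_cd_waste(built_m3, ct, cr, ce):
    # Sketch of the Alcores-style volume model: three project-derived
    # coefficients are applied to the constructed volume to estimate
    # the demolished volume (CT), the wreckage volume (CR) and the
    # packaging volume (CE). Coefficient values are caller-supplied
    # assumptions here, not the paper's fitted values.
    demolished = ct * built_m3
    wreckage = cr * built_m3
    packaging = ce * built_m3
    return {
        "demolished_m3": demolished,
        "wreckage_m3": wreckage,
        "packaging_m3": packaging,
        "total_m3": demolished + wreckage + packaging,
    }

# Hypothetical example: a 1000 m3 dwelling with illustrative coefficients.
volumes = estimate_cd_waste(1000.0, ct=0.10, cr=0.05, ce=0.02)
```

Estimating these volumes at the project stage is what lets the builder plan container sizes and disposal logistics before work begins.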
Identification and Quantification of Protein Glycosylation
Directory of Open Access Journals (Sweden)
Ziv Roth
2012-01-01
Full Text Available Glycosylation is one of the most abundant posttranslational modifications of proteins, and accumulating evidence indicates that the vast majority of proteins in eukaryotes are glycosylated. Glycosylation plays a role in protein folding, interaction, stability, and mobility, as well as in signal transduction. Thus, by regulating protein activity, glycosylation is involved in the normal functioning of the cell and in the development of diseases. Indeed, in the past few decades there has been a growing realization of the importance of protein glycosylation, as aberrant glycosylation has been implicated in metabolic, neurodegenerative, and neoplastic diseases. Thus, the identification and quantification of protein-borne oligosaccharides have become increasingly important both in the basic sciences of biochemistry and glycobiology and in the applicative sciences, particularly biomedicine and biotechnology. Here, we review the state-of-the-art methodologies for the identification and quantification of oligosaccharides, specifically N- and O-glycosylated proteins.
Adaptive Alternating Minimization Algorithms
Niesen, Urs; Wornell, Gregory
2007-01-01
The classical alternating minimization (or projection) algorithm has been successful in the context of solving optimization problems over two variables or equivalently of finding a point in the intersection of two sets. The iterative nature and simplicity of the algorithm has led to its application to many areas such as signal processing, information theory, control, and finance. A general set of sufficient conditions for the convergence and correctness of the algorithm is quite well-known when the underlying problem parameters are fixed. In many practical situations, however, the underlying problem parameters are changing over time, and the use of an adaptive algorithm is more appropriate. In this paper, we study such an adaptive version of the alternating minimization algorithm. As a main result of this paper, we provide a general set of sufficient conditions for the convergence and correctness of the adaptive algorithm. Perhaps surprisingly, these conditions seem to be the minimal ones one would expect in ...
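The geometric special case the abstract alludes to, finding a point in the intersection of two sets by projecting alternately onto each, can be sketched for two lines in the plane. The choice of lines is an arbitrary toy example:

```python
def project_to_line(p, a, b):
    # Orthogonal projection of point p onto the line {a + t*b : t real}.
    t = sum((pi - ai) * bi for pi, ai, bi in zip(p, a, b)) / \
        sum(bi * bi for bi in b)
    return [ai + t * bi for ai, bi in zip(a, b)]

def alternating_projections(x0, cycles=100):
    # Von Neumann's alternating projection scheme (the geometric special
    # case of alternating minimization): project back and forth between
    # two intersecting lines; the iterates converge to a point in the
    # intersection. Lines chosen arbitrarily for illustration.
    line1 = ([0.0, 0.0], [1.0, 1.0])   # the line y = x
    line2 = ([0.0, 1.0], [1.0, 0.0])   # the line y = 1
    x = list(x0)
    for _ in range(cycles):
        x = project_to_line(x, *line1)
        x = project_to_line(x, *line2)
    return x
```

Each cycle contracts the distance to the intersection point (1, 1) by a factor depending on the angle between the lines; the adaptive setting studied in the paper is the case where the sets themselves drift between iterations.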
Energy Technology Data Exchange (ETDEWEB)
Rodrigues, Bruna T.; Alvarez, Matheus; Souza, Rafael T.F.; Miranda, Jose R.A., E-mail: matheus@ibb.unesp.br [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Instituto de Biociencias. Departamento de Fisica e Biofisica; Romeiro, Fernando G. [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Fac. de Medicina. Departamento de Clinica Medica; Pina, Diana R. de; Trindade, Andre Petean [Universidade Estadual Paulista Julio de Mesquita Filho (UNESP), Botucatu, SP (Brazil). Fac. de Medicina. Departamento de Doencas Tropicais e Diagnostico por Imagem
2012-12-15
This paper presents an original methodology for liver tumor segmentation, based on the wavelet transform. A virtual phantom was constructed with the same mean and standard deviation of gray-level intensity as the measured liver tissue. The optimized algorithm had a sensitivity ranging from 0.81 to 0.83, with a specificity of 0.95 for differentiation of hepatic tumors from normal tissues. We obtained a 96% agreement between the pixels segmented by an experienced radiologist and the algorithm presented here. According to the results shown in this work, the algorithm is suitable for beginning tests on quantification of liver tumors in retrospective surveys. (author)
Milde, Florian; Franco, Davide; Ferrari, Aldo; Kurtcuoglu, Vartan; Poulikakos, Dimos; Koumoutsakos, Petros
2012-11-01
Cell migration is commonly quantified by tracking the speed of the cell layer interface in wound healing assays. This quantification is often hampered by low signal to noise ratio, in particular when complex substrates are employed to emulate in vivo cell migration in geometrically complex environments. Moreover, information about the cell motion, readily available inside the migrating cell layers, is not usually harvested. We introduce Cell Image Velocimetry (CIV), a combination of cell layer segmentation and image velocimetry algorithms, to drastically enhance the quantification of cell migration by wound healing assays. The resulting software analyses the speed of the interface as well as the detailed velocity field inside the cell layers in an automated fashion. CIV is shown to be highly robust for images with low signal to noise ratio, low contrast and frame shifting and it is portable across various experimental settings. The modular design and parametrization of CIV is not restricted to wound healing assays and allows for the exploration and quantification of flow phenomena in any optical microscopy dataset. Here, we demonstrate the capabilities of CIV in wound healing assays over topographically engineered surfaces and quantify the relative merits of differently aligned gratings on cell migration.
Jiang, Yu; Li, Changying; Takeda, Fumiomi
2016-10-01
Currently, blueberry bruising is evaluated by either human visual/tactile inspection or firmness measurement instruments. These methods are destructive, time-consuming, and subjective. The goal of this paper was to develop a non-destructive approach for blueberry bruising detection and quantification. Experiments were conducted on 300 samples of southern highbush blueberry (Camellia, Rebel, and Star) and on 1500 samples of northern highbush blueberry (Bluecrop, Jersey, and Liberty) for hyperspectral imaging analysis, firmness measurement, and human evaluation. An algorithm was developed to automatically calculate a bruise ratio index (ratio of bruised to whole fruit area) for bruise quantification. The spectra of bruised and healthy tissues were statistically separated and the separation was independent of cultivars. Support vector machine (SVM) classification of the spectra from the regions of interest (ROIs) achieved over 94%, 92%, and 96% accuracy on the training set, independent testing set, and combined set, respectively. The statistical results showed that the bruise ratio index was equivalent to the measured firmness but better than the predicted firmness in regard to effectiveness of bruise quantification, and the bruise ratio index had a strong correlation with human assessment (R2 = 0.78 - 0.83). Therefore, the proposed approach and the bruise ratio index are effective to non-destructively detect and quantify blueberry bruising.
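The bruise ratio index itself is a simple area ratio once pixels have been classified. A minimal sketch, assuming a labeled mask in which 1 marks healthy fruit tissue and 2 marks bruised tissue (the label convention is an assumption; the paper obtains the classification from SVM-classified hyperspectral pixels):

```python
def bruise_ratio_index(mask):
    # mask: 2D grid (list of lists) with 0 = background, 1 = healthy
    # fruit tissue, 2 = bruised tissue. The index is the ratio of the
    # bruised area to the whole fruit area, as defined in the abstract.
    bruised = sum(row.count(2) for row in mask)
    fruit = sum(row.count(1) + row.count(2) for row in mask)
    return bruised / fruit if fruit else 0.0
```

Because it is a ratio of areas within one fruit, the index is insensitive to fruit size, which is part of why it can track firmness and human assessment across cultivars.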
Vuiblet, Vincent; Fere, Michael; Gobinet, Cyril; Birembaut, Philippe; Piot, Olivier; Rieu, Philippe
2016-08-01
Renal interstitial fibrosis and interstitial active inflammation are the main histologic features of renal allograft biopsy specimens. Fibrosis is currently assessed by semiquantitative subjective analysis, and color image analysis has been developed to improve the reliability and repeatability of this evaluation. However, these techniques fail to distinguish fibrosis from constitutive collagen or active inflammation. We developed an automatic, reproducible Fourier-transform infrared (FTIR) imaging-based technique for simultaneous quantification of fibrosis and inflammation in renal allograft biopsy specimens. We generated and validated a classification model using 49 renal biopsy specimens and subsequently tested the robustness of this classification algorithm on 166 renal grafts. Finally, we explored the clinical relevance of fibrosis quantification using FTIR imaging by comparing results with renal function at 3 months after transplantation (M3) and the variation of renal function between M3 and M12. We showed excellent robustness for fibrosis and inflammation classification, with >90% of renal biopsy specimens adequately classified by FTIR imaging. Finally, fibrosis quantification by FTIR imaging correlated with renal function at M3, and the variation in fibrosis between M3 and M12 correlated well with the variation in renal function over the same period. This study shows that FTIR-based analysis of renal graft biopsy specimens is a reproducible and reliable label-free technique for quantifying fibrosis and active inflammation. This technique seems to be more relevant than digital image analysis and promising for both research studies and routine clinical practice.
Moucari, Rami; Marcellin, Patrick
2011-01-01
HBsAg is a very important clinical test that might not only indicate active hepatitis B virus (HBV) infection but might also be used to predict clinical and treatment outcome. Clearance of HBsAg in patients with chronic HBV infection is associated with a much better clinical outcome, although surveillance for early detection of hepatocellular carcinoma (HCC) should continue. HBV DNA quantification is currently used for selecting candidates for therapy, monitoring response to therapy and detecting the emergence of drug resistance. Assays for HBsAg quantification are less expensive than HBV DNA and fully automated with a high throughput capacity. HBsAg titering may be a useful tool to manage patients with chronic HBV, to more clearly define which patients may, and more importantly, may not, benefit from treatment. Baseline and on-treatment HBsAg quantification may help to refine future treatment algorithms for both immune-modulator therapy and nucleos(t)ide analogues. Both HBV markers provide complementary information on the status of HBV infection. However, the relevance of serum HBsAg levels and its use as a reliable replacement for both covalently closed circular DNA and HBV DNA remain unclear.
Near-optimal RNA-Seq quantification
Bray, Nicolas; Pimentel, Harold; Melsted, Páll; Pachter, Lior
2015-01-01
We present a novel approach to RNA-Seq quantification that is near optimal in speed and accuracy. Software implementing the approach, called kallisto, can be used to analyze 30 million unaligned paired-end RNA-Seq reads in less than 5 minutes on a standard laptop computer while providing results as accurate as those of the best existing tools. This removes a major computational bottleneck in RNA-Seq analysis.
Standardized Relative Quantification of Immunofluorescence Tissue Staining
sprotocols
2015-01-01
Authors: Oriol Arqués, Irene Chicote, Stephan Tenbaum, Isabel Puig & Héctor G. Palmer ### Abstract The detection of correlations between the expression levels or sub-cellular localization of different proteins with specific characteristics of human tumors, such as grade of malignancy, may give important hints of functional associations. Here we describe the method we use for relative quantification of immunofluorescence staining of tumor tissue sections, which allows us to co...
Whitepaper on Uncertainty Quantification for MPACT
Energy Technology Data Exchange (ETDEWEB)
Williams, Mark L. [Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
2015-12-17
The MPACT code provides the ability to perform high-fidelity deterministic calculations to obtain a wide variety of detailed results for very complex reactor core models. However, MPACT currently does not have the capability to propagate the effects of input data uncertainties to provide uncertainties in the calculated results. This white paper discusses a potential method for MPACT uncertainty quantification (UQ) based on stochastic sampling.
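The stochastic sampling approach discussed in the white paper amounts to forward propagation of input uncertainty by Monte Carlo: draw each uncertain input from its distribution, run the model, and summarize the spread of the outputs. A generic sketch with a toy linear response (the model and input distributions are illustrative stand-ins, not MPACT's physics or its actual nuclear-data covariances):

```python
import random
import statistics

def propagate_uncertainty(model, input_dists, n_samples=10000, seed=0):
    # Forward UQ by stochastic sampling: each element of input_dists is a
    # callable that draws one sample from that input's distribution.
    rng = random.Random(seed)
    outputs = [model(*(d(rng) for d in input_dists))
               for _ in range(n_samples)]
    return statistics.mean(outputs), statistics.stdev(outputs)

# Toy response linear in two uncertain inputs (purely illustrative):
# output = 1.0 + 0.5*a - 0.3*b with a ~ N(0, 0.02) and b ~ N(0, 0.01).
mean, std = propagate_uncertainty(
    lambda a, b: 1.0 + 0.5 * a - 0.3 * b,
    [lambda r: r.gauss(0.0, 0.02), lambda r: r.gauss(0.0, 0.01)],
)
```

For a linear response the output standard deviation should match the analytic value sqrt(0.5^2 * 0.02^2 + 0.3^2 * 0.01^2) ≈ 0.0104, which is a convenient check on the sampler.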
Recursive forgetting algorithms
DEFF Research Database (Denmark)
Parkum, Jens; Poulsen, Niels Kjølstad; Holst, Jan
1992-01-01
In the first part of the paper, a general forgetting algorithm is formulated and analysed. It contains most existing forgetting schemes as special cases. Conditions are given ensuring that the basic convergence properties will hold. In the second part of the paper, the results are applied to a specific algorithm with selective forgetting. Here, the forgetting is non-uniform in time and space. The theoretical analysis is supported by a simulation example demonstrating the practical performance of this algorithm.
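A minimal instance of the general scheme is scalar recursive least squares with uniform exponential forgetting; the selective, non-uniform forgetting analysed in the paper generalizes this. A sketch under the assumption of a constant-parameter model with unit regressor:

```python
def forgetting_estimator(measurements, lam=0.95):
    # Scalar recursive least squares with exponential forgetting:
    #   theta_k = theta_{k-1} + K_k * (y_k - theta_{k-1})
    # where the gain K_k comes from a covariance p that is inflated by
    # 1/lam each step, so old data are discounted geometrically.
    # This is uniform forgetting; the paper's selective scheme makes
    # the discounting non-uniform in time and space.
    theta, p = 0.0, 1000.0   # vague prior: large initial covariance
    for y in measurements:
        p = p / lam                      # forget: inflate covariance
        k = p / (1.0 + p)                # gain for unit regressor
        theta = theta + k * (y - theta)  # measurement update
        p = (1.0 - k) * p                # covariance update
    return theta
```

With forgetting, the estimator tracks a parameter jump: old observations decay with weight lam^age instead of being averaged in forever, which is exactly the trade-off (tracking versus noise sensitivity) the convergence conditions in the paper address.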
2016-06-07
REFERENCE ONLY. Naval Underwater Systems Center, New London Laboratory, New London, Connecticut 06320. Technical memorandum, "SIMAS ADM XBT Algorithm" (report type: technical memo; dates covered: 05-12-1984). Supplementary notes: NUWC2015. Abstract: An algorithm has been developed for the detection and correction of surface ship launched expendable bathythermograph
Static Analysis of Numerical Algorithms
2016-04-01
Static Analysis of Numerical Algorithms. Kestrel Technology, LLC, April 2016. Final technical report, approved for public release; distribution unlimited. Dates covered: November 2013 to November 2015. Contract number FA8750-14-C... The work applied static analysis to numerical algorithms, linear digital filters and integrating accumulators, modifying existing versions of Honeywell's HiLiTE model-based development system and
Fingerprint Feature Extraction Algorithm
Directory of Open Access Journals (Sweden)
Mehala. G
2014-03-01
Full Text Available The goal of this paper is to design an efficient Fingerprint Feature Extraction (FFE) algorithm to extract fingerprint features for Automatic Fingerprint Identification Systems (AFIS). The FFE algorithm consists of two major subdivisions: fingerprint image preprocessing and fingerprint image postprocessing. A few of the challenges presented in earlier work are consequently addressed in this paper. The proposed algorithm is able to enhance the fingerprint image and to extract true minutiae.
Redesigning linear algebra algorithms
Energy Technology Data Exchange (ETDEWEB)
Dongarra, J.J.
1983-01-01
Many of the standard algorithms in linear algebra as implemented in FORTRAN do not achieve maximum performance on today's large-scale vector computers. The author examines the problem and constructs alternative formulations of algorithms that do not lose the clarity of the original algorithm or sacrifice the FORTRAN portable environment, but do gain the performance attainable on these supercomputers. The resulting implementation not only performs well on vector computers but also increases performance on conventional sequential computers. 13 references.
The Method of Manufactured Universes for validating uncertainty quantification methods
Stripling, H.F.
2011-09-01
The Method of Manufactured Universes is presented as a validation framework for uncertainty quantification (UQ) methodologies and as a tool for exploring the effects of statistical and modeling assumptions embedded in these methods. The framework calls for a manufactured reality from which experimental data are created (possibly with experimental error), an imperfect model (with uncertain inputs) from which simulation results are created (possibly with numerical error), the application of a system for quantifying uncertainties in model predictions, and an assessment of how accurately those uncertainties are quantified. The application presented in this paper manufactures a particle-transport universe, models it using diffusion theory with uncertain material parameters, and applies both Gaussian process and Bayesian MARS algorithms to make quantitative predictions about new experiments within the manufactured reality. The results of this preliminary study indicate that, even in a simple problem, the improper application of a specific UQ method or unrealized effects of a modeling assumption may produce inaccurate predictions. We conclude that the validation framework presented in this paper is a powerful and flexible tool for the investigation and understanding of UQ methodologies. © 2011 Elsevier Ltd. All rights reserved.
Symmetrized Importance Samplers for Uncertainty Quantification and Data Assimilation
Leach, A.; Lin, K. K.; Morzfeld, M.
2016-12-01
Generating weighted samples from a given probability density by importance sampling is a central step in many algorithms for uncertainty quantification and data assimilation, e.g. for particle filters and rare event sampling. A challenge for importance sampling is to maintain good performance in the limit of small noise, in particular ensuring that the relative variance of the weights does not diverge as the noise amplitude decreases. In recent years, a number of researchers have proposed novel importance sampling methods for stochastic differential equations in the low-noise regime. These methods have provably good performance in this setting, and can be used as part of particle filters and other data assimilation methods. Here, we build on these advances to propose and compare a number of importance samplers. We study the weight variance of these methods in a small-noise analysis. We show that samplers with a judicious choice of proposal density can have weight variance that scales linearly with noise amplitude. Furthermore, we show that a general symmetrization procedure can be applied to such first-order methods to produce second-order-accurate methods.
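The central objects of the abstract, importance weights and their variance, can be made concrete with a self-normalized importance sampler for a standard Gaussian target and a shifted Gaussian proposal. This is a deliberately simple stand-in for the SDE setting studied in the paper:

```python
import math
import random

def importance_estimate(f, n=200000, mu=1.0, seed=1):
    # Estimate E_p[f(X)] for target p = N(0,1) using proposal q = N(mu,1).
    # Weight w(x) = p(x)/q(x); shared normalizing constants cancel.
    # The quality of the sampler is visible not only in the estimate but
    # in the effective sample fraction, which collapses when the weight
    # variance is large -- the failure mode the paper analyses.
    rng = random.Random(seed)
    total_w = total_wf = total_w2 = 0.0
    for _ in range(n):
        x = rng.gauss(mu, 1.0)
        w = math.exp(-0.5 * x * x) / math.exp(-0.5 * (x - mu) ** 2)
        total_w += w
        total_wf += w * f(x)
        total_w2 += w * w
    estimate = total_wf / total_w            # self-normalized estimate
    ess_fraction = total_w ** 2 / (n * total_w2)
    return estimate, ess_fraction
```

For this target/proposal pair the expected effective sample fraction is exp(-mu^2), about 0.37 at mu = 1; pushing the proposal further from the target drives it toward zero even though the estimator remains unbiased, which is the weight-degeneracy problem in miniature.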
Novel algorithm for management of acute epididymitis.
Hongo, Hiroshi; Kikuchi, Eiji; Matsumoto, Kazuhiro; Yazawa, Satoshi; Kanao, Kent; Kosaka, Takeo; Mizuno, Ryuichi; Miyajima, Akira; Saito, Shiro; Oya, Mototsugu
2017-01-01
To identify predictive factors for the severity of epididymitis and to develop an algorithm guiding decisions on how to manage patients with this disease. A retrospective study was carried out on 160 epididymitis patients at Keio University Hospital. We classified cases into severe and non-severe groups, and compared clinical findings at the first visit. Based on statistical analyses, we developed an algorithm for predicting severe cases. We validated the algorithm by applying it to an external cohort of 96 patients at Tokyo Medical Center. The efficacy of the algorithm was investigated by a decision curve analysis. A total of 19 patients (11.9%) had severe epididymitis. Patient characteristics including older age, previous history of diabetes mellitus and fever, as well as laboratory data including a higher white blood cell count, C-reactive protein level and blood urea nitrogen level were independently associated with severity. A predictive algorithm was created with the ability to classify epididymitis cases into three risk groups. In the Keio University Hospital cohort, 100%, 23.5%, and 3.4% of cases in the high-, intermediate-, and low-risk groups, respectively, became severe. The specificity of the algorithm for predicting severe epididymitis proved to be 100% in the Keio University Hospital cohort and 98.8% in the Tokyo Medical Center cohort. The decision curve analysis also showed the high efficacy of the algorithm. This algorithm might aid in decision-making for the clinical management of acute epididymitis. © 2016 The Japanese Urological Association.
Automated quantification of synapses by fluorescence microscopy.
Schätzle, Philipp; Wuttke, René; Ziegler, Urs; Sonderegger, Peter
2012-02-15
The quantification of synapses in neuronal cultures is essential in studies of the molecular mechanisms underlying synaptogenesis and synaptic plasticity. Conventional counting of synapses based on morphological or immunocytochemical criteria is extremely work-intensive. We developed a fully automated method which quantifies synaptic elements and complete synapses based on immunocytochemistry. Pre- and postsynaptic elements are detected by their corresponding fluorescence signals and their proximity to dendrites. Synapses are defined as the combination of a pre- and postsynaptic element within a given distance. The analysis is performed in three dimensions and all parameters required for quantification can be easily adjusted by a graphical user interface. The integrated batch processing enables the analysis of large datasets without any further user interaction and is therefore efficient and timesaving. The potential of this method was demonstrated by an extensive quantification of synapses in neuronal cultures from DIV 7 to DIV 21. The method can be applied to all datasets containing a pre- and postsynaptic labeling plus a dendritic or cell surface marker.
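The pairing rule described, counting a synapse when a pre- and a postsynaptic element lie within a given distance, can be sketched as a greedy 3D matching. The one-to-one matching and the omission of the dendrite-proximity test are simplifying assumptions relative to the published pipeline:

```python
from math import dist  # Euclidean distance, Python 3.8+

def count_synapses(pre, post, max_dist=0.5):
    # pre, post: lists of (x, y, z) coordinates of detected pre- and
    # postsynaptic puncta. A synapse is counted when a pre punctum has
    # a post punctum within max_dist; each post punctum is consumed at
    # most once (greedy nearest-match, a simplification for illustration).
    used = set()
    count = 0
    for p in pre:
        best, best_d = None, max_dist
        for i, q in enumerate(post):
            if i in used:
                continue
            d = dist(p, q)
            if d <= best_d:
                best, best_d = i, d
        if best is not None:
            used.add(best)
            count += 1
    return count
```

Running the same rule over every field of view is what the batch mode automates; the distance threshold plays the role of the user-adjustable parameter mentioned in the abstract.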
Automated Template Quantification for DNA Sequencing Facilities
Ivanetich, Kathryn M.; Yan, Wilson; Wunderlich, Kathleen M.; Weston, Jennifer; Walkup, Ward G.; Simeon, Christian
2005-01-01
The quantification of plasmid DNA by the PicoGreen dye binding assay has been automated, and the effect of quantification of user-submitted templates on DNA sequence quality in a core laboratory has been assessed. The protocol pipets, mixes and reads standards, blanks and up to 88 unknowns, generates a standard curve, and calculates template concentrations. For pUC19 replicates at five concentrations, coefficients of variation were 0.1, and percent errors were from 1% to 7% (n = 198). Standard curves with pUC19 DNA were nonlinear over the 1 to 1733 ng/μL concentration range required to assay the majority (98.7%) of user-submitted templates. Over 35,000 templates have been quantified using the protocol. For 1350 user-submitted plasmids, 87% deviated by ≥ 20% from the requested concentration (500 ng/μL). Based on data from 418 sequencing reactions, quantification of user-submitted templates was shown to significantly improve DNA sequence quality. The protocol is applicable to all types of double-stranded DNA, is unaffected by primer (1 pmol/μL), and is user modifiable. The protocol takes 30 min, saves 1 h of technical time, and costs approximately $0.20 per unknown. PMID:16461949
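The standard-curve step of such a quantification protocol can be sketched with an ordinary least-squares line fit and its inversion. Note the abstract itself reports that real PicoGreen curves are nonlinear over the full 1 to 1733 ng/μL range, so a linear fit is only a local sketch, not the validated protocol:

```python
def fit_standard_curve(concs, signals):
    # Ordinary least-squares fit of signal = a * conc + b through the
    # standards. Real PicoGreen curves are nonlinear over wide ranges,
    # so in practice this would be applied over a narrow segment.
    n = len(concs)
    mx = sum(concs) / n
    my = sum(signals) / n
    a = sum((x - mx) * (y - my) for x, y in zip(concs, signals)) / \
        sum((x - mx) ** 2 for x in concs)
    b = my - a * mx
    return a, b

def quantify(signal, a, b):
    # Invert the standard curve to get an unknown's concentration (ng/uL).
    return (signal - b) / a
```

Blanks set the intercept, standards set the slope, and each of the up-to-88 unknowns is then read off the inverted curve, which is the arithmetic the automated protocol performs per plate.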
Validated method for phytohormone quantification in plants
Directory of Open Access Journals (Sweden)
Marilia eAlmeida-Trapp
2014-08-01
Full Text Available Phytohormones have long been known as important components of signalling cascades in plant development and plant responses to various abiotic and biotic challenges. Quantification of phytohormone levels in plants is typically carried out using GC- or LC-MS/MS systems, owing to their high sensitivity and specificity and the limited sample preparation required. However, mass spectrometer-based analyses are often affected by the particular sample type (different matrices), extraction procedure, and experimental setup, i.e. the chromatographic separation system and/or mass spectrometer analyser (triple quadrupole, ion trap, TOF, Orbitrap). For these reasons, a validated method is required in order to enable comparison of data that are generated in different laboratories, under different experimental set-ups, and in different matrices. So far, many phytohormone quantification studies were done using either QTRAP or triple-quadrupole mass spectrometers. None of them was performed under the regime of a fully validated method. Therefore, we developed and established such a validated method for quantification of stress-related phytohormones such as jasmonates, abscisic acid, salicylic acid and IAA in the model plant Arabidopsis thaliana and the fruit crop Citrus sinensis, using an ion trap mass spectrometer. All parameters recommended by the FDA (US Food and Drug Administration) or EMEA (European Medicines Evaluation Agency) for validation of analytical methods were evaluated: sensitivity, selectivity, repeatability and reproducibility (accuracy and precision).
Uncertainty Quantification with Applications to Engineering Problems
DEFF Research Database (Denmark)
Bigoni, Daniele
The systematic quantification of the uncertainties affecting dynamical systems, and the characterization of the uncertainty of their outcomes, is critical for engineering design and analysis, where risks must be reduced as much as possible. Uncertainties stem naturally from our limitations in measurements, predictions and manufacturing, and we can say that any dynamical system used in engineering is subject to some of these uncertainties. The first part of this work presents an overview of the mathematical framework used in Uncertainty Quantification (UQ) analysis and introduces the spectral tensor-train (STT) decomposition. After establishing some auxiliary properties, we apply polynomial chaos (PC) to it, obtaining the STT-decomposition. This allows the decoupling of each dimension, leading to a much cheaper construction of the PC surrogate. In the associated paper, the capabilities of the STT-decomposition are checked on commonly used test problems.
Algorithms in Algebraic Geometry
Dickenstein, Alicia; Sommese, Andrew J
2008-01-01
In the last decade, there has been a burgeoning of activity in the design and implementation of algorithms for algebraic geometric computation. Some of these algorithms were originally designed for abstract algebraic geometry but are now of interest for use in applications, and some were originally designed for applications but are now of interest for use in abstract algebraic geometry. The workshop on Algorithms in Algebraic Geometry was held in the framework of the IMA Annual Program Year in Applications of Algebraic Geometry by the Institute for Mathematics and its Applications (IMA).
Distributed Minimum Hop Algorithms
1982-01-01
acknowledgement), node d starts iteration i+1, and otherwise the algorithm terminates. A detailed description of the algorithm is given in pidgin ALGOL. … The precise behavior of the algorithm under these circumstances is described by the pidgin ALGOL program in the appendix, which is executed by each node. … N!(l) < N!(2) for each neighbor j, and thus by induction, N!(2-1) < n-i + (Z-1) + N!(Z-1), completing the proof. Algorithm D1 in pidgin ALGOL: it is …
Explaining algorithms using metaphors
Forišek, Michal
2013-01-01
There is a significant difference between designing a new algorithm, proving its correctness, and teaching it to an audience. When teaching algorithms, the teacher's main goal should be to convey the underlying ideas and to help the students form correct mental models related to the algorithm. This process can often be facilitated by using suitable metaphors. This work provides a set of novel metaphors identified and developed as suitable tools for teaching many of the 'classic textbook' algorithms taught in undergraduate courses worldwide. Each chapter provides exercises and didactic notes.
License plate detection algorithm
Broitman, Michael; Klopovsky, Yuri; Silinskis, Normunds
2013-12-01
A novel algorithm for vehicle license plate localization is proposed. The algorithm is based on analysis of pixel intensity transition gradients. Nearly 2,500 natural-scene gray-level vehicle images with different backgrounds and ambient illumination were tested. The best set of algorithm parameters produces a detection rate of up to 0.94. Taking into account the abnormal camera locations during our tests, and the resulting geometrical distortion and interference from trees, this result can be considered acceptable. Correlations between source data, such as license plate dimensions and texture or camera location, and the parameters of the algorithm were also determined.
Spectral Decomposition Algorithm (SDA)
National Aeronautics and Space Administration — Spectral Decomposition Algorithm (SDA) is an unsupervised feature extraction technique similar to PCA that was developed to better distinguish spectral features in...
Theory including future not excluded
DEFF Research Database (Denmark)
Nagao, K.; Nielsen, H.B.
2013-01-01
We study a complex action theory (CAT) whose path runs over not only the past but also the future. We show that, if we regard a matrix element defined in terms of the future state at time T and the past state at time T_A as an expectation value in the CAT, then we are allowed to have the Heisenberg equation, Ehrenfest's theorem, and the conserved probability current density. In addition, we show that the expectation value at the present time t of a future-included theory, for large T - t and large t - T_A, corresponds to that of a future-not-included theory with a proper inner product for large t - T_A. Hence, the CAT...
Improvement of quantification in SPECT studies by scatter and attenuation compensation
Energy Technology Data Exchange (ETDEWEB)
Younes, R.B.; Mas, J.; Bidet, R.
1989-07-01
The filtered backprojection images obtained from classical SPECT studies are not adequate for the evaluation of volumes or parameters of clinical interest. Noise, scattering, boundary accuracy and attenuation are the main problems in SPECT quantification. The aim of the following study is to overcome these difficulties. The first step of any correction algorithm is the contour detection of the attenuating medium. A new procedure, previously described by the authors, accurately and automatically finds the boundaries of the surrounding body. Compton scattering is eliminated by a modified version of Jaszczak's method; this alteration is essential to implement the iterative attenuation correction algorithm derived from Chang's method. Results obtained using computer simulations and real phantom or clinical studies demonstrate a marked improvement in contrast and count levels in the corrected slices. The process is fully automatic, and the efficiency of the procedures allows fast processing of daily SPECT examinations.
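For reference, a first-order Chang correction multiplies each reconstructed pixel by the reciprocal of its mean attenuation over all projection angles. A minimal sketch under simplifying assumptions (uniform attenuation coefficient and a circular boundary, whereas the paper uses the automatically detected body contour):

```python
import math

def chang_correction_factor(x, y, radius, mu, n_angles=64):
    """First-order Chang attenuation correction factor for a point
    (x, y) inside a uniform circular attenuator of the given radius.

    Illustrative sketch, not the authors' implementation: averages
    exp(-mu * path length) over n_angles rays and returns its
    reciprocal."""
    total = 0.0
    for k in range(n_angles):
        theta = 2.0 * math.pi * k / n_angles
        dx, dy = math.cos(theta), math.sin(theta)
        # Distance from (x, y) to the circle boundary along (dx, dy):
        # the positive root t of |(x, y) + t*(dx, dy)| = radius.
        b = x * dx + y * dy
        t = -b + math.sqrt(b * b + radius * radius - (x * x + y * y))
        total += math.exp(-mu * t)
    return n_angles / total  # reciprocal of the mean attenuation

# At the center of a 10 cm radius phantom (mu = 0.15 /cm) every ray
# sees the same 10 cm path, so the factor is exp(0.15 * 10).
print(round(chang_correction_factor(0.0, 0.0, 10.0, 0.15), 3))  # 4.482
```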
Efficient uncertainty quantification methodologies for high-dimensional climate land models
Energy Technology Data Exchange (ETDEWEB)
Sargsyan, Khachik [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Safta, Cosmin [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Berry, Robert Dan [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Ray, Jaideep [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Debusschere, Bert J. [Sandia National Lab. (SNL-CA), Livermore, CA (United States); Najm, Habib N. [Sandia National Lab. (SNL-CA), Livermore, CA (United States)
2011-11-01
In this report, we proposed, examined and implemented approaches for performing efficient uncertainty quantification (UQ) in climate land models. Specifically, we applied a Bayesian compressive sensing framework to polynomial chaos spectral expansions, enhanced it with an iterative basis-reduction algorithm, and investigated the results on test models as well as on the Community Land Model (CLM). Furthermore, we discussed the construction of efficient quadrature rules for forward propagation of uncertainties from a high-dimensional, constrained input space to output quantities of interest. The work lays the groundwork for efficient forward UQ for high-dimensional, strongly nonlinear and computationally costly climate models. Moreover, to investigate parameter inference approaches, we applied two variants of the Markov chain Monte Carlo (MCMC) method to a soil moisture dynamics submodel of the CLM. The evaluation of these algorithms gave us a good foundation for further building out the Bayesian calibration framework towards the goal of robust component-wise calibration.
Faster Fourier transformation: The algorithm of S. Winograd
Zohar, S.
1979-01-01
The new DFT algorithm of S. Winograd is developed and presented in detail. This algorithm uses about 1/5 of the number of multiplications used by the Cooley-Tukey algorithm and is applicable to any order which is a product of relatively prime factors from the following list: 2, 3, 4, 5, 7, 8, 9, 16. The algorithm is presented in terms of a series of tableaus, which are convenient, compact, graphical representations of the sequence of arithmetic operations in the corresponding parts of the algorithm. Using these in conjunction with the included tables makes it relatively easy to apply the algorithm and evaluate its performance.
A theoretical comparison of evolutionary algorithms and simulated annealing
Energy Technology Data Exchange (ETDEWEB)
Hart, W.E.
1995-08-28
This paper theoretically compares the performance of simulated annealing and evolutionary algorithms. Our main result is that under mild conditions a wide variety of evolutionary algorithms can be shown to have greater performance than simulated annealing after a sufficiently large number of function evaluations. This class of EAs includes variants of evolution strategies and evolutionary programming, the canonical genetic algorithm, as well as a variety of genetic algorithms that have been applied to combinatorial optimization problems. The proof of this result is based on a performance analysis of a very general class of stochastic optimization algorithms, which has implications for the performance of a variety of other optimization algorithms.
Video surveillance for sensor platforms algorithms and architectures
Al Najjar, Mayssaa; Bayoumi, Magdy
2014-01-01
This book introduces resource-aware image decomposition, registration, fusion, object detection, and tracking algorithms, along with their applications in security, monitoring, and integration in 3rd Generation Surveillance Systems. All algorithms are evaluated through experimental and simulation results, and a parallel, pipelined, efficient architecture for implementing the algorithms is described. • Describes a new type of image processing algorithm suited to low-power and low-memory platforms such as wireless sensor networks or mobile devices; • Uses simulation and experimental results to evaluate the algorithms presented; • Includes hardware architecture for critical components of the algorithms described.
Testing the accuracy of redshift space group finding algorithms
Frederic, J J
1994-01-01
Using simulated redshift surveys generated from a high resolution N-body cosmological structure simulation, we study algorithms used to identify groups of galaxies in redshift space. Two algorithms are investigated; both are friends-of-friends schemes with variable linking lengths in the radial and transverse dimensions. The chief difference between the algorithms is in the redshift linking length. The algorithm proposed by Huchra & Geller (1982) uses a generous linking length designed to find "fingers of god", while that of Nolthenius & White (1987) uses a smaller linking length to minimize contamination by projection. We find that neither of the algorithms studied is intrinsically superior to the other; rather, the ideal algorithm as well as the ideal algorithm parameters depend on the purpose for which groups are to be studied. The Huchra/Geller algorithm misses few real groups, at the cost of including some spurious groups and members, while the Nolthenius/White algorithm misses high velocity d...
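The friends-of-friends idea common to both algorithms can be sketched as follows. The sketch uses fixed transverse and line-of-sight (velocity) linking lengths for clarity, whereas the published algorithms scale them with survey depth:

```python
def friends_of_friends(galaxies, d_link, v_link):
    """Group galaxies with a friends-of-friends scheme using separate
    transverse (d_link) and velocity (v_link) linking lengths.

    galaxies: list of (x, y, v) tuples, with x, y projected positions
    and v the recession velocity. Illustrative sketch of the linking
    rule only, not either paper's full algorithm."""
    n = len(galaxies)
    parent = list(range(n))  # union-find forest

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for i in range(n):
        xi, yi, vi = galaxies[i]
        for j in range(i + 1, n):
            xj, yj, vj = galaxies[j]
            transverse = ((xi - xj) ** 2 + (yi - yj) ** 2) ** 0.5
            if transverse < d_link and abs(vi - vj) < v_link:
                parent[find(i)] = find(j)  # merge the two groups

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Two close galaxies plus one isolated one -> two groups.
gals = [(0.0, 0.0, 1000.0), (0.3, 0.1, 1150.0), (5.0, 5.0, 4000.0)]
print(friends_of_friends(gals, d_link=1.0, v_link=300.0))  # [[0, 1], [2]]
```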
Reliability quantification and visualization for electric microgrids
Panwar, Mayank
and parallel with the area Electric Power System (EPS), (3) includes the local EPS and may include portions of the area EPS, and (4) is intentionally planned. A more reliable electric power grid requires microgrids to operate in tandem with the EPS. Reliability can be quantified through various performance metrics; in North America this is done through North American Electric Reliability Corporation (NERC) metrics. The microgrid differs significantly from the traditional EPS, especially at the asset level, due to heterogeneity in assets. Thus, its performance cannot be quantified by the same metrics used for the EPS. Some of the NERC metrics are calculated and interpreted in this work to quantify performance for a single asset and for a group of assets in a microgrid, and two more metrics are introduced for system-level performance quantification. The next step is a better representation of the large amount of data generated by the microgrid. Visualization is one such form of representation, which is explored in detail, and a graphical user interface (GUI) is developed as a deliverable tool for the operator for informed decision making and planning. Electronic Appendices I and II contain the data and MATLAB program code for the analysis and visualization in this work.
Algorithms for Drug Sensitivity Prediction
Directory of Open Access Journals (Sweden)
Carlos De Niz
2016-11-01
Full Text Available Precision medicine entails the design of therapies that are matched to each individual patient. Thus, predictive modeling of drug responses for specific patients constitutes a significant challenge for personalized therapy. In this article, we review approaches that have been proposed to tackle the drug sensitivity prediction problem, especially with respect to personalized cancer therapy. We first discuss modeling approaches that are based on genomic characterizations alone, and then broaden the discussion to modeling techniques that integrate both genomic and functional information. A comparative analysis of the prediction performance of four representative algorithms (elastic net, random forest, kernelized Bayesian multi-task learning, and deep learning), reflecting the broad classes of regularized linear, ensemble, kernelized, and neural-network-based models, respectively, is included in the paper. The review also considers the challenges that need to be addressed for successful implementation of the algorithms in clinical practice.
Biodiversity conservation including uncharismatic species
DEFF Research Database (Denmark)
Muñoz, Joaquin
2007-01-01
Recent papers present ideas on biodiversity conservation strategies and priorities (Redford et al. 2003; Lamoreux et al. 2006; Rodríguez et al. 2006), the current status of biodiversity (Loreau et al. 2006), the obligations of conservation biologists regarding management policies (Chapron 2006; Schwartz 2006), and the main threats to biodiversity, including invasive species (Bawa 2006). I suggest, however, that these articles do not really deal with biodiversity. Rather, they all focus on a few obviously charismatic groups (mammals, birds, some plants, fishes, human culture)...
New algorithms for the symmetric tridiagonal eigenvalue computation
Energy Technology Data Exchange (ETDEWEB)
Pan, V. [City Univ. of New York, Bronx, NY (United States); International Computer Sciences Institute, Berkeley, CA (United States)]
1994-12-31
The author presents new algorithms that accelerate the bisection method for the symmetric eigenvalue problem. The algorithms rely on some new techniques, which include acceleration of Newton's iteration and can also be applied to accelerate other iterative processes, in particular iterative algorithms for approximating polynomial zeros.
Optical Algorithm for Cloud Shadow Detection Over Water
2013-02-01
[5] R. Amin, A. Gilerson, J. Zhou, B. Gross, F. Moshary, and S. Ahmed, "Impacts of atmospheric corrections on algal bloom detection techniques" … optical algorithms to detect and classify harmful algal blooms from space. His current research interests include optical algorithm development … Keywords: algorithm, remote sensing, shadow detection.
Machine vision theory, algorithms, practicalities
Davies, E R
2005-01-01
In the last 40 years, machine vision has evolved into a mature field embracing a wide range of applications including surveillance, automated inspection, robot assembly, vehicle guidance, traffic monitoring and control, signature verification, biometric measurement, and analysis of remotely sensed images. While researchers and industry specialists continue to document their work in this area, it has become increasingly difficult for professionals and graduate students to understand the essential theory and practicalities well enough to design their own algorithms and systems. This book directly addresses this need.
FLUXNET2015 Dataset: Batteries included
Pastorello, G.; Papale, D.; Agarwal, D.; Trotta, C.; Chu, H.; Canfora, E.; Torn, M. S.; Baldocchi, D. D.
2016-12-01
The synthesis datasets have become one of the signature products of the FLUXNET global network. They are composed from contributions of individual site teams to regional networks, which are then compiled into uniform data products, now used in a wide variety of research efforts: from plant-scale microbiology to global-scale climate change. The FLUXNET Marconi Dataset in 2000 was the first in the series, followed by the FLUXNET LaThuile Dataset in 2007, with significant additions of data products and coverage, solidifying the adoption of the datasets as a research tool. The FLUXNET2015 Dataset brings another round of substantial improvements, including extended quality control processes and checks, use of downscaled reanalysis data for filling long gaps in micrometeorological variables, multiple methods for USTAR threshold estimation and flux partitioning, and uncertainty estimates, all accompanied by auxiliary flags. This "batteries included" approach provides a lot of information for someone who wants to explore the data (and the processing methods) in detail, but it inevitably leads to a large number of data variables. Although dealing with all these variables might seem overwhelming at first, especially to someone looking at eddy covariance data for the first time, there is method to our madness. In this work we describe the data products and variables that are part of the FLUXNET2015 Dataset and the rationale behind the organization of the dataset, covering the simplified version (labeled SUBSET), the complete version (labeled FULLSET), and the auxiliary products in the dataset.
Directory of Open Access Journals (Sweden)
Julie Nikolaisen
Full Text Available Mitochondrial morphology and function are coupled in healthy cells, during pathological conditions, and in adaptation to endogenous and exogenous stress. In this sense mitochondrial shape can range from small globular compartments to complex filamentous networks, even within the same cell. Understanding how mitochondrial morphological changes (i.e., "mitochondrial dynamics") are linked to cellular (patho)physiology is currently the subject of intense study and requires detailed quantitative information. During the last decade, various computational approaches have been developed for automated two-dimensional (2D) analysis of mitochondrial morphology and number in microscopy images. Although these strategies are well suited for analysis of adhering cells with a flat morphology, they are not applicable to thicker cells, which require a three-dimensional (3D) image acquisition and analysis procedure. Here we developed and validated an automated image analysis algorithm allowing simultaneous 3D quantification of mitochondrial morphology and network properties in human endothelial cells (HUVECs). Cells expressing a mitochondria-targeted green fluorescent protein (mitoGFP) were visualized by 3D confocal microscopy, and mitochondrial morphology was quantified using both the established 2D method and the new 3D strategy. We demonstrate that both analyses can be used to characterize and discriminate between various mitochondrial morphologies and network properties. However, the results from 2D and 3D analysis were not equivalent when filamentous mitochondria in normal HUVECs were compared with circular/spherical mitochondria in metabolically stressed HUVECs treated with rotenone (ROT). 2D quantification suggested that metabolic stress induced mitochondrial fragmentation and loss of biomass. In contrast, 3D analysis revealed that the mitochondrial network structure was dissolved without affecting the amount and size of the organelles. Thus, our results demonstrate
Quantification of regional fat volume in rat MRI
Sacha, Jaroslaw P.; Cockman, Michael D.; Dufresne, Thomas E.; Trokhan, Darren
2003-05-01
Multiple initiatives in the pharmaceutical and beauty care industries are directed at identifying therapies for weight management. Body composition measurements are critical for such initiatives. Imaging technologies that can be used to measure body composition noninvasively include DXA (dual energy x-ray absorptiometry) and MRI (magnetic resonance imaging). Unlike other approaches, MRI provides the ability to perform localized measurements of fat distribution. Several factors complicate the automatic delineation of fat regions and quantification of fat volumes. These include motion artifacts, field non-uniformity, brightness and contrast variations, chemical shift misregistration, and ambiguity in delineating anatomical structures. We have developed an approach to deal practically with those challenges. The approach is implemented in a package, the Fat Volume Tool, for automatic detection of fat tissue in MR images of the rat abdomen, including automatic discrimination between abdominal and subcutaneous regions. We suppress motion artifacts using masking based on detection of implicit landmarks in the images. Adaptive object extraction is used to compensate for intensity variations. This approach enables us to perform fat tissue detection and quantification in a fully automated manner. The package can also operate in manual mode, which can be used for verification of the automatic analysis or for performing supervised segmentation. In supervised segmentation, the operator has the ability to interact with the automatic segmentation procedures to touch-up or completely overwrite intermediate segmentation steps. The operator's interventions steer the automatic segmentation steps that follow. This improves the efficiency and quality of the final segmentation. Semi-automatic segmentation tools (interactive region growing, live-wire, etc.) improve both the accuracy and throughput of the operator when working in manual mode. The quality of automatic segmentation has been
Gems of combinatorial optimization and graph algorithms
Skutella, Martin; Stiller, Sebastian; Wagner, Dorothea
2015-01-01
Are you looking for new lectures for your course on algorithms, combinatorial optimization, or algorithmic game theory? Maybe you need a convenient source of relevant, current topics for a graduate student or advanced undergraduate student seminar? Or perhaps you just want an enjoyable look at some beautiful mathematical and algorithmic results, ideas, proofs, concepts, and techniques in discrete mathematics and theoretical computer science? Gems of Combinatorial Optimization and Graph Algorithms is a handpicked collection of up-to-date articles, carefully prepared by a select group of international experts, who have contributed some of their most mathematically or algorithmically elegant ideas. Topics include longest tours and Steiner trees in geometric spaces, cartograms, resource buying games, congestion games, selfish routing, revenue equivalence and shortest paths, scheduling, linear structures in graphs, contraction hierarchies, budgeted matching problems, and motifs in networks. This ...
Adaptive Filtering Algorithms and Practical Implementation
Diniz, Paulo S R
2013-01-01
In the fourth edition of Adaptive Filtering: Algorithms and Practical Implementation, author Paulo S.R. Diniz presents the basic concepts of adaptive signal processing and adaptive filtering in a concise and straightforward manner. The main classes of adaptive filtering algorithms are presented in a unified framework, using clear notations that facilitate actual implementation. The main algorithms are described in tables, which are detailed enough to allow the reader to verify the covered concepts. Many examples address problems drawn from actual applications. New material to this edition includes: Analytical and simulation examples in Chapters 4, 5, 6 and 10 Appendix E, which summarizes the analysis of set-membership algorithm Updated problems and references Providing a concise background on adaptive filtering, this book covers the family of LMS, affine projection, RLS and data-selective set-membership algorithms as well as nonlinear, sub-band, blind, IIR adaptive filtering, and more. Several problems are...
Quantum robot: structure, algorithms and applications
Dong, Daoyi; Zhang, Chenbin; Chen, Zonghai
2008-01-01
A brand-new paradigm of robots--quantum robots--is proposed through the fusion of quantum theory with robot technology. A quantum robot is essentially a complex quantum system which generally consists of three fundamental components: multi-quantum computing units (MQCU), quantum controller/actuator, and information acquisition units. Corresponding to the system structure, several learning control algorithms, including quantum searching algorithms and quantum reinforcement learning algorithms, are presented for quantum robots. The theoretical results show that quantum robots using quantum searching algorithms can reduce the complexity of the search problem from O(N^2) in classical robots to O(N^3/2). Simulation results demonstrate that quantum robots are also superior to classical robots in efficient learning under novel quantum reinforcement learning algorithms. Considering the advantages of quantum robots, some important potential applications are also analyzed and prospected.
Algorithms for improved performance in cryptographic protocols.
Energy Technology Data Exchange (ETDEWEB)
Schroeppel, Richard Crabtree; Beaver, Cheryl Lynn
2003-11-01
Public-key cryptographic algorithms provide data authentication and non-repudiation for electronic transmissions. The mathematical nature of the algorithms, however, means they require a significant amount of computation, and encrypted messages and digital signatures demand high bandwidth. Accordingly, there are many environments (e.g. wireless, ad hoc, remote-sensing networks) where public-key requirements are prohibitive and such algorithms cannot be used. The use of elliptic curves in public-key computations has provided a means by which computation and bandwidth can be somewhat reduced. We report here on research conducted under an LDRD project aimed at finding even more efficient algorithms and at making public-key cryptography available to a wider range of computing environments. We improved upon several algorithms, including one for which a patent application has been filed. Further, we discovered some new problems and relations on which future cryptographic algorithms may be based.
Learning theory of distributed spectral algorithms
Guo, Zheng-Chu; Lin, Shao-Bo; Zhou, Ding-Xuan
2017-07-01
Spectral algorithms have been widely used and studied in learning theory and inverse problems. This paper is concerned with distributed spectral algorithms, for handling big data, based on a divide-and-conquer approach. We present a learning theory for these distributed kernel-based learning algorithms in a regression framework including nice error bounds and optimal minimax learning rates achieved by means of a novel integral operator approach and a second order decomposition of inverse operators. Our quantitative estimates are given in terms of regularity of the regression function, effective dimension of the reproducing kernel Hilbert space, and qualification of the filter function of the spectral algorithm. They do not need any eigenfunction or noise conditions and are better than the existing results even for the classical family of spectral algorithms.
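The divide-and-conquer strategy underlying such distributed algorithms can be sketched with kernel ridge regression, one member of the spectral-algorithm family: each machine solves the regression on its own data chunk, and the local estimators are averaged. The kernel, bandwidth, and regularization below are illustrative choices, not the paper's settings.

```python
import numpy as np

def local_krr(X, y, X_test, sigma=0.3, lam=1e-3):
    """Kernel ridge regression with a Gaussian kernel on one chunk."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma ** 2))
    K = k(X, X)
    # Regularized linear system (K + lam*n*I) alpha = y.
    alpha = np.linalg.solve(K + lam * len(X) * np.eye(len(X)), y)
    return k(X_test, X) @ alpha

def distributed_krr(X, y, X_test, n_machines=4, **kw):
    """Divide-and-conquer sketch: split the sample across machines,
    solve KRR locally, and average the local predictions."""
    chunks_X = np.array_split(X, n_machines)
    chunks_y = np.array_split(y, n_machines)
    preds = [local_krr(Xc, yc, X_test, **kw)
             for Xc, yc in zip(chunks_X, chunks_y)]
    return np.mean(preds, axis=0)

rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(np.pi * X[:, 0]) + 0.05 * rng.standard_normal(200)
X_test = np.array([[0.0], [0.5]])
# Predictions should approximate sin(pi x) at the two test points.
print(distributed_krr(X, y, X_test).round(2))
```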
DEFF Research Database (Denmark)
Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino
2016-01-01
A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivi...
2016-01-01
This project concerns the implementation of a decentralized algorithm for shape formation. The initial idea was to test this algorithm with a swarm of autonomous drones, but due to lack of time and the complexity of the project, the work was developed only in 2D and in simulation.
DEFF Research Database (Denmark)
Husfeldt, Thore
2015-01-01
This chapter presents an introduction to graph colouring algorithms. The focus is on vertex-colouring algorithms that work for general classes of graphs with worst-case performance guarantees in a sequential model of computation. The presentation aims to demonstrate the breadth of available...
Ciocanea Teodorescu I.,
2016-01-01
In this thesis we are interested in describing algorithms that answer questions arising in ring and module theory. Our focus is on deterministic polynomial-time algorithms and rings and modules that are finite. The first main result of this thesis is a solution to the module isomorphism problem in
Implementing Vehicle Routing Algorithms
1975-09-01
"Multiple Depot Vehicle Dispatch Problem," presented at the ORSA/TIMS meeting, San Juan, Puerto Rico, Oct. 1974. … 28. Gillett, B., and Miller, L., "A Heuristic Algorithm for …" … 45. Lam, T., "Comments on a Heuristic Algorithm for the Multiple Terminal Delivery Problem," Transportation Science, Vol. 4, No. 4, Nov. 1970, p. 403.
Parallel scheduling algorithms
Energy Technology Data Exchange (ETDEWEB)
Dekel, E.; Sahni, S.
1983-01-01
Parallel algorithms are given for scheduling problems such as scheduling to minimize the number of tardy jobs, job sequencing with deadlines, scheduling to minimize earliness and tardiness penalties, channel assignment, and minimizing the mean finish time. The shared memory model of parallel computers is used to obtain fast algorithms. 26 references.
Loo, K
2005-01-01
The goals of this paper are to show the following. First, Grover's algorithm can be viewed as a digital approximation to the analog quantum algorithm proposed in "An Analog Analogue of a Digital Quantum Computation" by E. Farhi and S. Gutmann, Phys. Rev. A 57, 2403-2406 (1998), quant-ph/9612026. We call this analog algorithm the Grover-Farhi-Gutmann (GFG) algorithm. Second, the propagator of the GFG algorithm can be written as a sum-over-paths formula and given a sum-over-paths interpretation, i.e., a Feynman path sum/integral; we use nonstandard analysis to do this. Third, in the semi-classical limit $\hbar\to 0$, both the Grover and the GFG algorithms (viewed in the setting of the approximation in this paper) must run instantaneously. Finally, we end the paper with an open question. In "Semiclassical Shor's Algorithm" by P. Giorda et al., Phys. Rev. A 70, 032303 (2004), quant-ph/0303037, the authors proposed building semi-classical quantum computers to run Shor's algorithm because the ...
Automated pericardium delineation and epicardial fat volume quantification from noncontrast CT.
Ding, Xiaowei; Terzopoulos, Demetri; Diaz-Zamudio, Mariana; Berman, Daniel S; Slomka, Piotr J; Dey, Damini
2015-09-01
The authors aimed to develop and validate an automated algorithm for epicardial fat volume (EFV) quantification from noncontrast CT. They developed a hybrid algorithm based on initial segmentation with a multiple-patient CT atlas, followed by automated pericardium delineation using geodesic active contours. A coregistered segmented CT atlas was created from manually segmented CT data and stored offline. The heart and pericardium in test CT data are first initialized by image registration to the CT atlas. The pericardium is then detected by a knowledge-based algorithm, which extracts only the membrane representing the pericardium. From its initial atlas position, the pericardium is modeled by geodesic active contours, which iteratively deform and lock onto the detected pericardium. EFV is automatically computed using a standard fat attenuation range. The authors applied their algorithm to 50 patients undergoing routine coronary calcium assessment by CT. Measurement time was 60 s per patient. EFV quantified by the algorithm (83.60 ± 32.89 cm³) and by expert readers (81.85 ± 34.28 cm³) showed excellent correlation (r = 0.97, p < 0.0001), with no significant differences by comparison of individual data points (p = 0.15). Voxel overlap by Dice coefficient between the algorithm and expert readers was 0.92 (range 0.88-0.95). The mean surface distance and Hausdorff distance between manually drawn contours and the automatically obtained contours were 0.6 ± 0.9 mm and 3.9 ± 1.7 mm, respectively. The mean difference between the algorithm and experts was 9.7% ± 7.4%, similar to the interobserver variability between two readers (8.0% ± 5.3%, p = 0.3). The authors' novel automated method based on atlas-initialized active contours accurately and rapidly quantifies EFV from noncontrast CT.
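The two evaluation quantities in this abstract, fat volume inside the pericardium and Dice overlap between segmentations, can be sketched as follows. The -190 to -30 HU window is a commonly used fat attenuation range and an assumption here, not necessarily the paper's exact thresholds:

```python
import numpy as np

def epicardial_fat_volume(ct_hu, pericardium_mask, voxel_mm3,
                          fat_range=(-190, -30)):
    """Count voxels inside the pericardium whose attenuation falls in
    the fat window and convert the total to cm^3."""
    fat = ((ct_hu >= fat_range[0]) & (ct_hu <= fat_range[1])
           & pericardium_mask)
    return fat.sum() * voxel_mm3 / 1000.0  # mm^3 -> cm^3

def dice(mask_a, mask_b):
    """Dice overlap coefficient between two binary segmentations."""
    inter = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * inter / (mask_a.sum() + mask_b.sum())

# Toy 3-voxel example: two fat voxels inside the pericardium.
hu = np.array([-100.0, -50.0, 40.0])
peri = np.array([True, True, True])
print(epicardial_fat_volume(hu, peri, voxel_mm3=500.0))  # 1.0 (cm^3)
print(round(dice(np.array([1, 1, 0], bool),
                 np.array([1, 0, 0], bool)), 3))  # 0.667
```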
Families classification including multiopposition asteroids
Milani, Andrea; Spoto, Federica; Knežević, Zoran; Novaković, Bojan; Tsirvoulis, Georgios
2016-01-01
In this paper we present the results of our new classification of asteroid families, upgraded by using a catalog of >500,000 asteroids. We discuss the outcome of the most recent update of the family list and of their membership. We found enough evidence to perform 9 mergers of previously independent families. By introducing an improved method for estimating the expected family growth in the less populous regions (e.g. at high inclination), we were able to reliably reject one tiny group as a probable statistical fluke. Thus we reduced our current list to 115 families. We also present newly determined ages for 6 families, including the complex families 135 and 221, also improving our understanding of the relationship between dynamical and collisional families. We conclude with some recommendations for future work and for the family-name problem.
Quantification of stenosis in coronary artery via CTA using fuzzy distance transform
Xu, Yan; Saha, Punam K.; Hu, Guangshu; Liang, Guoyuan; Yang, Yan; Geng, Jinzhao
2009-02-01
Computed tomographic angiography (CTA), being noninvasive, economical and informative, has become a common modality for monitoring disease status and treatment effects. Here, we present a new method for detecting and quantifying coronary arterial stenosis via CTA using a fuzzy distance transform (FDT) approach. FDT computes local depth at each image point in the presence of partial voluming. Coronary arterial stenoses are detected and their severities are quantified by analyzing FDT values along the medial axis of an artery obtained by skeletonization. Also, we have developed a new skeletal pruning algorithm to improve the quality of medial axes and, therefore, to enhance the accuracy of stenosis detection and quantification. The method comprises the following steps: (1) fuzzy segmentation of the coronary artery via CTA, (2) FDT computation of coronary arteries, (3) medial axis computation, (4) estimation of local diameter along arteries and (5) stenosis detection and quantification of arterial blockage. Performance of the method has been quantitatively evaluated on a realistic coronary artery phantom dataset with randomly simulated stenoses, and the results are compared with a classical binary algorithm. The method has also been applied to a clinical CTA dataset from thirteen patients with 59 stenoses, and the results are compared with an expert's quantitative assessment of stenoses. Results of the phantom experiment indicate that the new method is significantly more accurate than the conventional binary method. Also, the results of the clinical study indicate that the computerized method agrees closely with the expert's assessments.
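Step (5) above, quantifying severity from the local-diameter profile along the medial axis, can be sketched as follows. This is a minimal illustration, not the authors' algorithm: the choice of the median as the reference ("healthy") diameter is an assumption for the sketch.

```python
import numpy as np

def stenosis_severity(diameters):
    """Percent diameter stenosis from a local-diameter profile sampled
    along the medial axis (FDT values give the local depth/radius).
    The reference diameter is taken as the median of the profile,
    a robust stand-in for the healthy lumen diameter."""
    d = np.asarray(diameters, dtype=float)
    d_min = d.min()
    d_ref = np.median(d)
    return 100.0 * (1.0 - d_min / d_ref)

profile = [3.0, 3.1, 3.0, 1.5, 3.0, 2.9, 3.0]  # mm; the dip is the stenosis
print(round(stenosis_severity(profile), 1))  # 50.0
```

In the paper's setting the diameter values come from FDT rather than a binary distance transform, which is what makes the measurement stable under partial voluming.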
Directory of Open Access Journals (Sweden)
Francesca Musiani
2013-08-01
Full Text Available Algorithms are increasingly often cited as one of the fundamental shaping devices of our daily, immersed-in-information existence. Their importance is acknowledged, their performance scrutinised in numerous contexts. Yet, a lot of what constitutes 'algorithms' beyond their broad definition as “encoded procedures for transforming input data into a desired output, based on specified calculations” (Gillespie, 2013) is often taken for granted. This article seeks to contribute to the discussion about 'what algorithms do' and in which ways they are artefacts of governance, providing two examples drawn from the internet and ICT realm: search engine queries and e-commerce websites’ recommendations to customers. The question of the relationship between algorithms and rules is likely to occupy an increasingly central role in the study and the practice of internet governance, in terms of both institutions’ regulation of algorithms, and algorithms’ regulation of our society.
DEFF Research Database (Denmark)
Bilardi, Gianfranco; Pietracaprina, Andrea; Pucci, Geppino
2016-01-01
A framework is proposed for the design and analysis of network-oblivious algorithms, namely algorithms that can run unchanged, yet efficiently, on a variety of machines characterized by different degrees of parallelism and communication capabilities. The framework prescribes that a network-oblivious algorithm be specified on a parallel model of computation where the only parameter is the problem’s input size, and then evaluated on a model with two parameters, capturing parallelism granularity and communication latency. It is shown that for a wide class of network-oblivious algorithms, optimality... of cache hierarchies, to the realm of parallel computation. Its effectiveness is illustrated by providing optimal network-oblivious algorithms for a number of key problems. Some limitations of the oblivious approach are also discussed...
Jacobsson Svärd, Staffan; Holcombe, Scott; Grape, Sophie
2015-05-01
A fuel assembly operated in a nuclear power plant typically contains 100-300 fuel rods, depending on fuel type, which become strongly radioactive during irradiation in the reactor core. For operational and security reasons, it is of interest to experimentally deduce rod-wise information from the fuel, preferably by means of non-destructive measurements. The tomographic SPECT technique offers such possibilities through its two-step application: (1) recording the gamma-ray flux distribution around the fuel assembly, and (2) reconstructing the assembly's internal source distribution, based on the recorded radiation field. In this paper, algorithms for performing the latter step and extracting quantitative relative rod-by-rod data are accounted for. Compared to the application of SPECT in nuclear medicine, nuclear fuel assemblies present a much more heterogeneous distribution of internal attenuation to gamma radiation than the human body, typically with rods containing pellets of heavy uranium dioxide surrounded by cladding of a zirconium alloy placed in water or air. This inhomogeneity severely complicates the tomographic quantification of the rod-wise relative source content, and the deduction of conclusive data requires detailed modelling of the attenuation to be introduced in the reconstructions. However, as shown in this paper, simplified models may still produce valuable information about the fuel. Here, a set of reconstruction algorithms for SPECT on nuclear fuel assemblies are described and discussed in terms of their quantitative performance for two applications: verification of fuel assemblies' completeness in nuclear safeguards, and rod-wise fuel characterization. It is argued that a request not to base the former assessment on any a priori information constrains which reconstruction methods may be used in that case, whereas the use of a priori information on geometry and material content enables highly accurate quantitative assessment, which
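The reconstruction step described above, recovering rod-wise source strengths from measured gamma fluxes through known attenuation weights, can be illustrated with a tiny algebraic reconstruction (ART/Kaczmarz) solver. The 3x2 system matrix below is a made-up toy, not a fuel-assembly geometry, and ART is only one of several reconstruction families the paper discusses.

```python
import numpy as np

def art(A, b, iters=200, lam=0.5):
    """Kaczmarz/ART: cycle over measurements, projecting the current
    estimate onto each measurement's hyperplane with relaxation lam."""
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        for a_i, b_i in zip(A, b):
            x += lam * (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Each row: attenuation-weighted contribution of each rod to one
# detector position (illustrative numbers).
A = np.array([[0.9, 0.3],
              [0.2, 0.8],
              [0.5, 0.5]])
true_sources = np.array([2.0, 1.0])
b = A @ true_sources          # noiseless synthetic measurements
print(np.round(art(A, b), 3))  # [2. 1.]
```

The practical difficulty the abstract emphasizes is not the solver but the weights: for a real assembly, each entry of `A` must encode attenuation through uranium dioxide, cladding and moderator along the ray path, and errors in that model propagate directly into the rod-wise estimates.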
Clutter discrimination algorithm simulation in pulse laser radar imaging
Zhang, Yan-mei; Li, Huan; Guo, Hai-chao; Su, Xuan; Zhu, Fule
2015-10-01
Pulse laser radar imaging performance is greatly influenced by different kinds of clutter. Various algorithms have been developed to mitigate clutter. However, estimating the performance of a new algorithm is difficult. Here, a simulation model for evaluating clutter discrimination algorithms is presented. This model consists of laser pulse emission, clutter jamming, laser pulse reception and target image production. Additionally, a hardware platform was set up to gather clutter data reflected by ground and trees. The logged data serve as the clutter-jamming input to the simulation model. The hardware platform includes a laser diode, a laser detector and a high-sample-rate data logging circuit. The laser diode transmits short laser pulses (40 ns FWHM) at a 12.5 kHz pulse rate and a 905 nm wavelength. An analog-to-digital converter chip integrated in the sampling circuit works at 250 megasamples per second. The simulation model and the hardware platform together constitute a clutter discrimination algorithm simulation system. Using this system, after analyzing the logged clutter data, a new compound pulse detection algorithm was developed. This new algorithm combines a matched filter algorithm and a constant fraction discrimination (CFD) algorithm. First, the laser echo pulse signal is processed by the matched filter; the CFD algorithm is then applied. Finally, clutter jamming from ground and trees is discriminated and the target image is produced. Laser radar images are simulated using the CFD algorithm, the matched filter algorithm and the new algorithm, respectively. Simulation results demonstrate that the new algorithm best mitigates clutter reflected by ground and trees and achieves the best target imaging.
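The CFD stage of the compound detector above can be sketched as follows. This is a generic constant-fraction discriminator, not the authors' implementation; the delay, fraction and the triangular test pulse are illustrative assumptions, and in the paper's pipeline the input would be the matched-filter output rather than the raw echo.

```python
import numpy as np

def cfd_trigger_index(signal, delay=4, fraction=0.3):
    """Constant fraction discrimination (sketch): form the bipolar
    signal fraction*s[n] - s[n-delay] and return the index of its
    first negative-going zero crossing after the pulse peak, which
    gives an amplitude-independent timing mark."""
    s = np.asarray(signal, dtype=float)
    delayed = np.concatenate([np.zeros(delay), s[:-delay]])
    bipolar = fraction * s - delayed
    peak = int(np.argmax(s))
    for i in range(peak, len(s) - 1):
        if bipolar[i] > 0 >= bipolar[i + 1]:
            return i + 1
    return None

pulse = [0, 1, 2, 3, 4, 5, 4, 3, 2, 1, 0]  # toy triangular echo
print(cfd_trigger_index(pulse))  # 6
```

The point of combining the two stages is that the matched filter maximizes signal-to-noise ratio against clutter, while CFD makes the trigger time insensitive to echo amplitude, which varies strongly between ground, tree and target returns.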
Including Magnetostriction in Micromagnetic Models
Conbhuí, Pádraig Ó.; Williams, Wyn; Fabian, Karl; Nagy, Lesleis
2016-04-01
The magnetic anomalies that identify crustal spreading are predominantly recorded by basalts formed at the mid-ocean ridges, whose magnetic signals are dominated by iron-titanium oxides (Fe3-xTixO4), so-called "titanomagnetites", of which the Fe2.4Ti0.6O4 (TM60) phase is the most common. With sufficient quantities of titanium present, these minerals exhibit strong magnetostriction. To date, models of these grains in the pseudo-single-domain (PSD) range have failed to accurately account for this effect. In particular, a popular analytic treatment provided by Kittel (1949), which describes the magnetostrictive energy as an effective increase of the anisotropy constant, can produce unphysical strains for non-uniform magnetizations. I will present a rigorous approach, based on work by Brown (1966) and by Kroner (1958), for including magnetostriction in micromagnetic codes that is suitable for modelling hysteresis loops and finding remanent states in the PSD regime. Preliminary results suggest that the more rigorously defined micromagnetic models exhibit higher coercivities and extended single-domain ranges when compared to more simplistic approaches.
Collaborative framework for PIV uncertainty quantification: the experimental database
Neal, Douglas R.; Sciacchitano, Andrea; Smith, Barton L.; Scarano, Fulvio
2015-07-01
The uncertainty quantification of particle image velocimetry (PIV) measurements has become a topic of great interest, as shown by the appearance of several different methods within the past few years. These approaches have different working principles, merits and limitations, which have been speculated upon in subsequent studies. This paper reports a unique experiment that has been performed specifically to test the efficacy of PIV uncertainty methods. The case of a rectangular jet, as previously studied by Timmins et al (2012) and Wilson and Smith (2013b), is used. The novel aspect of the experiment is simultaneous velocity measurements using two different time-resolved PIV systems and a hot-wire anemometry (HWA) system. The first PIV system, called the PIV measurement system (‘PIV-MS’), is intended for nominal measurements of which the uncertainty is to be evaluated. It is based on a single camera and features a dynamic velocity range (DVR) representative of typical PIV experiments. The second PIV system, called the ‘PIV-HDR’ (high dynamic range) system, features a significantly higher DVR obtained with a higher digital imaging resolution. The hot-wire is placed in close proximity to the PIV measurement domain. The three measurement systems were carefully arranged to measure the flow velocity at the same time and location. The comparison between the PIV-HDR system and the HWA provides an estimate of the measurement precision of the reference velocity for evaluation of the instantaneous error in the measurement system. The discrepancy between the PIV-MS and the reference data provides the measurement error, which is later used to assess the different uncertainty quantification methods proposed in the literature. A detailed comparison of the uncertainty estimation methods based on the present datasets is presented in a second paper from Sciacchitano et al (2015). Furthermore, this database offers the potential to be used for
Quantification of Posterior Globe Flattening: Methodology Development and Validation
Lumpkins, S. B.; Garcia, K. M.; Sargsyan, A. E.; Hamilton, D. R.; Berggren, M. D.; Antonsen, E.; Ebert, D.
2011-01-01
Microgravity exposure affects visual acuity in a subset of astronauts, and mechanisms may include structural changes in the posterior globe and orbit. Particularly, posterior globe flattening has been implicated in several astronauts. This phenomenon is known to affect some terrestrial patient populations, and has been shown to be associated with intracranial hypertension. It is commonly assessed by magnetic resonance imaging (MRI), computed tomography (CT), or B-mode ultrasound (US), without consistent objective criteria. NASA uses a semi-quantitative scale of 0-3 as part of eye/orbit MRI and US analysis for occupational monitoring purposes. The goal of this study was to initiate development of an objective quantification methodology for posterior globe flattening.
Quantification of Posterior Globe Flattening: Methodology Development and Validation
Lumpkins, Sarah B.; Garcia, Kathleen M.; Sargsyan, Ashot E.; Hamilton, Douglas R.; Berggren, Michael D.; Ebert, Douglas
2012-01-01
Microgravity exposure affects visual acuity in a subset of astronauts and mechanisms may include structural changes in the posterior globe and orbit. Particularly, posterior globe flattening has been implicated in the eyes of several astronauts. This phenomenon is known to affect some terrestrial patient populations and has been shown to be associated with intracranial hypertension. It is commonly assessed by magnetic resonance imaging (MRI), computed tomography (CT) or B-mode ultrasound (US), without consistent objective criteria. NASA uses a semiquantitative scale of 0-3 as part of eye/orbit MRI and US analysis for occupational monitoring purposes. The goal of this study was to initiate development of an objective quantification methodology to monitor small changes in posterior globe flattening.
Microplastics in Baltic bottom sediments: Quantification procedures and first results.
Zobkov, M; Esiukova, E
2017-01-30
Microplastics in the marine environment are known to be a global ecological problem, but there are still no standardized analysis procedures for their quantification. The first breakthrough in this direction was the NOAA Laboratory Methods for quantifying synthetic particles in water and sediments, but fiber counts have been found to be underestimated with this approach. We propose modifications of these methods that allow microplastics in bottom sediments, including small fibers, to be analyzed. Addition of an internal standard to sediment samples and occasional empty runs are advised for analysis quality control. The microplastics extraction efficiency using the proposed modifications is 92±7%. The distribution of microplastics in bottom sediments of the Russian part of the Baltic Sea is presented. Microplastic particles were found in all of the samples, with an average concentration of 34±10 items/kg DW, of the same order of magnitude as values reported by neighboring studies.
Temporal and spatial quantification of farm and landscape functions
DEFF Research Database (Denmark)
Andersen, Peter Stubkjær
This PhD thesis presents a study of the spatial distribution of agricultural functions at farm and landscape levels. The study focuses on the conceptualization of multifunctionality. The concrete conceptual steps include: identification of indicators of four farm and landscape functions – production..., residence, habitat, and recreation; development of a method for quantifying farm functionality and assessing multifunctionality; and definition of a farm typology based on multifunctionality strategies. Empirical data from farm interviews were used in the study to test the developed methods. The results... generally decreases, and a tendency of increased segregation of the rural landscape is observed. In perspective, further studies on quantification in tangible units, synergies and trade-offs between functions at different scales, and correlations between structures and functions are needed...
A New Modified Firefly Algorithm
Directory of Open Access Journals (Sweden)
Medha Gupta
2016-07-01
Full Text Available Nature-inspired meta-heuristic algorithms study the emergent collective intelligence of groups of simple agents. The Firefly Algorithm is one such recent swarm-based meta-heuristic algorithm, inspired by the flashing behavior of fireflies. The algorithm was first proposed in 2008 and since then has been successfully used for solving various optimization problems. In this work, we propose a new modified version of the Firefly Algorithm (MoFA) and compare its performance with the standard Firefly Algorithm as well as various other meta-heuristic algorithms. Numerical studies and results demonstrate that the proposed algorithm is superior to existing algorithms.
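For reference, the standard firefly algorithm that MoFA modifies can be sketched as below. This is the generic 2008 formulation, not the paper's MoFA variant; the parameter values, the geometric cooling of the random step, and the sphere test function are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def firefly_minimize(f, dim=2, n=15, iters=100,
                     alpha=0.2, beta0=1.0, gamma=1.0, bounds=(-5.0, 5.0)):
    """Standard firefly algorithm: dimmer fireflies move toward
    brighter (lower-cost) ones with attractiveness beta0*exp(-gamma*r^2),
    plus a random step scaled by alpha."""
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n, dim))
    cost = np.array([f(p) for p in x])
    for _ in range(iters):
        for i in range(n):
            for j in range(n):
                if cost[j] < cost[i]:  # j is brighter: move i toward j
                    r2 = np.sum((x[i] - x[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    x[i] += beta * (x[j] - x[i]) + alpha * (rng.random(dim) - 0.5)
                    x[i] = np.clip(x[i], lo, hi)
                    cost[i] = f(x[i])
        alpha *= 0.97  # cool the random walk over iterations
    best = int(np.argmin(cost))
    return x[best], cost[best]

xbest, fbest = firefly_minimize(lambda p: float(np.sum(p ** 2)))
print(round(float(fbest), 4))  # close to the sphere minimum at 0
```

Typical modifications (as in MoFA-style variants) target the random-step schedule and the attractiveness law, since those control the exploration/exploitation balance.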
Efficient Quantification of Uncertainties in Complex Computer Code Results Project
National Aeronautics and Space Administration — Propagation of parameter uncertainties through large computer models can be very resource intensive. Frameworks and tools for uncertainty quantification are...
Efficient Quantification of Uncertainties in Complex Computer Code Results Project
National Aeronautics and Space Administration — This proposal addresses methods for efficient quantification of margins and uncertainties (QMU) for models that couple multiple, large-scale commercial or...
Aerodynamic Modeling with Heterogeneous Data Assimilation and Uncertainty Quantification Project
National Aeronautics and Space Administration — Clear Science Corp. proposes to develop an aerodynamic modeling tool that assimilates data from different sources and facilitates uncertainty quantification. The...
Koydemir, Hatice Ceylan; Gorocs, Zoltan; Tseng, Derek; Cortazar, Bingen; Feng, Steve; Chan, Raymond Yan Lok; Burbano, Jordi; McLeod, Euan; Ozcan, Aydogan
2015-03-07
Rapid and sensitive detection of waterborne pathogens in drinkable and recreational water sources is crucial for treating and preventing the spread of water related diseases, especially in resource-limited settings. Here we present a field-portable and cost-effective platform for detection and quantification of Giardia lamblia cysts, one of the most common waterborne parasites, which has a thick cell wall that makes it resistant to most water disinfection techniques including chlorination. The platform consists of a smartphone coupled with an opto-mechanical attachment weighing ~205 g, which utilizes a hand-held fluorescence microscope design aligned with the camera unit of the smartphone to image custom-designed disposable water sample cassettes. Each sample cassette is composed of absorbent pads and mechanical filter membranes; a membrane with 8 μm pore size is used as a porous spacing layer to prevent the backflow of particles to the upper membrane, while the top membrane with 5 μm pore size is used to capture the individual Giardia cysts that are fluorescently labeled. A fluorescence image of the filter surface (field-of-view: ~0.8 cm(2)) is captured and wirelessly transmitted via the mobile-phone to our servers for rapid processing using a machine learning algorithm that is trained on statistical features of Giardia cysts to automatically detect and count the cysts captured on the membrane. The results are then transmitted back to the mobile-phone in less than 2 minutes and are displayed through a smart application running on the phone. This mobile platform, along with our custom-developed sample preparation protocol, enables analysis of large volumes of water (e.g., 10-20 mL) for automated detection and enumeration of Giardia cysts in ~1 hour, including all the steps of sample preparation and analysis. We evaluated the performance of this approach using flow-cytometer-enumerated Giardia-contaminated water samples, demonstrating an average cyst capture
Locomotive Assignment Optimization Including Train Delays
Directory of Open Access Journals (Sweden)
Sandra Kasalica
2013-10-01
Full Text Available Intention – Cyclic locomotive assignment planning is a specific type of organization of locomotive usage; in essence, it means putting the complete workload into a closed chain, which is repeated periodically. The concept of cyclic locomotive assignment planning in the area of train traction has proven in practice to be the best one, but since it is made for a timetable defined in advance, without considering the stochastic nature of the timetable realization process, it leads to incompatibilities in locomotive usage. Methodology – The methodology defined in this paper contains: research on train delays on the Serbian Railways and Montenegrin Railways networks, analysis of the real-system organization of locomotive usage under conditions of train delays, a theoretical thesis on solving the problem of optimal cyclic locomotive assignment planning under conditions of train delays, design of a model with algorithms, preparation of the software package, and testing of the model and program, with results, as well as the conclusions drawn from the complete research project. Results – The optimization model of cyclic locomotive assignment planning during the process of making a timetable that includes train delays has been defined. Conclusion – The obtained results have shown, as expected, that larger train delays require a larger number of locomotives. However, by using this model it is possible to optimize the required number of locomotives, taking into account the real delay times of trains.
Energy Technology Data Exchange (ETDEWEB)
Berry, K.; Dayton, S.
1996-10-28
Citibank was using a data collection system to create a one-time-only mailing history on prospective credit card customers; the system was becoming dated relative to its time-to-market requirements and was in need of performance improvements. To compound problems with the existing system, assuring the quality of the data matching process was manpower intensive and needed to be automated. Analysis, design, and prototyping capabilities involving information technology were areas of expertise provided by the DOE-LMES Data Systems Research and Development (DSRD) program. The goal of this project was for DSRD to analyze the current Citibank credit card offering system and to suggest and prototype technology improvements that would result in faster processing with quality as good as the current system. Technologies investigated include: a high-speed network of reduced instruction set computing (RISC) processors for loosely coupled parallel processing; tightly coupled, high-performance parallel processing; higher-order computer languages such as C; fuzzy matching algorithms applied to very large data files; relational database management systems; and advanced programming techniques.
Energy Technology Data Exchange (ETDEWEB)
Lamacie, Mariana M. [University Health Network, Department of Medical Imaging, Toronto, Ontario (Canada); Thavendiranathan, Paaladinesh [University Health Network, Department of Medical Imaging, Toronto, Ontario (Canada); University of Toronto, Department of Medicine, Division of Cardiology, Toronto, Ontario (Canada); Hanneman, Kate [University Health Network, Department of Medical Imaging, Toronto, Ontario (Canada); University of Toronto, Department of Medical Imaging, Toronto, Ontario (Canada); Greiser, Andreas [Siemens Healthcare, Erlangen (Germany); Jolly, Marie-Pierre [Medical Imaging Technologies, Siemens Healthcare, Princeton, NJ (United States); Ward, Richard [University of Toronto, Department of Medicine, Division of Cardiology, Toronto, Ontario (Canada); Wintersperger, Bernd J. [University Health Network, Department of Medical Imaging, Toronto, Ontario (Canada); University of Toronto, Department of Medical Imaging, Toronto, Ontario (Canada); Toronto General Hospital, Department of Medical Imaging, Toronto, Ontario (Canada)
2017-04-15
To evaluate deformable registration algorithm (DRA)-based quantification of cine steady-state free-precession (SSFP) imaging for myocardial strain assessment in comparison with feature tracking (FT) and speckle-tracking echocardiography (STE). Data sets of 28 patients and 10 volunteers undergoing same-day 1.5T cardiac MRI and echocardiography were included. LV global longitudinal (GLS), circumferential (GCS) and radial (GRS) peak systolic strain were assessed on cine SSFP data using commercially available FT algorithms and prototype DRA-based algorithms. STE was applied as the standard of reference for accuracy, precision and intra-/interobserver reproducibility testing. DRA showed narrower limits of agreement with STE for GLS (-4.0 [-0.9,-7.9]) and GCS (-5.1 [1.1,-11.2]) than FT (3.2 [11.2,-4.9]; 3.8 [13.9,-6.3], respectively). While both DRA and FT demonstrated significant differences from STE for GLS and GCS (all p<0.001), only DRA correlated significantly with STE for GLS (r=0.47; p=0.006). However, good correlation was demonstrated between the MR techniques (GLS: r=0.74; GCS: r=0.80; GRS: r=0.45, all p<0.05). Comparing DRA with FT, the intra-/interobserver coefficient of variation was lower (1.6 %/3.2 % vs. 6.4 %/5.7 %) and the intraclass correlation coefficient was higher. DRA GCS and GRS data presented zero variability for repeated observations. DRA is an automated method that allows myocardial deformation assessment with superior reproducibility compared to FT. (orig.)
Quantification of competitive value of documents
Directory of Open Access Journals (Sweden)
Pavel Šimek
2009-01-01
Full Text Available The majority of Internet users use the global network to search for information using full-text search engines such as Google, Yahoo!, or Seznam. Web presentation operators try, with the help of various optimization techniques, to reach the top positions in full-text search engine results. Herein lies the great importance of Search Engine Optimization and Search Engine Marketing: ordinary users usually follow links only on the first few pages of full-text search engine results for given keywords, and in catalogs they primarily use links placed higher in each category's hierarchy. The key to success is the application of optimization methods that deal with keywords, the structure and quality of content, domain names, individual pages, and the quantity and reliability of backward links. The process is demanding, long-lasting and without a guaranteed outcome. A website operator without advanced analytical tools cannot identify the contribution of the individual documents of which the entire website consists. If web presentation operators want to have an overview of their documents and of the website as a whole, it is appropriate to quantify these positions in a specific way, depending on specific keywords. This is the purpose of the quantification of the competitive value of documents, which in turn determines the global competitive value of a website. Quantification of competitive values is performed on a specific full-text search engine; results can be, and often are, different for each search engine. According to published reports by the ClickZ agency and Market Share, Google is the most widely used search engine among English-speaking users, with a market share of more than 80%. The overall procedure for quantifying competitive values is the same in each case; however, the initial step, the analysis of keywords, depends on the choice of full-text search engine.
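The idea of aggregating per-keyword result positions into a document-level and then site-level value can be sketched as below. This is a hypothetical illustration, not the authors' formula: the reciprocal-rank weighting and the keyword lists are assumptions made up for the example.

```python
# Sketch: score each document of a site by its positions in a
# full-text engine's results for chosen keywords (1 = top result),
# then sum document values into a site-level competitive value.

def competitive_value(rankings):
    """rankings: {keyword: rank of the document, or None if unranked}.
    Reciprocal rank rewards top positions, reflecting that users
    rarely look past the first pages of results."""
    return sum(1.0 / r for r in rankings.values() if r)

doc_a = {"search engine optimization": 3, "keyword analysis": None}
doc_b = {"search engine optimization": 12, "keyword analysis": 8}
site_value = competitive_value(doc_a) + competitive_value(doc_b)
print(round(site_value, 3))  # 0.542
```

Because the rank data come from a specific engine, the same computation run against a different engine generally yields different values, which is exactly the engine-dependence the abstract notes.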
Stereo-particle image velocimetry uncertainty quantification
Bhattacharya, Sayantan; Charonko, John J.; Vlachos, Pavlos P.
2017-01-01
Particle image velocimetry (PIV) measurements are subject to multiple elemental error sources and thus estimating overall measurement uncertainty is challenging. Recent advances have led to a posteriori uncertainty estimation methods for planar two-component PIV. However, no complete methodology exists for uncertainty quantification in stereo PIV. In the current work, a comprehensive framework is presented to quantify the uncertainty stemming from stereo registration error and combine it with the underlying planar velocity uncertainties. The disparity in particle locations of the dewarped images is used to estimate the positional uncertainty of the world coordinate system, which is then propagated to the uncertainty in the calibration mapping function coefficients. Next, the calibration uncertainty is combined with the planar uncertainty fields of the individual cameras through an uncertainty propagation equation, and uncertainty estimates are obtained for all three velocity components. The methodology was tested with synthetic stereo PIV data for different light sheet thicknesses, with and without registration error, and also validated with an experimental vortex ring case from the 2014 PIV challenge. A thorough sensitivity analysis was performed to assess the relative impact of the various parameters on the overall uncertainty. The results suggest that in the absence of any disparity, the stereo PIV uncertainty prediction method is more sensitive to the planar uncertainty estimates than to the angle uncertainty, although the latter is not negligible for non-zero disparity. Overall, the presented uncertainty quantification framework showed excellent agreement between the error and uncertainty RMS values for both the synthetic and the experimental data and demonstrated reliable uncertainty prediction coverage. This stereo PIV uncertainty quantification framework provides the first comprehensive treatment of the subject and potentially lays foundations applicable to volumetric
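The propagation step, combining planar and calibration (angle) uncertainties into a three-component velocity uncertainty, can be illustrated with a simplified symmetric two-camera geometry. This is a first-order sketch under assumed geometry, not the paper's full framework: the reconstruction formula w = (u1 - u2)/(tan a1 - tan a2) and all numbers are illustrative.

```python
import math

def w_and_uncertainty(u1, u2, a1, a2, su1, su2, sa1, sa2):
    """Out-of-plane component w from the two cameras' projected
    displacements u1, u2 at viewing angles a1, a2 (radians), with
    its uncertainty from first-order propagation of the planar
    uncertainties (su1, su2) and angle uncertainties (sa1, sa2)."""
    t1, t2 = math.tan(a1), math.tan(a2)
    w = (u1 - u2) / (t1 - t2)
    dw_du = 1.0 / (t1 - t2)                       # |dw/du1| = |dw/du2|
    dw_da1 = -w / (t1 - t2) / math.cos(a1) ** 2   # dw/da1
    dw_da2 = w / (t1 - t2) / math.cos(a2) ** 2    # dw/da2
    var = (dw_du * su1) ** 2 + (dw_du * su2) ** 2 \
        + (dw_da1 * sa1) ** 2 + (dw_da2 * sa2) ** 2
    return w, math.sqrt(var)

w, sw = w_and_uncertainty(1.2, -0.8, math.radians(45), math.radians(-45),
                          0.1, 0.1, 0.01, 0.01)
print(round(w, 3), round(sw, 3))  # 1.0 0.072
```

Consistent with the abstract's sensitivity finding, with these numbers the planar terms dominate the variance; the angle terms grow with w and with the disparity-driven angle uncertainty.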
Comparison of turbulence mitigation algorithms
Kozacik, Stephen T.; Paolini, Aaron; Sherman, Ariel; Bonnett, James; Kelmelis, Eric
2017-07-01
When capturing imagery over long distances, atmospheric turbulence often degrades the data, especially when observation paths are close to the ground or in hot environments. These issues manifest as time-varying scintillation and warping effects that decrease the effective resolution of the sensor and reduce actionable intelligence. In recent years, several image processing approaches to turbulence mitigation have shown promise. Each of these algorithms has different computational requirements, usability demands, and degrees of independence from camera sensors. They also produce different degrees of enhancement when applied to turbulent imagery. Additionally, some of these algorithms are applicable to real-time operational scenarios while others may only be suitable for postprocessing workflows. EM Photonics has been developing image-processing-based turbulence mitigation technology since 2005. We will compare techniques from the literature with our commercially available, real-time, GPU-accelerated turbulence mitigation software. These comparisons will be made using real (not synthetic), experimentally obtained data for a variety of conditions, including varying optical hardware, imaging range, subjects, and turbulence conditions. Comparison metrics will include image quality, video latency, computational complexity, and potential for real-time operation. Additionally, we will present a technique for quantitatively comparing turbulence mitigation algorithms using real images of radial resolution targets.
Kunz, Pierre; Fellenberg, Jörg; Moskovszky, Linda; Sápi, Zoltan; Krenacs, Tibor; Poeschl, Johannes; Lehner, Burkhard; Szendrõi, Miklos; Ewerbeck, Volker; Kinscherf, Ralf; Fritzsching, Benedikt
2014-01-01
Background In osteosarcoma, survival rates have not been improved over the last 30 years. Novel biomarkers are warranted to allow risk stratification of patients for more individualized treatment following initial diagnosis. Although previous studies of the tumor microenvironment have identified promising candidates, novel biomarkers have not been translated into routine histopathology. Substantial difficulties regarding immunohistochemical detection and quantification of antigens in decalcified and heterogeneous osteosarcoma might largely explain this translational shortcoming. Furthermore, we hypothesized that conventional hot spot analysis is often not representative of the whole section when applied to heterogeneous tissues like osteosarcoma. We aimed to overcome these difficulties for major biomarkers of the immunovascular microenvironment. Methods Immunohistochemistry was systematically optimized for cell surface (CD31, CD8) and intracellular antigens (FOXP3), including evaluation of 200 different antigen retrieval conditions. Distribution patterns of these antigens were analyzed in formalin-fixed and paraffin-embedded samples from 120 high-grade central osteosarcoma biopsies, and computer-assisted whole-slide analysis was compared with conventional quantification methods including hot spot analysis. Results More than 96% of osteosarcoma samples were positive for all antigens after optimization of immunohistochemistry. In contrast, standard immunohistochemistry retrieved false negative results in 35–65% of decalcified osteosarcoma specimens. Standard hot spot analysis was applicable for homogeneously distributed FOXP3+ and CD8+ cells. However, heterogeneous distribution of vascular CD31 did not allow reliable quantification with hot spot analysis in 85% of all samples. Computer-assisted whole-slide analysis of total CD31-immunoreactive area proved to be the most appropriate quantification method. Conclusion Standard staining and quantification procedures are not
Uncertainty quantification and stochastic modeling with Matlab
Souza de Cursi, Eduardo
2015-01-01
Uncertainty Quantification (UQ) is a relatively new research area which describes the methods and approaches used to supply quantitative descriptions of the effects of uncertainty, variability and errors in simulation problems and models. It is rapidly becoming a field of increasing importance, with many real-world applications within statistics, mathematics, probability and engineering, but also within the natural sciences. Literature on the topic has up until now been largely based on polynomial chaos, which raises difficulties when considering different types of approximation and does no
QUANTIFICATION OF TISSUE PROPERTIES IN SMALL VOLUMES
Energy Technology Data Exchange (ETDEWEB)
J. MOURANT; ET AL
2000-12-01
The quantification of tissue properties by optical measurements will facilitate the development of noninvasive methods of cancer diagnosis and detection. Optical measurements are sensitive to tissue structure which is known to change during tumorigenesis. The goals of the work presented in this paper were to verify that the primary scatterers of light in cells are structures much smaller than the nucleus and then to develop an optical technique that can quantify parameters of structures the same size as the scattering features in cells. Polarized, elastic back-scattering was found to be able to quantify changes in scattering properties for turbid media consisting of scatterers of the size found in tissue.
Perfusion Quantification Using Gaussian Process Deconvolution
DEFF Research Database (Denmark)
Andersen, Irene Klærke; Have, Anna Szynkowiak; Rasmussen, Carl Edward
2002-01-01
The quantification of perfusion using dynamic susceptibility contrast MRI (DSC-MRI) requires deconvolution to obtain the residual impulse response function (IRF). In this work, a method using the Gaussian process for deconvolution (GPD) is proposed. The fact that the IRF is smooth is incorporated […] optimized according to the noise level in each voxel. The comparison is carried out using artificial data as well as data from healthy volunteers. It is shown that GPD is comparable to SVD with a variable optimized threshold when determining the maximum of the IRF, which is directly related to the perfusion […]
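The deconvolution step underlying DSC-MRI perfusion quantification can be shown in miniature. The sketch below recovers a residue function from its convolution with an arterial input function by forward substitution on the lower-triangular Toeplitz system. This is the idealized noise-free case with illustrative names; in practice the problem is ill-posed, and methods such as truncated SVD or the Gaussian process approach above add regularization.

```python
def convolve(a, r):
    # Tissue curve c[i] = sum_k a[k] * r[i-k] from AIF samples a and residue function r.
    ak = lambda k: a[k] if k < len(a) else 0.0
    return [sum(ak(k) * r[i - k] for k in range(i + 1)) for i in range(len(r))]

def deconvolve(a, c):
    # Forward substitution on the lower-triangular Toeplitz system (noise-free case).
    ak = lambda k: a[k] if k < len(a) else 0.0
    r = []
    for i in range(len(c)):
        s = sum(ak(k) * r[i - k] for k in range(1, i + 1))
        r.append((c[i] - s) / a[0])
    return r
```

Convolving a known residue function and then deconvolving recovers it exactly; with measurement noise this inversion amplifies errors, which is why the regularized methods compared in the abstract are needed.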
Adjoint-Based Uncertainty Quantification with MCNP
Energy Technology Data Exchange (ETDEWEB)
Seifried, Jeffrey E. [Univ. of California, Berkeley, CA (United States)
2011-09-01
This work serves to quantify the instantaneous uncertainties in neutron transport simulations born from nuclear data and statistical counting uncertainties. Perturbation and adjoint theories are used to derive implicit sensitivity expressions. These expressions are transformed into forms that are convenient for construction with MCNP6, creating the ability to perform adjoint-based uncertainty quantification with MCNP6. These new tools are exercised on the depleted-uranium hybrid LIFE blanket, quantifying its sensitivities and uncertainties to important figures of merit. Overall, these uncertainty estimates are small (< 2%). Having quantified the sensitivities and uncertainties, physical understanding of the system is gained and some confidence in the simulation is acquired.
Quantification of thermal damage in skin tissue
Institute of Scientific and Technical Information of China (English)
Xu Feng; Wen Ting; Lu Tianjian; Seffen Keith
2008-01-01
Skin thermal damage, or skin burns, is the most commonly encountered type of trauma in civilian and military communities. Moreover, advances in laser, microwave, and similar technologies have led to recent developments of thermal treatments for disease and damage involving skin tissue, where the objective is to induce thermal damage precisely within targeted tissue structures without affecting the surrounding healthy tissue. Further, extended pain sensation induced by thermal damage poses a serious problem for burn patients. It is therefore of great importance to quantify thermal damage in skin tissue. In this paper, the available models and experimental methods for the quantification of thermal damage in skin tissue are discussed.
Tutorial examples for uncertainty quantification methods.
Energy Technology Data Exchange (ETDEWEB)
De Bord, Sarah [Univ. of California, Davis, CA (United States)
2015-08-01
This report details the work accomplished during my 2015 SULI summer internship at Sandia National Laboratories in Livermore, CA. During this internship, I worked on multiple tasks with the common goal of making uncertainty quantification (UQ) methods more accessible to the general scientific community. As part of my work, I created a comprehensive numerical integration example to incorporate into the user manual of a UQ software package. Further, I developed examples involving heat transfer through a window to incorporate into tutorial lectures that serve as an introduction to UQ methods.
Parallel algorithms for numerical linear algebra
van der Vorst, H
1990-01-01
This is the first in a new series of books presenting research results and developments concerning the theory and applications of parallel computers, including vector, pipeline, array, fifth/future generation computers, and neural computers. All aspects of high-speed computing fall within the scope of the series, e.g. algorithm design, applications, software engineering, networking, taxonomy, models and architectural trends, performance, peripheral devices. Papers in Volume One cover the main streams of parallel linear algebra: systolic array algorithms, message-passing systems, algorithms for p
Quantum algorithms for solving linear differential equations
Berry, Dominic W
2010-01-01
Linear differential equations are ubiquitous in science and engineering. Quantum computers can simulate quantum systems, which are described by homogeneous linear differential equations that produce only oscillating terms. Here we extend quantum simulation algorithms to general inhomogeneous linear differential equations, which can include exponential terms as well as oscillating terms in their solution. As with other algorithms of this type, the solution is encoded in amplitudes of the quantum state. The algorithm does not give the explicit solution, but it is possible to extract global features of the solution.
Kernel learning algorithms for face recognition
Li, Jun-Bao; Pan, Jeng-Shyang
2013-01-01
Kernel Learning Algorithms for Face Recognition covers the framework of kernel-based face recognition. This book discusses advanced kernel learning algorithms and their application to face recognition, and also focuses on the theoretical derivation, the system framework, and experiments involving kernel-based face recognition. Included within are algorithms of kernel-based face recognition, as well as the feasibility of the kernel-based face recognition method. This book provides researchers in the pattern recognition and machine learning area with advanced face recognition methods and its new
Novel algorithm for synthesis of fiber gratings
Institute of Scientific and Technical Information of China (English)
Bo LV; Ming CHEN; Dan LU; Taorong GONG; Tangjun LI; Shuisheng JIAN
2009-01-01
A novel algorithm for the synthesis of fiber gratings is presented. For the first time we propose an effective optimal approach to construct a coupling coefficient function by employing the 4th-order Runge-Kutta (R-K) analysis method for calculating the reflection spectra of fiber gratings. The numerical results show that with this proposed method, some required optical filters have been yielded with better features compared with other methods such as the Gel'Fand-Levitan-Marchenko (GLM) algorithm. In addition, the performance of different interpolation functions particularly utilized in our algorithm, including linear-type, spline-type, and Hermite-type, is discussed in detail.
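The 4th-order Runge-Kutta scheme named in the abstract is the classical one. The coupled-mode equations of the gratings are not reproduced here, but a generic RK4 integrator for dy/dt = f(t, y) can be sketched as follows (function and variable names are illustrative):

```python
def rk4_step(f, t, y, h):
    # One classical 4th-order Runge-Kutta step of size h.
    k1 = f(t, y)
    k2 = f(t + h / 2, y + h / 2 * k1)
    k3 = f(t + h / 2, y + h / 2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def integrate(f, t0, y0, t1, steps):
    # March from t0 to t1 in equal RK4 steps.
    h = (t1 - t0) / steps
    t, y = t0, y0
    for _ in range(steps):
        y = rk4_step(f, t, y, h)
        t += h
    return y
```

For the gratings application the state y would be the pair of forward/backward mode amplitudes and f the coupled-mode right-hand side; the test problem dy/dt = -y simply confirms 4th-order accuracy.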
Harmony Search as a Metaheuristic Algorithm
Yang, Xin-She
2010-01-01
This first chapter intends to review and analyze the powerful new Harmony Search (HS) algorithm in the context of metaheuristic algorithms. I will first outline the fundamental steps of Harmony Search, and how it works. I then try to identify the characteristics of metaheuristics and analyze why HS is a good metaheuristic algorithm. I then review briefly other popular metaheuristics such as particle swarm optimization so as to find their similarities and differences from HS. Finally, I will discuss the ways to improve and develop new variants of HS, and make suggestions for further research including open questions.
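A minimal Harmony Search sketch for a toy minimization problem, using the standard parameter names from the HS literature (harmony memory size HMS, harmony memory considering rate HMCR, pitch adjusting rate PAR, bandwidth bw). The specific default values below are illustrative, not recommendations from the chapter.

```python
import random

def harmony_search(obj, dim, bounds, hms=10, hmcr=0.9, par=0.3, bw=0.1,
                   iters=2000, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    # Initialize the harmony memory with random solutions.
    hm = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(hms)]
    scores = [obj(h) for h in hm]
    for _ in range(iters):
        new = []
        for d in range(dim):
            if rng.random() < hmcr:
                x = hm[rng.randrange(hms)][d]      # memory consideration
                if rng.random() < par:
                    x += rng.uniform(-bw, bw)      # pitch adjustment
            else:
                x = rng.uniform(lo, hi)            # random selection
            new.append(min(max(x, lo), hi))
        s = obj(new)
        worst = max(range(hms), key=lambda i: scores[i])
        if s < scores[worst]:                      # replace worst harmony
            hm[worst], scores[worst] = new, s
    best = min(range(hms), key=lambda i: scores[i])
    return hm[best], scores[best]
```

On a 2-D sphere function the memory consideration plus pitch adjustment steadily refines the best harmony toward the optimum at the origin.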
Efficient Algorithms for Langevin and DPD Dynamics.
Goga, N; Rzepiela, A J; de Vries, A H; Marrink, S J; Berendsen, H J C
2012-10-09
In this article, we present several algorithms for stochastic dynamics, including Langevin dynamics and different variants of Dissipative Particle Dynamics (DPD), applicable to systems with or without constraints. The algorithms are based on the impulsive application of friction and noise, thus avoiding the computational complexity of algorithms that apply continuous friction and noise. Simulation results on thermostat strength and diffusion properties for ideal gas, coarse-grained (MARTINI) water, and constrained atomic (SPC/E) water systems are discussed. We show that the measured thermal relaxation rates agree well with theoretical predictions. The influence of various parameters on the diffusion coefficient is discussed.
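The impulsive application of friction and noise can be illustrated on the simplest case, a free 1-D particle: between kicks the velocity is untouched, and each time step applies a discrete friction decay plus a matched random impulse so that the stationary velocity variance equals kT/m. This is a hedged sketch of the general idea, not the authors' production integrator, and all names are illustrative.

```python
import math
import random

def langevin_velocity_variance(gamma_dt=0.7, kT_over_m=1.0, steps=50000, seed=2):
    # Impulsive Langevin thermostat for a free particle:
    # each step, friction damps v by exp(-gamma*dt) and a Gaussian
    # impulse with matched amplitude restores the thermal variance.
    rng = random.Random(seed)
    a = math.exp(-gamma_dt)                     # friction decay per step
    b = math.sqrt((1.0 - a * a) * kT_over_m)    # fluctuation-dissipation match
    v, acc = 0.0, 0.0
    for _ in range(steps):
        v = a * v + b * rng.gauss(0.0, 1.0)
        acc += v * v
    return acc / steps                           # estimates kT/m
```

The choice b² = (1 − a²) kT/m is exactly the discrete fluctuation-dissipation balance: the stationary variance of the AR(1) velocity process is b²/(1 − a²) = kT/m, so the measured "temperature" matches the target, mirroring the thermal-relaxation agreement reported in the abstract.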
A New Incremental Support Vector Machine Algorithm
Directory of Open Access Journals (Sweden)
Wenjuan Zhao
2012-10-01
Full Text Available Support vector machine is a popular method in machine learning. Incremental support vector machine algorithms are an ideal choice when learning from large data sets. In this paper a new incremental support vector machine learning algorithm is proposed to improve the efficiency of large-scale data processing. The model of this incremental learning algorithm is similar to the standard support vector machine. The goal concept is updated by incremental learning, and each training procedure includes only the new training data, so the time complexity is independent of the whole training set. Compared with other incremental versions, the training speed of this approach is improved and the change of the hyperplane is reduced.
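The abstract does not give the authors' exact formulation, so the following is only a generic sketch of the incremental idea: a linear classifier trained by stochastic subgradient steps on the hinge loss, whose partial_fit sees each new batch once, so the per-update cost does not grow with the accumulated training history. The class and parameter names are invented for illustration.

```python
class IncrementalLinearSVM:
    # Linear SVM trained by SGD on the regularized hinge loss.
    # Each partial_fit call touches only the batch it is given.
    def __init__(self, dim, lr=0.1, reg=0.01):
        self.w = [0.0] * dim
        self.b = 0.0
        self.lr, self.reg = lr, reg

    def partial_fit(self, batch):
        for x, y in batch:  # labels y in {-1, +1}
            margin = y * (sum(wi * xi for wi, xi in zip(self.w, x)) + self.b)
            for i in range(len(self.w)):
                grad = self.reg * self.w[i]
                if margin < 1:               # hinge is active: add data term
                    grad -= y * x[i]
                self.w[i] -= self.lr * grad
            if margin < 1:
                self.b += self.lr * y

    def predict(self, x):
        score = sum(wi * xi for wi, xi in zip(self.w, x)) + self.b
        return 1 if score >= 0 else -1
```

Feeding batches repeatedly mimics a data stream: the separating hyperplane is refined with each batch while old batches never need to be revisited.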
Performance Simulations of Moving Target Search Algorithms
Directory of Open Access Journals (Sweden)
Peter K. K. Loh
2009-01-01
Full Text Available The design of appropriate moving target search (MTS algorithms for computer-generated bots poses serious challenges as they have to satisfy stringent requirements that include computation and execution efficiency. In this paper, we investigate the performance and behaviour of existing moving target search algorithms when applied to search-and-capture gaming scenarios. As part of the investigation, we also introduce a novel algorithm known as abstraction MTS. We conduct performance simulations with a game bot and moving target within randomly generated mazes of increasing sizes and reveal that abstraction MTS exhibits competitive performance even with large problem spaces.
A simple dot-blot-Sirius red-based assay for collagen quantification.
Rodríguez-Rodríguez, Pilar; Arribas, Silvia M; de Pablo, Angel Luis López; González, M Carmen; Abderrahim, Fatima; Condezo-Hoyos, Luis
2013-08-01
The assessment of collagen content in tissues is important in biomedical research, since this protein is altered in numerous diseases. Hydroxyproline- and Sirius red-based assays are the most common methods for collagen quantification. However, these procedures have pitfalls: the former requires an oxygen-free medium or expensive equipment and large sample sizes, while the latter is unsuitable for hydrolyzed collagen. Our objective was to develop a specific, versatile, and user-friendly quantitative method applicable to small tissue samples and to extracts obtained from elastin purification, and therefore suitable for simultaneous quantification of elastin. This method is based on the binding of Sirius red to collagen present in a sample immobilized on a PVDF membrane, as in the dot-blot technique, and quantified by a scanner and image analysis software. Sample loading, Sirius red concentration, temperature and incubation time, type of standard substance, albumin interference, and quantification time are optimized. The method enabled (1) the quantification of intact collagen in several rat tissue homogenates, including small resistance-sized arteries, (2) the quantification of partially hydrolyzed collagen obtained from NaOH extracts, compatible with elastin purification, and (3) the detection of differences in collagen content between hypertensive and normotensive rats. We conclude that the developed technique can be widely used since it is versatile (quantifies intact and hydrolyzed collagen), requires small sample volumes, is user-friendly (low-cost, easy to use, minimal toxic materials, and reduced test time), and is specific (minimal interference with serum albumin).
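Quantification by scanner and image analysis ultimately rests on a calibration curve against collagen standards. The sketch below fits a least-squares line to standard intensities and inverts it for an unknown sample; the concentrations and intensities are illustrative numbers, not the assay's actual values.

```python
def fit_line(xs, ys):
    # Ordinary least-squares slope and intercept for a calibration line.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def quantify(intensity, slope, intercept):
    # Invert the calibration line: concentration from measured dot intensity.
    return (intensity - intercept) / slope
```

With standards at known concentrations, an unknown dot's intensity maps straight back to a collagen amount; in practice one would also check linearity over the working range and subtract a blank.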
AVQS: attack route-based vulnerability quantification scheme for smart grid.
Ko, Jongbin; Lim, Hyunwoo; Lee, Seokjun; Shon, Taeshik
2014-01-01
A smart grid is a large, consolidated electrical grid system that includes heterogeneous networks and systems. Because of this network connectivity, a smart grid system is exposed to potential security threats. To address this problem, we develop and apply a novel scheme to measure the vulnerability in a smart grid domain. Vulnerability quantification can be the first step in security analysis because it can help prioritize the security problems. However, existing vulnerability quantification schemes are not suitable for smart grids because they do not consider network vulnerabilities. We propose a novel attack route-based vulnerability quantification scheme using a network vulnerability score and an end-to-end security score, depending on the specific smart grid network environment, to calculate the vulnerability score for a particular attack route. To evaluate the proposed approach, we derive several attack scenarios from the advanced metering infrastructure domain. The experimental results of the proposed approach and the existing common vulnerability scoring system clearly show that we need to consider network connectivity for more optimized vulnerability quantification.
New approach for the quantification of processed animal proteins in feed using light microscopy.
Veys, P; Baeten, V
2010-07-01
A revision of European Union's total feed ban on animal proteins in feed will need robust quantification methods, especially for control analyses, if tolerance levels are to be introduced, as for fishmeal in ruminant feed. In 2006, a study conducted by the Community Reference Laboratory for Animal Proteins in feedstuffs (CRL-AP) demonstrated the deficiency of the official quantification method based on light microscopy. The study concluded that the method had to be revised. This paper puts forward an improved quantification method based on three elements: (1) the preparation of permanent slides with an optical adhesive preserving all morphological markers of bones necessary for accurate identification and precision counting; (2) the use of a counting grid eyepiece reticle; and (3) new definitions for correction factors for the estimated portions of animal particles in the sediment. This revised quantification method was tested on feeds adulterated at different levels with bovine meat and bone meal (MBM) and fishmeal, and it proved to be effortless to apply. The results obtained were very close to the expected values of contamination levels for both types of adulteration (MBM or fishmeal). Calculated values were not only replicable, but also reproducible. The advantages of the new approach, including the benefits of the optical adhesive used for permanent slide mounting and the experimental conditions that need to be met to implement the new method correctly, are discussed.
Volumetric quantification of lung nodules in CT with iterative reconstruction (ASiR and MBIR)
Energy Technology Data Exchange (ETDEWEB)
Chen, Baiyu [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 and Carl E. Ravin Advanced Imaging Laboratories, Duke University, Durham, North Carolina 27705 (United States); Barnhart, Huiman [Department of Biostatistics and Bioinformatics, Duke University, Durham, North Carolina 27705 (United States); Richard, Samuel [Carl E. Ravin Advanced Imaging Laboratories, Duke University, Durham, North Carolina 27705 and Department of Radiology, Duke University, Durham, North Carolina 27705 (United States); Robins, Marthony [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Colsher, James [Department of Radiology, Duke University, Durham, North Carolina 27705 (United States); Samei, Ehsan [Medical Physics Graduate Program, Duke University, Durham, North Carolina 27705 (United States); Carl E. Ravin Advanced Imaging Laboratories, Duke University, Durham, North Carolina 27705 (United States); Department of Radiology, Duke University, Durham, North Carolina 27705 (United States); Department of Physics, Department of Biomedical Engineering, and Department of Electronic and Computer Engineering, Duke University, Durham, North Carolina 27705 (United States)
2013-11-15
Purpose: Volume quantifications of lung nodules with multidetector computed tomography (CT) images provide useful information for monitoring nodule developments. The accuracy and precision of the volume quantification, however, can be impacted by imaging and reconstruction parameters. This study aimed to investigate the impact of iterative reconstruction algorithms on the accuracy and precision of volume quantification with dose and slice thickness as additional variables. Methods: Repeated CT images were acquired from an anthropomorphic chest phantom with synthetic nodules (9.5 and 4.8 mm) at six dose levels, and reconstructed with three reconstruction algorithms [filtered backprojection (FBP), adaptive statistical iterative reconstruction (ASiR), and model based iterative reconstruction (MBIR)] into three slice thicknesses. The nodule volumes were measured with two clinical software packages (A: Lung VCAR, B: iNtuition), and analyzed for accuracy and precision. Results: Precision was found to be generally comparable between FBP and iterative reconstruction, with no statistically significant difference noted for different dose levels, slice thicknesses, and segmentation software. Accuracy was found to be more variable. For large nodules, the accuracy was significantly different between ASiR and FBP for all slice thicknesses with both software packages, and significantly different between MBIR and FBP for 0.625 mm slice thickness with Software A and for all slice thicknesses with Software B. For small nodules, the accuracy was more similar between FBP and iterative reconstruction, with the exception of ASiR vs FBP at 1.25 mm with Software A and MBIR vs FBP at 0.625 mm with Software A. Conclusions: The systematic difference between the accuracy of FBP and iterative reconstructions highlights the importance of extending current segmentation software to accommodate the image characteristics of iterative reconstructions. In addition, a calibration process may help reduce the dependency of
Glowworm swarm optimization theory, algorithms, and applications
Kaipa, Krishnanand N
2017-01-01
This book provides a comprehensive account of the glowworm swarm optimization (GSO) algorithm, including details of the underlying ideas, theoretical foundations, algorithm development, various applications, and MATLAB programs for the basic GSO algorithm. It also discusses several research problems at different levels of sophistication that can be attempted by interested researchers. The generality of the GSO algorithm is evident in its application to diverse problems ranging from optimization to robotics. Examples include computation of multiple optima, annual crop planning, cooperative exploration, distributed search, multiple source localization, contaminant boundary mapping, wireless sensor networks, clustering, knapsack, numerical integration, solving fixed point equations, solving systems of nonlinear equations, and engineering design optimization. The book is a valuable resource for researchers as well as graduate and undergraduate students in the area of swarm intelligence and computational intellige...
Middle matching mining algorithm
Institute of Scientific and Technical Information of China (English)
GUO Ping; CHEN Li
2003-01-01
A new algorithm for the fast discovery of sequential patterns, referred to as the middle matching algorithm, is presented to address the problem of the excessive number of candidate sets generated by SPADE. Experiments on a large customer transaction database consisting of customer_id, transaction time, and transaction items demonstrate that the proposed algorithm performs better than SPADE, owing to its strategy of generating a candidate set by matching two sequences at the middle position so as to reduce the number of candidate sets.
Algorithm refinement for fluctuating hydrodynamics
Energy Technology Data Exchange (ETDEWEB)
Williams, Sarah A.; Bell, John B.; Garcia, Alejandro L.
2007-07-03
This paper introduces an adaptive mesh and algorithm refinement method for fluctuating hydrodynamics. This particle-continuum hybrid simulates the dynamics of a compressible fluid with thermal fluctuations. The particle algorithm is direct simulation Monte Carlo (DSMC), a molecular-level scheme based on the Boltzmann equation. The continuum algorithm is based on the Landau-Lifshitz Navier-Stokes (LLNS) equations, which incorporate thermal fluctuations into macroscopic hydrodynamics by using stochastic fluxes. It uses a recently-developed solver for LLNS, based on third-order Runge-Kutta. We present numerical tests of systems in and out of equilibrium, including time-dependent systems, and demonstrate dynamic adaptive refinement by the computation of a moving shock wave. Mean system behavior and second moment statistics of our simulations match theoretical values and benchmarks well. We find that particular attention should be paid to the spectrum of the flux at the interface between the particle and continuum methods, specifically for the non-hydrodynamic (kinetic) time scales.
A fast meteor detection algorithm
Gural, P.
2016-01-01
A low-latency meteor detection algorithm for use with fast steering mirrors was previously developed to track and telescopically follow meteors in real time (Gural, 2007). It has been rewritten as a generic clustering and tracking software module for meteor detection that meets the demanding throughput requirements of a Raspberry Pi while also maintaining a high probability of detection. The software interface is generalized to work with various forms of front-end video pre-processing approaches and provides a rich product set of parameterized line detection metrics. Discussion will include the Maximum Temporal Pixel (MTP) compression technique as a fast thresholding option for feeding the detection module, the detection algorithm trade for maximum processing throughput, details on the clustering and tracking methodology, processing products, performance metrics, and a general interface description.
Ouroboros: A Tool for Building Generic, Hybrid, Divide & Conquer Algorithms
Energy Technology Data Exchange (ETDEWEB)
Johnson, J R; Foster, I
2003-05-01
A hybrid divide and conquer algorithm is one that switches from a divide and conquer to an iterative strategy at a specified problem size. Such algorithms can provide significant performance improvements relative to alternatives that use a single strategy. However, the identification of the optimal problem size at which to switch for a particular algorithm and platform can be challenging. We describe an automated approach to this problem that first conducts experiments to explore the performance space on a particular platform and then uses the resulting performance data to construct an optimal hybrid algorithm on that platform. We implement this technique in a tool, ''Ouroboros'', that automatically constructs a high-performance hybrid algorithm from a set of registered algorithms. We present results obtained with this tool for several classical divide and conquer algorithms, including matrix multiply and sorting, and report speedups of up to six times achieved over non-hybrid algorithms.
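The switch-at-a-cutoff idea can be sketched with a familiar example: mergesort that falls back to insertion sort below a size threshold. This illustrates the general technique, not the Ouroboros tool itself; the cutoff value here is arbitrary, whereas the tool's point is to tune it per platform from measured performance data.

```python
def insertion_sort(a):
    # The iterative strategy: fast on small inputs despite O(n^2) worst case.
    for i in range(1, len(a)):
        x, j = a[i], i - 1
        while j >= 0 and a[j] > x:
            a[j + 1] = a[j]
            j -= 1
        a[j + 1] = x
    return a

def hybrid_mergesort(a, cutoff=16):
    # Divide and conquer above the cutoff, switch strategies below it.
    if len(a) <= cutoff:
        return insertion_sort(list(a))
    mid = len(a) // 2
    left = hybrid_mergesort(a[:mid], cutoff)
    right = hybrid_mergesort(a[mid:], cutoff)
    merged, i, j = [], 0, 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    return merged + left[i:] + right[j:]
```

Both strategies produce the same result for any cutoff; only the constant factors change, which is precisely why the optimal switch point is a per-platform empirical question.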
An Innovative Thinking-Based Intelligent Information Fusion Algorithm
Directory of Open Access Journals (Sweden)
Huimin Lu
2013-01-01
Full Text Available This study proposes an intelligent algorithm that can realize information fusion, drawing on research achievements in brain cognitive theory and innovative computation. This algorithm treats knowledge as its core and information fusion as a knowledge-based innovative thinking process. The five key parts of this algorithm, including information sense and perception, memory storage, divergent thinking, convergent thinking, and an evaluation system, are simulated and modeled. The algorithm fully develops the innovative thinking skills of knowledge in information fusion and is an attempt to convert the abstract concepts of brain cognitive science into specific and operable research routes and strategies. Furthermore, the influences of each parameter of this algorithm on algorithm performance are analyzed and compared with those of classical intelligent algorithms through tests. Test results suggest that the algorithm proposed in this study can obtain the optimum problem solution with fewer target evaluations, improve optimization effectiveness, and achieve the effective fusion of information.
Diagnostic Algorithm Benchmarking
Poll, Scott
2011-01-01
A poster for the NASA Aviation Safety Program Annual Technical Meeting. It describes empirical benchmarking on diagnostic algorithms using data from the ADAPT Electrical Power System testbed and a diagnostic software framework.
Inclusive Flavour Tagging Algorithm
Likhomanenko, Tatiana; Derkach, Denis; Rogozhnikov, Alex
2016-10-01
Identifying the flavour of neutral B mesons at production is one of the most important components needed in the study of time-dependent CP violation. The harsh environment of the Large Hadron Collider makes it particularly hard to succeed in this task. We present an inclusive flavour-tagging algorithm as an upgrade of the algorithms currently used by the LHCb experiment. Specifically, a probabilistic model which efficiently combines information from reconstructed vertices and tracks using machine learning is proposed. The algorithm does not use information about the underlying physics process. It reduces the dependence on the performance of lower-level identification capabilities and thus increases the overall performance. The proposed inclusive flavour-tagging algorithm is applicable to tagging the flavour of B mesons in any proton-proton experiment.
Hockney, Roger
1987-01-01
Algorithmic phase diagrams are a neat and compact representation of the results of comparing the execution time of several algorithms for the solution of the same problem. As an example, recent results of Gannon and Van Rosendale on the solution of multiple tridiagonal systems of equations are shown in the form of such diagrams. The act of preparing these diagrams has revealed an unexpectedly complex relationship between the best algorithm and the number and size of the tridiagonal systems, which was not evident from the algebraic formulae in the original paper. Even so, for a particular computer, one diagram suffices to predict the best algorithm for all problems that are likely to be encountered, the prediction being read directly from the diagram without complex calculation.
Evaluation Of Algorithms Of Anti- HIV Antibody Tests
Directory of Open Access Journals (Sweden)
Paranjape R.S
1997-01-01
Full Text Available Research question: Can alternate algorithms be used in place of the conventional algorithm for epidemiological studies of HIV infection at lower expense? Objective: To compare the results of HIV sero-prevalence as determined by test algorithms combining three kits with the conventional test algorithm. Study design: Cross-sectional. Participants: 282 truck drivers. Statistical analysis: Sensitivity and specificity analysis and predictive values. Results: Three different algorithms that do not include Western Blot (WB) were compared with the conventional algorithm, in a truck driver population with 5.6% prevalence of HIV-1 infection. Algorithms with one EIA (Genetic Systems or Biotest) and a rapid test (Immunocomb), or with two EIAs, showed 100% positive predictive value in relation to the conventional algorithm. Using an algorithm with an EIA as the screening test and a rapid test as the confirmatory test was 50 to 70% less expensive than the conventional algorithm per positive serum sample. These algorithms obviate the interpretation of indeterminate results and also give a differential diagnosis of HIV-2 infection. Alternate algorithms are ideally suited for community-based control programmes in developing countries. Application of these algorithms in populations with low prevalence should also be studied in order to evaluate universal applicability.
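Comparing an alternate algorithm against the conventional (WB-confirmed) algorithm reduces to confusion-matrix arithmetic on the concordant and discordant results. A minimal sketch, with hypothetical counts (the study's raw counts are not given in the abstract):

```python
def diagnostic_metrics(tp, fp, tn, fn):
    # Sensitivity, specificity, and predictive values relative to the
    # reference (conventional) algorithm taken as ground truth.
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    ppv = tp / (tp + fp)   # positive predictive value
    npv = tn / (tn + fn)   # negative predictive value
    return sensitivity, specificity, ppv, npv
```

A 100% positive predictive value, as reported for the EIA-plus-rapid-test algorithms, corresponds to fp = 0: every sample the alternate algorithm calls positive is also positive by the conventional algorithm.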
Ultra-Scalable Algorithms for Large-Scale Uncertainty Quantification in Inverse Wave Propagation
2016-03-04
Energy Technology Data Exchange (ETDEWEB)
Kim, Su Jin; Lee, Jae Sung; Kim, Yu Kyeong; Lee, Dong Soo [Seoul National University College of Medicine, Seoul (Korea, Republic of)
2007-07-01
Parametric imaging allows analysis of the entire brain or body image. Graphical approaches are commonly employed to generate parametric images through linear or multilinear regression. However, this linear regression method has limited accuracy due to bias at high noise levels. Several methods have been proposed to reduce bias in linear regression estimation, especially in the reversible model. In this study, we focus on generating parametric images of the net accumulation rate (K{sub i}), which is related to the binding parameter in brain receptor studies, in an irreversible compartment model using multiple linear analysis. The reliability of a newly developed multiple linear analysis method (MLAIR) was assessed through Monte Carlo simulation, and we applied it to [{sup 11}C]MeNTI PET for the opioid receptor.
Dicotyledon Weed Quantification Algorithm for Selective Herbicide Application in Maize Crops
DEFF Research Database (Denmark)
Laursen, Morten Stigaard; Jørgensen, Rasmus Nyholm; Midtiby, Henrik Skov
2016-01-01
and executed an automated, large-scale field trial supported by the Armadillo autonomous tool carrier robot. The field trial consisted of 299 maize plots. Half of the plots (parcels) were planned with additional seeded weeds; the other half were planned with naturally occurring weeds. The in-situ evaluation...
A Practical Algorithm for Improving Localization and Quantification of Left Ventricular Scar
Zenger, Brian; Cates, Joshua; Morris, Alan; Kholmovski, Eugene; Au, Alexander; Ranjan, Ravi; Akoum, Nazem; McGann, Chris; Wilson, Brent; Marrouche, Nassir; Han, Frederick T.; MacLeod, Rob S.
2014-01-01
Current approaches to classification of left ventricular scar rely on manual segmentation of myocardial borders and manual classification of scar tissue. In this paper, we propose a novel, semi-automatic approach to segment the left ventricular wall and classify scar tissue using a combination of modern image processing techniques.
Quantification of prebiotics in commercial infant formulas.
Sabater, Carlos; Prodanov, Marin; Olano, Agustín; Corzo, Nieves; Montilla, Antonia
2016-03-01
Since breastfeeding is not always possible, infant formulas (IFs) are supplemented with prebiotic oligosaccharides, such as galactooligosaccharides (GOS) and/or fructooligosaccharides (FOS), to exert effects similar to those of breast milk. Nowadays, a great number of infant formulas enriched with prebiotics are available on the market; however, data about their composition are scarce. In this study, two chromatographic methods (GC-FID and HPLC-RID) were used in combination to quantify the carbohydrates present in commercial infant formulas. According to the results obtained by GC-FID for products containing prebiotics, the contents of FOS, GOS and GOS/FOS were in the ranges of 1.6-5.0, 1.7-3.2, and 0.08-0.25/2.3-3.8 g/100 g of product, respectively. HPLC-RID analysis allowed quantification of maltodextrins with a degree of polymerization (DP) up to 19. The methodology proposed here may be used for routine quality control of infant formulas and other food ingredients containing prebiotics.
CT quantification of central airway in tracheobronchomalacia
Energy Technology Data Exchange (ETDEWEB)
Im, Won Hyeong; Jin, Gong Yong; Han, Young Min; Kim, Eun Young [Dept. of Radiology, Chonbuk National University Hospital, Jeonju (Korea, Republic of)
2016-05-15
To determine which factors help diagnose tracheobronchomalacia (TBM) using CT quantification of the central airway. From April 2013 to July 2014, 19 patients (68.0 ± 15.0 years; 6 male, 13 female) were diagnosed with TBM on CT. As case-matched controls, 38 normal subjects (65.5 ± 21.5 years; 6 male, 13 female) were selected. All 57 subjects underwent CT at end-inspiration and end-expiration. Airway parameters of the trachea and both main bronchi were assessed using software (VIDA diagnostic). Airway parameters of TBM patients and normal subjects were compared using the Student t-test. At expiration, both wall perimeter and wall thickness in TBM patients were significantly smaller than in normal subjects (wall perimeter: trachea, 43.97 mm vs. 49.04 mm, p = 0.020; right main bronchus, 33.52 mm vs. 42.69 mm, p < 0.001; left main bronchus, 26.76 mm vs. 31.88 mm, p = 0.012; wall thickness: trachea, 1.89 mm vs. 2.22 mm, p = 0.017; right main bronchus, 1.64 mm vs. 1.83 mm, p = 0.021; left main bronchus, 1.61 mm vs. 1.75 mm, p = 0.016). Wall thinning and a decreased perimeter of the central airway at expiration, as measured by CT quantification, could serve as new diagnostic indicators of TBM.
Concurrent bisimulation algorithm
Kułakowski, Konrad
2013-01-01
The coarsest bisimulation-finding problem plays an important role in the formal analysis of concurrent systems. For example, solving this problem allows the behavior of different processes to be compared or specifications to be verified. Hence, in this paper an efficient concurrent bisimulation algorithm is presented. It is based on the sequential Paige and Tarjan algorithm and the concept of state signatures. The original solution follows Hopcroft's principle "process the smaller half". ...
Parallel Wolff Cluster Algorithms
Bae, S.; Ko, S. H.; Coddington, P. D.
The Wolff single-cluster algorithm is the most efficient method known for Monte Carlo simulation of many spin models. Due to the irregular size, shape and position of the Wolff clusters, this method does not easily lend itself to efficient parallel implementation, so that simulations using this method have thus far been confined to workstations and vector machines. Here we present two parallel implementations of this algorithm, and show that one gives fairly good performance on a MIMD parallel computer.
Implementation of Parallel Algorithms
1993-06-30
...their social relations or to achieve some goals. For example, we define a pair-wise force law of repulsion and attraction for a group of identical... quantization-based compression schemes. Photo-refractive crystals, which provide high-density recording in real time, are used as our holographic media. ...of Parallel Algorithms (J. Reif, ed.), Kluwer Academic Publishers, 1993. (4) "A Dynamic Separator Algorithm", D. Armon and J. Reif. To appear in...
Quantum algorithmic information theory
Svozil, Karl
1995-01-01
The agenda of quantum algorithmic information theory, ordered `top-down,' is the quantum halting amplitude, followed by the quantum algorithmic information content, which in turn requires the theory of quantum computation. The fundamental atoms processed by quantum computation are the quantum bits, which are dealt with in quantum information theory. The theory of quantum computation is based upon a model of a universal quantum computer whose elementary unit is a two-port interferometer capa...
Institute of Scientific and Technical Information of China (English)
WANG Ji-Ke; MAO Ze-Pu; BIAN Jian-Ming; CAO Guo-Fu; CAO Xue-Xiang; CHEN Shen-Jian; DENG Zi-Yan; FU Cheng-Dong; GAO Yuan-Ning; HE Kang-Lin; HE Miao; HUA Chun-Fei; HUANG Bin; HUANG Xing-Tao; JI Xiao-Sin; LI Fei; LI Hai-Bo; LI Wei-Dong; LIANG Yu-Tie; LIU Chun-Xiu; LIU Huai-Min; LIU Suo; LIU Ying-Jie; MA Qiu-Mei; MA Xiang; MAO Ya-Jun; MO Xiao-Hu; PAN Ming-Hua; PANG Cai-Ying; PING Rong-Gang; QIN Ya-Hong; QIU Jin-Fa; SUN Sheng-Sen; SUN Yong-Zhao; WANG Liang-Liang; WEN Shuo-Pin; WU Ling-Hui; XIE Yu-Guang; XU Min; YAN Liang; YOU Zheng-Yun; YUAN Chang-Zheng; YUAN Ye; ZHANG Bing-Yun; ZHANG Chang-Chun; ZHANG Jian-Yong; ZHANG Xue-Yao; ZHANG Yao; ZHENG Yang-Heng; ZHU Ke-Jun; ZHU Yong-Sheng; ZHU Zhi-Li; ZOU Jia-Heng
2009-01-01
A track fitting algorithm based on the Kalman filter method has been developed for BESⅢ at BEPCⅡ. The effects of multiple scattering and energy loss as charged particles pass through the detector, non-uniformity of the magnetic field (NUMF), wire sag, etc., have been carefully handled. The algorithm works well, and its performance, tested with simulation data, satisfies the physics requirements.
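The predict/update cycle at the heart of any Kalman filter can be illustrated in one dimension; this is a toy constant-state model with made-up noise values, not the BESIII track-state parameterization:

```python
def kalman_1d(zs, q=1e-4, r=0.04, x0=0.0, p0=1.0):
    """One-dimensional Kalman filter sketch (constant-state model).
    zs: noisy measurements; q: process noise; r: measurement noise;
    x0, p0: initial state estimate and its variance."""
    x, p = x0, p0
    out = []
    for z in zs:
        p += q                  # predict: process noise inflates the variance
        k = p / (p + r)         # Kalman gain
        x += k * (z - x)        # update with the measurement residual
        p *= (1.0 - k)          # posterior variance shrinks
        out.append(x)
    return out

zs = [1.1, 0.9, 1.05, 0.98, 1.02, 0.95, 1.03]   # noisy measurements of 1.0
est = kalman_1d(zs)
print(abs(est[-1] - 1.0) < abs(zs[0] - 1.0))    # True: the estimate tightens
```

A track fitter applies the same cycle with a multi-dimensional track-state vector and, as the abstract notes, folds material effects and field non-uniformity into the prediction step.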
Optimization algorithms and applications
Arora, Rajesh Kumar
2015-01-01
Choose the correct solution method for your optimization problem. Optimization: Algorithms and Applications presents a variety of solution techniques for optimization problems, emphasizing concepts rather than rigorous mathematical details and proofs. The book covers both gradient and stochastic methods as solution techniques for unconstrained and constrained optimization problems. It discusses the conjugate gradient method, the Broyden-Fletcher-Goldfarb-Shanno algorithm, the Powell method, penalty functions, the augmented Lagrange multiplier method, sequential quadratic programming, and the method of feasible directions.
Evans, Josie A R; Jenner, Ellen L; Carr Smith, Hugh D; Berlanga, Oscar; Harding, Stephen J
2016-06-01
Accurate measurement of IgA monoclonal proteins presents a significant challenge to laboratory staff. IgA heavy/light chain (Hevylite, HLC) analysis is an alternative methodology for monoclonal protein assessment, giving independent measures of IgAκ and IgAλ concentrations. Clonality is assessed by calculating the ratio of the involved immunoglobulin to the background uninvolved immunoglobulin concentration (e.g. IgAκ/IgAλ in an IgAκ patient). Here we discuss the challenges faced by the laboratory in IgA monoclonal protein assessment and compare the performance of Hevylite assays with electrophoresis and total IgA results. We present data that validate the use of Hevylite for response assessment: in most cases, Hevylite provides response assignments comparable to those provided by serum protein electrophoresis (SPE) and total IgA; in other cases Hevylite provides additional information, such as detection of residual disease or relapse.
Optimal reconfiguration of satellite constellations with the auction algorithm
de Weck, Olivier L.; Scialom, Uriel; Siddiqi, Afreen
2008-01-01
Traditionally, satellite constellation design has focused on optimizing global, zonal or regional coverage with a minimum number of satellites. In some instances, however, it is desirable to deploy a constellation in stages to gradually expand capacity. This requires launching additional satellites and reconfiguring the existing on-orbit satellites. Also, a constellation might be retasked and reconfigured after it is initially fielded for operational reasons. This paper presents a methodology for optimizing orbital reconfigurations of satellite constellations. The work focuses on technical aspects for transforming an initial constellation A into a new constellation, B, typically with a larger number of satellites. A general framework was developed to study the orbital reconfiguration problem. The framework was applied to low Earth orbit constellations of communication satellites. This paper specifically addresses the problem of determining the optimal assignment for transferring on-orbit satellites in constellation A to constellation B such that the total ΔV for the reconfiguration is minimized. It is shown that the auction algorithm, used for solving general network flow problems, can efficiently and reliably determine the optimum assignment of satellites of A to slots of B. Based on this methodology, reconfiguration maps can be created, which show the energy required for transforming one constellation into another as a function of type (Street-of-Coverage, Walker, Draim), altitude, ground elevation angle and fold of coverage. Suggested extensions of this work include quantification of the tradeoff between reconfiguration time and ΔV, multiple successive reconfigurations, balancing propellant consumption within the constellation during reconfiguration as well as using reconfigurability as an objective during initial constellation design.
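The satellite-to-slot assignment described above can be sketched with a minimal forward auction algorithm (Bertsekas-style); the 3x3 ΔV matrix is a hypothetical toy, not from the paper:

```python
def auction_assignment(cost, eps=1e-3):
    """Minimal forward auction algorithm for the square assignment problem:
    assign each satellite i to a slot j minimizing total cost[i][j]
    (e.g. transfer delta-V). Toy sketch; eps controls eps-optimality."""
    n = len(cost)
    benefit = [[-cost[i][j] for j in range(n)] for i in range(n)]  # auction maximizes
    prices = [0.0] * n
    owner = [None] * n       # slot -> satellite
    assigned = [None] * n    # satellite -> slot
    unassigned = list(range(n))
    while unassigned:
        i = unassigned.pop()
        values = [benefit[i][j] - prices[j] for j in range(n)]
        j_best = max(range(n), key=lambda j: values[j])
        v_best = values[j_best]
        v_second = max(v for j, v in enumerate(values) if j != j_best) if n > 1 else v_best
        prices[j_best] += v_best - v_second + eps   # bid: margin over second-best plus eps
        if owner[j_best] is not None:               # evict the previous owner
            assigned[owner[j_best]] = None
            unassigned.append(owner[j_best])
        owner[j_best] = i
        assigned[i] = j_best
    return assigned

# Hypothetical delta-V matrix (km/s): 3 on-orbit satellites -> 3 new slots
dv = [[1.0, 2.0, 3.0],
      [2.0, 4.0, 6.0],
      [3.0, 6.0, 9.0]]
print(auction_assignment(dv))   # [2, 1, 0]: total delta-V = 10.0, the minimum
```

With eps smaller than the gap between candidate totals, the eps-optimal auction result coincides with the exact minimum-ΔV assignment, which is why the paper can rely on it for reconfiguration maps.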
PIV anisotropic denoising using uncertainty quantification
Wieneke, B.
2017-08-01
Recently, progress has been made to reliably compute uncertainty estimates for each velocity vector in planar flow fields measured with 2D- or stereo-PIV. This information can be used in a post-processing denoising scheme that reduces errors by spatial averaging while preserving true flow fluctuations. Starting with a 5 × 5 vector kernel, a second-order 2D-polynomial function is fitted to the flow field. Vectors just outside will be included in the filter kernel if they lie within the uncertainty band around the fitted function. Repeating this procedure, vectors are added in all directions until the true flow field can no longer be approximated by the second-order polynomial function. The center vector is then replaced by the value of the fitted function. The final shape and size of the filter kernel automatically adjust to local flow gradients in an optimal way, preserving true velocity fluctuations above the noise level. This anisotropic denoising scheme is validated first on synthetic vector fields, varying the spatial wavelengths of the flow field and the noise levels relative to the fluctuation amplitude. For wavelengths larger than 5-7 times the spatial resolution, a noise reduction factor of 2-4 is achieved, significantly increasing the velocity dynamic range. For large noise levels above 50% of the flow fluctuation, the denoising scheme can no longer distinguish between true flow fluctuations and noise. Finally, it is shown that the procedure performs well for typical experimental PIV vector fields. It provides an effective alternative to more complicated adaptive PIV algorithms that optimize interrogation window sizes and shapes based on seeding density, local flow gradients, and other criteria.
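The kernel-fitting step can be sketched as follows; this shows only the second-order polynomial fit over a fixed 5 × 5 kernel, not the adaptive kernel growth or the uncertainty-band test:

```python
import numpy as np

def fit_center(u):
    """Fit a second-order 2D polynomial to a (2k+1)x(2k+1) kernel of one
    velocity component and return the fitted value at the center, i.e.
    the denoised replacement for the center vector."""
    n = u.shape[0]
    k = n // 2
    ys, xs = np.mgrid[-k:k + 1, -k:k + 1]
    x, y, z = xs.ravel(), ys.ravel(), u.ravel()
    # Design matrix for p(x, y) = a + b*x + c*y + d*x^2 + e*x*y + f*y^2
    A = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coef[0]              # value of the fitted polynomial at (0, 0)

# A quadratic flow field is reproduced exactly by the fit, so true
# second-order flow variations are preserved while random noise averages out.
ys, xs = np.mgrid[-2:3, -2:3]
u = 1.0 + 0.3 * xs + 0.1 * ys**2     # synthetic 5x5 velocity component
print(round(fit_center(u), 6))       # 1.0
```

This is exactly why the scheme preserves fluctuations the polynomial can represent: the fit is an identity on quadratic fields and only attenuates what it cannot represent, i.e. noise.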
Institute of Scientific and Technical Information of China (English)
无
2008-01-01
The search for patterns or motifs in data represents a problem area of key interest to finance and economics researchers. In this paper, we introduce the motif tracking algorithm (MTA), a novel immune-inspired pattern identification tool that is able to identify unknown motifs of unspecified length which repeat within time series data. The power of the algorithm comes from the fact that it uses a small number of parameters with minimal assumptions regarding the data being examined or the underlying motifs. Our interest lies in applying the algorithm to financial time series data to identify unknown patterns that exist. The algorithm is tested using three separate data sets. Particular suitability to financial data is shown by applying it to oil price data. In all cases, the algorithm identifies the presence of a motif population in a fast and efficient manner due to the utilization of an intuitive symbolic representation. The resulting population of motifs is shown to have considerable potential value for other applications such as forecasting and algorithm seeding.
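The symbolic-representation idea the abstract credits can be illustrated with a toy SAX-like discretization and word counting; this sketch uses a fixed word length, whereas the MTA itself finds motifs of unspecified length:

```python
from collections import Counter

def find_motifs(series, word_len=4, alphabet="abcd", min_count=2):
    """Toy symbolic motif discovery: bin each value into an amplitude
    symbol, then count repeated fixed-length words in the symbol string.
    Assumes the series is not constant (max > min)."""
    lo, hi = min(series), max(series)
    bins = len(alphabet)
    def sym(v):
        i = int((v - lo) / (hi - lo) * bins)
        return alphabet[min(i, bins - 1)]
    s = "".join(sym(v) for v in series)
    words = Counter(s[i:i + word_len] for i in range(len(s) - word_len + 1))
    return {w: c for w, c in words.items() if c >= min_count}

data = [0, 1, 2, 1, 0, 1, 2, 1, 0, 1, 2, 1]   # a repeating pattern
print(find_motifs(data))
```

The discretization makes repeated shapes literally equal as strings, so motif detection reduces to substring counting; the MTA's immune-inspired search generalizes this to unknown, variable motif lengths.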
Directory of Open Access Journals (Sweden)
Wang Zi Min
2016-01-01
Full Text Available With the development of social services and the continued improvement of living standards, there is an urgent need for positioning technology that can adapt to complex new situations. In recent years, RFID technology has found a wide range of applications in daily life and production, such as logistics tracking, car alarms and security. Using RFID technology for localization is a new research direction for many research institutions and scholars. RFID positioning systems offer stability, small errors and low cost, and their location algorithms are the focus of this study. This article analyzes RFID positioning methods and algorithms layer by layer. First, several common basic RFID methods are introduced; second, a higher-accuracy network-based location method is presented; finally, the LANDMARC algorithm is described. From this it can be seen that advanced and efficient algorithms play an important role in increasing RFID positioning accuracy. Finally, RFID location algorithms are summarized, their deficiencies are pointed out, and requirements for follow-up study, together with a vision of future RFID positioning technology, are put forward.
Peckham, Scott D.; Kelbert, Anna; Hill, Mary C.; Hutton, Eric W. H.
2016-05-01
Component-based modeling frameworks make it easier for users to access, configure, couple, run and test numerical models. However, they do not typically provide tools for uncertainty quantification or data-based model verification and calibration. To better address these important issues, modeling frameworks should be integrated with existing, general-purpose toolkits for optimization, parameter estimation and uncertainty quantification. This paper identifies and then examines the key issues that must be addressed in order to make a component-based modeling framework interoperable with general-purpose packages for model analysis. As a motivating example, one of these packages, DAKOTA, is applied to a representative but nontrivial surface process problem of comparing two models for the longitudinal elevation profile of a river to observational data. Results from a new mathematical analysis of the resulting nonlinear least squares problem are given and then compared to results from several different optimization algorithms in DAKOTA.
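The model-to-data calibration task handed to DAKOTA in the example can be sketched as a small least-squares fit; the exponential profile model and all numbers are illustrative assumptions, not the paper's actual river models:

```python
import numpy as np

# Hypothetical longitudinal river-profile model: z(x) = z0 * exp(-x / L).
# Calibrating z0 and L against observations is a nonlinear least-squares
# problem of the kind a toolkit like DAKOTA solves; with noiseless data
# a log transform linearizes it, so ordinary least squares suffices here.
x = np.linspace(0.0, 100.0, 50)          # distance downstream (km)
z_obs = 500.0 * np.exp(-x / 40.0)        # synthetic "observations" (m)
slope, intercept = np.polyfit(x, np.log(z_obs), 1)
z0_hat, L_hat = np.exp(intercept), -1.0 / slope
print(round(z0_hat, 1), round(L_hat, 1))  # 500.0 40.0
```

Real calibrations cannot be linearized away like this (noise, multiple parameters, non-exponential models), which is exactly why the paper argues for interoperability with general-purpose optimization packages rather than ad hoc fitting.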
Mainardi, Luca T
2009-01-28
In the last decades, one of the main challenges in the study of heart rate variability (HRV) signals has been the quantification of the low-frequency (LF) and high-frequency (HF) components of the HRV spectrum during non-stationary events. In this regard, different time-frequency and time-varying approaches have been proposed with the aim of tracking the modification of the HRV spectra during ischaemic attacks, provocative stress testing, sleep or daily-life activities. The quantitative evaluation of the power (and frequencies) of the LF and HF components has been approached in various ways depending on the selected time-frequency method. This paper is an excursus through the most common time-frequency/time-varying representations of the HRV signal, with special emphasis on the algorithms employed for the reliable quantification of the LF and HF parameters and their tracking.
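The stationary baseline that these time-frequency methods generalize is band-integrated spectral power; a minimal periodogram sketch on a synthetic, evenly resampled HRV-like series (standard LF/HF band edges, toy amplitudes):

```python
import numpy as np

def lf_hf_power(x, fs=4.0):
    """LF (0.04-0.15 Hz) and HF (0.15-0.4 Hz) power of an evenly resampled
    HRV series via a plain periodogram (mean removed, one-sided)."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    psd = (np.abs(np.fft.rfft(x)) ** 2) / (fs * n)   # unnormalized density sketch
    df = f[1] - f[0]
    lf = psd[(f >= 0.04) & (f < 0.15)].sum() * df
    hf = psd[(f >= 0.15) & (f < 0.40)].sum() * df
    return lf, hf

t = np.arange(0.0, 300.0, 0.25)                      # 5 min resampled at 4 Hz
x = 0.05 * np.sin(2 * np.pi * 0.10 * t) + 0.02 * np.sin(2 * np.pi * 0.25 * t)
lf, hf = lf_hf_power(x)
print(lf > hf)   # True: the 0.10 Hz (LF) oscillation dominates
```

Time-frequency methods replace this single global spectrum with a time-resolved one, so the same band integrals can be tracked through non-stationary episodes.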
Evaluation of computer-assisted quantification of carotid artery stenosis.
Biermann, Christina; Tsiflikas, Ilias; Thomas, Christoph; Kasperek, Bernadette; Heuschmid, Martin; Claussen, Claus D
2012-04-01
The purpose of this study was to evaluate the influence of advanced software assistance on the assessment of carotid artery stenosis; in particular, the inter-observer variability of readers with different levels of experience was investigated. Forty patients with suspected carotid artery stenosis received head and neck dual-energy CT angiography as part of their pre-interventional workup. Four blinded readers with different levels of experience performed standard imaging interpretation. At least 1 day later, they performed quantification using advanced vessel analysis software including automatic dual-energy bone and hard-plaque removal, automatic and semiautomatic vessel segmentation, and creation of curved planar reformations. Results were evaluated for the reproducibility of stenosis quantification by different readers by calculating kappa and correlation values. Consensus reading of the two most experienced readers was used as the standard of reference. For standard imaging interpretation, experienced readers reached very good (k = 0.85) and good (k = 0.78) inter-observer agreement. Inexperienced readers achieved moderate (k = 0.6) and fair (k = 0.24) results. Sensitivity values of 80%, 91%, 83%, and 77% and specificity values of 100%, 84%, 82%, and 53% were achieved for significant area stenosis >70%. For grading using the advanced vessel analysis software, all readers achieved good inter-observer agreement (k = 0.77, 0.72, 0.71, and 0.77). Specificity values of 97%, 95%, 95%, and 93% and sensitivity values of 84%, 78%, 86%, and 92% were achieved. In conclusion, when supported by advanced vessel analysis software, experienced readers are able to achieve good reproducibility. Even inexperienced readers are able to achieve good results in the assessment of carotid artery stenosis when using advanced vessel analysis software.
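The kappa statistic reported above can be computed as follows; the two readers' gradings here are invented toy data, not the study's:

```python
def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two readers'
    categorical gradings of the same cases."""
    assert len(a) == len(b) and len(a) > 0
    n = len(a)
    cats = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n                      # observed agreement
    pe = sum((a.count(c) / n) * (b.count(c) / n) for c in cats)     # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical stenosis gradings (<70% vs >70% area stenosis) by two readers:
r1 = ["<70%", ">70%", ">70%", "<70%", "<70%", ">70%", "<70%", "<70%"]
r2 = ["<70%", ">70%", "<70%", "<70%", "<70%", ">70%", "<70%", ">70%"]
print(round(cohens_kappa(r1, r2), 3))   # 0.467: moderate agreement
```

Raw percent agreement (6/8 here) overstates reproducibility when one category dominates; kappa's chance correction is why the study reports it instead.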
Event-Specific Quantification of Radiation Belt Radial Diffusion
Tu, W.; Sarris, T. E.; Ozeke, L.
2016-12-01
Recently, there has been a great emphasis on developing event-specific inputs for radiation belt models, since these have proven critical for reproducing the observed radiation belt dynamics during strong events. For example, our DREAM3D simulation of the 8-9 October 2012 storm demonstrates that an event-specific chorus wave model and seed population are critical to reproducing the strong enhancement of MeV electrons in this event. However, the observed fast electron dropout preceding the enhancement was not captured by the simulation, which could be due to the combined effects of fast outward radial diffusion of radiation belt electrons with magnetopause shadowing and enhanced electron precipitation. Without an event-specific quantification of radial diffusion, we cannot resolve the relative contributions of outward radial diffusion and precipitation to the observed electron dropout, or realistically reproduce the dynamics during the event. In this work, we provide a physical quantification of radial diffusion specific to the October 2012 event by including both real-time and global distributions of ULF waves from a constellation of wave measurements and an event-specific estimation of the ULF wave mode structure. The global maps of ULF waves during the event are constructed by combining real-time measurements from the Van Allen Probes, THEMIS, and GOES satellites in space with a large array of ground magnetometers. The real-time ULF wave mode structure is then estimated using the new cross-wavelet transform technique, applied to various azimuthally aligned pairs of ULF wave measurements located at the same L shells. The cross power and phase differences between the time series are calculated using the technique, and the wave power per mode number is estimated on that basis. Finally, the physically estimated radial diffusion coefficients specific to the event are applied to the DREAM3D model to quantify the relative contribution of radial diffusion to the electron dynamics
Disease quantification on PET/CT images without object delineation
Tong, Yubing; Udupa, Jayaram K.; Odhner, Dewey; Wu, Caiyun; Fitzpatrick, Danielle; Winchell, Nicole; Schuster, Stephen J.; Torigian, Drew A.
2017-03-01
The derivation of quantitative information from images to make quantitative radiology (QR) clinically practical continues to face a major image analysis hurdle because of image segmentation challenges. This paper presents a novel approach to disease quantification (DQ) via positron emission tomography/computed tomography (PET/CT) images that explores how to decouple DQ methods from explicit dependence on object segmentation through the use of only object recognition results to quantify disease burden. The concept of an object-dependent disease map is introduced to express disease severity without performing explicit delineation and partial volume correction of either objects or lesions. The parameters of the disease map are estimated from a set of training image data sets. The idea is illustrated on 20 lung lesions and 20 liver lesions derived from 18F-2-fluoro-2-deoxy-D-glucose (FDG)-PET/CT scans of patients with various types of cancers, and also on 20 NEMA PET/CT phantom data sets. Our preliminary results show that, on phantom data sets, "disease burden" can be estimated to within 2% of known absolute true activity. Notwithstanding the difficulty in establishing true quantification on patient PET images, our results achieve 8% deviation from "true" estimates, with slightly larger deviations for small and diffuse lesions, where establishing ground truth becomes truly questionable, and smaller deviations for larger lesions, where ground-truth setup is more reliable. We are currently exploring extensions of the approach to include fully automated body-wide DQ, extensions to CT or magnetic resonance imaging (MRI) alone, to PET/CT performed with radiotracers other than FDG, and other functional forms of disease maps.
Development of a VHH-Based Erythropoietin Quantification Assay
DEFF Research Database (Denmark)
Kol, Stefan; Beuchert Kallehauge, Thomas; Adema, Simon
2015-01-01
human EPO was evaluated as a capturing antibody in a label-free biolayer interferometry-based quantification assay. Human recombinant EPO can be specifically detected in Chinese hamster ovary cell supernatants in a sensitive and pH-dependent manner. This method enables rapid and robust quantification...
Molecular quantification of genes encoding for green-fluorescent proteins
DEFF Research Database (Denmark)
Felske, A; Vandieken, V; Pauling, B V
2003-01-01
A quantitative PCR approach is presented to analyze the amount of recombinant green fluorescent protein (gfp) genes in environmental DNA samples. The quantification assay is a combination of specific PCR amplification and temperature gradient gel electrophoresis (TGGE). Gene quantification is pro...
[Study on quantificational analysis method for the non-crystalline content in blast furnace slag].
Yan, Ding-Liu; Guo, Pei-Min; Qi, Yuan-Hong; Zhang, Chun-Xia; Wang, Hai-Feng; Dai, Xiao-Tian
2008-02-01
Quantificational analysis methods for the non-crystalline and crystalline contents in blast furnace slag were studied by means of X-ray diffraction. The quantificational analysis process includes standard-sample preparation, sample preparation, X-ray diffraction measurement and data treatment. The data treatment includes integration of the areas of the non-crystalline curve and the crystalline peaks in a given diffraction-angle range, linear fitting and determination of the quantification coefficient. Preparation methods for the X-ray diffraction standard samples of blast furnace slag were proposed, including a 100% crystalline sample and a 100% non-crystalline sample. The 100% crystalline sample can be obtained by heating blast furnace slag for 12 h at 1000-1200 degrees C, and the 100% non-crystalline sample can be obtained by quenching the molten slag with sufficient water. An X-ray diffraction method for quantificational analysis of the non-crystalline content in blast furnace slag was proposed using the 100% non-crystalline and 100% crystalline standard samples; the quantification coefficient can be obtained by linear regression on the integrated areas of the non-crystalline curve and the crystalline peaks of X-ray diffraction in the 2-theta range of 20-40 degrees. This method is suitable for blast furnace slag with a non-crystalline content over 80%. The non-crystalline and crystalline contents of original blast furnace slag are obtained by combining the X-ray diffraction results with mathematical treatment; this method is suitable for blast furnace slag with a non-crystalline content over 90%, and its process includes preparing the 100% crystalline standard sample by heating blast furnace slag for 12 h at 1000-1200 degrees C, sample preparation at 0.02 intervals over the 0-0.1 mass-ratio range of 100% crystalline sample to original slag, X-ray diffraction measurement of the prepared samples, and data treatment using iterative linear regression. The
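The calibration-then-quantification step can be sketched with a common two-phase XRD model; the functional form and every number below are illustrative assumptions, not values from the paper:

```python
def calibrate_k(samples):
    """Fit the quantification coefficient k from standards of known
    crystalline fraction x. Under the two-phase model
    x = A_c / (A_c + k * A_a), rearranging gives A_c*(1-x)/x = k * A_a,
    which is linear through the origin, so k follows by least squares.
    samples: list of (A_c, A_a, x) with 0 < x < 1."""
    num = sum((ac * (1 - x) / x) * aa for ac, aa, x in samples)
    den = sum(aa * aa for _, aa, _ in samples)
    return num / den

def crystalline_fraction(a_cryst, a_amorph, k):
    """Crystalline mass fraction under the same two-phase model."""
    return a_cryst / (a_cryst + k * a_amorph)

# Synthetic standards generated from the same model with k = 1.5:
standards = [(10 * x, (10 / 1.5) * (1 - x), x) for x in (0.2, 0.5, 0.8)]
k = calibrate_k(standards)
a_c, a_a = 9.5, (10 / 1.5) * 0.05        # sample with 95% crystalline phase
x_c = crystalline_fraction(a_c, a_a, k)
print(round(k, 2), round(1 - x_c, 2))    # 1.5 0.05  (5% non-crystalline)
```

The paper's iterative linear regression over mixtures of the 100% crystalline standard and original slag serves the same purpose: pinning down the coefficient that converts integrated peak and halo areas into mass fractions.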
Algorithms And Programs For Strong Gravitational Lensing In Kerr Space-time Including Polarization
Chen, Bin; Kantowski, Ronald; Dai, Xinyu; Baron, Eddie; Maddumage, Prasad
2015-01-01
Active galactic nuclei (AGNs) and quasars are important astrophysical objects to understand. Recently, microlensing observations have constrained the size of the quasar X-ray emission region to be of the order of 10 gravitational radii of the central supermassive black hole. For distances within a few gravitational radii, light paths are strongly bent by the strong gravity field of the central black hole. If the central black hole has nonzero angular momentum (spin), a photon's polarization p...
Heterogeneous architecture to process swarm optimization algorithms
Directory of Open Access Journals (Sweden)
Maria A. Dávila-Guzmán
2014-01-01
Full Text Available In recent years, parallel processing has been embedded in personal computers through the inclusion of co-processing units, such as graphics processing units, resulting in a heterogeneous platform. This paper presents the implementation of swarm algorithms on this platform to solve several functions from optimization problems, highlighting their inherent parallelism and distributed-control features. In the swarm algorithms, each individual and each problem dimension are parallelized at the granularity of the processing system, which also offers low communication latency between individuals through embedded processing. To evaluate the potential of swarm algorithms on graphics processing units we have implemented two of them: the particle swarm optimization algorithm and the bacterial foraging optimization algorithm. The algorithms' performance is measured as the speedup between a typical sequential processing platform and the NVIDIA GeForce GTX 480 heterogeneous platform; the results show that the particle swarm algorithm obtained a speedup of up to 36.82x and the bacterial foraging algorithm up to 9.26x. Finally, the effect of increasing the population size is evaluated, where we show that both the dispersion and the quality of the solutions decrease despite the high acceleration, since the initial distribution of individuals can converge to a local optimum.
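A minimal sequential PSO sketch (pure Python, conventional constriction-style hyperparameters; not the paper's GPU code) showing the per-particle, per-dimension update loop that a GPU implementation parallelizes:

```python
import random

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0), seed=1):
    """Minimal particle swarm optimization: minimize f over dim dimensions."""
    random.seed(seed)
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    w, c1, c2 = 0.72, 1.49, 1.49          # common constriction-type weights
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):          # each (i, d) update is independent -> GPU-friendly
                r1, r2 = random.random(), random.random()
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            v = f(pos[i])
            if v < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], v
                if v < gbest_val:
                    gbest, gbest_val = pos[i][:], v
    return gbest, gbest_val

sphere = lambda p: sum(x * x for x in p)
best, val = pso(sphere, dim=3)
print(val < 1e-3)   # True: the swarm converges near the optimum at the origin
```

The inner two loops over particles and dimensions carry no loop-to-loop dependence within an iteration, which is the structure the paper maps onto GPU threads.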
A Fast Generic Sequence Matching Algorithm
Musser, David R
2008-01-01
A string matching -- and more generally, sequence matching -- algorithm is presented that has a linear worst-case computing time bound, a low worst-case bound on the number of comparisons (2n), and sublinear average-case behavior that is better than that of the fastest versions of the Boyer-Moore algorithm. The algorithm retains its efficiency advantages in a wide variety of sequence matching problems of practical interest, including traditional string matching; large-alphabet problems (as in Unicode strings); and small-alphabet, long-pattern problems (as in DNA searches). Since it is expressed as a generic algorithm for searching in sequences over an arbitrary type T, it is well suited for use in generic software libraries such as the C++ Standard Template Library. The algorithm was obtained by adding to the Knuth-Morris-Pratt algorithm one of the pattern-shifting techniques from the Boyer-Moore algorithm, with provision for use of hashing in this technique. In situations in which a hash function or random a...
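The Knuth-Morris-Pratt base that the paper augments with a Boyer-Moore-style shift can be sketched as follows (plain KMP only; the hashed shift table of the paper's algorithm is omitted):

```python
def kmp_search(text, pattern):
    """Knuth-Morris-Pratt search: returns the index of the first
    occurrence of pattern in text, or -1. Linear time, no backtracking
    in text, which is the worst-case guarantee the paper builds on."""
    if not pattern:
        return 0
    # Failure function: length of the longest proper prefix of
    # pattern[:i+1] that is also a suffix of it.
    fail = [0] * len(pattern)
    k = 0
    for i in range(1, len(pattern)):
        while k and pattern[i] != pattern[k]:
            k = fail[k - 1]
        if pattern[i] == pattern[k]:
            k += 1
        fail[i] = k
    k = 0
    for i, ch in enumerate(text):
        while k and ch != pattern[k]:
            k = fail[k - 1]
        if ch == pattern[k]:
            k += 1
        if k == len(pattern):
            return i - k + 1
    return -1

print(kmp_search("GATCGATGATTACA", "GATTACA"))  # 7
```

KMP never skips text characters; adding the Boyer-Moore pattern shift lets the combined algorithm jump ahead on mismatches, which is what yields the sublinear average case claimed above.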
Dynamic quantification of antigen molecules with flow cytometry
Moskalensky, A.E.; Chernyshev, A.V.; Yurkin, M.A.; Nekrasov, V.M.; Polshchitsin, A.A.; Parks, D.R.; Moore, W.A.; Herzenberg, L.A.; Filatenkov, A.; Maltsev, V.P.; Orlova, D.Y.
2015-01-01
Traditional methods for estimating the number of expressed molecules, based on the detection of target antigens bound with fluorescently labeled antibodies, assume that the antigen-antibody reaction reaches equilibrium. A calibration procedure is used to convert the intensity of the fluorescence signal into the number of target molecules. Along with the various limitations of every calibration system, this substantially limits the applicability of the traditional approaches, especially in the case of low-affinity antibodies. We address this problem here with studies in which we demonstrate a new approach to the antigen molecule quantification problem. Instead of using a static calibration system, we analyzed mean fluorescence values over time by flow cytometry during antibody-antigen binding. Experimental data obtained with an LSRII cytometer were fitted with a diffusion-reaction mathematical model using the Levenberg-Marquardt nonlinear least-squares curve-fitting algorithm in order to obtain the number of target antigen molecules per cell. Results were compared with the Quanti-BRITE calibration system. We conclude that, instead of using experiment-specific calibration, the value of the binding rate constant for each particular antibody-antigen reaction can be used to quantify antigen molecules with flow cytometry. The radius of the CD8 antibody molecule binding site was found, which allows the binding rate constant to be recalculated for other conditions (different sizes of reagent molecules, fluorescent label, medium viscosity and temperature). This approach is independent of specially prepared calibration beads, antibody reagents and the specific dye, and can be applied to both low- and high-affinity antibodies, under both saturating and non-saturating binding conditions. The method was demonstrated on a human blood sample dataset investigating the CD8α antigen on T cells under stable binding conditions. PMID:25687877
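The kinetic idea, recovering molecule counts from the time course rather than the equilibrium plateau, can be sketched with a much simpler pseudo-first-order model than the paper's diffusion-reaction model; all rate constants and counts below are invented:

```python
import math

def mfi(t, n_antigen, k_on, c_ab):
    """Mean fluorescence vs. time under pseudo-first-order binding with
    antibody in excess: MFI(t) = N * (1 - exp(-k_on * C * t))."""
    return n_antigen * (1.0 - math.exp(-k_on * c_ab * t))

def estimate_n(t1, m1, m2, c_ab):
    """Recover N and k_on from two (time, MFI) samples taken at t1 and
    2*t1: with x = exp(-k_on*C*t1), m2/m1 = (1-x^2)/(1-x) = 1 + x."""
    x = m2 / m1 - 1.0
    n = m1 / (1.0 - x)
    k_on = -math.log(x) / (c_ab * t1)
    return n, k_on

# Synthetic check: N = 50000 antigen molecules, k_on = 1e7 (arbitrary units)
c = 1e-9
m1 = mfi(60.0, 50000.0, 1e7, c)
m2 = mfi(120.0, 50000.0, 1e7, c)
n_hat, kon_hat = estimate_n(60.0, m1, m2, c)
print(round(n_hat), round(kon_hat))   # 50000 10000000
```

Because the curve shape depends on k_on while the plateau depends on N, the time course separates the two, which is what frees the paper's approach from equilibrium assumptions and bead-based calibration.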
USACM Thematic Workshop on Uncertainty Quantification and Data-Driven Modeling.
Energy Technology Data Exchange (ETDEWEB)
Stewart, James R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-05-01
The USACM Thematic Workshop on Uncertainty Quantification and Data-Driven Modeling was held on March 23-24, 2017, in Austin, TX. The organizers of the technical program were James R. Stewart of Sandia National Laboratories and Krishna Garikipati of University of Michigan. The administrative organizer was Ruth Hengst, who serves as Program Coordinator for the USACM. The organization of this workshop was coordinated through the USACM Technical Thrust Area on Uncertainty Quantification and Probabilistic Analysis. The workshop website (http://uqpm2017.usacm.org) includes the presentation agenda as well as links to several of the presentation slides (permission to access the presentations was granted by each of those speakers, respectively). This final report contains the complete workshop program, including the presentation agenda, the presentation abstracts, and the list of posters.
Äijö, Tarmo; Huang, Yun; Mannerström, Henrik; Chavez, Lukas; Tsagaratou, Ageliki; Rao, Anjana; Lähdesmäki, Harri
2016-03-14
We present a generative model, Lux, to quantify DNA methylation modifications from any combination of bisulfite sequencing approaches, including reduced, oxidative, TET-assisted, chemical-modification assisted, and methylase-assisted bisulfite sequencing data. Lux models all cytosine modifications (C, 5mC, 5hmC, 5fC, and 5caC) simultaneously together with experimental parameters, including bisulfite conversion and oxidation efficiencies, as well as various chemical labeling and protection steps. We show that Lux improves the quantification and comparison of cytosine modification levels and that Lux can process any oxidized methylcytosine sequencing data sets to quantify all cytosine modifications. Analysis of targeted data from Tet2-knockdown embryonic stem cells and T cells during development demonstrates DNA modification quantification at unprecedented detail, quantifies active demethylation pathways and reveals 5hmC localization in putative regulatory regions.
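Lux is a full Bayesian generative model over all cytosine modifications; as a much smaller illustration of why modelling conversion efficiency matters, the sketch below corrects a single-site 5mC estimate for incomplete bisulfite conversion (unmethylated C fails to convert, and so reads as C, with probability 1 − efficiency). This toy correction is an assumption for illustration, not the Lux model itself.

```python
import numpy as np

def corrected_methylation(c_reads, total_reads, conversion_eff):
    """Point estimate of methylation level at one CpG site, correcting
    for incomplete bisulfite conversion: observed C fraction is
    mu + (1 - mu) * (1 - conversion_eff), solved for mu."""
    p_obs = c_reads / total_reads
    mu = (p_obs - (1.0 - conversion_eff)) / conversion_eff
    return float(np.clip(mu, 0.0, 1.0))

# Simulate reads at one site: true methylation 0.30, conversion 0.98
rng = np.random.default_rng(1)
mu_true, conv, n = 0.30, 0.98, 5000
p_c = mu_true + (1.0 - mu_true) * (1.0 - conv)   # P(read shows C)
c_reads = rng.binomial(n, p_c)
mu_hat = corrected_methylation(c_reads, n, conv)
```

Without the correction, the raw C fraction systematically overestimates methylation; Lux extends this idea jointly to oxidation efficiencies, labeling/protection steps, and all five cytosine states.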
Algorithms for intravenous insulin delivery.
Braithwaite, Susan S; Clement, Stephen
2008-08-01
This review aims to classify algorithms for intravenous insulin infusion according to design. Essential input data include the current blood glucose (BG(current)), the previous blood glucose (BG(previous)), the test time of BG(current) (test time(current)), the test time of BG(previous) (test time(previous)), and the previous insulin infusion rate (IR(previous)). Output data consist of the next insulin infusion rate (IR(next)) and next test time. The classification differentiates between "IR" and "MR" algorithm types, both defined as a rule for assigning an insulin infusion rate (IR), having a glycemic target. Both types are capable of assigning the IR for the next iteration of the algorithm (IR(next)) as an increasing function of BG(current), IR(previous), and rate-of-change of BG with respect to time, each treated as an independent variable. Algorithms of the IR type directly seek to define IR(next) as an incremental adjustment to IR(previous). At test time(current), under an IR algorithm the differences in values of IR(next) that might be assigned depending upon the value of BG(current) are not necessarily continuously dependent upon, proportionate to, or commensurate with either the IR(previous) or the rate-of-change of BG. Algorithms of the MR type create a family of IR functions of BG differing according to maintenance rate (MR), each being an iso-MR curve. The change of IR(next) with respect to BG(current) is a strictly increasing function of MR. At test time(current), algorithms of the MR type use IR(previous) and the rate-of-change of BG to define the MR, multiplier, or column assignment, which will be used for patient assignment to the right iso-MR curve and as precedent for IR(next). Bolus insulin therapy is especially effective when used in proportion to carbohydrate load to cover anticipated incremental transitory enteral or parenteral carbohydrate exposure. Specific distinguishing algorithm design features and choice of parameters may be important to
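The IR/MR distinction above can be made concrete with a minimal sketch of an MR-type rule: the rate of change of BG and the previous state select a column (multiplier), and IR(next) is then read off that column's iso-MR curve. Everything here, the curve IR = multiplier × (BG − 60), the thresholds, and the column step size, is a hypothetical illustration of the algorithm class, not a clinical protocol.

```python
def next_infusion_rate(bg_current, bg_previous, minutes_elapsed,
                       multiplier, target_low=100.0, target_high=140.0):
    """Sketch of an MR-type rule: adjust the column (multiplier) from the
    rate of change of BG, then compute IR_next from the iso-MR curve
    IR = multiplier * (BG - 60). All thresholds are illustrative only."""
    roc = (bg_current - bg_previous) / minutes_elapsed  # mg/dL per minute
    if bg_current > target_high and roc >= 0:
        multiplier += 0.01           # move to a higher column
    elif bg_current < target_low or roc < -1.0:
        multiplier = max(multiplier - 0.01, 0.0)  # move to a lower column
    ir_next = max(multiplier * (bg_current - 60.0), 0.0)  # units/hour
    return ir_next, multiplier

# BG rising from 180 to 200 mg/dL over an hour: column steps up
ir_next, new_mult = next_infusion_rate(200.0, 180.0, 60.0, multiplier=0.02)
```

Note the defining MR-type property: IR(next) is continuous in BG(current) along each iso-MR curve, and the column assignment (not IR itself) carries the state between iterations.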
An Efficient Implementation of the Gliding Box Lacunarity Algorithm
Energy Technology Data Exchange (ETDEWEB)
Charles R. Tolle; Timothy R. McJunkin; David J. Gorsich
2008-03-01
Lacunarity is a measure of how data fills space. It complements fractal dimension, which measures how much space is filled. Currently, many researchers use the gliding box algorithm for calculating lacunarity. This paper introduces a fast algorithm for making this calculation. The algorithm presented is akin to fast box counting algorithms used by some researchers in estimating fractal dimension. A simplified gliding box measure equation along with key pseudo code implementations for the algorithm are presented. Applications for the gliding box lacunarity measure have included subjects that range from biological community modeling to target detection.
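The speed-up idea described above, computing every box mass in constant time in the spirit of fast box-counting, can be sketched with a summed-area table. The sketch below uses the standard gliding-box lacunarity Λ(r) = ⟨M²⟩ / ⟨M⟩², where M is the mass inside each r × r box; the paper's exact simplified measure equation may differ.

```python
import numpy as np

def gliding_box_lacunarity(image, box_size):
    """Gliding-box lacunarity Lambda(r) = <M^2> / <M>^2 over all r x r
    box positions. A 2-D cumulative sum (summed-area table) yields each
    box mass in O(1), so the whole scan is O(N) in the pixel count."""
    a = np.asarray(image, dtype=float)
    # Summed-area table padded with a leading zero row/column
    s = np.zeros((a.shape[0] + 1, a.shape[1] + 1))
    s[1:, 1:] = a.cumsum(axis=0).cumsum(axis=1)
    r = box_size
    # Mass of every r x r box via four table lookups
    masses = s[r:, r:] - s[:-r, r:] - s[r:, :-r] + s[:-r, :-r]
    m1 = masses.mean()
    m2 = (masses ** 2).mean()
    return m2 / (m1 ** 2)

# A uniform image fills space perfectly: lacunarity is exactly 1
lac_uniform = gliding_box_lacunarity(np.ones((32, 32)), box_size=4)
```

Gappier data gives a larger second moment relative to the mean, hence lacunarity above 1, which is what makes the measure a complement to fractal dimension.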
Towards Automatic Controller Design using Multi-Objective Evolutionary Algorithms
DEFF Research Database (Denmark)
Pedersen, Gerulf
of evolutionary computation, a choice was made to use multi-objective algorithms for the purpose of aiding in automatic controller design. More specifically, the choice was made to use the Non-dominated Sorting Genetic Algorithm II (NSGAII), which is one of the most potent algorithms currently in use......, as the foundation for achieving the desired goal. While working with the algorithm, some issues arose which limited the use of the algorithm for unknown problems. These issues included the relative scale of the used fitness functions and the distribution of solutions on the optimal Pareto front. Some work has...
Spectrum Assignment Algorithm for Cognitive Machine-to-Machine Networks
Directory of Open Access Journals (Sweden)
Soheil Rostami
2016-01-01
Full Text Available A novel aggregation-based spectrum assignment algorithm for Cognitive Machine-To-Machine (CM2M) networks is proposed. The introduced algorithm takes practical constraints into consideration, including interference to the Licensed Users (LUs), co-channel interference (CCI) among CM2M devices, and Maximum Aggregation Span (MAS). Simulation results clearly show that the proposed algorithm outperforms State-Of-The-Art (SOTA) algorithms in terms of spectrum utilisation and network capacity. Furthermore, the convergence analysis of the proposed algorithm verifies its high convergence rate.
Energy Technology Data Exchange (ETDEWEB)
Hatt, Mathieu; Tixier, Florent; Visvikis, Dimitris [CHRU Morvan, INSERM, UMR 1101, LaTIM, Brest (France); Cheze Le Rest, Catherine [CHU Miletrie, Nuclear Medicine, Poitiers (France); Pradier, Olivier [CHRU Morvan, Radiotherapy, Brest (France)
2013-10-15
Intratumour uptake heterogeneity in PET, quantified in terms of textural features, has been investigated as a predictor of response to therapy in several studies, including assessments of its robustness to reconstruction and of its physiological reproducibility. However, there has been no thorough assessment of the potential impact of preprocessing steps on the resulting quantification and its predictive value. The goal of this work was to assess the robustness of PET heterogeneity quantification via textural features with respect to the delineation of functional volumes and to partial volume correction (PVC). This retrospective analysis included 50 patients with oesophageal cancer. PVC of each PET image was performed. Tumour volumes were determined using fixed and adaptive thresholding, and the fuzzy locally adaptive Bayesian algorithm, and heterogeneity was quantified using local and regional textural features. Differences in the absolute values of the image-derived parameters considered were assessed using Bland-Altman analysis. The impact on their predictive value for the identification of patient nonresponders was assessed by comparing areas under the receiver operating characteristic curves. Heterogeneity parameters were more dependent on delineation than on PVC. The parameters most sensitive to delineation and PVC were regional ones (intensity variability and size zone variability), whereas local parameters such as entropy and homogeneity were the most robust. Despite the large differences in absolute values obtained from different delineation methods or after PVC, these differences did not necessarily translate into a significant impact on their predictive value. Parameters such as entropy, homogeneity, dissimilarity (for local heterogeneity characterization) and zone percentage (for regional characterization) should be preferred. This selection is based on a demonstrated high differentiation power in terms of predicting response, as well as a significant robustness with respect to the delineation method used.
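The local features singled out as robust above (entropy, homogeneity) are conventionally computed from a grey-level co-occurrence matrix (GLCM). The following is a minimal pure-NumPy sketch using a single horizontal pixel offset and a symmetric, normalised matrix; the study's actual texture pipeline (offsets, quantisation, averaging over directions) is not specified here, so this is illustrative only.

```python
import numpy as np

def glcm_features(quantised, levels):
    """Entropy and homogeneity from a symmetric, normalised GLCM built
    over horizontally adjacent pixel pairs. `quantised` must already
    hold integer grey levels in [0, levels)."""
    left = quantised[:, :-1].ravel()
    right = quantised[:, 1:].ravel()
    glcm = np.zeros((levels, levels))
    np.add.at(glcm, (left, right), 1)   # accumulate pair counts
    glcm += glcm.T                      # make the matrix symmetric
    p = glcm / glcm.sum()               # normalise to probabilities
    nz = p[p > 0]
    entropy = -np.sum(nz * np.log2(nz))
    i, j = np.indices(p.shape)
    homogeneity = np.sum(p / (1.0 + np.abs(i - j)))
    return entropy, homogeneity

# A constant region: zero entropy, maximal (1.0) homogeneity
e_flat, h_flat = glcm_features(np.zeros((16, 16), dtype=int), levels=4)
```

A perfectly alternating pattern, by contrast, concentrates all co-occurrences off the diagonal, raising entropy and lowering homogeneity, which is why these two features move in the direction of increasing disorder.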