WorldWideScience

Sample records for accurate quantitative snp-typing

  1. Toward Accurate and Quantitative Comparative Metagenomics.

    Nayfach, Stephen; Pollard, Katherine S

    2016-08-25

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341

  2. SNP typing on the NanoChip electronic microarray

    Børsting, Claus; Sanchez Sanchez, Juan Jose; Morling, Niels

    2005-01-01

    We describe a single nucleotide polymorphism (SNP) typing protocol developed for the NanoChip electronic microarray. The NanoChip array consists of 100 electrodes covered by a thin hydrogel layer containing streptavidin. An electric current can be applied to one, several, or all electrodes at the […] the bound DNA. Base stacking between the short reporter and the longer stabilizer oligo stabilizes the binding of a matching reporter, whereas the binding of a reporter carrying a mismatch in the SNP position will be relatively weak. Thermal stringency is applied to the NanoChip array according to a […]

  3. A sensitive issue: Pyrosequencing as a valuable forensic SNP typing platform

    Harrison, C.; Musgrave-Brown, E.; Bender, K.;

    2006-01-01

    Analysing minute amounts of DNA is a routine challenge in forensics, in part due to limited instrument sensitivity and the resulting inability to obtain results from forensic samples. In this study, the sensitivity of the Pyrosequencing method is investigated using varying concentrations of DNA and […]

  4. Designer cantilevers for even more accurate quantitative measurements of biological systems with multifrequency AFM

    Contera, S.

    2016-04-01

    Multifrequency excitation/monitoring of cantilevers has made it possible both to achieve fast, relatively simple, nanometre-resolution quantitative mapping of the mechanical properties of biological systems in solution using atomic force microscopy (AFM), and to achieve single-molecule-resolution detection by nanomechanical biosensors. A recent paper by Penedo et al [2015 Nanotechnology 26 485706] has made a significant contribution by developing simple methods to improve the signal-to-noise ratio in liquid environments, by selectively enhancing cantilever modes, which will lead to even more accurate quantitative measurements.

  5. A correlative imaging based methodology for accurate quantitative assessment of bone formation in additive manufactured implants.

    Geng, Hua; Todd, Naomi M; Devlin-Mullin, Aine; Poologasundarampillai, Gowsihan; Kim, Taek Bo; Madi, Kamel; Cartmell, Sarah; Mitchell, Christopher A; Jones, Julian R; Lee, Peter D

    2016-06-01

    A correlative imaging methodology was developed to accurately quantify bone formation in the complex lattice structure of additive manufactured implants. Micro computed tomography (μCT) and histomorphometry were combined, integrating the best features from both, while demonstrating the limitations of each imaging modality. This semi-automatic methodology used a coarse-graining technique to speed the registration of 2D histology sections to high-resolution 3D μCT datasets. Once registered, histomorphometric qualitative and quantitative bone descriptors were directly correlated to 3D quantitative bone descriptors, such as bone ingrowth and bone contact. The correlative imaging allowed the significant volumetric shrinkage of histology sections to be quantified for the first time (~15%). The technique also highlighted the importance of the location of the histological section, showing that an offset of up to 30% can be introduced. The results were used to quantitatively demonstrate the effectiveness of 3D printed titanium lattice implants. PMID:27153828

  6. Should scatter be corrected in both transmission and emission data for accurate quantitation in cardiac SPET?

    Fakhri, G.E. [Harvard Medical School, Boston, MA (United States). Dept. of Radiology; U494 INSERM, CHU Pitie-Salpetriere, Paris (France); Buvat, I.; Todd-Pokropek, A.; Benali, H. [U494 INSERM, CHU Pitie-Salpetriere, Paris (France); Almeida, P. [Servico de Medicina Nuclear, Hospital Garcia de Orta, Almada (Portugal); Bendriem, B. [CTI, Inc., Knoxville, TN (United States)

    2000-09-01

    Ideally, reliable quantitation in single-photon emission tomography (SPET) requires both emission and transmission data to be scatter free. Although scatter in emission data has been extensively studied, it is not well known how scatter in transmission data affects relative and absolute quantitation in reconstructed images. We studied SPET quantitative accuracy for different amounts of scatter in emission and transmission data using a Utah phantom and a cardiac Data Spectrum phantom including different attenuating media. Acquisitions over 180° were considered and three projection sets were derived: 20% images and Jaszczak and triple-energy-window scatter-corrected projections. Transmission data were acquired using gadolinium-153 line sources in a 90-110 keV window using a narrow or wide scanning window. The transmission scans were performed either simultaneously with the emission acquisition or 24 h later. Transmission maps were reconstructed using filtered backprojection and μ values were linearly scaled from 100 to 140 keV. Attenuation-corrected images were reconstructed using a conjugate gradient minimal residual algorithm. The μ value underestimation varied between 4% with a narrow transmission window in soft tissue and 22% with a wide window in a material simulating bone. Scatter in the emission and transmission data had little effect on the uniformity of activity distribution in the left ventricle wall and in a uniformly hot compartment of the Utah phantom. Correcting the transmission data for scatter had no impact on contrast between a hot and a cold region or on signal-to-noise ratio (SNR) in regions with uniform activity distribution, while correcting the emission data for scatter improved contrast and reduced SNR. For absolute quantitation, the most accurate results (bias <4% in both phantoms) were obtained when reducing scatter in both emission and transmission data. In conclusion, trying to obtain the same amount of scatter in emission and transmission […]
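
    The triple-energy-window (TEW) correction mentioned above estimates the scatter counts inside the photopeak window from two narrow flanking windows and subtracts them. A minimal sketch in Python, with illustrative window widths and counts rather than values from the study:

    import numpy as np

    def tew_scatter_correct(peak, lower, upper, w_peak, w_lower, w_upper):
        """Triple-energy-window scatter correction of projection counts.

        peak, lower, upper: count arrays for the photopeak window and the
        two narrow flanking windows; w_* are the window widths in keV.
        """
        # Trapezoidal estimate of the scatter falling inside the photopeak
        scatter = (lower / w_lower + upper / w_upper) * w_peak / 2.0
        # Subtract, clipping negatives that arise from Poisson noise
        return np.clip(peak - scatter, 0.0, None)

    # Illustrative Tc-99m setup: a 28 keV-wide photopeak window flanked
    # by two 3 keV sub-windows.
    primary = tew_scatter_correct(np.array([1200.0, 950.0]),
                                  np.array([40.0, 35.0]),
                                  np.array([10.0, 8.0]),
                                  w_peak=28.0, w_lower=3.0, w_upper=3.0)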

  7. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurements was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes the stochastic property of SNR into account. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and […]
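
    The role of the pre-computed probability table in such a MAP estimator can be illustrated with a toy model. The sketch below is not the authors' JM-OCT implementation; the Rayleigh noise model, grid sizes, and flat prior are assumptions made purely for illustration.

    import numpy as np

    rng = np.random.default_rng(0)
    true_grid = np.linspace(0.0, np.pi / 2, 91)  # candidate retardations (rad)

    def simulate_measurement(true_value, snr, n=20000):
        """Toy forward model: additive noise biases low-SNR readings high."""
        noise = rng.rayleigh(scale=1.0 / snr, size=n)
        return np.clip(true_value + noise, 0.0, np.pi / 2)

    def build_pdf_table(snr, bins=91):
        """Monte-Carlo table of P(measured | true) on a fixed grid."""
        edges = np.linspace(0.0, np.pi / 2, bins + 1)
        table = np.empty((len(true_grid), bins))
        for i, t in enumerate(true_grid):
            hist, _ = np.histogram(simulate_measurement(t, snr),
                                   bins=edges, density=True)
            table[i] = hist + 1e-12          # avoid zero likelihoods
        return edges, table

    def map_estimate(measured, edges, table):
        """MAP estimate under a flat prior: argmax over the true-value grid."""
        j = int(np.clip(np.searchsorted(edges, measured) - 1,
                        0, table.shape[1] - 1))
        return true_grid[np.argmax(table[:, j])]

    edges, table = build_pdf_table(snr=5.0)
    print(map_estimate(0.45, edges, table))  # estimate corrected for noise bias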

  8. Forensic genetic SNP typing of low-template DNA and highly degraded DNA from crime case samples

    Børsting, Claus; Mogensen, Helle Smidt; Morling, Niels

    2013-01-01

    Heterozygote imbalances leading to allele drop-outs and disproportionally large stutters leading to allele drop-ins are known stochastic phenomena related to STR typing of low-template DNA (LtDNA). The large stutters and the many drop-ins in typical STR stutter positions are artifacts from the PCR amplification of tandem repeats. These artifacts may be avoided by typing bi-allelic markers instead of STRs. In this work, the SNPforID multiplex assay was used to type LtDNA. A sensitized SNP typing protocol was introduced that increased signal strengths without increasing noise and without affecting the […] case investigation, and only partial profiles (0-6 STRs) were obtained. Eleven of the samples could not be quantified with the Quantifiler™ Human DNA Quantification kit because of partial or complete inhibition of the PCR. For eight of these samples, SNP typing was only possible when the buffer and DNA […]

  9. Multiobjective optimization in quantitative structure-activity relationships: deriving accurate and interpretable QSARs.

    Nicolotti, Orazio; Gillet, Valerie J; Fleming, Peter J; Green, Darren V S

    2002-11-01

    Deriving quantitative structure-activity relationship (QSAR) models that are accurate, reliable, and easily interpretable is a difficult task. In this study, two new methods have been developed that aim to find useful QSAR models that represent an appropriate balance between model accuracy and complexity. Both methods are based on genetic programming (GP). The first method, referred to as genetic QSAR (or GPQSAR), uses a penalty function to control model complexity. GPQSAR is designed to derive a single linear model that represents an appropriate balance between the variance and the number of descriptors selected for the model. The second method, referred to as multiobjective genetic QSAR (MoQSAR), is based on multiobjective GP and represents a new way of thinking of QSAR. Specifically, QSAR is considered as a multiobjective optimization problem that comprises a number of competitive objectives. Typical objectives include model fitting, the total number of terms, and the occurrence of nonlinear terms. MoQSAR results in a family of equivalent QSAR models where each QSAR represents a different tradeoff in the objectives. A practical consideration often overlooked in QSAR studies is the need for the model to promote an understanding of the biochemical response under investigation. To accomplish this, chemically intuitive descriptors are needed but do not always give rise to statistically robust models. This problem is addressed by the addition of a further objective, called chemical desirability, that aims to reward models that consist of descriptors that are easily interpretable by chemists. GPQSAR and MoQSAR have been tested on various data sets including the Selwood data set and two different solubility data sets. The study demonstrates that the MoQSAR method is able to find models that are at least as good as models derived using standard statistical approaches and also yields models that allow a medicinal chemist to trade statistical robustness for chemical […]
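
    The multiobjective selection underlying MoQSAR can be pictured as retaining the Pareto-optimal set of models, those not beaten on every objective at once. Below is a generic non-dominated-filter fragment in Python; it is illustrative only and does not reproduce the genetic programming machinery the paper uses.

    import numpy as np

    def pareto_front(objectives):
        """Indices of non-dominated models (all objectives minimized).

        objectives: (n_models, n_objectives) array, e.g. columns for
        model error, number of terms, and a chemical-desirability penalty.
        """
        keep = []
        for i in range(objectives.shape[0]):
            dominated = np.any(
                np.all(objectives <= objectives[i], axis=1)
                & np.any(objectives < objectives[i], axis=1))
            if not dominated:
                keep.append(i)
        return keep

    # Three candidate QSARs: (error, n_terms, interpretability penalty)
    models = np.array([[0.20, 5, 0.1], [0.25, 3, 0.2], [0.30, 6, 0.4]])
    print(pareto_front(models))  # third model is dominated by the first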

  10. Quantitative evaluation of gas entrainment by numerical simulation with accurate physics model

    In the design study on a large-scale sodium-cooled fast reactor (JSFR), the reactor vessel is compactified to reduce construction costs and enhance economic competitiveness. However, such a reactor vessel induces higher coolant flows in the vessel and causes several thermal-hydraulics issues, e.g. the gas entrainment (GE) phenomenon. GE in the JSFR may occur at the cover gas-coolant interface in the vessel through a strong vortex at the interface. This type of GE has been studied experimentally, numerically and theoretically, so the onset condition of GE can be evaluated conservatively. However, to clarify the negative influences of GE on the JSFR, not only the onset condition but also the entrained gas (bubble) flow rate has to be evaluated. As far as we know, studies on entrained gas flow rates are quite limited in both the experimental and numerical fields. In this study, the authors perform numerical simulations to investigate the entrained gas amount in a hollow vortex experiment (a cylindrical vessel experiment). To simulate interfacial deformations accurately, a high-precision numerical simulation algorithm for gas-liquid two-phase flows is employed. First, fine cells are applied to the region near the center of the vortex to reproduce the steep radial gradient of the circumferential velocity in this region. Then, the entrained gas flow rates are evaluated from the simulation results and compared to the experimental data. The numerical simulation gives a somewhat larger entrained gas flow rate than the experiment. However, both the numerical simulation and the experiment show entrained gas flow rates that are proportional to the outlet water velocity. In conclusion, it is confirmed that the developed numerical simulation algorithm can be applied to the quantitative evaluation of GE. (authors)

  11. Accurate single-shot quantitative phase imaging of biological specimens with telecentric digital holographic microscopy

    Doblas Expósito, Ana Isabel; Sánchez Ortiga, Emilio; Saavedra Tortosa, Genaro; Martínez Corral, Manuel; García Sucerquia, Jorge Iván

    2014-01-01

    The advantages of using a telecentric imaging system in digital holographic microscopy (DHM) to study biological specimens are highlighted. To this end, the performances of nontelecentric DHM and telecentric DHM are evaluated from the quantitative phase imaging (QPI) point of view. The evaluated stability of the microscope allows single-shot QPI in DHM by using telecentric imaging systems. Quantitative phase maps of a section of the head of the Drosophila melanogaster fly and of red blood cells […]

  12. There's plenty of gloom at the bottom: the many challenges of accurate quantitation in size-based oligomeric separations.

    Striegel, André M

    2013-11-01

    There is a variety of small-molecule species (e.g., tackifiers, plasticizers, oligosaccharides) the size-based characterization of which is of considerable scientific and industrial importance. Likewise, quantitation of the amount of oligomers in a polymer sample is crucial for the import and export of substances into the USA and European Union (EU). While the characterization of ultra-high molar mass macromolecules by size-based separation techniques is generally considered a challenge, it is this author's contention that a greater challenge is encountered when trying to perform, for quantitation purposes, separations in and of the oligomeric region. The latter thesis is expounded herein, by detailing the various obstacles encountered en route to accurate, quantitative oligomeric separations by entropically dominated techniques such as size-exclusion chromatography, hydrodynamic chromatography, and asymmetric flow field-flow fractionation, as well as by methods which are, principally, enthalpically driven such as liquid adsorption and temperature gradient interaction chromatography. These obstacles include, among others, the diminished sensitivity of static light scattering (SLS) detection at low molar masses, the non-constancy of the response of SLS and of commonly employed concentration-sensitive detectors across the oligomeric region, and the loss of oligomers through the accumulation wall membrane in asymmetric flow field-flow fractionation. The battle is not lost, however, because, with some care and given a sufficient supply of sample, the quantitation of both individual oligomeric species and of the total oligomeric region is often possible. PMID:23887277

  13. Autosomal SNP typing of forensic samples with the GenPlex™ HID System: Results of a collaborative study

    Tomas, C.; Axler-DiPerte, G.; Budimlija, Z.M.;

    2011-01-01

    The GenPlex™ HID System (Applied Biosystems - AB) offers typing of 48 of the 52 SNPforID SNPs and amelogenin. Previous studies have shown a high reproducibility of the GenPlex™ HID System using 250-500 pg DNA of good quality. An international exercise was performed by 14 laboratories (9 in Europe and 5 in the US) in order to test the robustness and reliability of the GenPlex™ HID System on forensic samples. Three samples with partly degraded DNA and 10 samples with low amounts of DNA were analyzed in duplicates using various amounts of DNA. In order to compare the performance of the GenPlex™ HID System with the most commonly used STR kits, 500 pg of partly degraded DNA from three samples was typed by the laboratories using one or more STR kits. The median SNP typing success rate was 92.3% with 500 pg of partly degraded DNA. Three of the fourteen laboratories accounted for more than two […]

  14. High-throughput bacterial SNP typing identifies distinct clusters of Salmonella Typhi causing typhoid in Nepalese children

    Holt, Kathryn E

    2010-05-31

    Background: Salmonella Typhi (S. Typhi) causes typhoid fever, which remains an important public health issue in many developing countries. Kathmandu, the capital of Nepal, is an area of high incidence, and the pediatric population appears to be at high risk of exposure and infection. Methods: We recently defined the population structure of S. Typhi, using new sequencing technologies to identify nearly 2,000 single nucleotide polymorphisms (SNPs) that can be used as unequivocal phylogenetic markers. Here we have used the GoldenGate (Illumina) platform to simultaneously type 1,500 of these SNPs in 62 S. Typhi isolates causing severe typhoid in children admitted to Patan Hospital in Kathmandu. Results: Eight distinct S. Typhi haplotypes were identified during the 20-month study period, with 68% of isolates belonging to a subclone of the previously defined H58 S. Typhi. This subclone was closely associated with resistance to nalidixic acid, with all isolates from this group demonstrating a resistant phenotype and harbouring the same resistance-associated SNP in GyrA (Phe83). A secondary clone, comprising 19% of isolates, was observed only during the second half of the study. Conclusions: Our data demonstrate the utility of SNP typing for monitoring bacterial populations over a defined period in a single endemic setting. We provide evidence for genotype introduction and define a nalidixic acid resistant subclone of S. Typhi, which appears to be the dominant cause of severe pediatric typhoid in Kathmandu during the study period.

  15. A method for accurate detection of genomic microdeletions using real-time quantitative PCR

    Bassett Anne S

    2005-12-01

    Background: Quantitative polymerase chain reaction (qPCR) is a well-established method for quantifying levels of gene expression, but has not been routinely applied to the detection of constitutional copy number alterations of human genomic DNA. Microdeletions or microduplications of the human genome are associated with a variety of genetic disorders. Although clinical laboratories routinely use fluorescence in situ hybridization (FISH) to identify such cryptic genomic alterations, there remains a significant number of individuals in whom constitutional genomic imbalance is suspected, based on clinical parameters, but cannot be readily detected using current cytogenetic techniques. Results: In this study, a novel application of real-time qPCR is presented that can be used to reproducibly detect chromosomal microdeletions and microduplications. This approach was applied to DNA from a series of patient samples and controls to validate genomic copy number alteration at cytoband 22q11. The study group comprised 12 patients with clinical symptoms of chromosome 22q11 deletion syndrome (22q11DS), 1 patient trisomic for 22q11 and 4 normal controls. Six of the patients (group 1) had known hemizygous deletions, as detected by standard diagnostic FISH, whilst the remaining 6 patients (group 2) were classified as 22q11DS negative using the clinical FISH assay. Screening of the patients and controls with a set of 10 real-time qPCR primers, spanning the 22q11.2-deleted region and flanking sequence, confirmed the FISH assay results for all patients with 100% concordance. Moreover, this qPCR enabled a refinement of the region of deletion at 22q11. Analysis of DNA from the chromosome 22 trisomic sample demonstrated genomic duplication within 22q11. Conclusion: In this paper we present a qPCR approach for the detection of chromosomal microdeletions and microduplications. The strategic use of in silico modelling for qPCR primer design to avoid regions of repetitive […]
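
    For orientation, relative copy number in qPCR assays of this kind is commonly derived from Ct values with the 2^-ΔΔCt method: a hemizygous deletion is expected to give a ratio near 0.5 and a duplication near 1.5. The sketch below uses hypothetical Ct values, not data from the study.

    def copy_number_ratio(ct_target_patient, ct_ref_patient,
                          ct_target_control, ct_ref_control):
        """Relative copy number by the 2^-ddCt method.

        ct_target_*: Ct of the probe in the (possibly deleted) region;
        ct_ref_*:    Ct of a known-disomic reference locus.
        """
        d_ct_patient = ct_target_patient - ct_ref_patient
        d_ct_control = ct_target_control - ct_ref_control
        return 2.0 ** -(d_ct_patient - d_ct_control)

    # Hemizygous 22q11 deletion: the target amplifies ~1 cycle later.
    ratio = copy_number_ratio(29.1, 27.0, 28.1, 27.0)
    print(f"copy-number ratio vs control: {ratio:.2f}")  # ~0.5 -> deletion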

  16. Development and evaluation of a liquid chromatography-mass spectrometry method for rapid, accurate quantitation of malondialdehyde in human plasma.

    Sobsey, Constance A; Han, Jun; Lin, Karen; Swardfager, Walter; Levitt, Anthony; Borchers, Christoph H

    2016-09-01

    Malondialdehyde (MDA) is a commonly used marker of lipid peroxidation in oxidative stress. To provide a sensitive analytical method that is compatible with high throughput, we developed a multiple reaction monitoring-mass spectrometry (MRM-MS) approach using 3-nitrophenylhydrazine chemical derivatization, isotope labeling, and liquid chromatography (LC) with an electrospray ionization (ESI)-tandem mass spectrometry assay to accurately quantify MDA in human plasma. A stable isotope-labeled internal standard was used to compensate for ESI matrix effects. The assay is linear (R²=0.9999) over a 20,000-fold concentration range with a lower limit of quantitation of 30 fmol (on-column). Intra- and inter-run coefficients of variation (CVs) were […] 36 h at 5°C. Standards spiked into plasma had recoveries of 92-98%. When compared to a common LC-UV method, the LC-MS method found near-identical MDA concentrations. A pilot project to quantify MDA in patient plasma samples (n=26) in a study of major depressive disorder with winter-type seasonal pattern (MDD-s) confirmed known associations between MDA concentrations and obesity (p<0.02). The LC-MS method provides high sensitivity and high reproducibility for quantifying MDA in human plasma. The simple sample preparation and rapid analysis time (5× faster than LC-UV) offer high throughput for large-scale clinical applications. PMID:27437618
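
    The internal-standard quantitation described above can be sketched as fitting the analyte-to-label peak-area ratio against standard concentrations and inverting the line for unknowns. All numbers below are hypothetical.

    import numpy as np

    def quantify_by_internal_standard(area_analyte, area_label,
                                      calib_concs, calib_ratios):
        """Invert a linear calibration of (area ratio) vs (concentration)."""
        slope, intercept = np.polyfit(calib_concs, calib_ratios, 1)
        ratio = area_analyte / area_label
        return (ratio - intercept) / slope

    conc = quantify_by_internal_standard(
        area_analyte=5.2e5, area_label=4.0e5,
        calib_concs=[0.1, 0.5, 1.0, 2.0],       # µM standards
        calib_ratios=[0.11, 0.52, 1.05, 2.08])  # measured area ratios
    print(f"MDA ~ {conc:.2f} µM")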

  17. A simple and accurate protocol for absolute polar metabolite quantification in cell cultures using quantitative nuclear magnetic resonance.

    Goldoni, Luca; Beringhelli, Tiziana; Rocchia, Walter; Realini, Natalia; Piomelli, Daniele

    2016-05-15

    Absolute analyte quantification by nuclear magnetic resonance (NMR) spectroscopy is rarely pursued in metabolomics, even though this would allow researchers to compare results obtained using different techniques. Here we report on a new protocol that permits, after pH-controlled serum protein removal, the sensitive quantification (limit of detection [LOD] = 5-25 μM) of hydrophilic nutrients and metabolites in the extracellular medium of cells in cultures. The method does not require the use of databases and uses PULCON (pulse length-based concentration determination) quantitative NMR to obtain results that are significantly more accurate and reproducible than those obtained by CPMG (Carr-Purcell-Meiboom-Gill) sequence or post-processing filtering approaches. Three practical applications of the method highlight its flexibility under different cell culture conditions. We identified and quantified (i) metabolic differences between genetically engineered human cell lines, (ii) alterations in cellular metabolism induced by differentiation of mouse myoblasts into myotubes, and (iii) metabolic changes caused by activation of neurotransmitter receptors in mouse myoblasts. Thus, the new protocol offers an easily implementable, efficient, and versatile tool for the investigation of cellular metabolism and signal transduction. PMID:26898303
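
    For reference, the PULCON principle used in this protocol relates an unknown concentration to that of an external reference through integrated signal areas and measured 90° pulse lengths (the reciprocity principle). In schematic form, with k collecting receiver-gain and scan-number factors (standard notation, not necessarily the paper's):

    \[
    c_{\mathrm{unk}} = k \, c_{\mathrm{ref}} \,
      \frac{I_{\mathrm{unk}}}{I_{\mathrm{ref}}} \,
      \frac{\theta_{90,\mathrm{unk}}}{\theta_{90,\mathrm{ref}}}
    \]

    where I denotes the integrated signal area and θ90 the measured 90° pulse length in each experiment.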

  18. Automated and quantitative headspace in-tube extraction for the accurate determination of highly volatile compounds from wines and beers.

    Zapata, Julián; Mateo-Vivaracho, Laura; Lopez, Ricardo; Ferreira, Vicente

    2012-03-23

    An automatic headspace in-tube extraction (ITEX) method for the accurate determination of acetaldehyde, ethyl acetate, diacetyl and other volatile compounds from wine and beer has been developed and validated. Method accuracy is based on the nearly quantitative transfer of volatile compounds from the sample to the ITEX trap. To achieve that goal, most methodological aspects and parameters have been carefully examined. The vial and sample sizes and the trapping materials were found to be critical due to the pernicious saturation effects of ethanol. Small 2 mL vials containing very small amounts of sample (20 μL of 1:10 diluted sample) and a trap filled with 22 mg of Bond Elut ENV resins could guarantee complete trapping of sample vapors. The complete extraction requires 100 × 0.5 mL pumping strokes at 60 °C and takes 24 min. Analytes are further desorbed at 240 °C into the GC injector under a 1:5 split ratio. The proportion of analytes finally transferred to the trap ranged from 85 to 99%. The validation of the method showed satisfactory figures of merit. Determination coefficients were better than 0.995 in all cases and good repeatability was also obtained (better than 7% in all cases). Reproducibility was better than 8.3% except for acetaldehyde (13.1%). Detection limits were below the odor detection thresholds of these target compounds in wine and beer and well below the normal ranges of occurrence. Recoveries were not significantly different from 100%, except in the case of acetaldehyde, where it could be determined that the method is not able to break some of the adducts that this compound forms with sulfites. However, this problem was avoided by incubating the sample with glyoxal. The method can constitute a general and reliable alternative for the analysis of very volatile compounds in other difficult matrixes. PMID:22340891

  19. Optimization of Specimen-Handling Procedures for Accurate Quantitation of Levels of Human Immunodeficiency Virus RNA in Plasma by Reverse Transcriptase PCR

    Dickover, Ruth E.; Herman, Steven A.; Saddiq, Khaliq; Wafer, Deborah; Dillon, Maryanne; Bryson, Yvonne J.

    1998-01-01

    Human immunodeficiency virus type 1 (HIV-1) RNA levels in plasma are currently widely used clinically for prognostication and in monitoring antiretroviral therapy. Accurate and reproducible results are critical for patient management. To determine the effects of specimen collection and handling procedures on quantitative measurement of HIV-1 RNA, we compared anticoagulants and sample processing times. Whole blood was collected from 20 HIV-1-infected patients in EDTA, acid citrate dextrose (ACD) […]

  20. Wavelet prism decomposition analysis applied to CARS spectroscopy: a tool for accurate and quantitative extraction of resonant vibrational responses.

    Kan, Yelena; Lensu, Lasse; Hehl, Gregor; Volkmer, Andreas; Vartiainen, Erik M

    2016-05-30

    We propose an approach, based on wavelet prism decomposition analysis, for correcting experimental artefacts in a coherent anti-Stokes Raman scattering (CARS) spectrum. This method allows estimating and eliminating a slowly varying modulation error function in the measured normalized CARS spectrum and yields a corrected CARS line-shape. The main advantage of the approach is that the spectral phase and amplitude corrections are avoided in the retrieved Raman line-shape spectrum, thus significantly simplifying the quantitative reconstruction of the sample's Raman response from a normalized CARS spectrum in the presence of experimental artefacts. Moreover, the approach obviates the need for assumptions about the modulation error distribution and the chemical composition of the specimens under study. The method is quantitatively validated on normalized CARS spectra recorded for equimolar aqueous solutions of D-fructose, D-glucose, and their disaccharide combination sucrose. PMID:27410113

  21. Is SPECT or CT Based Attenuation Correction More Quantitatively Accurate for Dedicated Breast SPECT Acquired with Non-Traditional Trajectories?

    Perez, Kristy L.; Mann, Steve D.; Pachon, Jan H.; Madhav, Priti; Tornai, Martin P.

    2010-01-01

    Attenuation correction is necessary for SPECT quantification. There are a variety of methods for creating attenuation maps. For dedicated breast SPECT imaging, it is unclear whether a SPECT- or CT-based attenuation map provides the most accurate quantification, and whether segmenting the different tissue types has an effect on the quantification. For these experiments, 99mTc diluted in methanol and water was filled into geometric and anthropomorphic breast phantoms and was imaged […]

  22. Self-aliquoting microarray plates for accurate quantitative matrix-assisted laser desorption/ionization mass spectrometry.

    Pabst, Martin; Fagerer, Stephan R; Köhling, Rudolf; Küster, Simon K; Steinhoff, Robert; Badertscher, Martin; Wahl, Fabian; Dittrich, Petra S; Jefimovs, Konstantins; Zenobi, Renato

    2013-10-15

    Matrix-assisted laser desorption/ionization mass spectrometry (MALDI-MS) is a fast analysis tool employed for the detection of a broad range of analytes. However, MALDI-MS has a reputation of not being suitable for quantitative analysis. Inhomogeneous analyte/matrix co-crystallization, spot-to-spot inhomogeneity, as well as a typically low number of replicates are the main contributing factors. Here, we present a novel MALDI sample target for quantitative MALDI-MS applications, which addresses the limitations mentioned above. The platform is based on the recently developed microarray for mass spectrometry (MAMS) technology and contains parallel lanes of hydrophilic reservoirs. Samples are not pipetted manually but deposited by dragging one or several sample droplets with a metal sliding device along these lanes. Sample is rapidly and automatically aliquoted into the sample spots due to the interplay of hydrophilic/hydrophobic interactions. With a few microliters of sample, it is possible to aliquot up to 40 replicates within seconds, each aliquot containing just 10 nL. The analyte droplet dries immediately and homogeneously, and consumption of the whole spot during MALDI-MS analysis is typically accomplished within a few seconds. We evaluated these sample targets with respect to their suitability for use with different samples and matrices. Furthermore, we tested their application for generating calibration curves of standard peptides with α-cyano-4-hydroxycinnamic acid as a matrix. For angiotensin II and [Glu1]-fibrinopeptide B we achieved coefficients of determination (r²) greater than 0.99 without the use of internal standards. PMID:24003910

  23. Application of an Effective Statistical Technique for an Accurate and Powerful Mining of Quantitative Trait Loci for Rice Aroma Trait.

    Farahnaz Sadat Golestan Hashemi

    When a phenotype of interest is associated with an external/internal covariate, covariate inclusion in quantitative trait loci (QTL) analyses can diminish residual variation and subsequently enhance the ability of QTL detection. In the in vitro synthesis of 2-acetyl-1-pyrroline (2AP), the main fragrance compound in rice, thermal processing during the Maillard-type reaction between proline and carbohydrate reduction produces a roasted, popcorn-like aroma. Hence, for the first time, we included the proline amino acid, an important precursor of 2AP, as a covariate in our QTL mapping analyses to precisely explore the genetic factors affecting natural variation for rice scent. Consequently, two QTLs were traced on chromosomes 4 and 8. They explained from 20% to 49% of the total aroma phenotypic variance. Additionally, by saturating the interval harboring the major QTL using gene-based primers, a putative allele of fgr (the major genetic determinant of fragrance) was mapped in the QTL on the 8th chromosome in the interval RM223-SCU015RM (1.63 cM). These loci supported previous studies of different accessions. Such QTLs can be widely used by breeders in crop improvement programs and for further fine mapping. Moreover, no previous studies have simultaneously assessed the relationship among 2AP, proline and fragrance QTLs. Therefore, our findings can help further our understanding of the metabolomic and genetic basis of 2AP biosynthesis in aromatic rice.
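
    Covariate inclusion of the kind described above can be pictured as a nested linear-model comparison in which proline concentration enters as a covariate column. The sketch below is a generic single-marker scan statistic, not the mapping software used in the study.

    import numpy as np

    def qtl_lod_with_covariate(phenotype, genotype, covariate):
        """LOD score for one marker, adjusting for a covariate.

        Compares residual sums of squares of the null model
        (mean + covariate) and the full model (mean + covariate + marker).
        """
        n = len(phenotype)
        X0 = np.column_stack([np.ones(n), covariate])
        X1 = np.column_stack([X0, genotype])
        beta0, *_ = np.linalg.lstsq(X0, phenotype, rcond=None)
        beta1, *_ = np.linalg.lstsq(X1, phenotype, rcond=None)
        rss0 = np.sum((phenotype - X0 @ beta0) ** 2)
        rss1 = np.sum((phenotype - X1 @ beta1) ** 2)
        return (n / 2.0) * np.log10(rss0 / rss1)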

  24. Selection of accurate reference genes in mouse trophoblast stem cells for reverse transcription-quantitative polymerase chain reaction.

    Motomura, Kaori; Inoue, Kimiko; Ogura, Atsuo

    2016-06-17

    Mouse trophoblast stem cells (TSCs) form colonies of different sizes and morphologies, which might reflect their degrees of differentiation. Therefore, each colony type can have a characteristic gene expression profile; however, the expression levels of internal reference genes may also change, causing fluctuations in their estimated gene expression levels. In this study, we validated seven housekeeping genes by using a geometric averaging method and identified Gapdh as the most stable gene across different colony types. Indeed, when Gapdh was used as the reference, expression levels of Elf5, a TSC marker gene, stringently classified TSC colonies into two groups: a high-expression group consisting of type 1 and 2 colonies, and a lower-expression group consisting of type 3 and 4 colonies. This clustering was consistent with our putative classification of undifferentiated/differentiated colonies based on their time-dependent colony transitions. By contrast, use of an unstable reference gene (Rn18s) allowed no such clear classification. Cdx2, another TSC marker, did not show any significant colony type-specific expression pattern irrespective of the reference gene. Selection of stable reference genes for quantitative gene expression analysis might be critical, especially when cell lines consisting of heterogeneous cell populations are used. PMID:26853688
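
    The geometric-averaging approach referred to above (geNorm) ranks candidate reference genes by a stability value M: the average, over all other candidates, of the standard deviation across samples of the pairwise log-ratios. Lower M means more stable. A compact sketch, assuming an array of relative expression values:

    import numpy as np

    def genorm_m_values(expr):
        """geNorm-style stability M for each candidate reference gene.

        expr: (n_samples, n_genes) array of relative expression values.
        """
        log_expr = np.log2(expr)
        n_genes = expr.shape[1]
        m = np.zeros(n_genes)
        for j in range(n_genes):
            sds = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                   for k in range(n_genes) if k != j]
            m[j] = np.mean(sds)  # low M -> stable across colony types
        return m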

  25. Accurate quantitative 13C NMR spectroscopy: repeatability over time of site-specific 13C isotope ratio determination.

    Caytan, Elsa; Botosoa, Eliot P; Silvestre, Virginie; Robins, Richard J; Akoka, Serge; Remaud, Gérald S

    2007-11-01

    The stability over time (repeatability) for the determination of site-specific 13C/12C ratios at natural abundance by quantitative 13C NMR spectroscopy has been tested on three probes: enriched bilabeled [1,2-13C2]ethanol; ethanol at natural abundance; and vanillin at natural abundance. It is shown in all three cases that the standard deviation for a series of measurements taken every 2-3 months over periods between 9 and 13 months is equal to or smaller than the standard deviation calculated from 5-10 replicate measurements made on a single sample. The precision which can be achieved using the present analytical 13C NMR protocol is higher than the prerequisite value of 1-2 per thousand for the determination of site-specific 13C/12C ratios at natural abundance (13C-SNIF-NMR). Hence, this technique permits the discrimination of very small variations in 13C/12C ratios between carbon positions, as found in biogenic natural products. This observed stability over time in 13C NMR spectroscopy indicates that further improvements in precision will depend primarily on improved signal-to-noise ratio. PMID:17900175

  26. Validation of Reference Genes for Accurate Normalization of Gene Expression in Lilium davidii var. unicolor for Real Time Quantitative PCR.

    XueYan Li

    Lilium is an important commercial flower bulb. qRT-PCR is an extremely important technique for tracking gene expression levels. The requirement for suitable reference genes for normalization has become increasingly significant and exigent. The expression of internal control genes in living organisms varies considerably under different experimental conditions. For the economically important Lilium, only a limited number of reference genes applied in qRT-PCR have been reported to date. In this study, the expression stability of 12 candidate genes, including α-TUB, β-TUB, ACT, eIF, GAPDH, UBQ, UBC, 18S, 60S, AP4, FP, and RH2, was evaluated in a diverse set of 29 samples representing different developmental processes, three stress treatments (cold, heat, and salt) and different organs. For different organs, the combination of ACT, GAPDH, and UBQ is appropriate, whereas ACT together with AP4, or ACT along with GAPDH, is suitable for normalization in leaves and scales at different developmental stages, respectively. In leaves, scales and roots under stress treatments, FP, ACT and AP4, respectively, showed the most stable expression. This study provides a guide for the selection of a reference gene under different experimental conditions, and will benefit future research on more accurate gene expression studies in a wide variety of Lilium genotypes.

  27. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid-Base and Ligand Binding Equilibria of Aquacobalamin.

    Johnston, Ryne C; Zhou, Jing; Smith, Jeremy C; Parks, Jerry M

    2016-08-01

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co-ligand binding equilibrium constants (Kon/off), pKas, and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows a hexa-, penta-, tetra-coordination scheme for Co(III), Co(II), and Co(I) species, respectively, and the second model features saturation of each vacant axial coordination site on Co(II) and Co(I) species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co-axial ligand binding, leading to substantial errors in predicted pKas and […]
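
    As a reminder of the unit conversion such benchmarks rest on, a computed solution-phase free energy of reduction maps to a potential versus SHE roughly as below; the 4.28 V absolute SHE potential is one commonly used convention and an assumption here, not necessarily the paper's choice.

    def reduction_potential(delta_g_kj_mol, n_electrons=1, she_abs=4.28):
        """E (V vs SHE) from the free energy of A + n e- -> A^n- in solution."""
        faraday = 96.485                    # kJ/(mol*V)
        e_abs = -delta_g_kj_mol / (n_electrons * faraday)
        return e_abs - she_abs

    print(reduction_potential(-420.0))      # ~0.07 V vs SHE (hypothetical)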

  28. Tandem Mass Spectrometry Measurement of the Collision Products of Carbamate Anions Derived from CO2 Capture Sorbents: Paving the Way for Accurate Quantitation

    Jackson, Phil; Fisher, Keith J.; Attalla, Moetaz Ibrahim

    2011-08-01

    The reaction between CO2 and aqueous amines to produce a charged carbamate product plays a crucial role in post-combustion capture chemistry when primary and secondary amines are used. In this paper, we report the low energy negative-ion CID results for several anionic carbamates derived from primary and secondary amines commonly used as post-combustion capture solvents. The study was performed using the modern equivalent of a triple quadrupole instrument equipped with a T-wave collision cell. Deuterium labeling of 2-aminoethanol (1,1,2,2-d4-2-aminoethanol) and computations at the M06-2X/6-311++G(d,p) level were used to confirm the identity of the fragmentation products for 2-hydroxyethylcarbamate (derived from 2-aminoethanol), in particular the ions CN-, NCO- and facile neutral losses of CO2 and water; there is precedent for the latter in condensed phase isocyanate chemistry. The fragmentations of 2-hydroxyethylcarbamate were generalized for carbamate anions derived from other capture amines, including ethylenediamine, diethanolamine, and piperazine. We also report unequivocal evidence for the existence of carbamate anions derived from sterically hindered amines (Tris(2-hydroxymethyl)aminomethane and 2-methyl-2-aminopropanol). For the suite of carbamates investigated, diagnostic losses include the decarboxylation product (-CO2, 44 mass units), loss of 46 mass units and the fragments NCO- (m/z 42) and CN- (m/z 26). We also report low energy CID results for the dicarbamate dianion (-O2CNHC2H4NHCO2-) commonly encountered in CO2 capture solutions utilizing ethylenediamine. Finally, we demonstrate a promising ion chromatography-MS based procedure for the separation and quantitation of aqueous anionic carbamates, which is based on the reported CID findings. The availability of accurate quantitation methods for ionic CO2 capture products could lead to dynamic operational tuning of CO2 capture plants and, thus, cost savings via real-time manipulation of solvent […]

  29. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    Malik, Afshan N., E-mail: afshan.malik@kcl.ac.uk [King's College London, Diabetes Research Group, Division of Diabetes and Nutritional Sciences, School of Medicine (United Kingdom); Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil [King's College London, Diabetes Research Group, Division of Diabetes and Nutritional Sciences, School of Medicine (United Kingdom)

    2011-08-19

    Highlights: → Mitochondrial dysfunction is central to many diseases of oxidative stress. → 95% of the mitochondrial genome is duplicated in the nuclear genome. → Dilution of untreated genomic DNA leads to dilution bias. → Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved and we have identified that current methods have at least one of the following three problems: (1) as much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA, which are repetitive and/or highly variable, for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes causes a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome, a unique single copy region in the nuclear genome, and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
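
    With unique single-copy amplicons as advocated above, the Mt/N ratio follows directly from the two Ct values. A minimal sketch, assuming for illustration perfect doubling per cycle and a diploid nuclear target:

    def mt_copies_per_diploid_cell(ct_mt, ct_nuc, eff=2.0):
        """Mitochondrial genomes per cell from qPCR Ct values.

        Assumes one unique single-copy mitochondrial amplicon and one
        unique single-copy nuclear amplicon (2 copies per diploid cell).
        """
        mt_per_nuclear_copy = eff ** (ct_nuc - ct_mt)
        return 2.0 * mt_per_nuclear_copy

    print(mt_copies_per_diploid_cell(ct_mt=18.2, ct_nuc=26.5))  # ~630 copies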

  30. Accurate and easy-to-use assessment of contiguous DNA methylation sites based on proportion competitive quantitative-PCR and lateral flow nucleic acid biosensor.

    Xu, Wentao; Cheng, Nan; Huang, Kunlun; Lin, Yuehe; Wang, Chenguang; Xu, Yuancong; Zhu, Longjiao; Du, Dan; Luo, Yunbo

    2016-06-15

    Many types of diagnostic technologies have been reported for DNA methylation, but they require a standard curve for quantification or show only moderate accuracy. Moreover, most technologies have difficulty providing information on the level of methylation at specific contiguous multi-sites, not to mention easy-to-use detection that eliminates labor-intensive procedures. We have addressed these limitations and report here a cascade strategy that combines proportion competitive quantitative PCR (PCQ-PCR) and a lateral flow nucleic acid biosensor (LFNAB), resulting in accurate and easy-to-use assessment. The P16 gene, a well-studied tumor suppressor gene with specific multi-methylated sites, was used as the target DNA sequence model. First, PCQ-PCR provided amplification products with an accurate proportion of multi-methylated sites following the principle of proportionality, and double-labeled duplex DNA was synthesized. Then, an LFNAB strategy was further employed for amplified signal detection via immune affinity recognition, and the exact level of site-specific methylation could be determined by the relative intensity of the test line and internal reference line. This combination resulted in all recoveries being greater than 94%, which is satisfactory for DNA methylation assessment. Moreover, the developed cascade shows high usability as a simple, sensitive, and low-cost tool. Therefore, as a universal platform for the detection of contiguous multi-sites of DNA methylation without external standards and expensive instrumentation, this PCQ-PCR-LFNAB cascade method shows great promise for the point-of-care diagnosis of cancer risk and therapeutics. PMID:26914373

  31. Evaluation of the iPLEX® Sample ID Plus Panel designed for the Sequenom MassARRAY® system. A SNP typing assay developed for human identification and sample tracking based on the SNPforID panel

    Johansen, P; Andersen, J D; Børsting, Claus; Morling, N

    2013-01-01

    Sequenom launched the first commercial SNP typing kit for human identification, named the iPLEX® Sample ID Plus Panel. The kit amplifies 47 of the 52 SNPs in the SNPforID panel, amelogenin and two Y-chromosome SNPs in one multiplex PCR. The SNPs were analyzed by single base extension (SBE) and matrix-assisted laser desorption/ionization time-of-flight mass spectrometry (MALDI-TOF MS). In this study, we evaluated the accuracy and sensitivity of the iPLEX® Sample ID Plus Panel by comparing […]

  32. Identification and evaluation of new reference genes in Gossypium hirsutum for accurate normalization of real-time quantitative RT-PCR data

    Alves-Ferreira Marcio

    2010-03-01

    Background: Normalization against reference genes, or housekeeping genes, can produce more accurate and reliable results from reverse transcription real-time quantitative polymerase chain reaction (qPCR). Recent studies have shown that no single housekeeping gene is universal for all experiments. Thus, the selection of suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping genes have been carried out on plants. Therefore, qPCR studies on important crops such as cotton have been hampered by the lack of suitable reference genes. Results: Using two distinct algorithms, implemented in geNorm and NormFinder, we assessed the gene expression of nine candidate reference genes in cotton: GhACT4, GhEF1α5, GhFBX6, GhPP2A1, GhMZA, GhPTB, GhGAPC2, GhβTUB3 and GhUBQ14. The candidate reference genes were evaluated in 23 experimental samples consisting of six distinct plant organs, eight stages of flower development, four stages of fruit development and flower verticils. The expression of the GhPP2A1 and GhUBQ14 genes was the most stable across all samples and also when distinct plant organs were examined. GhACT4 and GhUBQ14 presented more stable expression during flower development, GhACT4 and GhFBX6 in the floral verticils, and GhMZA and GhPTB during fruit development. Our analysis provided the most suitable combination of reference genes for each experimental set tested as internal controls for reliable qPCR data normalization. In addition, to illustrate the use of cotton reference genes, we checked the expression of two cotton MADS-box genes in distinct plant and floral organs and also during flower development. Conclusion: We have tested the expression stabilities of nine candidate genes in a set of 23 tissue samples from cotton plants divided into five different experimental sets. As a result of this evaluation, we recommend the use of the GhUBQ14 and GhPP2A1 housekeeping genes as superior references.

  33. Quantitative Assessment of Protein Structural Models by Comparison of H/D Exchange MS Data with Exchange Behavior Accurately Predicted by DXCOREX

    Liu, Tong; Pantazatos, Dennis; Li, Sheng; Hamuro, Yoshitomo; Hilser, Vincent J.; Woods, Virgil L.

    2012-01-01

    Peptide amide hydrogen/deuterium exchange mass spectrometry (DXMS) data are often used to qualitatively support models for protein structure. We have developed and validated a method (DXCOREX) by which exchange data can be used to quantitatively assess the accuracy of three-dimensional (3-D) models of protein structure. The method utilizes the COREX algorithm to predict a protein's amide hydrogen exchange rates by reference to a hypothesized structure, and these values are used to generate a virtual data set (deuteron incorporation per peptide) that can be quantitatively compared with the deuteration level of the peptide probes measured by hydrogen exchange experimentation. The accuracy of DXCOREX was established in studies performed with 13 proteins for which both high-resolution structures and experimental data were available. The DXCOREX-calculated and experimental data for each protein were highly correlated. We then employed correlation analysis of DXCOREX-calculated versus DXMS experimental data to assess the accuracy of a recently proposed structural model for the catalytic domain of a Ca2+-independent phospholipase A2. The model's calculated exchange behavior was highly correlated with the experimental exchange results available for the protein, supporting the accuracy of the proposed model. This method of analysis will substantially increase the precision with which experimental hydrogen exchange data can help decipher challenging questions regarding protein structure and dynamics.
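
    The correlation analysis at the core of this assessment reduces to comparing two vectors of per-peptide deuteration levels. A toy illustration with hypothetical numbers:

    import numpy as np

    # Hypothetical deuteration levels (%) for five peptide probes
    measured  = np.array([12.0, 35.5, 48.2, 71.9, 88.4])  # DXMS experiment
    predicted = np.array([15.1, 30.8, 52.6, 68.3, 90.2])  # COREX virtual set

    r = np.corrcoef(measured, predicted)[0, 1]
    print(f"Pearson r = {r:.3f}")  # high r supports the structural model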

  17. Influence of storage time on DNA of Chlamydia trachomatis, Ureaplasma urealyticum, and Neisseria gonorrhoeae for accurate detection by quantitative real-time polymerase chain reaction.

    Lu, Y; Rong, C Z; Zhao, J Y; Lao, X J; Xie, L; Li, S; Qin, X

    2016-01-01

    The shipment and storage conditions of clinical samples pose a major challenge to the detection accuracy of Chlamydia trachomatis (CT), Neisseria gonorrhoeae (NG), and Ureaplasma urealyticum (UU) when using quantitative real-time polymerase chain reaction (qRT-PCR). The aim of the present study was to explore the influence of storage time at 4°C on the DNA of these pathogens and its effect on their detection by qRT-PCR. CT, NG, and UU positive genital swabs from 70 patients were collected, and the DNA of each sample was extracted and divided into eight aliquots. One aliquot was immediately analyzed with qRT-PCR to assess the initial pathogen load, whereas the remaining samples were stored at 4°C and analyzed after 1, 2, 3, 7, 14, 21, and 28 days. No significant differences in CT, NG, and UU DNA loads were observed between baseline (day 0) and the subsequent time points (days 1, 2, 3, 7, 14, 21, and 28) in any of the 70 samples. Although a slight increase in DNA levels was observed at day 28 compared to day 0, paired-sample t-test results revealed no significant differences between the mean DNA levels at different time points following storage at 4°C (all P>0.05). Overall, the CT, UU, and NG DNA loads from all genital swab samples were stable at 4°C over a 28-day period. PMID:27580005
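
    The day-to-day stability comparison is a paired test between baseline and a later time point; a minimal sketch using SciPy, with hypothetical log10 DNA loads standing in for the study's measurements:

        import numpy as np
        from scipy import stats

        # log10 pathogen DNA loads for the same swabs at day 0 and day 28 (hypothetical)
        day0 = np.array([5.2, 4.8, 6.1, 5.5, 4.9])
        day28 = np.array([5.3, 4.9, 6.0, 5.6, 5.0])
        t_stat, p_value = stats.ttest_rel(day0, day28)
        print(f"t = {t_stat:.2f}, p = {p_value:.3f}")   # p > 0.05: loads stable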

  18. Validation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in strawberry fruits using different cultivars and osmotic stresses.

    Galli, Vanessa; Borowski, Joyce Moura; Perin, Ellen Cristina; Messias, Rafael da Silva; Labonde, Julia; Pereira, Ivan dos Santos; Silva, Sérgio Delmar Dos Anjos; Rombaldi, Cesar Valmor

    2015-01-10

    The increasing demand for strawberry (Fragaria × ananassa Duch.) fruits is associated mainly with their sensorial characteristics and content of antioxidant compounds. Nevertheless, strawberry production has been hampered by the plant's sensitivity to abiotic stresses. Understanding the molecular mechanisms underlying the stress response is therefore of great importance for genetic engineering approaches aiming to improve strawberry tolerance. However, studying gene expression in strawberry requires suitable reference genes. In the present study, seven traditional and novel candidate reference genes were evaluated for transcript normalization in fruits of ten strawberry cultivars and under two abiotic stresses, using RefFinder, which integrates the four major currently available software programs: geNorm, NormFinder, BestKeeper and the comparative delta-Ct method. The results indicate that expression stability depends on the experimental conditions. The candidate reference gene DBP (DNA binding protein) was the most suitable for normalizing expression data in samples of strawberry cultivars and under drought stress, and the candidate reference gene HISTH4 (histone H4) was the most stable under osmotic and salt stresses. The traditional genes GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and 18S (18S ribosomal RNA) were the most unstable genes under all conditions. The expression of the phenylalanine ammonia lyase (PAL) and 9-cis-epoxycarotenoid dioxygenase (NCED1) genes was used to further confirm the validated candidate reference genes, showing that the use of an inappropriate reference gene may produce erroneous results. This study is the first survey of reference gene stability in strawberry cultivars and under osmotic stresses, and it provides guidelines for obtaining more accurate RT-qPCR results in future breeding efforts. PMID:25445290
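
    Once a stable reference such as DBP is validated, relative expression of a target gene like PAL is commonly computed with the Livak 2^-ΔΔCt method (which assumes near-100% amplification efficiency for both genes); a minimal sketch with hypothetical Ct values:

        def relative_expression(ct_target_s, ct_ref_s, ct_target_c, ct_ref_c):
            """Livak 2^-ddCt: fold change of a target gene in a stressed
            sample (s) vs. a control sample (c), normalized to a validated
            reference gene."""
            ddct = (ct_target_s - ct_ref_s) - (ct_target_c - ct_ref_c)
            return 2.0 ** (-ddct)

        # hypothetical Ct values: PAL normalized to DBP, drought vs. control
        print(relative_expression(24.1, 18.9, 26.3, 19.0))   # ~4.3-fold induction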

  19. Evaluation of the iPLEX(®) Sample ID Plus Panel designed for the Sequenom MassARRAY(®) system. A SNP typing assay developed for human identification and sample tracking based on the SNPforID panel

    Johansen, P; Andersen, J D; Børsting, Claus;

    2013-01-01

    Sequenom launched the first commercial SNP typing kit for human identification, named the iPLEX® Sample ID Plus Panel. The kit amplifies 47 of the 52 SNPs in the SNPforID panel, amelogenin and two Y-chromosome SNPs in one multiplex PCR. The SNPs were analyzed by single base extension (SBE) and...... SNPforID assay. The average call rate for duplicate typing of any one SNP in the panel was 90.0% when the mass spectra were analyzed automatically with the MassARRAY® TYPER 4.0 genotyping software in real time. Two reproducible inconsistencies were observed (error rate: 0.05%) at two different SNP...... loci. In addition, four inconsistencies were observed once. The optimal amount of template DNA in the PCR was ≥10 ng. There was a relatively high risk of allele and locus drop-outs when ≤1 ng template DNA was used. We developed an R script with a stringent set of "forensic analysis parameters" based on...

  20. The effect of multiple primer-template mismatches on quantitative PCR accuracy and development of a multi-primer set assay for accurate quantification of pcrA gene sequence variants.

    Ledeker, Brett M; De Long, Susan K

    2013-09-01

    Quantitative PCR (qPCR) is a critical tool for quantifying the abundance of specific organisms and the level or expression of target genes in medically and environmentally relevant systems. However, often the power of this tool has been limited because primer-template mismatches, due to sequence variations of targeted genes, can lead to inaccuracies in measured gene quantities, detection failures, and spurious conclusions. Currently available primer design guidelines for qPCR were developed for pure culture applications, and available primer design strategies for mixed cultures were developed for detection rather than accurate quantification. Furthermore, past studies examining the impact of mismatches have focused only on single mismatches while instances of multiple mismatches are common. There are currently no appropriate solutions to overcome the challenges posed by sequence variations. Here, we report results that provide a comprehensive, quantitative understanding of the impact of multiple primer-template mismatches on qPCR accuracy and demonstrate a multi-primer set approach to accurately quantify a model gene pcrA (encoding perchlorate reductase) that has substantial sequence variation. Results showed that for multiple mismatches (up to 3 mismatches) in primer regions where mismatches were previously considered tolerable (middle and 5' end), quantification accuracies could be as low as ~0.1%. Furthermore, tests were run using a published pcrA primer set with mixtures of genomic DNA from strains known to harbor the target gene, and for some mixtures quantification accuracy was as low as ~0.8% or was non-detect. To overcome these limitations, a multiple primer set assay including minimal degeneracies was developed for pcrA genes. This assay resulted in nearly 100% accurate detection for all mixed microbial communities tested. The multi-primer set approach demonstrated herein can be broadly applied to other genes with known sequences. PMID:23806694
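
    Quantification accuracy in this kind of study is judged by converting measured Ct values to copy numbers through a standard curve and comparing against the known input; a minimal sketch under the usual log-linear qPCR model (all values hypothetical):

        import numpy as np

        # standard curve: Ct = slope * log10(copies) + intercept
        log_copies = np.array([3.0, 4.0, 5.0, 6.0, 7.0])
        cts = np.array([33.1, 29.8, 26.4, 23.0, 19.7])
        slope, intercept = np.polyfit(log_copies, cts, 1)

        def copies_from_ct(ct):
            return 10 ** ((ct - intercept) / slope)

        expected = 5.0e5                    # known template input (hypothetical)
        measured = copies_from_ct(27.5)     # Ct shifted by primer mismatches
        print(f"quantification accuracy = {100 * measured / expected:.1f}%")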

  1. A rapid and accurate method for the quantitative estimation of natural polysaccharides and their fractions using high performance size exclusion chromatography coupled with multi-angle laser light scattering and refractive index detector.

    Cheong, Kit-Leong; Wu, Ding-Tao; Zhao, Jing; Li, Shao-Ping

    2015-06-26

    In this study, a rapid and accurate method for quantitative analysis of natural polysaccharides and their different fractions was developed. First, high performance size exclusion chromatography (HPSEC) was utilized to separate natural polysaccharides. Then the molecular masses of their fractions were determined by multi-angle laser light scattering (MALLS). Finally, quantification of the polysaccharides or their fractions was performed based on their response to the refractive index detector (RID) and their universal refractive index increment (dn/dc). The accuracy of the developed method for the quantification of individual and mixed polysaccharide standards, including konjac glucomannan, CM-arabinan, xyloglucan, larch arabinogalactan, oat β-glucan, dextran (410, 270, and 25 kDa), mixed xyloglucan and CM-arabinan, and mixed dextran 270 K and CM-arabinan, was determined, and their average recoveries were between 90.6% and 98.3%. The limits of detection (LOD) and quantification (LOQ) ranged from 10.68 to 20.25 μg/mL and 42.70 to 68.85 μg/mL, respectively. Compared to the conventional phenol-sulfuric acid assay and HPSEC coupled with evaporative light scattering detection (HPSEC-ELSD), the developed HPSEC-MALLS-RID method based on a universal dn/dc for the quantification of polysaccharides and their fractions is much more simple, rapid, and accurate, with no need for individual polysaccharide standards or calibration curves. The developed method was also successfully utilized for quantitative analysis of polysaccharides and their different fractions from three medicinal plants of the Panax genus: Panax ginseng, Panax notoginseng and Panax quinquefolius. The results suggested that the HPSEC-MALLS-RID method based on a universal dn/dc could be used as a routine technique for the quantification of polysaccharides and their fractions in natural resources. PMID:25990349
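
    The RID-based quantification rests on the detector response being proportional to concentration times dn/dc, so one universal increment replaces per-standard calibration curves; a minimal sketch of the conversion, where the instrument constant and peak values are hypothetical placeholders:

        def mass_from_rid(peak_area, dndc, k_ri):
            """Mass eluted in an HPSEC peak from the RID signal.
            peak_area: integrated RID response; dndc: refractive index
            increment (mL/g); k_ri: instrument calibration constant
            (hypothetical) relating response to refractive index change."""
            return k_ri * peak_area / dndc

        # dn/dc for polysaccharides in aqueous mobile phase, often quoted near 0.146 mL/g
        mass_g = mass_from_rid(peak_area=3.2e-4, dndc=0.146, k_ri=1.0e-2)
        print(f"eluted mass = {mass_g:.3e} g")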

  2. Development of a multiplex SNP typing assay for Yersinia pestis Orientalis strains based on Luminex technology

    朱鹏; 张青雯; 祁芝珍; 崔玉军; 肖潇; 杨瑞馥; 谭周进; 宋亚军

    2012-01-01

    Objective: To develop a Luminex suspension-array-based single nucleotide polymorphism (SNP) typing assay for Chinese Yersinia pestis Orientalis strains, as a basis for further study of their polymorphism and phylogenetic relationships. Methods: Eighteen SNP loci with typing value for Orientalis strains were amplified simultaneously by multiplex PCR. The amplicons were subjected to multiplex allele-specific primer extension (ASPE) and analyzed on the Luminex suspension array; MasterPlex GT V2.3 software was then used to calculate median fluorescence intensity (MFI) ratios and call the base state at each SNP locus. The reproducibility of the typing method was also evaluated. Results: The assay rapidly measured MFI values for all loci in a high-throughput manner, yielding unambiguous calls for all 18 targeted SNPs within 8 hours with good reproducibility. Based on the 18 SNPs, 36 Y. pestis Orientalis strains were grouped into 7 genotypes. Conclusion: The multiplex Luminex SNP assay established here provides a reliable high-throughput platform for SNP screening and lays the foundation for subsequent SNP typing and phylogenetic analysis of Y. pestis Orientalis strains.
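
    A minimal sketch of the final calling step: for a haploid organism such as Y. pestis, the base at each SNP is called from the two allele-specific probe MFIs, with a no-call guard; the dominance threshold here is a hypothetical choice, not the rule implemented in MasterPlex GT:

        def call_snp(mfi_allele1, mfi_allele2, min_ratio=2.0):
            """Call a haploid SNP from allele-specific ASPE probe intensities.
            One probe must dominate by min_ratio, otherwise no-call."""
            if mfi_allele1 <= 0 and mfi_allele2 <= 0:
                return "no call"
            if mfi_allele2 <= 0 or mfi_allele1 >= min_ratio * mfi_allele2:
                return "allele 1"
            if mfi_allele1 <= 0 or mfi_allele2 >= min_ratio * mfi_allele1:
                return "allele 2"
            return "no call"

        print(call_snp(1850.0, 240.0))   # -> allele 1
        print(call_snp(520.0, 470.0))    # -> no call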

  3. Accurate Quantitative Analysis Method for 8 Kinds of Mycotoxins in Cereal

    曹娅; 孙利; 王明林; 冯峰; 储晓刚

    2013-01-01

    A high performance liquid chromatography-tandem mass spectrometry (HPLC-MS/MS) method was developed for the determination of 8 mycotoxins (aflatoxin B1, aflatoxin B2, aflatoxin G1, aflatoxin G2, fumonisin B1, fumonisin B2, sterigmatocystin and roquefortine C) in rice, wheat and soybean. Samples were treated with hexane to remove oil and then extracted by liquid-liquid partitioning with 60% acetonitrile; the acetonitrile-water layer was passed through a filter membrane before analysis. Detection was performed in electrospray ionization (ESI) positive ion mode with multiple reaction monitoring (MRM), and the target compounds were quantified by isotope dilution with internal standards. All 8 mycotoxins showed good linearity over their respective concentration ranges, with correlation coefficients of at least 0.9970. Spike recoveries from blank samples ranged from 77% to 123%, with relative standard deviations (RSD) of 0.6%-13.3%. The method is simple and sensitive and can be used for the detection of mycotoxins in cereal.
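
    Isotope-dilution quantitation works on the area ratio of the analyte to its co-eluting labelled internal standard; a minimal sketch with a one-point response-factor calibration (all numbers hypothetical):

        def conc_by_isotope_dilution(area_analyte, area_is, conc_is, rrf=1.0):
            """Analyte concentration from the analyte/internal-standard peak
            area ratio; rrf is the relative response factor determined from
            a standard of known concentration."""
            return (area_analyte / area_is) * conc_is / rrf

        # hypothetical: isotope-labelled aflatoxin B1 spiked at 2.0 ug/kg
        print(conc_by_isotope_dilution(area_analyte=5.4e5, area_is=4.1e5,
                                       conc_is=2.0, rrf=0.98))   # ug/kg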

  4. Accurate Finite Difference Algorithms

    Goodrich, John W.

    1996-01-01

    Two families of finite difference algorithms for computational aeroacoustics are presented and compared. All of the algorithms are single-step explicit methods; they have the same order of accuracy in both space and time, with examples up to eleventh order, and they have multidimensional extensions. One of the algorithm families has spectral-like high resolution. Propagation with high order and high resolution algorithms can produce accurate results after O(10^6) periods of propagation with eight grid points per wavelength.
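
    For illustration, a standard fourth-order single-step central-difference first derivative (the paper's schemes reach eleventh order; this low-order example only shows the idea and is not one of the paper's algorithms):

        import numpy as np

        def d1_central4(f, x, h):
            """Fourth-order central difference approximation of f'(x)."""
            return (-f(x + 2*h) + 8*f(x + h) - 8*f(x - h) + f(x - 2*h)) / (12*h)

        x, h = 1.0, 1e-2
        approx = d1_central4(np.sin, x, h)
        print(approx, np.cos(x), abs(approx - np.cos(x)))   # error ~ O(h**4)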

  5. Quantitatively accurate calculations of conductance and thermopower of molecular junctions

    Markussen, Troels; Jin, Chengjun; Thygesen, Kristian Sommer

    2013-01-01

    Thermopower measurements of molecular junctions have recently gained interest as a characterization technique that supplements the more traditional conductance measurements. Here we investigate the electronic conductance and thermopower of benzenediamine (BDA) and benzenedicarbonitrile (BDCN) con...

  6. Quantitative microfocal radiography accurately detects joint changes in rheumatoid arthritis.

    Buckland-Wright, J C; Carmichael, I.; Walker, S R

    1986-01-01

    Microfocal radiography, producing ×5 magnified images of the wrist and hands with a high spatial resolution (25 microns) in the film, permitted direct measurement of erosion area and joint space width in patients with rheumatoid arthritis. The magnitude of errors relating to direct measurement, repositioning of the wrist and hand on successive x-ray visits, and repeated identification of erosions and calculation of their area were assessed. The coefficients of variation for length and area measurements...

  7. Quantitative film radiography

    We have developed a system of quantitative radiography in order to produce quantitative images displaying homogeneity of parts. The materials that we characterize are synthetic composites and may contain important subtle density variations not discernible by examining a raw film x-radiograph. In order to quantitatively interpret film radiographs, it is necessary to digitize, interpret, and display the images. Our integrated system of quantitative radiography displays accurate, high-resolution pseudo-color images in units of density. We characterize approximately 10,000 parts per year in hundreds of different configurations and compositions with this system. This report discusses: the method; film processor monitoring and control; verifying film and processor performance; and correction of scatter effects

  8. Quantitative EPR A Practitioners Guide

    Eaton, Gareth R; Barr, David P; Weber, Ralph T

    2010-01-01

    This is the first comprehensive yet practical guide for people who perform quantitative EPR measurements. No existing book provides this level of practical guidance to ensure the successful use of EPR. There is a growing need in both industrial and academic research to provide meaningful and accurate quantitative EPR results. This text discusses the various sample, instrument and software related aspects required for EPR quantitation. Specific topics include: choosing a reference standard, resonator considerations (Q, B1, Bm), power saturation characteristics, sample positioning, and finally, putting all the factors together to obtain an accurate spin concentration of a sample.

  9. Towards accurate emergency response behavior

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail

  10. Accurate determination of antenna directivity

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power...

  11. Quantitative FDG in depression

    Full text: Studies of regional cerebral glucose metabolism (rCMRGlu) using positron emission tomography (PET) in patients with affective disorders have consistently demonstrated reduced metabolism in the frontal regions. Different quantitative and semi-quantitative rCMRGlu regions of interest (ROI) comparisons, e.g. absolute metabolic rates and ratios of dorsolateral prefrontal cortex (DLPFC) to ipsilateral hemisphere cortex, have been reported. These studies suffered from the use of a standard brain atlas to define ROIs, whereas in this case study the individual's magnetic resonance imaging (MRI) scan was registered with the PET scan to enable accurate neuroanatomical ROI definition for the subject. The patient is a 36-year-old female with a six-week history of major depression (HAM-D = 34, MMSE = 28). A quantitative FDG PET study and an MRI scan were performed. Six MRI-guided ROIs (DLPFC, PFC, whole hemisphere) were defined. The average rCMRGlu in the DLPFC (left = 28.8 ± 5.8 μmol/100 g/min; right = 25.6 ± 7.0 μmol/100 g/min) was slightly reduced compared with the ipsilateral hemispheric rate (left = 30.4 ± 6.8 μmol/100 g/min; right = 29.5 ± 7.2 μmol/100 g/min). The ratios of DLPFC to ipsilateral hemispheric rate were close to unity (left = 0.95 ± 0.29; right = 0.87 ± 0.32). The right-to-left DLPFC ratio did not show any significant asymmetry (0.91 ± 0.30). These results do not correlate with earlier published results reporting decreased left DLPFC rates compared to right DLPFC, although our results will need to be replicated in a group of depressed patients. Registration of PET and MRI studies is necessary in ROI-based quantitative FDG PET studies to allow for the normal anatomical variation among individuals, and thus is essential for accurate comparison of rCMRGlu between individuals

  12. Accurate ab initio spin densities

    Boguslawski, Katharina; Legeza, Örs; Reiher, Markus

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...

  13. Accurate Modeling of Advanced Reflectarrays

    Zhou, Min

    of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical NearField Antenna Test Facility, it was concluded that the three latter factors are particularly important...... to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  14. Accurate thickness measurement of graphene

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.

  15. A More Accurate Fourier Transform

    Courtney, Elya

    2015-01-01

    Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
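
    The trade-off is easy to reproduce: an FFT peak is quantized to the grid spacing 1/T, while an explicit integral can be evaluated on an arbitrarily fine frequency grid; a minimal sketch using a rectangle-rule integral (signal parameters are illustrative, and this is not the paper's code):

        import numpy as np

        T, n = 10.0, 500
        t = np.linspace(0.0, T, n, endpoint=False)
        f0 = 1.2345                               # true frequency (illustrative)
        y = np.cos(2 * np.pi * f0 * t)

        # FFT estimate: peak frequency is quantized to 1/T = 0.1 Hz
        freqs = np.fft.rfftfreq(n, d=T / n)
        f_fft = freqs[np.argmax(np.abs(np.fft.rfft(y)))]

        # explicit-integral estimate: rectangle rule on a fine frequency grid
        nu = np.arange(1.1, 1.4, 1e-4)
        spectrum = np.abs(np.exp(-2j * np.pi * nu[:, None] * t) @ y) * (T / n)
        f_ei = nu[np.argmax(spectrum)]
        print(f_fft, f_ei)   # f_ei lands much closer to f0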

  16. Current Status and Advances in Quantitative Proteomic Mass Spectrometry

    Valerie C. Wasinger; Zeng, Ming; Yau, Yunki

    2013-01-01

    The accurate quantitation of proteins and peptides in complex biological systems is one of the most challenging areas of proteomics. Mass spectrometry-based approaches have forged significant in-roads, allowing accurate and sensitive quantitation and the ability to multiplex vastly complex samples through the application of robust bioinformatic tools. These relative and absolute quantitative measures, using label-free approaches, tags, or stable isotope labelling, have their own strengths and limitations. ...

  17. Fast, accurate standardless XRF analysis with IQ+

    Full text: Due to both chemical and physical effects, the most accurate XRF data are derived from calibrations set up using in-type standards, necessitating some prior knowledge of the samples being analysed. Whilst this is often the case for routine samples, particularly in production control, for completely unknown samples the identification and availability of in-type standards can be problematic. Under these circumstances standardless analysis can offer a viable solution. Successful analysis of completely unknown samples requires a complete chemical overview of the specimen together with the flexibility of a fundamental parameters (FP) algorithm to handle wide-ranging compositions. Although FP algorithms are improving all the time, most still require set-up samples to define the spectrometer response to a particular element. Whilst such materials may be referred to as standards, the emphasis in this kind of analysis is that only a single calibration point is required per element and that the standard chosen does not have to be in-type. The high sensitivities of modern XRF spectrometers, together with recent developments in detector counting electronics that possess a large dynamic range and high-speed data processing capacity, bring significant advances to fast, standardless analysis. Illustrated with a tantalite-columbite heavy-mineral concentrate grading use-case, this paper will present the philosophy behind the semi-quantitative IQ+ software and the required hardware. This combination can give a rapid scan-based overview and quantification of the sample in less than two minutes, together with the ability to define channels for specific elements of interest where higher accuracy and lower levels of quantification are required. The accuracy, precision and limitations of standardless analysis will be assessed using certified reference materials of widely differing chemical and physical composition. Copyright (2002) Australian X-ray Analytical Association Inc

  18. 38 CFR 4.46 - Accurate measurement.

    2010-07-01

    38 CFR, Pensions, Bonuses, and Veterans' Relief; Rating Disabilities; The Musculoskeletal System; § 4.46 Accurate measurement: Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  19. Quantitative Analysis in Multimodality Molecular Imaging

    PET offers the possibility of truly quantitative (physiological) measurements of tracer concentration in vivo. However, there are several issues limiting both visual qualitative interpretation and quantitative analysis capabilities of reconstructed PET images that must be considered in order to fully realize this potential. The major challenges to quantitative PET can be categorized in 5 classes: (i) factors related to imaging system performance and data acquisition protocols (instrumentation and measurement factors), (ii) those related to the physics of photon interaction with biologic tissues (physical factors), (iii) image reconstruction (reconstruction factors), (iv) factors related to patient motion and other physiological issues (physiological factors), and (v) Methodological factors: issues related to difficulties in developing accurate tracer kinetic models, especially at the voxel level. This paper reflects the tremendous increase in interest in quantitative molecular imaging using PET as both clinical and research imaging modality in the past decade. It offers an overview of the entire range of quantitative PET imaging from basic principles to various steps required for obtaining quantitatively accurate data from dedicated standalone PET and combined PET/CT and PET/MR systems including data collection methods and algorithms used to correct for physical degrading factors as well as image processing and analysis techniques and their clinical and research applications. Impact of physical degrading factors including attenuation of photons and contribution from photons scattered in the patient and partial volume effect on the diagnostic quality and quantitative accuracy of PET data will be discussed. Considerable advances have been made and much worthwhile research focused on the development of quantitative imaging protocols incorporating accurate data correction techniques and sophisticated image reconstruction algorithms. The fundamental concepts of

  20. Accurate measurement of streamwise vortices using dual-plane PIV

    Waldman, Rye M.; Breuer, Kenneth S. [Brown University, School of Engineering, Providence, RI (United States)

    2012-11-15

    Low Reynolds number aerodynamic experiments with flapping animals (such as bats and small birds) are of particular interest due to their application to micro air vehicles which operate in a similar parameter space. Previous PIV wake measurements described the structures left by bats and birds and provided insight into the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions based on said measurements. The highly three-dimensional and unsteady nature of the flows associated with flapping flight are major challenges for accurate measurements. The challenge of animal flight measurements is finding small flow features in a large field of view at high speed with limited laser energy and camera resolution. Cross-stream measurement is further complicated by the predominately out-of-plane flow that requires thick laser sheets and short inter-frame times, which increase noise and measurement uncertainty. Choosing appropriate experimental parameters requires compromise between the spatial and temporal resolution and the dynamic range of the measurement. To explore these challenges, we do a case study on the wake of a fixed wing. The fixed model simplifies the experiment and allows direct measurements of the aerodynamic forces via load cell. We present a detailed analysis of the wake measurements, discuss the criteria for making accurate measurements, and present a solution for making quantitative aerodynamic load measurements behind free-flyers. (orig.)

  1. Quantitative Analysis of Radar Returns from Insects

    Riley, J. R.

    1979-01-01

    When the number of flying insects is low enough to permit their resolution as individual radar targets, quantitative estimates of their aerial density are developed. Accurate measurements of heading distribution, using a rotating polarization radar to enhance the wingbeat frequency method of identification, are presented.

  2. Christhin: Quantitative Analysis of Thin Layer Chromatography

    Barchiesi, Maximiliano; Renaudo, Carlos; Rossi, Pablo; Pramparo, María de Carmen; Nepote, Valeria; Grosso, Nelson Ruben; Gayol, María Fernanda

    2012-01-01

    Manual for Christhin 0.1.36. Christhin (Chromatography Riser Thin) is software developed for the quantitative analysis of data obtained from thin-layer chromatographic techniques (TLC). Once installed on your computer, the program is very easy to use and provides data quickly and accurately. This manual describes the program, and reading it should be enough to use it properly.

  3. Intracranial Calcifications and Hemorrhages: Characterization with Quantitative Susceptibility Mapping

    Chen, Weiwei; Zhu, Wenzhen; Kovanlikaya, IIhami; Kovanlikaya, Arzu; Liu, Tian; Wang, Shuai; Salustri, Carlo; Wang, Yi

    2014-01-01

    Quantitative susceptibility mapping demonstrates the negative susceptibility of calcification and the positive susceptibility of hemorrhage and is superior to phase imaging in the specific detection of intracranial calcifications and accurate detection of intracranial hemorrhages.

  4. Automated Selected Reaction Monitoring Software for Accurate Label-Free Protein Quantification

    Teleman, Johan; Karlsson, Christofer; Waldemarson, Sofia; Hansson, Karin; James, Peter; Malmström, Johan; Levander, Fredrik

    2012-01-01

    Selected reaction monitoring (SRM) is a mass spectrometry method with documented ability to quantify proteins accurately and reproducibly using labeled reference peptides. However, the use of labeled reference peptides becomes impractical if large numbers of peptides are targeted and when high flexibility is desired when selecting peptides. We have developed a label-free quantitative SRM workflow that relies on a new automated algorithm, Anubis, for accurate peak detection. Anubis efficiently...

  5. Laboratory Building for Accurate Determination of Plutonium

    2008-01-01

    The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel; it is also the key to chemical measurement transfer and the basis of the nuclear material balance. An

  6. Quantitative lithofacies palaeogeography

    Feng, Zeng-Zhao; Zheng, Xiu-Juan; Bao, Zhi-Dong; Jin, Zhen-Kui; Wu, Sheng-He; He, You-Bin; Peng, Yong-Min; Yang, Yu-Qing; Zhang, Jia-Qiang; Zhang, Yong-Sheng

    2014-01-01

    Quantitative lithofacies palaeogeography is an important discipline of palaeogeography. It is developed on the foundation of traditional lithofacies palaeogeography and palaeogeography, and its core is the quantitative lithofacies palaeogeographic map. Quantitative means that, in the palaeogeographic map, the division and identification of each palaeogeographic unit are supported by quantitative data and quantitative fundamental maps. Our lithofacies palaeogeographic maps are quantitative or mainly quantitative. A great number of quantitative lithofacies palaeogeographic maps have been published, and articles and monographs on quantitative lithofacies palaeogeography have appeared successively; thus quantitative lithofacies palaeogeography was formed and established. It is an important development in lithofacies palaeogeography. In composing quantitative lithofacies palaeogeographic maps, the key measure is the single-factor analysis and multifactor comprehensive mapping method, i.e., the methodology of quantitative lithofacies palaeogeography. In this paper, the authors use two case studies, one from the Early Ordovician of South China and the other from the Early Ordovician of Ordos, North China, to explain how to use this methodology to compose quantitative lithofacies palaeogeographic maps and to discuss the palaeogeographic units in these maps. Finally, three characteristics of quantitative lithofacies palaeogeographic maps, i.e., quantification, multiple orders and multiple types, are conclusively discussed.

  7. Quantitative investment analysis

    DeFusco, Richard

    2007-01-01

    In the "Second Edition" of "Quantitative Investment Analysis," financial experts Richard DeFusco, Dennis McLeavey, Jerald Pinto, and David Runkle outline the tools and techniques needed to understand and apply quantitative methods to today's investment process.

  8. Computationally efficient and quantitatively accurate multiscale simulation of solid-solution strengthening by ab initio calculation

    Ma, D.; Friák, Martin; von Pezold, J.; Raabe, D.; Neugebauer, J.

    2015-01-01

    Roč. 85, FEB (2015), s. 53-66. ISSN 1359-6454 Institutional support: RVO:68081723 Keywords: Solid-solution strengthening * DFT * Peierls–Nabarro model * Ab initio * Al alloys Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 4.465, year: 2014

  9. Invariant Image Watermarking Using Accurate Zernike Moments

    Ismail A. Ismail

    2010-01-01

    Problem statement: Digital image watermarking is the most popular method for image authentication, copyright protection and content description. Zernike moments are the most widely used moments in image processing and pattern recognition. The magnitudes of Zernike moments are rotation invariant, so they can be used directly as a watermark signal or be further modified to carry embedded data. Zernike moments computed in Cartesian coordinates are not accurate due to geometrical and numerical errors. Approach: In this study, we employed a robust image-watermarking algorithm using accurate Zernike moments. These moments are computed in polar coordinates, where both approximation and geometric errors are removed. Accurate Zernike moments are used in image watermarking and proved to be robust against different kinds of geometric attacks. The performance of the proposed algorithm is evaluated using standard images. Results: Experimental results show that accurate Zernike moments achieve a higher degree of robustness than approximated ones against rotation, scaling, flipping, shearing and affine transformation. Conclusion: By computing accurate Zernike moments, the embedded watermark bits can be extracted at a low error rate.
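
    For reference, the conventional Cartesian pixel-grid approximation of a Zernike moment, i.e. the computation whose geometric and numerical errors the authors remove by working in polar coordinates; a minimal sketch:

        import numpy as np
        from math import factorial

        def zernike_magnitude(img, n, m):
            """|Z_nm| of a square grayscale image mapped onto the unit disk,
            via the usual (inexact) Cartesian pixel-grid approximation."""
            N = img.shape[0]
            ys, xs = np.mgrid[0:N, 0:N]
            x = (2 * xs - N + 1) / (N - 1)         # pixel centres -> [-1, 1]
            y = (2 * ys - N + 1) / (N - 1)
            rho, theta = np.hypot(x, y), np.arctan2(y, x)
            inside = rho <= 1.0
            R = np.zeros_like(rho)                 # radial polynomial R_nm(rho)
            for s in range((n - abs(m)) // 2 + 1):
                c = ((-1) ** s * factorial(n - s)
                     / (factorial(s)
                        * factorial((n + abs(m)) // 2 - s)
                        * factorial((n - abs(m)) // 2 - s)))
                R += c * rho ** (n - 2 * s)
            V = R * np.exp(1j * m * theta)         # Zernike basis function
            dA = (2.0 / (N - 1)) ** 2              # pixel area element
            Z = (n + 1) / np.pi * np.sum(img[inside] * np.conj(V[inside])) * dA
            return abs(Z)                          # magnitude: rotation invariant

        img = np.random.default_rng(1).random((64, 64))
        print(zernike_magnitude(img, n=4, m=2))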

  10. Accurate strand-specific quantification of viral RNA.

    Nicole E Plaskon

    Full Text Available The presence of full-length complements of viral genomic RNA is a hallmark of RNA virus replication within an infected cell. As such, methods for detecting and measuring specific strands of viral RNA in infected cells and tissues are important in the study of RNA viruses. Strand-specific quantitative real-time PCR (ssqPCR assays are increasingly being used for this purpose, but the accuracy of these assays depends on the assumption that the amount of cDNA measured during the quantitative PCR (qPCR step accurately reflects amounts of a specific viral RNA strand present in the RT reaction. To specifically test this assumption, we developed multiple ssqPCR assays for the positive-strand RNA virus o'nyong-nyong (ONNV that were based upon the most prevalent ssqPCR assay design types in the literature. We then compared various parameters of the ONNV-specific assays. We found that an assay employing standard unmodified virus-specific primers failed to discern the difference between cDNAs generated from virus specific primers and those generated through false priming. Further, we were unable to accurately measure levels of ONNV (- strand RNA with this assay when higher levels of cDNA generated from the (+ strand were present. Taken together, these results suggest that assays of this type do not accurately quantify levels of the anti-genomic strand present during RNA virus infectious cycles. However, an assay permitting the use of a tag-specific primer was able to distinguish cDNAs transcribed from ONNV (- strand RNA from other cDNAs present, thus allowing accurate quantification of the anti-genomic strand. We also report the sensitivities of two different detection strategies and chemistries, SYBR(R Green and DNA hydrolysis probes, used with our tagged ONNV-specific ssqPCR assays. Finally, we describe development, design and validation of ssqPCR assays for chikungunya virus (CHIKV, the recent cause of large outbreaks of disease in the Indian Ocean

  11. Using an Educational Electronic Documentation System to Help Nursing Students Accurately Identify Nursing Diagnoses

    Pobocik, Tamara J.

    2013-01-01

    The use of technology and electronic medical records in healthcare has increased exponentially. This quantitative research project used a pretest/posttest design and examined how an educational electronic documentation system helped nursing students accurately identify the "related to" statement of the nursing diagnosis for the patient in the case…

  12. Accurate object tracking system by integrating texture and depth cues

    Chen, Ju-Chin; Lin, Yu-Hang

    2016-03-01

    A robust object tracking system that is invariant to object appearance variations and background clutter is proposed. Multiple instance learning with a boosting algorithm is applied to select discriminant texture information between the object and background data. Additionally, depth information, which is important for distinguishing the object from a complicated background, is integrated. We propose two depth-based models that can complement texture information to cope with both appearance variations and background clutter. Moreover, to reduce the risk of drift, which increases for textureless depth templates, an update mechanism is proposed that selects more precise tracking results and avoids incorrect model updates. In the experiments, the robustness of the proposed system is evaluated and quantitative results are provided for performance analysis. Experimental results show that the proposed system provides the best success rate and more accurate tracking results than other well-known algorithms.

  13. Accurate atomic data for industrial plasma applications

    Griesmann, U.; Bridges, J.M.; Roberts, J.R.; Wiese, W.L.; Fuhr, J.R. [National Inst. of Standards and Technology, Gaithersburg, MD (United States)

    1997-12-31

    Reliable branching fraction, transition probability and transition wavelength data for radiative dipole transitions of atoms and ions in plasma are important in many industrial applications. Optical plasma diagnostics and modeling of the radiation transport in electrical discharge plasmas (e.g. in electrical lighting) depend on accurate basic atomic data. NIST has an ongoing experimental research program to provide accurate atomic data for radiative transitions. The new NIST UV-vis-IR high resolution Fourier transform spectrometer has become an excellent tool for accurate and efficient measurements of numerous transition wavelengths and branching fractions in a wide wavelength range. Recently, the authors have also begun to employ photon counting techniques for very accurate measurements of branching fractions of weaker spectral lines with the intent to improve the overall accuracy for experimental branching fractions to better than 5%. They have now completed their studies of transition probabilities of Ne I and Ne II. The results agree well with recent calculations and for the first time provide reliable transition probabilities for many weak intercombination lines.

  14. More accurate picture of human body organs

    Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they allow a more accurate image of human body organs to be obtained. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been in clinical use for only three years. It does not burden the organism with ionizing radiation. (Ha)

  15. Quantitative biometric phenotype analysis in mouse lenses

    Reilly, Matthew A.; Andley, Usha P.

    2010-01-01

    The disrupted morphology of lenses in mouse models for cataracts precludes accurate in vitro assessment of lens growth by weight. To overcome this limitation, we developed morphometric methods to assess defects in eye lens growth and shape in mice expressing the αA-crystallin R49C (αA-R49C) mutation. Our morphometric methods determine quantitative shape and dry weight of the whole lens from histological sections of the lens. This method was then used to quantitatively compare the biometric gr...

  16. Accurate 3D quantification of the bronchial parameters in MDCT

    Saragaglia, A.; Fetita, C.; Preteux, F.; Brillet, P. Y.; Grenier, P. A.

    2005-08-01

    The assessment of bronchial reactivity and wall remodeling in asthma plays a crucial role in better understanding such a disease and evaluating therapeutic responses. Today, multi-detector computed tomography (MDCT) makes it possible to perform an accurate estimation of bronchial parameters (lumen and wall areas) by allowing a quantitative analysis in a cross-section plane orthogonal to the bronchus axis. This paper provides the tools for such an analysis by developing a 3D investigation method which relies on 3D reconstruction of the bronchial lumen and central axis computation. Cross-section images at bronchial locations interactively selected along the central axis are generated at appropriate spatial resolution. An automated approach is then developed for accurately segmenting the inner and outer bronchial contours on the cross-section images. It combines mathematical morphology operators, such as "connection cost", and energy-controlled propagation in order to overcome the difficulties raised by vessel adjacencies and wall irregularities. The segmentation accuracy was validated with respect to a 3D mathematically-modeled phantom of a bronchus-vessel pair which mimics the characteristics of real data in terms of gray-level distribution, caliber and orientation. When applying the developed quantification approach to such a model with calibers ranging from 3 to 10 mm in diameter, the lumen area relative errors varied from 3.7% to 0.15%, while the bronchus area was estimated with a relative error of less than 5.1%.

  17. Feedback about more accurate versus less accurate trials: differential effects on self-confidence and activation.

    Badami, Rokhsareh; VaezMousavi, Mohammad; Wulf, Gabriele; Namazizadeh, Mahdi

    2012-06-01

    One purpose of the present study was to examine whether self-confidence or anxiety would be differentially affected by feedback from more accurate rather than less accurate trials. The second purpose was to determine whether arousal variations (activation) would predict performance. On day 1, participants performed a golf putting task under one of two conditions: one group received feedback on the most accurate trials, whereas another group received feedback on the least accurate trials. On day 2, participants completed an anxiety questionnaire and performed a retention test. Skin conductance level, as a measure of arousal, was determined. The results indicated that feedback about more accurate trials resulted in more effective learning as well as increased self-confidence. Also, activation was a predictor of performance. PMID:22808705

  18. How Accurate is inv(A)*b?

    Druinsky, Alex

    2012-01-01

    Several widely-used textbooks lead the reader to believe that solving a linear system of equations Ax = b by multiplying the vector b by a computed inverse inv(A) is inaccurate. Virtually all other textbooks on numerical analysis and numerical linear algebra advise against using computed inverses without stating whether this is accurate or not. In fact, under reasonable assumptions on how the inverse is computed, x = inv(A)*b is as accurate as the solution computed by the best backward-stable solvers. This fact is not new, but obviously obscure. We review the literature on the accuracy of this computation and present a self-contained numerical analysis of it.
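
    The claim is easy to test numerically; a minimal sketch comparing the normwise relative residual (backward error) of a backward-stable solve against explicit inversion on a random system:

        import numpy as np

        rng = np.random.default_rng(0)
        n = 500
        A = rng.standard_normal((n, n))
        b = rng.standard_normal(n)

        x_solve = np.linalg.solve(A, b)       # backward-stable LU solve
        x_inv = np.linalg.inv(A) @ b          # explicit inverse, then multiply

        for name, x in [("solve", x_solve), ("inv(A)*b", x_inv)]:
            res = np.linalg.norm(A @ x - b) / (np.linalg.norm(A) * np.linalg.norm(x))
            print(f"{name:9s} relative residual = {res:.2e}")

    For reasonably conditioned A, both residuals typically come out near machine precision, consistent with the thesis above.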

  19. Accurate guitar tuning by cochlear implant musicians.

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task. PMID:24651081

  1. Quantitative analysis chemistry

    This book is about quantitative analytical chemistry. It is divided into ten chapters, which deal with the basic concepts of analytical chemistry and SI units, chemical equilibrium, basic preparations for quantitative analysis, an introduction to volumetric analysis, acid-base titration with an outline and worked examples, chelate titration, oxidation-reduction titration with an introduction, titration curves and diazotization titration, precipitation titration, electrometric titration, and quantitative analysis.

  2. Quantitative dispersion microscopy

    Fu, Dan; Choi, Wonshik; Sung, Yongjin; Yaqoob, Zahid; Dasari, Ramachandra R.; Feld, Michael

    2010-01-01

    Refractive index dispersion is an intrinsic optical property and a useful source of contrast in biological imaging studies. In this report, we present the first dispersion phase imaging of living eukaryotic cells. We have developed quantitative dispersion microscopy based on the principle of quantitative phase microscopy. The dual-wavelength quantitative phase microscope makes phase measurements at 310 nm and 400 nm wavelengths to quantify dispersion (refractive index increment ratio) of live...

  3. Accurate Finite Difference Methods for Option Pricing

    Persson, Jonas

    2006-01-01

    Stock options are priced numerically using space- and time-adaptive finite difference methods. European options on one and several underlying assets are considered. These are priced with adaptive numerical algorithms including a second order method and a more accurate method. For American options we use the adaptive technique to price options on one stock with and without stochastic volatility. In all these methods emphasis is put on the control of errors to fulfill predefined tolerance level...

  4. Accurate, reproducible measurement of blood pressure.

    Campbell, N. R.; Chockalingam, A; Fodor, J. G.; McKay, D. W.

    1990-01-01

    The diagnosis of mild hypertension and the treatment of hypertension require accurate measurement of blood pressure. Blood pressure readings are altered by various factors that influence the patient, the techniques used and the accuracy of the sphygmomanometer. The variability of readings can be reduced if informed patients prepare in advance by emptying their bladder and bowel, by avoiding over-the-counter vasoactive drugs the day of measurement and by avoiding exposure to cold, caffeine con...

  5. Accurate variational forms for multiskyrmion configurations

    Jackson, A.D.; Weiss, C.; Wirzba, A.; Lande, A.

    1989-04-17

    Simple variational forms are suggested for the fields of a single skyrmion on a hypersphere, $S_3(L)$, and of a face-centered cubic array of skyrmions in flat space, $R_3$. The resulting energies are accurate at the level of 0.2%. These approximate field configurations provide a useful alternative to brute-force solutions of the corresponding Euler equations.

  6. Efficient Accurate Context-Sensitive Anomaly Detection

    2007-01-01

    For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed, which is based on static analysis of binary executables. The CPDA model incorporates the optimized call stack walk and code instrumentation techniques to gain complete context information. Thereby the proposed method can detect more attacks, while retaining good performance.

  7. Towards accurate modeling of moving contact lines

    Holmgren, Hanna

    2015-01-01

    The present thesis treats the numerical simulation of immiscible incompressible two-phase flows with moving contact lines. The conventional Navier–Stokes equations combined with a no-slip boundary condition leads to a non-integrable stress singularity at the contact line. The singularity in the model can be avoided by allowing the contact line to slip. Implementing slip conditions in an accurate way is not straight-forward and different regularization techniques exist where ad-hoc procedures ...

  8. Accurate phase-shift velocimetry in rock

    Shukla, Matsyendra Nath; Vallatos, Antoine; Phoenix, Vernon R.; Holmes, William M.

    2016-06-01

    Spatially resolved Pulsed Field Gradient (PFG) velocimetry techniques can provide precious information concerning flow through opaque systems, including rocks. This velocimetry data is used to enhance flow models in a wide range of systems, from oil behaviour in reservoir rocks to contaminant transport in aquifers. Phase-shift velocimetry is the fastest way to produce velocity maps but critical issues have been reported when studying flow through rocks and porous media, leading to inaccurate results. Combining PFG measurements for flow through Bentheimer sandstone with simulations, we demonstrate that asymmetries in the molecular displacement distributions within each voxel are the main source of phase-shift velocimetry errors. We show that when flow-related average molecular displacements are negligible compared to self-diffusion ones, symmetric displacement distributions can be obtained while phase measurement noise is minimised. We elaborate a complete method for the production of accurate phase-shift velocimetry maps in rocks and low porosity media and demonstrate its validity for a range of flow rates. This development of accurate phase-shift velocimetry now enables more rapid and accurate velocity analysis, potentially helping to inform both industrial applications and theoretical models.

  9. Illumination coherence engineering and quantitative phase imaging

    Rodrigo Martín-Romo, José Augusto; Alieva, Tatiana Krasheninnikova

    2014-01-01

    Partially coherent illumination provides significant advantages such as speckle-free imaging and enhanced optical sectioning in optical microscopy. Knowledge of the spatial and temporal coherence is crucial to obtain accurate quantitative phase imaging (QPI) of specimens such as live cells, micrometer-sized particles, etc. In this Letter, we propose a novel technique for illumination coherence engineering. It is based on a DMD projector providing fast switching of both multi-wavelength and ...

  10. Quantitative atomic spectroscopy for primary thermometry

    Truong, Gar-Wing; May, Eric F.; Stace, Thomas M.; Luiten, Andre N.

    2010-01-01

    Quantitative spectroscopy has been used to measure accurately the Doppler-broadening of atomic transitions in $^{85}$Rb vapor. By using a conventional platinum resistance thermometer and the Doppler thermometry technique, we were able to determine $k_B$ with a relative uncertainty of $4.1\\times 10^{-4}$, and with a deviation of $2.7\\times 10^{-4}$ from the expected value. Our experiment, using an effusive vapour, departs significantly from other Doppler-broadened thermometry (DBT) techniques,...
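
    The relation that underlies such Doppler-broadening thermometry (standard gas-phase physics, not a formula quoted from this paper) connects the Gaussian full width at half maximum of the line to $k_B$:

        \Delta\nu_{\mathrm{FWHM}} = \frac{\nu_0}{c}\,\sqrt{\frac{8\,k_B\,T\,\ln 2}{m}}
        \quad\Longrightarrow\quad
        k_B = \frac{m\,c^{2}}{8\,T\,\ln 2}\left(\frac{\Delta\nu_{\mathrm{FWHM}}}{\nu_0}\right)^{2}

    Here $m$ is the mass of the $^{85}$Rb atom and $T$ the vapour temperature supplied by the platinum resistance thermometer; a fractional uncertainty $u$ in the measured width therefore propagates to $2u$ in $k_B$.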

  11. High Frequency QRS ECG Accurately Detects Cardiomyopathy

    Schlegel, Todd T.; Arenare, Brian; Poulin, Gregory; Moser, Daniel R.; Delgado, Reynolds

    2005-01-01

    High frequency (HF, 150-250 Hz) analysis over the entire QRS interval of the ECG is more sensitive than conventional ECG for detecting myocardial ischemia. However, the accuracy of HF QRS ECG for detecting cardiomyopathy is unknown. We obtained simultaneous resting conventional and HF QRS 12-lead ECGs in 66 patients with cardiomyopathy (EF = 23.2 ± 6.1%, mean ± SD) and in 66 age- and gender-matched healthy controls using PC-based ECG software recently developed at NASA. The single most accurate ECG parameter for detecting cardiomyopathy was an HF QRS morphological score that takes into consideration the total number and severity of reduced amplitude zones (RAZs) present plus the clustering of RAZs together in contiguous leads. This RAZ score had an area under the receiver operator curve (ROC) of 0.91, and was 88% sensitive, 82% specific and 85% accurate for identifying cardiomyopathy at an optimum score cut-off of 140 points. Although conventional ECG parameters such as the QRS and QTc intervals were also significantly longer in patients than controls (P < 0.001, BBBs excluded), these conventional parameters were less accurate (area under the ROC = 0.77 and 0.77, respectively) than HF QRS morphological parameters for identifying underlying cardiomyopathy. The total amplitude of the HF QRS complexes, as measured by summed root mean square voltages (RMSVs), also differed between patients and controls (33.8 ± 11.5 vs. 41.5 ± 13.6 mV, respectively, P < 0.003), but this parameter was even less accurate in distinguishing the two groups (area under ROC = 0.67) than the HF QRS morphologic and conventional ECG parameters. Diagnostic accuracy was optimal (86%) when the RAZ score from the HF QRS ECG and the QTc interval from the conventional ECG were used simultaneously with cut-offs of ≥ 40 points and ≥ 445 ms, respectively. In conclusion 12-lead HF QRS ECG employing

  12. Niche Genetic Algorithm with Accurate Optimization Performance

    LIU Jian-hua; YAN De-kun

    2005-01-01

    Based on a crowding mechanism, a novel niche genetic algorithm was proposed which records the evolutionary direction dynamically during evolution. After evolution, the solutions' precision can be greatly improved by means of a local search along the recorded direction. Simulation shows that this algorithm can not only keep population diversity but also find accurate solutions. Although this method takes more time than the standard GA, it is worthwhile in cases that demand high solution precision.
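
    A minimal sketch of the idea, under assumed details (the record gives no pseudocode): a crowding-based GA in which each successful replacement records the improving move, followed by a shrinking-step line search along that recorded direction. The test function and all parameters are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # multimodal test function to maximize: several peaks of differing height
    return np.sin(5 * x) * (1.0 - np.tanh(x * x))

def niche_ga(pop_size=40, gens=300, step=0.1):
    pop = rng.uniform(-2.0, 2.0, pop_size)
    direction = np.zeros(pop_size)        # last improving move of each slot
    for _ in range(gens):
        children = pop + rng.normal(0.0, step, pop_size)
        for c in children:
            i = int(np.argmin(np.abs(pop - c)))   # crowding: nearest individual
            if f(c) > f(pop[i]):
                direction[i] = c - pop[i]         # record evolutionary direction
                pop[i] = c
    # post-evolution refinement: line search along the recorded direction,
    # halving and reversing the step whenever it stops improving
    for i in range(pop_size):
        d = direction[i] if direction[i] != 0.0 else step
        for _ in range(30):
            if f(pop[i] + d) > f(pop[i]):
                pop[i] += d
            else:
                d *= -0.5
    return np.sort(pop)

print(np.round(niche_ga(), 4))   # clusters around multiple local maxima
```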

  13. How accurately can we calculate thermal systems?

    The objective was to determine how accurately simple reactor lattice integral parameters can be determined, considering user input, differences in the methods, source data, and the data processing procedures and assumptions. Three simple square lattice test cases with different fuel-to-moderator ratios were defined. The effect of the thermal scattering models was shown to be important and much bigger than the spread in the results. Nevertheless, differences of up to 0.4% in the K-eff calculated by continuous energy Monte Carlo codes were observed even when the same source data were used. (author)

  14. Accurate diagnosis is essential for amebiasis

    2004-01-01

    Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is the most widely distributed parasite in the world. In particular, Entamoeba histolytica infection in the developing countries is a significant health problem in amebiasis-endemic areas, with a marked impact on infant mortality[1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, improvements in the quality of parasitological methods and the widespread use of accurate techniques have improved our knowledge about the disease.

  15. Investigations on Accurate Analysis of Microstrip Reflectarrays

    Zhou, Min; Sørensen, S. B.; Kim, Oleksiy S.;

    2011-01-01

    An investigation on accurate analysis of microstrip reflectarrays is presented. Sources of error in reflectarray analysis are examined and solutions to these issues are proposed. The focus is on two sources of error, namely the determination of the equivalent currents to calculate the radiation...... pattern, and the inaccurate mutual coupling between array elements due to the lack of periodicity. To serve as reference, two offset reflectarray antennas have been designed, manufactured and measured at the DTU-ESA Spherical Near-Field Antenna Test Facility. Comparisons of simulated and measured data are...

  16. Accurate radiative transfer calculations for layered media.

    Selden, Adrian C

    2016-07-01

    Simple yet accurate results for radiative transfer in layered media with discontinuous refractive index are obtained by the method of K-integrals. These are certain weighted integrals applied to the angular intensity distribution at the refracting boundaries. The radiative intensity is expressed as the sum of the asymptotic angular intensity distribution valid in the depth of the scattering medium and a transient term valid near the boundary. Integrated boundary equations are obtained, yielding simple linear equations for the intensity coefficients, enabling the angular emission intensity and the diffuse reflectance (albedo) and transmittance of the scattering layer to be calculated without solving the radiative transfer equation directly. Examples are given of half-space, slab, interface, and double-layer calculations, and extensions to multilayer systems are indicated. The K-integral method is orders of magnitude more accurate than diffusion theory and can be applied to layered scattering media with a wide range of scattering albedos, with potential applications to biomedical and ocean optics. PMID:27409700

  17. Accurate basis set truncation for wavefunction embedding

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  18. Accurate pose estimation for forensic identification

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far-field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  19. Accurate determination of characteristic relative permeability curves

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.

  20. Accurate shear measurement with faint sources

    Zhang, Jun; Foucaud, Sebastien [Center for Astronomy and Astrophysics, Department of Physics and Astronomy, Shanghai Jiao Tong University, 955 Jianchuan road, Shanghai, 200240 (China); Luo, Wentao, E-mail: betajzhang@sjtu.edu.cn, E-mail: walt@shao.ac.cn, E-mail: foucaud@sjtu.edu.cn [Key Laboratory for Research in Galaxies and Cosmology, Shanghai Astronomical Observatory, Nandan Road 80, Shanghai, 200030 (China)

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys.

  1. Accurate, fully-automated NMR spectral profiling for metabolomics.

    Siamak Ravanbakhsh

    Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures and realistic computer-generated spectra involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~ 90% correct identification and ~ 10% quantification error), in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of

  2. Accurate Telescope Mount Positioning with MEMS Accelerometers

    Mészáros, László; Pál, András; Csépány, Gergely

    2014-01-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate and stateless positioning of telescope mounts. This provides a completely independent method from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the sub-arcminute range, which is well below the field-of-view of conventional imaging telescope systems. Here we present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented in order to be a part of a telescope control system.
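
    As a toy illustration of stateless attitude sensing with a MEMS accelerometer, the following recovers an elevation angle from a static gravity reading. The axis convention and numbers are assumptions; the calibration and filtering needed for sub-arcminute accuracy are far more involved than this sketch.

```python
import math

def elevation_from_accel(ax, ay, az):
    """Elevation angle of a tube-mounted MEMS accelerometer.

    Assumes a static mount so the sensor reads only gravity, with the
    sensor x-axis along the optical tube (an illustrative convention,
    not necessarily the paper's). Returns degrees above the horizon.
    """
    g = math.sqrt(ax * ax + ay * ay + az * az)   # should be ~9.81 m/s^2
    return math.degrees(math.asin(ax / g))

# Example: gravity projects 4.905 m/s^2 onto the tube axis -> 30 degrees
print(elevation_from_accel(4.905, 0.0, 8.496))
```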

  3. Accurate estimation of indoor travel times

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan;

    2014-01-01

    We present the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood---both for routes traveled as well as for sub-routes thereof. InTraTime...... allows one to specify temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...... a minimal-effort setup and self-improving operations due to unsupervised learning---as it is able to adapt implicitly to factors influencing indoor travel times such as elevators, rotating doors or changes in building layout. We evaluate and compare the proposed InTraTime method to indoor adaptions...
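
    A minimal sketch in the spirit of the method described above (not the authors' implementation): observed traversals are bucketed by route and a time-of-day query parameter, with a fallback to the route-wide mean when a bucket is empty.

```python
from collections import defaultdict
from statistics import mean

class TravelTimeModel:
    """Toy travel-time learner: bucket observations by (route, hour)."""
    def __init__(self):
        self.obs = defaultdict(list)          # (route, hour) -> durations [s]
        self.route_obs = defaultdict(list)    # route -> all durations [s]

    def record(self, route, hour, duration_s):
        self.obs[(route, hour)].append(duration_s)
        self.route_obs[route].append(duration_s)

    def estimate(self, route, hour):
        bucket = self.obs.get((route, hour))
        if bucket:
            return mean(bucket)               # time-of-day-specific estimate
        return mean(self.route_obs[route])    # fallback to route-wide mean

m = TravelTimeModel()
m.record("lobby->lab", 9, 95)
m.record("lobby->lab", 9, 110)
m.record("lobby->lab", 14, 70)
print(m.estimate("lobby->lab", 9))    # 102.5
print(m.estimate("lobby->lab", 11))   # ~91.7, route-wide fallback
```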

  4. Accurate sky background modelling for ESO facilities

    Full text: Ground-based measurements such as high resolution spectroscopy are heavily influenced by several physical processes. Among others, line absorption/emission, airglow from OH molecules, and scattering of photons within the earth's atmosphere make observations, in particular from facilities like the future European Extremely Large Telescope, a challenge. Additionally, emission from unresolved extrasolar objects, the zodiacal light, the moon and even thermal emission from the telescope and the instrument contribute significantly to the broadband background over a wide wavelength range. In our talk we review these influences and give an overview of how they can be accurately modeled to increase the overall precision of spectroscopic and imaging measurements. (author)

  5. Accurate valence band width of diamond

    An accurate width is determined for the valence band of diamond by imaging photoelectron momentum distributions for a variety of initial- and final-state energies. The experimental result of 23.0±0.2 eV agrees well with first-principles quasiparticle calculations (23.0 and 22.88 eV) and significantly exceeds the local-density-functional width, 21.5±0.2 eV. This difference quantifies effects of creating an excited hole state (with associated many-body effects) in a band measurement vs studying ground-state properties treated by local-density-functional calculations. copyright 1997 The American Physical Society

  6. Accurate Weather Forecasting for Radio Astronomy

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc., and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 Nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.

  7. On Quantitative Rorschach Scales.

    Haggard, Ernest A.

    1978-01-01

    Two types of quantitative Rorschach scales are discussed: first, those based on the response categories of content, location, and the determinants, and second, global scales based on the subject's responses to all ten stimulus cards. (Author/JKS)

  8. Accurate FRET Measurements within Single Diffusing Biomolecules Using Alternating-Laser Excitation

    Lee, Nam Ki; Kapanidis, Achillefs N.; Wang, You; Michalet, Xavier; Mukhopadhyay, Jayanta; Ebright, Richard H.; Weiss, Shimon

    2005-01-01

    Fluorescence resonance energy transfer (FRET) between a donor (D) and an acceptor (A) at the single-molecule level currently provides qualitative information about distance, and quantitative information about kinetics of distance changes. Here, we used the sorting ability of confocal microscopy equipped with alternating-laser excitation (ALEX) to measure accurate FRET efficiencies and distances from single molecules, using corrections that account for cross-talk terms that contaminate the FRE...

  9. Quantitative phase spectroscopy

    Rinehart, Matthew; Zhu, Yizheng; Wax, Adam

    2012-01-01

    Quantitative phase spectroscopy is presented as a novel method of measuring the wavelength-dependent refractive index of microscopic volumes. Light from a broadband source is filtered to an ~5 nm bandwidth and rapidly tuned across the visible spectrum in 1 nm increments by an acousto-optic tunable filter (AOTF). Quantitative phase images of semitransparent samples are recovered at each wavelength using off-axis interferometry and are processed to recover relative and absolute dispersion measu...

  10. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case: travelers prefer the best-condition route given accurate information, and delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, causing a decrease in capacity, an increase in oscillations, and deviation of the system from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR: when the difference between the two routes is less than BR, the routes are chosen with equal probability. Bounded rationality is helpful to improve efficiency in terms of capacity, oscillation and the gap from the system equilibrium.
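
    The boundedly rational choice rule itself is simple to state in code; the sketch below is a toy rendering of it and does not reproduce the paper's full traffic simulation with feedback.

```python
import random

def choose_route(t1, t2, br):
    """Boundedly rational route choice: if the advertised travel-time
    difference is below the threshold br, pick either route with equal
    probability; otherwise take the route reported as faster."""
    if abs(t1 - t2) < br:
        return random.choice((0, 1))
    return 0 if t1 < t2 else 1

# 1000 travelers routing on (possibly delayed) feedback information
reported = (30.0, 31.0)            # reported travel times per route
counts = [0, 0]
for _ in range(1000):
    counts[choose_route(*reported, br=2.0)] += 1
print(counts)                      # roughly 50/50, since |30 - 31| < br
```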

  11. Application of quantitative proteomics expression analysis using stable isotope labeling

    Quantitative protein expression profiling is a crucial part of proteomics and requires techniques that can efficiently provide accurate, high-throughput and reproducible differential expression values for proteins in two or more biological samples. At present, stable isotope labeling is probably one of the most accurate ways to relatively quantify protein expression levels, and it may be directly combined with LC-MS/MS approaches. In summary, this technique has clear advantages in quantitative proteomics. Its applications and the latest progress in the technique are discussed. (authors)

  12. Rapid quantitative phase imaging for partially coherent light microscopy

    Rodrigo Martín-Romo, José Augusto; Alieva, Tatiana Krasheninnikova

    2014-01-01

    Partially coherent light provides promising advantages for imaging applications. In contrast to its completely coherent counterpart, it prevents image degradation due to speckle noise and decreases cross-talk among the imaged objects. These facts make partially coherent illumination attractive for accurate quantitative imaging in microscopy. In this work, we present a non-interferometric technique and system for quantitative phase imaging with simultaneous determination of the spatial coh...

  13. Mass Spectrometry-Based Label-Free Quantitative Proteomics

    Chun-Ming Huang; Smith, Jeffrey W.; Wenhong Zhu

    2009-01-01

    In order to study the differential protein expression in complex biological samples, strategies for rapid, highly reproducible and accurate quantification are necessary. Isotope labeling and fluorescent labeling techniques have been widely used in quantitative proteomics research. However, researchers are increasingly turning to label-free shotgun proteomics techniques for faster, cleaner, and simpler results. Mass spectrometry-based label-free quantitative proteomics falls into two general c...

  14. The place of highly accurate methods by RNAA in metrology

    With the introduction of physical metrological concepts to chemical analysis, which require that the result should be accompanied by an uncertainty statement written down in terms of SI units, several researchers started to consider ID-MS as the only method fulfilling this requirement. However, recent publications revealed that in certain cases even expert laboratories using ID-MS and analyzing the same material produced results whose uncertainty statements did not overlap, which theoretically should not have taken place. This shows that no monopoly is good in science and it would be desirable to widen the set of methods acknowledged as primary in inorganic trace analysis. Moreover, ID-MS cannot be used for monoisotopic elements. The need for searching for other methods of similar metrological quality to ID-MS seems obvious. In this paper, our long-time experience in devising highly accurate ('definitive') methods by RNAA for the determination of selected trace elements in biological materials is reviewed. The general idea of definitive methods, based on the combination of neutron activation with the highly selective and quantitative isolation of the indicator radionuclide by column chromatography followed by gamma spectrometric measurement, is recalled and illustrated by examples of the performance of such methods when determining Cd, Co, Mo, etc. It is demonstrated that such methods are able to provide very reliable results with very low levels of uncertainty traceable to SI units

  15. Accurate quantification of cells recovered by bronchoalveolar lavage.

    Saltini, C; Hance, A J; Ferrans, V J; Basset, F; Bitterman, P B; Crystal, R G

    1984-10-01

    Quantification of the differential cell count and total number of cells recovered from the lower respiratory tract by bronchoalveolar lavage is a valuable technique for evaluating the alveolitis of patients with inflammatory disorders of the lower respiratory tract. The most commonly used technique for the evaluation of cells recovered by lavage has been to concentrate cells by centrifugation and then to determine total cell number using a hemocytometer and the differential cell count from a Wright-Giemsa-stained cytocentrifuge preparation. However, we have noted that the percentage of small cells present in the original cell suspension recovered by lavage is greater than the percentage of lymphocytes identified on cytocentrifuge preparations. Therefore, we developed procedures for determining differential cell counts on lavage cells collected on Millipore filters and stained with hematoxylin-eosin (filter preparations) and compared the results of differential cell counts performed on filter preparations with those obtained using cytocentrifuge preparations. When cells recovered by lavage were collected on filter preparations, accurate differential cell counts were obtained, as confirmed by performing differential cell counts on cell mixtures of known composition, and by comparing differential cell counts obtained using filter preparations stained with hematoxylin-eosin with those obtained using filter preparations stained with a peroxidase cytochemical stain. The morphology of cells displayed on filter preparations was excellent, and interobserver variability in quantitating cell types recovered by lavage was less than 3%. (ABSTRACT TRUNCATED AT 250 WORDS) PMID:6385789

  16. Accurate measurement of streamwise vortices in low speed aerodynamic flows

    Waldman, Rye M.; Kudo, Jun; Breuer, Kenneth S.

    2010-11-01

    Low Reynolds number experiments with flapping animals (such as bats and small birds) are of current interest in understanding biological flight mechanics, and due to their application to Micro Air Vehicles (MAVs) which operate in a similar parameter space. Previous PIV wake measurements have described the structures left by bats and birds, and provided insight to the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions due to significant experimental challenges associated with the highly three-dimensional and unsteady nature of the flows, and the low wake velocities associated with lifting bodies that only weigh a few grams. This requires the high-speed resolution of small flow features in a large field of view using limited laser energy and finite camera resolution. Cross-stream measurements are further complicated by the high out-of-plane flow which requires thick laser sheets and short interframe times. To quantify and address these challenges we present data from a model study on the wake behind a fixed wing at conditions comparable to those found in biological flight. We present a detailed analysis of the PIV wake measurements, discuss the criteria necessary for accurate measurements, and present a new dual-plane PIV configuration to resolve these issues.

  17. Accurate Measurement of the Relative Abundance of Different DNA Species in Complex DNA Mixtures

    Jeong, Sangkyun; Yu, Hyunjoo; Pfeifer, Karl

    2012-01-01

    A molecular tool that can compare the abundances of different DNA sequences is necessary for comparing intergenic or interspecific gene expression. We devised and verified such a tool using a quantitative competitive polymerase chain reaction approach. For this approach, we adapted a competitor array, an artificially made plasmid DNA in which all the competitor templates for the target DNAs are arranged with a defined ratio, and melting analysis for allele quantitation for accurate quantitation of the fractional ratios of competitively amplified DNAs. Assays on two sets of DNA mixtures with explicitly known compositional structures of the test sequences were performed. The resultant average relative errors of 0.059 and 0.021 emphasize the highly accurate nature of this method. Furthermore, the method's capability of obtaining biological data is demonstrated by the fact that it can illustrate the tissue-specific quantitative expression signatures of the three housekeeping genes G6pdx, Ubc, and Rps27 by using the forms of the relative abundances of their transcripts, and the differential preferences of Igf2 enhancers for each of the multiple Igf2 promoters for the transcription. PMID:22334570

  18. Intravital spectral imaging as a tool for accurate measurement of vascularization in mice

    Tsatsanis Christos

    2010-10-01

    Background: Quantitative determination of the development of new blood vessels is crucial for our understanding of the progression of several diseases, including cancer. However, in most cases a high-throughput technique that is simple, accurate, user-independent and cost-effective for small animal imaging is not available. Methods: In this work we present a simple approach based on spectral imaging to increase the contrast between vessels and surrounding tissue, enabling accurate determination of the blood vessel area. This approach is put to the test with a 4T1 breast cancer murine in vivo model and validated with histological and microvessel density analysis. Results: We found that one can accurately measure the vascularization area by using excitation/emission filter pairs which enhance the surrounding tissue's autofluorescence, significantly increasing the contrast between surrounding tissue and blood vessels. Additionally, we found excellent correlation between this technique and histological and microvessel density analysis. Conclusions: Making use of spectral imaging techniques we have shown that it is possible to accurately determine blood vessel volume intravitally. We believe that due to the low cost, accuracy, user-independence and simplicity of this technique, it will be of great value in those cases where in vivo quantitative information is necessary.

  19. Accurate simulation of optical properties in dyes.

    Jacquemin, Denis; Perpète, Eric A; Ciofini, Ilaria; Adamo, Carlo

    2009-02-17

    Since Antiquity, humans have produced and commercialized dyes. To this day, extraction of natural dyes often requires lengthy and costly procedures. In the 19th century, global markets and new industrial products drove a significant effort to synthesize artificial dyes, characterized by low production costs, huge quantities, and new optical properties (colors). Dyes that encompass classes of molecules absorbing in the UV-visible part of the electromagnetic spectrum now have a wider range of applications, including coloring (textiles, food, paintings), energy production (photovoltaic cells, OLEDs), or pharmaceuticals (diagnostics, drugs). Parallel to the growth in dye applications, researchers have increased their efforts to design and synthesize new dyes to customize absorption and emission properties. In particular, dyes containing one or more metallic centers allow for the construction of fairly sophisticated systems capable of selectively reacting to light of a given wavelength and behaving as molecular devices (photochemical molecular devices, PMDs).Theoretical tools able to predict and interpret the excited-state properties of organic and inorganic dyes allow for an efficient screening of photochemical centers. In this Account, we report recent developments defining a quantitative ab initio protocol (based on time-dependent density functional theory) for modeling dye spectral properties. In particular, we discuss the importance of several parameters, such as the methods used for electronic structure calculations, solvent effects, and statistical treatments. In addition, we illustrate the performance of such simulation tools through case studies. We also comment on current weak points of these methods and ways to improve them. PMID:19113946

  20. Fast and Provably Accurate Bilateral Filtering.

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S . The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy. PMID:27093722
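
    For reference, the direct computation that such algorithms accelerate can be sketched as a brute-force bilateral filter with a Gaussian range kernel; parameters below are illustrative.

```python
import numpy as np

def bilateral_filter(img, sigma_s=2.0, sigma_r=0.1, radius=4):
    """Brute-force bilateral filter, O(S) per pixel with S = (2r+1)^2.
    img: 2-D float array with values in [0, 1]."""
    h, w = img.shape
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(xs**2 + ys**2) / (2 * sigma_s**2))   # spatial kernel
    pad = np.pad(img, radius, mode="reflect")
    out = np.empty_like(img)
    for i in range(h):
        for j in range(w):
            patch = pad[i:i + 2 * radius + 1, j:j + 2 * radius + 1]
            rng = np.exp(-(patch - img[i, j])**2 / (2 * sigma_r**2))  # range kernel
            wgt = spatial * rng
            out[i, j] = (wgt * patch).sum() / wgt.sum()
    return out

noisy = np.clip(np.eye(64) + 0.05 * np.random.default_rng(0).normal(size=(64, 64)), 0, 1)
smooth = bilateral_filter(noisy)   # edges preserved, noise smoothed
```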

  1. Accurate adiabatic correction in the hydrogen molecule

    Pachucki, Krzysztof, E-mail: krp@fuw.edu.pl [Faculty of Physics, University of Warsaw, Pasteura 5, 02-093 Warsaw (Poland); Komasa, Jacek, E-mail: komasa@man.poznan.pl [Faculty of Chemistry, Adam Mickiewicz University, Umultowska 89b, 61-614 Poznań (Poland)

    2014-12-14

    A new formalism for the accurate treatment of adiabatic effects in the hydrogen molecule is presented, in which the electronic wave function is expanded in the James-Coolidge basis functions. Systematic increase in the size of the basis set permits estimation of the accuracy. Numerical results for the adiabatic correction to the Born-Oppenheimer interaction energy reveal a relative precision of 10⁻¹² at an arbitrary internuclear distance. Such calculations have been performed for 88 internuclear distances in the range of 0 < R ⩽ 12 bohrs to construct the adiabatic correction potential and to solve the nuclear Schrödinger equation. Finally, the adiabatic correction to the dissociation energies of all rovibrational levels in H₂, HD, HT, D₂, DT, and T₂ has been determined. For the ground state of H₂ the estimated precision is 3 × 10⁻⁷ cm⁻¹, which is almost three orders of magnitude higher than that of the best previous result. The achieved accuracy removes the adiabatic contribution from the overall error budget of the present-day theoretical predictions for the rovibrational levels.

  2. Accurate fission data for nuclear safety

    Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S

    2013-01-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power of the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired, while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...

  3. Quantitative analysis of endogenous compounds.

    Thakare, Rhishikesh; Chhonker, Yashpal S; Gautam, Nagsen; Alamoudi, Jawaher Abdullah; Alnouti, Yazen

    2016-09-01

    Accurate quantitative analysis of endogenous analytes is essential for several clinical and non-clinical applications. LC-MS/MS is the technique of choice for quantitative analyses. Absolute quantification by LC/MS requires preparing standard curves in the same matrix as the study samples so that the matrix effect and the extraction efficiency for analytes are the same in both the standard and study samples. However, by definition, analyte-free biological matrices do not exist for endogenous compounds. To address the lack of blank matrices for the quantification of endogenous compounds by LC-MS/MS, four approaches are used: the standard addition, the background subtraction, the surrogate matrix, and the surrogate analyte methods. This review article presents an overview of these approaches, cites and summarizes their applications, and compares their advantages and disadvantages. In addition, we discuss in detail validation requirements and compatibility with FDA guidelines to ensure method reliability in quantifying endogenous compounds. The standard addition, background subtraction, and surrogate analyte approaches allow the use of the same matrix for the calibration curve as the one to be analyzed in the test samples. However, in the surrogate matrix approach, various matrices such as artificial, stripped, and neat matrices are used as surrogate matrices for the actual matrix of study samples. For the surrogate analyte approach, it is required to demonstrate similarity in matrix effect and recovery between surrogate and authentic endogenous analytes. Similarly, for the surrogate matrix approach, it is required to demonstrate similar matrix effect and extraction recovery in both the surrogate and original matrices. All these methods represent indirect approaches to quantify endogenous compounds and, regardless of which approach is followed, it has to be shown that none of the validation criteria have been compromised due to the indirect analyses. PMID
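
    As an example of the first approach, standard addition reduces to a linear fit of signal versus spiked concentration, whose x-intercept magnitude gives the endogenous concentration; the numbers below are invented for illustration.

```python
import numpy as np

added = np.array([0.0, 5.0, 10.0, 20.0])      # spiked concentration, ng/mL
signal = np.array([1.02, 1.53, 2.01, 3.05])   # LC-MS/MS peak-area ratio

slope, intercept = np.polyfit(added, signal, 1)  # calibration in the actual matrix
endogenous = intercept / slope                   # magnitude of the x-intercept
print(f"endogenous concentration ~ {endogenous:.1f} ng/mL")   # ~10 ng/mL
```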

  4. Towards a more accurate concept of fuels

    Full text: The introduction of LEU in Atucha and the approval of CARA show an advance in the fuels of the Argentine power stations and indicate a direction to follow. In the first case, the use of enriched U fuel relaxes an important restriction related to neutronic economy; that means that it is possible to design less penalized fuels using more Zry. The second case allows a decrease in the linear power of the rods, enabling a better performance of the fuel in normal and also in accident conditions. In this work we wish to emphasize this last point, trying to find a design in which the surface power of the rod is diminished. Hence, in accident conditions owing to lack of coolant, the cladding tube will not reach temperatures that produce oxidation, with the corresponding H2 formation, or plasticity sufficient to form blisters, which would obstruct reflooding, nor the hydriding that produces embrittlement and rupture of the cladding tube, with the corresponding dispersion of radioactive material. This work is oriented to finding rod designs with quasi-rectangular geometry to lower the surface power of the rods, in order to obtain a lower central temperature of the rod. Thus, critical temperatures will not be reached in case of loss of coolant. This design is becoming a reality after PPFAE's efforts in the fabrication of cladding tubes with different circumferential values, rectangular in particular. This geometry, with an appropriate pellet design, can minimize the pellet-cladding interaction and, through the accurate width selection, non-rectified pellets could be used. This means an important economy in pellet production, as well as an advance in the fabrication of fuels in glove boxes and hot cells in the future. The sequence to determine critical geometrical parameters is described and some rod dispositions are explored

  5. Accurate orbit propagation with planetary close encounters

    Baù, Giulio; Milani Comparetti, Andrea; Guerra, Francesca

    2015-08-01

    We tackle the problem of accurately propagating the motion of those small bodies that undergo close approaches with a planet. The literature is lacking on this topic and the reliability of the numerical results is not sufficiently discussed. The high-frequency components of the perturbation generated by a close encounter make the propagation particularly challenging, both from the point of view of the dynamical stability of the formulation and of the numerical stability of the integrator. In our approach a fixed step-size and order multistep integrator is combined with a regularized formulation of the perturbed two-body problem. When the propagated object enters the region of influence of a celestial body, the latter becomes the new primary body of attraction. Moreover, the formulation and the step-size will also be changed if necessary. We present: 1) the restarter procedure applied to the multistep integrator whenever the primary body is changed; 2) new analytical formulae for setting the step-size (given the order of the multistep, formulation and initial osculating orbit) in order to control the accumulation of the local truncation error and guarantee the numerical stability during the propagation; 3) a new definition of the region of influence in the phase space. We test the propagator with some real asteroids subject to the gravitational attraction of the planets, the Yarkovsky and relativistic perturbations. Our goal is to show that the proposed approach improves the performance of both the propagator implemented in the OrbFit software package (which is currently used by the NEODyS service) and of the propagator represented by a variable step-size and order multistep method combined with Cowell's formulation (i.e. direct integration of position and velocity in either the physical or a fictitious time).

  6. Accurate paleointensities - the multi-method approach

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated, and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed - the pseudo-Thellier protocol - which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, so the actual field strength at the time of cooling is reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  7. Towards Accurate Application Characterization for Exascale (APEX)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia's production users/developers.

  8. Accurate hydrocarbon estimates attained with radioactive isotope

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample

  9. Quantitative bone gallium scintigraphy in osteomyelitis

    Gallium imaging offers many practical advantages over indium-111-labeled leukocyte imaging, and calculating quantitative ratios in addition to performing the routine bone-gallium images allows accurate and easy evaluation of patients with suspected osteomyelitis. To add objectivity and improve the accuracy and confidence in diagnosis of osteomyelitis, quantitative comparison of abnormalities seen on bone scans and gallium scans was performed. One hundred and ten adult patients with 126 sites of suspected osteomyelitis were evaluated and categorized by gallium-to-bone ratios, gallium-to-background ratios, and spatial incongruency of gallium and bone activity. Combined evaluation using these criteria gave a 70% sensitivity and 93% specificity for the diagnosis of osteomyelitis. (orig.)

  10. Uncertainty Quantification for Quantitative Imaging Holdup Measurements

    Bevill, Aaron M [ORNL; Bledsoe, Keith C [ORNL

    2016-01-01

    In nuclear fuel cycle safeguards, special nuclear material "held up" in pipes, ducts, and glove boxes causes significant uncertainty in material-unaccounted-for estimates. Quantitative imaging is a proposed non-destructive assay technique with potential to estimate the holdup mass more accurately and reliably than current techniques. However, uncertainty analysis for quantitative imaging remains a significant challenge. In this work we demonstrate an analysis approach for data acquired with a fast-neutron coded aperture imager. The work includes a calibrated forward model of the imager. Cross-validation indicates that the forward model predicts the imager data typically within 23%; further improvements are forthcoming. A new algorithm based on the chi-squared goodness-of-fit metric then uses the forward model to calculate a holdup confidence interval. The new algorithm removes geometry approximations that previous methods require, making it a more reliable uncertainty estimator.
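
    A schematic of the chi-squared interval idea described above, assuming a hypothetical calibrated forward model; the actual imager model and statistics in this work are more elaborate.

```python
import numpy as np
from scipy.stats import chi2

def holdup_interval(counts, forward_model, masses, cl=0.95):
    """Confidence interval from a chi-squared goodness-of-fit scan:
    keep all masses whose chi-square stays within the chosen quantile
    of the best fit (profiling over a single parameter)."""
    chi2_vals = np.array([np.sum((counts - forward_model(m)) ** 2
                                 / np.maximum(forward_model(m), 1.0))
                          for m in masses])
    ok = masses[chi2_vals <= chi2_vals.min() + chi2.ppf(cl, df=1)]
    return ok.min(), ok.max()

# Toy usage: a hypothetical linear imager model and simulated pixel counts
model = lambda m: 50.0 + 12.0 * m * np.ones(16)
data = np.random.default_rng(1).poisson(model(3.0))
masses = np.linspace(0.0, 6.0, 601)
print(holdup_interval(data, model, masses))    # interval containing ~3.0
```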

  11. A quantitative description for efficient financial markets

    Immonen, Eero

    2015-09-01

    In this article we develop a control system model for describing efficient financial markets. We define the efficiency of a financial market in quantitative terms by robust asymptotic price-value equality in this model. By invoking the Internal Model Principle of robust output regulation theory we then show that under No Bubble Conditions, in the proposed model, the market is efficient if and only if the following conditions hold true: (1) the traders, as a group, can identify any mispricing in asset value (even if no one single trader can do it accurately), and (2) the traders, as a group, incorporate an internal model of the value process (again, even if no one single trader knows it). This main result of the article, which deliberately avoids the requirement for investor rationality, demonstrates, in quantitative terms, that the more transparent the markets are, the more efficient they are. An extensive example is provided to illustrate the theoretical development.

  12. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Cecilia Noecker

    2015-03-01

    Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1-2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the "standard" mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode of virus production. Second, this mathematical model was not able to accurately describe the change in the experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
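
    A minimal sketch of the extended model described above, adding an eclipse-phase compartment between infection and virus production to the standard target-cell model; parameter values are illustrative, not the fitted SIV values.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Target cells T, eclipse-phase cells E, producing cells I, free virus V.
beta, k, delta, p, c = 1e-7, 1.0, 0.5, 1e3, 10.0   # illustrative rates

def rhs(t, y):
    T, E, I, V = y
    return [-beta * T * V,            # infection of target cells
            beta * T * V - k * E,     # cells transitioning to production
            k * E - delta * I,        # actively producing cells
            p * I - c * V]            # continuous (budding) virus release

sol = solve_ivp(rhs, (0.0, 30.0), [1e6, 0.0, 0.0, 1e-2],
                rtol=1e-8, dense_output=True)
days = np.linspace(0.0, 30.0, 7)
print(np.log10(np.maximum(sol.sol(days)[3], 1e-12)))  # log10 viral load
```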

  13. Quantitative Compositional Reasoning

    Chatterjee, K.; Alfaro, de L.; Faella, M.; Henzinger, T.A.; Majumdar, R.; Stoelinga, M.I.A.

    2006-01-01

    We present a compositional theory of system verification, where specifications assign real-numbered costs to systems. These costs can express a wide variety of quantitative system properties, such as resource consumption, price or a measure of how well a system satisfies its specification. The theo

  14. Critical Quantitative Inquiry in Context

    Stage, Frances K.; Wells, Ryan S.

    2014-01-01

    This chapter briefly traces the development of the concept of critical quantitative inquiry, provides an expanded conceptualization of the tasks of critical quantitative research, offers theoretical explanation and justification for critical research using quantitative methods, and previews the work of quantitative criticalists presented in this…

  15. DNA DAMAGE QUANTITATION BY ALKALINE GEL ELECTROPHORESIS.

    SUTHERLAND,B.M.; BENNETT,P.V.; SUTHERLAND, J.C.

    2004-03-24

    Physical and chemical agents in the environment, those used in clinical applications, or encountered during recreational exposures to sunlight, induce damages in DNA. Understanding the biological impact of these agents requires quantitation of the levels of such damages in laboratory test systems as well as in field or clinical samples. Alkaline gel electrophoresis provides a sensitive (down to ~ a few lesions/5 Mb), rapid method of direct quantitation of a wide variety of DNA damages in nanogram quantities of non-radioactive DNAs from laboratory, field, or clinical specimens, including higher plants and animals. This method stems from velocity sedimentation studies of DNA populations, and from the simple methods of agarose gel electrophoresis. Our laboratories have developed quantitative agarose gel methods, analytical descriptions of DNA migration during electrophoresis on agarose gels (1-6), and electronic imaging for accurate determinations of DNA mass (7-9). Although all these components improve sensitivity and throughput of large numbers of samples (7,8,10), a simple version using only standard molecular biology equipment allows routine analysis of DNA damages at moderate frequencies. We present here a description of the methods, as well as a brief description of the underlying principles, required for a simplified approach to quantitation of DNA damages by alkaline gel electrophoresis.
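
    One simple way such damage frequencies can be quantified is the Poisson zero-class estimate sketched below; note that the cited methods use a number-average length analysis, so this is only an illustration of the underlying principle.

```python
import math

def lesions_per_fragment(intact_fraction):
    """Poisson zero-class estimate: with lesions placed randomly, the
    fraction of fragments carrying no lesion (still full length on the
    gel) is exp(-mu), so mu = -ln(intact_fraction)."""
    return -math.log(intact_fraction)

# Example: densitometry shows 60% of the DNA mass remaining in the
# full-length band after lesion-specific cleavage of 5-Mb molecules.
mu = lesions_per_fragment(0.60)
print(f"{mu:.3f} lesions per fragment")    # ~0.511
print(f"{mu / 5:.3f} lesions per Mb")
```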

  16. Quantitative Phase Retrieval in Transmission Electron Microscopy

    McLeod, Robert Alexander

    Phase retrieval in the transmission electron microscope offers the unique potential to collect quantitative data regarding the electric and magnetic properties of materials at the nanoscale. Substantial progress in the field of quantitative phase imaging was made by improvements to the technique of off-axis electron holography. In this thesis, several breakthroughs have been achieved that improve the quantitative analysis of phase retrieval. An accurate means of measuring the electron wavefront coherence in two dimensions was developed and practical applications demonstrated. The detector modulation-transfer function (MTF) was assessed by slanted-edge, noise, and the novel holographic techniques. It was shown that the traditional slanted-edge technique underestimates the MTF. In addition, progress was made in dark and gain reference normalization of images, and it was shown that incomplete read-out is a concern for slow-scan CCD detectors. Last, the phase error due to electron shot noise was reduced by the technique of summation of hologram series. The phase error, which limits the finest electric and magnetic phenomena that can be investigated, was reduced more than ninefold with no loss of spatial resolution. Quantitative agreement between the experimental root-mean-square phase error and the analytical prediction of phase error was achieved.

  17. Quantitative analysis of methylation of genomic loci in early-stage rectal cancer predicts distant recurrence.

    Maat, M.F. de; Velde, C.J. van de; Werff, M.P. van der; Putter, H.; Umetani, N.; Klein-Kranenbarg, E.M.; Turner, R.R.; Krieken, J.H.J.M. van; Bilchik, A.; Tollenaar, R.A.; Hoon, D.S.

    2008-01-01

    PURPOSE: There are no accurate prognostic biomarkers specific for rectal cancer. Epigenetic aberrations, in the form of DNA methylation, accumulate early during rectal tumor formation. In a preliminary study, we investigated absolute quantitative methylation changes associated with tumor progression

  18. Impact of reconstruction parameters on quantitative I-131 SPECT

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons however render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods on these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed using the following parameters: (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs as well as (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast to noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density compared with TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was found to be <1%, −26% and 33%, respectively. The adverse effects of partial volume were significantly smaller with the Monte Carlo simulated CDR correction compared with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When using a weighting factor, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since this factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated
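
    The TEW correction referred to above estimates scatter under the photopeak by trapezoidal interpolation between two narrow flanking windows; a per-pixel sketch (window widths and counts are illustrative):

        import numpy as np

        def tew_primary(c_peak, c_low, c_up, w_peak, w_low, w_up):
            # Triple-energy-window scatter estimate (Ogawa et al. 1991):
            # trapezoid between the two flanking narrow windows.
            scatter = (c_low / w_low + c_up / w_up) * w_peak / 2.0
            return np.maximum(c_peak - scatter, 0.0)   # clip negative counts

        print(tew_primary(c_peak=1000.0, c_low=60.0, c_up=40.0,
                          w_peak=58.0, w_low=10.0, w_up=10.0))   # -> 710.0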

  19. SPECT-OPT multimodal imaging enables accurate evaluation of radiotracers for β-cell mass assessments.

    Eter, Wael A; Parween, Saba; Joosten, Lieke; Frielink, Cathelijne; Eriksson, Maria; Brom, Maarten; Ahlgren, Ulf; Gotthardt, Martin

    2016-01-01

    Single Photon Emission Computed Tomography (SPECT) has become a promising experimental approach to monitor changes in β-cell mass (BCM) during diabetes progression. SPECT imaging of pancreatic islets is most commonly cross-validated by stereological analysis of histological pancreatic sections after insulin staining. Typically, stereological methods do not accurately determine the total β-cell volume, which is inconvenient when correlating total pancreatic tracer uptake with BCM. Alternative methods are therefore warranted to cross-validate β-cell imaging using radiotracers. In this study, we introduce multimodal SPECT - optical projection tomography (OPT) imaging as an accurate approach to cross-validate radionuclide-based imaging of β-cells. Uptake of a promising radiotracer for β-cell imaging by SPECT, (111)In-exendin-3, was measured by ex vivo SPECT and cross-evaluated by 3D quantitative OPT imaging as well as with histology within healthy and alloxan-treated Brown Norway rat pancreata. The SPECT signal was in excellent linear correlation with the OPT data, as compared to histology. While histological determination of islet spatial distribution was challenging, SPECT and OPT revealed similar distribution patterns of (111)In-exendin-3 and insulin positive β-cell volumes between different pancreatic lobes, both visually and quantitatively. We propose ex vivo SPECT-OPT multimodal imaging as a highly accurate strategy for validating the performance of β-cell radiotracers. PMID:27080529

  20. Accurate phase measurements for thick spherical objects using optical quadrature microscopy

    Warger, William C., II; DiMarzio, Charles A.

    2009-02-01

    In vitro fertilization (IVF) procedures have resulted in the birth of over three million babies since 1978. Yet the live birth rate in the United States was only 34% in 2005, with 32% of the successful pregnancies resulting in multiple births. These multiple pregnancies were directly attributed to the transfer of multiple embryos to increase the probability that a single, healthy embryo was included. Current viability markers used for IVF, such as the cell number, symmetry, size, and fragmentation, are analyzed qualitatively with differential interference contrast (DIC) microscopy. However, this method is not ideal for quantitative measures beyond the 8-cell stage of development because the cells overlap and obstruct the view within and below the cluster of cells. We have developed the phase-subtraction cell-counting method that uses the combination of DIC and optical quadrature microscopy (OQM) to count the number of cells accurately in live mouse embryos beyond the 8-cell stage. We have also created a preliminary analysis to measure the cell symmetry, size, and fragmentation quantitatively by analyzing the relative dry mass from the OQM image in conjunction with the phase-subtraction count. In this paper, we will discuss the characterization of OQM with respect to measuring the phase accurately for spherical samples that are much larger than the depth of field. Once fully characterized and verified with human embryos, this methodology could provide a more accurate means of scoring embryo viability.

  1. Quantitative shadowgraphy made easy

    This paper is dedicated to the learning of the shadowgraphy technique at graduate and undergraduate levels. It presents an experiment that allows measurement of the refractive index of butane with the help of an affordable quantitative shadowgraphy bench. The paper consists of two distinct parts. The first presents the theory and data processing involved in shadowgraphy, introducing the concept of Abel inversion and modern computer data processing for graduate students. The second focuses on the experimental set-up and results; here, a qualitative interpretation of shadowgrams suitable for undergraduate students is given, as well as a quantitative explanation using a butane gas jet. The refractive index of butane measured with our simple experimental set-up is in close agreement with values available in the literature. (paper)
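
    The Abel inversion mentioned for the graduate-level part can be sketched with a simple onion-peeling discretization, assuming an axisymmetric field sampled in concentric rings (illustrative, not the paper's exact processing chain):

        import numpy as np

        def onion_peel(projection, dr=1.0):
            # Invert an Abel projection assuming piecewise-constant rings:
            # each chord length through ring j at lateral offset r[i] is known,
            # giving an upper-triangular system F = W f.
            n = len(projection)
            r = np.arange(n + 1) * dr                  # ring boundaries
            W = np.zeros((n, n))
            for i in range(n):
                for j in range(i, n):
                    W[i, j] = 2.0 * (np.sqrt(r[j + 1]**2 - r[i]**2)
                                     - np.sqrt(max(r[j]**2 - r[i]**2, 0.0)))
            return np.linalg.solve(W, projection)

        # demo: uniform unit-emission disk of radius 1 sampled in 10 rings
        n, dr = 10, 0.1
        y = np.arange(n) * dr
        proj = 2.0 * np.sqrt(np.maximum(1.0 - y**2, 0.0))   # analytic projection
        print(onion_peel(proj, dr)[:3])                      # ~[1. 1. 1.]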

  2. Energy & Climate: Getting Quantitative

    Wolfson, Richard

    2011-11-01

    A noted environmentalist claims that buying an SUV instead of a regular car is energetically equivalent to leaving your refrigerator door open for seven years. A fossil-fuel apologist argues that solar energy is a pie-in-the-sky dream promulgated by naïve environmentalists, because there's nowhere near enough solar energy to meet humankind's energy demand. A group advocating shutdown of the Vermont Yankee nuclear plant claims that 70% of its electrical energy is lost in transmission lines. Around the world, thousands agitate for climate action, under the numerical banner "350." Neither the environmentalist, the fossil-fuel apologist, the antinuclear activists, nor most of those marching under the "350" banner can back up their assertions with quantitative arguments. Yet questions about energy and its environmental impacts almost always require quantitative answers. Physics can help! This poster gives some cogent examples, based on the newly published 2nd edition of the author's textbook Energy, Environment, and Climate.

  3. Quantitative lacrimal scintigraphy

    Quantitative radioisotope dacryoscintigraphy was performed using a gamma camera fitted with a computer system. ROI selection with corresponding time/activity curves allowed the calculation of parameters such as appearance time and mean transit time in every segment of the lacrimal ducts. The numerical values provide precise information on lacrimal physiology as well as for the follow-up of dacryo- and rhino-pathology. (author)

  4. Quantitative analysis of myocardial tissue with digital autofluorescence microscopy

    Thomas Jensen

    2016-01-01

    Full Text Available Background: The opportunity offered by whole slide scanners of automated histological analysis implies an ever increasing importance of digital pathology. To go beyond conventional pathology, however, digital pathology may need a basic histological starting point similar to that of hematoxylin and eosin staining in conventional pathology. This study presents an automated fluorescence-based microscopy approach providing highly detailed morphological data from unstained microsections. This data may provide a basic histological starting point from which further digital analysis including staining may benefit. Methods: This study explores the inherent tissue fluorescence, also known as autofluorescence, as a means of quantitating cardiac tissue components in histological microsections. Data acquisition using a commercially available whole slide scanner and an image-based quantitation algorithm are presented. Results: It is shown that the autofluorescence intensity of unstained microsections at two different wavelengths is a suitable starting point for automated digital analysis of myocytes, fibrous tissue, lipofuscin, and the extracellular compartment. The output of the method is absolute quantitation along with accurate outlines of the above-mentioned components. The digital quantitations are verified by comparison to point grid quantitations performed on the microsections after Van Gieson staining. Conclusion: The presented method is aptly described as a prestain multicomponent quantitation and outlining tool for histological sections of cardiac tissue. The main perspective is the opportunity for combination with digital analysis of stained microsections, for which the method may provide an accurate digital framework.

  5. Foucault test: a quantitative evaluation method.

    Rodríguez, Gustavo; Villa, Jesús; Ivanov, Rumen; González, Efrén; Martínez, Geminiano

    2016-08-01

    Reliable and accurate testing methods are essential to guiding the polishing process during the figuring of optical telescope mirrors. With the natural advancement of technology, the procedures and instruments used to carry out this delicate task have consistently increased in sensitivity, but also in complexity and cost. Fortunately, throughout history, the Foucault knife-edge test has shown the potential to measure transverse aberrations in the order of the wavelength, mainly when described in terms of physical theory, which allows a quantitative interpretation of its characteristic shadowmaps. Our previous publication on this topic derived a closed mathematical formulation that directly relates the knife-edge position with the observed irradiance pattern. The present work addresses the largely unexplored problem of estimating the wavefront's gradient from experimental captures of the test, which is achieved by means of an optimization algorithm featuring a proposed ad hoc cost function. The partial derivatives thereby calculated are then integrated by means of a Fourier-based algorithm to retrieve the mirror's actual surface profile. To date and to the best of our knowledge, this is the first time that a complete mathematically grounded treatment of this optical phenomenon is presented, complemented by an image-processing algorithm which allows a quantitative calculation of the corresponding slope at any given point of the mirror's surface, so that it becomes possible to accurately estimate the aberrations present in the analyzed concave device just through its associated foucaultgrams. PMID:27505659
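
    A standard way to perform the final Fourier-based integration of the estimated gradients is the Frankot-Chellappa least-squares integrator; whether this is the authors' exact algorithm is an assumption, but the idea is the same:

        import numpy as np

        def frankot_chellappa(gx, gy):
            # Least-squares integration of a gradient field via FFT:
            # Z(u,v) = (u*Gx + v*Gy) / (2j*pi*(u^2 + v^2)).
            rows, cols = gx.shape
            u = np.fft.fftfreq(cols).reshape(1, cols)    # cycles/sample
            v = np.fft.fftfreq(rows).reshape(rows, 1)
            Gx, Gy = np.fft.fft2(gx), np.fft.fft2(gy)
            denom = (2j * np.pi) * (u**2 + v**2)
            denom[0, 0] = 1.0                            # avoid division by zero at DC
            Z = (u * Gx + v * Gy) / denom
            Z[0, 0] = 0.0                                # zero-mean surface
            return np.real(np.fft.ifft2(Z))

        # demo: approximately recover z = sin(x) + cos(y) from its gradient
        x = np.linspace(0, 2 * np.pi, 64, endpoint=False)
        X, Y = np.meshgrid(x, x)
        z = np.sin(X) + np.cos(Y)
        gy, gx = np.gradient(z)                          # axis0 (y) first, then x
        print(np.abs(frankot_chellappa(gx, gy) - (z - z.mean())).max())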

  6. Mining tandem mass spectral data to develop a more accurate mass error model for peptide identification.

    Fu, Yan; Gao, Wen; He, Simin; Sun, Ruixiang; Zhou, Hu; Zeng, Rong

    2007-01-01

    The assumption on the mass error distribution of fragment ions plays a crucial role in peptide identification by tandem mass spectra. Previous mass error models are the simplistic uniform or normal distribution with empirically set parameter values. In this paper, we propose a more accurate mass error model, namely conditional normal model, and an iterative parameter learning algorithm. The new model is based on two important observations on the mass error distribution, i.e. the linearity between the mean of mass error and the ion mass, and the log-log linearity between the standard deviation of mass error and the peak intensity. To our knowledge, the latter quantitative relationship has never been reported before. Experimental results demonstrate the effectiveness of our approach in accurately quantifying the mass error distribution and the ability of the new model to improve the accuracy of peptide identification. PMID:17990507
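
    The two reported relationships translate directly into a scoring model: the mean error is fit linearly against ion mass and the error spread log-log against peak intensity. A sketch under those assumptions (the |residual| regression recovers the log-log slope only up to a constant offset):

        import numpy as np

        def fit_conditional_normal(masses, intensities, errors):
            # Mean mass error linear in ion mass; log10(sigma) linear in
            # log10(peak intensity). Rough proxy fit via |residuals|.
            a1, a0 = np.polyfit(masses, errors, 1)
            resid = errors - (a0 + a1 * masses)
            b1, b0 = np.polyfit(np.log10(intensities),
                                np.log10(np.abs(resid) + 1e-12), 1)
            return (a0, a1), (b0, b1)

        def zscore(mass, intensity, error, mean_p, sd_p):
            mu = mean_p[0] + mean_p[1] * mass
            sigma = 10 ** (sd_p[0] + sd_p[1] * np.log10(intensity))
            return (error - mu) / sigma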

  7. Qualitative and Quantitative Analysis of the Market – A Factor for Competitive Advantage of Enterprises

    Karolina Ilieska

    2005-01-01

    The goal of market analysis is to define accurately the needs, motives and behavior of customers, so as to determine the demand for products and services, if any, as well as its volume. Qualitative data-collection methods give only an imprecise picture of the structure and significance of information on purchase motives and habits. Quantitative methods study these phenomena from a quantitative point of view. The quantitative ...

  8. Quantitative Computed Tomography

    Balda, Michael

    2011-01-01

    Computed Tomography (CT) is a wide-spread medical imaging modality. Traditional CT yields information on a patient's anatomy in form of slice images or volume data. Hounsfield Units (HU) are used to quantify the imaged tissue properties. Due to the polychromatic nature of X-rays in CT, the HU values for a specific tissue depend on its density and composition but also on CT system parameters and settings and the surrounding materials. The main objective of Quantitative CT (QCT) is measuring ch...

  9. F# for quantitative finance

    Astborg, Johan

    2013-01-01

    To develop your confidence in F#, this tutorial will first introduce you to simpler tasks such as curve fitting. You will then advance to more complex tasks such as implementing algorithms for trading semi-automation in a practical scenario-based format. If you are a data analyst or a practitioner in quantitative finance, economics, or mathematics and wish to learn how to use F# as a functional programming language, this book is for you. You should have a basic conceptual understanding of financial concepts and models. Elementary knowledge of the .NET framework would also be helpful.

  10. Quantitative rainbow schlieren deflectometry

    Greenberg, Paul S.; Klimek, Robert B.; Buchele, Donald R.

    1995-01-01

    In the rainbow schlieren apparatus, a continuously graded rainbow filter is placed in the back focal plane of the decollimating lens. Refractive-index gradients in the test section thus appear as gradations in hue rather than irradiance. A simple system is described wherein a conventional color CCD array and video digitizer are used to quantify accurately the color attributes of the resulting image, and hence the associated ray deflections. The present system provides a sensitivity comparable with that of conventional interferometry, while being simpler to implement and less sensitive to mechanical misalignment.

  11. Accurate core-electron binding energy shifts from density functional theory

    Current review covers description of density functional methods of calculation of accurate core-electron binding energy (CEBE) of second and third row atoms; applications of calculated CEBEs and CEBE shifts (ΔCEBEs) in elucidation of topics such as: hydrogen-bonding, peptide bond, polymers, DNA bases, Hammett substituent (σ) constants, inductive and resonance effects, quantitative structure activity relationship (QSAR), and solid state effect (WD). This review limits itself to works of mainly Chong and his coworkers for the period post-2002. It is not a fully comprehensive account of the current state of the art.

  12. Accurate core-electron binding energy shifts from density functional theory

    Takahata, Yuji, E-mail: taka@iqm.unicamp.b [Amazonas State University, School of Engineering, Av. Darcy Vargas, 1200, Parque 10 - CEP 69065-020, Manaus, Amazonas (Brazil); Department of Chemistry, University of Campinas-UNICAMP, Campinas 13084-862, Sao Paulo (Brazil); Marques, Alberto Dos Santos [Amazonas State University, School of Engineering, Av. Darcy Vargas, 1200, Parque 10 - CEP 69065-020, Manaus, Amazonas (Brazil)

    2010-05-15

    Current review covers description of density functional methods of calculation of accurate core-electron binding energy (CEBE) of second and third row atoms; applications of calculated CEBEs and CEBE shifts (ΔCEBEs) in elucidation of topics such as: hydrogen-bonding, peptide bond, polymers, DNA bases, Hammett substituent (σ) constants, inductive and resonance effects, quantitative structure activity relationship (QSAR), and solid state effect (WD). This review limits itself to works of mainly Chong and his coworkers for the period post-2002. It is not a fully comprehensive account of the current state of the art.

  13. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    Pan, Bing

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need of updating the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation to each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexities of the proposed fast DVC algorithm and existing typical DVC algorithms are first analyzed quantitatively according to the necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost. © 2014 Elsevier Ltd.
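
    The reliability-guided transfer of initial guesses in the second improvement is essentially a best-first flood fill over the calculation points. A 2D sketch with a hypothetical correlate routine standing in for IC-GN subset registration (the real method works on 3D volumes):

        import heapq

        def neighbors(p, points):
            i, j = p
            cand = [(i + 1, j), (i - 1, j), (i, j + 1), (i, j - 1)]
            return [c for c in cand if c in points]

        def correlate(point, init_guess):
            # Placeholder for sub-voxel registration (e.g., IC-GN refinement);
            # returns (correlation coefficient, refined displacement). Hypothetical.
            return 1.0, init_guess

        def reliability_guided(points, seed, seed_disp):
            # Best-first propagation: the most reliably matched point expands
            # first, handing its displacement on as the neighbors' initial guess.
            disp, done = {seed: seed_disp}, {seed}
            heap = [(-1.0, seed)]                # max-heap via negated coefficient
            while heap:
                _, p = heapq.heappop(heap)
                for q in neighbors(p, points):
                    if q in done:
                        continue
                    cc, d = correlate(q, disp[p])
                    disp[q] = d
                    done.add(q)
                    heapq.heappush(heap, (-cc, q))
            return disp

        pts = {(i, j) for i in range(5) for j in range(5)}
        print(len(reliability_guided(pts, (2, 2), (0.0, 0.0))))   # 25 points computed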

  14. Quantitative imaging with a mobile phone microscope.

    Arunan Skandarajah

    Full Text Available Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone-based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications.
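
    One generic correction of the kind discussed (fixed-pattern offset and uneven illumination) is dark-frame subtraction with flat-field normalization; a sketch, not the authors' specific pipeline:

        import numpy as np

        def flat_field_correct(raw, dark, flat):
            # Standard radiometric correction: subtract the dark frame, then
            # divide by the unit-mean illumination (flat) field.
            gain = flat.astype(float) - dark
            gain /= gain.mean()
            return (raw.astype(float) - dark) / np.maximum(gain, 1e-6)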

  15. Accurate Jones Matrix of the Practical Faraday Rotator

    王林斗; 祝昇翔; 李玉峰; 邢文烈; 魏景芝

    2003-01-01

    The Jones matrix of practical Faraday rotators is often used in the engineering calculation of non-reciprocal optical fields. Until now, however, only an approximate Jones matrix of practical Faraday rotators has been available. Based on the theory of polarized light, this paper presents the accurate Jones matrix of practical Faraday rotators. In addition, an experiment has been carried out to verify the validity of the accurate Jones matrix. This matrix accurately describes the optical characteristics of practical Faraday rotators, including rotation, loss and depolarization of the polarized light. The accurate Jones matrix yields accurate results for the transformation of polarized light by practical Faraday rotators, paving the way for the precise analysis and calculation of practical Faraday rotators in relevant engineering applications.

  16. Quantitative and qualitative coronary arteriography. 1

    The clinical objectives of arteriography are to obtain information that contributes to an understanding of the mechanisms of the clinical syndrome, provides prognostic information, facilitates therapeutic decisions, and guides invasive therapy. Quantitative and improved qualitative assessments of arterial disease provide us with a common descriptive language which has the potential to accomplish these objectives more effectively and thus to improve clinical outcome. In certain situations, this potential has been demonstrated. Clinical investigation using quantitative techniques has definitely contributed to our understanding of disease mechanisms and of atherosclerosis progression/regression. Routine quantitation of clinical images should permit more accurate and repeatable estimates of disease severity and promises to provide useful estimates of coronary flow reserve. But routine clinical QCA awaits more cost- and time-efficient methods and clear proof of a clinical advantage. Careful inspection of highly magnified, high-resolution arteriographic images reveals morphologic features related to the pathophysiology of the clinical syndrome and to the likelihood of future progression or regression of obstruction. Features that have been found useful include thrombus in its various forms, ulceration and irregularity, eccentricity, flexing and dissection. The description of such high-resolution features should be included among, rather than excluded from, the goals of image processing, since they contribute substantially to the understanding and treatment of the clinical syndrome. (author). 81 refs.; 8 figs.; 1 tab

  17. Some exercises in quantitative NMR imaging

    The articles represented in this thesis result from a series of investigations that evaluate the potential of NMR imaging as a quantitative research tool. In the first article the possible use of proton spin-lattice relaxation time T1 in tissue characterization, tumor recognition and monitoring tissue response to radiotherapy is explored. The next article addresses the question whether water proton spin-lattice relaxation curves of biological tissues are adequately described by a single time constant T1, and analyzes the implications of multi-exponentiality for quantitative NMR imaging. In the third article the use of NMR imaging as a quantitative research tool is discussed on the basis of phantom experiments. The fourth article describes a method which enables unambiguous retrieval of sign information in a set of magnetic resonance images of the inversion recovery type. The next article shows how this method can be adapted to allow accurate calculation of T1 pictures on a pixel-by-pixel basis. The sixth article, finally, describes a simulation procedure which enables a straightforward determination of NMR imaging pulse sequence parameters for optimal tissue contrast. (orig.)

  18. Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method

    Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben

    2010-05-01

    Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas where significant petrochemical exploration, drilling, transport, processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high quality, quantitative measurements of methane fluxes in these different environments have not been available, due both to the lack of robust field-deployable instrumentation as well as to the fact that most significant sources of methane extend over large areas (from 10s to 1,000,000s of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the same vicinity as the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far-field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well-mixed). In this regime, the methane emissions are given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer. We present detailed methane flux
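
    The far-field relation stated above is a one-line computation once backgrounds are subtracted; a sketch assuming mole-fraction measurements and an acetylene tracer, with molar masses converting the molar ratio to a mass rate:

        def tracer_dilution_flux(c_ch4, c_ch4_bg, c_trc, c_trc_bg,
                                 q_trc, m_ch4=16.04, m_trc=26.04):
            # Methane emission rate from far-field tracer dilution.
            # Concentrations in mole fraction (e.g., ppb); q_trc in kg/h;
            # default molar masses: CH4 and acetylene (illustrative choice).
            ratio = (c_ch4 - c_ch4_bg) / (c_trc - c_trc_bg)   # molar ratio
            return q_trc * ratio * (m_ch4 / m_trc)            # kg/h of CH4

        print(tracer_dilution_flux(2450.0, 1850.0, 120.0, 0.0, q_trc=1.0))
        # -> ~3.1 kg/h of methane for a 1 kg/h acetylene release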

  19. Conjugate whole-body scanning system for quantitative measurement of organ distribution in vivo

    The determination of accurate, quantitative, biokinetic distribution of an internally dispersed radionuclide in humans is important in making realistic radiation absorbed dose estimates, studying biochemical transformations in health and disease, and developing clinical procedures indicative of abnormal functions. In order to collect these data, a whole-body imaging system is required which provides both adequate spatial resolution and some means of absolute quantitation. Based on these considerations, a new whole-body scanning system has been designed and constructed that employs the conjugate counting technique. The conjugate whole-body scanning system provides an efficient and accurate means of collecting absolute quantitative organ distribution data of radioactivity in vivo
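
    The conjugate-counting principle combines anterior and posterior counts as a geometric mean with attenuation and sensitivity corrections; a textbook-style sketch (the scanner's actual calibration chain is more involved):

        import numpy as np

        def conjugate_view_activity(counts_ant, counts_post, transmission,
                                    sensitivity):
            # MIRD conjugate-view estimate: A = sqrt(Ia * Ip / T) / sensitivity,
            # with T = exp(-mu*L) the body transmission factor and sensitivity
            # the calibration factor in counts/s per Bq.
            return np.sqrt(counts_ant * counts_post / transmission) / sensitivity

        # e.g., 900 and 400 cps, 25% transmission, 1e-4 cps/Bq -> 12 MBq
        print(conjugate_view_activity(900.0, 400.0, 0.25, 1e-4) / 1e6)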

  20. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project

    National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depends critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  1. Multiplex PCR with minisequencing as an effective high-throughput SNP typing method for formalin-fixed tissue

    Gilbert, Marcus T P; Sanchez, Juan J; Haselkorn, Tamara;

    2007-01-01

    Extensive collections of formalin-fixed paraffin-embedded (FFPE) tissues exist that could be exploited for genetic analyses in order to provide important insights into the genetic basis of disease or host/pathogen cointeractions. We report here an evaluation of a 44 SNP multiplex genotyping method...... information using simple, single reactions and minute amounts of archival tissue/DNA. In the light of this evidence, we suggest that the systematic screening of FFPE collections may in the future provide valuable insights into the past. Publication date: 2007-Jul

  2. Peopling of the North Circumpolar Region – Insights from Y Chromosome STR and SNP Typing of Greenlanders

    Olofsson, Jill Katharina; Pereira, Vania; Børsting, Claus;

    2015-01-01

    17 Y-chromosomal short tandem repeats (Y-STRs). Approximately 40% of the analyzed Greenlandic Y chromosomes were of European origin (I-M170, R1a-M513 and R1b-M343). Y chromosomes of European origin were mainly found in individuals from the west and south coasts of Greenland, which is in agreement...... origin in the northeastern parts of North America and could be descendants of the Dorset culture. This in turn points to the possibility that the current Inuit population in Greenland is comprised of individuals of both Thule and Dorset descent....

  3. Multidetector row computed tomography may accurately estimate plaque vulnerability. Does MDCT accurately estimate plaque vulnerability? (Pro)

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These 2 studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time course. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal electrocardiogram (ECG) and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. (author)

  4. Multidetector row computed tomography may accurately estimate plaque vulnerability: does MDCT accurately estimate plaque vulnerability? (Pro).

    Komatsu, Sei; Imai, Atsuko; Kodama, Kazuhisa

    2011-01-01

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These 2 studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time course. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal ECG and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. PMID:21532180

  5. Accurate Quantification of Lipid Species by Electrospray Ionization Mass Spectrometry — Meets a Key Challenge in Lipidomics

    Kui Yang

    2011-11-01

    Full Text Available Electrospray ionization mass spectrometry (ESI-MS) has become one of the most popular and powerful technologies to identify and quantify individual lipid species in lipidomics. Meanwhile, quantitative analysis of lipid species by ESI-MS remains a major challenge in lipidomics. Herein, we discuss the principles, advantages, and possible limitations of different mass spectrometry-based methodologies for lipid quantification, as well as a few practical issues important for accurate quantification of individual lipid species. Accordingly, accurate quantification of individual lipid species, one of the key challenges in lipidomics, can be practically met.

  6. Quantitative velocity modulation spectroscopy

    Hodges, James N.; McCall, Benjamin J.

    2016-05-01

    Velocity Modulation Spectroscopy (VMS) is arguably the most important development in the 20th century for spectroscopic study of molecular ions. For decades, interpretation of VMS lineshapes has presented challenges due to the intrinsic covariance of fit parameters including velocity modulation amplitude, linewidth, and intensity. This limitation has stifled the growth of this technique into the quantitative realm. In this work, we show that subtle changes in the lineshape can be used to help address this complexity. This allows for determination of the linewidth, intensity relative to other transitions, velocity modulation amplitude, and electric field strength in the positive column of a glow discharge. Additionally, we explain the large homogeneous component of the linewidth that has been previously described. Using this component, the ion mobility can be determined.

  7. Quantitative metamaterial property extraction

    Schurig, David

    2015-01-01

    We examine an extraction model for metamaterials, not previously reported, that gives precise, quantitative and causal representation of S parameter data over a broad frequency range, up to frequencies where the free space wavelength is only a modest factor larger than the unit cell dimension. The model is comprised of superposed, slab shaped response regions of finite thickness, one for each observed resonance. The resonance dispersion is Lorentzian and thus strictly causal. This new model is compared with previous models for correctness likelihood, including an appropriate Occam's factor for each fit parameter. We find that this new model is by far the most likely to be correct in a Bayesian analysis of model fits to S parameter simulation data for several classic metamaterial unit cells.
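
    Each slab-shaped response region carries a strictly causal Lorentzian dispersion term, so the model's material parameter is a superposition of resonances; a sketch for the permittivity with illustrative values:

        import numpy as np

        def lorentzian_eps(freq, resonances):
            # eps(f) = eps_inf + sum_k F_k f0k^2 / (f0k^2 - f^2 - i*gamma_k*f);
            # 'resonances' lists (oscillator strength F, f0, gamma). eps_inf = 1.
            eps = np.ones_like(freq, dtype=complex)
            for F, f0, gamma in resonances:
                eps += F * f0**2 / (f0**2 - freq**2 - 1j * gamma * freq)
            return eps

        f = np.linspace(5e9, 15e9, 5)          # illustrative microwave band
        print(lorentzian_eps(f, [(0.3, 10e9, 0.2e9)]))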

  8. Quantitative Hyperspectral Reflectance Imaging

    Ted A.G. Steemers

    2008-09-01

    Full Text Available Hyperspectral imaging is a non-destructive optical analysis technique that can for instance be used to obtain information from cultural heritage objects unavailable with conventional colour or multi-spectral photography. This technique can be used to distinguish and recognize materials, to enhance the visibility of faint or obscured features, to detect signs of degradation and study the effect of environmental conditions on the object. We describe the basic concept, working principles, construction and performance of a laboratory instrument specifically developed for the analysis of historical documents. The instrument measures calibrated spectral reflectance images at 70 wavelengths ranging from 365 to 1100 nm (near-ultraviolet, visible and near-infrared. By using a wavelength tunable narrow-bandwidth light-source, the light energy used to illuminate the measured object is minimal, so that any light-induced degradation can be excluded. Basic analysis of the hyperspectral data includes a qualitative comparison of the spectral images and the extraction of quantitative data such as mean spectral reflectance curves and statistical information from user-defined regions-of-interest. More sophisticated mathematical feature extraction and classification techniques can be used to map areas on the document, where different types of ink had been applied or where one ink shows various degrees of degradation. The developed quantitative hyperspectral imager is currently in use by the Nationaal Archief (National Archives of The Netherlands to study degradation effects of artificial samples and original documents, exposed in their permanent exhibition area or stored in their deposit rooms.

  9. Accurate formulas for the penalty caused by interferometric crosstalk

    Rasmussen, Christian Jørgen; Liu, Fenghai; Jeppesen, Palle

    2000-01-01

    New simple formulas for the penalty caused by interferometric crosstalk in PIN receiver systems and optically preamplified receiver systems are presented. They are more accurate than existing formulas.

  10. A new, accurate predictive model for incident hypertension

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.

  11. 78 FR 34604 - Submitting Complete and Accurate Information

    2013-06-10

    ... COMMISSION 10 CFR Part 50 Submitting Complete and Accurate Information AGENCY: Nuclear Regulatory Commission... accurate information as would a licensee or an applicant for a license." DATES: Submit comments by August... may submit comments by any of the following methods (unless this document describes a different...

  12. Electronic imaging systems for quantitative electrophoresis of DNA

    Gel electrophoresis is one of the most powerful and widely used methods for the separation of DNA. During the last decade, instruments have been developed that accurately quantitate in digital form the distribution of materials in a gel or on a blot prepared from a gel. In this paper, I review the various physical properties that can be used to quantitate the distribution of DNA on gels or blots and the instrumentation that has been developed to perform these tasks. The emphasis here is on DNA, but much of what is said also applies to RNA, proteins and other molecules. 36 refs

  13. The accurate assessment of small-angle X-ray scattering data

    A set of quantitative techniques is suggested for assessing SAXS data quality. These are applied in the form of a script, SAXStats, to a test set of 27 proteins, showing that these techniques are more sensitive than manual assessment of data quality. Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations is defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality

  14. Identification and validation of reference genes for accurate normalization of real-time quantitative PCR data in kiwifruit.

    Ferradás, Yolanda; Rey, Laura; Martínez, Óscar; Rey, Manuel; González, M Victoria

    2016-05-01

    Identification and validation of reference genes are required for the normalization of qPCR data. We studied the expression stability produced by eight primer pairs amplifying four common genes used as references for normalization. Samples representing different tissues, organs and developmental stages in kiwifruit (Actinidia chinensis var. deliciosa (A. Chev.) A. Chev.) were used. A total of 117 kiwifruit samples were divided into five sample sets (mature leaves, axillary buds, stigmatic arms, fruit flesh and seeds). All samples were also analysed as a single set. The expression stability of the candidate primer pairs was tested using three algorithms (geNorm, NormFinder and BestKeeper). The minimum number of reference genes necessary for normalization was also determined. A unique primer pair was selected for amplifying the 18S rRNA gene. The primer pair selected for amplifying the ACTIN gene was different depending on the sample set. 18S 2 and ACT 2 were the candidate primer pairs selected for normalization in the three sample sets (mature leaves, fruit flesh and stigmatic arms). 18S 2 and ACT 3 were the primer pairs selected for normalization in axillary buds. No primer pair could be selected for use as the reference for the seed sample set. The analysis of all samples in a single set did not yield a stably expressed primer pair. Considering data previously reported in the literature, we validated the selected primer pairs amplifying the FLOWERING LOCUS T gene for use in the normalization of gene expression in kiwifruit. PMID:26897117
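
    Of the three algorithms used, geNorm is the simplest to sketch: a gene's stability measure M is the mean standard deviation of its log2 expression ratios against every other candidate, with lower M meaning more stable. A sketch assuming a genes-by-samples matrix of relative quantities:

        import numpy as np

        def genorm_m(expr):
            # geNorm stability M per gene; expr is genes x samples
            # (relative expression quantities).
            logs = np.log2(expr)
            n = logs.shape[0]
            m = np.zeros(n)
            for j in range(n):
                others = [k for k in range(n) if k != j]
                m[j] = np.mean([np.std(logs[j] - logs[k], ddof=1)
                                for k in others])
            return m

        expr = np.random.lognormal(mean=0.0, sigma=0.3, size=(4, 10))
        print(genorm_m(expr))   # the candidate with the lowest M is most stable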

  15. Automated and Accurate Detection of Soma Location and Surface Morphology in Large-Scale 3D Neuron Images

    Cheng Yan; Anan Li; Bin Zhang,; Wenxiang Ding; Qingming Luo; Hui Gong

    2013-01-01

    Automated and accurate localization and morphometry of somas in 3D neuron images is essential for quantitative studies of neural networks in the brain. However, previous methods are limited in obtaining the location and surface morphology of somas with variable size and uneven staining in large-scale 3D neuron images. In this work, we proposed a method for automated soma locating in large-scale 3D neuron images that contain relatively sparse soma distributions. This method involves three step...

  16. Programmable Quantitative DNA Nanothermometers.

    Gareau, David; Desrosiers, Arnaud; Vallée-Bélisle, Alexis

    2016-07-13

    Developing molecules, switches, probes or nanomaterials that are able to respond to specific temperature changes should prove of utility for several applications in nanotechnology. Here, we describe bioinspired strategies to design DNA thermoswitches with programmable linear response ranges that can provide either a precise ultrasensitive response over a desired, small temperature interval (±0.05 °C) or an extended linear response over a wide temperature range (e.g., from 25 to 90 °C). Using structural modifications or inexpensive DNA stabilizers, we show that we can tune the transition midpoints of DNA thermometers from 30 to 85 °C. Using multimeric switch architectures, we are able to create ultrasensitive thermometers that display large quantitative fluorescence gains within small temperature variation (e.g., > 700% over 10 °C). Using a combination of thermoswitches of different stabilities or a mix of stabilizers of various strengths, we can create extended thermometers that respond linearly up to 50 °C in temperature range. Here, we demonstrate the reversibility, robustness, and efficiency of these programmable DNA thermometers by monitoring temperature change inside individual wells during polymerase chain reactions. We discuss the potential applications of these programmable DNA thermoswitches in various nanotechnology fields including cell imaging, nanofluidics, nanomedicine, nanoelectronics, nanomaterials, and synthetic biology. PMID:27058370

  17. Quantitative DNA Fiber Mapping

    Lu, Chun-Mei; Wang, Mei; Greulich-Bode, Karin M.; Weier, Jingly F.; Weier, Heinz-Ulli G.

    2008-01-28

    Several hybridization-based methods used to delineate single copy or repeated DNA sequences in larger genomic intervals take advantage of the increased resolution and sensitivity of free chromatin, i.e., chromatin released from interphase cell nuclei. Quantitative DNA fiber mapping (QDFM) differs from the majority of these methods in that it applies FISH to purified, clonal DNA molecules which have been bound by at least one end to a solid substrate. The DNA molecules are then stretched by the action of a receding meniscus at the water-air interface resulting in DNA molecules stretched homogeneously to about 2.3 kb/µm. When non-isotopically, multicolor-labeled probes are hybridized to these stretched DNA fibers, their respective binding sites are visualized in the fluorescence microscope, their relative distance can be measured and converted into kilobase pairs (kb). The QDFM technique has found useful applications ranging from the detection and delineation of deletions or overlap between linked clones to the construction of high-resolution physical maps to studies of stalled DNA replication and transcription.

  18. Quantitative Electron Nanodiffraction.

    Spence, John [Arizona State University

    2015-01-30

    This Final report summarizes progress under this award for the final reporting period 2002 - 2013 in our development of quantitative electron nanodiffraction for materials problems, especially devoted to atomistic processes in semiconductors and electronic oxides such as the new artificial oxide multilayers, where our microdiffraction is complemented with energy-loss spectroscopy (ELNES) and aberration-corrected STEM imaging (9). The method has also been used to map out the chemical bonds in the important GaN semiconductor (1) used for solid state lighting, and to understand the effects of stacking sequence variations and interfaces in digital oxide superlattices (8). Other projects include the development of a laser-beam Zernike phase plate for cryo-electron microscopy (5) (based on the Kapitza-Dirac effect), work on reconstruction of molecular images using the scattering from many identical molecules lying in random orientations (4), a review article on space-group determination for the International Tables on Crystallography (10), the observation of energy-loss spectra with millivolt energy resolution and sub-nanometer spatial resolution from individual point defects in an alkali halide, a review article for the Centenary of X-ray Diffraction (17) and the development of a new method of electron-beam lithography (12). We briefly summarize here the work on GaN, on oxide superlattice ELNES, and on lithography by STEM.

  19. Accurate calculation of diffraction-limited encircled and ensquared energy.

    Andersen, Torben B

    2015-09-01

    Mathematical properties of the encircled and ensquared energy functions for the diffraction-limited point-spread function (PSF) are presented. These include power series and a set of linear differential equations that facilitate the accurate calculation of these functions. Asymptotic expressions are derived that provide very accurate estimates for the relative amount of energy in the diffraction PSF that fall outside a square or rectangular large detector. Tables with accurate values of the encircled and ensquared energy functions are also presented. PMID:26368873
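
    For context, the classical closed form for an unaberrated circular pupil puts the encircled energy at reduced radius v at 1 − J0²(v) − J1²(v); a quick numerical check:

        import numpy as np
        from scipy.special import j0, j1

        def encircled_energy(radius, wavelength, f_number):
            # Fraction of energy inside 'radius' for the unaberrated Airy PSF.
            v = np.pi * radius / (wavelength * f_number)   # reduced radial coordinate
            return 1.0 - j0(v)**2 - j1(v)**2

        # the first dark ring (r = 1.22*lambda*N) encloses ~83.8% of the energy
        print(encircled_energy(1.22 * 0.5e-6 * 8.0, 0.5e-6, 8.0))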

  20. Accurately bearing measurement in non-cooperative passive location system

    A system for non-cooperative passive location based on an array is proposed. In the system, the target is detected by beamforming and Doppler matched filtering, and the bearing is measured by a long-baseline interferometer composed of widely separated sub-arrays. With a long baseline, the bearing is measured accurately but ambiguously. To realize unambiguous, accurate bearing measurement, the beam width and a multiple-constraint adaptive beamforming technique are used to resolve the azimuth ambiguity. Theory and simulation results show that this method is effective for accurate bearing measurement in a non-cooperative passive location system. (authors)
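
    Ambiguity resolution of this kind selects, among the phase-wrapped bearing candidates of the long baseline, the one closest to the coarse unambiguous estimate from the beam; a sketch with illustrative values:

        import numpy as np

        def resolve_bearing(phase, baseline, wavelength, coarse_bearing):
            # Pick the interferometer lobe closest to the coarse (beam) bearing.
            # phase in radians; bearings in radians from broadside.
            k_max = int(np.ceil(baseline / wavelength)) + 1
            candidates = []
            for k in range(-k_max, k_max + 1):
                s = (phase / (2 * np.pi) + k) * wavelength / baseline
                if abs(s) <= 1.0:
                    candidates.append(np.arcsin(s))
            return min(candidates, key=lambda th: abs(th - coarse_bearing))

        # 20-wavelength baseline: fine but ambiguous; the coarse beam picks the lobe
        print(np.degrees(resolve_bearing(1.0, 20.0, 1.0, np.radians(4.0))))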

  1. Quantitative thermal diffusivity measurements of composites

    Heath, D. M.; Winfree, W. P.; Heyman, J. S.; Miller, W. E.; Welch, C. S.

    1986-01-01

    A remote radiometric technique for making quantitative thermal diffusivity measurements is described. The technique was designed to make assessments of the structural integrity of large composite parts, such as wings, and can be performed at field sites. In the measurement technique, a CO2 laser beam is scanned using two orthogonal servo-controlled deflecting mirrors. An infrared imager, whose scanning mirrors oscillate in the vertical and the horizontal directions, is used as the detector. The analysis technique used to extract the diffusivity from these images is based on a thin infinite plate assumption, which requires waiting a given period of time for the temperature to equilibrate throughout the thickness of the sample. The technique is shown to be accurate to within two percent for values of the order of those for composite diffusivities, and to be insensitive to convection losses.
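
    Under the thin-plate assumption, an initially Gaussian heated spot stays Gaussian while its squared width grows linearly in time (w² = w0² + 4αt), so the in-plane diffusivity follows from a linear fit; a sketch with synthetic data, not the paper's measurements:

        import numpy as np

        alpha_true, w0_sq = 3.0e-6, 1.0e-6        # m^2/s, m^2 (synthetic values)
        t = np.linspace(0.1, 2.0, 20)
        w_sq = w0_sq + 4.0 * alpha_true * t       # "measured" squared spot widths

        slope, intercept = np.polyfit(t, w_sq, 1)
        print("alpha =", slope / 4.0)             # recovers 3e-6 m^2/s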

  2. Dual interference channel quantitative phase microscopy of live cell dynamics

    Shaked, Natan T.; Rinehart, Matthew T.; Wax, Adam

    2009-01-01

    We introduce and experimentally demonstrate a new fast and accurate method for quantitative imaging of the dynamics of live biological cells. Using a dual-channel interferometric setup, two phase-shifted interferograms of nearly-transparent biological samples are acquired in a single digital camera exposure, and digitally processed into the phase profile of the sample. Since two interferograms of the same sample are acquired simultaneously, most of the common phase noise is eliminated, enabli...
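
    Assuming the two simultaneously acquired interferograms are in quadrature (a π/2 relative shift, an assumption about the working principle here), the phase follows from a four-quadrant arctangent once the common background is removed; a sketch of that processing step:

        import numpy as np

        def quadrature_phase(i_cos, i_sin, background):
            # phi = atan2(I_sin - A, I_cos - A) for I = A + B*cos(phi), A + B*sin(phi)
            return np.unwrap(np.arctan2(i_sin - background, i_cos - background),
                             axis=-1)

        # synthetic check
        phi = np.linspace(0, 4 * np.pi, 256)[None, :]
        A, B = 2.0, 0.7
        rec = quadrature_phase(A + B * np.cos(phi), A + B * np.sin(phi), A)
        print(np.allclose(rec, phi))   # True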

  3. A simplified and accurate detection of the genetically modified wheat MON71800 with one calibrator plasmid.

    Kim, Jae-Hwan; Park, Saet-Byul; Roh, Hyo-Jeong; Park, Sunghoon; Shin, Min-Ki; Moon, Gui Im; Hong, Jin-Hwan; Kim, Hae-Yeong

    2015-06-01

    With the increasing number of genetically modified (GM) events, unauthorized GMO releases into the food market have increased dramatically, and many countries have developed detection tools for them. This study described the qualitative and quantitative detection methods of unauthorized the GM wheat MON71800 with a reference plasmid (pGEM-M71800). The wheat acetyl-CoA carboxylase (acc) gene was used as the endogenous gene. The plasmid pGEM-M71800, which contains both the acc gene and the event-specific target MON71800, was constructed as a positive control for the qualitative and quantitative analyses. The limit of detection in the qualitative PCR assay was approximately 10 copies. In the quantitative PCR assay, the standard deviation and relative standard deviation repeatability values ranged from 0.06 to 0.25 and from 0.23% to 1.12%, respectively. This study supplies a powerful and very simple but accurate detection strategy for unauthorized GM wheat MON71800 that utilizes a single calibrator plasmid. PMID:25624198
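
    Quantitation with a single calibrator plasmid reduces to two standard curves (event-specific target and endogenous acc gene) built from serial dilutions of pGEM-M71800; a sketch with made-up Ct values:

        import numpy as np

        def std_curve(log10_copies, ct):
            # Fit Ct = slope*log10(copies) + intercept; return the inverse map.
            slope, intercept = np.polyfit(log10_copies, ct, 1)
            return lambda ct_obs: 10 ** ((ct_obs - intercept) / slope)

        dil = np.array([5.0, 4.0, 3.0, 2.0, 1.0])         # log10 plasmid copies
        event = std_curve(dil, 3.4 * (5.0 - dil) + 18.0)   # ideal ~ -3.4 slope
        acc = std_curve(dil, 3.4 * (5.0 - dil) + 17.0)

        gm_percent = 100.0 * event(27.0) / acc(24.0)       # copy-number ratio
        print(round(gm_percent, 1))                        # illustrative value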

  4. Quantitative Literacy: Geosciences and Beyond

    Richardson, R. M.; McCallum, W. G.

    2002-12-01

    Quantitative literacy seems like such a natural for the geosciences, right? The field has gone from its origin as a largely descriptive discipline to one where it is hard to imagine failing to bring a full range of mathematical tools to the solution of geological problems. Although there are many definitions of quantitative literacy, we have proposed one that is analogous to the UNESCO definition of conventional literacy: "A quantitatively literate person is one who, with understanding, can both read and represent quantitative information arising in his or her everyday life." Central to this definition is the concept that a curriculum for quantitative literacy must go beyond the basic ability to "read and write" mathematics and develop conceptual understanding. It is also critical that a curriculum for quantitative literacy be engaged with a context, be it everyday life, humanities, geoscience or other sciences, business, engineering, or technology. Thus, our definition works both within and outside the sciences. What role do geoscience faculty have in helping students become quantitatively literate? Is it our role, or that of the mathematicians? How does quantitative literacy vary between different scientific and engineering fields? Or between science and nonscience fields? We will argue that successful quantitative literacy curricula must be an across-the-curriculum responsibility. We will share examples of how quantitative literacy can be developed within a geoscience curriculum, beginning with introductory classes for nonmajors (using the Mauna Loa CO2 data set) through graduate courses in inverse theory (using singular value decomposition). We will highlight six approaches to across-the curriculum efforts from national models: collaboration between mathematics and other faculty; gateway testing; intensive instructional support; workshops for nonmathematics faculty; quantitative reasoning requirement; and individual initiative by nonmathematics faculty.

  5. Accurate backgrounds to Higgs production at the LHC

    Kauer, N

    2007-01-01

    Corrections of 10-30% for backgrounds to the H → WW → ℓ⁺ℓ⁻ + missing transverse momentum search in vector boson and gluon fusion at the LHC are reviewed to make the case for precise and accurate theoretical background predictions.

  6. Accurate estimates of characteristic exponents for second order differential equation

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate method for estimating its characteristic exponent is presented. Finally, some examples are given to verify the feasibility of the result.

  7. Accurate wall thickness measurement using autointerference of circumferential Lamb wave

    In this paper, a method of accurately measuring pipe wall thickness using a noncontact air-coupled ultrasonic transducer (NAUT) is presented. In this method, accurate measurement of the angular wave number (AWN) is a key technique because the AWN changes minutely with the wall thickness. An autointerference of the circumferential (C-) Lamb wave was used for accurate measurement of the AWN. The principle of the method is first explained. A modified method for measuring the wall thickness near a butt weld line is also proposed, and its accuracy was evaluated to be within a 6 μm error. It is also shown in the paper that wall thickness measurement was accurately carried out despite differences among the sensors by calibrating the frequency response of the sensors. (author)

  8. Highly Accurate Sensor for High-Purity Oxygen Determination Project

    National Aeronautics and Space Administration — In this STTR effort, Los Gatos Research (LGR) and the University of Wisconsin (UW) propose to develop a highly-accurate sensor for high-purity oxygen determination....

  9. Quantitative computed tomography

    Adams, Judith E. [Royal Infirmary and University, Manchester (United Kingdom)], E-mail: judith.adams@manchester.ac.uk

    2009-09-15

    Quantitative computed tomography (QCT) was introduced in the mid 1970s. The technique is most commonly applied to 2D slices in the lumbar spine to measure trabecular bone mineral density (BMD; mg/cm3). Although not as widely utilized as dual-energy X-ray absorptiometry (DXA), QCT has some advantages when studying the skeleton (separate measures of cortical and trabecular BMD; measurement of volumetric BMD, as opposed to 'areal' DXA BMDa, so not size dependent; geometric and structural parameters obtained which contribute to bone strength). A limitation is that the World Health Organisation (WHO) definition of osteoporosis in terms of bone densitometry (T score -2.5 or below using DXA) is not applicable. QCT can be performed on conventional body CT scanners, or at peripheral sites (radius, tibia) using smaller, less expensive dedicated peripheral CT scanners (pQCT). Although the ionising radiation dose of spinal QCT is higher than for DXA, the dose compares favorably with the doses of other radiographic procedures (spinal radiographs) performed in patients suspected of having osteoporosis. The radiation dose from peripheral QCT scanners is negligible. Technical developments in CT (spiral multi-detector CT; improved spatial resolution) allow rapid acquisition of 3D volume images which enable QCT to be applied to the clinically important site of the proximal femur, more sophisticated analysis of cortical and trabecular bone, the imaging of trabecular structure and the application of finite element analysis (FEA). Such research studies contribute importantly to the understanding of bone growth and development, the effect of disease and treatment on the skeleton and the biomechanics of bone strength and fracture.

  10. Quantitative Luminescence Imaging System

    The goal of the MEASUREMENT OF CHEMILUMINESCENCE project is to develop and deliver a suite of imaging radiometric instruments for measuring spatial distributions of chemiluminescence. Envisioned deliverables include instruments working at the microscopic, macroscopic, and life-sized scales. Both laboratory and field portable instruments are envisioned. The project also includes development of phantoms as enclosures for the diazoluminomelanin (DALM) chemiluminescent chemistry. A suite of either phantoms in a variety of typical poses, or phantoms that could be adjusted to a variety of poses, is envisioned. These are to include small mammals (rats), mid-sized mammals (monkeys), and human body parts. A complete human phantom that can be posed is a long-term goal of the development. Taken together, the chemistry and instrumentation provide a means for imaging rf dosimetry based on chemiluminescence induced by the heat resulting from rf energy absorption. The first delivered instrument, the Quantitative Luminescence Imaging System (QLIS), resulted in a patent, and an R&D Magazine 1991 R&D 100 award, recognizing it as one of the 100 most significant technological developments of 1991. The current status of the project is that three systems have been delivered, several related studies have been conducted, two preliminary human hand phantoms have been delivered, system upgrades have been implemented, and calibrations have been maintained. Current development includes sensitivity improvements to the microscope-based system; extension of the large-scale (potentially life-sized targets) system to field portable applications; extension of the 2-D large-scale system to 3-D measurement; imminent delivery of a more refined human hand phantom and a rat phantom; rf, thermal and imaging subsystem integration; and continued calibration and upgrade support

  11. Quantitative micro-CT

    Prevrhal, Sven

    2005-09-01

    Micro-CT for bone structural analysis has progressed from an in-vitro laboratory technique to devices for in-vivo assessment of small animals and the peripheral human skeleton. Currently, topological parameters of bone architecture are the primary goals of analysis. Additional measurement of the density or degree of mineralization (DMB) of trabecular and cortical bone at the microscopic level is desirable to study effects of disease and treatment progress. This information is not commonly extracted because of the challenges of accurate measurement and calibration at the tissue level. To assess the accuracy of micro-CT DMB measurements in a realistic but controlled situation, we prepared bone-mimicking watery solutions at concentrations of 100 to 600 mg/cm3 K2HPO4 and scanned them with micro-CT, both in glass vials and in microcapillary tubes with inner diameters of 50, 100 and 150 μm to simulate trabecular thickness. Values of the linear attenuation coefficient μ in the reconstructed image are commonly affected by beam hardening effects for larger samples and by partial volume effects for small volumes. We implemented an iterative reconstruction technique to reduce beam hardening. Partial volume effects were reduced by excluding voxels near the tube wall. With these two measures, the constancy of the reconstructed voxel values and their linearity with solution concentration improved to better than 90% accuracy. However, since the expected change in real bone is small, more measurements are needed to confirm that micro-CT can indeed be adapted to assess bone mineralization at the tissue level.

  12. Quantitative Nuclear Medicine. Chapter 17

    Planar imaging is still used in clinical practice although tomographic imaging (single photon emission computed tomography (SPECT) and positron emission tomography (PET)) is becoming more established. In this chapter, quantitative methods for both imaging techniques are presented. Planar imaging is limited to single photon. For both SPECT and PET, the focus is on the quantitative methods that can be applied to reconstructed images

  13. Quantitative linguistics within Czech contexts

    Králík, Jan

    Berlin : Mouton de Gruyter, 2007 - (Grzybek, P.; Köhler, R.), s. 343-351 ISBN 978-3-11-019354-1. - (Quantitative Linguistics 62) R&D Projects: GA AV ČR 1ET101120413 Institutional research plan: CEZ:AV0Z90610521 Keywords : quantitative linguistics Subject RIV: AI - Linguistics

  14. Mastering R for quantitative finance

    Berlinger, Edina; Badics, Milán; Banai, Ádám; Daróczi, Gergely; Dömötör, Barbara; Gabler, Gergely; Havran, Dániel; Juhász, Péter; Margitai, István; Márkus, Balázs; Medvegyev, Péter; Molnár, Julia; Szucs, Balázs Árpád; Tuza, Ágnes; Vadász, Tamás; Váradi, Kata; Vidovics-Dancs, Ágnes

    2015-01-01

    This book is intended for those who want to learn how to use R's capabilities to build models in quantitative finance at a more advanced level. If you wish to keep up with the rhythm of the chapters, you should be at an intermediate level in quantitative finance and also have a reasonable knowledge of R.

  15. Quantitative diagnosis of skeletons with demineralizing osteopathy

    The quantitative diagnosis of bone diseases must be assessed according to the accuracy of the applied method, the expense in apparatus, personnel and financial resources, and the comparability of results. Nuclide absorptiometry and in the future perhaps computed tomography represent the most accurate methods for determining the mineral content of bones. Their application is the clinics' prerogative because of the costs. Morphometry provides quantitative information, in particular in course control, and enables an objective judgement of visual pictures. It requires little expenditure and should be combined with microradioscopy. Direct comparability of the findings of different working groups is easiest in morphometry; it depends on the equipment in computerized tomography and is still hardly possible in nuclide absorptiometry. For fundamental physical reasons, it will hardly be possible to produce a low-cost, fast and easy-to-handle instrument for the determination of the mineral salt concentration in bones. Instead, there is rather a trend towards more expensive equipment, e.g. CT instruments; the universal use of these instruments, however, will help to promote quantitative diagnoses. (orig.)

  16. Quantitative Species Measurements In Microgravity Combustion Flames

    Chen, Shin-Juh; Pilgrim, Jeffrey S.; Silver, Joel A.; Piltch, Nancy D.

    2003-01-01

    The capability of models and theories to accurately predict and describe the behavior of low gravity flames can only be verified by quantitative measurements. Although video imaging, simple temperature measurements, and velocimetry methods have provided useful information in many cases, there is still a need for quantitative species measurements. Over the past decade, we have been developing high sensitivity optical absorption techniques to permit in situ, non-intrusive, absolute concentration measurements for both major and minor flames species using diode lasers. This work has helped to establish wavelength modulation spectroscopy (WMS) as an important method for species detection within the restrictions of microgravity-based measurements. More recently, in collaboration with Prof. Dahm at the University of Michigan, a new methodology combining computed flame libraries with a single experimental measurement has allowed us to determine the concentration profiles for all species in a flame. This method, termed ITAC (Iterative Temperature with Assumed Chemistry) was demonstrated for a simple laminar nonpremixed methane-air flame at both 1-g and at 0-g in a vortex ring flame. In this paper, we report additional normal and microgravity experiments which further confirm the usefulness of this approach. We also present the development of a new type of laser. This is an external cavity diode laser (ECDL) which has the unique capability of high frequency modulation as well as a very wide tuning range. This will permit the detection of multiple species with one laser while using WMS detection.
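
    As a rough illustration of the wavelength modulation spectroscopy (WMS) principle named above: the laser is dithered across an absorption line and the transmitted intensity is demodulated at twice the modulation frequency, giving a 2f signal that scales with absorber concentration. This is a minimal synthetic sketch, not the instrument's actual processing; the Lorentzian line shape and all parameters are invented.

        import numpy as np

        fm = 10e3                           # modulation frequency [Hz], illustrative
        t = np.linspace(0.0, 5e-3, 200_000)
        mod_depth, width = 1.1, 1.0         # in line-width units

        def lorentzian(detuning):
            return 1.0 / (1.0 + (detuning / width) ** 2)

        ref_2f = np.cos(2 * np.pi * 2 * fm * t)                # lock-in reference at 2f
        for conc in (0.1, 0.2, 0.4):                           # arbitrary concentrations
            detuning = mod_depth * np.cos(2 * np.pi * fm * t)  # dither about line center
            transmitted = 1.0 - conc * lorentzian(detuning)    # weak-absorption limit
            wms_2f = 2.0 * np.mean(transmitted * ref_2f)       # low-pass via the mean
            print(f"conc={conc:.1f}  2f amplitude={wms_2f:+.4f}")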

  17. Quantitative tomographic measurements of opaque multiphase flows

    GEORGE,DARIN L.; TORCZYNSKI,JOHN R.; SHOLLENBERGER,KIM ANN; O' HERN,TIMOTHY J.; CECCIO,STEVEN L.

    2000-03-01

    An electrical-impedance tomography (EIT) system has been developed for quantitative measurements of radial phase distribution profiles in two-phase and three-phase vertical column flows. The EIT system is described along with the computer algorithm used for reconstructing phase volume fraction profiles. EIT measurements were validated by comparison with a gamma-densitometry tomography (GDT) system. The EIT system was used to accurately measure average solid volume fractions up to 0.05 in solid-liquid flows, and radial gas volume fraction profiles in gas-liquid flows with gas volume fractions up to 0.15. In both flows, average phase volume fractions and radial volume fraction profiles from GDT and EIT were in good agreement. A minor modification to the formula used to relate conductivity data to phase volume fractions was found to improve agreement between the methods. GDT and EIT were then applied together to simultaneously measure the solid, liquid, and gas radial distributions within several vertical three-phase flows. For average solid volume fractions up to 0.30, the gas distribution for each gas flow rate was approximately independent of the amount of solids in the column. Measurements made with this EIT system demonstrate that EIT may be used successfully for noninvasive, quantitative measurements of dispersed multiphase flows.
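
    The formula relating conductivity to phase volume fraction is not spelled out in the abstract; a common choice in two-phase EIT work is Maxwell's relation for a non-conducting dispersed phase, sketched here under that assumption.

        def gas_volume_fraction(sigma_mix, sigma_liquid):
            """Maxwell (1881) relation for non-conducting spheres dispersed in a
            conducting liquid: sigma_mix / sigma_liquid = 2(1 - phi) / (2 + phi),
            inverted here for the volume fraction phi."""
            return 2.0 * (sigma_liquid - sigma_mix) / (2.0 * sigma_liquid + sigma_mix)

        # Example: a 10% drop in measured mixture conductivity.
        print(gas_volume_fraction(sigma_mix=0.9, sigma_liquid=1.0))  # ~0.069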

  18. Quantitative methods for indirect CT lymphography

    In this investigation, we applied quantitative CT methods to characterize contrast enhanced lymph nodes opacified using iodinated contrast media for indirect CT lymphography. Iodinated nanoparticles were injected into the buccal submucosa and SQ into the metatarsus and metacarpus of four normal swine (1.0-4.0 ml/site, 76 mg I/ml). Attenuation (HU), volume (cm3), iodine concentration (mg I/cm3), total iodine uptake (mg I), contrast-to-noise ratio (CNR), and percent injected dose (%ID) were estimated in opacified inguinal, cervical and parotid/mandibular lymph nodes using manual image segmentation techniques on 24 hour post-contrast CT images. Lymph node volumes estimated by multiple slice ROI analysis were compared with estimates obtained by post-excisional weight measurements. HU and iodine concentration increased 5-20 fold in opacified nodes (p < 0.01) and CNR increased more than four-fold (p < 0.001). %ID ranged between 3.5 and 11.9% and did not appear dose related. ROI estimated lymph node volumes approximated volumes calculated from weight measurements (R2 = 0.94, p < 0.0001). We conclude that interstitially injected iodinated nanoparticles increase attenuation and conspicuity of targeted nodes on CT images. Quantitative methods could play an important clinical role in more accurate metastasis detection
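
    The node-level metrics above follow from simple ROI arithmetic once a HU-to-iodine calibration is known. A sketch with a hypothetical calibration slope (the study's actual calibration is not given in the abstract):

        HU_PER_MG_I_PER_CM3 = 25.0   # hypothetical calibration slope, HU per (mg I/cm^3)

        def node_metrics(mean_hu, baseline_hu, volume_cm3, injected_mg_i, noise_sd):
            iodine_conc = (mean_hu - baseline_hu) / HU_PER_MG_I_PER_CM3  # mg I/cm^3
            total_iodine = iodine_conc * volume_cm3                      # mg I
            cnr = (mean_hu - baseline_hu) / noise_sd                     # contrast-to-noise
            pct_id = 100.0 * total_iodine / injected_mg_i                # % injected dose
            return iodine_conc, total_iodine, cnr, pct_id

        # Illustrative node: 260 HU vs. a 40 HU baseline, 0.8 cm^3, 76 mg I injected.
        print(node_metrics(mean_hu=260, baseline_hu=40, volume_cm3=0.8,
                           injected_mg_i=76.0, noise_sd=12.0))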

  19. Quantitative shadowgraphy and proton radiography for large intensity modulations

    Kasim, Muhammad Firmansyah; Ratan, Naren; Sadler, James; Chen, Nicholas; Savert, Alexander; Trines, Raoul; Bingham, Robert; Burrows, Philip N; Kaluza, Malte C; Norreys, Peter

    2016-01-01

    Shadowgraphy is a technique widely used to diagnose objects or systems in various fields in physics and engineering. In shadowgraphy, an optical beam is deflected by the object and then the intensity modulation is captured on a screen placed some distance away. However, retrieving quantitative information from the shadowgrams themselves is a challenging task because of the non-linear nature of the process. Here, a novel method to retrieve quantitative information from shadowgrams, based on computational geometry, is presented for the first time. This process can be applied to proton radiography for electric and magnetic field diagnosis in high-energy-density plasmas and has been benchmarked using a toroidal magnetic field as the object, among others. It is shown that the method can accurately retrieve quantitative parameters with error bars less than 10%, even when caustics are present. The method is also shown to be robust enough to process real experimental results with simple pre- and post-processing techn...

  20. Reconstructing absorption and scattering distributions in quantitative photoacoustic tomography

    Quantitative photoacoustic tomography is a novel hybrid imaging technique aiming at estimating optical parameters inside tissues. The method combines (functional) optical information and accurate anatomical information obtained using ultrasound techniques. The optical inverse problem of quantitative photoacoustic tomography is to estimate the optical parameters within tissue when absorbed optical energy density is given. In this paper we consider reconstruction of absorption and scattering distributions in quantitative photoacoustic tomography. The radiative transport equation and diffusion approximation are used as light transport models and solutions in different size domains are investigated. The simulations show that scaling of the data, for example by using logarithmic data, can be expected to significantly improve the convergence of the minimization algorithm. Furthermore, both the radiative transport equation and diffusion approximation can give good estimates for absorption. However, depending on the optical properties and the size of the domain, the diffusion approximation may not produce as good estimates for scattering as the radiative transport equation. (paper)

  1. Mass Spectrometry-Based Label-Free Quantitative Proteomics

    Wenhong Zhu

    2010-01-01

    In order to study the differential protein expression in complex biological samples, strategies for rapid, highly reproducible and accurate quantification are necessary. Isotope labeling and fluorescent labeling techniques have been widely used in quantitative proteomics research. However, researchers are increasingly turning to label-free shotgun proteomics techniques for faster, cleaner, and simpler results. Mass spectrometry-based label-free quantitative proteomics falls into two general categories. In the first are the measurements of changes in chromatographic ion intensity such as peptide peak areas or peak heights. The second is based on the spectral counting of identified proteins. In this paper, we will discuss the technologies of these label-free quantitative methods, statistics, available computational software, and their applications in complex proteomics studies.
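
    As one concrete instance of the spectral-counting category, the normalized spectral abundance factor (NSAF) divides each protein's spectral count by its length and renormalizes across the sample; a minimal sketch (counts and lengths are illustrative):

        def nsaf(spectral_counts, lengths):
            """NSAF_i = (SpC_i / L_i) / sum_j (SpC_j / L_j)."""
            saf = [c / l for c, l in zip(spectral_counts, lengths)]
            total = sum(saf)
            return [s / total for s in saf]

        # Three proteins: spectral counts and sequence lengths.
        print(nsaf([120, 40, 10], [450, 300, 150]))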

  2. A Quantitative Assessment Approach to COTS Component Security

    Jinfu Chen

    2013-01-01

    The vulnerability of software components hinders the development of component technology. An effective approach to assessing a component's security level can promote the development of component technology. Thus, the current paper proposes a quantitative assessment approach to COTS (commercial-off-the-shelf) component security. The steps of interface fault injection and the assessment framework are given based on the internal factors of the tested component. The quantitative assessment algorithm and formula for the component security level are also presented. The experimental results show that the approach not only detects component security vulnerabilities effectively but also quantitatively assesses the component security level. The score of component security can be accurately calculated, which represents the security level of the tested component.

  3. Quantitative boron detection by neutron transmission method

    Quantitative boron detection is mainly performed by chemical methods like colorimetric titration. The high neutron absorption cross section of natural boron makes its detection by absorption measurements attractive. This work is an extension of earlier investigations where the neutron radiography technique was used for boron detection. In the present investigation, the neutron absorption rate of boron containing solutions is used to measure quantitatively the boron content of the solutions. The investigation was carried out in the Istanbul TRIGA Mark-II reactor. At the end of the experiments, it was observed that even ppw grade boron in aqueous solution can be easily detected. The use of this method is certainly very useful for boron utilizing industries like the glass and steel industries. The major disadvantage of the method is the obligation to always use aqueous solutions in order to detect the boron content homogeneously; steel or glass samples therefore first have to be put into an appropriate solution form. The irradiation of steel samples can give the distribution of boron with the help of imaging, and the suggested method gives its quantitative measurement. The advantages of this method are its quick response time and its accuracy. To test this accuracy, a supposedly unknown solution of boric acid was irradiated and the boron content calculated with the help of the calibration curve. The measured value of boric acid was 0.89 mg and the calculated value was found to be 0.98 mg, which gives an accuracy of 10%. It was also seen that the method is more accurate for low concentrations. (authors)
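
    The quantitation rests on Beer-Lambert attenuation of the thermal beam, T = I/I0 = exp(-N·sigma), so the boron areal density follows directly from the measured transmission. A sketch using the thermal absorption cross section of natural boron (roughly 767 b); the transmission value is illustrative:

        import math

        SIGMA_B = 767e-24        # thermal absorption cross section of natural B [cm^2]
        AVOGADRO = 6.022e23
        M_B = 10.81              # g/mol

        def boron_areal_density(transmission):
            """Invert T = exp(-N * sigma) for the areal atom density N [atoms/cm^2]."""
            return -math.log(transmission) / SIGMA_B

        n = boron_areal_density(0.90)        # a 10% attenuation of the beam
        print(f"{n:.3e} atoms/cm^2 = {n * M_B / AVOGADRO * 1e3:.3f} mg B/cm^2")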

  4. Accurately measuring dynamic coefficient of friction in ultraform finishing

    Briggs, Dennis; Echaves, Samantha; Pidgeon, Brendan; Travis, Nathan; Ellis, Jonathan D.

    2013-09-01

    UltraForm Finishing (UFF) is a deterministic sub-aperture computer numerically controlled grinding and polishing platform designed by OptiPro Systems. UFF is used to grind and polish a variety of optics from simple spherical to fully freeform, and numerous materials from glasses to optical ceramics. The UFF system consists of an abrasive belt around a compliant wheel that rotates and contacts the part to remove material. This work aims to accurately measure the dynamic coefficient of friction (μ), how it changes as a function of belt wear, and how this ultimately affects material removal rates. The coefficient of friction has been examined in terms of contact mechanics and Preston's equation to determine accurate material removal rates. By accurately predicting changes in μ, polishing iterations can be more accurately predicted, reducing the total number of iterations required to meet specifications. We have established an experimental apparatus that can accurately measure μ by measuring triaxial forces during translating loading conditions or while manufacturing the removal spots used to calculate material removal rates. Using this system, we will demonstrate μ measurements for UFF belts during different states of their lifecycle and assess the material removal function from spot diagrams as a function of wear. Ultimately, we will use this system for qualifying belt-wheel-material combinations to develop a spot-morphing model to better predict instantaneous material removal functions.
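
    A sketch of the two quantities this record ties together: the dynamic coefficient of friction from triaxial force samples, and Preston's equation (removal rate = k_p · p · v) for material removal. All numbers below are placeholders, not UFF data:

        import numpy as np

        def dynamic_mu(fx, fy, fz):
            """Dynamic coefficient of friction from triaxial force samples."""
            tangential = np.hypot(fx, fy)
            return np.mean(tangential / fz)

        def preston_removal_rate(k_p, pressure, velocity):
            """Preston's equation: removal rate = k_p * p * v."""
            return k_p * pressure * velocity

        # Simulated noisy force channels [N]; means are arbitrary.
        fx, fy, fz = (np.random.normal(m, 0.05, 1000) for m in (2.0, 0.3, 5.0))
        print(f"mu ~ {dynamic_mu(fx, fy, fz):.3f}")
        print(preston_removal_rate(k_p=1e-13, pressure=5e4, velocity=2.0))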

  5. Understanding quantitative research: part 1.

    Hoe, Juanita; Hoare, Zoë

    This article, which is the first in a two-part series, provides an introduction to understanding quantitative research, basic statistics and terminology used in research articles. Critical appraisal of research articles is essential to ensure that nurses remain up to date with evidence-based practice to provide consistent and high-quality nursing care. This article focuses on developing critical appraisal skills and understanding the use and implications of different quantitative approaches to research. Part two of this article will focus on explaining common statistical terms and the presentation of statistical data in quantitative research. PMID:23346707

  6. Simple and accurate analytical calculation of shortest path lengths

    Melnik, Sergey

    2016-01-01

    We present an analytical approach to calculating the distribution of shortest path lengths (also called intervertex distances, or geodesic paths) between nodes in unweighted undirected networks. We obtain very accurate results for synthetic random networks with specified degree distribution (the so-called configuration model networks). Our method allows us to accurately predict the distribution of shortest path lengths on real-world networks using their degree distribution, or joint degree-degree distribution. Compared to some other methods, our approach is simpler and yields more accurate results. In order to obtain the analytical results, we use the analogy between an infection reaching a node in n discrete time steps (i.e., as in the susceptible-infected epidemic model) and that node being at a distance n from the source of the infection.
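
    The analytical predictions can be checked against a brute-force baseline on configuration-model networks; a sketch with networkx (the degree sequence and sample sizes are illustrative):

        import collections
        import networkx as nx
        import numpy as np

        # Configuration-model network with a Poisson degree sequence.
        rng = np.random.default_rng(0)
        degrees = rng.poisson(5, 2000)
        if degrees.sum() % 2:                # the degree sum must be even
            degrees[0] += 1
        G = nx.Graph(nx.configuration_model(degrees))   # collapse multi-edges
        G.remove_edges_from(nx.selfloop_edges(G))

        # Empirical distribution of shortest path lengths from a node sample.
        hist = collections.Counter()
        for source in list(G)[:200]:
            hist.update(nx.single_source_shortest_path_length(G, source).values())
        del hist[0]                          # drop the trivial zero-length paths
        total = sum(hist.values())
        for d in sorted(hist):
            print(d, hist[d] / total)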

  7. Accurate and Simple Calibration of DLP Projector Systems

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP Projector systems based on phase shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert feature coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination of...
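
    For context, the phase extraction underlying phase shifting profilometry has a closed form for three fringe patterns shifted by 120°. The sketch below shows only that generic formula; the paper's radial interpolation into projector space is not reproduced here:

        import numpy as np

        def three_step_phase(i1, i2, i3):
            """Wrapped phase from three fringe images with shifts -120°, 0°, +120°:
            phi = atan2(sqrt(3) * (I1 - I3), 2*I2 - I1 - I3)."""
            return np.arctan2(np.sqrt(3.0) * (i1 - i3), 2.0 * i2 - i1 - i3)

        # Synthetic check: recover a known phase ramp from three shifted patterns.
        phi_true = np.linspace(-2.0, 2.0, 5)
        shots = [0.5 + 0.4 * np.cos(phi_true + s)
                 for s in (-2 * np.pi / 3, 0.0, 2 * np.pi / 3)]
        print(three_step_phase(*shots))   # ~phi_true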

  8. Accurate level set method for simulations of liquid atomization

    Changxiao Shao; Kun Luo; Jianshan Yang; Song Chen; Jianren Fan

    2015-01-01

    Computational fluid dynamics is an efficient numerical approach for spray atomization study, but it is chal enging to accurately capture the gas–liquid interface. In this work, an accurate conservative level set method is intro-duced to accurately track the gas–liquid interfaces in liquid atomization. To validate the capability of this method, binary drop collision and drop impacting on liquid film are investigated. The results are in good agreement with experiment observations. In addition, primary atomization (swirling sheet atomization) is studied using this method. To the swirling sheet atomization, it is found that Rayleigh–Taylor instability in the azimuthal direction causes the primary breakup of liquid sheet and complex vortex structures are clustered around the rim of the liq-uid sheet. The effects of central gas velocity and liquid–gas density ratio on atomization are also investigated. This work lays a solid foundation for further studying the mechanism of spray atomization.

  9. Accurate nuclear radii and binding energies from a chiral interaction

    Ekstrom, A; Wendt, K A; Hagen, G; Papenbrock, T; Carlsson, B D; Forssen, C; Hjorth-Jensen, M; Navratil, P; Nazarewicz, W

    2015-01-01

    The accurate reproduction of nuclear radii and binding energies is a long-standing challenge in nuclear theory. To address this problem two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective 3⁻ states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.

  10. Evaluating Multiplexed Quantitative Phosphopeptide Analysis on a Hybrid Quadrupole Mass Filter/Linear Ion Trap/Orbitrap Mass Spectrometer

    Erickson, Brian K.; Jedrychowski, Mark P.; McAlister, Graeme C.; Everley, Robert A.; Kunz, Ryan; Gygi, Steven P.

    2014-01-01

    As a driver for many biological processes, phosphorylation remains an area of intense research interest. Advances in multiplexed quantitation utilizing isobaric tags (e.g., TMT and iTRAQ) have the potential to create a new paradigm in quantitative proteomics. New instrumentation and software are propelling these multiplexed workflows forward, which results in more accurate, sensitive, and reproducible quantitation across tens of thousands of phosphopeptides. This study assesses the performanc...

  11. Equivalent method for accurate solution to linear interval equations

    王冲; 邱志平

    2013-01-01

    Based on linear interval equations, an accurate interval finite element method for solving structural static problems with uncertain parameters in terms of optimization is discussed. On the premise of ensuring the consistency of solution sets, the original interval equations are equivalently transformed into deterministic inequalities. On this basis, calculating the structural displacement response with interval parameters is reduced to a number of deterministic linear optimization problems. The results are proved to be accurate with respect to the interval governing equations. Finally, a numerical example is given to demonstrate the feasibility and efficiency of the proposed method.
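
    A low-tech baseline for such displacement bounds is vertex (endpoint) enumeration: solve the deterministic system at every combination of interval endpoints and take the envelope. This is only a heuristic for general interval systems and is not the optimization formulation of the paper; a sketch with an invented one-parameter stiffness matrix:

        import itertools
        import numpy as np

        # K(p) u = f with one uncertain stiffness entry p in [0.9, 1.1] (illustrative).
        def stiffness(p):
            return np.array([[2.0 + p, -1.0],
                             [-1.0,     2.0]])

        f = np.array([1.0, 0.5])
        intervals = [(0.9, 1.1)]

        solutions = []
        for vertex in itertools.product(*intervals):
            solutions.append(np.linalg.solve(stiffness(*vertex), f))
        solutions = np.array(solutions)
        print("lower bounds:", solutions.min(axis=0))
        print("upper bounds:", solutions.max(axis=0))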

  12. Accurate upwind-monotone (nonoscillatory) methods for conservation laws

    Huynh, Hung T.

    1992-01-01

    The well known MUSCL scheme of Van Leer is constructed using a piecewise linear approximation. The MUSCL scheme is second order accurate at the smooth part of the solution except at extrema where the accuracy degenerates to first order due to the monotonicity constraint. To construct accurate schemes which are free from oscillations, the author introduces the concept of upwind monotonicity. Several classes of schemes, which are upwind monotone and of uniform second or third order accuracy are then presented. Results for advection with constant speed are shown. It is also shown that the new scheme compares favorably with state of the art methods.
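
    For a concrete picture of the piecewise linear construction, here is a textbook-style MUSCL update with a minmod slope limiter for linear advection (u_t + u_x = 0, unit speed). This is the classical limited scheme, not the upwind-monotone schemes introduced in the paper:

        import numpy as np

        def minmod(a, b):
            """Slope limiter: zero at sign changes, otherwise the smaller magnitude."""
            return np.where(a * b > 0, np.sign(a) * np.minimum(abs(a), abs(b)), 0.0)

        def muscl_step(u, cfl):
            """One upwind MUSCL update for u_t + u_x = 0 on a periodic grid."""
            du = minmod(u - np.roll(u, 1), np.roll(u, -1) - u)   # limited slopes
            u_face = u + 0.5 * (1.0 - cfl) * du                  # upwind face values
            return u - cfl * (u_face - np.roll(u_face, 1))

        x = np.linspace(0, 1, 200, endpoint=False)
        u = np.where((x > 0.4) & (x < 0.6), 1.0, 0.0)            # square wave
        for _ in range(100):
            u = muscl_step(u, cfl=0.5)                           # advect without overshoots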

  13. High accurate thickness gauge system of zirconium and Zircaloy-2 layers for zirconium liner cladding tubes

    In boiling water reactors, zirconium (Zr)-Zircaloy cladding tubes have been put into practice to lengthen the life cycle of the cladding tube. The cladding tube is a duplex tube with an inner layer of pure Zr bonded metallurgically to a Zircaloy-2 layer. The assurance of the inner and outer layer thickness is essential for the reliability of the cladding tube. A new thickness gauge system for the manufacturing process has been developed to measure the thickness of each layer over the entire tube length, replacing the conventional microscopic viewing method. This system uses an eddy current method and an ultrasonic method. In this paper, the quantitative analysis of undesirable factors in the eddy current method and the signal processing method for accurate measurement are described. The outline of the fully automated thickness gauge system is also reported.

  14. Chemically Accurate Simulation of a Polyatomic Molecule-Metal Surface Reaction.

    Nattino, Francesco; Migliorini, Davide; Kroes, Geert-Jan; Dombrowski, Eric; High, Eric A; Killelea, Daniel R; Utz, Arthur L

    2016-07-01

    Although important to heterogeneous catalysis, the ability to accurately model reactions of polyatomic molecules with metal surfaces has not kept pace with developments in gas phase dynamics. Partnering the specific reaction parameter (SRP) approach to density functional theory with ab initio molecular dynamics (AIMD) extends our ability to model reactions with metals with quantitative accuracy from only the lightest reactant, H2, to essentially all molecules. This is demonstrated with AIMD calculations on CHD3 + Ni(111) in which the SRP functional is fitted to supersonic beam experiments, and validated by showing that AIMD with the resulting functional reproduces initial-state selected sticking measurements with chemical accuracy (4.2 kJ/mol ≈ 1 kcal/mol). The need for only semilocal exchange makes our scheme computationally tractable for dissociation on transition metals. PMID:27284787

  15. Accurate Non-adiabatic Quantum Dynamics from Pseudospectral Sampling of Time-dependent Gaussian Basis Sets

    Heaps, Charles W

    2016-01-01

    Quantum molecular dynamics requires an accurate representation of the molecular potential energy surface from a minimal number of electronic structure calculations, particularly for nonadiabatic dynamics where excited states are required. In this paper, we employ pseudospectral sampling of time-dependent Gaussian basis functions for the simulation of non-adiabatic dynamics. Unlike other methods, the pseudospectral Gaussian molecular dynamics tests the Schrödinger equation with N Dirac delta functions located at the centers of the Gaussian functions, reducing the scaling of potential energy evaluations from O(N²) to O(N). By projecting the Gaussian basis onto discrete points in space, the method is capable of efficiently and quantitatively describing nonadiabatic population transfer and intra-surface quantum coherence. We investigate three model systems: the photodissociation of three coupled Morse oscillators, the bound state dynamics of two coupled Morse oscillators, and a two-d...

  16. Accurate formula for dissipative interaction in frequency modulation atomic force microscopy

    Suzuki, Kazuhiro; Matsushige, Kazumi; Yamada, Hirofumi [Department of Electronic Science and Engineering, Kyoto University, Katsura, Nishikyo, Kyoto 615-8510 (Japan); Kobayashi, Kei [Department of Electronic Science and Engineering, Kyoto University, Katsura, Nishikyo, Kyoto 615-8510 (Japan); The Hakubi Center for Advanced Research, Kyoto University, Katsura, Nishikyo, Kyoto 615-8520 (Japan); Labuda, Aleksander [Department of Physics, McGill University, Montreal H3A 2T8 (Canada)

    2014-12-08

    Much interest has recently focused on the viscosity of nano-confined liquids. Frequency modulation atomic force microscopy (FM-AFM) is a powerful technique that can detect variations in the conservative and dissipative forces between a nanometer-scale tip and a sample surface. We now present an accurate formula to convert the dissipation power of the cantilever measured during the experiment to damping of the tip-sample system. We demonstrated the conversion of the dissipation power versus tip-sample separation curve measured using a colloidal probe cantilever on a mica surface in water to the damping curve, which showed a good agreement with the theoretical curve. Moreover, we obtained the damping curve from the dissipation power curve measured on the hydration layers on the mica surface using a nanometer-scale tip, demonstrating that the formula allows us to quantitatively measure the viscosity of a nano-confined liquid using FM-AFM.

  17. Accurate formula for dissipative interaction in frequency modulation atomic force microscopy

    Suzuki, Kazuhiro; Kobayashi, Kei; Labuda, Aleksander; Matsushige, Kazumi; Yamada, Hirofumi

    2014-12-01

    Much interest has recently focused on the viscosity of nano-confined liquids. Frequency modulation atomic force microscopy (FM-AFM) is a powerful technique that can detect variations in the conservative and dissipative forces between a nanometer-scale tip and a sample surface. We now present an accurate formula to convert the dissipation power of the cantilever measured during the experiment to damping of the tip-sample system. We demonstrated the conversion of the dissipation power versus tip-sample separation curve measured using a colloidal probe cantilever on a mica surface in water to the damping curve, which showed a good agreement with the theoretical curve. Moreover, we obtained the damping curve from the dissipation power curve measured on the hydration layers on the mica surface using a nanometer-scale tip, demonstrating that the formula allows us to quantitatively measure the viscosity of a nano-confined liquid using FM-AFM.

  18. Accurate determination of the 235U isotope abundance by gamma spectrometry

    The purpose of this manual is to serve as a guide in applications of the Certified Reference Material EC-NRM-171/NBS-SRM-969 for accurate U-235 isotope abundance measurements on bulk uranium samples by means of gamma spectrometry. The manual provides a thorough description of this non-destructive assay technique. Crucial measurement parameters affecting the accuracy of the gamma-spectrometric U-235 isotope abundance determination are discussed in detail and, wherever possible, evaluated quantitatively. The correction terms and tolerance limits given refer both to physical and chemical properties of the samples under assay and to relevant parameters of typical measurement systems such as counting geometry, signal processing, data evaluation and calibration. (orig.)

  19. Accurate formula for dissipative interaction in frequency modulation atomic force microscopy

    Much interest has recently focused on the viscosity of nano-confined liquids. Frequency modulation atomic force microscopy (FM-AFM) is a powerful technique that can detect variations in the conservative and dissipative forces between a nanometer-scale tip and a sample surface. We now present an accurate formula to convert the dissipation power of the cantilever measured during the experiment to damping of the tip-sample system. We demonstrated the conversion of the dissipation power versus tip-sample separation curve measured using a colloidal probe cantilever on a mica surface in water to the damping curve, which showed a good agreement with the theoretical curve. Moreover, we obtained the damping curve from the dissipation power curve measured on the hydration layers on the mica surface using a nanometer-scale tip, demonstrating that the formula allows us to quantitatively measure the viscosity of a nano-confined liquid using FM-AFM

  20. Chemically Accurate Simulation of a Polyatomic Molecule-Metal Surface Reaction

    2016-01-01

    Although important to heterogeneous catalysis, the ability to accurately model reactions of polyatomic molecules with metal surfaces has not kept pace with developments in gas phase dynamics. Partnering the specific reaction parameter (SRP) approach to density functional theory with ab initio molecular dynamics (AIMD) extends our ability to model reactions with metals with quantitative accuracy from only the lightest reactant, H2, to essentially all molecules. This is demonstrated with AIMD calculations on CHD3 + Ni(111) in which the SRP functional is fitted to supersonic beam experiments, and validated by showing that AIMD with the resulting functional reproduces initial-state selected sticking measurements with chemical accuracy (4.2 kJ/mol ≈ 1 kcal/mol). The need for only semilocal exchange makes our scheme computationally tractable for dissociation on transition metals. PMID:27284787

  1. Quantitative methods in accounting research

    Marek Gruszczynski

    2009-01-01

    Quantitative methods are in frequent use in modern accounting research. The evidence may be found, e.g., in journals like “Journal of Accounting Research”, “European Accounting Review”, “Review of Quantitative Finance and Accounting” or in the Accounting Research Network in the SSRN base. The paper presents a brief survey of research areas and statistical-econometric approaches in accounting research. Particular reference goes to research on corporate disclosure. Methodological component of the pap...

  2. 09432 Report -- Quantitative Software Design

    Kreissig, Astrid; Poernomo, Iman; Reussner, Ralf

    2010-01-01

    Between 20.10.09 and 23.10.09, the Dagstuhl Seminar 09432, Quantitative Software Design, was held at the International Conference and Research Center (IBFI), Schloss Dagstuhl. Quantitative software design is a field of research that is not yet firmly established. A number of challenging open research issues are only recently being addressed by the academic research community (see below). The topic is also gaining increasing emphasis in industrial research, as any progress...

  3. Plant biology through quantitative proteomics

    Bygdell, Joakim

    2013-01-01

    Over the last decade the field of mass spectrometry based proteomics has advanced from qualitative analyses, leading to publications revolving around lists of identified proteins and peptides, to addressing more biologically relevant issues requiring measurement of the abundance of identified proteins, and hence quantitative mass spectrometry. The work described in this thesis addresses problems with quantitative proteomics in plant sciences, particularly complications caused by the complexity

  4. Assessing size of pituitary adenomas: a comparison of qualitative and quantitative methods on MR

    Davies, Benjamin M; Carr, Elizabeth; Soh, Calvin; Gnanalingham, Kanna K.

    2016-01-01

    Background A variety of methods are used for estimating pituitary tumour size in clinical practice and in research. Quantitative methods, such as maximum tumour dimension, and qualitative methods, such as Hardy and Knosp grades, are well established but do not give an accurate assessment of the tumour volume. We therefore sought to compare existing measures of pituitary tumours with more quantitative methods of tumour volume estimation. Method Magnetic resonance imaging was reviewed for 99 co...

  5. QIN. Promise and pitfalls of quantitative imaging in oncology clinical trials

    Kurland, Brenda F; Gerstner, Elizabeth R.; Mountz, James M; Schwartz, Lawrence H.; Ryan, Christopher W.; Graham, Michael M.; Buatti, John M.; Fennessy, Fiona M.; Eikman, Edward A.; Kumar, Virendra; Forster, Kenneth M.; Wahl, Richard L.; Lieberman, Frank S.

    2012-01-01

    Quantitative imaging using CT, MRI, and PET modalities will play an increasingly important role in the design of oncology trials addressing molecularly targeted, personalized therapies. The advent of molecularly targeted therapies, exemplified by antiangiogenic drugs, creates new complexities in the assessment of response. The Quantitative Imaging Network (QIN) addresses the need for imaging modalities which can accurately and reproducibly measure not just change in tumor size, but changes in...

  6. A Simple, Quantitative Method Using Alginate Gel to Determine Rat Colonic Tumor Volume In Vivo

    Amy A. Irving; Young, Lindsay B; Pleiman, Jennifer K; Konrath, Michael J; Marzella, Blake; Nonte, Michael; Cacciatore, Justin; Ford, Madeline R; Clipson, Linda; Amos-Landgraf, James M.; Dove, William F.

    2014-01-01

    Many studies of the response of colonic tumors to therapeutics use tumor multiplicity as the endpoint to determine the effectiveness of the agent. These studies can be greatly enhanced by accurate measurements of tumor volume. Here we present a quantitative method to easily and accurately determine colonic tumor volume. This approach uses a biocompatible alginate to create a negative mold of a tumor-bearing colon; this mold is then used to make positive casts of dental stone that replicate th...

  7. Is Expressive Language Disorder an Accurate Diagnostic Category?

    Leonard, Laurence B.

    2009-01-01

    Purpose: To propose that the diagnostic category of "expressive language disorder" as distinct from a disorder of both expressive and receptive language might not be accurate. Method: Evidence that casts doubt on a pure form of this disorder is reviewed from several sources, including the literature on genetic findings, theories of language…

  8. Accurate momentum transfer cross section for the attractive Yukawa potential

    Khrapak, S. A., E-mail: Sergey.Khrapak@dlr.de [Forschungsgruppe Komplexe Plasmen, Deutsches Zentrum für Luft- und Raumfahrt, Oberpfaffenhofen (Germany)

    2014-04-15

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to better than ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  9. Is a Writing Sample Necessary for "Accurate Placement"?

    Sullivan, Patrick; Nielsen, David

    2009-01-01

    The scholarship about assessment for placement is extensive and notoriously ambiguous. Foremost among the questions that continue to be unresolved in this scholarship is this one: Is a writing sample necessary for "accurate placement"? Using a robust data sample of student assessment essays and ACCUPLACER test scores, we put this question to the…

  10. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  11. Fast and Accurate Residential Fire Detection Using Wireless Sensor Networks

    Bahrepour, Majid; Meratnia, Nirvana; Havinga, Paul J.M.

    2010-01-01

    Prompt and accurate residential fire detection is important for on-time fire extinguishing and consequently reducing damage and life losses. To detect fire, sensors are needed to measure the environmental parameters, and algorithms are required to decide about the occurrence of fire. Recently, wireless s

  12. Efficient and accurate sound propagation using adaptive rectangular decomposition.

    Raghuvanshi, Nikunj; Narain, Rahul; Lin, Ming C

    2009-01-01

    Accurate sound rendering can add significant realism to complement visual display in interactive applications, as well as facilitate acoustic predictions for many engineering applications, like accurate acoustic analysis for architectural design. Numerical simulation can provide this realism most naturally by modeling the underlying physics of wave propagation. However, wave simulation has traditionally posed a tough computational challenge. In this paper, we present a technique which relies on an adaptive rectangular decomposition of 3D scenes to enable efficient and accurate simulation of sound propagation in complex virtual environments. It exploits the known analytical solution of the Wave Equation in rectangular domains, and utilizes an efficient implementation of the Discrete Cosine Transform on Graphics Processors (GPU) to achieve at least a 100-fold performance gain compared to a standard Finite-Difference Time-Domain (FDTD) implementation with comparable accuracy, while also being 10-fold more memory efficient. Consequently, we are able to perform accurate numerical acoustic simulation on large, complex scenes in the kilohertz range. To the best of our knowledge, it was not previously possible to perform such simulations on a desktop computer. Our work thus enables acoustic analysis on large scenes and auditory display for complex virtual environments on commodity hardware. PMID:19590105
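
    Within each rectangular partition the wave equation can be advanced exactly in the cosine-mode basis, which is what the Discrete Cosine Transform implementation exploits. A minimal sketch of that inner modal update (interface handling between partitions and absorbing boundaries are omitted; all parameters are illustrative):

        import numpy as np
        from scipy.fft import dctn, idctn

        c, Lx, Ly, n = 343.0, 1.0, 1.0, 64
        dt = 1e-4
        i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
        omega = c * np.pi * np.sqrt((i / Lx) ** 2 + (j / Ly) ** 2)  # mode frequencies

        def ard_step(m_cur, m_prev):
            """Exact harmonic-oscillator update for each cosine mode:
            M^{n+1} = 2 cos(omega * dt) M^n - M^{n-1}."""
            return 2.0 * np.cos(omega * dt) * m_cur - m_prev

        # Gaussian pressure pulse as the initial field (zero initial velocity).
        x = np.linspace(0, Lx, n)
        p0 = np.exp(-((x[:, None] - 0.5) ** 2 + (x[None, :] - 0.5) ** 2) / 0.01)
        m_prev = m_cur = dctn(p0, type=2, norm="ortho")
        for _ in range(200):
            m_cur, m_prev = ard_step(m_cur, m_prev), m_cur
        p = idctn(m_cur, type=2, norm="ortho")   # back to the spatial field
        print(p.min(), p.max())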

  13. Accurate Period Approximation for Any Simple Pendulum Amplitude

    XUE De-Sheng; ZHOU Zhao; GAO Mei-Zhen

    2012-01-01

    Accurate approximate analytical formulae of the pendulum period composed of a few elementary functions for any amplitude are constructed. Based on an approximation of the elliptic integral, two new logarithmic formulae for large amplitude close to 180° are obtained. Considering the trigonometric function modulation results from the dependence of relative error on the amplitude, we realize accurate approximation period expressions for any amplitude between 0 and 180°. A relative error less than 0.02% is achieved for any amplitude. This kind of modulation is also effective for other large-amplitude logarithmic approximation expressions.
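
    The exact period these formulae approximate is T = 4·sqrt(L/g)·K(m) with m = sin²(θ0/2) in scipy's parameter convention; a quick numerical baseline for comparing any approximation:

        import numpy as np
        from scipy.special import ellipk

        def exact_period(theta0, L=1.0, g=9.81):
            """T = 4 * sqrt(L/g) * K(m), m = sin^2(theta0/2) (scipy's convention)."""
            return 4.0 * np.sqrt(L / g) * ellipk(np.sin(theta0 / 2.0) ** 2)

        T0 = 2.0 * np.pi * np.sqrt(1.0 / 9.81)     # small-angle period
        for deg in (10, 90, 170):
            T = exact_period(np.radians(deg))
            print(f"{deg:3d} deg: T/T0 = {T / T0:.4f}")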

  14. Second-order accurate nonoscillatory schemes for scalar conservation laws

    Huynh, Hung T.

    1989-01-01

    Explicit finite difference schemes for the computation of weak solutions of nonlinear scalar conservation laws are presented and analyzed. These schemes are uniformly second-order accurate and nonoscillatory in the sense that the number of extrema of the discrete solution is not increasing in time.

  15. Accurate segmentation of dense nanoparticles by partially discrete electron tomography

    Roelandts, T., E-mail: tom.roelandts@ua.ac.be [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Batenburg, K.J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, 1098 XG Amsterdam (Netherlands); Biermans, E. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Kuebel, C. [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Sijbers, J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium)

    2012-03-15

    Accurate segmentation of nanoparticles within various matrix materials is a difficult problem in electron tomography. Due to artifacts related to image series acquisition and reconstruction, global thresholding of reconstructions computed by established algorithms, such as weighted backprojection or SIRT, may result in unreliable and subjective segmentations. In this paper, we introduce the Partially Discrete Algebraic Reconstruction Technique (PDART) for computing accurate segmentations of dense nanoparticles of constant composition. The particles are segmented directly by the reconstruction algorithm, while the surrounding regions are reconstructed using continuously varying gray levels. As no properties are assumed for the other compositions of the sample, the technique can be applied to any sample where dense nanoparticles must be segmented, regardless of the surrounding compositions. For both experimental and simulated data, it is shown that PDART yields significantly more accurate segmentations than those obtained by optimal global thresholding of the SIRT reconstruction. -- Highlights: • We present a novel reconstruction method for partially discrete electron tomography. • It accurately segments dense nanoparticles directly during reconstruction. • The gray level to use for the nanoparticles is determined objectively. • The method expands the set of samples for which discrete tomography can be applied.

  16. Accurate momentum transfer cross section for the attractive Yukawa potential

    Khrapak, Sergey

    2014-01-01

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to better than ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  17. Accurate momentum transfer cross section for the attractive Yukawa potential

    Khrapak, S. A.

    2014-01-01

    An accurate expression for the momentum transfer cross section for the attractive Yukawa potential is proposed. This simple analytic expression agrees with the numerical results to better than ±2% in the regime relevant for ion-particle collisions in complex (dusty) plasmas.

  18. On the importance of having accurate data for astrophysical modelling

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for modelling molecular lines beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star forming conditions, have allowed solving the problem of their respective abundances in cold molecular clouds. I will also present the latest collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present the recent work on the ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  19. Isodesmic reaction for accurate theoretical pKa calculations of amino acids and peptides.

    Sastre, S; Casasnovas, R; Muñoz, F; Frau, J

    2016-04-20

    Theoretical and quantitative prediction of pKa values at low computational cost is a current challenge in computational chemistry. We report that the isodesmic reaction scheme provides semi-quantitative predictions (i.e. mean absolute errors of 0.5-1.0 pKa unit) for the pKa1 (α-carboxyl), pKa2 (α-amino) and pKa3 (sidechain groups) of a broad set of amino acids and peptides. This method fills the gaps of thermodynamic cycles for the computational pKa calculation of molecules that are unstable in the gas phase or undergo proton transfer reactions or large conformational changes from solution to the gas phase. We also report the key criteria to choose a reference species to make accurate predictions. This method is computationally inexpensive and makes use of standard density functional theory (DFT) and continuum solvent models. It is also conceptually simple and easy to use for researchers not specialized in theoretical chemistry methods. PMID:27052591
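
    The isodesmic scheme evaluates the solution-phase proton exchange HA + Ref⁻ → A⁻ + HRef and converts its free energy into a pKa shift relative to the reference acid: pKa(HA) = pKa(HRef) + ΔG_exch/(RT ln 10). A sketch of that final arithmetic; the ΔG value below is a placeholder, not a result from the paper:

        import math

        R = 8.314462618e-3        # gas constant [kJ/(mol K)]
        T = 298.15                # temperature [K]

        def isodesmic_pka(delta_g_exchange_kj, pka_reference):
            """pKa(HA) = pKa(HRef) + dG_exch / (RT ln 10)."""
            return pka_reference + delta_g_exchange_kj / (R * T * math.log(10.0))

        # E.g., a computed proton-exchange free energy of +7.5 kJ/mol
        # against a reference acid of pKa 4.76.
        print(isodesmic_pka(7.5, 4.76))   # ~6.07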

  20. Very accurate determination of trace amounts of selenium in biological materials by Radiochemical Neutron Activation Analysis

    Selenium is both a toxic and an essential trace element for humans and animals. The purpose of this work was to elaborate a very accurate (definitive) method for the determination of selenium traces in different types of biological materials. The method is based on a combination of neutron activation and a quantitative and very selective radiochemical separation of selenium by ion-exchange and extraction chromatography, followed by gamma-spectrometric measurement of 75Se. Three amines, 2,3-diaminonaphthalene, 3,3'-diaminobenzidine and 4-nitro-phenyldiamine, supported on Bio Beads SM-2 or Amberlite XAD-4, were chosen for batch experiments. Using 3,3'-diaminobenzidine, tracer experiments were carried out with the unirradiated biological samples. They proved that the whole radiochemical separation procedure is quantitative. The gamma-ray spectrum of the selenium fraction showed practically no activities other than background peaks. The obtained results demonstrate good agreement of the results obtained by our new "definitive" method for the determination of selenium with the certified values

  1. Accurate non-adiabatic quantum dynamics from pseudospectral sampling of time-dependent Gaussian basis sets

    Heaps, Charles W.; Mazziotti, David A.

    2016-08-01

    Quantum molecular dynamics requires an accurate representation of the molecular potential energy surface from a minimal number of electronic structure calculations, particularly for nonadiabatic dynamics where excited states are required. In this paper, we employ pseudospectral sampling of time-dependent Gaussian basis functions for the simulation of non-adiabatic dynamics. Unlike other methods, the pseudospectral Gaussian molecular dynamics tests the Schrödinger equation with N Dirac delta functions located at the centers of the Gaussian functions, reducing the scaling of potential energy evaluations from O(N²) to O(N). By projecting the Gaussian basis onto discrete points in space, the method is capable of efficiently and quantitatively describing the nonadiabatic population transfer and intra-surface quantum coherence. We investigate three model systems: the photodissociation of three coupled Morse oscillators, the bound state dynamics of two coupled Morse oscillators, and a two-dimensional model for collinear triatomic vibrational dynamics. In all cases, the pseudospectral Gaussian method is in quantitative agreement with numerically exact calculations. The results are promising for nonadiabatic molecular dynamics in molecular systems where strongly correlated ground or excited states require expensive electronic structure calculations.

  2. A statistical framework for accurate taxonomic assignment of metagenomic sequencing reads.

    Hongmei Jiang

    The advent of next-generation sequencing technologies has greatly promoted the field of metagenomics, which studies genetic material recovered directly from an environment. Characterization of the genomic composition of a metagenomic sample is essential for understanding the structure of the microbial community. Multiple genomes contained in a metagenomic sample can be identified and quantitated through homology searches of sequence reads against known sequences catalogued in reference databases. Traditionally, reads with multiple genomic hits are assigned to non-specific or high ranks of the taxonomy tree, thereby compromising accurate estimates of the relative abundance of the genomes present in a sample. Instead of assigning reads one by one to the taxonomy tree as many existing methods do, we propose a statistical framework that models the set of candidate genomes to which sequence reads have hits. After obtaining the estimated proportion of reads generated by each genome, sequence reads are assigned to the candidate genomes and the taxonomy tree based on the estimated probability, taking into account both sequence alignment scores and estimated genome abundance. The proposed method is comprehensively tested on both simulated datasets and two real datasets. It assigns reads to low taxonomic ranks very accurately. Our statistical approach to taxonomic assignment of metagenomic reads, TAMER, is implemented in R and available at http://faculty.wcas.northwestern.edu/hji403/MetaR.htm.
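
    The mixture idea can be illustrated with a toy EM iteration: given alignment-based likelihoods of each read under each candidate genome, alternately compute posterior read assignments and re-estimate genome proportions. This is a minimal sketch of the statistical framework, not the published TAMER implementation (which is in R).

```python
import numpy as np

def em_genome_abundance(lik, n_iter=200):
    """Estimate genome proportions from a reads-by-genomes likelihood matrix.
    lik[r, g] is the alignment-based likelihood of read r under genome g
    (zero where the read has no hit)."""
    pi = np.full(lik.shape[1], 1.0 / lik.shape[1])  # initial proportions
    for _ in range(n_iter):
        z = lik * pi                                 # E-step: posterior weights
        z /= z.sum(axis=1, keepdims=True)
        pi = z.mean(axis=0)                          # M-step: new proportions
    return pi, z

# Toy data: 3 reads, 2 candidate genomes; the third read hits both genomes.
lik = np.array([[0.9, 0.0],
                [0.0, 0.8],
                [0.5, 0.5]])
pi, posterior = em_genome_abundance(lik)
print(pi.round(3))         # estimated relative abundances, here [0.5 0.5]
print(posterior.round(3))  # per-read assignment probabilities
```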

  3. A microfabrication-based approach to quantitative isothermal titration calorimetry.

    Wang, Bin; Jia, Yuan; Lin, Qiao

    2016-04-15

    Isothermal titration calorimetry (ITC) directly measures the heat evolved in a chemical reaction to determine the equilibrium binding properties of biomolecular systems. Conventional ITC instruments are expensive, complicated to design and construct, and require long analysis times. Microfabricated calorimetric devices are promising, although they have yet to allow accurate, quantitative ITC measurements of biochemical reactions. This paper presents a microfabrication-based approach to integrated, quantitative ITC characterization of biomolecular interactions. The approach integrates microfabricated differential calorimetric sensors with microfluidic titration. Biomolecules and reagents are introduced at each of a series of molar ratios, mixed, and allowed to react. The reaction thermal power is differentially measured and used to determine the thermodynamic profile of the biomolecular interactions. Implemented in a microdevice featuring thermally isolated, well-defined reaction volumes with minimized fluid evaporation as well as highly sensitive thermoelectric sensing, the approach enables accurate and quantitative ITC measurements of protein-ligand interactions under different isothermal conditions. Using the approach, we demonstrate ITC characterization of the binding of 18-Crown-6 with barium chloride, and the binding of ribonuclease A with cytidine 2'-monophosphate, within reaction volumes of approximately 0.7 µL and at concentrations down to 2 mM. For each binding system, the ITC measurements were completed with considerably reduced analysis times and material consumption, and yielded a complete thermodynamic profile of the molecular interaction in agreement with published data. This demonstrates the potential usefulness of our approach for biomolecular characterization in biomedical applications. PMID:26655185
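
    Once an ITC fit yields the association constant and binding enthalpy, the remainder of the thermodynamic profile follows from ΔG = −RT ln K_a and ΔS = (ΔH − ΔG)/T. A minimal sketch with hypothetical fit values (not the paper's measurements):

```python
import math

R = 8.314462618  # gas constant in J/(mol*K)

def thermodynamic_profile(K_a: float, dH: float, T: float = 298.15):
    """Complete the thermodynamic profile from an ITC fit: K_a in 1/M and
    dH in J/mol give dG = -RT*ln(K_a) and dS = (dH - dG)/T."""
    dG = -R * T * math.log(K_a)
    dS = (dH - dG) / T
    return dG, dS

dG, dS = thermodynamic_profile(K_a=1.0e5, dH=-40e3)  # hypothetical values
print(f"dG = {dG / 1e3:.1f} kJ/mol, dS = {dS:.1f} J/(mol*K)")
```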

  4. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum likelihood-expectation maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image-degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes a uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric-mean-based planar quantification, using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a
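
    The maximum likelihood-expectation maximization algorithm at the core of QPlanar/EQPlanar has a compact multiplicative form. Below is a generic ML-EM sketch on a toy system matrix; in the actual method the matrix would encode the 3D organ VOIs, registration, and the modeled image-degrading factors.

```python
import numpy as np

def mlem(A, counts, n_iter=500):
    """Generic ML-EM activity estimation. A[d, k] is the modeled probability
    that a decay in region k is detected in projection bin d; counts[d] are
    the measured planar counts."""
    a = np.ones(A.shape[1])                 # initial activity estimate
    sens = A.sum(axis=0)                    # sensitivity of each region
    for _ in range(n_iter):
        expected = A @ a                    # forward projection
        ratio = counts / np.maximum(expected, 1e-12)
        a *= (A.T @ ratio) / sens           # multiplicative EM update
    return a

# Toy 3-bin, 2-region example with known activities (2, 5):
A = np.array([[0.6, 0.1],
              [0.3, 0.4],
              [0.1, 0.5]])
true = np.array([2.0, 5.0])
print(mlem(A, A @ true).round(3))           # converges toward [2. 5.]
```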

  5. Quantitative texton sequences for legible bivariate maps.

    Ware, Colin

    2009-01-01

    Representing bivariate scalar maps is a common but difficult visualization problem. One solution has been to use two-dimensional color schemes, but the results are often hard to interpret and are read inaccurately. An alternative is to use a color sequence for one variable and a texture sequence for the other. This approach has been used, for example, in geology, but it has been studied much less than the two-dimensional color scheme, although theory suggests that it should lead to easier perceptual separation of the information relating to the two variables. To make a texture sequence more clearly readable, the concept of the quantitative texton sequence (QTonS) is introduced. A QTonS is defined as a sequence of small graphical elements, called textons, where each texton represents a different numerical value and sets of textons can be densely displayed to produce visually differentiable textures. An experiment was carried out to compare two bivariate color coding schemes with two schemes using a QTonS for one bivariate map component and a color sequence for the other. Two different key designs were investigated (a key being a sequence of colors or textures used in obtaining quantitative values from a map). The first design used two separate keys, one for each dimension, in order to measure how accurately subjects could independently estimate the underlying scalar variables. The second key design was two-dimensional and intended to measure the overall integral accuracy that could be obtained. The results show that accuracy is substantially higher for the QTonS/color-sequence schemes. The hypothesis that texture/color sequence combinations are better for independent judgments of mapped quantities was supported. A second experiment probed the limits of spatial resolution for QTonSs. PMID:19834229

  6. Method for Accurately Calibrating a Spectrometer Using Broadband Light

    Simmons, Stephen; Youngquist, Robert

    2011-01-01

    A novel method has been developed for performing very fine calibration of a spectrometer. This process is particularly useful for modern miniature charge-coupled device (CCD) spectrometers where a typical factory wavelength calibration has been performed and a finer, more accurate calibration is desired. Typically, the factory calibration is done with a spectral line source that generates light at known wavelengths, allowing specific pixels in the CCD array to be assigned wavelength values. This method is good to about 1 nm across the spectrometer's wavelength range. The new method appears to be accurate to about 0.1 nm, a factor of ten improvement. White light is passed through an unbalanced Michelson interferometer, producing an optical signal with significant spectral variation. A simple theory can be developed to describe this spectral pattern, so by comparing the actual spectrometer output against the predicted pattern, errors in the wavelength assignment made by the spectrometer can be determined.
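
    A minimal sketch of the underlying idea: the unbalanced Michelson imposes a fringe pattern I(λ) ∝ 1 + cos(2πΔ/λ) on the spectrum (Δ being the optical path difference), and the wavelength-assignment error is the offset that best aligns the predicted pattern with the measured one. The OPD value and the 0.12 nm error below are hypothetical, not values from this work.

```python
import numpy as np

def predicted_fringes(wavelengths_nm, opd_nm):
    """Fringe pattern from an unbalanced Michelson: 1 + cos(2*pi*OPD/lambda)."""
    return 1.0 + np.cos(2.0 * np.pi * opd_nm / wavelengths_nm)

opd = 50_000.0                               # 50 um path difference (assumed)
true_wl = np.linspace(500.0, 600.0, 2000)    # true pixel wavelengths
measured = predicted_fringes(true_wl, opd)   # what the CCD records
factory_wl = true_wl + 0.12                  # factory calibration off by 0.12 nm

# Scan a wavelength offset and keep the one whose predicted pattern best
# matches the measured spectrum in the least-squares sense.
offsets = np.linspace(-0.5, 0.5, 1001)
resid = [np.sum((measured - predicted_fringes(factory_wl + d, opd)) ** 2)
         for d in offsets]
print(f"estimated correction: {offsets[np.argmin(resid)]:+.3f} nm")  # -0.120
```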

  7. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    Jianhua Zhang

    2014-01-01

    This paper proposes a fast and accurate calibration method for calibrating multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head, and the multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-view calibration process is implemented to obtain the transformations between views. We first develop an efficient local repair algorithm to improve the depth map, and then design a special calibration body. Based on these, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method achieves good performance and can further be applied to EEG source localization on the human brain.

  8. Accurate multireference study of Si3 electronic manifold

    Goncalves, Cayo Emilio Monteiro; Braga, Joao Pedro

    2016-01-01

    Since the silicon trimer has been shown to have a highly multi-reference character, accurate multi-reference configuration interaction calculations are performed to elucidate its electronic manifold. Emphasis is given to the long-range part of the potential, aiming to understand the dynamical aspects of atom-diatom collisions and to describe conical intersections and important saddle points along the reactive path. The main features of the potential energy surface are analyzed for benchmarking, and highly accurate values for structures, vibrational constants and energy gaps are reported, as well as the previously unpublished spin-orbit coupling magnitude. The results predict that inter-system crossings will play an important role in dynamical simulations, especially in triplet-state quenching, making the problem of constructing a precise potential energy surface more complicated and multi-layer dependent. The ground state is predicted to be the singlet one, but since the singlet-triplet gap is rather small (2.448 kJ/mol) bo...

  9. Simple and High-Accurate Schemes for Hyperbolic Conservation Laws

    Renzhong Feng

    2014-01-01

    The paper constructs a class of simple high-accurate (SHA) schemes with third-order approximation accuracy in both space and time for solving linear hyperbolic equations, using linear data reconstruction and the Lax-Wendroff scheme. The schemes can be made even fourth-order accurate with a special choice of parameter. In order to avoid spurious oscillations in the vicinity of strong gradients, we make the SHA schemes total variation diminishing (TVD schemes for short) by setting a flux limiter in their numerical fluxes, and we then extend these schemes to solve the nonlinear Burgers' and Euler equations. The numerical examples show that these schemes give high order of accuracy and high-resolution results. The advantages of these schemes are their simplicity and high order of accuracy.
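
    For flavor, a generic flux-limited (minmod) Lax-Wendroff step for linear advection is sketched below; it illustrates the TVD mechanism the abstract refers to, not the third-order SHA schemes themselves.

```python
import numpy as np

def minmod(r):
    """Minmod flux limiter: keeps the scheme total variation diminishing."""
    return np.maximum(0.0, np.minimum(1.0, r))

def tvd_lax_wendroff_step(u, c):
    """One step of flux-limited Lax-Wendroff for u_t + a*u_x = 0 (a > 0),
    periodic boundaries, Courant number c = a*dt/dx."""
    du = np.roll(u, -1) - u                       # u_{i+1} - u_i
    r = (u - np.roll(u, 1)) / np.where(np.abs(du) > 1e-14, du, 1e-14)
    flux = u + 0.5 * (1.0 - c) * minmod(r) * du   # F_{i+1/2} / a
    return u - c * (flux - np.roll(flux, 1))

# Advect a square pulse once around a periodic domain.
n, c = 200, 0.5
u = np.where((np.arange(n) > 40) & (np.arange(n) < 80), 1.0, 0.0)
for _ in range(int(n / c)):                       # one full period
    u = tvd_lax_wendroff_step(u, c)
print(f"overshoot: {u.max() - 1.0:.2e}")          # <= 0: no spurious oscillations
```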

  10. Fixed-Wing Micro Aerial Vehicle for Accurate Corridor Mapping

    Rehak, M.; Skaloud, J.

    2015-08-01

    In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components, to which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while pixel-level (3-5 cm) mapping accuracy is achievable in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  11. Accurate Development of Thermal Neutron Scattering Cross Section Libraries

    Hawari, Ayman; Dunn, Michael

    2014-06-10

    The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.

  12. Accurate Load Modeling Based on Analytic Hierarchy Process

    Zhenshu Wang

    2016-01-01

    Establishing an accurate load model is a critical problem in power system modeling, with significant implications for power system digital simulation and dynamic security analysis. The synthesis load model (SLM) considers the impact of the power distribution network and compensation capacitors, while the randomness of the power load is more precisely described by the traction power system load model (TPSLM). On the basis of these two load models, a load modeling method that combines synthesis load with traction power load is proposed in this paper. The method uses the analytic hierarchy process (AHP) to combine the two load models: weight coefficients for the two models are calculated after formulating criteria and judgment matrices, and a synthesis model is then established from the weight coefficients. The effectiveness of the proposed method was examined through simulation. The results show that accurate load modeling based on AHP can effectively improve the accuracy of the load model and prove the validity of the method.
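
    The AHP step itself, deriving weight coefficients from a pairwise judgment matrix via its principal eigenvector and checking consistency, is compact; the judgment matrix below is a hypothetical example, not one from the paper.

```python
import numpy as np

def ahp_weights(judgment):
    """Weights from an AHP pairwise judgment matrix: the normalized principal
    eigenvector, plus the consistency ratio CR = CI / RI."""
    J = np.asarray(judgment, dtype=float)
    n = J.shape[0]
    eigvals, eigvecs = np.linalg.eig(J)
    k = np.argmax(eigvals.real)
    w = np.abs(eigvecs[:, k].real)
    w /= w.sum()
    ci = (eigvals[k].real - n) / (n - 1)            # consistency index
    ri = {2: 0.0, 3: 0.58, 4: 0.90, 5: 1.12}[n]     # random index (extend as needed)
    return w, (ci / ri if ri else 0.0)

J = [[1.0, 3.0, 5.0],             # hypothetical pairwise comparisons
     [1/3., 1.0, 2.0],
     [1/5., 1/2., 1.0]]
w, cr = ahp_weights(J)
print(w.round(3), f"CR = {cr:.3f}")   # weights; acceptable when CR < 0.1
```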

  13. Accurate adjoint design sensitivities for nano metal optics.

    Hansen, Paul; Hesselink, Lambertus

    2015-09-01

    We present a method for obtaining accurate numerical design sensitivities for metal-optical nanostructures. Adjoint design sensitivity analysis, long used in fluid mechanics and mechanical engineering for both optimization and structural analysis, is beginning to be used for nano-optics design, but it fails for sharp-cornered metal structures because the numerical error in electromagnetic simulations of metal structures is highest at sharp corners. These locations feature strong field enhancement and contribute strongly to design sensitivities. By using high-accuracy FEM calculations and rounding sharp features to a finite radius of curvature, we obtain highly accurate design sensitivities for 3D metal devices. To provide a bridge to the existing literature on adjoint methods in other fields, we derive the sensitivity equations for Maxwell's equations in the PDE framework widely used in fluid mechanics. PMID:26368483
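
    The adjoint trick itself is compact: for a discretized system A(p)u = b with objective J(u), one adjoint solve Aᵀλ = ∂J/∂u yields the sensitivity dJ/dp = −λᵀ(∂A/∂p)u. A generic linear-algebra sketch (not the authors' FEM electromagnetic setup), checked against finite differences:

```python
import numpy as np

def adjoint_sensitivity(A, b, dA_dp, dJ_du):
    """dJ/dp for A(p) u = b and objective J(u), using one forward solve and
    one adjoint solve instead of one solve per design parameter."""
    u = np.linalg.solve(A, b)              # forward solve
    lam = np.linalg.solve(A.T, dJ_du)      # adjoint solve
    return -lam @ (dA_dp @ u)

# Toy check against central finite differences for J(u) = sum(u).
rng = np.random.default_rng(0)
A = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
b = rng.standard_normal(4)
dA_dp = rng.standard_normal((4, 4))        # how A changes with parameter p
grad = adjoint_sensitivity(A, b, dA_dp, np.ones(4))

eps = 1e-6
u_p = np.linalg.solve(A + eps * dA_dp, b)
u_m = np.linalg.solve(A - eps * dA_dp, b)
print(grad, (u_p.sum() - u_m.sum()) / (2 * eps))   # the two should agree
```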

  14. Efficient and Accurate Robustness Estimation for Large Complex Networks

    Wandelt, Sebastian

    2016-01-01

    Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks on a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate metrics require considerable computational effort, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitude faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.

  15. The FLUKA code: An accurate simulation tool for particle therapy

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  16. A novel automated image analysis method for accurate adipocyte quantification

    Osman, Osman S.; Selway, Joanne L; Kępczyńska, Małgorzata A; Stocker, Claire J.; O’Dowd, Jacqueline F; Cawthorne, Michael A.; Arch, Jonathan RS; Jassim, Sabah; Langlands, Kenneth

    2013-01-01

    Increased adipocyte size and number are associated with many of the adverse effects observed in metabolic disease states. While methods to quantify such changes in the adipocyte are of scientific and clinical interest, manual methods to determine adipocyte size are both laborious and intractable to large scale investigations. Moreover, existing computational methods are not fully automated. We, therefore, developed a novel automatic method to provide accurate measurements of the cross-section...

  17. Combinatorial Approaches to Accurate Identification of Orthologous Genes

    Shi, Guanqun

    2011-01-01

    The accurate identification of orthologous genes across different species is a critical and challenging problem in comparative genomics and has a wide spectrum of biological applications including gene function inference, evolutionary studies and systems biology. During the past several years, many methods have been proposed for ortholog assignment based on sequence similarity, phylogenetic approaches, synteny information, and genome rearrangement. Although these methods share many commonly a...

  18. Strategy Guideline. Accurate Heating and Cooling Load Calculations

    Burdick, Arlan [IBACOS, Inc., Pittsburgh, PA (United States)

    2011-06-01

    This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.

  19. Strategy Guideline: Accurate Heating and Cooling Load Calculations

    Burdick, A.

    2011-06-01

    This guide presents the key criteria required to create accurate heating and cooling load calculations and offers examples of the implications when inaccurate adjustments are applied to the HVAC design process. The guide shows, through realistic examples, how various defaults and arbitrary safety factors can lead to significant increases in the load estimate. Emphasis is placed on the risks incurred from inaccurate adjustments or ignoring critical inputs of the load calculation.

  20. Evaluation of accurate eye corner detection methods for gaze estimation

    Bengoechea, Jose Javier; Cerrolaza, Juan J.; Villanueva, Arantxa; Cabeza, Rafael

    2014-01-01

    Accurate detection of iris center and eye corners appears to be a promising approach for low cost gaze estimation. In this paper we propose novel eye inner corner detection methods. Appearance and feature based segmentation approaches are suggested. All these methods are exhaustively tested on a realistic dataset containing images of subjects gazing at different points on a screen. We have demonstrated that a method based on a neural network presents the best performance even in light changin...

  1. Building with Drones: Accurate 3D Facade Reconstruction using MAVs

    Daftry, Shreyansh; Hoppe, Christof; Bischof, Horst

    2015-01-01

    Automatic reconstruction of 3D models from images using multi-view Structure-from-Motion methods has been one of the most fruitful outcomes of computer vision. These advances, combined with the growing popularity of Micro Aerial Vehicles as an autonomous imaging platform, have made 3D vision tools ubiquitous for a large number of Architecture, Engineering and Construction applications, among audiences mostly unskilled in computer vision. However, to obtain high-resolution and accurate reconstruc...

  2. Mouse models of human AML accurately predict chemotherapy response

    Zuber, Johannes; Radtke, Ina; Pardee, Timothy S.; Zhao, Zhen; Rappaport, Amy R.; Luo, Weijun; McCurrach, Mila E.; Yang, Miao-Miao; Dolan, M. Eileen; Kogan, Scott C.; Downing, James R.; Lowe, Scott W.

    2009-01-01

    The genetic heterogeneity of cancer influences the trajectory of tumor progression and may underlie clinical variation in therapy response. To model such heterogeneity, we produced genetically and pathologically accurate mouse models of common forms of human acute myeloid leukemia (AML) and developed methods to mimic standard induction chemotherapy and efficiently monitor therapy response. We see that murine AMLs harboring two common human AML genotypes show remarkably diverse responses to co...

  3. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Zhang Mingheng; Zhen Yaobao; Hui Ganglong; Chen Gang

    2013-01-01

    Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement for intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, the mul...

  4. Accurate calibration of stereo cameras for machine vision

    Li, Liangfu; Feng, Zuren; Feng, Yuanjing

    2004-01-01

    Camera calibration is an important task for machine vision, whose goal is to obtain the internal and external parameters of each camera. With these parameters, the 3D position of a scene point, which is identified and matched in two stereo images, can be determined by the triangulation theory. This paper presents a new accurate estimation of CCD camera parameters for machine vision. We present a fast technique to estimate the camera center with a special arrangement of the calibration target and t...

  5. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Mark Shortis

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation a...

  6. Fast and Accurate Bilateral Filtering using Gauss-Polynomial Decomposition

    Chaudhury, Kunal N.

    2015-01-01

    The bilateral filter is a versatile non-linear filter that has found diverse applications in image processing, computer vision, computer graphics, and computational photography. A widely-used form of the filter is the Gaussian bilateral filter in which both the spatial and range kernels are Gaussian. A direct implementation of this filter requires $O(\sigma^2)$ operations per pixel, where $\sigma$ is the standard deviation of the spatial Gaussian. In this paper, we propose an accurate approxi...
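
    For reference, the direct (unaccelerated) Gaussian bilateral filter is easy to write down; this is the O(σ²)-per-pixel baseline that decompositions like the one in this paper are designed to speed up.

```python
import numpy as np

def bilateral_filter(img, sigma_s=3.0, sigma_r=0.1, radius=9):
    """Direct Gaussian bilateral filter for a grayscale image in [0, 1]."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="edge")
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = pad[radius + dy:radius + dy + h,
                          radius + dx:radius + dx + w]
            weight = (np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                      * np.exp(-(shifted - img) ** 2 / (2 * sigma_r ** 2)))
            out += weight * shifted
            norm += weight
    return out / norm

# Denoise a noisy step edge: the noise is smoothed while the edge survives.
img = np.where(np.arange(64)[None, :] < 32, 0.2, 0.8) * np.ones((64, 64))
noisy = img + 0.05 * np.random.default_rng(1).standard_normal(img.shape)
print(np.abs(bilateral_filter(noisy) - img).mean())  # small residual error
```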

  7. Accurate Insertion Loss Measurements of the Juno Patch Array Antennas

    Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John

    2010-01-01

    This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements, while the other is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1 dB) with separate radiometric measurements.

  8. Dejavu: An Accurate Energy-Efficient Outdoor Localization System

    Aly, Heba; Youssef, Moustafa

    2013-01-01

    We present Dejavu, a system that uses standard cell-phone sensors to provide accurate and energy-efficient outdoor localization suitable for car navigation. Our analysis shows that different road landmarks have a unique signature on cell-phone sensors; for example, going inside tunnels, moving over bumps, going up a bridge, and even potholes all affect the inertial sensors on the phone in a unique pattern. Dejavu employs a dead-reckoning localization approach and leverages these road landmark...

  9. Accurate Parameter Estimation for Unbalanced Three-Phase System

    Yuan Chen; Hing Cheung So

    2014-01-01

    Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, a nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newt...
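
    The αβ-transformation mentioned in the abstract is a fixed linear map from the three phase waveforms to a quadrature pair; a minimal amplitude-invariant (Clarke) version:

```python
import numpy as np

def clarke_transform(va, vb, vc):
    """Amplitude-invariant alpha-beta (Clarke) transformation."""
    v_alpha = (2.0 * va - vb - vc) / 3.0
    v_beta = (vb - vc) / np.sqrt(3.0)
    return v_alpha, v_beta

# Balanced 50 Hz example: alpha/beta form a unit-amplitude quadrature pair.
t = np.linspace(0.0, 0.04, 400)
phase = 2.0 * np.pi * 50.0 * t
va, vb, vc = (np.cos(phase + s) for s in (0.0, -2 * np.pi / 3, 2 * np.pi / 3))
v_alpha, v_beta = clarke_transform(va, vb, vc)
print(np.allclose(v_alpha ** 2 + v_beta ** 2, 1.0))  # True for a balanced system
```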

  10. Accurate, inexpensive testing of laser pointer power for safe operation

    An accurate, inexpensive test-bed for the measurement of optical power emitted from handheld lasers is described. The setup consists of a power meter, optical bandpass filters, an adjustable iris and self-centering lens mounts. We demonstrate this test-bed by evaluating the output power of 23 laser pointers with respect to the limits imposed by the US Code of Federal Regulations. We find a compliance rate of only 26%. A discussion of potential laser pointer hazards is included.

  11. DOMAC: an accurate, hybrid protein domain prediction server

    Cheng, Jianlin

    2007-01-01

    Protein domain prediction is important for protein structure prediction, structure determination, function annotation, mutagenesis analysis and protein engineering. Here we describe an accurate protein domain prediction server (DOMAC) combining both template-based and ab initio methods. The preliminary version of the server was ranked among the top domain prediction servers in the seventh edition of Critical Assessment of Techniques for Protein Structure Prediction (CASP7), 2006. DOMAC server...

  12. A multiple more accurate Hardy-Littlewood-Polya inequality

    Qiliang Huang

    2012-11-01

    By introducing multi-parameters and conjugate exponents and using Euler-Maclaurin's summation formula, we estimate the weight coefficient and prove a multiple more accurate Hardy-Littlewood-Polya (H-L-P) inequality, which is an extension of some earlier published results. We also prove that the constant factor in the new inequality is the best possible, and obtain its equivalent forms.

  13. Shock Emergence in Supernovae: Limiting Cases and Accurate Approximations

    Ro, Stephen

    2013-01-01

    We examine the dynamics of accelerating normal shocks in stratified planar atmospheres, providing accurate fitting formulae for the scaling index relating shock velocity to the initial density and for the post-shock acceleration factor as functions of the polytropic and adiabatic indices which parameterize the problem. In the limit of a uniform initial atmosphere there are analytical formulae for these quantities. In the opposite limit of a very steep density gradient the solutions match the outcome of shock acceleration in exponential atmospheres.

  14. Shock Emergence in Supernovae: Limiting Cases and Accurate Approximations

    Ro, Stephen; Matzner, Christopher D.

    2013-08-01

    We examine the dynamics of accelerating normal shocks in stratified planar atmospheres, providing accurate fitting formulae for the scaling index relating shock velocity to the initial density and for the post-shock acceleration factor as functions of the polytropic and adiabatic indices which parameterize the problem. In the limit of a uniform initial atmosphere, there are analytical formulae for these quantities. In the opposite limit of a very steep density gradient, the solutions match the outcome of shock acceleration in exponential atmospheres.

  15. SHOCK EMERGENCE IN SUPERNOVAE: LIMITING CASES AND ACCURATE APPROXIMATIONS

    Ro, Stephen; Matzner, Christopher D. [Department of Astronomy and Astrophysics, University of Toronto, 50 St. George St., Toronto, ON M5S 3H4 (Canada)

    2013-08-10

    We examine the dynamics of accelerating normal shocks in stratified planar atmospheres, providing accurate fitting formulae for the scaling index relating shock velocity to the initial density and for the post-shock acceleration factor as functions of the polytropic and adiabatic indices which parameterize the problem. In the limit of a uniform initial atmosphere, there are analytical formulae for these quantities. In the opposite limit of a very steep density gradient, the solutions match the outcome of shock acceleration in exponential atmospheres.

  16. An accurate and robust gyroscope-based pedometer.

    Lim, Yoong P; Brown, Ian T; Khoo, Joshua C T

    2008-01-01

    Pedometers are known to have step-estimation issues, mainly attributable to their reliance on acceleration-based sensing. A pedometer based on a micro-machined gyroscope, which has better immunity to acceleration, is proposed. Through syntactic recognition of a priori knowledge of the human shank's dynamics and temporally precise detection of heel strikes permitted by wavelet decomposition, an accurate and robust pedometer is obtained. PMID:19163737

  17. Accurate calculation of thermal noise in multilayer coating

    Gurkovsky, Alexey; Vyatchanin, Sergey

    2010-01-01

    We derive accurate formulas for thermal fluctuations in multilayer interferometric coating, taking into account light propagation inside the coating. In particular, we calculate the reflected wave phase as a function of small displacements of the boundaries between the layers using a transmission line model for the interferometric coating, and derive a formula for the spectral density of the reflected phase in accordance with the Fluctuation-Dissipation Theorem. We apply the developed approach for calculation of t...

  18. Novel multi-beam radiometers for accurate ocean surveillance

    Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.;

    2014-01-01

    Novel antenna architectures for real-aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions, conical scanners and push-broom antennas are compared. The comparison will cover reflector optics and focal plane array configuration.

  19. Strategy for accurate liver intervention by an optical tracking system

    Lin, Qinyong; Yang, Rongqian; Cai, Ken; Guan, Peifeng; Xiao, Weihu; Wu, Xiaoming

    2015-01-01

    Image-guided navigation for radiofrequency ablation of liver tumors requires the accurate guidance of needle insertion into a tumor target. The main challenge of image-guided navigation for radiofrequency ablation of liver tumors is the occurrence of liver deformations caused by respiratory motion. This study reports a strategy of real-time automatic registration to track custom fiducial markers glued onto the surface of a patient’s abdomen to find the respiratory phase, in which the static p...

  20. Efficient and Accurate Path Cost Estimation Using Trajectory Data

    Dai, Jian; Yang, Bin; Guo, Chenjuan; Jensen, Christian S.

    2015-01-01

    Using the growing volumes of vehicle trajectory data, it becomes increasingly possible to capture time-varying and uncertain travel costs in a road network, including travel time and fuel consumption. The current paradigm represents a road network as a graph, assigns weights to the graph's edges by fragmenting trajectories into small pieces that fit the underlying edges, and then applies a routing algorithm to the resulting graph. We propose a new paradigm that targets more accurate and more ...

  1. Accurate molecular classification of cancer using simple rules

    Gotoh Osamu; Wang Xiaosheng

    2009-01-01

    Background: One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensional gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often ...

  2. Accurate Identification of Fear Facial Expressions Predicts Prosocial Behavior

    Marsh, Abigail A.; Kozak, Megan N.; Ambady, Nalini

    2007-01-01

    The fear facial expression is a distress cue that is associated with the provision of help and prosocial behavior. Prior psychiatric studies have found deficits in the recognition of this expression by individuals with antisocial tendencies. However, no prior study has shown accuracy for recognition of fear to predict actual prosocial or antisocial behavior in an experimental setting. In 3 studies, the authors tested the prediction that individuals who recognize fear more accurately will beha...

  3. Continuous glucose monitors prove highly accurate in critically ill children

    Bridges, Brian C.; Preissig, Catherine M; Maher, Kevin O.; Rigby, Mark R

    2010-01-01

    Introduction: Hyperglycemia is associated with increased morbidity and mortality in critically ill patients, and strict glycemic control has become standard care for adults. Recent studies have questioned the optimal targets for such management and reported increased rates of iatrogenic hypoglycemia in both critically ill children and adults. The ability to provide accurate, real-time continuous glucose monitoring would improve the efficacy and safety of this practice in critically ill patients...

  4. Accurate quantum state estimation via "Keeping the experimentalist honest"

    Blume-Kohout, R; Blume-Kohout, Robin; Hayden, Patrick

    2006-01-01

    In this article, we derive a unique procedure for quantum state estimation from a simple, self-evident principle: an experimentalist's estimate of the quantum state generated by an apparatus should be constrained by honesty. A skeptical observer should subject the estimate to a test that guarantees that a self-interested experimentalist will report the true state as accurately as possible. We also find a non-asymptotic, operational interpretation of the quantum relative entropy function.

  5. A highly accurate method to solve Fisher’s equation

    Mehdi Bastani; Davod Khojasteh Salkuyeh

    2012-03-01

    In this study, we present a new and very accurate numerical method to approximate Fisher-type equations. First, the spatial derivative in the proposed equation is approximated by a sixth-order compact finite difference (CFD6) scheme. Second, we solve the obtained system of differential equations using a third-order total variation diminishing Runge-Kutta (TVD-RK3) scheme. Numerical examples are given to illustrate the efficiency of the proposed method.
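
    A compressed sketch of the time-stepping idea, using the third-order TVD Runge-Kutta (Shu-Osher) stage combination named in the abstract, but with a plain second-order central difference standing in for the sixth-order compact (CFD6) spatial scheme to keep the example short:

```python
import numpy as np

def rhs(u, dx):
    """Right-hand side of Fisher's equation u_t = u_xx + u(1 - u); a
    second-order central difference replaces the paper's CFD6 scheme here."""
    uxx = (np.roll(u, -1) - 2.0 * u + np.roll(u, 1)) / dx ** 2
    return uxx + u * (1.0 - u)

def tvd_rk3_step(u, dt, dx):
    """Third-order TVD Runge-Kutta (Shu-Osher) stage combination."""
    u1 = u + dt * rhs(u, dx)
    u2 = 0.75 * u + 0.25 * (u1 + dt * rhs(u1, dx))
    return u / 3.0 + 2.0 / 3.0 * (u2 + dt * rhs(u2, dx))

# Evolve a small bump on a periodic domain; fronts spread and u -> 1.
n, dx, dt = 256, 0.5, 0.05
u = 0.1 * np.exp(-0.05 * (np.arange(n) * dx - 64.0) ** 2)
for _ in range(2000):
    u = tvd_rk3_step(u, dt, dx)
print(u.min(), u.max())   # both approach the stable state u = 1
```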

  6. Accurate Method for Determining Adhesion of Cantilever Beams

    Michalske, T.A.; de Boer, M.P.

    1999-01-08

    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.

  7. A robust and accurate formulation of molecular and colloidal electrostatics

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y. C.

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.

  8. Robust Small Sample Accurate Inference in Moment Condition Models

    Serigne N. Lo; Elvezio Ronchetti

    2006-01-01

    Procedures based on the Generalized Method of Moments (GMM) (Hansen, 1982) are basic tools in modern econometrics. In most cases, the theory available for making inference with these procedures is based on first order asymptotic theory. It is well-known that the (first order) asymptotic distribution does not provide accurate p-values and confidence intervals in moderate to small samples. Moreover, in the presence of small deviations from the assumed model, p-values and confidence intervals ba...

  9. Is bioelectrical impedance accurate for use in large epidemiological studies?

    Merchant Anwar T

    2008-09-01

    Percentage of body fat is strongly associated with the risk of several chronic diseases, but its accurate measurement is difficult. Bioelectrical impedance analysis (BIA) is a relatively simple, quick and non-invasive technique for measuring body composition. It measures body fat accurately in controlled clinical conditions, but its performance in the field is inconsistent. In large epidemiologic studies, simpler surrogate techniques such as body mass index (BMI), waist circumference, and waist-hip ratio are frequently used instead of BIA to measure body fatness. We reviewed the rationale, theory, and technique of recently developed systems such as foot (or hand-to-foot) BIA measurement, and the elements that could influence its results in large epidemiologic studies. BIA results are influenced by factors such as the environment, ethnicity, phase of the menstrual cycle, and underlying medical conditions. We conclude that BIA measurements validated for specific ethnic groups, populations and conditions can accurately measure body fat in those populations, but not in others, and suggest that for large epidemiological studies with diverse populations BIA may not be the appropriate choice for body composition measurement unless specific calibration equations are developed for the different groups participating in the study.

  10. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  11. Can blind persons accurately assess body size from the voice?

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. PMID:27095264

  12. An accurate determination of the flux within a slab

    During the past decade, several articles have been written concerning accurate solutions to the monoenergetic neutron transport equation in infinite and semi-infinite geometries. The numerical formulations found in these articles were based primarily on the extensive theoretical investigations performed by the "transport greats" such as Chandrasekhar, Busbridge, Sobolev, and Ivanov, to name a few. The development of numerical solutions in infinite and semi-infinite geometries represents an example of how mathematical transport theory can be utilized to provide highly accurate and efficient numerical transport solutions. These solutions, or analytical benchmarks, are useful as "industry standards," which provide guidance to code developers and promote learning in the classroom. The high accuracy of these benchmarks is directly attributable to the rapid advancement of the state of computing and computational methods. Transport calculations that were beyond the capability of the "supercomputers" of just a few years ago are now possible at one's desk. In this paper, we again build upon the past to tackle the slab problem, which is of the next level of difficulty in comparison to infinite media problems. The formulation is based on the monoenergetic Green's function, which is the most fundamental transport solution. This method of solution requires a fast and accurate evaluation of the Green's function, which, with today's computational power, is now readily available

  13. Accurate pose estimation using single marker single camera calibration system

    Pati, Sarthak; Erat, Okan; Wang, Lejing; Weidert, Simon; Euler, Ekkehard; Navab, Nassir; Fallavollita, Pascal

    2013-03-01

    Visual marker based tracking is one of the most widely used tracking techniques in Augmented Reality (AR) applications. Generally, multiple square markers are needed to perform robust and accurate tracking. Various marker-based methods for calibrating relative marker poses have already been proposed. However, the calibration accuracy of these methods relies on the order of the image sequence and pre-evaluation of pose-estimation errors, making the methods offline. Several studies have shown that the accuracy of pose estimation for an individual square marker depends on camera distance and viewing angle. We propose an online method, based on the Scaled Unscented Transform (SUT), to accurately model the error in the estimated pose and translation of a camera using a single marker. Thus, the pose of each marker can be estimated with highly accurate calibration results, independent of the order of the image sequence, compared to cases where this knowledge is not used. This removes the need for multiple markers and an offline estimation system to calculate camera pose in an AR application.

  14. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Marcondes, Michel L.; Shukla, Gaurav; da Silveira, Pedro; Wentzcovitch, Renata M.

    2015-12-01

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  15. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  16. Interacting with image hierarchies for fast and accurate object segmentation

    Beard, David V.; Eberly, David H.; Hemminger, Bradley M.; Pizer, Stephen M.; Faith, R. E.; Kurak, Charles; Livingston, Mark

    1994-05-01

    Object definition is an increasingly important area of medical image research. Accurate and fairly rapid object definition is essential for measuring the size and, perhaps more importantly, the change in size of anatomical objects such as kidneys and tumors. Rapid and fairly accurate object definition is essential for 3D real-time visualization including both surgery planning and radiation oncology treatment planning. One approach to object definition involves the use of 3D image hierarchies, such as Eberly's Ridge Flow. However, the image hierarchy segmentation approach requires user interaction in selecting regions and subtrees. Further, visualizing and comprehending the anatomy and the selected portions of the hierarchy can be problematic. In this paper we will describe the Magic Crayon tool which allows a user to define rapidly and accurately various anatomical objects by interacting with image hierarchies such as those generated with Eberly's Ridge Flow algorithm as well as other 3D image hierarchies. Preliminary results suggest that fairly complex anatomical objects can be segmented in under a minute with sufficient accuracy for 3D surgery planning, 3D radiation oncology treatment planning, and similar applications. Potential modifications to the approach for improved accuracy are summarized.

  17. Can clinicians accurately assess esophageal dilation without fluoroscopy?

    Bailey, A D; Goldner, F

    1990-01-01

    This study questioned whether clinicians could determine the success of esophageal dilation accurately without the aid of fluoroscopy. Twenty patients were enrolled with the diagnosis of distal esophageal stenosis, including benign peptic stricture (17), Schatzki's ring (2), and squamous cell carcinoma of the esophagus (1). Dilation attempts using only Maloney dilators were monitored fluoroscopically by the principal investigator, the physician and patient being unaware of the findings. Physicians then predicted whether or not their dilations were successful, and they examined various features to determine their usefulness in predicting successful dilation. They were able to predict successful dilation accurately in 97% of the cases studied; however, their predictions of unsuccessful dilation were correct only 60% of the time. Features helpful in predicting passage included easy passage of the dilator (98%) and the patient feeling the dilator in the stomach (95%). Excessive resistance suggesting unsuccessful passage was an unreliable feature and was often due to the dilator curling in the stomach. When Maloney dilators are used to dilate simple distal strictures, if the physician predicts successful passage, he is reliably accurate without the use of fluoroscopy; however, if unsuccessful passage is suspected, fluoroscopy must be used for confirmation. PMID:2210278

  18. Quantitative two-qutrit entanglement

    We introduce the new concept of axisymmetric bipartite states. For d × d-dimensional systems these states form a two-parameter family of nontrivial mixed states that include the isotropic states. We present exact quantitative results for class-specific entanglement as well as for the negativity and I-concurrence of two-qutrit axisymmetric states. These results have interesting applications, such as quantitative witnesses of class-specific entanglement in arbitrary two-qutrit states and a device-independent witness for the number of entangled dimensions.
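
    The negativity named above is computable for any two-qutrit density matrix as the absolute sum of the negative eigenvalues of the partial transpose; a generic numerical sketch (not the paper's class-specific analytical results):

```python
import numpy as np

def negativity(rho, d=3):
    """Negativity of a d x d bipartite state: sum of |negative eigenvalues|
    of the partial transpose with respect to the second subsystem."""
    r = rho.reshape(d, d, d, d)                          # r[i, k, j, l]
    pt = r.transpose(0, 3, 2, 1).reshape(d * d, d * d)   # swap k <-> l
    eig = np.linalg.eigvalsh(pt)
    return float(-eig[eig < 0.0].sum())

# Maximally entangled two-qutrit state |psi> = (|00> + |11> + |22>)/sqrt(3):
psi = np.zeros(9)
psi[[0, 4, 8]] = 1.0 / np.sqrt(3.0)
rho = np.outer(psi, psi)
print(negativity(rho))   # ~1.0 for the maximally entangled qutrit pair
```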

  19. Towards quantitative assessment of calciphylaxis

    Deserno, Thomas M.; Sárándi, István.; Jose, Abin; Haak, Daniel; Jonas, Stephan; Specht, Paula; Brandenburg, Vincent

    2014-03-01

    Calciphylaxis is a rare and devastating disease associated with high morbidity and mortality. It is characterized by systemic medial calcification of the arteries, yielding necrotic skin ulcerations. In this paper, we aim at supporting the installation of multi-center registries for calciphylaxis, which include a photographic documentation of skin necrosis. However, photographs acquired in different centers under different conditions, using different equipment and photographers, cannot be compared quantitatively. For normalization, we use a simple color pad that is placed into the field of view, segmented from the image, and whose color fields are analyzed. In total, 24 colors are printed on that scale. A least-squares approach is used to determine the affine color transform. Furthermore, the card allows scale normalization. We provide a case study for qualitative assessment. In addition, the method is evaluated quantitatively using 10 images of two sets of different captures of the same necrosis. The variability of quantitative measurements based on free-hand photography is assessed with regard to geometric and color distortions before and after our simple calibration procedure. Using automated image processing, the standard deviation of measurements is significantly reduced. The coefficients of variation yield 5-20% and 2-10% for geometry and color, respectively. Hence, quantitative assessment of calciphylaxis becomes practicable and will contribute to a better understanding of this rare but fatal disease.
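
    The color-normalization step reduces to a least-squares affine fit between the measured pad colors and their known reference values; a minimal sketch with a hypothetical color cast:

```python
import numpy as np

def fit_affine_color_transform(measured, reference):
    """Least-squares affine color transform: find A (3x3) and offset t so
    that measured @ A.T + t best matches the reference pad colors."""
    X = np.hstack([measured, np.ones((measured.shape[0], 1))])
    M, *_ = np.linalg.lstsq(X, reference, rcond=None)   # solve X @ M ~= ref
    return M[:3].T, M[3]                                # A, t

# Toy pad: 24 known reference colors seen through a hypothetical color cast.
rng = np.random.default_rng(0)
reference = rng.uniform(0.0, 1.0, (24, 3))
A_true = np.array([[0.9, 0.05, 0.0], [0.0, 1.1, 0.02], [0.03, 0.0, 0.8]])
measured = reference @ np.linalg.inv(A_true).T - 0.02   # what the camera saw
A, t = fit_affine_color_transform(measured, reference)
corrected = measured @ A.T + t
print(np.abs(corrected - reference).max())              # ~0: cast removed
```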

  20. Time-resolved quantitative phosphoproteomics

    Verano-Braga, Thiago; Schwämmle, Veit; Sylvester, Marc;

    2012-01-01

    To identify proteins involved in the Ang-(1-7) signaling, we performed a mass spectrometry-based time-resolved quantitative phosphoproteome study of human aortic endothelial cells (HAEC) treated with Ang-(1-7). We identified 1288 unique phosphosites on 699 different proteins with 99% certainty of correct peptide...

  1. Quantity in Modern Icelandic

    Magnús Pétursson

    2015-11-01

    Full Text Available The phonetic realization of quantity in the stressed syllable in the reading of two continuous texts. The problem of quantity is one of the most studied problems in the phonology of Modern Icelandic. From the phonological point of view, it seems that nothing new can be hoped for, the theoretical possibilities having been practically exhausted, as we recalled in our recent study (Pétursson 1978, pp. 76-78). The most unexpected result of the research of recent years is without doubt the discovery of a quantitative differentiation between the North and the South of Iceland (Pétursson 1976a). It is nevertheless still premature to speak of true quantitative zones, since neither their limits nor their geographical extent are known.

  2. Quantitative Genomics of Male Reproduction

    The objective of the review was to establish the current status of quantitative genomics for male reproduction. Genetic variation exists for male reproduction traits. These traits are expensive and time-consuming to evaluate through conventional breeding schemes. Genomics is an alternative to...

  3. Compositional and Quantitative Model Checking

    Larsen, Kim Guldstrand

    This paper gives a survey of a compositional model-checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based on...

  4. Quantitative genomics of female reproduction

    Numerous quantitative trait loci (QTL) for reproductive traits in domestic livestock have been described in the literature. In this chapter, the components needed for detection of reproductive trait QTL are described, including collection of phenotypes, genotypes, and the appropriate statistical ana...

  5. Quantitation of erythropoiesis in myelomatosis

    Birgens, H S; Hansen, O P; Henriksen, Jens Henrik Sahl; Wantzin, P

    1979-01-01

    Quantitation of the erythropoiesis with radio-iron (59Fe) was applied to 9 patients with untreated myelomatosis. The method included blocking of the 59Fe reutilization by injection of non-radioactive iron. There was no uniform pattern in the Fe-kinetics values. The Plasma Iron Turnover (PIT) and...

  6. Quantitative Characterisation of Surface Texture

    De Chiffre, Leonardo; Lonardo, P.M.; Trumpold, H.; Lucca, D.A.; Goch, G.; Brown, C. A.; Raja, J.; Hansen, Hans Nørgaard

    2000-01-01

    This paper reviews the different methods used to give a quantitative characterisation of surface texture. The paper contains a review of conventional 2D as well as 3D roughness parameters, with particular emphasis on recent international standards and developments. It presents new texture...

  7. QRAS-the quantitative risk assessment system

    This paper presents an overview of QRAS, the Quantitative Risk Assessment System. QRAS is a PC-based software tool for conducting Probabilistic Risk Assessments (PRAs), which was developed to address risk analysis needs at NASA. QRAS is, however, applicable in a wide range of industries. The philosophy behind the development of QRAS is to bridge communication and skill gaps between managers, engineers, and risk analysts by using representations of the risk model and analysis results that are easy for each of those groups to comprehend. For that purpose, event sequence diagrams (ESDs) are used as a replacement for event trees (ETs) to model scenarios, and the quantification of events is possible through a set of quantification models familiar to engineers. An automated common cause failure (CCF) modeling tool further aids the risk modeling. QRAS applies BDD-based algorithms for the accurate and efficient computation of risk results. The paper presents QRAS' modeling and analysis capabilities. The performance of the underlying BDD algorithm is assessed and compared to that of another PRA software tool, using a case study extracted from the International Space Station PRA

  8. Quantitative analysis of galaxy-galaxy lensing

    Schneider, P J; Schneider, Peter; Rix, Hans Walter

    1996-01-01

    In this paper we explore a quantitative and efficient method to constrain the halo properties of distant galaxy populations through "galaxy-galaxy" lensing and show that the mean masses and sizes of halos can be estimated accurately, without excessive data requirements. Specifically, we propose a maximum-likelihood analysis which takes full account of the actual image ellipticities, positions and apparent magnitudes. We apply it to simulated observations, using the same model for the lensing galaxy population as in BBS, where the galaxy halos are described by isothermal spheres with velocity dispersion σ, truncated at a radius s. Both parameters are assumed to scale with the luminosity of the galaxy. The best fitting values are then determined with the maximum-likelihood analysis. We explore two different observing strategies, (a) taking deep images (e.g., with HST) on small fields, and (b) using shallower images on larger fields. We find that σ* can be determined to ≲10% accuracy if a sa...
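
    The maximum-likelihood idea can be made concrete with a one-parameter toy version: simulate image ellipticities as intrinsic scatter plus an isothermal-sphere shear, then recover the velocity dispersion by minimizing the negative log-likelihood. All scalings and numbers below are illustrative assumptions, not the BBS model:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(1)
n = 5000
theta = rng.uniform(5.0, 30.0, n)     # lens-source separations (arcsec), assumed
sigma_e = 0.2                         # intrinsic ellipticity scatter, assumed
sigma_true = 160.0                    # km/s, halo velocity dispersion to recover

def shear(theta, sigma):
    # Singular isothermal sphere: gamma = theta_E / (2 theta), theta_E ∝ sigma^2
    # (the proportionality constant below is an illustrative choice).
    theta_E = 1.4 * (sigma / 220.0) ** 2
    return theta_E / (2.0 * theta)

e_obs = shear(theta, sigma_true) + rng.normal(0.0, sigma_e, n)

def neg_log_likelihood(sigma):
    # Gaussian likelihood of the observed ellipticities given the shear model.
    return 0.5 * np.sum((e_obs - shear(theta, sigma)) ** 2) / sigma_e**2

res = minimize_scalar(neg_log_likelihood, bounds=(50.0, 400.0), method="bounded")
print(f"ML estimate of sigma: {res.x:.1f} km/s")   # close to 160 for large n
```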

  9. Quantitative Proteomic Approaches for Studying Phosphotyrosine Signaling

    Ding, Shi-Jian; Qian, Weijun; Smith, Richard D.

    2007-02-01

    Protein tyrosine phosphorylation is a fundamental mechanism for controlling many aspects of cellular processes, as well as aspects of human health and disease. Compared to phosphoserine (pSer) and phosphothreonine (pThr), phosphotyrosine (pTyr) signaling is more tightly regulated, but often more challenging to characterize due to the significantly lower level of tyrosine phosphorylation (a relative abundance of 1800:200:1 was estimated for pSer/pThr/pTyr in vertebrate cells [1]). In this review, we outline the recent advances in analytical methodologies for enrichment, identification, and accurate quantitation of tyrosine-phosphorylated proteins and peptides using antibody-based technologies, capillary liquid chromatography (LC) coupled with mass spectrometry (MS), and various stable isotope labeling strategies, as well as non-MS-based methods such as protein or peptide array methods. These proteomic technological advances provide powerful tools for potentially understanding signal transduction at the system level and provide a basis for discovering novel drug targets for human diseases. [1] Hunter, T. (1998) The Croonian Lecture 1997. The phosphorylation of proteins on tyrosine: its role in cell growth and disease. Philos. Trans. R. Soc. Lond. B Biol. Sci. 353, 583–605

  10. Quantitative atomic spectroscopy for primary thermometry

    Quantitative spectroscopy has been used to measure accurately the Doppler broadening of atomic transitions in 85Rb vapor. By using a conventional platinum resistance thermometer and the Doppler thermometry technique, we were able to determine kB with a relative uncertainty of 4.1×10⁻⁴ and with a deviation of 2.7×10⁻⁴ from the expected value. Our experiment, using an effusive vapor, departs significantly from other Doppler-broadened thermometry (DBT) techniques, which rely on weakly absorbing molecules in a diffusive regime. In these circumstances, very different systematic effects such as magnetic sensitivity and optical pumping are dominant. Using the model developed recently by Stace and Luiten, we estimate the perturbation due to optical pumping of the measured kB value was less than 4×10⁻⁶. The effects of optical pumping on atomic and molecular DBT experiments are mapped over a wide range of beam size and saturation intensity, indicating possible avenues for improvement. We also compare the line-broadening mechanisms, windows of operation and detection limits of some recent DBT experiments.
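
    The inversion from a fitted Doppler width to kB is a one-line formula. A minimal worked sketch, assuming a Gaussian Doppler profile with FWHM Δν = ν₀·√(8 ln2 · kB·T / (m·c²)) and illustrative numbers (not the paper's data):

```python
import math

c = 2.99792458e8                              # speed of light, m/s
m_Rb85 = 84.911789738 * 1.66053906660e-27     # 85Rb mass, kg
nu0 = 3.7711e14                               # optical transition frequency, Hz (approximate)
T = 294.0                                     # K, from the resistance thermometer (assumed value)

# Forward model: synthesize the Doppler FWHM that the accepted kB would give.
k_B_ref = 1.380649e-23
delta_nu = nu0 * math.sqrt(8 * math.log(2) * k_B_ref * T / (m_Rb85 * c**2))

# Inversion: recover kB from the "measured" width.
k_B = m_Rb85 * c**2 * delta_nu**2 / (8 * math.log(2) * T * nu0**2)
print(f"FWHM = {delta_nu / 1e6:.1f} MHz  ->  k_B = {k_B:.6e} J/K")
```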

  11. Quantitative atomic spectroscopy for primary thermometry

    Truong, Gar-Wing; Stace, Thomas M; Luiten, Andre N

    2010-01-01

    Quantitative spectroscopy has been used to measure accurately the Doppler-broadening of atomic transitions in $^{85}$Rb vapor. By using a conventional platinum resistance thermometer and the Doppler thermometry technique, we were able to determine $k_B$ with a relative uncertainty of $4.1\\times 10^{-4}$, and with a deviation of $2.7\\times 10^{-4}$ from the expected value. Our experiment, using an effusive vapour, departs significantly from other Doppler-broadened thermometry (DBT) techniques, which rely on weakly absorbing molecules in a diffusive regime. In these circumstances, very different systematic effects such as magnetic sensitivity and optical pumping are dominant. Using the model developed recently by Stace and Luiten, we estimate the perturbation due to optical pumping of the measured $k_B$ value was less than $4\\times 10^{-6}$. The effects of optical pumping on atomic and molecular DBT experiments are mapped over a wide range of beam size and saturation intensity, indicating possible avenues for im...

  12. Issues and Applications in Label-Free Quantitative Mass Spectrometry

    Xianyin Lai

    2013-01-01

    Full Text Available To address the challenges associated with differential expression proteomics, label-free mass spectrometric protein quantification methods have been developed as alternatives to array-based, gel-based, and stable isotope tag or label-based approaches. In this paper, we focus on the issues associated with label-free methods that rely on quantitation based on peptide ion peak area measurement. These issues include chromatographic alignment, peptide qualification for quantitation, and normalization. In addressing these issues, we present various approaches, assembled in a recently developed label-free quantitative mass spectrometry platform, that overcome these difficulties and enable comprehensive, accurate, and reproducible protein quantitation in highly complex protein mixtures from experiments with many sample groups. As examples of the utility of this approach, we present a variety of cases where the platform was applied successfully to assess differential protein expression or abundance in body fluids, in vitro nanotoxicology models, tissue proteomics in genetic knock-in mice, and cell membrane proteomics.
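
    One of the normalization issues mentioned above can be illustrated with a toy median normalization of peptide peak areas across runs (a common simple approach, sketched here on synthetic data; the platform's actual algorithm is not reproduced):

```python
import numpy as np

rng = np.random.default_rng(2)
true_abundance = rng.lognormal(10.0, 1.0, size=(500, 1))        # 500 peptides
run_bias = np.array([1.0, 1.3, 0.8, 1.1])                       # systematic per-run bias
areas = true_abundance * run_bias * rng.lognormal(0.0, 0.05, size=(500, 4))

# Shift each run in log space so its median matches the global median.
log_areas = np.log2(areas)
correction = np.median(log_areas, axis=0) - np.median(log_areas)
normalized = log_areas - correction
print(np.median(normalized, axis=0))    # per-run medians now coincide
```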

  13. Quantitation, network and function of protein phosphorylation in plant cell

    Lin eZHU

    2013-01-01

    Full Text Available Protein phosphorylation is one of the most important post-translational modifications (PTMs), as it participates in regulating various cellular processes and biological functions. It is therefore crucial to identify phosphorylated proteins to construct a phospho-relay network, and eventually to understand the underlying molecular regulatory mechanism in response to both internal and external stimuli. The changes in phosphorylation status at these novel phosphosites can be accurately measured using a 15N stable isotopic labeling in Arabidopsis (SILIA) quantitative proteomic approach in a high-throughput manner. One of the unique characteristics of the SILIA quantitative phosphoproteomic approach is the preservation of the native PTM status on proteins during the entire peptide preparation procedure. Evolved from SILIA is another quantitative PTM proteomic approach, AQUIP (absolute quantitation of isoforms of post-translationally modified proteins), which was developed by combining the advantages of targeted proteomics with SILIA. Bioinformatics-based phosphorylation site prediction coupled with an MS-based in vitro kinase assay is an additional way to extend the capability of phosphosite identification from the total cellular protein. The combined use of SILIA and AQUIP provides a novel strategy for molecular systems biology studies and for investigation of the in vivo biological functions of these phosphoprotein isoforms and combinatorial codes of PTMs.

  14. A gold nanoparticle-based semi-quantitative and quantitative ultrasensitive paper sensor for the detection of twenty mycotoxins

    Kong, Dezhao; Liu, Liqiang; Song, Shanshan; Suryoprabowo, Steven; Li, Aike; Kuang, Hua; Wang, Libing; Xu, Chuanlai

    2016-02-01

    A semi-quantitative and quantitative multi-immunochromatographic (ICA) strip detection assay was developed for the simultaneous detection of twenty types of mycotoxins from five classes, including zearalenones (ZEAs), deoxynivalenols (DONs), T-2 toxins (T-2s), aflatoxins (AFs), and fumonisins (FBs), in cereal food samples. Sensitive and specific monoclonal antibodies were selected for this assay. The semi-quantitative results were obtained within 20 min by the naked eye, with visual limits of detection for ZEAs, DONs, T-2s, AFs and FBs of 0.1-0.5, 2.5-250, 0.5-1, 0.25-1 and 2.5-10 μg kg-1, and cut-off values of 0.25-1, 5-500, 1-10, 0.5-2.5 and 5-25 μg kg-1, respectively. The quantitative results were obtained using a hand-held strip scan reader, with the calculated limits of detection for ZEAs, DONs, T-2s, AFs and FBs of 0.04-0.17, 0.06-49, 0.15-0.22, 0.056-0.49 and 0.53-1.05 μg kg-1, respectively. The analytical results of spiked samples were in accordance with the accurate content in the simultaneous detection analysis. This newly developed ICA strip assay is suitable for the on-site detection and rapid initial screening of mycotoxins in cereal samples, facilitating both semi-quantitative and quantitative determination.

  15. Comparison of quantitative and semiquantitative culture techniques for burn biopsy.

    Buchanan, K.; Heimbach, D. M.; Minshew, B H; Coyle, M B

    1986-01-01

    Accurate evaluation of bacterial colonization as a predictive index for wound sepsis has relied on a quantitative culture technique that provides exact colony counts per gram of tissue by culture of five serial dilutions of biopsy tissue homogenate. The method, while useful to the physician, is both labor intensive and expensive. In this study 78 eschar biopsies were cultured by a semiquantitative technique that involved the use of 0.1- and 0.01-ml samples of inocula and by the serial dilutio...

  16. Quantitative phylogenetic assessment of microbial communities in diverse environments

    von Mering, C.; Hugenholtz, P.; Raes, J.; Tringe, S.G.; Doerks,T.; Jensen, L.J.; Ward, N.; Bork, P.

    2007-01-01

    The taxonomic composition of environmental communities is an important indicator of their ecology and function. Here, we use a set of protein-coding marker genes, extracted from large-scale environmental shotgun sequencing data, to provide a more direct, quantitative and accurate picture of community composition than traditional rRNA-based approaches using polymerase chain reaction (PCR). By mapping marker genes from four diverse environmental data sets onto a reference species phylogeny, we show that certain communities evolve faster than others, determine preferred habitats for entire microbial clades, and provide evidence that such habitat preferences are often remarkably stable over time.

  17. Quantitative studies of multiphoton ionization using tunable VUV radiation

    The storage ring free electron laser makes studies of multiphoton ionization in the vacuum ultraviolet possible. At relatively low laser intensities one can study two-photon resonant three-photon ionization of atoms in a regime where perturbation theory works well. In this regime cross sections for the multiphoton processes can be measured accurately and then used for sensitive, quantitative detection of atoms. At higher intensities higher-order processes such as multiple ionization can take place. The tunability, variable pulse length, and well characterized spatial distribution of the FEL is important in unraveling the mechanisms of these processes

  18. Quantitative modeling of the physiology of ascites in portal hypertension

    Levitt David G

    2012-03-01

    Full Text Available Although the factors involved in cirrhotic ascites have been studied for a century, a number of observations are not understood, including the action of diuretics in the treatment of ascites and the ability of the plasma-ascitic albumin gradient to diagnose portal hypertension. This communication presents an explanation of ascites based solely on pathophysiological alterations within the peritoneal cavity. A quantitative model is described based on experimental vascular and intraperitoneal pressures, lymph flow, and peritoneal space compliance. The model's predictions accurately mimic clinical observations in ascites, including the magnitude and time course of changes observed following paracentesis or diuretic therapy.

  19. Ratio of slopes method for quantitative analysis in ceramic bodies

    A quantitative X-ray diffraction analysis technique developed at the University of Sheffield was adopted, rather than the previously widely used internal standard method, to determine the amount of the phases present in a reformulated whiteware porcelain and a BaTiO3 electrochemical material. This method, although it still employs an internal standard, was found to be very easy and accurate. The required weight fraction of a phase in the mixture to be analysed is determined from the ratio of slopes of two linear plots, designated as the analysis and reference lines, passing through their origins, using the least-squares method
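
    Since both lines are constrained through the origin, each slope is just Σxy/Σx², and the weight fraction is their ratio. A minimal sketch with made-up intensity data (the variable names and numbers are hypothetical):

```python
import numpy as np

def slope_through_origin(x, y):
    # Least-squares slope of a line forced through the origin: m = Σxy / Σx².
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(np.sum(x * y) / np.sum(x * x))

# Hypothetical intensity ratios (phase peak / internal-standard peak):
spike = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
ratio_analysis = np.array([0.031, 0.059, 0.092, 0.118, 0.151])   # unknown body
ratio_reference = np.array([0.102, 0.198, 0.305, 0.401, 0.497])  # pure phase

w = slope_through_origin(spike, ratio_analysis) / slope_through_origin(spike, ratio_reference)
print(f"Estimated weight fraction of the phase: {w:.3f}")        # ≈ 0.30 here
```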

  20. Quantitative myocardial blood flow with Rubidium-82 PET

    Hagemann, Christoffer E; Ghotbi, Adam A; Kjær, Andreas; Hasbak, Philip

    2015-01-01

    Positron emission tomography (PET) allows assessment of myocardial blood flow in absolute terms (ml/min/g). Quantification of myocardial blood flow (MBF) and myocardial flow reserve (MFR) extends the scope of conventional semi-quantitative myocardial perfusion imaging (MPI), e.g. in 1) identification of the extent of a multivessel coronary artery disease (CAD) burden, 2) patients with balanced 3-vessel CAD, 3) patients with subclinical CAD, and 4) patients with regional flow variance despite a high global MFR. A more accurate assessment of the ischemic burden in patients with intermediate... diagnose and risk stratify CAD patients, while assessing the potential of the modality in clinical practice.

  1. Detection of group a streptococcal pharyngitis by quantitative PCR

    Dunne, Eileen M.; Marshall, Julia L; Baker, Ciara A; Manning, Jayne; Gonis, Gena; Danchin, Margaret H; Smeesters, Pierre R; Satzke, Catherine; Steer, Andrew C.

    2013-01-01

    Background Group A streptococcus (GAS) is the most common bacterial cause of sore throat. School-age children bear the highest burden of GAS pharyngitis. Accurate diagnosis is difficult: the majority of sore throats are viral in origin, culture-based identification of GAS requires 24–48 hours, and up to 15% of children are asymptomatic throat carriers of GAS. The aim of this study was to develop a quantitative polymerase chain reaction (qPCR) assay for detecting GAS pharyngitis and assess its...

  2. Quantitative lung scintigraphy and spirometry in bronchogenic carcinoma

    Tumor size, location, scintigraphic and spirometric data were evaluated in 80 patients suffering from squamous cell carcinoma. Perfusion, ventilation, washout data, as well as vital capacity and forced expiratory volume in 1.0 sec showed decreasing values with more proximal bronchial obstruction. A statistically significant inverse correlation was found between tumor diameter and ventilation data in peripheral and central tumors. Washout data increased with tumor size in masses with peripheral location. Spirometric data were reduced in all patients regardless of tumor size and location. We were able to demonstrate that the quantitative evaluation of scintigraphic images can be used for accurate assessment of both postoperative lung function and operability. (orig.)

  3. Verification of Scientific Simulations via Hypothesis-Driven Comparative and Quantitative Visualization

    Ahrens, James P [ORNL; Heitmann, Katrin [ORNL; Petersen, Mark R [ORNL; Woodring, Jonathan [Los Alamos National Laboratory (LANL); Williams, Sean [Los Alamos National Laboratory (LANL); Fasel, Patricia [Los Alamos National Laboratory (LANL); Ahrens, Christine [Los Alamos National Laboratory (LANL); Hsu, Chung-Hsing [ORNL; Geveci, Berk [ORNL

    2010-11-01

    This article presents a visualization-assisted process that verifies scientific-simulation codes. Code verification is necessary because scientists require accurate predictions to interpret data confidently. This verification process integrates iterative hypothesis verification with comparative, feature, and quantitative visualization. Following this process can help identify differences in cosmological and oceanographic simulations.

  4. The Anatomy of the Lie: An Exploratory Investigation via the Quantitative EEG.

    Thorton, Kirtley E.

    1995-01-01

    Explores the use of quantitative EEG in the detection of lies employing a special electrode cap and specialized video recording and audio equipment. The method offers the ability to decide when it can predict accurately and when it can predict with 100% accuracy. (JPS)

  5. Composition and Quantitation of Microalgal Lipids by ERETIC 1H NMR Method

    Angelo Fontana; Adele Cutignano; Angela Sardo; Giuliana d'Ippolito; Carmela Gallo; Genoveffa Nuzzo

    2013-01-01

    Accurate characterization of biomass constituents is a crucial aspect of research in the biotechnological application of natural products. Here we report an efficient, fast and reproducible method for the identification and quantitation of fatty acids and complex lipids (triacylglycerols, glycolipids, phospholipids) in microalgae under investigation for the development of functional health products (probiotics, food ingredients, drugs, etc.) or third-generation biofuels. The procedure consists...

  6. Quantitative PCR for Detection and Enumeration of Genetic Markers of Bovine Fecal Pollution

    Accurate assessment of health risks associated with bovine (cattle) fecal pollution requires a reliable host-specific genetic marker and a rapid quantification method. We report the development of quantitative PCR assays for the detection of two recently described cow feces-spec...
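
    A qPCR assay of this kind is typically quantified against a standard curve, Ct = m·log10(copies) + b. A minimal sketch with invented standards (not the assay's actual calibration data):

```python
import numpy as np

log_copies = np.array([2.0, 3.0, 4.0, 5.0, 6.0])      # log10 marker copies in standards
ct_standards = np.array([33.1, 29.8, 26.4, 23.0, 19.7])

m, b = np.polyfit(log_copies, ct_standards, 1)        # fit Ct = m*log10(copies) + b
efficiency = 10 ** (-1.0 / m) - 1.0                   # ≈ 1.0 for a fully efficient assay

def quantify(ct):
    # Invert the standard curve to estimate copies in an unknown sample.
    return 10 ** ((ct - b) / m)

print(f"slope = {m:.2f}, efficiency = {efficiency:.1%}")
print(f"sample at Ct = 25.2  ->  {quantify(25.2):.0f} copies")
```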

  7. Preliminary Study on the Feasibility of Performing Quantitative Precipitation Estimation Using X-band Radar

    Figueras i Ventura, J.; C. Z. van de Beek; H. W. J. Russchenberg; R. Uijlenhoet

    2009-01-01

    IRCTR has built an experimental X-band Doppler polarimetric weather radar system aimed at obtaining high temporal and spatial resolution measurements of precipitation, with particular interest in light rain and drizzle. In this paper, a first analysis of the feasibility of obtaining accurate quantitative precipitation estimation from the radar data, performed using a high-density network of rain gauges, is presented.

  8. Preliminary Study on the Feasibility of Performing Quantitative Precipitation Estimation Using X-band Radar

    Figueras i Ventura, J.; Beek van de, C.Z.; Russchenberg, H.W.J.; Uijlenhoet, R.

    2009-01-01

    IRCTR has built an experimental X-band Doppler polarimetric weather radar system aimed at obtaining high temporal and spatial resolution measurements of precipitation, with particular interest in light rain and drizzle. In this paper a first analysis of the feasibility of obtaining accurate quantit

  9. Accurate LAI retrieval method based on PROBA/CHRIS data

    W. Fan

    2009-11-01

    Full Text Available Leaf area index (LAI) is one of the key structural variables in terrestrial vegetation ecosystems. Remote sensing offers a chance to derive LAI at regional scales accurately. Variations of background, atmospheric conditions and the anisotropy of canopy reflectance are three factors that can strongly restrain the accuracy of retrieved LAI. Based on the hybrid canopy reflectance model, a new hyperspectral directional second derivative method (DSD) is proposed in this paper. This method can estimate LAI accurately through analyzing the canopy anisotropy. The effect of the background can also be effectively removed. So the inversion precision and the dynamic range can be improved remarkably, which has been proved by numerical simulations. As the derivative method is very sensitive to random noise, we put forward an innovative filtering approach, by which the data can be de-noised in the spectral and spatial dimensions simultaneously. It shows that the filtering method can remove the random noise effectively; therefore, the method can be applied to remotely sensed hyperspectral images. The study region is situated in Zhangye, Gansu Province, China; the hyperspectral and multi-angular image of the study region was acquired from the Compact High-Resolution Imaging Spectrometer/Project for On-Board Autonomy (CHRIS/PROBA) on 4 and 14 June 2008. After the pre-processing procedures, the DSD method was applied, and the retrieved LAI was validated against the ground truth of 11 sites. It shows that, by applying the innovative filtering method, the new LAI inversion method is accurate and effective.
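
    The core numerical step, a noise-robust second derivative of the reflectance spectrum, can be sketched with a Savitzky-Golay filter, which smooths and differentiates in one pass (an assumed stand-in for the paper's own filtering scheme; the spectrum and window settings are illustrative):

```python
import numpy as np
from scipy.signal import savgol_filter

# Synthetic red-edge-like reflectance spectrum with additive noise.
wavelength = np.linspace(400.0, 1000.0, 601)                    # nm, 1 nm sampling
reflectance = 0.3 / (1.0 + np.exp(-(wavelength - 715.0) / 15.0))
noisy = reflectance + np.random.default_rng(3).normal(0.0, 0.002, wavelength.size)

# Second derivative with simultaneous polynomial smoothing.
d2 = savgol_filter(noisy, window_length=31, polyorder=3, deriv=2, delta=1.0)
print(wavelength[np.argmax(d2)], wavelength[np.argmin(d2)])     # red-edge inflection structure
```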

  10. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T.; Cerutti, Francesco; Chin, Mary P. W.; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G.; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R.; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both 4He and 12C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth–dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  11. Fast and accurate estimation for astrophysical problems in large databases

    Richards, Joseph W.

    2010-10-01

    A recent flood of astronomical data has created much demand for sophisticated statistical and machine learning tools that can rapidly draw accurate inferences from large databases of high-dimensional data. In this Ph.D. thesis, methods for statistical inference in such databases will be proposed, studied, and applied to real data. I use methods for low-dimensional parametrization of complex, high-dimensional data that are based on the notion of preserving the connectivity of data points in the context of a Markov random walk over the data set. I show how this simple parameterization of data can be exploited to: define appropriate prototypes for use in complex mixture models, determine data-driven eigenfunctions for accurate nonparametric regression, and find a set of suitable features to use in a statistical classifier. In this thesis, methods for each of these tasks are built up from simple principles, compared to existing methods in the literature, and applied to data from astronomical all-sky surveys. I examine several important problems in astrophysics, such as estimation of star formation history parameters for galaxies, prediction of redshifts of galaxies using photometric data, and classification of different types of supernovae based on their photometric light curves. Fast methods for high-dimensional data analysis are crucial in each of these problems because they all involve the analysis of complicated high-dimensional data in large, all-sky surveys. Specifically, I estimate the star formation history parameters for the nearly 800,000 galaxies in the Sloan Digital Sky Survey (SDSS) Data Release 7 spectroscopic catalog, determine redshifts for over 300,000 galaxies in the SDSS photometric catalog, and estimate the types of 20,000 supernovae as part of the Supernova Photometric Classification Challenge. Accurate predictions and classifications are imperative in each of these examples because these estimates are utilized in broader inference problems

  12. Fast and Accurate Construction of Confidence Intervals for Heritability.

    Schweiger, Regev; Kaufman, Shachar; Laaksonen, Reijo; Kleber, Marcus E; März, Winfried; Eskin, Eleazar; Rosset, Saharon; Halperin, Eran

    2016-06-01

    Estimation of heritability is fundamental in genetic studies. Recently, heritability estimation using linear mixed models (LMMs) has gained popularity because these estimates can be obtained from unrelated individuals collected in genome-wide association studies. Typically, heritability estimation under LMMs uses the restricted maximum likelihood (REML) approach. Existing methods for the construction of confidence intervals and estimators of SEs for REML rely on asymptotic properties. However, these assumptions are often violated because of the bounded parameter space, statistical dependencies, and limited sample size, leading to biased estimates and inflated or deflated confidence intervals. Here, we show that the estimation of confidence intervals by state-of-the-art methods is inaccurate, especially when the true heritability is relatively low or relatively high. We further show that these inaccuracies occur in datasets including thousands of individuals. Such biases are present, for example, in estimates of heritability of gene expression in the Genotype-Tissue Expression project and of lipid profiles in the Ludwigshafen Risk and Cardiovascular Health study. We also show that often the probability that the genetic component is estimated as 0 is high even when the true heritability is bounded away from 0, emphasizing the need for accurate confidence intervals. We propose a computationally efficient method, ALBI (accurate LMM-based heritability bootstrap confidence intervals), for estimating the distribution of the heritability estimator and for constructing accurate confidence intervals. Our method can be used as an add-on to existing methods for estimating heritability and variance components, such as GCTA, FaST-LMM, GEMMA, or EMMAX. PMID:27259052
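
    The bootstrap principle behind such intervals can be sketched generically: resample, re-estimate, take percentiles. The toy below uses a crude variance-ratio estimator on synthetic data; it is not the ALBI algorithm (which bootstraps the distribution of the REML estimator), only an illustration of the interval construction:

```python
import numpy as np

rng = np.random.default_rng(4)
n = 2000
g = rng.normal(0.0, np.sqrt(0.3), n)       # genetic component: true h2 = 0.3
y = g + rng.normal(0.0, np.sqrt(0.7), n)   # phenotype = genetic + environmental

def h2_hat(g, y):
    return np.var(g) / np.var(y)

boot = []
for _ in range(2000):
    idx = rng.integers(0, n, n)            # resample individuals with replacement
    boot.append(h2_hat(g[idx], y[idx]))
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"h2 = {h2_hat(g, y):.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```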

  13. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions lead to the excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, both with proton and ion beams. In order to support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface is capable of importing also radiotherapy treatment data described in DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinically, similar cases will be presented both in terms of absorbed dose and biological dose calculations describing the various available features. PMID:27242956

  14. Accurate energies of the He atom with undergraduate quantum mechanics

    Massé, Robert C.; Walker, Thad G.

    2015-08-01

    Estimating the energies and splitting of the 1s2s singlet and triplet states of helium is a classic exercise in quantum perturbation theory but yields only qualitatively correct results. Using a six-line computer program, the 1s2s energies calculated by matrix diagonalization using a seven-state basis improve the results to 0.4% error or better. This is an effective and practical illustration of the quantitative power of quantum mechanics, at a level accessible to undergraduate students.
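
    In the spirit of the "six-line computer program" mentioned above, the whole calculation reduces to diagonalizing a small Hamiltonian matrix. The sketch below shows the mechanics only; the matrix elements are placeholders, not the paper's actual integrals:

```python
import numpy as np

E0 = np.array([-2.25, -2.25, -2.0, -2.0, -1.9, -1.9, -1.8])   # zeroth-order energies (a.u.), placeholders
H = np.diag(E0)
V = 0.02                                                       # assumed coupling scale
for i in range(len(E0)):
    for j in range(i + 1, len(E0)):
        H[i, j] = H[j, i] = V / (1 + abs(i - j))               # placeholder off-diagonal elements

energies, states = np.linalg.eigh(H)      # diagonalize in the 7-state basis
print(energies[:2])                        # lowest two levels, including state mixing
```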

  15. An accurate RLGC circuit model for dual tapered TSV structure

    A fast RLGC circuit model with analytical expressions is proposed for the dual tapered through-silicon via (TSV) structure in three-dimensional integrated circuits under different slope angles over a wide frequency region. By describing the electrical characteristics of the dual tapered TSV structure, the RLGC parameters are extracted based on the numerical integration method. The RLGC model includes metal resistance, metal inductance, substrate resistance, and outer inductance, with the skin effect and eddy effect taken into account. The proposed analytical model is verified to be nearly as accurate as the Q3D extractor but more efficient. (semiconductor integrated circuits)

  16. Accurately Determining the Risks of Rising Sea Level

    Marbaix, Philippe; Nicholls, Robert J.

    2007-10-01

    With the highest density of people and the greatest concentration of economic activity located in the coastal regions, sea level rise is an important concern as the climate continues to warm. Subsequent flooding may potentially disrupt industries, populations, and livelihoods, particularly in the long term if the climate is not quickly stabilized [McGranahan et al., 2007; Tol et al., 2006]. To help policy makers understand these risks, a more accurate description of hazards posed by rising sea levels is needed at the global scale, even though the impacts in specific regions are better known.

  17. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Shortis, Mark

    2015-01-01

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems. PMID:26690172

  18. Accurate analysis of EBSD data for phase identification

    Palizdar, Y; Cochrane, R C; Brydson, R; Leary, R; Scott, A J, E-mail: preyp@leeds.ac.u [Institute for Materials Research, University of Leeds, Leeds LS2 9JT UK (United Kingdom)

    2010-07-01

    This paper aims to investigate the reliability of software default settings in the analysis of EBSD results. To study the effect of software settings on the EBSD results, the presence of different phases in high-Al steel has been investigated by EBSD. The results show the importance of appropriate automated analysis parameters for valid and reliable phase discrimination. Specifically, the importance of the minimum number of indexed bands and the maximum solution error have been investigated, with values of 7-9 and 1.0-1.5° respectively found to be needed for accurate analysis.
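
    Applied in post-processing, the two criteria amount to a simple filter over the indexed points. A sketch with invented data (the field names and layout are assumptions, not a specific vendor format):

```python
import numpy as np

dtype = [("phase", "U10"), ("bands", int), ("error", float)]   # error: solution error, degrees
points = np.array([("ferrite", 9, 0.6), ("ferrite", 6, 0.9),
                   ("austenite", 8, 1.8), ("austenite", 8, 1.2)], dtype=dtype)

MIN_BANDS, MAX_ERROR = 7, 1.5             # thresholds from the ranges reported above
reliable = points[(points["bands"] >= MIN_BANDS) & (points["error"] <= MAX_ERROR)]
print(reliable["phase"])                   # only solutions meeting both criteria survive
```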

  19. Accurate characterization of OPVs: Device masking and different solar simulators

    Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.;

    2013-01-01

    One of the prime objectives of organic solar cell research has been to improve the power conversion efficiency. Unfortunately, the accurate determination of this property is not straightforward and has led to the recommendation that record devices be tested and certified at a few accredited laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light field homogeneity and direct versus diffuse components, as well as the correct device area...

  20. Accurate Excited State Geometries within Reduced Subspace TDDFT/TDA.

    Robinson, David

    2014-12-01

    A method for the calculation of TDDFT/TDA excited state geometries within a reduced subspace of Kohn-Sham orbitals has been implemented and tested. Accurate geometries are found for all of the fluorophore-like molecules tested, with at most all valence occupied orbitals and half of the virtual orbitals included, and for some molecules even fewer orbitals. Efficiency gains of between 15 and 30% are found for essentially the same level of accuracy as a standard TDDFT/TDA excited state geometry optimization calculation. PMID:26583218

  1. Accurate method of modeling cluster scaling relations in modified gravity

    He, Jian-hua; Li, Baojiu

    2016-06-01

    We propose a new method to model cluster scaling relations in modified gravity. Using a suite of nonradiative hydrodynamical simulations, we show that the scaling relations of accumulated gas quantities, such as the Sunyaev-Zel'dovich effect (Compton-y parameter) and the x-ray Compton-y parameter, can be accurately predicted using the known results in the ΛCDM model with a precision of ~3%. This method provides a reliable way to analyze the gas physics in modified gravity using the less demanding and much more efficient pure cold dark matter simulations. Our results therefore have important theoretical and practical implications in constraining gravity using cluster surveys.

  2. Accurate Programming: Thinking about programs in terms of properties

    Walid Taha

    2011-09-01

    Full Text Available Accurate programming is a practical approach to producing high-quality programs. It combines ideas from test automation, test-driven development, agile programming, and other state-of-the-art software development methods. In addition to building on approaches that have proven effective in practice, it emphasizes concepts that help programmers sharpen their understanding of both the problems they are solving and the solutions they come up with. This is achieved by encouraging programmers to think about programs in terms of properties.
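
    Thinking "in terms of properties" maps naturally onto property-based testing. A small sketch using the Hypothesis library (an assumed tool choice, not one named by the record): state an invariant, and let the framework search for counterexamples:

```python
from hypothesis import given, strategies as st

def run_length_encode(s):
    # Collapse runs of equal characters into (char, count) pairs.
    out = []
    for ch in s:
        if out and out[-1][0] == ch:
            out[-1][1] += 1
        else:
            out.append([ch, 1])
    return out

def run_length_decode(pairs):
    return "".join(ch * n for ch, n in pairs)

@given(st.text())
def test_roundtrip(s):
    # The property: decoding any encoding recovers the original string.
    assert run_length_decode(run_length_encode(s)) == s
```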

  3. Accurate studies on dissociation energies of diatomic molecules

    SUN; WeiGuo; FAN; QunChao

    2007-01-01

    The molecular dissociation energies of some electronic states of hydride and N2 molecules were studied using a parameter-free analytical formula suggested in this study and the algebraic method (AM) proposed recently. The results show that the accurate AM dissociation energies D_e^AM agree excellently with the experimental dissociation energies D_e^expt, and that the dissociation energy of an electronic state, such as the 2^3Δ_g state of 7Li2, whose experimental value is not available, can be predicted using the new formula.

  4. Pink-Beam, Highly-Accurate Compact Water Cooled Slits

    Advanced Design Consulting, Inc. (ADC) has designed accurate compact slits for applications where high precision is required. The system consists of vertical and horizontal slit mechanisms, a vacuum vessel which houses them, water cooling lines with vacuum guards connected to the individual blades, stepper motors with linear encoders, limit (home position) switches and electrical connections including internal wiring for a drain current measurement system. The total slit size is adjustable from 0 to 15 mm both vertically and horizontally. Each of the four blades are individually controlled and motorized. In this paper, a summary of the design and Finite Element Analysis of the system are presented

  5. Accurate laboratory boresight alignment of transmitter/receiver optical axes

    Martinek, Stephen J.

    1986-01-01

    An apparatus and procedure for the boresight alignment of the transmitter and receiver optical axes of a laser radar system are described. This accurate technique is applicable to both shared and dual aperture systems. A laser autostigmatic cube interferometer (LACI) is utilized to align a paraboloid in autocollimation. The LACI pinhole located at the paraboloid center of curvature becomes the far field receiver track and transmit reference point when illuminated by the transmit beam via a fiber optic pick-off/delay line. Boresight alignment accuracy better than 20 microrad is achievable.

  6. Fast and accurate methods of independent component analysis: A survey

    Tichavský, Petr; Koldovský, Zbyněk

    2011-01-01

    Roč. 47, č. 3 (2011), s. 426-438. ISSN 0023-5954 R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278 Institutional research plan: CEZ:AV0Z10750506 Keywords : Blind source separation * artifact removal * electroencephalogram * audio signal processing Subject RIV: BB - Applied Statistics, Operational Research Impact factor: 0.454, year: 2011 http://library.utia.cas.cz/separaty/2011/SI/tichavsky-fast and accurate methods of independent component analysis a survey.pdf

  7. Simple, Accurate, and Robust Nonparametric Blind Super-Resolution

    Shao, Wen-Ze; Elad, Michael

    2015-01-01

    This paper proposes a simple, accurate, and robust approach to single image nonparametric blind Super-Resolution (SR). This task is formulated as a functional to be minimized with respect to both an intermediate super-resolved image and a nonparametric blur-kernel. The proposed approach includes a convolution consistency constraint which uses a non-blind learning-based SR result to better guide the estimation process. Another key component is the unnatural bi-ℓ0-ℓ2-norm regularization imposed...

  8. Accurate Image Super-Resolution Using Very Deep Convolutional Networks

    Kim, Jiwon; Lee, Jung Kwon; Lee, Kyoung Mu

    2015-01-01

    We present a highly accurate single-image super-resolution (SR) method. Our method uses a very deep convolutional network inspired by the VGG-net used for ImageNet classification [Simonyan and Zisserman, 2015]. We find increasing our network depth shows a significant improvement in accuracy. Our final model uses 20 weight layers. By cascading small filters many times in a deep network structure, contextual information over large image regions is exploited in an efficient way. With very deep networks, ho...

  9. Optimized pulse sequences for the accurate measurement of aortic compliance

    Aortic compliance is potentially an important cardiovascular diagnostic parameter by virtue of a proposed correlation with cardiovascular fitness. Measurement requires cross-sectional images of the ascending and descending aorta in systole and diastole for measurement of aortic lumen areas. Diastolic images have poor vessel-wall delineation due to signal from slow-flowing blood. A comparison has been carried out using presaturation (SAT) RF pulses, transparent RF pulses, and flow-compensated gradients in standard pulse sequences to improve vessel-wall delineation in diastole. Properly timed SAT pulses provide the most consistent vessel-wall delineation and the most accurate measurement of aortic compliance
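
    The arithmetic behind the measurement is simple once the lumen areas are in hand. A hedged sketch, assuming the common definition of compliance as relative area change per unit pulse pressure (all values illustrative, not from the record):

```python
area_systole = 7.1       # cm², ascending aortic lumen in systole (illustrative)
area_diastole = 6.3      # cm², ascending aortic lumen in diastole (illustrative)
pulse_pressure = 45.0    # mmHg, systolic minus diastolic pressure (illustrative)

compliance = (area_systole - area_diastole) / (area_diastole * pulse_pressure)
print(f"aortic compliance ≈ {compliance * 100:.2f} % per mmHg")
```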

  10. Global cDNA Amplification Combined with Real-Time RT–PCR: Accurate Quantification of Multiple Human Potassium Channel Genes at the Single Cell Level

    Al-Taher, A.; Bashein, A.; Nolan, T.; Hollingsworth, M.; Brady, G

    2000-01-01

    We have developed a sensitive quantitative RT–PCR procedure suitable for the analysis of small samples, including single cells, and have used it to measure levels of potassium channel mRNAs in a panel of human tissues and small numbers of cells grown in culture. The method involves an initial global amplification of cDNA derived from all added polyadenylated mRNA followed by quantitative RT–PCR of individual genes using specific primers. In order to facilitate rapid and accurate processing of...

  11. A Software Package of Quantitative SPECT Image Reconstruction for Measurement of Physiological in Vivo Parameter

    Accurate estimation of radioactivity is essential for the quantitative measurement of physiological in vivo parameter in the medical field using nuclear medicine imaging. Among many nuclear medicine modalities, single photon emission computed tomography (SPECT) has been widely used in many clinical studies. Many SPECT studies with quantitative manner have been reported and evaluated, which have been contributed to the advance of SPECT technique and wide spread of its use. However, SPECT is still not employed in quantitative study as much as positron emission tomography (PET) has done. Recently, we reported an approach to quantify radioactivity accurately using SPECT, and evaluated its applicability in real measurement of physiological parameter [1-8]. Based on these reports, we developed a software package (QSPECT) for image reconstruction of SPECT data

  12. Isomerism of Cyanomethanimine: Accurate Structural, Energetic, and Spectroscopic Characterization.

    Puzzarini, Cristina

    2015-11-25

    The structures, relative stabilities, and rotational and vibrational parameters of the Z-C-, E-C-, and N-cyanomethanimine isomers have been evaluated using state-of-the-art quantum-chemical approaches. Equilibrium geometries have been calculated by means of a composite scheme based on coupled-cluster calculations that accounts for the extrapolation to the complete basis set limit and core-correlation effects. The latter approach is proved to provide molecular structures with an accuracy of 0.001-0.002 Å and 0.05-0.1° for bond lengths and angles, respectively. Systematically extrapolated ab initio energies, accounting for electron correlation through coupled-cluster theory, including up to single, double, triple, and quadruple excitations, and corrected for core-electron correlation and anharmonic zero-point vibrational energy, have been used to accurately determine relative energies and the Z-E isomerization barrier with an accuracy of about 1 kJ/mol. Vibrational and rotational spectroscopic parameters have been investigated by means of hybrid schemes that allow us to obtain rotational constants accurate to about a few megahertz and vibrational frequencies with a mean absolute error of ∼1%. Where available, for all properties considered, a very good agreement with experimental data has been observed. PMID:26529434
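
    The basis-set extrapolation step mentioned above is commonly done with a two-point inverse-cube formula, E(n) = E_CBS + A·n⁻³. A minimal sketch with fictitious correlation energies (the paper's specific scheme and values may differ):

```python
def cbs_extrapolate(e_n, e_m, n, m):
    # Two-point extrapolation assuming E(x) = E_CBS + A * x**-3.
    A = (e_n - e_m) / (n**-3 - m**-3)
    return e_n - A * n**-3

e_tz, e_qz = -0.45210, -0.46480   # cc-pVTZ (n=3), cc-pVQZ (n=4) correlation energies, hartree (fictitious)
print(f"E_CBS = {cbs_extrapolate(e_tz, e_qz, 3, 4):.5f} hartree")
```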

  13. Accurate phylogenetic classification of DNA fragments based onsequence composition

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome data sets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
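
    The kind of feature a composition-based classifier works from can be sketched directly: normalized k-mer frequencies of a fragment (k = 4 below). This is only the feature-extraction step, not PhyloPythia's actual feature set or model:

```python
from collections import Counter
from itertools import product

def kmer_profile(seq, k=4):
    # Normalized frequencies of all 4**k DNA k-mers in a fragment.
    kmers = ["".join(p) for p in product("ACGT", repeat=k)]
    counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
    total = max(1, len(seq) - k + 1)
    return [counts[km] / total for km in kmers]

fragment = "ATGCGTACGTTAGCGC" * 60        # ~1 kb toy fragment
features = kmer_profile(fragment)
print(len(features), max(features))       # 256-dimensional composition vector
```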

  14. Canadian consumer issues in accurate and fair electricity metering

    The Public Interest Advocacy Centre (PIAC), located in Ottawa, participates in regulatory proceedings concerning electricity and natural gas to support the public and consumer interest. PIAC provides legal representation, research and policy support and public advocacy. A study aimed at determining the issues at stake for residential electricity consumers in the provision of fair and accurate electricity metering was commissioned by Measurement Canada in consultation with Industry Canada's Consumer Affairs. The metering of electricity must be carried out in a fair and efficient manner for all residential consumers. The Electricity and Gas Inspection Act was developed to ensure compliance with standards for measuring instrumentation. The accurate metering of electricity through the distribution systems for electricity in Canada represents the main focus of this study and report. The role played by Measurement Canada and the increased efficiencies of service delivery by Measurement Canada or the changing of electricity market conditions are of special interest. The role of Measurement Canada was explained, as were the concerns of residential consumers. A comparison was then made between the interests of residential consumers and those of commercial and industrial electricity consumers in electricity metering. Selected American and Commonwealth jurisdictions were reviewed in light of their electricity metering practices. A section on compliance and conflict resolution was included, in addition to a section on the use of voluntary codes for compliance and conflict resolution

  15. How accurately can 21cm tomography constrain cosmology?

    Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver

    2008-07-01

    There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions, focusing on the 21 cm signal from 6 < z < 20: from instrumental noise, to uncertainties in the reionization history, to the level of contamination from astrophysical foregrounds. We derive simple analytic estimates for how various assumptions affect an experiment's sensitivity, and we find that the modeling of reionization is the most important, followed by the array layout. We present an accurate yet robust method for measuring cosmological parameters that exploits the fact that the ionization power spectra are rather smooth functions that can be accurately fit by 7 phenomenological parameters. We find that for future experiments, marginalizing over these nuisance parameters may provide constraints almost as tight on the cosmology as if 21 cm tomography measured the matter power spectrum directly. A future square kilometer array optimized for 21 cm tomography could improve the sensitivity to spatial curvature and neutrino masses by up to 2 orders of magnitude, to ΔΩk≈0.0002 and Δmν≈0.007eV, and give a 4σ detection of the spectral index running predicted by the simplest inflation models.

  16. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules
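
    The workflow can be caricatured as ordinary supervised regression: descriptors in, per-molecule parameter corrections out. The sketch below uses scikit-learn on synthetic data and is only an illustration of the ML-SQC idea, not the OM2 parameterization itself:

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(5)
X = rng.normal(size=(1000, 8))                # molecular descriptors (synthetic)
delta = 2.0 * X[:, 0] + np.sin(X[:, 1]) + rng.normal(0.0, 0.3, 1000)  # target correction, kcal/mol

model = RandomForestRegressor(n_estimators=200, random_state=0)
model.fit(X[:800], delta[:800])               # train on 800 molecules
pred = model.predict(X[800:])                 # predict corrections for the rest
print(f"MAE = {np.mean(np.abs(pred - delta[800:])):.2f} kcal/mol")
```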

  17. Cerebral fat embolism: Use of MR spectroscopy for accurate diagnosis

    Laxmi Kokatnur

    2015-01-01

    Full Text Available Cerebral fat embolism (CFE) is an uncommon but serious complication following orthopedic procedures. It usually presents with altered mental status, and can be a part of fat embolism syndrome (FES) if associated with cutaneous and respiratory manifestations. Because of the presence of other common factors affecting the mental status, particularly in the postoperative period, the diagnosis of CFE can be challenging. Magnetic resonance imaging (MRI) of the brain typically shows multiple lesions distributed predominantly in the subcortical region, which appear as hyperintense lesions on T2 and diffusion-weighted images. Although the location offers a clue, the MRI findings are not specific for CFE. Watershed infarcts, hypoxic encephalopathy, disseminated infections, demyelinating disorders, and diffuse axonal injury can also show similar changes on MRI of the brain. The presence of fat in these hyperintense lesions, identified by MR spectroscopy as raised lipid peaks, will help in the accurate diagnosis of CFE. Normal brain tissue or conditions producing similar MRI changes will not show any lipid peak on MR spectroscopy. We present a case of CFE initially misdiagnosed as brain stem stroke based on clinical presentation and cranial computed tomography (CT) scan; later, MR spectroscopy elucidated the accurate diagnosis.

  18. The economic value of accurate wind power forecasting to utilities

    Watson, S.J. [Rutherford Appleton Lab., Oxfordshire (United Kingdom); Giebel, G.; Joensen, A. [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

    With increasing penetrations of wind power, the need for accurate forecasting is becoming ever more important. Wind power is by its very nature intermittent. For utility schedulers this presents its own problems, particularly when the penetration of wind power capacity in a grid reaches a significant level (>20%). However, using accurate forecasts of wind power at wind farm sites, schedulers are able to plan the operation of conventional power capacity to accommodate the fluctuating demands of consumers and wind farm output. The results of a study to assess the value of forecasting at several potential wind farm sites in the UK and in the US state of Iowa using the Reading University/Rutherford Appleton Laboratory National Grid Model (NGM) are presented. The results are assessed for different types of wind power forecasting, namely: persistence, optimised numerical weather prediction or perfect forecasting. In particular, it will be shown how the NGM has been used to assess the value of numerical weather prediction forecasts from the Danish Meteorological Institute model, HIRLAM, and the US Nested Grid Model, which have been 'site-tailored' by the use of the linearized flow model WAsP and by various Model Output Statistics (MOS) and autoregressive techniques. (au)

  19. Atomic spectroscopy and highly accurate measurement: determination of fundamental constants

    This document reviews the theoretical and experimental achievements of the author concerning highly accurate atomic spectroscopy applied to the determination of fundamental constants. A purely optical frequency measurement of the 2S-12D 2-photon transitions in atomic hydrogen and deuterium has been performed. The experimental setup is described as well as the data analysis. Optimized values for the Rydberg constant and Lamb shifts have been deduced (R = 109737.31568516 (84) cm-1). An experiment devoted to the determination of the fine structure constant with an aimed relative uncertainty of 10-9 began in 1999. This experiment is based on the fact that Bloch oscillations in a frequency-chirped optical lattice are a powerful tool to transfer coherently many photon momenta to the atoms. We have used this method to measure accurately the ratio h/m(Rb). The measured value of the fine structure constant is α-1 = 137.03599884 (91) with a relative uncertainty of 6.7*10-9. The future and perspectives of this experiment are presented. This document, presented before an academic board, will allow its author to manage research work and particularly to tutor thesis students. (A.C.)
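
    The chain from the measured ratio h/m(Rb) to the fine structure constant is the standard inversion of the definition of the Rydberg constant, R_inf = alpha^2 m_e c / (2h):

        \alpha^{2} \;=\; \frac{2 R_\infty}{c}\,\frac{h}{m_e}
                   \;=\; \frac{2 R_\infty}{c}\,
                         \frac{m_{\mathrm{Rb}}}{m_e}\,
                         \frac{h}{m_{\mathrm{Rb}}}

    Since R_inf and the mass ratio m_Rb/m_e are known independently to high precision, an accurate measurement of h/m(Rb) directly yields alpha.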

  20. KFM: a homemade yet accurate and dependable fallout meter

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The step-by-step illustrated instructions for making and using a KFM are presented. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM.

  1. Accurate location estimation of moving object In Wireless Sensor network

    Vinay Bhaskar Semwal

    2011-12-01

    Full Text Available One of the central issues in wireless sensor networks is tracking the location of a moving object, which carries the overhead of storing data and demands an accurate estimate of the target location under energy constraints. There is no mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields that use this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information can be fed back to central air conditioning and ventilation systems. In this research paper, we propose a protocol based on prediction and an adaptive algorithm, which reduces the number of sensor nodes needed through an accurate estimate of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the target, and that it extends the lifetime of the network while using fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.
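
    A minimal sketch of the prediction-based activation idea: predict the target's next position from its last fixes and wake only the sensors near that prediction, letting the rest sleep. All positions, radii and function names below are hypothetical.

        import numpy as np

        def predict_next(track):
            """Constant-velocity prediction from the last two position fixes."""
            (x0, y0), (x1, y1) = track[-2], track[-1]
            return (2 * x1 - x0, 2 * y1 - y0)

        def nodes_to_wake(nodes, predicted, radius=12.0):
            """Activate only sensors within `radius` of the predicted position."""
            p = np.asarray(predicted)
            return [i for i, n in enumerate(nodes) if np.linalg.norm(n - p) <= radius]

        nodes = np.random.default_rng(2).uniform(0, 100, size=(50, 2))  # node positions
        track = [(10.0, 10.0), (14.0, 13.0)]                            # past target fixes
        awake = nodes_to_wake(nodes, predict_next(track))
        print(len(awake), "of", len(nodes), "nodes active")             # rest can sleep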

  2. Accurate interlaminar stress recovery from finite element analysis

    Tessler, Alexander; Riggs, H. Ronald

    1994-01-01

    The accuracy and robustness of a two-dimensional smoothing methodology is examined for the problem of recovering accurate interlaminar shear stress distributions in laminated composite and sandwich plates. The smoothing methodology is based on a variational formulation which combines discrete least-squares and penalty-constraint functionals in a single variational form. The smoothing analysis utilizes optimal strains computed at discrete locations in a finite element analysis. These discrete strain data are smoothed with a smoothing element discretization, producing superior accuracy strains and their first gradients. The approach enables the resulting smooth strain field to be practically C1-continuous throughout the domain of smoothing, exhibiting superconvergent properties of the smoothed quantity. The continuous strain gradients are also obtained directly from the solution. The recovered strain gradients are subsequently employed in the integration of the equilibrium equations to obtain accurate interlaminar shear stresses. The problem is a simply-supported rectangular plate under a doubly sinusoidal load. The problem has an exact analytic solution which serves as a measure of goodness of the recovered interlaminar shear stresses. The method has the versatility of being applicable to the analysis of rather general and complex structures built of distinct components and materials, such as found in aircraft design. For these types of structures, the smoothing is achieved with 'patches', each patch covering the domain in which the smoothed quantity is physically continuous.

  3. Accurate rest frequencies of methanol maser and dark cloud lines

    Müller, H S P; Maeder, H

    2004-01-01

    We report accurate laboratory measurements of selected methanol transition frequencies between 0.834 and 230 GHz in order to facilitate astronomical velocity analyses. New data have been obtained between 10 and 27 GHz and between 60 and 119 GHz. Emphasis has been put on known or potential interstellar maser lines as well as on transitions suitable for the investigation of cold dark clouds. Because of the narrow line widths (<0.5 km s-1) of maser lines and lines detected in dark molecular clouds, accurate frequencies are needed for comparison of the velocities of different methanol lines with each other as well as with lines from other species. In particular, frequencies for a comprehensive set of transitions are given which, because of their low level energies (<20 cm-1 or 30 K), are potentially detectable in cold clouds. Global Hamiltonian fits generally do not yet yield the required accuracy. Additionally, we report transition frequencies for other lines that may be used to test and to improve existing...

  4. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Zhang Mingheng

    2013-01-01

    Full Text Available Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement for intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multi-step prediction can predict traffic state trends over a certain period in the future, which, from the perspective of dynamic decision making, is far more important than the current traffic condition alone. Thus, in this paper, an accurate multi-step traffic flow prediction model based on SVM is proposed, in which the input vectors comprise actual traffic volumes; four different types of input vectors were compared to verify their prediction performance against each other. Finally, the model was verified with actual data in the empirical analysis phase, and the test results showed that the proposed SVM model had a good ability for traffic flow prediction and that the SVM-HPT model outperformed the other three models.
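
    A minimal sketch of "direct" multi-step prediction, one SVR trained per forecast horizon on lagged volumes; the record's four input-vector designs and its SVM-HPT variant are not reproduced here, and the data below are synthetic.

        import numpy as np
        from sklearn.svm import SVR

        rng = np.random.default_rng(3)
        t = np.arange(600)
        volume = 100 + 30 * np.sin(2 * np.pi * t / 96) + rng.normal(scale=5, size=600)

        lags, horizons = 8, 3        # 8 past samples in, predict 1..3 steps ahead
        X = np.stack([volume[i:i + lags]
                      for i in range(len(volume) - lags - horizons)])
        models = []
        for h in range(1, horizons + 1):         # one SVR per horizon
            y = volume[lags + h - 1 : len(volume) - horizons + h - 1]
            models.append(SVR(C=10.0, epsilon=0.5).fit(X[:500], y[:500]))

        latest = volume[-lags:].reshape(1, -1)
        print([float(m.predict(latest)[0]) for m in models])  # 1-, 2-, 3-step forecasts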

  5. QUANTITATIVE CONFOCAL LASER SCANNING MICROSCOPY

    Merete Krog Raarup

    2011-05-01

    Full Text Available This paper discusses recent advances in confocal laser scanning microscopy (CLSM) for imaging of 3D structure as well as quantitative characterization of biomolecular interactions and diffusion behaviour by means of one- and two-photon excitation. The use of CLSM for improved stereological length estimation in thick (up to 0.5 mm) tissue is proposed. The techniques of FRET (Fluorescence Resonance Energy Transfer), FLIM (Fluorescence Lifetime Imaging Microscopy), FCS (Fluorescence Correlation Spectroscopy) and FRAP (Fluorescence Recovery After Photobleaching) are introduced, and their applicability for quantitative imaging of biomolecular (co-)localization and trafficking in live cells is described. The advantage of two-photon versus one-photon excitation in relation to these techniques is discussed.

  6. Quantitative phase imaging with neutrons

    Full text: The use of thermal neutrons in contact radiography and tomography provides a powerful non-destructive analysis technique for materials that are difficult to study with x-rays. In this presentation we explore quantitative phase imaging using neutrons. We demonstrate a new class of phase-sensitive neutron radiography, using a simple experimental geometry, that provides independent quantitative phase and amplitude images of the sample. Moreover, the coherence requirements on the neutrons for the observation of phase effects are very modest, allowing use of the relatively limited neutron flux. The technique is applicable in cases of extreme phase gradient where image resolution would preclude interferometric determination. Further, our method allows weakly absorbing samples to be visualised at greatly reduced radiation doses

  7. Directional and quantitative phosphorylation networks

    Jørgensen, Claus; Linding, Rune

    2008-01-01

    Directionality in protein signalling networks is due to modulated protein-protein interactions and is fundamental for proper signal progression and response to external and internal cues. This property is in part enabled by linear motifs embedding post-translational modification sites. These serve as recognition sites, guiding phosphorylation by kinases and subsequent binding of modular domains (e.g. SH2 and BRCT). Characterization of such modification-modulated interactions on a proteome-wide scale requires extensive computational and experimental analysis. Here, we review the latest advances in methods for unravelling phosphorylation-mediated cellular interaction networks. In particular, we will discuss how the combination of new quantitative mass-spectrometric technologies and computational algorithms together are enhancing mapping of these largely uncharted dynamic networks. By combining quantitative...

  8. A quantitative ELISA for dystrophin.

    Morris, G E; Ellis, J M; Nguyen, T M

    1993-05-01

    A novel approach to the quantitation of the muscular dystrophy protein, dystrophin, in muscle extracts is described. The two-site ELISA uses two monoclonal antibodies against dystrophin epitopes which lie close together in the rod domain of the dystrophin molecule in order to minimize the effects of dystrophin degradation. Dystrophin is assayed in its native form by extracting with non-ionic detergents and avoiding the use of SDS. PMID:8486926

  9. Quantitative Methods for Teaching Review

    Irina Milnikova; Tamara Shioshvili

    2011-01-01

    A new method for the quantitative evaluation of teaching processes is elaborated. On the basis of score data, the method permits evaluation of teaching efficiency within one group of students and comparison of teaching efficiency across two or more groups. Heterogeneity, stability and total variability indices, both for a single group and for comparisons between groups, are used as the basic characteristics of teaching efficiency. The method is easy to use and permits ranking of teaching review results, which...

  10. Quantitative aspects of magnetospheric physics

    In this book, certain quantitative aspects of magnetospheric physics are described as used at the earth which illustrate the complex and wondrous ways in which the basic laws of physics enable us to obtain an understanding of our surroundings. The author investigates the charged-particle motion in magnetic and electric fields; the trapping region and currents due to trapped particles; the existence of large-scale electric fields in the magnetosphere; the effect of plasma waves on the distribution of particles. (Auth.)

  11. Essays on Quantitative Risk Management

    Fei, Fei

    2013-01-01

    The costly lessons from the global crisis in the past decade reinforce the importance as well as the challenges of risk management. This thesis explores several core concepts of quantitative risk management and provides further insight. We start with rating migration risk and propose a Mixture of Markov Chains (MMC) model to account for stochastic business cycle effects in credit rating migration risk. The model shows superior in-sample estimation and out-of-sample prediction relative to its rivals. Co...

  12. Quantitative characterisation of sedimentary grains

    Tunwal, Mohit; Mulchrone, Kieran F.; Meere, Patrick A.

    2016-04-01

    Analysis of sedimentary texture helps in determining the formation, transportation and deposition processes of sedimentary rocks. Grain size analysis is traditionally quantitative, whereas grain shape analysis is largely qualitative. A semi-automated approach to quantitatively analyse the shape and size of sand-sized sedimentary grains is presented. Grain boundaries are manually traced from thin section microphotographs in the case of lithified samples and are automatically identified in the case of loose sediments. Shape and size parameters can then be estimated using a software package written on the Mathematica platform. While automated methodology already exists for loose sediment analysis, the available techniques for lithified samples are limited to high-definition thin section microphotographs showing clear contrast between framework grains and matrix. Along with the size of the grain, shape parameters such as roundness, angularity, circularity, irregularity and fractal dimension are measured. A new grain shape parameter based on Fourier descriptors has also been developed. To test this new approach, theoretical examples were analysed and produced high-quality results supporting the accuracy of the algorithm. Furthermore, sandstone samples from known aeolian and fluvial environments in the Dingle Basin, County Kerry, Ireland were collected and analysed. Modern loose sediments from glacial till from County Cork, Ireland and aeolian sediments from Rajasthan, India have also been collected and analysed. A graphical summary of the data is presented and allows for quantitative distinction between samples extracted from different sedimentary environments.
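
    As one concrete example of the shape parameters listed, circularity is often defined as 4*pi*A/P^2 computed from the traced boundary polygon; the record does not give its exact formulas, so this particular definition is an assumption.

        import numpy as np

        def circularity(boundary):
            """4*pi*A / P**2 for a closed polygon: 1.0 for a circle, <1 otherwise."""
            x, y = boundary[:, 0], boundary[:, 1]
            # Shoelace formula for area; perimeter from closed edge lengths.
            area = 0.5 * abs(np.dot(x, np.roll(y, -1)) - np.dot(y, np.roll(x, -1)))
            edges = np.diff(boundary, axis=0, append=boundary[:1])
            perim = np.sum(np.linalg.norm(edges, axis=1))
            return 4 * np.pi * area / perim**2

        theta = np.linspace(0, 2 * np.pi, 200, endpoint=False)
        circle = np.column_stack([np.cos(theta), np.sin(theta)])
        square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
        print(circularity(circle), circularity(square))   # ~1.0 and ~0.785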

  13. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.

  15. Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm

    Huang, Yanhua; Gu, Lizhi

    2015-09-01

    The main existing methods for multi-spiral surface geometry modeling include spatial analytic geometry algorithms, graphical methods, and interpolation and approximation algorithms. However, these methods have shortcomings such as a large amount of calculation, complex processes and visible errors, which have considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with a multi-spiral surface and of a spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm of the coupling point cluster with singular points removed, and the "spatially parallel coupling" principle based on non-uniform B-splines for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and expressed digitally, with coupling coalescence of the surfaces with multi-coupling point clusters under the Pro/E environment. Digitally accurate modeling of a spatially parallel coupling body with a multi-spiral surface is thus realized. Smoothing and fairing are applied to the end section area of a three-blade end-milling cutter by applying the principle of spatially parallel coupling with a multi-spiral surface, and the resulting entity model is machined in a four-axis machining center after the end mill is disposed. The algorithm is verified and then applied effectively to the transition area among the multiple spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in essentially solving the problems of considerable modeling errors in computer graphics and

  16. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: "The KFM, A Homemade Yet Accurate and Dependable Fallout Meter" was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these

  17. The importance of accurate meteorological input fields and accurate planetary boundary layer parameterizations, tested against ETEX-1

    Atmospheric transport of air pollutants is, in principle, a well understood process. If information about the state of the atmosphere is given in all details (infinitely accurate information about wind speed, etc.) and infinitely fast computers are available, then the advection equation could in principle be solved exactly. This is, however, not the case: discretization of the equations and input data introduces some uncertainties and errors in the results. Therefore, many different issues have to be carefully studied in order to diminish these uncertainties and to develop an accurate transport model. Some of these are e.g. the numerical treatment of the transport equation, accuracy of the mean meteorological input fields and parameterizations of sub-grid scale phenomena (as e.g. parameterizations of the 2nd and higher order turbulence terms in order to reach closure in the perturbation equation). A tracer model for studying transport and dispersion of air pollution caused by a single but strong source is under development. The model simulations from the first ETEX release illustrate the differences caused by using various analyzed fields directly in the tracer model or using a meteorological driver. Also different parameterizations of the mixing height and the vertical exchange are compared. (author)

  18. How accurately can suborbital experiments measure the CMB?

    Great efforts are currently being channeled into ground- and balloon-based CMB experiments, mainly to explore polarization and anisotropy on small angular scales. To optimize instrumental design and assess experimental prospects, it is important to understand in detail the atmosphere-related systematic errors that limit the science achievable with new instruments. As a step in this direction, we spatially compare the 648 square degree ground- and balloon-based QMASK map with the atmosphere-free WMAP map, finding beautiful agreement on all angular scales where both are sensitive. Although much work remains on quantifying atmospheric effects on CMB experiments, this is a reassuring quantitative assessment of the power of the state-of-the-art fast-Fourier-transform- and matrix-based mapmaking techniques that have been used for QMASK and virtually all subsequent experiments

  19. A new accurate pill recognition system using imprint information

    Chen, Zhiyuan; Kamata, Sei-ichiro

    2013-12-01

    Great achievements in modern medicine benefit human beings. They have also brought about an explosive growth in the pharmaceuticals currently on the market. In daily life, pharmaceuticals sometimes confuse people when they are found unlabeled. In this paper, we propose an automatic pill recognition technique to solve this problem. It functions mainly based on the imprint feature of the pills, which is extracted by the proposed MSWT (modified stroke width transform) and described by WSC (weighted shape context). Experiments show that our proposed pill recognition method can reach an accuracy rate of up to 92.03% within the top 5 ranks when classifying more than 10 thousand query pill images into around 2000 categories.

  20. Spectropolarimetrically accurate magnetohydrostatic sunspot model for forward modelling in helioseismology

    Przybylski, D; Cally, P S

    2015-01-01

    We present a technique to construct a spectropolarimetrically accurate magneto-hydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion and absorption in the solar interior and photosphere with the sunspot embedded into it. With the $6173\\mathrm{\\AA}$ magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as full Stokes vector for the simulation at various positions at the solar disk, and analyse the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterised. An increase in acoustic power in the simulated observ...

  1. Analytical method to accurately predict LMFBR core flow distribution

    An accurate and detailed representation of the flow distribution in LMFBR cores is very important as the starting point and basis of the thermal and structural core design. Previous experience indicated that the steady state and transient core design is only as good as the core orificing; thus, a new orificing philosophy satisfying a priori all design constraints was developed. However, optimized orificing is a necessary, but not sufficient, condition for achieving the optimum core flow distribution, which is affected by the hydraulic characteristics of the remainder of the primary system. Consequently, an analytical model of the overall primary system was developed, resulting in the CATFISH computer code, which, even though specifically written for LMFBRs, can be used for any reactor employing ducted assemblies.

  2. Accurate performance analysis of opportunistic decode-and-forward relaying

    Tourki, Kamel

    2011-07-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be considered unusable, and the destination may use a selection combining technique. We first derive the exact statistics of each hop in terms of the probability density function (PDF). Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end outage probability for a transmission rate R. Furthermore, we carry out an asymptotic performance analysis and deduce the diversity order. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.
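
    A Monte Carlo sketch of this kind of end-to-end outage estimate, under an assumed simplified link model (Rayleigh fading, equal mean SNRs, decode-and-forward limited by the weaker hop, selection combining at the destination) rather than the paper's exact closed-form statistics:

        import numpy as np

        rng = np.random.default_rng(4)
        R, snr_db, n_relays, trials = 1.0, 10.0, 3, 200_000
        g = 10 ** (snr_db / 10)                        # mean SNR per link (assumed equal)

        # Rayleigh fading => exponentially distributed instantaneous SNRs.
        sd = rng.exponential(g, trials)                # source -> destination
        sr = rng.exponential(g, (trials, n_relays))    # source -> relay_k
        rd = rng.exponential(g, (trials, n_relays))    # relay_k -> destination

        # Opportunistic DF: pick the relay with the best min(hop) SNR;
        # the destination then selection-combines with the direct path.
        best = np.minimum(sr, rd).max(axis=1)
        end_to_end = np.maximum(sd, best)
        print("outage ~", np.mean(np.log2(1 + end_to_end) < R))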

  3. Accurate numerical solution of compressible, linear stability equations

    Malik, M. R.; Chuang, S.; Hussaini, M. Y.

    1982-01-01

    The present investigation is concerned with a fourth order accurate finite difference method and its application to the study of the temporal and spatial stability of the three-dimensional compressible boundary layer flow on a swept wing. This method belongs to the class of compact two-point difference schemes discussed by White (1974) and Keller (1974). The method was apparently first used for solving the two-dimensional boundary layer equations. Attention is given to the governing equations, the solution technique, and the search for eigenvalues. A general purpose subroutine is employed for solving a block tridiagonal system of equations. The computer time can be reduced significantly by exploiting the special structure of two matrices.

  4. Accurate volume measurement system for plutonium nitrate solution

    An accurate volume measurement system for a large amount of plutonium nitrate solution stored in a reprocessing or a conversion plant has been developed at the Plutonium Conversion Development Facility (PCDF) in the Power Reactor and Nuclear Fuel Development Corp. (PNC) Tokai Works. A pair of differential digital quartz pressure transducers is utilized in the volume measurement system. To obtain high accuracy, it is important that the non-linearity of the transducer is minimized within the measurement range, the zero point is stabilized, and the damping property of the pneumatic line is designed to minimize pressure oscillation. The accuracy of the pressure measurement can always be within 2Pa with re-calibration once a year. In the PCDF, the overall uncertainty of the volume measurement has been evaluated to be within 0.2 %. This system has been successfully applied to the Japanese government's and IAEA's routine inspection since 1984. (author)

  5. Accurate bond dissociation energies (D0) for FHF- isotopologues

    Stein, Christopher; Oswald, Rainer; Sebald, Peter; Botschwina, Peter; Stoll, Hermann; Peterson, Kirk A.

    2013-09-01

    Accurate bond dissociation energies (D0) are determined for three isotopologues of the bifluoride ion (FHF-). While the zero-point vibrational contributions are taken from our previous work (P. Sebald, A. Bargholz, R. Oswald, C. Stein, P. Botschwina, J. Phys. Chem. A, DOI: 10.1021/jp3123677), the equilibrium dissociation energy (De) of the reaction FHF- → HF + F- was obtained by a composite method including frozen-core (fc) CCSD(T) calculations with basis sets up to cardinal number n = 7, followed by extrapolation to the complete basis set limit. Smaller terms beyond fc-CCSD(T) cancel each other almost completely. The D0 values of FHF-, FDF-, and FTF- are predicted to be 15,176, 15,191, and 15,198 cm-1, respectively, with an uncertainty of ca. 15 cm-1.

  6. Efficient and Accurate Indoor Localization Using Landmark Graphs

    Gu, F.; Kealy, A.; Khoshelham, K.; Shang, J.

    2016-06-01

    Indoor localization is important for a variety of applications such as location-based services, mobile social networks, and emergency response. Fusing spatial information is an effective way to achieve accurate indoor localization with little or no need for extra hardware. However, existing indoor localization methods that make use of spatial information are either too computationally expensive or too sensitive to the completeness of landmark detection. In this paper, we solve this problem by using the proposed landmark graph. The landmark graph is a directed graph where nodes are landmarks (e.g., doors, staircases, and turns) and edges are accessible paths with heading information. We compared the proposed method with two common Dead Reckoning (DR)-based methods (namely, Compass + Accelerometer + Landmarks and Gyroscope + Accelerometer + Landmarks) in a series of experiments. Experimental results show that the proposed method can achieve 73% accuracy with a positioning error less than 2.5 meters, which outperforms the other two DR-based methods.
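
    A minimal sketch of the landmark graph as a data structure, and of how detecting a landmark can snap the dead-reckoned state onto a graph node to cancel accumulated drift; the graph, names and tolerances below are hypothetical.

        # Directed landmark graph: node -> list of (next_node, length_m, heading_deg).
        graph = {
            "door_A": [("turn_1", 12.0, 90.0)],
            "turn_1": [("stairs", 8.0, 0.0), ("door_B", 15.0, 180.0)],
            "stairs": [], "door_B": [],
        }

        def snap_to_edge(node, walked_m, heading_deg, tol_m=3.0, tol_deg=25.0):
            """On landmark detection, pick the outgoing edge consistent with the
            dead-reckoned distance/heading and reset position to its endpoint."""
            for nxt, length, heading in graph[node]:
                d_heading = abs((heading_deg - heading + 180) % 360 - 180)
                if abs(walked_m - length) <= tol_m and d_heading <= tol_deg:
                    return nxt
            return None  # no consistent edge: keep dead reckoning, flag uncertainty

        print(snap_to_edge("turn_1", walked_m=7.2, heading_deg=8.0))  # -> "stairs"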

  7. An Integrative Approach to Accurate Vehicle Logo Detection

    Hao Pan

    2013-01-01

    Vehicle logo detection is required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small target of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed following the visual attention mechanism of human vision. Two pre-logo detection steps, that is, vehicle region detection and small RoI segmentation, rapidly focalize a small logo target. An enhanced Adaboost algorithm, together with two types of features, Haar and HOG, is proposed to detect vehicles. An RoI that covers logos is segmented based on our prior knowledge about the logos’ position relative to license plates, which can be accurately localized from frontal vehicle images. A two-stage cascade classifier proceeds with the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.

  8. Accurate characterisation of post moulding shrinkage of polymer parts

    Neves, L. C.; De Chiffre, L.; González-Madruga, D.;

    2015-01-01

    The work deals with experimental determination of the shrinkage of polymer parts after injection moulding. A fixture for length measurements on 8 parts at the same time was designed and manufactured in Invar, mounted with 8 electronic gauges, and provided with 3 temperature sensors. The fixture was used to record the length at a well-defined position on each part continuously, starting from approximately 10 minutes after moulding and covering a time period of 7 days. Two series of shrinkage curves were analysed and length values after stabilisation extracted and compared for all 16 parts. Values were compensated with respect to the effect from temperature variations during the measurements. Prediction of the length after stabilisation was carried out by fitting data at different stages of shrinkage. Uncertainty estimations were carried out and a procedure for the accurate characterisation of...

  9. Accurate finite difference methods for time-harmonic wave propagation

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Padé approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.

  10. Accurate Modeling of Buck Converters with Magnetic-Core Inductors

    Astorino, Antonio; Antonini, Giulio; Swaminathan, Madhavan

    2015-01-01

    In this paper, a modeling approach for buck converters with magnetic-core inductors is presented. Due to the high nonlinearity of magnetic materials, frequency-domain analysis of such circuits is not suitable for an accurate description of their behaviour. Hence, in this work, a time-domain model of buck converters with magnetic-core inductors in a Simulink environment is proposed. As an example, the presented approach is used to simulate an eight-phase buck converter. The simulation results show that an unexpected system behaviour in terms of current ripple amplitude needs the inductor core...

  11. Accurate Parallel Algorithm for Adini Nonconforming Finite Element

    罗平; 周爱辉

    2003-01-01

    Multi-parameter asymptotic expansions are interesting since they justify the use of multi-parameter extrapolation, which can be implemented in parallel; they are well studied in many papers for conforming finite element methods. For nonconforming finite element methods, however, work on multi-parameter asymptotic expansions and extrapolation has seldom appeared in the literature. This paper considers the solution of the biharmonic equation using Adini nonconforming finite elements and reports new results for multi-parameter asymptotic expansions and extrapolation. The Adini nonconforming finite element solution of the biharmonic equation is shown to have a multi-parameter asymptotic error expansion and extrapolation. This expansion and a multi-parameter extrapolation technique were used to develop an accurate approximation parallel algorithm for the biharmonic equation. Finally, numerical results have verified the extrapolation theory.

  12. Accurate Derivative Evaluation for any Grad-Shafranov Solver

    Ricketson, L F; Rachh, M; Freidberg, J P

    2015-01-01

    We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad-Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.

  13. Accurate derivative evaluation for any Grad-Shafranov solver

    Ricketson, L. F.; Cerfon, A. J.; Rachh, M.; Freidberg, J. P.

    2016-01-01

    We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad-Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.

  14. Accurate monitoring of large aligned objects with videometric techniques

    Klumb, F; Grussenmeyer, P

    1999-01-01

    This paper describes a new videometric technique designed to monitor the deformations and misalignments of large vital components in the centre of a future particle detector. It relies on a geometrical principle called "reciprocal collimation" of two CCD cameras: the combination of the video devices in pair gives rise to a network of well located reference lines that surround the object to be surveyed. Each observed point, which in practice is a bright point-like light- source, is accurately located with respect to this network of neighbouring axes. Adjustment calculations provide the three- dimensional position of the object fitted with various light-sources. An experimental test-bench, equipped with four cameras, has corroborated the precision predicted by previous simulations of the system. (11 refs).

  15. Methods for Accurate Free Flight Measurement of Drag Coefficients

    Courtney, Elya; Courtney, Michael

    2015-01-01

    This paper describes experimental methods for free flight measurement of drag coefficients to an accuracy of approximately 1%. There are two main methods of determining free flight drag coefficients, or equivalent ballistic coefficients: 1) measuring near and far velocities over a known distance and 2) measuring a near velocity and time of flight over a known distance. Atmospheric conditions must also be known and nearly constant over the flight path. A number of tradeoffs are important when designing experiments to accurately determine drag coefficients. The flight distance must be large enough so that the projectile's loss of velocity is significant compared with its initial velocity and much larger than the uncertainty in the near and/or far velocity measurements. On the other hand, since drag coefficients and ballistic coefficients both depend on velocity, the change in velocity over the flight path should be small enough that the average drag coefficient over the path (which is what is really determined)...
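
    For method 1, assuming a drag coefficient and air density that are effectively constant over the flight path, the near and far velocities determine Cd in closed form:

        m\,v\,\frac{dv}{dx} = -\tfrac{1}{2}\,\rho\,C_d\,A\,v^{2}
        \;\Longrightarrow\;
        \frac{dv}{v} = -\frac{\rho\,C_d\,A}{2m}\,dx
        \;\Longrightarrow\;
        C_d = \frac{2m}{\rho\,A\,d}\,\ln\frac{v_{\mathrm{near}}}{v_{\mathrm{far}}}

    This makes the stated tradeoff explicit: the logarithm is only well determined when the velocity loss over the distance d is large compared with the uncertainty of the velocity measurements.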

  16. Natural orbital expansions of highly accurate three-body wavefunctions

    Natural orbital expansions are considered for highly accurate three-body wavefunctions written in the relative coordinates r32, r31 and r21. Our present method is applied to the ground S(L = 0) state wavefunctions of the Ps- and ∞H- ions. Our best variational energies computed herein for these systems are E(Ps-) = -0.262 005 070 232 980 107 7666 au and E(∞H-) = -0.527 751 016 544 377 196 5865 au, respectively. The variational wavefunctions determined for these systems contain between 2000 and 4200 exponential basis functions. In general, the natural orbital expansions of these functions are compact and rapidly convergent functions, which are represented as linear combinations of some relatively simple functions. The natural orbitals can be very useful in various applications, including photodetachment and scattering problems

  17. Fast and accurate automated cell boundary determination for fluorescence microscopy

    Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider

    2013-07-01

    Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.

  18. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results. PMID:26731454

  19. Accurate macroscale modelling of spatial dynamics in multiple dimensions

    Roberts, A ~J; Bunder, J ~E

    2011-01-01

    Developments in dynamical systems theory provide new support for the macroscale modelling of PDEs and other microscale systems such as Lattice Boltzmann, Monte Carlo or Molecular Dynamics simulators. By systematically resolving subgrid microscale dynamics, the dynamical systems approach constructs accurate closures of macroscale discretisations of the microscale system. Here we specifically explore reaction-diffusion problems in two spatial dimensions as a prototype of generic systems in multiple dimensions. Our approach unifies into one the modelling of systems by a type of finite elements, and the 'equation free' macroscale modelling of microscale simulators efficiently executing only on small patches of the spatial domain. Centre manifold theory ensures that a closed model exists on the macroscale grid, is emergent, and is systematically approximated. Dividing space either into overlapping finite elements or into spatially separated small patches, the specially crafted inter-element/patch coupling als...

  20. How accurately can digital images depict conventional radiographs

    The purpose of this paper is to investigate how accurately the video image of a digitized chest radiograph can depict the normal anatomic configurations of thoracic organs seen on a conventional radiograph. These configurations are important to the diagnosis of diseases of the chest. Chest radiographs of 50 individuals diagnosed as normal were analyzed. Three chest physicians and one radiologist reviewed 50 pairs of digitized images (digitized at 0.125-mm pixel size, 10-bit gray scale; displayed at 1,024 x 1,536, 8-bit gray scale) and the corresponding conventional films. The visibility of eight structures (spinous process, trachea, right and left main bronchi, anterior tip of right fourth rib, vessels behind diaphragm and cardiac shadow, and descending aorta behind heart) was graded into five levels of confidence

  1. Can a surgeon drill accurately at a specified angle?

    Brioschi, Valentina; Cook, Jodie; Arthurs, Gareth I

    2016-01-01

    Objectives To investigate whether a surgeon can accurately drill at a specified angle and whether surgeon experience, task repetition, drill bit size and perceived difficulty influence drilling angle accuracy. Methods The sample population consisted of final-year students (n=25), non-specialist veterinarians (n=22) and board-certified orthopaedic surgeons (n=8). Each participant drilled a hole twice in a horizontal oak plank at 30°, 45°, 60°, 80°, 85° and 90° angles with either a 2.5 or a 3.5 mm drill bit. Participants then rated the perceived difficulty of drilling each angle. The true angle of each hole was measured using a digital goniometer. Results Greater drilling accuracy was achieved at angles closer to 90°. An error of ≤±4° was achieved by 84.5 per cent of participants drilling a 90° angle compared with approximately 20 per cent of participants drilling a 30–45° angle. There was no effect of surgeon experience, task repetition or drill bit size on the mean error for intended versus achieved angle. Increased perception of difficulty was associated with the more acute angles and decreased accuracy, but not with experience level. Clinical significance This study shows that a surgeon's ability to drill accurately (within ±4° error) is limited, particularly at angles ≤60°. In situations where drill angle is critical, use of computer-assisted navigation or custom-made drill guides may be preferable. PMID:27547423

  2. Bayesian calibration of power plant models for accurate performance prediction

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard of uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions
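
    A deliberately stripped-down sketch of the calibration idea: one model parameter, Gaussian measurement noise, a flat prior and a grid posterior. The full Kennedy and O'Hagan framework additionally models systematic model discrepancy with a Gaussian process; all numbers and the efficiency model below are synthetic.

        import numpy as np

        rng = np.random.default_rng(5)
        load = np.linspace(0.4, 1.0, 25)         # part-load fraction (synthetic grid)
        obs = 0.58 * load**0.15 + rng.normal(scale=0.004, size=load.size)

        def model(theta, load):
            # Hypothetical one-parameter efficiency curve standing in for the
            # thermodynamic plant model being calibrated.
            return theta * load**0.15

        thetas = np.linspace(0.5, 0.7, 401)
        sigma = 0.004                            # assumed known noise level
        loglik = np.array([-0.5 * np.sum((obs - model(t, load))**2) / sigma**2
                           for t in thetas])
        post = np.exp(loglik - loglik.max())
        dt = thetas[1] - thetas[0]
        post /= post.sum() * dt                  # normalize the posterior density

        mean = (thetas * post).sum() * dt
        sd = np.sqrt(((thetas - mean)**2 * post).sum() * dt)
        print(f"calibrated theta = {mean:.4f} +/- {sd:.4f}")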

  3. Population variability complicates the accurate detection of climate change responses.

    McCain, Christy; Szewczyk, Tim; Bracy Knight, Kevin

    2016-06-01

    The rush to assess species' responses to anthropogenic climate change (CC) has underestimated the importance of interannual population variability (PV). Researchers assume sampling rigor alone will lead to an accurate detection of response regardless of the underlying population fluctuations of the species under consideration. Using population simulations across a realistic, empirically based gradient in PV, we show that moderate to high PV can lead to opposite and biased conclusions about CC responses. Between pre- and post-CC sampling bouts of modeled populations as in resurvey studies, there is: (i) A 50% probability of erroneously detecting the opposite trend in population abundance change and nearly zero probability of detecting no change. (ii) Across multiple years of sampling, it is nearly impossible to accurately detect any directional shift in population sizes with even moderate PV. (iii) There is up to 50% probability of detecting a population extirpation when the species is present, but in very low natural abundances. (iv) Under scenarios of moderate to high PV across a species' range or at the range edges, there is a bias toward erroneous detection of range shifts or contractions. Essentially, the frequency and magnitude of population peaks and troughs greatly impact the accuracy of our CC response measurements. Species with moderate to high PV (many small vertebrates, invertebrates, and annual plants) may be inaccurate 'canaries in the coal mine' for CC without pertinent demographic analyses and additional repeat sampling. Variation in PV may explain some idiosyncrasies in CC responses detected so far and urgently needs more careful consideration in design and analysis of CC responses. PMID:26725404
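
    A small simulation in the spirit of this argument; the lognormal form of the interannual variability and every parameter value below are assumptions, not the authors' settings.

        import numpy as np

        rng = np.random.default_rng(6)
        years, trials = 20, 100_000
        decline = 0.98                   # true 2% decline per year in mean abundance
        cv = 0.6                         # moderate-to-high interannual variability

        mean = 100 * decline ** np.arange(years)
        sigma = np.sqrt(np.log(1 + cv**2))       # lognormal sigma for the given CV
        counts = rng.lognormal(np.log(mean) - sigma**2 / 2, sigma,
                               size=(trials, years))

        # Two-bout resurvey: one pre-change year vs. one post-change year.
        wrong_sign = np.mean(counts[:, -1] > counts[:, 0])  # apparent increase
        print(f"P(detecting the opposite trend) ~ {wrong_sign:.2f}")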

  4. An accurate and portable solid state neutron rem meter

    Accurately resolving the ambient neutron dose equivalent spanning the thermal to 15 MeV energy range with a single configuration and lightweight instrument is desirable. This paper presents the design of a portable, high intrinsic efficiency, and accurate neutron rem meter whose energy-dependent response is electronically adjusted to a chosen neutron dose equivalent standard. The instrument may be classified as a moderating type neutron spectrometer, based on an adaptation to the classical Bonner sphere and position sensitive long counter, which, simultaneously counts thermalized neutrons by high thermal efficiency solid state neutron detectors. The use of multiple detectors and moderator arranged along an axis of symmetry (e.g., long axis of a cylinder) with known neutron-slowing properties allows for the construction of a linear combination of responses that approximate the ambient neutron dose equivalent. Variations on the detector configuration are investigated via Monte Carlo N-Particle simulations to minimize the total instrument mass while maintaining acceptable response accuracy—a dose error less than 15% for bare 252Cf, bare AmBe, an epi-thermal and mixed monoenergetic sources is found at less than 4.5 kg moderator mass in all studied cases. A comparison of the energy dependent dose equivalent response and resultant energy dependent dose equivalent error of the present dosimeter to commercially-available portable rem meters and the prior art are presented. Finally, the present design is assessed by comparison of the simulated output resulting from applications of several known neutron sources and dose rates
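
    A sketch of the linear-combination step: choose per-detector weights so that the weighted sum of energy-dependent responses tracks a chosen dose-equivalent curve, here by ordinary least squares on a synthetic response matrix (the record instead derives its responses from Monte Carlo N-Particle simulations).

        import numpy as np

        n_det, n_E = 6, 40                       # detectors along the axis, energy grid
        E = np.logspace(-8, 1.2, n_E)            # MeV, thermal to ~15 MeV

        # Synthetic responses R[k, E]: detectors deeper in the moderator
        # peak at higher energies, mimicking increasing moderation depth.
        peaks = np.logspace(-7, 1, n_det)
        R = np.exp(-0.5 * (np.log(E) - np.log(peaks[:, None]))**2 / 1.5**2)

        h = 0.1 + E**0.8                         # stand-in dose-equivalent curve h(E)

        # Weights w minimizing || R^T w - h ||_2 over the energy grid.
        w, *_ = np.linalg.lstsq(R.T, h, rcond=None)
        rel_err = np.abs(R.T @ w - h) / h
        print(f"max relative dose error on the grid: {rel_err.max():.2%}")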

  5. Accurate thermodynamic characterization of a synthetic coal mine methane mixture

    Highlights:
    • Accurate density data for a 10-component synthetic coal mine methane mixture are presented.
    • Experimental data are compared with the densities calculated from the GERG-2008 equation of state.
    • Relative deviations in density were within a 0.2% band at temperatures above 275 K.
    • Densities at 250 K, as well as at 275 K and pressures above 10 MPa, showed higher deviations.

    Abstract: In the last few years, coal mine methane (CMM) has gained significance as a potential non-conventional gas fuel. The progressive depletion of common fossil fuel reserves and, on the other hand, the positive estimates of CMM resources as a by-product of mining promote this fuel gas as a promising alternative fuel. The increasing importance of its exploitation makes it necessary to check the capability of present-day models and equations of state for natural gas to predict the thermophysical properties of gases with a considerably different composition, like CMM. In this work, accurate density measurements of a synthetic CMM mixture are reported in the temperature range from (250 to 400) K and pressures up to 15 MPa, as part of the research project EMRP ENG01 of the European Metrology Research Program for the characterization of non-conventional energy gases. Experimental data were compared with the densities calculated with the GERG-2008 equation of state. Relative deviations between experimental and estimated densities were within a 0.2% band at temperatures above 275 K, while data at 250 K, as well as at 275 K and pressures above 10 MPa, showed higher deviations.
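
    The deviation check described above is simple arithmetic: the relative deviation of each measured density from its GERG-2008 prediction, compared against the 0.2% band. A minimal sketch with invented placeholder densities, not the paper's data:

```python
# Relative deviation (in %) between measured densities and equation-of-state
# predictions. All values below are placeholders for illustration.
rho_exp  = [182.3, 121.9, 61.2]    # measured densities / kg m^-3
rho_calc = [182.0, 121.8, 61.3]    # GERG-2008 predictions / kg m^-3

for re_, rc in zip(rho_exp, rho_calc):
    dev = 100.0 * (re_ - rc) / rc
    print(f"deviation: {dev:+.3f}%  (within 0.2% band: {abs(dev) <= 0.2})")
```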

  6. Accurate molecular classification of cancer using simple rules

    Gotoh Osamu

    2009-10-01

    Background: One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensional gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible.
    Methods: We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets.
    Results: We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of the relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods.
    Conclusion: In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.
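
    As a rough illustration of the single-gene-rule idea, the sketch below screens for one informative gene and classifies with a single threshold rule under LOOCV. It uses a univariate F-test and a one-level decision stump as stand-ins for the paper's rough-set "depended degree" criterion, and synthetic data in place of real expression profiles.

```python
# Minimal stand-in for a one-gene decision rule evaluated by LOOCV.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=60, n_features=2000, n_informative=5,
                           random_state=0)         # synthetic "expression" data

clf = make_pipeline(SelectKBest(f_classif, k=1),         # pick one marker gene
                    DecisionTreeClassifier(max_depth=1)) # one threshold rule
acc = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"LOOCV accuracy with a single-gene rule: {acc:.2f}")
```

    Note that the gene selection happens inside the pipeline, so it is re-done within each LOOCV fold rather than leaking information from the held-out sample.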

  7. Development of a Novel Reference Plasmid for Accurate Quantification of Genetically Modified Kefeng6 Rice DNA in Food and Feed Samples

    Liang Li; Xiujie Zhang; Yusong Wan; Wujun Jin

    2013-01-01

    Reference plasmids are an essential tool for the quantification of genetically modified (GM) events. Quantitative real-time PCR (qPCR) is the most commonly used method to characterize and quantify reference plasmids. However, the precision of this method is often limited by calibration curves, and qPCR data can be affected by matrix differences between the standards and samples. Here, we describe a digital PCR (dPCR) approach that can be used to accurately measure the novel reference plasmid ...
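
    Although the abstract is truncated, the arithmetic behind absolute quantification by dPCR is standard Poisson statistics: with k positive partitions out of n, the mean copy number per partition is -ln(1 - k/n), with no calibration curve required. A minimal sketch with hypothetical partition counts and partition volume (the paper's actual values are not given here):

```python
import math

# Standard digital PCR Poisson estimate (general dPCR math, not taken from
# this specific paper).
k, n = 14_500, 20_000          # hypothetical positive / total partitions
v_partition_uL = 0.85e-3       # hypothetical partition volume in microliters

lam = -math.log(1.0 - k / n)   # mean copies per partition
copies_per_uL = lam / v_partition_uL
print(f"{lam:.3f} copies/partition -> {copies_per_uL:,.0f} copies/uL")
```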

  8. Analysis of the neurotoxic plasticizer n-butylbenzenesulfonamide by gas chromatography combined with accurate mass selected ion monitoring.

    Duffield, P; Bourne, D; Tan, K; Garruto, R M; Duncan, M W

    1994-01-01

    The plasticizer, n-butylbenzenesulfonamide (NBBS), is reported to be neurotoxic when inoculated intracisternally or intraperitoneally into rabbits. Because NBBS is commonly used in the production of polyamide (nylon) plastics and is soluble in water, the disposal of NBBS-containing plastics in landfill sites could result in NBBS appearing in the leachate. Further, NBBS could also be leached from packaging into its contents. To allow us to examine the risks posed by NBBS in the environment, we have developed a quantitative assay for this compound. The assay employs a one-step extraction into dichloromethane followed by gas chromatography with accurate mass selected ion recording. The assay incorporates [13C6]NBBS as an internal standard to allow precise quantitation, and four separate ion chromatograms are recorded. NBBS was found in some Australian domestic solid-waste landfill leachate (from less than 0.3 to 94.6 ng/mL), but ground water in the vicinity of a landfill had only trace quantities of NBBS. NBBS was also quantitated in some bottled and cask wines, and levels varied from not detected to 2.17 ng/mL (n = 14). Additional studies are required to assess the public health risks associated with the use of NBBS as a plasticizer. PMID:7861748
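
    The quantitation step with a [13C6]NBBS internal standard follows the usual isotope-dilution arithmetic. The sketch below uses invented peak areas, spike amount, and response factor purely for illustration; none of these numbers come from the paper.

```python
# Generic stable-isotope-dilution quantitation: the analyte amount follows
# from its area ratio to the co-extracted internal standard (IS).
area_nbbs = 8.4e5        # integrated ion chromatogram area, NBBS (placeholder)
area_is   = 6.1e5        # integrated area, [13C6]NBBS IS (placeholder)
amount_is_ng = 50.0      # ng of IS spiked into the sample (placeholder)
rrf = 1.02               # relative response factor from calibration (placeholder)

amount_nbbs_ng = (area_nbbs / area_is) * amount_is_ng / rrf
volume_mL = 25.0         # extracted sample volume (placeholder)
print(f"NBBS: {amount_nbbs_ng / volume_mL:.2f} ng/mL")
```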

  9. Very accurate (definitive) methods by radiochemical NAA and their significance for quality assurance in trace analysis

    The idea of very accurate (definitive) methods by RNAA for the determination of individual trace elements in selected matrices is presented. The approach is based on a combination of neutron activation with selective and truly quantitative post-irradiation isolation of an indicator radionuclide by column chromatography, followed by high-resolution γ-ray spectrometric measurement. The method should, in principle, be a single-element method, so that all conditions can be optimized with respect to the determination of that particular element. The radiochemical separation scheme should assure separation of the analyte from practically all accompanying radionuclides, providing interference-free γ-ray spectrometric measurement and achieving the best detection limits. The method should incorporate intrinsic mechanisms that prevent any possibility of gross errors. Several criteria were formulated which must be simultaneously fulfilled in order to acknowledge an analytical result as obtained by a definitive method. Such methods are not intended for routine measurements but rather for verifying the accuracy of other methods of analysis and for certification of candidate reference materials. The usefulness of such methods is illustrated with the example of Cd, and references are given to similar methods elaborated for the determination of several other elements (Co, Cu, Mo, Ni and U) in biological materials. (author)

  10. Hydration free energies of cyanide and hydroxide ions from molecular dynamics simulations with accurate force fields

    Lee, M.W.; Meuwly, M.

    2013-01-01

    The evaluation of hydration free energies is a sensitive test to assess force fields used in atomistic simulations. We showed recently that the vibrational relaxation times and the 1D- and 2D-infrared spectroscopies of CN(-) in water can be quantitatively described by molecular dynamics (MD) simulations with multipolar force fields and slightly enlarged van der Waals radii for the C- and N-atoms. To validate such an approach, the present work investigates the solvation free energy of cyanide in water using MD simulations with accurate multipolar electrostatics. It is found that larger van der Waals radii are indeed necessary to obtain results close to the experimental values when a multipolar force field is used. For CN(-), the van der Waals ranges refined in our previous work yield a hydration free energy between -72.0 and -77.2 kcal mol(-1), in excellent agreement with the experimental data. In addition to the cyanide ion, we also study the hydroxide ion to show that the method used here is readily applicable to similar systems. Hydration free energies are found to depend sensitively on the intermolecular interactions, while bonded interactions are less important, as expected. We also investigate the possibility of applying the multipolar force field to scoring trajectories generated using computationally inexpensive methods, which should be useful in broader parametrization studies with reduced computational resources, as scoring is much faster than generating the trajectories.
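
    The abstract does not spell out the free-energy protocol used; as a generic illustration of one common route, the sketch below integrates placeholder per-window averages of dH/dλ by the trapezoidal rule (thermodynamic integration). Both the window spacing and the values are invented.

```python
import numpy as np

# Generic thermodynamic-integration post-processing for a solvation free
# energy. lambdas are the coupling-parameter windows; dhdl holds the
# per-window ensemble averages <dH/dlambda> in kcal/mol (placeholders).
lambdas = np.linspace(0.0, 1.0, 11)
dhdl = np.array([-5.0, -12.0, -25.0, -42.0, -60.0, -78.0,
                 -92.0, -100.0, -104.0, -105.0, -105.5])

# Trapezoidal quadrature of <dH/dlambda> over lambda.
delta_G = float(np.sum(0.5 * (dhdl[1:] + dhdl[:-1]) * np.diff(lambdas)))
print(f"hydration free energy estimate: {delta_G:.1f} kcal/mol")
```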

  11. Optimal target VOI size for accurate 4D coregistration of DCE-MRI

    Park, Brian; Mikheev, Artem; Zaim Wadghiri, Youssef; Bertrand, Anne; Novikov, Dmitry; Chandarana, Hersh; Rusinek, Henry

    2016-03-01

    Dynamic contrast enhanced (DCE) MRI has emerged as a reliable and diagnostically useful functional imaging technique. A DCE protocol typically lasts 3-15 minutes and results in a time series of N volumes. For automated analysis, it is important that volumes acquired at different times be spatially coregistered. We have recently introduced a novel 4D, or volume time series, coregistration tool based on a user-specified target volume of interest (VOI). However, the relationship between coregistration accuracy and target VOI size has not been investigated. In this study, coregistration accuracy was quantitatively measured using various sized target VOIs. Coregistration of 10 DCE-MRI mouse head image sets was performed with various sized VOIs targeting the mouse brain. Accuracy was quantified by measures based on the union and standard deviation of the coregistered volume time series. Coregistration accuracy was found to improve rapidly as the size of the VOI increased and approached the approximate volume of the target (mouse brain). Further inflation of the VOI beyond the volume of the target only marginally improved coregistration accuracy. The CPU time needed to accomplish coregistration is a linear function of N and varies gradually with VOI size. From the results of this study, we recommend that the VOI be slightly overinclusive of the target, by approximately 5 voxels, for computationally efficient and accurate coregistration.

  12. Qualitative and Quantitative Sentiment Proxies

    Zhao, Zeyan; Ahmad, Khurshid

    2015-01-01

    Sentiment analysis is a content-analytic investigative framework for researchers, traders and the general public involved in financial markets. This analysis is based on carefully sourced and elaborately constructed proxies for market sentiment and has emerged as a basis for analysing movements in...... trading volumes. The case study we use is a small market index (the Danish Stock Exchange Index, OMXC 20), together with prevailing sentiment in Denmark, to evaluate the impact of sentiment on OMXC 20. Furthermore, we introduce a rather novel and quantitative sentiment proxy, that is the use of the index of a...

  13. Quantitative relationships in delphinid neocortex

    Mortensen, Heidi S.; Pakkenberg, Bente; Dam, Maria;

    2014-01-01

    Possessing large brains and complex behavioral patterns, cetaceans are believed to be highly intelligent. Their brains, which are the largest in the Animal Kingdom and have enormous gyrification compared with terrestrial mammals, have long been of scientific interest. Few studies, however, report...... density in long-finned pilot whales is lower than that in humans, their higher cell number appears to be due to their larger brain. Accordingly, our findings make an important contribution to the ongoing debate over quantitative relationships in the mammalian brain....

  14. Quantitative scattering of melanin solutions

    Riesz, J; Meredith, P; Gilmore, Joel; Meredith, Paul; Riesz, Jennifer

    2005-01-01

    The optical scattering coefficient of a dilute, well solubilised eumelanin solution has been accurately measured as a function of incident wavelength, and found to contribute less than 6% of the total optical attenuation between 210 and 325 nm. At longer wavelengths (325 nm to 800 nm) the scattering was less than the minimum sensitivity of our instrument. This indicates that UV and visible optical density spectra can be interpreted as true absorption with a high degree of confidence. The scattering coefficient vs. wavelength was found to be consistent with Rayleigh theory for a particle radius of 38 ± 1 nm.
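
    The Rayleigh consistency check lends itself to a small worked example: for particles much smaller than the wavelength, the scattering coefficient scales as λ^-4, so a log-log fit should recover an exponent near -4. The data below are synthetic, not the paper's measurements.

```python
import numpy as np

# Fit a power law b(lam) = A / lam**4 to noisy synthetic scattering data.
lam = np.array([210.0, 230, 250, 270, 290, 310, 325])   # wavelength / nm
noise = 1 + 0.02 * np.random.default_rng(1).standard_normal(lam.size)
b = 2.0e9 / lam**4 * noise                              # synthetic coefficients

# Slope of log b vs log lam estimates the scattering exponent.
slope, intercept = np.polyfit(np.log(lam), np.log(b), 1)
print(f"fitted exponent: {slope:.2f} (Rayleigh theory predicts -4)")
```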

  15. Recent Developments in Quantitative Finance: An Overview

    Chang, Chia-Lin; Hu, Shing-Yang; Yu, Shih-Ti

    2014-01-01

    Quantitative finance combines mathematical finance, financial statistics, financial econometrics and empirical finance to provide a solid quantitative foundation for the analysis of financial issues. The purpose of this special issue on “Recent developments in quantitative finance” is to highlight some areas of research in which novel methods in quantitative finance have contributed significantly to the analysis of financial issues, specifically fast methods for large-scale non-elliptical por...

  16. Quantitative graph theory mathematical foundations and applications

    Dehmer, Matthias

    2014-01-01

    The first book devoted exclusively to quantitative graph theory, Quantitative Graph Theory: Mathematical Foundations and Applications presents and demonstrates existing and novel methods for analyzing graphs quantitatively. Incorporating interdisciplinary knowledge from graph theory, information theory, measurement theory, and statistical techniques, this book covers a wide range of quantitative-graph theoretical concepts and methods, including those pertaining to real and random graphs such as:Comparative approaches (graph similarity or distance)Graph measures to characterize graphs quantitat

  17. Spectroscopically Accurate Line Lists for Application in Sulphur Chemistry

    Underwood, D. S.; Azzam, A. A. A.; Yurchenko, S. N.; Tennyson, J.

    2013-09-01

    Monitoring sulphur chemistry is thought to be of great importance for exoplanets. Doing this requires detailed knowledge of the spectroscopic properties of sulphur-containing molecules such as hydrogen sulphide (H2S) [1], sulphur dioxide (SO2), and sulphur trioxide (SO3). Each of these molecules can be found in terrestrial environments, produced in volcano emissions on Earth, and analysis of their spectroscopic data can prove useful to the characterisation of exoplanets, as well as the study of planets in our own solar system, with both having a possible presence on Venus. A complete, high temperature list of line positions and intensities for H2^32S is presented. The DVR3D program suite is used to calculate the bound ro-vibration energy levels, wavefunctions, and dipole transition intensities using Radau coordinates. The calculations are based on a newly determined, spectroscopically refined potential energy surface (PES) and a new, high accuracy, ab initio dipole moment surface (DMS). Tests show that the PES enables us to calculate the line positions accurately and the DMS gives satisfactory results for line intensities. Comparisons with experiment as well as with previous theoretical spectra will be presented. The results of this study will form an important addition to the databases which are considered as sources of information for space applications; especially, in analysing the spectra of extrasolar planets, and remote sensing studies for Venus and Earth, as well as laboratory investigations and pollution studies. An ab initio line list for SO3 was previously computed using the variational nuclear motion program TROVE [2], and was suitable for modelling room temperature SO3 spectra. The calculations considered transitions in the region of 0-4000 cm-1 with rotational states up to J = 85, and include 174,674,257 transitions. A list of 10,878 experimental transitions had relative intensities placed on an absolute scale, and were provided in a form suitable

  18. OSEM in accurate evaluation of CAD using cardiac SPECT

    Filtered backprojection (FBP) has been used as the standard processing method for almost all SPECT studies. Recently, iterative algorithms such as MLEM and, most recently, ordered subset expectation maximization (OSEM) have been introduced as alternative reconstruction methods and applied to many SPECT studies of the brain, bone, and other organs. Application of this method in cardiac SPECT has been restricted mainly to a few research centers. However, does iterative reconstruction, as compared to FBP, increase the accuracy of myocardial perfusion SPECT in the evaluation of coronary artery disease? We aimed to investigate the diagnostic performance of OSEM versus FBP. Fifteen patients with coronary artery disease underwent stress/rest cardiac perfusion studies. The images were obtained on an SMV SPECT system and a two-day protocol was performed. Each set of raw data was processed by both methods: FBP was done with a Metz 4 filter, and iterative reconstruction was done with the OSEM software (5 iterations-16 subsets, 10 iterations-32 subsets, 5 iterations-16 subsets, and ...) without any truncation. Comparison of OSEM versus FBP was performed both visually and quantitatively, using a square ROI over each of 8 segments of the LV; a mean activity ratio (defect-to-normal ratio) and signal-to-noise (background activity) were obtained. All quantitative parameters were analyzed statistically with SPSS software, and the correlation coefficient was calculated using Pearson's correlation formula. The mean correlation coefficient between OSEM and FBP was high (r2 = 0.96) when an iteration number of 5 was used. The difference in the mean D/N activity ratio was <2.5%, and the difference in the count ratio was low (4%). Visually, image quality was somewhat higher with OSEM than with FBP. The above results revealed similarity between these two methods, with reduced noise in the OSEM-reconstructed images, and indicated that OSEM can provide improved accuracy if a proper number of iterations and subset level are selected. The obtained results concluded that OSEM could be used as
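
    For reference, the OSEM update itself is compact: the projection data are split into ordered subsets, and each sub-iteration applies the MLEM multiplicative update restricted to one subset. The sketch below is a textbook formulation on a toy linear system, not the vendor software used in the study.

```python
import numpy as np

# Minimal OSEM iteration for a linear emission model y ~= A @ x.
def osem(A, y, n_iter=5, n_subsets=4, eps=1e-12):
    n_proj, n_vox = A.shape
    x = np.ones(n_vox)                        # flat initial image
    subsets = np.array_split(np.arange(n_proj), n_subsets)
    for _ in range(n_iter):
        for s in subsets:
            ratio = y[s] / (A[s] @ x + eps)   # measured / current estimate
            x *= (A[s].T @ ratio) / (A[s].sum(axis=0) + eps)
    return x

rng = np.random.default_rng(0)
A = rng.uniform(0, 1, size=(64, 16))          # toy system matrix
x_true = rng.uniform(0.5, 2.0, size=16)
y = rng.poisson(A @ x_true * 50) / 50.0       # noisy projections
print(np.round(osem(A, y), 2))
```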

  19. A spectroscopic transfer standard for accurate atmospheric CO measurements

    Nwaboh, Javis A.; Li, Gang; Serdyukov, Anton; Werhahn, Olav; Ebert, Volker

    2016-04-01

    Atmospheric carbon monoxide (CO) is a precursor of essential climate variables and has an indirect enhancing effect on global warming. Accurate and reliable measurements of atmospheric CO concentration are becoming indispensable. WMO-GAW reports state a compatibility goal of ±2 ppb for atmospheric CO concentration measurements. Therefore, the EMRP-HIGHGAS (European metrology research program - high-impact greenhouse gases) project aims at developing spectroscopic transfer standards for CO concentration measurements to meet this goal. A spectroscopic transfer standard would provide results that are directly traceable to the SI, can be very useful for calibration of devices operating in the field, and could complement classical gas standards in the field, where calibration gas mixtures in bottles often are not accurate, available or stable enough [1][2]. Here, we present our new direct tunable diode laser absorption spectroscopy (dTDLAS) sensor, capable of performing absolute ("calibration-free") CO concentration measurements and being operated as a spectroscopic transfer standard. To achieve the compatibility goal stated by the WMO for CO concentration measurements and to ensure the traceability of the final concentration results, traceable spectral line data, especially line intensities with appropriate uncertainties, are needed. Therefore, we utilize our new high-resolution Fourier-transform infrared (FTIR) spectroscopy CO line data for the 2-0 band, with significantly reduced uncertainties, for the dTDLAS data evaluation. Further, we demonstrate the capability of our sensor for atmospheric CO measurements, discuss the uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) principles, and show that CO concentrations derived using the sensor, based on the TILSAM (traceable infrared laser spectroscopic amount fraction measurement) method, are in excellent agreement with gravimetric values. Acknowledgement Parts of this work have been
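
    The calibration-free evaluation rests on Beer-Lambert absorption: the integrated absorbance, the known line intensity S(T), and the path length give the CO number density, which the ideal gas law converts to an amount fraction. All numbers below are illustrative assumptions, not the sensor's actual parameters.

```python
# Sketch in the spirit of a calibration-free TDLAS evaluation.
K_B = 1.380649e-23            # Boltzmann constant, J/K

A_int = 3.9e-7                # integrated absorbance, cm^-1 (placeholder)
S_T = 2.2e-21                 # line intensity, cm^-1/(molecule cm^-2) (placeholder)
L = 36.0                      # optical path length, cm (placeholder)
T = 296.0                     # gas temperature, K
p = 101325.0                  # total pressure, Pa

n_co = A_int / (S_T * L)                  # CO number density, molecules/cm^3
x_co = n_co * 1e6 * K_B * T / p           # cm^-3 -> m^-3, then ideal gas law
print(f"CO amount fraction: {x_co * 1e9:.0f} ppb")
```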

  20. Accurate in-line CD metrology for nanometer semiconductor manufacturing

    Perng, Baw-Ching; Shieh, Jyu-Horng; Jang, S.-M.; Liang, M.-S.; Huang, Renee; Chen, Li-Chien; Hwang, Ruey-Lian; Hsu, Joe; Fong, David

    2006-03-01

    The need for absolute accuracy is increasing as semiconductor-manufacturing technologies advance to sub-65nm nodes, since device sizes are shrinking to sub-50nm while offsets ranging from 5nm to 20nm are often encountered. While TEM is well recognized as the most accurate CD metrology, direct comparison between TEM data and in-line CD data can sometimes be misleading due to different statistical sampling and interference from sidewall roughness. In this work we explore the capability of CD-AFM as an accurate in-line CD reference metrology. As a member of the scanning profiling metrologies, CD-AFM has the advantages of avoiding e-beam damage and minimizing sample-damage-induced CD changes, in addition to allowing more statistical sampling than typical cross-section metrologies. While AFM has already gained a reputation for the accuracy of its depth measurements, little data has been reported on the accuracy of CD-AFM for CD measurement. Our main focus here is to prove the accuracy of CD-AFM and show its measuring capability for semiconductor-related materials and patterns. In addition to the typical precision check, we made an intensive effort to examine the bias performance of this CD metrology, defined as the difference between the CD-AFM data and the best-known CD value of the prepared samples. We first examine line edge roughness (LER) behavior for line patterns of various materials, including polysilicon, photoresist, and a porous low-k material. Based on the LER characteristics of each patterning, a method is proposed to reduce its influence on CD measurement. Application of our method to a VLSI nanoCD standard is then performed, and agreement within 1nm bias is achieved between the CD-AFM data and the standard's value. With very careful sample preparation and TEM tool calibration, we also obtained excellent correlation between CD-AFM and TEM for poly-CDs ranging from 70nm to 400nm. CD measurements of poly ADI and low-k trenches are also

  1. Passive samplers accurately predict PAH levels in resident crayfish.

    Paulik, L Blair; Smith, Brian W; Bergmann, Alan J; Sower, Greg J; Forsberg, Norman D; Teeguarden, Justin G; Anderson, Kim A

    2016-02-15

    Contamination of resident aquatic organisms is a major concern for environmental risk assessors. However, collecting organisms to estimate risk is often prohibitively time- and resource-intensive. Passive sampling accurately estimates resident organism contamination, and it saves time and resources. This study used low density polyethylene (LDPE) passive water samplers to predict polycyclic aromatic hydrocarbon (PAH) levels in signal crayfish, Pacifastacus leniusculus. Resident crayfish were collected at 5 sites within and outside of the Portland Harbor Superfund Megasite (PHSM) in the Willamette River in Portland, Oregon. LDPE deployment was spatially and temporally paired with crayfish collection. Crayfish visceral and tail tissue, as well as water-deployed LDPE, were extracted and analyzed for 62 PAHs using GC-MS/MS. Freely-dissolved concentrations (Cfree) of PAHs in water were calculated from concentrations in LDPE. Carcinogenic risks were estimated for all crayfish tissues, using benzo[a]pyrene equivalent concentrations (BaPeq). ∑PAH were 5-20 times higher in viscera than in tails, and ∑BaPeq were 6-70 times higher in viscera than in tails. Eating only tail tissue of crayfish would therefore significantly reduce carcinogenic risk compared to also eating viscera. Additionally, PAH levels in crayfish were compared to levels in crayfish collected 10 years earlier. PAH levels in crayfish were higher upriver of the PHSM and unchanged within the PHSM after the 10-year period. Finally, a linear regression model predicted levels of 34 PAHs in crayfish viscera with an associated R-squared value of 0.52 (and a correlation coefficient of 0.72), using only the Cfree PAHs in water. On average, the model predicted PAH concentrations in crayfish tissue within a factor of 2.4 ± 1.8 of measured concentrations. This affirms that passive water sampling accurately estimates PAH contamination in crayfish. Furthermore, the strong predictive ability of this simple model suggests
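
    A minimal sketch of the kind of predictive regression described: a log-log linear fit of tissue concentration on the freely-dissolved water concentration (Cfree), followed by the fold-difference between predicted and measured values. The data below are invented placeholders, not the study's measurements.

```python
import numpy as np

# Log-log regression of tissue PAH concentration on Cfree (placeholder data).
cfree = np.array([0.8, 2.5, 6.0, 14.0, 31.0])        # ng/L in water
tissue = np.array([3.1, 9.8, 18.0, 52.0, 95.0])      # ng/g in viscera

slope, intercept = np.polyfit(np.log10(cfree), np.log10(tissue), 1)
pred = 10 ** (intercept + slope * np.log10(cfree))

# Fold-difference between predicted and measured, as in "within a factor of X".
fold = np.maximum(pred / tissue, tissue / pred)
print(f"slope = {slope:.2f}, mean fold difference = {fold.mean():.2f}")
```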

  2. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    Farmerie William G

    2006-08-01

    Background: Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20) System (454 Life Sciences Corporation), to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae) and Platanus occidentalis (Platanaceae).
    Results: More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions.
    Conclusion: Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy

  3. Automatic classification and accurate size measurement of blank mask defects

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    A blank mask and its preparation stages, such as cleaning or resist coating, play an important role in the eventual yield obtained by using it. The impact analysis of blank mask defects directly depends on the amount of available information, such as the number of defects observed, their accurate locations, and their sizes. Mask usability qualification at the start of the preparation process is crudely based on the number of defects. Similarly, defect information such as size is sought to estimate eventual defect printability on the wafer. Tracking of defect characteristics, specifically size and shape, across multiple stages can further be indicative of process-related information, such as cleaning or coating process efficiencies. At the first level, inspection machines address the requirement of defect characterization by detecting and reporting relevant defect information. The analysis of this information, though, is still largely a manual process. With advancing technology nodes and shrinking half-pitch sizes, a large number of defects are observed, and the detailed knowledge required makes the manual defect review process an arduous task, in addition to adding sensitivity to human error. In cases where the defect information reported by the inspection machine is not sufficient, mask shops rely on other tools; use of CD-SEM tools is one such option. However, these additional steps translate into increased costs. The Calibre NxDAT based MDPAutoClassify tool provides an automated software alternative to the manual defect review process. Working on defect images generated by inspection machines, the tool extracts and reports additional information such as defect location, useful for defect avoidance[4][5]; defect size, useful in estimating defect printability; and defect nature, e.g., particle, scratch, resist void, etc., useful for process monitoring. The tool makes use of smart and elaborate post-processing algorithms to achieve this. Their elaborateness is a consequence of the variety and

  4. Strategies for quantitation of phosphoproteomic data

    Palmisano, Giuseppe; Thingholm, Tine Engberg

    2010-01-01

    Recent developments in phosphoproteomic sample-preparation techniques and sensitive mass spectrometry instrumentation have led to large-scale identifications of phosphoproteins and phosphorylation sites from highly complex samples. This has facilitated the implementation of different quantitation...... on different quantitation strategies. Methods for metabolic labeling, chemical modification and label-free quantitation and their applicability or inapplicability in phosphoproteomic studies are discussed....

  5. Quantitative remote visual inspection in nuclear power industry

    A borescope is an instrument used within the power industry to visually inspect remote locations. It is typically used for inspections of heat exchangers, condensers, boiler tubes, and steam generators, and in many general inspection applications. The optical system of a borescope, like the human eye, does not have a fixed magnification. When viewing an object close up, it appears large; when the same object is viewed from afar, it appears small. Humans, though, have two separate eyes and a brain that process information to calculate the size of an object. These attributes are considered secondary information. Until now, making a measurement using a borescope has been an educated guess. There has always been a need to make accurate measurements from borescope images. The realization of this capability would make remote visual inspection a quantitative nondestructive testing method rather than a qualitative one. For nuclear power plants, it is an excellent technique for maintaining radiation levels as low as reasonably achievable: remote visual measurement provides standoff distance and limits the exposure time needed to make accurate measurements. The design problem, therefore, was to develop the capability to make accurate and repeatable measurements of objects or physical defects with a borescope-type instrument. The solution was achieved by designing a borescope with a novel shadow projection mechanism, integrated with an electronics module containing the video display circuitry and a measurement computer.

  6. Quantitative Characterization of Nanostructured Materials

    Dr. Frank (Bud) Bridges, University of California-Santa Cruz

    2010-08-05

    The two-and-a-half day symposium on the "Quantitative Characterization of Nanostructured Materials" will be the first comprehensive meeting on this topic held under the auspices of a major U.S. professional society. Spring MRS Meetings provide a natural venue for this symposium as they attract a broad audience of researchers that represents a cross-section of the state-of-the-art regarding synthesis, structure-property relations, and applications of nanostructured materials. Close interactions among the experts in local structure measurements and materials researchers will help both to identify measurement needs pertinent to real-world materials problems and to familiarize the materials research community with the state-of-the-art local structure measurement techniques. We have chosen invited speakers that reflect the multidisciplinary and international nature of this topic and the need to continually nurture productive interfaces among university, government and industrial laboratories. The intent of the symposium is to provide an interdisciplinary forum for discussion and exchange of ideas on the recent progress in quantitative characterization of structural order in nanomaterials using different experimental techniques and theory. The symposium is expected to facilitate discussions on optimal approaches for determining atomic structure at the nanoscale using combined inputs from multiple measurement techniques.

  7. Quantitative Analysis of Face Symmetry.

    Tamir, Abraham

    2015-06-01

    The major objective of this article was to report quantitatively the degree of human face symmetry for images taken from the Internet. From the original image of a certain person, which appears in the center of each triplet, 2 symmetric combinations were constructed, based on the left part of the image and its mirror image (left-left) and on the right part of the image and its mirror image (right-right). By applying computer software that can determine the length, surface area, and perimeter of any geometric shape, the following measurements were obtained for each triplet: face perimeter and area; distance between the pupils; mouth length; its perimeter and area; nose length and face length, usually below the ears; as well as the area and perimeter of the pupils. Then, for each of the above measurements, the value C, which characterizes the degree of symmetry of the real image with respect to the combinations right-right and left-left, was calculated. C appears on the right-hand side below each image. A high value of C indicates low symmetry; as the value decreases, the symmetry increases. The magnitude on the left relates to the pupils and compares the difference between the area and perimeter of the 2 pupils. The major conclusion arrived at here is that the human face is asymmetric to some degree; the degree of asymmetry is reported quantitatively under each portrait. PMID:26080172
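
    The left-left/right-right construction is easy to express in code. The sketch below builds both composites from a stand-in array and scores their difference; note that the paper's C is computed from geometric measurements (areas, perimeters), so the pixel-difference score here is only loosely analogous and the data are random placeholders.

```python
import numpy as np

# Build the two symmetric composites from a face image (random stand-in).
img = np.random.default_rng(0).integers(0, 256, size=(128, 128))

left = img[:, :64]
right = img[:, 64:]
left_left = np.hstack([left, left[:, ::-1]])      # left half + its mirror
right_right = np.hstack([right[:, ::-1], right])  # mirror + right half

# One possible asymmetry score: mean absolute difference between composites.
C = np.abs(left_left.astype(float) - right_right).mean()
print(f"asymmetry score: {C:.1f} (0 would mean perfect symmetry)")
```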

  8. Quantitative evaluation of Alzheimer's disease

    Duchesne, S.; Frisoni, G. B.

    2009-02-01

    We propose a single, quantitative metric called the disease evaluation factor (DEF) and assess its efficiency at estimating disease burden in normal control subjects (CTRL) and probable Alzheimer's disease (AD) patients. The study group consisted of 75 patients with a diagnosis of probable AD and 75 age-matched normal CTRL without neurological or neuropsychological deficit. We calculated a reference eigenspace of MRI appearance from reference data, onto which our CTRL and probable AD subjects were projected. We then calculated the multi-dimensional hyperplane separating the CTRL and probable AD groups. The DEF was estimated via a multidimensional weighted distance between a given subject's eigencoordinates and the CTRL group mean, along the salient principal components forming the separating hyperplane. We used quantile plots, Kolmogorov-Smirnov and χ2 tests to compare the DEF values and test that their distribution was normal. We used a linear discriminant test to separate CTRL from probable AD based on the DEF factor, and reached an accuracy of 87%. A quantitative biomarker in AD would act as an important surrogate marker of disease status and progression.

  9. Quantitative measurements in capsule endoscopy.

    Keuchel, M; Kurniawan, N; Baltes, P; Bandorski, D; Koulaouzidis, A

    2015-10-01

    This review summarizes several approaches for quantitative measurement in capsule endoscopy. Video capsule endoscopy (VCE) typically provides wireless imaging of the small bowel. Currently, a variety of quantitative measurements are implemented in commercially available hardware/software; the majority are proprietary and hence undisclosed algorithms. Measurement of the amount of luminal contamination allows scores to be calculated from whole VCE studies. Other scores express the severity of small bowel lesions in Crohn's disease or the degree of villous atrophy in celiac disease. Image processing with numerous algorithms for textural and color feature extraction is a further research focus for automated image analysis. These tools aim to select single images with relevant lesions such as blood, ulcers, polyps and tumors, or to omit images showing only luminal contamination. Analysis of motility patterns, size measurement, and determination of capsule localization are additional topics. Non-visual wireless capsules transmitting data acquired with specific sensors from the gastrointestinal (GI) tract are available for clinical routine. This includes pH measurement in the esophagus for the diagnosis of acid gastro-esophageal reflux. A wireless motility capsule provides GI motility analysis on the basis of pH, pressure, and temperature measurements. Electromagnetic tracking of another motility capsule allows motility to be visualized. However, measurement of substances by GI capsules is of great interest but still at an early stage of development. PMID:26299419

  10. Quantitative information in medical imaging

    When developing new imaging or image processing techniques, one constantly has in mind that the new technique should provide a better, or more optimal, answer to medical tasks than existing techniques do. 'Better' or 'more optimal' imply some kind of standard by which one can measure imaging or image processing performance. The choice of a particular imaging modality to answer a diagnostic task, such as the detection of coronary artery stenosis, is also based on an implicit optimization of performance criteria. Performance is measured by the ability to provide information about an object (patient) to the person (referring doctor) who ordered a particular task. In medical imaging the task is generally to find quantitative information on bodily function (biochemistry, physiology) and structure (histology, anatomy). In medical imaging, a wide range of techniques is available. Each technique has its own characteristics. The techniques discussed in this paper are: nuclear magnetic resonance, X-ray fluorescence, scintigraphy, positron emission tomography, applied potential tomography, computerized tomography, and Compton tomography. This paper provides a framework for the comparison of imaging performance, based on the way the quantitative information flow is altered by the characteristics of the modality

  11. Quantitative lipopolysaccharide analysis using HPLC/MS/MS and its combination with the limulus amebocyte lysate assay

    Pais De Barros, Jean-Paul; Gautier, Thomas; Sali, Wahib; Adrie, Christophe; Choubley, Hélène; Charron, Emilie; Lalande, Caroline; Le Guern, Naig; Deckert, Valérie; Monchi, Mehran; Quenot, Jean-Pierre; Lagrost, Laurent

    2015-01-01

    Quantitation of plasma lipopolysaccharides (LPSs) might be used to document Gram-negative bacterial infection. In the present work, LPS-derived 3-hydroxymyristate was extracted from plasma samples with an organic solvent, separated by reversed phase HPLC, and quantitated by MS/MS. This mass assay was combined with the limulus amebocyte lysate (LAL) bioassay to monitor neutralization of LPS activity in biological samples. The described HPLC/MS/MS method is a reliable, practical, accurate, and ...

  12. Towards more accurate and reliable predictions for nuclear applications

    The need for nuclear data far from the valley of stability, for applications such as nuclear astrophysics or future nuclear facilities, challenges the robustness as well as the predictive power of present nuclear models. Most nuclear data evaluation and prediction is still performed on the basis of phenomenological nuclear models. Over the last decades, important progress has been achieved in fundamental nuclear physics, making it now feasible to use more reliable, but also more complex, microscopic or semi-microscopic models in the evaluation and prediction of nuclear data for practical applications. In the present contribution, the reliability and accuracy of recent nuclear theories are discussed for most of the relevant quantities needed to estimate reaction cross sections and beta-decay rates, namely nuclear masses, nuclear level densities, gamma-ray strength, fission properties and beta-strength functions. It is shown that nowadays, mean-field models can be tuned to the same level of accuracy as the phenomenological models, renormalized on experimental data if needed, and therefore can replace the phenomenological inputs in the prediction of nuclear data. While fundamental nuclear physicists keep on improving state-of-the-art models, e.g. within the shell model or ab initio models, nuclear applications could make use of their most recent results as quantitative constraints or guides to improve predictions in energy or mass domains that will remain inaccessible experimentally. (orig.)

  13. Accurate Complex Systems Design: Integrating Serious Games with Petri Nets

    Kirsten Sinclair

    2016-03-01

    Difficulty understanding the large number of interactions involved in complex systems makes their successful engineering a problem. Petri Nets are one graphical modelling technique used to describe and check proposed designs of complex systems thoroughly. While the automatic analysis capabilities of Petri Nets are useful, their visual form is less so, particularly for communicating the design they represent. In engineering projects, this can lead to a gap in communications between people with different areas of expertise, negatively impacting the achievement of accurate designs. In contrast, although capable of representing a variety of real and imaginary objects effectively, the behaviour of serious games can only be analysed manually through interactive simulation. This paper examines combining the complementary strengths of Petri Nets and serious games. The novel contribution of this work is a serious game prototype of a complex system design that has been checked thoroughly. Underpinned by Petri Net analysis, the serious game can be used as a high-level interface to communicate and refine the design. Improvement of a complex system design is demonstrated by applying the integration to a proof-of-concept case study.
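
    For readers unfamiliar with Petri-net semantics, a minimal interpreter makes the token game explicit. This is generic textbook semantics, not the paper's nets or tooling, and the place and transition names are invented for illustration.

```python
# A minimal Petri-net interpreter: a transition is enabled when every input
# place holds at least the required tokens; firing consumes and produces them.
marking = {"ready": 1, "running": 0, "done": 0}
transitions = {
    "start":  ({"ready": 1},   {"running": 1}),   # (consumes, produces)
    "finish": ({"running": 1}, {"done": 1}),
}

def enabled(name):
    consumes, _ = transitions[name]
    return all(marking[p] >= n for p, n in consumes.items())

def fire(name):
    assert enabled(name), f"{name} is not enabled"
    consumes, produces = transitions[name]
    for p, n in consumes.items():
        marking[p] -= n
    for p, n in produces.items():
        marking[p] += n

fire("start"); fire("finish")
print(marking)   # {'ready': 0, 'running': 0, 'done': 1}
```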

  14. An accurate δf method for neoclassical transport calculation

    A δf method, solving drift kinetic equation, for neoclassical transport calculation is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of marker density g in weight calculation. A general and accurate weighting scheme is developed without using some assumed g in weight equation for advancing particle weights, unlike the previous schemes. This scheme employs an additional weight function to directly solve g from its kinetic equation using the idea of δf method. Therefore the severe constraint that the real marker distribution must be consistent with the initially assumed g during a simulation is relaxed. An improved like-particle collision scheme is presented. By performing compensation for momentum, energy and particle losses arising from numerical errors, the conservations of all the three quantities are greatly improved during collisions. Ion neoclassical transport due to self-collisions is examined under finite banana case as well as zero banana limit. A solution with zero particle and zero energy flux (in case of no temperature gradient) over whole poloidal section is obtained. With the improvement in both like-particle collision scheme and weighting scheme, the δf simulation shows a significantly upgraded performance for neoclassical transport study. (author)

  15. Progress in Fast, Accurate Multi-scale Climate Simulations

    Collins, William D [Lawrence Berkeley National Laboratory (LBNL); Johansen, Hans [Lawrence Berkeley National Laboratory (LBNL); Evans, Katherine J [ORNL; Woodward, Carol S. [Lawrence Livermore National Laboratory (LLNL); Caldwell, Peter [Lawrence Livermore National Laboratory (LLNL)

    2015-01-01

    We present a survey of physical and computational techniques that have the potential to contribute to the next generation of high-fidelity, multi-scale climate simulations. Examples of the climate science problems that can be investigated with more depth include the capture of remote forcings of localized hydrological extreme events, an accurate representation of cloud features over a range of spatial and temporal scales, and parallel, large ensembles of simulations to more effectively explore model sensitivities and uncertainties. Numerical techniques, such as adaptive mesh refinement, implicit time integration, and separate treatment of fast physical time scales are enabling improved accuracy and fidelity in simulation of dynamics and allow more complete representations of climate features at the global scale. At the same time, partnerships with computer science teams have focused on taking advantage of evolving computer architectures, such as many-core processors and GPUs, so that these approaches which were previously considered prohibitively costly have become both more efficient and scalable. In combination, progress in these three critical areas is poised to transform climate modeling in the coming decades.

  16. Faster and More Accurate Sequence Alignment with SNAP

    Zaharia, Matei; Curtis, Kristal; Fox, Armando; Patterson, David; Shenker, Scott; Stoica, Ion; Karp, Richard M; Sittler, Taylor

    2011-01-01

    We present the Scalable Nucleotide Alignment Program (SNAP), a new short and long read aligner that is both more accurate (i.e., aligns more reads with fewer errors) and 10-100x faster than state-of-the-art tools such as BWA. Unlike recent aligners based on the Burrows-Wheeler transform, SNAP uses a simple hash index of short seed sequences from the genome, similar to BLAST's. However, SNAP greatly reduces the number and cost of local alignment checks performed through several measures: it uses longer seeds to reduce the false positive locations considered, leverages larger memory capacities to speed index lookup, and excludes most candidate locations without fully computing their edit distance to the read. The result is an algorithm that scales well for reads from one hundred to thousands of bases long and provides a rich error model that can match classes of mutations (e.g., longer indels) that today's fast aligners ignore. We calculate that SNAP can align a dataset with 30x coverage of a human genome in le...
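
    The core of a seed-based hash index is small enough to sketch. The toy below builds a k-mer index and collects candidate alignment starts from seed hits; SNAP itself adds longer seeds, location filtering, and an edit-distance cutoff, none of which are reproduced here. The sequences are invented examples.

```python
from collections import defaultdict

# Build a hash index mapping each k-mer seed to its genome positions.
def build_index(genome, k=8):
    index = defaultdict(list)
    for i in range(len(genome) - k + 1):
        index[genome[i:i + k]].append(i)
    return index

# Collect candidate alignment start positions implied by seed hits.
def candidate_positions(read, index, k=8):
    hits = set()
    for j in range(0, len(read) - k + 1, k):   # non-overlapping seeds
        for pos in index.get(read[j:j + k], []):
            hits.add(pos - j)                  # implied alignment start
    return sorted(hits)

genome = "ACGTACGTTTGACCAGTACGGATCCGTACGTTAGC"
read = "GACCAGTACGGA"
idx = build_index(genome)
print(candidate_positions(read, idx))          # -> [10]
```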

  17. Accurate Detection of Rifampicin-Resistant Mycobacterium Tuberculosis Strains

    Keum-Soo Song

    2016-03-01

    In 2013 alone, the death rate among the 9.0 million people infected with Mycobacterium tuberculosis (TB) worldwide was around 14%, which is unacceptably high. Empiric treatment of patients infected with TB or a drug-resistant Mycobacterium tuberculosis (MDR-TB) strain can also result in the spread of MDR-TB. Diagnostic tools that are rapid and reliable and have simple experimental protocols can significantly help in decreasing the prevalence of MDR-TB strains. We report the evaluation of the 9G-technology-based 9G DNAChips, which allow accurate detection and discrimination of TB and MDR-TB-RIF. One hundred and thirteen known cultured samples were used to evaluate the ability of the 9G DNAChip to detect and discriminate TB and MDR-TB-RIF strains. Hybridization of immobilized probes with the PCR products of TB and MDR-TB-RIF strains allows their detection and discrimination. The accuracy of the 9G DNAChip was determined by comparing its results with sequencing analysis and drug susceptibility testing. Sequencing analysis showed 100% agreement with the results of the 9G DNAChip. The 9G DNAChip showed very high sensitivity (95.4%) and specificity (100%).

  18. Accurate measurement of liquid transport through nanoscale conduits.

    Alibakhshi, Mohammad Amin; Xie, Quan; Li, Yinxiao; Duan, Chuanhua

    2016-01-01

    Nanoscale liquid transport governs the behaviour of a wide range of nanofluidic systems, yet remains poorly characterized and understood due to the enormous hydraulic resistance associated with the nanoconfinement and the resulting minuscule flow rates in such systems. To overcome this problem, here we present a new measurement technique based on capillary flow and a novel hybrid nanochannel design, and use it to measure water transport through single 2-D hydrophilic silica nanochannels with heights down to 7 nm. Our results show that silica nanochannels exhibit increased mass flow resistance compared to the classical hydrodynamics prediction. This difference increases with decreasing channel height and reaches 45% in the case of 7 nm nanochannels. This resistance increase is attributed to the formation of a 7-angstrom-thick stagnant hydration layer on the hydrophilic surfaces. By avoiding the use of any pressure and flow sensors or any theoretical estimations, the hybrid nanochannel scheme enables facile and precise flow measurement through single nanochannels, nanotubes, or nanoporous media and opens the prospect for accurate characterization of both hydrophilic and hydrophobic nanofluidic systems. PMID:27112404
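
    For context, the classical prediction such measurements are compared against is the hydraulic resistance of a shallow slit channel, R = 12·mu·L/(w·h^3); its h^-3 scaling is why nanometre-scale changes in effective height (for example, from a stagnant surface layer) matter most in the shallowest channels. The dimensions below are illustrative assumptions, not the paper's devices.

```python
# Classical hydraulic resistance of a shallow slit channel (valid for h << w).
mu = 1.0e-3            # water viscosity, Pa s
L = 100e-6             # channel length, m (placeholder)
w = 10e-6              # channel width, m (placeholder)

for h_nm in (7, 16, 60):
    h = h_nm * 1e-9
    R = 12 * mu * L / (w * h**3)   # resistance scales as h^-3
    print(f"h = {h_nm:3d} nm -> R = {R:.2e} Pa s m^-3")
```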

  19. Accurate reconstruction of hyperspectral images from compressive sensing measurements

    Greer, John B.; Flake, J. C.

    2013-05-01

    The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher,2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al.,2008] and Dual Disperser (CASSI-DD) [Gehm et al.,2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes - an AVIRIS image of Cuprite, Nevada and the HYMAP Urban image. To measure accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.

  20. Study of accurate volume measurement system for plutonium nitrate solution

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-12-01

    It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, in which the air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure involves many sources of error, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the front of the dip-tubes, non-uniformity of the temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures arising in the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system was developed with a quartz-oscillation-type transducer, which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
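
    The dip-tube arithmetic itself is straightforward: the differential pressure across a known vertical separation between two tube tips gives the solution density, and a second differential pressure gives the liquid head above the lowest tip; the tank's calibration curve then converts level to volume. The numbers below are illustrative, not plant data.

```python
# Hydrostatic relations behind dip-tube (pneumercator) volume measurement.
G = 9.80665            # standard gravity, m/s^2

dp_density = 170.0     # Pa, between two tubes separated vertically (placeholder)
dh_tubes = 0.012       # m, known vertical separation of those tube tips (placeholder)
dp_level = 19_500.0    # Pa, between lowest tube and gas space (placeholder)

rho = dp_density / (G * dh_tubes)   # solution density, kg/m^3
level = dp_level / (G * rho)        # solution head above the lowest tip, m
print(f"density = {rho:.0f} kg/m^3, level = {level:.3f} m")
# Volume then follows from the tank's calibrated level-to-volume curve.
```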

  1. An Accurate Projector Calibration Method Based on Polynomial Distortion Representation

    Miao Liu

    2015-10-01

    In structured light measurement systems or 3D printing systems, the errors caused by optical distortion of a digital projector always affect the precision and cannot be ignored. Existing methods for calibrating projection distortion rely on a calibration plate and photogrammetry, so the calibration performance is largely affected by the quality of the plate and the imaging system. This paper proposes a new projector calibration approach that makes use of photodiodes to directly detect the light emitted from a digital projector. By analyzing the output sequence of the photoelectric module, the pixel coordinates can be accurately obtained by curve fitting. A polynomial distortion representation is employed to reduce the residuals of the traditional distortion representation model. Experimental results and performance evaluation show that the proposed calibration method is able to avoid most of the disadvantages of traditional methods and achieves a higher accuracy. The proposed method is also practically applicable for evaluating the geometric optical performance of other optical projection systems.
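
    Fitting a polynomial distortion representation is a linear least-squares problem once point correspondences are known. The sketch below fits a hypothetical 3rd-order 2-D polynomial to synthetic point pairs; the model order, the toy distortion, and all data are assumptions for illustration, not the paper's model.

```python
import numpy as np

# Design matrix of all monomials x**i * y**j with i + j <= order.
def design_matrix(x, y, order=3):
    cols = [x**i * y**j for i in range(order + 1)
            for j in range(order + 1 - i)]
    return np.column_stack(cols)

rng = np.random.default_rng(0)
x, y = rng.uniform(0, 1024, 200), rng.uniform(0, 768, 200)
x_obs = x + 0.5 + 1e-4 * x * y                # toy distortion in x
y_obs = y - 0.3 + 2e-7 * x**2 * y             # toy distortion in y

# Solve for the polynomial coefficients mapping ideal -> observed pixels.
M = design_matrix(x, y)
coef_x, *_ = np.linalg.lstsq(M, x_obs, rcond=None)
coef_y, *_ = np.linalg.lstsq(M, y_obs, rcond=None)
resid = np.hypot(M @ coef_x - x_obs, M @ coef_y - y_obs)
print(f"max residual: {resid.max():.2e} px")
```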

  2. A New Path Generation Algorithm Based on Accurate NURBS Curves

    Sawssen Jalel

    2016-04-01

    The process of finding an optimal, smooth and feasible global path for mobile robot navigation usually involves determining the shortest polyline path, which is subsequently smoothed to satisfy the requirements. Within this context, this paper deals with a novel roadmap algorithm for generating an optimal path in terms of Non-Uniform Rational B-Spline (NURBS) curves. The generated path is well constrained within the curvature limit by exploiting the influence of the weight parameter of NURBS and/or the control points' locations. The novelty of this paper lies in the fact that NURBS curves are not used only as a means of smoothing, but are also involved in meeting the system's constraints via a suitable parameterization of the weights and locations of the control points. The accurate parameterization of weights allows a greater benefit to be derived from the influence and geometrical effect of this factor, which has not been well investigated in previous works. The effectiveness of the proposed algorithm is demonstrated through extensive MATLAB computer simulations.
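
    The NURBS evaluation underlying such planners follows the standard rational form C(u) = sum_i N_i,p(u) w_i P_i / sum_i N_i,p(u) w_i; in homogeneous coordinates both sums are ordinary B-splines. The Python sketch below shows this generic definition, independent of the paper's algorithm, with an invented curve whose second weight pulls the path toward its control point.

```python
import numpy as np
from scipy.interpolate import BSpline

p = 2                                                  # degree
ctrl = np.array([[0.0, 0], [1, 2], [3, 2], [4, 0]])    # control points
w = np.array([1.0, 4.0, 1.0, 1.0])                     # weights (w[1] pulls curve)
knots = np.array([0, 0, 0, 0.5, 1, 1, 1], dtype=float) # clamped knot vector

# Numerator and denominator of the rational form as plain B-splines.
num = BSpline(knots, ctrl * w[:, None], p)             # weighted control points
den = BSpline(knots, w, p)

u = np.linspace(0, 1, 5)
curve = num(u) / den(u)[:, None]                       # rational evaluation
print(np.round(curve, 3))                              # runs from (0,0) to (4,0)
```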

  3. Accurate ab initio vibrational energies of methyl chloride

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH3^35Cl and CH3^37Cl. The respective PESs, CBS-35 HL and CBS-37 HL, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY3Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35 HL and CBS-37 HL PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm−1, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH3Cl without empirical refinement of the respective PESs.

  4. Accurate ab initio vibrational energies of methyl chloride

    Owens, Alec, E-mail: owens@mpi-muelheim.mpg.de [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan [Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Thiel, Walter [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany)

    2015-06-28

Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH3(35)Cl and CH3(37)Cl. The respective PESs, CBS-35 HL and CBS-37 HL, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY3Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35 HL and CBS-37 HL PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm−1, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH3Cl without empirical refinement of the respective PESs.
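Both records above rely on extrapolation to the complete basis set limit. A commonly used two-point scheme assumes an inverse-cubic dependence on the basis-set cardinal number n, E(n) = E_CBS + A/n³; the sketch below implements that generic form (the paper's actual extrapolation formula and the example energies are assumptions):

```python
def cbs_two_point(e_small, n_small, e_large, n_large):
    """Two-point CBS extrapolation assuming E(n) = E_CBS + A / n**3,
    a commonly used inverse-cubic form for correlation energies."""
    a = (e_small - e_large) / (n_small**-3 - n_large**-3)
    return e_large - a * n_large**-3

# Example with made-up correlation energies for cardinal numbers 3 and 4.
print(cbs_two_point(-0.3450, 3, -0.3601, 4))
```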

  5. An accurate δf method for neoclassical transport calculation

    Wang, W.X.; Nakajima, N.; Murakami, S.; Okamoto, M. [National Inst. for Fusion Science, Toki, Gifu (Japan)

    1999-03-01

A δf method, solving the drift kinetic equation, for neoclassical transport calculation is presented in detail. It is demonstrated that valid results essentially rely on the correct evaluation of the marker density g in the weight calculation. A general and accurate weighting scheme is developed that, unlike previous schemes, does not advance particle weights using an assumed g in the weight equation. This scheme employs an additional weight function to solve g directly from its kinetic equation, using the idea of the δf method. The severe constraint that the real marker distribution must remain consistent with the initially assumed g during a simulation is thereby relaxed. An improved like-particle collision scheme is also presented. By compensating for the momentum, energy and particle losses arising from numerical errors, conservation of all three quantities is greatly improved during collisions. Ion neoclassical transport due to self-collisions is examined for the finite-banana case as well as the zero-banana limit. A solution with zero particle flux and zero energy flux (in the case of no temperature gradient) over the whole poloidal section is obtained. With the improvements in both the like-particle collision scheme and the weighting scheme, the δf simulation shows significantly upgraded performance for neoclassical transport studies. (author)

  6. Iterative feature refinement for accurate undersampled MR image reconstruction

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

Accelerating MR scanning is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, existing CS-MRI approaches still have limitations such as fine-structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled k-space data. Integrating IFR with CS-MRI equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that would otherwise be discarded, without introducing excessive additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.
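The three-step loop can be sketched schematically. The toy reconstruction below uses complex soft-thresholding as the sparsity-promoting denoiser, a fixed-weight residual put-back as the "feature refinement", and a Tikhonov-style data-consistency update; it illustrates the structure of such methods, not the authors' IFR-CS algorithm:

```python
import numpy as np

def soft(x, lam):
    """Complex soft-thresholding, the usual sparsity-promoting proximal step."""
    return np.exp(1j * np.angle(x)) * np.maximum(np.abs(x) - lam, 0)

def ifr_cs_sketch(y, mask, n_iter=50, lam=0.02, mu=1.0):
    """Schematic IFR-CS-style loop: denoise, put back a refined residual,
    then enforce k-space data consistency. Illustrative only."""
    x = np.fft.ifft2(y)
    for _ in range(n_iter):
        x_den = soft(x, lam)            # 1) sparsity-promoting denoising
        feat  = x - x_den               # residual assumed to hold fine detail
        x_ref = x_den + 0.5 * feat      # 2) feature refinement (fixed weight)
        k = np.fft.fft2(x_ref)
        # 3) Tikhonov-style data consistency on the sampled locations.
        k[mask] = (k[mask] + mu * y[mask]) / (1 + mu)
        x = np.fft.ifft2(k)
    return x

rng = np.random.default_rng(0)
img  = rng.random((64, 64))
mask = rng.random((64, 64)) < 0.4       # 40% random undersampling
y    = np.fft.fft2(img) * mask
rec  = ifr_cs_sketch(y, mask)
```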

  7. A Distributed Weighted Voting Approach for Accurate Eye Center Estimation

    Gagandeep Singh

    2013-05-01

This paper proposes a novel approach for accurate estimation of the eye center in face images. A distributed voting approach, in which every pixel votes, is adopted to generate potential eye center candidates. The votes are distributed over a subset of pixels lying in the direction opposite to the gradient direction, and the weight of each vote is assigned according to a novel mechanism. First, the image is normalized to eliminate illumination variations and its edge map is generated using a Canny edge detector. Distributed voting is applied to the edge image to generate candidate eye centers. Morphological closing and local maxima search are used to reduce the number of candidates. A classifier based on spatial and intensity information then chooses the correct candidates for the eye center locations. The proposed approach was tested on the BioID face database and achieved a better iris detection rate than the state-of-the-art. It is robust against illumination variation, small pose variations, the presence of eyeglasses and partial occlusion of the eyes. Defence Science Journal, 2013, 63(3), pp. 292-297, DOI: http://dx.doi.org/10.14429/dsj.63.2763
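A gradient-opposite voting step of this general kind can be sketched as follows; the threshold, vote weighting, and step count here are illustrative assumptions rather than the paper's exact mechanism:

```python
import numpy as np

def eye_center_votes(gray, n_steps=15):
    """Cast votes from strong-gradient pixels along the direction opposite
    to the image gradient; the brightest accumulator peak is returned as
    the eye-center candidate. A schematic distributed-voting scheme."""
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    acc = np.zeros_like(mag)
    ys, xs = np.nonzero(mag > np.percentile(mag, 90))  # edge-like pixels
    for y, x in zip(ys, xs):
        dx, dy = -gx[y, x] / mag[y, x], -gy[y, x] / mag[y, x]
        for step in range(1, n_steps + 1):
            vy, vx = int(round(y + dy * step)), int(round(x + dx * step))
            if 0 <= vy < acc.shape[0] and 0 <= vx < acc.shape[1]:
                acc[vy, vx] += mag[y, x] / step   # nearer votes weigh more
    return np.unravel_index(np.argmax(acc), acc.shape)
```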

  8. Reusable, robust, and accurate laser-generated photonic nanosensor.

    Yetisen, Ali K; Montelongo, Yunuen; da Cruz Vasconcellos, Fernando; Martinez-Hurtado, J L; Neupane, Sankalpa; Butt, Haider; Qasim, Malik M; Blyth, Jeffrey; Burling, Keith; Carmody, J Bryan; Evans, Mark; Wilkinson, Timothy D; Kubota, Lauro T; Monteiro, Michael J; Lowe, Christopher R

    2014-06-11

Developing noninvasive and accurate diagnostics that are easily manufactured, robust, and reusable will provide monitoring of high-risk individuals in any clinical or point-of-care environment. We have developed a clinically relevant optical glucose nanosensor that can be reused at least 400 times without a compromise in accuracy. The use of a single 6 ns laser (λ = 532 nm, 200 mJ) pulse rapidly produced off-axis Bragg diffraction gratings consisting of ordered silver nanoparticles embedded within a phenylboronic acid-functionalized hydrogel. This sensor exhibited reversible large wavelength shifts and diffracted the spectrum of narrow-band light over the wavelength range λpeak ≈ 510-1100 nm. The experimental sensitivity of the sensor permits diagnosis of glucosuria in the urine samples of diabetic patients with an improved performance compared to commercial high-throughput urinalysis devices. The sensor response was achieved within 5 min and reset to baseline in ∼10 s. It is anticipated that this sensing platform will have implications for the development of reusable, equipment-free colorimetric point-of-care diagnostic devices for diabetes screening. PMID:24844116

  9. An Accurate ANFIS-based MPPT for Solar PV System

    Ahmed Bin-Halabi

    2014-06-01

The literature shows that ANFIS-based maximum power point tracking (MPPT) techniques are very fast and accurate in tracking the MPP under any weather conditions, and have small power losses when trained well. Unfortunately, this holds in simulation but not always in practice, because such techniques do not take into account the aging of solar cells or the effects of dust and shading. In other words, the solar irradiance measured by an irradiance sensor is not always the same irradiance that reaches the PV module. The main objective of this work is to design and practically implement an MPPT system for solar PV with high speed, high efficiency, and relatively easy implementation, in order to improve the efficiency of solar energy conversion. This MPPT system is based on the ANFIS technique. The contribution of this research is eliminating the need for an irradiance sensor while retaining the performance obtained by ANFIS with an irradiance sensor, both in simulation and in experimental implementation. The proposed technique has been validated by comparing the practical results of the implemented setup to simulations. Experimental results show good agreement with simulation results.

  10. AUTOMATED, HIGHLY ACCURATE VERIFICATION OF RELAP5-3D

    George L Mesina; David Aumiller; Francis Buschman

    2014-07-01

Computer programs that analyze light water reactor safety solve complex systems of governing, closure and special process equations to model the underlying physics. In addition, these programs incorporate many other features and are quite large. RELAP5-3D[1] has over 300,000 lines of code for physics, input, output, data management, user interaction, and post-processing. For software quality assurance, the code must be verified and validated before being released to users. Verification ensures that a program is built right by checking that it meets its design specifications. Recently, increased emphasis has been placed on developing automated verification processes that compare code against its documented algorithms and equations, and compare its calculations against analytical solutions and the method of manufactured solutions[2]. For the first time, the ability exists to ensure that the data transfer operations associated with timestep advancement/repeating and writing/reading a solution to a file have no unintended consequences. To ensure that the code performs as intended over its extensive list of applications, an automated and highly accurate verification method has been modified and applied to RELAP5-3D. Furthermore, mathematical analysis of the adequacy of the checks used in the comparisons is provided.

  11. Accurate and efficient waveforms for compact binaries on eccentric orbits

    Huerta, E A; McWilliams, Sean T; O'Shaughnessy, Richard; Yunes, Nicolas

    2014-01-01

Compact binaries that emit gravitational waves in the sensitivity band of ground-based detectors can have non-negligible eccentricities just prior to merger, depending on the formation scenario. We develop a purely analytic, frequency-domain model for gravitational waves emitted by compact binaries on orbits with small eccentricity, which reduces to the quasi-circular post-Newtonian approximant TaylorF2 at zero eccentricity and to the post-circular approximation of Yunes et al. (2009) at small eccentricity. Our model uses a spectral approximation to the (post-Newtonian) Kepler problem to model the orbital phase as a function of frequency, accounting for eccentricity effects up to O(e⁸) at each post-Newtonian order. Our approach accurately reproduces an alternative time-domain eccentric waveform model for eccentricities e ∈ [0, 0.4] and binaries with total mass less than 12 solar masses. As an application, we evaluate the signal amplitude that eccentric binaries produce in different networks of e...

  12. Accurate analysis of multicomponent fuel spray evaporation in turbulent flow

    Rauch, Bastian; Calabria, Raffaela; Chiariello, Fabio; Le Clercq, Patrick; Massoli, Patrizio; Rachner, Michael

    2012-04-01

The aim of this paper is an accurate analysis of the evaporation of single-component and binary-mixture fuel sprays in a hot, weakly turbulent pipe flow by means of experimental measurement and numerical simulation, giving deeper insight into the relationship between fuel composition and spray evaporation. The turbulence intensity in the test section is 10%, and the integral length scale is three orders of magnitude larger than the droplet size, while the turbulence microscale (Kolmogorov scale) is of the same order as the droplet diameter. The spray, produced by a calibrated droplet generator, was injected into an electrically preheated gas flow. n-Nonane, isopropanol, and their mixtures were used in the tests. The generalized scattering imaging technique was applied to simultaneously determine the size, velocity, and spatial location of the droplets carried by the turbulent flow in the quartz tube. Spray evaporation was computed using a Lagrangian particle solver coupled to a gas-phase solver. Computations of spray mean diameter and droplet size distributions at different locations along the pipe compare very favorably with the measurements. This combined research tool enabled further investigation of the parameters influencing the evaporation process, such as turbulence, droplet internal mixing, and liquid-phase thermophysical properties.

  13. Accurate measurement of RF exposure from emerging wireless communication systems

Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper, this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), nor for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known, in which case the measurement errors are shown to be systematic and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.
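The duty-cycle correction mentioned at the end can be illustrated numerically: for a periodic burst signal, the frame-averaged RMS power equals the on-interval power times the duty cycle. A small sketch, with synthetic noise-like bursts standing in for an OFDM emission:

```python
import numpy as np

rng = np.random.default_rng(1)
dc = 0.25                                      # 25% duty cycle (assumed)
n = 100_000
burst = (np.arange(n) % 1000) < (1000 * dc)    # periodic on/off frame
signal = burst * rng.normal(size=n)            # noise-like bursts

p_rms_total = np.mean(signal**2)               # true RMS power over the frame
p_on = np.mean(signal[burst]**2)               # power during the on interval
print(p_rms_total, p_on * dc)                  # duty-cycle correction recovers it
```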

  14. Accurate reading with sequential presentation of single letters

    Nicholas Seow Chiang Price

    2012-10-01

Rapid, accurate reading is possible when isolated single words from a sentence are sequentially presented at a fixed spatial location. We investigated whether reading of words and sentences is possible when single letters are rapidly presented at the fovea under user-controlled or automatically controlled rates. When tested with complete sentences, trained participants achieved reading rates of over 60 words/minute and accuracies of over 90% with the single-letter reading (SLR) method, and naive participants achieved average reading rates over 30 wpm with >90% accuracy. Accuracy declined as individual letters were presented for shorter periods of time, even when the overall reading rate was maintained by increasing the duration of spaces between words. Words that occur more frequently in the lexicon were identified more quickly and with higher accuracy, demonstrating that trained participants have lexical access. In combination, our data strongly suggest that comprehension is possible and that SLR is a practicable form of reading under conditions in which normal scanning of text is not possible, or in scenarios with limited spatial and temporal resolution, such as for patients with low vision or users of visual prostheses.

  15. Accurate stereochemistry for two related 22,26-epiminocholestene derivatives

Regioselective opening of ring E of solasodine under various conditions afforded (25R)-22,26-epiminocholesta-5,22(N)-diene-3β,16β-diyl diacetate (previously known as 3,16-diacetyl pseudosolasodine B), C31H47NO4, or (22S,25R)-16β-hydroxy-22,26-epiminocholesta-5-en-3β-yl acetate (a derivative of the naturally occurring alkaloid oblonginine), C29H47NO3. In both cases, the reactions are carried out with retention of chirality at the C16, C20 and C25 stereogenic centers, which are found to be S, S and R, respectively. Although pseudosolasodine was synthesized 50 years ago, these accurate assignments clarify some controversial points about the actual stereochemistry of these alkaloids. This is of particular importance in the case of oblonginine, since this compound is currently under consideration for the treatment of aphasia arising from apoplexy; the present study defines a diastereoisomerically pure compound for pharmacological studies.

  16. How Accurate Is Pierce's Theory of Traveling Wave Tube?

    Simon, D. H.; Chernin, D.; Wong, P.; Zhang, P.; Lau, Y. Y.; Dong, C. F.; Hoff, B.; Gilgenbach, R. M.

    2015-11-01

This paper provides a rigorous test of the accuracy of Pierce's classical theory of traveling wave tubes (TWTs). The exact dispersion relation for a dielectric TWT is derived, from which the spatial amplification rate, ki, is calculated. This ki is compared with that obtained from Pierce's widely used 3-wave theory and his more general 4-wave theory (which includes the reverse-propagating circuit mode). We have used various procedures to extract Pierce's gain parameter C and space-charge parameter Q from the exact dispersion relation. We find that, in general, the 3-wave theory is a poor representation of the exact dispersion relation if C > 0.05. However, the 4-wave theory gives excellent agreement even for C as high as 0.12 and over more than 20 percent bandwidth, if the quantity (k2 × C3) is evaluated accurately as a function of frequency, and if Q is expanded to first order in the wavenumber k, where Q is the difference between the exact dispersion relation and its 4-wave representation in which Q is set to zero. Similar tests will be performed on the disk-on-rod slow-wave TWT, for which the hot-tube dispersion relation including all space harmonics has been obtained. Supported by AFOSR FA9550-14-1-0309, FA9550-15-1-0097, AFRL FA9451-14-1-0374, and L-3 Communications.

  17. Accurate measurement of RF exposure from emerging wireless communication systems

    Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno

    2013-04-01

Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper, this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), nor for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known, in which case the measurement errors are shown to be systematic and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.

  18. Data fusion for accurate microscopic rough surface metrology.

    Chen, Yuhang

    2016-06-01

Data fusion for rough surface measurement and evaluation was analyzed on simulated datasets, one with higher density (HD) but lower accuracy and the other with lower density (LD) but higher accuracy. Experimental verification was then performed on laser scanning microscopy (LSM) and atomic force microscopy (AFM) characterizations of surface areal roughness artifacts. The results demonstrate that fusion based on Gaussian process models is effective and robust under different measurement biases and noise strengths. The amplitude, height-distribution, and spatial characteristics of the original sample structure can all be precisely recovered, with better metrological performance than any individual measurement. As for the influencing factors, HD noise has a relatively weaker effect than LD noise. Furthermore, to enable accurate fusion, the ratio of the LD sampling interval to the surface autocorrelation length should be smaller than a critical threshold. In general, data fusion is capable of enhancing the nanometrology of rough surfaces by combining efficient LSM measurement with a down-sampled fast AFM scan. Accuracy, resolution, spatial coverage and efficiency can all be significantly improved. It is thus expected to have potential applications in the development of hybrid microscopy and in surface metrology. PMID:27058888
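Gaussian-process fusion of a dense-but-noisy and a sparse-but-accurate profile can be sketched with per-point noise variances; the kernel choice, noise levels, and synthetic surface below are assumptions for illustration, not the paper's settings:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(2)
truth = lambda x: np.sin(3 * x) + 0.3 * np.sin(11 * x)

x_hd = np.linspace(0, 3, 200)                  # dense but noisy (e.g. LSM)
x_ld = np.linspace(0, 3, 15)                   # sparse but accurate (e.g. AFM)
y_hd = truth(x_hd) + rng.normal(0, 0.15, x_hd.size)
y_ld = truth(x_ld) + rng.normal(0, 0.02, x_ld.size)

X = np.r_[x_hd, x_ld][:, None]
y = np.r_[y_hd, y_ld]
alpha = np.r_[np.full(x_hd.size, 0.15**2),     # per-point noise variances
              np.full(x_ld.size, 0.02**2)]

gp = GaussianProcessRegressor(RBF(0.3) + WhiteKernel(1e-6), alpha=alpha)
gp.fit(X, y)
fused, std = gp.predict(np.linspace(0, 3, 300)[:, None], return_std=True)
```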

  19. Downhole temperature tool accurately measures well bore profile

This paper reports that an inexpensive temperature tool provides accurate temperature measurements during drilling operations for better design of cement jobs, workovers, well stimulation, and well bore hydraulics. Valid temperature data during specific well bore operations can improve initial job design, fluid testing, and slurry placement, ultimately enhancing well bore performance. This improvement applies to cement slurries, breaker activation for stimulation and profile control, and fluid rheological properties for all downhole operations. The temperature tool has been run standalone mounted inside drill pipe, on slick wire line and braided cable, and as a free-fall tool. It has also been run piggyback on both directional surveys (slick line and free-fall) and standard logging runs. This temperature measuring system has been used extensively in field well bores to depths of 20,000 ft. The temperature tool is completely reusable in the field, much like the standard directional survey tools used on many drilling rigs. The system includes a small, rugged, programmable temperature sensor, a standard body housing, various adapters for specific applications, and a personal computer (PC) interface.

  20. Accurate measurement of liquid transport through nanoscale conduits

    Alibakhshi, Mohammad Amin; Xie, Quan; Li, Yinxiao; Duan, Chuanhua

    2016-04-01

Nanoscale liquid transport governs the behaviour of a wide range of nanofluidic systems, yet remains poorly characterized and understood due to the enormous hydraulic resistance associated with nanoconfinement and the resulting minuscule flow rates in such systems. To overcome this problem, here we present a new measurement technique based on capillary flow and a novel hybrid nanochannel design, and use it to measure water transport through single two-dimensional hydrophilic silica nanochannels with heights down to 7 nm. Our results show that silica nanochannels exhibit increased mass flow resistance compared to the classical hydrodynamics prediction. This difference increases with decreasing channel height and reaches 45% in the case of 7 nm nanochannels. The resistance increase is attributed to the formation of a 7-angstrom-thick stagnant hydration layer on the hydrophilic surfaces. By avoiding the use of any pressure or flow sensors and any theoretical estimation, the hybrid nanochannel scheme enables facile and precise flow measurement through single nanochannels, nanotubes, or nanoporous media, and opens the prospect of accurate characterization of both hydrophilic and hydrophobic nanofluidic systems.
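For reference, the classical hydrodynamics prediction for a wide slit channel is Q = w·h³·ΔP/(12·μ·L), i.e. a hydraulic resistance R = 12·μ·L/(w·h³). The sketch below shows how a stagnant wall layer would raise the resistance of a 7 nm channel under a naive effective-height picture; the channel dimensions are illustrative, and the real mechanism reported above is subtler than this simple height reduction:

```python
def slit_resistance(h, w, L, mu=1.0e-3):
    """Classical hydraulic resistance of a wide rectangular (slit) channel,
    R = 12*mu*L / (w*h**3), valid for w >> h; mu is viscosity in Pa*s."""
    return 12 * mu * L / (w * h**3)

h, w, L = 7e-9, 5e-6, 100e-6          # 7 nm high channel (illustrative numbers)
t_layer = 0.7e-9                      # ~7-angstrom stagnant hydration layer
r_classic = slit_resistance(h, w, L)
r_reduced = slit_resistance(h - 2 * t_layer, w, L)   # layer on both walls
print((r_reduced / r_classic - 1) * 100, "% resistance increase")
```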

  1. Fast and Accurate Brain Image Retrieval Using Gabor Wavelet Algorithm

    J. Esther

    2014-01-01

Content-based image retrieval (CBIR) in medical image databases is used to assist physicians in diagnosing disease and to aid diagnosis by identifying similar past cases. To retrieve similar images quickly, accurately and effectively from a large dataset, a pre-processing step extracts the brain by removing unwanted non-brain areas such as the scalp, skull, neck, eyes and ears from MRI head scan images. Removing these non-brain regions makes retrieval of similar images much more effective. This paper proposes a brain extraction technique using fuzzy morphological operators. For the experiments, 1200 MRI images were taken from a scan centre and further brain images were collected from the web; these were also processed with the popular brain extraction algorithms Graph-Cut (GCUT) and Expectation Maximization (EMA). The experimental results show that the proposed fuzzy morphological operator algorithm (FMOA) gives the most promising results. The FMOA output was then used to retrieve brain images from the large collection of databases using the Gabor wavelet transform.
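A Gabor-filter-bank texture signature of the kind used for such retrieval can be sketched with scikit-image; the frequencies and orientation count are assumptions, not the paper's settings:

```python
import numpy as np
from skimage.filters import gabor

def gabor_features(image, freqs=(0.1, 0.2, 0.3), n_angles=4):
    """Mean/variance of Gabor filter response magnitudes over a bank of
    frequencies and orientations -- a common texture signature for retrieval."""
    feats = []
    for f in freqs:
        for k in range(n_angles):
            real, imag = gabor(image, frequency=f, theta=k * np.pi / n_angles)
            mag = np.hypot(real, imag)
            feats += [mag.mean(), mag.var()]
    return np.asarray(feats)

# Retrieval then ranks database images by distance between signatures, e.g.
# np.linalg.norm(gabor_features(query) - gabor_features(candidate)).
```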

  2. Fast, accurate, robust and Open Source Brain Extraction Tool (OSBET)

    Namias, R.; Donnelly Kehoe, P.; D'Amato, J. P.; Nagel, J.

    2015-12-01

The removal of non-brain regions in neuroimaging is a critical preprocessing task. Skull-stripping depends on several factors, including the noise level in the image, the anatomy of the subject being scanned and the acquisition sequence. For these and other reasons, an ideal brain extraction method should be fast, accurate, user friendly, open-source and knowledge based (allowing interaction with the algorithm when the expected outcome is not obtained), producing stable results and making it possible to automate the process for large datasets. There is already a large number of validated tools for this task, but none of them meets all the desired characteristics. In this paper we introduce an open-source brain extraction tool (OSBET), composed of four steps using simple, well-known operations: optimal thresholding, binary morphology, labeling and geometrical analysis, which aims to assemble all the desired features. We present an experiment comparing OSBET with six other state-of-the-art techniques on a publicly available dataset consisting of 40 T1-weighted 3D scans and their corresponding manually segmented images. OSBET achieved both a short runtime and excellent accuracy, obtaining the best Dice coefficient. Further validation should be performed, for instance in unhealthy populations, to generalize its usage for clinical purposes.
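The four-step pipeline (optimal thresholding, binary morphology, labeling, geometrical analysis) maps naturally onto standard scipy/scikit-image operations. A schematic single-slice version, not OSBET itself:

```python
import numpy as np
from scipy import ndimage
from skimage.filters import threshold_otsu

def simple_brain_mask(slice2d):
    """Schematic skull-strip on one axial slice: Otsu threshold, morphological
    opening, keep the largest connected component, then fill holes.
    Illustrates an OSBET-style pipeline, not the tool itself."""
    binary = slice2d > threshold_otsu(slice2d)             # optimal thresholding
    binary = ndimage.binary_opening(binary, iterations=2)  # binary morphology
    labels, n = ndimage.label(binary)                      # labeling
    if n == 0:
        return binary
    sizes = ndimage.sum(binary, labels, range(1, n + 1))
    mask = labels == (1 + np.argmax(sizes))                # geometrical analysis
    return ndimage.binary_fill_holes(mask)
```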

  3. Accurate methodology for channel bow impact on CPR

An overview is given of existing CPR design criteria and the methods used in BWR reload analysis to evaluate the impact of channel bow on CPR margins. Potential weaknesses in today's methodologies are discussed. Westinghouse, in collaboration with KKL and Axpo - operator and owner of the Leibstadt NPP - has developed an enhanced CPR methodology based on a new criterion to protect against dryout during normal operation and with a more rigorous treatment of channel bow. The new steady-state criterion is expressed as an upper limit of 0.01 for the dryout failure probability per year. This is considered a meaningful and appropriate criterion that can be related directly to the probabilistic criteria set up for the analyses of Anticipated Operational Occurrences (AOOs) and accidents. In the Monte Carlo approach, statistical modeling of channel bow and an accurate evaluation of CPR response functions allow the associated CPR penalties to be included directly in the plant SLMCPR and OLMCPR in a best-estimate manner. In this way, the treatment of channel bow is equivalent to that of all other uncertainties affecting CPR. The enhanced CPR methodology has been implemented in the Westinghouse Monte Carlo code, McSLAP. The methodology improves the quality of dryout safety assessments by supplying more valuable information and better control of conservatisms in establishing operational limits for CPR. The methodology is demonstrated with application examples from its introduction at KKL. (orig.)

  4. AN ACCURATE FLUX DENSITY SCALE FROM 1 TO 50 GHz

    Perley, R. A.; Butler, B. J., E-mail: RPerley@nrao.edu, E-mail: BButler@nrao.edu [National Radio Astronomy Observatory, P.O. Box O, Socorro, NM 87801 (United States)

    2013-02-15

We develop an absolute flux density scale for centimeter-wavelength astronomy by combining accurate flux density ratios determined by the Very Large Array between the planet Mars and a set of potential calibrators with the Rudy thermophysical emission model of Mars, adjusted to the absolute scale established by the Wilkinson Microwave Anisotropy Probe. The radio sources 3C123, 3C196, 3C286, and 3C295 are found to be varying at a level of less than approximately 5% per century at all frequencies between 1 and 50 GHz, and hence are suitable as flux density standards. We present polynomial expressions for their spectral flux densities, valid from 1 to 50 GHz, with absolute accuracy estimated at 1%-3% depending on frequency. Of the four sources, 3C286 is the most compact and has the flattest spectral index, making it the most suitable object on which to establish the spectral flux density scale. The sources 3C48, 3C138, 3C147, NGC 7027, NGC 6542, and MWC 349 show significant variability on various timescales. Polynomial coefficients for the spectral flux density are developed for 3C48, 3C138, and 3C147 for each of the 17 observation dates, spanning 1983-2012. The planets Venus, Uranus, and Neptune are included in our observations, and we derive their brightness temperatures over the same frequency range.
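Calibrator scales of this kind are conventionally expressed as polynomials in log-frequency, log10(S/Jy) = a0 + a1·log10(f/GHz) + a2·[log10(f/GHz)]² + …. The helper below evaluates that generic form; the coefficients shown are placeholders, not the published values for any source:

```python
import numpy as np

def flux_density(freq_ghz, coeffs):
    """Evaluate a log-polynomial spectral flux density:
    log10(S/Jy) = a0 + a1*log10(f) + a2*log10(f)**2 + ...
    (the usual convention for cm-wavelength calibrator scales)."""
    x = np.log10(freq_ghz)
    return 10 ** sum(a * x**i for i, a in enumerate(coeffs))

# Placeholder coefficients (NOT the published values for, e.g., 3C286).
print(flux_density(5.0, [1.25, -0.46, -0.17]))
```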

  5. Accurate calculations of bound rovibrational states for argon trimer

    Brandon, Drew; Poirier, Bill [Department of Chemistry and Biochemistry, and Department of Physics, Texas Tech University, Box 41061, Lubbock, Texas 79409-1061 (United States)

    2014-07-21

This work presents a comprehensive quantum dynamics calculation of the bound rovibrational eigenstates of argon trimer (Ar3), using the ScalIT suite of parallel codes. The Ar3 rovibrational energy levels are computed to a very high level of accuracy (10−3 cm−1 or better), and up to the highest rotational and vibrational excitations for which bound states exist. For many of these rovibrational states, wavefunctions are also computed. Rare gas clusters such as Ar3 are interesting because the interatomic interactions manifest through long-range van der Waals forces, rather than through covalent chemical bonding. As a consequence, they exhibit strong Coriolis coupling between the rotational and vibrational degrees of freedom, as well as highly delocalized states, all of which renders accurate quantum dynamical calculation difficult. Moreover, with its (comparatively) deep potential well and heavy masses, Ar3 is an especially challenging rare gas trimer case. There are a great many rovibrational eigenstates to compute, and a very high density of states. Consequently, very few previous rovibrational state calculations for Ar3 may be found in the current literature, and only for the lowest-lying rotational excitations.

  6. Accurate calculations of bound rovibrational states for argon trimer

This work presents a comprehensive quantum dynamics calculation of the bound rovibrational eigenstates of argon trimer (Ar3), using the ScalIT suite of parallel codes. The Ar3 rovibrational energy levels are computed to a very high level of accuracy (10−3 cm−1 or better), and up to the highest rotational and vibrational excitations for which bound states exist. For many of these rovibrational states, wavefunctions are also computed. Rare gas clusters such as Ar3 are interesting because the interatomic interactions manifest through long-range van der Waals forces, rather than through covalent chemical bonding. As a consequence, they exhibit strong Coriolis coupling between the rotational and vibrational degrees of freedom, as well as highly delocalized states, all of which renders accurate quantum dynamical calculation difficult. Moreover, with its (comparatively) deep potential well and heavy masses, Ar3 is an especially challenging rare gas trimer case. There are a great many rovibrational eigenstates to compute, and a very high density of states. Consequently, very few previous rovibrational state calculations for Ar3 may be found in the current literature, and only for the lowest-lying rotational excitations.

  7. Accurate measurement of oxygen consumption in children undergoing cardiac catheterization.

    Li, Jia

    2013-01-01

Oxygen consumption (VO2) is an essential quantity in hemodynamic assessment using the direct Fick principle in children undergoing cardiac catheterization, and its accurate measurement is vital. Any error in the measurement of VO2 translates directly into an equivalent percentage under- or overestimation of blood flows and vascular resistances. It remains common practice to estimate VO2 values from published predictive equations. Among these, the LaFarge equation is the most commonly used and gives the closest estimation, with the least bias and narrowest limits of agreement. However, considerable errors are introduced by the LaFarge equation, particularly in children younger than 3 years of age. Respiratory mass spectrometry remains the state-of-the-art method, allowing highly sensitive, rapid and simultaneous measurement of multiple gas fractions. The AMIS 2000 quadrupole respiratory mass spectrometer system has been adapted to measure VO2 in children under mechanical ventilation with pediatric ventilators during cardiac catheterization. The small sampling rate, fast response time and long tubes make the equipment a unique and powerful tool for bedside continuous measurement of VO2 in cardiac catheterization, for both clinical and research purposes. PMID:22488802

  8. The Global Geodetic Infrastructure for Accurate Monitoring of Earth Systems

    Weston, Neil; Blackwell, Juliana; Wang, Yan; Willis, Zdenka

    2014-05-01

    The National Geodetic Survey (NGS) and the Integrated Ocean Observing System (IOOS), two Program Offices within the National Ocean Service, NOAA, routinely collect, analyze and disseminate observations and products from several of the 17 critical systems identified by the U.S. Group on Earth Observations. Gravity, sea level monitoring, coastal zone and ecosystem management, geo-hazards and deformation monitoring and ocean surface vector winds are the primary Earth systems that have active research and operational programs in NGS and IOOS. These Earth systems collect terrestrial data but most rely heavily on satellite-based sensors for analyzing impacts and monitoring global change. One fundamental component necessary for monitoring via satellites is having a stable, global geodetic infrastructure where an accurate reference frame is essential for consistent data collection and geo-referencing. This contribution will focus primarily on system monitoring, coastal zone management and global reference frames and how the scientific contributions from NGS and IOOS continue to advance our understanding of the Earth and the Global Geodetic Observing System.

  9. Symphony: A Framework for Accurate and Holistic WSN Simulation

    Laurynas Riliskis

    2015-02-01

Research on wireless sensor networks has progressed rapidly over the last decade, and these technologies have been widely adopted for both industrial and domestic uses. Several operating systems have been developed, along with a multitude of network protocols for all layers of the communication stack. Industrial Wireless Sensor Network (WSN) systems must satisfy strict criteria and are typically more complex and larger in scale than domestic systems. Together with the non-deterministic behavior of network hardware in real settings, this greatly complicates the debugging and testing of WSN functionality. To facilitate the testing, validation, and debugging of large-scale WSN systems, we have developed a simulation framework that accurately reproduces the processes that occur inside real equipment, including both hardware- and software-induced delays. The core of the framework consists of a virtualized operating system and an emulated hardware platform that is integrated with the general purpose network simulator ns-3. Our framework enables the user to adjust the real code base as would be done in real deployments and also to test the boundary effects of different hardware components on the performance of distributed applications and protocols. Additionally we have developed a clock emulator with several different skew models and a component that handles sensory data feeds. The new framework should substantially shorten WSN application development cycles.

  10. Symphony: a framework for accurate and holistic WSN simulation.

    Riliskis, Laurynas; Osipov, Evgeny

    2015-01-01

    Research on wireless sensor networks has progressed rapidly over the last decade, and these technologies have been widely adopted for both industrial and domestic uses. Several operating systems have been developed, along with a multitude of network protocols for all layers of the communication stack. Industrial Wireless Sensor Network (WSN) systems must satisfy strict criteria and are typically more complex and larger in scale than domestic systems. Together with the non-deterministic behavior of network hardware in real settings, this greatly complicates the debugging and testing of WSN functionality. To facilitate the testing, validation, and debugging of large-scale WSN systems, we have developed a simulation framework that accurately reproduces the processes that occur inside real equipment, including both hardware- and software-induced delays. The core of the framework consists of a virtualized operating system and an emulated hardware platform that is integrated with the general purpose network simulator ns-3. Our framework enables the user to adjust the real code base as would be done in real deployments and also to test the boundary effects of different hardware components on the performance of distributed applications and protocols. Additionally we have developed a clock emulator with several different skew models and a component that handles sensory data feeds. The new framework should substantially shorten WSN application development cycles. PMID:25723144

  11. An accurately fast algorithm of calculating reflection/transmission coefficients

    Castagna, J. P.

    2008-01-01

For the boundary between transversely isotropic media with a vertical axis of symmetry (VTI media), the interface between a liquid and a VTI medium, and the free surface of an elastic half-space of a VTI medium, a fast and accurate algorithm is presented for calculating reflection/transmission (R/T) coefficients. In particular, the case of post-critical-angle incidence is considered. Although the numerical calculations were performed only for models of VTI media, the results can be extended to models of transversely isotropic media with a horizontal axis of rotational symmetry (HTI media). Compared to previous work, this algorithm can be used to calculate R/T coefficients at boundaries not only between ellipsoidally anisotropic media but also between generally anisotropic media, and it is faster and more accurate. Using the anisotropic parameters of rocks given in the published literature, we calculated R/T coefficients with this algorithm and analyzed the effect of rock anisotropy on them. Snell's law and the energy balance principle were used to verify the calculated results.

  12. Accurate transition rates for intercombination lines of singly ionized nitrogen

The transition energies and rates for the 2s²2p² ³P₁,₂ – 2s2p³ ⁵S°₂ and 2s²2p3s – 2s²2p3p intercombination transitions have been calculated using term-dependent nonorthogonal orbitals in the multiconfiguration Hartree-Fock approach. Several sets of spectroscopic and correlation nonorthogonal functions have been chosen to describe adequately the term dependence of wave functions and various correlation corrections. Special attention has been focused on the accurate representation of strong interactions between the 2s2p³ ¹,³P°₁ and 2s²2p3s ¹,³P°₁ levels. The relativistic corrections are included through the one-body mass correction, Darwin, and spin-orbit operators and the two-body spin-other-orbit and spin-spin operators in the Breit-Pauli Hamiltonian. The importance of core-valence correlation effects has been examined. The accuracy of the present transition rates is evaluated by the agreement between the length and velocity formulations combined with the agreement between the calculated and measured transition energies. The present results for transition probabilities, branching fractions, and lifetimes have been compared with previous calculations and experiments.

  13. An Accurate Flux Density Scale from 1 to 50 GHz

    Perley, Rick A

    2012-01-01

    We develop an absolute flux density scale for cm-wavelength astronomy by combining accurate flux density ratios determined by the VLA between the planet Mars and a set of potential calibrators with the Rudy thermophysical emission model of Mars, adjusted to the absolute scale established by WMAP. The radio sources 3C123, 3C196, 3C286 and 3C295 are found to be varying at a level of less than ~5% per century at all frequencies between 1 and 50 GHz, and hence are suitable as flux density standards. We present polynomial expressions for their spectral flux densities, valid from 1 to 50 GHz, with absolute accuracy estimated at 1-3% depending on frequency. Of the four sources, 3C286 is the most compact and has the flattest spectral index, making it the most suitable object on which to establish the spectral flux density scale. The sources 3C48, 3C138, 3C147, NGC7027, NGC6542, and MWC349 show significant variability on various timescales. Polynomial coefficients for the spectral flux density are developed for 3C48, ...

  14. Accurate Detection of Rifampicin-Resistant Mycobacterium Tuberculosis Strains.

    Song, Keum-Soo; Nimse, Satish Balasaheb; Kim, Hee Jin; Yang, Jeongseong; Kim, Taisun

    2016-01-01

In 2013 alone, the death rate among the 9.0 million people infected with Mycobacterium tuberculosis (TB) worldwide was around 14%, which is unacceptably high. Empiric treatment of patients infected with TB or a multidrug-resistant M. tuberculosis (MDR-TB) strain can also result in the spread of MDR-TB. Diagnostic tools that are rapid, reliable, and based on simple experimental protocols can significantly help decrease the prevalence of MDR-TB strains. We report the evaluation of 9G-technology-based 9G DNAChips that allow accurate detection and discrimination of TB and MDR-TB-RIF. One hundred and thirteen known cultured samples were used to evaluate the ability of the 9G DNAChip to detect and discriminate TB and MDR-TB-RIF strains. Hybridization of immobilized probes with the PCR products of TB and MDR-TB-RIF strains allows their detection and discrimination. The accuracy of the 9G DNAChip was determined by comparing its results with sequencing analysis and drug susceptibility testing. Sequencing analysis showed 100% agreement with the results of the 9G DNAChip. The 9G DNAChip showed very high sensitivity (95.4%) and specificity (100%). PMID:26999135
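The reported sensitivity and specificity follow directly from confusion-matrix counts. A minimal sketch; the per-class tallies below are illustrative numbers consistent with 113 samples and the quoted figures, not the study's actual breakdown:

```python
def sensitivity_specificity(tp, fn, tn, fp):
    """Sensitivity = TP/(TP+FN); specificity = TN/(TN+FP)."""
    return tp / (tp + fn), tn / (tn + fp)

# Illustrative counts only (62+3+48+0 = 113 samples).
sens, spec = sensitivity_specificity(tp=62, fn=3, tn=48, fp=0)
print(f"sensitivity {sens:.1%}, specificity {spec:.1%}")
```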

  15. A quantitative fitness analysis workflow.

    Banks, A P; Lawless, C; Lydall, D A

    2012-01-01

    Quantitative Fitness Analysis (QFA) is an experimental and computational workflow for comparing fitnesses of microbial cultures grown in parallel(1,2,3,4). QFA can be applied to focused observations of single cultures but is most useful for genome-wide genetic interaction or drug screens investigating up to thousands of independent cultures. The central experimental method is the inoculation of independent, dilute liquid microbial cultures onto solid agar plates which are incubated and regularly photographed. Photographs from each time-point are analyzed, producing quantitative cell density estimates, which are used to construct growth curves, allowing quantitative fitness measures to be derived. Culture fitnesses can be compared to quantify and rank genetic interaction strengths or drug sensitivities. The effect on culture fitness of any treatments added into substrate agar (e.g. small molecules, antibiotics or nutrients) or applied to plates externally (e.g. UV irradiation, temperature) can be quantified by QFA. The QFA workflow produces growth rate estimates analogous to those obtained by spectrophotometric measurement of parallel liquid cultures in 96-well or 200-well plate readers. Importantly, QFA has significantly higher throughput compared with such methods. QFA cultures grow on a solid agar surface and are therefore well aerated during growth without the need for stirring or shaking. QFA throughput is not as high as that of some Synthetic Genetic Array (SGA) screening methods(5,6). However, since QFA cultures are heavily diluted before being inoculated onto agar, QFA can capture more complete growth curves, including exponential and saturation phases(3). For example, growth curve observations allow culture doubling times to be estimated directly with high precision, as discussed previously(1). Here we present a specific QFA protocol applied to thousands of S. cerevisiae cultures which are automatically handled by robots during inoculation, incubation and
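Growth-curve fitting of the kind QFA performs can be sketched with a logistic model, from which a doubling time follows as ln(2)/r. The model form, time points, and noise below are assumptions for illustration, not the QFA package itself:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(t, K, r, g0):
    """Logistic growth: g(t) = K*g0*exp(r*t) / (K + g0*(exp(r*t) - 1))."""
    return K * g0 * np.exp(r * t) / (K + g0 * (np.exp(r * t) - 1))

t = np.linspace(0, 48, 20)                       # hours between photographs
rng = np.random.default_rng(3)
obs = logistic(t, 1.0, 0.35, 0.01) * rng.normal(1, 0.03, t.size)

(K, r, g0), _ = curve_fit(logistic, t, obs, p0=(1, 0.2, 0.01))
print("doubling time (h):", np.log(2) / r)       # from the exponential phase
```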

  16. Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement

    1999-06-01

A team of radio astronomers has used the National Science Foundation's Very Long Baseline Array (VLBA) to make the most accurate measurement ever made of the distance to a faraway galaxy. Their direct measurement calls into question the precision of distance determinations made by other techniques, including those announced last week by a team using the Hubble Space Telescope. The radio astronomers measured a distance of 23.5 million light-years to a galaxy called NGC 4258 in Ursa Major. "Ours is a direct measurement, using geometry, and is independent of all other methods of determining cosmic distances," said Jim Herrnstein, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. The team says their measurement is accurate to within less than a million light-years, or four percent. The galaxy is also known as Messier 106 and is visible with amateur telescopes. Herrnstein, along with James Moran and Lincoln Greenhill of the Harvard- Smithsonian Center for Astrophysics; Phillip Diamond, of the Merlin radio telescope facility at Jodrell Bank and the University of Manchester in England; Makoto Inoue and Naomasa Nakai of Japan's Nobeyama Radio Observatory; Makoto Miyoshi of Japan's National Astronomical Observatory; Christian Henkel of Germany's Max Planck Institute for Radio Astronomy; and Adam Riess of the University of California at Berkeley, announced their findings at the American Astronomical Society's meeting in Chicago. "This is an incredible achievement to measure the distance to another galaxy with this precision," said Miller Goss, NRAO's Director of VLA/VLBA Operations. "This is the first time such a great distance has been measured this accurately. It took painstaking work on the part of the observing team, and it took a radio telescope the size of the Earth -- the VLBA -- to make it possible," Goss said. "Astronomers have sought to determine the Hubble Constant, the rate of expansion of the universe, for decades. This will in turn lead to an

  17. SNPdetector: A Software Tool for Sensitive and Accurate SNP Detection.

    2005-10-01

Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has very low false positive and false negative rates. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozygosity in the CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebrafish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. SNPdetector runs on the Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).

  18. TOWARDS MORE ACCURATE CLUSTERING METHOD BY USING DYNAMIC TIME WARPING

    Khadoudja Ghanem

    2013-03-01

An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time grows as the size and complexity of the training dataset increase. For this reason, it is important to have efficient computational methods and algorithms that can be applied to large datasets, such that it is still possible to complete the machine learning tasks in reasonable time. In this context, we present in this paper a simple and more accurate process to speed up ML methods. An unsupervised clustering algorithm is combined with the Expectation-Maximization (EM) algorithm to develop an efficient Hidden Markov Model (HMM) training procedure. The idea of the proposed process consists of two steps. In the first step, training instances with similar inputs are clustered, and a weight factor representing the frequency of these instances is assigned to each representative cluster; the Dynamic Time Warping technique is used as the dissimilarity function for clustering similar examples (a textbook implementation is sketched below). In the second step, all formulas in the classical HMM training algorithm (EM) associated with the number of training instances are modified to include the weight factor in the appropriate terms. This process significantly accelerates HMM training while maintaining the same initial, transition and emission probability matrices as those obtained with the classical HMM training algorithm. Accordingly, classification accuracy is preserved. Depending on the size of the training set, speedups of up to 2200 times are possible when the size is about 100,000 instances. The proposed approach is not limited to training HMMs, but can be employed for a large variety of ML methods.
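The Dynamic Time Warping dissimilarity at the heart of the clustering step is the classic dynamic program below; this is a standard textbook implementation, not the authors' code:

```python
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a)*len(b)) dynamic-programming DTW distance
    between two 1-D sequences."""
    n, m = len(a), len(b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]

# Time-shifted versions of a pattern stay close under DTW:
x = np.sin(np.linspace(0, 6, 50))
y = np.sin(np.linspace(0.5, 6.5, 60))
print(dtw_distance(x, y))
```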

  19. Towards More Accurate Clustering Method by Using Dynamic Time Warping

    Khadoudja Ghanem

    2013-04-01

An intrinsic problem of classifiers based on machine learning (ML) methods is that their learning time grows as the size and complexity of the training dataset increase. For this reason, it is important to have efficient computational methods and algorithms that can be applied to large datasets, such that it is still possible to complete the machine learning tasks in reasonable time. In this context, we present in this paper a simple and more accurate process to speed up ML methods. An unsupervised clustering algorithm is combined with the Expectation-Maximization (EM) algorithm to develop an efficient Hidden Markov Model (HMM) training procedure. The idea of the proposed process consists of two steps. In the first step, training instances with similar inputs are clustered, and a weight factor representing the frequency of these instances is assigned to each representative cluster; the Dynamic Time Warping technique is used as the dissimilarity function for clustering similar examples. In the second step, all formulas in the classical HMM training algorithm (EM) associated with the number of training instances are modified to include the weight factor in the appropriate terms. This process significantly accelerates HMM training while maintaining the same initial, transition and emission probability matrices as those obtained with the classical HMM training algorithm. Accordingly, classification accuracy is preserved. Depending on the size of the training set, speedups of up to 2200 times are possible when the size is about 100,000 instances. The proposed approach is not limited to training HMMs, but can be employed for a large variety of ML methods.

  20. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.
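The regressions of bioavailability on bioaccessibility can be sketched with an ordinary least-squares fit; the paired percentages below are hypothetical, not the study's data:

```python
import numpy as np
from scipy.stats import linregress

# Hypothetical paired measurements (percent); not the study's data.
bioavailability  = np.array([33, 41, 48, 55, 63], float)   # in vivo, quail
bioaccessibility = np.array([30, 44, 50, 58, 66], float)   # in vitro test

fit = linregress(bioaccessibility, bioavailability)
print(f"slope {fit.slope:.2f}, r^2 {fit.rvalue**2:.2f}")
```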

  1. CT-Analyst: fast and accurate CBR emergency assessment

    Boris, Jay; Fulton, Jack E., Jr.; Obenschain, Keith; Patnaik, Gopal; Young, Theodore, Jr.

    2004-08-01

An urban-oriented emergency assessment system for airborne Chemical, Biological, and Radiological (CBR) threats, called CT-Analyst and based on new principles, gives greater accuracy and much greater speed than current alternatives. This paper explains how this has been done. The increased accuracy derives from detailed, three-dimensional CFD computations including solar heating, buoyancy, complete building geometry specification, trees, wind fluctuations, and particle and droplet distributions (as appropriate). This paper shows how a finite number of such computations for a given area can be extended to all wind directions and speeds, and all likely sources and source locations, using a new data structure called Dispersion Nomographs. Finally, we demonstrate a portable, entirely graphical software tool called CT-Analyst that embodies this entirely new, high-resolution technology and runs effectively on small personal computers. Real-time users don't have to wait for results, because accurate answers are available with near-zero latency (10-20 scenarios per second). Entire sequences of cases (e.g. a continuously changing source location or wind direction) can be computed and displayed as continuous-action movies. Since the underlying database has been precomputed, the door is wide open for important new real-time, zero-latency functions such as sensor data fusion, backtracking to an unknown source location, and even evacuation route planning. Extensions of the technology to sensor location optimization, buildings, tunnels, and integration with other advanced technologies, e.g. micrometeorology or detailed wind field measurements, are discussed briefly here.

  2. Copeptin does not accurately predict disease severity in imported malaria

    van Wolfswinkel Marlies E

    2012-01-01

    Background: Copeptin has recently been identified as a stable surrogate marker for the unstable hormone arginine vasopressin (AVP). Copeptin has been shown to correlate with disease severity in leptospirosis and bacterial sepsis. Hyponatraemia is common in severe imported malaria, and dysregulation of AVP release has been hypothesized as an underlying pathophysiological mechanism. The aim of the present study was to evaluate the performance of copeptin as a predictor of disease severity in imported malaria. Methods: Copeptin was measured in stored serum samples of 204 patients with imported malaria who were admitted to our Institute for Tropical Diseases in Rotterdam in the period 1999-2010. The occurrence of WHO-defined severe malaria was the primary end-point. The diagnostic performance of copeptin was compared to that of the previously evaluated biomarkers C-reactive protein, procalcitonin, lactate and sodium. Results: Of the 204 patients (141 Plasmodium falciparum, 63 non-falciparum infection), 25 had severe malaria. The area under the ROC curve of copeptin for severe disease (0.66 [95% confidence interval 0.59-0.72]) was comparable to that of lactate, sodium and procalcitonin. C-reactive protein (0.84 [95% CI 0.79-0.89]) had a significantly better performance as a biomarker for severe malaria than the other biomarkers. Conclusions: C-reactive protein, but not copeptin, was found to be an accurate predictor of disease severity in imported malaria. The applicability of copeptin as a marker for severe malaria in clinical practice is limited to the exclusion of severe malaria.
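
    A minimal sketch of the biomarker comparison, computing the area under the ROC curve for each marker against the binary severe-malaria outcome. The data are synthetic and tuned only to echo the reported ordering (CRP above copeptin).

```python
# Illustrative AUC comparison on synthetic data; not the study's dataset.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n, prevalence = 204, 25 / 204
severe = rng.random(n) < prevalence               # WHO-defined severe disease

# Fake marker values: CRP separates the groups more than copeptin does.
crp = rng.normal(50, 20, n) + 60 * severe
copeptin = rng.normal(10, 5, n) + 4 * severe

for name, marker in [("CRP", crp), ("copeptin", copeptin)]:
    print(f"{name}: AUC = {roc_auc_score(severe, marker):.2f}")
```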

  3. Accurate calculation of (31)P NMR chemical shifts in polyoxometalates.

    Pascual-Borràs, Magda; López, Xavier; Poblet, Josep M

    2015-04-14

    We search for the best density functional theory strategy for the determination of (31)P nuclear magnetic resonance (NMR) chemical shifts, δ((31)P), in polyoxometalates. Among the variables governing the quality of the quantum modelling, we tackle herein the influence of the functional and the basis set. The spin-orbit and solvent effects were routinely included. To do so, we analysed the family of structures α-[P2W18-xMxO62](n-) with M = Mo(VI), V(V) or Nb(V); [P2W17O62(M'R)](n-) with M' = Sn(IV), Ge(IV) and Ru(II); and [PW12-xMxO40](n-) with M = Pd(IV), Nb(V) and Ti(IV). The main results suggest that, to date, the best procedure for the accurate calculation of δ((31)P) in polyoxometalates is the combination of TZP/PBE//TZ2P/OPBE (for the NMR//optimization steps). The hybrid functionals (PBE0, B3LYP) tested herein for the NMR step, besides being more CPU-consuming, do not outperform pure GGA functionals. Although previous studies on (183)W NMR suggested that very large basis sets like QZ4P were needed for geometry optimization, the present results indicate that TZ2P suffices if the functional is optimal. Moreover, scaling corrections were applied to the results, providing low mean absolute errors below 1 ppm for δ((31)P), which is a step forward towards confirming or predicting chemical shifts in polyoxometalates. Finally, via a simplified molecular model, we establish how the small variations in δ((31)P) arise from energy changes in the occupied and virtual orbitals of the PO4 group. PMID:25738630
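
    The scaling correction mentioned at the end can be sketched as a linear fit of computed against experimental shifts, followed by a mean-absolute-error check. The shift values below are invented examples, not the paper's data.

```python
# Sketch of a linear scaling correction for computed chemical shifts.
import numpy as np

delta_calc = np.array([-8.1, -10.4, -12.9, -14.2, -11.0])  # DFT delta(31P), ppm
delta_exp = np.array([-7.9, -10.1, -12.5, -13.8, -10.7])   # experimental, ppm

# Fit delta_exp = slope * delta_calc + intercept, then rescale the DFT values.
slope, intercept = np.polyfit(delta_calc, delta_exp, 1)
delta_scaled = slope * delta_calc + intercept

mae = np.mean(np.abs(delta_scaled - delta_exp))
print(f"MAE after scaling: {mae:.2f} ppm")   # target: below ~1 ppm
```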

  4. Accurate mobile malware detection and classification in the cloud.

    Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi

    2015-01-01

    As the dominant smartphone operating system, Android has consequently attracted the attention of malware authors and researchers alike. The number of types of Android malware is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, by taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework, CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: an anomaly detection engine performing abnormal-app detection through dynamic analysis, and a signature detection engine performing known-malware detection and classification with a combination of static and dynamic analysis. We evaluate our system using 5560 malware samples and 6000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false-negative rate (1.16%) and an acceptable false-positive rate (1.30%); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average positive rate of 98.94%. Considering the intensive computing resources required by static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the cloud. App store markets and ordinary users can access our detection system for malware detection through a cloud service. PMID:26543718
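
    A schematic of the two-engine decision flow described above: the signature engine answers first for known samples, and the anomaly engine handles unknowns. Class names, features, and thresholds are illustrative, not CuckooDroid's actual API.

```python
# Hypothetical sketch of a hybrid signature + anomaly detection pipeline.
from dataclasses import dataclass

@dataclass
class AppProfile:
    sha256: str
    dynamic_features: dict   # e.g. API-call counts from sandbox execution

KNOWN_SIGNATURES = {"ab12...": "Trojan.FakeInst"}   # illustrative signature DB

def anomaly_score(features: dict) -> float:
    """Stand-in for a trained anomaly model over dynamic-analysis features."""
    return min(1.0, features.get("sms_send", 0) * 0.2
                    + features.get("priv_escalation", 0) * 0.5)

def classify(app: AppProfile, threshold: float = 0.6) -> str:
    # 1) Signature engine: low false positives on known malware.
    if app.sha256 in KNOWN_SIGNATURES:
        return f"malware: {KNOWN_SIGNATURES[app.sha256]}"
    # 2) Anomaly engine: catches zero-day samples that signatures miss.
    if anomaly_score(app.dynamic_features) >= threshold:
        return "suspicious: anomalous runtime behaviour"
    return "benign"

print(classify(AppProfile("ffee...", {"sms_send": 4, "priv_escalation": 1})))
```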

  5. Absolute Quantitative MALDI Imaging Mass Spectrometry: A Case of Rifampicin in Liver Tissues.

    Chumbley, Chad W; Reyzer, Michelle L; Allen, Jamie L; Marriner, Gwendolyn A; Via, Laura E; Barry, Clifton E; Caprioli, Richard M

    2016-02-16

    Matrix-assisted laser desorption/ionization (MALDI) imaging mass spectrometry (IMS) elucidates molecular distributions in thin tissue sections. Absolute pixel-to-pixel quantitation has remained a challenge, primarily for lack of validation of the appropriate analytical methods. In the present work, isotopically labeled internal standards are applied to tissue sections to maximize quantitative reproducibility and yield accurate quantitative results. We have developed a tissue model for rifampicin (RIF), an antibiotic used to treat tuberculosis, and have tested different methods of applying an isotopically labeled internal standard for MALDI IMS analysis. Applying the standard and subsequently the matrix onto tissue sections resulted in quantitation that was not statistically significantly different from results obtained using HPLC-MS/MS of tissue extracts. Quantitative IMS experiments were performed on liver tissue from an animal dosed in vivo. Each microspot in the quantitative images measures the local concentration of RIF in the thin tissue section. Lower concentrations were detected in the blood vessels and around the portal tracts. The quantitative values obtained from these measurements were comparable (>90% similarity) to HPLC-MS/MS results obtained from extracts of the same tissue. PMID:26814665
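
    The internal-standard normalization that underlies this kind of pixel-to-pixel quantitation can be sketched as a per-pixel intensity ratio converted through a calibration line. Arrays and calibration constants below are assumed for illustration.

```python
# Sketch of per-pixel quantitation with an isotopically labeled internal
# standard (IS): normalize analyte signal to the IS, then apply a calibration.
import numpy as np

analyte = np.array([[120.0, 80.0], [200.0, 40.0]])    # RIF ion counts per pixel
standard = np.array([[100.0, 90.0], [110.0, 95.0]])   # labeled-RIF counts per pixel

ratio = analyte / standard        # corrects pixel-to-pixel ionization bias

# Calibration from spotted standards: ratio -> concentration (assumed fit).
cal_slope, cal_intercept = 4.2, 0.1
concentration = cal_slope * ratio + cal_intercept     # ug/g tissue, per pixel
print(concentration)
```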

  6. GPC and quantitative phase imaging

    Palima, Darwin; Bañas, Andrew Rafael; Villangca, Mark Jayson; Glückstad, Jesper

    2016-03-01

    Generalized Phase Contrast (GPC) is a light-efficient method for generating speckle-free contiguous optical distributions using binary-only or analog phase levels. It has been used in applications such as optical trapping and manipulation, active microscopy, structured illumination, optical security, parallel laser marking and labelling, and recently in contemporary biophotonics applications such as adaptive and parallel two-photon optogenetics and neurophotonics. We present our most recent GPC developments geared towards these applications. We first show a very compact static light shaper, followed by the potential of GPC for biomedical and multispectral applications, where we experimentally demonstrate the active light shaping of a supercontinuum laser over most of the visible wavelength range. Finally, we discuss how GPC can be advantageously applied for Quantitative Phase Imaging (QPI).

  7. Quantitative measurement of blood cells

    We observe and measure the reaction stages of blood cells as they develop in different saline solutions. The imaging process is based on a common-path interferometer, realized with a spatial light modulator (SLM) in the Fourier plane after the microscope objective. With the SLM we can shift the phase of the transmitted light with respect to the phase of the signal wave. This principle is used in the phase contrast microscopy method, where we take four pictures of the same image with different phase shifts in order to calculate the complex field of the measured cell. This microscope technique yields quantitative data about the blood cell's surface at different development stages, as well as amplitude and phase differences inside the cell itself. (author)
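
    The four-image phase retrieval mentioned above has a standard closed form when the phase shifts are 0, π/2, π and 3π/2; a minimal sketch, assuming exactly those shifts:

```python
# Four-step phase-shifting: recover the phase map from four images taken
# with phase offsets 0, pi/2, pi, 3*pi/2.
import numpy as np

def phase_from_four_steps(i1, i2, i3, i4):
    """I_k = A + B*cos(phi + k*pi/2)  =>  phi = atan2(I4 - I2, I1 - I3)."""
    return np.arctan2(i4 - i2, i1 - i3)

# Synthetic check: build the four frames from a known phase and recover it.
phi_true = np.linspace(-3.0, 3.0, 7)
frames = [1.0 + 0.5 * np.cos(phi_true + k * np.pi / 2) for k in range(4)]
phi = phase_from_four_steps(*frames)
print(np.allclose(phi, phi_true))   # True
```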

  8. Quantitative analysis of Boehm's GC

    GUAN Xue-tao; ZHANG Yuan-rui; GOU Xiao-gang; CHENG Xu

    2003-01-01

    The term garbage collection describes the automated process of finding previously allocated memory that is no longer in use in order to make the memory available to satisfy subsequent allocation requests. We have reviewed existing papers and implementations of GC, and especially analyzed Boehm's C code, which is a real-time mark-sweep GC running under Linux and conforming to the ANSI C standard. In this paper, we quantitatively analyze the performance of different configurations of Boehm's collector subjected to different workloads. Reported measurements demonstrate that a refined garbage collector is a viable alternative to traditional explicit memory management techniques, even for low-level languages. It is more a trade-off for a given system than an all-or-nothing proposition.
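
    As a reminder of the algorithm under test, here is a toy mark-sweep collector over an explicit object graph. The real Boehm collector is conservative C with very different machinery; this is only a schematic of the mark and sweep phases.

```python
# Toy mark-sweep collector: mark everything reachable from the roots,
# then sweep away whatever was never marked.
class Obj:
    def __init__(self, name):
        self.name, self.refs, self.marked = name, [], False

heap = [Obj(n) for n in "abcde"]
a, b, c, d, e = heap
a.refs, b.refs, d.refs = [b], [c], [e]   # a -> b -> c live; d -> e garbage
roots = [a]

def mark(obj):
    """Mark phase: depth-first walk from the roots."""
    if not obj.marked:
        obj.marked = True
        for child in obj.refs:
            mark(child)

def sweep(heap):
    """Sweep phase: reclaim anything left unmarked, reset marks."""
    live = [o for o in heap if o.marked]
    for o in live:
        o.marked = False
    return live

for r in roots:
    mark(r)
heap = sweep(heap)
print([o.name for o in heap])   # ['a', 'b', 'c']; d and e are collected
```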

  9. Quantitative patterns in drone wars

    Garcia-Bernardo, Javier; Dodds, Peter Sheridan; Johnson, Neil F.

    2016-02-01

    Attacks by drones (i.e., unmanned combat air vehicles) continue to generate heated political and ethical debates. Here we examine the quantitative nature of drone attacks, focusing on how their intensity and frequency compare with that of other forms of human conflict. Instead of the power-law distribution found recently for insurgent and terrorist attacks, the severity of attacks is more akin to lognormal and exponential distributions, suggesting that the dynamics underlying drone attacks lie beyond these other forms of human conflict. We find that the pattern in the timing of attacks is consistent with one side having almost complete control, an important if expected result. We show that these novel features can be reproduced and understood using a generative mathematical model in which resource allocation to the dominant side is regulated through a feedback loop.
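
    The distribution comparison at the heart of this analysis can be sketched by fitting candidate models to severity data and comparing log-likelihoods. The data below are synthetic; the paper's actual fitting and model-selection procedure may differ.

```python
# Fit lognormal and exponential models to attack-severity data and compare
# goodness of fit via the maximized log-likelihood. Synthetic data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
severity = rng.lognormal(mean=1.0, sigma=0.8, size=500)   # stand-in data

for name, dist in [("lognormal", stats.lognorm), ("exponential", stats.expon)]:
    params = dist.fit(severity, floc=0)        # fix loc at 0 for stability
    loglik = np.sum(dist.logpdf(severity, *params))
    print(f"{name}: log-likelihood = {loglik:.1f}")
```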

  10. Automated quantitative analysis for pneumoconiosis

    Kondo, Hiroshi; Zhao, Bin; Mino, Masako

    1998-09-01

    Automated quantitative analysis for pneumoconiosis is presented. In this paper, Japanese standard radiographs of pneumoconiosis are categorized by measuring the area density and the number density of small rounded opacities. Furthermore, the opacities are classified by size and shape from measurements of the equivalent radius of each opacity. The proposed method includes a bi-level unsharp masking filter with a 1D uniform impulse response in order to eliminate undesired structures such as the images of blood vessels and ribs in the chest X-ray image. Fuzzy contrast enhancement is also introduced in this method for easy and exact detection of small rounded opacities. Many simulation examples show that the proposed method is more reliable than the former method.
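
    The 1D uniform unsharp-masking step can be sketched as subtracting a boxcar-smoothed background along one axis, which suppresses elongated structures while leaving small rounded opacities. The kernel size and the random stand-in image are illustrative.

```python
# Unsharp masking with a 1D uniform (moving-average) impulse response:
# high-pass the image along one axis by subtracting its boxcar background.
import numpy as np
from scipy.ndimage import uniform_filter

def unsharp_1d(image, size=31, axis=0):
    """High-pass along one axis using a 1D uniform kernel."""
    kernel_shape = [1, 1]
    kernel_shape[axis] = size
    background = uniform_filter(image, size=tuple(kernel_shape))
    return image - background

chest = np.random.rand(256, 256)       # stand-in for a chest radiograph
detail = unsharp_1d(chest, size=31, axis=0)
```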

  11. Quantitative evaluation of dermatological antiseptics.

    Leitch, C S; Leitch, A E; Tidman, M J

    2015-12-01

    Topical antiseptics are frequently used in dermatological management, yet evidence for the efficacy of traditional generic formulations is often largely anecdotal. We tested the in vitro bactericidal activity of four commonly used topical antiseptics against Staphylococcus aureus, using a modified version of the European Standard EN 1276, a quantitative suspension test for evaluation of the bactericidal activity of chemical disinfectants and antiseptics. To meet the standard for antiseptic effectiveness of EN 1276, at least a 5 log10 reduction in bacterial count within 5 minutes of exposure is required. While 1% benzalkonium chloride and 6% hydrogen peroxide both achieved a 5 log10 reduction in S. aureus count, neither 2% aqueous eosin nor 1 : 10 000 potassium permanganate showed significant bactericidal activity compared with control at exposure periods of up to 1 h. Aqueous eosin and potassium permanganate may have desirable astringent properties, but these results suggest they lack effective antiseptic activity, at least against S. aureus. PMID:26456933
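
    The EN 1276 pass criterion is simple arithmetic on viable counts; a worked example with invented CFU values:

```python
# EN 1276-style criterion: at least a 5 log10 reduction within 5 minutes.
import math

n0 = 3.2e7        # initial viable count, CFU/mL (illustrative)
n_after = 1.5e2   # count after 5 min exposure, CFU/mL (illustrative)

log_reduction = math.log10(n0 / n_after)   # ~5.3 here
print(f"{log_reduction:.1f} log10 reduction -> pass: {log_reduction >= 5}")
```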

  12. Quantitative neutron phase contrast tomography

    Conventional neutron radiography and tomography are based on the attenuation contrast induced by the sample. In the last few years, another source of image contrast, the so-called phase contrast, has been introduced. The imaging methods for detecting phase changes due to interaction with the sample are improving continuously, and several techniques are established. One method for detecting phase shifts is diffraction enhanced imaging using a double-crystal diffractometer. It is described how the refractive index distribution of a sample can be recovered quantitatively in tomographic reconstructions from data acquired by this technique. Using reference samples with a well-known refractive index distribution, high accuracy with deviations of only a few per cent could be achieved in the reconstructions for all materials used.
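
    The tomographic step, recovering a spatial distribution from projection data, can be sketched with filtered back-projection on a toy phantom. The DEI-specific conversion of rocking-curve data to refraction angles is omitted; this only illustrates the reconstruction stage.

```python
# Filtered back-projection on a toy phantom as a stand-in for recovering
# a refractive-index distribution from projection data.
import numpy as np
from skimage.data import shepp_logan_phantom
from skimage.transform import radon, iradon

phantom = shepp_logan_phantom()                   # stand-in distribution
angles = np.linspace(0.0, 180.0, 180, endpoint=False)
sinogram = radon(phantom, theta=angles)           # forward projections
reconstruction = iradon(sinogram, theta=angles)   # recovered distribution
```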

  13. Innovations in Quantitative Risk Management

    Scherer, Matthias; Zagst, Rudi

    2015-01-01

    Quantitative models are omnipresent, but often controversially discussed, in today's risk management practice. New regulations, innovative financial products, and advances in valuation techniques provide a continuous flow of challenging problems for financial engineers and risk managers alike. Designing a sound stochastic model requires finding a careful balance between parsimonious model assumptions, mathematical viability, and interpretability of the output. Moreover, data requirements and end-user training are to be considered as well. The KPMG Center of Excellence in Risk Management conference Risk Management Reloaded and this proceedings volume contribute to bridging the gap between academia, which provides methodological advances, and practice, which has a firm understanding of the economic conditions in which a given model is used. Discussed fields of application range from asset management, credit risk, and energy to risk management issues in insurance. Methodologically, dependence modeling...

  14. Quantitative analysis of qualitative images

    Hockney, David; Falco, Charles M.

    2005-03-01

    We show optical evidence that demonstrates artists as early as Jan van Eyck and Robert Campin (c1425) used optical projections as aids for producing their paintings. We also have found optical evidence within works by later artists, including Bermejo (c1475), Lotto (c1525), Caravaggio (c1600), de la Tour (c1650), Chardin (c1750) and Ingres (c1825), demonstrating a continuum in the use of optical projections by artists, along with an evolution in the sophistication of that use. However, even for paintings where we have been able to extract unambiguous, quantitative evidence of the direct use of optical projections for producing certain of the features, this does not mean that paintings are effectively photographs. Because the hand and mind of the artist are intimately involved in the creation process, understanding these complex images requires more than can be obtained from only applying the equations of geometrical optics.

  15. Quantitative Characterisation of Surface Texture

    De Chiffre, Leonardo; Lonardo, P.M.; Trumpold, H.; Lucca, D.A.; Goch, G.; Brown, C. A.; Raja, J.; Hansen, Hans Nørgaard

    2000-01-01

    This paper reviews the different methods used to give a quantitative characterisation of surface texture. The paper contains a review of conventional 2D as well as 3D roughness parameters, with particular emphasis on recent international standards and developments. It presents new texture characterisation methods, such as fractals, wavelets, change trees and others, including for each method a short review, the parameters that the new methods calculate, and applications of the methods to solve surface problems. The paper contains a discussion on the relevance of the different parameters and quantification methods in terms of functional correlations, and it addresses the need for reducing the large number of existing parameters. The review considers the present situation and gives suggestions for future activities.
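
    Two of the conventional 2D roughness parameters covered by such reviews, Ra and Rq, reduce to one-line formulas over the profile deviations; a short example with an invented profile:

```python
# Ra (arithmetic mean deviation) and Rq (RMS deviation) of a 2D profile.
import numpy as np

profile = np.array([0.12, -0.05, 0.08, -0.11, 0.03, -0.07])  # um, sample profile
z = profile - profile.mean()       # deviations from the mean line

ra = np.mean(np.abs(z))
rq = np.sqrt(np.mean(z ** 2))
print(f"Ra = {ra:.3f} um, Rq = {rq:.3f} um")
```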

  16. Optimization of quantitative infrared analysis

    Duerst, Richard W.; Breneman, W. E.; Dittmar, Rebecca M.; Drugge, Richard E.; Gagnon, Jim E.; Pranis, Robert A.; Spicer, Colleen K.; Stebbings, William L.; Westberg, J. W.; Duerst, Marilyn D.

    1994-01-01

    A number of industrial processes, especially quality assurance procedures, accept information on the relative quantities of components in mixtures whenever absolute values for the quantitative analysis are unavailable. These relative quantities may be determined from infrared intensity ratios even though known standards are unavailable. Repeatability [vs. precision] in quantitative analysis is a critical parameter for meaningful results. In any given analysis, multiple runs provide "answers" with a certain standard deviation. Obviously, the lower the standard deviation, the better the precision. In attempting to minimize the standard deviation and thus improve precision, we need to delineate which contributing factors we have control over (such as sample preparation techniques and data analysis methodology) and which factors we have little control over (environmental and instrument noise, for example). For a given set of conditions, the best instrumental precision achievable on an IR instrument should be determinable. Traditionally, the term "signal-to-noise" (S/N) has been used for a single spectrum, recognizing that S/N improves with an increase in the number of scans coadded to generate that single spectrum. However, the S/N ratio does not directly reflect the precision achievable for an absorbing band. We prefer the phrase "maximum achievable instrument precision" (MAIP), which is equivalent to the minimum relative standard deviation for a given peak (either height or area) in spectra. For a specific analysis, the analyst should have in mind the desired precision. Only if the desired precision is no more stringent than the MAIP will the analysis be feasible. Once the MAIP is established, other experimental procedures may be modified to improve the analytical precision if it is worse than expected (i.e., above the MAIP).
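
    The precision bookkeeping described here amounts to comparing an observed relative standard deviation with the instrument's floor (the MAIP); a sketch with invented numbers:

```python
# Observed percent RSD over repeat runs vs. an assumed instrument floor (MAIP).
import numpy as np

peak_area = np.array([1.021, 1.018, 1.025, 1.019, 1.022])   # repeat measurements
rsd = peak_area.std(ddof=1) / peak_area.mean() * 100        # observed % RSD

maip = 0.15          # assumed minimum achievable % RSD for this instrument
desired_rsd = 0.50   # precision the analysis requires, % RSD
print(f"observed RSD = {rsd:.2f}%, MAIP = {maip}%")
print("analysis feasible" if desired_rsd >= maip
      else "demands precision beyond the MAIP")
```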

  17. Differentiation of renal clear cell carcinoma and renal papillary carcinoma using quantitative CT enhancement parameters

    Ruppert-Kohlmayr, A.J.; Uggowitzer, M.; Meissnitzer, T.; Ruppert, G. [University Hospital Graz (Austria). Dept. of Radiology]

    2004-11-15

    Objective: The purpose of our study was to evaluate the quantitative multiphasic CT enhancement patterns of malignant renal neoplasms to enable lesion differentiation by enhancement characteristics. We used a new method to standardize enhancement measurements in lesions on multiphasic CT so that they are not influenced by intrinsic factors such as cardiac output. Conclusion: The new correction method is a simple tool for excluding intrinsic influences on the enhancement of lesions. Quantitative enhancement evaluation with this method enables accurate differentiation between renal clear cell carcinoma and renal papillary carcinoma. (author)

  18. Developing the Structure of a Hardware and Software System for Quantitative Diagnosis of Microhemodynamics

    P.V. Luzhnov

    2015-12-01

    Currently, vascular diseases are the leading cause of disability all over the world. Recent publications have pointed to microcirculation disorders as the main cause of vascular diseases. In this paper, we present an analysis of the existing diagnostic methods and identify the advantages, disadvantages and limitations of each method. The analysis showed that none of the methods provides accurate quantitative criteria for the assessment and diagnosis of peripheral circulation. Our results can be used for the development of medical and technical requirements for hardware and software systems for the quantitative diagnosis of microhemodynamic disorders.

  19. Accurate modelling of flow induced stresses in rigid colloidal aggregates

    Vanni, Marco

    2015-07-01

    A method has been developed to estimate the motion and the internal stresses induced by a fluid flow on a rigid aggregate. The approach couples Stokesian dynamics and structural mechanics in order to account accurately for the effect of the complex geometry of the aggregates on hydrodynamic forces and the internal redistribution of stresses. The intrinsic error of the method, due to the low-order truncation of the multipole expansion of the Stokes solution, has been assessed by comparison with the analytical solution for the case of a doublet in a shear flow. In addition, it has been shown that the error becomes smaller as the number of primary particles in the aggregate increases, and hence it is expected to be negligible for realistic reproductions of large aggregates. The evaluation of internal forces is performed by adapting the matrix methods of structural mechanics to the geometric features of the aggregates and to the particular stress-strain relationship that occurs at intermonomer contacts. A preliminary investigation of the stress distribution in rigid aggregates and their mode of breakup has been performed by studying the response to an elongational flow of both realistic reproductions of colloidal aggregates (made of several hundred monomers) and highly simplified structures. Very different behaviour was observed between low-density aggregates with isostatic or weakly hyperstatic structures and compact aggregates with highly hyperstatic configurations. In low-density clusters, breakup is caused directly by the failure of the most stressed intermonomer contact, which is typically located in the inner region of the aggregate and hence gives rise to fragments of similar size. On the contrary, breakup of compact and highly cross-linked clusters is seldom caused by the failure of a single bond. When this happens, it proceeds through the removal of a tiny fragment from the external part of the structure. More commonly, however