WorldWideScience

Sample records for accurate quantitative snp-typing

  1. Toward Accurate and Quantitative Comparative Metagenomics

    Science.gov (United States)

    Nayfach, Stephen; Pollard, Katherine S.

    2016-01-01

    Shotgun metagenomics and computational analysis are used to compare the taxonomic and functional profiles of microbial communities. Leveraging this approach to understand roles of microbes in human biology and other environments requires quantitative data summaries whose values are comparable across samples and studies. Comparability is currently hampered by the use of abundance statistics that do not estimate a meaningful parameter of the microbial community and biases introduced by experimental protocols and data-cleaning approaches. Addressing these challenges, along with improving study design, data access, metadata standardization, and analysis tools, will enable accurate comparative metagenomics. We envision a future in which microbiome studies are replicable and new metagenomes are easily and rapidly integrated with existing data. Only then can the potential of metagenomics for predictive ecological modeling, well-powered association studies, and effective microbiome medicine be fully realized. PMID:27565341
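
    As a minimal illustration of the comparability problem this abstract raises, the simplest abundance summary is depth-normalized relative abundance. The function and numbers below are ours, not from the paper:

    ```python
    def relative_abundance(counts):
        """Convert raw per-taxon read counts into relative abundances that are
        comparable across samples of different sequencing depth -- one of the
        simplest 'meaningful parameters' of a community profile."""
        total = sum(counts.values())
        return {taxon: c / total for taxon, c in counts.items()}

    # Toy sample with 1000 classified reads:
    print(relative_abundance({"Bacteroides": 600, "Prevotella": 300, "other": 100}))
    # {'Bacteroides': 0.6, 'Prevotella': 0.3, 'other': 0.1}
    ```

    Real comparative pipelines add further corrections (genome size, copy number, protocol bias) that this sketch deliberately omits.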

  2. SNP typing on the NanoChip electronic microarray

    DEFF Research Database (Denmark)

    Børsting, Claus; Sanchez Sanchez, Juan Jose; Morling, Niels

    2005-01-01

We describe a single nucleotide polymorphism (SNP) typing protocol developed for the NanoChip electronic microarray. The NanoChip array consists of 100 electrodes covered by a thin hydrogel layer containing streptavidin. An electric current can be applied to one, several, or all electrodes...

  3. SNP typing reveals similarity in Mycobacterium tuberculosis genetic diversity between Portugal and Northeast Brazil.

    Science.gov (United States)

    Lopes, Joao S; Marques, Isabel; Soares, Patricia; Nebenzahl-Guimaraes, Hanna; Costa, Joao; Miranda, Anabela; Duarte, Raquel; Alves, Adriana; Macedo, Rita; Duarte, Tonya A; Barbosa, Theolis; Oliveira, Martha; Nery, Joilda S; Boechat, Neio; Pereira, Susan M; Barreto, Mauricio L; Pereira-Leal, Jose; Gomes, Maria Gabriela Miranda; Penha-Goncalves, Carlos

    2013-08-01

Human tuberculosis is an infectious disease caused by bacteria from the Mycobacterium tuberculosis complex (MTBC). Although spoligotyping and MIRU-VNTR are standard methodologies in MTBC genetic epidemiology, recent studies suggest that single nucleotide polymorphisms (SNPs) are advantageous in phylogenetics and strain group/lineage identification. In this work we use a set of 79 SNPs to characterize 1987 MTBC isolates from Portugal and 141 from Northeast Brazil. All Brazilian samples were further characterized using spoligotyping. Phylogenetic analysis against a reference set revealed that about 95% of the isolates in both populations are singly attributed to bacterial lineage 4. Within this lineage, the most frequent strain groups in both Portugal and Brazil are LAM, followed by Haarlem and X. In contrast to these groups, strain group T showed a very different prevalence between Portugal (10%) and Brazil (1.5%). Spoligotype identification showed about 10% mismatches compared with SNP typing, and a little more than 1% of strains could not be identified. The mismatches were observed in the most represented groups of our sample set (i.e., LAM and Haarlem) in almost the same proportion. Besides being more accurate in identifying strain groups/lineages, SNP typing can also provide phylogenetic relationships between strain groups/lineages and thus indicate cases of phylogenetic incongruence. Overall, the use of SNP typing revealed striking similarities between the MTBC populations of Portugal and Brazil.
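
    The spoligotype-versus-SNP concordance figures quoted above boil down to a comparison of paired group assignments. A hypothetical sketch (labels and counts are invented, not the study's data):

    ```python
    def concordance_summary(snp_calls, spoligo_calls):
        """Compare strain-group assignments from two typing methods.
        Both inputs are lists of group labels (e.g. "LAM", "Haarlem"),
        aligned by isolate; None marks an unidentifiable spoligotype.
        Returns (mismatch fraction among identified, unidentified fraction)."""
        assert len(snp_calls) == len(spoligo_calls)
        n = len(snp_calls)
        unidentified = sum(1 for s in spoligo_calls if s is None)
        mismatches = sum(1 for a, b in zip(snp_calls, spoligo_calls)
                         if b is not None and a != b)
        return mismatches / (n - unidentified), unidentified / n

    # Five toy isolates: one discordant call, one unidentifiable spoligotype
    snp = ["LAM", "LAM", "Haarlem", "X", "T"]
    spo = ["LAM", "Haarlem", "Haarlem", "X", None]
    mis, unid = concordance_summary(snp, spo)
    # mis = 0.25, unid = 0.2
    ```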

  4. SNP typing for germplasm identification of Amomum villosum Lour. Based on DNA barcoding markers.

    Directory of Open Access Journals (Sweden)

    Qionglin Huang

Full Text Available Amomum villosum Lour., produced from Yangchun, Guangdong Province, China, is a Daodi medicinal material of Amomi Fructus in traditional Chinese medicine. This herb germplasm should be accurately identified and collected to ensure its quality and safety in medication. In the present study, a single nucleotide polymorphism typing method was evaluated on the basis of DNA barcoding markers to identify the germplasm of Amomi Fructus. Genomic DNA was extracted from the leaves of 29 landraces representing three Amomum species (A. villosum Lour., A. xanthioides Wall. ex Baker and A. longiligulare T. L. Wu) by using the CTAB method. Six barcoding markers (ITS, ITS2, LSU D1-D3, matK, rbcL and trnH-psbA) were PCR amplified and sequenced; SNP typing and phylogenetic analysis were performed to differentiate the landraces. Results showed that high-quality bidirectional sequences were acquired for five candidate regions (ITS, ITS2, LSU D1-D3, matK, and rbcL) but not trnH-psbA. Three ribosomal regions, namely, ITS, ITS2, and LSU D1-D3, contained more SNP genotypes (STs) than the plastid genes rbcL and matK. In the 29 specimens, 19 STs were detected from the combination of four regions (ITS, LSU D1-D3, rbcL, and matK). Phylogenetic analysis further revealed two clades. A minimum-spanning tree demonstrated the existence of two main groups: group I consisted of 9 STs (ST1-8 and ST11) of A. villosum Lour., and group II was composed of 3 STs (ST16-18) of A. longiligulare T.L. Wu. Our results suggested that ITS and LSU D1-D3 should be incorporated with the core barcodes rbcL and matK. The four combined regions could be used as a multiregional DNA barcode to precisely differentiate the Amomi Fructus landraces in different producing areas.

  5. SNP Typing for Germplasm Identification of Amomum villosum Lour. Based on DNA Barcoding Markers

    Science.gov (United States)

    Yang, Jinfen; Ma, Xinye; Zhan, Ruoting; Xu, Hui; Chen, Weiwen

    2014-01-01

Amomum villosum Lour., produced from Yangchun, Guangdong Province, China, is a Daodi medicinal material of Amomi Fructus in traditional Chinese medicine. This herb germplasm should be accurately identified and collected to ensure its quality and safety in medication. In the present study, a single nucleotide polymorphism typing method was evaluated on the basis of DNA barcoding markers to identify the germplasm of Amomi Fructus. Genomic DNA was extracted from the leaves of 29 landraces representing three Amomum species (A. villosum Lour., A. xanthioides Wall. ex Baker and A. longiligulare T. L. Wu) by using the CTAB method. Six barcoding markers (ITS, ITS2, LSU D1–D3, matK, rbcL and trnH-psbA) were PCR amplified and sequenced; SNP typing and phylogenetic analysis were performed to differentiate the landraces. Results showed that high-quality bidirectional sequences were acquired for five candidate regions (ITS, ITS2, LSU D1–D3, matK, and rbcL) but not trnH-psbA. Three ribosomal regions, namely, ITS, ITS2, and LSU D1–D3, contained more SNP genotypes (STs) than the plastid genes rbcL and matK. In the 29 specimens, 19 STs were detected from the combination of four regions (ITS, LSU D1–D3, rbcL, and matK). Phylogenetic analysis further revealed two clades. A minimum-spanning tree demonstrated the existence of two main groups: group I consisted of 9 STs (ST1–8 and ST11) of A. villosum Lour., and group II was composed of 3 STs (ST16–18) of A. longiligulare T.L. Wu. Our results suggested that ITS and LSU D1–D3 should be incorporated with the core barcodes rbcL and matK. The four combined regions could be used as a multiregional DNA barcode to precisely differentiate the Amomi Fructus landraces in different producing areas. PMID:25531885

  6. Accurate Quantitation of Dystrophin Protein in Human Skeletal Muscle Using Mass Spectrometry

    OpenAIRE

    Brown, Kristy J; Marathi, Ramya; Fiorillo, Alyson A; Ciccimaro, Eugene F.; Sharma, Seema; Rowlands, David S.; Rayavarapu, Sree; Nagaraju, Kanneboyina; Eric P. Hoffman; Hathout, Yetrib

    2012-01-01

    Quantitation of human dystrophin protein in muscle biopsies is a clinically relevant endpoint for both diagnosis and response to dystrophin-replacement therapies for dystrophinopathies. A robust and accurate assay would enable the use of dystrophin as a surrogate biomarker, particularly in exploratory Phase 2 trials. Currently available methods to quantitate dystrophin rely on immunoblot or immunohistochemistry methods that are not considered robust. Here we present a mass spectrometry based ...

  7. FANSe: an accurate algorithm for quantitative mapping of large scale sequencing reads.

    Science.gov (United States)

    Zhang, Gong; Fedyunin, Ivan; Kirchner, Sebastian; Xiao, Chuanle; Valleriani, Angelo; Ignatova, Zoya

    2012-06-01

The most crucial step in data processing from high-throughput sequencing applications is the accurate and sensitive alignment of the sequencing reads to reference genomes or transcriptomes. The accurate detection of insertions and deletions (indels) and errors introduced by the sequencing platform or by misreading of modified nucleotides is essential for the quantitative processing of RNA-based sequencing (RNA-Seq) datasets and for the identification of genetic variations and modification patterns. We developed a new, fast and accurate algorithm for nucleic acid sequence analysis, FANSe, with adjustable mismatch allowance settings and the ability to handle indels to accurately and quantitatively map millions of reads to small or large reference genomes. It is a seed-based algorithm that uses the whole read information for mapping; high sensitivity and low ambiguity are achieved by using short, non-overlapping seeds. Furthermore, FANSe uses a hotspot score to prioritize the processing of highly probable matches and implements a modified Smith-Waterman refinement with a reduced scoring matrix to accelerate the calculation without compromising its sensitivity. The FANSe algorithm stably processes datasets from various sequencing platforms, masked or unmasked and small or large genomes. It shows a remarkable coverage of low-abundance mRNAs, which is important for quantitative processing of RNA-Seq datasets.
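
    The seed-based strategy this abstract describes can be illustrated with a minimal seed-and-verify mapper. This is a toy sketch of the general technique, not FANSe itself: it handles only substitutions, not the indels FANSe supports, and scans the genome instead of using an index.

    ```python
    def seed_map(read, genome, seed_len=8, max_mismatch=2):
        """Minimal seed-and-verify mapping: take non-overlapping seeds of the
        read, find exact seed hits in the genome, then verify the whole read
        at each implied position under a mismatch allowance.
        Returns sorted candidate start positions."""
        hits = set()
        for s in range(0, len(read) - seed_len + 1, seed_len):
            seed = read[s:s + seed_len]
            start = genome.find(seed)
            while start != -1:
                pos = start - s  # implied start of the full read in the genome
                if 0 <= pos <= len(genome) - len(read):
                    mism = sum(1 for a, b in zip(read, genome[pos:pos + len(read)])
                               if a != b)
                    if mism <= max_mismatch:
                        hits.add(pos)
                start = genome.find(seed, start + 1)
        return sorted(hits)

    genome = "ACGTACGTTTGCAACGGTAC"
    # Read matches position 8 with one substitution in its last base;
    # the first seed still hits exactly, and verification tolerates the mismatch.
    print(seed_map("TTGCAACC", genome, seed_len=4))  # [8]
    ```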

  8. Designer cantilevers for even more accurate quantitative measurements of biological systems with multifrequency AFM

    Science.gov (United States)

    Contera, S.

    2016-04-01

Multifrequency excitation/monitoring of cantilevers has made it possible both to achieve fast, relatively simple, nanometre-resolution quantitative mapping of the mechanical properties of biological systems in solution using atomic force microscopy (AFM), and to achieve single-molecule resolution detection with nanomechanical biosensors. A recent paper by Penedo et al [2015 Nanotechnology 26 485706] has made a significant contribution by developing simple methods to improve the signal-to-noise ratio in liquid environments by selectively enhancing cantilever modes, which will lead to even more accurate quantitative measurements.

  9. Accurate quantitation of allele-specific expression patterns by analysis of DNA melting

    OpenAIRE

    Jeong, Sangkyun; Hahn, Yoonsoo; Rong, Qi; Pfeifer, Karl

    2007-01-01

    Epigenetic and genetic mechanisms can result in large differences in expression levels of the two alleles in a diploid organism. Furthermore, these differences may be critical to phenotypic variations among individuals. In this study, we present a novel procedure and algorithm to precisely and accurately quantitate the relative expression of each allele. This method uses the differential melting properties of DNAs differing at even a single base pair. By referring to the melting characteristi...

  10. Oxidized fatty acid analysis by charge-switch derivatization, selected reaction monitoring, and accurate mass quantitation.

    Science.gov (United States)

    Liu, Xinping; Moon, Sung Ho; Mancuso, David J; Jenkins, Christopher M; Guan, Shaoping; Sims, Harold F; Gross, Richard W

    2013-11-01

A highly sensitive, specific, and robust method for the analysis of oxidized metabolites of linoleic acid (LA), arachidonic acid (AA), and docosahexaenoic acid (DHA) was developed using charge-switch derivatization, liquid chromatography-electrospray ionization tandem mass spectrometry (LC-ESI MS/MS) with selected reaction monitoring (SRM) and quantitation by high mass accuracy analysis of product ions, thereby minimizing interferences from contaminating ions. Charge-switch derivatization of LA, AA, and DHA metabolites with N-(4-aminomethylphenyl)-pyridinium resulted in a 10- to 30-fold increase in ionization efficiency. Improved quantitation was accompanied by decreased false positive interferences through accurate mass measurements of diagnostic product ions during SRM transitions by ratiometric comparisons with stable isotope internal standards. The limits of quantitation were between 0.05 and 6.0 pg, with a dynamic range of 3 to 4 orders of magnitude (correlation coefficient r(2) > 0.99). This approach was used to quantitate the levels of representative fatty acid metabolites from wild-type (WT) and iPLA2γ(-/-) mouse liver identifying the role of iPLA2γ in hepatic lipid second messenger production. Collectively, these results demonstrate the utility of high mass accuracy product ion analysis in conjunction with charge-switch derivatization for the highly specific quantitation of diminutive amounts of LA, AA, and DHA metabolites in biologic systems. Copyright © 2013 Elsevier Inc. All rights reserved.
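
    The stable-isotope internal-standard quantitation underlying this workflow reduces, at its core, to a ratiometric calculation. A sketch with invented numbers (the amounts and the 1:1 response factor are assumptions, not values from the paper):

    ```python
    def quantify_by_istd(analyte_area, istd_area, istd_amount, response_factor=1.0):
        """Ratiometric quantitation against a stable-isotope internal standard:
        analyte amount = (analyte peak area / internal-standard peak area)
        * spiked internal-standard amount / relative response factor."""
        return (analyte_area / istd_area) * istd_amount / response_factor

    # A sample spiked with 50 pg of heavy-labeled standard;
    # the measured analyte/standard area ratio is 0.42.
    print(quantify_by_istd(analyte_area=4200.0, istd_area=10000.0, istd_amount=50.0))
    # 21.0  (pg)
    ```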

  11. Multiplexed SNP typing of ancient DNA clarifies the origin of Andaman mtDNA haplogroups amongst South Asian tribal populations.

    Directory of Open Access Journals (Sweden)

    Phillip Endicott

    Full Text Available The issue of errors in genetic data sets is of growing concern, particularly in population genetics where whole genome mtDNA sequence data is coming under increased scrutiny. Multiplexed PCR reactions, combined with SNP typing, are currently under-exploited in this context, but have the potential to genotype whole populations rapidly and accurately, significantly reducing the amount of errors appearing in published data sets. To show the sensitivity of this technique for screening mtDNA genomic sequence data, 20 historic samples of the enigmatic Andaman Islanders and 12 modern samples from three Indian tribal populations (Chenchu, Lambadi and Lodha were genotyped for 20 coding region sites after provisional haplogroup assignment with control region sequences. The genotype data from the historic samples significantly revise the topologies for the Andaman M31 and M32 mtDNA lineages by rectifying conflicts in published data sets. The new Indian data extend the distribution of the M31a lineage to South Asia, challenging previous interpretations of mtDNA phylogeography. This genetic connection between the ancestors of the Andamanese and South Asian tribal groups approximately 30 kya has important implications for the debate concerning migration routes and settlement patterns of humans leaving Africa during the late Pleistocene, and indicates the need for more detailed genotyping strategies. The methodology serves as a low-cost, high-throughput model for the production and authentication of data from modern or ancient DNA, and demonstrates the value of museum collections as important records of human genetic diversity.

  12. Accurate Quantitative Sensing of Intracellular pH based on Self-ratiometric Upconversion Luminescent Nanoprobe

    Science.gov (United States)

    Li, Cuixia; Zuo, Jing; Zhang, Li; Chang, Yulei; Zhang, Youlin; Tu, Langping; Liu, Xiaomin; Xue, Bin; Li, Qiqing; Zhao, Huiying; Zhang, Hong; Kong, Xianggui

    2016-12-01

Accurate quantitation of intracellular pH (pHi) is of great importance in revealing cellular activities and providing early warning of diseases. A series of fluorescence-based nano-bioprobes composed of different nanoparticles or/and dye pairs have already been developed for pHi sensing. Until now, biological auto-fluorescence background upon UV-Vis excitation and severe photo-bleaching of dyes have been the two main factors impeding the accurate quantitative detection of pHi. Herein, we have developed a self-ratiometric luminescence nanoprobe based on Förster resonance energy transfer (FRET) for probing pHi, in which pH-sensitive fluorescein isothiocyanate (FITC) and upconversion nanoparticles (UCNPs) served as the energy acceptor and donor, respectively. Under 980 nm excitation, the upconversion emission bands at 475 nm and 645 nm of NaYF4:Yb3+, Tm3+ UCNPs were used as the pHi response and self-ratiometric reference signals, respectively. This direct quantitative sensing approach circumvents the traditional software-based post-processing of images, which may lead to relatively large uncertainty in the results. Owing to efficient FRET and a fluorescence-background-free readout, highly sensitive and accurate sensing was achieved, with a sensitivity of 3.56 per pHi unit over the range 3.0–7.0 and a deviation of less than 0.43. This approach should facilitate research in pHi-related areas and the development of intracellular drug delivery systems.
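
    A self-ratiometric readout of this kind divides the pH-sensitive emission by the reference emission and maps the ratio through a pre-measured calibration curve. A sketch with hypothetical calibration constants (a linear ratio-to-pH calibration is assumed here; it is not the paper's calibration):

    ```python
    def ph_from_ratio(i_475, i_645, slope, intercept):
        """Self-ratiometric pH readout: the pH-sensitive 475 nm intensity is
        divided by the pH-insensitive 645 nm reference intensity, and the
        ratio is inverted through a linear calibration ratio = slope*pH + intercept."""
        ratio = i_475 / i_645
        return (ratio - intercept) / slope

    # Hypothetical calibration: ratio = 0.30 * pH + 0.10 over pH 3.0-7.0
    ph = ph_from_ratio(i_475=1900.0, i_645=1000.0, slope=0.30, intercept=0.10)
    # recovers pH of about 6.0
    ```

    Because both intensities come from the same probe under the same excitation, fluctuations in excitation power and probe concentration cancel in the ratio, which is the point of the self-referencing design.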

  13. A fluorescence-based quantitative real-time PCR assay for accurate Pocillopora damicornis species identification

    Science.gov (United States)

    Thomas, Luke; Stat, Michael; Evans, Richard D.; Kennington, W. Jason

    2016-09-01

    Pocillopora damicornis is one of the most extensively studied coral species globally, but high levels of phenotypic plasticity within the genus make species identification based on morphology alone unreliable. As a result, there is a compelling need to develop cheap and time-effective molecular techniques capable of accurately distinguishing P. damicornis from other congeneric species. Here, we develop a fluorescence-based quantitative real-time PCR (qPCR) assay to genotype a single nucleotide polymorphism that accurately distinguishes P. damicornis from other morphologically similar Pocillopora species. We trial the assay across colonies representing multiple Pocillopora species and then apply the assay to screen samples of Pocillopora spp. collected at regional scales along the coastline of Western Australia. This assay offers a cheap and time-effective alternative to Sanger sequencing and has broad applications including studies on gene flow, dispersal, recruitment and physiological thresholds of P. damicornis.
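
    A genotyping qPCR assay of this kind typically calls the SNP from the relative fluorescence of two allele-specific probes. A toy caller, with assumed dye names and threshold (the paper's actual assay design and cutoffs may differ):

    ```python
    def call_genotype(fam, vic, ratio_threshold=3.0):
        """Toy allelic-discrimination caller for a biallelic SNP assay: each
        probe/dye reports one allele; a strong skew toward one dye is called
        homozygous, anything in between heterozygous. Dye names (FAM, VIC)
        and the 3:1 threshold are illustrative assumptions."""
        if fam >= ratio_threshold * vic:
            return "allele1/allele1"
        if vic >= ratio_threshold * fam:
            return "allele2/allele2"
        return "allele1/allele2"

    print(call_genotype(fam=9000, vic=500))   # allele1/allele1
    print(call_genotype(fam=4000, vic=3500))  # allele1/allele2
    ```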

  14. Quantitative proteomics using the high resolution accurate mass capabilities of the quadrupole-orbitrap mass spectrometer.

    Science.gov (United States)

    Gallien, Sebastien; Domon, Bruno

    2014-08-01

High resolution/accurate mass hybrid mass spectrometers have considerably advanced shotgun proteomics, and the recent introduction of fast sequencing capabilities has expanded their use for targeted approaches. More specifically, the quadrupole-orbitrap instrument has a unique configuration and its new features enable a wide range of experiments. An overview of the analytical capabilities of this instrument is presented, with a focus on its application to quantitative analyses. The high resolution, the trapping capability and the versatility of the instrument have allowed quantitative proteomic workflows to be redefined and new data acquisition schemes to be developed. The initial proteomic applications have shown an improvement of the analytical performance. However, as quantification relies on ion trapping instead of an ion beam, further refinement of the technique can be expected.

  15. A processing method enabling the use of peak height for accurate and precise proton NMR quantitation.

    Science.gov (United States)

    Hays, Patrick A; Thompson, Robert A

    2009-10-01

In NMR, peak area quantitation is the most common method used because the area under a peak or peak group is proportional to the number of nuclei at those frequencies. Peak height quantitation has not enjoyed as much utility because of poor precision and linearity resulting from inconsistent peak shapes and widths (measured at half height). By using a post-acquisition processing method that employs a Gaussian or line-broadening (exponential decay) apodization (i.e., weighting function) to normalize the shape and width of the internal standard (ISTD) peak, the peak heights of an analyte calibration spectrum can be compared with the analyte peaks in a sample spectrum, resulting in accurate and precise quantitative results. Peak height results compared favorably with 'clean' peak area results for several hundred illicit samples of methamphetamine HCl, cocaine HCl, and heroin HCl of varying composition and purity. Using peak height and peak area results together can enhance confidence in the reported purity value, a major advantage in high-throughput, automated quantitative analyses. Published in 2009 by John Wiley & Sons, Ltd.
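
    The processing step described above, exponential line broadening applied after acquisition, can be sketched with a simulated free induction decay; the decay constants and broadening values below are illustrative, not the paper's:

    ```python
    import numpy as np

    def apodized_spectrum(fid, dt, lb_hz):
        """Apply exponential line broadening (lb_hz, in Hz) to a free induction
        decay and return the magnitude spectrum. Normalizing the final linewidth
        this way is what makes peak heights comparable between calibration and
        sample spectra."""
        t = np.arange(len(fid)) * dt
        return np.abs(np.fft.fft(fid * np.exp(-np.pi * lb_hz * t)))

    # Toy FID: a single resonance at 100 Hz with some natural decay
    dt = 1e-3                                  # 1 ms dwell time
    t = np.arange(2048) * dt
    fid = np.exp(2j * np.pi * 100 * t) * np.exp(-t / 0.05)

    spec_narrow = apodized_spectrum(fid, dt, lb_hz=2.0)
    spec_broad = apodized_spectrum(fid, dt, lb_hz=20.0)
    # Heavier broadening lowers and widens the peak, which is why heights are
    # only comparable once both spectra are apodized to the same final linewidth.
    ```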

  16. Accurate virus quantitation using a Scanning Transmission Electron Microscopy (STEM) detector in a scanning electron microscope.

    Science.gov (United States)

    Blancett, Candace D; Fetterer, David P; Koistinen, Keith A; Morazzani, Elaine M; Monninger, Mitchell K; Piper, Ashley E; Kuehl, Kathleen A; Kearney, Brian J; Norris, Sarah L; Rossi, Cynthia A; Glass, Pamela J; Sun, Mei G

    2017-10-01

    A method for accurate quantitation of virus particles has long been sought, but a perfect method still eludes the scientific community. Electron Microscopy (EM) quantitation is a valuable technique because it provides direct morphology information and counts of all viral particles, whether or not they are infectious. In the past, EM negative stain quantitation methods have been cited as inaccurate, non-reproducible, and with detection limits that were too high to be useful. To improve accuracy and reproducibility, we have developed a method termed Scanning Transmission Electron Microscopy - Virus Quantitation (STEM-VQ), which simplifies sample preparation and uses a high throughput STEM detector in a Scanning Electron Microscope (SEM) coupled with commercially available software. In this paper, we demonstrate STEM-VQ with an alphavirus stock preparation to present the method's accuracy and reproducibility, including a comparison of STEM-VQ to viral plaque assay and the ViroCyt Virus Counter. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
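
    Bead-referenced EM particle counting, one common way to turn particle counts into a concentration, reduces to a count ratio against a bead standard of known concentration. A generic sketch of that scheme, not the exact STEM-VQ calculation; all numbers are invented:

    ```python
    def particles_per_ml(virus_count, bead_count, bead_stock_per_ml, dilution=1.0):
        """Bead-referenced particle counting: virus particles and latex beads of
        known concentration are mixed and counted in the same fields, so the
        virus concentration follows from the count ratio, scaled by any
        dilution applied to the virus stock."""
        return (virus_count / bead_count) * bead_stock_per_ml * dilution

    # 480 virions and 120 beads counted; bead stock at 1e8 beads/mL
    print(particles_per_ml(virus_count=480, bead_count=120, bead_stock_per_ml=1e8))
    # 400000000.0  (4e8 particles/mL)
    ```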

  17. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, and other fields. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal-to-noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for quantitative studies. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome, so the birefringence obtained by PS-OCT was still not accurate enough for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and

  18. Optimization of sample preparation for accurate results in quantitative NMR spectroscopy

    Science.gov (United States)

    Yamazaki, Taichi; Nakamura, Satoe; Saito, Takeshi

    2017-04-01

Quantitative nuclear magnetic resonance (qNMR) spectroscopy has received high marks as an excellent measurement tool because it does not require a reference standard of the same compound as the analyte. Measurement parameters have been discussed in detail and high-resolution balances have been used for sample preparation. However, high-resolution balances, such as the ultra-microbalance, are not general-purpose analytical tools, and many analysts may find them difficult to use, thereby hindering accurate sample preparation for qNMR measurement. In this study, we examined the relationship between the resolution of the balance and the amount of sample weighed during sample preparation. We were able to confirm the accuracy of the assay results for samples weighed on a high-resolution balance, such as the ultra-microbalance. Furthermore, when an appropriate tare and amount of sample were weighed on a given balance, accurate assay results were also obtained with another high-resolution balance. Although this is a fundamental result, it offers important evidence that would enhance the versatility of the qNMR method.
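
    The balance-resolution argument can be made concrete with a rule-of-thumb check: the readability of the balance sets a floor on the relative mass uncertainty for a given sample amount, so each balance implies a minimum sample size. A sketch with illustrative numbers (not from the study):

    ```python
    def relative_weighing_uncertainty(readability_mg, sample_mass_mg):
        """Simplest planning check for quantitative sample preparation: the
        balance readability divided by the sample mass bounds the relative
        uncertainty contributed by weighing. Real uncertainty budgets add
        repeatability, eccentricity, and buoyancy terms omitted here."""
        return readability_mg / sample_mass_mg

    # 1 ug readability (ultra-microbalance class) on a 10 mg sample:
    print(relative_weighing_uncertainty(0.001, 10.0))  # 0.0001, i.e. 0.01 %
    ```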

  19. SILAC-Based Quantitative Strategies for Accurate Histone Posttranslational Modification Profiling Across Multiple Biological Samples.

    Science.gov (United States)

    Cuomo, Alessandro; Soldi, Monica; Bonaldi, Tiziana

    2017-01-01

Histone posttranslational modifications (hPTMs) play a key role in regulating chromatin dynamics and fine-tuning DNA-based processes. Mass spectrometry (MS) has emerged as a versatile technology for the analysis of histones, contributing to the dissection of hPTMs, with special strength in the identification of novel marks and in the assessment of modification cross talks. Stable isotope labeling by amino acids in cell culture (SILAC), when adapted to histones, permits the accurate quantification of PTM changes among distinct functional states; however, its application has been mainly confined to actively dividing cell lines. A spike-in strategy based on SILAC can be used to overcome this limitation and profile hPTMs across multiple samples. We describe here the adaptation of SILAC to the analysis of histones, in both standard and spike-in setups. We also illustrate its coupling to an implemented "shotgun" workflow, by which heavy arginine-labeled histone peptides, produced upon Arg-C digestion, are qualitatively and quantitatively analyzed in an LC-MS/MS system that combines ultrahigh-pressure liquid chromatography (UHPLC) with a new-generation Orbitrap high-resolution instrument.
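
    The spike-in SILAC comparison works by normalizing each sample's light (unlabeled) peptide signal to the shared heavy spike-in and then comparing the normalized values. A sketch with invented intensities:

    ```python
    def spike_in_ratio(light_a, heavy_a, light_b, heavy_b):
        """SILAC spike-in comparison: each sample's light peptide intensity is
        normalized to the common heavy-labeled spike-in measured in the same
        run, and the two samples are compared through the ratio of those
        ratios, cancelling run-to-run intensity differences."""
        return (light_a / heavy_a) / (light_b / heavy_b)

    # A histone peptide twice as abundant (relative to spike-in) in sample A:
    print(spike_in_ratio(light_a=8e6, heavy_a=2e6, light_b=4e6, heavy_b=2e6))
    # 2.0
    ```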

  20. Multiobjective optimization in quantitative structure-activity relationships: deriving accurate and interpretable QSARs.

    Science.gov (United States)

    Nicolotti, Orazio; Gillet, Valerie J; Fleming, Peter J; Green, Darren V S

    2002-11-07

    Deriving quantitative structure-activity relationship (QSAR) models that are accurate, reliable, and easily interpretable is a difficult task. In this study, two new methods have been developed that aim to find useful QSAR models that represent an appropriate balance between model accuracy and complexity. Both methods are based on genetic programming (GP). The first method, referred to as genetic QSAR (or GPQSAR), uses a penalty function to control model complexity. GPQSAR is designed to derive a single linear model that represents an appropriate balance between the variance and the number of descriptors selected for the model. The second method, referred to as multiobjective genetic QSAR (MoQSAR), is based on multiobjective GP and represents a new way of thinking of QSAR. Specifically, QSAR is considered as a multiobjective optimization problem that comprises a number of competitive objectives. Typical objectives include model fitting, the total number of terms, and the occurrence of nonlinear terms. MoQSAR results in a family of equivalent QSAR models where each QSAR represents a different tradeoff in the objectives. A practical consideration often overlooked in QSAR studies is the need for the model to promote an understanding of the biochemical response under investigation. To accomplish this, chemically intuitive descriptors are needed but do not always give rise to statistically robust models. This problem is addressed by the addition of a further objective, called chemical desirability, that aims to reward models that consist of descriptors that are easily interpretable by chemists. GPQSAR and MoQSAR have been tested on various data sets including the Selwood data set and two different solubility data sets. The study demonstrates that the MoQSAR method is able to find models that are at least as good as models derived using standard statistical approaches and also yields models that allow a medicinal chemist to trade statistical robustness for chemical
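
    The multiobjective view of QSAR described above rests on Pareto dominance among objective vectors, from which the family of equivalent trade-off models is the non-dominated set. A minimal sketch (objective names and values are illustrative; this is generic multiobjective machinery, not the MoQSAR implementation):

    ```python
    def dominates(a, b):
        """True if objective vector `a` Pareto-dominates `b`, with all
        objectives to be minimized (e.g. fitting error, number of terms,
        nonlinear-term count): `a` is no worse everywhere and strictly
        better somewhere."""
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

    def pareto_front(models):
        """Keep the models not dominated by any other model."""
        return [m for m in models
                if not any(dominates(o, m) for o in models if o is not m)]

    # (error, n_terms, n_nonlinear) for four candidate models:
    models = [(0.10, 5, 1), (0.12, 3, 0), (0.15, 5, 2), (0.10, 4, 1)]
    print(pareto_front(models))  # [(0.12, 3, 0), (0.10, 4, 1)]
    ```

    Each surviving model is one trade-off between accuracy and interpretability, mirroring the family of QSARs the abstract describes.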

  1. SNP Typing for Germplasm Identification of Amomum villosum Lour. Based on DNA Barcoding Markers

    OpenAIRE

    Qionglin Huang; Zhonggang Duan; Jinfen Yang; Xinye Ma; Ruoting Zhan; Hui Xu; Weiwen Chen

    2014-01-01

    Amomum villosum Lour., produced from Yangchun, Guangdong Province, China, is a Daodi medicinal material of Amomi Fructus in traditional Chinese medicine. This herb germplasm should be accurately identified and collected to ensure its quality and safety in medication. In the present study, single nucleotide polymorphism typing method was evaluated on the basis of DNA barcoding markers to identify the germplasm of Amomi Fructus. Genomic DNA was extracted from the leaves of 29 landraces represen...

  2. A Simplified Van Ert Single Nucleotide Polymorphism (SNP) Typing Method of Bacillus anthracis Applicable by Traditional Thermocycler Machines

    Directory of Open Access Journals (Sweden)

    Najafi Olya, Z. (BSc)

    2015-05-01

    Full Text Available SNP typing is now a well-established genotyping system in Bacillus anthracis studies. In the original standard method of Van Ert, SNPs at 13 loci of the B. anthracis genome are analyzed. In order to simplify this expensive method and adapt it to low-budget laboratory settings, 13 primer pairs targeting the 13 corresponding SNPs were designed. In addition, a universal PCR protocol was developed to enable simultaneous amplification of all loci with conventional PCR machines. The efficiency of this approach was confirmed by applying it to nine isolates of B. anthracis. We recommend this modified procedure as an efficient alternative to the Van Ert method until newer and more affordable techniques are developed.

  3. Autosomal SNP typing of forensic samples with the GenPlex(TM) HID System: Results of a collaborative study

    DEFF Research Database (Denmark)

    Tomas, C.; Axler-DiPerte, G.; Budimlija, Z.M.;

    2011-01-01

    The GenPlex(TM) HID System (Applied Biosystems - AB) offers typing of 48 of the 52 SNPforID SNPs and amelogenin. Previous studies have shown a high reproducibility of the GenPlex(TM) HID System using 250-500 pg DNA of good quality. An international exercise was performed by 14 laboratories (9 ...). To compare the GenPlex(TM) HID System with the most commonly used STR kits, 500 pg of partly degraded DNA from three samples was typed by the laboratories using one or more STR kits. The median SNP typing success rate was 92.3% with 500 pg of partly degraded DNA. Three of the fourteen laboratories accounted for more than two...

  4. ACCURATE QUANTITATIVE SPECTROSCOPY OF OB STARS: C AND N ABUNDANCES NEAR THE MAIN SEQUENCE

    Directory of Open Access Journals (Sweden)

    M. F. Nieva

    2008-01-01

    Full Text Available We present a state-of-the-art analysis technique able to simultaneously reproduce the entire H and He spectra of OB-type stars in the visual and the near-IR and to derive highly accurate metal abundances (so far C and N). The spectrum synthesis relies on a hybrid non-LTE approach involving our most recent model atoms. Accurate atmospheric parameters, practically free of systematic errors, are derived spectroscopically (from Stark-broadened H lines and ionization equilibria of He I/II and C II-IV) for a sample of randomly distributed stars in the solar vicinity. Highly consistent abundances are found, in contrast to previous reports indicating broad scatter and large uncertainties. The improvements result from avoidance of systematic errors in the parameter determination, which may be larger than expected in previous work, and a critical evaluation of atomic data for the model atom construction.

  5. Accurate radiation temperature and chemical potential from quantitative photoluminescence analysis of hot carrier populations

    Science.gov (United States)

    Gibelli, François; Lombez, Laurent; Guillemoles, Jean-François

    2017-02-01

    In order to characterize hot carrier populations in semiconductors, photoluminescence measurement is a convenient tool, enabling us to probe the carrier thermodynamical properties in a contactless way. However, the analysis of the photoluminescence spectra is based on some assumptions which will be discussed in this work. We especially emphasize the importance of the variation of the material absorptivity that should be considered to access accurate thermodynamical properties of the carriers, especially by varying the excitation power. The proposed method enables us to obtain more accurate results of thermodynamical properties by taking into account a rigorous physical description and finds direct application in investigating hot carrier solar cells, which are an adequate concept for achieving high conversion efficiencies with a relatively simple device architecture.

  6. Novel micelle PCR-based method for accurate, sensitive and quantitative microbiota profiling

    Science.gov (United States)

    Boers, Stefan A.; Hays, John P.; Jansen, Ruud

    2017-01-01

    In the last decade, many researchers have embraced 16S rRNA gene sequencing techniques, which has led to a wealth of publications and documented differences in the composition of microbial communities derived from many different ecosystems. However, comparison between different microbiota studies is currently very difficult due to the lack of a standardized 16S rRNA gene sequencing protocol. Here we report on a novel approach employing micelle PCR (micPCR) in combination with an internal calibrator that allows for standardization of microbiota profiles via their absolute abundances. The addition of an internal calibrator allows the researcher to express the resulting operational taxonomic units (OTUs) as a measure of 16S rRNA gene copies by correcting the number of sequences of each individual OTU in a sample for efficiency differences in the NGS process. Additionally, accurate quantification of OTUs obtained from negative extraction control samples allows for the subtraction of contaminating bacterial DNA derived from the laboratory environment or chemicals/reagents used. Using equimolar synthetic microbial community samples and low biomass clinical samples, we demonstrate that the calibrated micPCR/NGS methodology possess a much higher precision and a lower limit of detection compared with traditional PCR/NGS, resulting in more accurate microbiota profiles suitable for multi-study comparison. PMID:28378789
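
    The calibration arithmetic behind this internal-calibrator idea can be sketched in a few lines. This is an illustration only: the function names, spike-in copy number, and OTU counts below are invented and are not taken from the paper.

```python
def absolute_abundance(otu_reads, calibrator_reads, calibrator_copies):
    """Convert per-OTU read counts into 16S rRNA gene copy estimates by
    scaling against an internal calibrator of known copy number
    (illustrative of the micPCR/NGS calibration idea, not the authors' code)."""
    scale = calibrator_copies / calibrator_reads  # copies represented by one read
    return {otu: reads * scale for otu, reads in otu_reads.items()}

def subtract_contamination(sample_copies, control_copies):
    """Subtract contaminating copies quantified in a negative extraction control."""
    return {otu: max(copies - control_copies.get(otu, 0.0), 0.0)
            for otu, copies in sample_copies.items()}

# Hypothetical run: 1000 calibrator reads represent 1e4 spiked-in copies.
sample = absolute_abundance({"OTU_1": 8000, "OTU_2": 2000},
                            calibrator_reads=1000, calibrator_copies=1.0e4)
clean = subtract_contamination(sample, {"OTU_2": 5.0e3})
```

    The key point the abstract makes is visible here: once counts are expressed in absolute copies, background measured in a negative control can be subtracted meaningfully, which relative abundances do not allow.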

  7. There's plenty of gloom at the bottom: the many challenges of accurate quantitation in size-based oligomeric separations.

    Science.gov (United States)

    Striegel, André M

    2013-11-01

    There is a variety of small-molecule species (e.g., tackifiers, plasticizers, oligosaccharides) the size-based characterization of which is of considerable scientific and industrial importance. Likewise, quantitation of the amount of oligomers in a polymer sample is crucial for the import and export of substances into the USA and European Union (EU). While the characterization of ultra-high molar mass macromolecules by size-based separation techniques is generally considered a challenge, it is this author's contention that a greater challenge is encountered when trying to perform, for quantitation purposes, separations in and of the oligomeric region. The latter thesis is expounded herein, by detailing the various obstacles encountered en route to accurate, quantitative oligomeric separations by entropically dominated techniques such as size-exclusion chromatography, hydrodynamic chromatography, and asymmetric flow field-flow fractionation, as well as by methods which are, principally, enthalpically driven such as liquid adsorption and temperature gradient interaction chromatography. These obstacles include, among others, the diminished sensitivity of static light scattering (SLS) detection at low molar masses, the non-constancy of the response of SLS and of commonly employed concentration-sensitive detectors across the oligomeric region, and the loss of oligomers through the accumulation wall membrane in asymmetric flow field-flow fractionation. The battle is not lost, however, because, with some care and given a sufficient supply of sample, the quantitation of both individual oligomeric species and of the total oligomeric region is often possible.

  8. Quantitatively accurate activity measurements with a dedicated cardiac SPECT camera: Physical phantom experiments

    Energy Technology Data Exchange (ETDEWEB)

    Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn [Physics Department, Carleton University, Ottawa, Ontario K1S 5B6, Canada and Cardiology, The University of Ottawa Heart Institute, Ottawa, Ontario K1Y4W7 (Canada)

    2016-01-15

    Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail—an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to {sup 99m}Tc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 {sup 99m}Tc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical {sup 99m}Tc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic “lesion” was placed inside of the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE

  9. Allele-Specific Quantitative PCR for Accurate, Rapid, and Cost-Effective Genotyping.

    Science.gov (United States)

    Lee, Han B; Schwab, Tanya L; Koleilat, Alaa; Ata, Hirotaka; Daby, Camden L; Cervera, Roberto Lopez; McNulty, Melissa S; Bostwick, Hannah S; Clark, Karl J

    2016-06-01

    Customizable endonucleases such as transcription activator-like effector nucleases (TALENs) and clustered regularly interspaced short palindromic repeats/CRISPR-associated protein 9 (CRISPR/Cas9) enable rapid generation of mutant strains at genomic loci of interest in animal models and cell lines. With the accelerated pace of generating mutant alleles, genotyping has become a rate-limiting step to understanding the effects of genetic perturbation. Unless mutated alleles result in distinct morphological phenotypes, mutant strains need to be genotyped using standard methods in molecular biology. Classic restriction fragment length polymorphism (RFLP) or sequencing is labor-intensive and expensive. Although simpler than RFLP, current versions of allele-specific PCR may still require post-polymerase chain reaction (PCR) handling such as sequencing, or they are more expensive if allele-specific fluorescent probes are used. Commercial genotyping solutions can take weeks from assay design to result, and are often more expensive than assembling reactions in-house. Key components of commercial assay systems are often proprietary, which limits further customization. Therefore, we developed a one-step open-source genotyping method based on quantitative PCR. The allele-specific qPCR (ASQ) does not require post-PCR processing and can genotype germline mutants through either threshold cycle (Ct) or end-point fluorescence reading. ASQ utilizes allele-specific primers, a locus-specific reverse primer, universal fluorescent probes and quenchers, and hot start DNA polymerase. Individual laboratories can further optimize this open-source system as we completely disclose the sequences, reagents, and thermal cycling protocol. We have tested the ASQ protocol to genotype alleles in five different genes. ASQ showed a 98-100% concordance in genotype scoring with RFLP or Sanger sequencing outcomes. ASQ is time-saving because a single qPCR without post-PCR handling suffices to score
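
    The genotype-scoring logic of a two-reaction allele-specific qPCR can be illustrated with a toy caller. The Ct cutoff and allele-balance threshold below are hypothetical placeholders, not the published ASQ assay's validated values.

```python
def call_genotype(ct_wt, ct_mut, ct_cutoff=35.0, max_delta=5.0):
    """Toy genotype caller for a two-reaction allele-specific qPCR.
    ct_wt / ct_mut are threshold cycles from the wild-type- and
    mutant-specific reactions; None means no amplification within the run.
    Cutoffs are hypothetical, not the published assay's values."""
    wt_pos = ct_wt is not None and ct_wt <= ct_cutoff
    mut_pos = ct_mut is not None and ct_mut <= ct_cutoff
    if wt_pos and mut_pos:
        if abs(ct_wt - ct_mut) <= max_delta:
            return "heterozygous"       # both alleles amplify at similar Ct
        # large Ct gap: treat the late reaction as nonspecific amplification
        return "wild-type" if ct_wt < ct_mut else "homozygous mutant"
    if wt_pos:
        return "wild-type"
    if mut_pos:
        return "homozygous mutant"
    return "no call"                    # neither reaction amplified
```

    As the abstract notes, the same decision can be made from end-point fluorescence instead of Ct values; only the inputs change.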

  10. A method for accurate detection of genomic microdeletions using real-time quantitative PCR

    Directory of Open Access Journals (Sweden)

    Bassett Anne S

    2005-12-01

    Full Text Available Abstract Background Quantitative polymerase chain reaction (qPCR) is a well-established method for quantifying levels of gene expression, but has not been routinely applied to the detection of constitutional copy number alterations of human genomic DNA. Microdeletions or microduplications of the human genome are associated with a variety of genetic disorders. Although clinical laboratories routinely use fluorescence in situ hybridization (FISH) to identify such cryptic genomic alterations, there remains a significant number of individuals in which constitutional genomic imbalance is suspected, based on clinical parameters, but cannot be readily detected using current cytogenetic techniques. Results In this study, a novel application for real-time qPCR is presented that can be used to reproducibly detect chromosomal microdeletions and microduplications. This approach was applied to DNA from a series of patient samples and controls to validate genomic copy number alteration at cytoband 22q11. The study group comprised 12 patients with clinical symptoms of chromosome 22q11 deletion syndrome (22q11DS), 1 patient trisomic for 22q11 and 4 normal controls. 6 of the patients (group 1) had known hemizygous deletions, as detected by standard diagnostic FISH, whilst the remaining 6 patients (group 2) were classified as 22q11DS negative using the clinical FISH assay. Screening of the patients and controls with a set of 10 real-time qPCR primers, spanning the 22q11.2-deleted region and flanking sequence, confirmed the FISH assay results for all patients with 100% concordance. Moreover, this qPCR enabled a refinement of the region of deletion at 22q11. Analysis of DNA from the chromosome 22 trisomic sample demonstrated genomic duplication within 22q11. Conclusion In this paper we present a qPCR approach for the detection of chromosomal microdeletions and microduplications.
The strategic use of in silico modelling for qPCR primer design to avoid regions of repetitive
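
    A qPCR copy-number readout of this kind typically reduces to the comparative Ct calculation against a diploid reference locus. The sketch below assumes ~100% amplification efficiency and uses hypothetical interpretation cutoffs; it is a minimal illustration of the principle, not the authors' validated pipeline.

```python
def copy_number(ct_target_s, ct_ref_s, ct_target_c, ct_ref_c):
    """Estimate genomic copy number at a locus from qPCR threshold cycles
    using the comparative 2^-ddCt method against a 2-copy control sample.
    Assumes equal (~100%) amplification efficiencies for both amplicons."""
    ddct = (ct_target_s - ct_ref_s) - (ct_target_c - ct_ref_c)
    return 2.0 * 2.0 ** (-ddct)

def classify(cn, lo=1.3, hi=2.7):
    """Crude interpretation bands (hypothetical cutoffs, for illustration)."""
    if cn < lo:
        return "deletion"
    if cn > hi:
        return "duplication"
    return "normal"
```

    A hemizygous microdeletion shifts the target amplicon one cycle later than in the control (ddCt = +1), yielding an estimate of one copy; a duplication shifts it earlier.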

  11. High-throughput bacterial SNP typing identifies distinct clusters of Salmonella Typhi causing typhoid in Nepalese children

    LENUS (Irish Health Repository)

    Holt, Kathryn E

    2010-05-31

    Abstract Background Salmonella Typhi (S. Typhi) causes typhoid fever, which remains an important public health issue in many developing countries. Kathmandu, the capital of Nepal, is an area of high incidence and the pediatric population appears to be at high risk of exposure and infection. Methods We recently defined the population structure of S. Typhi, using new sequencing technologies to identify nearly 2,000 single nucleotide polymorphisms (SNPs) that can be used as unequivocal phylogenetic markers. Here we have used the GoldenGate (Illumina) platform to simultaneously type 1,500 of these SNPs in 62 S. Typhi isolates causing severe typhoid in children admitted to Patan Hospital in Kathmandu. Results Eight distinct S. Typhi haplotypes were identified during the 20-month study period, with 68% of isolates belonging to a subclone of the previously defined H58 S. Typhi. This subclone was closely associated with resistance to nalidixic acid, with all isolates from this group demonstrating a resistant phenotype and harbouring the same resistance-associated SNP in GyrA (Phe83). A secondary clone, comprising 19% of isolates, was observed only during the second half of the study. Conclusions Our data demonstrate the utility of SNP typing for monitoring bacterial populations over a defined period in a single endemic setting. We provide evidence for genotype introduction and define a nalidixic acid resistant subclone of S. Typhi, which appears to be the dominant cause of severe pediatric typhoid in Kathmandu during the study period.
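
    At its core, assigning an isolate to a SNP-defined haplotype is a profile-matching step. The sketch below is illustrative only: the marker names and haplotype definitions are made up (only the GyrA codon 83 resistance SNP is mentioned in the abstract), and the real study typed 1,500 SNPs against a phylogenetic framework.

```python
def assign_haplotype(isolate_calls, haplotype_defs):
    """Assign an isolate to the haplotype whose defining SNP alleles it
    matches best; returns (haplotype, fraction of typed markers matching).
    A toy illustration of SNP-profile clustering, not the study's method."""
    def score(defs):
        shared = [m for m in defs if m in isolate_calls]
        return sum(isolate_calls[m] == defs[m] for m in shared), len(shared)
    best = max(haplotype_defs, key=lambda h: score(haplotype_defs[h]))
    matches, typed = score(haplotype_defs[best])
    return best, (matches / typed if typed else 0.0)

# Hypothetical haplotype definitions (marker names invented).
h58_like = {"snp1": "A", "snp2": "G", "gyrA_83": "T"}
other    = {"snp1": "C", "snp2": "G", "gyrA_83": "C"}
hap, frac = assign_haplotype({"snp1": "A", "snp2": "G", "gyrA_83": "T"},
                             {"H58": h58_like, "H1": other})
```

    Because the SNPs are phylogenetically informative with negligible homoplasy, even a simple best-match rule over enough markers places an isolate unambiguously on the tree.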

  12. A Global Approach to Accurate and Automatic Quantitative Analysis of NMR Spectra by Complex Least-Squares Curve Fitting

    Science.gov (United States)

    Martin, Y. L.

    The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. 
This distortion is shown to be nearly independent of the chemical species, of the nature of the molecular site, and of the type of nucleus, but
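
    The frequency-domain strategy described above can be sketched for a single line. A complex Lorentzian is fitted by stacking real and imaginary residuals so both channels constrain the parameters; this minimal example (synthetic, noiseless data) omits the multi-line, baseline, and phase-correction machinery of the full method.

```python
import numpy as np
from scipy.optimize import least_squares

def lorentzian(f, amp, f0, width, phase):
    # Complex Lorentzian line shape: height amp, centre f0, half-width width.
    return amp * np.exp(1j * phase) / (1.0 + 1j * (f - f0) / width)

def fit_one_line(f, spectrum, guess):
    """Complex least-squares fit in the frequency domain. Real and imaginary
    residuals are concatenated so the optimizer sees both channels; a sketch
    of the approach, not the paper's conjugate-gradient implementation."""
    def residuals(p):
        d = lorentzian(f, *p) - spectrum
        return np.concatenate([d.real, d.imag])
    return least_squares(residuals, guess).x

f = np.linspace(-50.0, 50.0, 2001)
truth = (3.0, 4.2, 1.5, 0.3)                          # synthetic test line
est = fit_one_line(f, lorentzian(f, *truth), guess=(2.0, 3.0, 1.0, 0.0))
```

    With noiseless data the fit recovers amplitude, frequency, damping, and phase essentially exactly, which is the baseline against which spectral residuals (satellite lines, lineshape distortion) are then examined.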

  13. Two new rapid SNP-typing methods for classifying Mycobacterium tuberculosis complex into the main phylogenetic lineages.

    Directory of Open Access Journals (Sweden)

    David Stucki

    Full Text Available There is increasing evidence that strain variation in the Mycobacterium tuberculosis complex (MTBC) might influence the outcome of tuberculosis infection and disease. To assess genotype-phenotype associations, phylogenetically robust molecular markers and appropriate genotyping tools are required. Most current genotyping methods for MTBC are based on mobile or repetitive DNA elements. Because these elements are prone to convergent evolution, the corresponding genotyping techniques are suboptimal for phylogenetic studies and strain classification. By contrast, single nucleotide polymorphisms (SNPs) are ideal markers for classifying MTBC into phylogenetic lineages, as they exhibit very low degrees of homoplasy. In this study, we developed two complementary SNP-based genotyping methods to classify strains into the six main human-associated lineages of MTBC, the "Beijing" sublineage, and the clade comprising Mycobacterium bovis and Mycobacterium caprae. Phylogenetically informative SNPs were obtained from 22 MTBC whole-genome sequences. The first assay, referred to as MOL-PCR, is a ligation-dependent PCR with signal detection by fluorescent microspheres and a Luminex flow cytometer, which simultaneously interrogates eight SNPs. The second assay is based on six individual TaqMan real-time PCR assays for singleplex SNP typing. We compared MOL-PCR and TaqMan results in two panels of clinical MTBC isolates. Both methods agreed fully when assigning 36 well-characterized strains into the main phylogenetic lineages. The sensitivity in allele-calling was 98.6% and 98.8% for MOL-PCR and TaqMan, respectively. Typing of an additional panel of 78 unknown clinical isolates revealed 99.2% and 100% sensitivity in allele-calling, respectively, and 100% agreement in lineage assignment between both methods.
While MOL-PCR and TaqMan are both highly sensitive and specific, MOL-PCR is ideal for classification of isolates with no previous information, whereas TaqMan is faster

  14. Renal Cortical Lactate Dehydrogenase: A Useful, Accurate, Quantitative Marker of In Vivo Tubular Injury and Acute Renal Failure.

    Directory of Open Access Journals (Sweden)

    Richard A Zager

    Full Text Available Studies of experimental acute kidney injury (AKI) are critically dependent on having precise methods for assessing the extent of tubular cell death. However, the most widely used techniques either provide indirect assessments (e.g., BUN, creatinine), suffer from the need for semi-quantitative grading (renal histology), or reflect the status of residual viable, not the number of lost, renal tubular cells (e.g., NGAL content). Lactate dehydrogenase (LDH) release is a highly reliable test for assessing degrees of in vitro cell death. However, its utility as an in vivo AKI marker has not been defined. Towards this end, CD-1 mice were subjected to graded renal ischemia (0, 15, 22, 30, 40, or 60 min) or to nephrotoxic (glycerol; maleate) AKI. Sham-operated mice, or mice with AKI in the absence of acute tubular necrosis (ureteral obstruction; endotoxemia), served as negative controls. Renal cortical LDH or NGAL levels were assayed 2 or 24 hrs later. Ischemic, glycerol-, and maleate-induced AKI were each associated with striking, steep, inverse correlations (r, -0.89) between renal injury severity and renal LDH content. With severe AKI, >65% LDH declines were observed. Corresponding prompt plasma and urinary LDH increases were observed. These observations, coupled with the maintenance of normal cortical LDH mRNA levels, indicated that renal LDH efflux, not decreased LDH synthesis, caused the falling cortical LDH levels. Renal LDH content was well maintained with sham surgery, ureteral obstruction, or endotoxemic AKI. In contrast to LDH, renal cortical NGAL levels did not correlate with AKI severity. In sum, the above results indicate that renal cortical LDH assay is a highly accurate quantitative technique for gauging the extent of experimental acute ischemic and toxic renal injury. That it avoids the limitations of more traditional AKI markers implies great potential utility in experimental studies that require precise quantitation of tubule cell death.
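
    The reported injury-severity vs. cortical-LDH relationship is a straightforward Pearson correlation. The sketch below computes it without external libraries; the LDH values are invented to mimic the inverse trend, not the study's data.

```python
def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient (no SciPy required)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / (sxx * syy) ** 0.5

ischemia_min = [0, 15, 22, 30, 40, 60]           # graded ischemia times (from abstract)
ldh_percent  = [100, 80, 70, 55, 45, 30]         # hypothetical cortical LDH, % of sham
r = pearson_r(ischemia_min, ldh_percent)
```

    With a monotone decline like this, r falls near -0.99, consistent in spirit with the steep inverse correlations (around -0.89) the authors report.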

  15. Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR

    Directory of Open Access Journals (Sweden)

    Pricila da Silva Cunha

    2014-01-01

    Full Text Available Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs.

  16. Accurate measurement of circulating mitochondrial DNA content from human blood samples using real-time quantitative PCR.

    Science.gov (United States)

    Ajaz, Saima; Czajka, Anna; Malik, Afshan

    2015-01-01

    We describe a protocol to accurately measure the amount of human mitochondrial DNA (MtDNA) in peripheral blood samples which can be modified to quantify MtDNA from other body fluids, human cells, and tissues. This protocol is based on the use of real-time quantitative PCR (qPCR) to quantify the amount of MtDNA relative to nuclear DNA (designated the Mt/N ratio). In the last decade, there have been increasing numbers of studies describing altered MtDNA or Mt/N in circulation in common nongenetic diseases where mitochondrial dysfunction may play a role (for review see Malik and Czajka, Mitochondrion 13:481-492, 2013). These studies are distinct from those looking at genetic mitochondrial disease and are attempting to identify acquired changes in circulating MtDNA content as an indicator of mitochondrial function. However, the methodology being used is not always specific and reproducible. As more than 95 % of the human mitochondrial genome is duplicated in the human nuclear genome, it is important to avoid co-amplification of nuclear pseudogenes. Furthermore, template preparation protocols can also affect the results because of the size and structural differences between the mitochondrial and nuclear genomes. Here we describe how to (1) prepare DNA from blood samples; (2) pretreat the DNA to prevent dilution bias; (3) prepare dilution standards for absolute quantification using the unique primers human mitochondrial genome forward primer (hMitoF3) and human mitochondrial genome reverse primer(hMitoR3) for the mitochondrial genome, and human nuclear genome forward primer (hB2MF1) and human nuclear genome reverse primer (hB2MR1) primers for the human nuclear genome; (4) carry out qPCR for either relative or absolute quantification from test samples; (5) analyze qPCR data; and (6) calculate the sample size to adequately power studies. The protocol presented here is suitable for high-throughput use.
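
    Once pseudogene-free primers and careful template preparation are in place, the Mt/N ratio itself is simple arithmetic. One common relative-quantification form is sketched below; it assumes a single-copy nuclear amplicon (2 copies per diploid cell) and a shared amplification efficiency, whereas the protocol above also supports absolute quantification against dilution standards.

```python
def mt_n_ratio(ct_mito, ct_nuclear, efficiency=2.0):
    """Mitochondrial-to-nuclear genome ratio (Mt/N) from qPCR Ct values.
    Assumes the nuclear amplicon is single-copy (2 copies per diploid cell)
    and that both reactions share the given amplification efficiency.
    The protocol's primers (hMitoF3/R3 for mtDNA, hB2MF1/R1 for the nuclear
    genome) are chosen to avoid co-amplifying nuclear pseudogenes."""
    delta_ct = ct_nuclear - ct_mito   # mtDNA crosses threshold earlier when abundant
    return 2.0 * efficiency ** delta_ct
```

    For example, a mitochondrial Ct eight cycles earlier than the nuclear Ct corresponds to roughly 512 mtDNA copies per cell under these assumptions.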

  17. Qualitative and quantitative determination of YiXinShu Tablet using ultra high performance liquid chromatography with Q Exactive hybrid quadrupole orbitrap high-resolution accurate mass spectrometry.

    Science.gov (United States)

    Sun, Zhi; Li, Zhuolun; Zuo, Lihua; Wang, Zhenhui; Zhou, Lin; Shi, Yingying; Kang, Jian; Zhu, Zhenfeng; Zhang, Xiaojian

    2017-08-24

    To clarify and quantify the chemical profile of YiXinShu Tablet rapidly, a feasible and accurate strategy was developed by applying ultra high performance liquid chromatography with Q Exactive hybrid quadrupole orbitrap high-resolution accurate mass spectrometry. A total of 105 components were identified, including 25 phenanthraquinones, 11 lactones, 19 lignans, 24 acids and 26 other compounds. Among them, 26 major compounds were unambiguously detected by comparing with reference standards. Nineteen of these compounds were selected for quantitative determination in three batches of YiXinShu Tablet. (Z)-Ligustilide, salvianic acid A, salvianolic acid A, salvianolic acid B and rosmarinic acid were abundant in these three batches, with contents over 1.000 mg/g. The established analysis methods were verified to be accurate and feasible. The results show that the ultra high performance liquid chromatography with Q Exactive hybrid quadrupole orbitrap high-resolution accurate mass spectrometry method has a powerful qualitative ability and promising quantitative application. This article is protected by copyright. All rights reserved.

  18. Qualitative and quantitative proteomic profiling of cripto(-/-) embryonic stem cells by means of accurate mass LC-MS analysis.

    Science.gov (United States)

    Chambery, Angela; Vissers, Johannes P C; Langridge, James I; Lonardo, Enza; Minchiotti, Gabriella; Ruvo, Menotti; Parente, Augusto

    2009-02-01

    Cripto is one of the key regulators of embryonic stem cells (ESCs) differentiation into cardiomyocites vs neuronal fate. Cripto(-/-) murine ESCs have been utilized to investigate the molecular mechanisms underlying early events of mammalian lineage differentiation. 2D/LC-MS/MS and a label-free LC-MS approaches were used to qualitatively and quantitatively profile the cripto(-/-) ESC proteome, providing an integral view of the alterations induced in stem cell functions by deleting the cripto gene.

  19. Can a quantitative simulation of an Otto engine be accurately rendered by a simple Novikov model with heat leak?

    Science.gov (United States)

    Fischer, A.; Hoffmann, K.-H.

    2004-03-01

    In this case study a complex Otto engine simulation provides data including, but not limited to, effects from losses due to heat conduction, exhaust losses and frictional losses. This data is used as a benchmark to test whether the Novikov engine with heat leak, a simple endoreversible model, can reproduce the complex engine behavior quantitatively by an appropriate choice of model parameters. The reproduction obtained proves to be of high quality.
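
    The Novikov model at its simplest gives the well-known maximum-power efficiency 1 - sqrt(Tc/Th). A minimal sketch of adding a heat leak is shown below; the bypass-leak form and the numbers are illustrative assumptions, not the loss parameters fitted in the paper.

```python
import math

def novikov_efficiency(t_hot, t_cold):
    """Efficiency of the Novikov endoreversible engine at maximum power:
    eta = 1 - sqrt(Tc/Th) (the Novikov / Curzon-Ahlborn result)."""
    return 1.0 - math.sqrt(t_cold / t_hot)

def efficiency_with_leak(t_hot, t_cold, q_in, q_leak):
    """Illustrative heat-leak extension: a leak q_leak bypasses the converter,
    so the work W = eta_N * q_in is unchanged but the fuel heat grows,
    lowering the overall efficiency W / (q_in + q_leak)."""
    work = novikov_efficiency(t_hot, t_cold) * q_in
    return work / (q_in + q_leak)
```

    For instance, with Th = 1200 K and Tc = 300 K the leak-free efficiency is 0.5, and a leak equal to a quarter of the converter's heat input pulls it down to 0.4; tuning q_leak is the kind of parameter choice used to match the complex engine data.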

  20. Infectious titres of sheep scrapie and bovine spongiform encephalopathy agents cannot be accurately predicted from quantitative laboratory test results.

    Science.gov (United States)

    González, Lorenzo; Thorne, Leigh; Jeffrey, Martin; Martin, Stuart; Spiropoulos, John; Beck, Katy E; Lockey, Richard W; Vickery, Christopher M; Holder, Thomas; Terry, Linda

    2012-11-01

    It is widely accepted that abnormal forms of the prion protein (PrP) are the best surrogate marker for the infectious agent of prion diseases and, in practice, the detection of such disease-associated (PrP(d)) and/or protease-resistant (PrP(res)) forms of PrP is the cornerstone of diagnosis and surveillance of the transmissible spongiform encephalopathies (TSEs). Nevertheless, some studies question the consistent association between infectivity and abnormal PrP detection. To address this discrepancy, 11 brain samples of sheep affected with natural scrapie or experimental bovine spongiform encephalopathy were selected on the basis of the magnitude and predominant types of PrP(d) accumulation, as shown by immunohistochemical (IHC) examination; contra-lateral hemi-brain samples were inoculated at three different dilutions into transgenic mice overexpressing ovine PrP and were also subjected to quantitative analysis by three biochemical tests (BCTs). Six samples gave 'low' infectious titres (10⁶·⁵ to 10⁶·⁷ LD₅₀ g⁻¹) and five gave 'high titres' (10⁸·¹ to ≥ 10⁸·⁷ LD₅₀ g⁻¹) and, with the exception of the Western blot analysis, those two groups tended to correspond with samples with lower PrP(d)/PrP(res) results by IHC/BCTs. However, no statistical association could be confirmed due to high individual sample variability. It is concluded that although detection of abnormal forms of PrP by laboratory methods remains useful to confirm TSE infection, infectivity titres cannot be predicted from quantitative test results, at least for the TSE sources and host PRNP genotypes used in this study. Furthermore, the near inverse correlation between infectious titres and Western blot results (high protease pre-treatment) argues for a dissociation between infectivity and PrP(res).

  1. A reliable and accurate portable device for rapid quantitative estimation of iodine content in different types of edible salt

    Directory of Open Access Journals (Sweden)

    Kapil Yadav

    2015-01-01

    Full Text Available Background: Continuous monitoring of salt iodization to ensure the success of the Universal Salt Iodization (USI) program can be significantly strengthened by the use of a simple, safe, and rapid method of salt iodine estimation. This study assessed the validity of a new portable device, iCheck Iodine, developed by BioAnalyt GmbH to estimate the iodine content in salt. Materials and Methods: Validation of the device was conducted in the laboratory of the South Asia regional office of the International Council for Control of Iodine Deficiency Disorders (ICCIDD). The validity of the device was assessed using device-specific indicators, comparison of the iCheck Iodine device with iodometric titration, and comparison between iodine estimation using 1 g and 10 g of salt by iCheck Iodine, using 116 salt samples procured from various small-, medium-, and large-scale salt processors across India. Results: The intra- and interassay imprecision for 10 parts per million (ppm), 30 ppm, and 50 ppm concentrations of iodized salt were 2.8%, 6.1%, and 3.1%, and 2.4%, 2.2%, and 2.1%, respectively. Interoperator imprecision was 6.2%, 6.3%, and 4.6% for salt with iodine concentrations of 10 ppm, 30 ppm, and 50 ppm, respectively. The correlation coefficient between measurements by the two methods was 0.934, and the correlation coefficient between measurements using 1 g and 10 g of iodized salt by the iCheck Iodine device was 0.983. Conclusions: The iCheck Iodine device is reliable and provides a valid method for the quantitative estimation of the iodine content of iodized salt fortified with potassium iodate, both in field settings and for different types of salt.
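
    The "imprecision" percentages quoted above are coefficients of variation over replicate readings. The calculation is sketched below; the replicate values are invented for illustration, not the study's measurements.

```python
def cv_percent(values):
    """Coefficient of variation (%): sample standard deviation over the mean,
    the quantity reported as intra-/interassay imprecision."""
    n = len(values)
    mean = sum(values) / n
    if n < 2 or mean == 0.0:
        return 0.0
    var = sum((v - mean) ** 2 for v in values) / (n - 1)  # sample variance
    return 100.0 * var ** 0.5 / mean

replicates_30ppm = [29.1, 30.4, 31.0, 28.7, 30.8]  # hypothetical ppm readings
cv = cv_percent(replicates_30ppm)
```

    A CV in the low single digits, as in this example, is the level of repeatability the validation reports for the device.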

  2. Rapid, Accurate, and Quantitative Detection of Propranolol in Multiple Human Biofluids via Surface-Enhanced Raman Scattering.

    Science.gov (United States)

    Subaihi, Abdu; Almanqur, Laila; Muhamadali, Howbeer; AlMasoud, Najla; Ellis, David I; Trivedi, Drupad K; Hollywood, Katherine A; Xu, Yun; Goodacre, Royston

    2016-11-15

    There has been an increasing demand for rapid and sensitive techniques for the identification and quantification of pharmaceutical compounds in human biofluids during the past few decades, and surface-enhanced Raman scattering (SERS) is one of a number of physicochemical techniques with the potential to meet these demands. In this study we have developed a SERS-based analytical approach for the assessment of human biofluids in combination with chemometrics. This novel approach has enabled the detection and quantification of the β-blocker propranolol spiked into human serum, plasma, and urine at physiologically relevant concentrations. A range of multivariate statistical analysis techniques, including principal component analysis (PCA), principal component-discriminant function analysis (PC-DFA) and partial least-squares regression (PLSR) were employed to investigate the relationship between the full SERS spectral data and the level of propranolol. The SERS spectra when combined with PCA and PC-DFA demonstrated clear differentiation of neat biofluids and biofluids spiked with varying concentrations of propranolol ranging from 0 to 120 μM, and clear trends in ordination scores space could be correlated with the level of propranolol. Since PCA and PC-DFA are categorical classifiers, PLSR modeling was subsequently used to provide accurate propranolol quantification within all biofluids with high prediction accuracy (expressed as root-mean-square error of predictions) of 0.58, 9.68, and 1.69 for serum, plasma, and urine respectively, and these models also had excellent linearity for the training and test sets between 0 and 120 μM. The limit of detection as calculated from the area under the naphthalene ring vibration from propranolol was 133.1 ng/mL (0.45 μM), 156.8 ng/mL (0.53 μM), and 168.6 ng/mL (0.57 μM) for serum, plasma, and urine, respectively. This result shows a consistent signal irrespective of biofluid, and all are well within the expected physiological

  3. Toward Quantitatively Accurate Calculation of the Redox-Associated Acid-Base and Ligand Binding Equilibria of Aquacobalamin.

    Science.gov (United States)

    Johnston, Ryne C; Zhou, Jing; Smith, Jeremy C; Parks, Jerry M

    2016-08-04

    Redox processes in complex transition metal-containing species are often intimately associated with changes in ligand protonation states and metal coordination number. A major challenge is therefore to develop consistent computational approaches for computing pH-dependent redox and ligand dissociation properties of organometallic species. Reduction of the Co center in the vitamin B12 derivative aquacobalamin can be accompanied by ligand dissociation, protonation, or both, making these properties difficult to compute accurately. We examine this challenge here by using density functional theory and continuum solvation to compute Co-ligand binding equilibrium constants (Kon/off), pKas, and reduction potentials for models of aquacobalamin in aqueous solution. We consider two models for cobalamin ligand coordination: the first follows the hexa, penta, tetra coordination scheme for Co(III), Co(II), and Co(I) species, respectively, and the second model features saturation of each vacant axial coordination site on Co(II) and Co(I) species with a single, explicit water molecule to maintain six directly interacting ligands or water molecules in each oxidation state. Comparing these two coordination schemes in combination with five dispersion-corrected density functionals, we find that the accuracy of the computed properties is largely independent of the scheme used, but including only a continuum representation of the solvent yields marginally better results than saturating the first solvation shell around Co throughout. PBE performs best, displaying balanced accuracy and superior performance overall, with RMS errors of 80 mV for seven reduction potentials, 2.0 log units for five pKas and 2.3 log units for two log Kon/off values for the aquacobalamin system. 
Furthermore, we find that the BP86 functional commonly used in corrinoid studies suffers from erratic behavior and inaccurate descriptions of Co-axial ligand binding, leading to substantial errors in predicted pKas and

  4. A High Resolution/Accurate Mass (HRAM) Data-Dependent MS3 Neutral Loss Screening, Classification, and Relative Quantitation Methodology for Carbonyl Compounds in Saliva

    Science.gov (United States)

    Dator, Romel; Carrà, Andrea; Maertens, Laura; Guidolin, Valeria; Villalta, Peter W.; Balbo, Silvia

    2016-10-01

    Reactive carbonyl compounds (RCCs) are ubiquitous in the environment and are generated endogenously as a result of various physiological and pathological processes. These compounds can react with biological molecules inducing deleterious processes believed to be at the basis of their toxic effects. Several of these compounds are implicated in neurotoxic processes, aging disorders, and cancer. Therefore, a method characterizing exposures to these chemicals will provide insights into how they may influence overall health and contribute to disease pathogenesis. Here, we have developed a high resolution accurate mass (HRAM) screening strategy allowing simultaneous identification and relative quantitation of DNPH-derivatized carbonyls in human biological fluids. The screening strategy involves the diagnostic neutral loss of hydroxyl radical triggering MS3 fragmentation, which is only observed in positive ionization mode of DNPH-derivatized carbonyls. Unique fragmentation pathways were used to develop a classification scheme for characterizing known and unanticipated/unknown carbonyl compounds present in saliva. Furthermore, a relative quantitation strategy was implemented to assess variations in the levels of carbonyl compounds before and after exposure using deuterated d 3 -DNPH. This relative quantitation method was tested on human samples before and after exposure to specific amounts of alcohol. The nano-electrospray ionization (nano-ESI) in positive mode afforded excellent sensitivity with detection limits on-column in the high-attomole levels. To the best of our knowledge, this is the first report of a method using HRAM neutral loss screening of carbonyl compounds. In addition, the method allows simultaneous characterization and relative quantitation of DNPH-derivatized compounds using nano-ESI in positive mode.
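The trigger logic described, MS3 initiated on a diagnostic neutral loss of a hydroxyl radical from the DNPH-derivatized precursor, amounts to a mass-difference test within a ppm tolerance. A minimal sketch assuming singly charged ions; the example m/z values are hypothetical:

```python
OH_RADICAL = 17.00274  # monoisotopic mass of the hydroxyl radical (Da)

def matches_neutral_loss(precursor_mz, fragment_mz, loss=OH_RADICAL, ppm_tol=10.0):
    """True if the fragment corresponds to the precursor minus the neutral loss
    (singly charged ions assumed), within a ppm mass tolerance."""
    expected = precursor_mz - loss
    return abs(fragment_mz - expected) / expected * 1e6 <= ppm_tol

# Hypothetical DNPH-derivatized carbonyl precursor and its MS2 fragment
print(matches_neutral_loss(209.0564, 192.0537))  # → True
```

In a data-dependent acquisition, a match like this on an MS2 fragment would be the condition for scheduling the MS3 scan.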

  6. Quantitative LC-MS of polymers: determining accurate molecular weight distributions by combined size exclusion chromatography and electrospray mass spectrometry with maximum entropy data processing.

    Science.gov (United States)

    Gruendling, Till; Guilhaus, Michael; Barner-Kowollik, Christopher

    2008-09-15

    We report on the successful application of size exclusion chromatography (SEC) combined with electrospray ionization mass spectrometry (ESI-MS) and refractive index (RI) detection for the determination of accurate molecular weight distributions of synthetic polymers, corrected for chromatographic band broadening. The presented method makes use of the ability of ESI-MS to accurately depict the peak profiles and retention volumes of individual oligomers eluting from the SEC column, whereas quantitative information on the absolute concentration of oligomers is obtained from the RI-detector only. A sophisticated computational algorithm based on the maximum entropy principle is used to process the data gained by both detectors, yielding an accurate molecular weight distribution, corrected for chromatographic band broadening. Poly(methyl methacrylate) standards with molecular weights up to 10 kDa serve as model compounds. Molecular weight distributions (MWDs) obtained by the maximum entropy procedure are compared to MWDs, which were calculated by a conventional calibration of the SEC-retention time axis with peak retention data obtained from the mass spectrometer. Comparison showed that for the employed chromatographic system, distributions below 7 kDa were only weakly influenced by chromatographic band broadening. However, the maximum entropy algorithm could successfully correct the MWD of a 10 kDa standard for band broadening effects. Molecular weight averages were between 5 and 14% lower than the manufacturer stated data obtained by classical means of calibration. The presented method demonstrates a consistent approach for analyzing data obtained by coupling mass spectrometric detectors and concentration sensitive detectors to polymer liquid chromatography.
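Once the RI trace supplies the concentration of each oligomer identified by ESI-MS, the molecular weight averages follow from the standard definitions Mn = Σwᵢ / Σ(wᵢ/Mᵢ) and Mw = ΣwᵢMᵢ / Σwᵢ for mass concentrations wᵢ. A minimal sketch (the maximum entropy band-broadening correction itself is not reproduced here):

```python
def mw_averages(masses, weights):
    """Number- (Mn) and weight-average (Mw) molecular weight from oligomer
    masses and their mass concentrations (e.g. RI-detector peak areas)."""
    s_w = sum(weights)
    mn = s_w / sum(w / m for m, w in zip(masses, weights))
    mw = sum(w * m for m, w in zip(masses, weights)) / s_w
    return mn, mw

# Two hypothetical oligomers at equal mass concentration
print(mw_averages([1000.0, 2000.0], [1.0, 1.0]))  # Mn ≈ 1333.3, Mw = 1500.0
```

The dispersity Mw/Mn then quantifies the breadth of the distribution that band broadening tends to overestimate.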

  7. Accurate and easy-to-use assessment of contiguous DNA methylation sites based on proportion competitive quantitative-PCR and lateral flow nucleic acid biosensor.

    Science.gov (United States)

    Xu, Wentao; Cheng, Nan; Huang, Kunlun; Lin, Yuehe; Wang, Chenguang; Xu, Yuancong; Zhu, Longjiao; Du, Dan; Luo, Yunbo

    2016-06-15

    Many types of diagnostic technologies have been reported for DNA methylation, but they require a standard curve for quantification or only show moderate accuracy. Moreover, most technologies have difficulty providing information on the level of methylation at specific contiguous multi-sites, not to mention easy-to-use detection to eliminate labor-intensive procedures. We have addressed these limitations and report here a cascade strategy that combines proportion competitive quantitative PCR (PCQ-PCR) and lateral flow nucleic acid biosensor (LFNAB), resulting in accurate and easy-to-use assessment. The P16 gene with specific multi-methylated sites, a well-studied tumor suppressor gene, was used as the target DNA sequence model. First, PCQ-PCR provided amplification products with an accurate proportion of multi-methylated sites following the principle of proportionality, and double-labeled duplex DNA was synthesized. Then, a LFNAB strategy was further employed for amplified signal detection via immune affinity recognition, and the exact level of site-specific methylation could be determined by the relative intensity of the test line and internal reference line. This combination resulted in all recoveries being greater than 94%, which are pretty satisfactory recoveries in DNA methylation assessment. Moreover, the developed cascades show significantly high usability as a simple, sensitive, and low-cost tool. Therefore, as a universal platform for sensing systems for the detection of contiguous multi-sites of DNA methylation without external standards and expensive instrumentation, this PCQ-PCR-LFNAB cascade method shows great promise for the point-of-care diagnosis of cancer risk and therapeutics.

  8. Simultaneous measurement in mass and mass/mass mode for accurate qualitative and quantitative screening analysis of pharmaceuticals in river water.

    Science.gov (United States)

    Martínez Bueno, M J; Ulaszewska, Maria M; Gomez, M J; Hernando, M D; Fernández-Alba, A R

    2012-09-21

    A new approach for the analysis of pharmaceuticals (target and non-target) in water by LC-QTOF-MS is described in this work. The study has been designed to assess the performance of the simultaneous quantitative screening of target compounds, and the qualitative analysis of non-target analytes, in just one run. The features of accurate mass full scan mass spectrometry together with high MS/MS spectral acquisition rates - by means of information dependent acquisition (IDA) - have demonstrated their potential application in this work. Applying this analytical strategy, an identification procedure is presented based on library searching for compounds which were not included a priori in the analytical method as target compounds, thus allowing their characterization by data processing of accurate mass measurements in MS and MS/MS mode. The non-target compounds identified in river water samples were ketorolac, trazodone, fluconazole, metformin and venlafaxine. Simultaneously, this strategy allowed for the identification of other compounds which were not included in the library by screening the highest intensity peaks detected in the samples and by analysis of the full scan TOF-MS, isotope pattern and MS/MS spectra - the example of loratadine (histaminergic) is described. The group of drugs of abuse selected as target compounds for evaluation included analgesics, opioids and psychostimulants. Satisfactory results regarding sensitivity and linearity of the developed method were obtained. Limits of detection for the selected target compounds were from 0.003 to 0.01 μg/L and 0.01 to 0.5 μg/L, in MS and MS/MS mode, respectively - by direct sample injection of 100 μL.

  9. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    Energy Technology Data Exchange (ETDEWEB)

Malik, Afshan N., E-mail: afshan.malik@kcl.ac.uk [King's College London, Diabetes Research Group, Division of Diabetes and Nutritional Sciences, School of Medicine (United Kingdom)]; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil [King's College London, Diabetes Research Group, Division of Diabetes and Nutritional Sciences, School of Medicine (United Kingdom)]

    2011-08-19

Highlights: • Mitochondrial dysfunction is central to many diseases of oxidative stress. • 95% of the mitochondrial genome is duplicated in the nuclear genome. • Dilution of untreated genomic DNA leads to dilution bias. • Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved and we have identified that current methods have at least one of the following three problems: (1) As much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA which are repetitive and/or highly variable for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes causes a "dilution bias" when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome, a unique single copy region in the nuclear genome, and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.

  10. Tandem Mass Spectrometry Measurement of the Collision Products of Carbamate Anions Derived from CO2 Capture Sorbents: Paving the Way for Accurate Quantitation

    Science.gov (United States)

    Jackson, Phil; Fisher, Keith J.; Attalla, Moetaz Ibrahim

    2011-08-01

The reaction between CO2 and aqueous amines to produce a charged carbamate product plays a crucial role in post-combustion capture chemistry when primary and secondary amines are used. In this paper, we report the low energy negative-ion CID results for several anionic carbamates derived from primary and secondary amines commonly used as post-combustion capture solvents. The study was performed using the modern equivalent of a triple quadrupole instrument equipped with a T-wave collision cell. Deuterium labeling of 2-aminoethanol (1,1,2,2-d4-2-aminoethanol) and computations at the M06-2X/6-311++G(d,p) level were used to confirm the identity of the fragmentation products for 2-hydroxyethylcarbamate (derived from 2-aminoethanol), in particular the ions CN-, NCO- and facile neutral losses of CO2 and water; there is precedent for the latter in condensed phase isocyanate chemistry. The fragmentations of 2-hydroxyethylcarbamate were generalized for carbamate anions derived from other capture amines, including ethylenediamine, diethanolamine, and piperazine. We also report unequivocal evidence for the existence of carbamate anions derived from sterically hindered amines (Tris(hydroxymethyl)aminomethane and 2-methyl-2-aminopropanol). For the suite of carbamates investigated, diagnostic losses include the decarboxylation product (-CO2, 44 mass units), loss of 46 mass units, and the fragments NCO- (m/z 42) and CN- (m/z 26). We also report low energy CID results for the dicarbamate dianion (-O2CNHC2H4NHCO2-) commonly encountered in CO2 capture solutions utilizing ethylenediamine. Finally, we demonstrate a promising ion chromatography-MS based procedure for the separation and quantitation of aqueous anionic carbamates, which is based on the reported CID findings. The availability of accurate quantitation methods for ionic CO2 capture products could lead to dynamic operational tuning of CO2 capture plants and, thus, cost-savings via real-time manipulation of solvent

  11. Mitochondrial DNA as a non-invasive biomarker: accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias.

    Science.gov (United States)

    Malik, Afshan N; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-08-19

    Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved and we have identified that current methods have at least one of the following three problems: (1) As much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA which are repetitive and/or highly variable for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes cause a "dilution bias" when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome; unique single copy region in the nuclear genome and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
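The Mt/N ratio itself follows from the two Ct values by the usual exponential relation. A simplified sketch, assuming equal amplification efficiencies for the mitochondrial and nuclear amplicons (not the authors' exact protocol, which also includes template pretreatment):

```python
def mtdna_copy_number(ct_mito, ct_nuclear, efficiency=2.0):
    """Relative MtDNA copies per nuclear genome copy from qPCR Ct values.
    Assumes both amplicons amplify with the same efficiency (2.0 = perfect
    doubling per cycle); Mt/N = efficiency ** (Ct_nuclear - Ct_mito)."""
    return efficiency ** (ct_nuclear - ct_mito)

# Mitochondrial amplicon crosses threshold 8 cycles before the nuclear one
print(mtdna_copy_number(18.0, 26.0))  # → 256.0
```

The dilution bias the authors describe shows up precisely here: if diluting the template shifts the two Ct values by different amounts, the computed ratio changes with input amount even though the true Mt/N does not.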

  12. Allele Specific Locked Nucleic Acid Quantitative PCR (ASLNAqPCR): An Accurate and Cost-Effective Assay to Diagnose and Quantify KRAS and BRAF Mutation

    Science.gov (United States)

    Morandi, Luca; de Biase, Dario; Visani, Michela; Cesari, Valentina; De Maglio, Giovanna; Pizzolitto, Stefano; Pession, Annalisa; Tallini, Giovanni

    2012-01-01

The use of tyrosine kinase inhibitors (TKIs) requires testing for hot spot mutations of the molecular effectors downstream of the membrane-bound tyrosine kinases, since their wild type status is expected for response to TKI therapy. We report a novel assay that we have called Allele Specific Locked Nucleic Acid quantitative PCR (ASLNAqPCR). The assay uses LNA-modified allele specific primers and LNA-modified beacon probes to increase sensitivity and specificity and to accurately quantify mutations. We designed primers specific for codon 12/13 KRAS mutations and BRAF V600E, and validated the assay with 300 routine samples from a variety of sources, including cytology specimens. All were analyzed by ASLNAqPCR and Sanger sequencing. Discordant cases were pyrosequenced. ASLNAqPCR correctly identified BRAF and KRAS mutations in all discordant cases, and all had a mutated/wild type DNA ratio below the analytical sensitivity of the Sanger method. ASLNAqPCR was 100% specific, with greater accuracy and positive and negative predictive values compared with Sanger sequencing. The analytical sensitivity of ASLNAqPCR is 0.1%, allowing quantification of mutated DNA in small neoplastic cell clones. ASLNAqPCR can be performed in any laboratory with real-time PCR equipment, is very cost-effective, and can easily be adapted to detect hot spot mutations in other oncogenes. PMID:22558339

  13. Quantitative and qualitative intrapatient comparison of 68Ga-DOTATOC and 68Ga-DOTATATE: net uptake rate for accurate quantification.

    Science.gov (United States)

    Velikyan, Irina; Sundin, Anders; Sörensen, Jens; Lubberink, Mark; Sandström, Mattias; Garske-Román, Ulrike; Lundqvist, Hans; Granberg, Dan; Eriksson, Barbro

    2014-02-01

    Quantitative imaging and dosimetry are crucial for individualized treatment during peptide receptor radionuclide therapy (PRRT). (177)Lu-DOTATATE and (68)Ga-DOTATOC/(68)Ga-DOTATATE are used, respectively, for PRRT and PET examinations targeting somatostatin receptors (SSTRs) in patients affected by neuroendocrine tumors. The aim of the study was to quantitatively and qualitatively compare the performance of (68)Ga-DOTATOC and (68)Ga-DOTATATE in the context of subsequent PRRT with (177)Lu-DOTATATE under standardized conditions in the same patient as well as to investigate the sufficiency of standardized uptake value (SUV) for estimation of SSTR expression. Ten patients with metastatic neuroendocrine tumors underwent one 45-min dynamic and 3 whole-body PET/CT examinations at 1, 2, and 3 h after injection with both tracers. The number of detected lesions, SUVs in lesions and normal tissue, total functional tumor volume, and SSTR volume (functional tumor volume multiplied by mean SUV) were investigated for each time point. Net uptake rate (Ki) was calculated according to the Patlak method for 3 tumors per patient. There were no significant differences in lesion count, lesion SUV, Ki, functional tumor volume, or SSTR volume between (68)Ga-DOTATOC and (68)Ga-DOTATATE at any time point. The detection rate was similar, although with differences for single lesions in occasional patients. For healthy organs, marginally higher uptake of (68)Ga-DOTATATE was observed in kidneys, bone marrow, and liver at 1 h. (68)Ga-DOTATOC uptake was higher in mediastinal blood pool at the 1-h time point (P = 0.018). The tumor-to-liver ratio was marginally higher for (68)Ga-DOTATOC at the 3-h time point (P = 0.037). Blood clearance was fast and similar for both tracers. SUV did not correlate with Ki linearly and achieved saturation for a Ki of greater than 0.2 mL/cm(3)/min, corresponding to an SUV of more than 25. (68)Ga-DOTATOC and (68)Ga-DOTATATE are suited equally well for staging and
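The net uptake rate Ki is the slope of the Patlak plot, i.e. of C_tissue/C_plasma against ∫C_plasma dτ / C_plasma over the late, linear frames. A simplified sketch; the linearity start time t_star and the synthetic time-activity curves in the example are assumptions, not the study's data:

```python
import numpy as np

def patlak_ki(t, c_tissue, c_plasma, t_star=10.0):
    """Net uptake rate Ki (slope of the Patlak plot) from tissue and plasma
    time-activity curves. Frames with t >= t_star are assumed linear."""
    # Trapezoidal running integral of the plasma curve
    integral = np.concatenate(
        ([0.0], np.cumsum(np.diff(t) * (c_plasma[1:] + c_plasma[:-1]) / 2.0)))
    mask = t >= t_star
    x = integral[mask] / c_plasma[mask]   # "stretched time"
    y = c_tissue[mask] / c_plasma[mask]
    ki, v0 = np.polyfit(x, y, 1)          # slope = Ki, intercept = V0
    return ki

# Synthetic example with known answer: idealized constant plasma input
t = np.linspace(0.0, 40.0, 41)            # min
c_plasma = np.ones_like(t)
c_tissue = 0.05 * t + 0.3                 # Ki = 0.05, V0 = 0.3 by construction
print(round(float(patlak_ki(t, c_tissue, c_plasma)), 3))  # → 0.05
```

The abstract's observation that SUV saturates above Ki ≈ 0.2 mL/cm³/min is exactly why a slope estimate like this, rather than a single late-frame SUV, is needed for high-uptake lesions.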

  14. Evaluation of the iPLEX(®) Sample ID Plus Panel designed for the Sequenom MassARRAY(®) system. A SNP typing assay developed for human identification and sample tracking based on the SNPforID panel

    DEFF Research Database (Denmark)

    Johansen, P; Andersen, J D; Børsting, Claus;

    2013-01-01

Sequenom launched the first commercial SNP typing kit for human identification, named the iPLEX(®) Sample ID Plus Panel. The kit amplifies 47 of the 52 SNPs in the SNPforID panel, amelogenin and two Y-chromosome SNPs in one multiplex PCR. The SNPs were analyzed by single base extension (SBE...... SNPforID assay. The average call rate for duplicate typing of any one SNP in the panel was 90.0% when the mass spectra were analyzed automatically with the MassARRAY(®) TYPER 4.0 genotyping software in real time. Two reproducible inconsistencies were observed (error rate: 0.05%) at two different SNP

  15. Quantitative profiling of bile acids in biofluids and tissues based on accurate mass high resolution LC-FT-MS: Compound class targeting in a metabolomics workflow

    NARCIS (Netherlands)

    Bobeldijk, I.; Hekman, M.; Vries de- Weij, J.van der; Coulier, L.; Ramaker, R.; Kleemann, R.; Kooistra, T.; Rubingh, C.; Freidig, A.; Verheij, E.

    2008-01-01

We report a sensitive, generic method for quantitative profiling of bile acids and other endogenous metabolites in small quantities of various biological fluids and tissues. The method is based on a straightforward sample preparation, separation by reversed-phase high performance liquid-chromatography

  16. Accurate quantitative analysis of gold alloys using multi-pulse laser induced breakdown spectroscopy and a correlation-based calibration method

    Energy Technology Data Exchange (ETDEWEB)

    Galbacs, Gabor [Department of Inorganic and Analytical Chemistry, University of Szeged, 6720 Szeged, Dom ter 7. (Hungary)], E-mail: galbx@chem.u-szeged.hu; Jedlinszki, Nikoletta; Cseh, Gabor; Galbacs, Zoltan; Turi, Laszlo [Department of Inorganic and Analytical Chemistry, University of Szeged, 6720 Szeged, Dom ter 7. (Hungary)

    2008-05-15

Multi-pulse laser induced breakdown spectroscopy (LIBS), in combination with the generalized linear correlation calibration method (GLCM), was applied to the quantitative analysis (fineness determination) of quaternary gold alloys. Accuracy and precision on the order of a few thousandths (‰) were achieved. The analytical performance is directly comparable to that of the standard cupellation method (fire assay), but results are provided within minutes and the technique is virtually non-destructive, as it consumes only a few micrograms of the sample.
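Correlation-based calibration can be illustrated, in very simplified form, as ranking reference spectra of known fineness by their Pearson correlation with the sample spectrum; the GLCM itself is more elaborate, and the spectra and fineness values below are hypothetical:

```python
import numpy as np

def best_match_fineness(sample, standards):
    """Return the fineness of the reference spectrum with the highest Pearson
    correlation to the sample spectrum (simplified correlation calibration)."""
    scores = {fineness: np.corrcoef(sample, spec)[0, 1]
              for fineness, spec in standards.items()}
    return max(scores, key=scores.get)

# Hypothetical 5-channel emission spectra for three gold fineness standards
standards = {
    585: np.array([1.0, 2.0, 3.0, 0.5, 0.1]),
    750: np.array([0.2, 1.0, 3.0, 2.0, 0.3]),
    999: np.array([0.1, 0.4, 3.0, 2.5, 1.5]),
}
sample = np.array([0.25, 1.05, 2.9, 2.1, 0.35])  # unknown alloy spectrum
print(best_match_fineness(sample, standards))     # → 750
```

Correlation against whole spectra, rather than single line intensities, is what lends this family of methods robustness to shot-to-shot intensity fluctuations.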

  17. Differential Label-free Quantitative Proteomic Analysis of Shewanella oneidensis Cultured under Aerobic and Suboxic Conditions by Accurate Mass and Time Tag Approach

    Energy Technology Data Exchange (ETDEWEB)

    Fang, Ruihua; Elias, Dwayne A.; Monroe, Matthew E.; Shen, Yufeng; McIntosh, Martin; Wang, Pei; Goddard, Carrie D.; Callister, Stephen J.; Moore, Ronald J.; Gorby, Yuri A.; Adkins, Joshua N.; Fredrickson, Jim K.; Lipton, Mary S.; Smith, Richard D.

    2006-04-01

    We describe the application of liquid chromatography coupled to mass spectrometry (LC/MS) without the use of stable isotope labeling for differential quantitative proteomics analysis of whole cell lysates of Shewanella oneidensis MR-1 cultured under aerobic and sub-oxic conditions. Liquid chromatography coupled to tandem mass spectrometry (LC-MS/MS) was used to initially identify peptide sequences, and LC coupled to Fourier transform ion cyclotron resonance mass spectrometry (LC-FTICR) was used to confirm these identifications, as well as measure relative peptide abundances. 2343 peptides, covering 668 proteins were identified with high confidence and quantified. Among these proteins, a subset of 56 changed significantly using statistical approaches such as SAM, while another subset of 56 that were annotated as performing housekeeping functions remained essentially unchanged in relative abundance. Numerous proteins involved in anaerobic energy metabolism exhibited up to a 10-fold increase in relative abundance when S. oneidensis is transitioned from aerobic to sub-oxic conditions.
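A simplified fold-change screen in the spirit of this label-free comparison; the study used the SAM statistic for significance, whereas this sketch only thresholds the log2 ratio of mean peptide abundances:

```python
import math
import statistics

def differential(abundances_a, abundances_b, min_fold=2.0):
    """Log2 fold-change of one protein between two conditions, from replicate
    abundance measurements; flags proteins exceeding a fold threshold."""
    log2fc = math.log2(statistics.mean(abundances_b) / statistics.mean(abundances_a))
    return log2fc, abs(log2fc) >= math.log2(min_fold)

# Hypothetical abundances: aerobic vs sub-oxic replicates for one protein
lfc, changed = differential([1.0, 1.2, 0.9], [9.5, 10.2, 11.1])
print(changed)  # → True
```

A proper analysis would, as the authors do, pair the fold-change with a significance test that controls the false discovery rate across all 668 quantified proteins.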

  18. qFibrosis: A fully-quantitative innovative method incorporating histological features to facilitate accurate fibrosis scoring in animal model and chronic hepatitis B patients

    Science.gov (United States)

    Tai, Dean C.S.; Wang, Shi; Cheng, Chee Leong; Peng, Qiwen; Yan, Jie; Chen, Yongpeng; Sun, Jian; Liang, Xieer; Zhu, Youfu; Rajapakse, Jagath C.; Welsch, Roy E.; So, Peter T.C.; Wee, Aileen; Hou, Jinlin; Yu, Hanry

    2014-01-01

Background & Aims: There is increasing need for accurate assessment of liver fibrosis/cirrhosis. We aimed to develop qFibrosis, a fully-automated assessment method combining quantification of histopathological architectural features, to address unmet needs in core biopsy evaluation of fibrosis in chronic hepatitis B (CHB) patients. Methods: qFibrosis was established as a combined index based on 87 parameters of architectural features. Images acquired from 25 Thioacetamide-treated rat samples and 162 CHB core biopsies were used to train and test qFibrosis and to demonstrate its reproducibility. qFibrosis scoring was analyzed employing Metavir and Ishak fibrosis staging as standard references, and collagen proportionate area (CPA) measurement for comparison. Results: qFibrosis faithfully and reliably recapitulates Metavir fibrosis scores, as it can identify differences between all stages in both animal samples and human core biopsies of varying length (10–44 mm). qFibrosis can significantly predict staging underestimation in suboptimal biopsies (<15 mm) and under- and over-scoring by different pathologists (p <0.001). qFibrosis can also differentiate between Ishak stages 5 and 6 (AUC: 0.73, p = 0.008), suggesting the possibility of monitoring intra-stage cirrhosis changes. Best of all, qFibrosis demonstrates superior performance to CPA on all counts. Conclusions: qFibrosis can improve fibrosis scoring accuracy and throughput, thus allowing for reproducible and reliable analysis of efficacies of anti-fibrotic therapies in clinical research and practice. PMID:24583249

  19. Quantitative Proteome Analysis of Breast Cancer Cell Lines using 18O-Labeling and an Accurate Mass and Time Tag Strategy

    Energy Technology Data Exchange (ETDEWEB)

    Patwardhan, Anil J.; Strittmatter, Eric F.; Camp, David G.; Smith, Richard D.; Pallavicini, Maria

    2006-05-01

Proteome comparison of cell lines derived from breast cancer and normal breast epithelium provides opportunities to identify differentially expressed proteins and pathways associated with specific phenotypes. We employed trypsin-catalyzed 16O/18O peptide labeling, FTICR mass spectrometry, and the accurate mass and time (AMT) tag strategy to compare the relative abundances of hundreds of proteins simultaneously in non-cancer and cancer cell lines derived from breast tissue. A reference panel of cell lines was created to facilitate comparisons of relative protein abundance amongst multiple cell lines and across multiple experiments. A peptide database generated from multidimensional LC separations and MS/MS analysis was used to facilitate subsequent AMT tag-based peptide identifications. This peptide database represented a total of 2,299 proteins, including 514 that were quantified using the AMT tag and 16O/18O strategies. Eighty-six proteins showed at least a 3-fold protein abundance change between cancer and non-cancer cell lines. A comparison of protein expression profiles with previously published gene expression data revealed that 21 of these proteins also had >3-fold differences between the non-cancer and cancer cell lines at the transcriptional level. Clustering of protein abundance ratios revealed that several groups of proteins were differentially expressed between the cancer cell lines

  20. Validation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in strawberry fruits using different cultivars and osmotic stresses.

    Science.gov (United States)

    Galli, Vanessa; Borowski, Joyce Moura; Perin, Ellen Cristina; Messias, Rafael da Silva; Labonde, Julia; Pereira, Ivan dos Santos; Silva, Sérgio Delmar Dos Anjos; Rombaldi, Cesar Valmor

    2015-01-10

The increasing demand for strawberry (Fragaria×ananassa Duch.) fruits is associated mainly with their sensorial characteristics and the content of antioxidant compounds. Nevertheless, strawberry production has been hampered by the plant's sensitivity to abiotic stresses. Understanding the molecular mechanisms underlying the stress response is therefore of great importance for genetic engineering approaches aimed at improving strawberry tolerance. The study of gene expression in strawberry, however, requires suitable reference genes. In the present study, seven traditional and novel candidate reference genes were evaluated for transcript normalization in fruits of ten strawberry cultivars and under two abiotic stresses, using RefFinder, which integrates the four major currently available software programs: geNorm, NormFinder, BestKeeper and the comparative delta-Ct method. The results indicate that expression stability depends on the experimental conditions. The candidate reference gene DBP (DNA binding protein) was the most suitable for normalizing expression data across strawberry cultivars and under drought stress, and the candidate reference gene HISTH4 (histone H4) was the most stable under osmotic and salt stresses. The traditional genes GAPDH (glyceraldehyde-3-phosphate dehydrogenase) and 18S (18S ribosomal RNA) were the most unstable genes under all conditions. The expression of the phenylalanine ammonia lyase (PAL) and 9-cis-epoxycarotenoid dioxygenase (NCED1) genes was used to further confirm the validated candidate reference genes, showing that use of an inappropriate reference gene may yield erroneous results. This study is the first survey of reference gene stability across strawberry cultivars and osmotic stresses and provides guidelines for obtaining more accurate RT-qPCR results in future breeding efforts. Copyright © 2014 Elsevier B.V. All rights reserved.
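The comparative delta-Ct method cited above ranks candidate reference genes by how constant their Ct differences against every other candidate remain across samples. A minimal sketch of that ranking, with invented Ct values (the gene names echo the study's candidates, but the numbers are illustrative):

```python
# Comparative delta-Ct method: for each candidate reference gene, compute the
# standard deviation of its Ct difference against every other candidate across
# samples; a lower mean SD indicates a more stable gene. Ct values are invented.
from statistics import stdev

def delta_ct_stability(ct):
    genes = list(ct)
    n_samples = len(next(iter(ct.values())))
    scores = {}
    for g in genes:
        sds = []
        for h in genes:
            if h == g:
                continue
            diffs = [ct[g][i] - ct[h][i] for i in range(n_samples)]
            sds.append(stdev(diffs))
        scores[g] = sum(sds) / len(sds)
    return sorted(scores, key=scores.get)  # most stable first

ct = {
    "DBP":    [20.1, 20.3, 20.2, 20.4],
    "GAPDH":  [18.0, 19.5, 17.2, 21.0],
    "HISTH4": [22.0, 22.1, 22.3, 22.2],
}
print(delta_ct_stability(ct))  # ['DBP', 'HISTH4', 'GAPDH']
```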

  1. Evaluation of reference genes for accurate normalization of gene expression for real time-quantitative PCR in Pyrus pyrifolia using different tissue samples and seasonal conditions.

    Science.gov (United States)

    Imai, Tsuyoshi; Ubi, Benjamin E; Saito, Takanori; Moriguchi, Takaya

    2014-01-01

We have evaluated suitable reference genes for real-time quantitative PCR (RT-qPCR) analysis in Japanese pear (Pyrus pyrifolia). We tested the genes most frequently used in the literature, such as β-Tubulin, Histone H3, Actin, Elongation factor-1α and Glyceraldehyde-3-phosphate dehydrogenase, together with the newly added genes Annexin, SAND and TIP41. A total of 17 primer combinations for these eight genes were evaluated using cDNAs synthesized from 16 tissue samples in four groups, namely: flower bud, flower organ, fruit flesh and fruit skin. Gene expression stabilities were analyzed using the geNorm and NormFinder software packages or by the ΔCt method. geNorm analysis indicated that the three best-performing genes are sufficient for reliable normalization of RT-qPCR data. Suitable reference genes differed among sample groups, underlining the importance of validating the expression stability of reference genes in the samples of interest. Stability rankings were basically similar between geNorm and NormFinder, supporting the usefulness of these programs based on different algorithms. The ΔCt method suggested somewhat different results in some groups, such as flower organ or fruit skin, though the overall results correlated well with geNorm and NormFinder. Expression of the two cold-inducible genes PpCBF2 and PpCBF4 was quantified using the three most and the three least stable reference genes suggested by geNorm. Although the normalized quantities differed between them, the relative quantities within a group of samples were similar even when the least stable reference genes were used. Our data suggest that using the geometric mean of three reference genes for normalization is a reliable approach to evaluating gene expression by RT-qPCR. We propose that initial evaluation of gene expression stability by the ΔCt method, followed by evaluation of a limited number of superior candidates by geNorm or NormFinder, is a practical way of finding out
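The geometric-mean normalization recommended above is simple to state in code. This is a generic geNorm-style sketch with invented relative quantities, not the study's data:

```python
# geNorm-style normalization: the normalization factor for a sample is the
# geometric mean of the relative quantities of (here) three reference genes,
# and target-gene expression is divided by that factor. Values are invented.
from math import prod

def geometric_mean(values):
    return prod(values) ** (1.0 / len(values))

def normalize(target_rq, reference_rqs):
    """Normalized expression = target relative quantity / normalization factor."""
    return target_rq / geometric_mean(reference_rqs)

nf_refs = [1.10, 0.95, 1.02]  # relative quantities of 3 reference genes in one sample
print(round(normalize(2.0, nf_refs), 3))
```

The geometric mean damps the influence of any single drifting reference gene, which is why three moderately stable genes can outperform one "best" gene.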

  2. Accurate quantitation for in vitro refolding of single domain antibody fragments expressed as inclusion bodies by referring the concomitant expression of a soluble form in the periplasms of Escherichia coli.

    Science.gov (United States)

    Noguchi, Tomoaki; Nishida, Yuichi; Takizawa, Keiji; Cui, Yue; Tsutsumi, Koki; Hamada, Takashi; Nishi, Yoshisuke

    2017-03-01

Single domain antibody fragments from two species, a camel VHH (PM1) and a shark VNAR (A6), were derived from inclusion bodies of E. coli and refolded in vitro following three refolding recipes to compare refolding efficiencies: three-step cold dialysis refolding (TCDR), one-step hot dialysis refolding (OHDR), and one-step cold dialysis refolding (OCDR). These fragments were expressed as a soluble form either in the cytoplasm or periplasm, but in far smaller amounts than as an insoluble form (inclusion bodies) in the cytoplasm and periplasm. To verify the refolding efficiencies from inclusion bodies correctly, proteins purified from periplasmic soluble fractions were used as reference samples. These samples showed far-UV spectra of a typical β-sheet-dominant structure in circular dichroism (CD) spectroscopy, as did the refolded samples. As the maximal magnitude of ellipticity in millidegrees (θmax) observed at a given wavelength was proportional to the concentration of the respective reference sample, we could draw linear regression lines of magnitude vs. sample concentration. Using these lines, we measured the concentrations of the refolded PM1 and A6 samples purified from solubilized cytoplasmic insoluble fractions. The refolding efficiency of PM1 was almost 50% following TCDR, and 40% and 30% following OHDR and OCDR, respectively, whereas that of A6 was around 30% following TCDR and out of bounds for quantitation following the other two recipes. The ELISA curves derived from the refolded samples coincided better with those obtained from the reference samples after converting the values from protein concentrations at recovery to those of refolded proteins using recovery ratios, indicating that this correction gives a more accurate measure of the ELISA curves than no correction. Our method requires constructing a dual expression system, expressed both in
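The calibration logic described above — a linear regression of θmax against reference-sample concentration, inverted to quantitate refolded protein — can be sketched as follows (all numbers invented):

```python
# Fit theta_max (CD ellipticity, millidegrees) vs. concentration for soluble
# reference samples, then invert the fit to estimate the concentration of a
# refolded sample and hence the refolding efficiency. Numbers are illustrative.
def fit_line(xs, ys):
    """Ordinary least-squares fit; returns (slope, intercept)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    return slope, my - slope * mx

conc = [0.1, 0.2, 0.4, 0.8]          # mg/mL, reference samples
theta = [-2.1, -4.0, -8.2, -16.1]    # millidegrees (beta-sheet CD minimum)
slope, intercept = fit_line(conc, theta)

theta_refolded = -6.5                # measured ellipticity of refolded sample
estimated_conc = (theta_refolded - intercept) / slope
total_loaded = 0.5                   # mg/mL equivalent loaded into refolding
print(f"refolding efficiency ~ {100 * estimated_conc / total_loaded:.0f}%")
```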

  3. Genome-Wide Association Mapping for Intelligence in Military Working Dogs: Canine Cohort, Canine Intelligence Assessment Regimen, Genome-Wide Single Nucleotide Polymorphism (SNP) Typing, and Unsupervised Classification Algorithm for Genome-Wide Association Data Analysis

    Science.gov (United States)

    2011-09-01

    Almasy, L, Blangero, J. (2009) Human QTL linkage mapping. Genetica 136:333-340. Amos, CI. (2007) Successful design and conduct of genome-wide...quantitative trait loci. Genetica 136:237-243. Skol AD, Scott LJ, Abecasis GR, Boehnke M. (2006) Joint analysis is more efficient than replication

  4. Rigour in quantitative research.

    Science.gov (United States)

    Claydon, Leica Sarah

    2015-07-22

    This article which forms part of the research series addresses scientific rigour in quantitative research. It explores the basis and use of quantitative research and the nature of scientific rigour. It examines how the reader may determine whether quantitative research results are accurate, the questions that should be asked to determine accuracy and the checklists that may be used in this process. Quantitative research has advantages in nursing, since it can provide numerical data to help answer questions encountered in everyday practice.

  5. A method to accurately quantitate intensities of (32)P-DNA bands when multiple bands appear in a single lane of a gel is used to study dNTP insertion opposite a benzo[a]pyrene-dG adduct by Sulfolobus DNA polymerases Dpo4 and Dbh.

    Science.gov (United States)

    Sholder, Gabriel; Loechler, Edward L

    2015-01-01

    Quantitating relative (32)P-band intensity in gels is desired, e.g., to study primer-extension kinetics of DNA polymerases (DNAPs). Following imaging, multiple (32)P-bands are often present in lanes. Though individual bands appear by eye to be simple and well-resolved, scanning reveals they are actually skewed-Gaussian in shape and neighboring bands are overlapping, which complicates quantitation, because slower migrating bands often have considerable contributions from the trailing edges of faster migrating bands. A method is described to accurately quantitate adjacent (32)P-bands, which relies on having a standard: a simple skewed-Gaussian curve from an analogous pure, single-component band (e.g., primer alone). This single-component scan/curve is superimposed on its corresponding band in an experimentally determined scan/curve containing multiple bands (e.g., generated in a primer-extension reaction); intensity exceeding the single-component scan/curve is attributed to other components (e.g., insertion products). Relative areas/intensities are determined via pixel analysis, from which relative molarity of components is computed. Common software is used. Commonly used alternative methods (e.g., drawing boxes around bands) are shown to be less accurate. Our method was used to study kinetics of dNTP primer-extension opposite a benzo[a]pyrene-N(2)-dG-adduct with four DNAPs, including Sulfolobus solfataricus Dpo4 and Sulfolobus acidocaldarius Dbh. Vmax/Km is similar for correct dCTP insertion with Dpo4 and Dbh. Compared to Dpo4, Dbh misinsertion is slower for dATP (∼20-fold), dGTP (∼110-fold) and dTTP (∼6-fold), due to decreases in Vmax. These findings provide support that Dbh is in the same Y-Family DNAP class as eukaryotic DNAP κ and bacterial DNAP IV, which accurately bypass N(2)-dG adducts, as well as establish the scan-method described herein as an accurate method to quantitate relative intensity of overlapping bands in a single lane, whether generated
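The core of the scan-subtraction method — scale a pure single-component profile to the corresponding band in a multi-band lane, attribute any excess intensity to other components, then compare areas — can be sketched with toy profiles (not real densitometry data):

```python
# Superimpose a pure single-component band profile (e.g. primer alone) on a
# two-component lane scan; pixel intensity in excess of the scaled standard is
# attributed to the second component (extension product). Arrays are toy data.
def split_components(lane, standard):
    # scale the standard so its peak matches the leading (faster) band in the lane
    scale = max(lane[: len(lane) // 2]) / max(standard[: len(standard) // 2])
    primer = [scale * s for s in standard]
    product = [max(0.0, l - p) for l, p in zip(lane, primer)]
    a, b = sum(primer), sum(product)
    return a / (a + b), b / (a + b)   # relative intensity fractions

standard = [0, 2, 10, 4, 1, 0, 0, 0]      # pure primer band (skewed peak)
lane     = [0, 1, 5, 2, 0.5, 3, 8, 2]     # primer + slower-migrating product
frac_primer, frac_product = split_components(lane, standard)
print(round(frac_primer, 2), round(frac_product, 2))
```

Unlike drawing boxes around bands, this attributes the trailing edge of the faster band to the correct component before areas are compared.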

6. Accurate Quantitative Analysis Method for 8 Kinds of Trace Mycotoxins in Cereal

    Institute of Scientific and Technical Information of China (English)

    曹娅; 孙利; 王明林; 冯峰; 储晓刚

    2013-01-01

A high performance liquid chromatography-tandem mass spectrometric (HPLC-MS/MS) method was developed for the determination of 8 mycotoxins (aflatoxin B1, aflatoxin B2, aflatoxin G1, aflatoxin G2, fumonisin B1, fumonisin B2, sterigmatocystin and roquefortine C) in cereals (rice, wheat and soybean). The sample was treated with hexane to remove oil, then extracted by liquid-liquid partition with 60% acetonitrile; the acetonitrile-water layer was passed through a filter membrane before analysis. Detection was performed in electrospray ionization (ESI) positive mode with multiple reaction monitoring (MRM), and the target compounds were quantified by isotope-dilution internal standardization. The 8 mycotoxins showed good linearity within their respective concentration ranges, with correlation coefficients not less than 0.9970. Spiked recoveries in blank samples ranged from 77% to 123%, with relative standard deviations (RSD) of 0.6%-13.3%. The method is simple and sensitive, and can be used for the detection of mycotoxins in cereal.
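Isotope-dilution quantitation as used above reduces to ratioing the analyte response against a co-eluting isotope-labeled internal standard of known concentration. A minimal sketch (peak areas and spike levels invented):

```python
# Isotope-dilution arithmetic: the analyte peak area is ratioed to a labeled
# internal standard spiked at a known concentration, which cancels matrix and
# ionization effects. Areas and concentrations below are illustrative.
def isotope_dilution_conc(analyte_area, istd_area, istd_conc, response_factor=1.0):
    """Analyte concentration from the area ratio to the internal standard."""
    return (analyte_area / istd_area) * istd_conc / response_factor

def recovery_percent(measured, spiked):
    return 100.0 * measured / spiked

measured = isotope_dilution_conc(analyte_area=8.4e5, istd_area=1.0e6, istd_conc=5.0)
print(f"{measured:.2f} ug/kg, recovery {recovery_percent(measured, 5.0):.0f}%")
```

With these toy numbers the recovery (84%) falls inside the 77%-123% range reported above.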

  7. Speaking Fluently And Accurately

    Institute of Scientific and Technical Information of China (English)

    JosephDeVeto

    2004-01-01

Even after many years of study, students make frequent mistakes in English. In addition, many students still need a long time to think of what they want to say. For some reason, in spite of all the studying, students are still not quite fluent. When I teach, I use one technique that helps students speak not only more accurately, but also more fluently. That technique is dictations.

  8. Quantitatively accurate calculations of conductance and thermopower of molecular junctions

    DEFF Research Database (Denmark)

    Markussen, Troels; Jin, Chengjun; Thygesen, Kristian Sommer

    2013-01-01

    ) connected to gold electrodes using first‐principles calculations. We find excellent agreement with experiments for both molecules when exchange–correlation effects are described by the many‐body GW approximation. In contrast, results from standard density functional theory (DFT) deviate from experiments......‐interaction errors and image charge effects. Finally, we show that the conductance and thermopower of the considered junctions are relatively insensitive to the metal–molecule bonding geometry. Our results demonstrate that electronic and thermoelectric properties of molecular junctions can be predicted from first‐principles...... calculations when exchange–correlation effects are taken properly into account....

  9. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE ...

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33%-63%, with a mean of about 50%. Treatment of two of the soils with P significantly reduced the bioavailability of Pb. The bioaccessibility of the Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter 24%, or present as Pb sulfate 18%. Ad

  10. Groundwater recharge: Accurately representing evapotranspiration

    CSIR Research Space (South Africa)

    Bugan, Richard DH

    2011-09-01

Groundwater recharge is the basis for accurate estimation of groundwater resources, for determining the modes of water allocation and groundwater resource susceptibility to climate change. Accurate estimations of groundwater recharge with models...

  11. Impact of reconstruction parameters on quantitative I-131 SPECT

    NARCIS (Netherlands)

    van Gils, C A J; Beijst, C; van Rooij, R; de Jong, H W A M

    2016-01-01

Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons however render quantitative SPECT reconstruction challenging, potentially requiring accurate corrections.


  13. Quantitative film radiography

    Energy Technology Data Exchange (ETDEWEB)

    Devine, G.; Dobie, D.; Fugina, J.; Hernandez, J.; Logan, C.; Mohr, P.; Moss, R.; Schumacher, B.; Updike, E.; Weirup, D.

    1991-02-26

    We have developed a system of quantitative radiography in order to produce quantitative images displaying homogeneity of parts. The materials that we characterize are synthetic composites and may contain important subtle density variations not discernible by examining a raw film x-radiograph. In order to quantitatively interpret film radiographs, it is necessary to digitize, interpret, and display the images. Our integrated system of quantitative radiography displays accurate, high-resolution pseudo-color images in units of density. We characterize approximately 10,000 parts per year in hundreds of different configurations and compositions with this system. This report discusses: the method; film processor monitoring and control; verifying film and processor performance; and correction of scatter effects.
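A pipeline like the one described must convert digitized film grey levels into optical density before images can be displayed in density units. One common approach, sketched here with an invented step-wedge calibration rather than the authors' actual procedure, is interpolation against a calibration wedge:

```python
# Convert digitizer grey levels to optical density by linear interpolation
# against a step-wedge calibration. Wedge values are invented for illustration.
from bisect import bisect_left

grey_steps    = [10, 60, 120, 180, 240]    # digitizer grey levels (calibration wedge)
density_steps = [3.0, 2.2, 1.5, 0.8, 0.2]  # known optical densities of the wedge

def grey_to_density(g):
    if g <= grey_steps[0]:
        return density_steps[0]
    if g >= grey_steps[-1]:
        return density_steps[-1]
    i = bisect_left(grey_steps, g)          # first calibration point >= g
    g0, g1 = grey_steps[i - 1], grey_steps[i]
    d0, d1 = density_steps[i - 1], density_steps[i]
    return d0 + (d1 - d0) * (g - g0) / (g1 - g0)

print(round(grey_to_density(90), 2))  # 1.85
```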

  14. Quantitative EPR A Practitioners Guide

    CERN Document Server

    Eaton, Gareth R; Barr, David P; Weber, Ralph T

    2010-01-01

    This is the first comprehensive yet practical guide for people who perform quantitative EPR measurements. No existing book provides this level of practical guidance to ensure the successful use of EPR. There is a growing need in both industrial and academic research to provide meaningful and accurate quantitative EPR results. This text discusses the various sample, instrument and software related aspects required for EPR quantitation. Specific topics include: choosing a reference standard, resonator considerations (Q, B1, Bm), power saturation characteristics, sample positioning, and finally, putting all the factors together to obtain an accurate spin concentration of a sample.
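The central arithmetic of quantitative EPR — comparing the double integral of the unknown's first-derivative spectrum to that of a reference standard — can be sketched as follows (synthetic spectra; real work must also match the resonator and acquisition factors listed above, such as Q, B1, Bm and power saturation):

```python
# Spin concentration by ratio to a reference standard: the double integral of
# the first-derivative EPR spectrum is proportional to the number of spins.
# Spectra below are synthetic toy arrays.
def double_integral(deriv, dB):
    # first integration: derivative -> absorption; second: absorption -> area
    absorption, acc = [], 0.0
    for d in deriv:
        acc += d * dB
        absorption.append(acc)
    return sum(a * dB for a in absorption)

def spin_concentration(sample_deriv, ref_deriv, ref_conc, dB=0.1):
    return ref_conc * double_integral(sample_deriv, dB) / double_integral(ref_deriv, dB)

ref    = [0, 1, 2, 0, -2, -1, 0]
sample = [0, 2, 4, 0, -4, -2, 0]   # twice the amplitude -> twice the spins
print(spin_concentration(sample, ref, ref_conc=1.0e15))
```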

  15. NNLOPS accurate associated HW production

    CERN Document Server

    Astill, William; Re, Emanuele; Zanderighi, Giulia

    2016-01-01

    We present a next-to-next-to-leading order accurate description of associated HW production consistently matched to a parton shower. The method is based on reweighting events obtained with the HW plus one jet NLO accurate calculation implemented in POWHEG, extended with the MiNLO procedure, to reproduce NNLO accurate Born distributions. Since the Born kinematics is more complex than the cases treated before, we use a parametrization of the Collins-Soper angles to reduce the number of variables required for the reweighting. We present phenomenological results at 13 TeV, with cuts suggested by the Higgs Cross Section Working Group.

  16. Efficient and accurate fragmentation methods.

    Science.gov (United States)

    Pruitt, Spencer R; Bertoni, Colleen; Brorsen, Kurt R; Gordon, Mark S

    2014-09-16

    Conspectus Three novel fragmentation methods that are available in the electronic structure program GAMESS (general atomic and molecular electronic structure system) are discussed in this Account. The fragment molecular orbital (FMO) method can be combined with any electronic structure method to perform accurate calculations on large molecular species with no reliance on capping atoms or empirical parameters. The FMO method is highly scalable and can take advantage of massively parallel computer systems. For example, the method has been shown to scale nearly linearly on up to 131 000 processor cores for calculations on large water clusters. There have been many applications of the FMO method to large molecular clusters, to biomolecules (e.g., proteins), and to materials that are used as heterogeneous catalysts. The effective fragment potential (EFP) method is a model potential approach that is fully derived from first principles and has no empirically fitted parameters. Consequently, an EFP can be generated for any molecule by a simple preparatory GAMESS calculation. The EFP method provides accurate descriptions of all types of intermolecular interactions, including Coulombic interactions, polarization/induction, exchange repulsion, dispersion, and charge transfer. The EFP method has been applied successfully to the study of liquid water, π-stacking in substituted benzenes and in DNA base pairs, solvent effects on positive and negative ions, electronic spectra and dynamics, non-adiabatic phenomena in electronic excited states, and nonlinear excited state properties. The effective fragment molecular orbital (EFMO) method is a merger of the FMO and EFP methods, in which interfragment interactions are described by the EFP potential, rather than the less accurate electrostatic potential. The use of EFP in this manner facilitates the use of a smaller value for the distance cut-off (Rcut). Rcut determines the distance at which EFP interactions replace fully quantum

  17. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
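As a much cruder cousin of the spherical-wave expansion used in the paper, directivity from sampled power density can be illustrated with plain midpoint quadrature over the sphere; the Hertzian-dipole pattern gives a known analytic answer (D = 1.5) to check against:

```python
# Directivity from a sampled far-field power density: integrate U(theta, phi)
# over the sphere with sin(theta) weighting and compare the peak intensity to
# the isotropic average. Simple quadrature, not the paper's spherical-wave method.
import math

def directivity(U, n_theta, n_phi):
    d_t = math.pi / n_theta
    d_p = 2 * math.pi / n_phi
    p_rad, u_max = 0.0, 0.0
    for i in range(n_theta):
        theta = (i + 0.5) * d_t          # midpoint rule avoids the poles
        for j in range(n_phi):
            u = U(theta, j * d_p)
            p_rad += u * math.sin(theta) * d_t * d_p
            u_max = max(u_max, u)
    return 4 * math.pi * u_max / p_rad

dipole = lambda theta, phi: math.sin(theta) ** 2   # Hertzian dipole pattern
print(round(directivity(dipole, 180, 360), 3))     # analytic value is 1.5
```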

  18. Quantitative research.

    Science.gov (United States)

    Watson, Roger

    2015-04-01

    This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.

  19. Accurate ab initio spin densities

    CERN Document Server

    Boguslawski, Katharina; Legeza, Örs; Reiher, Markus

    2012-01-01

    We present an approach for the calculation of spin density distributions for molecules that require very large active spaces for a qualitatively correct description of their electronic structure. Our approach is based on the density-matrix renormalization group (DMRG) algorithm to calculate the spin density matrix elements as basic quantity for the spatially resolved spin density distribution. The spin density matrix elements are directly determined from the second-quantized elementary operators optimized by the DMRG algorithm. As an analytic convergence criterion for the spin density distribution, we employ our recently developed sampling-reconstruction scheme [J. Chem. Phys. 2011, 134, 224101] to build an accurate complete-active-space configuration-interaction (CASCI) wave function from the optimized matrix product states. The spin density matrix elements can then also be determined as an expectation value employing the reconstructed wave function expansion. Furthermore, the explicit reconstruction of a CA...

  20. The Accurate Particle Tracer Code

    CERN Document Server

    Wang, Yulei; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion energy research, computational mathematics, software engineering, and high-performance computation. The APT code consists of seven main modules, including the I/O module, the initialization module, the particle pusher module, the parallelization module, the field configuration module, the external force-field module, and the extendible module. The I/O module, supported by Lua and Hdf5 projects, provides a user-friendly interface for both numerical simulation and data analysis. A series of new geometric numerical methods...

  1. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

Analysis and optimization methods for the design of advanced printed reflectarrays have been investigated, and the study is focused on developing an accurate and efficient simulation tool. For the analysis, a good compromise between accuracy and efficiency can be obtained using the spectral domain...... to the POT. The GDOT can optimize for the size as well as the orientation and position of arbitrarily shaped array elements. Both co- and cross-polar radiation can be optimized for multiple frequencies, dual polarization, and several feed illuminations. Several contoured beam reflectarrays have been designed...... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  2. Accurate thickness measurement of graphene

    Science.gov (United States)

    Shearer, Cameron J.; Slattery, Ashley D.; Stapleton, Andrew J.; Shapter, Joseph G.; Gibson, Christopher T.

    2016-03-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1-1.3 nm to 0.1-0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials.


  4. A More Accurate Fourier Transform

    CERN Document Server

    Courtney, Elya

    2015-01-01

    Fourier transform methods are used to analyze functions and data sets to provide frequencies, amplitudes, and phases of underlying oscillatory components. Fast Fourier transform (FFT) methods offer speed advantages over evaluation of explicit integrals (EI) that define Fourier transforms. This paper compares frequency, amplitude, and phase accuracy of the two methods for well resolved peaks over a wide array of data sets including cosine series with and without random noise and a variety of physical data sets, including atmospheric $\\mathrm{CO_2}$ concentrations, tides, temperatures, sound waveforms, and atomic spectra. The FFT uses MIT's FFTW3 library. The EI method uses the rectangle method to compute the areas under the curve via complex math. Results support the hypothesis that EI methods are more accurate than FFT methods. Errors range from 5 to 10 times higher when determining peak frequency by FFT, 1.4 to 60 times higher for peak amplitude, and 6 to 10 times higher for phase under a peak. The ability t...
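The FFT-versus-explicit-integral comparison can be reproduced in miniature: DFT bin spacing limits frequency resolution to 1/(N·dt), while evaluating the Fourier integral on a fine frequency grid localizes the peak more closely. A toy signal suffices (a naive O(N²) DFT stands in for FFTW; parameters are arbitrary):

```python
# Peak-frequency estimation: DFT bins vs. explicit evaluation of the Fourier
# integral on a fine grid, for a short cosine record. Pure stdlib.
import cmath, math

N, dt, f_true = 128, 0.01, 7.3
x = [math.cos(2 * math.pi * f_true * n * dt) for n in range(N)]

def dft_peak(x, dt):
    """Peak frequency restricted to DFT bins, spacing 1/(N*dt)."""
    N = len(x)
    mags = [abs(sum(x[n] * cmath.exp(-2j * math.pi * k * n / N) for n in range(N)))
            for k in range(N // 2)]
    return mags.index(max(mags)) / (N * dt)

def ei_peak(x, dt, f_lo, f_hi, steps=500):
    """Scan the Fourier integral magnitude on a fine frequency grid."""
    best_f, best_m = f_lo, 0.0
    for s in range(steps + 1):
        f = f_lo + (f_hi - f_lo) * s / steps
        m = abs(sum(xi * cmath.exp(-2j * math.pi * f * n * dt) for n, xi in enumerate(x)))
        if m > best_m:
            best_f, best_m = f, m
    return best_f

print(f"DFT bin estimate: {dft_peak(x, dt):.3f} Hz, EI estimate: {ei_peak(x, dt, 6, 9):.3f} Hz")
```

With this record length the bin spacing is ~0.78 Hz, so the DFT estimate misses 7.3 Hz by a sizeable fraction of a bin, while the fine-grid scan lands much closer, mirroring the paper's finding.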

  5. 38 CFR 4.46 - Accurate measurement.

    Science.gov (United States)

    2010-07-01

38 Pensions, Bonuses, and Veterans' Relief (2010-07-01). Rating Disabilities, The Musculoskeletal System, § 4.46 Accurate measurement: Accurate measurement of the length of stumps, excursion of joints, dimensions and location of scars with respect...

  6. Quantitative spectroscopy of hot stars

    Science.gov (United States)

    Kudritzki, R. P.; Hummer, D. G.

    1990-01-01

A review on the quantitative spectroscopy (QS) of hot stars is presented, with particular attention given to the study of photospheres, optically thin winds, unified model atmospheres, and stars with optically thick winds. It is concluded that the results presented here demonstrate the reliability of QS as a unique source of accurate values of the global parameters (effective temperature, surface gravity, and elemental abundances) of hot stars.

  7. The quantitative Morse theorem

    OpenAIRE

    Loi, Ta Le; Phien, Phan

    2013-01-01

    In this paper, we give a proof of the quantitative Morse theorem stated by Y. Yomdin in [Y1]. The proof is based on the quantitative Sard theorem, the quantitative inverse function theorem and the quantitative Morse lemma.

  8. A fast and accurate method for echocardiography strain rate imaging

    Science.gov (United States)

    Tavakoli, Vahid; Sahba, Nima; Hajebi, Nima; Nambakhsh, Mohammad Saleh

    2009-02-01

    Strain and strain rate imaging have recently proved superior to classical motion estimation methods as techniques for quantitative analysis of myocardial function. In this paper, we propose a novel strain rate imaging algorithm using a new optical flow technique that is faster and more accurate than previous correlation-based methods. The new method presumes spatiotemporal constancy of the intensity and magnitude of the image, and makes use of spline moments in a multiresolution approach. The cardiac central point is obtained by combining a center-of-mass computation with endocardial tracking. The proposed method is shown to overcome the intensity variations of ultrasound texture while preserving the ability to estimate motions of different magnitudes and orientations. Evaluation performed on simulated, phantom (a contractile rubber balloon) and real sequences shows that this technique is more accurate and faster than the previous methods.
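
    The quantity being imaged reduces to a spatial gradient of tissue velocity; a minimal sketch of that definition (finite differences on invented numbers, not the paper's optical-flow estimator):

    ```python
    import numpy as np

    # Strain rate along a wall segment: SR(x) = dv/dx, the spatial gradient
    # of tissue velocity. Positions and velocities below are illustrative.
    x = np.array([0.0, 5.0, 10.0, 15.0])    # position along the wall (mm)
    v = np.array([0.0, 10.0, 20.0, 30.0])   # estimated tissue velocity (mm/s)
    sr = np.gradient(v, x)                  # strain rate in 1/s (mm/s per mm)
    ```

    Any velocity estimator (correlation-based or optical flow) feeds this same differentiation step, which is why velocity accuracy dominates strain-rate accuracy.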

  9. Quantitative imaging methods in osteoporosis.

    Science.gov (United States)

    Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M Carola; Oei, Edwin H G

    2016-12-01

    Osteoporosis is characterized by a decreased bone mass and quality resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD) as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA) are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Further, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS) information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research.
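
    As a concrete example of the quantitative readout behind DXA-based decision making, BMD is commonly expressed as a T-score and classified against the WHO thresholds; a minimal sketch (the young-adult reference mean and SD below are placeholders, not real reference data):

    ```python
    def t_score(bmd, young_adult_mean=1.0, young_adult_sd=0.12):
        """BMD in SD units relative to a young-adult reference (placeholder values)."""
        return (bmd - young_adult_mean) / young_adult_sd

    def classify(t):
        """WHO operational categories for DXA T-scores."""
        if t <= -2.5:
            return "osteoporosis"
        if t < -1.0:
            return "osteopenia"
        return "normal"
    ```

    For example, a measured BMD of 0.7 g/cm2 against this reference gives a T-score of -2.5, the osteoporosis threshold.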

  10. Accurate measurement of streamwise vortices using dual-plane PIV

    Science.gov (United States)

    Waldman, Rye M.; Breuer, Kenneth S.

    2012-11-01

    Low Reynolds number aerodynamic experiments with flapping animals (such as bats and small birds) are of particular interest due to their application to micro air vehicles, which operate in a similar parameter space. Previous PIV wake measurements described the structures left by bats and birds and provided insight into the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions from such measurements. The highly three-dimensional and unsteady nature of the flows associated with flapping flight poses major challenges for accurate measurement. The central difficulty of animal flight measurements is resolving small flow features in a large field of view at high speed with limited laser energy and camera resolution. Cross-stream measurement is further complicated by the predominantly out-of-plane flow, which requires thick laser sheets and short inter-frame times that increase noise and measurement uncertainty. Choosing appropriate experimental parameters requires compromise between the spatial and temporal resolution and the dynamic range of the measurement. To explore these challenges, we present a case study on the wake of a fixed wing. The fixed model simplifies the experiment and allows direct measurement of the aerodynamic forces via load cell. We present a detailed analysis of the wake measurements, discuss the criteria for making accurate measurements, and present a solution for making quantitative aerodynamic load measurements behind free-flyers.

  11. Quantitative Decision Support Requires Quantitative User Guidance

    Science.gov (United States)

    Smith, L. A.

    2009-12-01

    Is it conceivable that models run on 2007 computer hardware could provide robust and credible probabilistic information for decision support and user guidance at the ZIP code level for sub-daily meteorological events in 2060? In 2090? Retrospectively, how informative would output from today’s models have proven in 2003? Or in the 1930s? Consultancies in the United Kingdom, including the Met Office, are offering services to “future-proof” their customers from climate change. How is a US- or European-based user or policy maker to determine the extent to which exciting new Bayesian methods are relevant here, or when a commercial supplier is vastly overselling the insights of today’s climate science? How are policy makers and academic economists to make the closely related decisions facing them? How can we communicate deep uncertainty in the future at small length scales without undermining the firm foundation established by climate science regarding global trends? Three distinct aspects of communicating the uses of climate model output to users and policy makers, as well as to other specialist adaptation scientists, are discussed. First, a brief scientific evaluation of the length and time scales at which climate model output is likely to become uninformative is provided, including a note on the applicability of the latest Bayesian methodology to the output of current state-of-the-art general circulation models. Second, a critical evaluation of the language often employed in communicating climate model output: language which accurately states that models are “better”, have “improved”, and now “include” and “simulate” relevant meteorological processes, without clearly identifying where the current information is thought to be uninformative or misleading, both for the current climate and as a function of the state of each climate simulation. And third, a general approach for evaluating the relevance of quantitative climate model output

  12. Laboratory Building for Accurate Determination of Plutonium

    Institute of Scientific and Technical Information of China (English)

    2008-01-01

    The accurate determination of plutonium is one of the most important assay techniques for nuclear fuel; it is also the key to chemical measurement transfer and the basis of the nuclear material balance. An

  13. Understanding the Code: keeping accurate records.

    Science.gov (United States)

    Griffith, Richard

    2015-10-01

    In his continuing series looking at the legal and professional implications of the Nursing and Midwifery Council's revised Code of Conduct, Richard Griffith discusses the elements of accurate record keeping under Standard 10 of the Code. This article considers the importance of accurate record keeping for the safety of patients and protection of district nurses. The legal implications of records are explained along with how district nurses should write records to ensure these legal requirements are met.

  14. Quantitative comparison of ammonia and 3-indoleacetic acid ...

    African Journals Online (AJOL)

    Quantitative comparison of ammonia and 3-indoleacetic acid production in ... method and 3-indoleacetic acid as Salkowski method in halophilic, alkalophilic and ... in research due to their ease of implementation and relatively accurate results.

  15. Christhin: Quantitative Analysis of Thin Layer Chromatography

    CERN Document Server

    Barchiesi, Maximiliano; Renaudo, Carlos; Rossi, Pablo; Pramparo, María de Carmen; Nepote, Valeria; Grosso, Nelson Ruben; Gayol, María Fernanda

    2012-01-01

    Manual for Christhin 0.1.36. Christhin (Chromatography Riser Thin) is software developed for the quantitative analysis of data obtained from thin-layer chromatographic techniques (TLC). Once installed on your computer, the program is very easy to use, and provides data quickly and accurately. This manual describes the program, and reading it should be enough to use it properly.

  16. A sensitive issue: Pyrosequencing as a valuable forensic SNP typing platform

    DEFF Research Database (Denmark)

    Harrison, C.; Musgrave-Brown, E.; Bender, K.

    2006-01-01

    Analysing minute amounts of DNA is a routine challenge in forensics in part due to the poor sensitivity of an instrument and its inability to detect results from forensic samples. In this study, the sensitivity of the Pyrosequencing method is investigated using varying concentrations of DNA and f...

  17. Human population genetic diversity as a function of SNP type from HapMap data.

    Science.gov (United States)

    Garte, Seymour

    2010-01-01

    Data from the international HapMap project were mined to determine if the degree of genetic differentiation (Fst) is dependent on single nucleotide polymorphism (SNP) category. The Fst statistic was evaluated across all SNPs for each of 30 genes and for each of five chromosomes. A consistent decrease in diversity between Europeans and Africans was seen for nonsynonymous coding region SNPs compared to the three other SNP categories: synonymous SNPs, UTR, and intronic SNPs. This suggests an effect of balancing selection in reducing interpopulation genetic diversity at sites that would be expected to influence phenotype and therefore be subject to selection. This result is inconsistent with the concept of large population specific genetic differences that could have applications in "racialized medicine."
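
    A per-SNP differentiation statistic of the kind evaluated here can be sketched as follows (a simple two-population Ht/Hs form of Wright's Fst from allele frequencies; the study's exact estimator may differ):

    ```python
    def fst(p1, p2):
        """Wright's Fst for one biallelic SNP from allele freqs in two populations."""
        p_bar = (p1 + p2) / 2.0
        h_t = 2.0 * p_bar * (1.0 - p_bar)   # expected heterozygosity, pooled
        h_s = (2.0 * p1 * (1.0 - p1) + 2.0 * p2 * (1.0 - p2)) / 2.0  # mean within
        return 0.0 if h_t == 0.0 else (h_t - h_s) / h_t
    ```

    Identical frequencies give Fst = 0, fixed differences give Fst = 1, and intermediate divergence (e.g. 0.2 vs 0.8) falls in between; the paper's comparison amounts to averaging such values within each SNP category.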

  18. Accurate strand-specific quantification of viral RNA.

    Directory of Open Access Journals (Sweden)

    Nicole E Plaskon

    The presence of full-length complements of viral genomic RNA is a hallmark of RNA virus replication within an infected cell. As such, methods for detecting and measuring specific strands of viral RNA in infected cells and tissues are important in the study of RNA viruses. Strand-specific quantitative real-time PCR (ssqPCR) assays are increasingly being used for this purpose, but the accuracy of these assays depends on the assumption that the amount of cDNA measured during the quantitative PCR (qPCR) step accurately reflects the amount of a specific viral RNA strand present in the RT reaction. To specifically test this assumption, we developed multiple ssqPCR assays for the positive-strand RNA virus o'nyong-nyong (ONNV) that were based upon the most prevalent ssqPCR assay design types in the literature. We then compared various parameters of the ONNV-specific assays. We found that an assay employing standard unmodified virus-specific primers failed to discern the difference between cDNAs generated from virus-specific primers and those generated through false priming. Further, we were unable to accurately measure levels of ONNV (-) strand RNA with this assay when higher levels of cDNA generated from the (+) strand were present. Taken together, these results suggest that assays of this type do not accurately quantify levels of the anti-genomic strand present during RNA virus infectious cycles. However, an assay permitting the use of a tag-specific primer was able to distinguish cDNAs transcribed from ONNV (-) strand RNA from other cDNAs present, thus allowing accurate quantification of the anti-genomic strand. We also report the sensitivities of two different detection strategies and chemistries, SYBR(R) Green and DNA hydrolysis probes, used with our tagged ONNV-specific ssqPCR assays. Finally, we describe the development, design and validation of ssqPCR assays for chikungunya virus (CHIKV), the recent cause of large outbreaks of disease in the Indian Ocean
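
    The quantification step shared by these assay designs, whatever their strand specificity, is a standard curve of Ct versus log10 input copies; a hedged sketch with idealized, invented numbers:

    ```python
    import numpy as np

    # Idealized dilution series: Ct rises by ~3.32 cycles per 10-fold dilution,
    # which corresponds to ~100% amplification efficiency.
    log_copies = np.array([3.0, 4.0, 5.0, 6.0])     # log10 of input copies
    ct = np.array([30.00, 26.68, 23.36, 20.04])     # measured Ct (invented)
    slope, intercept = np.polyfit(log_copies, ct, 1)

    def copies_from_ct(c):
        """Read an unknown sample's input copies off the fitted standard curve."""
        return 10.0 ** ((c - intercept) / slope)
    ```

    With this curve, a sample at Ct 30 reads back as 10^3 input copies; real assays also report the fit's R^2 and efficiency, 10^(-1/slope) - 1, as quality controls.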

  19. Accurate tracking control in LOM application

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The fabrication of an accurate prototype directly from a CAD model in a short time depends on accurate tracking control and reference trajectory planning in Laminated Object Manufacture (LOM) applications. An improvement in contour accuracy is acquired by the introduction of a tracking controller and a trajectory generation policy. A model of the X-Y positioning system of the LOM machine is developed as the design basis of the tracking controller. The ZPETC (Zero Phase Error Tracking Controller) is used to eliminate single-axis following error and thus reduce the contour error. The simulation is developed on a Matlab model based on a retrofitted LOM machine, and satisfactory results are acquired.

  20. Using an Educational Electronic Documentation System to Help Nursing Students Accurately Identify Nursing Diagnoses

    Science.gov (United States)

    Pobocik, Tamara J.

    2013-01-01

    The use of technology and electronic medical records in healthcare has increased exponentially. This quantitative research project used a pretest/posttest design and reviewed how an educational electronic documentation system helped nursing students to identify the accurate “related to” statement of the nursing diagnosis for the patient in the case…

  1. PIVlab – Towards User-friendly, Affordable and Accurate Digital Particle Image Velocimetry in MATLAB

    NARCIS (Netherlands)

    Stamhuis, Eize; Thielicke, William

    2014-01-01

    Digital particle image velocimetry (DPIV) is a non-intrusive analysis technique that is very popular for mapping flows quantitatively. To get accurate results, in particular in complex flow fields, a number of challenges have to be faced and solved: The quality of the flow measurements is affected b

  3. Accurate Virus Quantitation Using a Scanning Transmission Electron Microscopy (STEM) Detector in a Scanning Electron Microscope

    Science.gov (United States)

    2017-06-29

    ...remove any viral aggregation. Nutrient-rich media was required for virus growth, but this media resulted in crystallized salt and sugar deposits on the... 1940s using a spray or centrifugation technique to deposit the sample on the supporting media, followed by negative staining with 2% uranyl acetate... being handled [2,3]. Choosing a cell line, media, and other variables is essential to a successful plaque assay [3]. The plaque assay has the lowest limit of

  4. Novel micelle PCR-based method for accurate, sensitive and quantitative microbiota profiling

    NARCIS (Netherlands)

    S.A. Boers (Stefan A.); J.P. Hays (John); R. Jansen (Ruud)

    2017-01-01

    In the last decade, many researchers have embraced 16S rRNA gene sequencing techniques, which has led to a wealth of publications and documented differences in the composition of microbial communities derived from many different ecosystems. However, comparison between different microbiot

  5. Accurate and objective copy number profiling using real-time quantitative PCR.

    Science.gov (United States)

    D'haene, Barbara; Vandesompele, Jo; Hellemans, Jan

    2010-04-01

    Copy number changes are known to be involved in numerous human genetic disorders. In this context, qPCR-based copy number screening may serve as the method of choice for targeted screening of the relevant disease genes and their surrounding regulatory landscapes. qPCR has many advantages over alternative methods, such as its low consumable and instrumentation costs, fast turnaround and assay development time, high sensitivity and open format (independent of a single supplier). In this chapter we provide all relevant information for successful implementation of qPCR-based copy number analysis. We emphasize the significance of thorough in silico and empirical validation of the primers, the need for a well thought-out experiment design, and the importance of quality controls along the entire workflow. Furthermore, we suggest an appropriate and practical way to calculate copy numbers and to objectively interpret the results. The provided guidelines will most certainly improve the quality and reliability of your qPCR-based copy number screening.
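
    One practical way to turn Ct values into copy numbers, in the spirit of the chapter's call for objective calculation, is the comparative ddCt method; a sketch assuming ~100% PCR efficiency and a diploid calibrator (the chapter's own formula may differ):

    ```python
    def copy_number(ct_target, ct_ref, cal_ct_target, cal_ct_ref, cal_copies=2):
        """Comparative ddCt copy number, assuming ~100% PCR efficiency."""
        d_ct_sample = ct_target - ct_ref              # normalize to a reference locus
        d_ct_calibrator = cal_ct_target - cal_ct_ref  # same for the diploid calibrator
        ddct = d_ct_sample - d_ct_calibrator
        return cal_copies * 2.0 ** (-ddct)
    ```

    A target amplifying one cycle earlier than in the calibrator (ddCt = -1) reads as a duplication (4 copies); one cycle later reads as a heterozygous deletion (1 copy).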

  6. Machine Learning of Accurate Energy-Conserving Molecular Force Fields

    CERN Document Server

    Chmiela, Stefan; Sauceda, Huziel E; Poltavsky, Igor; Schütt, Kristof; Müller, Klaus-Robert

    2016-01-01

    Using conservation of energy -- a fundamental property of closed classical and quantum mechanical systems -- we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential-energy surfaces of intermediate-size molecules with an accuracy of 0.3 kcal/mol for energies and 1 kcal/mol/Å for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of the cost of explicit AIMD calculations...

  7. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Abstract. Background: Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results: We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions: Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.
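
    As a toy illustration of summarizing conflicting gene trees (far simpler than the methods the study evaluates), a majority-rule clade consensus can be computed from gene trees represented as sets of clades:

    ```python
    from collections import Counter

    def majority_clades(gene_trees):
        """Clades (frozensets of taxa) present in more than half of the gene trees."""
        counts = Counter(clade for tree in gene_trees for clade in tree)
        n = len(gene_trees)
        return {clade for clade, k in counts.items() if 2 * k > n}
    ```

    With three gene trees where two support the clade {A, B} and one supports {B, C}, the consensus keeps {A, B} and discards the minority clade, mirroring (very crudely) how summary methods let the majority signal override discordant gene trees.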

  8. Accurate Switched-Voltage voltage averaging circuit

    OpenAIRE

    金光, 一幸; 松本, 寛樹

    2006-01-01

    This paper proposes an accurate Switched-Voltage (SV) voltage averaging circuit. It is presented to compensate for NMOS mismatch error in a MOS differential-type voltage averaging circuit. The proposed circuit consists of a voltage averaging and a SV sample/hold (S/H) circuit. It can operate using nonoverlapping three-phase clocks. Performance of this circuit is verified by PSpice simulations.

  9. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.

    1999-01-01

    Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency rendering.

  11. Technological Basis and Scientific Returns for Absolutely Accurate Measurements

    Science.gov (United States)

    Dykema, J. A.; Anderson, J.

    2011-12-01

    The 2006 NRC Decadal Survey fostered a new appreciation for societal objectives as a driving motivation for Earth science. Many high-priority societal objectives are dependent on predictions of weather and climate. These predictions are based on numerical models, which derive from approximate representations of well-founded physics and chemistry on space and timescales appropriate to global and regional prediction. These laws of chemistry and physics in turn have a well-defined quantitative relationship with physical measurement units, provided these measurement units are linked to international measurement standards that are the foundation of contemporary measurement science and standards for engineering and commerce. Without this linkage, measurements have an ambiguous relationship to scientific principles that introduces avoidable uncertainty in analyses, predictions, and improved understanding of the Earth system. Since the improvement of climate and weather prediction is fundamentally dependent on the improvement of the representation of physical processes, measurement systems that reduce the ambiguity between physical truth and observations represent an essential component of a national strategy for understanding and living with the Earth system. This paper examines the technological basis and potential science returns of sensors that make measurements that are quantitatively tied on-orbit to international measurement standards, and thus testable for systematic errors. This measurement strategy provides several distinct benefits. First, because of the quantitative relationship between these international measurement standards and fundamental physical constants, measurements of this type accurately capture the true physical and chemical behavior of the climate system and are not subject to adjustment due to excluded measurement physics or instrumental artifacts. In addition, such measurements can be reproduced by scientists anywhere in the world, at any time

  12. Quantitative lithofacies palaeogeography

    Institute of Scientific and Technical Information of China (English)

    Zeng-Zhao Feng; Xiu-Juan Zheng; Zhi-Dong Bao; Zhen-Kui Jin; Sheng-He Wu; You-Bin He; Yong-Min Peng; Yu-Qing Yang; Jia-Qiang Zhang; Yong-Sheng Zhang

    2014-01-01

    Quantitative lithofacies palaeogeography is an important discipline of palaeogeography. It is developed on the foundation of traditional lithofacies palaeogeography and palaeogeography, the core of which is the quantitative lithofacies palaeogeographic map. Quantity means that in the palaeogeographic map, the division and identification of each palaeogeographic unit are supported by quantitative data and quantitative fundamental maps. Our lithofacies palaeogeographic maps are quantitative or mainly quantitative. A great number of quantitative lithofacies palaeogeographic maps have been published, and articles and monographs of quantitative lithofacies palaeogeography have been published successively; thus quantitative lithofacies palaeogeography was formed and established. It is an important development in lithofacies palaeogeography. In composing quantitative lithofacies palaeogeographic maps, the key measure is the single-factor analysis and multifactor comprehensive mapping method, the methodology of quantitative lithofacies palaeogeography. In this paper, the authors utilize two case studies, one from the Early Ordovician of South China and the other from the Early Ordovician of Ordos, North China, to explain how to use this methodology to compose quantitative lithofacies palaeogeographic maps, and to discuss the palaeogeographic units in these maps. Finally, three characteristics, i.e., quantification, multiple orders and multiple types, of quantitative lithofacies palaeogeographic maps are conclusively discussed.

  13. Quantitative investment analysis

    CERN Document Server

    DeFusco, Richard

    2007-01-01

    In the "Second Edition" of "Quantitative Investment Analysis," financial experts Richard DeFusco, Dennis McLeavey, Jerald Pinto, and David Runkle outline the tools and techniques needed to understand and apply quantitative methods to today's investment process.

  14. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

    The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood, both for routes traveled as well as for sub-routes thereof. InTraTime allows specifying temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include...
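
    The idea of mining historical traces with query parameters can be sketched minimally (an illustration only, not the InTraTime implementation; the trace format is invented):

    ```python
    from statistics import median

    def estimate(traces, src, dst, hour=None):
        """Median travel time over matching (src, dst, hour, seconds) traces.

        traces: iterable of (origin, destination, hour_of_day, travel_seconds).
        hour=None ignores the time-of-day filter, mirroring an optional
        temporal query parameter.
        """
        times = [secs for a, b, h, secs in traces
                 if a == src and b == dst and (hour is None or h == hour)]
        return median(times) if times else None
    ```

    A real system would additionally decompose routes into sub-routes and weight estimates by likelihood, but the query-then-aggregate shape is the same.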

  15. Accurate colorimetric feedback for RGB LED clusters

    Science.gov (United States)

    Man, Kwong; Ashdown, Ian

    2006-08-01

    We present an empirical model of LED emission spectra that is applicable to both InGaN and AlInGaP high-flux LEDs, and which accurately predicts their relative spectral power distributions over a wide range of LED junction temperatures. We further demonstrate with laboratory measurements that changes in LED spectral power distribution with temperature can be accurately predicted with first- or second-order equations. This provides the basis for a real-time colorimetric feedback system for RGB LED clusters that can maintain the chromaticity of white light at constant intensity to within +/-0.003 Δuv over a range of 45 degrees Celsius, and to within 0.01 Δuv when dimmed over an intensity range of 10:1.
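
    The first- or second-order temperature models mentioned can be illustrated by fitting a quadratic to dominant-wavelength measurements; the data and coefficients below are invented, not the paper's measurements:

    ```python
    import numpy as np

    # Hypothetical red-LED dominant wavelength vs. junction temperature.
    t_j = np.array([25.0, 40.0, 55.0, 70.0])     # junction temperature (deg C)
    wl = np.array([625.0, 625.9, 627.0, 628.3])  # dominant wavelength (nm), invented
    coeffs = np.polyfit(t_j, wl, 2)              # second-order temperature model
    predict = np.poly1d(coeffs)                  # wavelength at any temperature
    ```

    A feedback controller would evaluate such a model per channel at the measured junction temperature and re-mix the RGB drive currents to hold the target chromaticity.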

  16. Accurate guitar tuning by cochlear implant musicians.

    Directory of Open Access Journals (Sweden)

    Thomas Lu

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with CI than his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger at ∼30 Hz for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
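
    Why beats are detectable where pitch discrimination fails can be illustrated numerically: mixing two tones 0.5 Hz apart produces an amplitude envelope whose null is easy to locate in time (all parameters below are invented for the sketch):

    ```python
    import numpy as np

    fs, dur = 8000, 2.0
    t = np.arange(int(fs * dur)) / fs
    beat = abs(110.5 - 110.0)                      # difference frequency: 0.5 Hz
    mix = np.sin(2 * np.pi * 110.0 * t) + np.sin(2 * np.pi * 110.5 * t)

    # Crude amplitude envelope: rectify and smooth with a ~50 ms moving average.
    env = np.convolve(np.abs(mix), np.ones(400) / 400, mode="same")

    # The envelope |cos(pi * beat * t)| has its first null at t = 1 s.
    i0, i1 = int(0.5 * fs), int(1.5 * fs)
    null_t = t[i0 + np.argmin(env[i0:i1])]
    ```

    Detecting that slow amplitude modulation is a temporal task, achievable through a CI's envelope coding even when fine spectral pitch cues are lost.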

  17. Synthesizing Accurate Floating-Point Formulas

    OpenAIRE

    Ioualalen, Arnault; Martel, Matthieu

    2013-01-01

    Many critical embedded systems perform floating-point computations yet their accuracy is difficult to assert and strongly depends on how formulas are written in programs. In this article, we focus on the synthesis of accurate formulas mathematically equal to the original formulas occurring in source codes. In general, an expression may be rewritten in many ways. To avoid any combinatorial explosion, we use an intermediate representation, called APEG, enabling us to rep...

  18. Efficient Accurate Context-Sensitive Anomaly Detection

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    For program behavior-based anomaly detection, the only way to ensure accurate monitoring is to construct an efficient and precise program behavior model. A new program behavior-based anomaly detection model, called the combined pushdown automaton (CPDA) model, was proposed based on static analysis of the binary executable. The CPDA model incorporates the optimized call-stack walk and code instrumentation techniques to gain complete context information. Thereby the proposed method can detect more attacks while retaining good performance.
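
    The value of context can be illustrated with a much simpler stand-in for the CPDA model: record the (call stack, system call) pairs observed in training and flag unseen pairs (illustrative only; the CPDA is a pushdown automaton derived from static binary analysis, not a lookup table):

    ```python
    def train(traces):
        """Learn the set of (call stack, system call) pairs from benign traces."""
        return {(tuple(stack), call) for stack, call in traces}

    def is_anomalous(model, stack, call):
        """Flag a system call whose calling context was never seen in training."""
        return (tuple(stack), call) not in model
    ```

    A context-insensitive model that only tracks system-call names would accept `open` from any call stack; the context-sensitive table rejects `open` issued from a stack in which it never legitimately occurs.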

  19. Accurate Control of Josephson Phase Qubits

    Science.gov (United States)

    2016-04-14

    K. Kraus, States, Effects, and Operations: Fundamental Notions of Quantum Theory, Lecture Notes in Physics, Vol. 190 (Springer-Verlag)... Physical Review B 68, 224518 (2003). Accurate control of Josephson phase qubits. Matthias Steffen, John M. Martinis, and Isaac L. Chuang. Center for Bits and Atoms and Department of Physics, MIT, Cambridge, Massachusetts 02139, USA; Solid State and Photonics Laboratory, Stanford University

  20. On accurate determination of contact angle

    Science.gov (United States)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  1. Accurate guitar tuning by cochlear implant musicians.

    Science.gov (United States)

    Lu, Thomas; Huang, Juan; Zeng, Fan-Gang

    2014-01-01

    Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones. Acoustic analysis showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
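
    The beat mechanism is easy to state quantitatively: two simultaneous tones produce amplitude beats at the difference of their frequencies, which vanish when the string is in tune. A minimal sketch (the tone frequencies are illustrative):

```python
def beat_frequency(f1, f2):
    """Two simultaneous tones of frequencies f1 and f2 (in Hz) produce
    amplitude beats at |f1 - f2| Hz -- audible as slow loudness fluctuations
    even when the pitch difference itself is too small to discriminate."""
    return abs(f1 - f2)

# Tuning the A string (110 Hz) against a 110 Hz reference tone:
print(beat_frequency(112.0, 110.0))  # -> 2.0 (two beats per second: still off)
print(beat_frequency(110.0, 110.0))  # -> 0.0 (beats vanish: in tune)
```

    Nulling the beat rate is a temporal task, which is why it survives the poor spectral resolution of a CI.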

  2. Accurate integration of forced and damped oscillators

    OpenAIRE

    García Alonso, Fernando Luis; Cortés Molina, Mónica; Villacampa, Yolanda; Reyes Perales, José Antonio

    2016-01-01

    The new methods accurately integrate forced and damped oscillators. A family of analytical functions, known as T-functions, is introduced, which depend on three parameters. The solution is expressed as a series of T-functions, their coefficients being calculated by means of recurrences involving the perturbation function. In the T-functions series method the perturbation parameter is the factor in the local truncation error. Furthermore, this method is zero-stable and convergent. An applica...

  3. Accurate structural correlations from maximum likelihood superpositions.

    Directory of Open Access Journals (Sweden)

    Douglas L Theobald

    2008-02-01

    The cores of globular proteins are densely packed, resulting in complicated networks of structural interactions. These interactions in turn give rise to dynamic structural correlations over a wide range of time scales. Accurate analysis of these complex correlations is crucial for understanding biomolecular mechanisms and for relating structure to function. Here we report a highly accurate technique for inferring the major modes of structural correlation in macromolecules using likelihood-based statistical analysis of sets of structures. This method is generally applicable to any ensemble of related molecules, including families of nuclear magnetic resonance (NMR) models, different crystal forms of a protein, and structural alignments of homologous proteins, as well as molecular dynamics trajectories. Dominant modes of structural correlation are determined using principal components analysis (PCA) of the maximum likelihood estimate of the correlation matrix. The correlations we identify are inherently independent of the statistical uncertainty and dynamic heterogeneity associated with the structural coordinates. We additionally present an easily interpretable method ("PCA plots") for displaying these positional correlations by color-coding them onto a macromolecular structure. Maximum likelihood PCA of structural superpositions, and the structural PCA plots that illustrate the results, will facilitate the accurate determination of dynamic structural correlations analyzed in diverse fields of structural biology.
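
    The core of the pipeline, PCA of an estimated correlation matrix, can be sketched on toy data. This omits the maximum likelihood superposition step and uses an invented two-atom, one-dimensional ensemble, with the 2x2 eigenproblem solved in closed form:

```python
import math

# Minimal sketch (not the full maximum-likelihood method): estimate the
# correlation matrix of atomic positions across an ensemble of models, then
# take its leading eigenvector as the dominant mode of correlated motion.
# Two 1-D "atoms", invented coordinates.

models = [(0.0, 0.1), (1.0, 0.9), (2.0, 2.1), (3.0, 2.9)]  # (atom1, atom2) per model

n = len(models)
mean = [sum(m[i] for m in models) / n for i in (0, 1)]
cov = [[sum((m[i] - mean[i]) * (m[j] - mean[j]) for m in models) / n
        for j in (0, 1)] for i in (0, 1)]
corr = [[cov[i][j] / math.sqrt(cov[i][i] * cov[j][j]) for j in (0, 1)] for i in (0, 1)]

# Closed-form leading eigenvalue/eigenvector of the symmetric 2x2 matrix.
a, b, d = corr[0][0], corr[0][1], corr[1][1]
lam = (a + d) / 2 + math.sqrt(((a - d) / 2) ** 2 + b * b)
v = (b, lam - a)  # unnormalized leading eigenvector: both atoms move together

print(round(corr[0][1], 3))  # -> 0.997: near-perfect positional correlation
```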

  4. Accurate finite element modeling of acoustic waves

    Science.gov (United States)

    Idesman, A.; Pham, D.

    2014-07-01

    In the paper we suggest an accurate finite element approach for the modeling of acoustic waves under a suddenly applied load. We consider the standard linear elements and the linear elements with reduced dispersion for the space discretization as well as the explicit central-difference method for time integration. The analytical study of the numerical dispersion shows that the most accurate results can be obtained with the time increments close to the stability limit. However, even in this case and the use of the linear elements with reduced dispersion, mesh refinement leads to divergent numerical results for acoustic waves under a suddenly applied load. This is explained by large spurious high-frequency oscillations. For the quantification and the suppression of spurious oscillations, we have modified and applied a two-stage time-integration technique that includes the stage of basic computations and the filtering stage. This technique allows accurate convergent results at mesh refinement as well as significantly reduces the numerical anisotropy of solutions. We should mention that the approach suggested is very general and can be equally applied to any loading as well as for any space-discretization technique and any explicit or implicit time-integration method.

  5. Accurate measurement of unsteady state fluid temperature

    Science.gov (United States)

    Jaremkiewicz, Magdalena

    2017-03-01

    In this paper, two accurate methods for determining the transient fluid temperature were presented. Measurements were conducted for boiling water since its temperature is known. At the beginning the thermometers are at ambient temperature; next they are immediately immersed into saturated water. The measurements were carried out with two thermometers of different construction but with the same housing outer diameter equal to 15 mm. One of them is a K-type industrial thermometer widely available commercially. The temperature indicated by the thermometer was corrected by treating the thermometer as a first- or second-order inertia device. A new design of thermometer was proposed and also used to measure the temperature of boiling water. Its characteristic feature is a cylinder-shaped housing with the sheath thermocouple located in its center. The temperature of the fluid was determined based on measurements taken in the axis of the solid cylindrical element (housing) using the inverse space marching method. Measurements of the transient temperature of the air flowing through a wind tunnel using the same thermometers were also carried out. The proposed measurement technique provides more accurate results compared with measurements using industrial thermometers in conjunction with a simple temperature correction based on a first- or second-order inertia model. By comparing the results, it was demonstrated that the new thermometer allows the fluid temperature to be obtained much faster and with higher accuracy than the industrial thermometer. Accurate measurements of fast-changing fluid temperature are possible due to the low-inertia thermometer and the fast space marching method applied for solving the inverse heat conduction problem.
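
    The first-order inertia correction mentioned above can be sketched numerically: a slow sensor obeys tau * dT_ind/dt + T_ind = T_fluid, so the fluid temperature can be recovered as T_ind + tau * dT_ind/dt. The time constant and temperatures below are assumed for illustration, not taken from the paper.

```python
import math

# First-order sensor correction: T_fluid = T_ind + tau * dT_ind/dt.
# Illustrative values only.

tau = 8.0              # assumed thermometer time constant, s
T0, Tf = 20.0, 100.0   # ambient start, boiling-water fluid temperature, deg C
dt = 0.01              # finite-difference step, s

def indicated(t):
    """Exact response of a first-order sensor to a step change at t = 0."""
    return Tf + (T0 - Tf) * math.exp(-t / tau)

for t in (1.0, 5.0, 20.0):
    T_ind = indicated(t)
    dTdt = (indicated(t + dt) - indicated(t - dt)) / (2 * dt)  # central difference
    T_corr = T_ind + tau * dTdt
    print(t, round(T_ind, 1), round(T_corr, 1))
# The corrected value sits at ~100.0 long before the raw reading settles.
```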

  6. New law requires 'medically accurate' lesson plans.

    Science.gov (United States)

    1999-09-17

    The California Legislature has passed a bill requiring all textbooks and materials used to teach about AIDS be medically accurate and objective. Statements made within the curriculum must be supported by research conducted in compliance with scientific methods, and published in peer-reviewed journals. Some of the current lesson plans were found to contain scientifically unsupported and biased information. In addition, the bill requires material to be "free of racial, ethnic, or gender biases." The legislation is supported by a wide range of interests, but opposed by the California Right to Life Education Fund, because they believe it discredits abstinence-only material.

  7. Accurate diagnosis is essential for amebiasis

    Institute of Scientific and Technical Information of China (English)

    2004-01-01

    Amebiasis is one of the three most common causes of death from parasitic disease, and Entamoeba histolytica is the most widely distributed parasite in the world. In particular, Entamoeba histolytica infection in the developing countries is a significant health problem in amebiasis-endemic areas, with a significant impact on infant mortality[1]. In recent years a worldwide increase in the number of patients with amebiasis has refocused attention on this important infection. On the other hand, improving the quality of parasitological methods and the widespread use of accurate techniques have improved our knowledge about the disease.

  8. The first accurate description of an aurora

    Science.gov (United States)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers an interesting look into how medieval scholars viewed the subjects that we study.Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  9. Niche Genetic Algorithm with Accurate Optimization Performance

    Institute of Scientific and Technical Information of China (English)

    LIU Jian-hua; YAN De-kun

    2005-01-01

    Based on a crowding mechanism, a novel niche genetic algorithm was proposed which can record the evolutionary direction dynamically during evolution. After evolution, the solutions' precision can be greatly improved by means of a local search along the recorded direction. Simulation shows that this algorithm can not only keep population diversity but also find accurate solutions. Although this method takes more time than the standard GA, it is well worth applying to cases that demand high solution precision.

  10. Universality: Accurate Checks in Dyson's Hierarchical Model

    Science.gov (United States)

    Godina, J. J.; Meurice, Y.; Oktay, M. B.

    2003-06-01

    In this talk we present high-accuracy calculations of the susceptibility near βc for Dyson's hierarchical model in D = 3. Using linear fitting, we estimate the leading (γ) and subleading (Δ) exponents. Independent estimates are obtained by calculating the first two eigenvalues of the linearized renormalization group transformation. We found γ = 1.29914073 ± 10^-8 and Δ = 0.4259469 ± 10^-7, independently of the choice of local integration measure (Ising or Landau-Ginzburg). After a suitable rescaling, the approximate fixed points for a large class of local measures coincide accurately with a fixed point constructed by Koch and Wittwer.
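
    The linear-fitting step can be illustrated on synthetic power-law data: near criticality the susceptibility behaves as chi ~ (beta_c - beta)^(-gamma), so gamma is the slope of log(chi) against log(beta_c - beta). The amplitude and sample points below are invented.

```python
import math

# Recover the leading exponent gamma by a least-squares line through
# (log t, log chi), where t = beta_c - beta. Synthetic data with
# gamma = 1.2991 for illustration.

gamma_true, A = 1.2991, 2.0
ts = [10 ** (-k / 4) for k in range(4, 16)]    # reduced "temperatures"
chis = [A * t ** (-gamma_true) for t in ts]    # chi = A * t^(-gamma)

xs = [math.log(t) for t in ts]
ys = [math.log(c) for c in chis]
n = len(xs)
xbar, ybar = sum(xs) / n, sum(ys) / n
slope = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)

print(round(-slope, 4))  # -> 1.2991, recovering gamma
```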

  11. Accurate Stellar Parameters for Exoplanet Host Stars

    Science.gov (United States)

    Brewer, John Michael; Fischer, Debra; Basu, Sarbani; Valenti, Jeff A.

    2015-01-01

    A large impediment to our understanding of planet formation is obtaining a clear picture of planet radii and densities. Although determining precise ratios between planet and stellar host are relatively easy, determining accurate stellar parameters is still a difficult and costly undertaking. High resolution spectral analysis has traditionally yielded precise values for some stellar parameters but stars in common between catalogs from different authors or analyzed using different techniques often show offsets far in excess of their uncertainties. Most analyses now use some external constraint, when available, to break observed degeneracies between surface gravity, effective temperature, and metallicity which can otherwise lead to correlated errors in results. However, these external constraints are impossible to obtain for all stars and can require more costly observations than the initial high resolution spectra. We demonstrate that these discrepancies can be mitigated by use of a larger line list that has carefully tuned atomic line data. We use an iterative modeling technique that does not require external constraints. We compare the surface gravity obtained with our spectral synthesis modeling to asteroseismically determined values for 42 Kepler stars. Our analysis agrees well with only a 0.048 dex offset and an rms scatter of 0.05 dex. Such accurate stellar gravities can reduce the primary source of uncertainty in radii by almost an order of magnitude over unconstrained spectral analysis.

  12. Accurate pose estimation for forensic identification

    Science.gov (United States)

    Merckx, Gert; Hermans, Jeroen; Vandermeulen, Dirk

    2010-04-01

    In forensic authentication, one aims to identify the perpetrator among a series of suspects or distractors. A fundamental problem in any recognition system that aims for identification of subjects in a natural scene is the lack of constraints on viewing and imaging conditions. In forensic applications, identification proves even more challenging, since most surveillance footage is of abysmal quality. In this context, robust methods for pose estimation are paramount. In this paper we will therefore present a new pose estimation strategy for very low quality footage. Our approach uses 3D-2D registration of a textured 3D face model with the surveillance image to obtain accurate far field pose alignment. Starting from an inaccurate initial estimate, the technique uses novel similarity measures based on the monogenic signal to guide a pose optimization process. We will illustrate the descriptive strength of the introduced similarity measures by using them directly as a recognition metric. Through validation, using both real and synthetic surveillance footage, our pose estimation method is shown to be accurate, and robust to lighting changes and image degradation.

  13. Accurate pattern registration for integrated circuit tomography

    Energy Technology Data Exchange (ETDEWEB)

    Levine, Zachary H.; Grantham, Steven; Neogi, Suneeta; Frigo, Sean P.; McNulty, Ian; Retsch, Cornelia C.; Wang, Yuxin; Lucatorto, Thomas B.

    2001-07-15

    As part of an effort to develop high resolution microtomography for engineered structures, a two-level copper integrated circuit interconnect was imaged using 1.83 keV x rays at 14 angles employing a full-field Fresnel zone plate microscope. A major requirement for high resolution microtomography is the accurate registration of the reference axes in each of the many views needed for a reconstruction. A reconstruction with 100 nm resolution would require registration accuracy of 30 nm or better. This work demonstrates that even images that have strong interference fringes can be used to obtain accurate fiducials through the use of Radon transforms. We show that we are able to locate the coordinates of the rectilinear circuit patterns to 28 nm. The procedure is validated by agreement between an x-ray parallax measurement of 1.41 ± 0.17 μm and a measurement of 1.58 ± 0.08 μm from a scanning electron microscope image of a cross section.

  14. Accurate basis set truncation for wavefunction embedding

    Science.gov (United States)

    Barnes, Taylor A.; Goodpaster, Jason D.; Manby, Frederick R.; Miller, Thomas F.

    2013-07-01

    Density functional theory (DFT) provides a formally exact framework for performing embedded subsystem electronic structure calculations, including DFT-in-DFT and wavefunction theory-in-DFT descriptions. In the interest of efficiency, it is desirable to truncate the atomic orbital basis set in which the subsystem calculation is performed, thus avoiding high-order scaling with respect to the size of the MO virtual space. In this study, we extend a recently introduced projection-based embedding method [F. R. Manby, M. Stella, J. D. Goodpaster, and T. F. Miller III, J. Chem. Theory Comput. 8, 2564 (2012)], 10.1021/ct300544e to allow for the systematic and accurate truncation of the embedded subsystem basis set. The approach is applied to both covalently and non-covalently bound test cases, including water clusters and polypeptide chains, and it is demonstrated that errors associated with basis set truncation are controllable to well within chemical accuracy. Furthermore, we show that this approach allows for switching between accurate projection-based embedding and DFT embedding with approximate kinetic energy (KE) functionals; in this sense, the approach provides a means of systematically improving upon the use of approximate KE functionals in DFT embedding.

  15. How Accurately can we Calculate Thermal Systems?

    Energy Technology Data Exchange (ETDEWEB)

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-04-20

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K_eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors.

  16. Accurate taxonomic assignment of short pyrosequencing reads.

    Science.gov (United States)

    Clemente, José C; Jansson, Jesper; Valiente, Gabriel

    2010-01-01

    Ambiguities in the taxonomy-dependent assignment of pyrosequencing reads are usually resolved by mapping each read to the lowest common ancestor in a reference taxonomy of all those sequences that match the read. This conservative approach has the drawback of mapping a read to a possibly large clade that may also contain many sequences not matching the read. A more accurate taxonomic assignment of short reads can be made by mapping each read to the node in the reference taxonomy that provides the best precision and recall. We show that given a suffix array for the sequences in the reference taxonomy, a short read can be mapped to the node of the reference taxonomy with the best combined value of precision and recall in time linear in the size of the taxonomy subtree rooted at the lowest common ancestor of the matching sequences. An accurate taxonomic assignment of short reads can thus be made with about the same efficiency as when mapping each read to the lowest common ancestor of all matching sequences in a reference taxonomy. We demonstrate the effectiveness of our approach on several metagenomic datasets of marine and gut microbiota.
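
    The precision/recall selection rule can be sketched on a toy taxonomy. Node names and matching sets below are invented, and the suffix-array machinery that makes the real method efficient is omitted: for each candidate node, precision is the fraction of the clade's sequences that match the read, recall is the fraction of matching sequences inside the clade, and the node with the best F-measure wins.

```python
# Toy illustration of choosing the assignment node by precision/recall
# instead of the plain lowest common ancestor (LCA). Invented taxonomy.

children = {
    "root": ["cladeA", "cladeB"],
    "cladeA": ["s1", "s2", "s3"],
    "cladeB": ["s4", "s5"],
}

def leaves(node):
    """Set of sequence leaves under a taxonomy node."""
    kids = children.get(node)
    if not kids:
        return {node}
    return set().union(*(leaves(k) for k in kids))

def best_node(matching):
    """Node with the best F-measure for a read matching the given leaves."""
    best, best_f = None, -1.0
    for node in ("root", "cladeA", "cladeB"):
        clade = leaves(node)
        hit = len(clade & matching)
        if hit == 0:
            continue
        precision = hit / len(clade)
        recall = hit / len(matching)
        f = 2 * precision * recall / (precision + recall)
        if f > best_f:
            best, best_f = node, f
    return best

# A read matching s1 and s2 maps to cladeA (precision 2/3, recall 1),
# a much tighter claim than mapping to a large clade would allow.
print(best_node({"s1", "s2"}))  # -> cladeA
```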

  17. Accurate determination of characteristic relative permeability curves

    Science.gov (United States)

    Krause, Michael H.; Benson, Sally M.

    2015-09-01

    A recently developed technique to accurately characterize sub-core scale heterogeneity is applied to investigate the factors responsible for flowrate-dependent effective relative permeability curves measured on core samples in the laboratory. The dependency of laboratory measured relative permeability on flowrate has long been both supported and challenged by a number of investigators. Studies have shown that this apparent flowrate dependency is a result of both sub-core scale heterogeneity and outlet boundary effects. However this has only been demonstrated numerically for highly simplified models of porous media. In this paper, flowrate dependency of effective relative permeability is demonstrated using two rock cores, a Berea Sandstone and a heterogeneous sandstone from the Otway Basin Pilot Project in Australia. Numerical simulations of steady-state coreflooding experiments are conducted at a number of injection rates using a single set of input characteristic relative permeability curves. Effective relative permeability is then calculated from the simulation data using standard interpretation methods for calculating relative permeability from steady-state tests. Results show that simplified approaches may be used to determine flowrate-independent characteristic relative permeability provided flow rate is sufficiently high, and the core heterogeneity is relatively low. It is also shown that characteristic relative permeability can be determined at any typical flowrate, and even for geologically complex models, when using accurate three-dimensional models.
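
    The standard steady-state interpretation referenced above rests on Darcy's law: each phase's effective relative permeability is k_r = q·μ·L / (k·A·ΔP). A minimal sketch with purely illustrative values:

```python
# Steady-state relative permeability from Darcy's law. Values are
# illustrative, not from the paper.

def relative_permeability(q, mu, L, k, A, dP):
    """q: flow rate [m^3/s], mu: viscosity [Pa s], L: core length [m],
    k: absolute permeability [m^2], A: cross-section [m^2],
    dP: pressure drop [Pa]."""
    return q * mu * L / (k * A * dP)

kr_w = relative_permeability(q=1.0e-8, mu=1.0e-3, L=0.1,
                             k=1.0e-13, A=5.0e-4, dP=2.0e5)
print(round(kr_w, 3))  # -> 0.1
```

    The flowrate dependence discussed in the abstract enters through ΔP: with sub-core heterogeneity and outlet effects, the measured pressure drop at low rates no longer reflects the viscous gradient alone, biasing k_r.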

  18. Efficient design, accurate fabrication and effective characterization of plasmonic quasicrystalline arrays of nano-spherical particles

    Science.gov (United States)

    Namin, Farhad A.; Yuwen, Yu A.; Liu, Liu; Panaretos, Anastasios H.; Werner, Douglas H.; Mayer, Theresa S.

    2016-02-01

    In this paper, the scattering properties of two-dimensional quasicrystalline plasmonic lattices are investigated. We combine a newly developed synthesis technique, which allows for accurate fabrication of spherical nanoparticles, with a recently published variation of generalized multiparticle Mie theory to develop the first quantitative model for plasmonic nano-spherical arrays based on quasicrystalline morphologies. In particular, we study the scattering properties of Penrose and Ammann-Beenker gold spherical nanoparticle array lattices. We demonstrate that by using quasicrystalline lattices, one can obtain multi-band or broadband plasmonic resonances which are not possible in periodic structures. Unlike previously published works, our technique provides quantitative results which show excellent agreement with experimental measurements.

  19. Quantitative Autonomic Testing

    OpenAIRE

    Novak, Peter

    2011-01-01

    Disorders associated with dysfunction of the autonomic nervous system are quite common yet frequently unrecognized. Quantitative autonomic testing can be an invaluable tool for the evaluation of these disorders, both in the clinic and in research. There are a number of autonomic tests; however, only a few have been validated clinically or are quantitative. Here, a fully quantitative and clinically validated protocol for the testing of autonomic functions is presented. As a bare minimum the clinical autonomic laboratory shoul...

  20. Accurate, fully-automated NMR spectral profiling for metabolomics.

    Directory of Open Access Journals (Sweden)

    Siamak Ravanbakhsh

    Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's nuclear magnetic resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures including real biological samples (serum and CSF), defined mixtures, and realistic computer-generated spectra involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~ 90% correct identification and ~ 10% quantification error), in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts.
We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of

  1. Accurate Telescope Mount Positioning with MEMS Accelerometers

    Science.gov (United States)

    Mészáros, L.; Jaskó, A.; Pál, A.; Csépány, G.

    2014-08-01

    This paper describes the advantages and challenges of applying microelectromechanical accelerometer systems (MEMS accelerometers) in order to attain precise, accurate and stateless positioning of telescope mounts. This provides a method completely independent from other forms of electronic, optical, mechanical or magnetic feedback or real-time astrometry. Our goal is to reach the sub-arcminute range, which is well smaller than the field-of-view of conventional imaging telescope systems. Here we present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors, and we also detail how our procedures can be extended in order to attain even finer measurements. In addition, our paper discusses how a complete system design can be implemented in order to be a part of a telescope control system.
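
    The basic attitude computation from a static accelerometer reading can be sketched as follows. With the mount at rest the sensor measures only gravity, so tilt angles follow from the acceleration components; the axis conventions and the 30-degree example are assumptions, and the paper's calibration and averaging needed for sub-arcminute work are omitted.

```python
import math

# Tilt from a gravity-only MEMS accelerometer reading (in units of g).
# Axis conventions are assumed for illustration.

def tilt_angles(ax, ay, az):
    """Return (pitch, roll) in degrees from a static reading."""
    pitch = math.degrees(math.atan2(-ax, math.hypot(ay, az)))
    roll = math.degrees(math.atan2(ay, az))
    return pitch, roll

# Sensor pitched 30 degrees about its y-axis: gravity projects as
# (-sin 30, 0, cos 30) in the sensor frame under these conventions.
pitch, roll = tilt_angles(-0.5, 0.0, math.sqrt(3) / 2)
print(round(pitch, 1), round(roll, 1))  # -> 30.0 0.0
```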

  2. Apparatus for accurately measuring high temperatures

    Science.gov (United States)

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  3. Accurate renormalization group analyses in neutrino sector

    Energy Technology Data Exchange (ETDEWEB)

    Haba, Naoyuki [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Kaneta, Kunio [Kavli IPMU (WPI), The University of Tokyo, Kashiwa, Chiba 277-8568 (Japan); Takahashi, Ryo [Graduate School of Science and Engineering, Shimane University, Matsue 690-8504 (Japan); Yamaguchi, Yuya [Department of Physics, Faculty of Science, Hokkaido University, Sapporo 060-0810 (Japan)

    2014-08-15

    We investigate accurate renormalization group analyses in neutrino sector between ν-oscillation and seesaw energy scales. We consider decoupling effects of top quark and Higgs boson on the renormalization group equations of light neutrino mass matrix. Since the decoupling effects are given in the standard model scale and independent of high energy physics, our method can basically apply to any models beyond the standard model. We find that the decoupling effects of Higgs boson are negligible, while those of top quark are not. Particularly, the decoupling effects of top quark affect neutrino mass eigenvalues, which are important for analyzing predictions such as mass squared differences and neutrinoless double beta decay in an underlying theory existing at high energy scale.

  4. Quantitative Algebraic Reasoning

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Panangaden, Prakash; Plotkin, Gordon

    2016-01-01

    We develop a quantitative analogue of equational reasoning which we call quantitative algebra. We define an equality relation indexed by rationals: a =ε b, which we think of as saying that "a is approximately equal to b up to an error of ε". We have 4 interesting examples where we have a quantitative equational theory whose free algebras correspond to well known structures. In each case we have finitary and continuous versions. The four cases are: Hausdorff metrics from quantitative semilattices; p-Wasserstein metrics (hence also the Kantorovich metric) from barycentric algebras and also from pointed...
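
    One of the cited examples can be made concrete. This is a sketch, not the paper's formal development: interpret a =ε b as "Hausdorff distance at most ε" between finite sets of reals, the quantitative-semilattice case, where union is the semilattice operation.

```python
# Interpreting the indexed equality a =_eps b as d(a, b) <= eps, with d
# the Hausdorff distance between finite nonempty sets of reals.
# Illustrative only.

def hausdorff(A, B):
    """Hausdorff distance between finite nonempty sets of reals."""
    one_way = lambda S, T: max(min(abs(s - t) for t in T) for s in S)
    return max(one_way(A, B), one_way(B, A))

def approx_eq(A, B, eps):
    """The relation A =_eps B of the quantitative equational theory."""
    return hausdorff(A, B) <= eps

A, B = {0.0, 1.0}, {0.1, 0.9}
print(hausdorff(A, B))        # -> 0.1
print(approx_eq(A, B, 0.1))   # -> True
print(approx_eq(A, B, 0.05))  # -> False
```

    The triangle-style axiom of the theory is visible here: if A =ε B and B =δ C, then A =(ε+δ) C, because the Hausdorff distance is a metric.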

  5. Accurate Weather Forecasting for Radio Astronomy

    Science.gov (United States)

    Maddalena, Ronald J.

    2010-01-01

    The NRAO Green Bank Telescope routinely observes at wavelengths from 3 mm to 1 m. As with all mm-wave telescopes, observing conditions depend upon the variable atmospheric water content. The site provides over 100 days/yr when opacities are low enough for good observing at 3 mm, but winds on the open-air structure reduce the time suitable for 3-mm observing where pointing is critical. Thus, to maximize productivity the observing wavelength needs to match weather conditions. For 6 years the telescope has used a dynamic scheduling system (recently upgraded; www.gb.nrao.edu/DSS) that requires accurate multi-day forecasts for winds and opacities. Since opacity forecasts are not provided by the National Weather Service (NWS), I have developed an automated system that takes available forecasts, derives forecasted opacities, and deploys the results on the web in user-friendly graphical overviews (www.gb.nrao.edu/ rmaddale/Weather). The system relies on the "North American Mesoscale" models, which are updated by the NWS every 6 hrs, have a 12 km horizontal resolution, 1 hr temporal resolution, run to 84 hrs, and have 60 vertical layers that extend to 20 km. Each forecast consists of a time series of ground conditions, cloud coverage, etc, and, most importantly, temperature, pressure, and humidity as a function of height. I use Liebe's MWP model (Radio Science, 20, 1069, 1985) to determine the absorption in each layer for each hour for 30 observing wavelengths. Radiative transfer provides, for each hour and wavelength, the total opacity and the radio brightness of the atmosphere, which contributes substantially at some wavelengths to Tsys and the observational noise. Comparisons of measured and forecasted Tsys at 22.2 and 44 GHz imply that the forecasted opacities are good to about 0.01 Nepers, which is sufficient for forecasting and accurate calibration. Reliability is high out to 2 days and degrades slowly for longer-range forecasts.
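
    The per-layer bookkeeping described above can be sketched with plane-parallel radiative transfer: the zenith opacity is the sum of the layer absorptions, and each layer's emission reaches the ground attenuated by the layers below it. The layer temperatures and opacities here are invented, not model output.

```python
import math

# Total zenith opacity and atmospheric radio brightness from a stack of
# layers, each with a physical temperature (K) and an absorption (nepers).
# Illustrative layer values only.

layers = [(290.0, 0.010), (270.0, 0.006), (240.0, 0.003)]  # bottom to top

tau_total = sum(tau for _, tau in layers)

T_b, attenuation = 0.0, 1.0
for T, tau in layers:                      # walking upward from the ground
    T_b += attenuation * T * (1.0 - math.exp(-tau))  # emission of this layer
    attenuation *= math.exp(-tau)          # absorption by layers already passed

print(round(tau_total, 3))  # -> 0.019 (total opacity in nepers)
print(round(T_b, 2))        # atmospheric contribution to Tsys, in K (~5.2)
```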

  6. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case. This is because travelers prefer the route reported to be in the best condition, while delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, decreasing the capacity, increasing oscillations, and driving the system away from equilibrium. To avoid this negative effect, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality is helpful to improve the efficiency in terms of capacity, oscillation, and the gap separating the system from equilibrium.
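
    A stripped-down version of the two-route dynamics illustrates the effect of the threshold. All parameters (the linear travel-time function, N, the BR values) are invented and the model is far simpler than the paper's simulation: travelers choose daily using day-old travel times, and with BR = 0 the delayed feedback makes the population flip-flop between routes, while a positive BR damps the oscillation.

```python
import random

# Minimal two-route sketch: N travelers, one-day-delayed travel-time
# feedback, boundedly rational threshold BR. Invented parameters.

def simulate(BR, days=200, N=1000, seed=1):
    random.seed(seed)
    info = (0.0, 0.0)     # yesterday's travel times (delayed feedback)
    history = []
    for _ in range(days):
        t1, t2 = info
        if abs(t1 - t2) < BR:
            # Indifferent: each traveler flips a fair coin.
            share = sum(random.random() < 0.5 for _ in range(N)) / N
        else:
            share = 1.0 if t1 < t2 else 0.0  # all pile onto the "better" route
        history.append(share)
        # Today's travel times, which travelers will only see tomorrow.
        info = (10 + 20 * share, 10 + 20 * (1 - share))
    tail = history[-50:]
    return max(tail) - min(tail)  # late-time oscillation amplitude

print(simulate(BR=0.0))  # -> 1.0: full flip-flop between the routes
print(simulate(BR=5.0) < 0.5)  # -> True: the threshold damps oscillations
```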

  7. Machine learning of accurate energy-conserving molecular force fields

    Science.gov (United States)

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel E.; Poltavsky, Igor; Schütt, Kristof T.; Müller, Klaus-Robert

    2017-01-01

    Using conservation of energy—a fundamental property of closed classical and quantum mechanical systems—we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol−1 for energies and 1 kcal mol−1 Å−1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods. PMID:28508076
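    The defining property GDML enforces, that the learned forces derive from a single scalar potential, is exactly what makes total energy a conserved quantity of the resulting dynamics. This can be checked numerically: integrating a conservative force with velocity Verlet keeps the total energy constant up to integrator error. A minimal sketch with an analytic harmonic force standing in for the learned field (Python; illustrative only, not the GDML code):

    ```python
    def energy_drift(k=1.0, m=1.0, x0=1.0, v0=0.0, dt=1e-3, steps=10000):
        """Integrate a 1-D harmonic oscillator with velocity Verlet using
        forces only (f = -k x) and report the relative drift of the total
        energy E = 0.5 m v^2 + 0.5 k x^2.  A force field derived from a
        potential (as GDML enforces) conserves E up to integrator error."""
        x, v = x0, v0
        e0 = 0.5 * m * v * v + 0.5 * k * x * x
        f = -k * x
        for _ in range(steps):
            v += 0.5 * dt * f / m
            x += dt * v
            f = -k * x
            v += 0.5 * dt * f / m
        e1 = 0.5 * m * v * v + 0.5 * k * x * x
        return abs(e1 - e0) / e0
    ```

    A non-conservative force (one not expressible as a gradient) would show a secular drift in the same test, which is the failure mode GDML's construction rules out.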

  8. Accurate measurement of streamwise vortices in low speed aerodynamic flows

    Science.gov (United States)

    Waldman, Rye M.; Kudo, Jun; Breuer, Kenneth S.

    2010-11-01

    Low Reynolds number experiments with flapping animals (such as bats and small birds) are of current interest in understanding biological flight mechanics, and due to their application to Micro Air Vehicles (MAVs) which operate in a similar parameter space. Previous PIV wake measurements have described the structures left by bats and birds, and provided insight to the time history of their aerodynamic force generation; however, these studies have faced difficulty drawing quantitative conclusions due to significant experimental challenges associated with the highly three-dimensional and unsteady nature of the flows, and the low wake velocities associated with lifting bodies that only weigh a few grams. This requires the high-speed resolution of small flow features in a large field of view using limited laser energy and finite camera resolution. Cross-stream measurements are further complicated by the high out-of-plane flow which requires thick laser sheets and short interframe times. To quantify and address these challenges we present data from a model study on the wake behind a fixed wing at conditions comparable to those found in biological flight. We present a detailed analysis of the PIV wake measurements, discuss the criteria necessary for accurate measurements, and present a new dual-plane PIV configuration to resolve these issues.

  9. Subvoxel accurate graph search using non-Euclidean graph space.

    Directory of Open Access Journals (Sweden)

    Michael D Abràmoff

    Full Text Available Graph search is attractive for the quantitative analysis of volumetric medical images, and especially for layered tissues, because it allows globally optimal solutions in low-order polynomial time. However, because nodes of graphs typically encode evenly distributed voxels of the volume with arcs connecting orthogonally sampled voxels in Euclidean space, segmentation cannot achieve greater precision than a single unit, i.e. the distance between two adjoining nodes, and partial volume effects are ignored. We generalize the graph to non-Euclidean space by allowing non-equidistant spacing between nodes, so that subvoxel accurate segmentation is achievable. Because the number of nodes and edges in the graph remains the same, running time and memory use are similar, while all the advantages of graph search, including global optimality and computational efficiency, are retained. A deformation field calculated from the volume data adaptively changes regional node density so that node density varies with the inverse of the expected cost. We validated our approach using optical coherence tomography (OCT images of the retina and 3-D MR of the arterial wall, and achieved statistically significant increased accuracy. Our approach allows improved accuracy in volume data acquired with the same hardware, and also, preserved accuracy with lower resolution, more cost-effective, image acquisition equipment. The method is not limited to any specific imaging modality and readily extensible to higher dimensions.

  10. Accurate lineshape spectroscopy and the Boltzmann constant.

    Science.gov (United States)

    Truong, G-W; Anstie, J D; May, E F; Stace, T M; Luiten, A N

    2015-10-14

    Spectroscopy has an illustrious history delivering serendipitous discoveries and providing a stringent testbed for new physical predictions, including applications from trace materials detection, to understanding the atmospheres of stars and planets, and even constraining cosmological models. Reaching fundamental-noise limits permits optimal extraction of spectroscopic information from an absorption measurement. Here, we demonstrate a quantum-limited spectrometer that delivers high-precision measurements of the absorption lineshape. These measurements yield a very accurate measurement of the excited-state (6P1/2) hyperfine splitting in Cs and reveal a breakdown in the well-known Voigt spectral profile. We develop a theoretical model that accounts for this breakdown, explaining the observations to within the shot-noise limit. Our model enables us to infer the thermal velocity dispersion of the Cs vapour with an uncertainty of 35 p.p.m. within an hour. This allows us to determine a value for Boltzmann's constant with a precision of 6 p.p.m., and an uncertainty of 71 p.p.m.
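    The final step the abstract describes, going from a measured thermal velocity dispersion to Boltzmann's constant, follows from the standard Doppler-broadening relation for the 1/e half-width, delta_nu_D = (nu0/c) * sqrt(2 k T / m). A minimal sketch (Python; the constants and function names are illustrative, not from the paper):

    ```python
    C = 299792458.0                        # speed of light, m/s
    M_CS = 132.905 * 1.66053906660e-27     # mass of a Cs atom, kg

    def boltzmann_from_doppler(nu0, delta_nu_d, temperature, mass=M_CS):
        """Invert the Doppler (1/e half-)width relation
        delta_nu_D = (nu0 / c) * sqrt(2 k T / m)
        to recover Boltzmann's constant k from a measured linewidth."""
        return mass * (C * delta_nu_d / nu0) ** 2 / (2.0 * temperature)
    ```

    In the actual experiment the Doppler width must first be disentangled from the collisional (Lorentzian) contribution, which is where the corrected Voigt-profile model enters.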

  11. MEMS accelerometers in accurate mount positioning systems

    Science.gov (United States)

    Mészáros, László; Pál, András; Jaskó, Attila

    2014-07-01

    In order to attain precise, accurate and stateless positioning of telescope mounts we apply microelectromechanical accelerometer systems (also known as MEMS accelerometers). In common practice, feedback from the mount position is provided by electronic, optical or magneto-mechanical systems or via real-time astrometric solution based on the acquired images. Hence, MEMS-based systems are completely independent from these mechanisms. Our goal is to investigate the advantages and challenges of applying such devices and to reach the sub-arcminute range, which is well below the field-of-view of conventional imaging telescope systems. We present how this sub-arcminute accuracy can be achieved with very cheap MEMS sensors. Basically, these sensors yield raw output within an accuracy of a few degrees. We show what kind of calibration procedures could exploit spherical and cylindrical constraints between accelerometer output channels in order to achieve the previously mentioned accuracy level. We also demonstrate how our implementation can be inserted into a telescope control system. Although this attainable precision is coarser than both the resolution of the telescope mount drive mechanics and the accuracy of astrometric solutions, the independent nature of attitude determination could significantly increase the reliability of autonomous or remotely operated astronomical observations.
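    The spherical constraint mentioned above, that static accelerometer readings must lie on a sphere of radius 1 g around the sensor's bias point, leads to a standard linear least-squares calibration. A self-contained sketch using the Kåsa sphere fit (Python; this is a common choice for such calibrations, not necessarily the authors' exact procedure):

    ```python
    def fit_sphere(points):
        """Kåsa least-squares sphere fit: estimate the center (bias) and
        radius (gain) of raw 3-axis accelerometer samples.  Each point
        (x, y, z) on the sphere satisfies the linear equation
        2*cx*x + 2*cy*y + 2*cz*z + d = x^2 + y^2 + z^2,  d = r^2 - |c|^2."""
        # Accumulate the 4x4 normal equations A^T A u = A^T b.
        ata = [[0.0] * 4 for _ in range(4)]
        atb = [0.0] * 4
        for x, y, z in points:
            row = (2 * x, 2 * y, 2 * z, 1.0)
            rhs = x * x + y * y + z * z
            for i in range(4):
                atb[i] += row[i] * rhs
                for j in range(4):
                    ata[i][j] += row[i] * row[j]
        # Gauss-Jordan elimination with partial pivoting.
        for col in range(4):
            piv = max(range(col, 4), key=lambda r: abs(ata[r][col]))
            ata[col], ata[piv] = ata[piv], ata[col]
            atb[col], atb[piv] = atb[piv], atb[col]
            for r in range(4):
                if r != col:
                    f = ata[r][col] / ata[col][col]
                    for c in range(4):
                        ata[r][c] -= f * ata[col][c]
                    atb[r] -= f * atb[col]
        cx, cy, cz, d = (atb[i] / ata[i][i] for i in range(4))
        radius = (d + cx * cx + cy * cy + cz * cz) ** 0.5
        return (cx, cy, cz), radius
    ```

    Dividing each raw axis reading by the fitted radius after subtracting the fitted center gives calibrated, unit-gravity output, which is the kind of correction that takes a few-degree raw sensor toward the sub-arcminute regime when combined with averaging.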

  12. Does a pneumotach accurately characterize voice function?

    Science.gov (United States)

    Walters, Gage; Krane, Michael

    2016-11-01

    A study is presented which addresses how a pneumotach might adversely affect clinical measurements of voice function. A pneumotach is a device, typically a mask, worn over the mouth, in order to measure time-varying glottal volume flow. By measuring the time-varying difference in pressure across a known aerodynamic resistance element in the mask, the glottal volume flow waveform is estimated. Because it adds aerodynamic resistance to the vocal system, there is some concern that using a pneumotach may not accurately portray the behavior of the voice. To test this hypothesis, experiments were performed in a simplified airway model with the principal dimensions of an adult human upper airway. A compliant constriction, fabricated from silicone rubber, modeled the vocal folds. Variations of transglottal pressure, time-averaged volume flow, model vocal fold vibration amplitude, and radiated sound with subglottal pressure were performed, with and without the pneumotach in place, and differences noted. We acknowledge support of NIH Grant 2R01DC005642-10A1.

  13. Towards Accurate Modeling of Moving Contact Lines

    CERN Document Server

    Holmgren, Hanna

    2015-01-01

    A main challenge in numerical simulations of moving contact line problems is that the adherence, or no-slip, boundary condition leads to a non-integrable stress singularity at the contact line. In this report we perform the first steps in developing the macroscopic part of an accurate multiscale model for a moving contact line problem in two space dimensions. We assume that a micro model has been used to determine a relation between the contact angle and the contact line velocity. An intermediate region is introduced where an analytical expression for the velocity exists. This expression is used to implement boundary conditions for the moving contact line at a macroscopic scale, along a fictitious boundary located a small distance away from the physical boundary. Model problems where the shape of the interface is constant throughout the simulation are introduced. For these problems, experiments show that the errors in the resulting contact line velocities converge with the grid size $h$ at a rate of convergence $...

  14. Accurate upper body rehabilitation system using kinect.

    Science.gov (United States)

    Sinha, Sanjana; Bhowmick, Brojeshwar; Chakravarty, Kingshuk; Sinha, Aniruddha; Das, Abhijit

    2016-08-01

    The growing importance of Kinect as a tool for clinical assessment and rehabilitation is due to its portability, low cost and markerless system for human motion capture. However, the accuracy of Kinect in measuring three-dimensional body joint center locations often fails to meet clinical standards of accuracy when compared to marker-based motion capture systems such as Vicon. The length of the body segment connecting any two joints, measured as the distance between three-dimensional Kinect skeleton joint coordinates, has been observed to vary with time. The orientation of the line connecting adjoining Kinect skeletal coordinates has also been seen to differ from the actual orientation of the physical body segment. Hence we have proposed an optimization method that utilizes Kinect Depth and RGB information to search for the joint center location that satisfies constraints on both body segment length and orientation. An experimental study has been carried out on ten healthy participants performing upper body range of motion exercises. The results report a 72% reduction in body segment length variance and a 2° improvement in Range of Motion (ROM) angle, enabling more accurate measurements for upper limb exercises.

  15. Fast and accurate exhaled breath ammonia measurement.

    Science.gov (United States)

    Solga, Steven F; Mudalel, Matthew L; Spacek, Lisa A; Risby, Terence H

    2014-06-11

    This exhaled breath ammonia method uses a fast and highly sensitive spectroscopic method known as quartz enhanced photoacoustic spectroscopy (QEPAS) that uses a quantum cascade based laser. The monitor is coupled to a sampler that measures mouth pressure and carbon dioxide. The system is temperature controlled and specifically designed to address the reactivity of this compound. The sampler provides immediate feedback to the subject and the technician on the quality of the breath effort. Together with the quick response time of the monitor, this system is capable of accurately measuring exhaled breath ammonia representative of deep lung systemic levels. Because the system is easy to use and produces real time results, it has enabled experiments to identify factors that influence measurements. For example, mouth rinse and oral pH reproducibly and significantly affect results and therefore must be controlled. Temperature and mode of breathing are other examples. As our understanding of these factors evolves, error is reduced, and clinical studies become more meaningful. This system is very reliable and individual measurements are inexpensive. The sampler is relatively inexpensive and quite portable, but the monitor is neither. This limits options for some clinical studies and provides rationale for future innovations.

  16. Noninvasive hemoglobin monitoring: how accurate is enough?

    Science.gov (United States)

    Rice, Mark J; Gravenstein, Nikolaus; Morey, Timothy E

    2013-10-01

    Evaluating the accuracy of medical devices has traditionally been a blend of statistical analyses, at times without contextualizing the clinical application. There have been a number of recent publications on the accuracy of a continuous noninvasive hemoglobin measurement device, the Masimo Radical-7 Pulse Co-oximeter, focusing on the traditional statistical metrics of bias and precision. In this review, which contains material presented at the Innovations and Applications of Monitoring Perfusion, Oxygenation, and Ventilation (IAMPOV) Symposium at Yale University in 2012, we critically investigated these metrics as applied to the new technology, exploring what is required of a noninvasive hemoglobin monitor and whether the conventional statistics adequately answer our questions about clinical accuracy. We discuss the glucose error grid, well known in the glucose monitoring literature, and describe an analogous version for hemoglobin monitoring. This hemoglobin error grid can be used to evaluate the required clinical accuracy (±g/dL) of a hemoglobin measurement device to provide more conclusive evidence on whether to transfuse an individual patient. The important decision to transfuse a patient usually requires both an accurate hemoglobin measurement and a physiologic reason to elect transfusion. It is our opinion that the published accuracy data of the Masimo Radical-7 is not good enough to make the transfusion decision.

  17. Accurate free energy calculation along optimized paths.

    Science.gov (United States)

    Chen, Changjun; Xiao, Yi

    2010-05-01

    The path-based methods of free energy calculation, such as thermodynamic integration and free energy perturbation, are simple in theory, but difficult in practice because in most cases smooth paths do not exist, especially for large molecules. In this article, we present a novel method to build the transition path of a peptide. We use harmonic potentials to restrain its nonhydrogen atom dihedrals in the initial state and set the equilibrium angles of the potentials as those in the final state. Through a series of steps of geometrical optimization, we can construct a smooth and short path from the initial state to the final state. This path can be used to calculate free energy difference. To validate this method, we apply it to a small 10-ALA peptide and find that the calculated free energy changes in helix-helix and helix-hairpin transitions are both self-convergent and cross-convergent. We also calculate the free energy differences between different stable states of beta-hairpin trpzip2, and the results show that this method is more efficient than the conventional molecular dynamics method in accurate free energy calculation.
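    For intuition, thermodynamic integration can be tested on a case with an exact answer: switching a 1-D harmonic spring constant from k1 to k2, where the free energy difference is (kT/2) * ln(k2/k1). A minimal sketch using the analytic ensemble average of x^2 in place of sampling (Python; this toy example is illustrative only, not the authors' path-optimization code):

    ```python
    def free_energy_ti(k1, k2, kt=1.0, n=1000):
        """Thermodynamic integration for a 1-D harmonic oscillator whose
        spring constant is switched from k1 to k2 along lambda in [0, 1].
        U(lambda) = 0.5 * k(lambda) * x^2, k(lambda) = (1-lambda)*k1 + lambda*k2,
        so dU/dlambda = 0.5 * (k2 - k1) * x^2 and, analytically,
        <x^2> = kT / k(lambda).  Midpoint quadrature over lambda yields
        dF; the exact value is (kT/2) * ln(k2/k1)."""
        total = 0.0
        for i in range(n):
            lam = (i + 0.5) / n
            k_lam = (1.0 - lam) * k1 + lam * k2
            total += 0.5 * (k2 - k1) * (kt / k_lam) / n
        return total
    ```

    In a real molecular system the ensemble average of dU/dlambda must be estimated by sampling at each lambda window, which is exactly where a smooth, short transition path pays off: the integrand varies gently and fewer windows converge.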

  18. Accurate fission data for nuclear safety

    CERN Document Server

    Solders, A; Jokinen, A; Kolhinen, V S; Lantz, M; Mattera, A; Penttila, H; Pomp, S; Rakopoulos, V; Rinta-Antila, S

    2013-01-01

    The Accurate fission data for nuclear safety (AlFONS) project aims at high precision measurements of fission yields, using the renewed IGISOL mass separator facility in combination with a new high current light ion cyclotron at the University of Jyvaskyla. The 30 MeV proton beam will be used to create fast and thermal neutron spectra for the study of neutron induced fission yields. Thanks to a series of mass separating elements, culminating with the JYFLTRAP Penning trap, it is possible to achieve a mass resolving power on the order of a few hundred thousand. In this paper we present the experimental setup and the design of a neutron converter target for IGISOL. The goal is to have a flexible design. For studies of exotic nuclei far from stability a high neutron flux (10^12 neutrons/s) at energies 1 - 30 MeV is desired while for reactor applications neutron spectra that resemble those of thermal and fast nuclear reactors are preferred. It is also desirable to be able to produce (semi-)monoenergetic neutrons...

  19. Fast and Provably Accurate Bilateral Filtering.

    Science.gov (United States)

    Chaudhury, Kunal N; Dabhade, Swapnil D

    2016-06-01

    The bilateral filter is a non-linear filter that uses a range filter along with a spatial filter to perform edge-preserving smoothing of images. A direct computation of the bilateral filter requires O(S) operations per pixel, where S is the size of the support of the spatial filter. In this paper, we present a fast and provably accurate algorithm for approximating the bilateral filter when the range kernel is Gaussian. In particular, for box and Gaussian spatial filters, the proposed algorithm can cut down the complexity to O(1) per pixel for any arbitrary S. The algorithm has a simple implementation involving N+1 spatial filterings, where N is the approximation order. We give a detailed analysis of the filtering accuracy that can be achieved by the proposed approximation in relation to the target bilateral filter. This allows us to estimate the order N required to obtain a given accuracy. We also present comprehensive numerical results to demonstrate that the proposed algorithm is competitive with the state-of-the-art methods in terms of speed and accuracy.
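    The direct O(S)-per-pixel computation that the fast algorithm approximates is easy to state. A 1-D reference version (Python; illustrative, and the baseline against which such approximations are measured):

    ```python
    import math

    def bilateral_filter_1d(signal, sigma_s, sigma_r, radius):
        """Direct bilateral filter on a 1-D signal: each output sample is a
        normalized sum of neighbors weighted by a spatial Gaussian on
        distance and a range Gaussian on intensity difference.  Cost is
        O(S) per sample with S = 2*radius + 1."""
        out = []
        n = len(signal)
        for i in range(n):
            num = den = 0.0
            for j in range(max(0, i - radius), min(n, i + radius + 1)):
                w = math.exp(-((i - j) ** 2) / (2 * sigma_s ** 2)
                             - ((signal[i] - signal[j]) ** 2) / (2 * sigma_r ** 2))
                num += w * signal[j]
                den += w
            out.append(num / den)
        return out
    ```

    The edge-preserving behavior is visible directly: with a small range sigma, samples across a sharp step receive near-zero range weight, so the step survives while flat regions are smoothed.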

  20. Accurate thermoplasmonic simulation of metallic nanoparticles

    Science.gov (United States)

    Yu, Da-Miao; Liu, Yan-Nan; Tian, Fa-Lin; Pan, Xiao-Min; Sheng, Xin-Qing

    2017-01-01

    Thermoplasmonics leads to enhanced heat generation due to the localized surface plasmon resonances. The measurement of heat generation is fundamentally a complicated task, which necessitates the development of theoretical simulation techniques. In this paper, an efficient and accurate numerical scheme is proposed for applications with complex metallic nanostructures. Light absorption and temperature increase are, respectively, obtained by solving the volume integral equation (VIE) and the steady-state heat diffusion equation through the method of moments (MoM). Previously, methods based on surface integral equations (SIEs) were utilized to obtain light absorption. However, computing light absorption from the equivalent current is as expensive as O(NsNv), where Ns and Nv, respectively, denote the number of surface and volumetric unknowns. Our approach reduces the cost to O(Nv) by using VIE. The accuracy, efficiency and capability of the proposed scheme are validated by multiple simulations. The simulations show that our proposed method is more efficient than the approach based on SIEs under comparable accuracy, especially for the case where many incidents are of interest. The simulations also indicate that the temperature profile can be tuned by several factors, such as the geometry configuration of array, beam direction, and light wavelength.

  1. Accurate simulation of optical properties in dyes.

    Science.gov (United States)

    Jacquemin, Denis; Perpète, Eric A; Ciofini, Ilaria; Adamo, Carlo

    2009-02-17

    Since Antiquity, humans have produced and commercialized dyes. To this day, extraction of natural dyes often requires lengthy and costly procedures. In the 19th century, global markets and new industrial products drove a significant effort to synthesize artificial dyes, characterized by low production costs, huge quantities, and new optical properties (colors). Dyes that encompass classes of molecules absorbing in the UV-visible part of the electromagnetic spectrum now have a wider range of applications, including coloring (textiles, food, paintings), energy production (photovoltaic cells, OLEDs), or pharmaceuticals (diagnostics, drugs). Parallel to the growth in dye applications, researchers have increased their efforts to design and synthesize new dyes to customize absorption and emission properties. In particular, dyes containing one or more metallic centers allow for the construction of fairly sophisticated systems capable of selectively reacting to light of a given wavelength and behaving as molecular devices (photochemical molecular devices, PMDs).Theoretical tools able to predict and interpret the excited-state properties of organic and inorganic dyes allow for an efficient screening of photochemical centers. In this Account, we report recent developments defining a quantitative ab initio protocol (based on time-dependent density functional theory) for modeling dye spectral properties. In particular, we discuss the importance of several parameters, such as the methods used for electronic structure calculations, solvent effects, and statistical treatments. In addition, we illustrate the performance of such simulation tools through case studies. We also comment on current weak points of these methods and ways to improve them.

  2. Accurate paleointensities - the multi-method approach

    Science.gov (United States)

    de Groot, Lennart

    2016-04-01

    The accuracy of models describing rapid changes in the geomagnetic field over the past millennia critically depends on the availability of reliable paleointensity estimates. Over the past decade methods to derive paleointensities from lavas (the only recorder of the geomagnetic field that is available all over the globe and through geologic times) have seen significant improvements and various alternative techniques were proposed. The 'classical' Thellier-style approach was optimized and selection criteria were defined in the 'Standard Paleointensity Definitions' (Paterson et al, 2014). The Multispecimen approach was validated and the importance of additional tests and criteria to assess Multispecimen results must be emphasized. Recently, a non-heating, relative paleointensity technique was proposed, the pseudo-Thellier protocol, which shows great potential in both accuracy and efficiency, but currently lacks a solid theoretical underpinning. Here I present work using all three of the aforementioned paleointensity methods on suites of young lavas taken from the volcanic islands of Hawaii, La Palma, Gran Canaria, Tenerife, and Terceira. Many of the sampled cooling units are <100 years old, the actual field strength at the time of cooling is therefore reasonably well known. Rather intuitively, flows that produce coherent results from two or more different paleointensity methods yield the most accurate estimates of the paleofield. Furthermore, the results for some flows pass the selection criteria for one method, but fail in other techniques. Scrutinizing and combining all acceptable results yielded reliable paleointensity estimates for 60-70% of all sampled cooling units - an exceptionally high success rate. This 'multi-method paleointensity approach' therefore has high potential to provide the much-needed paleointensities to improve geomagnetic field models for the Holocene.

  3. Towards Accurate Application Characterization for Exascale (APEX)

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  4. Optimizing cell arrays for accurate functional genomics

    Directory of Open Access Journals (Sweden)

    Fengler Sven

    2012-07-01

    Full Text Available Abstract Background Cellular responses emerge from a complex network of dynamic biochemical reactions. In order to investigate them, it is necessary to develop methods that allow perturbing a high number of gene products in a flexible and fast way. Cell arrays (CA) enable such experiments on microscope slides via reverse transfection of cellular colonies growing on spotted genetic material. In contrast to multi-well plates, CA are susceptible to contamination among neighboring spots, hindering accurate quantification in cell-based screening projects. Here we have developed a quality control protocol for quantifying and minimizing contamination in CA. Results We imaged checkered CA that express two distinct fluorescent proteins and segmented images into single cells to quantify the transfection efficiency and interspot contamination. Compared with standard procedures, we measured a 3-fold reduction of contaminants when arrays containing HeLa cells were washed shortly after cell seeding. We proved that nucleic acid uptake during cell seeding, rather than migration among neighboring spots, was the major source of contamination. Arrays of MCF7 cells developed without the washing step showed a 7-fold lower percentage of contaminant cells, demonstrating that contamination depends on specific cell properties. Conclusions Previously published methodological work has focused on achieving high transfection rates in densely packed CA. Here, we focused on an equally important parameter: the interspot contamination. The presented quality control is essential for estimating the rate of contamination, a major source of false positives and negatives in current microscopy-based functional genomics screenings. We have demonstrated that a washing step after seeding enhances CA quality for HeLa cells but is not necessary for MCF7. The described method provides a way to find optimal seeding protocols for cell lines intended to be used for the first time in CA.

  5. Important Nearby Galaxies without Accurate Distances

    Science.gov (United States)

    McQuinn, Kristen

    2014-10-01

    The Spitzer Infrared Nearby Galaxies Survey (SINGS) and its offspring programs (e.g., THINGS, HERACLES, KINGFISH) have resulted in a fundamental change in our view of star formation and the ISM in galaxies, and together they represent the most complete multi-wavelength data set yet assembled for a large sample of nearby galaxies. These great investments of observing time have been dedicated to the goal of understanding the interstellar medium, the star formation process, and, more generally, galactic evolution at the present epoch. Nearby galaxies provide the basis upon which we interpret the distant universe, and the SINGS sample represents the best studied nearby galaxies. Accurate distances are fundamental to interpreting observations of galaxies. Surprisingly, many of the SINGS spiral galaxies have numerous distance estimates resulting in confusion. We can rectify this situation for 8 of the SINGS spiral galaxies within 10 Mpc at a very low cost through measurements of the tip of the red giant branch. The proposed observations will provide an accuracy of better than 0.1 in distance modulus. Our sample includes such well-known galaxies as M51 (the Whirlpool), M63 (the Sunflower), M104 (the Sombrero), and M74 (the archetypal grand design spiral). We are also proposing coordinated parallel WFC3 UV observations of the central regions of the galaxies, rich with high-mass UV-bright stars. As a secondary science goal we will compare the resolved UV stellar populations with integrated UV emission measurements used in calibrating star formation rates. Our observations will complement the growing HST UV atlas of high resolution images of nearby galaxies.

  6. How flatbed scanners upset accurate film dosimetry.

    Science.gov (United States)

    van Battum, L J; Huizenga, H; Verdaasdonk, R M; Heukelom, S

    2016-01-21

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. To this end, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged between 0.2 to 9 Gy, i.e. an optical density range between 0.25 to 1.1. Measurements were performed in the scanner's transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to film and the scanner's optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE, and therefore determination of the LSE per color channel and per dose delivered to the film.
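    Once an LSE profile has been measured for a given color channel and dose level, correcting a scan reduces to dividing each pixel by the scanner's relative response at its lateral position. A hypothetical sketch (Python; the profile format, positions normalized 0 to 1 across the scan bed, and all names are assumptions, not from the paper):

    ```python
    def correct_lateral_scan(pixel_values, lateral_positions, lse_profile):
        """Divide each pixel by the relative scanner response at its lateral
        position.  lse_profile maps lateral position (0..1 across the bed)
        to relative response (1.0 at the center), e.g. measured by scanning
        a uniformly irradiated film strip."""
        def response(pos):
            # Piecewise-linear interpolation over the calibration profile.
            xs = sorted(lse_profile)
            if pos <= xs[0]:
                return lse_profile[xs[0]]
            for a, b in zip(xs, xs[1:]):
                if pos <= b:
                    t = (pos - a) / (b - a)
                    return (1 - t) * lse_profile[a] + t * lse_profile[b]
            return lse_profile[xs[-1]]
        return [v / response(p) for v, p in zip(pixel_values, lateral_positions)]
    ```

    Because the abstract shows the effect is dose- and channel-dependent, a practical correction would hold one such profile per color channel and interpolate between profiles measured at several dose levels.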

  7. On Quantitative Rorschach Scales.

    Science.gov (United States)

    Haggard, Ernest A.

    1978-01-01

    Two types of quantitative Rorschach scales are discussed: first, those based on the response categories of content, location, and the determinants, and second, global scales based on the subject's responses to all ten stimulus cards. (Author/JKS)

  8. An automated method for accurate vessel segmentation

    Science.gov (United States)

    Yang, Xin; Liu, Chaoyue; Le Minh, Hung; Wang, Zhiwei; Chien, Aichi; Cheng, Kwang-Ting (Tim)

    2017-05-01

    Vessel segmentation is a critical task for various medical applications, such as diagnosis assistance of diabetic retinopathy, quantification of cerebral aneurysm growth, and guiding surgery in neurosurgical procedures. Despite technology advances in image segmentation, existing methods still suffer from low accuracy for vessel segmentation in two challenging yet common scenarios in clinical usage: (1) regions with a low signal-to-noise ratio (SNR), and (2) vessel boundaries disturbed by adjacent non-vessel pixels. In this paper, we present an automated system which can achieve highly accurate vessel segmentation for both 2D and 3D images even under these challenging scenarios. Three key contributions of our system are: (1) a progressive contrast enhancement method to adaptively enhance the contrast of challenging pixels that were otherwise indistinguishable, (2) a boundary refinement method to effectively improve segmentation accuracy at vessel borders based on Canny edge detection, and (3) a content-aware region-of-interest (ROI) adjustment method to automatically determine the locations and sizes of ROIs which contain ambiguous pixels and demand further verification. Extensive evaluation of our method is conducted on both 2D and 3D datasets. On a public 2D retinal dataset (named DRIVE (Staal 2004 IEEE Trans. Med. Imaging 23 501-9)) and our 2D clinical cerebral dataset, our approach achieves superior performance to the state-of-the-art methods, including a vesselness based method (Frangi 1998 Int. Conf. on Medical Image Computing and Computer-Assisted Intervention) and an optimally oriented flux (OOF) based method (Law and Chung 2008 European Conf. on Computer Vision). An evaluation on 11 clinical 3D CTA cerebral datasets shows that our method can achieve 94% average accuracy with respect to the manual segmentation reference, which is 23% to 33% better than the five baseline methods (Yushkevich 2006 Neuroimage 31 1116-28; Law and Chung 2008

  9. Qualitative vs. quantitative software process simulation modelling: conversion and comparison

    OpenAIRE

    Zhang, He; Kitchenham, Barbara; Jeffery, Ross

    2009-01-01

    Software Process Simulation Modeling (SPSM) research has increased in the past two decades. However, most of these models are quantitative, requiring detailed understanding and accurate measurement. Continuing our previous studies on qualitative modeling of software processes, this paper aims to investigate the structural equivalence and model conversion between quantitative and qualitative process modeling, and to compare the characteristics and performance o...

  10. Multivariate Quantitative Chemical Analysis

    Science.gov (United States)

    Kinchen, David G.; Capezza, Mary

    1995-01-01

    Technique of multivariate quantitative chemical analysis devised for use in determining relative proportions of two components mixed and sprayed together onto object to form thermally insulating foam. Potentially adaptable to other materials, especially in process-monitoring applications in which necessary to know and control critical properties of products via quantitative chemical analyses of products. In addition to chemical composition, also used to determine such physical properties as densities and strengths.


  12. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the "standard" mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode. Second, this mathematical model was not able to accurately describe the change in the experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral
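The model structure described above, an eclipse compartment feeding a virus-producing compartment, can be sketched numerically. This is a generic viral-dynamics sketch with illustrative parameter values (not those fitted in the paper), integrated with forward Euler to stay dependency-free:

```python
# Sketch of the modified "standard" model: target cells T, newly infected
# cells E still transitioning into production (eclipse phase), productively
# infected cells I, and free virus V. All parameter values are illustrative.

def simulate(beta=1e-7, k=1.0, delta=0.5, p=100.0, c=5.0,
             T0=1e6, V0=1.0, dt=0.001, days=10.0):
    T, E, I, V = T0, 0.0, 0.0, V0
    for _ in range(int(days / dt)):
        dT = -beta * T * V                 # loss of target cells to infection
        dE = beta * T * V - k * E          # infected, not yet producing
        dI = k * E - delta * I             # cells entering production mode
        dV = p * I - c * V                 # continuous (budding) production
        T += dT * dt
        E += dE * dt
        I += dI * dt
        V += dV * dt
    return T, E, I, V

T_end, E_end, I_end, V_end = simulate()    # viral load grows since R0 > 1
```

With these values the basic reproduction number R0 = beta*T0*p/(c*delta) is 4, so the infection takes off; a bursting-mode variant would instead release the virions in a burst at cell death rather than continuously.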

  13. A Broadly Applicable Assay for Rapidly and Accurately Quantifying DNA Surface Coverage on Diverse Particles.

    Science.gov (United States)

    Yu, Haixiang; Xu, Xiaowen; Liang, Pingping; Loh, Kang Yong; Guntupalli, Bhargav; Roncancio, Daniel; Xiao, Yi

    2017-04-19

    DNA-modified particles are used extensively for applications in sensing, materials science, and molecular biology. The performance of such DNA-modified particles depends greatly on the degree of surface coverage, but existing methods for quantitation can only be employed for certain particle compositions and/or conjugation chemistries. We have developed a simple and broadly applicable exonuclease III (Exo III) digestion assay, based on the cleavage of phosphodiester bonds (a universal feature of DNA-modified particles), to accurately quantify DNA probe surface coverage on diverse, commonly used particles of different compositions, conjugation chemistries, and sizes. Our assay utilizes particle-conjugated, fluorophore-labeled probes that incorporate two abasic sites; these probes are hybridized to a complementary DNA (cDNA) strand, and quantitation is achieved through cleavage and digestion of surface-bound probe DNA via Exo III's apurinic endonucleolytic and exonucleolytic activities. The presence of the two abasic sites in the probe greatly speeds up the enzymatic reaction without altering the packing density of the probes on the particles. Probe digestion releases a signal-generating fluorophore and liberates the intact cDNA strand to start a new cycle of hybridization and digestion, until all fluorophore tags have been released. Since the molar ratio of fluorophore to immobilized DNA is 1:1, DNA surface coverage can be determined accurately based on the complete release of fluorophores. Our method delivers accurate, rapid, and reproducible quantitation of thiolated DNA on the surface of gold nanoparticles, and performs equally well with other conjugation chemistries, substrates, and particle sizes, thus offering a broadly useful assay for quantitation of DNA surface coverage.
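Because the released fluorophore and the digested probe are in a 1:1 molar ratio, converting the assay readout into a surface coverage is simple arithmetic. A hypothetical back-calculation (the function names and the 13 nm gold-particle example are illustrative, not values from the paper):

```python
# Hypothetical coverage calculation for the Exo III digestion assay:
# released-fluorophore concentration equals probe concentration (1:1 ratio);
# dividing by particle concentration and surface area gives the coverage.
import math

def probes_per_particle(fluor_conc_M, particle_conc_M):
    # 1:1 fluorophore:probe ratio -> average probes per particle
    return fluor_conc_M / particle_conc_M

def coverage_per_cm2(fluor_conc_M, particle_conc_M, diameter_nm):
    radius_cm = diameter_nm * 1e-7 / 2          # 1 nm = 1e-7 cm
    area_cm2 = 4 * math.pi * radius_cm ** 2     # sphere surface area
    return probes_per_particle(fluor_conc_M, particle_conc_M) / area_cm2

n = probes_per_particle(1.0e-7, 1.0e-9)         # 100 probes per particle
```

For non-spherical or porous particles the surface-area term would be replaced accordingly.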

  14. Artificial neural network accurately predicts hepatitis B surface antigen seroclearance.

    Directory of Open Access Journals (Sweden)

    Ming-Hua Zheng

    BACKGROUND & AIMS: Hepatitis B surface antigen (HBsAg) seroclearance and seroconversion are regarded as favorable outcomes of chronic hepatitis B (CHB). This study aimed to develop artificial neural networks (ANNs) that could accurately predict HBsAg seroclearance or seroconversion on the basis of available serum variables. METHODS: Data from 203 untreated, HBeAg-negative CHB patients with spontaneous HBsAg seroclearance (63 with HBsAg seroconversion), and 203 age- and sex-matched HBeAg-negative controls were analyzed. ANNs and logistic regression models (LRMs) were built and tested according to HBsAg seroclearance and seroconversion. Predictive accuracy was assessed with the area under the receiver operating characteristic curve (AUROC). RESULTS: Serum quantitative HBsAg (qHBsAg) and HBV DNA levels, and qHBsAg and HBV DNA reduction, were related to HBsAg seroclearance (P<0.001) and were used for ANN/LRM-HBsAg seroclearance building, whereas qHBsAg reduction was not associated with ANN-HBsAg seroconversion (P = 0.197) and LRM-HBsAg seroconversion was based solely on qHBsAg (P = 0.01). For HBsAg seroclearance, AUROCs of the ANN were 0.96, 0.93 and 0.95 for the training, testing and genotype B subgroups, respectively. They were significantly higher than those of LRM, qHBsAg and HBV DNA (all P<0.05). Although the performance of ANN-HBsAg seroconversion (AUROC 0.757) was inferior to that for HBsAg seroclearance, it tended to be better than those of LRM, qHBsAg and HBV DNA. CONCLUSIONS: ANN identifies spontaneous HBsAg seroclearance in HBeAg-negative CHB patients with better accuracy, on the basis of easily available serum data. More useful predictors of HBsAg seroconversion still need to be explored in the future.

  15. Quantitative autonomic testing.

    Science.gov (United States)

    Novak, Peter

    2011-07-19

    Disorders associated with dysfunction of the autonomic nervous system are quite common yet frequently unrecognized. Quantitative autonomic testing can be an invaluable tool for the evaluation of these disorders, both in the clinic and in research. There are a number of autonomic tests; however, only a few have been validated clinically or are quantitative. Here, a fully quantitative and clinically validated protocol for testing of autonomic functions is presented. As a bare minimum the clinical autonomic laboratory should have a tilt table, ECG monitor, continuous noninvasive blood pressure monitor, respiratory monitor and a means for evaluation of the sudomotor domain. The software for recording and evaluation of autonomic tests is critical for correct evaluation of the data. The presented protocol evaluates 3 major autonomic domains: cardiovagal, adrenergic and sudomotor. The tests include deep breathing, the Valsalva maneuver, head-up tilt, and the quantitative sudomotor axon reflex test (QSART). The severity and distribution of dysautonomia are quantitated using Composite Autonomic Severity Scores (CASS). A detailed protocol is provided highlighting essential aspects of testing, with emphasis on proper data acquisition, obtaining the relevant parameters and unbiased evaluation of autonomic signals. Normative data and the CASS algorithm for interpretation of results are provided as well.

  16. Approaches for the accurate definition of geological time boundaries

    Science.gov (United States)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa, and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers; they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we applied a multi-proxy approach to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine the effects of (i) post-crystallization Pb loss from percolation of hydrothermal fluids (even when using chemical abrasion) with (ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age

  17. Quantitative Hydrocarbon Surface Analysis

    Science.gov (United States)

    Douglas, Vonnie M.

    2000-01-01

    The elimination of ozone depleting substances, such as carbon tetrachloride, has resulted in the use of new analytical techniques for cleanliness verification and contamination sampling. The last remaining application at Rocketdyne which required a replacement technique was the quantitative analysis of hydrocarbons by infrared spectrometry. This application, which previously utilized carbon tetrachloride, was successfully modified using the SOC-400, a compact portable FTIR manufactured by Surface Optics Corporation. This instrument can quantitatively measure and identify hydrocarbons from solvent flush of hardware as well as directly analyze the surface of metallic components without the use of ozone depleting chemicals. Several sampling accessories are utilized to perform analysis for various applications.

  18. Towards in vivo focal cortical dysplasia phenotyping using quantitative MRI.

    Science.gov (United States)

    Adler, Sophie; Lorio, Sara; Jacques, Thomas S; Benova, Barbora; Gunny, Roxana; Cross, J Helen; Baldeweg, Torsten; Carmichael, David W

    2017-01-01

    Focal cortical dysplasias (FCDs) are a range of malformations of cortical development each with specific histopathological features. Conventional radiological assessment of standard structural MRI is useful for the localization of lesions but is unable to accurately predict the histopathological features. Quantitative MRI offers the possibility to probe tissue biophysical properties in vivo and may bridge the gap between radiological assessment and ex-vivo histology. This review will cover histological, genetic and radiological features of FCD following the ILAE classification and will explain how quantitative voxel- and surface-based techniques can characterise these features. We will provide an overview of the quantitative MRI measures available, their link with biophysical properties and finally the potential application of quantitative MRI to the problem of FCD subtyping. Future research linking quantitative MRI to FCD histological properties should improve clinical protocols, allow better characterisation of lesions in vivo and tailored surgical planning to the individual.

  19. A quantitative description for efficient financial markets

    Science.gov (United States)

    Immonen, Eero

    2015-09-01

    In this article we develop a control system model for describing efficient financial markets. We define the efficiency of a financial market in quantitative terms by robust asymptotic price-value equality in this model. By invoking the Internal Model Principle of robust output regulation theory we then show that under No Bubble Conditions, in the proposed model, the market is efficient if and only if the following conditions hold true: (1) the traders, as a group, can identify any mispricing in asset value (even if no one single trader can do it accurately), and (2) the traders, as a group, incorporate an internal model of the value process (again, even if no one single trader knows it). This main result of the article, which deliberately avoids the requirement for investor rationality, demonstrates, in quantitative terms, that the more transparent the markets are, the more efficient they are. An extensive example is provided to illustrate the theoretical development.

  20. Consensus strategy to quantitate malignant cells in myeloma patients is validated in a multicenter study

    NARCIS (Netherlands)

    Willems, P; Verhagen, O; Segeren, C; Veenhuizen, P; Guikema, J; Wiemer, E; Groothuis, L; Buitenweg-de Jong, T; Kok, H; Bloem, A; Bos, N; Vellenga, E; Mensink, E; Sonneveld, P; van der Schoot, HLE; Raymakers, R

    2000-01-01

    Recently the Belgian-Dutch Hematology-Oncology group initiated a multicenter study to evaluate whether myeloma patients treated with intensive chemotherapy benefit from additional peripheral stem cell transplantation. To determine treatment response accurately, we decided to quantitate malignant cells

  1. LC-MS systems for quantitative bioanalysis.

    Science.gov (United States)

    van Dongen, William D; Niessen, Wilfried M A

    2012-10-01

    LC-MS has become the method of choice in small-molecule drug bioanalysis. Triple quadrupole MS is the established bioanalytical technique due to its unprecedented selectivity and sensitivity, but high-resolution accurate-mass MS has recently been gaining ground due to its ability to provide simultaneous quantitative and qualitative analysis of drugs and their metabolites. This article discusses current trends in the field of bioanalytical LC-MS (until September 2012) and provides an overview of commercially available triple quadrupole MS and high-resolution LC-MS instruments as applied to the bioanalysis of small-molecule and biopharmaceutical drugs.

  2. DNA damage quantitation by alkaline gel electrophoresis.

    Energy Technology Data Exchange (ETDEWEB)

    Sutherland, B.M.; Bennett, P.V.; Sutherland, J.C.

    2004-03-24

    Physical and chemical agents in the environment, those used in clinical applications, or those encountered during recreational exposure to sunlight induce damage in DNA. Understanding the biological impact of these agents requires quantitation of the levels of such damage in laboratory test systems as well as in field or clinical samples. Alkaline gel electrophoresis provides a sensitive (down to ~a few lesions/5 Mb), rapid method of direct quantitation of a wide variety of DNA damages in nanogram quantities of non-radioactive DNAs from laboratory, field, or clinical specimens, including higher plants and animals. This method stems from velocity sedimentation studies of DNA populations and from the simple methods of agarose gel electrophoresis. Our laboratories have developed quantitative agarose gel methods, analytical descriptions of DNA migration during electrophoresis on agarose gels (1-6), and electronic imaging for accurate determinations of DNA mass (7-9). Although all these components improve sensitivity and throughput for large numbers of samples (7,8,10), a simple version using only standard molecular biology equipment allows routine analysis of DNA damages at moderate frequencies. We present here a description of the methods, as well as a brief description of the underlying principles, required for a simplified approach to quantitation of DNA damages by alkaline gel electrophoresis.
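The quantitation step rests on a standard result for randomly (Poisson) distributed strand breaks: the lesion frequency per unit length is the difference of the reciprocal number-average fragment lengths of treated and control DNA. A minimal sketch (the helper name and example lengths are illustrative; the full method derives the number-average lengths from gel dispersion curves and electronic imaging):

```python
# Hedged sketch of the lesion-frequency calculation: for Poisson-distributed
# cleavages, phi = 1/Ln_treated - 1/Ln_control gives lesions per base,
# where Ln is the number-average fragment length in bases.

def lesions_per_mb(ln_treated_bases, ln_control_bases):
    phi = 1.0 / ln_treated_bases - 1.0 / ln_control_bases  # lesions/base
    return phi * 1e6                                       # lesions/Mb

# Example: fragments average 0.5 Mb after treatment vs 5 Mb in controls
rate = lesions_per_mb(5e5, 5e6)
```

This is exactly the "few lesions per 5 Mb" sensitivity regime the abstract quotes.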

  3. Quantitative Intracerebral Hemorrhage Localization

    Science.gov (United States)

    Muschelli, John; Ullman, Natalie L.; Sweeney, Elizabeth M.; Eloyan, Ani; Martin, Neil; Vespa, Paul; Hanley, Daniel F.; Crainiceanu, Ciprian M.

    2015-01-01

    Background and Purpose The location of intracerebral hemorrhage (ICH) is currently described in a qualitative way; we provide a quantitative framework for estimating ICH engagement and its relevance to stroke outcomes. Methods We analyzed 111 patients with ICH from the MISTIE II clinical trial. We estimated ICH engagement at a population level using image registration of CT scans to a template and a previously labeled atlas. Predictive regions of NIHSS and GCS stroke severity scores, collected at enrollment, were estimated. Results The percent coverage of the ICH by these regions strongly outperformed the reader-labeled locations. The adjusted R2 almost doubled from 0.129 (reader-labeled model) to 0.254 (quantitative-location model) for NIHSS and more than tripled from 0.069 (reader-labeled model) to 0.214 (quantitative-location model). A permutation test confirmed that the new predictive regions are more predictive than chance: p<.001 for NIHSS and p<.01 for GCS. Conclusions Objective measures of ICH location and engagement using advanced CT imaging processing provide finer, objective, and more quantitative anatomic information than that provided by human readers. PMID:26451031

  4. Critical Quantitative Inquiry in Context

    Science.gov (United States)

    Stage, Frances K.; Wells, Ryan S.

    2014-01-01

    This chapter briefly traces the development of the concept of critical quantitative inquiry, provides an expanded conceptualization of the tasks of critical quantitative research, offers theoretical explanation and justification for critical research using quantitative methods, and previews the work of quantitative criticalists presented in this…


  6. Quantitative aspects of inductively coupled plasma mass spectrometry

    Science.gov (United States)

    Bulska, Ewa; Wagner, Barbara

    2016-10-01

    Accurate determination of elements in various kinds of samples is essential for many areas, including environmental science, medicine, as well as industry. Inductively coupled plasma mass spectrometry (ICP-MS) is a powerful tool enabling multi-elemental analysis of numerous matrices with high sensitivity and good precision. Various calibration approaches can be used to perform accurate quantitative measurements by ICP-MS. They include the use of pure standards, matrix-matched standards, or relevant certified reference materials, assuring traceability of the reported results. This review critically evaluates the advantages and limitations of different calibration approaches, which are used in quantitative analyses by ICP-MS. Examples of such analyses are provided. This article is part of the themed issue 'Quantitative mass spectrometry'.
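One of the calibration approaches mentioned, external calibration combined with an internal standard, reduces to a linear fit of analyte-to-internal-standard signal ratios against standard concentrations. A hypothetical sketch with made-up signal ratios (np.polyfit stands in for the instrument software):

```python
# Illustrative external calibration with internal-standard normalisation:
# the analyte signal is divided by the internal-standard signal to
# compensate for drift and matrix effects, then a linear fit to standards
# converts sample ratios into concentrations. All values are made up.
import numpy as np

std_conc = np.array([0.0, 1.0, 5.0, 10.0])       # standards, ug/L
std_ratio = np.array([0.00, 0.21, 1.05, 2.10])   # analyte/IS signal ratio

slope, intercept = np.polyfit(std_conc, std_ratio, 1)

def quantify(sample_ratio):
    """Concentration (ug/L) from an analyte/internal-standard ratio."""
    return (sample_ratio - intercept) / slope

conc = quantify(0.42)
```

Matrix-matched standards or standard additions follow the same arithmetic, only the composition of the standards changes.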

  7. Fast and accurate conversion of atomic models into electron density maps

    Directory of Open Access Journals (Sweden)

    Carlos O.S. Sorzano

    2015-03-01

    New image processing methodologies and algorithms have greatly contributed to the significant progress in three-dimensional electron microscopy (3DEM) of biological complexes we have seen over the last decades. Naturally, the availability of accurate procedures for the objective testing of new algorithms is a crucial requirement for the further advancement of the field. A good and accepted testing workflow involves the generation of realistic 3DEM-like maps of biological macromolecules from which some measure of ground truth can be derived, ideally because their 3D atomic structure is already known. In this work we propose a very accurate generation of maps using atomic form factors for electron scattering. We thoroughly review current approaches in the field, quantitatively demonstrating the benefits of the new methodology. Additionally, we study a concrete example of the use of this approach for hypothesis testing in 3D electron microscopy.

  8. Accurate Jones Matrix of the Practical Faraday Rotator

    Institute of Scientific and Technical Information of China (English)

    王林斗; 祝昇翔; 李玉峰; 邢文烈; 魏景芝

    2003-01-01

    The Jones matrix of practical Faraday rotators is often used in engineering calculations of non-reciprocal optical fields. Nevertheless, only an approximate Jones matrix of practical Faraday rotators has been available until now. Based on the theory of polarized light, this paper presents an accurate Jones matrix for practical Faraday rotators, and an experiment has been carried out to verify its validity. This matrix accurately describes the optical characteristics of practical Faraday rotators, including rotation, loss and depolarization of the polarized light. The accurate Jones matrix can be used to obtain accurate results when a practical Faraday rotator transforms polarized light, paving the way for accurate analysis and calculation of practical Faraday rotators in relevant engineering applications.
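As a point of reference (this is the textbook simplification, not the accurate matrix derived in the paper), a practical rotator is often approximated as an ideal rotation combined with a scalar loss; the paper's contribution is to additionally capture depolarization, which such a simple Jones matrix cannot represent:

```python
# Approximate Jones matrix of a lossy Faraday rotator: rotation by theta
# scaled by the amplitude transmission sqrt(T). Depolarization is NOT
# captured by this simplification. Values below are illustrative.
import numpy as np

def faraday_jones(theta_rad, transmission):
    t = np.sqrt(transmission)            # amplitude factor from power T
    c, s = np.cos(theta_rad), np.sin(theta_rad)
    return t * np.array([[c, -s],
                         [s,  c]])

M = faraday_jones(np.pi / 4, 0.9)        # 45 deg rotator with 10% power loss
e_out = M @ np.array([1.0, 0.0])         # horizontally polarized input
power = float(np.vdot(e_out, e_out).real)
```

The output field has equal x and y components (a 45° rotation) and carries 90% of the input power, matching the assumed loss.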

  9. Impact of reconstruction parameters on quantitative I-131 SPECT

    Science.gov (United States)

    van Gils, C. A. J.; Beijst, C.; van Rooij, R.; de Jong, H. W. A. M.

    2016-07-01

    Radioiodine therapy using I-131 is widely used for treatment of thyroid disease or neuroendocrine tumors. Monitoring treatment by accurate dosimetry requires quantitative imaging. The high energy photons, however, render quantitative SPECT reconstruction challenging, potentially requiring accurate correction for scatter and collimator effects. The goal of this work is to assess the effectiveness of various correction methods for these effects using phantom studies. A SPECT/CT acquisition of the NEMA IEC body phantom was performed. Images were reconstructed (1) without scatter correction, (2) with triple energy window (TEW) scatter correction and (3) with Monte Carlo-based scatter correction. For modelling the collimator-detector response (CDR), both (a) geometric Gaussian CDRs and (b) Monte Carlo simulated CDRs were compared. Quantitative accuracy, contrast-to-noise ratios and recovery coefficients were calculated, as well as the background variability and the residual count error in the lung insert. The Monte Carlo scatter corrected reconstruction method was shown to be intrinsically quantitative, requiring no experimentally acquired calibration factor. It resulted in a more accurate quantification of the background compartment activity density than TEW or no scatter correction. The quantification error relative to a dose calibrator derived measurement was smaller with Monte Carlo simulated CDRs than with geometric Gaussian or no CDR modelling. Scatter correction showed a small effect on quantification of small volumes. When a weighting factor was used, TEW correction was comparable to Monte Carlo reconstruction in all measured parameters, although this approach is clinically impractical since the factor may be patient dependent. Monte Carlo based scatter correction including accurately simulated CDR modelling is the most robust and reliable method for reconstructing accurate quantitative iodine-131 SPECT images.
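For reference, the TEW scatter estimate compared in this study is conventionally formed from two narrow windows flanking the photopeak: scatter within the peak window is approximated by the trapezoid they span. A sketch with illustrative counts and window widths (not values from the paper):

```python
# Conventional triple-energy-window (TEW) scatter estimate:
#   S = (C_lower/W_lower + C_upper/W_upper) * W_peak / 2
# Counts and window widths below are illustrative only.

def tew_scatter(c_lower, c_upper, w_lower_keV, w_upper_keV, w_peak_keV):
    return (c_lower / w_lower_keV + c_upper / w_upper_keV) * w_peak_keV / 2.0

def corrected_counts(c_peak, scatter):
    # subtract the scatter estimate, clipping at zero
    return max(c_peak - scatter, 0.0)

s = tew_scatter(c_lower=300, c_upper=100, w_lower_keV=6, w_upper_keV=6,
                w_peak_keV=72)            # ~20% window around the 364 keV peak
primary = corrected_counts(10000, s)
```

The patient-dependent weighting factor mentioned in the abstract would scale this estimate before subtraction.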

  10. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    KAUST Repository

    Pan, Bing

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved by using three improvements. First, to eliminate the need of updating Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure the 3D IC-GN algorithm that converges accurately and rapidly and avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer accurate and complete initial guess of deformation for each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and the existing typical DVC algorithms are first analyzed quantitatively according to necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost. © 2014 Elsevier Ltd.

  11. PIVlab – Towards User-friendly, Affordable and Accurate Digital Particle Image Velocimetry in MATLAB

    Directory of Open Access Journals (Sweden)

    William Thielicke

    2014-10-01

    Digital particle image velocimetry (DPIV) is a non-intrusive analysis technique that is very popular for mapping flows quantitatively. To get accurate results, in particular in complex flow fields, a number of challenges have to be faced and solved: the quality of the flow measurements is affected by computational details such as image pre-conditioning, sub-pixel peak estimators, data validation procedures, interpolation algorithms and smoothing methods. The accuracy of several algorithms was determined and the best performing methods were implemented in a user-friendly open-source tool for performing DPIV flow analysis in MATLAB.
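The core of any DPIV tool is cross-correlation of interrogation windows; PIVlab then refines the correlation peak with the sub-pixel estimators the paper evaluates. A minimal integer-pixel sketch of that core, assuming periodic windows and FFT-based correlation (function and variable names are ours, not PIVlab's):

```python
# FFT-based cross-correlation of two interrogation windows: the location of
# the correlation peak gives the integer-pixel particle displacement.
# (PIVlab additionally applies sub-pixel peak fitting; omitted here.)
import numpy as np

def displacement(win_a, win_b):
    """Integer-pixel displacement (dy, dx) of win_b relative to win_a."""
    fa, fb = np.fft.rfft2(win_a), np.fft.rfft2(win_b)
    corr = np.fft.fftshift(np.fft.irfft2(np.conj(fa) * fb, s=win_a.shape))
    iy, ix = np.unravel_index(np.argmax(corr), corr.shape)
    return iy - win_a.shape[0] // 2, ix - win_a.shape[1] // 2

# Synthetic pair: one Gaussian "particle", shifted by (2, 3) pixels
y, x = np.mgrid[0:32, 0:32]
def blob(y0, x0):
    return np.exp(-((y - y0) ** 2 + (x - x0) ** 2) / 4.0)

dy, dx = displacement(blob(12, 12), blob(14, 15))
```

Real implementations subtract the window mean, validate outliers and fit the peak to sub-pixel precision, which is where the accuracy differences studied in the paper arise.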

  12. Biomimetic Approach for Accurate, Real-Time Aerodynamic Coefficients Project

    Data.gov (United States)

    National Aeronautics and Space Administration — Aerodynamic and structural reliability and efficiency depend critically on the ability to accurately assess the aerodynamic loads and moments for each lifting...

  13. Quantifying Methane Fluxes Simply and Accurately: The Tracer Dilution Method

    Science.gov (United States)

    Rella, Christopher; Crosson, Eric; Green, Roger; Hater, Gary; Dayton, Dave; Lafleur, Rick; Merrill, Ray; Tan, Sze; Thoma, Eben

    2010-05-01

    Methane is an important atmospheric constituent with a wide variety of sources, both natural and anthropogenic, including wetlands and other water bodies, permafrost, farms, landfills, and areas with significant petrochemical exploration, drilling, transport, or processing, or refining occurs. Despite its importance to the carbon cycle, its significant impact as a greenhouse gas, and its ubiquity in modern life as a source of energy, its sources and sinks in marine and terrestrial ecosystems are only poorly understood. This is largely because high quality, quantitative measurements of methane fluxes in these different environments have not been available, due both to the lack of robust field-deployable instrumentation as well as to the fact that most significant sources of methane extend over large areas (from 10's to 1,000,000's of square meters) and are heterogeneous emitters - i.e., the methane is not emitted evenly over the area in question. Quantifying the total methane emissions from such sources becomes a tremendous challenge, compounded by the fact that atmospheric transport from emission point to detection point can be highly variable. In this presentation we describe a robust, accurate, and easy-to-deploy technique called the tracer dilution method, in which a known gas (such as acetylene, nitrous oxide, or sulfur hexafluoride) is released in the same vicinity of the methane emissions. Measurements of methane and the tracer gas are then made downwind of the release point, in the so-called far-field, where the area of methane emissions cannot be distinguished from a point source (i.e., the two gas plumes are well-mixed). In this regime, the methane emissions are given by the ratio of the two measured concentrations, multiplied by the known tracer emission rate. The challenges associated with atmospheric variability and heterogeneous methane emissions are handled automatically by the transport and dispersion of the tracer. 
We present detailed methane flux
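
The far-field ratio calculation described above reduces to a few lines of arithmetic. The sketch below is illustrative only: the function name and the numbers are hypothetical, and unit handling is simplified (ppb enhancements treated as mole ratios).

```python
def tracer_dilution_flux(ch4_ppb, ch4_bg_ppb, tracer_ppb, tracer_bg_ppb,
                         tracer_release_g_s, m_ch4=16.04, m_tracer=26.04):
    """Estimate a methane mass emission rate (g/s) from far-field
    concentrations, assuming the CH4 and tracer plumes are well mixed.
    Molar masses default to CH4 and acetylene (C2H2)."""
    # Background-subtracted enhancements; ppb ratios equal mole ratios.
    mole_ratio = (ch4_ppb - ch4_bg_ppb) / (tracer_ppb - tracer_bg_ppb)
    # Scale the tracer mass release rate by the molar-mass ratio to get mass.
    return mole_ratio * tracer_release_g_s * (m_ch4 / m_tracer)

# Hypothetical example: a 100 ppb CH4 enhancement against a 50 ppb tracer
# enhancement, with acetylene released at 1.0 g/s.
flux = tracer_dilution_flux(1990.0, 1890.0, 60.0, 10.0, 1.0)
```
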

  14. Measurement of Fracture Geometry for Accurate Computation of Hydraulic Conductivity

    Science.gov (United States)

    Chae, B.; Ichikawa, Y.; Kim, Y.

    2003-12-01

    Fluid flow in a rock mass is controlled by the geometry of fractures, which is mainly characterized by roughness, aperture and orientation. Fracture roughness and aperture were observed with a new confocal laser scanning microscope (CLSM; Olympus OLS1100). The laser wavelength is 488 nm, and the laser scanning is managed by a light polarization method using two galvano-meter scanner mirrors. The system improves resolution in the light-axis (namely z) direction because of the confocal optics. Sampling is performed at a spacing of 2.5 μm along the x and y directions. The highest measurement resolution in the z direction is 0.05 μm, which is more accurate than other methods. For the roughness measurements, core specimens of coarse- and fine-grained granites were provided. Measurements were performed along three scan lines on each fracture surface. The measured data were represented as 2-D and 3-D digital images showing detailed features of roughness. Spectral analyses by the fast Fourier transform (FFT) were performed to characterize the roughness data quantitatively and to identify the influential frequencies of roughness. The FFT results showed that components of low frequencies were dominant in the fracture roughness. This study also verifies that spectral analysis is a good approach to understanding the complicated characteristics of fracture roughness. For the aperture measurements, digital images of the aperture were acquired under five stages of applied uniaxial normal stress. This method can characterize the response of the aperture directly using the same specimen. The measurement results show that the reduction in aperture differs from part to part owing to the rough geometry of the fracture walls. Laboratory permeability tests were also conducted to evaluate changes of hydraulic conductivity related to aperture variation at different stress levels. The results showed non-uniform reduction of hydraulic conductivity under increase of the normal stress and different values of
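
A minimal sketch of the kind of FFT roughness analysis described, on a synthetic profile sampled at the paper's 2.5 μm spacing (the amplitudes are invented, not the authors' data):

```python
import numpy as np

# Synthetic 1-D roughness profile sampled every 2.5 um, as in the CLSM scans:
# a long-wavelength undulation plus weak high-frequency texture.
dx = 2.5e-6                      # sampling interval, m
x = np.arange(4096) * dx
profile = 40e-6 * np.sin(2 * np.pi * x / (x[-1] / 4)) \
        + 1e-6 * np.sin(2 * np.pi * x / (20 * dx))

# Amplitude spectrum via the FFT; drop the DC term before ranking.
spec = np.abs(np.fft.rfft(profile - profile.mean()))
freqs = np.fft.rfftfreq(profile.size, d=dx)
dominant = freqs[1:][np.argmax(spec[1:])]

# For a profile like this, the dominant spatial frequency is the low one,
# mirroring the finding that low frequencies dominate fracture roughness.
```
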

  15. Applied quantitative finance

    CERN Document Server

    Chen, Cathy; Overbeck, Ludger

    2017-01-01

    This volume provides practical solutions and introduces recent theoretical developments in risk management, pricing of credit derivatives, quantification of volatility and copula modeling. This third edition is devoted to modern risk analysis based on quantitative methods and textual analytics to meet the current challenges in banking and finance. It includes 14 new contributions and presents a comprehensive, state-of-the-art treatment of cutting-edge methods and topics, such as collateralized debt obligations, the high-frequency analysis of market liquidity, and realized volatility. The book is divided into three parts: Part 1 revisits important market risk issues, while Part 2 introduces novel concepts in credit risk and its management along with updated quantitative methods. The third part discusses the dynamics of risk management and includes risk analysis of energy markets and for cryptocurrencies. Digital assets, such as blockchain-based currencies, have become popular but are theoretically challenging...

  16. Energy & Climate: Getting Quantitative

    Science.gov (United States)

    Wolfson, Richard

    2011-11-01

    A noted environmentalist claims that buying an SUV instead of a regular car is energetically equivalent to leaving your refrigerator door open for seven years. A fossil-fuel apologist argues that solar energy is a pie-in-the-sky dream promulgated by naïve environmentalists, because there's nowhere near enough solar energy to meet humankind's energy demand. A group advocating shutdown of the Vermont Yankee nuclear plant claims that 70% of its electrical energy is lost in transmission lines. Around the world, thousands agitate for climate action, under the numerical banner "350." Neither the environmentalist, the fossil-fuel apologist, the antinuclear activists, nor most of those marching under the "350" banner can back up their assertions with quantitative arguments. Yet questions about energy and its environmental impacts almost always require quantitative answers. Physics can help! This poster gives some cogent examples, based on the newly published 2nd edition of the author's textbook Energy, Environment, and Climate.

  17. Absolute quantitation of protein posttranslational modification isoform.

    Science.gov (United States)

    Yang, Zhu; Li, Ning

    2015-01-01

    Mass spectrometry has been widely applied in the characterization and quantification of proteins from complex biological samples. Because absolute protein amounts are needed to construct mathematical models of the molecular systems underlying various biological phenotypes and phenomena, a number of quantitative proteomic methods have been adopted to measure absolute quantities of proteins using mass spectrometry. Liquid chromatography-tandem mass spectrometry (LC-MS/MS) coupled with internal peptide standards, i.e., stable isotope-coded peptide dilution series, which originated in the field of analytical chemistry, has become a widely applied method in absolute quantitative proteomics research. This approach provides a growing body of high-confidence absolute protein quantitation results. Because quantitative study of posttranslational modifications (PTMs), which modulate the biological activity of proteins, is crucial for biological science, and because each isoform may contribute a unique biological function, degradation rate, and/or subcellular location, the absolute quantitation of protein PTM isoforms has become more relevant to its biological significance. To obtain the absolute cellular amount of a PTM isoform of a protein accurately, the impacts of protein fractionation, protein enrichment, and proteolytic digestion yield should be taken into consideration, and effects arising before the differentially stable isotope-coded PTM peptide standards are spiked into the sample peptides have to be corrected. Assisted by stable isotope-labeled peptide standards, the absolute quantitation of isoforms of posttranslationally modified protein (AQUIP) method takes all these factors into account and determines the absolute amount of a protein PTM isoform from the absolute amount of the protein of interest and the PTM occupancy at the site of the protein. The absolute amount of the protein of interest is inferred by quantifying both the absolute amounts of a few PTM
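
The core AQUIP relation, amount(isoform) = amount(protein) × occupancy, can be sketched as follows. The function name, the yield-correction parameters and the numbers are illustrative placeholders, not the published workflow:

```python
def aquip_isoform_amount(total_protein_fmol, mod_area, unmod_area,
                         mod_yield=1.0, unmod_yield=1.0):
    """Absolute amount of one PTM isoform, following the idea behind AQUIP:
    amount(isoform) = amount(protein) * occupancy(site).
    Peak areas would come from stable-isotope-coded standards; the
    digestion-yield corrections here are simplified placeholders."""
    corrected_mod = mod_area / mod_yield
    corrected_unmod = unmod_area / unmod_yield
    occupancy = corrected_mod / (corrected_mod + corrected_unmod)
    return total_protein_fmol * occupancy

# 100 fmol of protein with modified/unmodified peptide areas of 1:3
# gives 25 fmol of the modified isoform.
amount = aquip_isoform_amount(100.0, 1.0, 3.0)
```
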

  18. Quantitation of signal transduction.

    Science.gov (United States)

    Krauss, S; Brand, M D

    2000-12-01

    Conventional qualitative approaches to signal transduction provide powerful ways to explore the architecture and function of signaling pathways. However, at the level of the complete system, they do not fully depict the interactions between signaling and metabolic pathways and fail to give a manageable overview of the complexity that is often a feature of cellular signal transduction. Here, we introduce a quantitative experimental approach to signal transduction that helps to overcome these difficulties. We present a quantitative analysis of signal transduction during early mitogen stimulation of lymphocytes, with steady-state respiration rate as a convenient marker of metabolic stimulation. First, by inhibiting various key signaling pathways, we measure their relative importance in regulating respiration. About 80% of the input signal is conveyed via identifiable routes: 50% through pathways sensitive to inhibitors of protein kinase C and MAP kinase and 30% through pathways sensitive to an inhibitor of calcineurin. Second, we quantify how each of these pathways differentially stimulates functional units of reactions that produce and consume a key intermediate in respiration: the mitochondrial membrane potential. Both the PKC and calcineurin routes stimulate consumption more strongly than production, whereas the unidentified signaling routes stimulate production more than consumption, leading to no change in membrane potential despite increased respiration rate. The approach allows a quantitative description of the relative importance of signal transduction pathways and the routes by which they activate a specific cellular process. It should be widely applicable.

  19. Quantitative traits and diversification.

    Science.gov (United States)

    FitzJohn, Richard G

    2010-12-01

    Quantitative traits have long been hypothesized to affect speciation and extinction rates. For example, smaller body size or increased specialization may be associated with increased rates of diversification. Here, I present a phylogenetic likelihood-based method (quantitative state speciation and extinction [QuaSSE]) that can be used to test such hypotheses using extant character distributions. This approach assumes that diversification follows a birth-death process where speciation and extinction rates may vary with one or more traits that evolve under a diffusion model. Speciation and extinction rates may be arbitrary functions of the character state, allowing much flexibility in testing models of trait-dependent diversification. I test the approach using simulated phylogenies and show that a known relationship between speciation and a quantitative character could be recovered in up to 80% of the cases on large trees (500 species). Consistent with other approaches, detecting shifts in diversification due to differences in extinction rates was harder than when due to differences in speciation rates. Finally, I demonstrate the application of QuaSSE to investigate the correlation between body size and diversification in primates, concluding that clade-specific differences in diversification may be more important than size-dependent diversification in shaping the patterns of diversity within this group.
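
A toy forward simulation in the spirit of trait-dependent diversification, where a Brownian trait modulates the speciation rate. This is not the QuaSSE likelihood method itself, and every rate below is invented:

```python
import math
import random

def simulate_quasse_like(n0=20, t_max=5.0, dt=0.05, sigma=0.1,
                         mu_ext=0.02, seed=7):
    """Each lineage carries a trait x evolving by Brownian motion;
    its speciation rate lambda(x) rises logistically with x, while
    the extinction rate is constant. Returns tip trait values."""
    rng = random.Random(seed)
    lam = lambda x: 0.1 + 0.2 / (1.0 + math.exp(-x))  # trait-dependent rate
    traits = [0.0] * n0
    for _ in range(int(t_max / dt)):
        nxt = []
        for x in traits:
            r = rng.random()
            if r < lam(x) * dt:                    # speciation: two copies
                nxt.extend([x, x])
            elif r < (lam(x) + mu_ext) * dt:       # extinction: drop lineage
                pass
            else:                                  # diffuse the trait
                nxt.append(x + rng.gauss(0.0, sigma * math.sqrt(dt)))
        traits = nxt
        if not traits or len(traits) > 5000:
            break
    return traits

tips = simulate_quasse_like()
```
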

  20. Absolute quantitation of proteins by acid hydrolysis combined with amino acid detection by mass spectrometry

    DEFF Research Database (Denmark)

    Mirgorodskaya, Olga A; Körner, Roman; Kozmin, Yuri P;

    2012-01-01

    Amino acid analysis is among the most accurate methods for absolute quantification of proteins and peptides. Here, we combine acid hydrolysis with the addition of isotopically labeled standard amino acids and analysis by mass spectrometry for accurate and sensitive protein quantitation...
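
The isotope-dilution principle behind such quantitation reduces to a peak-area ratio times the spiked standard amount. A sketch of the general principle with invented numbers, not the published workflow:

```python
def amino_acid_amount_pmol(area_analyte, area_standard, standard_pmol):
    """Isotope-dilution quantitation: the unlabeled/labeled peak-area
    ratio multiplied by the spiked amount of isotopically labeled
    standard amino acid."""
    return (area_analyte / area_standard) * standard_pmol

# A hydrolysate spiked with 50 pmol of labeled leucine and an observed
# light/heavy area ratio of 1.6 contains 80 pmol of leucine.
leu = amino_acid_amount_pmol(1.6, 1.0, 50.0)
```
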

  1. Accurate Quantification of Lipid Species by Electrospray Ionization Mass Spectrometry — Meets a Key Challenge in Lipidomics

    Directory of Open Access Journals (Sweden)

    Kui Yang

    2011-11-01

    Full Text Available Electrospray ionization mass spectrometry (ESI-MS) has become one of the most popular and powerful technologies to identify and quantify individual lipid species in lipidomics. Meanwhile, quantitative analysis of lipid species by ESI-MS has also become a major obstacle to meeting the challenges of lipidomics. Herein, we discuss the principles, advantages, and possible limitations of different mass spectrometry-based methodologies for lipid quantification, as well as a few practical issues important for accurate quantification of individual lipid species. Accordingly, accurate quantification of individual lipid species, one of the key challenges in lipidomics, can be practically met.

  2. Speed-of-sound compensated photoacoustic tomography for accurate imaging

    CERN Document Server

    Jose, Jithin; Steenbergen, Wiendelt; Slump, Cornelis H; van Leeuwen, Ton G; Manohar, Srirang

    2012-01-01

    In most photoacoustic (PA) measurements, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. We present experimental and image reconstruction methods with which 2-D SOS distributions can be accurately acquired and reconstructed, and with which the SOS map can be used subsequently to reconstruct highly accurate PA tomograms. We begin with a 2-D iterative reconstruction approach in an ultrasound transmission tomography (UTT) setting, which uses ray refracted paths instead of straight ray paths to recover accurate SOS images of the subject. Subsequently, we use the SOS distribution in a new 2-D iterative approach, where refraction of rays originating from PA sources are accounted for in accurately retrieving the distribution of these sources. Both the SOS reconstruction and SOS-compensated PA reconstruction methods utilize the Eikonal equation to m...

  3. Quantitative Imaging with a Mobile Phone Microscope

    Science.gov (United States)

    Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.

    2014-01-01

    Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone–based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072
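
One of the corrections discussed, linearizing pixel values before quantification, can be illustrated with the standard sRGB transfer function (assuming, as a simplification, that the phone camera pipeline approximates sRGB encoding):

```python
def srgb_to_linear(c):
    """Undo the standard sRGB transfer curve so pixel values scale
    linearly with light intensity, one of the corrections needed before
    a phone camera image can be used for quantitative measurements."""
    c = c / 255.0
    return c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4

# A mid-gray pixel value of 128 corresponds to roughly 22% linear
# intensity, not 50%: uncorrected values skew brightness ratios.
lin = srgb_to_linear(128)
```
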

  4. Quantitative imaging with a mobile phone microscope.

    Directory of Open Access Journals (Sweden)

    Arunan Skandarajah

    Full Text Available Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone-based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications.

  5. Directional and quantitative phosphorylation networks

    DEFF Research Database (Denmark)

    Jørgensen, Claus; Linding, Rune

    2008-01-01

    for unravelling phosphorylation-mediated cellular interaction networks. In particular, we will discuss how the combination of new quantitative mass-spectrometric technologies and computational algorithms together are enhancing mapping of these largely uncharted dynamic networks. By combining quantitative...

  6. F# for quantitative finance

    CERN Document Server

    Astborg, Johan

    2013-01-01

    To develop your confidence in F#, this tutorial will first introduce you to simpler tasks such as curve fitting. You will then advance to more complex tasks such as implementing algorithms for trading semi-automation in a practical scenario-based format.If you are a data analyst or a practitioner in quantitative finance, economics, or mathematics and wish to learn how to use F# as a functional programming language, this book is for you. You should have a basic conceptual understanding of financial concepts and models. Elementary knowledge of the .NET framework would also be helpful.

  7. Designing quantitative telemedicine research.

    Science.gov (United States)

    Wade, Victoria; Barnett, Adrian G; Martin-Khan, Melinda; Russell, Trevor

    2016-10-27

    When designing quantitative trials and evaluation of telehealth interventions, researchers should think ahead to the intended way that the intervention could be implemented in routine care and consider how trial participants with similar characteristics to the target population can be included. The telehealth intervention and the context in which it is placed should be clearly described, and consideration given to conducting pragmatic trials in order to show the effect of telehealth in complex environments with rapidly changing technology. Types of research designs, comparators and outcome measures are discussed and common statistical issues are introduced. © The Author(s) 2016.

  8. The accurate assessment of small-angle X-ray scattering data

    Energy Technology Data Exchange (ETDEWEB)

    Grant, Thomas D. [Hauptman–Woodward Medical Research Institute, 700 Ellicott Street, Buffalo, NY 14203 (United States); Luft, Joseph R. [Hauptman–Woodward Medical Research Institute, 700 Ellicott Street, Buffalo, NY 14203 (United States); SUNY Buffalo, 700 Ellicott Street, Buffalo, NY 14203 (United States); Carter, Lester G.; Matsui, Tsutomu; Weiss, Thomas M.; Martel, Anne [Stanford Synchrotron Radiation Lightsource, 2575 Sand Hill Road, MS69, Menlo Park, CA 94025 (United States); Snell, Edward H., E-mail: esnell@hwi.buffalo.edu [Hauptman–Woodward Medical Research Institute, 700 Ellicott Street, Buffalo, NY 14203 (United States); SUNY Buffalo, 700 Ellicott Street, Buffalo, NY 14203 (United States)

    2015-01-01

    A set of quantitative techniques is suggested for assessing SAXS data quality. These are applied in the form of a script, SAXStats, to a test set of 27 proteins, showing that these techniques are more sensitive than manual assessment of data quality. Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations is defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.
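
A generic example of the kind of statistical evaluation involved: comparing successive exposures of the same sample with a reduced chi-square, where values well above ~1 flag systematic change such as radiation damage. This is a textbook statistic, not the specific metrics implemented in SAXStats:

```python
def reduced_chi2(i_a, i_b, sigma):
    """Reduced chi-square between two scattering curves with known
    per-point uncertainties (e.g. successive exposures of one sample)."""
    n = len(i_a)
    return sum((a - b) ** 2 / s ** 2
               for a, b, s in zip(i_a, i_b, sigma)) / n

frame1 = [100.0, 80.0, 60.0, 40.0]     # toy intensities
frame2 = [100.5, 79.5, 60.2, 40.1]     # identical within errors
errors = [1.0, 1.0, 1.0, 1.0]
chi2 = reduced_chi2(frame1, frame2, errors)   # well below 1: no damage flag
```
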

  9. Automated selected reaction monitoring software for accurate label-free protein quantification.

    Science.gov (United States)

    Teleman, Johan; Karlsson, Christofer; Waldemarson, Sofia; Hansson, Karin; James, Peter; Malmström, Johan; Levander, Fredrik

    2012-07-06

    Selected reaction monitoring (SRM) is a mass spectrometry method with documented ability to quantify proteins accurately and reproducibly using labeled reference peptides. However, the use of labeled reference peptides becomes impractical if large numbers of peptides are targeted and when high flexibility is desired when selecting peptides. We have developed a label-free quantitative SRM workflow that relies on a new automated algorithm, Anubis, for accurate peak detection. Anubis efficiently removes interfering signals from contaminating peptides to estimate the true signal of the targeted peptides. We evaluated the algorithm on a published multisite data set and achieved results in line with manual data analysis. In complex peptide mixtures from whole proteome digests of Streptococcus pyogenes we achieved a technical variability across the entire proteome abundance range of 6.5-19.2%, which was considerably below the total variation across biological samples. Our results show that the label-free SRM workflow with automated data analysis is feasible for large-scale biological studies, opening up new possibilities for quantitative proteomics and systems biology.

  10. Addressing the current bottlenecks of metabolomics: Isotopic Ratio Outlier Analysis™, an isotopic-labeling technique for accurate biochemical profiling.

    Science.gov (United States)

    de Jong, Felice A; Beecher, Chris

    2012-09-01

    Metabolomics or biochemical profiling is a fast emerging science; however, there are still many associated bottlenecks to overcome before measurements will be considered robust. Advances in MS resolution and sensitivity, ultra pressure LC-MS, ESI, and isotopic approaches such as flux analysis and stable-isotope dilution, have made it easier to quantitate biochemicals. The digitization of mass spectrometers has simplified informatic aspects. However, issues of analytical variability, ion suppression and metabolite identification still plague metabolomics investigators. These hurdles need to be overcome for accurate metabolite quantitation not only for in vitro systems, but for complex matrices such as biofluids and tissues, before it is possible to routinely identify biomarkers that are associated with the early prediction and diagnosis of diseases. In this report, we describe a novel isotopic-labeling method that uses the creation of distinct biochemical signatures to eliminate current bottlenecks and enable accurate metabolic profiling.
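
The distinct isotopic signatures that such labeling schemes exploit follow a simple binomial pattern. A simplified sketch (carbon only, ignoring the natural abundance of other elements; not the proprietary IROA protocol):

```python
from math import comb

def isotopologue_pattern(n_carbons, p13c):
    """Binomial isotopologue distribution for a molecule with n carbons
    grown on medium with a fixed 13C fraction: the probability of k
    heavy carbons is C(n, k) * p^k * (1 - p)^(n - k)."""
    return [comb(n_carbons, k) * p13c ** k * (1 - p13c) ** (n_carbons - k)
            for k in range(n_carbons + 1)]

# A 6-carbon metabolite at 95% 13C peaks at the fully labeled M+6 species,
# giving the distinctive signature used to separate signal from artifact.
pattern = isotopologue_pattern(6, 0.95)
```
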

  11. Multiplex PCR with minisequencing as an effective high-throughput SNP typing method for formalin-fixed tissue

    DEFF Research Database (Denmark)

    Gilbert, Marcus T P; Sanchez, Juan J; Haselkorn, Tamara;

    2007-01-01

    Extensive collections of formalin-fixed paraffin-embedded (FFPE) tissues exist that could be exploited for genetic analyses in order to provide important insights into the genetic basis of disease or host/pathogen cointeractions. We report here an evaluation of a 44 SNP multiplex genotyping metho...

  12. Forensic genetic SNP typing of low-template DNA and highly degraded DNA from crime case samples

    DEFF Research Database (Denmark)

    Børsting, Claus; Mogensen, Helle Smidt; Morling, Niels

    2013-01-01

    Heterozygote imbalances leading to allele drop-outs and disproportionally large stutters leading to allele drop-ins are known stochastic phenomena related to STR typing of low-template DNA (LtDNA). The large stutters and the many drop-ins in typical STR stutter positions are artifacts from the PCR...

  13. Autosomal SNP typing of forensic samples with the GenPlex(TM) HID System: Results of a collaborative study

    DEFF Research Database (Denmark)

    Tomas, C.; Axler-DiPerte, G.; Budimlija, Z.M.

    2011-01-01

    The GenPlex(TM) HID System (Applied Biosystems - AB) offers typing of 48 of the 52 SNPforID SNPs and amelogenin. Previous studies have shown a high reproducibility of the GenPlex(TM) HID System using 250-500 pg DNA of good quality. An international exercise was performed by 14 laboratories (9 in ...

  15. Peopling of the North Circumpolar Region--insights from Y chromosome STR and SNP typing of Greenlanders.

    Directory of Open Access Journals (Sweden)

    Jill Katharina Olofsson

    Full Text Available The human population in Greenland is characterized by migration events of Paleo- and Neo-Eskimos, as well as admixture with Europeans. In this study, the Y-chromosomal variation in male Greenlanders was investigated in detail by typing 73 Y-chromosomal single nucleotide polymorphisms (Y-SNPs and 17 Y-chromosomal short tandem repeats (Y-STRs. Approximately 40% of the analyzed Greenlandic Y chromosomes were of European origin (I-M170, R1a-M513 and R1b-M343. Y chromosomes of European origin were mainly found in individuals from the west and south coasts of Greenland, which is in agreement with the historic records of the geographic placements of European settlements in Greenland. Two Inuit Y-chromosomal lineages, Q-M3 (xM19, M194, L663, SA01 and L766 and Q-NWT01 (xM265 were found in 23% and 31% of the male Greenlanders, respectively. The time to the most recent common ancestor (TMRCA of the Q-M3 lineage of the Greenlanders was estimated to be between 4,400 and 10,900 years ago (y. a. using two different methods. This is in agreement with the theory that the North Circumpolar Region was populated via a second expansion of humans in the North American continent. The TMRCA of the Q-NWT01 (xM265 lineage in Greenland was estimated to be between 7,000 and 14,300 y. a. using two different methods, which is older than the previously reported TMRCA of this lineage in other Inuit populations. Our results indicate that Inuit individuals carrying the Q-NWT01 (xM265 lineage may have their origin in the northeastern parts of North America and could be descendants of the Dorset culture. This in turn points to the possibility that the current Inuit population in Greenland is comprised of individuals of both Thule and Dorset descent.
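
One common family of TMRCA estimators for Y-STR data uses the average squared distance (ASD) between observed haplotypes and a founder haplotype. The abstract does not specify which two methods were used, so the sketch below, with invented numbers, is only an illustration of the general idea:

```python
def tmrca_years_asd(asd, mutation_rate, gen_years=30.0):
    """TMRCA from the average squared distance between STR haplotypes
    and a founder: under the stepwise mutation model E[ASD] = mu * t,
    so t = ASD / mu generations, converted to years."""
    return (asd / mutation_rate) * gen_years

# Hypothetical example: ASD of 0.5 repeat-units^2 at loci mutating at
# 2.1e-3 per generation gives roughly 7,100 years (30-year generations).
t = tmrca_years_asd(0.5, 2.1e-3)
```
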

  16. Quantitative immunoglobulins in adulthood.

    Science.gov (United States)

    Crisp, Howard C; Quinn, James M

    2009-01-01

    Although age-related changes in serum immunoglobulins are well described in childhood, alterations in immunoglobulins in the elderly are less well described and published. This study was designed to better define expected immunoglobulin ranges and differences in adults of differing decades of life. Sera from 404 patients, aged 20-89 years old were analyzed for quantitative immunoglobulin G (IgG), immunoglobulin M (IgM), and immunoglobulin A (IgA). The patients with diagnoses or medications known to affect immunoglobulin levels were identified while blinded to their immunoglobulin levels. A two-factor ANOVA was performed using decade of life and gender on both the entire sample population as well as the subset without any disease or medication expected to alter immunoglobulin levels. A literature review was also performed on all English language articles evaluating quantitative immunoglobulin levels in adults >60 years old. For the entire population, IgM was found to be higher in women when compared with men (p immunoglobulin levels, the differences in IgM with gender and age were maintained (p immunoglobulin levels have higher serum IgA levels and lower serum IgM levels. Women have higher IgM levels than men throughout life. IgG levels are not significantly altered in an older population.

  17. Is quantitative electromyography reliable?

    Science.gov (United States)

    Cecere, F; Ruf, S; Pancherz, H

    1996-01-01

    The reliability of quantitative electromyography (EMG) of the masticatory muscles was investigated in 14 subjects without any signs or symptoms of temporomandibular disorders. Integrated EMG activity from the anterior temporalis and masseter muscles was recorded bilaterally by means of bipolar surface electrodes during chewing and biting activities. In the first experiment, the influence of electrode relocation was investigated. No influence of electrode relocation on the recorded EMG signal could be detected. In a second experiment, three sessions of EMG recordings during five different chewing and biting activities were performed in the morning (I); 1 hour later without intermediate removal of the electrodes (II); and in the afternoon, using new electrodes (III). The method errors for different time intervals (I-II and I-III errors) for each muscle and each function were calculated. Depending on the time interval between the EMG recordings, the muscles considered, and the function performed, the individual errors ranged from 5% to 63%. The method error increased significantly (P masseter (mean 27.2%) was higher than for the temporalis (mean 20.0%). The largest function error was found during maximal biting in intercuspal position (mean 23.1%). Based on the findings, quantitative electromyography of the masticatory muscles seems to have a limited value in diagnostics and in the evaluation of individual treatment results.
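
Method errors for duplicate recordings of this kind are conventionally computed with Dahlberg's formula, ME = sqrt(sum(d²) / 2n); whether this exact formula was used in the study is an assumption, and the numbers below are invented:

```python
def dahlberg_error(first, second):
    """Dahlberg's method error for paired duplicate measurements:
    sqrt of the summed squared differences divided by 2n."""
    n = len(first)
    ssd = sum((a - b) ** 2 for a, b in zip(first, second))
    return (ssd / (2 * n)) ** 0.5

session1 = [100.0, 120.0, 90.0, 110.0]   # integrated EMG, arbitrary units
session2 = [104.0, 115.0, 93.0, 108.0]   # repeat recording
me = dahlberg_error(session1, session2)
```
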

  18. Realization of Quadrature Signal Generator Using Accurate Magnitude Integrator

    DEFF Research Database (Denmark)

    Xin, Zhen; Yoon, Changwoo; Zhao, Rende

    2016-01-01

    -signal parameters, especially when a fast response is required for uses such as grid synchronization. As a result, the parameter design of the SOGI-QSG becomes complicated. Theoretical analysis shows that this is caused by the inaccurate magnitude-integration characteristic of the SOGI-QSG. To solve this problem..., an Accurate-Magnitude-Integrator based QSG (AMI-QSG) is proposed. The AMI has an accurate magnitude-integration characteristic for the sinusoidal signal, which gives the AMI-QSG a more accurate First-Order-System (FOS) magnitude characteristic than the SOGI-QSG. The parameter design process...

  19. Fabricating an Accurate Implant Master Cast: A Technique Report.

    Science.gov (United States)

    Balshi, Thomas J; Wolfinger, Glenn J; Alfano, Stephen G; Cacovean, Jeannine N; Balshi, Stephen F

    2015-12-01

    The technique for fabricating an accurate implant master cast following the 12-week healing period after Teeth in a Day® dental implant surgery is detailed. The clinical, functional, and esthetic details captured during the final master impression are vital to creating an accurate master cast. This technique uses the properties of the all-acrylic resin interim prosthesis to capture these details. This impression captures the relationship between the remodeled soft tissue and the interim prosthesis. This provides the laboratory technician with an accurate orientation of the implant replicas in the master cast with which a passive fitting restoration can be fabricated.

  20. Identification and validation of reference genes for accurate normalization of real-time quantitative PCR data in kiwifruit.

    Science.gov (United States)

    Ferradás, Yolanda; Rey, Laura; Martínez, Óscar; Rey, Manuel; González, Ma Victoria

    2016-05-01

    Identification and validation of reference genes are required for the normalization of qPCR data. We studied the expression stability produced by eight primer pairs amplifying four common genes used as references for normalization. Samples representing different tissues, organs and developmental stages in kiwifruit (Actinidia chinensis var. deliciosa (A. Chev.) A. Chev.) were used. A total of 117 kiwifruit samples were divided into five sample sets (mature leaves, axillary buds, stigmatic arms, fruit flesh and seeds). All samples were also analysed as a single set. The expression stability of the candidate primer pairs was tested using three algorithms (geNorm, NormFinder and BestKeeper). The minimum number of reference genes necessary for normalization was also determined. A unique primer pair was selected for amplifying the 18S rRNA gene. The primer pair selected for amplifying the ACTIN gene was different depending on the sample set. 18S 2 and ACT 2 were the candidate primer pairs selected for normalization in the three sample sets (mature leaves, fruit flesh and stigmatic arms). 18S 2 and ACT 3 were the primer pairs selected for normalization in axillary buds. No primer pair could be selected for use as the reference for the seed sample set. The analysis of all samples in a single set did not produce the selection of any stably expressing primer pair. Considering data previously reported in the literature, we validated the selected primer pairs amplifying the FLOWERING LOCUS T gene for use in the normalization of gene expression in kiwifruit.
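
    The geNorm algorithm cited above ranks candidate references by a stability measure M: for each candidate, the average standard deviation of its log2 expression ratio with every other candidate, across samples. A rough sketch under that definition (the gene labels and relative quantities below are invented for illustration):

```python
import math
from statistics import stdev

def genorm_m(quantities):
    """geNorm-style stability measure M: for each candidate gene, the mean
    standard deviation of its log2 expression ratio with every other
    candidate, across samples. Lower M means more stable expression."""
    genes = list(quantities)
    m = {}
    for j in genes:
        variations = []
        for k in genes:
            if k == j:
                continue
            ratios = [math.log2(a / b)
                      for a, b in zip(quantities[j], quantities[k])]
            variations.append(stdev(ratios))
        m[j] = sum(variations) / len(variations)
    return m

# Invented relative quantities for three candidates in four samples:
q = {
    "18S_2": [1.00, 1.10, 0.95, 1.05],
    "ACT_2": [1.00, 1.05, 1.00, 1.10],
    "unstable": [1.00, 2.00, 0.50, 1.50],
}
m = genorm_m(q)
print(max(m, key=m.get))  # least stable candidate, excluded first by geNorm
```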

  1. Random forest algorithm yields accurate quantitative prediction models of benthic light at intertidal sites affected by toxic Lyngbya majuscula blooms

    NARCIS (Netherlands)

    Kehoe, M.J.; O’ Brien, K.; Grinham, A.; Rissik, D.; Ahern, K.S.; Maxwell, P.

    2012-01-01

    It is shown that targeted high frequency monitoring and modern machine learning methods lead to highly predictive models of benthic light flux. A state-of-the-art machine learning technique was used in conjunction with a high frequency data set to calibrate and test predictive benthic light models

  3. Accurate quantitation of pentaerythritol tetranitrate and its degradation products using liquid chromatography-atmospheric pressure chemical ionization-mass spectrometry

    NARCIS (Netherlands)

    Brust, H.; Asten, A. van; Koeberg, M.; Dalmolen, J.; Heijden, A.E.D.M. van der; Schoenmakers, P.

    2014-01-01

    After an explosion of pentaerythritol tetranitrate (PETN), its degradation products pentaerythritol trinitrate (PETriN), dinitrate (PEDiN) and mononitrate (PEMN) were detected using liquid chromatography-atmospheric-pressure chemical-ionization-mass spectrometry (LC-APCI-MS). Discrimination between

  4. Accurately Estimating the State of a Geophysical System with Sparse Observations: Predicting the Weather

    CERN Document Server

    An, Zhe; Abarbanel, Henry D I

    2014-01-01

    Utilizing the information in observations of a complex system to make accurate predictions through a quantitative model when observations are completed at time $T$, requires an accurate estimate of the full state of the model at time $T$. When the number of measurements $L$ at each observation time within the observation window is larger than a sufficient minimum value $L_s$, the impediments in the estimation procedure are removed. As the number of available observations is typically such that $L \\ll L_s$, additional information from the observations must be presented to the model. We show how, using the time delays of the measurements at each observation time, one can augment the information transferred from the data to the model, removing the impediments to accurate estimation and permitting dependable prediction. We do this in a core geophysical fluid dynamics model, the shallow water equations, at the heart of numerical weather prediction. The method is quite general, however, and can be utilized in the a...

  5. An analytic model for accurate spring constant calibration of rectangular atomic force microscope cantilevers.

    Science.gov (United States)

    Li, Rui; Ye, Hongfei; Zhang, Weisheng; Ma, Guojun; Su, Yewang

    2015-10-29

    Spring constant calibration of the atomic force microscope (AFM) cantilever is of fundamental importance for quantifying the force between the AFM cantilever tip and the sample. The calibration within the framework of thin plate theory undoubtedly has a higher accuracy and broader scope than that within the well-established beam theory. However, thin plate theory-based accurate analytic determination of the constant has been perceived as an extremely difficult issue. In this paper, we implement the thin plate theory-based analytic modeling for the static behavior of rectangular AFM cantilevers, which reveals that the three-dimensional effect and Poisson effect play important roles in accurate determination of the spring constants. A quantitative scaling law is found that the normalized spring constant depends only on the Poisson's ratio, normalized dimension and normalized load coordinate. Both the literature and our refined finite element model validate the present results. The developed model is expected to serve as the benchmark for accurate calibration of rectangular AFM cantilevers.
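
    For context, the widely used beam-theory estimate that the plate-theory model above refines is k = E·w·t³/(4L³) for a rectangular cantilever loaded at its free end. A quick sketch with nominal, illustrative silicon dimensions (the paper's point is that Poisson and 3-D effects correct this value):

```python
def beam_spring_constant(E, w, t, L):
    """Euler-Bernoulli estimate for a rectangular cantilever with an
    end load: k = E * w * t^3 / (4 * L^3). Plate-theory corrections
    (Poisson ratio, 3-D effects, load position) modify this, which is
    the subject of the paper above."""
    return E * w * t ** 3 / (4 * L ** 3)

# Illustrative silicon cantilever: E = 169 GPa, 30 um wide, 2 um thick, 200 um long
k = beam_spring_constant(169e9, 30e-6, 2e-6, 200e-6)
print(round(k, 2))  # spring constant in N/m
```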

  6. Highly Accurate Sensor for High-Purity Oxygen Determination Project

    Data.gov (United States)

    National Aeronautics and Space Administration — In this STTR effort, Los Gatos Research (LGR) and the University of Wisconsin (UW) propose to develop a highly-accurate sensor for high-purity oxygen determination....

  7. Multi-objective optimization of inverse planning for accurate radiotherapy

    Institute of Scientific and Technical Information of China (English)

    曹瑞芬; 吴宜灿; 裴曦; 景佳; 李国丽; 程梦云; 李贵; 胡丽琴

    2011-01-01

    The multi-objective optimization of inverse planning based on the Pareto solution set, according to the multi-objective character of inverse planning in accurate radiotherapy, was studied in this paper. Firstly, the clinical requirements of a treatment pl

  8. Accurate backgrounds to Higgs production at the LHC

    CERN Document Server

    Kauer, N

    2007-01-01

    Corrections of 10-30% for backgrounds to the H --> WW --> l^+ l^- + missing p_T search in vector boson and gluon fusion at the LHC are reviewed to make the case for precise and accurate theoretical background predictions.

  9. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate estimate method of characteristic exponent for it is presented. Finally, we give some examples to verify the feasibility of our result.

  10. Controlling Hay Fever Symptoms with Accurate Pollen Counts

    Science.gov (United States)

    This article has been reviewed by Thanai ... rhinitis known as hay fever is caused by pollen carried in the air during different times of ...

  11. Digital system accurately controls velocity of electromechanical drive

    Science.gov (United States)

    Nichols, G. B.

    1965-01-01

    Digital circuit accurately regulates electromechanical drive mechanism velocity. The gain and phase characteristics of digital circuits are relatively unimportant. Control accuracy depends only on the stability of the input signal frequency.

  12. Accurate screening for synthetic preservatives in beverage using high performance liquid chromatography with time-of-flight mass spectrometry

    Energy Technology Data Exchange (ETDEWEB)

    Li Xiuqin; Zhang Feng; Sun Yanyan; Yong Wei [Institute of Food Safety, Chinese Academy of Inspection and Quarantine, Jia 3, Gaobeidian North Road, Beijing 100025 (China); Chu Xiaogang [Institute of Food Safety, Chinese Academy of Inspection and Quarantine, Jia 3, Gaobeidian North Road, Beijing 100025 (China)], E-mail: lixq_sypu@yahoo.com; Fang Yanyan; Zweigenbaum, Jerry [Agilent Technologies, Inc., 2850 Centerville Road, Wilmington, Delaware (United States)

    2008-02-11

    In this study, liquid chromatography time-of-flight mass spectrometry (HPLC/TOF-MS) is applied to qualitation and quantitation of 18 synthetic preservatives in beverage. The identification by HPLC/TOF-MS is accomplished with the accurate mass (the subsequent generated empirical formula) of the protonated molecules [M + H]+ or the deprotonated molecules [M - H]-, along with the accurate mass of their main fragment ions. In order to obtain sufficient sensitivity for quantitation purposes (using the protonated or deprotonated molecule) and additional qualitative mass spectrum information provided by the fragments ions, segment program of fragmentor voltages is designed in positive and negative ion mode, respectively. Accurate mass measurements are highly useful in the complex sample analyses since they allow us to achieve a high degree of specificity, often needed when other interferents are present in the matrix. The mass accuracy typically obtained is routinely better than 3 ppm. The 18 compounds behave linearly in the 0.005-5.0 mg.kg{sup -1} concentration range, with correlation coefficient >0.996. The recoveries at the tested concentrations of 1.0 mg.kg{sup -1}-100 mg.kg{sup -1} are 81-106%, with coefficients of variation <7.5%. Limits of detection (LODs) range from 0.0005 to 0.05 mg.kg{sup -1}, which are far below the required maximum residue level (MRL) for these preservatives in foodstuff. The method is suitable for routine quantitative and qualitative analyses of synthetic preservatives in foodstuff.
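
    The "better than 3 ppm" figure refers to relative mass error, (m_measured − m_theoretical)/m_theoretical × 10⁶. A small example; benzoic acid (a common preservative) has a deprotonated monoisotopic m/z of 121.0295, and the measured value below is invented:

```python
def mass_error_ppm(measured, theoretical):
    """Relative mass error in parts per million, the figure of merit
    for TOF-MS accurate-mass identification."""
    return (measured - theoretical) / theoretical * 1e6

# Benzoate [M - H]-, theoretical monoisotopic m/z 121.0295; measured value invented:
print(round(mass_error_ppm(121.0298, 121.0295), 2))
```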

  13. Accurate screening for synthetic preservatives in beverage using high performance liquid chromatography with time-of-flight mass spectrometry.

    Science.gov (United States)

    Li, Xiu Qin; Zhang, Feng; Sun, Yan Yan; Yong, Wei; Chu, Xiao Gang; Fang, Yan Yan; Zweigenbaum, Jerry

    2008-02-11

    In this study, liquid chromatography time-of-flight mass spectrometry (HPLC/TOF-MS) is applied to qualitation and quantitation of 18 synthetic preservatives in beverage. The identification by HPLC/TOF-MS is accomplished with the accurate mass (the subsequent generated empirical formula) of the protonated molecules [M+H]+ or the deprotonated molecules [M-H]-, along with the accurate mass of their main fragment ions. In order to obtain sufficient sensitivity for quantitation purposes (using the protonated or deprotonated molecule) and additional qualitative mass spectrum information provided by the fragments ions, segment program of fragmentor voltages is designed in positive and negative ion mode, respectively. Accurate mass measurements are highly useful in the complex sample analyses since they allow us to achieve a high degree of specificity, often needed when other interferents are present in the matrix. The mass accuracy typically obtained is routinely better than 3 ppm. The 18 compounds behave linearly in the 0.005-5.0 mg.kg(-1) concentration range, with correlation coefficient >0.996. The recoveries at the tested concentrations of 1.0 mg.kg(-1)-100 mg.kg(-1) are 81-106%, with coefficients of variation <7.5%. Limits of detection (LODs) range from 0.0005 to 0.05 mg.kg(-1), which are far below the required maximum residue level (MRL) for these preservatives in foodstuff. The method is suitable for routine quantitative and qualitative analyses of synthetic preservatives in foodstuff.

  14. Quantitative radionuclide angiocardiography

    Energy Technology Data Exchange (ETDEWEB)

    Scholz, P.M.; Rerych, S.K.; Moran, J.F.; Newman, G.E.; Douglas, J.M.; Sabiston, D.C. Jr.; Jones, R.H.

    1980-01-01

    This study introduces a new method for calculating actual left ventricular volumes and cardiac output from data recorded during a single transit of a radionuclide bolus through the heart, and describes in detail current radionuclide angiocardiography methodology. A group of 64 healthy adults with a wide age range were studied to define the normal range of hemodynamic parameters determined by the technique. Radionuclide angiocardiograms were performed in patients undergoing cardiac catheterization to validate the measurements. In 33 patients studied by both techniques on the same day, a close correlation was documented for measurement of ejection fraction and end-diastolic volume. To validate the method of volumetric cardiac output calculation, 33 simultaneous radionuclide and indocyanine green dye determinations of cardiac output were performed in 18 normal young adults. These independent comparisons of radionuclide measurements with two separate methods document that initial transit radionuclide angiocardiography accurately assesses left ventricular function.
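
    The quantities validated above follow from simple volume relations: ejection fraction is (EDV − ESV)/EDV, and volumetric cardiac output is stroke volume times heart rate. A sketch with illustrative values (not data from the study):

```python
def ejection_fraction(edv, esv):
    """Left-ventricular ejection fraction from end-diastolic (EDV)
    and end-systolic (ESV) volumes."""
    return (edv - esv) / edv

def cardiac_output(edv, esv, heart_rate):
    """Volumetric cardiac output (mL/min) = stroke volume * heart rate."""
    return (edv - esv) * heart_rate

# Illustrative values: EDV 120 mL, ESV 50 mL, HR 70 beats/min
print(round(ejection_fraction(120, 50), 2))  # dimensionless fraction
print(cardiac_output(120, 50, 70))           # mL/min
```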

  15. A simplified and accurate detection of the genetically modified wheat MON71800 with one calibrator plasmid.

    Science.gov (United States)

    Kim, Jae-Hwan; Park, Saet-Byul; Roh, Hyo-Jeong; Park, Sunghoon; Shin, Min-Ki; Moon, Gui Im; Hong, Jin-Hwan; Kim, Hae-Yeong

    2015-06-01

    With the increasing number of genetically modified (GM) events, unauthorized GMO releases into the food market have increased dramatically, and many countries have developed detection tools for them. This study described qualitative and quantitative detection methods for the unauthorized GM wheat MON71800 with a reference plasmid (pGEM-M71800). The wheat acetyl-CoA carboxylase (acc) gene was used as the endogenous gene. The plasmid pGEM-M71800, which contains both the acc gene and the event-specific target MON71800, was constructed as a positive control for the qualitative and quantitative analyses. The limit of detection in the qualitative PCR assay was approximately 10 copies. In the quantitative PCR assay, the standard deviation and relative standard deviation repeatability values ranged from 0.06 to 0.25 and from 0.23% to 1.12%, respectively. This study supplies a powerful yet simple and accurate detection strategy for the unauthorized GM wheat MON71800 that utilizes a single calibrator plasmid.
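
    Copy numbers for a calibrator-plasmid dilution series, as behind the "approximately 10 copies" limit of detection above, are typically derived from the plasmid mass and size via Avogadro's number. A sketch (the 4000 bp size and 1 pg input are hypothetical, not the actual pGEM-M71800 figures):

```python
AVOGADRO = 6.022e23
BP_MEAN_MASS_DA = 660.0  # average mass of one double-stranded base pair, in Da

def plasmid_copies(mass_ng, plasmid_bp):
    """Copy number in a qPCR calibrator aliquot, from DNA mass (ng)
    and plasmid size (bp): copies = moles * Avogadro."""
    mass_g = mass_ng * 1e-9
    mol = mass_g / (plasmid_bp * BP_MEAN_MASS_DA)
    return mol * AVOGADRO

# Hypothetical 4000 bp calibrator plasmid, 1 pg (0.001 ng) per reaction:
print(round(plasmid_copies(0.001, 4000)))
```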

  16. Mass spectrometry based protein identification with accurate statistical significance assignment

    OpenAIRE

    Alves, Gelio; Yu, Yi-Kuo

    2014-01-01

    Motivation: Assigning statistical significance accurately has become increasingly important as meta data of many types, often assembled in hierarchies, are constructed and combined for further biological analyses. Statistical inaccuracy of meta data at any level may propagate to downstream analyses, undermining the validity of scientific conclusions thus drawn. From the perspective of mass spectrometry based proteomics, even though accurate statistics for peptide identification can now be ach...

  17. QUAIL: A Quantitative Security Analyzer for Imperative Code

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Wasowski, Andrzej; Traonouez, Louis-Marie

    2013-01-01

    Quantitative security analysis evaluates and compares how effectively a system protects its secret data. We introduce QUAIL, the first tool able to perform an arbitrary-precision quantitative analysis of the security of a system depending on private information. QUAIL builds a Markov Chain model...... the safety of randomized protocols depending on secret data, allowing to verify a security protocol’s effectiveness. We experiment with a few examples and show that QUAIL’s security analysis is more accurate and revealing than results of other tools...

  19. Quantitative analysis of norfloxacin by 1H NMR and HPLC.

    Science.gov (United States)

    Frackowiak, Anita; Kokot, Zenon J

    2012-01-01

    1H NMR and previously developed HPLC methods were applied to the quantitative determination of norfloxacin in a veterinary solution formulation for pigeons. Changes in concentration can lead to significant changes in the 1H chemical shifts of non-exchangeable aromatic protons as a result of extensive self-association phenomena. This chemical shift variation of protons was analyzed and applied in the quantitative determination of norfloxacin. The method is simple, rapid, precise and accurate, and can be used for quality control of this drug.
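
    In quantitative 1H NMR, concentration follows from peak integrals normalized by proton counts against a reference of known concentration: C_a = (I_a/N_a)/(I_std/N_std) × C_std. A minimal sketch with invented integrals (this general qNMR relation is an assumption; the paper's exact calibration is not given in the abstract):

```python
def qnmr_concentration(I_a, N_a, I_std, N_std, C_std):
    """Analyte concentration from integrals relative to a standard:
    C_a = (I_a / N_a) / (I_std / N_std) * C_std
    where I = peak integral and N = number of protons per peak."""
    return (I_a / N_a) / (I_std / N_std) * C_std

# Invented values: 1H analyte peak integral 2.0 vs a 9H standard peak
# integral of 6.0, standard concentration 3.0 mM:
print(round(qnmr_concentration(I_a=2.0, N_a=1, I_std=6.0, N_std=9, C_std=3.0), 3))
```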

  20. Quantitative Risk Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Helms, J. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-02-10

    The US energy sector is vulnerable to multiple hazards including both natural disasters and malicious attacks from an intelligent adversary. The question that utility owners, operators and regulators face is how to prioritize their investments to mitigate the risks from a hazard that can have the most impact on the asset of interest. In order to be able to understand their risk landscape and develop a prioritized mitigation strategy, they must quantify risk in a consistent way across all hazards their asset is facing. Without being able to quantitatively measure risk, it is not possible to defensibly prioritize security investments or evaluate trade-offs between security and functionality. Development of a methodology that will consistently measure and quantify risk across different hazards is needed.
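
    A consistent quantitative risk measure of the kind argued for above is often expected loss, probability times consequence, applied uniformly across hazards so they can be ranked on one scale. A toy sketch (the hazard probabilities and losses below are invented):

```python
def prioritize(hazards):
    """Rank hazards by expected annual loss: risk = P(event) * consequence.
    One common quantitative scheme; the report argues for some such
    consistent measure across all hazards."""
    return sorted(hazards, key=lambda h: h["p"] * h["loss"], reverse=True)

# Invented annualized figures for one asset:
hazards = [
    {"name": "flood",        "p": 0.01,  "loss": 5_000_000},
    {"name": "cyber attack", "p": 0.05,  "loss": 2_000_000},
    {"name": "earthquake",   "p": 0.001, "loss": 20_000_000},
]
ranked = prioritize(hazards)
print([h["name"] for h in ranked])
```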

  1. Quantitative velocity modulation spectroscopy

    Science.gov (United States)

    Hodges, James N.; McCall, Benjamin J.

    2016-05-01

    Velocity Modulation Spectroscopy (VMS) is arguably the most important development in the 20th century for spectroscopic study of molecular ions. For decades, interpretation of VMS lineshapes has presented challenges due to the intrinsic covariance of fit parameters including velocity modulation amplitude, linewidth, and intensity. This limitation has stifled the growth of this technique into the quantitative realm. In this work, we show that subtle changes in the lineshape can be used to help address this complexity. This allows for determination of the linewidth, intensity relative to other transitions, velocity modulation amplitude, and electric field strength in the positive column of a glow discharge. Additionally, we explain the large homogeneous component of the linewidth that has been previously described. Using this component, the ion mobility can be determined.

  2. Quantitative metamaterial property extraction

    CERN Document Server

    Schurig, David

    2015-01-01

    We examine an extraction model for metamaterials, not previously reported, that gives precise, quantitative and causal representation of S parameter data over a broad frequency range, up to frequencies where the free space wavelength is only a modest factor larger than the unit cell dimension. The model is comprised of superposed, slab shaped response regions of finite thickness, one for each observed resonance. The resonance dispersion is Lorentzian and thus strictly causal. This new model is compared with previous models for correctness likelihood, including an appropriate Occam's factor for each fit parameter. We find that this new model is by far the most likely to be correct in a Bayesian analysis of model fits to S parameter simulation data for several classic metamaterial unit cells.

  3. Quantitative Hyperspectral Reflectance Imaging

    Directory of Open Access Journals (Sweden)

    Ted A.G. Steemers

    2008-09-01

    Full Text Available Hyperspectral imaging is a non-destructive optical analysis technique that can, for instance, be used to obtain information from cultural heritage objects unavailable with conventional colour or multi-spectral photography. This technique can be used to distinguish and recognize materials, to enhance the visibility of faint or obscured features, to detect signs of degradation and study the effect of environmental conditions on the object. We describe the basic concept, working principles, construction and performance of a laboratory instrument specifically developed for the analysis of historical documents. The instrument measures calibrated spectral reflectance images at 70 wavelengths ranging from 365 to 1100 nm (near-ultraviolet, visible and near-infrared). By using a wavelength-tunable narrow-bandwidth light source, the light energy used to illuminate the measured object is minimal, so that any light-induced degradation can be excluded. Basic analysis of the hyperspectral data includes a qualitative comparison of the spectral images and the extraction of quantitative data such as mean spectral reflectance curves and statistical information from user-defined regions-of-interest. More sophisticated mathematical feature extraction and classification techniques can be used to map areas on the document, where different types of ink had been applied or where one ink shows various degrees of degradation. The developed quantitative hyperspectral imager is currently in use by the Nationaal Archief (National Archives of The Netherlands) to study degradation effects of artificial samples and original documents, exposed in their permanent exhibition area or stored in their deposit rooms.
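
    The "mean spectral reflectance curves from user-defined regions-of-interest" described above reduce to averaging each band over the ROI pixels. A sketch on a toy cube (the band-major layout is an assumption, not the instrument's actual data format):

```python
def mean_reflectance(cube, roi):
    """Mean spectral reflectance curve over a region of interest.

    cube: cube[band][row][col] -> calibrated reflectance value
    roi:  iterable of (row, col) pixel coordinates
    Returns one mean reflectance value per band."""
    pixels = list(roi)
    return [sum(band[r][c] for r, c in pixels) / len(pixels) for band in cube]

# Tiny 2-band, 2x2-pixel example:
cube = [
    [[0.2, 0.4], [0.6, 0.8]],   # band 1
    [[0.1, 0.1], [0.3, 0.5]],   # band 2
]
curve = mean_reflectance(cube, [(0, 0), (1, 1)])
print(curve)
```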

  4. Quantitative Techniques in Volumetric Analysis

    Science.gov (United States)

    Zimmerman, John; Jacobsen, Jerrold J.

    1996-12-01

    Quantitative Techniques in Volumetric Analysis is a visual library of techniques used in making volumetric measurements. This 40-minute VHS videotape is designed as a resource for introducing students to proper volumetric methods and procedures. The entire tape, or relevant segments of the tape, can also be used to review procedures used in subsequent experiments that rely on the traditional art of quantitative analysis laboratory practice. The techniques included are: quantitative transfer of a solid with a weighing spoon; quantitative transfer of a solid with a finger-held weighing bottle; quantitative transfer of a solid with a paper-strap-held bottle; quantitative transfer of a solid with a spatula; examples of common quantitative weighing errors; quantitative transfer of a solid from dish to beaker to volumetric flask; quantitative transfer of a solid from dish to volumetric flask; volumetric transfer pipet; a complete acid-base titration; and hand technique variations. The conventional view of contemporary quantitative chemical measurement tends to focus on instrumental systems, computers, and robotics. In this view, the analyst is relegated to placing standards and samples on a tray. A robotic arm delivers a sample to the analysis center, while a computer controls the analysis conditions and records the results. In spite of this, it is rare to find an analysis process that does not rely on some aspect of more traditional quantitative analysis techniques, such as careful dilution to the mark of a volumetric flask. Clearly, errors in a classical step will affect the quality of the final analysis. Because of this, it is still important for students to master the key elements of the traditional art of quantitative chemical analysis laboratory practice. Some aspects of chemical analysis, like careful rinsing to insure quantitative transfer, are often an automated part of an instrumental process that must be understood by the

  5. Nonexposure Accurate Location K-Anonymity Algorithm in LBS

    Directory of Open Access Journals (Sweden)

    Jinying Jia

    2014-01-01

    Full Text Available This paper tackles location privacy protection in current location-based services (LBS where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user’s accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR, nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user’s accurate location to any party is urgently needed. In this paper, we present such two nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASR.

  6. Nonexposure accurate location K-anonymity algorithm in LBS.

    Science.gov (United States)

    Jia, Jinying; Zhang, Fengli

    2014-01-01

    This paper tackles location privacy protection in current location-based services (LBS) where mobile users have to report their exact location information to an LBS provider in order to obtain their desired services. Location cloaking has been proposed and well studied to protect user privacy. It blurs the user's accurate coordinate and replaces it with a well-shaped cloaked region. However, to obtain such an anonymous spatial region (ASR), nearly all existent cloaking algorithms require knowing the accurate locations of all users. Therefore, location cloaking without exposing the user's accurate location to any party is urgently needed. In this paper, we present such two nonexposure accurate location cloaking algorithms. They are designed for K-anonymity, and cloaking is performed based on the identifications (IDs) of the grid areas which were reported by all the users, instead of directly on their accurate coordinates. Experimental results show that our algorithms are more secure than the existent cloaking algorithms, need not have all the users reporting their locations all the time, and can generate smaller ASR.
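
    The core idea above, cloaking on reported grid-cell IDs rather than exact coordinates, can be sketched as growing a square of cells around the requester until it covers at least K users. This is an illustrative simplification, not the paper's exact algorithm:

```python
def cloak(user_cell, reported_cells, k, max_radius=10):
    """Grid-ID based K-anonymity cloaking sketch: grow a square of grid
    cells around the requesting user's cell until it covers at least k
    reported users, then return that cell set as the anonymous spatial
    region (ASR). Only coarse cell IDs are ever exchanged."""
    ux, uy = user_cell
    for r in range(max_radius + 1):
        asr = {(x, y) for x in range(ux - r, ux + r + 1)
                      for y in range(uy - r, uy + r + 1)}
        count = sum(1 for cell in reported_cells if cell in asr)
        if count >= k:
            return asr
    return None  # not enough users within max_radius

# Users report only grid-cell IDs, never exact coordinates:
reported = [(2, 2), (2, 3), (3, 2), (5, 5), (0, 0)]
asr = cloak((2, 2), reported, k=3)
print(len(asr))  # number of grid cells in the ASR
```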

  7. Quantitative Testing of Defect for Gun Barrels

    Institute of Scientific and Technical Information of China (English)

    WANG Chang-long; JI Feng-zhu; WANG Jin; CHEN Zheng-ge

    2007-01-01

    The magnetic flux leakage (MFL) method is commonly used in the nondestructive evaluation (NDE) of gun barrels. The key point of MFL testing is to estimate the crack geometry parameters from the measured signal. The magnetic leakage field can be obtained by solving Maxwell's equations using the finite element method (FEM). The radial component of magnetic flux density is measured in MFL testing. The peak-peak value, the separation distance between the positive and negative peaks of the signal, and the lift-off value of the Hall sensor are used as the main features of every sample. This paper establishes multi-regression equations relating the width (and the depth) of the crack to the main characteristic values. The regression model is tested using the magnetic leakage data. The experimental results indicate that the regression equations can accurately predict the 2-D defect geometry parameters and that quantitative MFL testing can be achieved.
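
    The regression step, predicting crack geometry from signal features such as the peak-to-peak value, can be illustrated with a one-feature least-squares fit; the paper uses multiple features, and the calibration data below are invented:

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit y = a*x + b, in closed form."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
         / sum((x - mx) ** 2 for x in xs))
    return a, my - a * mx

# Invented calibration: peak-to-peak MFL signal (mV) vs known crack width (mm)
peak_to_peak = [10.0, 20.0, 30.0, 40.0]
width_mm = [0.55, 1.0, 1.45, 1.9]
a, b = fit_line(peak_to_peak, width_mm)
predicted = a * 25.0 + b  # predicted width for a 25 mV signal
print(round(predicted, 3))
```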

  8. Quantitative microbiological risk assessment.

    Science.gov (United States)

    Hoornstra, E; Notermans, S

    2001-05-21

    The production of safe food is being increasingly based on the use of risk analysis, and this process is now in use to establish national and international food safety objectives. It is also being used more frequently to guarantee that safety objectives are met and that such guarantees are achieved in a cost-effective manner. One part of the overall risk analysis procedure, risk assessment, is the scientific process in which the hazards and risk factors are identified, and the risk estimate or risk profile is determined. Risk assessment is an especially important tool for governments when food safety objectives have to be developed in the case of 'new' contaminants in known products or known contaminants causing trouble in 'new' products. Risk assessment is also an important approach for food companies (i) during product development, (ii) during (hygienic) process optimization, and (iii) as an extension (validation) of the more qualitative HACCP-plan. This paper discusses these two different types of risk assessment, and uses probability distribution functions to assess the risks posed by Escherichia coli O157:H7 in each case. Such approaches are essential elements of risk management, as they draw on all available information to derive accurate and realistic estimations of the risk posed. The paper also discusses the potential of scenario-analysis in simulating the impact of different or modified risk factors during the consideration of new or improved control measures.
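
    The probability-distribution approach described above is often implemented as a Monte Carlo simulation over an exposure distribution combined with a dose-response model, such as the exponential form P = 1 − exp(−r·dose). A toy sketch; the lognormal exposure parameters and r below are invented, not actual E. coli O157:H7 values:

```python
import math
import random

def mean_infection_risk(n_sims, mean_dose, r, seed=1):
    """Toy quantitative microbiological risk assessment: draw a dose per
    simulated exposure from a lognormal exposure model, apply the
    exponential dose-response P(infection) = 1 - exp(-r * dose),
    and average over simulations."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_sims):
        dose = rng.lognormvariate(math.log(mean_dose), 0.5)  # CFU ingested
        total += 1.0 - math.exp(-r * dose)
    return total / n_sims

# Invented exposure parameters:
print(round(mean_infection_risk(10000, mean_dose=5.0, r=0.01), 3))
```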

  9. Quantitative goals for monetary policy

    OpenAIRE

    Fatás, Antonio; Mihov, Ilian; ROSE, Andrew K.

    2006-01-01

    We study empirically the macroeconomic effects of an explicit de jure quantitative goal for monetary policy. Quantitative goals take three forms: exchange rates, money growth rates, and inflation targets. We analyze the effects on inflation of both having a quantitative target, and of hitting a declared target; we also consider effects on output volatility. Our empirical work uses an annual data set covering 42 countries between 1960 and 2000, and takes account of other determinants of inflat...

  10. Quantitative Risk - Phases 1 & 2

    Science.gov (United States)

    2013-11-12


  11. Quantitative Electron Nanodiffraction.

    Energy Technology Data Exchange (ETDEWEB)

    Spence, John [Arizona State Univ., Mesa, AZ (United States)

    2015-01-30

    This Final Report summarizes progress under this award for the final reporting period 2002 - 2013 in our development of quantitative electron nanodiffraction for materials problems, especially devoted to atomistic processes in semiconductors and electronic oxides such as the new artificial oxide multilayers, where our microdiffraction is complemented with energy-loss spectroscopy (ELNES) and aberration-corrected STEM imaging (9). The method has also been used to map out the chemical bonds in the important GaN semiconductor (1) used for solid-state lighting, and to understand the effects of stacking sequence variations and interfaces in digital oxide superlattices (8). Other projects include the development of a laser-beam Zernike phase plate for cryo-electron microscopy (5) (based on the Kapitza-Dirac effect), work on reconstruction of molecular images using the scattering from many identical molecules lying in random orientations (4), a review article on space-group determination for the International Tables for Crystallography (10), the observation of energy-loss spectra with millivolt energy resolution and sub-nanometer spatial resolution from individual point defects in an alkali halide, a review article for the Centenary of X-ray Diffraction (17), and the development of a new method of electron-beam lithography (12). We briefly summarize here the work on GaN, on oxide superlattice ELNES, and on lithography by STEM.

  12. Programmable Quantitative DNA Nanothermometers.

    Science.gov (United States)

    Gareau, David; Desrosiers, Arnaud; Vallée-Bélisle, Alexis

    2016-07-13

    Developing molecules, switches, probes or nanomaterials that are able to respond to specific temperature changes should prove of utility for several applications in nanotechnology. Here, we describe bioinspired strategies to design DNA thermoswitches with programmable linear response ranges that can provide either a precise ultrasensitive response over a desired, small temperature interval (±0.05 °C) or an extended linear response over a wide temperature range (e.g., from 25 to 90 °C). Using structural modifications or inexpensive DNA stabilizers, we show that we can tune the transition midpoints of DNA thermometers from 30 to 85 °C. Using multimeric switch architectures, we are able to create ultrasensitive thermometers that display large quantitative fluorescence gains within small temperature variation (e.g., > 700% over 10 °C). Using a combination of thermoswitches of different stabilities or a mix of stabilizers of various strengths, we can create extended thermometers that respond linearly up to 50 °C in temperature range. Here, we demonstrate the reversibility, robustness, and efficiency of these programmable DNA thermometers by monitoring temperature change inside individual wells during polymerase chain reactions. We discuss the potential applications of these programmable DNA thermoswitches in various nanotechnology fields including cell imaging, nanofluidics, nanomedicine, nanoelectronics, nanomaterials, and synthetic biology.
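    The tunable transition midpoint described above can be captured by a standard two-state (van 't Hoff) melting model, sketched below; the midpoint and enthalpy values are illustrative assumptions rather than parameters from the paper.

```python
import math

R = 1.987e-3  # gas constant, kcal/(mol*K)

def fraction_folded(temp_c, tm_c=55.0, dh=-45.0):
    """Two-state melting curve for a DNA thermoswitch (sketch).

    tm_c: transition midpoint in deg C; dh: folding enthalpy in kcal/mol.
    Both values are illustrative; the paper tunes midpoints from 30 to 85 C.
    """
    t = temp_c + 273.15
    tm = tm_c + 273.15
    # unfolding equilibrium constant from the van 't Hoff relation
    k_unfold = math.exp((dh / R) * (1.0 / t - 1.0 / tm))
    return 1.0 / (1.0 + k_unfold)

print(f"folded fraction at Tm: {fraction_folded(55.0):.2f}")  # → 0.50
```

    Making the transition sharper (a multimeric, ultrasensitive switch) corresponds to a larger effective enthalpy, while mixing switches of different stabilities flattens and extends the linear range.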

  13. Quantitative DNA Fiber Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Chun-Mei; Wang, Mei; Greulich-Bode, Karin M.; Weier, Jingly F.; Weier, Heinz-Ulli G.

    2008-01-28

    Several hybridization-based methods used to delineate single copy or repeated DNA sequences in larger genomic intervals take advantage of the increased resolution and sensitivity of free chromatin, i.e., chromatin released from interphase cell nuclei. Quantitative DNA fiber mapping (QDFM) differs from the majority of these methods in that it applies FISH to purified, clonal DNA molecules which have been bound with at least one end to a solid substrate. The DNA molecules are then stretched by the action of a receding meniscus at the water-air interface resulting in DNA molecules stretched homogeneously to about 2.3 kb/µm. When non-isotopically, multicolor-labeled probes are hybridized to these stretched DNA fibers, their respective binding sites are visualized in the fluorescence microscope, their relative distance can be measured and converted into kilobase pairs (kb). The QDFM technique has found useful applications ranging from the detection and delineation of deletions or overlap between linked clones to the construction of high-resolution physical maps to studies of stalled DNA replication and transcription.
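    The distance-to-kilobase conversion at the heart of QDFM is a direct scaling by the ~2.3 kb/µm stretching factor; a minimal sketch:

```python
def fiber_distance_kb(distance_um, stretch_kb_per_um=2.3):
    """Convert a measured inter-probe distance on a stretched DNA fiber
    to kilobase pairs, using the ~2.3 kb/um homogeneous stretching
    reported for QDFM."""
    return distance_um * stretch_kb_per_um

# e.g., two probe signals measured 10.5 um apart in the microscope:
print(fiber_distance_kb(10.5))  # → 24.15 (kb)
```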

  14. Accurate level set method for simulations of liquid atomization☆

    Institute of Scientific and Technical Information of China (English)

    Changxiao Shao; Kun Luo; Jianshan Yang; Song Chen; Jianren Fan

    2015-01-01

    Computational fluid dynamics is an efficient numerical approach for spray atomization study, but it is challenging to accurately capture the gas–liquid interface. In this work, an accurate conservative level set method is introduced to accurately track the gas–liquid interfaces in liquid atomization. To validate the capability of this method, binary drop collision and drop impacting on liquid film are investigated. The results are in good agreement with experiment observations. In addition, primary atomization (swirling sheet atomization) is studied using this method. In the swirling sheet atomization, it is found that Rayleigh–Taylor instability in the azimuthal direction causes the primary breakup of the liquid sheet and complex vortex structures are clustered around the rim of the liquid sheet. The effects of central gas velocity and liquid–gas density ratio on atomization are also investigated. This work lays a solid foundation for further studying the mechanism of spray atomization.

  15. Highly Accurate Measurement of the Electron Orbital Magnetic Moment

    CERN Document Server

    Awobode, A M

    2015-01-01

    We propose to accurately determine the orbital magnetic moment of the electron by measuring, in a Magneto-Optical or Ion trap, the ratio of the Lande g-factors in two atomic states. From the measurement of (gJ1/gJ2), the quantity A, which depends on the corrections to the electron g-factors, can be extracted, if the states are LS coupled. Given that highly accurate values of the correction to the spin g-factor are currently available, accurate values of the correction to the orbital g-factor may also be determined. At present, (-1.8 ± 0.4) x 10^-4 has been determined as a correction to the electron orbital g-factor, by using earlier measurements of the ratio gJ1/gJ2, made on the Indium 2P1/2 and 2P3/2 states.

  16. Simple and accurate analytical calculation of shortest path lengths

    CERN Document Server

    Melnik, Sergey

    2016-01-01

    We present an analytical approach to calculating the distribution of shortest path lengths (also called intervertex distances, or geodesic paths) between nodes in unweighted undirected networks. We obtain very accurate results for synthetic random networks with specified degree distribution (the so-called configuration model networks). Our method allows us to accurately predict the distribution of shortest path lengths on real-world networks using their degree distribution, or joint degree-degree distribution. Compared to some other methods, our approach is simpler and yields more accurate results. In order to obtain the analytical results, we use the analogy between an infection reaching a node in $n$ discrete time steps (i.e., as in the susceptible-infected epidemic model) and that node being at a distance $n$ from the source of the infection.
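    The quantity the analytical method approximates can be computed exactly on small graphs by breadth-first search, which is precisely the discrete-time SI spreading process mentioned above. A self-contained sketch:

```python
from collections import deque, Counter

def shortest_path_distribution(adj):
    """Empirical distribution of shortest path lengths in an unweighted,
    undirected graph. adj: dict mapping node -> iterable of neighbours."""
    counts = Counter()
    for src in adj:
        dist = {src: 0}
        q = deque([src])
        while q:  # breadth-first search = discrete-time SI infection spreading
            u = q.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    q.append(v)
        for node, d in dist.items():
            if node != src:
                counts[d] += 1
    total = sum(counts.values())
    return {d: c / total for d, c in sorted(counts.items())}

# 4-node path graph 0-1-2-3: half of all ordered pairs are at distance 1
path = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(shortest_path_distribution(path))
```

    The analytical approach in the paper reproduces such distributions from the degree distribution alone, without enumerating paths.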

  17. Accurate Fiber Length Measurement Using Time-of-Flight Technique

    Science.gov (United States)

    Terra, Osama; Hussein, Hatem

    2016-06-01

    Fiber artifacts of very well-measured length are required for the calibration of optical time domain reflectometers (OTDR). In this paper accurate length measurement of different fiber lengths using the time-of-flight technique is performed. A setup is proposed to measure accurately lengths from 1 to 40 km at 1,550 and 1,310 nm using high-speed electro-optic modulator and photodetector. This setup offers traceability to the SI unit of time, the second (and hence to meter by definition), by locking the time interval counter to the Global Positioning System (GPS)-disciplined quartz oscillator. Additionally, the length of a recirculating loop artifact is measured and compared with the measurement made for the same fiber by the National Physical Laboratory of United Kingdom (NPL). Finally, a method is proposed to relatively correct the fiber refractive index to allow accurate fiber length measurement.
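    The core time-of-flight relation converts a measured propagation delay into fiber length through the group refractive index. The sketch below assumes a single-pass delay measurement; the group index value is a typical figure near 1550 nm used as an illustrative assumption, not a value from the paper.

```python
C = 299_792_458.0  # speed of light in vacuum, m/s (SI definition)

def fiber_length_m(delay_s, group_index=1.4682):
    """Fiber length from a single-pass time-of-flight delay.

    group_index: the fiber's group refractive index at the measurement
    wavelength (1.4682 is a typical value near 1550 nm, assumed here)."""
    return C * delay_s / group_index

# a ~1 km fiber delays the pulse by about 4.897 microseconds:
print(f"{fiber_length_m(4.897e-6):.1f} m")
```

    The paper's refractive-index correction addresses exactly the sensitivity of this formula to the assumed group index: a relative index error translates one-to-one into a relative length error.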

  18. Accurate nuclear radii and binding energies from a chiral interaction

    CERN Document Server

    Ekstrom, A; Wendt, K A; Hagen, G; Papenbrock, T; Carlsson, B D; Forssen, C; Hjorth-Jensen, M; Navratil, P; Nazarewicz, W

    2015-01-01

    The accurate reproduction of nuclear radii and binding energies is a long-standing challenge in nuclear theory. To address this problem two-nucleon and three-nucleon forces from chiral effective field theory are optimized simultaneously to low-energy nucleon-nucleon scattering data, as well as binding energies and radii of few-nucleon systems and selected isotopes of carbon and oxygen. Coupled-cluster calculations based on this interaction, named NNLOsat, yield accurate binding energies and radii of nuclei up to 40Ca, and are consistent with the empirical saturation point of symmetric nuclear matter. In addition, the low-lying collective 3- states in 16O and 40Ca are described accurately, while spectra for selected p- and sd-shell nuclei are in reasonable agreement with experiment.

  19. Accurate reconstruction of digital holography using frequency domain zero padding

    Science.gov (United States)

    Shin, Jun Geun; Kim, Ju Wan; Lee, Jae Hwi; Lee, Byeong Ha

    2017-04-01

    We propose an image reconstruction method for digital holography that yields more accurate reconstructions. Digital holography provides both the light amplitude and the phase of a specimen by recording the interferogram. Since Fresnel diffraction can be efficiently implemented by the Fourier transform, a zero-padding technique can be applied to obtain more accurate information. In this work, we report the method of frequency domain zero padding (FDZP). Both in computer simulation and in experiments made with a USAF 1951 resolution chart and target, FDZP gave more accurate reconstruction images. Although FDZP requires more processing time, with the help of a graphics processing unit (GPU) it can find good applications in digital holography for 3-D profile imaging.

  20. Memory conformity affects inaccurate memories more than accurate memories.

    Science.gov (United States)

    Wright, Daniel B; Villalba, Daniella K

    2012-01-01

    After controlling for initial confidence, inaccurate memories were shown to be more easily distorted than accurate memories. In two experiments groups of participants viewed 50 stimuli and were then presented with these stimuli plus 50 fillers. During this test phase participants reported their confidence that each stimulus was originally shown. This was followed by computer-generated responses from a bogus participant. After being exposed to this response participants again rated the confidence of their memory. The computer-generated responses systematically distorted participants' responses. Memory distortion depended on initial memory confidence, with uncertain memories being more malleable than confident memories. This effect was moderated by whether the participant's memory was initially accurate or inaccurate. Inaccurate memories were more malleable than accurate memories. The data were consistent with a model describing two types of memory (i.e., recollective and non-recollective memories), which differ in how susceptible these memories are to memory distortion.

  1. Accurate torque-speed performance prediction for brushless dc motors

    Science.gov (United States)

    Gipper, Patrick D.

    Desirable characteristics of the brushless dc motor (BLDCM) have resulted in their application for electrohydrostatic (EH) and electromechanical (EM) actuation systems. But to effectively apply the BLDCM requires accurate prediction of performance. The minimum necessary performance characteristics are motor torque versus speed, peak and average supply current and efficiency. BLDCM nonlinear simulation software specifically adapted for torque-speed prediction is presented. The capability of the software to quickly and accurately predict performance has been verified on fractional to integral HP motor sizes, and is presented. Additionally, the capability of torque-speed prediction with commutation angle advance is demonstrated.

  2. Accurate analysis of planar metamaterials using the RLC theory

    DEFF Research Database (Denmark)

    Malureanu, Radu; Lavrinenko, Andrei

    2008-01-01

    In this work we will present an accurate description of the response of metallic pads using RLC theory. In order to calculate such a response we take into account several factors, including the mutual inductances, a precise formula for determining the capacitance, and also the pads’ resistance considering the variation of permittivity due to small thicknesses. Even if complex, such a strategy gives accurate results and we believe that, after more refinement, it can be used to completely calculate a complex metallic structure placed on a substrate in a far faster manner than full simulation programs do.

  3. Method of accurate grinding for single enveloping TI worm

    Institute of Scientific and Technical Information of China (English)

    SUN; Yuehai; ZHENG; Huijiang; BI; Qingzhen; WANG; Shuren

    2005-01-01

    The TI worm drive consists of an involute helical gear and its enveloping hourglass worm. Accurate grinding of the TI worm is the key manufacturing technology for TI worm gearing to be popularized and applied. According to the theory of gear meshing, the equations of the tooth surface of the worm drive are obtained, and the equation of the axial section profile of a grinding wheel that can accurately grind the TI worm is extracted. Simultaneously, the relations of position and motion between the TI worm and the grinding wheel are expounded. The method for precisely grinding a single enveloping TI worm is obtained.

  4. Quantitative photoacoustic image reconstruction improves accuracy in deep tissue structures.

    Science.gov (United States)

    Mastanduno, Michael A; Gambhir, Sanjiv S

    2016-10-01

    Photoacoustic imaging (PAI) is emerging as a potentially powerful imaging tool with multiple applications. Image reconstruction for PAI has been relatively limited because of limited or no modeling of light delivery to deep tissues. This work demonstrates a numerical approach to quantitative photoacoustic image reconstruction that minimizes depth and spectrally derived artifacts. We present the first time-domain quantitative photoacoustic image reconstruction algorithm that models optical sources through acoustic data to create quantitative images of absorption coefficients. We demonstrate quantitative accuracy of less than 5% error in large 3 cm diameter 2D geometries with multiple targets and within 22% error in the largest-size quantitative photoacoustic studies to date (6 cm diameter). We extend the algorithm to spectral data, reconstructing 6 varying chromophores to within 17% of the true values. This quantitative PA tomography method was able to improve considerably on filtered back-projection from the standpoint of image quality, absolute, and relative quantification in all our simulation geometries. We characterize the effects of time step size, initial guess, and source configuration on final accuracy. This work could help to generate accurate quantitative images from both endogenous absorbers and exogenous photoacoustic dyes in both preclinical and clinical work, thereby increasing the information content obtained especially from deep-tissue photoacoustic imaging studies.

  5. Quantitative Literacy: Geosciences and Beyond

    Science.gov (United States)

    Richardson, R. M.; McCallum, W. G.

    2002-12-01

    Quantitative literacy seems like such a natural for the geosciences, right? The field has gone from its origin as a largely descriptive discipline to one where it is hard to imagine failing to bring a full range of mathematical tools to the solution of geological problems. Although there are many definitions of quantitative literacy, we have proposed one that is analogous to the UNESCO definition of conventional literacy: "A quantitatively literate person is one who, with understanding, can both read and represent quantitative information arising in his or her everyday life." Central to this definition is the concept that a curriculum for quantitative literacy must go beyond the basic ability to "read and write" mathematics and develop conceptual understanding. It is also critical that a curriculum for quantitative literacy be engaged with a context, be it everyday life, humanities, geoscience or other sciences, business, engineering, or technology. Thus, our definition works both within and outside the sciences. What role do geoscience faculty have in helping students become quantitatively literate? Is it our role, or that of the mathematicians? How does quantitative literacy vary between different scientific and engineering fields? Or between science and nonscience fields? We will argue that successful quantitative literacy curricula must be an across-the-curriculum responsibility. We will share examples of how quantitative literacy can be developed within a geoscience curriculum, beginning with introductory classes for nonmajors (using the Mauna Loa CO2 data set) through graduate courses in inverse theory (using singular value decomposition). We will highlight six approaches to across-the-curriculum efforts from national models: collaboration between mathematics and other faculty; gateway testing; intensive instructional support; workshops for nonmathematics faculty; quantitative reasoning requirement; and individual initiative by nonmathematics faculty.

  6. Deterministic quantitative risk assessment development

    Energy Technology Data Exchange (ETDEWEB)

    Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)

    2009-07-01

    Current risk assessment practice in pipeline integrity management is to use a semi-quantitative index-based or model-based methodology. This approach has been found to be very flexible and to provide useful results for identifying high risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability-based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability and quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies as required. (author)

  7. Quantitative luminescence imaging system

    Science.gov (United States)

    Batishko, C. R.; Stahl, K. A.; Fecht, B. A.

    The goal of the Measurement of Chemiluminescence project is to develop and deliver a suite of imaging radiometric instruments for measuring spatial distributions of chemiluminescence. Envisioned deliverables include instruments working at the microscopic, macroscopic, and life-sized scales. Both laboratory and field portable instruments are envisioned. The project also includes development of phantoms as enclosures for the diazoluminomelanin (DALM) chemiluminescent chemistry. A suite of either phantoms in a variety of typical poses, or phantoms that could be adjusted to a variety of poses, is envisioned. These are to include small mammals (rats), mid-sized mammals (monkeys), and human body parts. A complete human phantom that can be posed is a long-term goal of the development. Taken together, the chemistry and instrumentation provide a means for imaging rf dosimetry based on chemiluminescence induced by the heat resulting from rf energy absorption. The first delivered instrument, the Quantitative Luminescence Imaging System (QLIS), resulted in a patent, and an R&D Magazine 1991 R&D 100 award, recognizing it as one of the 100 most significant technological developments of 1991. The current status of the project is that three systems have been delivered, several related studies have been conducted, two preliminary human hand phantoms have been delivered, system upgrades have been implemented, and calibrations have been maintained. Current development includes sensitivity improvements to the microscope-based system; extension of the large-scale (potentially life-sized targets) system to field portable applications; extension of the 2-D large-scale system to 3-D measurement; imminent delivery of a more refined human hand phantom and a rat phantom; rf, thermal and imaging subsystem integration; and continued calibration and upgrade support.

  8. Global Quantitative Modeling of Chromatin Factor Interactions

    Science.gov (United States)

    Zhou, Jian; Troyanskaya, Olga G.

    2014-01-01

    Chromatin is the driver of gene regulation, yet understanding the molecular interactions underlying chromatin factor combinatorial patterns (or the “chromatin codes”) remains a fundamental challenge in chromatin biology. Here we developed a global modeling framework that leverages chromatin profiling data to produce a systems-level view of the macromolecular complex of chromatin. Our model utilizes maximum entropy modeling with regularization-based structure learning to statistically dissect dependencies between chromatin factors and produce an accurate probability distribution of the chromatin code. Our unsupervised quantitative model, trained on genome-wide chromatin profiles of 73 histone marks and chromatin proteins from modENCODE, enabled making various data-driven inferences about chromatin profiles and interactions. We provided a highly accurate predictor of chromatin factor pairwise interactions validated by known experimental evidence, and for the first time enabled higher-order interaction prediction. Our predictions can thus help guide future experimental studies. The model can also serve as an inference engine for predicting unknown chromatin profiles: we demonstrated that with this approach we can leverage data from well-characterized cell types to help understand less-studied cell types or conditions. PMID:24675896

  9. On the accurate molecular dynamics analysis of biological molecules

    Science.gov (United States)

    Yamashita, Takefumi

    2016-12-01

    As the evolution of computational technology has now enabled long molecular dynamics (MD) simulation, the evaluation of many physical properties shows improved convergence. Therefore, we can examine the detailed conditions of MD simulations and perform quantitative MD analyses. In this study, we address the quantitative and accuracy aspects of MD simulations using two example systems. First, it is found that several conditions of the MD simulations influence the area/lipid of the lipid bilayer. Second, we successfully detect the small but important differences in antibody motion between the antigen-bound and unbound states.

  10. Quantitative tomographic measurements of opaque multiphase flows

    Energy Technology Data Exchange (ETDEWEB)

    GEORGE,DARIN L.; TORCZYNSKI,JOHN R.; SHOLLENBERGER,KIM ANN; O' HERN,TIMOTHY J.; CECCIO,STEVEN L.

    2000-03-01

    An electrical-impedance tomography (EIT) system has been developed for quantitative measurements of radial phase distribution profiles in two-phase and three-phase vertical column flows. The EIT system is described along with the computer algorithm used for reconstructing phase volume fraction profiles. EIT measurements were validated by comparison with a gamma-densitometry tomography (GDT) system. The EIT system was used to accurately measure average solid volume fractions up to 0.05 in solid-liquid flows, and radial gas volume fraction profiles in gas-liquid flows with gas volume fractions up to 0.15. In both flows, average phase volume fractions and radial volume fraction profiles from GDT and EIT were in good agreement. A minor modification to the formula used to relate conductivity data to phase volume fractions was found to improve agreement between the methods. GDT and EIT were then applied together to simultaneously measure the solid, liquid, and gas radial distributions within several vertical three-phase flows. For average solid volume fractions up to 0.30, the gas distribution for each gas flow rate was approximately independent of the amount of solids in the column. Measurements made with this EIT system demonstrate that EIT may be used successfully for noninvasive, quantitative measurements of dispersed multiphase flows.
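    The conversion from measured mixture conductivity to dispersed-phase volume fraction in such EIT systems is commonly done with Maxwell's relation for a non-conducting dispersed phase in a conducting liquid. The sketch below uses that classic form as an illustrative choice; the paper reports a minor modification to the formula it actually used.

```python
def gas_volume_fraction(sigma_mix, sigma_liquid):
    """Maxwell's relation for a non-conducting dispersed phase in a
    conducting liquid: phi = (2 - 2s) / (2 + s), where
    s = sigma_mix / sigma_liquid. Illustrative mixture model only."""
    s = sigma_mix / sigma_liquid
    return (2.0 - 2.0 * s) / (2.0 + s)

# pure liquid (mixture conductivity equals liquid conductivity)
# implies zero dispersed phase:
print(gas_volume_fraction(1.0, 1.0))  # → 0.0
```

    Applying this node-by-node to a reconstructed conductivity image converts the EIT output into the radial volume fraction profiles reported in the paper.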

  11. Quantitative shadowgraphy and proton radiography for large intensity modulations

    CERN Document Server

    Kasim, Muhammad Firmansyah; Ratan, Naren; Sadler, James; Chen, Nicholas; Savert, Alexander; Trines, Raoul; Bingham, Robert; Burrows, Philip N; Kaluza, Malte C; Norreys, Peter

    2016-01-01

    Shadowgraphy is a technique widely used to diagnose objects or systems in various fields in physics and engineering. In shadowgraphy, an optical beam is deflected by the object and then the intensity modulation is captured on a screen placed some distance away. However, retrieving quantitative information from the shadowgrams themselves is a challenging task because of the non-linear nature of the process. Here, a novel method to retrieve quantitative information from shadowgrams, based on computational geometry, is presented for the first time. This process can be applied to proton radiography for electric and magnetic field diagnosis in high-energy-density plasmas and has been benchmarked using a toroidal magnetic field as the object, among others. It is shown that the method can accurately retrieve quantitative parameters with error bars less than 10%, even when caustics are present. The method is also shown to be robust enough to process real experimental results with simple pre- and post-processing techn...

  12. A Quantitative Assessment Approach to COTS Component Security

    Directory of Open Access Journals (Sweden)

    Jinfu Chen

    2013-01-01

    The vulnerability of software components hinders the development of component technology. An effective assessment approach to component security level can promote the development of component technology. Thus, the current paper proposes a quantitative assessment approach to COTS (commercial-off-the-shelf component security. The steps of interface fault injection and the assessment framework are given based on the internal factors of the tested component. The quantitative assessment algorithm and formula of component security level are also presented. The experiment results show that the approach not only can detect component security vulnerabilities effectively but also quantitatively assess the component security level. The score of component security can be accurately calculated, which represents the security level of the tested component.

  13. Spotsizer: High-throughput quantitative analysis of microbial growth

    Science.gov (United States)

    Jeffares, Daniel C.; Arzhaeva, Yulia; Bähler, Jürg

    2017-01-01

    Microbial colony growth can serve as a useful readout in assays for studying complex genetic interactions or the effects of chemical compounds. Although computational tools for acquiring quantitative measurements of microbial colonies have been developed, their utility can be compromised by inflexible input image requirements, non-trivial installation procedures, or complicated operation. Here, we present the Spotsizer software tool for automated colony size measurements in images of robotically arrayed microbial colonies. Spotsizer features a convenient graphical user interface (GUI), has both single-image and batch-processing capabilities, and works with multiple input image formats and different colony grid types. We demonstrate how Spotsizer can be used for high-throughput quantitative analysis of fission yeast growth. The user-friendly Spotsizer tool provides rapid, accurate, and robust quantitative analyses of microbial growth in a high-throughput format. Spotsizer is freely available at https://data.csiro.au/dap/landingpage?pid=csiro:15330 under a proprietary CSIRO license. PMID:27712582

  14. Accurate Non-adiabatic Quantum Dynamics from Pseudospectral Sampling of Time-dependent Gaussian Basis Sets

    CERN Document Server

    Heaps, Charles W

    2016-01-01

    Quantum molecular dynamics requires an accurate representation of the molecular potential energy surface from a minimal number of electronic structure calculations, particularly for nonadiabatic dynamics where excited states are required. In this paper, we employ pseudospectral sampling of time-dependent Gaussian basis functions for the simulation of non-adiabatic dynamics. Unlike other methods, the pseudospectral Gaussian molecular dynamics tests the Schr\\"{o}dinger equation with $N$ Dirac delta functions located at the centers of the Gaussian functions reducing the scaling of potential energy evaluations from $\\mathcal{O}(N^2)$ to $\\mathcal{O}(N)$. By projecting the Gaussian basis onto discrete points in space, the method is capable of efficiently and quantitatively describing nonadiabatic population transfer and intra-surface quantum coherence. We investigate three model systems; the photodissociation of three coupled Morse oscillators, the bound state dynamics of two coupled Morse oscillators, and a two-d...

  15. Accurate titration of avidin and streptavidin with biotin-fluorophore conjugates in complex, colored biofluids.

    Science.gov (United States)

    Gruber, H J; Kada, G; Marek, M; Kaiser, K

    1998-07-23

    A new fluorimetric assay is presented for the specific and reliable quantitation of ≥2 nM avidin and streptavidin. The assay is based on pronounced changes in the fluorescence properties of commercial fluorescein-biotin, or of a newly synthesized biotin-poly(ethylene glycol)-pyrene conjugate, which occur upon binding to avidin and streptavidin. Accurate measurement of (strept)avidin in complex, colored biofluids, such as crude egg white or serum relies on a simple titration protocol. Only occasional recalibration of the reagent solution is required. Due to these merits the proposed assay is particularly suited for rapid measurement of few samples on short notice, for functional control of (strept)avidin-containing reagents after storage, and for the monitoring of (strept)avidin concentrations in large scale processes. Copyright 1998 Elsevier Science B.V. All rights reserved.
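    The titration logic rests on the 4:1 biotin-binding stoichiometry of the (strept)avidin tetramer: the biotin-fluorophore concentration at the fluorescence breakpoint, divided by four, gives the tetramer concentration. A minimal sketch (function and variable names are our own, not from the paper):

```python
def avidin_conc_nm(biotin_breakpoint_nm, sites_per_tetramer=4):
    """(Strept)avidin tetramer concentration from a biotin-fluorophore
    titration breakpoint, assuming four biotin-binding sites per
    tetramer (the standard stoichiometry)."""
    return biotin_breakpoint_nm / sites_per_tetramer

# a fluorescence breakpoint at 8 nM biotin implies 2 nM avidin tetramer:
print(avidin_conc_nm(8.0))  # → 2.0
```

    Because only the breakpoint position matters, not the absolute fluorescence, the readout is robust to the background color of crude biofluids.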

  16. Workshop on quantitative dynamic stratigraphy

    Energy Technology Data Exchange (ETDEWEB)

    Cross, T.A.

    1988-04-01

    This document discusses the development of quantitative simulation models for the investigation of geologic systems. The selection of variables, model verification, evaluation, and future directions in quantitative dynamic stratigraphy (QDS) models are detailed. Interdisciplinary applications, integration, implementation, and transfer of QDS are also discussed. (FI)

  17. Mastering R for quantitative finance

    CERN Document Server

    Berlinger, Edina; Badics, Milán; Banai, Ádám; Daróczi, Gergely; Dömötör, Barbara; Gabler, Gergely; Havran, Dániel; Juhász, Péter; Margitai, István; Márkus, Balázs; Medvegyev, Péter; Molnár, Julia; Szucs, Balázs Árpád; Tuza, Ágnes; Vadász, Tamás; Váradi, Kata; Vidovics-Dancs, Ágnes

    2015-01-01

    This book is intended for those who want to learn how to use R's capabilities to build models in quantitative finance at a more advanced level. If you wish to perfectly take up the rhythm of the chapters, you need to be at an intermediate level in quantitative finance and you also need to have a reasonable knowledge of R.

  18. Understanding quantitative research: part 2

    OpenAIRE

    Hoare, Z.; Hoe, J.

    2013-01-01

    This article, which is the second in a two-part series, provides an introduction to understanding quantitative research, basic statistics and terminology used in research articles. Understanding statistical analysis will ensure that nurses can assess the credibility and significance of the evidence reported. This article focuses on explaining common statistical terms and the presentation of statistical data in quantitative research.

  19. Accurate Period Approximation for Any Simple Pendulum Amplitude

    Institute of Scientific and Technical Information of China (English)

    XUE De-Sheng; ZHOU Zhao; GAO Mei-Zhen

    2012-01-01

    Accurate approximate analytical formulae of the pendulum period, composed of a few elementary functions, are constructed for any amplitude. Based on an approximation of the elliptic integral, two new logarithmic formulae for large amplitude close to 180° are obtained. Considering the trigonometric function modulation that results from the dependence of relative error on the amplitude, we realize accurate approximate period expressions for any amplitude between 0 and 180°. A relative error less than 0.02% is achieved for any amplitude. This kind of modulation is also effective for other large-amplitude logarithmic approximation expressions.
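The exact ratio that such formulae approximate, T(θ₀)/T₀ = (2/π)·K(sin(θ₀/2)), can be evaluated to machine precision with the arithmetic–geometric mean, since K(k) = π / (2·AGM(1, √(1−k²))). A minimal Python sketch of that exact reference value (this is the standard AGM identity, not the paper's logarithmic formulae, which are not reproduced in this record):

```python
import math

def agm(a, g, tol=1e-15):
    """Arithmetic-geometric mean of a and g."""
    while abs(a - g) > tol:
        a, g = (a + g) / 2, math.sqrt(a * g)
    return a

def period_ratio(theta0):
    """Exact T(theta0)/T0 for a simple pendulum, amplitude theta0 in radians.

    Uses T/T0 = (2/pi) * K(sin(theta0/2)) together with the identity
    K(k) = pi / (2 * AGM(1, sqrt(1 - k**2))), which collapses to
    T/T0 = 1 / AGM(1, cos(theta0/2)).
    """
    return 1.0 / agm(1.0, math.cos(theta0 / 2))

print(period_ratio(1e-3))          # -> ~1.0 (small-angle limit)
print(period_ratio(math.pi / 2))   # -> ~1.1803, the classic correction at 90 deg
```

The ratio diverges as the amplitude approaches 180°, which is why the large-amplitude regime needs the logarithmic formulae the paper constructs.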

  20. On accurate boundary conditions for a shape sensitivity equation method

    Science.gov (United States)

    Duvigneau, R.; Pelletier, D.

    2006-01-01

    This paper studies the application of the continuous sensitivity equation method (CSEM) for the Navier-Stokes equations in the particular case of shape parameters. Boundary conditions for shape parameters involve flow derivatives at the boundary. Thus, accurate flow gradients are critical to the success of the CSEM. A new approach is presented to extract accurate flow derivatives at the boundary. High order Taylor series expansions are used on layered patches in conjunction with a constrained least-squares procedure to evaluate accurate first and second derivatives of the flow variables at the boundary, required for Dirichlet and Neumann sensitivity boundary conditions. The flow and sensitivity fields are solved using an adaptive finite-element method. The proposed methodology is first verified on a problem with a closed form solution obtained by the Method of Manufactured Solutions. The ability of the proposed method to provide accurate sensitivity fields for realistic problems is then demonstrated. The flow and sensitivity fields for a NACA 0012 airfoil are used for fast evaluation of the nearby flow over an airfoil of different thickness (NACA 0015).

  1. A Simple and Accurate Method for Measuring Enzyme Activity.

    Science.gov (United States)

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  2. The value of accurate A/R information.

    Science.gov (United States)

    Freeman, G; Allcorn, S

    1985-01-01

    The understanding and management of an accounts receivable system in a medical group practice is particularly important to administrators in today's economy. As the authors explain, an accurate information system can provide the medical group with valuable information regarding its financial condition and cash flow collections status, as well as help plan for future funding needs.

  3. Technique to accurately quantify collagen content in hyperconfluent cell culture.

    Science.gov (United States)

    See, Eugene Yong-Shun; Toh, Siew Lok; Goh, James Cho Hong

    2008-12-01

    Tissue engineering aims to regenerate tissues that can successfully take over the functions of the native tissue when it is damaged or diseased. In most tissues, collagen makes up the bulk component of the extracellular matrix, thus, there is great emphasis on its accurate quantification in tissue engineering. It has already been reported that pepsin digestion is able to solubilize the collagen deposited within the cell layer for accurate quantification of collagen content in cultures, but this method has drawbacks when cultured cells are hyperconfluent. In this condition, pepsin digestion will result in fragments of the cell layers that cannot be completely resolved. These fragments of the undigested cell sheet are visible to the naked eye, which can bias the final results. To the best of our knowledge, there has been no reported method to accurately quantify the collagen content in hyperconfluent cell sheet. Therefore, this study aims to illustrate that sonication is able to aid pepsin digestion of hyperconfluent cell layers of fibroblasts and bone marrow mesenchymal stem cells, to solubilize all the collagen for accurate quantification purposes.

  4. A Simple and Accurate Method for Measuring Enzyme Activity.

    Science.gov (United States)

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  5. On a more accurate Hardy-Mulholland-type inequality

    Directory of Open Access Journals (Sweden)

    Bicheng Yang

    2016-03-01

    Full Text Available By using weight coefficients, technique of real analysis, and Hermite-Hadamard’s inequality, we give a more accurate Hardy-Mulholland-type inequality with multiparameters and a best possible constant factor related to the beta function. The equivalent forms, the reverses, the operator expressions, and some particular cases are also considered.

  6. Improved fingercode alignment for accurate and compact fingerprint recognition

    CSIR Research Space (South Africa)

    Brown, Dane

    2016-05-01

    Full Text Available The traditional texture-based fingerprint recognition system known as FingerCode is improved in this work. Texture-based fingerprint recognition methods are generally more accurate than other methods, but at the disadvantage of increased storage...

  7. Accurate analysis of planar metamaterials using the RLC theory

    DEFF Research Database (Denmark)

    Malureanu, Radu; Lavrinenko, Andrei

    2008-01-01

    In this work we will present an accurate description of metallic pads response using RLC theory. In order to calculate such response we take into account several factors including the mutual inductances, precise formula for determining the capacitance and also the pads’ resistance considering the...

  8. $H_{2}^{+}$ ion in strong magnetic field an accurate calculation

    CERN Document Server

    López, J C; Turbiner, A V

    1997-01-01

    Using a unique trial function we perform an accurate calculation of the ground state $1\\sigma_g$ of the hydrogenic molecular ion $H^+_2$ in a constant uniform magnetic field ranging $0-10^{13}$ G. We show that this trial function also makes it possible to study the negative parity ground state $1\\sigma_u$.

  9. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  10. Accurate and Simple Calibration of DLP Projector Systems

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods...

  11. Accurate segmentation of dense nanoparticles by partially discrete electron tomography

    Energy Technology Data Exchange (ETDEWEB)

    Roelandts, T., E-mail: tom.roelandts@ua.ac.be [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Batenburg, K.J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium); Centrum Wiskunde and Informatica, Science Park 123, 1098 XG Amsterdam (Netherlands); Biermans, E. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Kuebel, C. [Institute of Nanotechnology, Karlsruhe Institute of Technology, Hermann-von-Helmholtz-Platz 1, 76344 Eggenstein-Leopoldshafen (Germany); Bals, S. [EMAT, University of Antwerp, Groenenborgerlaan 171, 2020 Antwerp (Belgium); Sijbers, J. [IBBT-Vision Lab University of Antwerp, Universiteitsplein 1, 2610 Wilrijk (Belgium)

    2012-03-15

    Accurate segmentation of nanoparticles within various matrix materials is a difficult problem in electron tomography. Due to artifacts related to image series acquisition and reconstruction, global thresholding of reconstructions computed by established algorithms, such as weighted backprojection or SIRT, may result in unreliable and subjective segmentations. In this paper, we introduce the Partially Discrete Algebraic Reconstruction Technique (PDART) for computing accurate segmentations of dense nanoparticles of constant composition. The particles are segmented directly by the reconstruction algorithm, while the surrounding regions are reconstructed using continuously varying gray levels. As no properties are assumed for the other compositions of the sample, the technique can be applied to any sample where dense nanoparticles must be segmented, regardless of the surrounding compositions. For both experimental and simulated data, it is shown that PDART yields significantly more accurate segmentations than those obtained by optimal global thresholding of the SIRT reconstruction. -- Highlights: • We present a novel reconstruction method for partially discrete electron tomography. • It accurately segments dense nanoparticles directly during reconstruction. • The gray level to use for the nanoparticles is determined objectively. • The method expands the set of samples for which discrete tomography can be applied.

  12. Fast, Accurate and Detailed NoC Simulations

    NARCIS (Netherlands)

    Wolkotte, P.T.; Hölzenspies, P.K.F.; Smit, G.J.M.; Kellenberger, P.

    2007-01-01

    Network-on-Chip (NoC) architectures have a wide variety of parameters that can be adapted to the designer's requirements. Fast exploration of this parameter space is only possible at a high-level and several methods have been proposed. Cycle and bit accurate simulation is necessary when the actual r

  13. Novel multi-beam radiometers for accurate ocean surveillance

    DEFF Research Database (Denmark)

    Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.

    2014-01-01

    Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions...

  14. Practical schemes for accurate forces in quantum Monte Carlo

    NARCIS (Netherlands)

    Moroni, S.; Saccani, S.; Filippi, Claudia

    2014-01-01

    While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of

  15. Accurate eye center location through invariant isocentric patterns

    NARCIS (Netherlands)

    Valenti, R.; Gevers, T.

    2012-01-01

    Locating the center of the eyes allows for valuable information to be captured and used in a wide range of applications. Accurate eye center location can be determined using commercial eye-gaze trackers, but additional constraints and expensive hardware make these existing solutions unattractive and

  16. Creating a Culture of Accurate and Precise Data.

    Science.gov (United States)

    Bergren, Martha Dewey; Maughan, Erin D; Johnson, Kathleen H; Wolfe, Linda C; Watts, H Estelle S; Cole, Marjorie

    2017-01-01

    There are many stakeholders for school health data. Each one has a stake in the quality and accuracy of the health data collected and reported in schools. The joint NASN and NASSNC national school nurse data set initiative, Step Up & Be Counted!, heightens the need to assure accurate and precise data. The use of a standardized terminology allows the data on school health care delivered in local schools to be aggregated for use at the local, state, and national levels. The use of uniform terminology demands that data elements be defined and that accurate and reliable data are entered into the database. Barriers to accurate data are misunderstanding of accurate data needs, student caseloads that exceed the national recommendations, lack of electronic student health records, and electronic student health records that do not collect the indicators using the standardized terminology or definitions. The quality of the data that school nurses report and share has an impact at the personal, district, state, and national levels and influences the confidence and quality of the decisions made using that data.

  17. Modeling Battery Behavior for Accurate State-of-Charge Indication

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Veld, op het J.H.G.; Regtien, P.P.L.; Danilov, D.; Notten, P.H.L.

    2006-01-01

    Li-ion is the most commonly used battery chemistry in portable applications nowadays. Accurate state-of-charge (SOC) and remaining run-time indication for portable devices is important for the user's convenience and to prolong the lifetime of batteries. A new SOC indication system, combining the ele

  18. Compact and Accurate Turbocharger Modelling for Engine Control

    DEFF Research Database (Denmark)

    Sorenson, Spencer C; Hendricks, Elbert; Magnússon, Sigurjón

    2005-01-01

    (Engine Control Unit) as a table. This method uses a great deal of memory space and often requires on-line interpolation and thus a large amount of CPU time. In this paper a more compact, accurate and rapid method of dealing with the compressor modelling problem is presented and is applicable to all...

  19. Speed-of-sound compensated photoacoustic tomography for accurate imaging

    NARCIS (Netherlands)

    Jose, J.; Willemink, G.H.; Steenbergen, W.; Leeuwen, van A.G.J.M.; Manohar, S.

    2012-01-01

    Purpose: In most photoacoustic (PA) tomographic reconstructions, variations in speed-of-sound (SOS) of the subject are neglected under the assumption of acoustic homogeneity. Biological tissue with spatially heterogeneous SOS cannot be accurately reconstructed under this assumption. The authors pres

  20. BIOACCESSIBILITY TESTS ACCURATELY ESTIMATE BIOAVAILABILITY OF LEAD TO QUAIL

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contami...

  1. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  2. Dynamic weighing for accurate fertilizer application and monitoring

    NARCIS (Netherlands)

    Bergeijk, van J.; Goense, D.; Willigenburg, van L.G.; Speelman, L.

    2001-01-01

    The mass flow of fertilizer spreaders must be calibrated for the different types of fertilizers used. To obtain accurate fertilizer application, manual calibration of the actual mass flow must be repeated frequently. Automatic calibration is possible by measurement of the actual mass flow, based on

  3. A Self-Instructional Device for Conditioning Accurate Prosody.

    Science.gov (United States)

    Buiten, Roger; Lane, Harlan

    1965-01-01

    A self-instructional device for conditioning accurate prosody in second-language learning is described in this article. The Speech Auto-Instructional Device (SAID) is electro-mechanical and performs three functions: SAID (1) presents to the student tape-recorded pattern sentences that are considered standards in prosodic performance; (2) processes…

  4. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Science.gov (United States)

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb, we incorporated Pb-contaminated soils or Pb acetate into diets for Japanese quail (Coturnix japonica), fed the quail for 15 days, and ...

  5. Practical schemes for accurate forces in quantum Monte Carlo

    NARCIS (Netherlands)

    Moroni, S.; Saccani, S.; Filippi, C.

    2014-01-01

    While the computation of interatomic forces has become a well-established practice within variational Monte Carlo (VMC), the use of the more accurate Fixed-Node Diffusion Monte Carlo (DMC) method is still largely limited to the computation of total energies on structures obtained at a lower level of

  6. Fast and Accurate Residential Fire Detection Using Wireless Sensor Networks

    NARCIS (Netherlands)

    Bahrepour, M.; Meratnia, Nirvana; Havinga, Paul J.M.

    2010-01-01

    Prompt and accurate residential fire detection is important for on-time fire extinguishing and consequently reducing damages and life losses. To detect fire sensors are needed to measure the environmental parameters and algorithms are required to decide about occurrence of fire. Recently, wireless

  7. Accurate quantification of microRNA via single strand displacement reaction on DNA origami motif.

    Directory of Open Access Journals (Sweden)

    Jie Zhu

    Full Text Available DNA origami is an emerging technology that assembles hundreds of staple strands and one single-strand DNA into certain nanopattern. It has been widely used in various fields including detection of biological molecules such as DNA, RNA and proteins. MicroRNAs (miRNAs) play important roles in post-transcriptional gene repression as well as many other biological processes such as cell growth and differentiation. Alterations of miRNAs' expression contribute to many human diseases. However, it is still a challenge to quantitatively detect miRNAs by origami technology. In this study, we developed a novel approach based on streptavidin and quantum dots binding complex (STV-QDs) labeled single strand displacement reaction on DNA origami to quantitatively detect the concentration of miRNAs. We illustrated a linear relationship between the concentration of an exemplary miRNA as miRNA-133 and the STV-QDs hybridization efficiency; the results demonstrated that it is an accurate nano-scale miRNA quantifier motif. In addition, both symmetrical rectangular motif and asymmetrical China-map motif were tested. With significant linearity in both motifs, our experiments suggested that DNA Origami motif with arbitrary shape can be utilized in this method. Since this DNA origami-based method we developed owns the unique advantages of simple, time-and-material-saving, potentially multi-targets testing in one motif and relatively accurate for certain impurity samples as counted directly by atomic force microscopy rather than fluorescence signal detection, it may be widely used in quantification of miRNAs.

  8. Accurate Quantification of microRNA via Single Strand Displacement Reaction on DNA Origami Motif

    Science.gov (United States)

    Lou, Jingyu; Li, Weidong; Li, Sheng; Zhu, Hongxin; Yang, Lun; Zhang, Aiping; He, Lin; Li, Can

    2013-01-01

    DNA origami is an emerging technology that assembles hundreds of staple strands and one single-strand DNA into certain nanopattern. It has been widely used in various fields including detection of biological molecules such as DNA, RNA and proteins. MicroRNAs (miRNAs) play important roles in post-transcriptional gene repression as well as many other biological processes such as cell growth and differentiation. Alterations of miRNAs' expression contribute to many human diseases. However, it is still a challenge to quantitatively detect miRNAs by origami technology. In this study, we developed a novel approach based on streptavidin and quantum dots binding complex (STV-QDs) labeled single strand displacement reaction on DNA origami to quantitatively detect the concentration of miRNAs. We illustrated a linear relationship between the concentration of an exemplary miRNA as miRNA-133 and the STV-QDs hybridization efficiency; the results demonstrated that it is an accurate nano-scale miRNA quantifier motif. In addition, both symmetrical rectangular motif and asymmetrical China-map motif were tested. With significant linearity in both motifs, our experiments suggested that DNA Origami motif with arbitrary shape can be utilized in this method. Since this DNA origami-based method we developed owns the unique advantages of simple, time-and-material-saving, potentially multi-targets testing in one motif and relatively accurate for certain impurity samples as counted directly by atomic force microscopy rather than fluorescence signal detection, it may be widely used in quantification of miRNAs. PMID:23990889
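The linear relationship reported between miRNA concentration and STV-QDs hybridization efficiency amounts to a standard linear calibration: fit a line to known standards, then invert it to read out an unknown sample. A minimal sketch with invented numbers (the concentrations, efficiencies, and the unknown below are illustrative, not data from the paper):

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = slope * x + intercept."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical calibration standards: concentration (nM) vs. hybridization
# efficiency counted by AFM. All values are invented for illustration.
conc = [0.0, 2.0, 4.0, 6.0, 8.0]
eff = [0.02, 0.21, 0.39, 0.61, 0.80]

slope, intercept = fit_line(conc, eff)

def concentration_from_efficiency(e):
    """Invert the calibration line to read out an unknown sample."""
    return (e - intercept) / slope

print(concentration_from_efficiency(0.50))  # ~5 nM on this fake curve
```

The paper's point is that the efficiency is obtained by directly counting displaced sites on the origami by AFM, so the x-y pairs come from image counts rather than fluorescence intensities; the arithmetic of the calibration itself is unchanged.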

  9. On the importance of having accurate data for astrophysical modelling

    Science.gov (United States)

    Lique, Francois

    2016-06-01

    The Herschel telescope and the ALMA and NOEMA interferometers have opened new windows of observation for wavelengths ranging from the far infrared to the sub-millimeter, with spatial and spectral resolutions previously unmatched. To make the most of these observations, an accurate knowledge of the physical and chemical processes occurring in the interstellar and circumstellar media is essential. In this presentation, I will discuss the current needs of astrophysics in terms of molecular data and show that accurate molecular data are crucial for the proper determination of the physical conditions in molecular clouds. First, I will focus on collisional excitation studies that are needed for molecular line modelling beyond the Local Thermodynamic Equilibrium (LTE) approach. In particular, I will show how new collisional data for the HCN and HNC isomers, two tracers of star-forming conditions, have allowed solving the problem of their respective abundances in cold molecular clouds. I will also present the most recent collisional data that have been computed in order to analyse new highly resolved observations provided by the ALMA interferometer. Then, I will present the calculation of accurate rate constants for the F+H2 → HF+H and Cl+H2 ↔ HCl+H reactions, which have allowed a more accurate determination of the physical conditions in diffuse molecular clouds. I will also present recent work on ortho-para-H2 conversion due to hydrogen exchange, which allows a more accurate determination of the ortho-to-para H2 ratio in the universe and implies a significant revision of the cooling mechanism in astrophysical media.

  10. The use of real-time quantitative PCR for the analysis of cytokine mRNA levels

    NARCIS (Netherlands)

    Forlenza, M.; Kaiser, T.; Savelkoul, H.F.J.; Wiegertjes, G.F.

    2012-01-01

    Over the last decade, real-time-quantitative PCR (RT-qPCR) analysis has become the method of choice not only for quantitative and accurate measurement of mRNA expression levels, but also for sensitive detection of rare or mutated DNA species in diagnostic research. RT-qPCR is based on the standard p
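The record is truncated, but the arithmetic behind most RT-qPCR relative quantification of cytokine mRNA is the standard 2^(−ΔΔCt) (Livak) method, which normalizes the target gene's Ct against a reference gene and a control condition. A minimal sketch (the Ct values are invented, and ΔΔCt is the generic approach, not a method attributed to this particular paper):

```python
def fold_change(ct_target_sample, ct_ref_sample, ct_target_ctrl, ct_ref_ctrl):
    """Relative expression by the 2^-ddCt (Livak) method.

    Assumes ~100% amplification efficiency for both the target gene and
    the reference (housekeeping) gene.
    """
    d_ct_sample = ct_target_sample - ct_ref_sample   # normalize to reference gene
    d_ct_ctrl = ct_target_ctrl - ct_ref_ctrl         # same for the control condition
    return 2.0 ** -(d_ct_sample - d_ct_ctrl)         # 2^-(ddCt)

# Invented example: a cytokine crosses threshold 3 cycles earlier (relative
# to the reference gene) in stimulated cells than in controls.
print(fold_change(22.0, 18.0, 25.0, 18.0))  # -> 8.0, an ~8-fold induction
```

When amplification efficiencies differ appreciably between target and reference, efficiency-corrected models (e.g. the Pfaffl method) are used instead of the fixed base of 2.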

  11. Dynamic windowing algorithm for the fast and accurate determination of luminescence lifetimes.

    Science.gov (United States)

    Collier, Bradley B; McShane, Michael J

    2012-06-05

    An algorithm for the accurate calculation of luminescence lifetimes in near-real-time is described. The dynamic rapid lifetime determination (DRLD) method uses a window-summing technique and dynamically selects the appropriate window width for each lifetime decay such that a large range of lifetimes can be accurately calculated. The selection of window width is based on an optimal range of window-sum ratios. The algorithm was compared to alternative approaches for rapid lifetime determination as well as nonlinear least-squares (NLLS) fitting in both simulated and real experimental conditions. A palladium porphyrin was used as a model luminophore to quantitatively evaluate the algorithm in a dynamic situation, where oxygen concentration was modulated to induce a change in lifetime. Unlike other window-summing techniques, the new algorithm calculates lifetimes that are not significantly different than the slower, traditional NLLS. In addition, the computation time required to calculate the lifetime is 4 orders of magnitude less than NLLS and 2 orders less than other iterative methods. This advance will improve the accuracy of real-time measurements that must be made on samples that are expected to exhibit widely varying lifetimes, such as sensors and biosensors.
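The window-summing step that DRLD builds on is the classic two-window rapid lifetime determination: for two contiguous windows of width w over a single-exponential decay, the sums obey D1/D0 = exp(−w/τ), so τ = w / ln(D0/D1). A minimal sketch of that two-window estimator (the dynamic window-width selection that distinguishes DRLD is not reproduced here):

```python
import math

def rld_lifetime(samples, dt, w_samples):
    """Two-window rapid lifetime determination.

    samples: decay intensities at uniform spacing dt
    w_samples: samples per window; the two windows are contiguous.
    Returns tau = w / ln(D0 / D1), where D0 and D1 are the window sums.
    """
    d0 = sum(samples[:w_samples])
    d1 = sum(samples[w_samples:2 * w_samples])
    w = w_samples * dt
    return w / math.log(d0 / d1)

# Synthetic noiseless decay with tau = 40 us, sampled every 1 us.
tau_true = 40.0
dt = 1.0
decay = [math.exp(-i * dt / tau_true) for i in range(200)]
print(rld_lifetime(decay, dt, 50))  # recovers 40.0 up to rounding
```

The window-width sensitivity the paper addresses is visible here: a fixed w that is tiny or huge relative to τ makes the ratio D1/D0 approach 1 or 0, amplifying noise, which is why DRLD selects w dynamically to keep the ratio in an optimal range.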

  12. Accurate thermodynamic relations of the melting temperature of nanocrystals with different shapes and pure theoretical calculation

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Jinhua; Fu, Qingshan; Xue, Yongqiang, E-mail: xyqlw@126.com; Cui, Zixiang

    2017-05-01

    Based on the surface pre-melting model, accurate thermodynamic relations for the melting temperature of nanocrystals with different shapes (tetrahedron, cube, octahedron, dodecahedron, icosahedron, nanowire) were derived. The theoretically calculated melting temperatures are in relatively good agreement with experimental, molecular dynamics simulation, and other theoretical results for nanometer Au, Ag, Al, In and Pb. It is found that the particle size and shape have notable effects on the melting temperature of nanocrystals, and the smaller the particle size, the greater the effect of shape. Furthermore, at the same equivalent radius, the more the shape deviates from a sphere, the lower the melting temperature is. The melting temperature depression of a cylindrical nanowire is just half of that of a spherical nanoparticle with an identical radius. The theoretical relations enable one to quantitatively describe the influence of size and shape on the melting temperature and to provide an effective way to predict and interpret the melting temperature of nanocrystals with different sizes and shapes. - Highlights: • Accurate relations for T_m of nanocrystals with various shapes are derived. • Calculated T_m agree with literature results for nano Au, Ag, Al, In and Pb. • ΔT_m (nanowire) = 0.5 ΔT_m (spherical nanocrystal). • The relations apply to predicting and interpreting the melting behaviors of nanocrystals.
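The nanowire-versus-sphere factor of one half follows from a Gibbs–Thomson-type size dependence, ΔT_m = c·f/r, where the geometric factor f is 2 for a sphere and 1 for a cylinder. A minimal illustrative sketch (the constant c and the radii are arbitrary; this reproduces only the scaling, not the paper's full shape-dependent relations):

```python
# Gibbs-Thomson-type geometric factors: 2 for a sphere, 1 for a cylinder.
SHAPE_FACTOR = {"sphere": 2.0, "cylinder": 1.0}

def melting_depression(shape, radius_nm, c=50.0):
    """Size-dependent melting-point depression dT = c * f / r (illustrative).

    c lumps together surface energy, molar volume and melting enthalpy;
    the value 50.0 K*nm is arbitrary, chosen only to exhibit the scaling.
    """
    return c * SHAPE_FACTOR[shape] / radius_nm

r = 5.0  # nm, same equivalent radius for both shapes
ratio = melting_depression("cylinder", r) / melting_depression("sphere", r)
print(ratio)  # -> 0.5, the nanowire/sphere factor from the highlights
```

The 1/r form also shows why shape effects grow as particles shrink: the absolute difference between any two shape factors is multiplied by c/r.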

  13. Accurate phosphoregulation of kinetochore–microtubule affinity requires unconstrained molecular interactions

    Science.gov (United States)

    Zaytsev, Anatoly V.; Sundin, Lynsie J.R.; DeLuca, Keith F.

    2014-01-01

    Accurate chromosome segregation relies on dynamic interactions between microtubules (MTs) and the NDC80 complex, a major kinetochore MT-binding component. Phosphorylation at multiple residues of its Hec1 subunit may tune kinetochore–MT binding affinity for diverse mitotic functions, but molecular details of such phosphoregulation remain elusive. Using quantitative analyses of mitotic progression in mammalian cells, we show that Hec1 phosphorylation provides graded control of kinetochore–MT affinity. In contrast, modeling the kinetochore interface with repetitive MT binding sites predicts a switchlike response. To reconcile these findings, we hypothesize that interactions between NDC80 complexes and MTs are not constrained, i.e., the NDC80 complexes can alternate their binding between adjacent kinetochore MTs. Experiments using cells with phosphomimetic Hec1 mutants corroborate predictions of such a model but not of the repetitive sites model. We propose that accurate regulation of kinetochore–MT affinity is driven by incremental phosphorylation of an NDC80 molecular “lawn,” in which the NDC80–MT bonds reorganize dynamically in response to the number and stability of MT attachments. PMID:24982430

  14. DNA barcode data accurately assign higher spider taxa

    Directory of Open Access Journals (Sweden)

    Jonathan A. Coddington

    2016-07-01

    Full Text Available The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best case scenarios “barcodes” (whether single or multiple, organelle or nuclear loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families—taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus and family-level assignment. We used BLAST queries of each sequence against the entire library and got the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and families at PIdent values ≥ 91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with numbers of species/genus and genera/family in the library; above five genera per family and fifteen species per genus all higher taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However
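The heuristic thresholds reported above translate directly into a simple assignment rule on the best-hit percent identity. A minimal sketch (the cutoffs >95 for genus and ≥91 for family come from the abstract; the function name and the fallback label are mine):

```python
def assign_rank(pident):
    """Most specific reliable rank for a best-hit percent identity,
    using the spider CO1 thresholds reported in the study:
    PIdent > 95 -> genus, PIdent >= 91 -> family, otherwise unassigned."""
    if pident > 95.0:
        return "genus"
    if pident >= 91.0:
        return "family"
    return "unassigned"

for p in (97.2, 93.0, 86.5):
    print(p, "->", assign_rank(p))
```

In practice the PIdent values would come from the top BLAST hits of the query sequence against the reference library; the rule above is only the final thresholding step.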

  15. Quantitative approaches in developmental biology.

    Science.gov (United States)

    Oates, Andrew C; Gorfinkiel, Nicole; González-Gaitán, Marcos; Heisenberg, Carl-Philipp

    2009-08-01

    The tissues of a developing embryo are simultaneously patterned, moved and differentiated according to an exchange of information between their constituent cells. We argue that these complex self-organizing phenomena can only be fully understood with quantitative mathematical frameworks that allow specific hypotheses to be formulated and tested. The quantitative and dynamic imaging of growing embryos at the molecular, cellular and tissue level is the key experimental advance required to achieve this interaction between theory and experiment. Here we describe how mathematical modelling has become an invaluable method to integrate quantitative biological information across temporal and spatial scales, serving to connect the activity of regulatory molecules with the morphological development of organisms.

  16. Understanding quantitative research: part 1.

    Science.gov (United States)

    Hoe, Juanita; Hoare, Zoë

    This article, which is the first in a two-part series, provides an introduction to understanding quantitative research, basic statistics and terminology used in research articles. Critical appraisal of research articles is essential to ensure that nurses remain up to date with evidence-based practice to provide consistent and high-quality nursing care. This article focuses on developing critical appraisal skills and understanding the use and implications of different quantitative approaches to research. Part two of this article will focus on explaining common statistical terms and the presentation of statistical data in quantitative research.

  17. Quantitative vs qualitative research methods.

    Science.gov (United States)

    Lakshman, M; Sinha, L; Biswas, M; Charles, M; Arora, N K

    2000-05-01

Quantitative methods have been widely used because things that can be measured or counted gain scientific credibility over the unmeasurable. But the extent of biological abnormality, severity, consequences and the impact of illness cannot be satisfactorily captured and answered by quantitative research alone. In such situations, qualitative methods take a holistic perspective, preserving the complexities of human behavior by addressing the "why" and "how" questions. In this paper an attempt has been made to highlight the strengths and weaknesses of both methods, and to show that a balanced mix of qualitative and quantitative methods yields the most valid and reliable results.

  18. Accurate parameter estimation for unbalanced three-phase system.

    Science.gov (United States)

    Chen, Yuan; So, Hing Cheung

    2014-01-01

Smart grid is an intelligent power generation and control console in modern electricity networks, where the unbalanced three-phase power system is the commonly used model. Here, parameter estimation for this system is addressed. After converting the three-phase waveforms into a pair of orthogonal signals via the αβ-transformation, the nonlinear least squares (NLS) estimator is developed for accurately finding the frequency, phase, and voltage parameters. The estimator is realized by the Newton-Raphson scheme, whose global convergence is studied in this paper. Computer simulations show that the mean square error performance of the NLS method can attain the Cramér-Rao lower bound. Moreover, our proposal provides more accurate frequency estimation when compared with the complex least mean square (CLMS) and augmented CLMS.
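The αβ-transformation that precedes the estimation step is straightforward to sketch. The following is a simplified stand-in, assuming a balanced system, that estimates frequency from the mean phase increment of the α + jβ signal; it is an illustration of the signal model, not the paper's full NLS/Newton-Raphson estimator:

```python
import cmath
import math

def clarke(a, b, c):
    """Amplitude-invariant alpha-beta (Clarke) transform of one
    three-phase sample triple."""
    alpha = (2.0 * a - b - c) / 3.0
    beta = (b - c) / math.sqrt(3.0)
    return alpha, beta

def estimate_frequency(samples_abc, fs):
    """Estimate the fundamental frequency (Hz) from three-phase samples
    taken at rate fs, using the mean phase increment of alpha + j*beta
    (a simplified stand-in for the paper's NLS estimator)."""
    z = [complex(*clarke(a, b, c)) for a, b, c in samples_abc]
    # Phase increment between consecutive samples, wrapped to (-pi, pi].
    dphi = [cmath.phase(z[n + 1] * z[n].conjugate()) for n in range(len(z) - 1)]
    omega = sum(dphi) / len(dphi)        # mean angular increment per sample
    return omega * fs / (2.0 * math.pi)  # convert rad/sample to Hz
```

For a balanced set a = cos θ, b = cos(θ − 2π/3), c = cos(θ + 2π/3), the transform yields exactly α = cos θ, β = sin θ, so z traces the unit circle at the fundamental frequency.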

  19. Symmetric Uniformly Accurate Gauss-Runge-Kutta Method

    Directory of Open Access Journals (Sweden)

    Dauda G. YAKUBU

    2007-08-01

Full Text Available Symmetric methods are particularly attractive for solving stiff ordinary differential equations. In this paper, by selecting Gauss points for both interpolation and collocation, we derive a high-order symmetric single-step Gauss-Runge-Kutta collocation method for the accurate solution of ordinary differential equations. The resulting symmetric method with continuous coefficients is evaluated at a block of points to yield the proposed block method for the accurate solution of ordinary differential equations. More interestingly, the block method is self-starting, has an adequate absolute stability interval, and is capable of simultaneously producing a dense approximation to the solution of ordinary differential equations at a block of points. The use of this method leads to a maximal gain in efficiency as well as minimal function evaluations per step.

  20. Accurate measurement of the helical twisting power of chiral dopants

    Science.gov (United States)

    Kosa, Tamas; Bodnar, Volodymyr; Taheri, Bahman; Palffy-Muhoray, Peter

    2002-03-01

    We propose a method for the accurate determination of the helical twisting power (HTP) of chiral dopants. In the usual Cano-wedge method, the wedge angle is determined from the far-field separation of laser beams reflected from the windows of the test cell. Here we propose to use an optical fiber based spectrometer to accurately measure the cell thickness. Knowing the cell thickness at the positions of the disclination lines allows determination of the HTP. We show that this extension of the Cano-wedge method greatly increases the accuracy with which the HTP is determined. We show the usefulness of this method by determining the HTP of ZLI811 in a variety of hosts with negative dielectric anisotropy.
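Under the common assumption that successive Grandjean-Cano disclination lines in a wedge cell correspond to thickness steps of half a pitch, the HTP follows directly from the measured cell thicknesses. A hypothetical sketch (the definition HTP = 1/(pitch × concentration) is the conventional one and may differ in sign or units convention from the one used by the authors):

```python
def helical_twisting_power(thicknesses_um, concentration):
    """Estimate HTP (in 1/um per weight fraction) from cell thicknesses
    (um) measured at successive disclination lines, assuming adjacent
    lines correspond to a thickness step of half a pitch."""
    steps = [t2 - t1 for t1, t2 in zip(thicknesses_um, thicknesses_um[1:])]
    half_pitch = sum(steps) / len(steps)  # average p/2 over all line pairs
    pitch = 2.0 * half_pitch
    return 1.0 / (pitch * concentration)
```

Averaging the steps over several line pairs is what the spectrometer-based thickness measurement makes accurate: each thickness is measured directly rather than inferred from a far-field wedge angle.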

  1. Accurate multireference study of Si3 electronic manifold

    CERN Document Server

    Goncalves, Cayo Emilio Monteiro; Braga, Joao Pedro

    2016-01-01

    Since it has been shown that the silicon trimer has a highly multi-reference character, accurate multi-reference configuration interaction calculations are performed to elucidate its electronic manifold. Emphasis is given to the long range part of the potential, aiming to understand the atom-diatom collisions dynamical aspects, to describe conical intersections and important saddle points along the reactive path. Potential energy surface main features analysis are performed for benchmarking, and highly accurate values for structures, vibrational constants and energy gaps are reported, as well as the unpublished spin-orbit coupling magnitude. The results predict that inter-system crossings will play an important role in dynamical simulations, specially in triplet state quenching, making the problem of constructing a precise potential energy surface more complicated and multi-layer dependent. The ground state is predicted to be the singlet one, but since the singlet-triplet gap is rather small (2.448 kJ/mol) bo...

  2. Efficient and Accurate Robustness Estimation for Large Complex Networks

    CERN Document Server

    Wandelt, Sebastian

    2016-01-01

Robustness estimation is critical for the design and maintenance of resilient networks, one of the global challenges of the 21st century. Existing studies exploit network metrics to generate attack strategies, which simulate intentional attacks in a network, and compute a metric-induced robustness estimation. While some metrics are easy to compute, e.g. degree centrality, other, more accurate, metrics require considerable computation efforts, e.g. betweenness centrality. We propose a new algorithm for estimating the robustness of a network in sub-quadratic time, i.e., significantly faster than betweenness centrality. Experiments on real-world networks and random networks show that our algorithm estimates the robustness of networks close to or even better than betweenness centrality, while being orders of magnitudes faster. Our work contributes towards scalable, yet accurate methods for robustness estimation of large complex networks.
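The metric-induced robustness estimation that such attack simulations compute can be illustrated with a small pure-Python sketch: repeatedly delete the highest-degree remaining node and average the surviving largest-component fraction (a Schneider-style robustness measure). This is a generic illustration, not the authors' sub-quadratic algorithm:

```python
from collections import deque

def largest_component(adj, removed):
    """Size of the largest connected component of `adj`, ignoring nodes
    in `removed` (adj: dict mapping node -> list of neighbours)."""
    seen, best = set(removed), 0
    for start in adj:
        if start in seen:
            continue
        queue, size = deque([start]), 0
        seen.add(start)
        while queue:
            u = queue.popleft()
            size += 1
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        best = max(best, size)
    return best

def degree_attack_robustness(adj):
    """R = (1/N) * sum over attack steps of the largest-component
    fraction, under a greedy highest-remaining-degree attack."""
    n = len(adj)
    removed, total = set(), 0.0
    for _ in range(n):
        target = max((u for u in adj if u not in removed),
                     key=lambda u: sum(1 for v in adj[u] if v not in removed))
        removed.add(target)
        total += largest_component(adj, removed) / n
    return total / n
```

Recomputing degrees after every removal is what makes even this cheap metric quadratic-ish in practice, which motivates the sub-quadratic estimator the abstract describes.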

  3. Accurate speed and slip measurement of induction motors

    Energy Technology Data Exchange (ETDEWEB)

    Ho, S.Y.S.; Langman, R. [Tasmania Univ., Hobart, TAS (Australia)

    1996-03-01

Two alternative hardware circuits, for the accurate measurement of low slip in cage induction motors, are discussed. Both circuits compare the periods of the fundamental of the supply frequency and pulses from a shaft-connected toothed wheel. The better of the two achieves accuracy to 0.5 percent of slip over the range 0.1 to 0.005, or better than 0.001 percent of speed over the range. This method is considered useful for slip measurement of motors supplied by either constant-frequency mains or variable-speed controllers with PWM waveforms. It is demonstrated that accurate slip measurement supports the conclusions of work previously done on the detection of broken rotor bars. (author). 1 tab., 6 figs., 13 refs.
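The comparison of the two periods yields slip directly: s = 1 − n/n_s, with rotor speed n from the toothed-wheel pulse period and synchronous speed n_s from the supply period and pole-pair count. A minimal sketch of that arithmetic (parameter names are illustrative):

```python
def slip_from_periods(supply_period, pulse_period, teeth, pole_pairs):
    """Slip of a cage induction motor from the measured supply period
    (s), toothed-wheel pulse period (s), tooth count Z, and pole pairs p:
    s = 1 - n/n_s, n = 1/(Z*T_pulse), n_s = 1/(p*T_supply)."""
    n = 1.0 / (teeth * pulse_period)             # rotor speed, rev/s
    n_sync = 1.0 / (pole_pairs * supply_period)  # synchronous speed, rev/s
    return 1.0 - n / n_sync
```

Because the result is a ratio of two measured periods, the accuracy claim in the abstract comes down to how precisely each period can be timed, which is what the two hardware circuits differ in.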

  4. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Directory of Open Access Journals (Sweden)

    Zhiwei Zhao

    2015-02-01

Full Text Available Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation.

  5. Accurate analysis of arbitrarily-shaped helical groove waveguide

    Institute of Scientific and Technical Information of China (English)

    Liu Hong-Tao; Wei Yan-Yu; Gong Yu-Bin; Yue Ling-Na; Wang Wen-Xiang

    2006-01-01

This paper presents a theory on accurately analysing the dispersion relation and the interaction impedance of electromagnetic waves propagating through a helical groove waveguide with arbitrary groove shape, in which the complex groove profile is synthesized by a series of rectangular steps. By introducing the influence of high-order evanescent modes on the connection of any two neighbouring steps by an equivalent susceptance under a modified admittance-matching condition, the assumption of neglecting the discontinuity capacitance made in previously published analyses is avoided, and the accurate dispersion equation is obtained by means of a combination of the field-matching method and the admittance-matching technique. The validity of this theory is proved by comparison between the measurements and the numerical calculations for two kinds of helical groove waveguides with different groove shapes.

  6. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Science.gov (United States)

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  7. Multimodal Spatial Calibration for Accurately Registering EEG Sensor Positions

    Directory of Open Access Journals (Sweden)

    Jianhua Zhang

    2014-01-01

Full Text Available This paper proposes a fast and accurate calibration method to calibrate multiple multimodal sensors using a novel photogrammetry system for fast localization of EEG sensors. The EEG sensors are placed on the human head and multimodal sensors are installed around the head to simultaneously obtain all EEG sensor positions. A multiple-views calibration process is implemented to obtain the transformations of multiple views. We first develop an efficient local repair algorithm to improve the depth map, and then a special calibration body is designed. Based on them, accurate and robust calibration results can be achieved. We evaluate the proposed method using the corners of a chessboard calibration plate. Experimental results demonstrate that the proposed method can achieve good performance, which can be further applied to EEG source localization applications on the human brain.

  8. Library preparation for highly accurate population sequencing of RNA viruses

    Science.gov (United States)

    Acevedo, Ashley; Andino, Raul

    2015-01-01

    Circular resequencing (CirSeq) is a novel technique for efficient and highly accurate next-generation sequencing (NGS) of RNA virus populations. The foundation of this approach is the circularization of fragmented viral RNAs, which are then redundantly encoded into tandem repeats by ‘rolling-circle’ reverse transcription. When sequenced, the redundant copies within each read are aligned to derive a consensus sequence of their initial RNA template. This process yields sequencing data with error rates far below the variant frequencies observed for RNA viruses, facilitating ultra-rare variant detection and accurate measurement of low-frequency variants. Although library preparation takes ~5 d, the high-quality data generated by CirSeq simplifies downstream data analysis, making this approach substantially more tractable for experimentalists. PMID:24967624
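The per-read consensus step, collapsing the redundant tandem copies by a majority vote at each position, can be sketched as follows. This is a simplified illustration assuming exact, in-frame repeats of known length; real CirSeq pipelines must first locate the repeat boundaries within each read:

```python
from collections import Counter

def cirseq_consensus(read, repeat_len):
    """Collapse the tandem repeats inside one CirSeq read into a
    consensus of the original RNA fragment by per-position majority
    vote across the redundant copies."""
    copies = [read[i:i + repeat_len]
              for i in range(0, len(read) - repeat_len + 1, repeat_len)]
    consensus = []
    for position in zip(*copies):          # one column per template base
        base, _ = Counter(position).most_common(1)[0]
        consensus.append(base)
    return "".join(consensus)
```

With three copies per read, a sequencing error must hit the same position in two copies to survive the vote, which is why the consensus error rate falls far below the raw per-base error rate.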

  9. FIXED-WING MICRO AERIAL VEHICLE FOR ACCURATE CORRIDOR MAPPING

    Directory of Open Access Journals (Sweden)

    M. Rehak

    2015-08-01

Full Text Available In this study we present a Micro Aerial Vehicle (MAV) equipped with precise position and attitude sensors that, together with a pre-calibrated camera, enables accurate corridor mapping. The design of the platform is based on widely available model components to which we integrate an open-source autopilot, a customized mass-market camera and navigation sensors. We adapt the concepts of system calibration from larger mapping platforms to the MAV and evaluate them practically for their achievable accuracy. We present case studies for accurate mapping without ground control points: first for a block configuration, later for a narrow corridor. We evaluate the mapping accuracy with respect to checkpoints and a digital terrain model. We show that while it is possible to achieve pixel-level (3-5 cm) mapping accuracy in both cases, precise aerial position control is sufficient for the block configuration, whereas precise position and attitude control is required for corridor mapping.

  10. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  11. A method to detect landmark pairs accurately between intra-patient volumetric medical images.

    Science.gov (United States)

    Yang, Deshan; Zhang, Miao; Chang, Xiao; Fu, Yabo; Liu, Shi; Li, Harold H; Mutic, Sasa; Duan, Ye

    2017-08-23

An image processing procedure was developed in this study to detect a large quantity of landmark pairs accurately in pairs of volumetric medical images. The detected landmark pairs can be used to evaluate deformable image registration (DIR) methods quantitatively. Landmark detection and pair matching were implemented in a Gaussian pyramid multi-resolution scheme. A 3D scale-invariant feature transform (SIFT) feature detection method and a 3D Harris-Laplacian corner detection method were employed to detect feature points, i.e., landmarks. A novel feature matching algorithm, Multi-Resolution Inverse-Consistent Guided Matching or MRICGM, was developed to allow accurate feature pair matching. MRICGM performs feature matching guided by the feature pairs detected at the lower resolution stage and the higher-confidence feature pairs already detected at the same resolution stage, while enforcing inverse consistency. The proposed feature detection and feature pair matching algorithms were optimized to process 3D CT and MRI images. They were successfully applied between the inter-phase abdomen 4DCT images of three patients, between the original and the re-scanned radiation therapy simulation CT images of two head-neck patients, and between inter-fractional treatment MRIs of two patients. The proposed procedure was able to successfully detect and match over 6300 feature pairs on average. The automatically detected landmark pairs were manually verified and the mismatched pairs were rejected. The automatic feature matching accuracy before manual error rejection was 99.4%. Performance of MRICGM was also evaluated using seven digital phantom datasets with known ground truth of tissue deformation. On average, 11855 feature pairs were detected per digital phantom dataset with TRE = 0.77 ± 0.72 mm. A procedure was developed in this study to detect a large number of landmark pairs accurately between two volumetric medical images. It allows a semi-automatic way to generate the

  12. A robust and accurate formulation of molecular and colloidal electrostatics

    Science.gov (United States)

    Sun, Qiang; Klaseboer, Evert; Chan, Derek Y. C.

    2016-08-01

    This paper presents a re-formulation of the boundary integral method for the Debye-Hückel model of molecular and colloidal electrostatics that removes the mathematical singularities that have to date been accepted as an intrinsic part of the conventional boundary integral equation method. The essence of the present boundary regularized integral equation formulation consists of subtracting a known solution from the conventional boundary integral method in such a way as to cancel out the singularities associated with the Green's function. This approach better reflects the non-singular physical behavior of the systems on boundaries with the benefits of the following: (i) the surface integrals can be evaluated accurately using quadrature without any need to devise special numerical integration procedures, (ii) being able to use quadratic or spline function surface elements to represent the surface more accurately and the variation of the functions within each element is represented to a consistent level of precision by appropriate interpolation functions, (iii) being able to calculate electric fields, even at boundaries, accurately and directly from the potential without having to solve hypersingular integral equations and this imparts high precision in calculating the Maxwell stress tensor and consequently, intermolecular or colloidal forces, (iv) a reliable way to handle geometric configurations in which different parts of the boundary can be very close together without being affected by numerical instabilities, therefore potentials, fields, and forces between surfaces can be found accurately at surface separations down to near contact, and (v) having the simplicity of a formulation that does not require complex algorithms to handle singularities will result in significant savings in coding effort and in the reduction of opportunities for coding errors. These advantages are illustrated using examples drawn from molecular and colloidal electrostatics.

  13. Accurate Method for Determining Adhesion of Cantilever Beams

    Energy Technology Data Exchange (ETDEWEB)

    Michalske, T.A.; de Boer, M.P.

    1999-01-08

    Using surface micromachined samples, we demonstrate the accurate measurement of cantilever beam adhesion by using test structures which are adhered over long attachment lengths. We show that this configuration has a deep energy well, such that a fracture equilibrium is easily reached. When compared to the commonly used method of determining the shortest attached beam, the present method is much less sensitive to variations in surface topography or to details of capillary drying.
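For a cantilever of thickness t and support height h adhered beyond an unattached (crack) length s, the standard clamped-beam fracture-equilibrium result gives the adhesion energy as Γ = 3Et³h²/(2s⁴). A small calculator under that assumption (this is the commonly used textbook expression, not necessarily the exact formulation in the paper):

```python
def adhesion_energy(E, t, h, s):
    """Adhesion energy Gamma (J/m^2) at fracture equilibrium for a
    cantilever beam: Young's modulus E (Pa), beam thickness t (m),
    support height h (m), unattached crack length s (m).
    Gamma = 3 * E * t**3 * h**2 / (2 * s**4)."""
    return 3.0 * E * t**3 * h**2 / (2.0 * s**4)
```

The strong s⁻⁴ dependence is why a long attached length with a well-defined equilibrium crack length gives a much more robust measurement than hunting for the shortest beam that remains attached.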

  14. Accurate and Simple Calibration of DLP Projector Systems

    OpenAIRE

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP Projector systems based on phase shifting profilometry pr...

  15. Accurate Insertion Loss Measurements of the Juno Patch Array Antennas

    Science.gov (United States)

    Chamberlain, Neil; Chen, Jacqueline; Hodges, Richard; Demas, John

    2010-01-01

    This paper describes two independent methods for estimating the insertion loss of patch array antennas that were developed for the Juno Microwave Radiometer instrument. One method is based principally on pattern measurements while the other method is based solely on network analyzer measurements. The methods are accurate to within 0.1 dB for the measured antennas and show good agreement (to within 0.1dB) of separate radiometric measurements.

  16. The highly accurate anteriolateral portal for injecting the knee

    Directory of Open Access Journals (Sweden)

    Chavez-Chiang Colbert E

    2011-03-01

Full Text Available Abstract Background The extended knee lateral midpatellar portal for intraarticular injection of the knee is accurate but is not practical for all patients. We hypothesized that a modified anteriolateral portal where the synovial membrane of the medial femoral condyle is the target would be highly accurate and effective for intraarticular injection of the knee. Methods 83 subjects with non-effusive osteoarthritis of the knee were randomized to intraarticular injection using the modified anteriolateral bent knee versus the standard lateral midpatellar portal. After hydrodissection of the synovial membrane with lidocaine using a mechanical syringe (reciprocating procedure device), 80 mg of triamcinolone acetonide were injected into the knee with a 2.0-in (5.1-cm) 21-gauge needle. Baseline pain, procedural pain, and pain at outcome (2 weeks and 6 months) were determined with the 10 cm Visual Analogue Pain Score (VAS). The accuracy of needle placement was determined by sonographic imaging. Results The lateral midpatellar and anteriolateral portals resulted in equivalent clinical outcomes including procedural pain (VAS midpatellar: 4.6 ± 3.1 cm; anteriolateral: 4.8 ± 3.2 cm; p = 0.77), pain at outcome (VAS midpatellar: 2.6 ± 2.8 cm; anteriolateral: 1.7 ± 2.3 cm; p = 0.11), responders (midpatellar: 45%; anteriolateral: 56%; p = 0.33), duration of therapeutic effect (midpatellar: 3.9 ± 2.4 months; anteriolateral: 4.1 ± 2.2 months; p = 0.69), and time to next procedure (midpatellar: 7.3 ± 3.3 months; anteriolateral: 7.7 ± 3.7 months; p = 0.71). The anteriolateral portal was 97% accurate by real-time ultrasound imaging. Conclusion The modified anteriolateral bent knee portal is an effective, accurate, and equivalent alternative to the standard lateral midpatellar portal for intraarticular injection of the knee. Trial Registration ClinicalTrials.gov: NCT00651625

  17. Accurate quantum state estimation via "Keeping the experimentalist honest"

    CERN Document Server

    Blume-Kohout, R; Blume-Kohout, Robin; Hayden, Patrick

    2006-01-01

    In this article, we derive a unique procedure for quantum state estimation from a simple, self-evident principle: an experimentalist's estimate of the quantum state generated by an apparatus should be constrained by honesty. A skeptical observer should subject the estimate to a test that guarantees that a self-interested experimentalist will report the true state as accurately as possible. We also find a non-asymptotic, operational interpretation of the quantum relative entropy function.

  18. Are Predictive Energy Expenditure Equations in Ventilated Surgery Patients Accurate?

    Science.gov (United States)

    Tignanelli, Christopher J; Andrews, Allan G; Sieloff, Kurt M; Pleva, Melissa R; Reichert, Heidi A; Wooley, Jennifer A; Napolitano, Lena M; Cherry-Bukowiec, Jill R

    2017-01-01

    While indirect calorimetry (IC) is the gold standard used to calculate specific calorie needs in the critically ill, predictive equations are frequently utilized at many institutions for various reasons. Prior studies suggest these equations frequently misjudge actual resting energy expenditure (REE) in medical and mixed intensive care unit (ICU) patients; however, their utility for surgical ICU (SICU) patients has not been fully evaluated. Therefore, the objective of this study was to compare the REE measured by IC with REE calculated using specific calorie goals or predictive equations for nutritional support in ventilated adult SICU patients. A retrospective review of prospectively collected data was performed on all adults (n = 419, 18-91 years) mechanically ventilated for >24 hours, with an Fio2 ≤ 60%, who met IC screening criteria. Caloric needs were estimated using Harris-Benedict equations (HBEs), and 20, 25, and 30 kcal/kg/d with actual (ABW), adjusted (ADJ), and ideal body (IBW) weights. The REE was measured using IC. The estimated REE was considered accurate when within ±10% of the measured REE by IC. The HBE, 20, 25, and 30 kcal/kg/d estimates of REE were found to be inaccurate regardless of age, gender, or weight. The HBE and 20 kcal/kg/d underestimated REE, while 25 and 30 kcal/kg/d overestimated REE. Of the methods studied, those found to most often accurately estimate REE were the HBE using ABW, which was accurate 35% of the time, and 25 kcal/kg/d ADJ, which was accurate 34% of the time. This difference was not statistically significant. Using HBE, 20, 25, or 30 kcal/kg/d to estimate daily caloric requirements in critically ill surgical patients is inaccurate compared to REE measured by IC. In SICU patients with nutrition requirements essential to recovery, IC measurement should be performed to guide clinicians in determining goal caloric requirements.
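The predictive equations compared in this study are simple to state. A sketch of the classic Harris-Benedict equations and of the ±10% accuracy criterion used above (coefficients are the widely published 1919 values; the study's exact implementation may differ in rounding):

```python
def harris_benedict(weight_kg, height_cm, age_yr, male):
    """Classic Harris-Benedict resting energy expenditure (kcal/d)."""
    if male:
        return 66.47 + 13.75 * weight_kg + 5.003 * height_cm - 6.755 * age_yr
    return 655.1 + 9.563 * weight_kg + 1.850 * height_cm - 4.676 * age_yr

def within_10_percent(estimated, measured):
    """Accuracy criterion used in the study: the estimate counts as
    accurate when within +/-10% of the REE measured by indirect
    calorimetry."""
    return abs(estimated - measured) <= 0.10 * measured
```

The weight-based alternatives (20, 25, 30 kcal/kg/d) are just a multiplication by actual, adjusted, or ideal body weight, which is precisely why they cannot track the patient-to-patient variation that indirect calorimetry captures.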

  19. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry

    OpenAIRE

    Fuchs, Franz G.; Hjelmervik, Jon M.

    2014-01-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire...

  20. A multiple more accurate Hardy-Littlewood-Polya inequality

    Directory of Open Access Journals (Sweden)

    Qiliang Huang

    2012-11-01

Full Text Available By introducing multi-parameters and conjugate exponents and using Euler-Maclaurin’s summation formula, we estimate the weight coefficient and prove a multiple more accurate Hardy-Littlewood-Polya (H-L-P) inequality, which is an extension of some earlier published results. We also prove that the constant factor in the new inequality is the best possible, and obtain its equivalent forms.
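For reference, the classical Hilbert-type inequality from Hardy, Littlewood and Pólya that results of this kind sharpen reads, for p > 1, 1/p + 1/q = 1, and non-negative sequences a_m, b_n:

```latex
\sum_{m=1}^{\infty}\sum_{n=1}^{\infty}\frac{a_m b_n}{m+n}
  < \frac{\pi}{\sin(\pi/p)}
    \left(\sum_{m=1}^{\infty} a_m^{p}\right)^{1/p}
    \left(\sum_{n=1}^{\infty} b_n^{q}\right)^{1/q},
```

where the constant factor π/sin(π/p) is the best possible. The "more accurate" variants replace m + n by shifted kernels and prove the sharpened constant remains best possible.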

  1. A highly accurate method to solve Fisher’s equation

    Indian Academy of Sciences (India)

    Mehdi Bastani; Davod Khojasteh Salkuyeh

    2012-03-01

    In this study, we present a new and very accurate numerical method to approximate the Fisher’s-type equations. Firstly, the spatial derivative in the proposed equation is approximated by a sixth-order compact finite difference (CFD6) scheme. Secondly, we solve the obtained system of differential equations using a third-order total variation diminishing Runge–Kutta (TVD-RK3) scheme. Numerical examples are given to illustrate the efficiency of the proposed method.

  2. Accurate and simple calibration of DLP projector systems

    Science.gov (United States)

    Wilm, Jakob; Olesen, Oline V.; Larsen, Rasmus

    2014-03-01

Much work has been devoted to the calibration of optical cameras, and accurate and simple methods are now available which require only a small number of calibration targets. The problem of obtaining these parameters for light projectors has not been studied as extensively and most current methods require a camera and involve feature extraction from a known projected pattern. In this work we present a novel calibration technique for DLP Projector systems based on phase shifting profilometry projection onto a printed calibration target. In contrast to most current methods, the one presented here does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert feature coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination of parameters including lens distortion. Our implementation acquires printed planar calibration scenes in less than 1 s. This makes our method both fast and convenient. We evaluate our method in terms of reprojection errors and structured light image reconstruction quality.
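The phase-shifting profilometry step recovers a wrapped phase per pixel from N equally shifted fringe images I_n = A + B·cos(φ + 2πn/N). The standard N-step least-squares formula can be sketched as follows (a generic illustration of the technique, not the authors' exact pipeline):

```python
import math

def phase_from_shifts(intensities):
    """Wrapped phase phi at one pixel from N >= 3 equally spaced
    phase-shifted fringe intensities I_n = A + B*cos(phi + 2*pi*n/N),
    via the standard N-step least-squares formula."""
    N = len(intensities)
    s = sum(I * math.sin(2.0 * math.pi * n / N) for n, I in enumerate(intensities))
    c = sum(I * math.cos(2.0 * math.pi * n / N) for n, I in enumerate(intensities))
    # Over a full period: s = -(N/2)*B*sin(phi), c = (N/2)*B*cos(phi).
    return math.atan2(-s, c)
```

Because the DC term A and modulation B cancel out of the ratio, the recovered phase is insensitive to projector brightness and target albedo, which is what makes the method robust on a printed calibration target.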

  3. Accurate modelling of unsteady flows in collapsible tubes.

    Science.gov (United States)

    Marchandise, Emilie; Flaud, Patrice

    2010-01-01

The context of this paper is the development of a general and efficient numerical haemodynamic tool to help clinicians and researchers in the understanding of physiological flow phenomena. We propose an accurate one-dimensional Runge-Kutta discontinuous Galerkin (RK-DG) method coupled with lumped parameter models for the boundary conditions. The suggested model has already been successfully applied to haemodynamics in arteries and is now extended for the flow in collapsible tubes such as veins. The main difference with cardiovascular simulations is that the flow may become supercritical and elastic jumps may appear, with the numerical consequence that the scheme may not remain monotone if no limiting procedure is introduced. We show that our second-order RK-DG method equipped with an approximate Roe's Riemann solver and a slope-limiting procedure allows us to capture elastic jumps accurately. Moreover, this paper demonstrates that the complex physics associated with such flows is more accurately modelled than with traditional methods such as finite difference methods or finite volumes. We present various benchmark problems that show the flexibility and applicability of the numerical method. Our solutions are compared with analytical solutions when they are available and with solutions obtained using other numerical methods. Finally, to illustrate the clinical interest, we study the emptying process in a calf vein squeezed by contracting skeletal muscle in a normal and pathological subject. We compare our results with experimental simulations and discuss the sensitivity to parameters of our model.

  4. BASIC: A Simple and Accurate Modular DNA Assembly Method.

    Science.gov (United States)

    Storch, Marko; Casini, Arturo; Mackrow, Ben; Ellis, Tom; Baldwin, Geoff S

    2017-01-01

    Biopart Assembly Standard for Idempotent Cloning (BASIC) is a simple, accurate, and robust DNA assembly method. The method is based on linker-mediated DNA assembly and provides highly accurate DNA assembly with 99 % correct assemblies for four parts and 90 % correct assemblies for seven parts [1]. The BASIC standard defines a single entry vector for all parts flanked by the same prefix and suffix sequences and its idempotent nature means that the assembled construct is returned in the same format. Once a part has been adapted into the BASIC format it can be placed at any position within a BASIC assembly without the need for reformatting. This allows laboratories to grow comprehensive and universal part libraries and to share them efficiently. The modularity within the BASIC framework is further extended by the possibility of encoding ribosomal binding sites (RBS) and peptide linker sequences directly on the linkers used for assembly. This makes BASIC a highly versatile library construction method for combinatorial part assembly including the construction of promoter, RBS, gene variant, and protein-tag libraries. In comparison with other DNA assembly standards and methods, BASIC offers a simple robust protocol; it relies on a single entry vector, provides for easy hierarchical assembly, and is highly accurate for up to seven parts per assembly round [2].

  5. Discrete sensors distribution for accurate plantar pressure analyses.

    Science.gov (United States)

    Claverie, Laetitia; Ille, Anne; Moretto, Pierre

    2016-12-01

    The aim of this study was to determine the distribution of discrete sensors under the footprint for accurate plantar pressure analyses. For this purpose, two different sensor layouts were tested and compared to determine which was the more accurate for monitoring plantar pressure with wireless devices in research and/or clinical practice. Ten healthy volunteers participated in the study (age range: 23-58 years). The barycenter of pressures (BoP) determined from the plantar pressure system (W-inshoe®) was compared to the center of pressures (CoP) determined from a force platform (AMTI) in the medial-lateral (ML) and anterior-posterior (AP) directions. Then, the vertical ground reaction force (vGRF) obtained from both W-inshoe® and the force platform was compared for both layouts for each subject. The BoP and vGRF determined from the plantar pressure system data showed good correlation (SCC) with those determined from the force platform data, notably for the second sensor organization (ML SCC = 0.95; AP SCC = 0.99; vGRF SCC = 0.91). The study demonstrates that an adjusted placement of removable sensors is key to accurate plantar pressure analyses. These results are promising for plantar pressure recording outside clinical or laboratory settings, for long-term monitoring, real-time feedback, or any activity requiring a low-cost system. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.
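The reported SCC values are Spearman rank correlations between the insole-derived and force-plate signals. A minimal, scipy-free version of that comparison on hypothetical data (assumes no tied ranks):

```python
import numpy as np

def spearman(x, y):
    """Spearman rank correlation (no ties assumed), as used to compare
    the insole barycenter of pressure against the force-plate CoP."""
    rx = np.argsort(np.argsort(x)).astype(float)
    ry = np.argsort(np.argsort(y)).astype(float)
    rx -= rx.mean()
    ry -= ry.mean()
    return float(rx @ ry / np.sqrt((rx @ rx) * (ry @ ry)))

# A monotone but nonlinear relationship gives perfect rank correlation.
t = np.array([0.1, 0.4, 0.2, 0.9, 0.7])
rho = spearman(t, t ** 3)
print(rho)  # 1.0
```

Rank correlation is a sensible choice here because it rewards the insole system for tracking the force plate monotonically even when the two devices differ in scale or calibration.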

  6. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

    Despite the importance of the thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures is still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, the approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamic conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using a highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  7. Is bioelectrical impedance accurate for use in large epidemiological studies?

    Directory of Open Access Journals (Sweden)

    Merchant Anwar T

    2008-09-01

    Full Text Available Abstract Percentage of body fat is strongly associated with the risk of several chronic diseases, but its accurate measurement is difficult. Bioelectrical impedance analysis (BIA) is a relatively simple, quick and non-invasive technique for measuring body composition. It measures body fat accurately in controlled clinical conditions, but its performance in the field is inconsistent. In large epidemiologic studies, simpler surrogate techniques such as body mass index (BMI), waist circumference, and waist-hip ratio are frequently used instead of BIA to measure body fatness. We reviewed the rationale, theory, and technique of recently developed systems such as foot-to-foot (or hand-to-foot) BIA measurement, and the elements that could influence its results in large epidemiologic studies. BIA results are influenced by factors such as the environment, ethnicity, phase of the menstrual cycle, and underlying medical conditions. We concluded that BIA measurements validated for specific ethnic groups, populations and conditions can accurately measure body fat in those populations, but not in others, and suggest that for large epidemiological studies with diverse populations BIA may not be the appropriate choice for body composition measurement unless specific calibration equations are developed for the different groups participating in the study.

  8. Accurate genome relative abundance estimation based on shotgun metagenomic reads.

    Directory of Open Access Journals (Sweden)

    Li C Xia

    Full Text Available Accurate estimation of microbial community composition based on metagenomic sequencing data is fundamental for subsequent metagenomics analysis. Prevalent estimation methods are mainly based on directly summarizing alignment results or variants thereof, and often result in biased and/or unstable estimates. We have developed a unified probabilistic framework (named GRAMMy) that explicitly models read assignment ambiguities, genome size biases and read distributions along the genomes. A maximum likelihood method is employed to compute the Genome Relative Abundance of microbial communities using finite Mixture Model theory (hence GRAMMy). GRAMMy has been demonstrated to give estimates that are accurate and robust across both simulated and real read benchmark datasets. We applied GRAMMy to a collection of 34 metagenomic read sets from four metagenomics projects and identified 99 frequent species (minimally 0.5% abundant in at least 50% of the datasets) in the human gut samples. Our results show substantial improvements over previous studies, such as adjusting the over-estimated abundance of Bacteroides species in human gut samples, by providing a new reference-based strategy for metagenomic sample comparisons. GRAMMy can be used flexibly with many read assignment tools (mapping, alignment or composition-based), even with low-sensitivity mapping results from huge short-read datasets. It will be increasingly useful as an accurate and robust tool for abundance estimation with the growing size of read sets and the expanding database of reference genomes.
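As a sketch of the underlying idea (not GRAMMy's actual implementation), an EM iteration over a read-to-genome mixture with a genome-size correction looks like this; all names are ours:

```python
import numpy as np

def grammy_like_em(resp_lik, genome_len, iters=200):
    """EM estimate of genome relative abundances from ambiguous read
    assignments, in the spirit of a GRAMMy-style mixture model.

    resp_lik[r, g] : likelihood of read r given genome g (0 if unmapped).
    genome_len[g]  : genome length, used to correct size bias.
    """
    R, G = resp_lik.shape
    pi = np.full(G, 1.0 / G)                  # mixing proportions
    for _ in range(iters):
        w = resp_lik * pi                     # E-step: posterior read weights
        w /= w.sum(axis=1, keepdims=True)
        pi = w.sum(axis=0) / R                # M-step: re-estimate proportions
    ab = pi / genome_len                      # correct for genome-size bias
    return ab / ab.sum()

# Two genomes; reads 0-2 map only to A, read 3 only to B, read 4 is ambiguous.
lik = np.array([[1., 0.], [1., 0.], [1., 0.], [0., 1.], [0.5, 0.5]])
ab = grammy_like_em(lik, genome_len=np.array([2.0, 1.0]))
print(ab)  # approximately [0.6 0.4]
```

The fixed point assigns 75% of reads to genome A (the ambiguous read follows the prior), and the length correction then halves A's per-base abundance, illustrating why raw read counts alone are a biased estimator.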

  9. An accurate metric for the spacetime around neutron stars

    CERN Document Server

    Pappas, George

    2016-01-01

    The problem of having an accurate description of the spacetime around neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a neutron star. Furthermore, an accurate appropriately parameterised metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work we present such an approximate stationary and axisymmetric metric for the exterior of neutron stars, which is constructed using the Ernst formalism and is parameterised by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical propert...

  10. [Spectroscopy technique and ruminant methane emissions accurate inspecting].

    Science.gov (United States)

    Shang, Zhan-Huan; Guo, Xu-Sheng; Long, Rui-Jun

    2009-03-01

    The increase in atmospheric CH4 concentration will, on the one hand, directly cause climate change through the radiation process and, on the other hand, cause many changes in atmospheric chemical processes, indirectly causing climate change. The rapid growth of atmospheric methane has gained the attention of governments and scientists. Countries worldwide now treat reducing greenhouse gas emissions as an important task in dealing with global climate change, and monitoring methane concentrations, in particular precision monitoring, provides the scientific basis for formulating emission reduction measures. So far, CH4 emissions from different animal production systems have received extensive research attention, but the ruminant methane emissions reported in the literature are only estimates. Many factors affect methane production in ruminants, many variables are associated with the techniques for measuring it, and the techniques developed so far cannot accurately determine the dynamics of methane emission by ruminants; there is therefore an urgent need to develop an accurate method for this purpose. Currently, spectroscopy is a relatively accurate and reliable approach. Various spectroscopic techniques, such as modified infrared methane-measuring systems and laser and near-infrared sensor systems, can determine dynamic methane emissions from both housed and grazing ruminants. Spectroscopy is thus an important methane-measurement technique and contributes to proposing methane reduction methods.

  11. Absolute quantitation of endogenous proteins with precision and accuracy using a capillary western system

    OpenAIRE

    Chen, Jin-Qiu; Heldman, Madeleine R.; Herrmann, Michelle A.; Kedei, Noemi; Woo, Wonhee; Blumberg, Peter M.; Goldsmith, Paul K.

    2013-01-01

    Precise and accurate quantification of protein expression levels in a complex biological setting is challenging. Here, we describe a method for absolute quantitation of endogenous proteins in cell lysates using an automated capillary immunoassay system (the size-based Simple Western system, ProteinSimple, CA). The method was able to accurately measure the absolute amounts of target proteins at picogram or sub-picogram levels per nanogram of cell lysates. The measurements were independent of t...

  12. Ultrafast quantitative time-stretch imaging flow cytometry of phytoplankton

    Science.gov (United States)

    Lai, Queenie T. K.; Lau, Andy K. S.; Tang, Anson H. L.; Wong, Kenneth K. Y.; Tsia, Kevin K.

    2016-03-01

    Comprehensive quantification of phytoplankton abundance, sizes and other parameters, e.g. biomass, has been an important yet daunting task in aquatic sciences and biofuel research, primarily because of the lack of an effective tool to image, and thus accurately profile, individual microalgae in a large population. Phytoplankton species are highly diversified and heterogeneous in their sizes and morphological complexity. This makes time-stretch imaging, a new ultrafast real-time optical imaging technology, particularly suitable for ultralarge-scale taxonomic classification of phytoplankton together with quantitative image recognition and analysis. Here we demonstrate quantitative imaging flow cytometry of single phytoplankton based on quantitative asymmetric-detection time-stretch optical microscopy (Q-ATOM), a new time-stretch imaging modality for label-free quantitative phase imaging without interferometric implementations. Sharing a similar concept with Schlieren imaging, Q-ATOM accesses multiple phase-gradient contrasts of each single phytoplankton, from which the quantitative phase profile is computed. We employ such a system to capture, at an imaging line-scan rate of 11.6 MHz, high-resolution images of two phytoplankton populations (Scenedesmus and Chlamydomonas) in ultrafast microfluidic flow (3 m/s). We further perform quantitative taxonomic screening analysis enabled by this technique. More importantly, the system can also generate quantitative phase images of single phytoplankton. This is especially useful for label-free quantification of biomasses (e.g. lipid droplets) of particular species of interest, an important task in biofuel applications. Combined with machine learning for automated classification, Q-ATOM could be an attractive platform for continuous, real-time, ultralarge-scale single-phytoplankton analysis.

  13. Developing Geoscience Students' Quantitative Skills

    Science.gov (United States)

    Manduca, C. A.; Hancock, G. S.

    2005-12-01

    Sophisticated quantitative skills are an essential tool for the professional geoscientist. While students learn many of these sophisticated skills in graduate school, it is increasingly important that they have a strong grounding in quantitative geoscience as undergraduates. Faculty have developed many strong approaches to teaching these skills in a wide variety of geoscience courses. A workshop in June 2005 brought together eight faculty teaching surface processes and climate change to discuss and refine activities they use and to publish them on the Teaching Quantitative Skills in the Geosciences website (serc.Carleton.edu/quantskills) for broader use. Workshop participants, in consultation with two mathematics faculty who have expertise in math education, developed six review criteria to guide discussion: 1) Are the quantitative and geologic goals central and important? (e.g. problem solving, mastery of important skill, modeling, relating theory to observation); 2) Does the activity lead to better problem solving? 3) Are the quantitative skills integrated with geoscience concepts in a way that makes sense for the learning environment and supports learning both quantitative skills and geoscience? 4) Does the methodology support learning? (e.g. motivate and engage students; use multiple representations, incorporate reflection, discussion and synthesis) 5) Are the materials complete and helpful to students? 6) How well has the activity worked when used? Workshop participants found that reviewing each other's activities was very productive because they thought about new ways to teach, and the experience of reviewing helped them think about their own activity from a different point of view. The review criteria focused their thinking about the activity and would be equally helpful in the design of a new activity. We invite a broad international discussion of the criteria (serc.Carleton.edu/quantskills/workshop05/review.html). The Teaching activities can be found on the

  14. Accurate LAI retrieval method based on PROBA/CHRIS data

    Directory of Open Access Journals (Sweden)

    W. Fan

    2009-11-01

    Full Text Available Leaf area index (LAI) is one of the key structural variables in terrestrial vegetation ecosystems. Remote sensing offers a chance to derive LAI accurately at regional scales. Variations of the background, atmospheric conditions and the anisotropy of canopy reflectance are three factors that can strongly restrain the accuracy of retrieved LAI. Based on the hybrid canopy reflectance model, a new hyperspectral directional second derivative method (DSD) is proposed in this paper. This method can estimate LAI accurately by analyzing the canopy anisotropy, and the effect of the background can also be effectively removed, so the inversion precision and the dynamic range can be improved remarkably, as demonstrated by numerical simulations. As the derivative method is very sensitive to random noise, we put forward an innovative filtering approach by which the data can be de-noised in the spectral and spatial dimensions simultaneously. The filtering method removes random noise effectively, so the method can be applied to remotely sensed hyperspectral images. The study region is situated in Zhangye, Gansu Province, China; the hyperspectral and multi-angular images of the study region were acquired by the Compact High-Resolution Imaging Spectrometer/Project for On-Board Autonomy (CHRIS/PROBA) on 4 and 14 June 2008. After the pre-processing procedures, the DSD method was applied, and the retrieved LAI was validated against ground truth from 11 sites. The results show that, with the innovative filtering method, the new LAI inversion method is accurate and effective.
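As an illustration of the general smooth-then-differentiate idea (a simplified stand-in for the paper's DSD and filtering methods, with all names ours):

```python
import numpy as np

def smoothed_second_derivative(spectrum, wl_step, width=5):
    """Moving-average smoothing followed by a central second difference.
    Derivatives amplify noise, so some de-noising must come first;
    this is a crude surrogate for the paper's spectral-spatial filter."""
    kernel = np.ones(width) / width
    s = np.convolve(spectrum, kernel, mode="same")   # suppress random noise
    d2 = np.zeros_like(s)
    d2[1:-1] = (s[2:] - 2 * s[1:-1] + s[:-2]) / wl_step ** 2
    return d2

wl = np.linspace(0.0, 1.0, 101)
band = np.exp(-((wl - 0.5) / 0.05) ** 2)             # synthetic absorption band
d2 = smoothed_second_derivative(band, wl_step=wl[1] - wl[0])
print(int(np.argmin(d2)))  # curvature extremum at the band centre, index 50
```

Second-derivative spectra emphasise band curvature while suppressing slowly varying background contributions, which is the property the DSD method exploits to decouple LAI from the soil background.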

  15. The FLUKA Code: An Accurate Simulation Tool for Particle Therapy.

    Science.gov (United States)

    Battistoni, Giuseppe; Bauer, Julia; Boehlen, Till T; Cerutti, Francesco; Chin, Mary P W; Dos Santos Augusto, Ricardo; Ferrari, Alfredo; Ortega, Pablo G; Kozłowska, Wioletta; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly widespread in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of an MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field, as shown in the presented benchmarks against experimental data with both (4)He and (12)C ion beams. Accurate description of ionization energy losses and of particle scattering and interactions leads to excellent agreement of calculated depth-dose profiles with those measured at leading European hadron therapy centers, with both proton and ion beams. To support the application of FLUKA in hospital-based environments, Flair, the FLUKA graphical interface, has been enhanced with the capability of translating CT DICOM images into voxel-based computational phantoms in a fast and well-structured way. The interface can also import radiotherapy treatment data described in the DICOM RT standard. In addition, the interface is equipped with an intuitive PET scanner geometry generator and automatic recording of coincidence events. Clinical cases will be presented in terms of both absorbed dose and biological dose calculations, describing the various available features.

  16. An accurate metric for the spacetime around rotating neutron stars

    Science.gov (United States)

    Pappas, George

    2017-04-01

    The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate appropriately parametrized metric, i.e. a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work, we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parametrized by the relativistic multipole moments of the central object. This metric is given in terms of an expansion on the Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement, in contrast to previous approaches. For the parametrization of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a three-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.

  17. Fourth order accurate compact scheme with group velocity control (GVC)

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    For solving complex flow fields with multi-scale structures, higher-order accurate schemes are preferred. Among high-order schemes, compact schemes have higher resolving efficiency. When the compact and upwind compact schemes are used to solve aerodynamic problems, numerical oscillations appear near shocks. These oscillations are produced by the non-uniform group velocity of wave packets in the numerical solution. To improve shock resolution, a parameter function is introduced into the compact scheme to control the group velocity. The newly developed method is simple. It has higher accuracy and a smaller grid-point stencil.
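For reference, the classical fourth-order Padé compact scheme for the first derivative, which underlies this family of methods (the GVC parameter function itself is not reproduced here), can be sketched as:

```python
import numpy as np

def compact4_derivative(f, h):
    """Fourth-order Pade compact first derivative on a periodic grid:
    (1/4) f'_{i-1} + f'_i + (1/4) f'_{i+1} = 3 (f_{i+1} - f_{i-1}) / (4h).
    The implicit coupling is solved as a cyclic linear system, which is
    fine for a small demo (a Thomas solver would be used in practice)."""
    n = len(f)
    A = np.eye(n) + 0.25 * (np.eye(n, k=1) + np.eye(n, k=-1))
    A[0, -1] = A[-1, 0] = 0.25                       # periodic wrap-around
    rhs = 3.0 * (np.roll(f, -1) - np.roll(f, 1)) / (4.0 * h)
    return np.linalg.solve(A, rhs)

n = 64
x = 2 * np.pi * np.arange(n) / n
err = np.max(np.abs(compact4_derivative(np.sin(x), 2 * np.pi / n) - np.cos(x)))
print(err < 1e-5)  # True: near-spectral accuracy for smooth data
```

The implicit left-hand side is what gives compact schemes their high resolving efficiency on a narrow stencil; GVC-type methods then modify the coefficients to control the group velocity of poorly resolved wave packets near shocks.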

  18. Rapid and Accurate Idea Transfer: Presenting Ideas with Concept Maps

    Science.gov (United States)

    2008-07-30


  19. Accurate emulators for large-scale computer experiments

    CERN Document Server

    Haaland, Ben; 10.1214/11-AOS929

    2012-01-01

    Large-scale computer experiments are becoming increasingly important in science. A multi-step procedure is introduced to statisticians for modeling such experiments, which builds an accurate interpolator in multiple steps. In practice, the procedure shows substantial improvements in overall accuracy, but its theoretical properties are not well established. We introduce the terms nominal and numeric error and decompose the overall error of an interpolator into nominal and numeric portions. Bounds on the numeric and nominal error are developed to show theoretically that substantial gains in overall accuracy can be attained with the multi-step approach.

  20. Accurate Excited State Geometries within Reduced Subspace TDDFT/TDA.

    Science.gov (United States)

    Robinson, David

    2014-12-09

    A method for the calculation of TDDFT/TDA excited state geometries within a reduced subspace of Kohn-Sham orbitals has been implemented and tested. Accurate geometries are found for all of the fluorophore-like molecules tested, with at most all occupied valence orbitals and half of the virtual orbitals included, and for some molecules even fewer orbitals. Efficiency gains of between 15 and 30% are found for essentially the same level of accuracy as a standard TDDFT/TDA excited state geometry optimization calculation.

  1. Accurate measurement of ultrasonic velocity by eliminating the diffraction effect

    Institute of Scientific and Technical Information of China (English)

    WEI Tingcun

    2003-01-01

    The accurate measurement of ultrasonic velocity by the pulse interference method, with elimination of the diffraction effect, has been investigated experimentally in the VHF range. Two silicate glasses were taken as the specimens; their frequency dependences of longitudinal velocities were measured in the frequency range 50-350 MHz, and the phase advances of ultrasonic signals caused by the diffraction effect were calculated using A. O. Williams' theoretical expression. For the frequency dependences of longitudinal velocities, the measurement results were in good agreement with the simulated ones in which the phase advances were included. It has been shown that the velocity error due to the diffraction effect can be corrected very well by this method.

  2. Accurate studies on dissociation energies of diatomic molecules

    Institute of Scientific and Technical Information of China (English)

    SUN; WeiGuo; FAN; QunChao

    2007-01-01

    The molecular dissociation energies of some electronic states of hydride and N2 molecules were studied using a parameter-free analytical formula suggested in this study and the algebraic method (AM) proposed recently. The results show that the accurate AM dissociation energies De(AM) agree excellently with the experimental dissociation energies De(expt), and that the dissociation energy of an electronic state such as the 2³Δg state of ⁷Li2, whose experimental value is not available, can be predicted using the new formula.

  3. A novel simple and accurate flatness measurement method

    CERN Document Server

    Thang, H L

    2011-01-01

    Flatness measurement of a surface plate is an intensive and old research topic. However, the ISO definition-related and other measurement methods seem cumbersome in measurement and/or complicated in data analysis. In particular, the existing methods do not deal clearly and straightforwardly with the inclination angle that is always present in any flatness measurement. In this report a novel, simple and accurate flatness measurement method is introduced to overcome this shortcoming of the available methods. The mathematical modeling for this method is also presented, making the underlying nature of the method transparent. Worked examples show consistent results.
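The core tilt-removal idea can be sketched as a least-squares plane fit followed by a peak-to-valley evaluation of the residuals (our sketch of the generic approach, not the paper's exact formulation):

```python
import numpy as np

def flatness_peak_to_valley(x, y, z):
    """Fit and subtract the best inclined plane, then report the
    peak-to-valley residual, so flatness is judged independently of
    how the plate happens to be tilted during measurement."""
    A = np.column_stack([x, y, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)   # best-fit inclined plane
    resid = z - A @ coef
    return float(resid.max() - resid.min())

# A tilted but perfectly flat plate should report flatness ~ 0.
rng = np.random.default_rng(0)
x, y = rng.uniform(0, 100, 50), rng.uniform(0, 100, 50)
z = 0.01 * x - 0.02 * y + 3.0                      # pure inclination, no form error
pv = flatness_peak_to_valley(x, y, z)
print(pv < 1e-9)  # True
```

Without the plane subtraction, the same data would report a "flatness" of several units that is entirely an artifact of the inclination angle, which is exactly the failure mode the abstract criticises.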

  4. Quantum-Accurate Molecular Dynamics Potential for Tungsten

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Mitchell; Thompson, Aidan P.

    2017-03-01

    The purpose of this short contribution is to report on the development of a Spectral Neighbor Analysis Potential (SNAP) for tungsten. We have focused on the characterization of elastic and defect properties of the pure material in order to support molecular dynamics simulations of plasma-facing materials in fusion reactors. A parallel genetic algorithm approach was used to efficiently search for fitting parameters optimized against a large number of objective functions. In addition, we have shown that this many-body tungsten potential can be used in conjunction with a simple helium pair potential1 to produce accurate defect formation energies for the W-He binary system.

  5. Accurate analysis of multitone signals using a DFT

    Science.gov (United States)

    Burgess, John C.

    2004-07-01

    Optimum data windows make it possible to determine accurately the amplitude, phase, and frequency of one or more tones (sinusoidal components) in a signal. Procedures presented in this paper can be applied to noisy signals, signals having moderate nonstationarity, and tones close in frequency. They are relevant to many areas of acoustics where sounds are quasistationary. Among these are acoustic probes transmitted through media and natural sounds, such as animal vocalization, speech, and music. The paper includes criteria for multitone FFT block design and an example of application to sound transmission in the atmosphere.
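A minimal version of windowed-DFT tone estimation (a Hann window with coherent-gain correction; the paper's optimum windows and off-bin corrections are beyond this sketch, and all names are ours):

```python
import numpy as np

def tone_amplitude(signal, fs, f_tone):
    """Estimate a tone's amplitude from a Hann-windowed DFT, correcting
    for the window's coherent gain. Accurate for tones on or near a DFT
    bin; off-bin tones need scalloping corrections not shown here."""
    n = len(signal)
    w = np.hanning(n)
    spec = np.fft.rfft(signal * w)
    k = int(round(f_tone * n / fs))          # nearest DFT bin
    return 2.0 * np.abs(spec[k]) / w.sum()   # undo window attenuation

fs, n = 1024.0, 1024
t = np.arange(n) / fs
sig = 0.7 * np.sin(2 * np.pi * 100.0 * t) + 0.2 * np.sin(2 * np.pi * 260.0 * t)
amp = tone_amplitude(sig, fs, 100.0)
print(amp)  # close to 0.7 despite the second tone
```

The window's low sidelobes are what let the two tones be measured independently; with a rectangular window, leakage from the 260 Hz component would bias the 100 Hz estimate.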

  6. Virmid: accurate detection of somatic mutations with sample impurity inference.

    Science.gov (United States)

    Kim, Sangwoo; Jeong, Kyowon; Bhutani, Kunal; Lee, Jeong; Patel, Anand; Scott, Eric; Nam, Hojung; Lee, Hayan; Gleeson, Joseph G; Bafna, Vineet

    2013-08-29

    Detection of somatic variation using sequence data from matched disease-control samples is a critical first step. In many cases, including cancer, however, it is hard to isolate pure disease tissue, and the impurity hinders accurate mutation analysis by disrupting overall allele frequencies. Here, we propose a new method, Virmid, that explicitly determines the level of impurity in the sample and uses it for improved detection of somatic variation. Extensive tests on simulated and real sequencing data from breast cancer and hemimegalencephaly demonstrate the power of our model. A software implementation of our method is available at http://sourceforge.net/projects/virmid/.
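A toy illustration of why an explicit impurity estimate helps: under a simple model where somatic variants are heterozygous, the expected variant allele fraction is purity/2, so purity can be read off the allele counts by maximum likelihood. This is a grid-search sketch of that idea only, not Virmid's actual joint model, which is considerably richer:

```python
import numpy as np

def estimate_purity(alt_counts, depths, grid=None):
    """Grid-search MLE of sample purity from candidate somatic sites,
    assuming heterozygous somatic variants so the expected variant
    allele fraction is purity / 2 (a toy model, names are ours)."""
    if grid is None:
        grid = np.linspace(0.05, 1.0, 96)
    alt = np.asarray(alt_counts, float)
    dep = np.asarray(depths, float)
    p = grid / 2.0                            # expected VAF per purity value
    ll = (alt[:, None] * np.log(p) +
          (dep - alt)[:, None] * np.log1p(-p)).sum(axis=0)   # binomial log-lik.
    return float(grid[np.argmax(ll)])

# Simulated 60%-pure sample: somatic VAFs cluster near 0.3.
rng = np.random.default_rng(1)
dep = np.full(200, 100)
alt = rng.binomial(dep, 0.3)
purity = estimate_purity(alt, dep)
print(purity)  # close to 0.6
```

Once purity is known, a caller can test observed allele fractions against purity/2 instead of 0.5, which is what rescues somatic calls in impure samples.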

  7. Accurate Programming: Thinking about programs in terms of properties

    Directory of Open Access Journals (Sweden)

    Walid Taha

    2011-09-01

    Full Text Available Accurate programming is a practical approach to producing high-quality programs. It combines ideas from test automation, test-driven development, agile programming, and other state-of-the-art software development methods. In addition to building on approaches that have proven effective in practice, it emphasizes concepts that help programmers sharpen their understanding of both the problems they are solving and the solutions they come up with. This is achieved by encouraging programmers to think about programs in terms of properties.

  8. Calibration Techniques for Accurate Measurements by Underwater Camera Systems

    Directory of Open Access Journals (Sweden)

    Mark Shortis

    2015-12-01

    Full Text Available Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  9. Calibration Techniques for Accurate Measurements by Underwater Camera Systems.

    Science.gov (United States)

    Shortis, Mark

    2015-12-07

    Calibration of a camera system is essential to ensure that image measurements result in accurate estimates of locations and dimensions within the object space. In the underwater environment, the calibration must implicitly or explicitly model and compensate for the refractive effects of waterproof housings and the water medium. This paper reviews the different approaches to the calibration of underwater camera systems in theoretical and practical terms. The accuracy, reliability, validation and stability of underwater camera system calibration are also discussed. Samples of results from published reports are provided to demonstrate the range of possible accuracies for the measurements produced by underwater camera systems.

  10. Quality control for quantitative geophysical logging

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Sang Kyu; Hwang, Se Ho; Hwang, Hak Soo; Park, In Hwa [Korea Institute of Geology Mining and Materials, Taejon (Korea)

    1998-12-01

Despite the great availability of geophysical data obtained from boreholes, their interpretation is subject to significant uncertainties. Obtaining more accurate data with smaller statistical uncertainty requires more quantitative techniques in both log acquisition and interpretation. The long-term objective of this project is the development of techniques for both quality control of log measurement and quantitative interpretation. In the first year, the goals of the project include establishing a log-acquisition procedure through various tests, analysing the effect of logging-velocity changes on the logging data, examining repeatability and reproducibility, analysing the effect of filtering on the log measurements, and finally the zonation and correlation of single- and inter-well log data. To establish the logging procedure, we tested the multiple factors affecting depth accuracy. These factors divide into two groups, human and mechanical, and include the zero setting of depth, the calculation of the sonde offset, cable stretching, and measuring-wheel accuracy. We conclude that errors in depth setting result primarily from human factors and in part from cable stretching. The statistical fluctuation of log measurements increases with logging speed in zones of low natural gamma. Although this makes logging speed a minor issue for resource-exploration applications, logging should be run more slowly when lithologic correlation is intended, to reduce the statistical fluctuation of the natural gamma log. The repeatability and reproducibility of the logging measurements were also tested. The repeatability results for the natural gamma sonde are qualitatively acceptable; in the reproducibility test, errors occur in logging data between two operators and between successive trials. We conclude that the errors result from the

  11. Quantitative mass spectrometry: an overview

    Science.gov (United States)

    Urban, Pawel L.

    2016-10-01

    Mass spectrometry (MS) is a mainstream chemical analysis technique in the twenty-first century. It has contributed to numerous discoveries in chemistry, physics and biochemistry. Hundreds of research laboratories scattered all over the world use MS every day to investigate fundamental phenomena on the molecular level. MS is also widely used by industry-especially in drug discovery, quality control and food safety protocols. In some cases, mass spectrometers are indispensable and irreplaceable by any other metrological tools. The uniqueness of MS is due to the fact that it enables direct identification of molecules based on the mass-to-charge ratios as well as fragmentation patterns. Thus, for several decades now, MS has been used in qualitative chemical analysis. To address the pressing need for quantitative molecular measurements, a number of laboratories focused on technological and methodological improvements that could render MS a fully quantitative metrological platform. In this theme issue, the experts working for some of those laboratories share their knowledge and enthusiasm about quantitative MS. I hope this theme issue will benefit readers, and foster fundamental and applied research based on quantitative MS measurements. This article is part of the themed issue 'Quantitative mass spectrometry'.

  12. Quantitative mass spectrometry: an overview

    Science.gov (United States)

    2016-01-01

    Mass spectrometry (MS) is a mainstream chemical analysis technique in the twenty-first century. It has contributed to numerous discoveries in chemistry, physics and biochemistry. Hundreds of research laboratories scattered all over the world use MS every day to investigate fundamental phenomena on the molecular level. MS is also widely used by industry—especially in drug discovery, quality control and food safety protocols. In some cases, mass spectrometers are indispensable and irreplaceable by any other metrological tools. The uniqueness of MS is due to the fact that it enables direct identification of molecules based on the mass-to-charge ratios as well as fragmentation patterns. Thus, for several decades now, MS has been used in qualitative chemical analysis. To address the pressing need for quantitative molecular measurements, a number of laboratories focused on technological and methodological improvements that could render MS a fully quantitative metrological platform. In this theme issue, the experts working for some of those laboratories share their knowledge and enthusiasm about quantitative MS. I hope this theme issue will benefit readers, and foster fundamental and applied research based on quantitative MS measurements. This article is part of the themed issue ‘Quantitative mass spectrometry’. PMID:27644965

  13. Bovine serum albumin detection and quantitation based on capacitance measurements of liquid crystals

    Science.gov (United States)

    Lin, Chi-Hao; Lee, Mon-Juan; Lee, Wei

    2016-08-01

    Liquid crystal (LC)-based biosensing is generally limited by the lack of accurate quantitative strategies. This study exploits the unique electric capacitance properties of LCs to establish quantitative assay methods for bovine serum albumin (BSA) biomolecules. By measuring the voltage-dependent electric capacitance of LCs under an alternating-current field with increasing amplitude, positive correlations were derived between the BSA concentration and the electric capacitance parameters of LCs. This study demonstrates that quantitative analysis can be achieved in LC-based biosensing through electric capacitance measurements extensively employed in LCD research and development.
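The quantitation step implied by the positive correlations in the abstract amounts to a calibration curve: fit capacitance against known BSA concentrations, then invert the fit for an unknown sample. A minimal sketch with invented numbers (the real response need not be linear):

```python
# Least-squares line fit (pure Python): capacitance C = m * conc + b,
# then invert the fit to quantify an unknown sample.
def fit_line(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    m = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return m, my - m * mx

# Hypothetical calibration standards: BSA concentration (ug/mL) vs capacitance (pF).
conc = [0.0, 1.0, 2.0, 4.0]
cap = [10.0, 12.0, 14.0, 18.0]   # perfectly linear toy data: C = 2*conc + 10

m, b = fit_line(conc, cap)
unknown_cap = 15.0
estimated_conc = (unknown_cap - b) / m   # 2.5 ug/mL for this toy data
```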

  14. An Analytic Method for Measuring Accurate Fundamental Frequency Components

    Energy Technology Data Exchange (ETDEWEB)

    Nam, Soon Ryul; Park Jong Keun [Seoul National University, Seoul(Korea); Kang, Sang Hee [Myongji University, Seoul (Korea)

    2002-04-01

This paper proposes an analytic method for accurately measuring the fundamental frequency component of a fault current signal distorted with a DC offset, a characteristic frequency component, and harmonics. The proposed algorithm is composed of four stages: sine filter, linear filter, Prony's method, and measurement. The sine filter and the linear filter eliminate harmonics and the fundamental frequency component, respectively. Then Prony's method is used to estimate the parameters of the DC offset and the characteristic frequency component. Finally, the fundamental frequency component is measured by compensating the sine-filtered signal with the estimated parameters. The performance evaluation of the proposed method is presented for a-phase-to-ground faults on a 345 kV, 200 km overhead transmission line. The EMTP is used to generate fault current signals under different fault locations and fault inception angles. It is shown that the analytic method accurately measures the fundamental frequency component regardless of the characteristic frequency component as well as the DC offset. (author). 19 refs., 4 figs., 4 tabs.
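One ingredient of such a scheme can be sketched simply: correlating one full cycle of samples with cosine and sine at the fundamental rejects integer harmonics exactly, because the basis functions are orthogonal over a full period. The Prony-based DC-offset compensation from the paper is omitted here; this toy (with invented signal values) shows only the harmonic-rejection step.

```python
import math

def fundamental_amplitude(samples, n_per_cycle):
    """Amplitude of the fundamental, from full-cycle Fourier correlation.
    Harmonics are orthogonal to the fundamental over an integer number
    of cycles and therefore cancel exactly."""
    N = n_per_cycle
    a = (2.0 / N) * sum(samples[n] * math.cos(2 * math.pi * n / N) for n in range(N))
    b = (2.0 / N) * sum(samples[n] * math.sin(2 * math.pi * n / N) for n in range(N))
    return math.hypot(a, b)

# Toy fault current: 100 A fundamental (arbitrary phase) plus a 20 A third harmonic.
N = 64
x = [100 * math.cos(2 * math.pi * n / N + 0.3) + 20 * math.cos(6 * math.pi * n / N)
     for n in range(N)]
amp = fundamental_amplitude(x, N)   # recovers 100 despite the harmonic
```

A decaying DC offset, unlike the harmonic, is not orthogonal to the basis, which is why the paper needs the Prony estimation stage to compensate for it.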

  15. Is Cancer Information Exchanged on Social Media Scientifically Accurate?

    Science.gov (United States)

    Gage-Bouchard, Elizabeth A; LaValley, Susan; Warunek, Molli; Beaupin, Lynda Kwon; Mollica, Michelle

    2017-07-19

Cancer patients and their caregivers are increasingly using social media as a platform to share cancer experiences, connect with support, and exchange cancer-related information. Yet, little is known about the nature and scientific accuracy of cancer-related information exchanged on social media. We conducted a content analysis of 12 months of data from 18 publicly available Facebook Pages hosted by parents of children with acute lymphoblastic leukemia (N = 15,852 posts) and extracted all exchanges of medically-oriented cancer information. We systematically coded for themes in the nature of cancer-related information exchanged on personal Facebook Pages and two oncology experts independently evaluated the scientific accuracy of each post. Of the 15,852 total posts, 171 posts contained medically-oriented cancer information. The most frequent type of cancer information exchanged was information related to treatment protocols and health services use (35%) followed by information related to side effects and late effects (26%), medication (16%), medical caregiving strategies (13%), alternative and complementary therapies (8%), and other (2%). Overall, 67% of all cancer information exchanged was deemed medically/scientifically accurate, 19% was not medically/scientifically accurate, and 14% described unproven treatment modalities. These findings highlight the potential utility of social media as a cancer-related resource, but also indicate that providers should focus on recommending reliable, evidence-based sources to patients and caregivers.

  16. Novel dispersion tolerant interferometry method for accurate measurements of displacement

    Science.gov (United States)

    Bradu, Adrian; Maria, Michael; Leick, Lasse; Podoleanu, Adrian G.

    2015-05-01

We demonstrate that the recently proposed master-slave interferometry method is able to provide truly dispersion-free depth profiles in a spectrometer-based set-up that can be used for accurate displacement measurements in sensing and optical coherence tomography. The proposed technique is based on correlating the channelled spectra produced by the linear camera in the spectrometer with previously recorded masks. As the technique is not based on Fourier transforms (FT), it does not require any resampling of data and is immune to any amount of dispersion left unbalanced in the system. To prove the tolerance of the technique to dispersion, different lengths of optical fiber are used in the interferometer to introduce dispersion, and it is demonstrated that neither the sensitivity profile versus optical path difference (OPD) nor the depth resolution is affected. In contrast, it is shown that the classical FT-based methods using calibrated data provide less accurate optical path length measurements and exhibit a quicker decay of sensitivity with OPD.

  17. More-Accurate Model of Flows in Rocket Injectors

    Science.gov (United States)

    Hosangadi, Ashvin; Chenoweth, James; Brinckman, Kevin; Dash, Sanford

    2011-01-01

    An improved computational model for simulating flows in liquid-propellant injectors in rocket engines has been developed. Models like this one are needed for predicting fluxes of heat in, and performances of, the engines. An important part of predicting performance is predicting fluctuations of temperature, fluctuations of concentrations of chemical species, and effects of turbulence on diffusion of heat and chemical species. Customarily, diffusion effects are represented by parameters known in the art as the Prandtl and Schmidt numbers. Prior formulations include ad hoc assumptions of constant values of these parameters, but these assumptions and, hence, the formulations, are inaccurate for complex flows. In the improved model, these parameters are neither constant nor specified in advance: instead, they are variables obtained as part of the solution. Consequently, this model represents the effects of turbulence on diffusion of heat and chemical species more accurately than prior formulations do, and may enable more-accurate prediction of mixing and flows of heat in rocket-engine combustion chambers. The model has been implemented within CRUNCH CFD, a proprietary computational fluid dynamics (CFD) computer program, and has been tested within that program. The model could also be implemented within other CFD programs.

  18. The economic value of accurate wind power forecasting to utilities

    Energy Technology Data Exchange (ETDEWEB)

    Watson, S.J. [Rutherford Appleton Lab., Oxfordshire (United Kingdom); Giebel, G.; Joensen, A. [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

With increasing penetrations of wind power, the need for accurate forecasting is becoming ever more important. Wind power is by its very nature intermittent. For utility schedulers this presents its own problems, particularly when the penetration of wind power capacity in a grid reaches a significant level (>20%). However, using accurate forecasts of wind power at wind farm sites, schedulers are able to plan the operation of conventional power capacity to accommodate the fluctuating demands of consumers and wind farm output. The results of a study to assess the value of forecasting at several potential wind farm sites in the UK and in the US state of Iowa using the Reading University/Rutherford Appleton Laboratory National Grid Model (NGM) are presented. The results are assessed for different types of wind power forecasting, namely persistence, optimised numerical weather prediction, and perfect forecasting. In particular, it is shown how the NGM has been used to assess the value of numerical weather prediction forecasts from the Danish Meteorological Institute model, HIRLAM, and the US Nested Grid Model, which have been 'site tailored' by the use of the linearized flow model WAsP and by various Model Output Statistics (MOS) and autoregressive techniques. (au)
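Persistence, the simplest of the forecast types compared in the study, just predicts that the next value equals the current one; its error is the benchmark any weather-model forecast must beat. A toy sketch with invented output values:

```python
def persistence_mae(series):
    """Mean absolute error of the persistence forecast
    (the forecast for step t is the observation at step t-1)."""
    errors = [abs(series[t] - series[t - 1]) for t in range(1, len(series))]
    return sum(errors) / len(errors)

# Hypothetical hourly wind farm output (MW).
output = [12.0, 14.0, 13.0, 13.0, 17.0]
mae = persistence_mae(output)   # (2 + 1 + 0 + 4) / 4 = 1.75 MW
```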

  19. Ultra-accurate collaborative information filtering via directed user similarity

    Science.gov (United States)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

A key challenge of collaborative filtering (CF) is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users are larger than the ones in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to depress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm specifically to address the challenge of accuracy and diversity of the CF algorithm. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an improvement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can yield accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor in improving personalized recommendation performance.
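The abstract does not give the similarity formula, but the asymmetry it describes can be illustrated with one plausible directed normalization (an assumption for illustration, not necessarily the paper's exact definition): divide the common-item count by the *source* user's degree, so similarity from a small-degree user toward a large-degree user exceeds the reverse.

```python
def directed_similarity(items_u, items_v):
    """Similarity from user u to user v: shared items normalized by u's degree.
    (Illustrative normalization choice; the paper's definition may differ.)"""
    if not items_u:
        return 0.0
    return len(items_u & items_v) / len(items_u)

small = {1, 2}          # small-degree user: 2 items
large = {1, 2, 3, 4}    # large-degree user: 4 items

s_small_to_large = directed_similarity(small, large)  # 2/2 = 1.0
s_large_to_small = directed_similarity(large, small)  # 2/4 = 0.5
```

Distinguishing the two directions is what lets the algorithm depress the otherwise dominant influence of large-degree (mainstream) users.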

  20. Accurate and precise zinc isotope ratio measurements in urban aerosols.

    Science.gov (United States)

    Gioia, Simone; Weiss, Dominik; Coles, Barry; Arnold, Tim; Babinski, Marly

    2008-12-15

We developed an analytical method and constrained procedural boundary conditions that enable accurate and precise Zn isotope ratio measurements in urban aerosols. We also demonstrate the potential of this new isotope system for air pollutant source tracing. The procedural blank is around 5 ng and significantly lower than published methods due to a tailored ion chromatographic separation. Accurate mass bias correction using external correction with Cu is limited to a Zn sample content of approximately 50 ng due to the combined effect of the blank contribution of Cu and Zn from the ion exchange procedure and the need to maintain a Cu/Zn ratio of approximately 1. Mass bias is corrected for by applying the common analyte internal standardization method approach. Comparison with other mass bias correction methods demonstrates the accuracy of the method. The average precision of delta(66)Zn determinations in aerosols is around 0.05 per thousand per atomic mass unit. The method was tested on aerosols collected in Sao Paulo City, Brazil. The measurements reveal significant variations in delta(66)Zn(Imperial), ranging between -0.96 and -0.37 per thousand in coarse and between -1.04 and 0.02 per thousand in fine particulate matter. This variability suggests that Zn isotopic compositions distinguish atmospheric sources. The isotopically light signature suggests traffic as the main source. We present further delta(66)Zn(Imperial) data for the standard reference material NIST SRM 2783 (delta(66)Zn(Imperial) = 0.26 +/- 0.10 per thousand).
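The delta values quoted in the abstract follow the standard per-mil convention: the relative deviation of the sample's isotope ratio from a reference standard, times 1000. A minimal sketch with invented ratios (not measured values from the paper):

```python
def delta66zn(r_sample, r_standard):
    """delta(66)Zn in per mil (per thousand): deviation of the sample's
    66Zn/64Zn ratio from the reference standard's ratio, times 1000."""
    return (r_sample / r_standard - 1.0) * 1000.0

# Hypothetical isotope ratios.
r_std = 2.0000
r_sample = 1.9981            # isotopically light relative to the standard
d = delta66zn(r_sample, r_std)   # about -0.95 per mil
```

Negative delta values like those reported for the Sao Paulo aerosols mean the sample is depleted in the heavy isotope relative to the standard.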

  1. Accurate Runout Measurement for HDD Spinning Motors and Disks

    Science.gov (United States)

    Jiang, Quan; Bi, Chao; Lin, Song

As hard disk drive (HDD) areal density increases, track widths become smaller and smaller, and so does non-repeatable runout. The HDD industry needs more accurate, better-resolution runout measurements of spinning spindle motors and media platters in both the axial and radial directions. This paper introduces a new system that precisely measures the runout of HDD spinning disks and motors by synchronously acquiring the rotor position signal and the displacements in the axial or radial direction. To minimize the synchronization error between the rotor position and the displacement signal, a high-resolution counter is adopted instead of the conventional phase-locked-loop method. With a Laser Doppler Vibrometer and proper signal processing, the proposed system can precisely measure the runout of HDD spinning disks and motors with 1 nm resolution and 0.2% accuracy at a proper sampling rate. It provides an effective and accurate means to measure the runout of high-areal-density HDDs, in particular next-generation HDDs such as patterned-media HDDs and HAMR HDDs.

  2. Accurate Interatomic Force Fields via Machine Learning with Covariant Kernels

    CERN Document Server

    Glielmo, Aldo; De Vita, Alessandro

    2016-01-01

    We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian Process (GP) Regression. This is based on matrix-valued kernel functions, to which we impose that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such "covariant" GP kernels can be obtained by integration over the elements of the rotation group SO(d) for the relevant dimensionality d. Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni and Fe crystalline...

  3. Accurate interatomic force fields via machine learning with covariant kernels

    Science.gov (United States)

    Glielmo, Aldo; Sollich, Peter; De Vita, Alessandro

    2017-06-01

    We present a novel scheme to accurately predict atomic forces as vector quantities, rather than sets of scalar components, by Gaussian process (GP) regression. This is based on matrix-valued kernel functions, on which we impose the requirements that the predicted force rotates with the target configuration and is independent of any rotations applied to the configuration database entries. We show that such covariant GP kernels can be obtained by integration over the elements of the rotation group SO (d ) for the relevant dimensionality d . Remarkably, in specific cases the integration can be carried out analytically and yields a conservative force field that can be recast into a pair interaction form. Finally, we show that restricting the integration to a summation over the elements of a finite point group relevant to the target system is sufficient to recover an accurate GP. The accuracy of our kernels in predicting quantum-mechanical forces in real materials is investigated by tests on pure and defective Ni, Fe, and Si crystalline systems.

  4. Accurately controlled sequential self-folding structures by polystyrene film

    Science.gov (United States)

    Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse

    2017-08-01

Four-dimensional (4D) printing overcomes traditional fabrication limitations by designing heterogeneous materials that enable the printed structures to evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing of self-folding structures that can be sequentially and accurately folded. When heated above their glass transition temperature, pre-strained polystyrene films shrink along the XY plane. In our process, silver ink traces printed on the film provide heat stimuli by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and an angle-lock design. Theoretical analyses are done to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are achieved to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under controlled stimuli (electric current) and have potential applications in the fields of electronics, consumer devices, and robotics. Our design and fabrication method provides an easy way, using silver ink printed on polystyrene films, to 4D print self-folding structures for electrically induced sequential folding with angular control.

  5. Learning fast accurate movements requires intact frontostriatal circuits

    Directory of Open Access Journals (Sweden)

Britne Shabbott

    2013-11-01

Full Text Available The basal ganglia are known to play a crucial role in movement execution, but their importance for motor skill learning remains unclear. Obstacles to our understanding include the lack of a universally accepted definition of motor skill learning (definition confound), and difficulties in distinguishing learning deficits from execution impairments (performance confound). We studied how healthy subjects and subjects with a basal ganglia disorder learn fast accurate reaching movements, and we addressed the definition and performance confounds by (1) focusing on an operationally defined core element of motor skill learning (speed-accuracy learning), and (2) using normal variation in initial performance to separate movement execution impairment from motor learning abnormalities. We measured motor skill learning as performance improvement in a reaching task with a speed-accuracy trade-off. We compared the performance of subjects with Huntington's disease (HD), a neurodegenerative basal ganglia disorder, to that of premanifest carriers of the HD mutation and of control subjects. The initial movements of HD subjects were less skilled (slower and/or less accurate) than those of control subjects. To factor out these differences in initial execution, we modeled the relationship between learning and baseline performance in control subjects. Subjects with HD exhibited a clear learning impairment that was not explained by differences in initial performance. These results support a role for the basal ganglia in both movement execution and motor skill learning.

  6. Simple and accurate optical height sensor for wafer inspection systems

    Science.gov (United States)

    Shimura, Kei; Nakai, Naoya; Taniguchi, Koichi; Itoh, Masahide

    2016-02-01

    An accurate method for measuring the wafer surface height is required for wafer inspection systems to adjust the focus of inspection optics quickly and precisely. A method for projecting a laser spot onto the wafer surface obliquely and for detecting its image displacement using a one-dimensional position-sensitive detector is known, and a variety of methods have been proposed for improving the accuracy by compensating the measurement error due to the surface patterns. We have developed a simple and accurate method in which an image of a reticle with eight slits is projected on the wafer surface and its reflected image is detected using an image sensor. The surface height is calculated by averaging the coordinates of the images of the slits in both the two directions in the captured image. Pattern-related measurement error was reduced by applying the coordinates averaging to the multiple-slit-projection method. Accuracy of better than 0.35 μm was achieved for a patterned wafer at the reference height and ±0.1 mm from the reference height in a simple configuration.
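The height calculation described above averages the image coordinates of the eight slit images, so pattern-induced offsets that are roughly zero-mean across the slits cancel. A toy sketch with invented numbers (the pixel-to-height scale factor, which depends on the projection angle and optics, is a hypothetical value):

```python
def surface_height(slit_coords, scale):
    """Estimate surface height from the mean coordinate of the slit images.
    `scale` converts image displacement (pixels) to height (micrometres);
    the value used below is an assumed placeholder, not from the paper."""
    mean_coord = sum(slit_coords) / len(slit_coords)
    return mean_coord * scale

true_displacement = 40.0  # pixels, common to all eight slit images
# Pattern-induced per-slit offsets; zero-mean in this toy, so they average out:
offsets = [0.8, -0.5, 0.3, -0.6, 0.4, -0.4, 0.5, -0.5]
coords = [true_displacement + o for o in offsets]

height = surface_height(coords, scale=0.01)  # 0.01 um/pixel assumed -> 0.4 um
```

Averaging over multiple slits (and in both image directions, as the paper does) is what suppresses the pattern-related error that plagues single-spot methods.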

  7. Quality metric for accurate overlay control in <20nm nodes

    Science.gov (United States)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget is getting tighter at these advanced nodes, the importance in the accuracy in each nanometer of OVL error is critical. When process owners select OVL targets and methods for their process, they must do it wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going towards the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named `Qmerit' for its imaging based OVL (IBO) targets, which is obtained on the-fly for each OVL measurement point in X & Y. This Qmerit score will enable the process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  8. Towards an accurate determination of the age of the Universe

    CERN Document Server

    Jiménez, R

    1998-01-01

In the past 40 years a considerable effort has been focused on determining the age of the Universe at zero redshift using several stellar clocks. In this review I will describe the best theoretical methods to determine the age of the oldest Galactic Globular Clusters (GC). I will also argue that a more accurate age determination may come from passively evolving high-redshift ellipticals. In particular, I will review two new methods to determine the age of GCs. These two methods are more accurate than the classical isochrone fitting technique. The first method is based on the morphology of the horizontal branch and is independent of the distance modulus of the globular cluster. The second method uses a careful binning of the stellar luminosity function, which determines simultaneously the distance and age of the GC. It is found that the oldest GCs have an age of $13.5 \pm 2$ Gyr. The absolute minimum age for the oldest GCs is 10.5 Gyr and the maximum is 16.0 Gyr (with 99% confidence). Therefore, an Einstein-De S...

  9. Accurate stone analysis: the impact on disease diagnosis and treatment.

    Science.gov (United States)

    Mandel, Neil S; Mandel, Ian C; Kolbach-Mandel, Ann M

    2017-02-01

This manuscript reviews the requirements for acceptable compositional analysis of kidney stones using various biophysical methods. High-resolution X-ray powder diffraction crystallography and Fourier transform infrared spectroscopy (FTIR) are the only acceptable methods in our labs for kidney stone analysis. The use of well-constructed spectral reference libraries is the basis for accurate and complete stone analysis. The literature included in this manuscript identifies errors in most commercial laboratories and in some academic centers. We provide personal comments on why such errors are occurring at such high rates; although the workload is rather large, it is very worthwhile in providing accurate stone compositions. We also provide the results of our almost 90,000 stone analyses and a breakdown of the number of components we have observed in the various stones. We also offer advice on determining the method used by the various FTIR equipment manufacturers who also provide a stone analysis library, so that FTIR users can feel comfortable in the accuracy of their reported results. Such an analysis of the accuracy of the individual reference libraries could positively influence the reduction in their respective error rates.

  10. Accurate prediction of secondary metabolite gene clusters in filamentous fungi.

    Science.gov (United States)

    Andersen, Mikael R; Nielsen, Jakob B; Klitgaard, Andreas; Petersen, Lene M; Zachariasen, Mia; Hansen, Tilde J; Blicher, Lene H; Gotfredsen, Charlotte H; Larsen, Thomas O; Nielsen, Kristian F; Mortensen, Uffe H

    2013-01-02

    Biosynthetic pathways of secondary metabolites from fungi are currently subject to an intense effort to elucidate the genetic basis for these compounds due to their large potential within pharmaceutics and synthetic biochemistry. The preferred method is methodical gene deletions to identify supporting enzymes for key synthases one cluster at a time. In this study, we design and apply a DNA expression array for Aspergillus nidulans in combination with legacy data to form a comprehensive gene expression compendium. We apply a guilt-by-association-based analysis to predict the extent of the biosynthetic clusters for the 58 synthases active in our set of experimental conditions. A comparison with legacy data shows the method to be accurate in 13 of 16 known clusters and nearly accurate for the remaining 3 clusters. Furthermore, we apply a data clustering approach, which identifies cross-chemistry between physically separate gene clusters (superclusters), and validate this both with legacy data and experimentally by prediction and verification of a supercluster consisting of the synthase AN1242 and the prenyltransferase AN11080, as well as identification of the product compound nidulanin A. We have used A. nidulans for our method development and validation due to the wealth of available biochemical data, but the method can be applied to any fungus with a sequenced and assembled genome, thus supporting further secondary metabolite pathway elucidation in the fungal kingdom.
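The core of the guilt-by-association analysis can be sketched simply: genes physically adjacent to a synthase whose expression profiles correlate strongly with it across conditions are assigned to its cluster. A pure-Python toy with invented expression values (the paper's actual thresholds and statistics differ):

```python
import math

def pearson(a, b):
    """Pearson correlation of two equal-length expression profiles."""
    n = len(a)
    ma, mb = sum(a) / n, sum(b) / n
    cov = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    sa = math.sqrt(sum((x - ma) ** 2 for x in a))
    sb = math.sqrt(sum((y - mb) ** 2 for y in b))
    return cov / (sa * sb)

# Hypothetical expression profiles over five experimental conditions.
synthase = [1.0, 5.0, 2.0, 8.0, 1.0]
neighbours = {
    "geneA": [1.1, 4.8, 2.2, 7.9, 0.9],   # co-regulated with the synthase
    "geneB": [3.0, 3.1, 2.9, 3.0, 3.2],   # flat, uncorrelated
}

# Adjacent genes whose profiles track the synthase join the predicted cluster.
cluster = [g for g, prof in neighbours.items()
           if pearson(synthase, prof) > 0.9]
```

The same correlation logic, applied across physically separate clusters, is what reveals the "supercluster" cross-chemistry described in the abstract.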

  11. Accurate location estimation of moving object In Wireless Sensor network

    Directory of Open Access Journals (Sweden)

    Vinay Bhaskar Semwal

    2011-12-01

    Full Text Available One of the central issues in wireless sensor networks is tracking the location of a moving object, which carries the overhead of storing data and of accurately estimating the target's location under energy constraints. There is no built-in mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields that use this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the collected information can be fed back to central air conditioning and ventilation systems. In this research paper, we propose a protocol based on prediction and an adaptive algorithm that reduces the number of sensor nodes needed through accurate estimation of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network while using fewer sensor nodes. Once a new object is detected, a mobile agent is initiated to track the roaming path of the object.
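    The prediction-based tracking idea in this record — use the target's recent positions to predict where it will go next, and wake only the sensor nodes near that predicted point — can be sketched as follows. The constant-velocity predictor and the node-selection radius are illustrative assumptions, not the authors' actual protocol:

    ```python
    import math

    def predict_next(p_prev, p_curr):
        """Constant-velocity prediction: extrapolate the last displacement."""
        return (2 * p_curr[0] - p_prev[0], 2 * p_curr[1] - p_prev[1])

    def nodes_to_activate(nodes, predicted, sensing_range):
        """Wake only nodes whose sensing disc covers the predicted position;
        all other nodes can stay asleep and save energy."""
        return [n for n in nodes if math.dist(n, predicted) <= sensing_range]

    # Target observed at (0, 0) then (1, 1): predicted next position is (2, 2),
    # so only the node at (2.0, 2.5) needs to be active.
    pred = predict_next((0.0, 0.0), (1.0, 1.0))
    active = nodes_to_activate([(2.0, 2.5), (8.0, 8.0)], pred, sensing_range=1.0)
    ```

    The energy saving comes from the second step: nodes far from the predicted track are never woken unless the prediction fails.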

  12. Accurate phylogenetic classification of DNA fragments based on sequence composition

    Energy Technology Data Exchange (ETDEWEB)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1 kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
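    The core of a composition-based classifier like PhyloPythia is turning each DNA fragment into a fixed-length feature vector of k-mer frequencies, on which a standard classifier can then be trained. The sketch below shows only that featurization step; the k = 4 choice and the toy fragment are illustrative, and the actual PhyloPythia feature set and SVM training are not reproduced here:

    ```python
    from collections import Counter
    from itertools import product

    def kmer_composition(seq, k=4):
        """Normalized k-mer frequency vector over the ACGT alphabet.
        Every fragment maps to the same 4**k dimensions, regardless of length."""
        counts = Counter(seq[i:i + k] for i in range(len(seq) - k + 1))
        total = sum(counts.values())
        return {"".join(p): counts["".join(p)] / total
                for p in product("ACGT", repeat=k)}

    # A toy 16 bp fragment yields a 256-dimensional composition vector.
    vec = kmer_composition("ACGTACGTACGTACGT", k=4)
    ```

    Because the vector is normalized, fragments of different lengths become comparable, which is what lets the method work on sequences as short as 1 kb.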

  13. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Mingheng

    2013-01-01

    Full Text Available Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement of intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multistep prediction can forecast traffic state trends over a certain period in the future; from the perspective of dynamic decision making, this is far more important than the current traffic condition alone. Thus, in this paper, an accurate multistep traffic flow prediction model based on SVM is proposed, in which the input vectors are composed of actual traffic volumes, and four different types of input vectors are compared to verify their prediction performance against each other. Finally, the model is verified with actual data in the empirical analysis phase, and the test results show that the proposed SVM model has a good ability for traffic flow prediction and that the SVM-HPT model outperforms the other three models.
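    A common way to obtain multistep forecasts, and the natural reading of the scheme described here, is recursive prediction: each one-step forecast is appended to the input window for the next step. The sketch below uses a trivial average-of-window predictor as a stand-in for the trained SVM regressor; the function names and window size are assumptions:

    ```python
    def predict_one_step(window):
        """Stand-in one-step regressor: mean of the input window.
        In the paper's model, a trained SVM regressor would go here."""
        return sum(window) / len(window)

    def predict_multistep(history, n_steps, window_size=3):
        """Recursive multistep prediction: feed each forecast back as input."""
        data = list(history)
        forecasts = []
        for _ in range(n_steps):
            y = predict_one_step(data[-window_size:])
            forecasts.append(y)
            data.append(y)
        return forecasts

    # Two steps ahead from three observed traffic volumes.
    f = predict_multistep([10.0, 12.0, 14.0], n_steps=2)
    ```

    The recursion is what distinguishes multistep from single-step prediction: errors compound across steps, which is why the choice of input vector matters so much in the paper's comparison.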

  14. Cerebral fat embolism: Use of MR spectroscopy for accurate diagnosis

    Directory of Open Access Journals (Sweden)

    Laxmi Kokatnur

    2015-01-01

    Full Text Available Cerebral fat embolism (CFE) is an uncommon but serious complication following orthopedic procedures. It usually presents with altered mental status and can be a part of fat embolism syndrome (FES) if associated with cutaneous and respiratory manifestations. Because of the presence of other common factors affecting the mental status, particularly in the postoperative period, the diagnosis of CFE can be challenging. Magnetic resonance imaging (MRI) of the brain typically shows multiple lesions distributed predominantly in the subcortical region, which appear as hyperintense lesions on T2 and diffusion weighted images. Although the location offers a clue, the MRI findings are not specific for CFE. Watershed infarcts, hypoxic encephalopathy, disseminated infections, demyelinating disorders, and diffuse axonal injury can also show similar changes on brain MRI. The presence of fat in these hyperintense lesions, identified by MR spectroscopy as raised lipid peaks, helps in the accurate diagnosis of CFE. Normal brain tissue and conditions producing similar MRI changes will not show any lipid peak on MR spectroscopy. We present a case of CFE initially misdiagnosed as brain stem stroke based on clinical presentation and cranial computed tomography (CT) scan; later, MR spectroscopy elucidated the accurate diagnosis.

  15. The usefulness of 3D quantitative analysis with using MRI for measuring osteonecrosis of the femoral head

    Energy Technology Data Exchange (ETDEWEB)

    Hwang, Ji Young; Lee, Sun Wha [Ewha Womans University College of Medicine, Seoul (Korea, Republic of); Park, Youn Soo [Samsung Medical Center, Sungkyunkwan University School of Medicine, Seoul (Korea, Republic of)

    2006-11-15

    We wanted to evaluate the usefulness of MRI 3D quantitative analysis for measuring osteonecrosis of the femoral head, in comparison with MRI 2D quantitative analysis and quantitative analysis of the specimen. Over 3 months at our hospital, 14 femoral head specimens with osteonecrosis were obtained after total hip arthroplasty. The patients' preoperative MRIs were retrospectively reviewed for quantitative analysis of the size of the necrosis. Each necrotic fraction of the femoral head was measured by 2D quantitative analysis using mid-coronal and mid-sagittal MRIs, and by 3D quantitative analysis using serial continuous coronal MRIs and 3D reconstruction software. The necrotic fraction of the specimen was physically measured by the fluid displacement method. The necrotic fraction according to MRI 2D or 3D quantitative analysis was compared with that of the specimen by using Spearman's correlation test. On the correlative analysis, the necrotic fraction by MRI 2D quantitative analysis and quantitative analysis of the specimen showed moderate correlation (r = 0.657); on the other hand, the necrotic fraction by MRI 3D quantitative analysis and quantitative analysis of the specimen demonstrated a strong correlation (r = 0.952) (p < 0.05). MRI 3D quantitative analysis was more accurate than 2D quantitative analysis using MRI for measuring osteonecrosis of the femoral head. Therefore, it may be useful for predicting the clinical outcome and deciding the proper treatment option.
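    The 3D quantitative analysis in this record amounts to integrating necrotic and total cross-sectional areas over the serial coronal slices and taking their ratio. A minimal sketch of that volume-fraction computation (the per-slice areas and the uniform slice thickness are hypothetical inputs, not the study's data):

    ```python
    def necrotic_fraction(necrotic_areas, head_areas, slice_thickness=1.0):
        """Volume fraction from serial slices: each slice contributes
        area * thickness; with uniform thickness the thickness cancels
        in the ratio, but it is kept explicit for clarity."""
        necrotic_volume = sum(necrotic_areas) * slice_thickness
        head_volume = sum(head_areas) * slice_thickness
        return necrotic_volume / head_volume

    # Four hypothetical coronal slices (areas in cm^2).
    frac = necrotic_fraction([0.0, 2.0, 3.0, 1.0], [10.0, 12.0, 12.0, 10.0])
    ```

    Summing over every slice is what makes the 3D estimate track the fluid-displacement measurement better than a 2D estimate built from only the mid-coronal and mid-sagittal sections.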

  16. Quantitative Proteome Mapping of Nitrotyrosines

    Energy Technology Data Exchange (ETDEWEB)

    Bigelow, Diana J.; Qian, Weijun

    2008-02-10

    An essential first step in understanding disease and environmental perturbations is the early and quantitative detection of increased levels of the inflammatory marker nitrotyrosine, as compared with its endogenous levels within the tissue or cellular proteome. Thus, methods that successfully address proteome-wide quantitation of nitrotyrosine and related oxidative modifications can provide early biomarkers of risk and progression of disease as well as effective strategies for therapy. Multidimensional liquid chromatography (LC) coupled with tandem mass spectrometry (LC-MS/MS) has, in recent years, significantly expanded our knowledge of human (and mammalian model system) proteomes, including some nascent work in the identification of post-translational modifications. In the following review, we discuss the application of LC-MS/MS for quantitation and identification of nitrotyrosine-modified proteins within the context of the complex protein mixtures presented in mammalian proteomes.

  17. Magnetic Resonance Imaging of Intracranial Hypotension: Diagnostic Value of Combined Qualitative Signs and Quantitative Metrics.

    Science.gov (United States)

    Aslan, Kerim; Gunbey, Hediye Pinar; Tomak, Leman; Ozmen, Zafer; Incesu, Lutfi

    2017-07-13

    The aim of this study was to investigate whether the use of combined quantitative metrics (mamillopontine distance [MPD], pontomesencephalic angle, and mesencephalon anterior-posterior/medial-lateral diameter ratio) and qualitative signs (dural enhancement, subdural collections/hematoma, venous engorgement, pituitary gland enlargement, and tonsillar herniation) provides a more accurate diagnosis of intracranial hypotension (IH). The quantitative metrics and qualitative signs of 34 patients and 34 control subjects were assessed by 2 independent observers. A receiver operating characteristic (ROC) curve was used to evaluate the diagnostic performance of the quantitative metrics and qualitative signs, and optimum cutoff values of the quantitative metrics for the diagnosis of IH were found with ROC analysis. Combined ROC curves were computed for combinations of the quantitative metrics and qualitative signs to determine diagnostic accuracy; sensitivity, specificity, and positive and negative predictive values were found, and the best combined model was formed. MPD and pontomesencephalic angle were significantly lower in patients with IH than in the control group. Of the quantitative metrics, the highest individual distinctive power was MPD, with an AUC of 0.947. The best accuracy in the diagnosis of IH was obtained by the combination of dural enhancement, venous engorgement, and MPD, with an AUC of 1.00. This study showed that the combined use of dural enhancement, venous engorgement, and MPD had a diagnostic accuracy of 100% for the diagnosis of IH. Therefore, a more accurate IH diagnosis can be provided by combining quantitative metrics with qualitative signs.
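    The ROC-based cutoff selection used for the quantitative metrics can be sketched with the Youden index (sensitivity + specificity − 1), maximized over candidate thresholds. The data values below are hypothetical; following the MPD finding, lower metric values are taken to indicate disease:

    ```python
    def youden_cutoff(patient_values, control_values):
        """Pick the threshold maximizing sensitivity + specificity - 1,
        where a value <= threshold is called positive (disease)."""
        best_j, best_t = -1.0, None
        for t in sorted(set(patient_values + control_values)):
            sens = sum(v <= t for v in patient_values) / len(patient_values)
            spec = sum(v > t for v in control_values) / len(control_values)
            j = sens + spec - 1.0
            if j > best_j:
                best_j, best_t = j, t
        return best_t, best_j

    # Hypothetical MPD-like measurements (mm) for patients vs. controls.
    cutoff, j = youden_cutoff([3.0, 3.5, 4.0], [5.5, 6.0, 6.5])
    ```

    With perfectly separated groups, as in this toy example, the Youden index reaches 1.0, corresponding to 100% sensitivity and specificity at the chosen cutoff.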

  18. Semi-Quantitative Group Testing

    CERN Document Server

    Emad, Amin

    2012-01-01

    We consider a novel group testing procedure, termed semi-quantitative group testing, motivated by a class of problems arising in genome sequence processing. Semi-quantitative group testing (SQGT) is a non-binary pooling scheme that may be viewed as a combination of an adder model followed by a quantizer. For the new testing scheme we define the capacity and evaluate the capacity for some special choices of parameters using information theoretic methods. We also define a new class of disjunct codes suitable for SQGT, termed SQ-disjunct codes. We also provide both explicit and probabilistic code construction methods for SQGT with simple decoding algorithms.
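    The SQGT test outcome described here — an adder followed by a quantizer — can be sketched directly. The threshold values are arbitrary illustrative choices, not those analyzed in the paper:

    ```python
    import bisect

    def sqgt_outcome(pool, defectives, thresholds=(1, 3, 5)):
        """Semi-quantitative group test: an adder (count the defectives
        present in the pool) followed by a quantizer (bin the count by
        the given thresholds)."""
        count = sum(1 for item in pool if item in defectives)
        # Outcome = number of thresholds the count has reached.
        return bisect.bisect_right(thresholds, count)

    # Pool of four items, two of which are defective: count 2 falls in
    # the bin between thresholds 1 and 3.
    out = sqgt_outcome(pool=["a", "b", "c", "d"], defectives={"b", "c"})
    ```

    Binary group testing is the special case of a single threshold at 1; the non-binary outcome is what SQ-disjunct codes are designed to exploit.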

  19. Quantitative two-qutrit entanglement

    Energy Technology Data Exchange (ETDEWEB)

    Eltschka, Christopher [Institut fuer Theoretische Physik, Universitaet Regensburg, D-93040 Regensburg (Germany); Siewert, Jens [Departamento de Quimica Fisica, Universidad del Pais Vasco UPV/EHU, 48080 Bilbao (Spain); IKERBASQUE, Basque Foundation for Science, 48011 Bilbao (Spain)

    2013-07-01

    We introduce the new concept of axisymmetric bipartite states. For d x d-dimensional systems these states form a two-parameter family of nontrivial mixed states that include the isotropic states. We present exact quantitative results for class-specific entanglement as well as for the negativity and I-concurrence of two-qutrit axisymmetric states. These results have interesting applications such as for quantitative witnesses of class-specific entanglement in arbitrary two-qutrit states and as device-independent witness for the number of entangled dimensions.

  20. When is Quantitative Easing effective?

    OpenAIRE

    Hoermann, Markus; Schabert, Andreas

    2011-01-01

    We present a simple macroeconomic model with open market operations that allows examining the effects of quantitative and credit easing. The central bank controls the policy rate, i.e. the price of money in open market operations, as well as the amount and the type of assets that are accepted as collateral for money. When the policy rate is sufficiently low, this set-up gives rise to an (il-)liquidity premium on non-eligible assets. Then, a quantitative easing policy, which increases the size...

  1. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    Energy Technology Data Exchange (ETDEWEB)

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient ''dry-bucket'' in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: ''The KFM, A Homemade Yet Accurate and Dependable Fallout Meter'' was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these
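    The KFM's rate-meter use — converting an observed drop in leaf separation over a timed exposure into a dose rate — is essentially a calibration-table lookup followed by a division by the exposure time. The sketch below illustrates that arithmetic; the calibration values are purely hypothetical placeholders, not the KFM's actual attached table:

    ```python
    def dose_rate(separation_drop_mm, exposure_minutes, calibration):
        """Interpolate total dose (R) from the drop in leaf separation,
        then divide by the exposure time to get a rate in R/hr."""
        pts = sorted(calibration.items())
        for (d0, r0), (d1, r1) in zip(pts, pts[1:]):
            if d0 <= separation_drop_mm <= d1:
                dose = r0 + (r1 - r0) * (separation_drop_mm - d0) / (d1 - d0)
                return dose / (exposure_minutes / 60.0)
        raise ValueError("separation drop outside calibration range")

    # Hypothetical calibration: mm of leaf-separation drop -> dose in roentgens.
    # A 5 mm drop over a 15-minute exposure reads as 0.5 R / 0.25 hr = 2 R/hr.
    rate = dose_rate(5.0, exposure_minutes=15, calibration={0.0: 0.0, 10.0: 1.0})
    ```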

  2. Shadow photogrammetric apparatus for the quantitative evaluation of corneal buttons.

    Science.gov (United States)

    Denham, D; Mandelbaum, S; Parel, J M; Holland, S; Pflugfelder, S; Parel, J M

    1989-11-01

    We have developed a technique for the accurate, quantitative, geometric evaluation of trephined and punched corneal buttons. A magnified shadow of the frontal and edge views of a corneal button mounted on the rotary stage of a modified optical comparator is projected onto the screen of the comparator and photographed. This process takes approximately three minutes. The diameters and edge profile at any meridian photographed can subsequently be analyzed from the film. The precision in measuring the diameters of well-cut corneal buttons is +/- 23 microns, and in measuring the angle of the edge profile it is +/- 1 degree. Statistical analysis of interobserver variability indicated excellent reproducibility of measurements. Shadow photogrammetry offers a standardized, accurate, and reproducible method for the analysis of corneal trephination.

  3. Digitalized accurate modeling of SPCB with multi-spiral surface based on CPC algorithm

    Science.gov (United States)

    Huang, Yanhua; Gu, Lizhi

    2015-09-01

    The main methods of existing multi-spiral surface geometry modeling include spatial analytic geometry algorithms, graphical methods, and interpolation and approximation algorithms. However, these modeling methods have shortcomings such as a large amount of calculation, complex processes, and visible errors, which have, to some extent, considerably restricted the design and manufacture of premium, high-precision products with spiral surfaces. This paper introduces the concepts of spatially parallel coupling with a multi-spiral surface and of a spatially parallel coupling body. The typical geometric and topological features of each spiral surface forming the multi-spiral surface body are determined by using the extraction principle of the datum point cluster, the algorithm of the coupling point cluster with singular points removed, and the "spatially parallel coupling" principle based on the non-uniform B-spline for each spiral surface. The orientation and quantitative relationships of the datum point cluster and the coupling point cluster in Euclidean space are determined accurately and expressed digitally, with coupling coalescence of the surfaces with multi-coupling point clusters under the Pro/E environment. Digitally accurate modeling of a spatially parallel coupling body with a multi-spiral surface is thus realized. Smoothing and fairing are applied to the end section area of a three-blade end-milling cutter by applying the principle of spatially parallel coupling with a multi-spiral surface, and the resulting entity model is machined in a four-axis machining center. The algorithm is verified and then applied effectively to the transition area among the multi-spiral surfaces. The proposed model and algorithms may be used in the design and manufacture of multi-spiral surface body products, as well as in essentially solving the problems of considerable modeling errors in computer graphics and

  4. High-performance computing and networking as tools for accurate emission computed tomography reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Passeri, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Formiconi, A.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); De Cristofaro, M.T.E.R. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Pupi, A. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy); Meldolesi, U. [Dipartimento di Fisiopatologia Clinica - Sezione di Medicina Nucleare, Università di Firenze (Italy)

    1997-04-01

    It is well known that the quantitative potential of emission computed tomography (ECT) relies on the ability to compensate for resolution, attenuation and scatter effects. Reconstruction algorithms which are able to take these effects into account are highly demanding in terms of computing resources. The reported work aimed to investigate the use of a parallel high-performance computing platform for ECT reconstruction taking into account an accurate model of the acquisition of single-photon emission tomographic (SPET) data. An iterative algorithm with an accurate model of the variable system response was ported on the MIMD (Multiple Instruction Multiple Data) parallel architecture of a 64-node Cray T3D massively parallel computer. The system was organized to make it easily accessible even from low-cost PC-based workstations through standard TCP/IP networking. A complete brain study of 30 (64 x 64) slices could be reconstructed from a set of 90 (64 x 64) projections with ten iterations of the conjugate gradients algorithm in 9 s, corresponding to an actual speed-up factor of 135. This work demonstrated the possibility of exploiting remote high-performance computing and networking resources from hospital sites by means of low-cost workstations using standard communication protocols without particular problems for routine use. The achievable speed-up factors allow the assessment of the clinical benefit of advanced reconstruction techniques which require a heavy computational burden for the compensation effects such as variable spatial resolution, scatter and attenuation. The possibility of using the same software on the same hardware platform with data acquired in different laboratories with various kinds of SPET instrumentation is appealing for software quality control and for the evaluation of the clinical impact of the reconstruction methods. (orig.). With 4 figs., 1 tab.

  5. A machine learned classifier that uses gene expression data to accurately predict estrogen receptor status.

    Directory of Open Access Journals (Sweden)

    Meysam Bastani

    Full Text Available BACKGROUND: Selecting the appropriate treatment for breast cancer requires accurately determining the estrogen receptor (ER) status of the tumor. However, the standard for determining this status, immunohistochemical analysis of formalin-fixed paraffin embedded samples, suffers from numerous technical and reproducibility issues. Assessment of ER-status based on RNA expression can provide more objective, quantitative and reproducible test results. METHODS: To learn a parsimonious RNA-based classifier of hormone receptor status, we applied a machine learning tool to a training dataset of gene expression microarray data obtained from 176 frozen breast tumors, whose ER-status was determined by applying ASCO-CAP guidelines to standardized immunohistochemical testing of formalin-fixed tumor. RESULTS: This produced a three-gene classifier that can predict the ER-status of a novel tumor, with a cross-validation accuracy of 93.17±2.44%. When applied to an independent validation set and to four other public databases, some on different platforms, this classifier obtained over 90% accuracy in each. In addition, we found that this prediction rule separated the patients' recurrence-free survival curves with a hazard ratio lower than the one based on the IHC analysis of ER-status. CONCLUSIONS: Our efficient and parsimonious classifier lends itself to high throughput, highly accurate and low-cost RNA-based assessments of ER-status, suitable for routine high-throughput clinical use. This analytic method provides a proof-of-principle that may be applicable to developing effective RNA-based tests for other biomarkers and conditions.
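    The cross-validation accuracy reported for the three-gene classifier can be illustrated with a minimal k-fold scheme: hold out each fold in turn, train on the rest, and average the held-out accuracy. A nearest-centroid rule on 1-D toy data stands in for the actual machine-learned classifier; all names and data below are hypothetical:

    ```python
    def nearest_centroid_predict(train, x):
        """train: list of (value, label) pairs; assign x the label of the
        nearer class centroid (a 1-D stand-in for the gene classifier)."""
        centroids = {}
        for lab in {label for _, label in train}:
            vals = [v for v, label in train if label == lab]
            centroids[lab] = sum(vals) / len(vals)
        return min(centroids, key=lambda lab: abs(centroids[lab] - x))

    def kfold_accuracy(data, k=3):
        """k-fold cross-validation: hold out each fold, train on the rest."""
        folds = [data[i::k] for i in range(k)]
        correct = total = 0
        for i, fold in enumerate(folds):
            train = [s for j, f in enumerate(folds) if j != i for s in f]
            for x, label in fold:
                correct += nearest_centroid_predict(train, x) == label
                total += 1
        return correct / total

    # Toy expression values for ER-negative vs. ER-positive tumors.
    data = [(0.1, "ER-"), (0.2, "ER-"), (0.3, "ER-"),
            (0.9, "ER+"), (1.0, "ER+"), (1.1, "ER+")]
    acc = kfold_accuracy(data, k=3)
    ```

    The key point is that every accuracy estimate is computed on samples the classifier never saw during training, which is what makes the 93% figure a fair estimate of generalization.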

  6. A Machine Learned Classifier That Uses Gene Expression Data to Accurately Predict Estrogen Receptor Status

    Science.gov (United States)

    Bastani, Meysam; Vos, Larissa; Asgarian, Nasimeh; Deschenes, Jean; Graham, Kathryn; Mackey, John; Greiner, Russell

    2013-01-01

    Background Selecting the appropriate treatment for breast cancer requires accurately determining the estrogen receptor (ER) status of the tumor. However, the standard for determining this status, immunohistochemical analysis of formalin-fixed paraffin embedded samples, suffers from numerous technical and reproducibility issues. Assessment of ER-status based on RNA expression can provide more objective, quantitative and reproducible test results. Methods To learn a parsimonious RNA-based classifier of hormone receptor status, we applied a machine learning tool to a training dataset of gene expression microarray data obtained from 176 frozen breast tumors, whose ER-status was determined by applying ASCO-CAP guidelines to standardized immunohistochemical testing of formalin fixed tumor. Results This produced a three-gene classifier that can predict the ER-status of a novel tumor, with a cross-validation accuracy of 93.17±2.44%. When applied to an independent validation set and to four other public databases, some on different platforms, this classifier obtained over 90% accuracy in each. In addition, we found that this prediction rule separated the patients' recurrence-free survival curves with a hazard ratio lower than the one based on the IHC analysis of ER-status. Conclusions Our efficient and parsimonious classifier lends itself to high throughput, highly accurate and low-cost RNA-based assessments of ER-status, suitable for routine high-throughput clinical use. This analytic method provides a proof-of-principle that may be applicable to developing effective RNA-based tests for other biomarkers and conditions. PMID:24312637

  7. A Qualitative and Quantitative Evaluation of 8 Clear Sky Models.

    Science.gov (United States)

    Bruneton, Eric

    2016-10-27

    We provide a qualitative and quantitative evaluation of 8 clear sky models used in Computer Graphics. We compare the models with each other as well as with measurements and with a reference model from the physics community. After a short summary of the physics of the problem, we present the measurements and the reference model, and how we "invert" it to get the model parameters. We then give an overview of each CG model, and detail its scope, its algorithmic complexity, and its results using the same parameters as in the reference model. We also compare the models with a perceptual study. Our quantitative results confirm that the fewer simplifications and approximations used to solve the physical equations, the more accurate the results. We conclude with a discussion of the advantages and drawbacks of each model, and how to further improve their accuracy.

  8. Accurate and Precise Computation Using Analog VLSI, with Applications to Computer Graphics and Neural Networks.

    Science.gov (United States)

    Kirk, David Blair

    This thesis develops an engineering practice and design methodology to enable us to use CMOS analog VLSI chips to perform more accurate and precise computation. These techniques form the basis of an approach that permits us to build computer graphics and neural network applications using analog VLSI. The nature of the design methodology focuses on defining goals for circuit behavior to be met as part of the design process. To increase the accuracy of analog computation, we develop techniques for creating compensated circuit building blocks, where compensation implies the cancellation of device variations, offsets, and nonlinearities. These compensated building blocks can be used as components in larger and more complex circuits, which can then also be compensated. To this end, we develop techniques for automatically determining appropriate parameters for circuits, using constrained optimization. We also fabricate circuits that implement multi-dimensional gradient estimation for a gradient descent optimization technique. The parameter-setting and optimization tools allow us to automatically choose values for compensating our circuit building blocks, based on our goals for the circuit performance. We can also use the techniques to optimize parameters for larger systems, applying the goal-based techniques hierarchically. We also describe a set of thought experiments involving circuit techniques for increasing the precision of analog computation. Our engineering design methodology is a step toward easier use of analog VLSI to solve problems in computer graphics and neural networks. We provide data measured from compensated multipliers built using these design techniques. To demonstrate the feasibility of using analog VLSI for more quantitative computation, we develop small applications using the goal-based design approach and compensated components. Finally, we conclude by discussing the expected significance of this work for the wider use of analog VLSI for

  9. Digitally Enhanced Thin-Layer Chromatography: An Inexpensive, New Technique for Qualitative and Quantitative Analysis

    Science.gov (United States)

    Hess, Amber Victoria Irish

    2007-01-01

    A study conducted shows that if digital photography is combined with regular thin-layer chromatography (TLC), it could perform highly improved qualitative analysis as well as make accurate quantitative analysis possible for a much lower cost than commercial equipment. The findings suggest that digitally enhanced TLC (DE-TLC) is low-cost and easy…

  10. Analytical and Biological Variables Influencing Quantitative Hepatitis C Virus (HCV) Measurement in HIV-HCV Coinfection

    OpenAIRE

    Curtis Cooper; Paul MacPherson; William Cameron

    2006-01-01

    The present review considers issues pertaining to the precision and variability of quantitative hepatitis C virus (HCV) measurement in general, outlines the characteristics of HCV RNA in HIV-HCV coinfection and evaluates those factors which may affect this measure. The clinical relevance of accurate HCV measurement in HIV-HCV coinfection is discussed.

  12. Quantitative PCR for Detection and Enumeration of Genetic Markers of Bovine Fecal Pollution

    Science.gov (United States)

    Accurate assessment of health risks associated with bovine (cattle) fecal pollution requires a reliable host-specific genetic marker and a rapid quantification method. We report the development of quantitative PCR assays for the detection of two recently described cow feces-spec...
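    Quantification in a qPCR assay like the one described rests on a standard curve relating threshold cycle (Ct) to the log of marker copy number. A minimal sketch of that conversion — the slope and intercept values are hypothetical placeholders, not the assay's actual calibration:

    ```python
    def copies_from_ct(ct, slope=-3.32, intercept=38.0):
        """Invert the standard curve Ct = slope * log10(copies) + intercept
        to estimate the marker copy number in a sample."""
        return 10 ** ((ct - intercept) / slope)

    # A perfectly efficient reaction has slope ~ -3.32 (a 10-fold dilution
    # shifts Ct by about 3.32 cycles); with this curve, Ct = 38 corresponds
    # to roughly a single copy, and each ~3.32-cycle decrease is a 10x gain.
    n = copies_from_ct(31.36)
    ```

    The slope also encodes amplification efficiency, which is why qPCR assays report their standard-curve statistics alongside copy-number estimates.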

  13. Evaluation of a quantitative phosphorus transport model for potential improvement of southern phosphorus indices

    Science.gov (United States)

    Due to a shortage of available phosphorus (P) loss data sets, simulated data from a quantitative P transport model could be used to evaluate a P-index. However, the model would need to accurately predict the P loss data sets that are available. The objective of this study was to compare predictions ...

  14. A quantitative comparison of Calvin-Benson cycle models.

    Science.gov (United States)

    Arnold, Anne; Nikoloski, Zoran

    2011-12-01

    The Calvin-Benson cycle (CBC) provides the precursors for biomass synthesis necessary for plant growth. The dynamic behavior and yield of the CBC depend on the environmental conditions and regulation of the cellular state. Accurate quantitative models hold the promise of identifying the key determinants of the tightly regulated CBC function and their effects on the responses in future climates. We provide an integrative analysis of the largest compendium of existing models for photosynthetic processes. Based on the proposed ranking, our framework facilitates the discovery of best-performing models with regard to metabolomics data and of candidates for metabolic engineering.

  15. Quantitative phylogenetic assessment of microbial communities in diverse environments

    Energy Technology Data Exchange (ETDEWEB)

    von Mering, C.; Hugenholtz, P.; Raes, J.; Tringe, S.G.; Doerks,T.; Jensen, L.J.; Ward, N.; Bork, P.

    2007-01-01

    The taxonomic composition of environmental communities is an important indicator of their ecology and function. Here, we use a set of protein-coding marker genes, extracted from large-scale environmental shotgun sequencing data, to provide a more direct, quantitative and accurate picture of community composition than traditional rRNA-based approaches using polymerase chain reaction (PCR). By mapping marker genes from four diverse environmental data sets onto a reference species phylogeny, we show that certain communities evolve faster than others, determine preferred habitats for entire microbial clades, and provide evidence that such habitat preferences are often remarkably stable over time.

  16. Integration of hydrothermal-energy economics: related quantitative studies

    Energy Technology Data Exchange (ETDEWEB)

    1982-08-01

    A comparison of ten models for computing the cost of hydrothermal energy is presented. This comparison involved a detailed examination of a number of technical and economic parameters of the various quantitative models, with the objective of identifying the parameters most important to accurate estimates of the cost of hydrothermal energy. Important features of the various models, such as focus of study, applications, market sectors covered, methodology, input data requirements, and output, are compared in the document. A detailed sensitivity analysis of all the important engineering and economic parameters is carried out to determine the effect of non-consideration of individual parameters.

  17. Accurate mass measurements on neutron-deficient krypton isotopes

    Energy Technology Data Exchange (ETDEWEB)

    Rodriguez, D. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany)]. E-mail: rodriguez@lpccaen.in2p3.fr; Audi, G. [CSNSM-IN2P3-CNRS, 91405 Orsay-Campus(France); Aystoe, J. [University of Jyvaeskylae, Department of Physics, P.O. Box 35, 40351 Jyvaeskylae (Finland); Beck, D. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany); Blaum, K. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany); Institute of Physics, University of Mainz, Staudingerweg 7, 55128 Mainz (Germany); Bollen, G. [NSCL, Michigan State University, East Lansing, MI 48824-1321 (United States); Herfurth, F. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany); Jokinen, A. [University of Jyvaeskylae, Department of Physics, P.O. Box 35, 40351 Jyvaeskylae (Finland); Kellerbauer, A. [CERN, Division EP, 1211 Geneva 23 (Switzerland); Kluge, H.-J. [GSI, Planckstrasse 1, 64291 Darmstadt (Germany); University of Heidelberg, 69120 Heidelberg (Germany); Kolhinen, V.S. [University of Jyvaeskylae, Department of Physics, P.O. Box 35, 40351 Jyvaeskylae (Finland); Oinonen, M. [Helsinki Institute of Physics, P.O. Box 64, 00014 University of Helsinki (Finland); Sauvan, E. [Institute of Physics, University of Mainz, Staudingerweg 7, 55128 Mainz (Germany); Schwarz, S. [NSCL, Michigan State University, East Lansing, MI 48824-1321 (United States)

    2006-04-17

    The masses of {sup 72-78,80,82,86}Kr were measured directly with the ISOLTRAP Penning trap mass spectrometer at ISOLDE/CERN. For all these nuclides, the measurements yielded mass uncertainties below 10 keV. The ISOLTRAP mass values for {sup 72-75}Kr outweigh previous results obtained by means of other techniques and thus completely determine the new values in the Atomic-Mass Evaluation. Besides the interest of these masses for nuclear astrophysics, nuclear structure studies, and Standard Model tests, these results constitute a valuable and accurate input for improving mass models. In this paper, we present the mass measurements and discuss the mass evaluation for these Kr isotopes.

  18. Stereotypes of age differences in personality traits: universal and accurate?

    Science.gov (United States)

    Chan, Wayne; McCrae, Robert R; De Fruyt, Filip; Jussim, Lee; Löckenhoff, Corinna E; De Bolle, Marleen; Costa, Paul T; Sutin, Angelina R; Realo, Anu; Allik, Jüri; Nakazato, Katsuharu; Shimonaka, Yoshiko; Hřebíčková, Martina; Graf, Sylvie; Yik, Michelle; Brunner-Sciarra, Marina; de Figueora, Nora Leibovich; Schmidt, Vanina; Ahn, Chang-Kyu; Ahn, Hyun-nie; Aguilar-Vafaie, Maria E; Siuta, Jerzy; Szmigielska, Barbara; Cain, Thomas R; Crawford, Jarret T; Mastor, Khairul Anwar; Rolland, Jean-Pierre; Nansubuga, Florence; Miramontez, Daniel R; Benet-Martínez, Veronica; Rossier, Jérôme; Bratko, Denis; Marušić, Iris; Halberstadt, Jamin; Yamaguchi, Mami; Knežević, Goran; Martin, Thomas A; Gheorghiu, Mirona; Smith, Peter B; Barbaranelli, Claudio; Wang, Lei; Shakespeare-Finch, Jane; Lima, Margarida P; Klinkosz, Waldemar; Sekowski, Andrzej; Alcalay, Lidia; Simonetti, Franco; Avdeyeva, Tatyana V; Pramila, V S; Terracciano, Antonio

    2012-12-01

    Age trajectories for personality traits are known to be similar across cultures. To address whether stereotypes of age groups reflect these age-related changes in personality, we asked participants in 26 countries (N = 3,323) to rate typical adolescents, adults, and old persons in their own country. Raters across nations tended to share similar beliefs about different age groups; adolescents were seen as impulsive, rebellious, undisciplined, preferring excitement and novelty, whereas old people were consistently considered lower on impulsivity, activity, antagonism, and Openness. These consensual age group stereotypes correlated strongly with published age differences on the five major dimensions of personality and most of 30 specific traits, using as criteria of accuracy both self-reports and observer ratings, different survey methodologies, and data from up to 50 nations. However, personal stereotypes were considerably less accurate, and consensual stereotypes tended to exaggerate differences across age groups.

  19. Interactive Isogeometric Volume Visualization with Pixel-Accurate Geometry.

    Science.gov (United States)

    Fuchs, Franz G; Hjelmervik, Jon M

    2016-02-01

    A recent development, called isogeometric analysis, provides a unified approach for design, analysis and optimization of functional products in industry. Traditional volume rendering methods for inspecting the results from the numerical simulations cannot be applied directly to isogeometric models. We present a novel approach for interactive visualization of isogeometric analysis results, ensuring correct, i.e., pixel-accurate geometry of the volume including its bounding surfaces. The entire OpenGL pipeline is used in a multi-stage algorithm leveraging techniques from surface rendering, order-independent transparency, as well as theory and numerical methods for ordinary differential equations. We showcase the efficiency of our approach on different models relevant to industry, ranging from quality inspection of the parametrization of the geometry, to stress analysis in linear elasticity, to visualization of computational fluid dynamics results.

  20. Accurate Calculation of Fringe Fields in the LHC Main Dipoles

    CERN Document Server

    Kurz, S; Siegel, N

    2000-01-01

    The ROXIE program developed at CERN for the design and optimization of the superconducting LHC magnets has been recently extended in a collaboration with the University of Stuttgart, Germany, with a field computation method based on the coupling between the boundary element (BEM) and the finite element (FEM) technique. This avoids the meshing of the coils and the air regions, and avoids the artificial far field boundary conditions. The method is therefore specially suited for the accurate calculation of fields in the superconducting magnets in which the field is dominated by the coil. We will present the fringe field calculations in both 2d and 3d geometries to evaluate the effect of connections and the cryostat on the field quality and the flux density to which auxiliary bus-bars are exposed.

  1. CLOMP: Accurately Characterizing OpenMP Application Overheads

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Gyllenhaal, J; de Supinski, B

    2008-02-11

    Despite its ease of use, OpenMP has failed to gain widespread use on large scale systems, largely due to its failure to deliver sufficient performance. Our experience indicates that the cost of initiating OpenMP regions is simply too high for the desired OpenMP usage scenario of many applications. In this paper, we introduce CLOMP, a new benchmark to characterize this aspect of OpenMP implementations accurately. CLOMP complements the existing EPCC benchmark suite to provide simple, easy to understand measurements of OpenMP overheads in the context of application usage scenarios. Our results for several OpenMP implementations demonstrate that CLOMP identifies the amount of work required to compensate for the overheads observed with EPCC. Further, we show that CLOMP also captures limitations for OpenMP parallelization on NUMA systems.
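The overhead-measurement idea can be sketched in miniature (a Python analogue for illustration only; CLOMP itself benchmarks OpenMP runtimes in compiled code):

```python
import time
from concurrent.futures import ThreadPoolExecutor

def work(n=100):
    """A tiny work item, standing in for a small parallel loop body."""
    s = 0
    for i in range(n):
        s += i * i
    return s

def measure(n_items=200, workers=4):
    """Time the same tiny items serially and via pool dispatch.

    The per-item difference approximates the cost of entering a parallel
    region when the work itself is too small to amortize that cost.
    """
    t0 = time.perf_counter()
    for _ in range(n_items):
        work()
    serial = time.perf_counter() - t0

    with ThreadPoolExecutor(max_workers=workers) as pool:
        t0 = time.perf_counter()
        list(pool.map(lambda _: work(), range(n_items)))
        pooled = time.perf_counter() - t0
    return serial, pooled

serial_t, pooled_t = measure()
```

For items this small, dispatch overhead typically dominates, which is exactly the regime the benchmark is designed to expose.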

  2. Accurate Element of Compressive Bar considering the Effect of Displacement

    Directory of Open Access Journals (Sweden)

    Lifeng Tang

    2015-01-01

    By constructing the compressive bar element and developing its stiffness matrix, most issues concerning the compressive bar can be solved. In this paper, the displacement shape functions are obtained from the second derivative of the equilibrium differential governing equations. The finite element formula of the compressive bar element is then developed using the potential energy principle and the analytical shape functions. Based on the total potential energy variation principle, the static and geometric stiffness matrices are proposed, in which the large deformation of the compressive bar is considered. To verify the accuracy and validity of the analytical trial function element proposed in this paper, a number of numerical examples are presented. Comparisons show that the proposed element has high calculation efficiency and a rapid speed of convergence.
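For orientation, the classical small-deformation axial bar element that such formulations extend has the well-known stiffness matrix K = (EA/L)·[[1, -1], [-1, 1]]; a minimal sketch with illustrative values (not the paper's large-deformation element):

```python
# Classical 2-node axial bar element: K = (E*A/L) * [[1, -1], [-1, 1]].
# The values of E, A and L below are hypothetical; the paper's element adds a
# geometric stiffness contribution for large deformation, not reproduced here.
def bar_stiffness(E, A, L):
    k = E * A / L
    return [[k, -k], [-k, k]]

K = bar_stiffness(E=210e9, A=1e-4, L=2.0)  # steel bar, 1 cm^2 section, 2 m long
```

The matrix is symmetric and singular (rigid-body translation carries no strain energy), which is why boundary conditions must be applied before solving.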

  3. Accurate determination of heteroclinic orbits in chaotic dynamical systems

    Science.gov (United States)

    Li, Jizhou; Tomsovic, Steven

    2017-03-01

    Accurate calculation of heteroclinic and homoclinic orbits can be of significant importance in some classes of dynamical system problems. Yet for very strongly chaotic systems initial deviations from a true orbit will be magnified by a large exponential rate making direct computational methods fail quickly. In this paper, a method is developed that avoids direct calculation of the orbit by making use of the well-known stability property of the invariant unstable and stable manifolds. Under an area-preserving map, this property assures that any initial deviation from the stable (unstable) manifold collapses onto them under inverse (forward) iterations of the map. Using a set of judiciously chosen auxiliary points on the manifolds, long orbit segments can be calculated using the stable and unstable manifold intersections of the heteroclinic (homoclinic) tangle. Detailed calculations using the example of the kicked rotor are provided along with verification of the relation between action differences and certain areas bounded by the manifolds.

  4. Accurate finite difference methods for time-harmonic wave propagation

    Science.gov (United States)

    Harari, Isaac; Turkel, Eli

    1994-01-01

    Finite difference methods for solving problems of time-harmonic acoustics are developed and analyzed. Multidimensional inhomogeneous problems with variable, possibly discontinuous, coefficients are considered, accounting for the effects of employing nonuniform grids. A weighted-average representation is less sensitive to transition in wave resolution (due to variable wave numbers or nonuniform grids) than the standard pointwise representation. Further enhancement in method performance is obtained by basing the stencils on generalizations of Pade approximation, or generalized definitions of the derivative, reducing spurious dispersion, anisotropy and reflection, and by improving the representation of source terms. The resulting schemes have fourth-order accurate local truncation error on uniform grids and third order in the nonuniform case. Guidelines for discretization pertaining to grid orientation and resolution are presented.
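One concrete instance of the weighted-average idea is the standard fourth-order compact (Padé-type) stencil for the 1D Helmholtz operator u'' + k²u: replacing the pointwise term k²·u_j by the average k²·(u_{j-1} + 10u_j + u_{j+1})/12 raises the residual on an exact wave from O(h²) to O(h⁴). This is a generic sketch, not the paper's schemes, which also treat variable coefficients and nonuniform grids:

```python
import math

# Apply the standard and the fourth-order compact stencil to the exact
# solution u = cos(k*x) of u'' + k^2 u = 0 and compare residuals.
def residuals(k, h, x=0.1):
    u = lambda s: math.cos(k * s)
    um, u0, up = u(x - h), u(x), u(x + h)
    lap = (um - 2.0 * u0 + up) / h**2
    standard = lap + k**2 * u0                           # O(h^2) residual
    compact = lap + k**2 * (um + 10.0 * u0 + up) / 12.0  # O(h^4) residual
    return abs(standard), abs(compact)

s1, c1 = residuals(k=5.0, h=0.02)
s2, c2 = residuals(k=5.0, h=0.01)
# halving h cuts the standard residual ~4x and the compact one ~16x
```

The convergence ratios (about 4 versus 16 when h is halved) confirm the second- versus fourth-order behavior.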

  5. Accurate bond dissociation energies (D0) for FHF- isotopologues

    Science.gov (United States)

    Stein, Christopher; Oswald, Rainer; Sebald, Peter; Botschwina, Peter; Stoll, Hermann; Peterson, Kirk A.

    2013-09-01

    Accurate bond dissociation energies (D0) are determined for three isotopologues of the bifluoride ion (FHF-). While the zero-point vibrational contributions are taken from our previous work (P. Sebald, A. Bargholz, R. Oswald, C. Stein, P. Botschwina, J. Phys. Chem. A, DOI: 10.1021/jp3123677), the equilibrium dissociation energy (De) of the reaction FHF- → HF + F- was obtained by a composite method including frozen-core (fc) CCSD(T) calculations with basis sets up to cardinal number n = 7, followed by extrapolation to the complete basis set limit. Smaller terms beyond fc-CCSD(T) cancel each other almost completely. The D0 values of FHF-, FDF-, and FTF- are predicted to be 15,176, 15,191, and 15,198 cm-1, respectively, with an uncertainty of ca. 15 cm-1.
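The basis-set extrapolation step can be illustrated with the common two-point formula, assuming the standard E(n) = E_CBS + A·n⁻³ form for correlation energies (the energies below are made-up illustration values, not numbers from the paper):

```python
# Two-point complete-basis-set (CBS) extrapolation assuming
# E(n) = E_CBS + A / n**3 for the correlation energy with cardinal number n.
def cbs_extrapolate(e1, n1, e2, n2):
    return (n2**3 * e2 - n1**3 * e1) / (n2**3 - n1**3)

# Hypothetical n = 6 and n = 7 correlation energies (hartree):
e_cbs = cbs_extrapolate(-0.61230, 6, -0.61305, 7)  # slightly below the n = 7 value
```

Because the n⁻³ tail is strictly positive, the extrapolated limit always lies below the larger-basis energy, as the assertion of e_cbs against the n = 7 input reflects.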

  6. Redundancy-Free, Accurate Analytical Center Machine for Classification

    Institute of Scientific and Technical Information of China (English)

    ZHENG Fanzi; QIU Zhengding; LENG Yonggang; YUE Jianhai

    2005-01-01

    Analytical center machine (ACM) has remarkable generalization performance based on the analytical center of version space, and it outperforms SVM. From an analysis of the geometry of machine learning and the principle of ACM, it is shown that some training patterns are redundant to the definition of version space. Redundant patterns push the ACM classifier away from the analytical center of the prime version space, so the generalization performance degrades; at the same time, redundant patterns slow down the classifier and reduce storage efficiency. Thus, an incremental algorithm is proposed to remove redundant patterns and is embedded into the frame of ACM, yielding a redundancy-free accurate analytical center machine (RFA-ACM) for classification. Experiments with the Heart, Thyroid, and Banana datasets demonstrate the validity of RFA-ACM.

  7. Phase rainbow refractometry for accurate droplet variation characterization.

    Science.gov (United States)

    Wu, Yingchun; Promvongsa, Jantarat; Saengkaew, Sawitree; Wu, Xuecheng; Chen, Jia; Gréhan, Gérard

    2016-10-15

    We developed a one-dimensional phase rainbow refractometer for the accurate trans-dimensional measurements of droplet size on the micrometer scale as well as the tiny droplet diameter variations at the nanoscale. The dependence of the phase shift of the rainbow ripple structures on the droplet variations is revealed. The phase-shifting rainbow image is recorded by a telecentric one-dimensional rainbow imaging system. Experiments on the evaporating monodispersed droplet stream show that the phase rainbow refractometer can measure the tiny droplet diameter changes down to tens of nanometers. This one-dimensional phase rainbow refractometer is capable of measuring the droplet refractive index and diameter, as well as variations.

  8. Accurate derivative evaluation for any Grad–Shafranov solver

    Energy Technology Data Exchange (ETDEWEB)

    Ricketson, L.F. [Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 (United States); Cerfon, A.J., E-mail: cerfon@cims.nyu.edu [Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 (United States); Rachh, M. [Courant Institute of Mathematical Sciences, New York University, New York, NY 10012 (United States); Freidberg, J.P. [Plasma Science and Fusion Center, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)

    2016-01-15

    We present a numerical scheme that can be combined with any fixed boundary finite element based Poisson or Grad–Shafranov solver to compute the first and second partial derivatives of the solution to these equations with the same order of convergence as the solution itself. At the heart of our scheme is an efficient and accurate computation of the Dirichlet to Neumann map through the evaluation of a singular volume integral and the solution to a Fredholm integral equation of the second kind. Our numerical method is particularly useful for magnetic confinement fusion simulations, since it allows the evaluation of quantities such as the magnetic field, the parallel current density and the magnetic curvature with much higher accuracy than has been previously feasible on the affordable coarse grids that are usually implemented.

  9. A fast and accurate FPGA based QRS detection system.

    Science.gov (United States)

    Shukla, Ashish; Macchiarulo, Luca

    2008-01-01

    An accurate Field Programmable Gate Array (FPGA) based ECG analysis system is described in this paper. The design, based on a popular software-based QRS detection algorithm, calculates the threshold value for the next peak-detection cycle from the median of eight previously detected peaks. The hardware design has accuracy in excess of 96% in detecting beats correctly when tested with a subset of five 30-minute data records obtained from the MIT-BIH Arrhythmia database. The design, implemented using a proprietary design tool (System Generator), is an extension of our previous work; it uses 76% of the resources available in a small-sized FPGA device (Xilinx Spartan xc3s500), has higher detection accuracy than our previous design, and takes almost half the analysis time of a software-based approach.
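The thresholding rule can be sketched in software as follows (a simplified analogue; the 0.6 scaling factor and the toy trace are illustrative assumptions, not parameters of the FPGA design):

```python
from collections import deque
from statistics import median

# The detection threshold for the next cycle is a fraction of the median of
# the last eight detected peak amplitudes, so the detector adapts to slow
# changes in QRS amplitude while rejecting small artifacts.
def detect_peaks(signal, init_threshold=0.5, factor=0.6, history=8):
    peaks, recent = [], deque(maxlen=history)
    threshold = init_threshold
    for i in range(1, len(signal) - 1):
        s = signal[i]
        if s > threshold and s >= signal[i - 1] and s > signal[i + 1]:
            peaks.append(i)
            recent.append(s)
            threshold = factor * median(recent)  # adapt for the next cycle
    return peaks

ecg = [0, 1.0, 0, 0.1, 0, 0.9, 0, 0.2, 0, 1.1, 0]
peaks = detect_peaks(ecg)  # -> [1, 5, 9]; the 0.1 and 0.2 bumps are rejected
```

The median over a fixed-length history is attractive in hardware because it is robust to a single outlier beat and needs only a small sorting network over eight registers.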

  10. Accurate measure by weight of liquids in industry

    Energy Technology Data Exchange (ETDEWEB)

    Muller, M.R.

    1992-12-12

    This research's focus was to build a prototype of a computerized liquid dispensing system. This liquid metering system is based on the concept of altering the representative volume to account for temperature changes in the liquid to be dispensed. This is actualized by using a measuring tank and a temperature compensating displacement plunger. By constantly monitoring the temperature of the liquid, the plunger can be used to increase or decrease the specified volume to more accurately dispense liquid with a specified mass. In order to put the device being developed into proper engineering perspective, an extensive literature review was undertaken on all areas of industrial metering of liquids with an emphasis on gravimetric methods.

  11. Accurate measure by weight of liquids in industry. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Muller, M.R.

    1992-12-12

    This research's focus was to build a prototype of a computerized liquid dispensing system. This liquid metering system is based on the concept of altering the representative volume to account for temperature changes in the liquid to be dispensed. This is actualized by using a measuring tank and a temperature compensating displacement plunger. By constantly monitoring the temperature of the liquid, the plunger can be used to increase or decrease the specified volume to more accurately dispense liquid with a specified mass. In order to put the device being developed into proper engineering perspective, an extensive literature review was undertaken on all areas of industrial metering of liquids with an emphasis on gravimetric methods.
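The compensation principle reduces to dispensing the volume V = m / ρ(T) for a target mass m at the measured temperature T. A sketch with an assumed linear density model (the coefficients below roughly describe water near 20 °C and are illustrative, not values from the report):

```python
# To dispense a target mass, compute the required volume from the liquid
# density at the measured temperature. rho(T) = rho_ref * (1 - beta*(T - t_ref))
# is a linear approximation valid over a narrow temperature range.
def dispense_volume(target_mass_kg, temp_c, rho_ref=998.2, t_ref=20.0, beta=2.1e-4):
    """Volume in m^3 so that the dispensed mass equals target_mass_kg."""
    rho = rho_ref * (1.0 - beta * (temp_c - t_ref))
    return target_mass_kg / rho

v20 = dispense_volume(1.0, 20.0)  # ~1.0018e-3 m^3 of water for 1 kg at 20 C
v35 = dispense_volume(1.0, 35.0)  # warmer liquid is less dense: more volume
```

In the prototype, the displacement plunger plays the role of the volume correction computed here.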

  12. GLIMPSE: Accurate 3D weak lensing reconstructions using sparsity

    CERN Document Server

    Leonard, Adrienne; Starck, Jean-Luc

    2013-01-01

    We present GLIMPSE - Gravitational Lensing Inversion and MaPping with Sparse Estimators - a new algorithm to generate density reconstructions in three dimensions from photometric weak lensing measurements. This is an extension of earlier work in one dimension aimed at applying compressive sensing theory to the inversion of gravitational lensing measurements to recover 3D density maps. Using the assumption that the density can be represented sparsely in our chosen basis - 2D transverse wavelets and 1D line-of-sight Dirac functions - we show that clusters of galaxies can be identified and accurately localised and characterised using this method. Throughout, we use simulated data consistent with the quality currently attainable in large surveys. We present a thorough statistical analysis of the errors and biases in both the redshifts of detected structures and their amplitudes. The GLIMPSE method is able to produce reconstructions at significantly higher resolution than the input data; in this paper we show reco...

  13. Accurate Parallel Algorithm for Adini Nonconforming Finite Element

    Institute of Scientific and Technical Information of China (English)

    罗平; 周爱辉

    2003-01-01

    Multi-parameter asymptotic expansions are interesting since they justify the use of multi-parameter extrapolation, which can be implemented in parallel and is well studied in many papers for conforming finite element methods. For nonconforming finite element methods, however, multi-parameter asymptotic expansions and extrapolation have seldom been treated in the literature. This paper considers the solution of the biharmonic equation using Adini nonconforming finite elements and reports new results for multi-parameter asymptotic expansions and extrapolation. The Adini nonconforming finite element solution of the biharmonic equation is shown to have a multi-parameter asymptotic error expansion and extrapolation. This expansion and a multi-parameter extrapolation technique were used to develop an accurate parallel approximation algorithm for the biharmonic equation. Finally, numerical results have verified the extrapolation theory.
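The single-parameter case underlying such multi-parameter schemes is classical Richardson extrapolation: combine two approximations with steps h and h/2 to cancel the leading error term. A minimal generic sketch (standard numerical analysis, not the Adini-element expansion itself):

```python
import math

# Central differences of f at steps h and h/2 combine to cancel the
# leading O(h^2) error term, leaving an O(h^4) accurate estimate.
def central(f, x, h):
    return (f(x + h) - f(x - h)) / (2.0 * h)      # O(h^2) derivative estimate

def richardson(f, x, h):
    d_h, d_h2 = central(f, x, h), central(f, x, h / 2.0)
    return (4.0 * d_h2 - d_h) / 3.0               # leading h^2 error cancelled

x, h = 0.7, 0.1
err_plain = abs(central(math.sin, x, h) - math.cos(x))
err_extrap = abs(richardson(math.sin, x, h) - math.cos(x))
# err_extrap is orders of magnitude smaller than err_plain
```

Multi-parameter extrapolation applies the same cancellation independently in several mesh parameters, which is what makes it naturally parallel.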

  14. Accurate Modeling of Buck Converters with Magnetic-Core Inductors

    DEFF Research Database (Denmark)

    Astorino, Antonio; Antonini, Giulio; Swaminathan, Madhavan

    2015-01-01

    In this paper, a modeling approach for buck converters with magnetic-core inductors is presented. Due to the high nonlinearity of magnetic materials, frequency-domain analysis of such circuits is not suitable for an accurate description of their behaviour. Hence, in this work, a time-domain model of buck converters with magnetic-core inductors in a Simulink environment is proposed. As an example, the presented approach is used to simulate an eight-phase buck converter. The simulation results show that an unexpected system behaviour in terms of current ripple amplitude needs the inductor core...

  15. Accurate Performance Analysis of Opportunistic Decode-and-Forward Relaying

    CERN Document Server

    Tourki, Kamel; Alouini, Mohamed-Slim

    2011-01-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path can be considered unusable, and takes into account the effect of the possible erroneously detected and transmitted data at the best relay. We first derive statistics based on exact probability density function (PDF) of each hop. Then, the PDFs are used to determine accurate closed form expressions for end-to-end bit-error rate (BER) of binary phase-shift keying (BPSK) modulation. Furthermore, we evaluate the asymptotical performance analysis and the diversity order is deduced. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures.
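For context, the per-hop building block of such BER analyses is the textbook BPSK error rate over AWGN, BER = Q(√(2·SNR)) = ½·erfc(√SNR). A Monte Carlo check of this generic ingredient (not the paper's end-to-end closed forms):

```python
import math
import random

# Simulate BPSK over AWGN and compare the empirical bit-error rate against
# the closed form 0.5*erfc(sqrt(SNR)). The 6 dB operating point is an
# arbitrary illustration value.
def bpsk_ber_mc(snr_linear, n_bits=200_000, seed=1):
    rng = random.Random(seed)
    sigma = math.sqrt(1.0 / (2.0 * snr_linear))  # noise std for unit-energy symbols
    errors = 0
    for _ in range(n_bits):
        bit = rng.choice((-1.0, 1.0))
        rx = bit + rng.gauss(0.0, sigma)
        errors += (rx > 0) != (bit > 0)
    return errors / n_bits

snr = 10.0 ** (6.0 / 10.0)                       # 6 dB average SNR
theory = 0.5 * math.erfc(math.sqrt(snr))
simulated = bpsk_ber_mc(snr)
```

Relaying analyses like the one above build end-to-end expressions by combining per-hop error probabilities of exactly this form with the fading statistics of each link.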

  16. An Integrative Approach to Accurate Vehicle Logo Detection

    Directory of Open Access Journals (Sweden)

    Hao Pan

    2013-01-01

    Vehicle logo detection is required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small size of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed, following the visual attention mechanism of human vision. Two pre-logo detection steps, vehicle region detection and small RoI segmentation, rapidly localize a small logo target. An enhanced Adaboost algorithm, together with two types of features (Haar and HOG), is proposed to detect vehicles. An RoI that covers logos is segmented based on prior knowledge about the logos' position relative to license plates, which can be accurately localized from frontal vehicle images. A two-stage cascade classifier proceeds with the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.

  17. Stereotypes of Age Differences in Personality Traits: Universal and Accurate?

    Science.gov (United States)

    Chan, Wayne; McCrae, Robert R.; De Fruyt, Filip; Jussim, Lee; Löckenhoff, Corinna E.; De Bolle, Marleen; Costa, Paul T.; Sutin, Angelina R.; Realo, Anu; Allik, Jüri; Nakazato, Katsuharu; Shimonaka, Yoshiko; Hřebíčková, Martina; Kourilova, Sylvie; Yik, Michelle; Ficková, Emília; Brunner-Sciarra, Marina; de Figueora, Nora Leibovich; Schmidt, Vanina; Ahn, Chang-kyu; Ahn, Hyun-nie; Aguilar-Vafaie, Maria E.; Siuta, Jerzy; Szmigielska, Barbara; Cain, Thomas R.; Crawford, Jarret T.; Mastor, Khairul Anwar; Rolland, Jean-Pierre; Nansubuga, Florence; Miramontez, Daniel R.; Benet-Martínez, Veronica; Rossier, Jérôme; Bratko, Denis; Halberstadt, Jamin; Yamaguchi, Mami; Knežević, Goran; Martin, Thomas A.; Gheorghiu, Mirona; Smith, Peter B.; Barbaranelli, Claudio; Wang, Lei; Shakespeare-Finch, Jane; Lima, Margarida P.; Klinkosz, Waldemar; Sekowski, Andrzej; Alcalay, Lidia; Simonetti, Franco; Avdeyeva, Tatyana V.; Pramila, V. S.; Terracciano, Antonio

    2012-01-01

    Age trajectories for personality traits are known to be similar across cultures. To address whether stereotypes of age groups reflect these age-related changes in personality, we asked participants in 26 countries (N = 3,323) to rate typical adolescents, adults, and old persons in their own country. Raters across nations tended to share similar beliefs about different age groups; adolescents were seen as impulsive, rebellious, undisciplined, preferring excitement and novelty, whereas old people were consistently considered lower on impulsivity, activity, antagonism, and Openness. These consensual age group stereotypes correlated strongly with published age differences on the five major dimensions of personality and most of 30 specific traits, using as criteria of accuracy both self-reports and observer ratings, different survey methodologies, and data from up to 50 nations. However, personal stereotypes were considerably less accurate, and consensual stereotypes tended to exaggerate differences across age groups. PMID:23088227

  18. Accurate mass measurements on neutron-deficient krypton isotopes

    CERN Document Server

    Rodríguez, D; Äystö, J; Beck, D

    2006-01-01

    The masses of $^{72–78,80,82,86}$Kr were measured directly with the ISOLTRAP Penning trap mass spectrometer at ISOLDE/CERN. For all these nuclides, the measurements yielded mass uncertainties below 10 keV. The ISOLTRAP mass values for $^{72–75}$Kr are more precise than previous results obtained by means of other techniques, and thus completely determine the new values in the Atomic-Mass Evaluation. Besides the interest of these masses for nuclear astrophysics, nuclear structure studies, and Standard Model tests, these results constitute a valuable and accurate input to improve mass models. In this paper, we present the mass measurements and discuss the mass evaluation for these Kr isotopes.

  19. Fast and spectrally accurate summation of 2-periodic Stokes potentials

    CERN Document Server

    Lindbo, Dag

    2011-01-01

    We derive an Ewald decomposition for the Stokeslet in planar periodicity and a novel PME-type O(N log N) method for the fast evaluation of the resulting sums. The decomposition is the natural 2P counterpart to the classical 3P decomposition by Hasimoto, and is given in an explicit form not found in the literature. Truncation error estimates are provided to aid in selecting parameters. The fast, PME-type method appears to be the first fast method for computing Stokeslet Ewald sums in planar periodicity, and has three attractive properties: it is spectrally accurate; it uses the minimal amount of memory that a gridded Ewald method can use; and it provides clarity regarding numerical errors and how to choose parameters. Analytical and numerical results are given to support this. We explore the practicalities of the proposed method, and survey the computational issues involved in applying it to 2-periodic boundary integral Stokes problems.

  20. Second-Order Accurate Projective Integrators for Multiscale Problems

    Energy Technology Data Exchange (ETDEWEB)

    Lee, S L; Gear, C W

    2005-05-27

    We introduce new projective versions of second-order accurate Runge-Kutta and Adams-Bashforth methods, and demonstrate their use as outer integrators in solving stiff differential systems. An important outcome is that the new outer integrators, when combined with an inner telescopic projective integrator, can result in fully explicit methods with adaptive outer step size selection and solution accuracy comparable to those obtained by implicit integrators. If the stiff differential equations are not directly available, our formulations and stability analysis are general enough to allow the combined outer-inner projective integrators to be applied to black-box legacy codes or perform a coarse-grained time integration of microscopic systems to evolve macroscopic behavior, for example.
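The inner/outer structure of projective integration can be sketched with a first-order variant (illustrative only; the paper's contribution is the second-order Runge-Kutta and Adams-Bashforth outer integrators, which are not reproduced here):

```python
import math

# Projective forward Euler: take k small damped inner steps, then use the
# last inner increment to take one extrapolated "projective" jump of M
# inner-step lengths. The inner steps kill the fast transient; the jump
# advances the slow dynamics cheaply.
def projective_euler(f, t, y, dt, k, M, n_outer):
    for _ in range(n_outer):
        for _ in range(k):                 # inner damping steps
            y_prev, y = y, y + dt * f(t, y)
            t += dt
        y = y + M * (y - y_prev)           # projective jump over M * dt
        t += M * dt
    return t, y

# Stiff test problem: fast relaxation onto the slow manifold y ~ cos(t).
f = lambda t, y: -100.0 * (y - math.cos(t))
t_end, y_end = projective_euler(f, t=0.0, y=2.0, dt=0.002, k=5, M=10, n_outer=100)
```

Despite outer steps fifteen times the inner step size, the method remains stable and tracks the slow solution, which is the behavior the projective framework is designed to deliver fully explicitly.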

  1. Spectropolarimetrically accurate magnetohydrostatic sunspot model for forward modelling in helioseismology

    CERN Document Server

    Przybylski, D; Cally, P S

    2015-01-01

    We present a technique to construct a spectropolarimetrically accurate magneto-hydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion and absorption in the solar interior and photosphere with the sunspot embedded into it. With the $6173\\mathrm{\\AA}$ magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as full Stokes vector for the simulation at various positions at the solar disk, and analyse the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterised. An increase in acoustic power in the simulated observ...

  2. Accurate complex scaling of three dimensional numerical potentials.

    Science.gov (United States)

    Cerioni, Alessandro; Genovese, Luigi; Duchemin, Ivan; Deutsch, Thierry

    2013-05-28

    The complex scaling method, which consists in continuing spatial coordinates into the complex plane, is a well-established method that allows one to compute resonant eigenfunctions of the time-independent Schrödinger operator. Whenever it is desirable to apply complex scaling to investigate resonances in physical systems defined on numerical discrete grids, the most direct approach relies on the application of a similarity transformation to the original, unscaled Hamiltonian. We show that such an approach can be conveniently implemented in the Daubechies wavelet basis set, featuring a very promising level of generality, high accuracy, and no need for artificial convergence parameters. Complex scaling of three-dimensional numerical potentials can be efficiently and accurately performed. By carrying out an illustrative resonant-state computation in the case of a one-dimensional model potential, we then show that our wavelet-based approach may disclose new exciting opportunities in the field of computational non-Hermitian quantum mechanics.

  3. Accurate performance analysis of opportunistic decode-and-forward relaying

    KAUST Repository

    Tourki, Kamel

    2011-07-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be considered unusable, and the destination may use a selection combining technique. We first derive the exact statistics of each hop, in terms of probability density function (PDF). Then, the PDFs are used to determine accurate closed form expressions for end-to-end outage probability for a transmission rate R. Furthermore, we evaluate the asymptotical performance analysis and the diversity order is deduced. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.
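The per-hop ingredient of such outage analyses is the single-link Rayleigh result: with exponentially distributed instantaneous SNR of mean ḡ, P_out = P[log2(1 + γ) < R] = 1 - exp(-(2^R - 1)/ḡ). A Monte Carlo check of this generic building block (not the paper's end-to-end expression):

```python
import math
import random

# Estimate P[log2(1 + gamma) < rate] for a Rayleigh-fading link, where the
# instantaneous SNR gamma is exponentially distributed with mean g_bar.
# The operating point (mean SNR 10, R = 1 bit/s/Hz) is an illustration value.
def outage_mc(g_bar, rate, n=200_000, seed=7):
    rng = random.Random(seed)
    thr = 2.0 ** rate - 1.0
    below = sum(rng.expovariate(1.0 / g_bar) < thr for _ in range(n))
    return below / n

g_bar, rate = 10.0, 1.0
closed_form = 1.0 - math.exp(-(2.0 ** rate - 1.0) / g_bar)
estimate = outage_mc(g_bar, rate)
```

End-to-end outage expressions for relay selection are assembled from per-link terms of this form, one per hop and per candidate relay.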

  4. Inverter Modeling For Accurate Energy Predictions Of Tracking HCPV Installations

    Science.gov (United States)

    Bowman, J.; Jensen, S.; McDonald, Mark

    2010-10-01

    High efficiency high concentration photovoltaic (HCPV) solar plants of megawatt scale are now operational, and opportunities for expanded adoption are plentiful. However, effective bidding for sites requires reliable prediction of energy production. HCPV module nameplate power is rated for specific test conditions; however, instantaneous HCPV power varies due to site specific irradiance and operating temperature, and is degraded by soiling, protective stowing, shading, and electrical connectivity. These factors interact with the selection of equipment typically supplied by third parties, e.g., wire gauge and inverters. We describe a time sequence model accurately accounting for these effects that predicts annual energy production, with specific reference to the impact of the inverter on energy output and interactions between system-level design decisions and the inverter. We will also show two examples, based on an actual field design, of inverter efficiency calculations and the interaction between string arrangements and inverter selection.

  5. Accurate Enthalpies of Formation of Astromolecules: Energy, Stability and Abundance

    CERN Document Server

    Etim, Emmanuel E

    2016-01-01

    Accurate enthalpies of formation are reported for known and potential astromolecules using high level ab initio quantum chemical calculations. A total of 130 molecules comprising of 31 isomeric groups and 24 cyanide/isocyanide pairs with atoms ranging from 3 to 12 have been considered. The results show an interesting, surprisingly not well explored, relationship between energy, stability and abundance (ESA) existing among these molecules. Among the isomeric species, isomers with lower enthalpies of formation are more easily observed in the interstellar medium compared to their counterparts with higher enthalpies of formation. Available data in literature confirm the high abundance of the most stable isomer over other isomers in the different groups considered. Potential for interstellar hydrogen bonding accounts for the few exceptions observed. Thus, in general, it suffices to say that the interstellar abundances of related species are directly proportional to their stabilities. The immediate consequences of ...

  6. Fast and accurate determination of modularity and its effect size

    CERN Document Server

    Treviño, Santiago; Del Genio, Charo I; Bassler, Kevin E

    2014-01-01

    We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erd\\H{o}s-R\\'enyi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a $z$-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links.
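    The spectral algorithm itself is beyond the scope of a sketch, but the modularity objective it maximizes is simple to state: Q = Σ_c [e_c/m − (d_c/2m)²], summing over communities c, where m is the edge count, e_c the intra-community edges, and d_c the community degree sum. A minimal pure-Python illustration on a hypothetical toy graph:

```python
def modularity(edges, communities):
    """Newman modularity Q = sum_c [ e_c/m - (d_c/(2m))^2 ] for an
    undirected graph given as an edge list and a partition into
    communities (iterable of node sets)."""
    m = len(edges)
    label = {n: i for i, comm in enumerate(communities) for n in comm}
    intra = [0] * len(communities)       # edges fully inside each community
    degree_sum = [0] * len(communities)  # sum of node degrees per community
    for u, v in edges:
        degree_sum[label[u]] += 1
        degree_sum[label[v]] += 1
        if label[u] == label[v]:
            intra[label[u]] += 1
    return sum(e / m - (d / (2 * m)) ** 2
               for e, d in zip(intra, degree_sum))

# two triangles joined by a single bridge edge
edges = [(0, 1), (1, 2), (0, 2), (3, 4), (4, 5), (3, 5), (2, 3)]
q = modularity(edges, [{0, 1, 2}, {3, 4, 5}])  # natural two-community split
```

Splitting this graph into its two triangles yields Q = 5/14, whereas lumping all nodes into one community gives Q = 0, which is the kind of contrast a z-score of modularity quantifies against random-graph ensembles.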

  7. Methods for Accurate Free Flight Measurement of Drag Coefficients

    CERN Document Server

    Courtney, Elya; Courtney, Michael

    2015-01-01

    This paper describes experimental methods for free flight measurement of drag coefficients to an accuracy of approximately 1%. There are two main methods of determining free flight drag coefficients, or equivalent ballistic coefficients: 1) measuring near and far velocities over a known distance and 2) measuring a near velocity and time of flight over a known distance. Atmospheric conditions must also be known and nearly constant over the flight path. A number of tradeoffs are important when designing experiments to accurately determine drag coefficients. The flight distance must be large enough so that the projectile's loss of velocity is significant compared with its initial velocity and much larger than the uncertainty in the near and/or far velocity measurements. On the other hand, since drag coefficients and ballistic coefficients both depend on velocity, the change in velocity over the flight path should be small enough that the average drag coefficient over the path (which is what is really determined)...
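    Method 1 above has a compact closed form if one assumes quadratic drag with a constant Cd over the path: integrating m·v·dv/dx = −½ρ·Cd·A·v² gives Cd = 2m·ln(v_near/v_far)/(ρ·A·d). A sketch under those assumptions (the projectile parameters are hypothetical, not from the paper):

```python
import math

def avg_drag_coefficient(v_near, v_far, distance, mass, area, rho=1.225):
    """Average drag coefficient over the flight path, assuming drag
    F = 0.5*rho*Cd*A*v^2 with Cd constant. Integrating
    m*v*dv/dx = -0.5*rho*Cd*A*v^2 gives
    Cd = 2*m*ln(v_near/v_far) / (rho*A*distance)."""
    return 2.0 * mass * math.log(v_near / v_far) / (rho * area * distance)

# hypothetical 10.9 g, 7.82 mm diameter projectile over a 200 m path
area = math.pi * (0.00782 / 2) ** 2
cd = avg_drag_coefficient(v_near=850.0, v_far=800.0, distance=200.0,
                          mass=0.0109, area=area)
```

This also makes the paper's tradeoff explicit: the uncertainty in Cd scales with the velocity-measurement uncertainty divided by ln(v_near/v_far), so a longer flight (larger velocity loss) improves accuracy until the constant-Cd assumption starts to break down.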

  8. Calculation of Accurate Hexagonal Discontinuity Factors for PARCS

    Energy Technology Data Exchange (ETDEWEB)

    Pounders, J.; Bandini, B. R.; Xu, Y.; Downar, T. J.

    2007-11-01

    In this study we derive a methodology for calculating discontinuity factors consistent with the Triangle-based Polynomial Expansion Nodal (TPEN) method implemented in PARCS for hexagonal reactor geometries. The accuracy of coarse-mesh nodal methods is greatly enhanced by permitting flux discontinuities at node boundaries, but the practice of calculating discontinuity factors from infinite-medium (zero-current) single bundle calculations may not be sufficiently accurate for more challenging problems in which there is a large amount of internodal neutron streaming. The authors therefore derive a TPEN-based method for calculating discontinuity factors that are exact with respect to generalized equivalence theory. The method is validated by reproducing the reference solution for a small hexagonal core.

  9. Accurate platelet counting in an insidious case of pseudothrombocytopenia.

    Science.gov (United States)

    Lombarts, A J; Zijlstra, J J; Peters, R H; Thomasson, C G; Franck, P F

    1999-01-01

    Anticoagulant-induced aggregation of platelets leads to pseudothrombocytopenia. Blood cell counters generally trigger alarms to alert the user. We describe an insidious case of pseudothrombocytopenia, where the complete absence of Coulter counter alarms both in ethylenediaminetetraacetic acid blood and in citrate or acid citrate dextrose blood samples was compounded by the fact that the massive aggregates were found exclusively at the edges of the blood smear. Non-recognition of pseudothrombocytopenia can have serious diagnostic and therapeutic consequences. While the anti-aggregant mixture citrate-theophylline-adenosine-dipyridamole completely failed to prevent pseudothrombocytopenia, addition of iloprost to anticoagulants only partially prevented the aggregation. Only the prior addition of gentamicin to any anticoagulant used resulted in complete prevention of pseudothrombocytopenia and enabled accurate platelet counting.

  10. A new accurate pill recognition system using imprint information

    Science.gov (United States)

    Chen, Zhiyuan; Kamata, Sei-ichiro

    2013-12-01

    Great achievements in modern medicine benefit human beings. They have also brought about explosive growth in the number of pharmaceuticals currently on the market. In daily life, pharmaceuticals sometimes confuse people when they are found unlabeled. In this paper, we propose an automatic pill recognition technique to solve this problem. It functions mainly on the basis of the imprint feature of the pills, which is extracted by the proposed MSWT (modified stroke width transform) and described by WSC (weighted shape context). Experiments show that our proposed pill recognition method achieves an accuracy of up to 92.03% within the top 5 ranks when classifying more than 10 thousand query pill images into around 2000 categories.

  11. IVA: accurate de novo assembly of RNA virus genomes.

    Science.gov (United States)

    Hunt, Martin; Gall, Astrid; Ong, Swee Hoe; Brener, Jacqui; Ferns, Bridget; Goulder, Philip; Nastouli, Eleni; Keane, Jacqueline A; Kellam, Paul; Otto, Thomas D

    2015-07-15

    An accurate genome assembly from short read sequencing data is critical for downstream analysis, for example allowing investigation of variants within a sequenced population. However, assembling sequencing data from virus samples, especially RNA viruses, into a genome sequence is challenging due to the combination of viral population diversity and extremely uneven read depth caused by amplification bias in the inevitable reverse transcription and polymerase chain reaction amplification process of current methods. We developed a new de novo assembler called IVA (Iterative Virus Assembler) designed specifically for read pairs sequenced at highly variable depth from RNA virus samples. We tested IVA on datasets from 140 sequenced samples from human immunodeficiency virus-1 or influenza-virus-infected people and demonstrated that IVA outperforms all other virus de novo assemblers. The software runs under Linux, has the GPLv3 licence and is freely available from http://sanger-pathogens.github.io/iva. © The Author 2015. Published by Oxford University Press.

  12. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use

  13. Accurate and efficient maximal ball algorithm for pore network extraction

    Science.gov (United States)

    Arand, Frederick; Hesser, Jürgen

    2017-04-01

    The maximal ball (MB) algorithm is a well established method for the morphological analysis of porous media. It extracts a network of pores and throats from volumetric data. This paper describes structural modifications to the algorithm, while the basic concepts are preserved. Substantial improvements to accuracy and efficiency are achieved as follows: First, all calculations are performed on a subvoxel accurate distance field, and no approximations to discretize balls are made. Second, data structures are simplified to keep memory usage low and improve algorithmic speed. Third, small and reasonable adjustments increase speed significantly. In volumes with high porosity, memory usage is improved compared to classic MB algorithms. Furthermore, processing is accelerated more than three times. Finally, the modified MB algorithm is verified by extracting several network properties from reference as well as real data sets. Runtimes are measured and compared to literature.

  14. Accurate Iterative Analysis of the K-V Equations

    Energy Technology Data Exchange (ETDEWEB)

    Anderson, O.A.

    2005-05-09

    Those working with alternating-gradient (A-G) systems look for simple, accurate ways to analyze A-G performance for matched beams. The useful K-V equations are easily solved in the smooth approximation. This approximate solution becomes quite inaccurate for applications with large focusing fields and phase advances. Previous efforts to improve the accuracy have tended to be indirect or complex. The generalizations presented previously gave better accuracy in a simple explicit format; however, the method used to derive those results (expansion in powers of a small parameter) was complex and hard to follow, and only low-order correction formulas were given. The present paper uses a straightforward iteration method and obtains equations of higher order than in the previous work.

  15. Efficient and Accurate Indoor Localization Using Landmark Graphs

    Science.gov (United States)

    Gu, F.; Kealy, A.; Khoshelham, K.; Shang, J.

    2016-06-01

    Indoor localization is important for a variety of applications such as location-based services, mobile social networks, and emergency response. Fusing spatial information is an effective way to achieve accurate indoor localization with little or no need for extra hardware. However, existing indoor localization methods that make use of spatial information are either too computationally expensive or too sensitive to the completeness of landmark detection. In this paper, we solve this problem by using the proposed landmark graph. The landmark graph is a directed graph where nodes are landmarks (e.g., doors, staircases, and turns) and edges are accessible paths with heading information. We compared the proposed method with two common Dead Reckoning (DR)-based methods (namely, Compass + Accelerometer + Landmarks and Gyroscope + Accelerometer + Landmarks) in a series of experiments. Experimental results show that the proposed method achieves 73% accuracy with a positioning error of less than 2.5 meters, outperforming the two DR-based methods.
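    The core idea of landmark-aided dead reckoning can be sketched in a few lines: integrate step length and heading, then snap the position to a mapped landmark whenever one is detected, which bounds the accumulated drift. This is a simplified illustration, not the paper's graph-based method; the function and its inputs are hypothetical.

```python
import math

def dead_reckon(steps, landmark_fixes=None):
    """Pedestrian dead reckoning with landmark resets.
    steps: list of (step_length_m, heading_rad).
    landmark_fixes: optional {step_index: (x, y)} mapping a detected
    landmark (door, staircase, turn) to its known map position; the
    estimate is snapped there, discarding accumulated drift."""
    landmark_fixes = landmark_fixes or {}
    x = y = 0.0
    track = [(x, y)]
    for i, (length, heading) in enumerate(steps):
        x += length * math.cos(heading)
        y += length * math.sin(heading)
        if i in landmark_fixes:          # snap to the mapped landmark
            x, y = landmark_fixes[i]
        track.append((x, y))
    return track

# four 1 m steps heading east, with a landmark fix after the second step
track = dead_reckon([(1.0, 0.0)] * 4, {1: (2.0, 0.5)})
```

Without fixes, heading and step-length errors grow without bound; each landmark encounter restarts the error budget, which is why completeness of landmark detection matters so much for plain DR methods.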

  16. Quantitative Proteomic Approaches for Studying Phosphotyrosine Signaling

    Energy Technology Data Exchange (ETDEWEB)

    Ding, Shi-Jian; Qian, Weijun; Smith, Richard D.

    2007-02-01

    Protein tyrosine phosphorylation is a fundamental mechanism for controlling many aspects of cellular processes, as well as aspects of human health and disease. Compared to phosphoserine (pSer) and phosphothreonine (pThr), phosphotyrosine (pTyr) signaling is more tightly regulated, but often more challenging to characterize due to the significantly lower level of tyrosine phosphorylation (a relative abundance of 1800:200:1 was estimated for pSer/pThr/pTyr in vertebrate cells [1]). In this review, we outline the recent advances in analytical methodologies for enrichment, identification, and accurate quantitation of tyrosine-phosphorylated proteins and peptides using antibody-based technologies, capillary liquid chromatography (LC) coupled with mass spectrometry (MS), and various stable isotope labeling strategies, as well as non-MS-based methods such as protein or peptide array methods. These proteomic technological advances provide powerful tools for potentially understanding signal transduction at the system level and provide a basis for discovering novel drug targets for human diseases. [1] Hunter, T. (1998) The Croonian Lecture 1997. The phosphorylation of proteins on tyrosine: its role in cell growth and disease. Philos. Trans. R. Soc. Lond. B Biol. Sci. 353, 583–605

  17. Fast and accurate inference of local ancestry in Latino populations

    Science.gov (United States)

    Baran, Yael; Pasaniuc, Bogdan; Sankararaman, Sriram; Torgerson, Dara G.; Gignoux, Christopher; Eng, Celeste; Rodriguez-Cintron, William; Chapela, Rocio; Ford, Jean G.; Avila, Pedro C.; Rodriguez-Santana, Jose; Burchard, Esteban Gonzàlez; Halperin, Eran

    2012-01-01

    Motivation: It is becoming increasingly evident that the analysis of genotype data from recently admixed populations is providing important insights into medical genetics and population history. Such analyses have been used to identify novel disease loci, to understand recombination rate variation and to detect recent selection events. The utility of such studies crucially depends on accurate and unbiased estimation of the ancestry at every genomic locus in recently admixed populations. Although various methods have been proposed and shown to be extremely accurate in two-way admixtures (e.g. African Americans), only a few approaches have been proposed and thoroughly benchmarked on multi-way admixtures (e.g. Latino populations of the Americas). Results: To address these challenges we introduce here methods for local ancestry inference which leverage the structure of linkage disequilibrium in the ancestral population (LAMP-LD), and incorporate the constraint of Mendelian segregation when inferring local ancestry in nuclear family trios (LAMP-HAP). Our algorithms uniquely combine hidden Markov models (HMMs) of haplotype diversity within a novel window-based framework to achieve superior accuracy as compared with published methods. Further, unlike previous methods, the structure of our HMM does not depend on the number of reference haplotypes but on a fixed constant, and it is thereby capable of utilizing large datasets while remaining highly efficient and robust to over-fitting. Through simulations and analysis of real data from 489 nuclear trio families from the mainland US, Puerto Rico and Mexico, we demonstrate that our methods achieve superior accuracy compared with published methods for local ancestry inference in Latinos. Availability: http://lamp.icsi.berkeley.edu/lamp/lampld/ Contact: bpasaniu@hsph.harvard.edu Supplementary information: Supplementary data are available at Bioinformatics online. PMID:22495753

  18. Influence of pansharpening techniques in obtaining accurate vegetation thematic maps

    Science.gov (United States)

    Ibarrola-Ulzurrun, Edurne; Gonzalo-Martin, Consuelo; Marcello-Ruiz, Javier

    2016-10-01

    In recent decades, natural resources have declined, making it important to develop reliable methodologies for their management. The appearance of very high resolution sensors has offered a practical and cost-effective means for good environmental management. In this context, improvements are needed in the quality of the available information in order to obtain reliable classified images. Pansharpening enhances the spatial resolution of the multispectral bands by incorporating information from the panchromatic image. The main goal of the study is to apply pixel-based and object-based classification techniques to imagery fused with different pansharpening algorithms, and to evaluate the resulting thematic maps, which serve to obtain accurate information for the conservation of natural resources. A vulnerable, heterogeneous ecosystem in the Canary Islands (Spain) was chosen, Teide National Park, and WorldView-2 high resolution imagery was employed. The classes considered of interest were set by the National Park conservation managers. Seven pansharpening techniques (GS, FIHS, HCS, MTF based, Wavelet `à trous' and Weighted Wavelet `à trous' through Fractal Dimension Maps) were chosen in order to improve the data quality with the goal of analyzing the vegetation classes. Next, different classification algorithms were applied using both pixel-based and object-based approaches, and an accuracy assessment of the resulting thematic maps was performed. The highest classification accuracy was obtained by applying a Support Vector Machine classifier, at the object-based level, to the Weighted Wavelet `à trous' through Fractal Dimension Maps fused image. Finally, we highlight the difficulty of classification in the Teide ecosystem due to its heterogeneity and the small size of the species. It is therefore important to obtain accurate thematic maps for further studies on the management and conservation of natural resources.

  19. Accurate molecular classification of cancer using simple rules

    Directory of Open Access Journals (Sweden)

    Gotoh Osamu

    2009-10-01

    Background: One intractable problem with using microarray data analysis for cancer classification is how to reduce the extremely high-dimensional gene feature data to remove the effects of noise. Feature selection is often used to address this problem by selecting informative genes from among thousands or tens of thousands of genes. However, most of the existing methods of microarray-based cancer classification utilize too many genes to achieve accurate classification, which often hampers the interpretability of the models. For a better understanding of the classification results, it is desirable to develop simpler rule-based models with as few marker genes as possible. Methods: We screened a small number of informative single genes and gene pairs on the basis of their depended degrees proposed in rough sets. Applying the decision rules induced by the selected genes or gene pairs, we constructed cancer classifiers. We tested the efficacy of the classifiers by leave-one-out cross-validation (LOOCV) of training sets and classification of independent test sets. Results: We applied our methods to five cancerous gene expression datasets: leukemia (acute lymphoblastic leukemia [ALL] vs. acute myeloid leukemia [AML]), lung cancer, prostate cancer, breast cancer, and leukemia (ALL vs. mixed-lineage leukemia [MLL] vs. AML). Accurate classification outcomes were obtained by utilizing just one or two genes. Some genes that correlated closely with the pathogenesis of relevant cancers were identified. In terms of both classification performance and algorithm simplicity, our approach outperformed or at least matched existing methods. Conclusion: In cancerous gene expression datasets, a small number of genes, even one or two if selected correctly, is capable of achieving an ideal cancer classification effect. This finding also means that very simple rules may perform well for cancerous class prediction.

  20. An accurate solver for forward and inverse transport

    Science.gov (United States)

    Monard, François; Bal, Guillaume

    2010-07-01

    This paper presents a robust and accurate way to solve steady-state linear transport (radiative transfer) equations numerically. Our main objective is to address the inverse transport problem, in which the optical parameters of a domain of interest are reconstructed from measurements performed at the domain's boundary. This inverse problem has important applications in medical and geophysical imaging, and more generally in any field involving high frequency waves or particles propagating in scattering environments. Stable solutions of the inverse transport problem require that the singularities of the measurement operator, which maps the optical parameters to the available measurements, be captured with sufficient accuracy. This in turn requires that the free propagation of particles be calculated with care, which is a difficult problem on a Cartesian grid. A standard discrete ordinates method is used for the direction of propagation of the particles. Our methodology to address spatial discretization is based on rotating the computational domain so that each direction of propagation is always aligned with one of the grid axes. Rotations are performed in the Fourier domain to achieve spectral accuracy. The numerical dispersion of the propagating particles is therefore minimal. As a result, the ballistic and single scattering components of the transport solution are calculated robustly and accurately. Physical blurring effects, such as small angular diffusion, are also incorporated into the numerical tool. Forward and inverse calculations performed in a two-dimensional setting exemplify the capabilities of the method. Although the methodology might not be the fastest way to solve transport equations, its physical accuracy provides us with a numerical tool to assess what can and cannot be reconstructed in inverse transport theory.

  1. Accurate skin dose measurements using radiochromic film in clinical applications.

    Science.gov (United States)

    Devic, S; Seuntjens, J; Abdel-Rahman, W; Evans, M; Olivares, M; Podgorsak, E B; Vuong, Té; Soares, Christopher G

    2006-04-01

    Megavoltage x-ray beams exhibit the well-known phenomena of dose buildup within the first few millimeters of the incident phantom surface, or the skin. Results of the surface dose measurements, however, depend vastly on the measurement technique employed. Our goal in this study was to determine a correction procedure in order to obtain an accurate skin dose estimate at the clinically relevant depth based on radiochromic film measurements. To illustrate this correction, we have used as a reference point a depth of 70 micron. We used the new GAFCHROMIC dosimetry films (HS, XR-T, and EBT) that have effective points of measurement at depths slightly larger than 70 micron. In addition to films, we also used an Attix parallel-plate chamber and a home-built extrapolation chamber to cover tissue-equivalent depths in the range from 4 micron to 1 mm of water-equivalent depth. Our measurements suggest that within the first millimeter of the skin region, the PDD for a 6 MV photon beam and field size of 10 x 10 cm2 increases from 14% to 43%. For the three GAFCHROMIC dosimetry film models, the 6 MV beam entrance skin dose measurement corrections due to their effective point of measurement are as follows: 15% for the EBT, 15% for the HS, and 16% for the XR-T model GAFCHROMIC films. The correction factors for the exit skin dose due to the build-down region are negligible. There is a small field size dependence for the entrance skin dose correction factor when using the EBT GAFCHROMIC film model. Finally, a procedure that uses EBT model GAFCHROMIC film for an accurate measurement of the skin dose in a parallel-opposed pair 6 MV photon beam arrangement is described.

  2. Sleep Deprivation Impairs the Accurate Recognition of Human Emotions

    Science.gov (United States)

    van der Helm, Els; Gujar, Ninad; Walker, Matthew P.

    2010-01-01

    Study Objectives: Investigate the impact of sleep deprivation on the ability to recognize the intensity of human facial emotions. Design: Randomized total sleep-deprivation or sleep-rested conditions, involving between-group and within-group repeated measures analysis. Setting: Experimental laboratory study. Participants: Thirty-seven healthy participants, (21 females) aged 18–25 y, were randomly assigned to the sleep control (SC: n = 17) or total sleep deprivation group (TSD: n = 20). Interventions: Participants performed an emotional face recognition task, in which they evaluated 3 different affective face categories: Sad, Happy, and Angry, each ranging in a gradient from neutral to increasingly emotional. In the TSD group, the task was performed once under conditions of sleep deprivation, and twice under sleep-rested conditions following different durations of sleep recovery. In the SC group, the task was performed twice under sleep-rested conditions, controlling for repeatability. Measurements and Results: In the TSD group, when sleep-deprived, there was a marked and significant blunting in the recognition of Angry and Happy affective expressions in the moderate (but not extreme) emotional intensity range; differences that were most reliable and significant in female participants. No change in the recognition of Sad expressions was observed. These recognition deficits were, however, ameliorated following one night of recovery sleep. No changes in task performance were observed in the SC group. Conclusions: Sleep deprivation selectively impairs the accurate judgment of human facial emotions, especially threat relevant (Anger) and reward relevant (Happy) categories, an effect observed most significantly in females. Such findings suggest that sleep loss impairs discrete affective neural systems, disrupting the identification of salient affective social cues. Citation: van der Helm E; Gujar N; Walker MP. Sleep deprivation impairs the accurate recognition of human emotions.

  3. An accurate and portable solid state neutron rem meter

    Energy Technology Data Exchange (ETDEWEB)

    Oakes, T.M. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Bellinger, S.L. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Miller, W.H. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Missouri University Research Reactor, Columbia, MO (United States); Myers, E.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Fronk, R.G.; Cooper, B.W [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Sobering, T.J. [Electronics Design Laboratory, Kansas State University, KS (United States); Scott, P.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Ugorowski, P.; McGregor, D.S; Shultis, J.K. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Caruso, A.N., E-mail: carusoan@umkc.edu [Department of Physics, University of Missouri, Kansas City, MO (United States)

    2013-08-11

    Accurately resolving the ambient neutron dose equivalent spanning the thermal to 15 MeV energy range with a single configuration and lightweight instrument is desirable. This paper presents the design of a portable, high intrinsic efficiency, and accurate neutron rem meter whose energy-dependent response is electronically adjusted to a chosen neutron dose equivalent standard. The instrument may be classified as a moderating type neutron spectrometer, based on an adaptation of the classical Bonner sphere and position sensitive long counter, which simultaneously counts thermalized neutrons with high thermal efficiency solid state neutron detectors. The use of multiple detectors and moderator arranged along an axis of symmetry (e.g., the long axis of a cylinder) with known neutron-slowing properties allows for the construction of a linear combination of responses that approximates the ambient neutron dose equivalent. Variations on the detector configuration are investigated via Monte Carlo N-Particle simulations to minimize the total instrument mass while maintaining acceptable response accuracy: a dose error of less than 15% for bare 252Cf, bare AmBe, and epithermal and mixed monoenergetic sources is found at less than 4.5 kg moderator mass in all studied cases. A comparison of the energy dependent dose equivalent response and resultant energy dependent dose equivalent error of the present dosimeter to commercially-available portable rem meters and the prior art is presented. Finally, the present design is assessed by comparison of the simulated output resulting from applications of several known neutron sources and dose rates.

  4. Accurate atom-mapping computation for biochemical reactions.

    Science.gov (United States)

    Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D

    2012-11-26

    The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways has to take into account the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of their mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.
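    The core optimization, choosing, among all bijections of reactant atoms onto product atoms, one that minimizes a weighted count of bond edits, can be illustrated in miniature by brute force. This is only a toy stand-in for the MILP formulation in the abstract: the molecules, the uniform bond weight, and the exhaustive search are all assumptions for illustration.

```python
from itertools import permutations

def mapping_cost(bonds_r, bonds_p, mapping, weight=1.0):
    """Weighted edit distance for one candidate mapping: the number of bonds
    broken or formed when reactant atoms are renamed via `mapping`
    (a toy, uniformly weighted version of MWED)."""
    mapped_r = {frozenset((mapping[a], mapping[b])) for a, b in bonds_r}
    product = {frozenset(b) for b in bonds_p}
    return weight * len(mapped_r ^ product)  # symmetric difference = edits

def best_mapping(n_atoms, bonds_r, bonds_p):
    """Exhaustive search over all bijections (feasible only for tiny molecules;
    real implementations solve this as a MILP)."""
    best = None
    for perm in permutations(range(n_atoms)):
        cost = mapping_cost(bonds_r, bonds_p, dict(enumerate(perm)))
        if best is None or cost < best[1]:
            best = (perm, cost)
    return best

# Toy isomerization on 3 atoms: bond 0-1 in the reactant becomes bond 1-2
# in the product; the optimal mapping relabels atoms so no bond is edited.
mapping, cost = best_mapping(3, bonds_r=[(0, 1)], bonds_p=[(1, 2)])
print(mapping, cost)  # a zero-cost mapping exists
```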

  5. Queuing theory accurately models the need for critical care resources.

    Science.gov (United States)

    McManus, Michael L; Long, Michael C; Cooper, Abbot; Litvak, Eugene

    2004-05-01

    Allocation of scarce resources presents an increasing challenge to hospital administrators and health policy makers. Intensive care units can present bottlenecks within busy hospitals, but their expansion is costly and difficult to gauge. Although mathematical tools have been suggested for determining the proper number of intensive care beds necessary to serve a given demand, the performance of such models has not been prospectively evaluated over significant periods. The authors prospectively collected 2 years' admission, discharge, and turn-away data in a busy, urban intensive care unit. Using queuing theory, they then constructed a mathematical model of patient flow, compared predictions from the model to the observed performance of the unit, and explored the sensitivity of the model to changes in unit size. The queuing model proved to be very accurate, with predicted admission turn-away rates correlating highly with those actually observed (correlation coefficient = 0.89). The model was useful in predicting both monthly responsiveness to changing demand (mean monthly difference between observed and predicted values, 0.4+/-2.3%; range, 0-13%) and the overall 2-yr turn-away rate for the unit (21% vs. 22%). Both in practice and in simulation, turn-away rates increased exponentially when utilization exceeded 80-85%. Sensitivity analysis using the model revealed rapid and severe degradation of system performance with even the small changes in bed availability that might result from sudden staffing shortages or admission of patients with very long stays. The stochastic nature of patient flow may falsely lead health planners to underestimate resource needs in busy intensive care units. Although the nature of arrivals for intensive care deserves further study, when demand is random, queuing theory provides an accurate means of determining the appropriate supply of beds.
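    The steep rise in turn-away rates past roughly 80-85% utilization is characteristic of loss systems in queuing theory. A standard way to sketch it is the Erlang-B formula for an M/M/c/c system (a unit that turns arrivals away when all c beds are occupied); the abstract does not state which queuing model the authors used, and the bed count and loads below are hypothetical.

```python
from math import factorial

def erlang_b(c: int, a: float) -> float:
    """Blocking (turn-away) probability for an M/M/c/c loss system.

    c: number of beds (servers); a: offered load in Erlangs
    (arrival rate times mean length of stay)."""
    numerator = a**c / factorial(c)
    denominator = sum(a**k / factorial(k) for k in range(c + 1))
    return numerator / denominator

# Hypothetical 18-bed ICU: blocking grows sharply as offered load approaches
# and exceeds ~85% of capacity.
for load in (12.0, 15.0, 16.5):
    print(f"load {load:.1f} Erlangs -> turn-away {erlang_b(18, load):.1%}")
```

Note how small changes in effective capacity (e.g., a staffing shortage removing one or two beds) shift the curve substantially, which mirrors the sensitivity result reported above.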

  6. Issues and Applications in Label-Free Quantitative Mass Spectrometry

    Directory of Open Access Journals (Sweden)

    Xianyin Lai

    2013-01-01

    Full Text Available To address the challenges associated with differential expression proteomics, label-free mass spectrometric protein quantification methods have been developed as alternatives to array-based, gel-based, and stable isotope tag or label-based approaches. In this paper, we focus on the issues associated with label-free methods that rely on quantitation based on peptide ion peak area measurement. These issues include chromatographic alignment, peptide qualification for quantitation, and normalization. In addressing these issues, we present various approaches, assembled in a recently developed label-free quantitative mass spectrometry platform, that overcome these difficulties and enable comprehensive, accurate, and reproducible protein quantitation in highly complex protein mixtures from experiments with many sample groups. As examples of the utility of this approach, we present a variety of cases where the platform was applied successfully to assess differential protein expression or abundance in body fluids, in vitro nanotoxicology models, tissue proteomics in genetic knock-in mice, and cell membrane proteomics.

  7. Evaluating IPMN and pancreatic carcinoma utilizing quantitative histopathology.

    Science.gov (United States)

    Glazer, Evan S; Zhang, Hao Helen; Hill, Kimberly A; Patel, Charmi; Kha, Stephanie T; Yozwiak, Michael L; Bartels, Hubert; Nafissi, Nellie N; Watkins, Joseph C; Alberts, David S; Krouse, Robert S

    2016-10-01

    Intraductal papillary mucinous neoplasms (IPMN) are pancreatic lesions with uncertain biologic behavior. This study sought objective, accurate prediction tools, through the use of quantitative histopathological signatures of nuclear images, for classifying lesions as chronic pancreatitis (CP), IPMN, or pancreatic carcinoma (PC). Forty-four pancreatic resection patients were retrospectively identified for this study (12 CP; 16 IPMN; 16 PC). Regularized multinomial regression quantitatively classified each specimen as CP, IPMN, or PC in an automated, blinded fashion. Classification certainty was determined by subtracting the smallest classification probability from the largest probability (of the three groups). The certainty function varied from 1.0 (perfectly classified) to 0.0 (random). From each lesion, 180 ± 22 nuclei were imaged. Overall classification accuracy was 89.6% with six unique nuclear features. No CP cases were misclassified, 1/16 IPMN cases were misclassified, and 4/16 PC cases were misclassified. Certainty function was 0.75 ± 0.16 for correctly classified lesions and 0.47 ± 0.10 for incorrectly classified lesions (P = 0.0005). Uncertainty was identified in four of the five misclassified lesions. Quantitative histopathology provides a robust, novel method to distinguish among CP, IPMN, and PC with a quantitative measure of uncertainty. This may be useful when there is uncertainty in diagnosis.
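    The certainty function described above (largest class probability minus smallest, ranging from 1.0 for a perfectly classified case down to 0.0 for a uniform prediction over the three groups) is simple to state in code. The probability vectors below are hypothetical, not values from the study.

```python
def classification_certainty(probs):
    """Certainty = largest class probability minus smallest.

    probs: class probabilities for one specimen (must sum to ~1).
    Returns 1.0 for a perfectly classified case and 0.0 for a
    uniform (random) prediction."""
    assert abs(sum(probs) - 1.0) < 1e-6, "probabilities must sum to 1"
    return max(probs) - min(probs)

# Hypothetical multinomial-regression outputs for (CP, IPMN, PC):
confident = classification_certainty([0.90, 0.07, 0.03])   # ~0.87
uncertain = classification_certainty([0.40, 0.35, 0.25])   # ~0.15
print(confident, uncertain)
```

A threshold on this quantity is what flags potential misclassifications, as in the study, where incorrectly classified lesions clustered at lower certainty values.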

  8. Leidenfrost effect: Accurate drop shape modeling and refined scaling laws.

    Science.gov (United States)

    Sobac, B; Rednikov, A; Dorbolo, S; Colinet, P

    2014-11-01

    We here present a simple fitting-parameter-free theory of the Leidenfrost effect (droplet levitation above a superheated plate) covering the full range of stable shapes, i.e., from small quasispherical droplets to larger puddles floating on a pocketlike vapor film. The geometry of this film is found to be in excellent quantitative agreement with the interferometric measurements of Burton et al. [Phys. Rev. Lett. 109, 074301 (2012)]. We also obtain new scalings generalizing classical ones derived by Biance et al. [Phys. Fluids 15, 1632 (2003)] as far as the effect of plate superheat is concerned and highlight the relative roles of evaporation, gravity, and capillarity in the vapor film. To further substantiate these findings, a treatment of the problem by matched asymptotic expansions is also presented.

  9. A gold nanoparticle-based semi-quantitative and quantitative ultrasensitive paper sensor for the detection of twenty mycotoxins

    Science.gov (United States)

    Kong, Dezhao; Liu, Liqiang; Song, Shanshan; Suryoprabowo, Steven; Li, Aike; Kuang, Hua; Wang, Libing; Xu, Chuanlai

    2016-02-01

    A semi-quantitative and quantitative multi-immunochromatographic (ICA) strip detection assay was developed for the simultaneous detection of twenty types of mycotoxins from five classes, including zearalenones (ZEAs), deoxynivalenols (DONs), T-2 toxins (T-2s), aflatoxins (AFs), and fumonisins (FBs), in cereal food samples. Sensitive and specific monoclonal antibodies were selected for this assay. The semi-quantitative results were obtained within 20 min by the naked eye, with visual limits of detection for ZEAs, DONs, T-2s, AFs and FBs of 0.1-0.5, 2.5-250, 0.5-1, 0.25-1 and 2.5-10 μg kg-1, and cut-off values of 0.25-1, 5-500, 1-10, 0.5-2.5 and 5-25 μg kg-1, respectively. The quantitative results were obtained using a hand-held strip scan reader, with the calculated limits of detection for ZEAs, DONs, T-2s, AFs and FBs of 0.04-0.17, 0.06-49, 0.15-0.22, 0.056-0.49 and 0.53-1.05 μg kg-1, respectively. The analytical results of spiked samples were in accordance with the accurate content in the simultaneous detection analysis. This newly developed ICA strip assay is suitable for the on-site detection and rapid initial screening of mycotoxins in cereal samples, facilitating both semi-quantitative and quantitative determination.

  10. Compositional and Quantitative Model Checking

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2010-01-01

    This paper gives a survey of a compositional model checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based

  11. Quantitative Reasoning in Problem Solving

    Science.gov (United States)

    Ramful, Ajay; Ho, Siew Yin

    2015-01-01

    In this article, Ajay Ramful and Siew Yin Ho explain the meaning of quantitative reasoning, describing how it is used to solve mathematical problems. They also describe a diagrammatic approach to represent relationships among quantities and provide examples of problems and their solutions.

  12. Time-resolved quantitative phosphoproteomics

    DEFF Research Database (Denmark)

    Verano-Braga, Thiago; Schwämmle, Veit; Sylvester, Marc

    2012-01-01

    proteins involved in the Ang-(1-7) signaling, we performed a mass spectrometry-based time-resolved quantitative phosphoproteome study of human aortic endothelial cells (HAEC) treated with Ang-(1-7). We identified 1288 unique phosphosites on 699 different proteins with 99% certainty of correct peptide...

  13. Quantitative Characterisation of Surface Texture

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Lonardo, P.M.; Trumpold, H.

    2000-01-01

    This paper reviews the different methods used to give a quantitative characterisation of surface texture. The paper contains a review of conventional 2D as well as 3D roughness parameters, with particular emphasis on recent international standards and developments. It presents new texture...

  14. GPC and quantitative phase imaging

    DEFF Research Database (Denmark)

    Palima, Darwin; Banas, Andrew Rafael; Villangca, Mark Jayson

    2016-01-01

    shaper followed by the potential of GPC for biomedical and multispectral applications where we experimentally demonstrate the active light shaping of a supercontinuum laser over most of the visible wavelength range. Finally, we discuss how GPC can be advantageously applied for Quantitative Phase Imaging...

  15. Quantitative risk assessment of CO

    NARCIS (Netherlands)

    Koornneef, J.; Spruijt, M.; Molag, M.; Ramírez, A.; Turkenburg, W.; Faaij, A.

    2010-01-01

    A systematic assessment, based on an extensive literature review, of the impact of gaps and uncertainties on the results of quantitative risk assessments (QRAs) for CO2 pipelines is presented. Sources of uncertainties that have been assessed are: failure rates, pipeline pressure, temperat

  16. Can we quantitatively assess security?

    NARCIS (Netherlands)

    Haverkort, Boudewijn R.

    2006-01-01

    This short note describes a number of methods for assessing security in a quantitative way. Next to describing five existing approaches (where no completeness is claimed), a new assessment technique is proposed that finds its roots in methods known from performability evaluation and stochastic mo

  17. Quantity in Modern Icelandic (La quantité en islandais moderne)

    Directory of Open Access Journals (Sweden)

    Magnús Pétursson

    1978-12-01

    Full Text Available The phonetic realization of quantity in stressed syllables in the reading of two continuous texts. The problem of quantity is one of the most studied problems in the phonology of Modern Icelandic. From a phonological point of view, it seems that nothing new can be expected, the theoretical possibilities having been practically exhausted, as we recalled in our recent study (Pétursson 1978, pp. 76-78). The most unexpected result of the research of recent years is without doubt the discovery of a quantitative differentiation between the North and the South of Iceland (Pétursson 1976a). It is nevertheless still premature to speak of true quantitative zones, since neither their boundaries nor their geographical extent are known.

  18. Accurate episomal HIV 2-LTR circles quantification using optimized DNA isolation and droplet digital PCR

    Directory of Open Access Journals (Sweden)

    Eva Malatinkova

    2014-11-01

    Full Text Available Introduction: In HIV-infected patients on combination antiretroviral therapy (cART), the detection of episomal HIV 2-LTR circles is a potential marker for ongoing viral replication. Quantification of 2-LTR circles is based on quantitative PCR or, more recently, on digital PCR assessment, but is hampered by their low abundance. Sample pre-PCR processing is a critical step for 2-LTR circle quantification, which has not yet been sufficiently evaluated in patient-derived samples. Materials and Methods: We compared two sample processing procedures to more accurately quantify 2-LTR circles using droplet digital PCR (ddPCR). Episomal HIV 2-LTR circles were isolated either by genomic DNA isolation or by a modified plasmid DNA isolation, to separate the small episomal circular DNA from chromosomal DNA. This was performed in a dilution series of HIV-infected cells and in HIV-1 infected patient-derived samples (n=59). Samples for the plasmid DNA isolation method were spiked with an internal control plasmid. Results: Genomic DNA isolation enables robust 2-LTR circle quantification. However, in the lower ranges of detection, PCR inhibition caused by the high genomic DNA load substantially limits the amount of sample input, and this impacts sensitivity and accuracy. Moreover, total genomic DNA isolation resulted in a lower recovery of 2-LTR templates per isolate, further reducing its sensitivity. The modified plasmid DNA isolation with a spiked reference for normalization was more accurate in these low ranges compared to genomic DNA isolation. A linear correlation of both methods was observed in the dilution series (R2=0.974) and in the patient-derived samples with 2-LTR numbers above 10 copies per million peripheral blood mononuclear cells (PBMCs) (R2=0.671). Furthermore, Bland–Altman analysis revealed an average agreement between the methods within the 27 samples in which 2-LTR circles were detectable with both methods (bias: 0.3875±1.2657 log10). Conclusions: 2-LTR
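    The Bland-Altman agreement figure quoted above (a bias with a standard deviation on log10-transformed copy numbers) is the mean and spread of per-sample differences between the two methods. A minimal sketch follows; the copy numbers are hypothetical, not patient data from the study.

```python
from math import log10, sqrt

def bland_altman_log10(method_a, method_b):
    """Bland-Altman bias and SD of per-sample differences on a log10 scale.

    method_a, method_b: paired measurements (same samples, two methods)."""
    diffs = [log10(a) - log10(b) for a, b in zip(method_a, method_b)]
    n = len(diffs)
    bias = sum(diffs) / n
    sd = sqrt(sum((d - bias) ** 2 for d in diffs) / (n - 1))
    return bias, sd

# Hypothetical 2-LTR copies per million PBMCs from the two isolation methods:
genomic = [12.0, 55.0, 130.0, 8.0, 300.0]
plasmid = [10.0, 60.0, 110.0, 9.0, 350.0]
bias, sd = bland_altman_log10(genomic, plasmid)
print(f"bias: {bias:+.4f} +/- {sd:.4f} log10")
```

A bias near zero with a modest SD, as in the study's 27 paired samples, indicates that neither method systematically over- or under-reports relative to the other.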

  19. A virtual environment for the accurate geologic analysis of Martian terrain

    Science.gov (United States)

    Traxler, Christoph; Paar, Gerhard; Gupta, Sanjeev; Hesina, Gerd; Sander, Kathrin; Barnes, Rob; Nauschnegg, Bernhard; Muller, Jan-Peter; Tao, Yu

    2015-04-01

    Remote geology on planetary surfaces requires immersive presentation of the environment to be investigated. Three-dimensional (3D) processing of images from rovers and satellites makes it possible to reconstruct terrain in virtual space on Earth for scientific analysis. In this paper we present a virtual environment that allows users to interactively explore 3D-reconstructed Martian terrain and perform accurate measurements on the surface. Geologists require not only line-of-sight measurements between two points but, more importantly, the projected line of sight on the surface between two such points. Furthermore, the tool supports defining paths through several points. It is also important for geologists to annotate the terrain they explore, especially when collaborating with colleagues. The path tool can also be used to separate geological layers or surround areas of interest. Paths can be linked with a text label directly positioned in 3D space and always oriented towards the viewing direction. All measurements and annotations can be maintained through a graphical user interface and used as landmarks, i.e., it is possible to fly to the corresponding locations. The virtual environment is fed with 3D vision products from rover cameras, placed in the 3D context gained from satellite images (digital elevation models and corresponding ortho images). This allows investigations at various scales, from planet to microscopic level, in a seamless manner. The modes of exploitation and added value of such an interactive tool are manifold. The visualisation products enable us to map geological surfaces and rock layers over large areas in a quantitative framework. Accurate geometrical relationships of rock bodies, especially for sedimentary layers, can be reconstructed, and the relationships between superposed layers can be established. Within sedimentary layers, we can delineate sedimentary facies and other characteristics. In particular, inclination of beds, which may help ascertain flow directions, can be

  20. Quantitative disease resistance and quantitative resistance Loci in breeding.

    Science.gov (United States)

    St Clair, Dina A

    2010-01-01

    Quantitative disease resistance (QDR) has been observed within many crop plants but is not as well understood as qualitative (monogenic) disease resistance and has not been used as extensively in breeding. Mapping quantitative trait loci (QTLs) is a powerful tool for genetic dissection of QDR. DNA markers tightly linked to quantitative resistance loci (QRLs) controlling QDR can be used for marker-assisted selection (MAS) to incorporate these valuable traits. QDR confers a reduction, rather than lack, of disease and has diverse biological and molecular bases as revealed by cloning of QRLs and identification of the candidate gene(s) underlying QRLs. Increasing our biological knowledge of QDR and QRLs will enhance understanding of how QDR differs from qualitative resistance and provide the necessary information to better deploy these resources in breeding. Application of MAS for QRLs in breeding for QDR to diverse pathogens is illustrated by examples from wheat, barley, common bean, tomato, and pepper. Strategies for optimum deployment of QRLs require research to understand effects of QDR on pathogen populations over time.

  1. Accurate in-line CD metrology for nanometer semiconductor manufacturing

    Science.gov (United States)

    Perng, Baw-Ching; Shieh, Jyu-Horng; Jang, S.-M.; Liang, M.-S.; Huang, Renee; Chen, Li-Chien; Hwang, Ruey-Lian; Hsu, Joe; Fong, David

    2006-03-01

    The need for absolute accuracy is increasing as semiconductor-manufacturing technologies advance to sub-65nm nodes, since device sizes are shrinking to sub-50nm while offsets ranging from 5nm to 20nm are often encountered. While TEM is well recognized as the most accurate CD metrology, direct comparison between TEM data and in-line CD data can sometimes be misleading due to different statistical sampling and interference from sidewall roughness. In this work we explore the capability of CD-AFM as an accurate in-line CD reference metrology. As a scanning-profiling metrology, CD-AFM has the advantages of avoiding e-beam damage and minimizing sample-damage-induced CD changes, in addition to allowing more statistical sampling than typical cross-section metrologies. While AFM has already gained a reputation for the accuracy of its depth measurements, little data has been reported on the accuracy of CD-AFM for CD measurement. Our main focus here is to prove the accuracy of CD-AFM and show its measuring capability for semiconductor-related materials and patterns. In addition to the typical precision check, we spent intensive effort examining the bias performance of this CD metrology, defined as the difference between CD-AFM data and the best-known CD value of the prepared samples. We first examine line edge roughness (LER) behavior for line patterns of various materials, including polysilicon, photoresist, and a porous low-k material. Based on the LER characteristics of each pattern, a method is proposed to reduce its influence on CD measurement. Application of our method to a VLSI nanoCD standard is then performed, and agreement to less than 1nm bias is achieved between the CD-AFM data and the standard's value. With very careful sample preparation and TEM tool calibration, we also obtained excellent correlation between CD-AFM and TEM for poly-CDs ranging from 70nm to 400nm. CD measurements of poly ADI and low k trenches are also

  2. Accurate, low-cost 3D-models of gullies

    Science.gov (United States)

    Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine

    2015-04-01

    Soil erosion is a widespread problem in arid and semi-arid areas. Its most severe form is gully erosion. Gullies often cut into agricultural farmland and can render an area completely unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D models of gullies in the Souss Valley in South Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume and activity. Using a Canon HF G30 camcorder, we recorded series of Full HD videos at 25 fps. Afterwards, we used the Structure from Motion (SfM) method to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, while the overlap of neighboring images should be at least 80%. In addition, it is very important to avoid selecting photos that are blurry or out of focus. Neighboring pixels of a blurry image tend to have similar color values, which is why we used a MATLAB script to compare the derivatives of the images: for images of similar objects, the higher the sum of the derivatives, the sharper the image. MATLAB subdivides the video into image intervals and selects, from each interval, the image with the highest sum. For example, a 20 min video at 25 fps yields 30,000 single images; the program inspects the first 20 images, saves the sharpest, moves on to the next 20 images, and so on. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and the matches between all images and produced a point cloud. MeshLab was then used to build a surface out of it using the Poisson surface reconstruction approach. Afterwards we are able to calculate the size and the volume of the gullies. It is also possible to determine soil erosion rates if we compare the data with old recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
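    The interval-based sharpest-frame selection described above can be sketched in Python rather than MATLAB. The gradient-magnitude scoring below (sum of absolute pixel differences) is one reasonable reading of "sum of the derivatives"; the exact derivative used in the authors' script is not stated, and the toy frames are illustrative.

```python
def sharpness(frame):
    """Score a grayscale frame (2D list of ints) by the sum of absolute
    horizontal and vertical pixel differences, a simple derivative-based
    sharpness proxy: blurrier frames (similar neighboring pixels) score lower."""
    height, width = len(frame), len(frame[0])
    score = 0
    for y in range(height):
        for x in range(width):
            if x + 1 < width:
                score += abs(frame[y][x + 1] - frame[y][x])
            if y + 1 < height:
                score += abs(frame[y + 1][x] - frame[y][x])
    return score

def select_sharpest(frames, interval):
    """From each consecutive block of `interval` frames, keep the sharpest."""
    selected = []
    for start in range(0, len(frames), interval):
        block = frames[start:start + interval]
        selected.append(max(block, key=sharpness))
    return selected

# Toy example: a high-contrast checkerboard frame vs a flat (blurry-like) frame.
sharp = [[0, 255], [255, 0]]
flat = [[128, 128], [128, 128]]
picked = select_sharpest([flat, sharp, flat, flat, sharp, flat], interval=3)
print(len(picked))  # one frame kept per 3-frame interval
```

On a real video one would decode frames with an imaging library and pick the interval so that roughly 1500-1700 frames survive, as in the workflow above.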

  3. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    Directory of Open Access Journals (Sweden)

    Farmerie William G

    2006-08-01

    Full Text Available Abstract Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20) System (454 Life Sciences Corporation), to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae) and Platanus occidentalis (Platanaceae). Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions. Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy

  4. Automatic classification and accurate size measurement of blank mask defects

    Science.gov (United States)

    Bhamidipati, Samir; Paninjath, Sankaranarayanan; Pereira, Mark; Buck, Peter

    2015-07-01

    A blank mask and its preparation stages, such as cleaning or resist coating, play an important role in the eventual yield obtained by using it. The impact analysis of blank mask defects directly depends on the amount of available information, such as the number of defects observed and their accurate locations and sizes. Mask usability qualification at the start of the preparation process is crudely based on the number of defects. Similarly, defect information such as size is sought to estimate eventual defect printability on the wafer. Tracking of defect characteristics, specifically size and shape, across multiple stages can further be indicative of process-related information such as cleaning or coating process efficiencies. At the first level, inspection machines address the requirement of defect characterization by detecting and reporting relevant defect information. The analysis of this information, though, is still largely a manual process. With advancing technology nodes and shrinking half-pitch sizes, a large number of defects are observed, and the detailed knowledge required makes the manual defect review process an arduous task, in addition to its sensitivity to human error. In cases where the defect information reported by the inspection machine is not sufficient, mask shops rely on other tools; use of CD-SEM tools is one such option. However, these additional steps translate into increased costs. The Calibre NxDAT-based MDPAutoClassify tool provides an automated software alternative to the manual defect review process. Working on defect images generated by inspection machines, the tool extracts and reports additional information such as defect location, useful for defect avoidance[4][5]; defect size, useful in estimating defect printability; and defect nature, e.g. particle, scratch, resist void, etc., useful for process monitoring. The tool makes use of smart and elaborate post-processing algorithms to achieve this. Their elaborateness is a consequence of the variety and

  5. A spectroscopic transfer standard for accurate atmospheric CO measurements

    Science.gov (United States)

    Nwaboh, Javis A.; Li, Gang; Serdyukov, Anton; Werhahn, Olav; Ebert, Volker

    2016-04-01

    Atmospheric carbon monoxide (CO) is a precursor of essential climate variables and has an indirect effect in enhancing global warming. Accurate and reliable measurements of atmospheric CO concentration are becoming indispensable. WMO-GAW reports state a compatibility goal of ±2 ppb for atmospheric CO concentration measurements. Therefore, the EMRP-HIGHGAS (European metrology research program - high-impact greenhouse gases) project aims at developing spectroscopic transfer standards for CO concentration measurements to meet this goal. A spectroscopic transfer standard would provide results that are directly traceable to the SI, can be very useful for the calibration of devices operating in the field, and could complement classical gas standards in the field, where calibration gas mixtures in bottles often are not accurate, available or stable enough [1][2]. Here, we present our new direct tunable diode laser absorption spectroscopy (dTDLAS) sensor capable of performing absolute ("calibration-free") CO concentration measurements and being operated as a spectroscopic transfer standard. To achieve the compatibility goal stated by the WMO for CO concentration measurements and to ensure the traceability of the final concentration results, traceable spectral line data, especially line intensities with appropriate uncertainties, are needed. Therefore, we utilize our new high-resolution Fourier-transform infrared (FTIR) spectroscopy CO line data for the 2-0 band, with significantly reduced uncertainties, for the dTDLAS data evaluation. Further, we demonstrate the capability of our sensor for atmospheric CO measurements, discuss uncertainty calculation following the principles of the guide to the expression of uncertainty in measurement (GUM), and show that CO concentrations derived using the sensor, based on the TILSAM (traceable infrared laser spectroscopic amount fraction measurement) method, are in excellent agreement with gravimetric values. Acknowledgement Parts of this work have been

  6. Hydration free energies of cyanide and hydroxide ions from molecular dynamics simulations with accurate force fields

    Science.gov (United States)

    Lee, M.W.; Meuwly, M.

    2013-01-01

    The evaluation of hydration free energies is a sensitive test to assess force fields used in atomistic simulations. We showed recently that the vibrational relaxation times, 1D- and 2D-infrared spectroscopies for CN(-) in water can be quantitatively described from molecular dynamics (MD) simulations with multipolar force fields and slightly enlarged van der Waals radii for the C- and N-atoms. To validate such an approach, the present work investigates the solvation free energy of cyanide in water using MD simulations with accurate multipolar electrostatics. It is found that larger van der Waals radii are indeed necessary to obtain results close to the experimental values when a multipolar force field is used. For CN(-), the van der Waals ranges refined in our previous work yield hydration free energy between -72.0 and -77.2 kcal mol(-1), which is in excellent agreement with the experimental data. In addition to the cyanide ion, we also study the hydroxide ion to show that the method used here is readily applicable to similar systems. Hydration free energies are found to sensitively depend on the intermolecular interactions, while bonded interactions are less important, as expected. We also investigate in the present work the possibility of applying the multipolar force field in scoring trajectories generated using computationally inexpensive methods, which should be useful in broader parametrization studies with reduced computational resources, as scoring is much faster than the generation of the trajectories.

  7. Comparison of PIV with 4D-Flow in a physiological accurate flow phantom

    Science.gov (United States)

    Sansom, Kurt; Balu, Niranjan; Liu, Haining; Aliseda, Alberto; Yuan, Chun; Canton, Maria De Gador

    2016-11-01

    Validation of 4D MRI flow sequences with planar particle image velocimetry (PIV) is performed in a physiologically accurate flow phantom. A patient-specific phantom of a carotid artery is connected to a pulsatile flow loop to simulate the 3D unsteady flow in the cardiovascular anatomy. Cardiac-cycle-synchronized MRI provides time-resolved 3D blood velocity measurements, a promising clinical tool that still lacks a robust validation framework. PIV at three different Reynolds numbers (540, 680, and 815, chosen based on ±20% of the average velocity of the patient-specific CCA waveform) and four different Womersley numbers (3.30, 3.68, 4.03, and 4.35, chosen to reflect a physiological range of heart rates) is compared to 4D-MRI measurements. An accuracy assessment of the raw velocity measurements and a comparison of estimated and measurable flow parameters such as wall shear stress, fluctuating velocity rms, and Lagrangian particle residence time will be presented, with justification for their biomechanical relevance to the pathophysiology of arterial disease: atherosclerosis and intimal hyperplasia. Lastly, the framework is applied to a new 4D-Flow MRI sequence and post-processing techniques to provide a quantitative assessment against the benchmark data. Department of Education GAANN Fellowship.
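The two dimensionless groups quoted above follow from their standard definitions; a hedged sketch with assumed round numbers for blood properties, vessel diameter, and heart rate (not the study's actual parameters):

```python
# Hedged sketch of the two dimensionless groups used to match a flow
# phantom to physiology. Fluid properties, diameter and heart rate are
# assumed illustrative values, not the study's parameters.
import math

def reynolds(rho, u_mean, diameter, mu):
    """Re = rho * U * D / mu: ratio of inertial to viscous forces."""
    return rho * u_mean * diameter / mu

def womersley(radius, freq_hz, rho, mu):
    """alpha = r * sqrt(omega * rho / mu): pulsatility parameter."""
    omega = 2.0 * math.pi * freq_hz
    return radius * math.sqrt(omega * rho / mu)

RHO = 1060.0   # blood density, kg/m^3 (assumed)
MU = 0.0035    # blood dynamic viscosity, Pa*s (assumed)
D = 0.006      # carotid diameter, m (assumed)

re = reynolds(RHO, u_mean=0.40, diameter=D, mu=MU)
alpha = womersley(D / 2.0, freq_hz=1.0, rho=RHO, mu=MU)
print(f"Re = {re:.0f}, Womersley alpha = {alpha:.2f}")
```

With these assumed inputs both numbers land inside the ranges quoted in the abstract, which is the sanity check such a matching exercise relies on.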

  8. Combining transcription factor binding affinities with open-chromatin data for accurate gene expression prediction.

    Science.gov (United States)

    Schmidt, Florian; Gasparoni, Nina; Gasparoni, Gilles; Gianmoena, Kathrin; Cadenas, Cristina; Polansky, Julia K; Ebert, Peter; Nordström, Karl; Barann, Matthias; Sinha, Anupam; Fröhler, Sebastian; Xiong, Jieyi; Dehghani Amirabad, Azim; Behjati Ardakani, Fatemeh; Hutter, Barbara; Zipprich, Gideon; Felder, Bärbel; Eils, Jürgen; Brors, Benedikt; Chen, Wei; Hengstler, Jan G; Hamann, Alf; Lengauer, Thomas; Rosenstiel, Philip; Walter, Jörn; Schulz, Marcel H

    2017-01-09

    The binding and contribution of transcription factors (TF) to cell specific gene expression is often deduced from open-chromatin measurements to avoid costly TF ChIP-seq assays. Thus, it is important to develop computational methods for accurate TF binding prediction in open-chromatin regions (OCRs). Here, we report a novel segmentation-based method, TEPIC, to predict TF binding by combining sets of OCRs with position weight matrices. TEPIC can be applied to various open-chromatin data, e.g. DNaseI-seq and NOMe-seq. Additionally, Histone-Marks (HMs) can be used to identify candidate TF binding sites. TEPIC computes TF affinities and uses open-chromatin/HM signal intensity as quantitative measures of TF binding strength. Using machine learning, we find low affinity binding sites to improve our ability to explain gene expression variability compared to the standard presence/absence classification of binding sites. Further, we show that both footprints and peaks capture essential TF binding events and lead to a good prediction performance. In our application, gene-based scores computed by TEPIC with one open-chromatin assay nearly reach the quality of several TF ChIP-seq data sets. Finally, these scores correctly predict known transcriptional regulators as illustrated by the application to novel DNaseI-seq and NOMe-seq data for primary human hepatocytes and CD4+ T-cells, respectively.
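The graded, affinity-based scoring that distinguishes TEPIC from a hard presence/absence call can be illustrated with a TRAP-like toy computation: every sequence window contributes a continuous score, so low-affinity sites are retained rather than thresholded away. The motif and sequences below are invented, and TEPIC's actual implementation differs in detail:

```python
# Toy sketch of graded PWM affinity scoring (TRAP-like): each window
# contributes a score, so weak sites add signal instead of being lost
# to a cutoff. The 3-bp motif and sequences are invented examples.

BASES = "ACGT"

def window_affinity(pwm, window):
    """Product of per-position base probabilities vs uniform background."""
    score = 1.0
    for probs, base in zip(pwm, window):
        score *= probs[BASES.index(base)] / 0.25
    return score

def region_affinity(pwm, sequence):
    """Sum of graded window affinities over a whole region."""
    w = len(pwm)
    return sum(window_affinity(pwm, sequence[i:i + w])
               for i in range(len(sequence) - w + 1))

# Toy motif strongly preferring "ACG" (columns ordered A, C, G, T):
pwm = [[0.7, 0.1, 0.1, 0.1],
       [0.1, 0.7, 0.1, 0.1],
       [0.1, 0.1, 0.7, 0.1]]

strong = region_affinity(pwm, "TTACGTT")
weak = region_affinity(pwm, "TTTTTTT")
print(f"affinity with motif match: {strong:.2f}, without: {weak:.2f}")
```

In TEPIC these per-region affinities are additionally weighted by the open-chromatin or histone-mark signal intensity before being aggregated into gene-level scores.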

  9. Combining transcription factor binding affinities with open-chromatin data for accurate gene expression prediction

    Science.gov (United States)

    Schmidt, Florian; Gasparoni, Nina; Gasparoni, Gilles; Gianmoena, Kathrin; Cadenas, Cristina; Polansky, Julia K.; Ebert, Peter; Nordström, Karl; Barann, Matthias; Sinha, Anupam; Fröhler, Sebastian; Xiong, Jieyi; Dehghani Amirabad, Azim; Behjati Ardakani, Fatemeh; Hutter, Barbara; Zipprich, Gideon; Felder, Bärbel; Eils, Jürgen; Brors, Benedikt; Chen, Wei; Hengstler, Jan G.; Hamann, Alf; Lengauer, Thomas; Rosenstiel, Philip; Walter, Jörn; Schulz, Marcel H.

    2017-01-01

    The binding and contribution of transcription factors (TF) to cell specific gene expression is often deduced from open-chromatin measurements to avoid costly TF ChIP-seq assays. Thus, it is important to develop computational methods for accurate TF binding prediction in open-chromatin regions (OCRs). Here, we report a novel segmentation-based method, TEPIC, to predict TF binding by combining sets of OCRs with position weight matrices. TEPIC can be applied to various open-chromatin data, e.g. DNaseI-seq and NOMe-seq. Additionally, Histone-Marks (HMs) can be used to identify candidate TF binding sites. TEPIC computes TF affinities and uses open-chromatin/HM signal intensity as quantitative measures of TF binding strength. Using machine learning, we find low affinity binding sites to improve our ability to explain gene expression variability compared to the standard presence/absence classification of binding sites. Further, we show that both footprints and peaks capture essential TF binding events and lead to a good prediction performance. In our application, gene-based scores computed by TEPIC with one open-chromatin assay nearly reach the quality of several TF ChIP-seq data sets. Finally, these scores correctly predict known transcriptional regulators as illustrated by the application to novel DNaseI-seq and NOMe-seq data for primary human hepatocytes and CD4+ T-cells, respectively. PMID:27899623

  10. Optimal target VOI size for accurate 4D coregistration of DCE-MRI

    Science.gov (United States)

    Park, Brian; Mikheev, Artem; Zaim Wadghiri, Youssef; Bertrand, Anne; Novikov, Dmitry; Chandarana, Hersh; Rusinek, Henry

    2016-03-01

    Dynamic contrast enhanced (DCE) MRI has emerged as a reliable and diagnostically useful functional imaging technique. A DCE protocol typically lasts 3-15 minutes and results in a time series of N volumes. For automated analysis, it is important that volumes acquired at different times be spatially coregistered. We recently introduced a novel 4D (volume time series) coregistration tool based on a user-specified target volume of interest (VOI). However, the relationship between coregistration accuracy and target VOI size has not been investigated. In this study, coregistration accuracy was measured quantitatively using target VOIs of various sizes. Coregistration of 10 DCE-MRI mouse head image sets was performed with VOIs of various sizes targeting the mouse brain. Accuracy was quantified by measures based on the union and standard deviation of the coregistered volume time series. Coregistration accuracy improved rapidly as the size of the VOI increased and approached the approximate volume of the target (mouse brain); inflating the VOI beyond the target volume only marginally improved accuracy. The CPU time needed to accomplish coregistration is a linear function of N and varies only gradually with VOI size. Based on these results, we recommend a VOI slightly overinclusive of the target, by approximately 5 voxels, for computationally efficient and accurate coregistration.

  11. Robust and Accurate Discrimination of Self/Non-Self Antigen Presentations by Regulatory T Cell Suppression

    Science.gov (United States)

    Furusawa, Chikara; Yamaguchi, Tomoyuki

    2016-01-01

    The immune response by T cells usually discriminates self and non-self antigens, even though the negative selection of self-reactive T cells is imperfect and a certain fraction of T cells can respond to self-antigens. In this study, we construct a simple mathematical model of T cell populations to analyze how such self/non-self discrimination is possible. The results demonstrate that the control of the immune response by regulatory T cells enables a robust and accurate discrimination of self and non-self antigens, even when there is a significant overlap between the affinity distribution of T cells to self and non-self antigens. Here, the number of regulatory T cells in the system acts as a global variable controlling the T cell population dynamics. The present study provides a basis for the development of a quantitative theory for self and non-self discrimination in the immune system and a possible strategy for its experimental verification. PMID:27668873

  12. How accurate are our assumptions about our students' background knowledge?

    Science.gov (United States)

    Rovick, A A; Michael, J A; Modell, H I; Bruce, D S; Horwitz, B; Adamson, T; Richardson, D R; Silverthorn, D U; Whitescarver, S A

    1999-06-01

    Teachers establish prerequisites that students must meet before they are permitted to enter their courses. Having these prerequisites is expected to provide students with the knowledge and skills they need to learn the course content successfully, and material that students are expected to have previously learned need not be included in a course. We wanted to determine how accurate instructors' understanding of their students' background knowledge actually was. To do this, we wrote a set of multiple-choice questions that could be used to test students' knowledge of concepts deemed essential for learning respiratory physiology. Instructors then selected 10 of these questions to be used as a prerequisite knowledge test. The instructors also predicted the performance they expected from the students on each of the questions they had selected. The resulting tests were administered in the first week of each of seven courses. The results of this study demonstrate that instructors are poor judges of what beginning students know. Instructors tended to both underestimate and overestimate students' knowledge by large margins on individual questions. Although on average they tended to underestimate students' factual knowledge, they overestimated students' ability to apply this knowledge. Hence, the validity of decisions that instructors make on the assumption that their students have the expected prerequisite knowledge is open to question.

  13. Accurate triage of lower gastrointestinal bleed (LGIB) - A cohort study.

    Science.gov (United States)

    Chong, Vincent; Hill, Andrew G; MacCormick, Andrew D

    2016-01-01

    Acute lower gastrointestinal bleeding (LGIB) is a common acute presenting complaint in hospital. Unlike upper gastrointestinal bleeding, the diagnostic and therapeutic approach is not well standardised. Intensive monitoring and urgent interventions are essential for patients with severe LGIB. The aim of this study is to investigate factors that predict severe LGIB and to develop a clinical predictor tool to accurately triage LGIB in the emergency department of a busy metropolitan teaching hospital. We retrospectively identified all adult patients who presented to Middlemore Hospital Emergency Department with LGIB over a one-year period. We recorded demographic variables, Charlson Comorbidity Index, use of anticoagulation, examination findings, vital signs on arrival, laboratory test results, treatment plans and further investigation results. We then identified the subgroup of patients who suffered severe LGIB. A total of 668 patients presented with an initial triage diagnosis of LGIB, and 83 of these patients (20%) developed severe LGIB. Binary logistic regression analysis identified four independent risk factors for severe LGIB: use of aspirin, history of collapse, haemoglobin on presentation of less than 100 g/l and albumin of less than 38 g/l. We have developed a clinical prediction tool for severe LGIB in our population with a negative predictive value (NPV) of 88% and a positive predictive value (PPV) of 44%. We aim to validate the clinical prediction tool in a further cohort to ensure stability of the multivariate model. Copyright © 2015. Published by Elsevier Ltd.
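The reported predictive values follow directly from a 2x2 confusion table. A small sketch, with counts invented only to land near the reported 88%/44% figures (they are not the study's data):

```python
# Sketch of how NPV/PPV for a triage rule are computed from a 2x2
# confusion table. The counts are illustrative, not the study's data.

def predictive_values(tp, fp, tn, fn):
    """Return (PPV, NPV): the precision of positive and negative calls.

    PPV = TP / (TP + FP): fraction of rule-positive patients who were
          truly severe; NPV = TN / (TN + FN): fraction of rule-negative
          patients who were truly non-severe.
    """
    ppv = tp / (tp + fp)
    npv = tn / (tn + fn)
    return ppv, npv

ppv, npv = predictive_values(tp=44, fp=56, tn=440, fn=60)
print(f"PPV = {ppv:.0%}, NPV = {npv:.0%}")
```

For a triage rule whose job is to safely send patients home, the NPV is the clinically critical number, which is why the high NPV and modest PPV trade-off above is acceptable.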

  14. How complete and accurate is meningococcal disease notification?

    Science.gov (United States)

    Breen, E; Ghebrehewet, S; Regan, M; Thomson, A P J

    2004-12-01

    Effective public health control of meningococcal disease (meningococcal meningitis and septicaemia) is dependent on complete, accurate and speedy notification. Using capture-recapture techniques this study assesses the completeness, accuracy and timeliness of meningococcal notification in a health authority. The completeness of meningococcal disease notification was 94.8% (95% confidence interval 93.2% to 96.2%); 91.2% of cases in 2001 were notified within 24 hours of diagnosis, but 28.0% of notifications in 2001 were false positives. Clinical staff need to be aware of the public health implications of a notification of meningococcal disease, and of failure of, or delay in notification. Incomplete or delayed notification not only leads to inaccurate data collection but also means that important public health measures may not be taken. A clinical diagnosis of meningococcal disease should be carefully considered between the clinician and the consultant in communicable disease control (CCDC). Otherwise, prophylaxis may be given unnecessarily, disease incidence inflated, and the benefits of control measures underestimated. Consultants in communicable disease control (CCDCs), in conjunction with clinical staff, should de-notify meningococcal disease if the diagnosis changes.
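A two-source capture-recapture completeness estimate of the kind used in this study can be sketched with Chapman's estimator; the counts below are illustrative assumptions, not the study's data:

```python
# Hedged sketch of a two-source capture-recapture completeness
# calculation (Chapman's nearly unbiased estimator). Counts are
# invented to illustrate the arithmetic only.

def chapman_estimate(n1, n2, m):
    """Estimated true number of cases from two overlapping sources.

    n1 -- cases found in source 1 (e.g. clinical notifications)
    n2 -- cases found in source 2 (e.g. laboratory reports)
    m  -- cases found in both sources
    """
    return (n1 + 1) * (n2 + 1) / (m + 1) - 1

# Illustrative counts (assumed):
n1, n2, m = 182, 150, 143
n_total = chapman_estimate(n1, n2, m)
completeness = n1 / n_total
print(f"estimated total cases: {n_total:.0f}; "
      f"notification completeness: {completeness:.1%}")
```

The completeness figure is simply the notified count divided by the estimated true case count, with a confidence interval derived from the variance of the estimator.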

  15. CLOMP: Accurately Characterizing OpenMP Application Overheads

    Energy Technology Data Exchange (ETDEWEB)

    Bronevetsky, G; Gyllenhaal, J; de Supinski, B R

    2008-11-10

    Despite its ease of use, OpenMP has failed to gain widespread use on large-scale systems, largely due to its failure to deliver sufficient performance. Our experience indicates that the cost of initiating OpenMP regions is simply too high for the desired OpenMP usage scenario of many applications. In this paper, we introduce CLOMP, a new benchmark to characterize this aspect of OpenMP implementations accurately. CLOMP complements the existing EPCC benchmark suite to provide simple, easy-to-understand measurements of OpenMP overheads in the context of application usage scenarios. Our results for several OpenMP implementations demonstrate that CLOMP identifies the amount of work required to compensate for the overheads observed with EPCC. We also show that CLOMP captures limitations for OpenMP parallelization on SMT and NUMA systems. Finally, CLOMPI, our MPI extension of CLOMP, demonstrates which aspects of OpenMP interact poorly with MPI when MPI helper threads cannot run on the NIC.

  16. In pursuit of accurate timekeeping: Liverpool and Victorian electrical horology.

    Science.gov (United States)

    Ishibashi, Yuto

    2014-10-01

    This paper explores how nineteenth-century Liverpool became such an advanced city with regard to public timekeeping, and the wider impact of this on the standardisation of time. From the mid-1840s, local scientists and municipal bodies in the port city were engaged in improving the ways in which accurate time was communicated to ships and the general public. As a result, Liverpool was the first British city to witness the formation of a synchronised clock system, based on an invention by Robert Jones. His method gained a considerable reputation in the scientific and engineering communities, which led to its subsequent replication at a number of astronomical observatories such as Greenwich and Edinburgh. As a further key example of developments in time-signalling techniques, this paper also focuses on the time ball established in Liverpool by the Electric Telegraph Company in collaboration with George Biddell Airy, the Astronomer Royal. This is a particularly significant development because, as the present paper illustrates, one of the most important technologies in measuring the accuracy of the Greenwich time signal took shape in the experimental operation of the time ball. The inventions and knowledge which emerged from the context of Liverpool were vital to the transformation of public timekeeping in Victorian Britain.

  17. Accurate measurement of liquid transport through nanoscale conduits

    Science.gov (United States)

    Alibakhshi, Mohammad Amin; Xie, Quan; Li, Yinxiao; Duan, Chuanhua

    2016-01-01

    Nanoscale liquid transport governs the behaviour of a wide range of nanofluidic systems, yet it remains poorly characterized and understood due to the enormous hydraulic resistance associated with nanoconfinement and the resulting minuscule flow rates in such systems. To overcome this problem, here we present a new measurement technique based on capillary flow and a novel hybrid nanochannel design, and use it to measure water transport through single 2-D hydrophilic silica nanochannels with heights down to 7 nm. Our results show that silica nanochannels exhibit increased mass flow resistance compared to the classical hydrodynamics prediction. This difference increases with decreasing channel height and reaches 45% for 7 nm nanochannels. The resistance increase is attributed to the formation of a 7-angstrom-thick stagnant hydration layer on the hydrophilic surfaces. By avoiding the use of pressure and flow sensors or theoretical estimations, the hybrid nanochannel scheme enables facile and precise flow measurement through single nanochannels, nanotubes, or nanoporous media and opens the prospect of accurate characterization of both hydrophilic and hydrophobic nanofluidic systems. PMID:27112404
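The classical baseline the measured resistance is compared against is plane-Poiseuille flow in a shallow slit, R = 12*mu*L/(w*h^3), so treating a stagnant hydration layer of thickness t on each wall as immobile rescales R by (h/(h - 2t))^3. A hedged sketch of that cubic sensitivity (all dimensions assumed; how the 7-angstrom layer is apportioned per wall is a guess, so the illustrative increase need not reproduce the measured 45%):

```python
# Sketch of the classical slit-channel hydraulic resistance and the
# cubic effect of an immobile surface layer. Dimensions are assumed.

def slit_resistance(mu, length, width, height):
    """Hydraulic resistance of a shallow slit channel (w >> h):
    R = 12 * mu * L / (w * h^3), from plane-Poiseuille flow."""
    return 12.0 * mu * length / (width * height ** 3)

MU = 1.0e-3           # water viscosity, Pa*s
L, W = 100e-6, 10e-6  # channel length and width, m (assumed)
h = 7e-9              # channel height, m
t = 0.35e-9           # stagnant layer per wall, m (assumed; 7 A total)

r_classical = slit_resistance(MU, L, W, h)
r_with_layer = slit_resistance(MU, L, W, h - 2 * t)
increase = r_with_layer / r_classical - 1.0
print(f"resistance increase from stagnant layers: {increase:.0%}")
```

The h^-3 scaling is why a sub-nanometre immobile layer, negligible in a microchannel, becomes a large correction at 7 nm.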

  18. Accurate measurement of RF exposure from emerging wireless communication systems

    Science.gov (United States)

    Letertre, Thierry; Monebhurrun, Vikass; Toffano, Zeno

    2013-04-01

    Isotropic broadband probes or spectrum analyzers (SAs) may be used to measure the rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper, this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation, but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), nor for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with the emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known; in that case, the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.
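Why a correction factor can work once the signal statistics are known: for an idealized on/off burst, the true RMS field is the on-level scaled by the square root of the duty cycle, so a probe that tracks the burst level can be corrected systematically. A toy sketch with assumed values:

```python
# Toy sketch of the duty-cycle correction for bursty signals: an
# idealized on/off signal with duty cycle D has RMS = peak * sqrt(D).
# The peak level and duty cycle below are assumed illustrative values.
import math

def rms_of_burst(peak_level, duty_cycle):
    """RMS of an idealized rectangular on/off burst signal."""
    return peak_level * math.sqrt(duty_cycle)

def correction_factor(duty_cycle):
    """Factor converting a peak-tracking reading back to true RMS."""
    return math.sqrt(duty_cycle)

peak = 2.0   # V/m during the burst (assumed)
dc = 0.25    # 25% duty cycle (assumed)
true_rms = rms_of_burst(peak, dc)
print(f"true RMS field: {true_rms:.2f} V/m "
      f"(correction factor {correction_factor(dc):.2f})")
```

Real OFDM emissions also have a large crest factor within the burst, which is why the paper stresses that the correction only holds when the emission statistics are well characterized.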

  19. Accurate light-time correction due to a gravitating mass

    CERN Document Server

    Ashby, Neil

    2009-01-01

    This work arose as an aftermath of Cassini's 2002 experiment [bblipt03], in which the PPN parameter γ was measured with an accuracy σ_γ = 2.3 × 10^-5 and found consistent with the prediction γ = 1 of general relativity. The Orbit Determination Program (ODP) of NASA's Jet Propulsion Laboratory, which was used in the data analysis, is based on an expression for the gravitational delay which differs from the standard formula; this difference is of second order in powers of m, the Sun's gravitational radius, but in Cassini's case it was much larger than the expected order of magnitude m^2/b, where b is the ray's closest approach distance. Since the ODP does not account for any other second-order terms, it is necessary, also in view of future more accurate experiments, to systematically evaluate higher-order corrections and to determine which terms are significant. Light propagation in a static spacetime is equivalent to a problem in ordinary geometrical optics; Fermat...

  20. Accurate determination of segmented X-ray detector geometry.

    Science.gov (United States)

    Yefanov, Oleksandr; Mariani, Valerio; Gati, Cornelius; White, Thomas A; Chapman, Henry N; Barty, Anton

    2015-11-02

    Recent advances in X-ray detector technology have resulted in the introduction of segmented detectors composed of many small detector modules tiled together to cover a large detection area. Due to mechanical tolerances and the desire to be able to change the module layout to suit the needs of different experiments, the pixels on each module might not align perfectly on a regular grid. Several detectors are designed to permit detector sub-regions (or modules) to be moved relative to each other for different experiments. Accurate determination of the location of detector elements relative to the beam-sample interaction point is critical for many types of experiment, including X-ray crystallography, coherent diffractive imaging (CDI), small angle X-ray scattering (SAXS) and spectroscopy. For detectors with moveable modules, the relative positions of pixels are no longer fixed, necessitating the development of a simple procedure to calibrate detector geometry after reconfiguration. We describe a simple and robust method for determining the geometry of segmented X-ray detectors using measurements obtained by serial crystallography. By comparing the location of observed Bragg peaks to the spot locations predicted from the crystal indexing procedure, the position, rotation and distance of each module relative to the interaction region can be refined. We show that the refined detector geometry greatly improves the results of experiments.
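The refinement step described above can be sketched in miniature: for each module, the mean residual between observed Bragg peak positions and the positions predicted from indexing gives that module's translation correction (rotation and detector-distance refinement are omitted here, and all coordinates below are invented):

```python
# Minimal sketch of per-module detector geometry refinement: the mean
# observed-minus-predicted peak residual on a module estimates its
# translation offset. Peak coordinates are invented toy data.

def module_shift(observed, predicted):
    """Mean (dx, dy) residual for one module's peaks, in pixels."""
    n = len(observed)
    dx = sum(o[0] - p[0] for o, p in zip(observed, predicted)) / n
    dy = sum(o[1] - p[1] for o, p in zip(observed, predicted)) / n
    return dx, dy

# Toy peaks on one module: observations sit ~(1.5, -0.5) px off the
# positions predicted by crystal indexing.
predicted = [(10.0, 20.0), (30.0, 40.0), (55.0, 15.0)]
observed = [(11.4, 19.4), (31.6, 39.6), (56.5, 14.5)]
dx, dy = module_shift(observed, predicted)
print(f"module offset: dx = {dx:.2f} px, dy = {dy:.2f} px")
```

In practice residuals from many serial-crystallography patterns are pooled per module, and the translation, rotation, and distance are refined jointly rather than by this simple averaging.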