WorldWideScience

Sample records for accurate quantitative snp-typing

  1. SNP typing on the NanoChip electronic microarray

    DEFF Research Database (Denmark)

    Børsting, Claus; Sanchez Sanchez, Juan Jose; Morling, Niels

    2005-01-01

    We describe a single nucleotide polymorphism (SNP) typing protocol developed for the NanoChip electronic microarray. The NanoChip array consists of 100 electrodes covered by a thin hydrogel layer containing streptavidin. An electric current can be applied to one, several, or all electrodes...

  2. Accurate quantitative XRD phase analysis of cement clinkers

    International Nuclear Information System (INIS)

    Kern, A.

    2002-01-01

    Full text: Knowledge about the absolute phase abundance in cement clinkers is a requirement for both research and quality control. Traditionally, quantitative analysis of cement clinkers has been carried out by theoretical normative calculation from chemical analysis using the so-called Bogue method or by optical microscopy. Therefore, chemical analysis, mostly performed by X-ray fluorescence (XRF), forms the basis of cement plant control by providing information for proportioning raw materials, adjusting kiln and burning conditions, as well as cement mill feed proportioning. In addition, XRF is of highest importance with respect to the environmentally relevant control of waste recovery raw materials and alternative fuels, as well as filters, plants and sewage. However, the performance of clinkers and cements is governed by the mineralogy and not the elemental composition, and the deficiencies and inherent errors of Bogue as well as microscopic point counting are well known. With XRD and Rietveld analysis a full quantitative analysis of cement clinkers can be performed, providing detailed mineralogical information about the product. Until recently several disadvantages prevented the frequent application of the Rietveld method in the cement industry. As the measurement of a full pattern is required, extended measurement times made an integration of this method into existing automation environments difficult. In addition, several drawbacks of existing Rietveld software such as complexity, low performance and severe numerical instability were prohibitive for automated use. The latest developments of on-line instrumentation, as well as dedicated Rietveld software for quantitative phase analysis (TOPAS), now make a decisive breakthrough possible. TOPAS not only allows the analysis of extremely complex phase mixtures in the shortest time possible, but also a fully automated online phase analysis for production control and quality management, free of any human interaction.
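
    For readers unfamiliar with the Bogue calculation criticized in this record, the arithmetic is a fixed linear combination of the XRF oxide analysis. The sketch below uses the standard ASTM C150 Bogue coefficients with hypothetical oxide values; it illustrates why the method is fast but inherits the equilibrium assumptions the abstract mentions.

    ```python
    # Minimal sketch of the classical Bogue calculation (standard ASTM C150
    # coefficients). The oxide mass fractions (wt%) below are hypothetical.
    def bogue(cao, sio2, al2o3, fe2o3, so3=0.0):
        """Return nominal abundances (wt%) of the four major clinker phases."""
        c3s = 4.071 * cao - 7.600 * sio2 - 6.718 * al2o3 - 1.430 * fe2o3 - 2.852 * so3
        c2s = 2.867 * sio2 - 0.7544 * c3s
        c3a = 2.650 * al2o3 - 1.692 * fe2o3
        c4af = 3.043 * fe2o3
        return {"C3S": c3s, "C2S": c2s, "C3A": c3a, "C4AF": c4af}

    for phase, wt in bogue(cao=65.0, sio2=21.5, al2o3=5.5, fe2o3=3.0, so3=1.0).items():
        print(f"{phase}: {wt:.1f} wt%")
    ```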

  3. QUESP and QUEST revisited - fast and accurate quantitative CEST experiments.

    Science.gov (United States)

    Zaiss, Moritz; Angelovski, Goran; Demetriou, Eleni; McMahon, Michael T; Golay, Xavier; Scheffler, Klaus

    2018-03-01

    Chemical exchange saturation transfer (CEST) NMR or MRI experiments allow detection of molecules at low concentrations with enhanced sensitivity via their proton exchange with the abundant water pool. Be it endogenous metabolites or exogenous contrast agents, an exact quantification of the actual exchange rate is required to design optimal pulse sequences and/or specific sensitive agents. Refined analytical expressions allow deeper insight and improvement of accuracy for common quantification techniques. The accuracy of standard quantification methodologies, such as quantification of exchange rate using varying saturation power or varying saturation time, is improved especially for the case of nonequilibrium initial conditions and weak labeling conditions, meaning the saturation amplitude is smaller than the exchange rate (γB1 < ksw). The refined 'quantification of exchange rate using varying saturation power/time' (QUESP/QUEST) equations allow for more accurate exchange rate determination, and provide clear insights on the general principles to execute the experiments and to perform numerical evaluation. The proposed methodology was evaluated on the large-shift regime of paramagnetic chemical-exchange-saturation-transfer agents using simulated data and data of the paramagnetic Eu(III) complex of DOTA-tetraglycineamide. The refined formulas yield improved exchange rate estimation. General convergence intervals of the methods that would apply for smaller shift agents are also discussed. Magn Reson Med 79:1708-1721, 2018. © 2017 International Society for Magnetic Resonance in Medicine.
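
    The QUESP idea can be sketched with a small fit. The code below uses the textbook steady-state QUESP expression, PTR = fb*ksw*alpha/(R1w + fb*ksw) with labeling efficiency alpha = w1^2/(w1^2 + ksw^2), not the refined equations of this paper; all parameter values are synthetic.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    GAMMA = 267.522e6  # proton gyromagnetic ratio, rad s^-1 T^-1

    def quesp(b1_uT, ksw, fb, r1w=1 / 3.0):
        """Textbook steady-state QUESP model (illustrative, not the refined form).
        b1_uT: saturation amplitude (microtesla); ksw: exchange rate (s^-1);
        fb: relative solute proton fraction; r1w: water R1 (s^-1)."""
        w1 = GAMMA * b1_uT * 1e-6            # rad/s
        alpha = w1**2 / (w1**2 + ksw**2)     # labeling efficiency
        return fb * ksw * alpha / (r1w + fb * ksw)

    b1 = np.linspace(0.5, 10, 12)            # saturation powers, in microtesla
    truth = quesp(b1, ksw=2000.0, fb=0.001)  # synthetic "measurements"
    noisy = truth + np.random.default_rng(0).normal(0, 2e-4, b1.size)
    (ksw_fit, fb_fit), _ = curve_fit(quesp, b1, noisy, p0=(1000.0, 5e-4))
    print(f"fitted ksw = {ksw_fit:.0f} s^-1, fb = {fb_fit:.2e}")
    ```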

  4. Accurate Quantitative Sensing of Intracellular pH based on Self-ratiometric Upconversion Luminescent Nanoprobe

    NARCIS (Netherlands)

    Li, C.; Zuo, J.; Zhang, L.; Chang, Y.; Zhang, Y.; Tu, L.; Liu, X.; Xue, B.; Li, Q.; Zhao, H.; Zhang, H.; Kong, X.

    2016-01-01

    Accurate quantitation of intracellular pH (pHi) is of great importance in revealing the cellular activities and early warning of diseases. A series of fluorescence-based nano-bioprobes composed of different nanoparticles or/and dye pairs have already been developed for pHi sensing. Till now, biological auto-fluorescence background upon UV-Vis excitation and severe photo-bleaching of dyes are the two main factors impeding the accurate quantitative detection of pHi...

  5. Multiplexed SNP Typing of Ancient DNA Clarifies the Origin of Andaman mtDNA Haplogroups amongst South Asian Tribal Populations

    Science.gov (United States)

    Endicott, Phillip; Metspalu, Mait; Stringer, Chris; Macaulay, Vincent; Cooper, Alan; Sanchez, Juan J.

    2006-01-01

    The issue of errors in genetic data sets is of growing concern, particularly in population genetics where whole genome mtDNA sequence data is coming under increased scrutiny. Multiplexed PCR reactions, combined with SNP typing, are currently under-exploited in this context, but have the potential to genotype whole populations rapidly and accurately, significantly reducing the amount of errors appearing in published data sets. To show the sensitivity of this technique for screening mtDNA genomic sequence data, 20 historic samples of the enigmatic Andaman Islanders and 12 modern samples from three Indian tribal populations (Chenchu, Lambadi and Lodha) were genotyped for 20 coding region sites after provisional haplogroup assignment with control region sequences. The genotype data from the historic samples significantly revise the topologies for the Andaman M31 and M32 mtDNA lineages by rectifying conflicts in published data sets. The new Indian data extend the distribution of the M31a lineage to South Asia, challenging previous interpretations of mtDNA phylogeography. This genetic connection between the ancestors of the Andamanese and South Asian tribal groups ∼30 kya has important implications for the debate concerning migration routes and settlement patterns of humans leaving Africa during the late Pleistocene, and indicates the need for more detailed genotyping strategies. The methodology serves as a low-cost, high-throughput model for the production and authentication of data from modern or ancient DNA, and demonstrates the value of museum collections as important records of human genetic diversity. PMID:17218991

  6. Eight new genomes and synthetic controls increase the accessibility of rapid melt-MAMA SNP typing of Coxiella burnetii.

    Directory of Open Access Journals (Sweden)

    Edvin Karlsson

    Full Text Available The case rate of Q fever in Europe has increased dramatically in recent years, mainly because of an epidemic in the Netherlands in 2009. Consequently, there is a need for more extensive genetic characterization of the disease agent Coxiella burnetii in order to better understand the epidemiology and spread of this disease. Genome reference data are essential for this purpose, but only thirteen genome sequences are currently available. Current methods for typing C. burnetii are criticized for having problems in comparing results across laboratories, for requiring the use of genomic control DNA, and/or for relying on markers in highly variable regions. In this work, we developed a method for single nucleotide polymorphism (SNP) typing of C. burnetii isolates and tissue samples based on new assays targeting ten phylogenetically stable synonymous canonical SNPs (canSNPs). These canSNPs represent previously known phylogenetic branches and were here identified from sequence comparisons of twenty-one C. burnetii genomes, eight of which were sequenced in this work. Importantly, synthetic control templates were developed to make the method useful to laboratories lacking genomic control DNA. An analysis of twenty-one C. burnetii genomes confirmed that the species exhibits high sequence identity. Most of its SNPs (7,493/7,559 shared by >1 genome) follow a clonal inheritance pattern and are therefore stable phylogenetic typing markers. The assays were validated using twenty-six genetically diverse C. burnetii isolates and three tissue samples from small ruminants infected during the epidemic in the Netherlands. Each sample was assigned to a clade. Synthetic controls (vector and PCR amplified) gave identical results compared to the corresponding genomic controls and are viable alternatives to genomic DNA. The results from the described method indicate that it could be useful for cheap and rapid disease source tracking at non-specialized laboratories, which requires accurate...
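
    Clade assignment from canonical SNPs is, computationally, a lookup against a fixed phylogenetic key. A generic sketch follows; the panel, state labels and clade names are invented and do not correspond to the actual C. burnetii assays.

    ```python
    # Hypothetical canSNP panel: each clade is defined by the states observed
    # at a set of stable canonical SNPs (all names/states are illustrative).
    CLADE_TABLE = {
        "clade.A": {"canSNP1": "derived", "canSNP2": "ancestral"},
        "clade.B": {"canSNP1": "derived", "canSNP2": "derived"},
        "clade.C": {"canSNP1": "ancestral", "canSNP2": "ancestral"},
    }

    def assign_clade(calls: dict) -> str:
        """Return the clade whose defining canSNP states all match the sample."""
        matches = [clade for clade, key in CLADE_TABLE.items()
                   if all(calls.get(snp) == state for snp, state in key.items())]
        return matches[0] if len(matches) == 1 else "unresolved"

    print(assign_clade({"canSNP1": "derived", "canSNP2": "derived"}))  # clade.B
    ```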

  7. Accurate Quantitative Sensing of Intracellular pH based on Self-ratiometric Upconversion Luminescent Nanoprobe.

    Science.gov (United States)

    Li, Cuixia; Zuo, Jing; Zhang, Li; Chang, Yulei; Zhang, Youlin; Tu, Langping; Liu, Xiaomin; Xue, Bin; Li, Qiqing; Zhao, Huiying; Zhang, Hong; Kong, Xianggui

    2016-12-09

    Accurate quantitation of intracellular pH (pHi) is of great importance in revealing the cellular activities and early warning of diseases. A series of fluorescence-based nano-bioprobes composed of different nanoparticles or/and dye pairs have already been developed for pHi sensing. Till now, biological auto-fluorescence background upon UV-Vis excitation and severe photo-bleaching of dyes are the two main factors impeding the accurate quantitative detection of pHi. Herein, we have developed a self-ratiometric luminescence nanoprobe based on Förster resonant energy transfer (FRET) for probing pHi, in which pH-sensitive fluorescein isothiocyanate (FITC) and upconversion nanoparticles (UCNPs) served as energy acceptor and donor, respectively. Under 980 nm excitation, upconversion emission bands at 475 nm and 645 nm of NaYF4:Yb3+,Tm3+ UCNPs were used as the pHi response and self-ratiometric reference signal, respectively. This direct quantitative sensing approach has circumvented the traditional software-based subsequent processing of images, which may lead to relatively large uncertainty in the results. Owing to the efficient FRET and the absence of fluorescence background, highly sensitive and accurate sensing has been achieved, featured by a response of 3.56 per unit change in pHi over the range 3.0-7.0 with deviation less than 0.43. This approach should facilitate research in pHi-related areas and the development of intracellular drug delivery systems.
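
    Self-ratiometric readout reduces to dividing the 475 nm band by the 645 nm reference band and inverting a calibration curve. The sketch below assumes a hypothetical Boltzmann-type calibration; the constants are placeholders, not the probe's measured calibration.

    ```python
    import math

    # Hypothetical calibration R(pH) = RMIN + (RMAX-RMIN) / (1 + exp((pH-PKA)/DX));
    # the four constants below are illustrative placeholders only.
    RMIN, RMAX, PKA, DX = 0.2, 1.8, 5.0, 0.6

    def ratio_to_ph(i_475, i_645):
        """Invert the calibration: luminescence ratio -> pH estimate."""
        r = i_475 / i_645                              # self-ratiometric signal
        r = max(min(r, RMAX - 1e-6), RMIN + 1e-6)      # clamp to calibrated range
        return PKA + DX * math.log((RMAX - r) / (r - RMIN))

    print(round(ratio_to_ph(i_475=1.0, i_645=1.0), 2))  # -> 5.0 with these constants
    ```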

  8. Accurate virus quantitation using a Scanning Transmission Electron Microscopy (STEM) detector in a scanning electron microscope.

    Science.gov (United States)

    Blancett, Candace D; Fetterer, David P; Koistinen, Keith A; Morazzani, Elaine M; Monninger, Mitchell K; Piper, Ashley E; Kuehl, Kathleen A; Kearney, Brian J; Norris, Sarah L; Rossi, Cynthia A; Glass, Pamela J; Sun, Mei G

    2017-10-01

    A method for accurate quantitation of virus particles has long been sought, but a perfect method still eludes the scientific community. Electron Microscopy (EM) quantitation is a valuable technique because it provides direct morphology information and counts of all viral particles, whether or not they are infectious. In the past, EM negative stain quantitation methods have been cited as inaccurate, non-reproducible, and with detection limits that were too high to be useful. To improve accuracy and reproducibility, we have developed a method termed Scanning Transmission Electron Microscopy - Virus Quantitation (STEM-VQ), which simplifies sample preparation and uses a high throughput STEM detector in a Scanning Electron Microscope (SEM) coupled with commercially available software. In this paper, we demonstrate STEM-VQ with an alphavirus stock preparation to present the method's accuracy and reproducibility, including a comparison of STEM-VQ to viral plaque assay and the ViroCyt Virus Counter. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
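
    EM-based particle counts are commonly converted to a concentration by co-preparing the specimen with reference beads of known concentration and taking the count ratio. The sketch below shows that generic bead-ratio arithmetic with invented numbers; it is not necessarily the exact STEM-VQ software pipeline.

    ```python
    def virus_concentration(virus_count, bead_count, bead_conc_per_ml, dilution=1.0):
        """Classic bead-ratio estimate used in EM-based virus counting.
        virus_count, bead_count: particles tallied over the same fields of view;
        bead_conc_per_ml: known stock concentration of the reference beads."""
        if bead_count == 0:
            raise ValueError("no reference beads counted")
        return (virus_count / bead_count) * bead_conc_per_ml * dilution

    # Illustrative: 412 virions vs 97 beads at 5e8 beads/mL, sample diluted 1:10
    print(f"{virus_concentration(412, 97, 5e8, dilution=10):.2e} particles/mL")
    ```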

  9. Accurate and quantitative polarization-sensitive OCT by unbiased birefringence estimator with noise-stochastic correction

    Science.gov (United States)

    Kasaragod, Deepa; Sugiyama, Satoshi; Ikuno, Yasushi; Alonso-Caneiro, David; Yamanari, Masahiro; Fukuda, Shinichi; Oshika, Tetsuro; Hong, Young-Joo; Li, En; Makita, Shuichi; Miura, Masahiro; Yasuno, Yoshiaki

    2016-03-01

    Polarization sensitive optical coherence tomography (PS-OCT) is a functional extension of OCT that contrasts the polarization properties of tissues. It has been applied to ophthalmology, cardiology, etc. Proper quantitative imaging is required for widespread clinical utility. However, the conventional method of averaging to improve the signal to noise ratio (SNR) and the contrast of the phase retardation (or birefringence) images introduces a noise bias offset from the true value. This bias reduces the effectiveness of birefringence contrast for a quantitative study. Although coherent averaging of Jones matrix tomography has been widely utilized and has improved the image quality, the fundamental limitation of the nonlinear dependency of phase retardation and birefringence on the SNR was not overcome. So the birefringence obtained by PS-OCT was still not accurate for quantitative imaging. The nonlinear effect of SNR on phase retardation and birefringence measurement was previously formulated in detail for a Jones matrix OCT (JM-OCT) [1]. Based on this, we had developed a maximum a-posteriori (MAP) estimator, and quantitative birefringence imaging was demonstrated [2]. However, this first version of the estimator had a theoretical shortcoming: it did not take into account the stochastic nature of the SNR of the OCT signal. In this paper, we present an improved version of the MAP estimator which takes into account the stochastic property of SNR. This estimator uses a probability distribution function (PDF) of true local retardation, which is proportional to birefringence, under a specific set of measurements of the birefringence and SNR. The PDF was pre-computed by a Monte-Carlo (MC) simulation based on the mathematical model of JM-OCT before the measurement. A comparison between this new MAP estimator, our previous MAP estimator [2], and the standard mean estimator is presented. The comparisons are performed both by numerical simulation and in vivo measurements of anterior and...
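
    Stripped of the OCT specifics, the estimator described here is a maximization of a pre-computed likelihood over candidate true retardation values. The sketch below substitutes a toy Gaussian likelihood for the Monte-Carlo-derived JM-OCT PDF; everything numeric is illustrative.

    ```python
    import numpy as np

    true_grid = np.linspace(0, 90, 181)   # candidate true retardation values (deg)

    def toy_likelihood(measured, true_vals, snr):
        """Toy stand-in for the MC-derived PDF: noise shrinks as SNR grows."""
        sigma = 20.0 / np.sqrt(snr)
        return np.exp(-0.5 * ((measured - true_vals) / sigma) ** 2)

    def map_estimate(measured, snr, prior=None):
        """MAP estimate of true retardation from one measurement and its SNR.
        With a flat prior and symmetric likelihood this is trivial; the real
        MC table is asymmetric, which is where the bias correction comes from."""
        post = toy_likelihood(measured, true_grid, snr)
        if prior is not None:
            post = post * prior(true_grid)
        return true_grid[np.argmax(post)]

    print(map_estimate(measured=35.0, snr=25.0))
    ```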

  10. Computationally efficient and quantitatively accurate multiscale simulation of solid-solution strengthening by ab initio calculation

    International Nuclear Information System (INIS)

    Ma, Duancheng; Friák, Martin; Pezold, Johann von; Raabe, Dierk; Neugebauer, Jörg

    2015-01-01

    We propose an approach for the computationally efficient and quantitatively accurate prediction of solid-solution strengthening. It combines the 2-D Peierls–Nabarro model and a recently developed solid-solution strengthening model. Solid-solution strengthening is examined with Al–Mg and Al–Li as representative alloy systems, demonstrating a good agreement between theory and experiments within the temperature range in which the dislocation motion is overdamped. Through a parametric study, two guideline maps of the misfit parameters against (i) the critical resolved shear stress, τ0, at 0 K and (ii) the energy barrier, ΔEb, against dislocation motion in a solid solution with randomly distributed solute atoms are created. With these two guideline maps, τ0 at finite temperatures is predicted for other Al binary systems, and compared with available experiments, achieving good agreement.

  11. Forensic genetic SNP typing of low-template DNA and highly degraded DNA from crime case samples.

    Science.gov (United States)

    Børsting, Claus; Mogensen, Helle Smidt; Morling, Niels

    2013-05-01

    Heterozygote imbalances leading to allele drop-outs and disproportionally large stutters leading to allele drop-ins are known stochastic phenomena related to STR typing of low-template DNA (LtDNA). The large stutters and the many drop-ins in typical STR stutter positions are artifacts from the PCR amplification of tandem repeats. These artifacts may be avoided by typing bi-allelic markers instead of STRs. In this work, the SNPforID multiplex assay was used to type LtDNA. A sensitized SNP typing protocol was introduced that increased signal strengths without increasing noise and without affecting the heterozygote balance. Allele drop-ins were only observed in experiments with 25 pg of DNA and not in experiments with 50 and 100 pg of DNA. The allele drop-in rate in the 25 pg experiments was 0.06% or 100 times lower than what was previously reported for STR typing of LtDNA. A composite model and two different consensus models were used to interpret the SNP data. Correct profiles with 42-49 SNPs were generated from the 50 and 100 pg experiments, whereas a few incorrect genotypes were included in the generated profiles from the 25 pg experiments. With the strict consensus model, between 35 and 48 SNPs were correctly typed in the 25 pg experiments and only one allele drop-out (error rate: 0.07%) was observed in the consensus profiles. A total of 28 crime case samples were selected for typing with the sensitized SNPforID protocol. The samples were previously typed with old STR kits during the crime case investigation and only partial profiles (0-6 STRs) were obtained. Eleven of the samples could not be quantified with the Quantifiler™ Human DNA Quantification kit because of partial or complete inhibition of the PCR. For eight of these samples, SNP typing was only possible when the buffer and DNA polymerase used in the original protocol were replaced with the AmpFℓSTR(®) SEfiler Plus™ Master Mix, which was developed specifically for challenging forensic samples. All...
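
    A strict consensus rule of the kind referred to here can be stated in a few lines: report a genotype only when replicate typings agree. The sketch below is a generic implementation (a locus with one failed replicate is kept if the remaining calls agree; stricter variants would drop it), not the authors' exact rule set.

    ```python
    def strict_consensus(replicates):
        """Report a genotype per SNP only if all successful replicate calls agree.
        replicates: list of dicts mapping SNP id -> genotype call (or None)."""
        consensus = {}
        snps = set().union(*(r.keys() for r in replicates))
        for snp in sorted(snps):
            calls = {r.get(snp) for r in replicates}
            calls.discard(None)          # ignore failed assays (lenient variant)
            if len(calls) == 1:          # unanimous among successful calls
                consensus[snp] = calls.pop()
        return consensus

    dup1 = {"rs1": "AG", "rs2": "CC", "rs3": "TT"}
    dup2 = {"rs1": "AG", "rs2": "CT", "rs3": None}
    print(strict_consensus([dup1, dup2]))   # {'rs1': 'AG', 'rs3': 'TT'}
    ```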

  12. High-throughput bacterial SNP typing identifies distinct clusters of Salmonella Typhi causing typhoid in Nepalese children

    LENUS (Irish Health Repository)

    Holt, Kathryn E

    2010-05-31

    Abstract Background Salmonella Typhi (S. Typhi) causes typhoid fever, which remains an important public health issue in many developing countries. Kathmandu, the capital of Nepal, is an area of high incidence and the pediatric population appears to be at high risk of exposure and infection. Methods We recently defined the population structure of S. Typhi, using new sequencing technologies to identify nearly 2,000 single nucleotide polymorphisms (SNPs) that can be used as unequivocal phylogenetic markers. Here we have used the GoldenGate (Illumina) platform to simultaneously type 1,500 of these SNPs in 62 S. Typhi isolates causing severe typhoid in children admitted to Patan Hospital in Kathmandu. Results Eight distinct S. Typhi haplotypes were identified during the 20-month study period, with 68% of isolates belonging to a subclone of the previously defined H58 S. Typhi. This subclone was closely associated with resistance to nalidixic acid, with all isolates from this group demonstrating a resistant phenotype and harbouring the same resistance-associated SNP in GyrA (Phe83). A secondary clone, comprising 19% of isolates, was observed only during the second half of the study. Conclusions Our data demonstrate the utility of SNP typing for monitoring bacterial populations over a defined period in a single endemic setting. We provide evidence for genotype introduction and define a nalidixic acid resistant subclone of S. Typhi, which appears to be the dominant cause of severe pediatric typhoid in Kathmandu during the study period.

  13. Autosomal SNP typing of forensic samples with the GenPlex(TM) HID System: Results of a collaborative study

    DEFF Research Database (Denmark)

    Tomas, C.; Axler-DiPerte, G.; Budimlija, Z.M.

    2011-01-01

    A collaborative exercise was performed by fourteen laboratories (9 in Europe and 5 in the US) in order to test the robustness and reliability of the GenPlex(TM) HID System on forensic samples. Three samples with partly degraded DNA and 10 samples with low amounts of DNA were analyzed in duplicates using various amounts of DNA. In order to compare the performance of the GenPlex(TM) HID System with the most commonly used STR kits, 500 pg of partly degraded DNA from three samples was typed by the laboratories using one or more STR kits. The median SNP typing success rate was 92.3% with 500 pg of partly degraded DNA. Three of the fourteen laboratories accounted for more than two... ...was the least successful. With the exception of the MiniFiler(TM) kit (AB), GenPlex(TM) HID performed better than five other tested STR kits. When partly degraded DNA was analyzed, GenPlex(TM) HID showed a very low mean match probability, while all STR kits except MiniFiler(TM) had very limited discriminatory...

  14. Forensic genetic SNP typing of low-template DNA and highly degraded DNA from crime case samples

    DEFF Research Database (Denmark)

    Børsting, Claus; Mogensen, Helle Smidt; Morling, Niels

    2013-01-01

    Heterozygote imbalances leading to allele drop-outs and disproportionally large stutters leading to allele drop-ins are known stochastic phenomena related to STR typing of low-template DNA (LtDNA). The large stutters and the many drop-ins in typical STR stutter positions are artifacts from the PCR amplification of tandem repeats. These artifacts may be avoided by typing bi-allelic markers instead of STRs. In this work, the SNPforID multiplex assay was used to type LtDNA. A sensitized SNP typing protocol was introduced that increased signal strengths without increasing noise and without affecting the heterozygote balance. Allele drop-ins were only observed in experiments with 25 pg of DNA and not in experiments with 50 and 100 pg of DNA. The allele drop-in rate in the 25 pg experiments was 0.06% or 100 times lower than what was previously reported for STR typing of LtDNA. A composite model and two different consensus models were used to interpret the SNP data...

  15. Highly sensitive capillary electrophoresis-mass spectrometry for rapid screening and accurate quantitation of drugs of abuse in urine.

    Science.gov (United States)

    Kohler, Isabelle; Schappler, Julie; Rudaz, Serge

    2013-05-30

    The combination of capillary electrophoresis (CE) and mass spectrometry (MS) is particularly well adapted to bioanalysis due to its high separation efficiency, selectivity, and sensitivity; its short analytical time; and its low solvent and sample consumption. For clinical and forensic toxicology, a two-step analysis is usually performed: first, a screening step for compound identification, and second, confirmation and/or accurate quantitation in cases of presumed positive results. In this study, a fast and sensitive CE-MS workflow was developed for the screening and quantitation of drugs of abuse in urine samples. A CE with a time-of-flight MS (CE-TOF/MS) screening method was developed using a simple urine dilution and on-line sample preconcentration with pH-mediated stacking. The sample stacking allowed for a high loading capacity (20.5% of the capillary length), leading to limits of detection as low as 2 ng/mL for drugs of abuse. Compound quantitation of positive samples was performed by CE-MS/MS with a triple quadrupole MS equipped with an adapted triple-tube sprayer and an electrospray ionization (ESI) source. The CE-ESI-MS/MS method was validated for two model compounds, cocaine (COC) and methadone (MTD), according to the Guidance of the Food and Drug Administration. The quantitative performance was evaluated for selectivity, response function, the lower limit of quantitation, trueness, precision, and accuracy. COC and MTD detection in urine samples was determined to be accurate over the range of 10-1000 ng/mL and 21-1000 ng/mL, respectively. Copyright © 2013 Elsevier B.V. All rights reserved.
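
    Quantitation over a validated range like 10-1000 ng/mL ultimately rests on a calibration line and back-calculated trueness at each level. A generic sketch with synthetic peak-area responses (all numbers invented):

    ```python
    import numpy as np

    conc = np.array([10, 50, 100, 250, 500, 1000], dtype=float)  # ng/mL standards
    area = np.array([0.021, 0.103, 0.209, 0.517, 1.032, 2.071])  # synthetic responses

    slope, intercept = np.polyfit(conc, area, 1)   # linear response model
    back = (area - intercept) / slope              # back-calculated concentrations
    trueness = 100 * back / conc                   # % of nominal at each level

    r = np.corrcoef(conc, area)[0, 1]
    print(f"r^2 = {r**2:.4f}")
    for c, t in zip(conc, trueness):
        print(f"{c:6.0f} ng/mL -> {t:6.1f} % of nominal")
    ```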

  16. Quantitatively accurate activity measurements with a dedicated cardiac SPECT camera: Physical phantom experiments

    Energy Technology Data Exchange (ETDEWEB)

    Pourmoghaddas, Amir, E-mail: apour@ottawaheart.ca; Wells, R. Glenn [Physics Department, Carleton University, Ottawa, Ontario K1S 5B6, Canada and Cardiology, The University of Ottawa Heart Institute, Ottawa, Ontario K1Y4W7 (Canada)

    2016-01-15

    Purpose: Recently, there has been increased interest in dedicated cardiac single photon emission computed tomography (SPECT) scanners with pinhole collimation and improved detector technology due to their improved count sensitivity and resolution over traditional parallel-hole cameras. With traditional cameras, energy-based approaches are often used in the clinic for scatter compensation because they are fast and easily implemented. Some of the cardiac cameras use cadmium-zinc-telluride (CZT) detectors which can complicate the use of energy-based scatter correction (SC) due to the low-energy tail, an increased number of unscattered photons detected with reduced energy. Modified energy-based scatter correction methods can be implemented, but their level of accuracy is unclear. In this study, the authors validated by physical phantom experiments the quantitative accuracy and reproducibility of easily implemented correction techniques applied to 99mTc myocardial imaging with a CZT-detector-based gamma camera with multiple heads, each with a single-pinhole collimator. Methods: Activity in the cardiac compartment of an Anthropomorphic Torso phantom (Data Spectrum Corporation) was measured through 15 99mTc-SPECT acquisitions. The ratio of activity concentrations in organ compartments resembled a clinical 99mTc-sestamibi scan and was kept consistent across all experiments (1.2:1 heart to liver and 1.5:1 heart to lung). Two background activity levels were considered: no activity (cold) and an activity concentration 1/10th of the heart (hot). A plastic “lesion” was placed inside of the septal wall of the myocardial insert to simulate the presence of a region without tracer uptake and contrast in this lesion was calculated for all images. The true net activity in each compartment was measured with a dose calibrator (CRC-25R, Capintec, Inc.). A 10 min SPECT image was acquired using a dedicated cardiac camera with CZT detectors (Discovery NM530c, GE Healthcare)...

  17. A method for accurate detection of genomic microdeletions using real-time quantitative PCR

    Directory of Open Access Journals (Sweden)

    Bassett Anne S

    2005-12-01

    Full Text Available Abstract Background Quantitative Polymerase Chain Reaction (qPCR) is a well-established method for quantifying levels of gene expression, but has not been routinely applied to the detection of constitutional copy number alterations of human genomic DNA. Microdeletions or microduplications of the human genome are associated with a variety of genetic disorders. Although clinical laboratories routinely use fluorescence in situ hybridization (FISH) to identify such cryptic genomic alterations, there remains a significant number of individuals in which constitutional genomic imbalance is suspected, based on clinical parameters, but cannot be readily detected using current cytogenetic techniques. Results In this study, a novel application for real-time qPCR is presented that can be used to reproducibly detect chromosomal microdeletions and microduplications. This approach was applied to DNA from a series of patient samples and controls to validate genomic copy number alteration at cytoband 22q11. The study group comprised 12 patients with clinical symptoms of chromosome 22q11 deletion syndrome (22q11DS), 1 patient trisomic for 22q11 and 4 normal controls. 6 of the patients (group 1) had known hemizygous deletions, as detected by standard diagnostic FISH, whilst the remaining 6 patients (group 2) were classified as 22q11DS negative using the clinical FISH assay. Screening of the patients and controls with a set of 10 real-time qPCR primers, spanning the 22q11.2-deleted region and flanking sequence, confirmed the FISH assay results for all patients with 100% concordance. Moreover, this qPCR enabled a refinement of the region of deletion at 22q11. Analysis of DNA from the chromosome 22 trisomic sample demonstrated genomic duplication within 22q11. Conclusion In this paper we present a qPCR approach for the detection of chromosomal microdeletions and microduplications. The strategic use of in silico modelling for qPCR primer design to avoid regions of repetitive...
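
    The copy-number logic behind such a qPCR screen is the standard 2^-ΔΔCt calculation: a hemizygous deletion makes the target locus reach threshold roughly one cycle later than in a two-copy control after normalization to a reference locus. A minimal sketch (Ct values invented, ~100% PCR efficiency assumed):

    ```python
    def copy_number(ct_target, ct_ref, ct_target_ctrl, ct_ref_ctrl, ctrl_copies=2):
        """Estimate genomic copy number by the 2^-ddCt method, assuming ~100% PCR
        efficiency. A hemizygous deletion gives ~1, a duplication ~3."""
        ddct = (ct_target - ct_ref) - (ct_target_ctrl - ct_ref_ctrl)
        return ctrl_copies * 2 ** (-ddct)

    # Invented Cts: the patient's 22q11 probe comes up ~1 cycle late vs control
    print(round(copy_number(ct_target=27.1, ct_ref=25.0,
                            ct_target_ctrl=26.1, ct_ref_ctrl=25.0), 2))  # ~1.0
    ```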

  18. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding

    Science.gov (United States)

    Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill

    2017-01-01

    Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed a theoretical reasoning on the effectiveness of these markers in a trait-specific prediction, and verified the reasoning through computer simulation. To the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding. PMID:28729875

  19. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding

    Directory of Open Access Journals (Sweden)

    Yong-Bi Fu

    2017-07-01

    Full Text Available Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed a theoretical reasoning on the effectiveness of these markers in a trait-specific prediction, and verified the reasoning through computer simulation. To the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding.

  20. Searching for an Accurate Marker-Based Prediction of an Individual Quantitative Trait in Molecular Plant Breeding.

    Science.gov (United States)

    Fu, Yong-Bi; Yang, Mo-Hua; Zeng, Fangqin; Biligetu, Bill

    2017-01-01

    Molecular plant breeding with the aid of molecular markers has played an important role in modern plant breeding over the last two decades. Many marker-based predictions for quantitative traits have been made to enhance parental selection, but the trait prediction accuracy remains generally low, even with the aid of dense, genome-wide SNP markers. To search for more accurate trait-specific prediction with informative SNP markers, we conducted a literature review on the prediction issues in molecular plant breeding and on the applicability of an RNA-Seq technique for developing function-associated specific trait (FAST) SNP markers. To understand whether and how FAST SNP markers could enhance trait prediction, we also performed a theoretical reasoning on the effectiveness of these markers in a trait-specific prediction, and verified the reasoning through computer simulation. To the end, the search yielded an alternative to regular genomic selection with FAST SNP markers that could be explored to achieve more accurate trait-specific prediction. Continuous search for better alternatives is encouraged to enhance marker-based predictions for an individual quantitative trait in molecular plant breeding.

  21. Reverse transcription-quantitative polymerase chain reaction: description of a RIN-based algorithm for accurate data normalization

    Directory of Open Access Journals (Sweden)

    Boissière-Michot Florence

    2009-04-01

    Full Text Available Abstract Background Reverse transcription-quantitative polymerase chain reaction (RT-qPCR) is the gold standard technique for mRNA quantification, but appropriate normalization is required to obtain reliable data. Normalization to accurately quantitated RNA has been proposed as the most reliable method for in vivo biopsies. However, this approach does not correct differences in RNA integrity. Results In this study, we evaluated the effect of RNA degradation on the quantification of the relative expression of nine genes (18S, ACTB, ATUB, B2M, GAPDH, HPRT, POLR2L, PSMB6 and RPLP0) that cover a wide expression spectrum. Our results show that RNA degradation could introduce up to 100% error in gene expression measurements when RT-qPCR data were normalized to total RNA. To achieve greater resolution of small differences in transcript levels in degraded samples, we improved this normalization method by developing a corrective algorithm that compensates for the loss of RNA integrity. This approach allowed us to achieve higher accuracy, since the average error for quantitative measurements was reduced to 8%. Finally, we applied our normalization strategy to the quantification of EGFR, HER2 and HER3 in 104 rectal cancer biopsies. Taken together, our data show that normalization of gene expression measurements by taking into account also RNA degradation allows much more reliable sample comparison. Conclusion We developed a new normalization method of RT-qPCR data that compensates for loss of RNA integrity and therefore allows accurate gene expression quantification in human biopsies.

  22. Accurate quantitation of D+ fetomaternal hemorrhage by flow cytometry using a novel reagent to eliminate granulocytes from analysis.

    Science.gov (United States)

    Kumpel, Belinda; Hazell, Matthew; Guest, Alan; Dixey, Jonathan; Mushens, Rosey; Bishop, Debbie; Wreford-Bush, Tim; Lee, Edmond

    2014-05-01

    Quantitation of fetomaternal hemorrhage (FMH) is performed to determine the dose of prophylactic anti-D (RhIG) required to prevent D immunization of D- women. Flow cytometry (FC) is the most accurate method. However, maternal white blood cells (WBCs) can give high background by binding anti-D nonspecifically, compromising accuracy. Maternal blood samples (69) were sent for FC quantitation of FMH after positive Kleihauer-Betke test (KBT) analysis and RhIG administration. Reagents used were BRAD-3-fluorescein isothiocyanate (FITC; anti-D), AEVZ5.3-FITC (anti-varicella zoster [anti-VZ], negative control), anti-fetal hemoglobin (HbF)-FITC, and blended two-color reagents, BRAD-3-FITC/anti-CD45-phycoerythrin (PE; anti-D/L) and BRAD-3-FITC/anti-CD66b-PE (anti-D/G). PE-positive WBCs were eliminated from analysis by gating. Full blood counts were performed on maternal samples and female donors. Elevated numbers of neutrophils were present in 80% of patients. Red blood cell (RBC) indices varied widely in maternal blood. D+ FMH values obtained with anti-D/L, anti-D/G, and anti-HbF-FITC were very similar (r = 0.99, p < 0.001). Correlation between KBT and anti-HbF-FITC FMH results was low (r = 0.716). Inaccurate FMH quantitation using the current method (anti-D minus anti-VZ) occurred with 71% of samples having less than 15 mL of D+ FMH (RBCs) and insufficient RhIG calculated for 9%. Using two-color reagents and anti-HbF-FITC, approximately 30% of patients had elevated F cells, 26% had no fetal cells, 6% had D- FMH, 26% had 4 to 15 mL of D+ FMH, and 12% of patients had more than 15 mL of D+ FMH (RBCs) requiring more than 300 μg of RhIG. Without accurate quantitation of D+ FMH by FC, some women would receive inappropriate or inadequate anti-D prophylaxis. The latter may be at risk of immunization leading to hemolytic disease of the newborn. © 2013 American Association of Blood Banks.
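
    Downstream of the flow count, the clinical arithmetic is simple: scale the percentage of D+ fetal red cells by the maternal red cell volume, then round the RhIG dose up by vials. The sketch below follows the "300 μg covers 15 mL of fetal RBCs" convention implied in this abstract, with one spare vial added as is common practice; all constants are illustrative and local protocols govern actual dosing.

    ```python
    import math

    def rhig_dose(fetal_cell_pct, maternal_rbc_volume_ml=1800.0,
                  ml_rbc_per_vial=15.0, ug_per_vial=300.0):
        """Anti-D (RhIG) dose from a flow-cytometric FMH result.
        fetal_cell_pct: % D+ fetal RBCs among maternal RBCs. The default red
        cell volume and the dosing constants are illustrative assumptions."""
        fmh_rbc_ml = fetal_cell_pct / 100.0 * maternal_rbc_volume_ml
        # Round to the nearest vial, then add one spare vial (common practice).
        vials = math.floor(fmh_rbc_ml / ml_rbc_per_vial + 0.5) + 1
        return fmh_rbc_ml, vials * ug_per_vial

    fmh, dose = rhig_dose(fetal_cell_pct=1.2)
    print(f"FMH = {fmh:.1f} mL packed fetal RBCs -> {dose:.0f} ug RhIG")
    ```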

  23. Generation of accurate peptide retention data for targeted and data independent quantitative LC-MS analysis: Chromatographic lessons in proteomics.

    Science.gov (United States)

    Krokhin, Oleg V; Spicer, Vic

    2016-12-01

    The emergence of data-independent quantitative LC-MS/MS analysis protocols further highlights the importance of high-quality reproducible chromatographic procedures. Knowing, controlling and being able to predict the effect of multiple factors that alter peptide RP-HPLC separation selectivity is critical for successful data collection for the construction of ion libraries. Proteomic researchers have often regarded RP-HPLC as a "black box", while a vast amount of research on peptide separation is readily available. In addition to obvious parameters, such as the type of ion-pairing modifier, stationary phase and column temperature, we describe the "mysterious" effects of gradient slope, column size and flow rate on peptide separation selectivity. Retention time variations due to these parameters are governed by the linear solvent strength (LSS) theory on a peptide level by the value of its slope S in the basic LSS equation, a parameter that can be accurately predicted. Thus, the application of shallower gradients, higher flow rates, or smaller columns will each increase the relative retention of peptides with higher S-values (long species with multiple positively charged groups). Simultaneous changes to these parameters that each drive shifts in separation selectivity in the same direction should be avoided. The unification of terminology represents another pressing issue in this field of applied proteomics that should be addressed to facilitate further progress. © 2016 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
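
    The LSS relations invoked here can be made concrete. For a linear gradient, a common closed form is t_R = (t0/b)·log10(2.3·k0·b + 1) + t0 + tD with steepness b = S·Δφ·t0/tG, so a peptide's S value couples its retention to the gradient time tG. The sketch below (all values invented) shows how the spacing between a low-S and a high-S peptide changes when the gradient is made shallower, i.e., why gradient slope is a selectivity variable.

    ```python
    import math

    def gradient_tr(log_k0, S, t0=1.0, tD=0.5, dphi=0.5, tG=30.0):
        """LSS gradient retention time (min):
        t_R = (t0/b) * log10(2.3*k0*b + 1) + t0 + tD,  b = S*dphi*t0/tG.
        t0: column dead time; tD: dwell time; dphi: gradient range (fraction);
        tG: gradient time. All parameter values here are illustrative."""
        b = S * dphi * t0 / tG
        k0 = 10 ** log_k0
        return (t0 / b) * math.log10(2.3 * k0 * b + 1) + t0 + tD

    for tG in (30.0, 90.0):                                  # steep vs shallow
        low_S = gradient_tr(log_k0=8.0, S=25.0, tG=tG)       # smaller peptide
        high_S = gradient_tr(log_k0=8.0, S=50.0, tG=tG)      # larger peptide
        print(f"tG={tG:>3.0f} min: t_R(low S)={low_S:5.1f}, "
              f"t_R(high S)={high_S:5.1f}, spacing={low_S - high_S:5.1f} min")
    ```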

  24. Accurate, Fast and Cost-Effective Diagnostic Test for Monosomy 1p36 Using Real-Time Quantitative PCR

    Directory of Open Access Journals (Sweden)

    Pricila da Silva Cunha

    2014-01-01

    Full Text Available Monosomy 1p36 is considered the most common subtelomeric deletion syndrome in humans and it accounts for 0.5–0.7% of all the cases of idiopathic intellectual disability. The molecular diagnosis is often made by microarray-based comparative genomic hybridization (aCGH), which has the drawback of being a high-cost technique. However, patients with classic monosomy 1p36 share some typical clinical characteristics that, together with its common prevalence, justify the development of a less expensive, targeted diagnostic method. In this study, we developed a simple, rapid, and inexpensive real-time quantitative PCR (qPCR) assay for targeted diagnosis of monosomy 1p36, easily accessible for low-budget laboratories in developing countries. For this, we have chosen two target genes which are deleted in the majority of patients with monosomy 1p36: PRKCZ and SKI. In total, 39 patients previously diagnosed with monosomy 1p36 by aCGH, fluorescent in situ hybridization (FISH), and/or multiplex ligation-dependent probe amplification (MLPA) all tested positive on our qPCR assay. By simultaneously using these two genes we have been able to detect 1p36 deletions with 100% sensitivity and 100% specificity. We conclude that qPCR of PRKCZ and SKI is a fast and accurate diagnostic test for monosomy 1p36, costing less than 10 US dollars in reagent costs.

  25. Multiplex PCR with minisequencing as an effective high-throughput SNP typing method for formalin-fixed tissue

    DEFF Research Database (Denmark)

    Gilbert, Marcus T P; Sanchez, Juan J; Haselkorn, Tamara

    2007-01-01

    We tested a high-throughput SNP typing technique, multiplex PCR with minisequencing (MPMS), on 92 DNA extractions performed on six archival FFPE samples of variable DNA quality, which date between 9 and 25 years old. On the three extracts with highest quality, we found the assay efficiency to be near 100%. However, the efficiency of the lowest quality extracts varied significantly. In this study, we demonstrate that although direct measures of DNA concentration in the extracts provide no useful information with regard to subsequent MPMS success, the success of the assay can be determined to some degree a priori, through initial screening of the DNA quality using a simple quantitative real-time PCR (qPCR) assay for nuclear DNA, and/or an assay of the maximum PCR amplifiable size of nuclear DNA. MPMS promises to be of significant use in future genetic studies on FFPE material. It provides a streamlined approach for retrieving a large amount of genetic...

  26. A unique charge-coupled device/xenon arc lamp based imaging system for the accurate detection and quantitation of multicolour fluorescence.

    Science.gov (United States)

    Spibey, C A; Jackson, P; Herick, K

    2001-03-01

    In recent years the use of fluorescent dyes in biological applications has dramatically increased. The continual improvement in the capabilities of these fluorescent dyes demands increasingly sensitive detection systems that provide accurate quantitation over a wide linear dynamic range. In the field of proteomics, the detection, quantitation and identification of very low abundance proteins are of extreme importance in understanding cellular processes. Therefore, the instrumentation used to acquire an image of such samples, for spot picking and identification by mass spectrometry, must be sensitive enough to be able not only to maximise the sensitivity and dynamic range of the staining dyes but, as importantly, to adapt to the ever changing portfolio of fluorescent dyes as they become available. Just as the available fluorescent probes are improving and evolving, so are the users' application requirements. Therefore, the instrumentation chosen must be flexible to address and adapt to those changing needs. As a result, a highly competitive market for the supply and production of such dyes and the instrumentation for their detection and quantitation has emerged. The instrumentation currently available is based on either laser/photomultiplier tube (PMT) scanning or lamp/charge-coupled device (CCD) based mechanisms. This review briefly discusses the advantages and disadvantages of both system types for fluorescence imaging, gives a technical overview of CCD technology and describes in detail a unique xenon/arc lamp CCD based instrument, from PerkinElmer Life Sciences. The Wallac-1442 ARTHUR is unique in its ability to scan both large areas at high resolution and give accurate selectable excitation over the whole of the UV/visible range. It operates by filtering both the excitation and emission wavelengths, providing optimal and accurate measurement and quantitation of virtually any available dye and allows excellent spectral resolution between different fluorophores.

  27. ExSTA: External Standard Addition Method for Accurate High-Throughput Quantitation in Targeted Proteomics Experiments.

    Science.gov (United States)

    Mohammed, Yassene; Pan, Jingxi; Zhang, Suping; Han, Jun; Borchers, Christoph H

    2018-03-01

    Targeted proteomics using MRM with stable-isotope-labeled internal-standard (SIS) peptides is the current method of choice for protein quantitation in complex biological matrices. Better quantitation can be achieved with the internal standard-addition method, where successive increments of the synthesized natural form (NAT) of the endogenous analyte are added to each sample, a response curve is generated, and the endogenous concentration is determined at the x-intercept. Internal NAT-addition, however, requires multiple analyses of each sample, resulting in increased sample consumption and analysis time. To compare the following three methods, an MRM assay for 34 high-to-moderate abundance human plasma proteins is used: classical internal SIS-addition, internal NAT-addition, and external NAT-addition (generated in buffer using NAT and SIS peptides). Using endogenous-free chicken plasma, the accuracy is also evaluated. The internal NAT-addition outperforms the other two in precision and accuracy. However, the curves derived by internal vs. external NAT-addition differ by only ≈3.8% in slope, providing comparable accuracies and precision with good CV values. While the internal NAT-addition method may be "ideal", this new external NAT-addition can be used to determine the concentration of high-to-moderate abundance endogenous plasma proteins, providing a robust and cost-effective alternative for clinical analyses or other high-throughput applications. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
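
    The NAT-addition quantitation described here regresses response against the spiked amount and reads the endogenous level off the x-intercept. A generic sketch with synthetic numbers:

    ```python
    import numpy as np

    added = np.array([0.0, 5.0, 10.0, 20.0, 40.0])        # spiked NAT amount (fmol)
    response = np.array([0.52, 0.77, 1.02, 1.49, 2.51])   # synthetic peak-area ratio

    slope, intercept = np.polyfit(added, response, 1)     # linear response curve
    endogenous = intercept / slope     # magnitude of the x-intercept (response = 0)
    print(f"endogenous analyte ~ {endogenous:.1f} fmol")
    ```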

  28. Accurate quantitative CF-LIBS analysis of both major and minor elements in alloys via iterative correction of plasma temperature and spectral intensity

    Science.gov (United States)

    Shuxia, ZHAO; Lei, ZHANG; Jiajia, HOU; Yang, ZHAO; Wangbao, YIN; Weiguang, MA; Lei, DONG; Liantuan, XIAO; Suotang, JIA

    2018-03-01

    The chemical composition of alloys directly determines their mechanical behaviors and application fields. Accurate and rapid analysis of both major and minor elements in alloys plays a key role in metallurgy quality control and material classification processes. A quantitative calibration-free laser-induced breakdown spectroscopy (CF-LIBS) analysis method, which carries out combined correction of plasma temperature and spectral intensity by using a second-order iterative algorithm and two boundary standard samples, is proposed to realize accurate composition measurements. Experimental results show that, compared to conventional CF-LIBS analysis, the relative errors for the major elements Cu and Zn and the minor element Pb in the copper-lead alloys have been reduced from 12%, 26% and 32% to 1.8%, 2.7% and 13.4%, respectively. The measurement accuracy for all elements has been improved substantially.
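
    Conventional CF-LIBS, the baseline these authors improve on, starts from a Boltzmann plot: ln(Iλ/(gA)) is linear in the upper-level energy Ek with slope -1/(kB·T), and the per-species intercepts yield relative concentrations after normalization by the partition functions. The sketch below performs only this first, uncorrected step on synthetic line data; the paper's iterative temperature and intensity correction is not reproduced here.

    ```python
    import numpy as np

    K_B = 8.617e-5  # Boltzmann constant, eV/K

    # Synthetic line list for one species: upper-level energies Ek (eV) and the
    # measured quantity y = ln(I*lambda/(g*A)), generated here for T = 10000 K.
    Ek = np.array([2.0, 3.1, 4.4, 5.2, 6.0])
    y = 12.0 - Ek / (K_B * 10000.0) + np.random.default_rng(1).normal(0, 0.05, 5)

    slope, intercept = np.polyfit(Ek, y, 1)   # classical Boltzmann-plot fit
    T = -1.0 / (K_B * slope)                  # plasma temperature from the slope
    print(f"T ~ {T:.0f} K, intercept = {intercept:.2f}")  # intercept -> F*C_s/U_s(T)
    ```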

  29. Evaluation of Faecalibacterium 16S rDNA genetic markers for accurate identification of swine faecal waste by quantitative PCR.

    Science.gov (United States)

    Duan, Chuanren; Cui, Yamin; Zhao, Yi; Zhai, Jun; Zhang, Baoyun; Zhang, Kun; Sun, Da; Chen, Hang

    2016-10-01

    A genetic marker within the 16S rRNA gene of Faecalibacterium was identified for use in a quantitative PCR (qPCR) assay to detect swine faecal contamination in water. A total of 146,038 bacterial sequences were obtained using 454 pyrosequencing. By comparative bioinformatics analysis of Faecalibacterium sequences with those of numerous swine and other animal species, swine-specific Faecalibacterium 16S rRNA gene sequences were identified and polymerase chain reaction (PCR) primer sets designed and tested against faecal DNA samples from swine and non-swine sources. Two PCR primer sets, PFB-1 and PFB-2, showed the highest specificity to swine faecal waste and had no cross-reaction with other animal samples. PFB-1 and PFB-2 amplified 16S rRNA gene sequences from 50 samples of swine with positive ratios of 86 and 90%, respectively. We compared swine-specific Faecalibacterium qPCR assays for the purpose of quantifying the newly identified markers. The quantification limits (LOQs) of the PFB-1 and PFB-2 markers in environmental water were 6.5 and 2.9 copies per 100 ml, respectively. Of the swine-associated assays tested, PFB-2 was more sensitive in detecting the swine faecal waste and quantifying the microbial load. Furthermore, the microbial abundance and diversity of the microbiomes of swine and other animal faeces were estimated using operational taxonomic units (OTUs). The species specificity was demonstrated for the microbial populations present in various animal faeces. Copyright © 2016 Elsevier Ltd. All rights reserved.

  30. Assessing reference genes for accurate transcript normalization using quantitative real-time PCR in pearl millet [Pennisetum glaucum (L.) R. Br.].

    Directory of Open Access Journals (Sweden)

    Prasenjit Saha

    Full Text Available Pearl millet [Pennisetum glaucum (L.) R. Br.], a close relative of Panicoideae food crops and bioenergy grasses, offers an ideal system to perform functional genomics studies related to C4 photosynthesis and abiotic stress tolerance. Quantitative real-time reverse transcription polymerase chain reaction (qRT-PCR) provides a sensitive platform to conduct such gene expression analyses. However, the lack of suitable internal control reference genes for accurate transcript normalization during qRT-PCR analysis in pearl millet is the major limitation. Here, we conducted a comprehensive assessment of 18 reference genes on 234 samples which included an array of different developmental tissues, hormone treatments and abiotic stress conditions from three genotypes to determine appropriate reference genes for accurate normalization of qRT-PCR data. Analyses of Ct values using Stability Index, BestKeeper, ΔCt, Normfinder, geNorm and RefFinder programs ranked PP2A, TIP41, UBC2, UBQ5 and ACT as the most reliable reference genes for accurate transcript normalization under different experimental conditions. Furthermore, we validated the specificity of these genes for precise quantification of relative gene expression and provided evidence that a combination of the best reference genes is required to obtain optimal expression patterns for both endogenous genes as well as transgenes in pearl millet.
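
    geNorm's stability measure M, one of the rankings used above, is the mean, over all other candidates, of the standard deviation of the pairwise log2 expression ratios across samples; low M means stable. A compact implementation of that definition with made-up expression values:

    ```python
    import numpy as np

    def genorm_m(expr):
        """geNorm stability M per gene. expr: dict gene -> 1-D array of relative
        expression across the same samples (linear scale)."""
        genes = list(expr)
        m = {}
        for g in genes:
            sds = [np.std(np.log2(expr[g] / expr[h]), ddof=1)
                   for h in genes if h != g]
            m[g] = float(np.mean(sds))
        return m

    rng = np.random.default_rng(7)
    n = 12  # samples
    expr = {"PP2A":  2 ** rng.normal(0, 0.15, n),   # stable (toy data)
            "UBQ5":  2 ** rng.normal(0, 0.20, n),   # stable (toy data)
            "GAPDH": 2 ** rng.normal(0, 0.80, n)}   # unstable (toy data)
    for gene, m in sorted(genorm_m(expr).items(), key=lambda kv: kv[1]):
        print(f"{gene}: M = {m:.2f}")
    ```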

  31. Quantitative contrast-enhanced first-pass cardiac perfusion MRI at 3 tesla with accurate arterial input function and myocardial wall enhancement.

    Science.gov (United States)

    Breton, Elodie; Kim, Daniel; Chung, Sohae; Axel, Leon

    2011-09-01

    To develop, and validate in vivo, a robust quantitative first-pass perfusion cardiovascular MR (CMR) method with accurate arterial input function (AIF) and myocardial wall enhancement. A saturation-recovery (SR) pulse sequence was modified to sequentially acquire multiple slices after a single nonselective saturation pulse at 3 Tesla. In each heartbeat, an AIF image is acquired in the aortic root with a short time delay (TD) (50 ms), followed by the acquisition of myocardial images with longer TD values (∼150-400 ms). Longitudinal relaxation rates (R1 = 1/T1) were calculated using an ideal saturation recovery equation based on the Bloch equation, and corresponding gadolinium contrast concentrations were calculated assuming fast water exchange conditions. The proposed method was validated against a reference multi-point SR method by comparing their respective R1 measurements in the blood and left ventricular myocardium, before and at multiple time-points following contrast injections, in 7 volunteers. R1 measurements with the proposed method and the reference multi-point method were strongly correlated (r > 0.88, P < 10^-5) and in good agreement (mean difference ±1.96 standard deviation 0.131 ± 0.317/0.018 ± 0.140 s^-1 for blood/myocardium, respectively). The proposed quantitative first-pass perfusion CMR method measured accurate R1 values for quantification of AIF and myocardial wall contrast agent concentrations in 3 cardiac short-axis slices, in a total acquisition time of 523 ms per heartbeat. Copyright © 2011 Wiley-Liss, Inc.
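
    The quantitative chain in this method is short: invert the ideal saturation-recovery signal equation S = S0·(1 - exp(-TD·R1)) for R1, then convert to concentration via R1 = R1,pre + r1·[Gd]. In the sketch below the relaxivity r1 ≈ 4 s^-1 mM^-1 is an assumed, typical 3 T value, and all inputs are illustrative.

    ```python
    import math

    def r1_from_sr(signal, s0, td_s):
        """Invert ideal saturation recovery S = S0*(1 - exp(-TD*R1)) for R1 (s^-1)."""
        return -math.log(1.0 - signal / s0) / td_s

    def gd_concentration(r1, r1_pre, relaxivity=4.0):
        """[Gd] (mM) from fast-exchange linearity R1 = R1_pre + r1 * [Gd].
        relaxivity ~4 s^-1 mM^-1 is a typical 3 T value, assumed here."""
        return (r1 - r1_pre) / relaxivity

    r1 = r1_from_sr(signal=0.42, s0=1.0, td_s=0.05)   # AIF image, TD = 50 ms
    print(f"R1 = {r1:.1f} s^-1, [Gd] ~ {gd_concentration(r1, r1_pre=0.6):.2f} mM")
    ```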

  32. A newly developed maneuver, field change conversion (FCC), improved evaluation of the left ventricular volume more accurately on quantitative gated SPECT (QGS) analysis

    International Nuclear Information System (INIS)

    Tajima, Osamu; Shibasaki, Masaki; Hoshi, Toshiko; Imai, Kamon

    2002-01-01

    The purpose of this study was to investigate whether a newly developed maneuver that reduces the reconstruction area by a half more accurately evaluates left ventricular (LV) volume on quantitative gated SPECT (QGS) analysis. The subjects were 38 patients who underwent left ventricular angiography (LVG) followed by G-SPECT within 2 weeks. Acquisition was performed with a general purpose collimator and a 64 x 64 matrix. On QGS analysis, the field magnification was 34 cm in the original image (Original: ORI), and furthermore it was changed from 34 cm to 17 cm to enlarge the reconstructed image (Field Change Conversion: FCC). End-diastolic volume (EDV) and end-systolic volume (ESV) of the left ventricle were also obtained using LVG. EDV was 71±19 ml, 83±20 ml and 98±23 ml for ORI, FCC and LVG, respectively (p<0.001: ORI versus LVG, p<0.001: ORI versus FCC, p<0.001: FCC versus LVG). ESV was 28±12 ml, 34±13 ml and 41±14 ml for ORI, FCC and LVG, respectively (p<0.001: ORI versus LVG, p<0.001: ORI versus FCC, p<0.001: FCC versus LVG). FCC was better than ORI for calculating LV volume in clinical cases. Furthermore, FCC is a useful method for accurately measuring the LV volume on QGS analysis. (author)

  33. Mitochondrial DNA as a non-invasive biomarker: Accurate quantification using real time quantitative PCR without co-amplification of pseudogenes and dilution bias

    International Nuclear Information System (INIS)

    Malik, Afshan N.; Shahni, Rojeen; Rodriguez-de-Ledesma, Ana; Laftah, Abas; Cunningham, Phil

    2011-01-01

    Highlights: → Mitochondrial dysfunction is central to many diseases of oxidative stress. → 95% of the mitochondrial genome is duplicated in the nuclear genome. → Dilution of untreated genomic DNA leads to dilution bias. → Unique primers and template pretreatment are needed to accurately measure mitochondrial DNA content. -- Abstract: Circulating mitochondrial DNA (MtDNA) is a potential non-invasive biomarker of cellular mitochondrial dysfunction, the latter known to be central to a wide range of human diseases. Changes in MtDNA are usually determined by quantification of MtDNA relative to nuclear DNA (Mt/N) using real time quantitative PCR. We propose that the methodology for measuring Mt/N needs to be improved and we have identified that current methods have at least one of the following three problems: (1) As much of the mitochondrial genome is duplicated in the nuclear genome, many commonly used MtDNA primers co-amplify homologous pseudogenes found in the nuclear genome; (2) use of regions from genes such as β-actin and 18S rRNA which are repetitive and/or highly variable for qPCR of the nuclear genome leads to errors; and (3) the size difference of mitochondrial and nuclear genomes cause a 'dilution bias' when template DNA is diluted. We describe a PCR-based method using unique regions in the human mitochondrial genome not duplicated in the nuclear genome; unique single copy region in the nuclear genome and template treatment to remove dilution bias, to accurately quantify MtDNA from human samples.
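
    With unique primer pairs and pretreated template in hand, the Mt/N ratio itself is a ΔCt calculation: assuming equal amplification efficiencies, mtDNA copies per diploid cell ≈ 2·2^(Ct_nuclear - Ct_mito). A minimal sketch (Ct values invented):

    ```python
    def mt_per_cell(ct_mito, ct_nuclear, efficiency=2.0):
        """MtDNA copies per diploid cell from a qPCR Ct difference, assuming the
        mito and single-copy nuclear assays share the same efficiency (~2-fold
        per cycle) and the template was pretreated to avoid dilution bias."""
        return 2 * efficiency ** (ct_nuclear - ct_mito)

    print(f"{mt_per_cell(ct_mito=18.3, ct_nuclear=27.9):.0f} mtDNA copies per cell")
    ```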

  34. Fluorescence correlation spectroscopy analysis for accurate determination of proportion of doubly labeled DNA in fluorescent DNA pool for quantitative biochemical assays.

    Science.gov (United States)

    Hou, Sen; Sun, Lili; Wieczorek, Stefan A; Kalwarczyk, Tomasz; Kaminski, Tomasz S; Holyst, Robert

    2014-01-15

    Fluorescent double-stranded DNA (dsDNA) molecules labeled at both ends are commonly produced by annealing of complementary single-stranded DNA (ssDNA) molecules, labeled with fluorescent dyes at the same (3' or 5') end. Because the labeling efficiency of ssDNA is smaller than 100%, the resulting dsDNA have two, one or are without a dye. Existing methods are insufficient to measure the percentage of the doubly-labeled dsDNA component in the fluorescent DNA sample and it is even difficult to distinguish the doubly-labeled DNA component from the singly-labeled component. Accurate measurement of the percentage of such doubly labeled dsDNA component is a critical prerequisite for quantitative biochemical measurements, which has puzzled scientists for decades. We established a fluorescence correlation spectroscopy (FCS) system to measure the percentage of doubly labeled dsDNA (PDL) in the total fluorescent dsDNA pool. The method is based on comparative analysis of the given sample and a reference dsDNA sample prepared by adding certain amount of unlabeled ssDNA into the original ssDNA solution. From FCS autocorrelation functions, we obtain the number of fluorescent dsDNA molecules in the focal volume of the confocal microscope and PDL. We also calculate the labeling efficiency of ssDNA. The method requires minimal amount of material. The samples have the concentration of DNA in the nano-molar/L range and the volume of tens of microliters. We verify our method by using restriction enzyme Hind III to cleave the fluorescent dsDNA. The kinetics of the reaction depends strongly on PDL, a critical parameter for quantitative biochemical measurements. Copyright © 2013 Elsevier B.V. All rights reserved.
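
    The combinatorics behind PDL are worth spelling out. If each strand carries a dye independently with labeling efficiency p, annealing yields doubly labeled duplexes with probability p² and singly labeled ones with probability 2p(1-p), so among fluorescent duplexes PDL = p²/(p² + 2p(1-p)) = p/(2-p). The forward relation in code:

    ```python
    def pdl_from_labeling_efficiency(p):
        """Fraction of doubly labeled dsDNA among all *fluorescent* dsDNA when
        each strand carries a dye independently with probability p:
        PDL = p^2 / (p^2 + 2*p*(1-p)) = p / (2 - p)."""
        if not 0 < p <= 1:
            raise ValueError("labeling efficiency must be in (0, 1]")
        return p / (2.0 - p)

    for p in (0.5, 0.8, 0.95):
        print(f"p = {p:.2f} -> PDL = {pdl_from_labeling_efficiency(p):.2f}")
    ```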

  15. Detection and quantitation of trace phenolphthalein (in pharmaceutical preparations and in forensic exhibits) by liquid chromatography-tandem mass spectrometry, a sensitive and accurate method.

    Science.gov (United States)

    Sharma, Kakali; Sharma, Shiba P; Lahiri, Sujit C

    2013-01-01

    Phenolphthalein, an acid-base indicator and laxative, is important as a constituent of widely used weight-reducing multicomponent food formulations. Phenolphthalein is a useful reagent in forensic science for the identification of blood stains of suspected victims and for apprehending erring officials accepting bribes in graft or trap cases. The pink-colored alkaline hand washes originating from phenolphthalein-smeared notes can easily be determined spectrophotometrically. But in many cases, the colored solution turns colorless with time, which renders the genuineness of bribe cases doubtful to the judiciary. Until now, no method has been available for the detection and identification of phenolphthalein in colorless forensic exhibits with positive proof. Liquid chromatography-tandem mass spectrometry was found to be a highly sensitive and accurate method capable of detecting and quantifying trace phenolphthalein in commercial formulations and colorless forensic exhibits with positive proof. The detection limit of phenolphthalein was found to be 1.66 pg/μL (equivalently, ng/mL), and the calibration curve shows good linearity (r² = 0.9974). © 2012 American Academy of Forensic Sciences.

  16. Genome-Wide Association Mapping for Intelligence in Military Working Dogs: Canine Cohort, Canine Intelligence Assessment Regimen, Genome-Wide Single Nucleotide Polymorphism (SNP) Typing, and Unsupervised Classification Algorithm for Genome-Wide Association Data Analysis

    Science.gov (United States)

    2011-09-01

    SNP Array v2. A 'proof-of-concept' advanced data mining algorithm for unsupervised analysis of a genome-wide association study (GWAS) dataset was... [record fragment continues with a table of the canine cohort: identifier, dog name, sex, breed, inclusion status] ...mining of genetic studies in general, and especially GWAS. As a proof-of-concept, a classification analysis of the WG SNP typing dataset of a...

  17. Identification and evaluation of new reference genes in Gossypium hirsutum for accurate normalization of real-time quantitative RT-PCR data

    Directory of Open Access Journals (Sweden)

    Alves-Ferreira Marcio

    2010-03-01

    Full Text Available Abstract Background Normalization to reference genes, or housekeeping genes, can produce more accurate and reliable results from reverse transcription real-time quantitative polymerase chain reaction (qPCR). Recent studies have shown that no single housekeeping gene is universal for all experiments; thus, selection of suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping genes have been carried out in plants. Therefore, qPCR studies on important crops such as cotton have been hampered by the lack of suitable reference genes. Results By the use of two distinct algorithms, implemented in geNorm and NormFinder, we have assessed the gene expression of nine candidate reference genes in cotton: GhACT4, GhEF1α5, GhFBX6, GhPP2A1, GhMZA, GhPTB, GhGAPC2, GhβTUB3 and GhUBQ14. The candidate reference genes were evaluated in 23 experimental samples consisting of six distinct plant organs, eight stages of flower development, four stages of fruit development and the floral verticils. The expression of the GhPP2A1 and GhUBQ14 genes was the most stable across all samples and also when distinct plant organs were examined. GhACT4 and GhUBQ14 presented more stable expression during flower development, GhACT4 and GhFBX6 in the floral verticils, and GhMZA and GhPTB during fruit development. Our analysis provided the most suitable combination of reference genes for each experimental set tested, as internal controls for reliable qPCR data normalization. In addition, to illustrate the use of cotton reference genes, we checked the expression of two cotton MADS-box genes in distinct plant and floral organs and also during flower development. Conclusion We have tested the expression stabilities of nine candidate genes in a set of 23 tissue samples from cotton plants divided into five different experimental sets. As a result of this evaluation, we recommend the use of the GhUBQ14 and GhPP2A1 housekeeping genes as superior references for normalization of gene expression measures in cotton.

  18. Identification and evaluation of new reference genes in Gossypium hirsutum for accurate normalization of real-time quantitative RT-PCR data.

    Science.gov (United States)

    Artico, Sinara; Nardeli, Sarah M; Brilhante, Osmundo; Grossi-de-Sa, Maria Fátima; Alves-Ferreira, Marcio

    2010-03-21

    Normalization to reference genes, or housekeeping genes, can produce more accurate and reliable results from reverse transcription real-time quantitative polymerase chain reaction (qPCR). Recent studies have shown that no single housekeeping gene is universal for all experiments; thus, selection of suitable reference genes should be the first step of any qPCR analysis. Only a few studies on the identification of housekeeping genes have been carried out in plants. Therefore, qPCR studies on important crops such as cotton have been hampered by the lack of suitable reference genes. By the use of two distinct algorithms, implemented in geNorm and NormFinder, we have assessed the gene expression of nine candidate reference genes in cotton: GhACT4, GhEF1alpha5, GhFBX6, GhPP2A1, GhMZA, GhPTB, GhGAPC2, GhbetaTUB3 and GhUBQ14. The candidate reference genes were evaluated in 23 experimental samples consisting of six distinct plant organs, eight stages of flower development, four stages of fruit development and the floral verticils. The expression of the GhPP2A1 and GhUBQ14 genes was the most stable across all samples and also when distinct plant organs were examined. GhACT4 and GhUBQ14 presented more stable expression during flower development, GhACT4 and GhFBX6 in the floral verticils, and GhMZA and GhPTB during fruit development. Our analysis provided the most suitable combination of reference genes for each experimental set tested, as internal controls for reliable qPCR data normalization. In addition, to illustrate the use of cotton reference genes, we checked the expression of two cotton MADS-box genes in distinct plant and floral organs and also during flower development. We have tested the expression stabilities of nine candidate genes in a set of 23 tissue samples from cotton plants divided into five different experimental sets. As a result of this evaluation, we recommend the use of the GhUBQ14 and GhPP2A1 housekeeping genes as superior references for normalization of gene expression measures in cotton.

  19. Optimized slice-selective 1H NMR experiments combined with highly accurate quantitative 13C NMR using an internal reference method

    Science.gov (United States)

    Jézéquel, Tangi; Silvestre, Virginie; Dinis, Katy; Giraudeau, Patrick; Akoka, Serge

    2018-04-01

    Isotope ratio monitoring by 13C NMR spectrometry (irm-13C NMR) provides the complete 13C intramolecular position-specific composition at natural abundance. It represents a powerful tool to track the (bio)chemical pathway that led to the synthesis of targeted molecules, since it allows Position-specific Isotope Analysis (PSIA). Because of the very small range of variation of 13C natural abundance values (50‰ for the isotopic composition of a given nucleus), irm-13C NMR requires 1‰ accuracy and thus highly quantitative analysis by 13C NMR. Until now, the conventional strategy to determine the position-specific abundance xi has relied on the combination of irm-MS (isotope ratio monitoring Mass Spectrometry) and quantitative 13C NMR. However, this approach presents a serious drawback, since it relies on two different techniques and requires the signals of all the carbons of the analyzed compound to be measured separately, which is not always possible. To circumvent this constraint, we recently proposed a new methodology to perform 13C isotopic analysis using an internal reference method and relying on NMR only. The method combines a highly quantitative 1H NMR pulse sequence (named DWET) with a 13C isotopic NMR measurement. However, the recently published DWET sequence is unsuited for samples with short T1, which forms a serious limitation for irm-13C NMR experiments where a relaxing agent is added. In this context, we suggest two variants of the DWET, called Multi-WET and Profiled-WET, developed and optimized to reach the same accuracy of 1‰ with better immunity towards T1 variations. Their performance is evaluated on the determination of the 13C isotopic profile of vanillin. Both pulse sequences show 1‰ accuracy with increased robustness to pulse miscalibrations compared to the initial DWET method. This constitutes a major advance in the context of irm-13C NMR, since it is now possible to perform isotopic analysis with high accuracy using NMR alone.
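
    For orientation, the quantities involved follow standard position-specific isotope analysis relations (common notation, not reproduced from the paper): the position-specific composition comes from quantitative peak areas, and results are conventionally reported in delta notation.

        % Mole fraction of total 13C located at carbon site i, from quantitative
        % 13C peak areas A_i of an n-carbon molecule:
        x_i = \frac{A_i}{\sum_{j=1}^{n} A_j}
        % Conventional per-mil delta notation against the V-PDB standard ratio:
        \delta^{13}\mathrm{C}_i = \left( \frac{R_i}{R_{\mathrm{VPDB}}} - 1 \right) \times 1000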

  20. Evaluation of the iPLEX(®) Sample ID Plus Panel designed for the Sequenom MassARRAY(®) system. A SNP typing assay developed for human identification and sample tracking based on the SNPforID panel

    DEFF Research Database (Denmark)

    Johansen, P; Andersen, J D; Børsting, Claus

    2013-01-01

    Sequenom launched the first commercial SNP typing kit for human identification, named the iPLEX(®) Sample ID Plus Panel. The kit amplifies 47 of the 52 SNPs in the SNPforID panel, amelogenin and two Y-chromosome SNPs in one multiplex PCR. The SNPs were analyzed by single base extension (SBE) and Matrix Assisted Laser Desorption/Ionization-Time of Flight Mass Spectrometry (MALDI-TOF MS). In this study, we evaluated the accuracy and sensitivity of the iPLEX(®) Sample ID Plus Panel by comparing the typing results of the iPLEX(®) Sample ID Plus Panel with those obtained with our ISO 17025 accredited... based on the peak height and the signal-to-noise data exported from the TYPER 4.0 software. With the forensic analysis parameters, all inconsistencies were eliminated in reactions with ≥10 ng DNA. However, the average call rate decreased to 69.9%. The iPLEX(®) Sample ID Plus Panel was tested on 10 degraded samples...

  1. Selection of Suitable Internal Control Genes for Accurate Normalization of Real-Time Quantitative PCR Data of Buffalo (Bubalus bubalis) Blastocysts Produced by SCNT and IVF.

    Science.gov (United States)

    Sood, Tanushri Jerath; Lagah, Swati Viviyan; Sharma, Ankita; Singla, Suresh Kumar; Mukesh, Manishi; Chauhan, Manmohan Singh; Manik, Radheysham; Palta, Prabhat

    2017-10-01

    We evaluated the suitability of 10 candidate internal control genes (ICGs), belonging to different functional classes, namely ACTB, EEF1A1, GAPDH, HPRT1, HMBS, RPS15, RPS18, RPS23, SDHA, and UBC for normalizing the real-time quantitative polymerase chain reaction (qPCR) data of blastocyst-stage buffalo embryos produced by hand-made cloning and in vitro fertilization (IVF). Total RNA was isolated from three pools, each of cloned and IVF blastocysts (n = 50/pool) for cDNA synthesis. Two different statistical algorithms geNorm and NormFinder were used for evaluating the stability of these genes. Based on gene stability measure (M value) and pairwise variation (V value), calculated by geNorm analysis, the most stable ICGs were RPS15, HPRT1, and ACTB for cloned blastocysts, HMBS, UBC, and HPRT1 for IVF blastocysts and RPS15, GAPDH, and HPRT1 for both the embryo types analyzed together. RPS18 was the least stable gene for both cloned and IVF blastocysts. Following NormFinder analysis, the order of stability was RPS15 = HPRT1>GAPDH for cloned blastocysts, HMBS = UBC>RPS23 for IVF blastocysts, and HPRT1>GAPDH>RPS15 for cloned and IVF blastocysts together. These results suggest that despite overlapping of the three most stable ICGs between cloned and IVF blastocysts, the panel of ICGs selected for normalization of qPCR data of cloned and IVF blastocyst-stage embryos should be different.
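
    The geNorm stability measure M used here (and in the cotton studies above) is straightforward to compute. A minimal sketch, assuming relative expression quantities arranged as a samples-by-genes array; the toy data and names are illustrative:

        import numpy as np

        # Hedged sketch of the geNorm gene-stability measure M (Vandesompele et al., 2002):
        # for each candidate gene, M is the mean standard deviation of its log2 expression
        # ratios against every other candidate; a lower M indicates a more stable reference.
        # 'expr' is a (samples x genes) array of relative expression quantities.

        def genorm_m(expr: np.ndarray) -> np.ndarray:
            log_expr = np.log2(expr)
            n_genes = expr.shape[1]
            m = np.empty(n_genes)
            for j in range(n_genes):
                # SD across samples of the log-ratio of gene j vs gene k, averaged over k != j
                v = [np.std(log_expr[:, j] - log_expr[:, k], ddof=1)
                     for k in range(n_genes) if k != j]
                m[j] = np.mean(v)
            return m

        expr = np.random.lognormal(mean=2.0, sigma=0.3, size=(6, 10))  # 6 pools, 10 candidate ICGs
        print(genorm_m(expr).round(3))  # rank candidates; drop the highest-M gene iteratively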

  2. Quantitation of specific binding ratio in 123I-FP-CIT SPECT: accurate processing strategy for cerebral ventricular enlargement with use of 3D-striatal digital brain phantom.

    Science.gov (United States)

    Furuta, Akihiro; Onishi, Hideo; Amijima, Hizuru

    2018-06-01

    This study aimed to evaluate the effect of ventricular enlargement on the specific binding ratio (SBR) and to validate the cerebrospinal fluid (CSF)-Mask algorithm for quantitative SBR assessment of 123I-FP-CIT single-photon emission computed tomography (SPECT) images with the use of a 3D-striatum digital brain (SDB) phantom. Ventricular enlargement was simulated by three-dimensional extensions in a 3D-SDB phantom comprising segments representing the striatum, ventricle, brain parenchyma, and skull bone. The Evans Index (EI) was measured in 3D-SDB phantom images of an enlarged ventricle. Projection data sets were generated from the 3D-SDB phantoms with blurring, scatter, and attenuation. Images were reconstructed using the ordered subset expectation maximization (OSEM) algorithm and corrected for attenuation, scatter, and resolution recovery. SBR was computed with DaTView (Southampton method) bundled with the CSF-Mask processing software, and was assessed for various coefficients (f factors) of the CSF-Mask. Phantom simulations used true SBR values of 1, 2, 3, 4, and 5. Measured SBRs were underestimated by more than 50% as the EI increased relative to the true SBR, and this trend was most pronounced at low SBR. The CSF-Mask improved the 20% underestimates and brought the measured SBR closer to the true values at an f factor of 1.0, despite the increase in EI. We related the EI and f factor through a linear regression function (y = -3.53x + 1.95; r = 0.95) obtained using the root-mean-square error. Processing with CSF-Mask generates accurate quantitative SBR from dopamine transporter SPECT images of patients with ventricular enlargement.
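
    For reference, the basic specific binding ratio estimated by such analyses has the standard form below; the Southampton implementation adds volume-of-interest count corrections not shown here.

        % Specific binding ratio from striatal and reference-region activity
        % concentrations C (basic form only):
        \mathrm{SBR} = \frac{C_{\mathrm{striatum}} - C_{\mathrm{reference}}}{C_{\mathrm{reference}}}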

  3. Analytical Validation of a Highly Quantitative, Sensitive, Accurate, and Reproducible Assay (HERmark®) for the Measurement of HER2 Total Protein and HER2 Homodimers in FFPE Breast Cancer Tumor Specimens

    Directory of Open Access Journals (Sweden)

    Jeffrey S. Larson

    2010-01-01

    Full Text Available We report here the results of the analytical validation of assays that measure HER2 total protein (H2T) and HER2 homodimer (H2D) expression in formalin-fixed paraffin-embedded (FFPE) breast cancer tumors as well as cell line controls. The assays are based on the VeraTag technology platform and are commercially available through a central CAP-accredited clinical reference laboratory. The accuracy of H2T measurements spans a broad dynamic range (2-3 logs) as evaluated by comparison with cross-validating technologies. The measurement of H2T expression demonstrates a sensitivity approximately 7-10 times greater than conventional immunohistochemistry (IHC; HercepTest). The HERmark assay is a quantitative assay that sensitively and reproducibly measures continuous H2T and H2D protein expression levels, and therefore may have the potential to stratify patients more accurately with respect to response to HER2-targeted therapies than current methods, which rely on semiquantitative protein measurements (IHC) or on indirect assessments of gene amplification (FISH).

  4. Rigour in quantitative research.

    Science.gov (United States)

    Claydon, Leica Sarah

    2015-07-22

    This article, which forms part of the research series, addresses scientific rigour in quantitative research. It explores the basis and use of quantitative research and the nature of scientific rigour. It examines how the reader may determine whether quantitative research results are accurate, the questions that should be asked to determine accuracy, and the checklists that may be used in this process. Quantitative research has advantages in nursing, since it can provide numerical data to help answer questions encountered in everyday practice.

  5. Accurate Quantitation of Water-amide Proton Exchange Rates Using the Phase-Modulated CLEAN Chemical EXchange (CLEANEX-PM) Approach with a Fast-HSQC (FHSQC) Detection Scheme

    International Nuclear Information System (INIS)

    Hwang, Tsang-Lin; Zijl, Peter C.M. van; Mori, Susumu

    1998-01-01

    Measurement of exchange rates between water and NH protons by magnetization transfer methods is often complicated by artifacts, such as intramolecular NOEs and/or TOCSY transfer from Cα protons coincident with the water frequency, or exchange-relayed NOEs from fast-exchanging hydroxyl or amine protons. By applying the Phase-Modulated CLEAN chemical EXchange (CLEANEX-PM) spin-locking sequence, 135°(x) 120°(-x) 110°(x) 110°(-x) 120°(x) 135°(-x), during the mixing period, these artifacts can be eliminated, revealing an unambiguous water-NH exchange spectrum. In this paper, the CLEANEX-PM mixing scheme is combined with Fast-HSQC (FHSQC) detection and used to obtain accurate chemical exchange rates from initial slope analysis for a sample of 15N-labeled staphylococcal nuclease. The results are compared to rates obtained using Water EXchange filter (WEX) II-FHSQC and spin-echo-filtered WEX II-FHSQC measurements, and clearly identify the spurious NOE contributions in the exchange system.
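
    A minimal sketch of the initial-slope analysis mentioned above: for short mixing times the water-to-NH transfer buildup is approximately linear in the mixing time, with slope equal to the exchange rate. The relaxation terms of the full CLEANEX-PM fit are omitted, and the data points are mock values:

        import numpy as np

        # Initial-slope estimate of the water->NH exchange rate k: for small t_mix,
        # V(t_mix)/V0 ~ k * t_mix, so a linear fit through the early points gives k.

        t_mix = np.array([0.005, 0.010, 0.015, 0.020])    # s, short mixing times
        buildup = np.array([0.049, 0.101, 0.148, 0.205])  # V(t_mix)/V0, normalized intensities

        k, intercept = np.polyfit(t_mix, buildup, 1)      # slope of the initial regime
        print(f"exchange rate k ~ {k:.1f} s^-1")          # ~10 s^-1 for these mock data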

  6. Spectrally accurate contour dynamics

    International Nuclear Information System (INIS)

    Van Buskirk, R.D.; Marcus, P.S.

    1994-01-01

    We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise-constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use.
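
    The contour integral in question is the standard Biot-Savart reduction for a single patch with vorticity jump Δω across its boundary C; the paper evaluates a desingularized form of this integral spectrally.

        % Velocity induced by a piecewise-constant vorticity patch, reduced to a
        % line integral over its boundary contour C:
        \mathbf{u}(\mathbf{x}) = -\frac{\Delta\omega}{2\pi}
          \oint_{C} \ln\lvert \mathbf{x}-\mathbf{x}' \rvert \, d\mathbf{x}'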

  7. Accurate quantum chemical calculations

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics is discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed; these are then applied to a number of chemical and spectroscopic problems, to transition metals, and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  8. Quantitatively accurate calculations of conductance and thermopower of molecular junctions

    DEFF Research Database (Denmark)

    Markussen, Troels; Jin, Chengjun; Thygesen, Kristian Sommer

    2013-01-01

    Thermopower measurements of molecular junctions have recently gained interest as a characterization technique that supplements the more traditional conductance measurements. Here we investigate the electronic conductance and thermopower of benzenediamine (BDA) and benzenedicarbonitrile (BDCN)...

  9. Quantitative lymphography

    International Nuclear Information System (INIS)

    Mostbeck, A.; Lofferer, O.; Kahn, P.; Partsch, H.; Koehn, H.; Bialonczyk, Ch.; Koenig, B.

    1984-01-01

    Labelled colloids and macromolecules are removed lymphatically. The uptake of tracer in the regional lymph nodes is a parameter of lymphatic flow. Due to great variations in patient shape - obesity, cachexia - and accompanying variations in counting efficiency, quantitative measurements with reasonable accuracy have not been reported to date. A new approach to regional absorption correction is based on the combination of transmission and emission scans for each patient. The transmission scan is used for calculation of an absorption correction matrix. Accurate superposition of the correction matrix and the emission scan is achieved by computing the centers of gravity of point sources and - in the case of aligning opposite views - by cross correlation of binary images. In phantom studies the recovery was high (98.3%) and the coefficient of variation of repeated measurements below 1%. In patient studies a standardized stress is a prerequisite for reliable and comparable results. Discrimination between normals (14.3 ± 4.2 D%) and patients with lymphedema (2.05 ± 2.5 D%) was highly significant using prefascial lymphography and sc injection. Clearance curve analysis of the activities at the injection site, however, gave no reliable data for this purpose. In normals, the uptake in lymph nodes after im injection is one order of magnitude lower than the uptake after sc injection. The discrimination between normals and patients with postthrombotic syndrome was significant. Lymphography after ic injection was in the normal range in 2/3 of the patients with lymphedema and is therefore of no diagnostic value. The difference in uptake after ic and sc injection, demonstrated for the first time by our quantitative method, provides new insights into the pathophysiology of lymphedema and needs further investigation. (Author)

  10. Quantitative film radiography

    International Nuclear Information System (INIS)

    Devine, G.; Dobie, D.; Fugina, J.; Hernandez, J.; Logan, C.; Mohr, P.; Moss, R.; Schumacher, B.; Updike, E.; Weirup, D.

    1991-01-01

    We have developed a system of quantitative radiography in order to produce quantitative images displaying homogeneity of parts. The materials that we characterize are synthetic composites and may contain important subtle density variations not discernible by examining a raw film x-radiograph. In order to quantitatively interpret film radiographs, it is necessary to digitize, interpret, and display the images. Our integrated system of quantitative radiography displays accurate, high-resolution pseudo-color images in units of density. We characterize approximately 10,000 parts per year in hundreds of different configurations and compositions with this system. This report discusses: the method; film processor monitoring and control; verifying film and processor performance; and correction of scatter effects

  11. Quantitative EPR A Practitioners Guide

    CERN Document Server

    Eaton, Gareth R; Barr, David P; Weber, Ralph T

    2010-01-01

    This is the first comprehensive yet practical guide for people who perform quantitative EPR measurements. No existing book provides this level of practical guidance to ensure the successful use of EPR. There is a growing need in both industrial and academic research to provide meaningful and accurate quantitative EPR results. This text discusses the various sample, instrument and software related aspects required for EPR quantitation. Specific topics include: choosing a reference standard, resonator considerations (Q, B1, Bm), power saturation characteristics, sample positioning, and finally, putting all the factors together to obtain an accurate spin concentration of a sample.

  12. Accurate Evaluation of Quantum Integrals

    Science.gov (United States)

    Galant, D. C.; Goorvitch, D.; Witteborn, Fred C. (Technical Monitor)

    1995-01-01

    Combining an appropriate finite difference method with Richardson's extrapolation results in a simple, highly accurate numerical method for solving the Schrödinger equation. Important results are that error estimates are provided, and that one can extrapolate expectation values rather than the wavefunctions to obtain highly accurate expectation values. We discuss the eigenvalues, the error growth in repeated Richardson's extrapolation, and show that the expectation values calculated on a crude mesh can be extrapolated to obtain expectation values of high accuracy.
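
    A minimal sketch of the idea, assuming a second-order three-point finite-difference discretization of the 1D harmonic oscillator (exact ground-state energy 1 in these units); one Richardson step cancels the leading O(h²) error term:

        import numpy as np

        # -u'' + x^2 u = E u on [-L, L] with a 3-point stencil; the eigenvalue error
        # is O(h^2), so E_extrap = (4*E_{h/2} - E_h) / 3 removes the leading term.

        def ground_state(n_points: int, L: float = 8.0) -> float:
            x = np.linspace(-L, L, n_points)
            h = x[1] - x[0]
            main = 2.0 / h**2 + x**2                 # diagonal of -d2/dx2 + V(x)
            off = -np.ones(n_points - 1) / h**2      # off-diagonal couplings
            H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
            return np.linalg.eigvalsh(H)[0]

        e_h, e_h2 = ground_state(201), ground_state(401)   # mesh spacings h and h/2
        print(e_h, e_h2, (4 * e_h2 - e_h) / 3)             # extrapolated value is much closer to 1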

  13. Towards accurate emergency response behavior

    International Nuclear Information System (INIS)

    Sargent, T.O.

    1981-01-01

    Nuclear reactor operator emergency response behavior has persisted as a training problem through lack of information. The industry needs an accurate definition of operator behavior in adverse stress conditions, and training methods which will produce the desired behavior. Newly assembled information from fifty years of research into human behavior in both high and low stress provides a more accurate definition of appropriate operator response, and supports training methods which will produce the needed control room behavior. The research indicates that operator response in emergencies is divided into two modes, conditioned behavior and knowledge based behavior. Methods which assure accurate conditioned behavior, and provide for the recovery of knowledge based behavior, are described in detail

  14. When Is Network Lasso Accurate?

    Directory of Open Access Journals (Sweden)

    Alexander Jung

    2018-01-01

    Full Text Available The “least absolute shrinkage and selection operator” (Lasso) method has been adapted recently for network-structured datasets. In particular, this network Lasso method allows graph signals to be learned from a small number of noisy signal samples by using the total variation of a graph signal for regularization. While efficient and scalable implementations of the network Lasso are available, little is known about the conditions on the underlying network structure which ensure that the network Lasso is accurate. By leveraging concepts of compressed sensing, we address this gap and derive precise conditions on the underlying network topology and sampling set which guarantee the network Lasso for a particular loss function to deliver an accurate estimate of the entire underlying graph signal. We also quantify the error incurred by network Lasso in terms of two constants which reflect the connectivity of the sampled nodes.
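
    The optimization problem referred to here is, in the notation of Hallac, Leskovec and Boyd (2015), a sum of node-local losses plus a weighted total-variation penalty over the edges:

        % Network Lasso: local loss f_i at each node plus total-variation coupling
        % along weighted edges (V = nodes, E = edges, W_ij = edge weights):
        \min_{x_1,\dots,x_n} \; \sum_{i \in \mathcal{V}} f_i(x_i)
          \; + \; \lambda \sum_{(i,j) \in \mathcal{E}} W_{ij}\, \lVert x_i - x_j \rVert_2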

  15. The Accurate Particle Tracer Code

    OpenAIRE

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion...

  16. Accurate x-ray spectroscopy

    International Nuclear Information System (INIS)

    Deslattes, R.D.

    1987-01-01

    Heavy ion accelerators are the most flexible and readily accessible sources of highly charged ions. Ions having only one or two remaining electrons have spectra whose accurate measurement is of considerable theoretical significance. Certain features of ion production by accelerators tend to limit the accuracy which can be realized in measurement of these spectra. This report aims to provide background about spectroscopic limitations and discuss how accelerator operations may be selected to permit attaining intrinsically limited data.

  17. Quantitative research.

    Science.gov (United States)

    Watson, Roger

    2015-04-01

    This article describes the basic tenets of quantitative research. The concepts of dependent and independent variables are addressed and the concept of measurement and its associated issues, such as error, reliability and validity, are explored. Experiments and surveys – the principal research designs in quantitative research – are described and key features explained. The importance of the double-blind randomised controlled trial is emphasised, alongside the importance of longitudinal surveys, as opposed to cross-sectional surveys. Essential features of data storage are covered, with an emphasis on safe, anonymous storage. Finally, the article explores the analysis of quantitative data, considering what may be analysed and the main uses of statistics in analysis.

  18. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...

  19. Quantitative habitability.

    Science.gov (United States)

    Shock, Everett L; Holland, Melanie E

    2007-12-01

    A framework is proposed for a quantitative approach to studying habitability. Considerations of environmental supply and organismal demand of energy lead to the conclusions that power units are most appropriate and that the units for habitability become watts per organism. Extreme and plush environments are revealed to be on a habitability continuum, and extreme environments can be quantified as those where power supply only barely exceeds demand. Strategies for laboratory and field experiments are outlined that would quantify power supplies, power demands, and habitability. An example involving a comparison of various metabolisms pursued by halophiles is shown to be well on the way to a quantitative habitability analysis.

  20. Quantitative FDG in depression

    Energy Technology Data Exchange (ETDEWEB)

    Chua, P.; O`Keefe, G.J.; Egan, G.F.; Berlangieri, S.U.; Tochon-Danguy, H.J.; Mckay, W.J.; Morris, P.L.P.; Burrows, G.D. [Austin Hospital, Melbourne, VIC (Australia). Dept of Psychiatry and Centre for PET

    1998-03-01

    Full text: Studies of regional cerebral glucose metabolism (rCMRGlu) using positron emission tomography (PET) in patients with affective disorders have consistently demonstrated reduced metabolism in the frontal regions. Different quantitative and semi-quantitative rCMRGlu regions of interest (ROI) comparisons, e.g. absolute metabolic rates, ratios of dorsolateral prefrontal cortex (DLPFC) to ipsilateral hemisphere cortex, have been reported. These studies suffered from the use of a standard brain atlas to define ROIs, whereas in this case study, the individual's magnetic resonance imaging (MRI) scan was registered with the PET scan to enable accurate neuroanatomical ROI definition for the subject. The patient is a 36-year-old female with a six-week history of major depression (HAM-D = 34, MMSE = 28). A quantitative FDG PET study and an MRI scan were performed. Six MRI-guided ROIs (DLPFC, PFC, whole hemisphere) were defined. The average rCMRGlu in the DLPFC (left = 28.8 ± 5.8 μmol/100 g/min; right = 25.6 ± 7.0 μmol/100 g/min) was slightly reduced compared to the ipsilateral hemispheric rate (left = 30.4 ± 6.8 μmol/100 g/min; right = 29.5 ± 7.2 μmol/100 g/min). The ratios of DLPFC to ipsilateral hemispheric rate were close to unity (left = 0.95 ± 0.29; right = 0.87 ± 0.32). The right-to-left DLPFC ratio did not show any significant asymmetry (0.91 ± 0.30). These results do not correlate with earlier published results reporting decreased left DLPFC rates compared to right DLPFC, although our results will need to be replicated with a group of depressed patients. Registration of PET and MRI studies is necessary in ROI-based quantitative FDG PET studies to allow for the normal anatomical variation among individuals, and thus is essential for accurate comparison of rCMRGlu between individuals.

  1. Quantitative FDG in depression

    International Nuclear Information System (INIS)

    Chua, P.; O'Keefe, G.J.; Egan, G.F.; Berlangieri, S.U.; Tochon-Danguy, H.J.; Mckay, W.J.; Morris, P.L.P.; Burrows, G.D.

    1998-01-01

    Full text: Studies of regional cerebral glucose metabolism (rCMRGlu) using positron emission tomography (PET) in patients with affective disorders have consistently demonstrated reduced metabolism in the frontal regions. Different quantitative and semi-quantitative rCMRGlu regions of interest (ROI) comparisons, e.g. absolute metabolic rates, ratios of dorsolateral prefrontal cortex (DLPFC) to ipsilateral hemisphere cortex, have been reported. These studies suffered from the use of a standard brain atlas to define ROIs, whereas in this case study, the individual's magnetic resonance imaging (MRI) scan was registered with the PET scan to enable accurate neuroanatomical ROI definition for the subject. The patient is a 36-year-old female with a six-week history of major depression (HAM-D = 34, MMSE = 28). A quantitative FDG PET study and an MRI scan were performed. Six MRI-guided ROIs (DLPFC, PFC, whole hemisphere) were defined. The average rCMRGlu in the DLPFC (left = 28.8 ± 5.8 μmol/100 g/min; right = 25.6 ± 7.0 μmol/100 g/min) was slightly reduced compared to the ipsilateral hemispheric rate (left = 30.4 ± 6.8 μmol/100 g/min; right = 29.5 ± 7.2 μmol/100 g/min). The ratios of DLPFC to ipsilateral hemispheric rate were close to unity (left = 0.95 ± 0.29; right = 0.87 ± 0.32). The right-to-left DLPFC ratio did not show any significant asymmetry (0.91 ± 0.30). These results do not correlate with earlier published results reporting decreased left DLPFC rates compared to right DLPFC, although our results will need to be replicated with a group of depressed patients. Registration of PET and MRI studies is necessary in ROI-based quantitative FDG PET studies to allow for the normal anatomical variation among individuals, and thus is essential for accurate comparison of rCMRGlu between individuals

  2. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    The geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared to the conventional phase-only optimization technique (POT). ... of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical Near-Field Antenna Test Facility, it was concluded that the three latter factors are particularly important... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility...

  3. Accurate thickness measurement of graphene

    International Nuclear Information System (INIS)

    Shearer, Cameron J; Slattery, Ashley D; Stapleton, Andrew J; Shapter, Joseph G; Gibson, Christopher T

    2016-01-01

    Graphene has emerged as a material with a vast variety of applications. The electronic, optical and mechanical properties of graphene are strongly influenced by the number of layers present in a sample. As a result, the dimensional characterization of graphene films is crucial, especially with the continued development of new synthesis methods and applications. A number of techniques exist to determine the thickness of graphene films including optical contrast, Raman scattering and scanning probe microscopy techniques. Atomic force microscopy (AFM), in particular, is used extensively since it provides three-dimensional images that enable the measurement of the lateral dimensions of graphene films as well as the thickness, and by extension the number of layers present. However, in the literature AFM has proven to be inaccurate with a wide range of measured values for single layer graphene thickness reported (between 0.4 and 1.7 nm). This discrepancy has been attributed to tip-surface interactions, image feedback settings and surface chemistry. In this work, we use standard and carbon nanotube modified AFM probes and a relatively new AFM imaging mode known as PeakForce tapping mode to establish a protocol that will allow users to accurately determine the thickness of graphene films. In particular, the error in measuring the first layer is reduced from 0.1–1.3 nm to 0.1–0.3 nm. Furthermore, in the process we establish that the graphene-substrate adsorbate layer and imaging force, in particular the pressure the tip exerts on the surface, are crucial components in the accurate measurement of graphene using AFM. These findings can be applied to other 2D materials. (paper)

  4. Quantitative Finance

    Science.gov (United States)

    James, Jessica

    2017-01-01

    Quantitative finance is a field that has risen to prominence over the last few decades. It encompasses the complex models and calculations that value financial contracts, particularly those which reference events in the future, and apply probabilities to these events. While adding greatly to the flexibility of the market available to corporations and investors, it has also been blamed for worsening the impact of financial crises. But what exactly does quantitative finance encompass, and where did these ideas and models originate? We show that the mathematics behind finance and behind games of chance have tracked each other closely over the centuries and that many well-known physicists and mathematicians have contributed to the field.

  5. The accurate particle tracer code

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and at the same time improve the confinement of the energetic runaway beam.

  6. Validating quantitative precipitation forecast for the Flood ...

    Indian Academy of Sciences (India)

    In order to issue an accurate warning for flood, a better or appropriate quantitative forecasting of precipitation is required. In view of this, the present study intends to validate the quantitative precipitation forecast (QPF) issued during southwest monsoon season for six river catchments (basin) under the flood meteorological ...

  7. Quantitative ion implantation

    International Nuclear Information System (INIS)

    Gries, W.H.

    1976-06-01

    This is a report of the study of the implantation of heavy ions at medium keV-energies into electrically conducting mono-elemental solids, at ion doses too small to cause significant loss of the implanted ions by resputtering. The study has been undertaken to investigate the possibility of accurate portioning of matter in submicrogram quantities, with some specific applications in mind. The problem is extensively investigated both on a theoretical level and in practice. A mathematical model is developed for calculating the loss of implanted ions by resputtering as a function of the implanted ion dose and the sputtering yield. Numerical data are produced therefrom which permit a good order-of-magnitude estimate of the loss for any ion/solid combination in which the ions are heavier than the solid atoms, and for any ion energy from 10 to 300 keV. The implanted ion dose is measured by integration of the ion beam current, and equipment and techniques are described which make possible the accurate integration of an ion current in an electromagnetic isotope separator. The methods are applied to two sample cases, one being a stable isotope, the other a radioisotope. In both cases independent methods are used to show that the implantation is indeed quantitative, as predicted. At the same time the sample cases are used to demonstrate two possible applications for quantitative ion implantation, viz. firstly for the manufacture of calibration standards for instrumental micromethods of elemental trace analysis in metals, and secondly for the determination of the half-lives of long-lived radioisotopes by a specific activity method. It is concluded that the present study has advanced quantitative ion implantation to the state where it can be successfully applied to the solution of problems in other fields
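
    The portioning arithmetic at the heart of the method is simple charge-to-atoms accounting. A sketch of that calculation (illustrative function; resputtering loss, which the report models separately, is neglected, and singly charged ions are assumed):

        # Dose portioning by beam-current integration: the number of implanted ions
        # equals integrated charge / (charge state * e); mass follows from molar mass.

        E_CHARGE = 1.602176634e-19   # elementary charge, C
        AVOGADRO = 6.02214076e23     # Avogadro constant, 1/mol

        def implanted_mass_ng(charge_coulomb: float, molar_mass_g: float,
                              charge_state: int = 1) -> float:
            n_ions = charge_coulomb / (charge_state * E_CHARGE)
            return n_ions / AVOGADRO * molar_mass_g * 1e9

        # e.g. 10 nA of Cs+ integrated for 100 s = 1e-6 C -> ~1.4 ng of 133Cs
        print(implanted_mass_ng(1e-6, 132.9))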

  8. Quantitative radiography

    International Nuclear Information System (INIS)

    Brase, J.M.; Martz, H.E.; Waltjen, K.E.; Hurd, R.L.; Wieting, M.G.

    1986-01-01

    Radiographic techniques have been used in nondestructive evaluation primarily to develop qualitative information (i.e., defect detection). This project applies and extends the techniques developed in medical x-ray imaging, particularly computed tomography (CT), to develop quantitative information (both spatial dimensions and material quantities) on the three-dimensional (3D) structure of solids. Accomplishments in FY 86 include (1) improvements in experimental equipment - an improved microfocus system that will give 20-μm resolution and has potential for increased imaging speed, and (2) development of a simple new technique for displaying 3D images so as to clearly show the structure of the object. Image reconstruction and data analysis for a series of synchrotron CT experiments conducted by LLNL's Chemistry Department has begun

  9. Quantitative Thermochronology

    Science.gov (United States)

    Braun, Jean; van der Beek, Peter; Batt, Geoffrey

    2006-05-01

    Thermochronology, the study of the thermal history of rocks, enables us to quantify the nature and timing of tectonic processes. Quantitative Thermochronology is a robust review of isotopic ages, and presents a range of numerical modeling techniques to allow the physical implications of isotopic age data to be explored. The authors provide analytical, semi-analytical, and numerical solutions to the heat transfer equation in a range of tectonic settings and under varying boundary conditions. They then illustrate their modeling approach built around a large number of case studies. The benefits of different thermochronological techniques are also described. Computer programs on an accompanying website at www.cambridge.org/9780521830577 are introduced through the text and provide a means of solving the heat transport equation in the deforming Earth to predict the ages of rocks and compare them directly to geological and geochronological data. Several short tutorials, with hints and solutions, are also included. Numerous case studies help geologists to interpret age data and relate it to Earth processes. Essential background material aids understanding and use of thermochronological data. The book provides a thorough treatise on numerical modeling of heat transport in the Earth's crust, and is supported by a website hosting relevant computer programs and colour slides of figures from the book for use in teaching.

  10. Quantitative inspection by computerized tomography

    International Nuclear Information System (INIS)

    Lopes, R.T.; Assis, J.T. de; Jesus, E.F.O. de

    1989-01-01

    Computerized tomography (CT) is a method of nondestructive testing that furnishes quantitative information, permitting the detection and accurate localization of defects, internal dimension measurement, and the measurement and charting of the density distribution. CT technology is very versatile, presenting no restrictions with respect to the form, size or composition of the object. A tomographic system, designed and constructed in our laboratory, is presented. The applications and limitations of this system, illustrated by tomographic images, are shown. (V.R.B.)

  11. The quantitative Morse theorem

    OpenAIRE

    Loi, Ta Le; Phien, Phan

    2013-01-01

    In this paper, we give a proof of the quantitative Morse theorem stated by Y. Yomdin in [Y1]. The proof is based on the quantitative Sard theorem, the quantitative inverse function theorem and the quantitative Morse lemma.

  12. Quantitative imaging methods in osteoporosis.

    Science.gov (United States)

    Oei, Ling; Koromani, Fjorda; Rivadeneira, Fernando; Zillikens, M Carola; Oei, Edwin H G

    2016-12-01

    Osteoporosis is characterized by a decreased bone mass and quality resulting in an increased fracture risk. Quantitative imaging methods are critical in the diagnosis and follow-up of treatment effects in osteoporosis. Prior radiographic vertebral fractures and bone mineral density (BMD) as a quantitative parameter derived from dual-energy X-ray absorptiometry (DXA) are among the strongest known predictors of future osteoporotic fractures. Therefore, current clinical decision making relies heavily on accurate assessment of these imaging features. Further, novel quantitative techniques are being developed to appraise additional characteristics of osteoporosis including three-dimensional bone architecture with quantitative computed tomography (QCT). Dedicated high-resolution (HR) CT equipment is available to enhance image quality. At the other end of the spectrum, by utilizing post-processing techniques such as the trabecular bone score (TBS) information on three-dimensional architecture can be derived from DXA images. Further developments in magnetic resonance imaging (MRI) seem promising to not only capture bone micro-architecture but also characterize processes at the molecular level. This review provides an overview of various quantitative imaging techniques based on different radiological modalities utilized in clinical osteoporosis care and research.

  13. Quantitative Decision Support Requires Quantitative User Guidance

    Science.gov (United States)

    Smith, L. A.

    2009-12-01

    Is it conceivable that models run on 2007 computer hardware could provide robust and credible probabilistic information for decision support and user guidance at the ZIP code level for sub-daily meteorological events in 2060? In 2090? Retrospectively, how informative would output from today’s models have proven in 2003? Or in the 1930s? Consultancies in the United Kingdom, including the Met Office, are offering services to “future-proof” their customers from climate change. How is a US or European based user or policy maker to determine the extent to which exciting new Bayesian methods are relevant here, or when a commercial supplier is vastly overselling the insights of today’s climate science? How are policy makers and academic economists to make the closely related decisions facing them? How can we communicate deep uncertainty in the future at small length-scales without undermining the firm foundation established by climate science regarding global trends? Three distinct aspects of the communication of the uses of climate model output, targeting users and policy makers as well as other specialist adaptation scientists, are discussed. First, a brief scientific evaluation of the length and time scales at which climate model output is likely to become uninformative is provided, including a note on the applicability of the latest Bayesian methodology to current state-of-the-art general circulation model output. Second, a critical evaluation of the language often employed in communication of climate model output: language which accurately states that models are “better”, have “improved”, and now “include” and “simulate” relevant meteorological processes, without clearly identifying where the current information is thought to be uninformative and misleading, both for the current climate and as a function of the state of each climate simulation. And third, a general approach for evaluating the relevance of quantitative climate model output.

  14. Comparative analysis of core genome MLST and SNP typing within a European Salmonella serovar Enteritidis outbreak.

    Science.gov (United States)

    Pearce, Madison E; Alikhan, Nabil-Fareed; Dallman, Timothy J; Zhou, Zhemin; Grant, Kathie; Maiden, Martin C J

    2018-06-02

    Multi-country outbreaks of foodborne bacterial disease present challenges in their detection, tracking, and notification. As food is increasingly distributed across borders, such outbreaks are becoming more common. This increases the need for high-resolution, accessible, and replicable isolate typing schemes. Here we evaluate a core genome multilocus sequence typing (cgMLST) scheme for the high-resolution reproducible typing of Salmonella enterica (S. enterica) isolates, by its application to a large European outbreak of S. enterica serovar Enteritidis. This outbreak had been extensively characterised using single nucleotide polymorphism (SNP)-based approaches. The cgMLST analysis was congruent with the original SNP-based analysis, the epidemiological data, and whole genome MLST (wgMLST) analysis. Combination of the cgMLST and epidemiological data confirmed that the genetic diversity among the isolates predated the outbreak, and was likely present at the infection source. There was consequently no link between country of isolation and genetic diversity, but the cgMLST clusters were congruent with date of isolation. Furthermore, comparison with publicly available Enteritidis isolate data demonstrated that the cgMLST scheme presented is highly scalable, enabling outbreaks to be contextualised within the Salmonella genus. The cgMLST scheme is therefore shown to be a standardised and scalable typing method, which allows Salmonella outbreaks to be analysed and compared across laboratories and jurisdictions. Copyright © 2018. Published by Elsevier B.V.
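
    Under the hood, cgMLST comparisons reduce to counting allele differences across the core loci shared by two isolates. A toy sketch with five loci instead of the roughly 3,000 in real S. enterica schemes; the isolate names and profiles are invented:

        from itertools import combinations

        # cgMLST distance: number of core loci at which two allele profiles differ;
        # loci missing in either isolate (coded 0 here) are excluded pairwise.

        def cgmlst_distance(a: list[int], b: list[int]) -> int:
            return sum(1 for x, y in zip(a, b) if x and y and x != y)

        profiles = {
            "iso1": [1, 4, 2, 7, 3],
            "iso2": [1, 4, 2, 9, 3],
            "iso3": [2, 4, 0, 9, 3],   # 0 = locus missing in this assembly
        }
        for (n1, p1), (n2, p2) in combinations(profiles.items(), 2):
            print(n1, n2, cgmlst_distance(p1, p2))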

  15. A sensitive issue: Pyrosequencing as a valuable forensic SNP typing platform

    DEFF Research Database (Denmark)

    Harrison, C.; Musgrave-Brown, E.; Bender, K.

    2006-01-01

    Analysing minute amounts of DNA is a routine challenge in forensics, in part due to poor instrument sensitivity and the consequent inability to obtain results from forensic samples. In this study, the sensitivity of the Pyrosequencing method is investigated using varying concentrations of DNA and f...

  16. Equipment upgrade - Accurate positioning of ion chambers

    International Nuclear Information System (INIS)

    Doane, Harry J.; Nelson, George W.

    1990-01-01

    Five adjustable clamps were made to firmly support and accurately position the ion chambers that provide signals to the power channels for the University of Arizona TRIGA reactor. The design requirements, fabrication procedure and installation are described.

  17. Accurate late gadolinium enhancement prediction by early T1- based quantitative synthetic mapping

    Energy Technology Data Exchange (ETDEWEB)

    Dijk, Randy van; Harst, Pim van der [University of Groningen, University Medical Centre Groningen, Centre for Medical Imaging, Groningen (Netherlands); University of Groningen, University Medical Centre Groningen, Department of Cardiology, Groningen (Netherlands); Kuijpers, Dirkjan [University of Groningen, University Medical Centre Groningen, Centre for Medical Imaging, Groningen (Netherlands); Department of Cardiovascular Imaging HMC-Bronovo, The Hague (Netherlands); Kaandorp, Theodorus A.M.; Dijkman, Paul R.M. van [Department of Cardiovascular Imaging HMC-Bronovo, The Hague (Netherlands); Vliegenthart, Rozemarijn [University of Groningen, University Medical Center Groningen, Department of Radiology, Groningen (Netherlands); Oudkerk, Matthijs [University of Groningen, University Medical Centre Groningen, Centre for Medical Imaging, Groningen (Netherlands); University Medical Center Groningen, Center for Medical Imaging, Groningen (Netherlands)

    2018-02-15

    Early synthetic gadolinium enhancement (ESGE) imaging from post-contrast T1 mapping after adenosine stress-perfusion cardiac magnetic resonance (CMR) was compared to conventional late gadolinium enhancement (LGE) imaging for assessing myocardial scar. Two hundred fourteen consecutive patients suspected of myocardial ischaemia were referred for stress-perfusion CMR. Myocardial infarct volume was quantified on a per-subsegment basis in both synthetic (2-3 min post-gadolinium) and conventional (9 min post-gadolinium) images by two independent observers. Sensitivity, specificity, PPV and NPV were calculated on a per-patient and per-subsegment basis. Both techniques detected 39 gadolinium enhancement areas in 23 patients. The median amount of scar was 2.0 (1.0-3.1) g in ESGE imaging and 2.2 (1.1-3.1) g in LGE imaging (p=0.39). Excellent correlation (r=0.997) and agreement (mean absolute difference: -0.028±0.289 ml) were found between ESGE and LGE images. Sensitivity, specificity, PPV and NPV of ESGE imaging were 96% (78.9-99.9), 99% (97.1-100.0), 96% (76.5-99.4) and 99.5% (96.6-99.9) in patient-based and 99% (94.5-100.0), 100% (99.9-100.0), 97.0% (91.3-99.0) and 100.0% (99.8-100.0) in subsegment-based analysis. ESGE based on post-contrast T1 mapping after adenosine stress-perfusion CMR imaging shows excellent agreement with conventional LGE imaging for assessing myocardial scar, and can substantially shorten clinical acquisition time. (orig.)

  18. Computationally efficient and quantitatively accurate multiscale simulation of solid-solution strengthening by ab initio calculation

    Czech Academy of Sciences Publication Activity Database

    Ma, D.; Friák, Martin; von Pezold, J.; Raabe, D.; Neugebauer, J.

    2015-01-01

    Vol. 85, FEB (2015), pp. 53-66 ISSN 1359-6454 Institutional support: RVO:68081723 Keywords: Solid-solution strengthening * DFT * Peierls–Nabarro model * Ab initio * Al alloys Subject RIV: BM - Solid Matter Physics; Magnetism Impact factor: 5.058, year: 2015

  19. Accurate late gadolinium enhancement prediction by early T1- based quantitative synthetic mapping

    International Nuclear Information System (INIS)

    Dijk, Randy van; Harst, Pim van der; Kuijpers, Dirkjan; Kaandorp, Theodorus A.M.; Dijkman, Paul R.M. van; Vliegenthart, Rozemarijn; Oudkerk, Matthijs

    2018-01-01

    Early synthetic gadolinium enhancement (ESGE) imaging from post-contrast T1 mapping after adenosine stress-perfusion cardiac magnetic resonance (CMR) was compared to conventional late gadolinium enhancement (LGE) imaging for assessing myocardial scar. Two hundred fourteen consecutive patients suspected of myocardial ischaemia were referred for stress-perfusion CMR. Myocardial infarct volume was quantified on a per-subsegment basis in both synthetic (2-3 min post-gadolinium) and conventional (9 min post-gadolinium) images by two independent observers. Sensitivity, specificity, PPV and NPV were calculated on a per-patient and per-subsegment basis. Both techniques detected 39 gadolinium enhancement areas in 23 patients. The median amount of scar was 2.0 (1.0-3.1) g in ESGE imaging and 2.2 (1.1-3.1) g in LGE imaging (p=0.39). Excellent correlation (r=0.997) and agreement (mean absolute difference: -0.028±0.289 ml) were found between ESGE and LGE images. Sensitivity, specificity, PPV and NPV of ESGE imaging were 96% (78.9-99.9), 99% (97.1-100.0), 96% (76.5-99.4) and 99.5% (96.6-99.9) in patient-based and 99% (94.5-100.0), 100% (99.9-100.0), 97.0% (91.3-99.0) and 100.0% (99.8-100.0) in subsegment-based analysis. ESGE based on post-contrast T1 mapping after adenosine stress-perfusion CMR imaging shows excellent agreement with conventional LGE imaging for assessing myocardial scar, and can substantially shorten clinical acquisition time. (orig.)

  20. Quantitative chemical shift-encoded MRI is an accurate method to quantify hepatic steatosis.

    Science.gov (United States)

    Kühn, Jens-Peter; Hernando, Diego; Mensel, Birger; Krüger, Paul C; Ittermann, Till; Mayerle, Julia; Hosten, Norbert; Reeder, Scott B

    2014-06-01

    To compare the accuracy of liver fat quantification using a three-echo chemical shift-encoded magnetic resonance imaging (MRI) technique without and with correction for confounders, with spectroscopy (MRS) as the reference standard. Fifty patients (23 women, mean age 56.6 ± 13.2 years) with fatty liver disease were enrolled. Patients underwent T2-corrected single-voxel MRS and a three-echo chemical shift-encoded gradient echo (GRE) sequence at 3.0T. MRI fat fraction (FF) was calculated without and with T2* and T1 correction and multispectral modeling of fat, and compared with MRS-FF using linear regression. The spectroscopic range of liver fat was 0.11%-38.7%. Excellent correlation between MRS-FF and MRI-FF was observed when using T2* correction (R(2) = 0.96). With use of T2* correction alone, the slope was significantly different from 1 (1.16 ± 0.03, P < 0.001). When T1 correction and spectral modeling of fat were addressed, the results showed equivalence between fat quantification using MRI and MRS (slope: 1.02 ± 0.03, P = 0.528; intercept: 0.26% ± 0.46%, P = 0.572). Complex three-echo chemical shift-encoded MRI is equivalent to MRS for quantifying liver fat, but only with correction for T2* decay and T1 recovery and use of spectral modeling of fat. This is necessary because T2* decay, T1 recovery, and the multispectral complexity of fat are processes which may otherwise bias the measurements. Copyright © 2013 Wiley Periodicals, Inc.
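
    The quantity being compared is the fat fraction formed from the separated water (W) and fat (F) signal amplitudes; in generic form (the paper's exact signal model is not reproduced here),

        \mathrm{FF} = \frac{F}{W + F}

    Uncorrected T2* decay and T1 recovery weight W and F unequally, which is why the regression slope deviates from 1 until these confounders are modeled.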

  1. Image interpolation allows accurate quantitative bone morphometry in registered micro-computed tomography scans.

    Science.gov (United States)

    Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph

    2014-04-01

    Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation, which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to evaluate the effect of different interpolator schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolator schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent from the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
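
    The three schemes compared above map onto the spline order argument of scipy.ndimage; a minimal sketch (not the authors' pipeline) that probes the round-trip interpolation error for a hypothetical sub-voxel misalignment:

        import numpy as np
        from scipy import ndimage

        volume = np.random.default_rng(0).random((64, 64, 64))  # stand-in for a scan
        shift = (0.3, -0.7, 0.5)                                # assumed sub-voxel motion

        for order, name in [(0, "nearest neighbour"), (1, "tri-linear"), (3, "cubic B-spline")]:
            fwd = ndimage.shift(volume, shift, order=order)       # resample once
            back = ndimage.shift(fwd, [-s for s in shift], order=order)  # and back
            core = (slice(4, -4),) * 3                            # skip boundary voxels
            err = 100 * np.abs(back[core] - volume[core]).mean() / volume[core].mean()
            print(f"{name}: {err:.2f}% mean round-trip error")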

  2. Guidance to Achieve Accurate Aggregate Quantitation in Biopharmaceuticals by SV-AUC.

    Science.gov (United States)

    Arthur, Kelly K; Kendrick, Brent S; Gabrielson, John P

    2015-01-01

    The levels and types of aggregates present in protein biopharmaceuticals must be assessed during all stages of product development, manufacturing, and storage of the finished product. Routine monitoring of aggregate levels in biopharmaceuticals is typically achieved by size exclusion chromatography (SEC) due to its high precision, speed, robustness, and simplicity to operate. However, SEC is error prone and requires careful method development to ensure accuracy of reported aggregate levels. Sedimentation velocity analytical ultracentrifugation (SV-AUC) is an orthogonal technique that can be used to measure protein aggregation without many of the potential inaccuracies of SEC. In this chapter, we discuss applications of SV-AUC during biopharmaceutical development and how characteristics of the technique make it better suited for some applications than others. We then discuss the elements of a comprehensive analytical control strategy for SV-AUC. Successful implementation of these analytical control elements ensures that SV-AUC provides continued value over the long time frames necessary to bring biopharmaceuticals to market. © 2015 Elsevier Inc. All rights reserved.

  3. Whole-genome modeling accurately predicts quantitative traits, as revealed in plants.

    OpenAIRE

    Tatarinova, Tatiana; Shin, Min-Gyoung; Marjoram, Paul; Nuzhdin, Sergey; Triska, Martin; Rickauer, Martina; Nikolsky, Yuri; Mazurier, Melanie; Gentzbittel, Laurent; Ben, Cecile

    2016-01-01

    Many adaptive events in natural populations, as well as response to artificial selection, are caused by polygenic action. Under selective pressure, the adaptive traits can quickly respond via small allele frequency shifts spread across numerous loci. We hypothesize that a large proportion of current phenotypic variation between individuals may be best explained by population admixture. We thus consider the complete, genome-wide universe of genetic variability, spread across several ancestral ...

  4. Accurate Virus Quantitation Using a Scanning Transmission Electron Microscopy (STEM) Detector in a Scanning Electron Microscope

    Science.gov (United States)

    2017-06-29

    Three rinse cycles were performed by aspirating and immediately dispensing 40 µl of deionized (dI) water into a waste container. The capsule was then removed from the pipette, its lid opened, and the contents allowed to air dry. Sample distribution and different purification steps were varied to decrease the sedimentation that interfered with imaging and counting.

  5. Using an Educational Electronic Documentation System to Help Nursing Students Accurately Identify Nursing Diagnoses

    Science.gov (United States)

    Pobocik, Tamara J.

    2013-01-01

    The use of technology and electronic medical records in healthcare has exponentially increased. This quantitative research project used a pretest/posttest design and reviewed how an educational electronic documentation system helped nursing students to identify the accurate "related to" statement of the nursing diagnosis for the patient in the case…

  6. More accurate picture of human body organs

    International Nuclear Information System (INIS)

    Kolar, J.

    1985-01-01

    Computerized tomography and nuclear magnetic resonance tomography (NMRT) are revolutionary contributions to radiodiagnosis because they make it possible to obtain a more accurate image of human body organs. The principles of both methods are described. Attention is mainly devoted to NMRT, which has been in clinical use for only three years. It does not burden the organism with ionizing radiation. (Ha)

  7. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Full Text Available Abstract Background Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.

  8. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.

    1999-01-01

    Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency

  9. Accurate activity recognition in a home setting

    NARCIS (Netherlands)

    van Kasteren, T.; Noulas, A.; Englebienne, G.; Kröse, B.

    2008-01-01

    A sensor system capable of automatically recognizing activities would allow many potential ubiquitous applications. In this paper, we present an easy to install sensor network and an accurate but inexpensive annotation method. A recorded dataset consisting of 28 days of sensor data and its

  10. Highly accurate surface maps from profilometer measurements

    Science.gov (United States)

    Medicus, Kate M.; Nelson, Jessica D.; Mandina, Mike P.

    2013-04-01

    Many aspheres and free-form optical surfaces are measured using a single-line-trace profilometer, which is limiting because accurate 3D corrections are not possible with a single trace. We show a method to produce an accurate, fully 2.5D surface height map when measuring a surface with a profilometer using only 6 traces and without expensive hardware. The 6 traces are taken at varying angular positions of the lens, rotating the part between each trace. The output height map contains only low-order form error, the first 36 Zernike terms. The accuracy of the height map is ±10% of the actual Zernike values and within ±3% of the actual peak-to-valley value. The calculated Zernike values are affected by errors in the angular positioning, by the centering of the lens, and, to a small extent, by choices made in the processing algorithm. We have found that the angular positioning of the part should be better than 1°, which is achievable with typical hardware. The centering of the lens is essential to achieving accurate measurements. The part must be centered to within 0.5% of the diameter to achieve accurate results. This value is achievable with care, with an indicator, but the part must be edged to a clean diameter.

  11. Accurate and efficient spin integration for particle accelerators

    International Nuclear Information System (INIS)

    Abell, Dan T.; Meiser, Dominic; Ranjbar, Vahid H.; Barber, Desmond P.

    2015-01-01

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.

  12. Accurate and efficient spin integration for particle accelerators

    Energy Technology Data Exchange (ETDEWEB)

    Abell, Dan T.; Meiser, Dominic [Tech-X Corporation, Boulder, CO (United States); Ranjbar, Vahid H. [Brookhaven National Laboratory, Upton, NY (United States); Barber, Desmond P. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany)

    2015-01-15

    Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code GPUSPINTRACK. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.

  13. Accurate and efficient spin integration for particle accelerators

    Directory of Open Access Journals (Sweden)

    Dan T. Abell

    2015-02-01

    Full Text Available Accurate spin tracking is a valuable tool for understanding spin dynamics in particle accelerators and can help improve the performance of an accelerator. In this paper, we present a detailed discussion of the integrators in the spin tracking code gpuSpinTrack. We have implemented orbital integrators based on drift-kick, bend-kick, and matrix-kick splits. On top of the orbital integrators, we have implemented various integrators for the spin motion. These integrators use quaternions and Romberg quadratures to accelerate both the computation and the convergence of spin rotations. We evaluate their performance and accuracy in quantitative detail for individual elements as well as for the entire RHIC lattice. We exploit the inherently data-parallel nature of spin tracking to accelerate our algorithms on graphics processing units.
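
    The quaternion bookkeeping these three listings mention is compact to illustrate. A minimal sketch, not the gpuSpinTrack code; the element rotation angles and axes below are invented:

        import numpy as np

        def rot_quat(axis, angle):
            """Unit quaternion (w, x, y, z) for a rotation by angle about axis."""
            axis = np.asarray(axis, float)
            axis /= np.linalg.norm(axis)
            return np.concatenate(([np.cos(angle / 2)], np.sin(angle / 2) * axis))

        def quat_mul(p, q):
            """Hamilton product: the rotation q followed by p."""
            pw, pv = p[0], p[1:]
            qw, qv = q[0], q[1:]
            return np.concatenate(([pw * qw - pv @ qv],
                                   pw * qv + qw * pv + np.cross(pv, qv)))

        def rotate(q, s):
            """Rotate a spin vector s by the unit quaternion q."""
            w, v = q[0], q[1:]
            return s + 2 * np.cross(v, np.cross(v, s) + w * s)

        # Spin precession through two lattice elements, composed without the
        # orthogonality drift that accumulating 3x3 matrices can suffer:
        q1 = rot_quat([0, 1, 0], 0.01)    # small rotation about the vertical axis
        q2 = rot_quat([1, 0, 0], 0.002)   # small rotation about the horizontal axis
        print(rotate(quat_mul(q2, q1), np.array([0.0, 0.0, 1.0])))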

  14. Current status and assessment of quantitative and qualitative one leg ...

    African Journals Online (AJOL)

    ... of only a quantitative assessment. These findings indicate that, when evaluating one-leg balance in children aged 3-6 years, quantitative and qualitative assessments should be used in combination to ensure a more accurate assessment. (S. African J. for Research in Sport, Physical Ed. and Recreation: 2001 ...

  15. Quantitation of Proteinuria in Women With Pregnancy Induced ...

    African Journals Online (AJOL)

    This creates the need for a more accurate method for early detection and quantitation of proteinuria. Objective: To compare the accuracy of the Spot urine Protein to Creatinine ratio with that of Dipstick Tests in the quantitation of proteinuria in Nigerian women with Pregnancy Induced Hypertension. Methods: A cross-sectional ...

  16. Development and applications of quantitative NMR spectroscopy

    International Nuclear Information System (INIS)

    Yamazaki, Taichi

    2016-01-01

    Recently, quantitative NMR spectroscopy has attracted attention as an analytical method that can readily secure traceability to the SI unit system, and discussions about its accuracy and uncertainty have also begun. This paper focuses on the literature on advances in quantitative NMR spectroscopy reported between 2009 and 2016, and introduces both NMR measurement conditions and actual analysis cases in quantitative NMR. Quantitative NMR spectroscopy using an internal reference method generally enables accurate quantitative analysis in a quick and versatile way, and it can achieve precision sufficient for the evaluation of pure substances and standard solutions. Since the external reference method easily avoids contamination of the sample and loss of sample, many reported cases concern the quantitative analysis of biologically related samples and highly scarce natural products whose NMR spectra are complicated. In terms of precision, the internal reference method is superior. As quantitative NMR spectroscopy spreads widely, discussions are also progressing on how to adopt this analytical method as an official method in various countries around the world. In Japan, the method is listed in the Pharmacopoeia and the Japanese Standard of Food Additives, and it is also used as an official method for purity evaluation. In the future, this method is expected to spread as a general-purpose analytical method that can ensure traceability to the SI unit system. (A.O.)
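
    The internal reference method mentioned here rests on one relation: signal areas scale with the number of contributing nuclei. In generic notation (not taken from the review), the purity P_x of an analyte weighed together with an internal standard is

        P_x = \frac{I_x}{I_{std}} \cdot \frac{N_{std}}{N_x} \cdot \frac{M_x}{M_{std}} \cdot \frac{m_{std}}{m_x} \cdot P_{std}

    where I are integrated signal areas, N the numbers of nuclei behind each signal, M molar masses, and m the weighed masses of analyte and standard.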

  17. Quantitative investment analysis

    CERN Document Server

    DeFusco, Richard

    2007-01-01

    In the "Second Edition" of "Quantitative Investment Analysis," financial experts Richard DeFusco, Dennis McLeavey, Jerald Pinto, and David Runkle outline the tools and techniques needed to understand and apply quantitative methods to today's investment process.

  18. Accurate guitar tuning by cochlear implant musicians.

    Directory of Open Access Journals (Sweden)

    Thomas Lu

    Full Text Available Modern cochlear implant (CI) users understand speech but find difficulty in music appreciation due to poor pitch perception. Still, some deaf musicians continue to perform with their CI. Here we show unexpected results that CI musicians can reliably tune a guitar by CI alone and, under controlled conditions, match simultaneously presented tones to <0.5 Hz. One subject had normal contralateral hearing and produced more accurate tuning with his CI than with his normal ear. To understand these counterintuitive findings, we presented tones sequentially and found that tuning error was larger, at ∼30 Hz, for both subjects. A third subject, a non-musician CI user with normal contralateral hearing, showed similar trends in performance between CI and normal-hearing ears but with less precision. This difference, along with electric analysis, showed that accurate tuning was achieved by listening to beats rather than discriminating pitch, effectively turning a spectral task into a temporal discrimination task.
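
    Tuning by beats exploits the envelope fluctuation at the difference frequency of two near-identical tones; nulling the fluctuation nulls the detuning. A tiny illustration with invented numbers, not the study's stimuli:

        # Two tones at f_ref and f_string beat at |f_string - f_ref| Hz;
        # 0.4 Hz is slow enough to hear and null even with poor pitch perception.
        f_ref, f_string = 110.0, 110.4    # A2 reference vs a slightly sharp string
        print(f"beat rate: {abs(f_string - f_ref):.1f} Hz")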

  19. Accurate estimation of indoor travel times

    DEFF Research Database (Denmark)

    Prentow, Thor Siiger; Blunck, Henrik; Stisen, Allan

    2014-01-01

    The ability to accurately estimate indoor travel times is crucial for enabling improvements within application areas such as indoor navigation, logistics for mobile workers, and facility management. In this paper, we study the challenges inherent in indoor travel time estimation, and we propose the InTraTime method for accurately estimating indoor travel times via mining of historical and real-time indoor position traces. The method learns during operation both travel routes, travel times and their respective likelihood, both for routes traveled as well as for sub-routes thereof. InTraTime allows one to specify temporal and other query parameters, such as time-of-day, day-of-week or the identity of the traveling individual. As input the method is designed to take generic position traces and is thus interoperable with a variety of indoor positioning systems. The method's advantages include

  20. On accurate determination of contact angle

    Science.gov (United States)

    Concus, P.; Finn, R.

    1992-01-01

    Methods are proposed that exploit a microgravity environment to obtain highly accurate measurement of contact angle. These methods, which are based on our earlier mathematical results, do not require detailed measurement of a liquid free-surface, as they incorporate discontinuous or nearly-discontinuous behavior of the liquid bulk in certain container geometries. Physical testing is planned in the forthcoming IML-2 space flight and in related preparatory ground-based experiments.

  1. Software Estimation: Developing an Accurate, Reliable Method

    Science.gov (United States)

    2011-08-01

    ... based and size-based estimates is able to accurately plan, launch, and execute on schedule. (Bob Sinclair, NAWCWD; Chris Rickets, NAWCWD; Brad Hodgins, NAWCWD)

  2. Highly Accurate Prediction of Jobs Runtime Classes

    OpenAIRE

    Reiner-Benaim, Anat; Grabarnick, Anna; Shmueli, Edi

    2016-01-01

    Separating the short jobs from the long is a known technique to improve scheduling performance. In this paper we describe a method we developed for accurately predicting the runtimes classes of the jobs to enable this separation. Our method uses the fact that the runtimes can be represented as a mixture of overlapping Gaussian distributions, in order to train a CART classifier to provide the prediction. The threshold that separates the short jobs from the long jobs is determined during the ev...

  3. Accurate multiplicity scaling in isotopically conjugate reactions

    International Nuclear Information System (INIS)

    Golokhvastov, A.I.

    1989-01-01

    The generation of accurate scaling of multiplicity distributions is presented. The distributions of π- mesons (negative particles) and π+ mesons in different nucleon-nucleon interactions (PP, NP and NN) are described by the same universal function Ψ(z) and the same energy dependence of the scale parameter, which determines the stretching factor for the unit function Ψ(z) to obtain the desired multiplicity distribution. 29 refs.; 6 figs
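
    In the classic Koba-Nielsen-Olesen form (which the paper refines into an exact statement), multiplicity scaling means the distributions collapse onto a single curve,

        P_n = \frac{1}{\lambda(s)}\,\Psi\!\left(\frac{n}{\lambda(s)}\right),

    where the scale parameter \lambda(s), often taken as the mean multiplicity \langle n \rangle, carries the whole energy dependence. This is one reading of the abstract's "stretching factor", not a formula quoted from the paper.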

  4. Mental models accurately predict emotion transitions.

    Science.gov (United States)

    Thornton, Mark A; Tamir, Diana I

    2017-06-06

    Successful social interactions depend on people's ability to predict others' future actions and emotions. People possess many mechanisms for perceiving others' current emotional states, but how might they use this information to predict others' future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others' emotional dynamics. People could then use these mental models of emotion transitions to predict others' future emotions from currently observable emotions. To test this hypothesis, studies 1-3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants' ratings of emotion transitions predicted others' experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation-valence, social impact, rationality, and human mind-inform participants' mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants' accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone.

  5. Mental models accurately predict emotion transitions

    Science.gov (United States)

    Thornton, Mark A.; Tamir, Diana I.

    2017-01-01

    Successful social interactions depend on people’s ability to predict others’ future actions and emotions. People possess many mechanisms for perceiving others’ current emotional states, but how might they use this information to predict others’ future states? We hypothesized that people might capitalize on an overlooked aspect of affective experience: current emotions predict future emotions. By attending to regularities in emotion transitions, perceivers might develop accurate mental models of others’ emotional dynamics. People could then use these mental models of emotion transitions to predict others’ future emotions from currently observable emotions. To test this hypothesis, studies 1–3 used data from three extant experience-sampling datasets to establish the actual rates of emotional transitions. We then collected three parallel datasets in which participants rated the transition likelihoods between the same set of emotions. Participants’ ratings of emotion transitions predicted others’ experienced transitional likelihoods with high accuracy. Study 4 demonstrated that four conceptual dimensions of mental state representation—valence, social impact, rationality, and human mind—inform participants’ mental models. Study 5 used 2 million emotion reports on the Experience Project to replicate both of these findings: again people reported accurate models of emotion transitions, and these models were informed by the same four conceptual dimensions. Importantly, neither these conceptual dimensions nor holistic similarity could fully explain participants’ accuracy, suggesting that their mental models contain accurate information about emotion dynamics above and beyond what might be predicted by static emotion knowledge alone. PMID:28533373

  6. Quantitation: clinical applications

    International Nuclear Information System (INIS)

    Britton, K.E.

    1982-01-01

    Single photon emission tomography may be used quantitatively if its limitations are recognized and quantitation is made in relation to some reference area on the image. Relative quantitation is discussed in outline in relation to the liver, brain and pituitary, thyroid, adrenals, and heart. (U.K.)

  7. Robust and accurate vectorization of line drawings.

    Science.gov (United States)

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vector's parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.

  8. The first accurate description of an aurora

    Science.gov (United States)

    Schröder, Wilfried

    2006-12-01

    As technology has advanced, the scientific study of auroral phenomena has increased by leaps and bounds. A look back at the earliest descriptions of aurorae offers interesting insight into how medieval scholars viewed the subjects that we study. Although there are earlier fragmentary references in the literature, the first accurate description of the aurora borealis appears to be that published by the German Catholic scholar Konrad von Megenberg (1309-1374) in his book Das Buch der Natur (The Book of Nature). The book was written between 1349 and 1350.

  9. Accurate Charge Densities from Powder Diffraction

    DEFF Research Database (Denmark)

    Bindzus, Niels; Wahlberg, Nanna; Becker, Jacob

    Synchrotron powder X-ray diffraction has in recent years advanced to a level where it has become realistic to probe extremely subtle electronic features. Compared to single-crystal diffraction, it may be superior for simple, high-symmetry crystals owing to negligible extinction effects and minimal peak overlap. Additionally, it offers the opportunity for collecting data on a single scale. For charge density studies, the critical task is to recover accurate and bias-free structure factors from the diffraction pattern. This is the focal point of the present study, scrutinizing the performance...

  10. Arbitrarily accurate twin composite π -pulse sequences

    Science.gov (United States)

    Torosov, Boyan T.; Vitanov, Nikolay V.

    2018-04-01

    We present three classes of symmetric broadband composite pulse sequences. The composite phases are given by analytic formulas (rational fractions of π) valid for any number of constituent pulses. The transition probability is expressed by simple analytic formulas and the order of pulse area error compensation grows linearly with the number of pulses. Therefore, any desired compensation order can be produced by an appropriate composite sequence; in this sense, they are arbitrarily accurate. These composite pulses perform equally well as or better than previously published ones. Moreover, the current sequences are more flexible as they allow total pulse areas of arbitrary integer multiples of π.

  11. Systematization of Accurate Discrete Optimization Methods

    Directory of Open Access Journals (Sweden)

    V. A. Ovchinnikov

    2015-01-01

    Full Text Available The object of study of this paper is accurate methods for solving combinatorial optimization problems of structural synthesis. The aim of the work is to systematize the exact methods of discrete optimization and define their applicability to solving practical problems. The article presents the analysis, generalization and systematization of classical methods and algorithms described in the educational and scientific literature. As a result of the research, a systematic presentation of combinatorial methods for discrete optimization described in various sources is given, their capabilities are described, and the properties of the tasks to be solved using the appropriate methods are specified.

  12. Accurate shear measurement with faint sources

    International Nuclear Information System (INIS)

    Zhang, Jun; Foucaud, Sebastien; Luo, Wentao

    2015-01-01

    For cosmic shear to become an accurate cosmological probe, systematic errors in the shear measurement method must be unambiguously identified and corrected for. Previous work of this series has demonstrated that cosmic shears can be measured accurately in Fourier space in the presence of background noise and finite pixel size, without assumptions on the morphologies of galaxy and PSF. The remaining major source of error is source Poisson noise, due to the finiteness of source photon number. This problem is particularly important for faint galaxies in space-based weak lensing measurements, and for ground-based images of short exposure times. In this work, we propose a simple and rigorous way of removing the shear bias from the source Poisson noise. Our noise treatment can be generalized for images made of multiple exposures through MultiDrizzle. This is demonstrated with the SDSS and COSMOS/ACS data. With a large ensemble of mock galaxy images of unrestricted morphologies, we show that our shear measurement method can achieve sub-percent level accuracy even for images of signal-to-noise ratio less than 5 in general, making it the most promising technique for cosmic shear measurement in the ongoing and upcoming large scale galaxy surveys

  13. How Accurately can we Calculate Thermal Systems?

    International Nuclear Information System (INIS)

    Cullen, D; Blomquist, R N; Dean, C; Heinrichs, D; Kalugin, M A; Lee, M; Lee, Y; MacFarlan, R; Nagaya, Y; Trkov, A

    2004-01-01

    I would like to determine how accurately a variety of neutron transport code packages (code and cross section libraries) can calculate simple integral parameters, such as K-eff, for systems that are sensitive to thermal neutron scattering. Since we will only consider theoretical systems, we cannot really determine absolute accuracy compared to any real system. Therefore rather than accuracy, it would be more precise to say that I would like to determine the spread in answers that we obtain from a variety of code packages. This spread should serve as an excellent indicator of how accurately we can really model and calculate such systems today. Hopefully, eventually this will lead to improvements in both our codes and the thermal scattering models that they use in the future. In order to accomplish this I propose a number of extremely simple systems that involve thermal neutron scattering that can be easily modeled and calculated by a variety of neutron transport codes. These are theoretical systems designed to emphasize the effects of thermal scattering, since that is what we are interested in studying. I have attempted to keep these systems very simple, and yet at the same time they include most, if not all, of the important thermal scattering effects encountered in a large, water-moderated, uranium fueled thermal system, i.e., our typical thermal reactors

  14. Accurate control testing for clay liner permeability

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, R J

    1991-08-01

    Two series of centrifuge tests were carried out to evaluate the use of centrifuge modelling as a method of accurate control testing of clay liner permeability. The first series used a large 3 m radius geotechnical centrifuge and the second series a small 0.5 m radius machine built specifically for research on clay liners. Two permeability cells were fabricated in order to provide direct data comparisons between the two methods of permeability testing. In both cases, the centrifuge method proved to be effective and efficient, and was found to be free of both the technical difficulties and leakage risks normally associated with laboratory permeability testing of fine grained soils. Two materials were tested, a consolidated kaolin clay having an average permeability coefficient of 1.2×10⁻⁹ m/s and a compacted illite clay having a permeability coefficient of 2.0×10⁻¹¹ m/s. Four additional tests were carried out to demonstrate that the 0.5 m radius centrifuge could be used for linear performance modelling to evaluate factors such as volumetric water content, compaction method and density, leachate compatibility and other construction effects on liner leakage. The main advantages of centrifuge testing of clay liners are rapid and accurate evaluation of hydraulic properties and realistic stress modelling for performance evaluations. 8 refs., 12 figs., 7 tabs.
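
    For fine-grained soils such as these clays, the permeability coefficient is conventionally obtained from a falling-head relation; a worked example with invented dimensions (not the paper's apparatus):

        import math

        a = 1.0e-5            # standpipe cross-section, m^2
        A = 8.0e-3            # specimen cross-section, m^2
        L = 0.10              # specimen length, m
        t = 3600.0            # elapsed time, s
        h0, h1 = 1.00, 0.95   # hydraulic head at start and end, m

        k = (a * L / (A * t)) * math.log(h0 / h1)
        print(f"k = {k:.2e} m/s")   # ~1.8e-9 m/s, the order of the kaolin value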

  15. The value of quantitative MRI using 1.5 T magnet in diagnosis of carpal tunnel syndrome

    Directory of Open Access Journals (Sweden)

    Mohammad Fouad Abdel Baki Allam

    2017-03-01

    Conclusion: Quantitative 1.5 T MRI is an accurate diagnostic tool in CTS. The increase in median nerve (MN) ADC value from proximal to distal, with an ADC ratio cutoff value of 1, is highly accurate in diagnosing CTS.

  16. Quantitative Analysis of Renogram

    Energy Technology Data Exchange (ETDEWEB)

    Choi, Keun Chul [Seoul National University College of Medicine, Seoul (Korea, Republic of)

    1969-03-15

    value are useful for the differentiation of various renal diseases, however, qualitative analysis of the renogram with one or two parameters is not accurate. 3) In bilateral non-functioning kidney groups, a positive correlation between anemia and nitrogen retention was observed, although the quantitative assessment of the degree of non-functioning was impossible.

  17. Quantitative Analysis of Renogram

    International Nuclear Information System (INIS)

    Choi, Keun Chul

    1969-01-01

    are useful for the differentiation of various renal diseases, however, qualitative analysis of the renogram with one or two parameters is not accurate. 3) In bilateral non-functioning kidney groups, a positive correlation between anemia and nitrogen retention was observed, although the quantitative assessment of the degree of non-functioning was impossible.

  18. Quantitative analysis chemistry

    International Nuclear Information System (INIS)

    Ko, Wansuk; Lee, Choongyoung; Jun, Kwangsik; Hwang, Taeksung

    1995-02-01

    This book is about quantitative analytical chemistry. It is divided into ten chapters, which deal with basic concepts of matter and the meaning of analytical chemistry together with SI units, chemical equilibrium, basic preparation for quantitative analysis, an introduction to volumetric analysis, an outline of acid-base titration with experimental examples, chelate titration, oxidation-reduction titration with an introduction, titration curves and diazotization titration, precipitation titration, electrometric titration, and quantitative analysis.

  19. Quantitative pulsed eddy current analysis

    International Nuclear Information System (INIS)

    Morris, R.A.

    1975-01-01

    The potential of pulsed eddy current testing for furnishing more information than conventional single-frequency eddy current methods has been known for some time. However, a fundamental problem has been analyzing the pulse shape with sufficient precision to produce accurate quantitative results. Accordingly, the primary goals of this investigation were to demonstrate ways of digitizing the short pulses encountered in PEC testing, and to develop empirical analysis techniques that would predict some of the parameters (e.g., depth) of simple types of defect. This report describes a digitizing technique using a computer and either a conventional nuclear ADC or a fast transient analyzer; the computer software used to collect and analyze pulses; and some of the results obtained. (U.S.)

  20. Quantitative Algebraic Reasoning

    DEFF Research Database (Denmark)

    Mardare, Radu Iulian; Panangaden, Prakash; Plotkin, Gordon

    2016-01-01

    We develop a quantitative analogue of equational reasoning which we call quantitative algebra. We define an equality relation indexed by rationals: a =ε b, which we think of as saying that "a is approximately equal to b up to an error of ε". We have 4 interesting examples where we have a quantitative equational theory whose free algebras correspond to well-known structures. In each case we have finitary and continuous versions. The four cases are: Hausdorff metrics from quantitative semilattices; p-Wasserstein metrics (hence also the Kantorovich metric) from barycentric algebras and also from pointed

  1. Quantitative autoradiography of neurochemicals

    International Nuclear Information System (INIS)

    Rainbow, T.C.; Biegon, A.; Bleisch, W.V.

    1982-01-01

    Several new methods have been developed that apply quantitative autoradiography to neurochemistry. These methods are derived from the 2-deoxyglucose (2DG) technique of Sokoloff (1), which uses quantitative autoradiography to measure the rate of glucose utilization in brain structures. The new methods allow the measurement of the rate of cerebral protein synthesis and the levels of particular neurotransmitter receptors by quantitative autoradiography. As with the 2DG method, the new techniques can measure molecular levels in micron-sized brain structures and can be used in conjunction with computerized systems of image processing. It is possible that many neurochemical measurements could be made by computerized analysis of quantitative autoradiograms

  2. Accurate metacognition for visual sensory memory representations.

    Science.gov (United States)

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.

  3. An accurate nonlinear Monte Carlo collision operator

    International Nuclear Information System (INIS)

    Wang, W.X.; Okamoto, M.; Nakajima, N.; Murakami, S.

    1995-03-01

    A three-dimensional nonlinear Monte Carlo collision model is developed based on Coulomb binary collisions, with emphasis on both accuracy and implementation efficiency. The operator, of simple form, fulfills the particle number, momentum and energy conservation laws, and is equivalent to the exact Fokker-Planck operator in that it correctly reproduces the friction coefficient and diffusion tensor; in addition, it effectively assures small-angle collisions with a binary scattering angle distributed in a limited range near zero. Two highly vectorizable algorithms are designed for its fast implementation. Various test simulations regarding relaxation processes, electrical conductivity, etc. are carried out in velocity space. The test results, which are in good agreement with theory, and timing results on vector computers show that it is practically applicable. The operator may be used for accurately simulating collisional transport problems in magnetized and unmagnetized plasmas. (author)

  4. Accurate predictions for the LHC made easy

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    The data recorded by the LHC experiments is of a very high quality. To get the most out of the data, precise theory predictions, including uncertainty estimates, are needed to reduce as much as possible theoretical bias in the experimental analyses. Recently, significant progress has been made in computing Next-to-Leading Order (NLO) computations, including matching to the parton shower, that allow for these accurate, hadron-level predictions. I shall discuss one of these efforts, the MadGraph5_aMC@NLO program, that aims at the complete automation of predictions at the NLO accuracy within the SM as well as New Physics theories. I’ll illustrate some of the theoretical ideas behind this program, show some selected applications to LHC physics, as well as describe the future plans.

  5. Apparatus for accurately measuring high temperatures

    Science.gov (United States)

    Smith, D.D.

    The present invention is a thermometer used for measuring furnace temperatures in the range of about 1800° to 2700°C. The thermometer comprises a broadband multicolor thermal radiation sensor positioned to be in optical alignment with the end of a blackbody sight tube extending into the furnace. A valve-shutter arrangement is positioned between the radiation sensor and the sight tube, and a chamber for containing a charge of high pressure gas is positioned between the valve-shutter arrangement and the radiation sensor. A momentary opening of the valve-shutter arrangement allows a pulse of the high-pressure gas to purge the sight tube of air-borne thermal radiation contaminants, which permits the radiation sensor to accurately measure the thermal radiation emanating from the end of the sight tube.

  6. Accurate Modeling Method for Cu Interconnect

    Science.gov (United States)

    Yamada, Kenta; Kitahara, Hiroshi; Asai, Yoshihiko; Sakamoto, Hideo; Okada, Norio; Yasuda, Makoto; Oda, Noriaki; Sakurai, Michio; Hiroi, Masayuki; Takewaki, Toshiyuki; Ohnishi, Sadayuki; Iguchi, Manabu; Minda, Hiroyasu; Suzuki, Mieko

    This paper proposes an accurate modeling method for the copper interconnect cross-section in which the width and thickness dependence on layout patterns and density caused by processes (CMP, etching, sputtering, lithography, and so on) is fully incorporated and universally expressed. In addition, we have developed specific test patterns for model parameter extraction, and an efficient extraction flow. We have extracted the model parameters for 0.15μm CMOS using this method and confirmed that the 10% τpd error normally observed with conventional LPE (Layout Parameters Extraction) was completely eliminated. Moreover, it is verified that the model can be applied to more advanced technologies (90nm, 65nm and 55nm CMOS). Since the interconnect delay variations due to the processes constitute a significant part of what has conventionally been treated as random variation, use of the proposed model could enable one to greatly narrow the guardbands required to guarantee a desired yield, thereby facilitating design closure.

  7. Accurate, fully-automated NMR spectral profiling for metabolomics.

    Directory of Open Access Journals (Sweden)

    Siamak Ravanbakhsh

    Full Text Available Many diseases cause significant changes to the concentrations of small molecules (a.k.a. metabolites) that appear in a person's biofluids, which means such diseases can often be readily detected from a person's "metabolic profile", i.e., the list of concentrations of those metabolites. This information can be extracted from a biofluid's Nuclear Magnetic Resonance (NMR) spectrum. However, due to its complexity, NMR spectral profiling has remained manual, resulting in slow, expensive and error-prone procedures that have hindered clinical and industrial adoption of metabolomics via NMR. This paper presents a system, BAYESIL, which can quickly, accurately, and autonomously produce a person's metabolic profile. Given a 1D 1H NMR spectrum of a complex biofluid (specifically serum or cerebrospinal fluid), BAYESIL can automatically determine the metabolic profile. This requires first performing several spectral processing steps, then matching the resulting spectrum against a reference compound library, which contains the "signatures" of each relevant metabolite. BAYESIL views spectral matching as an inference problem within a probabilistic graphical model that rapidly approximates the most probable metabolic profile. Our extensive studies on a diverse set of complex mixtures, including real biological samples (serum and CSF), defined mixtures and realistic computer-generated spectra involving > 50 compounds, show that BAYESIL can autonomously find the concentration of NMR-detectable metabolites accurately (~90% correct identification and ~10% quantification error), in less than 5 minutes on a single CPU. These results demonstrate that BAYESIL is the first fully-automatic publicly-accessible system that provides quantitative NMR spectral profiling effectively, with an accuracy on these biofluids that meets or exceeds the performance of trained experts. We anticipate this tool will usher in high-throughput metabolomics and enable a wealth of new applications of

  8. An Accurate liver segmentation method using parallel computing algorithm

    International Nuclear Information System (INIS)

    Elbasher, Eiman Mohammed Khalied

    2014-12-01

    Computed Tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat and organs; CT scans are more detailed than standard X-rays. CT scans may be done with or without "contrast". Contrast refers to a substance taken by mouth and/or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scans of the liver and biliary tract are used in the diagnosis of many diseases of the abdominal structures, particularly when another type of examination, such as X-rays, physical examination, or ultrasound, is not conclusive. Unfortunately, the presence of noise and artifacts at the edges and fine details of CT images limits the contrast resolution and makes the diagnostic procedure more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology and Fidel Specialist Hospital. The study sample included 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment the liver and adjacent organs using image processing techniques. The main segmentation technique used in this study was the watershed transform. The scope of image processing and analysis applied to medical applications is to improve the quality of the acquired image and extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed with the results of Jarritt et al. (2010), Kratchwil et al. (2010), Jover et al. (2011), Yomamoto et al. (1996), Cai et al. (1999), and Saudha and Jayashree (2010), who used different segmentation filtering based on methods for enhancing computed tomography images. Another
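
    A minimal marker-controlled watershed in the spirit described, using scikit-image rather than the thesis' actual parallel implementation; the random stand-in image and the seed quantiles are placeholders:

        import numpy as np
        from skimage import filters, segmentation

        slice_ct = np.random.default_rng(0).random((256, 256))  # stand-in CT slice
        denoised = filters.gaussian(slice_ct, sigma=2)          # suppress noise first
        gradient = filters.sobel(denoised)                      # edges guide the flooding

        # Seed markers from confident background/foreground intensities
        markers = np.zeros_like(slice_ct, dtype=int)
        markers[denoised < np.quantile(denoised, 0.05)] = 1     # background seeds
        markers[denoised > np.quantile(denoised, 0.95)] = 2     # organ seeds

        labels = segmentation.watershed(gradient, markers)
        print((labels == 2).sum(), "pixels labelled as the target region")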

  9. Approaching system equilibrium with accurate or not accurate feedback information in a two-route system

    Science.gov (United States)

    Zhao, Xiao-mei; Xie, Dong-fan; Li, Qi

    2015-02-01

    With the development of intelligent transport systems, advanced information feedback strategies have been developed to reduce traffic congestion and enhance capacity. However, previous strategies provide accurate information to travelers, and our simulation results show that accurate information brings negative effects, especially in the delayed case, because travelers prefer the best-condition route under accurate information, while delayed information reflects past rather than current traffic conditions. Travelers then make wrong routing decisions, causing a decrease in capacity, an increase in oscillations, and deviation of the system from equilibrium. To avoid these negative effects, bounded rationality is taken into account by introducing a boundedly rational threshold BR. When the difference between the two routes is less than BR, the routes have equal probability of being chosen. Bounded rationality is helpful for improving efficiency in terms of capacity, oscillation, and the gap from system equilibrium.
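
    The threshold rule is simple enough to state in a few lines; a sketch of the decision step only (the travel times and BR value are invented, and this is not the authors' simulation code):

        import random

        def choose_route(time_a, time_b, br):
            """Boundedly rational choice: differences below br mean indifference."""
            if abs(time_a - time_b) < br:
                return random.choice(["A", "B"])   # indifferent -> equal probability
            return "A" if time_a < time_b else "B"

        print(choose_route(10.2, 10.5, br=1.0))    # within threshold -> random pick
        print(choose_route(10.2, 14.0, br=1.0))    # clear winner -> route A

    With BR = 0 every traveler chases the (possibly outdated) best-looking route, which is exactly the herding that amplifies oscillations under delayed feedback.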

  10. Magnetoresistive biosensors for quantitative proteomics

    Science.gov (United States)

    Zhou, Xiahan; Huang, Chih-Cheng; Hall, Drew A.

    2017-08-01

    Quantitative proteomics, as a developing method for study of proteins and identification of diseases, reveals more comprehensive and accurate information of an organism than traditional genomics. A variety of platforms, such as mass spectrometry, optical sensors, electrochemical sensors, magnetic sensors, etc., have been developed for detecting proteins quantitatively. The sandwich immunoassay is widely used as a labeled detection method due to its high specificity and flexibility allowing multiple different types of labels. While optical sensors use enzyme and fluorophore labels to detect proteins with high sensitivity, they often suffer from high background signal and challenges in miniaturization. Magnetic biosensors, including nuclear magnetic resonance sensors, oscillator-based sensors, Hall-effect sensors, and magnetoresistive sensors, use the specific binding events between magnetic nanoparticles (MNPs) and target proteins to measure the analyte concentration. Compared with other biosensing techniques, magnetic sensors take advantage of the intrinsic lack of magnetic signatures in biological samples to achieve high sensitivity and high specificity, and are compatible with semiconductor-based fabrication process to have low-cost and small-size for point-of-care (POC) applications. Although still in the development stage, magnetic biosensing is a promising technique for in-home testing and portable disease monitoring.

  11. On an efficient and accurate method to integrate restricted three-body orbits

    Science.gov (United States)

    Murison, Marc A.

    1989-01-01

    This work is a quantitative analysis of the advantages of the Bulirsch-Stoer (1966) method, demonstrating that this method is certainly worth considering when working with small N dynamical systems. The results, qualitatively suspected by many users, are quantitatively confirmed as follows: (1) the Bulirsch-Stoer extrapolation method is very fast and moderately accurate; (2) regularization of the equations of motion stabilizes the error behavior of the method and is, of course, essential during close approaches; and (3) when applicable, a manifold-correction algorithm reduces numerical errors to the limits of machine accuracy. In addition, for the specific case of the restricted three-body problem, even a small eccentricity for the orbit of the primaries drastically affects the accuracy of integrations, whether regularized or not; the circular restricted problem integrates much more accurately.
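
    The core of the Bulirsch-Stoer idea is a modified-midpoint step whose error expands in even powers of the substep size, so results for several substep counts can be extrapolated to zero step. A compact sketch (fixed macro step, no adaptive control, invented test problem):

        import numpy as np

        def modified_midpoint(f, t, y, H, n):
            """Advance y' = f(t, y) over one macro step H using n substeps."""
            h = H / n
            y0, y1 = y, y + h * f(t, y)
            for i in range(1, n):
                y0, y1 = y1, y0 + 2 * h * f(t + i * h, y1)
            return 0.5 * (y0 + y1 + h * f(t + H, y1))

        def bulirsch_stoer_step(f, t, y, H, seq=(2, 4, 6, 8, 10, 12)):
            """Extrapolate midpoint results to h -> 0 (Neville scheme in h^2)."""
            T = []
            for k, n in enumerate(seq):
                T.append(modified_midpoint(f, t, y, H, n))
                for j in range(k - 1, -1, -1):
                    r = (seq[k] / seq[j]) ** 2
                    T[j] = T[j + 1] + (T[j + 1] - T[j]) / (r - 1)
            return T[0]   # fully extrapolated value

        f = lambda t, y: -y                       # test problem, exact solution exp(-t)
        print(bulirsch_stoer_step(f, 0.0, np.array([1.0]), H=1.0), np.exp(-1.0))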

  12. Characterization of 3D PET systems for accurate quantification of myocardial blood flow

    OpenAIRE

    Renaud, Jennifer M.; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Éric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C.; Turkington, Timothy G

    2016-01-01

    Three-dimensional (3D) mode imaging is the current standard for positron emission tomography-computed tomography (PET-CT) systems. Dynamic imaging for quantification of myocardial blood flow (MBF) with short-lived tracers, such as Rb-82- chloride (Rb-82), requires accuracy to be maintained over a wide range of isotope activities and scanner count-rates. We propose new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative...

  13. The place of highly accurate methods by RNAA in metrology

    International Nuclear Information System (INIS)

    Dybczynski, R.; Danko, B.; Polkowska-Motrenko, H.; Samczynski, Z.

    2006-01-01

    With the introduction of physical metrological concepts to chemical analysis, which require that the result should be accompanied by an uncertainty statement written down in terms of SI units, several researchers started to consider ID-MS as the only method fulfilling this requirement. However, recent publications revealed that in certain cases even expert laboratories using ID-MS and analyzing the same material produced results for which the uncertainty statements did not overlap, which theoretically should not have taken place. This shows that no monopoly is good in science and it would be desirable to widen the set of methods acknowledged as primary in inorganic trace analysis. Moreover, ID-MS cannot be used for monoisotopic elements. The need for searching for other methods having metrological quality similar to ID-MS seems obvious. In this paper, our long-time experience in devising highly accurate ('definitive') methods by RNAA for the determination of selected trace elements in biological materials is reviewed. The general idea of definitive methods based on the combination of neutron activation with highly selective and quantitative isolation of the indicator radionuclide by column chromatography, followed by gamma-spectrometric measurement, is recalled and illustrated by examples of the performance of such methods when determining Cd, Co, Mo, etc. It is demonstrated that such methods are able to provide very reliable results with very low levels of uncertainty traceable to SI units

  14. Machine learning of accurate energy-conserving molecular force fields

    Science.gov (United States)

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel E.; Poltavsky, Igor; Schütt, Kristof T.; Müller, Klaus-Robert

    2017-01-01

    Using conservation of energy—a fundamental property of closed classical and quantum mechanical systems—we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol−1 for energies and 1 kcal mol−1 Å−1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods. PMID:28508076

  15. Fast and accurate modeling of stray light in optical systems

    Science.gov (United States)

    Perrin, Jean-Claude

    2017-11-01

    The first problem to be solved in most optical designs with respect to stray light is that of internal reflections at the several surfaces of individual lenses and mirrors, and at the detector itself. The stray light ratio can be considerably reduced by taking stray light into account during optimization, to find solutions in which the irradiance due to these ghosts is kept to the minimum possible value. Unfortunately, the routines available in most optical design software packages, for example CODE V, do not by themselves permit exact quantitative calculation of the stray light due to these ghosts. The engineer in charge of the optical design is therefore confronted with the problem of using two different packages: one for design and optimization, for example CODE V, and one for stray light analysis, for example ASAP. This makes a complete optimization very complex. Nevertheless, using special techniques and combinations of the routines available in CODE V, it is possible to build a software macro tool that performs such an analysis quickly and accurately, including Monte-Carlo ray tracing and diffraction effects. This analysis can be done in a few minutes, compared to hours with other packages.

  16. Accurate measurements of neutron activation cross sections

    International Nuclear Information System (INIS)

    Semkova, V.

    1999-01-01

    The applications of some recent achievements of the neutron activation method on high-intensity neutron sources are considered from the viewpoint of the associated errors of cross-section data for neutron-induced reactions. The important corrections in γ-spectrometry ensuring precise determination of the induced radioactivity, methods for accurate determination of the energy and flux density of neutrons produced by different sources, and investigations of deuterium beam composition are considered as factors determining the precision of the experimental data. The influence of the ion beam composition on the mean energy of neutrons has been investigated by measuring the energy of neutrons induced by different magnetically analysed deuterium ion groups. The Zr/Nb method for experimental determination of the neutron energy in the 13-15 MeV range allows the energy of neutrons from the D-T reaction to be measured with an uncertainty of 50 keV. Flux density spectra from D(d,n) at E_d = 9.53 MeV and Be(d,n) at E_d = 9.72 MeV were measured by PHRS and the foil activation method. Future applications of the activation method on NG-12 are discussed. (author)

  17. Implicit time accurate simulation of unsteady flow

    Science.gov (United States)

    van Buuren, René; Kuerten, Hans; Geurts, Bernard J.

    2001-03-01

    Implicit time integration was studied in the context of unsteady shock-boundary layer interaction flow. With an explicit second-order Runge-Kutta scheme, a reference solution to compare with the implicit second-order Crank-Nicolson scheme was determined. The time step in the explicit scheme is restricted by both temporal accuracy as well as stability requirements, whereas in the A-stable implicit scheme, the time step has to obey temporal resolution requirements and numerical convergence conditions. The non-linear discrete equations for each time step are solved iteratively by adding a pseudo-time derivative. The quasi-Newton approach is adopted and the linear systems that arise are approximately solved with a symmetric block Gauss-Seidel solver. As a guiding principle for properly setting numerical time integration parameters that yield an efficient time accurate capturing of the solution, the global error caused by the temporal integration is compared with the error resulting from the spatial discretization. Focus is on the sensitivity of properties of the solution in relation to the time step. Numerical simulations show that the time step needed for acceptable accuracy can be considerably larger than the explicit stability time step; typical ratios range from 20 to 80. At large time steps, convergence problems that are closely related to a highly complex structure of the basins of attraction of the iterative method may occur.
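
    A minimal sketch of the pseudo-time idea described above, applied to a stiff scalar test equation. The real solver uses a quasi-Newton iteration with a block Gauss-Seidel linear solver; here plain explicit pseudo-time marching of the Crank-Nicolson residual stands in for it, and the test problem and step sizes are illustrative assumptions.

```python
import numpy as np

def f(u, t):
    # stiff model problem u' = -50 (u - cos t); constants are illustrative
    return -50.0 * (u - np.cos(t))

dt, dtau = 0.05, 0.005      # physical step (large), pseudo-time step
u, t = 1.0, 0.0
for n in range(40):
    un, tn = u, t
    t = tn + dt
    for k in range(500):    # pseudo-time iterations, driving R -> 0
        # Crank-Nicolson residual at the new time level
        R = -(u - un) / dt + 0.5 * (f(u, t) + f(un, tn))
        u = u + dtau * R
        if abs(R) < 1e-12:
            break
print(t, u, np.cos(t))      # u tracks the slow forcing cos(t)
```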

  18. Spectrally accurate initial data in numerical relativity

    Science.gov (United States)

    Battista, Nicholas A.

    Einstein's theory of general relativity has radically altered the way in which we perceive the universe. His breakthrough was to realize that the fabric of space is deformable in the presence of mass, and that space and time are linked into a continuum. Much evidence has been gathered in support of general relativity over the decades. Some of the indirect evidence for GR includes the phenomenon of gravitational lensing, the anomalous perihelion of Mercury, and the gravitational redshift. One of the most striking predictions of GR that has not yet been confirmed is the existence of gravitational waves. The primary source of gravitational waves in the universe is thought to be the merger of binary black hole systems or of binary neutron stars. The starting point for computer simulations of black hole mergers requires highly accurate initial data for the space-time metric and for the curvature. The equations describing the initial space-time around the black hole(s) are non-linear, elliptic partial differential equations (PDEs). We will discuss how to use a pseudo-spectral (collocation) method to calculate the initial puncture data corresponding to single black hole and binary black hole systems.
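
    A hedged sketch of pseudo-spectral (Chebyshev collocation) solution of a nonlinear elliptic boundary value problem. The toy equation u'' = exp(u) on [-1, 1] with homogeneous Dirichlet data stands in for the far more involved puncture equations; the differentiation matrix follows the standard construction (Trefethen, Spectral Methods in MATLAB), and Newton iteration handles the nonlinearity.

```python
import numpy as np

def cheb(N):
    # Chebyshev differentiation matrix and grid on [-1, 1]
    x = np.cos(np.pi * np.arange(N + 1) / N)
    c = np.ones(N + 1); c[0] = c[-1] = 2.0
    c *= (-1.0) ** np.arange(N + 1)
    X = np.tile(x, (N + 1, 1)).T
    dX = X - X.T
    D = np.outer(c, 1.0 / c) / (dX + np.eye(N + 1))
    D -= np.diag(D.sum(axis=1))          # negative-sum trick for diagonal
    return D, x

N = 32
D, x = cheb(N)
D2 = (D @ D)[1:-1, 1:-1]                 # second derivative, interior only
u = np.zeros(N - 1)                      # initial guess; boundaries are 0
for it in range(20):                     # Newton on F(u) = D2 u - exp(u)
    Fres = D2 @ u - np.exp(u)
    J = D2 - np.diag(np.exp(u))
    du = np.linalg.solve(J, -Fres)
    u += du
    if np.max(np.abs(du)) < 1e-12:
        break
print(it, float(np.min(u)))              # converges in a few iterations
```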

  19. A stiffly accurate integrator for elastodynamic problems

    KAUST Repository

    Michels, Dominik L.

    2017-07-21

    We present a new integration algorithm for the accurate and efficient solution of stiff elastodynamic problems governed by the second-order ordinary differential equations of structural mechanics. Current methods have the shortcoming that their performance is highly dependent on the numerical stiffness of the underlying system, which often leads to unrealistic behavior or a significant loss of efficiency. To overcome these limitations, we present a new integration method which is based on a mathematical reformulation of the underlying differential equations, an exponential treatment of the full nonlinear forcing operator as opposed to more standard partially implicit or exponential approaches, and the utilization of the concept of stiff accuracy, which ensures that the efficiency of the simulations is significantly less sensitive to increased stiffness. As a consequence, we are able to tremendously accelerate the simulation of stiff systems compared to established integrators and significantly increase the overall accuracy. The advantageous behavior of this approach is demonstrated on a broad spectrum of complex examples like deformable bodies, textiles, bristles, and human hair. Our easily parallelizable integrator enables more complex and realistic models to be explored in visual computing without compromising efficiency.
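
    A minimal sketch in the spirit of exponential integration: the exponential (Rosenbrock-)Euler step u_{n+1} = u_n + h*phi1(h*J)*F(u_n), with phi1(z) = (exp(z) - 1)/z evaluated via the standard augmented-matrix identity. The stiff test system is an illustrative assumption; the integrator in the paper treats the full nonlinear forcing operator exponentially and is considerably more elaborate.

```python
import numpy as np
from scipy.linalg import expm

def F(u):
    # stiff spring with light damping and a mild nonlinearity (illustrative)
    x, v = u
    return np.array([v, -1e6 * x - 10.0 * v + 0.1 * np.sin(x)])

J = np.array([[0.0, 1.0], [-1e6, -10.0]])   # Jacobian of the linear part

def phi1_times(hJ, b):
    # phi1(hJ) @ b via expm of the augmented matrix [[hJ, b], [0, 0]],
    # whose exponential carries phi1(hJ) @ b in its top-right block
    n = len(b)
    M = np.zeros((n + 1, n + 1))
    M[:n, :n] = hJ
    M[:n, n] = b
    return expm(M)[:n, n]

h, u = 1e-3, np.array([1.0, 0.0])  # explicit Euler is unstable at this h
for n in range(1000):
    u = u + h * phi1_times(h * J, F(u))
print(u)                           # bounded, decaying oscillation
```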

  20. Geodetic analysis of disputed accurate qibla direction

    Science.gov (United States)

    Saksono, Tono; Fulazzaky, Mohamad Ali; Sari, Zamah

    2018-04-01

    Performing the prayers facing the correct qibla direction is one of the practical issues linking theoretical studies with practice. The concept of facing the Kaaba in Mecca during the prayers has long been a source of controversy among Muslim communities, not only in poor and developing countries but also in developed countries. The aim of this study was to analyse the geodetic azimuths of the qibla calculated using three different models of the Earth. The ellipsoidal model of the Earth may be the best basis for determining the accurate direction to the Kaaba from anywhere on the Earth's surface. A Muslim cannot orient himself towards the qibla precisely when he cannot see the Kaaba, and setting-out errors and certain motions during the prayer can significantly shift the facing direction away from the actual position of the Kaaba. The requirement that a Muslim pray facing the Kaaba is thus more a spiritual prerequisite than a matter of physical evidence.
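
    For a spherical Earth, the initial great-circle bearing gives one of the azimuth models compared in such studies; a hedged sketch is below. The Kaaba coordinates are approximate, and an ellipsoidal geodesic (for example via pyproj's Geod(ellps='WGS84').inv, if that library is available) would be the more accurate alternative the authors advocate.

```python
import math

KAABA_LAT, KAABA_LON = 21.4225, 39.8262    # approximate coordinates

def qibla_azimuth(lat_deg, lon_deg):
    # initial great-circle bearing on a spherical Earth, degrees from north
    p1 = math.radians(lat_deg)
    p2 = math.radians(KAABA_LAT)
    dlon = math.radians(KAABA_LON - lon_deg)
    y = math.sin(dlon) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360.0

print(qibla_azimuth(-6.2, 106.8))   # e.g. Jakarta: roughly 295 deg (WNW)
```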

  1. Accurate deuterium spectroscopy for fundamental studies

    Science.gov (United States)

    Wcisło, P.; Thibault, F.; Zaborowski, M.; Wójtewicz, S.; Cygan, A.; Kowzan, G.; Masłowski, P.; Komasa, J.; Puchalski, M.; Pachucki, K.; Ciuryło, R.; Lisak, D.

    2018-07-01

    We present an accurate measurement of the weak quadrupole S(2) 2-0 line in self-perturbed D2 and theoretical ab initio calculations of both collisional line-shape effects and energy of this rovibrational transition. The spectra were collected at the 247-984 Torr pressure range with a frequency-stabilized cavity ring-down spectrometer linked to an optical frequency comb (OFC) referenced to a primary time standard. Our line-shape modeling employed quantum calculations of molecular scattering (the pressure broadening and shift and their speed dependencies were calculated, while the complex frequency of optical velocity-changing collisions was fitted to experimental spectra). The velocity-changing collisions are handled with the hard-sphere collisional kernel. The experimental and theoretical pressure broadening and shift are consistent within 5% and 27%, respectively (the discrepancy for shift is 8% when referred not to the speed averaged value, which is close to zero, but to the range of variability of the speed-dependent shift). We use our high pressure measurement to determine the energy, ν0, of the S(2) 2-0 transition. The ab initio line-shape calculations allowed us to mitigate the expected collisional systematics reaching the 410 kHz accuracy of ν0. We report theoretical determination of ν0 taking into account relativistic and QED corrections up to α5. Our estimation of the accuracy of the theoretical ν0 is 1.3 MHz. We observe 3.4σ discrepancy between experimental and theoretical ν0.

  2. Towards Accurate Application Characterization for Exascale (APEX)

    Energy Technology Data Exchange (ETDEWEB)

    Hammond, Simon David [Sandia National Laboratories (SNL-NM), Albuquerque, NM (United States)

    2015-09-01

    Sandia National Laboratories has been engaged in hardware and software codesign activities for a number of years; indeed, it might be argued that prototyping of clusters as far back as the CPLANT machines, and many large capability resources including ASCI Red and RedStorm, were examples of codesigned solutions. As the research supporting our codesign activities has moved closer to investigating on-node runtime behavior, a natural hunger has grown for detailed analysis of both hardware and algorithm performance from the perspective of low-level operations. The Application Characterization for Exascale (APEX) LDRD was a project conceived to address some of these concerns. Primarily, the research was intended to focus on generating accurate and reproducible low-level performance metrics using tools that could scale to production-class code bases. Alongside this research was an advocacy and analysis role associated with evaluating tools for production use, working with leading industry vendors to develop and refine solutions required by our code teams, and directly engaging with production code developers to form a context for the application analysis and a bridge to the research community within Sandia. On each of these accounts significant progress has been made, particularly, as this report will cover, in the low-level analysis of operations for important classes of algorithms. This report summarizes the development of a collection of tools under the APEX research program and leaves to other SAND and L2 milestone reports the description of codesign progress with Sandia’s production users/developers.

  3. How flatbed scanners upset accurate film dosimetry

    Science.gov (United States)

    van Battum, L. J.; Huizenga, H.; Verdaasdonk, R. M.; Heukelom, S.

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2-2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red-green-blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE, and therefore determination of the LSE per color channel and dose delivered to the film.

  4. Accurate hydrocarbon estimates attained with radioactive isotope

    International Nuclear Information System (INIS)

    Hubbard, G.

    1983-01-01

    To make accurate economic evaluations of new discoveries, an oil company needs to know how much gas and oil a reservoir contains. The porous rocks of these reservoirs are not completely filled with gas or oil, but contain a mixture of gas, oil and water. It is extremely important to know what volume percentage of this water--called connate water--is contained in the reservoir rock. The percentage of connate water can be calculated from electrical resistivity measurements made downhole. The accuracy of this method can be improved if a pure sample of connate water can be analyzed or if the chemistry of the water can be determined by conventional logging methods. Because of the similarity of the mud filtrate--the water in a water-based drilling fluid--and the connate water, this is not always possible. If the oil company cannot distinguish between connate water and mud filtrate, its oil-in-place calculations could be incorrect by ten percent or more. It is clear that unless an oil company can be sure that a sample of connate water is pure, or at the very least knows exactly how much mud filtrate it contains, its assessment of the reservoir's water content--and consequently its oil or gas content--will be distorted. The oil companies have opted for the Repeat Formation Tester (RFT) method. Label the drilling fluid with small doses of tritium--a radioactive isotope of hydrogen--and it will be easy to detect and quantify in the sample

  5. How flatbed scanners upset accurate film dosimetry

    International Nuclear Information System (INIS)

    Van Battum, L J; Verdaasdonk, R M; Heukelom, S; Huizenga, H

    2016-01-01

    Film is an excellent dosimeter for verification of dose distributions due to its high spatial resolution. Irradiated film can be digitized with low-cost, transmission, flatbed scanners. However, a disadvantage is their lateral scan effect (LSE): a scanner readout change over its lateral scan axis. Although anisotropic light scattering was presented as the origin of the LSE, this paper presents an alternative cause. Hereto, the LSE for two flatbed scanners (Epson 1680 Expression Pro and Epson 10000XL) and Gafchromic film (EBT, EBT2, EBT3) was investigated, focused on three effects: cross talk, optical path length and polarization. Cross talk was examined using triangular sheets of various optical densities. The optical path length effect was studied using absorptive and reflective neutral density filters with well-defined optical characteristics (OD range 0.2–2.0). Linear polarizer sheets were used to investigate light polarization on the CCD signal in absence and presence of (un)irradiated Gafchromic film. Film dose values ranged from 0.2 to 9 Gy, i.e. an optical density range from 0.25 to 1.1. Measurements were performed in the scanner’s transmission mode, with red–green–blue channels. The LSE was found to depend on scanner construction and film type. Its magnitude depends on dose: for 9 Gy it increases up to 14% at the maximum lateral position. Cross talk was only significant in high contrast regions, up to 2% for very small fields. The optical path length effect introduced by film on the scanner causes a 3% effect for pixels in the extreme lateral position. Light polarization due to film and the scanner’s optical mirror system is the main contributor, different in magnitude for the red, green and blue channel. We concluded that any Gafchromic EBT type film scanned with a flatbed scanner will face these optical effects. Accurate dosimetry requires correction of the LSE, and therefore determination of the LSE per color channel and dose delivered to the film. (paper)

  6. Simple Mathematical Models Do Not Accurately Predict Early SIV Dynamics

    Directory of Open Access Journals (Sweden)

    Cecilia Noecker

    2015-03-01

    Full Text Available Upon infection of a new host, human immunodeficiency virus (HIV) replicates in the mucosal tissues and is generally undetectable in circulation for 1–2 weeks post-infection. Several interventions against HIV, including vaccines and antiretroviral prophylaxis, target virus replication at this earliest stage of infection. Mathematical models have been used to understand how HIV spreads from mucosal tissues systemically and what impact vaccination and/or antiretroviral prophylaxis has on viral eradication. Because predictions of such models have rarely been compared to experimental data, it remains unclear which processes included in these models are critical for predicting early HIV dynamics. Here we modified the “standard” mathematical model of HIV infection to include two populations of infected cells: cells that are actively producing the virus and cells that are transitioning into virus production mode. We evaluated the effects of several poorly known parameters on infection outcomes in this model and compared model predictions to experimental data on infection of non-human primates with variable doses of simian immunodeficiency virus (SIV). First, we found that the mode of virus production by infected cells (budding vs. bursting) has a minimal impact on the early virus dynamics for a wide range of model parameters, as long as the parameters are constrained to provide the observed rate of SIV load increase in the blood of infected animals. Interestingly, and in contrast with previous results, we found that the bursting mode of virus production generally results in a higher probability of viral extinction than the budding mode. Second, this mathematical model was not able to accurately describe the change in the experimentally determined probability of host infection with increasing viral doses. Third and finally, the model was also unable to accurately explain the decline in the time to virus detection with increasing viral

  7. Accurate color measurement methods for medical displays.

    Science.gov (United States)

    Saha, Anindita; Kelley, Edward F; Badano, Aldo

    2010-01-01

    at shorter distances between the light sources, which translates to less contamination. The tails of the scans indicate the magnitude of the spread in signal due to light from areas outside the intended measurement spot. The measurements indicate a corresponding glare factor for a large spot of 140, 500, and 2000 for probe A, B1, and B2, respectively. The dual-laser setup suggests that color purity can be maintained up to a few tens of millimeters outside the measurement spot. The comparison shows that there are significant differences in the performance of each probe design, and that those differences have an effect on the measured quantity used to quantify display color. Different probe designs show different measurements of the level of light contamination that affects the quantitative color determination.

  8. Approaches for the accurate definition of geological time boundaries

    Science.gov (United States)

    Schaltegger, Urs; Baresel, Björn; Ovtcharova, Maria; Goudemand, Nicolas; Bucher, Hugo

    2015-04-01

    Which strategies lead to the most precise and accurate date of a given geological boundary? Geological units are usually defined by the occurrence of characteristic taxa, and hence boundaries between these geological units correspond to dramatic faunal and/or floral turnovers; they are primarily defined using first or last occurrences of index species, or ideally by the separation interval between two consecutive, characteristic associations of fossil taxa. These boundaries need to be defined in a way that enables their worldwide recognition and correlation across different stratigraphic successions, using tools as different as bio-, magneto-, and chemo-stratigraphy, and astrochronology. Sedimentary sequences can be dated in numerical terms by applying high-precision chemical-abrasion, isotope-dilution, thermal-ionization mass spectrometry (CA-ID-TIMS) U-Pb age determination to zircon (ZrSiO4) in intercalated volcanic ashes. But, though volcanic activity is common in geological history, ashes are not necessarily close to the boundary we would like to date precisely and accurately. In addition, U-Pb zircon data sets may be very complex and difficult to interpret in terms of the age of ash deposition. To overcome these difficulties we applied a multi-proxy approach to the precise and accurate dating of the Permo-Triassic and Early-Middle Triassic boundaries in South China. a) Dense sampling of ashes across the critical time interval and a sufficiently large number of analysed zircons per ash sample can guarantee the recognition of all system complexities. Geochronological datasets from U-Pb dating of volcanic zircon may indeed combine the effects of (i) post-crystallization Pb loss from percolation of hydrothermal fluids (even using chemical abrasion) with (ii) age dispersion from prolonged residence of earlier crystallized zircon in the magmatic system. As a result, U-Pb dates of individual zircons are both apparently younger and older than the depositional age

  9. Quantitative secondary electron detection

    Science.gov (United States)

    Agrawal, Jyoti; Joy, David C.; Nayak, Subuhadarshi

    2018-05-08

    Quantitative secondary electron detection (QSED) using an array of solid-state-device (SSD) based electron counters enables critical-dimension metrology measurements in materials such as semiconductors, nanomaterials, and biological samples. The described methods and devices effect quantitative detection of secondary electrons with an array comprising a number of solid-state detectors: the array senses the secondary electrons with a plurality of solid-state detectors, and their number is counted with a time-to-digital converter circuit operating in counter mode.

  10. [Methods of quantitative proteomics].

    Science.gov (United States)

    Kopylov, A T; Zgoda, V G

    2007-01-01

    In modern science, proteomic analysis is inseparable from other fields of systems biology. Commanding huge resources, quantitative proteomics handles colossal amounts of information on the molecular mechanisms of life. Advances in proteomics help researchers to solve complex problems of cell signaling, posttranslational modification, structural and functional homology of proteins, molecular diagnostics, etc. More than 40 different methods have been developed in proteomics for the quantitative analysis of proteins. Although each method is unique and has certain advantages and disadvantages, all of them use various isotope labels (tags). In this review we consider the most popular and effective methods, employing both chemical modification of proteins and metabolic and enzymatic methods of isotope labeling.

  11. Extending Quantitative Easing

    DEFF Research Database (Denmark)

    Hallett, Andrew Hughes; Fiedler, Salomon; Kooths, Stefan

    The notes in this compilation address the pros and cons associated with the extension of the ECB's quantitative easing programme of asset purchases. The notes were requested by the Committee on Economic and Monetary Affairs as an input for the February 2017 session of the Monetary Dialogue.

  12. Quantitative Moessbauer analysis

    International Nuclear Information System (INIS)

    Collins, R.L.

    1978-01-01

    The quantitative analysis of Moessbauer data, as in the measurement of Fe^3+/Fe^2+ concentration, has not been possible because of the different mean square velocities <x^2> of Moessbauer nuclei at chemically different sites. A method is now described which, based on Moessbauer data at several temperatures, permits the comparison of absorption areas at <x^2> = 0. (Auth.)
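
    A hedged numerical illustration of the temperature-extrapolation idea, under the assumption that the absorption area of each site scales as A(T) ~ n * exp(-k^2 <x^2>(T)) and that <x^2> grows linearly with T in the high-temperature limit: fitting ln A against T per site and extrapolating the linear trend to T = 0 then compares areas at <x^2> = 0, where they reflect the site populations. All numbers below are synthetic, not data from the paper.

```python
import numpy as np

T = np.array([80.0, 120.0, 160.0, 200.0, 240.0, 295.0])   # kelvin
lnA_site1 = np.log(2.0) - 3.0e-3 * T   # site 1: n = 2 (arbitrary units)
lnA_site2 = np.log(1.0) - 1.5e-3 * T   # site 2: n = 1, stiffer lattice

b1, a1 = np.polyfit(T, lnA_site1, 1)   # slope and intercept per site
b2, a2 = np.polyfit(T, lnA_site2, 1)
ratio = np.exp(a1 - a2)                # population ratio at <x^2> = 0
print(ratio)   # recovers 2.0, unlike the raw area ratio at any single T
```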

  13. Quantitative SPECT reconstruction of iodine-123 data

    International Nuclear Information System (INIS)

    Gilland, D.R.; Jaszczak, R.J.; Greer, K.L.; Coleman, R.E.

    1991-01-01

    Many clinical and research studies in nuclear medicine require quantitation of the iodine-123 (¹²³I) distribution for the determination of kinetics or localization. The objective of this study was to implement several reconstruction methods designed for single-photon emission computed tomography (SPECT) using ¹²³I and to evaluate their performance in terms of quantitative accuracy, image artifacts, and noise. The methods consisted of four attenuation and scatter compensation schemes incorporated into both the filtered backprojection/Chang (FBP) and maximum likelihood-expectation maximization (ML-EM) reconstruction algorithms. The methods were evaluated on data acquired from a phantom containing a hot sphere of ¹²³I activity in a lower-level background ¹²³I distribution and nonuniform density media. For both reconstruction algorithms, nonuniform attenuation compensation combined with either scatter subtraction or Metz filtering produced images that were quantitatively accurate to within 15% of the true value. The ML-EM algorithm demonstrated quantitative accuracy comparable to FBP and smaller relative noise magnitude for all compensation schemes.
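
    The ML-EM update mentioned above has a compact closed form, x <- x / (A^T 1) * A^T(y / (A x)); a minimal sketch with a random stand-in system matrix and Poisson data is below. A real SPECT implementation would build A from the scanner geometry and fold the attenuation and scatter compensation schemes under study into it.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pix, n_bins = 64, 96
A = rng.uniform(0.0, 1.0, size=(n_bins, n_pix))    # stand-in system matrix
x_true = rng.uniform(0.5, 2.0, size=n_pix)         # activity map
y = rng.poisson(A @ x_true)                        # measured counts

x = np.ones(n_pix)                                 # positive initial image
sens = A.T @ np.ones(n_bins)                       # sensitivity, A^T 1
for it in range(200):
    proj = A @ x                                   # forward projection
    x *= (A.T @ (y / np.maximum(proj, 1e-12))) / sens
print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))  # relative error
```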

  14. DNA DAMAGE QUANTITATION BY ALKALINE GEL ELECTROPHORESIS.

    Energy Technology Data Exchange (ETDEWEB)

    SUTHERLAND,B.M.; BENNETT,P.V.; SUTHERLAND, J.C.

    2004-03-24

    Physical and chemical agents in the environment, those used in clinical applications, or encountered during recreational exposures to sunlight, induce damages in DNA. Understanding the biological impact of these agents requires quantitation of the levels of such damages in laboratory test systems as well as in field or clinical samples. Alkaline gel electrophoresis provides a sensitive (down to approximately a few lesions/5 Mb), rapid method of direct quantitation of a wide variety of DNA damages in nanogram quantities of non-radioactive DNAs from laboratory, field, or clinical specimens, including higher plants and animals. This method stems from velocity sedimentation studies of DNA populations, and from the simple methods of agarose gel electrophoresis. Our laboratories have developed quantitative agarose gel methods, analytical descriptions of DNA migration during electrophoresis on agarose gels (1-6), and electronic imaging for accurate determinations of DNA mass (7-9). Although all these components improve sensitivity and throughput of large numbers of samples (7,8,10), a simple version using only standard molecular biology equipment allows routine analysis of DNA damages at moderate frequencies. We present here a description of the methods, as well as a brief description of the underlying principles, required for a simplified approach to quantitation of DNA damages by alkaline gel electrophoresis.
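
    One standard route from gel data to damage levels, sketched below under the assumption of Poisson-distributed lesions: the lesion frequency follows from the number-average molecular length Ln of treated versus control samples as 1/Ln(treated) - 1/Ln(control), with Ln computed from the calibrated mass profile of the lane. The profile values are invented for illustration.

```python
import numpy as np

def number_average_length(lengths_kb, mass_profile):
    # Ln = total mass / total number of molecules = sum(m) / sum(m / L)
    m = np.asarray(mass_profile, float)
    L = np.asarray(lengths_kb, float)
    return m.sum() / (m / L).sum()

L_bins = np.array([5.0, 10.0, 20.0, 40.0, 80.0])          # fragment sizes, kb
Ln_ctrl = number_average_length(L_bins, [1, 2, 5, 12, 30])  # control lane
Ln_dose = number_average_length(L_bins, [8, 12, 14, 9, 4])  # treated lane
print(1.0 / Ln_dose - 1.0 / Ln_ctrl, "lesions per kb")
```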

  15. Critical Quantitative Inquiry in Context

    Science.gov (United States)

    Stage, Frances K.; Wells, Ryan S.

    2014-01-01

    This chapter briefly traces the development of the concept of critical quantitative inquiry, provides an expanded conceptualization of the tasks of critical quantitative research, offers theoretical explanation and justification for critical research using quantitative methods, and previews the work of quantitative criticalists presented in this…

  16. Peopling of the North Circumpolar Region--insights from Y chromosome STR and SNP typing of Greenlanders.

    Directory of Open Access Journals (Sweden)

    Jill Katharina Olofsson

    Full Text Available The human population in Greenland is characterized by migration events of Paleo- and Neo-Eskimos, as well as admixture with Europeans. In this study, the Y-chromosomal variation in male Greenlanders was investigated in detail by typing 73 Y-chromosomal single nucleotide polymorphisms (Y-SNPs) and 17 Y-chromosomal short tandem repeats (Y-STRs). Approximately 40% of the analyzed Greenlandic Y chromosomes were of European origin (I-M170, R1a-M513 and R1b-M343). Y chromosomes of European origin were mainly found in individuals from the west and south coasts of Greenland, which is in agreement with the historical records of the geographic placement of European settlements in Greenland. Two Inuit Y-chromosomal lineages, Q-M3 (xM19, M194, L663, SA01, L766) and Q-NWT01 (xM265), were found in 23% and 31% of the male Greenlanders, respectively. The time to the most recent common ancestor (TMRCA) of the Q-M3 lineage of the Greenlanders was estimated to be between 4,400 and 10,900 years ago (y.a.) using two different methods. This is in agreement with the theory that the North Circumpolar Region was populated via a second expansion of humans into the North American continent. The TMRCA of the Q-NWT01 (xM265) lineage in Greenland was estimated to be between 7,000 and 14,300 y.a. using two different methods, which is older than the previously reported TMRCA of this lineage in other Inuit populations. Our results indicate that Inuit individuals carrying the Q-NWT01 (xM265) lineage may have their origin in the northeastern parts of North America and could be descendants of the Dorset culture. This in turn points to the possibility that the current Inuit population in Greenland comprises individuals of both Thule and Dorset descent.

  17. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    KAUST Repository

    Pan, Bing

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved through three improvements. First, to eliminate the need to update the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly, and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation to each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and of existing typical DVC algorithms is first analyzed quantitatively according to the necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost. © 2014 Elsevier Ltd.
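
    The reliability-guided transfer can be pictured as a best-first traversal; the schematic sketch below processes calculation points in descending correlation quality with a priority queue and hands each accepted displacement to untried neighbors as their initial guess. refine() is a hypothetical placeholder for the IC-GN sub-voxel registration step.

```python
import heapq
import itertools
import numpy as np

def refine(point, guess):
    # placeholder: a real implementation runs IC-GN starting from `guess`;
    # here the transferred guess is accepted unchanged, for illustration
    return guess

rng = np.random.default_rng(1)
ny, nx = 8, 8
quality = rng.uniform(0.5, 1.0, size=(ny, nx))   # e.g. ZNCC per point
disp = {}                                        # accepted displacements
tie = itertools.count()                          # heap tie-breaker

seed = (ny // 2, nx // 2)
heap = [(-quality[seed], next(tie), seed, np.zeros(2))]
while heap:
    negq, _, (i, j), guess = heapq.heappop(heap)
    if (i, j) in disp:
        continue                                 # already computed
    disp[(i, j)] = refine((i, j), guess)
    for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
        ni, nj = i + di, j + dj
        if 0 <= ni < ny and 0 <= nj < nx and (ni, nj) not in disp:
            heapq.heappush(
                heap, (-quality[ni, nj], next(tie), (ni, nj), disp[(i, j)]))
print(len(disp), "points computed, highest-quality seeds first")
```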

  18. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    Science.gov (United States)

    Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved through three improvements. First, to eliminate the need to update the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly, and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation to each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and of existing typical DVC algorithms is first analyzed quantitatively according to the necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.

  19. Applied quantitative finance

    CERN Document Server

    Chen, Cathy; Overbeck, Ludger

    2017-01-01

    This volume provides practical solutions and introduces recent theoretical developments in risk management, pricing of credit derivatives, quantification of volatility and copula modeling. This third edition is devoted to modern risk analysis based on quantitative methods and textual analytics to meet the current challenges in banking and finance. It includes 14 new contributions and presents a comprehensive, state-of-the-art treatment of cutting-edge methods and topics, such as collateralized debt obligations, the high-frequency analysis of market liquidity, and realized volatility. The book is divided into three parts: Part 1 revisits important market risk issues, while Part 2 introduces novel concepts in credit risk and its management along with updated quantitative methods. The third part discusses the dynamics of risk management and includes risk analysis of energy markets and for cryptocurrencies. Digital assets, such as blockchain-based currencies, have become popular but are theoretically challenging...

  20. Quantitative skeletal scintiscanning

    International Nuclear Information System (INIS)

    Haushofer, R.

    1982-01-01

    330 patients were examined by skeletal scintiscanning with 99mTc-pyrophosphate and 99mTc-methylene diphosphonate in the years between 1977 and 1979. Course control examinations were carried out in 12 patients. The collective of patients presented with primary skeletal tumours, metastases, inflammatory and degenerative skeletal diseases. Bone scintiscanning combined with the ''region of interest'' technique was found to be an objective and reproducible technique for quantitative measurement of skeletal radioactivity concentrations. The validity of nuclear skeletal examinations can thus be enhanced as far as diagnosis, course control, and differential diagnosis are concerned. Quantitative skeletal scintiscanning by means of the ''region of interest'' technique has opened up a new era in skeletal diagnosis by nuclear methods. (orig./MG) [de

  1. Accurate core-electron binding energy shifts from density functional theory

    International Nuclear Information System (INIS)

    Takahata, Yuji; Marques, Alberto Dos Santos

    2010-01-01

    The current review covers the description of density functional methods for the calculation of accurate core-electron binding energies (CEBEs) of second- and third-row atoms, and applications of calculated CEBEs and CEBE shifts (ΔCEBEs) in the elucidation of topics such as hydrogen bonding, the peptide bond, polymers, DNA bases, Hammett substituent (σ) constants, inductive and resonance effects, quantitative structure-activity relationships (QSAR), and solid-state effects (WD). This review limits itself mainly to works of Chong and his coworkers for the period after 2002. It is not a fully comprehensive account of the current state of the art.

  2. Quantitative traits and diversification.

    Science.gov (United States)

    FitzJohn, Richard G

    2010-12-01

    Quantitative traits have long been hypothesized to affect speciation and extinction rates. For example, smaller body size or increased specialization may be associated with increased rates of diversification. Here, I present a phylogenetic likelihood-based method (quantitative state speciation and extinction [QuaSSE]) that can be used to test such hypotheses using extant character distributions. This approach assumes that diversification follows a birth-death process where speciation and extinction rates may vary with one or more traits that evolve under a diffusion model. Speciation and extinction rates may be arbitrary functions of the character state, allowing much flexibility in testing models of trait-dependent diversification. I test the approach using simulated phylogenies and show that a known relationship between speciation and a quantitative character could be recovered in up to 80% of the cases on large trees (500 species). Consistent with other approaches, detecting shifts in diversification due to differences in extinction rates was harder than when due to differences in speciation rates. Finally, I demonstrate the application of QuaSSE to investigate the correlation between body size and diversification in primates, concluding that clade-specific differences in diversification may be more important than size-dependent diversification in shaping the patterns of diversity within this group.
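
    A forward simulation in the spirit of the model (not the QuaSSE likelihood calculation itself) makes the trait-diversification coupling concrete: lineages carry a Brownian trait and speciate at a rate that is a logistic function of it, with constant extinction. All rates below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)
l0, l1, xmid, s = 0.05, 0.30, 0.0, 0.5   # speciation rate vs. trait
mu, sigma2 = 0.03, 0.05                  # extinction rate, diffusion rate
dt, t_max = 0.01, 30.0

def lam(x):
    # logistic trait-dependent speciation rate
    return l0 + (l1 - l0) / (1.0 + np.exp(-(x - xmid) / s))

traits = [0.0]                           # single root lineage
t = 0.0
while t < t_max and 0 < len(traits) < 5000:
    x = np.array(traits)
    x = x + rng.normal(0.0, np.sqrt(sigma2 * dt), size=x.size)  # BM step
    u = rng.uniform(size=x.size)
    sp = u < lam(x) * dt                         # speciation events
    ex = ~sp & (u < (lam(x) + mu) * dt)          # extinction events
    traits = list(x[~ex]) + list(x[sp])          # survivors + daughters
    t += dt
if traits:
    print(len(traits), "extant lineages; mean trait", np.mean(traits))
else:
    print("clade went extinct")
```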

  3. Quantitative analysis of myocardial tissue with digital autofluorescence microscopy

    Directory of Open Access Journals (Sweden)

    Thomas Jensen

    2016-01-01

    Full Text Available Background: The opportunity offered by whole-slide scanners for automated histological analysis implies an ever-increasing importance of digital pathology. To go beyond conventional pathology, however, digital pathology may need a basic histological starting point similar to that of hematoxylin and eosin staining in conventional pathology. This study presents an automated fluorescence-based microscopy approach providing highly detailed morphological data from unstained microsections. These data may provide a basic histological starting point from which further digital analysis, including staining, may benefit. Methods: This study explores the inherent tissue fluorescence, also known as autofluorescence, as a means to quantify cardiac tissue components in histological microsections. Data acquisition using a commercially available whole-slide scanner and an image-based quantitation algorithm are presented. Results: It is shown that the autofluorescence intensity of unstained microsections at two different wavelengths is a suitable starting point for automated digital analysis of myocytes, fibrous tissue, lipofuscin, and the extracellular compartment. The output of the method is absolute quantitation along with accurate outlines of the above-mentioned components. The digital quantitations are verified by comparison to point-grid quantitations performed on the microsections after Van Gieson staining. Conclusion: The presented method is aptly described as a prestain multicomponent quantitation and outlining tool for histological sections of cardiac tissue. The main perspective is the opportunity for combination with digital analysis of stained microsections, for which the method may provide an accurate digital framework.
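
    A hedged sketch of the kind of two-wavelength, image-based quantitation described above: per-pixel autofluorescence intensities in two channels are sorted into tissue classes by simple threshold rules, and class areas are reported in absolute units. The channels, thresholds, and pixel calibration are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)
ch_a = rng.uniform(0.0, 1.0, size=(512, 512))   # autofluorescence, channel A
ch_b = rng.uniform(0.0, 1.0, size=(512, 512))   # autofluorescence, channel B
um2_per_pixel = 0.25 * 0.25                      # scanner calibration

myocyte   = (ch_a > 0.55) & (ch_b < 0.50)        # illustrative rules
fibrosis  = (ch_b >= 0.50)
extracell = ~(myocyte | fibrosis)

for name, mask in [("myocyte", myocyte), ("fibrous", fibrosis),
                   ("extracellular", extracell)]:
    # absolute area of each class in square micrometres
    print(name, mask.sum() * um2_per_pixel, "um^2")
```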

  4. Multidetector row computed tomography may accurately estimate plaque vulnerability. Does MDCT accurately estimate plaque vulnerability? (Pro)

    International Nuclear Information System (INIS)

    Komatsu, Sei; Imai, Atsuko; Kodama, Kazuhisa

    2011-01-01

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease, and MDCT is now approaching intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using the computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change under statin therapy. These two studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time courses. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and acts as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal electrocardiogram (ECG) and cardiac enzymes in the emergency department. MDCT has the potential to analyze coronary plaque quantitatively and qualitatively if some remaining problems are resolved, and may become an essential tool for detecting and preventing coronary artery disease in the future. (author)

  5. Toward 3D structural information from quantitative electron exit wave analysis

    International Nuclear Information System (INIS)

    Borisenko, Konstantin B; Moldovan, Grigore; Kirkland, Angus I; Wang, Amy; Van Dyck, Dirk; Chen, Fu-Rong

    2012-01-01

    Simulations show that using a new direct imaging detector and accurate exit wave restoration algorithms allows nearly quantitative restoration of the electron exit wave phase, which can be regarded as only qualitative for conventional indirect imaging cameras. This opens up the possibility of extracting accurate information on the 3D atomic structure of the sample even from a single projection.

  6. Absolute quantitation of proteins by Acid hydrolysis combined with amino Acid detection by mass spectrometry

    DEFF Research Database (Denmark)

    Mirgorodskaya, Olga A; Körner, Roman; Kozmin, Yuri P

    2012-01-01

    Amino acid analysis is among the most accurate methods for absolute quantification of proteins and peptides. Here, we combine acid hydrolysis with the addition of isotopically labeled standard amino acids and analysis by mass spectrometry for accurate and sensitive protein quantitation...

  7. Accurate formulas for the penalty caused by interferometric crosstalk

    DEFF Research Database (Denmark)

    Rasmussen, Christian Jørgen; Liu, Fenghai; Jeppesen, Palle

    2000-01-01

    New simple formulas for the penalty caused by interferometric crosstalk in PIN receiver systems and optically preamplified receiver systems are presented. They are more accurate than existing formulas.

  8. A new, accurate predictive model for incident hypertension

    DEFF Research Database (Denmark)

    Völzke, Henry; Fung, Glenn; Ittermann, Till

    2013-01-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures.

  9. Accurate and Simple Calibration of DLP Projector Systems

    DEFF Research Database (Denmark)

    Wilm, Jakob; Olesen, Oline Vinter; Larsen, Rasmus

    2014-01-01

    does not rely on an initial camera calibration, and so does not carry over the error into projector calibration. A radial interpolation scheme is used to convert features coordinates into projector space, thereby allowing for a very accurate procedure. This allows for highly accurate determination...

  10. Accurate Compton scattering measurements for N₂ molecules

    Energy Technology Data Exchange (ETDEWEB)

    Kobayashi, Kohjiro [Advanced Technology Research Center, Gunma University, 1-5-1 Tenjin-cho, Kiryu, Gunma 376-8515 (Japan); Itou, Masayoshi; Tsuji, Naruki; Sakurai, Yoshiharu [Japan Synchrotron Radiation Research Institute (JASRI), 1-1-1 Kouto, Sayo-cho, Sayo-gun, Hyogo 679-5198 (Japan); Hosoya, Tetsuo; Sakurai, Hiroshi, E-mail: sakuraih@gunma-u.ac.jp [Department of Production Science and Technology, Gunma University, 29-1 Hon-cho, Ota, Gunma 373-0057 (Japan)

    2011-06-14

    The accurate Compton profiles of N₂ gas were measured using 121.7 keV synchrotron x-rays. The present accurate measurement shows better agreement with the CI (configuration interaction) calculation than with the Hartree-Fock calculation, and suggests the importance of multi-excitation in CI calculations for the accuracy of ground-state wavefunctions.

  11. Are risks quantitatively determinable

    International Nuclear Information System (INIS)

    Buetzer, P.

    1985-01-01

    ''Chemical risks'' can only be determined with accurate figures in a few extraordinary cases. The difficulties lie, as shown by the example of the Flixborough catastrophe, mostly in determining the probabilities of occurrence. With a rough semiquantitative estimate of the potential hazards and the corresponding probabilities, the risks can be predicted with astonishing accuracy. Statistical data from incidents in the chemical industry are very useful, and they also show that ''chemical catastrophes'' are only to a very small extent initiated by uncontrolled chemical reactions. (orig.) [de

  12. Quantitative cardiac computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Thelen, M.; Dueber, C.; Wolff, P.; Erbel, R.; Hoffmann, T.

    1985-06-01

    The scope and limitations of quantitative cardiac CT have been evaluated in a series of experimental and clinical studies. The left-ventricular muscle mass was estimated by computed tomography in 19 dogs (using volumetric methods, measurements in two axes and planes, and a reference volume). There was good correlation with anatomical findings. The end-diastolic volume of the left ventricle was estimated in 22 patients with cardiomyopathies; using angiography as a reference, CT led to systematic underestimation. It is also shown that ECG-triggered magnetic resonance tomography results in improved visualisation and may be expected to improve measurements of cardiac morphology.

  13. F# for quantitative finance

    CERN Document Server

    Astborg, Johan

    2013-01-01

    To develop your confidence in F#, this tutorial will first introduce you to simpler tasks such as curve fitting. You will then advance to more complex tasks such as implementing algorithms for trading semi-automation in a practical scenario-based format. If you are a data analyst or a practitioner in quantitative finance, economics, or mathematics and wish to learn how to use F# as a functional programming language, this book is for you. You should have a basic conceptual understanding of financial concepts and models. Elementary knowledge of the .NET framework would also be helpful.

  14. Quantitative Imaging with a Mobile Phone Microscope

    Science.gov (United States)

    Skandarajah, Arunan; Reber, Clay D.; Switz, Neil A.; Fletcher, Daniel A.

    2014-01-01

    Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone–based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications. PMID:24824072
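
    Two of the corrections discussed above can be sketched directly: the flat-field step below removes vignetting and nonuniform illumination, and the linearization step undoes an assumed pure power-law (gamma) encoding so pixel values become proportional to irradiance. Real phone cameras need their response measured; the gamma value here is an assumption.

```python
import numpy as np

def flat_field(raw, flat, dark):
    # classic (raw - dark) / (flat - dark), rescaled to the flat's mean
    num = raw.astype(float) - dark
    den = np.maximum(flat.astype(float) - dark, 1e-6)
    return num / den * den.mean()

def linearize(img8, gamma=2.2):
    # invert an assumed pure power-law encoding (true sRGB is piecewise)
    return (img8.astype(float) / 255.0) ** gamma

# synthetic vignetted acquisition of a uniform sample
dark = np.full((64, 64), 2.0)
r2 = ((np.indices((64, 64)) - 32) ** 2).sum(axis=0)
flat = 200.0 * np.exp(-r2 / 3000.0)            # illumination falloff
raw = flat * 0.5 + dark
corrected = flat_field(raw, flat + dark, dark)
print(corrected.std() / corrected.mean())      # ~0: vignetting removed
```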

  15. Quantitative imaging with a mobile phone microscope.

    Directory of Open Access Journals (Sweden)

    Arunan Skandarajah

    Full Text Available Use of optical imaging for medical and scientific applications requires accurate quantification of features such as object size, color, and brightness. High pixel density cameras available on modern mobile phones have made photography simple and convenient for consumer applications; however, the camera hardware and software that enables this simplicity can present a barrier to accurate quantification of image data. This issue is exacerbated by automated settings, proprietary image processing algorithms, rapid phone evolution, and the diversity of manufacturers. If mobile phone cameras are to live up to their potential to increase access to healthcare in low-resource settings, limitations of mobile phone-based imaging must be fully understood and addressed with procedures that minimize their effects on image quantification. Here we focus on microscopic optical imaging using a custom mobile phone microscope that is compatible with phones from multiple manufacturers. We demonstrate that quantitative microscopy with micron-scale spatial resolution can be carried out with multiple phones and that image linearity, distortion, and color can be corrected as needed. Using all versions of the iPhone and a selection of Android phones released between 2007 and 2012, we show that phones with greater than 5 MP are capable of nearly diffraction-limited resolution over a broad range of magnifications, including those relevant for single cell imaging. We find that automatic focus, exposure, and color gain standard on mobile phones can degrade image resolution and reduce accuracy of color capture if uncorrected, and we devise procedures to avoid these barriers to quantitative imaging. By accommodating the differences between mobile phone cameras and the scientific cameras, mobile phone microscopes can be reliably used to increase access to quantitative imaging for a variety of medical and scientific applications.

  16. Quantitative and qualitative coronary arteriography. 1

    International Nuclear Information System (INIS)

    Brown, B.G.; Simpson, Paul; Dodge, J.T. Jr; Bolson, E.L.; Dodge, H.T.

    1991-01-01

    The clinical objectives of arteriography are to obtain information that contributes to an understanding of the mechanisms of the clinical syndrome, provides prognostic information, facilitates therapeutic decisions, and guides invasive therapy. Quantitative and improved qualitative assessments of arterial disease provide us with a common descriptive language which has the potential to accomplish these objectives more effectively and thus to improve clinical outcome. In certain situations, this potential has been demonstrated. Clinical investigation using quantitative techniques has definitely contributed to our understanding of disease mechanisms and of atherosclerosis progression/regression. Routine quantitation of clinical images should permit more accurate and repeatable estimates of disease severity and promises to provide useful estimates of coronary flow reserve. But routine clinical QCA awaits more cost- and time-efficient methods and clear proof of a clinical advantage. Careful inspection of highly magnified, high-resolution arteriographic images reveals morphologic features related to the pathophysiology of the clinical syndrome and to the likelihood of future progression or regression of obstruction. Features that have been found useful include thrombus in its various forms, ulceration and irregularity, eccentricity, flexing and dissection. The description of such high-resolution features should be included among, rather than excluded from, the goals of image processing, since they contribute substantially to the understanding and treatment of the clinical syndrome. (author). 81 refs.; 8 figs.; 1 tab

  17. Sample normalization methods in quantitative metabolomics.

    Science.gov (United States)

    Wu, Yiman; Li, Liang

    2016-01-22

    To reveal metabolomic changes caused by a biological event in quantitative metabolomics, it is critical to use an analytical tool that can perform accurate and precise quantification to examine the true concentration differences of individual metabolites found in different samples. A number of steps are involved in metabolomic analysis including pre-analytical work (e.g., sample collection and storage), analytical work (e.g., sample analysis) and data analysis (e.g., feature extraction and quantification). Each one of them can influence the quantitative results significantly and thus should be performed with great care. Among them, the total sample amount or concentration of metabolites can be significantly different from one sample to another. Thus, it is critical to reduce or eliminate the effect of total sample amount variation on quantification of individual metabolites. In this review, we describe the importance of sample normalization in the analytical workflow with a focus on mass spectrometry (MS)-based platforms, discuss a number of methods recently reported in the literature and comment on their applicability in real world metabolomics applications. Sample normalization has been sometimes ignored in metabolomics, partially due to the lack of a convenient means of performing sample normalization. We show that several methods are now available and sample normalization should be performed in quantitative metabolomics where the analyzed samples have significant variations in total sample amounts. Copyright © 2015 Elsevier B.V. All rights reserved.
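
    One widely used option, probabilistic quotient normalization (PQN), estimates each sample's most probable dilution factor as the median ratio of its features to a reference spectrum; a minimal sketch on synthetic data is below and recovers an imposed dilution series up to a common constant.

```python
import numpy as np

rng = np.random.default_rng(5)
base = rng.uniform(1.0, 100.0, size=200)            # "true" profile
dilution = np.array([1.0, 0.5, 2.0, 1.5])           # per-sample dilution
X = np.outer(dilution, base) * rng.normal(1.0, 0.05, size=(4, 200))

ref = np.median(X, axis=0)                          # reference spectrum
quotients = X / ref                                 # feature-wise ratios
factors = np.median(quotients, axis=1)              # per-sample factor
X_norm = X / factors[:, None]                       # normalized matrix
print(factors)        # proportional to the imposed dilution series
```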

  18. Some exercises in quantitative NMR imaging

    International Nuclear Information System (INIS)

    Bakker, C.J.G.

    1985-01-01

    The articles presented in this thesis result from a series of investigations evaluating the potential of NMR imaging as a quantitative research tool. The first article explores the possible use of the proton spin-lattice relaxation time T1 in tissue characterization, tumor recognition, and monitoring tissue response to radiotherapy. The next article addresses the question whether water proton spin-lattice relaxation curves of biological tissues are adequately described by a single time constant T1, and analyzes the implications of multi-exponentiality for quantitative NMR imaging. In the third article the use of NMR imaging as a quantitative research tool is discussed on the basis of phantom experiments. The fourth article describes a method which enables unambiguous retrieval of sign information in a set of magnetic resonance images of the inversion recovery type. The next article shows how this method can be adapted to allow accurate calculation of T1 pictures on a pixel-by-pixel basis. The sixth article, finally, describes a simulation procedure which enables a straightforward determination of NMR imaging pulse sequence parameters for optimal tissue contrast. (orig.)

  19. Quantitative performance monitoring

    International Nuclear Information System (INIS)

    Heller, A.S.

    1987-01-01

    In the recently published update of NUREG/CR 3883, it was shown that Japanese plants of size and design similar to those in the US have significantly fewer trips in a given year of operation. One way to reduce such an imbalance is the efficient use of available plant data. Since plant data are recorded and monitored continuously for management feedback and timely resolution of problems, these data should be actively used to increase the efficiency of operations and, ultimately, to reduce plant trips in power plants. A great deal of information is lost, however, if the analytical tools available for data evaluation are misapplied or not adopted at all. This paper deals with a program developed to use quantitative techniques to monitor personnel performance in an operating power plant. Visual comparisons of ongoing performance with predetermined quantitative performance goals are made. Continuous feedback is provided to management for early detection of adverse trends and timely resolution of problems. Ultimately, costs are reduced through effective resource management and timely decision making.

  20. Quantitative clinical radiobiology

    International Nuclear Information System (INIS)

    Bentzen, S.M.

    1993-01-01

    Based on a series of recent papers, a status report is given of our current ability to quantify the radiobiology of human tumors and normal tissues. Progress has been made in the methods of analysis. This includes the introduction of 'direct' (maximum likelihood) analysis, incorporation of latent time in the analyses, and statistical approaches to allow for the many factors of importance in predicting tumor-control probability or normal-tissue complications. Quantitative clinical radiobiology of normal tissues is reviewed with emphasis on fractionation sensitivity, repair kinetics, regeneration, latency, and the steepness of dose-response curves. In addition, combined modality treatment, functional endpoints, and the search for a correlation between the occurrence of different endpoints in the same individual are discussed. For tumors, quantitative analyses of fractionation sensitivity, repair kinetics, reoxygenation, and regeneration are reviewed. Other factors influencing local control are tumor volume, histopathologic differentiation, and hemoglobin concentration. The steepness of the dose-response curve for tumors is also discussed. Radiobiological strategies for improving radiotherapy are discussed with emphasis on non-standard fractionation and individualization of treatment schedules. (orig.)

  1. Automated selected reaction monitoring software for accurate label-free protein quantification.

    Science.gov (United States)

    Teleman, Johan; Karlsson, Christofer; Waldemarson, Sofia; Hansson, Karin; James, Peter; Malmström, Johan; Levander, Fredrik

    2012-07-06

    Selected reaction monitoring (SRM) is a mass spectrometry method with documented ability to quantify proteins accurately and reproducibly using labeled reference peptides. However, the use of labeled reference peptides becomes impractical if large numbers of peptides are targeted and when high flexibility is desired when selecting peptides. We have developed a label-free quantitative SRM workflow that relies on a new automated algorithm, Anubis, for accurate peak detection. Anubis efficiently removes interfering signals from contaminating peptides to estimate the true signal of the targeted peptides. We evaluated the algorithm on a published multisite data set and achieved results in line with manual data analysis. In complex peptide mixtures from whole proteome digests of Streptococcus pyogenes we achieved a technical variability across the entire proteome abundance range of 6.5-19.2%, which was considerably below the total variation across biological samples. Our results show that the label-free SRM workflow with automated data analysis is feasible for large-scale biological studies, opening up new possibilities for quantitative proteomics and systems biology.

  2. Conjugate whole-body scanning system for quantitative measurement of organ distribution in vivo

    International Nuclear Information System (INIS)

    Tsui, B.M.W.; Chen, C.T.; Yasillo, N.J.; Ortega, C.J.; Charleston, D.B.; Lathrop, K.A.

    1979-01-01

    The determination of accurate, quantitative, biokinetic distribution of an internally dispersed radionuclide in humans is important in making realistic radiation absorbed dose estimates, studying biochemical transformations in health and disease, and developing clinical procedures indicative of abnormal functions. In order to collect these data, a whole-body imaging system is required which provides both adequate spatial resolution and some means of absolute quantitation. Based on these considerations, a new whole-body scanning system has been designed and constructed that employs the conjugate counting technique. The conjugate whole-body scanning system provides an efficient and accurate means of collecting absolute quantitative organ distribution data of radioactivity in vivo
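
    The conjugate counting technique combines opposed (e.g., anterior and posterior) views so that the source depth cancels to first order. A standard textbook form of conjugate-view quantitation (general background, not quoted from this paper) is:

        A = \frac{1}{c}\sqrt{\frac{I_A \, I_P}{T}}, \qquad T = e^{-\mu L}

    where I_A and I_P are the conjugate count rates, L is the body thickness along the ray, mu is the effective linear attenuation coefficient, T is the measured transmission, and c is the system calibration factor (counts per unit activity).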

  3. Accurate bearing measurement in a non-cooperative passive location system

    International Nuclear Information System (INIS)

    Liu Zhiqiang; Ma Hongguang; Yang Lifeng

    2007-01-01

    A non-cooperative passive location system based on an array is proposed. In the system, the target is detected by beamforming and Doppler matched filtering, and the bearing is measured by a long-baseline interferometer composed of widely separated sub-arrays. With a long baseline, the bearing is measured accurately but ambiguously. To obtain an unambiguous, accurate bearing, beam-width and multiple-constraint adaptive beamforming techniques are used to resolve the azimuth ambiguity. Theory and simulation results show that this method is effective for accurate bearing measurement in a non-cooperative passive location system. (authors)
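
    For a two-element interferometer of baseline d, the measured phase fixes the bearing only up to integer multiples of 2*pi; a coarse but unambiguous bearing (e.g., from the full-array beamformer) then selects the correct branch. A minimal sketch of that selection logic, with all values hypothetical:

        import numpy as np

        def resolve_bearing(phase, d, lam, coarse_bearing):
            """Pick the interferometer bearing branch closest to a coarse estimate.

            phase          -- measured phase difference in radians, wrapped to (-pi, pi]
            d, lam         -- baseline and wavelength, same units
            coarse_bearing -- unambiguous but low-accuracy bearing (radians)
            """
            k_max = int(np.floor(d / lam))               # number of possible 2*pi wraps
            candidates = []
            for k in range(-k_max, k_max + 1):
                s = lam * (phase + 2 * np.pi * k) / (2 * np.pi * d)
                if abs(s) <= 1.0:
                    candidates.append(np.arcsin(s))
            return min(candidates, key=lambda b: abs(b - coarse_bearing))

        # Hypothetical numbers: 10-wavelength baseline, true bearing ~0.30 rad
        d, lam, true_b = 10.0, 1.0, 0.30
        phase = np.angle(np.exp(1j * 2 * np.pi * d / lam * np.sin(true_b)))  # wrapped
        print(resolve_bearing(phase, d, lam, coarse_bearing=0.28))           # ~0.30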

  4. Quantitative radionuclide angiocardiography

    International Nuclear Information System (INIS)

    Scholz, P.M.; Rerych, S.K.; Moran, J.F.; Newman, G.E.; Douglas, J.M.; Sabiston, D.C. Jr.; Jones, R.H.

    1980-01-01

    This study introduces a new method for calculating actual left ventricular volumes and cardiac output from data recorded during a single transit of a radionuclide bolus through the heart, and describes in detail current radionuclide angiocardiography methodology. A group of 64 healthy adults with a wide age range were studied to define the normal range of hemodynamic parameters determined by the technique. Radionuclide angiocardiograms were performed in patients undergoing cardiac catheterization to validate the measurements. In 33 patients studied by both techniques on the same day, a close correlation was documented for measurements of ejection fraction and end-diastolic volume. To validate the method of volumetric cardiac output calculation, 33 simultaneous radionuclide and indocyanine green dye determinations of cardiac output were performed in 18 normal young adults. These independent comparisons of radionuclide measurements with two separate methods document that initial-transit radionuclide angiocardiography accurately assesses left ventricular function.

  5. Accurate determination of rates from non-uniformly sampled relaxation data

    Energy Technology Data Exchange (ETDEWEB)

    Stetz, Matthew A.; Wand, A. Joshua, E-mail: wand@upenn.edu [University of Pennsylvania Perelman School of Medicine, Johnson Research Foundation and Department of Biochemistry and Biophysics (United States)

    2016-08-15

    The application of non-uniform sampling (NUS) to relaxation experiments traditionally used to characterize the fast internal motion of proteins is quantitatively examined. Experimentally acquired Poisson-gap sampled data reconstructed with iterative soft thresholding are compared to regular sequentially sampled (RSS) data. Using ubiquitin as a model system, it is shown that 25 % sampling is sufficient for the determination of quantitatively accurate relaxation rates. When the sampling density is fixed at 25 %, the accuracy of rates is shown to increase sharply with the total number of sampled points until eventually converging near the inherent reproducibility of the experiment. Perhaps contrary to some expectations, it is found that accurate peak height reconstruction is not required for the determination of accurate rates. Instead, inaccuracies in rates arise from inconsistencies in reconstruction across the relaxation series that primarily manifest as a non-linearity in the recovered peak height. This indicates that the performance of an NUS relaxation experiment cannot be predicted from comparison of peak heights using a single RSS reference spectrum. The generality of these findings was assessed using three alternative reconstruction algorithms, eight different relaxation measurements, and three additional proteins that exhibit varying degrees of spectral complexity. From these data, it is revealed that non-linearity in peak height reconstruction across the relaxation series is strongly correlated with errors in NUS-derived relaxation rates. Importantly, it is shown that this correlation can be exploited to reliably predict the performance of an NUS-relaxation experiment by using three or more RSS reference planes from the relaxation series. The RSS reference time points can also serve to provide estimates of the uncertainty of the sampled intensity, which for a typical relaxation times series incurs no penalty in total acquisition time.
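
    The relaxation rates discussed here come from fitting peak heights across the series to a single exponential; inconsistencies in NUS reconstruction then show up as non-linearity in the recovered heights. A minimal sketch of the rate-extraction step (synthetic data, hypothetical values):

        import numpy as np
        from scipy.optimize import curve_fit

        def decay(t, I0, R):
            """Mono-exponential model for a relaxation series: I(t) = I0 * exp(-R t)."""
            return I0 * np.exp(-R * t)

        # Hypothetical relaxation delays (s) and peak heights from reconstructed spectra
        rng = np.random.default_rng(1)
        t = np.array([0.01, 0.05, 0.1, 0.2, 0.4, 0.8])
        heights = 100.0 * np.exp(-2.5 * t) * (1 + 0.01 * rng.standard_normal(t.size))

        (I0, R), cov = curve_fit(decay, t, heights, p0=(heights[0], 1.0))
        R_err = np.sqrt(np.diag(cov))[1]
        print(f"R = {R:.3f} +/- {R_err:.3f} s^-1")   # recovers ~2.5 s^-1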

  6. [A new method of processing quantitative PCR data].

    Science.gov (United States)

    Ke, Bing-Shen; Li, Guang-Yun; Chen, Shi-Min; Huang, Xiang-Yan; Chen, Ying-Jian; Xu, Jun

    2003-05-01

    Standard PCR can no longer satisfy the needs of biotechnology development and clinical research. After numerous kinetic studies, PE found a linear relation between the initial template number and the cycle at which the accumulating fluorescent product becomes detectable, and on this basis developed the quantitative PCR technique used in the PE7700 and PE5700. However, the error of this technique is too great for the needs of biotechnology development and clinical research, and a better quantitative PCR technique is required. The mathematical model submitted here draws on results from the relevant sciences and is based on the PCR principle and a careful analysis of the molecular relationships among the main members of the PCR reaction system. The model describes the functional relation between product quantity (or fluorescence intensity) and the initial template number and other reaction conditions, and accurately reflects the accumulation of PCR product molecules. Accurate quantitative PCR analysis can be performed using this functional relation: the accumulated PCR product quantity can be obtained from the initial template number. With this model, the error of the result depends only on the accuracy of the fluorescence intensity measurement, i.e., on the instrument used. For example, when the fluorescence intensity is accurate to 6 digits and the template size is between 100 and 1,000,000, the accuracy of the quantitative result will be better than 99%. Under the same conditions and on the same instrument, the error differs distinctly between analysis methods; processing the data with the proposed quantitative analysis system yields results about 80 times more accurate than the CT method.
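
    For comparison, the CT method that this paper criticizes quantifies an unknown from a standard curve in which CT is linear in the logarithm of the starting copy number. A minimal sketch of that baseline approach (all values hypothetical):

        import numpy as np

        # Hypothetical standard curve: known starting copies and measured Ct values
        copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6])
        ct     = np.array([30.1, 26.8, 23.4, 20.1, 16.7])

        # Fit Ct = slope * log10(N0) + intercept
        slope, intercept = np.polyfit(np.log10(copies), ct, 1)
        efficiency = 10 ** (-1.0 / slope) - 1     # ~1.0 means 100% amplification per cycle

        def quantify(ct_unknown):
            """Invert the standard curve to estimate the starting copy number."""
            return 10 ** ((ct_unknown - intercept) / slope)

        print(f"efficiency = {efficiency:.2f}, N0 ~ {quantify(25.0):.0f} copies")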

  7. Accurate modeling and maximum power point detection of ...

    African Journals Online (AJOL)

    Accurate modeling and maximum power point detection of photovoltaic systems. Determination of the MPP enables the PV system to deliver the maximum available power. The approach is based on an adaptive artificial neural network, with a proposition for a new sizing procedure.
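
    This record's method is neural-network based; to illustrate what MPP tracking does, the sketch below instead shows the classic perturb-and-observe algorithm against a crudely modeled, hypothetical PV curve:

        def perturb_and_observe(measure_pv, v0, dv=0.1, steps=200):
            """Classic P&O MPP tracker: step the operating voltage uphill in power.

            measure_pv -- function returning (voltage, power) at a commanded voltage
            """
            v, p_prev = v0, 0.0
            direction = +1
            for _ in range(steps):
                v += direction * dv
                _, p = measure_pv(v)
                if p < p_prev:            # power dropped: reverse the perturbation
                    direction = -direction
                p_prev = p
            return v                      # oscillates around the MPP voltage

        # Hypothetical PV curve with an MPP near 16 V
        def fake_panel(v):
            i = max(0.0, 5.0 * (1 - (v / 21.0) ** 8))   # crude I-V shape
            return v, v * i

        print(f"MPP voltage ~ {perturb_and_observe(fake_panel, v0=12.0):.1f} V")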

  8. Accurate determination of light elements by charged particle activation analysis

    International Nuclear Information System (INIS)

    Shikano, K.; Shigematsu, T.

    1989-01-01

    To develop accurate determination of light elements by CPAA, accurate and practical standardization methods and uniform chemical etching are studied, based on the determination of carbon in gallium arsenide using the 12C(d,n)13N reaction. The following results are obtained: (1) the average stopping power method with thick-target yield is useful as an accurate and practical standardization method; (2) the front surface of the sample has to be etched for an accurate estimate of the incident energy; (3) CPAA can be utilized to calibrate light-element analysis by physical methods; (4) the calibration factor for carbon analysis in gallium arsenide by the IR method is determined to be (9.2±0.3) x 10^15 cm^-1. (author)

  9. ACCURATE ESTIMATES OF CHARACTERISTIC EXPONENTS FOR SECOND ORDER DIFFERENTIAL EQUATION

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    In this paper, a second order linear differential equation is considered, and an accurate method for estimating its characteristic exponents is presented. Finally, some examples are given to verify the feasibility of the result.

  10. Importance of molecular diagnosis in the accurate diagnosis of ...

    Indian Academy of Sciences (India)

    Department of Health and Environmental Sciences, Kyoto University Graduate School of Medicine, Yoshida Konoecho. The study addresses the importance of molecular diagnosis in the accurate diagnosis of systemic carnitine deficiency; variants were classified as 'affecting protein function' by SIFT.

  11. Automatic quantitative metallography

    International Nuclear Information System (INIS)

    Barcelos, E.J.B.V.; Ambrozio Filho, F.; Cunha, R.C.

    1976-01-01

    The quantitative determination of metallographic parameters is analysed through a description of the Micro-Videomat automatic image analysis system, applied to the volumetric percentage of pearlite in nodular cast irons, porosity and average grain size in high-density sintered UO2 pellets, and grain size of ferritic steel. The techniques adopted are described and the results obtained are compared with those of the corresponding direct counting processes: counting of systematic points (grid) to measure volume, and the intersection method, using a circumference of known radius, for the average grain size. The technique adopted for nodular cast iron resulted from the small difference in optical reflectivity between graphite and pearlite. Porosity evaluation of sintered UO2 pellets is also analyzed.

  12. Highly accurate time system of the Low Latitude Meridian Circle.

    Science.gov (United States)

    Yang, Jing; Wang, Feng; Li, Zhiming

    In order to obtain a highly accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS-based accurate time system was developed, comprising a GPS receiver, a 1 MHz frequency source and a self-made clock system. The one-second signal of GPS is used to synchronize the clock system, and the information is collected by a computer automatically. The difficulty of the cancellation of the timekeeper can be overcome by using this system.

  13. Accurate Alignment of Plasma Channels Based on Laser Centroid Oscillations

    International Nuclear Information System (INIS)

    Gonsalves, Anthony; Nakamura, Kei; Lin, Chen; Osterhoff, Jens; Shiraishi, Satomi; Schroeder, Carl; Geddes, Cameron; Toth, Csaba; Esarey, Eric; Leemans, Wim

    2011-01-01

    A technique has been developed to accurately align a laser beam through a plasma channel by minimizing the shift in laser centroid and angle at the channel output. If only the shift in centroid or angle is measured, then accurate alignment is provided by minimizing laser centroid motion at the channel exit as the channel properties are scanned. The improvement in alignment accuracy provided by this technique is important for minimizing electron beam pointing errors in laser plasma accelerators.

  14. An accurate metric for the spacetime around neutron stars

    OpenAIRE

    Pappas, George

    2016-01-01

    The problem of having an accurate description of the spacetime around neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a neutron star. Furthermore, an accurate appropriately parameterised metric, i.e., a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to inf...

  15. Accurate forced-choice recognition without awareness of memory retrieval

    OpenAIRE

    Voss, Joel L.; Baym, Carol L.; Paller, Ken A.

    2008-01-01

    Recognition confidence and the explicit awareness of memory retrieval commonly accompany accurate responding in recognition tests. Memory performance in recognition tests is widely assumed to measure explicit memory, but the generality of this assumption is questionable. Indeed, whether recognition in nonhumans is always supported by explicit memory is highly controversial. Here we identified circumstances wherein highly accurate recognition was unaccompanied by hallmark features of explicit ...

  16. Accurate radiotherapy positioning system investigation based on video

    International Nuclear Information System (INIS)

    Tao Shengxiang; Wu Yican

    2006-01-01

    This paper introduces the latest research on patient positioning methods for accurate radiotherapy by the Accurate Radiotherapy Treating System (ARTS) research team of the Institute of Plasma Physics, Chinese Academy of Sciences, including a positioning system based on binocular vision, a position-measuring system based on contour matching, and a breathing-gated positioning control system. Their basic principles, fields of application and prospects are briefly described. (authors)

  17. Accurate screening for synthetic preservatives in beverage using high performance liquid chromatography with time-of-flight mass spectrometry

    International Nuclear Information System (INIS)

    Li Xiuqin; Zhang Feng; Sun Yanyan; Yong Wei; Chu Xiaogang; Fang Yanyan; Zweigenbaum, Jerry

    2008-01-01

    In this study, liquid chromatography time-of-flight mass spectrometry (HPLC/TOF-MS) is applied to the qualitative and quantitative analysis of 18 synthetic preservatives in beverages. Identification by HPLC/TOF-MS is accomplished with the accurate mass (and the empirical formula generated from it) of the protonated molecules [M + H]+ or the deprotonated molecules [M - H]-, along with the accurate masses of their main fragment ions. In order to obtain sufficient sensitivity for quantitation (using the protonated or deprotonated molecule) and additional qualitative mass-spectral information from the fragment ions, a segmented program of fragmentor voltages is designed in positive and negative ion modes, respectively. Accurate mass measurements are highly useful in complex sample analyses since they provide a high degree of specificity, often needed when other interferents are present in the matrix. The mass accuracy typically obtained is routinely better than 3 ppm. The 18 compounds behave linearly in the 0.005-5.0 mg kg^-1 concentration range, with correlation coefficients >0.996. The recoveries at tested concentrations of 1.0-100 mg kg^-1 are 81-106%, with low coefficients of variation; the limits of detection are far below the required maximum residue levels (MRL) for these preservatives in foodstuffs. The method is suitable for routine quantitative and qualitative analyses of synthetic preservatives in foodstuffs.

  18. A simplified and accurate detection of the genetically modified wheat MON71800 with one calibrator plasmid.

    Science.gov (United States)

    Kim, Jae-Hwan; Park, Saet-Byul; Roh, Hyo-Jeong; Park, Sunghoon; Shin, Min-Ki; Moon, Gui Im; Hong, Jin-Hwan; Kim, Hae-Yeong

    2015-06-01

    With the increasing number of genetically modified (GM) events, unauthorized GMO releases into the food market have increased dramatically, and many countries have developed detection tools for them. This study describes qualitative and quantitative detection methods for the unauthorized GM wheat MON71800 using a reference plasmid (pGEM-M71800). The wheat acetyl-CoA carboxylase (acc) gene was used as the endogenous gene. The plasmid pGEM-M71800, which contains both the acc gene and the event-specific target MON71800, was constructed as a positive control for the qualitative and quantitative analyses. The limit of detection in the qualitative PCR assay was approximately 10 copies. In the quantitative PCR assay, the standard deviation and relative standard deviation repeatability values ranged from 0.06 to 0.25 and from 0.23% to 1.12%, respectively. This study supplies a powerful, very simple but accurate detection strategy for unauthorized GM wheat MON71800 that utilizes a single calibrator plasmid. Copyright © 2014 Elsevier Ltd. All rights reserved.
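
    With a single calibrator plasmid carrying both the event-specific and endogenous targets, the GM content of a sample can be estimated from two standard curves. A minimal sketch of that calculation (curve parameters and Ct values hypothetical, not from the paper):

        import numpy as np

        def copies_from_ct(ct, slope, intercept):
            """Invert a plasmid standard curve Ct = slope*log10(copies) + intercept."""
            return 10 ** ((ct - intercept) / slope)

        # Hypothetical standard-curve parameters from calibrator plasmid dilutions
        EVENT = dict(slope=-3.32, intercept=40.1)   # MON71800 event-specific assay
        ENDOG = dict(slope=-3.36, intercept=39.5)   # acc endogenous-gene assay

        def gm_percent(ct_event, ct_endog):
            n_event = copies_from_ct(ct_event, **EVENT)
            n_endog = copies_from_ct(ct_endog, **ENDOG)
            return 100.0 * n_event / n_endog

        print(f"GM content ~ {gm_percent(ct_event=31.2, ct_endog=24.6):.2f} %")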

  19. Quantitative measurements of shear displacement using atomic force microscopy

    International Nuclear Information System (INIS)

    Wang, Wenbo; Wu, Weida; Sun, Ying; Zhao, Yonggang

    2016-01-01

    We report a method to quantitatively measure local shear deformation with high sensitivity using atomic force microscopy (AFM). The key point is to simultaneously detect both the torsional and buckling motions of the AFM cantilever induced by the lateral piezoelectric response of the sample. This requires quantitative calibration of the torsional and buckling responses of the AFM. The method is validated by measuring the angular dependence of the in-plane piezoelectric response of a piece of piezoelectric α-quartz. The accurate determination of the amplitude and orientation of the in-plane piezoelectric response, without sample rotation, would greatly enhance the efficiency of lateral piezoelectric force microscopy.

  20. Electronic imaging systems for quantitative electrophoresis of DNA

    International Nuclear Information System (INIS)

    Sutherland, J.C.

    1989-01-01

    Gel electrophoresis is one of the most powerful and widely used methods for the separation of DNA. During the last decade, instruments have been developed that accurately quantitate in digital form the distribution of materials in a gel or on a blot prepared from a gel. In this paper, I review the various physical properties that can be used to quantitate the distribution of DNA on gels or blots and the instrumentation that has been developed to perform these tasks. The emphasis here is on DNA, but much of what is said also applies to RNA, proteins and other molecules. 36 refs

  1. Quantitative DNA Fiber Mapping

    Energy Technology Data Exchange (ETDEWEB)

    Lu, Chun-Mei; Wang, Mei; Greulich-Bode, Karin M.; Weier, Jingly F.; Weier, Heinz-Ulli G.

    2008-01-28

    Several hybridization-based methods used to delineate single copy or repeated DNA sequences in larger genomic intervals take advantage of the increased resolution and sensitivity of free chromatin, i.e., chromatin released from interphase cell nuclei. Quantitative DNA fiber mapping (QDFM) differs from the majority of these methods in that it applies FISH to purified, clonal DNA molecules which have been bound with at least one end to a solid substrate. The DNA molecules are then stretched by the action of a receding meniscus at the water-air interface, resulting in DNA molecules stretched homogeneously to about 2.3 kb/μm. When non-isotopically, multicolor-labeled probes are hybridized to these stretched DNA fibers, their respective binding sites are visualized in the fluorescence microscope, their relative distances can be measured and converted into kilobase pairs (kb). The QDFM technique has found useful applications ranging from the detection and delineation of deletions or overlap between linked clones to the construction of high-resolution physical maps to studies of stalled DNA replication and transcription.
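
    Since the receding meniscus stretches DNA to a nearly constant ~2.3 kb/μm, mapping reduces to converting measured inter-probe distances into base pairs. A trivial illustration of that conversion (the measured distance is hypothetical):

        STRETCH_KB_PER_UM = 2.3   # homogeneous stretching reported for QDFM

        def distance_to_kb(microns, stretch=STRETCH_KB_PER_UM):
            """Convert a measured fiber distance (micrometers) to kilobases."""
            return microns * stretch

        # Hypothetical: two probe signals measured 14.8 um apart on a stretched fiber
        print(f"{distance_to_kb(14.8):.1f} kb between probe midpoints")   # ~34 kb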

  2. Quantitative measurement of the cerebral blood flow

    International Nuclear Information System (INIS)

    Houdart, R.; Mamo, H.; Meric, P.; Seylaz, J.

    1976-01-01

    The value of cerebral blood flow (CBF) measurement is outlined, its limits are defined and some future prospects discussed. The xenon-133 brain clearance study is at present the most accurate quantitative method to evaluate the CBF in different regions of the brain simultaneously. The method and the progress it has led to in the physiological, physiopathological and therapeutic fields are described. The major disadvantage of the method is shown to be the need to puncture the internal carotid artery for each measurement. Prospects are discussed concerning methods derived from the same general principle but using a simpler, non-traumatic way to introduce the radiotracer, either by inhalation into the lungs or intravenously.

  3. Quantitative sexing (Q-Sexing) and relative quantitative sexing (RQ ...

    African Journals Online (AJOL)

    samer

    Key words: Polymerase chain reaction (PCR), quantitative real-time polymerase chain reaction (qPCR), quantitative sexing, Siberian tiger.

  4. Fast and accurate computation of projected two-point functions

    Science.gov (United States)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (code available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto configuration space, ξℓν(r), or spherical harmonic space, Cℓ(χ,χ'). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
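
    The integrals in question are projections of the power spectrum against one or two spherical Bessel functions; in standard conventions (general background, not quoted from the paper) they read:

        \xi_\ell(r) = \int_0^\infty \frac{k^2\,dk}{2\pi^2}\, P(k)\, j_\ell(kr),
        \qquad
        C_\ell(\chi,\chi') = \frac{2}{\pi} \int_0^\infty k^2\,dk\, P(k)\, j_\ell(k\chi)\, j_\ell(k\chi')

    where j_l are spherical Bessel functions; the FFTLog step expands P(k) in power laws so that each term integrates analytically.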

  5. Accurate isotope ratio mass spectrometry. Some problems and possibilities

    International Nuclear Information System (INIS)

    Bievre, P. de

    1978-01-01

    The review includes references to 190 papers, mainly published during the last 10 years. It covers the following: important factors in accurate isotope ratio measurements (precision and accuracy of isotope ratio measurements, exemplified by determinations of 235U/238U and of other elements including 239Pu/240Pu; isotope fractionation, exemplified by curves for Rb and U); applications (atomic weights); the Oklo natural nuclear reactor (discovered by UF6 mass spectrometry at Pierrelatte); nuclear and other constants; isotope ratio measurements in nuclear geology and isotope cosmology, including accurate age determination; isotope ratio measurements on very small samples (archaeometry); isotope dilution; miscellaneous applications; and future prospects. (U.K.)

  6. ROLAIDS-CPM: A code for accurate resonance absorption calculations

    International Nuclear Information System (INIS)

    Kruijf, W.J.M. de.

    1993-08-01

    ROLAIDS is used to calculate group-averaged cross sections for specific zones in a one-dimensional geometry. This report describes ROLAIDS-CPM which is an extended version of ROLAIDS. The main extension in ROLAIDS-CPM is the possibility to use the collision probability method for a slab- or cylinder-geometry instead of the less accurate interface-currents method. In this way accurate resonance absorption calculations can be performed with ROLAIDS-CPM. ROLAIDS-CPM has been developed at ECN. (orig.)

  7. Accurate evaluation of exchange fields in finite element micromagnetic solvers

    Science.gov (United States)

    Chang, R.; Escobar, M. A.; Li, S.; Lubarda, M. V.; Lomakin, V.

    2012-04-01

    Quadratic basis functions (QBFs) are implemented for solving the Landau-Lifshitz-Gilbert equation via the finite element method. This involves the introduction of a set of special testing functions compatible with the QBFs for evaluating the Laplacian operator. The QBF approach leads to significantly more accurate results than conventionally used approaches based on linear basis functions. Importantly, QBFs allow reducing the error of computing the exchange field by increasing the mesh density for structured and unstructured meshes. Numerical examples demonstrate the feasibility of the method.

  8. Deterministic quantitative risk assessment development

    Energy Technology Data Exchange (ETDEWEB)

    Dawson, Jane; Colquhoun, Iain [PII Pipeline Solutions Business of GE Oil and Gas, Cramlington Northumberland (United Kingdom)

    2009-07-01

    Current risk assessment practice in pipeline integrity management is to use a semi-quantitative index-based or model-based methodology. This approach has been found to be very flexible and to provide useful results for identifying high risk areas and for prioritizing physical integrity assessments. However, as pipeline operators progressively adopt an operating strategy of continual risk reduction with a view to minimizing total expenditures within safety, environmental, and reliability constraints, the need for quantitative assessments of risk levels is becoming evident. Whereas reliability-based quantitative risk assessments can be and are routinely carried out on a site-specific basis, they require significant amounts of quantitative data for the results to be meaningful. This need for detailed and reliable data tends to make these methods unwieldy for system-wide risk assessment applications. This paper describes methods for estimating risk quantitatively through the calibration of semi-quantitative estimates to failure rates for peer pipeline systems. The methods involve the analysis of the failure rate distribution, and techniques for mapping the rate to the distribution of likelihoods available from currently available semi-quantitative programs. By applying point value probabilities to the failure rates, deterministic quantitative risk assessment (QRA) provides greater rigor and objectivity than can usually be achieved through the implementation of semi-quantitative risk assessment results. The method permits a fully quantitative approach or a mixture of QRA and semi-QRA to suit the operator's data availability and quality, and analysis needs. For example, consequence analysis can be quantitative or can address qualitative ranges for consequence categories. Likewise, failure likelihoods can be output as classical probabilities or as expected failure frequencies as required. (author)
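
    One way to read the calibration idea: anchor the ordinal likelihood scores produced by a semi-quantitative model to an observed failure-rate distribution from peer pipeline systems, so that each score maps to an expected frequency. A minimal sketch of such a mapping (all numbers hypothetical, not from the paper):

        import numpy as np

        # Hypothetical peer-system failure rates (failures per km-year), one per segment
        peer_rates = np.array([2e-5, 5e-5, 1e-4, 3e-4, 8e-4, 2e-3])

        # Semi-quantitative index scores for the same segments (higher = riskier)
        scores = np.array([12, 20, 35, 50, 70, 90])

        # Calibrate: fit log10(rate) as a linear function of the index score
        a, b = np.polyfit(scores, np.log10(peer_rates), 1)

        def score_to_rate(score):
            """Map a semi-quantitative risk score to an expected failure frequency."""
            return 10 ** (a * score + b)

        print(f"score 60 -> {score_to_rate(60):.2e} failures per km-year")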

  9. Quantitating cellular immune responses to cancer vaccines.

    Science.gov (United States)

    Lyerly, H Kim

    2003-06-01

    While the future of immunotherapy in the treatment of cancer is promising, it is difficult to compare the various approaches because monitoring assays have not been standardized in approach or technique. Common assays for measuring the immune response need to be established so that these assays can one day serve as surrogate markers for clinical response. Assays that accurately detect and quantitate T-cell-mediated, antigen-specific immune responses are particularly desired. However, to date, increases in the number of cytotoxic T cells through immunization have not been correlated with clinical tumor regression. Ideally, then, a T-cell assay not only needs to be sensitive, specific, reliable, reproducible, simple, and quick to perform, it must also demonstrate close correlation with clinical outcome. Assays currently used to measure T-cell response are delayed-type hypersensitivity testing, flow cytometry using peptide major histocompatibility complex tetramers, lymphoproliferation assay, enzyme-linked immunosorbent assay, enzyme-linked immunospot assay, cytokine flow cytometry, direct cytotoxicity assay, measurement of cytokine mRNA by quantitative reverse transcriptase polymerase chain reaction, and limiting dilution analysis. The purpose of this review is to describe the attributes of each test and compare their advantages and disadvantages.

  10. Quantitative analysis by nuclear magnetic resonance spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Wainai, T; Mashimo, K [Nihon Univ., Tokyo. Coll. of Science and Engineering

    1976-04-01

    Recent papers on practical quantitative analysis by nuclear magnetic resonance spectroscopy (NMR) are reviewed, specifically the determination of moisture in liquid N2O4 used as an oxidizing agent for rocket propulsion, the analysis of hydroperoxides, quantitative analysis using a shift reagent, the analysis of aromatic sulfonates, and the determination of acids and bases. Attention is paid to accuracy. The sweep velocity and RF level, in addition to other factors, must be at optimal conditions to eliminate errors, particularly when computation is done by machine. A higher sweep velocity is preferable in view of the S/N ratio, but it may be limited to 30 Hz/s. The relative error in the measurement of peak areas is generally 1%, but for dilute samples measured with integration, the error becomes smaller by one order of magnitude. If impurities are treated carefully, the water content of N2O4 can be determined with an accuracy of about 0.002%. The comparison of peak heights is as accurate as that of peak areas when the uniformity of the magnetic field and T2 are not in question. When the chemical shift moves with content, the substance can be determined from the position of the chemical shift. Oil and water contents in rapeseed, peanuts, and sunflower seed are determined by measuring T1 with 90 deg pulses.
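
    The quantitative backbone of such measurements is that an NMR signal's integrated area is proportional to the number of contributing nuclei, so an analyte can be referenced to an internal standard. In the usual textbook form (not quoted from this review):

        c_x = c_{\mathrm{std}} \cdot \frac{A_x}{A_{\mathrm{std}}} \cdot \frac{N_{\mathrm{std}}}{N_x}

    where A is the integrated peak area, N the number of nuclei giving rise to the peak, and c the concentration.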

  11. Quantitative analysis method for ship construction quality

    Directory of Open Access Journals (Sweden)

    FU Senzong

    2017-03-01

    The excellent performance of a ship is assured by accurate evaluation of its construction quality. For a long time, research into the construction quality of ships has mainly focused on qualitative analysis due to a shortage of process data, which results from limited samples, varied process types and non-standardized processes. Aiming at predicting and controlling the influence of the construction process on the construction quality of ships, this article proposes a reliability quantitative analysis flow path for the ship construction process and a fuzzy calculation method. Based on the process-quality factor model proposed by the Function-Oriented Quality Control (FOQC) method, we combine fuzzy mathematics with the expert grading method to deduce formulations calculating the fuzzy process reliability of the ordinal connection model, series connection model and mixed connection model. The quantitative analysis method is applied in analyzing the process reliability of a ship's shaft gear box installation, which proves the applicability and effectiveness of the method. The analysis results can be a useful reference for setting key quality inspection points and optimizing key processes.

  12. Quantitative diagnosis of skeletons with demineralizing osteopathy

    International Nuclear Information System (INIS)

    Banzer, D.

    1979-01-01

    The quantitative diagnosis of bone diseases must be assessed according to the accuracy of the applied method, the expense in apparatus, personnel and financial resources, and the comparability of results. Nuclide absorptiometry, and in the future perhaps computed tomography, represent the most accurate methods for determining the mineral content of bones. Their application is the clinics' prerogative because of the costs. Morphometry provides quantitative information, in particular in course control, and enables an objective judgement of visual impressions. It requires little expenditure and should be combined with microradioscopy. Direct comparability of the findings of different working groups is easiest in morphometry; it depends on the equipment in computed tomography and is still hardly possible in nuclide absorptiometry. For fundamental physical reasons, it will hardly be possible to produce a low-cost, fast and easy-to-handle instrument for the determination of the mineral salt concentration in bones. Instead, there is rather a trend towards more expensive equipment, e.g. CT instruments; the universal use of these instruments, however, will help to promote quantitative diagnosis. (orig.)

  13. Quantitative assessment of growth plate activity

    International Nuclear Information System (INIS)

    Harcke, H.T.; Macy, N.J.; Mandell, G.A.; MacEwen, G.D.

    1984-01-01

    In the immature skeleton the physis or growth plate is the area of bone least able to withstand external forces and is therefore prone to trauma. Such trauma often leads to premature closure of the plate and results in limb shortening and/or angular deformity (varus or valgus). Active localization of bone-seeking tracers in the physis makes bone scintigraphy an excellent method for assessing growth plate physiology. To be most effective, however, physeal activity should be quantified so that serial evaluations are accurate and comparable. The authors have developed a quantitative method for assessing physeal activity and have applied it to the hip and knee. Using computer-acquired pinhole images of the abnormal and contralateral normal joints, ten regions of interest are placed at key locations around each joint and comparative ratios are generated to form a growth plate profile. The ratios compare segmental physeal activity to total growth plate activity on both ipsilateral and contralateral sides and to adjacent bone. In 25 patients, ages 2 to 15 years, with angular deformities of the legs secondary to trauma, Blount's disease, and Perthes disease, this technique is able to differentiate abnormal segmental physeal activity. This is important since plate closure does not usually occur uniformly across the physis. The technique may permit the use of scintigraphy in the prediction of early closure through the quantitative analysis of serial studies.

  14. Quantitative tomographic measurements of opaque multiphase flows

    Energy Technology Data Exchange (ETDEWEB)

    GEORGE,DARIN L.; TORCZYNSKI,JOHN R.; SHOLLENBERGER,KIM ANN; O' HERN,TIMOTHY J.; CECCIO,STEVEN L.

    2000-03-01

    An electrical-impedance tomography (EIT) system has been developed for quantitative measurements of radial phase distribution profiles in two-phase and three-phase vertical column flows. The EIT system is described along with the computer algorithm used for reconstructing phase volume fraction profiles. EIT measurements were validated by comparison with a gamma-densitometry tomography (GDT) system. The EIT system was used to accurately measure average solid volume fractions up to 0.05 in solid-liquid flows, and radial gas volume fraction profiles in gas-liquid flows with gas volume fractions up to 0.15. In both flows, average phase volume fractions and radial volume fraction profiles from GDT and EIT were in good agreement. A minor modification to the formula used to relate conductivity data to phase volume fractions was found to improve agreement between the methods. GDT and EIT were then applied together to simultaneously measure the solid, liquid, and gas radial distributions within several vertical three-phase flows. For average solid volume fractions up to 0.30, the gas distribution for each gas flow rate was approximately independent of the amount of solids in the column. Measurements made with this EIT system demonstrate that EIT may be used successfully for noninvasive, quantitative measurements of dispersed multiphase flows.

  15. Quantitative Nuclear Medicine. Chapter 17

    Energy Technology Data Exchange (ETDEWEB)

    Ouyang, J.; El Fakhri, G. [Massachusetts General Hospital and Harvard Medical School, Boston (United States)

    2014-12-15

    Planar imaging is still used in clinical practice although tomographic imaging (single photon emission computed tomography (SPECT) and positron emission tomography (PET)) is becoming more established. In this chapter, quantitative methods for both imaging approaches are presented. Planar imaging is limited to single-photon emitters. For both SPECT and PET, the focus is on quantitative methods that can be applied to reconstructed images.

  16. Mastering R for quantitative finance

    CERN Document Server

    Berlinger, Edina; Badics, Milán; Banai, Ádám; Daróczi, Gergely; Dömötör, Barbara; Gabler, Gergely; Havran, Dániel; Juhász, Péter; Margitai, István; Márkus, Balázs; Medvegyev, Péter; Molnár, Julia; Szucs, Balázs Árpád; Tuza, Ágnes; Vadász, Tamás; Váradi, Kata; Vidovics-Dancs, Ágnes

    2015-01-01

    This book is intended for those who want to learn how to use R's capabilities to build models in quantitative finance at a more advanced level. To follow the rhythm of the chapters comfortably, you should be at an intermediate level in quantitative finance and have a reasonable knowledge of R.

  17. Quantitative analysis of receptor imaging

    International Nuclear Information System (INIS)

    Fu Zhanli; Wang Rongfu

    2004-01-01

    Model-based methods for quantitative analysis of receptor imaging, including kinetic, graphical and equilibrium methods, are introduced in detail. Some technical problems facing quantitative analysis of receptor imaging, such as the correction for in vivo metabolism of the tracer, the radioactivity contribution from blood volume within the ROI, and the estimation of the nondisplaceable ligand concentration, are also reviewed briefly.

  18. Quantitative self-assembly prediction yields targeted nanomedicines

    Science.gov (United States)

    Shamay, Yosi; Shah, Janki; Işık, Mehtap; Mizrachi, Aviram; Leibold, Josef; Tschaharganeh, Darjus F.; Roxbury, Daniel; Budhathoki-Uprety, Januka; Nawaly, Karla; Sugarman, James L.; Baut, Emily; Neiman, Michelle R.; Dacek, Megan; Ganesh, Kripa S.; Johnson, Darren C.; Sridharan, Ramya; Chu, Karen L.; Rajasekhar, Vinagolu K.; Lowe, Scott W.; Chodera, John D.; Heller, Daniel A.

    2018-02-01

    Development of targeted nanoparticle drug carriers often requires complex synthetic schemes involving both supramolecular self-assembly and chemical modification. These processes are generally difficult to predict, execute, and control. We describe herein a targeted drug delivery system that is accurately and quantitatively predicted to self-assemble into nanoparticles based on the molecular structures of precursor molecules, which are the drugs themselves. The drugs assemble with the aid of sulfated indocyanines into particles with ultrahigh drug loadings of up to 90%. We devised quantitative structure-nanoparticle assembly prediction (QSNAP) models to identify and validate electrotopological molecular descriptors as highly predictive indicators of nano-assembly and nanoparticle size. The resulting nanoparticles selectively targeted kinase inhibitors to caveolin-1-expressing human colon cancer and autochthonous liver cancer models to yield striking therapeutic effects while avoiding pERK inhibition in healthy skin. This finding enables the computational design of nanomedicines based on quantitative models for drug payload selection.

  19. Prevalence of accurate nursing documentation in patient records

    NARCIS (Netherlands)

    Paans, Wolter; Sermeus, Walter; Nieweg, Roos; van der Schans, Cees

    2010-01-01

    AIM: This paper is a report of a study conducted to describe the accuracy of nursing documentation in patient records in hospitals. Background: Accurate nursing documentation enables nurses to systematically review the nursing process and to evaluate the quality of care. Assessing nurses' reports

  20. Using an eye tracker for accurate eye movement artifact correction

    NARCIS (Netherlands)

    Kierkels, J.J.M.; Riani, J.; Bergmans, J.W.M.; Boxtel, van G.J.M.

    2007-01-01

    We present a new method to correct eye movement artifacts in electroencephalogram (EEG) data. By using an eye tracker, whose data cannot be corrupted by any electrophysiological signals, an accurate method for correction is developed. The eye-tracker data is used in a Kalman filter to estimate which

  1. Accurate method of the magnetic field measurement of quadrupole magnets

    International Nuclear Information System (INIS)

    Kumada, M.; Sakai, I.; Someya, H.; Sasaki, H.

    1983-01-01

    We present an accurate method for the magnetic field measurement of quadrupole magnets. The method of obtaining the field gradient and the effective focusing length is given, and a new scheme to obtain the skew field components is also proposed. The relative accuracy of the measurement was 1 x 10^-4 or less. (author)

  2. Feedforward signal prediction for accurate motion systems using digital filters

    NARCIS (Netherlands)

    Butler, H.

    2012-01-01

    A positioning system that needs to accurately track a reference can benefit greatly from using feedforward. When using a force actuator, the feedforward needs to generate a force proportional to the reference acceleration, which can be measured by means of an accelerometer or can be created by

  3. Fishing site mapping using local knowledge provides accurate and ...

    African Journals Online (AJOL)

    Accurate fishing ground maps are necessary for fisheries monitoring. In the Velondriake locally managed marine area (LMMA) we observed that the nomenclature of shared fishing sites (FS) is village dependent. Additionally, the level of illiteracy makes data collection more complicated, leading to data collectors improvising ...

  4. Laser guided automated calibrating system for accurate bracket ...

    African Journals Online (AJOL)

    It is widely recognized that accurate bracket placement is of critical importance in the efficient application of biomechanics and in realizing the full potential of a preadjusted edgewise appliance. Aim: The purpose of ... placement. Keywords: Hough transforms, Indirect bonding technique, Laser, Orthodontic bracket placement ...

  5. Foresight begins with FMEA. Delivering accurate risk assessments.

    Science.gov (United States)

    Passey, R D

    1999-03-01

    If sufficient factors are taken into account and two- or three-stage analysis is employed, failure mode and effect analysis represents an excellent technique for delivering accurate risk assessments for products and processes, and for relating them to legal liability. This article describes a format that facilitates easy interpretation.

  6. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    Science.gov (United States)

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  7. Fast and Accurate Residential Fire Detection Using Wireless Sensor Networks

    NARCIS (Netherlands)

    Bahrepour, Majid; Meratnia, Nirvana; Havinga, Paul J.M.

    2010-01-01

    Prompt and accurate residential fire detection is important for on-time fire extinguishing and consequently reducing damage and loss of life. To detect fire, sensors are needed to measure environmental parameters, and algorithms are required to decide about the occurrence of fire. Recently, wireless

  8. Towards accurate de novo assembly for genomes with repeats

    NARCIS (Netherlands)

    Bucur, Doina

    2017-01-01

    De novo genome assemblers designed for short k-mer length or using short raw reads are unlikely to recover complex features of the underlying genome, such as repeats hundreds of bases long. We implement a stochastic machine-learning method which obtains accurate assemblies with repeats and

  9. Dense and accurate whole-chromosome haplotyping of individual genomes

    NARCIS (Netherlands)

    Porubsky, David; Garg, Shilpa; Sanders, Ashley D.; Korbel, Jan O.; Guryev, Victor; Lansdorp, Peter M.; Marschall, Tobias

    2017-01-01

    The diploid nature of the human genome is neglected in many analyses done today, where a genome is perceived as a set of unphased variants with respect to a reference genome. This lack of haplotype-level analyses can be explained by a lack of methods that can produce dense and accurate

  10. Accurate automatic tuning circuit for bipolar integrated filters

    NARCIS (Netherlands)

    de Heij, Wim J.A.; de Heij, W.J.A.; Hoen, Klaas; Hoen, Klaas; Seevinck, Evert; Seevinck, E.

    1990-01-01

    An accurate automatic tuning circuit for tuning the cutoff frequency and Q-factor of high-frequency bipolar filters is presented. The circuit is based on a voltage controlled quadrature oscillator (VCO). The frequency and the RMS (root mean square) amplitude of the oscillator output signal are

  11. Laser Guided Automated Calibrating System for Accurate Bracket ...

    African Journals Online (AJOL)

    Background: The basic premise of preadjusted bracket system is accurate bracket positioning. ... using MATLAB ver. 7 software (The MathWorks Inc.). These images are in the form of matrices of size 640 × 480. 650 nm (red light) type III diode laser is used as ... motion control and Pitch, Yaw, Roll degrees of freedom (DOF).

  12. Dynamic weighing for accurate fertilizer application and monitoring

    NARCIS (Netherlands)

    Bergeijk, van J.; Goense, D.; Willigenburg, van L.G.; Speelman, L.

    2001-01-01

    The mass flow of fertilizer spreaders must be calibrated for the different types of fertilizers used. To obtain accurate fertilizer application, manual calibration of the actual mass flow must be repeated frequently. Automatic calibration is possible by measurement of the actual mass flow, based on

  13. A Simple and Accurate Method for Measuring Enzyme Activity.

    Science.gov (United States)

    Yip, Din-Yan

    1997-01-01

    Presents methods commonly used for investigating enzyme activity using catalase and presents a new method for measuring catalase activity that is more reliable and accurate. Provides results that are readily reproduced and quantified. Can also be used for investigations of enzyme properties such as the effects of temperature, pH, inhibitors,…

  14. How Accurate are Government Forecasts of Economic Fundamentals?

    NARCIS (Netherlands)

    C-L. Chang (Chia-Lin); Ph.H.B.F. Franses (Philip Hans); M.J. McAleer (Michael)

    2009-01-01

    A government’s ability to forecast key economic fundamentals accurately can affect business confidence, consumer sentiment, and foreign direct investment, among others. A government forecast based on an econometric model is replicable, whereas one that is not fully based on an

  15. Quantifying Accurate Calorie Estimation Using the "Think Aloud" Method

    Science.gov (United States)

    Holmstrup, Michael E.; Stearns-Bruening, Kay; Rozelle, Jeffrey

    2013-01-01

    Objective: Clients often have limited time in a nutrition education setting. An improved understanding of the strategies used to accurately estimate calories may help to identify areas of focused instruction to improve nutrition knowledge. Methods: A "Think Aloud" exercise was recorded during the estimation of calories in a standard dinner meal…

  16. General approach for accurate resonance analysis in transformer windings

    NARCIS (Netherlands)

    Popov, M.

    2018-01-01

    In this paper, resonance effects in transformer windings are thoroughly investigated and analyzed. The resonance is determined by making use of an accurate approach based on the application of the impedance matrix of a transformer winding. The method is validated by a test coil and the numerical

  17. Novel multi-beam radiometers for accurate ocean surveillance

    DEFF Research Database (Denmark)

    Cappellin, C.; Pontoppidan, K.; Nielsen, P. H.

    2014-01-01

    Novel antenna architectures for real aperture multi-beam radiometers providing high resolution and high sensitivity for accurate sea surface temperature (SST) and ocean vector wind (OVW) measurements are investigated. On the basis of the radiometer requirements set for future SST/OVW missions...

  18. Planimetric volumetry of the prostate: how accurate is it?

    NARCIS (Netherlands)

    Aarnink, R. G.; Giesen, R. J.; de la Rosette, J. J.; Huynen, A. L.; Debruyne, F. M.; Wijkstra, H.

    1995-01-01

    Planimetric volumetry is used in clinical practice when accurate volume determination of the prostate is needed. The prostate volume is determined by discretization of the 3D prostate shape. The area of the prostate is calculated in consecutive ultrasonographic cross-sections. This area is multiplied

  19. Accurate conjugate gradient methods for families of shifted systems

    NARCIS (Netherlands)

    Eshof, J. van den; Sleijpen, G.L.G.

    We present an efficient and accurate variant of the conjugate gradient method for solving families of shifted systems. In particular we are interested in shifted systems that occur in Tikhonov regularization for inverse problems since these problems can be sensitive to roundoff errors. The
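
    The efficiency of such methods rests on a basic Krylov-subspace fact (standard background, not quoted from the abstract): shifting the matrix leaves the Krylov subspace unchanged,

        \mathcal{K}_m(A, b) = \mathrm{span}\{b, Ab, \dots, A^{m-1}b\} = \mathcal{K}_m(A + \sigma I, b)

    so one sequence of matrix-vector products can serve all shifted systems (A + sigma*I) x_sigma = b simultaneously.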

  20. Accurate 3D Mapping Algorithm for Flexible Antennas

    Directory of Open Access Journals (Sweden)

    Saed Asaly

    2018-01-01

    This work addresses the problem of performing an accurate 3D mapping of a flexible antenna surface. Consider a high-gain satellite flexible antenna: even a submillimeter change in the antenna surface may lead to a considerable loss in antenna gain. Using a robotic subreflector, such changes can be compensated for. Yet, in order to perform such tuning, an accurate 3D mapping of the main antenna is required. This paper presents a general method for performing an accurate 3D mapping of marked surfaces such as satellite dish antennas. Motivated by novel technology for nanosatellites with flexible high-gain antennas, we propose a new accurate mapping framework which requires a small monocamera and known patterns on the antenna surface. The experimental results show that the presented mapping method can detect changes with accuracy down to 0.1 millimeter, while the camera is located 1 meter away from the dish, allowing RF antenna optimization for Ka and Ku frequencies. Such an optimization process can improve the gain of flexible antennas and allow adaptive beam shaping. The presented method is currently being implemented on a nanosatellite scheduled for launch at the end of 2018.

  1. Device accurately measures and records low gas-flow rates

    Science.gov (United States)

    Branum, L. W.

    1966-01-01

    Free-floating piston in a vertical column accurately measures and records low gas-flow rates. The system may be calibrated, using an adjustable flow-rate gas supply, a low pressure gage, and a sequence recorder. From the calibration rates, a nomograph may be made for easy reduction. Temperature correction may be added for further accuracy.

  2. Rapid and Accurate Machine Learning Recognition of High Performing Metal Organic Frameworks for CO2 Capture.

    Science.gov (United States)

    Fernandez, Michael; Boyd, Peter G; Daff, Thomas D; Aghaji, Mohammad Zein; Woo, Tom K

    2014-09-04

    In this work, we have developed quantitative structure-property relationship (QSPR) models using advanced machine learning algorithms that can rapidly and accurately recognize high-performing metal organic framework (MOF) materials for CO2 capture. More specifically, QSPR classifiers have been developed that can, in a fraction of a second, identify candidate MOFs with enhanced CO2 adsorption capacity (>1 mmol/g at 0.15 bar and >4 mmol/g at 1 bar). The models were tested on a large set of 292 050 MOFs that were not part of the training set. The QSPR classifier could recover 945 of the top 1000 MOFs in the test set while flagging only 10% of the whole library for compute-intensive screening. Thus, using the machine learning classifiers as part of a high-throughput screening protocol would result in an order of magnitude reduction in compute time and allow intractably large structure libraries and search spaces to be screened.
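
    The screening workflow described above, training a cheap classifier on structural descriptors and reserving compute-intensive simulation for the small flagged fraction, can be sketched in a few lines. This is an illustrative sketch only: the random forest, the descriptor matrix, and the labels are placeholder assumptions, not the authors' published pipeline.

        # QSPR-style screening sketch: train on a labeled subset, then flag
        # only promising candidates from a large unlabeled library.
        import numpy as np
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(0)
        X = rng.random((5000, 6))            # placeholder descriptors (pore size, etc.)
        y = X[:, 0] + 0.5 * X[:, 1] > 0.9    # placeholder "high CO2 uptake" labels

        clf = RandomForestClassifier(n_estimators=200, random_state=0)
        clf.fit(X[:4000], y[:4000])          # small labeled training set

        scores = clf.predict_proba(X[4000:])[:, 1]
        flagged = np.where(scores > 0.5)[0]  # send only these to detailed simulation
        print(f"flagged {flagged.size} of {X[4000:].shape[0]} candidates")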

  3. Quantitative microanalysis with a nuclear microprobe

    International Nuclear Information System (INIS)

    Themner, Klas.

    1989-01-01

    The analytical techniques of particle-induced X-ray emission (PIXE) and Rutherford backscattering (RBS), together with the nuclear microprobe, form a very powerful tool for performing quantitative microanalysis of biological material. Calibration of the X-ray detection system in the microprobe set-up has been performed and the accuracy of the quantitative procedure using RBS for determination of the areal mass density was investigated. The accuracy of the analysis can be affected by alteration of the elemental concentrations during irradiation due to the radiation damage induced by the very intense beams of ionizing radiation. Loss of matrix elements from freeze-dried tissue sections and polymer films has been studied during proton and photon irradiation and the effect on the accuracy discussed. Scanning the beam over an area of the target, with e.g. 32x32 pixels, in order to produce an elemental map yields a great deal of information, and to make an accurate quantification a fast algorithm using descriptions of the different spectral contributions is needed. The production of continuum X-rays by 2.55 MeV protons has been studied and absolute cross-sections for the bremsstrahlung production from thin carbon and some polymer films determined. For the determination of the bremsstrahlung background, knowledge of the amounts of the matrix elements is important, and a fast program for the evaluation of spectra of proton back- and forward scattering from biological samples has been developed. Quantitative microanalysis with the nuclear microprobe has been performed on brain tissue from rats subjected to different pathological conditions. Increases in calcium levels and decreases in potassium levels for animals subjected to cerebral ischaemia and for animals suffering from epileptic seizures were observed coincidentally with or, in some cases, before visible signs of cell necrosis. (author)

  4. Exploring the relationship between sequence similarity and accurate phylogenetic trees.

    Science.gov (United States)

    Cantarel, Brandi L; Morrison, Hilary G; Pearson, William

    2006-11-01

    We have characterized the relationship between accurate phylogenetic reconstruction and sequence similarity, testing whether high levels of sequence similarity can consistently produce accurate evolutionary trees. We generated protein families with known phylogenies using a modified version of the PAML/EVOLVER program that produces insertions and deletions as well as substitutions. Protein families were evolved over a range of 100-400 point accepted mutations; at these distances 63% of the families shared significant sequence similarity. Protein families were evolved using balanced and unbalanced trees, with ancient or recent radiations. In families sharing statistically significant similarity, about 60% of multiple sequence alignments were 95% identical to true alignments. To compare recovered topologies with true topologies, we used a score that reflects the fraction of clades that were correctly clustered. As expected, the accuracy of the phylogenies was greatest in the least divergent families. About 88% of phylogenies clustered over 80% of clades in families that shared significant sequence similarity, using Bayesian, parsimony, distance, and maximum likelihood methods. However, for protein families with short ancient branches (ancient radiation), only 30% of the most divergent (but statistically significant) families produced accurate phylogenies, and only about 70% of the second most highly conserved families, with median expectation values better than 10^(-60), produced accurate trees. These values represent upper bounds on expected tree accuracy for sequences with a simple divergence history; proteins from 700 Giardia families, with a similar range of sequence similarities but considerably more gaps, produced much less accurate trees. For our simulated insertions and deletions, correct multiple sequence alignments did not perform much better than those produced by T-COFFEE, and including sequences with expressed sequence tag-like sequencing errors did not

  5. Chlorophyll fluorescence imaging accurately quantifies freezing damage and cold acclimation responses in Arabidopsis leaves

    Directory of Open Access Journals (Sweden)

    Hincha Dirk K

    2008-05-01

    Full Text Available Abstract Background: Freezing tolerance is an important factor in the geographical distribution of plants and strongly influences crop yield. Many plants increase their freezing tolerance during exposure to low, nonfreezing temperatures in a process termed cold acclimation. There is considerable natural variation in the cold acclimation capacity of Arabidopsis that has been used to study the molecular basis of this trait. Accurate methods for the quantitation of freezing damage in leaves that include spatial information about the distribution of damage and the possibility to screen large populations of plants are necessary, but currently not available. In addition, currently used standard methods such as electrolyte leakage assays are very laborious and therefore not easily applicable for large-scale screening purposes. Results: We have performed freezing experiments with the Arabidopsis accessions C24 and Tenela, which differ strongly in their freezing tolerance, both before and after cold acclimation. Freezing tolerance of detached leaves was investigated using the well-established electrolyte leakage assay as a reference. Chlorophyll fluorescence imaging was used as an alternative method that provides spatial resolution of freezing damage over the leaf area. With both methods, LT50 values (i.e. the temperature at which 50% damage occurred) could be derived as quantitative measures of leaf freezing tolerance. Both methods revealed the expected differences between acclimated and nonacclimated plants and between the two accessions, and LT50 values were tightly correlated. However, electrolyte leakage assays consistently yielded higher LT50 values than chlorophyll fluorescence imaging. This was to a large part due to the incubation of leaves for electrolyte leakage measurements in distilled water, which apparently led to secondary damage, while this pre-incubation was not necessary for the chlorophyll fluorescence measurements. Conclusion: Chlorophyll
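
    For readers unfamiliar with how LT50 values are obtained, the standard route is to fit a sigmoidal damage-versus-temperature curve and read off its midpoint. A minimal sketch with SciPy, using invented damage percentages rather than the paper's data:

        # Fit a logistic damage curve and extract LT50 (temperature of 50% damage).
        import numpy as np
        from scipy.optimize import curve_fit

        def logistic(t, lt50, slope):
            return 100.0 / (1.0 + np.exp(-slope * (t - lt50)))  # damage in %

        temps = np.array([-2.0, -4.0, -6.0, -8.0, -10.0, -12.0, -14.0])
        damage = np.array([3.0, 8.0, 22.0, 48.0, 75.0, 92.0, 98.0])  # illustrative

        (lt50, slope), _ = curve_fit(logistic, temps, damage, p0=(-8.0, -1.0))
        print(f"LT50 = {lt50:.1f} degrees C")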

  6. Accurate quantification of microRNA via single strand displacement reaction on DNA origami motif.

    Directory of Open Access Journals (Sweden)

    Jie Zhu

    Full Text Available DNA origami is an emerging technology that assembles hundreds of staple strands and one single-strand DNA into a defined nanopattern. It has been widely used in various fields including the detection of biological molecules such as DNA, RNA and proteins. MicroRNAs (miRNAs) play important roles in post-transcriptional gene repression as well as many other biological processes such as cell growth and differentiation. Alterations of miRNA expression contribute to many human diseases. However, it is still a challenge to quantitatively detect miRNAs by origami technology. In this study, we developed a novel approach based on a streptavidin and quantum dot binding complex (STV-QDs) labeled single strand displacement reaction on DNA origami to quantitatively detect the concentration of miRNAs. We illustrated a linear relationship between the concentration of an exemplary miRNA, miRNA-133, and the STV-QDs hybridization efficiency; the results demonstrated that it is an accurate nano-scale miRNA quantifier motif. In addition, both a symmetrical rectangular motif and an asymmetrical China-map motif were tested. With significant linearity in both motifs, our experiments suggested that DNA origami motifs with arbitrary shapes can be utilized in this method. Since this DNA origami-based method offers the unique advantages of being simple, time- and material-saving, potentially capable of multi-target testing in one motif, and relatively accurate even for impure samples, as molecules are counted directly by atomic force microscopy rather than inferred from fluorescence signal detection, it may be widely used in the quantification of miRNAs.

  7. Accurate Quantification of microRNA via Single Strand Displacement Reaction on DNA Origami Motif

    Science.gov (United States)

    Lou, Jingyu; Li, Weidong; Li, Sheng; Zhu, Hongxin; Yang, Lun; Zhang, Aiping; He, Lin; Li, Can

    2013-01-01

    DNA origami is an emerging technology that assembles hundreds of staple strands and one single-strand DNA into a defined nanopattern. It has been widely used in various fields including the detection of biological molecules such as DNA, RNA and proteins. MicroRNAs (miRNAs) play important roles in post-transcriptional gene repression as well as many other biological processes such as cell growth and differentiation. Alterations of miRNA expression contribute to many human diseases. However, it is still a challenge to quantitatively detect miRNAs by origami technology. In this study, we developed a novel approach based on a streptavidin and quantum dot binding complex (STV-QDs) labeled single strand displacement reaction on DNA origami to quantitatively detect the concentration of miRNAs. We illustrated a linear relationship between the concentration of an exemplary miRNA, miRNA-133, and the STV-QDs hybridization efficiency; the results demonstrated that it is an accurate nano-scale miRNA quantifier motif. In addition, both a symmetrical rectangular motif and an asymmetrical China-map motif were tested. With significant linearity in both motifs, our experiments suggested that DNA origami motifs with arbitrary shapes can be utilized in this method. Since this DNA origami-based method offers the unique advantages of being simple, time- and material-saving, potentially capable of multi-target testing in one motif, and relatively accurate even for impure samples, as molecules are counted directly by atomic force microscopy rather than inferred from fluorescence signal detection, it may be widely used in the quantification of miRNAs. PMID:23990889

  9. A general method for bead-enhanced quantitation by flow cytometry

    Science.gov (United States)

    Montes, Martin; Jaensson, Elin A.; Orozco, Aaron F.; Lewis, Dorothy E.; Corry, David B.

    2009-01-01

    Flow cytometry provides accurate relative cellular quantitation (percent abundance) of cells from diverse samples, but technical limitations of most flow cytometers preclude accurate absolute quantitation. Several quantitation standards are now commercially available which, when added to samples, permit absolute quantitation of CD4+ T cells. However, these reagents are limited by their cost, technical complexity, requirement for additional software and/or limited applicability. Moreover, few studies have validated the use of such reagents in complex biological samples, especially for quantitation of non-T cells. Here we show that addition to samples of known quantities of polystyrene fluorescence standardization beads permits accurate quantitation of CD4+ T cells from complex cell samples. This procedure, here termed single bead-enhanced cytofluorimetry (SBEC), was equally capable of enumerating eosinophils as well as subcellular fragments of apoptotic cells, moieties with very different optical and fluorescent characteristics. Relative to other proprietary products, SBEC is simple, inexpensive and requires no special software, suggesting that the method is suitable for the routine quantitation of most cells and other particles by flow cytometry. PMID:17067632
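
    The arithmetic behind bead-enhanced counting is a simple ratio: the spiked beads reveal what fraction of the sample was actually acquired, and the same fraction applies to the cells. A minimal sketch (function and variable names are ours, for illustration):

        def absolute_count(cell_events, bead_events, beads_added, sample_volume_ul):
            """Cells per microliter from a bead-spiked flow cytometry sample."""
            # bead_events / beads_added is the acquired fraction of the sample;
            # dividing the cell events by that fraction recovers total cells.
            return (cell_events / bead_events) * beads_added / sample_volume_ul

        # e.g. 12,000 CD4+ events and 8,000 bead events after spiking 50,000
        # beads into a 100 uL sample:
        print(absolute_count(12_000, 8_000, 50_000, 100))  # -> 750.0 cells/uL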

  10. The accurate assessment of small-angle X-ray scattering data.

    Science.gov (United States)

    Grant, Thomas D; Luft, Joseph R; Carter, Lester G; Matsui, Tsutomu; Weiss, Thomas M; Martel, Anne; Snell, Edward H

    2015-01-01

    Small-angle X-ray scattering (SAXS) has grown in popularity in recent times with the advent of bright synchrotron X-ray sources, powerful computational resources and algorithms enabling the calculation of increasingly complex models. However, the lack of standardized data-quality metrics presents difficulties for the growing user community in accurately assessing the quality of experimental SAXS data. Here, a series of metrics to quantitatively describe SAXS data in an objective manner using statistical evaluations are defined. These metrics are applied to identify the effects of radiation damage, concentration dependence and interparticle interactions on SAXS data from a set of 27 previously described targets for which high-resolution structures have been determined via X-ray crystallography or nuclear magnetic resonance (NMR) spectroscopy. The studies show that these metrics are sufficient to characterize SAXS data quality on a small sample set with statistical rigor and sensitivity similar to or better than manual analysis. The development of data-quality analysis strategies such as these initial efforts is needed to enable the accurate and unbiased assessment of SAXS data quality.

  11. An Accurate Estimate of the Free Energy and Phase Diagram of All-DNA Bulk Fluids

    Directory of Open Access Journals (Sweden)

    Emanuele Locatelli

    2018-04-01

    Full Text Available We present a numerical study in which large-scale bulk simulations of self-assembled DNA constructs have been carried out with a realistic coarse-grained model. The investigation aims at obtaining a precise, albeit numerically demanding, estimate of the free energy for such systems. We then, in turn, use these accurate results to validate a recently proposed theoretical approach that builds on a liquid-state theory, the Wertheim theory, to compute the phase diagram of all-DNA fluids. This hybrid theoretical/numerical approach, based on the lowest-order virial expansion and on a nearest-neighbor DNA model, can provide, in an undemanding way, a parameter-free thermodynamic description of DNA associating fluids that is in semi-quantitative agreement with experiments. We show that the predictions of the scheme are as accurate as those obtained with more sophisticated methods. We also demonstrate the flexibility of the approach by incorporating non-trivial additional contributions that go beyond the nearest-neighbor model to compute the DNA hybridization free energy.

  12. Accurate thermodynamic relations of the melting temperature of nanocrystals with different shapes and pure theoretical calculation

    Energy Technology Data Exchange (ETDEWEB)

    Zhu, Jinhua; Fu, Qingshan; Xue, Yongqiang, E-mail: xyqlw@126.com; Cui, Zixiang

    2017-05-01

    Based on the surface pre-melting model, accurate thermodynamic relations for the melting temperature of nanocrystals with different shapes (tetrahedron, cube, octahedron, dodecahedron, icosahedron, nanowire) were derived. The theoretically calculated melting temperatures are in relatively good agreement with experimental, molecular dynamics simulation and other theoretical results for nanoscale Au, Ag, Al, In and Pb. It is found that particle size and shape have notable effects on the melting temperature of nanocrystals, and the smaller the particle size, the greater the effect of shape. Furthermore, at the same equivalent radius, the more the shape deviates from a sphere, the lower the melting temperature. The melting temperature depression of a cylindrical nanowire is just half that of a spherical nanoparticle with an identical radius. The theoretical relations make it possible to quantitatively describe the influence of size and shape on the melting temperature and provide an effective way to predict and interpret the melting temperature of nanocrystals with different sizes and shapes. - Highlights: • Accurate relations for T_m of nanocrystals with various shapes are derived. • Calculated T_m agree with literature results for nano Au, Ag, Al, In and Pb. • ΔT_m (nanowire) = 0.5 ΔT_m (spherical nanocrystal). • The relations apply to predicting and interpreting the melting behavior of nanocrystals.
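
    The factor of two for nanowires versus spheres quoted above is what a generic Gibbs-Thomson-type argument predicts, since melting-point depression scales with surface curvature. A schematic form (our illustration in LaTeX notation, not the authors' exact shape-dependent relations):

        \Delta T_m(r) = T_m^{\mathrm{bulk}} - T_m(r)
                      = \frac{c\,\sigma_{sl}\,T_m^{\mathrm{bulk}}}{\rho_s\,\Delta H_f\,r},
        \qquad c = \begin{cases} 2, & \text{sphere of radius } r \\ 1, & \text{cylinder (nanowire) of radius } r \end{cases}

    which immediately gives ΔT_m(nanowire) = ΔT_m(sphere)/2 at equal radius.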

  13. Accurate measurement of mitochondrial DNA deletion level and copy number differences in human skeletal muscle.

    Directory of Open Access Journals (Sweden)

    John P Grady

    Full Text Available Accurate and reliable quantification of the abundance of mitochondrial DNA (mtDNA molecules, both wild-type and those harbouring pathogenic mutations, is important not only for understanding the progression of mtDNA disease but also for evaluating novel therapeutic approaches. A clear understanding of the sensitivity of mtDNA measurement assays under different experimental conditions is therefore critical, however it is routinely lacking for most published mtDNA quantification assays. Here, we comprehensively assess the variability of two quantitative Taqman real-time PCR assays, a widely-applied MT-ND1/MT-ND4 multiplex mtDNA deletion assay and a recently developed MT-ND1/B2M singleplex mtDNA copy number assay, across a range of DNA concentrations and mtDNA deletion/copy number levels. Uniquely, we provide a specific guide detailing necessary numbers of sample and real-time PCR plate replicates for accurately and consistently determining a given difference in mtDNA deletion levels and copy number in homogenate skeletal muscle DNA.
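
    As background to such assays, deletion level and copy number are conventionally derived from Ct differences via the 2^-ΔCt relation. A minimal sketch, assuming 100% amplification efficiency for all targets (real assays calibrate this) and illustrative Ct values:

        def deletion_level(ct_nd1, ct_nd4):
            """Fraction of mtDNA molecules carrying a major-arc deletion.

            MT-ND4 lies inside the commonly deleted region and MT-ND1 outside,
            so the ND4:ND1 ratio estimates the fraction of intact molecules.
            """
            intact_fraction = 2.0 ** -(ct_nd4 - ct_nd1)
            return 1.0 - intact_fraction

        def mtdna_copy_number(ct_nd1, ct_b2m, nuclear_copies=2):
            """mtDNA copies per diploid cell from the MT-ND1 vs nuclear B2M ratio."""
            return nuclear_copies * 2.0 ** -(ct_nd1 - ct_b2m)

        print(f"{deletion_level(20.0, 21.0):.0%} deleted")    # -> 50% deleted
        print(f"{mtdna_copy_number(15.0, 25.0):.0f} copies")  # -> 2048 copies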

  14. DNA barcode data accurately assign higher spider taxa

    Directory of Open Access Journals (Sweden)

    Jonathan A. Coddington

    2016-07-01

    Full Text Available The use of unique DNA sequences as a method for taxonomic identification is no longer fundamentally controversial, even though debate continues on the best markers, methods, and technology to use. Although both existing databanks such as GenBank and BOLD, as well as reference taxonomies, are imperfect, in best-case scenarios "barcodes" (whether single or multiple, organelle or nuclear loci) clearly are an increasingly fast and inexpensive method of identification, especially as compared to manual identification of unknowns by increasingly rare expert taxonomists. Because most species on Earth are undescribed, a complete reference database at the species level is impractical in the near term. The question therefore arises whether unidentified species can, using DNA barcodes, be accurately assigned to more inclusive groups such as genera and families, taxonomic ranks of putatively monophyletic groups for which the global inventory is more complete and stable. We used a carefully chosen test library of CO1 sequences from 49 families, 313 genera, and 816 species of spiders to assess the accuracy of genus- and family-level assignment. We used BLAST queries of each sequence against the entire library and retrieved the top ten hits. The percent sequence identity was reported from these hits (PIdent, range 75–100%). Accurate assignment of higher taxa (defined as the PIdent above which errors totaled less than 5%) occurred for genera at PIdent values >95 and for families at PIdent values ≥91, suggesting these as heuristic thresholds for accurate generic and familial identifications in spiders. Accuracy of identification increases with the number of species per genus and genera per family in the library; above five genera per family and fifteen species per genus, all higher-taxon assignments were correct. We propose that using percent sequence identity between conventional barcode sequences may be a feasible and reasonably accurate method to identify animals to family/genus. However
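
    The resulting decision rule is a plain threshold on the top-hit percent identity; a sketch using the thresholds reported above (the function itself is our illustration):

        def assign_rank(pident):
            """Rank at which a spider CO1 top BLAST hit can be trusted, using
            the heuristic thresholds from the abstract: >95% identity for
            genus, >=91% for family, otherwise no assignment."""
            if pident > 95:
                return "genus"
            if pident >= 91:
                return "family"
            return "unassigned"

        for p in (97.2, 93.0, 84.5):
            print(p, "->", assign_rank(p))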

  15. Mixing quantitative with qualitative methods

    DEFF Research Database (Denmark)

    Morrison, Ann; Viller, Stephen; Heck, Tamara

    2017-01-01

    with or are considering, researching, or working with both quantitative and qualitative evaluation methods (in academia or industry), join us in this workshop. In particular, we look at adding quantitative to qualitative methods to build a whole picture of user experience. We see a need to discuss both quantitative...... and qualitative research because there is often a perceived lack of understanding of the rigor involved in each. The workshop will result in a White Paper on the latest developments in this field, within Australia and comparative with international work. We anticipate sharing submissions and workshop outcomes...

  16. Understanding quantitative research: part 1.

    Science.gov (United States)

    Hoe, Juanita; Hoare, Zoë

    This article, which is the first in a two-part series, provides an introduction to understanding quantitative research, basic statistics and terminology used in research articles. Critical appraisal of research articles is essential to ensure that nurses remain up to date with evidence-based practice to provide consistent and high-quality nursing care. This article focuses on developing critical appraisal skills and understanding the use and implications of different quantitative approaches to research. Part two of this article will focus on explaining common statistical terms and the presentation of statistical data in quantitative research.

  17. Experimental Influences in the Accurate Measurement of Cartilage Thickness in MRI.

    Science.gov (United States)

    Wang, Nian; Badar, Farid; Xia, Yang

    2018-01-01

    Objective: To study experimental influences on the measurement of cartilage thickness by magnetic resonance imaging (MRI). Design: The complete thickness of healthy and trypsin-degraded cartilage was measured at high resolution under MRI in different conditions, using two intensity-based imaging sequences (ultra-short echo [UTE] and multislice-multiecho [MSME]) and three quantitative relaxation imaging sequences (T1, T2, and T1ρ). Other variables included different orientations in the magnet, two soaking solutions (saline and phosphate-buffered saline [PBS]), and external loading. Results: With cartilage soaked in saline, the UTE and T1 methods yielded complete and consistent measurements of cartilage thickness, while the thickness measurements by the T2, T1ρ, and MSME methods were orientation dependent. The effect of external loading on cartilage thickness was also sequence and orientation dependent. All variations in cartilage thickness in MRI could be eliminated with the use of a 100 mM PBS solution or by imaging with the UTE sequence. Conclusions: The appearance of articular cartilage and the measurement accuracy of cartilage thickness in MRI can be influenced by a number of experimental factors in ex vivo MRI, from the use of various pulse sequences and soaking solutions to the health of the tissue. T2-based imaging sequences, both proton-intensity and quantitative relaxation sequences, produced the largest variations. With adequate resolution, accurate measurement of whole cartilage thickness in clinical MRI could be utilized to detect differences between healthy and osteoarthritic cartilage after compression.

  18. Characterization of 3-Dimensional PET Systems for Accurate Quantification of Myocardial Blood Flow.

    Science.gov (United States)

    Renaud, Jennifer M; Yip, Kathy; Guimond, Jean; Trottier, Mikaël; Pibarot, Philippe; Turcotte, Eric; Maguire, Conor; Lalonde, Lucille; Gulenchyn, Karen; Farncombe, Troy; Wisenberg, Gerald; Moody, Jonathan; Lee, Benjamin; Port, Steven C; Turkington, Timothy G; Beanlands, Rob S; deKemp, Robert A

    2017-01-01

    Three-dimensional (3D) mode imaging is the current standard for PET/CT systems. Dynamic imaging for quantification of myocardial blood flow with short-lived tracers, such as 82Rb-chloride, requires accuracy to be maintained over a wide range of isotope activities and scanner counting rates. We proposed new performance standard measurements to characterize the dynamic range of PET systems for accurate quantitative imaging. 82Rb or 13N-ammonia (1,100-3,000 MBq) was injected into the heart wall insert of an anthropomorphic torso phantom. A decaying isotope scan was obtained over 5 half-lives on 9 different 3D PET/CT systems and 1 3D/2-dimensional PET-only system. Dynamic images (28 × 15 s) were reconstructed using iterative algorithms with all corrections enabled. Dynamic range was defined as the maximum activity in the myocardial wall with less than 10% bias, from which corresponding dead-time, counting-rate, and/or injected-activity limits were established for each scanner. Scatter-correction residual bias was estimated as the maximum cavity blood-to-myocardium activity ratio. Image quality was assessed via the coefficient of variation measuring nonuniformity of the left ventricular myocardium activity distribution. Maximum recommended injected activity/body weight, peak dead-time correction factor, counting rates, and residual scatter bias for accurate cardiac myocardial blood flow imaging were 3-14 MBq/kg, 1.5-4.0, 22-64 Mcps singles and 4-14 Mcps prompt coincidence counting rates, and 2%-10% on the investigated scanners. Nonuniformity of the myocardial activity distribution varied from 3% to 16%. Accurate dynamic imaging is possible on the 10 3D PET systems if the maximum injected MBq/kg values are respected to limit peak dead-time losses during the bolus first-pass transit. © 2017 by the Society of Nuclear Medicine and Molecular Imaging.

  19. Accurate lithography simulation model based on convolutional neural networks

    Science.gov (United States)

    Watanabe, Yuki; Kimura, Taiki; Matsunawa, Tetsuaki; Nojima, Shigeki

    2017-07-01

    Lithography simulation is an essential technique for today's semiconductor manufacturing process. In order to calculate an entire chip in realistic time, a compact resist model is commonly used; the model is established for faster calculation. To obtain an accurate compact resist model, it is necessary to fit a complicated non-linear model function, but it is difficult to choose an appropriate function manually because there are many options. This paper proposes a new compact resist model using CNNs (convolutional neural networks), one of the deep learning techniques. The CNN model makes it possible to determine an appropriate model function and achieve accurate simulation. Experimental results show that the CNN model can reduce CD prediction errors by 70% compared with the conventional model.

  20. Fast and accurate edge orientation processing during object manipulation

    Science.gov (United States)

    Flanagan, J Randall; Johansson, Roland S

    2018-01-01

    Quickly and accurately extracting information about a touched object’s orientation is a critical aspect of dexterous object manipulation. However, the speed and acuity of tactile edge orientation processing with respect to the fingertips as reported in previous perceptual studies appear inadequate in these respects. Here we directly establish the tactile system’s capacity to process edge-orientation information during dexterous manipulation. Participants extracted tactile information about edge orientation very quickly, using it within 200 ms of first touching the object. Participants were also strikingly accurate. With edges spanning the entire fingertip, edge-orientation resolution was better than 3° in our object manipulation task, which is several times better than reported in previous perceptual studies. Performance remained impressive even with edges as short as 2 mm, consistent with our ability to precisely manipulate very small objects. Taken together, our results radically redefine the spatial processing capacity of the tactile system. PMID:29611804

  1. A Highly Accurate Approach for Aeroelastic System with Hysteresis Nonlinearity

    Directory of Open Access Journals (Sweden)

    C. C. Cui

    2017-01-01

    Full Text Available We propose an accurate approach, based on the precise integration method, to solve the aeroelastic system of an airfoil with a pitch hysteresis. A major procedure for achieving high precision is to design a predictor-corrector algorithm. This algorithm enables accurate determination of switching points resulting from the hysteresis. Numerical examples show that the results obtained by the presented method are in excellent agreement with exact solutions. In addition, the high accuracy can be maintained as the time step increases in a reasonable range. It is also found that the Runge-Kutta method may sometimes provide quite different and even fallacious results, though the step length is much less than that adopted in the presented method. With such high computational accuracy, the presented method could be applicable in dynamical systems with hysteresis nonlinearities.
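
    The key ingredient, locating a switching point inside an integration step, can be illustrated generically: detect a sign change of the switching function across the step, then bisect the step length until the state lands on the switching surface. The toy below uses plain RK4 and an invented oscillator, not the authors' precise integration method:

        import numpy as np

        def rk4_step(f, t, x, h):
            k1 = f(t, x)
            k2 = f(t + h/2, x + h/2 * k1)
            k3 = f(t + h/2, x + h/2 * k2)
            k4 = f(t + h, x + h * k3)
            return x + h/6 * (k1 + 2*k2 + 2*k3 + k4)

        def step_to_switch(f, g, t, x, h, tol=1e-10):
            """Bisect the step size so the state lands on g(x) = 0."""
            lo, hi = 0.0, h
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if g(x) * g(rk4_step(f, t, x, mid)) < 0:
                    hi = mid
                else:
                    lo = mid
            return hi

        # Example: oscillator x'' = -x; locate where x first crosses 0.5.
        f = lambda t, x: np.array([x[1], -x[0]])
        g = lambda x: x[0] - 0.5
        t, x, h = 0.0, np.array([0.0, 1.0]), 0.1
        while g(x) * g(rk4_step(f, t, x, h)) >= 0:  # advance until sign change
            x = rk4_step(f, t, x, h)
            t += h
        print("switch at t =", t + step_to_switch(f, g, t, x, h))  # ~0.5236 = arcsin(0.5)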

  2. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  3. An Accurate Link Correlation Estimator for Improving Wireless Protocol Performance

    Science.gov (United States)

    Zhao, Zhiwei; Xu, Xianghua; Dong, Wei; Bu, Jiajun

    2015-01-01

    Wireless link correlation has shown significant impact on the performance of various sensor network protocols. Many works have been devoted to exploiting link correlation for protocol improvements. However, the effectiveness of these designs heavily relies on the accuracy of link correlation measurement. In this paper, we investigate state-of-the-art link correlation measurement and analyze the limitations of existing works. We then propose a novel lightweight and accurate link correlation estimation (LACE) approach based on the reasoning of link correlation formation. LACE combines both long-term and short-term link behaviors for link correlation estimation. We implement LACE as a stand-alone interface in TinyOS and incorporate it into both routing and flooding protocols. Simulation and testbed results show that LACE: (1) achieves more accurate and lightweight link correlation measurements than the state-of-the-art work; and (2) greatly improves the performance of protocols exploiting link correlation. PMID:25686314

  4. Accurate phylogenetic tree reconstruction from quartets: a heuristic approach.

    Science.gov (United States)

    Reaz, Rezwana; Bayzid, Md Shamsuzzoha; Rahman, M Sohel

    2014-01-01

    Supertree methods construct trees on a set of taxa (species) combining many smaller trees on the overlapping subsets of the entire set of taxa. A 'quartet' is an unrooted tree over 4 taxa, hence the quartet-based supertree methods combine many 4-taxon unrooted trees into a single and coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have been receiving considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses), without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets.

  5. Improved fingercode alignment for accurate and compact fingerprint recognition

    CSIR Research Space (South Africa)

    Brown, Dane

    2016-05-01

    Full Text Available FingerCode [1] uses circular tessellation of filtered fingerprint images centered at the reference point, which results in a circular ROI...

  6. D-BRAIN : Anatomically accurate simulated diffusion MRI brain data

    OpenAIRE

    Perrone, Daniele; Jeurissen, Ben; Aelterman, Jan; Roine, Timo; Sijbers, Jan; Pizurica, Aleksandra; Leemans, Alexander; Philips, Wilfried

    2016-01-01

    Diffusion Weighted (DW) MRI allows for the non-invasive study of water diffusion inside living tissues. As such, it is useful for the investigation of human brain white matter (WM) connectivity in vivo through fiber tractography (FT) algorithms. Many DW-MRI tailored restoration techniques and FT algorithms have been developed. However, it is not clear how accurately these methods reproduce the WM bundle characteristics in real-world conditions, such as in the presence of noise, partial volume...

  7. Accurate Online Full Charge Capacity Modeling of Smartphone Batteries

    OpenAIRE

    Hoque, Mohammad A.; Siekkinen, Matti; Koo, Jonghoe; Tarkoma, Sasu

    2016-01-01

    Full charge capacity (FCC) refers to the amount of energy a battery can hold. It is the fundamental property of smartphone batteries that diminishes as the battery ages and is charged/discharged. We investigate the behavior of smartphone batteries while charging and demonstrate that the battery voltage and charging rate information can together characterize the FCC of a battery. We propose a new method for accurately estimating FCC without exposing low-level system details or introducing new ...

  8. Multigrid time-accurate integration of Navier-Stokes equations

    Science.gov (United States)

    Arnone, Andrea; Liou, Meng-Sing; Povinelli, Louis A.

    1993-01-01

    Efficient acceleration techniques typical of explicit steady-state solvers are extended to time-accurate calculations. Stability restrictions are greatly reduced by means of a fully implicit time discretization. A four-stage Runge-Kutta scheme with local time stepping, residual smoothing, and multigridding is used instead of traditional time-expensive factorizations. Some applications to natural and forced unsteady viscous flows show the capability of the procedure.

  9. Fast, Accurate Memory Architecture Simulation Technique Using Memory Access Characteristics

    OpenAIRE

    小野, 貴継; 井上, 弘士; 村上, 和彰

    2007-01-01

    This paper proposes a fast and accurate memory architecture simulation technique. To design memory architecture, the first steps commonly involve using trace-driven simulation. However, expanding the design space makes the evaluation time increase. A fast simulation is achieved by a trace size reduction, but it reduces the simulation accuracy. Our approach can reduce the simulation time while maintaining the accuracy of the simulation results. In order to evaluate validity of proposed techniq...

  10. Discrete sensors distribution for accurate plantar pressure analyses.

    Science.gov (United States)

    Claverie, Laetitia; Ille, Anne; Moretto, Pierre

    2016-12-01

    The aim of this study was to determine the distribution of discrete sensors under the footprint for accurate plantar pressure analyses. For this purpose, two different sensor layouts were tested and compared, to determine which was the more accurate for monitoring plantar pressure with wireless devices in research and/or clinical practice. Ten healthy volunteers participated in the study (age range: 23-58 years). The barycenter of pressures (BoP) determined from the plantar pressure system (W-inshoe®) was compared to the center of pressures (CoP) determined from a force platform (AMTI) in the medial-lateral (ML) and anterior-posterior (AP) directions. Then, the vertical ground reaction force (vGRF) obtained from both W-inshoe® and the force platform was compared for both layouts for each subject. The BoP and vGRF determined from the plantar pressure system data showed good correlation (SCC) with those determined from the force platform data, notably for the second sensor organization (ML SCC = 0.95; AP SCC = 0.99; vGRF SCC = 0.91). The study demonstrates that an adjusted placement of removable sensors is key to accurate plantar pressure analyses. These results are promising for plantar pressure recording outside clinical or laboratory settings, for long-term monitoring, real-time feedback or any activity requiring a low-cost system. Copyright © 2016 IPEM. Published by Elsevier Ltd. All rights reserved.

  11. Can blind persons accurately assess body size from the voice?

    Science.gov (United States)

    Pisanski, Katarzyna; Oleszkiewicz, Anna; Sorokowska, Agnieszka

    2016-04-01

    Vocal tract resonances provide reliable information about a speaker's body size that human listeners use for biosocial judgements as well as speech recognition. Although humans can accurately assess men's relative body size from the voice alone, how this ability is acquired remains unknown. In this study, we test the prediction that accurate voice-based size estimation is possible without prior audiovisual experience linking low frequencies to large bodies. Ninety-one healthy congenitally or early blind, late blind and sighted adults (aged 20-65) participated in the study. On the basis of vowel sounds alone, participants assessed the relative body sizes of male pairs of varying heights. Accuracy of voice-based body size assessments significantly exceeded chance and did not differ among participants who were sighted, or congenitally blind or who had lost their sight later in life. Accuracy increased significantly with relative differences in physical height between men, suggesting that both blind and sighted participants used reliable vocal cues to size (i.e. vocal tract resonances). Our findings demonstrate that prior visual experience is not necessary for accurate body size estimation. This capacity, integral to both nonverbal communication and speech perception, may be present at birth or may generalize from broader cross-modal correspondences. © 2016 The Author(s).

  12. An accurate determination of the flux within a slab

    International Nuclear Information System (INIS)

    Ganapol, B.D.; Lapenta, G.

    1993-01-01

    During the past decade, several articles have been written concerning accurate solutions to the monoenergetic neutron transport equation in infinite and semi-infinite geometries. The numerical formulations found in these articles were based primarily on the extensive theoretical investigations performed by the "transport greats" such as Chandrasekhar, Busbridge, Sobolev, and Ivanov, to name a few. The development of numerical solutions in infinite and semi-infinite geometries represents an example of how mathematical transport theory can be utilized to provide highly accurate and efficient numerical transport solutions. These solutions, or analytical benchmarks, are useful as "industry standards," which provide guidance to code developers and promote learning in the classroom. The high accuracy of these benchmarks is directly attributable to the rapid advancement of the state of computing and computational methods. Transport calculations that were beyond the capability of the "supercomputers" of just a few years ago are now possible at one's desk. In this paper, we again build upon the past to tackle the slab problem, which is of the next level of difficulty in comparison to infinite-media problems. The formulation is based on the monoenergetic Green's function, which is the most fundamental transport solution. This method of solution requires a fast and accurate evaluation of the Green's function, which, with today's computational power, is now readily available

  13. Can Measured Synergy Excitations Accurately Construct Unmeasured Muscle Excitations?

    Science.gov (United States)

    Bianco, Nicholas A; Patten, Carolynn; Fregly, Benjamin J

    2018-01-01

    Accurate prediction of muscle and joint contact forces during human movement could improve treatment planning for disorders such as osteoarthritis, stroke, Parkinson's disease, and cerebral palsy. Recent studies suggest that muscle synergies, a low-dimensional representation of a large set of muscle electromyographic (EMG) signals (henceforth called "muscle excitations"), may reduce the redundancy of muscle excitation solutions predicted by optimization methods. This study explores the feasibility of using muscle synergy information extracted from eight muscle EMG signals (henceforth called "included" muscle excitations) to accurately construct muscle excitations from up to 16 additional EMG signals (henceforth called "excluded" muscle excitations). Using treadmill walking data collected at multiple speeds from two subjects (one healthy, one poststroke), we performed muscle synergy analysis on all possible subsets of eight included muscle excitations and evaluated how well the calculated time-varying synergy excitations could construct the remaining excluded muscle excitations (henceforth called "synergy extrapolation"). We found that some, but not all, eight-muscle subsets yielded synergy excitations that achieved >90% extrapolation variance accounted for (VAF). Using the top 10% of subsets, we developed muscle selection heuristics to identify included muscle combinations whose synergy excitations achieved high extrapolation accuracy. For 3, 4, and 5 synergies, these heuristics yielded extrapolation VAF values approximately 5% lower than corresponding reconstruction VAF values for each associated eight-muscle subset. These results suggest that synergy excitations obtained from experimentally measured muscle excitations can accurately construct unmeasured muscle excitations, which could help limit muscle excitations predicted by muscle force optimizations.
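
    The synergy-extrapolation procedure can be sketched with non-negative matrix factorization: factor the eight included excitations into time-varying synergy excitations, fit non-negative weights for each excluded muscle, and score the reconstruction by VAF. Random data stand in for EMG envelopes, and scikit-learn's NMF stands in for the authors' synergy analysis:

        import numpy as np
        from scipy.optimize import nnls
        from sklearn.decomposition import NMF

        rng = np.random.default_rng(1)
        n_muscles, n_syn, n_time = 24, 4, 500
        W_true = rng.random((n_muscles, n_syn))
        H_true = rng.random((n_syn, n_time))
        E = W_true @ H_true                    # simulated muscle excitations

        included, excluded = E[:8], E[8:]      # 8 "measured", 16 "unmeasured"

        model = NMF(n_components=n_syn, init="nndsvda", max_iter=2000, random_state=0)
        model.fit_transform(included)          # weights for included muscles
        H = model.components_                  # synergy excitations (n_syn x n_time)

        # Extrapolate: fit non-negative weights for each excluded muscle against H.
        W_exc = np.array([nnls(H.T, e)[0] for e in excluded])
        recon = W_exc @ H
        vaf = 1 - np.sum((excluded - recon) ** 2) / np.sum(excluded ** 2)
        print(f"extrapolation VAF = {vaf:.1%}")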

  14. Can cancer researchers accurately judge whether preclinical reports will reproduce?

    Directory of Open Access Journals (Sweden)

    Daniel Benjamin

    2017-06-01

    Full Text Available There is vigorous debate about the reproducibility of research findings in cancer biology. Whether scientists can accurately assess which experiments will reproduce original findings is important to determining the pace at which science self-corrects. We collected forecasts from basic and preclinical cancer researchers on the first 6 replication studies conducted by the Reproducibility Project: Cancer Biology (RP:CB) to assess the accuracy of expert judgments on specific replication outcomes. On average, researchers forecasted a 75% probability of replicating the statistical significance and a 50% probability of replicating the effect size, yet none of these studies successfully replicated on either criterion (for the 5 studies with results reported). Accuracy was related to expertise: experts with higher h-indices were more accurate, whereas experts with more topic-specific expertise were less accurate. Our findings suggest that experts, especially those with specialized knowledge, were overconfident about the RP:CB replicating individual experiments within published reports; researcher optimism likely reflects a combination of overestimating the validity of original studies and underestimating the difficulties of repeating their methodologies.

  15. Is bioelectrical impedance accurate for use in large epidemiological studies?

    Directory of Open Access Journals (Sweden)

    Merchant Anwar T

    2008-09-01

    Full Text Available Abstract Percentage of body fat is strongly associated with the risk of several chronic diseases, but its accurate measurement is difficult. Bioelectrical impedance analysis (BIA) is a relatively simple, quick and non-invasive technique for measuring body composition. It measures body fat accurately in controlled clinical conditions, but its performance in the field is inconsistent. In large epidemiologic studies, simpler surrogate techniques such as body mass index (BMI), waist circumference, and waist-hip ratio are frequently used instead of BIA to measure body fatness. We reviewed the rationale, theory, and technique of recently developed systems such as foot-to-foot (or hand-to-foot) BIA measurement, and the elements that could influence its results in large epidemiologic studies. BIA results are influenced by factors such as the environment, ethnicity, phase of the menstrual cycle, and underlying medical conditions. We concluded that BIA measurements validated for specific ethnic groups, populations and conditions can accurately measure body fat in those populations, but not in others, and suggest that for large epidemiological studies with diverse populations BIA may not be the appropriate choice for body composition measurement unless specific calibration equations are developed for the different groups participating in the study.

  16. Accurate and approximate thermal rate constants for polyatomic chemical reactions

    International Nuclear Information System (INIS)

    Nyman, Gunnar

    2007-01-01

    In favourable cases it is possible to calculate thermal rate constants for polyatomic reactions to high accuracy from first principles. Here, we discuss the use of flux correlation functions combined with the multi-configurational time-dependent Hartree (MCTDH) approach to efficiently calculate cumulative reaction probabilities and thermal rate constants for polyatomic chemical reactions. Three isotopic variants of the H2 + CH3 → CH4 + H reaction are used to illustrate the theory. There is good agreement with experimental results, although the experimental rates generally are larger than the calculated ones, which are believed to be at least as accurate as the experimental rates. Approximations allowing evaluation of the thermal rate constant above 400 K are treated. It is also noted that for the treated reactions, transition state theory (TST) gives accurate rate constants above 500 K. TST also gives accurate results for kinetic isotope effects in cases where the mass of the transferred atom is unchanged. Due to its neglect of tunnelling, TST however fails below 400 K if the mass of the transferred atom changes between the isotopic reactions

  17. Accurate thermoelastic tensor and acoustic velocities of NaCl

    Energy Technology Data Exchange (ETDEWEB)

    Marcondes, Michel L., E-mail: michel@if.usp.br [Physics Institute, University of Sao Paulo, Sao Paulo, 05508-090 (Brazil); Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Shukla, Gaurav, E-mail: shukla@physics.umn.edu [School of Physics and Astronomy, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States); Silveira, Pedro da [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Wentzcovitch, Renata M., E-mail: wentz002@umn.edu [Chemical Engineering and Material Science, University of Minnesota, Minneapolis, 55455 (United States); Minnesota supercomputer Institute, University of Minnesota, Minneapolis, 55455 (United States)

    2015-12-15

    Despite the importance of thermoelastic properties of minerals in geology and geophysics, their measurement at high pressures and temperatures are still challenging. Thus, ab initio calculations are an essential tool for predicting these properties at extreme conditions. Owing to the approximate description of the exchange-correlation energy, approximations used in calculations of vibrational effects, and numerical/methodological approximations, these methods produce systematic deviations. Hybrid schemes combining experimental data and theoretical results have emerged as a way to reconcile available information and offer more reliable predictions at experimentally inaccessible thermodynamics conditions. Here we introduce a method to improve the calculated thermoelastic tensor by using highly accurate thermal equation of state (EoS). The corrective scheme is general, applicable to crystalline solids with any symmetry, and can produce accurate results at conditions where experimental data may not exist. We apply it to rock-salt-type NaCl, a material whose structural properties have been challenging to describe accurately by standard ab initio methods and whose acoustic/seismic properties are important for the gas and oil industry.

  18. Indexed variation graphs for efficient and accurate resistome profiling.

    Science.gov (United States)

    Rowe, Will P M; Winn, Martyn D

    2018-05-14

    Antimicrobial resistance remains a major threat to global health. Profiling the collective antimicrobial resistance genes within a metagenome (the "resistome") facilitates greater understanding of antimicrobial resistance gene diversity and dynamics. In turn, this can allow for gene surveillance, individualised treatment of bacterial infections and more sustainable use of antimicrobials. However, resistome profiling can be complicated by high similarity between reference genes, as well as the sheer volume of sequencing data and the complexity of analysis workflows. We have developed an efficient and accurate method for resistome profiling that addresses these complications and improves upon currently available tools. Our method combines a variation graph representation of gene sets with an LSH Forest indexing scheme to allow for fast classification of metagenomic sequence reads using similarity-search queries. Subsequent hierarchical local alignment of classified reads against graph traversals enables accurate reconstruction of full-length gene sequences using a scoring scheme. We provide our implementation, GROOT, and show it to be both faster and more accurate than a current reference-dependent tool for resistome profiling. GROOT runs on a laptop and can process a typical 2 gigabyte metagenome in 2 minutes using a single CPU. Our method is not restricted to resistome profiling and has the potential to improve current metagenomic workflows. GROOT is written in Go and is available at https://github.com/will-rowe/groot (MIT license). will.rowe@stfc.ac.uk. Supplementary data are available at Bioinformatics online.
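
    The classification step, similarity search of reads against an index of reference gene variants, can be shown in miniature with k-mer containment; GROOT's real index uses MinHash signatures in an LSH Forest over variation-graph traversals, which this toy deliberately omits. The reference sequences are illustrative fragments:

        def kmers(seq, k=7):
            return {seq[i:i + k] for i in range(len(seq) - k + 1)}

        # Toy "index": k-mer sets for two resistance-gene-like references.
        refs = {
            "blaTEM-like": kmers("ATGAGTATTCAACATTTCCGTGTCGCCCTTATTCCCTTTTTTGCGGCATT"),
            "aac3-like":   kmers("ATGCTACGGCTGCTGGCACGACAGGTTTCCCGACTGGAAAGCGGGCAGTG"),
        }

        def classify(read, threshold=0.5, k=7):
            """Assign a read to the reference that best contains its k-mers."""
            q = kmers(read, k)
            best, best_score = None, 0.0
            for name, ref in refs.items():
                score = len(q & ref) / len(q)  # containment of read in reference
                if score > best_score:
                    best, best_score = name, score
            return best if best_score >= threshold else None

        print(classify("AGTATTCAACATTTCCGTGTCGCCCTTATT"))  # -> blaTEM-like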

  19. MR Fingerprinting for Rapid Quantitative Abdominal Imaging.

    Science.gov (United States)

    Chen, Yong; Jiang, Yun; Pahwa, Shivani; Ma, Dan; Lu, Lan; Twieg, Michael D; Wright, Katherine L; Seiberlich, Nicole; Griswold, Mark A; Gulani, Vikas

    2016-04-01

    To develop a magnetic resonance (MR) "fingerprinting" technique for quantitative abdominal imaging. This HIPAA-compliant study had institutional review board approval, and informed consent was obtained from all subjects. To achieve accurate quantification in the presence of marked B0 and B1 field inhomogeneities, the MR fingerprinting framework was extended by using a two-dimensional fast imaging with steady-state free precession, or FISP, acquisition and a Bloch-Siegert B1 mapping method. The accuracy of the proposed technique was validated by using agarose phantoms. Quantitative measurements were performed in eight asymptomatic subjects and in six patients with 20 focal liver lesions. A two-tailed Student t test was used to compare the T1 and T2 results in metastatic adenocarcinoma with those in surrounding liver parenchyma and healthy subjects. Phantom experiments showed good agreement with standard methods in T1 and T2 after B1 correction. In vivo studies demonstrated that quantitative T1, T2, and B1 maps can be acquired within a breath hold of approximately 19 seconds. T1 and T2 measurements were compatible with those in the literature. Representative values included the following: liver, 745 msec ± 65 (standard deviation) and 31 msec ± 6; renal medulla, 1702 msec ± 205 and 60 msec ± 21; renal cortex, 1314 msec ± 77 and 47 msec ± 10; spleen, 1232 msec ± 92 and 60 msec ± 19; skeletal muscle, 1100 msec ± 59 and 44 msec ± 9; and fat, 253 msec ± 42 and 77 msec ± 16, respectively. T1 and T2 in metastatic adenocarcinoma were 1673 msec ± 331 and 43 msec ± 13, respectively, significantly different from surrounding liver parenchyma relaxation times of 840 msec ± 113 and 28 msec ± 3 (P < .0001 and P < .01) and those in hepatic parenchyma in healthy volunteers (745 msec ± 65 and 31 msec ± 6, P < .0001 and P = .021, respectively). A rapid technique for quantitative abdominal imaging was developed that allows simultaneous quantification of multiple tissue
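
    Though not spelled out in the abstract, MR fingerprinting quantification rests on dictionary matching: each voxel's signal evolution is compared against Bloch-simulated dictionary entries by normalized inner product, and the best-matching entry's parameters are reported. A minimal sketch with random stand-ins for the simulated dictionary:

        import numpy as np

        rng = np.random.default_rng(2)
        n_entries, n_frames = 10_000, 1_000
        dictionary = rng.standard_normal((n_entries, n_frames))      # stand-in signals
        t1_t2 = rng.uniform([200, 10], [2000, 120], (n_entries, 2))  # (T1, T2) per entry

        # Normalize rows once; matching is then a single matrix-vector product.
        d_norm = dictionary / np.linalg.norm(dictionary, axis=1, keepdims=True)

        def match(voxel_signal):
            """Return (T1, T2) of the dictionary entry best matching the signal."""
            s = voxel_signal / np.linalg.norm(voxel_signal)
            best = np.argmax(np.abs(d_norm @ s))  # max normalized inner product
            return t1_t2[best]

        voxel = dictionary[1234] + 0.1 * rng.standard_normal(n_frames)
        print(match(voxel), "vs true", t1_t2[1234])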

  20. EQPlanar: a maximum-likelihood method for accurate organ activity estimation from whole body planar projections

    International Nuclear Information System (INIS)

    Song, N; Frey, E C; He, B; Wahl, R L

    2011-01-01

    Optimizing targeted radionuclide therapy requires patient-specific estimation of organ doses. The organ doses are estimated from quantitative nuclear medicine imaging studies, many of which involve planar whole body scans. We have previously developed the quantitative planar (QPlanar) processing method and demonstrated its ability to provide more accurate activity estimates than conventional geometric-mean-based planar (CPlanar) processing methods using physical phantom and simulation studies. The QPlanar method uses the maximum-likelihood expectation-maximization algorithm, 3D organ volumes of interest (VOIs), and rigorous models of physical image degrading factors to estimate organ activities. However, the QPlanar method requires alignment between the 3D organ VOIs and the 2D planar projections and assumes uniform activity distribution in each VOI. This makes application to patients challenging. As a result, in this paper we propose an extended QPlanar (EQPlanar) method that provides independent-organ rigid registration and includes multiple background regions. We have validated this method using both Monte Carlo simulation and patient data. In the simulation study, we evaluated the precision and accuracy of the method in comparison to the original QPlanar method. For the patient studies, we compared organ activity estimates at 24 h after injection with those from conventional geometric mean-based planar quantification using a 24 h post-injection quantitative SPECT reconstruction as the gold standard. We also compared the goodness of fit of the measured and estimated projections obtained from the EQPlanar method to those from the original method at four other time points where gold standard data were not available. In the simulation study, more accurate activity estimates were provided by the EQPlanar method for all the organs at all the time points compared with the QPlanar method. Based on the patient data, we concluded that the EQPlanar method provided a better fit between measured and estimated projections than the original QPlanar method.
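
    For readers unfamiliar with the algorithmic core, the QPlanar family of methods builds on the maximum-likelihood expectation-maximization (ML-EM) update for a linear Poisson projection model. The sketch below shows the generic ML-EM activity update with a random stand-in system matrix; the papers' physical degradation modeling and organ registration are not reproduced.

```python
# Generic ML-EM update for estimating organ activities a from planar
# projection counts y under a linear model y ~ Poisson(A @ a).
import numpy as np

rng = np.random.default_rng(1)
n_pixels, n_organs = 200, 4
A = rng.uniform(0, 1, (n_pixels, n_organs))   # system matrix (assumed known)
true_activity = np.array([50.0, 120.0, 30.0, 80.0])
y = rng.poisson(A @ true_activity).astype(float)

a = np.ones(n_organs)            # positive initial estimate
sens = A.sum(axis=0)             # sensitivity term A^T 1
for _ in range(500):
    a *= (A.T @ (y / (A @ a))) / sens   # multiplicative ML-EM update

print(np.round(a, 1))  # approaches true_activity as counts grow
```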

  1. Accurate Classification of Chronic Migraine via Brain Magnetic Resonance Imaging

    Science.gov (United States)

    Schwedt, Todd J.; Chong, Catherine D.; Wu, Teresa; Gaw, Nathan; Fu, Yinlin; Li, Jing

    2015-01-01

    Background The International Classification of Headache Disorders provides criteria for the diagnosis and subclassification of migraine. Since there is no objective gold standard by which to test these diagnostic criteria, the criteria are based on the consensus opinion of content experts. Accurate migraine classifiers consisting of brain structural measures could serve as an objective gold standard by which to test and revise diagnostic criteria. The objectives of this study were to utilize magnetic resonance imaging measures of brain structure for constructing classifiers: 1) that accurately identify individuals as having chronic vs. episodic migraine vs. being a healthy control; and 2) that test the currently used threshold of 15 headache days/month for differentiating chronic migraine from episodic migraine. Methods Study participants underwent magnetic resonance imaging for determination of regional cortical thickness, cortical surface area, and volume. Principal components analysis combined structural measurements into principal components accounting for 85% of variability in brain structure. Models consisting of these principal components were developed to achieve the classification objectives. Ten-fold cross validation assessed classification accuracy within each of the ten runs, with data from 90% of participants randomly selected for classifier development and data from the remaining 10% of participants used to test classification performance. Headache frequency thresholds ranging from 5–15 headache days/month were evaluated to determine the threshold allowing for the most accurate subclassification of individuals into lower and higher frequency subgroups. Results Participants were 66 migraineurs and 54 healthy controls, 75.8% female, with an average age of 36 ± 11 years. Average classifier accuracies were: a) 68% for migraine (episodic + chronic) vs. healthy controls; b) 67.2% for episodic migraine vs. healthy controls; c) 86.3% for chronic migraine vs. healthy controls.
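
    A minimal sketch of the analysis pipeline described above, PCA retaining 85% of the variance followed by a classifier evaluated with 10-fold cross-validation, is shown below using scikit-learn and synthetic data. The choice of logistic regression as the classifier is an assumption for illustration; the paper's models differ.

```python
# PCA (85% variance) + classifier, accuracy estimated by 10-fold CV.
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 60))       # stand-in structural measures
y = rng.integers(0, 2, 120)              # 0 = control, 1 = migraine
X[y == 1, :5] += 1.0                     # inject a weak group difference

clf = make_pipeline(StandardScaler(),
                    PCA(n_components=0.85),   # components explaining 85% of variance
                    LogisticRegression(max_iter=1000))
print(cross_val_score(clf, X, y, cv=10).mean())  # mean 10-fold accuracy
```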

  2. The controlled incorporation of foreign elements in metal surfaces by means of quantitative ion implantation

    International Nuclear Information System (INIS)

    Gries, W.H.

    1977-01-01

    Quantitative ion implantation is a powerful new method for the doping of metal surfaces with accurately known quantities of an element or one of its isotopes. It can be applied for the preparation of standards for various uses in instrumental methods of surface and bulk analysis. This paper provides selected information on some theoretical and practical aspects of quantitative ion implantation with the object of promoting the application of the method and stimulating further purposeful research on the subject. (Auth.)

  3. An accurate metric for the spacetime around rotating neutron stars

    Science.gov (United States)

    Pappas, George

    2017-04-01

    The problem of having an accurate description of the spacetime around rotating neutron stars is of great astrophysical interest. For astrophysical applications, one needs to have a metric that captures all the properties of the spacetime around a rotating neutron star. Furthermore, an accurate, appropriately parametrized metric, i.e. a metric that is given in terms of parameters that are directly related to the physical structure of the neutron star, could be used to solve the inverse problem, which is to infer the properties of the structure of a neutron star from astrophysical observations. In this work, we present such an approximate stationary and axisymmetric metric for the exterior of rotating neutron stars, which is constructed using the Ernst formalism and is parametrized by the relativistic multipole moments of the central object. This metric is given in terms of an expansion in Weyl-Papapetrou coordinates with the multipole moments as free parameters and is shown to be extremely accurate in capturing the physical properties of a neutron star spacetime as they are calculated numerically in general relativity. Because the metric is given in terms of an expansion, the expressions are much simpler and easier to implement, in contrast to previous approaches. For the parametrization of the metric in general relativity, the recently discovered universal 3-hair relations are used to produce a three-parameter metric. Finally, a straightforward extension of this metric is given for scalar-tensor theories with a massless scalar field, which also admit a formulation in terms of an Ernst potential.

  4. Accurate modeling of the hose instability in plasma wakefield accelerators

    Science.gov (United States)

    Mehrling, T. J.; Benedetti, C.; Schroeder, C. B.; Martinez de la Ossa, A.; Osterhoff, J.; Esarey, E.; Leemans, W. P.

    2018-05-01

    Hosing is a major challenge for the applicability of plasma wakefield accelerators and its modeling is therefore of fundamental importance to facilitate future stable and compact plasma-based particle accelerators. In this contribution, we present a new model for the evolution of the plasma centroid, which enables the accurate investigation of the hose instability in the nonlinear blowout regime. It paves the way for more precise and comprehensive studies of hosing, e.g., with drive and witness beams, which were not possible with previous models.

  5. An Accurate Technique for Calculation of Radiation From Printed Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min; Sorensen, Stig B.; Jorgensen, Erik

    2011-01-01

    The accuracy of various techniques for calculating the radiation from printed reflectarrays is examined, and an improved technique based on the equivalent currents approach is proposed. The equivalent currents are found from a continuous plane wave spectrum calculated by use of the spectral dyadic Green's function. This ensures a correct relation between the equivalent electric and magnetic currents and thus allows an accurate calculation of the radiation over the entire far-field sphere. A comparison to DTU-ESA Facility measurements of a reference offset reflectarray designed and manufactured

  6. Fast and accurate methods of independent component analysis: A survey

    Czech Academy of Sciences Publication Activity Database

    Tichavský, Petr; Koldovský, Zbyněk

    2011-01-01

    Vol. 47, No. 3 (2011), pp. 426-438. ISSN 0023-5954. R&D Projects: GA MŠk 1M0572; GA ČR GA102/09/1278. Institutional research plan: CEZ:AV0Z10750506. Keywords: blind source separation * artifact removal * electroencephalogram * audio signal processing. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 0.454, year: 2011. http://library.utia.cas.cz/separaty/2011/SI/tichavsky-fast and accurate methods of independent component analysis a survey.pdf

  7. Accurate ocean bottom seismometer positioning method inspired by multilateration technique

    Science.gov (United States)

    Benazzouz, Omar; Pinheiro, Luis M.; Matias, Luis M. A.; Afilhado, Alexandra; Herold, Daniel; Haines, Seth S.

    2018-01-01

    The positioning of ocean bottom seismometers (OBS) is a key step in the processing flow of OBS data, especially in the case of self-popup OBS instruments. The use of first arrivals from airgun shots, rather than relying on the acoustic transponders mounted in the OBS, is becoming a trend and generally leads to more accurate positioning due to the statistics from a large number of shots. In this paper, a linearization of the OBS positioning problem via the multilateration technique is discussed. The discussed linear solution solves jointly for the average water layer velocity and the OBS position using only shot locations and first arrival times as input data.
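
    The linearization works because, for shots at the sea surface, squaring the travel-time equation v²tᵢ² = (x − xᵢ)² + (y − yᵢ)² + z² makes the system linear in the unknowns (x, y, s, w), with s = x² + y² + z² and w = v². The sketch below demonstrates this on synthetic first-arrival picks; the weighting and quality control of the published method are omitted, and the setup is an assumption for illustration.

```python
# Linearized multilateration: joint least-squares solve for OBS
# position and squared water velocity from surface-shot travel times.
import numpy as np

rng = np.random.default_rng(2)
x0, y0, z0, v0 = 1200.0, -800.0, 2500.0, 1500.0   # true OBS pos, velocity

shots = rng.uniform(-5000, 5000, (300, 2))        # shot x, y at surface
ranges = np.sqrt((shots[:, 0] - x0)**2 + (shots[:, 1] - y0)**2 + z0**2)
t = ranges / v0 + rng.normal(0, 1e-3, len(shots)) # first-arrival picks

# Linear system: 2*x_i*x + 2*y_i*y - s + t_i^2 * w = x_i^2 + y_i^2
A = np.column_stack([2 * shots[:, 0], 2 * shots[:, 1],
                     -np.ones(len(shots)), t**2])
b = shots[:, 0]**2 + shots[:, 1]**2
x, y, s, w = np.linalg.lstsq(A, b, rcond=None)[0]

v = np.sqrt(w)
z = np.sqrt(s - x**2 - y**2)
print(round(x), round(y), round(z), round(v))     # ~ (1200, -800, 2500, 1500)
```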

  8. Quantum-Accurate Molecular Dynamics Potential for Tungsten

    Energy Technology Data Exchange (ETDEWEB)

    Wood, Mitchell; Thompson, Aidan P.

    2017-03-01

    The purpose of this short contribution is to report on the development of a Spectral Neighbor Analysis Potential (SNAP) for tungsten. We have focused on the characterization of elastic and defect properties of the pure material in order to support molecular dynamics simulations of plasma-facing materials in fusion reactors. A parallel genetic algorithm approach was used to efficiently search for fitting parameters optimized against a large number of objective functions. In addition, we have shown that this many-body tungsten potential can be used in conjunction with a simple helium pair potential to produce accurate defect formation energies for the W-He binary system.

  9. A Novel Device for Accurate Chest Tube Insertion

    DEFF Research Database (Denmark)

    Katballe, Niels; Moeller, Lars B; Olesen, Winnie H

    2016-01-01

    BACKGROUND: Optimal positioning of a large-bore chest tube is in the part of the pleural cavity that needs drainage. It is recommended that the chest tube be positioned apically in pneumothorax and basally for fluids. However, targeted chest tube positioning to a specific part of the pleural cavity can be a challenge. METHODS: A new medical device, the KatGuide, was developed for accurate guiding of a chest tube (28F) to an intended part of the pleural cavity. The primary end point of this randomized, controlled trial was optimal position of the chest tube. The optimal position in pneumothorax

  10. Pink-Beam, Highly-Accurate Compact Water Cooled Slits

    International Nuclear Information System (INIS)

    Lyndaker, Aaron; Deyhim, Alex; Jayne, Richard; Waterman, Dave; Caletka, Dave; Steadman, Paul; Dhesi, Sarnjeet

    2007-01-01

    Advanced Design Consulting, Inc. (ADC) has designed accurate compact slits for applications where high precision is required. The system consists of vertical and horizontal slit mechanisms, a vacuum vessel which houses them, water cooling lines with vacuum guards connected to the individual blades, stepper motors with linear encoders, limit (home position) switches and electrical connections including internal wiring for a drain current measurement system. The total slit size is adjustable from 0 to 15 mm both vertically and horizontally. Each of the four blades is individually controlled and motorized. In this paper, a summary of the design and Finite Element Analysis of the system are presented

  11. Accurate characterization of OPVs: Device masking and different solar simulators

    DEFF Research Database (Denmark)

    Gevorgyan, Suren; Carlé, Jon Eggert; Søndergaard, Roar R.

    2013-01-01

    One of the prime objects of organic solar cell research has been to improve the power conversion efficiency. Unfortunately, the accurate determination of this property is not straightforward and has led to the recommendation that record devices be tested and certified at a few accredited laboratories following rigorous ASTM and IEC standards. This work tries to address some of the issues confronting the standard laboratory in this regard. Solar simulator lamps are investigated for their light field homogeneity and direct versus diffuse components, as well as the correct device area

  12. Allele-sharing models: LOD scores and accurate linkage tests.

    Science.gov (United States)

    Kong, A; Cox, N J

    1997-11-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.

  13. Accurate Ne-heavier rare gas interatomic potentials

    International Nuclear Information System (INIS)

    Candori, R.; Pirani, F.; Vecchiocattivi, F.

    1983-01-01

    Accurate interatomic potential curves for Ne-heavier rare gas systems are obtained by a multiproperty analysis. The curves are given via a parametric function which consists of a modified Dunham expansion connected at long range with the van der Waals expansion. The experimental properties considered in the analysis are the differential scattering cross sections at two different collision energies, the integral cross sections in the glory energy range and the second virial coefficients. The transport properties are considered indirectly by using the potential energy values recently obtained by inversion of the transport coefficients. (author)

  14. Accurate masking technology for high-resolution powder blasting

    Science.gov (United States)

    Pawlowski, Anne-Gabrielle; Sayah, Abdeljalil; Gijs, Martin A. M.

    2005-07-01

    We have combined eroding 10 µm diameter Al2O3 particles with a new masking technology to realize the smallest and most accurate possible structures by powder blasting. Our masking technology is based on the sequential combination of two polymers: (i) the brittle epoxy resin SU8 for its photosensitivity and (ii) the elastic and thermocurable poly-dimethylsiloxane for its large erosion resistance. We have micropatterned various types of structures with a minimum width of 20 µm for test structures with an aspect ratio of 1, and 50 µm for test structures with an aspect ratio of 2.

  15. Rapid and Accurate Idea Transfer: Presenting Ideas with Concept Maps

    Science.gov (United States)

    2008-07-30


  16. An Accurate Transmitting Power Control Method in Wireless Communication Transceivers

    Science.gov (United States)

    Zhang, Naikang; Wen, Zhiping; Hou, Xunping; Bi, Bo

    2018-01-01

    Power control circuits are widely used in transceivers aiming at stabilizing the transmitted signal power to a specified value, thereby reducing power consumption and interference to other frequency bands. In order to overcome the shortcomings of traditional modes of power control, this paper proposes an accurate signal power detection method by multiplexing the receiver and realizes transmitting power control in the digital domain. The simulation results show that this novel digital power control approach has the advantages of small delay, high precision and a simplified design procedure. The proposed method is applicable to transceivers operating over a large frequency dynamic range and is practical to implement.

  17. Quantitative mass spectrometry: an overview

    Science.gov (United States)

    Urban, Pawel L.

    2016-10-01

    Mass spectrometry (MS) is a mainstream chemical analysis technique in the twenty-first century. It has contributed to numerous discoveries in chemistry, physics and biochemistry. Hundreds of research laboratories scattered all over the world use MS every day to investigate fundamental phenomena on the molecular level. MS is also widely used by industry, especially in drug discovery, quality control and food safety protocols. In some cases, mass spectrometers are indispensable and irreplaceable by any other metrological tools. The uniqueness of MS is due to the fact that it enables direct identification of molecules based on the mass-to-charge ratios as well as fragmentation patterns. Thus, for several decades now, MS has been used in qualitative chemical analysis. To address the pressing need for quantitative molecular measurements, a number of laboratories focused on technological and methodological improvements that could render MS a fully quantitative metrological platform. In this theme issue, the experts working for some of those laboratories share their knowledge and enthusiasm about quantitative MS. I hope this theme issue will benefit readers, and foster fundamental and applied research based on quantitative MS measurements. This article is part of the themed issue 'Quantitative mass spectrometry'.

  18. Quantitative Analysis of Thallium-201 Myocardial Tomograms

    International Nuclear Information System (INIS)

    Kim, Sang Eun; Nam, Gi Byung; Choi, Chang Woon

    1991-01-01

    The purpose of this study was to assess the ability of quantitative Tl-201 tomography to identify and localize coronary artery disease (CAD). The study population consisted of 41 patients (31 males, 10 females; mean age 55 ± 7 yr), including 14 with prior myocardial infarction, who underwent both exercise Tl-201 myocardial SPECT and coronary angiography for the evaluation of chest pain. From the short axis and vertical long axis tomograms, stress extent polar maps were generated by the Cedars-Sinai Medical Center program, and the stress defect extent (SDE) was quantified for each coronary artery territory. For the purpose of this study, the coronary circulation was divided into 6 arterial segments, and the myocardial ischemic score (MIS) was calculated from the coronary angiogram. Sensitivity for the detection of CAD (>50% coronary stenosis by angiography) by stress extent polar map was 95% in single vessel disease, and 100% in double and triple vessel disease. Overall sensitivity was 97%. Sensitivity and specificity for the detection of individual diseased vessels were, respectively, 87% and 90% for the left anterior descending artery (LAD), 36% and 93% for the left circumflex artery (LCX), and 71% and 70% for the right coronary artery (RCA). Concordance for the detection of individual diseased vessels between the coronary angiography and stress polar map was fair for the LAD (kappa=0.70) and RCA (kappa=0.41) lesions, whereas it was poor for the LCX lesions (kappa=0.32). There were significant correlations between the MIS and SDE in LAD (rs=0.56, p=0.0027) and RCA territory (rs=0.60, p=0.0094). No significant correlation was found in LCX territory. When total vascular territories were combined, there was a significant correlation between the MIS and SDE (rs=0.42, p=0.0116). In conclusion, the quantitative analysis of Tl-201 tomograms appears to be accurate for determining the presence and location of CAD.

  19. How accurately can 21cm tomography constrain cosmology?

    Science.gov (United States)

    Mao, Yi; Tegmark, Max; McQuinn, Matthew; Zaldarriaga, Matias; Zahn, Oliver

    2008-07-01

    There is growing interest in using 3-dimensional neutral hydrogen mapping with the redshifted 21 cm line as a cosmological probe. However, its utility depends on many assumptions. To aid experimental planning and design, we quantify how the precision with which cosmological parameters can be measured depends on a broad range of assumptions, focusing on the 21 cm signal from 6 < z < 20: sensitivity to experimental noise, to uncertainties in the reionization history, and to the level of contamination from astrophysical foregrounds. We derive simple analytic estimates for how various assumptions affect an experiment's sensitivity, and we find that the modeling of reionization is the most important, followed by the array layout. We present an accurate yet robust method for measuring cosmological parameters that exploits the fact that the ionization power spectra are rather smooth functions that can be accurately fit by 7 phenomenological parameters. We find that for future experiments, marginalizing over these nuisance parameters may provide constraints almost as tight on the cosmology as if 21 cm tomography measured the matter power spectrum directly. A future square kilometer array optimized for 21 cm tomography could improve the sensitivity to spatial curvature and neutrino masses by up to 2 orders of magnitude, to ΔΩk ≈ 0.0002 and Δmν ≈ 0.007 eV, and give a 4σ detection of the spectral index running predicted by the simplest inflation models.

  20. Individual Differences in Accurately Judging Personality From Text.

    Science.gov (United States)

    Hall, Judith A; Goh, Jin X; Mast, Marianne Schmid; Hagedorn, Christian

    2016-08-01

    This research examines correlates of accuracy in judging Big Five traits from first-person text excerpts. Participants in six studies were recruited from psychology courses or online. In each study, participants performed a task of judging personality from text and performed other ability tasks and/or filled out questionnaires. Participants who were more accurate in judging personality from text were more likely to be female; had personalities that were more agreeable, conscientious, and feminine, and less neurotic and dominant (all controlling for participant gender); scored higher on empathic concern; self-reported more interest in, and attentiveness to, people's personalities in their daily lives; and reported reading more for pleasure, especially fiction. Accuracy was not associated with SAT scores but had a significant relation to vocabulary knowledge. Accuracy did not correlate with tests of judging personality and emotion based on audiovisual cues. This research is the first to address individual differences in accurate judgment of personality from text, thus adding to the literature on correlates of the good judge of personality. © 2015 Wiley Periodicals, Inc.

  1. Canadian consumer issues in accurate and fair electricity metering

    International Nuclear Information System (INIS)

    2000-07-01

    The Public Interest Advocacy Centre (PIAC), located in Ottawa, participates in regulatory proceedings concerning electricity and natural gas to support public and consumer interests. PIAC provides legal representation, research and policy support and public advocacy. A study aimed at determining the issues at stake for residential electricity consumers in the provision of fair and accurate electricity metering was commissioned by Measurement Canada in consultation with Industry Canada's Consumer Affairs. The metering of electricity must be carried out in a fair and efficient manner for all residential consumers. The Electricity and Gas Inspection Act was developed to ensure compliance with standards for measuring instrumentation. The accurate metering of electricity through the distribution systems for electricity in Canada represents the main focus of this study and report. The role played by Measurement Canada and the increased efficiencies of service delivery by Measurement Canada or the changing of electricity market conditions are of special interest. The role of Measurement Canada was explained, as were the concerns of residential consumers. A comparison was then made between the interests of residential consumers and those of commercial and industrial electricity consumers in electricity metering. Selected American and Commonwealth jurisdictions were reviewed in light of their electricity metering practices. A section on compliance and conflict resolution was included, in addition to a section on the use of voluntary codes for compliance and conflict resolution.

  2. KFM: a homemade yet accurate and dependable fallout meter

    International Nuclear Information System (INIS)

    Kearny, C.H.; Barnes, P.R.; Chester, C.V.; Cortner, M.W.

    1978-01-01

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient "dry-bucket" in which it can be charged when the air is very humid, this instrument always can be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The step-by-step illustrated instructions for making and using a KFM are presented. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM

  3. Accurate modeling and evaluation of microstructures in complex materials

    Science.gov (United States)

    Tahmasebi, Pejman

    2018-02-01

    Accurate characterization of heterogeneous materials is of great importance for different fields of science and engineering. Such a goal can be achieved through imaging. Acquiring three- or two-dimensional images under different conditions is not, however, always plausible. On the other hand, accurate characterization of complex and multiphase materials requires various digital images (I) under different conditions. An ensemble method is presented that can take one single (or a set of) I(s) and stochastically produce several similar models of the given disordered material. The method is based on successive calculation of a conditional probability by which the initial stochastic models are produced. Then, a graph formulation is utilized for removing unrealistic structures. A distance transform function for the Is with highly connected microstructure and long-range features is considered, which results in a new I that is more informative. Reproduction of the I is also considered through a histogram matching approach in an iterative framework. Such an iterative algorithm avoids reproduction of unrealistic structures. Furthermore, a multiscale approach, based on pyramid representation of the large Is, is presented that can produce materials with millions of pixels in a matter of seconds. Finally, the nonstationary systems—those for which the distribution of data varies spatially—are studied using two different methods. The method is tested on several complex and large examples of microstructures. The produced results are all in excellent agreement with the utilized Is and the similarities are quantified using various correlation functions.

  4. Funnel metadynamics as accurate binding free-energy method

    Science.gov (United States)

    Limongelli, Vittorio; Bonomi, Massimiliano; Parrinello, Michele

    2013-01-01

    A detailed description of the events ruling ligand/protein interaction and an accurate estimation of the drug affinity to its target is of great help in speeding up drug discovery strategies. We have developed a metadynamics-based approach, named funnel metadynamics, that allows the ligand to enhance the sampling of the target binding sites and its solvated states. This method leads to an efficient characterization of the binding free-energy surface and an accurate calculation of the absolute protein–ligand binding free energy. We illustrate our protocol in two systems, benzamidine/trypsin and SC-558/cyclooxygenase 2. In both cases, the X-ray conformation has been found as the lowest free-energy pose, and the computed protein–ligand binding free energy is in good agreement with experiment. Furthermore, funnel metadynamics unveils important information about the binding process, such as the presence of alternative binding modes and the role of waters. The results achieved at an affordable computational cost make funnel metadynamics a valuable method for drug discovery and for dealing with a variety of problems in chemistry, physics, and material science. PMID:23553839

  5. Ultra-accurate collaborative information filtering via directed user similarity

    Science.gov (United States)

    Guo, Q.; Song, W.-J.; Liu, J.-G.

    2014-07-01

    A key challenge of collaborative filtering (CF) is how to obtain reliable and accurate results with the help of peers' recommendations. Since the similarities from small-degree users to large-degree users are larger than the ones in the opposite direction, the large-degree users' selections are recommended extensively by the traditional second-order CF algorithms. By considering the users' similarity direction and the second-order correlations to depress the influence of mainstream preferences, we present the directed second-order CF (HDCF) algorithm specifically to address the challenge of accuracy and diversity of the CF algorithm. The numerical results for two benchmark data sets, MovieLens and Netflix, show that the accuracy of the new algorithm outperforms the state-of-the-art CF algorithms. Compared with the CF algorithm based on random walks proposed by Liu et al. (Int. J. Mod. Phys. C, 20 (2009) 285), the average ranking score reaches 0.0767 and 0.0402, an enhancement of 27.3% and 19.1% for MovieLens and Netflix, respectively. In addition, the diversity, precision and recall are also enhanced greatly. Without relying on any context-specific information, tuning the similarity direction of CF algorithms can yield accurate and diverse recommendations. This work suggests that the user similarity direction is an important factor in improving personalized recommendation performance.

  6. Accurately controlled sequential self-folding structures by polystyrene film

    Science.gov (United States)

    Deng, Dongping; Yang, Yang; Chen, Yong; Lan, Xing; Tice, Jesse

    2017-08-01

    Four-dimensional (4D) printing overcomes the traditional fabrication limitations by designing heterogeneous materials to enable the printed structures to evolve over time (the fourth dimension) under external stimuli. Here, we present a simple 4D printing of self-folding structures that can be sequentially and accurately folded. When heated above their glass transition temperature, pre-strained polystyrene films shrink along the XY plane. In our process, silver ink traces printed on the film are used to provide heat stimuli by conducting current to trigger the self-folding behavior. The parameters affecting the folding process are studied and discussed. Sequential folding and accurately controlled folding angles are achieved by using printed ink traces and an angle lock design. Theoretical analyses are done to guide the design of the folding processes. Programmable structures such as a lock and a three-dimensional antenna are achieved to test the feasibility and potential applications of this method. These self-folding structures change their shapes after fabrication under controlled stimuli (electric current) and have potential applications in the fields of electronics, consumer devices, and robotics. Our design and fabrication method provides an easy way, using silver ink printed on polystyrene films, to 4D print self-folding structures for electrically induced sequential folding with angular control.

  7. A method of accurate determination of voltage stability margin

    Energy Technology Data Exchange (ETDEWEB)

    Wiszniewski, A.; Rebizant, W. [Wroclaw Univ. of Technology, Wroclaw (Poland); Klimek, A. [AREVA Transmission and Distribution, Stafford (United Kingdom)

    2008-07-01

    In the process of a developing power system disturbance, voltage instability at the receiving substations often contributes to deteriorating system stability, which eventually may lead to severe blackouts. The voltage stability margin at receiving substations may be used to determine measures to prevent voltage collapse, primarily by operating or blocking the transformer tap changing device, or by load shedding. The best measure of the stability margin is the actual load to source impedance ratio and its critical value, which is unity. This paper presented an accurate method of calculating the load to source impedance ratio, derived from the Thevenin equivalent circuit of the system, which led to calculation of the stability margin. The paper described the calculation of the load to source impedance ratio, including the supporting equations. The calculation was based on the very definition of voltage stability, which says that system stability is maintained as long as the change of power that follows an increase of admittance is positive. The testing of the stability margin assessment method was performed in a simulative way for a number of power network structures and simulation scenarios. Results of the simulations revealed that this method is accurate and stable for all possible events occurring downstream of the device location. 3 refs., 8 figs.
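
    To illustrate the impedance-ratio criterion, the toy example below estimates the Thevenin source impedance from two successive voltage/current phasor snapshots at the bus and forms the load-to-source impedance ratio, which is critical at unity. This is a simplified two-point estimate under assumed phasor values, not the paper's derivation.

```python
# Two-point Thevenin estimate and load-to-source impedance margin.
import numpy as np

E = 1.00 + 0.00j        # hypothetical Thevenin source voltage (p.u.)
Zs = 0.02 + 0.08j       # hypothetical Thevenin source impedance (p.u.)

def snapshot(Zload):
    I = E / (Zs + Zload)
    return E - Zs * I, I          # bus voltage and current phasors

# Two snapshots taken as the load changes slightly.
(V1, I1), (V2, I2) = snapshot(0.50 + 0.20j), snapshot(0.45 + 0.18j)

Zs_est = (V1 - V2) / (I2 - I1)    # estimated source impedance
Zload = V2 / I2                   # present load impedance
margin = abs(Zload) / abs(Zs_est) # stable while this stays above 1
print(round(margin, 2))
```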

  8. Learning fast accurate movements requires intact frontostriatal circuits

    Directory of Open Access Journals (Sweden)

    Britne Shabbott

    2013-11-01

    Full Text Available The basal ganglia are known to play a crucial role in movement execution, but their importance for motor skill learning remains unclear. Obstacles to our understanding include the lack of a universally accepted definition of motor skill learning (definition confound), and difficulties in distinguishing learning deficits from execution impairments (performance confound). We studied how healthy subjects and subjects with a basal ganglia disorder learn fast accurate reaching movements, and we addressed the definition and performance confounds by: (1) focusing on an operationally defined core element of motor skill learning (speed-accuracy learning), and (2) using normal variation in initial performance to separate movement execution impairment from motor learning abnormalities. We measured motor skill learning as performance improvement in a reaching task with a speed-accuracy trade-off. We compared the performance of subjects with Huntington's disease (HD), a neurodegenerative basal ganglia disorder, to that of premanifest carriers of the HD mutation and of control subjects. The initial movements of HD subjects were less skilled (slower and/or less accurate) than those of control subjects. To factor out these differences in initial execution, we modeled the relationship between learning and baseline performance in control subjects. Subjects with HD exhibited a clear learning impairment that was not explained by differences in initial performance. These results support a role for the basal ganglia in both movement execution and motor skill learning.

  9. Accurate Multisteps Traffic Flow Prediction Based on SVM

    Directory of Open Access Journals (Sweden)

    Zhang Mingheng

    2013-01-01

    Full Text Available Accurate traffic flow prediction is a prerequisite for realizing intelligent traffic control and guidance, and it is also an objective requirement for intelligent traffic management. Due to the strongly nonlinear, stochastic, time-varying characteristics of urban transport systems, artificial intelligence methods such as the support vector machine (SVM) are now receiving more and more attention in this research field. Compared with the traditional single-step prediction method, multi-step prediction can predict the traffic state trend over a certain period in the future. From the perspective of dynamic decision making, this is far more important than the current traffic condition alone. Thus, in this paper, an accurate multi-step traffic flow prediction model based on SVM was proposed, in which the input vectors comprised actual traffic volumes, and four different types of input vectors were compared to verify their prediction performance against each other. Finally, the model was verified with actual data in the empirical analysis phase, and the test results showed that the proposed SVM model had a good ability for traffic flow prediction, with the SVM-HPT model outperforming the other three models.
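
    A minimal sketch of direct multi-step prediction with support vector regression is shown below: one SVR per prediction horizon, trained on lagged volume vectors, using scikit-learn and synthetic data. The kernel and parameters are illustrative assumptions, not those of the paper's SVM-HPT model.

```python
# Direct multi-step traffic flow prediction: one SVR per horizon.
import numpy as np
from sklearn.svm import SVR

rng = np.random.default_rng(3)
volume = 100 + 30 * np.sin(np.arange(600) * 2 * np.pi / 96) \
             + rng.normal(0, 3, 600)           # synthetic daily pattern

lags, horizons = 8, 3
X = np.array([volume[i:i + lags]
              for i in range(len(volume) - lags - horizons)])
models = []
for h in range(1, horizons + 1):               # one model per step ahead
    y = volume[lags + h - 1: len(volume) - horizons + h - 1]
    models.append(SVR(kernel="rbf", C=10.0).fit(X, y))

latest = volume[-lags:].reshape(1, -1)
print([round(m.predict(latest)[0], 1) for m in models])  # next 3 steps
```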

  10. Accurate phylogenetic classification of DNA fragments based onsequence composition

    Energy Technology Data Exchange (ETDEWEB)

    McHardy, Alice C.; Garcia Martin, Hector; Tsirigos, Aristotelis; Hugenholtz, Philip; Rigoutsos, Isidore

    2006-05-01

    Metagenome studies have retrieved vast amounts of sequence out of a variety of environments, leading to novel discoveries and great insights into the uncultured microbial world. Except for very simple communities, diversity makes sequence assembly and analysis a very challenging problem. To understand the structure and function of microbial communities, a taxonomic characterization of the obtained sequence fragments is highly desirable, yet currently limited mostly to those sequences that contain phylogenetic marker genes. We show that for clades at the rank of domain down to genus, sequence composition allows the very accurate phylogenetic characterization of genomic sequence. We developed a composition-based classifier, PhyloPythia, for de novo phylogenetic sequence characterization and have trained it on a data set of 340 genomes. By extensive evaluation experiments we show that the method is accurate across all taxonomic ranks considered, even for sequences that originate from novel organisms and are as short as 1kb. Application to two metagenome datasets obtained from samples of phosphorus-removing sludge showed that the method allows the accurate classification at genus level of most sequence fragments from the dominant populations, while at the same time correctly characterizing even larger parts of the samples at higher taxonomic levels.
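
    The principle of composition-based classification can be sketched in a few lines: map each fragment to a normalized k-mer frequency vector and train a classifier per clade. The toy example below uses tetranucleotide frequencies and two synthetic "clades" differing only in GC content; PhyloPythia's actual feature construction and multiclass SVM setup are considerably more elaborate.

```python
# Composition-based fragment classification via tetranucleotide
# frequencies and a linear SVM (toy illustration of the principle).
import numpy as np
from itertools import product
from sklearn.svm import LinearSVC

KMERS = ["".join(p) for p in product("ACGT", repeat=4)]
INDEX = {k: i for i, k in enumerate(KMERS)}

def composition(seq):
    v = np.zeros(len(KMERS))
    for i in range(len(seq) - 3):
        v[INDEX[seq[i:i + 4]]] += 1
    return v / max(v.sum(), 1)   # normalized tetranucleotide frequencies

rng = np.random.default_rng(4)
def fragment(gc, n=1000):        # toy genomes differing in GC content
    p = [(1 - gc) / 2, gc / 2, gc / 2, (1 - gc) / 2]
    return "".join(rng.choice(list("ACGT"), n, p=p))

X = [composition(fragment(gc)) for gc in [0.35] * 50 + [0.65] * 50]
y = [0] * 50 + [1] * 50          # two stand-in clades
clf = LinearSVC().fit(X, y)
print(clf.predict([composition(fragment(0.6))]))  # -> clade 1
```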

  11. The economic value of accurate wind power forecasting to utilities

    Energy Technology Data Exchange (ETDEWEB)

    Watson, S J [Rutherford Appleton Lab., Oxfordshire (United Kingdom); Giebel, G; Joensen, A [Risoe National Lab., Dept. of Wind Energy and Atmospheric Physics, Roskilde (Denmark)

    1999-03-01

    With increasing penetrations of wind power, the need for accurate forecasting is becoming ever more important. Wind power is by its very nature intermittent. For utility schedulers this presents its own problems, particularly when the penetration of wind power capacity in a grid reaches a significant level (>20%). However, using accurate forecasts of wind power at wind farm sites, schedulers are able to plan the operation of conventional power capacity to accommodate the fluctuating demands of consumers and wind farm output. The results of a study to assess the value of forecasting at several potential wind farm sites in the UK and in the US state of Iowa using the Reading University/Rutherford Appleton Laboratory National Grid Model (NGM) are presented. The results are assessed for different types of wind power forecasting, namely: persistence, optimised numerical weather prediction or perfect forecasting. In particular, it will be shown how the NGM has been used to assess the value of numerical weather prediction forecasts from the Danish Meteorological Institute model, HIRLAM, and the US Nested Grid Model, which have been 'site tailored' by the use of the linearized flow model WAsP and by various Model Output Statistics (MOS) and autoregressive techniques. (au)

  12. Cerebral fat embolism: Use of MR spectroscopy for accurate diagnosis

    Directory of Open Access Journals (Sweden)

    Laxmi Kokatnur

    2015-01-01

    Full Text Available Cerebral fat embolism (CFE) is an uncommon but serious complication following orthopedic procedures. It usually presents with altered mental status, and can be a part of fat embolism syndrome (FES) if associated with cutaneous and respiratory manifestations. Because of the presence of other common factors affecting the mental status, particularly in the postoperative period, the diagnosis of CFE can be challenging. Magnetic resonance imaging (MRI) of the brain typically shows multiple lesions distributed predominantly in the subcortical region, which appear as hyperintense lesions on T2 and diffusion weighted images. Although the location offers a clue, the MRI findings are not specific for CFE. Watershed infarcts, hypoxic encephalopathy, disseminated infections, demyelinating disorders, and diffuse axonal injury can also show similar changes on MRI of the brain. The presence of fat in these hyperintense lesions, identified by MR spectroscopy as raised lipid peaks, helps in the accurate diagnosis of CFE. Normal brain tissue or conditions producing similar MRI changes will not show any lipid peak on MR spectroscopy. We present a case of CFE initially misdiagnosed as brain stem stroke based on clinical presentation and cranial computed tomography (CT) scan, where MR spectroscopy later elucidated the accurate diagnosis.

  13. Accurate location estimation of moving object In Wireless Sensor network

    Directory of Open Access Journals (Sweden)

    Vinay Bhaskar Semwal

    2011-12-01

    Full Text Available One of the central issues in wireless sensor networks is tracking the location of a moving object, which involves the overhead of saving data and requires accurate estimation of the target location under energy constraints. There is no built-in mechanism to control and maintain these data, and the wireless communication bandwidth is also very limited. Fields using this technique include flood and typhoon detection, forest fire detection, and temperature and humidity monitoring, where the gathered information can be fed back to central air conditioning and ventilation systems. In this research paper, we propose a protocol based on a prediction-based adaptive algorithm that reduces the number of sensor nodes required through accurate estimation of the target location. We show that our tracking method performs well in terms of energy saving regardless of the mobility pattern of the mobile target, and that it extends the lifetime of the network with fewer sensor nodes. Once a new object is detected, a mobile agent will be initiated to track the roaming path of the object.

  14. A practical method for accurate quantification of large fault trees

    International Nuclear Information System (INIS)

    Choi, Jong Soo; Cho, Nam Zin

    2007-01-01

    This paper describes a practical method to accurately quantify top event probability and importance measures from incomplete minimal cut sets (MCS) of a large fault tree. The MCS-based fault tree method is extensively used in probabilistic safety assessments. Several sources of uncertainties exist in MCS-based fault tree analysis. The paper is focused on quantification of the following two sources of uncertainties: (1) the truncation neglecting low-probability cut sets and (2) the approximation in quantifying MCSs. The method proposed in this paper is based on a Monte Carlo simulation technique to estimate the probability of the discarded MCSs and the sum of disjoint products (SDP) approach complemented by the correction factor approach (CFA). The method provides the capability to accurately quantify the two uncertainties and estimate the top event probability and importance measures of large coherent fault trees. The proposed fault tree quantification method has been implemented in the CUTREE code package and is tested on two example fault trees
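
    The Monte Carlo idea at the heart of the method can be sketched simply: sample basic-event states and count trials in which at least one minimal cut set is fully failed. The example below uses hypothetical event probabilities and cut sets; the paper's SDP quantification and correction factor approach for truncated cut sets are not reproduced.

```python
# Monte Carlo estimate of top event probability from minimal cut sets.
import numpy as np

# Hypothetical basic-event failure probabilities and minimal cut sets
# (lists of indices into the probability vector).
p = np.array([0.01, 0.02, 0.005, 0.03, 0.015])
cut_sets = [[0, 1], [2, 3], [1, 4], [0, 3, 4]]

rng = np.random.default_rng(5)
n_trials = 1_000_000
states = rng.random((n_trials, len(p))) < p      # True = event failed
top = np.zeros(n_trials, dtype=bool)
for cs in cut_sets:
    top |= states[:, cs].all(axis=1)             # any cut set fully failed

print(top.mean())  # MC estimate of top event probability
```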

  15. Quality metric for accurate overlay control in <20nm nodes

    Science.gov (United States)

    Klein, Dana; Amit, Eran; Cohen, Guy; Amir, Nuriel; Har-Zvi, Michael; Huang, Chin-Chou Kevin; Karur-Shanmugam, Ramkumar; Pierson, Bill; Kato, Cindy; Kurita, Hiroyuki

    2013-04-01

    The semiconductor industry is moving toward 20nm nodes and below. As the Overlay (OVL) budget is getting tighter at these advanced nodes, accuracy in each nanometer of OVL error is critical. When process owners select OVL targets and methods for their process, they must do it wisely; otherwise the reported OVL could be inaccurate, resulting in yield loss. The same problem can occur when the target sampling map is chosen incorrectly, consisting of asymmetric targets that will cause biased correctable terms and a corrupted wafer. Total measurement uncertainty (TMU) is the main parameter that process owners use when choosing an OVL target per layer. Going towards the 20nm nodes and below, TMU will not be enough for accurate OVL control. KLA-Tencor has introduced a quality score named 'Qmerit' for its imaging-based OVL (IBO) targets, which is obtained on-the-fly for each OVL measurement point in X & Y. This Qmerit score will enable the process owners to select compatible targets which provide accurate OVL values for their process and thereby improve their yield. Together with K-T Analyzer's ability to detect the symmetric targets across the wafer and within the field, the Archer tools will continue to provide an independent, reliable measurement of OVL error into the next advanced nodes, enabling fabs to manufacture devices that meet their tight OVL error budgets.

  16. Machine learning of parameters for accurate semiempirical quantum chemical calculations

    International Nuclear Information System (INIS)

    Dral, Pavlo O.; Lilienfeld, O. Anatole von; Thiel, Walter

    2015-01-01

    We investigate possible improvements in the accuracy of semiempirical quantum chemistry (SQC) methods through the use of machine learning (ML) models for the parameters. For a given class of compounds, ML techniques require sufficiently large training sets to develop ML models that can be used for adapting SQC parameters to reflect changes in molecular composition and geometry. The ML-SQC approach allows the automatic tuning of SQC parameters for individual molecules, thereby improving the accuracy without deteriorating transferability to molecules with molecular descriptors very different from those in the training set. The performance of this approach is demonstrated for the semiempirical OM2 method using a set of 6095 constitutional isomers C7H10O2, for which accurate ab initio atomization enthalpies are available. The ML-OM2 results show improved average accuracy and a much reduced error range compared with those of standard OM2 results, with mean absolute errors in atomization enthalpies dropping from 6.3 to 1.7 kcal/mol. They are also found to be superior to the results from specific OM2 reparameterizations (rOM2) for the same set of isomers. The ML-SQC approach thus holds promise for fast and reasonably accurate high-throughput screening of materials and molecules

  17. AMID: Accurate Magnetic Indoor Localization Using Deep Learning

    Directory of Open Access Journals (Sweden)

    Namkyoung Lee

    2018-05-01

    Full Text Available Geomagnetic-based indoor positioning has drawn great attention from academia and industry due to its advantage of being operable without infrastructure support and its reliable signal characteristics. However, it must overcome the problems of ambiguity that originate with the nature of geomagnetic data. Most studies manage this problem by incorporating particle filters along with inertial sensors. However, they cannot yield reliable positioning results because the inertial sensors in smartphones cannot precisely predict the movement of users. There have been attempts to recognize the magnetic sequence pattern, but these attempts are proven only in a one-dimensional space, because magnetic intensity fluctuates severely with even a slight change of location. This paper proposes accurate magnetic indoor localization using deep learning (AMID), an indoor positioning system that recognizes magnetic sequence patterns using a deep neural network. Features are extracted from magnetic sequences, and then the deep neural network is used for classifying the sequences by patterns that are generated by nearby magnetic landmarks. Locations are estimated by detecting the landmarks. AMID demonstrated the proposed features and deep learning to be an outstanding classifier, revealing the potential of accurate magnetic positioning with smartphone sensors alone. The landmark detection accuracy was over 80% in a two-dimensional environment.

  18. Atomic spectroscopy and highly accurate measurement: determination of fundamental constants

    International Nuclear Information System (INIS)

    Schwob, C.

    2006-12-01

    This document reviews the theoretical and experimental achievements of the author concerning highly accurate atomic spectroscopy applied to the determination of fundamental constants. A pure optical frequency measurement of the 2S-12D 2-photon transitions in atomic hydrogen and deuterium has been performed. The experimental setup is described as well as the data analysis. Optimized values for the Rydberg constant and Lamb shifts have been deduced (R = 109737.31568516(84) cm⁻¹). An experiment devoted to the determination of the fine structure constant with an aimed relative uncertainty of 10⁻⁹ began in 1999. This experiment is based on the fact that Bloch oscillations in a frequency-chirped optical lattice are a powerful tool to coherently transfer many photon momenta to the atoms. We have used this method to accurately measure the ratio h/m(Rb). The measured value of the fine structure constant is α⁻¹ = 137.03599884(91) with a relative uncertainty of 6.7×10⁻⁹. The future and perspectives of this experiment are presented. This document presented before an academic board will allow its author to manage research work and particularly to tutor thesis students. (A.C.)
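
    For context, the measured h/m(Rb) ratio is linked to the fine structure constant through the standard relation below, where R∞ is the Rydberg constant, A_r denotes relative atomic masses, and c is the speed of light; the quantities other than h/m(Rb) come from independent measurements.

```latex
% Standard relation used to extract alpha from a measured h/m(Rb):
\[
\alpha^{2} \;=\; \frac{2 R_{\infty}}{c}\,
  \frac{A_r(\mathrm{Rb})}{A_r(\mathrm{e})}\,
  \frac{h}{m_{\mathrm{Rb}}}
\]
```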

  19. An Accurate and Dynamic Computer Graphics Muscle Model

    Science.gov (United States)

    Levine, David Asher

    1997-01-01

    A computer-based musculoskeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculoskeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculoskeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  20. Accurate Sample Time Reconstruction of Inertial FIFO Data

    Directory of Open Access Journals (Sweden)

    Sebastian Stieber

    2017-12-01

    Full Text Available In the context of modern cyber-physical systems, the accuracy of underlying sensor data plays an increasingly important role in sensor data fusion and feature extraction. The raw events of multiple sensors have to be aligned in time to enable high quality sensor fusion results. However, the growing number of simultaneously connected sensor devices makes energy-saving data acquisition and processing more and more difficult. Hence, most modern sensors offer a first-in-first-out (FIFO) interface to store multiple data samples and to relax timing constraints when handling multiple sensor devices. However, using the FIFO interface increases the negative influence of individual clock drifts—introduced by fabrication inaccuracies, temperature changes and wear-out effects—on the sampling data reconstruction. Furthermore, additional timing offset errors due to communication and software latencies increase with a growing number of sensor devices. In this article, we present an approach for accurate sample time reconstruction independent of the actual clock drift with the help of an internal sensor timer. Such timers are already available in modern sensors manufactured in micro-electromechanical systems (MEMS) technology. The presented approach focuses on calculating accurate time stamps using the sensor FIFO interface in a forward-only processing manner as a robust and energy saving solution. The proposed algorithm is able to lower the overall standard deviation of reconstructed sampling periods below 40 µs, while run-time savings of up to 42% are achieved, compared to single sample acquisition.
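
    A minimal sketch of the reconstruction idea, under simplifying assumptions (a linear clock-drift model, no latency outliers): regress the sensor's internal timer value at each FIFO drain against the host clock, then timestamp every sample from the fitted mapping instead of assuming the nominal sampling period.

```python
# Drift-aware sample time reconstruction from FIFO drain events.
import numpy as np

rng = np.random.default_rng(6)
nominal_dt = 0.01                  # 100 Hz nominal sampling period
drift = 1.0003                     # assumed: sensor clock runs 0.03% fast
n_batches, batch = 50, 25

# Simulated FIFO drains: sensor sample count and noisy host receive time.
ticks = np.arange(n_batches) * batch
host_t = ticks * nominal_dt / drift + rng.normal(0, 2e-4, n_batches)

# Fit host_time = a * tick + b, then timestamp every individual sample.
a, b = np.polyfit(ticks, host_t, 1)
timestamps = a * np.arange(n_batches * batch) + b

print(1 / a)                       # recovered effective rate (~100.03 Hz)
```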

  1. Quantitative densitometry of neurotransmitter receptors

    International Nuclear Information System (INIS)

    Rainbow, T.C.; Bleisch, W.V.; Biegon, A.; McEwen, B.S.

    1982-01-01

    An autoradiographic procedure is described that allows the quantitative measurement of neurotransmitter receptors by optical density readings. Frozen brain sections are labeled in vitro with [³H]ligands under conditions that maximize specific binding to neurotransmitter receptors. The labeled sections are then placed against the ³H-sensitive LKB Ultrofilm to produce the autoradiograms. These autoradiograms resemble those produced by [¹⁴C]deoxyglucose autoradiography and are suitable for quantitative analysis with a densitometer. Muscarinic cholinergic receptors in rat and zebra finch brain and 5-HT receptors in rat brain were visualized by this method. When the proper combination of ligand concentration and exposure time are used, the method provides quantitative information about the amount and affinity of neurotransmitter receptors in brain sections. This was established by comparisons of densitometric readings with parallel measurements made by scintillation counting of sections. (Auth.)
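
    The calibration step implied here, converting film optical density to bound ligand via co-exposed standards, can be sketched as a simple interpolation along the standards curve. The standard values below are hypothetical, for illustration only.

```python
# Optical density -> receptor binding via a co-exposed standards curve.
import numpy as np

# Co-exposed tritium standards: optical density vs. known activity
# (e.g., nCi/mg tissue equivalent; hypothetical values).
od_std  = np.array([0.05, 0.12, 0.25, 0.48, 0.80, 1.10])
act_std = np.array([0.5,  2.0,  5.0, 12.0, 25.0, 40.0])

def od_to_binding(od):
    # Monotonic interpolation along the standards curve; readings
    # beyond the standards range are clipped rather than extrapolated.
    return np.interp(od, od_std, act_std)

region_od = np.array([0.30, 0.65, 0.95])   # densitometer readings
print(od_to_binding(region_od))            # receptor binding estimates
```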

  2. Energy Education: The Quantitative Voice

    Science.gov (United States)

    Wolfson, Richard

    2010-02-01

    A serious study of energy use and its consequences has to be quantitative. It makes little sense to push your favorite renewable energy source if it can't provide enough energy to make a dent in humankind's prodigious energy consumption. Conversely, it makes no sense to dismiss alternatives, solar in particular, that supply Earth with energy at some 10,000 times our human energy consumption rate. But being quantitative, especially with nonscience students or the general public, is a delicate business. This talk draws on the speaker's experience presenting energy issues to diverse audiences through single lectures, entire courses, and a textbook. The emphasis is on developing a quick, ``back-of-the-envelope'' approach to quantitative understanding of energy issues.
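
    The 10,000× figure is itself a nice back-of-the-envelope exercise; the sketch below checks it using the solar constant, Earth's cross-sectional area, and a commonly cited round figure of about 18 TW for global primary energy consumption (the exact consumption figure is an assumption).

```python
# Back-of-the-envelope check of the ~10,000x solar-supply claim.
import math

solar_constant = 1361.0            # W/m^2, top-of-atmosphere irradiance
earth_radius = 6.371e6             # m
human_rate = 18e12                 # W, rough global primary consumption

intercepted = solar_constant * math.pi * earth_radius**2   # ~1.7e17 W
print(intercepted / human_rate)    # ~1e4: the "10,000 times" figure
```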

  3. Tomosynthesis can facilitate accurate measurement of joint space width under the condition of the oblique incidence of X-rays in patients with rheumatoid arthritis.

    Science.gov (United States)

    Ono, Yohei; Kashihara, Rina; Yasojima, Nobutoshi; Kasahara, Hideki; Shimizu, Yuka; Tamura, Kenichi; Tsutsumi, Kaori; Sutherland, Kenneth; Koike, Takao; Kamishima, Tamotsu

    2016-06-01

    Accurate evaluation of joint space width (JSW) is important in the assessment of rheumatoid arthritis (RA). In clinical radiography of bilateral hands, the oblique incidence of X-rays is unavoidable, which may cause perceptional or measurement error of JSW. The objective of this study was to examine whether tomosynthesis, a recently developed modality, can facilitate a more accurate evaluation of JSW than radiography under the condition of oblique incidence of X-rays. We investigated quantitative errors derived from the oblique incidence of X-rays by imaging phantoms simulating various finger joint spaces using radiographs and tomosynthesis images. We then compared the qualitative results of the modified total Sharp score of a total of 320 joints from 20 patients with RA between these modalities. A quantitative error was prominent when the location of the phantom was shifted along the JSW direction. Modified total Sharp scores of tomosynthesis images were significantly higher than those of radiography; that is to say, JSW was regarded as narrower in tomosynthesis than in radiography when finger joints were located where the oblique incidence of X-rays is expected in the JSW direction. Tomosynthesis can facilitate accurate evaluation of JSW in finger joints of patients with RA, even with oblique incidence of X-rays. Accurate evaluation of JSW is necessary for the management of patients with RA. Through phantom and clinical studies, we demonstrate that tomosynthesis may achieve more accurate evaluation of JSW.

  4. Quantitative nature of overexpression experiments

    Science.gov (United States)

    Moriya, Hisao

    2015-01-01

    Overexpression experiments are sometimes considered as qualitative experiments designed to identify novel proteins and study their function. However, in order to draw conclusions regarding protein overexpression through association analyses using large-scale biological data sets, we need to recognize the quantitative nature of overexpression experiments. Here I discuss the quantitative features of two different types of overexpression experiment: absolute and relative. I also introduce the four primary mechanisms involved in growth defects caused by protein overexpression: resource overload, stoichiometric imbalance, promiscuous interactions, and pathway modulation associated with the degree of overexpression. PMID:26543202

  5. The usefulness of 3D quantitative analysis with using MRI for measuring osteonecrosis of the femoral head

    International Nuclear Information System (INIS)

    Hwang, Ji Young; Lee, Sun Wha; Park, Youn Soo

    2006-01-01

    We wanted to evaluate the usefulness of MRI 3D quantitative analysis for measuring osteonecrosis of the femoral head, in comparison with MRI 2D quantitative analysis and quantitative analysis of the specimen. Over 3 months at our hospital, 14 femoral head specimens with osteonecrosis were obtained after total hip arthroplasty. The patients' preoperative MRIs were retrospectively reviewed for quantitative analysis of the size of the necrosis. Each necrotic fraction of the femoral head was measured by 2D quantitative analysis using mid-coronal and mid-sagittal MRIs, and by 3D quantitative analysis using serial continuous coronal MRIs and 3D reconstruction software. The necrotic fraction of the specimen was physically measured by the fluid displacement method. The necrotic fraction according to MRI 2D or 3D quantitative analysis was compared with that of the specimen by using Spearman's correlation test. On the correlative analysis, the necrotic fraction by MRI 2D quantitative analysis and quantitative analysis of the specimen showed moderate correlation (r = 0.657); on the other hand, the necrotic fraction by MRI 3D quantitative analysis and quantitative analysis of the specimen demonstrated a strong correlation (r = 0.952) (p < 0.05). MRI 3D quantitative analysis was more accurate than 2D quantitative analysis using MRI for measuring osteonecrosis of the femoral head. Therefore, it may be useful for predicting the clinical outcome and deciding the proper treatment option.
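
    The comparison described above reduces to a rank correlation between two sets of 14 necrotic fractions. A sketch of that computation with SciPy; the numbers below are illustrative stand-ins, since the study's raw measurements are not reproduced here:

        from scipy.stats import spearmanr

        # Illustrative values only -- not the paper's data.
        specimen_fraction = [12, 18, 25, 31, 8, 44, 27, 15, 36, 22, 40, 10, 29, 33]
        mri_3d_fraction   = [13, 17, 26, 30, 9, 45, 25, 16, 35, 23, 41, 11, 28, 34]

        rho, p = spearmanr(specimen_fraction, mri_3d_fraction)
        print(f"rho = {rho:.3f}, p = {p:.4g}")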

  6. The KFM, A Homemade Yet Accurate and Dependable Fallout Meter

    Energy Technology Data Exchange (ETDEWEB)

    Kearny, C.H.

    2001-11-20

    The KFM is a homemade fallout meter that can be made using only materials, tools, and skills found in millions of American homes. It is an accurate and dependable electroscope-capacitor. The KFM, in conjunction with its attached table and a watch, is designed for use as a rate meter. Its attached table relates observed differences in the separations of its two leaves (before and after exposures at the listed time intervals) to the dose rates during exposures of these time intervals. In this manner dose rates from 30 mR/hr up to 43 R/hr can be determined with an accuracy of ±25%. A KFM can be charged with any one of the three expedient electrostatic charging devices described. Due to the use of anhydrite (made by heating gypsum from wallboard) inside a KFM and the expedient ''dry-bucket'' in which it can be charged when the air is very humid, this instrument can always be charged and used to obtain accurate measurements of gamma radiation no matter how high the relative humidity. The heart of this report is the step-by-step illustrated instructions for making and using a KFM. These instructions have been improved after each successive field test. The majority of the untrained test families, adequately motivated by cash bonuses offered for success and guided only by these written instructions, have succeeded in making and using a KFM. NOTE: ''The KFM, A Homemade Yet Accurate and Dependable Fallout Meter'' was published as an Oak Ridge National Laboratory report in 1979. Some of the materials originally suggested for suspending the leaves of the Kearny Fallout Meter (KFM) are no longer available. Because of changes in the manufacturing process, other materials (e.g., sewing thread, unwaxed dental floss) may not have the insulating capability to work properly. Oak Ridge National Laboratory has not tested any of the suggestions provided in the preface of the report, but they have been used by other groups. When using these

  7. Supramolecular assembly affording a ratiometric two-photon fluorescent nanoprobe for quantitative detection and bioimaging.

    Science.gov (United States)

    Wang, Peng; Zhang, Cheng; Liu, Hong-Wen; Xiong, Mengyi; Yin, Sheng-Yan; Yang, Yue; Hu, Xiao-Xiao; Yin, Xia; Zhang, Xiao-Bing; Tan, Weihong

    2017-12-01

    Fluorescence quantitative analyses for vital biomolecules are in great demand in biomedical science owing to their unique detection advantages with rapid, sensitive, non-damaging and specific identification. However, available fluorescence strategies for quantitative detection are usually hard to design and achieve. Inspired by supramolecular chemistry, a two-photon-excited fluorescent supramolecular nanoplatform (TPSNP) was designed for quantitative analysis with three parts: host molecules (β-CD polymers), a guest fluorophore of sensing probes (Np-Ad) and a guest internal reference (NpRh-Ad). In this strategy, the TPSNP possesses the merits of (i) improved water-solubility and biocompatibility; (ii) increased tissue penetration depth for bioimaging by two-photon excitation; (iii) quantitative and tunable assembly of functional guest molecules to obtain optimized detection conditions; (iv) a common approach to avoid the limitation of complicated design by adjustment of sensing probes; and (v) accurate quantitative analysis by virtue of reference molecules. As a proof-of-concept, we utilized the two-photon fluorescent probe NHS-Ad-based TPSNP-1 to realize accurate quantitative analysis of hydrogen sulfide (H₂S), with high sensitivity and good selectivity in live cells, deep tissues and ex vivo-dissected organs, suggesting that the TPSNP is an ideal quantitative indicator for clinical samples. What's more, TPSNP will pave the way for designing and preparing advanced supramolecular sensors for biosensing and biomedicine.

  8. A novel dual energy method for enhanced quantitative computed tomography

    Science.gov (United States)

    Emami, A.; Ghadiri, H.; Rahmim, A.; Ay, M. R.

    2018-01-01

    Accurate assessment of bone mineral density (BMD) is critically important in clinical practice, and conveniently enabled via quantitative computed tomography (QCT). Meanwhile, dual-energy QCT (DEQCT) enables enhanced detection of small changes in BMD relative to single-energy QCT (SEQCT). In the present study, we aimed to investigate the accuracy of QCT methods, with particular emphasis on a new dual-energy approach, in comparison to single-energy and conventional dual-energy techniques. We used a sinogram-based analytical CT simulator to model the complete chain of CT data acquisitions, and assessed performance of SEQCT and different DEQCT techniques in quantification of BMD. We demonstrate a 120% reduction in error when using a proposed dual-energy Simultaneous Equation by Constrained Least-squares method, enabling more accurate bone mineral measurements.

  9. The importance of accurate meteorological input fields and accurate planetary boundary layer parameterizations, tested against ETEX-1

    International Nuclear Information System (INIS)

    Brandt, J.; Ebel, A.; Elbern, H.; Jakobs, H.; Memmesheimer, M.; Mikkelsen, T.; Thykier-Nielsen, S.; Zlatev, Z.

    1997-01-01

    Atmospheric transport of air pollutants is, in principle, a well-understood process. If information about the state of the atmosphere were given in all details (infinitely accurate information about wind speed, etc.) and infinitely fast computers were available, then the advection equation could in principle be solved exactly. This is, however, not the case: discretization of the equations and input data introduces some uncertainties and errors in the results. Therefore many different issues have to be carefully studied in order to diminish these uncertainties and to develop an accurate transport model. Some of these are, e.g., the numerical treatment of the transport equation, the accuracy of the mean meteorological input fields, and parameterizations of sub-grid scale phenomena (e.g. parameterizations of the second- and higher-order turbulence terms in order to reach closure in the perturbation equation). A tracer model for studying transport and dispersion of air pollution caused by a single but strong source is under development. The model simulations from the first ETEX release illustrate the differences caused by using various analyzed fields directly in the tracer model or using a meteorological driver. Also different parameterizations of the mixing height and the vertical exchange are compared. (author)

  10. Accurate frequency measurements on gyrotrons using a ''gyro-radiometer''

    International Nuclear Information System (INIS)

    Rebuffi, L.

    1986-08-01

    Using a heterodyne system called the ''gyro-radiometer'', accurate frequency measurements have been carried out on VARIAN 60 GHz gyrotrons. By changing the principal tuning parameters of a gyrotron, we have detected frequency variations of up to 100 MHz, frequency jumps of ∼40 MHz, and smaller jumps (∼10 MHz) when mismatches in the transmission line were present. An FWHM bandwidth of 300 kHz, parasitic frequencies and frequency drift during 100 ms pulses have also been observed. An efficient method to find a stable, high-power, long-pulse working point of a gyrotron loaded by a transmission line has been derived. In general, for any power value it is possible to find stable working conditions by tuning the principal parameters of the tube to a maximum of the emitted frequency.

  11. Phase rainbow refractometry for accurate droplet variation characterization.

    Science.gov (United States)

    Wu, Yingchun; Promvongsa, Jantarat; Saengkaew, Sawitree; Wu, Xuecheng; Chen, Jia; Gréhan, Gérard

    2016-10-15

    We developed a one-dimensional phase rainbow refractometer for the accurate trans-dimensional measurements of droplet size on the micrometer scale as well as the tiny droplet diameter variations at the nanoscale. The dependence of the phase shift of the rainbow ripple structures on the droplet variations is revealed. The phase-shifting rainbow image is recorded by a telecentric one-dimensional rainbow imaging system. Experiments on the evaporating monodispersed droplet stream show that the phase rainbow refractometer can measure the tiny droplet diameter changes down to tens of nanometers. This one-dimensional phase rainbow refractometer is capable of measuring the droplet refractive index and diameter, as well as variations.

  12. Vessel calibration for accurate material accountancy at RRP

    International Nuclear Information System (INIS)

    Yanagisawa, Yuu; Ono, Sawako; Iwamoto, Tomonori

    2004-01-01

    RRP has a reprocessing capacity of 800 t·Upr per year and handles a large amount of nuclear material in solution. A large-scale plant like RRP requires an accurate materials accountancy system, so high-precision initial vessel calibration before operation is very important. In order to obtain the calibration curve, each incremental volume must be accurately known as a function of liquid height. We therefore performed at least two or three runs with water for each vessel calibration, and the calibration data were evaluated carefully. We calibrated 210 vessels overall; the calibration of 81 vessels, including the IAT and OAT, was held in the presence of JSGO and IAEA inspectors, taking into account their importance for material accountancy. This paper describes the outline of the initial vessel calibration and the calibration results, based on back-pressure measurement with dip tubes. (author)

  13. Accurate EPR radiosensitivity calibration using small sample masses

    Science.gov (United States)

    Hayes, R. B.; Haskell, E. H.; Barrus, J. K.; Kenner, G. H.; Romanyukha, A. A.

    2000-03-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed.

  14. Accurate EPR radiosensitivity calibration using small sample masses

    International Nuclear Information System (INIS)

    Hayes, R.B.; Haskell, E.H.; Barrus, J.K.; Kenner, G.H.; Romanyukha, A.A.

    2000-01-01

    We demonstrate a procedure in retrospective EPR dosimetry which allows for virtually nondestructive sample evaluation in terms of sample irradiations. For this procedure to work, it is shown that corrections must be made for cavity response characteristics when using variable mass samples. Likewise, methods are employed to correct for empty tube signals, sample anisotropy and frequency drift while considering the effects of dose distribution optimization. A demonstration of the method's utility is given by comparing sample portions evaluated using both the described methodology and standard full sample additive dose techniques. The samples used in this study are tooth enamel from teeth removed during routine dental care. We show that by making all the recommended corrections, very small masses can be both accurately measured and correlated with measurements of other samples. Some issues relating to dose distribution optimization are also addressed

  15. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
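
    In symbols, the probabilistic rating of candidate solutions described above is the usual posterior over source configurations s given the measured radiation fields m; this is a generic statement of Bayes' theorem, not the project's specific likelihood model:

        $$ P(s \mid m) = \frac{P(m \mid s)\,P(s)}{\int P(m \mid s')\,P(s')\,\mathrm{d}s'} $$

    The forward transport model supplies the likelihood P(m | s), and the under-determination of the inverse problem shows up as a posterior with appreciable mass on several configurations.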

  16. Zadoff-Chu coded ultrasonic signal for accurate range estimation

    KAUST Repository

    AlSharif, Mohammed H.

    2017-11-02

    This paper presents a new adaptation of Zadoff-Chu sequences for the purpose of range estimation and movement tracking. The proposed method uses Zadoff-Chu sequences utilizing a wideband ultrasonic signal to estimate the range between two devices with very high accuracy and high update rate. This range estimation method is based on time of flight (TOF) estimation using cyclic cross correlation. The system was experimentally evaluated under different noise levels and multi-user interference scenarios. For a single user, the results show less than 7 mm error for 90% of range estimates in a typical indoor environment. Under the interference from three other users, the 90% error was less than 25 mm. The system provides high estimation update rate allowing accurate tracking of objects moving with high speed.
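
    A minimal sketch of the two ingredients named in the abstract, a Zadoff-Chu sequence and TOF estimation by cyclic cross-correlation; the function names and parameters are illustrative, not the authors' implementation:

        import numpy as np

        def zadoff_chu(u, N):
            # Root-u Zadoff-Chu sequence of odd length N: constant amplitude,
            # ideal periodic autocorrelation.
            n = np.arange(N)
            return np.exp(-1j * np.pi * u * n * (n + 1) / N)

        def estimate_tof(received, reference, fs):
            # Cyclic cross-correlation via the FFT (both inputs are assumed
            # to span one cyclic period); the peak lag over the sampling
            # rate fs is the time-of-flight estimate.
            corr = np.fft.ifft(np.fft.fft(received) * np.conj(np.fft.fft(reference)))
            return np.argmax(np.abs(corr)) / fs

    The range then follows by multiplying the TOF by the speed of sound (roughly 343 m/s in air).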

  17. Fast and accurate determination of modularity and its effect size

    International Nuclear Information System (INIS)

    Treviño, Santiago III; Nyberg, Amy; Bassler, Kevin E; Del Genio, Charo I

    2015-01-01

    We present a fast spectral algorithm for community detection in complex networks. Our method searches for the partition with the maximum value of the modularity via the interplay of several refinement steps that include both agglomeration and division. We validate the accuracy of the algorithm by applying it to several real-world benchmark networks. On all these, our algorithm performs as well or better than any other known polynomial scheme. This allows us to extensively study the modularity distribution in ensembles of Erdős–Rényi networks, producing theoretical predictions for means and variances inclusive of finite-size corrections. Our work provides a way to accurately estimate the effect size of modularity, providing a z-score measure of it and enabling a more informative comparison of networks with different numbers of nodes and links. (paper)
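
    For reference, the modularity being maximized and the z-score effect-size measure mentioned above take the standard forms (the paper's finite-size corrections are not reproduced here):

        $$ Q = \frac{1}{2m}\sum_{ij}\left(A_{ij} - \frac{k_i k_j}{2m}\right)\delta(c_i, c_j), \qquad z = \frac{Q - \langle Q \rangle_{\mathrm{ER}}}{\sigma_{Q,\mathrm{ER}}} $$

    where $A_{ij}$ is the adjacency matrix, $k_i$ the degree of node $i$, $m$ the number of links, $c_i$ the community of node $i$, and the ensemble mean and standard deviation are taken over Erdős–Rényi networks with matching numbers of nodes and links.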

  18. Fast and accurate automated cell boundary determination for fluorescence microscopy

    Science.gov (United States)

    Arce, Stephen Hugo; Wu, Pei-Hsun; Tseng, Yiider

    2013-07-01

    Detailed measurement of cell phenotype information from digital fluorescence images has the potential to greatly advance biomedicine in various disciplines such as patient diagnostics or drug screening. Yet, the complexity of cell conformations presents a major barrier preventing effective determination of cell boundaries, and introduces measurement error that propagates throughout subsequent assessment of cellular parameters and statistical analysis. State-of-the-art image segmentation techniques that require user-interaction, prolonged computation time and specialized training cannot adequately provide the support for high content platforms, which often sacrifice resolution to foster the speedy collection of massive amounts of cellular data. This work introduces a strategy that allows us to rapidly obtain accurate cell boundaries from digital fluorescent images in an automated format. Hence, this new method has broad applicability to promote biotechnology.

  19. Stereotypes of Age Differences in Personality Traits: Universal and Accurate?

    Science.gov (United States)

    Chan, Wayne; McCrae, Robert R.; De Fruyt, Filip; Jussim, Lee; Löckenhoff, Corinna E.; De Bolle, Marleen; Costa, Paul T.; Sutin, Angelina R.; Realo, Anu; Allik, Jüri; Nakazato, Katsuharu; Shimonaka, Yoshiko; Hřebíčková, Martina; Kourilova, Sylvie; Yik, Michelle; Ficková, Emília; Brunner-Sciarra, Marina; de Figueroa, Nora Leibovich; Schmidt, Vanina; Ahn, Chang-kyu; Ahn, Hyun-nie; Aguilar-Vafaie, Maria E.; Siuta, Jerzy; Szmigielska, Barbara; Cain, Thomas R.; Crawford, Jarret T.; Mastor, Khairul Anwar; Rolland, Jean-Pierre; Nansubuga, Florence; Miramontez, Daniel R.; Benet-Martínez, Veronica; Rossier, Jérôme; Bratko, Denis; Halberstadt, Jamin; Yamaguchi, Mami; Knežević, Goran; Martin, Thomas A.; Gheorghiu, Mirona; Smith, Peter B.; Barbaranelli, Claudio; Wang, Lei; Shakespeare-Finch, Jane; Lima, Margarida P.; Klinkosz, Waldemar; Sekowski, Andrzej; Alcalay, Lidia; Simonetti, Franco; Avdeyeva, Tatyana V.; Pramila, V. S.; Terracciano, Antonio

    2012-01-01

    Age trajectories for personality traits are known to be similar across cultures. To address whether stereotypes of age groups reflect these age-related changes in personality, we asked participants in 26 countries (N = 3,323) to rate typical adolescents, adults, and old persons in their own country. Raters across nations tended to share similar beliefs about different age groups; adolescents were seen as impulsive, rebellious, undisciplined, preferring excitement and novelty, whereas old people were consistently considered lower on impulsivity, activity, antagonism, and Openness. These consensual age group stereotypes correlated strongly with published age differences on the five major dimensions of personality and most of 30 specific traits, using as criteria of accuracy both self-reports and observer ratings, different survey methodologies, and data from up to 50 nations. However, personal stereotypes were considerably less accurate, and consensual stereotypes tended to exaggerate differences across age groups. PMID:23088227

  20. Capsule-odometer: a concept to improve accurate lesion localisation.

    Science.gov (United States)

    Karargyris, Alexandros; Koulaouzidis, Anastasios

    2013-09-21

    In order to improve lesion localisation in small-bowel capsule endoscopy, a modified capsule design has been proposed incorporating localisation and - in theory - stabilization capabilities. The proposed design consists of a capsule fitted with protruding wheels attached to a spring-mechanism. This would act as a miniature odometer, leading to more accurate lesion localization information in relation to the onset of the investigation (spring expansion e.g., pyloric opening). Furthermore, this capsule could allow stabilization of the recorded video as any erratic, non-forward movement through the gut is minimised. Three-dimensional (3-D) printing technology was used to build a capsule prototype. Thereafter, miniature wheels were also 3-D printed and mounted on a spring which was attached to conventional capsule endoscopes for the purpose of this proof-of-concept experiment. In vitro and ex vivo experiments with porcine small-bowel are presented herein. Further experiments have been scheduled.

  1. Effective and accurate processing and inversion of airborne electromagnetic data

    DEFF Research Database (Denmark)

    Auken, Esben; Christiansen, Anders Vest; Andersen, Kristoffer Rønne

    Airborne electromagnetic (AEM) data are used throughout the world for mapping of mineral targets and groundwater resources. The development of technology and inversion algorithms has been tremendous over the last decade, and the results from these surveys are high-resolution images of the subsurface. In this keynote talk, we discuss an effective inversion algorithm which is subject both to intense research and development and to production use. This is the well-known Laterally Constrained Inversion (LCI) and Spatially Constrained Inversion algorithm. The same algorithm is also used in a voxel setup (3D model) and for sheet inversions. An integral part of these different model discretizations is an accurate modelling of the system transfer function and of auxiliary parameters like flight altitude, bird pitch, etc.

  2. Multi-objective optimization of inverse planning for accurate radiotherapy

    International Nuclear Information System (INIS)

    Cao Ruifen; Pei Xi; Cheng Mengyun; Li Gui; Hu Liqin; Wu Yican; Jing Jia; Li Guoli

    2011-01-01

    Motivated by the multi-objective character of inverse planning in accurate radiotherapy, this paper studies the multi-objective optimization of inverse planning based on the Pareto solution set. Firstly, the clinical requirements of a treatment plan were transformed into a multi-objective optimization problem with multiple constraints. Then, the fast and elitist multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) was introduced to optimize the problem. A clinical example was tested using this method. The results show that the obtained set of non-dominated solutions was uniformly distributed and the corresponding dose distribution of each solution not only approached the expected dose distribution, but also met the dose-volume constraints. This indicates that the clinical requirements were better satisfied using this method and that the planner can select the optimal treatment plan from the non-dominated solution set. (authors)
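
    The backbone of any Pareto-based scheme such as NSGA-II is the dominance test and the extraction of the non-dominated set. A self-contained Python sketch; the objective pairs are illustrative (e.g. target-dose deviation versus organ-at-risk dose, both minimized), not clinical data:

        def dominates(a, b):
            # Pareto dominance for minimization: a dominates b if it is no
            # worse in every objective and strictly better in at least one.
            return (all(x <= y for x, y in zip(a, b))
                    and any(x < y for x, y in zip(a, b)))

        def non_dominated(solutions):
            # Keep solutions that no other solution dominates (Pareto front).
            return [s for s in solutions
                    if not any(dominates(t, s) for t in solutions if t is not s)]

        plans = [(0.10, 0.30), (0.20, 0.10), (0.15, 0.25), (0.18, 0.28)]
        print(non_dominated(plans))   # -> [(0.1, 0.3), (0.2, 0.1), (0.15, 0.25)]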

  3. Accurate mass measurements on neutron-deficient krypton isotopes

    CERN Document Server

    Rodríguez, D.; Äystö, J.; Beck, D.; Blaum, K.; Bollen, G.; Herfurth, F.; Jokinen, A.; Kellerbauer, A.; Kluge, H.-J.; Kolhinen, V.S.; Oinonen, M.; Sauvan, E.; Schwarz, S.

    2006-01-01

    The masses of $^{72–78,80,82,86}$Kr were measured directly with the ISOLTRAP Penning trap mass spectrometer at ISOLDE/CERN. For all these nuclides, the measurements yielded mass uncertainties below 10 keV. The ISOLTRAP mass values for $^{72–75}$Kr are more precise than the previous results obtained by means of other techniques and thus completely determine the new values in the Atomic-Mass Evaluation. Besides the interest of these masses for nuclear astrophysics, nuclear structure studies, and Standard Model tests, these results constitute a valuable and accurate input for improving mass models. In this paper, we present the mass measurements and discuss the mass evaluation for these Kr isotopes.

  4. Accurate performance analysis of opportunistic decode-and-forward relaying

    KAUST Repository

    Tourki, Kamel

    2011-07-01

    In this paper, we investigate an opportunistic relaying scheme where the selected relay assists the source-destination (direct) communication. In our study, we consider a regenerative opportunistic relaying scheme in which the direct path may be considered unusable, and the destination may use a selection combining technique. We first derive the exact statistics of each hop in terms of the probability density function (PDF). Then, the PDFs are used to determine accurate closed-form expressions for the end-to-end outage probability for a transmission rate R. Furthermore, we carry out an asymptotic performance analysis and deduce the diversity order. Finally, we validate our analysis by showing that performance simulation results coincide with our analytical results over different network architectures. © 2011 IEEE.

  5. A new approach to determine accurately minority-carrier lifetime

    International Nuclear Information System (INIS)

    Idali Oumhand, M.; Mir, Y.; Zazoui, M.

    2009-01-01

    Electron or proton irradiation introduces recombination centers, which tend to affect solar cell parameters by reducing the minority-carrier lifetime (MCLT). Because the MCLT plays a fundamental role in the performance degradation of solar cells, in this work we present a new approach that allows us to obtain accurate values of the MCLT. The relationship between the MCLT in the p-region and the n-region, both before and after irradiation, has been determined by the new method. The validity and accuracy of this approach are supported by the fact that the degradation parameters that fit the experimental data are the same for both the short-circuit current and the open-circuit voltage. The method is applied to a p+/n-InGaP solar cell under 1 MeV electron irradiation.

  6. An Integrative Approach to Accurate Vehicle Logo Detection

    Directory of Open Access Journals (Sweden)

    Hao Pan

    2013-01-01

    Vehicle logo detection is required for many applications in intelligent transportation systems and automatic surveillance. The task is challenging considering the small target of logos and the wide range of variability in shape, color, and illumination. A fast and reliable vehicle logo detection approach is proposed following the visual attention mechanism of human vision. Two pre-logo detection steps, that is, vehicle region detection and small RoI segmentation, rapidly focus on a small logo target. An enhanced Adaboost algorithm, together with two types of features, Haar and HOG, is proposed to detect vehicles. An RoI that covers logos is segmented based on our prior knowledge about the logos' position relative to license plates, which can be accurately localized from frontal vehicle images. A two-stage cascade classifier proceeds with the segmented RoI, using a hybrid of Gentle Adaboost and Support Vector Machine (SVM), resulting in precise logo positioning. Extensive experiments were conducted to verify the efficiency of the proposed scheme.

  7. Anatomically accurate, finite model eye for optical modeling.

    Science.gov (United States)

    Liou, H L; Brennan, N A

    1997-08-01

    There is a need for a schematic eye that models vision accurately under various conditions such as refractive surgical procedures, contact lens and spectacle wear, and near vision. Here we propose a new model eye close to anatomical, biometric, and optical realities. This is a finite model with four aspheric refracting surfaces and a gradient-index lens. It has an equivalent power of 60.35 D and an axial length of 23.95 mm. The new model eye provides spherical aberration values within the limits of empirical results and predicts chromatic aberration for wavelengths between 380 and 750 nm. It provides a model for calculating optical transfer functions and predicting optical performance of the eye.

  8. A fast and accurate dihedral interpolation loop subdivision scheme

    Science.gov (United States)

    Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan

    2018-04-01

    In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. In order to solve the problem of surface shrinkage, we keep the limit condition unchanged, which is important. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly as the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach is to use local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We then demonstrate the effectiveness of the proposed method for various 3D graphic triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.

  9. Zadoff-Chu coded ultrasonic signal for accurate range estimation

    KAUST Repository

    AlSharif, Mohammed H.; Saad, Mohamed; Siala, Mohamed; Ballal, Tarig; Boujemaa, Hatem; Al-Naffouri, Tareq Y.

    2017-01-01

    This paper presents a new adaptation of Zadoff-Chu sequences for the purpose of range estimation and movement tracking. The proposed method uses Zadoff-Chu sequences utilizing a wideband ultrasonic signal to estimate the range between two devices with very high accuracy and high update rate. This range estimation method is based on time of flight (TOF) estimation using cyclic cross correlation. The system was experimentally evaluated under different noise levels and multi-user interference scenarios. For a single user, the results show less than 7 mm error for 90% of range estimates in a typical indoor environment. Under the interference from three other users, the 90% error was less than 25 mm. The system provides high estimation update rate allowing accurate tracking of objects moving with high speed.

  10. A Modified Proportional Navigation Guidance for Accurate Target Hitting

    Directory of Open Access Journals (Sweden)

    A. Moharampour

    2010-03-01

    First, the pure proportional navigation guidance (PPNG) in the 3-dimensional state is explained from a new point of view. The main idea is based on the distinction between the angular rate vector and rotation vector conceptions. The current innovation is based on the selection of line-of-sight (LOS) coordinates. A comparison between the two available choices for the LOS coordinate system is proposed. An improvement is made by adding two additional terms. The first term includes a cross-range compensator which is used to provide and enhance path observability and to obtain convergent estimates of state variables. The second term is a new-concept lead bias term, which has been calculated by assuming an equivalent acceleration along the target longitudinal axis. Simulation results indicate that the lead bias term properly provides terminal conditions for accurate target interception.

  11. Accurate Calculation of Fringe Fields in the LHC Main Dipoles

    CERN Document Server

    Kurz, S; Siegel, N

    2000-01-01

    The ROXIE program developed at CERN for the design and optimization of the superconducting LHC magnets has recently been extended, in a collaboration with the University of Stuttgart, Germany, with a field computation method based on the coupling between the boundary element (BEM) and finite element (FEM) techniques. This avoids the meshing of the coils and the air regions, and avoids artificial far-field boundary conditions. The method is therefore especially suited for the accurate calculation of fields in superconducting magnets, in which the field is dominated by the coil. We present fringe field calculations in both 2d and 3d geometries to evaluate the effect of connections and the cryostat on the field quality, and the flux density to which auxiliary bus-bars are exposed.

  12. An unconventional method of quantitative microstructural analysis

    International Nuclear Information System (INIS)

    Rastani, M.

    1995-01-01

    The experiment described here introduces a simple methodology which could be used to replace the time-consuming and expensive conventional methods of metallographic and quantitative analysis of the effect of thermal treatment on microstructure. The method is ideal for the microstructural evaluation of tungsten filaments and other wire samples, such as copper wire, which can be conveniently coiled. Ten such samples were heat treated by ohmic resistance at temperatures expected to span the recrystallization range. After treatment, the samples were evaluated in the elastic recovery test, and the normalized elastic recovery factor was defined in terms of the measured deflections. It has been shown experimentally that the elastic recovery factor depends on the degree of recrystallization; in other words, this factor is used to determine the fraction of unrecrystallized material. Because the elastic recovery method examines the whole filament, rather than just one section through the filament as in the metallographic method, it measures the degree of recrystallization more accurately. The method also requires considerably less time and cost than the conventional method.

  13. Individual patient dosimetry using quantitative SPECT imaging

    International Nuclear Information System (INIS)

    Gonzalez, J.; Oliva, J.; Baum, R.; Fisher, S.

    2002-01-01

    An approach is described to provide individual patient dosimetry for routine clinical use. Accurate quantitative SPECT imaging was achieved using appropriate methods. The volume of interest (VOI) was defined semi-automatically using a fixed threshold value obtained from phantom studies. The calibration factor to convert the voxel counts from SPECT images into activity values was determined from a calibrated point source, using the same threshold value as in the phantom studies. For the selected radionuclide, the dose within and outside a sphere of voxel dimensions at different distances was computed through dose point-kernels to obtain a discrete absorbed-dose kernel representation around a volume source with uniform activity distribution. The spatial activity distribution from SPECT imaging was convolved with this kernel representation using the discrete Fourier transform method to yield the three-dimensional absorbed dose rate distribution. The accuracy of the dose rate calculation was validated with software phantoms. The absorbed dose was determined by integration of the dose rate distribution for each volume of interest (VOI). Parameters for treatment optimization, such as dose rate volume histograms and dose rate statistics, are provided. A patient example is used to illustrate our dosimetric calculations.
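
    The convolution step described above is a standard FFT-based three-dimensional convolution of the voxelised activity map with the dose point-kernel. A NumPy sketch under simplifying assumptions (centred kernel, zero padding to suppress cyclic wrap-around):

        import numpy as np

        def dose_rate(activity, kernel):
            # Linear 3-D convolution via the discrete Fourier transform:
            # zero-pad both arrays to the full linear-convolution size.
            shape = [a + k - 1 for a, k in zip(activity.shape, kernel.shape)]
            out = np.fft.irfftn(np.fft.rfftn(activity, shape) *
                                np.fft.rfftn(kernel, shape), shape)
            # Crop back to the activity grid (kernel assumed centred).
            start = [k // 2 for k in kernel.shape]
            return out[tuple(slice(s, s + n)
                             for s, n in zip(start, activity.shape))]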

  14. Quantitative rotating frame relaxometry methods in MRI.

    Science.gov (United States)

    Gilani, Irtiza Ali; Sepponen, Raimo

    2016-06-01

    Macromolecular degeneration and biochemical changes in tissue can be quantified using rotating frame relaxometry in MRI. It has been shown in several studies that the rotating frame longitudinal relaxation rate constant (R1ρ) and the rotating frame transverse relaxation rate constant (R2ρ) are sensitive biomarkers of phenomena at the cellular level. In this comprehensive review, existing MRI methods for probing the biophysical mechanisms that affect the rotating frame relaxation rates of the tissue (i.e. R1ρ and R2ρ) are presented. Long acquisition times and high radiofrequency (RF) energy deposition into tissue during the process of spin-locking in rotating frame relaxometry are the major barriers to the establishment of these relaxation contrasts at high magnetic fields. Therefore, clinical applications of R1ρ and R2ρ MRI using on- or off-resonance RF excitation methods remain challenging. Accordingly, this review describes the theoretical and experimental approaches to the design of hard RF pulse cluster- and adiabatic RF pulse-based excitation schemes for accurate and precise measurements of R1ρ and R2ρ. The merits and drawbacks of different MRI acquisition strategies for quantitative relaxation rate measurement in the rotating frame regime are reviewed. In addition, this review summarizes current clinical applications of rotating frame MRI sequences. Copyright © 2016 John Wiley & Sons, Ltd.

  15. Accurate SERS detection of malachite green in aquatic products on basis of graphene wrapped flexible sensor.

    Science.gov (United States)

    Ouyang, Lei; Yao, Ling; Zhou, Taohong; Zhu, Lihua

    2018-10-16

    Malachite green (MG) is a banned pesticide for aquaculture products. As a required inspection item, its fast and accurate determination before the products reach the market is very important. Surface-enhanced Raman scattering (SERS) is a promising tool for MG sensing, but several problems must be overcome, such as fairly poor sensitivity and reproducibility, and especially laser-induced chemical conversion and photo-bleaching during SERS observation. Using a graphene-wrapped Ag array based flexible membrane sensor, a modified SERS strategy was proposed for the sensitive and accurate detection of MG. The graphene layer functioned as an inert protector impeding the chemical conversion of the bioproduct leucomalachite green (LMG) to MG during SERS detection, and as a heat transmitter preventing laser-induced photo-bleaching, which enables the separate detection of MG and LMG in fish extracts. The combination of the Ag array and the graphene cover also produced plentiful, densely and uniformly distributed hot spots, leading to an analytical enhancement factor of up to 3.9 × 10⁸ and excellent reproducibility (relative standard deviation as low as 5.8% for 70 runs). The proposed method was easily used for MG detection with a limit of detection (LOD) as low as 2.7 × 10⁻¹¹ mol L⁻¹. The flexibility of the sensor gives it a merit for in-field fast detection of MG residues on the scales of a living fish through a surface-extraction and paste-transfer procedure. The developed strategy was successfully applied in the analysis of real samples, showing good prospects for both fast inspection and quantitative detection of MG. Copyright © 2018 Elsevier B.V. All rights reserved.

  16. A Machine Learned Classifier That Uses Gene Expression Data to Accurately Predict Estrogen Receptor Status

    Science.gov (United States)

    Bastani, Meysam; Vos, Larissa; Asgarian, Nasimeh; Deschenes, Jean; Graham, Kathryn; Mackey, John; Greiner, Russell

    2013-01-01

    Background Selecting the appropriate treatment for breast cancer requires accurately determining the estrogen receptor (ER) status of the tumor. However, the standard for determining this status, immunohistochemical analysis of formalin-fixed paraffin embedded samples, suffers from numerous technical and reproducibility issues. Assessment of ER-status based on RNA expression can provide more objective, quantitative and reproducible test results. Methods To learn a parsimonious RNA-based classifier of hormone receptor status, we applied a machine learning tool to a training dataset of gene expression microarray data obtained from 176 frozen breast tumors, whose ER-status was determined by applying ASCO-CAP guidelines to standardized immunohistochemical testing of formalin fixed tumor. Results This produced a three-gene classifier that can predict the ER-status of a novel tumor, with a cross-validation accuracy of 93.17±2.44%. When applied to an independent validation set and to four other public databases, some on different platforms, this classifier obtained over 90% accuracy in each. In addition, we found that this prediction rule separated the patients' recurrence-free survival curves with a hazard ratio lower than the one based on the IHC analysis of ER-status. Conclusions Our efficient and parsimonious classifier lends itself to high throughput, highly accurate and low-cost RNA-based assessments of ER-status, suitable for routine high-throughput clinical use. This analytic method provides a proof-of-principle that may be applicable to developing effective RNA-based tests for other biomarkers and conditions. PMID:24312637
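
    As a sketch of the general recipe (not the authors' specific tool, genes, or data), one can fit a small linear classifier on three expression features and cross-validate it; the data below are synthetic stand-ins:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        rng = np.random.default_rng(0)
        # Synthetic stand-in for 176 tumors x 3 genes; ER+ cases are shifted
        # upwards in the first gene so the classes are separable.
        y = rng.integers(0, 2, 176)
        X = rng.normal(size=(176, 3)) + np.outer(y, [2.0, 0.5, 0.0])

        clf = LogisticRegression(max_iter=1000)
        print(cross_val_score(clf, X, y, cv=10).mean())   # cross-validated accuracy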

  17. A machine learned classifier that uses gene expression data to accurately predict estrogen receptor status.

    Directory of Open Access Journals (Sweden)

    Meysam Bastani

    Full Text Available BACKGROUND: Selecting the appropriate treatment for breast cancer requires accurately determining the estrogen receptor (ER status of the tumor. However, the standard for determining this status, immunohistochemical analysis of formalin-fixed paraffin embedded samples, suffers from numerous technical and reproducibility issues. Assessment of ER-status based on RNA expression can provide more objective, quantitative and reproducible test results. METHODS: To learn a parsimonious RNA-based classifier of hormone receptor status, we applied a machine learning tool to a training dataset of gene expression microarray data obtained from 176 frozen breast tumors, whose ER-status was determined by applying ASCO-CAP guidelines to standardized immunohistochemical testing of formalin fixed tumor. RESULTS: This produced a three-gene classifier that can predict the ER-status of a novel tumor, with a cross-validation accuracy of 93.17±2.44%. When applied to an independent validation set and to four other public databases, some on different platforms, this classifier obtained over 90% accuracy in each. In addition, we found that this prediction rule separated the patients' recurrence-free survival curves with a hazard ratio lower than the one based on the IHC analysis of ER-status. CONCLUSIONS: Our efficient and parsimonious classifier lends itself to high throughput, highly accurate and low-cost RNA-based assessments of ER-status, suitable for routine high-throughput clinical use. This analytic method provides a proof-of-principle that may be applicable to developing effective RNA-based tests for other biomarkers and conditions.

  18. Quantitative analysis by computer controlled X-ray fluorescence spectrometer

    International Nuclear Information System (INIS)

    Balasubramanian, T.V.; Angelo, P.C.

    1981-01-01

    X-ray fluorescence spectroscopy has become a widely accepted method in the metallurgical field for analysis of both minor and major elements. As encountered in many other analytical techniques, the problem of matrix effects, generally known as interelemental effects, has to be dealt with effectively in order to make the analysis accurate. There are several methods by which the effects of the matrix on the analyte are minimised or corrected for, and mathematical correction is one of them. In this method, the characteristic secondary X-ray intensities are measured from standard samples, and correction coefficients for interelemental effects, if any, are evaluated by mathematical calculations. This paper describes attempts to evaluate the correction coefficients for interelemental effects by multiple linear regression programmes using a computer for the quantitative analysis of stainless steel and a nickel-base cast alloy. The quantitative results obtained using this method for a standard stainless steel sample are compared with the certified values. (author)
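
    One common empirical form of such a correction treats the certified concentration of the analyte as a linear function of the measured intensities of the analyte and the interfering elements, fitted over the standards. A sketch with made-up numbers (the paper's exact regression model is not specified):

        import numpy as np

        # Rows = standard samples; columns = measured secondary X-ray
        # intensities of the analyte and two interfering elements.
        I = np.array([[1.00, 0.20, 0.05],
                      [0.80, 0.35, 0.10],
                      [1.20, 0.10, 0.02],
                      [0.90, 0.25, 0.08]])
        c = np.array([18.2, 14.9, 21.7, 16.5])   # certified concentrations, wt%

        A = np.hstack([np.ones((len(c), 1)), I])         # intercept + intensities
        coeffs, *_ = np.linalg.lstsq(A, c, rcond=None)   # correction coefficients
        print(coeffs)   # later applied to intensities measured on unknowns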

  19. QUAIL: A Quantitative Security Analyzer for Imperative Code

    DEFF Research Database (Denmark)

    Biondi, Fabrizio; Wasowski, Andrzej; Traonouez, Louis-Marie

    2013-01-01

    Quantitative security analysis evaluates and compares how effectively a system protects its secret data. We introduce QUAIL, the first tool able to perform an arbitrary-precision quantitative analysis of the security of a system depending on private information. QUAIL builds a Markov chain model of the system's behavior as observed by an attacker, and computes the correlation between the system's observable output and the behavior depending on the private information, obtaining the expected number of bits of the secret that the attacker will infer by observing the system. QUAIL is able to evaluate the safety of randomized protocols depending on secret data, allowing one to verify a security protocol's effectiveness. We experiment with a few examples and show that QUAIL's security analysis is more accurate and revealing than the results of other tools.
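
    The "expected number of bits of the secret" computed by the tool corresponds, in standard information-theoretic terms, to the mutual information between the secret X and the attacker's observation O (a generic leakage definition, not QUAIL's internal machinery):

        $$ \mathcal{L} = H(X) - H(X \mid O) = \sum_{x,o} p(x,o)\,\log_2 \frac{p(x,o)}{p(x)\,p(o)} $$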

  20. An accurate and portable solid state neutron rem meter

    Energy Technology Data Exchange (ETDEWEB)

    Oakes, T.M. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Bellinger, S.L. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Miller, W.H. [Nuclear Science and Engineering Institute, University of Missouri, Columbia, MO (United States); Missouri University Research Reactor, Columbia, MO (United States); Myers, E.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Fronk, R.G.; Cooper, B.W [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Sobering, T.J. [Electronics Design Laboratory, Kansas State University, KS (United States); Scott, P.R. [Department of Physics, University of Missouri, Kansas City, MO (United States); Ugorowski, P.; McGregor, D.S; Shultis, J.K. [Department of Mechanical and Nuclear Engineering, Kansas State University, Manhattan, KS (United States); Caruso, A.N., E-mail: carusoan@umkc.edu [Department of Physics, University of Missouri, Kansas City, MO (United States)

    2013-08-11

    Accurately resolving the ambient neutron dose equivalent spanning the thermal to 15 MeV energy range with a single configuration and lightweight instrument is desirable. This paper presents the design of a portable, high intrinsic efficiency, and accurate neutron rem meter whose energy-dependent response is electronically adjusted to a chosen neutron dose equivalent standard. The instrument may be classified as a moderating-type neutron spectrometer, based on an adaptation of the classical Bonner sphere and position-sensitive long counter, which simultaneously counts thermalized neutrons with high-thermal-efficiency solid-state neutron detectors. The use of multiple detectors and moderator arranged along an axis of symmetry (e.g., the long axis of a cylinder) with known neutron-slowing properties allows the construction of a linear combination of responses that approximates the ambient neutron dose equivalent. Variations on the detector configuration are investigated via Monte Carlo N-Particle simulations to minimize the total instrument mass while maintaining acceptable response accuracy—a dose error of less than 15% for bare ²⁵²Cf, bare AmBe, and epithermal and mixed monoenergetic sources is found at less than 4.5 kg moderator mass in all studied cases. A comparison of the energy-dependent dose equivalent response and the resultant energy-dependent dose equivalent error of the present dosimeter to commercially available portable rem meters and the prior art is presented. Finally, the present design is assessed by comparison of the simulated output resulting from applications of several known neutron sources and dose rates.

  1. Accurate and efficient calculation of response times for groundwater flow

    Science.gov (United States)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

    We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L²/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
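
    For orientation, one common formulation of mean action time runs as follows (the paper's contribution, the extension to the first k raw moments via asymptotics of the cumulative distribution function, is not reproduced here). For a quantity h(x,t) relaxing from h(x,0) to the steady state h_∞(x), define

        $$ F(x,t) = 1 - \frac{h(x,t) - h_\infty(x)}{h(x,0) - h_\infty(x)}, \qquad f(x,t) = \frac{\partial F}{\partial t}, \qquad M_k(x) = \int_0^\infty t^k f(x,t)\,\mathrm{d}t $$

    so that the mean action time is the first moment $M_1(x)$ and response-time estimates are built from the $M_k$.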

  2. How Accurate Are Our Processed ENDF Cross Sections?

    International Nuclear Information System (INIS)

    Cullen, Dermott E.

    2014-05-01

    Let me start by reassuring you that currently our nuclear data processing codes are very accurate in the calculations that they perform INSIDE COMPUTERS. However, most of them drop the ball in what should be a trivial final step: outputting their results into the ENDF format. This is obviously a very important step, because without accurately outputting their results we would not be able to confidently use them in our applications. Unfortunately, it is a step that is not given the attention it deserves; hence we come to the purpose of this paper. Here I document first the state of a number of nuclear data processing codes as of February 2012, when this comparison began, and then the current state, November 2013, of the same codes. I have delayed publishing results until now to give participants time to distribute updated codes and data. The codes compared include, in alphabetical order: AMPX, NJOY, PREPRO, and SAMMY/SAMRML. During this time we have seen considerable improvement in output results, and as a direct result of this study we now have four codes that produce high-precision results, but this is still a long way from ensuring that all codes that handle nuclear data are maintaining the accuracy that we require today. In the first part of this report I consider the precision of our tabulated energies; here we see obvious flaws when less-precise output is used. In the second part I consider the precision of our cross sections; here we see more subtle flaws. The important point to stress is that once these flaws are recognized it is relatively easy to eliminate them and produce high-precision energies and cross sections.
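
    The tabulated-energy issue is easy to demonstrate: an ENDF-6 numeric field is 11 characters wide, and the conventional exponent form spends columns on the exponent that a plain decimal (when the magnitude permits) can spend on significant digits. A small illustration of my own, not taken from any of the compared codes:

        x = 1234567.891                # an energy in eV

        exp_form = "%11.4E" % x        # ' 1.2346E+06' -> 5 significant digits
        plain_form = "%.3f" % x        # '1234567.891' -> same 11 columns, exact

        print(abs(float(exp_form) - x))     # ~32 eV of round-off on output
        print(abs(float(plain_form) - x))   # 0.0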

  3. Bayesian calibration of power plant models for accurate performance prediction

    International Nuclear Information System (INIS)

    Boksteen, Sowande Z.; Buijtenen, Jos P. van; Pecnik, Rene; Vecht, Dick van der

    2014-01-01

    Highlights: • Bayesian calibration is applied to power plant performance prediction. • Measurements from a plant in operation are used for model calibration. • A gas turbine performance model and steam cycle model are calibrated. • An integrated plant model is derived. • Part load efficiency is accurately predicted as a function of ambient conditions. - Abstract: Gas turbine combined cycles are expected to play an increasingly important role in the balancing of supply and demand in future energy markets. Thermodynamic modeling of these energy systems is frequently applied to assist in decision making processes related to the management of plant operation and maintenance. In most cases, model inputs, parameters and outputs are treated as deterministic quantities and plant operators make decisions with limited or no regard of uncertainties. As the steady integration of wind and solar energy into the energy market induces extra uncertainties, part load operation and reliability are becoming increasingly important. In the current study, methods are proposed to not only quantify various types of uncertainties in measurements and plant model parameters using measured data, but to also assess their effect on various aspects of performance prediction. The authors aim to account for model parameter and measurement uncertainty, and for systematic discrepancy of models with respect to reality. For this purpose, the Bayesian calibration framework of Kennedy and O’Hagan is used, which is especially suitable for high-dimensional industrial problems. The article derives a calibrated model of the plant efficiency as a function of ambient conditions and operational parameters, which is also accurate in part load. The article shows that complete statistical modeling of power plants not only enhances process models, but can also increase confidence in operational decisions
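
    For orientation, the Kennedy and O'Hagan framework relates each measured plant output y_i to the simulator η evaluated at operating conditions x_i and calibration parameters θ, plus a model-discrepancy term and measurement noise (basic form only; the paper's priors and parametrization are not reproduced):

        $$ y_i = \eta(x_i, \theta) + \delta(x_i) + \varepsilon_i, \qquad \delta(\cdot) \sim \mathcal{GP}, \qquad \varepsilon_i \sim \mathcal{N}(0, \sigma^2) $$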

  4. Accurate atom-mapping computation for biochemical reactions.

    Science.gov (United States)

    Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D

    2012-11-26

    The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways has to take into account the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.
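
    The flavor of the optimization can be illustrated with a much simpler relative: treating atom mapping as a minimum-cost assignment between same-element atoms, with a toy cost standing in for the bond-edit weights. The real MWED method solves a MILP over bond changes; this sketch (all data invented) only shows the shape of the problem.

      import numpy as np
      from scipy.optimize import linear_sum_assignment

      # Toy atoms: (element, heavy-atom degree) for reactant and product sides
      reactants = [("C", 3), ("C", 2), ("O", 1), ("O", 2)]
      products  = [("C", 2), ("C", 3), ("O", 2), ("O", 1)]

      BIG = 1e6  # forbid element-changing maps
      cost = np.zeros((len(reactants), len(products)))
      for i, (el_r, deg_r) in enumerate(reactants):
          for j, (el_p, deg_p) in enumerate(products):
              cost[i, j] = abs(deg_r - deg_p) if el_r == el_p else BIG

      rows, cols = linear_sum_assignment(cost)   # Hungarian algorithm
      for i, j in zip(rows, cols):
          print(f"reactant atom {i} ({reactants[i][0]}) -> product atom {j}")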

  5. Accurate thermodynamic characterization of a synthetic coal mine methane mixture

    International Nuclear Information System (INIS)

    Hernández-Gómez, R.; Tuma, D.; Villamañán, M.A.; Mondéjar, M.E.; Chamorro, C.R.

    2014-01-01

    Highlights: • Accurate density data of a 10-component synthetic coal mine methane mixture are presented. • Experimental data are compared with the densities calculated from the GERG-2008 equation of state. • Relative deviations in density were within a 0.2% band at temperatures above 275 K. • Densities at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations. -- Abstract: In the last few years, coal mine methane (CMM) has gained significance as a potential non-conventional gas fuel. The progressive depletion of common fossil fuel reserves and, on the other hand, the positive estimates of CMM resources as a by-product of mining promote this fuel gas as a promising alternative fuel. The increasing importance of its exploitation makes it necessary to check the capability of the present-day models and equations of state for natural gas to predict the thermophysical properties of gases with a considerably different composition, like CMM. In this work, accurate density measurements of a synthetic CMM mixture are reported in the temperature range from (250 to 400) K and pressures up to 15 MPa, as part of the research project EMRP ENG01 of the European Metrology Research Program for the characterization of non-conventional energy gases. Experimental data were compared with the densities calculated with the GERG-2008 equation of state. Relative deviations between experimental and estimated densities were within a 0.2% band at temperatures above 275 K, while data at 250 K as well as at 275 K and pressures above 10 MPa showed higher deviations

  6. An accurate solver for forward and inverse transport

    International Nuclear Information System (INIS)

    Monard, Francois; Bal, Guillaume

    2010-01-01

    This paper presents a robust and accurate way to solve steady-state linear transport (radiative transfer) equations numerically. Our main objective is to address the inverse transport problem, in which the optical parameters of a domain of interest are reconstructed from measurements performed at the domain's boundary. This inverse problem has important applications in medical and geophysical imaging, and more generally in any field involving high frequency waves or particles propagating in scattering environments. Stable solutions of the inverse transport problem require that the singularities of the measurement operator, which maps the optical parameters to the available measurements, be captured with sufficient accuracy. This in turn requires that the free propagation of particles be calculated with care, which is a difficult problem on a Cartesian grid. A standard discrete ordinates method is used for the direction of propagation of the particles. Our methodology to address spatial discretization is based on rotating the computational domain so that each direction of propagation is always aligned with one of the grid axes. Rotations are performed in the Fourier domain to achieve spectral accuracy. The numerical dispersion of the propagating particles is therefore minimal. As a result, the ballistic and single scattering components of the transport solution are calculated robustly and accurately. Physical blurring effects, such as small angular diffusion, are also incorporated into the numerical tool. Forward and inverse calculations performed in a two-dimensional setting exemplify the capabilities of the method. Although the methodology might not be the fastest way to solve transport equations, its physical accuracy provides us with a numerical tool to assess what can and cannot be reconstructed in inverse transport theory.
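
    The key numerical ingredient (rotating the grid so that each propagation direction lines up with an axis, with the rotation carried out in the Fourier domain) can be sketched with the classic three-shear FFT rotation. This is an illustrative reimplementation of the general technique, not the authors' solver.

      import numpy as np

      def shear_x(f, a):
          """Shift row y by a*(y - n_y/2) samples via an FFT phase ramp (spectral accuracy)."""
          n_y, n_x = f.shape
          y = np.arange(n_y) - n_y / 2
          kx = np.fft.fftfreq(n_x)                # spatial frequency, cycles per sample
          phase = np.exp(-2j * np.pi * kx[None, :] * (a * y)[:, None])
          return np.fft.ifft(np.fft.fft(f, axis=1) * phase, axis=1)

      def shear_y(f, b):
          return shear_x(f.T, b).T

      def rotate(f, theta):
          """Rotate a 2D field by theta via the shear_x(a) . shear_y(b) . shear_x(a) product."""
          a, b = -np.tan(theta / 2.0), np.sin(theta)
          return shear_x(shear_y(shear_x(f.astype(complex), a), b), a).real

      field = np.zeros((128, 128)); field[48:80, 60:68] = 1.0
      rotated = rotate(field, np.pi / 6)          # 30 degrees, no interpolation stencil

    Because each shear is a pure phase multiplication in Fourier space, the interpolation error of conventional resampling is avoided, which is the property exploited above to keep the numerical dispersion of the propagating particles minimal.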

  7. Establishing Accurate and Sustainable Geospatial Reference Layers in Developing Countries

    Science.gov (United States)

    Seaman, V. Y.

    2017-12-01

    Accurate geospatial reference layers (settlement names & locations, administrative boundaries, and population) are not readily available for most developing countries. This critical information gap makes it challenging for governments to efficiently plan, allocate resources, and provide basic services. It also hampers international agencies' response to natural disasters, humanitarian crises, and other emergencies. The current work involves a recent successful effort, led by the Bill & Melinda Gates Foundation and the Government of Nigeria, to obtain such data. The data collection began in 2013, with local teams collecting names, coordinates, and administrative attributes for over 100,000 settlements using ODK-enabled smartphones. A settlement feature layer extracted from satellite imagery was used to ensure all settlements were included. Administrative boundaries (Ward, LGA) were created using the settlement attributes. These "new" boundary layers were much more accurate than existing shapefiles used by the government and international organizations. The resulting data sets helped Nigeria eradicate polio from all areas except in the extreme northeast, where security issues limited access and vaccination activities. In addition to the settlement and boundary layers, a GIS-based population model was developed, in partnership with Oak Ridge National Laboratories and Flowminder, that used the extracted settlement areas and characteristics, along with targeted microcensus data. This model provides population and demographics estimates independent of census or other administrative data, at a resolution of 90 meters. These robust geospatial data layers found many other uses, including establishing catchment area settlements and populations for health facilities, validating denominators for population-based surveys, and applications across a variety of government sectors. Based on the success of the Nigeria effort, a partnership between DfID and the Bill & Melinda Gates

  8. A New Multiscale Technique for Time-Accurate Geophysics Simulations

    Science.gov (United States)

    Omelchenko, Y. A.; Karimabadi, H.

    2006-12-01

    Large-scale geophysics systems are frequently described by multiscale reactive flow models (e.g., wildfire and climate models, multiphase flows in porous rocks, etc.). Accurate and robust simulations of such systems by traditional time-stepping techniques face a formidable computational challenge. Explicit time integration suffers from global (CFL and accuracy) timestep restrictions due to inhomogeneous convective and diffusion processes, as well as closely coupled physical and chemical reactions. Application of adaptive mesh refinement (AMR) to such systems may not be always sufficient since its success critically depends on a careful choice of domain refinement strategy. On the other hand, implicit and timestep-splitting integrations may result in a considerable loss of accuracy when fast transients in the solution become important. To address this issue, we developed an alternative explicit approach to time-accurate integration of such systems: Discrete-Event Simulation (DES). DES enables asynchronous computation by automatically adjusting the CPU resources in accordance with local timescales. This is done by encapsulating flux-conservative updates of numerical variables in the form of events, whose execution and synchronization is explicitly controlled by imposing accuracy and causality constraints. As a result, at each time step DES self-adaptively updates only a fraction of the global system state, which eliminates unnecessary computation of inactive elements. DES can be naturally combined with various mesh generation techniques. The event-driven paradigm results in robust and fast simulation codes, which can be efficiently parallelized via a new preemptive event processing (PEP) technique. We discuss applications of this novel technology to time-dependent diffusion-advection-reaction and CFD models representative of various geophysics applications.
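
    As an illustration of the event-driven idea, the toy sketch below advances a 1D diffusion problem with a priority queue: each cell schedules its own next update from a local timestep estimate, so quiescent cells are simply never touched. It deliberately glosses over the causality and flux-conservation bookkeeping that a real DES enforces; all numbers are invented.

      import heapq
      import numpy as np

      n, D, dx, t_end = 64, 1.0, 1.0, 5.0
      u = np.zeros(n); u[n // 2] = 1.0        # single hot cell
      t_cell = np.zeros(n)                    # each cell's own local time

      def local_dt(i):
          """Small step where the solution is active, long sleep where it is flat."""
          grad = abs(u[(i + 1) % n] - u[i]) + abs(u[i - 1] - u[i])
          return 0.1 * dx * dx / D if grad > 1e-6 else 1.0

      events = [(local_dt(i), i) for i in range(n)]
      heapq.heapify(events)
      while events:
          t, i = heapq.heappop(events)
          if t > t_end:
              break
          dt = t - t_cell[i]                  # advance only this cell, by its own dt
          u[i] += D * dt / dx ** 2 * (u[(i + 1) % n] - 2 * u[i] + u[i - 1])
          t_cell[i] = t
          heapq.heappush(events, (t + local_dt(i), i))

      print(f"mean u = {u.mean():.4f}, max u = {u.max():.4f}")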

  9. Accurate fluid force measurement based on control surface integration

    Science.gov (United States)

    Lentink, David

    2018-01-01

    Nonintrusive 3D fluid force measurements are still challenging to conduct accurately for freely moving animals, vehicles, and deforming objects. Two techniques, 3D particle image velocimetry (PIV) and a new technique, the aerodynamic force platform (AFP), address this. Both rely on the control volume integral for momentum; whereas PIV requires numerical integration of flow fields, the AFP performs the integration mechanically based on rigid walls that form the control surface. The accuracy of both PIV and AFP measurements based on the control surface integration is thought to hinge on determining the unsteady body force associated with the acceleration of the volume of displaced fluid. Here, I introduce a set of non-dimensional error ratios to show which fluid and body parameters make the error negligible. The unsteady body force is insignificant in all conditions where the average density of the body is much greater than the density of the fluid, e.g., in gas. Whenever a strongly deforming body experiences significant buoyancy and acceleration, the error is significant. Remarkably, this error can be entirely corrected for with an exact factor provided that the body has a sufficiently homogenous density or acceleration distribution, which is common in liquids. The correction factor for omitting the unsteady body force, 1 − ρf/(ρb + ρf), depends only on the fluid density, ρf, and the body density, ρb. Whereas these straightforward solutions work even at the liquid-gas interface in a significant number of cases, they do not work for generalized bodies undergoing buoyancy in combination with appreciable body density inhomogeneity, volume change (PIV), or volume rate-of-change (PIV and AFP). In these less common cases, the 3D body shape needs to be measured and resolved in time and space to estimate the unsteady body force. The analysis shows that accounting for the unsteady body force is straightforward to non

  10. Time-resolved quantitative phosphoproteomics

    DEFF Research Database (Denmark)

    Verano-Braga, Thiago; Schwämmle, Veit; Sylvester, Marc

    2012-01-01

    proteins involved in the Ang-(1-7) signaling, we performed a mass spectrometry-based time-resolved quantitative phosphoproteome study of human aortic endothelial cells (HAEC) treated with Ang-(1-7). We identified 1288 unique phosphosites on 699 different proteins with 99% certainty of correct peptide...

  11. Quantitative Characterisation of Surface Texture

    DEFF Research Database (Denmark)

    De Chiffre, Leonardo; Lonardo, P.M.; Trumpold, H.

    2000-01-01

    This paper reviews the different methods used to give a quantitative characterisation of surface texture. The paper contains a review of conventional 2D as well as 3D roughness parameters, with particular emphasis on recent international standards and developments. It presents new texture...

  12. GPC and quantitative phase imaging

    DEFF Research Database (Denmark)

    Palima, Darwin; Banas, Andrew Rafael; Villangca, Mark Jayson

    2016-01-01

    shaper followed by the potential of GPC for biomedical and multispectral applications where we experimentally demonstrate the active light shaping of a supercontinuum laser over most of the visible wavelength range. Finally, we discuss how GPC can be advantageously applied for Quantitative Phase Imaging...

  13. Compositional and Quantitative Model Checking

    DEFF Research Database (Denmark)

    Larsen, Kim Guldstrand

    2010-01-01

    This paper gives a survey of a compositional model checking methodology and its successful instantiation to the model checking of networks of finite-state, timed, hybrid and probabilistic systems with respect to suitable quantitative versions of the modal mu-calculus [Koz82]. The method is based...

  14. La quantité en islandais moderne

    Directory of Open Access Journals (Sweden)

    Magnús Pétursson

    1978-12-01

    Full Text Available The phonetic realization of quantity in stressed syllables in the reading of two continuous texts. The problem of quantity is one of the most studied problems in the phonology of modern Icelandic. From a phonological point of view, it seems that nothing new can be expected, the theoretical possibilities having been practically exhausted, as we recalled in our recent study (Pétursson 1978, pp. 76-78). The most unexpected result of the research of recent years is without doubt the discovery of a quantitative differentiation between the North and the South of Iceland (Pétursson 1976a). It is nevertheless still premature to speak of true quantitative zones, since neither their limits nor their geographical extent are yet known.

  15. Quantitative Reasoning in Problem Solving

    Science.gov (United States)

    Ramful, Ajay; Ho, Siew Yin

    2015-01-01

    In this article, Ajay Ramful and Siew Yin Ho explain the meaning of quantitative reasoning, describing how it is used to solve mathematical problems. They also describe a diagrammatic approach to represent relationships among quantities and provide examples of problems and their solutions.

  16. Quantitative reactive modeling and verification.

    Science.gov (United States)

    Henzinger, Thomas A

    Formal verification aims to improve the quality of software by detecting errors before they do harm. At the basis of formal verification is the logical notion of correctness , which purports to capture whether or not a program behaves as desired. We suggest that the boolean partition of software into correct and incorrect programs falls short of the practical need to assess the behavior of software in a more nuanced fashion against multiple criteria. We therefore propose to introduce quantitative fitness measures for programs, specifically for measuring the function, performance, and robustness of reactive programs such as concurrent processes. This article describes the goals of the ERC Advanced Investigator Project QUAREM. The project aims to build and evaluate a theory of quantitative fitness measures for reactive models. Such a theory must strive to obtain quantitative generalizations of the paradigms that have been success stories in qualitative reactive modeling, such as compositionality, property-preserving abstraction and abstraction refinement, model checking, and synthesis. The theory will be evaluated not only in the context of software and hardware engineering, but also in the context of systems biology. In particular, we will use the quantitative reactive models and fitness measures developed in this project for testing hypotheses about the mechanisms behind data from biological experiments.

  17. Quantitative phylogenetic assessment of microbial communities in diverse environments

    Energy Technology Data Exchange (ETDEWEB)

    von Mering, C.; Hugenholtz, P.; Raes, J.; Tringe, S.G.; Doerks,T.; Jensen, L.J.; Ward, N.; Bork, P.

    2007-01-01

    The taxonomic composition of environmental communities is an important indicator of their ecology and function. Here, we use a set of protein-coding marker genes, extracted from large-scale environmental shotgun sequencing data, to provide a more direct, quantitative and accurate picture of community composition than traditional rRNA-based approaches using polymerase chain reaction (PCR). By mapping marker genes from four diverse environmental data sets onto a reference species phylogeny, we show that certain communities evolve faster than others, determine preferred habitats for entire microbial clades, and provide evidence that such habitat preferences are often remarkably stable over time.

  18. Integration of hydrothermal-energy economics: related quantitative studies

    Energy Technology Data Exchange (ETDEWEB)

    1982-08-01

    A comparison of ten models for computing the cost of hydrothermal energy is presented. This comparison involved a detailed examination of a number of technical and economic parameters of the various quantitative models, with the objective of identifying the most important parameters in the context of accurate estimates of the cost of hydrothermal energy. Important features of the various models, such as focus of study, applications, market sectors covered, methodology, input data requirements, and output, are compared in the document. A detailed sensitivity analysis of all the important engineering and economic parameters is carried out to determine the effect of non-consideration of individual parameters.

  19. Quantitative determination of α and f parameters for κo NAA

    International Nuclear Information System (INIS)

    Moon, J. H.; Kim, S. H.; Jeong, Y. S.

    2002-01-01

    Instrumental Neutron Activation Analysis, as a representative nuclear analytical technique, has the advantages of non-destructive, simultaneous multi-element analysis with the characteristics of absolute measurement. Recently, use of the κ0 quantitative method, which is accurate, convenient and user-friendly, has become widespread worldwide. In this study, the α and f parameters, which are indispensable for implementing κ0 NAA, were experimentally measured at the NAA No.1 irradiation hole of the HANARO research reactor. In addition, it was intended to apply the method to routine analysis by establishing a reliable and effective measurement procedure

  20. Ratio of slopes method for quantitative analysis in ceramic bodies

    International Nuclear Information System (INIS)

    Zainal Arifin Ahmad; Ahmad Fauzi Mohd Noor; Radzali Othman; Messer, P.F.

    1996-01-01

    A quantitative X-ray diffraction analysis technique developed at the University of Sheffield was adopted, rather than the previously widely used internal standard method, to determine the amount of the phases present in a reformulated whiteware porcelain and a BaTiO3 electrochemical material. This method, although it still employs an internal standard, was found to be very easy and accurate. The required weight fraction of a phase in the mixture to be analysed is determined from the ratio of slopes of two linear plots, designated as the analysis and reference lines, passing through their origins using the least squares method
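
    Numerically the method reduces to two constrained least-squares fits and one division, as the sketch below illustrates (all intensity values are invented for the example):

      import numpy as np

      def slope_through_origin(x, y):
          """Least-squares slope of y = m*x, line forced through the origin."""
          return np.sum(x * y) / np.sum(x * x)

      x = np.array([0.1, 0.2, 0.3, 0.4])                 # internal-standard additions
      y_analysis  = np.array([0.32, 0.61, 0.95, 1.24])   # sample intensity ratios
      y_reference = np.array([0.51, 1.02, 1.49, 2.01])   # pure-phase intensity ratios

      w = slope_through_origin(x, y_analysis) / slope_through_origin(x, y_reference)
      print(f"estimated weight fraction of the phase: {w:.2f}")   # ~0.62 here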

  1. A CT-based method for fully quantitative 201Tl SPECT

    International Nuclear Information System (INIS)

    Willowson, Kathy; Bailey, Dale; Baldock, Clive

    2009-01-01

    Full text: Objectives: To develop and validate a method for quantitative 201Tl SPECT data based on corrections derived from X-ray CT data, and to apply the method in the clinic for quantitative determination of recurrence of brain tumours. Method: A previously developed method for achieving quantitative SPECT with 99mTc based on corrections derived from X-ray CT data was extended to apply to 201Tl. Experimental validation was performed on a cylindrical phantom by comparing known injected activity and measured concentration to quantitative calculations. Further evaluation was performed on a RSI Striatal Brain Phantom containing three 'lesions' with activity to background ratios of 1:1, 1.5:1 and 2:1. The method was subsequently applied to a series of scans from patients with suspected recurrence of brain tumours (principally glioma) to determine an SUV-like measure (Standardised Uptake Value). Results: The total activity and concentration in the phantom were calculated to within 3% and 1% of the true values, respectively. The calculated values for the concentration of activity in the background and corresponding lesions of the brain phantom (in increasing ratios) were found to be within 2%, 10%, 1% and 2%, respectively, of the true concentrations. Patient studies showed that an initial SUV greater than 1.5 corresponded to a 56% mortality rate in the first 12 months, as opposed to a 14% mortality rate for those with a SUV less than 1.5. Conclusion: The quantitative technique produces accurate results for the radionuclide 201Tl. Initial investigation in clinical brain SPECT suggests correlation between quantitative uptake and survival.

  2. Quantitative SIMS Imaging of Agar-Based Microbial Communities.

    Science.gov (United States)

    Dunham, Sage J B; Ellis, Joseph F; Baig, Nameera F; Morales-Soto, Nydia; Cao, Tianyuan; Shrout, Joshua D; Bohn, Paul W; Sweedler, Jonathan V

    2018-05-01

    After several decades of widespread use for mapping elemental ions and small molecular fragments in surface science, secondary ion mass spectrometry (SIMS) has emerged as a powerful analytical tool for molecular imaging in biology. Biomolecular SIMS imaging has primarily been used as a qualitative technique; although the distribution of a single analyte can be accurately determined, it is difficult to map the absolute quantity of a compound or even to compare the relative abundance of one molecular species to that of another. We describe a method for quantitative SIMS imaging of small molecules in agar-based microbial communities. The microbes are cultivated on a thin film of agar, dried under nitrogen, and imaged directly with SIMS. By use of optical microscopy, we show that the area of the agar is reduced by 26 ± 2% (standard deviation) during dehydration, but the overall biofilm morphology and analyte distribution are largely retained. We detail a quantitative imaging methodology, in which the ion intensity of each analyte is (1) normalized to an external quadratic regression curve, (2) corrected for isomeric interference, and (3) filtered for sample-specific noise and lower and upper limits of quantitation. The end result is a two-dimensional surface density image for each analyte. The sample preparation and quantitation methods are validated by quantitatively imaging four alkyl-quinolone and alkyl-quinoline N-oxide signaling molecules (including Pseudomonas quinolone signal) in Pseudomonas aeruginosa colony biofilms. We show that the relative surface densities of the target biomolecules are substantially different from values inferred through direct intensity comparison and that the developed methodologies can be used to quantitatively compare as many ions as there are available standards.
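
    Steps (1) and (3) of that pipeline are easy to make concrete. The sketch below fits a quadratic external calibration, inverts it pixel by pixel, and blanks pixels outside the limits of quantitation; the standard densities and intensities are invented, and the smallest non-negative root is taken when the inversion is ambiguous.

      import numpy as np

      dens_std  = np.array([0.0, 0.5, 1.0, 2.0, 4.0])       # standard surface densities
      inten_std = np.array([0.02, 0.55, 1.18, 2.60, 6.10])  # measured ion intensities
      a, b, c = np.polyfit(dens_std, inten_std, 2)          # I = a*d**2 + b*d + c

      def to_density(I):
          """Invert the quadratic calibration; take the smallest non-negative root."""
          roots = np.roots([a, b, c - I])
          real = roots[np.isreal(roots)].real
          pos = real[real >= 0]
          return pos.min() if pos.size else np.nan

      image = np.random.default_rng(1).uniform(0.0, 7.0, (64, 64))  # fake ion image
      density = np.vectorize(to_density)(image)

      lloq, uloq = dens_std[1], dens_std[-1]                # limits of quantitation
      density[(density < lloq) | (density > uloq)] = np.nan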

  3. Verification of Scientific Simulations via Hypothesis-Driven Comparative and Quantitative Visualization

    Energy Technology Data Exchange (ETDEWEB)

    Ahrens, James P [ORNL; Heitmann, Katrin [ORNL; Petersen, Mark R [ORNL; Woodring, Jonathan [Los Alamos National Laboratory (LANL); Williams, Sean [Los Alamos National Laboratory (LANL); Fasel, Patricia [Los Alamos National Laboratory (LANL); Ahrens, Christine [Los Alamos National Laboratory (LANL); Hsu, Chung-Hsing [ORNL; Geveci, Berk [ORNL

    2010-11-01

    This article presents a visualization-assisted process that verifies scientific-simulation codes. Code verification is necessary because scientists require accurate predictions to interpret data confidently. This verification process integrates iterative hypothesis verification with comparative, feature, and quantitative visualization. Following this process can help identify differences in cosmological and oceanographic simulations.

  4. Quantitative dynamic nuclear polarization‐NMR on blood plasma for assays of drug metabolism

    DEFF Research Database (Denmark)

    Lerche, Mathilde Hauge; Meier, Sebastian; Jensen, Pernille Rose

    2011-01-01

    ‐scan 13C DNP‐NMR. An internal standard is used for the accurate quantification of drug and metabolite. Comparison of quantitative DNP‐NMR data with an established analytical method (liquid chromatography‐mass spectrometry) yields a Pearson correlation coefficient r of 0.99. Notably, all DNP...

  5. In vivo quantitative NMR imaging of fruit tissues during growth using Spoiled Gradient Echo sequence

    DEFF Research Database (Denmark)

    Kenouche, S.; Perrier, M.; Bertin, N.

    2014-01-01

    of this study was to design a robust and accurate quantitative measurement method based on NMR imaging combined with contrast agent (CA) for mapping and quantifying water transport in growing cherry tomato fruits. A multiple flip-angle Spoiled Gradient Echo (SGE) imaging sequence was used to evaluate...

  6. An approach for the accurate measurement of social morality levels.

    Science.gov (United States)

    Liu, Haiyan; Chen, Xia; Zhang, Bo

    2013-01-01

    In the social sciences, computer-based modeling has become an increasingly important tool receiving widespread attention. However, the derivation of the quantitative relationships linking individual moral behavior and social morality levels, so as to provide a useful basis for social policy-making, remains a challenge in the scholarly literature today. A quantitative measurement of morality from the perspective of complexity science constitutes an innovative attempt. Based on the NetLogo platform, this article examines the effect of various factors on social morality levels, using agents modeling moral behavior, immoral behavior, and a range of environmental social resources. Threshold values for the various parameters are obtained through sensitivity analysis; and practical solutions are proposed for reversing declines in social morality levels. The results show that: (1) Population size may accelerate or impede the speed with which immoral behavior comes to determine the overall level of social morality, but it has no effect on the level of social morality itself; (2) The impact of rewards and punishment on social morality levels follows the "5∶1 rewards-to-punishment rule," which is to say that 5 units of rewards have the same effect as 1 unit of punishment; (3) The abundance of public resources is inversely related to the level of social morality; (4) When the cost of population mobility reaches 10% of the total energy level, immoral behavior begins to be suppressed (i.e. the 1/10 moral cost rule). The research approach and methods presented in this paper successfully address the difficulties involved in measuring social morality levels, and promise extensive application potentials.

  7. Single-Cell Based Quantitative Assay of Chromosome Transmission Fidelity.

    Science.gov (United States)

    Zhu, Jin; Heinecke, Dominic; Mulla, Wahid A; Bradford, William D; Rubinstein, Boris; Box, Andrew; Haug, Jeffrey S; Li, Rong

    2015-03-30

    Errors in mitosis are a primary cause of chromosome instability (CIN), generating aneuploid progeny cells. Whereas a variety of factors can influence CIN, under most conditions mitotic errors are rare events that have been difficult to measure accurately. Here we report a green fluorescent protein-based quantitative chromosome transmission fidelity (qCTF) assay in budding yeast that allows sensitive and quantitative detection of CIN and can be easily adapted to high-throughput analysis. Using the qCTF assay, we performed genome-wide quantitative profiling of genes that affect CIN in a dosage-dependent manner and identified genes that elevate CIN when either increased (icCIN) or decreased in copy number (dcCIN). Unexpectedly, qCTF screening also revealed genes whose change in copy number quantitatively suppress CIN, suggesting that the basal error rate of the wild-type genome is not minimized, but rather, may have evolved toward an optimal level that balances both stability and low-level karyotype variation for evolutionary adaptation. Copyright © 2015 Zhu et al.

  8. Quantitative X-ray microanalysis of biological specimens

    International Nuclear Information System (INIS)

    Roomans, G.M.

    1988-01-01

    Quantitative X-ray microanalysis of biological specimens requires an approach that is somewhat different from that used in the materials sciences. The first step is deconvolution and background subtraction on the obtained spectrum. The further treatment depends on the type of specimen: thin, thick, or semithick. For thin sections, the continuum method of quantitation is most often used, but it should be combined with an accurate correction for extraneous background. However, alternative methods to determine local mass should also be considered. In the analysis of biological bulk specimens, the ZAF-correction method appears to be less useful, primarily because of the uneven surface of biological specimens. The peak-to-local-background model may be a more adequate method for thick specimens that are not mounted on a thick substrate. Quantitative X-ray microanalysis of biological specimens generally requires the use of standards that preferably should resemble the specimen in chemical and physical properties. Special problems in biological microanalysis include low count rates, specimen instability and mass loss, extraneous contributions to the spectrum, and preparative artifacts affecting quantitation. A relatively recent development in X-ray microanalysis of biological specimens is the quantitative determination of local water content

  9. Reconciling Anti-essentialism and Quantitative Methodology

    DEFF Research Database (Denmark)

    Jensen, Mathias Fjællegaard

    2017-01-01

    Quantitative methodology has a contested role in feminist scholarship which remains almost exclusively qualitative. Considering Irigaray’s notion of mimicry, Spivak’s strategic essentialism, and Butler’s contingent foundations, the essentialising implications of quantitative methodology may prove...... the potential to reconcile anti-essentialism and quantitative methodology, and thus, to make peace in the quantitative/qualitative Paradigm Wars....

  10. ACVP-02: Plasma SIV/SHIV RNA Viral Load Measurements through the AIDS and Cancer Virus Program Quantitative Molecular Diagnostics Core | Frederick National Laboratory for Cancer Research

    Science.gov (United States)

    The SIV plasma viral load assay performed by the Quantitative Molecular Diagnostics Core (QMDC) utilizes reagents specifically designed to detect and accurately quantify the full range of SIV/SHIV viral variants and clones in common usage in the rese

  11. Accurate episomal HIV 2-LTR circles quantification using optimized DNA isolation and droplet digital PCR.

    Science.gov (United States)

    Malatinkova, Eva; Kiselinova, Maja; Bonczkowski, Pawel; Trypsteen, Wim; Messiaen, Peter; Vermeire, Jolien; Verhasselt, Bruno; Vervisch, Karen; Vandekerckhove, Linos; De Spiegelaere, Ward

    2014-01-01

    In HIV-infected patients on combination antiretroviral therapy (cART), the detection of episomal HIV 2-LTR circles is a potential marker for ongoing viral replication. Quantification of 2-LTR circles is based on quantitative PCR or, more recently, on digital PCR assessment, but is hampered by their low abundance. Sample pre-PCR processing is a critical step for 2-LTR circle quantification, which has not yet been sufficiently evaluated in patient-derived samples. We compared two sample processing procedures to more accurately quantify 2-LTR circles using droplet digital PCR (ddPCR). Episomal HIV 2-LTR circles were either isolated by genomic DNA isolation or by a modified plasmid DNA isolation, to separate the small episomal circular DNA from chromosomal DNA. This was performed in a dilution series of HIV-infected cells and HIV-1-infected patient-derived samples (n=59). Samples for the plasmid DNA isolation method were spiked with an internal control plasmid. Genomic DNA isolation enables robust 2-LTR circle quantification. However, in the lower ranges of detection, PCR inhibition caused by the high genomic DNA load substantially limits the amount of sample input, and this impacts sensitivity and accuracy. Moreover, total genomic DNA isolation resulted in a lower recovery of 2-LTR templates per isolate, further reducing its sensitivity. The modified plasmid DNA isolation with a spiked reference for normalization was more accurate in these low ranges compared to genomic DNA isolation. A linear correlation of both methods was observed in the dilution series (R2=0.974) and in the patient-derived samples with 2-LTR numbers above 10 copies per million peripheral blood mononuclear cells (PBMCs) (R2=0.671). Furthermore, Bland-Altman analysis revealed an average agreement between the methods within the 27 samples in which 2-LTR circles were detectable with both methods (bias: 0.3875±1.2657 log10). 2-LTR circle quantification in HIV-infected patients proved to be more
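
    For reference, the Bland-Altman figures quoted above come from a very simple computation on paired log10 measurements, sketched here with placeholder data:

      import numpy as np

      log_a = np.log10([12.0, 35.0, 150.0, 8.0, 60.0])    # method A, copies per 1e6 PBMCs
      log_b = np.log10([15.0, 30.0, 180.0, 11.0, 75.0])   # method B, same samples

      diff = log_b - log_a
      bias, sd = diff.mean(), diff.std(ddof=1)
      print(f"bias = {bias:.4f} log10, 95% limits of agreement = "
            f"[{bias - 1.96 * sd:.4f}, {bias + 1.96 * sd:.4f}]")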

  12. Reducing dose calculation time for accurate iterative IMRT planning

    International Nuclear Information System (INIS)

    Siebers, Jeffrey V.; Lauterbach, Marc; Tong, Shidong; Wu Qiuwen; Mohan, Radhe

    2002-01-01

    A time-consuming component of IMRT optimization is the dose computation required in each iteration for the evaluation of the objective function. Accurate superposition/convolution (SC) and Monte Carlo (MC) dose calculations are currently considered too time-consuming for iterative IMRT dose calculation. Thus, fast but less accurate algorithms such as pencil beam (PB) algorithms are typically used in most current IMRT systems. This paper describes two hybrid methods that utilize the speed of fast PB algorithms yet achieve the accuracy of optimizing based upon SC algorithms via the application of dose correction matrices. In one method, the ratio method, an infrequently computed voxel-by-voxel dose ratio matrix (R = D_SC/D_PB) is applied for each beam to the dose distributions calculated with the PB method during the optimization. That is, D_PB × R is used for the dose calculation during the optimization. The optimization proceeds until both the IMRT beam intensities and the dose correction ratio matrix converge. In the second method, the correction method, a periodically computed voxel-by-voxel correction matrix for each beam, defined to be the difference between the SC and PB dose computations, is used to correct PB dose distributions. To validate the methods, IMRT treatment plans developed with the hybrid methods are compared with those obtained when the SC algorithm is used for all optimization iterations and with those obtained when PB-based optimization is followed by SC-based optimization. In the 12 patient cases studied, no clinically significant differences exist in the final treatment plans developed with each of the dose computation methodologies. However, the number of time-consuming SC iterations is reduced from 6-32 for pure SC optimization to four or less for the ratio matrix method and five or less for the correction method. Because the PB algorithm is faster at computing dose, this reduces the inverse planning optimization time for our implementation
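
    A schematic of the ratio method in code: optimize beamlet weights against the fast PB dose, refreshing the voxel-wise ratio matrix R = D_SC/D_PB only occasionally. Both dose engines are stand-ins (random matrices) and the optimizer is a bare projected gradient step, so this shows the control flow only, not a clinical objective.

      import numpy as np

      rng = np.random.default_rng(0)
      n_vox, n_blt = 200, 40
      A_pb = rng.uniform(0.0, 1.0, (n_vox, n_blt))      # fast pencil-beam dose model
      A_sc = A_pb * rng.uniform(0.9, 1.1, (n_vox, 1))   # stand-in for the accurate SC dose
      d_target = np.full(n_vox, 60.0)

      w = np.ones(n_blt)                                # beamlet intensities
      R = np.ones(n_vox)                                # voxel-wise correction ratio
      for it in range(300):
          if it % 100 == 0:                             # infrequent 'SC' evaluation
              R = (A_sc @ w) / np.maximum(A_pb @ w, 1e-9)
          d = R * (A_pb @ w)                            # corrected fast dose
          grad = A_pb.T @ (R * (d - d_target))          # gradient of ||d - d_target||^2 / 2
          w = np.maximum(w - 1e-3 * grad / n_vox, 0.0)  # projected gradient step
      print(f"rms deviation from target (SC dose): "
            f"{np.sqrt(np.mean((A_sc @ w - d_target) ** 2)):.2f}")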

  13. Accurate, low-cost 3D-models of gullies

    Science.gov (United States)

    Onnen, Nils; Gronz, Oliver; Ries, Johannes B.; Brings, Christine

    2015-04-01

    Soil erosion is a widespread problem in arid and semi-arid areas. Its most severe form is gully erosion. Gullies often cut into agricultural farmland and can make a certain area completely unproductive. To understand the development and processes inside and around gullies, we calculated detailed 3D models of gullies in the Souss Valley in South Morocco. Near Taroudant, we had four study areas with five gullies differing in size, volume and activity. Using a Canon HF G30 camcorder, we recorded several series of Full HD videos at 25 fps. Afterwards, we used the Structure from Motion (SfM) method to create the models. To generate accurate models while maintaining feasible runtimes, it is necessary to select around 1500-1700 images from the video, while the overlap of neighboring images should be at least 80%. In addition, it is very important to avoid selecting photos that are blurry or out of focus. Nearby pixels of a blurry image tend to have similar color values. That is why we used a MATLAB script to compare the derivatives of the images: for images of similar objects, the higher the sum of the derivatives, the sharper the image. MATLAB subdivides the video into image intervals. From each interval, the image with the highest sum is selected. E.g., a 20 min video at 25 fps equals 30,000 single images. The program inspects the first 20 images, saves the sharpest and moves on to the next 20 images, etc. Using this algorithm, we selected 1500 images for our modeling. With VisualSFM, we calculated features and the matches between all images and produced a point cloud. Then, MeshLab was used to build a surface out of it using the Poisson surface reconstruction approach. Afterwards we are able to calculate the size and the volume of the gullies. It is also possible to determine soil erosion rates if we compare the data with old recordings. The final step would be the combination of the terrestrial data with the data from our aerial photography. So far, the method works well and we
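
    The selection rule is easy to state in code. The sketch below (a Python rendering of what the text describes doing in MATLAB) scores each decoded grayscale frame by its summed gradient magnitude and keeps the sharpest frame per interval; 'frames' is assumed to be an array of frames already decoded from the video.

      import numpy as np

      def sharpness(frame):
          """Sum of absolute spatial derivatives; blurry frames score low."""
          gy, gx = np.gradient(frame.astype(float))
          return np.abs(gx).sum() + np.abs(gy).sum()

      def select_frames(frames, n_keep):
          """Keep the sharpest frame from each of roughly n_keep equal intervals."""
          interval = max(1, len(frames) // n_keep)
          keep = []
          for start in range(0, len(frames), interval):
              chunk = frames[start:start + interval]
              best = max(range(len(chunk)), key=lambda k: sharpness(chunk[k]))
              keep.append(start + best)
          return keep[:n_keep]

      # e.g. 30,000 decoded grayscale frames -> indices of ~1,500 selected frames
      # selected = select_frames(frames, 1500)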

  14. A spectroscopic transfer standard for accurate atmospheric CO measurements

    Science.gov (United States)

    Nwaboh, Javis A.; Li, Gang; Serdyukov, Anton; Werhahn, Olav; Ebert, Volker

    2016-04-01

    Atmospheric carbon monoxide (CO) is a precursor of essential climate variables and has an indirect enhancing effect on global warming. Accurate and reliable measurements of atmospheric CO concentration are becoming indispensable. WMO-GAW reports state a compatibility goal of ±2 ppb for atmospheric CO concentration measurements. Therefore, the EMRP-HIGHGAS (European metrology research program - high-impact greenhouse gases) project aims at developing spectroscopic transfer standards for CO concentration measurements to meet this goal. A spectroscopic transfer standard would provide results that are directly traceable to the SI, can be very useful for calibration of devices operating in the field, and could complement classical gas standards in the field, where calibration gas mixtures in bottles often are not accurate, available or stable enough [1][2]. Here, we present our new direct tunable diode laser absorption spectroscopy (dTDLAS) sensor, capable of performing absolute ("calibration free") CO concentration measurements and of being operated as a spectroscopic transfer standard. To achieve the compatibility goal stated by the WMO for CO concentration measurements and ensure the traceability of the final concentration results, traceable spectral line data, especially line intensities with appropriate uncertainties, are needed. Therefore, we utilize our new high-resolution Fourier-transform infrared (FTIR) spectroscopy CO line data for the 2-0 band, with significantly reduced uncertainties, for the dTDLAS data evaluation. Further, we demonstrate the capability of our sensor for atmospheric CO measurements, discuss uncertainty calculation following the guide to the expression of uncertainty in measurement (GUM) principles, and show that CO concentrations derived using the sensor, based on the TILSAM (traceable infrared laser spectroscopic amount fraction measurement) method, are in excellent agreement with gravimetric values. Acknowledgement Parts of this work have been

  15. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    Science.gov (United States)

    Moore, Michael J; Dhingra, Amit; Soltis, Pamela S; Shaw, Regina; Farmerie, William G; Folta, Kevin M; Soltis, Douglas E

    2006-01-01

    Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20) System (454 Life Sciences Corporation), to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae) and Platanus occidentalis (Platanaceae). Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions. Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy observed in the GS 20 plastid

  16. Rapid and accurate pyrosequencing of angiosperm plastid genomes

    Directory of Open Access Journals (Sweden)

    Farmerie William G

    2006-08-01

    Full Text Available Abstract Background Plastid genome sequence information is vital to several disciplines in plant biology, including phylogenetics and molecular biology. The past five years have witnessed a dramatic increase in the number of completely sequenced plastid genomes, fuelled largely by advances in conventional Sanger sequencing technology. Here we report a further significant reduction in time and cost for plastid genome sequencing through the successful use of a newly available pyrosequencing platform, the Genome Sequencer 20 (GS 20) System (454 Life Sciences Corporation), to rapidly and accurately sequence the whole plastid genomes of the basal eudicot angiosperms Nandina domestica (Berberidaceae) and Platanus occidentalis (Platanaceae). Results More than 99.75% of each plastid genome was simultaneously obtained during two GS 20 sequence runs, to an average depth of coverage of 24.6× in Nandina and 17.3× in Platanus. The Nandina and Platanus plastid genomes shared essentially identical gene complements and possessed the typical angiosperm plastid structure and gene arrangement. To assess the accuracy of the GS 20 sequence, over 45 kilobases of sequence were generated for each genome using conventional sequencing. Overall error rates of 0.043% and 0.031% were observed in GS 20 sequence for Nandina and Platanus, respectively. More than 97% of all observed errors were associated with homopolymer runs, with ~60% of all errors associated with homopolymer runs of 5 or more nucleotides and ~50% of all errors associated with regions of extensive homopolymer runs. No substitution errors were present in either genome. Error rates were generally higher in the single-copy and noncoding regions of both plastid genomes relative to the inverted repeat and coding regions. Conclusion Highly accurate and essentially complete sequence information was obtained for the Nandina and Platanus plastid genomes using the GS 20 System. More importantly, the high accuracy

  17. Ultra-fast quantitative imaging using ptychographic iterative engine based digital micro-mirror device

    Science.gov (United States)

    Sun, Aihui; Tian, Xiaolin; Kong, Yan; Jiang, Zhilong; Liu, Fei; Xue, Liang; Wang, Shouyu; Liu, Cheng

    2018-01-01

    As a lensfree imaging technique, the ptychographic iterative engine (PIE) method can provide both quantitative sample amplitude and phase distributions while avoiding aberration. However, it requires field-of-view (FoV) scanning, which often relies on mechanical translation; this not only slows down the measuring speed, but also introduces mechanical errors that decrease both the resolution and the accuracy of the retrieved information. In order to achieve highly accurate quantitative imaging at high speed, a digital micromirror device (DMD) is adopted in PIE, with large-FoV scanning controlled by coding the on/off states of the DMD. Measurements were implemented using biological samples as well as a USAF resolution target, proving high resolution in quantitative imaging using the proposed system. Considering its fast and accurate imaging capability, it is believed that the DMD-based PIE technique provides a potential solution for medical observation and measurements.
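
    For orientation, the heart of any PIE-family reconstruction is a modulus-constraint update of the object estimate at each scan position. The sketch below shows the widely used ePIE form of that update; it is the textbook update, not necessarily the exact variant used in the system described above.

      import numpy as np

      def epie_update(obj, probe, intensity, alpha=1.0):
          """One ePIE object update at a single, already aligned scan position."""
          psi = obj * probe                                       # exit wave estimate
          Psi = np.fft.fft2(psi)
          Psi = np.sqrt(intensity) * np.exp(1j * np.angle(Psi))   # keep phase, force modulus
          psi_new = np.fft.ifft2(Psi)
          obj += alpha * np.conj(probe) * (psi_new - psi) / (np.abs(probe) ** 2).max()
          return obj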

  18. High-performance liquid chromatographic quantitation of desmosine plus isodesmosine in elastin and whole tissue hydrolysates

    International Nuclear Information System (INIS)

    Soskel, N.T.

    1987-01-01

    Quantitation of desmosine and isodesmosine, the major crosslinks in elastin, has been of interest because of their uniqueness and use as markers of that protein. Accurate measurement of these crosslinks may allow determination of elastin degradation in vivo and elastin content in tissues, obviating lengthy extraction procedures. We have developed a method of quantitating desmosine plus isodesmosine in hydrolysates of tissue and insoluble elastin using high-performance liquid chromatographic separation and absorbance detection that is rapid (21-35 min) and sensitive (accurate linearity from 100 pmol to 5 nmol). This method has been used to quantitate desmosines in elastin from bovine nuchal ligament and lung and in whole aorta from hamsters. The ability to completely separate [3H]lysine from desmosine plus isodesmosine allows the method to be used to study incorporation of lysine into crosslinks in elastin

  19. Quantitative (real-time) PCR

    International Nuclear Information System (INIS)

    Denman, S.E.; McSweeney, C.S.

    2005-01-01

    Many nucleic acid-based probe and PCR assays have been developed for the detection and tracking of specific microbes within the rumen ecosystem. Conventional PCR assays detect PCR products at the end stage of each PCR reaction, where exponential amplification is no longer being achieved. This approach can result in different end product (amplicon) quantities being generated. In contrast, using quantitative, or real-time, PCR, quantification of the amplicon is performed not at the end of the reaction, but rather during exponential amplification, where theoretically each cycle will result in a doubling of the product being created. For real-time PCR, the cycle at which fluorescence is deemed to be detectable above the background during the exponential phase is termed the cycle threshold (Ct). The Ct values obtained are then used for quantitation, which will be discussed later
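
    The arithmetic behind Ct-based quantitation is compact enough to show directly. Assuming near-ideal doubling per cycle (efficiency E = 2), a template that crosses the threshold ΔCt cycles earlier than a reference is E**ΔCt times more abundant; the function and numbers below are illustrative.

      def fold_difference(ct_sample, ct_reference, efficiency=2.0):
          """Relative template abundance of sample vs reference from their Ct values."""
          return efficiency ** (ct_reference - ct_sample)

      # A target at Ct 24 vs Ct 30 elsewhere: 2**6 = 64 times more starting template
      print(fold_difference(24.0, 30.0))   # 64.0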

  20. QUANTITATIVE CONFOCAL LASER SCANNING MICROSCOPY

    Directory of Open Access Journals (Sweden)

    Merete Krog Raarup

    2011-05-01

    Full Text Available This paper discusses recent advances in confocal laser scanning microscopy (CLSM) for imaging of 3D structure as well as quantitative characterization of biomolecular interactions and diffusion behaviour by means of one- and two-photon excitation. The use of CLSM for improved stereological length estimation in thick (up to 0.5 mm) tissue is proposed. The techniques of FRET (Fluorescence Resonance Energy Transfer), FLIM (Fluorescence Lifetime Imaging Microscopy), FCS (Fluorescence Correlation Spectroscopy) and FRAP (Fluorescence Recovery After Photobleaching) are introduced and their applicability for quantitative imaging of biomolecular (co-)localization and trafficking in live cells described. The advantage of two-photon versus one-photon excitation in relation to these techniques is discussed.

  1. Quantitative phase imaging of arthropods

    Science.gov (United States)

    Sridharan, Shamira; Katz, Aron; Soto-Adames, Felipe; Popescu, Gabriel

    2015-11-01

    Classification of arthropods is performed by characterization of fine features such as setae and cuticles. An unstained whole arthropod specimen mounted on a slide can be preserved for many decades, but is difficult to study since current methods require sample manipulation or tedious image processing. Spatial light interference microscopy (SLIM) is a quantitative phase imaging (QPI) technique that is an add-on module to a commercial phase contrast microscope. We use SLIM to image a whole organism springtail Ceratophysella denticulata mounted on a slide. This is the first time, to our knowledge, that an entire organism has been imaged using QPI. We also demonstrate the ability of SLIM to image fine structures in addition to providing quantitative data that cannot be obtained by traditional bright field microscopy.

  2. Hydration free energies of cyanide and hydroxide ions from molecular dynamics simulations with accurate force fields

    Science.gov (United States)

    Lee, M.W.; Meuwly, M.

    2013-01-01

    The evaluation of hydration free energies is a sensitive test to assess force fields used in atomistic simulations. We showed recently that the vibrational relaxation times, 1D- and 2D-infrared spectroscopies for CN(-) in water can be quantitatively described from molecular dynamics (MD) simulations with multipolar force fields and slightly enlarged van der Waals radii for the C- and N-atoms. To validate such an approach, the present work investigates the solvation free energy of cyanide in water using MD simulations with accurate multipolar electrostatics. It is found that larger van der Waals radii are indeed necessary to obtain results close to the experimental values when a multipolar force field is used. For CN(-), the van der Waals ranges refined in our previous work yield hydration free energy between -72.0 and -77.2 kcal mol(-1), which is in excellent agreement with the experimental data. In addition to the cyanide ion, we also study the hydroxide ion to show that the method used here is readily applicable to similar systems. Hydration free energies are found to sensitively depend on the intermolecular interactions, while bonded interactions are less important, as expected. We also investigate in the present work the possibility of applying the multipolar force field in scoring trajectories generated using computationally inexpensive methods, which should be useful in broader parametrization studies with reduced computational resources, as scoring is much faster than the generation of the trajectories.

  3. Accurate step-hold tracking of smoothly varying periodic and aperiodic probability.

    Science.gov (United States)

    Ricci, Matthew; Gallistel, Randy

    2017-07-01

    Subjects observing many samples from a Bernoulli distribution are able to perceive an estimate of the generating parameter. A question of fundamental importance is how the current percept (what we think the probability now is) depends on the sequence of observed samples. Answers to this question are strongly constrained by the manner in which the current percept changes in response to changes in the hidden parameter. Subjects do not update their percept trial-by-trial when the hidden probability undergoes unpredictable and unsignaled step changes; instead, they update it only intermittently in a step-hold pattern. It could be that the step-hold pattern is not essential to the perception of probability and is only an artifact of step changes in the hidden parameter. However, we now report that the step-hold pattern obtains even when the parameter varies slowly and smoothly. It obtains even when the smooth variation is periodic (sinusoidal) and perceived as such. We elaborate on a previously published theory that accounts for: (i) the quantitative properties of the step-hold update pattern; (ii) subjects' quick and accurate reporting of changes; (iii) subjects' second thoughts about previously reported changes; (iv) subjects' detection of higher-order structure in patterns of change. We also call attention to the challenges these results pose for trial-by-trial updating theories.

  4. Robust and Accurate Discrimination of Self/Non-Self Antigen Presentations by Regulatory T Cell Suppression.

    Directory of Open Access Journals (Sweden)

    Chikara Furusawa

    Full Text Available The immune response by T cells usually discriminates self and non-self antigens, even though the negative selection of self-reactive T cells is imperfect and a certain fraction of T cells can respond to self-antigens. In this study, we construct a simple mathematical model of T cell populations to analyze how such self/non-self discrimination is possible. The results demonstrate that the control of the immune response by regulatory T cells enables a robust and accurate discrimination of self and non-self antigens, even when there is a significant overlap between the affinity distribution of T cells to self and non-self antigens. Here, the number of regulatory T cells in the system acts as a global variable controlling the T cell population dynamics. The present study provides a basis for the development of a quantitative theory for self and non-self discrimination in the immune system and a possible strategy for its experimental verification.

  5. Robust and Accurate Discrimination of Self/Non-Self Antigen Presentations by Regulatory T Cell Suppression.

    Science.gov (United States)

    Furusawa, Chikara; Yamaguchi, Tomoyuki

    The immune response by T cells usually discriminates self and non-self antigens, even though the negative selection of self-reactive T cells is imperfect and a certain fraction of T cells can respond to self-antigens. In this study, we construct a simple mathematical model of T cell populations to analyze how such self/non-self discrimination is possible. The results demonstrate that the control of the immune response by regulatory T cells enables a robust and accurate discrimination of self and non-self antigens, even when there is a significant overlap between the affinity distribution of T cells to self and non-self antigens. Here, the number of regulatory T cells in the system acts as a global variable controlling the T cell population dynamics. The present study provides a basis for the development of a quantitative theory for self and non-self discrimination in the immune system and a possible strategy for its experimental verification.

  6. Comparison of PIV with 4D-Flow in a physiological accurate flow phantom

    Science.gov (United States)

    Sansom, Kurt; Balu, Niranjan; Liu, Haining; Aliseda, Alberto; Yuan, Chun; Canton, Maria De Gador

    2016-11-01

    Validation of 4D MRI flow sequences with planar particle image velocimetry (PIV) is performed in a physiologically accurate flow phantom. A patient-specific phantom of a carotid artery is connected to a pulsatile flow loop to simulate the 3D unsteady flow in the cardiovascular anatomy. Cardiac-cycle-synchronized MRI provides time-resolved 3D blood velocity measurements in a clinical tool that is promising but lacks a robust validation framework. PIV at three different Reynolds numbers (540, 680, and 815, chosen based on +/- 20% of the average velocity from the patient-specific CCA waveform) and four different Womersley numbers (3.30, 3.68, 4.03, and 4.35, chosen to reflect a physiological range of heart rates) is compared to 4D-MRI measurements. An accuracy assessment of raw velocity measurements and a comparison of estimated and measurable flow parameters, such as wall shear stress, fluctuating velocity rms, and Lagrangian particle residence time, will be presented, with justification of their biomechanical relevance to the pathophysiology of arterial disease: atherosclerosis and intimal hyperplasia. Lastly, the framework is applied to a new 4D-Flow MRI sequence and post-processing techniques to provide a quantitative assessment against the benchmarked data. Department of Education GAANN Fellowship.

  7. Accurate donor electron wave functions from a multivalley effective mass theory.

    Science.gov (United States)

    Pendo, Luke; Hu, Xuedong

    Multivalley effective mass (MEM) theories combine physical intuition with a marginal need for computational resources, but they tend to be insensitive to variations in the wavefunction. However, recent papers suggest full Bloch functions and suitable central cell donor potential corrections are essential to replicating qualitative and quantitative features of the wavefunction. In this talk, we consider a variational MEM method that can accurately predict both spectrum and wavefunction of isolated phosphorus donors. As per Gamble et al., we employ a truncated series representation of the Bloch function with a tetrahedrally symmetric central cell correction. We use a dynamic dielectric constant, a feature commonly seen in tight-binding methods. Uniquely, we use a freely extensible basis of either all Slater- or all Gaussian-type functions. With a large basis able to capture the influence of higher energy eigenstates, this method is well positioned to consider the influence of external perturbations, such as electric field or applied strain, on the charge density. This work is supported by the US Army Research Office (W911NF1210609).

  8. Using an educational electronic documentation system to help nursing students accurately identify patient data.

    Science.gov (United States)

    Pobocik, Tamara

    2015-01-01

    This quantitative research study used a pretest/posttest design and reviewed how an educational electronic documentation system helped nursing students to identify the accurate "related to" statement of the nursing diagnosis for the patient in the case study. Students in the sample population were senior nursing students in a bachelor of science nursing program in the northeastern United States. Two distinct groups were used for a control and intervention group. The intervention group used the educational electronic documentation system for three class assignments. Both groups were given a pretest and posttest case study. The Accuracy Tool was used to score the students' responses to the related to statement of a nursing diagnosis given at the end of the case study. The scores of the Accuracy Tool were analyzed, and then the numeric scores were placed in SPSS, and the paired t test scores were analyzed for statistical significance. The intervention group's scores were statistically different from the pretest scores to posttest scores, while the control group's scores remained the same from pretest to posttest. The recommendation to nursing education is to use the educational electronic documentation system as a teaching pedagogy to help nursing students prepare for nursing practice. © 2014 NANDA International, Inc.
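
    A minimal sketch of the paired pretest/posttest comparison described above (the study ran its analysis in SPSS; scipy provides the same paired t test). The scores below are fabricated placeholders, not the study's data.

        from scipy import stats

        pretest = [2, 3, 2, 4, 3, 2, 3, 2]     # hypothetical Accuracy Tool scores
        posttest = [4, 4, 3, 5, 4, 4, 4, 3]

        t_stat, p_value = stats.ttest_rel(posttest, pretest)
        print(f"paired t = {t_stat:.2f}, p = {p_value:.4f}")
        # p < 0.05 would indicate a statistically significant pre/post change.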

  9. Using pseudoalignment and base quality to accurately quantify microbial community composition.

    Directory of Open Access Journals (Sweden)

    Mark Reppell

    2018-04-01

    Full Text Available Pooled DNA from multiple unknown organisms arises in a variety of contexts, for example microbial samples from ecological or human health research. Determining the composition of pooled samples can be difficult, especially at the scale of modern sequencing data and reference databases. Here we propose a novel method for taxonomic profiling in pooled DNA that combines the speed and low-memory requirements of k-mer based pseudoalignment with a likelihood framework that uses base quality information to better resolve multiply mapped reads. We apply the method to the problem of classifying 16S rRNA reads using a reference database of known organisms, a common challenge in microbiome research. Using simulations, we show the method is accurate across a variety of read lengths, with different length reference sequences, at different sample depths, and when samples contain reads originating from organisms absent from the reference. We also assess performance in real 16S data, where we reanalyze previous genetic association data to show our method discovers a larger number of quantitative trait associations than other widely used methods. We implement our method in the software Karp, for k-mer based analysis of read pools, to provide a novel combination of speed and accuracy that is uniquely suited for enhancing discoveries in microbial studies.
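
    The core idea of weighting multiply mapped reads by base quality can be sketched with the standard Phred error model, in which a mismatch at a low-quality base is penalized far less than one at a high-quality base. This is an illustrative stand-in, not Karp's actual likelihood framework or its reassignment of multiply mapped reads.

        def phred_to_prob(q):
            """Convert a Phred quality score to an error probability."""
            return 10.0 ** (-q / 10.0)

        def read_likelihood(read, ref, quals):
            """P(read | ref) under an independent per-base error model;
            mismatches are charged e/3 (three possible wrong bases)."""
            p = 1.0
            for base, ref_base, q in zip(read, ref, quals):
                e = phred_to_prob(q)
                p *= (1.0 - e) if base == ref_base else e / 3.0
            return p

        # Toy example: one read scored against two candidate references.
        read = "ACGTACGT"
        quals = [30, 30, 10, 30, 30, 30, 12, 30]
        for ref in ("ACGTACGT", "ACGAACGT"):
            print(ref, read_likelihood(read, ref, quals))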

  10. Determination of accurate metal silicide layer thickness by RBS

    International Nuclear Information System (INIS)

    Kirchhoff, J.F.; Baumann, S.M.; Evans, C.; Ward, I.; Coveney, P.

    1995-01-01

    Rutherford Backscattering Spectrometry (RBS) is a proven useful analytical tool for determining compositional information of a wide variety of materials. One of the most widely utilized applications of RBS is the study of the composition of metal silicides (MSi{sub x}), also referred to as polycides. A key quantity obtained from an analysis of a metal silicide is the ratio of silicon to metal (Si/M). Although compositional information is very reliable in these applications, metal silicide layer thicknesses determined by RBS can differ from true layer thicknesses by more than 40%. The cause of these differences lies in how the densities used in the RBS analysis are calculated. The standard RBS analysis software packages calculate layer densities by assuming each element's bulk density weighted by its fractional atomic presence. This calculation causes large discrepancies in metal silicide thicknesses because most films form crystal structures with distinct densities. Assuming a constant layer density for a full range of Si/M values improves layer thickness determination but ignores the underlying physics of the films. We will present results of RBS determination of the thicknesses of various metal silicide films with a range of Si/M values, using a physically accurate model for the calculation of layer densities. The thicknesses are compared to scanning electron microscopy (SEM) cross-section micrographs. We have also developed supporting software that incorporates these calculations into routine analyses. (orig.)
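
    The size of the density effect can be sketched numerically: RBS yields an areal density (atoms/cm^2), and the inferred linear thickness scales inversely with the assumed atomic volume density. Below, the naive atomic-fraction-weighted mixture of bulk Ti and Si is compared with the compound density of TiSi2; the bulk densities are textbook values, while the TiSi2 density (~4.04 g/cm^3) and the measured areal density are illustrative assumptions.

        N_A = 6.022e23
        areal = 3.0e17    # atoms/cm^2, hypothetical RBS result

        def atoms_per_cm3(rho, molar_mass, atoms_per_formula=1):
            """Atomic number density of a phase of given mass density."""
            return rho / molar_mass * N_A * atoms_per_formula

        # Naive model: bulk Ti and Si densities weighted by atomic fraction.
        n_naive = (1 / 3) * atoms_per_cm3(4.51, 47.87) \
                + (2 / 3) * atoms_per_cm3(2.33, 28.09)

        # Physical model: TiSi2 crystal density, three atoms per formula unit.
        n_tisi2 = atoms_per_cm3(4.04, 104.05, atoms_per_formula=3)

        for label, n in (("naive mixture", n_naive), ("TiSi2 crystal", n_tisi2)):
            print(f"{label}: {areal / n * 1e7:6.1f} nm")    # 1 cm = 1e7 nm

    With these placeholder inputs the two densities differ by roughly a third, which translates directly into the kind of thickness discrepancy discussed above.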

  11. Highly accurate symplectic element based on two variational principles

    Science.gov (United States)

    Qing, Guanghui; Tian, Jia

    2018-02-01

    For the stability of numerical results, the mathematical theory of classical mixed methods is relatively complex. However, generalized mixed methods are automatically stable, and their building process is simple and straightforward. In this paper, based on the seminal idea of the generalized mixed methods, a simple, stable, and highly accurate 8-node noncompatible symplectic element (NCSE8) was developed by the combination of the modified Hellinger-Reissner mixed variational principle and the minimum energy principle. To ensure the accuracy of in-plane stress results, a simultaneous equation approach was also suggested. Numerical experimentation shows that the accuracy of the stress results of NCSE8 is nearly the same as that of displacement methods, and they are in good agreement with the exact solutions when the mesh is relatively fine. NCSE8 has the advantages of a clear concept, easy implementation in a finite element computer program, higher accuracy, and wide applicability to various linear elasticity problems with compressible and nearly incompressible materials. It is possible that NCSE8 becomes even more advantageous for fracture problems due to its better stress accuracy.

  12. Highly Accurate Calculations of the Phase Diagram of Cold Lithium

    Science.gov (United States)

    Shulenburger, Luke; Baczewski, Andrew

    The phase diagram of lithium is particularly complicated, exhibiting many different solid phases under the modest application of pressure. Experimental efforts to identify these phases using diamond anvil cells have been complemented by ab initio theory, primarily using density functional theory (DFT). Due to the multiplicity of crystal structures whose enthalpy is nearly degenerate and the uncertainty introduced by density functional approximations, we apply the highly accurate many-body diffusion Monte Carlo (DMC) method to the study of the solid phases at low temperature. These calculations span many different phases, including several with low symmetry, demonstrating the viability of DMC as a method for calculating phase diagrams for complex solids. Our results can be used as a benchmark to test the accuracy of various density functionals. This can strengthen confidence in DFT based predictions of more complex phenomena such as the anomalous melting behavior predicted for lithium at high pressures. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. DOE's National Nuclear Security Administration under contract DE-AC04-94AL85000.

  13. Accurate stereochemistry for two related 22,26-epiminocholestene derivatives

    Energy Technology Data Exchange (ETDEWEB)

    Vega-Baez, José Luis; Sandoval-Ramírez, Jesús; Meza-Reyes, Socorro; Montiel-Smith, Sara; Gómez-Calvario, Victor [Facultad de Ciencias Químicas, Benemérita Universidad Autónoma de Puebla, Ciudad Universitaria, San Manuel, 72000 Puebla, Pue. (Mexico); Bernès, Sylvain, E-mail: sylvain-bernes@hotmail.com [DEP Facultad de Ciencias Químicas, UANL, Guerrero y Progreso S/N, Col. Treviño, 64570 Monterrey, NL (Mexico); Facultad de Ciencias Químicas, Benemérita Universidad Autónoma de Puebla, Ciudad Universitaria, San Manuel, 72000 Puebla, Pue. (Mexico)

    2008-04-01

    Regioselective opening of ring E of solasodine under various conditions afforded (25R)-22,26-epiminocholesta-5,22(N)-diene-3β,16β-diyl diacetate (previously known as 3,16-diacetyl pseudosolasodine B), C{sub 31}H{sub 47}NO{sub 4}, or (22S,25R)-16β-hydroxy-22,26-epiminocholesta-5-en-3β-yl acetate (a derivative of the naturally occurring alkaloid oblonginine), C{sub 29}H{sub 47}NO{sub 3}. In both cases, the reactions are carried out with retention of chirality at the C16, C20 and C25 stereogenic centers, which are found to be S, S and R, respectively. Although pseudosolasodine was synthesized 50 years ago, these accurate assignments clarify some controversial points about the actual stereochemistry for these alkaloids. This is of particular importance in the case of oblonginine, since this compound is currently under consideration for the treatment of aphasia arising from apoplexy; the present study defines a diastereoisomerically pure compound for pharmacological studies.

  14. An accurate projection algorithm for array processor based SPECT systems

    International Nuclear Information System (INIS)

    King, M.A.; Schwinger, R.B.; Cool, S.L.

    1985-01-01

    A data re-projection algorithm has been developed for use in single photon emission computed tomography (SPECT) on an array processor based computer system. The algorithm makes use of an accurate representation of pixel activity (uniform square pixel model of intensity distribution), and is rapidly performed due to the efficient handling of an array based algorithm and the Fast Fourier Transform (FFT) on parallel processing hardware. The algorithm consists of using a pixel driven nearest neighbour projection operation to an array of subdivided projection bins. This result is then convolved with the projected uniform square pixel distribution before being compressed to original bin size. This distribution varies with projection angle and is explicitly calculated. The FFT combined with a frequency space multiplication is used instead of a spatial convolution for more rapid execution. The new algorithm was tested against other commonly used projection algorithms by comparing the accuracy of projections of a simulated transverse section of the abdomen against analytically determined projections of that transverse section. The new algorithm was found to yield comparable or better standard error and yet result in easier and more efficient implementation on parallel hardware. Applications of the algorithm include iterative reconstruction and attenuation correction schemes and evaluation of regions of interest in dynamic and gated SPECT
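
    A compact numpy sketch of the projection strategy described above: pixel-driven nearest-neighbour accumulation into subdivided bins, a frequency-space multiplication standing in for the spatial convolution with the projected square-pixel profile, and compression back to the original bin size. The box kernel below is a crude stand-in for the angle-dependent projected pixel distribution computed explicitly in the paper.

        import numpy as np

        def project(image, angle_deg, subdiv=4):
            ny, nx = image.shape
            nbins = nx * subdiv
            proj = np.zeros(nbins)
            c, s = np.cos(np.radians(angle_deg)), np.sin(np.radians(angle_deg))
            # Pixel-driven nearest-neighbour accumulation into fine bins.
            for iy in range(ny):
                for ix in range(nx):
                    t = (ix - nx / 2) * c + (iy - ny / 2) * s  # detector coord
                    b = int(round(t * subdiv + nbins / 2))
                    if 0 <= b < nbins:
                        proj[b] += image[iy, ix]
            # Projected square pixel ~ box profile; convolve via FFT.
            width = max(1, int(round(subdiv * (abs(c) + abs(s)))))
            kernel = np.zeros(nbins)
            kernel[:width] = 1.0 / width
            proj = np.real(np.fft.ifft(np.fft.fft(proj) * np.fft.fft(kernel)))
            # Compress the subdivided bins back to the original bin size.
            return proj.reshape(nx, subdiv).sum(axis=1)

        phantom = np.zeros((64, 64))
        phantom[24:40, 24:40] = 1.0
        print(project(phantom, 30.0)[:8].round(2))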

  15. Study of accurate volume measurement system for plutonium nitrate solution

    Energy Technology Data Exchange (ETDEWEB)

    Hosoma, T. [Power Reactor and Nuclear Fuel Development Corp., Tokai, Ibaraki (Japan). Tokai Works

    1998-12-01

    It is important for effective safeguarding of nuclear materials to establish a technique for accurate volume measurement of plutonium nitrate solution in an accountancy tank. The volume of the solution can be estimated from two differential pressures between three dip-tubes, in which the air is purged by a compressor. One of the differential pressures corresponds to the density of the solution, and the other corresponds to the surface level of the solution in the tank. The measurement of the differential pressure contains many uncertain errors, such as the precision of the pressure transducer, fluctuation of the back-pressure, generation of bubbles at the front of the dip-tubes, non-uniformity of temperature and density of the solution, pressure drop in the dip-tube, and so on. The various excess pressures at the volume measurement are discussed and corrected by a reasonable method. A high-precision differential pressure measurement system has been developed with a quartz oscillation type transducer which converts a differential pressure to a digital signal. The developed system is used for inspection by the government and the IAEA. (M. Suetake)
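
    The measurement principle can be sketched as follows: with two dip tubes separated by a known height and a reference in the vapour space, one differential pressure yields the solution density and the other the liquid level, from which volume follows via a tank calibration curve. All numbers below are hypothetical, and the many corrections for excess pressures discussed in the report are omitted.

        g = 9.80665      # m/s^2
        h_sep = 0.50     # vertical separation of the two density tubes, m

        dp_density = 6.6e3   # Pa, between the two submerged tubes (hypothetical)
        dp_level = 15.8e3    # Pa, between the bottom tube and the vapour space

        rho = dp_density / (g * h_sep)    # solution density, kg/m^3
        level = dp_level / (g * rho)      # liquid level above the bottom tube, m

        def tank_volume(level_m):
            """Hypothetical linear calibration curve of the accountancy tank."""
            return 0.85 * level_m         # m^3 of solution per m of level

        print(f"density {rho:.0f} kg/m^3, level {level:.3f} m, "
              f"volume {tank_volume(level):.3f} m^3")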

  16. AN ACCURATE FLUX DENSITY SCALE FROM 1 TO 50 GHz

    International Nuclear Information System (INIS)

    Perley, R. A.; Butler, B. J.

    2013-01-01

    We develop an absolute flux density scale for centimeter-wavelength astronomy by combining accurate flux density ratios determined by the Very Large Array between the planet Mars and a set of potential calibrators with the Rudy thermophysical emission model of Mars, adjusted to the absolute scale established by the Wilkinson Microwave Anisotropy Probe. The radio sources 3C123, 3C196, 3C286, and 3C295 are found to be varying at a level of less than ∼5% per century at all frequencies between 1 and 50 GHz, and hence are suitable as flux density standards. We present polynomial expressions for their spectral flux densities, valid from 1 to 50 GHz, with absolute accuracy estimated at 1%-3% depending on frequency. Of the four sources, 3C286 is the most compact and has the flattest spectral index, making it the most suitable object on which to establish the spectral flux density scale. The sources 3C48, 3C138, 3C147, NGC 7027, NGC 6542, and MWC 349 show significant variability on various timescales. Polynomial coefficients for the spectral flux density are developed for 3C48, 3C138, and 3C147 for each of the 17 observation dates, spanning 1983-2012. The planets Venus, Uranus, and Neptune are included in our observations, and we derive their brightness temperatures over the same frequency range.
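
    The polynomial expressions referred to above are polynomials in log frequency, log10(S/Jy) = a0 + a1*log10(nu/GHz) + a2*[log10(nu/GHz)]^2 + ..., which the sketch below evaluates. The coefficients used here are placeholder values for illustration only, not the published ones.

        import math

        def flux_density(nu_ghz, coeffs):
            """Evaluate a log-polynomial spectral flux density model, in Jy."""
            x = math.log10(nu_ghz)
            return 10.0 ** sum(a * x ** i for i, a in enumerate(coeffs))

        coeffs_3c286 = (1.25, -0.45, -0.18, 0.04)    # illustrative only
        for nu in (1.5, 5.0, 15.0, 45.0):
            print(f"{nu:5.1f} GHz: {flux_density(nu, coeffs_3c286):6.2f} Jy")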

  17. Accurate reconstruction of hyperspectral images from compressive sensing measurements

    Science.gov (United States)

    Greer, John B.; Flake, J. C.

    2013-05-01

    The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher, 2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al., 2008] and Dual Disperser (CASSI-DD) [Gehm et al., 2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes - an AVIRIS image of Cuprite, Nevada and the HYMAP Urban image. To measure the accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.

  18. Time scale controversy: Accurate orbital calibration of the early Paleogene

    Science.gov (United States)

    Roehl, U.; Westerhold, T.; Laskar, J.

    2012-12-01

    Timing is crucial to understanding the causes and consequences of events in Earth history. The calibration of geological time relies heavily on the accuracy of radioisotopic and astronomical dating. Uncertainties in the computations of Earth's orbital parameters and in radioisotopic dating have hampered the construction of a reliable astronomically calibrated time scale beyond 40 Ma. Attempts to construct a robust astronomically tuned time scale for the early Paleogene by integrating radioisotopic and astronomical dating are only partially consistent. Here, using the new La2010 and La2011 orbital solutions, we present the first accurate astronomically calibrated time scale for the early Paleogene (47-65 Ma) uniquely based on astronomical tuning and thus independent of the radioisotopic determination of the Fish Canyon standard. Comparison with geological data confirms the stability of the new La2011 solution back to 54 Ma. Subsequent anchoring of floating chronologies to the La2011 solution using the very long eccentricity nodes provides an absolute age of 55.530 ± 0.05 Ma for the onset of the Paleocene/Eocene Thermal Maximum (PETM), 54.850 ± 0.05 Ma for the early Eocene Ash-17, and 65.250 ± 0.06 Ma for the K/Pg boundary. The new astrochronology presented here indicates that the intercalibration and synchronization of U/Pb and 40Ar/39Ar radioisotopic geochronology is much more challenging than previously thought.

  19. Accurate Recovery of H i Velocity Dispersion from Radio Interferometers

    Energy Technology Data Exchange (ETDEWEB)

    Ianjamasimanana, R. [Max-Planck Institut für Astronomie, Königstuhl 17, D-69117, Heidelberg (Germany); Blok, W. J. G. de [Netherlands Institute for Radio Astronomy (ASTRON), Postbus 2, 7990 AA Dwingeloo (Netherlands); Heald, George H., E-mail: roger@mpia.de, E-mail: blok@astron.nl, E-mail: George.Heald@csiro.au [Kapteyn Astronomical Institute, University of Groningen, P.O. Box 800, 9700 AV, Groningen (Netherlands)

    2017-05-01

    Gas velocity dispersion measures the amount of disordered motion of a rotating disk. Accurate estimates of this parameter are of the utmost importance because the parameter is directly linked to disk stability and star formation. A global measure of the gas velocity dispersion can be inferred from the width of the atomic hydrogen (H i) 21 cm line. We explore how several systematic effects involved in the production of H i cubes affect the estimate of H i velocity dispersion. We do so by comparing the H i velocity dispersion derived from different types of data cubes provided by The H i Nearby Galaxy Survey. We find that residual-scaled cubes best recover the H i velocity dispersion, independent of the weighting scheme used and for a large range of signal-to-noise ratio. For H i observations, where the dirty beam is substantially different from a Gaussian, the velocity dispersion values are overestimated unless the cubes are cleaned close to (e.g., ∼1.5 times) the noise level.

  20. Concurrent and Accurate Short Read Mapping on Multicore Processors.

    Science.gov (United States)

    Martínez, Héctor; Tárraga, Joaquín; Medina, Ignacio; Barrachina, Sergio; Castillo, Maribel; Dopazo, Joaquín; Quintana-Ortí, Enrique S

    2015-01-01

    We introduce a parallel aligner with a work-flow organization for fast and accurate mapping of RNA sequences on servers equipped with multicore processors. Our software, HPG Aligner SA (an open-source application available at http://www.opencb.org), exploits a suffix array to rapidly map a large fraction of the RNA fragments (reads), and leverages the accuracy of the Smith-Waterman algorithm to deal with conflictive reads. The aligner is enhanced with a careful strategy to detect splice junctions based on an adaptive division of RNA reads into small segments (or seeds), which are then mapped onto a number of candidate alignment locations, providing crucial information for the successful alignment of the complete reads. The experimental results on a platform with Intel multicore technology report the parallel performance of HPG Aligner SA, on RNA reads of 100-400 nucleotides, which excels in execution time and sensitivity compared with state-of-the-art aligners such as TopHat 2+Bowtie 2, MapSplice, and STAR.

  1. Accurate Fall Detection in a Top View Privacy Preserving Configuration.

    Science.gov (United States)

    Ricciuti, Manola; Spinsante, Susanna; Gambi, Ennio

    2018-05-29

    Fall detection is one of the most investigated themes in the research on assistive solutions for aged people. In particular, a false-alarm-free discrimination between falls and non-falls is indispensable, especially to assist elderly people living alone. Current technological solutions designed to monitor several types of activities in indoor environments can guarantee absolute privacy to the people that decide to rely on them. Devices integrating RGB and depth cameras, such as the Microsoft Kinect, can ensure privacy and anonymity, since the depth information is considered to extract only meaningful information from video streams. In this paper, we propose an accurate fall detection method investigating the depth frames of the human body using a single device in a top-view configuration, with the subjects located under the device inside a room. Features extracted from depth frames train a classifier based on a binary support vector machine learning algorithm. The dataset includes 32 falls and 8 activities considered for comparison, for a total of 800 sequences performed by 20 adults. The system showed an accuracy of 98.6% and only one false positive.
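
    A minimal sketch of the classification stage described above: a binary support vector machine trained on per-sequence features extracted from the depth frames. The two-dimensional features and the synthetic data below are placeholders; the paper's actual descriptors and its 800-sequence dataset are not reproduced.

        import numpy as np
        from sklearn.model_selection import train_test_split
        from sklearn.svm import SVC

        rng = np.random.default_rng(0)
        # Hypothetical features, e.g. (minimum head height, descent speed):
        X_fall = rng.normal(loc=[0.4, 1.8], scale=0.15, size=(400, 2))
        X_other = rng.normal(loc=[1.5, 0.3], scale=0.15, size=(400, 2))
        X = np.vstack([X_fall, X_other])
        y = np.array([1] * 400 + [0] * 400)    # 1 = fall, 0 = other activity

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
        clf = SVC(kernel="rbf").fit(X_tr, y_tr)
        print(f"test accuracy: {clf.score(X_te, y_te):.3f}")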

  2. Development of fast and accurate Monte Carlo code MVP

    International Nuclear Information System (INIS)

    Mori, Takamasa

    2001-01-01

    The development work on the fast and accurate Monte Carlo code MVP started at JAERI in the late 1980s. From the beginning, the code was designed to utilize vector supercomputers and achieved a computation speed higher by a factor of 10 or more compared with conventional codes. In 1994, the first version of MVP was released together with cross section libraries based on JENDL-3.1 and JENDL-3.2. In 1996, a minor revision was made by adding several functions, such as treatment of ENDF-B6 file 6 data, time-dependent problems, and so on. Since 1996, several works have been carried out for the next version of MVP. The main works are (1) the development of the continuous energy Monte Carlo burn-up calculation code MVP-BURN, (2) the development of a system to generate cross section libraries at arbitrary temperature, and (3) the study on error estimations and their biases in Monte Carlo eigenvalue calculations. This paper summarizes the main features of MVP, results of recent studies and future plans for MVP. (author)

  3. Accurate Complex Systems Design: Integrating Serious Games with Petri Nets

    Directory of Open Access Journals (Sweden)

    Kirsten Sinclair

    2016-03-01

    Full Text Available Difficulty understanding the large number of interactions involved in complex systems makes their successful engineering a problem. Petri Nets are one graphical modelling technique used to describe and check proposed designs of complex systems thoroughly. While the automatic analysis capabilities of Petri Nets are useful, their visual form is less so, particularly for communicating the design they represent. In engineering projects, this can lead to a gap in communications between people with different areas of expertise, negatively impacting the achievement of accurate designs. In contrast, although capable of representing a variety of real and imaginary objects effectively, the behaviour of serious games can only be analysed manually through interactive simulation. This paper examines combining the complementary strengths of Petri Nets and serious games. The novel contribution of this work is a serious game prototype of a complex system design that has been checked thoroughly. Underpinned by Petri Net analysis, the serious game can be used as a high-level interface to communicate and refine the design. Improvement of a complex system design is demonstrated by applying the integration to a proof-of-concept case study.
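
    To make concrete what a Petri net model looks like, here is a toy interpreter under simple assumptions (unweighted arcs, one token consumed or produced per place); real tools add arc weights, inhibitor arcs, and automated state-space analysis.

        def enabled(marking, transition):
            """A transition is enabled when every input place holds a token."""
            inputs, _ = transition
            return all(marking.get(p, 0) > 0 for p in inputs)

        def fire(marking, transition):
            """Consume one token from each input place, add one to each output."""
            inputs, outputs = transition
            m = dict(marking)
            for p in inputs:
                m[p] -= 1
            for p in outputs:
                m[p] = m.get(p, 0) + 1
            return m

        # Two-place net: t1 moves a token from 'idle' to 'busy', t2 moves it back.
        t1 = (["idle"], ["busy"])
        t2 = (["busy"], ["idle"])
        marking = {"idle": 1, "busy": 0}
        for t in (t1, t2, t1):
            assert enabled(marking, t)
            marking = fire(marking, t)
        print(marking)    # {'idle': 0, 'busy': 1}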

  4. A more accurate scheme for calculating Earth's skin temperature

    Science.gov (United States)

    Tsuang, Ben-Jei; Tu, Chia-Ying; Tsai, Jeng-Lin; Dracup, John A.; Arpe, Klaus; Meyers, Tilden

    2009-02-01

    The theoretical framework of the vertical discretization of a ground column for calculating Earth’s skin temperature is presented. The suggested discretization is derived from the evenly heat-content discretization with the optimal effective thickness for layer-temperature simulation. For the same level number, the suggested discretization is more accurate in skin temperature as well as surface ground heat flux simulations than those used in some state-of-the-art models. A proposed scheme (“op(3,2,0)”) can reduce the normalized root-mean-square error (or RMSE/STD ratio) of the calculated surface ground heat flux of a cropland site significantly to 2% (or 0.9 W m-2), from 11% (or 5 W m-2) by a 5-layer scheme used in ECMWF, from 19% (or 8 W m-2) by a 5-layer scheme used in ECHAM, and from 74% (or 32 W m-2) by a single-layer scheme used in the UCLA GCM. Better accuracy can be achieved by including more layers to the vertical discretization. Similar improvements are expected for other locations with different land types since the numerical error is inherited into the models for all the land types. The proposed scheme can be easily implemented into state-of-the-art climate models for the temperature simulation of snow, ice and soil.
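
    The skill measure quoted above is straightforward to reproduce: the root-mean-square error of the simulated surface ground heat flux against observations, normalized by the standard deviation of the observations. The arrays below are fabricated placeholders, not data from the cropland site.

        import numpy as np

        obs = np.array([40.0, 55.0, 30.0, -10.0, -25.0, 15.0, 50.0, 35.0])  # W/m^2
        sim = np.array([42.0, 53.0, 31.0, -9.0, -27.0, 14.0, 52.0, 33.0])

        rmse = np.sqrt(np.mean((sim - obs) ** 2))
        print(f"RMSE = {rmse:.2f} W/m^2, RMSE/STD = {rmse / obs.std():.1%}")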

  5. Iterative feature refinement for accurate undersampled MR image reconstruction

    Science.gov (United States)

    Wang, Shanshan; Liu, Jianbo; Liu, Qiegen; Ying, Leslie; Liu, Xin; Zheng, Hairong; Liang, Dong

    2016-05-01

    Accelerating MR scan is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CSMRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled K-space data. Integrating IFR with CSMRI which is equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches.

  6. Accurate measurement of RF exposure from emerging wireless communication systems

    International Nuclear Information System (INIS)

    Letertre, Thierry; Toffano, Zeno; Monebhurrun, Vikass

    2013-01-01

    Isotropic broadband probes or spectrum analyzers (SAs) may be used for the measurement of rapidly varying electromagnetic fields generated by emerging wireless communication systems. In this paper, this problem is investigated by comparing the responses measured by two different isotropic broadband probes typically used to perform electric field (E-field) evaluations. The broadband probes are subjected to signals with variable duty cycles (DC) and crest factors (CF), either with or without Orthogonal Frequency Division Multiplexing (OFDM) modulation but with the same root-mean-square (RMS) power. The two probes do not provide sufficiently accurate results for deterministic signals such as Worldwide Interoperability for Microwave Access (WiMAX) or Long Term Evolution (LTE), nor for non-deterministic signals such as Wireless Fidelity (WiFi). The legacy measurement protocols should be adapted to cope with emerging wireless communication technologies based on the OFDM modulation scheme. This is not easily achieved except when the statistics of the RF emission are well known. In this case the measurement errors are shown to be systematic, and a correction factor or calibration can be applied to obtain a good approximation of the total RMS power.

  7. SPECTROPOLARIMETRICALLY ACCURATE MAGNETOHYDROSTATIC SUNSPOT MODEL FOR FORWARD MODELING IN HELIOSEISMOLOGY

    Energy Technology Data Exchange (ETDEWEB)

    Przybylski, D.; Shelyag, S.; Cally, P. S. [Monash Center for Astrophysics, School of Mathematical Sciences, Monash University, Clayton, Victoria 3800 (Australia)

    2015-07-01

    We present a technique to construct a spectropolarimetrically accurate magnetohydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion, and absorption in the solar interior and photosphere with the sunspot embedded into it. With the 6173 Å magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as the full Stokes vector for the simulation at various positions at the solar disk, and analyze the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterized. An increase in acoustic power in the simulated observations of the sunspot umbra away from the solar disk center was confirmed as the slow magnetoacoustic wave.

  8. SPECTROPOLARIMETRICALLY ACCURATE MAGNETOHYDROSTATIC SUNSPOT MODEL FOR FORWARD MODELING IN HELIOSEISMOLOGY

    International Nuclear Information System (INIS)

    Przybylski, D.; Shelyag, S.; Cally, P. S.

    2015-01-01

    We present a technique to construct a spectropolarimetrically accurate magnetohydrostatic model of a large-scale solar magnetic field concentration, mimicking a sunspot. Using the constructed model we perform a simulation of acoustic wave propagation, conversion, and absorption in the solar interior and photosphere with the sunspot embedded into it. With the 6173 Å magnetically sensitive photospheric absorption line of neutral iron, we calculate observable quantities such as continuum intensities, Doppler velocities, as well as the full Stokes vector for the simulation at various positions at the solar disk, and analyze the influence of non-locality of radiative transport in the solar photosphere on helioseismic measurements. Bisector shapes were used to perform multi-height observations. The differences in acoustic power at different heights within the line formation region at different positions at the solar disk were simulated and characterized. An increase in acoustic power in the simulated observations of the sunspot umbra away from the solar disk center was confirmed as the slow magnetoacoustic wave.

  9. Accurate calculation of high harmonics generated by relativistic Thomson scattering

    International Nuclear Information System (INIS)

    Popa, Alexandru

    2008-01-01

    The recent emergence of the field of ultraintense laser pulses, corresponding to beam intensities higher than 10^18 W cm^-2, brings about the problem of high harmonic generation (HHG) by the relativistic Thomson scattering of electromagnetic radiation by free electrons. Starting from the equations of the relativistic motion of the electron in the electromagnetic field, we give an exact solution of this problem. Taking into account the Lienard-Wiechert equations, we obtain a periodic scattered electromagnetic field. Without loss of generality, the solution is strongly simplified by observing that the electromagnetic field is always normal to the electron-detector direction. The Fourier series expansion of this field leads to accurate expressions of the high harmonics generated by the Thomson scattering. Our calculations lead to a discrete HHG spectrum, whose shape and angular distribution are in agreement with the experimental data from the literature. Since no approximations were made, our approach is also valid in the ultrarelativistic regime, corresponding to intensities higher than 10^23 W cm^-2, where it predicts a strong increase of the HHG intensities and of the order of harmonics. In this domain, the nonlinear Thomson scattering could be an efficient source of hard x-rays.

  10. Accurate SHAPE-directed RNA secondary structure modeling, including pseudoknots.

    Science.gov (United States)

    Hajdin, Christine E; Bellaousov, Stanislav; Huggins, Wayne; Leonard, Christopher W; Mathews, David H; Weeks, Kevin M

    2013-04-02

    A pseudoknot forms in an RNA when nucleotides in a loop pair with a region outside the helices that close the loop. Pseudoknots occur relatively rarely in RNA but are highly overrepresented in functionally critical motifs in large catalytic RNAs, in riboswitches, and in regulatory elements of viruses. Pseudoknots are usually excluded from RNA structure prediction algorithms. When included, these pairings are difficult to model accurately, especially in large RNAs, because allowing this structure dramatically increases the number of possible incorrect folds and because it is difficult to search the fold space for an optimal structure. We have developed a concise secondary structure modeling approach that combines SHAPE (selective 2'-hydroxyl acylation analyzed by primer extension) experimental chemical probing information and a simple, but robust, energy model for the entropic cost of single pseudoknot formation. Structures are predicted with iterative refinement, using a dynamic programming algorithm. This melded experimental and thermodynamic energy function predicted the secondary structures and the pseudoknots for a set of 21 challenging RNAs of known structure ranging in size from 34 to 530 nt. On average, 93% of known base pairs were predicted, and all pseudoknots in well-folded RNAs were identified.
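
    The way probing data enters such a prediction can be sketched with the widely used SHAPE pseudo-free-energy term, in which each paired nucleotide is charged dG = m*ln(reactivity + 1) + b, so that highly reactive (flexible, likely unpaired) positions penalize pairing. The slope and intercept below are in the commonly cited range but should be read as placeholders; the paper's pseudoknot penalty model is a separate ingredient not reproduced here.

        import math

        def shape_pseudo_energy(reactivity, m=2.6, b=-0.8):
            """Per-nucleotide pairing pseudo-energy, in kcal/mol."""
            return m * math.log(reactivity + 1.0) + b

        for r in (0.0, 0.2, 1.0, 3.0):
            print(f"reactivity {r:3.1f} -> "
                  f"dG {shape_pseudo_energy(r):+5.2f} kcal/mol")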

  11. Iterative feature refinement for accurate undersampled MR image reconstruction

    International Nuclear Information System (INIS)

    Wang, Shanshan; Liu, Jianbo; Liu, Xin; Zheng, Hairong; Liang, Dong; Liu, Qiegen; Ying, Leslie

    2016-01-01

    Accelerating MR scan is of great significance for clinical, research and advanced applications, and one main effort to achieve this is the utilization of compressed sensing (CS) theory. Nevertheless, the existing CSMRI approaches still have limitations such as fine structure loss or high computational complexity. This paper proposes a novel iterative feature refinement (IFR) module for accurate MR image reconstruction from undersampled K-space data. Integrating IFR with CSMRI which is equipped with fixed transforms, we develop an IFR-CS method to restore meaningful structures and details that are originally discarded without introducing too much additional complexity. Specifically, the proposed IFR-CS is realized with three iterative steps, namely sparsity-promoting denoising, feature refinement and Tikhonov regularization. Experimental results on both simulated and in vivo MR datasets have shown that the proposed module has a strong capability to capture image details, and that IFR-CS is comparable and even superior to other state-of-the-art reconstruction approaches. (paper)

  12. Accurate quantification of supercoiled DNA by digital PCR

    Science.gov (United States)

    Dong, Lianhua; Yoo, Hee-Bong; Wang, Jing; Park, Sang-Ryoul

    2016-01-01

    Digital PCR (dPCR), as an enumeration-based quantification method, is capable of quantifying the DNA copy number without the help of standards. However, it can generate false results when the PCR conditions are not optimized. A recent international comparison (CCQM P154) showed that most laboratories significantly underestimated the concentration of supercoiled plasmid DNA by dPCR. Mostly, supercoiled DNAs are linearized before dPCR to avoid such underestimations. The present study was conducted to overcome this problem. In the bilateral comparison, the National Institute of Metrology, China (NIM) optimized and applied dPCR for supercoiled DNA determination, whereas the Korea Research Institute of Standards and Science (KRISS) prepared the unknown samples and quantified them by flow cytometry. In this study, several factors, such as the selection of the PCR master mix, the fluorescent label, and the position of the primers, were evaluated for quantifying supercoiled DNA by dPCR. This work confirmed that a 16S PCR master mix avoided poor amplification of the supercoiled DNA, whereas HEX labels on the dPCR probe resulted in robust amplification curves. Optimizing the dPCR assay based on these two observations resulted in accurate quantification of supercoiled DNA without preanalytical linearization. This result was in close agreement (101-113%) with the result from flow cytometry. PMID:27063649
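
    The enumeration principle behind dPCR can be sketched with the standard Poisson correction: from the fraction p of positive partitions, the mean number of copies per partition is lambda = -ln(1 - p), and the concentration follows from the partition volume. The counts and volume below are illustrative placeholders.

        import math

        positives, total = 14000, 20000    # hypothetical partition counts
        v_partition_ul = 0.85e-3           # partition volume, microlitres

        p = positives / total
        lam = -math.log(1.0 - p)           # mean copies per partition
        conc = lam / v_partition_ul        # copies per microlitre
        print(f"lambda = {lam:.3f}, concentration = {conc:.0f} copies/uL")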

  13. Stonehenge: A Simple and Accurate Predictor of Lunar Eclipses

    Science.gov (United States)

    Challener, S.

    1999-12-01

    Over the last century, much has been written about the astronomical significance of Stonehenge. The rage peaked in the mid to late 1960s when new computer technology enabled astronomers to make the first complete search for celestial alignments. Because there are hundreds of rocks or holes at Stonehenge and dozens of bright objects in the sky, the quest was fraught with obvious statistical problems. A storm of controversy followed and the subject nearly vanished from print. Only a handful of these alignments remain compelling. Today, few astronomers and still fewer archaeologists would argue that Stonehenge served primarily as an observatory. Instead, Stonehenge probably served as a sacred meeting place, which was consecrated by certain celestial events. These would include the sun's risings and settings at the solstices and possibly some lunar risings as well. I suggest that Stonehenge was also used to predict lunar eclipses. While Hawkins and Hoyle also suggested that Stonehenge was used in this way, their methods are complex and they make use of only early, minor, or outlying areas of Stonehenge. In contrast, I suggest a way that makes use of the imposing, central region of Stonehenge; the area built during the final phase of activity. To predict every lunar eclipse without predicting eclipses that do not occur, I use the less familiar lunar cycle of 47 lunar months. By moving markers about the Sarsen Circle, the Bluestone Circle, and the Bluestone Horseshoe, all umbral lunar eclipses can be predicted accurately.

  14. Accurate stereochemistry for two related 22,26-epiminocholestene derivatives

    International Nuclear Information System (INIS)

    Vega-Baez, José Luis; Sandoval-Ramírez, Jesús; Meza-Reyes, Socorro; Montiel-Smith, Sara; Gómez-Calvario, Victor; Bernès, Sylvain

    2008-01-01

    Regioselective opening of ring E of solasodine under various conditions afforded (25R)-22,26-epiminocholesta-5,22(N)-diene-3β,16β-diyl diacetate (previously known as 3,16-diacetyl pseudosolasodine B), C{sub 31}H{sub 47}NO{sub 4}, or (22S,25R)-16β-hydroxy-22,26-epiminocholesta-5-en-3β-yl acetate (a derivative of the naturally occurring alkaloid oblonginine), C{sub 29}H{sub 47}NO{sub 3}. In both cases, the reactions are carried out with retention of chirality at the C16, C20 and C25 stereogenic centers, which are found to be S, S and R, respectively. Although pseudosolasodine was synthesized 50 years ago, these accurate assignments clarify some controversial points about the actual stereochemistry for these alkaloids. This is of particular importance in the case of oblonginine, since this compound is currently under consideration for the treatment of aphasia arising from apoplexy; the present study defines a diastereoisomerically pure compound for pharmacological studies.

  15. HIPPI: highly accurate protein family classification with ensembles of HMMs

    Directory of Open Access Journals (Sweden)

    Nam-phuong Nguyen

    2016-11-01

    Full Text Available Abstract Background Given a new biological sequence, detecting membership in a known family is a basic step in many bioinformatics analyses, with applications to protein structure and function prediction and metagenomic taxon identification and abundance profiling, among others. Yet family identification of sequences that are distantly related to sequences in public databases or that are fragmentary remains one of the more difficult analytical problems in bioinformatics. Results We present a new technique for family identification called HIPPI (Hierarchical Profile Hidden Markov Models for Protein family Identification). HIPPI uses a novel technique to represent a multiple sequence alignment for a given protein family or superfamily by an ensemble of profile hidden Markov models computed using HMMER. An evaluation of HIPPI on the Pfam database shows that HIPPI has better overall precision and recall than blastp, HMMER, and pipelines based on HHsearch, and maintains good accuracy even for fragmentary query sequences and for protein families with low average pairwise sequence identity, both conditions where other methods degrade in accuracy. Conclusion HIPPI provides accurate protein family identification and is robust to difficult model conditions. Our results, combined with observations from previous studies, show that ensembles of profile hidden Markov models can better represent multiple sequence alignments than a single profile hidden Markov model, and thus can improve downstream analyses for various bioinformatic tasks. Further research is needed to determine the best practices for building the ensemble of profile hidden Markov models. HIPPI is available on GitHub at https://github.com/smirarab/sepp.

  16. Toward accurate and fast iris segmentation for iris biometrics.

    Science.gov (United States)

    He, Zhaofeng; Tan, Tieniu; Sun, Zhenan; Qiu, Xianchao

    2009-09-01

    Iris segmentation is an essential module in iris recognition because it defines the effective image region used for subsequent processing such as feature extraction. Traditional iris segmentation methods often involve an exhaustive search of a large parameter space, which is time consuming and sensitive to noise. To address these problems, this paper presents a novel algorithm for accurate and fast iris segmentation. After efficient reflection removal, an Adaboost-cascade iris detector is first built to extract a rough position of the iris center. Edge points of iris boundaries are then detected, and an elastic model named pulling and pushing is established. Under this model, the center and radius of the circular iris boundaries are iteratively refined in a way driven by the restoring forces of Hooke's law. Furthermore, a smoothing spline-based edge fitting scheme is presented to deal with noncircular iris boundaries. After that, eyelids are localized via edge detection followed by curve fitting. The novelty here is the adoption of a rank filter for noise elimination and a histogram filter for tackling the shape irregularity of eyelids. Finally, eyelashes and shadows are detected via a learned prediction model. This model provides an adaptive threshold for eyelash and shadow detection by analyzing the intensity distributions of different iris regions. Experimental results on three challenging iris image databases demonstrate that the proposed algorithm outperforms state-of-the-art methods in both accuracy and speed.

  17. Accurate mass and velocity functions of dark matter haloes

    Science.gov (United States)

    Comparat, Johan; Prada, Francisco; Yepes, Gustavo; Klypin, Anatoly

    2017-08-01

    N-body cosmological simulations are an essential tool to understand the observed distribution of galaxies. We use the MultiDark simulation suite, run with the Planck cosmological parameters, to revisit the mass and velocity functions. At redshift z = 0, the simulations cover four orders of magnitude in halo mass from ∼10^11 M⊙, with 8,783,874 distinct haloes and 532,533 subhaloes. The total volume used is ∼515 Gpc^3, more than eight times larger than in previous studies. We measure and model the halo mass function, its covariance matrix with respect to halo mass, and the large-scale halo bias. With the formalism of the excursion-set mass function, we make explicit the tight interconnection between the covariance matrix, the bias and the halo mass function. We obtain a very accurate model of the halo mass function. We also model the subhalo mass function and its relation to the distinct halo mass function. The set of models obtained provides a complete and precise framework for the description of haloes in the concordance Planck cosmology. Finally, we provide precise analytical fits of the Vmax maximum velocity function as a function of redshift; these are publicly available in the Skies and Universes database.

  18. Accurate prediction of the enthalpies of formation for xanthophylls.

    Science.gov (United States)

    Lii, Jenn-Huei; Liao, Fu-Xing; Hu, Ching-Han

    2011-11-30

    This study investigates the application of computational approaches to the prediction of enthalpies of formation (ΔH(f)) for C-, H-, and O-containing compounds. The MM4 molecular mechanics method, density functional theory (DFT) combined with the atomic equivalent (AE) and group equivalent (GE) schemes, and DFT-based correlation corrected atomization (CCAZ) were used. We emphasized the application to xanthophylls, C-, H-, and O-containing carotenoids which consist of ∼100 atoms and extended π-delocalization systems. Within the training set, MM4 predictions are more accurate than those obtained using AE and GE; however, a systematic underestimation was observed in the extended systems. ΔH(f) values for the training set molecules predicted by CCAZ combined with DFT are in very good agreement with the G3 results. The average absolute deviations (AADs) of CCAZ combined with B3LYP and MPWB1K are 0.38 and 0.53 kcal/mol compared with the G3 data, and 0.74 and 0.69 kcal/mol compared with the available experimental data, respectively. Consistency of the CCAZ approach for the selected xanthophylls is revealed by the AAD of 2.68 kcal/mol between B3LYP-CCAZ and MPWB1K-CCAZ. Copyright © 2011 Wiley Periodicals, Inc.

  19. Podiatry Ankle Duplex Scan: Readily Learned and Accurate in Diabetes.

    Science.gov (United States)

    Normahani, Pasha; Powezka, Katarzyna; Aslam, Mohammed; Standfield, Nigel J; Jaffer, Usman

    2018-03-01

    We aimed to train podiatrists to perform a focused duplex ultrasound scan (DUS) of the tibial vessels at the ankle in diabetic patients; podiatry ankle (PodAnk) duplex scan. Thirteen podiatrists underwent an intensive 3-hour long simulation training session. Participants were then assessed performing bilateral PodAnk duplex scans of 3 diabetic patients with peripheral arterial disease. Participants were assessed using the duplex ultrasound objective structured assessment of technical skills (DUOSATS) tool and an "Imaging Score". A total of 156 vessel assessments were performed. All patients had abnormal waveforms with a loss of triphasic flow. Loss of triphasic flow was accurately detected in 145 (92.9%) vessels; the correct waveform was identified in 139 (89.1%) cases. Participants achieved excellent DUOSATS scores (median 24 [interquartile range: 23-25], max attainable score of 26) as well as "Imaging Scores" (8 [8-8], max attainable score of 8) indicating proficiency in technical skills. The mean time taken for each bilateral ankle assessment was 20.4 minutes (standard deviation ±6.7). We have demonstrated that a focused DUS for the purpose of vascular assessment of the diabetic foot is readily learned using intensive simulation training.

  20. Accurate calculations of bound rovibrational states for argon trimer

    Energy Technology Data Exchange (ETDEWEB)

    Brandon, Drew; Poirier, Bill [Department of Chemistry and Biochemistry, and Department of Physics, Texas Tech University, Box 41061, Lubbock, Texas 79409-1061 (United States)

    2014-07-21

    This work presents a comprehensive quantum dynamics calculation of the bound rovibrational eigenstates of argon trimer (Ar{sub 3}), using the ScalIT suite of parallel codes. The Ar{sub 3} rovibrational energy levels are computed to a very high level of accuracy (10{sup −3} cm{sup −1} or better), and up to the highest rotational and vibrational excitations for which bound states exist. For many of these rovibrational states, wavefunctions are also computed. Rare gas clusters such as Ar{sub 3} are interesting because the interatomic interactions manifest through long-range van der Waals forces, rather than through covalent chemical bonding. As a consequence, they exhibit strong Coriolis coupling between the rotational and vibrational degrees of freedom, as well as highly delocalized states, all of which renders accurate quantum dynamical calculation difficult. Moreover, with its (comparatively) deep potential well and heavy masses, Ar{sub 3} is an especially challenging rare gas trimer case. There are a great many rovibrational eigenstates to compute, and a very high density of states. Consequently, very few previous rovibrational state calculations for Ar{sub 3} may be found in the current literature—and only for the lowest-lying rotational excitations.

  1. Can numerical simulations accurately predict hydrodynamic instabilities in liquid films?

    Science.gov (United States)

    Denner, Fabian; Charogiannis, Alexandros; Pradas, Marc; van Wachem, Berend G. M.; Markides, Christos N.; Kalliadasis, Serafim

    2014-11-01

    Understanding the dynamics of hydrodynamic instabilities in liquid film flows is an active field of research in fluid dynamics and non-linear science in general. Numerical simulations offer a powerful tool to study hydrodynamic instabilities in film flows and can provide deep insights into the underlying physical phenomena. However, the direct comparison of numerical results and experimental results is often hampered by several reasons. For instance, in numerical simulations the interface representation is problematic and the governing equations and boundary conditions may be oversimplified, whereas in experiments it is often difficult to extract accurate information on the fluid and its behavior, e.g. determine the fluid properties when the liquid contains particles for PIV measurements. In this contribution we present the latest results of our on-going, extensive study on hydrodynamic instabilities in liquid film flows, which includes direct numerical simulations, low-dimensional modelling as well as experiments. The major focus is on wave regimes, wave height and wave celerity as a function of Reynolds number and forcing frequency of a falling liquid film. Specific attention is paid to the differences in numerical and experimental results and the reasons for these differences. The authors are grateful to the EPSRC for their financial support (Grant EP/K008595/1).

  2. Accurate ab initio vibrational energies of methyl chloride

    International Nuclear Information System (INIS)

    Owens, Alec; Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan; Thiel, Walter

    2015-01-01

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH{sub 3}{sup 35}Cl and CH{sub 3}{sup 37}Cl. The respective PESs, CBS-35{sup HL} and CBS-37{sup HL}, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY{sub 3}Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35{sup HL} and CBS-37{sup HL} PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm{sup −1}, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH{sub 3}Cl without empirical refinement of the respective PESs.

  3. Accurate ab initio vibrational energies of methyl chloride

    Energy Technology Data Exchange (ETDEWEB)

    Owens, Alec, E-mail: owens@mpi-muelheim.mpg.de [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Yurchenko, Sergei N.; Yachmenev, Andrey; Tennyson, Jonathan [Department of Physics and Astronomy, University College London, Gower Street, WC1E 6BT London (United Kingdom); Thiel, Walter [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany)

    2015-06-28

    Two new nine-dimensional potential energy surfaces (PESs) have been generated using high-level ab initio theory for the two main isotopologues of methyl chloride, CH{sub 3}{sup 35}Cl and CH{sub 3}{sup 37}Cl. The respective PESs, CBS-35{sup  HL}, and CBS-37{sup  HL}, are based on explicitly correlated coupled cluster calculations with extrapolation to the complete basis set (CBS) limit, and incorporate a range of higher-level (HL) additive energy corrections to account for core-valence electron correlation, higher-order coupled cluster terms, scalar relativistic effects, and diagonal Born-Oppenheimer corrections. Variational calculations of the vibrational energy levels were performed using the computer program TROVE, whose functionality has been extended to handle molecules of the form XY {sub 3}Z. Fully converged energies were obtained by means of a complete vibrational basis set extrapolation. The CBS-35{sup  HL} and CBS-37{sup  HL} PESs reproduce the fundamental term values with root-mean-square errors of 0.75 and 1.00 cm{sup −1}, respectively. An analysis of the combined effect of the HL corrections and CBS extrapolation on the vibrational wavenumbers indicates that both are needed to compute accurate theoretical results for methyl chloride. We believe that it would be extremely challenging to go beyond the accuracy currently achieved for CH{sub 3}Cl without empirical refinement of the respective PESs.

  4. Accurate line intensities of methane from first-principles calculations

    Science.gov (United States)

    Nikitin, Andrei V.; Rey, Michael; Tyuterev, Vladimir G.

    2017-10-01

    In this work, we report first-principle theoretical predictions of methane spectral line intensities that are competitive with (and complementary to) the best laboratory measurements. A detailed comparison with the most accurate data shows that discrepancies in integrated polyad intensities are in the range of 0.4%-2.3%. This corresponds to estimations of the best available accuracy in laboratory Fourier Transform spectra measurements for this quantity. For relatively isolated strong lines the individual intensity deviations are in the same range. A comparison with the most precise laser measurements of the multiplet intensities in the 2ν3 band gives an agreement within the experimental error margins (about 1%). This is achieved for the first time for five-atomic molecules. In the Supplementary Material we provide the lists of theoretical intensities at 269 K for over 5000 strongest transitions in the range below 6166 cm-1. The advantage of the described method is that this offers a possibility to generate fully assigned exhaustive line lists at various temperature conditions. Extensive calculations up to 12,000 cm-1 including high-T predictions will be made freely available through the TheoReTS information system (http://theorets.univ-reims.fr, http://theorets.tsu.ru) that contains ab initio born line lists and provides a user-friendly graphical interface for a fast simulation of the absorption cross-sections and radiance.

  5. Generating Facial Expressions Using an Anatomically Accurate Biomechanical Model.

    Science.gov (United States)

    Wu, Tim; Hung, Alice; Mithraratne, Kumar

    2014-11-01

    This paper presents a computational framework for modelling the biomechanics of human facial expressions. A detailed high-order (Cubic-Hermite) finite element model of the human head was constructed using anatomical data segmented from magnetic resonance images. The model includes a superficial soft-tissue continuum consisting of skin, the subcutaneous layer and the superficial Musculo-Aponeurotic system. Embedded within this continuum mesh are 20 pairs of facial muscles, which drive facial expressions. These muscles were treated as transversely-isotropic and their anatomical geometries and fibre orientations were accurately depicted. In order to capture the relative composition of muscles and fat, material heterogeneity was also introduced into the model. Complex contact interactions between the lips and eyelids, and between the superficial soft-tissue continuum and the deep rigid skeletal bones, were also computed. In addition, this paper investigates the impact of incorporating material heterogeneity and contact interactions, which are often neglected in similar studies. Four facial expressions were simulated using the developed model and the results were compared with surface data obtained from a 3D structured-light scanner. Predicted expressions showed good agreement with the experimental data.

  6. Accurate calculation of field and carrier distributions in doped semiconductors

    Directory of Open Access Journals (Sweden)

    Wenji Yang

    2012-06-01

    Full Text Available We use the numerical squeezing algorithm (NSA) combined with the shooting method to accurately calculate the built-in fields and carrier distributions in doped silicon films (SFs) in the micron and sub-micron thickness range, and results are presented in graphical form for a variety of doping profiles under different boundary conditions. As a complementary approach, we also present the methods and results of the inverse problem (IVP) - finding the doping profile in the SFs for a given field distribution. The solution of the IVP provides an approach for arbitrarily designing the field distribution in SFs - which is very important for low-dimensional (LD) systems and device design. Furthermore, the solution of the IVP is both direct and straightforward for one-, two-, and three-dimensional semiconductor systems. With current efforts focused on LD physics, knowledge of the field and carrier distribution details in LD systems will facilitate further research on other aspects, and hence the current work provides a platform for such research.
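
    The NSA itself is not reproduced here, but the shooting method mentioned above can be shown in miniature. The sketch below solves a dimensionless toy boundary value problem u'' = f(x, u) with u(0) = u(1) = 0, the generic structure of a built-in-potential problem, by a forward-Euler march plus bisection on the unknown initial slope; it is illustrative only and uses no physical units.

        # Minimal shooting-method sketch for a two-point boundary value
        # problem u'' = f(x, u), u(0) = u(1) = 0. Not the paper's NSA.

        def integrate(slope0, f, n=2000):
            """March from x = 0 with u(0) = 0, u'(0) = slope0; return u(1)."""
            h = 1.0 / n
            u, v = 0.0, slope0
            for i in range(n):
                x = i * h
                u += h * v
                v += h * f(x, u)
            return u

        def shoot(f, lo=-10.0, hi=10.0, tol=1e-10):
            """Bisect on the initial slope until the far boundary hits 0."""
            for _ in range(200):
                mid = 0.5 * (lo + hi)
                if integrate(lo, f) * integrate(mid, f) <= 0.0:
                    hi = mid
                else:
                    lo = mid
                if hi - lo < tol:
                    break
            return 0.5 * (lo + hi)

        # Hypothetical normalized charge density: u'' = -1 (uniform doping).
        slope = shoot(lambda x, u: -1.0)
        print(f"initial slope ~ {slope:.5f} (analytic value: 0.5)")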

  7. Qualitative discussion of quantitative radiography

    International Nuclear Information System (INIS)

    Berger, H.; Motz, J.W.

    1975-01-01

    Since radiography yields an image that can be easily related to the tested object, it is superior to many nondestructive testing techniques in revealing the size, shape, and location of certain types of discontinuities. The discussion is limited to a description of the radiographic process, examination of some of the quantitative aspects of radiography, and an outline of some of the new ideas emerging in radiography. The advantages of monoenergetic x-ray radiography and neutron radiography are noted

  8. Quantitative analysis of coupler tuning

    International Nuclear Information System (INIS)

    Zheng Shuxin; Cui Yupeng; Chen Huaibi; Xiao Liling

    2001-01-01

    On the basis of a coupling-cavity chain equivalent-circuit model, the author deduces an equation relating the coupler frequency deviation Δf to the coupling coefficient β, instead of only giving the direction of adjustment in the coupler-matching process. According to this equation, automatic measurement and quantitative display are realized in a measuring system. This contributes to the industrialization of traveling-wave accelerators for large container inspection systems

  9. Quantitative Methods for Teaching Review

    OpenAIRE

    Irina Milnikova; Tamara Shioshvili

    2011-01-01

    A new method of quantitative evaluation of teaching processes is elaborated. On the basis of score data, the method permits evaluation of teaching efficiency within one group of students and comparison of teaching efficiency across two or more groups. As basic characteristics of teaching efficiency, heterogeneity, stability and total variability indices are used, both for a single group and for comparing different groups. The method is easy to use and permits ranking of the results of teaching reviews, which...

  10. Computational complexity a quantitative perspective

    CERN Document Server

    Zimand, Marius

    2004-01-01

    There has been a common perception that computational complexity is a theory of "bad news" because its most typical results assert that various real-world and innocent-looking tasks are infeasible. In fact, "bad news" is a relative term, and, indeed, in some situations (e.g., in cryptography), we want an adversary to not be able to perform a certain task. However, a "bad news" result does not automatically become useful in such a scenario. For this to happen, its hardness features have to be quantitatively evaluated and shown to manifest extensively. The book undertakes a quantitative analysis of some of the major results in complexity that regard either classes of problems or individual concrete problems. The sizes of some important classes are studied using resource-bounded topological and measure-theoretical tools. In the case of individual problems, the book studies relevant quantitative attributes such as approximation properties or the number of hard inputs at each length. One chapter is dedicated to abs...

  11. In-vivo quantitative measurement

    International Nuclear Information System (INIS)

    Ito, Takashi

    1992-01-01

    So far, quantitative analyses of oxygen consumption rate, blood flow distribution, glucose metabolic rate and so on have been carried out by positron CT. The largest merit of using positron CT is that observation and verification in humans have become easy. Recently, accompanying the rapid development of mapping tracers for central nervous receptors, the observation of many central nervous receptors by positron CT has become feasible, and much expectation has been placed on the elucidation of brain functions. The conditions required for in vitro processes cannot be realized in the strict sense in vivo. The quantitative measurement of the in vivo tracer method is carried out by measuring the accumulation and movement of a tracer after its administration. The movement model of the mapping tracer for central nervous receptors is discussed. The quantitative analysis using a steady movement model, the measurement of dopamine receptors by the reference method, the measurement of D{sub 2} receptors using {sup 11}C-raclopride by the direct method, and the possibility of measuring dynamic bio-reactions are reported. (K.I.)

  12. Use of reference samples for more accurate RBS analyses

    International Nuclear Information System (INIS)

    Lanford, W.A.; Pelicon, P.; Zorko, B.; Budnar, M.

    2002-01-01

    While one of the primary assets of RBS analysis is that it is quantitative without use of reference samples, for certain types of analyses the precision of the method can be improved by measuring RBS spectra of unknowns relative to the RBS spectra of a similar known sample. The advantage of such an approach is that one can reduce (or eliminate) the uncertainties that arise from error in the detector solid angle, beam current integration efficiency, scattering cross-section, and stopping powers. We have used this approach extensively to determine the composition (x) of homogeneous thin films of TaN{sub x} using as reference samples films of pure Ta. Our approach is to measure R = (Ta count){sub unknown}/(Ta count){sub standard} and use RUMP to determine the function x(R). Once the function x(R) has been determined, this approach makes it easy to analyze many samples quickly. Other analyses for which this approach has proved useful are determination of the composition (x) of WN{sub x}, SiO{sub x}H{sub y} and SiN{sub x}H{sub y}, using W, SiO{sub 2} and amorphous Si as reference samples, respectively
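
    In code, the ratio-based workflow amounts to evaluating a previously established calibration curve for each measured count ratio. The sketch below uses linear interpolation over invented (R, x) pairs as a stand-in for the x(R) function the authors derive with RUMP; it is illustrative only.

        import numpy as np

        # Evaluate a calibration curve x(R) by interpolation. The (R, x)
        # pairs are invented placeholders for the RUMP-derived function.
        R_cal = np.array([1.00, 0.95, 0.90, 0.85, 0.80])  # count ratios
        x_cal = np.array([0.00, 0.25, 0.50, 0.75, 1.00])  # x in TaN_x

        def x_of_R(R):
            # np.interp needs ascending abscissae, hence the reversal.
            return np.interp(R, R_cal[::-1], x_cal[::-1])

        print(f"x(R = 0.87) ~ {x_of_R(0.87):.2f}")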

  13. Towards more accurate and reliable predictions for nuclear applications

    International Nuclear Information System (INIS)

    Goriely, S.

    2015-01-01

    The need for nuclear data far from the valley of stability, for applications such as nuclear astrophysics or future nuclear facilities, challenges the robustness as well as the predictive power of present nuclear models. Most nuclear data evaluation and prediction is still performed on the basis of phenomenological nuclear models. Over the last decades, important progress has been achieved in fundamental nuclear physics, making it now feasible to use more reliable, but also more complex, microscopic or semi-microscopic models in the evaluation and prediction of nuclear data for practical applications. In the present contribution, the reliability and accuracy of recent nuclear theories are discussed for most of the relevant quantities needed to estimate reaction cross sections and beta-decay rates, namely nuclear masses, nuclear level densities, gamma-ray strength, fission properties and beta-strength functions. It is shown that nowadays, mean-field models can be tuned to the same level of accuracy as the phenomenological models, renormalized on experimental data if needed, and can therefore replace the phenomenological inputs in the prediction of nuclear data. While fundamental nuclear physicists keep on improving state-of-the-art models, e.g. within the shell model or ab initio models, nuclear applications could make use of their most recent results as quantitative constraints or guides to improve predictions in energy or mass domains that will remain inaccessible experimentally. (orig.)

  14. Generalized PSF modeling for optimized quantitation in PET imaging.

    Science.gov (United States)

    Ashrafinia, Saeed; Mohy-Ud-Din, Hassan; Karakatsanis, Nicolas A; Jha, Abhinav K; Casey, Michael E; Kadrmas, Dan J; Rahmim, Arman

    2017-06-21

    Point-spread function (PSF) modeling offers the ability to account for resolution-degrading phenomena within the PET image generation framework. PSF modeling improves resolution and enhances contrast, but at the same time significantly alters image noise properties and induces an edge overshoot effect. Thus, studying the effect of PSF modeling on quantitation task performance can be very important. Frameworks explored in the past involved a dichotomy of PSF versus no-PSF modeling. By contrast, the present work focuses on quantitative performance evaluation of standard uptake value (SUV) PET images, while incorporating a wide spectrum of PSF models, including those that under- and over-estimate the true PSF, for the potential of enhanced quantitation of SUVs. The developed framework first analytically models the true PSF, considering a range of resolution-degradation phenomena (including photon non-collinearity, inter-crystal penetration and scattering) as present in data acquisitions with modern commercial PET systems. In the context of oncologic liver FDG PET imaging, we generated 200 noisy datasets per image-set (with clinically realistic noise levels) using an XCAT anthropomorphic phantom with liver tumours of varying sizes. These were subsequently reconstructed using the OS-EM algorithm with varying modelled PSF kernels. We focused on quantitation of both SUVmean and SUVmax, including assessment of contrast recovery coefficients, as well as noise-bias characteristics (including both image roughness and coefficient of variability), for different tumours/iterations/PSF kernels. It was observed that an overestimated PSF yielded more accurate contrast recovery for a range of tumours, and typically improved quantitative performance. For a clinically reasonable number of iterations, edge enhancement due to PSF modeling (especially due to an over-estimated PSF) was in fact seen to lower SUVmean bias in small tumours. Overall, the results indicate that exactly matched PSF
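
    For concreteness, the two figures of merit named above can be computed as in the sketch below. The formulas follow common conventions (hot-lesion contrast recovery coefficient, background coefficient of variation); the ROI values are invented, so this is illustrative only.

        import numpy as np

        # Contrast recovery coefficient (CRC, hot-lesion convention) and
        # image roughness (std/mean over a uniform background ROI).
        # All ROI values are invented.
        def crc(mean_tumor, mean_bkg, true_contrast):
            return (mean_tumor / mean_bkg - 1.0) / (true_contrast - 1.0)

        def roughness(bkg_voxels):
            v = np.asarray(bkg_voxels, dtype=float)
            return v.std() / v.mean()

        print(f"CRC ~ {crc(3.2, 1.0, 4.0):.2f}")          # ~0.73
        print(f"roughness ~ {roughness([0.95, 1.02, 1.08, 0.97, 1.01]):.3f}")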

  15. Distributed 3-D iterative reconstruction for quantitative SPECT

    International Nuclear Information System (INIS)

    Ju, Z.W.; Frey, E.C.; Tsui, B.M.W.

    1995-01-01

    The authors describe a distributed three dimensional (3-D) iterative reconstruction library for quantitative single-photon emission computed tomography (SPECT). This library includes 3-D projector-backprojector pairs (PBPs) and distributed 3-D iterative reconstruction algorithms. The 3-D PBPs accurately and efficiently model various combinations of the image degrading factors including attenuation, detector response and scatter response. These PBPs were validated by comparing projection data computed using the projectors with that from direct Monte Carlo (MC) simulations. The distributed 3-D iterative algorithms spread the projection-backprojection operations for all the projection angles over a heterogeneous network of single or multi-processor computers to reduce the reconstruction time. Based on a master/slave paradigm, these distributed algorithms provide dynamic load balancing and fault tolerance. The distributed algorithms were verified by comparing images reconstructed using both the distributed and non-distributed algorithms. Computation times for distributed 3-D reconstructions running on up to four identical processors were reduced by a factor of approximately 0.8-0.9 times the number of participating processors, compared to those for non-distributed 3-D reconstructions running on a single processor. When combined with faster, affordable computers, this library provides an efficient means of implementing accurate reconstruction and compensation methods to improve quality and quantitative accuracy in SPECT images
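
    A toy version of the work-sharing idea follows, with a local process pool standing in for the heterogeneous master/slave network; the real library also provides fault tolerance, which is not sketched here, and the per-angle "work" is a dummy computation.

        from multiprocessing import Pool

        def project_one_angle(angle_deg):
            """Stand-in for an expensive projector-backprojector call."""
            return angle_deg, sum(i * i for i in range(50_000))  # dummy work

        if __name__ == "__main__":
            angles = range(0, 360, 3)  # 120 projection angles
            with Pool(processes=4) as pool:
                # chunksize=1 approximates dynamic load balancing: a worker
                # fetches the next angle as soon as it finishes the last one.
                results = pool.map(project_one_angle, angles, chunksize=1)
            print(f"processed {len(results)} angles")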

  16. Liquid chromatography-mass spectrometry-based quantitative proteomics.

    Science.gov (United States)

    Linscheid, Michael W; Ahrends, Robert; Pieper, Stefan; Kühn, Andreas

    2009-01-01

    During the last decades, molecular sciences revolutionized biomedical research and gave rise to the biotechnology industry. During the next decades, the application of the quantitative sciences (informatics, physics, chemistry, and engineering) to biomedical research will bring about the next revolution that will improve human healthcare and certainly create new technologies, since there is no doubt that small changes can have great effects. It is not a question of "yes" or "no," but of "how much," to make best use of the medical options we will have. In this context, the development of accurate analytical methods must be considered a cornerstone, since the understanding of biological processes will be impossible without information about the minute changes induced in cells by interactions of cell constituents with all sorts of endogenous and exogenous influences and disturbances. The first quantitative techniques that were developed allowed monitoring of relative changes only, but they clearly showed the significance of the information obtained. The recent advent of techniques claiming to quantify proteins and peptides not only relative to each other, but also in an absolute fashion, promised another quantum leap, since knowing the absolute amount will allow comparison of even unrelated species, and the definition of parameters will permit modeling of biological systems much more accurately than before. To bring these promises to life, several approaches are under development at this point in time, and this review is focused on those developments.

  17. Radio Astronomers Set New Standard for Accurate Cosmic Distance Measurement

    Science.gov (United States)

    1999-06-01

    A team of radio astronomers has used the National Science Foundation's Very Long Baseline Array (VLBA) to make the most accurate measurement ever made of the distance to a faraway galaxy. Their direct measurement calls into question the precision of distance determinations made by other techniques, including those announced last week by a team using the Hubble Space Telescope. The radio astronomers measured a distance of 23.5 million light-years to a galaxy called NGC 4258 in Ursa Major. "Ours is a direct measurement, using geometry, and is independent of all other methods of determining cosmic distances," said Jim Herrnstein, of the National Radio Astronomy Observatory (NRAO) in Socorro, NM. The team says their measurement is accurate to within less than a million light-years, or four percent. The galaxy is also known as Messier 106 and is visible with amateur telescopes. Herrnstein, along with James Moran and Lincoln Greenhill of the Harvard-Smithsonian Center for Astrophysics; Phillip Diamond, of the Merlin radio telescope facility at Jodrell Bank and the University of Manchester in England; Makoto Inoue and Naomasa Nakai of Japan's Nobeyama Radio Observatory; Makoto Miyoshi of Japan's National Astronomical Observatory; Christian Henkel of Germany's Max Planck Institute for Radio Astronomy; and Adam Riess of the University of California at Berkeley, announced their findings at the American Astronomical Society's meeting in Chicago. "This is an incredible achievement to measure the distance to another galaxy with this precision," said Miller Goss, NRAO's Director of VLA/VLBA Operations. "This is the first time such a great distance has been measured this accurately. It took painstaking work on the part of the observing team, and it took a radio telescope the size of the Earth -- the VLBA -- to make it possible," Goss said. "Astronomers have sought to determine the Hubble Constant, the rate of expansion of the universe, for decades. This will in turn lead to an

  18. A new, accurate predictive model for incident hypertension.

    Science.gov (United States)

    Völzke, Henry; Fung, Glenn; Ittermann, Till; Yu, Shipeng; Baumeister, Sebastian E; Dörr, Marcus; Lieb, Wolfgang; Völker, Uwe; Linneberg, Allan; Jørgensen, Torben; Felix, Stephan B; Rettig, Rainer; Rao, Bharat; Kroemer, Heyo K

    2013-11-01

    Data mining represents an alternative approach to identify new predictors of multifactorial diseases. This work aimed at building an accurate predictive model for incident hypertension using data mining procedures. The primary study population consisted of 1605 normotensive individuals aged 20-79 years with 5-year follow-up from the population-based Study of Health in Pomerania (SHIP). The initial set was randomly split into a training and a testing set. We used a probabilistic graphical model applying a Bayesian network to create a predictive model for incident hypertension and compared the predictive performance with the established Framingham risk score for hypertension. Finally, the model was validated in 2887 participants from INTER99, a Danish community-based intervention study. In the training set of SHIP data, the Bayesian network used a small subset of relevant baseline features including age, mean arterial pressure, rs16998073, serum glucose and urinary albumin concentrations. Furthermore, we detected relevant interactions between age and serum glucose as well as between rs16998073 and urinary albumin concentrations [area under the receiver operating characteristic curve (AUC) of 0.76]. The model was confirmed in the SHIP validation set (AUC 0.78) and externally replicated in INTER99 (AUC 0.77). Compared to the established Framingham risk score for hypertension, the predictive performance of the new model was similar in the SHIP validation set and moderately better in INTER99. Data mining procedures identified a predictive model for incident hypertension, which included innovative and easy-to-measure variables. The findings promise great applicability in screening settings and clinical practice.

  19. Accurate diagnosis of CHD by Paediatricians with Expertise in Cardiology.

    Science.gov (United States)

    Jacob, Hannah C; Massey, Hannah; Yates, Robert W M; Kelsall, A Wilfred

    2017-08-01

    Introduction Paediatricians with Expertise in Cardiology assess children with a full history and examination, and often perform an echocardiogram. A minority are then referred to an outreach clinic run jointly with a visiting paediatric cardiologist. The accuracy of the echocardiography diagnosis made by the Paediatrician with Expertise in Cardiology is unknown. Materials and methods We conducted a retrospective review of clinic letters for children seen in the outreach clinic for the first time between March, 2004 and March, 2011. Children with CHD diagnosed antenatally or elsewhere were excluded. We recorded the echocardiography diagnosis made by the paediatric cardiologist and previously by the Paediatrician with Expertise in Cardiology. The Paediatrician with Expertise in Cardiology referred 317/3145 (10%) children seen in the local cardiac clinics to the outreach clinic over this period, and among them 296 were eligible for inclusion. Their median age was 1.5 years (range 1 month-15.1 years). For 244 (82%) children, there was complete diagnostic agreement between the Paediatrician with Expertise in Cardiology and the paediatric cardiologist. For 29 (10%) children, the main diagnosis was identical with additional findings made by the paediatric cardiologist. The abnormality had resolved in 17 (6%) cases by the time of clinic attendance. In six (2%) patients, the paediatric cardiologist made a different diagnosis. In total, 138 (47%) patients underwent a surgical or catheter intervention. Discussion Paediatricians with Expertise in Cardiology can make accurate diagnoses of CHD in children referred to their clinics. This can allow effective triage of children attending the outreach clinic, making best use of limited specialist resources.

  20. Accurate mobile malware detection and classification in the cloud.

    Science.gov (United States)

    Wang, Xiaolei; Yang, Yuexiang; Zeng, Yingzhi

    2015-01-01

    As the dominant smartphone operating system, Android has attracted the attention of malware authors and researchers alike. The number of types of Android malware is increasing rapidly despite the considerable number of proposed malware analysis systems. In this paper, by taking advantage of the low false-positive rate of misuse detection and the ability of anomaly detection to detect zero-day malware, we propose a novel hybrid detection system based on a new open-source framework, CuckooDroid, which enables the use of Cuckoo Sandbox's features to analyze Android malware through dynamic and static analysis. Our proposed system mainly consists of two parts: an anomaly detection engine performing abnormal-app detection through dynamic analysis, and a signature detection engine performing known-malware detection and classification with a combination of static and dynamic analysis. We evaluate our system using 5560 malware samples and 6000 benign samples. Experiments show that our anomaly detection engine with dynamic analysis is capable of detecting zero-day malware with a low false negative rate (1.16%) and an acceptable false positive rate (1.30%); it is worth noting that our signature detection engine with hybrid analysis can accurately classify malware samples with an average true positive rate of 98.94%. Considering the intensive computing resources required by the static and dynamic analysis, our proposed detection system should be deployed off-device, such as in the cloud. The app store markets and ordinary users can access our detection system for malware detection through the cloud service.

  1. SNPdetector: a software tool for sensitive and accurate SNP detection.

    Directory of Open Access Journals (Sweden)

    Jinghui Zhang

    2005-10-01

    Full Text Available Identification of single nucleotide polymorphisms (SNPs) and mutations is important for the discovery of genetic predisposition to complex diseases. PCR resequencing is the method of choice for de novo SNP discovery. However, manual curation of putative SNPs has been a major bottleneck in the application of this method to high-throughput screening. Therefore it is critical to develop a more sensitive and accurate computational method for automated SNP detection. We developed a software tool, SNPdetector, for automated identification of SNPs and mutations in fluorescence-based resequencing reads. SNPdetector was designed to model the process of human visual inspection and has a very low false positive and false negative rate. We demonstrate the superior performance of SNPdetector in SNP and mutation analysis by comparing its results with those derived by human inspection, PolyPhred (a popular SNP detection tool), and independent genotype assays in three large-scale investigations. The first study identified and validated inter- and intra-subspecies variations in 4,650 traces of 25 inbred mouse strains that belong to either the Mus musculus species or the M. spretus species. Unexpected heterozygosity in the CAST/Ei strain was observed in two out of 1,167 mouse SNPs. The second study identified 11,241 candidate SNPs in five ENCODE regions of the human genome covering 2.5 Mb of genomic sequence. Approximately 50% of the candidate SNPs were selected for experimental genotyping; the validation rate exceeded 95%. The third study detected ENU-induced mutations (at 0.04% allele frequency) in 64,896 traces of 1,236 zebrafish. Our analysis of three large and diverse test datasets demonstrated that SNPdetector is an effective tool for genome-scale research and for large-sample clinical studies. SNPdetector runs on the Unix/Linux platform and is available publicly (http://lpg.nci.nih.gov).

  2. How accurate are the weather forecasts for Bierun (southern Poland)?

    Science.gov (United States)

    Gawor, J.

    2012-04-01

    Weather forecast accuracy has increased in recent times, mainly thanks to the significant development of numerical weather prediction models. Despite the improvements, the forecasts should be verified to control their quality. The evaluation of forecast accuracy can also be an interesting learning activity for students. It joins natural curiosity about everyday weather and scientific process skills: problem solving, database technologies, graph construction and graphical analysis. The examination of the weather forecasts has been undertaken by a group of 14-year-old students from Bierun (southern Poland). They participate in the GLOBE program to develop inquiry-based investigations of the local environment. For the atmospheric research, an automatic weather station is used. The observed data were compared with corresponding forecasts produced by two numerical weather prediction models, i.e. COAMPS (Coupled Ocean/Atmosphere Mesoscale Prediction System) developed by Naval Research Laboratory Monterey, USA; it runs operationally at the Interdisciplinary Centre for Mathematical and Computational Modelling in Warsaw, Poland and COSMO (The Consortium for Small-scale Modelling) used by the Polish Institute of Meteorology and Water Management. The analysed data included air temperature, precipitation, wind speed, wind chill and sea level pressure. The prediction periods from 0 to 24 hours (Day 1) and from 24 to 48 hours (Day 2) were considered. The verification statistics that are commonly used in meteorology have been applied: mean error, also known as bias, for continuous data and a 2x2 contingency table to get the hit rate and false alarm ratio for a few precipitation thresholds. The results of the aforementioned activity became an interesting basis for discussion. The most important topics are: 1) to what extent can we rely on the weather forecasts? 2) How accurate are the forecasts for the two considered time ranges? 3) Which precipitation threshold is the most predictable? 4) Why
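
    The verification statistics named above are simple to compute. The sketch below shows mean error (bias) for a continuous variable and hit rate / false alarm ratio from a 2x2 contingency table for one precipitation threshold; the forecast/observation pairs are made up for illustration.

        def mean_error(forecasts, observations):
            """Bias of a continuous forecast: mean of (forecast - observed)."""
            return sum(f - o for f, o in zip(forecasts, observations)) / len(forecasts)

        def contingency_scores(forecast_rain, observed_rain):
            """Hit rate and false alarm ratio from yes/no event pairs."""
            hits = sum(f and o for f, o in zip(forecast_rain, observed_rain))
            false_alarms = sum(f and not o for f, o in zip(forecast_rain, observed_rain))
            misses = sum(not f and o for f, o in zip(forecast_rain, observed_rain))
            hit_rate = hits / (hits + misses)
            false_alarm_ratio = false_alarms / (hits + false_alarms)
            return hit_rate, false_alarm_ratio

        print(mean_error([2.1, 0.0, 5.3], [1.8, 0.4, 4.9]))
        print(contingency_scores([True, True, False, True], [True, False, False, True]))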

  3. Quantum Monte Carlo: Faster, More Reliable, And More Accurate

    Science.gov (United States)

    Anderson, Amos Gerald

    2010-06-01

    The Schrodinger Equation has been available for about 83 years, but today we still strain to apply it accurately to molecules of interest. The difficulty is not theoretical in nature, but practical, since we're held back by a lack of sufficient computing power. Consequently, effort is applied to finding acceptable approximations to facilitate real-time solutions. In the meantime, computer technology has begun rapidly advancing and changing the way we think about efficient algorithms. For those who can reorganize their formulas to take advantage of these changes and thereby lift some approximations, incredible new opportunities await. Over the last decade, we've seen the emergence of a new kind of computer processor, the graphics card. Designed to accelerate computer games by optimizing for processor quantity instead of quality, graphics cards have become of sufficient quality to be useful to some scientists. In this thesis, we explore the first known application of a graphics card to computational chemistry by rewriting our Quantum Monte Carlo software into the requisite "data parallel" formalism. We find that, notwithstanding precision considerations, we are able to speed up our software by about a factor of 6. The success of a Quantum Monte Carlo calculation depends on more than just processing power. It also requires the scientist to carefully design the trial wavefunction used to guide simulated electrons. We have studied the use of Generalized Valence Bond wavefunctions to simply, yet effectively, capture the essential static correlation in atoms and molecules. Furthermore, we have developed significantly improved two-particle correlation functions, designed with both flexibility and simplicity considerations, representing an effective and reliable way to add the necessary dynamic correlation. Lastly, we present our method for stabilizing the statistical nature of the calculation by manipulating configuration weights, thus facilitating efficient and robust calculations. Our

  4. Accurate absolute measurement of trapped Cs atoms in a MOT

    International Nuclear Information System (INIS)

    Talavera O, M.; Lopez R, M.; Carlos L, E. de; Jimenez S, S.

    2007-01-01

    A Cs-133 Magneto-Optical Trap (MOT) has been developed at the Time and Frequency Division of the Centro Nacional de Metrologia, CENAM, in Mexico. This MOT is part of a primary frequency standard based on ultra-cold Cs atoms, called the CsF-1 clock, under development at CENAM. In this Cs MOT, we use the standard (σ+ - σ-) configuration of four horizontal and two vertical laser beams, 1.9 cm in diameter, with 5 mW each. We use an 852 nm, 5 mW DBR laser as the master laser, which is stabilized by saturation spectroscopy. The emission linewidth of the master laser is 1 MHz. In order to amplify the light of the master laser, a 50 mW, 852 nm AlGaAs laser is used as the slave laser. This slave laser is stabilized by the light-injection technique. A 12 MHz red shift of the light is performed by two double passes through two Acousto-Optic Modulators (AOMs). The optical part of CENAM's MOT is very robust against mechanical vibration, acoustic noise and temperature changes in our laboratory, because none of our diode lasers uses an extended cavity to reduce the linewidth. In this paper, we report results of our MOT characterization as a function of several operation parameters, such as the intensity of the laser beams, the laser beam diameter, the red shift of the light, and the gradient of the magnetic field. We also report an accurate absolute measurement of the number of Cs atoms trapped in our Cs MOT. We found up to 6 x 10{sup 7} Cs atoms trapped in our MOT, measured with an uncertainty no greater than 6.4%. (Author)

  5. Bioaccessibility tests accurately estimate bioavailability of lead to quail

    Science.gov (United States)

    Beyer, W. Nelson; Basta, Nicholas T; Chaney, Rufus L.; Henry, Paula F.; Mosby, David; Rattner, Barnett A.; Scheckel, Kirk G.; Sprague, Dan; Weber, John

    2016-01-01

    Hazards of soil-borne Pb to wild birds may be more accurately quantified if the bioavailability of that Pb is known. To better understand the bioavailability of Pb to birds, we measured blood Pb concentrations in Japanese quail (Coturnix japonica) fed diets containing Pb-contaminated soils. Relative bioavailabilities were expressed by comparison with blood Pb concentrations in quail fed a Pb acetate reference diet. Diets containing soil from five Pb-contaminated Superfund sites had relative bioavailabilities from 33% to 63%, with a mean of about 50%. Treatment of two of the soils with phosphorus significantly reduced the bioavailability of Pb. Bioaccessibility of Pb in the test soils was then measured in six in vitro tests and regressed on bioavailability. They were: the “Relative Bioavailability Leaching Procedure” (RBALP) at pH 1.5, the same test conducted at pH 2.5, the “Ohio State University In vitro Gastrointestinal” method (OSU IVG), the “Urban Soil Bioaccessible Lead Test”, the modified “Physiologically Based Extraction Test” and the “Waterfowl Physiologically Based Extraction Test.” All regressions had positive slopes. Based on criteria of slope and coefficient of determination, the RBALP pH 2.5 and OSU IVG tests performed very well. Speciation by X-ray absorption spectroscopy demonstrated that, on average, most of the Pb in the sampled soils was sorbed to minerals (30%), bound to organic matter (24%), or present as Pb sulfate (18%). Additional Pb was associated with P (chloropyromorphite, hydroxypyromorphite and tertiary Pb phosphate), and with Pb carbonates, leadhillite (a lead sulfate carbonate hydroxide), and Pb sulfide. The formation of chloropyromorphite reduced the bioavailability of Pb, and the amendment of Pb-contaminated soils with P may be a thermodynamically favored means to sequester Pb.

  6. Toward accurate and precise estimates of lion density.

    Science.gov (United States)

    Elliot, Nicholas B; Gopalaswamy, Arjun M

    2017-08-01

    Reliable estimates of animal density are fundamental to understanding ecological processes and population dynamics. Furthermore, their accuracy is vital to conservation because wildlife authorities rely on estimates to make decisions. However, it is notoriously difficult to accurately estimate density for wide-ranging carnivores that occur at low densities. In recent years, significant progress has been made in density estimation of Asian carnivores, but the methods have not been widely adapted to African carnivores, such as lions (Panthera leo). Although abundance indices for lions may produce poor inferences, they continue to be used to estimate density and inform management and policy. We used sighting data from a 3-month survey and adapted a Bayesian spatially explicit capture-recapture (SECR) model to estimate spatial lion density in the Maasai Mara National Reserve and surrounding conservancies in Kenya. Our unstructured spatial capture-recapture sampling design incorporated search effort to explicitly estimate detection probability and density on a fine spatial scale, making our approach robust in the context of varying detection probabilities. Overall posterior mean lion density was estimated to be 17.08 lions >1 year old per 100 km² (posterior SD 1.310), and the sex ratio was estimated at 2.2 females to 1 male. Our modeling framework and narrow posterior SD demonstrate that SECR methods can produce statistically rigorous and precise estimates of population parameters, and we argue that they should be favored over less reliable abundance indices. Furthermore, our approach is flexible enough to incorporate different data types, which enables robust population estimates over relatively short survey periods in a variety of systems. Trend analyses are essential to guide conservation decisions but are frequently based on surveys of differing reliability. We therefore call for a unified framework to assess lion numbers in key populations to improve management and

  7. Precise and accurate isotope ratio measurements by ICP-MS.

    Science.gov (United States)

    Becker, J S; Dietze, H J

    2000-09-01

    The precise and accurate determination of isotope ratios by inductively coupled plasma mass spectrometry (ICP-MS) and laser ablation ICP-MS (LA-ICP-MS) is important for quite different application fields (e.g. for isotope ratio measurements of stable isotopes in nature, especially for the investigation of isotope variation in nature or age dating, for determining isotope ratios of radiogenic elements in the nuclear industry, quality assurance of fuel material, for reprocessing plants, nuclear material accounting and radioactive waste control, for tracer experiments using stable isotopes or long-lived radionuclides in biological or medical studies). Thermal ionization mass spectrometry (TIMS), which used to be the dominant analytical technique for precise isotope ratio measurements, is being increasingly replaced for isotope ratio measurements by ICP-MS due to its excellent sensitivity, precision and good accuracy. Instrumental progress in ICP-MS was achieved by the introduction of the collision cell interface in order to dissociate many disturbing argon-based molecular ions, thermalize the ions and neutralize the disturbing argon ions of the plasma gas (Ar+). The application of the collision cell in ICP-QMS results in a higher ion transmission, improved sensitivity and better precision of isotope ratio measurements compared to quadrupole ICP-MS without the collision cell [e.g., for 235U/238U approximately 1 (at 10 microg L(-1) uranium): 0.07% relative standard deviation (RSD) vs. 0.2% RSD in short-term measurements (n = 5)]. A significant instrumental improvement for ICP-MS is the multicollector device (MC-ICP-MS), which yields better precision of isotope ratio measurements (up to 0.002% RSD). CE- and HPLC-ICP-MS are used for the separation of isobaric interferences of long-lived radionuclides and stable isotopes, e.g. in the determination of spallation nuclide abundances in an irradiated tantalum target.

  8. Achieving target voriconazole concentrations more accurately in children and adolescents.

    Science.gov (United States)

    Neely, Michael; Margol, Ashley; Fu, Xiaowei; van Guilder, Michael; Bayard, David; Schumitzky, Alan; Orbach, Regina; Liu, Siyu; Louie, Stan; Hope, William

    2015-01-01

    Despite the documented benefit of voriconazole therapeutic drug monitoring, nonlinear pharmacokinetics make the timing of steady-state trough sampling and appropriate dose adjustments unpredictable by conventional methods. We developed a nonparametric population model with data from 141 previously richly sampled children and adults. We then used it in our multiple-model Bayesian adaptive control algorithm to predict measured concentrations and doses in a separate cohort of 33 pediatric patients aged 8 months to 17 years who were receiving voriconazole and enrolled in a pharmacokinetic study. Using all available samples to estimate the individual Bayesian posterior parameter values, the median percent prediction bias relative to a measured target trough concentration in the patients was 1.1% (interquartile range, -17.1 to 10%). Compared to the actual dose that resulted in the target concentration, the percent bias of the predicted dose was -0.7% (interquartile range, -7 to 20%). Using only trough concentrations to generate the Bayesian posterior parameter values, the target bias was 6.4% (interquartile range, -1.4 to 14.7%; P = 0.16 versus the full posterior parameter value) and the dose bias was -6.7% (interquartile range, -18.7 to 2.4%; P = 0.15). Use of a sample collected at an optimal time of 4 h after a dose, in addition to the trough concentration, resulted in a nonsignificantly improved target bias of 3.8% (interquartile range, -13.1 to 18%; P = 0.32) and a dose bias of -3.5% (interquartile range, -18 to 14%; P = 0.33). With the nonparametric population model and trough concentrations, our control algorithm can accurately manage voriconazole therapy in children independently of steady-state conditions, and it is generalizable to any drug with a nonparametric pharmacokinetic model. (This study has been registered at ClinicalTrials.gov under registration no. NCT01976078.). Copyright © 2015, American Society for Microbiology. All Rights Reserved.

  9. Qualitative versus quantitative assessment of cerebrovascular reserve capacity

    International Nuclear Information System (INIS)

    Okuguchi, Taku

    2000-01-01

    Quantitative studies of cerebral blood flow (CBF) combined with an acetazolamide (ACZ) challenge have defined a subgroup of patients with symptomatic carotid or middle cerebral artery (MCA) occlusive diseases who are at an increased risk for stroke. Recent reports suggest that qualitative CBF techniques could also define the same high-risk subgroup. To evaluate the accuracy of the qualitative method, we compared qualitative ratios with quantitative CBF data, obtained using iodine-123-N-isopropyl-p-iodoamphetamine (IMP) single-photon emission computed tomography (SPECT). We analyzed qualitative and quantitative IMP SPECT images for 50 patients with symptomatic carotid or middle cerebral artery occlusive diseases. Quantitative CBF data were measured by the autoradiographic technique. One region-of-interest within each hemisphere was within the MCA territory. Relative cerebrovascular reserve capacity (CVRC) obtained using qualitative images before and after the intravenous administration of 1 g of ACZ was defined as follows: ({sub ACZ}C{sub occl}/{sub ACZ}C{sub non})/({sub baseline}C{sub occl}/{sub baseline}C{sub non}). The threshold for abnormal relative CVRC was defined as less than 1.0. Quantitative CBF was considered abnormal when the response to ACZ (percent change) on the symptomatic side (absolute CVRC) was a decrease of more than 10%. Of 39 patients whose relative CVRC were considered abnormal, 29 (74%) were normal in absolute CVRC (i.e., false positive). Two of 12 (17%) who were not considered compromised by qualitative criteria had abnormal absolute CVRC (i.e., false negative). This study demonstrates that this important subgroup cannot be accurately defined with qualitative methodology. (author)

  10. Quantitative Peptidomics with Five-plex Reductive Methylation labels

    Science.gov (United States)

    Tashima, Alexandre K.; Fricker, Lloyd D.

    2018-05-01

    Quantitative peptidomics and proteomics often use chemical tags to covalently modify peptides with reagents that differ in the number of stable isotopes, allowing for quantitation of the relative peptide levels in the original sample based on the peak height of each isotopic form. Different chemical reagents have been used as tags for quantitative peptidomics and proteomics, and all have strengths and weaknesses. One of the simplest approaches uses formaldehyde and sodium cyanoborohydride to methylate amines, converting primary and secondary amines into tertiary amines. Up to five different isotopic forms can be generated, depending on the isotopic forms of formaldehyde and cyanoborohydride reagents, allowing for five-plex quantitation. However, the mass difference between each of these forms is only 1 Da per methyl group incorporated into the peptide, and for many peptides there is substantial overlap from the natural abundance of 13C and other isotopes. In this study, we calculated the contribution from the natural isotopes for 26 native peptides and derived equations to correct the peak intensities. These equations were applied to data from a study using human embryonic kidney HEK293T cells in which five replicates were treated with 100 nM vinblastine for 3 h and compared with five replicates of cells treated with control medium. The correction equations brought the replicates to the expected 1:1 ratios and revealed significant decreases in levels of 21 peptides upon vinblastine treatment. These equations enable accurate quantitation of small changes in peptide levels using the reductive methylation labeling approach.
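
    The correction can be phrased as a small linear-algebra problem: the observed peak vector is a lower-triangular mixing matrix applied to the true channel intensities, so inverting the matrix recovers the latter. The spillover fractions in the sketch below are invented placeholders, not the peptide-specific natural-abundance coefficients derived in the paper.

        import numpy as np

        # Observed five-plex peaks = M @ true intensities, where M is a
        # lower-triangular matrix describing how each label's natural
        # isotope pattern leaks into the peaks 1 and 2 Da higher.
        spill_1, spill_2 = 0.12, 0.01       # invented spillover fractions
        n = 5
        M = np.zeros((n, n))
        for i in range(n):
            M[i, i] = 1.0 - spill_1 - spill_2   # monoisotopic fraction kept
            if i + 1 < n:
                M[i + 1, i] = spill_1           # +1 Da overlap
            if i + 2 < n:
                M[i + 2, i] = spill_2           # +2 Da overlap

        observed = np.array([100.0, 112.0, 98.0, 103.0, 96.0])
        corrected = np.linalg.solve(M, observed)
        print(np.round(corrected, 1))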

  11. Qualitative versus quantitative assessment of cerebrovascular reserve capacity

    Energy Technology Data Exchange (ETDEWEB)

    Okuguchi, Taku [Iwate Medical Univ., Morioka (Japan). School of Medicine

    2000-06-01

    Quantitative studies of cerebral blood flow (CBF) combined with an acetazolamide (ACZ) challenge have defined a subgroup of patients with symptomatic carotid or middle cerebral artery (MCA) occlusive diseases who are at an increased risk for stroke. Recent reports suggest that qualitative CBF techniques could also define the same high-risk subgroup. To evaluate the accuracy of the qualitative method, we compared qualitative ratios with quantitative CBF data, obtained using iodine-123-N-isopropyl-p-iodoamphetamine (IMP) single-photon emission computed tomography (SPECT). We analyzed qualitative and quantitative IMP SPECT images for 50 patients with symptomatic carotid or middle cerebral artery occlusive diseases. Quantitative CBF data were measured by the autoradiographic technique. One region-of-interest within each hemisphere was within the MCA territory. Relative cerebrovascular reserve capacity (CVRC) obtained using qualitative images before and after the intravenous administration of 1 g of ACZ was defined as follows: ({sub ACZ}C{sub occl}/{sub ACZ}C{sub non})/({sub baseline}C{sub occl}/{sub baseline}C{sub non}). The threshold for abnormal relative CVRC was defined as less than 1.0. Quantitative CBF was considered abnormal when the response to ACZ (percent change) on the symptomatic side (absolute CVRC) was a decrease of more than 10%. Of 39 patients whose relative CVRC were considered abnormal, 29 (74%) were normal in absolute CVRC (i.e., false positive). Two of 12 (17%) who were not considered compromised by qualitative criteria had abnormal absolute CVRC (i.e., false negative). This study demonstrates that this important subgroup cannot be accurately defined with qualitative methodology. (author)

  12. Radionuclide angiocardiography. Improved diagnosis and quantitation of left-to-right shunts using area ratio techniques in children

    International Nuclear Information System (INIS)

    Alderson, P.O.; Jost, R.G.; Strauss, A.W.; Boonvisut, S.; Markham, J.

    1975-01-01

    A comparison of several reported methods for detection and quantitation of left-to-right shunts by radionuclides was performed in 50 children. Count ratio (C2/C1) techniques were compared with the exponential extrapolation and gamma function area ratio techniques. C2/C1 ratios accurately detected shunts and could reliably separate shunts from normals, but there was a high rate of false positives in children with valvular heart disease. The area ratio methods provided more accurate shunt quantitation and a better separation of patients with valvular heart disease than did the C2/C1 ratio. The gamma function method showed a higher correlation with oximetry than the exponential method, but the difference was not statistically significant. For accurate shunt quantitation and a reliable separation of patients with valvular heart disease from those with shunts, area ratio calculations are preferable to the C2/C1 ratio
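
    As an illustration of the gamma-function step, the sketch below fits a gamma variate C(t) = K (t - t0)^alpha exp(-(t - t0)/beta) to a synthetic first-pass time-activity curve; in the area-ratio method, the fitted first-pass area is then compared with the area of the recirculating (shunt) component to estimate Qp/Qs. All values here are synthetic, not data from the study.

        import numpy as np
        from scipy.optimize import curve_fit

        def gamma_variate(t, K, t0, alpha, beta):
            """First-pass bolus model: zero before t0, gamma-shaped after."""
            dt = np.clip(t - t0, 0.0, None)
            return K * dt ** alpha * np.exp(-dt / beta)

        t = np.linspace(0.0, 30.0, 120)                 # seconds
        rng = np.random.default_rng(0)
        counts = gamma_variate(t, 50.0, 2.0, 2.5, 3.0) + rng.normal(0.0, 1.0, t.size)

        popt, _ = curve_fit(gamma_variate, t, counts,
                            p0=(40.0, 1.0, 2.0, 2.5), bounds=(0.0, np.inf))
        first_pass_area = gamma_variate(t, *popt).sum() * (t[1] - t[0])
        print(f"fitted first-pass area ~ {first_pass_area:.1f} counts*s")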

  13. Quantitative Analysis of cardiac SPECT

    International Nuclear Information System (INIS)

    Nekolla, S.G.; Bengel, F.M.

    2004-01-01

    The quantitative analysis of myocardial SPECT images is a powerful tool to extract the highly specific radiotracer uptake in these studies. When compared to normal databases, the uptake values can be calibrated on an individual basis. Doing so increases the reproducibility of the analysis substantially. Based on developments over the last three decades, starting from planar scintigraphy, this paper discusses the methods used today, incorporating the changes due to tomographic image acquisition. Finally, the limitations of these approaches, as well as consequences of the most recent hardware developments, commercial analysis packages and a wider view of the description of the left ventricle, are discussed. (orig.)

  14. Improved management of radiotherapy departments through accurate cost data

    International Nuclear Information System (INIS)

    Kesteloot, K.; Lievens, Y.; Schueren, E. van der

    2000-01-01

    Escalating health care expenses urge Governments towards cost containment. More accurate data on the precise costs of health care interventions are needed. We performed an aggregate cost calculation of radiation therapy departments and treatments and discussed the different cost components. The costs of a radiotherapy department were estimated, based on accreditation norms for radiotherapy departments set forth in the Belgian legislation. The major cost components of radiotherapy are the cost of buildings and facilities, equipment, medical and non-medical staff, materials and overhead. They respectively represent around 3, 30, 50, 4 and 13% of the total costs, irrespective of the department size. The average cost per patient lowers with increasing department size and optimal utilization of resources. Radiotherapy treatment costs vary in a stepwise fashion: minor variations of patient load do not affect the cost picture significantly, due to the small impact of variable costs. With larger increases in patient load, however, additional equipment and/or staff will become necessary, resulting in additional semi-fixed costs and an important increase in costs. A sensitivity analysis of these two major cost inputs shows that a decrease in total costs of 12-13% can be obtained by assuming personnel availability 20% below full time; that, due to evolving seniority levels, the annual increase in wage costs is estimated to be more than 1%; and that by changing the clinical lifetime of buildings and equipment with unchanged interest rate, a 5% reduction of total costs and cost per patient can be calculated. More sophisticated equipment will not have a very large impact on the cost (±4000 BEF/patient), provided that the additional equipment is adapted to the size of the department. That the recommendations we used, based on the Belgian legislation, are not outrageous is shown by replacing them with the USA Blue book recommendations. Depending on the department size, costs in

  15. New simple method for fast and accurate measurement of volumes

    International Nuclear Information System (INIS)

    Frattolillo, Antonio

    2006-01-01

    A new simple method is presented, which allows us to measure, in just a few minutes and with reasonable accuracy (error below 1%), the volume confined inside a generic enclosure, regardless of the complexity of its shape. The technique proposed also allows us to measure the volume of any portion of a complex manifold, including, for instance, pipes and pipe fittings, valves, gauge heads, and so on, without disassembling the manifold at all. For this purpose, an airtight variable volume is used, whose volume adjustment can be precisely measured; it has an overall capacity larger than that of the unknown volume. Such a variable volume is initially filled with a suitable test gas (for instance, air) at a known pressure, as carefully measured by means of a high-precision capacitive gauge. By opening a valve, the test gas is allowed to expand into the previously evacuated unknown volume. A feedback control loop reacts to the resulting finite pressure drop, contracting the variable volume until the pressure exactly retrieves its initial value. The overall reduction of the variable volume achieved at the end of this process gives a direct measurement of the unknown volume, and definitively gets rid of the problem of dead spaces. The method proposed actually does not require the test gas to be rigorously held at a constant temperature, thus resulting in a huge simplification as compared to the complex arrangements commonly used in metrology (gas expansion method), which can deliver extremely accurate measurements but require rather expensive equipment and time-consuming procedures, and are therefore impractical in most applications. A simple theoretical analysis of the thermodynamic cycle and the results of experimental tests are described, which demonstrate that, in spite of its simplicity, the method provides a measurement accuracy within 0.5%. The system requires just a few minutes to complete a single measurement, and is ready immediately at the end of the process. The
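
    Under isothermal ideal-gas assumptions, the reason the measured contraction equals the unknown volume reduces to a one-line mole balance; the LaTeX snippet below is our hedged reconstruction of that argument (notation ours, not the paper's).

        \documentclass{article}
        \usepackage{amsmath}
        \begin{document}
        % Initial state: test gas fills the variable volume $V_v$ at pressure
        % $P_0$. Final state: gas fills $(V_v - \Delta V) + V_x$ after the
        % feedback loop has restored $P_0$.
        \begin{equation*}
          P_0 V_v = P_0 \bigl[(V_v - \Delta V) + V_x\bigr]
          \quad\Longrightarrow\quad
          V_x = \Delta V ,
        \end{equation*}
        so the measured contraction $\Delta V$ directly equals the unknown
        volume, independent of dead space, provided the temperature stays
        approximately constant over the cycle.
        \end{document}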

  16. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    than 0.05% of the tomograph radius value. These results demonstrate that the proposed approaches can estimate an object's position accurately based on EIT measurements if enough process information is available for training or modelling. Since they do not require complex calculations, it is possible to use them in real-time applications without requiring high-performance computers.

  17. Towards an accurate real-time locator of infrasonic sources

    Science.gov (United States)

    Pinsky, V.; Blom, P.; Polozov, A.; Marcillo, O.; Arrowsmith, S.; Hofstetter, A.

    2017-11-01

    Infrasonic signals propagate from an atmospheric source via media with stochastic and fast space-varying conditions. Hence, their travel time, the amplitude at sensor recordings and even manifestation in the so-called "shadow zones" are random. Therefore, the traditional least-squares technique for locating infrasonic sources is often not effective, and the problem for the best solution must be formulated in probabilistic terms. Recently, a series of papers has been published about the Bayesian Infrasonic Source Localization (BISL) method, based on the computation of the posterior probability density function (PPDF) of the source location, as a convolution of the a priori probability distribution function (APDF) of the propagation model parameters with the likelihood function (LF) of observations. The present study is devoted to the further development of BISL for higher accuracy and stability of the source location results and a decreased computational load. We critically analyse previous algorithms and propose several new ones. First of all, we describe the general PPDF formulation and demonstrate that this relatively slow algorithm might be among the most accurate algorithms, provided adequate APDF and LF are used. Then, we suggest using summation instead of integration in the general PPDF calculation for increased robustness, but this leads us to a 3D space-time optimization problem. Two different forms of APDF approximation are considered and applied for the PPDF calculation in our study. One of them, previously suggested but not yet properly used, is the so-called "celerity-range histogram" (CRH). Another is the outcome of previous findings of a linear mean travel time for the first four infrasonic phases in overlapping consecutive distance ranges. This stochastic model is extended here to the regional distance of 1000 km, and the APDF introduced is the probabilistic form of the junction between this travel time model and range-dependent probability
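
    The grid-summation variant of the PPDF evaluation can be sketched compactly: combine a flat prior with Gaussian travel-time likelihoods over a grid of candidate source locations and take the maximum. The stations, arrivals and celerity statistics below are invented, and the paper's APDF and LF are far richer than this toy.

        import numpy as np

        # Grid-based Bayesian localization toy: flat prior plus a Gaussian
        # travel-time likelihood per station, maximized over the grid.
        stations = np.array([[0.0, 0.0], [300.0, 0.0], [0.0, 400.0]])  # km
        arrivals = np.array([471.0, 745.0, 1054.0])    # s after origin time
        cel_mean, cel_sd = 0.30, 0.02                  # assumed celerity, km/s

        xs = np.linspace(-100.0, 400.0, 251)
        ys = np.linspace(-100.0, 500.0, 301)
        X, Y = np.meshgrid(xs, ys)

        log_post = np.zeros_like(X)                    # flat prior
        for (sx, sy), t_obs in zip(stations, arrivals):
            r = np.hypot(X - sx, Y - sy)
            t_pred = r / cel_mean
            t_sd = np.maximum(r * cel_sd / cel_mean**2, 1.0)  # delta method
            log_post += -0.5 * ((t_obs - t_pred) / t_sd) ** 2

        i, j = np.unravel_index(np.argmax(log_post), log_post.shape)
        print(f"MAP source estimate ~ ({X[i, j]:.0f} km, {Y[i, j]:.0f} km)")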

  18. Accurate light-time correction due to a gravitating mass

    Energy Technology Data Exchange (ETDEWEB)

    Ashby, Neil [Department of Physics, University of Colorado, Boulder, CO (United States); Bertotti, Bruno, E-mail: ashby@boulder.nist.go [Dipartimento di Fisica Nucleare e Teorica, Universita di Pavia (Italy)

    2010-07-21

    This technical paper of mathematical physics arose as an aftermath of the 2002 Cassini experiment (Bertotti et al 2003 Nature 425 374-6), in which the PPN parameter {gamma} was measured with an accuracy {sigma}{sub {gamma}} = 2.3 x 10{sup -5} and found consistent with the prediction {gamma} = 1 of general relativity. The Orbit Determination Program (ODP) of NASA's Jet Propulsion Laboratory, which was used in the data analysis, is based on an expression (8) for the gravitational delay {Delta}t that differs from the standard formula (2); this difference is of second order in powers of m (the gravitational radius of the Sun), but in Cassini's case it was much larger than the expected order of magnitude m{sup 2}/b, where b is the distance of the closest approach of the ray. Since the ODP does not take into account any other second-order terms, it is necessary, also in view of future more accurate experiments, to revisit the whole problem, to systematically evaluate higher order corrections and to determine which terms, and why, are larger than the expected value. We note that light propagation in a static spacetime is equivalent to a problem in ordinary geometrical optics; Fermat's action functional at its minimum is just the light-time between the two end points A and B. A new and powerful formulation is thus obtained. This method is closely connected with the much more general approach of Le Poncin-Lafitte et al (2004 Class. Quantum Grav. 21 4463-83), which is based on Synge's world function. Asymptotic power series are necessary to provide a safe and automatic way of selecting which terms to keep at each order. Higher order approximations to the required quantities, in particular the delay and the deflection, are easily obtained. We also show that in a close superior conjunction, when b is much smaller than the distances of A and B from the Sun, say of order R, the second-order correction has an enhanced part of order m{sup 2}R/b{sup 2}, which
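
    For reference, the standard first-order gravitational delay that the abstract contrasts with the ODP expression is the Shapiro delay; the LaTeX snippet below gives a common textbook form of it (our notation, a hedged reconstruction rather than the paper's equation (2) verbatim).

        \documentclass{article}
        \usepackage{amsmath}
        \begin{document}
        % First-order gravitational (Shapiro) light-time delay between end
        % points $A$ and $B$, with $m = GM_\odot/c^2$ the gravitational radius
        % of the Sun, $r_A$, $r_B$ the heliocentric distances of the end
        % points and $R$ their mutual distance:
        \begin{equation*}
          \Delta t = (1+\gamma)\,\frac{m}{c}\,
          \ln\!\frac{r_A + r_B + R}{r_A + r_B - R} ,
        \end{equation*}
        while the enhanced second-order part discussed above scales as
        $m^2 R/b^2$ and matters in close superior conjunction, $b \ll R$.
        \end{document}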

  19. Quantitative Trait Loci in Inbred Lines

    NARCIS (Netherlands)

    Jansen, R.C.

    2001-01-01

    Quantitative traits result from the influence of multiple genes (quantitative trait loci) and environmental factors. Detecting and mapping the individual genes underlying such 'complex' traits is a difficult task. Fortunately, populations obtained from crosses between inbred lines are relatively

  20. A quantitative framework for assessing ecological resilience

    Science.gov (United States)

    Quantitative approaches to measure and assess resilience are needed to bridge gaps between science, policy, and management. In this paper, we suggest a quantitative framework for assessing ecological resilience. Ecological resilience as an emergent ecosystem phenomenon can be de...

  1. Operations management research methodologies using quantitative modeling

    NARCIS (Netherlands)

    Bertrand, J.W.M.; Fransoo, J.C.

    2002-01-01

    Gives an overview of quantitative model-based research in operations management, focusing on research methodology. Distinguishes between empirical and axiomatic research, and furthermore between descriptive and normative research. Presents guidelines for doing quantitative model-based research in

  2. Quantitative remote visual inspection in nuclear power industry

    International Nuclear Information System (INIS)

    Stone, M.C.

    1992-01-01

    A borescope is an instrument used within the power industry to visually inspect remote locations. It is typically used for inspections of heat exchangers, condensers, boiler tubes, and steam generators, and in many general inspection applications. The optical system of a borescope, like the human eye, does not have a fixed magnification. When viewing an object close up, it appears large; when the same object is viewed from afar, it appears small. Humans, though, have two separate eyes and a brain that processes this information to calculate the size of an object; these attributes provide what is considered secondary information. Until now, making a measurement using a borescope has been an educated guess. There has always been a need to make accurate measurements from borescope images. The realization of this capability would make remote visual inspection a quantitative nondestructive testing method rather than a qualitative one. For nuclear power plants, it is an excellent technique for maintaining radiation levels as low as reasonably achievable: remote visual measurement provides distance from the source and limits the exposure time needed to make accurate measurements. The design problem, therefore, was to develop the capability to make accurate and repeatable measurements of objects or physical defects with a borescope-type instrument. The solution was achieved by designing a borescope with a novel shadow projection mechanism, integrated with an electronics module containing the video display circuitry and a measurement computer
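
    As a purely hypothetical geometry sketch (not the patented shadow-projection design, whose details the abstract does not give): once a range cue such as a projected shadow recovers the tip-to-object distance, an apparent size in pixels converts to a true size by similar triangles.

        # d_mm: distance to the object, recovered from the shadow position
        # (assumed available); f_pix: focal length in pixel units (assumed).
        def true_size_mm(apparent_px, d_mm, f_pix):
            return apparent_px * d_mm / f_pix  # pinhole-model similar triangles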

  3. Quantitative prediction of drug side effects based on drug-related features.

    Science.gov (United States)

    Niu, Yanqing; Zhang, Wen

    2017-09-01

    Unexpected side effects of drugs are a great concern in drug development, and the identification of side effects is an important task. Recently, machine learning methods have been proposed to predict the presence or absence of side effects of interest for drugs, but it is difficult to make accurate predictions for all of them. In this paper, we transform the side effect profiles of drugs into quantitative scores by summing up their side effects with weights. The quantitative scores may measure the dangers of drugs, and thus help to compare the risk of different drugs. Here, we attempt to predict the quantitative scores of drugs, namely quantitative prediction. Specifically, we explore a variety of drug-related features and evaluate their discriminative power for quantitative prediction. We then consider several feature combination strategies (direct combination and average scoring ensemble combination) to integrate three informative features: chemical substructures, targets, and treatment indications. Finally, the average scoring ensemble model, which produced the better performance, is used as the final quantitative prediction model. Since the weights for side effects are empirical values, we randomly generate different weights in the simulation experiments. The experimental results show that the quantitative method is robust to different weights and produces satisfactory results. Although other state-of-the-art methods cannot make quantitative predictions directly, their prediction results can be transformed into quantitative scores. By indirect comparison, the proposed method produces much better results than benchmark methods in quantitative prediction. In conclusion, the proposed method is promising for the quantitative prediction of side effects and may work cooperatively with existing state-of-the-art methods to reveal the dangers of drugs.
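
    A hedged sketch of the quantitative-score idea and the average scoring ensemble follows; the feature matrices, the Ridge regressor and the random weights are illustrative stand-ins, not the paper's exact models.

        import numpy as np
        from sklearn.linear_model import Ridge

        rng = np.random.default_rng(0)
        n_drugs, n_effects = 200, 50
        profiles = rng.integers(0, 2, size=(n_drugs, n_effects))  # 0/1 profiles
        weights = rng.random(n_effects)       # empirical weights (randomized)
        scores = profiles @ weights           # quantitative score per drug

        # three informative feature blocks (simulated placeholders for
        # substructures, targets and indications)
        blocks = [rng.random((n_drugs, 100)) for _ in range(3)]

        # average scoring ensemble: one regressor per block, predictions averaged
        pred = np.mean([Ridge().fit(X, scores).predict(X) for X in blocks],
                       axis=0)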

  4. Quantitative graph theory mathematical foundations and applications

    CERN Document Server

    Dehmer, Matthias

    2014-01-01

    The first book devoted exclusively to quantitative graph theory, Quantitative Graph Theory: Mathematical Foundations and Applications presents and demonstrates existing and novel methods for analyzing graphs quantitatively. Incorporating interdisciplinary knowledge from graph theory, information theory, measurement theory, and statistical techniques, this book covers a wide range of quantitative-graph theoretical concepts and methods, including those pertaining to real and random graphs, such as: comparative approaches (graph similarity or distance); graph measures to characterize graphs quantitat
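
    As one concrete example of a quantitative graph measure of the kind the book surveys (the choice of measure here is ours, not taken from the book's text), the Shannon entropy of a graph's degree distribution characterizes the graph with a single number.

        import math
        import networkx as nx

        def degree_entropy(G):
            # entropy of the degree distribution as a scalar graph descriptor
            degs = [d for _, d in G.degree()]
            n = len(degs)
            probs = (degs.count(k) / n for k in set(degs))
            return -sum(p * math.log2(p) for p in probs)

        print(degree_entropy(nx.karate_club_graph()))  # one number per graph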

  5. Fluorescent foci quantitation for high-throughput analysis

    Directory of Open Access Journals (Sweden)

    Elena Ledesma-Fernández

    2015-06-01

    A number of cellular proteins localize to discrete foci within cells, for example DNA repair proteins, microtubule organizing centers, P bodies or kinetochores. It is often possible to measure the fluorescence emission from tagged proteins within these foci as a surrogate for the concentration of that specific protein. We wished to develop tools that would allow quantitation of fluorescence foci intensities in high-throughput studies. As proof of principle we have examined the kinetochore, a large multi-subunit complex that is critical for the accurate segregation of chromosomes during cell division. Kinetochore perturbations lead to aneuploidy, which is a hallmark of cancer cells. Hence, understanding kinetochore homeostasis and regulation is important for a global understanding of cell division and genome integrity. The 16 budding yeast kinetochores colocalize within the nucleus to form a single focus. Here we have created a set of freely available tools to allow high-throughput quantitation of kinetochore foci fluorescence. We use this ‘FociQuant’ tool to compare methods of kinetochore quantitation and we show proof of principle that FociQuant can be used to identify changes in kinetochore protein levels in a mutant that affects kinetochore function. This analysis can be applied to any protein that forms discrete foci in cells.
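
    An illustrative sketch of foci quantitation in this spirit (not the FociQuant code itself; the thresholds and background estimate are assumptions): detect local maxima in a fluorescence image and report background-subtracted integrated intensity in a small window around each focus.

        import numpy as np
        from scipy.ndimage import gaussian_filter, maximum_filter

        def quantify_foci(img, sigma=1.0, min_intensity=0.5, half=3):
            smooth = gaussian_filter(img.astype(float), sigma)
            peaks = (smooth == maximum_filter(smooth, size=5)) \
                    & (smooth > min_intensity)
            bg = np.median(img)                # crude per-image background
            out = []
            for y, x in zip(*np.nonzero(peaks)):
                win = img[max(y - half, 0):y + half + 1,
                          max(x - half, 0):x + half + 1]
                out.append(((y, x), float(win.sum() - bg * win.size)))
            return out                          # [((y, x), intensity), ...]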

  6. Realizing the quantitative potential of the radioisotope image

    International Nuclear Information System (INIS)

    Brown, N.J.G.; Britton, K.E.; Cruz, F.R.

    1977-01-01

    The sophistication and accuracy of a clinical strategy depend on the accuracy of the results of the tests used. When numerical values are given in the test report, powerful clinical strategies can be developed. The eye is well able to perceive structures in a high-quality grey-scale image. However, the degree of difference in density between two points cannot be estimated quantitatively by eye. This creates a problem, particularly when there is only a small difference between the count-rate at a suspicious point or region and the count-rate to be expected there if the image were normal. To resolve this problem, methods have been developed for quantitating the amplitude of a feature, defined as the difference between the observed and expected values at the region of the feature. The eye can estimate the frequency of light entering it very accurately (perceived as colour). Thus, if count-rate data are transformed into colour in a systematic way, information about relative count-rate can be perceived. A computer-driven, interactive colour display system is used in which the count-rate range of each colour is computed as a percentage of a reference count-rate value. This can be used to obtain quantitative estimates of the amplitude of an image feature. The application of two methods to normal and pathological data is described and the results discussed. (author)
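
    A minimal sketch of the count-rate-to-colour mapping described (the band width is an assumed parameter): each pixel's count rate is expressed as a percentage of the reference value and binned into discrete colour bands, so that relative count rate becomes readable by eye.

        import numpy as np

        def colour_bands(counts, reference, band_percent=10):
            percent = 100.0 * counts / reference
            return np.floor(percent / band_percent).astype(int)  # band index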

  7. Methods for Quantitative Creatinine Determination.

    Science.gov (United States)

    Moore, John F; Sharer, J Daniel

    2017-04-06

    Reliable measurement of creatinine is necessary to assess kidney function, and also to quantitate drug levels and diagnostic compounds in urine samples. The most commonly used methods are based on the Jaffe principle of alkaline creatinine-picric acid complex color formation. However, other compounds commonly found in serum and urine may interfere with Jaffe creatinine measurements. Therefore, many laboratories have made modifications to the basic method to remove or account for these interfering substances. This appendix will summarize the basic Jaffe method, as well as a modified, automated version. Also described is a high-performance liquid chromatography (HPLC) method that separates creatinine from contaminants prior to direct quantification by UV absorption. Lastly, a liquid chromatography-tandem mass spectrometry (LC-MS/MS) method is described that uses stable isotope dilution to reliably quantify creatinine in any sample. This last approach has been recommended by experts in the field as a means to standardize all quantitative creatinine methods against an accepted reference. Copyright © 2017 John Wiley & Sons, Inc.
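
    A hedged sketch of the isotope-dilution quantification step (all numbers are illustrative): the creatinine to labelled-internal-standard peak-area ratio is read against a linear calibration built from spiked standards.

        import numpy as np

        cal_conc = np.array([10.0, 50.0, 100.0, 200.0])   # standards, µmol/L
        cal_ratio = np.array([0.11, 0.52, 1.05, 2.08])    # area/area(labelled IS)
        slope, intercept = np.polyfit(cal_conc, cal_ratio, 1)

        sample_ratio = 0.83
        sample_conc = (sample_ratio - intercept) / slope  # about 80 µmol/L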

  8. Quantitative risk assessment system (QRAS)

    Science.gov (United States)

    Weinstock, Robert M (Inventor); Smidts, Carol S (Inventor); Mosleh, Ali (Inventor); Chang, Yung-Hsien (Inventor); Swaminathan, Sankaran (Inventor); Groen, Francisco J (Inventor); Tan, Zhibin (Inventor)

    2001-01-01

    A quantitative risk assessment system (QRAS) builds a risk model of a system for which the risk of failure is being assessed, then analyzes the risk of the system corresponding to the risk model. The QRAS performs sensitivity analysis of the risk model by altering fundamental components and quantifications built into the risk model, then re-analyzes the risk of the system using the modifications. More particularly, the risk model is built by building a hierarchy, creating a mission timeline, quantifying failure modes, and building/editing event sequence diagrams. Multiplicities, dependencies, and redundancies of the system are included in the risk model. For analysis runs, a fixed baseline is first constructed and stored. This baseline contains the lowest-level scenarios, preserved in event tree structure. Analysis runs, at any level of the hierarchy and below, access this baseline for quantitative risk computation as well as for ranking of particular risks. A standalone Tool Box capability exists, allowing the user to store application programs within QRAS.
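
    A toy quantification in the spirit of the event-sequence description above (the structure and probabilities are invented for illustration): each scenario is a path of branch probabilities, and scenarios are ranked by end-state probability.

        import math

        scenarios = {
            "initiator->mitigation_ok":   [1e-3, 0.9],
            "initiator->mitigation_fail": [1e-3, 0.1],
        }
        risk = {name: math.prod(p) for name, p in scenarios.items()}
        ranked = sorted(risk.items(), key=lambda kv: kv[1], reverse=True)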

  9. Quantitative Characterization of Nanostructured Materials

    Energy Technology Data Exchange (ETDEWEB)

    Dr. Frank (Bud) Bridges, University of California-Santa Cruz

    2010-08-05

    The two-and-a-half day symposium on the "Quantitative Characterization of Nanostructured Materials" will be the first comprehensive meeting on this topic held under the auspices of a major U.S. professional society. Spring MRS Meetings provide a natural venue for this symposium as they attract a broad audience of researchers that represents a cross-section of the state-of-the-art regarding synthesis, structure-property relations, and applications of nanostructured materials. Close interactions among the experts in local structure measurements and materials researchers will help both to identify measurement needs pertinent to real-world materials problems and to familiarize the materials research community with the state-of-the-art local structure measurement techniques. We have chosen invited speakers that reflect the multidisciplinary and international nature of this topic and the need to continually nurture productive interfaces among university, government and industrial laboratories. The intent of the symposium is to provide an interdisciplinary forum for discussion and exchange of ideas on the recent progress in quantitative characterization of structural order in nanomaterials using different experimental techniques and theory. The symposium is expected to facilitate discussions on optimal approaches for determining atomic structure at the nanoscale using combined inputs from multiple measurement techniques.

  10. Quantitative information in medical imaging

    International Nuclear Information System (INIS)

    Deconinck, F.

    1985-01-01

    When developing new imaging or image processing techniques, one constantly has in mind that the new technique should provide a better, or more optimal, answer to medical tasks than existing techniques do. 'Better' or 'more optimal' imply some kind of standard by which one can measure imaging or image processing performance. The choice of a particular imaging modality to answer a diagnostic task, such as the detection of coronary artery stenosis, is also based on an implicit optimisation of performance criteria. Performance is measured by the ability to provide information about an object (patient) to the person (referring doctor) who ordered a particular task. In medical imaging the task is generally to find quantitative information on bodily function (biochemistry, physiology) and structure (histology, anatomy). In medical imaging, a wide range of techniques is available, and each technique has its own characteristics. The techniques discussed in this paper are: nuclear magnetic resonance, X-ray fluorescence, scintigraphy, positron emission tomography, applied potential tomography, computerized tomography, and Compton tomography. This paper provides a framework for the comparison of imaging performance, based on the way the quantitative information flow is altered by the characteristics of the modality

  11. Digital radiography: a quantitative approach

    International Nuclear Information System (INIS)

    Retraint, F.

    2004-01-01

    In a radiograph the value of each pixel is related to the material thickness crossed by the x-rays. Using this relationship, an object can be characterized by parameters such as depth, surface and volume. Assuming a locally linear detector response and using a radiograph of a reference object, the quantitative thickness map of the object can be obtained by applying offset and gain corrections. However, for an acquisition system composed of a cooled CCD camera optically coupled to a scintillator screen, the radiographic image formation process introduces several biases that prevent one from obtaining this quantitative information: non-uniformity of the x-ray source, beam hardening, Compton scattering, the scintillator screen, and the optical system response. In a first section, we propose a complete model of the radiographic image formation process taking these biases into account. In a second section, we present an inversion scheme of this model for a single-material object, which enables the thickness map of the object crossed by the x-rays to be obtained. (author)
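
    A minimal sketch of the offset/gain correction described, assuming the locally linear detector response p = gain * t + offset per pixel, with a dark frame and a reference-object frame as calibration inputs (the names are ours):

        import numpy as np

        def thickness_map(raw, dark, ref, t_ref):
            # per-pixel gain from a reference object of known thickness t_ref
            gain = (ref - dark) / t_ref
            return (raw - dark) / gain  # material thickness crossed by x-rays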

  12. Infrared thermography quantitative image processing

    Science.gov (United States)

    Skouroliakou, A.; Kalatzis, I.; Kalyvas, N.; Grivas, TB

    2017-11-01

    Infrared thermography is an imaging technique that has the ability to provide a map of the temperature distribution of an object’s surface. It is considered for a wide range of applications in medicine as well as in non-destructive testing procedures. One of its promising medical applications is in orthopaedics and diseases of the musculoskeletal system, where the temperature distribution of the body’s surface can contribute to the diagnosis and follow-up of certain disorders. Although the thermographic image can give a fairly good visual estimation of distribution homogeneity and temperature pattern differences between two symmetric body parts, it is important to extract a quantitative measurement characterising temperature. Certain approaches use the temperature of enantiomorphic anatomical points, or parameters extracted from a Region of Interest (ROI). A number of indices have been developed by researchers to that end. In this study a quantitative approach to thermographic image processing is attempted, based on extracting different indices for symmetric ROIs on thermograms of the lower back area of scoliotic patients. The indices are based on first-order statistical parameters describing temperature distribution. Analysis and comparison of these indices result in evaluating the temperature distribution pattern of the back trunk expected in subjects who are healthy with regard to spinal problems.
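
    Illustrative indices of this kind (assumed definitions, not necessarily the study's exact ones), based on first-order statistics of two symmetric ROIs:

        import numpy as np
        from scipy import stats

        def roi_indices(left_roi, right_roi):
            out = {}
            for side, roi in (("left", left_roi), ("right", right_roi)):
                t = roi.ravel()
                out[side] = dict(mean=t.mean(), std=t.std(),
                                 skew=stats.skew(t),
                                 kurtosis=stats.kurtosis(t))
            # asymmetry index: mean temperature difference between sides
            out["dT"] = abs(out["left"]["mean"] - out["right"]["mean"])
            return out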

  13. Quantitative criticism of literary relationships.

    Science.gov (United States)

    Dexter, Joseph P; Katz, Theodore; Tripuraneni, Nilesh; Dasgupta, Tathagata; Kannan, Ajay; Brofos, James A; Bonilla Lopez, Jorge A; Schroeder, Lea A; Casarez, Adriana; Rabinovich, Maxim; Haimson Lushkov, Ayelet; Chaudhuri, Pramit

    2017-04-18

    Authors often convey meaning by referring to or imitating prior works of literature, a process that creates complex networks of literary relationships ("intertextuality") and contributes to cultural evolution. In this paper, we use techniques from stylometry and machine learning to address subjective literary critical questions about Latin literature, a corpus marked by an extraordinary concentration of intertextuality. Our work, which we term "quantitative criticism," focuses on case studies involving two influential Roman authors, the playwright Seneca and the historian Livy. We find that four plays related to but distinct from Seneca's main writings are differentiated from the rest of the corpus by subtle but important stylistic features. We offer literary interpretations of the significance of these anomalies, providing quantitative data in support of hypotheses about the use of unusual formal features and the interplay between sound and meaning. The second part of the paper describes a machine-learning approach to the identification and analysis of citational material that Livy loosely appropriated from earlier sources. We extend our approach to map the stylistic topography of Latin prose, identifying the writings of Caesar and his near-contemporary Livy as an inflection point in the development of Latin prose style. In total, our results reflect the integration of computational and humanistic methods to investigate a diverse range of literary questions.
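
    A hedged sketch of one stylometric step of this general kind (the texts, labels and classifier here are placeholders, not the authors' pipeline): token frequencies as features, and a linear classifier to separate authorial corpora.

        from sklearn.feature_extraction.text import CountVectorizer
        from sklearn.svm import LinearSVC

        texts = ["arma virumque cano ...", "in nova fert animus ..."]  # stubs
        labels = ["author_a", "author_b"]
        X = CountVectorizer(max_features=500).fit_transform(texts)
        clf = LinearSVC().fit(X, labels)   # stylistic separation (toy scale)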

  14. [A new method of calibration and positioning in quantitative analysis of multicomponents by single marker].

    Science.gov (United States)

    He, Bing; Yang, Shi-Yan; Zhang, Yan

    2012-12-01

    This paper aims to establish a new method of calibration and positioning in quantitative analysis of multicomponents by single marker (QAMS), using Shuanghuanglian oral liquid as the research object. Relative correction factors were established with chlorogenic acid as the reference for the other 11 active components (neochlorogenic acid, cryptochlorogenic acid, caffeic acid, forsythoside A, scutellarin, isochlorogenic acid B, isochlorogenic acid A, isochlorogenic acid C, baicalin, phillyrin and wogonoside) in Shuanghuanglian oral liquid, using 3 correction methods (multipoint correction, slope correction and quantitative factor correction). At the same time, chromatographic peaks were positioned by a linear regression method. Only one standard was used to determine the content of 12 components in Shuanghuanglian oral liquid, instead of requiring many reference substances for quality control. The results showed that, within the linear ranges, no significant differences were found between the quantitative results for the 12 active constituents in 3 batches of Shuanghuanglian oral liquid determined by the 3 correction methods and by the external standard method (ESM) or standard curve method (SCM). The method is simpler and quicker than literature methods, and the results were accurate, reliable and reproducible. Positioning chromatographic peaks by linear regression was also more accurate than using the relative retention times reported in the literature. Slope correction and quantitative factor correction are feasible and accurate for controlling the quality of traditional Chinese medicine.
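
    The core QAMS relations can be sketched as follows (the variable names are ours; A denotes peak area, C concentration, with the single marker as reference):

        def correction_factor(a_ref, c_ref, a_i, c_i):
            # relative correction factor f_i, fitted during method development
            return (a_ref / c_ref) / (a_i / c_i)

        def content(a_i, f_i, a_ref, c_ref):
            # analyte content in a sample, quantified via the single marker
            return a_i * f_i / (a_ref / c_ref)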

  15. Partial volume correction and image segmentation for accurate measurement of standardized uptake value of grey matter in the brain.

    Science.gov (United States)

    Bural, Gonca; Torigian, Drew; Basu, Sandip; Houseni, Mohamed; Zhuge, Ying; Rubello, Domenico; Udupa, Jayaram; Alavi, Abass

    2015-12-01

    Our aim was to explore a novel quantitative method [based upon an MRI-based image segmentation that allows actual calculation of grey matter, white matter and cerebrospinal fluid (CSF) volumes] for overcoming the difficulties associated with conventional techniques for measuring the actual metabolic activity of the grey matter. We included four patients with normal brain MRI and fluorine-18 fluorodeoxyglucose (F-FDG) PET scans (two women and two men; mean age 46±14 years) in this analysis. The time interval between the two scans was 0-180 days. We calculated the volumes of grey matter, white matter and CSF by using a novel segmentation technique applied to the MRI images. We measured the mean standardized uptake value (SUV) representing the whole metabolic activity of the brain from the F-FDG-PET images. We also calculated the white matter SUV from the upper transaxial slices (centrum semiovale) of the F-FDG-PET images. The whole brain volume was calculated by summing the volumes of the white matter, grey matter and CSF. The global cerebral metabolic activity was calculated by multiplying the mean SUV by the total brain volume, and the whole-brain white matter metabolic activity by multiplying the mean white matter SUV by the white matter volume. The global cerebral metabolic activity reflects only the grey matter and the white matter, since that of the CSF is zero. We subtracted the global white matter metabolic activity from that of the whole brain, leaving the global grey matter metabolism alone. We then divided the grey matter global metabolic activity by the grey matter volume to accurately calculate the SUV for the grey matter alone. The brain volumes ranged between 1546 and 1924 ml. The mean SUV for the total brain was 4.8-7.0. The total metabolic burden of the brain ranged from 5565 to 9617. The mean SUV for white matter was 2.8-4.1. On the basis of these measurements we generated the grey matter SUV, which ranged from 8.1 to 11.3. The
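
    A worked sketch of the subtraction method with illustrative numbers inside the reported ranges (volumes in ml from the MRI segmentation):

        v_grey, v_white, v_csf = 700.0, 550.0, 350.0
        v_brain = v_grey + v_white + v_csf             # 1600 ml

        suv_brain, suv_white = 5.5, 3.0
        global_activity = suv_brain * v_brain          # whole-brain burden
        white_activity = suv_white * v_white           # white matter part
        suv_grey = (global_activity - white_activity) / v_grey  # about 10.2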

  16. Comparison of conventional, model-based quantitative planar, and quantitative SPECT image processing methods for organ activity estimation using In-111 agents

    International Nuclear Information System (INIS)

    He, Bin; Frey, Eric C

    2006-01-01

    Accurate quantification of organ radionuclide uptake is important for patient-specific dosimetry. The quantitative accuracy from conventional conjugate view methods is limited by overlap of projections from different organs and background activity, and by attenuation and scatter. In this work, we propose and validate a quantitative planar (QPlanar) processing method based on maximum likelihood (ML) estimation of organ activities using 3D organ VOIs and a projector that models the image degrading effects. Both a physical phantom experiment and Monte Carlo simulation (MCS) studies were used to evaluate the new method. In these studies, the accuracies and precisions of organ activity estimates for the QPlanar method were compared with those from conventional planar (CPlanar) processing methods with various corrections for scatter, attenuation and organ overlap, and a quantitative SPECT (QSPECT) processing method. Experimental planar and SPECT projections and registered CT data from an RSD Torso phantom were obtained using a GE Millennium VH/Hawkeye system. The MCS data were obtained from the 3D NCAT phantom with organ activity distributions that modelled the uptake of In-111 ibritumomab tiuxetan. The simulations were performed using parameters appropriate for the same system used in the RSD torso phantom experiment. The organ activity estimates obtained from the CPlanar, QPlanar and QSPECT methods from both experiments were compared. From the results of the MCS experiment, even with ideal organ overlap correction and background subtraction, CPlanar methods provided limited quantitative accuracy. The QPlanar method with accurate modelling of the physical factors increased the quantitative accuracy at the cost of requiring estimates of the organ VOIs in 3D. The accuracy of QPlanar approached that of QSPECT, but required much less acquisition and computation time. Similar results were obtained from the physical phantom experiment. We conclude that the QPlanar method, based
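
    Under strong simplifications, the QPlanar idea can be sketched as fitting per-organ projection templates to the measured planar image; the paper uses ML estimation with a full degradation model, for which non-negative least squares stands in here.

        import numpy as np
        from scipy.optimize import nnls

        def qplanar_estimate(projection, organ_templates):
            # each template: the projection of one 3D organ VOI through the
            # (assumed) system model, flattened to a column
            A = np.stack([t.ravel() for t in organ_templates], axis=1)
            activities, _ = nnls(A, projection.ravel())
            return activities   # one activity estimate per organ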

  17. Stable and high order accurate difference methods for the elastic wave equation in discontinuous media

    KAUST Repository

    Duru, Kenneth; Virta, Kristoffer

    2014-01-01

    to be discontinuous. The key feature is the highly accurate and provably stable treatment of interfaces where media discontinuities arise. We discretize in space using high order accurate finite difference schemes that satisfy the summation by parts rule. Conditions
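
    For concreteness, the standard second-order accurate summation-by-parts (SBP) first-derivative operator, one simple member of the family such schemes belong to, can be built as D = inv(H) Q with Q + Q^T = diag(-1, 0, ..., 0, 1):

        import numpy as np

        def sbp_d1(n, h):
            H = h * np.eye(n)
            H[0, 0] = H[-1, -1] = h / 2.0          # boundary-modified norm
            Q = 0.5 * (np.eye(n, k=1) - np.eye(n, k=-1))
            Q[0, 0], Q[-1, -1] = -0.5, 0.5         # Q + Q^T = diag(-1,0,...,0,1)
            return np.linalg.inv(H) @ Q            # first-derivative operator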

  18. Development of a salt-tolerant interface for a high performance liquid chromatography/inductively coupled plasma mass spectrometry system and its application to accurate quantification of DNA samples.

    Science.gov (United States)

    Takasaki, Yuka; Sakagawa, Shinnosuke; Inagaki, Kazumi; Fujii, Shin-Ichiro; Sabarudin, Akhmad; Umemura, Tomonari; Haraguchi, Hiroki

    2012-02-03

    Accurate quantification of DNA is highly important in various fields. Determination of phosphorus by ICP-MS is one of the most effective methods for accurate quantification of DNA, owing to the fixed stoichiometry of phosphate in this molecule. In this paper, a smart and reliable method for accurate quantification of DNA fragments and oligodeoxythymidylic acids by hyphenated HPLC/ICP-MS equipped with a highly efficient interface device is presented. The interface was constructed from a home-made capillary-attached micronebulizer and a temperature-controllable cyclonic spray chamber (IsoMist). As a separation column for DNA samples, a home-made methacrylate-based weak anion-exchange monolith was employed. Several parameters, including the composition of the mobile phase, the gradient program, the inner and outer diameters of the capillary, and the temperature of the spray chamber, were optimized to find the best performance for separation and accurate quantification of DNA samples. The proposed system achieves several advantages: total sample consumption for the analysis of small sample amounts, salt tolerance for hyphenated analysis, and high accuracy and precision in quantitative analysis. Using this system, samples of a 20 bp DNA ladder (20, 40, 60, 80, 100, 120, 140, 160, 180, 200, 300, 400, 500 base pairs) and oligodeoxythymidylic acids (dT(12-18)) were rapidly separated and accurately quantified. Copyright © 2011 Elsevier B.V. All rights reserved.
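
    A hedged back-of-envelope for the phosphorus-to-DNA conversion (the average nucleotide mass is an assumption; exact values depend on sequence): each nucleotide residue carries exactly one phosphorus atom, so moles of P equal moles of nucleotides.

        M_P = 30.97    # g/mol, phosphorus
        M_NT = 330.0   # g/mol, assumed average mass per nucleotide residue

        def dna_conc_ng_per_ml(p_conc_ng_per_ml):
            nmol_nt = p_conc_ng_per_ml / M_P   # nmol P/mL = nmol nucleotide/mL
            return nmol_nt * M_NT              # ng DNA per mL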

  19. 9 CFR 442.3 - Scale requirements for accurate weights, repairs, adjustments, and replacements after inspection.

    Science.gov (United States)

    2010-01-01

    9 Animals and Animal Products. PROCEDURES AND REQUIREMENTS FOR ACCURATE WEIGHTS. § 442.3 Scale requirements for accurate weights, repairs, adjustments, and replacements after inspection. (a) All scales used to determine the net weight of meat and...

  20. Validation of a single nucleotide polymorphism (SNP) typing assay with 49 SNPs for forensic genetic testing in a laboratory accredited according to the ISO 17025 standard

    DEFF Research Database (Denmark)

    Børsting, Claus; Rockenbauer, Eszter; Morling, Niels

    2009-01-01

    A multiplex assay with 49 autosomal single nucleotide polymorphisms (SNPs) developed for human identification was validated for forensic genetic casework and accredited according to the ISO 17025 standard. The multiplex assay was based on the SNPforID 52plex SNP assay [J.J. Sanchez, C. Phillips, C... cases and 33 twin cases were typed at least twice for the 49 SNPs. All electropherograms were analysed independently by two expert analysts prior to approval. Based on these results, detailed guidelines for analysis of the SBE products were developed. With these guidelines, the peak height ratio of a heterozygous allele call or the signal-to-noise ratio of a homozygous allele call is compared with previously obtained ratios. A laboratory protocol for analysis of SBE products was developed where allele calls with unusual ratios were highlighted to facilitate the analysis of difficult allele calls...
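
    A toy sketch of the ratio-flagging idea (the thresholds are invented, not the laboratory's validated values): a heterozygous call is flagged for expert review when its peak height ratio falls outside the range previously observed for that SNP.

        def flag_het_call(peak_a, peak_b, prev_ratios, tol=0.1):
            ratio = min(peak_a, peak_b) / max(peak_a, peak_b)
            lo, hi = min(prev_ratios), max(prev_ratios)
            return not (lo - tol <= ratio <= hi + tol)  # True -> review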